\begin{document} \title{An Inverse System of Nonempty Objects with Empty Limit} \author{Satya Deo and Veerendra Vikram Awasthi} \date{} \subjclass[2000]{18G05, 18B05 and 16B50} \keywords{Inverse system, Inverse limit, Ordinal numbers} \maketitle \begin{abstract} In this article we give an explicit example of an inverse system of nonempty sets and onto bonding maps whose inverse limit is empty. \end{abstract} \section{Introduction} It has often been stated, without an example being given, that there are inverse systems in which all the objects are nonempty and the bonding maps are onto, yet whose inverse limit is empty (see, e.g., \cite{es}; Dugundji \cite{dug}, p.~427, last paragraph). It is hard to believe at first that such an example should exist, but it does. The original paper of L. Henkin \cite{hen} on this problem proves a theorem implying that there are several such examples, but its proof is too abstract to yield a clear picture of a specific one. In this note we present a concrete, easily understood example of an inverse system of sets and maps with the stated property. It then follows easily that such an inverse system exists in any category admitting arbitrary products, e.g., the category of modules and homomorphisms, the category of topological spaces and continuous maps, etc. \section{Notations and Preliminaries} \noindent First, let us recall the following well known definitions (see \cite{dug}): {\bf Definition. \rm A binary relation $R$ in a set $A$ is called a {\bf preorder} if it is reflexive and transitive. A set together with a definite preorder is called a {\bf preordered set}.} {\bf Definition. \rm Let $\mathcal A$ be a preordered set and $\{X_{\alpha}\; |\; \alpha\in \mathcal A\}$ be a family of topological spaces indexed by $\mathcal A$. 
For each pair of indices $\alpha, \beta$ satisfying $\alpha < \beta$, assume that there is given a continuous map $f^{\beta}_{\alpha}:X_{\beta}\to X_{\alpha}$, and that these maps satisfy the following condition: if $\alpha<\beta<\gamma$, then $f^{\gamma}_{\alpha}=f^{\beta}_{\alpha}\circ f^{\gamma}_{\beta}$. Then the family $\{X_{\alpha};\;f^{\beta}_{\alpha}\}$ is called an {\bf inverse system} over $\mathcal A$ with topological spaces $X_{\alpha}$ and continuous bonding maps $f^{\beta}_{\alpha}$.} {\bf Definition. \rm Let $X=\{X_{\alpha},\;f^{\beta}_{\alpha};\;\alpha\leq\beta\}_{\alpha, \beta\in\mathcal A}$ be an inverse system of topological spaces and continuous maps $f^{\beta}_{\alpha}:X_{\beta}\to X_{\alpha}$, $\alpha\leq\beta$, based on the indexing set $\mathcal A$. Consider the product space $\prod_{\alpha\in\mathcal A}X_{\alpha}$, and let $p_{\alpha}: \prod X_{\alpha}\to X_{\alpha}$ denote the projection map. Define $$X'=\{x\in \prod_{\alpha\in\mathcal A}X_{\alpha}\;|\;\mbox{\rm whenever}\;\alpha\leq\beta,\;f^{\beta}_{\alpha}p_{\beta}(x)= p_{\alpha}(x)\}.$$ Then the set $X'$ with the subspace topology is called the {\bf inverse limit} of the inverse system $X$.} {\bf Remark. \rm We have defined above an inverse system and its inverse limit in the category of topological spaces. Clearly, an inverse system can be defined in any category. However, the inverse limit of an inverse system will exist only in those categories which admit arbitrary products.} \noindent {\bf Ordinal Numbers:} For the definition and well known special properties of ordinal numbers we refer to Dugundji \cite{dug}. The successor $x^+$ of a set $x$ is defined as $x\cup \{x\}$, and $\omega$ is constructed as the smallest set that contains $0$ and that contains $x^+$ whenever it contains $x$. Now the question arises: what happens if we start with $\omega$, form its successor $\omega^+$, then form the successor of that, and proceed so on? 
In other words, is there something out beyond $\omega, \omega^+, (\omega^+)^+, \cdots$, in the same sense in which $\omega$ is beyond $0,1,2,\cdots$? We mention the names of the first few of them. After $0,1,2,\cdots$ comes $\omega$, and after $\omega,\omega +1,\omega +2, \cdots$ comes $2\omega$. After $2\omega +1$ (that is, the successor of $2\omega$) comes $2\omega +2$, and then $2\omega +3$; next, after all the terms of the sequence so begun, comes $3\omega$. At this point another application of the axiom of substitution is required. Next come $3\omega+1, 3\omega +2, 3\omega +3,\cdots$, and after them comes $4\omega$. In this way we get successively $\omega, 2\omega, 3\omega, 4\omega, \cdots$. An application of the axiom of substitution yields something that follows them all in the same sense in which $\omega$ follows the natural numbers: that something is $\omega^2$. After that the whole thing starts over again: $\omega^2 +1, \omega^2 +2, \omega^2 +3, \cdots, \omega^2 +\omega, \omega^2 +\omega+1, \omega^2 +\omega+2, \cdots, \omega^2 +2\omega, \omega^2 +2\omega+1, \cdots, \omega^2 +3\omega, \cdots, \omega^2 +4\omega, \cdots, 2\omega^2, \cdots, 3\omega^2, \cdots, \omega^3, \cdots, \omega^4, \cdots, \omega^{\omega}, \cdots, \omega^{({\omega}^{\omega})}, \cdots$. Since a countable union of countable sets is again countable, each of the above ordinals is countable. Therefore, by the well-ordering property of the ordinals, there exists a smallest uncountable ordinal $\omega_1$, and it contains all of the above ordinals. We call $\omega_1$ the first uncountable ordinal. \section{The Example} Let $\{0,1,2,\cdots, \omega, \omega +1, \cdots, 2\omega, 2\omega +1, \cdots, \omega^{2}, \omega^{2}+1, \cdots\}$ be the set of ordinal numbers and let $\omega_1$ be the first uncountable ordinal. Consider the set $\Omega=[0, \omega_1)$ of all ordinals less than $\omega_1$. 
We will construct an inverse mapping system $\{X_{\alpha}, f^{\beta}_{\alpha}; \alpha\leq\beta\}$ based on the directed set $\Omega$ in which all the sets $X_{\alpha}$ are nonempty and the bonding maps $f^{\beta}_{\alpha}:X_{\beta}\to X_{\alpha}$, $\alpha\leq \beta$, are onto, but $\varprojlim X_{\alpha}=\emptyset$. Let us define a {\bf point} to mean a finite sequence of an even number of elements of $\Omega$, say $$x=(\alpha_1,\alpha_2,\cdots, \alpha_{2n-1}, \alpha_{2n}),$$ satisfying the following three conditions: \begin{enumerate} \item[(i)] $\alpha_1 < \alpha_2$, \item[(ii)] $\alpha_{2i-1} < \alpha_{2i+2}$ for $0<i<n$, i.e., $\alpha_1<\alpha_4,\;\alpha_3<\alpha_6,\;\alpha_5<\alpha_8,\cdots$, \item[(iii)] $\alpha_{2i+1} < \alpha_{2i+2}$ and $\alpha_{2i+1} \nless \alpha_{2j+1}$ for $0\leq j< i<n$, \end{enumerate} \noindent where $\alpha\nless\beta$ means that neither $\alpha<\beta$ nor $\alpha=\beta$. This means \linebreak $\alpha_1<\alpha_2,\;\alpha_3<\alpha_4,\;\alpha_5<\alpha_6,\;\alpha_7 <\alpha_8,\cdots$ and $\alpha_3\nless\alpha_1,\linebreak \alpha_5\nless\alpha_3,\;\alpha_5\nless\alpha_1, \cdots$. \noindent We may observe that these conditions imply that\\ $\alpha_1<\alpha_2,\;\alpha_1<\alpha_4,$\\ $\alpha_3<\alpha_4,\;\alpha_3<\alpha_6,$\\ $\alpha_5<\alpha_6,\;\alpha_5<\alpha_8,$\\ $\alpha_7<\alpha_8,\;\alpha_7<\alpha_{10},$ and so on. \noindent We define the {\bf index} of the point $x$ given above to be $\alpha_{2n-1}$, the {\bf order} of $x$ to be $\alpha_{2n}$, and the {\bf length} of $x$ to be $n$. \noindent Let us illustrate a few beginning sets $X_{\alpha}$, $\alpha\in\Omega$: \begin{enumerate} \item[(i)] $x\in X_0$ means $x=(0, \alpha)$ where $\alpha >0$. \item[(ii)] $x\in X_1$ means the point $x$ is of one of the following types:\\ $x=(1, \alpha)$ where $\alpha >1$, or\\ $x=(0, \alpha, 1, \beta)$ where $\alpha >0$ and $\beta >1$. 
\item[(iii)] $x\in X_2$ means the point $x$ is of one of the following types:\\ $x=(2, \alpha)$ where $\alpha >2$, or\\ $x=(0, \alpha, 2, \beta)$ where $\alpha >0$ and $\beta >2$, or\\ $x=(1, \alpha, 2, \beta)$ where $\alpha >1$ and $\beta >2$, or\\ $x=(0, \alpha, 1, \beta, 2, \gamma)$ where $\alpha >0$, $\beta >1$ and $\gamma >2$. \end{enumerate} \noindent Note that all the points listed above have index $2$. Similarly we can check the elements of $X_3$ as follows: \begin{enumerate} \item[(iv)] $x\in X_3$ means $x$ is of one of the following types:\\ $x=(3, \alpha)$ where $\alpha >3$, or\\ $x=(0, \alpha, 3, \beta)$ where $\alpha >0$ and $\beta >3$, or\\ $x=(1, \alpha, 3, \beta)$ where $\alpha >1$ and $\beta >3$, or\\ $x=(2, \alpha, 3, \beta)$ where $\alpha >2$ and $\beta >3$, or\\ $x=(0, \alpha, 1, \beta, 3, \gamma)$ where $\alpha >0$, $\beta >1$ and $\gamma >3$, or\\ $x=(0, \alpha, 2, \beta, 3, \gamma)$ where $\alpha >0$, $\beta >2$ and $\gamma >3$, or\\ $x=(1, \alpha, 2, \beta, 3, \gamma)$ where $\alpha >1$, $\beta >2$ and $\gamma >3$, or\\ $x=(0, \alpha, 1, \beta, 2, \gamma, 3, \delta)$ where $\alpha >0$, $\beta >1$, $\gamma >2$ and $\delta>3$; here $\alpha, \beta, \gamma, \delta \in \Omega$. \end{enumerate} \noindent Thus we have a family of nonempty disjoint sets $X_{\alpha}$, $\alpha\in\Omega$, whose elements are the points with index $\alpha$. We now define the bonding maps $f^{\beta}_{\alpha}:X_{\beta}\to X_{\alpha}$, $\alpha\leq\beta$, $\alpha, \beta \in\Omega$. Let $x=(\alpha_1,\alpha_2,\cdots, \alpha_{2n-1}, \alpha_{2n})$ be an arbitrary point in $X_{\beta}$ (so that $\alpha_{2n-1}=\beta$). 
We define the image of $x$ in $X_{\alpha}$ under $f^{\beta}_{\alpha}$ as follows. There are two cases. \noindent {\bf Case I:} If $\alpha\leq\alpha_1$, then we define $f_{\alpha}^{\beta}(x)=(\alpha, \alpha_2)$. Since $x$ is a point of $X_{\beta}$, we have $\alpha_1<\alpha_2$ by condition (i), which implies $\alpha<\alpha_2$. Therefore $(\alpha, \alpha_2)$ is a point with index $\alpha$, and hence $(\alpha, \alpha_2)\in X_{\alpha}$. \noindent {\bf Case II:} If $\alpha\nless\alpha_1$, then there exists a least $j$, $0<j\leq n-1$, such that $\alpha\leq\alpha_{2j+1}$, because $\alpha \leq \beta=\alpha_{2n-1}$. Then we define $$f_{\alpha}^{\beta}(x)= f_{\alpha}^{\beta}(\alpha_1,\alpha_2,\cdots, \alpha_{2n-1}, \alpha_{2n}) = (\alpha_1,\alpha_2,\cdots, \alpha_{2j},\alpha,\alpha_{2j+2}).$$ The sequence $(\alpha_1,\alpha_2,\cdots,\alpha_{2j},\alpha,\alpha_{2j+2})$ satisfies all three conditions of a point: conditions (i) and (ii) hold because they hold for $x$ and $\alpha\leq\alpha_{2j+1}<\alpha_{2j+2}$, while the minimality of $j$ gives $\alpha\nless\alpha_{2i+1}$ for $i<j$. Also $(\alpha_1,\alpha_2,\cdots, \alpha_{2j},\alpha,\alpha_{2j+2})$ has index $\alpha$; hence $(\alpha_1,\alpha_2,\cdots,\alpha_{2j},\alpha,\alpha_{2j+2})\in X_{\alpha}$. \noindent Note that in particular, the map $f^{3}_{2}:X_3\to X_2$ will be as follows. \noindent $(3, \alpha) \mapsto (2,\alpha)\in X_2$ (Case I) \\ $(0, \alpha, 3, \beta)\mapsto (0, \alpha, 2, \beta) \in X_2$ \\ $(1, \alpha, 3, \beta)\mapsto (1, \alpha, 2, \beta)\in X_2$ \\ $(2, \alpha, 3, \beta)\mapsto (2, \alpha)\in X_2$ (Case I)\\ $(0, \alpha, 1, \beta, 3, \gamma)\mapsto (0, \alpha, 1, \beta, 2, \gamma)\in X_2$ \\ $(0, \alpha, 2, \beta, 3, \gamma)\mapsto (0, \alpha, 2, \beta)\in X_2$\\ $(1, \alpha, 2, \beta, 3, \gamma)\mapsto (1, \alpha, 2, \beta)\in X_2$\\ $(0, \alpha, 1, \beta, 2, \gamma, 3, \delta)\mapsto (0, \alpha, 1, \beta, 2, \gamma)\in X_2.$\\ All others are as in Case II. \noindent Having defined $f^{\beta}_{\alpha}$, $\alpha \leq \beta$, let us note the following obvious properties of these bonding maps: \begin{enumerate} \item[(i).] 
$f^{\alpha}_{\alpha}:X_{\alpha}\to X_{\alpha}$ is the identity.\\ \item[(ii).] Let $f^{\beta}_{\alpha}:X_{\beta}\to X_{\alpha}$ and $f_{\beta}^{\gamma}:X_{\gamma}\to X_{\beta}$, $\alpha<\beta<\gamma$, $\alpha, \beta, \gamma \in\Omega$, be two bonding maps. Then $$f^{\beta}_{\alpha}\circ f_{\beta}^{\gamma}= f^{\gamma}_{\alpha}:X_{\gamma}\to X_{\alpha}.$$ \end{enumerate} Let us verify a specific instance of (ii). Let $f^{6}_{4}: X_6\to X_4$ and $f^{4}_{3}:X_4\to X_3$ be two bonding maps. We want to verify that $$f^{4}_{3}f^{6}_{4}=f^{6}_{3}.$$ For this we choose an arbitrary element of $X_6$, say\\ $x=(0,\alpha, 1,\beta, 3,\gamma, 5,\delta, 6, \epsilon)\in X_6$, where $\alpha>0$, $\beta>1$, $\gamma>3$, $\delta>5$ and $\epsilon>6$. Then by the definition of $f^{\beta}_{\alpha}$ we have \begin{eqnarray*} f^{6}_{4}(x) & = & (0,\alpha, 1,\beta, 3,\gamma, 4,\delta)=y,\;\;\mbox{say, and} \\ f^{4}_{3}(y)& = & (0,\alpha, 1,\beta, 3,\gamma). \end{eqnarray*} Also, $$f^{6}_{3}(x) =f^{6}_{3}(0,\alpha, 1,\beta, 3,\gamma, 5,\delta, 6, \epsilon)= (0,\alpha, 1,\beta, 3,\gamma).$$ Hence $f^{4}_{3}f^{6}_{4}=f^{6}_{3}$. 
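To make the construction concrete, the following sketch (our own illustrative code, not part of the paper) encodes points as tuples, with natural numbers standing in for ordinals below $\omega_1$; the bonding map follows Cases I and II above, taking in Case II the least $j$ with $\alpha\leq\alpha_{2j+1}$. The images under $f^{3}_{2}$, the compatibility $f^{4}_{3}\circ f^{6}_{4}=f^{6}_{3}$, and the surjectivity construction can then be checked mechanically. Of course, no finite computation can exhibit the empty inverse limit itself, which depends essentially on $\Omega=[0,\omega_1)$.

```python
# Illustrative sketch: a "point" (alpha_1, ..., alpha_{2n}) is a tuple;
# natural numbers stand in for ordinals below omega_1.
# Entry alpha_{2j+1} is x[2*j] and entry alpha_{2j+2} is x[2*j + 1].

def bond(alpha, x):
    """Image of the point x under f^beta_alpha, where beta = x[-2] is the
    index of x and alpha <= beta."""
    if alpha <= x[0]:                    # Case I
        return (alpha, x[1])
    for j in range(1, len(x) // 2):      # Case II: least j with alpha <= alpha_{2j+1}
        if alpha <= x[2 * j]:
            return x[:2 * j] + (alpha, x[2 * j + 1])
    raise ValueError("alpha must not exceed the index of x")

# Images under f^3_2 of elements of X_3 (concrete ordinals substituted):
assert bond(2, (3, 7)) == (2, 7)                           # Case I
assert bond(2, (2, 4, 3, 7)) == (2, 4)                     # Case I
assert bond(2, (0, 4, 1, 5, 3, 7)) == (0, 4, 1, 5, 2, 7)
assert bond(2, (0, 4, 1, 5, 2, 6, 3, 7)) == (0, 4, 1, 5, 2, 6)

# Compatibility f^4_3 o f^6_4 = f^6_3 on an element of X_6 of the type
# used above, with (alpha, beta, gamma, delta, epsilon) = (2, 3, 4, 6, 7):
z = (0, 2, 1, 3, 3, 4, 5, 6, 6, 7)
assert bond(3, bond(4, z)) == bond(3, z) == (0, 2, 1, 3, 3, 4)

# Surjectivity: appending (beta, gamma) with gamma > beta > index of x
# produces a preimage of x, as in the text:
x = (0, 4, 3, 7)                       # a point of X_3
y = x + (5, 9)                         # a point of X_5
assert bond(3, y) == x                 # so f^5_3 maps onto x
```

The assertions reproduce exactly the mappings listed in the text for $f^{3}_{2}$ and the composition check for $f^{4}_{3}f^{6}_{4}=f^{6}_{3}$.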
\noindent Thus we have the following inverse system $\{X_{\alpha},\;f_{\alpha}^{\beta};\;\alpha\leq\beta,\;\alpha, \beta\in\Omega\}$ of sets and maps defined on the directed set $\Omega$: $$\cdots X_{\gamma}\stackrel{f^{\gamma}_{\beta}}\longrightarrow X_{\beta}\stackrel{f_{\alpha}^{\beta}}\longrightarrow X_{\alpha}\cdots X_2\stackrel{f^{2}_{1}}\longrightarrow X_1,$$ \noindent where $1<2<\cdots <\alpha < \beta < \gamma$ and $1,2,\cdots, \alpha, \beta, \gamma \in\Omega$. \noindent Now we verify that the bonding maps $f^{\beta}_{\alpha}:X_{\beta}\to X_{\alpha}$, $\alpha \leq \beta$, are onto. Let $x=(\alpha_1,\alpha_2,\cdots, \alpha_{2n-1}, \alpha_{2n}) \in X_{\alpha}$, so that $\alpha=\alpha_{2n-1}$. We choose $\gamma>\beta$ and consider the sequence $y=(\alpha_1,\alpha_2,\cdots, \alpha_{2n},\beta,\gamma)$ of an even number of elements of $\Omega$. Since $x$ is a point and $\alpha_{2n-1}=\alpha<\beta<\gamma$, to prove that $y$ is a point in $X_{\beta}$ it suffices to verify that $\beta\nless\alpha_{2j+1}$ for $0\leq j<n$. Indeed, if $\beta\leq\alpha_{2j+1}$ for some $0\leq j<n$, then $\alpha<\beta$ would imply $\alpha<\alpha_{2j+1}$, contradicting the fact that $x=(\alpha_1,\alpha_2,\cdots, \alpha_{2n-1},\alpha_{2n})$ is a point. Thus $y=(\alpha_1,\alpha_2,\cdots,\alpha_{2n},\beta,\gamma)$ is a point, it is an element of $X_{\beta}$, and $f_{\alpha}^{\beta}(y)=x$ by the definition of $f_{\alpha}^{\beta}$. \noindent Let us see a particular example of surjectivity. Consider $f^{5}_{3}:X_5\to X_3$. We will show that every element of $X_3$ (each of the 8 types discussed earlier) has a preimage in $X_5$. 
A preimage of any element $x=(\alpha_1,\alpha_2,\cdots,\alpha_{2n-1},\alpha_{2n})$ of $X_3$ (so that $\alpha_{2n-1}=3$) can be obtained by appending two more elements of the directed set $\Omega$ to the point $x$, giving $y=(\alpha_1,\alpha_2,\cdots,\alpha_{2n-1},\alpha_{2n},5,\delta)$ with $\delta>5$. Then $y\in X_5$ and, by the definition of the bonding maps, $$f^{5}_{3}(y) = f^{5}_{3}(\alpha_1,\alpha_2,\cdots,\alpha_{2n-1},\alpha_{2n},5,\delta) = (\alpha_1,\alpha_2,\cdots,\alpha_{2n-1},\alpha_{2n}) = x.$$ So we now have an inverse mapping system based on $\Omega$, $$\{X_{\alpha},\;f_{\alpha}^{\beta}:X_{\beta}\to X_{\alpha},\;\alpha\leq\beta,\;\alpha, \beta\in\Omega\},$$ in which each $X_{\alpha}\neq \emptyset$ and each $f_{\alpha}^{\beta}:X_{\beta}\to X_{\alpha}$, $\alpha \leq \beta$, is onto. \section{Proof of the main result} We claim that for the above inverse system $X$, $\varprojlim X_{\alpha}=\emptyset$. Assume the contrary and suppose there exists an element $x$ in the inverse limit $\varprojlim X_{\alpha}$. In other words, $x\in \prod X_{\alpha}$, where $$x= (x_0, x_1, \cdots, x_{\omega},x_{\omega+1},\cdots,x_{2\omega},x_{2\omega+1}, \cdots),\;\;x_{\alpha}\in X_{\alpha}, \eqno{(*)}$$ such that $f_{\alpha}^{\beta}(x_{\beta})=x_{\alpha}$ whenever $\alpha \leq\beta$. Let $\mathcal O$ be the set of orders of the $x_{\alpha}$'s in $(*)$. This set $\mathcal O$ is cofinal in $\Omega$: for any $\alpha \in \Omega$, the $\alpha$-th coordinate of $x$ is some element $x_{\alpha}\in X_{\alpha}$, and $\alpha=\mbox{index of } x_{\alpha}<\mbox{order of } x_{\alpha}\in \mathcal O$. We also observe that if the length of $x_{\alpha}$ equals the length of $x_{\beta}$, then the order of $x_{\alpha}$ equals the order of $x_{\beta}$. To prove this, we choose a $\gamma > \alpha$ with $\gamma > \beta$. 
Then there exists an element $x_{\gamma}\in X_{\gamma}$ in $(*)$ such that $f^{\gamma}_{\alpha}(x_{\gamma})=x_{\alpha}$ and $f^{\gamma}_{\beta}(x_{\gamma})=x_{\beta}$. From the definition of the bonding map $f^{\beta}_{\alpha}$ it follows that the orders of $x_{\alpha}$ and $x_{\beta}$ are entries of the sequence $x_{\gamma}$; say order of $x_{\alpha}= \alpha_{2i}$ and order of $x_{\beta}= \alpha_{2j}$, where $i$ and $j$ are the lengths of $x_{\alpha}$ and $x_{\beta}$ respectively. Thus if length of $x_{\alpha}= i = j =$ length of $x_{\beta}$, then clearly $\alpha_{2i}=\alpha_{2j}$, i.e., order of $x_{\alpha}=$ order of $x_{\beta}$. Therefore the set of orders $\mathcal O$ is governed by the lengths of the $x_{\alpha}$'s in $(*)$, and the length of any point is a natural number. Thus, if the lengths of the $x_{\alpha}$'s are unbounded, then the orders of the $x_{\alpha}$'s contain a simple (countable) sequence which is cofinal, so there would exist a cofinal simple sequence in $\Omega=[0,\omega_1)$. But this is clearly a contradiction. On the other hand, if the lengths of the $x_{\alpha}$'s are bounded, then the set $\mathcal O$ of their orders would contain a maximal element of $\Omega=[0,\omega_1)$, which is again a contradiction. Hence in either of the two cases, lengths unbounded or bounded, we have a contradiction, since it is well known that $\Omega=[0,\omega_1)$ possesses neither a cofinal simple sequence nor a maximal element. Hence there cannot exist any element in the inverse limit, i.e., $\varprojlim X_{\alpha}=\emptyset$. \rule{2mm}{2mm} {\bf Remark. \rm In view of the above construction, it is clear that one can always construct, in any category admitting arbitrary products (e.g., topological spaces and continuous maps, or modules and module homomorphisms, etc.), an inverse system with nonempty objects and onto bonding morphisms whose inverse limit is empty.} \noindent Satya Deo, \\ Harish Chandra Research Institute, \\ Chhatnag Road, Jhusi,\\ Allahabad 211 019, India. 
\\ Email: [email protected], [email protected] \noindent Veerendra Vikram Awasthi, \\ Institute of Mathematical Sciences, \\ CIT Campus, Taramani \\ Chennai 600 013, India. \\ Email: [email protected], [email protected] \end{document}
\begin{document} \title{Bayesian Image Analysis in Fourier Space} \begin{abstract} Bayesian image analysis has played a large role over the last 40+~years in solving problems in image noise reduction, de-blurring, feature enhancement, and object detection. However, these problems can be complex and lead to computational difficulties, due to the modeled interdependence between spatial locations. The Bayesian image analysis in Fourier space (BIFS) approach proposed here reformulates the conventional Bayesian image analysis paradigm as a large set of independent (but heterogeneous) processes over Fourier space. The original high-dimensional estimation problem in image space is thereby broken down into (trivially parallelizable) independent one-dimensional problems in Fourier space. The BIFS approach leads to easy model specification with fast and direct computation, a wide range of possible prior characteristics, easy modeling of isotropy into the prior, and models that are effectively invariant to changes in image resolution. \textbf{Keywords:} Bayesian image analysis, Fourier space, Image priors, k\nobreakdash-space, Markov random fields, Statistical image analysis. \end{abstract} \section{Introduction\label{intro}} Bayesian image analysis models provide a solution for improving image quality in image reconstruction/enhancement problems by incorporating \emph{a~priori} expectations of image characteristics along with a model for image noise, i.e., for the image degradation process~\citep{winkler1995image,guyon1995random,li2009markov}. However, conventional Bayesian image analysis models, defined in the space of conventional images (hereafter referred to as ``image space''), can be limited in practice: they can be difficult to specify and implement (requiring problem-specific code), and computing estimates from them can be slow. 
Furthermore, Markov random field (MRF) model priors in conventional Bayesian image analysis (as commonly used for the type of problem discussed here) are not invariant to changes in image resolution (i.e., model parameters and MRF neighborhood size need to change when pixel dimensions change in order to retain the same spatial characteristics; this can be a problem when images are collected across multiple sites with different acquisition parameters or hardware), and they are difficult to specify with isotropic autocovariance (i.e., with direction-invariant covariance). Our approach to overcoming the difficulties and limitations of the conventional Bayesian image analysis paradigm is to move the problem to the Fourier domain and reformulate it in terms of spatial frequencies: \textit{Bayesian image analysis in Fourier space} (BIFS). Spatially correlated prior distributions (priors) that are difficult to model and compute in conventional image space can be successfully modeled via a set of independent priors across locations in Fourier space. A prior is specified for the signal at each Fourier space location (i.e., at each spatial frequency), and \emph{parameter functions} are specified to define the values of the parameters of the prior distribution across Fourier space locations; i.e., we specify probability density functions (pdfs) over Fourier space that are conditionally independent given known values of the parameter functions. The original high-dimensional problem in image space is thereby broken down into a set of one-dimensional problems, leading to easier specification and implementation, and faster computation that is furthermore trivially parallelizable. The fast computation coupled with trivial parallelization has the potential to open up Bayesian image analysis to big data imaging problems. 
Furthermore, the BIFS approach carries with it numerous useful properties, including easy specification of isotropy and consistency of priors across differing image resolutions for the same field of view. Note that the BIFS approach is distinct from shrinkage prior methods that have been used previously in Fourier, wavelet or other basis set prior specifications (e.g.,~\citet{olshausen1996emergence,levin2007user}). BIFS does not seek to smooth by generating a sparse representation in the transformed space through simple thresholding, with the hope that this provides \emph{a~priori} desired spatial characteristics. Rather, the goal of BIFS is to fully specify the prior distribution over Fourier space in the same spirit as Markov random field or other spatial priors in Bayesian image analysis, i.e., to represent as faithfully as possible the \emph{a~priori} expected characteristics of the true/optimal image. \subsection{Bayesian image analysis} The general image analysis problem can be described as follows. Consider observed image data, $y$, that have been degraded by some ``noise'' process. The goal is to obtain an optimal estimate of the undegraded (and generally unobserved) version of the image ($x$); this process of estimating the undegraded image will be referred to as \emph{reconstruction}. The objective of the Bayesian image analysis approach is to optimally reconstruct $x$, based on observing $y$, given knowledge of the image noise/degradation process and prior knowledge about properties that $x$ should have; the optimization is performed with respect to minimizing some loss function of the posterior. Sometimes interest may lie not in reconstructing the true image itself, but in generating a version of the image that enhances certain features or properties (e.g., in cancer detection it would be useful to enhance tumors in an image to make it easier for the radiologist to detect and delineate them). 
In this case, the optimal $x$ that we wish to construct will be an enhanced version of the true (undegraded) image, with the prior designed to emphasize desirable characteristics of an enhanced image. \subsection{Conventional Bayesian image analysis:} Consider $x$ to be a true or idealized image (e.g., noise-free or with enhanced features) that we wish to recover from a sub-optimal image dataset $y$. (Note that we are using the common shorthand notation of not explicitly distinguishing the random variables and the corresponding image realizations \citep{besag1989digital,besag1991bayesian}, i.e., we use lower case $x$ and $y$.) The Bayesian image analysis paradigm incorporates the \emph{a~priori} desired spatial characteristics of the reconstructed image via a prior distribution (``the prior'') for the ``true'' image $x$, $\pi(x)$, and the noise degradation process via the likelihood, $\pi(y|x)$. The prior and likelihood are combined via Bayes' Theorem to give the posterior, $\pi(x|y) \propto \pi(y|x) \pi(x)$, from which an estimate of $x$ can be extracted, e.g., the ubiquitous \emph{maximum a posteriori} (MAP) solution obtained by determining the image associated with the mode of the joint posterior distribution. \subsubsection{Markov random field (MRF) priors:} The most common choice for the prior in conventional Bayesian image analysis is a Markov random field (MRF) model~\citep{besag1974spatial,geman1984stochastic,besag1986statistical,besag1989digital,besag1991bayesian,winkler1995image,guyon1995random,li2009markov,swain2013efficient,sonka2014image}. MRF priors are used to impose expected \emph{contextual information} on an image, such as spatial smoothness, textural information (small-scale pattern repetition), edge configurations (patterns of locations of boundaries with large intensity differences), etc. 
MRF methods provide improvement over deterministic filtering methods by \emph{probabilistically} interacting with the data to smooth, clean, or enhance images, appropriately weighting information from the data (via the likelihood) against the MRF prior to form the posterior probability distribution (the posterior)~\citep{besag1991bayesian}. An MRF over a set of locations $S$ is defined via a conditional specification of each pixel (or voxel in 3D) intensity $x_s$ at location $s$, given the set of neighboring pixels $\partial s$, where ``neighboring pixels'' are defined in terms of being close to each other in space. Specifically, $\pi(x_s|x_{-s})=\pi(x_s|x_{\partial s})$, i.e., if the neighboring pixel values are known, then the remaining pixels add no further knowledge about the conditional distribution of the intensity at $s$. The \emph{full conditional posterior} for $x_s$ given the data $y$ and the values of $x$ at all sites other than $s$, denoted by $x_{-s}$, can therefore be written as \begin{equation} \pi(x_s|y,x_{-s}) \propto \pi(y_s|x_s) \pi(x_s|x_{\partial s}), \label{conventional} \end{equation} (making the common assumption that the noise degradation process is independent across pixels). Note that the full conditional posterior at a pixel depends on its set of neighbors (even if that neighborhood is small), and therefore the joint distribution over all pixels is highly interdependent (the covariance matrix will generally be dense). Note also that the Hammersley-Clifford Theorem~\citep{besag1974spatial} allows an alternate (and equivalent) joint specification of MRFs in terms of the product of Gibbs measures over cliques (sets of inter-connected neighbors); these include the widely used set of intrinsic pairwise difference priors \citep{besag1989digital}. However, the difficulties of dealing with a highly inter-dependent process remain. 
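As a purely illustrative sketch of how the full conditional in (\ref{conventional}) drives computation (the Gaussian pairwise-difference prior, the 4-neighbor lattice, and all parameter names here are our own choices, not a prescription from any particular reference): under a Gaussian likelihood the mode of each full conditional is available in closed form, and an iterated conditional modes (ICM) sweep visits each pixel in turn.

```python
import numpy as np

def icm_sweep(x, y, sigma2=1.0, lam=2.0):
    """One iterated-conditional-modes sweep: each pixel x_s is replaced by
    the mode of its full conditional pi(x_s | y, x_{-s}) under
      likelihood:  y_s | x_s ~ N(x_s, sigma2)   (independent noise),
      prior:       pi(x_s | x_{ds}) propto exp(-lam * sum_{t in ds} (x_s - x_t)^2),
    which for this Gaussian pair is a precision-weighted average of the
    datum y_s and the current neighboring values."""
    nrow, ncol = x.shape
    for i in range(nrow):
        for j in range(ncol):
            nbrs = [x[a, b]
                    for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= a < nrow and 0 <= b < ncol]
            x[i, j] = ((y[i, j] / sigma2 + 2 * lam * sum(nbrs))
                       / (1 / sigma2 + 2 * lam * len(nbrs)))
    return x
```

A constant image is a fixed point of the sweep, while an isolated spike is pulled toward its neighbors; note that every update depends on neighboring pixels, which is exactly the interdependence that BIFS removes.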
\subsubsection{Other conventional Bayesian image analysis priors} Other ``higher level'' Bayesian image analysis models exist that use priors to describe characteristics of objects in images through their geometrical specification~\citep{grenander1993general,baddeley1993stochastic,rue1999bayesian,grenander2000asymptotic,ritter2002bayesian,sheikh2005bayesian}. However, these models have been used less frequently than MRF-based models, most likely because they often need highly problem-specific computational approaches and are more difficult to specify, implement and compute. \subsection{Basis set representation methods for image analysis} There is considerable literature on methods for representing processes in terms of basis set representations; see, e.g., the field of functional analysis~\citep{james2005functional,ramsay2002applied,morris2014functional}. Both Fourier and wavelet basis sets can be used for functional analysis (including approaches with a Bayesian emphasis)~\citep{chipman1997adaptive,abramovich1998wavelet,leporini2001bayesian,johnstone2005empirical,ray2006functional,nadarajah2007bkf,christmas2014bayesian}. However, these basis function approaches have mostly focused on using simple priors based on L1/L2 regularization functions, and/or coefficient thresholding in the transformed space of the basis set, the aim being to shrink or eliminate the majority of coefficients. Fourier/wavelet basis set shrinkage/thresholding based methods have seen multiple applications in image analysis (including from a Bayesian perspective through L1/L2 regularization). In particular, methods for image processing have been developed using Fourier, wavelet and other basis sets~\citep{olshausen1996emergence,donoho1999combined,buccigrossi1999image,figueiredo1999bayesian,donoho2002beamlets,Chang2000adaptive,portilla2003image,candes2004new,levin2007image,levin2007user,pavlicova2008detecting,vijay2012image,li2014bayesian}. 
By using an appropriate basis set, the sparse representation in that space is expected to generate processed images with certain characteristics. For example, sparse Fourier or wavelet representations can lead to noise reduction back in image space. (The intuition here is that removed coefficients with a small contribution to the overall signal are considered more likely to be dominated by noise.) However, in contrast to the Bayesian image analysis in Fourier space approach that we develop here, there is no explicit \emph{a~priori} model for expected structure in the true image, other than that the true image might be well represented by a small subset of the basis functions. The BIFS paradigm provides a comprehensive approach to characterizing image priors by modeling specific priors at all Fourier space locations. Fourier representations of Gaussian Markov random fields (GMRFs) have been used to generate fast simulations when the GMRF neighborhood structure can be represented by a block-circulant matrix, i.e., such that the GMRF can be considered as wrapped on a 2D torus; see Ch.~2.6 of~\citet{rue2005gaussian}. We will explore the relationship between this Fourier space representation and a special case of the Bayesian image analysis in Fourier space approach we are proposing in Section~\ref{MRImatch}. Finally, two other Fourier space-based Bayesian image analysis approaches are of interest. \citet{baskaran1999bayesian} define a prior for the modulus (alternatively referred to as the magnitude) of the signal in Fourier space to infer the signal. It is motivated by a specific problem in X-ray crystallography where only the modulus of the Fourier transform can be measured, but not the argument (often referred to as the phase by physicists and engineers), and uses prior information over a known part of the signal which is spherically symmetric. 
\citet{staib1992boundary} use a different idea of generating deformable models for finding the boundaries of 2D objects in images based on elliptic Fourier decompositions. \subsection{The Bayesian image analysis in Fourier space approach} In this paper, we define a complete framework for specifying a wide range of spatial priors for continuous-valued images as models in Fourier space. The methodological benefit of working with Bayesian image analysis in Fourier space (BIFS) is that it provides the ability to model a range of stationary spatially correlated processes in conventional image space as independent processes across spatial frequencies in Fourier space. The advantages afforded by transforming the Bayesian image analysis problem into Fourier space include: \begin{description} \item[a)] \emph{easy model specification:} expected spatial characteristics in images are modeled as \emph{a~priori} expectations of the contribution of spatial frequencies to the BIFS reconstruction (e.g., smooth reconstructions require higher signal at lower spatial frequencies). \item[b)] \emph{fast and easy computation:} specifying independence over Fourier space through BIFS means that optimization is based on a large set of low-dimensional problems (as opposed to a single high-dimensional problem). \item[c)] \emph{modular structure:} allows for relatively straightforward changes in the prior model. \item[d)] \emph{resolution invariance in model specification:} BIFS allows for simple, generic specification of the prior, independent of image resolution. \item[e)] \emph{straightforward specification of isotropic models:} BIFS allows for consistent specification of the prior in different directions, even when pixel (or voxel) dimensions are not isotropic. 
\item[f)] \emph{ability to determine fast (and if desired, spatially isotropic) approximations to Bayesian MRF priors:} BIFS priors can be generated to mimic the behavior of many traditional prior models, providing users experienced with using MRFs with fast (non-iterative) counterparts that can be implemented via BIFS representations. \end{description} \section{BIFS Modeling Framework} Consider $x$ to be the true (or idealized/enhanced) image that we wish to recover from a degraded or sub-optimal image dataset $y$. Instead of the conventional Bayesian image analysis approach of generating prior and likelihood models for the true image $x$ based on image data $y$ directly in terms of pixel values, we formulate the models via their discrete Fourier transform representations: $\mathcal{F} x$ and $\mathcal{F} y$. Using Bayes' Theorem, the posterior, $\pi(\mathcal{F} x | \mathcal{F} y)$, is then \begin{equation} \pi(\mathcal{F} x | \mathcal{F} y) \propto \pi(\mathcal{F} y | \mathcal{F} x) \pi(\mathcal{F} x) \enspace. \end{equation} The key aspect of the BIFS formulation that leads to its useful properties of easy specification and computational speed is that we specify both the prior and likelihood (and therefore the posterior) to consist of a set of independent processes over Fourier space locations. In order to induce spatial correlation in image space, the parameters of the prior distributions are specified so as to change in a systematic fashion over Fourier space; independent (but heterogeneous) processes in Fourier space are thereby transformed into spatially correlated processes in image space~\citep{zeger1985exploring,lange1997non,peligrad2006central}. Heuristically, the realized signal at each position in Fourier space corresponds to a spatially correlated process in image space (at one particular spatial frequency).
In general therefore, linear combinations of these spatially correlated signals (such as that given by the discrete Fourier transform) will also lead to a correlated process in image space. This independence-based specification over Fourier space can be contrasted with the conventional Bayesian image analysis approach of using Markov random field (MRF) priors for imposing spatial correlation properties, where the Markovian neighborhood structures are used to induce correlation patterns across pixels via joint or conditional distributional specifications \citep{besag1974spatial,geman1984stochastic,besag1989digital}. In fact, as we discuss in Section~\ref{MRImatch}, in certain instances MRF models correspond exactly to uncorrelated processes in Fourier space. When specifying a spatially correlated prior in image space via a set of independent processes across Fourier space, the full conditional posterior at a Fourier space location $k= (k_x,k_y) \in [-\pi, \pi)^2$, or for volumetric data $(k_x,k_y,k_z) \in [-\pi, \pi)^3$, now only depends on the prior and likelihood at that same Fourier space location $k$, i.e., \begin{equation} \pi(\mathcal{F} x_k | \mathcal{F} y) = \pi(\mathcal{F} x_k| \mathcal{F} y_k) \propto \pi(\mathcal{F} y_k| \mathcal{F} x_k) \pi(\mathcal{F} x_k) \enspace, \end{equation} where we use $\mathcal{F} x_k$ as shorthand for $(\mathcal{F} x)_k$. The joint posterior density for the image is then \begin{equation} \pi(\mathcal{F} x| \mathcal{F} y) \propto \prod_{k \in K} \pi(\mathcal{F} y_k| \mathcal{F} x_k) \pi(\mathcal{F} x_k) \enspace, \end{equation} where $K$ is the set of all Fourier space point locations in the (discrete) Fourier transformed image. Note that for our purposes we index Fourier space along direction $v \in \{x,y,z\}$ by $\{-N_v/2,\ldots,0,1,\ldots,N_v/2 - 1\}$, rather than the common alternative of $\{0,\ldots,N_v-1\}$, as it leads to a more convenient formulation for specifying BIFS prior models, i.e.
such that they are centered at the zero frequency position of Fourier space. Furthermore, in order to account for the fact that (most) images are in practice real-valued, the Fourier transform must be conjugate (Hermitian) symmetric on the plane (or volume if 3D). A real-valued image output is ensured by considering a realization of the posterior distribution to be determined by the half-plane (half volume), the other half being conjugate symmetric to the first (see \citet{liang2000principles}, pp. 31 and 322). Therefore, for real-valued images, the BIFS posterior is only evaluated over half of Fourier space (and points on the line $k_x=0$ if taking the half-plane in the $k_y$-direction or conversely $k_y=0$ for the half-plane in the $k_x$-direction) and the remainder is obtained by conjugate reflection. In defining priors as a process over Fourier space we are restricting the space of possible priors to generally stationary processes, similar to MRFs with neighborhood structure wrapped on the torus. (The ``generally'' qualifier is because of the very specific exception that the BIFS priors can be specified as non-stationary with respect to identification of the overall mean by placing an improper uniform prior at k-space point $(0,0)$ for the modulus, i.e., leading to models with the same property as the intrinsic pairwise MRF priors.) In practice, for image analysis problems where the goal is to enhance features, this restriction to stationarity is minimal except toward the edges of an image; and the effects at the edges can be mitigated by expanding the field of view of the data (e.g. by setting pixel values in the expanded edges to the overall image mean or to an expanded neighborhood mean). Furthermore, in many medical imaging applications the area of interest is far from the edges of the field of view and much of the boundary corresponds to regions outside of the body, and therefore the intensity levels are flat toward the edges.
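The centered indexing and conjugate-symmetry bookkeeping described above can be sketched in a few lines. This is a minimal illustration only; the array sizes and the use of NumPy's FFT routines are our own choices, not part of the BIFS specification.

```python
import numpy as np

rng = np.random.default_rng(0)
Ny, Nx = 8, 8

# Real-valued test image and its 2D DFT.
img = rng.normal(size=(Ny, Nx))
F = np.fft.fft2(img)

# Hermitian symmetry of a real image: F[-k] = conj(F[k]).
i, j = 2, 3
assert np.isclose(F[-i % Ny, -j % Nx], np.conj(F[i, j]))

# Centered indexing {-N/2, ..., N/2 - 1}: fftshift puts the zero
# frequency at the array center, which is the convention used for
# specifying BIFS parameter functions.
k = np.fft.fftshift(np.fft.fftfreq(Nx) * Nx).astype(int)
assert k[0] == -Nx // 2 and k[Nx // 2] == 0

# Enforcing symmetry: keep rows 0..Ny//2 (the "half plane") and
# rebuild the remaining rows by conjugate reflection, which
# guarantees a real-valued image after the inverse transform.
G = F.copy()
for a in range(Ny // 2 + 1, Ny):
    for b in range(Nx):
        G[a, b] = np.conj(F[-a % Ny, -b % Nx])
recon = np.fft.ifft2(G)
assert np.allclose(recon.imag, 0.0)
```

Since `F` was computed from a real image it is already Hermitian symmetric, so the reconstruction recovers the original image exactly; in a BIFS posterior draw the same reflection step is what enforces a real-valued output.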
\subsection{The BIFS prior} \label{methods} A two-step process is used to specify the BIFS prior distribution over Fourier space. First, the distributional form of the prior for the signal intensity at each Fourier space location is specified, i.e., $\pi(\mathcal{F} x_k)$. Second, the parameters of each of the priors are specified at each Fourier space location using \emph{parameter functions}. In order to specify the parameter values across all Fourier space locations simultaneously, we specify a parameter function over Fourier space that identifies the value of each parameter at each Fourier space location. Specifically, for some parameter $\alpha_k$ of $\pi(\mathcal{F} x_k)$ we choose the parameter function $f_{\alpha}$ such that $\alpha_k = f_{\alpha}(k)$. For most problems in practice it is desirable to choose a spatially isotropic prior, which can be induced by specifying $\alpha_k = f_{\alpha}(|k|)$, where $|k| = \sqrt{k_x^2 + k_y^2}$ in 2D or $\sqrt{k_x^2+k_y^2+k_z^2}$ in 3D, i.e., such that $f_{\alpha}$ only depends on the distance from the origin of Fourier space. In the remainder of this paper, the description is given in terms of 2D analysis, but notational extension and application to 3D volumetric imaging is straightforward. \subsection{The Parameter Functions} In order to control spatial characteristics of the prior and likelihood we develop the concept of a \emph{parameter function}. The parameters of the independent priors over Fourier space are specified according to parameter functions that describe the pattern of parameter values over Fourier space (see the illustrations of Figure~\ref{parfn}); the parameter function traces out the values of parameters for the prior over Fourier space. In general, the parameter function is multivariate, with one dimension for each parameter of the prior distribution used at each Fourier space location, e.g. Figure~\ref{parfn} illustrates parameter functions for distributions with two parameters (scale and location).
Note that for some models it proves more convenient to specify parameter functions for transformations of the original distribution parameters; e.g. for the gamma distribution the parameter functions might be defined for the mean and variance, which can then be transformed to the shape and scale (or rate) parameters. Separate and independent priors, and associated parameter functions, are specified for each of the modulus and argument of the complex value at each Fourier space location. Working with the modulus and argument provides a more convenient framework for incorporating prior information at specific Fourier space locations (i.e., specific spatial frequencies) than working with the real and imaginary components. The convenience arises because prior information (e.g., expected characteristics of smoothness, edges, or features of interest) can be directly specified via the modulus of the process, with the argument being treated independently of the modulus. In contrast, real and imaginary components are more difficult to specify since signal can shift between them through the translation of objects in the corresponding image space; for example, a rigid movement of an object in an otherwise constant intensity image will cause shifts between real and imaginary components (by the Fourier transform shift theorem), whereas in the modulus/argument specification it will only change the argument of the signal. \begin{figure} \caption{Schematic of the parameter function setup for the signal modulus, where the distribution at each Fourier space location requires specification of a location parameter, $\mu$, and a scale parameter, $\sigma$.} \label{parfn} \end{figure} \subsection{Priors and parameter functions for signal modulus} \subsubsection*{Modulus prior form:} Any continuous distribution can be used to represent the prior at each point in Fourier space.
It is also possible to allow the distribution itself to differ in different regions of Fourier space, though we do not pursue that here except to allow for a different distributional form at the $k = (k_x,k_y) = (0,0)$ spatial frequency (corresponding to the mean intensity). However, there is an advantage to specifying priors that only have mass for non-negative real numbers; if priors are chosen that can take negative values, then a switch from positive to negative corresponds to a discontinuous change of $\pi$ in the argument, which would typically lead to an overall representation of prior beliefs that is unrealistic. In practice, we find that the choice for the form of the parameter function (discussed below) has a larger influence on the spatial characteristics of the prior than the choice of prior distribution model. For the purpose of computational expedience it therefore often makes sense to choose priors for the modulus that are conjugate to the likelihood, when available. For example, we could use a Gaussian (normal) prior for the location parameter of a lognormal likelihood with known scale parameter for the noise process; specification of noise parameters will be discussed in Section~\ref{constvar}. \subsubsection*{Modulus parameter function forms:} The parameter functions for the modulus are specified as a set of 2D functions over Fourier space (in terms of $k_x$ and $k_y$): often one for each of location (e.g. mean) and scale (e.g. standard deviation, sd), and/or any other parameters of the prior. The center of Fourier space, i.e., the $k = (k_x,k_y) = (0,0)$ frequency, carries the prior for the overall image intensity mean. At this location, it is reasonable to choose a different pdf than at all other locations. Indeed, the prior at $k = (k_x,k_y) = (0,0)$ can be modeled as improper, e.g. uniform on the real line, leading to an intrinsic, non-stationary prior for the image \citep{kunsch1987intrinsic,besag1991bayesian}.
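As a concrete sketch of a modulus parameter-function specification, the following illustrative code (not the paper's software) builds centered location and scale maps using the inverse-power form $f_{\mu}(|k|) = a/|k|^{b}$ that appears later in Example 1, with the scale taken proportional to the location and the origin flagged for separate treatment of the overall mean; all function names and default values are our own assumptions.

```python
import numpy as np

def radial_grid(Ny, Nx):
    """Centered |k| distances over an Ny x Nx Fourier grid."""
    ky = np.fft.fftshift(np.fft.fftfreq(Ny) * Ny)
    kx = np.fft.fftshift(np.fft.fftfreq(Nx) * Nx)
    KX, KY = np.meshgrid(kx, ky)
    return np.sqrt(KX**2 + KY**2)

def modulus_param_functions(Ny, Nx, a=1.0, b=2.0, scale_frac=0.5):
    """Location parameter f_mu(|k|) = a / |k|**b (isotropic,
    decreasing with distance from the origin), with the scale set
    proportional to the location.  The origin k = (0, 0) is flagged
    for separate (e.g. improper uniform) treatment of the overall
    image mean."""
    d = radial_grid(Ny, Nx)
    origin = d == 0
    mu = np.zeros_like(d)
    mu[~origin] = a / d[~origin]**b
    sigma = scale_frac * mu
    return mu, sigma, origin

mu, sigma, origin = modulus_param_functions(64, 64)
# The location parameter decays isotropically with distance from the
# center of the (shifted) grid, here at index (32, 32).
assert mu[32, 33] > mu[32, 40] > mu[32, 60]
```

Any other parameter function form, or a different scale strategy, drops in by replacing `modulus_param_functions`.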
In general, useful priors across Fourier space are generated by allowing the parameter function for the location parameter to decrease with increasing distance from the center of Fourier space. In our experience, the functional form of the descent has a major impact on the properties of the prior model; much more so than the form of the pdf chosen for each Fourier space location. In addition, specific ranges of frequencies can be accentuated by increasing the parameter function for the location parameter over those frequencies, leading to enhanced spatial frequency bands as a function of distance from the origin. When considering how to define the modulus parameter function for the scale parameter, we find that setting it to be proportional to that for the mean is often a good strategy, though the method allows for different strategies wherever warranted. Other parameter functions (i.e. parameter functions that do not represent location or scale) may be intuitively more difficult to specify. We do not consider any such functions here, as we find priors with one or two parameters at each Fourier space location to be satisfactory for the problems we have encountered. One situation where a parameter function for something other than the location and scale parameters might be useful would be when mixture distribution priors are appropriate. In particular, a mixture of two distributions could be considered where one of the distributions consists of a probability mass specified at zero. The variation in probability mass for zero at each Fourier space location can itself be specified by a parameter function. For example, one might want to specify increased probability of exact zeros the further away a point is from the origin of Fourier space. Such a prior/parameter function combination would have the effect of encouraging more sparsity at Fourier space locations further from the origin.
If high enough mass were specified for zeros across Fourier space, this could lead to sparsely represented reconstructions that could in turn be useful when storage space is an issue. The end result would provide lasso-style shrinkage in combination with prior information about spatial properties. \subsection{Priors and parameter functions for signal argument} Generally, we have limited prior knowledge about the argument of the signal; the argument is related to the relative positioning of objects in the image, and moving (shifting) objects around in an image will change the argument of related frequencies. We therefore specify the prior for the argument to reflect \emph{a~priori} ignorance by using an \emph{i.i.d.\,} uniform distribution on the circle at all Fourier space locations, i.e., we define flat parameter functions representing $-\pi$ and $\pi$ as the parameters for $U(a,b)$ distributed arguments at all Fourier space locations. A notable exception to using uniform priors for the argument occurs when the prior is being built empirically from a database of images; we discuss this scenario in our final (simulation) example of Section~\ref{simstudy}. \subsection{BIFS likelihood} Similar to the prior, the BIFS likelihood is modeled separately for the modulus and argument of the signal at each Fourier space location. Because we model based on independence across Fourier space points, a range of different noise structures (specified in Fourier space) can readily be incorporated into the likelihood $\pi(\mathcal{F} y_k| \mathcal{F} x_k)$. The parameter(s) of the likelihood need to be provided or estimated for the BIFS algorithm. A straightforward approach to this estimation is to extrapolate any areas of the original image that are known to consist only of noise to an image of equal size to the original, and then Fourier transform to estimate the corresponding noise distribution in Fourier space.
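A minimal sketch of the noise-estimation idea just described, assuming access to a background-only patch. The resampling step and the orthonormal FFT scaling are our own simplifications; the exact relation between image-space and Fourier-space noise SD depends on the DFT normalization convention used.

```python
import numpy as np

rng = np.random.default_rng(1)
Ny, Nx = 64, 64
sigma_img = 2.0

# A region of the image known to contain only noise.
noise_patch = rng.normal(0.0, sigma_img, size=(16, 16))

# Extrapolate the noise-only region to a full-size field by resampling
# its values (a simple stand-in for any extrapolation scheme), remove
# the mean so only noise power remains, then Fourier transform.
full = rng.choice(noise_patch.ravel(), size=(Ny, Nx))
full -= full.mean()
F = np.fft.fft2(full) / np.sqrt(Ny * Nx)   # orthonormal scaling

# SD of the real/imaginary components of the Fourier-space noise;
# under this (orthonormal) convention it is sigma_img / sqrt(2).
sigma_fourier = np.std(np.concatenate([F.real.ravel(), F.imag.ravel()]))
```

The resulting `sigma_fourier` could then be plugged in as the likelihood scale parameter at every Fourier space location.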
In practice, not knowing the noise standard deviation is not a major impediment to proceeding with Bayesian image analysis. The noise in the image and the precision in the prior trade off with one another in the Bayesian paradigm, and therefore the parameter functions of the prior can be adjusted to produce a desired effect \emph{a posteriori}. This \emph{ad hoc} approach is common to much Markov random field prior modeling in Bayesian image analysis; the appropriate setting of hyper-parameter values is a difficult problem in general and often they are left as parameters to be tuned by the user \citep{sorbye2014scaling}. \subsection{Posterior estimation} Posterior estimation in conventional Bayesian image analysis tends to focus on MAP estimation (i.e., minimizing a $0-1$ loss function) primarily because it is usually the most computationally tractable. In the BIFS formulation the MAP estimate can be efficiently obtained by independently maximizing the posterior distribution at each Fourier space location, i.e., $x_{\mathrm{\tiny MAP}} = \mathcal{F}^{-1}(\mathcal{F} x_{\mathrm{\tiny MAP}})$ where $\mathcal{F} x_{\mathrm{\tiny MAP}} = \{\mathcal{F} x_{k, \mathrm{\tiny MAP}},\; k \in K \} $ and $ \mathcal{F} x_{k, \mathrm{\tiny MAP}} = \arg\max_{\mathcal{F} x_k} \left\{ \pi(\mathcal{F} x_k | \mathcal{F} y) \right\} = \arg\max_{\mathcal{F} x_k} \left\{ \pi(\mathcal{F} x_k | \mathcal{F} y_k) \right\} $. This contrasts with conventional Bayesian image analysis, where even the most computationally convenient MAP estimates typically require iterative computation methods such as conjugate gradients or expectation-maximization. Beyond the MAP estimate, it is straightforward to simulate from the posterior of BIFS models to obtain summaries such as the minimum mean squared error (MMSE) estimate \citep{winkler1995image} or other summaries of samples from the posterior.
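To illustrate how cheap posterior summaries become under independence, here is a toy sketch (our own simplified setup, not the modulus/argument model used in this paper) in which both the prior and likelihood for the real and imaginary parts at each $k$ are taken to be Gaussian, so each per-$k$ posterior is conjugate-normal, and the whole-image posterior mean and posterior draws come from a single pass over Fourier space followed by an inverse FFT.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32
x_true = np.outer(np.hanning(N), np.hanning(N))   # smooth "true" image
y = x_true + rng.normal(0.0, 0.1, size=(N, N))    # noisy data
Fy = np.fft.fft2(y) / N                           # orthonormal 2D DFT

# Radial distance |k| on the unshifted grid (aligned with Fy).
k = np.fft.fftfreq(N) * N
KX, KY = np.meshgrid(k, k)
d = np.sqrt(KX**2 + KY**2)

tau2 = 1.0 / (1.0 + d)**2   # prior variance, decaying with |k|
s2 = 0.01                   # Fourier-space noise variance (assumed known)

# Conjugate-normal update at every k simultaneously: the posterior mean
# shrinks the data toward the zero prior mean, most strongly at high |k|.
w = tau2 / (tau2 + s2)
post_mean = w * Fy
x_mmse = np.real(np.fft.ifft2(post_mean * N))

# An independent Monte Carlo posterior draw (no MCMC required); taking
# the real part enforces the conjugate symmetry of a real-valued image.
post_sd = np.sqrt(w * s2 / 2.0)   # per real/imaginary component
draw = post_mean + post_sd * (rng.normal(size=Fy.shape)
                              + 1j * rng.normal(size=Fy.shape))
x_draw = np.real(np.fft.ifft2(draw * N))
```

In this toy example the posterior mean image has lower mean squared error than the raw data, since the high-frequency coefficients, which here carry mostly noise, are shrunk strongly toward zero.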
The independence of posterior distributions over Fourier space implies that simple Monte Carlo simulation is all that is required to obtain posterior samples. This is in contrast to conventional Bayesian image analysis, where some form of Markov chain Monte Carlo (MCMC) simulation is typically needed, leading to issues of chain convergence and mixing that need to be dealt with \citep{gilks1996markov}. The general computational approach to implementing BIFS follows that of Algorithm~\ref{bifsalg}. Note that, when an uninformative uniform prior is used for the argument, and the likelihood is symmetric about the observed argument in the data, the corresponding maximum of the posterior at that Fourier space point is simply the argument of the Fourier transformed data at that point. Therefore, under these conditions, the exact form of the likelihood is unimportant for the MAP estimate. This leads to added simplicity for obtaining the MAP image estimate and is particularly beneficial when working with Gaussian noise in image space, where the corresponding distribution for the argument in Fourier space is difficult to work with given the lack of an analytical solution (see Section~\ref{ricelik}). \label{likarg} \begin{algorithm} \caption{General BIFS implementation} \label{bifsalg} \begin{algorithmic} \State Fast Fourier transform (FFT) image data, $y$, into Fourier space, $\mathcal{F} y$ \State Specify noise distribution/likelihood in Fourier space $\pi(\mathcal{F} y_k | \mathcal{F} x_k)$ \State Specify prior distribution form $\pi(\mathcal{F} x_k)$ \State Specify parameter functions for each of the modulus and argument of the signal \ForEach{$k \in K$} \State Obtain $\pi(\mathcal{F} x_k | \mathcal{F} y_k) \propto \pi(\mathcal{F} y_k | \mathcal{F} x_k) \pi(\mathcal{F} x_k)$ \State Generate posterior estimates/summaries/simulations at each $k$ via MAP or Monte Carlo \EndFor \State Inverse FFT posterior estimates/summaries/simulations back to image space.
\end{algorithmic} \end{algorithm} \subsection{Modeling Gaussian \emph{i.i.d.\,} noise in image space} \label{ricelik} A common model for the noise in images is to assume that the image intensities are contaminated by \emph{i.i.d.\,}~Gaussian noise. Although \emph{i.i.d.\,} Gaussian noise in image space transforms to complex Gaussian noise in Fourier space with independent real and imaginary components, when the real and imaginary components are transformed to modulus and argument, the associated errors are not Gaussian distributed. The corresponding likelihood model for the modulus is the Rician distribution which takes the following form \citep{rice1944mathematical,rice1945mathematical,gudbjartsson1995rician,rowe2004complex,miolane2017template}: \begin{equation} \pi(r|\rho,\sigma) = \frac{r}{\sigma^2} \exp \left(- \frac{r^2 + \rho^2}{2 \sigma^2} \right) I_0 \left( \frac{r \rho}{\sigma^2} \right); \qquad r, \rho, \sigma \ge 0 \label{ricelikform} \end{equation} where $I_0(z)$ is the modified Bessel function of the first kind with order zero, $\sigma$ is the standard deviation of the real and imaginary Gaussian noise components in Fourier space (\emph{i.i.d.\,} real and imaginary parts), $r$ is the observed modulus of the signal, i.e., $\operatorname{Mod} (\mathcal{F} y_k)$ in the BIFS formulation, and $\rho$ is the noise-free modulus, $\operatorname{Mod} (\mathcal{F} x_k)$. Note that in the Rician likelihood, the standard deviation of each of the real and imaginary components is the $\sigma$ parameter of the Rician distribution. Therefore, by obtaining an estimate of the noise level in image space, the $\sigma$ parameter over Fourier space can itself be estimated by dividing the estimated standard deviation of the noise in image space by $4$. 
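For reference, the Rician modulus likelihood of Equation~\ref{ricelikform} can be written directly in a few lines. This is a minimal sketch: NumPy's \texttt{np.i0} is the modified Bessel function $I_0$, and because the unscaled $I_0$ overflows for large $r\rho/\sigma^2$ this version is only intended for moderate arguments.

```python
import numpy as np

def rician_pdf(r, rho, sigma):
    """Rician density of the observed modulus r given the noise-free
    modulus rho and per-component noise SD sigma (the form given in
    the text).  Only valid for moderate r * rho / sigma**2, since the
    unscaled Bessel function I_0 overflows for large arguments."""
    r = np.asarray(r, dtype=float)
    return (r / sigma**2) * np.exp(-(r**2 + rho**2) / (2.0 * sigma**2)) \
        * np.i0(r * rho / sigma**2)

# Sanity check: the density integrates to ~1 over r >= 0.
r = np.linspace(0.0, 30.0, 20001)
mass = np.sum(rician_pdf(r, rho=3.0, sigma=1.0)) * (r[1] - r[0])
```

For large arguments an exponentially scaled Bessel function (e.g. `scipy.special.i0e`) would be the numerically safe substitute.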
\label{constvar} The corresponding likelihood for the argument takes the form \citep{gudbjartsson1995rician,rowe2004complex}: \begin{equation} \pi(\psi|\rho,\theta,\sigma) = \frac{\exp \left(-\frac{\rho^2}{2 \sigma^2}\right)}{2 \pi}\left[1 + \frac{\rho}{\sigma} \cos(\psi - \theta) \exp \left(\frac{\rho^2 \cos^2 (\psi - \theta)}{2 \sigma^2} \right) \int_{z = - \infty}^{\frac{\rho \cos (\psi - \theta)}{\sigma}} \exp \left( - \frac{z^2}{2} \right) \, dz \right] \label{ModPhase} \end{equation} where $\psi \in [-\pi, \pi)$ is the observed argument of the signal, $\operatorname{Arg} (\mathcal{F} y_k)$, and $\theta \in [-\pi, \pi)$ is the noise-free argument, $\operatorname{Arg} (\mathcal{F} x_k)$. \subsubsection*{Posterior estimation for \emph{i.i.d.\,} Gaussian noise in image space} \textbf{Modulus MAP estimate:} If one is interested in the MAP image estimate (i.e. the image associated with minimizing the $0-1$ loss function of the posterior), then the mode of the posterior for the modulus at each Fourier space location needs to be estimated. A first approach might be to consider a direct off-the-shelf optimization, but we have found this to be problematic. The problem is that this is a non-trivial optimization (at least in the Rician case) because at many Fourier space locations the prior and likelihood can be highly discordant, i.e. the modes of the prior and the likelihood can be very far apart with very little density in between for both distributions. This discordance is not entirely surprising and exists for the same reason that typical conventional image analysis priors are not full representations of prior beliefs for an image, but instead typically only represent expected local characteristics such as smoothness of the image \citep{besag1993toward,green1990bayesian}. In practice, we find that direct numerical optimization breaks down in extremely discordant cases.
\label{discord} We therefore propose the following approach to MAP estimation with the Rician likelihood (we drop the $k$ subscript for location in Fourier space to aid clarity). Take the Rician likelihood given in Equation~\ref{ricelikform} and multiply it by the prior for the modulus $\pi(\rho)$. The posterior is $\pi(\rho|r,\sigma) \propto \pi(r|\rho,\sigma) \pi(\rho)$ and we can take logs to simplify, drop constant terms, and find the maximum. Finally, take the second derivative to check that it is concave by bounding the derivative of the Bessel function. For example, if we assume an exponential prior for $\rho$, i.e. $\pi(\rho) \propto \exp \left(- \frac{\rho}{m} \right)$, take logs and simplify, we get \begin{equation*} \log \pi(\rho | r, \sigma) = c + \log \left( \frac{r}{\sigma^2} \right) - \frac{r^2+\rho^2}{2\sigma^2} + \log \left[I_0 \left( \frac{r \rho}{\sigma^2} \right) \right] -\frac{\rho}{m} \end{equation*} where $c$ is a constant term. Now differentiate w.r.t. $\rho$ and set to $0$: \begin{equation*} - \frac{\rho}{\sigma^2} - \frac{1}{m} + \frac{r I_1 \left( \frac{r \rho}{\sigma^2} \right)}{\sigma^2 I_0 \left( \frac{r \rho}{\sigma^2} \right)} = 0 \end{equation*} where $I_1(z)$ is the modified Bessel function of the first kind of order one, and define \begin{equation*} b(\rho) = \frac{I_1 \left( \frac{r \rho}{\sigma^2} \right)}{I_0 \left( \frac{r \rho}{\sigma^2} \right)} \end{equation*} to get \begin{equation} \label{exprho} \rho = r b(\rho) - \frac{\sigma^2}{m} \enspace. \end{equation} The MAP estimate of $\rho$ can then be computed quickly through iteration. Start with $\rho_0=r$ and iterate Equation~\ref{exprho} as $\rho_{n+1} = r b(\rho_n) - \frac{\sigma^2}{m}$ until convergence. In practice this requires only a few iterations to achieve high accuracy. Note that the unconstrained maximum of the function may occur at negative values of $\rho$.
In that case the posterior maximum for $\rho$ is set to $0$, because the function is monotonically decreasing to the right of the maximum. A similar iterative approach can be applied when considering other prior distributions for $\rho$. \textbf{Argument MAP estimate:} Note that for a noise process with a uniformly random argument on the circle (as for \emph{i.i.d.\,} noise in real and imaginary components), the likelihood at a Fourier space point has highest density at the argument of the data at that Fourier space location, i.e. $\operatorname{Arg}(\mathcal{F} y_k)$. Therefore, since we adopt a uniform prior for the argument on the circle, the argument corresponding to the maximum of the posterior is also $\operatorname{Arg}(\mathcal{F} y_k)$. This property will always hold when using the uniform prior on the circle for the argument provided the likelihood for the argument is symmetric about its maximum; this condition necessarily holds for all noise processes with uniformly random argument. \subsection{Example 1 -- Smoothing/denoising} \label{denoise} Figure~\ref{mandrillpics} shows the BIFS MAP reconstruction results for a grayscale test image of a Mandrill monkey face using the exponential prior, Rician likelihood, and a parameter function for the exponential distribution mean of the form $f_{\mu}(|k|; a, b) = a/|k|^b$ (inverse exponentiated distance) at all locations except for the origin, $k=(0,0)$, where the prior was an improper uniform distribution on the real line. The top-left Panel~(a) shows the original noise-free image and the top-middle Panel~(b) shows the same image with added Gaussian noise (zero mean with SD $\approx$ one third of the dynamic range of the original image). The noisy image in Panel~(b) is used as the input degraded image for the BIFS models. The remaining panels show BIFS MAP reconstructions for different values of $b$, namely (c)~$b=1.5$, (d)~$b=1.75$, (e)~$b=2$, and (f)~$b=2.5$.
The parameter value for $a$ was chosen by matching the power of the parameter function (sum of squared magnitudes) to that in the observed data over all Fourier space points other than at $k=(0,0)$. The image intensities in each panel are linearly re-scaled to use the full dynamic range. Re-scaling is appropriate in situations where image features are of interest, but not if quantification of intensities is of interest. \begin{figure} \caption{Mandrill pics, (a) original image; (b) Gaussian noise degraded image; (c) BIFS MAP with $b=1.5$; (d) $b=1.75$; (e) $b=2$; (f) $b=2.5$.} \label{mandrillpics} \end{figure} In examining the MAP reconstructions of Panels~(c)~to~(f) it is clear that the overall level of smoothness increases (and noise is suppressed) as the value of $b$ increases. This is to be expected since higher $b$ implies faster decay of the parameter function for the signal modulus with increasing distance from the origin. However, as a price for higher noise suppression, finer level features are lost. For example, by $b=2.5$ the whiskers and even the nostrils of the Mandrill are smoothed away. \textbf{Robustness to heavy-tailed noise:} In order to examine whether the BIFS reconstructions were robust to noise distributions with heavier tails, the BIFS procedure was repeated for data with added noise generated from Student $t$-distributions controlled to have the same SD but with different degrees-of-freedom (d.f.). Figure \ref{Tmandrillpics} displays the results of the reconstructions using the same priors as above with $b = 2$ and with (a)~10~d.f.; (b)~5~d.f.; and (c)~3~d.f.
The reconstructions are visually quite similar to each other, with slight differences only showing up in the very heavy-tailed 3~d.f. reconstruction. Results from other values of $b$ were similarly robust. \begin{figure} \caption{Mandrill pics reconstructed under the wrong noise distribution (Student-$t$), all with $b=2$, (a) 10 d.f.; (b) 5 d.f.; (c) 3 d.f.} \label{Tmandrillpics} \end{figure} \subsection{Example 2 -- Frequency enhancement} The example in Figure~\ref{moon1} displays a range of BIFS reconstructions for a grayscale test image of a surface patch on the moon. The reconstructions are again based on an exponential prior with Rician likelihood for the signal modulus. Panel~(a) shows the original image and Panel~(b) shows the \emph{i.i.d.\,} additive Gaussian noise degraded image serving as the input image for reconstruction. Panel~(c) displays the BIFS reconstruction based on applying the denoising prior parameter function used in Example~1 with $b=2$. Panels~(d) through~(f) show frequency selective priors for which prior weight is only given to Fourier space locations within a specific range of distances from the origin (\emph{frequency selective torus parameter function}); these priors are also smoothed by an isotropic Gaussian spatial kernel with SD of 1.5 Fourier space pixels in each of the $k_x$ and $k_y$ directions. For Panel~(d) distances of 1 to 5 pixels from the origin are given non-zero mass; in Panel~(e) 10.01 to 15 pixels; and in Panel~(f) 15.01 to 60 pixels. Panels~(g) through~(i) show corresponding reconstructions where the parameter function is a weighted average of the torus parameter function directly above and the denoising parameter function of Panel~(c), weighted at 90\% torus and 10\% denoising.
In all of the examples, the level of each of the parameter functions is adjusted to approximately match total power to that observed in the image data. It is clear that the different torus parameter functions provide results as expected in terms of focusing in on specific frequency windows. Moreover, when mixed with other parameter functions such as the denoising prior, useful compromises between the parameter function forms can be achieved. The addition of the denoising component in Panels~(g) through~(i) softens the harsh restriction to the specific frequency ranges. For example, while the reconstruction in Panel~(d) is highly blurry, the reconstruction of Panel~(g) allows enough higher resolution information to make the image less obviously blurry to view while still enhancing low frequency features. For the high range of frequencies of Panel~(f), it is clear that the addition of the denoising component in Panel~(i) is critical to even understanding the context of the image. The effect of combining the high-frequency selective prior with the denoising prior is to enhance smaller features. In particular, notice that this prior is able to identify the small crater indicated by the top arrow in Panel~(i), which is missed by the other priors with less high frequency information. Notice also that even though this crater is observable in Panel~(f), it is not easily distinguishable there as a different type of object from the white spot at the middle of the dip in the large crater at the bottom, indicated by the bottom arrow. Clearly, the design of these parameter functions is critical to the properties of the prior, in a similar way to how the clique penalties of Markov random field priors operate. However, it is difficult to see how clique functions could be defined to produce priors with the properties of the frequency selective priors or their mixtures with denoising priors.
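A sketch of how such a frequency selective torus parameter function and its mixture with the denoising form could be constructed. The FFT-based Gaussian smoothing and all function names here are illustrative choices, not the paper's implementation.

```python
import numpy as np

def radial(N):
    """Centered |k| distances for an N x N Fourier grid."""
    k = np.fft.fftshift(np.fft.fftfreq(N) * N)
    KX, KY = np.meshgrid(k, k)
    return np.sqrt(KX**2 + KY**2)

def torus_pf(N, r_lo, r_hi, sd=1.5):
    """Frequency selective torus parameter function: mass only at
    locations with r_lo <= |k| <= r_hi, then smoothed by an isotropic
    Gaussian kernel (SD in Fourier-space pixels) via (circular) FFT
    convolution."""
    d = radial(N)
    band = ((d >= r_lo) & (d <= r_hi)).astype(float)
    kernel = np.exp(-d**2 / (2.0 * sd**2))
    kernel /= kernel.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(band) *
                                np.fft.fft2(np.fft.ifftshift(kernel))))

def denoise_pf(N, a=1.0, b=2.0):
    """Inverse-power denoising parameter function a / |k|**b."""
    d = radial(N)
    pf = np.zeros((N, N))
    pf[d > 0] = a / d[d > 0]**b
    return pf

# 90% torus / 10% denoising mixture, as used for Panels (g)-(i).
N = 64
pf = 0.9 * torus_pf(N, 10.01, 15.0) + 0.1 * denoise_pf(N)
```

Because the smoothing kernel is normalized, the convolution preserves the total mass of the annulus while tapering its edges; the mixture weights then trade off band selectivity against the denoising decay.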
\begin{figure}
\caption{Moon images: (a) original image; (b) Gaussian noise degraded image; (c) BIFS MAP estimate using the denoising prior from Section~\protect\ref{denoise} with $b=2$; (d)--(f) frequency-selective torus priors with non-zero mass at distances of 1--5, 10.01--15, and 15.01--60 pixels from the Fourier space origin, respectively; (g)--(i) corresponding 90\%/10\% weighted mixtures of the torus and denoising parameter functions.}
\label{moon1}
\label{moon_a}\label{moon_b}\label{moon_c}\label{moon_d}\label{moon_e}\label{moon_f}\label{moon_g}\label{moon_h}\label{moon_i}
\end{figure}
There is clearly considerable potential for designing parameter function / prior combinations for BIFS that can produce a range of image processing characteristics that are not readily achievable with MRF priors.
\subsection{Example 3 -- edge detection}
The family of parameter functions described in the previous example can also prove useful for edge detection in images, as illustrated in this example using a Gaussian-noise-contaminated version of the standard grayscale pirate test image. The sequence of images in Figure~\ref{pirate1}, Panels~(d) to~(f), shows how the removal of low frequency information isolates edge information. The upper limit on frequencies is chosen as a trade-off between capturing the highest frequency edge information and potentially masking the edge information if there is too much high frequency noise (Figure~\ref{pirate1}, Panels~(g) to~(i)).
\begin{figure}
\caption{Pirate images, (a) original image; (b) Gaussian noise degraded image; (c) BIFS denoising prior, $b=2$; (d) frequency-selective prior over 10.01--50 pixels; (e) 15.01--50; (f) 20.01--60; (g) 20.01--30; (h) 20.01--100; (i) 20.01--200.}
\label{pirate1}
\label{pirate_a}\label{pirate_b}\label{pirate_c}\label{pirate_d}\label{pirate_e}\label{pirate_f}\label{pirate_g}\label{pirate_h}\label{pirate_i}
\end{figure}
\section{BIFS Properties}
There are multiple properties of the BIFS formulation that prove advantageous relative to conventional MRF priors and other Bayesian image analysis models:
\subsection{Computational Speed:}
\label{compspeed}
The independence property of the BIFS formulation generally leads to improved computational efficiency over MRF-based or other conventional Bayesian image analysis models that incorporate spatial correlation structures into the priors in image space. For MAP estimation, Bayesian image analysis using MRF priors requires high-dimensional iterative optimization algorithms such as iterated conditional modes (ICM) \citep{besag1989digital}, conjugate gradients \citep{hestenes1969multiplier}, or simulated annealing \citep{geman1984stochastic}. Iterative processes are necessary when working in the space of the images because of the inter-dependence between pixels. In contrast, BIFS simply requires independent posterior mode estimation at each Fourier space location, and the inverse discrete Fourier transform of this set of Fourier space local posterior modes will provide the global posterior mode image. This independent-over-Fourier-space optimization approach also scales well with respect to increasing image size, increasing dimensionality (e.g. to 3D), or increasing prior model complexity.
The level of computational complexity is basically $n\,g(k)$, where $n$ is the number of Fourier space points and $g(k)$ is the complexity of the optimization at each Fourier space point. Therefore, for increased image size or dimension, the computation time is scaled by the proportional increase in the number of Fourier space points, whereas for increased complexity at each Fourier space point to $h(k)$, the total computation is simply scaled by $\frac{h(k)}{g(k)}$. Computational improvements can similarly be obtained if one wishes to perform posterior mean or other estimation and credible interval generation. (Note that one has to be careful interpreting credible intervals for Bayesian image analysis models since the prior models are at best only rough approximations to characteristics of prior beliefs.) Markov random field prior models generally require high-dimensional Markov chain Monte Carlo (MCMC) posterior simulations to obtain the MMSE image estimate as the posterior mean. However, BIFS only requires at most low-dimensional MCMC to be performed at each Fourier space location. A single realization of the posterior parameter set from each Fourier space location can be inverse Fourier transformed to produce an independent realization of an image from the posterior distribution, thereby avoiding the difficulties of dealing with the potentially slow mixing chains that often occur in Bayesian image analysis modeling when updating at the pixel level. In addition to the algorithmic speed-up for MAP and MMSE estimation, the independence property of the Fourier space approach also allows for trivial parallelization, because sampling for each Fourier space location can be performed independently. Therefore, an acceleration by a factor close to the number of processors available can be achieved, up to having as many processors as Fourier space points.
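The independent-over-Fourier-space structure can be sketched in a few lines. This is a minimal illustration under a simplified conjugate Gaussian model (not the Rician likelihood used in the examples), with function names of our own: the global MAP image is just an elementwise update at every Fourier location followed by a single inverse FFT, with no iterative image-space solver.

```python
import numpy as np

def bifs_map_gaussian(y_img, prior_mean, prior_prec, noise_prec):
    """Toy BIFS MAP estimate: independent Gaussian priors on each Fourier
    coefficient with Gaussian noise make the posterior factorize over
    Fourier space, so the global mode is an elementwise precision-weighted
    average followed by one inverse FFT."""
    Y = np.fft.fft2(y_img)
    # Independent posterior mode at every Fourier space location
    F_map = (prior_prec * prior_mean + noise_prec * Y) / (prior_prec + noise_prec)
    return np.real(np.fft.ifft2(F_map))
```

Because every Fourier location is updated independently, the elementwise step also parallelizes trivially, e.g. by splitting the Fourier grid across processors.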
Finally, another aspect in which it is computationally more efficient to work in the BIFS framework comes about because it is much easier to try different prior models defined in Fourier space. In particular, one can very easily change the parameter functions for the prior, which in our experience has the biggest effect on the properties of the prior overall. Changing the form of the prior distribution at each Fourier space location takes more coding effort, but is still much less work than writing code for different MRF-prior-based posterior reconstructions; trying different MRF priors (beyond simply changing parameter values) is typically a non-trivial task. Overall, the computational speed afforded by BIFS, in addition to the potential for massive parallelization and the flexibility of parameter functions, has great potential for the advancement of Bayesian image analysis and opens it up to play a larger role in Big Data imaging problems.
\subsection{Resolution invariance}
Specifying the prior in Fourier space leads to easy translation of the priors across changes in image resolution. When increasing resolution in image space by factors of two, the central region of the Fourier transform at higher resolution corresponds very closely to the complete Fourier transform at lower resolution; they correspond to the same spatial frequencies within the field of view. The lower resolution image is very close to a band-limited version of the image at higher resolution. Note that this is only approximate, as the increased resolution can slightly alter the magnitude of the measured Fourier components whose wavelengths are significantly longer than the resolution. However, this effect can be bounded by the ratio of the maximum resolution to the wavelength of the measured frequency and so is typically quite small. This small change in magnitude will remain small in the posterior estimate when the parameter function is continuous.
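The correspondence between the central Fourier region at high resolution and the full spectrum at low resolution can be checked numerically. The following sketch (our own, using NumPy's FFT conventions) constructs the band-limited low-resolution image by cropping the centered spectrum of a high-resolution image:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
hi = rng.normal(size=(2 * n, 2 * n))      # a high-resolution (2n x 2n) image

# Centered spectrum of the high-resolution image
Hi = np.fft.fftshift(np.fft.fft2(hi))

# Central n x n block = the spatial frequencies representable at low resolution
central = Hi[n // 2 : n // 2 + n, n // 2 : n // 2 + n]

# Band-limited low-resolution image (the factor 4 is the ratio of pixel counts)
lo = np.fft.ifft2(np.fft.ifftshift(central)) / 4

# The full low-resolution spectrum matches the central high-resolution block
Lo = np.fft.fftshift(np.fft.fft2(lo)) * 4
assert np.allclose(Lo, central)
```

The low-resolution image may carry a small imaginary part from the asymmetric Nyquist rows of the crop, which reflects the "only approximate" caveat noted above.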
When the resolution increases, but not by a factor of two, the same approach of matching parameter functions over the frequency range can still be applied. However, there will no longer be a direct match of points over the lower frequencies, and therefore there will be a less perfect match of the prior distributions overall. The above described BIFS approach to matching over different resolutions contrasts with MRF models, for which, in order to retain the spatial properties of the prior at lower frequencies, an increase in resolution would require careful manipulation of the neighborhood structure and prior parameters to match spatial auto-covariance structures between the different resolution images~\citep{rue2002fitting}.
\subsection{Isotropy}
\label{isotropy}
In order to specify a \textit{maximally isotropic} BIFS prior, all that is required is for the prior to be specified in such a way that the distribution at each Fourier space location depends only on the distance from the center of Fourier space. This can be achieved by defining the parameter functions for the prior completely in terms of distance from the origin in Fourier space, i.e., $\pi(\mathcal{F} x_k) = g(|k|)$, and not the orientation with respect to the center of Fourier space. (The ``maximally'' qualifier is needed because the prior will be isotropic only up to the maximal level afforded by the discrete -- and anisotropic -- specification of Fourier space on a regular, i.e. square, grid.) The relative ease with which isotropy is specified can be contrasted with that of MRF-based priors, where local neighborhood characteristics need to be carefully manipulated by increasing neighborhood size and adjusting parameter values to induce approximate spatial isotropy~\citep{rue2002fitting}. For MRFs, pairwise interaction parameters for diagonal neighbors need to be specified differently from horizontal/vertical ones, requiring something of a balancing process to lead to an approximately isotropic process overall.
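A maximally isotropic parameter function is then just a one-dimensional profile evaluated on the radial distance grid; a minimal sketch (function names our own, NumPy FFT ordering):

```python
import numpy as np

def radial_parameter_function(n, g):
    """Evaluate a 1-D profile g(|k|) on an n x n Fourier-space grid, so that
    the resulting parameter function depends only on distance from the
    origin of Fourier space (maximal isotropy)."""
    k = np.fft.fftfreq(n) * n          # integer frequencies in FFT order
    kx, ky = np.meshgrid(k, k, indexing="ij")
    return g(np.hypot(kx, ky))

# Example: an exponentially decaying scale profile
pf = radial_parameter_function(64, lambda r: np.exp(-r / 10.0))
```

Any two locations at the same radius receive identical parameter values, regardless of orientation.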
However, for BIFS all that is required to achieve ``maximal'' isotropy is that the parameter functions are specified such that they depend only on the distance from the origin of Fourier space. Note, however, that even for BIFS priors it may still be possible to adjust the parameter function to lead to greater isotropy in practice, by tweaking the parameter function to undo anisotropy effects due to Fourier space discretization, though it is not obvious how one might go about achieving this. Although easy to specify as such, isotropy is clearly not a requirement of the BIFS prior formulation. Anisotropy can be induced by allowing the parameter functions for the prior to vary in different ways with distance from the origin of Fourier space along different directions.
\section{Approximating Markov random fields with BIFS}
\label{MRFmatch}
Given that MRF priors have become something of a standard in Bayesian image analysis, it would be useful to generate BIFS models that match these commonly used priors. We propose an approach to specifying a prior that is close to an MRF of interest but also potentially ``maximally isotropic''. The goal is to create BIFS models that are approximate matches, but that have 1)~increased computational ease and speed (by specifying the prior as independent over Fourier space locations); 2)~increased isotropy if desired (i.e., reduced directional preferences); and 3)~resolution invariance. To find a formulation in Fourier space that approximately matches a corresponding MRF model, we propose taking the steps described in Algorithm~\ref{simMRFalg}.
\begin{algorithm}
\caption{Simulating MRF models with BIFS}
\label{simMRFalg}
\begin{algorithmic}[1]
\State Simulate a set of images from the MRF prior distribution and take the FFT of each image
\State Determine a model for the prior probability distribution of the modulus to be used and independently estimate its parameters at each Fourier space location
\State Determine an appropriate parameter function form over Fourier space for each of the parameters of the chosen prior, based on estimates from the simulated data. (If a maximally isotropic approximation to the prior is required, then the parameter functions need to be chosen subject to the constraint that they depend only on distance from the origin of Fourier space)
\State Estimate the coefficients of the parameter function by fitting to the marginally estimated parameters in Fourier space, e.g. via least squares
\end{algorithmic}
\end{algorithm}
Note that it is possible to take this approach when estimating a BIFS approximation to any Bayesian image analysis prior model, though for higher-level priors (e.g. ones that directly model geometric properties of objects in the image) the approximation is likely to be much less good.
\subsection{Example 4 - Matching Gaussian MRFs}
\label{MRImatch}
We here consider using the above ideas to approximate the simple first-order pairwise difference intrinsic Gaussian MRF (IG-MRF)
\begin{equation*}
\pi(x) \propto \exp \left\{ - \frac{\kappa}{2} \sum_{i \sim j} \left(x_i-x_j \right)^2 \right\}
\end{equation*}
as described in \cite{besag1989digital}. The sum over $i \sim j$ is over all unordered pairs of pixels such that $i$ and $j$ are vertically or horizontally adjacent neighbors in the image. The model is called \textit{intrinsic} because the overall mean is not defined and therefore the prior is improper with respect to the overall mean level.
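Steps 2--4 of Algorithm~\ref{simMRFalg} can be sketched as follows. For a self-contained illustration we substitute synthetic per-location exponential power draws for genuine MRF simulations, and fit a simple two-coefficient radial form rather than the cubic-denominator function fitted later in the paper; all names are our own:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
n, n_sims = 32, 400

# Radial distances in FFT ordering (origin excluded: improper DC term)
k = np.fft.fftfreq(n) * n
r = np.hypot(*np.meshgrid(k, k, indexing="ij"))
keep = r > 0

# Stand-in for step 1: power at each location ~ Exponential with a radial mean
true_mean = 100.0 / (1.0 + r[keep] ** 2)
power = rng.exponential(true_mean, size=(n_sims, true_mean.size))

# Step 2: per-location parameter estimate (MLE of the exponential mean)
est_mean = power.mean(axis=0)

# Steps 3-4: fit a radial parameter function by least squares
def f(radius, a0, a1):
    return a0 / (a1 + radius ** 2)

coef, _ = curve_fit(f, r[keep], est_mean, p0=[1.0, 1.0])
```

With real MRF simulations, `power` would instead hold the squared FFT moduli of the simulated images at each Fourier space location.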
We used the R-INLA package to simulate 1,000 realizations of a first-order IG-MRF with $\kappa = 1.0$ and first-order neighborhood structure wrapped on a torus. At each Fourier space location we adopted a prior distribution for the modulus such that the square of the modulus is distributed as exponential. This choice of prior falls in line with the theoretical results in \citep{rue2005gaussian}, Ch~2.6, for the specific case of simulating a Gaussian Markov random field with neighborhood structure wrapped on a torus (i.e. with block-circulant precision matrix). Specifically, an IG-MRF has a precision matrix $\bm{Q}$ that is block circulant, in which case we can show using the analysis in \citep{rue2005gaussian} that the priors in Fourier space are such that the power at each frequency pair is exponentially distributed. The proof is straightforward but notationally complex, so we present the one-dimensional proof to provide intuition. The key idea is that if $\bm{Q}$ is a circulant matrix then we can decompose it as $\bm{Q}=\bm{F\Lambda F}^H$, where $\bm{F}$ is the (discrete) Fourier transform (DFT) matrix, $\bm{F}^H$ is the Hermitian (i.e. conjugate transpose) of $\bm{F}$, and $\bm{\Lambda}$ is a diagonal matrix of eigenvalues of $\bm{Q}$.
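The circulant decomposition, and the resulting separation of the quadratic form over frequencies, can be verified numerically in one dimension. This sketch (our own) uses the circular pairwise-difference precision with first column $(2,-1,0,\dots,0,-1)$ and NumPy's unnormalized DFT convention, under which $\bm{Q}=\bm{F}^H\bm{\Lambda}\bm{F}/n$ with $\lambda = \mathrm{fft}(c)$:

```python
import numpy as np
from scipy.linalg import circulant

n = 8
c = np.zeros(n)
c[0], c[1], c[-1] = 2.0, -1.0, -1.0   # circular pairwise-difference precision
Q = circulant(c)

F = np.fft.fft(np.eye(n))             # DFT matrix: F @ x == np.fft.fft(x)
lam = np.fft.fft(c)                   # eigenvalues of the circulant Q (real here)

# Q = F^H diag(lam) F / n under NumPy's DFT convention
assert np.allclose(Q, F.conj().T @ np.diag(lam) @ F / n)

# The quadratic form separates over frequencies: x^T Q x = sum_k lam_k p_k / n
x = np.random.default_rng(3).normal(size=n)
p = np.abs(np.fft.fft(x)) ** 2        # power spectrum of x
assert np.isclose(x @ Q @ x, np.sum(lam.real * p) / n)
```

The second assertion is exactly the factorization over frequencies used in the proof below: a Gaussian density in $x$ becomes a product of exponential densities in the powers $p_k$.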
Now we can compute
\begin{equation*}
\pi(\bm{x}) \propto \exp \left( -\bm{x}^T \bm{Qx} \right) = \exp \left( -\bm{x}^T \bm{F\Lambda F}^H \bm{x} \right) = \exp \left( -(\bm{F}^T \bm{x})^T \bm{\Lambda} (\bm{F}^H \bm{x}) \right).
\end{equation*}
Now let $f = \bm{F}^T \bm{x}$ be the DFT of $\bm{x}$, let $f^\dagger = \bm{F}^H \bm{x}$ be its complex conjugate (for real $\bm{x}$), and let $p_k = f_k f^\dagger_k = |f_k|^2$ be the power at frequency $k$, so
\begin{equation*}
\pi(\bm{x}) \propto \exp \left( -f^T \bm{\Lambda} f^\dagger \right) = \exp \left( \sum_k - \lambda_k f_k f^\dagger_k \right) =\exp \left( \sum_k - \lambda_k p_k \right) =\prod_k \exp \left( -\lambda_k p_k \right).
\end{equation*}
This final term is a product of functions of the individual $p_k$, so the distributions at each frequency are independent and each has an exponential distribution:
\begin{equation*}
\pi(p) \propto \prod_k \exp \left( - \lambda_k p_k \right).
\end{equation*}
To summarize, this shows that for GMRFs with neighborhood structure specified on the torus, the power spectrum (the square of the signal modulus) is made up of independent exponential random variables. We need to obtain the posterior maximum for the prior where the modulus squared follows an exponential distribution, coupled with Rician noise, analogous to that of Equation~\ref{exprho} where the exponential prior was used directly for the modulus. Assume that $X \sim \textrm{Exp}(1/m)$ (i.e. with mean $m$) for each point in Fourier space and that $P = \sqrt{X}$.
Then for any $\rho \ge 0$,
\begin{equation*}
\Pr(P \le \rho) = \Pr(\sqrt{X} \le \rho) = \Pr(X \le \rho^2) = 1 - \exp \left(- \frac{\rho^2}{m} \right).
\end{equation*}
Differentiating, we get the pdf for the prior of $\rho$:
\begin{equation*}
\pi(\rho | m) = \frac{2 \rho}{m} \exp \left( - \frac{\rho^2}{m} \right) \;\;\; \rho \ge 0.
\end{equation*}
Now applying Bayes' Theorem and taking logs to simplify,
\begin{equation*}
\log \pi(\rho | r, \sigma, m) = c - \frac{\rho^2}{2\sigma^2} + \log \left(I_0 \left( \frac{r \rho}{\sigma^2} \right) \right) + \log \left( \frac{2 \rho}{m} \right) -\frac{\rho^2}{m}.
\end{equation*}
Differentiating w.r.t.\ $\rho$,
\begin{equation*}
\dfrac{\mathrm{d} \log \pi(\rho | r, \sigma, m)}{\mathrm{d} \rho} = - \frac{\rho}{\sigma^2} + \frac{r I_1 \left( \frac{r \rho}{\sigma^2} \right)}{\sigma^2 I_0 \left( \frac{r \rho}{\sigma^2} \right)} + \frac{1}{\rho} - \frac{2 \rho}{m}.
\end{equation*}
Setting this equal to zero and solving, with $b(\rho) = I_1(r\rho/\sigma^2)/I_0(r\rho/\sigma^2)$ as before, gives the positive solution for $\rho$ of
\begin{equation}
\rho = \frac{rm b(\rho) + \sqrt{\left(b(\rho) r m \right)^2 + 8 \sigma^4 m + 4 (\sigma m)^2}}{4 \sigma^2 + 2 m}.
\end{equation}
However, instead of exactly matching the MRF using the eigenvalues to model the IG-MRF in Fourier space as in \citep{rue2005gaussian}, we instead fit an isotropic parameter function to the mean of the modulus at each Fourier space location using least squares. The form of the parameter function used was
$$f(|k|; \textbf{a}) = \frac{a_0}{a_1 + a_2 |k| + a_3 |k|^2 + a_4|k|^3}$$
(chosen based on a trial-and-error approach to get a good least squares fit -- see Figure~\ref{fig:GMRFmodFit}).
\begin{figure}
\caption{Parameter model fit for the modulus as a function of distance from the origin of Fourier space}
\label{fig:GMRFmodFit}
\end{figure}
Simulations from the BIFS prior were well-matched to their MRF counterpart.
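The positive root above can be found by simple fixed-point iteration. A sketch (our own, using SciPy's exponentially scaled Bessel functions for numerical stability, since the scaling cancels in the ratio $I_1/I_0$):

```python
import numpy as np
from scipy.special import i0e, i1e

def map_modulus(r, sigma, m, n_iter=200):
    """Fixed-point iteration for
    rho = (r m b + sqrt((b r m)^2 + 8 sigma^4 m + 4 (sigma m)^2)) / (4 sigma^2 + 2 m)
    with b(rho) = I1(r rho / sigma^2) / I0(r rho / sigma^2),
    started from the observed modulus r."""
    rho = r
    for _ in range(n_iter):
        t = r * rho / sigma**2
        b = i1e(t) / i0e(t)            # I1/I0; exponential scaling cancels
        rho = (r * m * b + np.sqrt((b * r * m) ** 2
                                   + 8 * sigma**4 * m
                                   + 4 * (sigma * m) ** 2)) / (4 * sigma**2 + 2 * m)
    return rho
```

Since $b(\rho) \in (0,1)$ and varies slowly, the iteration converges rapidly in practice; at the converged value the derivative of the log posterior is zero.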
Figure~\ref{simMatch} shows example realizations from the BIFS and direct MRF simulations in Panels~(a) and~(b) respectively, and they clearly exhibit similar properties. Panel~(c) shows the respective estimated autocovariance functions (ACFs) as a function of distance from 1,000 simulations of each random field. The spread observed in the estimated ACFs at longer distances is due to the anisotropy of the processes induced by the rectangular lattice and, for the MRF, by the anisotropic representation of the neighborhood structure; hence the slightly narrower band for the BIFS simulations (see Section~\ref{isotropy}).
\begin{figure}
\caption{Example simulations of the first order intrinsic GMRF using a direct approach in R-INLA in Panel~(a) and the BIFS approximation with isotropic parameter function in Panel~(b). The estimated ACF as a function of distance for each of the simulated models is given in Panel~(c).}
\label{simMatch}
\label{gsim_a}\label{gsim_b}\label{acf_c}
\end{figure}
In order to compare posterior estimates between the two approaches, we examine a simulated dataset generated using a version of the Montreal Neurological Institute (MNI) brain \citep{cocosco5online} that had been segmented into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)/outside brain at 128 by 128 resolution. GM is the outer ribbon (the cortex) around the brain where neuronal activity occurs, WM is made up of the connective strands that enable different regions of the cortex to communicate with each other, and CSF is fluid in the brain. The signal was generated with intensity 20.0 in GM, 10.0 in WM and 0.0 elsewhere, and is displayed in Panel~(a) of Figure~\ref{MRImatchFig}. Gaussian noise (\emph{i.i.d.\,}) with SD of 2.5 was added to generate a degraded image, as displayed in Panel~(b).
The image was reconstructed as a MAP estimate from the degraded data using each of the IG-MRF prior (with conjugate gradients optimization) and the ``maximally isotropic'' BIFS approximation to it. Comparing the IG-MRF and corresponding BIFS reconstructions in Panels~(c) and~(d), respectively, indicates that the MRF and BIFS results are visually indistinguishable. This is confirmed when looking at the residual maps in Panels~(e) and~(f), which are also indistinguishable. Note that Panels~(a) to~(d) are normalized to be on the same dynamic scale (i.e. such that the same value corresponds to each gray-scale shade). Similarly, Panels~(e) and~(f) are matched to a dynamic scale over the range of the residuals in both images. Of note is that the residuals for both the MRF and BIFS reconstructions carry considerable residual structure. This is simply a reflection of the nature of the priors in only representing local characteristics, as discussed in Section~\ref{discord}.
\begin{figure}
\caption{BIFS match to IG-MRF prior for perfusion MRI simulation study. (a) simulated brain signal image; (b) image of the same data with added Gaussian noise; (c) IG-MRF MAP reconstruction from the noisy data (using conjugate gradients optimization); (d) BIFS MAP reconstruction based on the maximally isotropic approximation to the IG-MRF; (e) IG-MRF MAP residuals (relative to the true signal of Panel~(a)); (f) BIFS MAP residuals. Panels~(a) to~(d) are normalized to be on the same dynamic scale. Similarly, Panels~(e) and~(f) are matched to a dynamic scale over the range of the residuals in both images.}
\label{MRImatchFig}
\label{gmrf_a}\label{gmrf_b}\label{gmrf_c}\label{gmrf_d}\label{gmrf_e}\label{gmrf_f}
\end{figure}
The level of similarity between the two reconstructions is further emphasized in Table~\ref{cfMRF}.
The table gives a comparison of mean signal estimates in each tissue type along with overall predictive accuracy based on root-mean-square error (RMSE) over all pixels in the image. The results are very similar comparing IG-MRF and BIFS, indicating that BIFS is doing a good job of approximating the results of the IG-MRF. Both models shrink estimates of mean tissue levels toward neighboring tissue types due to the overall smoothing effect of the Gaussian pairwise difference prior. This bias effect is largest in GM, where the tissue region is narrow and therefore the conditional distributions within GM regions often include neighbors from other tissue types, which for GM will bias the pixel estimates downward. In fact, this smoothing bias is strong enough to increase the RMSE for the Bayesian reconstructions compared with just using the noisy data. This emphasizes the dangers of blindly using Bayesian image analysis when the goal is estimation. The results are very slightly better for BIFS than for the IG-MRF in terms of having slightly smaller differences from the true values, though one needs to be careful not to over-interpret this very small difference. The different results are simply due to using slightly different models, and one could easily find datasets and/or models that fit better with respect to RMSE in either the MRF or BIFS framework. However, it should be noted again that, as mentioned in Section~\ref{compspeed}, it is much easier to try different models in the BIFS framework, e.g. via simply changing the form of the parameter function, than it is to try different MRF priors.
\begin{table}[!ht]
\begin{center}
\caption{Comparison between estimates of mean signal in each tissue type and overall RMSE of reconstructions.}
\label{cfMRF}
\begin{tabular}{l|r|r|r|r}
 & \textbf{True} & \textbf{True + noise} & \textbf{IG-MRF} & \textbf{BIFS}\\
\hline
GM & 20.0 & 19.92 & 13.48 & 13.53\\
WM & 10.0 & 9.99 & 11.10 & 11.08\\
CSF/out & 0.0 & -0.01 & 0.54 & 0.54\\
\hline
RMSE & 0.0 & 2.47 & 2.74 & 2.71
\end{tabular}
\end{center}
\end{table}
\section{The data-driven BIFS Prior (DD-BIFS)}
The standard process of generating the BIFS prior distribution described in Section~\ref{methods} is based on choosing a pair of distributions to be applied as priors at each location in Fourier space (one for the modulus and the other for the argument of the complex-valued signal) and a set of parameter functions to define how the parameters of the distributions vary over Fourier space. In contrast, for the data-driven approach the parameters at each Fourier space location are estimated empirically from a database of transformed images. The database of images would be of high quality and have the characteristics that are required to represent the prior specification. To estimate the parameters, all of the images in the database are first Fourier transformed, the data at each location in Fourier space are extracted (i.e. for all images), and the distribution parameters for that Fourier space location are estimated from those data. These parameter estimates are then used to define the parameters for the prior at each Fourier space location, i.e. they form the basis of the parameter functions. (Note that when the database is not large it may be more beneficial to fit parameter functions to the empirical data rather than use estimates generated separately at each Fourier space location.) The implementation of DD-BIFS modeling follows the steps of Algorithm~\ref{DDBIFS}.
\begin{algorithm}
\caption{DD-BIFS modeling}
\label{DDBIFS}
\begin{algorithmic}[1]
\State Fast Fourier transform (FFT) all images in the database that are to be used to build the DD-BIFS prior
\State Choose the distributional form of the prior at each location in Fourier space
\State Estimate the parameters of the prior at each location in Fourier space using the data from that Fourier space location across the subjects in the database
\State Scale the sets of parameters over Fourier space to adjust the influence of the prior -- this is the DD-BIFS prior
\State Define the likelihood in Fourier space
\State FFT the dataset to be reconstructed from image space into Fourier space
\State Combine the DD-BIFS prior and likelihood for the image at each Fourier space location via Bayes' Theorem to generate the DD-BIFS posterior
\State Generate the Fourier space MAP estimate by maximizing the posterior at each Fourier space location
\State Inverse FFT the Fourier space MAP estimate back to image space and display
\end{algorithmic}
\end{algorithm}
\subsection{Example 5 - Data-driven prior simulation study}
\label{simstudy}
To illustrate the DD-BIFS approach we simulated 10,000 256$\times$256 images containing ellipsoid objects. The number of objects was Poisson distributed; the objects were simulated as randomly positioned 2D Gaussian probability density functions (resembling bumps) with random intensity and standard deviation on each axis, and with the correlation between the two axes distributed uniformly between~$-1$ and~0, so that the process was not isotropic. We generated an additional realization (separate from the 10,000 used to build the prior), displayed in Figure~\ref{gal_a}, and contaminated it with added Gaussian noise (Figure~\ref{gal_b}). We then performed DD-BIFS with MAP estimation for this single new realization from the process.
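Steps 1--3 of Algorithm~\ref{DDBIFS} amount to a simple pass over the database. This sketch (function names our own) assumes a truncated Gaussian prior for the modulus, as in the example below, so that the per-location parameters are just the empirical mean and SD of the modulus across the database:

```python
import numpy as np

def ddbifs_prior_parameters(images):
    """Estimate per-Fourier-location prior parameters (mu_k, tau_k) from a
    database of images: FFT each image, then take the empirical mean and SD
    of the modulus at every Fourier space location across the database."""
    moduli = np.abs(np.fft.fft2(np.asarray(images), axes=(-2, -1)))
    mu = moduli.mean(axis=0)           # prior mean of the modulus at each k
    tau = moduli.std(axis=0, ddof=1)   # prior SD of the modulus at each k
    return mu, tau
```

Step 4 then rescales these estimates (via the prior weight $m$ in the example below) to control the influence of the database prior on the posterior.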
For this reconstruction we used the deliberately misspecified prior and likelihood below, to allow a simple illustration using conjugate prior forms. At each Fourier space location $k$ we adopt a truncated Gaussian prior for the modulus: $\operatorname{Mod}(\mathcal{F} x_k) \sim TN(\mu_k, \tau_k^2, 0, \infty)$, with $\mu_k \ge 0$; a Uniform prior on the circle for the argument: $\operatorname{Arg}(\mathcal{F} x_k) \sim U(0,2 \pi)$; a Gaussian noise model for the modulus: $\operatorname{Mod}(\epsilon_k) \sim N(0,\sigma^2)$; and a Uniform noise model for the argument: $\operatorname{Arg}(\epsilon_k) \sim U(0,2 \pi)$, where $\epsilon_k$ is the complex noise, treated as independent across Fourier space locations $k$. (Note this model does not correspond to the simulated Gaussian noise in image space, which would require the Rician likelihood used previously.) The values of $\mu_k$ and $\tau_k$ at each Fourier space location are estimated using the approach outlined in Algorithm~\ref{DDBIFS}. The global posterior mode is then obtained by computing, at each Fourier space location, the posterior mean based on conjugate Bayes for the corresponding non-truncated Gaussian prior and likelihood; this equals the posterior mode and, being non-negative, is hence also the posterior mode under the truncated prior \citep{gelman2014bayesian}, with
\begin{equation*}
x_{k,\mathrm{\tiny MAP}} = \frac{ \left( \frac{m \mu_k}{\tau_k^2} + \frac{y_k}{\sigma^2} \right) } { \left( \frac{m}{\tau_k^2} + \frac{1}{\sigma^2} \right) }.
\end{equation*}
Note that the value of $m$ in the prior is specified by the user and can be considered to represent how many observations we want the weight of the prior to count for in the posterior. Panel~(c) of Figure~\ref{bumps} shows an IG-MRF reconstruction of the noisy data. It clearly denoises the image, but at the expense of smoothing the objects in the image.
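The conjugate update above is a single elementwise operation over Fourier space; a sketch (our own names, with $y$ the observed Fourier-space modulus):

```python
import numpy as np

def ddbifs_map_modulus(y, mu, tau, sigma, m):
    """Elementwise conjugate-Bayes MAP for the modulus at every Fourier
    location: precision-weighted average of the prior mean mu (weight
    m / tau^2) and the observed modulus y (weight 1 / sigma^2)."""
    prior_w = m / tau**2
    like_w = 1.0 / sigma**2
    return (prior_w * mu + like_w * y) / (prior_w + like_w)
```

As $m$ grows, the estimate moves from the observed modulus toward the database prior mean, matching the behavior seen across Panels~(d) to~(f) of Figure~\ref{bumps}.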
Panels~(d) through~(f) show DD-BIFS reconstructions where the database prior is given weight equivalent to 0.1, 1.0 and 10.0 observations, respectively. Note that in general for these priors the elongated bumps have their length better preserved than in the Gaussian prior case. As the number of observations that the DD-BIFS prior represents increases, the features of the true signal begin to diminish and the noise level of the reconstruction is reduced. This makes sense because in the limit we would effectively be obtaining the MAP estimate based on the prior alone, which is an average over 10,000 simulations. Note that this high-level capturing of the bump features occurs despite the independence specification in Fourier space; the BIFS formulation is able to capture the anisotropic characteristics of these features through the empirical parameter function. Note also that the DD-BIFS prior could itself be modified by changing the form of the parameter function. For example, if one wanted to diminish the anisotropic features of the background signal, the prior could be taken as a function of both the database prior and a denoising parameter function.
\begin{figure}
\caption{Simulation study and reconstruction of anisotropic bump patterns. Panel~(a)~new process realization; (b)~realization with added noise; (c)~first order IG-MRF reconstruction; (d)~DD-BIFS prior equivalent to 0.1~observations; (e)~DD-BIFS prior 1~observation; (f)~DD-BIFS prior 10~observations. All panels except for Panel~(b) are normalized to be on the same dynamic scale.}
\label{bumps}
\label{gal_a}\label{gal_b}\label{gal_c}\label{gal_d}\label{gal_e}\label{gal_f}
\end{figure}
\section{Discussion and Conclusion}
The BIFS modeling framework provides a new family of Bayesian image analysis models with the capacity to a)~enhance images beyond conventional standard Bayesian image analysis methods; b)~allow straightforward specification and implementation across a wide range of imaging research applications; and c)~enable fast and high-throughput processing. These benefits, along with the inherent properties of resolution invariance and isotropy, make BIFS a powerful tool for the image analysis practitioner. A particular strength of the BIFS approach is the ease with which one is able to try different prior models. Experimenting with different priors might be considered problematic in other fields, but in Bayesian image analysis we typically know in advance that any model we can specify will be wrong and at best can approximately capture some characteristics of the image; if we were to simulate from Bayesian image analysis priors, we would expect to wait an extremely long time before seeing a realization representative of an object of interest (e.g. a brain or a car). It is therefore often of benefit to try different models until finding a prior that has the desired impact on the posterior. This approach would have additional legitimacy if one reserves images purely for the purpose of formulating a preferred prior before analyzing the images of interest. The BIFS framework has much potential for future work to expand the foundations presented here: BIFS could be applied to spatio-temporal modeling, multi-image analysis, multi-modal medical imaging, color images, 3D images, other spatial basis spaces such as wavelets, multifractal modeling, non-continuous valued MRFs with hidden latent models, etc. We hope that other statisticians, engineers, and computer scientists with an interest in image analysis will begin to explore these potential areas.
\section*{Code} All code used in this manuscript is available at \url{https://github.com/ucsf-deb/BIFSpaper1}. \end{document}
\begin{document} \title[Region-Stabilizing Scores]{Gaussian Approximation for Sums of Region-Stabilizing Scores} \author[C. Bhattacharjee]{Chinmoy Bhattacharjee} \address{Department of Mathematics, University of Luxembourg, Luxembourg} \email{[email protected]} \author[I. Molchanov]{Ilya Molchanov} \address{Institut f\"ur Mathematische Statistik und Versicherungslehre, University of Bern, Switzerland} \email{[email protected]} \date{\today} \thanks{IM was supported by the Swiss National Science Foundation Grant No.\ 200021\_175584} \subjclass[2010]{Primary: 60F05, Secondary: 60D05, 60G55} \keywords{Stein's method, stabilization, minimal points, Poisson process, central limit theorem.} \begin{abstract} We consider the Gaussian approximation for functionals of a Poisson process that are expressible as sums of region-stabilizing (determined by the points of the process within some specified regions) score functions, and provide a bound on the rate of convergence in the Wasserstein and the Kolmogorov distances. While such results have previously been shown in Lachi\`eze-Rey, Schulte and Yukich (2019), we extend their applicability by relaxing some conditions assumed there and provide further insight into the results. This is achieved by working with stabilization regions that may differ from the balls of random radii commonly used in the literature on stabilizing functionals. We also allow for non-diffuse intensity measures and unbounded scores, which are useful in some applications. As our main application, we consider the Gaussian approximation of the number of minimal points in a homogeneous Poisson process in $[0,1]^d$ with $d \ge 2$, and provide a presumably optimal rate of convergence. \end{abstract} \maketitle \section{Introduction} Let $(\mathbb{X},\mathcal{F})$ be a Borel space and let $\mathbb{Q}$ be a $\sigma$-finite measure on $(\mathbb{X}, \mathcal{F})$.
For $s \ge 1$, let $\mathcal{P}_s$ denote a Poisson process with intensity measure $s\mathbb{Q}$. Our main object of study is the sum of score functions $(\xi_s)_{s \ge 1}$ given by \begin{equation} \label{eq:hs} H_s=H_s(\mathcal{P}_s):= \sum_{x \in \mathcal{P}_s} \xi_s(x,\mathcal{P}_s), \quad s \ge 1, \end{equation} when the sum converges. While $H_s$ is a functional of the whole point process, this representation implicitly assumes that the functional can be decomposed as a sum of local contributions at each point $x \in \mathcal{P}_s$. Indeed, in the vast literature on limit theorems for sums of score functions over points of a Poisson process (see, e.g., \cite{pen:yuk03,pen:yuk05,sch10}), it is usually assumed that the score function at a point $x$ depends on the whole point process only through the set of its points within some small (random) distance of $x$, prohibiting any long-range interactions. Conditions like exponential decay of the tail distribution of this distance, the so-called `radius of stabilization', and bounds on certain moments of the score functions are crucial for deriving a quantitative central limit theorem. The idea of using \textit{stabilization} for studying limit theorems started with the works \cite{PY01,pen:yuk03}. Subsequently, further important works advanced such quantitative results for the Gaussian approximation of stabilizing functionals, see, e.g., \cite{BX06,pen:yuk05,Yu15}. However, all these results provided bounds carrying an extraneous logarithmic factor multiplied by the inverse of the square root of the variance. The results in this area culminated in \cite{LPS16}, where, using the Malliavin--Stein approach, this logarithmic factor was removed, and further in \cite{LSY19}, which provides presumably optimal rates and ready-to-use conditions illustrated with numerous applications.
The comparative simplicity of the bounds provided in \cite{LSY19} comes at the cost of assuming a few conditions on the underlying space and the score functions. Even though these conditions are satisfied in many important examples, as demonstrated therein, they are not applicable in some cases, especially in examples exhibiting long-range interactions. A notable example is the number of minimal (or Pareto optimal) points in $\mathcal{P}_s$ restricted to the unit cube $[0,1]^d$, $d \ge 2$. This example violates all existing stabilization conditions usually assumed in the context of quantitative limit theorems. In particular, the appearance of stabilization regions that can be arbitrarily thin and long makes the radius of stabilization too large to obtain a meaningful bound using the results of \cite{LSY19}. As a result, \cite{LSY19} could only handle (in the problem of counting maximal points, whose count has the same distribution as the number of minimal points) a modified setting, replacing the cube with a domain of the form $\{x \in [0,\infty)^d : F(x) \le 1\}$, where $F : [0,\infty)^d \to [0,\infty)$ is strictly increasing in each coordinate with $F(0)<1$ and is continuously differentiable with partial derivatives bounded away from zero and infinity. Even though one can choose $F$ to obtain a domain arbitrarily close to the cube, the behavior of the number of maximal points is very sensitive to small changes in the shape of the domain: while the variance of $H_s$ is of the order $s^{(d-1)/d}$ in the setting of \cite{LSY19}, its order becomes $\log^{d-1} s$ in the case of the cube, see \cite{bai:dev:hwan:05}. The main aim of this paper is to develop a more versatile notion of stabilization that enables us to handle various examples with long-range interactions, most notably the example of minimal points in the cube.
We achieve this by generalizing the concept of the stabilization radius to allow for regions of arbitrary shape, that is, by replacing balls of random radii with general sets, called stabilization regions. One cannot achieve this simply by amending the metric on the carrier space, since the shape of these stabilization regions may be random and depend heavily on the reference point, and also since a stabilization region may be empty. The only additional condition we assume is that the stabilization region is monotonically decreasing in the point configuration, a natural condition satisfied by all common examples. In addition, we extend the results to non-diffuse intensity measures and to score functions with non-uniform bounds on their moments. The extension to non-diffuse intensity measures results from dropping a regularity assumption on $\mathbb{Q}$ imposed in \cite{LSY19}. This makes it possible to handle examples with multiple points at deterministic locations, like Poisson processes on lattices. The extension to scores with unbounded moments is crucial in examples where the score functions are not simple indicators but rather involve unbounded weight functions, or when the intensity measure is infinite. This extension is a byproduct of our generalization of \cite[Theorem~6.1]{LPS16}, which involves non-uniform bounds on the $(4+p)$-th moment of the first-order difference operator for some $p >0$, see Theorem~\ref{thm:Main}. We present two examples, concerning isolated points in the two-dimensional integer lattice and in a random geometric graph in $\mathbb{R}^d$, $d \ge 2$, to demonstrate further applications of our general bounds. Apart from being more versatile than the approach of \cite{LSY19}, to the best of our knowledge, working with general monotonically decreasing stabilization sets is new in the relevant literature, and thus our work opens a new direction of investigation.
It should be noted that the very comprehensive setting of \cite{LSY19} also covers Poisson processes with marks, as well as the setting of binomial processes. Our results can be extended to these settings by adapting the scheme elaborated in \cite{LSY19} to our approach relying on stabilization regions. Indeed, Theorem~4.2 in \cite{LSY19}, which provides a bound on the Gaussian approximation for functionals of a binomial process, can be modified to the setting with a non-uniformly bounded $(4+p)$-th moment of the difference operator in the same way as we modify Theorem~6.1 in \cite{LPS16} in our Theorem~\ref{thm:Main}. Once this key step is achieved, one can follow our line of argument to obtain a result paralleling our Theorem~\ref{thm:KolBd} for binomial processes. Let us now explicitly describe our setup. For a Borel space $(\mathbb{X},\mathcal{F})$, denote by $\mathbb{N}b$ the family of $\sigma$-finite counting measures $\mu$ on $\mathbb{X}$, equipped with the smallest $\sigma$-algebra $\mathscr{N}$ such that the maps $\mu \mapsto \mu(A)$ are measurable for all $A \in \mathcal{F}$. We write $x\in\mu$ if $\mu(\{x\})\geq1$. Denote by $0$ the zero counting measure. Further, $\mu_A$ denotes the restriction of $\mu$ to the set $A\in\mathcal{F}$, and $\delta_x$ is the Dirac measure at $x\in\mathbb{X}$. For $\mu_1,\mu_2\in\mathbb{N}b$, we write $\mu_1\leq\mu_2$ if the difference $\mu_2-\mu_1$ is non-negative. For each $s\geq 1$, a \emph{score function} $\xi_s$ associates to each pair $(x,\mu)$ with $x\in\mathbb{X}$ and $\mu\in\mathbb{N}b$ a real number $\xi_s(x,\mu)$. Throughout, we assume that the function $\xi_s: \mathbb{X} \times \mathbb{N}b \to \mathbb{R}$ is measurable with respect to the product $\sigma$-algebra $\mathcal{F}\otimes\mathscr{N}$ for all $s\geq1$.
With $H_s$ as in \eqref{eq:hs}, our aim is to bound the distance between the distributions of the normalized sum of scores $(H_s-\mathbf{E} H_s)/\sqrt{{\rm Var}\,H_s}$ and a standard normal random variable $N$. We consider two very commonly used distances, namely the Wasserstein and the Kolmogorov distances. The Wasserstein distance between (the distributions of) real-valued random variables $X$ and $Y$ is given by $$ d_W(X,Y):= \sup_{h \in \operatorname{Lip}_1} |\mathbf{E}\, h(X) - \mathbf{E}\, h(Y)|, $$ where $\operatorname{Lip}_1$ denotes the class of all Lipschitz functions $h: \mathbb{R} \to \mathbb{R}$ with Lipschitz constant at most one. The Kolmogorov distance between $X$ and $Y$ is defined by taking the test functions to be indicators of half-lines, and is given by $$ d_K(X,Y):= \sup_{t \in \mathbb{R}} |\mathbf{P}\{X \le t\} - \mathbf{P}\{Y \le t\}|. $$ Following \cite{LSY19}, a score function stabilizes if $\xi_s(x,\mu)$ remains unaffected when the configuration $\mu$ is altered outside a ball of radius $r_x=r_x(\mu)$ (the radius of stabilization) centered at $x$. For this, it is assumed that $\mathbb{X}$ is a semimetric space and $\mathbb{Q}$ satisfies a technical condition on the $\mathbb{Q}$-content of annuli in the space $\mathbb{X}$, which in particular implies that $\mathbb{Q}$ is diffuse. In \cite{LSY19}, under an exponential decay condition on the tail distribution of the stabilization radius $r_x$ as $s \to \infty$, and assuming that the $(4+p)$-th moment of the score function at $x$ is uniformly bounded by a constant for all $s\geq1$ and $x\in\mathbb{X}$ for some $p \in (0,1]$, a universal bound on the Wasserstein and Kolmogorov distances between the normalized sum of scores and $N$ was derived.
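In one dimension both distances are straightforward to estimate numerically. The following sketch (our illustration, not code from the paper; all function names are ours) computes the Kolmogorov distance between an empirical sample and the standard normal CDF, and the $W_1$ distance between two equal-size samples via the order-statistics coupling.

```python
import math
import random

def phi(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def kolmogorov_distance(sample):
    """sup_t |F_n(t) - Phi(t)| for the empirical CDF F_n of `sample`.
    The supremum is attained at the jump points of F_n, so it suffices
    to check both sides of each jump."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        p = phi(x)
        d = max(d, abs((i + 1) / n - p), abs(i / n - p))
    return d

def wasserstein_distance(sample_x, sample_y):
    """W_1 between two empirical measures of equal size: in one
    dimension it equals the mean absolute gap of the sorted samples."""
    xs, ys = sorted(sample_x), sorted(sample_y)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

rng = random.Random(0)
normal = [rng.gauss(0.0, 1.0) for _ in range(20000)]
shifted = [z + 0.5 for z in normal]

# A genuinely normal sample is close to N in Kolmogorov distance,
# while a mean-shifted sample is not; W_1 recovers the shift size.
print(kolmogorov_distance(normal))            # small
print(kolmogorov_distance(shifted))           # bounded away from zero
print(wasserstein_distance(normal, shifted))  # close to 0.5
```

The order-statistics coupling used in `wasserstein_distance` is the optimal one for real-valued samples, which is why the shift of $0.5$ is recovered essentially exactly.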
The setting of stabilization regions as balls centered at $x\in\mathcal{P}_s$ with radius $r_x$ can be thought of as a special case of a more general concept of stabilization regions, which are sets depending on $x$ and the Poisson process. Indeed, in some examples it is not optimal to assume that the stabilization region is a ball. The region can be made substantially smaller if it is allowed to be of a general shape. Adjusting the theory to deal with such stabilization regions is the main contribution of our work. Our general setting of non-spherical stabilization regions also eliminates the need for the extra technical assumptions on the intensity measure imposed in \cite{LSY19}. As an illustration, we show how to handle the example of minimal points in the unit cube, which does not fit into the framework of \cite{LSY19}. We also allow for multiple points and for a non-uniform bound on the $(4+p)$-th moment of the score functions, which is particularly important in examples involving infinite intensity measures, like stationary Poisson processes. Apart from the examples presented in the current paper, further applications of our method have been developed in \cite{bhat-mol21}, where a quantitative central limit theorem is obtained for functionals of growth processes that give rise to generalized Johnson--Mehl tessellations, and in \cite{Bha21}, where such a result is obtained in the context of minimal directed spanning trees in dimensions three and higher. \section{Notation and main results} \label{sec:notat-main-results} Throughout the paper, for $s \ge 1$, we consider an $\mathcal{F}\otimes\mathscr{N}$-measurable score function $\xi_s(x,\mu)$. Assume that if $\xi_s(x, \mu_1)=\xi_s(x, \mu_2)$ for some $\mu_1,\mu_2 \in \mathbb{N}b$ with $0\neq \mu_1\leq \mu_2$, then \begin{equation} \label{eq:ximon} \xi_s(x, \mu_1)=\xi_s(x, \mu') \quad \text{for all} \; \mu'\in\mathbb{N}b \; \text{ with } \; \mu_1\leq \mu'\leq \mu_2.
\end{equation} This is a natural condition to expect of any reasonably well-behaved score function. We will need a few more assumptions on the score functions. The first assumption is a generalization of the concept of the stabilization radius. \begin{enumerate} \item[(A1)] \textit{Stabilization region:} For all $s \ge 1$, there exists a map $R_s$ from $\{(x,\mu)\in\mathbb{X}\times \mathbb{N}b:x\in\mu\}$ to $\mathcal{F}$ such that \begin{enumerate}[(1)] \item[(A1.1)] the set \begin{displaymath} \{(x,y_1, y_2,\mu): \{y_1,y_2\}\subseteq R_s(x, \mu+\delta_x)\} \end{displaymath} is measurable with respect to the product $\sigma$-algebra on $\mathbb{X}^3\times\mathbb{N}b$, \item[(A1.2)] the map $R_s$ is monotonically decreasing in the second argument, i.e.\ $$ R_s(x,\mu_1) \supseteq R_s(x,\mu_2), \quad \mu_1 \leq \mu_2,\; x\in\mu_1, $$ \item[(A1.3)] for all $\mu\in\mathbb{N}b$ and $x\in\mu$, $\mu_{R_s(x,\mu)} \neq 0$ implies $(\mu+\delta_y)_{R_s(x,\mu +\delta_y)} \neq 0$ for all $y \notin R_s(x,\mu)$, \item[(A1.4)] for all $\mu\in\mathbb{N}b$ and $x\in\mu$, \begin{displaymath} \xi_{s}\big(x,\mu\big) =\xi_{s}\big(x,\mu_{R_{s}(x,\mu)}\big). \end{displaymath} \end{enumerate} \end{enumerate} By taking the intersection of the set from (A1.1) with the set $\{(x,y,y,\mu):\mu\in\mathbb{N}b\} \subseteq \mathbb{X}^3 \times \mathbb{N}b$ (which is also measurable) and then applying the bijective projection on $\mathbb{N}b$, we see that \begin{equation} \label{eq:1} \{\mu \in \mathbb{N}b : y\in R_s(x,\mu+\delta_x)\}\in\mathscr{N} \end{equation} for all $(x,y)\in\mathbb{X}^2$. Furthermore, Fubini's theorem implies that \begin{equation} \label{eq:2} \mathbf{P}\{y\in R_s(x, \mathcal{P}_s+\delta_x)\}\, \text{ and } \, \mathbf{P}\{\{y_1, y_2\}\subseteq R_s(x, \mathcal{P}_s+\delta_x)\} \end{equation} are measurable functions of $(x,y) \in \mathbb{X}^2$ and $(x,y_1,y_2) \in \mathbb{X}^3$, respectively.
Although assumption (A1.1) is sufficient for our result, it is in fact enough to assume \eqref{eq:1} and \eqref{eq:2}. Thus, when it is simpler to do so, we will verify conditions \eqref{eq:1} and \eqref{eq:2} instead of (A1.1). Note that (A1) holds trivially if one takes $R_s$ to be identically equal to the whole space $\mathbb{X}$. If (A1) holds with a non-trivial $R_s$, then the score function is called \emph{region-stabilizing}. Also note that a condition like \cite[Eq.~(2.3)]{LSY19}, requiring stabilization with $7$ additional points, holds trivially in our setup due to the monotonicity assumption (A1.2) and \eqref{eq:ximon}. We also assume the standard $(4+p)$-th moment condition, stated here in terms of the norm for notational simplicity. In the following, $\|\cdot\|_{4+p}$ denotes the $L^{4+p}$-norm. \begin{enumerate} \item[(A2)] \textit{$L^{4+p}$-norm:} There exists a $p \in (0,1]$ such that, for all $\mu\in\mathbb{N}b$ with $\mu(\mathbb{X}) \le 7$, \begin{displaymath} \Big\|\xi_{s}\big(x, \mathcal{P}_{s}+\delta_x+\mu\big)\Big\|_{4+p} \leq M_{s,p}(x), \quad s\geq1,\; x \in \mathbb{X}, \end{displaymath} where $M_{s,p} : \mathbb{X} \to \mathbb{R}$, $s\geq1$, are measurable functions. \end{enumerate} If the score function is an indicator random variable, condition (A2) is trivially satisfied with $M_{s,p}\equiv 1$ for any $p \in (0,1]$ and $s\geq1$. For notational convenience, in the sequel we write $M_s$ instead of $M_{s,p}$, and generally drop $p$ from all subscripts. Let $r_{s}: \mathbb{X} \times \mathbb{X} \to [0,\infty]$ be a measurable function such that \begin{equation} \label{eq:Rs} \mathbf{P}\{y \in R_s(x, \mathcal{P}_s +\delta_x)\} \le e^{-r_{s}(x,y)}, \quad x, y \in \mathbb{X}.
\end{equation} For the following it is essential that $r_s$ does not vanish, and then \eqref{eq:Rs} becomes an analog of the usual exponential stabilization condition from \cite{LSY19}. Note that we allow $r_s$ to be infinite, and that the probability in \eqref{eq:Rs} is well defined due to assumption (A1.1). For $x_1,x_2 \in \mathbb{X}$, denote \begin{equation} \label{eq:g2s} q_{s}(x_1,x_2):=s \int_\mathbb{X} \mathbf{P}\Big\{\{x_1,x_2\} \subseteq R_s\big(z, \mathcal{P}_s +\delta_z\big)\Big\} \;\mathbb{Q}({\mathrm d} z), \end{equation} noticing that the probability in the integral is well defined and $q_s$ is measurable due to Fubini's theorem and \eqref{eq:2}. For $p \in (0,1]$ as in (A2) and $\zeta:=p/(40+10p)$, let \begin{gather} \label{eq:g} g_{s}(y) :=s \int_{\mathbb{X}} e^{-\zeta r_{s}(x, y)} \;\mathbb{Q}({\mathrm d} x), \quad h_s(y):=s \int_{\mathbb{X}} M_{s}(x)^{4+p/2}e^{-\zeta r_{s}(x, y)}\;\mathbb{Q}({\mathrm d} x),\\ \label{eq:g5} G_s(y) := \widetilde{M}_{s}(y) + \tilde h_s(y) \big(1+g_{s}(y)^4\big), \quad y\in\mathbb{X}, \end{gather} where for $y \in \mathbb{X}$, $$ \widetilde M_s(y):=\max\{M_s(y)^2,M_s(y)^4\} \quad \text{and} \quad \tilde h_s(y):=\max\{h_s(y)^{2/(4+p/2)}, h_s(y)^{4/(4+p/2)}\}.$$ For $\alpha>0$, let \begin{equation} \label{eq:fa} f_\alpha(y):=f_\alpha^{(1)}(y)+f_\alpha^{(2)}(y)+f_\alpha^{(3)}(y), \quad y\in\mathbb{X}, \end{equation} where \begin{align} \label{eq:fal} f_\alpha^{(1)}(y)&:=s \int_\mathbb{X} G_s(x) e^{- \alpha r_{s}(x,y)} \;\mathbb{Q}({\mathrm d} x), \notag \\ f_\alpha^{(2)} (y)&:=s \int_\mathbb{X} G_s(x) e^{- \alpha r_{s}(y,x)} \;\mathbb{Q}({\mathrm d} x), \nonumber\\ f_\alpha^{(3)}(y)&:=s \int_{\mathbb{X}} G_s(x) q_{s}(x,y)^\alpha \;\mathbb{Q}({\mathrm d} x). \end{align} Finally, define the function \begin{equation} \label{eq:p} \kappa_s(x):= \mathbf{P}\{\xi_{s}(x, \mathcal{P}_{s}+\delta_x) \neq 0\},\quad x\in\mathbb{X}.
\end{equation} Our main result is the following abstract theorem, which generalizes Theorem~2.1(a) in \cite{LSY19}. For an integrable function $f : \mathbb{X} \to \mathbb{R}$, denote $\mathbb{Q} f:=\int_\mathbb{X} f(x)\, \mathbb{Q}({\mathrm d} x)$. \begin{theorem}\label{thm:KolBd} Assume that $(\xi_s)_{s \ge 1}$ satisfy conditions (A1) and (A2), and let $H_s$ be as in \eqref{eq:hs}. Then, for $p$ as in (A2) and $\beta:=p /(32+4 p)$, \begin{align*} d_{W}\left(\frac{H_s-\mathbf{E} H_s}{\sqrt{\Var H_s}}, N\right) &\leq C \Bigg[\frac{\sqrt{s \mathbb{Q} f_\beta^2}}{\Var H_s} +\frac{ s\mathbb{Q} ((\kappa_s+g_{s})^{2\beta}G_s)}{(\Var H_s)^{3/2}}\Bigg], \end{align*} and \begin{align*} d_{K}\left(\frac{H_s-\mathbf{E} H_s}{\sqrt{\Var H_s}}, N\right) &\leq C \Bigg[\frac{\sqrt{s \mathbb{Q} f_\beta^2} + \sqrt{s \mathbb{Q} f_{2\beta}}}{\Var H_s} +\frac{\sqrt{ s\mathbb{Q} ((\kappa_s+g_{s})^{2\beta}G_s)}}{\Var H_s} +\frac{ s\mathbb{Q} ((\kappa_s+g_{s})^{2\beta} G_s)}{(\Var H_s)^{3/2}}\\ &\qquad\qquad +\frac{( s\mathbb{Q} ((\kappa_s+g_{s})^{2\beta} G_s))^{5/4} + ( s\mathbb{Q} ((\kappa_s+g_{s})^{2\beta} G_s))^{3/2}}{(\Var H_s)^{2}}\Bigg] \end{align*} for all $s\geq1$, where $N$ is a standard normal random variable and $C \in (0,\infty)$ is a constant depending only on $p$. \end{theorem} In order to obtain a useful bound, it is necessary that $\mathbb{Q} (\widetilde M_s\kappa_s)$ be finite. This is certainly the case if $\mathbb{Q}$ is finite and $\widetilde M_s$ is bounded. As an application of our abstract result, we consider an example concerning \emph{minimal points} in a Poisson process. Let $\mathbb{Q}$ be the Lebesgue measure on $\mathbb{X}:=[0,1]^d$, $d \ge 2$, and let $\mathcal{P}_s$ be a Poisson process with intensity $s \mathbb{Q}$ for $s\geq1$. A point $x \in \mathbb{R}^d$ is said to dominate a point $y \in \mathbb{R}^d$ if $x-y\in\mathbb{R}_+^d\setminus\{0\}$.
We write $x\succ y$, or equivalently $y \prec x$, if $x$ dominates $y$. Points in $\mathcal{P}_s$ that do not dominate any other point of $\mathcal{P}_s$ are called minimal (or Pareto optimal) points of $\mathcal{P}_s$. The interest in studying dominance and the numbers of minima and maxima stems from numerous applications related to multivariate records, e.g., in the analysis of linear programming and in maxima-finding algorithms; see the references in \cite{bai:dev:hwan:05} and \cite{fil:naim20}. In the following result, we derive non-asymptotic bounds on the Wasserstein and Kolmogorov distances between the normalized number of minimal points in $\mathcal{P}_s$ and a standard Gaussian random variable. \begin{theorem}\label{thm:Pareto} Let $\mathcal{P}_s$ be a Poisson process on $[0,1]^d$ with intensity measure $s\mathbb{Q}$, $s \ge 1$, where $\mathbb{Q}$ is the Lebesgue measure, and let \begin{equation} \label{eq:ParetoPoints} F_s:=\sum_{x \in \mathcal{P}_s} \mathds{1}_{x \text{ is a minimal point in $\mathcal{P}_s$}}. \end{equation} If $d\geq 2$, then \begin{displaymath} \max \left\{d_W\left(\frac{F_s - \mathbf{E} F_s}{\sqrt{\Var F_s}},N\right), d_K\left(\frac{F_s - \mathbf{E} F_s}{\sqrt{\Var F_s}},N\right)\right\} \le \frac{C}{\log^{(d-1)/2} s}, \quad s\ge1, \end{displaymath} for a constant $C>0$ depending only on the dimension $d$. In addition, the bound on the Kolmogorov distance is of optimal order, i.e., there exists a constant $0<C'\le C$ depending only on $d$ such that $d_K\left(\frac{F_s - \mathbf{E} F_s}{\sqrt{\Var F_s}},N\right) \ge C'/\log^{(d-1)/2} s$.
\end{theorem} In the setting of a binomial point process with $n \in \mathbb{N}$ i.i.d.\ points in the unit cube, \cite{bai:dev:hwan:05} showed that the Wasserstein distance between the normalized number of minimal points and the standard normal random variable is of the order $(\log n)^{-(d-1)/2}(\log\log n)^{2d}$, using a log-transformation trick first suggested in \cite{Ba20}, and, as a consequence, derived the order $(\log n)^{-(d-1)/4}(\log\log n)^{d}$ for the Kolmogorov distance. It is useful to note here that the variance of the number of minimal points in the binomial case is of the order $\log^{d-1} n$, see, e.g., \cite{bai:dev:hwan:05}, where the corresponding computations in the Poisson case are also available. Hence, the Wasserstein distance is of the order of the inverse of the square root of the variance multiplied by an extraneous logarithmic factor, which, as mentioned before, has commonly appeared in such contexts. Furthermore, the bound on the Kolmogorov distance is vastly suboptimal. Our result in the Poisson setting substantially improves these rates to the inverse of the square root of the variance of $F_s$, which is optimal for the Kolmogorov distance and presumably optimal for the Wasserstein distance. It should be noted that, in the example of Pareto optimal points, we are working with a simple Poisson process and a finite intensity measure $\mathbb{Q}$. Further examples confirm that our abstract bound also applies to Poisson processes with a non-diffuse or infinite intensity measure $\mathbb{Q}$. Note that for measures with infinite intensity, \cite{LSY19} requires that the score function decay exponentially with respect to the distance to some set $K$, and the bound in Eq.~(2.10) therein becomes trivial if this set $K$ is the whole space and $\mathbb{Q}$ is infinite. The rest of the paper is organized as follows. In Section~\ref{sec:Pareto} we prove Theorem~\ref{thm:Pareto}.
Section \ref{sec:ex} provides two examples in settings where either the intensity measure is infinite and non-diffuse or the $(4+p)$-th moments of the score functions are unbounded over the space $\mathbb{X}$, and provides bounds on the rates of convergence in the Wasserstein and the Kolmogorov distances for the Gaussian approximation of certain statistics related to isolated points in these models. Finally, in Section~\ref{sec:Proof} we prove Theorem~\ref{thm:KolBd}, which relies on a modified version of Theorem~6.1 in \cite{LPS16}, see Theorem~\ref{thm:Main}. The proof of the latter is presented in the Appendix. \section{Number of minimal points in the hypercube} \label{sec:Pareto} In this section, we apply Theorem~\ref{thm:KolBd} to prove Theorem~\ref{thm:Pareto}, providing a quantitative limit theorem for the number of minimal points in a Poisson process on the hypercube. Throughout this section, $\mathbb{Q}$ is taken to be the Lebesgue measure on $\mathbb{X}:=[0,1]^d$ with $d\in \mathbb{N}$, and $\mathcal{P}_s$ is a Poisson process on $\mathbb{X}$ with intensity measure $s\mathbb{Q}$ for $s \ge 1$. We omit $\mathbb{Q}$ in integrals and write ${\,\mathrm d} x$ instead of $\mathbb{Q}({\mathrm d} x)$. The functional $F_s$ from \eqref{eq:ParetoPoints} can be expressed as in \eqref{eq:hs} with the score functions \begin{equation} \label{eq:xi} \xi_s(x,\mu):=\mathds{1}_{x \text{ is a minimal point in $\mu$}},\quad x\in\mu, \; \mu \in \mathbb{N}b. \end{equation} As a convention, we let $\xi_s(x,0)=0$. It is straightforward to see that $(\xi_s)_{s \ge 1}$ satisfies \eqref{eq:ximon}. We will show that conditions (A1) and (A2) also hold, so that Theorem~\ref{thm:KolBd} is applicable. For $x:=(x^{(1)},\dots,x^{(d)})\in \mathbb{X}$, let $[0,x]:=[0,x^{(1)}] \times\cdots\times [0,x^{(d)}]$, and denote the volume of $[0,x]$ by \begin{displaymath} |x|:=x^{(1)}\cdots x^{(d)}.
\end{displaymath} Given a counting measure $\mu\in\mathbb{N}b$ and $x\in\mu$, define the stabilization region as \begin{equation*} R_s(x,\mu):= \begin{cases} [0,x] & \mbox{if $\mu([0,x]\setminus \{x\})=0$},\\ \emptyset & \mbox{otherwise}. \end{cases} \end{equation*} To begin with, note that the region $R_s$ can be the empty set in our case, which rules out any possibility of representing it as a ball in some metric on the space $\mathbb{X}$. Since for any $x \in \mathbb{X}$ the mapping $\mathbb{N}b \ni \mu \mapsto \mu([0,x]\setminus \{x\})$ is measurable, the condition in \eqref{eq:1} follows. Next, it is easy to see that for $x, y \in \mathbb{X}$, \begin{equation}\label{eq:Prob1} \mathbf{P}\{y \in R_s(x,\mathcal{P}_{s} + \delta_x)\}=\mathds{1}_{x \succ y}\, e^{-s|x|}, \end{equation} which is clearly measurable. Denote by $x_1\vee\dots\vee x_n$ the coordinatewise maximum of $x_1,\dots,x_n \in \mathbb{X}$, and by $x_1\wedge\dots\wedge x_n$ their coordinatewise minimum. For $x_1,x_2\in \mathbb{X}$, notice that $\{x_1,x_2\}\subseteq R_{s}(z,\mathcal{P}_{s}+\delta_z)$ if and only if $z\succ (x_1\vee x_2)$ and $[0,z] \setminus \{z\}$ contains no points of $\mathcal{P}_s$. Thus \begin{equation}\label{eq:Prob2} \mathbf{P}\{\{x_1,x_2\} \subseteq R_s(z,\mathcal{P}_{s} + \delta_z)\} =\mathds{1}_{z \succ x_1 \vee x_2}\, e^{-s|z|}, \end{equation} which is also a measurable function of $(z,x_1, x_2)\in \mathbb{X}^3$, confirming \eqref{eq:2}. Clearly, $R_s$ is monotonically decreasing in its second argument. It is straightforward to check (A1.3). Finally, with $\xi_s$ as defined at \eqref{eq:xi}, it is easy to see that (A1.4) is satisfied. Furthermore, condition (A2) holds trivially with $M_s\equiv 1$ for all $p \in (0,1]$ and $s \ge 1$, since $\xi_s$ is an indicator function. For definiteness, take $p=1$.
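As a quick sanity check of the region-stabilization property (A1.4) for this score, the following sketch (our illustration, not code from the paper; the function names are ours) verifies on random finite configurations in $[0,1]^2$ that the minimal-point score of $x$ is unchanged when the configuration is restricted to $R_s(x,\mu)$.

```python
import random

def xi(x, mu):
    """Minimal-point score: 1 if x dominates no other point of mu
    (x dominates y when y <= x coordinatewise and y != x); 0 for mu = 0."""
    if not mu:
        return 0
    dominated = any(y != x and all(yi <= xi_ for yi, xi_ in zip(y, x))
                    for y in mu)
    return 0 if dominated else 1

def restrict_to_region(x, mu):
    """mu restricted to R_s(x, mu): if [0, x] contains no point of mu other
    than x, the region is [0, x] and only x survives the restriction;
    otherwise the region is empty and the restriction is the zero measure."""
    others_in_box = [y for y in mu
                     if y != x and all(yi <= xi_ for yi, xi_ in zip(y, x))]
    return [] if others_in_box else [x]

rng = random.Random(7)
for _ in range(1000):
    mu = [tuple(round(rng.random(), 3) for _ in range(2))
          for _ in range(rng.randrange(1, 12))]
    x = rng.choice(mu)
    # Property (A1.4): the score depends on mu only through R_s(x, mu).
    assert xi(x, mu) == xi(x, restrict_to_region(x, mu))
print("(A1.4) holds on 1000 random configurations")
```

Both branches of the region definition are exercised: when another point sits in $[0,x]$, both sides of the identity are $0$ (the region is empty and, by convention, $\xi_s(x,0)=0$); otherwise both sides are $1$.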
For $\xi_s$ as in \eqref{eq:xi}, by \eqref{eq:Prob1} the inequality \eqref{eq:Rs} turns into an equality with $r_{s}(x,y):=s|x|$ if $y\prec x$ and $r_{s}(x,y):=\infty$ if $y$ is not dominated by $x$. Throughout the section, for a function $f:[1,\infty) \to \mathbb{R}_+$, we write $f(s)=\mathcal{O}(\log^{d-1} s)$ to mean that $f(s)/\log^{d-1} s$ is uniformly bounded for all $s \geq 1$. It is well known (see, e.g., \cite{bai:dev:hwan:05}) that for all $\alpha >0$, \begin{equation} \label{eq:mean} s\int_{\mathbb{X}} e^{-\alpha s |x|}{\,\mathrm d} x=\mathcal{O}(\log^{d-1} s). \end{equation} In particular, by the Mecke formula, $\mathbf{E} F_s = s\int_{\mathbb{X}} e^{-s |x|}{\,\mathrm d} x=\mathcal{O}(\log^{d-1} s)$. Further, by the multivariate Mecke formula (see, e.g., \cite[Th.~4.4]{last:pen}), \begin{equation*} \Var(F_s) = \mathbf{E} F_s - (\mathbf{E} F_s)^2 + s^2 \iint_{D} \mathbf{P}\{x \text{ and } y \text{ are both minimal points in } \mathcal{P}_s+\delta_x+\delta_y\} {\,\mathrm d} x {\,\mathrm d} y, \end{equation*} where $D$ is the set of $(x,y) \in \mathbb{X}^2$ such that $x$ and $y$ are incomparable, i.e., $x \not \succ y$ and $y \not \succ x$. Hence, following the proof of Theorem~1 in \cite{bai:dev:hwan:05}, there exist finite positive constants $C_1$ and $C_2$ such that \begin{equation} \label{eq:var} C_1 \log^{d-1} s \le \Var(F_s) \le C_2 \log^{d-1} s, \quad s \ge 1. \end{equation} For $\alpha>0$, $s>0$, and $d\in\mathbb{N}$, define the function $c_{\alpha,s}: \mathbb{X} \to \mathbb{R}_+$ as \begin{equation} \label{eq:cal} c_{\alpha,s} (y):=s\int_{\mathbb{X}} \mathds{1}_{x\succ y}\, e^{-\alpha s |x|} {\,\mathrm d} x. \end{equation} In view of the Mecke formula and the Poisson empty space formula, $c_{1,s} (y)$ is the expected number of minimal points in $\mathcal{P}_s$ that dominate $y \in \mathbb{X}$.
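The Mecke identity $\mathbf{E} F_s = s\int_{\mathbb{X}} e^{-s|x|}\,{\mathrm d} x$ is easy to check numerically in dimension $d=2$. The following Monte Carlo sketch (ours, not from the paper; parameter values are arbitrary) counts minimal points in simulated Poisson samples and compares the average with a midpoint-rule evaluation of the integral.

```python
import math
import random

def count_minimal(points):
    """Number of Pareto-minimal points: x is minimal if it dominates no
    other point, i.e. no other point lies in [0, x] coordinatewise."""
    return sum(
        not any(q != p and all(qi <= pi for qi, pi in zip(q, p))
                for q in points)
        for p in points
    )

def poisson_cube(s, d, rng):
    """Poisson process of intensity s on [0,1]^d: a Poisson(s) number of
    i.i.d. uniform points (count sampled via exponential interarrivals)."""
    n, t = 0, rng.expovariate(1.0)
    while t < s:
        n, t = n + 1, t + rng.expovariate(1.0)
    return [tuple(rng.random() for _ in range(d)) for _ in range(n)]

rng = random.Random(1)
s, d, reps = 200.0, 2, 100
mc_mean = sum(count_minimal(poisson_cube(s, d, rng))
              for _ in range(reps)) / reps

# Mecke formula for d = 2: E F_s = s * int_{[0,1]^2} exp(-s*x*y) dx dy,
# approximated on an m-by-m midpoint grid.
m = 400
mid = [(i + 0.5) / m for i in range(m)]
mecke = s * sum(math.exp(-s * u * v) for u in mid for v in mid) / m**2

print(round(mc_mean, 2), round(mecke, 2))  # both near log(s) + Euler's gamma
```

For $d=2$ the integral equals $\int_0^s (1-e^{-u})u^{-1}\,{\mathrm d} u = \log s + \gamma + o(1)$, consistent with the $\mathcal{O}(\log^{d-1}s)$ growth of the mean stated above.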
Also note that $g_{s}(y)$ and $h_s(y)$ from \eqref{eq:g} are both equal to $c_{\zeta,s}(y)$ with $\zeta=p/(40+10p)=1/50$, so that $G_s(y) \le 3 + 2 c_{\zeta,s}(y)^5$. Next, we specify the function $q_{s}$ from \eqref{eq:g2s}. By \eqref{eq:Prob2}, we have \begin{displaymath} q_{s}(x_1,x_2) = s \int_\mathbb{X} \mathds{1}_{z\succ (x_1\vee x_2)}\, e^{-s|z|}{\,\mathrm d} z =c_{1,s}(x_1\vee x_2). \end{displaymath} Studying the function $c_{\alpha,s}$ is essential to understanding the behaviour of minimal points. Note that $c_{\alpha,s}$ satisfies the scaling property \begin{equation} \label{eq:scale} c_{\alpha,s}(y)=\alpha^{-1} c_{1,\alpha s}(y), \quad \alpha>0,\; s>0. \end{equation} This will often enable us to take $\alpha=1$ without loss of generality. The following lemma describes the asymptotic behaviour of the function $c_{\alpha,s}$ for large $s$. Before we state the result, notice that for $i \in \mathbb{N} \cup \{0\}$ and $\alpha>0$, $$ \int_{0}^\infty |\log w|^{i} e^{-\alpha w}{\,\mathrm d} w \le \int_0^1 |\log w|^i {\,\mathrm d} w + \int_1^\infty w^i e^{-\alpha w}{\,\mathrm d} w \le \int_0^1 |\log w|^i {\,\mathrm d} w + \frac{\Gamma (i+1)}{\alpha^{i+1}}. $$ Since any positive integer power of the logarithm is integrable near zero, for all $i \in \mathbb{N} \cup \{0\}$ and $\alpha>0$, \begin{equation} \label{eq:Gamma} \int_{0}^\infty |\log w|^{i} e^{-\alpha w}{\,\mathrm d} w <\infty. \end{equation} \begin{lemma} \label{lemma:c-bound} For all $\alpha>0$ and $s>0$, \begin{displaymath} c_{\alpha,s}(y)\leq \frac{D}{\alpha} e^{-\alpha s|y|/2}\Big[1+\big|\log(\alpha s|y|)\big|^{d-1}\Big], \quad y \in \mathbb{X}, \end{displaymath} for a constant $D$ that depends only on the dimension $d \in \mathbb{N}$. \end{lemma} \begin{proof} The result is trivial when $d=1$, so we assume $d \ge 2$. By \eqref{eq:scale}, we can also assume that $\alpha=1$.
The following derivation is motivated by the one used to calculate the mean of the number of minimal points in \cite[Sec.~2]{bai:dev:hwan:05}. Changing variables $u=s^{1/d}x$ in the definition of $c_{1,s}$ to obtain the first equality, and letting $z^{(i)}=-\log u^{(i)}$, $i=1,\dots,d$, in the second, for $y \in \mathbb{X}$, we obtain \begin{align*} c_{1,s}(y) &=\int_{\times_{i=1}^d [s^{1/d}y^{(i)}, s^{1/d}]} e^{-|u|} {\,\mathrm d} u\\ &=\int_{\times_{i=1}^d \big[-d^{-1}\log s, -d^{-1}\log s - \log y^{(i)}\big]} \exp \bigg\{-e^{-\sum_{j=1}^d z^{(j)}} - \sum_{j=1}^d z^{(j)} \bigg\} {\,\mathrm d} z. \end{align*} Next, we change variables by letting $v=(v^{(1)}, \dots, v^{(d)})$ with $v^{(i)}:=z^{(i)}+\cdots+z^{(d)}$, $i=1,\dots,d$. Note that the integrand is only a function of $v^{(1)}$. Taking into account the integration bounds on $z^{(i)}$, we have \begin{displaymath} v^{(1)} - \bigg(- \frac{i-1}{d} \log s - \sum_{j=1}^{i-1} \log y^{(j)}\bigg) \le v^{(i)} \le - \frac{d-i+1}{d} \log s - \sum_{j=i}^d \log y^{(j)}, \quad 2 \le i \le d. \end{displaymath} Thus, for each $2 \le i \le d$, the integration variable $v^{(i)}$ belongs to an interval of length at most $(-\log (s|y|) - v^{(1)})$. Using the substitution $w=e^{-v^{(1)}}$ in the second step and Jensen's inequality in the last one, we obtain \begin{align*} c_{1,s}(y)& \le \int_{-\log s}^{-\log (s|y|)} \Big(-\log (s|y|) - v^{(1)}\Big)^{d-1} \exp\Big\{-e^{-v^{(1)}} - v^{(1)}\Big\} {\,\mathrm d} v^{(1)}\\ & =\int_{s|y|}^{s} \Big(\log w-\log (s|y|)\Big)^{d-1}e^{-w} {\,\mathrm d} w \\ & \le 2^{d-2} e^{-s|y|/2} \bigg[\big|\log (s|y|)\big|^{d-1} + \int_{s|y|}^s |\log w|^{d-1} e^{-w/2}{\,\mathrm d} w \bigg]. \end{align*} The result now follows by \eqref{eq:Gamma}. \end{proof} Before proceeding to estimate the bound in Theorem~\ref{thm:KolBd}, we need some estimates of integrals involving $c_{\alpha,s}$ and $|x|$.
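The bound of Lemma~\ref{lemma:c-bound} can likewise be checked numerically. The sketch below is an illustration only, not part of the proof; it assumes $d=2$, $\alpha=1$, and verifies the inequality with the ample constant $D=10$ at a few sample points.

```python
import math

def c_1s(s, y1, y2, n=200_000):
    """c_{1,s}(y) for d = 2: s * integral of e^{-s u v} over [y1,1]x[y2,1].

    The u-integral is done in closed form, leaving a 1-D midpoint rule:
    c_{1,s}(y) = int_{y2}^{1} (e^{-s y1 v} - e^{-s v}) / v dv.
    """
    h = (1.0 - y2) / n
    total = 0.0
    for k in range(n):
        v = y2 + (k + 0.5) * h
        total += (math.exp(-s * y1 * v) - math.exp(-s * v)) / v
    return total * h

# The ratio of c_{1,s}(y) to e^{-s|y|/2} (1 + |log(s|y|)|) should stay
# below some fixed constant D; D = 10 is ample at these sample points.
for s, y in [(100.0, (0.01, 0.01)), (100.0, (0.1, 0.1)),
             (100.0, (0.3, 0.7)), (1000.0, (0.05, 0.05))]:
    sy = s * y[0] * y[1]
    bound = math.exp(-sy / 2) * (1 + abs(math.log(sy)))
    assert c_1s(s, *y) <= 10 * bound
```

This is only a spot check at a handful of points, of course; the lemma's content is the uniformity in $y$ and $s$.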
We will often use the following representation: for $\alpha>0$, $s \ge 1$ and $i \in \mathbb{N}$, \begin{align}\label{eq:crep} s\int_\mathbb{X} c_{\alpha,s}(x)^i {\,\mathrm d} x &=s\int_\mathbb{X} \prod_{j=1}^i \Big(s\int_\mathbb{X} \mathds{1}_{z_j \succ x} e^{- \alpha s |z_j| }{\,\mathrm d} z_j\Big) {\,\mathrm d} x\nonumber \\ & =s^{i+1} \int_{\mathbb{X}^i} \big|z_1 \wedge \dots\wedge z_i\big| e^{- \alpha s \sum_{j=1}^i |z_j| } {\,\mathrm d}(z_1, \dots, z_i). \end{align} \begin{lemma}\label{lem:intbd} For all $i \in \mathbb{N}$ and $\alpha>0$, \begin{gather} \label{eq:c1} s\int_{\mathbb{X}} c_{\alpha,s}(y)^i {\,\mathrm d} y= \mathcal{O}(\log^{d-1} s),\\ \label{eq:c2} s\int_{\mathbb{X}} \left(s \int_{\mathbb{X}}e^{-\alpha s|x\vee y|}{\,\mathrm d} x\right)^i {\,\mathrm d} y= \mathcal{O}(\log^{d-1} s),\\ \label{eq:c3} s \int_{\mathbb{X}} \left(s\int_{\mathbb{X}} c_{\alpha,s}(x\vee y) {\,\mathrm d} x \right)^i {\,\mathrm d} y = \mathcal{O}(\log^{d-1} s), \end{gather} where the constants in the bounds on the right-hand sides may depend on $i$. \end{lemma} \begin{proof} As in Lemma~\ref{lemma:c-bound}, without loss of generality let $\alpha=1$ and $s \ge 1$. We first prove \eqref{eq:c1}. For $i \in \mathbb{N}$, by Lemma~\ref{lemma:c-bound} and Jensen's inequality, we have \begin{equation} \label{eq:aux} s\int_{\mathbb{X}} c_{1,s}(y)^i {\,\mathrm d} y \le 2^{i-1} D^i \left[s \int_{\mathbb{X}} e^{- is|y|/2} {\,\mathrm d} y + s \int_{\mathbb{X}} e^{- is|y|/2} \big|\log(s|y|)\big|^{i(d-1)} {\,\mathrm d} y\right], \end{equation} with $D$ as in Lemma~\ref{lemma:c-bound}. The first summand is of the order of $\log^{d-1} s$ by \eqref{eq:mean}.
For the second summand, we employ substitutions similar to those in Lemma~\ref{lemma:c-bound} and \cite{bai:dev:hwan:05}: \begin{align*} & s \int_{\mathbb{X}} e^{- is|y|/2} \big|\log( s|y|)\big|^{i(d-1)} {\,\mathrm d} y \le \int_{[0,s^{1/d}]^d} e^{-|u|/2} \big|\log |u|\big|^{i(d-1)} {\,\mathrm d} u \qquad\;\;\; (u=s^{1/d} x)\\ & =\int_{[-d^{-1}\log s, \infty)^d} \exp \left\{- \frac{1}{2} e^{-\sum_{j=1}^d z^{(j)}} - \sum_{j=1}^d z^{(j)} \right\} \Bigg|\sum_{j=1}^d z^{(j)}\Bigg|^{i(d-1)} {\,\mathrm d} z \quad\;\;\,\; (z^{(j)}=-\log u^{(j)})\\ &\leq \int_{-\log s}^\infty \big(\log s + v^{(1)}\big)^{d-1} \exp\Big\{-\frac{1}{2} e^{-v^{(1)}} - v^{(1)}\Big\} |v^{(1)}|^{i(d-1)} {\,\mathrm d} v^{(1)} \quad \quad \; (v^{(i)}=\sum_{j=i}^d z^{(j)})\\ &= \int_{0}^s \big(\log s -\log w\big)^{d-1} e^{-w/2} |\log w|^{i(d-1)} {\,\mathrm d} w \qquad\qquad\qquad\qquad\qquad \quad \, (w=e^{-v^{(1)}})\\ &\le 2^{d-2} \left[\log^{d-1} s \int_{0}^\infty e^{-w/2} |\log w|^{i(d-1)} {\,\mathrm d} w + \int_{0}^\infty e^{-w/2} |\log w|^{(i+1)(d-1)} {\,\mathrm d} w\right], \end{align*} where the last step is due to Jensen's inequality. Finally, by \eqref{eq:Gamma}, \begin{displaymath} \int_{0}^\infty e^{-w/2} |\log w|^{j} {\,\mathrm d} w <\infty, \quad j\in\mathbb{N}, \end{displaymath} and \eqref{eq:c1} follows. Next, we move on to proving \eqref{eq:c2}. For $x \in \mathbb{X}$ and $I \subseteq \{1,\dots,d\}$, we write $x^I$ for the subvector $(x^{(i)})_{i \in I}$. We split the integration domain according to the cases $x\vee y=(x^I,y^J)$ with $J:=I^c$.
Note that by Jensen's inequality, we have \begin{equation}\label{eq:maxsplit} s\int_{\mathbb{X}} \left(s \int_{\mathbb{X}}e^{- s|x\vee y|}{\,\mathrm d} x\right)^i {\,\mathrm d} y\le 2^{(i-1)d} \sum_{I \subseteq \{1,\dots,d\}} s\int_{\mathbb{X}} \left(s \int_\mathbb{X} \mathds{1}_{x^I \succ y^I, x^J \prec y^J}e^{- s|x^I| |y^J| }{\,\mathrm d} x\right)^i {\,\mathrm d} y. \end{equation} First, if $I=\emptyset$, splitting the exponential into the product of two exponentials with the power halved, using $t^i e^{-t} \le i!$ for $t\geq0$, and referring to \eqref{eq:mean} yield that \begin{displaymath} s\int_{\mathbb{X}} \left(s \int_\mathbb{X} \mathds{1}_{x \prec y}e^{-s |y|} {\,\mathrm d} x\right)^i {\,\mathrm d} y = s\int_{\mathbb{X}} (s|y|)^i e^{-i s |y|} {\,\mathrm d} y =\mathcal{O}(\log^{d-1} s). \end{displaymath} Next, assume that $I$ is nonempty and of cardinality $m$, with $1 \le m \le d$. As a convention, let $|y^\emptyset|:=1$ for all $y \in \mathbb{X}$. Using Lemma~\ref{lemma:c-bound} with $\alpha=1$ and Jensen's inequality in the second step, we obtain \begin{align*} s\int_{\mathbb{X}} \left(s \int_\mathbb{X} \mathds{1}_{x^I \succ y^I, x^J \prec y^J} e^{-s |x^I|\,|y^J|} {\,\mathrm d} x\right)^i {\,\mathrm d} y &= s\int_{\mathbb{X}}\left(s |y^J| \int_{[0,1]^m} \mathds{1}_{x^I \succ y^I} e^{-s |x^I|\,|y^J|} {\,\mathrm d} x^I\right)^i {\,\mathrm d} y\\ & \le D^{i} 2^{i-1} s \int_{\mathbb{X}} e^{-i s|y|/2}\Big[1+\big|\log(s|y|)\big|^{i(m-1)}\Big]{\,\mathrm d} y, \end{align*} with $D$ as in Lemma~\ref{lemma:c-bound}. The two summands can be bounded in the same manner as was done for \eqref{eq:aux}, providing a bound of the order of $\log^{d-1} s$. The bound in \eqref{eq:c2} now follows from \eqref{eq:maxsplit}. Finally, we confirm \eqref{eq:c3}.
Using that $te^{-t} \le 1$ for $t \ge 0$ in the first inequality, we have \begin{align*} s &\int_{\mathbb{X}} \left(s\int_{\mathbb{X}} c_{1,s}(x\vee y) {\,\mathrm d} x \right)^i {\,\mathrm d} y\\ &= s^{2i+1} \int_{\mathbb{X}} \int_{\mathbb{X}^i} \bigg[\prod_{j=1}^{i} \int_\mathbb{X} \mathds{1}_{z_j\succ x_j\vee y} e^{- s |z_j|} {\,\mathrm d} z_j\bigg] {\,\mathrm d} (x_1, \dots, x_i) {\,\mathrm d} y\\ &= s^{i+1} \int_{\mathbb{X}^i} \bigg(s^i \prod_{j=1}^i |z_j| e^{- s \sum_{j=1}^i |z_j|/2 }\bigg)\; \big|z_1 \wedge \dots\wedge z_i\big| e^{- s \sum_{j=1}^i |z_j|/2 } {\,\mathrm d}(z_1, \dots, z_i)\\ &\le 2^i s^{i+1} \int_{\mathbb{X}^i} \big|z_1 \wedge \dots\wedge z_i\big| e^{- s \sum_{j=1}^i |z_j|/2 } {\,\mathrm d}(z_1, \dots, z_i)\\ &\le 2^i s \int_\mathbb{X} c_{1/2,s}(x)^i {\,\mathrm d} x=\mathcal{O}(\log^{d-1} s), \end{align*} where we have also used \eqref{eq:crep} in the penultimate step and \eqref{eq:c1} for the final step. \end{proof} Now we are ready to derive the bound in Theorem~\ref{thm:KolBd}. Recall from Section~\ref{sec:notat-main-results} the constants $\beta=p /(32+4 p)$ and $\zeta=p/(40+10p)$, which, in particular, satisfy $\zeta < 2\beta$. For our example, it suffices to let $p=1$. Nonetheless, the following bounds are derived for any $\beta$ and $\zeta$ satisfying the above condition. \begin{lemma} \label{lem:intg1s} For all $\beta\in(0,1/2)$, $\zeta\in(0,2\beta)$ and $f_{2\beta}$ defined at \eqref{eq:fa}, \begin{align*} s \int_{\mathbb{X}} f_{2\beta}(x_1) {\,\mathrm d} x_1=\mathcal{O}(\log^{d-1}s). \end{align*} \end{lemma} \begin{proof} We first bound the integral of $f_{2\beta}^{(1)}$ defined at \eqref{eq:fal}.
By \eqref{eq:c1}, \begin{displaymath} s\int_{\mathbb{X}} s\int_{\mathbb{X}} e^{-2\beta r_s(x_2,x_1)}{\,\mathrm d} x_2 {\,\mathrm d} x_1 = s \int_{\mathbb{X}} s \int_\mathbb{X} \mathds{1}_{x_2 \succ x_1} e^{- 2\beta s |x_2|}{\,\mathrm d} x_2 {\,\mathrm d} x_1 =\mathcal{O}(\log^{d-1} s). \end{displaymath} If $x_2 \succ x_1$, then $c_{\zeta,s}(x_2) \le c_{\zeta,s}(x_1)$. Since $\zeta <2\beta$, by \eqref{eq:c1}, \begin{align}\label{eq:2.1} s \int_{\mathbb{X}} s \int_{\mathbb{X}} c_{\zeta,s}(x_2)^5 e^{- 2\beta r_s(x_2,x_1)} {\,\mathrm d} x_2 {\,\mathrm d} x_1 &\le s \int_{\mathbb{X}} c_{\zeta,s}(x_1)^5 s \int_\mathbb{X} \mathds{1}_{x_2 \succ x_1} e^{- 2\beta s |x_2|} {\,\mathrm d} x_2 {\,\mathrm d} x_1\nonumber \\ & \le s \int_{\mathbb{X}} c_{\zeta,s}(x_1)^6 {\,\mathrm d} x_1=\mathcal{O}(\log^{d-1} s). \end{align} Since $G_s(y) \le 3 + 2 c_{\zeta,s}(y)^5$, combining the above two bounds, we obtain \begin{displaymath} s \int_{\mathbb{X}} f_{2\beta}^{(1)}(x_1) {\,\mathrm d} x_1 =\mathcal{O}(\log^{d-1} s). \end{displaymath} We move on to $f_{2\beta}^{(2)}$. Using again that $te^{-t} \le 1$ for $t\geq0$ and \eqref{eq:mean}, we have \begin{multline*} s\int_{\mathbb{X}} s\int_{\mathbb{X}} e^{-2\beta r_s(x_1,x_2)}{\,\mathrm d} x_2 {\,\mathrm d} x_1 = s \int_{\mathbb{X}} s \int_\mathbb{X} \mathds{1}_{x_2 \prec x_1} e^{- 2\beta s |x_1|} {\,\mathrm d} x_2 {\,\mathrm d} x_1\\ = s \int_{\mathbb{X}} s|x_1|e^{- 2\beta s |x_1|}{\,\mathrm d} x_1 \le s\beta^{-1} \int_{\mathbb{X}} e^{- \beta s |x_1|}{\,\mathrm d} x_1=\mathcal{O}(\log^{d-1} s).
\end{multline*} Also, $\zeta< 2\beta$ and \eqref{eq:c1} yield that \begin{align*} s & \int_{\mathbb{X}} s \int_{\mathbb{X}}c_{\zeta,s}(x_2)^5 e^{- 2\beta r_s(x_1,x_2)} {\,\mathrm d} x_2 {\,\mathrm d} x_1\\ &\le s \int_{\mathbb{X}} c_{\zeta,s}(x_2)^5 \left(s\int_\mathbb{X} \mathds{1}_{x_1 \succ x_2} e^{-\zeta s |x_1|} {\,\mathrm d} x_1\right){\,\mathrm d} x_2 =s \int_{\mathbb{X}} c_{\zeta,s}(x_2)^6{\,\mathrm d} x_2 =\mathcal{O}(\log^{d-1} s). \end{align*} Thus, \begin{displaymath} s \int_{\mathbb{X}} f_{2\beta}^{(2)}(x_1) {\,\mathrm d} x_1=\mathcal{O}(\log^{d-1} s). \end{displaymath} It remains to bound the integral of $f_{2\beta}^{(3)}$. For $\alpha\in(0,1)$ and $x \in \mathbb{X}$, we have \begin{multline}\label{eq:cl1} c_{1,s}(x)^\alpha=e^{-\alpha s |x|} \left(s \int_\mathbb{X} \mathds{1}_{z \succ x} e^{-s(|z|-|x|)} {\,\mathrm d} z\right)^\alpha \le e^{-\alpha s |x|}\left[1+ s \int_\mathbb{X} \mathds{1}_{z \succ x} e^{-s(|z|-|x|)} {\,\mathrm d} z\right]\\ \le e^{-\alpha s |x|}\left[1+ s \int_\mathbb{X} \mathds{1}_{z \succ x} e^{-\alpha s(|z|-|x|)} {\,\mathrm d} z\right]=e^{-\alpha s |x|} + c_{\alpha,s}(x). \end{multline} Thus, noticing that $2\beta<1$ and using Lemma~\ref{lem:intbd}, \begin{multline*} s \int_{\mathbb{X}} s \int_{\mathbb{X}} q_s(x_1,x_2)^{2\beta} {\,\mathrm d} x_2 {\,\mathrm d} x_1 = s \int_{\mathbb{X}} s \int_{\mathbb{X}} c_{1,s}(x_1\vee x_2)^{2\beta} {\,\mathrm d} x_2 {\,\mathrm d} x_1\\ \le s^2 \int_{\mathbb{X}^2} e^{-2\beta s |x_1\vee x_2|} {\,\mathrm d} (x_1,x_2) + s^2 \int_{\mathbb{X}^2} c_{2\beta,s}(x_1\vee x_2) {\,\mathrm d} (x_1,x_2) =\mathcal{O}(\log^{d-1} s).
\end{multline*} Finally, using \eqref{eq:cl1} and that $\zeta< 2\beta$ for the inequality, write \begin{align*} s& \int_{\mathbb{X}} s \int_{\mathbb{X}} c_{\zeta,s}(x_2)^5 q_s(x_1,x_2)^{2\beta} {\,\mathrm d} x_2 {\,\mathrm d} x_1= s \int_{\mathbb{X}} s \int_{\mathbb{X}}c_{\zeta,s}(x_2)^5 c_{1,s}(x_1\vee x_2)^{ 2\beta} {\,\mathrm d} x_2 {\,\mathrm d} x_1\\ &\le s^8 \int_{\mathbb{X}} \int_{\mathbb{X}} \int_{\mathbb{X}^5} \mathds{1}_{z_1,\dots,z_5 \succ x_2} \int_\mathbb{X} \mathds{1}_{z_6 \succ x_1\vee x_2} \exp\left\{-\zeta s \sum_{i=1}^6 |z_i|\right\} {\,\mathrm d} z_6 {\,\mathrm d}(z_1,\dots,z_5) {\,\mathrm d} x_2 {\,\mathrm d} x_1\\ &\qquad\qquad + s \int_{\mathbb{X}} s \int_{\mathbb{X}}c_{\zeta,s}(x_2)^5 e^{-2\beta s |x_1\vee x_2|} {\,\mathrm d} x_2 {\,\mathrm d} x_1=:A_1 + A_2. \end{align*} By \eqref{eq:crep} and \eqref{eq:c1}, \begin{align*} A_1 &= s^8\int_{\mathbb{X}^6}|z_6|\; \big|z_1 \wedge\dots\wedge z_6\big| \exp\left\{-\zeta s \sum_{i=1}^6 |z_i|\right\} {\,\mathrm d} (z_1,\dots,z_6)\\ &\le (2/\zeta) s^7 \int_{\mathbb{X}^6} |z_1 \wedge\dots\wedge z_6| \exp\left\{-\zeta s \sum_{i=1}^6 |z_i|/2\right\}{\,\mathrm d} (z_1,\dots,z_6)\\ &=(2/\zeta) s \int_\mathbb{X} c_{\zeta/2,s}(x)^6 {\,\mathrm d} x=\mathcal{O}(\log^{d-1} s). \end{align*} Furthermore, by the Cauchy--Schwarz inequality and Lemma~\ref{lem:intbd}, \begin{displaymath} A_2 \le \left(s \int_\mathbb{X} c_{\zeta,s} (x_2)^{10} {\,\mathrm d} x_2\right)^{1/2} \left(s \int_\mathbb{X} \left(s \int_\mathbb{X} e^{-2\beta s |x_1\vee x_2|} {\,\mathrm d} x_1\right)^2 {\,\mathrm d} x_2\right)^{1/2} =\mathcal{O}(\log^{d-1} s). \end{displaymath} Therefore, \begin{displaymath} s \int_{\mathbb{X}} f_{2\beta}^{(3)}(x_1) {\,\mathrm d} x_1 =\mathcal{O}(\log^{d-1} s), \end{displaymath} concluding the proof.
\end{proof} \begin{lemma}\label{lem:intbd1} For $\alpha_1,\alpha_2>0$, \begin{displaymath} s \int_{\mathbb{X}} \left(s \int_{\mathbb{X}}c_{\alpha_1,s}(x)^5 e^{-\alpha_2 s |x\vee y|} {\,\mathrm d} x\right)^2 {\,\mathrm d} y =\mathcal{O}(\log^{d-1} s). \end{displaymath} \end{lemma} \begin{proof} Since $c_{\alpha,s}$ is decreasing in $\alpha$ and in view of \eqref{eq:scale}, it suffices to prove the result with both $\alpha_1$ and $\alpha_2$ replaced by $1$. We split the inner integral into integration domains corresponding to the cases when $x\vee y=(x^I,y^J)$ with $J=I^c$ for $I \subseteq \{1,\dots,d\}$. First, if $I=\{1,\dots,d\}$, then using monotonicity of $c_{1,s}$ and \eqref{eq:c1}, we have \begin{multline*} s \int_{\mathbb{X}} \left(s \int_\mathbb{X} \mathds{1}_{x \succ y}c_{1,s}(x)^5 e^{-s |x\vee y|} {\,\mathrm d} x\right)^2 {\,\mathrm d} y\\ \le s \int_{\mathbb{X}} c_{1,s}(y)^{10} \left(s \int_\mathbb{X} \mathds{1}_{x \succ y}e^{- s |x|}{\,\mathrm d} x\right)^2 {\,\mathrm d} y \le s \int_{\mathbb{X}} c_{1,s}(y)^{12} {\,\mathrm d} y =\mathcal{O}(\log^{d-1} s). \end{multline*} By writing the function $|\cdot|$ as the product of coordinates and passing to the one-dimensional case, it is easy to see that for $a,b,y \in \mathbb{X}$, \begin{equation} \label{eq:l-inequality} |a \wedge y|\;|b\wedge y| \le |a \wedge b \wedge y|\; |y|.
\end{equation} Hence, when $I=\emptyset$, \begin{align*} &s \int_{\mathbb{X}} \left(s \int_{\mathbb{X}} \mathds{1}_{x \prec y}c_{1,s}(x)^5 e^{-s |x\vee y|} {\,\mathrm d} x\right)^2 {\,\mathrm d} y = s \int_{\mathbb{X}} e^{-2 s |y|}\left(s \int_\mathbb{X} \mathds{1}_{x \prec y}c_{1,s}(x)^5 {\,\mathrm d} x\right)^2 {\,\mathrm d} y\\ & \le s^{13} \int_\mathbb{X} \int_{\mathbb{X}^2} \mathds{1}_{x_1,x_2 \prec y} \int_{\mathbb{X}^{10}} \mathds{1}_{z_1,\dots, z_5 \succ x_1} \mathds{1}_{z_6,\dots,z_{10} \succ x_2} e^{- s|y| - s \sum_{i=1}^{10} |z_i|} {\,\mathrm d} (z_1,\dots,z_{10}) {\,\mathrm d}(x_1,x_2) {\,\mathrm d} y\\ &=s^{13} \int_\mathbb{X} \int_{\mathbb{X}^{10}} \big|z_1\wedge\dots\wedge z_5\wedge y\big|\; \big|z_6 \wedge\dots\wedge z_{10} \wedge y\big| e^{- s|y|- s \sum_{i=1}^{10} |z_i|} {\,\mathrm d} (z_1,\dots,z_{10}) {\,\mathrm d} y\\ &\le s^{13} \int_\mathbb{X} \int_{\mathbb{X}^{10}} \big|z_1\wedge\dots\wedge z_{10}\wedge y\big|\; |y|\; e^{ - s |y|- s \sum_{i=1}^{10} |z_i|} {\,\mathrm d} (z_1,\dots,z_{10}) {\,\mathrm d} y, \end{align*} where in the final step, we have used \eqref{eq:l-inequality} with $a:=z_1 \wedge\dots\wedge z_5$ and $b:=z_6 \wedge\dots\wedge z_{10}$.
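Inequality \eqref{eq:l-inequality} reduces to the coordinatewise statement $\min(a,y)\min(b,y)\le\min(a,b,y)\,y$. The following property-based sketch (illustrative only, not part of the proof) exercises it on random inputs in dimensions $1$ through $5$.

```python
import random

def vol_min(*pts):
    """|p1 ∧ p2 ∧ ...|: the product over coordinates of the minima."""
    out = 1.0
    for i in range(len(pts[0])):
        out *= min(p[i] for p in pts)
    return out

rng = random.Random(0)
for _ in range(10_000):
    d = rng.randint(1, 5)
    a = [rng.random() for _ in range(d)]
    b = [rng.random() for _ in range(d)]
    y = [rng.random() for _ in range(d)]
    # coordinatewise: min(a,y) * min(b,y) <= min(a,b,y) * y
    assert vol_min(a, y) * vol_min(b, y) <= vol_min(a, b, y) * vol_min(y) + 1e-12
```

The inequality is in fact an identity-by-cases per coordinate (compare the two sides after assuming, say, $a\le b$), which is what the one-dimensional reduction in the text refers to.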
Splitting the exponential into the product of two exponentials with powers halved, and using the fact that \begin{displaymath} s|y| e^{ - s |y|/2- s \sum_{i=1}^{10} |z_i|/2} \leq 2, \end{displaymath} we obtain by \eqref{eq:c1} that the last integral is bounded by \begin{align*} &2 s^{12} \int_\mathbb{X} \int_{\mathbb{X}^{10}} \big|z_1\wedge\dots\wedge z_{10} \wedge y\big| \; e^{- s |y|/2 - s \sum_{i=1}^{10} |z_i|/2 } {\,\mathrm d} (z_1,\dots,z_{10}) {\,\mathrm d} y\\ &=2 s \int_\mathbb{X} \left(s\int_\mathbb{X} \mathds{1}_{y\succ x} e^{-s|y|/2}{\,\mathrm d} y\right) \prod_{i=1}^{10} \left(s\int_\mathbb{X} \mathds{1}_{z_i\succ x} e^{- s|z_i|/2}{\,\mathrm d} z_i\right) {\,\mathrm d} x\\ &=2 s \int_\mathbb{X} c_{1/2,s}(x)^{11}{\,\mathrm d} x =\mathcal{O}(\log^{d-1} s). \end{align*} Next, assume that $d \ge 2$ and $I$ is nonempty of cardinality $m$ with $1 \le m \le d-1$. Using monotonicity of $c_{1,s}$ in the first step and Lemma~\ref{lemma:c-bound} in the last step upon identifying the integral as the function given by \eqref{eq:cal} in the space of dimension $m$, we have \begin{align} \label{eq:incom} s& \int_{\mathbb{X}} \left(s \int_\mathbb{X} \mathds{1}_{x^I \succ y^I, x^J \prec y^J} c_{1,s}(x)^5 e^{- s |x^I|\,|y^J|} {\,\mathrm d} x\right)^2 {\,\mathrm d} y\nonumber\\ & \le s \int_{\mathbb{X}} \left(s \int_\mathbb{X} \mathds{1}_{x^I \succ y^I, x^J \prec y^J} c_{1,s}(x^J,y^I)^5 e^{- s |x^I|\,|y^J|} {\,\mathrm d} x\right)^2 {\,\mathrm d} y\nonumber\\ & = s \int_{\mathbb{X}} \left( \int_{[0,1]^m} \mathds{1}_{x^I \succ y^I} e^{- s |x^I|\,|y^J|} {\,\mathrm d} x^I\right)^2 \left(s \int_{[0,1]^{d-m}} \mathds{1}_{x^J \prec y^J} c_{1,s}(x^J,y^I)^5 {\,\mathrm d} x^J\right)^2 {\,\mathrm d} y\nonumber\\ & \le D^2 s \int_{\mathbb{X}} \frac{e^{- s|y|}}{s^2 |y^J|^2} \left(1+ \big|\log ( s|y|)\big|^{2(m-1)}\right) \left(s \int_{[0,1]^{d-m}} \mathds{1}_{x^J \prec y^J} c_{1,s}(x^J,y^I)^5 {\,\mathrm d} x^J\right)^2 {\,\mathrm d} y,
\end{align} with $D$ as in Lemma~\ref{lemma:c-bound}. We will now estimate the integral inside \eqref{eq:incom}. Using Lemma~\ref{lemma:c-bound} and Jensen's inequality in the first step, substituting $u=(s|y^I|)^{1/(d-m)} x^J$ in the second step, letting $z^{(i)}=-\log u^{(i)}$, $i=1,\dots,d-m$, in the third one, $v^{(1)}=\sum_{i=1}^{d-m} z^{(i)}$ in the fourth, $w=e^{-v^{(1)}}$ in the fifth, and, finally, Jensen's inequality in the penultimate step, we obtain that \begin{align*} s&|y^I| \int_{[0,1]^{d-m}} \mathds{1}_{x^J \prec y^J} c_{1,s}(x^J,y^I)^5 {\,\mathrm d} x^J\\ &\le 16 D^5 s|y^I| \int_{[0,1]^{d-m}} \mathds{1}_{x^J \prec y^J} e^{-5 s |x^J|\,|y^I|/2} \Big(1+ \big|\log (s|x^J|\,|y^I|)\big|^{5(d-1)}\Big) {\,\mathrm d} x^J\\ &= 16 D^5 \int_{\big[0,(s|y^I|)^{\frac{1}{d-m}}\big]^{d-m}} \mathds{1}_{u \prec (s|y^I|)^{\frac{1}{d-m}}y^J} e^{-\frac{5}{2}|u|} \Big(1+ \big|\log (|u|)\big|^{5(d-1)}\Big) {\,\mathrm d} u\\ &= 16 D^5 \int_{\times_{j \in J}\big[-\frac{\log (s|y^I|)}{d-m} - \log y^{(j)},\infty\big)} \exp \left\{-\frac{5}{2} e^{-\sum_{i=1}^{d-m} z^{(i)}} -\sum_{i=1}^{d-m} z^{(i)}\right\}\\ & \qquad \qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\times \Big(1+ \Big|\sum_{i=1}^{d-m} z^{(i)}\Big |^{5(d-1)}\Big) {\,\mathrm d} z\\ &\leq 16 D^5 \int_{-\log s|y|}^\infty \Big(v^{(1)} + \log (s|y|)\Big)^{d-m-1} \exp\left\{-\frac{5}{2}e^{-v^{(1)}} - v^{(1)}\right\} \Big(1+|v^{(1)}|^{5(d-1)}\Big) {\,\mathrm d} v^{(1)}\\ &= 16 D^5 \int_{0}^{s|y|}e^{-5w/2} \Big(\log (s|y|) - \log w\Big)^{d-m-1} \Big(1+|\log w|^{5(d-1)}\Big) {\,\mathrm d} w\\ & \le 16 D^5 2^{d-m-2} \bigg[|\log (s|y|)|^{d-m-1} \int_{0}^{s|y|} \Big(1+|\log w|^{5(d-1)}\Big) {\,\mathrm d} w\\ & \qquad\qquad \qquad \qquad \qquad\qquad \qquad \qquad \qquad + \int_{0}^{s|y|}|\log w|^{d-m-1}\Big(1+|\log w|^{5(d-1)}\Big) {\,\mathrm d} w \bigg]\\ & \le D' s|y| \bigg[1+ \sum_{i=1}^{6(d-1)-m} \big|\log (s|y|)\big|^{i} \bigg] \end{align*} for a constant $D'$ depending
only on $d$ and $m$, so that the bound on the last integral in \eqref{eq:incom} is obtained by dividing by $|y^I|$ on both sides. The last step relies on the elementary inequality that, for $l \in \mathbb{N} \cup \{0\}$ and $a>0$, there exists a constant $b_l>0$ depending only on $l$ such that \begin{displaymath} \int_0^{a} |\log w|^l {\,\mathrm d} w \le b_l a \left[1+\sum_{i=1}^{l} |\log a|^i\right]. \end{displaymath} Plugging this into \eqref{eq:incom} and using Jensen's inequality, we obtain \begin{align*} s & \int_{\mathbb{X}} \left(s \int_\mathbb{X} \mathds{1}_{x^I \succ y^I, x^J \prec y^J}c_{1,s}(x)^5 e^{- s |x^I|\,|y^J|} {\,\mathrm d} x\right)^2 {\,\mathrm d} y\\ & \le D'' s \int_{\mathbb{X}} \frac{e^{-s|y|}}{s^2 |y^J|^2} \left(1+ |\log (s|y|)|^{2(m-1)}\right) s^2 |y^J|^2 \left(1+ |\log (s|y|)|^{12(d-1)-2m}\right) {\,\mathrm d} y\\ &=\mathcal{O}(\log^{d-1} s) \end{align*} for some constant $D''$ depending on $d$ and $m$, where the last step is argued similarly as for \eqref{eq:aux}. Summing over all possible $I \subseteq\{1,\dots,d\}$ yields the desired conclusion. \end{proof} \begin{lemma}\label{lem:intg2s} For $\beta\in(0,1/2)$, $\zeta\in(0,\beta)$ and $f_{\beta}$ defined at \eqref{eq:fa}, \begin{align*} s \int_{\mathbb{X}} f_{\beta}(x_1)^2 {\,\mathrm d} x_1 =\mathcal{O}(\log^{d-1} s). \end{align*} \end{lemma} \begin{proof} As in Lemma~\ref{lem:intg1s}, we consider integrals of squares of $f_\beta^{(i)}$ for $i=1,2,3$ separately. By \eqref{eq:c1}, \begin{displaymath} s\int_{\mathbb{X}} \left(s\int_{\mathbb{X}} e^{-\beta r_s(x_2,x_1)}{\,\mathrm d} x_2\right)^2 {\,\mathrm d} x_1 =s \int_{\mathbb{X}} \left(s \int_\mathbb{X} \mathds{1}_{x_2 \succ x_1} e^{- \beta s |x_2|} {\,\mathrm d} x_2\right)^2 {\,\mathrm d} x_1 =\mathcal{O}(\log^{d-1} s).
\end{displaymath} Arguing as in \eqref{eq:2.1}, using monotonicity of $c_{\zeta,s}$, $\zeta<\beta$, and \eqref{eq:c1}, we have \begin{align*} s \int_{\mathbb{X}} \left(s \int_{\mathbb{X}}c_{\zeta,s}(x_2)^5 e^{- \beta r_s(x_2,x_1)} {\,\mathrm d} x_2\right)^2 {\,\mathrm d} x_1 & \le s \int_{\mathbb{X}} c_{\zeta,s} (x_1)^{10} \left(s \int_\mathbb{X} \mathds{1}_{x_2 \succ x_1} e^{- \beta s |x_2|} {\,\mathrm d} x_2\right)^2 {\,\mathrm d} x_1\\ & \le s \int_{\mathbb{X}} c_{\zeta,s} (x_1)^{12}{\,\mathrm d} x_1 =\mathcal{O}(\log^{d-1} s). \end{align*} Recalling that $G_s(y) \le 3 + 2 c_{\zeta,s}(y)^5$, combining the above bounds and using Jensen's inequality yield \begin{displaymath} s \int_{\mathbb{X}} f_{\beta}^{(1)}(x_1)^2 {\,\mathrm d} x_1 =\mathcal{O}(\log^{d-1} s). \end{displaymath} Next, we integrate the square of $f_{ \beta}^{(3)}$. Using \eqref{eq:cl1} and Lemma~\ref{lem:intbd}, \begin{multline}\label{eq:p1} s \int_{\mathbb{X}} \left(s \int_{\mathbb{X}} q_s(x_1,x_2)^{\beta} {\,\mathrm d} x_2\right)^2 {\,\mathrm d} x_1 = s \int_{\mathbb{X}} \left(s \int_{\mathbb{X}} c_{1,s}(x_1\vee x_2)^{ \beta} {\,\mathrm d} x_2\right)^2 {\,\mathrm d} x_1\\ \le 2s \int_{\mathbb{X}}\left(s \int_\mathbb{X} e^{-\beta s |x_1\vee x_2|} {\,\mathrm d} x_2 \right)^2 {\,\mathrm d} x_1 + 2s \int_{\mathbb{X}} \left(s \int_\mathbb{X} c_{\beta,s}(x_1\vee x_2) {\,\mathrm d} x_2\right)^2 {\,\mathrm d} x_1=\mathcal{O}(\log^{d-1} s).
\end{multline} Again using \eqref{eq:cl1}, \begin{align*} s &\int_{\mathbb{X}} \left(s \int_{\mathbb{X}} c_{\zeta,s}(x_2)^5 q_s(x_1,x_2)^{\beta} {\,\mathrm d} x_2\right)^2 {\,\mathrm d} x_1 = s \int_{\mathbb{X}} \left( s \int_{\mathbb{X}}c_{\zeta,s}(x_2)^5 c_{1,s}(x_1\vee x_2)^{\beta} {\,\mathrm d} x_2\right)^2 {\,\mathrm d} x_1\\ &\le 2s \int_{\mathbb{X}} \left(s \int_{\mathbb{X}}c_{\zeta,s}(x_2)^5 e^{-\beta s |x_1\vee x_2|} {\,\mathrm d} x_2\right)^2 {\,\mathrm d} x_1 + 2s \int_{\mathbb{X}} \left(s \int_{\mathbb{X}}c_{\zeta,s}(x_2)^5 c_{\beta,s}(x_1\vee x_2) {\,\mathrm d} x_2\right)^2 {\,\mathrm d} x_1\\ &=:2(A_1 + A_2). \end{align*} By Lemma~\ref{lem:intbd1}, we have $A_1 =\mathcal{O}(\log^{d-1} s)$. For $x_1\in\mathbb{X}$ and $(x_{21},x_{22})\in\mathbb{X}^2$, denote \begin{multline*} A(x_1,x_{21}, x_{22}):=\Big\{(z_1,\dots,z_{12})\in\mathbb{X}^{12}:\\ z_1,\dots,z_5 \succ x_{21},\; z_6,\dots,z_{10} \succ x_{22},\; z_{11} \succ x_1\vee x_{21},\; z_{12} \succ x_1\vee x_{22}\Big\}. \end{multline*} By applying \eqref{eq:l-inequality} twice we have \begin{displaymath} |a \wedge x|\; |b\wedge y|\; |x\wedge y| \leq |a\wedge b\wedge x\wedge y|\; |x|\; |y| \leq |a\wedge b\wedge x\wedge y|\, (|x|+|y|)^2, \quad a,b,x,y\in\mathbb{X}.
\end{displaymath} Using this with $a:=z_1 \wedge\dots\wedge z_5$, $b:=z_6 \wedge\dots\wedge z_{10}$, $x:=z_{11}$, $y:=z_{12}$ in the third step, \eqref{eq:crep} in the penultimate step, and \eqref{eq:c1} in the last one, we obtain \begin{align*} A_2 &\le s^{15} \int_{\mathbb{X}} \int_{\mathbb{X}^2} \int_{A(x_1,x_{21},x_{22})} e^{-\zeta s \sum_{i=1}^{12} |z_i| } {\,\mathrm d} (z_1,\dots,z_{12}) {\,\mathrm d} (x_{21}, x_{22}) {\,\mathrm d} x_1\\ &= s^{15} \int_{\mathbb{X}^{12}} e^{-\zeta s \sum_{i=1}^{12} |z_i|} |z_1 \wedge\dots\wedge z_5 \wedge z_{11}|\; |z_6 \wedge\dots\wedge z_{10} \wedge z_{12}|\; |z_{11} \wedge z_{12}| {\,\mathrm d} (z_1,\dots,z_{12})\\ &\le s^{15} \int_{\mathbb{X}^{12}} e^{-\zeta s \sum_{i=1}^{12} |z_i|} |z_1 \wedge\dots\wedge z_{12}|\; \big(|z_{11}| + |z_{12}|\big)^2 {\,\mathrm d} (z_1,\dots,z_{12})\\ & \le (8/\zeta^2) s^{13} \int_{\mathbb{X}^{12}} e^{-\zeta s \sum_{i=1}^{12} |z_i|/2}|z_1\wedge\dots\wedge z_{12}| {\,\mathrm d} (z_1,\dots,z_{12})\\ &= (8/\zeta^2) s\int_\mathbb{X} c_{\zeta/2,s}(x)^{12} {\,\mathrm d} x =\mathcal{O}(\log^{d-1} s), \end{align*} where for the last inequality we have used that \begin{displaymath} s^2 \big(|z_{11}|+|z_{12}|\big)^2 e^{-\zeta s \sum_{i=1}^{12} |z_i|/2 } \le 8/\zeta^2. \end{displaymath} Combining the bounds on $A_1$ and $A_2$ with \eqref{eq:p1} yields that \begin{displaymath} s \int_{\mathbb{X}} f_{\beta}^{(3)}(x_1)^2 {\,\mathrm d} x_1 =\mathcal{O}(\log^{d-1} s).
\end{displaymath} For the integral of the square of $f_{\beta}^{(2)}$, arguing as in Lemma~\ref{lem:intg1s} and using the inequality $t^2 e^{-t} \le 2$ for $t\geq0$, we have \begin{multline*} s\int_{\mathbb{X}} \left(s\int_{\mathbb{X}} e^{-\beta r_s(x_1,x_2)}{\,\mathrm d} x_2 \right)^2 {\,\mathrm d} x_1 = s \int_{\mathbb{X}} \left(s \int_\mathbb{X} \mathds{1}_{x_2 \prec x_1} e^{-\beta s |x_1|} {\,\mathrm d} x_2\right)^2 {\,\mathrm d} x_1\\ = (s/\beta^2) \int_{\mathbb{X}} (\beta s|x_1|)^2\; e^{-2\beta s |x_1|}{\,\mathrm d} x_1 \le (2s/\beta^2) \int_{\mathbb{X}} e^{-\beta s |x_1|}{\,\mathrm d} x_1 =\mathcal{O}(\log^{d-1} s). \end{multline*} Changing the order of integration and using $c_{2\beta,s} \le c_{\beta,s}$ in the second step, the Cauchy--Schwarz inequality in the third one, and \eqref{eq:c1} in the last step yield that \begin{align*} s&\int_{\mathbb{X}} \left(s\int_{\mathbb{X}} c_{\zeta,s}(x_2)^5 e^{-\beta r_s(x_1,x_2)}{\,\mathrm d} x_2 \right)^2 {\,\mathrm d} x_1 =s \int_{\mathbb{X}} \left(s \int_\mathbb{X} \mathds{1}_{x_2 \prec x_1} c_{\zeta,s} (x_2)^5 e^{-\beta s |x_1|} {\,\mathrm d} x_2\right)^2 {\,\mathrm d} x_1\\ &\le s^2 \int_{\mathbb{X}^2} c_{\zeta,s}(x)^5 c_{\zeta,s}(y)^5 c_{\beta,s}(x \vee y) {\,\mathrm d} (x,y)\le \left(s\int_\mathbb{X} c_{\zeta,s}(x)^{10} {\,\mathrm d} x\right)^{1/2}A_2^{1/2} =\mathcal{O}(\log^{d-1} s), \end{align*} where $A_2$ is defined above. Thus, \begin{displaymath} s \int_{\mathbb{X}} f_{\beta}^{(2)}(x_1)^2 {\,\mathrm d} x_1 =\mathcal{O}(\log^{d-1} s). \end{displaymath} Combining, we obtain the desired result. \end{proof} Since $2 \beta=2p/(32+4p)<1$, to compute the bound, it suffices to provide a bound on the integral of $(\kappa_s+g_{s})^{\beta} G_s$ for any $\beta \in (0,1)$. \begin{lemma}\label{lem:qgint} For $\beta,\zeta\in(0,1)$, let $G_s$ and $\kappa_s$ be as in \eqref{eq:g5} and \eqref{eq:p} respectively.
Then \begin{displaymath} s\int_\mathbb{X} G_s(x)\big(\kappa_s(x)+g_{s}(x)\big)^\beta {\,\mathrm d} x =\mathcal{O}(\log^{d-1} s). \end{displaymath} \end{lemma} \begin{proof} First note that \begin{displaymath} \kappa_s(x)=\mathbf{P}\{\xi_{s}(x, \mathcal{P}_{s}+\delta_x)\neq 0\}=e^{-s |x|}, \quad x \in \mathbb{X}. \end{displaymath} Using the Cauchy--Schwarz inequality in the second step, by \eqref{eq:mean} and \eqref{eq:c1}, \begin{multline*} s\int_\mathbb{X} G_s(x)\kappa_s(x)^\beta {\,\mathrm d} x \le 3s \int_\mathbb{X} (1+c_{\zeta,s}(x)^5) e^{-\beta s |x|} {\,\mathrm d} x\\ \le 3s \int_\mathbb{X} e^{-\beta s |x|} {\,\mathrm d} x + 3\left(s \int_\mathbb{X} c_{\zeta,s}(x)^{10} {\,\mathrm d} x\right)^{1/2} \left(s \int_\mathbb{X} e^{-2\beta s |x|} {\,\mathrm d} x\right)^{1/2} =\mathcal{O}(\log^{d-1} s). \end{multline*} Since $\beta \in (0,1)$, arguing as in \eqref{eq:cl1}, $$ c_{\zeta,s}(x)^\beta \le e^{-\beta \zeta s |x|} + c_{\beta \zeta,s}(x). $$ An application of \eqref{eq:mean} and \eqref{eq:c1} now yields \begin{align*} s\int_\mathbb{X} &G_s(x)g_{s}(x)^\beta {\,\mathrm d} x \le 3s \int_\mathbb{X} \Big(1+c_{\zeta,s}(x)^5\Big) c_{\zeta,s}(x)^\beta{\,\mathrm d} x\\ &\le 3s \int_\mathbb{X} e^{-\beta \zeta s |x|} {\,\mathrm d} x + 3s \int_\mathbb{X} c_{\beta \zeta,s}(x) {\,\mathrm d} x + 3s \int_\mathbb{X} c_{\zeta,s}(x)^{5+\beta}{\,\mathrm d} x=\mathcal{O}(\log^{d-1} s). \end{align*} Combining the above bounds, we obtain the desired conclusion. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:Pareto}.] By \eqref{eq:var}, $\Var(F_s) \ge C_1 \log^{d-1} s$ for all $s \ge 1$. An application of Theorem~\ref{thm:KolBd} with Lemmas~\ref{lem:intg1s}, \ref{lem:intg2s} and \ref{lem:qgint} now yields the desired upper bound.
The proof of the optimality of the bound on the Kolmogorov distance follows by a general argument employed in the proof of \cite[Theorem~1.1, Eq.~(1.6)]{EG81}, which shows that the Kolmogorov distance between any integer-valued random variable, suitably normalized, and a standard normal random variable is always lower bounded by a constant times the inverse of the standard deviation; see Section~6 therein for further details. The variance upper bound in \eqref{eq:var} now yields the result. \end{proof} \section{Non-diffuse intensity measures and unbounded scores} \label{sec:ex} As discussed in the introduction, in addition to working with general stabilization regions, our approach generalizes results in \cite{LSY19} in two more ways. First, we allow for non-diffuse intensity measures and, second, we can consider score functions that do not have uniformly bounded moments over $x \in \mathbb{X}$. In this section, we demonstrate this with two examples. In Example~\ref{ex:1}, we consider a Poisson process on the two-dimensional integer lattice with the counting measure as the intensity, which is non-diffuse. We derive a quantitative central limit theorem for the number of isolated points in this setup. In Example~\ref{ex:2}, we consider isolated vertices in a random geometric graph built on a stationary Poisson process on $\mathbb{R}^d$, where two points are joined by an edge if the distance between them is at most $\rho_s$ for some appropriate non-negative function $\rho_s$, $s \ge 1$. Poisson convergence for the number of such isolated vertices in different regimes has been extensively studied, see, e.g., \cite[Ch.~8]{pen03}. Instead of the number of isolated vertices, however, we consider the sum of the values of a general function evaluated at the locations of isolated vertices, for instance, the logarithms of scaled norms. As the logarithm is unbounded near the origin, the score functions do not admit a uniform bound on their moments.
We note here that in both examples below, it should be possible to work with a binomial process as well, once a result paralleling our Theorem~\ref{thm:Main} is proved in that setting. As mentioned in the introduction, this can be done by following the scheme in \cite{LSY19} suitably adapted to incorporate general stabilization regions. \begin{example}[Non-diffuse intensity] \label{ex:1} Let $\mathbb{X}:=\mathbb{Z}^2$ and consider a Poisson process $\mathcal{P}$ on $\mathbb{Z}^2$ with the intensity measure $\mathbb{Q}$ being the counting measure on $\mathbb{Z}^2$; accordingly, we let $s=1$ and omit it from the subscripts. A point $x \in \mathcal{P}$ is said to be isolated in $\mathcal{P}$ if all its nearest neighbors are unoccupied, i.e., $\mathcal{P}(x+B)=0$, where $+$ denotes Minkowski addition and $B:=\{(0,\pm1), (\pm1,0)\}$, so that $x+B$ is the set comprising the 4 nearest neighbors of $x\in\mathbb{Z}^2$. Consider a weight function $w: \mathbb{Z}^2 \to \mathbb{R}_+$, and for $i \in \mathbb{N}$ denote $$W_i:=\sum_{x\in\mathbb{Z}^2} w(x)^i.$$ Assume that $W_1=\sum_{x\in\mathbb{Z}^2} w(x)<\infty$, which in particular implies that $w$ is bounded. Rescaling $w$, we assume without loss of generality that $w$ is bounded by one. Consider the statistic $H\equiv H_1(\mathcal{P}_1)$ defined at \eqref{eq:hs} with \begin{displaymath} \xi(x, \mathcal{P}):=w(x) \mathds{1}_{\mathcal{P}(x+B)=0}\;, \quad x\in\mathcal{P}. \end{displaymath} For $x \in \mathbb{Z}^2$, defining the stabilization region $R(x,\mathcal{P}+\delta_x):=x+B$ if $x$ is isolated in $\mathcal{P}+\delta_x$ and $R(x,\mathcal{P}+\delta_x):=\emptyset$ otherwise, we see that \eqref{eq:ximon} and (A1) are trivially satisfied. Also, (A2) holds with $p=1$ and $M_{1,1}(x)=w(x)$, while \eqref{eq:Rs} holds with $r(x,y)=4$ for $x\in \mathbb{Z}^2$ and $y \in x+B$ and $r(x,y)=\infty$ otherwise.
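As a quick consistency check (a standard application of the Mecke formula, using that the four neighboring sites of any given site are vacant independently, each with probability $e^{-1}$), the mean of $H$ is
\begin{displaymath}
\mathbf{E} H = \sum_{x\in\mathbb{Z}^2} w(x)\, \mathbf{P}\big(\mathcal{P}(x+B)=0\big) = e^{-4} W_1,
\end{displaymath}
which is the identity behind the term $(\mathbf{E} H)^2$ in the variance computation below.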
Next, notice that $\kappa(y)=e^{-4}$, $y \in \mathbb{Z}^2$, $\zeta=1/50$, \begin{displaymath} g(y)=\sum_{x \in y+B, x\in\mathbb{Z}^2} e^{-4/50}=4 e^{-4/50}\; \text{ and } \; h(y)=\sum_{x \in y+B, x\in\mathbb{Z}^2} w(x)^{9/2}e^{-4/50},\qquad y \in \mathbb{Z}^2, \end{displaymath} while $q(x_1,x_2) \le 4e^{-4}$ for $x_1,x_2 \in \mathbb{Z}^2$ with $x_2-x_1\in B+B$ and $q(x_1,x_2)=0$ otherwise. Noticing that $\max\{w(x)^2,w(x)^4, w(x)^{9/2}\}=w(x)^2$, we obtain that for all $\alpha>0$, there exists a constant $C_\alpha$ such that \begin{displaymath} f_\alpha(y) \le C_\alpha \sum_{x-y\in (B+B) \cup (B+B+B), x,y\in\mathbb{Z}^2} w(x)^2. \end{displaymath} Thus, with $\beta=1/36$, there exists a constant $C>0$ such that \begin{displaymath} \mathbb{Q} f_\beta^2 \le C W_4, \quad \max\{\mathbb{Q} f_{2\beta}, \mathbb{Q} ((\kappa+g)^{2\beta}G) \} \le C W_2. \end{displaymath} On the other hand, by the Mecke formula, we have \begin{align*} \Var(H) &= \mathbf{E} \sum_{x \in \mathcal{P}} w^2(x) \mathds{1}_{\mathcal{P}(x+B)=0} - (\mathbf{E} H)^2\\ &\qquad + \sum_{x \in \mathbb{Z}^2} \sum_{y \in (x+B)^c, y\in\mathbb{Z}^2} w(x)w(y)\mathbf{P}\Big((\mathcal{P}+\delta_x+\delta_y)\big((x+B)\cup(y+B)\big)=0\Big)\\ & = e^{-4}W_2 - e^{-8} \sum_{x \in \mathbb{Z}^2} \sum_{y \in (x+B)} w(x)w(y) \\ & \qquad+ \sum_{x \in \mathbb{Z}^2} \sum_{y \in (x+B)^c, y\in\mathbb{Z}^2} w(x)w(y)\Big(\mathbf{P}\big(\mathcal{P}\big((x+B)\cup(y+B)\big)=0\big) - e^{-8}\Big)\\ & \ge e^{-4}W_2 + (e^{-7} - e^{-8})\sum_{x \in \mathbb{Z}^2} \sum_{y-x \in (B+B)} w(x)w(y)- e^{-8} \sum_{x \in \mathbb{Z}^2} \sum_{y \in (x+B)} w(x)w(y). \end{align*} Finally, noticing that $$ \sum_{x \in \mathbb{Z}^2} \sum_{y \in (x+B)} w(x)w(y) \le \sum_{x \in \mathbb{Z}^2} \sum_{y \in (x+B)} \frac{w(x)^2 + w(y)^2}{2} = 4 W_2, $$ we obtain $$ \Var(H) \ge (e^{-4} - 4 e^{-8})W_2.
$$ Hence, an application of Theorem~\ref{thm:KolBd} yields that \begin{align*} \max&\left\{d_{W}\left(\frac{H-\mathbf{E} H}{\sqrt{\Var H}},N\right) ,d_{K}\left(\frac{H-\mathbf{E} H}{\sqrt{\Var H}},N\right) \right\}\\ &\qquad \leq \frac{C}{(W_2)^{1/2}}\left[1+\sqrt{\frac{W_4}{W_2}} + \frac{1}{W_2^{1/4}}\right] \le \frac{C}{(W_2)^{1/2}} \left[2+ \frac{1}{W_2^{1/4}}\right] , \end{align*} for some constant $C>0$, where the final step is due to the observation that $W_4 \le W_2$. As an example, one can take $w(x):=\mathds{1}_{x \in [-n,n]^2}$ for $n \in \mathbb{N}$ to see that the distances on the left-hand side are bounded by $C/n$, which is presumably optimal, since the variance is of the order $n^2$. In particular, arguing as in the proof of Theorem~\ref{thm:Pareto}, the bound on the Kolmogorov distance is of optimal order in this case. \end{example} \begin{example}[Weighted sum over isolated vertices in random geometric graphs]\label{ex:2} Let $\mathbb{X} := \mathbb{R}^d$ with $d \ge 2$, and let $\mathcal{P}_s$ be a Poisson process on $\mathbb{X}$ with intensity measure $s \mathbb{Q}$ for $s \ge 1$, where $\mathbb{Q}$ is the Lebesgue measure. Fix $s \ge 1$. Given $\rho_s>0$, consider the random geometric graph $G_s(\mathcal{P}_s,\rho_s)$ with vertex set $\mathcal{P}_s$, where an edge joins two distinct vertices $x$ and $y$ if $\|x-y\| \le \rho_s$, with $\|\cdot\|$ denoting the Euclidean norm. A vertex $x \in \mathcal{P}_s$ is called isolated if $\mathcal{P}_s(B(x,\rho_s) \setminus \{x\} )=0$, where $B(x,\rho_s)$ denotes the closed ball of radius $\rho_s$ centered at $x$. For a (possibly unbounded) weight function $w_s: \mathbb{R}^d \to \mathbb{R}_+$ with $\int_{\mathbb{R}^d} \max\{w_s(x),w_s(x)^8\}{\,\mathrm d} x<\infty$, consider the statistic $H_s$ defined at \eqref{eq:hs} with \begin{displaymath} \xi_s(x,\mathcal{P}_s):=w_s(x) \mathds{1}_{x\text{ is isolated in $\mathcal{P}_s$}}, \quad x \in \mathcal{P}_s.
\end{displaymath} For $x \in \mathbb{X}$, letting $R_s(x,\mathcal{P}_s+\delta_x):=B(x,\rho_s)$ if $x$ is isolated in $\mathcal{P}_s+\delta_x$ and $\emptyset$ otherwise, we see that \eqref{eq:ximon} and (A1) are satisfied. As in Example~\ref{ex:1}, (A2) holds with $p=1$ and $M_s(x)=M_{s,1}(x):=w_s(x)$. Letting $r_s(x,y):=k_d s \rho_s^d$ for $x\in \mathbb{R}^d$ and $y \in B(x,\rho_s)$, where $k_d$ is the volume of the unit ball in $\mathbb{R}^d$, and $r_s(x,y):=\infty$ otherwise, one verifies \eqref{eq:Rs}. Clearly, $\kappa_s(y)\le e^{-k_d s \rho_s^d}$ for $y \in \mathbb{R}^d$. Also, with $\zeta=1/50$, one has \begin{displaymath} g_s(y) = k_d s\rho_s^d e^{-k_ds \rho_s^d /50}\; \text{ and } \; h_s(y)=s e^{-k_ds \rho_s^d /50} \int_{B(y,\rho_s)} w_s(x)^{9/2} {\,\mathrm d} x, \quad y \in \mathbb{R}^d, \end{displaymath} while $q_s(x_1,x_2) \le k_d s \rho_s^d e^{-k_d s \rho_s^d}$ for $x_1,x_2 \in \mathbb{R}^d$ with $\|x_2-x_1\|\le 2 \rho_s$ and $q_s(x_1,x_2)=0$ otherwise. Next, we compute the variance of $H_s$. Denote $W_{i,s}:=s \int_{\mathbb{R}^d} w_s(x)^i {\,\mathrm d} x$, $i\in\mathbb{N}$. Applying the Mecke formula in the first equality, we obtain \begin{align*} \Var(H_s) &=s\int_{\mathbb{R}^d} w_s(x)^2 e^{-k_d s \rho_s^d} {\,\mathrm d} x- \left(s \int_{\mathbb{R}^d}w_s(x) e^{-k_d s \rho_s^d} {\,\mathrm d} x\right)^2\\ &\qquad + s^2 \int_{\mathbb{R}^d} \int_{B(x,\rho_s)^c} w_s(x)w_s(y) \exp\left\{-s\operatorname{Vol}(B(x,\rho_s) \cup B(y,\rho_s))\right\} {\,\mathrm d} y {\,\mathrm d} x \\ & \ge e^{-k_d s \rho_s^d}W_{2,s} - s^2 e^{-2k_d s \rho_s^d}\int_{\mathbb{R}^d} \int_{\mathbb{R}^d \cap B(x,\rho_s)} w_s(x)w_s(y) {\,\mathrm d} y{\,\mathrm d} x.
\end{align*} As in the previous example, \begin{displaymath} s^2 \int_{\mathbb{R}^d} \int_{\mathbb{R}^d \cap B(x,\rho_s)} w_s(x)w_s(y) {\,\mathrm d} y {\,\mathrm d} x \le k_d s\rho_s^d W_{2,s}, \end{displaymath} so that \begin{displaymath} \Var(H_s) \ge e^{-k_d s \rho_s^d}(1- k_d s \rho_s^d e^{-k_d s \rho_s^d}) W_{2,s} \ge \frac{1}{2} e^{-k_ds \rho_s^d}W_{2,s}, \end{displaymath} where in the last step we have used that $ue^{-u} \le 1/2$ for $u \ge 0$. Denoting $\bar w_s:=\max\{w_s^2,w_s^4, w_s^{9/2}\}$, it is straightforward to check that \begin{displaymath} f_\alpha (y) \le C s e^{-\alpha k_d s \rho_s^d} \int_{B(y,3\rho_s)} \bar w_s(x) {\,\mathrm d} x \end{displaymath} for $\alpha>0$ and a constant $C>0$, so that by Jensen's inequality, \begin{displaymath} f_\alpha (y)^2 \le C^2 3^d k_d s^2 \rho_s^d e^{-2\alpha k_d s \rho_s^d} \int_{B(y,3\rho_s)} \bar w_s(x)^2 {\,\mathrm d} x. \end{displaymath} Thus, letting $\overline W_{i,s}:=s\int_{\mathbb{R}^d} \bar w_s(x)^i {\,\mathrm d} x$, $i \in \mathbb{N}$, and $\beta=1/36$, and using again that $ue^{-u} \le 1/2$ for $u \ge 0$, we have that there exists a constant $C_d$ depending only on the dimension $d$ such that \begin{displaymath} s\mathbb{Q} f_\beta^2 \le C_d \overline W_{2,s},\quad\text{ and }\quad \max\{s\mathbb{Q} f_{2\beta}, s\mathbb{Q} ((\kappa_s+g_s)^{2\beta}G_s) \} \le C_d \overline W_{1,s}.
\end{displaymath} Thus, applying Theorem~\ref{thm:KolBd}, we obtain for $s \ge 1$ that \begin{align*} \max&\left\{d_{W}\left(\frac{H_s-\mathbf{E} H_s}{\sqrt{\Var H_s}},N\right), d_{K}\left(\frac{H_s-\mathbf{E} H_s}{\sqrt{\Var H_s}},N\right) \right\}\\ &\qquad \leq C'_d \Bigg[\frac{\overline W_{2,s}^{1/2} + \overline W_{1,s}^{1/2}}{e^{-k_d s \rho_s^d} W_{2,s}} +\frac{\overline W_{1,s}}{(e^{-k_d s \rho_s^d} W_{2,s})^{3/2}} +\frac{\overline W_{1,s}^{5/4} + \overline W_{1,s}^{3/2}}{(e^{-k_d s \rho_s^d} W_{2,s})^{2}}\Bigg] \end{align*} for some constant $C'_d>0$ depending only on the dimension. The setting extends easily to functions $\rho_s$ that depend on the position $x$ (see \cite{iyer:thac12}) and/or are random variables which, together with the locations, form a Poisson process on the product space. As an example, consider the logarithmic weight function $w_s(x):=\log \frac{s}{\|x\|} \mathds{1}_{x \in B(0,s)}$. For $i \in \mathbb{N}$, \begin{displaymath} W_{i,s}=s \int_{B(0,s)} \log^i \frac{s}{\|x\|} {\,\mathrm d} x =dk_d s \int_0^s r^{d-1} \log^i \frac{s}{r} {\,\mathrm d} r = dk_d s^{d+1} \int_0^1 z^{d-1} \log^i \frac{1}{z} {\,\mathrm d} z = \mathcal{O}(s^{d+1}), \end{displaymath} where the final integral equals $i!/d^{i+1}$, as the substitution $z=e^{-u}$ shows. Consequently, $\overline W_{i,s}=\mathcal{O}(s^{d+1})$ for all $i \in \mathbb{N}$. Hence, in the regime where $s\rho_s^d-(d+1)(2k_d)^{-1}\log s \to -\infty$ as $s \to \infty$, one obtains Gaussian convergence as $s \to \infty$ with an appropriate non-asymptotic bound on the Wasserstein or Kolmogorov distance between the normalized $H_s$ and a standard normal random variable $N$. \end{example} \section{Modified bounds on the Wasserstein and Kolmogorov distances and proof of Theorem~\ref{thm:KolBd}}\label{sec:Proof} In this section, we prove Theorem~\ref{thm:KolBd}.
The proof is primarily based on the following generalization of Theorem~6.1 in \cite{LPS16}, incorporating a spatially inhomogeneous moment bound given by a function $c_x$, $x \in \mathbb{X}$. The proof, which we present in the Appendix for completeness, follows closely that of \cite[Theorem~6.1]{LPS16}. Let $\mathcal{P}$ be a Poisson process on a measurable space $(\mathbb{X}, \mathcal{F})$ with a $\sigma$-finite intensity measure $\nu$. Let $F:=f(\mathcal{P})$ be a measurable function of $\mathcal{P}$. For $x,y \in \mathbb{X}$, define the first and second order difference operators as $D_x F:=f(\mathcal{P}+\delta_x)-f(\mathcal{P})$ and $D_{x,y}^2 F :=D_x(D_y F)$. Also, denote by $\operatorname{dom} D$ the collection of functions $F \in L_\mathcal{P}^2$ with \begin{displaymath} \mathbf{E} \int_\mathbb{X} \left(D_{x} F\right)^{2} \nu({\mathrm d} x)<\infty. \end{displaymath} \begin{theorem}\label{thm:Main} Let $F \in \operatorname{dom} D$ be such that $\Var F>0$. Assume that there exists a $q>0$ such that, for all $\mu\in\mathbf{N}$ with $\mu(\mathbb{X}) \le 1$, \begin{displaymath} \mathbf{E}\left|D_{x} F(\mathcal{P}+\mu)\right|^{4+q} \leq c_{x} \quad \text{for $\nu$-a.e.\;} x \in \mathbb{X}, \end{displaymath} where $c_x$ is a measurable function of $x\in\mathbb{X}$.
Then \begin{multline*} d_{W}\left(\frac{F-\mathbf{E} F}{\sqrt{\Var F}}, N\right)\\ \ \leq\; \frac{12}{\Var F}\left[\int_\mathbb{X}\left(\int_\mathbb{X} c_{x_1}^{2 /\left(4+q\right)} \mathbf{P}\big(D_{x_{1}, x_{2}}^{2} F \neq 0\big)^{q /\left(16+4 q\right)} \nu({\mathrm d} x_{1})\right)^{2} \nu({\mathrm d} x_{2})\right]^{1 / 2} +\frac{\Gamma_{F}}{(\Var F)^{3 / 2}}, \end{multline*} and \begin{align*} d_{K}\left(\frac{F-\mathbf{E} F}{\sqrt{\Var F}}, N\right) \leq\; & \frac{12}{\Var F}\left[\int_\mathbb{X}\left(\int_\mathbb{X} c_{x_1}^{2 /\left(4+q\right)} \mathbf{P}\big(D_{x_{1}, x_{2}}^{2} F \neq 0\big)^{q /\left(16+4 q\right)} \nu({\mathrm d} x_{1})\right)^{2} \nu({\mathrm d} x_{2})\right]^{1 / 2} \\ &+\frac{\Gamma_{F}^{1 / 2}}{\Var F}+\frac{2\Gamma_{F}}{(\Var F)^{3 / 2}} +\frac{\Gamma_{F}^{5 / 4}+2 \Gamma_{F}^{3 / 2}}{(\Var F)^{2}} \\ &+\frac{12}{\Var F}\left[\int_{\mathbb{X}^2} c_{x_1}^{4 /\left(4+q\right)} \mathbf{P}\big(D_{x_{1}, x_{2}}^{2} F \neq 0\big)^{q /\left(8+2 q\right)} \nu^{2}({\mathrm d} (x_{1}, x_{2}))\right]^{1 / 2}, \end{align*} with \begin{displaymath} \Gamma_{F}:=\int_\mathbb{X} \max\{c_x^{2/(4+q)},c_x^{4/(4+q)}\} \mathbf{P}\big(D_{x} F \neq 0\big)^{q /\left(8+2 q\right)} \nu({\mathrm d} x). \end{displaymath} \end{theorem} For a proof of this result, see the Appendix. We derive Theorem~\ref{thm:KolBd} from Theorem~\ref{thm:Main} by proving a series of lemmas, following the general structure of the proof of Theorem~2.1(a) in \cite{LSY19}. However, our setting is more versatile, enabling us to handle new examples. The first lemma is an exact restatement of \cite[Lemma~5.2]{LSY19}, which is also contained in Remark~6.2 of \cite{LPS16}. Recall the definition of $H_s$ given at \eqref{eq:hs}.
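For ease of reference, we note that the statistic defined at \eqref{eq:hs} is of the form
\begin{displaymath}
H_s(\mu)=\sum_{x \in \mu} \xi_s(x,\mu)
\end{displaymath}
for a counting measure $\mu$; the identities in the next lemma follow by writing out the first and second order difference operators of this sum.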
\begin{lemma}\label{lem:D} For $s \ge 1$, $\mu \in \mathbf{N}$ and $y, y_1, y_2 \in \mathbb{X}$, \begin{displaymath} D_{y} H_{s}(\mu)=\xi_{s}(y, \mu +\delta_y)+\sum_{x \in \mu} D_{y} \xi_{s}(x, \mu) \end{displaymath} and \begin{displaymath} D_{y_{1}, y_{2}}^{2} H_{s}(\mu)= D_{y_{1}} \xi_{s} \left(y_{2}, \mu +\delta_{y_{2}}\right) +D_{y_{2}} \xi_{s}\left(y_{1}, \mu+\delta_{y_{1}}\right) +\sum_{x \in \mu} D_{y_{1}, y_{2}}^{2} \xi_{s}(x, \mu). \end{displaymath} \end{lemma} The next lemma shows that the difference operator $D_y$ vanishes if $y$ lies outside the stabilization region. \begin{lemma}\label{lem:Dnull} Assume that (A1) holds and let $\mu \in \mathbf{N}$ and $x,y, y_1, y_2 \in \mathbb{X}$. Then for $s \ge 1$, \begin{displaymath} D_y \xi_s (x,\mu +\delta_x)=0 \, \text{ if }\, y \not \in R_s(x, \mu+\delta_x), \end{displaymath} and \begin{displaymath} D_{y_1,y_2}^2 \xi_s(x,\mu+\delta_x)=0 \text{ if }\, \{y_1,y_2\} \not \subseteq R_s(x, \mu+\delta_x). \end{displaymath} \end{lemma} \begin{proof} By (A1.4), \begin{align*} D_y \xi_s (x,\mu+\delta_x) &=\xi_s (x,\mu+\delta_x+\delta_y) - \xi_s (x,\mu+\delta_x)\\ &=\xi_s \Big(x,(\mu+\delta_x+\delta_y)_{R_s(x,\mu+\delta_x+\delta_y)}\Big) - \xi_s\Big(x,(\mu+\delta_x)_{R_s(x,\mu+\delta_x)}\Big). \end{align*} If $(\mu+\delta_x)_{R_s(x,\mu+\delta_x)}=0$, then by the monotonicity property (A1.2), for $y \notin R_s(x,\mu+\delta_x)$ we have $(\mu+\delta_x+\delta_y)_{R_s(x,\mu+\delta_x+\delta_y)}=0$, yielding $D_y \xi_s (x,\mu+\delta_x)=0$. If $(\mu+\delta_x)_{R_s(x,\mu+\delta_x)}\neq 0$, then (A1.3) implies that $(\mu+\delta_x+\delta_y)_{R_s(x,\mu+\delta_x+\delta_y)}\neq 0$.
Thus, for $y \not \in R_s(x, \mu+\delta_x)$, by (A1.4) and \eqref{eq:ximon} we have \begin{displaymath} \xi_s \Big(x,(\mu+\delta_x+\delta_y)_{R_s(x,\mu+\delta_x+\delta_y)}\Big) =\xi_s \Big(x,(\mu+\delta_x+\delta_y)_{R_s(x,\mu+\delta_x)}\Big)=\xi_s \Big(x,(\mu+\delta_x)_{R_s(x,\mu+\delta_x)}\Big), \end{displaymath} so that $D_y \xi_s (x,\mu+\delta_x)$ vanishes. Finally, by (A1.2), $y_1 \not \in R_s(x, \mu+\delta_x)$ implies $y_1 \not \in R_s(x, \mu+\delta_{y_2}+\delta_x)$. Hence, the second order difference operator vanishes, being an iteration of the first order one. If $y_2 \not \in R_s(x, \mu+\delta_x)$, a similar argument applies. \end{proof} The next lemma, which is similar to \cite[Lemma~5.4(a)]{LSY19}, provides a bound in terms of $M_s$ on the $(4+\varepsilon)$-th moment of the difference operator for any $\varepsilon \in (0,p]$, where $p \in (0,1]$ and $M_s$ are as in (A2). \begin{lemma}\label{lem:Dmom} Assume that (A2) holds. For all $\varepsilon \in (0,p]$, $s \ge 1$, $x,y \in \mathbb{X}$ and $\mu\in\mathbf{N}$ with $\mu(\mathbb{X}) \le 6$, \begin{displaymath} \mathbf{E}\Big|D_y \xi_{s}\big(x, \mathcal{P}_{s}+\delta_x +\mu\big)\Big|^{4+\varepsilon} \leq 2^{4+\varepsilon} M_s(x)^{4+\varepsilon}. \end{displaymath} \end{lemma} \begin{proof} By Jensen's inequality, H\"older's inequality and assumption (A2), \begin{align*} &\mathbf{E}\Big|D_y \xi_{s}\big(x, \mathcal{P}_{s}+\delta_x+\mu\big)\Big|^{4+\varepsilon}\\ & \le 2^{3+\varepsilon} \mathbf{E} \left(\left|\xi_{s}\left(x, \mathcal{P}_{s} +\delta_x+\delta_y+\mu\right)\right|^{4+\varepsilon} +\left|\xi_{s}\left(x, \mathcal{P}_{s} +\delta_x+\mu\right)\right|^{4+\varepsilon}\right) \le 2^{4+\varepsilon} M_s(x)^{4+\varepsilon}. \qedhere \end{align*} \end{proof} Recall the functions $g_s$ and $h_s$ defined at \eqref{eq:g}. \begin{lemma}\label{lem:1MomD} Assume that (A1) and (A2) hold.
Then, there exists a constant $C_{p} \in [1,\infty)$ depending only on $p$ such that \begin{displaymath} \mathbf{E} \Big|D_y H_s(\mathcal{P}_s +\mu)\Big|^{4+p/2} \le C_{p} \left[M_s^{4+p/2}(y) + h_s(y)(1+g_s(y)^4) \right] \end{displaymath} for all $y \in \mathbb{X}$, $\mu\in\mathbf{N}$ with $\mu(\mathbb{X})\le 1$, and $s \ge 1$. \end{lemma} \begin{proof} Let $\varepsilon:=p/2$. We argue as in \cite{LSY19}. For $\mu=0$, using Lemma~\ref{lem:D} followed by Jensen's inequality, \begin{align*} \mathbf{E}\left|D_{y} H_{s}(\mathcal{P}_{s})\right|^{4+\varepsilon} &=\mathbf{E}\left|\xi_{s}(y, \mathcal{P}_{s}+\delta_y) +\sum_{x \in \mathcal{P}_{s}} D_{y} \xi_{s}(x, \mathcal{P}_{s})\right|^{4+\varepsilon} \\ &\leq 2^{3+\varepsilon} \mathbf{E}\left|\xi_{s}(y, \mathcal{P}_{s}+\delta_y)\right|^{4+\varepsilon}+2^{3+\varepsilon} \mathbf{E}\left|\sum_{x \in \mathcal{P}_{s}} D_{y} \xi_{s}(x,\mathcal{P}_{s})\right|^{4+\varepsilon}. \end{align*} By (A2), the first summand is bounded by $2^{3+\varepsilon}M_s(y)^{4+\varepsilon}$. Following the argument in \cite[Lemma~5.5]{LSY19}, the second summand can be bounded as \begin{displaymath} 2^{3+\varepsilon} \mathbf{E}\left|\sum_{x \in \mathcal{P}_{s}} D_{y} \xi_{s} \left(x, \mathcal{P}_{s}\right)\right|^{4+\varepsilon} \le 2^{3+\varepsilon} (I_1+ 15 I_2 + 25 I_3 + 10 I_4 + I_5), \end{displaymath} where for $i \in \{1,\dots,5\}$, \begin{displaymath} I_{i}=\mathbf{E} \sum_{(x_{1},\dots,x_{i})\in \mathcal{P}_{s}^{i,\neq}} \mathds{1}_{D_{y}\xi_{s}(x_{j}, \mathcal{P}_{s})\neq 0,\, j=1,\dots,i} \big|D_{y} \xi_{s}(x_{1}, \mathcal{P}_{s})\big|^{4+\varepsilon}. \end{displaymath} Here $\mathcal{P}_{s}^{i, \neq}$ stands for the set of all $i$-tuples of distinct points of $\mathcal{P}_{s}$, where multiple points at the same location are considered to be distinct; the coefficients $1, 15, 25, 10, 1$ are the Stirling numbers of the second kind $S(5,i)$, which arise when a fivefold sum over points of $\mathcal{P}_s$ is grouped according to which indices coincide.
Applying the multivariate Mecke formula in the first equality, H\"older's inequality followed by Lemma~\ref{lem:Dmom} in the second step, and Lemma~\ref{lem:Dnull} and (A1.2) in the third step, we obtain for $1 \le i \le 5$, \begin{align*} I_i &=s^{i} \int_{\mathbb{X}^{i}}\mathbf{E}\Big[\mathds{1}_{D_{y} \xi_{s}(x_{j}, \mathcal{P}_{s}+\delta_{x_1}+\cdots+\delta_{x_i})\neq 0,\, j=1, \dots, i} \big|D_y \xi_s (x_1, \mathcal{P}_s+\delta_{x_1}+\cdots+\delta_{x_i})\big|^{4+\varepsilon} \Big] \mathbb{Q}^{i}({\mathrm d}(x_{1},\dots,x_{i}))\\ &\le s^{i} \int_{\mathbb{X}^{i}} (2M_s(x_1))^{4+\varepsilon} \prod_{j=1}^{i} \mathbf{P}\big(D_y \xi_s (x_j, \mathcal{P}_s+\delta_{x_1}+\cdots +\delta_{x_i})\neq0\big)^{\frac{p-\varepsilon}{4 i+p i}}\;\mathbb{Q}^{i}({\mathrm d}(x_{1},\dots,x_{i}))\\ & \le 2^{4+\varepsilon} s^{i} \int_{\mathbb{X}^{i}} M_s(x_1)^{4+\varepsilon} \prod_{j=1}^{i} \mathbf{P}\big(y \in R_s(x_j,\mathcal{P}_s+\delta_{x_j})\big)^{\frac{p-\varepsilon}{4 i+p i}} \;\mathbb{Q}^{i}({\mathrm d}(x_{1},\dots,x_{i})). \end{align*} By \eqref{eq:Rs}, \begin{align*} &2^{-4-\varepsilon}I_i \le \, s^{i} \int_{\mathbb{X}^{i}} M_s(x_1)^{4+\varepsilon} \prod_{j=1}^{i} \exp\Big\{-\frac{p-\varepsilon}{4 i+p i} r_{s}(x_j,y)\Big\} \mathbb{Q}^{i}({\mathrm d}(x_{1},\dots,x_{i}))\\ & \,= \left(s\int_{\mathbb{X}} \exp \Big\{-\frac{p-\varepsilon}{4 i+p i} r_{s}(x, y)\Big\} \mathbb{Q}({\mathrm d} x)\right)^{i-1} \left(s\int_{\mathbb{X}} M_s(x)^{4+\varepsilon} \exp \Big\{-\frac{p-\varepsilon}{4 i+p i} r_{s}(x, y)\Big\} \mathbb{Q}({\mathrm d} x)\right)\\ &\, \le \left(s\int_{\mathbb{X}} \exp \Big\{-\frac{p}{40+10 p} r_{s}(x, y)\Big\} \mathbb{Q}({\mathrm d} x)\right)^{i-1} \left(s\int_{\mathbb{X}} M_s(x)^{4+\varepsilon} \exp \Big\{-\frac{p}{40+10 p} r_{s}(x, y)\Big\} \mathbb{Q}({\mathrm d} x)\right) \\ &\,\le g_{s}(y)^{i-1} h_s(y), \end{align*} where $g_{s}$ and $h_s$ are defined at \eqref{eq:g}.
Since $g_{s}^{i-1} \le 1+g_{s}^4$ for all $i=1,\dots,5$, this proves the result for $\mu=0$. If $\mu(\mathbb{X})=1$, the proof is similar; see the proof of \cite[Lemma~5.5]{LSY19} for details. \end{proof} \begin{lemma}\label{lem:inner} Assume that (A1) holds. For any $\beta>0$, $s \ge 1$ and $x_2 \in \mathbb{X}$, \begin{displaymath} s \int_{\mathbb{X}} G_s(x_1)\mathbf{P}\big(D_{x_1,x_2}^{2} H_{s}(\mathcal{P}_{s}) \neq 0\big)^{\beta}\mathbb{Q}({\mathrm d} x_{1}) \leq 3^\beta f_\beta(x_2) \end{displaymath} with $f_\beta$ defined at \eqref{eq:fa}. \end{lemma} \begin{proof} As in the proof of \cite[Lemma~5.9(a)]{LSY19}, by Lemma~\ref{lem:D} and the Mecke formula, one has \begin{equation} \label{eq:2Dsum} \mathbf{P}\big(D_{x_1,x_2}^{2} H_{s}(\mathcal{P}_{s}) \neq 0\big) \le \mathbf{P}\big(D_{x_1} \xi_{s}(x_2,\mathcal{P}_{s}+\delta_{x_2})\neq 0\big) +\mathbf{P}\big(D_{x_2} \xi_{s}(x_1,\mathcal{P}_{s}+\delta_{x_1}) \neq 0\big) + T_{x_1,x_2,s}, \end{equation} where \begin{displaymath} T_{x_1,x_2,s}:= s \int_{\mathbb{X}} \mathbf{P}\big(D_{x_1,x_2}^{2} \xi_{s}(z,\mathcal{P}_{s}+\delta_z) \neq 0\big) \mathbb{Q}({\mathrm d} z). \end{displaymath} By Lemma~\ref{lem:Dnull} and \eqref{eq:Rs}, the first two summands on the right-hand side of \eqref{eq:2Dsum} are bounded by $e^{- r_{s}(x_2,x_1)}$ and $e^{- r_{s}(x_1,x_2)}$, respectively. Furthermore, by Lemma~\ref{lem:Dnull} and \eqref{eq:g2s}, \begin{displaymath} T_{x_1,x_2,s} \le s \int_{\mathbb{X}} \mathbf{P}\big(\{x_1,x_2\} \subseteq R_s(z,\mathcal{P}_s+\delta_z)\big) \mathbb{Q}({\mathrm d} z) = q_{s}(x_1,x_2).
\end{displaymath} By \eqref{eq:fal}, \begin{align*} & s \int_{\mathbb{X}} G_s(x_1)\mathbf{P}\big(D_{x_1,x_2}^{2}H_{s}(\mathcal{P}_{s})\neq 0\big)^{\beta}\mathbb{Q}({\mathrm d} x_{1})\\ &\le 3^\beta \int_{\mathbb{X}} G_s(x_1) \left[ e^{- \beta r_{s}(x_2,x_1)} +e^{- \beta r_{s}(x_1,x_2)} + q_{s}(x_1,x_2)^\beta\right] \mathbb{Q}({\mathrm d} x_{1}) = 3^\beta f_\beta(x_2). \qedhere \end{align*} \end{proof} Recall the function $\kappa_s(x)$ defined at \eqref{eq:p}. \begin{lemma}\label{lem:bounds} Assume that (A1) holds, and let $\beta >0$. Then for all $s \ge 1$, \begin{gather*} s \int_{\mathbb{X}}\left(s \int_{\mathbb{X}} G_s(x_1)\mathbf{P}\big( D_{x_1,x_2}^{2} H_{s} (\mathcal{P}_{s}) \neq 0\big)^{\beta} \mathbb{Q}({\mathrm d} x_{1})\right)^{2} \mathbb{Q}({\mathrm d} x_{2}) \le s3^{2\beta} \mathbb{Q} f_\beta^2,\\ s^{2} \int_{\mathbb{X}^{2}} G_s(x_1)\mathbf{P}\big( D_{x_1,x_2}^{2} H_{s}(\mathcal{P}_{s})\neq 0\big)^{\beta} \mathbb{Q}^{2}({\mathrm d} (x_{1}, x_{2})) \le s 3^{\beta} \mathbb{Q} f_\beta,\\ s \int_{\mathbb{X}} G_s(x)\mathbf{P}\big( D_{x} H_{s}(\mathcal{P}_{s})\neq 0\big)^{\beta} \mathbb{Q}({\mathrm d} x) \le s \mathbb{Q} ((\kappa_s+g_{s})^\beta G_s). \end{gather*} \end{lemma} \begin{proof} The first two assertions follow directly from Lemma~\ref{lem:inner}. For the last one, by Lemma~\ref{lem:D} and the Mecke formula, we can write \begin{align*} \mathbf{P}\big(D_{x} H_{s}(\mathcal{P}_{s}) \neq 0\big) \leq \mathbf{P}\big(\xi_{s}(x,\mathcal{P}_{s}+\delta_x)\neq 0\big) +\mathbf{E} \sum_{z \in \mathcal{P}_{s}} \mathds{1}_{D_{x} \xi_{s}(z,\mathcal{P}_{s})\neq 0}\\ = \kappa_s(x)+s \int_{\mathbb{X}} \mathbf{P}\big(D_{x} \xi_{s}(z,\mathcal{P}_{s}+\delta_z)\neq 0\big) \mathbb{Q}({\mathrm d} z) \le \kappa_s(x)+g_{s}(x), \end{align*} where we used Lemma~\ref{lem:Dnull}, \eqref{eq:Rs} and \eqref{eq:g} in the final step. This yields the final assertion.
\end{proof} \begin{proof}[{Proof of Theorem~\ref{thm:KolBd}:}] In view of Lemma~\ref{lem:1MomD}, the condition in Theorem~\ref{thm:Main} is satisfied with the exponent $4+p/2$ and $c_y:= C_p \left[M_s(y)^{4+p/2} + h_s(y)(1+g_s(y)^4) \right]$ for $y \in \mathbb{X}$. Hence, \begin{align*} &\max\left\{c_{y}^{2/(4+p/2)},c_{y}^{4/(4+p/2)}\right\}\\ &\le C_p^{4/(4+p/2)} \left[\max\left\{M_s(y)^2,M_s(y)^4 \right\} + \max\{h_s(y)^{2/(4+p/2)}, h_s(y)^{4/(4+p/2)}\}\big(1+g_s(y)^4\big) \right]\\ &= C_p^{4/(4+p/2)} G_s(y), \end{align*} where $G_s$ is defined at \eqref{eq:g5}. The result now follows from Theorem~\ref{thm:Main} upon using Lemma~\ref{lem:bounds}. \end{proof} \begin{thebibliography}{10} \bibitem{bai:dev:hwan:05} Bai, Z. D., Devroye, L., Hwang, H. K. and Tsai, T. H.: \newblock Maxima in hypercubes. \newblock {\em Random Struct. Algorithms}, \textbf{27}, (2005), 290--309. \bibitem{BX06} Barbour, A.~D. and Xia, A.: \newblock Normal approximation for random sums. \newblock {\em Adv. in Appl. Probab.}, \textbf{38}, (2006), 693--728. \bibitem{Ba20} Baryshnikov, Y.: \newblock Supporting-points processes and some of their applications. \newblock {\em Probab. Theory Related Fields}, \textbf{117}, (2000), 163--182. \bibitem{Bha21} Bhattacharjee, C.: \newblock Gaussian approximation in random minimal directed spanning trees. \newblock {\em Random Struct. Algorithms}, \textbf{61}, (2022), 462--492. \bibitem{bhat-mol21} Bhattacharjee, C., Molchanov, I. and Turin, R.: \newblock Central limit theorem for birth-growth model with Poisson arrivals and random growth speed. \newblock {\em arXiv preprint arXiv:2107.06792}, (2021). \bibitem{EG81} Englund, G.: \newblock A remainder term estimate for the normal approximation in classical occupancy. \newblock {\em Ann. Probab.}, \textbf{9}, (1981), 684--692. \bibitem{fil:naim20} Fill, J.~A. and Naiman, D.~Q.: \newblock The {P}areto record frontier.
\newblock {\em Electron. J. Probab.}, \textbf{25}, (2020). \bibitem{iyer:thac12} Iyer, S.~K. and Thacker, D.: \newblock Nonuniform random geometric graphs with location-dependent radii. \newblock {\em Ann. Appl. Probab.}, \textbf{22}, (2012), 2048--2066. \bibitem{LSY19} Lachi\`eze-Rey, R., Schulte, M. and Yukich, J.~E.: \newblock Normal approximation for stabilizing functionals. \newblock {\em Ann. Appl. Probab.}, \textbf{29}, (2019), 931--993. \bibitem{LPS16} Last, G., Peccati, G. and Schulte, M.: \newblock Normal approximation on {P}oisson spaces: {M}ehler's formula, second order {P}oincar\'{e} inequalities and stabilization. \newblock {\em Probab. Theory Related Fields}, \textbf{165}, (2016), 667--723. \bibitem{last:pen} Last, G. and Penrose, M.: \newblock {\em Lectures on the {Poisson} Process}. \newblock Cambridge Univ. Press, Cambridge, 2018. \bibitem{pen03} Penrose, M.: \newblock {\em Random Geometric Graphs}. \newblock Oxford University Press, Oxford, 2003. \bibitem{PY01} Penrose, M.~D. and Yukich, J.~E.: \newblock Central limit theorems for some graphs in computational geometry. \newblock {\em Ann. Appl. Probab.}, \textbf{11}, (2001), 1005--1041. \bibitem{pen:yuk03} Penrose, M.~D. and Yukich, J.~E.: \newblock Weak laws of large numbers in geometric probability. \newblock {\em Ann. Appl. Probab.}, \textbf{13}, (2003), 277--303. \bibitem{pen:yuk05} Penrose, M.~D. and Yukich, J.~E.: \newblock Normal approximation in geometric probability. \newblock In {\em Stein's Method and Applications}, volume~5 of {\em Lect. Notes Ser. Inst. Math. Sci. Natl. Univ. Singap.}, 37--58. Singapore Univ. Press, Singapore, 2005. \bibitem{sch10} Schreiber, T.: \newblock Limit theorems in stochastic geometry. \newblock In W.~S. Kendall and I.~Molchanov, editors, {\em New Perspectives in Stochastic Geometry}, pages 111--144. Oxford Univ. Press, Oxford, 2010.
\bibitem{Yu15} Yukich, J.~E.: \newblock Surface order scaling in stochastic geometry. \newblock {\em Ann. Appl. Probab.}, \textbf{25}, (2015), 177--210. \end{thebibliography} \section*{Appendix: Proof of Theorem~\ref{thm:Main}} \label{sec:modKol} In this section, we prove Theorem~\ref{thm:Main}, which is a slightly modified version of Theorem~6.1 in \cite{LPS16}. Recall that $\mathcal{P}$ is a Poisson process on a measurable space $(\mathbb{X}, \mathcal{F})$ with a $\sigma$-finite intensity measure $\nu$ and that $F:=f(\mathcal{P})$ is a measurable function of $\mathcal{P}$. For $x,y \in \mathbb{X}$, recall the definitions of the first and second order difference operators $D_x F$ and $D_{x,y}^2 F$ and that of $\operatorname{dom} D$ from Section~\ref{sec:Proof}. We are generally interested in the Gaussian approximation of such a function $F$ with zero mean and unit variance, with the aim of bounding the Wasserstein and the Kolmogorov distances between $F$ and a standard normal random variable $N$. An important result in this direction was given in \cite{LPS16}.
Define \begin{align*} \gamma_{1} &:=4\left[\int_{\mathbb{X}^3} \left[\mathbf{E}\left(D_{x_{1}} F\right)^{2}\left(D_{x_{2}} F\right)^{2}\right]^{1/2} \left[\mathbf{E}\left(D_{x_{1}, x_{3}}^{2} F\right)^{2}\left(D_{x_{2}, x_{3}}^{2} F\right)^{2}\right]^{1/2} \nu^{3}({\mathrm d} (x_{1}, x_{2}, x_{3}))\right]^{1 / 2}, \\ \gamma_{2} &:=\left[\int_{\mathbb{X}^3} \mathbf{E} \left[\left(D_{x_{1}, x_{3}}^{2} F\right)^{2}\left(D_{x_{2}, x_{3}}^{2} F\right)^{2} \right] \nu^{3}({\mathrm d} (x_{1}, x_{2}, x_{3}))\right]^{1 / 2}, \\ \gamma_{3} &:=\int_\mathbb{X} \mathbf{E}\left|D_{x} F\right|^{3} \nu({\mathrm d} x),\\ \gamma_{4} &:=\frac{1}{2}\left[\mathbf{E} F^{4}\right]^{1 / 4} \int_\mathbb{X} \left[\mathbf{E}\left(D_{x} F\right)^{4}\right]^{3 / 4} \nu({\mathrm d} x), \\ \gamma_{5} &:=\left[\int_\mathbb{X} \mathbf{E}\left(D_{x} F\right)^{4} \nu({\mathrm d} x)\right]^{1 / 2}, \\ \gamma_{6} &:=\left[\int_{\mathbb{X}^2} \left(6\left[\mathbf{E}\left(D_{x_{1}} F\right)^{4}\right]^{1 / 2} \left[\mathbf{E}\left(D_{x_{1}, x_{2}}^{2} F\right)^{4}\right]^{1 / 2} +3 \mathbf{E}\left(D_{x_{1}, x_{2}}^{2} F\right)^{4} \right) \nu^{2}({\mathrm d}(x_{1}, x_{2}))\right]^{1 / 2}. \end{align*} \begin{theorem*}[\cite{LPS16}, Theorems~1.1 and 1.2] \label{thm:Schulte} For $F \in \operatorname{dom} D$ having zero mean and unit variance, \begin{displaymath} d_{W}(F, N) \leq \gamma_{1}+\gamma_{2}+\gamma_{3}, \end{displaymath} and \begin{displaymath} d_{K}(F, N) \leq \gamma_{1}+\gamma_{2}+\gamma_{3}+\gamma_{4}+\gamma_{5}+\gamma_{6}. \end{displaymath} \end{theorem*} Under additional assumptions on the difference operator, one can simplify the bound. This is done in \cite[Theorem~6.1]{LPS16}, assuming that, for some $q>0$, the $(4+q)$-th moment of the difference operator $D_{x} F(\mathcal{P} + \mu)$ for $\mu\in\mathbf{N}$ with total mass at most one is uniformly bounded in $x \in \mathbb{X}$.
However, in some applications, as is the case in the example of minimal points discussed in Section~\ref{sec:Pareto}, such a uniform bound does not exist. In Theorem~\ref{thm:Main}, we modify \cite[Theorem~6.1]{LPS16} to allow for a non-uniform bound depending on $x$. Below, we present the proof of Theorem~\ref{thm:Main} for completeness, though the arguments remain largely similar to those in the proof of Theorem~6.1 in \cite{LPS16}, with the main difference being the presence of a spatially inhomogeneous moment bound given by the function $c_x$. \begin{proof}[Proof of Theorem~\ref{thm:Main}] By our assumption, H\"older's inequality yields that \begin{displaymath} \mathbf{E}\left(D_{x} F\right)^{4} \leq \left[\mathbf{E}\left|D_{x} F\right|^{4+q}\right]^{4 /\left(4+q\right)} \mathbf{P}\big(D_{x} F \neq 0\big)^{q /\left(4+q\right)} \leq c_{x}^{4 /\left(4+q\right)} \mathbf{P}\big(D_{x} F \neq 0\big)^{q /\left(4+q\right)} \end{displaymath} and \begin{displaymath} \mathbf{E}\left|D_{x} F\right|^{3} \leq c_{x}^{3 /\left(4+q\right)} \mathbf{P}\big(D_{x} F \neq 0\big)^{\left(1+q\right) /\left(4+q\right)}. \end{displaymath} Also, using H\"older's inequality as above and Jensen's inequality in the second step, we have \begin{align*} \mathbf{E}\left(D_{x_{1}, x_{2}}^{2} F\right)^{4} & \leq \left[\mathbf{E}\left|D_{x_{1}, x_{2}}^{2} F\right|^{4+q}\right]^{4 /\left(4+q\right)} \mathbf{P}\big(D_{x_{1}, x_{2}}^{2} F \neq 0\big)^{q /\left(4+q\right)}\\ & \leq 16 \min\{c_{x_1},c_{x_2}\}^{4 /\left(4+q\right)} \mathbf{P}\big(D_{x_{1}, x_{2}}^{2} F \neq 0\big)^{q /\left(4+q\right)}.
\end{align*} Thus, evaluating $(\gamma_i)_{1\le i \le 6}$ for $(F-\mathbf{E} F)/\sqrt{\Var F}$, we obtain \begin{align*} \gamma_{1} &\leq \frac{8}{\Var F}\Bigg[\int_{\mathbb{X}^3} c_{x_1}^{2 /\left(4+q\right)} c_{x_2}^{2 /\left(4+q\right)} \mathbf{P}\left(D_{x_{1}, x_{3}}^{2} F \neq 0\right)^{q /\left(16+4q\right)}\\ &\qquad \qquad\qquad\qquad\qquad\qquad \;\quad \times \mathbf{P}\left(D_{x_{2}, x_{3}}^{2} F \neq 0\right)^{q /\left(16+4 q\right)} \nu^{3}({\mathrm d} (x_{1}, x_{2}, x_{3}))\Bigg]^{1 / 2} \\ &=\frac{8}{\Var F}\left[\int_\mathbb{X} \left(\int_\mathbb{X} c_{x_1}^{2 /\left(4+q\right)} \mathbf{P}\left(D_{x_{1}, x_{2}}^{2} F \neq 0\right)^{q /\left(16+4 q\right)} \nu({\mathrm d} x_{1})\right)^2 \nu({\mathrm d} x_{2})\right]^{1 / 2},\\ \gamma_{2} &\leq \frac{4}{\Var F}\Bigg[\int_{\mathbb{X}^3} c_{x_1}^{2 /\left(4+q\right)}c_{x_2}^{2 /\left(4+q\right)} \mathbf{P}\left(D_{x_{1}, x_{3}}^{2} F \neq 0\right)^{q /\left(8+2 q\right)} \\ &\qquad \qquad\qquad\qquad\qquad\qquad \;\quad \times \mathbf{P}\left(D_{x_{2}, x_{3}}^{2} F \neq 0\right)^{q /\left(8+2 q\right)}\nu^{3}({\mathrm d}(x_{1}, x_{2}, x_{3}))\Bigg]^{1 / 2}\\ &\le \frac{4}{\Var F}\left[\int_\mathbb{X} \left(\int_\mathbb{X} c_{x_1}^{2 /\left(4+q\right)} \mathbf{P}\left(D_{x_{1}, x_{2}}^{2} F \neq 0\right)^{q /\left(16+4 q\right)} \nu({\mathrm d} x_{1})\right)^2 \nu({\mathrm d} x_{2})\right]^{1 / 2},\\ \gamma_{3} &\leq \frac{1}{(\Var F)^{3 / 2}} \int_\mathbb{X} c_{x}^{3 /\left(4+q\right)} \mathbf{P}\left(D_{x} F \neq 0\right)^{\left(1+q\right) /\left(4+q\right)} \nu({\mathrm d} x) \le \frac{\Gamma_{F}}{(\Var F)^{3 / 2}},\\ \gamma_{4} &\leq \frac{1}{2(\Var F)^{2}}\left[\mathbf{E}(F-\mathbf{E} F)^{4}\right]^{1 / 4} \int_\mathbb{X} c_{x}^{3 /\left(4+q\right)} \mathbf{P}\left(D_{x} F \neq 0\right)^{q /\left(8+2 q\right)} \nu({\mathrm d} x)\\ &\leq \frac{\Gamma_F}{2(\Var F)^{2}}\left[\mathbf{E}(F-\mathbf{E} F)^{4}\right]^{1 / 4},\\ \gamma_{5} &\leq \frac{1}{\Var F}\left[\int_\mathbb{X} c_{x}^{4 /\left(4+q\right)} \mathbf{P}\left(D_{x} F \neq 0\right)^{q
/\left(4+q\right)} \nu({\mathrm d} x)\right]^{1 / 2} \le \frac{\Gamma_{F}^{1/2}}{\Var F},\\ \gamma_{6} &\leq \frac{2\sqrt{6}}{\Var F} \left[\int_{\mathbb{X}^2} c_{x_1}^{4 /\left(4+q\right)} \mathbf{P}\left(D_{x_{1}, x_{2}}^{2} F \neq 0\right)^{q /\left(8+2 q\right)} \nu^{2}({\mathrm d} (x_{1}, x_{2}))\right]^{1 / 2} \\ & \qquad\qquad\qquad\qquad +\frac{4\sqrt{3}}{\Var F} \left[\int_{\mathbb{X}^2} c_{x_1}^{4 /\left(4+q\right)} \mathbf{P}\left(D_{x_{1}, x_{2}}^{2} F \neq 0\right)^{q /\left(4+q\right)} \nu^{2}({\mathrm d}(x_{1}, x_{2}))\right]^{1 / 2}\\ &\le \frac{2\sqrt{6}+4\sqrt{3}}{\Var F} \left[\int_{\mathbb{X}^2} c_{x_1}^{4 /\left(4+q\right)} \mathbf{P}\left(D_{x_{1}, x_{2}}^{2} F \neq 0\right)^{q /\left(8+2 q\right)} \nu^{2}({\mathrm d}(x_{1}, x_{2}))\right]^{1 / 2}. \end{align*} Finally, by \cite[Lemma~4.3]{LPS16}, \begin{align*} &\frac{\mathbf{E}(F-\mathbf{E} F)^{4}}{(\Var F)^{2}}\\ &\leq \max \left\{\frac{256}{(\Var F)^2} \left[\int_\mathbb{X} \left[\mathbf{E}\left(D_{x} F\right)^{4}\right]^{1 / 2} \nu({\mathrm d} x)\right]^{2}, \frac{4}{(\Var F)^2} \int_\mathbb{X} \mathbf{E}\left(D_{x} F\right)^{4} \nu({\mathrm d} x)+2\right\}\\ &\leq \max \left\{256 \Gamma_{F}^{2} /(\Var F)^{2}, 4 \Gamma_{F} /(\Var F)^{2}+2\right\}, \end{align*} so that \begin{displaymath} \gamma_{4} \leq \frac{1}{(\Var F)^{3 / 2}} \Gamma_{F} +\frac{1}{(\Var F)^{2}} \Gamma_{F}^{5 / 4}+\frac{2}{(\Var F)^{2}} \Gamma_{F}^{3 / 2}. \end{displaymath} An application of \cite[Theorems~1.1 and 1.2]{LPS16} yields the results. \end{proof} \end{document}
\begin{document} \title{Extracting the physical sector of quantum states} \author{D~Mogilevtsev$^{1,2}$, Y~S~Teo$^3$, J~{\v R}eh{\'a}{\v c}ek$^4$, Z~Hradil$^4$, J~Tiedau$^5$, R~Kruse$^5$, G~Harder$^5$, C~Silberhorn$^{5,6}$, L~L~Sanchez-Soto$^{6,7}$} \address{$^{1}$ Centro de Ci{\^e}ncias Naturais e Humanas, Universidade Federal do ABC, Santo Andr{\'e}, SP, 09210-170 Brazil} \address{$^{2}$ B. I. Stepanov Institute of Physics, National Academy of Science of Belarus, Nezavisimosti Avenue 68, Minsk 220072, Belarus} \address{$^{3}$ BK21 Frontier Physics Research Division, Seoul National University, 08826 Seoul, South Korea} \address{$^{4}$ Department of Optics, Palack{\'y} University, 17. listopadu 12, 77146 Olomouc, Czech Republic} \address{$^{5}$ Department Physik, Universit\"{a}t Paderborn, Warburger Stra{\ss}e~100, 33098 Paderborn, Germany} \address{$^{6}$ Max-Planck-Institut f\"ur die Physik des Lichts, Staudtstra\ss e 2, 91058 Erlangen, Germany} \address{$^{7}$ Departamento de \'Optica, Facultad de F\'{\i}sica, Universidad Complutense, 28040 Madrid, Spain} \begin{abstract} The physical nature of any quantum source guarantees the existence of an effective Hilbert space of finite dimension, the physical sector, in which its state is completely characterized with arbitrarily high accuracy. The extraction of this sector is essential for state tomography. We show that the physical sector of a state, defined in some pre-chosen basis, can be systematically retrieved with a procedure using only data collected from a set of commuting quantum measurement outcomes, with no other assumptions about the source. We demonstrate the versatility and efficiency of the physical-sector extraction by applying it to simulated and experimental data for quantum light sources, as well as quantum systems of finite dimensions.
\end{abstract} \pacs{03.65.Ud, 03.65.Wj, 03.67.-a} \submitto{\NJP} \section{Introduction} The physical laws of quantum mechanics ensure that all experimental observations can be described in an \emph{effective} Hilbert space of finite dimension, to which we shall refer as the \emph{physical sector} of the state. The systematic extraction of this physical sector is crucial for reliable quantum state tomography. Photonic sources constitute an archetypical example where such an extraction is indispensable. Theoretically, the states describing these sources reside in an infinite-dimensional Hilbert space. Nonetheless, the elements of the associated density matrices decay to zero for sufficiently large photon numbers, so that there always exists a finite-dimensional physical sector that contains the state with sufficient accuracy. Reliable state tomography can thus be performed once this physical sector is correctly extracted. Experimental estimates of the correct physical sector have been carried out~\cite{lnp:2004uq,Lvovsky:2009ys}. \sugg{ One common strategy is to make an educated guess about the state (such as Gaussianity~\cite{Rehacek:2009aa} or rank-deficiency for compressed sensing~\cite{Gross:2010dq,Cramer:2010oq,Flammia:2012if, Landon-Cardinal:2012mb,Baumgratz:2013fq,Riofrio:2017aa,Steffens:2017aa}), which defines a truncated reconstruction subspace. For instance, in compressed sensing the rank of the state is assumed to be no larger than a certain value $r$, so that specialized rank-$r$ compressed-sensing measurements can be employed to uniquely characterize the state with far fewer measurement settings. Very generally, educated guesses of certain properties of the state require additional physical verification.
Algorithms for statistical model selection, such as the Akaike~\cite{Akaike:1974af, Usami:2003aa,Yin:2011aa} or Schwarz criteria~\cite{Schwarz:1978aa, Guta:2012bl} or the likelihood sieve~\cite{Geman:1982aa,Artiles:2005aa}, have also been developed to estimate the physical sector. These algorithms provide another practical solution to reducing the complexity of the tomography problem. In the presence of the positivity constraint~\cite{Anraku:1999aa,Hughes:2003aa}, their application to quantum states becomes more sophisticated, as the procedures for deriving stopping criteria that supply the final appropriate model subspace for the unknown state are intricate.} On the other hand, finite-dimensional systems represent another example for which a systematic physical-sector extraction becomes important. In the context of quantum information, ongoing developments in dimension-witness testing~\cite{Brunner:2008aa, Hendrych:2012aa,Ahrens:2012aa,Brunner:2013aa,Ahrens:2014aa} offer some solutions to finding the minimal dimension of a black box required to justify the given set of measurement data in a device-independent way. Searching for dimension witnesses of arbitrary dimensions is still challenging~\cite{Brunner:2013aa}. In reference~\cite{Rehacek:2016ch}, we showed that, when the measurement device is calibrated, one can systematically extract the physical sector (that is, both the Hilbert-space support and dimension) and simultaneously reconstruct any unknown state directly from the measurement data without any assumption about the state. In this paper, we introduce an even more efficient procedure that extracts the physical sector of any state from the data without state reconstruction, and provide the pseudocode. This procedure requires nothing more than data obtained from a set of commuting measurements. As in~\cite{Rehacek:2016ch}, the extraction of the physical sector does not depend on any other assumptions or calibration details about the source.
By construction, this procedure has a linear complexity in the dimension of the physical sector. To showcase its versatility, we apply it to simulated and experimental data for photonic sources and systems of finite dimensions. In this way, we offer a deterministic solution to the problem of extracting the correct physical sector for any quantum state in measurement-calibrated situations. \section{Physical sectors and commuting measurements} \subsection{What are physical sectors?} The concept of physical sectors and their relation to commuting measurements is probably best understood with a concrete example. Let us consider, in the Fock basis, a quantum state of light described by the density operator \begin{equation} \varrho \,\,\widehat{=} \pmatrix{ 0.9922 & * & 0.0877 & * & \cdots \cr * & * & * & * & \cdots\cr 0.0877 & * & 0.0078 & * &\cdots \cr * & * & * & * &\cdots \cr \vdots & \vdots & \vdots & \vdots &\ddots } \,, \label{eq:rho_EX} \end{equation} where $*$ denotes elements of its density matrix that are so tiny that treating them as zero incurs very small truncation errors. If all $*=0$, $\varrho$ is the pure state $\ket{\psi}\bra{\psi}$ with $\ket{\psi}\propto\ket{\alpha}+\ket{-\alpha}$, where $\ket{\alpha}$ is the coherent state of amplitude $\alpha=0.3536$. The density-matrix elements drop to zero for sufficiently large photon numbers, as those of any physical state do. Some statistical reasoning for understanding the truncation error is in order. For now, we note that since all other $*$ elements are tiny, the state $\varrho$ is essentially fully characterized by a 3-dimensional sector, such that elements beyond this sector supply almost no contribution to $\varrho$. This forms a truncated Hilbert subspace where tomography can be carried out reliably. This subspace is given by $\mathcal{H}_\mathrm{sub}=\Span{\ket{0},\ket{1},\ket{2}}$. However, from \eref{eq:rho_EX}, we realize that this subspace is not the smallest one that supports $\varrho$.
The \emph{smallest} subspace $\mathcal{H}_\mathrm{phys}=\Span{\ket{0},\ket{2}}$ is in fact spanned by only two basis kets. This defines the 2-dimensional \emph{physical sector}. In general, the physical sector $\mathcal{H}_\mathrm{phys}$ is defined to be the \emph{smallest Hilbert subspace} that fully supports a given state with a truncation error smaller than some tiny $\epsilon$ in some basis. Evidently, the choice of basis affects the description of $\mathcal{H}_\mathrm{phys}$. If one already knows that $\varrho$ is close to $\ket{\psi}\bra{\psi}$, then choosing $\ket{\psi}$ as part of a basis gives a 1-dimensional $\mathcal{H}_\mathrm{phys}$. Such knowledge is of course absent when $\varrho$ is unknown. In such a practical scenario in quantum optics, we may adopt the most common Fock basis for representing $\varrho$ and $\mathcal{H}_\mathrm{phys}$. When dealing with general quantum systems, the basis that is most natural in typical experiments may be chosen, such as the Pauli computational basis for qubit systems. \subsection{How are physical sectors related to commuting measurements?} Let us revisit the example in \eref{eq:rho_EX}. Because of the positivity constraint imposed on $\varrho$, whenever a diagonal element is $*$, the elements in the row and column that intersect this element are all $*$. Also, if a diagonal element is not $*$, then $\mathcal{H}_\mathrm{phys}$ must contain the basis ket for this diagonal element. For this example, the 2-dimensional $\mathcal{H}_\mathrm{phys}$ completely characterizes $\varrho$ with the $2^2=4$ elements $\varrho_{00}$, $\varrho_{22}$, $\mathop{\mathrm{Re}}\nolimits(\varrho_{02})$ and $\mathop{\mathrm{Im}}\nolimits(\varrho_{02})$. It follows that knowing the location of \emph{significant} diagonal elements is all we need to ascertain $\mathcal{H}_\mathrm{phys}$.
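To make the running example concrete, the following Python sketch (an illustration of ours, not part of the paper) reconstructs the density matrix of the even superposition $\ket{\psi}\propto\ket{\alpha}+\ket{-\alpha}$ with $\alpha=0.3536$ in the Fock basis and flags the significant diagonal elements; the threshold $10^{-3}$ is an arbitrary stand-in for $\epsilon$:

```python
import math

# Fock-basis amplitudes of |psi> proportional to |alpha> + |-alpha>:
# odd photon numbers cancel, even ones add up.  alpha = 0.3536 as in the text.
alpha, n_max = 0.3536, 10
amp = [2 * math.exp(-alpha**2 / 2) * alpha**n / math.sqrt(math.factorial(n))
       if n % 2 == 0 else 0.0
       for n in range(n_max + 1)]
norm = math.sqrt(sum(a * a for a in amp))
psi = [a / norm for a in amp]

# Density matrix rho = |psi><psi|; its diagonal gives the photon-number weights.
rho = [[psi[m] * psi[n] for n in range(n_max + 1)] for m in range(n_max + 1)]

# Basis kets whose diagonal weight exceeds the (illustrative) threshold
# span the physical sector.
threshold = 1e-3
sector = [n for n in range(n_max + 1) if rho[n][n] > threshold]
print(sector)                                              # [0, 2]
print(round(rho[0][0], 4), round(rho[0][2], 4), round(rho[2][2], 4))
```

The flagged kets $\{\ket{0},\ket{2}\}$ and the significant matrix elements reproduce the 2-dimensional physical sector discussed above.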
For this purpose, the only necessary tool is a set of \emph{commuting} measurement outcomes whose common eigenbasis is the pre-chosen basis for $\mathcal{H}_\mathrm{phys}$. After the measurement data are collected with these commuting outcomes, all one needs to do is perform an extraction procedure on the data to obtain $\mathcal{H}_\mathrm{phys}$. This procedure tests a growing set of basis kets until it confirms that the current set spans the $\mathcal{H}_\mathrm{phys}$ that fully supports the data. We note here that the extraction works in principle for any other sort of generalized measurements, although we shall consider commuting measurements in subsequent discussions since they are the simplest kind necessary for extracting physical sectors in large Hilbert-space dimensions. \section{The extraction of the physical sector} In some pre-chosen basis, the physical-sector extraction procedure (PSEP) iteratively checks whether its data are supported by a cumulative sequence of subspaces $\mathcal{H}_\mathrm{sub}$ with truncation error smaller than some tiny $\epsilon$. PSEP starts by deciding whether, say, $\mathcal{H}_\mathrm{sub}=\Span{\ket{n_1},\ket{n_2}}$ of the smallest dimension $d=2$ adequately supports the data. If yes, it takes this as the 2-dimensional $\mathcal{H}_\mathrm{phys}$. Otherwise, PSEP continues and decides whether $\mathcal{H}_\mathrm{sub}=\Span{\ket{n_1},\ket{n_2},\ket{n_3}}$ adequately supports the data, and so on, until finally PSEP assigns a $d_\mathrm{phys}$-dimensional $\mathcal{H}_\mathrm{sub}=\mathcal{H}_\mathrm{phys}$ with some statistical reliability. In each iterative step, there are three objectives to be met: \begin{enumerate}[label=\textbf{(C\roman*)}] \item\label{pt:decide} PSEP must decide whether or not the data are supported by an $\mathcal{H}_\mathrm{sub}$ spanned by some set of basis kets.
\item\label{pt:error} PSEP must report the reliability of the statement ``$\mathcal{H}_\mathrm{sub}$ supports $\varrho$ with truncation error less than $\epsilon$''. \item\label{pt:smallest} PSEP must ensure that the final accepted set of basis kets spans $\mathcal{H}_\mathrm{phys}$, the \emph{smallest} $\mathcal{H}_\mathrm{sub}$ that supports $\varrho$. \end{enumerate} In what follows, we show that all these objectives can be fulfilled with only the information encoded in the measurement data. \subsection{Deciding whether the data are supported with some subspace} We proceed by first fixing a few notations. In an experiment, a set of measured commuting outcomes is described by positive operators $\Pi_j$ that sum to the identity, $\sum_j \Pi_j=1$. They give measurement probabilities $p_j=\tr{\varrho \Pi_j}$ according to the Born rule. Each commuting outcome, in the common eigenstates $\ket{n}\bra{n}$ that are also used to represent the physical sector, can be written as \begin{equation} \Pi_j=\sum_l c_{jl}\ket{l} \bra{l} \label{eq:diagp} \end{equation} with positive weights $c_{jl}$ that characterize the outcome. To decide whether the $p_j$s are supported with some Hilbert subspace $\mathcal{H}_\mathrm{sub}$, the easiest way is to introduce Hermitian \emph{decision observables} \begin{equation} W_\mathrm{sub}=\sum_jy_j\Pi_j \end{equation} for real parameters $y_j$. The decision observable for testing $\mathcal{H}_\mathrm{sub}$, along with its $y_j$s, satisfies the defining property \begin{equation} \opinner{n}{W_\mathrm{sub}}{n}=\left\{\begin{array}{@{\kern2.5pt}lL} 0 & if $\ket{n}\in\mathcal{H}_\mathrm{sub}$\,,\\ a_n>0 & otherwise\,. \end{array}\right.
\label{eq:dec_obs} \end{equation} This property automatically ensures that if $\varrho$ is \emph{completely} supported in $\mathcal{H}_\mathrm{sub}$, then the expectation value $\left<W_\mathrm{sub}\right>=\sum_jy_jp_j=0$ with zero truncation error, and PSEP takes this to be the physical sector ($\mathcal{H}_\mathrm{sub}=\mathcal{H}_\mathrm{phys}$). Quantum systems of finite dimensions possess states of this kind. In quantum optics, however, $\varrho$ is not completely supported in any subspace, but possesses decaying density-matrix elements with increasing photon numbers [such as the example in \eref{eq:rho_EX}]. A laser source, for instance, cannot produce light of an infinite intensity. Furthermore, the Born probabilities $p_j$ are never measured. Instead, the data consist of relative frequencies $f_j$ that estimate the probabilities with statistical fluctuation. Therefore, if we define the decision random variable (RV) \begin{equation} w_\mathrm{sub}=\sum_jy_jf_j \end{equation} that estimates $\left<W_\mathrm{sub}\right>$, then PSEP may assign $\mathcal{H}_\mathrm{sub}=\mathcal{H}_\mathrm{phys}$ whenever the truncation error, defined by $|w_\mathrm{sub}|$, is smaller than $\epsilon$. \subsection{Quantifying the reliability of the truncation error report} The decision RV $w_\mathrm{sub}$ is unbiased in that the data average of $w_\mathrm{sub}$ is the true value $\left<W_\mathrm{sub}\right>$ that PSEP aims to estimate ($\MEAN{}{w_\mathrm{sub}}=\left<W_\mathrm{sub}\right>$). This means that in the limit of a large number $N$ of measured detection events for the data $\{f_j\}$, $w_\mathrm{sub}$ approaches its expected value $\MEAN{}{w_\mathrm{sub}}$, which in turn tends to zero in the limit $\mathcal{H}_\mathrm{sub}\rightarrow\mathcal{H}_\mathrm{phys}$.
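For diagonal outcomes as in \eref{eq:diagp}, the defining property \eref{eq:dec_obs} reduces to a linear system: $\opinner{n}{W_\mathrm{sub}}{n}=\sum_j y_j c_{jn}$ must vanish for $\ket{n}\in\mathcal{H}_\mathrm{sub}$ and be positive otherwise. A minimal numpy sketch of this construction (our illustration; the coefficient matrix, the tested subspace and the diagonal test state are assumptions), taking $a_n=1$:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# c[j, n] > 0: weight of the eigenprojector |n><n| in commuting outcome Pi_j.
# 5 basis kets probed by 8 outcomes -- chosen so the system below is solvable.
n_kets, n_outcomes = 5, 8
c = rng.uniform(0.1, 1.0, size=(n_outcomes, n_kets))

# Defining property: sum_j y_j c[j, n] = 0 on H_sub, = 1 (i.e. a_n) outside it.
subspace = [0, 2]                     # test H_sub = span{|0>, |2>}
target = np.array([0.0 if n in subspace else 1.0 for n in range(n_kets)])
y, *_ = np.linalg.lstsq(c.T, target, rcond=None)

w_diag = c.T @ y                      # diagonal of W_sub; ~[0, 1, 0, 1, 1]
print(np.round(w_diag, 6))

# For a state fully supported on H_sub, <W_sub> = sum_j y_j p_j vanishes.
p = c[:, subspace] @ np.array([0.9, 0.1])   # p_j for rho = 0.9|0><0| + 0.1|2><2|
print(round(float(y @ p), 6))               # ~0.0
```

The least-squares solve stands in for ``solving the linear system of equations in \eref{eq:dec_obs}'' in step 3 of the pseudocode later on; any solver that enforces the sign pattern would do.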
This limiting behavior invites us to understand the truncation error $|w_\mathrm{sub}|$ using the well-known Hoeffding inequality~\cite{Hoeffding:1963aa}, which states that \begin{equation} \alpha\equiv{\rm Pr} \left \{ |w_\mathrm{sub}| \geq \epsilon \right \} \leq 2\exp\!\left(-\frac{N\epsilon^2}{2 \sum_jy_{j}^2}\right)\,. \label{eq:bound} \end{equation} This concentration inequality directly bounds the probability $\alpha$ of having a truncation error greater than or equal to $\epsilon$, which is the significance level of the hypothesis that $w_\mathrm{sub}=\MEAN{}{w_\mathrm{sub}}$ for all conceivable future data~\cite{Sampling:2014aa}. With \begin{equation} N\geq-\frac{2\ln(\alpha/2)}{\epsilon^2}\sum_jy_{j}^{2}\,, \end{equation} we are assured with $\alpha$ significance that the main factor for a nonzero $|w_\mathrm{sub}|$ comes from insufficient support from $\mathcal{H}_\mathrm{sub}$, since statistical fluctuation is heavily suppressed. One can obtain the more experimentally friendly inequality~\cite{Hoeffding:1963aa} \begin{equation} \alpha\leq B_\mathrm{sub}=2\exp\!\left(-\frac{|w_\mathrm{sub}|^2} {2\Delta^2}\right) \label{eq:crit} \end{equation} in terms of the variance $\Delta^2$ of $w_\mathrm{sub}$, where we take $\epsilon\approx|w_\mathrm{sub}|$ as a sensible guide to the truncation-error threshold. For $N\gg1$, the $1/N$ scaling of $\Delta^2$ allows the quantity $B_\mathrm{sub}$ to provide an indication of the reliability of the statement ``$\mathcal{H}_\mathrm{sub}$ supports $\varrho$ with truncation error less than $\epsilon$'', with a reasonable statistical estimate for $\Delta^2$ from the data. If \eref{eq:crit} holds for $\mathcal{H}_\mathrm{sub}$ and some pre-chosen $\alpha$, then the assignment $\mathcal{H}_\mathrm{phys}=\mathcal{H}_\mathrm{sub}$ is made. Quite generally, $w_\mathrm{sub}$ and $\Delta^2$ reveal the influence of both statistical and systematic errors~\cite{Mogilevtsev:2013aa}.
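The sample-size condition above is straightforward to evaluate in practice. A small sketch (the numbers are illustrative assumptions of ours, not experimental values):

```python
import math

def min_events(alpha, eps, y):
    """Smallest N satisfying the Hoeffding-type condition quoted in the text:
    2*exp(-N*eps**2 / (2*sum_j y_j**2)) <= alpha."""
    return math.ceil(-2 * math.log(alpha / 2) / eps**2 * sum(v * v for v in y))

y = [0.5, -1.2, 0.8, 0.3]             # illustrative decision-observable weights
print(min_events(alpha=0.05, eps=1e-3, y=y))
```

As expected from the $1/\epsilon^2$ dependence, relaxing the truncation threshold by a factor of ten reduces the required number of detection events by a factor of one hundred.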
Therefore, by construction, for sufficiently large $N$, $\mathcal{H}_\mathrm{sub}$ eventually converges to the unique $\mathcal{H}_\mathrm{phys}$ at $\alpha$ significance with increasing size of the basis set for properly chosen $\mathcal{H}_\mathrm{sub}$. The choice of $\mathcal{H}_\mathrm{sub}$ at each iterative step of PSEP must be made so that the final extracted support is indeed $\mathcal{H}_\mathrm{phys}$, the smallest support for $\varrho$. \subsection{Ensuring that the physical sector is extracted, not another larger support} \label{subsec:Hphys_not_l} To ensure that $\mathcal{H}_\mathrm{phys}$ is really extracted, and not some other larger $\mathcal{H}_\mathrm{sub}$ that also supports the data, we once more return to the example in \eref{eq:rho_EX}. For that pure state, in the Fock basis, the $\mathcal{H}_\mathrm{sub}$ that supports the state is effectively 3-dimensional, whereas $\mathcal{H}_\mathrm{phys}$ is effectively 2-dimensional. With a sufficiently large number of detection events $N$, if one naively carries out PSEP starting from $\mathcal{H}_\mathrm{sub}=\Span{\ket{0}}$, PSEP would recognize that $\mathcal{H}_\mathrm{sub}$ cannot support the data and continue to test the next larger subspace $\mathcal{H}_\mathrm{sub}=\Span{\ket{0},\ket{1}}$, where it would again conclude insufficient support. Only after the third step will PSEP accept $\mathcal{H}_\mathrm{sub}=\Span{\ket{0},\ket{1},\ket{2}}$ as the support at some fixed $\alpha$ significance. However, $\mathcal{H}_\mathrm{sub}\neq\mathcal{H}_\mathrm{phys}$. In order to efficiently extract $\mathcal{H}_\mathrm{phys}$, we need only one additional clue \emph{from the data}, namely the relative size of the diagonal elements of $\varrho$. We emphasize here that we are \emph{not} interested in the precise values of the diagonal elements, but only in a very rough estimate of their relative ratios to guide PSEP.
With this clue, we can then apply PSEP to the appropriately ordered sequence of basis kets, so that it terminates as efficiently as possible with the smallest possible support for the data. For the pure-state example, the decreasing magnitude of the diagonal elements gives the order $\{\ket{0},\ket{2}\}$. For any arbitrary set of commuting $\Pi_j$s, given the measurement matrix $\bm{C}$ of coefficients $c_{jl}$, sorting the column $\bm{C}^{-}\bm{f}$, defined by the Moore-Penrose pseudoinverse $\bm{C}^{-}$ of $\bm{C}$, in descending order suffices to guide PSEP\footnote{This is \emph{not} tomography for the photon number distribution, but merely a very rough estimate of the relative ratios of diagonal elements, since $\bm{C}^{-}\bm{f}$ is not positive.}. This sorting permits the efficient completion of PSEP in $O(d_\mathrm{phys})$ steps without doing quantum tomography. Other sorting algorithms are, of course, possible without any information about the diagonal-element estimates. One can perform other tests on different permutations of basis kets within the extracted Hilbert-subspace support, although the number of steps required to complete PSEP would then be larger than $O(d_\mathrm{phys})$. \subsection{An important afterword on physical-sector extraction} An astute reader would have already noticed that it is the $\mathcal{H}_\mathrm{phys}$ \emph{within} the field-of-view (FOV) of the data that can be reliably extracted. The FOV is affected by three factors: the degree of linear independence of the measured outcomes, the choice of a very large subspace in which to apply PSEP, whose dimension does not exceed this degree of linear independence, and the accuracy of the data (the value of $N$). In real experiments, the number of linearly independent outcomes measured is always finite.
With the corresponding finite data set, there exists a large subspace for extracting $\mathcal{H}_\mathrm{phys}$, in which the decision observables $W_{\mathrm{sub}}$ always satisfy \eref{eq:dec_obs} for any $\mathcal{H}_\mathrm{sub}$. For sufficiently large $N$, the collected data will capture all significant features of $\mathcal{H}_\mathrm{phys}$ within this data FOV. Indeed, if the source is \emph{truly} a black box, then defining the data FOV can be tricky. True black boxes are, however, atypical in a practical tomography experiment, since it is usually the observer who prepares the state of the source and can therefore be confident that the state prepared should not deviate too far from the target state as long as the setup is reasonably well-controlled. The data FOV should therefore be guided by this common sense. On the other hand, the extraction of $\mathcal{H}_\mathrm{phys}$ in device-independent cryptography, where both the source and measurement are completely untrusted for arbitrary quantum systems, is still an open problem. We note here that the measurement in \eref{eq:diagp} may incorporate realistic imperfections, such as noise and finite detection efficiency, which are faced in a number of realistic schemes. For instance, the commuting diagonal outcomes may represent on/off detectors of varying efficiencies, or incorporate thermal noise \cite{prl2006,prl2014}. All such measurements are presumed to be calibratable, as non-calibrated measurements require other methods to probe the source. As an example, suppose that the measurement is inefficient but still trustworthy enough for the observer to describe its outcomes by the set $\{\eta_j\Pi_j\}$ with unknown inefficiencies $\eta_j<1$ that are simple functions of a few practical parameters of the setup, such as transmissivities, losses and so forth. In other words, we have $\eta_j=\eta_j(T_1,\ldots,T_l)$ for $l$ that is typically much smaller than the total number of outcomes in practical experiments.
Then the straightforward practice is to first calibrate all $T_j$s before using them to subsequently carry out PSEP for other sources. One may also choose to calibrate $T_j$ already during the sorting stage by ``solving'' the linear system $\bm{t}=\bm{C}^{-}\bm{f}'$, where $f'_j=f_j/\eta_j$ is now linear in the data $f_j$ and nonlinear in $T_j$. The estimation of $T_j$ falls under parameter tomography, which is beyond the scope of this discussion; our focus is the idea of locating physical sectors, not the exact values of density matrices. \section{The pseudocode for physical-sector extraction} Suppose we have a set of commuting measurement data $\{f_j\}$ that form the column $\bm{f}$, as well as the associated outcomes $\Pi_j$ of some eigenbasis $\{\ket{0},\ket{1},\ket{2},\ldots\}$ that is adopted to represent $\mathcal{H}_\mathrm{phys}$. For some pre-chosen basis and $\alpha$ significance, the pseudocode for PSEP is presented as follows: \begin{description} \item [step 1.] Compute the measurement matrix $\bm{C}$ and sort $\bm{C}^{-}\bm{f}$ in descending order to obtain the ordered index $\bm{i}$. Then, define the ordered sequence of basis kets $\{\ket{n_{i_1}},\ket{n_{i_2}},\ket{n_{i_3}},\ldots\}$. \item [step 2.] Set $k=0$ and $\mathcal{H}_\mathrm{sub}=\Span{\ket{n_{i_1}}}$. \item [step 3.\label{step:sloop}] Construct $W_{\mathrm{sub}}$ by solving the linear system of equations in equation~\eref{eq:dec_obs} for the $y_j$s. \item [step 4.] Compute $w_\mathrm{sub}$, $\Delta^2$ and hence $B_\mathrm{sub}$. For typical multinomial data, $\Delta^2=\sum_{jk}y_jy_k(\delta_{j,k}p_j-p_jp_k)/N$. \item [step 5.\label{step:eloop}] Increase $k$ by one and include $\ket{n_{i_k}}$ in $\mathcal{H}_\mathrm{sub}$. \item [step 6.] Repeat \textsc{steps}~3 through 5 until $B_\mathrm{sub}\geq\alpha$. Finally, report $\mathcal{H}_\mathrm{phys}=\mathcal{H}_\mathrm{sub}$ and $\alpha$, and proceed to perform quantum-state tomography in $\mathcal{H}_\mathrm{phys}$.
\end{description} \section{Results} \subsection{Quantum light sources} To illustrate PSEP, we consider the state in \eref{eq:rho_EX} and $\varrho = \frac{1}{4}\ket{4}\bra{4} + \frac{1}{2}\ket{9}\bra{9} + \frac{1}{4}\ket{23}\bra{23}$. Simulated data are generated with a random set of commuting measurement outcomes. The extracted physical sectors are shown in figure~\ref{fig:1}. \begin{figure} \caption{Physical sectors extracted with PSEP from simulated data of $N=10^9$ detection events for (a)~the pure state in \eref{eq:rho_EX} and (b)~the mixed state defined in the text. \label{fig:1}} \end{figure} Data statistical fluctuation may be further minimized by averaging $B_{\mathrm{sub}}$ over many different sets of commuting outcomes. Moreover, one can detect additional systematic errors that are not attributed to truncation artifacts by inspecting the corresponding histograms for errors larger than the statistical fluctuation. \begin{figure} \caption{Schematic diagram of (a) the experimental setup to measure a mixture of coherent states and (b) the result of PSEP on the data for a mixture of two coherent states of mean photon numbers 9.043 and 36. In panel~(a), coherent states from a pulsed laser pass through an amplitude modulator (AM), which switches between two values of attenuation. Neutral density (ND) filters further attenuate the light to the single-photon level. The time-multiplexed detector (TMD) consists of three fiber couplers, delay lines and superconducting nanowire single-photon detectors (SPD). The physical sector in panel (b) is extracted from data of $N=9.6\times10^6$ detection events. 5000 different sets of 60 outcomes out of the measured 256 were used to calculate the average $B_{\mathrm{sub}}$. \label{fig:2}} \end{figure} We next proceed to experimentally validate PSEP by measuring photon-click events of a time-multiplexed detector (TMD). We use a fiber-integrated setup to generate and measure a mixture of coherent states, as depicted in figure~\ref{fig:2}(a).
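For illustration, the PSEP pseudocode of the previous section might be realized as the following toy Python sketch (our own simplified implementation, not the authors' code); the diagonal test state, the outcome matrix and all numerical settings are assumptions made for the sake of a runnable example:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Toy model (our assumptions): a diagonal state on 6 Fock levels whose
# physical sector is span{|0>, |2>}, probed by 12 commuting outcomes.
d = 6
rho_diag = np.array([0.9, 0.0, 0.1, 0.0, 0.0, 0.0])
C = rng.uniform(0.1, 1.0, size=(12, d))       # c[j, l] as in the text
C /= C.sum(axis=0)                            # enforce sum_j Pi_j = identity
p = C @ rho_diag                              # Born probabilities
N = 10**6
f = rng.multinomial(N, p) / N                 # simulated relative frequencies

def psep(C, f, N, alpha=0.05):
    """Toy PSEP: grow H_sub along basis kets sorted by the rough diagonal
    estimate C^- f until the criterion B_sub >= alpha is met."""
    order = np.argsort(np.linalg.pinv(C) @ f)[::-1]            # step 1
    for k in range(1, C.shape[1]):                             # steps 2 and 5
        sub = set(order[:k].tolist())
        # Step 3: build the decision observable by solving for the y_j.
        target = np.array([0.0 if n in sub else 1.0 for n in range(C.shape[1])])
        y, *_ = np.linalg.lstsq(C.T, target, rcond=None)
        # Step 4: decision RV, multinomial variance estimate and B_sub.
        w = y @ f
        var = (y**2 @ f - w**2) / N
        if 2 * np.exp(-w**2 / (2 * var)) >= alpha:             # step 6
            return sorted(sub)
    return list(range(C.shape[1]))

print(psep(C, f, N))
```

With these settings the one-dimensional candidate span$\{\ket{0}\}$ is rejected, and the loop stops at the correct two-ket sector, in line with the $O(d_\mathrm{phys})$ scaling claimed above.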
Coherent states are produced by a pulsed diode laser with 35~ps pulses at 200~kHz and a wavelength of 1550~nm. These pulses are then modulated with a telecom Mach-Zehnder amplitude modulator, driven with a square-wave signal at 230~kHz. This produces pseudorandom pulse patterns with two fixed amplitudes. After passing through fiber-attenuators, the state is measured with an eight-bin TMD~\cite{Achilles:2003cs,Avenhaus:2010ff} with a bin separation of 125~ns and two superconducting nanowire detectors. We record statistics of all possible $2^8$ bin configurations, which corresponds to a total of 256 TMD outcomes. To characterize the TMD outcomes for the measurement, we perform standard detector tomography, using well calibrated coherent probe states~\cite{Rehacek:2010fk,Harder:2014tf}. The setup is similar to the previous one, but we replace the modulator by a controllable variable attenuator. We calibrate the attenuation with respect to a power meter at the laser output. This allows us to produce a set of 150 probe states with a power separation of 0.2~dB. TMD data of a statistical mixture of two coherent states are collected and PSEP is subsequently performed on these data. The accuracy of the extracted physical sector is ultimately sensitive to experimental imperfections. In this case, these imperfections are minimized owing to the state-of-the-art superconductor technology, the fruit of which is a histogram that is as clean as it gets in an experimental setting. Figure~\ref{fig:2}(b) provides convincing evidence of the feasibility and practical performance of the technique, where real data statistical fluctuation is present. This physical sector may subsequently be taken as the objective starting point for a more detailed investigation of the quantum signal with tools for tomography and diagnostics. 
\subsection{Finite-dimensional quantum systems} To analyze another aspect of PSEP, in this section we apply it to quantum systems of finite dimensions with discrete-variable commuting measurement outcomes. As a specific example, we consider the arrangement in reference~\cite{Ahrens:2012aa}, which uses single photons to encode the information simultaneously in horizontal ($H$) and vertical ($V$) polarizations, and in two spatial modes ($a$ and $b$). We define four basis states: $|0 \rangle \equiv |H,a \rangle $, $|1 \rangle \equiv |V,a \rangle$, $|2 \rangle \equiv |H,b \rangle$, and $|3 \rangle \equiv |V,b \rangle$. On passing through three suitably oriented half-wave plates at angles $\theta_{1}$, $\theta_{2}$, and $\theta_{3}$, the state of such hybrid systems can be converted to the pure state $\varrho=\ket{\theta_1,\theta_2,\theta_3}\bra{\theta_1,\theta_2,\theta_3}$, defined by \begin{eqnarray} | \theta_1,\theta_2,\theta_3 \rangle = & \sin(2\theta_1)\sin(2\theta_3) \, |0 \rangle -\sin(2\theta_1)\cos(2\theta_3) \, |1 \rangle & \nonumber\\ & + \cos(2\theta_1)\cos(2\theta_2) \, | 2 \rangle + \cos(2\theta_1)\sin(2\theta_2) \, | 3 \rangle \,. & \label{eq:mpartite} \end{eqnarray} \begin{figure*} \caption{PSEP for hybrid quantum systems of finite dimensions that potentially generate (a) a qubit state, (b) a qutrit state, or (c) a ququart state according to equation~\eref{eq:mpartite}. \label{fig:3}} \end{figure*} Thus, by adjusting the orientation angles of the wave plates, one could produce qubits, qutrits or ququarts from such a hybrid source. Here, we show that PSEP can rapidly extract $\mathcal{H}_\mathrm{phys}$ by inspecting only the data measured from a set of commuting quantum measurements. Figure~\ref{fig:3} presents the plots for a qubit, qutrit and ququart system characterized by the different ($\theta_1,\theta_2,\theta_3$) configurations.
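The dependence of the physical dimension on the wave-plate angles in equation~\eref{eq:mpartite} can be checked directly. A small sketch (the angle settings are illustrative assumptions of ours, not the experimental configurations):

```python
import math

def hybrid_state(t1, t2, t3):
    """Coefficients of |theta1,theta2,theta3> in the basis {|0>,|1>,|2>,|3>},
    following the hybrid-state equation in the text."""
    return [
        math.sin(2 * t1) * math.sin(2 * t3),
        -math.sin(2 * t1) * math.cos(2 * t3),
        math.cos(2 * t1) * math.cos(2 * t2),
        math.cos(2 * t1) * math.sin(2 * t2),
    ]

def physical_dimension(coeffs, eps=1e-12):
    # Dimension of the smallest subspace supporting this pure state.
    return sum(abs(c) > eps for c in coeffs)

# Illustrative settings: theta1 = pi/4 removes |2> and |3> (qubit);
# theta3 = pi/4 removes |1> (qutrit); generic angles give a ququart.
q = math.pi / 4
print(physical_dimension(hybrid_state(q, 0.3, 0.2)))        # 2
print(physical_dimension(hybrid_state(0.3, 0.2, q)))        # 3
print(physical_dimension(hybrid_state(0.3, 0.2, 0.1)))      # 4
```

The three cases mirror the qubit, qutrit and ququart panels of figure~\ref{fig:3}.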
We have thus shown that in typical experimental scenarios, where the measurement setup is reasonably well calibrated, and hence trusted, $\mathcal{H}_\mathrm{phys}$ can be systematically extracted within the subspace spanned by the measurement outcomes. This allows an observer to later probe the details of the unknown but trusted quantum source using only the data at hand. Notice that the relevant basis states, labeled by $n$, form a basis for the commuting measurement on the black box. As such, this procedure is not a bootstrapping instruction. Rather, it systematically identifies the correct $\mathcal{H}_\mathrm{phys}$ without any other \emph{ad hoc} assertions about the source. In this way, we turn PSEP into an efficient \emph{deterministic} dimension tester with complexity $O(d_\mathrm{phys})$, as we have already learnt from section~\ref{subsec:Hphys_not_l}. \section{Conclusions} We have formulated a systematic procedure to extract the physical sector, the smallest Hilbert-subspace support, of an unknown quantum state using only the measurement data and nothing else. This is possible because information about the physical sector is always entirely encoded in the data. The extraction requires only a few efficient iterative steps, of the order of the physical-sector dimension. We demonstrated the validity and versatility of the procedure with simulated and experimental data from quantum light sources, as well as finite-dimensional quantum systems. The results support the clear message that, for well-calibrated measurement devices, the physical sector can always be systematically extracted and verified with statistical tools; within this sector, quantum-state tomography can be performed accurately. No \emph{a priori} assumptions about the source, which would require additional testing, are necessary. The proposed method should serve as a reliable solution for realistic tomography experiments in quantum systems of complex degrees of freedom. \ack D.~M.
acknowledges support from the National Academy of Sciences of Belarus through the program ``Convergence'', the European Commission through the SUPERTWIN project (Contract No. 686731), and from FAPESP (Grant No. 2014/21188-0). Y.~S.~T. acknowledges support from the BK21 Plus Program (Grant 21A20131111123) funded by the Ministry of Education (MOE, Korea) and National Research Foundation of Korea (NRF). J.~{\v R} and Z.~H. acknowledge support from the Grant Agency of the Czech Republic (Grant No. 15-03194S), and the IGA Project of Palack{\'y} University (Grant No. PRF~2016-005). J.~T., R.~K., G.~H., and Ch.~S. acknowledge support from the European Commission through the QCumber project (Contract No. 665148). Finally, L.~L.~S.~S. acknowledges the Spanish MINECO (Grant FIS2015-67963-P). \end{document}
\begin{document} \title{A Faster Algorithm for Maximum Flow in Directed Planar Graphs with Vertex Capacities} \begin{abstract} We give an $O(k^3 n \log n \min(k,\log^2 n) \log^2(nC))$-time algorithm for computing maximum integer flows in planar graphs with integer arc {\em and vertex} capacities bounded by $C$, and $k$ sources and sinks. This improves by a factor of $\max(k^2,k\log^2 n)$ over the fastest algorithm previously known for this problem [Wang, SODA 2019]. The speedup is obtained by two independent ideas. First, we replace an iterative procedure of Wang that uses $O(k)$ invocations of an $O(k^3 n \log^3 n)$-time algorithm for maximum flow in a planar graph with $k$ apices [Borradaile et al., FOCS 2012, SICOMP 2017], by an alternative procedure that makes only one invocation of the algorithm of Borradaile et al. Second, we show two alternatives for computing flows in the $k$-apex graphs that arise in our modification of Wang's procedure faster than the algorithm of Borradaile et al. In doing so, we introduce and analyze a sequential implementation of the parallel highest-distance push-relabel algorithm of Goldberg and Tarjan~[JACM 1988]. \end{abstract} \section{Introduction} The maximum flow problem has been extensively studied in many different settings and variations. This work concerns two related variants of the maximum flow problem in planar graphs. The first variant is the problem of computing a maximum flow in a directed planar network with integer arc {\em and vertex} capacities, and $k$ sources and sinks. The second variant, which is used in algorithms for the first variant, is the problem of computing a maximum flow in a directed network that is nearly planar; there is a set of $k$ vertices, called apices, whose removal makes the graph planar.
The problem of maximum flow in a planar graph with vertex capacities has been studied in several works since the 1990s~\cite{KhullerN94,ZhangLC08,KaplanN11,DBLP:journals/siamcomp/BorradaileKMNW17,DBLP:conf/soda/Wang19}. For a more detailed survey of the history of this problem and other relevant results see~\cite{DBLP:conf/soda/Wang19} and references therein. Vertex capacities pose a challenge in planar graphs because the standard reduction from a flow network with vertex capacities to a flow network with only arc capacities does not preserve planarity. The problem can be solved by algorithms for maximum flow in sparse graphs (i.e., graphs with $n$ vertices and $O(n)$ edges that are not necessarily planar). The fastest such algorithms currently known are an $O(n^2 /\log n)$-time algorithm~\cite{Orlin13} for sparse graphs, and an $O(n^{4/3+o(1)}C^{1/3})$-time algorithm for sparse graphs with integer capacities bounded by $C$~\cite{KathuriaLS20}. Until recently, there was no planarity exploiting algorithm for the case of more than a single source and a single sink. Significant progress on this problem was recently made by Wang~\cite{DBLP:conf/soda/Wang19}. Wang developed an $O(k^5n \log^3 n \log^2 (nC))$-time algorithm, where $k$ is the number of sources and sinks, and $C$ is the largest capacity of a single vertex. This is faster than using the two algorithms for general sparse graphs mentioned above when $k=\tilde O (n^{1/5}/\log^2 C + (nC)^{1/15})$. Wang's algorithm uses multiple calls to an algorithm of Borradaile et al.~\cite{DBLP:journals/siamcomp/BorradaileKMNW17} for computing a maximum flow in a $k$-apex graph with only arc capacities. The algorithm of Borradaile et al.~\cite{DBLP:journals/siamcomp/BorradaileKMNW17} is based on an approach originally suggested by Hochstein and Weihe~\cite{HochsteinWeihe} for a slightly more restricted problem. 
In Borradaile et al.'s approach, a maximum flow in a $k$-apex graph with $n$ vertices is computed by simulating the $\textsf{Push-Relabel}$ algorithm of Goldberg and Tarjan~\cite{DBLP:journals/jacm/GoldbergT88} on a complete graph with $k$ vertices, corresponding to the $k$ apices of the input graph. Whenever the $\textsf{Push-Relabel}$ algorithm pushes flow on an arc of the complete graph, the push operation is simulated by sending flow between the two corresponding apices in the input $k$-apex graph. This can be done efficiently using an $O(n\log ^3 n)$ time multiple-source multiple-sink (MSMS) maximum flow algorithm in planar graphs, which is the main result of the paper of Borradaile et al.~\cite{DBLP:journals/siamcomp/BorradaileKMNW17}. Overall, their algorithm for maximum flow in $k$-apex graphs takes $O(k^3 n \log^3 n)$ time. Flow in $k$-apex graphs can also be computed using the algorithms for sparse graphs mentioned above. The $O(k^3 n \log^3 n)$-time algorithm of Borradaile et al. is faster than these algorithms when $k=\tilde O(n^{1/3}/\log^2 C+(nC)^{1/9})$. \subsection{Our results and techniques} We improve the running time of Wang's algorithm to $O(k^3 n \log n \min(k,\log^2 n) \log^2 (nC))$. This is faster than Wang's result by a factor of $\max(k^2,k\log^2 n)$, extending the range of values of $k$ for which the planarity exploiting algorithm is the fastest known algorithm for the problem to $k=\tilde O(n^{1/3}/\log^2 C+(nC)^{1/9})$. The improvement is achieved by two main ideas. At the heart of Wang's algorithm is an iterative procedure for eliminating excess flow from vertices violating the capacity constraints. Each iteration consists of computing a circulation with some desired properties. Wang computes this circulation using $O(k)$ calls to the algorithm of Borradaile et al. for maximum flow in $k$-apex graphs. We show how to compute this circulation using a constant number of invocations of the algorithm for $k$-apex graphs. 
This idea alone improves on Wang's algorithm by a factor of $k$. To further improve the running time, we modify the algorithm of Borradaile et al. for maximum flow in $k$-apex graphs~\cite{DBLP:journals/siamcomp/BorradaileKMNW17}. The algorithm of Borradaile et al. uses the $\textsf{Push-Relabel}$ algorithm of Goldberg and Tarjan~\cite{DBLP:journals/jacm/GoldbergT88}. We introduce a sequential implementation of the parallel highest-distance $\textsf{Push-Relabel}$ algorithm. In this algorithm, which we call batch-highest-distance, a single operation, $\textsf{Bulk-Push}$, pushes flow on multiple arcs simultaneously, instead of just on a single arc as in Goldberg and Tarjan's $\textsf{Push}$ operation. More specifically, we simultaneously push flow on all admissible arcs whose tails have maximum height (see Section~\ref{sec:apex-alg}). This is reminiscent of parallel and distributed $\textsf{Push-Relabel}$ algorithms~\cite{DBLP:journals/jacm/GoldbergT88,DBLP:journals/siamcomp/CheriyanM89}, but our algorithm is sequential, not parallel. We prove that the total number of $\textsf{Bulk-Push}$ operations performed by the batch-highest-distance algorithm is $O(k^2)$ (this should be compared to $O(k^3)$ $\textsf{Push}$ operations for the FIFO or highest-distance $\textsf{Push-Relabel}$ algorithms). We then show that, in the case of the $k$-apex graphs that show up in Wang's algorithm, we can implement each $\textsf{Bulk-Push}$ operation using a single invocation of the $O(n \log^3 n)$-time MSMS maximum flow algorithm for planar graphs~\cite{DBLP:journals/siamcomp/BorradaileKMNW17}. Hence, we can find a maximum flow in such $k$-apex graphs in $O(k^2 n \log^3 n)$ time, which is faster by a factor of $k$ than the time required by the algorithm of Borradaile et al. We also give another way to modify the algorithm of Borradaile et al. for maximum flow in $k$-apex graphs; the second way is better when $k = o(\log^2 n)$. 
We observe that the structure of the $k$-apex graphs that arise in Wang's algorithm allows us to implement each of the $O(k^3)$ $\textsf{Push}$ operations of the FIFO $\textsf{Push-Relabel}$ algorithm used by Borradaile et al. in just $O(n \log n)$ time. This is done using a procedure due to Miller and Naor~\cite{DBLP:journals/siamcomp/MillerN95} for the case when all sources and sinks lie on a single face. \noindent {\bf Roadmap.} In Section~\ref{sec:prel} we provide preliminary background and notations. Section~\ref{sec:apex-alg} describes the sequential implementation of the parallel highest-distance $\textsf{Push-Relabel}$ algorithm, and its use in an algorithm for finding maximum flow in $k$-apex graphs. In Section~\ref{sec:main} we describe how to use this $\textsf{Push-Relabel}$ variant to obtain an improved algorithm for computing maximum flow in planar graphs with vertex capacities. \section{Preliminaries} \label{sec:prel} All the graphs we consider in this paper are directed. For a graph $G$ we use $V(G)$ and $E(G)$ to denote the vertex set and arc set of $G$, respectively. For any vertex $v \in V(G)$, let $\deg(v)$ denote the degree of $v$ in $G$. For a path $P$ we denote by $P[u,v]$ the subpath of $P$ that starts at $u$ and ends at $v$. We denote by $P\circ Q$ the concatenation of two paths $P,Q$ such that the first vertex of $Q$ is the last vertex of $P$. A \emph{flow network} is a directed graph $G$ with a capacity function $c: V(G)\cup E(G) \rightarrow [0,\infty)$ on the vertices and arcs of $G$, along with two disjoint sets $S,T \subset V(G)$ called {\em sources} and {\em sinks}, respectively. We assume without loss of generality that sources and sinks have infinite capacities, and that, for any arc $e=(u,v) \in E(G)$, the \emph{reverse} arc $(v,u)$, denoted $\text{rev}(e)$, is also in $E(G)$, and has capacity $c(\text{rev}(e)) = 0$. Let $\rho: E(G) \rightarrow [0,\infty)$. To avoid clutter we write $\rho(u,v)$ instead of $\rho((u,v))$.
For each vertex $v$ let $ \rho^{in}(v) = \sum_{(u,v)\in E(G)}\rho(u,v) $, and $ \rho^{out}(v) = \sum_{(v,u)\in E(G)} \rho(v,u) $. The function $\rho$ is called a {\em preflow} if it satisfies the following conservation constraint: for all $v \in V(G)\setminus S$, $\rho^{in}(v) \geq \rho^{out}(v)$. The {\em excess of a vertex $v$ with respect to a preflow} $\rho$ is defined by $\mathrm{ex}(\rho,v) = \rho^{in}(v) - \rho^{out}(v)$. A preflow is \emph{feasible on an arc} $e \in E(G)$ if $\rho(e) \leq c(e)$. It is \emph{feasible on a vertex} $v \in V(G)$ if $\rho^{in}(v) \leq c(v)$. A preflow is said to be \emph{feasible} if, in addition to the conservation constraint, it is feasible on all arcs and vertices. The value of a preflow $\rho$ is defined as $ |\rho|=\sum_{s\in S} \left(\rho^{out}(s)-\rho^{in}(s)\right)$. A preflow $f$ satisfying $\mathrm{ex}(f,v)=0$ for all $v\in V(G)\setminus(S \cup T)$ is called a {\em flow}. A flow whose value is $0$ is called a \emph{circulation}. A {\em maximum flow} is a feasible flow whose value is maximum. \begin{remark}\label{rem:st} The problem of finding a maximum flow in a flow network with multiple sources and sinks can be reduced to the single-source, single-sink case by adding a super source $s$ and super sink $t$, and infinite-capacity arcs $(s,s_i)$ and $(t_i,t)$ for every $s_i \in S$ and $t_i \in T$. If the original network is planar then this transformation adds two apices to the graph. Throughout the paper, whenever we refer to the graph $G$, we mean the graph $G$ after this transformation, i.e., with a single source, the apex $s$, and a single sink, the apex $t$. \end{remark} The {\em violation of a flow $f$ at a vertex} $v$ is defined by $\mathrm{vio}(f,v) = \max \{0,f^{in}(v)-c(v)\}$. Thus, if $f$ is a feasible flow then $\mathrm{vio}(f,v)=0$ for all vertices $v$.
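As a concrete illustration of these definitions, the short sketch below (the helper names are ours, chosen only for illustration) computes $\rho^{in}$, $\rho^{out}$, excesses, the value of a flow, and vertex violations from an arc-indexed flow dictionary:

```python
from collections import defaultdict

def in_out(rho):
    # aggregate rho^in(v) and rho^out(v) from an arc-indexed flow dict
    fin, fout = defaultdict(float), defaultdict(float)
    for (u, v), x in rho.items():
        fout[u] += x
        fin[v] += x
    return fin, fout

def excess(rho, v):
    # ex(rho, v) = rho^in(v) - rho^out(v)
    fin, fout = in_out(rho)
    return fin[v] - fout[v]

def value(rho, sources):
    # |rho| = sum over sources of (rho^out(s) - rho^in(s))
    fin, fout = in_out(rho)
    return sum(fout[s] - fin[s] for s in sources)

def violation(rho, c_vertex, v):
    # vio(f, v) = max{0, f^in(v) - c(v)}
    fin, _ = in_out(rho)
    return max(0.0, fin[v] - c_vertex.get(v, float("inf")))

# a small flow: s -> a -> t carrying 2 units, s -> b -> t carrying 1 unit
rho = {("s", "a"): 2.0, ("a", "t"): 2.0, ("s", "b"): 1.0, ("b", "t"): 1.0}
assert excess(rho, "a") == 0.0                   # conservation at internal vertices
assert value(rho, {"s"}) == 3.0
assert violation(rho, {"a": 1.5}, "a") == 0.5    # f^in(a)=2 exceeds c(a)=1.5
```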
The violation of the flow $f$ is defined to be $\mathrm{vio}(f)=\max_{v\in V(G)}{\mathrm{vio}(f,v)}$.\footnote{We define violations only with respect to flows (rather than preflows) because we will only discuss preflows in the context of flow networks without finite vertex capacities.} A preflow $\rho$ is \emph{acyclic} if there is no cycle $C$ such that $\rho(e)>0$ for every arc $e\in C$. A preflow \emph{saturates} an arc $e$ if $\rho(e)=c(e)$. The sum of two preflows $\rho$ and $\eta$ is defined as follows. For every arc $e \in E(G)$, $(\rho+\eta)(e)=\max \{0, \rho(e)+\eta(e)-\rho(\text{rev}(e))-\eta(\text{rev}(e))\}$. Multiplying the preflow $\rho$ by some constant $c$ to get the preflow $c\rho$ is defined as $(c\rho)(e)=c\cdot \rho(e)$ for all $e\in E(G)$. The \emph{residual capacity} of an arc $e$ with respect to a preflow $\rho$, denoted by $c_\rho(e)$, is $c(e)-\rho(e)+\rho(\text{rev}(e))$. The \emph{residual graph} of a flow network $G$ with respect to a preflow $\rho$ is the graph $G$ where the capacity of every arc $e \in E(G)$ is set to $c_\rho(e)$. It is denoted by $G_\rho$. A path of $G$ is called \emph{augmenting} or \emph{residual} (with respect to a preflow $\rho$) if it is also a path of $G_\rho$. Suppose $G$ and $H$ are flow networks such that every arc in $G$ is also an arc in $H$. If $f'$ is a (pre)flow in $H$ then the \emph{restriction} of $f'$ to $G$ is the (pre)flow $f$ in $G$ defined by $f(e)=f'(e)$ for all $e \in E(G)$. \section{An algorithm for maximum flow in $k$-apex graphs}\label{sec:apex-alg} In this section we introduce a sequential implementation of the parallel highest-distance $\textsf{Push-Relabel}$ algorithm of Goldberg and Tarjan~\cite{DBLP:journals/jacm/GoldbergT88}, and use it in the algorithm of Borradaile et al.~\cite{DBLP:journals/siamcomp/BorradaileKMNW17} for maximum flow in $k$-apex graphs. We first give a high-level description of the $\textsf{Push-Relabel}$ algorithm.
\subsection{The $\textsf{Push-Relabel}$ algorithm~\cite{DBLP:journals/jacm/GoldbergT88}} Let $H$ be a flow network (not necessarily planar) with source $s$ and sink $t$, arc capacities $c: E(H) \rightarrow \mathbb{R}$, and no finite vertex capacities. The $\textsf{Push-Relabel}$ algorithm maintains a feasible \emph{preflow} function, $\rho$, on the arcs of $H$. A vertex $u$ is called {\em active} if $\mathrm{ex}(\rho,u) > 0$. The algorithm starts with a preflow that is zero on all arcs, except for the arcs leaving the source $s$, which are saturated. Thus, all the neighbors of $s$ are initially active. When the algorithm terminates, no vertex is active and the preflow function is guaranteed to be a maximum flow. The algorithm also maintains a label function $h$ (also known as a distance or height function) over the vertices of $H$. The label function $h: V(H) \rightarrow \mathbb{N}$ is {\em valid} if $h(s)=|V(H)|$, $h(t)=0$, and $h(u) \leq h(v) + 1$ for every residual arc $(u,v) \in E(H_{\rho})$. The algorithm progresses by performing two basic operations, $\textsf{Push}$ and $\textsf{Relabel}$. A $\textsf{Push}$ operation applies to an arc $e=(u,v)$ if $e$ is residual, $\mathrm{ex}(\rho, u) > 0$, and $h(u)=h(v)+1$. The operation moves excess flow from $u$ to $v$ by increasing the flow on $e$ by $\min\{\mathrm{ex}(\rho,u),c(e)-\rho(e)\}$. The other basic operation, $\textsf{Relabel}(u)$, assigns $u$ the label $h(u)=\min\{h(v) : (u,v) \in E(H_{\rho})\} + 1$; it applies to $u$ only if $u$ is active and $h(u)$ is not greater than the label of any neighbor of $u$ in $H_{\rho}$. In other words, $\textsf{Relabel}$ applies to an active vertex $u$ only if the excess flow in $u$ cannot be pushed out of $u$ (because $h(u)$ is not high enough). The algorithm performs applicable $\textsf{Push}$ and $\textsf{Relabel}$ operations until no vertex is active.
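For concreteness, here is a compact, unoptimized sketch of the generic $\textsf{Push-Relabel}$ algorithm that maintains only excesses and residual capacities explicitly (the preflow itself remains implicit). It is a plain illustration of the two basic operations, not the planar machinery developed later in the paper:

```python
def push_relabel_max_flow(n, cap, s, t):
    """Generic Push-Relabel max flow on vertices 0..n-1.
    cap[u][v] is the capacity of arc (u, v); missing entries mean 0.
    Only excesses ex and residual capacities c are maintained."""
    c = [[cap.get(u, {}).get(v, 0) for v in range(n)] for u in range(n)]
    h = [0] * n                   # label (height) function
    ex = [0] * n                  # excess at each vertex
    h[s] = n                      # valid labeling: h(s) = |V|, h(t) = 0
    for v in range(n):            # saturate all arcs leaving the source
        if c[s][v] > 0:
            ex[v] += c[s][v]
            c[v][s] += c[s][v]
            c[s][v] = 0
    active = [v for v in range(n) if v not in (s, t) and ex[v] > 0]
    while active:
        u = active.pop()
        while ex[u] > 0:          # discharge u fully
            pushed = False
            for v in range(n):
                if c[u][v] > 0 and h[u] == h[v] + 1:   # Push(u, v)
                    d = min(ex[u], c[u][v])
                    c[u][v] -= d; c[v][u] += d
                    ex[u] -= d; ex[v] += d
                    if v not in (s, t) and ex[v] > 0 and v not in active:
                        active.append(v)
                    pushed = True
                    if ex[u] == 0:
                        break
            if not pushed:                             # Relabel(u)
                h[u] = 1 + min(h[v] for v in range(n) if c[u][v] > 0)
    return ex[t]                  # value of the resulting maximum flow
```

On a small network with source $0$ and sink $3$, e.g. `push_relabel_max_flow(4, {0: {1: 3, 2: 2}, 1: {2: 1, 3: 2}, 2: {3: 3}}, 0, 3)` returns `5`, matching the capacity of the cut around the source.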
To fit our purposes, we think of the algorithm as one that only maintains explicitly the excess $\mathrm{ex}(\rho,v)$ and residual capacity $c_{\rho}(e)$ of each vertex $v$ and arc $e$ of $H$. The preflow $\rho$ is implicit. In this view, a $\textsf{Push}(u,v)$ operation decreases $\mathrm{ex}(\rho,u)$ and $c_{\rho}(u,v)$ by $\min\{\mathrm{ex}(\rho,u),c_{\rho}(u,v)\}$ and increases $\mathrm{ex}(\rho,v)$ and $c_{\rho}(v,u)$ by the same amount. We reformulate Goldberg and Tarjan's correctness proof of the generic $\textsf{Push-Relabel}$ algorithm to fit this view. \begin{lemma}[\cite{DBLP:journals/jacm/GoldbergT88}] \label{lem:GT} Any algorithm that performs applicable $\textsf{Push}$ and $\textsf{Relabel}$ operations in any order satisfies the following properties and invariants: \begin{bracketenumerate} \item $\mathrm{ex}(\rho,\cdot)$ and $c_{\rho}(\cdot)$ are non-negative.\footnote{This corresponds to the function $\rho$ being a feasible preflow.} \item The function $h$ is a valid labeling function. \item For all $v \in V$, the value of $h(v)$ never decreases, and strictly increases when $\textsf{Relabel}(v)$ is called. \item $h(v) \leq 2|V(H)|-1$ for all $v \in V(H)$. \item Immediately after $\textsf{Push}(u,v)$ is performed, either $(u,v)$ is saturated or $u$ is inactive. \end{bracketenumerate} \end{lemma} \begin{proof} Properties (1) and (5) are immediate from the definition of $\textsf{Push}$ and the fact that excess and residual capacities only change during $\textsf{Push}$ operations. Property (2) corresponds to Lemma 3.1 in~\cite{DBLP:journals/jacm/GoldbergT88}, Property (3) is proved in Lemma 3.6 in~\cite{DBLP:journals/jacm/GoldbergT88}, and Property (4) in Lemma 3.7 in~\cite{DBLP:journals/jacm/GoldbergT88}. \end{proof} \begin{lemma}\cite[Lemma 3.3]{DBLP:journals/jacm/GoldbergT88}\label{lem:no-st-path} Properties (1), (2) imply that there is no augmenting path from $s$ to $t$ at any point of the algorithm. 
\end{lemma} \begin{lemma}\cite[Lemma 3.8]{DBLP:journals/jacm/GoldbergT88}\label{lem:num-relabel} Properties (3), (4) imply that the number of $\textsf{Relabel}$ operations is at most $2|V(H)|-1$ per vertex and at most $2|V(H)|^2$ overall. \end{lemma} \begin{lemma}\cite[Lemmas 3.9, 3.10]{DBLP:journals/jacm/GoldbergT88}\label{lem:num-push} Properties (1)-(5) imply that the number of $\textsf{Push}$ operations is $O(|V(H)|^2|E(H)|)$. \end{lemma} By Lemma~\ref{lem:num-relabel} and Lemma~\ref{lem:num-push}, the algorithm terminates. Since upon termination no vertex is active, the implicit preflow $\rho$ is in fact a feasible flow function. By Lemma~\ref{lem:no-st-path} $\rho$ is a maximum flow from $s$ to $t$. Variants of the $\textsf{Push-Relabel}$ algorithm differ in the order in which applicable $\textsf{Push}$ and $\textsf{Relabel}$ operations are applied. Some variants, such as FIFO, highest-distance, maximal-excess, etc., guarantee faster termination than the \(O(|V(H)|^2|E(H)|)\) guarantee given above. \subsection{A sequential implementation of the parallel highest-distance $\textsf{Push-Relabel}$ algorithm} \label{sec:bp} We present a sequential implementation of the parallel highest-distance $\textsf{Push-Relabel}$ algorithm, which we call Batch-Highest-Distance. This algorithm attempts to push flow on multiple edges simultaneously in an operation called $\textsf{Bulk-Push}$. In that sense, it resembles the parallel version of the highest-distance $\textsf{Push-Relabel}$ algorithm. It is important to note, however, that $\textsf{Bulk-Push}$ is a sequential operation and not a parallel/distributed one. We define $\textsf{Bulk-Push}$, a batched version of the $\textsf{Push}$ operation. $\textsf{Bulk-Push}(U, W)$ operates on two sets of vertices, $U$ and $W$. It is applicable under the following requirements: \begin{romanenumerate} \item $\mathrm{ex}(u) > 0$ for all $u \in U$. 
\item There exists an integer $h$ such that $h(u) = h$ and $h(w) = h-1$ for all $u \in U$ and $w \in W$. \item There is a residual arc $(u,w)$ for some $u \in U$ and $w \in W$. \end{romanenumerate} Note that in a regular $\textsf{Push-Relabel}$ algorithm, conditions (i) and (ii) imply that $\textsf{Push}(u,w)$ is applicable to any residual arc $(u,w)$ with $u \in U$ and $w \in W$. Condition (iii) guarantees there is at least one such arc. $\textsf{Bulk-Push}$ pushes as much excess flow as possible from vertices in $U$ to vertices in $W$ so that after $\textsf{Bulk-Push}$ the following property holds: \begin{enumerate}[(5*)] \item Immediately after $\textsf{Bulk-Push}(U,W)$ is called, for all $u \in U$ and $w \in W$, either $(u,w)$ is saturated or $u$ is inactive. \end{enumerate} We replace property (5) with the more general property (5*) in Lemma~\ref{lem:GT} and Lemma~\ref{lem:num-push}. With this modification, Lemmas~\ref{lem:GT},~\ref{lem:no-st-path},~\ref{lem:num-relabel} and~\ref{lem:num-push} apply to our sequential implementation. The proofs from~\cite{DBLP:journals/jacm/GoldbergT88} need no change except replacing $\textsf{Push}$ with $\textsf{Bulk-Push}$. Hence, our variant terminates correctly with a maximum flow from $s$ to $t$. \begin{remark*} One may think of $\textsf{Bulk-Push}(U,W)$ as performing in parallel all $\textsf{Push}$ operations on arcs whose tail is in $U$ and whose head is in $W$. However, not every maximum flow with sources $U$ and sinks $W$ can be achieved as the sum of flows pushed by multiple $\textsf{Push}$ operations. For example, consider the case where $U$ consists of a single vertex $u$, with $\mathrm{ex}(u)=2$, $W=\{w_1,w_2\}$, and the residual capacities of $(u,w_1)$ and $(u,w_2)$ are both 2. $\textsf{Bulk-Push}(U,W)$ may push one unit of excess flow from $u$ on each of $(u,w_1)$ and $(u,w_2)$, but $\textsf{Push}(u,w_i)$ would push 2 units of flow on $(u,w_i)$, and no flow on the other arc. 
Therefore, the correctness of this variant cannot be argued just by simulating $\textsf{Bulk-Push}$ by multiple $\textsf{Push}$ operations. Instead we chose to argue correctness by stating the generalized property~(5*). \end{remark*} We now discuss a concrete policy for choosing which $\textsf{Bulk-Push}$ and $\textsf{Relabel}$ operations to perform in the above algorithm. This policy is similar, but not identical, to the highest-distance $\textsf{Push-Relabel}$ algorithm~\cite{DBLP:journals/jacm/GoldbergT88,DBLP:journals/siamcomp/CheriyanM89}. As long as there is an active vertex, the algorithm repeatedly executes the following two steps, which together are called a {\em pulse}. Let $h_{max}$ be the maximum label of an active vertex. That is, $h_{max} = \max \{ h(v) : \mathrm{ex}(\rho, v)>0\}$. Let $H_{max}$ be the set of all the active vertices whose height is $h_{max}$. In the first step of the pulse, the algorithm invokes $\textsf{Bulk-Push}(H_{max}, W)$ where $W$ is the set of all vertices $w \in V$ such that $h(w)=h_{max}-1$.\footnote{Formally it may be that $\textsf{Bulk-Push}(H_{max},W)$ is not applicable because condition (iii) is not satisfied, e.g., when $W=\emptyset$. In such cases $\textsf{Bulk-Push}$ does not push any flow. Condition (iii) is essential for the termination of the generic generalized algorithm, which may repeat such empty calls to $\textsf{Bulk-Push}$ indefinitely. However, we prove in Lemma~\ref{lem:num-pulses} that in our specific policy there are $O(|V(H)|^2)$ pulses, regardless of the flow pushed (or not pushed) by $\textsf{Bulk-Push}$ in each pulse.} In the second step of the pulse, the algorithm applies the $\textsf{Relabel}$ operation to all remaining active vertices in $H_{max}$ in arbitrary order. 
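The pulse policy just described can be prototyped directly. In the sketch below (our own simplification, written only to illustrate the control flow), $\textsf{Bulk-Push}(H_{max},W)$ is realized by sequential saturating pushes from $H_{max}$ to $W$, which is one concrete way to establish property (5*); the algorithm in the paper instead realizes each $\textsf{Bulk-Push}$ with a single multiple-source multiple-sink planar max-flow computation:

```python
def batch_highest_distance(n, cap, s, t):
    """Batch-highest-distance Push-Relabel on vertices 0..n-1.
    Each loop iteration is one pulse: a Bulk-Push from the active
    vertices of maximum height, then Relabel of those still active."""
    c = [[cap.get(u, {}).get(v, 0) for v in range(n)] for u in range(n)]
    h, ex = [0] * n, [0] * n
    h[s] = n
    for v in range(n):            # saturate all arcs leaving the source
        if c[s][v] > 0:
            ex[v] += c[s][v]; c[v][s] += c[s][v]; c[s][v] = 0

    def active():
        return [v for v in range(n) if v not in (s, t) and ex[v] > 0]

    while active():
        h_max = max(h[v] for v in active())
        H_max = [v for v in active() if h[v] == h_max]
        W = [w for w in range(n) if h[w] == h_max - 1]
        for u in H_max:           # Bulk-Push(H_max, W): any residual arc
            for w in W:           # (u, w) is admissible since h(u)=h(w)+1
                if ex[u] == 0:
                    break
                if c[u][w] > 0:
                    d = min(ex[u], c[u][w])
                    c[u][w] -= d; c[w][u] += d
                    ex[u] -= d; ex[w] += d
        for u in H_max:           # relabel all remaining active vertices
            if ex[u] > 0:
                h[u] = 1 + min(h[w] for w in range(n) if c[u][w] > 0)
    return ex[t]
```

After each inner loop, every arc from a still-active $u\in H_{max}$ to $W$ is saturated, which is exactly property (5*); by Lemma~\ref{lem:num-pulses} the outer loop performs $O(|V(H)|^2)$ pulses.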
\begin{algorithm} \caption{Batch-Highest-Distance$(G,c)$}\label{euclid} \begin{algorithmic}[1] \State Initialize $h(\cdot)$, $c_{\rho}(\cdot)$ and $\mathrm{ex}(\cdot)$ \While{there exists an active vertex} \State $h_{max} \gets \max \{ h(v) : \mathrm{ex}(\rho,v)>0\}$ \State $H_{max} \gets \{v \in V(H) : \mathrm{ex}(\rho,v) > 0,\text{ } h(v)=h_{max}\}$ \State $W \gets \{w \in V(H) : h(w)=h_{max}-1 \}$ \State $\textsf{Bulk-Push}(H_{max}, W)$ \State Relabel all active vertices in $H_{max}$ in arbitrary order \EndWhile \end{algorithmic} \end{algorithm} \begin{remark*} The crucial difference between this policy and the highest-distance $\textsf{Push-Relabel}$ algorithm~\cite{DBLP:journals/jacm/GoldbergT88,DBLP:journals/siamcomp/CheriyanM89} is that in the highest-distance algorithm a vertex $u$ with height $h_{max}$ is relabeled as soon as no more $\textsf{Push}$ operations can be applied to $u$. In contrast, our variant first pushes flow from all vertices with height $h_{max}$ and only then relabels all of them. \end{remark*} We partition the pulses into two types according to whether any vertices are relabeled in the relabel step of the pulse. A pulse in which at least one vertex is relabeled is called \emph{saturating}. All other pulses are called \emph{non-saturating}.\footnote{This is a generalization of the notions of saturating and non-saturating $\textsf{Push}$ operations in~\cite{DBLP:journals/jacm/GoldbergT88}.} By Lemma~\ref{lem:num-relabel}, the total number of $\textsf{Relabel}$ operations executed by the batch-highest-distance algorithm is $O(|V(H)|^2)$. We now prove the same bound on the number of $\textsf{Bulk-Push}$ operations. \begin{lemma}\label{lem:num-pulses} The number of pulses (and hence also the number of calls to $\textsf{Bulk-Push}$) executed by the batch-highest-distance algorithm is $O(|V(H)|^2)$. 
\end{lemma} \begin{proof} Note that the $\textsf{Relabel}$ step of a saturating pulse consists of at least one call to $\textsf{Relabel}$ which strictly increases the height of an active vertex $v$ whose height (before the increase) was $h_{max}$. Hence, a saturating pulse strictly increases the value of $h_{max}$. The fact that the height of each vertex never decreases and is bounded by $2|V(H)|-1$ implies that (i) there are $O(|V(H)|^2)$ saturating pulses, and (ii) the total increase in $h_{max}$ over all saturating pulses is $O(|V(H)|^2)$. As for non-saturating pulses, note that since excess flow is always pushed to a vertex with lower height, the push step of a pulse does not create excess in any vertex with height greater than or equal to $h_{max}$, so all vertices with height greater than $h_{max}$ remain inactive during the pulse. By property (5*), for every $u\in H_{max}$ and $w\in W$, either $(u,w)$ is saturated, or $u$ is inactive. Since the pulse is non-saturating, it follows that all the vertices in $H_{max}$ become inactive during the pulse. Hence, the value of $h_{max}$ strictly decreases during a non-saturating pulse. Since $h_{max} \geq 0$, the total decrease in $h_{max}$ is also $O(|V(H)|^2)$, so there are $O(|V(H)|^2)$ non-saturating pulses. \end{proof} Note that we do not claim that implementing the $\textsf{Bulk-Push}$ operation by applying applicable $\textsf{Push}(u,w)$ operations for $u\in U$, $w\in W$ until no more such operations can be applied would result in fewer $\textsf{Push}$ operations than the $O(|V(H)|^2|E(H)|)$ bound of Lemma~\ref{lem:num-push} for the generic $\textsf{Push-Relabel}$ algorithm. However, in Section~\ref{sec:main} we will show a situation where each call to $\textsf{Bulk-Push}$ can be efficiently implemented using a single invocation of a multiple-source multiple-sink algorithm in a planar graph. \subsection{The algorithm of Borradaile et al.
for $k$-apex graphs \cite{DBLP:journals/siamcomp/BorradaileKMNW17}} The algorithm of Borradaile et al.~\cite[Section 5]{DBLP:journals/siamcomp/BorradaileKMNW17} uses the framework of Hochstein and Weihe~\cite{HochsteinWeihe}. Let $H$ be a graph with a set $V^\times$ of $k$ apices. Denote $V_0 = V(H) \setminus V^\times$. The goal is to compute a maximum flow in $H$ from a source $s \in V(H)$ to a sink $t \in V(H)$. We assume that $s$ and $t$ are apices. This is without loss of generality since treating $s$ and $t$ as apices keeps the number of apices $O(k)$. Let $K^\times$ be a complete graph over $V^\times$. The algorithm computes a maximum flow $\rho$ from $s$ to $t$ in $H$ by simulating a maximum flow computation from $s$ to $t$ in $K^\times$ using the $\textsf{Push-Relabel}$ algorithm. Whenever a $\textsf{Push}$ operation is performed on an arc $(u,v)$ of $K^\times$, it is implemented by pushing flow from $u$ to $v$ in the graph $H_{uv}$, induced by $V_0\cup \{u,v\}$ on the residual graph of $H$ with respect to the flow computed so far. Note that, because no vertex of $V_0$ is an apex of $H$, $H_{uv}$ is a $2$-apex graph with apices $u,v$. Borradaile et al. use this fact to compute a maximum flow from $u$ to $v$ in $H_{uv}$ as follows. They split $u$ into multiple copies, each incident to a different vertex $w$ for which $(u,w)$ is an arc of $H_{uv}$. A similar process is then applied to $v$. Note that the resulting graph is planar. A maximum flow from $u$ to $v$ in $H_{uv}$ is equivalent to a maximum flow with sources the copies of $u$ and sinks the copies of $v$ in the resulting graph. This flow can be computed by the multiple-source multiple-sink maximum flow algorithm (the main result in~\cite{DBLP:journals/siamcomp/BorradaileKMNW17}) in $O(|V(H)|\log^3 |V(H)|)$ time.
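The splitting step can be sketched as follows (hypothetical helper names; the `max_flow` routine here is a plain Edmonds-Karp used only to check that the reduction preserves the flow value, not the MSMS algorithm of Borradaile et al.). Each copy of a split apex is incident to a single original vertex, which is why the split graph is planar when the rest of the graph is:

```python
from collections import deque

def max_flow(cap, sources, sinks):
    # BFS augmenting paths (Edmonds-Karp) with a super source/sink;
    # cap is a nested dict of arc capacities and is not modified
    c = {u: dict(nbrs) for u, nbrs in cap.items()}
    res = lambda u, v: c.get(u, {}).get(v, 0)
    def add(u, v, d):
        c.setdefault(u, {})[v] = res(u, v) + d
    S, T = "__s__", "__t__"
    for x in sources: add(S, x, float("inf"))
    for x in sinks: add(x, T, float("inf"))
    total = 0
    while True:
        parent, q = {S: None}, deque([S])
        while q and T not in parent:
            u = q.popleft()
            for v in list(c.get(u, {})):
                if v not in parent and res(u, v) > 0:
                    parent[v] = u; q.append(v)
        if T not in parent:
            return total
        path, v = [], T
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        d = min(res(u, w) for u, w in path)   # bottleneck, then augment
        for u, w in path:
            add(u, w, -d); add(w, u, d)
        total += d

def split_apex(cap, u):
    # replace apex u by one copy u_w per neighbor w, carrying the
    # capacities of the arcs (u, w) and (w, u); returns the new graph
    # and the list of copies
    nbrs = set(cap.get(u, {})) | {w for w, d in cap.items() if u in d}
    new = {a: {b: x for b, x in d.items() if b != u}
           for a, d in cap.items() if a != u}
    copies = []
    for w in sorted(nbrs):
        uw = f"{u}_{w}"
        copies.append(uw)
        new.setdefault(uw, {})[w] = cap.get(u, {}).get(w, 0)
        if u in cap.get(w, {}):
            new.setdefault(w, {})[uw] = cap[w][u]
    return new, copies

# a toy H_uv with apices u, v and planar part {a, b}
H_uv = {"u": {"a": 2, "b": 1}, "a": {"v": 1, "b": 1}, "b": {"v": 2}}
g, copies_u = split_apex(H_uv, "u")
g, copies_v = split_apex(g, "v")
assert max_flow(H_uv, ["u"], ["v"]) == max_flow(g, copies_u, copies_v) == 3
```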
The correctness of implementing the $\textsf{Push-Relabel}$ algorithm on $K^\times$ in this way was proved by Hochstein and Weihe~\cite{HochsteinWeihe}, who showed, essentially, that the algorithm satisfies the properties in Lemma~\ref{lem:GT}. Borradaile et al. used the FIFO policy of $\textsf{Push-Relabel}$, which guarantees that the number of $\textsf{Push}$ operations is $O(k^3)$, so the overall running time of their algorithm is $O(k^3 |V(H)|\log^3 |V(H)|)$. \subsection{An algorithm for maximum flow in $k$-apex graphs}\label{sec:new-apex} We use the algorithm of Borradaile et al. for maximum flow in $k$-apex graphs from the previous section, but use our new batch-highest-distance $\textsf{Push-Relabel}$ algorithm instead of the FIFO $\textsf{Push-Relabel}$ algorithm to compute the maximum flow in $K^\times$. Note that, in order to implement the batch-highest-distance algorithm on $K^\times$, we only need to maintain the excess $\mathrm{ex}(\rho,v)$ and label $h(v)$ of each vertex $v\in K^\times$, and to be able to implement $\textsf{Bulk-Push}$ so that after the execution, property (5*) is fulfilled. We do not define a flow function in $K^\times$, nor do we explicitly maintain residual capacities of arcs of $K^\times$. Instead, we maintain a preflow $\rho$ in $H$, and define that an arc $(u,v)$ of $K^\times$ is residual if and only if there exists a residual path from $u$ to $v$ in $H_{\rho}$ that is internally disjoint from the vertices of $V^\times$. Under this definition, there is no path of residual arcs in $K^\times$ starting at $s$ and ending at $t$ if and only if there is no residual path from $s$ to $t$ in $H_{\rho}$. Since $K^\times$ has $O(k)$ vertices, by Lemma~\ref{lem:num-pulses}, the algorithm performs $O(k^2)$ pulses. We next describe how a $\textsf{Bulk-Push}(U,W)$ operation in $K^\times$ is implemented. Let $A=U \cup W$. Let $H_A$ be the graph obtained from $H_\rho$ by deleting the vertices $V^\times \setminus A$.
$\textsf{Bulk-Push}(U,W)$ in $K^\times$ is implemented by pushing a maximum flow in $H_A$ with sources the vertices $U$ and sinks the vertices $W$, with the additional restriction that the amount of flow leaving each vertex $u \in U$ is at most the excess of $u$. The efficiency of the procedure depends on how fast we can compute the maximum flow in $H_A$. We denote the time to execute a single $\textsf{Bulk-Push}$ operation in the graph $H_A$ by $T_{BP}$. Note that $T_{BP} = \Omega(k)$, as it takes $\Omega(k)$ time to construct $H_A$ from $H_\rho$. The proof of correctness is an easy adaptation of the proof of Hochstein and Weihe~\cite{HochsteinWeihe}. We cannot use their proof without change because Hochstein and Weihe considered only $\textsf{Push}$ operations along a single arc of $K^\times$, rather than $\textsf{Bulk-Push}$ operations, which involve more than a single pair of vertices of $K^\times$. \begin{lemma}\label{thm:apex} Maximum flow in $k$-apex graphs can be computed in $O(k^2 \cdot T_{BP})$ time. \end{lemma} \begin{proof} We first show that the properties (1)-(4) in the statement of Lemma~\ref{lem:GT}, and the generalized property (5*) from Section~\ref{sec:bp} hold. Property (1) holds since $\textsf{Bulk-Push}(U,W)$ limits the amount of flow pushed from each vertex $u\in U$ by the excess of $u$. Properties (3) and (4) hold without change since $\textsf{Relabel}$ is not changed. To show property (5*) holds, recall that an arc $(u,w)$ of $K^\times$ is residual if there exists, in the residual graph $H_{\rho}$ with respect to the current preflow $\rho$, a residual path from $u$ to $w$ that is internally disjoint from any vertex of $V^\times$. With this definition it is immediate that property (5*) holds, since our implementation of $\textsf{Bulk-Push}(U,W)$ pushes a maximum flow in $H_A$ from $U$ that is limited by the excess flow at each vertex of $U$.
Hence, after $\textsf{Bulk-Push}(U,W)$ is executed, for every $u \in U$ and $w \in W$, either there is no residual path from $u$ to $w$ in $H_{\rho}$ that is internally disjoint from $V^\times$, or $u$ is inactive. As for property (2), since we did not change $\textsf{Relabel}$, $h$ remains valid after calls to $\textsf{Relabel}$. It remains to show that $h$ remains a valid labeling after $\textsf{Bulk-Push}(U, W)$. Consider two vertices $a,b \in V^\times$. We will show that after $\textsf{Bulk-Push}(U, W)$, either the arc $(a,b)$ of $K^\times$ is saturated (i.e., there is no residual path from $a$ to $b$ in $H_{\rho}$), or $h(a) \leq h(b) + 1$. The flow pushed (in $H_A$) by the call $\textsf{Bulk-Push}(U,W)$ can be decomposed into a set $\mathcal P$ of flow paths, each of which starts at a vertex of $U$ and ends at a vertex of $W$. Assume that after performing $\textsf{Bulk-Push}(U, W)$ there is an augmenting $a$-to-$b$ path $Q$ in $H_{\rho}$. If $Q$ does not intersect any path $P \in \mathcal{P}$, then $Q$ was residual also before $\textsf{Bulk-Push}(U, W)$ was called, so $h(a) \leq h(b) + 1$ because $h$ was a valid labeling before the call. Otherwise, $Q$ intersects some path in $\mathcal{P}$. Let $c,d$ be the first and last vertices of $Q$ that also belong to paths in $\mathcal P$. Let $P,P' \in \mathcal P$ be paths such that $c \in P$ and $d \in P'$. Let $w\in W$ be the last vertex of $P$ and let $u\in U$ be the first vertex of $P'$. See Figure~\ref{fig:cross} for an illustration. Then, before $\textsf{Bulk-Push}(U, W)$ was called, $Q[a,c]\circ P[c,w]$ was a residual path from $a$ to $w$, and $P'[u,d]\circ Q[d,b]$ was a residual path from $u$ to $b$. Since $h$ was a valid labeling before the call, we have \[ h(u) \leq h(b) + 1 \text{\quad and\quad} h(a) \leq h(w) + 1. \] Since $h(u)=h(w) + 1$ it follows that \[ h(a) \leq h(w)+1 = h(u) \leq h(b) + 1, \] showing property (2).
\begin{figure} \caption{Illustration of property (2) in the proof of Lemma~\ref{thm:apex}.} \label{fig:cross} \end{figure} We have shown that properties (1)-(4) and (5*) hold. Hence, by Lemmas~\ref{lem:num-pulses} and~\ref{lem:num-relabel}, the algorithm terminates after performing $O(|V^\times|^2)=O(k^2)$ $\textsf{Bulk-Push}$ and $\textsf{Relabel}$ operations. Since each $\textsf{Relabel}$ takes $O(k)$ time, and each $\textsf{Bulk-Push}$ takes $\Omega(k)$ time, the total running time of the algorithm is $O(k^2 \cdot T_{BP})$. By Lemma~\ref{lem:no-st-path}, when the algorithm terminates there is no residual path from $s$ to $t$ in $K^\times$. By our definition of residual arcs of $K^\times$ this implies that there is no residual path from $s$ to $t$ in $H_\rho$, so $\rho$ is a maximum flow from $s$ to $t$ in $H$. \end{proof} \section{A faster algorithm for maximum flow with vertex capacities} \label{sec:main} In this section, we give a faster algorithm for computing a maximum flow in a directed planar graph with integer arc and vertex capacities bounded by $C$, parameterized by the number $k$ of terminal vertices (sources and sinks). The fastest algorithm currently known for this problem is by Wang~\cite{DBLP:conf/soda/Wang19}. It runs in $O(k^5n\text{ polylog}(nC))$ time. We first sketch Wang's algorithm. We only go into details in the parts of the algorithm that will be modified in our algorithm in Section~\ref{sec:gx}. \subsection{Wang's algorithm} Wang's algorithm uses the following two auxiliary graphs. In both these graphs only the arcs are capacitated. Let $G$ be a planar network. Let $k$ be the total number of sources and sinks in $G$. Recall from Remark~\ref{rem:st} that we turn $G$ into a 2-apex flow network with a single super-source $s$ and super-sink $t$. \begin{definition}[The graph $G^\circ$] For a flow network $G$, the network $G^\circ$ is obtained by the following procedure.
For each vertex $v \in V(G)$, replace $v$ with an undirected cycle $C_v$ with $d=\deg(v)$ vertices $v_1, \ldots, v_d$.\footnote{By undirected cycle we mean that there are directed arcs in both directions between every pair of consecutive vertices of the cycle $C_v$.} Each arc in $C_v$ has capacity $c(v)/2$. Connect each arc incident to $v$ with a different vertex $v_i$, preserving the clockwise order of the arcs so that no new crossings are introduced. \end{definition} \begin{definition}[The graph $G^\times$] Let $f$ be a flow in $G$. Let $X$ be the set of infeasible vertices, i.e., vertices $x \in V(G)$ such that $f^{in}(x) > c(x)$. The graph $G^\times$ is defined as follows. Starting with $G^\circ$, for each vertex $ x \in X$, replace the cycle representing $x$ with two vertices $x^{in}$, $x^{out}$ and an arc $(x^{in},x^{out})$ of capacity $c(x)$. Every arc of capacity $c > 0$ going from a vertex $u \notin C_x$ to a vertex in $C_x$ becomes an arc $(u,x^{in})$ of capacity $c$. Similarly, every arc of capacity $c > 0$ going from a vertex of $C_x$ to a vertex $u \notin C_x$ becomes an arc $(x^{out},u)$ with capacity $c$. \end{definition} Note that even though $G - \{s, t\}$ is planar, $G^\times - \{s, t\}$ is not. However, $\{x^{in} : x \in X\} \cup \{x^{out} : x \in X\} \cup \{s, t\}$ is an apex set in $G^\times$. Thus, $G^\times$ is a $(2|X|+2)$-apex graph. Recall that if $H$ and $G$ are two graphs such that every arc of $G$ is also an arc of $H$, then the restriction of a flow $f'$ in $H$ to $G$ is a flow $f$ in $G$ such that $f(e)=f'(e)$ for all $e\in E(G)$. Thus we can speak of the restriction of a flow $f^\circ$ in $G^\circ$ to a flow $f$ in $G$, and of the restriction of a flow $f^\times$ in $G^\times$ to a flow $f$ in $G$. Let $\lambda^*$ be the value of the maximum flow in $G$. Wang's algorithm uses binary search to find $\lambda^*$. Let $\lambda$ be the current candidate value for $\lambda^*$.
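The construction of $G^\circ$ can be sketched in code. This is an illustrative Python sketch with our own representation (arcs are indexed, and the clockwise order of arcs around each vertex is supplied explicitly); degenerate cycles for $\deg(v)\le 2$ are not special-cased:

```python
def make_g_circle(arcs, vertex_cap, rotation):
    """arcs: list of (tail, head, cap); vertex_cap: {v: c(v)};
    rotation: {v: clockwise list of indices of arcs incident to v}.
    Returns the arc list of G-circle, whose vertices are pairs (v, i)."""
    port = {}       # (v, arc index) -> the cycle vertex of C_v it attaches to
    new_arcs = []
    for v, order in rotation.items():
        d = len(order)
        for i, j in enumerate(order):
            port[(v, j)] = (v, i)
        for i in range(d):       # undirected cycle C_v, capacity c(v)/2
            a, b = (v, i), (v, (i + 1) % d)
            new_arcs.append((a, b, vertex_cap[v] / 2))
            new_arcs.append((b, a, vertex_cap[v] / 2))
    for j, (u, v, cap) in enumerate(arcs):   # re-attach the original arcs
        new_arcs.append((port[(u, j)], port[(v, j)], cap))
    return new_arcs

arcs = [(0, 1, 5), (1, 2, 5), (2, 0, 5)]     # a directed triangle
rotation = {0: [0, 2], 1: [0, 1], 2: [1, 2]}
g_circle = make_g_circle(arcs, {0: 4, 1: 4, 2: 4}, rotation)
```

Each original vertex of the triangle has degree $2$, so each cycle $C_v$ contributes four directed arcs of capacity $c(v)/2 = 2$, and the three original arcs are re-attached between cycle vertices.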
The algorithm computes a flow $f^\circ$ with value $\lambda$ in the graph $G^\circ$. Let $f$ be the restriction of $f^\circ$ to $G$. Wang proves that the set $X$ of infeasible vertices under $f$ has size at most $k-2$, and that the sum of the violations of the vertices in $X$ is at most $(k-2)C$. As long as $\mathrm{vio}(f)>2k$, the algorithm improves~$f$. This improvement phase, which will be described shortly, is the crux of the algorithm. If $\mathrm{vio}(f) \leq 2k$, then $O(k^2)$ iterations of the classical Ford-Fulkerson algorithm suffice to get rid of all the remaining violations. The improvement phase of the algorithm is based on finding a circulation $g$ that cancels the violations on the infeasible vertices and does not create too much violation on other vertices. It can then be shown that adding $1/k \cdot g$ to the flow $f$ decreases $\mathrm{vio}(f)$ by a multiplicative factor of roughly $1-1/k$. After $O(k\log (kC))$ iterations of the improvement step, $\mathrm{vio}(f)$ is at most $2k$. Wang proves that in order to find the circulation $g$, it suffices to compute a circulation $g^\times$ in $G^\times$ that satisfies the following properties: \begin{enumerate} \item $f^\times+g^\times$ is feasible in $G^\times$. \item The restriction of $f^\times+g^\times$ to $G$ has no violations on vertices of $X$. \item The restriction of $f^\times+g^\times$ to $G$ has at most $(k-2)\cdot\mathrm{vio}(f)$ violation on any vertex in $V(G) \setminus X$. \end{enumerate} The desired circulation $g$ is the restriction of $g^\times$ to $G$. If no such $g^\times$ exists then $g$ does not exist, which implies that $\lambda > \lambda^*$. Wang essentially shows that any algorithm for finding $g^\times$ in $O(T)$ time, where $T=\Omega(n)$, yields an algorithm for maximum flow with vertex capacities in $O(kT\log(kC)\log(nC))$ time. The additional terms stem from the $O(k\log (kC))$ iterations of the improvement step, and the $\log(nC)$ steps of the binary search. 
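The iteration count of the improvement phase can be sanity-checked numerically: if each step multiplies $\mathrm{vio}(f)$ by roughly $1-1/k$, starting from at most $(k-2)C$, then $O(k\log(kC))$ steps bring it below $2k$. A small numeric check (our own, with arbitrary concrete values of $k$ and $C$):

```python
import math

def improvement_steps(k, C):
    """Count steps of vio <- vio * (1 - 1/k), from (k-2)*C down to 2*k."""
    vio, steps = (k - 2) * C, 0
    while vio > 2 * k:
        vio *= 1 - 1 / k
        steps += 1
    return steps

k, C = 10, 10**6
steps = improvement_steps(k, C)
# steps stays within the O(k log(kC)) bound (here even with constant 1)
```

For these values the loop runs for roughly $120$ steps, comfortably below $k\ln(kC) \approx 161$.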
Wang shows how to compute $g^\times$ in $T=O(k^4n\log^3 n)$ time, by eliminating the violation at each vertex of $X$ one after the other in an auxiliary graph obtained from $G^\times$. Thus, the overall running time of his algorithm is $O(k^5n\log^3n\log(kC)\log(nC))$. \subsection{A faster algorithm for computing $g^\times$}\label{sec:gx} We propose a faster way of computing the circulation $g^\times$ by eliminating the violations in all the vertices of $X$ in a single shot. Doing so correctly requires some care in defining the appropriate capacities in the auxiliary graph, since we only know that for each $x\in X$, $g^\times$ should eliminate at least $\mathrm{vio}(f,x)$ units of flow from $x$, but the actual amount of flow eliminated from $x$ may have to be larger. This issue does not come up when resolving the violations one vertex at a time as was done by Wang. Define an auxiliary graph $H$ as follows. Starting with $G^\times_{f^\times}$, the residual graph of $G^\times$ with respect to $f^\times$, \begin{itemize} \item For each $x \in X$, set the capacity of the arc $(x^{in},x^{out})$ to be 0 and the capacity of $(x^{out},x^{in})$ to be $c(x)$. \item Add a super source $s'$ and arcs $(s',x^{in})$ with capacity $\mathrm{vio}(f,x)$ for every $x \in X$. \item Add a super sink $t'$ and arcs $(x^{out},t')$ with capacity $\mathrm{vio}(f,x)$ for every $x \in X$. \end{itemize} \noindent Note that $ \{s,t \} \cup \bigcup_{x\in X} \{x^{in},x^{out}\} \cup \{s',t' \}$ is an apex set of size $O(k)$ in $H$ (recall from Remark~\ref{rem:st} that $s$ and $t$ are the super source and super sink of the original graph $G$). The circulation $g^\times$ can be found using the following algorithm. Find a maximum flow $h'$ from $s'$ to $t'$ in $H$ using Lemma~\ref{thm:apex}. Convert $h'$ to an acyclic flow $h$ of the same value using the algorithm of Sleator and Tarjan~\cite{DBLP:journals/jcss/SleatorT83} (cf.
\cite[Lemma 2.5]{DBLP:conf/soda/Wang19}). If $h$ does not saturate every arc incident to $s'$ and $t'$, return that the desired circulation $g$ does not exist. Otherwise, $h$ can be extended to the desired circulation $g^\times$ by setting $g^\times(x^{out},x^{in})=h(x^{out},x^{in}) + \mathrm{vio}(f,x)$ for every $x \in X$ and $g^\times(e) = h(e)$ for all other arcs. The following lemma shows that any single $\textsf{Bulk-Push}$ operation in the algorithm of Lemma~\ref{thm:apex} on $H$ can be implemented by a constant number of calls to the $O(n \log^3 n)$-time multiple-source multiple-sink maximum flow algorithm in planar graphs of Borradaile et al.~\cite{DBLP:journals/siamcomp/BorradaileKMNW17}. There are two challenges that need to be overcome. First, the graph $H$ is an $O(k)$-apex graph rather than a planar graph. Second, the algorithm of Borradaile et al. computes a maximum flow from multiple sources to multiple sinks, not a maximum flow under the restriction that each source sends at most some given limit. This is not a problem in the case of a single source, or of a limit on just the total value of the flow, since then some of the flow pushed can be ``undone''. When each of the multiple sources has a different limit, undoing the flow from one source can create residual paths from another source that did not yet reach its limit. \begin{lemma}\label{lem:time} Any single $\textsf{Bulk-Push}$ operation in the execution of the algorithm of Lemma~\ref{thm:apex} on the graph $H$ defined above can be implemented in $O(n \log^3 n)$ time. \end{lemma} \begin{proof} Let $V^\times = \{s,t \} \cup \bigcup_{x\in X} \{x^{in},x^{out}\} \cup \{s',t' \}$ be the set of apices of $H$. Recall that the algorithm of Lemma~\ref{thm:apex} invokes the batch-highest-distance $\textsf{Push-Relabel}$ algorithm on a complete graph $K^\times$ over $V^\times$, and maintains a corresponding preflow in $H$.
Consider a single $\textsf{Bulk-Push}(U, W)$ operation from a set of apices $U$ to a set of apices $W$. Let $\rho$ denote the preflow pushed in $H$ up to this $\textsf{Bulk-Push}$ operation. Let $A = U \cup W$. To correctly implement $\textsf{Bulk-Push}(U,W)$, we find a flow $\rho'$ with sources $U$ and sinks $W$ in the graph $H$, which satisfies the following properties: \begin{enumerate}[(i)] \item For every $u \in U$, $\mathrm{ex}(\rho+\rho',u)\geq 0$, and \item For every $u\in U$ and $w \in W$, either $\mathrm{ex}(\rho+\rho',u)=0$ or there is no residual path in $H_{\rho+\rho'}$ from $u$ to $w$ that is internally disjoint from $V^\times$. \end{enumerate} Condition (i) guarantees that $\rho'$ does not push more flow from a vertex $u \in U$ than the current excess of $u$. Condition (ii) is condition (5*) from Section~\ref{sec:bp}. Let ${H''}$ be the graph obtained from $H_\rho$ by deleting the vertices $V^\times \setminus A$. Note that the absence of residual paths that are internally disjoint from $V^\times$ in ${H''}$ is equivalent to the absence of such paths in $H$. We will compute $\rho'$ using a constant number of invocations of the $O(n \log^3 n)$-time multiple-source multiple-sink maximum flow algorithm in planar graphs of Borradaile et al.~\cite{DBLP:journals/siamcomp/BorradaileKMNW17}. Instead of invoking this algorithm on ${H''}$, which is not planar, we shall invoke it on modified versions of ${H''}$ which are planar. Starting with $H''$, we split each vertex $w \in W$ into $\deg(w)$ copies. Each arc $e$ that was incident to $w$ before the split is now incident to a distinct copy of $w$, and is embedded so that it does not cross any other arc in the graph. Let ${H'}$ denote the resulting graph, and let $W'$ denote the set of vertices created as a result of splitting all the vertices of $W$. 
\begin{figure} \caption{Illustration of the auxiliary graphs used in the algorithm of Lemma~\ref{lem:time}.} \end{figure} The set $W'$ replaces $W$ as the set of sinks of the flow $\rho'$ we need to compute. Note that $U$ is an apex set in ${H'}$. We then build the flow $\rho'$ gradually, by computing the following steps, each using a single invocation of the multiple-source multiple-sink maximum flow algorithm of Borradaile et al. in $O(n \log^3 n)$ time. In what follows, when we say that the flow $\rho'$ satisfies condition (ii) for a subset $U'$ of $U$ we mean that for every $u\in U'$ and $w \in W$, either $\mathrm{ex}(\rho+\rho',u)=0$ or there is no residual path in $H_{\rho+\rho'}$ from $u$ to $w$ that is internally disjoint from $V^\times$. \begin{enumerate}[(1)] \item \label{step:s} If $s \in U$, starting with ${H'}$, we split $s$ into $\deg(s)$ copies. Each arc $e$ that was incident to $s$ before the split is now incident to a distinct copy of $s$, and is embedded so that it does not cross any other arc in the graph. We also delete all other vertices of $U$. We compute in the resulting graph, which is planar, a maximum flow with sources the copies of $s$ and the sinks $W'$. Let $\rho'_s$ be the flow computed. If $|\rho'_s|>\mathrm{ex}(\rho,s)$, we decrease $|\rho'_s|$ by pushing $|\rho'_s|-\mathrm{ex}(\rho,s)$ units of flow back from $W'$ to the copies of $s$. This can be done in $O(n)$ time in reverse topological order w.r.t. $\rho'_s$ (cf.~\cite[Section 1.4]{DBLP:journals/siamcomp/BorradaileKMNW17}). We view $\rho'_s$ as a flow in $H$, and set $\rho' = \rho'_s$. By construction $\rho'$ satisfies condition (i), and satisfies condition (ii) for the subset $\{s\}$. \item If $t \in U$, starting with $H_{\rho'}'$, we repeat step (1) with $t$ taking the role of $s$ to compute a flow $\rho'_t$.
Set $\rho' \leftarrow \rho' + \rho'_t$. By construction of $\rho'_t$, $\rho'$ now satisfies condition (i), and satisfies condition (ii) for the subset $U \cap \{s,t\}$. \item \label{step:in} Let $U^{in}$ be the set $U \cap \{x^{in} : x \in X\}$. If $U^{in} \neq \emptyset$, starting with ${{H'}}_{\rho'}$, we delete all the vertices of $U$ that are not in $U^{in}$. Note that, since the resulting graph does not contain $s,t,s',t'$, nor $x^{out}$ for any $x \in X$, and since arcs incident to $x^{in}$ only cross those incident to $x^{out}$, the resulting graph is planar. For every $x^{in} \in U^{in}$ we add a vertex $x'$ and an arc $(x',x^{in})$ with capacity $\mathrm{ex}(\rho,x^{in})$. The resulting graph is still planar. We compute a maximum flow $\rho'_{in}$ with sources $\{x' : x^{in} \in U^{in}\}$ and sinks $W'$. We view $\rho'_{in}$ as a flow in $H$, and set $\rho' \leftarrow \rho' + \rho'_{in}$. By construction of $\rho'_{in}$, $\rho'$ now satisfies condition (i), and satisfies condition (ii) for the subset $U \cap (\{s,t\} \cup U^{in})$. \item We repeat step (\ref{step:in}) with $out$ taking the role of $in$ to compute a flow $\rho'_{out}$. By construction of $\rho'_{out}$, $\rho'$ now satisfies condition (i), and satisfies condition (ii) for $U \cap (\{s,t\} \cup U^{in} \cup U^{out})$. \end{enumerate} Since $s'$ and $t'$ are the source and sink of the flow computed by the $\textsf{Push-Relabel}$ algorithm, they are never active vertices, so they never belong to $U$. Hence $\{s,t\} \cup U^{in} \cup U^{out} \supseteq U$, and conditions (i) and (ii) are fully satisfied by $\rho'$. \end{proof} Combining Lemma~\ref{lem:time} and Lemma~\ref{thm:apex}, we get the following lemma. \begin{lemma}\label{lem:main} The algorithm described above finds a circulation $g^\times$ such that \begin{enumerate} \item $f^\times+g^\times$ is feasible in $G^\times$.
\item The restriction of $f^\times+g^\times$ to $G$ has no violations at vertices of $X$. \item The restriction of $f^\times+g^\times$ to $G$ has violation at most $(k-2)\cdot\mathrm{vio}(f)$ at any vertex in $V(G) \setminus X$. \end{enumerate} in $O(k^2 n \log^3 n)$ time if such a circulation exists. \end{lemma} \begin{proof} We first analyze the running time. Computing the graph $H$ can be done in $O(n)$ time. Computing the flow $h'$ in $H$ using the algorithm of Lemma~\ref{thm:apex} takes $O(k^2 \cdot T_{BP})$ time. By Lemma~\ref{lem:time}, $T_{BP} = O(n \log^3 n)$ for the graph $H$. Transforming $h'$ into an acyclic flow $h$ using the algorithm of Sleator and Tarjan~\cite{DBLP:journals/jcss/SleatorT83} takes $O(n\log n)$ time. Finally, computing $g^\times$ from $h$ takes $O(n)$ time. Hence, the total time to compute $g^\times$ is $O(k^2 n \log^3 n)$. In order to prove the correctness of the algorithm, we will first prove that there exists a feasible flow $h$ in $H$ that saturates every arc incident to $s'$ and $t'$ if and only if there exists a circulation $g^\times$ in $G^\times$ that satisfies conditions (1) and (2) in the statement of the lemma. ($\Leftarrow$) Assume the circulation $g^\times$ exists in $G^\times$. Define a flow $h$ in $H$ as follows. For every arc $e \in E(H)$ not of the form $(x^{out},x^{in})$ set $h(e)=g^\times(e)$. For every $x \in X$, set $h(x^{out},x^{in})=g^\times(x^{out},x^{in})-\mathrm{vio}(f,x)$, $h(s',x^{in})=\mathrm{vio}(f,x)$ and $h(x^{out}, t')=\mathrm{vio}(f,x)$. Since the restriction of $f^\times +g^\times$ to $G$ has no violations at the vertices of $X$, $g^\times (x^{out},x^{in}) \geq \mathrm{vio}(f,x)$, so $h(x^{out},x^{in}) \geq 0$ and $h$ is a well defined flow. By definition, the flow $h$ saturates every arc incident to $s'$ and $t'$.
To show that $h$ is feasible in $H$ it is enough to show that $h(x^{out},x^{in}) \leq c(x)$ for every $x \in X$ (on all other arcs $h$ is feasible because $g^\times$ is feasible in $G^\times_{f^\times}$). Let $x \in X$. Since $f^\times+g^\times$ is feasible in $G^\times$, $g^\times(x^{out},x^{in}) \leq f^\times(x^{in},x^{out})$. Since $h(x^{out},x^{in}) = g^\times(x^{out},x^{in}) - \mathrm{vio}(f,x)$, $h(x^{out},x^{in}) \leq f^\times(x^{in},x^{out}) - \mathrm{vio}(f,x) = c(x)$. ($\Rightarrow$) Assume there exists a feasible flow $h$ in $H$ that saturates every arc incident to $s'$ and $t'$, and let $g^\times$ be the circulation obtained from $h$ as described above. We show that $f^\times+g^\times$ is feasible in $G^\times$. On all arcs $e$ not of the form $(x^{out},x^{in})$, $g^\times(e) = h(e)$ and the capacity of $e$ in $G^\times_{f^\times}$ equals the capacity of $e$ in $H$. Therefore, since $h$ is a feasible flow in $H$, $g^\times$ is feasible on $e$ in $G^\times_{f^\times}$, so $f^\times + g^\times$ is feasible on $e$ in $G^\times$. We now focus on the arcs $(x^{out}, x^{in})$ for each $x \in X$. Let $e = (x^{out},x^{in})$. Observe that $0 \leq h(e) \leq c(x)$. Since $g^\times(e)=h(e)+\mathrm{vio}(f,x)$ we have that $\mathrm{vio}(f,x) \leq g^\times(e) \leq c(x) + \mathrm{vio}(f,x) = f^\times(e)$. Since $(f^\times+g^\times)(x^{in},x^{out}) = f^\times(x^{in},x^{out}) - g^\times(x^{out},x^{in})$, we have $0 \leq (f^\times+g^\times)(x^{in},x^{out}) \leq c(x)$, so $f^\times + g^\times$ is feasible in $G^\times$. To finish proving the ($\Rightarrow$) direction, we show that the restriction of $f^\times + g^\times$ to $G$ has no violations on the vertices of $X$. By definition of $G^\times$ and of residual graph, the only arcs in $G^\times_{f^\times}$ that can carry flow out of $x^{in}$ are the reverses of the arcs that carry flow into $x$ in $f$, and the only arcs that can carry flow into $x^{out}$ are the reverses of the arcs that carry flow out of $x$ in $f$.
We will show that $(f+g)^{in}(x) \leq c(x)$ by considering separately the contribution of the flow on arcs of $G$ that in $G^\times$ are incident to $x^{out}$, and arcs of $G$ that in $G^\times$ are incident to $x^{in}$. The only arc of $f^\times$ that carries flow into $x^{out}$ is $(x^{in},x^{out})$. Thus, there is no arc $e$ of $G$ such that $f^\times (e)$ carries flow into $x^{out}$. Since $g^\times$ only carries flow into $x^{out}$ along the reverses of arcs that carry flow out of $x^{out}$ in $f^\times$ and since for every such arc $e'$, $g^\times(e') \leq f^\times(\text{rev}(e'))$, there is also no arc $e$ of $G$ such that $(f^\times+g^\times) (e)$ carries flow into $x^{out}$. The total flow that $f^\times$ carries into $x^{in}$ is $c(x) + \mathrm{vio}(f,x)$. Let $z$ denote the total amount of flow that $g^\times$ carries into $x^{in}$. Since the only arc incident to $x^{in}$ that carries flow in $g^\times$ and does not belong to $G$ is $(x^{out},x^{in})$, the total amount of flow that $g^\times$ carries into $x^{in}$ on arcs that belong to $G$ is $z - g^\times(x^{out},x^{in})$. On the other hand, $g^\times$ carries $z$ units of flow out of $x^{in}$, and all of this flow is pushed along the reverses of arcs that carry flow into $x^{in}$ in $f^\times$ (and also belong to $G$). Hence, the total amount of flow that $f^\times + g^\times$ carries into $x^{in}$ on arcs that belong to $G$ is $c(x) + \mathrm{vio}(f,x) + (z - g^\times(x^{out},x^{in})) - z$. Therefore, \begin{eqnarray*} (f+g)^{in}(x) & = & c(x) + \mathrm{vio}(f,x) - g^\times(x^{out},x^{in}) \\ & = & c(x) + \mathrm{vio}(f,x) - (h(x^{out},x^{in}) + \mathrm{vio}(f,x)) \\ & = & c(x) - h(x^{out},x^{in}) \\ & \leq & c(x). \end{eqnarray*} We have thus shown that the algorithm computes a flow $g^\times$ satisfying conditions (1) and (2) in the statement of the lemma.
To see that condition (3) is also satisfied, note that the value of the flow $h$ is $\sum_{x\in X} \mathrm{vio}(f,x) \leq (k-2) \cdot \mathrm{vio}(f)$. Since $h$ is acyclic, $h^{in}(v) \leq (k-2)\cdot \mathrm{vio}(f)$ for all $v\in V(H)$. Since for all $v \in V(G) \setminus X$, $f^{in}(v) \leq c(v)$, and $h^{in}(v) = (g^\times)^{in}(v)$, it follows that the violation of $f^\times + g^\times$ at $v$ is at most $(k-2) \cdot \mathrm{vio}(f)$. \end{proof} Using the $O(k^2 n\log^3 n)$-time algorithm of Lemma~\ref{lem:main} in the improvement phase of Wang's algorithm instead of using Wang's $O(k^4 n\log^3 n)$-time procedure for this phase results in a running time of $O(k^3 n\text{ polylog}(nC))$ for finding a maximum flow in $G$. \subsection{Alternative algorithm for~\(k = o(\log^2 n)\)} \label{app:small_k} We now provide an alternative algorithm to the one given in the previous section that is faster for small~\(k = o(\log^2 n)\). Specifically, we describe an algorithm for computing the circulation~\(g^\times\) that runs in~\(O(k^3 n \log n)\) time instead of the~\(O(k^2 n \log^3 n)\) time required by the algorithm of Lemma~\ref{lem:main}. The final running time for computing a maximum flow with integer arc and vertex capacities is therefore \(O(k^4 n \log n \log(kC) \log(nC))\). We use the same auxiliary graph~\(H\) as defined above and again compute a maximum flow~\(h'\) from~\(s'\) to~\(t'\) in~\(H\). Let $V^\times = \{s,t \} \cup \bigcup_{x\in X} \{x^{in},x^{out}\} \cup \{s',t' \} \cup S \cup T$ be the set of apices of $H$ \emph{along with the original sources \(S\) and sinks \(T\) of \(G\)}, and let \(K^\times\) be the complete graph on \(V^\times\).
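The two procedures trade a factor of $\log^2 n$ for a factor of $k$ per computation of $g^\times$ ($O(k^2 n\log^3 n)$ versus $O(k^3 n\log n)$), so ignoring constants the alternative wins precisely when $k < \log^2 n$. A toy comparison (our own sketch; constants and lower-order terms ignored):

```python
import math

def faster_variant(k, n):
    """Compare the two per-computation bounds, ignoring constant factors:
    k^2 * n * log^3 n (previous section) vs k^3 * n * log n (this section)."""
    t_main = k ** 2 * n * math.log2(n) ** 3
    t_alt = k ** 3 * n * math.log2(n)
    return 'alternative' if t_alt < t_main else 'main'
```

For $n = 2^{20}$ we have $\log^2 n = 400$, so the alternative procedure is preferable for, say, $k = 5$, while the main procedure wins for $k = 1000$.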
Instead of using the batch-highest-distance \(\textsf{Push-Relabel}\) algorithm as in Lemma~\ref{thm:apex}, we more directly follow the strategy of Borradaile et al.~\cite{DBLP:journals/siamcomp/BorradaileKMNW17} by simulating a maximum flow computation from \(s\) to \(t\) in \(K^\times\) using the FIFO \(\textsf{Push-Relabel}\) algorithm. We do not wish to directly use the multiple-source multiple-sink flow algorithm of Borradaile et al.~\cite{DBLP:journals/siamcomp/BorradaileKMNW17}, because then each of the \(O(k^3)\) \(\textsf{Push}\) operations would take~\(O(n \log^3 n)\) time. But as above, we may take advantage of the structure of~\(H\) to perform each \(\textsf{Push}\) operation more quickly. \begin{lemma}\label{lem:no_separators-time} Any single (individual arc) $\textsf{Push}$ operation in the graph $H$ defined above can be implemented in $O(n \log n)$ time. \end{lemma} \begin{proof} Consider a single $\textsf{Push}(u, v)$ operation where \(u,v \in V^\times\). Let $\rho$ denote the preflow pushed in $H$ by the FIFO $\textsf{Push-Relabel}$ algorithm up to this $\textsf{Push}$ operation. We find a flow \(\rho'\) with source \(u\) and sink \(v\) in the graph \(H\) such that either \(\mathrm{ex}(\rho + \rho', u) = 0\) or \(\mathrm{ex}(\rho + \rho', u) > 0\) and there is no residual path in \(H_{\rho + \rho'}\) from \(u\) to \(v\) that is internally disjoint from \(V^\times\). Let \({H'}\) be the graph obtained from \(H_\rho\) by deleting the vertices \(V^\times \setminus \{u, v\}\). Instead of invoking the \(O(n \log^3 n)\)-time multiple-source multiple-sink maximum flow algorithm of Borradaile et al.~\cite{DBLP:journals/siamcomp/BorradaileKMNW17}, we will compute \(\rho'\) as follows. As before, we must consider a few different cases. \begin{itemize} \item If \(u = s\) or \(v = s\), then \(v \in S\) or \(u \in S\), respectively. 
We push up to \(\mathrm{ex}(\rho, u)\) units of flow directly along the arc \((u, v)\) in constant time, either saturating the arc or reducing the excess flow in \(u\) to \(0\). We may similarly push directly along the arc \((u, v)\) in constant time if one of \(u\) or \(v\) is one of \(t\), \(s'\), or \(t'\) instead. \item If \(u \in \{x_1^{in}, x_1^{out}\}\) and \(v \in \{x_2^{in}, x_2^{out}\}\) for two distinct vertices \(x_1, x_2 \in X\), then the graph \({H'}\) is planar. We add a vertex \(u'\) and an arc \((u', u)\) with capacity \(\mathrm{ex}(\rho, u)\) and compute the maximum flow \(\rho'\) with source \(u'\) and sink \(v\) using the single-source single-sink maximum flow algorithm in planar graphs of Borradaile and Klein~\cite{DBLP:journals/jacm/BorradaileK09}. \item If neither of the above cases apply, then \(u,v \in \{x^{in}, x^{out}\}\) for some \(x \in X\). If arc \((u, v)\) has positive residual capacity, we push up to \(\mathrm{ex}(\rho, u)\) units of flow directly along it in constant time. Similar to Step~\ref{step:s} in the proof of Lemma~\ref{lem:time}, starting with \({H'}\), we split \(u\) into \(\deg(u)\) copies so that each arc that was incident to \(u\) is now incident to a distinct copy of \(u\). Similarly, we split \(v\) into \(\deg(v)\) copies so each arc that was incident to \(v\) is now incident to a distinct copy of \(v\). The resulting graph is planar, and all copies of \(u\) and \(v\) lie on a common face. As mentioned by Borradaile et al.~\cite[p. 1280]{DBLP:journals/siamcomp/BorradaileKMNW17}, we can then plug the linear time shortest paths in planar graphs algorithm of Henzinger et al.~\cite{DBLP:journals/jcss/HenzingerKRS97} into a divide-and-conquer procedure of Miller and Naor~\cite{DBLP:journals/siamcomp/MillerN95} to compute a maximum flow~\(\rho'_u\) with sources the copies of \(u\) and sinks the copies of \(v\) in \(O(n \log n)\) time. 
Again, if the value of this flow is greater than the excess of \(u\), we push the appropriate amount of flow back to the copies of \(u\) in \(O(n)\) time. Finally, we view \(\rho'_u\) as a flow in \(H\) to set \(\rho' = \rho'_u\). \end{itemize} \end{proof} As a consequence of the previous lemma, we immediately get a variation of Lemma~\ref{lem:main} with a running time of \(O(k^3 n \log n)\). We use our \(O(k^3 n \log n)\) time algorithm in the improvement phase of Wang's algorithm whenever \(k = o(\log^2 n)\). The final running time for computing a maximum flow with integer arc and vertex capacities in this case is therefore \(O(k^4 n \log n \log(kC) \log(nC))\). \begin{theorem} A maximum flow in an $n$-vertex planar flow network $G$ with integer arc and vertex capacities bounded by $C$ can be computed in $O(k^3 n \log n \min(k, \log^2 n) \log(kC) \log (nC))$ time. \end{theorem} \end{document}
\begin{document} \title{Convex bodies with many elliptic sections} \abstract{We show in this paper that two normal elliptic sections through every point of the boundary of a smooth convex body essentially characterize an ellipsoid and furthermore, that four different pairwise non-tangent elliptic sections through every point of the $C^2$-differentiable boundary of a convex body also essentially characterize an ellipsoid.} \section{Introduction} Bianchi and Gruber \cite{gruber} proved that the boundary of a convex body $K\subset\mathbb{R}^n$ is an ellipsoid if for every direction, continuously, we can choose a hyperplane which intersects $\mathrm{bd} K$ in an ellipsoid. The proof of this result leads to the following characterization of the ellipsoid: Let $K\subset\mathbb{R}^3$ be a convex body and let $\alpha>0$. If for every support line of $K$ there is a plane $H$ containing it whose intersection with $\mathrm{bd} K$ is an ellipse of area at least $\alpha$, then $\mathrm{bd} K$ is an ellipsoid. This characterization requires that for every $p\in \mathrm{bd} K$ and every direction in the support plane of $K$ at $p$ there is a section of $\mathrm{bd} K$ in that direction, containing $p$, that is an ellipse. The aim of this work is to give a characterization of the ellipsoid where for every boundary point only a finite number of ellipses containing that point are required. The sphere was characterized in this manner by Miyaoka and Takeuchi \cite{miyaoka,takeuchi} as the unique compact, simply connected $C^\infty$ surface that satisfies one of the following properties: i) contains three circles through each point; ii) contains two transversal circles through each point; or iii) contains one circle inside a normal plane. 
In our paper, we shall show that two normal elliptic sections through every point of the boundary of a smooth convex body essentially characterize an ellipsoid and, furthermore, that four different pairwise non-tangent elliptic sections through every point of the $C^2$-differentiable boundary of a convex body also characterize an ellipsoid. The set $$C = \{(x,y,z)\in\mathbb{R}^3 \mid x^2 + y^2 + z^2 + \tfrac{5}{4} xyz\leq 1,\; \max\{|x|,|y|,|z|\}\leq 1\}$$ \cite[Example 2]{alonso} is a convex body whose boundary is not an ellipsoid. The planes parallel to $x = 0$, $y = 0$ and $z = 0$ intersect $\mathrm{bd} C$ in ellipses. Thus through every point of $\mathrm{bd} C$ there are at least two such ellipses, and through every point except six there are three. Before giving the precise statement of the main theorems, we introduce the concepts and results we will use. \section{Definitions and auxiliary results} Let us consider a continuous function $\Psi \colon \mathrm{bd} K\rightarrow\mathbb{S}^2$. For every $p\in \mathrm{bd} K$, let $L_p$ be the line through $p$ in the direction $\Psi(p)$. We say that $\Psi$ is an \emph{outward} function if for every $p\in \mathrm{bd} K$, \begin{itemize} \item the line $L_p$ is not tangent to $\mathrm{bd} K$, and $\{p+t\Psi(p) | t>0\}$ is not in $K$; and \item there is a point $q\in \mathrm{bd} K\setminus L_p$ such that the line $L_q$ intersects the line $L_p$ at a point of the interior of $K$. \end{itemize} For example, the function $\eta \colon \mathrm{bd} K\rightarrow\mathbb{S}^2$ such that for every $p\in \mathrm{bd} K$, $\eta(p)$ is the normal unit vector to $\mathrm{bd} K$ at $p$ is an outward function. To see this, let $O$ be the midpoint of the interval $L_p\cap K$ and consider the farthest and the nearest point of $\mathrm{bd} K$ to the point $O\in \mathrm{int} K$.
One of them, call it $q$, is not in $L_p$, otherwise $K$ would be a solid sphere, in which case the assertion is trivially true; hence clearly $O\in L_q$. Let $K$ be a convex body and let $H_1$ and $H_2$ be two planes. In the following it will be useful to have a criterion to know when two sections $H_1\cap \mathrm{bd} K$ and $H_2\cap \mathrm{bd} K$ intersect in exactly two points. This holds exactly when the line $L = H_1\cap H_2$ intersects the interior of $K$. We say that a collection of lines $\mathcal{L}$ in $\mathbb{R}^{n+1}$ is a \emph{system of lines} if for every direction $u\in\mathbb{S}^n$ there is a unique line $L_u$ in $\mathcal{L}$ parallel to $u$. Given a system of lines $\mathcal{L}$, we define the function $\delta \colon \mathbb{S}^n\rightarrow\mathbb{R}^{n+1}$ which assigns to every direction $u\in\mathbb{S}^n$ the point $\delta(u)\in L_u$ nearest to the origin, which realizes the distance from the origin to the line $L_u$. When $\delta$ is continuous we say that $\mathcal{L}$ is a \emph{continuous system of lines}. Finally, the set of intersections between any two different lines of the system is called the \emph{center} of $\mathcal{L}$. A continuous system of lines has a property that will be useful for our purpose, stated in the following lemma. \begin{lemma} \label{lem1} Let $\mathcal{L}$ be a continuous system of lines in $\mathbb{R}^{n+1}$. For every direction $u\in\mathbb{S}^n$, there exists $v\in\mathbb{S}^n$, $v\neq u$, such that the lines $L_u$ and $L_v$ have a common point. \end{lemma} \begin{proof} Let $H$ be the plane through the origin that is orthogonal to $L_u$. Without loss of generality assume that $H$ is the plane $z = 0$ and $L_u$ contains the origin. Define $\mathcal{L}_H$ as the set of orthogonal projections of all lines in $\mathcal{L}$ parallel to $H$ onto $H$; $\mathcal{L}_H$ is then a system of lines in $H$. By \cite[Proposition 3]{stein}, there is a line in $\mathcal{L}_H$ passing through the origin.
This means that there is a direction $v$ in $H$ with the property that the line $L_v\in\mathcal{L}$ intersects $L_u$. \end{proof} Suppose that there is a continuous system of lines $\mathcal{L}$ such that the center of $\mathcal{L}$ is contained in the interior of $K$. Note that every line of $\mathcal{L}$ thus intersects the interior of $K$. For every $p\in \mathrm{bd} K$, let $L_p$ be the unique line of $\mathcal{L}$ through $p$. Let us define the continuous function $\Psi \colon \mathrm{bd} K\rightarrow\mathbb{S}^2$ in such a way that $\Psi(p)$ is the unique unit vector parallel to $L_p$ with the property that $\{p + t \Psi(p) | t > 0\}$ is not in $K$. By Lemma \ref{lem1}, $\Psi \colon \mathrm{bd} K\rightarrow\mathbb{S}^2$ is an outward function. If $K\subset\mathbb{R}^3$ is a strictly convex body, then the system of diametral lines of $K$ is a continuous system of lines whose center is contained in the interior of $K$. The following lemma will be of use to us in Section~4. \begin{lemma} \label{lem2} Let $M_1$ and $M_2$ be two surfaces tangent at $p\in M_1\cap M_2$. If the normal sectional curvatures of $M_1$ and $M_2$ coincide in three different directions, then the normal sectional curvatures of $M_1$ and $M_2$ coincide in every direction. \end{lemma} \begin{proof} Suppose the Euler curvature formula for the first surface $M_1$ is given by \[ \kappa_1\cos^2(\theta) +\kappa_2\sin^2(\theta), \] \noindent and for the second surface $M_2$ by \[ \kappa^{\prime}_1\cos^2(\theta -t_0) +\kappa^{\prime}_2\sin^2(\theta-t_0). \] Suppose that the difference between these two expressions has three zeros. After simplifying the difference, replacing $\cos^2(x)$ and $\sin^2(x)$ by their corresponding expressions in $\cos(2x)$ and $\sin(2x)$, we obtain an expression of the form $A+B\cos(2\theta)+C\sin(2\theta)$, where $A$, $B$ and $C$ depend only on the principal curvatures and the angle $t_0$.
Since $\theta$ lies in the unit circle, where the number of zeros is bounded by the number of critical points, we may assume that the expression $C\cos(2\theta)-B\sin(2\theta)$ has at least three zeros. This implies that $A+B\cos(2\theta)+C\sin(2\theta)=0$ identically, and hence that the normal sectional curvatures of $M_1$ and $M_2$ coincide in every direction. \end{proof} \section{Two elliptic sections through a point} \begin{theorem} \label{thm:ellipsoid} Let $K\subset\mathbb{R}^3$ be a convex body and $\alpha$ a given positive number. Suppose that there is an outward function $\Psi \colon \mathrm{bd} K\rightarrow \mathbb{S}^2$ such that for every $p\in \mathrm{bd} K$\nolinebreak there are planes $H_1, H_2$ determining an angle at least $\alpha$, where $H_1\cap H_2$ is the line $L_p$ through $p$ in the direction $\Psi(p)$ and $K\cap H_i$ is an elliptic section, for $i=1,2$. Then $\mathrm{bd} K$ is an ellipsoid. \end{theorem} \begin{proof} Let $p\in \mathrm{bd} K$. By hypothesis there are two planes $H_1$ and $H_2$ such that $L_p=H_1\cap H_2$ and $E_i=\mathrm{bd} K\cap H_i$ is an ellipse, for $i=1,2$. Furthermore, there is a point $q\in \mathrm{bd} K$ such that the line $L_q$ intersects $L_p$ at a point in the interior of $K$. Hence at least one of the elliptic sections through $L_q$ is different from $E_1$ and $E_2$; call this section $E_3$. We have that $E_3$ has two points in common with $E_i$, for $i=1,2$, because $L_p\cap L_q$ belongs to the interior of both sections. \begin{center} \includegraphics[width=3in]{imagen2.pdf}\rput(-5.5,6.2){$E_1$}\rput(-2.2,7){$E_2$}\rput(-.85,5){$E_3$}\rput(-2.5,3.3){$q$}\rput(-3.5,6.7){$p$} \end{center} We will now show that there is a quadric surface which contains $E_i$, for $i=1,2,3$. Let \[E_1\cap E_2=\{p,p'\}\] and for $i=1,2$, \[E_i\cap E_3=\{p_{i3},{p'_{i3}}\}.\] For $i=1,2,3$, we can choose $p_i\in E_i$ distinct from all the intersection points named above.
Then the points $p,p',p_{13},p'_{13},p_{23},p'_{23},p_1,p_2$ and $p_3$ uniquely determine a quadric surface $Q$. Furthermore, each $E_i$ has five points in common with $Q$. This implies that $E_i\subset Q$, for $i=1,2,3$. Now we will verify that there is an open neighborhood $\mathcal{N}$ of $p$ such that $\mathrm{bd} K\cap\mathcal{N}$ is contained in $Q$. Suppose that there is no such neighborhood. Then there is a sequence $\{q_n\}_{n\in\mathbb{N}}$ in $\mathrm{bd} K\setminus Q$ such that $\lim\limits_{n\rightarrow\infty}q_n=p$, and moreover, the lines $L_{q_n}$ converge to the line $L_p$. Our strategy is to prove that if $n$ is sufficiently large, then one of the elliptic sections of $K$ through $L_{q_n}$ intersects each elliptic section $E_i$ at two points. It is clear that if $n$ is sufficiently large, both elliptic sections of $K$ through $L_{q_n}$ intersect $E_3$ at two points, because $L_{q_n}$ intersects the interior of the ellipse $E_3$. The same holds if $L_{q_n}$ intersects the interior of the ellipse $E_i$. Suppose then that $L_{q_n}$ does not intersect the interior of the ellipse $E_i$, and let $N_1$ be the unit vector normal to the plane determined by $L_{q_n}$ and $p_{i3}$ in the direction of the semiplane not containing ${p_{i3}}'$, and let $N_2$ be the unit vector normal to the plane determined by $L_{q_n}$ and ${p_{i3}}'$ in the direction of the semiplane not containing ${p_{i3}}$. Let $\alpha_n$ be the angle determined by $N_1$ and $N_2$. \begin{center} \includegraphics[width=2.65in]{bisagra.pdf}\rput(-1.75,5.4){$\alpha_n$}\rput(-3.35,5.5){$p'_{i3}$}\rput(-5.6,3.5){$p_{i3}$}\rput(-.6,5.4){$N_1$}\rput(-2,6.75){$L_{q_n}$}\rput(-4.3,6.75){$L_p$}\rput(-1.5,4.75){$N_2$}\rput(-5,4.75){$E_i$} \end{center} The convergence of $\{L_{q_n}\}_{n\in\mathbb{N}}$ to $L_p$ implies that $\lim\limits_{n\rightarrow\infty}\alpha_n=0$. Then there is $k_i\in\mathbb{N}$ such that for every $n\geq k_i$, we have $\alpha_n <\alpha$.
This means that at least one of the elliptic sections through $L_{q_n}$ intersects the relative interior of the segment between $p_{i3}$ and $p'_{i3}$ and therefore has two points in common with $E_i$. Thus there is $k_0\in\mathbb{N}$ such that for $n > k_0$ there is a plane $H_n$ through $L_{q_n}$ which determines an elliptic section $F_n=H_n\cap \mathrm{bd} K$ and $F_n$ has six points in common with $Q$. This shows that $F_n$ coincides with the conic $H_n \cap Q$, and hence that $q_n\in Q$. This proves that there is an open neighborhood $\mathcal{N}$ of $p$ such that $\mathcal{N}\cap \mathrm{bd} K\subset Q$. We conclude that there is a finite open cover of $\mathrm{bd} K$ in which every element coincides with a quadric; by the connectedness of $\mathrm{bd} K$ and the fact that two quadrics that coincide in a relative open set of $\mathrm{bd} K$ must be the same quadric, $\mathrm{bd} K$ is thus contained in a quadric. Therefore $\mathrm{bd} K$ is an ellipsoid. \end{proof} \begin{theorem} Let $K\subset\mathbb{R}^3$ be a convex body and $\alpha$ a given positive number. Suppose that there is a continuous system of lines $\mathcal{L}$ such that the center of $\mathcal{L}$ is contained in the interior of $K$ and through every $L$ in $\mathcal{L}$ there are planes $H_1$, $H_2$ determining an angle at least $\alpha$ such that $K\cap H_i$ is an elliptic section, for $i = 1,2$. Then $\mathrm{bd} K$ is an ellipsoid. Moreover, if for every $L$ in $\mathcal{L}$ one of the elliptic sections is a circle, then $\mathrm{bd} K$ is a sphere. \end{theorem} \begin{proof} Note first that every line of $\mathcal{L}$ intersects the interior of $K$. For every $p\in \mathrm{bd} K$, let $L_p$ be the unique line of $\mathcal{L}$ through $p$ and let us define the continuous function $\Psi \colon \mathrm{bd} K\rightarrow \mathbb{S}^2$ in such a way that $\Psi(p)$ is the unique unit vector parallel to $L_p$ with the property that $\{p + t\Psi(p) | t > 0\}$ is not in $K$.
By Lemma \ref{lem1}, $\Psi \colon \mathrm{bd} K \rightarrow\mathbb{S}^2$ is an outward function and by Theorem \ref{thm:ellipsoid}, $\mathrm{bd} K$ is an ellipsoid. Let $L_0$ in $\mathcal{L}$ be the line parallel to the diameter of $K$. By hypothesis, one of the elliptic sections of $K$ through $L_0$ is a circle, lying in a plane parallel to the diameter; since every section of the ellipsoid $\mathrm{bd} K$ parallel to this circular section is also circular, there is a circular section of the ellipsoid $\mathrm{bd} K$ through the diameter. This implies that the three axes of the ellipsoid $\mathrm{bd} K$ have the same length and therefore that $\mathrm{bd} K$ is a sphere. \end{proof} Let $K\subset\mathbb{R}^3$ be a smooth, strictly convex body and let $p\in \mathrm{bd} K$. Suppose that $H$ is a plane through the unit normal vector of $K$ at $p$. Then we say that the section $H\cap K$ is a \emph{normal section} of $K$ at $p$. If $H$ is a plane containing the diametral line of $K$ through $p$, then we say that the section $H\cap K$ is a \emph{diametral section} of $K$ at $p$. \begin{theorem} \label{thm:diametral} Let $K\subset\mathbb{R}^3$ be a smooth, strictly convex body and $\alpha$ a given positive number. Suppose that through every $p\in \mathrm{bd} K$ there are two elliptic normal (respectively diametral) sections determining an angle at least $\alpha$. Then $\mathrm{bd} K$ is an ellipsoid. \end{theorem} Motivated by the fact that for every diametral line of an ellipsoid there are two sections of the same area, we have the following result. \begin{corollary} Let $K\subset\mathbb{R}^3$ be a smooth, strictly convex body and $\alpha$ a given positive number. Suppose that through every diametral line there are three elliptic sections of the same area determining an angle at least $\alpha$. Then $\mathrm{bd} K$ is a sphere. \end{corollary} \begin{proof} By Theorem \ref{thm:diametral}, $\mathrm{bd} K$ is an ellipsoid.
Note now that the hypothesis implies that the section through the center of $K$ orthogonal to one of the axes is a circle. This implies that the three axes of the ellipsoid $\mathrm{bd} K$ have the same length. \end{proof} \section{Four elliptic sections through a point} \begin{maintheorem} Let $K\subset\mathbb{R}^3$ be a convex body with a $C^2$-differentiable boundary and let $\alpha> 0$. Suppose that through every $p\in \mathrm{bd} K$ there are four planes $H_{p_1}$, $H_{p_2}$, $H_{p_3}$, $H_{p_4}$ satisfying: \begin{itemize} \item $H_{p_j}\cap \mathrm{bd} K$ is an ellipse $E_{p_j}$ of area greater than $\alpha > 0$, $j=1,2,3,4$, \item for $1\leq i<j\leq 4$, the ellipses $E_{p_i}$ and $E_{p_j}$ are not tangent. \end{itemize} Then $\mathrm{bd} K$ is an ellipsoid. \end{maintheorem} \begin{proof} We shall prove that locally, $\mathrm{bd} K$ is a quadric. Let $p\in \mathrm{bd} K$. Then the line $L_{ij}=H_{p_i}\cap H_{p_j}$ intersects the interior of $K$, because the ellipses $E_{p_i}$ and $E_{p_j}$ are not tangent. For $1\leq i < j \leq 4$, let $\{p, p_{ij}\}=E_{p_i}\cap E_{p_j} = L_{ij}\cap \mathrm{bd} K$. We shall show that there is a quadric $Q$ that contains the four ellipses $E_{p_1}$, $E_{p_2}$, $E_{p_3}$, $E_{p_4}$ and from that, we shall prove that $Q$ coincides with $\mathrm{bd} K$ in a neighborhood of $p$. In order to prove that $E_{p_1}$, $E_{p_2}$, $E_{p_3}$, $E_{p_4}$ are contained in a quadric $Q$, we follow the spirit of Gruber and Bianchi in \cite{gruber}. If $p\in \mathrm{bd} K$, let $H_p$ be the support plane of $K$ at $p$ and let $L_{p_i}=H_{p_i}\cap H_p$, $i=1,\dots ,4$. \noindent\textbf{Case a).} No three of the planes $H_{p_1}$, $H_{p_2}$, $H_{p_3}$, $H_{p_4}$ share a line. Then we have six distinct points $p_{ij}\in \mathrm{bd} K$. By elementary projective geometry, let $Q$ be the unique quadric which is tangent to the plane $H_p$ at $p$ and contains the six points $\{ p_{ij}\mid 1\leq i < j \leq 4 \}$.
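The existence and uniqueness of such a quadric $Q$ can be seen by a parameter count; the following expository remark is ours and not part of the original argument.

```latex
% A quadric surface in homogeneous coordinates [x_0 : x_1 : x_2 : x_3],
\[
  Q:\qquad \sum_{0\le i\le j\le 3} a_{ij}\, x_i x_j = 0 ,
\]
% has ten coefficients a_{ij}, i.e. nine degrees of freedom up to a common
% scale. The six points p_{ij} impose six linear conditions on the a_{ij},
% and tangency to H_p at p imposes three more (Q passes through p and its
% tangent plane at p contains two independent directions of H_p); for points
% in general position these nine conditions determine Q uniquely.
```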
Furthermore, let $F_1\subset H_1$ be the unique quadric which is tangent to the line $L_{p_1}$ at $p$ and contains the three points $p_{12}$, $p_{13}$, $p_{14}$. Then $Q\cap H_{p_1}=F_1=E_{p_1}$. Similarly, the ellipses $E_{p_2}$, $E_{p_3}$, and $E_{p_4}$ are contained in the quadric $Q$. \begin{center} \includegraphics[width=3in]{4elipsesa.pdf}\rput(-3.7,5.85){$p$}\rput(-3.9,2.9){$p_{12}$}\rput(-2.3,3.3){$p_{13}$}\rput(-3.1,2.4){$p_{23}$}\rput(-.9,5.5){\textcolor{blue}{$E_1$}}\rput(-1.8,2.6){\textcolor[rgb]{0,.5,0}{$E_2$}}\rput(-2.4,6.75){\textcolor{red}{$E_3$}}\rput(-1.1,3.6){\textcolor[rgb]{.55,0,0.12}{$E_4$}}\rput(-4.6,5.45){$p_1$}\rput(-2.7,5.35){$p_2$}\rput(-4.95,3.8){$p_3$} \end{center} \noindent\textbf{Case b).} Three of the planes $H_{p_1}$, $H_{p_2}$, $H_{p_3}$, $H_{p_4}$ share a line. So without loss of generality, suppose $H_{p_1} \cap H_{p_2} \cap H_{p_3} =L_{12} = L_{13} =L_{23}$ and $p_{12} = p_{13} =p_{23}$. Let $\Gamma$ be the supporting plane of $K$ at $p_{12} = p_{13} =p_{23}$ and let $Q$ be the unique quadric tangent to $H_p$ at $p$, tangent to $\Gamma$ at $p_{12} = p_{13} =p_{23}$, and containing arbitrarily chosen points $p_i\in E_i\setminus\{p, p_{12} = p_{13} =p_{23}\}$. Then $Q\cap H_1$ is the unique quadric contained in $H_1$, tangent to $L_{p_1}$ at $p$, tangent to $\Gamma \cap H_1$ at $p_{12} = p_{13} =p_{23}$ and that contains $p_1$. This implies that the ellipse $E_1$ is contained in $Q$. Similarly, $E_2 \cup E_3 \subset Q$. Let $F_4$ be the unique quadric contained in $H_4$, tangent to $L_{p_4}$ at $p$, and that contains the three distinct points $p_{14}$, $p_{24}$, and $p_{34}$. Clearly $F_4=Q\cap H_4$ but also $F_4=E_4$, which implies that $E_4\subset Q$. \noindent\textbf{Case c).} The four planes $H_{p_1}$, $H_{p_2}$, $H_{p_3}$, $H_{p_4}$ share a line. Let $H_{p_1}\cap H_{p_2}\cap H_{p_3}\cap H_{p_4}=L_0$ and $L_0\cap \mathrm{bd} K=\{p,R\}$. At $R\in \mathrm{bd} K$ there is only one support plane $H_R$ of $K$.
By using a projective homeomorphism if necessary, we may assume without loss of generality that $H_p$ is parallel to $H_R$ and also that $L_0$ is orthogonal to both $H_p$ and $H_R$. As in Case b), there is a quadric $Q$ containing $E_{p_1}$, $E_{p_2}$, $E_{p_3}$. By Lemma \ref{lem2}, we know that the sectional curvature at three different directions determines the sectional curvature at all other directions. In our situation, this implies that the curvatures of the two ellipses $Q\cap H_{p_4}$ and $E_4$ are the same. This, together with the fact that both ellipses $Q\cap H_{p_4}$ and $E_4$ contained in $H_4$ are tangent at $p$ to $H_4\cap H_p$ and tangent at $R$ to $H_4\cap H_R$, implies that $Q\cap H_{p_4}=E_4$ and hence that $E_4\subset Q$. We are ready to prove that there is a neighborhood $U$ of $p$ in $\mathrm{bd} K$ such that $U\subset Q$. Suppose this is not so; then there is a sequence $q_1, q_2,\dots \in \mathrm{bd} K\setminus Q$ converging to $p$. For every $q_i\in \mathrm{bd} K$, let $H_{q_i}$ be a plane through $q_i$ such that $H_{q_i}\cap \mathrm{bd} K$ is an ellipse $E_{q_i}$ of area greater than $\alpha$. By considering a subsequence and renumbering if necessary, we may assume that the $H_{q_i}$ converge to a plane $H$ through $p$. The fact that $H_{q_i}\cap \mathrm{bd} K$ is an ellipse of area greater than $\alpha >0$ implies that $H$ is not a tangent plane of $K$. So let $L=H\cap H_p$. We may assume without loss of generality that $L\neq L_{p_1}, L_{p_2},L_{p_3}$, so the plane $H$ intersects each of the ellipses $E_1,E_2,E_3$ at two points, one of them being $p$. Since $H_{q_i}$ converges to $H$, we may choose $n_0$ such that if $i>n_0$, then $H_{q_i}$ intersects the ellipse $E_j$ at two distinct points, $j=1,2,3$. Therefore, since the conics $E_{q_i}$ and $Q\cap H_{q_i}$ share at least six points, they must be the same. This implies that $q_i\in Q$ for $i>n_0$, contradicting the fact that $q_1, q_2,\dots \in \mathrm{bd} K\setminus Q$.
The fact that $\mathrm{bd} K$ is compact, connected and locally a quadric implies that $\mathrm{bd} K$ is an ellipsoid. \end{proof} \end{document}
\begin{document} \title{Geometrical optimization of spin clusters for the preservation of quantum coherence} \author{Lea Gassab} \email{[email protected]} \affiliation{Department of Physics, Ko{\c{c}} University, 34450 Sar{\i}yer, Istanbul, T\"{u}rkiye} \author{Onur Pusuluk} \affiliation{Department of Physics, Ko{\c{c}} University, 34450 Sar{\i}yer, Istanbul, T\"{u}rkiye} \author{\"{O}zg\"{u}r E. M\"{u}stecapl{\i}o\u{g}lu} \affiliation{Department of Physics, Ko{\c{c}} University, 34450 Sar{\i}yer, Istanbul, T\"{u}rkiye} \affiliation{T\"{U}B\.{I}TAK Research Institute for Fundamental Sciences, 41470 Gebze, T\"{u}rkiye} \begin{abstract} We investigate the influence of geometry on the preservation of quantum coherence in spin clusters subjected to a thermal environment. Assuming weak inter-spin coupling, we explore the various buffer network configurations that can be embedded in a plane. Our findings reveal that the connectivity of the buffer network is crucial in determining the preservation duration of quantum coherence in an individual central spin. Specifically, we observe that the maximal planar graph yields the longest preservation time for a given number of buffer spins. Interestingly, our results demonstrate that the preservation time does not consistently increase with an increasing number of buffer spins. Employing a quantum master equation in our simulations, we further demonstrate that a tetrahedral geometry comprising a four-spin buffer network provides optimal protection against environmental effects. \end{abstract} \maketitle \section{Introduction} Quantum coherence plays a vital role in a wide range of quantum technologies, including quantum computing~\cite{unruh1995maintaining,duan1997preserving}, quantum sensing~\cite{RevModPhys.89.035002}, quantum metrology~\cite{Shlyakhov2018QMetrology}, and quantum cryptography~\cite{Gisin2002Crypto}. 
Moreover, investigating quantum coherence holds tremendous potential for enhancing our understanding of the underlying physics of living systems~\cite{chin2012coherence}. However, the presence of environmental noise and the detrimental effects of decoherence pose substantial challenges \cite{schlosshauer2019quantum,brandt1999qubit}. Noise can arise from various sources, such as thermal fluctuations, electromagnetic radiation, and interactions with neighboring particles. To address this challenge, several strategies have been proposed. These encompass intentionally introducing supplementary noise during the coupling process \cite{yang2022quantum}, applying periodic kicks \cite{viola1999dynamical}, implementing a non-Hermitian driving potential \cite{huang2021effective}, employing correlated channels for interaction \cite{sk2022protecting}, leveraging topological edge states \cite{bahri2015localization,nie2020topologically,yao2021topological}, and integrating auxiliary atoms \cite{faizi2019protection} or even surfaces \cite{bluvstein2019extending} to safeguard coherence. The existing strategies for protecting coherence in artificial systems predominantly rely on external drives \cite{viola1999dynamical}, the presence of spontaneously occurring coherence \cite{faizi2019protection}, localization on edge states or positioning near surfaces \cite{bluvstein2019extending}, sophisticated interaction control methods \cite{sk2022protecting}, or tailored noise control techniques \cite{yang2022quantum}. In contrast, nature appears to have discovered a simpler solution by leveraging the arrangement and connectivity of molecular networks, as has been argued for light-harvesting complexes containing chromophores. Although the presence and beneficial effects of quantum coherence in such biological systems are still a subject of debate, we propose a theoretical exploration to identify ideal molecular geometries for coherence protection.
By doing so, we aim to guide the engineering of artificial molecules or materials that can store quantum coherence. Our approach holds the potential to shed light on the lifetime and possible existence of quantum coherence in physiological conditions in nature. Our studies may be further complemented by quantum chemistry calculations to assess the energetic stability of these artificial quantum networks and search for their natural analogs. We have a specific focus on investigating the influence of geometrical degrees of freedom on the preservation of quantum coherence within the core of an atomic cluster, where the atoms are assumed to be two-level systems, modelled as spin-1/2 particles. In order to achieve this objective, we thoroughly examine spin-star networks to evaluate the efficacy of peripheral spins as protective barriers, shielding the central spin from its surrounding environment. The sub-network comprised of these peripheral spins, known as the buffer network, serves as an intermediary layer that adeptly absorbs and dissipates environmental noise, while simultaneously upholding the coherence of the central spin. The utilization of buffer networks holds significant potential for applications in quantum computation, quantum sensing, and the study of biological molecules \cite{thatcher1993phosphonates,fisher2015quantum}. Our findings carry practical implications for various applications that demand long-lived coherent spin states, including quantum memories \cite{gentile2017optically}, quantum magnetometry \cite{barry2020sensitivity}, quantum control and computation \cite{vandersypen2001experimental,vandersypen2005nmr}, as well as biomedical imaging \cite{gossuin2010physics}. \begin{figure} \caption{A central spin surrounded by four buffer spins, each one in its own thermal environment at temperature $T$. 
To illustrate how the central spin has negligible interaction with the environment, while the buffer spins can have frequent interactions, the thermal environment is depicted as a molecular bath. This system can be effectively viewed as a composite open quantum system, where the central spin is isolated from the environment and each buffer spin has its own local thermal dissipation channel.} \label{Fig::SpinNetwork} \end{figure} The paper is structured as follows: Sec.~\ref{sec:model} presents an overview of the model employed in this study. In Sec.~\ref{sec:results}, we examine buffer networks with varying numbers of spins and connectivities, and present our findings. Lastly, in Sec.~\ref{sec:conclusion}, we summarize our conclusions based on the results obtained. \section{Model System}\label{sec:model} \subsection{Spin-Star Network with Different Topologies} Consider a cluster of $N+1$ spins that consists of a central spin surrounded by a buffer network, as illustrated in Fig.~\ref{Fig::SpinNetwork}. Each buffer spin is individually connected to a thermal bath, while the central spin does not engage in direct interactions with the environment. Our objective is to analyze the impact of buffer network size and topology on the coherence of the central spin. To achieve this, we focus on buffer networks that can be represented as planar graphs. We explore various scenarios by considering different numbers of buffer spins, ranging from the two extremes: (i) no coupling between buffer spins, and (ii) pairwise interactions between all nearest neighbor buffer spins (see Fig.~\ref{Fig::Geometries}). \begin{figure} \caption{Depiction of extreme geometries for central spin-buffer spins network for $N=2,3,4,5,6$ buffer spins. The central spin, depicted in orange (light) color, is isolated from the environment, yet it is coupled to the blue (dark) colored buffer spins, which have local thermal dissipation pathways.
For a particular number of buffer spins, $N$, we consider all the feasible buffer networks that can be embedded in a plane. Two extreme cases are depicted here: (Left column) No connectivity and (Right column) maximum connectivity within the buffer spin network.} \label{Fig::Geometries} \end{figure} For the sake of simplicity, we assume that all spins are identical with a magnitude of $1/2$. When subjected to a magnetic field aligned along the quantization axis $z$, an energy difference denoted by $\hbar \omega$ emerges between the lower state $\ket{0}$ (spin-up) and the upper state $\ket{1}$ (spin-down), enabling a two-level description. The buffer spins are brought into thermal equilibrium with the surrounding environment at an inverse temperature \(\beta\), while the central spin is initially prepared in a maximally coherent superposition state. Then, the initial cluster state can be represented as a product state: \begin{equation} \rho(0) = \ket{+}\bra{+}\otimes \rho_{th}^{\otimes N}, \qquad \ket{+} = \frac{\ket{0}+\ket{1}}{\sqrt{2}} \, . \end{equation} We take $\hbar = 1$. The thermal state of the buffer spins is defined as \begin{equation} \rho_{th} = \frac{\mathrm{e}^{- \beta \frac{\omega}{2}\hat\sigma_{z}}}{\mathcal{Z}}, \end{equation} where \(\hat\sigma_{z} = \ket{0} \bra{0} - \ket{1} \bra{1}\) is the Pauli-\(z\) operator and \(\mathcal{Z} = 2 \cosh[\beta \, \omega/2]\,\) denotes the partition function. Here, we assume the Boltzmann constant \(k_B\) to be equal to 1. The total Hamiltonian of the spin cluster reads as an XX central spin model, which is a simplified case of general Richardson-Gaudin spin cluster models \cite{gaudin1976diagonalization}: \begin{equation}\label{Eq::Hamiltonian} \hat H = \sum_{i=1}^{N+1} \frac{\omega}{2}\hat\sigma_{z}^{(i)} + \sum_{i\neq j} g_{ij} (\hat\sigma_{x}^{(i)}\hat\sigma_{x}^{(j)}+\hat\sigma_{y}^{(i)}\hat\sigma_{y}^{(j)}) \, . \end{equation}
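For concreteness, the Hamiltonian and the buffer Gibbs state can be assembled numerically. The sketch below is illustrative and not from the paper: it assumes NumPy, builds the smallest maximal-connectivity cluster (a central spin plus $N=2$ buffer spins, with every pair coupled at the same strength $g$), and counts each unordered pair once rather than summing over ordered pairs $i\neq j$.

```python
import numpy as np

# Single-spin operators in the {|0>, |1>} basis used in the text.
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
I2 = np.eye(2)

def op_on(site, op, n):
    """Embed a single-spin operator at position `site` of an n-spin register."""
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def xx_hamiltonian(n, omega, g):
    """XX Hamiltonian for an n-spin cluster in which every pair is coupled
    at strength g (each unordered pair counted once)."""
    H = sum(0.5 * omega * op_on(i, sz, n) for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            H = H + g * (op_on(i, sx, n) @ op_on(j, sx, n)
                         + op_on(i, sy, n) @ op_on(j, sy, n))
    return H

def thermal_state(omega, beta):
    """Gibbs state of one buffer spin: rho_th = exp(-beta*omega*sz/2) / Z."""
    w = np.exp(-0.5 * beta * omega * np.diag(sz))
    return np.diag(w / w.sum())

H = xx_hamiltonian(3, omega=1.0, g=0.002)    # central spin + 2 buffer spins
rho_th = thermal_state(1.0, beta=1.0 / 0.4)  # T = 0.4 as in the simulations
assert np.allclose(H, H.conj().T)            # the Hamiltonian is Hermitian
assert abs(np.trace(rho_th) - 1.0) < 1e-12   # a properly normalized state
```
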
The Pauli spin-$1/2$ operators for the $i^{th}$ spin are denoted by $\sigma_{x}^{(i)}$, $\sigma_{y}^{(i)}$, and $\sigma_{z}^{(i)}$, and the interaction strength between the spin pair $(i,j)$ is represented by $g_{ij}$. The central spin, labeled by ``1'', is coupled to the buffer spins at strength $g_{1j} = g\neq 0$ in all geometries under consideration. On the other hand, each buffer network is defined by a different array of coupling constants consisting of zero or \(g\) values, which can be represented by a planar graph. \begin{table*}[t!] \centering \caption{The dimensionless time after which the relative entropy \(S(\cdot\|\cdot)\), trace distance \(T(\cdot\|\cdot)\), relative entropy of coherence \(C_{RE}(\cdot)\) or \(L_1\) norm of coherence \(C_{L_1}(\cdot)\) remains below $10^{-4}$ for the central spin of the cluster, for the parameters $\omega = 1$, $g = 0.002$, $\gamma = 0.0005$, and $T = 0.4$. The left and right columns compare the same quantity for the left and right geometries depicted in Fig.~\ref{Fig::Geometries}, which correspond to vanishing and maximal connectivities in the buffer network, respectively.
The initial state of the buffer spins is a maximally coherent superposition state.} \begin{tabular}{|p{1cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|} \hline \(N+1\) &\multicolumn{2}{|c|}{$S(\rho_1 \| \rho_{th})$} & \multicolumn{2}{|c|}{$T(\rho_1, \rho_{th})$} & \multicolumn{2}{|c|}{$C_{RE}(\rho_1)$} & \multicolumn{2}{|c|}{$C_{L_1}(\rho_1)$} \\ \hline 3 & 20640 & 24210 & 20630 & 23710 & 38970 & 46680 & 41770 & 50150\\ \hline 4 & 17270 & 29870 & 16870 & 29730 & 32310 & 58320 & 34590 & 62020 \\ \hline 5 & 14990 & \textbf{32930} & 14250 & \textbf{32360} & 27250 & \textbf{64660} & 29600 & \textbf{66870} \\ \hline 6 & 13460 & 25440 & 12410 & 24800 & 24410 & 48070 & 25820 & 52010 \\ \hline 7 & 12930 & 19410 & 11030 & 17780 & 23260 & 35320 & 23260 & 37450 \\ \hline \end{tabular} \label{Table::Times} \end{table*} \begin{table*}[t!] \centering \caption{The mean value of the relative entropy \(S(\cdot\|\cdot)\), trace distance \(T(\cdot\|\cdot)\), relative entropy of coherence \(C_{RE}(\cdot)\) or \(L_1\) norm of coherence \(C_{L_1}(\cdot)\) of the central spin from time $t_1 = 29000 $ to time $t_2 = 30000 $. The parameters are the same as in Table~\ref{Table::Times}. 
The left and right columns compare the same quantity for the left and right geometries depicted in Fig.~\ref{Fig::Geometries}, which correspond to vanishing and maximal connectivities in the buffer network, respectively.} \begin{tabular}{|p{1cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|} \hline \(N+1\) &\multicolumn{2}{|c|}{$S(\rho_1 \| \rho_{th})$} &\multicolumn{2}{|c|}{$T(\rho_1, \rho_{th})$} & \multicolumn{2}{|c|}{$C_{RE}(\rho_1)$} & \multicolumn{2}{|c|}{$C_{L_1}(\rho_1)$} \\ \hline 3 & 1.13E-06 & 1.46E-05 & 1.12E-06 & 1.33E-05 & 4.60E-04 & 1.80E-03 & 9.17E-04 & 3.54E-03 \\ \hline 4 & 9.60E-08 & 1.12E-04 & 9.35E-08 & 1.07E-04 & 1.36E-04 & 5.07E-03 & 2.68E-04 & 1.01E-02 \\ \hline 5 & 8.26E-09 & \textbf{2.54E-04} & 6.73E-09 & \textbf{2.14E-04} & 3.89E-05 & \textbf{7.38E-03} & 7.25E-05 & \textbf{1.42E-02} \\ \hline 6 & 1.83E-09 & 2.47E-05 & 5.04E-10 & 2.07E-05 & 1.54E-05 & 2.29E-03 & 1.94E-05 & 4.41E-03 \\ \hline 7 & 1.35E-09 & 9.41E-07 & 5.32E-11 & 4.35E-07 & 1.17E-05 & 3.89E-04 & 6.32E-06 & 6.39E-04 \\ \hline \end{tabular} \label{Table::Value2} \end{table*} That is to say, the buffer network can be drawn in the plane so that buffer spins and their non-zero coupling constants are represented by vertices and edges, respectively, and no two edges of the resulting graph intersect at a point other than a vertex. The maximum number of edges between the vertices corresponding to the buffer spins in such a graph is $3N-6$, in the case of \(N \geq 3\) buffer spins. This means that the array of coupling constants between buffer spins cannot include more than \(3\,N - 6\) non-zero values. Hence, the number of possible geometries for a cluster of \(N+1\) spins becomes \begin{equation} M = \sum_{k=0}^{3\,N - 6} \binom{E}{k} \, , \end{equation} where $E=N(N-1)/2$ is the number of edges for a complete graph.
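The count $M$ is easy to evaluate directly; the snippet below (an illustration assuming Python 3.8+ for `math.comb`, not part of the paper) checks two small cases of the formula above.

```python
from math import comb

def num_geometries(N):
    """Evaluate M = sum_{k=0}^{3N-6} C(E, k) with E = N(N-1)/2,
    the count of buffer-network geometries for N >= 3 buffer spins."""
    E = N * (N - 1) // 2
    return sum(comb(E, k) for k in range(3 * N - 6 + 1))

assert num_geometries(4) == 64    # E = 6 = 3N-6: all 2**6 edge subsets occur
assert num_geometries(5) == 1023  # E = 10, at most 9 edges: 2**10 - 1
```

For $N=4$ the planarity bound $3N-6$ equals the number of edges of the complete graph, so every edge subset is counted; for $N=5$ only the complete graph itself is excluded.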
Fig.~\ref{Fig::Geometries} shows two extreme geometries contributing to this sum for $N = 2, 3, 4, 5$, and $6$ buffer spins; $k$ classifies the buffer network geometries with respect to the number of edges they contain. The left geometry corresponds to the \(k=0\) term, whereas the right geometry corresponds to one of the \(k = 3\,N - 6\) terms, which allows the central spin to be placed right in the middle of the buffer network. \subsection{Open System Dynamics} \begin{figure*} \caption{The behaviour of the \(L_1\) norm of coherence \(C_{L_1}\) of the central spin.} \label{Fig::CohIn4Spin} \end{figure*} We describe the open quantum system dynamics of the spin-star network by the following Lindblad master equation~\cite{breuer2002theory} \begin{equation} \label{Eq::MasterEq} \dot{\rho}(t) = -i[\hat H,\rho(t)] + \mathcal{D}(\rho(t)) \, , \end{equation} where \(\hbar\) is taken to be 1 and the unitary contribution to the dynamics is provided by the self-Hamiltonian of the system given in Eq.~\eqref{Eq::Hamiltonian}. Assuming weakly coupled buffer spins, the local thermal dissipation channels are described by the dissipator in Eq.~(\ref{Eq::MasterEq}), which reads \begin{eqnarray} \begin{aligned} \label{Eq::Dissip} \mathcal{D}(\rho) &= \sum_{i=2}^{N+1} \gamma_{i} \, (1+n(\omega))[\hat\sigma_i^-\rho(t)\hat\sigma_i^{+} -\frac{1}{2} \left\{\hat\sigma_i^{+}\hat\sigma_i^-,\rho(t)\right\}]\\ & +\sum_{i=2}^{N+1} \gamma_{i} \, n(\omega) \, [\hat\sigma_i^{+}\rho(t)\hat\sigma_i^- -\frac{1}{2} \left\{\hat\sigma_i^- \hat\sigma_i^{+},\rho(t)\right\}] \, , \end{aligned} \end{eqnarray} where $n(\omega)$ is the Planck distribution at the spin resonance frequency $\omega$, $\hat\sigma^\pm$ are the Pauli spin ladder operators, and $\gamma_i = \gamma$ is the coupling constant between the environment and the $i$-th buffer spin, taken to be homogeneous for all buffer spins and independent of the network structure for simplicity.
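To make the structure of the dissipator concrete, the following minimal NumPy sketch restricts it to a single spin and checks that the local thermal state is stationary. It uses the dissipation rate $\gamma = 0.0005$, $\omega = 1$, and $T = 0.4$ quoted in the Results section; the basis ordering (index 0 = ground state) and the matrix form of $\hat\sigma^\pm$ are our conventions, not taken from the paper:

```python
import numpy as np

# Single-spin version of the local thermal dissipator:
# D(rho) = g(1+n)[s- rho s+ - {s+ s-, rho}/2] + g n [s+ rho s- - {s- s+, rho}/2]
sm = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma^- = |0><1| (0 = ground)
sp = sm.conj().T                                 # sigma^+
gamma = 0.0005                                   # bath coupling (paper value)
n = 1.0 / np.expm1(1.0 / 0.4)                    # Planck occupation at w=1, T=0.4

def dissipator(rho):
    def lind(c, r):  # standard Lindblad term for collapse operator c
        return c @ r @ c.conj().T - 0.5 * (c.conj().T @ c @ r + r @ c.conj().T @ c)
    return gamma * (1 + n) * lind(sm, rho) + gamma * n * lind(sp, rho)

# Detailed balance gives the excited-state population p = n/(2n+1),
# so the local thermal state rho_th = diag(1-p, p) is a fixed point:
p = n / (2 * n + 1)
rho_th = np.diag([1 - p, p]).astype(complex)
print(np.allclose(dissipator(rho_th), 0))  # -> True
```

In the full simulation each of the $N$ buffer spins carries one such channel, while the central spin couples to the environment only indirectly through the network.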
To determine whether the dissipator~(\ref{Eq::Dissip}) achieves local thermalization in our simulations, we compare the reduced spin states \(\rho_i = \mathrm{tr}_{j_1\cdots j_n}[\rho]\), \(j_k \neq i\), with the local thermal state \(\rho_{th}\) using two different measures. The first is the relative entropy, defined as \begin{equation}\label{Eq::RE} S(\rho \| \sigma) = \mathrm{tr}[\rho \, \log_2 \rho] \, - \mathrm{tr}[ \rho \, \log_2 \sigma] . \end{equation} The second is the trace distance, which for Hermitian matrices \(\rho\) and \(\sigma\) reduces to \begin{equation}\label{Eq::TD} T(\rho , \sigma) = \frac{1}{2} \mathrm{tr}\left[\sqrt{(\rho - \sigma)^2}\right] \, . \end{equation} For a given number of buffer spins $N$, we investigate the open system dynamics of the $M$ different spin clusters and search for the geometry that optimizes the protection of the central spin coherence. To this end, we quantify the quantum coherence of the reduced state of the central spin \(\rho_1 = \mathrm{tr}_{2\cdots N+1}[\rho]\) by the $L_1$ norm of coherence~\cite{baumgratz2014quantifying}, which equals the sum of the magnitudes of all off-diagonal elements of a given density matrix: \begin{equation}\label{Eq::l1Norm} C_{L_1}(\rho)=\sum_{i\ne j}|\rho_{ij}| \, . \end{equation} $C_{L_1}$ neglects the signs of the distinct coherences in the basis \(\{\ket{0},\ket{1}\}\) and accounts for each of them independently.
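The three quantities defined so far can be implemented in a few lines of NumPy; this is a minimal sketch under the assumption of a full-rank reference state $\sigma$ (function names are ours):

```python
import numpy as np

def relative_entropy(rho, sigma):
    """S(rho||sigma) = tr[rho log2 rho] - tr[rho log2 sigma],
    via eigendecompositions; assumes sigma has full rank."""
    def logm2(m):
        w, v = np.linalg.eigh(m)
        return v @ np.diag(np.log2(np.clip(w, 1e-300, None))) @ v.conj().T
    return float(np.real(np.trace(rho @ (logm2(rho) - logm2(sigma)))))

def trace_distance(rho, sigma):
    """T = (1/2) tr sqrt((rho-sigma)^2): half the sum of |eigenvalues|
    of the Hermitian difference."""
    return 0.5 * float(np.sum(np.abs(np.linalg.eigvalsh(rho - sigma))))

def l1_coherence(rho):
    """C_L1 = sum of magnitudes of the off-diagonal elements."""
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))

plus = 0.5 * np.ones((2, 2))   # maximally coherent qubit state |+><+|
mixed = 0.5 * np.eye(2)        # maximally mixed qubit state
print(l1_coherence(plus))      # -> 1.0
print(trace_distance(plus, mixed))  # -> 0.5
```

For the maximally coherent state the relative entropy to the maximally mixed state evaluates to one bit, consistent with the $\log_2$ convention used in Eq.~(\ref{Eq::RE}).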
Another measure that we utilize to determine the amount of coherence in the central spin is the relative entropy of coherence~\cite{baumgratz2014quantifying}: \begin{equation}\label{Eq_REofCoh} C_{RE}\!\left[\rho\right] = \min_{\varsigma \in \mathrm{IC}} \left(S\left[\rho \| \varsigma\right]\right) = S\left[\rho \| \rho_{d}\right] , \end{equation} where the minimum is taken over the set of incoherent states (IC) that are diagonal in the basis \(\{\ket{0},\ket{1}\}\), and $\rho_{d}$ is the diagonal part of the density matrix $\rho$. $C_{RE}$ thus measures the distinguishability of a density matrix from its copy subjected to full dephasing. \section{Results}\label{sec:results} The simulations were performed using scientific Python packages together with key libraries from QuTiP~\cite{johansson2012qutip}. The spin transition frequency $\omega$ was taken as the time and energy scale (such that $\omega = 1$), and dimensionless scaled parameters are used in the simulations. In particular, the inter-spin coupling strength \(g\) and the bath dissipation rate \(\gamma\) were taken to be 0.002 and 0.0005, respectively. The temperature of the environment ($T = \beta^{-1}$) was taken as 0.4. The simulations were continued until the quantum coherence of the central spin vanished; specifically, we record the time after which the quantum coherence remains below $10^{-4}$. The buffer spins start in a thermal state at the same temperature as their local bath ($T=0.4$). Our simulations showed that the dissipator~(\ref{Eq::Dissip}) provides local thermalization for both the central and buffer spins. To verify this, we compared the steady-state spin states \(\rho_i(\infty)\) with the local thermal state \(\rho_{th}\) using the relative entropy and the trace distance. Additionally, we identified the thermalization time at which the relative entropy between the reduced spin states and \(\rho_{th}\) effectively vanishes.
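Since $\mathrm{tr}[\rho \log_2 \rho_d]$ depends only on the diagonal of $\rho$, the relative entropy of coherence in Eq.~(\ref{Eq_REofCoh}) reduces to the entropy difference $C_{RE}(\rho) = S(\rho_d) - S(\rho)$ between the dephased and original states. A minimal NumPy sketch of this identity (helper names are ours):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits, dropping numerically-zero eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def c_re(rho):
    """Relative entropy of coherence: C_RE(rho) = S(rho_d) - S(rho),
    where rho_d is the fully dephased (diagonal) part of rho."""
    rho_d = np.diag(np.diag(rho))
    return entropy(rho_d) - entropy(rho)

plus = 0.5 * np.ones((2, 2))   # maximally coherent qubit state
print(c_re(plus))               # -> 1.0
```

A fully incoherent (diagonal) state gives $C_{RE} = 0$, while the maximally coherent qubit state attains the maximum of one bit.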
The findings regarding the central spin are summarized in the first two columns of Table~\ref{Table::Times} for the geometries depicted in Fig.~\ref{Fig::Geometries}. The table also presents the times at which the relative entropy of coherence vanishes. Notably, the vanishing of $C_{RE}(\rho_1)$ consistently occurs later than that of $S(\rho_1 \| \rho_{th})$, indicating that complete decoherence occurs after the populations have reached their equilibrium values. \begin{figure*} \caption{The behavior of the \(L_1\) norm of coherence \(C_{L_1}\) of a single spin, initially in a maximally coherent superposition state, in contact with a thermal bath.} \label{Fig::Coh1Spin} \end{figure*} Our findings demonstrate a correlation between the protection time of quantum coherence in the central spin and the connectivity of the buffer network for a given number of buffer spins. We observe that the network with a maximal planar graph embedding yields the longest protection time. To illustrate this, Table~\ref{Table::Times} presents a comparison of the protection times for different cluster sizes, depicted in Fig.~\ref{Fig::Geometries}, with the left and right geometries representing minimal and maximal connectivity in the buffer network, respectively. Additionally, Fig.~\ref{Fig::CohIn4Spin} depicts the time-dependent behavior of the central spin quantum coherence in four-buffer-spin networks with tetrahedral and square geometries; notably, the former exhibits a longer survival time for quantum coherence. Thus, we conclude that a higher number of buffer spin interactions results in a more effective protective shell around the central spin. In the scenario of vanishing connectivity, we observe a continuous decrease of the coherence protection time as the number of buffer spins increases. This is expected, since each buffer spin introduces an additional local thermalization channel, thereby accelerating the thermalization process. However, the behavior of the protection time differs in the maximal connectivity scenario.
It does not increase monotonically with the number of buffer spins in the cluster. The optimal protection against environmental decoherence, as indicated in the last column of Table~\ref{Table::Times}, is achieved by a four-spin buffer network with a tetrahedral geometry. Furthermore, Table~\ref{Table::Value2} provides the mean value of the quantum coherence of the central spin. The mean value is calculated between $t_1 = 29000$ and $t_2 = 30000$, which corresponds to the thermalization time of a single spin initially in a maximally coherent superposition state within a thermal bath, as depicted in Fig.~\ref{Fig::Coh1Spin}. According to Table~\ref{Table::Value2}, the tetrahedral geometry with four buffer spins yields the highest mean coherence value. \section{Conclusion} \label{sec:conclusion} In this study, we have examined the influence of the buffer network size and topology on the preservation of quantum coherence in a central spin. Our analysis has specifically concentrated on weak inter-spin coupling, considering buffer networks that can be embedded in a plane without intersecting edges. The results have yielded a noteworthy observation: the preservation time of quantum coherence does not exhibit a consistent increase with the addition of more buffer spins to the cluster. Remarkably, we have identified that a four-spin buffer network, characterized by maximum connectivity and adopting a tetrahedral geometry, provides the most effective means of preserving quantum coherence against the perturbations arising from the thermal environment. This finding highlights the significance of carefully optimizing the buffer network's structural features to enhance the protection of quantum coherence in practical applications.
Notably, this tetrahedral geometry is frequently observed in natural molecules, such as water-ice systems in hexagonal phases~\cite{raza2011proton}, magnetic spin-ice substances~\cite{bramwell2001spin}, and phosphate molecules in Posner's clusters~\cite{fisher2015quantum}. It is worth mentioning that our simplified model does not fully account for such complex molecules. Nevertheless, the correspondence between the tetrahedral network and the optimal geometry holds potential significance for both understanding biochemical processes and advancing artificial quantum technologies. \nocite{*} \begin{thebibliography}{30} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{https://doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand 
\bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Unruh}(1995)}]{unruh1995maintaining} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~G.}\ \bibnamefont {Unruh}},\ }\bibfield {title} {\bibinfo {title} {Maintaining coherence in quantum computers},\ }\href {https://doi.org/10.1103/PhysRevA.51.992} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {51}},\ \bibinfo {pages} {992} (\bibinfo {year} {1995})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Duan}\ and\ \citenamefont {Guo}(1997)}]{duan1997preserving} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.-M.}\ \bibnamefont {Duan}}\ and\ \bibinfo {author} {\bibfnamefont {G.-C.}\ \bibnamefont {Guo}},\ }\bibfield {title} {\bibinfo {title} {Preserving coherence in quantum computation by pairing quantum bits},\ }\href {https://doi.org/10.1103/PhysRevLett.79.1953} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {79}},\ \bibinfo {pages} {1953} (\bibinfo {year} {1997})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Degen}\ \emph {et~al.}(2017)\citenamefont {Degen}, \citenamefont {Reinhard},\ and\ \citenamefont {Cappellaro}}]{RevModPhys.89.035002} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~L.}\ \bibnamefont {Degen}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Reinhard}},\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Cappellaro}},\ }\bibfield {title} {\bibinfo {title} {Quantum sensing},\ }\href {https://doi.org/10.1103/RevModPhys.89.035002} {\bibfield {journal} {\bibinfo {journal} {Reviews of Modern Physics}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {035002} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shlyakhov}\ \emph {et~al.}(2018)\citenamefont {Shlyakhov}, 
\citenamefont {Zemlyanov}, \citenamefont {Suslov}, \citenamefont {Lebedev}, \citenamefont {Paraoanu}, \citenamefont {Lesovik},\ and\ \citenamefont {Blatter}}]{Shlyakhov2018QMetrology} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~R.}\ \bibnamefont {Shlyakhov}}, \bibinfo {author} {\bibfnamefont {V.~V.}\ \bibnamefont {Zemlyanov}}, \bibinfo {author} {\bibfnamefont {M.~V.}\ \bibnamefont {Suslov}}, \bibinfo {author} {\bibfnamefont {A.~V.}\ \bibnamefont {Lebedev}}, \bibinfo {author} {\bibfnamefont {G.~S.}\ \bibnamefont {Paraoanu}}, \bibinfo {author} {\bibfnamefont {G.~B.}\ \bibnamefont {Lesovik}},\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Blatter}},\ }\bibfield {title} {\bibinfo {title} {Quantum metrology with a transmon qutrit},\ }\href {https://doi.org/10.1103/PhysRevA.97.022115} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {97}},\ \bibinfo {pages} {022115} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gisin}\ \emph {et~al.}(2002)\citenamefont {Gisin}, \citenamefont {Ribordy}, \citenamefont {Tittel},\ and\ \citenamefont {Zbinden}}]{Gisin2002Crypto} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ribordy}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Tittel}},\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Zbinden}},\ }\bibfield {title} {\bibinfo {title} {Quantum cryptography},\ }\href {https://doi.org/10.1103/RevModPhys.74.145} {\bibfield {journal} {\bibinfo {journal} {Review of Modern Physics}\ }\textbf {\bibinfo {volume} {74}},\ \bibinfo {pages} {145} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chin}\ \emph {et~al.}(2012)\citenamefont {Chin}, \citenamefont {Huelga},\ and\ \citenamefont {Plenio}}]{chin2012coherence} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Chin}}, 
\bibinfo {author} {\bibfnamefont {S.~F.}\ \bibnamefont {Huelga}},\ and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Plenio}},\ }\bibfield {title} {\bibinfo {title} {Coherence and decoherence in biological systems: principles of noise-assisted transport and the origin of long-lived coherences},\ }\href {https://doi.org/10.1098/rsta.2011.0224} {\bibfield {journal} {\bibinfo {journal} {Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences}\ }\textbf {\bibinfo {volume} {370}},\ \bibinfo {pages} {3638} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schlosshauer}(2019)}]{schlosshauer2019quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Schlosshauer}},\ }\bibfield {title} {\bibinfo {title} {Quantum decoherence},\ }\href {https://doi.org/10.1016/j.physrep.2019.10.001} {\bibfield {journal} {\bibinfo {journal} {Physics Reports}\ }\textbf {\bibinfo {volume} {831}},\ \bibinfo {pages} {1} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brandt}(1999)}]{brandt1999qubit} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~E.}\ \bibnamefont {Brandt}},\ }\bibfield {title} {\bibinfo {title} {Qubit devices and the issue of quantum decoherence},\ }\href {https://doi.org/10.1016/S0079-6727(99)00003-8} {\bibfield {journal} {\bibinfo {journal} {Progress in Quantum Electronics}\ }\textbf {\bibinfo {volume} {22}},\ \bibinfo {pages} {257} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yang}\ \emph {et~al.}(2022)\citenamefont {Yang}, \citenamefont {Yin}, \citenamefont {Zhang},\ and\ \citenamefont {Nie}}]{yang2022quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Zhang}},\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Nie}},\ 
}\bibfield {title} {\bibinfo {title} {Quantum coherence protection by noise},\ }\href {https://doi.org/10.1088/1612-202X/ac6e6f} {\bibfield {journal} {\bibinfo {journal} {Laser Physics Letters}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo {pages} {075202} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Viola}\ \emph {et~al.}(1999)\citenamefont {Viola}, \citenamefont {Knill},\ and\ \citenamefont {Lloyd}}]{viola1999dynamical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Viola}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Knill}},\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}},\ }\bibfield {title} {\bibinfo {title} {Dynamical decoupling of open quantum systems},\ }\href {https://doi.org/10.1103/PhysRevLett.82.2417} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {2417} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ \emph {et~al.}(2021)\citenamefont {Huang}, \citenamefont {Zhao},\ and\ \citenamefont {Li}}]{huang2021effective} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.-Q.}\ \bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {W.-L.}\ \bibnamefont {Zhao}},\ and\ \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Li}},\ }\bibfield {title} {\bibinfo {title} {Effective protection of quantum coherence by a non-hermitian driving potential},\ }\href {https://doi.org/10.1103/PhysRevA.104.052405} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {104}},\ \bibinfo {pages} {052405} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sk}\ and\ \citenamefont {Panigrahi}(2022)}]{sk2022protecting} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Sk}}\ and\ \bibinfo {author} {\bibfnamefont {P.~K.}\ \bibnamefont {Panigrahi}},\ }\bibfield {title} {\bibinfo {title} 
{Protecting quantum coherence and entanglement in a correlated environment},\ }\href {https://doi.org/10.1016/j.physa.2022.127129} {\bibfield {journal} {\bibinfo {journal} {Physica A: Statistical Mechanics and its Applications}\ }\textbf {\bibinfo {volume} {596}},\ \bibinfo {pages} {127129} (\bibinfo {year} {2022})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bahri}\ \emph {et~al.}(2015)\citenamefont {Bahri}, \citenamefont {Vosk}, \citenamefont {Altman},\ and\ \citenamefont {Vishwanath}}]{bahri2015localization} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Bahri}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Vosk}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Altman}},\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Vishwanath}},\ }\bibfield {title} {\bibinfo {title} {Localization and topology protected quantum coherence at the edge of hot matter},\ }\href {https://doi.org/10.1038/ncomms8341} {\bibfield {journal} {\bibinfo {journal} {Nature Communications}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {7341} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nie}\ \emph {et~al.}(2020)\citenamefont {Nie}, \citenamefont {Peng}, \citenamefont {Nori},\ and\ \citenamefont {Liu}}]{nie2020topologically} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Nie}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ and\ \bibinfo {author} {\bibfnamefont {Y.-x.}\ \bibnamefont {Liu}},\ }\bibfield {title} {\bibinfo {title} {Topologically protected quantum coherence in a superatom},\ }\href {https://doi.org/10.1103/PhysRevLett.124.023603} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {124}},\ \bibinfo {pages} {023603} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yao}\ \emph {et~al.}(2021)\citenamefont {Yao}, 
\citenamefont {Schl{\"o}mer}, \citenamefont {Ma}, \citenamefont {Venuti},\ and\ \citenamefont {Haas}}]{yao2021topological} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Yao}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Schl{\"o}mer}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont {L.~C.}\ \bibnamefont {Venuti}},\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Haas}},\ }\bibfield {title} {\bibinfo {title} {Topological protection of coherence in disordered open quantum systems},\ }\href {https://doi.org/10.1103/PhysRevA.104.012216} {\bibfield {journal} {\bibinfo {journal} {Physical Review A}\ }\textbf {\bibinfo {volume} {104}},\ \bibinfo {pages} {012216} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Faizi}\ and\ \citenamefont {Ahansaz}(2019)}]{faizi2019protection} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Faizi}}\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Ahansaz}},\ }\bibfield {title} {\bibinfo {title} {Protection of quantum coherence in an open v-type three-level atom through auxiliary atoms},\ }\href {https://doi.org/10.1088/1402-4896/ab2aaf} {\bibfield {journal} {\bibinfo {journal} {Physica Scripta}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {115102} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bluvstein}\ \emph {et~al.}(2019)\citenamefont {Bluvstein}, \citenamefont {Zhang}, \citenamefont {McLellan}, \citenamefont {Williams},\ and\ \citenamefont {Jayich}}]{bluvstein2019extending} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Bluvstein}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {C.~A.}\ \bibnamefont {McLellan}}, \bibinfo {author} {\bibfnamefont {N.~R.}\ \bibnamefont {Williams}},\ and\ \bibinfo {author} {\bibfnamefont {A.~C.~B.}\ \bibnamefont 
{Jayich}},\ }\bibfield {title} {\bibinfo {title} {Extending the quantum coherence of a near-surface qubit by coherently driving the paramagnetic surface environment},\ }\href {https://doi.org/10.1103/PhysRevLett.123.146804} {\bibfield {journal} {\bibinfo {journal} {Physical Review Letters}\ }\textbf {\bibinfo {volume} {123}},\ \bibinfo {pages} {146804} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Thatcher}\ and\ \citenamefont {Campbell}(1993)}]{thatcher1993phosphonates} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~R.}\ \bibnamefont {Thatcher}}\ and\ \bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont {Campbell}},\ }\bibfield {title} {\bibinfo {title} {Phosphonates as mimics of phosphate biomolecules: ab initio calculations on tetrahedral ground states and pentacoordinate intermediates for phosphoryl transfer},\ }\href {https://doi.org/10.1021/jo00060a050} {\bibfield {journal} {\bibinfo {journal} {The Journal of Organic Chemistry}\ }\textbf {\bibinfo {volume} {58}},\ \bibinfo {pages} {2272} (\bibinfo {year} {1993})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fisher}(2015)}]{fisher2015quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {Fisher}},\ }\bibfield {title} {\bibinfo {title} {Quantum cognition: The possibility of processing with nuclear spins in the brain},\ }\href {https://doi.org/10.1016/j.aop.2015.08.020} {\bibfield {journal} {\bibinfo {journal} {Annals of Physics}\ }\textbf {\bibinfo {volume} {362}},\ \bibinfo {pages} {593} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gentile}\ \emph {et~al.}(2017)\citenamefont {Gentile}, \citenamefont {Nacher}, \citenamefont {Saam},\ and\ \citenamefont {Walker}}]{gentile2017optically} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~R.}\ \bibnamefont {Gentile}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Nacher}}, \bibinfo {author} {\bibfnamefont 
{B.}~\bibnamefont {Saam}},\ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Walker}},\ }\bibfield {title} {\bibinfo {title} {Optically polarized he 3},\ }\href {https://doi.org/10.1103/RevModPhys.89.045004} {\bibfield {journal} {\bibinfo {journal} {Reviews of Modern Physics}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {045004} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barry}\ \emph {et~al.}(2020)\citenamefont {Barry}, \citenamefont {Schloss}, \citenamefont {Bauch}, \citenamefont {Turner}, \citenamefont {Hart}, \citenamefont {Pham},\ and\ \citenamefont {Walsworth}}]{barry2020sensitivity} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont {Barry}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Schloss}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Bauch}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Turner}}, \bibinfo {author} {\bibfnamefont {C.~A.}\ \bibnamefont {Hart}}, \bibinfo {author} {\bibfnamefont {L.~M.}\ \bibnamefont {Pham}},\ and\ \bibinfo {author} {\bibfnamefont {R.~L.}\ \bibnamefont {Walsworth}},\ }\bibfield {title} {\bibinfo {title} {Sensitivity optimization for nv-diamond magnetometry},\ }\href {https://doi.org/10.1103/RevModPhys.92.015004} {\bibfield {journal} {\bibinfo {journal} {Reviews of Modern Physics}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {015004} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vandersypen}\ \emph {et~al.}(2001)\citenamefont {Vandersypen}, \citenamefont {Steffen}, \citenamefont {Breyta}, \citenamefont {Yannoni}, \citenamefont {Sherwood},\ and\ \citenamefont {Chuang}}]{vandersypen2001experimental} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~M.}\ \bibnamefont {Vandersypen}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Steffen}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Breyta}}, \bibinfo {author} {\bibfnamefont {C.~S.}\ \bibnamefont 
{Yannoni}}, \bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont {Sherwood}},\ and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}},\ }\bibfield {title} {\bibinfo {title} {Experimental realization of shor's quantum factoring algorithm using nuclear magnetic resonance},\ }\href {https://doi.org/10.1038/414883a} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {414}},\ \bibinfo {pages} {883} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vandersypen}\ and\ \citenamefont {Chuang}(2005)}]{vandersypen2005nmr} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~M.}\ \bibnamefont {Vandersypen}}\ and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}},\ }\bibfield {title} {\bibinfo {title} {Nmr techniques for quantum control and computation},\ }\href {https://doi.org/10.1103/RevModPhys.76.1037} {\bibfield {journal} {\bibinfo {journal} {Reviews of Modern Physics}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo {pages} {1037} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gossuin}\ \emph {et~al.}(2010)\citenamefont {Gossuin}, \citenamefont {Hocq}, \citenamefont {Gillis},\ and\ \citenamefont {Lam}}]{gossuin2010physics} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Gossuin}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Hocq}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Gillis}},\ and\ \bibinfo {author} {\bibfnamefont {V.~Q.}\ \bibnamefont {Lam}},\ }\bibfield {title} {\bibinfo {title} {Physics of magnetic resonance imaging: from spin to pixel},\ }\href {https://doi.org/10.1088/0022-3727/43/21/213001} {\bibfield {journal} {\bibinfo {journal} {Journal of Physics D: Applied Physics}\ }\textbf {\bibinfo {volume} {43}},\ \bibinfo {pages} {213001} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gaudin}(1976)}]{gaudin1976diagonalization} \BibitemOpen \bibfield {author} 
\end{thebibliography} \onecolumngrid \section*{Appendix} \begin{figure} \caption{Same figure as Fig.~\ref{Fig::CohIn4Spin}, but for the closed system (no coupling to the local heat baths).} \label{Fig::CohIn4Spinwc} \end{figure} The oscillations observed in Fig.~\ref{Fig::CohIn4Spin} may give the impression that we are using a parameter set incompatible with the assumptions made in the derivation of the master equation, or that we are describing a non-Markovian environment. However, as seen in Fig.~\ref{Fig::CohIn4Spinwc}, these oscillations arise from the dynamics of the closed system: the interaction with the buffer spins affects the coherence of the central spin periodically. When the buffer spins are connected to their local heat baths, these oscillations are damped over time and their amplitudes decrease. Therefore, in Fig.~\ref{Fig::CohIn4Spinlin} we depict Fig.~\ref{Fig::CohIn4Spin} without re-scaling the axes. \begin{figure} \caption{Same figure as Fig.~\ref{Fig::CohIn4Spin}, without re-scaling of the axes.} \label{Fig::CohIn4Spinlin} \end{figure} \end{document}
\begin{document} \title[3D quadratic NLS equation with electromagnetic perturbations] {3D quadratic NLS equation with electromagnetic perturbations} \begin{abstract} In this paper we study the asymptotic behavior of a quadratic Schr\"{o}dinger equation with electromagnetic potentials. We prove that small solutions scatter. The proof builds on earlier work of the author for quadratic NLS with a non-magnetic potential. The main novelty is the use of various smoothing estimates for the linear Schr\"{o}dinger flow in place of boundedness of wave operators to deal with the loss of derivative.\\ As a byproduct of the proof we obtain boundedness of the wave operator of the linear electromagnetic Schr\"{o}dinger equation on a weighted $L^2$ space for small potentials, as well as a dispersive estimate for the corresponding flow. \end{abstract} \maketitle \tableofcontents \section{Introduction} \label{Introduction} \subsection{Background} We consider a quadratic NLS equation with electromagnetic potential set on $\mathbb{R}^3$: \begin{equation} \label{NLSmagn} \begin{cases} i \partial_t u + \Delta u &= \sum_{i=1}^3 a_i (x) \partial_i u + V(x) u + u^2 \\ u(t=1) &= u_1. \end{cases} \end{equation} This general form of potential includes the classical Hamiltonian Schr\"{o}dinger equation with electromagnetic potentials: \begin{equation}\label{NLSelectro} \begin{cases} i \partial_t u = H_A u \\ H_A = -(\nabla - i \overrightarrow{A}(x))^2 + V \\ \overrightarrow{A}(x) = (a_1(x), a_2(x), a_3(x)) \end{cases}, \end{equation} which corresponds to the usual Schr\"{o}dinger equation with an external magnetic field $\overrightarrow{B} = \operatorname{curl} \overrightarrow{A}$ as well as an external electric field $\overrightarrow{E} = - \overrightarrow{\nabla} V.$ Its Hamiltonian is \begin{align*} \mathcal{H}(u) = \frac{1}{2} \int_{\mathbb{R}^3} \big \vert ( \nabla - i \overrightarrow{A}(x) ) u \big \vert^2 + V(x) \vert u \vert^2 ~dx .
\end{align*} In our setting the potentials will be assumed to be small, and our goal is to study the asymptotic behavior of solutions to the equation \eqref{NLSmagn}. \\ \\ Besides the physical interest of the problem, we are motivated by the fact that \eqref{NLSmagn} is a toy model for the study of linearizations of dispersive equations around nonzero special solutions (traveling waves or solitons, for example). This is why we elected to work with the equation \eqref{NLSmagn} and not a nonlinear version of \eqref{NLSelectro}. \\ The present article is a continuation of the author's earlier work \cite{L}, where the above equation was considered for $a_i =0.$ The main reason for adding the derivative term is that most models of physical relevance are quasilinear, and their linearizations will generically contain such derivative terms. Note that a more complete model would consist of treating a general quadratic nonlinearity $Q(u,\overline{u}):$ the present article leaves out the cases of the nonlinearities $\overline{u}^2$ and $\vert u \vert^2$. Our method would apply to the nonlinearity $\overline{u}^2$ (in fact this is a strictly easier problem). However the nonlinearity $\vert u \vert^2$ is currently out of reach. In fact, even in the flat case (no potentials present), the problem has not been completely answered. We refer to the article of X. Wang \cite{Wang} for more on this subject. \\ \\ Regarding existing results on the behavior as $t\to \infty$ of solutions to equations of this type (nonlinearity with potential), we mention some works on the Strauss conjecture on non-flat backgrounds: the equations considered have potential parts that lose derivatives, but the nonlinearity (of power type) typically has a larger exponent than what we consider in the present work. We can for example cite the work of K. Hidano, J. Metcalfe, H. Smith, C. Sogge, Y. Zhou (\cite{HMSSZ}) where the conjecture is proved outside a nontrapping obstacle. H. Lindblad, J. Metcalfe, C.
Sogge, M. Tohaneanu and C. Wang (\cite{LMSTW}) proved the conjecture for Schwarzschild and Kerr backgrounds. The proofs of these results typically rely on weighted Strichartz estimates to establish global existence of small solutions. \\ \\ In this paper we prove that small, spatially localized solutions to \eqref{NLSmagn} exist globally and scatter. We take the opposite approach to the works cited above. Indeed, we deal with a stronger nonlinearity, which forces us to take its precise structure into account. We rely on the space-time resonance theory of P. Germain, N. Masmoudi and J. Shatah (see for example \cite{GMS}). It was developed to study the asymptotic behavior of very general nonlinear dispersive equations for power-type nonlinearities with small exponent (below the so-called Strauss exponent). In the present study the nonlinearity is quadratic (which is exactly the Strauss exponent in three dimensions), hence the need to resort to this method. It has been applied to many models by these three authors and others. Without trying to be exhaustive, we can mention for example the water waves problem treated in various settings (\cite{GMSww}, \cite{GMSww2}, \cite{IPu1}), the Euler-Maxwell equation in \cite{GIP}, or the Euler-Poisson equation in \cite{IP}. Similar techniques have also been developed by other authors: S. Gustafson, K. Nakanishi and T.-P. Tsai studied the Gross-Pitaevskii equation in \cite{GNT} using a method more closely related to J. Shatah's original method of normal forms \cite{Sh}. M. Ifrim and D. Tataru used the method of testing against wave packets in \cite{IT1} and \cite{IT2} to study similar models, namely NLS and water waves in various settings. \\ \\ The difficulty related to the strong nonlinearity was already present in \cite{L}. However, in the present context, the derivative forces us to modify the approach developed in that paper since we must incorporate smoothing estimates into the argument.
The inequalities we use in the present work were first introduced by C. Kenig, G. Ponce and L. Vega in \cite{KPV} to prove local well-posedness of a large class of nonlinear Schr\"{o}dinger equations with derivative nonlinearities. They allow us to recover one derivative, which will be enough to deal with \eqref{NLSmagn}. Another estimate of smoothing-Strichartz type proved by A. Ionescu and C. Kenig in \cite{IK} will play an important role in the paper. Let us also mention that, in the case of magnetic Schr\"{o}dinger equations with small potentials, a large class of related Strichartz and smoothing estimates have been proved by V. Georgiev, A. Stefanov and M. Tarulli in \cite{GST}. For large potentials of almost critical decay, Strichartz and smoothing estimates have been obtained by B. Erdo\u gan, M. Goldberg and W. Schlag in \cite{EGS}, \cite{EGS2}, and a similar result was obtained recently by P. d'Ancona in \cite{DA}. A decay estimate for that same linear equation was proved by P. d'Ancona and L. Fanelli in \cite{DF} for small but rough potentials. A corollary of the main result of the present paper is a similar decay estimate under stronger assumptions on the potentials, but for a more general equation. Regarding dispersive estimates for the linear flow, we also mention the work of L. Fanelli, V. Felli, M. Fontelos and A. Primo in \cite{FFFP} where decay estimates for the electromagnetic linear Schr\"{o}dinger equation are obtained in the case of particular potentials of critical decay. The proof is based on a representation formula for the solution of the equation. Finally, for the linear electromagnetic Schr\"{o}dinger equation, a corollary of our main theorem is that its wave operator is bounded on a space that can heuristically be thought of as $\langle x \rangle L^2 \cap H^{10};$ see the precise statement in Corollary \ref{bddwaveop} below.
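Let us make explicit how the linear part of \eqref{NLSelectro} fits into the framework of \eqref{NLSmagn}. Expanding the square in $H_A$ gives, by a direct computation, \begin{align*} -(\nabla - i \overrightarrow{A}(x))^2 u = - \Delta u + 2i \overrightarrow{A} \cdot \nabla u + i (\nabla \cdot \overrightarrow{A}) u + \vert \overrightarrow{A} \vert^2 u, \end{align*} so that $i \partial_t u = H_A u$ is of the form \eqref{NLSmagn} (without the nonlinearity) with $a_i = 2i A_i$ and with $V$ replaced by the effective potential $V + i \nabla \cdot \overrightarrow{A} + \vert \overrightarrow{A} \vert^2.$ This also explains why smallness of $(A_i)^2$ appears among the assumptions of Corollary \ref{bddwaveop} below.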
\\ \\ To deal with the full equation, we must therefore use both smoothing and space-time resonance arguments simultaneously. The general idea is to expand the solution as a power series using Duhamel's formula repeatedly. This type of method is routinely used in the study of linear Schr\"{o}dinger equations through Born series, for example. As we mentioned, this general plan was already implemented by the author for a less general equation in \cite{L}, where there is no loss of derivatives. The additional difficulty coming from high frequencies forces us to modify the approach, and follow a different strategy in the multilinear part of the proof (that is in the estimates on the iterated potential terms). Note that along the way we obtain a different proof of the result in \cite{L}. In particular we essentially rely on Strichartz and smoothing estimates for the free linear Schr\"{o}dinger equation, instead of the more stringent boundedness of wave operators. This would allow us to relax the assumptions on the potential in \cite{L}. \subsection{Main result} \subsubsection{Notations} \label{prelim} We start this section with some notations that will be used in the paper. \\ First we recall the formula for the Fourier transform: \begin{align*} \widehat{f}(\xi) = \mathcal{F}f (\xi)= \int_{\mathbb{R}^3} e^{-i x \cdot \xi} f(x) dx \end{align*} hence the following definition for the inverse Fourier transform: \begin{align*} \check{f}(x) = [\mathcal{F}^{-1} f ](x) = \frac{1}{(2 \pi)^3} \int_{\mathbb{R}^3} e^{i x \cdot \xi} f(\xi) d\xi. \end{align*} Now we define Littlewood-Paley projections. Let $\phi$ be a smooth radial function supported on the annulus $\mathcal{C} = \lbrace \xi \in \mathbb{R}^3 ; \frac{1}{1.04} \leqslant \vert \xi \vert \leqslant 1.04 \times 1.1 \rbrace $ such that \begin{align*} \forall \xi \in \mathbb{R}^3 \setminus \lbrace 0 \rbrace, ~~~ \sum_{j \in \mathbb{Z}} \phi\big( 1.1^{-j} \xi \big) =1. 
\end{align*} Notice that if $j-j'>1$ then $1.1^j \mathcal{C} \cap 1.1^{j'} \mathcal{C} = \emptyset$ (for $j > j'$, disjointness of the dilated annuli amounts to $1.1^{j-j'-1} > 1.04^2,$ which holds as soon as $j-j' \geqslant 2$). \\ We will denote by $P_k (\xi) := \phi(1.1^{-k}\xi)$ the Littlewood-Paley projection at frequency $1.1^k.$ \\ Similarly, $P_{\leqslant k} (\xi)$ will denote the Littlewood-Paley projection at frequencies less than $1.1^k.$ \\ It was explained in \cite{L} why we localize at frequency $1.1^k$ and not $2^k$ (see Lemma 7.8 in that paper).\\ We will also sometimes use the notation $\widehat{f_k}(\xi) =P_k (\xi) \widehat{f}(\xi).$ \\ \\ Now we come to the main norms used in the paper. \\ We introduce the following notation for mixed norms of Lebesgue type: \begin{align*} \Vert f \Vert_{L^p _{x_j} L^q_{\widetilde{x_j}}} = \bigg \Vert \Vert f(\cdot,\dots,\cdot,x_j, \cdot, \dots) \Vert_{L^q_{x_1,\dots,x_{j-1},x_{j+1},\dots}} \bigg \Vert_{L^p_{x_j}}. \end{align*} To control the profile of the solution we will use the following norm: \begin{align*} \Vert f \Vert_X = \sup_{k \in \mathbb{Z}} \Vert \nabla_{\xi} \widehat{f_k} \Vert_{L^2_x}. \end{align*} Roughly speaking, it captures the fact that the solution has to be spatially localized around the origin. \\ For the potentials, we introduce the following controlling norm: \begin{align*} \Vert V \Vert_{Y} &= \Vert V \Vert_{L^1 _x} + \Vert V \Vert_{L^{\infty}_x} + \sum_{j=1}^3 \bigg \Vert \Vert \vert V \vert^{1/2} \Vert_{L^{\infty}_{\widetilde{x_j}}} \bigg \Vert_{L^2 _{x_j}}. \end{align*} \subsubsection{Main Theorem} \label{results} With these notations, we are ready to state our main theorem. \\ We prove that small solutions to \eqref{NLSmagn} with small potentials exist globally and that they scatter.
More precisely, the main result of the paper is the following. \begin{theorem} \label{maintheorem} There exists $\varepsilon>0$ such that if $\varepsilon_0, \delta < \varepsilon$ and if $u_{1},$ $V$ and $a_i$ satisfy \begin{align*} \Vert V \Vert_{Y} + \Vert \langle x \rangle V \Vert_{Y} + \Vert (1-\Delta)^{5} V \Vert_{Y} & \leqslant \delta, \\ \Vert a_i \Vert_{Y} + \Vert \langle x \rangle a_i \Vert_{Y} + \Vert (1-\Delta)^{5} a_i \Vert_{Y} & \leqslant \delta, \\ \Vert e^{-i \Delta} u_{1} \Vert_{H^{10}_x} + \Vert e^{-i \Delta} u_{1} \Vert_{X} & \leqslant \varepsilon_0, \end{align*} then \eqref{NLSmagn} has a unique global solution. Moreover it satisfies the estimate \begin{align} \label{mainestimate} \sup_{t \in [1;\infty)} \Vert u(t) \Vert_{H^{10}_x} + \Vert e^{-it \Delta} u(t) \Vert_{X} + \sup_{k \in \mathbb{Z}} t \Vert u_k (t) \Vert_{L^6_x} \lesssim \varepsilon_0. \end{align} It also scatters in $H^{10}_x$: there exists $u_{\infty} \in H^{10}_x$ and a bounded operator $W:H^{10}_x \rightarrow H^{10}_x$ such that \begin{align*} \Vert e^{-it \Delta} W u(t) - u_{\infty} \Vert_{H^{10}_x} \rightarrow 0 \end{align*} as $t \to \infty.$ \end{theorem} \begin{remark} We have not striven for the optimal assumptions on the potentials or the initial data. It is likely that the same method of proof, at least in the case where $a_i=0$, allows for potentials with almost critical decay (that is $V \in L^{3/2-}_x$ and $x V \in L^{3-}_x $ and similar assumptions on its derivatives). Similarly the $H^{10}_x$ regularity can most likely be relaxed. \end{remark} \begin{remark} Unlike in the earlier work \cite{L}, we cannot treat the case of time-dependent potentials. This is mainly due to the identity \eqref{identite} and its use in the subsequent proofs of our multilinear lemmas. \end{remark} \begin{remark} A similar scattering statement could be formulated in the space $X$, although it is more technical.
For this reason we have elected to work in $H^{10}_x$ (see the proof in the appendix, Section \ref{scattering}). \end{remark} As we mentioned above, the result proved in Theorem \ref{maintheorem} has a direct consequence for the linear flow of the electromagnetic equation. We have the following corollary, which provides a decay estimate for the flow as well as a uniform in time boundedness of the profile of the solution on the space $X$. \begin{corollary} \label{corlin} There exists $\varepsilon>0$ such that if $ \delta < \varepsilon$ and if $u_{1},$ $V$ and $a_i$ satisfy \begin{align*} \Vert V \Vert_{Y} + \Vert \langle x \rangle V \Vert_{Y} + \Vert (1-\Delta)^{5} V \Vert_{Y} & \leqslant \delta, \\ \Vert a_i \Vert_{Y} + \Vert \langle x \rangle a_i \Vert_{Y} + \Vert (1-\Delta)^{5} a_i \Vert_{Y} & \leqslant \delta, \\ \Vert e^{-i \Delta} u_{1} \Vert_{H^{10}_x} + \Vert e^{-i \Delta} u_{1} \Vert_{X} & < + \infty, \end{align*} then the Cauchy problem \begin{equation} \label{Lmagn} \begin{cases} i \partial_t u + \Delta u &= \sum_{i=1}^3 a_i (x) \partial_i u + V(x) u \\ u(t=1) &= u_1 \end{cases} \end{equation} has a unique global solution $u(t)$ that obeys the estimate \begin{align} \label{mainestimatelin} \sup_{t \in [1;\infty)} \Vert u(t) \Vert_{H^{10}_x} + \Vert e^{-it \Delta} u(t) \Vert_{X} + \sup_{k \in \mathbb{Z}} t \Vert u_k (t) \Vert_{L^6_x} \lesssim \Vert e^{-i \Delta} u_{1} \Vert_{H^{10}_x} + \Vert e^{-i \Delta} u_{1} \Vert_{X}. \end{align} \end{corollary} \begin{remark} As we noted above for the nonlinear equation, the assumptions made on the regularity and decay of the potentials are far from optimal. We believe that minor changes in the proof would lead to much weaker conditions in the statement above. \end{remark} Now we write a corollary of the above estimate \eqref{mainestimatelin} in terms of the wave operator for the linear electromagnetic Schr\"{o}dinger operator corresponding to \eqref{NLSelectro}. 
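For clarity, let us record the convention used for the wave operator in this setting: with data prescribed at time $t=1,$ the wave operator of $H_A$ is understood as the strong limit \begin{align*} W u_1 = \lim_{t \to + \infty} e^{-it \Delta} e^{i(t-1) H_A} u_1, \end{align*} whenever it exists (this differs from the classical conventions only through the order in which the free and perturbed flows are composed and through the normalization at time $t=1$).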
\\ Indeed the $X$ part of this estimate can be written in that setting \begin{align*} \sup_{t \in [1; \infty)} \Vert e^{-it \Delta} u(t) \Vert_{X} = \sup_{t \in [1; \infty)} \Vert e^{-it \Delta} e^{i(t-1) H_A} u_1 \Vert_{X} \end{align*} where we recall that $H_A = -(\nabla - i \overrightarrow{A}(x))^2 + V$. We directly deduce the following \begin{corollary} \label{bddwaveop} Let $W$ denote the wave operator of $H_A.$ There exists $\varepsilon>0$ such that for every $\delta<\varepsilon,$ if the potentials $A,$ $V$ and the initial data $u_1$ satisfy \begin{align*} \Vert V \Vert_{Y} + \Vert \langle x \rangle V \Vert_{Y} + \Vert (1-\Delta)^{5} V \Vert_{Y} & \leqslant \delta, \\ \Vert A_i \Vert_{Y} + \Vert \langle x \rangle A_i \Vert_{Y} + \Vert (1-\Delta)^{5} A_i \Vert_{Y} & \leqslant \delta, \\ \Vert (A_i)^2 \Vert_{Y} + \Vert \langle x \rangle (A_i)^2 \Vert_{Y} + \Vert (1-\Delta)^{5} (A_i)^2 \Vert_{Y} & \leqslant \delta, \\ \Vert e^{-i \Delta} u_{1} \Vert_{H^{10}_x} + \Vert e^{-i \Delta} u_{1} \Vert_{X} & < + \infty, \end{align*} then we have \begin{align*} \Vert W u_1 \Vert_{H^{10}_x} + \Vert W u_1 \Vert_{X} \lesssim \Vert e^{-i \Delta} u_{1} \Vert_{H^{10}_x} + \Vert e^{-i \Delta} u_{1} \Vert_{X}. \end{align*} \end{corollary} \subsection{Set-up and general idea of the proof} \label{strategy} We work with the profile of the solution $f(t) = e^{-it \Delta} u(t).$ \\ \subsubsection{Local well-posedness} The local wellposedness of \eqref{NLSmagn} in $H^{10}_x$ follows from the estimate proved (in a much more general setting) by S. 
Doi in \cite{D}: \begin{align*} \Vert f \Vert_{L^{\infty}_t([1;T])H^{10}_x} \lesssim \Vert f_1 \Vert_{H^{10}_x} + (T-1) \Vert f \Vert_{L^{\infty}_t ([1;T])H^{10}_x}^2 . \end{align*} \subsubsection{The bootstrap argument} The proof of global existence and decay relies on a bootstrap argument: we assume that for some $T>1$ and for $\varepsilon_1 = A \varepsilon_0$ ($A$ denotes some large number) the following bounds hold \begin{align*} \sup_{t \in [1;T]} \Vert f(t) \Vert_X & \leqslant \varepsilon_1, \\ \sup_{t \in [1;T]} \Vert f(t) \Vert_{H^{10}_x} & \leqslant \varepsilon_1, \end{align*} and then we prove that these assumptions actually imply the stronger conclusions \begin{align} \label{goal1} \sup_{t \in [1;T]} \Vert f(t) \Vert_X & \leqslant \frac{ \varepsilon_1}{2}, \\ \label{goal2} \sup_{t \in [1;T]} \Vert f(t) \Vert_{H^{10}_x} & \leqslant \frac{\varepsilon_1}{2}. \end{align} The main difficulty is to estimate the $X$ norm. To do so we expand $\partial_{\xi} \widehat{f}$ as a series by essentially applying the Duhamel formula recursively. The difference with \cite{L} is that, for high output frequencies, iterating the derivative part will prevent the series from converging if we use the same estimates as in that paper. To recover the derivative loss we use smoothing estimates, which allow us to gain one derivative back at each step of the iteration. It is at this stage (the multilinear analysis) that the argument from \cite{L} must be modified. Instead of relying on the method of M. Beceanu and W. Schlag (\cite{BSmain}), the estimates are done more in the spirit of K. Yajima's paper \cite{Y}. \\ \subsubsection{The series expansion} Our discussion here and in the next subsection will be carried out for a simpler question than that tackled in this paper. However it retains the main novel difficulty compared to the earlier paper \cite{L}, namely the loss of derivative in the potential part.
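Before turning to the expansion itself, let us recall where the oscillatory phases come from. Since $u = e^{it \Delta} f$ solves \eqref{NLSmagn}, the Duhamel formula reads on the Fourier side (up to irrelevant constants) \begin{align*} \widehat{f}(t,\xi) = \widehat{f}(1,\xi) - i \int_1 ^t e^{is \vert \xi \vert^2} \mathcal{F} \Big[ \sum_{i=1}^3 a_i \partial_i u + V u + u^2 \Big] (s,\xi) ds, \end{align*} and writing each term on the right-hand side as a convolution and substituting $\widehat{u}(s,\eta) = e^{-is \vert \eta \vert^2} \widehat{f}(s,\eta)$ produces kernels of the type $e^{is(\vert \xi \vert^2 - \vert \eta_1 \vert^2)}.$ In the rest of this heuristic discussion we write the time integrals from $0$ rather than $1,$ as this makes no difference.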
More precisely, we see how to estimate the $L^2_x$ norm of $\widehat{f}(t,\xi).$ First, we explain the way we generate the series representation of $\widehat{f}(t,\xi)$. We consider the Duhamel formula for $\widehat{f}$: the potential part has the form \begin{align*} \int_0 ^t \int_{\mathbb{R}^3} e^{is (\vert \xi \vert^2 - \vert \eta_1 \vert^2)} \widehat{W_1}(\xi-\eta_1) \alpha_1(\eta_1) \widehat{f}(s,\eta_1) d\eta_1 ds, \end{align*} where $W_1$ denotes either $a_i$ or $V$, with $\alpha_1(\eta_1) = 1$ if $W_1 = V$ and $\alpha_1(\eta_1) = \eta_{1,i}$ if $W_1 = a_i.$ \\ The general idea is to integrate by parts in time in that expression iteratively to write $\widehat{f}$ as a series made up of the boundary terms remaining at each step. Roughly speaking, we will obtain two types of terms, corresponding either to the potential part or to the bilinear part of the nonlinearity: \begin{align} \label{potentialmod} \int \prod_{\gamma=1}^{n-1} \frac{\widehat{W_{\gamma}}(\eta_{\gamma}) \alpha_{\gamma}(\eta_{\gamma})}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} \int_{\eta_{n}} \widehat{W_n}(\eta_{n-1}-\eta_n) \alpha_{n}(\eta_n) e^{it(\vert \xi \vert^2-\vert \eta_n \vert^2)} \widehat{f}(t,\eta_n) d\eta_n d\eta \end{align} and \begin{align} \label{bilinmod} \int \prod_{\gamma=1}^{n-1} \frac{\widehat{W_{\gamma}}(\eta_{\gamma}) \alpha_{\gamma}(\eta_{\gamma})}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} \int_{\eta_{n}} e^{it(\vert \xi \vert^2-\vert \eta_{n-1} - \eta_n \vert^2-\vert \eta_n \vert^2)} \widehat{f}(t,\eta_{n-1}-\eta_n) \widehat{f}(t,\eta_n) d\eta_n d\eta.
\end{align} \subsubsection{Convergence of the series in $L^2$} Now we prove that the series obtained in the previous section converges in $L^{\infty}_t L^{2}_x.$ We prove estimates like \begin{align*} \Vert \eqref{potentialmod}, \eqref{bilinmod} \Vert_{L^{\infty}_t L^{2}_x} \lesssim C^n \delta^n \varepsilon_1 \end{align*} for some universal constant $C$ and where the implicit constant does not depend on $n.$ Heuristically, each $V$ factor contributes a $\delta$ in the estimate. \\ First, we write that in physical space we have, roughly speaking: \begin{align*} \eqref{potentialmod} &=\int_{0 \leqslant \tau_1} e^{i \tau_1 \Delta} W_1 \widetilde{D_1} \int_{0 \leqslant \tau_2 \leqslant \tau_1} e^{i(\tau_2-\tau_1) \Delta} W_2 \widetilde{D_2} ... \\ & \times \int_{0 \leqslant \tau_{n} \leqslant \tau_{n-1}} e^{i(\tau_n - \tau_{n-1})\Delta} W_{n} \widetilde{D_n} e^{-i \tau_n \Delta} f d\tau_1 ... d\tau_n , \end{align*} where we denoted $\widetilde{D_i}$ the operator equal to $1$ if $W_i=V$ and $\partial_{x_j}$ if $W_i =a_j.$ \\ Then, using Strichartz estimates, we can write \begin{align*} \Vert \eqref{potentialmod} \Vert_{L^2_x} &\lesssim \Bigg \Vert W_1 \widetilde{D_1} \int_{0 \leqslant \tau_2 \leqslant \tau_1} e^{i(\tau_2-\tau_1) \Delta} W_2 \widetilde{D_2} ... \\ & \times \int_{0 \leqslant \tau_{n} \leqslant \tau_{n-1}} e^{i(\tau_n - \tau_{n-1})\Delta} W_{n} \widetilde{D_n} e^{-i \tau_n \Delta} f d\tau_2 ... d\tau_n \Bigg \Vert_{L^2_{\tau_1} L^{6/5}_x}. \end{align*} Next if $W_1 = V$, then we can use Strichartz estimates again and write \begin{align*} \Vert \eqref{potentialmod} \Vert_{L^2_x} &\lesssim C \delta \Bigg \Vert \int_{0 \leqslant \tau_2 \leqslant \tau_1} e^{i(\tau_2-\tau_1) \Delta} W_2 \widetilde{D_2} ... \\ & \times \int_{0 \leqslant \tau_{n} \leqslant \tau_{n-1}} e^{i(\tau_n - \tau_{n-1})\Delta} W_{n} \widetilde{D_n} e^{-i \tau_n \Delta} f d\tau_2 ... d\tau_n \Bigg \Vert_{L^2_{\tau_1} L^{6}_x} \\ &\lesssim C \delta \Bigg \Vert W_2 \widetilde{D_2}... 
\int_{0 \leqslant \tau_{n} \leqslant \tau_{n-1}} e^{i(\tau_n - \tau_{n-1})\Delta} W_{n} \widetilde{D_n} e^{-i \tau_n \Delta} f d\tau_3 ... d\tau_n \Bigg \Vert_{L^2_{\tau_2} L^{6/5}_x}. \end{align*} If $W_1=a_i$ then in this case we use smoothing estimates to write that \begin{align*} \Vert \eqref{potentialmod} \Vert_{L^2_x} &\lesssim C \delta \Bigg \Vert \partial_{x_i} \int_{0 \leqslant \tau_2 \leqslant \tau_1} e^{i(\tau_2-\tau_1) \Delta} W_2 \widetilde{D_2} ... \\ & \times \int_{0 \leqslant \tau_{n} \leqslant \tau_{n-1}} e^{i(\tau_n - \tau_{n-1})\Delta} W_{n} \widetilde{D_n} e^{-i \tau_n \Delta} f d\tau_2 ... d\tau_n \Bigg \Vert_{L^{\infty}_{x_i} L^2_{\tau_1,\widetilde{x_i}}} \\ &\lesssim C \delta \Bigg \Vert W_2 \widetilde{D_2}... \int_{0 \leqslant \tau_{n} \leqslant \tau_{n-1}} e^{i(\tau_n - \tau_{n-1})\Delta} W_{n} \widetilde{D_n} e^{-i \tau_n \Delta} f d\tau_3 ... d\tau_n \Bigg \Vert_{L^{1}_{x_i} L^2_{\tau_2,\widetilde{x_i}}}. \end{align*} Then we continue this process recursively to obtain the desired bound: if we encounter a potential without derivative, we use Strichartz estimates, and if the potential carries a derivative, then we use smoothing estimates.\\ Say for example that in the expression above, $\widetilde{D_2}=1.$ Then we write that \begin{align*} \Vert \eqref{potentialmod} \Vert_{L^2_x} & \lesssim C^2 \delta^2 \Bigg \Vert \int_{0 \leqslant \tau_{n} \leqslant \tau_{n-1}} e^{i(\tau_n - \tau_{n-1})\Delta} W_{n} \widetilde{D_n} e^{-i \tau_n \Delta} f d\tau_3 ... d\tau_n \Bigg \Vert_{L^{2}_{\tau_2} L^6_x}. \end{align*} Otherwise, if $\widetilde{D_2}=\partial_{x_k}$ then we obtain \begin{align*} \Vert \eqref{potentialmod} \Vert_{L^2_x} & \lesssim C^2 \delta^2 \Bigg \Vert \int_{0 \leqslant \tau_{n} \leqslant \tau_{n-1}} e^{i(\tau_n - \tau_{n-1})\Delta} W_{n} \widetilde{D_n} e^{-i \tau_n \Delta} f d\tau_3 ... d\tau_n \Bigg \Vert_{L^{\infty}_{x_k} L^2_{\tau_2,\widetilde{x_k}}}. 
\end{align*} To close the estimates when $W_n = V,$ we write that, using Strichartz inequalities, we have: \begin{align*} \Vert \eqref{potentialmod} \Vert_{L^2_x} & \lesssim C^{n-1} \delta^{n-1} \Vert V e^{-i\tau_n \Delta} f \Vert_{Z} \\ & \lesssim C^n \delta^n \Vert e^{-i\tau_n \Delta} f \Vert_{L^{2}_{\tau_n} L^{6}_{x}} \\ & \lesssim C^n \delta^n \Vert f \Vert_{L^2_x}, \end{align*} where $Z$ denotes either $L^{2}_{\tau_n} L^{6/5}_{x}$ or $L^1_{x_i} L^2_{\tau_n,\widetilde{x_i}} $ depending on whether $W_{n-1}=V$ or $a_i.$ The case where $W_n = a_k$ is treated similarly, except that we use smoothing instead of Strichartz estimates.\\ \\ In the case of the nonlinear term \eqref{bilinmod}, the $f$ that was present in \eqref{potentialmod} is replaced by the quadratic nonlinearity. As a result, the same strategy essentially reduces to estimating that quadratic term in $L^2$. This takes us back to a situation that is handled by the classical theory of space-time resonances: such a term was already present in the work of P. Germain, N. Masmoudi and J. Shatah (\cite{GMS}). \\ Of course, in reality, the situation is more complicated: here we were imprecise as to which smoothing effects we were using. Moreover we have mostly ignored the difficulty of combining the above smoothing arguments with the classical space-time resonance theory. In the actual proof we must resort to several smoothing estimates; see Section \ref{recallsmoothing} for the complete list. \subsubsection{Bounding the $X$-norm of the profile} In the actual proof we must keep the $X$-norm of the profile under control. The situation is more delicate than for the $L^2$ norm, but the general idea is similar and was implemented in our previous paper \cite{L}. We recall it in this section for the convenience of the reader. \\ To generate the series representation we cannot merely integrate by parts in time since when the $\xi$-derivative hits the phase, an extra $t$ factor appears.
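Indeed, schematically, for the potential part we have \begin{align*} & \partial_{\xi} \int_0 ^t \int_{\mathbb{R}^3} e^{is (\vert \xi \vert^2 - \vert \eta_1 \vert^2)} \widehat{W_1}(\xi-\eta_1) \alpha_1(\eta_1) \widehat{f}(s,\eta_1) d\eta_1 ds \\ & = \int_0 ^t \int_{\mathbb{R}^3} 2is \xi e^{is (\vert \xi \vert^2 - \vert \eta_1 \vert^2)} \widehat{W_1}(\xi-\eta_1) \alpha_1(\eta_1) \widehat{f}(s,\eta_1) d\eta_1 ds + \lbrace \textrm{easier terms} \rbrace, \end{align*} where the easier terms, in which the derivative falls on $\widehat{W_1}(\xi-\eta_1)$ instead of the phase, do not carry the extra factor of $s$.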
Roughly speaking we are dealing with terms like \begin{align}\label{termpotheur} \int_0 ^t \int_{\mathbb{R}^3} s e^{is (\vert \xi \vert^2 - \vert \eta_1 \vert^2)} \widehat{W_1}(\xi-\eta_1) \alpha_1(\eta_1) \widehat{f}(s,\eta_1) d\eta_1 ds \end{align} for the potential part. \\ The idea is to integrate by parts in frequency to gain additional decay, and then perform the integration by parts in time. For the term above this yields an expression like \begin{align*} \int_0 ^t \int_{\mathbb{R}^3} \frac{\eta_{1}}{\vert \eta_1 \vert^2} e^{is (\vert \xi \vert^2 - \vert \eta_1 \vert^2)} \widehat{W_1}(\xi-\eta_1) \alpha_1(\eta_1) \partial_{\eta_1} \widehat{f}(s,\eta_1) d\eta_1 ds + \lbrace \textrm{easier terms} \rbrace. \end{align*} Then we can integrate by parts in time to obtain terms like: \begin{align*} &\int_{\mathbb{R}^3} \frac{\eta_{1}}{\vert \eta_1 \vert^2 (\vert \xi \vert^2-\vert \eta_1 \vert^2)} e^{it (\vert \xi \vert^2 - \vert \eta_1 \vert^2)} \widehat{W_1}(\xi-\eta_1) \alpha_1(\eta_1) \partial_{\eta_1} \widehat{f}(t,\eta_1) d\eta_1 \\ &+ \int_0 ^t \int_{\mathbb{R}^3} \frac{\eta_{1}}{\vert \eta_1 \vert^2 (\vert \xi \vert^2-\vert\eta_1 \vert^2)} e^{is (\vert \xi \vert^2 - \vert \eta_1 \vert^2)} \widehat{W_1}(\xi-\eta_1) \alpha_1(\eta_1) \partial_s \partial_{\eta_1} \widehat{f}(s,\eta_1) d\eta_1 ds \\ &+ \lbrace \textrm{easier terms} \rbrace. \end{align*} The boundary term will be the first term of the series representation. Then we iterate this process on the integral part. This is indeed possible since $\partial_{\eta_1} \widehat{f}$ and $\widehat{f}$ satisfy the same type of equation $\partial_t \widehat{f} \sim e^{-it\Delta} \big( Vu + u^2 \big)$ up to lower order terms and with different potentials (essentially $V$ and $xV$ respectively). \\ After generating the series, the next step is to prove a geometric sequence type bound for the $L^2$ norm of its terms. 
If we are away from space resonances, namely if the multiplier $\eta_1/\vert \eta_1 \vert^2$ is not singular, then we are essentially in the same case as in the previous section on the $L^2$ estimate of the solution. However if the added multiplier is singular, then we cannot conclude as above. The scheme we have described here is only useful away from space resonances. \\ We can modify this approach and choose to integrate by parts in time first. We obtain boundary terms with an additional $t$ factor compared to the previous section: \begin{align*} &\int_{\mathbb{R}^3} \frac{1}{\vert \xi \vert^2-\vert \eta_1 \vert^2} e^{it (\vert \xi \vert^2 - \vert \eta_1 \vert^2)} \widehat{W_1}(\xi-\eta_1) \alpha_1(\eta_1) t \widehat{f}(t,\eta_1) d\eta_1 . \end{align*} The key observation here is that if we are away from time resonances, that is if the multiplier $(\vert \xi \vert^2 - \vert \eta_1 \vert^2)^{-1}$ can be seen as a standard Fourier multiplier, then we can use the decay of $e^{it \Delta} f$ in $L^6$ to balance the $t$ factor. \\ Overall we have two strategies: one that works well away from space resonances, and the other away from time resonances. Since the space-time resonance set is reduced to the origin (that is the multipliers $(\vert \xi \vert^2 - \vert \eta_1 \vert^2)^{-1}$ and $\eta_1/\vert \eta_1 \vert^2$ are both singular simultaneously only at the origin) we use the appropriate one depending on which region of the frequency space we are located in. This general scheme was developed by the author in \cite{L}. \subsection{Organization of the paper} We start by recalling some known smoothing and Strichartz estimates in Section \ref{recallsmoothing}. We then prove easy corollaries of these that are tailored to our setting. The next three sections are dedicated to the main estimate \eqref{goal1} on the $X$ norm of the solution: Section \ref{expansion} is devoted to expanding the derivative of the Fourier transform of the profile as a series. 
In Section \ref{firsterm} we estimate the first iterates. As we pointed out above, this is a key step since our multilinear approach essentially reduces the estimation of the $n$-th iterates to that of the first iterates. Finally we prove in Section \ref{boundmultilinear} that the $L^2 _x$ norm of the general term of the series representation of $\partial_{\xi} \widehat{f}$ decays fast (at least like $\delta^n$ for some $\delta<1$). This allows us to conclude that the series converges. We start in Section \ref{multilinkey} by developing our key multilinear lemmas that incorporate the smoothing effect of the linear Schr\"{o}dinger flow in the iteration. They are then applied to prove the desired bounds on the $n$-th iterates. \\ We end the paper with the easier energy estimate \eqref{goal2} in Section \ref{energy}, which concludes the proof. \\\\ \textbf{Acknowledgments:} The author is very thankful to his PhD advisor Prof. Pierre Germain for the many enlightening discussions that led to this work. He also wishes to thank Prof. Yu Deng for very interesting discussions on related models. \section{Smoothing and Strichartz estimates} \label{recallsmoothing} \subsection{Known results} In this section we recall some smoothing and Strichartz estimates from the literature. In this paper we will use easy corollaries of these estimates (see next subsection) to prove the key multilinear lemmas of Section \ref{boundmultilinear}. \\ \\ We start with the classical smoothing estimates of C. Kenig, G. Ponce and L. Vega (\cite{KPV}, Theorem 2.1, Corollary 2.2, Theorem 2.3). Heuristically, the dispersive nature of the Schr\"{o}dinger equation allows one, at the price of spatial localization, to gain one half of a derivative in the homogeneous case and one derivative in the inhomogeneous case.
\begin{lemma}\label{smoothing} We have for all $j \in \lbrace 1;2;3 \rbrace:$ \begin{align} \label{smo1} \Vert \big[ D_{x_j}^{1/2} e^{it \Delta} f \big](x) \Vert_{L^{\infty}_{x_j} L^{2}_{\widetilde{x}_j,t}} \lesssim \Vert f \Vert_{L^2_x}, \end{align} its dual \begin{align} \label{smo2} \Bigg \Vert D_{x_j}^{1/2} \int \big[ e^{it \Delta} f \big](\cdot,t) dt \Bigg \Vert_{L^2_x} \lesssim \Vert f \Vert_{L^1_{x_j} L^2_{t,\widetilde{x_j}}}, \end{align} and the inhomogeneous version \begin{align} \label{smo3} \Bigg \Vert D_{x_j} \int_{0 \leqslant s \leqslant t} \big[ e^{i(t-s) \Delta} f(\cdot,s) \big] ds \Bigg \Vert_{L^{\infty}_{x_j} L^2_{t,\widetilde{x_j}}} \lesssim \Vert f \Vert_{L^1_{x_j} L^2_{s,\widetilde{x_j}}}. \end{align} \end{lemma} We will also need the following estimate proved by A. Ionescu and C. Kenig (\cite{IK}, Lemma 3). \begin{lemma} \label{smostri} We have for $j \in \lbrace 1;2;3 \rbrace:$ \begin{align*} \Bigg \Vert D_{x_j}^{1/2} \int_{0 \leqslant s \leqslant t} e^{i(t-s) \Delta} F(s,\cdot) ds \Bigg \Vert_{L^{\infty}_{x_j} L^{2}_{t,\widetilde{x_j}}} & \lesssim \Vert F \Vert_X , \end{align*} where \begin{align*} \Vert F \Vert_X = \inf_{F = F^{(1)} + F^{(2)}} \Vert F^{(1)} \Vert_{L^1_t L^2 _x} + \Vert F^{(2)} \Vert_{L^2 _t L^{6}_x}. \end{align*} \end{lemma} This lemma has the following straightforward corollary: \begin{corollary} \label{smostriu} We have for $j \in \lbrace 1;2;3 \rbrace$ \begin{align*} \Bigg \Vert D_{x_j}^{1/2} \int_{0 \leqslant s \leqslant t} e^{i(t-s) \Delta} F(s,\cdot) ds \Bigg \Vert_{L^{\infty}_{x_j} L^{2}_{t,\widetilde{x_j}}} & \lesssim \Vert F \Vert_{L^{p'}_t L^{q'} _x} \end{align*} for $(p,q)$ a Strichartz admissible pair, that is $2 \leqslant p,q \leqslant \infty $ and \begin{align*} \frac{2}{p} + \frac{3}{q} = \frac{3}{2}. \end{align*} \end{corollary} Now recall the following standard Strichartz estimates; see for example \cite{KT}.
\begin{lemma} \label{Strichartz} Let $(p,q), (r,l)$ be two admissible Strichartz pairs (see Corollary \ref{smostriu} for the definition). \\ Then we have the estimates \begin{align} \label{Str1} \Vert e^{it\Delta} f \Vert_{L^{p}_t L^q _x} \lesssim \Vert f \Vert_{L^2 _x}, \end{align} and \begin{align} \label{Str2} \Bigg \Vert \int_{\mathbb{R}} e^{-is \Delta} F(s,\cdot) \ ds \Bigg \Vert_{L^2 _x} \lesssim \Vert F \Vert_{L^{p'}_t L^{q'}_x} , \end{align} and finally the inhomogeneous version \begin{align} \label{Str3} \Bigg \Vert \int_{s \leqslant t} e^{i(t-s) \Delta} F(s,\cdot) ds \Bigg \Vert_{L^{p}_t L^{q}_x} \lesssim \Vert F \Vert_{L^{r'}_t L^{l'}_x}. \end{align} \end{lemma} \subsection{Basic lemmas for the magnetic part} In this subsection we prove easy corollaries of the estimates from the previous subsection. They will be useful in the multilinear analysis. \begin{lemma} \label{mgnmgn} Recall that the potentials satisfy \begin{align*} \Vert a_i \Vert_{Y} + \Vert \langle x \rangle a_i \Vert_{Y} + \Vert (1-\Delta)^{5} a_i \Vert_{Y} & \leqslant \delta. \end{align*} We have the following bounds for every $k,j \in \lbrace 1;2;3 \rbrace$: \begin{align} \label{aim} \bigg \Vert a_k (x) D_{x_k} \int_{\tau_3 \leqslant \tau_2} e^{i(\tau_2-\tau_3)\Delta} \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2,\tau_3) d\tau_3 \bigg \Vert_{L^1_{x_j} L^2_{\tau_2,\widetilde{x_j}}} & \lesssim \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2) \Vert_{L^{1}_{x_k} L^2_{\tau_3,\widetilde{x_k}}} . \end{align} \end{lemma} \begin{proof} To simplify notation, we denote \begin{align*} \widetilde{F_2}(x) = \int_{\tau_3 \leqslant \tau_2} e^{-i\tau_3 \Delta} \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2,\tau_3) d\tau_3. \end{align*} We proceed by duality. Let $h(x,\tau_2) \in L^{\infty}_{x_j} L^{2}_{\tau_2,\widetilde{x_j}}$.
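Here and below we rely on the duality characterization of the mixed norm (up to the usual density considerations):
\begin{align*}
\Vert G \Vert_{L^1_{x_j} L^2_{\tau_2,\widetilde{x_j}}} = \sup \Big \lbrace \Big \vert \int_{\mathbb{R}} \int_{\mathbb{R}^3} G(x,\tau_2) h(x,\tau_2) dx d\tau_2 \Big \vert \; ; \; \Vert h \Vert_{L^{\infty}_{x_j} L^{2}_{\tau_2,\widetilde{x_j}}} \leqslant 1 \Big \rbrace ,
\end{align*}
so that it suffices to bound the pairing of the expression under consideration against such a function $h$.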
We test the expression above against that function and write using the Cauchy-Schwarz inequality \begin{align*} & \Bigg \vert \int_{\mathbb{R}^4} a_k (x) D_{x_k} e^{i\tau_2\Delta} \big( \widetilde{F_2}\big) h(x,\tau_2) d\tau_2 dx \Bigg \vert \\ & \lesssim \Bigg \Vert \vert a_k \vert^{1/2} \big \vert D_{x_k} e^{i\tau_2 \Delta} \big(\widetilde{F_2}\big) \big \vert \Bigg \Vert_{L^2_{x,\tau_2}} \big \Vert \vert a_k \vert^{1/2} \vert h \vert \big \Vert_{L^2_{x,\tau_2}} \\ & \lesssim \big \Vert \vert a_k \vert^{1/2} \big \Vert_{L^{2}_{x_j} L^{\infty}_{\widetilde{x_j}}} \big \Vert \vert a_k \vert^{1/2} \big \Vert_{L^{2}_{x_k} L^{\infty}_{\widetilde{x_k}}} \Vert h \Vert_{L^{\infty}_{x_j} L^{2}_{\tau_2,\widetilde{x_j}}} \big \Vert D_{x_k} e^{i\tau_2 \Delta} \big(\widetilde{F_2}\big) \big \Vert_{L^{\infty}_{x_k} L^{2}_{\tau_2,\widetilde{x_k}}}. \end{align*} Therefore using Lemma \ref{smoothing}, \eqref{smo3}, we can conclude that \begin{align*} \textrm{L.H.S.}~~\eqref{aim} & \lesssim \delta \big \Vert D_{x_k} e^{i\tau_2 \Delta} \big(\widetilde{F_2}\big) \Vert_{L^{\infty}_{x_k} L^2 _ {\tau_2,\widetilde{x_k}}} \\ & \lesssim \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^{1}_{x_k}L^2 _ {\tau_3,\widetilde{x_k}}}. 
\end{align*} \end{proof} We also have the following related lemma: \begin{lemma} \label{mgnmgnfin} We have the following bounds for every $k,j \in \lbrace 1;2;3 \rbrace$: \begin{align*} \bigg \Vert a_k (x) D_{x_k}^{1/2} e^{i\tau_2\Delta} \big( \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2) \big) \bigg \Vert_{L^1_{x_j} L^2_{\tau_2,\widetilde{x_j}}} & \lesssim \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2) \Vert_{L^2_{x}} \end{align*} and \begin{align*} \bigg \Vert a_k (x) D_{x_k}^{1/2} \int_{s \leqslant \tau_2} e^{i(\tau_2-s)\Delta} \big( \mathcal{F}^{-1}_{\eta_2} F_2(s, \eta_2) \big) ds \bigg \Vert_{L^1_{x_j} L^2_{\tau_2,\widetilde{x_j}}} & \lesssim \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2(s,\eta_2) \Vert_{L^{p'}_s L^{q'}_{x}} , \end{align*} for every Strichartz-admissible pair $(p,q).$ \end{lemma} \begin{proof} The proof of this lemma is almost identical to that of the previous lemma. For the first inequality the inhomogeneous smoothing estimate is replaced by the homogeneous one. \\ For the second inequality we use Corollary \ref{smostriu}. \end{proof} We record another lemma of the same type: \begin{lemma} \label{potmgn} We have the bound \begin{align*} \bigg \Vert a_k (x) D_{x_k} \int_{\tau_3 \leqslant \tau_2} e^{i(\tau_2-\tau_3)\Delta} \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2,\tau_3) d\tau_3 \bigg \Vert_{L^2_{\tau_2} L^{6/5}_x} \lesssim \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2) \Vert_{L^{1}_{x_k}L^2 _ {\tau_3,\widetilde{x_k}}}. \end{align*} \end{lemma} \begin{proof} In this case we must bound \begin{align*} \Bigg \Vert a_k (x) D_{x_k} \int_{\tau_3 \leqslant \tau_2} e^{i(\tau_2-\tau_3)\Delta} \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2,\tau_3) d\tau_3 \Bigg \Vert_{L^2_{\tau_2} L^{6/5}_x} \\ := \Vert a_k (x) D_{x_k} e^{i \tau_2 \Delta} \big(\widetilde{F_2} \big) \Vert_{L^2_{\tau_2} L^{6/5}_x}. \end{align*} The proof is similar to that of Lemma \ref{mgnmgn}. \\ We proceed by duality.
Let $h(x,\tau_2) \in L^{2}_{\tau_2} L^6_{x}.$ We pair the expression with $h$ and use the Cauchy-Schwarz inequality in $\tau_2$ and H\"{o}lder's inequality in $x$. The pairing is bounded by \begin{align*} & \Bigg \vert \int_{x_k} \int_{\widetilde{x_k}} a_k(x) \Bigg( \int_{\tau_2} \bigg \vert D_{x_k} e^{i\tau_2 \Delta} \big(\widetilde{F_2} \big) \bigg \vert^2 d\tau_2 \Bigg)^{1/2} \bigg( \int_{\tau_2} (h(x, \tau_2))^2 d\tau_2 \bigg)^{1/2} d\widetilde{x_k} dx_k \bigg \vert \\ & \lesssim \int_{x_k} \Vert a_k \Vert_{L^{3}_{\widetilde{x_k}}} \big \Vert D_{x_k} e^{i\tau_2 \Delta} \big(\widetilde{F_2} \big) \big \Vert_{L^{2}_{\widetilde{x_k}} L^2_{\tau_2}} \Vert h \Vert_{L^{6}_{\widetilde{x_k}}L^{2}_{\tau_2}} dx_k \\ & \lesssim \bigg \Vert D_{x_k} e^{i\tau_2 \Delta} \big(\widetilde{F_2} \big) \bigg \Vert_{L^{\infty}_{x_k} L^{2}_{\tau_2,\widetilde{x_k}}} \Vert a_k \Vert_{L^{6/5}_{x_k} L^{3}_{\widetilde{x_k}}} \Vert h \Vert_{L^{6}_{x} L^2_{\tau_2}} \\ & \lesssim \bigg \Vert D_{x_k} e^{i\tau_2 \Delta} \big(\widetilde{F_2} \big) \bigg \Vert_{L^{\infty}_{x_k} L^{2}_{\tau_2,\widetilde{x_k}}} \Vert a_k \Vert_{L^{6/5}_{x_k} L^{3}_{\widetilde{x_k}}} \Vert h \Vert_{L^2_{\tau_2} L^6 _x}, \end{align*} where for the last line we used Minkowski's inequality. \\ \\ We can conclude that, using smoothing estimates from Lemma \ref{smoothing}, \eqref{smo3}, \begin{align*} \bigg \Vert D_{x_k} e^{i\tau_2 \Delta} \big(\widetilde{F_2} \big) \bigg \Vert_{L^{\infty}_{x_k} L^{2}_{\tau_2,\widetilde{x_k}}} & = \bigg \Vert D_{x_k} \int_{\tau_3 \leqslant \tau_2} e^{i(\tau_2-\tau_3) \Delta} \mathcal{F}^{-1}_{\eta_2} \big( F_2 (\eta_2) \big) d\tau_3 \bigg \Vert_{L^{\infty}_{x_k} L^2_{\tau_2, \widetilde{x_k}}} \\ & \lesssim \big \Vert \mathcal{F}^{-1}_{\eta_2} F_2 (\eta_2) \big \Vert_{L^1 _{x_k} L^{2}_{\tau_3,\widetilde{x_k}}}. 
\end{align*} \end{proof} Now we write a similar lemma for a slightly different situation: \begin{lemma} \label{potmgnfin} We have the bounds \begin{align*} \Vert a_k (x) D_{x_k}^{1/2} e^{i \tau_2\Delta} \big( \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2) \big) \Vert_{L^2_{\tau_2} L^{6/5}_x} \lesssim \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2) \Vert_{L^2 _ {x}}, \end{align*} and \begin{align*} \Bigg \Vert a_k (x) D_{x_k}^{1/2} \int_{s \leqslant \tau_2} e^{i (\tau_2-s) \Delta} \big( \mathcal{F}^{-1}_{\eta_2} F_2(s,\eta_2) \big) ds \Bigg \Vert_{L^2_{\tau_2} L^{6/5}_x} \lesssim \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2) \Vert_{L^{p'}_t L^{q'}_ {x}}, \end{align*} for every Strichartz admissible pair $(p,q).$ \end{lemma} \begin{proof} The proof is the same as that of the previous lemma, with the inhomogeneous smoothing estimate replaced by its homogeneous version for the first inequality. \\ For the second one, we use Corollary \ref{smostriu} instead of Lemma \ref{smoothing}. \end{proof} \subsection{Basic lemmas for the electric part} We record lemmas that will allow us to control the electric terms. \begin{lemma} \label{potpot} Recall that the potential $V$ satisfies \begin{align*} \Vert V \Vert_{Y} + \Vert \langle x \rangle V \Vert_{Y} + \Vert (1-\Delta)^{5} V \Vert_{Y} & \leqslant \delta. \end{align*} We have the following bound: \begin{align*} \Bigg \Vert V(x) \int_{\tau_3 \leqslant \tau_2} e^{i(\tau_2-\tau_3)\Delta} \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2,\tau_3) d\tau_3 \Bigg \Vert_{L^2 _{\tau_2} L^{6/5}_x} \lesssim \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2) \Vert_{L^2 _{\tau_3} L^{6/5}_x} .
\end{align*} \end{lemma} \begin{proof} Using H\"{o}lder's inequality and Strichartz estimates we write that \begin{align*} & \bigg \Vert V(x) \int_{\tau_3 \leqslant \tau_2} e^{i(\tau_2-\tau_3)\Delta} \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2,\tau_3) d\tau_3 \bigg \Vert_{L^2_{\tau_2} L^{6/5}_x} \\ & \lesssim \Vert V \Vert_{L^{3/2}_x} \bigg \Vert \int_{\tau_3 \leqslant \tau_2} e^{i(\tau_2-\tau_3)\Delta} \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2,\tau_3) d\tau_3 \bigg \Vert_{L^2_{\tau_2} L^{6}_x} \\ & \lesssim \Vert V \Vert_{L^{3/2}_x} \Vert \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2,\tau_3) \Vert_{L^{2}_{\tau_3}L^{6/5}_x} . \end{align*} \end{proof} And we have the following related Lemma in the homogeneous case: \begin{lemma} \label{potpotfin} We have the following bound: \begin{align*} \big \Vert V(x) e^{i\tau_2\Delta} \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2) \big \Vert_{L^2 _{\tau_2} L^{6/5}_x} \lesssim \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2) \Vert_{L^{2}_x} , \end{align*} and \begin{align*} \Bigg \Vert V(x) \int_{s \leqslant \tau_2} e^{i(s-\tau_2)\Delta} \mathcal{F}^{-1}_{\eta_2} F_2 (s,\eta_2) ds \Bigg \Vert_{L^2 _{\tau_2} L^{6/5}_x} & \lesssim \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2 (\eta_2) \Vert_{L^{p'}_t L^{q'}_x}, \end{align*} for every Strichartz admissible pair $(p,q).$ \end{lemma} We will also need \begin{lemma} \label{mgnpot} We have the following bound: \begin{align*} \bigg \Vert V(x) \int_{\tau_3 \leqslant \tau_2} e^{i(\tau_2-\tau_3)\Delta} \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2,\tau_3) d\tau_3 \bigg \Vert_{L^1_{x_j} L^{2}_{\tau_2,\widetilde{x_j}}} \lesssim \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2) \Vert_{L^2_{\tau_3} L^{6/5}_x} . 
\end{align*} \end{lemma} \begin{proof} We must bound \begin{align*} \bigg \Vert V (x) \int_{\tau_3 \leqslant \tau_2} e^{i(\tau_2-\tau_3)\Delta} \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2,\tau_3) d\tau_3 \bigg \Vert_{L^1_{x_j} L^2_{\tau_2,\widetilde{x_j}}} \\ := \Vert V (x) e^{i \tau_2 \Delta} \big( \widetilde{F_2} \big) \Vert_{L^1_{x_j} L^2_{\tau_2,\widetilde{x_j}}}. \end{align*} The reasoning is similar to the one used for Lemma \ref{potmgn}. \\ We proceed by duality. Let $h(x,\tau_2) \in L^{\infty}_{x_j} L^2_{\tau_2,\widetilde{x_j}}.$ We pair the expression with $h$ and use H\"{o}lder's inequality. The pairing is bounded by \begin{align*} & \int_{x_j} \int_{\widetilde{x_j}} V(x) \Bigg( \int_{\tau_2} \bigg \vert e^{i\tau_2 \Delta} \big(\widetilde{F_2} \big) \bigg \vert^2 d\tau_2 \Bigg)^{1/2} \bigg( \int_{\tau_2} (h(x, \tau_2))^2 d\tau_2 \bigg)^{1/2} d\widetilde{x_j} dx_j \\ & \lesssim \int_{x_j} \Vert V \Vert_{L^{3}_{\widetilde{x_j}}} \big \Vert e^{i\tau_2 \Delta} \big(\widetilde{F_2} \big) \big \Vert_{L^{6}_{\widetilde{x_j}} L^2_{\tau_2}} \Vert h \Vert_{L^{2}_{\tau_2,\widetilde{x_j}}} dx_j \\ & \leqslant \Vert h \Vert_{L^{\infty}_{x_j} L^{2}_{\tau_2,\widetilde{x_j}}} \int_{x_j} \Vert V \Vert_{L^{3}_{\widetilde{x_j}}} \big \Vert e^{i\tau_2 \Delta} \big(\widetilde{F_2} \big) \big \Vert_{L^{6}_{\widetilde{x_j}} L^2_{\tau_2}} dx_j \\ & \leqslant \Vert h \Vert_{L^{\infty}_{x_j} L^{2}_{\tau_2,\widetilde{x_j}}} \Vert V \Vert_{L^{6/5}_{x_j} L^{3}_{\widetilde{x_j}}} \big \Vert e^{i\tau_2 \Delta} \big(\widetilde{F_2}\big) \big \Vert_{L^{6}_{x} L^2_{\tau_2}}.
\end{align*} Now we use Minkowski's inequality and retarded Strichartz estimates from Lemma \ref{Strichartz} to write that \begin{align*} \big \Vert e^{i\tau_2 \Delta} \big(\widetilde{F_2}(\tau_2) \big) \big \Vert_{L^{6}_{x} L^2_{\tau_2}} & \lesssim \big \Vert e^{i\tau_2 \Delta} \big(\widetilde{F_2} \big) \big \Vert_{ L^2_{\tau_2} L^{6}_{x}} \\ & = \bigg \Vert \int_{\tau_3 \leqslant \tau_2} e^{i(\tau_2-\tau_3) \Delta} \mathcal{F}^{-1}_{\eta_2} \big( F_2 (\eta_2) \big) d\tau_3 \bigg \Vert_{ L^2_{\tau_2} L^{6}_{x}} \\ & \lesssim \big \Vert \mathcal{F}^{-1}_{\eta_2} F_2 (\eta_2) \big \Vert_{L^2 _{\tau_3} L^{6/5} _x}. \end{align*} \end{proof} We have the analogous lemma for the homogeneous case: \begin{lemma} \label{mgnpotfin} We have \begin{align*} \Vert V(x) e^{i \tau_2 \Delta} \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2) \Vert_{L^1_{x_j} L^{2}_{\tau_2,\widetilde{x_j}}} \lesssim \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2) \Vert_{L^{2}_x} \end{align*} and \begin{align*} \Bigg \Vert V(x) \int_{s \leqslant \tau_2} e^{i(s-\tau_2)\Delta} \mathcal{F}^{-1}_{\eta_2} F_2(s,\eta_2) ds \Bigg \Vert_{L^1_{x_j} L^{2}_{\tau_2,\widetilde{x_j}}} \lesssim \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^{p'}_t L^{q'}_x} \end{align*} for every admissible Strichartz pair $(p,q).$ \end{lemma} \subsection{Basic bilinear lemmas} We give an easy bilinear lemma: \begin{lemma}\label{bilinit} We have the bounds \begin{align*} \Bigg \Vert \mathcal{F}^{-1}_{\eta_1} \int_{\eta_2} \widehat{W}(\eta_1-\eta_2) m(\eta_2) F_2 (\eta_2,\tau_2) d\eta_2 \Bigg \Vert_{L^2_{\tau_2} L^{6/5}_x} \lesssim \Vert W \Vert_{L^p _x} \Vert \check{m} \Vert_{L^1} \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^2 _{\tau_2} L^q _x}, \end{align*} where $\frac{5}{6} = \frac{1}{p} + \frac{1}{q}$ and \begin{align*} \Bigg \Vert \mathcal{F}^{-1}_{\eta_1} \int_{\eta_2} \widehat{W}(\eta_1-\eta_2) m(\eta_2) F_2 (\eta_2,\tau_2) d\eta_2 \Bigg \Vert_{L^1_{x_j} L^2_{\tau_2,\widetilde{x_j}}} \lesssim \Vert W
\Vert_{L^{\infty}_x} \Vert \check{m} \Vert_{L^1} \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^1_{x_j} L^2_{\tau_2,\widetilde{x_j}}}, \end{align*} and, for a small constant $c>0$, \begin{align*} \Bigg \Vert \mathcal{F}^{-1}_{\eta_1} \int_{\eta_2} \widehat{W}(\eta_1-\eta_2) m(\eta_2) F_2 (\eta_2,\tau_2) d\eta_2 \Bigg \Vert_{L^1_{x_j} L^2_{\tau_2,\widetilde{x_j}}} \lesssim \Vert W \Vert_{L^{\infty}_{x_j} L^{\frac{2+2c}{c}}_{\widetilde{x_j}}} \Vert \check{m} \Vert_{L^1} \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^1_{x_j} L^2_{\tau_2} L^{2(1+c)}_{\widetilde{x_j}}}. \end{align*} Similarly \begin{align*} \Bigg \Vert \mathcal{F}^{-1}_{\eta_1} \int_{\eta_2} \widehat{W}(\eta_1-\eta_2) m(\eta_2) F_2 (\eta_2,\tau_2) d\eta_2 \Bigg \Vert_{L^2_{\tau_2} L^{6/5}_{x}} \lesssim \Vert W \Vert_{L^{\frac{6+6c}{5c}}_{x}} \Vert \check{m} \Vert_{L^1} \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^2_{\tau_2} L^{\frac{6}{5}(1+c)}_x}. \end{align*} Finally \begin{align*} \Bigg \Vert \mathcal{F}^{-1}_{\eta_1} \int_{\eta_2} \widehat{W}(\eta_1-\eta_2) m(\eta_2) F_2 (\eta_2) d\eta_2 \Bigg \Vert_{L^2_{x}} \lesssim \Vert W \Vert_{L^{\infty}_x} \Vert \check{m} \Vert_{L^1} \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^2_{x}}. \end{align*} \end{lemma} \begin{proof} The proofs are almost identical to that of Lemma \ref{bilin} and are therefore omitted. \end{proof} We end this section with a key quantity used in the estimation of the iterates. \begin{definition} Let $C_0$ be the largest of all the implicit constants that appear in the inequalities from Lemmas \ref{mgnmgn}, \ref{mgnmgnfin}, \ref{potmgn}, \ref{potmgnfin}, \ref{potpot}, \ref{potpotfin}, \ref{mgnpot}, \ref{mgnpotfin}, and \ref{bilinit}. We denote $C = 10^{10} C_0^{10}.$ The choice of this constant is arbitrary: we just need a large number to account for the numerical constants that appear in the iteration below.
\end{definition} \section{Expansion of the solution as a series} \label{expansion} In this section and the next two, our goal is to prove \eqref{goal1}. To do so we start as in \cite{L} by expanding $\partial_{\xi_l} \widehat{f}$ as a power series. This is done through integrations by parts in time. The full details are presented in this section. \subsection{First expansion} \label{firstexpansion} The Duhamel formula for \eqref{NLSmagn} reads: \begin{align}\label{Duhamel} \hat{f}(t,\xi) &= \hat{f}_0(\xi) - \frac{i}{(2 \pi)^3} \sum_{i=1}^3 \int_1 ^t \int_{\mathbb{R}^3} e^{is(\vert \xi \vert^2 - \vert \eta_1 \vert^2)} \widehat{a_i}(\xi-\eta_1) \eta_{1,i} \widehat{f}(s,\eta_1) d\eta_1 ds \\ \notag&-\frac{i}{(2 \pi)^3}\int_1 ^t e^{is \vert \xi \vert^2} \int_{\mathbb{R}^3} \widehat{V}(\xi - \eta_1) e^{-is \vert \eta_1 \vert ^2} \widehat{f}(s,\eta_1) d\eta_1 ds \\ \notag& -\frac{i}{(2 \pi)^3} \int_1 ^t e^{is \vert \xi \vert ^2} \int_{\mathbb{R}^3} e^{-is \vert \xi-\eta_1 \vert^2} e^{-i s \vert \eta_1 \vert^2} \widehat{f}(s,\eta_1) \widehat{f}(s,\xi-\eta_1) d\eta_1 ds . 
\end{align} We start by localizing in $\xi$ and taking a derivative in $\xi_l$: \\ \begin{align} \label{estimationX} \partial_{\xi_l} \widehat{f_k}(t,\xi)&= \partial_{\xi_l} \big(e^{i \vert \xi \vert^2} \widehat{u_{1,k}}(\xi) \big) \\ \label{M2}& -\frac{i}{(2 \pi)^3} \sum_{k_1 \in \mathbb{Z}} \sum_{i=1}^3 P_k(\xi) \int_{1} ^t \int_{\mathbb{R}^3} 2 i s \xi_l e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \eta_{1,i} \widehat{f_{k_1}}(s,\eta_1) \widehat{a_i}(s,\xi-\eta_1) d\eta_1 ds \\ \label{M6}& -\frac{i}{(2 \pi)^3} P_k(\xi) \sum_{k_1 \in \mathbb{Z}} \int_{1}^t \int_{\mathbb{R}^3} 2 i s \xi_l e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \widehat{f_{k_1}}(s,\eta_1) \widehat{V}(s,\xi-\eta_1) d\eta_1 ds \\ \notag & + \lbrace \textrm{remainder terms} \rbrace, \end{align} where the remainder terms are given by: \begin{align} \notag & \lbrace \textrm{remainder terms} \rbrace \\ \label{M1}& = - \frac{i}{(2 \pi)^3} \sum_{k_1 \in \mathbb{Z}} \sum_{i=1}^3 P_k(\xi) \int_{1} ^t \int_{\mathbb{R}^3} e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \eta_{1,i} \widehat{f_{k_1}}(s,\eta_1) \partial_{\xi_l} \widehat{a_i}(s,\xi-\eta_1) d\eta_1 ds \\ \label{M3}&-\frac{i}{(2 \pi)^3} 1.1^{-k} \phi'(1.1^{-k}\xi) \frac{\xi_l}{\vert \xi \vert} \sum_{k_1 \in \mathbb{Z}} \sum_{i=1}^3 \int_{1} ^t \int_{\mathbb{R}^3} e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \eta_{1,i} \widehat{f_{k_1}}(s,\eta_1) \widehat{a_i}(s,\xi-\eta_1) d\eta_1 ds \\ \label{M4}& - \frac{i}{(2 \pi)^3} P_k(\xi) \sum_{k_1 \in \mathbb{Z}} \int_{1}^t \int_{\mathbb{R}^3} e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \widehat{f_{k_1}}(s,\eta_1) \partial_{\xi_l} \widehat{V}(s,\xi-\eta_1) d\eta_1 ds \end{align} \begin{align} \label{M5}& - \frac{i}{(2 \pi)^3} 1.1^{-k} \phi'(1.1^{-k} \xi) \frac{\xi_l}{\vert \xi \vert} \sum_{k_1 \in \mathbb{Z}} \int_{1}^t \int_{\mathbb{R}^3} e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \widehat{f_{k_1}}(s,\eta_1) \widehat{V}(s,\xi-\eta_1) d\eta_1 ds \\ \label{M7}& -\frac{i}{(2 \pi)^3}
P_k(\xi) \int_{1}^t \int_{\mathbb{R}^3} 2 i s \eta_{1,l} e^{is (\vert \xi \vert^2 -\vert \xi -\eta_1 \vert^2 - \vert \eta_1 \vert^2)} \widehat{f}(s,\eta_1) \widehat{f}(s,\xi-\eta_1) d\eta_1 ds \\ \label{M8}& -\frac{i}{(2 \pi)^3} P_k(\xi) \int_{1}^t \int_{\mathbb{R}^3} e^{is (\vert \xi \vert^2 -\vert \xi -\eta_1 \vert^2 - \vert \eta_1 \vert^2)} \widehat{f}(s,\eta_1) \partial_{\xi_l} \widehat{f}(s,\xi-\eta_1) d\eta_1 ds . \end{align} We will estimate these remainder terms directly. More precisely we will prove the following bounds in Section \ref{firsterm}: \begin{proposition} The following bounds hold: \begin{align*} \Vert \eqref{M1}, \eqref{M3}, \eqref{M4}, \eqref{M5} \Vert_{L^2_x} & \lesssim \delta \varepsilon_1 \\ \Vert \eqref{M7},\eqref{M8} \Vert_{L^2_x} & \lesssim \varepsilon_1 ^2 . \end{align*} \end{proposition} The remaining two terms \eqref{M2} and \eqref{M6} cannot be estimated directly: they will be expanded as series by repeated integrations by parts in time. We explain this procedure in greater detail in the remainder of the section.
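Before carrying this out, let us record the elementary summation that motivates the whole scheme (a heuristic only; the precise bounds are proved in Section \ref{boundmultilinear}). If the $n$-th iterate, denoted schematically $I^n f$, satisfies $\Vert I^n f \Vert_{L^2_x} \leqslant (C \delta)^n \varepsilon_1$ with $C$ the constant fixed at the end of the previous section, then
\begin{align*}
\Big \Vert \sum_{n \geqslant 1} I^n f \Big \Vert_{L^2_x} \leqslant \sum_{n \geqslant 1} (C \delta)^n \varepsilon_1 = \frac{C \delta}{1 - C \delta} \varepsilon_1 \lesssim \delta \varepsilon_1 \qquad \textrm{provided that } C \delta \leqslant \frac{1}{2},
\end{align*}
so the series produced by the repeated integrations by parts converges in $L^2_x$.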
\\ \\ To treat them in a unified way, we notice that they both have the form \begin{align*} 2 \int_{1} ^t \int_{\mathbb{R}^3} i s \xi_l e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \alpha_1( \eta_1) \widehat{f_{k_1}}(s,\eta_1) \widehat{W_1}(\xi-\eta_1) d\eta_1 ds , \end{align*} where $W_1$ denotes either $a_i$ or $V,$ and $\alpha_1(\eta_1)= \eta_{1,i}$ if $W_1=a_i$ and $1$ if $W_1=V.$ \\ \\ We distinguish two cases:\\ \\ \underline{Case 1: $\vert k_1 - k \vert > 1$} \\ Then we integrate by parts in time using $\displaystyle \frac{1}{i(\vert \xi \vert^2-\vert \eta_1 \vert^2)} \partial_s \big(e^{is(\vert \xi \vert^2 - \vert \eta_1 \vert^2)} \big) = e^{is(\vert \xi \vert^2 - \vert \eta_1 \vert^2)}$ and obtain for \eqref{M2} \begin{align} \label{B1}& 2 \int_{1} ^t \int_{\mathbb{R}^3} i s \xi_l e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \alpha_1(\eta_1) \widehat{f_{k_1}}(s,\eta_1) \widehat{W_1}(\xi-\eta_1) d\eta_1 ds \\ \notag&= - \int_{1} ^t \int_{\mathbb{R}^3} \frac{2 s \xi_l}{\vert \xi \vert^2 - \vert \eta_1 \vert^2} e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \alpha_1(\eta_1) \partial_s \widehat{f_{k_1}}(s,\eta_1) \widehat{W_1}(\xi-\eta_1) d\eta_1 ds \\ \notag& + \int_{\mathbb{R}^3} \frac{2 t \xi_l}{\vert \xi \vert^2 - \vert \eta_1 \vert^2} e^{it (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \alpha_1(\eta_1) \widehat{f_{k_1}}(t,\eta_1) \widehat{W_1}(\xi-\eta_1) d\eta_1 \\ \notag& - \int_{\mathbb{R}^3} \frac{ 2 \xi_l}{\vert \xi \vert^2 - \vert \eta_1 \vert^2} e^{i (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \alpha_1(\eta_1) \widehat{f_{k_1}}(1,\eta_1) \widehat{W_1}(\xi-\eta_1) d\eta_1 \\ \notag& - \int_{1} ^t \int_{\mathbb{R}^3} \frac{ 2 \xi_l}{\vert \xi \vert^2 - \vert \eta_1 \vert^2} e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \alpha_1(\eta_1) \widehat{f_{k_1}}(s,\eta_1) \widehat{W_1}(\xi-\eta_1) d\eta_1 ds.
\end{align} Now we use that $\partial_{s} f = e^{-is\Delta}(W_2 \widetilde{D_2} u + u^2)$ where $W_2$ stands for either $V$ or one of the $a_i$, and $\widetilde{D_2}$ stands for $1$ if $W_2=V$ and for $\partial_i$ if $W_2=a_i$. On the Fourier side, $\widetilde{D_2}$ is denoted by $\alpha_2$, with the same convention as for $\alpha_1.$ The summation on these different kinds of potentials is implicit in the following. \\ We get that \eqref{B1} is equal to the sum of the following terms: \begin{align} \label{B2}& - \int_{1} ^t \int_{\mathbb{R}^3} \frac{2 s \xi_l}{\vert \xi \vert^2 - \vert \eta_1 \vert^2} e^{is \vert \xi \vert^2} \alpha_1(\eta_1) \widehat{(W_2 \widetilde{D_2} u)_{k_1}}(s,\eta_1) \widehat{W_1}(\xi-\eta_1) d\eta_1 ds \\ \label{B2bis} &- \int_{1} ^t \int_{\mathbb{R}^3} \frac{2 s \xi_l}{\vert \xi \vert^2 - \vert \eta_1 \vert^2} e^{is \vert \xi \vert^2} \alpha_1(\eta_1) \widehat{(u^2)_{k_1}}(s,\eta_1) \widehat{W_1}(\xi-\eta_1) d\eta_1 ds \\ \label{B3}& + \int_{\mathbb{R}^3} \frac{2 t \xi_l}{\vert \xi \vert^2 - \vert \eta_1 \vert^2} e^{it (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \alpha_1(\eta_1) \widehat{f_{k_1}}(t,\eta_1) \widehat{W_1}(\xi-\eta_1) d\eta_1 \\ \label{B4}& - \int_{\mathbb{R}^3} \frac{ 2 \xi_l}{\vert \xi \vert^2 - \vert \eta_1 \vert^2} e^{i (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \alpha_1(\eta_1) \widehat{f_{k_1}}(1,\eta_1) \widehat{W_1}(\xi-\eta_1) d\eta_1 \\ \label{B5}& - \int_{1} ^t \int_{\mathbb{R}^3} \frac{ 2 \xi_l}{\vert \xi \vert^2 - \vert \eta_1 \vert^2} e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \alpha_1(\eta_1) \widehat{f_{k_1}}(s,\eta_1) \widehat{W_1}(\xi-\eta_1) d\eta_1 ds. \end{align} All these terms, except for \eqref{B2}, will be estimated directly. We will prove the following bounds: \begin{proposition} We have \begin{align*} \Vert \eqref{B3}, \eqref{B4}, \eqref{B5} \Vert_{L^2_x} & \lesssim \delta \varepsilon_1 \\ \Vert \eqref{B2bis} \Vert_{L^2 _x} & \lesssim \delta \varepsilon_1 ^2.
\end{align*} \end{proposition} To deal with the remaining term \eqref{B2}, we will iterate the procedure presented here. Indeed, if we write it as \begin{align*} \int_{1} ^t \int_{\mathbb{R}^3} \frac{\widehat{W_1}(\xi-\eta_1) P_{k_1}(\eta_1)}{\vert \xi \vert^2 - \vert \eta_1 \vert^2} \alpha_1(\eta_1) \int_{\mathbb{R}^3} 2i s \xi_l \alpha_2(\eta_2) e^{is(\vert \xi \vert^2 - \vert \eta_2 \vert^2)} \widehat{W_2}(\eta_1-\eta_2) \widehat{f}(s,\eta_2) d\eta_2 d\eta_1 ds, \end{align*} then we see that the inner integral on $\eta_2$ is similar to the term we started with, namely \eqref{M2} and \eqref{M6}. The idea is then to apply the same procedure to these terms. \\ \\ \underline{Case 2: $\vert k-k_1 \vert \leqslant 1$} \\ In this case we integrate by parts in $\eta_1$ using $\displaystyle \frac{\eta_1 \cdot \nabla \big( e^{is \vert \eta_1 \vert^2} \big)}{2is \vert \eta_1 \vert^2} = e^{is \vert \eta_1 \vert^2},$ and obtain \begin{align} \notag \eqref{B1} &= \\ \label{c21}&\int_{1} ^t \int_{\mathbb{R}^3} \frac{\eta_{1,j} \xi_l \alpha_1(\eta_1)}{\vert \eta_1 \vert^2} e^{is(\vert \xi \vert^2 - \vert \eta_1 \vert^2)}\widehat{W_1}(\xi-\eta_1) \partial_{\eta_{1,j}} \widehat{f}(s,\eta_1) P_{k_1}(\eta_1) d\eta_1 ds \\ \label{c22}&+ \int_{1} ^t \int_{\mathbb{R}^3} \frac{\eta_{1,j} \xi_l \alpha_1(\eta_1)}{\vert \eta_1 \vert^2} e^{is(\vert \xi \vert^2 - \vert \eta_1 \vert^2)} \partial_{\eta_{1,j}} \widehat{W_1}(\xi-\eta_1) \widehat{f_{k_1}}(s,\eta_1) d\eta_1 ds \\ \label{c23}& + \int_{1} ^t \int_{\mathbb{R}^3} \partial_{\eta_{1,j}} \big( \frac{\eta_{1,j} \xi_l \alpha_1(\eta_1)}{\vert \eta_1 \vert^2} \big) e^{is (\vert \xi \vert^2 - \vert \eta_1 \vert^2)} \widehat{W_1}(\xi-\eta_1) \widehat{f_{k_1}}(s,\eta_1) d\eta_1 ds \\ \label{c24}& + \int_{1} ^t \int_{\mathbb{R}^3} \frac{\eta_{1,j} \xi_l \alpha_1(\eta_1)}{\vert \eta_1 \vert^2} e^{is (\vert \xi \vert^2 - \vert \eta_1 \vert^2)} \widehat{W_1}(\xi-\eta_1) \widehat{f_{k_1}}(s,\eta_1) 1.1^{-k_1} \phi'(1.1^{-k_1} \eta_1)
\frac{\eta_{1,j}}{\vert \eta_1 \vert} d\eta_1 ds. \end{align} Note that there is an implicit sum over $j$ above. \\ For the easier terms we have the following estimates that will be proved in Section \ref{firsterm}: \begin{proposition} We have the following bounds: \begin{align*} \Vert \eqref{c22}, \eqref{c23}, \eqref{c24} \Vert_{L^2 _x} \lesssim \delta \varepsilon_1. \end{align*} \end{proposition} For the main complicated term \eqref{c21} we integrate by parts in time in the inner integral. Since the factor $\frac{1}{\vert \xi \vert^2 - \vert \eta_1 \vert^2}$ is singular, we must consider a regularization of that term. \\ We consider for $\beta>0$ \begin{align} \label{c21beta} \int_{1} ^t \int_{\mathbb{R}^3} \frac{\eta_{1,j} \xi_l \alpha_1(\eta_1)}{\vert \eta_1 \vert^2} e^{-\beta s}e^{is(\vert \xi \vert^2 - \vert \eta_1 \vert^2)}\widehat{W_1}(\xi-\eta_1) \partial_{\eta_{1,j}} \widehat{f}(s,\eta_1) P_{k_1}(\eta_1) d\eta_1 ds. \end{align} Now we integrate by parts in time and obtain \begin{align} \notag \eqref{c21beta} &= \\ \label{iteration}& -\int_{1} ^t \int_{\mathbb{R}^3} \frac{\eta_{1,j} \xi_l \alpha_1(\eta_1)}{i(\vert \xi \vert^2 - \vert \eta_1 \vert^2 + i \beta) \vert \eta_1 \vert^2} \\ \notag & \times e^{is(\vert \xi \vert^2 - \vert \eta_1 \vert^2 + i \beta)}\widehat{W_1}(\xi-\eta_1) \partial_s \partial_{\eta_{1,j}} \widehat{f}(s,\eta_1) P_{k_1}(\eta_1) d\eta_1 ds \\ \label{restediff}&+ \int_{\mathbb{R}^3} \frac{\eta_{1,j} \xi_l \alpha_1(\eta_1)}{i(\vert \xi \vert^2 - \vert \eta_1 \vert^2 + i \beta) \vert \eta_1 \vert^2} e^{it(\vert \xi \vert^2 - \vert \eta_1 \vert^2 + i \beta)}\widehat{W_1}(\xi-\eta_1) \partial_{\eta_{1,j}} \widehat{f}(t,\eta_1) P_{k_1}(\eta_1) d\eta_1 \\ \label{restefac}&- \int_{\mathbb{R}^3} \frac{\eta_{1,j} \xi_l \alpha_1(\eta_1)}{i(\vert \xi \vert^2 - \vert \eta_1 \vert^2 + i \beta)\vert \eta_1 \vert^2} e^{i(\vert \xi \vert^2 - \vert \eta_1 \vert^2 + i \beta)}\widehat{W_1}(\xi-\eta_1) \partial_{\eta_{1,j}} \widehat{f}(1,\eta_1)
P_{k_1}(\eta_1) d\eta_1 . \end{align} The terms \eqref{restediff} and \eqref{restefac} will be estimated in the same way in Section \ref{firsterm}. We will prove the following estimates: \begin{proposition} We have the bounds (that hold uniformly in $\beta$): \begin{align*} \Vert \eqref{restediff}, \eqref{restefac} \Vert_{L^2_x} \lesssim \delta \varepsilon_1. \end{align*} \end{proposition} Since the estimates hold uniformly in $\beta,$ we have, by lower semi-continuity of the norm: \begin{align*} \big \Vert \lim_{\beta \to 0, \beta>0} \eqref{restediff}, \eqref{restefac} \big \Vert_{L^2 _x} \leqslant \liminf_{\beta \to 0, \beta>0} \Vert \eqref{restediff}, \eqref{restefac} \Vert_{L^2_x} \lesssim \delta \varepsilon_1. \end{align*} This is how all regularized terms will be handled since all our estimates are uniform in $\beta$. Therefore we will drop the regularizing factor $\beta$ to simplify notation. \\ \\ For the remaining term \eqref{iteration} we use the following expression for $\partial_t \partial_{\eta_{1,j}} \widehat{f}$ (obtained by differentiating \eqref{Duhamel}) \begin{align} \label{identder} & \partial_t \partial_{\xi_j} \widehat{f}(t,\xi) \\ \label{id1}& = 2 t \xi_j e^{it \vert \xi \vert^2} \int_{\mathbb{R}^3} \widehat{a_i}(\xi - \eta_1) e^{-it \vert \eta_1 \vert ^2} \eta_{1,i} \widehat{f}(t,\eta_1) d\eta_1 \\ \label{id2}&-i e^{it \vert \xi \vert^2} \int_{\mathbb{R}^3} \widehat{x_j a_i}(\xi - \eta_1) e^{-it \vert \eta_1 \vert ^2} \eta_{1,i} \widehat{f}(t,\eta_1) d\eta_1 \\ \label{id3}&+2 t \xi_j e^{it \vert \xi \vert^2} \int_{\mathbb{R}^3} \widehat{V}(\xi - \eta_1) e^{-it \vert \eta_1 \vert ^2} \widehat{f}(t,\eta_1) d\eta_1 \\ \label{id4}&-i e^{it \vert \xi \vert^2} \int_{\mathbb{R}^3} \widehat{x_j V}(\xi - \eta_1) e^{-it \vert \eta_1 \vert ^2} \widehat{f}(t,\eta_1) d\eta_1 \\ \label{id5}&+2 \int_{\mathbb{R}^3} t \eta_{1,j} e^{it (\vert \xi \vert^2 - \vert \xi-\eta_1 \vert^2 - \vert \eta_1 \vert^2)} \widehat{f}(t,\eta_1) \widehat{f}(t,\xi- \eta_1)
d\eta_1 \\ \label{id6}&-i \int_{\mathbb{R}^3} e^{it (\vert \xi \vert^2 - \vert \xi-\eta_1 \vert^2 - \vert \eta_1 \vert^2)} \widehat{f}(t,\eta_1) \partial_{\xi_j} \widehat{f}(t,\xi- \eta_1) d\eta_1. \end{align} Of all the terms that appear subsequently, the two that we will not be able to estimate directly come from \eqref{id1} and \eqref{id3}; they are of the form \begin{align} \label{model2} & \int_{1} ^t \int_{\mathbb{R}^3} \frac{\eta_{1,j} \xi_l \alpha_1(\eta_1)}{(\vert \xi \vert ^2 - \vert \eta_1 \vert^2 + i \beta)\vert \eta_1 \vert^2} \widehat{W_1}(\xi-\eta_1) P_{k_1}(\eta_1) \\ \notag & \times \int_{\mathbb{R}^3} 2 s \eta_{1,j} \widehat{W_2}(\eta_1 - \eta_2) \alpha_2(\eta_2) e^{is(\vert \xi \vert^2 - \vert \eta_2 \vert^2 + i\beta)} \widehat{f}(s,\eta_2) d\eta_2 d\eta_1 ds \\ \notag & = \int_{1} ^t \int_{\mathbb{R}^3} \frac{ \alpha_1(\eta_1)}{(\vert \xi \vert ^2 - \vert \eta_1 \vert^2 + i \beta)} e^{is(\vert \xi \vert^2 - \vert \eta_1 \vert^2)}\widehat{W_1}(\xi-\eta_1) P_{k_1}(\eta_1) \\ \notag & \times \int_{\mathbb{R}^3} 2 s \xi_l \widehat{W_2}(\eta_1 - \eta_2) e^{is(\vert \xi \vert^2 - \vert \eta_2 \vert^2 + i\beta)} \alpha_2(\eta_2) \widehat{f}(s,\eta_2) d\eta_2 d\eta_1 ds , \end{align} where the simplification is due to the summation over $j.$ \\ We explain in the next subsection how to deal with these terms. \subsection{Further expansions} Note that for expressions of the type \eqref{model2} the inner integral is the same as the terms we started with (that is \eqref{M2} and \eqref{M6}). \\ Therefore we use the exact same strategy: \\ We start by localizing in the $\eta_2$ variable ($k_2$ denotes the corresponding exponent) \begin{itemize} \item If $\vert k-k_2 \vert > 1$ then we integrate by parts in time. \item If $\vert k-k_2 \vert \leqslant 1$ then we integrate by parts in $\eta_2.$ Then we integrate by parts in time in the worst term (that is the term for which the derivative in $\eta_2$ falls on the profile).
\end{itemize} Then we repeat the procedure iteratively. \\ \\ \underline{Case 1: $\vert k-k_n \vert >1 $} \\ At the $n$-th step of the iteration we obtain the following terms: \begin{align*} &\mathcal{F} I_1 ^n f := \int_1 ^t \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-2} \\ & \times \int_{\eta_n} s \xi_l e^{is(\vert \xi \vert^2 - \vert \eta_n \vert^2 - \vert \eta_{n-1}- \eta_n \vert^2)} \widehat{f}(s,\eta_{n-1}-\eta_n) \widehat{f}(s,\eta_n) d\eta_n d\eta_{n-1} ds, \end{align*} where $W$ denotes either $V$ or one of the $a_i$'s. The function $\alpha_{\gamma}(\eta)$ is equal to $1$ if $W = V$ and to $\eta_i$ if $W = a_i.$ \\ We also use the convention that $\eta_0 = \xi.$ \\ With similar notation, we also have the analog of \eqref{B3}: \begin{align*} &\mathcal{F} I_2 ^n f := \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-1} e^{it(\vert \xi \vert^2-\vert \eta_n \vert^2)} t \xi_l \widehat{f_{k_n}}(t,\eta_n) d\eta_n \end{align*} and similarly: \begin{align*} &\mathcal{F} I_3 ^n f := \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-1} \\ & \times \int_1 ^t \xi_l e^{is(\vert \xi \vert^2-\vert \eta_n \vert^2)} \frac{\widehat{W_n}(\eta_{n-1}-\eta_n) \alpha_n (\eta_n)}{\vert \xi \vert^2 - \vert \eta_n \vert^2} \widehat{f_{k_n}}(t,\eta_n) d\eta_n ds. \end{align*} \underline{Case 2: $\vert k-k_n \vert \leqslant 1$} \\ In this case we get more terms.
More precisely we have \begin{align*} &\mathcal{F} I_4 ^n f := \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-1} \\ & \times e^{it(\vert \xi \vert^2-\vert \eta_n \vert^2)} \frac{\xi_l \eta_{n,j}}{\vert \eta_n \vert^2} \partial_{\eta_{n,j}} \widehat{f}(t,\eta_n) P_{k_n}(\eta_n) d\eta_n. \end{align*} There are also easier terms of the same type: \begin{align*} &\mathcal{F} I_5 ^n f := \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-2} \\ & \times \int_1 ^t \int_{\mathbb{R}^3} e^{is(\vert \xi \vert^2-\vert \eta_n \vert^2)} \alpha_n(\eta_n) \partial_{\eta_{n,j}} \widehat{W_n}(\eta_{n-1}-\eta_n) \frac{\xi_l \eta_{n,j}}{\vert \eta_n \vert^2} \widehat{f_{k_n}}(s,\eta_n) d\eta_n ds d\eta_{n-1}. \end{align*} We also get a somewhat similar term \begin{align*} &\mathcal{F} I_6 ^n f := \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-2} \\ & \times \int_1 ^t \int_{\eta_n} e^{is(\vert \xi \vert^2-\vert \eta_n \vert^2)} \alpha_{n}(\eta_n) \widehat{W_n}(\eta_{n-1}-\eta_n) \partial_{\eta_{n,j}} \big( \frac{\xi_l \eta_{n,j}}{\vert \eta_n \vert^2} \big) \widehat{f_{k_n}}(s,\eta_n) d\eta_n ds d\eta_{n-1}, \end{align*} and also \begin{align*} &\mathcal{F} I_7 ^n f := \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ...
d\eta_{n-2} \\ & \times \int_1 ^t \int_{\eta_n} e^{is(\vert \xi \vert^2 - \vert \eta_n \vert^2)} \frac{\alpha_{n}(\eta_n)\eta_{n,j}\xi_l}{\vert \eta_n \vert^2} 1.1^{-k_n} \phi'(1.1^{-k_n}\eta_n) \frac{\eta_{n,j}}{\vert \eta_n \vert} \widehat{W_n}(\eta_{n-1}-\eta_n) \widehat{f_{k_n}}(s,\eta_n) d\eta_n d\eta_{n-1} ds . \end{align*} Finally we get the following terms coming from \eqref{identder} \begin{align*} &\mathcal{F} I_8 ^n f := \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-2} \frac{\xi_l \eta_{n-1,j}}{\vert \eta_{n-1} \vert^2} \\ & \times \int_1 ^t \int_{\eta_n} e^{is(\vert \xi \vert^2 - \vert \eta_n \vert^2)} \alpha_{n}(\eta_n) \widehat{x_j W_n}(\eta_{n-1}-\eta_n) \widehat{f}(s,\eta_n) d\eta_n d\eta_{n-1} ds . \end{align*} More importantly we also have the bilinear terms \begin{align*} &\mathcal{F} I_9 ^n f := \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-2} \frac{\xi_l \eta_{n-1,j}}{\vert \eta_{n-1} \vert^2} \\ & \times \int_1 ^t s \int_{\eta_n} \eta_{n,j} e^{is(\vert \xi \vert^2 - \vert \eta_n \vert^2 - \vert \eta_{n-1}- \eta_n \vert^2)} \widehat{f}(s,\eta_{n-1}-\eta_n) \widehat{f}(s,\eta_n) d\eta_n d\eta_{n-1} ds \end{align*} and \begin{align*} &\mathcal{F} I_{10} ^n f := \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ...
d\eta_{n-2} \frac{\xi_l \eta_{n-1,j}}{\vert \eta_{n-1} \vert^2} \\ & \times \int_1 ^t \int_{\eta_n} e^{is(\vert \xi \vert^2 - \vert \eta_n \vert^2 - \vert \eta_{n-1}- \eta_n \vert^2)} \partial_{\eta_{n,j}} \widehat{f}(s,\eta_{n-1}-\eta_n) \widehat{f}(s,\eta_n) d\eta_n d\eta_{n-1} ds. \end{align*} Heuristically, one has the following correspondence: $I^{n+1}_1 f$ is the analog of \eqref{B2bis}, $I_2^n f$ is the analog of both \eqref{B3} and \eqref{B4}, $I_3 ^n f$ is the analog of \eqref{B5}, $I_4 ^n f$ is the analog of both \eqref{restediff} and \eqref{restefac}, $I_5 ^n f$ is the analog of \eqref{c22}, $I_6 ^n f$ is the analog of \eqref{c23}, $I_7 ^n f$ is the analog of \eqref{c24}, $I_8^{n+1} f$ is the analog of both \eqref{id2} and \eqref{id4}, $I_9^{n+1} f$ is the analog of \eqref{id5} and $I_{10}^{n+1} f$ is the analog of \eqref{id6}. \\ \\ We will prove the following estimates in Section \ref{Boundingiterates}: \begin{proposition} \label{nthestimate} We have the bound \begin{align*} \sum_{k_1,...,k_n} \Vert I_j ^n f \Vert_{L^2_x} \lesssim C^{n} \delta^{n} \varepsilon_1 \ \ \ \ \ j \in \lbrace 1; ... ;10 \rbrace . \end{align*} The implicit constant in the above inequality does not depend on $n.$ \end{proposition} We now explain how this implies the desired bound \eqref{goal1}.
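The mechanism behind the proof below is an elementary geometric series: Proposition \ref{nthestimate} gives a bound of size $C^n \delta^n \varepsilon_1$ for the $n$-th iterate, while the number of terms produced at step $n$ grows only exponentially in $n$. As a sketch of the arithmetic (assuming, as will be arranged, that $4C\delta \leqslant 1/2$):

```latex
% Summing the n-th order contributions over all steps of the iteration:
\begin{align*}
\sum_{n=1}^{\infty} 4^n C^n \delta^n \varepsilon_1
= \frac{4C\delta}{1-4C\delta}\, \varepsilon_1
\leqslant 8 C \delta \varepsilon_1 ,
\qquad \text{provided } 4C\delta \leqslant \frac{1}{2} .
\end{align*}
```

Thus the full expansion costs only an extra factor of $\delta$ compared to the first iterate, which is what makes the smallness assumption on the potentials sufficient.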
\\ \begin{proof}[Proof of \eqref{goal1}] From the iteration procedure explained above, we arrive at the following expression for $\partial_{\xi_l} \widehat{f}(t,\xi)$ (where we set all the numerical constants such as $i, 2 \pi$ equal to one for better legibility, since they do not matter in the estimates): \begin{align*} \partial_{\xi_l} \widehat{f}(t,\xi) & = \partial_{\xi_l} \big( e^{i \vert \xi \vert^2} \widehat{u_{1,k}} (\xi) \big) + \eqref{M1} + \eqref{M3} + \eqref{M4} + \eqref{M5}+ \eqref{M7} + \eqref{M8} \\ & +\mathcal{F} \sum_{n=1}^{\infty} \sum_{\gamma=1} ^{n+1} \sum_{W_{\gamma} \in \lbrace a_1, a_2,a_3, V \rbrace} \sum_{j=1}^3 \Bigg( \sum_{k_1,...,k_n} I^{n+1}_1 f + I_2 ^n f + \widetilde{I_2}^n f + I_3 ^n f + I_4 ^n f + \widetilde{I_4}^n f \\ & +I_5 ^n f + I_6 ^n f + I_7 ^n f + I_8 ^{n+1} f + I_9 ^{n+1} f + I_{10}^{n+1} f \Bigg), \end{align*} where $\widetilde{I_2}^n f, \widetilde{I_4}^n f$ denote the same expressions as $I_2^n f, I_4^n f$ but with $t=1.$ \\ Since $O(4^n)$ terms appear at the $n$-th step of the iteration (that is, the three middle sums above contribute $O(4^n)$ terms), there exists some large constant $D$ such that (using \eqref{estimationX} and Proposition \ref{nthestimate}) \begin{align*} \Vert f \Vert_{X} \leqslant \varepsilon_0 + D \sum_{n=1}^{+\infty} 4^n C^n \delta^n \varepsilon_1 \leqslant \frac{\varepsilon_1}{2}, \end{align*} provided $\delta$ is small enough. \end{proof} \section{Bounding the terms from the first expansion} \label{firsterm} In this section we bound the terms from the first expansion (see the various estimates announced in Section \ref{firstexpansion}). \\ In the first subsection we gather the estimates that can be carried out without smoothing estimates; the second subsection treats the ones that require recovering derivatives (the terms of potential type).
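Most estimates in the first subsection follow one recurring pattern: the dual (retarded) Strichartz estimate, then H\"{o}lder's inequality in space. Schematically, with $W$ standing for $V$ or one of the $a_i$ (this is only the template; each proof below adds the appropriate frequency localizations and weights):

```latex
% Recurring estimation pattern: dual Strichartz estimate, then Holder
\begin{align*}
\Big \Vert \int_1^t e^{i(t-s)\Delta} \big( W \, e^{is\Delta} f_{k_1} \big) \, ds \Big \Vert_{L^{\infty}_t L^2_x}
& \lesssim \big \Vert W \, e^{is\Delta} f_{k_1} \big \Vert_{L^{2}_s L^{6/5}_x} \\
& \lesssim \Vert W \Vert_{L^{3/2}_x} \, \Vert e^{is\Delta} f_{k_1} \Vert_{L^{2}_s L^{6}_x} ,
\end{align*}
```

after which the last factor is summed over $k_1$ by means of Lemmas \ref{summation} and \ref{dispersive}.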
\subsection{Easier terms} We start with the terms that appear directly after taking a derivative in $\xi_l$ (and which therefore do not, strictly speaking, arise from the iteration procedure) \begin{lemma} We have the bounds \begin{align*} \Vert \eqref{M1}, \eqref{M4} \Vert_{L^2_x} \lesssim \delta \varepsilon_1. \end{align*} \end{lemma} \begin{proof} We start with \eqref{M1}. Using Strichartz estimates and Lemma \ref{bilin}, we have, if $k_1<0$, \begin{align*} \Vert \eqref{M1} \Vert_{L^2_x} & \lesssim \Bigg \Vert \mathcal{F}^{-1} \int_{\eta_1} \eta_{1,i} e^{-is \vert \eta_1 \vert^2} \widehat{f_{k_1}}(s, \eta_1) \partial_{\xi_l} \widehat{a_i}(\xi-\eta_1) d\eta_1 \Bigg \Vert_{L^{2}_t L^{6/5}_x} \\ & \lesssim 1.1^{k_1} \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2}_t L^{6}_x} \Vert x_l a_i \Vert_{L^{3/2}_x}, \end{align*} which can be summed using Lemma \ref{dispersive}. \\ When $k_1 \geqslant 0$ we use Lemma \ref{summation} with $c=1/4$ to obtain \begin{align*} \Vert \eqref{M1} \Vert_{L^2} & \lesssim 1.1^{k_1} \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2}_t L^{6}_x} \Vert x_l a_i \Vert_{L^{3/2}_x} \\ & \lesssim 1.1^{-k_1} \delta \varepsilon_1. \end{align*} In the case where $W=V,$ we use Strichartz estimates to obtain \begin{align*} & \Bigg \Vert P_k (\xi) \int_1 ^t \int_{\mathbb{R}^3} e^{is (\vert \xi \vert^2 - \vert \eta_1 \vert^2)} \widehat{f}_{k_1}(s,\eta_1) \partial_{\xi_l} \widehat{V}(\xi-\eta_1) d\eta_1 ds \Bigg \Vert_{L^{\infty}_t L^2 _x} \\ & \lesssim \Vert e^{it \Delta} f_{k_1} (xV) \Vert_{L^{2}_t L^{6/5}_x} \\ & \lesssim \Vert e^{it \Delta} f_{k_1} \Vert_{L^{2}_t L^6 _x} \Vert x V \Vert_{L^{3/2} _x}, \end{align*} and then we can sum over $k_1$ to deduce the result thanks to Lemmas \ref{dispersive} and \ref{summation}. \end{proof} Now we prove the following: \begin{lemma} We have the bounds \begin{align*} \Vert \eqref{M3},\eqref{M5} \Vert_{L^2_x} \lesssim \delta \varepsilon_1 . \end{align*} \end{lemma} \begin{proof} Here we essentially reproduce the proof from \cite{L}, Lemma 5.2.
Since the proofs for the magnetic part and the potential part are similar, we do them simultaneously. \\ We split the proof into several cases: \\ \underline{Case 1: $k>0$} \\ In this case we use Strichartz estimates to write that \begin{align*} & \Bigg \Vert 1.1^{-k} \phi'(1.1^{-k} \xi) \frac{\xi_l}{\vert \xi \vert} \int_{1}^t \int_{\mathbb{R}^3} e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \widehat{f_{k_1}}(s,\eta_1) \widehat{V}(\xi-\eta_1) d\eta_1 ds \Bigg \Vert_{L^{\infty}_t L^2 _x} \\ & \lesssim \Bigg \Vert \mathcal{F}^{-1} \int_1 ^t \int_{\mathbb{R}^3} e^{is (\vert \xi \vert^2 - \vert \eta_1 \vert^2)} \widehat{f}_{k_1}(s,\eta_1) \widehat{V}(\xi-\eta_1) d\eta_1 ds \Bigg \Vert_{L^{\infty}_t L^2 _x} \\ & \lesssim \Vert \big( e^{it \Delta} f_{k_1} \big)V \Vert_{L^{2}_t L^{6/5}_x} \\ & \lesssim \Vert e^{it \Delta} f_{k_1} \Vert_{L^{2}_t L^{6}_x} \Vert V \Vert_{ L^{3/2}_x} \\ & \lesssim \Vert e^{it \Delta} f_{k_1} \Vert_{L^{2}_t L^{6}_x} \delta , \end{align*} and we can sum over $k_1$ using Lemmas \ref{summation} and \ref{dispersive}. \\ In the magnetic case, there is an extra $1.1^{k_1}$ factor.
To deal with it when $k_1>0,$ we use Lemma \ref{summation} as in the proof of the previous lemma.\\ \\ \underline{Case 2: $k \leqslant 0$} \\ We distinguish three subcases:\\ \underline{Case 2.1: $k>k_1+1$} \\ In this case the frequency $\vert \xi- \eta_1 \vert$ is localized at $1.1^{k},$ therefore we can write, using Strichartz estimates and Bernstein's inequality: \begin{align*} & \Bigg \Vert 1.1^{-k} \phi'(1.1^{-k} \xi) \frac{\xi_l}{\vert \xi \vert} \int_{1}^t \int_{\mathbb{R}^3} e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \widehat{f_{k_1}}(s,\eta_1) \widehat{V}(\xi-\eta_1) d\eta_1 ds \Bigg \Vert_{L^{\infty}_t L^2 _x} \\ & \lesssim 1.1^{-k} \Bigg \Vert \mathcal{F}^{-1} \int_{\mathbb{R}^3} e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \widehat{f_{k_1}}(s,\eta_1) \widehat{V_{k}}(\xi-\eta_1) d\eta_1 \Bigg \Vert_{L^{2}_t L^{6/5}_x} \\ & \lesssim 1.1^{-k} \Vert V_{k} \Vert_{ L^{3/2} _x} \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2} _t L^{6}_x} \\ & \lesssim \Vert V_k \Vert_{L^{1}} \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2} _t L^{6}_x}, \end{align*} which can be summed using Lemmas \ref{summation} and \ref{dispersive}. \\ In the magnetic case, the proof is simpler. We directly obtain the bound \begin{align*} \Vert \eqref{M3} \Vert_{L^2_x} \lesssim 1.1^{k_1-k} \Vert {a}_{i,k} \Vert_{ L^{3/2} _x} \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2}_t L^{6}_x}, \end{align*} which can be summed directly since $k_1<k.$ \\ \\ \underline{Case 2.2: $\vert k-k_1 \vert \leqslant 1$}\\ Then we split the $\xi-\eta_1$ frequency dyadically and denote by $k_2$ the corresponding exponent.
\\ Note that $\vert \xi-\eta_1 \vert \leqslant \vert \xi \vert + \vert \eta_1 \vert \leqslant 1.1^{k+1} + 1.1^{k+2} \leqslant 1.1^{k+10}.$ \\ As a result $k_2 \leqslant k +10.$ \\ Now we can write, using Strichartz estimates and Bernstein's inequality: \begin{align*} & \Bigg \Vert 1.1^{-k} \phi'(1.1^{-k} \xi) \frac{\xi_l}{\vert \xi \vert} \int_{1}^t \int_{\mathbb{R}^3} e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \widehat{f_{k_1}}(s,\eta_1) \widehat{V_{k_2}}(\xi-\eta_1) d\eta_1 ds \Bigg \Vert_{L^{\infty}_t L^2 _x} \\ & \lesssim 1.1^{-k} \Vert V_{k_2} \Vert_{L^{3/2}_x} \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2}_t L^{6}_x} \\ & \lesssim 1.1^{k_2-k} \Vert V \Vert_{ L^{1} _x} \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2}_t L^{6}_x}. \end{align*} Now since $k_2 \leqslant k + 10$ the factor in front allows us to sum over $k_2.$ The result follows. \\ The magnetic case is simpler. There is no need for the additional localization since \begin{align*} \Vert \eqref{M5} \Vert_{L^2 _x} \lesssim 1.1^{k_1-k} \Vert a_i \Vert_{L^{3/2} _x} \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2}_t L^{6}_x} . \end{align*} \underline{Case 2.3: $k_1 > k+1$}\\ In this case we split the time variable dyadically. Let $m$ denote the corresponding exponent.
\\ We must estimate \begin{align*} I_{m,k_1,k} := 1.1^{-k} \phi'(1.1^{-k} \xi) \frac{\xi_l}{\vert \xi \vert} \int_{1.1^m}^{1.1^{m+1}} \int_{\mathbb{R}^3} e^{is (\vert \xi \vert^2 -\vert \eta_1 \vert^2)} \widehat{f_{k_1}}(s,\eta_1) \widehat{V_{k_1}}(\xi-\eta_1) d\eta_1 ds , \end{align*} where the extra localization can be placed on $V$ since $\xi - \eta_1$ has magnitude roughly $1.1^{k_1}$ given that $k_1>k+1.$ \\ \\ \underline{Subcase 2.3.1: $k \leqslant -\epsilon m$ ($\epsilon$ some small number)} \\ Then we write, using Bernstein's inequality, H\"{o}lder's inequality and Lemma \ref{dispersive}, that \begin{align*} \Vert I_{m,k_1,k} \Vert_{L^{\infty}_t L^2 _x} & \lesssim 1.1^{-k} 1.1^{m} \sup_{t \simeq 1.1^m} \Vert \big( e^{it \Delta} f_{k_1} V_{k_1} \big)_k \Vert_{L^2 _x} \\ & \lesssim 1.1^{m} 1.1^{\epsilon k} \sup_{t \simeq 1.1^m} \Vert e^{it \Delta} f_{k_1} V_{k_1}(t) \Vert_{L^{6/5-}_x} \\ & \lesssim 1.1^{m} 1.1^{\epsilon k} \sup_{t \simeq 1.1^m} \Vert e^{it \Delta} f_{k_1} \Vert_{L^6 _x} \Vert V_{k_1} \Vert_{L^{3/2-}_x} \\ & \lesssim 1.1^m 1.1^{-m} 1.1^{\epsilon k} \varepsilon_1 \Vert V_{k_1} \Vert_{L^{\infty}_t L^{3/2-}_x} \\ & \lesssim 1.1^{-\epsilon^2 m} \varepsilon_1 \Vert V_{k_1} \Vert_{L^{\infty}_t L^{3/2-}_x}. \end{align*} We can sum over $k_1$ and $m$ given the factors that appear. \\ \\ In the magnetic case we get the same bound with $V$ replaced by $\nabla a$. \\ \\ \underline{Subcase 2.3.2: $-\epsilon m < k \leqslant 0$} \\ In this case we use Strichartz estimates as well as Lemma \ref{dispersive}: \begin{align*} \Vert I_{m,k_1,k} \Vert_{L^{\infty}_t L^2 _x} & \lesssim 1.1^{\epsilon m} \Bigg \Vert \int_{1.1^m} ^{1.1^{m+1}} e^{-is \Delta} \bigg( \big( e^{is \Delta} f_{k_1} \big) V_{k_1} \bigg) ds \Bigg \Vert_{L^2 _x} \\ & \lesssim 1.1^{\epsilon m} \Vert \textbf{1}_{t \simeq 1.1^m} \big( e^{it \Delta} f_{k_1} \big) V_{k_1} \Vert_{L^{2}_t L^{6/5}_x} \\ & \lesssim 1.1^{\epsilon m} \Vert \textbf{1}_{t \simeq 1.1^m} \big( e^{it \Delta} f_{k_1} \big) \Vert_{L^{2}_t L^{6} _x} \Vert V_{k_1} \Vert_{L^{3/2}_x} \\ & \lesssim 1.1^{\epsilon m} 1.1^{-m/2} \varepsilon_1 \Vert V_{k_1} \Vert_{L^{3/2}_x}. \end{align*} Now notice that we can sum that bound over $m$ and $k_1$. \\ In the magnetic case we get the same bound with $V$ replaced by $\nabla a$. \end{proof} We now come to the terms that appear in the iterative procedure in the case where $\vert k-k_1 \vert>1.$ \begin{lemma} We have the bounds \begin{align*} \sum_{\vert k-k_1 \vert>1} \Vert \eqref{B3}, \eqref{B4}, \eqref{B5} \Vert_{L^2 _x} &\lesssim \delta \varepsilon_1 \\ \sum_{\vert k-k_1 \vert>1} \Vert \eqref{B2bis} \Vert_{L^2 _x} &\lesssim \delta \varepsilon_1 ^2. \end{align*} \end{lemma} \begin{proof} Note that the bound on \eqref{B3} implies the one on \eqref{B4} (take $t=1$). Therefore we only prove the first bound. \\ First assume that $k_1 > k+1.$ \\ We use Lemma \ref{bilin}, the dispersive estimate from Lemma \ref{dispersive} and Bernstein's inequality: \begin{align*} \Vert \eqref{B3} \Vert_{L^2 _x} & \lesssim 1.1^{k} 1.1^{k_1} 1.1^{-2k_1} \Vert (a_i)_{k_1} \Vert_{L^3_x} \Vert t e^{it \Delta} f_{k_1} \Vert_{L^6_x} \\ & \lesssim 1.1^{k-k_1} \delta \varepsilon_1.
\end{align*} This last expression can be summed over $k_1$ given the condition $k_1>k+1.$ \\ The reasoning is similar if $k > k_1 + 1.$ We obtain the inequality \begin{align*} \Vert \eqref{B3} \Vert_{L^2 _x} & \lesssim 1.1^{k} 1.1^{k_1} 1.1^{-2k} \Vert (a_i)_{k} \Vert_{L^3_x} \Vert t e^{it \Delta} f_{k_1} \Vert_{L^6_x} , \end{align*} and we use Lemmas \ref{summation} and \ref{dispersive} to sum over $k_1$ in that last inequality. \\ For the last term, using Strichartz estimates, we get when $k>k_1+1$ \begin{align} \notag \Vert \eqref{B5} \Vert_{L^2_x} & \lesssim \Bigg \Vert \mathcal{F}^{-1} \int_{\mathbb{R}^3} \frac{\xi_l \alpha_{1}(\eta_1)}{\vert \xi \vert^2 - \vert \eta_1 \vert^2} \widehat{f_{k_1}}(s,\eta_1) \widehat{W}(\xi-\eta_1) d\eta_1 \Bigg \Vert_{L^{2}_t L^{6/5}_x} \\ \label{aux} & \lesssim 1.1^{-k} \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2} _t L^{6}_x} \Vert \mathcal{F}^{-1} \big( \alpha(\eta_1) P_{k_1}(\eta_1) \big) \Vert_{L^1} \Vert W_{k} \Vert_{L^{3/2}_x}. \end{align} If $W = a_i$ then the bound above reads \begin{align*} \Vert \eqref{aux} \Vert_{L^2_x} & \lesssim 1.1^{k_1-k} \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2} _t L^{6}_x} \Vert a_i \Vert_{L^{3/2}_x} \end{align*} and this can be summed when $k_1 + 1<k.$ \\ \\ If $W = V$ then using Bernstein's inequality we obtain \begin{align*} \Vert \eqref{aux} \Vert_{L^2_x} & \lesssim 1.1^{-k} \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2} _t L^{6}_x} \Vert V_{k} \Vert_{L^{3/2}_x} \\ & \lesssim \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2} _t L^{6}_x} \Vert V_{k} \Vert_{L^{1}_x}.
\end{align*} Now we consider the case $k_1 >k +1:$ \\ Using Strichartz estimates, Lemma \ref{bilin} and Bernstein's inequality as above yields \begin{align*} \Vert \eqref{B5} \Vert_{L^2_x} & \lesssim 1.1^{k} 1.1^{-2k_1} \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2} _t L^{6}_x} \Vert \mathcal{F}^{-1} \big( \alpha(\eta_1) P_{k_1}(\eta_1) \big) \Vert_{L^1} \Vert W_{k_1} \Vert_{L^{3/2}_x} \end{align*} If $W = a_i$ then the bound above reads \begin{align*} \Vert \eqref{aux} \Vert_{L^2_x} & \lesssim 1.1^{k-k_1} \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2} _t L^{6}_x} \Vert a_i \Vert_{L^{3/2}_x}, \end{align*} and this can be summed when $k_1>k+1.$\\ \\ If $W=V$ then using Bernstein's inequality we obtain \begin{align*} \Vert \eqref{aux} \Vert_{L^2_x} & \lesssim 1.1^k 1.1^{-2k_1} \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2} _t L^{6}_x} \Vert V_{k_1} \Vert_{L^{3/2}_x} \\ & \lesssim 1.1^{k-k_1} \Vert e^{it\Delta} f_{k_1} \Vert_{L^{2} _t L^{6}_x} \Vert V_{k_1} \Vert_{L^{1}_x}, \end{align*} which can be summed. \\ \\ Now we prove the bound on \eqref{B2bis}. The reasoning is similar, therefore we only sketch it here. \\ We treat the case $k_1>k+1,$ the other case being similar. \\ Using Strichartz estimates and a standard bilinear estimate, we obtain if $W=a_i:$ \begin{align*} \Vert \eqref{B2bis} \Vert_{L^2_x} & \lesssim 1.1^k 1.1^{-2k_1} 1.1^{k_1} \Vert a_i \Vert_{L^2_x} \Vert tu \Vert_{L^{\infty}_t L^6_x} \Vert u \Vert_{L^2_t L^6 _x} \\ & \lesssim 1.1^{k-k_1} \delta \varepsilon_1^2. \end{align*} This bound can be summed. \\ In the case where $W = V,$ we write an extra line using Bernstein's inequality: \begin{align*} \Vert \eqref{B2bis} \Vert_{L^2_x} & \lesssim 1.1^k 1.1^{-2k_1} \Vert V_{k_1} \Vert_{L^2_x} \Vert tu \Vert_{L^{\infty}_t L^6_x} \Vert u \Vert_{L^2_t L^6 _x} \\ & \lesssim 1.1^{k-k_1} \Vert V_{k_1} \Vert_{L^{6/5}_x} \Vert tu \Vert_{L^{\infty}_t L^6_x} \Vert u \Vert_{L^2_t L^6 _x} \\ & \lesssim 1.1^{k-k_1} \delta \varepsilon_1^2. 
\end{align*} \end{proof} Now we come to the terms that appear in the iteration procedure in the case $\vert k-k_1 \vert \leqslant 1:$ \begin{lemma} We have the estimates \begin{align*} \Vert \eqref{c22} \Vert_{L^2_x} & \lesssim \delta \varepsilon_1 \\ \Vert \eqref{c23} \Vert_{L^2_x} & \lesssim \delta \varepsilon_1 \\ \Vert \eqref{c24} \Vert_{L^2_x} & \lesssim \delta \varepsilon_1. \end{align*} \end{lemma} \begin{proof} In the case where $W_1=V$ these estimates have been proved in \cite{L}, Lemma 5.6. Therefore we only give the proof in the case where $W_1=a_i$ here. \\ We use Strichartz estimates, the bilinear Lemma \ref{bilin} and Lemma \ref{summation} (with $c=1/4$) to write that \begin{align*} \Vert \eqref{c22} \Vert & \lesssim \Bigg \Vert \mathcal{F}^{-1}_{\xi} \int_{\eta_1} \frac{\xi_l \eta_{1,j}\eta_{1,i}}{\vert \eta_1 \vert^2} \partial_{\eta_{1,j}} \widehat{a_i} (\xi - \eta_1) e^{-is \vert \eta_1 \vert^2} \widehat{f_{k_1}}(t,\eta_1) d\eta_1 \Bigg \Vert_{L^{2}_t L^{6/5}_x} \\ & \lesssim 1.1^{k} \Vert e^{it \Delta} f_{k_1} \Vert_{L^{2}_t L^{6}_x} \Vert x_j a_i(x) \Vert_{L^{3/2}_x} \\ & \lesssim 1.1^{-k_1} \delta \varepsilon_1, \end{align*} which is good enough if $k_1>0.$ \\ Otherwise if $k_1 \leqslant 0$ we write, using that $\vert k-k_1 \vert \leqslant 1,$ \begin{align*} \Vert \eqref{c22} \Vert & \lesssim \Bigg \Vert \mathcal{F}^{-1}_{\xi} \int_{\eta_1} \frac{\xi_l \eta_{1,i}\eta_{1,j}}{\vert \eta_1 \vert^2} \partial_{\eta_{1,j}} \widehat{a_i} (\xi - \eta_1) e^{-is \vert \eta_1 \vert^2} \widehat{f_{k_1}}(t,\eta_1) d\eta_1 \Bigg \Vert_{L^{2}_t L^{6/5}_x} \\ & \lesssim 1.1^{k} \Vert e^{it \Delta} f_{k_1} \Vert_{L^{2}_t L^{6}_x} \Vert x_j a_i(x) \Vert_{L^{3/2}_x} . \end{align*} The proofs for the terms \eqref{c23} and \eqref{c24} are similar, so the details are omitted. \end{proof} Finally we recall two bounds on the bilinear terms \eqref{M7}, \eqref{M8} that were proved in \cite{L}, Lemmas 5.14, 5.15 and 5.16.
\begin{lemma} We have the estimates \begin{align*} \Vert \eqref{M7},\eqref{M8} \Vert_{L^2} \lesssim \varepsilon_1 ^2. \end{align*} \end{lemma} \subsection{Potential terms} It remains to estimate \eqref{restediff}. Note that the estimate on \eqref{restefac} follows directly from this one by taking $t=1.$ \\ Since in the case where the potential is magnetic there is a derivative loss to deal with, we must use smoothing estimates. \\ The higher order iterates might involve both types of potentials, therefore we need a unified way to deal with such terms. Hence we also give a proof in the case where $W=V$, although the bound has already been established in \cite{L}. That is the content of the following lemma. \begin{lemma} We have \begin{align*} \Vert \eqref{restediff} \Vert_{L^2} \lesssim \delta \varepsilon_1 \end{align*} when $W = V.$ \end{lemma} \begin{proof} We use the following identity: \begin{align}\label{identit} \frac{1}{\vert \xi \vert^2 - \vert \eta \vert^2 + i \beta} =(-i) \int_0 ^{\infty} e^{i \tau_1 ( \vert \xi \vert^2 - \vert \eta \vert^2 + i \beta) } d\tau_1 \end{align} and plug it back into \eqref{restediff}.
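This identity is an elementary computation; it is precisely here that the regularization $\beta>0$ is used, since the factor $e^{-\tau_1 \beta}$ makes the integral absolutely convergent. For completeness:

```latex
% Verification of the resolvent-type identity, valid for any beta > 0:
\begin{align*}
(-i) \int_0 ^{\infty} e^{i \tau_1 ( \vert \xi \vert^2 - \vert \eta \vert^2 + i \beta)} d\tau_1
= (-i) \left[ \frac{e^{i \tau_1 ( \vert \xi \vert^2 - \vert \eta \vert^2 + i \beta)}}{i (\vert \xi \vert^2 - \vert \eta \vert^2 + i \beta)} \right]_{\tau_1 = 0}^{\infty}
= \frac{1}{\vert \xi \vert^2 - \vert \eta \vert^2 + i \beta},
\end{align*}
```

where the boundary term at infinity vanishes because $\vert e^{i \tau_1 ( \vert \xi \vert^2 - \vert \eta \vert^2 + i \beta)} \vert = e^{-\tau_1 \beta} \to 0$ as $\tau_1 \to \infty$.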
\\ We bound the outcome using Strichartz estimates and the bilinear estimates from Lemma \ref{bilin}: \begin{align*} & \Bigg \Vert P_k(\xi) \int_{0}^{\infty} e^{i \tau_1 \vert \xi \vert^2} \int_{\mathbb{R}^3} \frac{\eta_{1,j}\xi_l}{\vert \eta_1 \vert^2} e^{-i\tau_1 \vert \eta_1 \vert^2} e^{-it\vert \eta_1 \vert^2} e^{-\beta \tau_1} \widehat{V}(\xi-\eta_1) \partial_{\eta_{1,j}} \widehat{f}(t,\eta_1) P_{k_1}(\eta_1) d\eta_1 d\tau_1 \Bigg \Vert_{L^2_x} \\ & \lesssim 1.1^{k} \Bigg \Vert e^{-\beta \tau_1} \mathcal{F}^{-1} \int_{\mathbb{R}^3} \frac{\eta_{1,j}}{\vert \eta_1 \vert^2} \widehat{V}(\xi-\eta_1) e^{-i (t+\tau_1) \vert \eta_1 \vert^2} \big( \partial_{\eta_{1,j}} \widehat{f}(t,\eta_1) \big) P_{k_1}(\eta_1) d\eta_1 \Bigg \Vert_{L^{2}_{\tau_1} L^{6/5}_x} \\ & \lesssim 1.1^k \Bigg \Vert \mathcal{F}^{-1} \int_{\mathbb{R}^3} \frac{\eta_{1,j}}{\vert \eta_1 \vert^2} \widehat{V}(\xi-\eta_1) e^{-i (t+\tau_1) \vert \eta_1 \vert^2} \big( \partial_{\eta_{1,j}} \widehat{f}(t,\eta_1) \big) P_{k_1}(\eta_1) d\eta_1 \Bigg \Vert_{L^{2}_{\tau_1} L^{6/5}_x} \\ & \lesssim \Vert V \Vert_{L^{3/2}_x} \bigg \Vert e^{i\tau_1 \Delta} \mathcal{F}^{-1} \big(e^{-it\vert \eta_1 \vert^2} \partial_{\eta_{1,j}} \widehat{f}(t,\eta_1) P_{k_1}(\eta_1) \big) \bigg \Vert_{L^{2}_{\tau_1} L^{6}_x}. \end{align*} We can conclude with Strichartz estimates from Lemma \ref{Strichartz}, Lemma \ref{X'} and the fact that the Schr\"{o}dinger semi-group is an isometry on $L^2 _x$: \begin{align*} \bigg \Vert e^{i\tau_1 \Delta} \mathcal{F}^{-1} \big(e^{-it \vert \eta_1 \vert^2} \partial_{\eta_{1,j}} \widehat{f}(t,\eta_1) P_{k_1}(\eta_1) \big) \bigg \Vert_{L^{2}_{\tau_1} L^{6}_x} & \lesssim \bigg \Vert \mathcal{F}^{-1} \big(e^{-it \vert \eta_1 \vert^2} \partial_{\eta_{1,j}} \widehat{f}(t,\eta_1) P_{k_1}(\eta_1) \big) \bigg \Vert_{L^{2}_x} \\ & \lesssim \Vert f \Vert_{X'} \\ & \lesssim \varepsilon_1.
\end{align*} Note that the bound is uniform in $\beta.$ \end{proof} Now we explain how to bound the magnetic Schr\"{o}dinger term. This is the main difficulty of the paper, and it is where a new ingredient has to be introduced compared to the earlier paper \cite{L}: we replace the boundedness of wave operators by the smoothing estimates from Section \ref{recallsmoothing}. \begin{lemma} We have the following bound: \begin{align*} \Vert \eqref{restediff} \Vert_{L^2 _x} \lesssim \delta \varepsilon_1 \end{align*} when $W = a_i$. \end{lemma} \begin{proof} We use the identity \eqref{identit} and plug it in \eqref{restediff}. \\ We split the integral according to the dominant direction of $\xi$ using Lemma \ref{direction}. \\ We will therefore estimate \begin{align} \label{restediffb} & \chi_{j}(\xi) \xi_l P_k (\xi) \int_{0} ^{\infty} e^{-\tau_1 \beta} e^{i\tau_1 \vert \xi \vert^2} \int_{\mathbb{R}^3} e^{-i\tau_1 \vert \eta_1 \vert^2} \eta_{1,i} \widehat{a_i}(\xi-\eta_1) \frac{\eta_{1,j}}{\vert \eta_1 \vert^2} e^{-it\vert \eta_1 \vert^2} \partial_{\eta_{1,j}} \widehat{f}(t,\eta_1) P_{k_1}(\eta_1) d\eta_1 d\tau_1 \\ \notag & = \chi_{j}(\xi) \frac{\xi_l}{\vert \xi_j \vert^{1/2}} P_k(\xi) \vert \xi_j \vert^{1/2} \int_{0} ^{\infty} e^{-\tau_1 \beta} e^{i\tau_1 \vert \xi \vert^2} \int_{\mathbb{R}^3} e^{-i\tau_1 \vert \eta_1 \vert^2} \vert \eta_{1,i} \vert^{1/2} \widehat{a_i}(\xi-\eta_1) \widehat{g_{k_1}}(\eta_1) d\eta_1 d\tau_1 , \end{align} where \begin{align*} \widehat{g_{k_1}}(\eta_1) = \frac{\eta_{1,j} \vert \eta_{1,i}\vert ^{1/2}}{\vert \eta_1 \vert^2} \frac{\eta_{1,i}}{\vert \eta_{1,i} \vert} e^{-it\vert \eta_1 \vert^2} \partial_{\eta_{1,j}} \widehat{f}(t,\eta_1) P_{k_1}(\eta_1).
\end{align*} Therefore, using Young's inequality, \eqref{smo2} from Lemma \ref{smoothing}, Lemma \ref{mgnmgnfin} and Lemma \ref{X'}, we find that \begin{align*} \Vert \eqref{restediffb} \Vert_{L^2_x} & \lesssim \big \Vert \mathcal{F}^{-1}_{\xi} \big( \chi_{j}(\xi) \frac{\xi_l}{\vert \xi_j \vert^{1/2}} P_k(\xi) \big) \big \Vert_{L^1_x} \\ & \times \Bigg \Vert D_{x_j}^{1/2} \int_{0}^{\infty} e^{-i \tau_1 \Delta} \bigg( e^{-\tau_1 \beta} a_i (x) D_{x_i}^{1/2} e^{i\tau_1 \Delta} \big( g_{k_1} \big) \bigg) d\tau_1 \Bigg \Vert_{L^2 _x} \\ & \lesssim 1.1^{k/2} \bigg \Vert a_i (x) D_{x_i}^{1/2} e^{i\tau_1 \Delta} \big( g_{k_1} \big) \bigg \Vert_{L^1_{x_j} L^{2}_{\tau_1,\widetilde{x_j}}} \\ & \lesssim 1.1^{k/2} \delta \Vert g_{k_1} \Vert_{L^2_x} \\ & \lesssim 1.1^{\frac{k-k_1}{2}} \delta \varepsilon_1 . \end{align*} \end{proof} \section{Multilinear terms} \label{boundmultilinear} In this section we prove Proposition \ref{nthestimate}. The estimates are based on key multilinear lemmas proved in the first subsection. We then use them to bound the iterates in the following subsection. \subsection{Multilinear lemmas} \label{multilinkey} We will start by proving multilinear lemmas that essentially allow us to reduce estimating the $n$-th iterates to estimating the first iterates. We distinguish between low and high output frequencies. We note that the case of main interest is that of high frequencies, since otherwise the loss of derivative is not a threat. However, the proofs are slightly different in the two cases, hence the need to separate them. Besides, the case of low output frequencies is essentially analogous to the case of non-magnetic potentials. Therefore it is possible to see this part of the argument as an alternative proof of the result of \cite{L}. \subsubsection{The case of large output frequency} We start with the main multilinear lemma of the paper. The other lemmas of this section will be variations of it.
\begin{lemma} \label{multilinear} Assume that $k>0.$ \\ We have the bound \begin{align} \label{multimain} &\Bigg \Vert \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-1} \widehat{g_{k_n}}(\eta_n) d\eta_n \Bigg \Vert_{L^2} \\ & \lesssim q(\max K) C^n \delta^{n} \Vert g \Vert_{L^2} \prod_{\gamma \in J^{+}} 1.1^{-k_{\gamma}} \times \prod_{\gamma \in J^{-}} 1.1^{-k} 1.1^{\epsilon k_{\gamma}} , \end{align} where $J^{+}= \lbrace j \in [[1;n ]]; k_j > k+1 \rbrace, \ \ J^{-}= \lbrace j \in [[1;n ]]; k > k_j+1 \rbrace, \ \ J = J^{+} \cup J^{-}.$ \\ $K$ denotes the complement of $J,$ $\epsilon$ denotes a number strictly between 0 and 1 and \begin{displaymath} q(\max K) = \begin{cases} 1 & \textrm{if} ~~~~ \alpha_{\max K} = 1 \\ 1.1^{k_{\max K}/2} & \textrm{otherwise}. \end{cases} \end{displaymath} Finally the implicit constant in the inequality does not depend on $n.$ \end{lemma} \begin{remark} Recall that by convention $\eta_0 = \xi.$ \end{remark} \begin{remark} The role of the products on elements of $J$ is to ensure that we can sum over $k_j, j \in J.$ \end{remark} \begin{remark} \label{beta} We will sometimes denote in the proof \begin{displaymath} \beta_{\gamma} = \begin{cases} 0 & \textrm{if} ~~~~ W_{\gamma} = V \\ 1 & \textrm{otherwise}. 
\end{cases} \end{displaymath} \end{remark} \begin{proof} If $\gamma \in K$ then we write (recall that such terms have been regularized, see \eqref{c21beta}) \begin{align}\label{identite} \frac{1}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2 + i\beta} =(-i) \int_{0} ^{\infty} e^{i \tau_{r(\gamma)}(\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2 + i \beta)} d\tau_{r({\gamma})} \end{align} where $r(\gamma)$ denotes the number given to $\gamma$ in the enumeration of the elements of $K$ (if $\gamma$ is the second smallest element of $K$ then $r(\gamma) = 2$, for example). \\ Our goal is to prove a bound that is uniform in $\beta$. Therefore for legibility we drop the terms $ e^{-\tau_{r(\gamma)} \beta} $ in the expressions above (they are systematically bounded by 1 in the estimates). \\ We obtain the following expression \begin{align*} \eqref{multimain} &=(-i)^{\vert K \vert} \int \prod_{ \gamma \in J} \alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma}) m_{\gamma}(\xi,\eta_{\gamma}) \\ & \times \prod_{\gamma \in K} \int_{0}^{\infty} e^{i \tau_{r(\gamma)}(\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2)} d \tau_{r(\gamma)} \alpha_{\gamma}(\eta_{\gamma}) \widehat{W_{\gamma}}(\eta_{\gamma-1} - \eta_{\gamma}) \widehat{g_{k_n}}(\eta_n) d\eta_1 ... d\eta_n \end{align*} where \begin{align*} m_{\gamma}(\xi,\eta) = \frac{P_k(\xi) P_{k_{\gamma}}(\eta)}{\vert \xi \vert^2 - \vert \eta \vert^2}. \end{align*} Now we perform the following change of variables: \begin{align*} \tau_1 & \leftrightarrow \tau_1 + \tau_2 + ... \\ \tau_2 & \leftrightarrow \tau_2+ \tau_3 + ... \\ ...
\end{align*} The expression becomes \begin{align*} \eqref{multimain} &=(-i)^{\vert K \vert} \int_{\tau_1} e^{i\tau_1 \vert \xi \vert^2} \Bigg( \int \prod_{ \gamma \in J} \alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma}) m_{\gamma}(\xi,\eta_{\gamma}) \\ & \times \prod_{\gamma \in K, \gamma \neq \max K} \int_{\tau_{r(\gamma)+1} \leqslant \tau_{r(\gamma)}} e^{-i (\tau_{r(\gamma)} - \tau_{r(\gamma)+1} )\vert \eta_{\gamma} \vert^2} \alpha_{\gamma}(\eta_{\gamma}) \widehat{W_{\gamma}}(\eta_{\gamma-1} - \eta_{\gamma}) \\ & \times e^{-i \tau_{r(\max K)} \vert \eta_{\max K} \vert^2} \alpha_{\max K}(\eta_{\max K}) \widehat{W_{\max K}}(\eta_{\max K-1} -\eta_{\max K}) \widehat{g_{k_n}}(\eta_n) d\eta \Bigg) d\tau_1. \end{align*} \underline{Case 1: $1 \in K,$ $\alpha_1 (\eta_1) = \eta_{1,i}, W_1 = a_i$}\\ We isolate the first term in the product: \begin{align*} \eqref{multimain} & = (-i)^{\vert K \vert} \int_{\tau_1} e^{i\tau_1 \vert \xi \vert^2} \int_{\eta_1} \widehat{a_i}(\xi-\eta_1) \eta_{1,i} e^{-i\tau_1 \vert \eta_1 \vert^2} P_{k_1}(\eta_1) \int_{\tau_2 \leqslant \tau_1} e^{i \tau_2 \vert \eta_1 \vert^2} \Bigg( \int \prod_{ \gamma \in J} \alpha_{\gamma}(\eta_{\gamma}) \\ & \times \widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma}) m_{\gamma}(\xi,\eta_{\gamma}) \prod_{\gamma \in K, \gamma \neq 1, \gamma \neq \max K} \int_{\tau_{r(\gamma)+1}\leqslant \tau_{r(\gamma)}} e^{-i (\tau_{r(\gamma)} - \tau_{r(\gamma)+1} )\vert \eta_{\gamma} \vert^2} \alpha_{\gamma}(\eta_{\gamma}) \widehat{W_{\gamma}}(\eta_{\gamma-1} - \eta_{\gamma}) \\ & \times e^{-i \tau_{r(\max K)} \vert \eta_{\max K} \vert^2} \alpha_{\max K}(\eta_{\max K}) \widehat{W_{\max K}}(\eta_{\max K-1} -\eta_{\max K})\widehat{g_{k_n}}(t,\eta_n) d\eta \Bigg) d\tau_2 d\eta_1 d\tau_1. 
\end{align*} Now we take an inverse Fourier transform in $\xi.$ The terms in the expression above that contain $\xi$ are the first $a_i,$ and all the $m_{\gamma}$ for $\gamma \in J.$ \\ To simplify notations we denote \begin{align*} F_1(y_1,...,y_r,\eta_1)& =\int \prod_{ \gamma \in J} \alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma}) \check{m}_{\gamma}(y_{r(\gamma)},\eta_{\gamma}) \\ \notag & \times \prod_{\gamma \in K, \gamma \neq 1, \gamma \neq \max K} \int_{\tau_{r(\gamma)+1}\leqslant \tau_{r(\gamma)}} e^{-i (\tau_{r(\gamma)} - \tau_{r(\gamma)+1} )\vert \eta_{\gamma} \vert^2 } \alpha_{\gamma}(\eta_{\gamma}) \widehat{W_{\gamma}}(\eta_{\gamma-1} - \eta_{\gamma}) \\ & \times e^{-i \tau_{r(\max K)} \vert \eta_{\max K} \vert^2} \alpha_{\max K}(\eta_{\max K}) \widehat{W_{\max K}}(\eta_{\max K-1} -\eta_{\max K})\widehat{g_{k_n}}(\eta_n) d\widetilde{\eta_1}, \end{align*} where $r = \vert J \vert.$ \\ Here we abusively wrote $\check{m}$ for the inverse Fourier transform with respect to the first variable only. \\ Also, as for elements of $K,$ for $\gamma \in J$ we write $r(\gamma)$ for the number assigned to $\gamma$ in the enumeration of the elements of $J.$ \\ We can then write \begin{align*} \mathcal{F}^{-1}_{\xi} \eqref{multimain} &=(2 \pi)^{3r} \int_{\tau_1} e^{i \tau_1 \Delta} \Bigg( \int_{y_r, r \in J} a_i(x-y_1-...-y_r) \int_{\eta_1} \eta_{1,i} e^{i\eta_1 \cdot(x-y_1-...-y_r)} e^{-i\tau_1 \vert \eta_1 \vert^2} \\ & \times P_{k_1}(\eta_1) \int_{\tau_2 \leqslant \tau_1} e^{i\tau_2 \vert \eta_1 \vert^2} F_1(y_1, ...,y_r,\eta_1) d\tau_2 d\eta_1 dy_1 ... dy_r \Bigg) d\tau_1.
\end{align*} Using Strichartz estimates from Lemma \ref{Strichartz} we write \begin{align*} \Vert \eqref{multimain} \Vert_{L^2_x} & \lesssim (2 \pi)^{3r} \Bigg \Vert \int_{y_r, r \in J} a_i(x-y_1-...-y_r) \int_{\tau_2 \leqslant \tau_1}\int_{\eta_1} \eta_{1,i} e^{i\eta_1 \cdot(x-y_1-...-y_r)} e^{-i\tau_1 \vert \eta_1 \vert^2} \\ & \times P_{k_1}(\eta_1) e^{i\tau_2 \vert \eta_1 \vert^2} F_1(y_1, ...,y_r,\eta_1) d\tau_2 d\eta_1 dy_1 ... dy_r \Bigg \Vert_{L^2_{\tau_1} L^{6/5}_{x}}. \end{align*} Now we estimate the right-hand side by duality. Consider $h(x,\tau_1) \in L^{2}_{\tau_1} L^{6}_{x}.$ \\ To simplify notations further, we denote \begin{align*} \widetilde{F_1}(\tau_1, y_1,...,y_r,\eta_1) = \int_{\tau_2 \leqslant \tau_1} e^{i\tau_2 \vert \eta_1 \vert^2} F_1(y_1, ...,y_r,\eta_1) d\tau_2. \end{align*} We pair the expression above against $h$, put the $x$ integral inside, change variables ($x \leftrightarrow x-y_1-... - y_r$) and use the Cauchy-Schwarz inequality in $\tau_1$ and then in $x_j:$ \begin{align} \label{multi1} & \Bigg \vert \int_x \int_{\tau_1} \int_{y_r, r \in J} a_i(x-y_1-...-y_r) \int_{\eta_1} e^{i\eta_1 \cdot(x-y_1-...-y_r)} \\ \notag& \times e^{-i\tau_1 \vert \eta_1 \vert^2} \eta_{1,i} P_{k_1}(\eta_1) \widetilde{F_1}(y_1,...,y_r,\eta_1) d\eta_1 h(x, \tau_1) dx d\tau_1 \Bigg \vert \\ \notag&=(2\pi)^3 \Bigg \vert \int_{y_r, r \in J} \int_x a_i(x) D_{x_i} \int_{\tau_1} e^{-i\tau_1 \Delta} \mathcal{F}^{-1}_{\eta_1} \big(P_{k_1}(\eta_1) \widetilde{F_1}(y_1,...,y_r,\eta_1) \big) h(x+y_1+...+y_r, \tau_1) d\tau_1 dx \Bigg \vert. \end{align} Now we can reproduce the proof of Lemma \ref{potmgn} (replacing $h$ by $h$ translated in space by a fixed vector). We find that \begin{align*} \vert \eqref{multi1} \vert & \lesssim \delta \int \Vert h \Vert_{L^2_{\tau_1} L^6_x} \Vert D_{x_i} e^{i\tau_1 \Delta} \mathcal{F}^{-1} \big(P_{k_1}(\eta_1) \widetilde{F_1}(y_1,...,y_r,\eta_1) \big) \Vert_{L^{\infty}_{x_j}L^2_{\tau_1,\widetilde{x_j}}} dy_1 ... dy_r.
\end{align*} Now note that \begin{align*} D_{x_i} e^{i\tau_1 \Delta} \mathcal{F}^{-1}_{\eta_1} \big(P_{k_1}(\eta_1) \widetilde{F_1}(y_1,...,y_r,\eta_1) \big) & = D_{x_i} \int_{\tau_2 \leqslant \tau_1} e^{i(\tau_1-\tau_2)\Delta} \mathcal{F}^{-1}_{\eta_1} \big( P_{k_1}(\eta_1) F_1 \big) d\tau_2. \end{align*} Hence using the inhomogeneous smoothing estimate from Lemma \ref{smoothing} we obtain \begin{align*} \vert \eqref{multi1} \vert &\lesssim \delta \int \bigg \Vert \mathcal{F}^{-1}_{\eta_1} F_1(y_1,...,y_r,\eta_1) \bigg \Vert_{L^1_{x_j} L^2_{\tau_2, \widetilde{x_j}}} dy_1 ... dy_r. \end{align*} Now we consider several subcases: \\ \underline{Subcase 1.1: $2 \in K, \alpha_2(\eta_2)=\eta_{2,k}$} \\ Let's first assume that $2 \neq \max K.$ \\ Then notice that with similar notations as before, \begin{align*} F_1(\tau_2, \eta_1) = \int_{\eta_2} \widehat{a_k}(\eta_1-\eta_2) \eta_{2,k} \int_{\tau_3 \leqslant \tau_2} e^{-i(\tau_2-\tau_3)\vert \eta_2 \vert^2}P_{k_2}(\eta_2) F_{2}(\eta_2,\tau_3) d\tau_3 d\eta_2. \end{align*} Hence \begin{align*} \mathcal{F}^{-1}_{\eta_1} F_1 = (2\pi)^3 a_k (x) D_{x_k} \int_{\tau_3 \leqslant \tau_2} e^{i(\tau_2-\tau_3)\Delta} \mathcal{F}^{-1}_{\eta_2}\big( P_{k_2}(\eta_2) F_2(\eta_2,\tau_3) \big) d\tau_3, \end{align*} therefore we can use Lemma \ref{mgnmgn} to conclude that \begin{align*} \Vert \mathcal{F}^{-1}_{\eta_1} F_1 \Vert_{L^1_{x_j} L^2_{\tau_2,\widetilde{x_j}}} & \leqslant C \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^1_{x_k} L^2_{\tau_3,\widetilde{x_k}}}. \end{align*} Now in the case where $2 = \max K$ we have \begin{align*} F_1 (\tau_2,\eta_1) = \int_{\eta_2} \widehat{a_k}(\eta_1-\eta_2) \eta_{2,k} e^{-i\tau_2 \vert \eta_2 \vert^2} \big(P_{k_2}(\eta_2) F_{2}(\eta_2) \big) d\eta_2.
\end{align*} By a similar reasoning, we can write, using Lemma \ref{mgnmgnfin} \begin{align*} \Vert \mathcal{F}^{-1}_{\eta_1} F_1 \Vert_{L^1_{x_j} L^2_{\tau_2,\widetilde{x_j}}} & \leqslant 1.1^{k/2} C \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^2_{x}}. \end{align*} This is where the $q$ term in the result comes from. \\ \\ \underline{Subcase 1.2: $2 \in K, \alpha_2(\eta_2) = 1 $} \\ Assume first that $2 \neq \max K.$ \\ Here we have \begin{align*} \mathcal{F}^{-1} F_1 (\tau_2) =(2\pi)^3 V (x) \int_{\tau_3 \leqslant \tau_2} e^{i(\tau_2-\tau_3)\Delta} \mathcal{F}^{-1}_{\eta_2} \big( P_{k_2}(\eta_2) F_2(\eta_2,\tau_3) \big) d\tau_3, \end{align*} therefore using Lemma \ref{mgnpot} we obtain \begin{align*} \bigg \Vert V (x) \int_{\tau_3 \leqslant \tau_2} e^{i(\tau_2-\tau_3)\Delta} \mathcal{F}^{-1}_{\eta_2} \big(P_{k_2}(\eta_2) F_2(\eta_2,\tau_3) \big) d\tau_3 \bigg \Vert_{L^1_{x_j} L^2_{\tau_2,\widetilde{x_j}}} \leqslant C \delta \Vert \mathcal{F}^{-1} _{\eta_2} F_2 \Vert_{L^2_{\tau_3} L^{6/5}_x}. \end{align*} Now if $2=\max K,$ we have \begin{align*} \mathcal{F}^{-1} F_1 (\tau_2) = V (x) e^{i \tau_2\Delta} \bigg( \mathcal{F}^{-1}_{\eta_2} \big(P_{k_2}(\eta_2) F_2(\eta_2)\big) \bigg). \end{align*} Hence we obtain, using Lemma \ref{mgnpotfin} \begin{align*} \bigg \Vert V (x) e^{i \tau_2\Delta} \bigg( \mathcal{F}^{-1}_{\eta_2} \big(P_{k_2}(\eta_2) F_2(\eta_2)\big) \bigg) \bigg \Vert_{L^1_{x_j} L^2_{\tau_2,\widetilde{x_j}}} \leqslant C \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^{2}_x}. 
\end{align*} \underline{Subcase 1.3: $2 \in J$} \\ We consider two subcases: \\ \underline{Subcase 1.3.1: $2 \in J^{+}$} \\ In this case we conclude using Lemma \ref{bilinit} that \begin{align*} \int \Vert \mathcal{F}^{-1} F_1 \Vert_{L^1 _{x_j} L^{2}_{\tau_2,\widetilde{x_j}}} dy_1 & \leqslant C_0 \Vert \check{m_2} \Vert_{L^1} \Vert W_2 \Vert_{L^{\infty}} \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^1 _{x_j} L^{2}_{\tau_2,\widetilde{x_j}}} \\ & \leqslant C 1.1^{(\beta_2-2) k_2} \Vert W_2 \Vert_{L^{\infty}} \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^1 _{x_j} L^{2}_{\tau_2,\widetilde{x_j}}}, \end{align*} where as defined in Remark \ref{beta}, $\beta_2 = 0$ or $\beta_2 = 1$ depending on whether $W_2 = V$ or $W_2 = a_i.$ \\ \underline{Subcase 1.3.2: $2 \in J^{-} $} \\ In this case we use the third inequality in Lemma \ref{bilinit} and Bernstein's inequality to write \begin{align*} \int \Vert \mathcal{F}^{-1}_{\eta_1} F_1 \Vert_{L^1 _{x_j} L^{2}_{\tau_2,\widetilde{x_j}}} dy_1 & \leqslant C_0 1.1^{\beta_2 k_2} 1.1^{-2k} \Vert W_2 \Vert_{L^{\infty}_{x_j} L^{\frac{2+2c}{c}}_{\widetilde{x_j}} } \Vert \mathcal{F}^{-1} (F_2)_{k_2} \Vert_{L^1 _{x_j} L^{2(1+c)}_{\widetilde{x_j}} L^{2}_{\tau_2}} \\ & \leqslant 1.1^{-k} C \delta 1.1^{\epsilon k_2} \Vert \mathcal{F}^{-1} F_2 \Vert_{L^1 _{x_j} L^{2}_{\tau_2,\widetilde{x_j}}}. \end{align*} \underline{Case 2: $1 \in K,$ $\alpha_1(\eta_1)=1$} \\ This case is similar to the previous one, but we use Strichartz estimates instead of smoothing effects. \\ By Plancherel's theorem, Minkowski's inequality and H\"{o}lder's inequality, we have \begin{align*} \Vert \eqref{multimain} \Vert_{L^2 _x} & \lesssim \Bigg \Vert \mathcal{F}^{-1} \bigg(\int \prod_{ \gamma \in J} \alpha_{\gamma}(\eta_{\gamma}) \ \ ...
\ \ d\eta_n d\eta_{n-1} ds \bigg) \Bigg \Vert_{L^2_x} \\ & =(2 \pi)^{3r} \Bigg \Vert \int_{\tau_1} e^{i \tau_1 \Delta} \bigg( \int_{y_1,...,y_r} V(x-y_1-...-y_r) \\ & \times \int_{\eta_1} e^{i\eta_1 \cdot(x-y_1-...-y_r)} e^{-i\tau_1 \vert \eta_1 \vert^2} P_{k_1}(\eta_1) \widetilde{F_1}(y_1,...,y_r,\eta_1) d\eta_1 \bigg) d\tau_1 \Bigg \Vert_{L^2_x} \\ & \lesssim (2 \pi)^{3r} \int_{y_1,...,y_r} \Bigg \Vert V(x) e^{i\tau_1 \Delta} \mathcal{F}^{-1}\big( P_{k_1}(\eta_1) \widetilde{F_1} \big)(x) \Bigg \Vert_{L^{2}_{\tau_1} L ^{6/5}_x} dy_1 ... dy_r \\ & \lesssim (2 \pi)^{3r} \int_{y_1,...,y_r} \Vert V \Vert_{L^{3/2}_x} \Vert e^{i \tau_1 \Delta} \mathcal{F}^{-1}_{\eta_1} \big( P_{k_1}(\eta_1) \widetilde{F_1} \big) \Vert_{L^2_{\tau_1} L^{6} _x} dy_1 ... dy_r \\ & \lesssim (2 \pi)^{3r} \delta \int_{y_1,...,y_r} \bigg \Vert \int_{\tau_2 \leqslant \tau_1} e^{i(\tau_1-\tau_2)\Delta} \big( \mathcal{F}^{-1}_{\eta_1} F_1 \big)_{k_1} \bigg \Vert_{L^2_{\tau_1}L^{6}_x} dy_1 ... dy_r \\ & \lesssim (2 \pi)^{3r} \delta \int_{y_1,...,y_r} \Vert \mathcal{F}^{-1}_{\eta_1} F_1 \Vert_{L^2_{\tau_2} L^{6/5}_x} dy_1 ... dy_r. \end{align*} Now distinguish several subcases: \\ \underline{Subcase 2.1: $2 \in K, \alpha_2(\eta_2)=1$ } \\ Assume that $2 \neq \max K.$ \\ In this case we can use Lemma \ref{potpot} and obtain \begin{align*} \Vert \mathcal{F}^{-1}_{\eta_1} F_1 \Vert_{L^2_{\tau_2} L^{6/5}_x} \leqslant C \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^2_{\tau_3} L^{6/5}_x}. \end{align*} In the case where $2 = \max K$ then we use Lemma \ref{potpotfin} to obtain \begin{align*} \Vert \mathcal{F}^{-1}_{\eta_1} F_1 \Vert_{L^2_{\tau_2} L^{6/5}_x} \leqslant C \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^{2}_x}. 
\end{align*} \underline{Subcase 2.2: $2 \in K, \alpha_2(\eta_2) = \eta_{2,l} $} \\ We assume first that $2 \neq \max K.$ \\ In this case we use Lemma \ref{potmgn} and obtain \begin{align*} \bigg \Vert a_l (x) D_{x_l} \int_{\tau_3 \leqslant \tau_2} e^{i(\tau_2-\tau_3)\Delta} \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2,\tau_3) d\tau_3 \bigg \Vert_{L^2_{\tau_2} L^{6/5}_x} \leqslant C \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2,\tau_3) \Vert_{L^1_{x_l} L^2_{\tau_3,\widetilde{x_l}}}. \end{align*} Now we treat the case where $2 = \max K.$ \\ We use Lemma \ref{mgnpotfin} and write \begin{align*} \Vert a_l (x) D_{x_l} e^{i\tau_2 \Delta} \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2) \Vert_{L^2_{\tau_2} L^{6/5}_x} \leqslant C 1.1^{k/2} \delta \Vert \mathcal{F}^{-1}_{\eta_2} F_2(\eta_2) \Vert_{L^2_{x}}. \end{align*} \underline{Subcase 2.3: $ 2 \in J $} \\ \underline{Subcase 2.3.1: $2 \in J^{+} $} \\ In this case we use Lemma \ref{bilinit} to write that \begin{align*} \int_{y_1} \Vert \mathcal{F}^{-1}_{\eta_1} F_1 \Vert_{L^2 _{\tau_2} L^{6/5}_x} dy_1 & \leqslant C_0 \Vert \check{m_2} \Vert_{L^1} \Vert W_2 \Vert_{L^{\infty}} \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^2 _{\tau_2} L^{6/5} _x} \\ & \leqslant C 1.1^{- k_2} \Vert W_2 \Vert_{L^{\infty}} \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^2 _{\tau_2} L^{6/5} _x} . \end{align*} \underline{Subcase 2.3.2: $2 \in J^{-} $} \\ In this case we use Lemma \ref{bilinit} as well as Bernstein's inequality to write that \begin{align*} \int_{y_1} \Vert \mathcal{F}^{-1}_{\eta_1} F_1 \Vert_{L^2 _{\tau_2} L^{6/5}_x} dy_1 & \leqslant C_0 \Vert \check{m_2} \Vert_{L^1} \Vert W_2 \Vert_{L^{\frac{6+6c}{5c}}_x} \Vert \big( \mathcal{F}^{-1}_{\eta_2} F_2 \big)_{k_2} \Vert_{L^2 _{\tau_2} L^{\frac{6}{5}(1+c)} _x} \\ & \leqslant C 1.1^{-k} \Vert W_2 \Vert_{L^{\frac{6+6c}{5c}}_x} 1.1^{\epsilon k_2} \Vert \mathcal{F}^{-1}_{\eta_2} F_2 \Vert_{L^2 _{\tau_2} L^{6/5} _x}.
\end{align*} \underline{Conclusion in cases 1 and 2:} In all subcases we reduced the problem to estimating $\mathcal{F}^{-1}_{\eta_2} F_2$ in either $L^2_{\tau_2} L^{6/5}_x$ or $L^1_{x_j} L^2_{\tau_2,\widetilde{x_j}}.$ Since $\mathcal{F}^{-1}_{\eta_2} F_2$ has exactly the same form as $\mathcal{F}^{-1}_{\eta_1} F_1$ but with one fewer term in the product, and since its $L^2_{\tau_2} L^{6/5}_x$ and $L^1_{x_j} L^2_{\tau_2,\widetilde{x_j}}$ norms are estimated in the same way, we obtain the desired bound by induction. \\ \\ \underline{Case 3: $1 \in J$} \\ In this case we can add a frequency localization on the first potential $W_1.$ Let's denote $k_{\max} = \max \lbrace k,k_1 \rbrace.$ \\ We write \begin{align*} \eqref{multimain} = \int_{\tau_1} e^{i \tau_1 \vert \xi \vert^2} \int_{\eta_1} \widehat{W_{1,k_{\max}}}(\xi-\eta_1) m_1(\xi,\eta_1) F_1 (y_1,...,y_r,\eta_1) d\eta_1 d\tau_1, \end{align*} with the same notation as in the previous cases. \\ Now we take the inverse Fourier transform in $\xi$ and obtain \begin{align*} \mathcal{F}^{-1}_{\xi} \eqref{multimain} &= (2\pi)^{3r} \int_{\tau_1} e^{i\tau_1 \Delta} \Bigg( \int_{y_1,...,y_r} W_{1,k_{\max}}(x-y_1-...-y_r) \\ & \times \int_{\eta_1} e^{i \eta_1 \cdot (x-y_1-...)} \check{m_1}(y_1,\eta_1) P_{k_1}(\eta_1) F_1 (y_1,...,y_r,\eta_1) d\eta_1 dy_1 ... dy_r \Bigg) d\tau_1 \\ & = (2\pi)^{3r} \int_{\tau_1} e^{i\tau_1 \Delta} \Bigg( \int_{y_1,...,y_r} W_{1,k_{\max}} (x-y_1-...-y_r) \\ & \times \mathcal{F}^{-1}_{\eta_1} \big( \check{m_1}(y_1,\eta_1) P_{k_1}(\eta_1) F_1(\eta_1) \big)(x-y_1-...) dy_1 ... dy_r \Bigg) d\tau_1. \end{align*} Now we use Strichartz estimates, Minkowski's inequality and H\"{o}lder's inequality to write that \begin{align*} \Vert \mathcal{F}^{-1}_{\xi} \eqref{multimain} \Vert_{L^2_x} & \leqslant (2\pi)^{3r} C_0 ^3 \Bigg \Vert \int_{y_1,...,y_r} W_{1,k_{\max}} (x-y_1-...-y_r) \\ & \times \mathcal{F}^{-1}_{\eta_1} \big(\check{m_1}(y_1,\eta_1) F_1(\eta_1) \big)(x-y_1-...) dy_1 ...
dy_r \Bigg \Vert_{L^{2}_{\tau_1} L^{6/5}_x } \\ & \leqslant (2\pi)^{3(r+1)} C_0^3 \Bigg \Vert \int_{y_1,...,y_r} W_{1,k_{\max}} (x-y_1-...-y_r) \int_{z} \check{m_1}(y_1,z) \\ & \times \bigg( \mathcal{F}^{-1}_{\eta_1} \big(F_1(\eta_1) \big) \bigg)_{k_1} (x-z-y_1-...) dz dy_1 ... dy_r \Bigg \Vert_{L^{2}_{\tau_1} L^{6/5}_x } . \end{align*} Now we distinguish several subcases: \\ \underline{Subcase 3.1: $1 \in J^{+} $} \\ Then we can conclude directly using Lemma \ref{bilinit} and Minkowski's inequality that \begin{align*} \Vert \mathcal{F}^{-1}_{\xi} \eqref{multimain} \Vert_{L^2_x} & \leqslant (2 \pi)^{3r} C \Vert W_{1} \Vert_{L^{\infty}} 1.1^{(\beta_1-2) k_1} \int_{y_2,...,y_r} \Vert \mathcal{F}^{-1}_{\eta_1} \big( F_1 (\eta_1) \big) \Vert_{L^2_{\tau_1} L^{6/5}_x} dy_2 ... dy_r. \end{align*} \underline{Subcase 3.2: $1 \in J^{-} $} \\ Then we use Lemma \ref{bilinit}, Minkowski's inequality and Bernstein's inequality to write that \begin{align*} \Vert \mathcal{F}^{-1}_{\xi} \eqref{multimain} \Vert_{L^2_x} & \leqslant (2 \pi)^{3r} C 1.1^{-k} \Vert W_{1,k} \Vert_{L^{\frac{6+6c}{5c}}_x} \int_{y_2,...,y_r} \Vert \bigg( \mathcal{F}^{-1}_{\eta_1} \big( F_1 (\eta_1) \big) \bigg)_{k_1} \Vert_{L^2_{\tau_1} L^{\frac{6}{5}(1+c)}_x} dy_2 ... dy_r \\ & \leqslant (2 \pi)^{3r} C 1.1^{-k} 1.1^{\epsilon k_1} \delta \int_{y_2,...,y_r} \Vert \mathcal{F}^{-1}_{\eta_1} \big( F_1 (\eta_1) \big) \Vert_{L^2_{\tau_1} L^{6/5}_x} dy_2 ... dy_r, \end{align*} and then we can conclude by induction as in the previous cases. \end{proof} Now we give a similar lemma that contains a gain of half a derivative compared to the previous one. \begin{lemma} \label{multilinearautre} Assume that $k>0.$ \\ We have the bound \begin{align} \label{multimainautre} &\Bigg \Vert \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ...
d\eta_{n-1} \widehat{g_{k_n}}(\eta_n) d\eta_n \Bigg \Vert_{L^2_x} \\ & \lesssim 1.1^{-k/2} q(\max K) C^n \delta^{n} \Vert g \Vert_{L^2_x} \prod_{\gamma \in J^{+}} 1.1^{-k_{\gamma}} \times \prod_{\gamma \in J^{-}} 1.1^{-k} 1.1^{\epsilon k_{\gamma}} , \end{align} where $J^{+}= \lbrace j \in [[1;n ]]; k_j > k+1 \rbrace ,\ \ J^{-}= \lbrace j \in [[1;n ]]; k > k_j+1 \rbrace, \ \ J = J^{+} \cup J^{-}.$ \\ $K$ denotes the complement of $J$, $\epsilon$ denotes a number strictly between 0 and 1 and \begin{displaymath} q(\max K) = \begin{cases} 1 & \textrm{if} ~~~~ \alpha_{\max K} = 1 \\ 1.1^{k_{\max K}/2} & \textrm{otherwise}. \end{cases} \end{displaymath} Finally the implicit constant in the inequality does not depend on $n.$ \end{lemma} \begin{proof} The proof of this lemma is similar to that of Lemma \ref{multilinear}. The only difference is in the set-up. \\ We must bound \begin{align}\label{multilinautre} & \int_{\tau_1} e^{i\tau_1 \vert \xi \vert^2} \Bigg( \int \prod_{ \gamma \in J} \alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma}) m_{\gamma}(\xi,\eta_{\gamma}) \\ \notag & \times \prod_{\gamma \in K,\gamma \neq \max K} \int_{\tau_{r(\gamma)+1} \leqslant \tau_{r(\gamma)}} e^{-i (\tau_{r(\gamma)} - \tau_{r(\gamma)+1} )\vert \eta_{\gamma} \vert^2} \alpha_{\gamma}(\eta_{\gamma}) \widehat{W_{\gamma}}(\eta_{\gamma-1} - \eta_{\gamma}) \\ \notag & \times \widehat{W_{\max K}}(\eta_{\max K-1}- \eta_{\max K}) \alpha_{\max K}(\eta_{\max K}) e^{-i\tau_{r(\max K)}\vert \eta_{\max K} \vert^2} \widehat{g_{k_n}}(\eta_n) d\eta \Bigg) d\tau_1. \end{align} We localize the $\xi$ variable according to the dominant direction using Lemma \ref{direction}.
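To fix ideas, here is a sketch of the kind of decomposition we have in mind; the precise statement is that of Lemma \ref{direction}, and the cutoffs $\chi_j$ below are only an assumed model for it. We write \begin{align*} P_k(\xi) = \sum_{j=1}^{3} P_k(\xi) \chi_{j}(\xi), \qquad \operatorname{supp} \chi_{j} \subset \lbrace \xi ; \vert \xi_j \vert \geqslant \vert \xi \vert / \sqrt{3} \rbrace, \end{align*} so that on the support of $P_k \chi_j$ we have $\vert \xi_j \vert \sim 1.1^{k},$ and the weight $\vert \xi_j \vert^{1/2}$ appearing below is comparable to $1.1^{k/2}.$ This is what allows us to apply the smoothing estimates in the distinguished direction $x_j.$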
Then we isolate the first term in the product: \begin{align} \label{obj} & P_k(\xi) \chi_{j}(\xi) \int_{\tau_1} e^{i\tau_1 \vert \xi \vert^2} \Bigg( \int \prod_{ \gamma \in J} \alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma}) m_{\gamma}(\xi,\eta_{\gamma}) \\ \notag & \times \prod_{\gamma \in K,\gamma \neq \max K} \int_{\tau_{r(\gamma)+1} \leqslant \tau_{r(\gamma)}} e^{-i (\tau_{r(\gamma)} - \tau_{r(\gamma)+1} )\vert \eta_{\gamma} \vert^2} \alpha_{\gamma}(\eta_{\gamma}) \widehat{W_{\gamma}}(\eta_{\gamma-1} - \eta_{\gamma}) \\ \notag & \times \widehat{W_{\max K}}(\eta_{\max K-1}- \eta_{\max K}) \alpha_{\max K}(\eta_{\max K}) e^{-i\tau_{r(\max K)} \vert \eta_{\max K} \vert^2} \widehat{g_{k_n}}(\eta_n) d\eta \Bigg) d\tau_1 \\ \notag & = P_k(\xi) \chi_j(\xi) \int_{\tau_1} e^{i\tau_1 \vert \xi \vert^2} \int_{\eta_1} \widehat{W_1}(\xi-\eta_1)\alpha_1(\eta_1) e^{-i\tau_1 \vert \eta_1 \vert^2} \\ \notag & \times P_{k_1}(\eta_1) \int_{\tau_2 \leqslant \tau_1} e^{i \tau_2 \vert \eta_1 \vert^2} \Bigg( \int \prod_{ \gamma \in J} \alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma}) m_{\gamma}(\xi,\eta_{\gamma}) \\ \notag & \times \prod_{\gamma \in K, \gamma \neq 1, \gamma \neq \max K} \int_{\tau_{r(\gamma)+1}\leqslant \tau_{r(\gamma)}} e^{-i (\tau_{r(\gamma)} - \tau_{r(\gamma)+1} )\vert \eta_{\gamma} \vert^2} \alpha_{\gamma}(\eta_{\gamma}) \widehat{W_{\gamma}}(\eta_{\gamma-1} - \eta_{\gamma}) \\ \notag & \times \widehat{W_{\max K}}(\eta_{\max K-1}- \eta_{\max K}) \alpha_{\max K}(\eta_{\max K}) e^{-i \tau_{r(\max K)} \vert \eta_{\max K} \vert^2} \widehat{g_{k_n}}(\eta_n) d\eta \Bigg) d\tau_2 d\eta_1 d\tau_1. \end{align} Now we take an inverse Fourier transform in $\xi.$ The terms in the expression above that contain $\xi$ are the first $W_1,$ and all the $m_{\gamma}$ for $\gamma \in J.$ The complex exponential gives a Schr\"{o}dinger semi-group, and the other terms give a convolution.
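Schematically, and up to the normalization constants (written $C$ here) that the proof tracks explicitly, the factors indexed by $J$ produce an iterated convolution: for a function $h$ of the output frequency, \begin{align*} \mathcal{F}^{-1}_{\xi} \Big( \prod_{ \gamma \in J} m_{\gamma}(\xi,\eta_{\gamma}) \widehat{h}(\xi) \Big)(x) = C \int \prod_{ \gamma \in J} \check{m}_{\gamma}(y_{r(\gamma)},\eta_{\gamma}) \, h(x-y_1-...-y_r) \, dy_1 ... dy_r. \end{align*} This is the origin of the variables $y_1,...,y_r$ and of the integrals over them.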
\\ To simplify notations we denote (using notations similar to those above) \begin{align*} F_1(y_1,...,y_r,\eta_1) &=\int \prod_{ \gamma \in J} \alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma}) \check{m}_{\gamma}(y_{r(\gamma)},\eta_{\gamma}) \\ \notag & \times \prod_{\gamma \in K, \gamma \neq 1, \gamma \neq \max K} \int_{\tau_{r(\gamma)+1}\leqslant \tau_{r(\gamma)}} e^{-i (\tau_{r(\gamma)} - \tau_{r(\gamma)+1} )\vert \eta_{\gamma} \vert^2} \alpha_{\gamma}(\eta_{\gamma}) \widehat{W_{\gamma}}(\eta_{\gamma-1} - \eta_{\gamma}) \\ \notag & \times \widehat{W_{\max K}}(\eta_{\max K-1}- \eta_{\max K}) \alpha_{\max K}(\eta_{\max K}) e^{-i\tau_{r(\max K)} \vert \eta_{\max K} \vert^2} \widehat{g_{k_n}}(\eta_n) d\eta. \end{align*} Hence \begin{align*} \mathcal{F}^{-1}_{\xi} \eqref{obj} &=(2 \pi)^{3} \mathcal{F}^{-1}_{\xi} \big(P_k(\xi) \chi_j(\xi) \frac{1}{\vert \xi_j \vert ^{1/2}} \big) \ast \mathcal{F}^{-1} _{\xi} \Bigg( \vert \xi_j \vert^{1/2} \int \prod_{ \gamma \in J} \alpha_{\gamma}(\eta_{\gamma}) \ \ ... \ \ d\eta_n d\eta_{n-1} ds \Bigg) \\ &=(2 \pi)^{3} \bigg(\mathcal{F}^{-1}_{\xi}\big(P_k(\xi) \chi_j(\xi) \frac{1}{\vert \xi_j \vert ^{1/2}} \big) \bigg) \\ & \ast \Bigg( \int_{\tau_1} D_{x_j}^{1/2} e^{-i \tau_1 \Delta} \Bigg( \int_{y_r, r \in J} W_1(x-y_1-...-y_r) \int_{\eta_1} \alpha_1(\eta_1) e^{i\eta_1 \cdot(x-y_1-...-y_r)} e^{-i\tau_1 \vert \eta_1 \vert^2} \\ & \times P_{k_1}(\eta_1) \int_{\tau_2 \leqslant \tau_1} e^{i\tau_2 \vert \eta_1 \vert^2} F_1(y_1,...,y_r,\eta_1) d\tau_2 d\eta_1 \Bigg) d\tau_1 \Bigg). \end{align*} Using Lemma \ref{smoothing} we get \begin{align*} \Vert \eqref{obj} \Vert_{L^2_x} & \lesssim 1.1^{-k/2} \Bigg \Vert \int_{y_r, r \in J} W_1(x-y_1-...-y_r) \int_{\tau_2 \leqslant \tau_1}\int_{\eta_1} \alpha_1(\eta_1) e^{i\eta_1 \cdot(x-y_1-...-y_r)} e^{-i\tau_1 \vert \eta_1 \vert^2} \\ & \times P_{k_1}(\eta_1) e^{i\tau_2 \vert \eta_1 \vert^2} F_1(y_1,...,y_r,\eta_1) d\tau_2 d\eta_1 \Bigg \Vert_{L^1_{x_j} L^{2}_{\tau_1,\widetilde{x_j}}}.
\end{align*} If $\alpha_1(\eta_1)=\eta_{1,i}$ then using a similar proof to that of Lemma \ref{mgnmgn} we find \begin{align*} \Vert \eqref{obj} \Vert_{L^2_x} & \lesssim 1.1^{-k/2} \int_{y_r, r \in J} \Vert \mathcal{F}^{-1}_{\eta_1} F_1 \Vert_{L^1_{x_i} L^{2}_{\tau_2,\widetilde{x_i}}} dy_1 ... dy_r. \end{align*} If $\alpha_1(\eta_1)=1$ then using a similar proof to that of Lemma \ref{mgnpot} we have \begin{align*} \Vert \eqref{obj} \Vert_{L^2_x} & \lesssim 1.1^{-k/2} \int_{y_r, r \in J} \Vert \mathcal{F}^{-1}_{\eta_1} F_1 \Vert_{L^{2}_{\tau_2} L^{6/5}_x} dy_1 ... dy_r. \end{align*} From this point on, the same proof as that of Lemma \ref{multilinear} can be carried out to obtain the desired result. \end{proof} We have the following straightforward corollary, which will be useful in the next section. \begin{corollary} \label{multilinearbis} Assume that $k>0.$ \\ We have the bound \begin{align} \label{multimainbis} &\Bigg \Vert \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-1} \widehat{g_{k_n}}(\xi,\eta_n) d\eta_n \Bigg \Vert_{L^2_x} \\ & \lesssim C^n \delta^{n} \Vert \mathcal{F}^{-1}_{\xi,\eta_n} g \Vert_{L^1 _{y_r} L^2_x} \prod_{\gamma \in J^{+}} 1.1^{-k_{\gamma}} \times \prod_{\gamma \in J^{-}} 1.1^{-k} 1.1^{\epsilon k_{\gamma}} , \end{align} where the notations are the same as in the previous lemmas. \\ Finally the implicit constant in the inequality does not depend on $n.$ \end{corollary} \begin{proof} The proof is identical to that of Lemma \ref{multilinear} or that of Lemma \ref{multilinearautre}, depending on the value of $\alpha_{\max K}.$ The only minor difference is that the dependence of $g$ on $\xi$ adds a convolution in the physical variable. \end{proof} Finally, the following version of the above lemma will be useful for applying Strichartz estimates to the multilinear expressions.
\begin{corollary} \label{multiin} Assume that $k>0.$ \\ We have the bound \begin{align*} &\Bigg \Vert \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-1} \int_{1}^t e^{i s \vert \xi \vert^2} \widehat{g_{k_n}}(s,\xi,\eta_n) ds d\eta_n \Bigg \Vert_{L^2_x} \\ & \lesssim C^n \delta^{n} \Vert g \Vert_{L^{p'}_t L^{q'}_x} \prod_{\gamma \in J^{+}} 1.1^{-k_{\gamma}} \times \prod_{\gamma \in J^{-}} 1.1^{-k} 1.1^{\epsilon k_{\gamma}} , \end{align*} where $(p,q)$ is a Strichartz admissible pair of exponents and $\epsilon$ denotes a number strictly between 0 and 1. \\ The implicit constant in the inequality above does not depend on $n.$ \end{corollary} \begin{proof} Since the proof is similar to that of Lemma \ref{multilinearautre}, we only sketch it here. \\ First note that we can extend the domain of integration of $s$ to $(0;+\infty)$ by multiplying $g$ by $\textbf{1}_{(1;t)}.$ \\ We replace the singular denominators by their integral expressions using \eqref{identite} and perform the change of variables \begin{align*} \tau_1 & \leftrightarrow \tau_1 + \tau_2 + ... +s \\ \tau_2 & \leftrightarrow \tau_2+ \tau_3 + ... +s \\ ...
& \\ s & \leftrightarrow s \end{align*} The expression becomes \begin{align*} &(-i)^{\vert K \vert} \int_{\tau_1} e^{i\tau_1 \vert \xi \vert^2} \Bigg( \int \prod_{ \gamma \in J} \alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma}) m_{\gamma}(\xi,\eta_{\gamma}) \\ & \times \prod_{\gamma \in K, \gamma \neq \max K} \int_{\tau_{r(\gamma)+1} \leqslant \tau_{r(\gamma)}} e^{-i (\tau_{r(\gamma)} - \tau_{r(\gamma)+1} )\vert \eta_{\gamma} \vert^2} \alpha_{\gamma}(\eta_{\gamma}) \widehat{W_{\gamma}}(\eta_{\gamma-1} - \eta_{\gamma}) \\ & \times \int_{s \leqslant \tau_{\vert K \vert}} e^{i(s-\tau_{\vert K \vert}) \vert \eta_{\max K} \vert^2} \widehat{W_{\max K}}(\eta_{\max K-1}-\eta_{\max K}) \alpha_{\max K} (\eta_{\max K}) \widehat{g_{k_n}}(s,y_r,\eta_n) d\eta \Bigg) d\tau_1. \end{align*} Now we distinguish two cases: \\ \underline{Case 1: $\alpha_{\max K} = 1 $} \\ Then we bound all the terms as in the proof of Lemma \ref{multilinear} until the last one: \\ To bound $\mathcal{F}^{-1} F_{\max K-1}$ we write, using retarded Strichartz estimates, that \begin{align*} \bigg \Vert V(x) \int_{s \leqslant \tau_{\vert K \vert}} e^{i(s-\tau_{\vert K \vert})\Delta} \mathcal{F}^{-1} \big(P_{k_{\max K}}(\eta_{\max K}) F_{\max K }(s,\cdot) \big) ds \bigg \Vert_{L^1_{x_j} L^2 _{\tau_{\vert K \vert},\widetilde{x_j}}} \leqslant C \delta \Vert \mathcal{F}^{-1} F_{\max K} \Vert_{L^{p'}_t L^{q'}_x }, \end{align*} and \begin{align*} \bigg \Vert V(x) \int_{s \leqslant \tau_{\vert K \vert}} e^{i(s-\tau_{\vert K \vert})\Delta} \mathcal{F}^{-1} \big( P_{k_{\max K}}(\eta_{\max K}) F_{\max K}(s,\cdot) \big) ds \bigg \Vert_{L^2 _{\tau_{\vert K \vert}} L^{6/5}_x} \leqslant C \delta \Vert \mathcal{F}^{-1} F_{\max K} \Vert_{L^{p'}_t L^{q'}_x }. \end{align*} Now in the expression of $F_{\max K}$ there are only terms in $J$, therefore we can conclude the proof using the bilinear lemmas.
\\ \\ \underline{Case 2: $\alpha_{\max K}(\eta_{\max K}) = \eta_{\max K,i} $} \\ In this case we repeat the proof of Lemma \ref{multilinearautre} except for the last term in the product on elements of $K.$ \\ Using either Lemma \ref{mgnmgnfin} or \ref{potmgnfin} we obtain \begin{align*} & \bigg \Vert a_i(x) D_{x_i} \int_{s \leqslant \tau_{\vert K \vert}} e^{i(s-\tau_{\vert K \vert})\Delta} \mathcal{F}^{-1} \big( P_{k_{\max K}}(\eta_{\max K}) F_{\max K}(s,\cdot) \big) ds \bigg \Vert_{L^1_{x_j} L^2 _{\tau_{\vert K \vert},\widetilde{x_j}}} \\ & \leqslant 1.1^{k/2} C\delta \Vert \mathcal{F}^{-1} F_{\max K} \Vert_{L^{p'}_t L^{q'}_x }, \end{align*} and \begin{align*} & \bigg \Vert a_i(x) D_{x_i} \int_{s \leqslant \tau_{\vert K \vert}} e^{i(s-\tau_{\vert K \vert})\Delta} \mathcal{F}^{-1} \big( P_{k_{\max K}}(\eta_{\max K}) F_{\max K}(s,\cdot) \big) ds \bigg \Vert_{L^2 _{\tau_{\vert K \vert}} L^{6/5}_x} \\ & \leqslant 1.1^{k/2} C \delta \Vert \mathcal{F}^{-1} F_{\max K} \Vert_{L^{p'}_t L^{q'}_x }. \end{align*} We deduce the result in this case. \end{proof} We end with the following simpler version of the above lemma: \begin{corollary} \label{multiinbis} We have the bound \begin{align*} &\Bigg \Vert \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-1} \int_{1}^t e^{i s \vert \xi \vert^2} \widehat{g_{k_n}}(s,\xi,\eta_n) ds d\eta_n \Bigg \Vert_{L^2} \\ & \lesssim t \delta^{\vert K \vert} C^n \Vert g \Vert_{L^{\infty}_s ([1,t]) L^{2}_x} \prod_{\gamma \in J^{+}} 1.1^{-k_{\gamma}} \times \prod_{\gamma \in J^{-}} 1.1^{-k} 1.1^{\epsilon k_{\gamma}} .
\end{align*} The implicit constant in the inequality does not depend on $n.$ \end{corollary} \subsubsection{The case of small output frequency} Now we write analogs of the previous lemmas for $k<0.$ In this case the loss of derivative is helpful, and therefore we do not need to use the smoothing effect. We only resort to Strichartz estimates, which makes this case simpler. \\ We start with the analog of Lemma \ref{multilinear}: \begin{lemma} \label{multilinearneg} Assume that $k<0.$ \\ We have the bound \begin{align} \label{multimainneg} &\Bigg \Vert \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-1} \widehat{g_{k_n}}(\eta_n) d\eta_n \Bigg \Vert_{L^2_x} \\ & \lesssim C^n \delta^{n} \Vert g \Vert_{L^2_x} \prod_{\gamma \in J} \min \lbrace 1.1^{0.5 k_{\gamma-1}} ; 1 \rbrace \prod_{\gamma \in J^{+}} \min \lbrace 1.1^{-k_{\gamma}} ; 1.1^{\epsilon k_{\gamma}} \rbrace \times \prod_{\gamma \in J^{-}} 1.1^{k_{\gamma}-k}. \end{align} The implicit constant in the inequalities does not depend on $n.$ \end{lemma} \begin{proof} The proof is almost identical to that of Lemma \ref{multilinear}, where only Case 1 is considered. The main difference appears when we deal with terms for which $\gamma \in J.$ \\ Therefore we only consider these terms here. They are of the form \begin{align} \label{gammans} \big( \mathcal{F}^{-1} F_{\gamma-1} \big)(x)= \int_{y,z} \check{m_{\gamma}}(y,z) W_{\gamma}(x-y) \big( \mathcal{F}^{-1} F_{\gamma} \big)_{k_{\gamma}} (x-y-z) dy dz. \end{align} We must estimate the $L^2 _t L^{6/5}_x$ norm of this expression. \\ We distinguish two cases: \\ \underline{Case 1: $k_{\gamma} >k +1$} \\ The inequalities in this case give the $J^{+}$ terms in the product.
\\ \underline{Subcase 1.1: $k_{\gamma}>0$} \\ Then we write using Lemma \ref{bilin} that \begin{align*} \Vert \eqref{gammans} \Vert_{L^2 _t L^{6/5}_x} \leqslant C \Vert W \Vert_{L^{\infty}_x} 1.1^{-k_{\gamma}} \Vert \big( \mathcal{F}^{-1} F_{\gamma} \big)\Vert_{L^2 _t L^{6/5}_x}. \end{align*} \underline{Subcase 1.2: $k_{\gamma}<0 $} \\ In this case, we use Lemma \ref{bilin} again as well as Bernstein's inequality to write that \begin{align*} \Vert \eqref{gammans} \Vert_{L^2 _t L^{6/5}_x} & \leqslant C_0 \Vert W \Vert_{L^{3/2-}_x} 1.1^{-k_{\gamma}} \big \Vert \big( \mathcal{F}^{-1} F_{\gamma} \big)_{k_{\gamma}} \big \Vert_{L^2 _t L^{6+}_x} \\ & \leqslant C \delta 1.1^{\epsilon k_{\gamma}} \Vert \big( \mathcal{F}^{-1} F_{\gamma} \big)_{k_{\gamma}} \Vert_{L^2 _t L^{6/5}_x} \end{align*} which can be summed. \\ \underline{Case 2: $k>k_{\gamma}+1$} \\ The inequalities in this case give the $J^{-}$ terms in the product. \\ In this case we write as in the previous case that \begin{align*} \Vert \eqref{gammans} \Vert_{L^2 _t L^{6/5}_x} & \leqslant C_0 \Vert W \Vert_{L^{3/2}_x} 1.1^{\beta k_{\gamma}} 1.1^{-2k} \big \Vert \big( \mathcal{F}^{-1} F_{\gamma} \big)_{k_{\gamma}} \big \Vert_{L^2 _t L^{6}_x} \\ & \leqslant C \delta 1.1^{2(k_{\gamma}-k)} \Vert \big( \mathcal{F}^{-1} F_{\gamma} \big)_{k_{\gamma}} \Vert_{L^2 _t L^{6/5}_x}. \end{align*} Similarly, we now show how to obtain the extra factor $\prod_{\gamma \in J} \min \lbrace 1 ; 1.1^{0.5 k_{\gamma-1}} \rbrace$. We bound \begin{align} \label{gammansprime} (\mathcal{F}^{-1} F_{\gamma-1})_{k_{\gamma-1}} =\big( \int_{y,z} \check{m}(y,z) W_{\gamma}(x-y) \big( \mathcal{F}^{-1} F_{\gamma}\big)_{k_{\gamma}} (x-y-z) dy dz \big)_{k_{\gamma-1}}. \end{align} We must, for the same reason as above, bound the $L^2 _t L^{6/5}_x$ norm of that expression.
\\ We start by using Bernstein's inequality and obtain \begin{align*} \Vert (\mathcal{F}^{-1} F_{\gamma-1})_{k_{\gamma-1}} \Vert_{L^2 _t L^{6/5}_x} \leqslant C_0 1.1^{0.5 k_{\gamma-1}} \Vert (\mathcal{F}^{-1} F_{\gamma-1})_{k_{\gamma-1}} \Vert_{L^2 _t L^{1}_x}, \end{align*} and then the proof is identical to the previous inequality: \\ We distinguish two cases: \\ \underline{Case 1: $k_{\gamma} >k +1$} \\ The inequalities in this case give the $J^{+}$ terms in the product. \\ \underline{Subcase 1.1: $k_{\gamma}>0$} \\ Then we write using Lemma \ref{bilin} that \begin{align*} \Vert \eqref{gammansprime} \Vert_{L^2 _t L^{1}_x} \leqslant C \Vert W \Vert_{L^{6}_x} 1.1^{-k_{\gamma}} \Vert \mathcal{F}^{-1} F_{\gamma} \Vert_{L^2 _t L^{6/5}_x}. \end{align*} \underline{Subcase 1.2: $k_{\gamma}<0 $} \\ In this case we use Lemma \ref{bilin} again as well as Bernstein's inequality to write that \begin{align*} \Vert \eqref{gammansprime} \Vert_{L^2 _t L^{1}_x} & \leqslant C_0 \Vert W \Vert_{L^{6/5-}_x} 1.1^{-2k_{\gamma}} \Vert \big( \mathcal{F}^{-1} F_{\gamma} \big)_{k_{\gamma}} \Vert_{L^2 _t L^{6+}_x} \\ & \leqslant C \delta 1.1^{\epsilon k_{\gamma}} \Vert \mathcal{F}^{-1} F_{\gamma} \Vert_{L^2 _t L^{6/5}_x}, \end{align*} which can be summed. \\ \underline{Case 2: $k>k_{\gamma}+1$} \\ The inequalities in this case give the $J^{-}$ terms in the product. \\ In this case we write as in the previous case that \begin{align*} \Vert \eqref{gammansprime} \Vert_{L^2 _t L^{1}_x} & \leqslant C_0 \Vert W \Vert_{L^{6/5}_x} 1.1^{-2k} \bigg \Vert \big( \mathcal{F}^{-1} F_{\gamma} \big)_{k_{\gamma}} \bigg \Vert_{L^2 _t L^{6}_x} \\ & \leqslant C \delta 1.1^{2(k_{\gamma}-k)} \Vert \mathcal{F}^{-1} F_{\gamma} \Vert_{L^2 _t L^{6/5}_x}. \end{align*} \end{proof} We keep recording analogs of the previous section. 
\begin{corollary} \label{multilinearbisneg} Assume that $k<0.$ \\ We have the bound \begin{align} \label{multimainbisneg} &\Bigg \Vert \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-1} \widehat{g_{k_n}}(\xi,\eta_n) d\eta_n \Bigg \Vert_{L^2_x} \\ & \lesssim C^n \delta^{n} \Vert \mathcal{F}^{-1}_{\xi,\eta_n} g \Vert_{L^1 _{y_r} L^2_x} \prod_{\gamma \in J} \min \lbrace 1.1^{0.5 k_{\gamma-1}} ; 1 \rbrace \prod_{\gamma \in J^{+}} \min \lbrace 1.1^{-k_{\gamma}} ; 1.1^{0.5 k_{\gamma}} \rbrace \times \prod_{\gamma \in J^{-}} 1.1^{k_{\gamma}-k}, \end{align} where the notations are the same as in previous lemmas. \\ Finally, the implicit constant in the inequalities does not depend on $n.$ \end{corollary} For Strichartz estimates we also need: \begin{corollary} \label{multiinneg} Assume that $k<0.$ \\ We have the bound \begin{align*} &\Bigg \Vert \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-1} \int_{1}^t e^{i s \vert \xi \vert^2} \widehat{g_{k_n}}(s,\xi,\eta_n) ds d\eta_n \Bigg \Vert_{L^2_x} \\ & \lesssim C^n \delta^{n} \Vert g \Vert_{L^{p'}_t L^{q'}_x} \prod_{\gamma \in J} \min \lbrace 1.1^{0.5 k_{\gamma-1}};1 \rbrace \prod_{\gamma \in J^{+}} \min \lbrace 1.1^{-k_{\gamma}} ; 1.1^{0.5 k_{\gamma}} \rbrace \times \prod_{\gamma \in J^{-}} 1.1^{k_{\gamma}-k}.
\end{align*} The implicit constant in the inequality does not depend on $n.$ \end{corollary} And we also have the following easier version: \begin{corollary} \label{multiinnegbis} Assume that $k<0.$ \\ We have the bound \begin{align*} &\Bigg \Vert \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-1} \int_{1}^t e^{i s \vert \xi \vert^2} \widehat{g_{k_n}}(s,\xi,\eta_n) ds d\eta_n \Bigg \Vert_{L^2_x} \\ & \lesssim C^n \delta^{n} t \Vert g \Vert_{L^{\infty}_t L^2_x} \prod_{\gamma \in J} \min \lbrace 1.1^{0.5 k_{\gamma-1}};1 \rbrace \prod_{\gamma \in J^{+}} \min \lbrace 1.1^{-k_{\gamma}} ; 1.1^{0.5 k_{\gamma}} \rbrace \times \prod_{\gamma \in J^{-}} 1.1^{k_{\gamma}-k}. \end{align*} The implicit constant in the inequality does not depend on $n.$ \end{corollary} \subsection{Bounding $n$-th iterates} \label{Boundingiterates} In this section we prove the bounds announced in Proposition \ref{nthestimate}. In spirit they all follow from the previous multilinear lemmas and the bounds for the first iterates (Section \ref{firsterm}). However we cannot always localize the potential in frequency as easily: say, for example, that $k_1 \ll k;$ then the potential term $\widehat{W_1}(\xi-\eta_1)$ in the first iterate is localized at frequency $1.1^k$. In the multilinear setting, however, we cannot conclude that $\widehat{W_n}(\eta_{n-1}-\eta_n)$ will be localized at $1.1^{k_{n-1}}$ from the fact that $k_n \ll k.$ Therefore we need to slightly adjust some of the proofs.
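\begin{remark} For concreteness, here is the elementary computation behind the localization of the potential in the first iterate; the margin $k_1 \leqslant k-30$ below is indicative, and the precise constants are immaterial. If $\vert \xi \vert \in [1.1^{k}, 1.1^{k+1}]$ and $\vert \eta_1 \vert \leqslant 1.1^{k_1+1}$ with $k_1 \leqslant k-30,$ then $\vert \eta_1 \vert \leqslant 1.1^{-29} \vert \xi \vert \leqslant \frac{1}{2} \vert \xi \vert,$ hence
\begin{align*}
\frac{1}{2}\, 1.1^{k} \leqslant \vert \xi \vert - \vert \eta_1 \vert \leqslant \vert \xi - \eta_1 \vert \leqslant \vert \xi \vert + \vert \eta_1 \vert \leqslant 2 \cdot 1.1^{k+1},
\end{align*}
so $\widehat{W_1}(\xi - \eta_1)$ is supported in $O(1)$ dyadic shells around $1.1^{k}.$ In the multilinear setting the analogous argument fails because $k_n \ll k$ does not imply $k_n \ll k_{n-1}.$ \end{remark}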
\\ \\ We introduce the following notation to simplify the expressions that appear: \begin{definition} In this section and the next $G(\textbf{k})$ will denote either \begin{align*} \prod_{\gamma \in J^{+}} \min \lbrace 1.1^{-k_{\gamma}} ; 1.1^{\epsilon k_{\gamma}} \rbrace \times \prod_{\gamma \in J^{-}} 1.1^{-k}1.1^{\epsilon k_{\gamma}} \end{align*} or \begin{align*} \prod_{\gamma \in J^{+}} \min \lbrace 1.1^{-k_{\gamma}} ; 1.1^{\epsilon k_{\gamma}} \rbrace \times \prod_{\gamma \in J^{-}} 1.1^{k_{\gamma}-k} \end{align*} depending on whether $k>0$ or $k \leqslant 0.$ \end{definition} We start by estimating the terms that come up in the iteration in the case where $\vert k_n - k \vert >1.$ \begin{lemma} We have the bound \begin{align*} \Vert I_1 ^n f \Vert_{L^2_x} & \lesssim C^n G(\textbf{k}) \delta^n \varepsilon_1 . \end{align*} \end{lemma} \begin{proof} We start by splitting the $\eta_n$ variable dyadically. We denote by $k_n$ the corresponding exponent. \\ We can apply Corollary \ref{multiin} with \begin{align*} \widehat{g_{k_{n-1}}}(s,\eta_{n-1},\xi) = \int_{\eta_{n}} s \frac{P_k(\xi) P_{k_{n}}(\eta_n)\alpha_{n}(\eta_n) \xi_l \widehat{W_n}(\eta_{n-1}-\eta_{n})}{\vert \xi \vert^2 - \vert \eta_{n} \vert^2} \widehat{u^2} (s,\eta_n) d\eta_n. \end{align*} Note that in this case $n \in J,$ meaning that in the last term of the product over $\gamma$ the denominator is not singular.
\\ \underline{Case 1: $W_n = V, k_n -1 > k $} \\ In this case the $\xi_l$ in front of the expression $I_1 ^n$ contributes $1.1^k$ and the factor $\frac{1}{\vert \xi \vert^2 - \vert \eta_n \vert^2}$ contributes $1.1^{-2 k_n}.$ \\ In this case we use Bernstein's inequality and Lemma \ref{multilinear} (with $(p,q)=(2,6)$) and obtain \begin{align*} \Vert I_1 ^n f \Vert_{L^2} & \lesssim C^n G(\textbf{k}) \delta^n 1.1^{k-2k_n} \Vert V \Vert_{L^{6/5}_x} \Vert t (u^2)_{k_n} \Vert_{L^2_t L^{\infty}_x} \\ & \lesssim C^n G(\textbf{k}) \delta^n 1.1^{k-k_n} \Vert t (u^2)_{k_n} \Vert_{L^2_t L^{3}_x} \\ & \lesssim C^n G(\textbf{k}) \delta^n 1.1^{k-k_n} \bigg \Vert \Vert t u \Vert_{L^{6}_x} \Vert u \Vert_{L^6_x} \bigg \Vert_{L^2 _t} \\ & \lesssim C^n G(\textbf{k}) \delta^n 1.1^{k-k_n} \varepsilon_1^2 , \end{align*} which can be summed over $k_n$ using Lemma \ref{summation} and the fact that $k_n>k.$ \\ \underline{Case 2: $W_n = V, k_n + 1 < k $} \\ This case is handled like Case 1. \\ \underline{Case 3: $W_n = a_i, k_n -1 > k$} \\ In this case we obtain a $1.1^{k-k_n}$ factor in front which is directly summable. \\ \underline{Case 4: $W_n = a_i, k_n +1 < k$} \\ In this case we obtain a $1.1^{k-2k+k_n} = 1.1^{k_n-k}$ factor in front which is directly summable. \end{proof} The following term also appears when $\vert k-k_n \vert >1.$ \begin{lemma} We have the bound: \begin{align*} \Vert I_2 ^n f \Vert_{L^2_x} \lesssim C^n G(\textbf{k}) \delta^n \varepsilon_1.
\end{align*} \end{lemma} \begin{proof} \underline{Case 1: $k>0.$} \\ We can write, using Lemma \ref{bilin} and Corollary \ref{multilinearbis} with \begin{align*} \widehat{g_{k_{n-1}}}(t,\xi,\eta_{n-1}) = \int_{\eta_{n}} \frac{ P_k(\xi) \xi_l \widehat{W_n}(\eta_{n-1}-\eta_n) \alpha_{n}(\eta_n)}{\vert \xi \vert^2 - \vert \eta_n \vert^2} e^{it \vert \eta_n \vert^2} t \widehat{f_{k_n}}(t,\eta_n) d\eta_n \end{align*} that we have the bound \begin{align*} \Vert I_2 ^n f \Vert_{L^2_x} & \lesssim C^{n} G(\textbf{k}) \delta^{n} \Bigg \Vert \mathcal{F}^{-1}_{\eta_{n-1}} \int_{\eta_{n}} \frac{ P_k(\xi) \xi_l \widehat{W_n}(\eta_{n-1}-\eta_n) \alpha_{n}(\eta_n)}{\vert \xi \vert^2 - \vert \eta_n \vert^2} e^{it \vert \eta_n \vert^2} t \widehat{f_{k_n}}(t,\eta_n) d\eta_n \Bigg \Vert_{L^2_x} . \end{align*} \underline{Subcase 1.1: $k_n > k+1$} \\ Then using Bernstein's inequality, we obtain (we denote $\beta_n=0$ if $\alpha_n = 1$ and $\beta_n = 1$ otherwise): \begin{align*} \Vert I_2 ^n f \Vert_{L^2_x} & \lesssim \delta^{n} C^{n} G(\textbf{k}) 1.1^{-2k_n} 1.1^k 1.1^{\beta_n k_n} \Vert W_n \Vert_{L^3 _x} \Vert t e^{it\Delta} f_{k_n} \Vert_{L^6 _x}, \end{align*} and this bound is directly summable over $k_n. $ \\ \underline{Subcase 1.2: $k_n <k-1$} \\ If $\beta_n = 1,$ we can conclude as in the previous case. \\ If $\beta_n = 0, $ then we use Bernstein's inequality and obtain \begin{align*} \Vert I_2 ^n f \Vert_{L^2_x} & \lesssim \delta^{n} C^{n} G(\textbf{k}) 1.1^{-2k} 1.1^k \Vert W \Vert_{L^{2}_x} \Vert t e^{it\Delta} f_{k_n} \Vert_{L^{\infty} _x} \\ & \lesssim \delta^{n} C^{n} G(\textbf{k}) 1.1^{0.5(k_n-k)} \Vert W \Vert_{L^{2}_x} \Vert t e^{it\Delta} f_{k_n} \Vert_{L^{6} _x}. \end{align*} We can conclude using Lemma \ref{dispersive}. 
\\ \\ \underline{Case 2: $k<0.$} \\ We distinguish several subcases: \\ \underline{Subcase 2.1: $k>k_n+1$} \\ \underline{Subcase 2.1.1: $k_{n-1} \leqslant k+1$} \\ Then we use Lemma \ref{multilinearneg} as well as Bernstein's inequality and write that \begin{align*} \Vert I_2 ^n f \Vert_{L^2} & \lesssim 1.1^k C^n \delta^n G(\textbf{k}) 1.1^{(\beta_n-2) k} \Vert W_{\leqslant k+10} \Vert_{L^{3-}_x} \Vert t e^{it\Delta} f_{k_n} \Vert_{L^{6+}_x} \\ & \lesssim C^n \delta^n G(\textbf{k}) 1.1^{\beta_n k} \Vert W \Vert_{L^{3/2-}_x} 1.1^{\epsilon k_n}\Vert t e^{it\Delta} f_{k_n} \Vert_{L^{6}_x} . \end{align*} \underline{Subcase 2.1.2: $k_{n-1}> k+1$} \\ If $\beta_n = 1$ we can conclude as above. \\ If $\beta_n = 0,$ then consider the largest integer $\gamma \in J$ such that $\gamma-1 \in K.$ In this case we use Lemma \ref{multilinearneg} and at least one of the terms in the factor \begin{align*} \prod_{\gamma \in J} 1.1^{0.5 k_{\gamma-1}} \end{align*} is equal to $1.1^{0.5 k}$. We use Bernstein's inequality to write that \begin{align*} \Vert I_2 ^n f \Vert_{L^2_x} & \lesssim 1.1^k 1.1^{0.5 k} C^n G(\textbf{k}) \delta^n 1.1^{- 2 k} \Vert W \Vert_{L^{2}_x} \Vert t e^{it\Delta} f_{k_n} \Vert_{L^{\infty}_x} \\ & \lesssim 1.1^{0.5(k_n-k)} C^n \delta^n G(\textbf{k}) \Vert W \Vert_{L^{2}_x} \Vert t e^{it\Delta} f_{k_n} \Vert_{L^{6}_x}, \end{align*} which can be summed over $k_n$ since $k_n-k<0.$ \\ \underline{Subcase 2.2: $k<k_n-1$} \\ The proof is similar in this case and is therefore omitted. \end{proof} Now we bound $I_3 ^n f$. This term is always such that $\vert k-k_n \vert >1.$ \begin{lemma} We have the bound \begin{align*} \Vert I_3 ^n f \Vert_{L^2_x} \lesssim \delta^{n} C^n G(\textbf{k}) \varepsilon_1. \end{align*} The implicit constant does not depend on $n$ here.
\end{lemma} \begin{proof} \underline{Case 1: $k>0$} \\ We use Corollary \ref{multiin} with \begin{align*} \widehat{g_{k_{n-1}}} (s,\xi,\eta_{n-1}) & = \int_{\eta_n} \frac{\xi_l P_{k}(\xi) \widehat{W_n}(\eta_{n-1}-\eta_n) \alpha_n (\eta_n)}{\vert \xi \vert^2 - \vert \eta_n \vert^2} e^{-is\vert \eta_n \vert^2} \widehat{f_{k_n}}(s,\eta_n) d\eta_n. \end{align*} \\ \underline{Subcase 1.1: $k_n > k+1$} \\ In this case we obtain \begin{align*} \Vert I_3 ^n f \Vert_{L^2_x} & \lesssim C^n \delta^{n} G(\textbf{k}) \Vert g \Vert_{L^2_s L^{6/5}_x}. \end{align*} We estimate that last term using Lemma \ref{bilin} (as usual $\beta_n=0$ if $\alpha_n = 1$ and $\beta_n=1$ otherwise) \begin{align*} \Vert g \Vert_{L^2_s L^{6/5}_x} & \lesssim 1.1^{k} 1.1^{-2k_n} 1.1^{\beta_n k_n} \Vert W_n \Vert_{L^{3/2}_x} \Vert e^{is \Delta} f_{k_n} \Vert_{L^2_s L^6_x} \\ & \lesssim 1.1^{k-k_n} 1.1^{(\beta_n-1)k_n} \Vert W_n \Vert_{L^{3/2}_x} \Vert e^{is \Delta} f_{k_n} \Vert_{L^2_s L^6_x}, \end{align*} which can be summed using Lemma \ref{summation}. \\ \underline{Subcase 1.2: $k>k_n+1$} \\ This case is similar to the previous one. \\ \\ \underline{Case 2: $k<0$} \\ Now we assume that $k<0.$ We only treat the worst case ($W=V$). \\ \underline{Subcase 2.1: $k_n > k+1$} \\ \underline{Subcase 2.1.1: $k_{n-1} \leqslant k+1 $} \\ Then we can use the fact that $W$ is localized at frequency less than $1.1^{k_n+10},$ together with Lemma \ref{multiinneg} and Bernstein's inequality, to write that \begin{align*} \Vert I_3 ^n f \Vert_{L^2} & \lesssim C^n G(\textbf{k}) \delta^n 1.1^{k} 1.1^{-2k_n} \Vert W_{n,\leqslant k_n} \Vert_{L^{3/2}_x} \Vert e^{it\Delta} f_{k_n} \Vert_{L^2_t L^6 _x} \\ & \lesssim 1.1^{k-k_n} C^n G(\textbf{k}) \delta^n \Vert W_n \Vert_{L^{1}_x} \Vert e^{it\Delta} f_{k_n} \Vert_{L^2 _t L^6 _x} \end{align*} and we can conclude using Lemma \ref{summation}. \\ \underline{Subcase 2.1.2: $k_{n-1} > k+1$} \\ In this case, as in the previous proof, we use Lemma \ref{multiinneg} to gain an additional $1.1^{k/2}$ factor.
Overall we get the bound \begin{align*} \Vert I_3 ^n f \Vert_{L^2 _x} & \lesssim C^n G(\textbf{k}) \delta^n 1.1^{1.5 k} 1.1^{-2k_n} \Vert W \Vert_{L^{6/5}_x} \Vert e^{it\Delta} f_{k_n} \Vert_{L^2 _t L^{\infty}_x} \\ & \lesssim C^n G(\textbf{k}) \delta^n 1.1^{1.5(k-k_n)} \delta \Vert e^{it\Delta} f_{k_n} \Vert_{L^2 _t L^{6}_x}, \end{align*} and we can conclude by Lemma \ref{summation}. \\ \underline{Subcase 2.2: $k_n< k-1$} \\ This case is treated similarly to the previous one. \end{proof} Now we come to the terms that arise in the case $\vert k-k_n \vert \leqslant 1.$ We start with $I_4 ^n f:$ \begin{lemma} We have the bound \begin{align*} \Vert I_4 ^n f \Vert_{L^2_x} \lesssim C^n G(\textbf{k}) \delta^n \varepsilon_1 . \end{align*} \end{lemma} \begin{proof} This is a direct consequence of Lemma \ref{multilinear} (or \ref{multilinearautre}) and Lemma \ref{X'}: \begin{align*} \Vert I_4 ^n f \Vert_{L^2 _x} & \lesssim C^n G(\textbf{k}) \delta^n \bigg \Vert \mathcal{F}^{-1} \big( e^{-it \vert \eta_n \vert^2} \partial_{\eta_n,j} f P_{k_n}(\eta_n) \big) \bigg \Vert_{L^2 _x} \\ & \lesssim C^n G(\textbf{k}) \delta^n \Vert f \Vert_{X'} \\ & \lesssim C^n G(\textbf{k}) \delta^n \varepsilon_1. \end{align*} \end{proof} The next few terms are estimated similarly, so all the estimates are grouped in the same lemma. Recall that all these terms appear when $\vert k-k_n \vert \leqslant 1.$ \begin{lemma} We have the bounds \begin{align*} \Vert I_5 ^n f, I_6 ^n f, I_7 ^n f, I_8^n f \Vert_{L^2} \lesssim C^n G(\textbf{k}) \delta^n \varepsilon_1. \end{align*} \end{lemma} \begin{proof} We do the proof for $I_6 ^n f,$ since the other terms are easier to deal with.
\\ \underline{Case 1: $k_{n-1} \leqslant k_n +10$} \\ In this case the potential $W_n$ is localized at frequency less than $1.1^{k+10}.$ Therefore we can use Corollary \ref{multiin} (or \ref{multiinneg}) for \begin{align*} \widehat{g_{k_{n-1}}}(s,\eta_{n-1}) = \int_{\eta_n} \alpha_n(\eta_n) \widehat{W_{n, \leqslant k+10}}(\eta_{n-1}-\eta_n) \partial_{\eta_{n,j}} \big( \frac{\xi_l \eta_{n,j}}{\vert \eta_n \vert^2} \big) e^{-is \vert \eta_n \vert^2} \widehat{f_{k_n}}(s, \eta_n) d\eta_n, \end{align*} and obtain (we denote $\beta_n =0$ or $1$ depending on whether $\alpha_n = 1$ or not) \begin{align*} \Vert I_6 ^n f \Vert_{L^2} & \lesssim C^n G(\textbf{k}) \delta^n 1.1^{-k} 1.1^{\beta_n k} \Vert W_{\leqslant k} \Vert_{L^{3/2} _x} \Vert e^{it\Delta} f_{k_n} \Vert_{L^2 _t L^6 _x} \\ & \lesssim C^n G(\textbf{k}) \delta^n \Vert e^{it\Delta} f_{k_n} \Vert_{L^2 _t L^6 _x}, \end{align*} and we can conclude using Lemma \ref{summation}. \\ \underline{Case 2: $k_{n-1}> k_n +10.$} \\ \underline{Subcase 2.1: $k>0$ } \\ This subcase can be treated as Case 1. \\ \underline{Subcase 2.2: $k \leqslant 0$} \\ In this case we use Lemma \ref{multiinneg}.\\ We can consider the largest $j$ such that $j-1 \in K.$ That term gives us an additional $1.1^{0.5k}$ factor. We obtain, by the same reasoning as in Case 1, the bound (we only treat the worst case here, that is $\beta_n =0$, see Case 1): \begin{align*} \Vert I_6 ^n f \Vert_{L^2} & \lesssim C^n G(\textbf{k}) \delta^n 1.1^{-k} 1.1^{0.5k} \Vert W \Vert_{L^{6/5} _x} \Vert e^{it\Delta} f_{k_n} \Vert_{L^2 _t L^{\infty} _x} \\ & \lesssim C^n G(\textbf{k}) \delta^n \Vert e^{it\Delta} f_{k_n} \Vert_{L^2 _t L^6 _x}, \end{align*} where to obtain the last line we used Bernstein's inequality. \end{proof} Finally we have the expected bounds on the iterates of the bilinear terms: \begin{lemma} We have the bounds \begin{align*} \Vert I_9 ^n f, I_{10} ^n f \Vert_{L^2_x} \lesssim C^n G(\textbf{k}) \delta^n \varepsilon_1 .
\end{align*} \end{lemma} \begin{proof} The proofs of these estimates can be straightforwardly adapted from \cite{L}, Lemmas 7.7 and 7.8. Therefore we will only treat the more complicated of the two terms, namely $I_9 ^n f.$ \\ \\ We split the frequencies $\eta_n$ and $\eta_{n-1}-\eta_n$ dyadically ($k_n$ and $k_{n+1}$ denote the corresponding exponents) as well as time ($m$ denotes the exponent): \begin{align*} &i\int_1 ^t \int_{\eta_n} \eta_{n,l} e^{is(\vert \xi \vert^2 - \vert \eta_n \vert^2 - \vert \eta_{n-1} - \eta_n \vert^2 )} s \widehat{f}(s,\eta_n) \widehat{f}(s,\eta_{n-1}-\eta_n) d\eta_n ds \\ & =\sum_{m=0}^{\ln t} \sum_{k_n,k_{n+1} \in \mathbb{Z}} \int_{1.1^m} ^{1.1^{m+1}} is \eta_{n,l} e^{is (\vert \xi \vert^2 - \vert \eta_n \vert ^2 - \vert \eta_{n-1} - \eta_n \vert ^2)} \widehat{f_{k_n}}(s,\eta_n) \widehat{f_{k_{n+1}}}(s,\eta_{n-1}-\eta_n) d\eta_n ds . \end{align*} \underline{Case 1: $\max \lbrace k_n ; k_{n+1} \rbrace \geqslant m$} \\ We apply Corollary \ref{multiinbis} for \begin{align*} \widehat{g}(s,\xi,\eta_{n-1}) = \textbf{1}_{(1.1^m;1.1^{m+1})}(s) \int_{\mathbb{R}^3} is \eta_{n,l} e^{is (\vert \xi \vert^2 - \vert \eta_n \vert ^2 - \vert \eta_{n-1} - \eta_n \vert ^2)} \widehat{f_{k_n}}(s,\eta_n) \widehat{f_{k_{n+1}}}(s,\eta_{n-1}-\eta_n) d\eta_n, \end{align*} as well as Lemma \ref{bilin} to write that \begin{align*} \Vert I_9 ^n f \Vert_{L^{\infty}_t L^2 _x} & \lesssim 1.1^{2m} 1.1^{\max \lbrace k_n ; k_{n+1} \rbrace} 1.1^{-10 \max \lbrace k_n ; k_{n+1} \rbrace} \\ & \times \min \big \lbrace 1.1^{-10 \min \lbrace k_n ; k_{n+1} \rbrace}; 1.1^{3 \min \lbrace k_n ; k_{n+1} \rbrace /2} \big \rbrace \delta^n C^n G(\textbf{k}) \varepsilon_1 ^2 \\ & \lesssim 1.1^{-6m} 1.1^{-\max \lbrace k_n ; k_{n+1} \rbrace} \min \big \lbrace 1.1^{-10 \min \lbrace k_n ; k_{n+1} \rbrace}; 1.1^{3 \min \lbrace k_n ; k_{n+1} \rbrace /2} \big \rbrace G(\textbf{k}) \delta^n C^n \varepsilon_1 ^2, \end{align*} which we can sum over $k_n,k_{n+1}$ and $m.$ \\ \underline{Case 2: $\min \lbrace k_n
; k_{n+1} \rbrace \leqslant -2m $} \\ Similarly, in this case we write that \begin{align*} \Vert I_9 ^n f \Vert_{L^{\infty}_t L^2 _x} & \lesssim 1.1^{2m} 1.1^{\max \lbrace k_n ; k_{n+1} \rbrace} 1.1^{3 \min \lbrace k_n ; k_{n+1} \rbrace /2} \\ & \times \min \big \lbrace 1.1^{-10 \max \lbrace k_n ; k_{n+1} \rbrace}; 1.1^{\max \lbrace k_n ; k_{n+1} \rbrace /3} \big \rbrace \delta^n C^n G(\textbf{k}) \varepsilon _1 ^2 \\ & \lesssim 1.1^{-0.5 m} 1.1^{0.25 \min \lbrace k_n ; k_{n+1} \rbrace} 1.1^{\max \lbrace k_n ; k_{n+1} \rbrace} \\ & \times \min \big \lbrace 1.1^{-10 \max \lbrace k_n ; k_{n+1} \rbrace}; 1.1^{\max \lbrace k_n ; k_{n+1} \rbrace /3} \big \rbrace \delta^n C^n G(\textbf{k}) \varepsilon _1 ^2, \end{align*} which can be summed. \\ \underline{Case 3: $-2m \leqslant k_n , k_{n+1} \leqslant m$} \\ When the gradient of the phase is not too small, we can integrate by parts in $\eta_n$ to gain decay in time. To quantify this more precisely, we split dyadically in the gradient of the phase, namely $\eta_{n-1}-2\eta_n.$ We denote by $k_{n}'$ the corresponding exponent.\\ \\ \underline{Case 3.1: $k_{n}' \leqslant -10 m$} \\ We apply Corollary \ref{multiinbis} for \begin{align*} \widehat{g}(s,\eta_{n-1}) &= \textbf{1}_{(1.1^m;1.1^{m+1})}(s) \int_{\mathbb{R}^3} is \eta_{n,l} e^{is (\vert \xi \vert^2 - \vert \eta_n \vert ^2 - \vert \eta_{n-1} - \eta_n \vert ^2)} P_{k_{n}'}(2\eta_n- \eta_{n-1}) \\ &\times \widehat{f_{k_n}}(s,\eta_n) \widehat{f_{k_{n+1}}}(s,\eta_{n-1}-\eta_n) d\eta_n. \end{align*} As in Lemma 5.16 in \cite{L} we have \begin{align*} \Vert g \Vert_{L^2_{\eta_{n-1}}} \lesssim 1.1^{-13m} 1.1^{0.1 k_{n}'} \varepsilon_1^2.
\end{align*} Hence \begin{align*} \Vert I_9 ^n f \Vert_{L^{\infty}_t L^2 _x} & \lesssim 1.1^m 1.1^{0.1 k_{n}'} 1.1^{-13 m} \delta^n C^n G(\textbf{k}) \varepsilon_1 ^2, \end{align*} which can be summed over $k_{n}'$ and $k_n,k_{n+1}$ (there are only $O(m^2)$ terms in the sum) as well as over $m.$ \\ \\ \underline{Case 3.2: $k_{n}' \geqslant k_n-50, k_{n}' \geqslant -10m,$ and $-2m \leqslant k_n,k_{n+1} \leqslant m$}\\ In this case we do an integration by parts in $\eta_n$. \\ Again, this case is similar to that of Lemma 7.8 in \cite{L}. All the terms that appear are treated following the same strategy, therefore we focus on the case where the $\eta_n$ derivative hits one of the profiles. \\ We can apply Corollary \ref{multiin} with $(p,q) = (4,3)$ and \begin{align*} \widehat{g}(s,\xi,\eta_{n-1}) &= \textbf{1}_{(1.1^m;1.1^{m+1})}(s) \int_{\mathbb{R}^3} e^{is (\vert \xi \vert^2 - \vert \eta_n \vert ^2 - \vert \eta_{n-1} - \eta_n \vert ^2)} \frac{P_{k_{n}'}(2\eta_n- \eta_{n-1}) (2 \eta_n - \eta_{n-1})_j \eta_{n,l}}{\vert 2 \eta_n - \eta_{n-1} \vert^2} \\ &\times \widehat{f_{k_n}}(s,\eta_n) \partial_{\eta_{n,j}} \widehat{f_{k_{n+1}}}(s,\eta_{n-1}-\eta_n) d\eta_n, \end{align*} which yields the bound \begin{align*} \Vert I_9 ^n f \Vert_{L^{\infty}_t L^2 _x} \lesssim \delta^n C^n G(\textbf{k}) 1.1^{k_n-k_{n}'} 1.1^{-m/4} \varepsilon_1 ^2. \end{align*} This expression can be summed given the assumptions on the indices in this case. \\ \\ \underline{Case 3.3: $-10 m \leqslant k_{n}' \leqslant k_n-10$ and $-2m \leqslant k_n,k_{n+1} \leqslant m$} \\ There is a slight difference in this case compared to the corresponding lemma in \cite{L} due to the presence of the magnetic potentials.
\\ Let's start with a further restriction: notice that $\eta_{n-1} - \eta_n = \eta_{n-1}- 2\eta_n + \eta_n,$ therefore, since $k_{n}' \leqslant k_n-10,$ we have $ \vert \eta_{n-1}-\eta_n \vert \sim 1.1^{k_n} \sim 1.1^{k_{n+1}}.$ \\ Using Corollary \ref{multiinbis} as well as Bernstein's inequality and the fact that the $X$ norm of $f_{k_n}$ controls its $L^p$ norms for $6/5<p\leqslant 2,$ we get that \begin{align*} \Vert I_9 ^n f \Vert_{L^2 _x} & \lesssim \delta^n C^n G(\textbf{k}) 1.1^{2m} 1.1^{k_n} \Vert u_{k_n} \Vert_{L^{\infty} _t L^{\infty}_x} \Vert u_{k_{n+1}} \Vert_{L^{\infty} _t L^{2}_x} \\ & \lesssim \delta^n C^n G(\textbf{k}) 1.1^{2m} 1.1^{k_n} 1.1^{2.49 k_n} 1.1^{0.99 k_n} \varepsilon_1 ^2 \\ & \lesssim 1.1^{2m} 1.1^{4.48 k_n} C^n \delta^n G(\textbf{k}) \varepsilon_1 ^2. \end{align*} If $k_n \leqslant - 101/224 m$ we can sum the expressions above: the exponent satisfies $2m + 4.48 k_n \leqslant 2m - 4.48 \cdot \tfrac{101}{224} m = -0.02 m,$ and since there are only $O(m^2)$ terms in the sums on $k_n, k_{n+1},$ this decaying factor in $m$ is enough to ensure convergence.\\ As a result we can assume from now on that $k_n >-101/224 m.$ \\ \\ First, recall the following key symbol bound from \cite{L}, which was the reason for using a frequency localization at $1.1^k$ and not $2^k:$ \begin{align} \label{symbol1} \bigg \Vert \mathcal{F}^{-1} \frac{P_{k}(\xi) P_{k_n}(\eta_n) P_{k_{n+1}}(\eta_{n+1}) P_{k_n'}(2\eta_n - \eta_{n-1}) }{\vert \xi \vert^2 - \vert \eta_n \vert^2 - \vert \eta_{n-1}-\eta_n \vert^2} \bigg \Vert_{L^1} \lesssim 1.1^{-2k_n} . \end{align} Now we integrate by parts in time. \\ Let's start with the easier boundary terms. They are both of the same form, therefore we only treat one of the terms.
In this case we can apply Lemma \ref{multilinear} with \begin{align*} \widehat{g}(t,\xi,\eta_{n-1}) &= \textbf{1}_{(1.1^m;1.1^{m+1})}(t) \int_{\mathbb{R}^3} \frac{it P_{k_n '}(2\eta_n - \eta_{n-1}) \eta_{n,l}}{\vert \xi \vert^2 - \vert \eta_n \vert ^2 - \vert \eta_{n-1} - \eta_n \vert ^2} e^{it (\vert \xi \vert^2 - \vert \eta_n \vert ^2 - \vert \eta_{n-1} - \eta_n \vert ^2)} \\ & \times \widehat{f_{k_n}}(t,\eta_n) \widehat{f_{k_{n+1}}}(t,\eta_{n-1}-\eta_n) d\eta_n, \end{align*} as well as Lemma \ref{bilin} and \ref{dispersive} to obtain the following bound: \begin{align*} \Vert I_9 ^n f \Vert_{L^{\infty}_t L^2_x } & \lesssim 1.1^m 1.1^{-k_n} 1.1^{-m} 1.1^{-m/2} \delta^n C^n G(\textbf{k}) \varepsilon_1 ^2 \\ & \lesssim 1.1^m 1.1^{101/224 m} 1.1^{-3m/2} \delta^n C^n G(\textbf{k}) \varepsilon_1 ^2. \end{align*} This expression can be summed. \\ After the integration by parts in time we also obtain the following main terms: (here for better legibility we only write the last part of the integral) \begin{align*} & \int_{1.1^m}^{1.1^{m+1}} \int_{\eta_n} \frac{i s \eta_{n,l}P_{k_n'}(2\eta_n-\eta_{n-1})}{\vert \xi \vert^2 - \vert \eta_n \vert^2 - \vert \eta_{n-1}-\eta_n \vert^2} \\ & \times e^{is(\vert \eta_{n-1} \vert^2 - \vert \eta_n \vert^2 - \vert \eta_{n-1}- \eta_n \vert^2)} \partial_s \widehat{f_{k_n}} (s,\eta_n) \widehat{f_{k_{n+1}}}(s,\eta_{n-1}-\eta_n) d\eta_n ds \\ &= \int_{1.1^m}^{1.1^{m+1}} \int_{\eta_n} \frac{i s \eta_{n,l}P_{k_n'}(2\eta_n-\eta_{n-1})}{\vert \xi \vert^2 - \vert \eta_n \vert^2 - \vert \eta_{n-1}-\eta_n \vert^2} \\ & \times e^{is(\vert \eta_{n-1} \vert^2 - \vert \eta_n \vert^2 - \vert \eta_{n-1}- \eta_n \vert^2)} \widehat{f_{k_{n+1}}}(s,\eta_{n-1}-\eta_n) P_{k_n}(\eta_n) \int_{\eta_{n+1} \in \mathbb{R}^3} \widehat{V}(s,\eta_n - \eta_{n+1}) \widehat{u}(s,\eta_{n+1}) d\eta_{n+1} d\eta_n ds \\ &+ \int_{1.1^m}^{1.1^{m+1}} \int_{\eta_n} \frac{i s \eta_{n,l}P_{k_n'}(2\eta_n-\eta_{n-1})}{\vert \xi \vert^2 - \vert \eta_n \vert^2 - \vert \eta_{n-1}-\eta_n 
\vert^2} \\ & \times e^{is(\vert \eta_{n-1} \vert^2 - \vert \eta_n \vert^2 - \vert \eta_{n-1}- \eta_n \vert^2)} \widehat{f_{k_{n+1}}}(s,\eta_{n-1}-\eta_n) P_{k_n}(\eta_n) \int_{\mathbb{R}^3} \widehat{u}(s,\eta_{n} - \eta_{n+1}) \widehat{u}(s,\eta_{n+1}) d\eta_{n+1} d\eta_n ds \\ & + \int_{1.1^m}^{1.1^{m+1}} \int_{\eta_n} \frac{i s \eta_{n,l}P_{k_n'}(2\eta_n-\eta_{n-1})}{\vert \xi \vert^2 - \vert \eta_n \vert^2 - \vert \eta_{n-1}-\eta_n \vert^2} e^{is(\vert \eta_{n-1} \vert^2 - \vert \eta_n \vert^2 - \vert \eta_{n-1}- \eta_n \vert^2)} \widehat{f_{k_{n+1}}}(s,\eta_{n-1}-\eta_n) \\ & \times P_{k_n}(\eta_n) \int_{\eta_{n+1} \in \mathbb{R}^3} \eta_{n+1,i} \widehat{a_i}(s,\eta_n - \eta_{n+1}) \widehat{u}(s,\eta_{n+1}) d\eta_{n+1} d\eta_n ds \\ &:= I+II+III. \end{align*} The terms $I$ and $II$ are already present in \cite{L} and they can be dealt with following the exact same strategy here. Therefore we omit the details for these two terms and focus on $III$ which, although it is very close to $I$ in \cite{L}, was not present. \\ \\ Using the observation above \eqref{symbol1} as well as our usual multilinear lemmas, we write that \begin{align} \label{dernier} \Vert III \Vert_{L^2_x} & \lesssim G(\textbf{k}) \delta^n C^n \Bigg \Vert \mathcal{F}^{-1} \int_{\eta_n} P_{k_n} (\eta_n) \frac{it \eta_{n,l}P_{k_n'}(2\eta_n - \eta_{n-1})}{\vert \xi \vert^2 - \vert \eta_n \vert^2 - \vert \eta_{n-1}-\eta_n \vert^2} \\ \notag & \times \mathcal{F}(a_i \partial_{x_i} u)(t,\eta_n) \widehat{u_{k_{n+1}}}(t,\eta_{n-1}-\eta_n) d\eta_n \Bigg \Vert_{L^{4/3}_t L^{3/2}_x} \\ \notag & \lesssim 1.1^{-k_n} G(\textbf{k}) \delta^n C^n \Vert t (a_i \partial_{x_i} u)_{k_n} \Vert_{L^{\infty}_t L^2 _x} \Vert u_{k_{n+1}} \Vert_{L^{4/3}_t L^{6}_x} \\ \notag & \lesssim G(\textbf{k}) \delta^n C^n \Vert t (a_i \partial_{x_i} u)_{k_n} \Vert_{L^{\infty}_t L^{6/5} _x} \Vert u_{k_{n+1}} \Vert_{L^{4/3}_t L^{6}_x}.
\end{align} Now we look at the term $a_i \partial_{x_i} u$ and decompose the frequency variable dyadically (we denote by $k_{n+2}$ the corresponding exponent). This reads \begin{align*} a_i \partial_{x_i} u=(2\pi)^{-3} \sum_{k_{n+2} \in \mathbb{Z}} \mathcal{F}^{-1} \int_{\eta_{n+2}} \widehat{a_i}(\eta_n-\eta_{n+2}) \eta_{n+2,i} \widehat{u_{k_{n+2}}}(\eta_{n+2}) d\eta_{n+2} . \end{align*} \underline{Case 1: $\vert k_{n+2} - k_n \vert \leqslant 1 $} \\ There are $O(m)$ terms in that sum on $k_{n+2}.$ Then, using dispersive estimates, we obtain \begin{align*} \Vert III \Vert_{L^2 _x} & \lesssim \sum_{k_{n+2}} 1.1^{k_{n+2}-k_n} G(\textbf{k}) \delta^n C^n \Vert t(a_i u) \Vert_{L^2_x} \Vert u_{k_{n+1}} \Vert_{L^{4/3}_t L^6 _x} \\ & \lesssim \sum_{k_{n+2}} \delta^n C^n G(\textbf{k}) \Vert a_i \Vert_{L^3_x} \Vert t u \Vert_{L^{\infty}_t L^6 _x} \Vert u_{k_{n+1}} \Vert_{L^{4/3}_t L^6 _x} \\ & \lesssim \sum_{k_{n+2}} 1.1^{-m/4+} \delta^n C^n G(\textbf{k}) \delta \varepsilon_1^2, \end{align*} and we are done in this case.
\\ \underline{Case 2: $k_{n+2} > k_n +1$} \\ Then $a_i$ is localized at frequency roughly $1.1^{k_{n+2}}$ and we can write that \begin{align*} \Vert III \Vert_{L^2_x} & \lesssim \sum_{k_{n+2}} 1.1^{k_{n+2}-k_n} \Vert t a_{i,k_{n+2}} u \Vert_{L^{2}_x} \Vert u_{k_{n+1}} \Vert_{L^{4/3}_t L^6 _x} \delta^n C^n G(\textbf{k}) \\ & \lesssim \sum_{k_{n+2}} 1.1^{k_{n+2}-k_n} \Vert a_{i,k_{n+2}} \Vert_{L^{3}_x} \Vert t u \Vert_{L^{\infty}_t L^6 _x} \Vert u_{k_{n+1}} \Vert_{L^{4/3}_t L^6 _x} \delta^n C^n G(\textbf{k}) \\ & \lesssim \sum_{k_{n+2}} 1.1^{k_{n+2}-k_n} \Vert a_{i,k_{n+2}} \Vert_{L^{3}_x} 1.1^{-m/4+} C^n \delta^n G(\textbf{k}) \varepsilon_1 ^2, \end{align*} and we are done in this case as well since we can sum over $k_{n+2}.$ \\ \underline{Case 3: $k_{n+2} < k_n-1$} \\ We write that \begin{align*} \Vert III \Vert_{L^2_x} & \lesssim \sum_{k_{n+2}} \delta^n C^n G(\textbf{k}) 1.1^{k_{n+2}-k_n} \Vert t (a_i u) \Vert_{L^2_x} \Vert u_{k_{n+1}} \Vert_{L^{4/3}_t L^6 _x} \\ & \lesssim \sum_{k_{n+2}} 1.1^{k_{n+2}-k_n} \delta^n C^n G(\textbf{k}) \Vert a_{i} \Vert_{L^3_x} \Vert t u \Vert_{L^{\infty}_t L^6 _x} \Vert u_{k_{n+1}} \Vert_{L^{4/3}_t L^6 _x} , \end{align*} and we can conclude using Lemma \ref{summation} and the fact that $k_{n+2}<k_n-1$ to sum this bound. \end{proof} \section{Energy estimate} \label{energy} Here we prove the $H^{10}$ estimate on the solution. The method is, as in the proof of \eqref{goal1}, to expand the solution as a series. This case however is simpler, in the sense that only integrations by parts in time are required. In other words the series is genuinely obtained by repeated applications of the Duhamel formula. The terms of the series are then estimated using lemmas from Section \ref{multilinkey}. 
\\ \\ First recall that the bilinear part of the Duhamel formula has already been estimated in \cite{L}, Lemma 8.1: \begin{lemma}\label{H10bilin} We have the bound \begin{align*} \Bigg \Vert P_k (\xi) \int_1 ^t \int_{\mathbb{R}^3} e^{is(\vert \xi \vert^2 - \vert \eta_1 \vert^2 - \vert \xi - \eta_1 \vert^2)} \widehat{f}(s,\eta_1) \widehat{f}(s,\xi-\eta_1) d\eta_1 ds \Bigg \Vert_{H^{10}_x} \lesssim \varepsilon_1 ^2. \end{align*} \end{lemma} Therefore we must estimate the $H^{10}$ norms of the potential parts. \\ Now we expand the solution as a series by repeated integrations by parts in time for the potential parts (with suitable regularizations when the phase is close to 0). \\ At the $n$-th step of this process we obtain the following terms: \begin{align*} &\mathcal{F} J_1 ^n f := \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-1} \widehat{f}(s,\eta_n) d\eta_n \end{align*} which is the boundary term produced by the integration by parts. \\ There are also the main terms \begin{align*} &\mathcal{F} J_2 ^n f := \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ... d\eta_{n-2} \\ & \times \int_1 ^t \int_{\eta_n} e^{is(\vert \xi \vert^2 - \vert \eta_n \vert^2 - \vert \eta_{n-1}-\eta_n \vert^2)} \widehat{f}(s,\eta_{n-1}-\eta_n) \widehat{f}(s,\eta_n) d\eta_n ds d\eta_{n-1} \end{align*} and \begin{align} \label{lastterm} &\mathcal{F} J_3 ^n f := \int \prod_{\gamma=1}^{n-1} \frac{\alpha_{\gamma}(\eta_{\gamma})\widehat{W_{\gamma}}(\eta_{\gamma-1}-\eta_{\gamma})P_{k_{\gamma}}(\eta_{\gamma}) P_k(\xi)}{\vert \xi \vert^2 - \vert \eta_{\gamma} \vert^2} d\eta_1 ...
d\eta_{n-2} \\ \notag & \times \int_1 ^t \int_{\eta_n} e^{is(\vert \xi \vert^2 - \vert \eta_n \vert^2)} \widehat{W_n}(\eta_{n-1}-\eta_n) \alpha_{n}(\eta_n) \widehat{f}(s,\eta_n) d\eta_n ds d\eta_{n-1}. \end{align} To obtain the next terms in the expansion, we integrate by parts in time in that last term. Therefore, to show that the series converges in $H^{10}$ and to estimate its size, it is enough to estimate only the first two iterates. \\ The following lemma gives a bound on the $H^{10}$ norm of $J_1 ^n f$: \begin{lemma} \label{en-1} We have: \begin{align*} \Vert J_1 ^n f \Vert_{H^{10}_x} \lesssim C^n G(\textbf{k}) \delta^n \varepsilon_1. \end{align*} \end{lemma} \begin{proof} The proof is almost the same as in \cite{L} Lemma 8.2, therefore it is omitted. \end{proof} Finally we estimate the $H^{10}$ norm of $J_2 ^n f.$ \begin{lemma} We have the following bound on the $H^{10}$ norm of the solution: \begin{align*} \Vert J_2 ^n f \Vert_{H^{10}_x} \lesssim C^n G(\textbf{k}) \delta^n \varepsilon_1. \end{align*} \end{lemma} \begin{remark} \label{rate} In the case where the time integral $\int_1 ^t $ is replaced by $\int_{\tau} ^t $ for some $\tau < t,$ a similar estimate holds (the only modification is that the right-hand side has an additional $\tau^{-a}$ for some $a>0$). \end{remark} \begin{proof} Since the corresponding proof was omitted in \cite{L}, we carry it out here for completeness. It is almost identical to that of Lemma 8.2 in \cite{L}.\\ We start by decomposing the $\xi$ and $\eta_n$ variables dyadically. We denote by $k_n$ and $k$ the corresponding exponents.
\\ \underline{Case 1: $k_{n} \geqslant k-1$} \\ Using Lemma \ref{multiin} with $(p,q) = (4,3)$ and \begin{align*} \widehat{g_{k_{n-1}}}(s,\eta_{n-1}) = \int_{\eta_n} e^{-is\vert \eta_n \vert^2} \widehat{f_{k_n}}(s,\eta_n) \widehat{f}(s,\eta_{n-1}-\eta_n) d\eta_n, \end{align*} we obtain by Lemma \ref{summation} and the energy bound \begin{align*} 1.1^{10 k^{+}} \Vert P_k (\xi) J_2 ^n f \Vert_{L^2 _x} & \lesssim 1.1^{10 k^{+}} C^n G(\textbf{k}) \delta^n \Vert f_{k_n} \Vert_{L^{\infty}_t L^2_{x}} \Vert e^{it\Delta} f \Vert_{L^{4/3}_t L^6 _x} \\ & \lesssim 1.1^{10(k^{+}-k_n^{+})} G(\textbf{k}) \delta^n C^n \varepsilon_1 ^2. \end{align*} \underline{Case 2: $k_n < k-1$} \\ \underline{Subcase 2.1: $\forall j \in \lbrace 1; ...; n \rbrace, k_j<k-1 $} \\ Then the first potential in the product $\widehat{W_1}(\xi-\eta_1)$ is localized at frequency $1.1^k.$ \\ Therefore we can carry out the same proof as above and obtain \begin{align*} 1.1^{10 k^{+}} \Vert P_k (\xi) J_2 ^n f \Vert_{L^2 _x} & \lesssim C^n G(\textbf{k}) \delta^n 1.1^{10k^{+}} \Vert W_{1,k} \Vert_{Y} \Vert f \Vert_{L^{\infty}_t L^2_{x}} \Vert e^{it\Delta} f_{k_n} \Vert_{L^{4/3}_t L^6 _x}, \end{align*} which can be summed over $k$, adding an additional $\delta$ factor to the product.
\\ \underline{Subcase 2.2: $\exists j \in \lbrace 1;...; n \rbrace, k_j \geqslant k-1.$} \\ Let $j'=\max \lbrace j ; k_j \geqslant k-1 \rbrace.$ \\ If $k_{j'}>k+1,$ then $W_{j'+1}(\eta_{j'}-\eta_{j'+1})$ (with the convention that $W_{n}=f$; indeed, in this case the second $f$ factor would be localized at $1.1^{k_{n-1}}$ since $k_{n-1}>k+1>k_n+2$) is localized at frequency $1.1^{k_{j'}}.$ We can then conclude as in the above case by effectively absorbing the $1.1^{10 k^{+}}$ factor at the price of replacing $W_{j'+1}$ by $\nabla^{10} W_{j'+1}.$ \\ If $\vert k_{j'}-k \vert \leqslant 1,$ then if there exists some $j'' \in \lbrace j'+1,...,n \rbrace $ such that $\vert k_{j''} - k_{j''-1} \vert > 1,$ the factor $\widehat{W_{j''}}$ is localized at frequency $1.1^{k_{j''}}.$ But since there are $n$ terms in the product, $k_{j''} \geqslant k-n-1.$ Therefore by the same proof as above: \begin{align*} 1.1^{10 k^{+}} \Vert P_k (\xi) J_2 ^n f \Vert_{L^2 _x} & \lesssim C^n 1.1^{10 n} G(\textbf{k}) \delta^n 1.1^{10(k^{+}-n-k_{j''})} \Vert \nabla^{10} W_{j'',k_{j''}} \Vert_{Y} \Vert f \Vert_{L^{\infty}_t L^2_{x}} \Vert e^{it\Delta} f_{k_n} \Vert_{L^{4/3}_t L^6 _x} \end{align*} and we get the desired result. \\ Finally if $\forall j'' \geqslant j'+1,\vert k_{j''} - k_{j''-1} \vert \leqslant 1, $ then $k_n \geqslant k-n.$ Then we can conclude by writing that \begin{align*} 1.1^{10 k^{+}} \Vert P_k (\xi) J_2 ^n f \Vert_{L^2 _x} & \lesssim C^n G(\textbf{k}) 1.1^{10 n} \delta^n 1.1^{10(k^{+}-n-k_{n})} 1.1^{10 k_n ^{+}} \Vert f_{k_n} \Vert_{L^{\infty}_t L^2_{x}} \Vert e^{it\Delta} f \Vert_{L^{4/3}_t L^6 _x}. \end{align*} \end{proof} \begin{proof}[Proof of \eqref{goal2}] We conclude with the proof of \eqref{goal2}.
Note that since at each step of the iteration $O(4^n)$ terms appear, given the estimates proved in this section we can write that there exists some large constant $D$ such that \begin{align*} \Vert f \Vert_{H^{10}_x} \leqslant \varepsilon_0 + D \sum_{n=0}^{+\infty} \delta^n 4^n C^n \varepsilon_1 \leqslant \frac{\varepsilon_1}{2} \end{align*} for $\delta$ small enough. \end{proof} \appendix \section{Basic estimates} In this appendix we prove a few basic estimates based on dispersive properties of the free Schr\"{o}dinger flow. \begin{lemma}\label{dispersive} We have \begin{equation*} \Vert e^{it \Delta} f_k \Vert_{L^6_x} \lesssim \frac{\varepsilon_1}{t}. \end{equation*} \end{lemma} \begin{proof} See for example \cite{L}, Lemma 3.5. \end{proof} For the summations we will need the following lemma: \begin{lemma} \label{summation} For any $p, q \geqslant 1$ and $1>c >0$, \begin{align*} \Vert e^{it \Delta} f_{k} \Vert_{L^p _t L^q _x} \lesssim \Vert e^{it \Delta} f_{k} \Vert_{L^{p(1-c)} _t L^q _x}^{1-c} 1.1^{-8c k} \varepsilon_1^{c} \end{align*} and \begin{align*} \Vert e^{it \Delta} f_{k} \Vert_{L^p _t L^q _x} \lesssim 1.1^{\frac{3c}{q(1-c)}k} \Vert e^{it \Delta} f_{k} \Vert_{L^{p} _t L^{q(1-c)} _x}. \end{align*} \end{lemma} \begin{proof} We write, using Sobolev embedding and the energy bound, \begin{align*} \Vert e^{it \Delta} f_{k} \Vert_{L^q _x} & \lesssim \Vert e^{it \Delta} f_{k} \Vert_{L^q _x} ^{1-c} \Vert e^{it \Delta} f_{k} \Vert_{H^{2}_x}^{c} \\ & \lesssim \Vert e^{it \Delta} f_{k} \Vert_{L^q _x} ^{1-c} 1.1^{-8kc} \varepsilon_1 ^c \end{align*} and then we take the $L^p _t$ norm of that expression and obtain \begin{align*} \Vert e^{it \Delta} f_{k} \Vert_{L^p _t L^q _x} \lesssim \Vert e^{it \Delta} f_{k} \Vert_{L^{p(1-c)} _t L^q _x}^{1-c} 1.1^{-8c k} \varepsilon_1^{c}.
\end{align*} Similarly, using Bernstein's inequality, \begin{align*} \Vert e^{it \Delta} f_{k} \Vert_{L^q _x} \lesssim 1.1^{\frac{3c}{q(1-c)}k} \Vert e^{it \Delta} f_{k} \Vert_{L^{q(1-c)} _x}, \end{align*} and we take the $L^p _t$ norm of that expression and obtain the result. \end{proof} \section{Technical Lemmas} In this section we collect a number of basic technical lemmas that are used in the paper. \\ The following lemma decomposes the frequency space according to a dominant direction: \begin{lemma} \label{direction} There exist three functions $\chi_j : \mathbb{R}^3 \longrightarrow \mathbb{R}$ such that \begin{itemize} \item $1 = \chi_1 + \chi_2 + \chi_3.$ \item On the support of $\chi_j,$ we have $\vert \xi_j \vert \geqslant \frac{9}{10} \max \lbrace \vert \xi_k \vert \; ; \; k=1,2,3 \rbrace.$ \end{itemize} \end{lemma} \begin{proof} See Appendix A of \cite{PuM}. \end{proof} We record a basic bilinear estimate: \begin{lemma}\label{bilin} The following inequality holds \begin{equation} \Big \Vert \mathcal{F}^{-1} \int_{\mathbb{R}^3} m(\xi,\eta) \widehat{f}(\xi-\eta) \widehat{g}(\eta) d\eta \Big \Vert_{L^r} \lesssim \Vert \mathcal{F}^{-1}(m(\xi-\eta,\eta)) \Vert_{L^1} \Vert f \Vert_{L^p} \Vert g \Vert_{L^q} \end{equation} where $1/r = 1/p+1/q.$ \end{lemma} \begin{proof} See \cite{L}, Lemma 3.1 for example. \end{proof} The following bound on the $X'$-norm is convenient, as this norm appears naturally in the estimates. \begin{lemma} \label{X'} Define the $X'-$norm as \begin{align*} \Vert f \Vert_{X'} = \sup_{k \in \mathbb{Z}} \Vert \big( \nabla_{\xi} \widehat{f} \big) P_k (\xi) \Vert_{L^2} . \end{align*} Then \begin{align*} \Vert f \Vert_{X'} \lesssim \Vert f \Vert_X. \end{align*} \end{lemma} \begin{proof} See \cite{L}, Lemma 3.8. \end{proof} \section{Scattering} \label{scattering} In this section we prove the scattering statement in Theorem \ref{maintheorem}. This is essentially a consequence of estimates proved earlier in the paper.
\\ We start with the expansion from Section \ref{energy}: \begin{align*} \widehat{f}(t) &= e^{i \vert \xi \vert^2} \widehat{u_1}(\xi) - i \int_1 ^t e^{is \vert \xi \vert^2} \int_{\mathbb{R}^3} e^{-is (\vert \xi-\eta_1 \vert^2 + \vert \eta_1 \vert^2)} \widehat{f}(s,\eta_1) \widehat{f}(s,\xi-\eta_1) d\eta_1 ds \\ &+ \sum_{k=0}^{+\infty} \sum_{n=2}^{+\infty} \sum_{k_1,...,k_{n-1} \in \mathbb{Z}} \sum_{\gamma=1}^{n-1} \sum_{W_{\gamma} \in \lbrace a_1,a_2,a_3,V \rbrace} \frac{i^{n+1}}{(2\pi)^{3(n-1)}} \mathcal{F} J^{n}_1 \\ &+\sum_{k=0}^{+\infty} \sum_{n=2}^{+\infty} \sum_{k_1,...,k_{n-1} \in \mathbb{Z}} \sum_{\gamma=1}^{n-1} \sum_{W_{\gamma} \in \lbrace a_1,a_2,a_3,V \rbrace} \frac{i^{n-1}}{(2\pi)^{3(n+1)}} \mathcal{F} J^{n}_2 . \end{align*} We define the operator $W:H^{10}_x \rightarrow H^{10}_x$ as \begin{align*} W u(t) = u(t) - e^{it\Delta} \sum_{k=0}^{+\infty} \sum_{n=2}^{+\infty} \sum_{k_1,...,k_{n-1} \in \mathbb{Z}} \sum_{\gamma=1}^{n-1} \sum_{W_{\gamma} \in \lbrace a_1,a_2,a_3,V \rbrace} \frac{i^{n+1}}{(2\pi)^{3(n-1)}} J^{n}_1. \end{align*} The boundedness of $W$ for small enough $\delta>0$ is a consequence of Lemma \ref{en-1}. \\ Let $1<\tau<t.$ With the above expansion, we obtain the estimate (using Remark \ref{rate}) \begin{align*} \big \Vert Wu(t) - Wu(\tau) \big \Vert_{H^{10}_x} & \leqslant \bigg \Vert \mathcal{F}^{-1}_{\xi} \int_{\tau} ^t e^{is \vert \xi \vert^2} \int_{\mathbb{R}^3} e^{-is (\vert \xi-\eta_1 \vert^2 + \vert \eta_1 \vert^2)} \widehat{f}(s,\eta_1) \widehat{f}(s,\xi-\eta_1) d\eta_1 ds \bigg \Vert_{H^{10}_x} \\ &+ \bigg \Vert \sum_{k=0}^{+\infty} \sum_{n=2}^{+\infty} \sum_{k_1,...,k_{n-1} \in \mathbb{Z}} \sum_{\gamma=1}^{n-1} \sum_{W_{\gamma} \in \lbrace a_1,a_2,a_3,V \rbrace} \frac{i^{n-1}}{(2\pi)^{3(n+1)}} \big(J^{n}_2(t)-J^{n}_2(\tau) \big) \bigg \Vert_{H^{10}_x} \\ & \lesssim \tau^{-a} \varepsilon_1 ^2 + \tau^{-a} \sum_{n=2}^{+\infty} 4^n C^{n} \delta^{n-1} \varepsilon_1 ^2 .
\end{align*} This shows that if $\delta>0$ is small enough, $\big(e^{-it\Delta} W u(t)\big)$ is Cauchy in $H^{10}_x$ and the scattering statement follows. In fact a closer inspection of the proof of Lemma \ref{en-1} would yield a quantitative polynomial decay rate for the above convergence. \end{document}
\begin{document} \begin{frontmatter} \title{Global well posedness and scattering for the elliptic and non-elliptic derivative nonlinear Schr\"odinger equations with small data} \author{Wang Baoxiang\corauthref{*}}\ead{[email protected]} \corauth[*]{Corresponding author.} \address{LMAM, School of Mathematical Sciences, Peking University, Beijing 100871, People's Republic of China} \date{March 10, 2008} \begin{abstract} \rm We study the Cauchy problem for the generalized elliptic and non-elliptic derivative nonlinear Schr\"odinger equations; the existence of the scattering operators and the global well posedness of solutions with small data in Besov spaces $B^s_{2,1}(\mathbb{R}^n)$ and in modulation spaces $M^s_{2,1}(\mathbb{R}^n)$ are obtained. In one spatial dimension, we get the sharp well posedness result with small data in critical homogeneous Besov spaces $\dot B^s_{2,1}$. As a by-product, the existence of the scattering operators with small data is also shown. In order to show these results, the global versions of the estimates for the maximal functions on the elliptic and non-elliptic Schr\"odinger groups are established. \end{abstract} \begin{keyword} \rm Derivative nonlinear Schr\"odinger equation, elliptic and non-elliptic cases, estimates for the maximal function, global well posedness, small data. \\ {\it MSC:} 35 Q 55, 46 E 35, 47 D 08.
\end{keyword} \end{frontmatter} \section{Introduction} We consider the Cauchy problem for the generalized derivative nonlinear Schr\"odinger equation (gNLS) \begin{align} {\rm i} u_t + \Delta_\pm u = F(u, \bar{u}, \nabla u, \nabla \bar{u}), \quad u(0,x)= u_0(x), \label{gNLS} \end{align} where $u$ is a complex valued function of $(t,x)\in \mathbb{R} \times \mathbb{R}^n$, \begin{align} \Delta_\pm u = \sum^n_{i=1} \varepsilon_i \partial^2_{x_i} u, \quad \varepsilon_i \in \{1,\, -1\}, \quad i=1,...,n, \label{Delta-pm} \end{align} $\nabla =(\partial_{x_1},..., \partial_{x_n})$, $F: \mathbb{C}^{2n+2} \to \mathbb{C}$ is a polynomial, \begin{align} F(z) = P(z_1,..., z_{2n+2})= \sum^{}_{m+1\le |\beta|\le M+1} c_\beta z^\beta, \quad c_\beta \in \mathbb{C}, \label{poly} \end{align} where $m, M\in \mathbb{N}$ will be given below. There is a large literature devoted to the study of \eqref{gNLS}. Roughly speaking, three kinds of methods have been developed for the local and global well posedness of \eqref{gNLS}. The first one is the energy method, which is mainly useful in the elliptic case $\Delta_\pm=\Delta=\partial^2_{x_1}+...+ \partial^2_{x_n}$, see Klainerman \cite{Klai}, Klainerman and Ponce \cite{Kl-Po}, where global classical solutions were obtained for small Cauchy data with sufficient regularity and decay at infinity; there $F$ is assumed to satisfy the energy structure condition ${\rm Re} \;\partial F/\partial (\nabla u) =0$. Chihara \cite{Chih1,Chih2} removed the condition ${\rm Re}\; \partial F/\partial (\nabla u) =0$ by using smoothing operators and commutator estimates between first order partial differential operators and ${\rm i} \partial_t+ \Delta$; suitable decay conditions on the Cauchy data are still required in \cite{Chih1,Chih2}. Recently, Ozawa and Zhang \cite{Oz-Zh} removed the assumptions on the decay at infinity of the initial data.
They obtained that if $n\ge 3$, $s>n/2+2$, $u_0 \in H^s$ is small enough, and $F$ is a smooth function vanishing to the third order at the origin with ${\rm Re}\; \partial F/ \partial (\nabla u)= \nabla(\theta(|u|^2))$, $\theta\in C^2, \; \theta(0)=0$, then \eqref{gNLS} has a unique classical global solution $u\in (C_w \cap L^\infty)(\mathbb{R}, H^s) \cap C(\mathbb{R}, H^{s-1}) \cap L^2(\mathbb{R}; H^{s-1}_{2n/(n-2)})$. The main tools used in \cite{Oz-Zh} are the gauge transform techniques and the energy method together with the endpoint Strichartz estimates. The second method consists in using $X^{s,b}$-type spaces, see Bourgain \cite{Bour}; it has been developed by many authors (see \cite{Be-Ta,CKSTT,Grun} and references therein). This method depends on both the dispersive property of the linear equation and the structure of the nonlinearities, and is very useful for low regularity initial data. The third method mainly uses the dispersive smoothing effects of the linear Schr\"odinger equation, see Kenig, Ponce and Vega \cite{KePoVe1,KePoVe2}. The crucial point is that the Schr\"odinger group has the following local smoothing effects ($n\ge 2$): \begin{align} & \sup_{\alpha\in \mathbb{Z}^n}\|e^{{\rm i}t \Delta} u_0\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} \lesssim \|u_0\|_{\dot H^{-1/2}}, \label{SSM-1}\\ & \sup_{\alpha\in \mathbb{Z}^n} \left\|\nabla \int^t_0 e^{{\rm i}(t-s) \Delta} f(s) ds \right\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} \lesssim \sum_{\alpha\in \mathbb{Z}^n} \|f\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)}, \label{SSM-2} \end{align} where $Q_\alpha$ is the unit cube with center at $\alpha$. Estimate \eqref{SSM-2} exhibits a gain of one derivative, which can be used to control the derivative terms in the nonlinearities. Such smoothing effect estimates also hold for the non-elliptic Schr\"odinger group, i.e., \eqref{SSM-1} and \eqref{SSM-2} still hold if we replace $e^{{\rm i}t \Delta} $ by $e^{{\rm i}t \Delta_\pm}$.
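Let us point out, as an elementary illustration added here for the reader's convenience, why the non-elliptic case is as dispersive as the elliptic one: the Schr\"odinger kernel factorizes over the coordinates, so the signs $\varepsilon_i$ do not affect the $L^1\to L^\infty$ decay. Indeed,
\begin{align*}
\big(e^{{\rm i}t \Delta_\pm} u_0\big)(x) = \prod^n_{i=1} (4\pi {\rm i}\, \varepsilon_i t)^{-1/2} \int_{\mathbb{R}^n} e^{{\rm i}\sum^n_{i=1} \varepsilon_i |x_i-y_i|^2/4t}\, u_0(y)\, dy,
\end{align*}
whence $\|e^{{\rm i}t \Delta_\pm} u_0\|_\infty \lesssim |t|^{-n/2} \|u_0\|_1$, exactly as for $e^{{\rm i}t \Delta}$; what is lost in the non-elliptic case is only the ellipticity-based energy structure.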
Some earlier estimates related to \eqref{SSM-1} were due to Constantin and Saut \cite{Co-Sa}, Sj\"olin \cite{Sjol} and Vega \cite{Vega}. In \cite{KePoVe1,KePoVe2}, the local well posedness of \eqref{gNLS} in both elliptic and non-elliptic cases was established for sufficiently smooth large Cauchy data ($m\ge 1$, $u_0\in H^s$ with $s>n/2$ large enough). Moreover, they showed that the solutions are almost global if the initial data are sufficiently small, i.e., the maximal existence time of the solutions tends to infinity as the initial data tend to 0. Recently, the local well posedness results have been generalized to the quasi-linear (ultrahyperbolic) Schr\"odinger equations, see \cite{KePoVe3,KePoRoVe}. As far as the author can see, the existence of the scattering operators for Eq. \eqref{gNLS} and the global well posedness of \eqref{gNLS} in the non-elliptic cases are unknown. \subsection{Main results} In this paper, we mainly apply the third method to study the global well posedness and the existence of the scattering operators of \eqref{gNLS} in both the elliptic and non-elliptic cases with small data in $B^s_{2,1}$, $s> 3/2+n/2$. We now state our main results; the notations used in this paper can be found in Sections \ref{Notations} and \ref{functionspace}. \begin{thm} \label{GWP-nD} Let $n\ge 2$ and $s>n/2 +3/2$. Let $F(z)$ be as in \eqref{poly} with $2+4/n \le m\le M<\infty $. We have the following results.
{\rm (i)} If $ \|u_0\|_{B^{s}_{2,1}} \le \delta$ for $n\ge 3$, and $ \|u_0\|_{B^{s}_{2,1} \cap \dot H^{-1/2}} \le \delta $ for $n=2$, where $\delta>0$ is a suitably small number, then \eqref{gNLS} has a unique global solution $ u\in C(\mathbb{R}, \; B^{s}_{2,1} ) \cap X_0, $ where \begin{align} X_0=\left\{u \; : \; \begin{array}{l} \|D^\beta u \|_{\ell^{1,s-1/2}_\triangle \ell^\infty_\alpha (L^2_{t,x}(\mathbb{R}\times Q_\alpha))} \lesssim \delta, \ \ |\beta|\le 1 \\ \|D^\beta u \|_{\ell^{1,s-1/2}_\triangle \ell^{2+4/n}_\alpha (L^\infty_{t,x} \cap (L^{2m}_t L^\infty_x) (\mathbb{R}\times Q_\alpha))} \lesssim \delta, \ \ |\beta|\le 1 \end{array} \right\}. \label{X0-space} \end{align} Moreover, for $n\ge 3$, the scattering operator of Eq. \eqref{gNLS} carries the ball $\{u:\, \|u\|_{B^s_{2,1}} \le \delta\}$ into $B^s_{2,1}$. {\rm (ii)} If $s+1/2 \in \mathbb{N}$ and $ \|u_0\|_{H^{s}} \le \delta$ for $n\ge 3$, and $ \|u_0\|_{H^{s} \cap \dot H^{-1/2}} \le \delta $ for $n=2$, where $\delta>0$ is a suitably small number, then \eqref{gNLS} has a unique global solution $ u\in C(\mathbb{R}, \; H^{s} ) \cap X, $ where \begin{align} X=\left\{u \; : \; \begin{array}{l} \|D^\beta u \|_{\ell^\infty_\alpha (L^2_{t,x}(\mathbb{R}\times Q_\alpha))} \lesssim \delta, \; |\beta|\le s+1/2 \\ \|D^\beta u \|_{\ell^{2+4/n}_\alpha (L^\infty_{t,x} \cap (L^{2m}_t L^\infty_x) (\mathbb{R}\times Q_\alpha))} \lesssim \delta, \; |\beta|\le 1 \end{array} \right\}. \label{X-space} \end{align} Moreover, for $n\ge 3$, the scattering operator of Eq. \eqref{gNLS} carries the ball $\{u:\, \|u\|_{H^s} \le \delta\}$ into $H^s$. \end{thm} We now illustrate the proof of (ii) in Theorem \ref{GWP-nD}. Let us consider the equivalent integral equation \begin{align} u(t) =S(t)u_0 - {\rm i}\mathscr{A} F(u, \bar{u}, \nabla u, \nabla \bar{u} ), \label{int-equat} \end{align} where \begin{align} S(t): = e^{{\rm i} t\Delta_\pm}, \ \ \ \mathscr{A}f: = \int^t_0 e^{{\rm i}(t-s) \Delta_\pm} f(s) ds. 
\label{S(t)} \end{align} If one applies the local smoothing effect estimate \eqref{SSM-2} to control the derivative terms in the nonlinearities, then the working space should contain the space $\ell^\infty_\alpha(L^2_{t,x}(\mathbb{R}\times Q_\alpha))$. For simplicity, we consider the case $ F(u, \bar{u}, \nabla u, \nabla \bar{u} )$ $ = (\partial_{x_1} u)^{\nu+1}$. By \eqref{SSM-1} and \eqref{SSM-2}, we immediately have \begin{align} \|\nabla u\|_{\ell^\infty_\alpha (L^2_{t,x} (\mathbb{R}\times Q_\alpha))} & \lesssim \|u_0\|_{H^{1/2}} +\sum_{\alpha\in \mathbb{Z}^n} \|(\partial_{x_1} u)^{\nu+1}\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} \nonumber\\ & \lesssim \|u_0\|_{H^{1/2}} + \|\nabla u\|_{\ell^\infty_\alpha (L^2_{t,x} (\mathbb{R}\times Q_\alpha))} \|\nabla u\|^{\nu}_{\ell^\nu_\alpha (L^\infty_{t,x} (\mathbb{R}\times Q_\alpha))}. \label{infty-2} \end{align} Hence, one needs to control $ \|\nabla u\|_{\ell^\nu_\alpha (L^\infty_{t,x} (\mathbb{R}\times Q_\alpha))}$. In \cite{KePoVe1,KePoVe2}, it was shown that for $\nu=2$, \begin{align} \|S(t) u_0 \|_{\ell^2_\alpha (L^\infty_{t,x} ([0,T]\times Q_\alpha))} \le C(T) \|u_0\|_{H^s}, \quad s>n/2+2. \label{Max-Hs} \end{align} In the elliptic case \eqref{Max-Hs} holds for $s>n/2$. Estimate \eqref{Max-Hs} is a time-local version, which prevents us from obtaining the global existence of solutions. So, it is natural to ask if there is a time-global version of the estimates for the maximal function. We can prove the following: \begin{align} \|S(t) u_0 \|_{\ell^\nu_\alpha (L^\infty_{t,x} (\mathbb{R}\times Q_\alpha))} \le C \|u_0\|_{H^s}, \quad s>n/2, \ \ \nu\ge 2+4/n. \label{Max-Hs1} \end{align} Applying \eqref{Max-Hs1}, we have for any $s>n/2$, \begin{align} \|\nabla u\|_{\ell^\nu_\alpha (L^\infty_{t,x} (\mathbb{R}\times Q_\alpha))} & \lesssim \|\nabla u_0\|_{H^{s}} + \|\nabla (\partial_{x_1}u)^{1+\nu}\|_{L^1(\mathbb{R}, H^s(\mathbb{R}^n))}.
\label{Max-L1} \end{align} One can get, say for $s=[n/2]+1$, \begin{align} \|\nabla (\partial_{x_1}u)^{1+\nu}\|_{L^1(\mathbb{R}, H^s(\mathbb{R}^n))} & \lesssim \sum_{|\beta|\le s+2}\|D^\beta u\|_{\ell^\infty_\alpha (L^2_{t,x} (\mathbb{R}\times Q_\alpha))} \|\nabla u\|^{\nu}_{\ell^\nu_\alpha (L^{2\nu}_t L^\infty_{x} (\mathbb{R}\times Q_\alpha))}. \label{L1contr} \end{align} Hence, we need to further estimate $\|D^\beta u\|_{\ell^\infty_\alpha (L^2_{t,x} (\mathbb{R}\times Q_\alpha))}$ for all $|\beta| \le s+2$ and $\|\nabla u\|_{\ell^\nu_\alpha (L^{2\nu}_t L^\infty_{x} (\mathbb{R}\times Q_\alpha))}$. One can show that an estimate similar to \eqref{infty-2} holds: \begin{align} & \sum_{|\beta|\le s+2}\|D^\beta u\|_{\ell^\infty_\alpha (L^2_{t,x} (\mathbb{R}\times Q_\alpha))} \nonumber\\ & \lesssim \|u_0\|_{H^{s+ 3/2}} + \sum_{|\beta|\le s+2}\|D^\beta u\|_{\ell^\infty_\alpha (L^2_{t,x} (\mathbb{R}\times Q_\alpha))} \|\nabla u\|^{\nu}_{\ell^\nu_\alpha (L^\infty_{t,x} (\mathbb{R}\times Q_\alpha))}. \label{infty-2-1} \end{align} Finally, for the estimate of $\|\nabla u\|_{\ell^\nu_\alpha (L^{2\nu}_t L^\infty_{x} (\mathbb{R}\times Q_\alpha))}$, one needs the following: \begin{align} \|S(t) u_0 \|_{\ell^\nu_\alpha (L^{2\nu}_tL^\infty_{x} (\mathbb{R}\times Q_\alpha))} \le C \|u_0\|_{H^{s-1/\nu}}, \quad s>n/2, \ \ \nu\ge 2+4/n. \label{Max-Hs2} \end{align} Using \eqref{Max-Hs2}, the estimate of $\|\nabla u\|_{\ell^\nu_\alpha (L^{2\nu}_t L^\infty_{x} (\mathbb{R}\times Q_\alpha))}$ becomes easier than that of $\|\nabla u\|_{\ell^\nu_\alpha (L^\infty_{t, x} (\mathbb{R}\times Q_\alpha))}$. Hence, the estimates for the solution become self-contained in the spaces $\ell^\infty_\alpha (L^2_{t,x} (\mathbb{R}\times Q_\alpha)), \; \ell^\nu_\alpha (L^\infty_{t,x} (\mathbb{R}\times Q_\alpha))$ and $\ell^\nu_\alpha (L^{2\nu}_t L^\infty_{x} (\mathbb{R}\times Q_\alpha)) $. We will give the details of the estimates \eqref{Max-Hs1} and \eqref{Max-Hs2} in Section \ref{Max-Funct}.
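To make the outline above concrete (this computation is implicit in \eqref{infty-2}), the nonlinear estimate there is simply H\"older's inequality applied on each cube $Q_\alpha$:
\begin{align*}
\sum_{\alpha\in \mathbb{Z}^n} \|(\partial_{x_1} u)^{\nu+1}\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} & \le \sum_{\alpha\in \mathbb{Z}^n} \|\partial_{x_1} u\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} \|\partial_{x_1} u\|^{\nu}_{L^\infty_{t,x} (\mathbb{R}\times Q_\alpha)} \\
& \le \|\nabla u\|_{\ell^\infty_\alpha (L^2_{t,x} (\mathbb{R}\times Q_\alpha))} \|\nabla u\|^{\nu}_{\ell^\nu_\alpha (L^\infty_{t,x} (\mathbb{R}\times Q_\alpha))},
\end{align*}
where the last step uses $\sum_\alpha a_\alpha b^\nu_\alpha \le \|a\|_{\ell^\infty} \|b\|^{\nu}_{\ell^\nu}$; the same pattern underlies \eqref{L1contr} and \eqref{infty-2-1}.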
The nonlinear mapping estimates as in \eqref{L1contr} and \eqref{infty-2-1} will be given in Section \ref{proof-gwp-nd}. Next, we use the frequency-uniform decomposition method developed in \cite{Wa1,WaHe,WaHu} to consider the case of initial data in modulation spaces $M^s_{2,1}$, which are a low regularity version of the Besov spaces $B^{n/2+s}_{2,1}$, i.e., $B^{n/2+s}_{2,1} \subset M^s_{2,1}$ is a sharp embedding and $M^s_{2,1}$ has only $s$-order derivative regularity (see \cite{SuTo,Toft,WaHe}; for the final result, see \cite{WaHu}). We have the following local well posedness result with small rough initial data: \begin{thm} \label{LWP-Mod} Let $n\ge 2$. Let $F(z)$ be as in \eqref{poly} with $2 \le m\le M<\infty $. Assume that $\|u_0\|_{M^{2}_{2,1}} \le \delta$ for $n\ge 3$, and $\|u_0\|_{M^{2}_{2,1} \cap \dot H^{-1/2}} \le \delta $ for $n=2$, where $\delta>0$ is sufficiently small. Then there exists a $T:=T(\delta)>0$ such that \eqref{gNLS} has a unique local solution $ u\in C([0,T], \; M^{2}_{2,1} ) \cap Y, $ where \begin{align} Y=\left\{u \; : \; \begin{array}{l} \|D^\beta u \|_{\ell^{1,3/2}_\Box \ell^\infty_\alpha (L^2_{t,x}([0,T]\times Q_\alpha))} \lesssim \delta, \; |\beta|\le 1 \\ \|D^\beta u \|_{\ell^1_\Box \ell^{2}_\alpha (L^\infty_{t,x} ([0,T] \times Q_\alpha))} \lesssim \delta, \; |\beta|\le 1 \end{array} \right\}. \label{Y-Mod} \end{align} Moreover, $\lim_{\delta \searrow 0} T(\delta)=\infty$. \end{thm} The following is a global well posedness result with Cauchy data in modulation spaces $M^s_{2,1}$: \begin{thm} \label{GWP-Mod} Let $n\ge 2$. Let $F(z)$ be as in \eqref{poly} with $2+4/n \le m\le M<\infty $. Let $s>3/2 + (n+2)/m$. Assume that $\|u_0\|_{M^{s}_{2,1}} \le \delta$ for $n\ge 3$, and $\|u_0\|_{M^{s}_{2,1} \cap \dot H^{-1/2}} \le \delta$ for $n=2$, where $\delta>0$ is a suitably small number.
Then \eqref{gNLS} has a unique global solution $ u\in C(\mathbb{R}, \; M^{s}_{2,1} ) \cap Z, $ where \begin{align} Z=\left\{u \; : \; \begin{array}{l} \|D^\beta u \|_{\ell^{1, s-1/2}_\Box \ell^\infty_\alpha (L^2_{t,x}(\mathbb{R} \times Q_\alpha))} \lesssim \delta, \; |\beta|\le 1 \\ \|D^\beta u \|_{\ell^1_\Box\ell^{m}_\alpha (L^\infty_{t,x} \cap(L^{2m}_t L^\infty_x) (\mathbb{R} \times Q_\alpha))} \lesssim \delta, \; |\beta|\le 1 \end{array} \right\}. \label{Z-Mod} \end{align} Moreover, for $n\ge 3$, the scattering operator of Eq. \eqref{gNLS} carries the ball $\{u:\, \|u\|_{M^s_{2,1}} \le \delta\}$ into $M^s_{2,1}$. \end{thm} Finally, we consider the one spatial dimension case. Denote \begin{align} s_\kappa= \frac{1}{2}- \frac{2}{\kappa}, \quad \tilde{s}_\nu= \frac{1}{2}- \frac{1}{\nu}. \label{Index} \end{align} \begin{thm} \label{GWP-1D} Let $n=1$, $M \ge m \ge 4$, $u_0\in \dot B^{1+ \tilde{s}_{M}}_{2, 1} \cap \dot B^{s_m}_{2,1}$. Assume that there exists a small $\delta>0$ such that $ \|u_0\|_{\dot B^{1+ \tilde{s}_{M}}_{2, 1} \cap \dot B^{s_m}_{2,1}} \le \delta.$ Then \eqref{gNLS} has a unique global solution $u\in X=\{u \in \mathscr{S}'( \mathbb{R}^{1+1}): \|u\|_{X} \lesssim \delta \}$, where \begin{align} \|u\|_X & =\sup_{s_m\le s \le \tilde{s}_{M}} \sum_{i=0,1} \sum_{j\in \mathbb{Z}} |\!|\!|\partial^i_x \triangle_j u |\!|\!|_s \ \ for \ \ m >4, \nonumber\\ \|u\|_X & =\sum_{i=0,1} \big(\|\partial^i_x u\|_{L^\infty_tL^2_x \,\cap\, L^{6}_{x,t} } + \sup_{\tilde{s}_m\le s \le \tilde{s}_{M}} \sum_{j\in \mathbb{Z}} |\!|\!|\partial^i_x \triangle_j u |\!|\!|_s \big) \ \ for \ \ m=4, \nonumber\\ |\!|\!|\triangle_j v & |\!|\!|_s := 2^{s j} (\|\triangle_j v \|_{L^\infty_tL^2_x \,\cap\, L^{6}_{x,t}} + 2^{j/2} \|\triangle_j v\|_{L^\infty_x L^2_t})\nonumber\\ & \ \ \ \ \ \ + 2^{(s-\tilde{s}_m) j}\|\triangle_j v\|_{L_{x}^{m}L_{t}^{\infty}} +2^{(s-\tilde{s}_{M})j} \|\triangle_j v\|_{L_x^{M}L_t^\infty}.
\label{X-gNLS} \end{align} \end{thm} Recall that the norm on homogeneous Besov spaces $\dot B^s_{2,1}$ can be defined in the following way: \begin{align} \|f\|_{\dot B^s_{2,1}} =\sum^\infty_{j=-\infty} 2^{sj} \left(\int_{2^j \le |\xi| < 2^{j+1}} |\mathscr{F}f(\xi)|^2 d\xi \right)^{1/2}. \label{hBesov} \end{align} \subsection{Remarks on main results} It seems that the regularity assumptions on initial data are not optimal in Theorems \ref{GWP-nD}--\ref{GWP-Mod}, but Theorem \ref{GWP-1D} presents the sharp regularity condition on the initial data. To illustrate the relation between the regularity index and the nonlinear power, we consider a simple case of \eqref{gNLS}: \begin{align} & {\rm i} u_t + \Delta_\pm u = u_{x_1} ^{\nu}, \ \ u(0) =\phi. \label{dNLS} \end{align} Eq. \eqref{dNLS} is invariant under the scaling $u \to u_\lambda= \lambda^{(2-\nu)/(\nu-1)} u(\lambda^2 t, \lambda x)$ and moreover, \begin{align} \|\phi\|_{\dot H^{s}(\mathbb{R}^n)} = \|u_\lambda (0, \cdot)\|_{\dot H^{s}(\mathbb{R}^n)}, \quad s= 1+ \tilde{s}_{\nu-1}:=1+n/2-1/(\nu-1). \label{invariant} \end{align} From this point of view, we say that $s =1+ \tilde{s}_{\nu-1}$ is the critical regularity index of \eqref{dNLS}. In \cite{MR}, Molinet and Ribaud showed that \eqref{dNLS} is ill-posed in one spatial dimension in the sense that if $s_1\not= \tilde{s}_{\nu-1}+ 1$, then the flow map of equation \eqref{dNLS} $\phi \rightarrow u$ (if it exists) is not of class $C^{\nu}$ from $\dot{B}^{s_1}_{2,1}(\mathbb{R})$ to $C ([0, \infty), \dot{B}^{s_1}_{2,1} (\mathbb{R}) )$ at the origin $\phi=0$. For each term in the polynomial nonlinearity $F(u, \bar{u}, \nabla u, \nabla \bar{u})$ as in \eqref{poly}, we easily see that the critical index can take any value between $s_m$ and $1+\tilde{s}_M$. So, our Theorem \ref{GWP-1D} gives a sharp result in the case $m\ge 4$.
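For the reader's convenience, let us verify \eqref{invariant} (an elementary computation). Writing $\mu=(2-\nu)/(\nu-1)$ and $u_\lambda = \lambda^{\mu} u(\lambda^2 t, \lambda x)$, we have
\begin{align*}
({\rm i}\partial_t + \Delta_\pm) u_\lambda = \lambda^{\mu+2}\, ({\rm i} u_t + \Delta_\pm u)(\lambda^2 t, \lambda x), \qquad
(\partial_{x_1} u_\lambda)^{\nu} = \lambda^{\nu(\mu+1)}\, u_{x_1}^{\nu}(\lambda^2 t, \lambda x),
\end{align*}
and $\mu+2 = \nu(\mu+1) = \nu/(\nu-1)$, so $u_\lambda$ solves \eqref{dNLS} whenever $u$ does. Since $\|f(\lambda\,\cdot)\|_{\dot H^s} = \lambda^{s-n/2}\|f\|_{\dot H^s}$, we get $\|u_\lambda(0,\cdot)\|_{\dot H^s} = \lambda^{\mu+s-n/2}\|\phi\|_{\dot H^s}$, which is independent of $\lambda$ precisely when $s = n/2-\mu = 1+ n/2 - 1/(\nu-1)$.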
On the other hand, Christ \cite{Chr} showed that in the case $\nu=2$, $n=1$, for any $s\in \mathbb{R}$, there exist initial data in $H^s$ with arbitrarily small norm, for which the solution attains arbitrarily large norm after an arbitrarily short time (see also \cite{MJ}). From Christ's result together with Theorem \ref{GWP-1D}, we can expect that there exists $m_0>1$ (might be non-integer) so that for $\nu-1\ge m_0$, $s =1+ \tilde{s}_{\nu-1}$ is the minimal regularity index to guarantee the well posedness of \eqref{dNLS}, at least for the local solutions and small data global solutions in $H^s$. However, it is not clear to us how to find the exact value of $m_0$ even in one spatial dimension. Moreover, in higher spatial dimensions, it seems that $1/2+1/M$-order derivative regularity is lost in Theorem \ref{GWP-nD} and we do not know how to attain the regularity index $s\ge 1+ \tilde{s}_M$. In the two dimensional case, if $\Delta_\pm= \Delta$ and the initial value $u_0$ is a radial function, we can remove the condition $u_0\in \dot H^{-1/2}$, $\|u_0\|_{\dot H^{-1/2}} \le \delta$ by using the endpoint Strichartz estimates as in the case $n\ge 3$. Considering the nonlinearity $F(u, \nabla u)= (1-|u|^2)^{-1} |\nabla u|^{2k} u$, Theorem \ref{LWP-Mod} holds for the case $k\ge 1$. Theorems \ref{GWP-nD} and \ref{GWP-Mod} hold for the case $k\ge 2$. Since $(1-|u|^2)^{-1}= \sum^\infty_{k=0} |u|^{2k}$, one easily sees that we can use the same method as in the proof of our main results to handle this kind of nonlinearity. \subsection{Notations} \label{Notations} Throughout this paper, we will always use the following notations. $\mathscr{S}(\mathbb{R}^n)$ and $\mathscr{S}'(\mathbb{R}^n)$ stand for the Schwartz space and its dual space, respectively. We denote by $L^p(\mathbb{R}^n)$ the Lebesgue space, $\|\cdot\|_p:= \|\cdot\|_{L^p(\mathbb{R}^n)}$.
The Bessel potential space is defined by $H^s_p(\mathbb{R}^n): =(I-\Delta)^{-s/2} L^p(\mathbb{R}^n)$, $H^s(\mathbb{R}^n)=H^s_2 (\mathbb{R}^n)$, $\dot H^s(\mathbb{R}^n) =(-\Delta)^{-s/2} L^2(\mathbb{R}^n)$.\footnote{$\mathbb{R}^n$ will be omitted in the definitions of various function spaces if there is no confusion.} For any quasi-Banach space $X$, we denote by $X^*$ its dual space, by $L^p(I, X)$ the Lebesgue-Bochner space, $\|f\|_{L^p(I,X)}:= (\int_I \|f(t)\|^p_X dt)^{1/p}$. If $X=L^r(\Omega)$, then we write $L^p(I, L^r(\Omega))= L^p_tL^r_x(I\times \Omega)$ and $L^p_{t,x}(I\times \Omega)= L^p_tL^p_x(I\times \Omega)$. Let $Q_\alpha$ be the unit cube with center at $\alpha\in \mathbb{Z}^n$, i.e., $Q_\alpha=\alpha+ Q_0$, $Q_0= \{x=(x_1,\dots,x_n): -1/2\le x_i< 1/2\}.$ We also need the function spaces $\ell^q_\alpha (L^p_tL^r_{x}(I\times Q_\alpha))$, $$ \|f\|_{\ell^q_\alpha (L^p_{t}L^r_{x}(I\times Q_\alpha))}:= \left(\sum_{\alpha\in \mathbb{Z}^n} \|f\|^q_{L^p_{t}L^r_{x}(I\times Q_\alpha)} \right)^{1/q}. $$ We denote by $\mathscr{F}$ ($\mathscr{F}^{-1}$) the (inverse) Fourier transform for the spatial variables; by $\mathscr{F}_t$ ($\mathscr{F}^{-1}_{t}$) the (inverse) Fourier transform for the time variable and by $\mathscr{F}_{t,x}$ ($\mathscr{F}^{-1}_{t,x}$) the (inverse) Fourier transform for both time and spatial variables, respectively. Unless otherwise specified, we always denote by $\varphi_k(\cdot)$ the dyadic decomposition functions as in \eqref{dyadic-funct} and by $\sigma_k(\cdot)$ the uniform decomposition functions as in \eqref{Mod.2}. $u \star v$ and $u*v$ will stand for the convolutions in the time and spatial variables, respectively, i.e., $$ (u\star v) (t,x)= \int_{\mathbb{R}} u(t-\tau,x) v(\tau,x)d\tau, \ \ (u* v) (t,x)= \int_{\mathbb{R}^n} u(t,x-y) v(t,y)dy. $$ $\mathbb{R}, \mathbb{N}$ and $ \mathbb{Z}$ will stand for the sets of reals, positive integers and integers, respectively.
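Note that in the special case $p=r=q$ these spaces reduce to the usual Lebesgue spaces: since the cubes $Q_\alpha$, $\alpha\in \mathbb{Z}^n$, tile $\mathbb{R}^n$, we have \begin{align*} \|f\|^p_{\ell^p_\alpha (L^p_{t}L^p_{x}(I\times Q_\alpha))} = \sum_{\alpha\in \mathbb{Z}^n} \int_I \int_{Q_\alpha} |f(t,x)|^p dx dt = \|f\|^p_{L^p_{t,x}(I\times \mathbb{R}^n)}, \end{align*} so that the triple $(q,p,r)$ measures, respectively, the decay across the cubes, the time integrability and the spatial integrability of $f$.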
$c<1$, $C>1$ will denote positive universal constants, which can be different at different places. $a\lesssim b$ stands for $a\le C b$ for some constant $C>1$, $a\sim b$ means that $a\lesssim b$ and $b\lesssim a$. We denote by $p'$ the dual number of $p \in [1,\infty]$, i.e., $1/p+1/p'=1$. For any $a>0$, we denote by $[a]$ the minimal integer that is larger than or equal to $a$. $B(x,R)$ will denote the ball in $\mathbb{R}^n$ with center $x$ and radius $R$. \subsection{Besov and modulation spaces} \label{functionspace} Let us recall that Besov spaces $B^s_{p,q}:=B^s_{p,q}(\mathbb{R}^n)$ are defined as follows (cf. \cite{BL,Tr}). Let $\psi: \mathbb{R}^n \to [0,1]$ be a smooth radial bump function adapted to the ball $B(0,2)$: \begin{align} \psi (\xi)=\left\{ \begin{array}{ll} 1, & |\xi|\le 1,\\ {\rm smooth}, & |\xi|\in [1,2],\\ 0, & |\xi|\ge 2. \end{array} \right. \label{cutoff} \end{align} We write $\delta(\cdot):= \psi(\cdot)-\psi(2\,\cdot)$ and \begin{align} \varphi_j:= \delta(2^{-j}\cdot) \ \ {\rm for} \ \ j\ge 1; \quad \varphi_0:=1- \sum_{j\ge 1} \varphi_j. \label{dyadic-funct} \end{align} We say that $ \triangle_j := \mathscr{F}^{-1} \varphi_j \mathscr{F}, \quad j\in \mathbb{N} \cup \{0\}$ are the dyadic decomposition operators. Besov spaces $B^s_{p,q}=B^s_{p,q}(\mathbb{R}^n)$ are defined in the following way: \begin{align} B^s_{p,q} =\left \{ f\in \mathscr{S}'(\mathbb{R}^n): \; \|f\|_{B^s_{p,q}} = \left(\sum^\infty_{j=0}2^{sjq} \|\,\triangle_j f\|^q_p \right)^{1/q}<\infty \right\}. \label{Besov.1} \end{align} Now we recall the definition of modulation spaces (see \cite{Fei2,Groh,Wa1,WaHe,WaHu}). Here we adopt an equivalent norm by using the uniform decomposition of the frequency space. Let $\rho\in \mathscr{S}(\mathbb{R}^n)$, $\rho:\, \mathbb{R}^n\to [0,1]$ be a smooth radial bump function adapted to the ball $B(0, \sqrt{n})$, say $\rho(\xi)=1$ for $|\xi|\le \sqrt{n}/2$, and $\rho(\xi)=0$ for $|\xi| \ge \sqrt{n} $.
Let $\rho_k$ be a translation of $\rho$: $ \rho_k (\xi) = \rho (\xi- k), \; k\in \mathbb{Z}^n$. We write \begin{align} \sigma_k (\xi)= \rho_k(\xi) \left(\sum_{l\in \mathbb{ Z}^n}\rho_l(\xi)\right)^{-1}, \quad k\in \mathbb{Z}^n. \label{Mod.2} \end{align} Denote \begin{align} \Box_k := \mathscr{F}^{-1} \sigma_k \mathscr{F}, \quad k\in \mathbb{ Z}^n, \label{Mod.5} \end{align} which are said to be the frequency-uniform decomposition operators. For any $k\in \mathbb{ Z}^n$, we write $\langle k\rangle=\sqrt{1+|k|^2}$. Let $s\in \mathbb{R}$, $0<p,q \le \infty$. Modulation spaces $M^s_{p,q}=M^s_{p,q}(\mathbb{R}^n)$ are defined as: \begin{align} & M^s_{p,q} =\left \{ f\in \mathscr{S}'(\mathbb{R}^n): \; \|f\|_{M^s_{p,q}} = \left(\sum_{k \in \mathbb{Z}^n} \langle k\rangle^{sq} \|\,\Box_k f\|_p^q \right)^{1/q}<\infty \right\}. \label{Mod.6} \end{align} We will use the function space $\ell^{1,s}_\Box \ell^q_\alpha (L^p_{t}L^r_x (I\times Q_\alpha))$ which contains all of the functions $f(t,x)$ for which the following norm is finite: \begin{align} \|f\|_{\ell^{1,s}_\Box \ell^q_\alpha (L^p_{t} L^r_x(I\times Q_\alpha))}:= \sum_{k\in \mathbb{Z}^n} \langle k\rangle ^s \left(\sum_{\alpha\in \mathbb{Z}^n} \|\Box_k f\|^q_{L^p_{t}L^r_x (I\times Q_\alpha)} \right)^{1/q}. \label{Mod.7} \end{align} Similarly, we can define the space $\ell^{1,s}_\triangle \ell^q_\alpha (L^p_{t}L^r_x(I\times Q_\alpha))$ with the following norm: \begin{align} \|f\|_{\ell^{1,s}_\triangle \ell^q_\alpha (L^p_{t} L^r_x (I\times Q_\alpha))}:= \sum^\infty_{j=0} 2^{sj} \left(\sum_{\alpha\in \mathbb{Z}^n} \|\triangle_j f\|^q_{L^p_{t}L^r_x (I\times Q_\alpha)} \right)^{1/q}. \label{Besov.7} \end{align} In the special case $s=0$, we write $\ell^{1,0}_\Box \ell^q_\alpha (L^p_{t}L^r_x (I\times Q_\alpha))= \ell^{1}_\Box \ell^q_\alpha (L^p_{t}L^r_x (I\times Q_\alpha))$ and $\ell^{1,0}_\triangle \ell^q_\alpha (L^p_{t}L^r_x(I\times Q_\alpha)) = \ell^{1}_\triangle \ell^q_\alpha (L^p_{t}L^r_x (I\times Q_\alpha))$.
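Let us also record an elementary property of the functions $\sigma_k$ that will be used in the sequel. If $x\in Q_\alpha$, then $|x-\alpha|\le \sqrt{n}/2$ and so $\rho_\alpha(x)=1$; on the other hand, $\rho_k(x)\not=0$ implies $|x-k|< \sqrt{n}$, so that at most $C_n$ terms contribute to the sum $\sum_{k\in \mathbb{Z}^n}\rho_k(x)$, where $C_n$ depends only on $n$. Hence \begin{align*} \sigma_\alpha(x) = \rho_\alpha(x) \left(\sum_{k\in \mathbb{Z}^n}\rho_k(x)\right)^{-1} \ge 1/C_n \gtrsim 1 \quad \mbox{for all} \;\; x\in Q_\alpha. \end{align*}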
The rest of this paper is organized as follows. In Section \ref{Max-Funct} we give the details of the estimates for the maximal function in certain function spaces. Section \ref{global-local-estimate} is devoted to considering the spatially local versions of the Strichartz estimates and giving some remarks on the estimates of the local smoothing effects. In Sections \ref{proof-gwp-nd}--\ref{proof-gwp-1D} we prove our main Theorems \ref{GWP-nD}--\ref{GWP-1D}, respectively. \section{Estimates for the maximal function} \label{Max-Funct} \subsection{Time-local version} Recall that $S(t)= e^{- {\rm i}t \triangle_\pm }= \mathscr{F}^{-1} e^{ {\rm i}t |\xi|^2_\pm } \mathscr{F}$, where \begin{align} |\xi|^2_\pm = \sum^n_{j=1} \varepsilon_j \xi^2_j, \quad \varepsilon_j = \pm 1. \label{symbol} \end{align} Kenig, Ponce and Vega \cite{KePoVe1} showed the following maximal function estimate: \begin{align} \left( \sum_{\alpha\in \mathbb{Z}^n} \|S(t)u_0\|^2_{L^\infty_{t,x}([0,T]\times Q_\alpha)} \right)^{1/2} \le C(T) \|u_0\|_{H^s}, \label{max-0} \end{align} where $s\ge 2+n/2$. If $S(t)= e^{- {\rm i}t \triangle }$, then \eqref{max-0} holds for $s>n/2$, $C(T)=(1+T)^s$. Using the frequency-uniform decomposition method, we can get the following: \begin{prop} \label{prop-max-1} There exists a constant $C(T)>1$ which depends only on $T$ and $n$ such that \begin{align} \sum_{k\in \mathbb{Z}^n}\left( \sum_{\alpha\in \mathbb{Z}^n} \|\,\Box_k S(t)u_0\|^2_{L^\infty_{t,x}([0,T]\times Q_\alpha)} \right)^{1/2} \le C(T) \|u_0\|_{M^{1/2}_{2,1}}. \label{max-1} \end{align} \end{prop} In particular, for any $s>(n+1)/2$, \begin{align} \left( \sum_{\alpha\in \mathbb{Z}^n} \|\, S(t)u_0\|^2_{L^\infty_{t,x}([0,T]\times Q_\alpha)} \right)^{1/2} \le C(T) \|u_0\|_{H^{s}}.
\label{max-1-note} \end{align} \noindent {\bf Proof.} By the duality, it suffices to prove that \begin{align} \int^T_0 (S(t)u_0, \psi(t) ) dt \lesssim \|u_0\|_{M^{1/2}_{2,1}} \sup_{k\in \mathbb{Z}^n}\left( \sum_{\alpha\in \mathbb{Z}^n} \|\Box_k \psi(t)\|_{L^1_{t,x}([0,T]\times Q_\alpha)}^2 \right)^{1/2}. \label{max-2} \end{align} Since $(M^{1/2}_{2,1})^*= M^{-1/2}_{2,\infty}$, we have \begin{align} \int^T_0 (S(t)u_0, \psi(t) ) dt \le \|u_0\|_{M^{1/2}_{2,1}} \left\| \int^T_0 S(-t) \psi(t) dt \right\|_{M^{-1/2}_{2,\infty}}. \label{max-3} \end{align} Recalling that $\|f\|_{M^{-1/2}_{2,\infty}} =\sup_{k\in \mathbb{Z}^n} \langle k\rangle^{-1/2} \|\Box_k f\|_2$, we need to estimate \begin{align} & \left\| \Box_k \int^T_0 S(-t) \psi(t) dt \right\|^2_2 \nonumber\\ & = \int^T_0 \left( \Box_k \psi(t), \; \int^T_0 S(t-\tau) \Box_k \psi(\tau) d\tau \right) dt \nonumber\\ & \le \sum_{\alpha \in \mathbb{Z}^n} \|\Box_k \psi\|_{L^1_{t,x}([0,T]\times Q_\alpha)} \left\|\int^T_0 S(t-\tau) \Box_k \psi(\tau) d\tau \right\|_{L^\infty_{t,x}([0,T]\times Q_\alpha)} \nonumber\\ & \le \|\Box_k \psi\|_{\ell^2_\alpha (L^1_{t,x}([0,T]\times Q_\alpha))} \left\|\int^T_0 S(t-\tau) \Box_k \psi(\tau) d\tau \right\|_{\ell^2_\alpha (L^\infty_{t,x}([0,T]\times Q_\alpha))} . \label{max-4} \end{align} If one can show that \begin{align} \left\|\int^T_0 S(t-\tau) \Box_k \psi(\tau) d\tau \right\|_{\ell^2_\alpha (L^\infty_{t,x}([0,T]\times Q_\alpha))} \lesssim C(T)\langle k\rangle \|\Box_k \psi\|_{\ell^2_\alpha (L^1_{t,x}([0,T]\times Q_\alpha))}, \label{max-5} \end{align} then from \eqref{max-3}--\eqref{max-5} we obtain that \eqref{max-2} holds. Denote \begin{align} \Lambda:= \{\ell \in \mathbb{Z}^n: {\rm supp} \,\sigma_\ell \cap {\rm supp} \,\sigma_0 \not= \varnothing \}. \label{orthogonal} \end{align} In the following we show \eqref{max-5}. 
In view of Young's inequality, we have \begin{align} & \left\|\int^T_0 S(t-\tau) \Box_k \psi(\tau) d\tau \right\|_{L^\infty_{t,x}([0,T]\times Q_\alpha)} \nonumber\\ & \lesssim \sum_{\ell \in \Lambda} \left\|\int^T_0 S(t-\tau) \Box_{k+\ell} \Box_k \psi(\tau) d\tau \right\|_{L^\infty_{t,x}([0,T]\times Q_\alpha)} \nonumber\\ & = \sum_{\ell \in \Lambda} \left\|\int^T_0 [\mathscr{F}^{-1} ( e^{{\rm i} (t-\tau) |\xi|^2_\pm} \sigma_{k+\ell})]* \Box_k \psi(\tau) d\tau \right\|_{L^\infty_{t,x}([0,T]\times Q_\alpha)} \nonumber\\ & \le \sum_{\ell \in \Lambda} \sum_{\beta\in \mathbb{Z}^n} \left\|\mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{k+\ell}) \right\|_{L^\infty_{t,x}([-T,T]\times Q_\beta)} \| \Box_k \psi\|_{L^1_{t,x}([0,T]\times (Q_\alpha-Q_\beta))}. \label{max-6} \end{align} From \eqref{max-6} and Minkowski's inequality, we have \begin{align} & \left\|\int^T_0 S(t-\tau) \Box_k \psi(\tau) d\tau \right\|_{\ell^2_\alpha (L^\infty_{t,x}([0,T]\times Q_\alpha))} \nonumber\\ & \le \sum_{\ell \in \Lambda} \sum_{\beta\in \mathbb{Z}^n} \left\|\mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{k+\ell}) \right\|_{L^\infty_{t,x}([-T,T]\times Q_\beta)} \| \Box_k \psi\|_{\ell^2_\alpha (L^1_{t,x}([0,T]\times (Q_\alpha-Q_\beta)))}. \label{max-7} \end{align} It is easy to see that \begin{align} \| \Box_k \psi\|_{\ell^2_\alpha (L^1_{t,x}([0,T]\times (Q_\alpha-Q_\beta)))} \lesssim \| \Box_k \psi\|_{\ell^2_\alpha (L^1_{t,x}([0,T]\times Q_\alpha))}. \label{max-8} \end{align} Hence, in order to prove \eqref{max-5}, it suffices to prove that \begin{align} \sum_{\beta\in \mathbb{Z}^n} \left\|\mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{k}) \right\|_{L^\infty_{t,x}([0,T]\times Q_\beta)} \lesssim C(T) \langle k \rangle.
\label{max-9} \end{align} In fact, observing the following identity, \begin{align} |\mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{k}) |= |\mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{0}) (\cdot + 2tk_\pm) |, \label{max-10} \end{align} where $k_\pm= (\varepsilon_1 k_1,..., \varepsilon_n k_n)$, we have \begin{align} \|\mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{k}) \|_{L^\infty_{t,x}([0,T]\times Q_\beta)} \le \|\mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{0})\|_{L^\infty_{t,x}([0,T]\times Q^*_{\beta,k})}, \label{max-11} \end{align} where \begin{align*} Q^*_{\beta,k}= \{x: \; x\in 2tk_\pm + Q_\beta \;\; \mbox{for some }\; t\in [0,T]\}. \end{align*} Denote $\Lambda_{\beta,k} = \{\beta': \; Q_{\beta'} \cap Q^*_{\beta,k} \neq \varnothing\}$. It follows from \eqref{max-11} that \begin{align} \sum_{\beta \in \mathbb{Z}^n} \|\mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{k}) \|_{L^\infty_{t,x}([0,T]\times Q_\beta)} \le \sum_{\beta \in \mathbb{Z}^n} \sum_{\beta' \in \Lambda_{\beta,k}} \|\mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{0})\|_{L^\infty_{t,x}([0,T]\times Q_{\beta'})}. \label{max-13} \end{align} Since each $Q^*_{\beta,k}$ overlaps at most $O(T\langle k\rangle)$ many $Q_{\beta'}$, $\beta'\in \mathbb{Z}^n$, one can easily verify that in the sums on the right hand side of \eqref{max-13}, each $\|\mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{0})\|_{L^\infty_{t,x}([0,T]\times Q_{\beta'})}$ repeats at most $O(T \langle k\rangle)$ times. Hence, we have \begin{align} \sum_{\beta \in \mathbb{Z}^n} \|\mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{k}) \|_{L^\infty_{t,x}([0,T]\times Q_\beta)} \lesssim \langle k\rangle \sum_{\beta \in \mathbb{Z}^n} \|\mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{0})\|_{L^\infty_{t,x}([0,T]\times Q_{\beta})}.
\label{max-14} \end{align} Finally, it suffices to show that \begin{align} \sum_{\beta \in \mathbb{Z}^n} \|\mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{0})\|_{L^\infty_{t,x}([0,T]\times Q_{\beta})} \le C(T). \label{max-15} \end{align} Denote $\nabla_{t,x} =(\partial_t, \partial_{x_1},..., \partial_{x_n})$. By the Sobolev inequality, \begin{align} \sum_{\beta \in \mathbb{Z}^n} \|\mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{0})\|_{L^\infty_{t,x}([0,T]\times Q_{\beta})} \lesssim & \; \sum_{\beta \in \mathbb{Z}^n} \|\mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{0})\|_{L^{2n}_{t,x}([0,T]\times Q_{\beta})}\nonumber\\ & + \sum_{\beta \in \mathbb{Z}^n} \|\nabla_{t,x} \mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{0})\|_{L^{2n}_{t,x}([0,T]\times Q_{\beta})} \nonumber\\ = & I+II. \label{max-16} \end{align} By H\"older's inequality, we have \begin{align} II \lesssim & \left(\sum_{\beta \in \mathbb{Z}^n} \left\|(1+|x|^2)^n\nabla_{t,x} \mathscr{F}^{-1} ( e^{{\rm i} t |\xi|^2_\pm} \sigma_{0}) \right\|^{2n}_{L^{2n}_{t,x}([0,T]\times Q_{\beta})}\right)^{1/2n} \nonumber\\ \lesssim & \sum^n_{i=1} \left\|\mathscr{F}^{-1} (I-\Delta)^n ( e^{{\rm i} t |\xi|^2_\pm} \xi_i \sigma_{0}) \right\|_{L^{2n}_{t,x}([0,T]\times \mathbb{R}^n)}\nonumber\\ & + \left\|\mathscr{F}^{-1} (I-\Delta)^n ( e^{{\rm i} t |\xi|^2_\pm} |\xi|^2_\pm \sigma_{0}) \right\|_{L^{2n}_{t,x}([0,T]\times \mathbb{R}^n)}\nonumber\\ \lesssim & \; C(T). \label{max-17} \end{align} One easily sees that $I$ has the same bound as that of $II$. The proof of \eqref{max-1} is finished. Noticing that $H^s\subset M^{1/2}_{2,1}$ if $s>(n+1)/2$ (cf. \cite{Toft,WaHe,WaHu}), we immediately have \eqref{max-1-note}. $ \Box$ \subsection{Time-global version} Recall that we have the following equivalent norm on Besov spaces (\cite{BL,Tr}): \begin{lem} \label{Besovnorm} Let $1\le p,q \le \infty$, $\sigma>0$, $\sigma\not\in \mathbb{N}$. 
Then we have \begin{align} \|f\|_{B^\sigma_{p,q}} \sim \sum_{|\beta|\le [\sigma]} \|D^\beta f\|_{L^p(\mathbb{R}^n)}+ \sum_{|\beta|\le [\sigma]} \left( \int_{\mathbb{R}^n} |h|^{-n-q\{\sigma\}} \| \vartriangle_h D^\beta f \|^q_{L^p(\mathbb{R}^n)} dh \right)^{1/q}, \label{Besov-norm-1} \end{align} where $\vartriangle_h f= f(\cdot+h)-f(\cdot)$, $[\sigma]$ here denotes the largest integer that is less than or equal to $\sigma$, and $\{\sigma\}= \sigma-[\sigma]\in (0,1)$. \end{lem} Taking $p=q$ in Lemma \ref{Besovnorm}, one has that \begin{align} \|f\|^p_{B^\sigma_{p,p}} \sim \sum_{|\beta|\le [\sigma]} \|D^\beta f\|^p_{L^p(\mathbb{R}^n)}+ \sum_{|\beta|\le [\sigma]} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \frac{| \vartriangle_h D^\beta f (x)|^p}{|h|^{n+p\{\sigma\}}} dxdh. \label{Besov-norm-2} \end{align} \begin{lem} \label{Max-Besov-control} Let $1< p< \infty$, $s>1/p$. Then we have \begin{align} \left( \sum_{\alpha\in \mathbb{Z}^n} \|u\|^p_{L^\infty_{t,x} (\mathbb{R} \times Q_\alpha)} \right)^{1/p} \lesssim \|(I-\partial^2_t)^{s/2} u\|_{L^p(\mathbb{R}, B^{ns}_{p,p}(\mathbb{R}^n))}. \label{Max-Besov-1} \end{align} \end{lem} {\bf Proof.} We divide the proof into the following two cases. {\it Case} 1. $ns\not\in \mathbb{N}.$ Due to $H^s_p(\mathbb{R}) \subset L^\infty(\mathbb{R})$, we have \begin{align} \|u\|_{L^\infty_{t,x} (\mathbb{R} \times Q_\alpha)} & \lesssim \|(I-\partial^2_t)^{s/2} u\|_{L^\infty_x L^p_t(Q_\alpha\times \mathbb{R})} \nonumber\\ & \le \|(I-\partial^2_t)^{s/2} u\|_{L^p_t L^\infty_x (\mathbb{R} \times Q_\alpha )} . \label{Max-Besov-2} \end{align} Recalling that $\sigma_\alpha (x) \gtrsim 1$ for all $x\in Q_\alpha$ and $\alpha\in \mathbb{Z}^n$, we have from \eqref{Max-Besov-2} that \begin{align} \|u\|_{L^\infty_{t,x} (\mathbb{R} \times Q_\alpha)} \lesssim \|(I-\partial^2_t)^{s/2} \sigma_\alpha u\|_{L^p_t L^\infty_x (\mathbb{R} \times \mathbb{R}^n )} .
\label{Max-Besov-3} \end{align} Since $B^{ns}_{p,p} ( \mathbb{R}^n) \subset L^\infty ( \mathbb{R}^n)$, in view of \eqref{Max-Besov-3}, one has that \begin{align} \|u\|_{L^\infty_{t,x} (\mathbb{R} \times Q_\alpha)} \le \|(I-\partial^2_t)^{s/2} \sigma_\alpha u\|_{L^p(\mathbb{R}, B^{ns}_{p,p}(\mathbb{R}^n))}. \label{Max-Besov-4} \end{align} For simplicity, we denote $v=(I-\partial^2_t)^{s/2} u$. By \eqref{Besov-norm-2} and \eqref{Max-Besov-4} we have \begin{align} \sum_{\alpha\in \mathbb{Z}^n} \|u\|^p_{L^\infty_{t,x} (\mathbb{R}\times Q_\alpha)} \lesssim & \sum_{|\beta|\le [ns]} \sum_{\alpha\in \mathbb{Z}^n} \int_{\mathbb{R}} \|D^\beta (\sigma_\alpha v)(t)\|^p_{L^p(\tilde{Q}_\alpha)} dt \nonumber\\ & + \sum_{|\beta|\le [ns]} \sum_{\alpha\in \mathbb{Z}^n} \int_{\mathbb{R}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \frac{| \vartriangle_h D^\beta (\sigma_\alpha v)(t,x)|^p}{|h|^{n+p\{ns\}}} dxdh dt \nonumber\\ := & I+II. \label{Max-Besov-5} \end{align} We now estimate $II$. It is easy to see that \begin{align} | \vartriangle_h \! D^\beta (\sigma_\alpha v)| & \lesssim \sum_{\beta_1+ \beta_2=\beta} |\vartriangle_h \! (D^{\beta_1} \sigma_\alpha D^{\beta_2} v) | \nonumber\\ & \leqslant \sum_{\beta_1+ \beta_2=\beta} (|D^{\beta_1} \sigma_\alpha (\cdot+h) \, \vartriangle_h\! D^{\beta_2} v | + |(\vartriangle_h\! D^{\beta_1} \sigma_\alpha) D^{\beta_2} v |). \label{Max-Besov-6} \end{align} Since ${\rm supp}\,\sigma_\alpha$ overlaps at most finitely many ${\rm supp}\, \sigma_\beta$ and $\sigma_\beta =\sigma_0 (\cdot-\beta)$, $\beta\in \mathbb{Z}^n$, it follows from \eqref{Max-Besov-6}, $|D^{\beta_1} \sigma_\alpha| \lesssim 1$ and H\"older's inequality that \begin{align} II \lesssim & \sum_{|\beta_1|, |\beta_2|\le [ns]} \int_{\mathbb{R}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \sum_{\alpha\in \mathbb{Z}^n} |D^{\beta_1} \sigma_{\alpha}(x+h)| \frac{| \vartriangle_h\! 
D^{\beta_2} v (t,x)|^p}{|h|^{n+p\{ns\}}} dxdh dt \nonumber\\ & + \sum_{|\beta|\le [ns]}\sum_{\beta_1+\beta_2=\beta} \int_{\mathbb{R}^n} \frac{\| \vartriangle_h\! D^{\beta_1} \sigma_0 \|^p_{L^\infty(\mathbb{R}^n)}}{|h|^{n+p\{ns\}}} dh \nonumber\\ & \quad \times \sup_{h} \sum_{\alpha\in \mathbb{Z}^n} \int_{\mathbb{R}} \int_{B(0,\sqrt{n}) \cup B(-h, \sqrt{n})} |D^{\beta_2} v(t,x+\alpha)|^p dx dt \nonumber\\ \lesssim & \sum_{|\beta|\le [ns]} \int_{\mathbb{R}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \frac{| \vartriangle_h\! D^\beta v (t,x)|^p}{|h|^{n+p\{ns\}}} dxdh dt \nonumber\\ & + \|\sigma_0\|^p_{B^{ns}_{\infty, p}} \sum_{|\beta|\le [ns]} \int_{\mathbb{R}} \int_{\mathbb{R}^n} |D^{\beta} v(t,x)|^p dx dt \nonumber\\ \lesssim & \; \|v\|^p_{L^p(\mathbb{R}, B^{ns}_{p,p}(\mathbb{R}^n))}. \label{Max-Besov-7} \end{align} Clearly, one has that \begin{align} I \lesssim \|v\|^p_{L^p(\mathbb{R}, B^{ns}_{p,p}(\mathbb{R}^n))}. \label{Max-Besov-8} \end{align} Collecting \eqref{Max-Besov-5}, \eqref{Max-Besov-7} and \eqref{Max-Besov-8}, we have \eqref{Max-Besov-1}. {\it Case} 2. $ns \in \mathbb{N}.$ One can take an $s_1< s$ such that $s_1>1/p$ and $ns_1 \not\in \mathbb{N}$. Applying the conclusion of Case 1, we get the result, as desired. $ \Box$ For the semi-group $S(t)$, we have the following Strichartz estimates (cf. \cite{Ke-Ta}): \begin{prop}\label{Strichartz-Leb} Let $n\ge 2$, $2\le p, \rho \le 2n/(n-2)$ $(2\le p, \rho<\infty$ if $n=2)$, $2/\gamma(\cdot)= n(1/2-1/\cdot)$. We have \begin{align} \|S(t)u_0\|_{L^{\gamma(p)}(\mathbb{R}, L^p(\mathbb{R}^n))} & \lesssim \| u_0\|_{L^2(\mathbb{R}^n)}, \label{Str-Leb}\\ \|\mathscr{A} F\|_{L^{\gamma(p)}(\mathbb{R}, L^p(\mathbb{R}^n))} & \lesssim \| F\|_{L^{\gamma(\rho)'}(\mathbb{R}, L^{\rho'}(\mathbb{R}^n))}. \label{Str-Leb-Int} \end{align} \end{prop} If $p$ and $\rho$ are equal to $2n/(n-2)$, then \eqref{Str-Leb} and \eqref{Str-Leb-Int} are said to be the endpoint Strichartz estimates.
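Let us point out why the exponent $2^*=2+4/n$ is natural here: it is precisely the exponent for which $\gamma(p)=p$. Indeed, a direct computation gives \begin{align*} \frac{1}{2}-\frac{1}{2^*} = \frac{1}{2}- \frac{n}{2n+4} = \frac{1}{n+2}, \quad \mbox{whence} \quad \gamma(2^*)= \frac{2(n+2)}{n} = 2+ \frac{4}{n}=2^*, \end{align*} so that for $p=2^*$ the Strichartz norm $L^{\gamma(p)}(\mathbb{R}, L^p(\mathbb{R}^n))$ coincides with the diagonal norm $L^{2^*}_{t,x}(\mathbb{R}\times \mathbb{R}^n)$, and for $p\ge 2^*$ one has $\gamma(p)\le p$.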
Using Proposition \ref{Strichartz-Leb}, we have \begin{prop} \label{prop-max-2} Let $p\ge 2+ 4/n:= 2^*$. For any $s>n/2$, we have \begin{align} \left( \sum_{\alpha\in \mathbb{Z}^n} \|\, S(t)u_0\|^p_{L^\infty_{t,x}(\mathbb{R} \times Q_\alpha)} \right)^{1/p} \lesssim \|u_0\|_{H^{s}}. \label{m-1} \end{align} \end{prop} {\bf Proof.} For short, we write $\langle \partial_t \rangle=(I-\partial^2_t)^{1/2}$. By Lemma \ref{Max-Besov-control}, for any $s_0>1/2^*$, \begin{align} \left( \sum_{\alpha\in \mathbb{Z}^n} \|S(t)u_0\|^p_{L^\infty_{t,x} (\mathbb{R} \times Q_\alpha)} \right)^{1/p} & \lesssim \left( \sum_{\alpha\in \mathbb{Z}^n} \|S(t)u_0\|^{2^*}_{L^\infty_{t,x} (\mathbb{R} \times Q_\alpha)} \right)^{1/{2^*}} \nonumber\\ & \lesssim \|\langle \partial_t \rangle^{s_0} S(t)u_0\|_{L^{2^*}(\mathbb{R}, B^{ns_0}_{2^*, 2^*}(\mathbb{R}^n))}. \label{MB-1} \end{align} We have \begin{align} \|\langle \partial_t \rangle^{s_0} S(t)u_0\|^{2^*}_{L^{2^*}(\mathbb{R}, B^{ns_0}_{2^*, 2^*}(\mathbb{R}^n))}= \sum_{k=0}^\infty 2^{ns_0k 2^*} \|\langle \partial_t \rangle^{s_0} \triangle_k S(t)u_0\|^{2^*}_{L^{2^*}_{t,x} (\mathbb{R}^{1+n})}. \label{MB-2} \end{align} Using the dyadic decomposition to the time-frequency, we obtain that \begin{align} \|\langle \partial_t \rangle^{s_0} \triangle_k S(t)u_0\|_{L^{2^*}_{t,x}} \lesssim \sum^\infty_{j=0} \| \mathscr{F}^{-1}_{t,x} \langle \tau \rangle^{s_0} \varphi_j (\tau) \mathscr{F}_t e^{{\rm i}t|\xi|^2_\pm} \varphi_k (\xi) \mathscr{F}_x u_0\|_{L^{2^*}_{t,x}}. 
\label{MB-3} \end{align} Noticing the fact that \begin{align} (\mathscr{F}^{-1}_{t} \langle \tau \rangle^{s_0} \varphi_j (\tau)) \star e^{{\rm i}t|\xi|^2_\pm} = c\; e^{{\rm i}t|\xi|^2_\pm} \varphi_j (|\xi|^2_\pm)\langle |\xi|^2_\pm \rangle^{s_0}, \label{MB-4} \end{align} and using the Strichartz inequality and Plancherel's identity, one has that \begin{align} \|\langle \partial_t \rangle^{s_0} \triangle_k S(t)u_0\|_{L^{2^*}_{t,x}} & \lesssim \sum^\infty_{j=0} \| S(t) \mathscr{F}^{-1}_{x} \langle |\xi|^2_\pm \rangle^{s_0} \varphi_j (|\xi|^2_\pm) \varphi_k (\xi) \mathscr{F}_x u_0\|_{L^{2^*}_{t,x}} \nonumber\\ & \lesssim \sum^\infty_{j=0} \| \mathscr{F}^{-1}_{x} \langle |\xi|^2_\pm \rangle^{s_0} \varphi_j (|\xi|^2_\pm) \varphi_k (\xi) \mathscr{F}_x u_0\|_{L^{2}_{x}(\mathbb{R}^n)} \nonumber\\ & \lesssim 2^{2 s_0 k} \sum^\infty_{j=0} \| \mathscr{F}^{-1}_{x} \varphi_j (|\xi|^2_\pm) \varphi_k (\xi) \mathscr{F}_x u_0\|_{L^{2}_{x}(\mathbb{R}^n)}. \label{MB-5} \end{align} Combining \eqref{MB-2} and \eqref{MB-5}, together with Minkowski's inequality, we have \begin{align} & \|\langle \partial_t \rangle^{s_0} S(t)u_0\|_{L^{2^*}(\mathbb{R}, B^{ns_0}_{2^*, 2^*}(\mathbb{R}^n))} \nonumber\\ & \lesssim \sum^\infty_{j=0} \left( \sum^\infty_{k=0} 2^{(n+2)s_0k 2^*}\| \mathscr{F}^{-1} \varphi_j (|\xi|^2_\pm) \varphi_k \mathscr{F} u_0\|^{2^*}_{L^{2}_{x}(\mathbb{R}^n)} \right)^{1/2^*} \nonumber\\ & \lesssim \sum^\infty_{j=0} \| \mathscr{F}^{-1} \varphi_j (|\xi|^2_\pm) \mathscr{F} u_0\|_{B^{(n+2)s_0}_{2, 2^*}}. 
\label{MB-6} \end{align} In view of $H^{(n+2)s_0}\subset B^{(n+2)s_0}_{2,2^*}$ and H\"older's inequality, we have for any $\varepsilon>0$, \begin{align} \sum^\infty_{j=0} \| \mathscr{F}^{-1} \varphi_j (|\xi|^2_\pm) \mathscr{F} u_0\|_{B^{(n+2)s_0}_{2, 2^*}} & \lesssim \sum^\infty_{j=0} \| \mathscr{F}^{-1} \varphi_j (|\xi|^2_\pm) \mathscr{F} u_0\|_{H^{(n+2)s_0}} \nonumber\\ & \lesssim \left(\sum^\infty_{j=0} 2^{2j\varepsilon} \| \mathscr{F}^{-1} \varphi_j (|\xi|^2_\pm) \mathscr{F} u_0\|^2_{H^{(n+2)s_0}} \right)^{1/2}. \label{MB-7} \end{align} By Plancherel's identity, and ${\rm supp} \varphi_j (|\xi|^2_\pm) \subset \{\xi:\; ||\xi|^2_\pm|\in [2^{j-1}, 2^{j+1}]\}$, we easily see that \begin{align} \left(\sum^\infty_{j=0} 2^{2j\varepsilon} \| \mathscr{F}^{-1} \varphi_j (|\xi|^2_\pm) \mathscr{F} u_0\|^2_{H^{(n+2)s_0}} \right)^{1/2} & \lesssim \left(\sum^\infty_{j=0} \| \langle |\xi|^2_\pm\rangle^\varepsilon \varphi_j (|\xi|^2_\pm) \mathscr{F} u_0\|^2_{H^{(n+2)s_0}} \right)^{1/2} \nonumber\\ & \lesssim \left(\sum^\infty_{j=0} \| \varphi_j (|\xi|^2_\pm) \mathscr{F} u_0\|^2_{H^{(n+2)s_0+2\varepsilon}} \right)^{1/2} \nonumber\\ & \lesssim \| u_0\|_{H^{(n+2)s_0+2\varepsilon}} . \label{MB-8} \end{align} Taking $s_0$ such that $(n+2)s_0 +2 \varepsilon< s$, from \eqref{MB-6}--\eqref{MB-8} we have the result, as desired. $ \Box$ Next, we consider the estimates for the maximal function based on the frequency-uniform decomposition method. This issue has some relations with the Strichartz estimates in modulation spaces. Recently, the Strichartz estimates have been generalized to various function spaces, for instance, in the Wiener amalgam spaces \cite{Co-Ni1,Co-Ni2}. 
Recall that in \cite{WaHe}, we obtained the following Strichartz estimates in modulation spaces for a class of dispersive semi-groups \begin{align} U(t)= \mathscr{F}^{-1} e^{{\rm i}t P(\xi)} \mathscr{F}, \label{Str-0} \end{align} where $P(\cdot): \mathbb{R}^n\to \mathbb{R}$ is a real-valued function and $U(t)$ satisfies the following decay estimate \begin{align} \|U(t)f\|_{M^\alpha_{ p,q}} \lesssim (1+|t|)^{-\delta} \| f\|_{M_{ p',q}}, \label{Str-1} \end{align} where $2\le p<\infty$, $\alpha=\alpha(p)\in \mathbb{R}$, $\delta=\delta(p)>0$, and $\alpha, \delta$ are independent of $t\in \mathbb{R}$. \begin{prop}\label{Strichartz-Mod} Let $U(t)$ satisfy \eqref{Str-1}. We have for any $\gamma\ge 2 \vee (2/\delta)$, \begin{align} \|U(t)f\|_{L^\gamma(\mathbb{R}, M^{\alpha/2}_{p,1})} \lesssim \| f\|_{M_{2,1}}. \label{Str-2} \end{align} \end{prop} Recall that the hyperbolic Schr\"odinger semi-group $S(t)=e^{-{\rm i} t\Delta_\pm}$ has the same decay estimate as that of the elliptic Schr\"odinger semi-group $e^{-{\rm i} t\Delta}$: \begin{align*} \|S(t) u_0 \|_{L^\infty(\mathbb{R}^n)} \lesssim |t|^{-n/2} \| u_0\|_{L^1(\mathbb{R}^n)}. \end{align*} It follows that \begin{align} \|S(t)u_0\|_{M_{\infty,1}} \lesssim |t|^{-n/2} \| u_0\|_{M_{1,1}}. \label{Str-3} \end{align} On the other hand, by the Hausdorff-Young and H\"older inequalities we easily calculate that \begin{align} \|\,\Box_k S(t)u_0\|_{L^\infty(\mathbb{R}^n)} & \lesssim \sum_{\ell \in \Lambda} \|\mathscr{F}^{-1} \sigma_{k+\ell} \mathscr{F} \Box_k S(t) u_0\|_{L^\infty(\mathbb{R}^n)} \nonumber\\ & \lesssim \sum_{\ell \in \Lambda} \| \sigma_{k+\ell} \mathscr{F} \Box_k u_0\|_{L^1(\mathbb{R}^n)} \lesssim \| \Box_k u_0\|_{L^1(\mathbb{R}^n)}, \nonumber \end{align} where $\Lambda$ is as in \eqref{orthogonal}. It follows that \begin{align} \|S(t)u_0\|_{M_{\infty,1}} \lesssim \| u_0\|_{M_{1,1}}.
\label{Str-4} \end{align} Hence, in view of \eqref{Str-3} and \eqref{Str-4}, we have \begin{align} \|S(t)u_0\|_{M_{\infty,1}} \lesssim (1+ |t|)^{-n/2} \| u_0\|_{M_{1,1}}. \label{Str-4a} \end{align} By Plancherel's identity, one has that \begin{align} \|S(t)u_0\|_{M_{2,1}} = \| u_0\|_{M_{2,1}}. \label{Str-5} \end{align} Hence, an interpolation between \eqref{Str-4a} and \eqref{Str-5} yields (cf. \cite{WaHu}) \begin{align} \|S(t)u_0\|_{M_{p,1}} \lesssim (1+ |t|)^{-n(1/2-1/p)} \| u_0\|_{M_{p',1}}. \nonumber \end{align} Applying Proposition \ref{Strichartz-Mod} (here with $\alpha=0$), we immediately obtain \begin{prop}\label{Strichartz-Mod-1} Let $2\le p<\infty$, $2/\gamma(p)= n(1/2-1/p)$. We have for any $\gamma\ge 2 \vee \gamma(p)$, \begin{align} \|S(t)u_0\|_{L^\gamma(\mathbb{R}, M_{p,1})} \lesssim \| u_0\|_{M_{2,1}}. \label{Str-6} \end{align} In particular, if $p\ge 2+4/n:=2^*$, then \begin{align} \|S(t)u_0\|_{L^p(\mathbb{R}, M_{p,1})} \lesssim \| u_0\|_{M_{2,1}}. \label{Str-7} \end{align} \end{prop} Let $ \Lambda= \{\ell\in \mathbb{Z}^n:\; {\rm supp}\, \sigma_\ell \cap {\rm supp}\,\sigma_0\not= \varnothing\}$ be as in \eqref{orthogonal}. Using the fact that $\Box_k\Box_{k+\ell}=0$ if $\ell \not\in \Lambda$, it is easy to see that \eqref{Str-7} implies the following frequency-uniform estimates: \begin{align} \|\,\Box_k S(t)u_0\|_{L^p_{t,x}(\mathbb{R}\times \mathbb{R}^n)} \lesssim \|\, \Box_k u_0\|_{2}, \quad k\in \mathbb{Z}^n. \label{Str-8} \end{align} Applying this estimate, we can get the following \begin{prop} \label{prop-max-3} Let $p\ge 2+ 4/n:= 2^*$. For any $s>(n+2)/p$, we have \begin{align} \sum_{k\in \mathbb{Z}^n} \left( \sum_{\alpha\in \mathbb{Z}^n} \|\,\Box_k S(t)u_0\|^p_{L^\infty_{t,x}(\mathbb{R} \times Q_\alpha)} \right)^{1/p} \lesssim \|u_0\|_{M^{s}_{2,1}}. \label{MaxMod-1} \end{align} \end{prop} {\bf Proof.} Let us follow the proof of Proposition \ref{prop-max-2}. Denote $\langle \partial_t \rangle=(I-\partial^2_t)^{1/2}$.
By Lemma \ref{Max-Besov-control}, for any $s_0>1/p$, \begin{align} \sum_{k\in \mathbb{Z}^n} \left( \sum_{\alpha\in \mathbb{Z}^n} \|\,\Box_k S(t)u_0\|^p_{L^\infty_{t,x}(\mathbb{R} \times Q_\alpha)} \right)^{1/p} & \lesssim \sum_{k\in \mathbb{Z}^n} \|\langle \partial_t \rangle^{s_0} S(t)\Box_k u_0\|_{L^{p}(\mathbb{R}, B^{ns_0}_{p,p}(\mathbb{R}^n))} \nonumber\\ & \lesssim \sum_{k\in \mathbb{Z}^n} \|\langle \partial_t \rangle^{s_0} S(t)\Box_k u_0\|_{L^{p}(\mathbb{R}, H^{ns_0}_{p}(\mathbb{R}^n))}, \label{MM-1} \end{align} where we have used the fact that $H^{ns_0}_{p}(\mathbb{R}^n) \subset B^{ns_0}_{p,p}(\mathbb{R}^n)$. Since ${\rm supp}\sigma_k \subset B(k, \sqrt{n/2})$, applying Bernstein's multiplier estimate, we get that \begin{align} \sum_{k\in \mathbb{Z}^n} \|\langle \partial_t \rangle^{s_0} S(t)\Box_k u_0\|_{L^{p}(\mathbb{R}, H^{ns_0}_{p}(\mathbb{R}^n))} \lesssim \sum_{k\in \mathbb{Z}^n} \langle k \rangle^{n s_0} \|\langle \partial_t \rangle^{s_0} S(t)\Box_k u_0\|_{L^{p}_{t,x}(\mathbb{R}^{1+n})}. \label{MM-2} \end{align} Similarly as in \eqref{MB-5}, using \eqref{Str-8}, we have \begin{align} \|\langle \partial_t \rangle^{s_0} S(t)\Box_k u_0\|_{L^{p}_{t,x}(\mathbb{R}^{1+n})} & \lesssim \sum^\infty_{j=0} \| \mathscr{F}^{-1}_{x} \langle |\xi|^2_\pm \rangle^{s_0} \varphi_j (|\xi|^2_\pm) e^{{\rm i}t|\xi|^2_\pm} \sigma_k (\xi) \mathscr{F}_x u_0\|_{L^{p}_{t,x}(\mathbb{R}^{1+n})} \nonumber\\ & \lesssim \sum^\infty_{j=0} \langle k \rangle^{2 s_0} \| \mathscr{F}^{-1}_{x} \varphi_j (|\xi|^2_\pm) \sigma_k (\xi) \mathscr{F}_x u_0\|_{L^{2}_{x}(\mathbb{R}^{n})} . \label{MM-3} \end{align} In an analogous way as in \eqref{MB-7} and \eqref{MB-8}, we obtain that \begin{align} \sum^\infty_{j=0} \langle k \rangle^{2 s_0} \| \mathscr{F}^{-1}_{x} \varphi_j (|\xi|^2_\pm) \sigma_k (\xi) \mathscr{F}_x u_0\|_{L^{2}_{x}(\mathbb{R}^{n})} \lesssim \langle k \rangle^{2 s_0+2\varepsilon} \|\, \Box_k u_0\|_{L^{2}_{x}(\mathbb{R}^{n})}. 
\label{MM-4} \end{align} Collecting \eqref{MM-1}--\eqref{MM-4}, we have \begin{align} \sum_{k\in \mathbb{Z}^n} \left( \sum_{\alpha\in \mathbb{Z}^n} \|\,\Box_k S(t)u_0\|^p_{L^\infty_{t,x}(\mathbb{R} \times Q_\alpha)} \right)^{1/p} \lesssim \sum_{k\in \mathbb{Z}^n}\langle k \rangle^{(n+2) s_0+ 2 \varepsilon} \| \,\Box_k u_0\|_{L^{2}_{x}(\mathbb{R}^{n})}. \label{MM-5} \end{align} Hence, by \eqref{MM-5} we have \eqref{MaxMod-1}. $ \Box$ Using the same ideas as in Lemma \ref{Max-Besov-control} and Proposition \ref{prop-max-2}, we can show the following \begin{prop} \label{prop-max-4} Let $p\ge 2+ 4/n:= 2^*$. Let $2^* \le r, q \le \infty$, $s_0> 1/2^* -1/q$, $s_1 > n(1/2^* -1/r)$. Then we have \begin{align} \left( \sum_{\alpha\in \mathbb{Z}^n} \|\, S(t)u_0\|^p_{L^q(\mathbb{R}, \, L^r( Q_\alpha))} \right)^{1/p} \lesssim \|u_0\|_{H^{s_1+2s_0}}. \label{ML-1} \end{align} In particular, for any $q, p\ge 2^*$, $s> n/2- 2/q$, \begin{align} \left( \sum_{\alpha\in \mathbb{Z}^n} \|\, S(t)u_0\|^p_{L^q(\mathbb{R}, \, L^\infty( Q_\alpha))} \right)^{1/p} \lesssim \|u_0\|_{H^{s}}. \label{ML-2} \end{align} \end{prop} {\bf Sketch of Proof.} In view of $\ell^{2^*} \subset \ell^p $, it suffices to consider the case $p=2^*$. Using the inclusions $H^{s_0}_p(\mathbb{R}) \subset L^q(\mathbb{R})$ and $B^{s_1}_{p,p}(\mathbb{R}^n) \subset L^r(\mathbb{R}^n)$, we have \begin{align} \|u\|_{L^q(\mathbb{R}, \, L^r( Q_\alpha))} \lesssim \|(I-\partial^2_t)^{s_0/2} \sigma_\alpha u\|_{L^p(\mathbb{R}, B^{s_1}_{p,p}(\mathbb{R}^n))}. \label{ML-4} \end{align} In the same way as in Lemma \ref{Max-Besov-control}, we can show that \begin{align} \left( \sum_{\alpha\in \mathbb{Z}^n} \|u\|^p_{L^q(\mathbb{R}, \, L^r( Q_\alpha))} \right)^{1/p} \lesssim \|(I-\partial^2_t)^{s_0/2} u\|_{L^p(\mathbb{R}, B^{s_1}_{p,p}(\mathbb{R}^n))}.
\label{ML-3} \end{align} One can repeat the procedures as in the proof of Lemma \ref{Max-Besov-control} to conclude that \begin{align} \sum_{\alpha\in \mathbb{Z}^n} \|(I-\partial^2_t)^{s_0/2} \sigma_\alpha S(t) u_0\|^p_{L^p(\mathbb{R}, B^{s_1}_{p,p}(\mathbb{R}^n))} \lesssim \sum^\infty_{j=0} \|\mathscr{F}^{-1}\varphi_j(|\xi|^2_\pm) \mathscr{F} u_0\|_{H^{s_1+2s_0}(\mathbb{R}^n)}. \label{ML-5} \end{align} Arguing as in the proof of Proposition \ref{prop-max-2}, \begin{align} \sum^\infty_{j=0} \|\mathscr{F}^{-1}\varphi_j(|\xi|^2_\pm) \mathscr{F} u_0\|_{H^{s_1+2s_0}(\mathbb{R}^n)} \lesssim \|u_0\|_{H^{s_1+2s_0+ 2\varepsilon}}. \label{ML-6} \end{align} Collecting \eqref{ML-3}, \eqref{ML-5} and \eqref{ML-6}, we immediately get \eqref{ML-1}. $ \Box$ \begin{prop} \label{prop-max-5} For any $q \ge p\ge 2^*$, $s> (n+2)/p- 2/q$, \begin{align} \sum_{k\in \mathbb{Z}^n}\left( \sum_{\alpha\in \mathbb{Z}^n} \|\,\Box_k S(t)u_0\|^p_{L^q(\mathbb{R}, \, L^\infty( Q_\alpha))} \right)^{1/p} \lesssim \|u_0\|_{M^{s}_{2,1}}. \label{ML-7} \end{align} \end{prop} \section{Global-local estimates on time-space} \label{global-local-estimate} \subsection{Time-global and space-local Strichartz estimates} We need some modifications of the Strichartz estimates, which are global in the time variable and local in the spatial variable. We always denote by $S(t)$ and $\mathscr{A}$ the generalized Schr\"odinger semi-group and the integral operator as in \eqref{S(t)}. \begin{prop} \label{local-Str} Let $n\ge 3$. Then we have \begin{align} \sup_{\alpha\in \mathbb{Z}^n}\|S(t) u_0\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} & \lesssim \|u_0\|_2, \label{LocStr-1}\\ \sup_{\alpha\in \mathbb{Z}^n} \left\|\mathscr{A} F \right\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} & \lesssim \sum_{\alpha\in \mathbb{Z}^n} \|F\|_{L^1_tL^2_{x} (\mathbb{R}\times Q_\alpha)},
\label{LocStr-2}\\ \sup_{\alpha\in \mathbb{Z}^n} \left\|\mathscr{A} F \right\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} & \lesssim \sum_{\alpha\in \mathbb{Z}^n} \|F\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)}. \label{LocStr-2a} \end{align} \end{prop} {\bf Proof.} In view of H\"older's inequality and the endpoint Strichartz estimate, \begin{align} \|S(t) u_0\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} & \lesssim \|S(t) u_0\|_{L^2_t L^{2n/(n-2)}_x (\mathbb{R}\times Q_\alpha)} \nonumber\\ & \le \|S(t) u_0\|_{L^2_t L^{2n/(n-2)}_x (\mathbb{R}\times \mathbb{R}^n)} \nonumber\\ & \lesssim \| u_0\|_{L^2_x (\mathbb{R}^n)}. \label{LocStr-3} \end{align} Using the above ideas and the following Strichartz estimates \begin{align} \left\|\mathscr{A} F \right\|_{L^2_{t}L^{2n/(n-2)}_x (\mathbb{R}\times \mathbb{R}^n)} & \lesssim \|F\|_{L^1_tL^2_{x} (\mathbb{R}\times \mathbb{R}^n)}, \label{LocStr-4}\\ \left\|\mathscr{A} F \right\|_{L^2_{t}L^{2n/(n-2)}_x (\mathbb{R}\times \mathbb{R}^n)} & \lesssim \|F\|_{L^2_tL^{2n/(n+2)}_{x} (\mathbb{R}\times \mathbb{R}^n)}, \label{LocStr-4a} \end{align} one can easily get \eqref{LocStr-2} and \eqref{LocStr-2a}. $ \Box$ Since the endpoint Strichartz estimate used in the proof of Proposition \ref{local-Str} only holds for $n\ge 3$, it is not clear to us whether \eqref{LocStr-1} still holds for $n=2$. This is why we have an additional condition that $u_0\in \dot H^{-1/2}$ is small in 2D. However, we have the following (see \cite{KePoVe1}) \begin{prop} \label{local-Str-2DH} Let $n=2$. Then we have for any $1\le r<4/3$, \begin{align} \sup_{\alpha\in \mathbb{Z}^n} \left\|S(t) u_0 \right\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} & \lesssim \min\left(\|(-\Delta)^{-1/4}u_0\|_{2}, \; \|u_0\|_{L^2\cap L^{r}(\mathbb{R}^n)} \right) . \label{LocStr-5} \end{align} \end{prop} In the low frequency case, one easily sees that \eqref{LocStr-5} is strictly weaker than \eqref{LocStr-1}.
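The restriction $r<4/3$ in Proposition \ref{local-Str-2DH} guarantees the time integrability behind \eqref{LocStr-5}: since $1\le r<4/3$ implies $2(2/r-1)>1$, we have
\begin{align*}
\int_{\mathbb{R}} (1+|t|)^{2(1-2/r)} dt = 2\int^\infty_{0} (1+t)^{-2(2/r-1)} dt <\infty,
\end{align*}
so that the pointwise decay rate $(1+|t|)^{1-2/r}$ is square integrable in time.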
\noindent{\bf Proof.} By Lemma \ref{local-sm-1}, it suffices to show \begin{align} \sup_{\alpha\in \mathbb{Z}^n} \left\|S(t) u_0 \right\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} & \lesssim \|u_0\|_{L^2\cap L^{r}(\mathbb{R}^n)}. \label{LocStr-5-1} \end{align} Using the unitarity of $S(t)$ on $L^2$ and the $L^{r}$-$L^{r'}$ decay estimate of $S(t)$, we have \begin{align} \left\|S(t) u_0 \right\|_{L^2_{x} (Q_\alpha)} \lesssim (1+|t|)^{1-2/r} \|u_0\|_{L^2\cap L^{r}(\mathbb{R}^n)}. \label{LocStr-5-2} \end{align} Taking the $L^2_t$ norm on both sides of \eqref{LocStr-5-2}, we immediately get \eqref{LocStr-5-1}. Hence, the result follows. $ \Box$ \begin{prop} \label{local-Str-2D} Let $n=2$. Then we have \begin{align} \sup_{\alpha\in \mathbb{Z}^n} \left\|\mathscr{A} F \right\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} & \lesssim \sum_{\alpha\in \mathbb{Z}^2} \|F\|_{L^1_tL^2_{x} (\mathbb{R}\times Q_\alpha)}. \label{LocStr-6} \end{align} \end{prop} {\bf Proof.} We notice that \begin{align} \|S(t) f\|_{L^2_{x} (Q_\alpha)} & \lesssim (1+|t|)^{-1} \|f\|_{L^1_x \cap L^2_{x} (\mathbb{R}^n)}. \label{LocStr-7} \end{align} It follows that \begin{align} \ \left\|\mathscr{A} F \right\|_{L^2_{x} ( Q_\alpha)} & \lesssim \int_{\mathbb{R}}(1+|t-\tau|)^{-1} \|F(\tau)\|_{L^1_x \cap L^2_{x} (\mathbb{R}^n)} d\tau. \label{LocStr-8} \end{align} Using Young's inequality (noting that $(1+|t|)^{-1} \in L^2_t(\mathbb{R})$), one has that \begin{align} \ \left\|\mathscr{A} F \right\|_{L^2_{t, x} (\mathbb{R} \times Q_\alpha)} & \lesssim \|F\|_{L^1(\mathbb{R},\, L^1_x \cap L^2_{x} (\mathbb{R}^n))}. \label{LocStr-9} \end{align} In view of H\"older's inequality, \eqref{LocStr-9} yields the result, as desired. $ \Box$ \subsection{Note on the time-global and space-local smooth effects} Kenig, Ponce and Vega \cite{KePoVe,KePoVe1} obtained the local smooth effect estimates for the Schr\"odinger group $e^{{\rm i}t \Delta}$, and their results can also be extended to the non-elliptical Schr\"odinger group $e^{{\rm i}t \Delta_\pm}$ (\cite{KePoVe2}).
On the basis of their results and Proposition \ref{local-Str}, we can obtain a time-global version of the local smooth effect estimates with the nonhomogeneous derivative $(I-\Delta)^{1/2}$ instead of the homogeneous derivative $\nabla$, which is useful for controlling the low frequency part of the nonlinearity. \begin{lem} \label{local-sm-1} (\cite{KePoVe}) Let $\Omega$ be an open set in $\mathbb{R}^n$, $\phi$ be a $C^1(\Omega)$ function such that $\nabla \phi(\xi) \not=0$ for any $\xi \in \Omega$. Assume that there is $N\in \mathbb{N}$ such that for any $k\in \{0,1,...,n-1\}$, $\bar{\xi}:=(\xi_1,...,\xi_{n-1}) \in \mathbb{R}^{n-1}$ and $r\in \mathbb{R}$, the equation $\phi (\xi_1,...,\xi_{k}, x, \xi_{k+1},...,\xi_{n-1})=r$ has at most $N$ solutions. For $a(x,s) \in L^\infty(\mathbb{R}^{n}\times \mathbb{R})$ and $f\in \mathscr{S}(\mathbb{R}^n)$, we denote \begin{align} W(t)f(x) = \int_\Omega e^{{\rm i}(t\phi(\xi)+x\xi)} a(x, \phi(\xi)) \hat{f}(\xi) d\xi. \label{LSM-1} \end{align} Then for $n\ge 2$, we have \begin{align} \|W(t)f\|_{L^2_{t,x} (\mathbb{R}\times B(0,R))} \le CNR^{1/2} \||\nabla \phi|^{-1/2} \hat{f}\|_{L^2(\Omega)} . \label{LSM-2} \end{align} \end{lem} \begin{cor} \label{local-sm-2} Let $n\ge 3$, $S(t)=e^{{\rm i}t \Delta_\pm}$. We have \begin{align} \sup_{\alpha\in \mathbb{Z}^n}\|S(t)u_0\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} & \lesssim \|u_0\|_{H^{-1/2}}, \label{LSM-3}\\ \left\|\mathscr{A} f \right\|_{L^\infty(\mathbb{R}, H^{1/2})} & \lesssim \sum_{\alpha\in \mathbb{Z}^n} \|f\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)}. \label{LSM-3-dual} \end{align} For $n=2$, \eqref{LSM-3-dual} also holds if one replaces $H^{1/2}$ by $\dot H^{1/2}$. \end{cor} {\bf Proof.} Let $\Omega=\mathbb{R}^n\setminus B(0,1)$, $\phi(\xi)= |\xi|^2_\pm$ and $\psi$ be as in \eqref{cutoff}, $a(x, s)= 1-\psi(s)$ in Lemma \ref{local-sm-1}.
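Before applying Lemma \ref{local-sm-1}, we observe that for $\phi(\xi)=|\xi|^2_\pm$ one has $\nabla \phi(\xi)= 2(\pm\xi_1,...,\pm\xi_n)$, the signs being determined by the signature of $\Delta_\pm$, whence
\begin{align*}
|\nabla \phi(\xi)| = 2|\xi| \not= 0, \quad \xi\in \Omega.
\end{align*}
Moreover, for fixed $\bar{\xi}$ and $r$, the equation $\phi=r$ is a quadratic equation in the remaining variable and so has at most two solutions; hence the hypotheses of Lemma \ref{local-sm-1} are satisfied with $N=2$.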
Taking $W(t):= S(t)\mathscr{F}^{-1}(1-\psi)\mathscr{F}$, from \eqref{LSM-2} we have \begin{align} \sup_{\alpha \in \mathbb{Z}^n} \|S(t)\mathscr{F}^{-1}(1-\psi)\mathscr{F} u_0\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} \lesssim \||\xi|^{-1/2}\hat{u}_0\|_{L^2_\xi(\mathbb{R}^n\setminus B(0,1))}. \label{LSM-4} \end{align} It follows from Proposition \ref{local-Str} that \begin{align} \|S(t)\mathscr{F}^{-1}\psi\mathscr{F} u_0\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} & \lesssim \|\mathscr{F}^{-1}\psi\mathscr{F} u_0\|_{L^2_x (\mathbb{R}^n)}\nonumber\\ & \lesssim \|\hat{u}_0\|_{L^2_\xi (B(0,2))}. \label{LSM-6} \end{align} From \eqref{LSM-4} and \eqref{LSM-6} we have \eqref{LSM-3}, as desired. \eqref{LSM-3-dual} is the dual version of \eqref{LSM-3}. $ \Box$ When $n=2$, it is known that for the elliptic case, the endpoint Strichartz estimate holds for radial functions (cf. \cite{Tao}). So, Corollary \ref{local-sm-2} also holds for radial $u_0$ in the elliptic case. The following local smooth effect estimates for the nonhomogeneous part of the solutions of the Schr\"odinger equation are also due to Kenig, Ponce and Vega \cite{KePoVe1}\footnote{In \cite{KePoVe1}, the result was stated for the elliptic case; however, their argument also adapts to the non-elliptic case.}. \begin{prop} \label{local-sm-3} Let $n\ge 2$, $S(t)=e^{{\rm i}t \Delta_\pm}$. We have \begin{align} \sup_{\alpha\in \mathbb{Z}^n} \left\| \nabla \mathscr{A} f \right\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)} & \lesssim \sum_{\alpha\in \mathbb{Z}^n} \|f\|_{L^2_{t,x} (\mathbb{R}\times Q_\alpha)}. \label{LSM-7} \end{align} \end{prop} \section{Proof of Theorem \ref{GWP-nD}} \label{proof-gwp-nd} \begin{lem} \label{Sobolev-ineq} (Sobolev Inequality). Let $\Omega \subset \mathbb{R}^n$ be a bounded domain with $\partial \Omega \in C^m$, $m, \ell \in \mathbb{N}\cup \{0\}$, $1\le r,p,q\le \infty$.
Assume that $$ \frac{\ell}{m} \le \theta \le 1, \ \ \frac{1}{p} -\frac{\ell}{n} = \theta\left(\frac{1}{r} - \frac{m}{n}\right) +\frac{1-\theta}{q}. $$ Then we have \begin{align} \sum_{|\beta|=\ell}\|D^\beta u\|_{L^p(\Omega)} \lesssim \| u\|^{1-\theta}_{L^q(\Omega)} \|u\|^\theta_{W^m_r(\Omega)}, \label{Sob-1} \end{align} where $\|u\|_{W^m_r(\Omega)} = \sum_{|\beta|\le m} \|D^\beta u\|_{L^r(\Omega)}$. \end{lem} {\bf Proof of Theorem \ref{GWP-nD}.} In order to illustrate our ideas in a transparent way, we first consider the simple case $s= [n/2]+ 5/2$; there is no difficulty in generalizing the proof to the case $s>n/2+3/2$, $s+1/2 \in \mathbb{N}$. We assume without loss of generality that \begin{align} F(u,\, \bar{u},\, \nabla u,\, \nabla \bar{u}):= F(u, \nabla u)= \sum_{\Lambda_{\kappa, \nu}} c_{\kappa\nu_1...\nu_n} u^{\kappa} u^{\nu_1}_{x_1}...u^{\nu_n}_{x_n}, \label{poly-1} \end{align} where $$ \Lambda_{\kappa, \nu}=\{(\kappa, \nu_1,...,\nu_n): \, m+1\le \kappa+\nu_1+...+\nu_n \le M+1 \}. $$ Since we only use Sobolev norms to control the nonlinear terms, $\bar{u}$ and $u$ have the same norms, and hence the general case can be handled in the same way. Denote \begin{align} & \lambda_1 (v) : = \|v\|_{\ell^\infty_\alpha (L^2_{t,x}(\mathbb{R} \times Q_\alpha))}, \nonumber\\ & \lambda_2 (v) : = \|v\|_{\ell^{2^*}_\alpha (L^\infty_{t,x} (\mathbb{R} \times Q_\alpha))}, \nonumber\\ & \lambda_3 (v) : = \|v\|_{\ell^{2^*}_\alpha (L^{2m}_{t}L^\infty_ x(\mathbb{R} \times Q_\alpha))}. \nonumber \end{align} Put \begin{align} & \mathscr{D}_n= \left\{ u: \; \sum_{|\beta|\le [n/2]+3} \lambda_1 (D^\beta u) + \sum_{|\beta|\le 1} \sum_{i=2,3} \lambda_i (D^\beta u) \le \varrho \right\}. \label{metricspace} \end{align} We consider the mapping \begin{align} & \mathscr{T}: u(t) \to S(t)u_0 - {\rm i} \mathscr{A} F(u, \, \nabla u), \label{map} \end{align} and we show that $\mathscr{T}: \mathscr{D}_n\to \mathscr{D}_n$ is a contraction mapping for any $n\ge 2$. {\it Step} 1.
For any $u\in \mathscr{D}_n$, we estimate $\lambda_1 (D^\beta \mathscr{T}u)$, $|\beta| \le 3+[n/2]$. We consider the following three cases. {\it Case } 1. $n\ge 3$ and $1\le |\beta| \le 3+[n/2]$. In view of Corollary \ref{local-sm-2} and Proposition \ref{local-sm-3}, we have for any $\beta$, $1\le |\beta|\le 3+[n/2]$, \begin{align} \lambda_1 (D^\beta \mathscr{T} u) & \lesssim \|S(t) D^\beta u_0\|_{\ell^\infty_\alpha (L^2_{t,x}(\mathbb{R} \times Q_\alpha))} + \sum_{\Lambda_{\kappa, \nu}} \|\mathscr{A} D^\beta(u^{\kappa} u^{\nu_1}_{x_1}...u^{\nu_n}_{x_n})\|_{\ell^\infty_\alpha (L^2_{t,x}(\mathbb{R} \times Q_\alpha))} \nonumber\\ & \lesssim \|u_0\|_{H^s } + \sum_{ |\beta|\le 2+[n/2]} \sum_{\Lambda_{\kappa, \nu}} \sum_{\alpha\in \mathbb{Z}^n} \|D^\beta(u^{\kappa} u^{\nu_1}_{x_1}...u^{\nu_n}_{x_n})\|_{L^2_{t,x}(\mathbb{R} \times Q_\alpha)}. \label{Lambda1-1} \end{align} For simplicity, we can further assume that $u^{\kappa} u^{\nu_1}_{x_1}...u^{\nu_n}_{x_n} = u^{\kappa} u^{\nu}_{x_1}$ in \eqref{Lambda1-1} and the general case can be treated in an analogous way\footnote{A general treatment is given below.}. So, one can rewrite \eqref{Lambda1-1} as \begin{align} \sum_{ 1\le |\beta|\le 3+[n/2]}\lambda_1 (D^\beta \mathscr{T} u) \lesssim \|u_0\|_{H^s } + \sum_{ |\beta|\le 2+[n/2]} \sum_{\Lambda_{\kappa, \nu}} \sum_{\alpha\in \mathbb{Z}^n} \|D^\beta(u^{\kappa} u^{\nu}_{x_1})\|_{L^2_{t,x}(\mathbb{R} \times Q_\alpha)}. \label{Lambda1-2} \end{align} It is easy to see that \begin{align} |D^\beta(u^{\kappa} u^{\nu}_{x_1})| \lesssim \sum_{\beta_1+...+\beta_{\kappa+\nu}=\beta} | D^{\beta_1} u...D^{\beta_\kappa} u D^{\beta_{\kappa+1}} u_{x_1}...D^{\beta_{\kappa+\nu}} u_{x_1} |.
\label{Lambda1-3} \end{align} By H\"older's inequality, \begin{align} \|D^\beta(u^{\kappa} u^{\nu}_{x_1})\|_{L^2_x(Q_\alpha)} \lesssim \sum_{\beta_1+...+\beta_{\kappa+\nu}=\beta} \prod^\kappa_{i=1} \| D^{\beta_i}u\|_{L^{p_i}_x(Q_\alpha)} \prod^{\kappa+\nu}_{i=\kappa+1} \| D^{\beta_i} u_{x_1}\|_{L^{p_i}_x(Q_\alpha)}, \label{Lambda1-4} \end{align} where $$ p_i=\left\{ \begin{array}{ll} 2|\beta|/|\beta_i|, & |\beta_i|\ge 1,\\ \infty, & |\beta_i|=0. \end{array} \right. $$ It is easy to see that for $\theta_i= |\beta_i|/|\beta|$, $$ \frac{1}{p_i}- \frac{|\beta_i|}{n} = \theta_i \left( \frac{1}{2}- \frac{|\beta|}{n}\right)+ \frac{1-\theta_i}{\infty}. $$ Using Sobolev's inequality, one has that for $B_\alpha :=\{x:\; |x-\alpha|\le \sqrt{n}\}$, \begin{align} & \|D^{\beta_i}u\|_{L^{p_i}_x(Q_\alpha)} \le \|D^{\beta_i}u\|_{L^{p_i}_x(B_\alpha)} \lesssim \|u\|^{1-\theta_i}_{L^{\infty}_x(B_\alpha)} \|u \|^{\theta_i}_{W^{|\beta|}_2(B_\alpha)}, \ \ i=1,...,\kappa; \label{Lambda1-5}\\ & \|D^{\beta_i}u_{x_1}\|_{L^{p_i}_x(Q_\alpha)} \lesssim \|u_{x_1}\|^{1-\theta_i}_{L^{\infty}_x(B_\alpha)} \|u_{x_1} \|^{\theta_i}_{W^{|\beta|}_2(B_\alpha)}, \ \ i=\kappa+1,..., \kappa+\nu. \label{Lambda1-6} \end{align} Since $$ \sum^{\kappa+\nu}_{i=1} \theta_i =1, \quad \sum^{\kappa+\nu}_{i=1} (1-\theta_i) = \kappa+\nu-1, $$ by \eqref{Lambda1-4}--\eqref{Lambda1-6} we have \begin{align} \|D^\beta(u^{\kappa} u^{\nu}_{x_1})\|_{L^2_x(Q_\alpha)} \lesssim & \sum_{|\beta| \le 2+[n/2]} (\|u \|_{W^{|\beta|}_2(B_\alpha)}+ \|u_{x_1}\|_{W^{|\beta|}_2(B_\alpha)}) \nonumber\\ & \times (\|u\|^{\kappa+\nu-1}_{L^{\infty}_x(B_\alpha)}+ \|u_{x_1}\|^{\kappa+\nu-1}_{L^{\infty}_x(B_\alpha)}) \nonumber\\ \lesssim & \sum_{|\gamma| \le 3+[n/2]} \|D^\gamma u \|_{L^2_x (B_\alpha)} \sum_{|\beta|\le 1}\|D^\beta u\|^{\kappa+\nu-1}_{L^{\infty}_x(B_\alpha)}. \label{Lambda1-7} \end{align} It follows from \eqref{Lambda1-7} and $\ell^{2^*} \subset \ell^{\kappa+\nu-1}$ that \begin{align} & \!\!\!\! 
\sum _{|\beta| \le 2+[n/2]} \sum_{\alpha\in \mathbb{Z}^n} \|D^\beta(u^{\kappa} u^{\nu}_{x_1})\|_{L^2_{t,x}(\mathbb{R}\times Q_\alpha)} \nonumber\\ & \lesssim \sum_{\alpha\in \mathbb{Z}^n} \sum_{|\gamma| \le 3+[n/2]} \|D^\gamma u \|_{L^2_{t,x} (\mathbb{R}\times B_\alpha)} \sum_{|\beta|\le 1}\|D^\beta u\|^{\kappa+\nu-1}_{L^{\infty}_{t,x}(\mathbb{R}\times B_\alpha)} \nonumber\\ & \lesssim \sum_{|\gamma| \le 3+[n/2]} \lambda_1(D^\gamma u) \sum_{|\beta|\le 1}\lambda_2 (D^\beta u)^{\kappa+\nu-1} \lesssim \varrho^{\kappa+\nu}. \label{Lambda1-8} \end{align} Hence, in view of \eqref{Lambda1-2} and \eqref{Lambda1-8} we have \begin{align} \sum_{ 1\le |\beta|\le 3+[n/2]}\lambda_1 (D^\beta \mathscr{T} u) \lesssim \|u_0\|_{H^s } + \sum^{M+1}_{\kappa+\nu=m+1} \varrho^{\kappa+\nu}. \label{Lambda1-9} \end{align} {\it Case } 2. $n\ge 3$ and $|\beta|=0$. By Corollary \ref{local-sm-2}, the local Strichartz estimate \eqref{LocStr-2} and H\"older's inequality, \begin{align} \lambda_1 (\mathscr{T}u) & \lesssim \|S(t) u_0\|_{\ell^\infty_\alpha (L^2_{t,x}(\mathbb{R} \times Q_\alpha))} + \|\mathscr{A} F(u, \nabla u)\|_{\ell^\infty_\alpha (L^2_{t,x}(\mathbb{R} \times Q_\alpha))} \nonumber\\ & \lesssim \|u_0\|_{2} + \sum_{\alpha\in \mathbb{Z}^n} \|F(u, \nabla u) \|_{L^1_{t}L^{2}_x(\mathbb{R} \times Q_\alpha)} \nonumber\\ & \lesssim \|u_0\|_{2} + \sum_{\Lambda_{\kappa, \nu}} \sum_{\alpha\in \mathbb{Z}^n} \|u^\kappa u^{\nu_1}_{x_1}... 
u^{\nu_n}_{x_n} \|_{L^1_{t}L^{2}_x(\mathbb{R} \times Q_\alpha)} \nonumber\\ & \lesssim \|u_0\|_{2} + \sum^{M+1}_{\kappa+\nu=m+1} \sum_{|\gamma|\le 1} \sup_{\alpha\in \mathbb{Z}^n} \|D^\gamma u \|_{L^2_{t,x} (\mathbb{R} \times Q_\alpha)} \nonumber\\ & \ \ \ \ \ \ \ \ \ \ \ \times \sum_{|\beta|\le 1} \sum_{\alpha\in \mathbb{Z}^n} \|D^\beta u \|^{\kappa+\nu-1}_{L^{2(\kappa+\nu-1)}_{t} L^\infty_x (\mathbb{R} \times Q_\alpha)} \nonumber\\ & \lesssim \|u_0\|_{2} + \sum^{M+1}_{\kappa+\nu=m+1} \sum_{|\gamma|\le 1} \lambda_1(D^\gamma u) \sum_{i=2,3} \sum_{|\beta|\le 1} \lambda_i(D^\beta u )^{\kappa+\nu-1} \nonumber\\ & \lesssim \|u_0\|_{2} + \sum^{M+1}_{\kappa+\nu=m+1} \varrho^{\kappa+\nu}. \label{Lambda1-10} \end{align} {\it Case } 3. $n=2, \; |\beta|=0$. By Propositions \ref{local-Str-2DH} and \ref{local-Str-2D}, we have \begin{align} \lambda_1 (\mathscr{T}u) & \lesssim \|u_0\|_{\dot H^{-1/2}} + \sum_{\alpha\in \mathbb{Z}^n} \|F(u, \nabla u) \|_{L^1_{t}L^{2}_x(\mathbb{R} \times Q_\alpha)}. \label{Lambda1-11} \end{align} Arguing as in Case 2, we have \begin{align} \lambda_1 (\mathscr{T}u) & \lesssim \|u_0\|_{\dot H^{-1/2}} + \sum^{M+1}_{\kappa+\nu=m+1} \varrho^{\kappa+\nu}. \label{Lambda1-12} \end{align} {\it Step} 2. We consider the estimates of $\lambda_2(D^\beta \mathscr{T}u)$, $|\beta| \le 1$.
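Throughout this step, the maximal function estimate is used in the following form, which is also contained in Proposition \ref{prop-max-4} (take $q=\infty$ and $p=2^*$ in the second estimate there): for $0<\varepsilon\ll 1$,
\begin{align*}
\|S(t)v_0\|_{\ell^{2^*}_\alpha (L^\infty_{t,x}(\mathbb{R} \times Q_\alpha))} \lesssim \|v_0\|_{H^{n/2+\varepsilon}},
\end{align*}
which we apply to $v_0= D^\beta u_0$ and, via the integral operator $\mathscr{A}$, to the nonlinearity.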
Using the estimates of the maximal function as in Proposition \ref{prop-max-2}, we have for $|\beta| \le 1$, $0<\varepsilon \ll 1$, \begin{align} \lambda_2 (D^\beta \mathscr{T}u) & \lesssim \|S(t)D^\beta u_0\|_{\ell^{2^*}_\alpha (L^\infty_{t,x}(\mathbb{R} \times Q_\alpha))} + \|\mathscr{A} D^\beta F(u, \nabla u)\|_{\ell^{2^*}_\alpha (L^\infty_{t,x}(\mathbb{R} \times Q_\alpha))} \nonumber\\ & \lesssim \|D^\beta u_0\|_{H^{n/2+\varepsilon}} + \sum_{|\beta| \le 1} \|D^\beta F(u, \nabla u) \|_{L^1 (\mathbb{R}, H^{n/2+\varepsilon}(\mathbb{R}^n))} \nonumber\\ & \lesssim \|u_0\|_{H^{n/2+1+\varepsilon}} + \sum_{|\beta| \le [n/2]+2}\sum_{\alpha\in \mathbb{Z}^n} \|D^\beta F(u, \nabla u) \|_{L^1_{t}L^{2}_x(\mathbb{R} \times Q_\alpha)}. \label{Lambda2-1} \end{align} Arguing as in Step 1, we have for any $|\beta| \le [n/2]+2$, \begin{align} \|D^\beta F(u, \nabla u)\|_{L^2_x(Q_\alpha)} \lesssim \sum^{M+1}_{\kappa+\nu=m+1} \sum_{|\beta|\le 1}\|D^\beta u\|^{\kappa+\nu-1}_{L^{\infty}_x(B_\alpha)} \sum_{|\gamma| \le 3+[n/2]} \|D^\gamma u \|_{L^2_x (B_\alpha)} . \label{Lambda2-2} \end{align} By H\"older's inequality, we have from \eqref{Lambda2-2} that \begin{align} \|D^\beta F(u, \nabla u)\|_{L^1_tL^2_x(\mathbb{R}\times Q_\alpha)} \lesssim & \sum^{M+1}_{\kappa+\nu=m+1} \sum_{|\gamma| \le 3+[n/2]} \|D^\gamma u \|_{L^2_{t,x} (\mathbb{R}\times B_\alpha)} \nonumber\\ & \ \ \ \ \ \times \sum_{|\beta|\le 1}\|D^\beta u\|^{\kappa+\nu-1}_{L^{2(\kappa+\nu-1)}_t L^{\infty}_x(\mathbb{R}\times B_\alpha)}.
\label{Lambda2-3} \end{align} Summing \eqref{Lambda2-3} over all $\alpha\in \mathbb{Z}^n$, we have for any $|\beta| \le 2+[n/2]$, \begin{align} & \sum_{\alpha \in \mathbb{Z}^n} \|D^\beta F(u, \nabla u)\|_{L^1_tL^2_x(\mathbb{R}\times Q_\alpha)} \nonumber\\ & \lesssim \sum^{M+1}_{\kappa+\nu=m+1} \sum_{|\gamma| \le 3+[n/2]} \lambda_1 ( D^\gamma u ) \sum_{|\beta|\le 1} \sum_{\alpha \in \mathbb{Z}^n} \|D^\beta u\|^{\kappa+\nu-1}_{L^{2(\kappa+\nu-1)}_t L^{\infty}_x(\mathbb{R}\times B_\alpha)} \nonumber\\ & \lesssim \sum^{M+1}_{\kappa+\nu=m+1} \sum_{|\gamma| \le 3+[n/2]} \lambda_1 ( D^\gamma u ) \sum_{|\beta|\le 1} \sum_{\alpha \in \mathbb{Z}^n} \|D^\beta u\|^{\kappa+\nu-1}_{(L^{2m}_t L^{\infty}_x) \cap L^\infty_{t,x}(\mathbb{R}\times B_\alpha)} \nonumber\\ & \lesssim \sum^{M+1}_{\kappa+\nu=m+1} \sum_{|\gamma| \le 3+[n/2]} \lambda_1 ( D^\gamma u ) \sum_{|\beta|\le 1} (\lambda_2(D^\beta u)^{\kappa+\nu-1} +\lambda_3(D^\beta u)^{\kappa+\nu-1} ) \nonumber\\ & \lesssim \sum^{M+1}_{\kappa+\nu=m+1} \varrho^{\kappa+\nu}. \label{Lambda2-4} \end{align} Combining \eqref{Lambda2-1} with \eqref{Lambda2-4}, we obtain that \begin{align} \sum_{|\beta|\le 1}\lambda_2 (D^\beta \mathscr{T}u) \lesssim \|u_0\|_{H^{n/2+1+\varepsilon}} + \sum^{M+1}_{\kappa+\nu=m+1} \varrho^{\kappa+\nu}. \label{Lambda2-5} \end{align} {\it Step} 3. We estimate $\lambda_3 (D^\beta \mathscr{T}u)$, $|\beta| \le 1$.
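Here the relevant maximal function estimate is that of Proposition \ref{prop-max-4} with $q=2m$ and $p=2^*$: since $n/2-1/m+\varepsilon > n/2- 2/(2m)$ for any $\varepsilon>0$, we have
\begin{align*}
\|S(t)v_0\|_{\ell^{2^*}_\alpha (L^{2m}_t L^\infty_{x}(\mathbb{R} \times Q_\alpha))} \lesssim \|v_0\|_{H^{n/2-1/m+\varepsilon}}.
\end{align*}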
In view of Proposition \ref{prop-max-4}, one has that \begin{align} \lambda_3 (D^\beta \mathscr{T}u) & \lesssim \|S(t)D^\beta u_0\|_{\ell^{2^*}_\alpha (L^{2m}_t L^\infty_{x}(\mathbb{R} \times Q_\alpha))} + \|\mathscr{A} D^\beta F(u, \nabla u)\|_{\ell^{2^*}_\alpha (L^{2m}_t L^\infty_{x}(\mathbb{R} \times Q_\alpha))} \nonumber\\ & \lesssim \|D^\beta u_0\|_{H^{n/2 -1/m +\varepsilon}} + \sum_{|\beta| \le 1} \|D^\beta F(u, \nabla u) \|_{L^1 (\mathbb{R}, H^{n/2-1/m +\varepsilon}(\mathbb{R}^n))} \nonumber\\ & \lesssim \|u_0\|_{H^{n/2+1}} + \sum_{|\beta| \le [n/2]+2}\sum_{\alpha\in \mathbb{Z}^n} \|D^\beta F(u, \nabla u) \|_{L^1_{t}L^{2}_x(\mathbb{R} \times Q_\alpha)}, \label{Lambda3-1} \end{align} which reduces to the same situation as in \eqref{Lambda2-1}. Therefore, collecting the estimates as in Steps 1--3, we have for $n\ge 3$, \begin{align} \sum_{|\beta|\le 3+[n/2]}\lambda_1 (D^\beta \mathscr{T}u) + \sum_{i=2,3} \sum_{|\beta|\le 1}\lambda_i (D^\beta \mathscr{T}u) \lesssim \|u_0\|_{H^s} + \sum^{M+1}_{\kappa+\nu=m+1} \varrho^{\kappa+\nu}, \label{Lambda-1} \end{align} and for $n=2$, \begin{align} \sum_{|\beta|\le 4}\lambda_1 (D^\beta \mathscr{T}u) + \sum_{i=2,3} \sum_{|\beta|\le 1}\lambda_i (D^\beta \mathscr{T}u) \lesssim \|u_0\|_{H^{7/2} \cap \dot H^{-1/2}} + \sum^{M+1}_{\kappa+\nu=m+1} \varrho^{\kappa+\nu}. \label{Lambda-2} \end{align} It follows that for $n\ge 3,$ $\mathscr{T}: \mathscr{D}_n \to \mathscr{D}_n$ is a contraction mapping if $\varrho$ and $\|u_0\|_{H^s}$ are small enough (similarly for $n=2$). Before considering the case $s>n/2+3/2$, we first establish a nonlinear mapping estimate: \begin{lem} \label{NLE-Besov} Let $n\ge 2$, $s>0$, $K\in \mathbb{N}$. Let $1\le p, p_i, q, q_i \le \infty$ satisfy $1/p = 1/p_1+(K-1)/p_2$ and $1/q=1/q_1+ (K-1)/q_2$.
We have \begin{align} \|v_1...v_K\|_{\ell^{1,s}_\triangle \ell^1_\alpha (L^q_t L^p_x (\mathbb{R}\times Q_\alpha))} \lesssim & \sum^K_{k=1} \|v_k\|_{\ell^{1,s}_\triangle \ell^\infty_\alpha (L^{q_1}_t L^{p_1}_x (\mathbb{R}\times Q_\alpha))} \nonumber\\ & \ \times \prod_{i\not= k, \, i=1,...,K} \|v_i\|_{\ell^{1}_\triangle \ell^{K-1}_\alpha (L^{q_2}_t L^{p_2}_x (\mathbb{R}\times Q_\alpha))}. \label{NLE-1} \end{align} \end{lem} {\bf Proof.} Denote $S_r u= \sum_{j\le r} \triangle_j u$. We have \begin{align} v_1... v_K = \sum^\infty_{r=-1} (S_{r+1}v_1 ... S_{r+1} v_K - S_{r}v_1 ... S_r v_K), \label{NLE-2} \end{align} where we assume that $S_{-1}v \equiv 0$. Recall the identity \begin{align} \prod^K_{k=1} a_k - \prod^K_{k=1} b_k = \sum^K_{k=1} (a_k-b_k) \prod_{i\le k-1} b_i \prod_{i\ge k+1} a_i, \label{NLE-3} \end{align} where we assume that $\prod_{i\le 0}b_i= \prod_{i\ge K+1}a_i \equiv 1$. Combining \eqref{NLE-2} and \eqref{NLE-3}, we have \begin{align} v_1... v_K = \sum^\infty_{r=-1} \sum^K_{k=1} \triangle_{r+1} v_k \prod^{k-1}_{i=1} S_{r}v_i \prod^{K}_{i=k+1} S_{r+1}v_i. \label{NLE-4} \end{align} Hence, it follows that \begin{align} & \|v_1...v_K\|_{\ell^{1,s}_\triangle \ell^1_\alpha (L^q_t L^p_x (\mathbb{R}\times Q_\alpha))} \nonumber\\ & = \sum^\infty_{j=0} 2^{sj} \sum_{\alpha\in \mathbb{Z}^n} \|\triangle_j (v_1...v_K)\|_{L^q_t L^p_x (\mathbb{R}\times Q_\alpha)} \nonumber\\ & \lesssim \sum^K_{k=1} \sum^\infty_{j=0} 2^{sj} \sum_{\alpha\in \mathbb{Z}^n} \sum^\infty_{r=-1} \left\|\triangle_j ( \triangle_{r+1} v_k \prod^{k-1}_{i=1} S_{r}v_i \prod^{K}_{i=k+1} S_{r+1}v_i )\right\|_{L^q_t L^p_x (\mathbb{R}\times Q_\alpha)}. \label{NLE-5} \end{align} Using the support property of $\widehat{\triangle_r v}$ and $\widehat{S_r v}$, we see that \begin{align} \triangle_j ( \triangle_{r+1} v_k \prod^{k-1}_{i=1} S_{r}v_i \prod^{K}_{i=k+1} S_{r+1}v_i ) \equiv 0, \ \ j> r+C.
\label{NLE-6} \end{align} Using the fact $\|\int f d \mu\|_X \le \int \|f\|_X d\mu$, one has that \begin{align} \sum_{\alpha\in \mathbb{Z}^n}\|\triangle_j f\|_{L^q_t L^p_x (\mathbb{R}\times Q_\alpha)} & \le \sum_{\alpha\in \mathbb{Z}^n} \int_{\mathbb{R}^n} |\mathscr{F}^{-1}\varphi_j (y)| \|f(t, x-y)\|_{L^q_t L^p_x (\mathbb{R}\times Q_\alpha)} dy \nonumber\\ & \le \sup_{y \in \mathbb{R}^n} \sum_{\alpha\in \mathbb{Z}^n} \|f(t, x-y)\|_{L^q_t L^p_x (\mathbb{R}\times Q_\alpha)} \int_{\mathbb{R}^n} |\mathscr{F}^{-1}\varphi_j (y)| dy \nonumber\\ & \lesssim \sum_{\alpha\in \mathbb{Z}^n} \|f\|_{L^q_t L^p_x (\mathbb{R}\times Q_\alpha)}. \label{NLE-7} \end{align} Collecting \eqref{NLE-5}--\eqref{NLE-7} and using Fubini's Theorem, we have \begin{align} & \|v_1...v_K\|_{\ell^{1,s}_\triangle \ell^1_\alpha (L^q_t L^p_x (\mathbb{R}\times Q_\alpha))} \nonumber\\ & \lesssim \sum^K_{k=1} \sum^\infty_{r=-1} \sum_{j\le r+C} 2^{sj} \sum_{\alpha\in \mathbb{Z}^n} \left\|\triangle_j ( \triangle_{r+1} v_k \prod^{k-1}_{i=1} S_{r}v_i \prod^{K}_{i=k+1} S_{r+1}v_i )\right\|_{L^q_t L^p_x (\mathbb{R}\times Q_\alpha)} \nonumber\\ & \lesssim \sum^K_{k=1} \sum^\infty_{r=-1} \sum_{j\le r+C} 2^{sj} \sum_{\alpha\in \mathbb{Z}^n} \left\|\triangle_{r+1} v_k \prod^{k-1}_{i=1} S_{r}v_i \prod^{K}_{i=k+1} S_{r+1}v_i \right\|_{L^q_t L^p_x (\mathbb{R}\times Q_\alpha)} \nonumber\\ & \lesssim \sum^K_{k=1} \sum^\infty_{r=-1} 2^{s r} \sum_{\alpha\in \mathbb{Z}^n} \left\| \triangle_{r+1} v_k \prod^{k-1}_{i=1} S_{r}v_i \prod^{K}_{i=k+1} S_{r+1}v_i \right\|_{L^q_t L^p_x (\mathbb{R}\times Q_\alpha)} \nonumber\\ & \lesssim \sum^K_{k=1} \sum^\infty_{r=-1} 2^{s r} \sum_{\alpha\in \mathbb{Z}^n} \|\triangle_{r+1} v_k\|_{L^{q_1}_t L^{p_1}_x (\mathbb{R}\times Q_\alpha)} \prod_{i\not= k, \ i=1,...,K} \|v_i\|_{\ell^1_\triangle (L^{q_2}_t L^{p_2}_x (\mathbb{R}\times Q_\alpha))} \nonumber\\ & \lesssim \sum^K_{k=1} \|v_k\|_{\ell^{1,s}_\triangle \ell^\infty_\alpha (L^{q_1}_t L^{p_1}_x (\mathbb{R}\times Q_\alpha))} \sum_{\alpha\in 
\mathbb{Z}^n} \prod_{i\not= k, \ i=1,...,K} \|v_i\|_{\ell^1_\triangle (L^{q_2}_t L^{p_2}_x (\mathbb{R}\times Q_\alpha) )}, \label{NLE-8} \end{align} and the result follows. $ \Box$ For short, we write $\|\nabla u\|_X = \|\partial_{x_1} u\|_X+...+ \|\partial_{x_n} u\|_X$. \begin{lem} \label{LE-Besov-1} Let $n \ge 3$. We have for any $s>0$ \begin{align} \sum_{k=0,1}\| S(t) \nabla^k u_0\|_{\ell^{1,s}_\triangle \ell^\infty_\alpha (L^2_{t, x} (\mathbb{R}\times Q_\alpha))} & \lesssim \|u_0\|_{B^{s+1/2}_{2,1}}, \label{Besov-LE-1}\\ \sum_{k=0,1}\| \mathscr{A} \nabla^k F\|_{\ell^{1,s}_\triangle \ell^\infty_\alpha (L^2_{t, x} (\mathbb{R}\times Q_\alpha))} & \lesssim \|F\|_{\ell^{1,s}_\triangle \ell^1_\alpha (L^2_{t, x} (\mathbb{R}\times Q_\alpha))}. \label{Besov-LE-2} \end{align} \end{lem} {\bf Proof.} In view of Corollary \ref{local-sm-2} and Propositions \ref{local-Str} and \ref{local-sm-3}, we have the results, as desired. $ \Box$ \begin{lem} \label{LE-Besov-2} Let $n =2$. We have for any $s>0$ \begin{align} \sum_{k=0,1}\| S(t) \nabla^k u_0\|_{\ell^{1,s}_\triangle \ell^\infty_\alpha (L^2_{t, x} (\mathbb{R}\times Q_\alpha))} & \lesssim \|u_0\|_{B^{s+1/2}_{2,1} \cap \dot H^{-1/2}}, \label{Besov-LE-3}\\ \| \mathscr{A} \nabla F\|_{\ell^{1,s}_\triangle \ell^\infty_\alpha (L^2_{t, x} (\mathbb{R}\times Q_\alpha))} & \lesssim \|F\|_{\ell^{1,s}_\triangle \ell^1_\alpha (L^2_{t, x} (\mathbb{R}\times Q_\alpha))}, \label{Besov-LE-4}\\ \|\mathscr{A} F\|_{\ell^{1,s}_\triangle \ell^\infty_\alpha (L^2_{t, x} (\mathbb{R}\times Q_\alpha))} & \lesssim \|F\|_{\ell^{1,s}_\triangle \ell^1_\alpha (L^1_t L^2_{x} (\mathbb{R}\times Q_\alpha))}. \label{Besov-LE-5} \end{align} \end{lem} {\bf Proof.} By Propositions \ref{local-Str-2DH}, \ref{local-Str-2D} and \ref{local-sm-3}, we have the results, as desired. $ \Box$ We now continue the proof of Theorem \ref{GWP-nD} and consider the general case $s>n/2+3/2$.
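In the sequel, Lemma \ref{NLE-Besov} is applied with each $v_i$ being one of the factors $u$, $\bar{u}$, $u_{x_j}$, $\bar{u}_{x_j}$ of a monomial of the nonlinearity, so that $K$ equals the total degree of the monomial. The typical exponent choices are $q=p=q_1=p_1=2$, $q_2=p_2=\infty$, and $q=1$, $p=p_1=2$, $q_1=2$, $q_2=2(K-1)$, $p_2=\infty$; in both cases the conditions of Lemma \ref{NLE-Besov} are easily verified:
\begin{align*}
\frac{1}{2} = \frac{1}{2}+ \frac{K-1}{\infty}, \qquad 1 = \frac{1}{2}+ \frac{K-1}{2(K-1)}.
\end{align*}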
We write \begin{align} & \lambda_1 (v) : = \sum_{i=0,1} \|\nabla^i v\|_{\ell^{1,s-1/2}_\triangle \ell^\infty_\alpha (L^2_{t,x}(\mathbb{R} \times Q_\alpha))}, \nonumber\\ & \lambda_2 (v) : = \sum_{i=0,1} \|\nabla^i v\|_{\ell^{1,s-1/2}_\triangle \ell^{2^*}_\alpha (L^\infty_{t,x} (\mathbb{R} \times Q_\alpha))}, \nonumber\\ & \lambda_3 (v) : = \sum_{i=0,1} \|\nabla^i v\|_{\ell^{1,s-1/2}_\triangle \ell^{2^*}_\alpha (L^{2m}_{t}L^\infty_ x(\mathbb{R} \times Q_\alpha))}, \nonumber\\ & \mathscr{D} = \{u: \sum_{i=1,2,3}\lambda_i (u) \le \varrho \}. \label{spaceD} \end{align} Note that the $\lambda_i$ and $\mathscr{D}$ defined here are different from those used above. We only give the details of the proof for $n\ge 3$; the case $n=2$ can be handled in a slightly different way. Let $\mathscr{T}$ be defined as in \eqref{map}. Using Lemma \ref{LE-Besov-1}, we have \begin{align} & \lambda_1 ( \mathscr{T} u) \lesssim \|u_0\|_{B^{s}_{2,1}} +\|F\| _{\ell^{1,s-1/2}_\triangle \ell^1_\alpha (L^2_{t,x}(\mathbb{R} \times Q_\alpha))}. \label{proof-thm-1} \end{align} For simplicity, we write \begin{align} (u)^\kappa (\nabla u)^\nu = u^{\kappa_1} \bar{u}^{\kappa_2} u^{\nu_1}_{x_1} \bar{u}^{\nu_2}_{x_1}...u^{\nu_{2n-1}}_{x_n} \bar{u}^{\nu_{2n}}_{x_n}, \label{pol-not} \end{align} where $|\kappa|=\kappa_1+\kappa_2$, $|\nu|=\nu_1+...+\nu_{2n}$. By Lemma \ref{NLE-Besov}, we have \begin{align} & \|(u)^\kappa (\nabla u)^\nu\| _{\ell^{1,s-1/2}_\triangle \ell^1_\alpha (L^2_{t,x}(\mathbb{R} \times Q_\alpha))} \nonumber\\ & \lesssim \sum_{i=0,1} \|\nabla^i u\|_{\ell^{1,s-1/2}_\triangle \ell^\infty_\alpha (L^2_{t,x}(\mathbb{R} \times Q_\alpha))} \sum_{k=0,1} \|\nabla^k u\|^{|\kappa|+|\nu|-1}_{\ell^{1}_\triangle \ell^{|\kappa|+|\nu|-1}_\alpha (L^\infty_{t,x}(\mathbb{R} \times Q_\alpha))} .
\label{proof-thm-2} \end{align} Hence, if $u\in \mathscr{D}$, in view of \eqref{proof-thm-1} and \eqref{proof-thm-2}, we have \begin{align} & \lambda_1 ( \mathscr{T} u) \lesssim \|u_0\|_{B^{s}_{2,1}} + \sum_{m+1\le |\kappa|+|\nu| \le M+1} \varrho ^{|\kappa|+|\nu|}. \label{proof-thm-3} \end{align} In view of the estimate for the maximal function as in Proposition \ref{prop-max-2}, one has that \begin{align} & \lambda_2 ( S(t) u_0) \lesssim \|u_0\|_{B^{s}_{2,1}}, \label{proof-thm-4} \end{align} and for $i=0,1$, \begin{align} \|\mathscr{A} \nabla^i F \|_{\ell^{1}_\triangle \ell^{2^*}_\alpha (L^\infty_{t,x}(\mathbb{R} \times Q_\alpha))} & \le \sum^\infty_{j=0} \int_{\mathbb{R}} \|S(t-\tau)(\triangle_j \nabla^i F)(\tau)\|_{ \ell^{2^*}_\alpha (L^\infty_{t,x}(\mathbb{R} \times Q_\alpha))} d\tau \nonumber\\ & \lesssim \sum^\infty_{j=0} \int_{\mathbb{R}} \|(\triangle_j \nabla^i F)(\tau)\|_{H^{s-3/2}(\mathbb{R}^n) } d\tau \nonumber\\ & \lesssim \sum^\infty_{j=0} 2^{(s-1/2)j} \int_{\mathbb{R}} \|(\triangle_j F)(\tau)\|_{L^2(\mathbb{R}^n)} d\tau. \label{proof-thm-5} \end{align} Hence, by \eqref{proof-thm-4} and \eqref{proof-thm-5}, \begin{align} & \lambda_2 ( \mathscr{T} u) \lesssim \|u_0\|_{B^{s}_{2,1}} +\|F\| _{\ell^{1,s-1/2}_\triangle \ell^1_\alpha (L^1_t L^2_{x}(\mathbb{R} \times Q_\alpha))}. \label{proof-thm-6} \end{align} Similarly to \eqref{proof-thm-6}, in view of Proposition \ref{prop-max-4}, we have \begin{align} & \lambda_3 ( \mathscr{T} u) \lesssim \|u_0\|_{B^{s}_{2,1}} +\|F\| _{\ell^{1,s-1/2}_\triangle \ell^1_\alpha (L^1_t L^2_{x}(\mathbb{R} \times Q_\alpha))}.
\label{proof-thm-7} \end{align} In view of Lemma \ref{NLE-Besov}, we have \begin{align} \|(u)^\kappa (\nabla u)^\nu\| _{\ell^{1,s-1/2}_\triangle \ell^1_\alpha (L^1_t L^2_{x}(\mathbb{R} \times Q_\alpha))} & \lesssim \sum_{k=0,1} \|\nabla^k u\|^{|\kappa|+|\nu|-1}_{\ell^{1}_\triangle \ell^{|\kappa|+|\nu|-1}_\alpha (L^{2(|\kappa|+|\nu|-1)}_t L^\infty_{x}(\mathbb{R} \times Q_\alpha))} \nonumber\\ & \ \ \ \ \times \sum_{i=0,1} \|\nabla^i u\|_{\ell^{1,s-1/2}_\triangle \ell^\infty_\alpha (L^2_{t,x}(\mathbb{R} \times Q_\alpha))}. \label{proof-thm-8} \end{align} Hence, if $u \in \mathscr{D}$, we have \begin{align} & \lambda_2 ( \mathscr{T} u) + \lambda_3 ( \mathscr{T} u) \lesssim \|u_0\|_{B^{s}_{2,1}} + \sum_{m+1\le |\kappa|+|\nu| \le M+1} \varrho ^{|\kappa|+|\nu|}. \label{proof-thm-9} \end{align} Repeating the above procedure, we obtain that there exists $u\in \mathscr{D}$ satisfying the integral equation $\mathscr{T} u=u$, which finishes the proof of Theorem \ref{GWP-nD}. $ \Box$ \section{Proof of Theorem \ref{LWP-Mod}} \label{proof-lwp-1d} We begin with the following \begin{lem} \label{lem-mod-1} Let $\mathscr{A}$ be as in \eqref{S(t)}. There exists a constant $C(T)>1$ which depends only on $T$ and $n$ such that \begin{align} \sum_{i=0,1}\|\mathscr{A} \nabla^i F\|_{\ell^{1}_\Box \ell^2_\alpha (L^\infty_{t,x}([0,T]\times Q_\alpha))} \le C(T) \| F\|_{\ell^{1,3/2}_\Box \ell^1_\alpha (L^1_t L^2_{x}([0,T]\times Q_\alpha))}.
\label{modmax1} \end{align} \end{lem} {\bf Proof.} Using Minkowski's inequality and Proposition \ref{prop-max-1}, \begin{align} & \|\mathscr{A} \nabla^i F\|_{\ell^1_\Box \ell^2_\alpha (L^\infty_{t,x}([0,T]\times Q_\alpha))} \nonumber\\ & \le \sum_{k\in \mathbb{Z}^n} \left( \sum_{\alpha\in \mathbb{Z}^n} \left( \int^T_0 \|S(t-\tau)\Box_k \nabla^i F (\tau)\|_{L^\infty_{t,x}([0,T]\times Q_\alpha)} d\tau \right)^2 \right)^{1/2} \nonumber\\ & \le \sum_{k\in \mathbb{Z}^n} \int^T_0 \left( \sum_{\alpha\in \mathbb{Z}^n} \|S(t-\tau)\Box_k \nabla^i F (\tau)\|^2_{L^\infty_{t,x}([0,T]\times Q_\alpha)} \right)^{1/2} d\tau \nonumber\\ & \le \sum_{k\in \mathbb{Z}^n} \int^T_0 \|\Box_k \nabla^i F (\tau)\|_{M^{1/2}_{2,1}} d\tau. \label{modmax2} \end{align} It is easy to see that for $i=0,1$, \begin{align} \|\Box_k \nabla^i F \|_{M^{1/2}_{2,1}} & \lesssim \langle k\rangle^{3/2} \|\Box_k F \|_{L^2(\mathbb{R}^n)} \le \langle k\rangle^{3/2} \sum_{\alpha\in \mathbb{Z}^n}\|\Box_k F \|_{L^2(Q_\alpha)} . \label{modmax3} \end{align} By \eqref{modmax2} and \eqref{modmax3}, we immediately have \eqref{modmax1}. $ \Box$ \begin{lem} \label{lem-mod-2} Let $\mathscr{A}$ be as in \eqref{S(t)}. Let $n\ge 2$, $s>0$. Then we have \begin{align} \sum_{i=0,1}\|\nabla^i \mathscr{A} F\|_{\ell^{1,s}_\Box \ell^\infty_\alpha (L^2_{t,x}([0,T]\times Q_\alpha))} \le \langle T\rangle^{1/2} \| F\|_{\ell^{1,s}_\Box \ell^1_\alpha (L^2_{t,x}([0,T]\times Q_\alpha))}. \label{modmax-1} \end{align} \end{lem} {\bf Proof.} In view of Proposition \ref{local-sm-3}, we have \begin{align} \|\nabla \mathscr{A} F\|_{\ell^{1,s}_\Box \ell^\infty_\alpha (L^2_{t,x}([0,T]\times Q_\alpha))} \lesssim \| F\|_{\ell^{1,s}_\Box \ell^1_\alpha (L^2_{t,x}([0,T]\times Q_\alpha))}. 
\label{modmax-2} \end{align} By Propositions \ref{local-Str} and \ref{local-Str-2D}, \begin{align} \|\mathscr{A} F\|_{\ell^1_\Box \ell^\infty_\alpha (L^2_{t,x}([0,T]\times Q_\alpha))} & \lesssim \| F\|_{\ell^{1,s}_\Box \ell^1_\alpha (L^1_tL^2_{x}([0,T]\times Q_\alpha))} \nonumber\\ & \le T^{1/2} \| F\|_{\ell^{1,s}_\Box \ell^1_\alpha (L^2_{t,x}([0,T]\times Q_\alpha))}. \label{modmax-3} \end{align} By \eqref{modmax-2} and \eqref{modmax-3} we immediately have \eqref{modmax-1}. $ \Box$ \begin{lem} \label{lem-mod-3} Let $n\ge 2$, $S(t)$ be as in \eqref{S(t)}. Then we have for $i=0, 1$, \begin{align} \|\nabla^i S(t) u_0\|_{\ell^{1,s}_\Box \ell^\infty_\alpha (L^2_{t,x}([0,T]\times Q_\alpha))} & \lesssim \|u_0\|_{M^{s+1/2}_{2,1}}, \quad n\ge 3, \label{modmax-S-1}\\ \|\nabla^i S(t) u_0\|_{\ell^{1,s}_\Box \ell^\infty_\alpha (L^2_{t,x}([0,T]\times Q_\alpha))} & \lesssim \|u_0\|_{M^{s+1/2}_{2,1} \cap \dot H^{-1/2}}, \quad n=2. \label{modmax-S-2} \end{align} \end{lem} {\bf Proof.} \eqref{modmax-S-1} follows from Corollary \ref{local-sm-2}. For $n=2$, by Proposition \ref{local-Str-2DH}, we have the result, as desired. $ \Box$ \begin{lem} \label{NLE-modulation} Let $n\ge 2$, $s>0$, $L\in \mathbb{N}, L\ge 3$. Let $1\le p, p_i, q, q_i \le \infty$ satisfy $1/p = 1/p_1+(L-1)/p_2$ and $1/q=1/q_1+ (L-1)/q_2$. We have \begin{align} \|v_1...v_L\|_{\ell^{1,s}_\Box \ell^1_\alpha (L^q_t L^p_x (I\times Q_\alpha))} \lesssim & \sum^L_{l=1} \|v_l\|_{\ell^{1,s}_\Box \ell^\infty_\alpha (L^{q_1}_t L^{p_1}_x (I \times Q_\alpha))} \nonumber\\ & \ \times \prod_{i\not= l, \, i=1,...,L} \|v_i\|_{\ell^{1}_\Box \ell^{L-1}_\alpha (L^{q_2}_t L^{p_2}_x (I\times Q_\alpha))}. \label{NLE-mod1} \end{align} \end{lem} {\bf Proof.} Using the identity \begin{align} v_1...v_L = \sum_{k_1,..., k_L \in \mathbb{Z}^n} \Box_{k_1} v_1... \Box_{k_L} v_L \label{NLE-mod2} \end{align} and noticing the fact that \begin{align} \Box_k( \Box_{k_1} v_1... 
\Box_{k_L} v_L) =0, \ \ |k-k_1-...-k_L| \ge C(L,n), \label{NLE-mod3} \end{align} we have \begin{align} & \|v_1...v_L\|_{\ell^{1,s}_\Box \ell^1_\alpha (L^q_t L^p_x (I\times Q_\alpha))} \nonumber\\ & = \sum_{k\in \mathbb{Z}^n} \langle k\rangle^{s} \sum_{\alpha \in \mathbb{Z}^n} \|\Box_k( v_1... v_L)\|_{L^q_t L^p_x (I\times Q_\alpha)} \nonumber\\ & \le \sum_{k_1,...,k_L \in \mathbb{Z}^n} \sum_{|k-k_1-...-k_L|\le C} \langle k\rangle^{s} \sum_{\alpha \in \mathbb{Z}^n} \|\Box_k( \Box_{k_1} v_1... \Box_{k_L} v_L)\|_{L^q_t L^p_x (I\times Q_\alpha)}. \label{NLE-mod4} \end{align} Similar to \eqref{NLE-7} and noticing the fact that $\|\mathscr{F}^{-1}\sigma_k\|_{L^1(\mathbb{R}^n)} \lesssim 1$, we have \begin{align} \sum_{\alpha\in \mathbb{Z}^n} \|\,\Box_k f\|_{L^q_tL^p_{x}(I\times Q_\alpha)} & =\sum_{\alpha\in \mathbb{Z}^n} \|(\mathscr{F}^{-1}\sigma_k)* f\|_{L^q_tL^p_{x}(I\times Q_\alpha)} \nonumber\\ & \le \int_{\mathbb{R}^n} |(\mathscr{F}^{-1}\sigma_k)(y)|\left(\sum_{\alpha\in \mathbb{Z}^n} \| f(t, x-y)\|_{L^q_tL^p_{x}(I\times Q_\alpha)}\right)dy \nonumber\\ & \le \sup_{y\in \mathbb{R}^n} \sum_{\alpha\in \mathbb{Z}^n} \| f(t, x-y)\|_{L^q_tL^p_{x}(I\times Q_\alpha)} \|\mathscr{F}^{-1}\sigma_k \|_{L^1(\mathbb{R}^n)} \nonumber\\ & \lesssim \sum_{\alpha\in \mathbb{Z}^n} \| f\|_{L^q_tL^p_{x}(I\times Q_\alpha)}. \label{sum-invariance} \end{align} By \eqref{NLE-mod4} and \eqref{sum-invariance}, we have \begin{align} & \|v_1...v_L\|_{\ell^{1,s}_\Box \ell^1_\alpha (L^q_t L^p_x (I\times Q_\alpha))} \nonumber\\ & \le \sum_{k_1,...,k_L \in \mathbb{Z}^n} \sum_{|k-k_1-...-k_L| \le C} \langle k \rangle^{s} \sum_{\alpha \in \mathbb{Z}^n} \|\Box_{k_1} v_1... \Box_{k_L} v_L\|_{L^q_t L^p_x (I\times Q_\alpha)} \nonumber\\ & \lesssim \sum_{k_1,...,k_L \in \mathbb{Z}^n} (\langle k_1 \rangle^s +...+ \langle k_L \rangle^{s}) \sum_{\alpha \in \mathbb{Z}^n} \|\Box_{k_1} v_1... \Box_{k_L} v_L\|_{L^q_t L^p_x (I\times Q_\alpha)}. 
\label{NLE-mod5} \end{align} By H\"older's inequality, \begin{align} & \sum_{k_1,...,k_L \in \mathbb{Z}^n} \langle k_1 \rangle^s \sum_{\alpha \in \mathbb{Z}^n} \|\Box_{k_1} v_1... \Box_{k_L} v_L\|_{L^q_t L^p_x (I\times Q_\alpha)} \nonumber\\ & \le \sum_{k_1,...,k_L \in \mathbb{Z}^n} \langle k_1 \rangle^s \sum_{\alpha \in \mathbb{Z}^n} \|\Box_{k_1} v_1\|_{L^{q_1}_t L^{p_1}_x (I\times Q_\alpha)} \prod^L_{i=2} \|\Box_{k_i} v_i\|_{L^{q_2}_t L^{p_2}_x (I\times Q_\alpha)} \nonumber\\ & \le \|v_1\|_{\ell^{1,s}_\Box \ell^\infty_\alpha (L^{q_1}_t L^{p_1}_x (I\times Q_\alpha))} \sum_{k_2,...,k_L \in \mathbb{Z}^n} \sum_{\alpha \in \mathbb{Z}^n} \prod^L_{i=2} \|\Box_{k_i} v_i\|_{L^{q_2}_t L^{p_2}_x (I\times Q_\alpha)} \nonumber\\ & \le \|v_1\|_{\ell^{1,s}_\Box \ell^\infty_\alpha (L^{q_1}_t L^{p_1}_x (I\times Q_\alpha))} \sum_{k_2,...,k_L \in \mathbb{Z}^n} \prod^L_{i=2} \|\Box_{k_i} v_i\|_{\ell^{L-1}_\alpha (L^{q_2}_t L^{p_2}_x (I\times Q_\alpha))} \nonumber\\ & \le \|v_1\|_{\ell^{1,s}_\Box \ell^\infty_\alpha (L^{q_1}_t L^{p_1}_x (I\times Q_\alpha))} \prod^L_{i=2} \| v_i\|_{\ell^1_\Box \ell^{L-1}_\alpha (L^{q_2}_t L^{p_2}_x (I\times Q_\alpha))}. \label{NLE-mod6} \end{align} The result follows. $ \Box$ \noindent {\bf Proof of Theorem \ref{LWP-Mod}.} Denote \begin{align} \lambda_1(v)& = \sum_{i=0,1} \|\nabla^i v\|_{\ell^{1,3/2}_\Box \ell^\infty_\alpha (L^2_{t,x}([0,T]\times Q_\alpha))}, \nonumber\\ \lambda_2(v) & = \sum_{i=0,1} \|\nabla^i v\|_{\ell^1_\Box \ell^2_\alpha (L^\infty_{t,x}([0,T]\times Q_\alpha))}. \nonumber \end{align} Put \begin{align} & \mathscr{D} = \left\{ u: \; \lambda_1 ( u) + \lambda_2 ( u) \le \varrho \right\}. \label{metricsp-mod} \end{align} Let $\mathscr{T}$ be as in \eqref{map}. We will show that $\mathscr{T}: \mathscr{D} \to \mathscr{D}$ is a contraction mapping. First, we consider the case $n\ge 3$. Let $u\in \mathscr{D}$. 
By Lemmas \ref{lem-mod-2} and \ref{lem-mod-3}, we have \begin{align} \lambda_1( \mathscr{T} u) \lesssim \|u_0\|_{M^{2}_{2,1}} + \langle T\rangle^{1/2} \| F\|_{\ell^{1,3/2}_\Box \ell^1_\alpha (L^2_{t,x}([0,T]\times Q_\alpha))}. \label{LWP-1} \end{align} We use the same notation as in \eqref{pol-not}. We have from Lemma \ref{NLE-modulation} that \begin{align} \|(u)^\kappa (\nabla u)^\nu \|_{\ell^{1,3/2}_\Box \ell^1_\alpha (L^{2}_{t,x} ([0,T]\times Q_\alpha))} \lesssim & \sum_{i=0,1} \|\nabla^i u\|_{\ell^{1,3/2}_\Box \ell^\infty_\alpha (L^{2}_{t,x} ([0,T] \times Q_\alpha))} \nonumber\\ & \ \times \sum_{k=0,1} \| \nabla^k u\|^{|\kappa|+|\nu|-1}_{\ell^{1}_\Box \ell^{|\kappa|+|\nu|-1}_\alpha (L^{\infty}_{t,x} ([0,T]\times Q_\alpha))}\nonumber\\ & \lesssim \lambda_1(u) \lambda_2(u)^{|\kappa|+|\nu|-1} \le \varrho^{|\kappa|+|\nu|}. \label{LWP-2} \end{align} Hence, for $n\ge 3$, \begin{align} \lambda_1( \mathscr{T} u) \lesssim \|u_0\|_{M^{2}_{2,1}} + \sum^{M+1}_{|\kappa|+|\nu|=m+1} \varrho^{|\kappa|+|\nu|}. \label{3D-estim} \end{align} Next, we consider the estimate of $\lambda_2 (\mathscr{T} u)$. By Lemma \ref{lem-mod-1} and Proposition \ref{prop-max-1}, \begin{align} \lambda_2( \mathscr{T} u) & \lesssim \|u_0\|_{M^{3/2}_{2,1}} + C(T) \| F\|_{\ell^{1,3/2}_\Box \ell^1_\alpha (L^1_t L^2_{x}([0,T]\times Q_\alpha))}, \label{LWP-3} \end{align} so the estimate reduces to that of $\lambda_1(\cdot)$ in \eqref{LWP-1}. Similarly, for $n=2$, \begin{align} \lambda_1( \mathscr{T} u) + \lambda_2( \mathscr{T} u) \lesssim \|u_0\|_{M^{2}_{2,1} \cap \dot H^{-1/2}} + \sum^{M+1}_{|\kappa|+|\nu|=m+1} \varrho^{|\kappa|+|\nu|}. \label{2D-estim} \end{align} Repeating the procedure of the proof of Theorem \ref{GWP-nD}, we can show our results, as desired. $ \Box$ \section{Proof of Theorem \ref{GWP-Mod}} \label{proof-GWP-Mod} The proof of Theorem \ref{GWP-Mod} is analogous to those of Theorems \ref{GWP-nD} and \ref{LWP-Mod} and will only be sketched. 
Put \begin{align} & \lambda_1(v)= \sum_{i=0,1} \|\nabla^i v\|_{\ell^{1,s-1/2}_\Box \ell^\infty_\alpha (L^2_{t,x}(\mathbb{R} \times Q_\alpha))}, \nonumber\\ & \lambda_2(v)= \sum_{i=0,1} \|\nabla^i v\|_{\ell^1_\Box \ell^m_\alpha (L^\infty_{t,x}(\mathbb{R} \times Q_\alpha))}, \nonumber\\ & \lambda_3(v)= \sum_{i=0,1} \|\nabla^i v\|_{\ell^1_\Box \ell^m_\alpha (L^{2m}_t L^\infty_{x}(\mathbb{R} \times Q_\alpha))}. \nonumber \end{align} Set \begin{align} & \mathscr{D}= \left\{ u: \; \lambda_1 ( u)+\lambda_2 ( u)+ \lambda_3 ( u) \le \varrho \right\}. \label{metricsp-gMod} \end{align} Let $\mathscr{T}$ be as in \eqref{map}. We show that $\mathscr{T}: \mathscr{D} \to \mathscr{D}$. We only consider the case $n\ge 3$. It follows from Lemmas \ref{lem-mod-3} and \ref{LE-Besov-1} that \begin{align} \lambda_1(\mathscr{T} u) \lesssim \|u_0\|_{M^{s}_{2,1}} + \|F\|_{\ell^{1,s-1/2}_\Box \ell^1_\alpha (L^2_{t,x}(\mathbb{R} \times Q_\alpha))}. \label{lambdamod-1} \end{align} Using Lemma \ref{NLE-modulation} and arguing as in \eqref{LWP-2}, one sees that if $u\in \mathscr{D}$, then \begin{align} \lambda_1(\mathscr{T} u) \lesssim \|u_0\|_{M^{s}_{2,1}} + \sum_{m+1\le |\kappa|+|\nu| \le M+1} \varrho^{|\kappa|+|\nu|}. \label{lambdamod-6} \end{align} Using Proposition \ref{prop-max-5} and combining the proofs of \eqref{proof-thm-5}--\eqref{proof-thm-7}, we see that \begin{align} \lambda_2(\mathscr{T} u) +\lambda_3(\mathscr{T} u) \lesssim \|u_0\|_{M^{s}_{2,1}} + \sum_{m+1\le |\kappa|+|\nu| \le M+1} \varrho^{|\kappa|+|\nu|}. \label{lambdamod-7} \end{align} The remaining part of the proof is analogous to that of Theorems \ref{GWP-nD} and \ref{LWP-Mod}, and the details are omitted. $ \Box$ \section{Proof of Theorem \ref{GWP-1D}} \label{proof-gwp-1D} We prove Theorem \ref{GWP-1D} by following ideas of Molinet and Ribaud \cite{MR} and Wang and Huang \cite{WaHu}. The following are estimates for solutions of the linear Schr\"odinger equation; see \cite{KePoVe,MR,WaHu}. 
Recall that $\triangle_{j}:= \mathscr{F}^{-1} \delta(2^{-j}\,\cdot) \mathscr{F}$, $j\in \mathbb{Z}$ and $\delta (\cdot )$ is as in Section \ref{functionspace}. \begin{lem} \label{cor4} Let $g\in \mathscr{S}(\mathbb{R})$,$ f\in\mathscr{S}(\mathbb{R}^{2})$, $4\le p<\infty$. Then we have \begin{align} \|\triangle_{j} S(t)g\|_{L_{t}^{\infty}L_{x}^{2} \, \cap \, L^6_{x,t}} & \lesssim \|\triangle_{j}g\|_{L^{2}},\label{cor-1}\\ \|\triangle_{j}S(t)g\|_{L_x^p L_t^\infty} & \lesssim 2^{j(\frac{1}{2}-\frac{1}{p})} \|\triangle_{j}g\|_{L^{2}},\label{cor-4}\\ \|\triangle_{j} S(t)g\|_{L_{x}^{\infty}L_{t}^{2}} & \lesssim 2^{-j/2}\|\triangle_{j}g\|_{L^{2}},\label{cor-7} \end{align} \begin{align} \|\triangle_{j} \mathscr{A}f \|_{L_{t}^{\infty}L_{x}^{2} \, \cap \, L^6_{x,t} } & \lesssim \|\triangle_{j}f\|_{L^{6/5}_{x,t}}, \label{cor-2}\\ \|\triangle_{j} \mathscr{A}f \|_{L_x^p L_t^\infty } & \lesssim 2^{j(\frac{1}{2}-\frac{1}{p})} \|\triangle_{j}f\|_{L^{6/5}_{x,t}}, \label{cor-5}\\ \|\triangle_{j} \mathscr{A}f\|_{L_{x}^{\infty}L_{t}^{2}} & \lesssim 2^{-j/2} \|\triangle_{j}f\|_{L^{6/5}_{x,t}}, \label{cor-8} \end{align} and \begin{align} \|\triangle_{j} \mathscr{A}(\partial_{x}f)\|_{L_{t}^{\infty}L_{x}^{2} \, \cap \, L^6_{x,t} } & \lesssim 2^{j/2}\|\triangle_{j}f\|_{L^1_x L^2_t}, \label{cor-3}\\ \|\triangle_{j} \mathscr{A}(\partial_{x}f)\|_{L_x^p L_t^\infty } & \lesssim 2^{j/2} 2^{j(\frac{1}{2}-\frac{1}{p})} \|\triangle_{j}f\|_{L^1_x L^2_t}, \label{cor-6}\\ \|\triangle_{j} \mathscr{A}(\partial_{x}f)\|_{L_{x}^{\infty}L_{t}^{2}} & \lesssim \|\triangle_{j}f\|_{L_{x}^{1}L_{t}^{2}}.\label{cor-9} \end{align} \end{lem} For convenience, we write for any Banach function space $X$, $$ \|f\|_{\ell_\triangle^{1,s} (X)}= \sum_{j\in \mathbb{Z}} 2^{js}\|\triangle_j f\|_{X}, \quad \|f\|_{\ell_\triangle^1 (X)}:= \|f\|_{\ell_\triangle^{1,0} (X)}. 
$$ \begin{lem}\label{nonlinear-estimate-1} Let $s> 0$, $1\le p, p_i, \gamma, \gamma_i \le \infty$ satisfy \begin{align} \frac{1}{p}= \frac{1}{p_1}+...+\frac{1}{p_N}, \quad \frac{1}{\gamma}= \frac{1}{\gamma_1}+...+ \frac{1}{\gamma_N}. \label{p-gamma} \end{align} Then \begin{align} \left\|u_1 ... u_N \right\|_{\ell_\triangle^{1,s} (L^p_x L^{\gamma}_t )} & \lesssim \|u_1 \|_{\ell_\triangle^{1,s} (L^{p_1}_x L^{\gamma_1}_t )} \prod^N_{i=2} \| u_i\|_{\ell_\triangle^{1} (L^{p_i}_x L^{\gamma_i}_t )} \nonumber\\ & \quad +\|u_2 \|_{\ell_\triangle^{1,s} ( L^{p_2}_x L^{\gamma_2}_t)} \prod_{i\not= 2,\, i=1,...,N} \| u_i\|_{\ell_\triangle^{1} (L^{p_i}_x L^{\gamma_i}_t )} \nonumber\\ & \quad +... + \prod^{N-1}_{i=1} \|u_i\|_{\ell_\triangle^{1} ( L^{p_i}_x L^{\gamma_i}_t)} \| u_N \|_{\ell_\triangle^{1,s} (L^{p_N}_x L^{\gamma_N}_t )}, \label{pf-7-1} \end{align} and in particular, if $u_1=...=u_N=u$, then \begin{align} \left\| u^N \right\|_{\ell_\triangle^{1,s} ( L^p_xL^{\gamma}_t)} & \lesssim \|u \|_{\ell_\triangle^{1,s} ( L^{p_1}_xL^{\gamma_1}_t)} \prod^N_{i=2} \| u\|_{\ell_\triangle^{1} ( L^{p_i}_xL^{\gamma_i}_t)}. \label{pf-7-2} \end{align} If the spaces $ L^{p}_xL^{\gamma}_t$ and $L^{p_i}_x L^{\gamma_i}_t $ are replaced by $L^{\gamma}_tL^{p}_x $ and $L^{\gamma_i}_t L^{p_i}_x $, respectively, then \eqref{pf-7-1} and \eqref{pf-7-2} also hold. \end{lem} \noindent {\bf Proof.} We only consider the case $N=2$; the case $N>2$ can be handled in a similar way. We have \begin{align} u_1u_2 &= \sum^\infty_{r=-\infty} [(S_{r+1}u_1) (S_{r+1}u_2)- (S_{r}u_1) (S_{r}u_2)] \nonumber\\ &= \sum^\infty_{r=-\infty} [(\triangle_{r+1} u_1) (S_{r+1}u_2)+ (S_{r}u_1) (\triangle_{r+1}u_2)], \label{pf-7-3} \end{align} and \begin{align} \triangle_j(u_1u_2) = \triangle_j \Big(\sum_{r\ge j-10} [(\triangle_{r+1} u_1) (S_{r+1}u_2)+ (S_{r}u_1) (\triangle_{r+1}u_2)]\Big). 
\label{pf-7-4} \end{align} We may assume, without loss of generality, that only the first term is present on the right hand side of \eqref{pf-7-4}; the second term can be handled in the same way. It follows from Bernstein's estimate, H\"older's and Young's inequalities that \begin{align} \sum_{j\in \mathbb{Z}} 2^{sj} \|\triangle_j(u_1u_2)\|_{L^p_xL^\gamma_t} & \lesssim \sum_{j\in \mathbb{Z}} 2^{sj} \sum_{r\ge j-10} \| (\triangle_{r+1} u_1) (S_{r+1}u_2) \|_{L^p_xL^\gamma_t} \nonumber\\ & \lesssim \sum_{j\in \mathbb{Z}} 2^{sj} \sum_{r\ge j-10} \| \triangle_{r+1} u_1\|_{L^{p_1}_xL^{\gamma_1}_t} \|S_{r+1}u_2 \|_{L^{p_2}_xL^{\gamma_2}_t} \nonumber\\ & \lesssim \sum_{j\in \mathbb{Z}} \sum_{r\ge j-10} 2^{s(j-r)} 2^{rs} \| \triangle_{r+1} u_1\|_{L^{p_1}_xL^{\gamma_1}_t} \|S_{r+1}u_2 \|_{L^{p_2}_xL^{\gamma_2}_t} \nonumber\\ & \lesssim \| u_1\|_{\ell_\triangle^{1,s}(L^{p_1}_xL^{\gamma_1}_t)} \|u_2 \|_{\ell_\triangle^1(L^{p_2}_xL^{\gamma_2}_t)}, \label{pf-7-5} \end{align} which implies the result, as desired. $\quad\quad\Box$ \begin{rem} \label{rem-on-lemma-3.1} \rm One easily sees that \eqref{pf-7-2} can be slightly improved by \begin{align} \left\| u^N \right\|_{\ell^{1,s} (L^p_xL^{\gamma}_t )} & \lesssim \|u \|_{\ell^{1,s} (L^{p_1}_xL^{\gamma_1}_t )} \prod^N_{i=2} \| u\|_{ L^{p_i}_xL^{\gamma_i}_t }. \label{pf-7-2-1} \end{align} In fact, from Minkowski's inequality it follows that \begin{align} \|S_{r}u \|_{L^{p}_xL^{\gamma}_t} \lesssim \|u \|_{L^{p}_xL^{\gamma}_t}. \label{pf-7-2-2} \end{align} From \eqref{pf-7-5} and \eqref{pf-7-2-2} we get \eqref{pf-7-2-1}. \end{rem} \noindent {\bf Proof of Theorem \ref{GWP-1D}.} We can assume, without loss of generality, that \begin{align} F(u, \bar{u}, u_x, \bar{u}_x)= \sum_{m+1\le \kappa+\nu \le M+1} \lambda_{\kappa\nu} u^\kappa u^\nu_x \label{simple-poly} \end{align} and the general case can be handled in the same way. {\it Step} 1. 
We consider the case $m>4.$ Recall that \begin{align} \|u\|_{X} & =\sup_{s_m\le s \le \tilde{s}_{M}} \sum_{i=0,1} \sum_{j\in \mathbb{Z}} |\!|\!|\partial^i_x \triangle_j u |\!|\!|_s, \label{1d.2} \\ |\!|\!|\triangle_j v |\!|\!|_s :=& 2^{s j} (\|\triangle_j v \|_{L^\infty_tL^2_x \,\cap\, L^{6}_{x,t}} + 2^{j/2} \|\triangle_j v\|_{L^\infty_x L^2_t})\nonumber\\ & + 2^{(s-\tilde{s}_m) j}\|\triangle_j v\|_{L_{x}^{m}L_{t}^{\infty}} +2^{(s-\tilde{s}_{M})j} \|\triangle_j v\|_{L_x^{M}L_t^\infty}. \end{align} Considering the mapping \begin{align} & \mathscr{T}: u(t) \to S(t)u_0 -{\rm i}\mathscr{A}F(u,\bar{u},u_x,\bar{u}_x), \end{align} we will show that $\mathscr{T}:X\to X$ is a contraction mapping. We have \begin{align} \| \mathscr{T} u(t)\|_X \lesssim \| S(t)u_0\|_X +\|\mathscr{A} F(u,\bar{u},u_x,\bar{u}_x)\|_X. \label{1d.5} \end{align} In view of \eqref{cor-1}, \eqref{cor-4} and \eqref{cor-7} we have, \begin{align} |\!|\!|\partial^i_x \triangle_j S(t)u_0 |\!|\!|_s \lesssim 2^{sj} \|\partial^i_x \triangle_j u_0\|_2. \end{align} It follows that \begin{align} \| S(t)u_0 \|_X \lesssim \sup_{s_m\le s\le \tilde{s}_{M}}\sum_{i=0,1} \sum_{j\in \mathbb{Z}} 2^{sj} \|\partial^i_x \triangle_j u_0\|_2 \lesssim \|u_0\|_{\dot B^{s_m}_{2,1} \cap \dot B^{1+\tilde{s}_{M}}_{2,1}}. \label{1d.7} \end{align} We now estimate $\|\mathscr{A} F(u,\bar{u},u_x,\bar{u}_x)\|_X.$ We have from \eqref{cor-2}, \eqref{cor-5} and \eqref{cor-8} that \begin{align} |\!|\!| \triangle_j (\mathscr{A} F(u,\bar{u},u_x,\bar{u}_x)) |\!|\!|_s \lesssim 2^{sj} \|\triangle_j F(u,\bar{u},u_x,\bar{u}_x)\|_{L^{6/5}_{x,t}}. \label{1d.8} \end{align} From \eqref{cor-3}, \eqref{cor-6} and \eqref{cor-9} it follows that \begin{align} |\!|\!| \triangle_j (\mathscr{A}\partial_x F(u,\bar{u},u_x,\bar{u}_x)) |\!|\!|_s \lesssim 2^{sj}2^{j/2} \|\triangle_j F(u,\bar{u},u_x,\bar{u}_x)\|_{L^1_x L^2_t}. 
\label{1d.9} \end{align} Hence, from \eqref{1d.2}, \eqref{1d.8} and \eqref{1d.9} we have \begin{align} \|\mathscr{A} F(u,\bar{u},u_x,\bar{u}_x)\|_X \lesssim & \sum_{j\in \mathbb{Z}} 2^{sj} \|\triangle_j F(u,\bar{u},u_x,\bar{u}_x)\|_{L^{6/5}_{x,t}} \nonumber\\ & + \sum_{j\in \mathbb{Z}} 2^{sj}2^{j/2} \|\triangle_j F(u,\bar{u},u_x,\bar{u}_x)\|_{L^1_x L^2_t}=I+II. \label{1d.10} \end{align} Now we perform the nonlinear estimates. By Lemma \ref{nonlinear-estimate-1}, \begin{align} I& \lesssim \sum_{m+1\le \kappa+\nu\le M+1} \Big( \|u\|_{\ell_\triangle^{1,s}(L^6_{x,t})} \|u\|^{\kappa-1}_{\ell_\triangle^{\,1}(L^{3(\kappa+\nu-1)/2}_{x,t}) } \|u_x\|^\nu_{\ell_\triangle^{\,1}(L^{3(\kappa+\nu-1)/2}_{x,t}) } \nonumber\\ & \quad \quad\quad \quad \quad\quad \quad+\|u_x\|_{\ell_\triangle^{1,s}(L^6_{x,t})} \|u_x \|^{\nu-1}_{\ell_\triangle^{\,1}(L^{3(\kappa+\nu-1)/2}_{x,t}) } \|u\|^\kappa_{\ell_\triangle^{\,1}(L^{3(\kappa+\nu-1)/2}_{x,t}) } \Big)\nonumber\\ & \lesssim \sum_{m+1\le \kappa+\nu\le M+1} \big(\sum_{i=0,1} \|\partial_x^i u\|_{\ell_\triangle^{1,s}(L^6_{x,t})}\big) \big(\sum_{i=0,1} \|\partial^i_x u\|^{\kappa+\nu-1}_{\ell_\triangle^{\,1}(L^{3(\kappa+\nu-1)/2}_{x,t}) } \big). \label{1d.11} \end{align} For any $m\le \lambda \le M$, we let $\frac{1}{\rho}= \frac{1}{2}- \frac{4}{3\lambda}$. It is easy to see that the following inclusions hold: \begin{align} L^{\infty}_{t} (\mathbb{R}, \dot H^{s_\lambda}) \cap L^6_t (\mathbb{R}, \dot H^{s_\lambda}_6) \subset L^{3\lambda/2}_{t} (\mathbb{R}, \dot H^{s_\lambda}_\rho) \subset L^{3\lambda/2}_{x,t}. 
\label{pf-7-7} \end{align} More precisely, we have \begin{align} \sum_{j\in \mathbb{Z}}\|\triangle_j u\|_{L^{3\lambda/2}_{x,t}} & \lesssim \sum_{j\in \mathbb{Z}}\|\triangle_j u\|_{ L^{3\lambda/2}_{t} (\mathbb{R},\, \dot H^{s_\lambda}_\rho)} \nonumber\\ & \lesssim \sum_{j\in \mathbb{Z}}\|\triangle_j u\|^{4/\lambda}_{ L^{6}_{t} (\mathbb{R}, \, \dot H^{s_\lambda}_6)} \|\triangle_j u\|^{1-4/\lambda}_{ L^{\infty}_{t} (\mathbb{R}, \, \dot H^{s_\lambda})} \nonumber\\ & \lesssim \|u\|^{4/\lambda}_{ \ell^{1,s_\lambda}( L^{6}_{x,t}) } \| u\|^{1-4/\lambda}_{\ell^{1,s_\lambda}( L^{\infty}_{t} L^2_x )}. \label{pf-7-8} \end{align} Using \eqref{pf-7-8} and noticing that $s_m \le s_{\kappa+\nu-1}\le s_{M} < \tilde{s}_{M}$, we have \begin{align} \|\partial^i_x u\|^{\kappa+\nu-1}_{\ell_\triangle^{\,1}(L^{3(\kappa+\nu-1)/2}_{x,t}) } & \lesssim \|\partial^i_x u\|^{4}_{\ell_\triangle^{ 1, \,s_{\kappa+\nu-1}}(L^{6}_{x,t}) } \|\partial^i_x u\|^{\kappa+\nu-5}_{\ell_\triangle^{1,\,s_{\kappa+\nu-1}}(L^\infty_t L^2_x) } \nonumber\\ & \lesssim \|u\|^{\kappa+\nu-1}_X . \label{1d.12} \end{align} Combining \eqref{1d.11} with \eqref{1d.12}, we have \begin{align} I \lesssim \sum_{m+1\le \kappa+\nu\le M+1} \|u\|^{\kappa+\nu}_X. \label{1d.13} \end{align} Now we estimate $II$. 
By Lemma \ref{nonlinear-estimate-1}, \begin{align} II & \lesssim \sum_{m+1\le \kappa+\nu\le M+1} \Big( \|u\|_{\ell_\triangle^{1,\,s+1/2}(L^\infty_x L^2_t)} \|u\|^{\kappa-1}_{\ell_\triangle^{\,1}(L^{\kappa+\nu-1}_x L^\infty_t) } \|u_x\|^\nu_{\ell_\triangle^{\,1}(L^{\kappa+\nu-1}_x L^\infty_t )} \nonumber\\ & \quad \quad\quad \quad \quad\quad \quad+\|u_x\|_{\ell_\triangle^{1,\,s+1/2}(L^\infty_x L^2_t)} \|u_x \|^{\nu-1}_{L^{\kappa+\nu-1}_x L^\infty_t } \|u\|^\kappa_{L^{\kappa+\nu-1}_x L^\infty_t } \Big)\nonumber\\ & \lesssim \sum_{m+1\le \kappa+\nu\le M+1} \big(\sum_{i=0,1} \|\partial_x^i u\|_{ \ell_\triangle^{1,\,s+1/2}(L^\infty_x L^2_t) }\big) \big(\sum_{i=0,1} \|\partial^i_x u\|^{\kappa+\nu-1}_{ L^{\kappa+\nu-1}_x L^\infty_t } \big) \nonumber\\ & \lesssim \sum_{m+1\le \kappa+\nu\le M+1} \|u\|_X \big(\sum_{i=0,1} \|\partial^i_x u\|^{\kappa+\nu-1}_{ L^{m}_x L^\infty_t \, \cap\, L^{M}_x L^\infty_t } \big) \nonumber\\ & \lesssim \sum_{m+1\le \kappa+\nu\le M+1} \|u\|^{\kappa+\nu}_X. \label{1d.14} \end{align} Collecting \eqref{1d.10}, \eqref{1d.11}, \eqref{1d.13} and \eqref{1d.14}, we have \begin{align} \|\mathscr{A} F(u,\bar{u},u_x,\bar{u}_x)\|_X & \lesssim \sum_{m+1\le \kappa+\nu\le M+1} \|u\|^{\kappa+\nu}_X. \label{1d.15} \end{align} By \eqref{1d.5}, \eqref{1d.7} and \eqref{1d.15} \begin{align} \| \mathscr{T} u(t)\|_X \lesssim \|u_0\|_{\dot B^{s_m}_{2,1} \cap \dot B^{1+\tilde{s}_{M}}_{2,1}}+ \sum_{m+1\le \kappa+\nu\le M+1} \|u\|^{\kappa+\nu}_X. \label{1d.16} \end{align} {\it Step} 2. We consider the case $m=4$. Recall that \begin{align} \|u\|_X & =\sum_{i=0,1} \big(\|\partial^i_x u\|_{L^\infty_tL^2_x \,\cap\, L^{6}_{x,t} } + \sup_{s_5\le s \le \tilde{s}_{M}} \sum_{j\in \mathbb{Z}} |\!|\!|\partial^i_x \triangle_j u |\!|\!|_s \big). 
\nonumber \end{align} By \eqref{cor-1}, \eqref{cor-4} and \eqref{cor-7}, \begin{align} \| S(t)u_0 \|_X \lesssim \|u_0\|_2+ \sup_{s_5\le s\le \tilde{s}_{M}}\sum_{i=0,1} \sum_{j\in \mathbb{Z}} 2^{sj} \|\partial^i_x \triangle_j u_0\|_2 \lesssim \|u_0\|_{ B^{1+\tilde{s}_{M}}_{2,1}}. \label{1d.17} \end{align} We now estimate $\|\mathscr{A} F(u,\bar{u},u_x,\bar{u}_x)\|_X.$ By Strichartz' and H\"older's inequality, we have \begin{align} & \| \mathscr{A} F(u,\bar{u},u_x,\bar{u}_x)\|_{L^\infty_tL^2_x \,\cap\, L^{6}_{x,t} }\nonumber\\ & \lesssim \sum_{5 \le \kappa+\nu\le M+1} \|(|u|+|u_x|)^{\kappa+\nu}\|_{L^{6/5}_{x,t}} \nonumber\\ & \lesssim \sum_{5 \le \kappa+\nu\le M+1} \|(|u|+|u_x|)\|_{L^{6}_{x,t}} \|(|u|+|u_x|)\|^{\kappa+\nu-1}_{L^{3(\kappa+\nu-1)/2}_{x,t}} \nonumber\\ & \lesssim \sum_{5 \le \kappa+\nu\le M+1} \|(|u|+|u_x|)\|_{L^{6}_{x,t}} \|(|u|+|u_x|)\|^{\kappa+\nu-1}_{L^{6}_{x,t} \cap L^{3M/2}_{x,t} } \nonumber\\ & \lesssim \sum_{5 \le \kappa+\nu\le M+1} \big(\sum_{i=0,1}\|\partial^i_x u \|_{L^{6}_{x,t}} \big)^{\kappa+\nu} \nonumber\\ & \ \ \ + \sum_{5 \le \kappa+\nu\le M+1} \big(\sum_{i=0,1}\|\partial^i_x u \|_{L^{6}_{x,t}} \big) \big(\sum_{i=0,1}\|\partial^i_x u \|_{L^{3M/2}_{x,t}} \big)^{\kappa+\nu-1}. \label{1d.18} \end{align} Applying \eqref{pf-7-8}, we see that \eqref{1d.18} implies that \begin{align} & \| \mathscr{A} F(u,\bar{u},u_x,\bar{u}_x)\|_{L^\infty_tL^2_x \,\cap\, L^{6}_{x,t} } \lesssim \sum_{5 \le \kappa+\nu\le M+1} \|u\|_X^{\kappa+\nu}. 
\label{1d.19} \end{align} From Bernstein's estimate and \eqref{cor-3} it follows that \begin{align} & \|\partial_x \mathscr{A} F(u,\bar{u},u_x,\bar{u}_x) \|_{L^\infty_tL^2_x \,\cap\, L^{6}_{x,t} } \nonumber\\ & \le \|P_{\le 1} (\mathscr{A}\partial_x F(u,\bar{u},u_x,\bar{u}_x)) \|_{L^\infty_tL^2_x \,\cap\, L^{6}_{x,t} } \nonumber\\ & \ \ \ + \|P_{>1} (\mathscr{A}\partial_x F(u,\bar{u},u_x,\bar{u}_x)) \|_{L^\infty_tL^2_x \,\cap\, L^{6}_{x,t}} \nonumber\\ & \lesssim \|\mathscr{A} F(u,\bar{u},u_x,\bar{u}_x) \|_{L^\infty_tL^2_x \,\cap\, L^{6}_{x,t} } \nonumber\\ & \ \ \ + \sum_{j\gtrsim 1} 2^{j/2} \|\triangle_j F(u,\bar{u},u_x,\bar{u}_x)\|_{L^1_x L^2_t} \nonumber\\ & \lesssim \|\mathscr{A} F(u,\bar{u},u_x,\bar{u}_x) \|_{L^\infty_tL^2_x \,\cap\, L^{6}_{x,t} } \nonumber\\ & \ \ \ + \sum_{j\in \mathbb{Z}} 2^{\tilde{s}_M j} 2^{j/2} \|\triangle_j F(u,\bar{u},u_x,\bar{u}_x)\|_{L^1_x L^2_t} =III+IV. \label{1d.20} \end{align} The estimates of $III$ and $IV$ have been given in \eqref{1d.19} and \eqref{1d.14}, respectively. We have \begin{align} & \|\partial_x \mathscr{A} F(u,\bar{u},u_x,\bar{u}_x) \|_{L^\infty_tL^2_x \,\cap\, L^{6}_{x,t} } \lesssim \sum_{5 \le \kappa+\nu\le M+1} \|u\|_X^{\kappa+\nu}. \label{1d.21} \end{align} We have from \eqref{cor-2}--\eqref{cor-8}, \eqref{cor-3}--\eqref{cor-9} that \begin{align} \sum_{j\in \mathbb{Z}} |\!|\!| \triangle_j (\mathscr{A} F(u,\bar{u},u_x,\bar{u}_x)) |\!|\!|_s & \lesssim \sum_{j\in \mathbb{Z}} 2^{sj} \|\triangle_j F(u,\bar{u},u_x,\bar{u}_x)\|_{L^{6/5}_{x,t}}, \label{1d.22}\\ \sum_{j\in \mathbb{Z}} |\!|\!| \triangle_j (\mathscr{A}\partial_x F(u,\bar{u},u_x,\bar{u}_x)) |\!|\!|_s & \lesssim \sum_{j\in \mathbb{Z}} 2^{sj}2^{j/2} \|\triangle_j F(u,\bar{u},u_x,\bar{u}_x)\|_{L^1_x L^2_t} \label{1d.23} \end{align} hold for all $s>0$. The right hand side in \eqref{1d.23} has been estimated by \eqref{1d.14}. So, it suffices to consider the estimate of the right hand side in \eqref{1d.22}. 
Let us observe the equality \begin{align} F(u,\bar{u},u_x,\bar{u}_x) = \sum_{\kappa+\nu=5} \lambda_{\kappa\nu} u^\kappa u_x^\nu + \sum_{5<\kappa+\nu \le M+1} \lambda_{\kappa\nu} u^\kappa u_x^\nu :=V+VI. \label{1d.24} \end{align} For any $s_5\le s\le \tilde{s}_M$, $VI$ has been handled in \eqref{1d.11}--\eqref{1d.13}: \begin{align} \sum_{5< \kappa+\nu\le M+1} \sum_{j\in \mathbb{Z}} 2^{sj} \|\triangle_j (u^\kappa u_x^\nu )\|_{L^{6/5}_{x,t}} & \lesssim \sum_{5< \kappa+\nu\le M+1} \|u\|^{\kappa+\nu}_X. \label{1d.25} \end{align} For the estimate of $V$, we use Remark \ref{rem-on-lemma-3.1}: for any $s_5\le s\le \tilde{s}_M$, \begin{align} \sum_{\kappa+\nu=5} \sum_{j\in \mathbb{Z}} 2^{sj} \|\triangle_j (u^\kappa u_x^\nu )\|_{L^{6/5}_{x,t}} & \lesssim \big(\sum_{i=0,1}\|\partial^i_x u\|^4_{L^{6}_{x,t}} \big) \big(\sum_{i=0,1}\|\partial^i_x u\|_{\ell_\triangle^{1,s} (L^{6}_{x,t})} \big) \nonumber\\ & \lesssim \|u\|^5_X. \label{1d.26} \end{align} Summarizing the estimates above, \begin{align} \| \mathscr{T} u(t)\|_X \lesssim \|u_0\|_{ B^{1+\tilde{s}_{M}}_{2,1}}+ \sum_{5 \le \kappa+\nu\le M+1} \|u\|^{\kappa+\nu}_X, \label{1d.27} \end{align} whence we have the results, as desired. $\quad\quad\Box$ \noindent{\bf Acknowledgment.} This work is supported in part by the National Natural Science Foundation of China, grants 10571004 and 10621061; and the 973 Project Foundation of China, grant 2006CB805902. \footnotesize \end{document}
\begin{document} \title{One-shot rates for entanglement manipulation under non-entangling maps} \author{Fernando~G.S.L.~Brand\~ao and Nilanjana Datta \thanks{Fernando Brand\~ao ([email protected]) is at the Physics Department of Universidade Federal de Minas Gerais, Brazil. Nilanjana Datta ([email protected]) is in the Statistical Laboratory, Dept. of Applied Mathematics and Theoretical Physics, University of Cambridge. This work is part of the QIP-IRC supported by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement number 213681, EPSRC (GR/S82176/0) as well as the Integrated Project Qubit Applications (QAP) supported by the IST directorate as Contract Number 015848. FB is supported by a ``Conhecimento Novo'' fellowship from Funda\c{c}\~ao de Amparo a Pesquisa do Estado de Minas Gerais (FAPEMIG). } } \maketitle \begin{abstract} We obtain expressions for the optimal rates of one-shot entanglement manipulation under operations which generate a negligible amount of entanglement. As the optimal rates for entanglement distillation and dilution in this paradigm, we obtain the max- and min-relative entropies of entanglement, the two logarithmic robustnesses of entanglement, and smoothed versions thereof. This gives a new operational meaning to these entanglement measures. Moreover, by considering the limit of many identical copies of the shared entangled state, we partially recover the recently found reversibility of entanglement manipulation under the class of operations which asymptotically do not generate entanglement. \end{abstract} \section{Introduction} In the distant laboratory paradigm of quantum information theory, a system shared by two or more parties might have correlations that cannot be described by classical shared randomness; we say a state is entangled if it contains such intrinsically quantum correlations and hence cannot be created by local operations and classical communication (LOCC). 
Quantum teleportation \cite{BBC+93} shows that entanglement can actually be seen as a resource under the constraint that only LOCC operations are accessible. Indeed, one can use entanglement and LOCC to implement any operation allowed by quantum theory \cite{BBC+93}. The development of entanglement theory is thus centered in understanding, in a quantitative manner, the interconversion of one entangled state into another by LOCC, and their use for various information-theoretic tasks \cite{HHHH07, PV07}. In \cite{BBPS96}, Bennett \textit{et al} proved that entanglement manipulations of bipartite pure states, in the asymptotic limit of an arbitrarily large number of copies of the state, are reversible. Given two bipartite pure states $\ket{\psi_{AB}}$ and $\ket{\phi_{AB}}$, the former can be converted into the latter by LOCC if, and only if, $E(\ket{\psi_{AB}}) \geq E(\ket{\phi_{AB}})$, where $E$ is the von Neumann entropy of either of the two reduced density matrices of the state. For mixed bipartite states, it turns out that the situation is rather more complex. For instance there are examples of mixed bipartite states, known as \textit{bound} entangled states \cite{HHH98}, which require a non-zero rate of pure state entanglement for their creation by LOCC in the limit of many copies, but from which no pure state entanglement can be extracted \cite{HHH98, VC01, YHHS05}. This inherent irreversibility in the asymptotic manipulation of entanglement led to the exploration of different scenarios for the study of entanglement, departing from the original one based on LOCC operations (see e.g. \cite{Rai97, Rai01, EVWW01, APE03, IP05, HOH02}). The main motivation in these studies was to develop a simplified theory of entanglement manipulation, with the hope that it would also lead to new insights into the physically motivated setting of LOCC manipulations. Recently one possible such scenario has been identified. In Refs. 
\cite{BP08a, BP, Bra} the manipulation of entanglement under any operation which generates a negligible amount of entanglement, in the limit of many copies, was put forward. Remarkably, it was found that one recovers for multipartite mixed states the reversibility encountered for bipartite pure states under LOCC. In such a setting, only one measure is meaningful: the regularized relative entropy of entanglement \cite{VP, VW01}; it completely specifies when a multipartite state can be converted into another by the accessible operations. This framework has also found interesting applications to the LOCC paradigm, such as a proof that the LOCC entanglement cost is strictly positive for every multipartite entangled state \cite{Bra, BP09a} (see \cite{Piani09} for a different proof), new insights into separability criteria \cite{BS09}, and impossibility results for reversible transformations of pure multipartite states \cite{Bra10}. In this paper we analyze entanglement conversion of general multipartite states under non-entangling and approximately non-entangling operations in the \textit{single copy} regime (see e.g. \cite{wolf2}, \cite{ligong}, \cite{buscemidatta}, \cite{mmnd}, \cite{wcr} for other studies of the single copy regime in classical and quantum information theory). We will identify the single copy cost and distillation functions under non-entangling maps with the two logarithmic robustnesses of entanglement \cite{VT, HN, Bra05} (one of them also referred to as the max-relative entropy of entanglement \cite{nd2}), and the min-relative entropy of entanglement \cite{nd2}, respectively. On one hand, our findings give operational interpretation to these entanglement measures. On the other hand, they give further insight into the reversibility attained in the asymptotic regime. 
Indeed, we will be able to prove reversibility, under catalytic entanglement manipulations, by taking the asymptotic limit in our finite copy formulae and using a certain generalization of quantum Stein's Lemma proved in Ref. \cite{BP09a} (which is also the main technical tool used in \cite{BP08a, BP, Bra}). We hence partially recover the results of \cite{BP08a, BP, Bra}, where reversibility was proved without the use of entanglement catalysis. The paper is organized as follows. In Section \ref{prelim} we introduce the necessary notation and definitions. Section \ref{main} contains our main results, stated as Theorems \ref{smoothmin}-\ref{thmeq}. These theorems are then proved in Sections \ref{min}, \ref{lrob}, \ref{max} and \ref{eq}, respectively. \section{Notation and Definitions} \label{prelim} Let ${\cal B}(\mathcal{H})$ denote the algebra of linear operators acting on a finite-dimensional Hilbert space $\mathcal{H}$, and let ${\cal B}^{+}({\cal H}) \subset {\cal B}(\mathcal{H})$ denote the set of positive operators acting on $\mathcal{H}$. Let ${\cal D}(\mathcal{H}) \subset {\cal B}^{+}({\cal H})$ denote the set of states (positive operators of unit trace). Given a multipartite Hilbert space $\mathcal{H} = \mathcal{H}_1 \otimes \ldots \otimes \mathcal{H}_m$, we say a state $\sigma \in {\cal D}(\mathcal{H}_1 \otimes \ldots \otimes \mathcal{H}_m)$ is separable if there are local states $\sigma_j^k \in {\cal D}(\mathcal{H}_k)$ and a probability distribution $\{ p_j \}$ such that \begin{equation} \sigma = \sum_j p_j \sigma_j^1 \otimes \ldots \otimes \sigma_j^m. \end{equation} We denote the set of separable states by ${\cal{S}}$.
For given orthonormal bases $\{|i^A\rangle\}_{i=1}^d$ and $\{|i^B\rangle\}_{i=1}^d$ in isomorphic Hilbert spaces ${\cal{H}}_A$ and ${\cal{H}}_B$ of dimension $d$, a maximally entangled state (MES) of rank $M \le d$ is given by $$ |\Psi_M^{AB}\rangle= \frac{1}{\sqrt{M}} \sum_{i=1}^M |i^A\rangle |i^B\rangle.$$ We define the fidelity of two quantum states $\rho$, $\sigma$ as \begin{equation} F(\rho, \sigma) = \left( \mathrm{Tr} \sqrt{\sqrt{\sigma}\rho \sqrt{\sigma}} \right)^2. \end{equation} Finally, we denote the support of an operator $X$ by $\rm{supp}(X)$. Throughout this paper we restrict our considerations to finite-dimensional Hilbert spaces, and we take the logarithm to base $2$. In \cite{nd1} two generalized relative entropy quantities, referred to as the min- and max-relative entropies, were introduced. These are defined as follows. \begin{definition} Let $\rho \in {\cal D}({\cal H})$ and $\sigma \in {\cal B}^{+}({\cal H})$ be such that $\rm{supp}(\rho) \subseteq \rm{supp}(\sigma)$. Their max-relative entropy is given by \begin{equation} D_{\max}(\rho|| \sigma) := \log \min \{ \lambda: \, \rho\leq \lambda \sigma \}, \end{equation} while their min-relative entropy is given by \begin{equation} D_{\min}(\rho|| \sigma) := - \log \mathrm{Tr}\bigl(\Pi_{\rho}\sigma\bigr) \ , \label{dmin} \end{equation} where $\Pi_{\rho}$ denotes the projector onto $\rm{supp}(\rho)$ \footnote{Note that $D_{\min}(\rho|| \sigma)$ is well-defined whenever $\text{supp}(\rho) \cap \text{supp}(\sigma)$ is not empty.}. \end{definition} As noted in \cite{nd1, mmnd}, $D_{\min}(\rho|| \sigma)$ is the relative R\'enyi entropy of order $0$. In \cite{nd2} two entanglement measures were defined in terms of the above quantities.
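As a simple illustration of $D_{\max}$ and $D_{\min}$ before turning to these measures: for a pure state $\rho = |\psi\rangle\langle\psi|$ and the maximally mixed state $\sigma = \id/d$ on a $d$-dimensional Hilbert space, the smallest $\lambda$ satisfying $\rho \le \lambda \sigma$ is $\lambda = d$, while $\Pi_{\rho} = \rho$ gives $\mathrm{Tr}(\Pi_{\rho}\sigma) = 1/d$, so that
$$ D_{\max}(\rho||\id/d) = D_{\min}(\rho||\id/d) = \log d.$$
In general one only has $D_{\min}(\rho||\sigma) \le D_{\max}(\rho||\sigma)$; the two coincide in this example.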
\begin{definition} The max-relative entropy of entanglement of $\rho \in {\cal D}({\cal H})$ is given by \begin{equation} E_{\max}(\rho):= \min_{\sigma \in {\cal{S}}} D_{\max} (\rho||\sigma), \label{entmeasure} \end{equation} while its min-relative entropy of entanglement is given by \begin{equation} E_{\min}(\rho):= \min_{\sigma \in {\cal{S}}} D_{\min} (\rho||\sigma). \label{entmin} \end{equation} \end{definition} It turns out \cite{nd2} that $E_{\max}(\rho)$ is not really a new quantity, but is actually equal to the logarithmic version of the global robustness of entanglement, given by \cite{Bra05} \begin{equation} LR_G(\rho) := \log (1 + R_G(\rho)), \label{glr} \end{equation} where $R_G(\rho)$ is the global robustness of entanglement \cite{VT, HN} defined as \begin{equation} R_G(\rho) := \min_{s \in \mathbb{R}} \Bigl(s \ge 0: \exists \,\omega \in {\cal{D}} \,\,{\rm{s.t.}}\,\, \frac{1}{1+s}\rho + \frac{s}{1+s}\omega \in {\cal{S}} \Bigr). \label{gr} \end{equation} Another quantity of relevance in this paper is the robustness of entanglement \cite{VT}, denoted by $R(\rho)$. Its definition is analogous to that of $R_G(\rho)$ except that the states $\omega$ in Eq.~\reff{gr} are restricted to separable states. Its logarithmic version is defined as follows. \begin{definition} The logarithmic robustness of entanglement of $\rho \in {\cal D}({\cal H})$ is given by \begin{equation} LR(\rho) := \log(1 + R(\rho)). \end{equation} \end{definition} We also define smoothed versions of the quantities we consider as follows (see also \cite{BP09a, marco}). \begin{definition} For any $\eps >0$, the smooth max-relative entropy of entanglement of $\rho \in {\cal D}({\cal H})$ is given by \begin{equation} {E}_{\max}^\eps (\rho) := \min_{\bar{\rho} \in B^{\eps}(\rho)} E_{\max}(\bar{\rho}), \end{equation} where $B^{\eps}(\rho) := \{\bar \rho \in {\cal D}({\cal H}) : F(\bar \rho, \rho) \geq 1 - \eps \}$.
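Both measures can be evaluated exactly for a maximally entangled state. Since $\mathrm{Tr}(\Psi_M \sigma) \le 1/M$ for every separable state $\sigma$ \cite{horo_sep}, with equality attained, e.g., by $\sigma = \frac{1}{M}\sum_{i=1}^M |i^A i^B\rangle\langle i^A i^B|$, one finds
$$ E_{\min}(\Psi_M) = -\log \max_{\sigma \in {\cal S}} \mathrm{Tr}(\Psi_M \sigma) = \log M, $$
and, using $R_G(\Psi_M) = M - 1$ \cite{VT},
$$ E_{\max}(\Psi_M) = LR_G(\Psi_M) = \log M $$
as well.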
The smooth logarithmic robustness of entanglement of $\rho \in {\cal D}({\cal H})$ in turn is given by \begin{equation}\label{eps_LR} LR^\eps (\rho) := \min_{\bar{\rho} \in B^{\eps}(\rho)} LR(\bar{\rho}). \end{equation} Finally, the smooth min-relative entropy of entanglement of $\rho \in {\cal D}({\cal H})$ is defined as \begin{equation} {E}_{\min}^\eps (\rho) := \max_{0 \leq A \leq \id \atop{\mathrm{Tr}(A \rho) \geq 1 - \eps}} \min_{\sigma \in {\cal S}} \left(- \log \mathrm{Tr}(A \sigma) \right). \end{equation} \end{definition} We note that the definition of ${E}_{\min}^\eps(\rho)$ which we use in this paper is different from the one introduced in \cite{nd2}, where the smoothing was performed over an $\eps$-ball around the state $\rho$, in analogy with the smooth version of ${E}_{\max}^\eps(\rho)$ given above. Note also that while this new smoothing is a priori inequivalent to the one in \cite{nd2}, it is equivalent to the ``operator-smoothing'' introduced in \cite{buscemidatta}, which, in addition, gives rise to a continuous family of smoothed relative R\'enyi entropies. 
We will consider regularized versions of the smooth min- and max-relative entropies of entanglement \begin{eqnarray} {\E}_{\min}^\eps (\rho)&:=& \liminf_{n\rightarrow \infty}\frac{1}{n} E_{\min}^\eps(\rho^{\otimes n}), \nonumber\\ {\E}_{\max}^\eps (\rho)&:=& \limsup_{n\rightarrow \infty}\frac{1}{n} E_{\max}^\eps(\rho^{\otimes n}), \end{eqnarray} and the quantities \begin{eqnarray} {\E}_{\min} (\rho) &:=& \lim_{\eps \rightarrow 0} {\E}_{\min}^\eps (\rho), \nonumber\\ {\E}_{\max} (\rho) &:=& \lim_{\eps \rightarrow 0} {\E}_{\max}^\eps (\rho). \label{deff} \end{eqnarray} In \cite{BP09a, nd2} it was proved that ${\E}_{\max} (\rho)$ is equal to the regularized relative entropy of entanglement \cite{VP, VW01} \begin{equation} E_R^\infty(\rho):= \lim_{n\rightarrow \infty} \frac{1}{n}E_R(\rho^{\otimes n}), \label{rel11} \end{equation} where \begin{equation} E_{R}(\omega) := \min_{\sigma \in {\cal S}} S(\omega || \sigma) \end{equation} is the relative entropy of entanglement and $S(\omega || \sigma) := \mathrm{Tr}(\omega(\log \omega - \log \sigma))$ the quantum relative entropy. In this paper we prove that ${\E}_{\min} (\rho)$ is also equal to $E_R^\infty(\rho)$ (see Theorem \ref{thmeq}). We can now be more precise about the classes of maps we consider for the manipulation of entanglement, introduced in \cite{BP08a, BP}. \begin{definition} A completely positive trace-preserving (CPTP) map $\Lambda$ is said to be a non-entangling (or separability preserving) map if $\Lambda(\sigma)$ is separable for any separable state $\sigma$. We denote the class of such maps by SEPP \footnote{The acronym comes from the name {\em{separability preserving.}}}. \end{definition} \begin{definition} For any given $\delta >0$ we say a map $\Lambda$ is a $\delta$-non-entangling map if $R_G(\Lambda(\sigma)) \le \delta$ for every separable state $\sigma$. We denote the class of such maps by $\delta$-SEPP.
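For bipartite pure states the relative entropy of entanglement coincides with the entropy of entanglement \cite{VP}; in particular, since $\Psi_M^{\otimes n}$ is itself a maximally entangled state of rank $M^n$,
$$ E_R^\infty(\Psi_M) = E_R(\Psi_M) = \log M. $$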
\end{definition} In the following sections we will consider entanglement manipulations under non-entangling and $\delta$-non-entangling maps. We first give the definitions of achievable and optimal rates of entanglement manipulation protocols under a general class of maps, in order to make the subsequent discussion more transparent. In the definitions we will consider maps from a multipartite state to a maximally entangled state and vice-versa. It should be understood that the first two parties share the maximally entangled state, while the quantum state of the other parties is trivial (one-dimensional). \begin{definition} The one-shot entanglement cost of $\rho$ under the class of operations $\Theta$ is defined as \begin{eqnarray} & &E_{C,\Theta}^{(1), \eps}(\rho)\\ & & := \min_{M, \Lambda} \{ \log M : F( \rho, \Lambda(\Psi_M))\geq 1 - \eps, \Lambda \in \Theta, M \in \mathbb{Z}^+ \}. \nonumber \end{eqnarray} \end{definition} We also consider a \textit{catalytic} version of entanglement dilution under $\delta$-non-entangling maps. \begin{definition} The one-shot catalytic entanglement cost of $\rho$ under a class of quantum operations ${\Theta}$ is defined as \begin{eqnarray} \tilde E_{C,\Theta}^{(1), \eps}(\rho) := \min_{M, K, \Lambda} && \{ \log M : \Lambda(\Psi_M \otimes \Psi_K) = \rho' \otimes \Psi_K, \nonumber \\ && F( \rho, \rho')\geq 1 - \eps, \Lambda \in \Theta, M, K \in \mathbb{Z}^+ \}. \nonumber \end{eqnarray} \end{definition} Finally, the next definition formalizes the notion of single-shot entanglement distillation under general classes of maps. \begin{definition} The one-shot distillable entanglement of $\rho$ under a class of quantum operations ${\Theta}$ is defined as \begin{eqnarray} & &E_{D,\Theta}^{(1), \eps}(\rho)\\ & & := \max_{M, \Lambda} \{ \log M : F( \Lambda(\rho), \Psi_M)\geq 1 - \eps, \Lambda \in \Theta, M \in \mathbb{Z}^+ \}.
\nonumber \end{eqnarray} \end{definition} In the following we shall consider ${\Theta}$ to be either the class of SEPP maps or the class of $\delta$-SEPP maps for a given $\delta >0$. \section{Main Results} \label{main} The main results of the paper are given by the following four theorems. They provide operational interpretations of the smooth max- and min-relative entropies of entanglement, and the logarithmic version of the robustness of entanglement, in terms of optimal rates of one-shot entanglement manipulation protocols. The first theorem relates the smoothed min-relative entropy of entanglement to the single-shot distillable entanglement under non-entangling maps. \begin{theorem} \label{smoothmin} For any state $\rho$ and any $\varepsilon \geq 0$, \begin{equation} \lfloor E_{\min}^\varepsilon (\rho) \rfloor \le E_{D, SEPP}^{(1), \eps}(\rho) \le E_{\min}^{\varepsilon}(\rho). \end{equation} \end{theorem} The following theorem relates the smoothed logarithmic robustness of entanglement to the one-shot entanglement cost under non-entangling maps. \begin{theorem} \label{logrob} For any state $\rho$ and any $\varepsilon \geq 0$, \begin{equation} LR^{\varepsilon}(\rho) \le E_{C,{\rm{SEPP}}}^{(1),\varepsilon }(\rho) \le LR^{\varepsilon}(\rho) + 1. \label{47} \end{equation} \end{theorem} We also prove a theorem analogous to the previous one, but now relating the logarithmic global robustness (alias the max-relative entropy of entanglement) to the one-shot catalytic entanglement cost under $\delta$-non-entangling maps.
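As an illustration of the first two theorems, take $\rho = \Psi_N$ and $\varepsilon = 0$. Then $E_{\min}(\Psi_N) = \log N$, and since $R(\Psi_N) = N - 1$ \cite{VT}, also $LR(\Psi_N) = \log N$; the theorems then give
$$ \lfloor \log N \rfloor \le E_{D, SEPP}^{(1), 0}(\Psi_N) \le \log N, \qquad \log N \le E_{C,{\rm{SEPP}}}^{(1),0}(\Psi_N) \le \log N + 1, $$
consistent with the fact that a rank-$N$ maximally entangled state carries exactly $\log N$ ebits.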
\begin{theorem} \label{emax} For any $\delta, \eps > 0$ there exists a positive integer $K$, such that for any state $\rho$ \begin{eqnarray} E_{\max}^{\eps}(\rho \otimes \Psi_K) &-& \log K - \log(1+\delta) \le {\widetilde{E}}_{C, \delta-SEPP}^{(1), \eps}(\rho) \nonumber\\ &\le& E_{\max}^{\eps}(\rho \otimes \Psi_K) - \log(1 - \eps) - \log K + 1. \end{eqnarray} We can take in particular $K = \lceil 1 + \delta^{-1} \rceil$. \end{theorem} Finally we show that we can partially recover the reversibility of entanglement manipulations under asymptotically non-entangling maps \cite{BP08a, Bra05} from the results derived in this paper and the quantum hypothesis testing result of \cite{BP09a}. \begin{theorem} \label{thmeq} For every state $\rho \in {\cal D}({\cal H})$, \begin{equation} {\E}_{\min} (\rho) = {\E}_{\max} (\rho) = E_R^{\infty}(\rho). \end{equation} \end{theorem} From Theorems \ref{smoothmin} and \ref{emax} we then find that the distillable entanglement and the \textit{catalytic} entanglement cost under asymptotically non-entangling maps are the same. In Refs. \cite{BP08a, Bra05} the same result was shown without the need of catalysis. Here we need the extra resource of catalytic maximally entangled states because we want to ensure that, already on a single-copy level, our operations only generate a negligible amount of entanglement; in Refs. \cite{BP08a, Bra05}, in turn, this is only the case for a large number of copies of the state. In more detail: we define the distillable entanglement under non-entangling operations as \begin{equation} E_D^{ne}(\rho) := \lim_{\eps \rightarrow 0} \lim_{n \rightarrow \infty} \frac{1}{n} E_{D, SEPP}^{(1), \eps}(\rho^{\otimes n}). \end{equation} It then follows easily from Theorem \ref{smoothmin} and Theorem \ref{thmeq} that $E_D^{ne}(\rho) = E_R^{\infty}(\rho)$.
The catalytic entanglement cost under asymptotic non-entangling operations, in turn, is defined as \begin{equation} E_C^{ane}(\rho) := \lim_{\eps \rightarrow 0} \lim_{\delta \rightarrow 0} \lim_{n \rightarrow \infty} \frac{1}{n} {\widetilde{E}}_{C, \delta-SEPP}^{(1), \eps}(\rho^{\otimes n}). \end{equation} That $E_C^{ane}(\rho) = E_R^{\infty}(\rho)$ then follows from Theorems \ref{emax} and \ref{thmeq}. We note that it was already proven in Refs.~\cite{nd2, BP09a} that ${\E}_{\max} (\rho) = E_R^{\infty}(\rho)$. Our contribution is to show that the regularization of the smooth min-relative entropy of entanglement is also equal to the regularized relative entropy of entanglement. \section{Proof of Theorem~\ref{smoothmin}} \label{min} The proof of Theorem~\ref{smoothmin} will employ the following lemma. \begin{lemma} \label{monotonicityEmin} For any $\Lambda \in {\rm{SEPP}}$, \begin{equation} E_{\min}^{\eps}(\rho) \geq E_{\min}^{\eps}(\Lambda(\rho)). \end{equation} \end{lemma} \begin{proof} Let $0 \leq A \leq \id$ be such that $\mathrm{Tr}(A \Lambda(\rho)) \geq 1 - \eps$ and $E_{\min}^{\eps}(\Lambda(\rho)) = \min_{\sigma \in {\cal S}} (- \log \mathrm{Tr}(A \sigma))$. Setting $\sigma_{\rho}$ as the optimal state in the definition of $E_{\min}^{\eps}(\rho)$, \begin{eqnarray} E_{\min}^{\eps}(\rho) &\geq& - \log \mathrm{Tr}(\Lambda^{\dagger}(A) \sigma_{\rho}) \nonumber \\ &=& - \log \mathrm{Tr}(A \Lambda(\sigma_{\rho})) \nonumber \\ &\geq& \min_{\sigma \in {\cal S}} \left( - \log \mathrm{Tr}(A \sigma) \right) \nonumber \\ &=& E_{\min}^{\eps}(\Lambda(\rho)), \end{eqnarray} where $\Lambda^{\dagger}$ is the adjoint map of $\Lambda$. In the first line we used that $0 \leq \Lambda^{\dagger}(A) \leq \id$ and $\mathrm{Tr}(\Lambda^{\dagger}(A)\rho) = \mathrm{Tr}(A \Lambda(\rho))\geq 1 - \eps$, while in the third line we use the fact that $\Lambda(\sigma_\rho)$ is separable, since $\Lambda \in {\rm{SEPP}}$.
\end{proof} \begin{proof}[Theorem~\ref{smoothmin}] We first prove that $E_{D,{\rm{SEPP}}}^{(1), \eps}(\rho)\ge \lfloor E_{\min}^{\eps}(\rho) \rfloor.$ For this it suffices to prove that any $R \le \lfloor E_{\min}^{\eps}(\rho) \rfloor$ is an achievable one-shot distillation rate for $\rho$. Consider the class of completely positive trace-preserving maps $\Lambda \equiv \Lambda_A$ (for an operator $0\le A \le \id$) whose action on a state $\rho$ is given as follows: \begin{equation} \Lambda(\rho) := \mathrm{Tr}(A\rho) \Psi_M + \mathrm{Tr}\bigl((\id-A)\rho\bigr) \frac{(\id - \Psi_M)}{M^2 - 1}, \label{vintecinco} \end{equation} for any state $\rho\in {\cal{D}}({\cal{H}})$. An isotropic state $\omega$, such as the one appearing on the right-hand side of Eq. (\ref{vintecinco}), is separable if and only if $\mathrm{Tr}(\omega \Psi_M) \le 1/M$ \cite{horo1}. Hence, the map $\Lambda$ is SEPP if, and only if, for any separable state $\sigma$, $\mathrm{Tr}(\Lambda(\sigma) \Psi_M) \le 1/M$, or equivalently, \begin{equation} \mathrm{Tr}(A \sigma) \le \frac{1}{M}. \label{e11} \end{equation} We now choose $A$ as the optimal POVM element in the definition of $E_{\min}^{\eps}(\rho)$ and set $M = 2^{\lfloor E_{\min}^{\eps}(\rho) \rfloor}$. On one hand, as $\mathrm{Tr}(A \rho) \geq 1 - \eps$, we find that $F(\Lambda(\rho), \Psi_M) \geq 1 - \eps$. On the other hand, by the definition of $E_{\min}^{\eps}(\rho)$, we have that \begin{equation} 2^{- E_{\min}^{\eps}(\rho)} = \max_{\sigma \in {\cal S}} \mathrm{Tr}(A \sigma), \end{equation} and hence $\mathrm{Tr}(A \sigma) \leq 1/M$ for every separable state $\sigma$, which ensures that the map $\Lambda$ defined by \reff{vintecinco} is a SEPP map.
Hence, $\log M = \lfloor E_{\min}^\eps (\rho)\rfloor$ is an achievable rate and $E_{D,{\rm{SEPP}}}^{(1), \eps}(\rho)\ge \lfloor E_{\min}^{\eps}(\rho) \rfloor.$ We next prove the converse, namely that $E_{D,{\rm{SEPP}}}^{(1), \eps}(\rho) \le E_{\min}^{\eps}(\rho).$ Suppose $\Lambda$ is the optimal SEPP map such that $F(\Lambda(\rho), \Psi_M) \geq 1 - \eps$, with $\log M = E_{D,{\rm{SEPP}}}^{(1), \eps}(\rho)$. By Lemma \ref{monotonicityEmin} we have \begin{eqnarray} E_{\min}^{\eps}(\rho) &\geq& E_{\min}^{\eps}(\Lambda(\rho)) \nonumber \\ &=& \max_{0 \leq A \leq \id \atop{\mathrm{Tr}(A \Lambda(\rho)) \geq 1 - \eps}} \min_{\sigma \in {\cal S}} \left(- \log \mathrm{Tr}(A \sigma) \right) \nonumber \\ &\geq& \min_{\sigma \in {\cal S}} \left(- \log \mathrm{Tr}(\Psi_M \sigma) \right) \nonumber \\ &=& \log M \nonumber \\ &=& E_{D,{\rm{SEPP}}}^{(1), \eps}(\rho), \end{eqnarray} where we used that $0 \leq \Psi_M \leq \id$ and $\mathrm{Tr}(\Lambda(\rho)\Psi_M) \geq 1 - \eps$ and that $\mathrm{Tr}(\Psi_M \sigma) \leq 1/M$ for every separable state $\sigma$. \end{proof} \section{Proof of Theorem~\ref{logrob}} \label{lrob} \begin{proof} To prove the upper bound in \reff{47}, consider the quantum operation $\Lambda$ acting on a state $\omega$ as follows: \begin{equation} \Lambda(\omega) = \mathrm{Tr}(\Psi_M \omega)\rho_{\varepsilon} + \bigl[1 - \mathrm{Tr}(\Psi_M \omega)\bigr] \pi, \label{57} \end{equation} where $\rho_\eps$ is the state in $B^\eps(\rho)$ which achieves the minimum in the definition \reff{eps_LR} of the smooth logarithmic robustness, and $\pi$ is a separable state such that the state $$\sigma:= \bigl( \rho_{\varepsilon} + (M-1) \pi\bigr)/M$$ is separable for the choice $M = 1 + \lceil R(\rho_{\varepsilon}) \rceil$. We can rewrite Eq. \reff{57} as \begin{equation} \Lambda(\omega) = q \bigl[\frac{\rho_{\varepsilon} + (M-1) \pi}{M}\bigr] + (1-q) \pi, \end{equation} where $q = M\mathrm{Tr}(\Psi_M \omega)$.
For a separable state $\omega$, $\mathrm{Tr}(\Psi_M \omega) \le 1/M$ \cite{horo_sep}, and hence $0\le q \le 1$. By the convexity of the robustness \cite{rob_convex} we have that, for any separable state $\omega$, $$ R(\Lambda(\omega)) \le q R(\sigma) + (1-q) R(\pi).$$ Note that $R(\pi)=0$ since $\pi$ is separable. Moreover, since $R(\sigma)=0$ for $M = 1 + \lceil R(\rho_{\varepsilon}) \rceil$, we have $R(\Lambda(\omega))=0$, ensuring that the map $\Lambda$ is non-entangling. Note that $\Lambda(\Psi_M) = \rho_{\varepsilon}$, with the corresponding rate of $\log M = \log (1 +\lceil R(\rho_{\varepsilon}) \rceil) \leq LR^{\varepsilon}(\rho) + 1$. This then yields the upper bound in Theorem \ref{logrob}. To prove the lower bound in \reff{47}, let $\Lambda$ denote a SEPP map yielding entanglement dilution with a fidelity of at least $1-\varepsilon$, for a state $\rho$, i.e. $\Lambda(\Psi_M) = \rho_{\varepsilon}$, with $F(\rho, \rho_{\varepsilon}) \geq 1 - \varepsilon$, and $\log M = E_{C, {\rm{SEPP}}}^{(1), \varepsilon}(\rho)$. The monotonicity of the log robustness under SEPP maps \cite{BP} yields \begin{eqnarray} LR^{\varepsilon}(\rho) \leq LR(\rho_{\varepsilon}) &=& LR(\Lambda(\Psi_M)) \nonumber\\ & \le& LR(\Psi_M) \nonumber\\ &=& \log M = E_{C, {\rm{SEPP}}}^{(1), \varepsilon}(\rho). \end{eqnarray} \end{proof} \section{Proof of Theorem~\ref{emax}} \label{max} The following lemmata will be employed in the proof of Theorem~\ref{emax}. \begin{lemma} \label{monoemax} For any $\delta >0$ and $\Lambda \in \delta$-{\rm{SEPP}}, \begin{equation} E_{\max}^{\eps}(\rho) \geq E_{\max}^{\eps}(\Lambda(\rho)) - \log(1+ \delta). \end{equation} \end{lemma} \begin{proof} Let $\rho_\eps$ be the optimal state in the definition of $E_{\max}^{\eps}(\rho)$, i.e., $E_{\max}^{\eps}(\rho) = E_{\max}(\rho_{\eps})$.
By the monotonicity of the fidelity under CPTP maps we have that $F(\Lambda(\rho), \Lambda(\rho_\eps))\ge F(\rho, \rho_\eps) \ge 1- \eps.$ Hence, using Lemma IV.1 of \cite{BP}, \begin{eqnarray} E_{\max}^{\eps}(\Lambda(\rho)) &\le & E_{\max}(\Lambda(\rho_\eps)) \nonumber\\ &\le & E_{\max}(\rho_\eps) + \log (1 + \delta) \nonumber\\ &=& E_{\max}^{\eps}(\rho) + \log (1 + \delta). \end{eqnarray} \end{proof} \begin{lemma} \label{newlemmads} For every $\rho \in {\cal D}({\cal H})$ and $\varepsilon > 0$, there is a state $\mu_{\varepsilon}$ of the form \begin{equation}\label{mu} \mu_{\varepsilon} := (1 - \lambda) \rho_{\varepsilon} \otimes \Psi_K + \lambda \theta \otimes \left( \frac{\id - \Psi_K}{K^2 - 1} \right), \end{equation} with $K \in \mathbb{Z}^+$, $\theta, \rho_{\varepsilon} \in {\cal D}({\cal H})$, $F(\rho, \rho_\eps) \ge 1 - \eps$, and $\lambda \leq \varepsilon$, such that \begin{equation} E_{\max}^{\varepsilon}(\rho \otimes \Psi_K) \ge E_{\max}(\mu_{\eps}). \end{equation} \end{lemma} \begin{proof} Let $\mu_{\varepsilon}'$ be such that $E_{\max}^{\varepsilon}(\rho \otimes \Psi_K) = E_{\max}(\mu_{\varepsilon}')$. Then there is a separable state $\sigma$ such that \begin{equation} \label{newlemmaeq1} \mu_{\varepsilon}' \leq 2^{E_{\max}^{\varepsilon}(\rho \otimes \Psi_K)}\sigma \end{equation} and $F(\mu_{\varepsilon}', \rho \otimes \Psi_K) \geq 1 - \varepsilon$. Consider the twirling map \begin{equation} \Delta(X) := \int dU \, (U \otimes U^*) X (U \otimes U^*)^{\dagger}, \end{equation} where the integral is taken with respect to the Haar measure, and define $\mu_{\eps} := (\id \otimes \Delta)(\mu_{\varepsilon}')$.
Then, because $\Delta$ is entanglement breaking \cite{ruskai_shor}, we can write \begin{equation}\label{mu_e} \mu_{\varepsilon} := (1 - \lambda) \rho_{\varepsilon} \otimes \Psi_K + \lambda \theta \otimes \left( \frac{\id - \Psi_K}{K^2 - 1} \right), \end{equation} for $\theta, \rho_{\varepsilon} \in {\cal D}({\cal H})$ and $0 \leq \lambda \leq 1$. From Eq. (\ref{newlemmaeq1}), \begin{equation} \mu_{\varepsilon} \leq 2^{E_{\max}^{\varepsilon}(\rho \otimes \Psi_K)} (\id \otimes \Delta)\sigma. \end{equation} Since $\Delta$ is LOCC, $(\id \otimes \Delta)\sigma$ is separable and we get $E_{\max}(\mu_{\varepsilon}) \leq E_{\max}^{\varepsilon}(\rho \otimes \Psi_K)$. Moreover, from the monotonicity of the fidelity under CPTP maps, $F(\mu_{\varepsilon}, \rho \otimes \Psi_K) \geq 1 - \varepsilon$. From this and (\ref{mu}) it follows that $$(1-\lambda) \ge F(\rho, \rho_\eps) \ge 1 - \eps,$$ and thus, $\lambda \leq \varepsilon$. \end{proof} \begin{proof}[Theorem~\ref{emax}] Let us start by proving the achievability part, namely that for every $\delta > 0$ we can find a positive integer $K$ such that ${\widetilde{E}}_{C, \delta-{\rm{SEPP}}}^{(1), \eps}(\rho) \le E_{\max}^{\eps}(\rho \otimes \Psi_K) - \log(1 - \eps) - \log K + 1$. From Lemma \ref{newlemmads} we know there is a state $\rho_{\eps}$ such that $F(\rho_{\eps}, \rho) \geq 1 - \eps$ and $E_{\max}(\rho_{\eps} \otimes \Psi_K) \leq E_{\max}^{\eps}(\rho \otimes \Psi_K) - \log(1 - \eps)$. This can be seen as follows: let $\mu_\eps$ be a state of the form given by \reff{mu}. From the definition of the max-relative entropy of entanglement, Eq. \reff{entmeasure}, it follows that \begin{eqnarray} \mu_\eps &\le & 2^{E_{\max}(\mu_\eps)} \sigma' \nonumber\\ &\le & 2^{E_{\max}^{\varepsilon}(\rho \otimes \Psi_K)}\sigma', \end{eqnarray} for some separable state $\sigma' \in {\cal{B}}({\cal{H}})$, where we get the second inequality by using Lemma \ref{newlemmads}.
Substituting the expression \reff{mu} of $\mu_\eps$ we get \begin{eqnarray} &&(1 - \lambda) \rho_{\varepsilon} \otimes \Psi_K + \lambda \theta \otimes \left( \frac{\id - \Psi_K}{K^2 - 1} \right) \nonumber\\ &\le & 2^{E_{\max}^{\varepsilon}(\rho \otimes \Psi_K)}\sigma'. \end{eqnarray} This yields \begin{equation} (1 - \lambda) \rho_{\varepsilon} \otimes \Psi_K \le 2^{E_{\max}^{\varepsilon}(\rho \otimes \Psi_K)}\sigma', \end{equation} and hence, $$\rho_{\varepsilon} \otimes \Psi_K\le 2^{E_{\max}^{\varepsilon}(\rho \otimes \Psi_K)}2^{-\log(1-\lambda)} \sigma',$$ which in turn implies that $$\rho_{\varepsilon} \otimes \Psi_K\le 2^{E_{\max}^{\varepsilon}(\rho \otimes \Psi_K) - \log(1 - \eps)} \sigma',$$ since $\lambda \le \eps$. Therefore, for $K = \lceil 1 + \delta^{-1} \rceil$ and $M = \lceil K^{-1} 2^{ E_{\max}^{\eps}(\rho \otimes \Psi_K) - \log(1 - \eps)} \rceil$, we can always find a state $\pi$ such that $\bigl((\rho_\eps \otimes \Psi_K) + (MK-1) \pi\bigr)$ is an unnormalized separable state. Define the map \begin{eqnarray} \Lambda(\omega) &=& \bigl[ \mathrm{Tr}((\Psi_M \otimes \Psi_K) \omega) \bigr] \bigl(\rho_{\eps} \otimes \Psi_{K}\bigr) \nonumber \\ &+& \bigl[\mathrm{Tr}((\id - \Psi_M \otimes \Psi_K) \omega)\bigr]\pi. \label{qop2} \end{eqnarray} We now show that with our choice of parameters the map $\Lambda$ is $\delta$-SEPP. First note that since for any separable state $\sigma \in {\cal{B}}({\cal{H}} \otimes {\cal{H}})$ $$\mathrm{Tr}\bigl( (\Psi_M \otimes \Psi_K) \sigma\bigr) \le \frac{1}{MK},$$ we can write \begin{equation} \Lambda(\sigma) = p (\rho_\eps \otimes \Psi_K) + (1-p) \pi, \label{15} \end{equation} where $p \le \frac{1}{MK}$. This in turn can be written as \begin{equation} \Lambda(\sigma) = q\bigl[\frac{ (\rho_\eps \otimes \Psi_K) + (MK-1) \pi}{MK}\bigr] + (1-q)\pi, \label{e27} \end{equation} where $q = pMK$. Since $0\le p \le 1/MK$, we have that $0\le q \le 1$.
Note that the first term in parenthesis in \reff{e27} is separable, due to the choice of $\pi$. Using the convexity of the global robustness we then conclude that $R_G(\Lambda(\sigma)) \le R_G(\pi)$, for any separable state $\sigma$. Further, from the choice of $M$ and $K$ it follows that $$R_G(\pi) \le \frac{1}{R_G(\rho_\eps \otimes \Psi_K) } \le \frac{1}{K-1}\le \delta.$$ The first inequality follows from the fact that if $(\rho + s \sigma)$ is an unnormalized separable state, then so is $(\sigma + (1/s) \rho)$, and by noting that $$\frac{\rho+s\sigma}{1+s} = \frac{\sigma + s^{-1}\rho}{1 + s^{-1}}.$$ The second inequality follows from the monotonicity of $R_G$ under LOCC \cite{VT}, which implies $R_G(\rho_{\eps}\otimes \Psi_K) \geq R_G(\Psi_K)$, and the fact that $R_G(\Psi_K) = K - 1$ \cite{VT}. Finally, the third is a consequence of the choice of $K$. Note that for $\omega = \Psi_M\otimes \Psi_K$, \begin{equation} \Lambda(\Psi_M\otimes \Psi_K ) = \rho_\eps\otimes \Psi_{K}. \label{two} \end{equation} Hence the protocol yields a state $\rho_\eps$ with $F(\rho, \rho_\eps) \ge 1 - \eps$ and the additional maximally entangled state $\Psi_{K}$ which was employed at the start of the protocol. Its role in the protocol is to ensure that the quantum operation $\Lambda$ is a $\delta$-SEPP map for any given $\delta >0$. Since the maximally entangled states $\Psi_M$ and $\Psi_{K}$ were employed in the protocol and $\Psi_{K}$ was retrieved unchanged, the rate $ R = (\log M + \log K) - \log K = \log M \leq E^\eps_{\max}(\rho \otimes \Psi_K) - \log K - \log(1 - \eps) + 1$ is achievable. Next we prove the bound ${\widetilde{E}}_{C, \delta-{\rm{SEPP}}}^{(1), \eps}(\rho) \ge E_{\max}^\eps(\rho \otimes \Psi_K) - \log K -\log (1+\delta)$. Let $\Lambda$ be a $\delta$-SEPP map for which $$\Lambda(\Psi_M \otimes \Psi_K) = \rho_\eps \otimes \Psi_{K},$$ with ${\widetilde{E}}_{C, \delta-{\rm{SEPP}}}^{(1), \eps}(\rho)= \log M$.
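To make the choice of parameters concrete, take for instance $\delta = 1/2$. Then $K = \lceil 1 + \delta^{-1}\rceil = 3$, and the chain of inequalities above gives, for every separable state $\sigma$,
$$ R_G(\Lambda(\sigma)) \le R_G(\pi) \le \frac{1}{K - 1} = \frac{1}{2} = \delta, $$
so $\Lambda$ is indeed a $\frac{1}{2}$-non-entangling map.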
Then by Lemma \ref{monoemax}, \begin{eqnarray} E_{\max}^\eps(\rho \otimes \Psi_K) & \le & E_{\max}(\rho_\eps \otimes \Psi_K) \nonumber\\ &=& E_{\max}(\Lambda(\Psi_M \otimes \Psi_K)) \nonumber\\ &\le & E_{\max}(\Psi_M \otimes \Psi_K) + \log (1+\delta) \nonumber\\ &=& \log M + \log K + \log (1+\delta). \end{eqnarray} Hence \begin{equation} \log M \ge E_{\max}^\eps(\rho \otimes \Psi_K) - \log K - \log(1 + \delta). \end{equation} \end{proof} \section{Equivalence with the regularized relative entropy of entanglement} \label{eq} In this section we prove Theorem \ref{thmeq}. The main ingredient in the proof is a certain generalization of Quantum Stein's Lemma proved in Refs. \cite{Bra, BP09a} and stated below as Lemma \ref{BP09} for the special case of the set of separable states. \begin{lemma} \label{BP09} Let $\rho \in {\cal D}({\cal H})$. Then (\textit{Direct part}): For every $\eps > 0$ there exists a sequence of POVMs $\{ A_n, \id - A_n \}_{n \in \mathbb{N}}$ such that \begin{equation} \lim_{n \rightarrow \infty} \mathrm{Tr}((\id - A_n) \rho^{\otimes n}) = 0 \end{equation} and for every $n \in \mathbb{N}$ and $\omega_n \in {\cal S}({\cal H}^{\otimes n})$, \begin{equation} \label{exponentialdecay} - \frac{\log \mathrm{Tr}(A_n \omega_n)}{n} + {\eps} \geq E_R^{\infty}(\rho). \end{equation} (\textit{Strong Converse}): If a real number $\eps > 0$ and a sequence of POVMs $\{ A_n, \id - A_n \}_{n \in \mathbb{N}}$ are such that for every $n \in \mathbb{N}$ and $\omega_n \in {\cal S}({\cal H}^{\otimes n})$, \begin{equation} - \frac{\log( \mathrm{Tr}(A_n \omega_n))}{n} - \eps \geq E_R^{\infty}(\rho), \end{equation} then \begin{equation} \lim_{n \rightarrow \infty} \mathrm{Tr}((\id - A_n) \rho^{\otimes n}) = 1. \end{equation} \end{lemma} \begin{proof}[Theorem~\ref{thmeq}] In Refs.
\cite{Bra, BP09a, nd2} it was established that \begin{equation} \label{maxeq} {\cal E}_{\max}(\rho) = E_R^{\infty}(\rho). \end{equation} We hence focus on showing that ${\cal E}_{\min}(\rho) \geq E_R^{\infty}(\rho)$, since ${\cal E}_{\min}(\rho) \leq E_R^{\infty}(\rho)$ follows from Eq. (\ref{maxeq}) and the fact that ${\cal E}_{\max}(\rho) \geq {\cal E}_{\min}(\rho)$ (which in turn is a direct consequence of their definitions). Let $\eps > 0$ and $\{ A_n \}$ be an optimal sequence of POVMs in the direct part of Lemma \ref{BP09}. Then for sufficiently large $n$, $\mathrm{Tr}(\rho^{\otimes n}A_n) \geq 1 - \eps$ and thus \begin{eqnarray} {E}_{\min}^\eps (\rho^{\otimes n}) \geq \min_{\sigma \in {\cal S}({\cal H}^{\otimes n})} \left(- \log \mathrm{Tr}(A_n \sigma) \right) \geq n (E_R^{\infty}(\rho) - \eps), \end{eqnarray} where the last inequality follows from Eq. (\ref{exponentialdecay}). Dividing both sides by $n$ and taking the limit $n \rightarrow \infty$ we get \begin{equation} {\cal E}_{\min}^\eps (\rho) \geq E_R^{\infty}(\rho) - \eps. \end{equation} Since this equation holds for every $\eps > 0$, we can finally take the limit $\eps \rightarrow 0$ to find \begin{equation} {\cal E}_{\min}(\rho) \geq E_R^{\infty}(\rho). \end{equation} \end{proof} \section*{Acknowledgments} The authors would like to thank Martin Plenio for many interesting discussions on the theme of this paper. This work is part of the QIP-IRC supported by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement number 213681, EPSRC (GR/S82176/0) as well as the Integrated Project Qubit Applications (QAP) supported by the IST directorate as Contract Number 015848. FB is supported by an EPSRC Postdoctoral Fellowship for Theoretical Physics. \begin{thebibliography}{1} \bibitem{BBC+93} C.H. Bennett, G. Brassard, C. Cr\'epeau, R. Jozsa, A. Peres, and W.K. Wootters.
Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels. Phys. Rev. Lett. \textbf{70}, 1895 (1993). \bibitem{HHHH07} R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki. Quantum entanglement. Rev. Mod. Phys. \textbf{81}, 865 (2009). \bibitem{PV07} M.B. Plenio and S. Virmani. An introduction to entanglement measures. Quant. Inf. Comp. \textbf{7}, 1 (2007). \bibitem{BBPS96} C.H.~Bennett, H.J.~Bernstein, S.~Popescu, and B.~Schumacher. Concentrating Partial Entanglement by Local Operations. Phys. Rev. A \textbf{53}, 2046 (1996). \bibitem{HHH98} M. Horodecki, P. Horodecki, and R. Horodecki. Mixed-State Entanglement and Distillation: Is there a ``Bound'' Entanglement in Nature? Phys. Rev. Lett. \textbf{80}, 5239 (1998). \bibitem{VC01} G. Vidal and J.I. Cirac. Irreversibility in asymptotic manipulations of entanglement. Phys. Rev. Lett. \textbf{86}, 5803 (2001). \bibitem{YHHS05} D. Yang, M. Horodecki, R. Horodecki, and B. Synak-Radtke. Irreversibility for all bound entangled states. Phys. Rev. Lett. \textbf{95}, 190501 (2005). \bibitem{Rai97} E.M. Rains. Entanglement purification via separable superoperators. quant-ph/9707002. \bibitem{Rai01} E.M. Rains. A semidefinite program for distillable entanglement. IEEE Trans. Inf. Theo. \textbf{47}, 2921 (2001). \bibitem{EVWW01} T. Eggeling, K.G.H. Vollbrecht, R.F. Werner, and M.M. Wolf. Distillability via protocols respecting the positivity of partial transpose. Phys. Rev. Lett. \textbf{87}, 257902 (2001). \bibitem{APE03} K.M.R. Audenaert, M.B. Plenio, and J. Eisert. The entanglement cost under operations preserving the positivity of partial transpose. Phys. Rev. Lett. \textbf{90}, 027901 (2003). \bibitem{IP05} S. Ishizaka and M.B. Plenio. Entanglement under asymptotic positive-partial-transpose preserving operations. Phys. Rev. A \textbf{72}, 042325 (2005). \bibitem{HOH02} M. Horodecki, J. Oppenheim, and R. Horodecki. Are the laws of entanglement theory thermodynamical? Phys. Rev. Lett.
\textbf{89}, 240403 (2002). \bibitem{BP08a} F.G.S.L. Brand\~ao and M.B. Plenio. Entanglement Theory and the Second Law of Thermodynamics. Nature Physics \textbf{4}, 873 (2008). \bibitem{BP} F.G.S.L.~Brand\~ao and M.B.~Plenio. A reversible theory of entanglement and its relation to the second law. Commun. Math. Phys. \textbf{295}, 829 (2010). \bibitem{Bra} F.G.S.L. Brand\~ao. Entanglement theory and the quantum simulation of many-body systems. PhD Thesis, Imperial College, 2008. arXiv:0810.0026. \bibitem{VP} V.~Vedral and M.B.~Plenio. Entanglement measures and purification procedures. Phys.~Rev.~A \textbf{57}, 1147 (1998). \bibitem{VW01} K.G.H. Vollbrecht and R.F. Werner. Entanglement measures under symmetry. Phys. Rev. A \textbf{64}, 062307 (2001). \bibitem{BP09a} F.G.S.L. Brand\~ao and M.B. Plenio. A Generalization of Quantum Stein's Lemma. Commun. Math. Phys. \textbf{295}, 791 (2010). \bibitem{Piani09} M. Piani. Relative Entropy of Entanglement and Restricted Measurements. Phys. Rev. Lett. \textbf{103}, 160504 (2009). \bibitem{BS09} S. Beigi and P.W. Shor. Approximating the Set of Separable States Using the Positive Partial Transpose Test. J. Math. Phys. \textbf{51}, 042202 (2010). \bibitem{Bra10} F.G.S.L. Brand\~ao. On minimum reversible entanglement generating sets. In preparation, 2010. \bibitem{wolf2} R.~Renner, S.~Wolf, and J.~Wullschleger. The single-serving channel capacity. Proc.\ International Symposium on Information Theory (ISIT), IEEE, 2006. \bibitem{ligong} L.~Wang and R.~Renner. One-shot classical capacities of quantum channels. Poster at The Twelfth Workshop on Quantum Information Processing (QIP 2009), Santa Fe, U.S.A. \bibitem{buscemidatta} F.~Buscemi and N.~Datta. One-shot quantum capacities of quantum channels. IEEE Trans. Inf. Theo. \textbf{56}, 1447 (2010). \bibitem{mmnd} M.~Mosonyi and N.~Datta. Generalized relative entropies and the capacity of classical-quantum channels. J. Math. Phys. \textbf{50}, 072104 (2009).
\bibitem{wcr} L.~Wang, R.~Colbeck, and R.~Renner. Simple channel coding bounds. arXiv:0901.0834. \bibitem{Bra05} F.G.S.L.~Brand\~ao. Quantifying entanglement with witness operators. Phys. Rev. A \textbf{72}, 022310 (2005). \bibitem{ruskai_shor} M. Horodecki, P.W. Shor, and M.B. Ruskai. General entanglement breaking channels. Rev. Math. Phys. \textbf{15}, 629 (2003). \bibitem{VT} G.~Vidal and R.~Tarrach. Robustness of entanglement. Phys. Rev. A \textbf{59}, 141 (1999). \bibitem{HN} A.~Harrow and M.A.~Nielsen. How robust is a quantum gate in the presence of noise? Phys. Rev. A \textbf{68}, 012308 (2003). \bibitem{nd2} N.~Datta. Max-Relative Entropy of Entanglement, alias Log Robustness. Int. Jour. Quant. Inf. \textbf{7}, 475 (2009). \bibitem{bd1} G.~Bowen and N.~Datta. Beyond i.i.d. in quantum information theory. Proceedings of the 2006 IEEE International Symposium on Information Theory, 451, 2006. \bibitem{nielsen} M.A.~Nielsen and I.L.~Chuang. Quantum Computation and Quantum Information. Cambridge University Press, Cambridge, 2000. \bibitem{nd1} N.~Datta. Min- and Max-Relative Entropies and a New Entanglement Monotone. To appear in IEEE Trans.~Inf.~Theory, June 2009; arXiv:0803.2770. \bibitem{marco} C.~Mora, M.~Piani, and H.~Briegel. Epsilon-measures of entanglement. New J. Phys. \textbf{10}, 083027 (2008). \bibitem{horo1} P.~Horodecki, M.~Horodecki, and R.~Horodecki. General teleportation channel, singlet fraction, and quasidistillation. Phys. Rev. A \textbf{60}, 1888 (1999). \bibitem{hayashi} H.~Nagaoka and M.~Hayashi. An information-spectrum approach to classical and quantum hypothesis testing for simple hypotheses. IEEE Trans. Inf. Th. \textbf{49}, 1753 (2003). \bibitem{rob_convex} G.~Vidal and R.~Tarrach. Robustness of entanglement. Phys. Rev. A \textbf{59}, 141 (1999).
\bibitem{horo_sep} P.~Horodecki, M.~Horodecki, and R.~Horodecki. General teleportation channel, singlet fraction and quasi-distillation. quant-ph/9807091. \end{thebibliography} \end{document}
\begin{document} \title{Compactifications of Log Morphisms} \footnote[0] {2000 \textit{Mathematics Subject Classification}. Primary 14F40; Secondary 14F30.} \footnote[0]{\textit{Key words and phrases}. logarithmic structures, de Rham cohomology, crystalline cohomology, semistable reduction.} \footnote[0]{Part of this work was done during my visit to the University of California, Berkeley, supported by the Deutsche Forschungsgemeinschaft. I thank Arthur Ogus and Martin Olsson for helpful discussions and remarks. I am also very grateful to the referee for his careful reading and helpful suggestions which significantly improved the present paper.} \begin{abstract} We introduce the notion of a relative log scheme with boundary: a morphism of log schemes together with a (log schematically) dense open immersion of its source into a third log scheme. The sheaf of relative log differentials naturally extends to this compactification and there is a notion of smoothness for such data. We indicate how this weak sort of compactification may be used to develop useful de Rham and crystalline cohomology theories for semistable log schemes over the log point over a field which are not necessarily proper. \end{abstract} \begin{center} {\bf Introduction} \end{center} Let $X$ be a smooth variety over a field $k$. It is well known that for the study of the cohomology of $X$ --- or even for its very definition (e.g. crystalline \cite{maghreb}, rigid \cite{berco}), or the definition of nice coefficients for it (e.g. integrable connections with regular singularities) --- it is often indispensable to take into account also a boundary $D=\overline{X}-X$ of $X$ in a smooth compactification $X\subset\overline{X}$ of $X$. If $D\subset\overline{X}$ is a normal crossing divisor on $\overline{X}$, the cohomology can conveniently be studied in the framework of logarithmic algebraic geometry.
On the other hand, log geometry proved also useful to define the cohomology of proper normal crossing varieties $X$ over $k$ which occur as a fibre of a semistable family, or more generally are $d$-semistable (\cite{fkato}), see \cite{steen}, \cite{mokr}. In the present paper we attempt to develop a concept in log geometry particularly suitable to treat the mixed situation: given a non-proper $d$-semistable normal crossing variety $X/k$, we want to explain how an open immersion of $X$ into a proper $k$-scheme $\overline{X}$ can be used to investigate the cohomology of $X$, the stress lying on the fact that we avoid the assumption that $\overline{X}$ be $d$-semistable and require a weaker condition instead. Fix a base scheme $W$ for all occurring schemes. Let $T$ be a log scheme. The central definition of this note is that of a {\it $T$-log scheme with boundary}: A morphism of log schemes $X\to T$ together with an open log schematically dense embedding of log schemes $i:X\to\overline{X}$. For brevity, we often denote it simply by $(\overline{X},X)$. Morphisms of $T$-log schemes with boundary are defined in an obvious way. There are notions of exact and of boundary exact closed immersions of $T$-log schemes with boundary. The relative logarithmic de Rham complex $\Omega^{\bullet}_{X/T}$ on $X$ extends canonically to a complex $\Omega^{\bullet}_{(\overline{X},X)/T}$ on $\overline{X}$. These definitions are justified by a theory of smoothness for $T$-log schemes with boundary, well suited for cohomology purposes. Roughly, a $T$-log scheme with boundary $(\overline{X},X)$ is said to be weakly smooth if it satisfies a lifting property for morphisms from a nilpotent exact closed immersion of $T$-log schemes with boundary to $(\overline{X},X)$. Weak smoothness implies that $\Omega^{\bullet}_{(\overline{X},X)/T}$ is locally free.
$(\overline{X},X)$ is said to be smooth if it is weakly smooth and if for boundary exact closed immersions $(\overline{Y},Y)\to(\overline{V},V)$ of $T$-log schemes with boundary, and morphisms $(\overline{Y},Y)\to(\overline{X},X)$ of $T$-log schemes with boundary, the projections $\overline{X}\overline{\times}_T\overline{V}\to\overline{V}$ lift log \'{e}tale locally near the image of $\overline{Y}$ in $\overline{X}\overline{\times}_T\overline{V}$ to strict and smooth morphisms of log schemes (see the text for the definition of $\overline{X}\overline{\times}_T\overline{V}$). This definition is of course geared to its application to (crystalline) cohomology. However, our main theorem gives a convenient criterion for smoothness in terms of morphisms of monoids, very similar to Kato's criterion for usual log smoothness. We emphasize that even if $f:X\to T$ actually extends to a morphism of log schemes $\overline{f}:\overline{X}\to T$, our notion of smoothness is more general: $(\overline{X},X)$ might be smooth as a $T$-log scheme with boundary while $\overline{f}$ is not a log smooth morphism in the usual sense (or even not ideally smooth as defined by Ogus \cite{ogid}). See for example the discussion at the beginning of Section 3. In this regard, the theme of this paper is that (usual) log smoothness in an `interior' $X\subset\overline{X}$ of a morphism of log schemes $\overline{f}:\overline{X}\to T$ should already ensure that $\overline{f}$ has nice cohomology. (A similar principle underlies the definition of rigid cohomology \cite{berco}.) We hope that our definitions are useful for a definition of log rigid cohomology, in the case of nontrivial log structures on the base; in special cases they already turned out to be so, see \cite{hkstrat}. Section 1 contains the basic definitions and presents several examples. The main section is the second one, which is devoted to smoothness. The main theorem is the smoothness criterion \ref{tglatt}.
In Section 3 we discuss the example of semistable $k$-log schemes with boundary (here $T$ is the log point over a field). These are smooth in the sense of Section 2 and we try to demonstrate how they can be used as substitutes for compactifications by usual semistable proper $k$-log schemes. We indicate several applications to de Rham cohomology and crystalline cohomology.\\ \section{$T$-log schemes with boundary} \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}}\newcounter{basnot1}\newcounter{basnot2}\setcounter{basnot1}{\value{section}}\setcounter{basnot2}{\value{satz}} We fix a base scheme $W$; all schemes and morphisms of schemes are to be understood over $W$. All morphisms of schemes are quasi-separated. We also assume that all morphisms of schemes are quasi-compact: the only reason for this additional assumption is that it implies the existence of schematic images (=``closed images'') of morphisms: see \cite{EGA} I, 9.5. We say that an open immersion $i:X\to\overline{X}$ is schematically dense if $\overline{X}$ coincides with the schematic image of $i$. For the basic notions of log algebraic geometry we refer to K. Kato \cite{kalo}. Log structures are understood for the \'{e}tale topology. By abuse of notation, for a scheme $X$ and a morphism of monoids $\alpha:N\to{\cal O}_X(X)$ (where ${\cal O}_X(X)$ is understood multiplicatively), we will denote by $(X,\alpha)$ the log scheme with underlying scheme $X$ whose log structure is associated with the chart $\alpha$. For a log scheme $(X,{\cal N}_X)=(X,{\cal N}_X\to{\cal O}_X)$ we will often just write $X$ if it is clear from the context to which log structure on $X$ we refer, i.e. in those cases the log structure is dropped in our notation. Similarly for morphisms of log schemes. An {\it exactification} of a closed immersion of fine log schemes $Y\to X$ is a factorization $Y\stackrel{i}{\to}Z\stackrel{f}{\to}X$ with $i$ an exact closed immersion and $f$ log \'{e}tale.
Recall that a morphism of log schemes $f:(X,{\cal N}_X)\to(Y,{\cal N}_Y)$ is said to be {\it strict} if $f^*{\cal N}_Y\to{\cal N}_X$ is an isomorphism. For a monoid $N$ we denote by $N^{\rm gp}$ the associated group. For a finitely generated integral monoid $Q$ we let $$W[Q]=W\times_{\mbox{\rm Spec}(\mathbb{Z})}\mbox{\rm Spec}(\mathbb{Z}[Q])$$and give it the canonical log structure for which $Q$ is a chart.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz} Definition} (i) A morphism of log schemes $f:(X,{\cal N}_X)\to(Y,{\cal N}_{{Y}})$ factors through the {\it log schematic image} $(\overline{X},{\cal N}_{\overline{X}})$ of $f$ which is defined as follows: The underlying scheme $\overline{X}$ is the schematic image of the morphism of schemes $X\to Y$ underlying $f$. Let $X\stackrel{i}{\to}\overline{X}\stackrel{\overline{f}}{\to} Y$ be the corresponding morphisms of schemes. The log structure ${\cal N}_{\overline{X}}$ is by definition the image of the natural composite map of log structures $\overline{f}^*{\cal N}_Y\to i_*f^*{\cal N}_Y\to i_*{\cal N}_X$ on $\overline{X}$. Here $i_*$ denotes the functor {\it push forward log structure}.\\(ii) A morphism of log schemes $f:(X,{\cal N}_X)\to(Y,{\cal N}_{{Y}})$ is said to be {\it log schematically dominant} if $({Y},{\cal N}_{{Y}})$ coincides with the log schematic image of $f$; it is said to be {\it log schematically dense} if in addition the underlying morphism of schemes is an open immersion.\\ A morphism of log schemes $i:(X,{\cal N}_X)\to(\overline{X},{\cal N}_{\overline{X}})$ is log schematically dense if and only if the underlying morphism of schemes is a schematically dense open immersion and the canonical morphism of log structures ${\cal N}_{\overline{X}}\to i_*{\cal N}_X$ is injective.\\ \begin{lem}\label{pfeil} Let $(X,{\cal N}_X)$ be a log scheme and $i:X\to\overline{X}$ a schematically dense open immersion of its underlying scheme into another scheme $\overline{X}$.
Denote by $i_{*,{\rm sh}}{\cal N}_X$ the {\it sheaf theoretic push forward} of the sheaf of monoids ${\cal N}_X$. There exists a unique map $i_{*,{\rm sh}}{\cal N}_X\to (i_*{\cal N}_X)^{\rm gp}$ compatible with the natural maps $i_*{\cal N}_X\to i_{*,{\rm sh}}{\cal N}_X$ and $i_*{\cal N}_X\to(i_*{\cal N}_X)^{\rm gp}$. \end{lem} {\sc Proof:} First observe that ${\cal O}_{\overline{X}}\to i_*{\cal O}_X$ is injective, so henceforth we regard ${\cal O}_{\overline{X}}$ as a subsheaf of $i_*{\cal O}_X$. Also note $(i_*{\cal O}_X)^{\times}=i_*({\cal O}_X^{\times})$. It follows that we can view $i_*{\cal N}_X$ as the subsheaf of $i_{*,{\rm sh}}{\cal N}_X$ formed by those sections which map to ${\cal O}_{\overline{X}}$ under the map $\alpha:i_{*,{\rm sh}}{\cal N}_X\to i_*{\cal O}_X$ which we get by functoriality of sheaf theoretic push forward. To prove the lemma it is enough to show that $i_{*,{\rm sh}}{\cal N}_X$ arises from $i_*{\cal N}_X$ by inverting those sections $m$ for which the restrictions $\alpha(m)|_X$ are invertible. But this is the case: Take $m\in i_{*,{\rm sh}}{\cal N}_X$. Since $i_*{\cal O}_X$ arises from ${\cal O}_{\overline{X}}$ by inverting those sections for which the restrictions to $X$ are invertible, we find $f,g\in{\cal O}_{\overline{X}}$ with $g|_X$ invertible and with $\alpha(m)=g^{-1}f$. By the above description of $i_*{\cal N}_X$ we have $g=\alpha(n)$ for some $n\in i_*{\cal N}_X$. Now $nm\in i_*{\cal N}_X$, and our claim, and hence the lemma, follows.\\ \begin{lem}\label{feinbild} The log schematic image $(\overline{X},{\cal N}_{\overline{X}})$ of a morphism of fine log schemes $f:(X,{\cal N}_X)\to(Y,{\cal N}_{{Y}})$ is a fine log scheme. \end{lem} {\sc Proof:} The coherence of ${\cal N}_{\overline{X}}$ follows from that of ${\cal N}_{{Y}}$. We have ${\cal N}_{\overline{X}}\subset i_*{\cal N}_X\subset i_{*,{\rm sh}}{\cal N}_X$, for the second inclusion see the proof of Lemma \ref{pfeil}.
Therefore the integrality of ${\cal N}_X$ implies that of ${\cal N}_{\overline{X}}$.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz} Definition}\newcounter{defbs1}\newcounter{defbs2}\setcounter{defbs1}{\value{section}}\setcounter{defbs2}{\value{satz}} A {\it log scheme with boundary} is a triple $((X,{\cal N}_X), (\overline{X},{\cal N}_{\overline{X}}),i)$ where $i:(X,{\cal N}_X)\to(\overline{X},{\cal N}_{\overline{X}})$ is a log schematically dense morphism such that $i^*{\cal N}_{\overline{X}}={\cal N}_X$ and $(i_*{\cal N}_X)^{\rm gp}={\cal N}_{\overline{X}}^{\rm gp}$. Let $(T,{\cal N}_T)$ be a log scheme. A $(T,{\cal N}_T)${\it -log scheme with boundary} is a log scheme with boundary $((X,{\cal N}_X), (\overline{X},{\cal N}_{\overline{X}}),i)$ together with a morphism of log schemes $g:(X,{\cal N}_X)\to(T,{\cal N}_T)$.\\ We think of $\overline{X}-X$ as a boundary of $X$. We will often drop $i$, $g$ and the log structures from our notation and just speak of the $T$-log scheme with boundary $(\overline{X},X)$; we do so already in the following definition, which justifies the whole concept.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz} Definition:} The sheaf of relative differentials of a $T$-log scheme with boundary $(\overline{X},X)$ is defined as follows: Denote by $\tau$ the composite map$$i_{*,{\rm sh}}g^{-1}{\cal N}_T\to i_{*,{\rm sh}}{\cal N}_X\to (i_*{\cal N}_X)^{\rm gp}={\cal N}_{\overline{X}}^{\rm gp}$$where the second arrow is the one from Lemma \ref{pfeil}. Let $\Omega^1_{\overline{X}/W}$ be the sheaf of differentials of the morphism of underlying schemes $\overline{X}\to W$.
Then $\Omega^1_{(\overline{X},X)/T}$ is the quotient of $$\Omega^1_{\overline{X}/W}\oplus({\cal O}_{\overline{X}}\otimes_{\mathbb{Z}}{\cal N}^{\rm gp}_{\overline{X}})$$ divided by the ${\cal O}_{\overline{X}}$-submodule generated by local sections of the form \begin{align}(d\alpha(a),0)-(0,\alpha(a)\otimes a)&\qquad\mbox{ with }a\in{\cal N}_{\overline{X}}\notag\\(0,1\otimes a)&\qquad\mbox{ with }a\in \mbox{\rm Im}(\tau).\notag \end{align}We define the de Rham complex $\Omega^{\bullet}_{(\overline{X},X)/T}$ by taking exterior powers and the differential as usual.\\ \begin{lem}\label{clabo} Let $(\overline{X},X)$ be a $T$-log scheme with boundary.\\(1) The restriction $\Omega^1_{(\overline{X},X)/T}|_X$ naturally coincides with the usual sheaf of relative logarithmic differentials of $g:(X,{\cal N}_X)\to(T,{\cal N}_T)$.\\(2) Suppose $g$ extends to a morphism of log schemes $\overline{g}:(\overline{X},{\cal N}_{\overline{X}})\to(T,{\cal N}_T)$. Let us assume the following conditions: \begin{description}\item[(i)] The underlying scheme of $T$ is the spectrum of a field.\item[(ii)] For any \'{e}tale morphism $\overline{V}\to\overline{X}$ with $\overline{V}$ connected, the scheme $V=\overline{V}\times_{\overline{X}}X$ is also connected.\end{description}Then $\Omega^1_{(\overline{X},X)/T}$ naturally coincides with the usual sheaf $\Omega^1_{\overline{X}/T}$ of relative logarithmic differentials of $\overline{g}$. \end{lem} {\sc Proof:} (1) is immediate. (2) and its proof were suggested by the referee. Write $T=\mbox{\rm Spec}(k)$. By base change, we may assume that $k$ is separably closed. It suffices to prove that the morphism $\overline{g}^{-1}{\mathcal N}_T\to i_{*,{\rm sh}}g^{-1}{\mathcal N}_T$ is an isomorphism. Let $x$ be a geometric point of $\overline{X}$ and let $\overline{V}$ be the strict Henselization of $\overline{X}$ at $x$. Put $V=\overline{V}\times_{\overline{X}}X$.
Then, by the assumption (ii), we see that both $\overline{V}$ and $V$ are connected. Hence we have$$(\overline{g}^{-1}{\mathcal N}_T)_x=\Gamma(\overline{V},\overline{g}^{-1}{\mathcal N}_T)=\Gamma(T,{\mathcal N}_T)$$$$(i_{*,{\rm sh}}g^{-1}{\mathcal N}_T)_x=\Gamma(V,g^{-1}{\mathcal N}_T)=\Gamma(T,{\mathcal N}_T)$$and the lemma follows.\\ One class of examples where the condition (i) + (ii) of Lemma \ref{clabo} (2) holds true is the class of semistable $T$-log schemes discussed in Section 3; but for them, the conclusion $\Omega^1_{(\overline{X},X)/T}=\Omega^1_{\overline{X}/T}$ (if $g$ extends to $\overline{g}$) is immediate anyway. Undoubtedly, if $g$ extends to $\overline{g}$, the conclusion of Lemma \ref{clabo} (2) holds under much more general conditions than the stated condition (i) + (ii). \\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz} Examples:}\newcounter{bspcol1}\newcounter{bspcol2}\setcounter{bspcol1}{\value{section}}\setcounter{bspcol2}{\value{satz}} The following examples will be discussed later on.\\(a) Let $Q,P$ be finitely generated monoids and let $\rho:Q\to P^{\rm gp}$ be a morphism. Let $P'$ be the submonoid of $P^{\rm gp}$ generated by $P$ and $\mbox{\rm Im}(\rho)$. Then$$(W[P],W[P'])$$is a $T=W[Q]$-log scheme with boundary.\\(b1) Let $Q=\mathbb{N}$ with generator $t\in Q$. Let $t_1,\ldots,t_r$ be the standard generators of $\mathbb{N}^r$. Let $X=W[\mathbb{N}^r]$, the affine $r$-space over $W$ with the log structure defined by the divisor $V(t_1\cdot\ldots\cdot t_r)$. By means of $t\mapsto t_1\cdot\ldots\cdot t_r$ this is a $T=W[Q]$-log scheme.
We compactify ${X}$ by$$\overline{X}=W\times_{\mbox{\rm Spec}(\mathbb{Z})}(\times_{\mbox{\rm Spec}(\mathbb{Z})}(\mbox{\rm Proj}(\mathbb{Z}[t_0,t_i])_{1\le i\le r}))=({\bf P}_W^1)^r$$and take for ${\cal N}_{\overline{X}}$ the log structure defined by the normal crossing divisor $$(\overline{X}-X)\cup(\mbox{the closure of }V(t_1\cdot\ldots\cdot t_r)\subset X\mbox{ in }\overline{X}).$$(b2) Let $X$ and $T$ be as in (b1). Another compactification of $X$ is projective $r$-space, i.e. $\overline{X}'={\bf P}_W^r$; similarly we take ${\cal N}_{\overline{X}'}$ as the log structure defined by the normal crossing divisor $(\overline{X}'-X)\cup(\mbox{the closure of }V(t_1\cdot\ldots\cdot t_r)\subset X\mbox{ in }\overline{X}')$.\\(c) Let $k$ be a field, $W=\mbox{\rm Spec}(k)$ and let again $Q=\mathbb{N}$ with generator $t\in Q$. The following type of $S=W[Q]$-log scheme with boundary (which generalizes \arabic{bspcol1}.\arabic{bspcol2}(b1) if $W=\mbox{\rm Spec}(k)$ there) gives rise, by base change $t\mapsto 0$, to the $T$-log schemes with boundary discussed in Section \ref{semist} below. Let $\overline{X}$ be a smooth $W$-scheme, $X\subset \overline{X}$ a dense open subscheme, $D=\overline{X}-X$. Let $X\to S$ be a flat morphism, smooth away from the origin. Let $X_0$ be the fibre above the origin, let $\overline{X}_0$ be its schematic closure in $\overline{X}$ and suppose that $D\cup\overline{X}_0$ is a divisor with normal crossings on $\overline{X}$.\\ (d) Let $k$ be a field and let $T=(\mbox{\rm Spec}(k),\mathbb{N}\stackrel{0}{\to}k)$, the standard logarithmic point (\cite{fkato}). Let $Y$ be a semistable $k$-log scheme in the sense of \cite{mokr} 2.4.1 or \cite{fkato}. That is, $Y$ is a fine $T$-log scheme $(Y,{\cal N}_Y)$ satisfying the following conditions.
\'{E}tale locally on $Y$ there exist integers $i\ge 1$ and charts $\mathbb{N}^i\to{\cal N}_Y(Y)$ for ${\cal N}_Y$ such that\\(i) if on the log scheme $T$ we use the chart $\mathbb{N}\to k, 1\mapsto 0$, the diagonal morphism $\mathbb{N}\stackrel{\delta}{\to}\mathbb{N}^i$ is a chart for the structure morphism of log schemes $Y\to T$, and\\(ii) the induced morphism of schemes $$Y\to\mbox{\rm Spec}(k)\times_{\mbox{\rm Spec}(k[t])}\mbox{\rm Spec}(k[t_1,\ldots,t_i])$$ is smooth in the classical sense. Let $\overline{X}$ be the union of some irreducible components of $Y$ and let $X$ be the open subscheme of $\overline{X}$ which is the complement in $Y$ of the union of all irreducible components not contained in $\overline{X}$. Then $\overline{X}$ inherits a structure of $T$-log scheme, but it is not log smooth over $T$. However, we can view $(\overline{X},X)$ as a $T$-log scheme with boundary (forgetting that the morphism $X\to T$ actually extends to $\overline{X}$): as such it is what we will call {\it smooth} below.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}}\newcounter{bspbdl1}\newcounter{bspbdl2}\setcounter{bspbdl1}{\value{section}}\setcounter{bspbdl2}{\value{satz}} A concrete example for \arabic{bspcol1}.\arabic{bspcol2}(c) (see \cite{hkstrat} for more details). Again let $k$ be a field and let $S=W[\mathbb{N}]$ with generator $q$ of $\mathbb{N}$. Let $Y$ be a semistable $k$-log scheme with set of irreducible components $\{Y_j\}_{j\in R}$ all of which we assume to be smooth. As in \cite{kalo} p.222/223 we define for every $j\in R$ an invertible ${\cal O}_Y$-module ${\cal F}_j$ as follows: Let ${\cal N}_{Y,j}$ be the subsheaf of the log structure ${\cal N}_Y$ of $Y$ which is the preimage of $\mbox{\rm Ker}({\cal O}_Y\to{\cal O}_{Y_j})$. This ${\cal N}_{Y,j}$ is a principal homogeneous space over ${\cal O}_Y^{\times}$, and its associated invertible ${\cal O}_Y$-module is ${\cal F}_j$. Now fix a subset $I\subset R$ with $|I|=i$ and let $L=R-I$. 
Suppose $M=\cap_{j\in I}Y_j$ is nonempty. Let $$V_M=\mbox{\rm Spec}(\mbox{\rm Sym}_{{\cal O}_M}(\oplus({\cal F}_j)_{j\in I}))=\times_M(\mbox{\rm Spec}(\mbox{\rm Sym}_{{\cal O}_M}({\cal F}_j)))_{j\in I}.$$By its definition, the affine vector bundle $V_M$ over $M$ comes with a natural coordinate cross, a normal crossing divisor on $V_M$. The intersection of $M$ with all irreducible components of $Y$ not containing $M$ is a normal crossing divisor $D$ on $M$. Let $D_V'\subset V_M$ be its preimage under the structure map $V_M\to M$ and let $D_V\subset V_M$ be the union of $D_V'$ with the natural coordinate cross in $V_M$. Then $D_V$ is a normal crossing divisor on $V_M$. Let ${\cal N}_{V_M}$ be the corresponding log structure on $V_M$. There exists a distinguished element $a\in\Gamma(V_M,{\cal O}_{V_M})$ having $D_V$ as its set of zeros and such that the assignment $q\mapsto a$ defines a morphism of log schemes $V_M\to S$ with the following property: The induced $S$-log scheme $(M,{\cal N}_{V_M}|_M)$ on the zero section $M\to V_M$ coincides with the $S$-log scheme $(M,{\cal N}_Y|_M)$ induced by $Y$. This $a\in\Gamma(V_M,{\cal O}_{V_M})=\mbox{\rm Sym}_{{\cal O}_M}(\oplus({\cal F}_j)_{j\in I})(M)$ can be described as follows: Denote the image of $q\in{\cal N}_S(S)$ (here ${\cal N}_S$ is the log structure of $S$) under the structure map ${\cal N}_S(S)\to {\cal N}_Y(Y)\to {\cal N}_Y|_M(M)$ again by $q$. Locally on $M$ it can be (non-uniquely) factored as $q=t_0\prod_{j\in I}v_j$ where $v_j$ is a (local) generator of ${\cal F}_j|_M$ and $t_0$ maps to a (local) defining equation $a_0\in{\cal O}_M$ of the divisor $D$ in $M$. Then $a=a_0\cdot(\oplus_{j\in I}v_j)\in\mbox{\rm Sym}_{{\cal O}_M}(\oplus_{j\in I}{\cal F}_j)(M)$ is the desired element, and it is globally well defined.
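The global well-definedness of $a$ can be spelled out in one line (a sketch; the units $u_j$ and the shorthand $v_I$ for the image of the product of the $v_j$ in $\mbox{\rm Sym}_{{\cal O}_M}(\oplus_{j\in I}{\cal F}_j)(M)$ are our notation, not the paper's): any two local factorizations of $q$ differ by units, and the resulting changes in $a_0$ and in the $v_j$ cancel.

```latex
% Sketch: independence of a from the chosen local factorization of q.
\begin{align*}
q &= t_0\prod_{j\in I}v_j = t_0'\prod_{j\in I}v_j'
  &&\text{with } v_j'=u_jv_j,\ u_j\in{\cal O}_M^{\times},\\
a_0' &= a_0\prod_{j\in I}u_j^{-1}
  &&\text{since both factorizations present the same } q,\\
a' = a_0'\,v_I' &= \Big(a_0\prod_{j\in I}u_j^{-1}\Big)
     \Big(\prod_{j\in I}u_j\Big)v_I = a_0\,v_I = a.
\end{align*}
```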
We can view $V_M$ in a canonical way as a (schematically) dense open subscheme of $$P_M=\times_M(\mbox{\rm Proj}(\mbox{\rm Sym}_{{\cal O}_M}({\cal O}_M\oplus{\cal F}_j)))_{j\in I}$$by identifying a homogeneous section $s\in\mbox{\rm Sym}_{{\cal O}_M}({\cal F}_j)$ of degree $n$ with the degree zero section $s/1_{{\cal O}_M}^n$ of $\mbox{\rm Sym}_{{\cal O}_M}({\cal O}_M\oplus{\cal F}_j)[1_{{\cal O}_M}^{-1}]$. We give $P_M$ the log structure defined by the normal crossing divisor $(P_M-V_M)\cup \overline{D}_V$, where $\overline{D}_V$ is the closure of $D_V$ in $P_M$. Then $(P_M,V_M)$ is an $S$-log scheme with boundary.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}}\newcounter{basech1}\newcounter{basech2}\setcounter{basech1}{\value{section}}\setcounter{basech2}{\value{satz}} A {\it morphism} of $T$-log schemes with boundary $f:(\overline{X},X)\to(\overline{X}',X')$ is a morphism of log schemes $$f:(\overline{X},{\cal N}_{\overline{X}})\to(\overline{X}',{\cal N}_{\overline{X}'})$$ with $X\subset f^{-1}(X')$ and restricting to a morphism of $T$-log schemes $(X,{\cal N}_{{X}})\to({X}',{\cal N}_{{X}'})$. We have a fully faithful functor from the category of $T$-log schemes to the category of $T$-log schemes with boundary. Namely, take $Y$ to $(Y,Y)$. Beware that $(T,T)$ is {\it not} a final object in the category of $T$-log schemes with boundary. We have obvious base change functors for morphisms $W'\to W$ to our underlying base scheme $W$ and everything we develop here behaves well with respect to these base changes. We also have {\it base change functors for closed immersions of log schemes} $T'\to T$ as follows: if $(\overline{X},X)$ is a $T$-log scheme with boundary, let $X_{T'}=X\times_TT'$ be the fibre product in the category of log schemes. Define the log scheme $\overline{X}_{T'}$ as the log schematic image of the morphism of log schemes $X_{T'}\to\overline{X}$.
Then $(\overline{X}_{T'},X_{T'})$ is a $T'$-log scheme with boundary.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}} For the rest of this paper we always assume that the log scheme $T$ is fine. All fibre products of fine log schemes are taken in the category of fine log schemes, unless specified otherwise. A $T$-log scheme with boundary $(\overline{X},X)$ is said to be {\it fine} if the log scheme $(\overline{X},{\cal N}_{\overline{X}})$ is fine. \begin{lem} In the category of fine $T$-log schemes with boundary, products exist. \end{lem} {\sc Proof:} Given fine $T$-log schemes with boundary $(\overline{X}_1,X_1)$ and $(\overline{X}_2,X_2)$, set $$(\overline{X}_1,X_1)\times_T(\overline{X}_2,X_2)=(\overline{X}_1\overline{\times}_T\overline{X}_2,X_1\times_TX_2).$$Here $X_1\times_TX_2$ denotes the fibre product in the category of fine $T$-log schemes, and $\overline{X}_1\overline{\times}_T\overline{X}_2$ is defined as the log schematic image of $X_1\times_TX_2\to \overline{X}_1\times_{W}\overline{X}_2$. (So $\overline{X}_1\overline{\times}_T\overline{X}_2$ depends also on $X_1$ and $X_2$, contrary to what the notation suggests. Note that by the construction \cite{kalo} 2.7, the scheme underlying $X_1\times_TX_2$ is a subscheme of the scheme theoretic fibre product, hence is a subscheme of the scheme underlying $\overline{X}_1\times_{W}\overline{X}_2$.) That $\overline{X}_1\overline{\times}_T\overline{X}_2$ is fine follows from Lemma \ref{feinbild}.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}}\newcounter{gegenb1}\newcounter{gegenb2}\setcounter{gegenb1}{\value{section}}\setcounter{gegenb2}{\value{satz}} It is in order to have fibre products that we did not require $X=f^{-1}(X')$ in the definition of morphisms of $T$-log schemes with boundary. If the structural map from the underlying scheme of the log scheme $T$ to $W$ is an isomorphism, one has $(\overline{X},X)\cong(\overline{X},X)\times_T(T,T)$.
However, we stress that in contrast to taking the base change with the identity $T\to T$ (cf. \arabic{basech1}.\arabic{basech2}), the operation of taking the fibre product with the $T$-log scheme with boundary $(T,T)$ is non-trivial in general. For example, let $Q=\mathbb{N}$ with generator $q\in Q$, let $T=W[Q]$ and let $U_1, U_2$ be the standard generators of $\mathbb{N}^2$. For $i\in\mathbb{Z}$ let $\overline{X}_i=W[\mathbb{N}^2]$, and let $X_i=W[\mathbb{Z}\oplus\mathbb{N}]$, the open subscheme of $\overline{X}_i$ where $U_1$ is invertible. Define a structure of $T$-log scheme with boundary on $(\overline{X}_i,X_i)$ by sending $q\mapsto U_1^iU_2$. Then $$(\overline{X}_i,X_i)\cong(\overline{X}_i,X_i)\times_T(T,T)\quad\mbox{ if }i\ge0$$$$(\overline{X}_i,X_i)\not\cong(\overline{X}_i,X_i)\times_T(T,T)\quad\mbox{ if }i<0.$$Indeed, $\overline{X}_i\overline{\times}_TT$ is the closure in $W[Q\oplus\mathbb{N}^2]$ of the closed subscheme $V(q-U_1^iU_2)$ of $W[Q\oplus\mathbb{Z}\oplus\mathbb{N}]$. If $i\ge0$ this is the subscheme $V(q-U_1^iU_2)$ of $W[Q\oplus\mathbb{N}^2]$ which maps isomorphically to $W[\mathbb{N}^2]$. If $i<0$ this is the subscheme $V(qU_1^{-i}-U_2)$ of $W[Q\oplus\mathbb{N}^2]$ which does not map isomorphically to $W[\mathbb{N}^2]$.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}}\newcounter{defcha1}\newcounter{defcha2}\setcounter{defcha1}{\value{section}}\setcounter{defcha2}{\value{satz}} Let $(\overline{X},X)$ be a fine $T$-log scheme with boundary.
A {\it chart} $(Q\to P^{\rm gp}\supset P)$ for $(\overline{X},X)$ over $T$ is a chart $\lambda:P\to\Gamma(\overline{X},{\cal N}_{\overline{X}})$ for $(\overline{X},{\cal N}_{\overline{X}})$, a chart $\sigma:Q\to \Gamma({T},{\cal N}_{{T}})$ for $({T},{\cal N}_{{T}})$ and a morphism $\rho:Q\to P^{\rm gp}$ such that $\lambda^{\rm gp}\circ\rho=\tau\circ\sigma$, where $\tau:\Gamma({T},{\cal N}_{{T}})\to\Gamma({X},{\cal N}_{{X}})\to\Gamma(\overline{X},{\cal N}_{\overline{X}}^{\rm gp})$ is the composite of the structural map with that from Lemma \ref{pfeil}. \begin{lem} \'{E}tale locally on $\overline{X}$, charts for $(\overline{X},X)$ exist. \end{lem} {\sc Proof:} (corrected version due to the referee) We may by \cite{kalo} assume that $(\overline{X},{\cal N}_{\overline{X}})$ has a chart $g:G\to \Gamma(\overline{X},{\cal N}_{\overline{X}})$ and $({T},{\cal N}_{{T}})$ has a chart $\sigma:Q\to \Gamma({T},{\cal N}_{{T}})$. Let $x\in X$ and let ${\cal N}_{\overline{X},\overline{x}}$ be the stalk of ${\cal N}_{\overline{X}}$ at the separable closure $\overline{x}$ of $x$. Let $\varphi$ be the composite $$Q\stackrel{\sigma}{\to}\Gamma({T},{\cal N}_{{T}})\stackrel{\tau}{\to}\Gamma(\overline{X},{\cal N}_{\overline{X}}^{\rm gp})\to{\cal N}_{\overline{X},\overline{x}}^{\rm gp}.$$ Choose generators $q_1,\ldots,q_m$ of $Q$ and elements $x_i, y_i\in {\cal N}_{\overline{X},\overline{x}}$ $(1\le i\le m)$ such that $\varphi(q_i)=x_iy_i^{-1}$. Next, choose elements $a_i, b_i\in G$ and $u_i, v_i\in {\cal O}^{\times}_{\overline{X},\overline{x}}$ $(1\le i\le m)$ satisfying $g(a_i)=x_iu_i$ and $g(b_i)=y_iv_i$: these elements exist because $g$ is a chart. 
Now let$$f:G^{\rm gp}\oplus Q^{\rm gp}\oplus\mathbb{Z}^m\oplus\mathbb{Z}^m\longrightarrow{\cal N}_{\overline{X},\overline{x}}^{\rm gp}$$be the morphism defined by$$(h,q,(k_i)_{i=1}^m,(l_i)_{i=1}^m)\mapsto g^{\rm gp}(h)\varphi^{\rm gp}(q)\prod_{i=1}^mu_i^{k_i}\prod_{i=1}^mv_i^{l_i},$$and define $P$ by $P=f^{-1}({\cal N}_{\overline{X},\overline{x}})$. Then $f|_P:P\to {\cal N}_{\overline{X},\overline{x}}$ extends to a chart around $\overline{x}$ by \cite{kalo} 2.10. It remains to prove that the canonical inclusion $Q\to G^{\rm gp}\oplus Q^{\rm gp}\oplus\mathbb{Z}^m\oplus\mathbb{Z}^m, q\mapsto (1,q,0,0)$ actually takes values in $P^{\rm gp}$. Write a given $q\in Q$ as $q=\prod_{i=1}^mq_i^{n_i}$ with $n_i\in\mathbb{N}$. Then we have $$f(q)=\prod_{i=1}^m(\frac{x_i}{y_i})^{n_i}=\prod_{i=1}^m(\frac{x_iu_i}{y_iv_i}\cdot\frac{v_i}{u_i})^{n_i}=\frac{f((\prod_ia_i^{n_i},0,(0),(n_i)_i))}{f((\prod_ib_i^{n_i},0,(n_i)_i,(0)))}.$$ Put $\alpha=(\prod_ia_i^{n_i},0,(0),(n_i)_i)$ and $\beta=(\prod_ib_i^{n_i},0,(n_i)_i,(0))$. Then we have $\alpha, \beta\in P$ and $f(q\beta)=f(\alpha)$.
So $q\beta$ is in $P$ by the definition of $P$ and so $q$ maps to $P^{\rm gp}$.\\ \section{Smoothness} \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz} Definition:}\newcounter{defsmo1}\newcounter{defsmo2}\setcounter{defsmo1}{\value{section}}\setcounter{defsmo2}{\value{satz}} (1) A morphism of $T$-log schemes with boundary $(\overline{Y},Y)\to(\overline{X},X)$ is said to be a {\it boundary exact closed immersion} if $\overline{Y}\to\overline{X}$ is an exact closed immersion and if for every open neighbourhood $U$ of $Y$ in $X$, there exists an open neighbourhood $\overline{U}$ of $\overline{Y}$ in $\overline{X}$ with $U$ schematically dense in $\overline{U}$.\\ (2) A {\it first order thickening} of $T$-log schemes with boundary is a morphism $(\overline{L}',L')\to(\overline{L},L)$ such that $\overline{L'}\to\overline{L}$ is an exact closed immersion defined by a square zero ideal in ${\cal O}_{\overline{L}}$.\\(3) A fine $T$-log scheme with boundary $(\overline{X},X)$ is said to be {\it weakly smooth} if $\overline{X}$ is locally of finite presentation over $W$ and if the following condition holds: for every first order thickening $\eta:(\overline{L}',L')\to(\overline{L},L)$ and for every morphism $\mu:(\overline{L}',L')\to(\overline{X},X)$ there is \'{e}tale locally on $\overline{L}$ a morphism $\epsilon:(\overline{L},L)\to(\overline{X},X)$ such that $\mu=\epsilon\circ\eta$.\\(4) A $T$-log scheme with boundary $(\overline{X},X)$ is said to be {\it smooth} if it is weakly smooth and satisfies the following property: For all morphisms $(\overline{Y},Y)\to(\overline{X},X)$ and all boundary exact closed immersions $(\overline{Y},Y)\to(\overline{V},V)$ of fine $T$-log schemes with boundary, there exists \'{e}tale locally on $(\overline{X}\overline{\times}_T\overline{V})$ an exactification $$\overline{Y}\to Z\to(\overline{X}\overline{\times}_T\overline{V})$$of the diagonal embedding $\overline{Y}\to(\overline{X}\overline{\times}_T\overline{V})$ (a morphism of 
log schemes in the usual sense) such that the projection $Z\to(\overline{X}\overline{\times}_T\overline{V})\to \overline{V}$ is strict and log smooth.\\ Recall that by \cite{kalo} 3.8, `strict and log smooth' is equivalent to `strict and smooth on underlying schemes'. A $T$-log scheme $X$ is log smooth if and only if $(X,X)/T$ is weakly smooth. Assume this is the case. Then $(X,X)/T$ satisfies the smoothness condition with respect to test objects $({X},X){\leftarrow}(\overline{Y},Y)\rightarrow({V},V)$ (i.e. for which $\overline{V}=V$), because ${X}\overline{\times}_T{V}\stackrel{p}{\to}{V}$ is clearly log smooth. For general $(\overline{V},V)$ (and log smooth $T$-log schemes $X$) we have at least Theorem \ref{sglatt} and Theorem \ref{tglatt} below (note that the hypotheses of Proposition \ref{wecri} below for $(X,X)/T$ are {\it equivalent} to log smoothness of $X/T$, by \cite{kalo} 3.5 and as worked out in \cite{fkato}).\\ \begin{pro}\label{smocri} Let $(\overline{X},X)$ be a weakly smooth $T$-log scheme with boundary and let $T_1\to T$ be an exact closed immersion. Then $(\overline{X}_{T_1},X_{T_1})$ is a weakly smooth $T_1$-log scheme with boundary. \end{pro} {\sc Proof:} Let $$(\overline{X}_{T_1},X_{T_1})\stackrel{\mu}{\leftarrow}(\overline{L}',L')\stackrel{\eta}{\to}(\overline{L},L)$$ be a test object over $T_1$. By the weak smoothness of $(\overline{X},X)/T$ we get $\epsilon:(\overline{L},L)\to(\overline{X},X)$ \'{e}tale locally on $\overline{L}$ such that $\mu=\epsilon\circ\eta$. The restriction $\epsilon|_L:L\to\overline{X}$ factors through $X_{T_1}$; since $L$ is log schematically dense in $\overline{L}$ this implies that $\epsilon$ factors through $(\overline{X}_{T_1},X_{T_1})$ (formation of the schematic image is transitive, \cite{EGA} I, 9.5.5).\\ \begin{pro}\label{wecri} Suppose $W$ is locally noetherian. Let $Q$ be a finitely generated integral monoid, let $S=W[Q]$ and let $T\to S$ be an exact closed immersion. Let $(\overline{X},X)$ be a $T$-log scheme with boundary.
Suppose that \'{e}tale locally on $\overline{X}$ there are charts $Q\to P^{\rm gp}\supset P$ for $(\overline{X},X)$ over $T$ as in \arabic{defcha1}.\arabic{defcha2} such that the following conditions ${\rm (i), (ii)}$ are satisfied:\begin{description}\item[(i)] The kernel and the torsion part of the cokernel of $Q^{\rm gp}\to P^{\rm gp}$ are finite groups of orders invertible on $W$.\item[(ii)] Let $P'$ be the submonoid of $P^{\rm gp}$ generated by $P$ and the image of $Q\to P^{\rm gp}$ and let $W[P]_T$ be the schematic closure of $W[P']\times_ST=W[P']_T$ in $W[P]$. Then $\lambda:\overline{X}\to W[P]_T$ is smooth on underlying schemes.\end{description}Then $(\overline{X},X)/T$ is weakly smooth. \end{pro} {\sc Proof:} (Note that $\lambda$ in (ii) exists by the schematic density of $X\to\overline{X}$.) Let $$(\overline{X},X)\stackrel{\mu}{\leftarrow}(\overline{L}',L')\stackrel{\eta}{\to}(\overline{L},L)$$ be a test object over $T$. Using (i), one can follow the arguments in \cite{kalo} 3.4 to construct morphisms $(\overline{L},L)\to(W[P],W[P'])$ of $S$-log schemes with boundary. Necessarily $L$ maps to $W[P']_T$. Since $L\to \overline{L}$ is log schematically dense, $\overline{L}$ then maps to $W[P]_T$. By (ii) this morphism can be lifted further to a morphism $\overline{L}\to\overline{X}$ inducing $(\overline{L},L)\to(\overline{X},X)$ as desired.\\ \begin{satz}\label{sglatt} In the situation of Proposition \ref{wecri}, suppose in addition $S=T$ and $T\to S$ is the identity. Then for every $S$-log scheme with boundary $(\overline{V},V)$, the projection $\overline{X}\overline{\times}_S\overline{V}\stackrel{p}{\to}\overline{V}$ is log smooth. \end{satz} {\sc Proof:} We may assume that $(\overline{X},X)$ over $T$ has a chart as described in Proposition \ref{wecri} and that $(\overline{V},V)$ over $T$ has a chart $Q\to F^{\rm gp}$, $F\to{\cal N}_{\overline{V}}(\overline{V})$.
Our assumptions imply that$$\overline{X}\times_W\overline{V}\to W[P]\times_W\overline{V}$$is smooth on underlying schemes. It is also strict, hence log smooth. Perform the base change with the closed immersion of log schemes$$W[P]\overline{\times}_S\overline{V}\to W[P]\times_W\overline{V}$$to get the log smooth morphism$$\overline{X}\overline{\times}_S\overline{V}\stackrel{}{\to}W[P]\overline{\times}_S\overline{V}$$(by our construction of fibre products, $W[P]\overline{\times}_S\overline{V}$ is the log schematic closure of $W[P']{\times}_S{V}$). Its composite with the projection $$W[P]\overline{\times}_S\overline{V}\stackrel{\beta}{\to}\overline{V}$$is $p$, hence it is enough to show that $\beta$ is log smooth. Now $\beta$ arises by the base change $\overline{V}\to W[F]$ from the projection $$W[P]\overline{\times}_SW[F]\stackrel{\gamma}{\to}W[F]$$so that it is enough to show that $\gamma$ is log smooth. Let $F'$ be the submonoid of $F^{\rm gp}$ generated by $F$ and the image of $Q\to F^{\rm gp}$. Let $(P'\oplus_QF')^{\rm int}$ be the push out of $P'\leftarrow Q\to F'$ in the category of integral monoids, i.e. $(P'\oplus_QF')^{\rm int}=\mbox{\rm Im}(P'\oplus_QF'\to (P'\oplus_QF')^{\rm gp})$ where $P'\oplus_QF'$ is the push out in the category of monoids. (If $Q$ is generated by a single element then actually $(P'\oplus_QF')^{\rm int}=P'\oplus_QF'$ by \cite{kalo} 4.1.) Define the finitely generated integral monoid $$R=\mbox{\rm Im}(P\oplus F\to (P'\oplus_QF')^{\rm int}).$$Then $\gamma$ can be identified with the natural map $W[R]\to W[F]$. That this is log smooth follows from \cite{kalo} 3.4 once we know that $$a:F^{\rm gp}\to R^{\rm gp}=(P^{\rm gp}\oplus_{Q^{\rm gp}}F^{\rm gp})$$has kernel and torsion part of the cokernel finite of orders invertible on $W$.
But this follows from the corresponding facts for $b:Q^{\rm gp}\to P^{\rm gp}$ because we have isomorphisms $\mbox{\rm Ker}(b)\cong\mbox{\rm Ker}(a)$ and $\mbox{\rm Coker}(b)\cong\mbox{\rm Coker}(a)$.\\ \begin{satz}\label{tglatt} In the situation of Proposition \ref{wecri}, $(\overline{X},X)/T$ is smooth. \end{satz} {\sc Proof:} It remains to verify the second condition in the definition of smoothness. Let $(\overline{Y},Y)\to(\overline{X},X)$ and $(\overline{Y},Y)\to(\overline{V},V)$ be test objects. We may assume that $\overline{Y}$ is connected. Remove all irreducible components of $\overline{V}$ not meeting $\mbox{\rm Im}(\overline{Y})$ so that we may assume that each open neighbourhood of $\mbox{\rm Im}(\overline{Y})$ in $\overline{V}$ is schematically dense. After \'{e}tale localization we may assume that $(\overline{X},X)$ has a chart $P\to\Gamma(\overline{X},{\cal N}_{\overline{X}}^{\rm gp})$ as in Proposition \ref{wecri}. Viewing our test objects as objects over $S$ we can form the fibre product of fine $S$-log schemes with boundary $(W[P]\overline{\times}_S\overline{V},W[P']\times_SV)$. \'{E}tale locally on $W[P]\overline{\times}_S\overline{V}$ we find an exactification $$\overline{Y}\stackrel{i}{\to} \tilde{Z}\stackrel{\tilde{g}}{\to} W[P]\overline{\times}_S\overline{V}$$of the diagonal embedding $\overline{Y}\to W[P]\overline{\times}_S\overline{V}$. We may assume that $\tilde{Z}$ is connected. After further \'{e}tale localization on $\tilde{Z}$ we may also assume that $\tilde{q}=\tilde{p}\circ \tilde{g}:\tilde{Z}\to\overline{V}$ is strict, where $\tilde{p}:(W[P]\overline{\times}_S\overline{V})\to \overline{V}$ is the projection: this follows from the fact that for $y\in \overline{Y}$ the stalks of the log structures ${\cal N}_{\tilde{Z}}$ and $\tilde{q}^*{\cal N}_{\overline{V}}$ at the separable closure of $i(y)$ coincide, because $\overline{Y}\stackrel{i}{\to} \tilde{Z}$ and $\overline{Y}\to \overline{V}$ are exact closed immersions. 
By Theorem \ref{sglatt}, $\tilde{p}$ is log smooth. Thus $\tilde{q}$ is also log smooth, hence is smooth on underlying schemes. Let $$\tilde{Z}^0=\tilde{Z}\times_{(W[P]\overline{\times}_S\overline{V})}(W[P']\times_SV),$$an open subscheme of $\tilde{Z}$ containing $\mbox{\rm Im}(Y)$. Consider the restriction $\tilde{q}^0:\tilde{Z}^0\to\overline{V}$ of $\tilde{q}$. Since it is smooth on underlying schemes, it maps schematically dominantly to an open neighbourhood of $\mbox{\rm Im}(Y)$ in $V$ (here a morphism of schemes ${\cal X}\to{\cal Y}$ is said to be schematically dominant if its schematic image coincides with ${\cal Y}$). It follows that $\tilde{q}^0$ maps schematically dominantly also to $\overline{V}$ because of our assumption on $\overline{V}$ and the fact that $(\overline{Y},Y)\to(\overline{V},V)$ is boundary exact. Thus $\tilde{q}$ is a classically smooth morphism from the connected scheme $\tilde{Z}$ to the scheme $\overline{V}$ such that its restriction to the open subscheme $\tilde{Z}^0$ maps schematically dominantly to $\overline{V}$. This implies that $\tilde{Z}^0$ is schematically dense in $\tilde{Z}$, because (schematically) dominant classically smooth morphisms from a connected scheme induce bijections between the respective sets of irreducible components. It follows that $\tilde{g}$ factors as $$\tilde{Z}\stackrel{g}{\to}(W[P]_T\overline{\times}_T\overline{V})\stackrel{k}{\to}W[P]\overline{\times}_S\overline{V}:$$first as a morphism of underlying schemes because its restriction to the open schematically dense subscheme $\tilde{Z}^0$ factors through $$W[P']_T\times_T{V}=W[P']\times_S{V};$$but then also as a morphism of log schemes, because $k$ is strict. The morphism $g$ is log \'{e}tale because its composite with the closed embedding $k$, namely $\tilde{g}$, is log \'{e}tale.
Let $$Z=\tilde{Z}\times_{(W[P]_T\overline{\times}_T\overline{V})}(\overline{X}\overline{\times}_T\overline{V}).$$From the assumption (ii) in Proposition \ref{wecri} we deduce that $\overline{X}\overline{\times}_T\overline{V}\to W[P]_T\overline{\times}_T\overline{V}$ is log smooth and strict, hence $Z\to\tilde{Z}$ is log smooth and strict, hence smooth on underlying schemes. Together with the smoothness of $\tilde{q}$ it follows that $Z\to\overline{V}$ is smooth on underlying schemes. Furthermore $Z\to \overline{X}\overline{\times}_T\overline{V}$ is log \'{e}tale because $g$ is log \'{e}tale. Finally, $Y\to Z$ is an exact closed immersion because $Z\to\tilde{Z}$ is strict and $Y\to\tilde{Z}$ is an exact closed immersion. The theorem is proven.\\ The interest in smoothness as we defined it lies in the following proposition, which enables us to develop nice cohomology theories for $T$-log schemes with boundary.\\ \begin{pro}\label{keykoh} Let $(\overline{Y},Y)\to(\overline{X}_i,X_i)$ be boundary exact closed immersions into smooth $T$-log schemes with boundary ($i=1,2$). Then there exist \'{e}tale locally on $(\overline{X}_1\overline{\times}_T\overline{X}_2)$ factorizations$$(\overline{Y},Y)\stackrel{\iota}{\to}(\overline{Z},Z)\stackrel{}{\to}(\overline{X}_1\overline{\times}_T\overline{X}_2,X_1\times_TX_2)$$of the diagonal embedding such that $\iota$ is a boundary exact closed immersion, the map $\overline{Z}\to\overline{X}_1\overline{\times}_T\overline{X}_2$ is log \'{e}tale, and the projections $p_i:\overline{Z}\to\overline{X}_i$ are strict and log smooth, hence smooth on underlying schemes. \end{pro} {\sc Proof:} By the definition of smoothness we find \'{e}tale locally exactifications ($i=1,2$)$$\overline{Y}\to \overline{Z}_i\to\overline{X}_1\overline{\times}_T\overline{X}_2$$such that the projections $\overline{Z}_i\to \overline{X}_i$ are strict and log smooth.
Let $$\overline{Z}'=\overline{Z}_1\times_{(\overline{X}_1\overline{\times}_T\overline{X}_2)}\overline{Z}_2$$and let $\overline{Y}\to\overline{Z}\to\overline{Z}'$ be an exactification of $\overline{Y}\to\overline{Z}'$. After perhaps \'{e}tale localization on $\overline{Z}$ as in the proof of Theorem \ref{tglatt} we may assume that the projections $\overline{Z}\to\overline{Z}_i$ are strict. Hence the projections $p_i:\overline{Z}\to\overline{X}_i$ are strict and log smooth. This implies that $$Z=p_1^{-1}(X_1)\cap p_2^{-1}(X_2)$$is log schematically dense in $\overline{Z}$. Indeed, it suffices to prove the log schematic density of $Z$ in $p_1^{-1}(X_1)$ and of $p_1^{-1}(X_1)$ in $\overline{Z}$. Both assertions follow from the general fact that for a strict and log smooth (and in particular classically smooth) morphism of log schemes $h:L\to S$ and a log schematically dense open immersion $S'\to S$, also $h^{-1}(S')$ with its pull back log structure from $S'$ is log schematically dense in $L$: this is easy to prove since the question is local for the \'{e}tale topology and we therefore may assume that $h$ is a relative affine space. The classical smoothness of (say) $p_1$ and the boundary exactness of $(\overline{Y},Y)\to(\overline{X}_1,X_1)$ imply that $(\overline{Y},Y)\to(\overline{Z},Z)$ is boundary exact (for each connected component $\overline{Z}''$ of $\overline{Z}$ the map $\pi_0(\overline{Z}'')\to\pi_0(\overline{X}_1)$ between sets of irreducible components induced by $p_1$ is injective). We are done.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz} Examples:} We make the exactification $Z\to\overline{X}\overline{\times}_T\overline{V}$ in Theorem \ref{tglatt} explicit in some examples, underlining the delicacy of the base change argument in Theorem \ref{tglatt}.
In the following, for free variables $U_1,\ldots,V_1,\ldots$ we denote by $W[U_1,\ldots,V_1^{\pm},\ldots]$ the log scheme $$W[\mathbb{N}\oplus\ldots\oplus\mathbb{Z}\oplus\ldots]$$ with generators $U_1,\ldots$ for $\mathbb{N}\oplus\ldots$ and generators $V_1,\ldots$ for $\mathbb{Z}\oplus\ldots$. For $f\in\mathbb{Z}[U_1,\ldots,V_1^{\pm},\ldots]$ we denote by $W[U_1,\ldots,V_1^{\pm},\ldots]/f$ the exact closed subscheme defined by $f$.\\(a) Let $Q=\mathbb{N}$ with generator $q$. Let $X=W[U_1^{\pm},U_2]\subset\overline{X}=W[U_1,U_2]$. Define $X\to S$ by sending $q\mapsto U_1^{-1}U_2$, thus $(\overline{X},X)$ is a smooth $S$-log scheme with boundary. The self fibre product of $S$-log schemes with boundary is $$(\overline{X}_1,X_1)=(\overline{X},X)\overline{\times}_S(\overline{X},X)$$$$=(W[U_1,U_2,V_1,V_2]/(V_1U_2-V_2U_1),W[U_1^{\pm},U_2,V_1^{\pm},V_2]/(U_1^{-1}U_2-V_1^{-1}V_2)).$$Note that the projections $q_j:\overline{X}_1\to\overline{X}$ are not flat (the fibres above the respective origins are two dimensional), although they are log smooth. We construct the desired log \'{e}tale map $Z\stackrel{g}{\to}\overline{X}_1$ according to the procedure in \cite{kalo}, 4.10. Embed $\mathbb{Z}\to\mathbb{Z}^4$ by sending $n\mapsto(n,-n,-n,n)$ and let $H$ be the image of the canonical map $\mathbb{N}^4\to(\mathbb{Z}^4/\mathbb{Z})$. Then $\overline{X}_1=W[H]$. Let $h:(\mathbb{Z}^4/\mathbb{Z})\to\mathbb{Z}^2$ be the map which sends the class of $(n_1,n_2,n_3,n_4)$ to $(n_1+n_3,n_2+n_4)$, and let $K=h^{-1}(\mathbb{N}^2)$.
Then $Z=W[K]$ works. More explicitly: We have an isomorphism $K\cong\mathbb{N}^2\oplus\mathbb{Z}$ by sending the class of $(n_1,n_2,n_3,n_4)$ to $(n_1+n_3,n_2+n_4,n_1+n_2)$. Then $$Z=W[S_1,S_2,S_3^{\pm}]$$ and $g$ is given by $U_1\mapsto S_1S_3,\quad U_2\mapsto S_2S_3,\quad V_1\mapsto S_1,\quad V_2\mapsto S_2$. Now consider the base change with $T=W[q]/q\to S$ defined by sending $q\mapsto 0$. For $j=1,2$ let $\overline{X}_{1,j}=\overline{X}_1\times_{\overline{X}}\overline{X}_T$ where in the fibre product we use the $j$-th projection as the structure map for the first factor. Let $\overline{X}_{T,1}=\overline{X}_T\overline{\times}_T\overline{X}_T$. Then we find $\overline{X}_{1,1}=W[U_1,V_1,V_2]/(V_2U_1)$, $\overline{X}_{1,2}=W[U_1,U_2,V_1]/(V_1U_2)$, thus containing $\overline{X}_{T,1}=W[U_1,V_1]$ as a {\it proper} subscheme.\\(b) Let $S, X, \overline{X}$ be as in (a), but this time define $X\to S$ by sending $q\mapsto U_1U_2$. Again $(\overline{X},X)$ is smooth. We use the embedding $\mathbb{Z}\to\mathbb{Z}^4$ which sends $n\mapsto(n,n,-n,-n)$, to define $H=\mbox{\rm Im}(\mathbb{N}^4\to(\mathbb{Z}^4/\mathbb{Z}))$. Let $h:(\mathbb{Z}^4/\mathbb{Z})\to\mathbb{Z}^2$ be the map which sends the class of $(n_1,n_2,n_3,n_4)$ to $(n_1+n_3,n_2+n_4)$, and let $K=h^{-1}(\mathbb{N}^2)$. We have an isomorphism $K\cong\mathbb{N}^2\oplus\mathbb{Z}$ by sending the class of $(n_1,n_2,n_3,n_4)$ to $(n_1+n_3,n_2+n_4,n_1-n_2)$. We thus find $$\overline{X}_1=W[H]=W[U_1,U_2,V_1,V_2]/(U_1U_2-V_1V_2),$$ $Z=W[S_1,S_2,S_3^{\pm}]$ and $g:Z\to\overline{X}_1$ is given by $U_1\mapsto S_1S_3,\quad U_2\mapsto S_2S_3^{-1},\quad V_1\mapsto S_1,\quad V_2\mapsto S_2$. Note that in this case the projections $q_j:\overline{X}_1\to\overline{X}$ are flat. Now consider the base change with $T=W[q]/q\to S$ defined by sending $q\mapsto 0$.
Then, in contrast to (a), we find $\overline{X}_{1,1}=\overline{X}_{1,2}=\overline{X}_{T,1}$ (with $\overline{X}_{1,1}, \overline{X}_{1,2}, \overline{X}_{T,1}$ as in (a)).\\ (c) Using the criterion \ref{tglatt} one checks that the log schemes with boundary mentioned in \arabic{bspcol1}.\arabic{bspcol2}(b)--(d) and \arabic{bspbdl1}.\arabic{bspbdl2} are smooth. In fact, the example (a) just discussed is a special case of \arabic{bspcol1}.\arabic{bspcol2} (b) or \arabic{bspbdl1}.\arabic{bspbdl2}. Example (b) (or rather its base change with $T=W[q]/q\to S$ as above) is a special case of \arabic{bspcol1}.\arabic{bspcol2} (d).\\ \begin{lem} $\Omega^1_{(\overline{X},X)/T}$ is locally free of finite rank if $(\overline{X},X)$ is weakly smooth over $T$. \end{lem} {\sc Proof:} The same as in the classical case.\\ \section{Semistable log schemes with boundary} \label{semist} In this Section $k$ is a field, $Q=\mathbb{N}$ with generator $q$ and $T=(\mbox{\rm Spec}(k),Q\stackrel{0}{\to}k)$. \subsection{Definitions} \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}} A {\it standard semistable $T$-log scheme with boundary} is a $T$-log scheme with boundary isomorphic to:$$(\overline{X},X)=(\mbox{\rm Spec}(\frac{k[t_1,\ldots,t_{i_2}]}{(t_1,\ldots,t_{i_1})}),\mbox{\rm Spec}(\frac{k[t_1,\ldots,t_{i_1},t_{i_1+1}^{\pm},\ldots,t_{i_2}^{\pm}]}{(t_1,\ldots,t_{i_1})}))$$for some integers $1\le i_1\le i_2$ such that $$P=\mathbb{N}^{i_2}\to{\cal N}_{\overline{X}}(\overline{X}),\quad 1_i\mapsto t_i\quad\mbox{ for }1\le i\le i_2$$ $$Q=\mathbb{N}\to P^{\rm gp}=\mathbb{Z}^{i_2},\quad q\mapsto (1_1,\ldots,1_{i_1},r_{i_1+1},\ldots,r_{i_2})$$with some $r_{j}\in\mathbb{Z}$ for $i_1+1\le j\le i_2$ is a chart in the sense of \arabic{defcha1}.\arabic{defcha2}.
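For orientation, here is the smallest instance with a negative local number (our identification, not spelled out in the text): take $i_1=1$, $i_2=2$ and $r_2=-1$. Then

```latex
(\overline{X},X)
  =\Bigl(\mbox{\rm Spec}\bigl(k[t_1,t_2]/(t_1)\bigr),\,
         \mbox{\rm Spec}\bigl(k[t_1,t_2^{\pm}]/(t_1)\bigr)\Bigr)
  \cong\bigl(\mbox{\rm Spec}(k[t_2]),\,\mbox{\rm Spec}(k[t_2^{\pm}])\bigr),
\qquad q\mapsto(1,-1)\in\mathbb{Z}^2=P^{\rm gp},
```

i.e. $q\mapsto t_1t_2^{-1}$ in ${\cal N}_{\overline{X}}^{\rm gp}(\overline{X})$. Under the relabeling $t_1=U_2$, $t_2=U_1$ this appears to recover example (a) above after base change along $T=W[q]/q\to S$ (with $W=\mbox{\rm Spec}(k)$), consistent with (c) there.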
A {\it semistable $T$-log scheme with boundary} is a $T$-log scheme with boundary $(\overline{Y},Y)$ such that \'{e}tale locally on $\overline{Y}$ there exist morphisms $(\overline{Y},Y)\to(\overline{X},X)$ to standard semistable $T$-log schemes with boundary such that $\overline{Y}\to\overline{X}$ is strict and log smooth, and $Y={\overline Y}\times_{\overline{X}}X$. Note that $Y$ is then a semistable $k$-log scheme in the usual sense defined in \arabic{bspcol1}.\arabic{bspcol2}(d). A {\it normal crossing variety over $k$} is a $k$-scheme which \'{e}tale locally admits smooth morphisms to the underlying schemes of semistable $k$-log schemes. Following \cite{fkato} we say that a log structure ${\cal N}_{\overline{Y}}$ on a normal crossing variety $\overline{Y}$ over $k$ is {\it of embedding type} if \'{e}tale locally on $\overline{Y}$ the log scheme $(\overline{Y},{\cal N}_{\overline{Y}})$ is isomorphic to a semistable $k$-log scheme. (The point is that we do not require a {\it global} structure map of log schemes $(\overline{Y},{\cal N}_{\overline{Y}})\to T$.)\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}} Let us discuss for a moment the standard semistable $T$-log schemes with boundary $(\overline{X},X)$. If in the above definition $r_j\ge 0$ for all $j$, then $f:X\to T$ actually extends to a (non log smooth in general) usual morphism of log schemes $\overline{f}:\overline{X}\to T$. If moreover $r_j=0$ for all $j$, then $\overline{f}$ is nothing but a semistable $k$-log scheme with an additional horizontal divisor not interfering with the structure map of log structures; in particular it is log smooth. If we merely have $r_j\in\{0,1\}$ for all $j$, the morphism $\overline{f}$ is ideally smooth in the sense of Ogus \cite{ogid}. Examples with $r_j=1$ for all $j$ are those in \arabic{bspcol1}.\arabic{bspcol2}(d).
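Chart data such as $q\mapsto (1_1,\ldots,1_{i_1},r_{i_1+1},\ldots,r_{i_2})$ are pure integer-linear algebra and can be machine-checked. As an illustration (ours; all helper names are invented), the following sketch verifies, on a finite sample of lattice classes, the identifications $K\cong\mathbb{N}^2\oplus\mathbb{Z}$ underlying examples (a) and (b) of the previous Section, together with the stated formulas for $g$ on generators:

```python
import itertools

# Example (a): Z embedded in Z^4 by n |-> (n, -n, -n, n).
def to_triple(n):               # class of (n1,n2,n3,n4) |-> (n1+n3, n2+n4, n1+n2)
    n1, n2, n3, n4 = n
    return (n1 + n3, n2 + n4, n1 + n2)

def from_triple(t):             # a section of to_triple: (a,b,c) |-> (c, 0, a-c, b)
    a, b, c = t
    return (c, 0, a - c, b)

def same_class(m, n):           # equality of classes in Z^4 / Z*(1,-1,-1,1)
    d = [m[i] - n[i] for i in range(4)]
    return d[0] == -d[1] == -d[2] == d[3]

# to_triple kills the embedded copy of Z, so it descends to Z^4/Z
assert to_triple((7, -7, -7, 7)) == (0, 0, 0)

# round trip over all classes in a box whose h-image lies in N^2, i.e. over K
for n in itertools.product(range(-3, 4), repeat=4):
    if n[0] + n[2] >= 0 and n[1] + n[3] >= 0:
        assert same_class(from_triple(to_triple(n)), n)

# reading (k1,k2,k3) as the monomial S1^k1 * S2^k2 * S3^k3, the generator
# classes e1..e4 (i.e. U1, U2, V1, V2) go to the monomials stated for g
assert to_triple((1, 0, 0, 0)) == (1, 0, 1)   # U1 |-> S1*S3
assert to_triple((0, 1, 0, 0)) == (0, 1, 1)   # U2 |-> S2*S3
assert to_triple((0, 0, 1, 0)) == (1, 0, 0)   # V1 |-> S1
assert to_triple((0, 0, 0, 1)) == (0, 1, 0)   # V2 |-> S2

# Example (b): Z embedded by n |-> (n, n, -n, -n); the iso uses n1 - n2.
def to_triple_b(n):
    n1, n2, n3, n4 = n
    return (n1 + n3, n2 + n4, n1 - n2)

assert to_triple_b((4, 4, -4, -4)) == (0, 0, 0)
assert to_triple_b((1, 0, 0, 0)) == (1, 0, 1)    # U1 |-> S1*S3
assert to_triple_b((0, 1, 0, 0)) == (0, 1, -1)   # U2 |-> S2*S3^(-1)
print("chart identifications of examples (a), (b) verified on sample")
```

The box $\{-3,\ldots,3\}^4$ is of course only a sample, but since all maps involved are affine-linear in the $n_i$, the round-trip identity checked here holds for all of $K$.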
The concept of semistable $T$-log schemes with boundary helps us to also understand the cases with local numbers $r_j\notin\{0,1\}$: any semistable $T$-log scheme with boundary $(\overline{Y},Y)$ is smooth, by Theorem \ref{tglatt}, and as we will see below this implies analogs of classical results for their cohomology. Examples of semistable $T$-log schemes with boundary with local numbers $r_j$ possibly not in $\{0,1\}$ are those in \arabic{bspbdl1}.\arabic{bspbdl2} or those from \ref{Grothe} below. Or think of a flat family of varieties over $\mbox{\rm Spec}(k[q])$ with smooth general fibre and whose reduced subscheme of the special fibre is a normal crossing variety, but where some components of the special fibre may have multiplicities $>1$: then unions of irreducible components of this special fibre with multiplicity $=1$ are semistable $T$-log schemes with boundary. One further large class of examples with local numbers $r_j$ possibly not in $\{0,1\}$ is obtained by the following lemma, which follows from computations with local coordinates: \begin{lem} Let $Y\to\overline{Y}$ be an embedding of $k$-schemes which \'{e}tale locally looks like the underlying embedding of $k$-schemes of a semistable $T$-log scheme with boundary (i.e. for each geometric point $y$ of $\overline{Y}$ there is a semistable $T$-log scheme with boundary which on underlying schemes looks like $Y\to\overline{Y}$ around $y$). Suppose ${\cal N}_{\overline{Y}}$ is a log structure of embedding type on $\overline{Y}$ such that $(Y,{\cal N}_{\overline{Y}}|_Y)$ is a semistable $k$-log scheme (for an appropriate structure morphism to $T$). Then $((\overline{Y},{\cal N}_{\overline{Y}}),Y)$ is a semistable $T$-log scheme with boundary. \end{lem} \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}} Fumiharu Kato in \cite{fkato} has worked out precise criteria for these two properties of normal crossing varieties over $k$ --- to admit a log structure of embedding type, resp.
to admit a log structure of semistable type. Now suppose we are given a semistable $T$-log scheme $Y$. An ``optimal'' compactification would be a dense open embedding into a proper semistable $k$-log scheme in the classical sense, or at least into an ideally smooth proper $k$-log scheme; however, advocating the main idea of this paper, a dense open embedding $Y\to\overline{Y}$ into a log scheme $\overline{Y}$ such that $(\overline{Y},Y)$ is a proper semistable $T$-log scheme with boundary is also very useful, and this might be easier to find, or (more importantly) be naturally at hand in particular situations.\\ \subsection{De Rham cohomology} Here we assume $\mbox{\rm char}(k)=0$. Let $Z$ be a smooth $k$-scheme and let $V$ be a normal crossing divisor on $Z$. Suppose there exists a flat morphism $f:(Z-V)\to\mbox{\rm Spec}(k[q])$, smooth above $q\ne 0$ and with semistable fibre $X$ above the origin $q=0$. Let $\overline{X}$ be the closure of $X$ in $Z$ and suppose also that $\overline{X}\cup V$ is a normal crossing divisor on $Z$. Endow $Z$ with the log structure defined by $\overline{X}\cup V$ and endow all subschemes of $Z$ with the induced log structure (we will suppress mentioning of this log structure in our notation). Then $(\overline{X},X)$ is a semistable $T$-log scheme with boundary. Let $D=\overline{X}\cap V=\overline{X}-X$ and let $\overline{X}=\cup_{1\le i\le a}\overline{X}_i$ be the decomposition into irreducible components in a fixed ordering and suppose that each $\overline{X}_i$ is classically smooth. Let $\Omega_{X/T}^{\bullet}$ be the relative logarithmic de Rham complex.\\ \begin{pro}\label{Grothe} The restriction map$$R\Gamma(\overline{X},\Omega_{(\overline{X},X)/T}^{\bullet})\to R\Gamma(X,\Omega_{X/T}^{\bullet})$$is an isomorphism. \end{pro} {\sc Proof:} We use a technique of Steenbrink \cite{steen} to reduce to a standard fact.
Let $\Omega_Z^{\bullet}$ be the de Rham complex over $k$ on $Z$ with logarithmic poles along $\overline{X}\cup V$. Note that $\mbox{\rm dlog}(f^*(q))\in\Gamma(Z-V,\Omega_Z^{1})$ extends uniquely to a global section $\theta\in\Gamma(Z,\Omega_Z^{1})$. Let $\Omega_{Z,V}^{\bullet}$ be the de Rham complex on $Z$ with logarithmic poles only along $V$; thus $\Omega_{Z,V}^{\bullet}$ is a {\it subcomplex} of $\Omega_Z^{\bullet}$. Define the vertical weight filtration on $\Omega_Z^{\bullet}$ by$$P_j\Omega^i_Z=\mbox{\rm Im}(\Omega^j_Z\otimes\Omega^{i-j}_{Z,V}\to \Omega^i_Z).$$For $j\ge 1$ let $\overline{X}^j$ be the disjoint sum of all $\cap_{i\in I}\overline{X}_i$ where $I$ runs through the subsets of $\{1,\ldots,a\}$ with $j$ elements. Let $\tau_j:\overline{X}^j\to \overline{X}$ be the canonical map and let $\Omega_{\overline{X}^j}^{\bullet}$ be the de Rham complex on $\overline{X}^j$ with logarithmic poles along $\overline{X}^j\cap \tau_{j}^{-1}(D)$. Then we have isomorphisms of complexes$$(*)\quad\quad {\rm res}:\mbox{\rm Gr}_j\Omega_Z^{\bullet}\cong \tau_{j,*}\Omega_{\overline{X}^j}^{\bullet}[-j],$$characterized as follows: Let $x_1,\ldots,x_d$ be local coordinates on $Z$ such that $x_i$ for $1\le i\le a\le d$ is a local coordinate for $\overline{X}_i$. If $$\omega=\alpha\wedge\mbox{\rm dlog}(x_{i_1})\wedge\ldots \wedge\mbox{\rm dlog}(x_{i_j})\in P_j\Omega_Z^{\bullet}$$ with $i_1<\ldots<i_j\le a$, then ${\rm res}$ sends the class of $\omega$ to the class of $\alpha$. Now let $$A^{pq}=\Omega_Z^{p+q+1}/P_q\Omega_Z^{p+q+1},\quad\quad\quad P_jA^{pq}=P_{2q+j+1}\Omega_Z^{p+q+1}/P_q\Omega_Z^{p+q+1}.$$Using the differentials $d':A^{pq}\to A^{p+1,q},\quad\omega\mapsto d\omega$ and $d'':A^{pq}\to A^{p,q+1},\quad\omega\mapsto \theta\wedge\omega$ we get a filtered double complex $A^{\bullet\bullet}$.
We claim that $$0\to\frac{\Omega^p_Z\otimes{\cal O}_{\overline{X}}}{(\Omega^{p-1}_Z\otimes{\cal O}_{\overline{X}})\wedge\theta}\stackrel{\wedge\theta}{\to}A^{p0}\stackrel{\wedge\theta}{\to}A^{p1}\stackrel{\wedge\theta}{\to}\ldots$$is exact. Indeed, it is enough to show that for all $p$, all $j\ge 2$ the sequences$$\mbox{\rm Gr}_{j-1}\Omega^{p-1}_Z\stackrel{\wedge\theta}{\to}\mbox{\rm Gr}_{j}\Omega^{p}_Z\stackrel{\wedge\theta}{\to}\mbox{\rm Gr}_{j+1}\Omega^{p+1}_Z\stackrel{\wedge\theta}{\to}\ldots$$$$0\to P_0\Omega^{p-1}_Z/{\cal J}_{\overline{X}}.\Omega^{p-1}_Z\stackrel{\wedge\theta}{\to}\mbox{\rm Gr}_{1}\Omega^{p}_Z\stackrel{\wedge\theta}{\to}\mbox{\rm Gr}_{2}\Omega^{p+1}_Z\stackrel{\wedge\theta}{\to}\ldots$$are exact, where ${\cal J}_{\overline{X}}=\mbox{\rm Ker}({\cal O}_{Z}\to{\cal O}_{\overline{X}})$. This follows from $(*)$ and the exactness of $$0\to P_0\Omega^{p}_Z/{\cal J}_{\overline{X}}.\Omega^{p}_Z\to\tau_{1,*}\Omega^p_{\overline{X}^1}\to\tau_{2,*}\Omega^p_{\overline{X}^2}\to\ldots.$$The claim follows. It implies that the maps$$\Omega_{(\overline{X},X)/T}^{p}=\frac{\Omega^p_Z\otimes{\cal O}_{\overline{X}}}{(\Omega^{p-1}_Z\otimes{\cal O}_{\overline{X}})\wedge\theta}\to A^{p0}\subset A^p,\quad\omega\mapsto (-1)^p\theta\wedge\omega$$define a quasi-isomorphism $\Omega_{(\overline{X},X)/T}^{\bullet}\to A^{\bullet}$, hence a spectral sequence$$E_1^{-r,q+r}=H^q(\overline{X},\mbox{\rm Gr}_r A^{\bullet})\Longrightarrow H^q(\overline{X},\Omega_{(\overline{X},X)/T}^{\bullet}).$$Now we can of course repeat all this on $Z-V$ instead of $Z$, and restriction from $Z$ to $Z-V$ gives a canonical morphism between the respective spectral sequences.
That this is an isomorphism can be checked on the initial terms, and using the isomorphism $(*)$ this boils down to proving that the restriction maps$$H^p(\overline{X}^j,\Omega_{\overline{X}^j}^{\bullet})\to H^p(X^j,\Omega_{\overline{X}^j}^{\bullet})$$are isomorphisms where we set $X^j=\overline{X}^j\cap\tau_j^{-1}(X)$. But this is well known. The proof is finished.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}} Now assume ${\overline X}$ is proper. In analogy with classical Hodge theory, the Hodge filtration on $$H^p(\overline{X},\Omega_{(\overline{X},X)/T}^{\bullet})=H^p(X,\Omega_{X/T}^{\bullet})$$ obtained from the stupid (naive) filtration of $\Omega_{(\overline{X},X)/T}^{\bullet}$ should be meaningful. Another application of Proposition \ref{Grothe} might be a Poincar\'{e} duality theorem. Suppose the underlying scheme of $\overline{X}$ is of pure dimension $d$. Let ${\cal I}_{D}=\mbox{\rm Ker}({\cal O}_{\overline{X}}\to{\cal O}_{D})$ and define the de Rham cohomology with compact support of $(\overline{X},X)/T$ as $$R\Gamma(\overline{X},{\cal I}_D\otimes\Omega_{(\overline{X},X)/T}^{\bullet}).$$It is a natural question to ask if this is dual to $R\Gamma(\overline{X},\Omega_{(\overline{X},X)/T}^{\bullet})=R\Gamma(X,\Omega_{X/T}^{\bullet})$.
The key would be as usual the construction of a trace map $H^d(\overline{X},{\cal I}_D\otimes\Omega_{(\overline{X},X)/T}^d)\to k$.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}} Another application of semistable $T$-log schemes with boundary is the possibility of defining the notion of {\it regular singularities} of a given integrable log connection on a semistable $T$-log scheme $X$, provided we have an embedding $X\to\overline{X}$ such that $(\overline{X},X)$ is a proper semistable $T$-log scheme with boundary.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}} Here is an application of the construction in \arabic{bspbdl1}.\arabic{bspbdl2} to the de Rham cohomology of certain semistable $k$-log schemes (a simplified variant of the application given in \cite{hkstrat}; in fact, the present paper formalizes and generalizes a key construction from \cite{hkstrat}). In \arabic{bspbdl1}.\arabic{bspbdl2} assume that $\mbox{\rm char}(k)=0$ and that $M$ is the intersection of {\it all} irreducible components of $Y$. Recall that we constructed a morphism of log schemes $V_M\to S=(\mbox{\rm Spec}(k[q]),1\mapsto q)$. For $k$-valued points $\alpha\to S$ (with pulled-back log structure) let $V_{M}^{\alpha}=V_M\times_S{\alpha}$. Using the $S$-log scheme with boundary $(P_M,V_M)$ one can show that the derived category objects $R\Gamma(V_{M}^{\alpha},\Omega^{\bullet}_{V_M^{\alpha}/\alpha})$ (with $\Omega^{\bullet}_{V_M^{\alpha}/\alpha}$ the relative logarithmic de Rham complex; if $\alpha\ne 0$ this is the classical one) are canonically isomorphic for varying $\alpha$.
Namely, the canonical restriction maps $$R\Gamma(P_{M},\Omega^{\bullet}_{(P_M,V_M)/S})\to R\Gamma(V_{M}^{\alpha},\Omega^{\bullet}_{V_M^{\alpha}/\alpha})$$are isomorphisms for all $\alpha$.\\ \subsection{Crystalline cohomology} Let $\tilde{S}$ be a scheme such that ${\cal O}_{\tilde{S}}$ is killed by a non-zero integer, $I\subset{\cal O}_{\tilde{S}}$ a quasi-coherent ideal with DP-structure $\gamma$ on it, and let $\tilde{\cal L}$ be a fine log structure on $\tilde{S}$. Let $(S,{\cal L})$ be an exact closed log subscheme of $(\tilde{S},\tilde{\cal L})$ defined by a sub-DP-ideal of $I$ and let $f:(X,{\cal N})\to(S,{\cal L})$ be a log smooth and integral morphism of log schemes. An important reason why log crystalline cohomology of $(X,{\cal N})$ over $(\tilde{S},\tilde{\cal L})$ works well is that locally on $X$ there exist log smooth and integral, hence flat, morphisms $\tilde{f}:(\tilde{X},\tilde{\cal N})\to(\tilde{S},\tilde{\cal L})$ with $f=\tilde{f}\times_{(\tilde{S},\tilde{\cal L})}(S,{\cal L})$. This implies that the crystalline complex of $X/\tilde{S}$ (with respect to any embedding system) is flat over ${\cal O}_{\tilde{S}}$, see \cite{hyoka} 2.22, and on this property many fundamental theorems rely. Now let $W$ be a discrete valuation ring of mixed characteristic $(0,p)$ with maximal ideal generated by $p$. For $n\in \mathbb{N}$ let $W_n=W/(p^n)$, $k=W_1$ and $K_0=\mbox{\rm Quot}(W)$, and let $T_n$ be the exact closed log subscheme of $S=W[Q]$ defined by the ideal $(p^n,q)$ (abusing previous notation we now take $\mbox{\rm Spec}(W)$ as the base scheme $W$ of \arabic{basnot1}.\arabic{basnot2}). Thus $T=T_1$. We will often view $T$-log schemes with boundary as $T_n$-log schemes with boundary for $n\in \mathbb{N}$.\\ \begin{lem}\label{loclif} Let $(\overline{Y},Y)/T$ be a semistable $T$-log scheme with boundary.
Then there exist \'{e}tale locally on $\overline{Y}$ smooth $T_n$-log schemes with boundary $(\overline{Y}_n,Y_n)$ such that $(\overline{Y},Y)=(\overline{Y}_n,Y_n)\overline{\times}_{T_n}T$, the closed immersion $(\overline{Y},Y)\to(\overline{Y}_n,Y_n)$ is boundary exact, and such that $\Omega^1_{(\overline{Y}_n,Y_n)/T_n}$ is flat over ${\cal O}_{T_n}$ and commutes with base changes $T_m\to T_n$ for $m\le n$. \end{lem} {\sc Proof:} We may suppose that there is a strict and log smooth morphism $$h:(\overline{Y},Y)\to (\overline{X},X)=(\mbox{\rm Spec}(\frac{k[t_1,\ldots,t_{i_2}]}{(t_1\cdots t_{i_1})}),\mbox{\rm Spec}(\frac{k[t_1,\ldots,t_{i_1},t_{i_1+1}^{\pm},\ldots,t_{i_2}^{\pm}]}{(t_1\cdots t_{i_1})}))$$for some integers $1\le i_1\le i_2$ such that $P=\mathbb{N}^{i_2}$ is a chart for $(\overline{X},X)$ sending $1_i\mapsto t_i$ for $1\le i\le i_2$ and such that the structure map is given by $$Q=\mathbb{N}\to P^{\rm gp}=\mathbb{Z}^{i_2},\quad q\mapsto (1_1,\ldots,1_{i_1},r_{i_1+1},\ldots,r_{i_2})$$with some $r_{j}\in\mathbb{Z}$ for $i_1+1\le j\le i_2$. We lift $(\overline{X},X)$ to$$(\overline{X}_n,X_n)=(\mbox{\rm Spec}(\frac{W_n[t_1,\ldots,t_{i_2}]}{(t_1\cdots t_{i_1})}),\mbox{\rm Spec}(\frac{W_n[t_1,\ldots,t_{i_1},t_{i_1+1}^{\pm},\ldots,t_{i_2}^{\pm}]}{(t_1\cdots t_{i_1})}))$$using the same formulas for the log structure maps. Local liftings of $h$ to $(\overline{X}_n,X_n)$ result from the classical theory, since `strict and log smooth' is equivalent to `smooth on underlying schemes'.\\ \begin{lem}\label{kriskey} Let $n\in\mathbb{N}$ and let $(\overline{Y},Y)\to(\overline{X}_i,X_i)$ be boundary exact closed immersions into smooth $T_n$-log schemes with boundary ($i=1,2$).
Then there exist \'{e}tale locally on $(\overline{X}_1\overline{\times}_{T_n}\overline{X}_2)$ factorizations of the diagonal embedding $$(\overline{Y},Y)\stackrel{\iota}{\to}(\overline{Z},Z)\to(\overline{X}_1\overline{\times}_{T_n}\overline{X}_2,X_1\times_{T_n}X_2)$$with $\iota$ a boundary exact closed immersion, the map $\overline{Z}\to\overline{X}_1\overline{\times}_{T_n}\overline{X}_2$ log \'{e}tale, the projections $p_i:\overline{Z}\to\overline{X}_i$ strict and log smooth, and with the following property: Let $\overline{D}_{12}$ (resp. $\overline{D}_i$) denote the ${\rm DP}$ envelopes of (the underlying scheme morphism of) $\overline{Y}\to\overline{Z}$ (resp. of $\overline{Y}\to\overline{X}_i$), and let $q_i:\overline{D}_{12}\to\overline{D}_i$ be the canonical projections. Then there exist $u_{i1},\ldots,u_{im_i}\in {\cal O}_{\overline{D}_{12}}$ for $i=1$ and $i=2$ such that $du_{i1},\ldots,du_{im_i}$ form a basis of $\Omega^1_{\overline{Z}/\overline{X}_i}$ and such that the assignments $U_{ij}^{[k]}\mapsto u_{ij}^{[k]}$ ($k\in\mathbb{N}$) induce isomorphisms$$q_i^{-1}{\cal O}_{\overline{D}_i}\langle U_{i1},\ldots,U_{im_i}\rangle\cong{\cal O}_{\overline{D}_{12}}$$where on the left hand side we mean the ${\rm DP}$ envelope of the free polynomial ring. \end{lem} \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}}\newcounter{defcry1}\newcounter{defcry2}\setcounter{defcry1}{\value{section}}\setcounter{defcry2}{\value{satz}} Lemma \ref{kriskey} follows from Proposition \ref{keykoh}, and the same proofs give variants of Proposition \ref{keykoh} and Lemma \ref{kriskey} for more than two embeddings $(\overline{Y},Y)\to(\overline{X}_i,X_i)$ (and hence with products with more than two factors). As in \cite{kalo} one shows that the DP envelopes of $(\overline{Y},Y)$ in chosen exactifications of these products (e.g. the DP envelope $\overline{D}_{12}$ in Lemma \ref{kriskey}) are independent of the chosen exactifications.
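For orientation, we recall the divided power polynomial algebra appearing in Lemma \ref{kriskey}; this is the standard definition, included only for the reader's convenience. For a ring ${\cal O}$, the algebra ${\cal O}\langle U_1,\ldots,U_m\rangle$ is the free ${\cal O}$-module on the monomials in symbols $U_j^{[k]}$ ($k\in\mathbb{N}$), subject to $$U_j^{[0]}=1,\qquad U_j^{[k]}U_j^{[l]}=\binom{k+l}{k}U_j^{[k+l]},\qquad dU_j^{[k]}=U_j^{[k-1]}dU_j\quad(\mbox{where }U_j=U_j^{[1]}),$$so that $U_j^{[k]}$ formally plays the role of $U_j^k/k!$. The isomorphism of Lemma \ref{kriskey} thus exhibits ${\cal O}_{\overline{D}_{12}}$ as a DP polynomial algebra over $q_i^{-1}{\cal O}_{\overline{D}_i}$ in the variables $u_{i1},\ldots,u_{im_i}$.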
For a given semistable $T$-log scheme with boundary $(\overline{Y},Y)$ we now define its crystalline cohomology relative to $T_n$ by the standard method (cf. \cite{hyoka} 2.18): Choose an open covering $\overline{Y}=\cup_{\overline{U}\in\overline{\cal U}}\overline{U}$ and for each $(\overline{U},U=Y\cap\overline{U})$ a lift $(\overline{U}_n,U_n)$ as in Lemma \ref{loclif}. Taking products we get a simplicial $T_n$-log scheme with boundary $(\overline{U}^{\bullet}_n,U^{\bullet}_n)$ which is an embedding system for $(\overline{Y},Y)$ over $T_n$. Let $\overline{D}_n^{\bullet}$ be the DP envelope of $(\overline{Y},Y)$ in $(\overline{U}_n^{\bullet},U^{\bullet}_n)$, i.e. the simplicial scheme formed by the DP envelopes of local exactifications of $(\overline{Y},Y)\to(\overline{U}_n^{\bullet},U^{\bullet}_n)$ as in Lemma \ref{kriskey}. Then we set$$R\Gamma_{\rm crys}((\overline{Y},Y)/T_n)=R\Gamma(\overline{D}_n^{\bullet},\Omega^{\bullet}_{(\overline{U}_n,U_n)/T_n}\otimes{\cal O}_{\overline{D}_n^{\bullet}}).$$That this definition is independent of the chosen embedding follows from Lemma \ref{kriskey} and the DP Poincar\'{e} lemma.\\ \begin{lem} (a) For $m\le n$ we have $$R\Gamma_{\rm crys}((\overline{Y},Y)/T_m)\cong R\Gamma_{\rm crys}((\overline{Y},Y)/T_n)\otimes^{\mathbb{L}}_{W_n}W_m.$$(b) If $\overline{Y}$ is proper over $k$, the cohomology of $$R\lim_{\stackrel{\leftarrow}{n}}R\Gamma_{\rm crys}((\overline{Y},Y)/T_n)$$ (resp. of $R\Gamma_{\rm crys}((\overline{Y},Y)/T_n)$) is finitely generated over $W$ (resp. over $W_n$). \end{lem} {\sc Proof:} Just as in \cite{hyoka} 2.22 one deduces from Lemmata \ref{loclif} and \ref{kriskey} that $\Omega^{\bullet}_{(\overline{U}_n,U_n)/T_n}\otimes{\cal O}_{\overline{D}_n^{\bullet}}$ is a $W_n$-flat sheaf complex on $\overline{D}_n^{\bullet}$ and this implies (a).
If $\overline{Y}$ is proper over $k$ it follows that $R\Gamma_{\rm crys}((\overline{Y},Y)/T_1)=R\Gamma(\overline{Y},\Omega^{\bullet}_{(\overline{Y},Y)/T_1})$ has finite dimensional cohomology over $k$ since each $\Omega^{j}_{(\overline{Y},Y)/T_1}$ is coherent. Together with (a) we conclude as in the classical case.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}} Ogus \cite{ogcon} and Shiho \cite{shiho} have defined logarithmic convergent cohomology in great generality and ``in crystalline spirit''. Here we content ourselves with the following definition. Let $E$ be a fine $T$-log scheme. Let $T_{\infty}$ be the formal log scheme $(\mbox{\rm Spf}(W),1\mapsto 0)$. Choose an exact closed immersion $E\to G$ into a log smooth formal $T_{\infty}$-log scheme $G$ topologically of finite type over $W$. Associated to $G$ is a $K_0$-rigid space $G_{K_0}$ together with a specialization map ${\rm sp}$ to the special fibre of $G$. The preimage ${\rm sp}^{-1}(E)=]E[_G$ of the embedded $E$, the tube of $E$, is an admissible open subspace of $G_{K_0}$. The logarithmic de Rham complex $\Omega^{\bullet}_{G/T_{\infty}}$ on $G$ gives rise, tensored with $\mathbb{Q}$, to a sheaf complex $\Omega^{\bullet}_{G_{K_0}/T_{\infty,K_0}}$ on $G_{K_0}$ and we set$$R\Gamma_{\rm conv}(E/T_{\infty})=R\Gamma(]E[_G,\Omega^{\bullet}_{G_{K_0}/T_{\infty,K_0}}),$$an object in the derived category of $K_0$-vector spaces. If there are embeddings $E\to G$ as above only locally on $E$, one works with embedding systems.\\ Now let $Y$ be a semistable $k$-log scheme with smooth irreducible components and let $M$ be the intersection of some of its irreducible components. Endow $M$ with the structure of $T$-log scheme induced from $Y$.
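To make the tube concrete, here is a minimal illustrative example (ours, with log structures ignored): take $G=\mbox{\rm Spf}(W\langle x\rangle)$ the formal affine line over $W$, so that $G_{K_0}$ is the closed unit disk, and let $E$ be the origin of the special fibre ${\bf A}^1_k$ of $G$. Then $$]E[_G={\rm sp}^{-1}(E)=\{x\in G_{K_0}\;|\;|x|<1\},$$the open unit disk: a point of $G_{K_0}$ specializes to the origin exactly when its coordinate has absolute value $<1$.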
Note that $M$ is not log smooth over $T$ (unless $Y$ has only a single irreducible component) and its usual log crystalline cohomology is pathological: it does not provide a canonical integral lattice in the log convergent cohomology of $M$. We will now construct such a lattice by another method. In \arabic{bspbdl1}.\arabic{bspbdl2} we constructed an $S_1$-log scheme with boundary $(P_M,V_M)$ where $S_1$ is the exact closed log subscheme of $S$ defined by the ideal $(p)$. Perform the base change with the exact closed subscheme $T$ of $S_1$ defined by the ideal $(q)$ to get $(P_M^0,V_M^0)=(P_M\times_{S_1}T,V_M\times_{S_1}T)$. This is a semistable $T$-log scheme with boundary as defined above.\\ \begin{satz}\label{crisconv} There exists a canonical isomorphism $$R\lim_{\stackrel{\leftarrow}{n}}R\Gamma_{\rm crys}(({P_M^0},V_M^0)/T_n)\otimes_{W}K_0\cong R\Gamma_{\rm conv}(M/T_{\infty}).$$In particular, if $M$ is proper, each $R^j\Gamma_{\rm conv}(M/T_{\infty})$ is finite dimensional.\\ \end{satz} {\sc Proof:} {\it Step 1:} The map is$$R\lim_{\stackrel{\leftarrow}{n}}R\Gamma_{\rm crys}(({P_M^0},V_M^0)/T_n)\otimes_{W}K_0\to R\lim_{\stackrel{\leftarrow}{n}}R\Gamma_{\rm crys}(({V_M^0},V_M^0)/T_n)\otimes_{W}K_0$$$$=R\lim_{\stackrel{\leftarrow}{n}}R\Gamma_{\rm crys}(V_M^0/T_n)\otimes_{W}K_0\stackrel{(i)}{\cong}R\Gamma_{\rm conv}(V_M^0/T_{\infty})\to R\Gamma_{\rm conv}(M/T_{\infty})$$where the left hand side in $(i)$ is the usual log crystalline cohomology of $V_M^0/T_n$ and the isomorphism $(i)$ holds by log smoothness of $V_M^0/T$. That this map is an isomorphism can be checked locally.\\{\it Step 2:} We may therefore assume that there exists a smooth (in the classical sense) affine connected $\mbox{\rm Spec}(W)$-scheme $\tilde{\cal M}=\mbox{\rm Spec}(\tilde{B})$ lifting $M$ and that the invertible sheaves ${\cal F}_j|_M$ on $M$ are trivial (notation from \arabic{bspbdl1}.\arabic{bspbdl2}); let $v_j$ be a generator of ${\cal F}_j|_M$.
Furthermore we may assume that the divisor $D$ on $M$ (the intersection of $M$ with all irreducible components of $Y$ not containing $M$) lifts to a (relative $\mbox{\rm Spec}(W)$) normal crossings divisor $\tilde{\cal D}$ on $\tilde{\cal M}$. Let $$\tilde{\cal V}_{\cal M}=\mbox{\rm Spec}(\tilde{B}[x_j]_{j\in I})$$$$\tilde{\cal P}_{\cal M}=\times_{\tilde{\cal M}}(\mbox{\rm Proj}(\tilde{B}[y_j,x_j]))_{j\in I}.$$Identifying the free variable $x_j$ with a lift of $v_j$ we view $\tilde{\cal V}_{\cal M}$ as a lift of $V_M$; identifying moreover the free variable $y_j$ with a lift of $1_{{\cal O}_M}$ we view $\tilde{\cal P}_{\cal M}$ as a lift of $P_M$; identifying a homogeneous element $s\in\tilde{B}[x_j]_{j\in I}$ of degree $n$ with the degree zero element $s/y_j^n$ of $\tilde{B}[y_j^{\pm},x_j]$ we view $\tilde{\cal V}_{\cal M}$ as an open subscheme of $\tilde{\cal P}_{\cal M}$. As in \arabic{bspbdl1}.\arabic{bspbdl2} we factor the distinguished element $a\in\mbox{\rm Sym}_{{\cal O}_M}(\oplus({\cal F}_j)_{j\in I})(M)$ as $a=a_0.(\oplus_{j\in I}v_j)$ with defining equation $a_0\in{\cal O}_M$ of the divisor $D$ in $M$. Lift $a_0$ to a defining equation $\tilde{a}_0\in \tilde{B}$ of $\tilde{\cal D}$ in $\tilde{\cal M}$. This $\tilde{a}_0$ also defines a normal crossing divisor $\tilde{\cal D}_{\tilde{\cal V}}$ on $\tilde{\cal V}_{\cal M}$. Set $\tilde{a}=\tilde{a}_0\prod_{j\in I}x_j\in \tilde{B}[x_j]_{j\in I}$ and consider the following normal crossing divisor on $\tilde{\cal P}_{\cal M}$: the union of $\tilde{\cal P}_{\cal M}-\tilde{\cal V}_{\cal M}$ with the closure (in $\tilde{\cal P}_{\cal M}$) of the zero set of $\tilde{a}$ (in $\tilde{\cal V}_{\cal M}$). It defines a log structure on $\tilde{\cal P}_{\cal M}$. Define a morphism $\tilde{\cal V}_{\cal M}\to S$ by sending $q\mapsto \tilde{a}$. We have constructed a lift of the $S_1$-log scheme with boundary $(P_M,V_M)$ to an $S$-log scheme with boundary $(\tilde{\cal P}_{\cal M},\tilde{\cal V}_{\cal M})$.
Moreover, if we denote by $\tilde{\cal T}_{\infty}$ the exact closed log subscheme of $S$ defined by the ideal $(q)$, then the $\tilde{\cal T}_{\infty}$-log scheme with boundary $(\tilde{\cal P}_{\cal M}^0,\tilde{\cal V}_{\cal M}^0)=(\tilde{\cal P}_{\cal M}\times_S\tilde{\cal T}_{\infty},\tilde{\cal V}_{\cal M}\times_S\tilde{\cal T}_{\infty})$ is a lift of the $T$-log scheme with boundary $(P_M^0,V_M^0)$.\\{\it Step 3:} Denote by ${\cal P}_{\cal M}^0$ (resp. ${\cal V}_{\cal M}^0$, resp. ${\cal M}$, resp. ${\cal D}_{\cal V}$) the $p$-adic completions of $\tilde{\cal P}_{\cal M}^0$ (resp. of $\tilde{\cal V}_{\cal M}^0$, resp. of $\tilde{\cal M}$, resp. of $\tilde{\cal D}_{\tilde{\cal V}}$). Denote by ${\cal P}_{{\cal M},n}^0$ (resp. ${\cal V}_{{\cal M},n}^0$, resp. ${\cal M}_n$, resp. ${\cal D}_{{\cal V},n}$) the reduction modulo $p^n$. Let $\Omega^{\bullet}_{{\cal P}_{\cal M}^0/T_{\infty}}$ be the $p$-adic completion of the de Rham complex of the $\tilde{\cal T}_{\infty}$-log scheme with boundary $(\tilde{\cal P}_{\cal M}^0,\tilde{\cal V}_{\cal M}^0)$. Its reduction $\Omega^{\bullet}_{{\cal P}_{\cal M}^0/T_{\infty}}\otimes(\mathbb{Z}/p^n)$ modulo $p^n$ is the de Rham complex $\Omega^{\bullet}_{{\cal P}_{{\cal M},n}^0/T_n}$ of the ${T}_{n}$-log scheme with boundary $({\cal P}_{{\cal M},n}^0,{\cal V}_{{\cal M},n}^0)$. Observe that the differentials on $\Omega^{\bullet}_{{\cal P}_{\cal M}^0/T_{\infty}}$ pass to differentials on $\Omega^{\bullet}_{{\cal P}_{\cal M}^0/T_{\infty}}\otimes_{{\cal O}_{{\cal P}_{\cal M}^0}}{\cal O}_{\cal M}$ where we use the zero section ${\cal M}\to{\cal P}_{\cal M}^0$. Let $\Omega^{\bullet}_{{\cal P}_{\cal M}^0/T_{\infty}}\otimes\mathbb{Q}$ be the complex on the rigid space ${\cal P}_{{\cal M},K_0}^0$ obtained by tensoring with $K_0$ the sections of $\Omega^{\bullet}_{{\cal P}_{\cal M}^0/T_{\infty}}$ over open affine pieces of ${\cal P}_{\cal M}^0$.
Similarly define $\Omega^{\bullet}_{{\cal P}_{\cal M}^0/T_{\infty}}\otimes_{{\cal O}_{{\cal P}_{\cal M}^0}}{\cal O}_{\cal M}\otimes\mathbb{Q}$. By definition we have $$R\Gamma_{\rm conv}(M/T_{\infty})=R\Gamma(]M[_{{\cal P}_{\cal M}},\Omega^{\bullet}_{{\cal P}_{\cal M}^0/T_{\infty}}\otimes\mathbb{Q}),$$ $$R\lim_{\stackrel{\leftarrow}{n}}R\Gamma_{\rm crys}(({P_M^0},V_M^0)/T_n)\otimes_{W}K_0=R\lim_{\stackrel{\leftarrow}{n}}R\Gamma({\cal P}_{{\cal M},n}^0,\Omega^{\bullet}_{{\cal P}_{{\cal M},n}^0/T_n})\otimes_{W}K_0.$$In view of$$R\Gamma(]M[_{{\cal P}_{\cal M}},\Omega^{\bullet}_{{\cal P}_{\cal M}^0/T_{\infty}}\otimes_{{\cal O}_{{\cal P}_{\cal M}^0}}{\cal O}_{\cal M}\otimes\mathbb{Q})=R\lim_{\stackrel{\leftarrow}{n}}R\Gamma({\cal P}_{{\cal M},n}^0,\Omega^{\bullet}_{{\cal P}_{{\cal M},n}^0/T_n}\otimes_{{\cal O}_{{\cal P}_{{\cal M},n}^0}}{\cal O}_{{\cal M}_n})\otimes_{W}K_0$$ it is therefore enough to show that the maps$$f_n:R\Gamma({\cal P}_{{\cal M},n}^0,\Omega^{\bullet}_{{\cal P}_{{\cal M},n}^0/T_n})\to R\Gamma({\cal P}_{{\cal M},n}^0,\Omega^{\bullet}_{{\cal P}_{{\cal M},n}^0/T_n}\otimes_{{\cal O}_{{\cal P}_{{\cal M},n}^0}}{\cal O}_{{\cal M}_n})$$ $$g:R\Gamma(]M[_{{\cal P}_{\cal M}},\Omega^{\bullet}_{{\cal P}_{\cal M}^0/T_{\infty}}\otimes\mathbb{Q})\to R\Gamma(]M[_{{\cal P}_{\cal M}},\Omega^{\bullet}_{{\cal P}_{\cal M}^0/T_{\infty}}\otimes_{{\cal O}_{{\cal P}_{\cal M}^0}}{\cal O}_{\cal M}\otimes\mathbb{Q})$$are isomorphisms.\\{\it Step 4:} Let ${\cal D}_{{\cal V},n}=\cup_{l\in L}{\cal D}_{n,l}$ be the decomposition of ${\cal D}_{{\cal V},n}$ into irreducible components. Let ${\cal E}'_{n}$ be the closed subscheme of ${\cal V}_{{\cal M},n}^0$ defined by $\prod_{j\in I}x_j\in \Gamma({\cal V}_{{\cal M},n}^0,{\cal O}_{{\cal V}_{{\cal M},n}^0})$ and let ${\cal E}_{n}$ be the closure of ${\cal E}'_{n}$ in ${\cal P}_{{\cal M},n}^0$.
Let ${\cal E}_{n}=\cup_{j\in I}{\cal E}_{n,j}$ be its decomposition into irreducible components. For a pair $P=(P_I,P_L)$ of subsets $P_I\subset I$ and $P_L\subset L$ let$${\cal G}_P=(\cap_{j\in P_I}{\cal E}_{n,j})\cap(\cap_{l\in P_L}{\cal D}_{n,l}),$$so we drop reference to $n$ in our notation, for convenience. Also for convenience we denote the sheaf complex $\Omega^{\bullet}_{{\cal P}_{{\cal M},n}^0/T_n}$ on ${\cal P}_{{\cal M},n}^0$ simply by $\Omega^{\bullet}$. For two pairs $P, P'$ as above with $P_I\cup P_L\ne \emptyset$, with $P_I\subset P'_I$ and $P_L=P'_{L}$ consider the canonical map $$w_{P,P'}:\Omega^{\bullet}\otimes{\cal O}_{{\cal G}_P}\to\Omega^{\bullet}\otimes{\cal O}_{{\cal G}_{P'}}$$of sheaf complexes on ${\cal P}_{{\cal M},n}^0$. We claim that the map $R\Gamma({\cal P}_{{\cal M},n}^0,w_{P,P'})$ induced by $w_{P,P'}$ in cohomology is an isomorphism. For this we may of course even assume $P_I'=P_I\cup\{j_0\}$ for some $j_0\in I$, $j_0\notin P_I$. In the ${\cal O}_{{\cal G}_{P'}}$-module $\Omega^1\otimes{\cal O}_{{\cal G}_{P'}}$ we fix a complement $N$ of the submodule generated by (the class of) $\mbox{\rm dlog}(x_{j_0})\in\Gamma({\cal P}_{{\cal M},n}^0,\Omega^1\otimes{\cal O}_{{\cal G}_{P'}})$ as follows. We use the identification$$\frac{(\Omega^1_{\tilde{\cal M}}(\log(\tilde{\cal D}))\otimes{\cal O}_{{\cal G}_{P'}})\oplus(\oplus_{j\in I}{\cal O}_{{\cal G}_{P'}}.\mbox{\rm dlog}(x_j))}{{\cal O}_{{\cal G}_{P'}}.\mbox{\rm dlog}(\tilde{a})}=\Omega^1\otimes{\cal O}_{{\cal G}_{P'}}$$(with $\Omega^1_{\tilde{\cal M}}(\log(\tilde{\cal D}))$ the differential module of $(\tilde{\cal M},(\mbox{log str. def. by }\tilde{\cal D}))\to(\mbox{\rm Spec}(W),\mbox{triv.})$).
If $P_L\ne\emptyset$ we may assume that we can factor our $\tilde{a}_0\in\tilde{B}$ from above as $\tilde{a}_0=\tilde{a}_0'h$ with $h\in\tilde{B}$ whose zero set in $\tilde{\cal M}=\mbox{\rm Spec}(\tilde{B})$ reduces modulo $p^n$ to an irreducible component of $\cup_{l\in P_L}{\cal D}_{n,l}$. We may assume that the ${\cal O}_{\tilde{\cal M}}$-submodule of $\Omega^1_{\tilde{\cal M}}(\log(\tilde{\cal D}))$ generated by $\mbox{\rm dlog}(h)$ admits a complement $N'$. Then we get the isomorphism $$(N'\otimes{\cal O}_{{\cal G}_{P'}})\oplus(\oplus_{j\in I}{\cal O}_{{\cal G}_{P'}}.\mbox{\rm dlog}(x_j))\cong \Omega^1\otimes{\cal O}_{{\cal G}_{P'}}$$(use $\mbox{\rm dlog}(\tilde{a})=\mbox{\rm dlog}(h)+\mbox{\rm dlog}({\tilde{a}}/{h})$). If there exists $j'\in P_I$ we get the isomorphism$$(\Omega^1_{\tilde{\cal M}}(\log(\tilde{\cal D}))\otimes{\cal O}_{{\cal G}_{P'}})\oplus(\oplus_{j\in I-\{j'\}}{\cal O}_{{\cal G}_{P'}}.\mbox{\rm dlog}(x_j))\cong\Omega^1\otimes{\cal O}_{{\cal G}_{P'}}$$(use $\mbox{\rm dlog}(\tilde{a})=\mbox{\rm dlog}(x_{j'})+\mbox{\rm dlog}({\tilde{a}}/{x_{j'}})$). In both cases, dropping the $j_0$-summand in the left hand side we get $N$ as desired. We see that the ${\cal O}_{{\cal G}_{P'}}$-subalgebra $N^{\bullet}$ of $\Omega^{\bullet}\otimes{\cal O}_{{\cal G}_{P'}}$ generated by $N$ is stable for the differential $d$, and that we have $$\Omega^{\bullet}\otimes{\cal O}_{{\cal G}_{P'}}=N^{\bullet}\otimes_{W_n}C^{\bullet}$$ as complexes, where $C^{\bullet}$ is the complex $C^0=W_n$, $C^1=W_n.\mbox{\rm dlog}(x_{j_0})$ (here $\mbox{\rm dlog}(x_{j_0})$ is nothing but a symbol), $C^m=0$ for $m\ne 0,1$, and zero differential. Let ${\cal R}=\mbox{\rm Proj} (W_n[y_{j_0},x_{j_0}])$. We have a canonical map ${\cal G}_P\to{\cal R}$.
Let $D^{\bullet}$ be the ${\cal O}_{\cal R}$-subalgebra of $\Omega^{\bullet}\otimes{\cal O}_{{\cal G}_{P}}$ generated by $\mbox{\rm dlog}(x_{j_0})\in\Gamma({\cal P}_{{\cal M},n}^0,\Omega^1\otimes{\cal O}_{{\cal G}_{P}})$. It is stable for the differential $d$, and we find$$\Omega^{\bullet}\otimes{\cal O}_{{\cal G}_{P}}=N^{\bullet}\otimes_{W_n}D^{\bullet}$$as complexes, where $N^{\bullet}$ is mapped to $\Omega^{\bullet}\otimes{\cal O}_{{\cal G}_{P}}$ via the natural map (and section of $w_{P,P'}$)$$\Omega^{\bullet}\otimes{\cal O}_{{\cal G}_{P'}}\to\Omega^{\bullet}\otimes{\cal O}_{{\cal G}_{P}}$$ induced by the structure map ${\cal E}_{n,j_0}\to{\cal M}_n$. This map also induces a map $C^{\bullet}\to D^{\bullet}$, and it is enough to show that the latter induces isomorphisms in cohomology. But $$H^m({\cal P}_{{\cal M},n}^0,D^{\bullet})\cong H^m({\bf P}^1_{W_n},\Omega_{{\bf P}^1_{W_n}}^{\bullet}(\log\{0,\infty\})),$$which is $W_n$ if $0\le m\le 1$ and zero otherwise, because of the degeneration of the Hodge spectral sequence (\cite{kalo} 4.12) and $\Omega^1_{{\bf P}^1}(\log\{0,\infty\})\cong{\cal O}_{{\bf P}^1}$. So $C^{\bullet}$ and $D^{\bullet}$ have the same cohomology.\\{\it Step 5:} We now show that $f_n$ is an isomorphism. Let ${\cal F}_I=\cup_{j\in I}{\cal E}_{n,j}$, let ${\cal F}_L=\cup_{l\in L}{\cal D}_{n,l}={\cal D}_{{\cal V},n}$ and ${\cal F}_{I,L}={\cal F}_I\cap{\cal F}_L$. All the following tensor products are taken over ${\cal O}_{{\cal P}_{{\cal M},n}^0}$.
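For the reader's convenience, the cohomology of ${\bf P}^1_{W_n}$ invoked at the end of Step 4 can be spelled out; this is a restatement of the ingredients just cited, not an additional input. The degeneration of the Hodge spectral sequence gives $$H^m({\bf P}^1_{W_n},\Omega_{{\bf P}^1_{W_n}}^{\bullet}(\log\{0,\infty\}))\cong\bigoplus_{p+q=m}H^q({\bf P}^1_{W_n},\Omega_{{\bf P}^1_{W_n}}^{p}(\log\{0,\infty\})),$$and since $\Omega^1_{{\bf P}^1}(\log\{0,\infty\})\cong{\cal O}_{{\bf P}^1}$ while $H^0({\bf P}^1_{W_n},{\cal O}_{{\bf P}^1})=W_n$ and $H^1({\bf P}^1_{W_n},{\cal O}_{{\bf P}^1})=0$, this is $W_n$ in degrees $m=0,1$ and zero otherwise.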
We will show that in$$\Omega^{\bullet}=\Omega^{\bullet}\otimes {\cal O}_{{\cal P}_{{\cal M},n}^0}\stackrel{\alpha}{\longrightarrow}\Omega^{\bullet}\otimes {\cal O}_{{\cal F}_I}\stackrel{\beta}{\longrightarrow}\Omega^{\bullet}\otimes {\cal O}_{{\cal G}_{(I,\emptyset)}}=\Omega^{\bullet}\otimes{\cal O}_{{\cal M}_n}$$both $\alpha$ and $\beta$ induce isomorphisms in cohomology. The exact sequences $$0\longrightarrow {\cal O}_{{\cal P}_{{\cal M},n}^0}\longrightarrow {\cal O}_{{\cal F}_I}\oplus {\cal O}_{{\cal F}_L}\longrightarrow {\cal O}_{{\cal F}_{I,L}}\longrightarrow0$$$$0\longrightarrow{\cal O}_{{\cal F}_{I}}\longrightarrow{\cal O}_{{\cal F}_{I}}\oplus {\cal O}_{{\cal F}_{I,L}} \longrightarrow {\cal O}_{{\cal F}_{I,L}}\longrightarrow0$$ show that, to prove that $\alpha$ induces cohomology isomorphisms, it is enough to prove that $\Omega^{\bullet}\otimes{\cal O}_{{\cal F}_L}\to\Omega^{\bullet}\otimes {\cal O}_{{\cal F}_{I,L}}$ induces cohomology isomorphisms. To see this, it is enough to show that both $\Omega^{\bullet}\otimes {\cal O}_{{\cal F}_L}\stackrel{\gamma}{\to}\Omega^{\bullet}\otimes {\cal O}_{{\cal F}_L\cap {\cal G}_{(I,\emptyset)}}$ and $\Omega^{\bullet}\otimes {\cal O}_{{\cal F}_{I,L}}\stackrel{\delta}{\to}\Omega^{\bullet}\otimes {\cal O}_{{\cal F}_L\cap {\cal G}_{(I,\emptyset)}}$ induce cohomology isomorphisms.
Consider the exact sequence\begin{gather}0\longrightarrow {\cal O}_{{\cal F}_L}\longrightarrow\oplus_{l\in L}{\cal O}_{{\cal G}_{(\emptyset,\{l\})}}\longrightarrow\oplus_{\stackrel{L'\subset L}{|L'|=2}}{\cal O}_{{\cal G}_{(\emptyset,L')}}\longrightarrow\ldots\longrightarrow{\cal O}_{{\cal G}_{(\emptyset,L)}} \longrightarrow 0\tag{$*$}\end{gather}Comparison of the exact sequences $(*)\otimes\Omega^{\bullet}$ and $(*)\otimes {\cal O}_{{\cal F}_L\cap {\cal G}_{(I,\emptyset)}}\otimes\Omega^{\bullet}$ shows that to prove that $\gamma$ induces cohomology isomorphisms, it is enough to show this for $\Omega^{\bullet}\otimes{\cal O}_{{\cal G}_{(\emptyset,L')}}\to\Omega^{\bullet}\otimes{\cal O}_{{\cal G}_{(I,L')}}$ for all $\emptyset\ne L'\subset L$; but this has been done in Step 4. Comparison of $(*)\otimes {\cal O}_{{\cal F}_{I,L}}\otimes\Omega^{\bullet}$ and $(*)\otimes {\cal O}_{{\cal F}_L\cap {\cal G}_{(I,\emptyset)}}\otimes\Omega^{\bullet}$ shows that to prove that $\delta$ induces cohomology isomorphisms, it is enough to show this for $\Omega^{\bullet}\otimes{\cal O}_{{\cal F}_I\cap{\cal G}_{(\emptyset,L')}}\stackrel{\epsilon_G}{\to}\Omega^{\bullet}\otimes {\cal O}_{{\cal G}_{(I,L')}}$ for all $\emptyset\ne L'\subset L$.
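For orientation: in the case of just two components ${\cal D}_{n,1},{\cal D}_{n,2}$ the resolution $(*)$ reduces to the familiar Mayer--Vietoris sequence $$0\longrightarrow{\cal O}_{{\cal D}_{n,1}\cup{\cal D}_{n,2}}\longrightarrow{\cal O}_{{\cal D}_{n,1}}\oplus{\cal O}_{{\cal D}_{n,2}}\longrightarrow{\cal O}_{{\cal D}_{n,1}\cap{\cal D}_{n,2}}\longrightarrow 0,$$with the second map the difference of the two restrictions (intersections taken scheme-theoretically, as in the definition of the ${\cal G}_P$).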
Consider the exact sequence \begin{gather}0\longrightarrow {\cal O}_{{\cal F}_I}\longrightarrow\oplus_{j\in I}{\cal O}_{{\cal G}_{(\{j\},\emptyset)}}\longrightarrow\oplus_{\stackrel{I'\subset I}{|I'|=2}}{\cal O}_{{\cal G}_{(I',\emptyset)}}\longrightarrow\ldots\longrightarrow{\cal O}_{{\cal G}_{(I,\emptyset)}} \longrightarrow 0\tag{$**$}\end{gather} The exact sequence $(**)\otimes {\cal O}_{{\cal F}_I\cap{\cal G}_{(\emptyset,L')}}\otimes\Omega^{\bullet}$ shows that to prove that $\epsilon_G$ induces cohomology isomorphisms, it is enough to show this for $\Omega^{\bullet}\otimes {\cal O}_{{\cal G}_{(I',L')}}\to\Omega^{\bullet}\otimes {\cal O}_{{\cal G}_{(I,L')}}$ for all $\emptyset\ne I'\subset {I}$; but this has been done in Step 4. The exact sequence $(**)\otimes\Omega^{\bullet}$ shows that to prove that $\beta$ induces cohomology isomorphisms, it is enough to show this for $\Omega^{\bullet}\otimes {\cal O}_{{\cal G}_{(I',\emptyset)}}\to\Omega^{\bullet}\otimes {\cal O}_{{\cal G}_{(I,\emptyset)}}$ for all $\emptyset\ne I'\subset {I}$; but this has been done in Step 4. The proof that $f_n$ is an isomorphism is complete. The proof that $g$ is an isomorphism is essentially the same: While Step 4 above boiled down to $H^m({\bf P}^1_{W_n},\Omega_{{\bf P}^1_{W_n}}^{\bullet}(\log\{0,\infty\}))=W_n$ if $0\le m\le 1$, and $=0$ for other $m$, one now uses $H^m({\bf D}^0_{K_0},\Omega_{{\bf D}^0_{K_0}}^{\bullet}(\log\{0\}))=K_0$ if $0\le m\le 1$, and $=0$ for other $m$ (here ${\bf D}^0_{K_0}$ is the open unit disk over $K_0$). The formal reasoning from Step 5 is then the same. The theorem is proven.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}}\newcounter{uniconv1}\newcounter{uniconv2}\setcounter{uniconv1}{\value{section}}\setcounter{uniconv2}{\value{satz}} Also unions $H$ of irreducible components of $Y$ are not log smooth over $T$ (unless $H=Y$) and their usual log crystalline cohomology is not useful.
However, if $H^{\heartsuit}$ denotes the complement in $H$ of the intersection of $H$ with the closure of $Y-H$ in $Y$, then $(H,H^{\heartsuit})$ is a semistable $T$-log scheme with boundary. There is a natural map$$h:R\Gamma_{\rm conv}(H/T)\longrightarrow R\lim_{\stackrel{\leftarrow}{n}}R\Gamma_{\rm crys}((H,H^{\heartsuit})/T_n)\otimes_{W}K_0,$$constructed as follows. We say a $T_{\infty}$-log scheme is strictly semistable if all its irreducible components are smooth $W$-schemes and if \'{e}tale locally it is the central fibre of a morphism $\mbox{\rm Spec}(W[t_1,\ldots,t_{n}])\to \mbox{\rm Spec}(W[t]),\quad t\mapsto t_1\cdots t_{m}$ (some $n\ge m\ge1$), with the log structures defined by the vanishing locus of $t$ resp. of $t_1\cdots t_{m}$. We find an \'{e}tale cover $Y=\{Y_i\}_{i\in I}$ of $Y$ and for each $i\in I$ a semistable $T_{\infty}$-log scheme ${\mathcal Y}_i$ together with an isomorphism ${\mathcal Y}_i\otimes_Wk\cong Y_i$. Taking suitable blowing ups in the products of these ${\mathcal Y}_i$ (a standard procedure, compare for example \cite{mokr}) we get an embedding system for $Y$ over $T_{\infty}$ where a typical local piece $Y_J=\prod_Y(Y_i)_{i\in J}$ of $Y$ is exactly embedded as $Y_J\to {\mathcal Y}_J$ with ${\mathcal Y}_J$ a semistable $T_{\infty}$-log scheme and such that there is a closed subscheme ${\mathcal H}_J$ of ${\mathcal Y}_J$, the union of some of its irreducible components, such that $H\times_Y Y_J={\mathcal H}_J\times_{{\mathcal Y}_J}Y$. Now ${\mathcal Y}_J$ is log smooth over $T_{\infty}$, hence its $p$-adic completion $\widehat{\mathcal Y}_J$ may be used to compute $R\Gamma_{\rm conv}(H\times_Y Y_J/T)$. On the other hand, let ${\mathcal H}_J^{\heartsuit}\subset {\mathcal H}_J$ be the open subscheme which is the complement in ${\mathcal Y}_J$ of all irreducible components of ${\mathcal Y}_J$ which are not fully contained in ${\mathcal H}_J$. 
Then $({\mathcal H}_J,{\mathcal H}_J^{\heartsuit})$ is a smooth $T_{\infty}$-log scheme with boundary, hence its $p$-adic completion may be used to compute $R\lim_{\stackrel{\leftarrow}{n}}R\Gamma_{\rm crys}((H\times_Y Y_J,H^{\heartsuit}\times_Y Y_J)/T_n)\otimes_{W}K_0$. By the proof of \cite{berfi} Proposition 1.9 there is a natural map from the structure sheaf of the tube $]H\times_Y Y_J[_{\widehat{\mathcal Y}_J}$ to the structure sheaf of the $p$-adically completed DP envelope, tensored with $\mathbb{Q}$, of $H\times_Y Y_J$ in ${\mathcal H}_J$. It induces a map between our de Rham complexes in question, hence we get $h$. By the same local argument which showed that the map $g$ in the proof of Theorem \ref{crisconv} is an isomorphism, we see that $h$ is an isomorphism; the work on local lifts of $P_M^0$ there is replaced by work on local lifts of $Y$ here. In particular, if $H$ is proper, each $R^j\Gamma_{\rm conv}(H/T_{\infty})$ is finite dimensional.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}} Suppose $k$ is perfect. Then there is a canonical Frobenius endomorphism on the log scheme $T_n$ (cf. \cite{hyoka} 3.1): The canonical lift of the $p$-power map on $k$ to an endomorphism of $W_n$, together with the endomorphism of the log structure which on the standard chart $\mathbb{N}$ is multiplication by $p$. We can also define a Frobenius endomorphism on $R\Gamma_{\rm crys}((\overline{Y},Y)/T_n)$ for a semistable $T$-log scheme with boundary $(\overline{Y},Y)$, because we can define a Frobenius endomorphism on the embedding system used in \arabic{defcry1}.\arabic{defcry2}, compatible with that on $T_n$. Namely, on a standard $T_n$-log scheme with boundary $(\overline{X}_n,X_n)$ as occurs in the proof of Lemma \ref{loclif} we act on the underlying scheme by the Frobenius on $W_n$ and by $t_i\mapsto t_i^p$ (all $i$), and on the log structure we act by the unique compatible map which on our standard chart $\mathbb{N}^{i_2}$ is multiplication by $p$. 
Then we lift these endomorphisms further (using the lifting property of classical smoothness) to Frobenius lifts of our $\overline{Y}$-covering and hence to the embedding system.\\ \addtocounter{satz}{1}{\bf \arabic{section}.\arabic{satz}} We finish with perspectives on possible further developments.\\(1) Mokrane \cite{maghreb} defines the crystalline cohomology of a classically smooth $k$-scheme $U$ as the log crystalline cohomology with poles in $D$ of a smooth compactification $X$ of $U$ with $D=X-U$ a normal crossing divisor. This is a cohomology theory with the usual good properties (finitely generated, Poincar\'{e} duality, mixed if $k$ is finite). He shows that under assumptions on resolutions of singularities, this cohomology theory indeed only depends on $U$. We suggest a similar approach to define the crystalline cohomology of a semistable $k$-log scheme $U$: Compactify it (if possible) into a proper semistable $T$-log scheme with boundary $(X,U)$ and take the crystalline cohomology of $(X,U)$. Similarly, classical rigid cohomology as defined by Berthelot \cite{berco} works with compactifications. Also here, to define log versions it might be useful to work with log schemes with boundary to avoid hypotheses on existence of compactifications by genuine log morphisms.\\ (2) We restricted our treatment of crystalline cohomology to that of semistable $T$-log schemes with boundary $(\overline{Y},Y)$ relative to $T_n$. For deformations of $T=T_1$ other than $T_n$ --- for example, $(\mbox{\rm Spec}(W_n), 1\mapsto p)$ --- we have at present no suitable analogs of Lemma \ref{loclif}. However, such analogs also seem to be lacking in idealized log geometry: for an ideally log smooth $T$-log scheme (like the union of some irreducible components of a semistable $k$-log scheme in the usual sense), there seems to be in general no lift to a flat and ideally log smooth $(\mbox{\rm Spec}(W_n), 1\mapsto p)$-log scheme. Some more foundational concepts need to be found. 
Let us nevertheless propose some tentative definitions of crystalline cohomology for more general fine log schemes $T$ and more general $T$-log schemes with boundary (without claiming any results). Suppose that $p$ is nilpotent in ${\cal O}_W$ and let $(I,\delta)$ be a quasicoherent DP ideal in ${\cal O}_W$. All DP structures on ideals in ${\cal O}_W$-algebras are required to be compatible with $\delta$. Let $T_0$ be a closed subscheme of $T$ and let $\gamma$ be a DP structure on the ideal of $T_0$ in $T$. Let $(\overline{X},{X})$ be a ${T}$-log scheme with boundary, and let $\overline{X}_0$ be the closure in $\overline{X}$ of its locally closed subscheme ${X}\times_{T}{T}_0$. We say $\gamma$ extends to $(\overline{X},{X})$ if there is a DP structure $\alpha$ on the ideal of $\overline{X}_0$ in $\overline{X}$, such that the structure map ${X}\to{T}$ is a DP morphism (if $\alpha$ exists, it is unique, because ${\cal O}_{\overline{X}}\to i_*{\cal O}_{X}$ is injective). Then we say $(\overline{X},{X})$ is a $\gamma$-${T}$-log scheme with boundary. For a $\gamma$-${T}$-log scheme $(\overline{X},{X})$ we can define the crystalline site and the crystalline cohomology of $(\overline{X},{X})$ over $T$ as in the case of usual log schemes. {\it Example:} Let $\underline{T}_0\subset \underline{T}$ be a closed immersion. Suppose ${T}$ is the DP envelope of $\underline{T}_0$ in $\underline{T}$ and ${T}_0\subset {T}$ is the closed subscheme defined by its DP ideal; we have $T_0=\underline{T}_0$ if $\delta$ extends to $\underline{T}$. 
Now if $(\underline{\overline{X}},\underline{X})$ is a $\underline{T}$-log scheme with boundary, we obtain a $\gamma$-${T}$-log scheme with boundary $(\overline{X},X)$ by taking as $\overline{X}$ the DP envelope of the schematic closure of the subscheme $\underline{X}\times_{\underline{T}}\underline{T}_0$ of $\overline{\underline{X}}$.\\ \begin{flushleft} \textsc{Mathematisches Institut der Universit\"at M\"unster\\ Einsteinstrasse 62, 48149 M\"unster, Germany}\\ \textit{E-mail address}: [email protected] \end{flushleft} \end{document}
\begin{document} \def\spacingset#1{\renewcommand{\baselinestretch} {#1}\small\normalsize} \spacingset{1} \date{} \if00 { \title{ \bf Generative Quantile Regression with \\ Variability Penalty} \author[1]{Shijie Wang} \author[2]{Minsuk Shin} \author[1]{Ray Bai} \affil[1]{\normalsize Department of Statistics, University of South Carolina, Columbia, SC 29208 \newline E-mails: \href{mailto:[email protected]}{[email protected]}, \href{mailto:[email protected]}{[email protected]}} \affil[2]{\normalsize Gauss Labs, Palo Alto, CA 94301} \maketitle } \fi \if10 { \begin{center} {\LARGE\bf Title} \end{center} } \fi \begin{abstract} Quantile regression and conditional density estimation can reveal structure that is missed by mean regression, such as multimodality and skewness. In this paper, we introduce a deep learning generative model for joint quantile estimation called Penalized Generative Quantile Regression (PGQR). Our approach simultaneously generates samples from many random quantile levels, allowing us to infer the conditional distribution of a response variable given a set of covariates. Our method employs a novel variability penalty to avoid the problem of vanishing variability, or memorization, in deep generative models. Further, we introduce a new family of partial monotonic neural networks (PMNN) to circumvent the problem of crossing quantile curves. A major benefit of PGQR is that it can be fit using a single optimization, thus bypassing the need to repeatedly train the model at multiple quantile levels or use computationally expensive cross-validation to tune the penalty parameter. We illustrate the efficacy of PGQR through extensive simulation studies and analysis of real datasets. Code to implement our method is available at \href{https://github.com/shijiew97/PGQR}{\tt https://github.com/shijiew97/PGQR}. 
\end{abstract} \noindent {\it Keywords:} conditional quantile, deep generative model, generative learning, joint quantile model, neural networks, nonparametric quantile regression \spacingset{1.5} \section{Introduction}\label{sec:intro} Quantile regression is a popular alternative to classical mean regression \citep{koenker1982robust}. For a response variable $Y \in \mathbb{R}$ and covariates $\bm{X} = (X_1, X_2, \ldots, X_p)^\top \in \mathbb{R}^{p}$, we define the $\tau$-th conditional quantile, $\tau \in (0,1)$, as the $\tau$-th quantile of the conditional distribution of $Y$ given $\bm{X}$, i.e. \begin{equation} \label{quantilefn} Q_{Y \mid \bm{X}} (\tau) = \inf \{ y: F_{Y \mid \bm{X}} (y) \geq \tau \}, \end{equation} where $F_{Y \mid \bm{X}}$ is the conditional cumulative distribution function of $Y$ given $\bm X$. For a fixed quantile level $\tau \in (0, 1)$, linear quantile regression aims to model \eqref{quantilefn} as a linear combination of $\bm{X}$, i.e. $Q_{Y \mid \bm{X}} (\tau) = \bm{X}^\top \bm{\beta}_{\tau}$. Linear quantile regression is less sensitive to outliers than least squares regression and is more robust when the assumption of iid Gaussian residual errors is violated \citep{koenker2001quantile,koenker2017handbook}. Thus, it is widely used for modeling heterogeneous data and heterogeneous covariate-response associations. Numerous extensions and variants of quantile regression have been proposed. In order to reveal complex nonlinear structures, \textit{nonparametric} quantile regression is frequently considered \citep{chaudhuri2002nonparametric, li2021nonparametric, koenker1994Biometrika}. For some function class $\mathcal{F}$, nonparametric quantile regression aims to estimate $f(\bm{X}, \tau) \in \mathcal{F}$ for $Q_{Y \mid \bm{X}}(\tau) = f(\bm{X}, \tau)$ at a given quantile level $\tau$. Many existing nonparametric quantile methods are based on either kernel smoothing or basis expansions. 
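To connect the generalized inverse in \eqref{quantilefn} to computation, both it and the check (pinball) loss it induces can be sketched in a few lines of Python. This is a minimal illustrative sketch using only NumPy; the helper names are ours, not part of any released code.

```python
import numpy as np

def check_loss(u, tau):
    """Check (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

def empirical_quantile(y, tau):
    """Q(tau) = inf{ y : F(y) >= tau }, applied to the empirical CDF of a sample."""
    y = np.sort(np.asarray(y, dtype=float))
    # smallest index k with F(y_k) = (k + 1) / n >= tau
    k = max(int(np.ceil(tau * len(y))) - 1, 0)
    return y[k]
```

The second function is exactly the generalized inverse of \eqref{quantilefn} evaluated on the empirical CDF; minimizing the average check loss over a constant recovers the same quantile.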
While quantile regression has traditionally focused on estimating the conditional quantile function at a single $\tau$-th quantile level, \textit{composite} (or \textit{joint}) quantile regression aims to estimate multiple quantile levels \textit{simultaneously} for a set of $K \geq 2$ quantiles $\{ \tau_1, \tau_2, \ldots, \tau_K \}$. Some nonparametric methods for composite quantile regression are proposed in \cite{zou2008composite}, \cite{jiang2012oracle}, \cite{jiang2012single}, and \cite{xu2017composite}. Despite its flexibility, nonparametric quantile regression carries the risk that the estimated curves at different quantile levels might \textit{cross} each other. For instance, the estimate of the 95th conditional quantile of $Y_i$ given some covariates $\bm{X}_i$ might be smaller than the estimate of the 90th conditional quantile. This is known as the \textit{crossing quantile} phenomenon. The left two panels of Figure \ref{fig:crossdata} illustrate this crossing problem. When quantile curves cross each other, the quantile estimates violate the laws of probability and are no longer reasonable. To tackle this issue, many approaches have been proposed, such as constraining the model space \citep{takeuchi2006nonparametric,sangnier2016joint,moon2021learning} or the model parameters \citep{meinshausen2006quantile,cannon2018non}. Recently in the area of deep learning, neural networks have also been applied to nonparametric quantile regression. \cite{taylor2000quantile} used feedforward neural networks (FNNs) to estimate conditional quantiles. \cite{takeuchi2006nonparametric} considered support vector machines in combination with quantile regression. Neural networks have also been used for composite quantile regression by \cite{xu2017composite} and \cite{jin2021composite}. Existing neural network-based approaches for joint quantile estimation are restricted to a prespecified quantile set. If only several quantiles (e.g. 
the quartiles $\tau \in \{ 0.25, 0.5, 0.75 \}$) are of interest, then these methods are adequate to produce the desired inference. Otherwise, one typically has to refit the model at new quantiles \textit{or} specify a large enough quantile candidate set in order to infer the full conditional density $p(Y \mid \bm{X})$. \cite{dabney2018implicit} introduced the implicit quantile network (IQN), which takes a grid of quantile levels as inputs and approximates the conditional density at \textit{any} quantile level (not just the given inputs). However, IQN cannot guarantee that quantile functions at different levels do not cross each other. Composite quantile regression is closely related to the problem of \textit{conditional density estimation} (CDE) of $p(Y \mid \bm{X})$. Apart from traditional CDE methods that estimate the unknown probability density curve \citep{izbicki2017converting,https://doi.org/10.48550/arxiv.1906.07177}, several deep \textit{generative} approaches besides IQN have also been proposed. \cite{zhou2022deep} introduced the generative conditional distribution sampler (GCDS), while \cite{liu2021wasserstein} introduced the Wasserstein generative conditional sampler (WGCS) for CDE. GCDS and WGCS employ the idea of generative adversarial networks (GANs) \citep{NIPS2014_5ca3e9b1} to \textit{generate samples} from the conditional distribution $p(Y \mid \bm{X})$. In these deep generative models, random noise inputs are used to learn and generate samples from the target distribution. However, a well-known problem with deep generative networks is that the generator may simply memorize the training samples instead of generalizing to new test data \citep{Arpit2017ICML, vandenBurg2021NeurIPS}. When \textit{memorization} occurs, the random noise variable no longer reflects any variability, and the predicted density $p(Y \mid \bm{X})$ approaches a point mass. We refer to this phenomenon as \textit{vanishing variability}. 
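Returning to the crossing-quantile phenomenon described earlier: it is easy to detect mechanically, since quantile estimates evaluated on a grid of levels must be nondecreasing in $\tau$ at every covariate point. A minimal sketch (the helper name is ours, purely for illustration):

```python
import numpy as np

def has_crossing(q_hat):
    """q_hat: array of shape (n_points, n_levels), with columns ordered by
    increasing quantile level tau. Returns True if, at any covariate point,
    the estimate of a higher quantile falls below that of a lower one."""
    q_hat = np.asarray(q_hat, dtype=float)
    return bool(np.any(np.diff(q_hat, axis=1) < 0))
```

A monotone architecture such as the PMNN family introduced later makes this check pass by construction rather than by post hoc correction.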
To achieve the goal of joint quantile estimation as well as conditional density estimation, we propose a new deep generative approach called penalized generative quantile regression (PGQR). PGQR employs a novel variability penalty to avoid vanishing variability. Although introduced in the context of joint quantile regression, we believe that our penalty formulation is broadly applicable to other deep generative models that may suffer from memorization. We further guarantee the monotonicity (i.e. non-crossing) of the estimated quantile functions from PGQR by designing a \textit{new} family of partial monotonic neural networks (PMNNs). The PMNN architecture ensures partial monotonicity with respect to quantile inputs, while retaining the expressiveness of neural networks. The performance of PGQR depends crucially on carefully choosing a hyperparameter $\lambda$, which controls the amount of regularization by our variability penalty. However, common tuning procedures such as cross-validation are impractical for deep learning, since this would involve repetitive evaluations of the network for different choices of $\lambda$ and different training sets. Inspired by the generative idea from \cite{https://doi.org/10.48550/arxiv.2006.00767}, we construct our deep generative network in such a way that $\lambda$ is included as an additional random input. This way, only a \textit{single} optimization is needed to learn the neural network parameters, and then it is effortless to generate samples from a set of candidate values for $\lambda$. Finally, we introduce a criterion for selecting the optimal choice of $\lambda$ which we use to generate the final desired samples from multiple quantiles of $p(Y \mid \bm{X})$ simultaneously. Our main contributions can be summarized as follows: \begin{enumerate} \item We propose a deep generative approach for composite quantile regression and conditional density estimation called \textit{penalized generative quantile regression}. 
PGQR simultaneously generates samples from \textit{multiple} random quantile levels, thus precluding the need to refit the model at different quantiles. \item We introduce a novel variability penalty to avoid the vanishing variability phenomenon in deep generative models and apply this regularization technique to PGQR. \item We construct a new family of partial monotonic neural networks (PMNNs) to circumvent the problem of crossing quantile curves. \item To facilitate scalable computation for PGQR and bypass computationally expensive cross-validation for tuning the penalty parameter, we devise a strategy that allows PGQR to be implemented using only a \textit{single} optimization. \end{enumerate} The rest of the article is structured as follows. Section \ref{sec:Motivation} motivates our PGQR framework with a real data application on body composition and strength in older adults. Section \ref{sec:PGQR} introduces the generative quantile regression framework and our variability penalty. In Section \ref{sec:mono}, we introduce the PMNN family for preventing quantile curves from crossing each other. Section \ref{sec:computation} discusses scalable computation for PGQR, namely how to tune the variability penalty parameter with only single-model training. In Section \ref{sec:sim}, we demonstrate the utility of PGQR through simulation studies and analyses of additional real datasets. Section \ref{sec:conclusion} concludes the paper with some discussion and directions for future research. \section{Motivating Application: Discovering Subpopulations} \label{sec:Motivation} As discussed in Section \ref{sec:intro}, we aim to generate samples from the conditional quantiles $Q_{Y \mid \bm{X}} (\tau)$ of $p(Y \mid \bm{X})$ at different quantile levels $\tau \in (0,1)$. 
An automatic byproduct of joint \textit{nonparametric} quantile regression (as opposed to linear quantile regression) is that if the conditional quantiles $Q_{Y \mid \bm{X}} (\tau)$ are estimated well for a large number of quantiles, then we can also infer the \textit{entire} conditional distribution for $Y$ given $\bm{X}$. \begin{figure} \caption{\small Using PGQR to model the conditional density of AP-L/F ratio given weight in older adults. Left panel: Weight = 45.4 kg, Right panel: Weight = 77.5 kg.} \label{fig:bimodal} \end{figure} To motivate our methodology, we apply the proposed PGQR method (introduced in Section \ref{sec:PGQR}) to a dataset on body composition and strength in older adults \citep{RoyChoudhury2020}. The data was collected over a period of 12 years for 1466 subjects as part of the Rancho Bernardo Study (RBS), a longitudinal observational cohort study. We are interested in modeling the appendicular lean/fat (AP-L/F) ratio, i.e. \begin{align*} \text{AP-L/F Ratio} = \frac{\text{Weight on legs and arms}}{\text{Fat weight}}, \end{align*} as a function of weight (kg). Accurately predicting the AP-L/F ratio is of practical clinical interest, since the AP-L/F ratio provides information about limb tissue quality and is used to diagnose sarcopenia (age-related, involuntary loss of skeletal muscle mass and strength) in adults over the age of 30 \citep{Evans2010AJCN, Scafoglieri2017}. Figure \ref{fig:bimodal} plots the approximated conditional density of AP-L/F ratio given weight of 45.4 kg (left panel) and 77.5 kg (right panel) under the PGQR model. We see evidence of data heterogeneity (in fact driven by an unobserved factor, gender), as the estimated conditional density is unimodal when the weight of older adults is 45.4 kg but \textit{bimodal} when the weight of older adults is 77.5 kg. In short, our method \textit{discovers} the presence of two heterogeneous subpopulations of adults that weigh around 78 kg. In contrast, mean regression (e.g. 
simple linear regression or nonparametric mean regression) of AP-L/F ratio given weight might obscure the presence of two modes and miss the fact that weight affects AP-L/F ratio differently for these two clusters of adults. \section{Penalized Generative Quantile Regression} \label{sec:PGQR} \subsection{Generative Quantile Regression} \label{sec:GQRnopenalty} Before introducing PGQR, we first introduce our framework for generative quantile regression (without variability penalty). Given $n$ training samples $(\bm{X}_1, Y_1), (\bm{X}_2, Y_2), \ldots, (\bm{X}_n, Y_n)$ and quantile level $\tau \in (0,1)$, nonparametric quantile regression minimizes the empirical quantile loss, \begin{equation} \label{nonparametricquantile} \operatornamewithlimits{argmin}_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n\rho_{\tau} \left( Y_i-f(\bm{X}_i, \tau) \right), \end{equation} where $\rho_{\tau}(u) = u(\tau-I(u<0))$ is the check function, and $\mathcal{F}$ is a function class such as a reproducing kernel Hilbert space or a family of neural networks. Neural networks are a particularly attractive way to model the quantile function $f(\bm{X}, \tau)$ in \eqref{nonparametricquantile}, because they are universal approximators for any Lebesgue integrable function \citep{lu2017expressive}. In this paper, we choose to model the quantile function in \eqref{nonparametricquantile} using deep neural networks (DNNs). We define a DNN as a neural network with at least two hidden layers and a fairly large number of nodes. We refer to \cite{EmmertStreib2020FrontiersinAI} for a comprehensive review of DNNs. Despite the universal approximation properties of DNNs, we must \textit{also} take care to ensure that our estimated quantile functions do \textit{not} cross each other. Therefore, we have to consider a wide enough \textit{monotonic} function class ${\cal G}^m$ to cover the true $Q_{Y \mid \bm{X}}(\cdot)$. We formally introduce this class ${\cal G}^m$ in Section \ref{sec:mono}. 
Let $G$ denote the constructed DNN, which is defined as a feature map function $\{ G \in {\cal G}^m:\mathbb{R}^{p+1} \mapsto\mathbb{R}^1 \}$ that takes $(\bm{X}_i, \tau)$ as input and generates the conditional quantile for $Y_i$ given $\bm{X}_i$ at level $\tau$ as output. We refer to this generative framework as \emph{Generative Quantile Regression} (GQR). The optimization problem for GQR is \begin{equation}\label{eq:GQR} \widehat{G} = \underset{G \in {\cal G}^m}{\operatornamewithlimits{argmin}} \ \frac{1}{n} \sum_{i=1}^n \mathbb{E}_{\tau} \big\{ \rho_{\tau} \big(Y_i - G(\bm{X}_i, \tau) \big) \big\}, \end{equation} where $\widehat{G}$ denotes the estimated quantile function with optimized parameters (i.e. the weights and biases) of the DNN. Note that optimizing the integrative loss $\mathbb{E}_{\tau}\{ \cdot \}$ over $\tau$ in \eqref{eq:GQR} is justified by Proposition \ref{prop:PGQRexistence} introduced in the next section (i.e. we set $\lambda=0$ in Proposition \ref{prop:PGQRexistence}). In order to solve \eqref{eq:GQR}, we can use stochastic gradient descent (SGD) with mini-batching \citep{EmmertStreib2020FrontiersinAI}. For each mini-batch evaluation, $\bm{X}_i$ is paired with a quantile level $\tau$ sampled from $\textrm{Uniform}(0,1)$. Consequently, if there are $M$ mini-batches, the expectation in \eqref{eq:GQR} can be approximated by a Monte Carlo average of the random $\tau$'s, i.e. we approximate $\mathbb{E}_{\tau} \big\{ \rho_{\tau} \big(Y_i - G(\bm{X}_i, \tau) \big) \big\}$ with $ M^{-1} \sum_{k=1}^{M} \big\{ \rho_{\tau_k} \big( Y_i - G(\bm{X}_i, \tau_k) \big) \big\}$. Once we have solved \eqref{eq:GQR}, it is straightforward to use $\widehat{G}$ to generate \textit{new} samples $\widehat{G}(\bm{X}, \xi_k), k =1, 2, \ldots, b$, at various quantile levels $\bm{\xi} = \{ \xi_1, \xi_2, \ldots, \xi_b \}$, where the $\xi$'s are random $\textrm{Uniform}(0,1)$ noise inputs. 
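The Monte Carlo approximation of the integrative loss in \eqref{eq:GQR} can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation: the generator is an arbitrary callable standing in for the PMNN, and the function names are ours.

```python
import numpy as np

def rho(u, tau):
    """Check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def gqr_loss(G, X, Y, n_tau=64, rng=None):
    """Approximate (1/n) * sum_i E_tau[ rho_tau(Y_i - G(X_i, tau)) ] by
    averaging the check loss over randomly drawn quantile levels."""
    if rng is None:
        rng = np.random.default_rng(0)  # fixed seed for reproducibility
    taus = rng.uniform(0.0, 1.0, size=n_tau)
    return float(np.mean([np.mean(rho(Y - G(X, t), t)) for t in taus]))
```

As a sanity check, for $Y \sim \textrm{Uniform}(0,1)$ independent of $\bm{X}$, the oracle generator $G(\bm{X},\tau)=\tau$ attains a smaller integrative loss than any constant predictor, consistent with the check loss being minimized at the true conditional quantile.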
Provided that $b$ is large enough, the generated samples at $\bm{\xi} \in (0,1)^{b}$ can be used to reconstruct the full conditional density $p(Y \mid \bm{X})$. As discussed in Section \ref{sec:intro}, there are several other deep generative models \citep{zhou2022deep, liu2021wasserstein} for generating samples from the underlying conditional distribution. These generative approaches also take random noise $z$ as an input (typically $z \sim \mathcal{N}(0,1)$) to reflect variability, but there is no specific statistical meaning for $z$. In contrast, the random noise $\tau$ in GQR \eqref{eq:GQR} has a clear interpretation as a quantile level $\tau \in (0,1)$. Although GQR and these other deep generative approaches are promising for conditional sampling, these methods are all unfortunately prone to memorization of the training data. When this occurs, the random noise does not generate \textit{any} variability. To remedy this, we now introduce our \textit{variability penalty} in conjunction with our GQR loss function \eqref{eq:GQR}. \subsection{Variability Penalty for GQR}\label{sec:pen} When a DNN is so overparameterized that it memorizes the training data instead of capturing its underlying structure, the random noise input to the DNN is likely to reflect no variability, no matter what value it takes. We refer to this phenomenon as \textit{vanishing variability}, and it is a common problem in deep generative models \citep{Arpit2017ICML, Arora2017, vandenBurg2021NeurIPS}. To be more specific, let $G(\cdot, z)$ be the generator function constructed by a DNN, where $z$ is a random noise variable following some reference distribution such as a standard Gaussian or a standard uniform. The vanishing variability phenomenon occurs when \begin{equation}\label{eq:overfit} \widehat{G}(\bm{X}_i,z) = Y_i, \hspace{.2cm} i = 1,...,n. \end{equation} In other words, different random noise inputs $z$ produce no variability, and the generator almost surely outputs a discrete point mass at the training data. 
Additionally, given a \textit{new} feature vector $\bm{X}_{\textrm{new}}$, $G(\bm{X}_{\textrm{new}}, z)$ can only generate one novel sample from the true data distribution because of vanishing variability. Since GQR takes the training data $\bm{X}\in \mathbb{R}^{p}$ as input and the target quantile estimate lies in $\mathbb{R}^1$, the weights in the DNN associated with $\bm{X}\in \mathbb{R}^{p}$ are very likely to overwhelm those associated with the noise input $\tau$. As a result, GQR is very prone to encountering vanishing variability, as are other generative approaches with multidimensional features. From another point of view, we can see that due to the nonnegativity of the check function, the GQR loss \eqref{eq:GQR} achieves a minimum value of zero when we have vanishing variability \eqref{eq:overfit}. To remedy this problem, we propose a new regularization term that encourages the network to have \textit{more} variability when vanishing variability occurs. Given a set of features $\bm{X} \in \mathbb{R}^{p}$, the proposed \textit{variability penalty} is formulated as follows: \begin{equation}\label{eq:pen} \textrm{pen}_{\lambda, \alpha}(G(\bm{X},\tau),G(\bm{X},\tau^\prime)) = - \lambda \log\left \{ \Vert G(\bm{X},\tau) - G(\bm{X},\tau^\prime)\Vert_1 + 1/\alpha\right\}, \end{equation} where $\lambda \geq 0$ is the hyperparameter controlling the degree of penalization, $\alpha>0$ is a fixed hyperparameter, and $\tau,\tau^\prime \sim \textrm{Uniform}(0,1)$. The addition of $1 / \alpha$ inside the logarithmic term of \eqref{eq:pen} is mainly to ensure that the penalty function is always well-defined (i.e. the quantity inside of $\log \{ \cdot \}$ of \eqref{eq:pen} cannot equal zero). Our sensitivity analysis in Section \ref{subsec:Robustana} of the Appendix shows that our method is not usually sensitive to the choice of $\alpha$. We find that fixing $\alpha=1$ or $\alpha=5$ works well in practice. 
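The penalty \eqref{eq:pen} itself is a one-liner. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def variability_penalty(g_tau, g_tau_prime, lam=0.1, alpha=1.0):
    """pen_{lam,alpha} = -lam * log( ||G(X,tau) - G(X,tau')||_1 + 1/alpha ).
    Largest (equal to zero when alpha = 1) under vanishing variability, and
    decreasing as the two generated outputs move apart."""
    v = np.sum(np.abs(np.asarray(g_tau) - np.asarray(g_tau_prime)))
    return float(-lam * np.log(v + 1.0 / alpha))
```

With $\alpha = 1$, identical outputs at two quantile levels incur the maximal penalty of $0$, while any separation between the outputs drives the penalty negative, rewarding the network for genuine variability.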
With the addition of the variability penalty \eqref{eq:pen} to our GQR loss function \eqref{eq:GQR}, our \textit{penalized} GQR (PGQR) method solves the optimization, \begin{equation}\label{eq:PGQR} \widehat{G} = \underset{G \in {\cal G}^{m}}{\operatornamewithlimits{argmin}} \frac{1}{n} \sum_{i=1}^n\left[ \mathbb{E}_{\tau} \left\{ \rho_{\tau} \big(Y_i - G(\bm{X}_i, \tau) \big)\right\} + \mathbb{E}_{\tau, \tau^\prime} \left\{\text{pen}_{\lambda, \alpha}\left( G(\bm{X}_i,\tau), G(\bm{X}_i, \tau^\prime) \right) \right\}\right], \end{equation} where ${\cal G}^{m}$ denotes the PMNN family introduced in Section \ref{sec:mono}, and $\tau$ and $\tau^\prime$ independently follow a uniform distribution on $(0,1)$. The expectation $\mathbb{E}_{\tau, \tau^\prime}$ is again approximated by a Monte Carlo average. Note that when $\lambda = 0$, the PGQR loss \eqref{eq:PGQR} reduces to the (non-penalized) GQR loss \eqref{eq:GQR}. The next proposition states that an estimated generator function $\widehat{G}(\bm{X}, \tau)$ under the PGQR objective \eqref{eq:PGQR} is equivalent to a neural network estimator $\widehat{g}_{\tau}(\bm{X})$ based on the individual (non-integrative) quantile loss function, provided that the family $\mathcal{G}^{m}$ in \eqref{eq:PGQR} is large enough. \begin{proposition}[equivalence of $\widehat g_\tau({\bm X})$ and $\widehat G({\bm X},\tau)$] \label{prop:PGQRexistence} For fixed $\lambda \geq 0$ and $\alpha > 0$, let $\widehat g_{\tau}(\bm{X}) =\operatornamewithlimits{argmin}_{g \in \mathcal{H} } \sum_{i=1}^n \left[\rho_\tau(Y_i-g(\bm{X}_i)) + \mathbb{E}_{\tau, \tau^\prime} \{ \text{\emph{pen}}_{\lambda, \alpha}(g(\bm{X}_i,\tau),g(\bm{X}_i,\tau^\prime)) \}\right]$, for a class of neural networks $\mathcal{H}$, and $ \tau,\tau^\prime\overset{\text{iid}}\sim \text{Uniform}(0,1)$. Consider a class of generator functions $\mathcal{G}$, where $\{ G \in \mathcal{G} :\mathbb{R}^p\times\mathbb{R}\times \mathbb{R }\mapsto \mathbb{R}\}$. 
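Putting the pieces together, a Monte Carlo evaluation of the PGQR objective \eqref{eq:PGQR} can be sketched as follows. Again this is an illustrative NumPy sketch rather than the authors' implementation; the generator is an arbitrary callable, and the default $\lambda = 1$ is our choice for the demonstration.

```python
import numpy as np

def rho(u, tau):
    """Check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def pgqr_loss(G, X, Y, lam=1.0, alpha=1.0, n_mc=200, seed=0):
    """Monte Carlo estimate of the PGQR objective: for random pairs
    (tau, tau'), average the check loss at tau plus the per-observation
    variability penalty on G(X_i, tau) - G(X_i, tau')."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_mc):
        tau, tau_p = rng.uniform(size=2)
        fit = np.mean(rho(Y - G(X, tau), tau))
        pen = np.mean(-lam * np.log(np.abs(G(X, tau) - G(X, tau_p)) + 1.0 / alpha))
        total += fit + pen
    return total / n_mc
```

With $\alpha = 1$, a memorizing generator $G(\bm{X}_i, \tau) = Y_i$ scores exactly zero (zero fit loss and zero penalty), while a generator with genuine quantile variability attains a strictly negative objective for moderate $\lambda$, so memorization is no longer optimal.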
Assume that for all $\bm{X} \in\mathcal{X} \subset \mathbb{R}^p$ and quantile levels $\tau \in (0,1)$, there exists $G \in\mathcal{G}$ such that the target neural network $\widehat g_{\tau}(\bm{X})$ can be represented by a hyper-network $G( \bm{X},\tau)$. Then, for $i=1,\dots,n$, $$ \widehat g_{\tau}(\bm{X}_i)=\widehat G(\bm{X}_i,\tau) \: \text{ a.s.}, $$ with respect to the probability law related to $\mathbb{E}_{\tau}$, where $\widehat{G}$ is a solution to \eqref{eq:PGQR}. \end{proposition} \begin{proof} See Appendix \ref{sec:proofs}. \end{proof} Proposition \ref{prop:PGQRexistence} justifies optimizing the \textit{integrative} quantile loss over $\tau$ in the PGQR objective \eqref{eq:PGQR}. The next proposition justifies adding the variability penalty \eqref{eq:pen} to the GQR loss \eqref{eq:GQR} by showing that there exists $\lambda>0$ so that memorization of the training data does \textit{not} occur under PGQR. \begin{proposition}[PGQR does not memorize the training data] \label{prop:nomemorization} Suppose that $\alpha > 0$ is fixed in the variability penalty \eqref{eq:pen}. Denote any minimizer of the PGQR objective function \eqref{eq:PGQR} by $\widehat{G}$. Then, there exists $\lambda>0$ such that \begin{align*} \widehat{G}(\bm{X}_i, \tau) \neq Y_i \hspace{.2cm} \textrm{for at least one } i = 1, \ldots, n. \end{align*} \end{proposition} \begin{proof} See Appendix \ref{sec:proofs}. \end{proof} \begin{figure} \caption{\small A plot of the penalty function $\textrm{pen}_{\lambda, \alpha}(V)$ for several pairs of $(\alpha, \lambda)$.} \label{fig:penalty} \end{figure} We now provide some intuition into the variability penalty \eqref{eq:pen}. We can quantify the amount of variability in the network by $V := G(\bm{X},\tau) - G(\bm{X},\tau^\prime)$. Clearly, $V=0$ when vanishing variability \eqref{eq:overfit} occurs. To avoid data memorization, we should thus penalize $V$ \textit{more heavily} when $V \approx 0$, with the maximum amount of penalization being applied when $V = 0$. 
Figure \ref{fig:penalty} plots the variability penalty, $\textrm{pen}_{\lambda, \alpha}(V) = -\lambda \log ( \lvert V \rvert + 1/\alpha)$, for several pairs of $(\alpha, \lambda)$. Figure \ref{fig:penalty} shows that $\textrm{pen}_{\lambda, \alpha}(V)$ is sharply peaked at zero and strictly convex and decreasing in $\lvert V \rvert$. Therefore, maximum penalization occurs when $V = 0$, and there is less penalization for larger $\lvert V \rvert$. As $\lambda$ increases, $\textrm{pen}_{\lambda, \alpha}(V)$ also increases for all values of $V \in (-\infty, \infty)$, so larger values of $\lambda$ lead to more penalization. Our penalty function takes the specific form \eqref{eq:pen}. However, any other penalty on $ G(\bm{X}, \tau) - G(\bm{X}, \tau^\prime)$ for $\bm{X} \in \mathcal X$ with a similar shape as the PGQR penalty (i.e. sharply peaked around zero, as in Figure \ref{fig:penalty}) would also conceivably encourage the network to have greater variability. Another way to control the variability is to directly penalize the empirical variance $s^2 = (n-1)^{-1} \sum_{i=1}^{n} [G(\bm{X}_i, \tau) - \bar{G}]^2$, where $\bar{G} = n^{-1} \sum_{i=1}^{n} G(\bm{X}_i, \tau)$, or the empirical standard deviation $s = \sqrt{s^2}$. However, penalties on $s^2$ or $s$ do \textit{not} have an additive form, as the individual penalty terms depend on one another; they are therefore \textit{not} conducive to SGD-based optimization. We now pause to highlight the novelty of our constructed variability penalty. In many other nonparametric models, e.g. those based on nonparametric smoothing, a roughness penalty is often added to an empirical loss function in order to control the smoothness of the function or density estimate \citep{GreenSilverman1994}. These roughness penalties encourage greater smoothness of the target function in the spirit of reducing variance in the bias-variance trade-off.
In contrast, our variability penalty \eqref{eq:pen} directly penalizes generators with \textit{too small} variance. By adding the penalty to the GQR loss \eqref{eq:GQR}, we are encouraging the estimated network to have \textit{more} variability. \begin{figure} \caption{\small Plots of the estimated conditional densities of $p(Y \mid \bm{X}_{\text{test}})$ for three out-of-sample test points $\bm{X}_{\text{test}}$ under non-penalized GQR and PGQR with $\lambda^{\star} = 0.01$ and $\lambda = 2.5$, compared to the true conditional density.} \label{fig:overfit} \end{figure} To illustrate how PGQR prevents vanishing variability, we carry out a small simulation study under the model, $Y_i = \bm{X}_i^\top \boldsymbol{\beta} + \epsilon_i, i = 1, \ldots, n$, where $\bm{X}_i \overset{iid}{\sim} \mathcal{N}(\boldsymbol{0},\bm{I}_{20})$, $\epsilon_i \overset{iid}{\sim} \mathcal{N}(0,1)$, and the coefficients in $\boldsymbol{\beta}$ are equispaced over $[-2, 2]$. We used 2000 samples to train GQR \eqref{eq:GQR} and PGQR \eqref{eq:PGQR}, and an additional 200 validation samples were used for tuning parameter selection of $\lambda$ (described in Section \ref{sec:select}). To train the generative network, we fixed $\alpha=1$ and tuned $\lambda$ from a set of 100 equispaced values between $0$ and $2.5$. We then generated 1000 samples $\{ \widehat{G}(\bm{X}_{\text{test}},\xi_k,\lambda) \}_{k=1}^{1000}$, $\xi_k \overset{iid}{\sim} \textrm{Uniform}(0,1)$, from the estimated conditional density $p(Y \mid \bm{X}_{\text{test}})$ for three different choices of out-of-sample test data for $\bm{X}_{\text{test}}$. In our simulation, the tuning parameter selection procedure introduced in Section \ref{sec:select} chose an optimal $\lambda$ of $\lambda^{\star} = 0.01$. As shown in Figure \ref{fig:overfit}, the \textit{non}-penalized GQR model introduced in Section \ref{sec:GQRnopenalty} (dashed orange line) suffers from vanishing variability. Namely, GQR generates values near the true test sample $Y_{\text{test}}$ almost surely, despite the fact that we used 1000 different inputs for $\xi$ to generate the $\widehat{G}$ samples.
In contrast, the PGQR model with optimal $\lambda^{\star} = 0.01$ (solid blue line with filled circles) approximates the true conditional density (solid black line) very closely, capturing the Gaussian shape and matching the true underlying variance. On the other hand, Figure \ref{fig:overfit} also illustrates that if $\lambda$ is chosen to be \textit{too} large in PGQR \eqref{eq:PGQR}, then the resulting approximated conditional density will exhibit larger variance than it should. In particular, when $\lambda = 2.5$, Figure \ref{fig:overfit} shows that PGQR (dashed green line with hollow triangles) \textit{overestimates} the true conditional variance for $Y$ given $\bm{X}$. This demonstrates that the choice of $\lambda$ in the variability penalty \eqref{eq:pen} plays a crucial role in the practical performance of PGQR. In Section \ref{sec:select}, we describe how to select an ``optimal'' $\lambda$ so that PGQR neither underestimates \textit{nor} overestimates the true conditional variance. Our method for tuning $\lambda$ also requires only a \textit{single} optimization, making it an attractive and scalable alternative to cross-validation. \section{Partial Monotonic Neural Network}\label{sec:mono} Without any constraints on the network, PGQR is just as prone as other unconstrained nonparametric quantile regression models to suffer from crossing quantiles. For a fixed data point $\bm{X}_i$ and an estimated network $\widehat{G}$, the \textit{crossing problem} occurs when \begin{equation}\label{eq:cross} \widehat{G}(\bm{X}_i,\tau_1) > \widehat{G}(\bm{X}_i,\tau_2) \:\text{ when }\: 0<\tau_1<\tau_2<1. \end{equation} In the neural network literature, one popular way to address the crossing problem is to add a penalty term to the loss such as $\max(0, -\partial G(\bm{X}_i;\tau)/\partial \tau)$ \citep{tagasovska2018frequentist, liu2020certified}. Through regularization, the network is encouraged to have larger partial derivatives with respect to $\tau$.
However, adding this penalty merely alleviates the crossing problem: there is no guarantee that the final model is free of crossing quantiles \citep{tagasovska2018frequentist}, and one may have to keep training the model until the final network is certified to be monotonic with respect to $\tau$, at the expense of heavy computational cost \citep{liu2020certified}. Another natural way to construct a monotonic neural network (MNN) is to restrict the weights of the network to be nonnegative through a transformation or through weight clipping \citep{832655, daniels2010monotone,mikulincer2022size}. Because \textit{all} the weights are constrained to be nonnegative, this class of MNNs may require longer optimization (i.e. more epochs in SGD) and/or a large number of hidden neurons to ensure the network's final expressiveness. Instead of constraining \textit{all} the weights in the neural network, we make a simple modification to the MNN architecture which we call the \textit{partial} monotonic neural network (PMNN). The PMNN family consists of \textit{two} feedforward neural networks (FNNs) as sub-networks which divide the input into two segments. The first sub-network is a weight-constrained quantile network $\mathbb{R}^1 \mapsto \mathbb{R}^h$, where the only input is the quantile level $\tau$ and $h$ denotes the number of hidden neurons. The FNN structure for one hidden layer in this sub-network is $g(\tau) = \sigma(\bm{U}_{\text{pos}} \tau+\bm{b})$, where $\bm{U}_{\text{pos}}$ is a matrix of \textit{nonnegative} weights, $\bm{b}$ is the bias vector, and $\sigma$ is the activation function, such as the hyperbolic tangent (tanh) function $\sigma(x) = \tanh(x)$ or the rectified linear unit (ReLU) function $\sigma(x) = \max \{0, x \}$. The $K_1$-layer constrained sub-network $g_c$ can then be constructed as $g_{c}(\tau) = g_{K_1} \circ \dots \circ g_1$.
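To make the weight constraint concrete, the following NumPy sketch implements a single constrained hidden layer $g(\tau) = \sigma(\bm{U}_{\text{pos}}\tau + \bm{b})$. Enforcing nonnegativity by exponentiating an unconstrained parameter is one common device and is an assumption of this sketch, not a description of our exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def constrained_layer(tau, U_raw, b):
    """One hidden layer g(tau) = tanh(U_pos * tau + b), with the
    nonnegative weights U_pos obtained as exp(U_raw) from an
    unconstrained parameter U_raw."""
    U_pos = np.exp(U_raw)  # reparameterization guarantees nonnegative weights
    return np.tanh(U_pos * tau + b)

h = 8                       # number of hidden neurons
U_raw = rng.normal(size=h)  # unconstrained parameters
b = rng.normal(size=h)      # bias vector

# Because U_pos >= 0 and tanh is increasing, the layer output is
# coordinate-wise nondecreasing in the quantile level tau.
g_lo = constrained_layer(0.2, U_raw, b)
g_hi = constrained_layer(0.8, U_raw, b)
```

Stacking such layers preserves monotonicity in $\tau$, which is the property the constrained sub-network exploits.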
The second sub-network is a $K_2$-layer \textit{unconstrained} network $g_{uc}: \mathbb{R}^p \mapsto \mathbb{R}^h$, taking the data $\bm{X}$ as inputs. The structure of one hidden layer in this sub-network is $g^{\prime}(\bm{X}) = \sigma(\bm{U} \bm{X} +\bm{b})$, where $\bm{U}$ is an \textit{unconstrained} weights matrix. We construct $g_{uc}$ as $g_{uc}(\bm{X}) = g_{K_2}^{\prime} \circ \dots \circ g_1^{\prime}$. Finally, we construct a single weight-constrained connection layer $f:\mathbb{R}^h \mapsto \mathbb{R}^1$ that maps the combined outputs of the two sub-networks to the quantile estimate, \begin{equation}\label{eq:pmmn} G(\bm{X}, \tau) = f \circ (g_c + g_{uc}). \end{equation} With our PMNN architecture, the monotonicity of $G(\cdot,\tau)$ with respect to $\tau$ is guaranteed by the positive weights in the quantile sub-network $g_c$ and the final layer $f$. At the same time, since the data sub-network $g_{uc}$ is unconstrained in its weights, $g_{uc}$ can learn the features of the training data well. In particular, the PMNN family is more flexible in its ability to learn features of the data and is easier to optimize than MNN because of this unconstrained sub-network. We use our proposed PMNN architecture as the family ${\cal G}^{m}$ over which to optimize the PGQR objective for all of the simulation studies and real data analyses in this manuscript. \begin{figure} \caption{\small Estimated quantile curves at levels $\tau \in \{ 0.1, 0.2, 0.3 , 0.4, 0.5, 0.6, 0.7, 0.8, 0.9\}$ for the motorcycle and BMD datasets under unconstrained nonparametric quantile regression (left two panels) and PGQR with the PMNN family (right two panels).} \label{fig:crossdata} \end{figure} To investigate the performance of our proposed PMNN family of neural networks, we fit the PGQR model with PMNN as the function class ${\cal G}^m$ on two benchmark datasets. The first dataset is the motorcycles data \citep{silverman1985some}, where the response variable is head acceleration (in g) and the predictor is time from crash impact (in ms).
The second application is a bone mineral density (BMD) dataset \citep{takeuchi2006nonparametric} where the response is the standardized relative change in spinal BMD in adolescents and the predictor is the standardized age of the adolescents. For these two datasets, we estimated the quantile functions at different quantile levels $\tau \in \{ 0.1, 0.2, 0.3 , 0.4, 0.5, 0.6, 0.7, 0.8, 0.9\}$ over the domain of the predictor. We compared PGQR under PMNN to the unconstrained nonparametric quantile regression approach implemented in the \textsf{R} package \texttt{quantreg}. The left two panels of Figure \ref{fig:crossdata} demonstrate that the estimated quantile curves under \textit{unconstrained} nonparametric quantile regression are quite problematic for both the motorcycle and BMD datasets; the quantile curves cross each other at multiple points in the predictor domain. In comparison, the right two panels of Figure \ref{fig:crossdata} show that PGQR with the PMNN family ensures \textit{no} crossing quantiles for either dataset. \section{Scalable Computation for PGQR}\label{sec:computation} \subsection{Single-Model Training for PGQR} \label{sec:singlemodeltraining} As illustrated in Section \ref{sec:pen}, PGQR's performance depends greatly on the regularization parameter $\lambda \geq 0$ in the variability penalty \eqref{eq:pen}. If $\lambda$ is too small or if $\lambda = 0$, then we may underestimate the true conditional variance of $p(Y \mid \bm{X})$ and/or encounter the vanishing variability phenomenon. On the other hand, if $\lambda$ is too large, then we may overestimate the conditional variance. Due to the large number of parameters in a DNN, tuning $\lambda$ would be quite burdensome if we had to repeatedly evaluate the deep generative network $G(\cdot,\tau)$ for multiple choices of $\lambda$ and training sets. In particular, using cross-validation to tune $\lambda$ is computationally infeasible for DNNs, especially if the size of the training set is large. 
Inspired by the idea of \emph{Generative Multiple-purpose Sampler} (GMS) introduced by \cite{https://doi.org/10.48550/arxiv.2006.00767}, we instead propose to \textit{include} $\lambda$ as an additional input in the generator, along with data $\bm{X}$ and quantile $\tau$. In other words, we impose a discrete uniform distribution $p(\lambda)$ for $\lambda$ whose support $\Lambda$ is a grid of candidate values for $\lambda$. We then estimate the network $G(\cdot,\tau,\lambda)$ with the modified PGQR loss function, \begin{equation}\label{eq:PGQR2} \widehat{G} = \underset{G \in {\cal G}^m}{\operatornamewithlimits{argmin}} \ \frac{1}{n} \sum_{i=1}^n \mathbb{E}_{\tau, \lambda} \bigg[ \rho_{\tau} \big(Y_i - G(\bm{X}_i, \tau, \lambda) \big) + \mathbb{E}_{\tau, \tau'} \left\{ \text{pen}_{\lambda, \alpha} \left( G(\bm{X}_i, \tau, \lambda), G(\bm{X}_i, \tau^\prime, \lambda) \right) \right\} \bigg]. \end{equation} Note that \eqref{eq:PGQR2} differs from the earlier formulation \eqref{eq:PGQR} in that $\lambda \in \Lambda$ is \textit{not} fixed \textit{a priori}. In the PMNN family ${\cal G}^m$, $\lambda$ is included in the unconstrained sub-network $g_{uc}: \mathbb{R}^{p+1} \mapsto \mathbb{R}^h$. The next corollary to Proposition \ref{prop:PGQRexistence} justifies including $\lambda$ in the generator and optimizing the integrative loss over $\{ \tau, \lambda \}$ in the modified PGQR objective \eqref{eq:PGQR2}. The proof of Corollary \ref{corollary:existence} is a straightforward extension of the proof of Proposition \ref{prop:PGQRexistence} and is therefore omitted. 
\begin{corollary} \label{corollary:existence} Let $\widehat g_{\tau,\lambda}(\bm{X})=\operatornamewithlimits{argmin}_{g \in \mathcal{H} } \sum_{i=1}^n\left[ \rho_\tau(Y_i-g(\bm{X}_i)) + \mathbb{E}_{\tau, \tau'} \{ \textrm{pen}_{\lambda, \alpha} \left( g(\bm{X}_i, \tau), g(\bm{X}_i, \tau^\prime ) \right) \} \right],$ for a class of neural networks $\mathcal{H}$, where $\alpha > 0$ is fixed and $\tau, \tau' \overset{iid}{\sim} \textrm{Uniform}(0,1)$. Consider a class of generator functions $\mathcal{G}$, where each $G \in \mathcal{G}$ maps $\mathbb{R}^p\times\mathbb{R}\times \mathbb{R} \mapsto \mathbb{R}$. Suppose that for all $\bm{X} \in\mathcal{X}\subset \mathbb{R}^p$, quantile levels $\tau \in (0,1)$, and tuning parameters $\lambda\in \Lambda$, there exists $G \in\mathcal{G}$ such that the target neural networks $\widehat g_{\tau,\lambda}(\bm{X})$ can be represented by some $G(\bm{X},\tau,\lambda)$. Then, for $i=1,\dots,n$, $$ \widehat g_{\tau,\lambda}(\bm{X}_i)=\widehat G(\bm{X}_i,\tau,\lambda) \: \text{ a.s.}, $$ with respect to the probability law related to $\mathbb{E}_{\tau, \lambda}$, where $\widehat{G}$ is a solution to \eqref{eq:PGQR2}. \end{corollary} We note that \cite{https://doi.org/10.48550/arxiv.2006.00767} proposed to use GMS for inference in \textit{linear} quantile regression. In contrast, PGQR is a method for \textit{nonparametric} joint quantile estimation and conditional density estimation (CDE). Linear quantile regression cannot be used for CDE; moreover, as a linear model, GMS linear quantile regression does not suffer from vanishing variability. This is not the case for GQR, which motivates us to introduce the variability penalty in Section \ref{sec:pen}. Finally, we employ the GMS idea for \textit{tuning parameter} selection in PGQR, rather than for constructing pointwise confidence bands (as in \cite{https://doi.org/10.48550/arxiv.2006.00767}).
By including $\lambda \sim p(\lambda)$ in the generator, PGQR only needs to perform one \textit{single} optimization in order to estimate the network $G$. We can then \textit{use} the estimated network $\widehat{G}$ to tune $\lambda$, as we detail in the next section. This single-model training stands in contrast to traditional smoothing methods for nonparametric quantile regression, which typically require \textit{repeated} model evaluations via (generalized) cross-validation to tune hyperparameters such as bandwidth or roughness penalty. \subsection{Selecting An Optimal Regularization Parameter} \label{sec:select} We now introduce our method for selecting the regularization parameter $\lambda$ in our variability penalty \eqref{eq:pen}. Given the candidate set $\Lambda$, the basic idea is to select the $\lambda^\star \in \Lambda$ that minimizes the distance between the generated conditional distribution and the true conditional distribution. Denote the trained network under the modified PGQR objective \eqref{eq:PGQR2} as $\widehat{G}(\cdot,\tau,\lambda)$, and let $(\bm{X}_{\text{val}}, Y_{\text{val}})$ denote a validation sample. After optimizing \eqref{eq:PGQR2}, it is straightforward to generate the conditional quantile functions for each $\bm{X}_{\text{val}}$ in the validation set, each $\lambda \in \Lambda$, and any quantile level $\xi \in (0,1)$. In order to select the optimal $\lambda^\star \in \Lambda$, we first prove the following proposition. \begin{proposition}\label{prop:lambda} Suppose that two univariate random variables $Q$ and $W$ are independent. Then, $Q$ and $W$ have the same distribution if and only if $P(Q<W \mid W)$ follows a standard uniform distribution. \end{proposition} \begin{proof} See Appendix \ref{sec:proofs}.
\end{proof} Based on Proposition \ref{prop:lambda}, we know that, given a validation sample $(\bm{X}_{\text{val}}, Y_{\text{val}})$, the best $\lambda^\star \in \Lambda$ should satisfy $P_{\tau, \lambda^\star} ( \widehat{G}(\bm{X}_{\text{val}}, \tau, \lambda^\star ) < Y_{\text{val}} \mid Y_{\text{val}}) \sim \text{Uniform}(0,1).$ In practice, if we have $M$ random quantile levels $\bm{\tau} = (\tau_1, \ldots, \tau_M) \in (0,1)^{M}$, we can estimate the probability $P_{\tau, \lambda} := P ( \widehat{G}(\bm{X}_{\text{val}}, \tau, \lambda ) < Y_{\text{val}} \mid Y_{\text{val}})$ with a Monte Carlo approximation, \begin{equation} \label{eq:phattau} \widehat{P}_{\tau, \lambda}^{(i)} = M^{-1} \sum_{k=1}^{M} \mathbb{I} \{ \widehat{G} (\bm{X}_{\text{val}}^{(i)}, \tau_k, \lambda ) < Y_{\text{val}}^{(i)} \} \approx P ( \widehat{G}(\bm{X}_{\text{val}}^{(i)}, \tau, \lambda ) < Y_{\text{val}}^{(i)} \mid Y_{\text{val}}^{(i)}), \end{equation} for the $i$th validation sample $(\bm{X}_{\text{val}}^{(i)}, Y_{\text{val}}^{(i)})$. With $n_{\text{val}}$ validation samples, we proceed to estimate $\widehat{P}_{\tau, \lambda}^{(i)}$ for all $n_{\text{val}}$ validation samples $(\bm{X}_{\text{val}}^{(i)}, Y_{\text{val}}^{(i)})$ and each $\lambda \in \Lambda$. We then compare the empirical distribution of the $\widehat{P}_{\tau, \lambda}^{(i)}$'s to a standard uniform distribution for each $\lambda \in \Lambda$ and select the $\lambda^\star$ that \textit{minimizes} the distance between $\widehat{P}_{\tau, \lambda}$ and $\text{Uniform}(0,1)$. That is, given a valid distance measure $d$, we select the $\lambda^{\star}$ which satisfies \begin{equation} \label{eq:optlambda} \lambda^{\star} = \operatornamewithlimits{argmin}_{\lambda \in \Lambda} d ( \widehat{P}_{\tau, \lambda}, \textrm{Uniform}(0,1) ).
\end{equation} \begin{figure} \caption{\small Histogram of the estimated $\widehat{P}_{\tau, \lambda^{\star}}^{(i)}$'s in the validation set for the $\lambda^{\star}$ minimizing the CvM criterion.} \label{fig:simu-u} \end{figure} \noindent There are many possible choices for $d$ in \eqref{eq:optlambda}. In this paper, we use the Cram\'{e}r--von Mises (CvM) criterion due to the simplicity of its computation and its good empirical performance. With the $\widehat{P}_{\tau, \lambda}^{(i)}$'s arranged in increasing order, the CvM criterion is straightforward to compute as \begin{align} \label{eq:CvM} d ( \widehat{P}_{\tau, \lambda}, \textrm{Uniform}(0,1) ) = \sum_{i=1}^{n_\text{val}} ( i/n_\text{val}-\widehat{P}_{\tau, \lambda}^{(i)} - 2/{n_\text{val}})^2/n_\text{val}. \end{align} After selecting $\lambda^\star$ according to \eqref{eq:optlambda}, we can easily generate samples from the conditional quantiles of $p( Y \mid \bm{X})$ for any feature vector $\bm{X}$. We simply generate $\widehat{G}(\bm{X}, \xi_k, \lambda^{\star})$, $k = 1, \ldots, b$, for random quantiles $\xi_1, \ldots, \xi_b \overset{iid}{\sim} \textrm{Uniform}(0,1)$. For sufficiently large $b$, the conditional density $p(Y \mid \bm{X})$ can be inferred from $\{ \widehat{G}(\bm{X}, \xi_k, \lambda^{\star}) \}_{k=1}^{b}$. The complete algorithm for scalable computation of PGQR is given in Algorithm \ref{alg:PGQR}. We now illustrate our proposed selection method for $\lambda \in \Lambda$ in the simulated linear regression example from Section \ref{sec:pen}. Figure \ref{fig:simu-u} plots the histogram of the estimated $\widehat{P}_{\tau, \lambda^{\star}}^{(i)}$'s in the validation set for the $\lambda^{\star}$ which minimizes the CvM criterion. We see that the empirical distribution of the $\widehat{P}_{\tau, \lambda^{\star}}^{(i)}$'s closely follows a standard uniform distribution. Figure \ref{fig:cvm} illustrates how the PGQR solution changes as $\log(\lambda)$ increases for every $\lambda \in \Lambda$.
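The tuning procedure based on \eqref{eq:phattau} and \eqref{eq:optlambda} can be sketched as follows. This toy NumPy version uses the classical sorted-sample form of the CvM distance, and the $\widehat{P}_{\tau, \lambda}^{(i)}$ values for three hypothetical values of $\lambda$ are simulated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def cvm_to_uniform(p_hat):
    """Cramer-von Mises distance between the empirical distribution of
    p_hat and Uniform(0,1), in the classical sorted-sample form."""
    p = np.sort(np.asarray(p_hat, dtype=float))
    n = p.size
    grid = (2.0 * np.arange(1, n + 1) - 1.0) / (2.0 * n)
    return 1.0 / (12.0 * n) + np.sum((p - grid) ** 2)

# Illustrative P-hat values for three hypothetical lambdas: one well
# calibrated (uniform), one underdispersed, one overdispersed.
n_val = 500
p_by_lambda = {
    0.01: rng.uniform(size=n_val),                        # close to Uniform(0,1)
    0.00: np.clip(rng.normal(0.5, 0.1, n_val), 0, 1),     # too concentrated
    2.50: (rng.uniform(size=n_val) > 0.5).astype(float),  # pushed to the extremes
}
lam_star = min(p_by_lambda, key=lambda lam: cvm_to_uniform(p_by_lambda[lam]))
```

The underdispersed and overdispersed cases mimic, respectively, a generator with vanishing variability and one with too much variability; the calibrated case wins under the CvM distance.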
Figure \ref{fig:cvm} plots the conditional standard deviation (left panel), the CvM criterion (middle panel), and the coverage rate of the 95\% confidence intervals (right panel) for the PGQR samples in the validation set. The vertical dashed red line denotes the optimal $\lambda^{\star} = 0.01$. We see that the $\lambda^{\star}$ which minimizes the CvM (middle panel) captures the true conditional standard deviation of $\sigma = 1$ (left panel) and attains coverage probability close to the nominal rate (right panel). This example demonstrates that our proposed selection method provides a practical alternative to computationally burdensome cross-validation for tuning $\lambda$ in the variability penalty \eqref{eq:pen}. In Section \ref{sec:sim-more} of the Appendix, we further demonstrate that our tuning parameter selection method is suitable to use even when the true conditional variance is very small (true $\sigma^2=0.01$). In this scenario, our procedure selects a very small $\lambda^{\star} \approx 0$, so that PGQR does \textit{not} overestimate the conditional variance. \begin{figure} \caption{\small The conditional standard deviation (left panel), CvM statistic (middle panel), and coverage rate (right panel) in the validation set vs. $\log(\lambda)$ for each $\lambda \in \Lambda$. 
The red dashed line corresponds to the $\lambda^\star$ which minimizes the CvM criterion.} \label{fig:cvm} \end{figure} \begin{algorithm}[t] \footnotesize \caption{\footnotesize Scalable Implementation of PGQR}\label{alg:PGQR} \begin{algorithmic}[1] \STATE \emph{Split} \textit{non}-test data into non-overlapping training set $(\bm{X}_{\text{train}}, \bm{Y}_{\text{train}})$ and validation set $(\bm{X}_{\text{val}}, \bm{Y}_{\text{val}})$ \STATE \emph{Initialize} $G$ parameters $\phi$, learning rate $\gamma$, width $h$ of DNN hidden layers, and total number of epochs $T$ \STATE \textbf{procedure:} Optimizing $G$ \begin{ALC@g} \FOR{epoch $t$ in $1,\ldots, T$} \STATE Sample $\tau,\tau^\prime \overset{iid}{\sim} \textrm{Uniform}(0,1)$ and $\lambda$ uniformly from $\Lambda$ \STATE Evaluate loss \eqref{eq:PGQR2} with $(\bm{X}_{\text{train}}, \bm{Y}_{\text{train}})$ \STATE Update $G$ parameters $\phi$ via SGD \ENDFOR \STATE \textbf{return} $\widehat{G}$ \end{ALC@g} \STATE \textbf{end procedure} \STATE \textbf{procedure:} Tuning $\lambda$ \begin{ALC@g} \STATE Set $M = 1000$ and sample $\tau_1, \ldots, \tau_M \overset{iid}{\sim} \textrm{Uniform}(0,1)$ \STATE Generate $\{ \widehat{G}(\bm{X}_{\text{val}}^{(i)}, \tau_1, \lambda_l ), \ldots, \widehat{G}(\bm{X}_{\text{val}}^{(i)}, \tau_M, \lambda_l ) \}$ for each $\lambda_l \in \Lambda$ and each $\bm{X}_{\text{val}}^{(i)}$, $i = 1, \ldots, n_{\text{val}}$ \STATE Compute $\widehat{P}_{\tau, \lambda_l}^{(i)}$ as in \eqref{eq:phattau} on $(\bm{X}_{\text{val}}^{(i)}, Y_{\text{val}}^{(i)})$ for each $i = 1, \ldots, n_{\text{val}}$ and each $\lambda_l \in \Lambda$ \STATE Select $\lambda^*$ according to \eqref{eq:optlambda} with CvM criterion \eqref{eq:CvM} as $d$ \STATE \textbf{return} $\lambda^*$ \end{ALC@g} \STATE \textbf{end procedure} \STATE \textbf{procedure:} Estimating $p(Y \mid \bm{X}_{\text{test}})$ for test data $\bm{X}_{\text{test}}$ \begin{ALC@g} \STATE Set $b=1000$ and sample $\xi_1, \ldots, \xi_b \overset{iid}{\sim} \text{Uniform}(0,1)$ \STATE Estimate
$\xi_k$-th quantile of $Y \mid \bm{X}_{\textrm{test}}$ as $\widehat{G}(\bm{X}_{\text{test}},\xi_k,\lambda^*)$ for $k = 1, \ldots, b$ \STATE \textbf{return} $\{\widehat{G}(\bm{X}_{\text{test}},\xi_1,\lambda^*), \ldots, \widehat{G}(\bm{X}_{\text{test}},\xi_b,\lambda^*)\}$ \end{ALC@g} \STATE \textbf{end procedure} \end{algorithmic} \end{algorithm} \section{Numerical Experiments and Real Data Analysis}\label{sec:sim} We evaluated the performance of PGQR on several simulated and real datasets. We fixed $\alpha=1$ or $\alpha = 5$ in the variability penalty \eqref{eq:pen}. We optimized the modified PGQR loss \eqref{eq:PGQR2} over the PMNN family (Section \ref{sec:mono}), where each sub-network had three hidden layers with 1000 neurons per layer. The support for $\Lambda$ was chosen to be 100 equispaced values between $0$ and $1$. We compared PGQR to several other state-of-the-art methods: \begin{itemize}[leftmargin=.2in] \setlength\itemsep{0.2em} \item \noindent{\bf GCDS} \citep{zhou2022deep}. Following \cite{zhou2022deep}, we trained both a generator and a discriminator using FNNs with one hidden layer of 25 neurons. Despite this simple architecture, we found that GCDS might still encounter vanishing variability. For fair comparison to PGQR, we also increased the number of hidden layers to three and the number of nodes per layer to 1000. We refer to this modification as \textbf{deep-GCDS}. \item \noindent{\bf WGCS} \citep{liu2021wasserstein}. We adopted the gradient penalty recommended by \cite{liu2021wasserstein} and set the hyperparameter associated with the gradient penalty as 0.1. \item \noindent{\bf FlexCoDE} \citep{izbicki2017converting}. We considered three ways of estimating the basis functions $\beta_j(\bm{X})$: nearest-neighbor regression (NNR), sparse additive model (SAM), and XGBoost (FlexZboost). \item \noindent{\bf Random Forest CDE}, or RFCDE \citep{https://doi.org/10.48550/arxiv.1906.07177}. 
\end{itemize} For FlexCoDE and RFCDE, we adopted the default hyperparameter settings of \cite{izbicki2017converting} and \cite{https://doi.org/10.48550/arxiv.1906.07177}, respectively. \begin{figure} \caption{\small Plots of the estimated conditional densities of $p(Y \mid \bm{X}_{\text{test}})$ for three test observations from one replication of Simulation 3.} \label{fig:simu} \end{figure} \subsection{Simulation Studies}\label{subsec:construction} For our simulation studies, we generated data from $Y_i = g(\bm{X}_i) + \epsilon_i, i = 1, \ldots, 2000$, for some function $g$, where $\bm{X}_i \overset{iid}{\sim} \mathcal{N} (\bm{0}, \bm{I}_p)$ and the residual errors $\epsilon_i$ were independent. The specific simulation settings are described below. \begin{enumerate}[leftmargin=.2in] \setlength\itemsep{0.2em} \item \noindent{\bf Simulation 1: Multimodal and heteroscedastic.} $Y_i = \beta_i X_i + \epsilon_i$, where $\beta_i \in \{-1,0,1\}$ with equal probability and $\epsilon_i = (0.25 \cdot \vert X_i \vert)^{1/2} z_i$ with $z_i \overset{iid}{\sim} \mathcal{N}(0,1)$. \item \noindent{\bf Simulation 2: Mixture of left-skewed and right-skewed.} $Y_i = \bm{X}_i^{\top} \boldsymbol{\beta} + \epsilon_i$, where $\boldsymbol{\beta} \in \mathbb{R}^5$ is equispaced over $[-2, 2]$ and $\epsilon_i = \chi^2(1,1)\,\mathcal{I}(X_1 \geq 0.5)+\log[\chi^2(1,1)]\,\mathcal{I}(X_1<0.5)$. Here, the skewness is controlled by the covariate $X_1$. \item \noindent{\bf Simulation 3: Mixture of unimodal and bimodal.} $Y_i =\bm{X}_i^{\top}\boldsymbol{\beta}_i + \epsilon_i$, where $\boldsymbol{\beta}_i \in \mathbb{R}^5$ with $\beta_{i1} \in \{ -2, 2 \}$ with equal probability, $(\beta_{i2}, \beta_{i3}, \beta_{i4}, \beta_{i5})^\top$ equispaced over $[-2, 2]$, and $\epsilon_i \overset{iid}{\sim} \mathcal{N}(0,1)$. Here, $p(Y \mid \bm{X})$ is unimodal when $X_1 \approx 0$, and otherwise it is bimodal. \end{enumerate} In our simulations, 80\% of the data was used to train the model, 10\% was used as the validation set for tuning parameter selection, and the remaining 10\% was used as test data to evaluate model performance.
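For concreteness, the data for Simulation 3 can be generated as in the following NumPy sketch (the seed and the exact 80/10/10 split sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2023)

def simulate_sim3(n, p=5):
    """Simulation 3: Y_i = X_i^T beta_i + eps_i, where beta_{i1} is -2 or 2
    with equal probability, (beta_{i2}, ..., beta_{i5}) are equispaced over
    [-2, 2], and eps_i ~ N(0, 1)."""
    X = rng.normal(size=(n, p))
    beta_rest = np.linspace(-2.0, 2.0, p - 1)   # shared coefficients
    beta1 = rng.choice([-2.0, 2.0], size=n)     # random-sign first coefficient
    Y = X[:, 0] * beta1 + X[:, 1:] @ beta_rest + rng.normal(size=n)
    return X, Y

X, Y = simulate_sim3(2000)
# 80/10/10 split into training, validation, and test sets.
n_train, n_val = 1600, 200
X_train, Y_train = X[:n_train], Y[:n_train]
X_val, Y_val = X[n_train:n_train + n_val], Y[n_train:n_train + n_val]
X_test, Y_test = X[n_train + n_val:], Y[n_train + n_val:]
```

The random sign of $\beta_{i1}$ is what makes $p(Y \mid \bm{X})$ bimodal whenever $X_1$ is far from zero.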
In Section \ref{sec:sim-more} of the Appendix, we present additional results where $g(\bm{X}_i)$ is a nonlinear function of $\bm{X}_i$ (Simulation 4) and where the error variance is very small (Simulation 5). Figure \ref{fig:simu} compares the estimated conditional density of $p(Y \mid \bm{X}_{\text{test}})$ for three test observations from one replication of Simulation 3. We see that PGQR (solid blue line with filled circles) is able to capture both the unimodality \textit{and} the bimodality of the ground truth conditional densities (solid black line). Meanwhile, GCDS (dashed red line with hollow triangles), WGCS (dashed green line with crosses), and deep-GCDS (dashed purple line with plusses) struggled to capture the true conditional densities for at least some test points. In particular, Figure \ref{fig:simu} shows some evidence of variance \textit{underestimation} for WGCS and deep-GCDS, whereas this is counteracted by the variability penalty in PGQR. Additional figures from our simulation studies are provided in Section \ref{sec:sim-morefigs} of the Appendix.
\begin{table}[t] \centering \resizebox{1.0\columnwidth}{!}{ \begin{tabular}{c|ccc|ccc|ccc} \hline &\multicolumn{3}{c|}{Simulation 1}&\multicolumn{3}{c|}{Simulation 2}&\multicolumn{3}{c}{Simulation 3}\\ \hline Method&$\mathbb{E}(Y \mid \bm{X})$&$\text{sd}(Y \mid \bm{X})$&Cov (Width)&$\mathbb{E}(Y \mid \bm{X})$&$\text{sd}(Y \mid \bm{X})$&Cov (Width)&$\mathbb{E}(Y \mid \bm{X})$&$\text{sd}(Y \mid \bm{X})$&Cov (Width)\\ \hline \hline PGQR ($\alpha$=1) &0.41&0.34&0.95 (23.48)&0.38&0.11&0.93 (8.14)&0.30&0.08&0.92 (6.61) \\ PGQR ($\alpha$=5) &\textbf{0.36}&\textbf{0.31}&0.95 (23.41)&\textbf{0.31}&\textbf{0.07}&\textbf{0.95 (8.83)} &\textbf{0.25}&\textbf{0.06}&\textbf{0.96 (6.60)} \\ GCDS &10.49&25.82&0.68 (15.48)&0.53&0.27&0.92 (8.55)&0.33&0.12&0.84 (5.78) \\ WGCS &229.91&73.68&0.15 (4.88)&6.57&1.45&0.80 (9.17)&6.25&1.98&0.71 (8.08) \\ deep-GCDS &7.99&56.87&0.38 (7.42)&5.41&2.89&0.42 (2.12)&6.11&2.78&0.28 (2.04) \\ \hline \hline FlexCoDE-NNR &0.97&0.54&0.96 (23.94)&0.83&0.36&0.91 (9.03)&1.12&0.75&0.92 (9.11) \\ FlexCoDE-SAM &0.37&0.62&0.97 (25.07)&0.73&1.03&0.93 (11.15)&1.01&1.99&0.93 (10.91) \\ FlexZBoost &0.77&63.81&\textbf{1.00 (46.11)}&1.29&0.36&0.91 (8.28)&1.77&0.74&0.85 (7.88) \\ RFCDE &1.98&0.72&0.44 (25.46)&0.61&0.34&0.56 (6.26)&0.83&0.65&0.96 (23.06) \\ \hline \end{tabular}} \caption{\small Table reporting the PMSE for the conditional expectation and standard deviation, as well as the coverage rate (Cov) and average width of the 95\% prediction intervals, for Simulations 1 through 3. Results were averaged across 20 replicates.} \label{tab:simulation-main} \end{table} We repeated our simulations for 20 replications. For each experiment, we computed the predicted mean squared error (PMSE) for different summaries of the conditional densities for the test data. 
We define the PMSE as \begin{equation}\label{eq:PMSE} \text{PMSE}= \frac{1}{n_{\text{test}}} \sum_{i=1}^{n_{\text{test}}}\left( \widehat{m} (\bm{X}_{\text{test},i})- m(\bm{X}_{\text{test},i}) \right)^2, \end{equation} where $m(\bm{X})$ generically refers to the conditional mean of $Y$ given $\bm X$, i.e. $\mathbb{E}( Y \mid \bm{X})$, or the conditional standard deviation, i.e. $\text{sd}( Y \mid \bm{X})$. For the generative models (PGQR, GCDS, WGCS, and deep-GCDS), we approximated $\widehat{m}$ in \eqref{eq:PMSE} by Monte Carlo simulation using $1000$ generated samples, while for FlexCoDE and RFCDE, we approximated $\widehat{m}$ using numerical integration. In addition to PMSE, we also used the 2.5th and 97.5th percentiles of the predicted conditional densities to construct 95\% prediction intervals for the test data. We then calculated the coverage probability (Cov) and the average width for these prediction intervals. \begin{figure} \caption{\small Plot of the average PMSE across 20 replicates for the $\tau$-th quantiles $Q_{Y \mid \bm{X}}(\tau)$ at the nine quantile levels $\tau \in \{0.1, 0.2, \ldots, 0.9\}$.} \label{fig:simu-quantile-main} \end{figure} Table \ref{tab:simulation-main} summarizes the results for PGQR with $\alpha \in \{ 1, 5 \}$ in \eqref{eq:pen} and all competing methods, averaged across 20 replicates. There was no substantial difference between $\alpha = 1$ and $\alpha = 5$ for PGQR. Table \ref{tab:simulation-main} shows that PGQR had the lowest PMSE in all three simulations and attained coverage close to the nominal rate. In Simulations 2 and 3, the prediction intervals produced by PGQR had the highest coverage rate. In Simulation 1, FlexZBoost had 100\% coverage, but the average width of the prediction intervals for FlexZBoost was considerably larger than that of the other methods, suggesting that the intervals produced by FlexZBoost may be too conservative to be informative. We also computed the PMSE for the $\tau$-th quantile.
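The PMSE computation in \eqref{eq:PMSE} can be sketched in NumPy as follows; the generated samples below are placeholders standing in for draws $\widehat{G}(\bm{X}_{\text{test},i}, \xi_k)$ from a fitted generator, not actual PGQR output:

```python
import numpy as np

rng = np.random.default_rng(7)

def pmse(m_hat, m_true):
    """Predicted mean squared error between estimated and true summaries."""
    m_hat, m_true = np.asarray(m_hat, dtype=float), np.asarray(m_true, dtype=float)
    return float(np.mean((m_hat - m_true) ** 2))

# Placeholder: b = 1000 generated samples per test point, standing in
# for draws from the estimated conditional density of Y given X_test_i.
n_test, b = 200, 1000
true_mean = rng.normal(size=n_test)
samples = true_mean[:, None] + rng.normal(size=(n_test, b))  # true sd = 1

# Monte Carlo estimates of the conditional mean and sd, then PMSE vs truth.
pmse_mean = pmse(samples.mean(axis=1), true_mean)
pmse_sd = pmse(samples.std(axis=1), np.ones(n_test))
```

Because the placeholder samples are drawn from the true conditional law, both PMSE values are close to zero here; a generator with vanishing variability would instead inflate the standard-deviation PMSE.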
Figure \ref{fig:simu-quantile-main} plots the PMSE for these nine quantiles for PGQR ($\alpha=1$), GCDS, deep-GCDS, and WGCS, averaged across 20 experiments. We see that PGQR (blue line with crosses) had the lowest PMSE for most of the quantile levels. PGQR also had uniformly lower PMSE across our quantile set than WGCS (green line with plusses) and deep-GCDS (red line with triangles), suggesting that WGCS and deep-GCDS may have been more prone to vanishing variability. \subsection{Real Data Analysis} We also examined the performance of PGQR on three real datasets from the UCI Machine Learning Repository, which we denote as: \texttt{machine}, \texttt{fish}, and \texttt{noise}.\footnote{Accessed from \url{https://archive.ics.uci.edu/ml/index.php}.} The \texttt{machine} dataset comes from a real experiment that collected the excitation current ($Y$) and four machine attributes ($\bm{X}$) for a set of synchronous motors \citep{kahraman2014metaheuristic}. The \texttt{fish} dataset contains the concentration of aquatic toxicity ($Y$) that can cause death in fathead minnows and six molecular descriptors ($\bm{X}$) described in \citet{cassotti2015similarity}. The \texttt{noise} dataset measures the scaled sound pressure in decibels ($Y$) at different frequencies, angles of attack, wind speed, chord length, and suction side displacement thickness ($\bm{X}$) for a set of airfoils \citep{lopez2008neural}. Table \ref{tab:real} reports the sample size $n$ and covariate dimension $p$ for these three benchmark datasets.
\begin{table}[t] \centering \resizebox{0.8\columnwidth}{!}{ \begin{tabular}{ccc|cc|cc|cc|cc} \hline &&&\multicolumn{2}{c|}{PGQR}&\multicolumn{2}{c|}{GCDS}&\multicolumn{2}{c|}{deep-GCDS}&\multicolumn{2}{c}{WGCS}\\ \hline Dataset&$n$&$p$&Cov&Width&Cov&Width&Cov&Width&Cov&Width\\ \hline \hline machine&557 &4 &\textbf{0.96} &2.82&0.91 &1.92&0.87&1.52&0.69&1.66\\ fish&908&6 &\textbf{0.96} &3.29&0.79 &2.38&0.54&1.08&0.80&2.36\\ noise&1503&5 &\textbf{0.92} &7.58 &0.00&1.68&0.00&0.24&0.00&22.41 \\ \hline \end{tabular}} \caption{\small Results from our real data analysis. Cov and width denote the coverage rate and average width respectively of the 95\% prediction intervals for the test observations.} \label{tab:real} \end{table} We examined the out-of-sample performance for PGQR (with fixed $\alpha = 1$), GCDS, deep-GCDS, and WGCS. In particular, 80\% of each dataset was randomly selected as training data, 10\% was used as validation data for tuning parameter selection, and the remaining 10\% was used as test data for model evaluation. To compare these deep generative methods, we considered the out-of-sample coverage rate (Cov) and the average width of the 95\% prediction intervals in the test data. The results from our real data analysis are summarized in Table \ref{tab:real}. We see that on all three datasets, PGQR achieved higher coverage that was closer to the nominal rate than the competing methods. It seems as though the other generative approaches may have been impacted by the vanishing variability phenomenon, resulting in prediction intervals that were too narrow and failed to cover many of the test samples. In particular, GCDS, deep-GCDS, and WGCS all performed extremely poorly on the \texttt{noise} dataset, with an out-of-sample coverage rate of zero. On the other hand, with the help of the variability penalty \eqref{eq:pen}, PGQR did \textit{not} underestimate the variance and demonstrated an overwhelming advantage over these other methods in terms of predictive power.
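The coverage and width columns of Table \ref{tab:real} can be computed from generator draws alone. A minimal pure-Python sketch (names illustrative), where each test point contributes a set of samples from its predicted conditional density:

```python
def percentile(sorted_vals, q):
    """Linear-interpolation percentile of pre-sorted values."""
    pos = q / 100.0 * (len(sorted_vals) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(sorted_vals) - 1)
    frac = pos - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def interval_metrics(draws_per_point, y_test):
    """Coverage rate and average width of the 95% prediction intervals
    built from the 2.5th and 97.5th percentiles of each predicted
    conditional density, as in the real data comparison."""
    covered, widths = 0, []
    for draws, y in zip(draws_per_point, y_test):
        s = sorted(draws)
        lo, hi = percentile(s, 2.5), percentile(s, 97.5)
        covered += int(lo <= y <= hi)
        widths.append(hi - lo)
    return covered / len(y_test), sum(widths) / len(widths)
```

A method suffering from vanishing variability produces tightly clustered draws, hence narrow intervals and low coverage, which is the pattern seen for the competing methods on \texttt{noise}.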
Moreover, the average widths of the PGQR prediction intervals were not so large as to be uninformative. Figure \ref{fig:ci} plots the PGQR 95\% prediction intervals for the test observations, sorted in ascending order. The red crosses depict test samples that were \textit{not} captured by their corresponding PGQR prediction intervals. We do not detect any specific pattern for the uncaptured test points, indicating reasonable generalizability for the PGQR model. \begin{figure} \caption{\small The PGQR 95\% prediction intervals for the test samples in the three benchmark datasets. The red crosses indicate the test points that were not captured by their corresponding prediction intervals.} \label{fig:ci} \end{figure} \section{Discussion} \label{sec:conclusion} In this paper, we have made contributions to both the quantile regression and deep learning literature. Specifically, we proposed PGQR as a new deep generative approach to joint quantile estimation and conditional density estimation. Different from existing conditional sampling methods \citep{zhou2022deep, liu2021wasserstein}, PGQR employs a novel variability penalty to counteract the \textit{vanishing variability} phenomenon in deep generative networks. We introduced the PMNN family of neural networks to enforce the monotonicity of quantile functions. Finally, we provided a scalable implementation of PGQR which requires solving only a single optimization to select the regularization term in the variability penalty. Through analyses of real and simulated datasets, we demonstrated PGQR's ability to capture critical aspects of the conditional distribution such as multimodality, heteroscedasticity, and skewness. We anticipate that our penalty on vanishing variability in generative networks is broadly applicable for a wide range of loss functions besides the check function.
In the future, we will extend the variability penalty to other deep generative models and other statistical problems besides quantile regression. In addition, we plan to pursue variable selection, so that our method can also identify the most relevant covariates when the number of features $p$ is large. Owing to its single-model training, PGQR is scalable for large $n$. However, further improvements are needed in order for PGQR to avoid the curse of dimensionality for large $p$. \section*{Acknowledgments} The last author was generously supported by NSF grant DMS-2015528. The authors are grateful to Dr. Jun Liu for helpful comments. \appendix \section{Proofs of Propositions} \label{sec:proofs} {\bf Proof of Proposition \ref{prop:PGQRexistence}.} Suppose that for some $\epsilon>0$, there exists a set $\mathcal{C}\subset (0,1)$ with $P_\tau(\mathcal{C})>\epsilon$ such that for some $i^*\in\{1,\dots,n\}$, $\widehat g_\tau({\bm X}_{i^*}) \neq \widehat G({\bm X}_{i^*},\tau)$ for all $\tau\in \mathcal{C}$.
Then we can construct another optimal generator $\widetilde G$ such that \begin{eqnarray*} &&\frac{1}{n} \sum_{i=1}^n \mathbb{E}_{\tau} \left[ \rho_{\tau} \big(y_i - \widehat G(\bm{X}_i, \tau) \big)\right] + \mathbb{E}_{\widetilde\tau, \widetilde\tau^\prime} \left\{\text{pen}_{\lambda, \alpha}\left( \widehat G(\bm{X}_i,\widetilde\tau), \widehat G(\bm{X}_i, \widetilde\tau^\prime) \right) \right\}\\ &\geq& \frac{1}{n} \sum_{i=1}^n \mathbb{E}_{\tau} \left[ \rho_{\tau} \big(y_i - \widetilde G(\bm{X}_i, \tau) \big)\right] + \mathbb{E}_{\widetilde\tau, \widetilde\tau^\prime} \left\{\text{pen}_{\lambda, \alpha}\left( \widetilde G(\bm{X}_i,\widetilde\tau), \widetilde G(\bm{X}_i, \widetilde\tau^\prime) \right) \right\}, \end{eqnarray*} where \begin{eqnarray*} \widetilde G(\bm{X}_{i^*},\tau) =\begin{cases} \widehat G(\bm{X}_{i^*},\tau)\:\:\text{for $\tau\not\in \mathcal{C}$,}\\ \widehat g_\tau(\bm{X}_{i^*})\:\:\text{for $\tau\in \mathcal{C}$}. \end{cases} \end{eqnarray*} This is a contradiction due to the fact that $\widehat g_\tau$ is the minimizer of $\frac{1}{n} \sum_{i=1}^n \rho_{\tau} (y_i - \widehat G(\bm{X}_i, \tau) ) + \mathbb{E}_{\widetilde\tau, \widetilde\tau^\prime} \{ \text{pen}_{\lambda, \alpha} ( \widehat G(\bm{X}_i,\widetilde\tau), \widehat G(\bm{X}_i, \widetilde\tau^\prime) ) \}$. \qed \\ \noindent {\bf Proof of Proposition \ref{prop:nomemorization}.} Suppose that for all $\lambda \geq 0$, all $\tau \in (0,1)$, and all $i\in \{1,\dots,n\}$, \begin{eqnarray*} \widehat G({\bm X}_i,\tau)=Y_i. \end{eqnarray*} Then the penalty part in the PGQR loss \eqref{eq:PGQR} attains $\mathbb{E}_{\tau, \tau^\prime} \{ \text{pen}_{\lambda, \alpha} ( \widehat G(\bm{X}_i,\tau), \widehat G(\bm{X}_i, \tau^\prime) ) \}= \lambda \log(\alpha)$, while the first term in \eqref{eq:PGQR} satisfies $\mathbb{E}_{\tau} [ \rho_{\tau} \big(y_i - \widehat G(\bm{X}_i, \tau) \big)]=0$. As a result, the total loss is $\lambda \log(\alpha)$. 
Since the case of $\widehat G({\bm X}_i,\tau)=Y_i$ is a special case of $\textrm{Var}_\tau\{\widehat G({\bm X}_i,\tau)\}=0$, we focus on the variance. When there exists some $i \in \{1,\dots,n\}$ and some $\tau\in (0,1)$ such that $$ \textrm{Var}_\tau\left\{\widehat G({\bm X}_i,\tau)\right\}>0, $$ the resulting total loss can be made less than $\lambda\log(\alpha)$ by choosing an appropriate $\lambda>0$. This contradicts the fact that $\widehat G$ is the minimizer as in \eqref{eq:PGQR}. \qed \\ \noindent {\bf Proof of Proposition \ref{prop:lambda}.} It is trivial that $F_Q(W) := P(Q\leq W\mid W) \sim \text{Uniform}(0,1)$ when $Q\overset{d}=W$. WLOG, we assume that $F_Q$ is invertible. We shall show that $F_Q(W)\sim \text{Uniform}(0,1)$ implies that the two distributions for $Q$ and $W$ are identical. Suppose that $F_Q(W)$ follows a standard uniform distribution. Then, \begin{equation*} x = P(F_Q(W)<x) = P(W < F_Q^{-1}(x)) = F_W(F_Q^{-1}(x)), \end{equation*} which implies that $F_W(F_Q^{-1}(x)) = x = F_Q(F_Q^{-1}(x))$ for every $x \in (0,1)$. Since $F_Q$ is invertible, it follows that $F_W = F_Q$, i.e.~the distributions of $Q$ and $W$ are identical. \qed \section{More Simulation Results} \subsection{Additional Simulation Studies}\label{sec:sim-more} In addition to the three simulations described in Section \ref{sec:sim} of the main manuscript, we also conducted simulation studies under the following scenarios: \begin{itemize} \item \noindent{\bf Simulation 4: Nonlinear function with an interaction term and one irrelevant covariate.} $Y_i = 0.5 \log(10-X_{i1}^2)+0.75 \exp(X_{i2} X_{i3}/5)-0.25 \vert X_{i4}/2 \vert + \epsilon_i$, where $\epsilon_i \sim \mathcal{N}(0,1)$. Note that there is a (nonlinear) interaction between $X_2$ and $X_3$, while $X_5$ is irrelevant. \item \noindent{\bf Simulation 5: Very small conditional variance.} $Y_i = \beta X_i + \epsilon_i, i = 1, \ldots, n$, where $\beta = 1$ and $\epsilon_i \overset{iid}{\sim} \mathcal{N}(0, 0.01)$.
\end{itemize} The results from these two simulations averaged across 20 replicates are shown below in Table \ref{tab:supp-simulation}. The average PMSE for the $\tau$-th quantile levels $Q_{Y \mid \bm{X}} (\tau)$, $\tau \in \{0.1, 0.2, \ldots, 0.9\}$, is plotted in Figure \ref{fig:simu-quantile-appendix}. \begin{table}[t] \centering \resizebox{0.8\columnwidth}{!}{ \begin{tabular}{c|ccc|ccc} \hline &\multicolumn{3}{c|}{Simulation 4}&\multicolumn{3}{c}{Simulation 5}\\ \hline Method&$\mathbb{E}(Y \mid \bm{X})$&$\text{sd}(Y \mid \bm{X})$& Cov (Width)&$\mathbb{E}(Y \mid \bm{X})$&$\text{sd}(Y \mid \bm{X})$&Cov (Width)\\ \hline \hline PGQR ($\alpha=1$) &0.15&0.07&0.95 (4.33)&0.004&\textbf{0.0001}&0.93 (0.42) \\ PGQR ($\alpha=5$) &\textbf{0.14}&0.08&\textbf{0.96 (4.50)}&0.005&0.03&\textbf{0.99 (1.01)} \\ GCDS &0.23&0.05&0.87 (3.39)&\textbf{0.002}&0.0011&0.73 (0.27) \\ deep-GCDS &0.43&0.20&0.76 (2.88)&0.004&0.0022&\textbf{0.99 (0.56)} \\ WGCS &0.92&0.15&0.79 (4.41)&0.640&0.2372&0.70 (1.15) \\ \hline \hline FlexCoDE-NNR &0.23&0.003&0.92 (3.82)&0.069&0.0168&0.89 (0.27) \\ FlexCoDE-SAM &0.23&\textbf{0.002}&0.93 (3.83)&0.043&0.0130&0.93 (4.04) \\ FlexZBoost &0.45&0.06&0.82 (3.50)&1.244&0.2824&0.81 (4.13) \\ RFCDE &0.24&0.003&0.94 (3.97)&0.067&0.0292&0.92 (3.97) \\ \hline \end{tabular}} \caption{\small Table reporting the PMSE for the conditional expectation and standard deviation, as well as the coverage rate (Cov) and average width of the 95\% prediction intervals, for Simulations 4 and 5. Results were averaged across 20 replicates.} \label{tab:supp-simulation} \end{table} \begin{figure} \caption{\small Plot of the average PMSE across 20 replicates for the $\tau$-th quantiles $Q_{Y \mid \bm{X}}(\tau)$, $\tau \in \{0.1, 0.2, \ldots, 0.9\}$, for Simulations 4 and 5.} \label{fig:simu-quantile-appendix} \end{figure} One may be concerned whether the regularized PGQR overestimates the conditional variance when the true conditional density has a very \textit{small} variance.
To illustrate the flexibility of PGQR, Figure \ref{fig:simu-smallvar} plots the estimated conditional densities for three test observations from one replication of Simulation 5. Recall that in Simulation 5, the true conditional variance is very small ($\sigma^2 = 0.01$). With the optimal $\lambda^{\star}$ selected using the method introduced in Section \ref{sec:select}, Figure \ref{fig:simu-smallvar} shows that the estimated PGQR conditional density \textit{still} manages to capture the Gaussian shape while matching the true variance of 0.01. If the true conditional variance is very small (as in Simulation 5), then PGQR selects a tiny $\lambda^{\star} \approx 0$. In this scenario, PGQR only applies a small amount of variability penalization and thus does not overestimate the variance. \begin{figure} \caption{\small Plots of the estimated PGQR $(\alpha=1)$ conditional densities for three different test observations in Simulation 5. The optimal $\lambda^{\star} \approx 0$ was selected using the method introduced in Section \ref{sec:select}.} \label{fig:simu-smallvar} \end{figure} \subsection{Additional Figures} \label{sec:sim-morefigs} Here, we provide additional figures from one replication each of Simulations 1, 2, and 4 (with $\alpha=1$ in PGQR). Figures \ref{fig:simu-quantile-sim1}-\ref{fig:simu-quantile-sim4} illustrate that PGQR (blue solid line with filled circles) is better able to estimate the true conditional densities (solid black line) than GCDS, WGCS, and deep-GCDS (dashed lines). In particular, PGQR does a better job of capturing critical aspects of the true conditional distributions such as multimodality, heteroscedasticity, and skewness.
\begin{itemize}[leftmargin=.2in] \setlength\itemsep{0.2em} \item \noindent{\bf Simulation 1: Multimodal and heteroscedastic.} \begin{figure} \caption{\small Plots of the estimated conditional densities $p(Y \mid \bm{X})$ for Simulation 1.} \label{fig:simu-quantile-sim1} \end{figure} \item \noindent{\bf Simulation 2: Mixture of left-skewed and right-skewed.} \begin{figure} \caption{\small Plots of the estimated conditional densities $p(Y \mid \bm{X})$ for Simulation 2.} \label{fig:simu-quantile-sim2} \end{figure} \item \noindent{\bf Simulation 4: Nonlinear function with an interaction term and one irrelevant covariate.} \begin{figure} \caption{\small Plots of the estimated conditional densities $p(Y \mid \bm{X})$ for Simulation 4.} \label{fig:simu-quantile-sim4} \end{figure} \end{itemize} \section{Sensitivity Analysis of PGQR to the Choice of $\alpha$}\label{subsec:Robustana} As mentioned in Section \ref{sec:pen} of the main manuscript, we choose to fix $\alpha > 0$ in the variability penalty \eqref{eq:pen}. The main purpose of $\alpha$ is to ensure that the logarithmic term in the penalty is always well-defined. In this section, we conduct a sensitivity analysis with respect to the choice of $\alpha$. To do this, we generated data using the same settings from Simulations 1 through 5. We then fit the PGQR model with five different choices for $\alpha \in \{ 0.5, 1, 5, 10, 50 \}$ and evaluated the performance of these five PGQR models on out-of-sample test data. Table \ref{tab:robustana} shows the results from our sensitivity analysis averaged across 20 replicates. We found that in Simulations 1 through 4, PGQR was not particularly sensitive to the choice of $\alpha$. PGQR was somewhat more sensitive to the choice of $\alpha$ in Simulation 5 (i.e. when the true conditional variance is very small), with larger values of $\alpha$ leading to higher PMSE for the conditional expectation and conditional standard deviation. In practice, we recommend fixing $\alpha = 1$ as the default for PGQR.
This choice of $\alpha =1$ leads to good empirical performance across many different scenarios. \begin{sidewaystable} \centering \resizebox{1.0\textwidth}{!}{ \begin{tabular}{c|ccc|ccc|ccc|ccc|ccc} \hline &\multicolumn{3}{c|}{Simulation 1}&\multicolumn{3}{c|}{Simulation 2}&\multicolumn{3}{c|}{Simulation 3}&\multicolumn{3}{c|}{Simulation 4}&\multicolumn{3}{c}{Simulation 5}\\ \hline $\alpha$&$\mathbb{E}(Y \mid \bm{X})$&$\text{sd}(Y \mid \bm{X})$&Cov (Width)&$\mathbb{E}(Y \mid \bm{X})$&$\text{sd}(Y \mid \bm{X})$&Cov (Width)&$\mathbb{E}(Y \mid \bm{X})$&$\text{sd}(Y \mid \bm{X})$&Cov (Width)&$\mathbb{E}(Y \mid \bm{X})$&$\text{sd}(Y \mid \bm{X})$&Cov (Width)&$\mathbb{E}(Y \mid \bm{X})$&$\text{sd}(Y \mid \bm{X})$&Cov (Width)\\ \hline \hline 0.5&0.39&0.41&0.95 (23.60)&0.43&0.18&0.91 (7.69)&0.36&0.09&0.89 (5.93)&0.14&0.01&0.96 (3.95)&0.004&0.0001&0.92 (0.38) \\ 1 &0.42&0.34&0.95 (23.49)&0.38&0.11&0.93 (8.14)&0.30&0.09&0.92 (6.61)&0.15&0.07&0.95 (4.33)&0.004&0.0001&0.93 (0.42) \\ 5 &0.36&0.31&0.95 (23.41)&0.31&0.07&0.94 (8.83)&0.25&0.06&0.96 (6.60)&0.14&0.08&0.96 (4.50)&0.005&0.03&0.99 (1.01) \\ 10 &0.32&0.30&0.95 (23.40)&0.29&0.06&0.95 (9.02)&0.25&0.07&0.95 (6.55)&0.13&0.08&0.95 (4.39)&0.007&0.06&0.99 (1.33) \\ 50 &0.33&0.34&0.97 (24.15)&0.29&0.07&0.95 (9.15)&0.25&0.06&0.95 (6.63)&0.15&0.14&0.94 (4.42)&0.009&0.101&0.99 (1.81) \\ \hline \end{tabular} } \caption{\small PGQR results with different choices of $\alpha \in \{ 0.5,1.0,5.0,10.0,50.0\}$ in the variability penalty for Simulations 1 through 5. This table reports the average PMSE for the conditional expectation and standard deviation, as well as the coverage rate (Cov) and the average width of the 95\% prediction intervals. Results were averaged across 20 replicates.} \label{tab:robustana} \end{sidewaystable} \end{document}
\begin{document} \begin{frontmatter} \title{Adaptive Euler-Maruyama method for\\ SDEs with non-globally Lipschitz drift:\\ part I, finite time interval} \runtitle{Adaptive Euler-Maruyama method for non-Lipschitz drift} \begin{aug} \author{\fnms{Wei} \snm{Fang}\ead[label=e1]{[email protected]}} \and \author{\fnms{Michael~B.} \snm{Giles}\ead[label=e2]{[email protected]}} \runauthor{W.~Fang \& M.B.~Giles} \affiliation{University of Oxford} \address{Mathematical Institute\\University of Oxford\\Oxford OX2 6GG\\United Kingdom\\ \printead{e1}\\ \phantom{E-mail:\ }\printead*{e2}} \end{aug} \begin{abstract} This paper proposes an adaptive timestep construction for an Euler-Maruyama approximation of SDEs with a drift which is not globally Lipschitz. It is proved that if the timestep is bounded appropriately, then over a finite time interval the numerical approximation is stable, and the expected number of timesteps is finite. Furthermore, the order of strong convergence is the same as usual, i.e.~order $\halfs$ for SDEs with a non-uniform globally Lipschitz volatility, and order $1$ for Langevin SDEs with unit volatility and a drift with sufficient smoothness. The analysis is supported by numerical experiments for a variety of SDEs. \end{abstract} \begin{keyword}[class=MSC] \kwd{60H10} \kwd{60H35} \kwd{65C30} \end{keyword} \begin{keyword} \kwd{SDE} \kwd{Euler-Maruyama} \kwd{strong convergence} \kwd{adaptive timestep} \end{keyword} \end{frontmatter} \section{Introduction} In this paper we consider an $m$-dimensional stochastic differential equation (SDE) driven by a $d$-dimensional Brownian motion: \begin{equation} \D X_t = f(X_t)\,\D t + g(X_t)\,\D W_t, \label{SDE} \end{equation} with a fixed initial value $X_0.$ The standard theory assumes the drift $f: \RR^m\!\rightarrow\!\RR^m$ and the volatility $g: \RR^m\!\rightarrow\!\RR^{m\times d}$ are both globally Lipschitz. 
Under this assumption, there is well-established theory on the existence and uniqueness of strong solutions, and the numerical approximation $\hX_t$ obtained from the Euler-Maruyama discretisation \[ \hX_{(n\!+\!1)h} = \hX_{nh} + f(\hX_{nh}) \, h + g(\hX_{nh}) \, \DW_n \] using a uniform timestep of size $h$ with Brownian increments $\Delta W_n$, plus a suitable interpolation within each timestep, is known \cite{kp92} to have a strong error which is $O(h^{1/2})$ so that for any $T, p > 0$ \[ \EE\left[\sup_{0\leq t \leq T}\|\hX_t \!-\! X_t \|^p\right] = O(h^{p/2}). \] The interest in this paper is in other cases in which $g$ is again globally Lipschitz, but $f$ is only locally Lipschitz. If, for some $\alpha, \beta \geq 0$, $f$ also satisfies the one-sided growth condition\\[-0.2in] \[ \langle x,f(x)\rangle \leq \alpha\|x\|^2+\beta, \] where $\langle \cdot, \cdot \rangle$ denotes an inner product, then it is again possible to prove the existence and uniqueness of strong solutions (see Theorems 2.3.5 and 2.4.1 in \cite{mao97}). Furthermore (see Lemma 3.2 in \cite{hms02}), these solutions are stable in the sense that for any $T,p > 0$\\[-0.1in] \[ \EE\left[\sup_{0\leq t \leq T}\| X_t \|^p\right] < \infty. \] The problem is that the numerical approximation given by the uniform timestep Euler-Maruyama discretisation may not be stable. Indeed, for the SDE \begin{equation} \D X_t = - X_t^3 \,\D t + \D W_t, \label{eq:cubic_drift} \end{equation} it has been proved \cite{hjk11} that for any $T\!>\!0$ and $p\!\geq\!2$ \[ \lim_{h\rightarrow 0} \EE\left[ \| \hX_T \|^p \right] = \infty. \] This behaviour has led to research on numerical methods which achieve strong convergence for these SDEs with a non-globally Lipschitz drift. One key paper in this area is by Higham, Mao \& Stuart \cite{hms02}. 
First, assuming a locally Lipschitz condition for both the drift and the volatility, they prove that if the uniform timestep Euler-Maruyama discretisation is stable then it also converges strongly. Assuming the drift satisfies a one-sided Lipschitz condition and a polynomial growth condition, they then prove stability and the standard order $\halfs$ strong convergence for two uniform timestep implicit methods, the Split-Step Backward Euler method (SSBE): \begin{eqnarray*} \hX_{nh}^* &=& \hX_{nh} + f(\hX_{nh}^*)\, h\\ \hX_{(n\!+\!1)h} &=& \hX_{nh}^* + g(\hX_{nh}^*)\, \Delta W_n, \end{eqnarray*} and the drift-implicit Backward Euler method: \[ \hX_{(n\!+\!1)h} = \hX_{nh} + f(\hX_{(n\!+\!1)h})\, h + g(\hX_{nh})\, \Delta W_n. \] Mao \& Szpruch \cite{ms13} prove that the implicit $\theta$-Euler method \[ \hX_{(n\!+\!1)h} = \hX_{nh} + \theta f(\hX_{(n\!+\!1)h})\, h + (1\!-\!\theta) f(\hX_{nh})\, h + g(\hX_{nh})\, \Delta W_n, \] converges strongly for $\halfs\!\leq\!\theta\!\leq\!1$ under more general conditions which permit a non-globally Lipschitz volatility. However, except for some special cases, implicit methods can require significant additional computational costs, especially for multi-dimensional SDEs; therefore, a stable explicit method is desired. Milstein \& Tretyakov proposed a general approach which discards approximate paths that cross a sphere with a sufficiently large radius $R$ \cite{mt05}. However, it is not easy to quantify the errors due to $R$. The explicit tamed Euler method proposed by Hutzenthaler, Jentzen \& Kloeden \cite{hjk12} is \[ \hX_{(n\!+\!1)h} = \hX_{nh} + \frac{ f(\hX_{nh})}{1\!+\!C\,h\|f(\hX_{nh})\|}\, h + g(\hX_{nh})\, \DW_n, \] for some fixed constant $C\!>\!0$. They prove both stability and the standard order $\halfs$ strong convergence. This approach has been extended to the tamed Milstein method by Wang \& Gan \cite{wg13}, proving order 1 strong convergence for SDEs with commutative noise.
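The taming denominator is what keeps the explicit scheme stable: the drift increment $h\,f/(1+C\,h\,\|f\|)$ can never exceed $1/C$ in norm, however large $f(\hX_{nh})$ becomes, while for moderate $h\,\|f\|$ it is close to the untamed increment $h\,f$. A scalar sketch (the value of $C$ is illustrative):

```python
def tamed_drift_increment(f_x, h, C=1.0):
    """Drift increment of the tamed Euler method,
    h * f(x) / (1 + C * h * |f(x)|).  Its magnitude is bounded by
    1/C for any f(x), which rules out the explosive drift steps
    that destabilise the standard explicit scheme."""
    return h * f_x / (1.0 + C * h * abs(f_x))
```

For small $h\,|f(x)|$ the increment agrees with the standard Euler drift to leading order, consistent with the tamed method retaining the usual order $\halfs$ strong convergence.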
Finally, Mao \cite{mao15} proposes a truncated Euler method which has the form \begin{eqnarray*} \hX_{(n\!+\!1)h} &=& \hX_{nh} + f\left( \min( K \|\hX_{nh}\|^{-1}\!, 1 )\, \hX_{nh}\right) \, h \\ && ~~~~ + g\left( \min( K \|\hX_{nh}\|^{-1}\!, 1 )\, \hX_{nh}\right) \, \DW_n. \end{eqnarray*} By making $K$ a function of $h$, strong convergence is proved for SDEs satisfying a Khasminskii-type condition which again allows a non-globally Lipschitz volatility; in \cite{mao16} it is proved that the order of convergence is arbitrarily close to $\halfs$. In this paper, we propose instead to use the standard explicit Euler-Maruyama method, but with an adaptive timestep $h_n$ which is a function of the current approximate solution $\hX_{t_n}$. The idea of using an adaptive timestep comes from considering the divergence of the uniform timestep method for the SDE (\ref{eq:cubic_drift}). When there is no noise, the requirement for the explicit Euler approximation of the corresponding ODE to have a stable monotonic decay is that its timestep satisfies $h\!<\!\hX_{t_n}^{-2}$. An intuitive explanation for the instability of the uniform timestep Euler-Maruyama approximation of the SDE is that there is always a very small probability of a large Brownian increment $\DW_n$ which pushes the approximation $\hX_{t_{n+1}}$ into the region $h\!>\!2\,\hX_{t_{n+1}}^{-2}$ leading to an oscillatory super-exponential growth. Using an adaptive timestep avoids this problem. Adaptive timesteps have been used in previous research to improve the accuracy of numerical approximations. Some approaches use local error estimation to decide whether or not to refine the timestep \cite{gl97,mauthner98,lamba03} while others are similar to ours in setting the size of each timestep based on the current path approximation $\hX_t$ \cite{hmr01,muller04}. However, these all assume globally Lipschitz drift and volatility. 
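The stability threshold described above is easy to check on the noise-free part of (\ref{eq:cubic_drift}): the explicit Euler map $x_{n+1} = x_n(1 - h\,x_n^2)$ decays monotonically when $h\,x_n^2 < 1$, but grows super-exponentially once $h\,x_n^2 > 2$, since then $|1 - h\,x_n^2| > 1$ and the factor worsens at every step. A short sketch (step sizes and starting points are illustrative):

```python
def euler_cubic(x0, h, n):
    """Noise-free explicit Euler for dx/dt = -x^3:
    x_{n+1} = x_n * (1 - h * x_n^2)."""
    x = x0
    for _ in range(n):
        x = x * (1.0 - h * x * x)
    return x

decay = euler_cubic(1.0, 0.5, 50)   # h * x0^2 = 0.5 < 1: monotonic decay
blowup = euler_cubic(2.0, 1.0, 6)   # h * x0^2 = 4 > 2: oscillatory growth
```

This is the mechanism behind the divergence result of \cite{hjk11}: a rare large Brownian increment can push the approximation into the unstable region, after which the growth is self-reinforcing.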
The papers by Lamba, Mattingly \& Stuart \cite{lms07} and Lemaire \cite{lemaire07} are more relevant to the analysis in this paper. They both consider drifts which are not globally Lipschitz, but they assume a dissipative condition which is stronger than the conditions assumed in this paper. Lamba, Mattingly \& Stuart \cite{lms07} prove strong stability but not the order of strong convergence, while Lemaire \cite{lemaire07} considers an infinite time interval with a timestep with an upper bound which decreases towards zero over time, and proves convergence of the empirical distribution to the invariant distribution of the SDE. In this paper we are concerned with strong convergence, not weak convergence, because our interest is in using the numerical approximation as part of a multilevel Monte Carlo (MLMC) computation \cite{giles08,giles15} for which the strong convergence properties are key in establishing the rate of decay of the variance of the multilevel correction. Usually, MLMC is used with a geometric sequence of time grids, with each coarse timestep corresponding to a fixed number of fine timesteps. However, it has been shown that it is not difficult to implement MLMC using the same driving Brownian path for the coarse and fine paths, even when they have no time points in common \cite{glw16}. Paper \cite{glw16} also provides another motivation for this paper, the analysis of Langevin equations with a drift $-\nabla V(X_t)$ where $V(x)$ is a potential function which comes from the modelling of molecular dynamics. \cite{glw16} considers the FENE (Finitely Extensible Nonlinear Elastic) model which in the case of a molecule with a single bond has a 3D potential $-\mu \log (1 \!-\! \|x\|^2)$. Considerations of stability and accuracy lead to the use of a timestep of the form $ \delta\, (1 \!-\! \|\hX_n\|)^2/\max(2\mu, 36), $ for some $0\!<\!\delta\!\leq \!1$. 
Because of this, we pay particular attention to the case of Langevin equations, and for these we prove first order strong convergence, the same as for the uniform timestep Euler-Maruyama method for globally Lipschitz drifts. Unfortunately our assumptions do not cover the case of the FENE model as we require $-\nabla V(x)$ to be locally Lipschitz on $\RR^m$. The rest of the paper is organised as follows. Section 2 states the main theorems, and proves some minor lemmas. Section 3 has a number of example applications, many from \cite{hj15}, illustrating how suitable adaptive timestep functions can be defined. It also presents some numerical results comparing the performance of the adaptive Euler-Maruyama method to other methods. Section 4 has the proofs of the three main theorems, and finally Section 5 has some conclusions and discusses the extension to the infinite time interval which will be covered in a future paper. In this paper we assume the following setting and notation. Let $T\!>\!0$ be a fixed positive real number, and let $(\Omega,\mathcal{F},\PP)$ be a probability space with normal filtration $(\mathcal{F}_t)_{t\in[0,T]}$ corresponding to a $d$-dimensional standard Brownian motion $W_t=(W^{(1)},W^{(2)},\ldots,W^{(d)})_t^T$. We denote the vector norm by $\|v\|\triangleq(|v_1|^2+|v_2|^2+\ldots+|v_m|^2)^{\frac{1}{2}}$, the inner product of vectors $v$ and $w$ by $\langle v,w \rangle\triangleq v_1w_1+v_2w_2+\ldots+v_mw_m$, for any $v,w\in\RR^m$ and the Frobenius matrix norm by $\|A\|\triangleq \sqrt{\sum_{i,j}A_{i,j}^2}$ for all $A\in\RR^{m\times d}.$ \section{Adaptive algorithm and theoretical results} \subsection{Adaptive Euler-Maruyama method} The adaptive Euler-Maruyama discretisation is \[ \tnp = \tn + h_n, ~~~~ \hX_\tnp = \hX_\tn + f(\hX_\tn)\, h_n + g(\hX_\tn)\, \DW_n, \] where $h_n\triangleq h(\hX_\tn)$ and $\DW_n \triangleq W_\tnp \!-\! W_\tn$, and there is fixed initial data $t_0\!=\!0,\ \hX_0\!=\!X_0$. 
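The adaptive discretisation above can be sketched directly in the scalar case. Here $h(x) = \delta \min(T,\, 2/(1+x^2))$ is an assumed timestep function for the cubic-drift example $f(x) = -x^3$, $g \equiv 1$: for $\delta \leq 1$ it gives $\langle x, f(x)\rangle + \tfrac{1}{2} h(x) \|f(x)\|^2 \leq -x^4 + x^6/(1+x^2) \leq 0$, matching the form of timestep condition analysed below, and it shrinks like $\|x\|^{-2}$ for large $|x|$:

```python
import math, random

def adaptive_em(f, g, h_fn, x0, T, rng):
    """Adaptive Euler-Maruyama: t_{n+1} = t_n + h(X_n) and
    X_{n+1} = X_n + f(X_n) h_n + g(X_n) dW_n, stopping at the first
    time point >= T.  Returns the terminal value and the number of
    timesteps taken (N_T)."""
    t, x, n = 0.0, x0, 0
    while t < T:
        h = min(h_fn(x), T - t)   # truncate the final step to land on T
        x = x + f(x) * h + g(x) * rng.gauss(0.0, math.sqrt(h))
        t += h
        n += 1
    return x, n

# Assumed timestep function for f(x) = -x^3 (delta = 0.1, T = 1):
delta, T = 0.1, 1.0
h_fn = lambda x: delta * min(T, 2.0 / (1.0 + x * x))
rng = random.Random(0)
x_T, n_steps = adaptive_em(lambda x: -x ** 3, lambda x: 1.0,
                           h_fn, x0=2.0, T=T, rng=rng)
```

Because every step satisfies $h_n \leq \delta T$, at least $1/\delta$ steps are needed to reach $T$; the polynomial lower bound on $h$ is what keeps the expected number of steps finite.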
One key point in the analysis is to prove that $t_n$ increases without bound as $n$ increases. More specifically, the analysis proves that for any $T\!>\!0$, almost surely for each path there is an $N$ such that $t_N\!\geq\!T$. We use the notation $ \td \triangleq \max\{t_n: t_n\!\leq\! t\},\ n_t\triangleq\max\{n: t_n\!\leq\! t\} $ for the nearest time point before time $t$, and its index. We define the piecewise constant interpolant process $\bX_t=\hX_\td$ and also define the standard continuous interpolant \cite{kp92} as \[ \hX_t=\hX_\td+f(\hX_\td) (t\!-\!\td) + g(\hX_\td) (W_t\!-\!W_\td), \] so that $\hX_t$ is the solution of the SDE \begin{equation} \D \hX_t = f(\hX_\td)\, \D t + g(\hX_\td) \, \D W_t = f(\bX_t)\,\D t + g(\bX_t)\, \D W_t. \label{eq:approx_SDE} \end{equation} In the following subsections, we state the key results on stability and strong convergence, and related results on the number of timesteps, introducing various assumptions as required for each. The main proofs are deferred to Section \ref{sec:proofs}. \subsection{Stability} \begin{assumption}[Local Lipschitz and linear growth] \label{assp:linear_growth} $f$ and $g$ are both locally Lipschitz, so that for any $R\!>\!0$ there is a constant $C_R$ such that \[ \| f(x)\!-\!f(y) \| + \| g(x)\!-\!g(y) \| \leq C_R\, \|x\!-\!y\| \] for all $x,y\in \RR^m$ with $\|x\|,\|y\|\leq R$. Furthermore, there exist constants $\alpha, \beta \geq 0$ such that for all $x\in \RR^m$, $f$ satisfies the one-sided linear growth condition: \begin{equation} \langle x,f(x)\rangle \leq \alpha\|x\|^2+\beta, \label{eq:onesided_growth} \end{equation} and $g$ satisfies the linear growth condition: \begin{equation} \|g(x)\|^2\leq \alpha\|x\|^2+\beta. 
\label{eq:g_growth} \end{equation} \end{assumption} Together, (\ref{eq:onesided_growth}) and (\ref{eq:g_growth}) imply the monotone condition \[ \langle x,f(x)\rangle + \halfs \|g(x)\|^2 \leq \fracs{3}{2} ( \alpha\|x\|^2\!+\!\beta), \] which is a key assumption in the analysis of Mao \& Szpruch \cite{ms13} and Mao \cite{mao15} for SDEs with volatilities which are not globally Lipschitz. However, in our analysis we choose to use this slightly stronger assumption, which provides the basis for the following lemma on the stability of the SDE solution. \begin{lemma}[SDE stability] \label{lemma:SDE_stability} If the SDE satisfies Assumption \ref{assp:linear_growth}, then for all $p\!>\!0$ \[ \EE\left[ \, \sup_{0\leq t\leq T} \|X_t\|^p \right] < \infty. \] \end{lemma} \begin{proof} The proof is given in Lemma 3.2 in \cite{hms02}; the statement of that lemma makes stronger assumptions on $f$ and $g$, corresponding to (\ref{eq:onesided_Lipschitz}) and (\ref{eq:g_Lipschitz}), but the proof only uses the conditions in Assumption \ref{assp:linear_growth}. \end{proof} We now specify the critical assumption about the adaptive timestep. \begin{assumption}[Adaptive timestep] \label{assp:timestep} The adaptive timestep function $h: \RR^m\rightarrow \RR^+$ is continuous and strictly positive, and there exist constants $\alpha, \beta > 0$ such that for all $x\in \RR^m$, $h(x)$ satisfies the inequality \begin{equation} \langle x, f(x) \rangle + \halfs \, h(x)\, \|f(x)\|^2 \leq \alpha \|x\|^2 + \beta. \label{eq:timestep} \end{equation} \end{assumption} Note that if another timestep function $h^\delta(x)$ is smaller than $h(x)$, then $h^\delta(x)$ also satisfies the Assumption \ref{assp:timestep}. Note also that the form of (\ref{eq:timestep}), which is motivated by the requirements of the proof of the next theorem, is very similar to (\ref{eq:onesided_growth}). 
Indeed, if (\ref{eq:timestep}) is satisfied then (\ref{eq:onesided_growth}) is also true for the same values of $\alpha$ and $\beta$. \begin{theorem}[Finite time stability] \label{thm:stability} If the SDE satisfies Assumption \ref{assp:linear_growth}, and the timestep function $h$ satisfies Assumption \ref{assp:timestep}, then $T$ is almost surely attainable (i.e.~for $\omega\in\Omega$, $\PP(\exists N(\omega)<\infty~\mbox{s.t.}~t_{N(\omega)}\!\geq\!T)=1$) and for all $p\!>\!0$ there exists a constant $C_{p,T}$ which depends solely on $p$, $T$ and the constants $\alpha, \beta$ in Assumption \ref{assp:timestep}, such that \[ \EE\left[ \, \sup_{0\leq t\leq T} \|\hX_t\|^p \right] < C_{p,T}. \] \end{theorem} \begin{proof} The proof is deferred to Section \ref{sec:proofs}. \end{proof} To bound the expected number of timesteps, we require an assumption on how quickly $h(x)$ can approach zero as $\|x\|\rightarrow\infty$. \begin{assumption}[Timestep lower bound] \label{assp:lower_bound} There exist constants $\xi,\zeta,q\!>\!0$ such that the adaptive timestep function satisfies the inequality \[ h(x) \geq \left( \xi \|x\|^q + \zeta \right)^{-1}. \] \end{assumption} Given this assumption, we obtain the following lemma. \begin{lemma}[Bounded timestep moments] If the SDE satisfies Assumption \ref{assp:linear_growth}, and the timestep function $h$ satisfies Assumptions \ref{assp:timestep} and \ref{assp:lower_bound}, then for all $p\!>\!0$ \[ \EE\left[ N_T^p \right] < \infty, \] where $\displaystyle N_T = \min\{n: t_n\geq T\} $ is the number of timesteps required by a path approximation. \end{lemma} \begin{proof} Assumption \ref{assp:lower_bound} gives us \[ N_T \leq 1 + T \sup_{0\leq t\leq T} \frac{1}{h(\hX_t)} \leq 1 + T \left( \xi \sup_{0\leq t\leq T} \|\hX_t\|^q + \zeta \right), \] and the result is then an immediate consequence of Theorem \ref{thm:stability}.
\end{proof} \subsection{Strong convergence} Standard strong convergence analysis for an approximation with a uniform timestep $h$ considers the limit $h\!\rightarrow\!0$. This clearly needs to be modified when using an adaptive timestep, and we will instead consider a timestep function $h^\delta(x)$ controlled by a scalar parameter $0\!<\!\delta\!\leq\!1$, and consider the limit $\delta\!\rightarrow\!0$. Given a timestep function $h(x)$ which satisfies Assumptions \ref{assp:timestep} and \ref{assp:lower_bound}, ensuring stability as analysed in the previous section, there are two quite natural ways in which we might introduce $\delta$ to define $h^\delta(x)$: \begin{eqnarray*} && h^\delta(x) = \delta\, \min(T, h(x) ), \\ && h^\delta(x) = \min( \delta\,T, h(x) ). \end{eqnarray*} The first refines the timestep everywhere, while the second concentrates the computational effort on reducing the maximum timestep, with $h(x)$ introduced to ensure stability when $\|\hX_t\|$ is large. In our analysis, we will cover both possibilities by making the following assumption. \begin{assumption} \label{assp:timestep_delta} The timestep function $h^\delta$ satisfies the inequality \begin{equation} \label{eq:h_delta} \delta\, \min(T, h(x)) \leq h^\delta(x) \leq \min( \delta\,T, h(x) ), \end{equation} and $h$ satisfies Assumption \ref{assp:timestep}. \end{assumption} Given this assumption, we obtain the following theorem: \begin{theorem}[Strong convergence] \label{thm:convergence} If the SDE satisfies Assumption \ref{assp:linear_growth}, and the timestep function $h^\delta$ satisfies Assumption \ref{assp:timestep_delta}, then for all $p\!>\!0$ \[ \lim_{\delta\rightarrow 0}\EE\left[ \, \sup_{0\leq t\leq T} \|\hX_t\!-\!X_t\|^p \right] = 0. \] \end{theorem} \begin{proof} The proof is essentially identical to the uniform timestep Euler-Maruyama analysis in Theorem 2.2 in \cite{hms02} by Higham, Mao \& Stuart.
The only change required by the use of an adaptive timestep is to note that \[ \hX_s - \bX_s = f(\bX_s) \, (s\!-\!\sd) + g(\bX_s) \, (W_s\!-\!W_\sd) \] and $s\!-\!\sd < \delta \, T$ and $ \EE\left[\, \|W_s\!-\!W_\sd\|^2 \ | \ {\cal F}_\sd \,\right] = d \, (s\!-\!\sd). $ \end{proof} To prove an order of strong convergence requires new assumptions on $f$ and $g$: \begin{assumption}[Lipschitz properties] \label{assp:Lipschitz} There exists a constant $\alpha\!>\!0$ such that for all $x,y\in\RR^m$, $f$ satisfies the one-sided Lipschitz condition: \begin{equation} \langle x\!-\!y,f(x)\!-\!f(y)\rangle \leq \halfs\alpha\|x\!-\!y\|^2, \label{eq:onesided_Lipschitz} \end{equation} and $g$ satisfies the Lipschitz condition: \begin{equation} \|g(x)\!-\!g(y)\|^2\leq \halfs\alpha \|x\!-\!y\|^2. \label{eq:g_Lipschitz} \end{equation} In addition, $f$ satisfies the locally polynomial growth Lipschitz condition \begin{equation} \|f(x)\!-\!f(y)\| \leq \left(\gamma\, (\|x\|^q \!+\! \|y\|^q) + \mu\right) \, \|x\!-\!y\|, \label{eq:local_Lipschitz} \end{equation} for some $\gamma, \mu, q > 0$. \end{assumption} Note that setting $y\!=\!0$ gives \[ \langle x, f(x) \rangle \ \leq\ \halfs \alpha \|x\|^2 + \langle x, f(0) \rangle \ \leq\ \alpha \|x\|^2 + \halfs \alpha^{-1} \|f(0)\|^2, \] \[ \| g(x) \|^2 \ \leq\ 2 \|g(x)\!-\!g(0)\|^2 + 2 \|g(0)\|^2 \ \leq\ \alpha \|x\|^2 + 2 \|g(0)\|^2. \] Hence, Assumption \ref{assp:Lipschitz} implies Assumption \ref{assp:linear_growth}, with the same $\alpha$ and an appropriate $\beta$. Also, if the drift and volatility are differentiable, the following assumption is equivalent to Assumption \ref{assp:Lipschitz}, and usually easier to check in practice. 
\begin{assumption}[Lipschitz properties] \label{assp:Lipschitz_diff} There exists a constant $\alpha\!>\!0$ such that for all $x,e\in\RR^m$ with $\|e\|\!=\!1$, $f$ satisfies the one-sided Lipschitz condition: \begin{equation} \langle e ,\nabla f(x)\, e \rangle \leq \halfs\alpha \label{eq:onesided_Lipschitz_diff} \end{equation} and $g$ satisfies the Lipschitz condition: \begin{equation} \| \nabla g(x)\|^2\leq \halfs\alpha \label{eq:g_Lipschitz_diff} \end{equation} and in addition $f$ satisfies the locally polynomial growth Lipschitz condition \begin{equation} \|\nabla f(x) \| \leq \left( 2\,\gamma\, \|x\|^q + \mu\right), \label{eq:local_Lipschitz_diff} \end{equation} for some $\gamma, \mu, q > 0$. \end{assumption} \begin{theorem}[Strong convergence order] \label{thm:convergence_order} If the SDE satisfies Assumption \ref{assp:Lipschitz}, and the timestep function $h^\delta$ satisfies Assumption \ref{assp:timestep_delta}, then for all $p\!>\!0$ there exists a constant $C_{p,T}$ such that \[ \EE\left[ \, \sup_{0\leq t\leq T} \|\hX_t\!-\!X_t\|^p \right] \leq C_{p,T} \, \delta^{p/2}. \] \end{theorem} \begin{proof} The proof is deferred to Section \ref{sec:proofs}. \end{proof} \begin{lemma}[Number of timesteps] \label{lemma:timesteps2} If the SDE satisfies Assumption \ref{assp:Lipschitz}, and the timestep function $h^\delta(x)$ satisfies Assumption \ref{assp:timestep_delta}, with $h(x)$ satisfying Assumption \ref{assp:lower_bound}, then for all $p\!>\!0$ there exists a constant $c_{p,T}$ such that \[ \EE\left[ N_T^p \right] \leq c_{p,T} \, \delta^{-p}, \] where $N_T$ is again the number of timesteps required by a path approximation.
\end{lemma} \begin{proof} Equation (\ref{eq:h_delta}) and Assumption \ref{assp:lower_bound} give \begin{eqnarray*} N_T &\leq& 1 + T \sup_{0\leq t\leq T} \frac{1}{h^\delta(\hX_t)} \\ &\leq& 1 + \delta^{-1}\, T \sup_{0\leq t\leq T} \max(h^{-1}(\hX_t),T^{-1}) \\ &\leq& \delta^{-1}\, T \left( \xi \sup_{0\leq t\leq T} \|\hX_t\|^q + \zeta + (1\!+\!\delta) \, T^{-1} \right). \end{eqnarray*} The result is then a consequence of Theorem \ref{thm:stability} since $h^\delta(x)\!\leq\!h(x)$ and therefore $h^\delta(x)$ satisfies the requirements for stability. \end{proof} The conclusion from Theorem \ref{thm:convergence_order} and Lemma \ref{lemma:timesteps2} is that \[ \EE\left[ \, \sup_{0\leq t\leq T} \|\hX_t\!-\!X_t\|^p \right]^{1/p} \leq C^{1/p}_{p,T}\, c^{1/2}_{1,T} \, (\EE\left[ N_T \right])^{-1/2}, \] which corresponds to order $\halfs$ strong convergence when comparing the accuracy to the expected cost. First order strong convergence is achievable for Langevin SDEs in which $m\!=\!d$ and $g$ is the identity matrix $I_m$, but this requires stronger assumptions on the drift $f$. \begin{assumption}[Enhanced Lipschitz properties] \label{assp:enhanced_Lipschitz} There exists a constant $\alpha\!>\!0$ such that for all $x,y\in\RR^m$, $f$ satisfies the one-sided Lipschitz condition: \begin{equation} \langle x\!-\!y,f(x)\!-\!f(y)\rangle \leq \halfs\alpha\|x\!-\!y\|^2. \label{eq:onesided_Lipschitz2} \end{equation} In addition, $f$ is differentiable, and $f$ and $\nabla f$ satisfy the locally polynomial growth Lipschitz condition \begin{equation} \|f(x)\!-\!f(y)\| + \|\nabla f(x)\!-\!\nabla f(y)\|\leq \left( \gamma\, (\|x\|^q \!+\! \|y\|^q) + \mu\right) \|x\!-\!y\|, \label{eq:local_Lipschitz2} \end{equation} for some $\gamma, \mu, q > 0$.
\end{assumption} \begin{lemma} \label{lemma:useful} If $f$ satisfies Assumption \ref{assp:enhanced_Lipschitz} then for any $x,y,v \in \RR^m$,\\[-0.1in] \[ \langle v, f(x)\!-\!f(y) \rangle = \langle v, \nabla f(x) (x\!-\!y) \rangle + R(x,y,v), \] where the remainder term has the bound \[ |R(x,y,v)| \leq \left( \gamma\, (\|x\|^q \!+\! \|y\|^q) + \mu\right) \|v\| \, \|x\!-\!y\|^2. \] \end{lemma} \begin{proof} If we define the scalar function $u(\lambda)$ for $0\!\leq\!\lambda\!\leq\!1$ by \[ u(\lambda) = \langle v, f(y+\lambda(x\!-\!y)) \rangle, \] then $u(\lambda)$ is continuously differentiable, and by the Mean Value Theorem $u(1)\!-\!u(0)= u'(\lambda^*)$ for some $0\!<\!\lambda^*\!<\!1$, which implies that \[ \langle v, f(x)\!-\!f(y) \rangle = \langle v, \nabla f(y+\lambda^*(x\!-\!y))\, (x\!-\!y) \rangle. \] The final result then follows from the Lipschitz property of $\nabla f$. \end{proof} We now state the theorem on improved strong convergence. \begin{theorem}[Strong convergence for Langevin SDEs] \label{thm:convergence_order2} If $m\!=\!d$, $g\equiv I_m$, $f$ satisfies Assumption \ref{assp:enhanced_Lipschitz}, and the timestep function $h^\delta$ satisfies Assumption \ref{assp:timestep_delta}, then for all $T,p\in(0,\infty)$ there exists a constant $C_{p,T}$ such that \[ \EE\left[ \, \sup_{0\leq t\leq T} \|\hX_t\!-\!X_t\|^p \right] \leq C_{p,T} \, \delta^p. \] \end{theorem} \begin{proof} The proof is deferred to Section \ref{sec:proofs}. \end{proof} Comment: first order strong convergence can also be achieved for a general $g(x)$ by using an adaptive timestep Milstein discretisation, provided $\nabla g$ satisfies an additional Lipschitz condition. A formal statement and proof of this is omitted as it requires a lengthy extension to the stability analysis. 
In addition, this numerical approach is only practical in cases in which the commutativity condition is satisfied and therefore there is no need to simulate the L{\'evy} areas which the Milstein method otherwise requires \cite{kp92}. \section{Examples and numerical results} In this section we discuss a number of example SDEs with non-globally Lipschitz drift. In each case we comment on the applicability of the theory and a suitable choice for the adaptive timestep. We then present numerical results for four testcases which illustrate some key aspects. \subsection{Scalar SDEs} In each of the cases to be presented, the drift is of the form \begin{equation} \label{eq:scalar_drift} f(x) \approx - \, c \, \mbox{sign}(x) \, |x|^q, ~~~ \mbox{as} ~ |x|\rightarrow \infty \end{equation} for some constants $c\!>\!0$, $q\!>\!1$. Therefore, as $|x|\!\rightarrow\! \infty$, the maximum stable timestep satisfying Assumption \ref{assp:timestep} corresponds to $\langle x, f(x) \rangle + \halfs h(x) \, |f(x)|^2 \approx 0$ and hence $h(x) \approx 2 |x|/|f(x)| \approx 2\, c^{-1} |x|^{1-q}$. A suitable choice for $h(x)$ and $h^\delta(x)$ is therefore \begin{equation} h(x) = \min\left(T, c^{-1} |x|^{1-q}\right), ~~~ h^\delta(x) = \delta\, h(x). \label{eq:scalar_timestep} \end{equation} \subsubsection{Stochastic Ginzburg-Landau equation} This describes a phase transition from the theory of superconductivity \cite{hjk11,kp92}. \[ \D X_t= \left( (\eta+\halfs\sigma^2) X_t - \lambda X_t^3 \right)\D t + \sigma X_t\,\D W_t, \] where $\eta\!\geq\!0$, $\lambda,\sigma\!>\!0$. The SDE is usually defined on the domain $\RR^+$, since $X_t\!>\!0$ for all $t\!>\!0$ if $X_0\!>\!0$. However, the numerical approximation is not guaranteed to remain strictly positive and the domain can be extended to $\RR$ without any change to the SDE.
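As a check on this choice (a short verification, not taken from the references), the pure power-law drift $f(x) = -\,c\,\mbox{sign}(x)\,|x|^q$ with any timestep $h(x) \leq c^{-1}|x|^{1-q}$ satisfies Assumption \ref{assp:timestep} exactly:

```latex
% Verification of (eq:timestep) for the model drift f(x) = -c sign(x)|x|^q
% with any timestep h(x) <= c^{-1}|x|^{1-q}:
\begin{align*}
\langle x, f(x) \rangle + \tfrac{1}{2}\, h(x)\, |f(x)|^2
  &\leq -\,c\, |x|^{q+1} + \tfrac{1}{2}\, c^{-1} |x|^{1-q} \cdot c^2 |x|^{2q} \\
  &= -\,c\, |x|^{q+1} + \tfrac{1}{2}\, c\, |x|^{q+1}
   \;=\; -\tfrac{1}{2}\, c\, |x|^{q+1} \;\leq\; 0,
\end{align*}
```

so any $\alpha, \beta > 0$ will do. For drifts which only behave like (\ref{eq:scalar_drift}) asymptotically, such as the examples in this subsection, the same computation applies for sufficiently large $|x|$, with the lower-order terms absorbed into the constants $\alpha$ and $\beta$.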
The drift and volatility satisfy Assumptions \ref{assp:linear_growth} and \ref{assp:Lipschitz}, and therefore all of the theory is applicable, with a suitable choice for $h^\delta(x)$, based on (\ref{eq:scalar_drift}) and (\ref{eq:scalar_timestep}), being \[ h^\delta(x) = \delta \, \min\left(T, \lambda^{-1} x^{-2}\right). \] \subsubsection{Stochastic Verhulst equation} This is a model for a population with competition between individuals \cite{hjk11}. \[ \D X_t=\left(\left( \eta+\halfs\sigma^2\right)X_t-\lambda X_t^2 \right) \D t + \sigma X_t \, \D W_t, \] where $\eta,\lambda,\sigma\!>\!0.$ The SDE is defined on the domain $\RR^+$, but can be extended to $\RR$ by modifying it to \[ \D X_t=\left(\left( \eta+\halfs\sigma^2\right)X_t-\lambda\, |X_t| X_t \right) \D t + \sigma X_t \, \D W_t, \] so that the drift is positive in the limit $x\!\rightarrow\!-\infty$. The drift and volatility then satisfy Assumptions \ref{assp:linear_growth} and \ref{assp:Lipschitz}, and therefore all of the theory is applicable, with a suitable choice for $h^\delta(x)$, based on (\ref{eq:scalar_drift}) and (\ref{eq:scalar_timestep}), being \[ h^\delta(x) = \delta \, \min\left(T, \lambda^{-1} |x|^{-1}\right). \] \if 0 \subsubsection{Feller diffusion with logistic growth} The branching process with logistic growth is a stochastic Verhulst equation with Feller noise \cite{hjk11}. \[ \D X_t = \lambda\, X_t(K\!-\!X_t)\,\D t + \sigma\sqrt{X_t}\,\D W_t, \] where $\lambda,K,\sigma\!>\!0$. The SDE is defined on the domain $\RR^+$, but can be extended to $\RR$ by modifying it to \[ \D X_t = \lambda\, X_t(K\!-\!|X_t|)\,\D t + \sigma\sqrt{|X_t|}\,\D W_t. \] The drift and volatility satisfy Assumption \ref{assp:linear_growth} and so the numerical approximation will be stable if the maximum timestep is defined to be \[ h(x) = \min\left(T, \lambda^{-1} |x|^{-1}\right). \] However, the volatility does not satisfy the global Lipschitz condition and therefore the strong convergence theory is not applicable. 
\fi \if 0 \subsubsection{Wright-Fisher diffusion} The Wright-Fisher family of diffusion processes is a class of evolutionary models widely used in population genetics, with applications also in finance and Bayesian statistics. \[ \D X_t = (a\!-\!bX_t)\,\D t + \gamma\sqrt{X_t(1\!-\!X_t)}\,\D W_t,\ \ X_0=x_0\in(0,1) \] where $a,b,\gamma\!>\!0$ and $2a/\gamma^2\geq 1, 2(b\!-\!a)/\gamma^2\geq 1$, so that the SDE has unique strong solutions with $\PP[X_t\!\in\!(0,1)]=1$ for $t\!>\!0$. The transformation \[ Y_t = \log X_t - \log (1\!-\!X_t) \] leads to the SDE \begin{eqnarray*} \D Y_t &=& \left( 2a-b + (a-b+\halfs\gamma^2)\exp(Y_t) + (a-\halfs\gamma^2)\exp(-Y_t) \right) \D t \\&& +\ \gamma\left(\exp(\halfs Y_t) + \exp(-\halfs Y_t) \right) \D W_t, \end{eqnarray*} defined on the whole real line $\RR$. The drift and volatility are both locally Lipschitz, but neither satisfies the linear growth condition, so none of the theory is applicable. \fi \subsection{Multi-dimensional SDEs} With multi-dimensional SDEs there are two cases of particular interest. For SDEs with a drift which, for some $\beta\!>\!0$ and sufficiently large $\|x\|$, satisfies the condition \[ \langle x, f(x) \rangle \leq - \beta\, \|x\| \, \|f(x)\|, \] one can take $\langle x, f(x) \rangle + \halfs h(x) \, \|f(x)\|^2 \approx 0$ and therefore a suitable definition of $h(x)$ for large $\|x\|$ is \[ h(x) = \min(T, \|x\|/\|f(x)\|). \] For SDEs with a drift which does not satisfy the condition, but for which $\|f(x)\|\rightarrow \infty$ as $\|x\|\rightarrow \infty$, an alternative choice for large $\|x\|$ is to use \begin{equation} h(x) = \min(T, \gamma\, \|x\|^2/\|f(x)\|^2), \label{eq:multi_h} \end{equation} for some $\gamma\!>\!0$. The difficulty in this case is choosing the best value for $\gamma$, taking into account both accuracy and cost. \subsubsection{Stochastic van der Pol oscillator} This describes state oscillation \cite{hj15}.
\begin{eqnarray*} \D X_t^{(1)} &=& X_t^{(2)}\,\D t\\ \D X_t^{(2)} &=& \left(\alpha\left(\mu\!-\!(X_t^{(1)})^2 \right) X_t^{(2)}-\delta X_t^{(1)} \right) \,\D t + \beta \,\D W_t \end{eqnarray*} where $\alpha, \mu, \delta, \beta\!>\!0$. It can be put in the standard form by defining \[ f(x) \equiv \begin{pmatrix} x_2\\ \alpha\,(\mu\!-\!x_1^2)\,x_2-\delta x_1 \end{pmatrix},\ \ g(x) \equiv \begin{pmatrix} 0\\ \beta \end{pmatrix}. \] It follows that \[ \left\langle x,f(x) \right\rangle \ =\ - \alpha\, x_1^2\, x_2^2 + \alpha\,\mu\, x_2^2 + (1\!-\!\delta)\, x_1\, x_2 \ \leq\ \left( \alpha \mu + \halfs (1\!-\!\delta) \right) \|x\|^2. \] Therefore the drift and volatility satisfy Assumption \ref{assp:linear_growth} and the numerical approximations will be stable if the maximum timestep is defined by (\ref{eq:multi_h}). However, it can be verified that $\langle e, \nabla f(x) \, e\rangle$ is not uniformly bounded for an arbitrary $e$ such that $\|e\|=1$, and therefore the drift does not satisfy the one-sided Lipschitz condition. Hence the stability and strong convergence theory in this paper is applicable, but not the theorems on the order of convergence. Nevertheless, numerical experiments exhibit first order strong convergence, which is consistent with the fact that the volatility is constant, so it seems there remains a gap here in the theory. \if 0 \subsubsection{Stochastic Duffing -- van der Pol oscillator} This model, combining features of the Duffing and van der Pol equations, has been used in certain aeroelasticity problems \cite{hj15}.
\begin{eqnarray*} \D X_t^{(1)} &=&\,X_t^{(2)}\,\D t\\ \D X_t^{(2)} &=&\left(\alpha_1 X_t^{(1)}-\alpha_2 X_t^{(2)} - \alpha_3 X_t^{(2)}(X_t^{(1)})^2-(X_t^{(1)})^3 \right)\,\D t\\ &&+\ \sqrt{ \beta_1^2 (X_t^{(1)})^2 + \beta_2^2(X_t^{(2)})^2 + \beta_3^2} \ \D W_t \end{eqnarray*} where $\alpha_1,\alpha_2,\alpha_3,\beta_1,\beta_2,\beta_3 > 0$, so that \[ f(x) \equiv \begin{pmatrix} x_2\\ \alpha_1 x_1-\alpha_2 x_2-\alpha_3 x_2 (x_1)^2-(x_1)^3 \end{pmatrix},~~ g(x) \equiv \begin{pmatrix} 0 \\ \sqrt{ \beta_1^2 x_1^2 + \beta_2^2x_2^2 + \beta_3^2} \end{pmatrix}. \] In this case, the drift coefficient does not satisfy the one-sided linear growth condition, and therefore none of the theory is applicable. \fi \subsubsection{Stochastic Lorenz equation} This is a three-dimensional system modelling convection rolls in the atmosphere \cite{hj15}. \begin{eqnarray*} \D X_t^{(1)} &=& \left(\alpha_1X_t^{(2)}-\alpha_1X_t^{(1)}\right) \,\D t + \beta_1X_t^{(1)}\D W_t^{(1)}\\ \D X_t^{(2)} &=& \left(\alpha_2X_t^{(1)}-X_t^{(2)}-X_t^{(1)}X_t^{(3)}\right) \,\D t + \beta_2X_t^{(2)}\D W_t^{(2)}\\ \D X_t^{(3)}&=& \left(X_t^{(1)}X_t^{(2)}-\alpha_3X_t^{(3)}\right) \,\D t + \beta_3X_t^{(3)}\D W_t^{(3)} \end{eqnarray*} where $\alpha_1,\alpha_2,\alpha_3,\beta_1,\beta_2,\beta_3 > 0$, and so we have: \[ f(x) \equiv \begin{pmatrix} \alpha_1(x_2-x_1)\\ \alpha_2 x_1-x_2-x_1x_3\\ x_1x_2-\alpha_3x_3 \end{pmatrix}, ~~ g(x) \equiv \begin{pmatrix} \beta_1x_1&0&0\\ 0&\beta_2x_2&0\\ 0&0&\beta_3x_3 \end{pmatrix}. \] The diffusion coefficient is globally Lipschitz, and since $\langle x, f(x) \rangle$ consists solely of quadratic terms, the drift satisfies the one-sided linear growth condition. Noting that $\|f\|^2 \approx x_1^2(x_2^2+x_3^2) < \|x\|^4 $ as $\|x\|\rightarrow\infty$, an appropriate maximum timestep is \[ h(x) = \min(T, \gamma \|x\|^{-2}), \] for any $\gamma\!>\!0$. 
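As an illustrative numerical sanity check (ours, not part of the paper's results), one can sample random states and confirm that this timestep choice satisfies the inequality in Assumption \ref{assp:timestep} for the Lorenz drift. The parameter values below (the classical Lorenz constants) and the generously chosen constants $\alpha, \beta$ are hypothetical.

```python
import math
import random

# Hypothetical parameter choices: classical Lorenz values for alpha_1..alpha_3.
A1, A2, A3 = 10.0, 28.0, 8.0 / 3.0
T, GAMMA = 1.0, 1.0

def f(x):
    # Lorenz drift, matching the definition in the text
    return (A1 * (x[1] - x[0]),
            A2 * x[0] - x[1] - x[0] * x[2],
            x[0] * x[1] - A3 * x[2])

def h(x):
    # Adaptive maximum timestep h(x) = min(T, gamma * ||x||^{-2})
    n2 = sum(c * c for c in x)
    return T if n2 == 0.0 else min(T, GAMMA / n2)

def lhs(x):
    # <x, f(x)> + (1/2) h(x) ||f(x)||^2, the left side of (eq:timestep)
    fx = f(x)
    return (sum(a * b for a, b in zip(x, fx))
            + 0.5 * h(x) * sum(c * c for c in fx))

# Generous illustrative constants alpha, beta (not optimal)
ALPHA, BETA = 50.0, 2000.0
rng = random.Random(0)
ok = all(
    lhs(x) <= ALPHA * sum(c * c for c in x) + BETA
    for x in ((rng.uniform(-20, 20), rng.uniform(-20, 20), rng.uniform(-20, 20))
              for _ in range(10_000))
)
```

The check passes because $\langle x, f(x)\rangle$ is quadratic in $x$ while $h(x)\,\|f(x)\|^2 = O(\|x\|^2)$ for this choice of $h$, so the left side grows at most quadratically.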
However, the drift does not satisfy the one-sided Lipschitz condition, and therefore the theory on the order of strong convergence is not applicable. \if 0 \subsubsection{Stochastic Brusselator in the well-stirred case} This is a model for a trimolecular chemical reaction \cite{hj15}. \begin{eqnarray*} \D X_t^{(1)} &=& \left(\delta-(\alpha+1)X_t^{(1)} + X_t^{(2)}(X_t^{(1)})^2\right) \,\D t + g_1(X_t^{(1)})\,\D W_t^{(1)}\\ \D X_t^{(2)} &=& \left(\alpha X_t^{(1)} - X_t^{(2)}(X_t^{(1)})^2\right) \,\D t + g_2(X_t^{(2)})\,\D W_t^{(2)} \end{eqnarray*} so that we have: \[ f(x) \equiv \begin{pmatrix} \delta-(\alpha+1)x_1+ x_2(x_1)^2\\ \alpha x_1-x_2(x_1)^2 \end{pmatrix}, ~~~ g(x) \equiv \begin{pmatrix} g_1(x_1)&0\\ 0&g_2(x_2) \end{pmatrix}. \] The drift coefficient is not globally one-sided Lipschitz continuous and even fails to satisfy the one-sided linear growth condition. \fi \subsubsection{Langevin equation} The multi-dimensional Langevin equation is \begin{equation} \label{eq:Langevin} \D X_t = - \nabla V(X_t) \, \D t + \D W_t. \end{equation} In molecular dynamics applications, $V(x)$ represents the potential energy of a molecule, while in other applications $V = - \, \halfs \log \pi + \mbox{const}$ where $\pi: \RR^m \rightarrow \RR^+$ is an invariant measure. $V$ is usually defined on $\RR^m$, infinitely differentiable, and satisfies all of the assumptions in this paper, so the theory is fully applicable, leading to order 1 strong convergence. \subsubsection{FENE model} The FENE (Finitely Extensible Nonlinear Elastic) model is a Langevin equation describing the motion of a long-chained polymer in a liquid \cite{bs07,glw16}. The unusual feature of the FENE model is that the potential $V(x)$ becomes infinite for finite values of $x$. In the simplest case of a molecule with a single bond, $x$ is three-dimensional and $V(x)$ takes the form $ V(x) = - \log (1 \!-\! \|x\|^2). $ The SDE is defined on $\|x\|\!<\!1$, with the drift term ensuring that $\|X_t\|\!<\!
1$ for all $t\!>\!0$. Also, it can be verified that $\langle x,f(x) \rangle\!\leq\! 0$. Because the SDE is not defined on all of $\RR^3$, the theory in this paper is not applicable. However, it was one of the original motivations for the analysis in this paper, since it seems natural to use an adaptive timestep, taking smaller timesteps as $\|\hX_t\|$ approaches 1, to maintain good accuracy, as the drift varies so rapidly near the boundary, and to greatly reduce the possibility of needing to clamp the computed solution to prevent it from crossing a numerical boundary at radius $1\!-\!\delta$ for some $\delta\!\ll\! 1$ \cite{glw16}. Numerical results indicate that the order of strong convergence is very close to 1. \begin{figure} \caption{Numerical results for testcase 1} \label{fig1} \end{figure} \subsection{Numerical results} The numerical tests include three testcases from \cite{hjk12} plus one new test which provides some motivation for the research in this paper. \subsubsection{Testcase 1} The first scalar testcase taken from \cite{hjk12} is \[ \D X_t = - X_t^5\, \D t + X_t\, \D W_t, ~~~~ X_0 = 1, \] with $T\!=\!1$. The three methods tested are the Tamed Euler scheme, with $C\!=\!1$, the implicit Euler scheme, and the new Euler scheme with adaptive timestep \[ h^\delta(x) = \delta \, \frac{\max(1,|x|)}{\max(1,|f(x)|)}. \] Figure \ref{fig1} shows the root-mean-square error plotted against the average timestep. The plot on the left shows the error at the terminal time, while the plot on the right shows the error in the maximum magnitude of the solution. The error in each case is computed by comparing the numerical solution to a second solution with a timestep, or $\delta$, which is 4 times smaller. When looking at the error in the final solution, all 3 methods have similar accuracy with $\halfs$ order strong convergence. However, as reported in \cite{hjk12}, the cost of the implicit method per timestep is much higher.
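For concreteness, the adaptive scheme for this testcase can be sketched with a simple path-by-path driver (our illustration, not the authors' implementation):

```python
import math
import random

# Adaptive Euler-Maruyama sketch for testcase 1:
#   dX = -X^5 dt + X dW,  X_0 = 1,  T = 1,
# with timestep h^delta(x) = delta * max(1,|x|) / max(1,|f(x)|).
T = 1.0

def f(x):
    return -x**5

def g(x):
    return x

def h_delta(x, delta):
    return delta * max(1.0, abs(x)) / max(1.0, abs(f(x)))

def adaptive_em(delta, rng):
    t, x, n = 0.0, 1.0, 0
    while t < T:
        h = min(h_delta(x, delta), T - t)   # do not step past T
        x += f(x) * h + g(x) * rng.gauss(0.0, math.sqrt(h))
        t += h
        n += 1
    return x, n

x_T, n_steps = adaptive_em(0.01, random.Random(1))
```

Since $h^\delta(x)\leq\delta$ here, each path uses at least $T/\delta$ steps; when $|\hX_t|$ is large the timestep shrinks like $\delta |x|^{-4}$, so the explicit update contracts rather than overshooting, which is what keeps the scheme stable.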
The plot of the error in the maximum magnitude shows that the new method is slightly more accurate, presumably because it uses smaller timesteps when the solution is large. The plot was included to show that comparisons between numerical methods depend on the choice of accuracy measure being used. \begin{figure} \caption{Numerical results for testcase 2} \label{fig2} \end{figure} \subsubsection{Testcase 2} The second scalar testcase taken from \cite{hjk12} is \[ \D X_t = (X_t \!-\! X_t^3)\, \D t + X_t\, \D W_t, ~~~~ X_0 = 1, \] with $T\!=\!1$. The results in Figure \ref{fig2} are similar to the first testcase. \begin{figure} \caption{Numerical results for testcases 3 (on left) and 4 (on right)} \label{fig3} \end{figure} \subsubsection{Testcase 3} The third testcase taken from \cite{hjk12} is 10-dimensional, \[ \D X_t = (X_t \!-\! \|X_t\|^2 \, X_t)\, \D t + \D W_t, ~~~~ X_0 = 0, \] with $T\!=\!1$. The results in the left-hand plot in Figure \ref{fig3} show that the error in the final value exhibits order 1 strong convergence using all 3 methods, as expected. \subsubsection{Testcase 4} The final testcase is for the 3-dimensional FENE SDE discussed previously, \[ \D X_t = - \frac{X_t}{1 \!-\! \|X_t\|^2} \ \D t + \D W_t, ~~~~X_0 = 0, \] with $T\!=\!1$. As commented on previously, this SDE is not covered by the theory in this paper, but it is a motivation for the research because it is natural to use an adaptive timestep of the form \[ h^\delta(x) = \frac{\delta}{4} \, (1\!-\!\|x\|^2) \] to reduce the timestep when $\|\hX_t\|$ approaches the maximum radius. All three methods are clamped so that they do not exceed a radius of $r_{max}=1\!-\!10^{-10}$; if the new computed value $\hX_{t_{n+1}}$ exceeds this radius then it is replaced by $(r_{max}/\|\hX_{t_{n+1}}\|) \hX_{t_{n+1}}$. 
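The clamped adaptive update just described can be sketched as follows; this is an illustrative fragment, with the timestep and clamping radius taken from the formulas above:

```python
import math
import random

R_MAX = 1.0 - 1e-10   # clamping radius from the text

def fene_step(x, delta, rng):
    """One adaptive Euler-Maruyama step for dX = -X/(1-||X||^2) dt + dW,
    with h^delta(x) = (delta/4)(1-||x||^2) and clamping at radius R_MAX."""
    r2 = sum(c * c for c in x)
    h = 0.25 * delta * (1.0 - r2)
    sqh = math.sqrt(h)
    # Euler-Maruyama update with drift -x/(1-||x||^2) and identity volatility
    y = [c - c / (1.0 - r2) * h + rng.gauss(0.0, sqh) for c in x]
    r = math.sqrt(sum(c * c for c in y))
    if r > R_MAX:                        # clamp back inside the unit ball
        y = [c * (R_MAX / r) for c in y]
    return y

rng = random.Random(7)
x = [0.0, 0.0, 0.0]
for _ in range(1000):
    x = fene_step(x, 0.1, rng)
```

Because the clamp guarantees $\|\hX_{t_n}\|\leq r_{max}<1$ after every step, the timestep $h^\delta$ remains strictly positive along the whole path.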
The numerical results in the right-hand plot in Figure \ref{fig3} show that the new scheme is considerably more accurate than either of the others, confirming that an adaptive timestep is desirable in this situation in which the drift varies enormously as $\|\hX_t\|$ approaches the maximum radius. \section{Proofs} \label{sec:proofs} This section contains the proofs of the three main theorems in this paper, one on stability, and two on the order of strong convergence. \subsection{Theorem \ref{thm:stability}} \begin{proof} The proof proceeds in four steps. First, we introduce a constant $K$ to modify our discretisation scheme. Second, we derive an upper bound for $\|\hX_t^K\|^p$. Third, we show that the moments $\EE[ \sup_{0\leq t\leq T}\|\hX_t^K\|^p ]$ are each bounded by a constant $C_{p,T}$ which depends on $p$ and $T$ but is independent of $K$. Finally, we reach the desired conclusion by taking the limit $K\!\rightarrow\!\infty$ and using the Monotone Convergence theorem. The proof is given for $p\!\geq\! 4$; the result for $0\!<\! p \!<\! 4$ follows from H{\"o}lder's inequality. \noindent \textbf{{Step 1: K-Scheme definition}} For any $K\!>\!\|X_0\|,$ we modify our discretisation scheme to: \begin{equation} \hX_\tnp^K = P_K\left(\hX_\tn^K + f(\hX_\tn^K)\, h_n + g(\hX_\tn^K)\, \DW_n\right), \label{Kdiscrete} \end{equation} where $P_K(Y)\triangleq\min(1,K/\|Y\|)\, Y$ and therefore $\|\hX_\tn^K\|\!\leq\! K,\, \forall n$. The piecewise constant approximation for intermediate times is again $\bX_t^K=\hX_\td^K$, and the continuous approximation is \[ \hX_t^K=P_K\left( \hX_\td^K+f(\hX_\td^K)\, (t\!-\!\td) + g(\hX_\td^K)\, (W_t\!-\!W_\td)\right). \] Since $h(x)$ is continuous and strictly positive, it follows that \[ h_{min}^K \triangleq \inf_{\|x\|\leq K}h(x)>0. \] This strictly positive lower bound for the timesteps implies that $T$ is attainable. \noindent \textbf{{Step 2: $p$th-moment of K-Scheme solution}} $\|P_K(Y)\| \!\leq\!
\|Y\|$, so if we define $\phi(x)\triangleq x\!+\!h(x)f(x)$, then (\ref{Kdiscrete}) gives \begin{eqnarray*} \|\hX_\tnp^K\|^2 & \leq & \|\hX_\tn^K\|^2 + 2\, h_n\left( \langle\hX_\tn^K,f(\hX_\tn^K)\rangle + \halfs h_n \|f(\hX_\tn^K)\|^2 \right) \\ && +\, 2\,\langle \phi(\hX_\tn^K), g(\hX_\tn^K)\,\DW_n \rangle + \|g(\hX_\tn^K)\, \DW_n\|^2. \end{eqnarray*} Using condition (\ref{eq:timestep}) for $h(x)$ then gives \begin{eqnarray} \label{2os3fs4} \|\hX_\tnp^K\|^2 & \leq & \|\hX_\tn^K\|^2 \ +\ 2\,\alpha\, \|\hX_\tn^K\|^2 h_n \ +\ 2\,\beta\, h_n \nonumber\\&& +\, 2\,\langle \phi(\hX_\tn^K), g(\hX_\tn^K)\,\DW_n \rangle + \|g(\hX_\tn^K)\, \DW_n\|^2. \end{eqnarray} Similarly, for the partial timestep from $\td$ to $t$, since $(t\!-\!\td)\leq h_{n_t}$, \begin{equation} \langle \hX_\td^K, f(\hX_\td^K) \rangle + \halfs \, (t\!-\!\td)\, \|f(\hX_\td^K)\|^2 \leq \alpha \, \|\hX_\td^K\|^2 + \beta, \label{eq:partial} \end{equation} and therefore we obtain \begin{eqnarray} \label{2os3fs4t} \|\hX_t^K\|^2 & \leq & \|\hX_\td^K\|^2 \ +\ 2\,\alpha\, \|\hX_\td^K\|^2 (t\!-\!\td) \ +\ 2\,\beta\, (t\!-\!\td) \nonumber\\&& + \, 2\,\langle \hX_\td^K \!+\! f(\hX_\td^K)\, (t\!-\!\td), g(\hX_\td^K)\,(W_t\!-\!W_\td) \rangle \nonumber\\&& + \, \|g(\hX_\td^K)\, (W_t\!-\!W_\td)\|^2. \end{eqnarray} Summing (\ref{2os3fs4}) over multiple timesteps and then adding (\ref{2os3fs4t}) gives \begin{eqnarray*} \|\hX^K_t\|^2 & \leq & \|X_0\|^2 \ +\ 2\,\alpha\left( \sum_{k=0}^{n_t-1}\|\hX^K_{t_k}\|^2h_k + \|\hX_\td^K\|^2 (t\!-\!\td) \right) \ +\ 2\, \beta\, t \\[-0.1in] && +\, 2 \sum_{k=0}^{n_t-1}\langle \phi(\hX_{t_k}^K), g(\hX_{t_k}^K)\DW_k \rangle + \sum_{k=0}^{n_t-1}\|g(\hX_{t_k}^K)\, \DW_k\|^2 \\ && +\, 2 \langle \hX_\td^K \!+\! f(\hX_\td^K)\, (t\!-\!\td),\, g(\hX_\td^K)(W_t\!-\!W_\td) \rangle \\[0.05in] && +\, \|g(\hX_\td^K)\, (W_t\!-\!W_\td)\|^2.
\end{eqnarray*} Re-writing the first summation as a Riemann integral, and the second as an It{\^o} integral, raising both sides to the power $p/2$ and using Jensen's inequality, we obtain \begin{eqnarray} \label{os3fs4} \|\hX^K_t\|^p & \!\!\leq\!\! & 7^{p/2 - 1}\left\{ \rule{0in}{0.25in} \|X_0\|^p \ +\ \left(2\, \alpha \int_0^t\|\bX_s^K\|^2\,\D s\right)^{p/2}\ +\ (2\, \beta\, t)^{p/2} \right. \nonumber \\ && ~~~~ + \, \left|\, 2\! \int_0^\td \! \langle \phi(\bX_s^K), g(\bX_s^K)\, \D W_s\rangle \,\right|^{p/2} \!\! + \left(\sum_{k=0}^{n_t-1}\|g(\bX_{t_k}^K) \, \DW_k \|^2 \! \right)^{p/2} \nonumber \\ && ~~~~~~~~ + \, \left| 2\, \langle \bX_t^K \!+\! f(\bX_t^K)\, (t\!-\!\td), g(\bX_t^K) (W_t\!-\!W_\td) \rangle \right|^{p/2} \!\! \nonumber \\ && \left. ~~~~~~~~ +\, \|g(\bX_t^K) (W_t\!-\!W_\td)\|^p \rule{0in}{0.25in} \right\}. \end{eqnarray} \noindent \textbf{{Step 3: Expected supremum of $p$th-moment of K-Scheme}} For any $0\leq t\leq T$ we take the supremum on both sides of inequality (\ref{os3fs4}) and then take the expectation to obtain \[ \EE\left[ \sup_{0\leq s\leq t}\|\hX^K_s\|^p\right] \ \leq \ 7^{p/2 - 1}\left( I_1 + I_2 + I_3 + I_4 + I_5 \right), \] where \begin{eqnarray*} I_1 &=& \|X_0\|^p + \EE\left[ \left(2 \alpha \int_0^t\|\bX_s^K\|^2\,\D s\right)^{p/2} \right] + (2\, \beta\, t)^{p/2}, \\[0.05in] I_2 &=& \EE\left[ \sup_{0\leq s\leq \td}\left| \,2\! \int_0^s \langle \phi(\bX_u^K), g(\bX_u^K)\,\D W_u \rangle \right|^{p/2} \right], \\[0.05in] I_3 &=& \EE\left[ \left(\sum_{k=0}^{n_t-1}\|g(\bX_{t_k}^K) \, \DW_k\|^2\right)^{p/2}\right], \\[0.05in] I_4 &=& \EE\left[ \sup_{0\leq s\leq t} \left|2 \langle \bX_s^K \!+\! f(\bX_s^K)\, (s\!-\!\sd), g(\bX_s^K) (W_s\!-\!W_\sd) \rangle \right|^{p/2}\right], \\[0.05in] I_5 &=& \EE\left[\sup_{0\leq s\leq t} \|g(\bX_s^K) \, (W_s\!-\!W_\sd)\|^p \right]. \end{eqnarray*} We now consider $I_1, I_2, I_3, I_4, I_5$ in turn. Using Jensen's inequality, we obtain \[ I_1 \leq \|X_0\|^p + (2 \alpha)^{p/2} T^{p/2-1}\!\! 
\int_0^t\EE\left[ \sup_{0\leq u \leq s} \|\hX_u^K\|^p\right]\,\D s + (2\, \beta\, T)^{p/2}. \] For $I_2$, we begin by noting that due to condition (\ref{eq:timestep}), for $u\!<\!\td$ we have \begin{eqnarray*} \|\phi(\bX_u^K)\|^2 &=& \|\bX_u^K\|^2 + 2\, h(\bX_u^K)\, \left( \langle \bX_u^K, f(\bX_u^K)\rangle + \halfs\, h(\bX_u^K) \|f(\bX_u^K)\|^2 \rule{0in}{0.16in} \right) \\ &\leq& \|\bX_u^K\|^2+ 2 \, h(\bX_u^K) \, (\alpha\|\bX_u^K\|^2 +\beta) \\ &\leq& (1+2\,\alpha\, T) \|\bX_u^K\|^2 + 2\, \beta\, T, \end{eqnarray*} and hence by Jensen's inequality \[ \|\phi(\bX_u^K)\|^{p/2} \leq 2^{p/4-1} \left(\rule{0in}{0.16in} (1+2\,\alpha\, T)^{p/4} \|\bX_u^K\|^{p/2} + (2\,\beta\, T)^{p/4} \right). \] In addition, the linear growth condition (\ref{eq:g_growth}) gives \[ \|g(\bX_u^K)\|^{p/2} \leq 2^{p/4-1} \left( \alpha^{p/4} \|\bX_u^K\|^{p/2} + \beta^{p/4} \right), \] and combining the last two equations, there exists a constant $c_{p,T}$ depending on $p$ and $T$, in addition to $\alpha, \beta$, such that \[ \|\phi(\bX_u^K)^T g(\bX_u^K) \|^{p/2} \,\leq\, c_{p,T} \left( \|\bX_u^K\|^p + 1\right). \] Then, by the Burkholder-Davis-Gundy inequality, there is a constant $C_p$ such that \begin{eqnarray*} I_2 &\leq & C_p\, 2^{p/2}\, \EE\left[ \left(\int_0^t \| \phi(\bX_u^K)^T g(\bX_u^K) \|^2 \, \D u \right)^{p/4} \right] \\ &\leq& C_p\, 2^{p/2} \, T^{p/4-1} \, \EE\left[ \int_0^t \| \phi(\bX_u^K)^T g(\bX_u^K) \|^{p/2} \, \D u \right] \\ &\leq& c_{p,T}\, C_p\, 2^{p/2} \, T^{p/4-1} \, \left( \int_0^t \EE\left[ \sup_{0\leq u\leq s}\|\hX_u^K\|^p\right] \D s \ +\ T \right). \label{os3fn8} \end{eqnarray*} For $I_3$, we start by observing that by standard results there exists a constant $c_p$ which depends solely on $p$ such that for any $t_k\!\leq\!s<t_{k+1}$, \begin{equation} \EE[ \sup_{t_k\leq u \leq s} \|\, W_u\!-\!W_{t_k} \,\|^p\ |\ \mathcal{F}_{t_k}] \ =\ c_p\, (s\!-\!\sd)^{p/2}.
\label{eq:DW} \end{equation} One variant of Jensen's inequality, when $h_k, u_k$ are both positive and $p\!\geq\! 1$, is \[ \left( \sum_k h_k\, u_k \right)^p \leq \left( \sum_k h_k \right)^{p-1} \sum_k h_k u^p_k. \] Using this, and (\ref{eq:DW}) with $s\equiv t_{k+1}$ so that $s-\sd = h_k$, \begin{eqnarray*} I_3 &\leq& T^{p/2-1}\, \EE\left[\ \sum_{k=0}^{n_t-1} h_k \, \|g(\bX_{t_k}^K)\|^p\ \frac{\|\DW_k\|^p}{h_k^{p/2}} \right] \\ &\leq & T^{p/2-1}\, c_p\ \EE\left[\ \int_0^t \|g(\bX_s^K)\|^p\, \D s \right]. \end{eqnarray*} Using condition (\ref{eq:g_growth}), and Jensen's inequality, we then obtain \[ I_3 \leq (2\,T)^{p/2-1}\, c_p \left( \alpha^{p/2} \int_0^t \EE\left[ \sup_{0\leq u \leq s} \|\hX_u^K\|^p\right]\D s \ +\ \beta^{p/2}\, T\right). \] For $I_4$, using (\ref{eq:partial}) and following the same argument as for $I_2$, there exists a constant $c_{p,T}$ depending on both $p$ and $T$ such that \[ \|\bX_s^K\!+\! f(\bX_s^K) (s\!-\!\sd)\|^{p/2} \| g(\bX_s^K)\|^{p/2} \leq c_{p,T}\, \left( \|\bX_s^K\|^p + 1\right). \] Therefore, again using (\ref{eq:DW}), \begin{eqnarray*} I_4 &\leq& 2^{p/2}\, \EE\left[ \sup_{0\leq s\leq t} \left| \langle \bX_s^K\!+\! f(\bX_s^K) (s\!-\!\sd), g(\bX_s^K)\, (W_s\!-\!W_\sd) \rangle \right|^{p/2}\right] \\ &\leq& c_{p,T} \, 2^{p/2}\, \EE\left[ \sum_{k=0}^{n_t-1} \left(\|\bX_{t_k}^K\|^p \!+\! 1\right) \sup_{t_k\leq s< t_{k+1}} \| (W_s\!-\!W_\sd)\|^{p/2} \right. \\&& \hspace{1.0in} \left. +\, \left(\|\bX_t^K\|^p \!+\! 1\right) \sup_{\td\leq s\leq t} \| (W_s\!-\!W_\sd)\|^{p/2}\right] \\ &\leq& c_{p/2} \, c_{p,T}\, 2^{p/2}\, T^{p/4-1}\ \EE\left[ \sum_{k=0}^{n_t-1} \!\! \left(\|\bX_{t_k}^K\|^p \!+\! 1\right)\! h_k + \left(\|\bX_t^K\|^p \!+\! 1\right)\!(t\!-\!\td) \right] \\ &\leq& c_{p/2} \, c_{p,T} \, 2^{p/2}\, T^{p/4-1} \left( \int_0^t \EE\left[ \sup_{0\leq u \leq s} \|\hX_u^K\|^p\right] \D s \ +\ T \right). 
\end{eqnarray*} Similarly, using the same definition for $c_p$, we have \begin{eqnarray*} I_5 &\leq& c_p \, (2\,T)^{p/2-1} \left( \alpha^{p/2} \int_0^t \EE\left[ \sup_{0\leq u \leq s} \|\hX_u^K\|^p \right] \D s \ +\ \beta^{p/2}\, T \right). \end{eqnarray*} Collecting together the bounds for $I_1, I_2, I_3, I_4, I_5$, we conclude that there exist constants $C^1_{p,T}$ and $C^2_{p,T}$ such that \[ \EE\left[ \sup_{0\leq s\leq t}\|\hX^K_{s}\|^p\right] \leq C_{p,T}^1+C_{p,T}^2 \int_0^t\EE\left[ \sup_{0\leq u\leq s}\|\hX_u^K\|^p\right] \D s, \] and Gr\"{o}nwall's inequality gives the result \[ \EE\left[ \sup_{0\leq t\leq T}\|\hX^K_t\|^p\right] \,\leq\, C_{p,T}^{1} \, \exp(C_{p,T}^{2}\, T) \,\triangleq\, C_{p,T} \,<\, \infty. \] \noindent \textbf{{Step 4: Expected supremum of $p$th-moment of $\hX_t$}} For any $\omega\!\in\!\Omega$, $\hX_t\!=\!\hX_t^K$ for all $0\!\leq\!t\!\leq\!T$ if, and only if, $\sup_{0\leq t\leq T} \|\hX_t\| \!\leq\! K$. Therefore, by the Markov inequality, \[ \PP(\sup_{0\leq t\leq T} \|\hX_t\| < K) \, =\, \PP(\sup_{0\leq t\leq T} \|\hX_t^K\| < K) \,\geq\, 1 \!-\! \EE[\sup_{0\leq t\leq T} \|\hX_t^K\|^4] / K^4 \,\rightarrow\, 1 \] as $K\rightarrow\infty$. Hence, almost surely, $\displaystyle \sup_{0\leq t\leq T} \|\hX_t\| < \infty$ and $T$ is attainable. Also, \[ \lim_{K\rightarrow\infty} \sup_{0\leq t\leq T}\|\hX^K_t(\omega)\| = \sup_{0\leq t\leq T}\|\hX_t(\omega)\| \] and for $0\!<\! K_1 \!\leq\! K_2$, \[ \sup_{0\leq t\leq T}\|\hX^{K_1}_t(\omega)\| \ \leq\ \sup_{0\leq t\leq T}\|\hX^{K_2}_t(\omega)\| \ \leq\ \sup_{0\leq t\leq T}\|\hX_t(\omega)\|. \] Therefore, by the Monotone Convergence Theorem, \[ \EE\left[ \sup_{0\leq t\leq T}\|\hX_t\|^p\right] \ =\ \lim_{K\rightarrow\infty} \EE\left[ \sup_{0\leq t\leq T}\|\hX^K_t\|^p\right] \ \leq\ C_{p,T}. 
\] \end{proof} \subsection{Theorem \ref{thm:convergence_order}} \begin{proof} The approach is to bound the approximation error $e_t\triangleq \hX_t - X_t$ by terms which depend on either $\hX_s\!-\!\bX_s$ or $e_s$, and then use local analysis within each timestep to bound the former, and Gr{\"o}nwall's inequality to handle the latter. The proof is again given for $p\!\geq\! 4$; the result for $0\!\leq\! p \!<\! 4$ follows from H{\"o}lder's inequality. We start by combining the original SDE with (\ref{eq:approx_SDE}) to obtain \[ \D e_t \ =\ \left( f(\bX_t)\!-\!f(X_t)\right) \D t + \left( g(\bX_t)\!-\!g(X_t)\right) \D W_t, \] and then by It{\^o}'s formula, together with $e_0\!=\!0$, we get \begin{eqnarray*} \|e_t\|^2 &\leq& 2 \int_0^t \langle e_s, f(\hX_s)\!-\!f(X_s)\rangle \,\D s - 2 \int_0^t\langle e_s, f(\hX_s)\!-\!f(\bX_s)\rangle \,\D s\\ &&+\, \int_0^t \|g(\bX_s)\!-\!g(X_s)\|^2 \,\D s + 2 \int_0^t \langle e_s, (g(\bX_s)\!-\!g(X_s))\,\D W_s \rangle. \end{eqnarray*} Using the conditions in Assumption \ref{assp:Lipschitz}, (\ref{eq:onesided_Lipschitz}) implies that \[ \langle e_s, f(\hX_s)\!-\!f(X_s)\rangle \leq \halfs \alpha\, \|e_s\|^2, \] (\ref{eq:local_Lipschitz}) implies that \begin{eqnarray*} \left| \langle e_s, f(\hX_s)\!-\!f(\bX_s) \rangle \right| &\leq& \|e_s\| \, L(\hX_s,\bX_s) \, \| \hX_s\!-\!\bX_s \| \\&\leq& \halfs \|e_s\|^2 + \halfs L(\hX_s,\bX_s)^2 \| \hX_s\!-\!\bX_s \|^2 \end{eqnarray*} where $L(x,y)\triangleq \gamma(\|x\|^q + \|y\|^q) + \mu$, and (\ref{eq:g_Lipschitz}) gives \[ \|g(\bX_s)\!-\!g(X_s)\|^2 \ \leq\ \halfs\, \alpha \, \|\bX_s\!-\!X_s\|^2 \ \leq\ \alpha\, \|e_s\|^2 + \alpha \, \|\hX_s\!-\!\bX_s\|^2. \] Hence, \begin{eqnarray*} \|e_t\|^2 &\leq& (2\alpha\!+\!1) \int_0^t \| e_s \|^2 \,\D s + \int_0^t \left( L(\hX_s,\bX_s)^2 \!+\! \alpha \right)\|\hX_s\!-\!\bX_s\|^2\, \D s \\&& +\, 2 \int_0^t \langle e_s, (g(\bX_s)\!-\!g(X_s))\,\D W_s \rangle.
\end{eqnarray*} Then, by Jensen's inequality, we obtain \begin{eqnarray*} \|e_t\|^p &\leq& (3T)^{p/2-1} (2\alpha\!+\!1)^{p/2} \int_0^t \| e_s \|^p \,\D s \\ &+& (3T)^{p/2-1} \int_0^t \left( L(\hX_s,\bX_s)^2 \!+\! \alpha \right)^{p/2}\|\hX_s\!-\!\bX_s\|^p\, \D s \\ &+& 3^{p/2-1} 2^{p/2} \left| \int_0^t \langle e_s, (g(\bX_s)\!-\!g(X_s))\,\D W_s \rangle \right|^{p/2}. \end{eqnarray*} Taking the supremum of each side, and then the expectation, yields \begin{eqnarray*} \EE\left[\sup_{0\leq s \leq t} \|e_s\|^p \right] &\leq& (3T)^{p/2-1} (2\alpha\!+\!1)^{p/2} \int_0^t \EE\left[\sup_{0\leq u\leq s} \| e_u \|^p \right] \,\D s \\ &+& (3T)^{p/2-1} \!\! \int_0^t \! \EE\!\left[ \left(\! L(\hX_s,\bX_s)^2 \!+\! \alpha \right)^{p/2} \! \|\hX_s\!-\!\bX_s\|^p\right] \D s \\ &+& 3^{p/2-1} 2^{p/2} \EE\left[ \sup_{0\leq s \leq t} \left| \int_0^s \langle e_u, (g(\bX_u)\!-\!g(X_u))\,\D W_u \rangle \right|^{p/2} \right]. \end{eqnarray*} By the H{\"o}lder inequality, \begin{eqnarray*} \lefteqn{\hspace*{-0.5in} \EE\left[ \left( L(\hX_s,\bX_s)^2 \!+\! \alpha \right)^{p/2} \|\hX_s\!-\!\bX_s\|^p\right] } \\ &\leq& \left( \EE\left[ \left( L(\hX_s,\bX_s)^2 \!+\! \alpha \right)^p\right] \EE\left[ \|\hX_s\!-\!\bX_s\|^{2p}\right]\right)^{1/2}, \end{eqnarray*} and $\EE\left[ \left( L(\hX_s,\bX_s)^2 \!+\! \alpha \right)^p \right]$ is uniformly bounded on $[0,T]$ due to the stability property in Theorem \ref{thm:stability}. In addition, by the Burkholder-Davis-Gundy inequality (which supplies the constant $C_p$ depending only on $p$), followed by Jensen's inequality and the Lipschitz condition for $g$, we obtain \begin{eqnarray*} \lefteqn{\hspace*{-0.2in} \EE\left[ \sup_{0\leq s \leq t} \left| \int_0^s \langle e_u, (g(\bX_u)\!-\!g(X_u))\,\D W_u \rangle \right|^{p/2} \right] } \\ &\leq& C_p\ \EE\left[ \left( \int_0^t \|e_s\|^2 \|g(\bX_s)\!-\!g(X_s)\|^2 \, \D s \right)^{p/4}\right]
\\ &\leq& C_p \ T^{p/4-1}\, (\halfs \alpha)^{p/4} \EE\left[ \int_0^t \|e_s\|^{p/2} \|\bX_s\!-\!X_s\|^{p/2} \, \D s \right] \\ &\leq& C_p \ T^{p/4-1}\, (\halfs \alpha)^{p/4} \EE\left[ \int_0^t \halfs \, \|e_s\|^p + \halfs \, \|\bX_s\!-\!X_s\|^p \, \D s \right] \\ &\leq& C_p \ T^{p/4-1}\, (\halfs \alpha)^{p/4} \EE\left[ \int_0^t (\halfs \!+\!2^{p-2})\|e_s\|^p + 2^{p-2} \|\hX_s\!-\!\bX_s\|^p \, \D s \right]. \end{eqnarray*} Hence, using $\EE[ \|\hX_s\!-\!\bX_s\|^p] \leq (\EE[ \|\hX_s\!-\!\bX_s\|^{2p}])^{1/2}$, there are constants $C^1_{p,T}, C^2_{p,T}$ such that \begin{equation} \EE\left[\sup_{0\leq s \leq t} \|e_s\|^p \right] \leq C^1_{p,T}\! \int_0^t \!\EE\!\left[\sup_{0\leq u \leq s} \|e_u\|^p \right] \D s \, +\, C^2_{p,T}\! \int_0^t \!\left( \EE\!\left[ \|\hX_s\!-\!\bX_s\|^{2p} \right] \right)^{\!1/2} \!\!\D s. \label{eq:halfway} \end{equation} For any $s\!\in\![0,T]$, $\hX_s\!-\!\bX_s = f(\hX_{\sd}) (s\!-\!\sd) + g(\hX_{\sd}) (W_s \!-\! W_\sd)$, and hence, by a combination of Jensen and H{\"o}lder inequalities, we get \begin{eqnarray*} \EE\left[ \|\hX_s\!-\!\bX_s\|^{2p}\right] &\leq& 2^{2p-1} \left( \EE\left[ \|f(\hX_{\sd})\|^{4p} \right] \EE \left[ (s\!-\!\sd)^{4p} \right]\right)^{1/2} \\ &+& 2^{2p-1} \left( \EE\left[ \|g(\hX_{\sd})\|^{4p} \right] \EE\left[ \|W_s \!-\! W_\sd\|^{4p} \right] \right)^{1/2}. \end{eqnarray*} $\EE[ \|f(\hX_{\sd})\|^{4p}]$ and $\EE[ \|g(\hX_{\sd})\|^{4p}]$ are both uniformly bounded on $[0,T]$ due to stability and the polynomial bounds on the growth of $f$ and $g$. Furthermore, we have $\EE[ (s\!-\!\sd)^{4p}] \leq (\delta T)^{4p} \leq \delta^{2p} T^{4p}$, and by standard results there is a constant $c_p$ such that $\EE[ \|W_s \!-\! W_\sd\|^{4p}] = \EE[\ \EE[\|W_s \!-\! W_\sd\|^{4p}\, |\, {\cal F}_\sd] \ ] \leq c_p (\delta T)^{2p}$. 
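The ``standard results'' quoted for the Brownian-increment moments are the Gaussian even-moment formulas: for scalar Brownian motion, $\EE[(W_s\!-\!W_\sd)^{2m}] = (2m\!-\!1)!!\,(s\!-\!\sd)^m$. A minimal Python sketch of how the constant $c_p$ and the $(\delta T)^{2p}$ bound arise in the scalar case (the values of $\delta$, $T$, $p$ are illustrative, and the multidimensional constant differs by dimension-dependent factors):

```python
from math import prod

def gaussian_even_moment(m: int) -> int:
    """E[Z^(2m)] for Z ~ N(0, 1): the double factorial (2m - 1)!!."""
    return prod(range(1, 2 * m, 2))

# For scalar Brownian motion, E[(W_s - W_sd)^(2m)] = (2m-1)!! (s - sd)^m,
# so with s - sd <= delta * T the 4p-th increment moment is bounded by
# c_p (delta T)^(2p), where c_p = (4p - 1)!!  (take 2m = 4p).
delta, T, p = 0.01, 1.0, 4           # illustrative values only
c_p = gaussian_even_moment(2 * p)    # (4p - 1)!!
bound = c_p * (delta * T) ** (2 * p)
print(c_p, bound)
```

The point of the sketch is only that $c_p$ grows with $p$ but is independent of the timestep, so the bound is $O(\delta^{2p})$ in the timestep parameter.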
Hence, there exists a constant $C^3_{p,T}\!>\!0$ such that $\EE[\, \|\hX_s\!-\!\bX_s\|^{2p}] \leq C^3_{p,T}\, \delta^p$, and therefore equation (\ref{eq:halfway}) gives us \[ \EE\left[\sup_{0\leq s \leq t} \|e_s\|^p \right] \leq C^1_{p,T}\! \int_0^t \!\EE\!\left[\sup_{0\leq u \leq s} \|e_u\|^p \right] \D s \, +\, C^2_{p,T} \sqrt{C^3_{p,T}}\ T \, \delta^{p/2}, \] and Gr\"{o}nwall's inequality then provides the final result. \end{proof} \subsection{Theorem \ref{thm:convergence_order2}} \begin{proof} The error $e_t\triangleq \hX_t - X_t$ satisfies the SDE $\D e_t = \left( f(\bX_t)\!-\!f(X_t) \right)\,\D t$ and hence \begin{eqnarray*} \|e_t\|^2 &=& 2 \int_0^t\langle e_s, f(\hX_s)\!-\!f(X_s)\rangle \,\D s \ - 2 \int_0^t\langle e_s, f(\hX_s)\!-\!f(\bX_s) \rangle \,\D s \nonumber \\&\leq & \alpha \int_0^t\|e_s\|^2\,\D s - 2 \int_0^t \langle e_s, f(\hX_s)\!-\!f(\bX_s)\rangle \,\D s, \end{eqnarray*} due to the one-sided Lipschitz condition (\ref{eq:onesided_Lipschitz}), so therefore \begin{eqnarray} \EE\left[\sup_{0\leq s\leq t}\|e_s\|^p \right] &\leq&\ \alpha^{p/2} (2T)^{p/2-1}\int_0^t\EE\left[\sup_{0\leq u\leq s}\|e_u\|^p \right]\,\D s \nonumber \\ \label{eq:thm4} && +\ 2^{p-1}\, \EE\left[\sup_{0\leq s\leq t}\left| \int_0^s \langle e_u, f(\hX_u)\!-\!f(\bX_u) \rangle \,\D u\, \right|^{p/2}\right]. \end{eqnarray} Within a single timestep, $\hX_s\!-\!\bX_s = f(\bX_s)(s\!-\!\sd) + (W_s\!-\!W_\sd)$, and therefore Lemma \ref{lemma:useful} gives \begin{eqnarray*} \langle e_s, f(\hX_s)\!-\!f(\bX_s) \rangle &=& \langle e_s, \nabla f(\bX_s) (\hX_s\!-\!\bX_s) \rangle\ +\ R_s \\[0.1in] &=& \langle e_s, (s\!-\!\sd) \nabla f(\bX_s) f(\bX_s) \rangle\ +\ R_s \\ && +\ \langle (e_s\!-\!e_\sd), \nabla f(\bX_s) (W_s\!-\!W_\sd) \rangle \\ && +\ \langle e_\sd, \nabla f(\bX_s) (W_s\!-\!W_\sd) \rangle, \end{eqnarray*} where $|R_s|\leq \left( \gamma\, (\|\hX_s\|^q \!+\! \|\bX_s\|^q) + \mu\right)\, \|e_s\| \, \|\hX_s\!-\!\bX_s \|^2$. 
Hence, \[ \EE\left[\sup_{0\leq s\leq t}\left| \int_0^s \langle e_u, f(\hX_u)\!-\!f(\bX_u) \rangle \,\D u \, \right|^{p/2}\right] \leq 4^{p/2-1} (I_1 + I_2 + I_3 + I_4), \] where \begin{eqnarray*} I_1 & = & \EE\left[\sup_{0\leq s\leq t}\left|\int_0^s \langle e_u, (u\!-\!\ud) \nabla f(\bX_u) f(\bX_u) \rangle \, \D u \,\right|^{p/2}\right], \\ I_2 & = & \EE\left[\sup_{0\leq s\leq t}\left|\int_0^s R_u \ \D u \,\right|^{p/2}\right], \\ I_3 & = & \EE\left[\sup_{0\leq s\leq t}\left|\int_0^s \langle (e_u\!-\!e_\ud), \nabla f(\bX_u) (W_u\!-\!W_\ud) \rangle \, \D u \,\right|^{p/2}\right], \\ I_4 & = & \EE\left[\sup_{0\leq s\leq t}\left|\int_0^s \langle e_\ud, \nabla f(\bX_u) (W_u\!-\!W_\ud) \rangle \, \D u \,\right|^{p/2}\right]. \end{eqnarray*} We now bound $I_1, I_2, I_3, I_4$ in turn. Noting that $s\!-\!\sd\leq \delta\, T$, \begin{eqnarray*} I_1 &\leq& T^{p/2-1}\int_0^t \EE\left[ \|e_s\|^{p/2} (\delta T)^{p/2} \| f(\bX_s)\|^{p/2} \|\nabla f(\bX_s)\|^{p/2}\right] \ \D s \\ &\leq& \halfs\, T^{p/2-1} \int_0^t \EE\left[\sup_{0\leq u \leq s} \|e_u\|^p \right]\ \D s \\ && +\ \halfs\, T^{p/2-1} (\delta T)^p \int_0^t \EE\left[ \| f(\bX_s)\|^p \|\nabla f(\bX_s)\|^p\right] \ \D s. \end{eqnarray*} The last integral is finite because of stability and the polynomial bounds on the growth of both $f$ and $\nabla f$, and hence there is a constant $C^1_{p,T}$ such that \[ I_1 \leq \halfs\, T^{p/2-1} \int_0^t \EE\left[\sup_{0\leq u \leq s} \|e_u\|^p \right]\ \D s + C^1_{p,T}\, \delta^p. \] Similarly, using the H{\"o}lder inequality, \begin{eqnarray*} I_2 &\!\leq\!& T^{p/2-1}\int_0^t \EE\left[ \|e_s\|^{p/2}\, \left( \gamma\, (\|\hX_s\|^q \!+\! \|\bX_s\|^q) + \mu\right)^{p/2}\, \|\hX_s\!-\!\bX_s \|^p\, \right] \D s \\&\!\leq\!& \halfs \, T^{p/2-1} \int_0^t \EE\left[\sup_{0\leq u \leq s} \|e_u\|^p \right]\ \D s \\&&\!\!\!\!\!\! + \ \halfs \, T^{p/2-1} \int_0^t \left( \EE\left[ \left( \gamma\, (\|\hX_s\|^q \!+\! 
\|\bX_s\|^q) + \mu\right)^{2p}\right] \, \EE\left[ \|\hX_s\!-\!\bX_s \|^{4p}\right]\right)^{1/2}\! \D s, \end{eqnarray*} and hence, using stability and bounds on $\EE\left[ \|\hX_s\!-\!\bX_s \|^{4p} \right]$ from the proof of Theorem \ref{thm:convergence_order}, there is a constant $C^2_{p,T}$ such that \[ I_2 \leq \halfs\, T^{p/2-1} \int_0^t \EE\left[\sup_{0\leq u \leq s} \|e_u\|^p \right]\ \D s + C^2_{p,T}\, \delta^p. \] For the next term, $I_3$, we start by bounding $\|e_s\!-\!e_\sd\|$. Since \[ e_s\!-\!e_\sd = \int_\sd^s \left( f(\bX_u) - f(X_u)\right) \, \D u, \] by Jensen's inequality and Assumption \ref{assp:enhanced_Lipschitz} it follows that \begin{eqnarray*} \| e_s\!-\!e_\sd \|^p &\leq& (\delta \, T)^{p-1} \int_\sd^s \| f(\bX_u) \!-\! f(X_u) \|^p \, \D u \\ &\leq& (2\,\delta \, T)^{p-1} \int_\sd^s L^p(\bX_u,X_u) \left( \|e_u\|^p + \| \hX_u\!-\!\bX_u\|^p \right) \, \D u, \end{eqnarray*} where $L(\bX_u,X_u)\equiv \gamma(\|\bX_u\|^k\!+\!\|X_u\|^k) + \mu$. We again have an $O(\delta^{p/2})$ bound for $\EE[ \|\hX_s\!-\!\bX_s\|^p]$, while Theorem \ref{thm:convergence_order} proves that there is a constant $c_{p,T}$ such that \[ \EE[ \| e_s \|^p] \leq c_{p,T} \, \delta^{p/2}. \] Combining these, and using the H{\"o}lder inequality and the finite bound for $\EE[L^p(\bX_u,X_u)]$ for all $p\!\geq\! 2$, due to the usual stability results, we find that there is a different constant $c_{p,T}$ such that \[ \EE[ \| e_s\!-\!e_\sd \|^p ] \leq c_{p,T} \, \delta^{3p/2}. \] Now, \begin{eqnarray*} I_3 \leq T^{p/2-1} \int_0^t \EE\left[ \| e_s\!-\!e_\sd\|^{p/2}\|\nabla f(\bX_s)\|^{p/2} \|W_s\!-\!W_\sd\|^{p/2} \right] \D s, \end{eqnarray*} so using the H{\"o}lder inequality and the usual stability bounds, we conclude that there is a constant $C^3_{p,T}$ such that \[ I_3 \leq C^3_{p,T}\, \delta^p. \] Lastly, we consider $I_4$. 
For the timestep $[t_n,t_{n+1}]$, we have \[ \D \left( (t\!-\!t_{n+1}) (W_t\!-\!W_{t_n})\right) = (W_t\!-\!W_{t_n}) \, \D t + (t\!-\!t_{n+1})\, \D W_t \] and therefore, integrating by parts within each timestep, \begin{eqnarray*} \lefteqn{ \hspace{-0.25in} \int_0^s \langle e_\ud, \nabla f(\bX_u) (W_u\!-\!W_\ud) \rangle \, \D u } \\ & = & \int_0^s (\uo \!-\!u) \, \langle e_\ud, \nabla f(\bX_u)\, \D W_u \rangle \ -\ (\so \!-\!s) \langle e_\sd, \nabla f(\bX_s) (W_s\!-\!W_\sd) \rangle \end{eqnarray*} where $\uo = \min\{t_n: t_n\!>\!u\} = t_{n_u+1}$. Hence, $ I_4 \leq 2^{p/2 - 1} (I_{41}+I_{42}) $ where \begin{eqnarray*} I_{41} &=& \EE\left[\sup_{0\leq s\leq t} \left| \int_0^s (\uo \!-\!u)\, \langle e_\ud, \nabla f(\bX_u)\, \D W_u \rangle \right|^{p/2} \right], \\ I_{42} &=& \EE\left[\sup_{0\leq s\leq t} \left| (\so \!-\!s)\, \langle e_\sd, \nabla f(\bX_s) (W_s\!-\!W_\sd) \rangle \right|^{p/2} \right]. \end{eqnarray*} By the Burkholder-Davis-Gundy inequality, \begin{eqnarray*} I_{41} & \leq & C_p\, \EE\left[ \left( \int_0^t (\so \!-\!s)^2 \|e_\sd\|^2 \|\nabla f(\bX_s)\|^2\, \D s \right)^{p/4} \right] \\ & \leq & C_p\, T^{3p/4-1}\, \EE\left[ \int_0^t \|e_\sd\|^{p/2}\, \delta^{p/2}\, \|\nabla f(\bX_s)\|^{p/2}\, \D s \right] \\ & \leq & \fracs{1}{2} C_p\, T^{3p/4-1}\, \EE\left[ \int_0^t \left( \sup_{0\leq u\leq s}\|e_u\|^p + \delta^p\, \|\nabla f(\bX_s)\|^p \right) \D s \right] \end{eqnarray*} with $\EE[\|\nabla f(\bX_s)\|^p]$ uniformly bounded on $[0,T]$ so that there is a constant $C^{41}_{p,T}$ such that \[ I_{41} \leq \fracs{1}{2}\, C_p\, T^{3p/4-1} \int_0^t \EE\left[ \sup_{0\leq u\leq s}\|e_u\|^p \right] \D s + C^{41}_{p,T}\, \delta^p. \] Turning to $I_{42}$, Young's inequality and H{\"o}lder's inequality give \[ I_{42} \leq \frac{1}{2\xi} \EE\left[ \sup_{0\leq s\leq t} \|e_s\|^p \right] + \frac{\xi}{2} (2\delta T)^p\! 
\left( \EE\!\left[ \sup_{0\leq s\leq t} \|\nabla f\|^{2p} \right] \EE\!\left[ \sup_{0\leq s\leq t} \|W_s\|^{2p} \right] \right)^{1/2} \] for any $\xi\!>\!0$, and hence there is a constant $C^{42}_{p,T}$ such that \[ I_{42} \leq \frac{1}{2\xi}\, \EE\left[ \sup_{0\leq s\leq t} \|e_s\|^p \right] + \xi\, C^{42}_{p,T}\, \delta^p. \] Returning to (\ref{eq:thm4}), and inserting the bounds for $I_1$, $I_2$, $I_3$, $I_4$, $I_{41}$, and $I_{42}$, with $\xi = 2^{5p/2-4}$, gives \[ \EE\!\left[\sup_{0\leq s\leq t}\|e_s\|^p \right] \leq \fracs{1}{2} \EE\!\left[\sup_{0\leq s\leq t}\|e_s\|^p \right] + C^5_{p,T}\! \int_0^t \EE\!\left[\sup_{0\leq u\leq s}\|e_u\|^p \right] \D s + C^6_{p,T} \, \delta^p, \] for certain constants $C^5_{p,T},C^6_{p,T}$. Rearranging and using Gr\"{o}nwall's inequality we obtain the final conclusion that there exists a constant $C_{p,T}$ such that \[ \EE\left[\sup_{0\leq t\leq T}\|e_t\|^p \right] \leq C_{p,T}\, \delta^p.
\] \end{proof} \section{Conclusions and future work} The central conclusion from this paper is that by using an adaptive timestep it is possible to make the Euler-Maruyama approximation stable for SDEs with a globally Lipschitz volatility and a drift which is not globally Lipschitz but is locally Lipschitz and satisfies a one-sided linear growth condition. If the drift also satisfies a one-sided Lipschitz condition then the order of strong convergence is $\halfs$, when looking at the accuracy versus the expected cost of each path. For the important class of Langevin equations with unit volatility, the order of strong convergence is 1. The numerical experiments suggest that in some applications the new method may not be significantly better than the tamed Euler-Maruyama method proposed and analysed by Hutzenthaler, Jentzen \& Kloeden \cite{hjk12}, but in others it is shown to be superior. One direction for extension of the theory is to SDEs with a volatility which is not globally Lipschitz, but instead satisfies the Khasminskii-type condition used by Mao \& Szpruch \cite{mao15,ms13}. Another is to extend the analysis to Milstein approximations, which are particularly important when the SDE is scalar or satisfies the commutativity condition which means that the Milstein approximation does not require the simulation of L{\'e}vy areas. Another possibility is to use a Lyapunov function $V(x)$ in place of $\|x\|^2$ in the stability analysis; this might enable one to prove stability and convergence for a larger set of SDEs. For SDEs such as the stochastic van der Pol oscillator and the stochastic Lorenz equation, if we could prove exponential integrability using the approach of Hutzenthaler, Jentzen \& Wang \cite{hjw16} then it may be possible to prove the order of strong convergence using a local one-sided Lipschitz condition. A future paper will address a different challenge, extending the analysis to ergodic SDEs over an infinite time interval. 
As well as proving a slightly different stability result with a bound which is uniform in time, the convergence analysis will show that under certain conditions the error bound is also uniformly bounded in time. This is in contrast to the analysis in this paper in which the bound increases exponentially with time. \end{document}
\begin{document} \preprint{APS/123-QED} \title{Quantum error correction with metastable states of trapped ions using erasure conversion} \author{Mingyu Kang} \email{[email protected]} \affiliation{Duke Quantum Center, Duke University, Durham, NC 27701, USA} \affiliation{Department of Physics, Duke University, Durham, NC 27708, USA} \author{Wesley C. Campbell} \affiliation{Department of Physics and Astronomy, University of California, Los Angeles, CA 90095, USA} \affiliation{Challenge Institute for Quantum Computation, University of California, Los Angeles, CA 90095, USA} \affiliation{Center for Quantum Science and Engineering, University of California, Los Angeles, CA 90095, USA} \author{Kenneth R. Brown} \email{[email protected]} \affiliation{Duke Quantum Center, Duke University, Durham, NC 27701, USA} \affiliation{Department of Physics, Duke University, Durham, NC 27708, USA} \affiliation{Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, USA} \affiliation{Department of Chemistry, Duke University, Durham, NC 27708, USA} \date{\today} \begin{abstract} Erasures, or errors with known locations, are a more favorable type of error for quantum error-correcting codes than Pauli errors. Converting physical noise into erasures can significantly improve the performance of quantum error correction. Here we apply the idea of performing erasure conversion by encoding qubits into metastable atomic states, proposed by Wu, Kolkowitz, Puri, and Thompson [Nat. Comm. \textbf{13}, 4657 (2022)], to trapped ions. We suggest an erasure-conversion scheme for metastable trapped-ion qubits and develop a detailed model of various types of errors. We then compare the logical performance of ground and metastable qubits on the surface code under various physical constraints, and conclude that metastable qubits may outperform ground qubits when the achievable laser power is higher for metastable qubits. 
\end{abstract} \maketitle \section{Introduction} The implementation of quantum error correction (QEC) is a necessary path toward a scalable and fault-tolerant quantum computer, as quantum states are often inherently fragile and physical operations on quantum states have limited fidelities. QEC protects quantum information from errors by encoding a logical qubit into entangled states of multiple physical qubits~\cite{Knill97}. There have been exciting efforts to manipulate and exploit the \textit{type} of physical error such that the performance of QEC is improved. One example is the engineering of qubits and operations that have a strong bias between $X$ and $Z$ Pauli noise~\cite{Mirrahimi14, Ofek16, Puri20, Cong22} and the design of QEC codes that benefit from such bias by achieving higher thresholds~\cite{Aliferis08, Li19, Guillaud19, Huang20compass, Bonilla21, Darmawan21, Dua22, Xu23}. Another example is converting physical noise into \textit{erasures}, i.e., errors with known locations~\cite{Grassl97, Bennett97, Lu08}. It is clear that erasures are easier to correct than Pauli errors for QEC codes, as a code of distance $d$ is guaranteed to correct up to $d-1$ erasures but only up to $\lfloor (d-1)/2 \rfloor$ Pauli errors. Erasure conversion is performed by cleverly choosing the physical states encoded as qubits, such that physical noise causes leakage outside the qubit subspace. Crucially, such leakage should be detectable using additional physical operations~\cite{Alber01, Vala05, Wu22, Kubica22}. Typically, undetected leakage errors can have even more detrimental effects on QEC than Pauli errors, as traditional methods for correcting Pauli errors do not apply~\cite{Ghosh13} and methods such as leakage-reducing operations~\cite{Wu02, Byrd04, Byrd05} and circuits~\cite{Fowler13, Ghosh15, Suchara15, Brown18, Brown19, Brown20} require significant overhead.
However, when leakage errors are detectable, they can be converted to erasures by resetting the leaked qubit to a known state, e.g., the maximally mixed state, within the qubit subspace. Erasure conversion is expected to achieve significantly higher QEC thresholds for hardware platforms such as superconducting qubits~\cite{Kubica22} and Rydberg atoms~\cite{Wu22}. Trapped ions are a leading candidate for a scalable quantum computing platform~\cite{Brown16}. In particular, QEC has been demonstrated in various trapped-ion experiments~\cite{Schindler11, Nigg14, Linke17, Egan21, RyanAnderson21, RyanAnderson22, Postler22, Erhard21}, which include fault-tolerant memory~\cite{Egan21, RyanAnderson21} and even logical two-qubit gates~\cite{Erhard21, Postler22, RyanAnderson22} on distance-3 QEC codes. Here we address the question of whether the idea of erasure conversion can be applied to trapped-ion systems. In fact, the erasure-conversion method in Ref.~\cite{Wu22}, designed for Rydberg atoms, can be directly applied to trapped ions. In Ref.~\cite{Wu22}, a qubit is encoded in the metastable level, such that the majority (approximately $98\%$) of the spontaneous decay of the Rydberg states during two-qubit gates does not return to the qubit subspace. Additional operations can detect such decay, thereby revealing the locations of errors. For trapped ions, the spontaneous decay of the excited states during laser-based gate operations is also the fundamental source of errors, which we aim to convert to erasures in this paper. Note that an earlier work~\cite{Campbell20} has proposed a method of detecting a different type of error for trapped ions using qubits in the metastable level. While the most popular choice of a trapped-ion qubit is the ground qubit encoded in the $S_{1/2}$ manifold, the metastable qubit encoded in the $D_{5/2}$ or $F_{7/2}^o$ manifold is also a promising candidate~\cite{Allcock21}.
Recently, high-fidelity coherent conversion between ground and metastable qubits has been experimentally demonstrated using Yb$^+$ ions~\cite{Yang22}. Also, it has been proposed that when ground and metastable qubits are used together, intermediate state measurement and cooling can be performed within the same ion chain~\cite{Allcock21}. Therefore, it is a timely task to add erasure conversion to the list of functionalities of metastable qubits. A careful analysis is needed before concluding that metastable qubits will be more advantageous in QEC than ground qubits. As discussed below, the erasure conversion relies on the fact that the excited states are more strongly coupled to the ground states than to the metastable states. However, this fact may also cause the Rabi frequency of metastable qubits to be significantly lower than that of ground qubits, which leads to longer gate times for metastable qubits. Also, most metastable states decay to the ground manifold after a finite lifetime, while ground qubits have a practically infinite lifetime. Whether the advantage of having a higher threshold overcomes these drawbacks needs to be verified. This paper is organized as follows. In Sec.~\ref{sec:sec2}, we introduce the method of laser-based gate operation and erasure conversion on metastable qubits. In Sec.~\ref{sec:sec3}, we present the model of various types of errors for ground and metastable qubits and discuss the criteria for comparison. In Sec.~\ref{sec:sec4}, we briefly introduce the surface code and the simulation method. In Sec.~\ref{sec:sec5}, we present the results of comparing the QEC performance between ground and metastable qubits. Specifically, we conclude that metastable qubits may outperform ground qubits when metastable qubits allow higher laser power than ground qubits, which is reasonable considering the material loss due to lasers.
In Sec.~\ref{sec:sec6}, we compare the erasure-conversion scheme on trapped ions and Rydberg atoms, and discuss future directions. We conclude with a summary in Sec.~\ref{sec:sec7}. \section{Erasure-conversion scheme} \label{sec:sec2} In this paper, we denote the hyperfine quantum state as $|L, J; F, M\rangle$, where $L$, $J$, $F$, and $M$ are the quantum numbers in the standard notation. Also, we denote a set of all states with the same $L$ and $J$ as a manifold. We define the ground qubit as a hyperfine clock qubit encoded in the $S_{1/2}$ manifold, with $\ket{0}_g := \ket{0, 1/2; I-1/2, 0}$ and $\ket{1}_g := \ket{0, 1/2; I+1/2, 0}$, where $I$ is the nuclear spin~\cite{Ozeri07}. Similarly, we define the metastable qubit as a hyperfine clock qubit encoded in the $D_{5/2}$ manifold, with $\ket{0}_m := \ket{2, 5/2; F_0, 0}$ and $\ket{1}_m := \ket{2, 5/2; F_0+1, 0}$, which is suggested for $\rm{Ba}^+$, $\rm{Ca}^+$, and $\rm{Sr}^+$ ions~\cite{Allcock21}. Here, $F_0$ can be chosen as any integer that satisfies $|J-I| \leq F_0 < J+I$. Both qubits are insensitive to magnetic fields to first order, as $M=0$. Unlike ground qubits, metastable qubits are susceptible to idling errors due to their finite lifetime. Since a $D_{5/2}$ state spontaneously decays to the $S_{1/2}$ manifold, such an error is a \textit{leakage} outside the qubit subspace. The probability that an idling error occurs during time duration $t$ after state initialization is given by \begin{equation} \label{eq:idleerror} p^{\rm (idle)}(t) = 1 - e^{-t/\tau_m}, \end{equation} where $\tau_m$ is the lifetime of the metastable state. Typically, $\tau_m$ is on the order of a few to tens of seconds for $D_{5/2}$ states~\cite{Allcock21}.
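A minimal numeric sketch of the idle-error probability (\ref{eq:idleerror}); the lifetime value and the idling durations below are illustrative rather than tied to a particular ion species:

```python
from math import expm1

def p_idle(t: float, tau_m: float) -> float:
    """Leakage probability of a metastable qubit idling for time t (s),
    given metastable-state lifetime tau_m (s): p = 1 - exp(-t / tau_m)."""
    return -expm1(-t / tau_m)  # expm1 avoids cancellation for t << tau_m

tau_m = 30.0  # illustrative lifetime, within the few-to-tens-of-seconds range
for t in (1e-6, 1e-3, 1.0):  # some representative idling durations
    print(f"t = {t:g} s -> p_idle = {p_idle(t, tau_m):.2e}")
```

For $t \ll \tau_m$ this reduces to $p^{\rm (idle)}(t) \approx t/\tau_m$, so millisecond-scale idling contributes errors at roughly the $10^{-5}$ level for a lifetime of tens of seconds.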
\begin{figure*} \caption{(a) The laser-based gate operation (blue) on the $S_{1/2}$ ($D_{5/2}$) manifold of the ground (metastable) qubit, detuned from the $P_{3/2}$ manifold. (b) The leakage-detection scheme using fluorescence on transitions resonant to $S_{1/2} \rightarrow P_{1/2}$ and $D_{3/2} \rightarrow P_{1/2}$. \label{fig:fig1}} \end{figure*} Laser-based gate operations on ground (metastable) qubits are performed using the two-photon Raman transition, where the laser frequencies are detuned from the $S_{1/2}$ $(D_{5/2}) \rightarrow P_{3/2}$ transition, as described in Fig.~\ref{fig:fig1}. Here we define the detuning $\Delta_g$ ($\Delta_m$) as the laser frequency minus the frequency difference between the $S_{1/2}$ ($D_{5/2}$) and $P_{3/2}$ manifolds. Apart from the ``technical'' sources of gate error due to noise in the experimental system, a fundamental source of gate error is the spontaneous scattering of the atomic state from the short-lived $P$ states. During ground-qubit gates, both the $P_{1/2}$ and the $P_{3/2}$ states contribute to the two-photon transition as well as to gate error, while for metastable-qubit gates, only the $P_{3/2}$ states contribute, as the transition between the $D_{5/2}$ and $P_{1/2}$ states is forbidden. When an ion that is initially in the $P_{3/2}$ state decays, the state falls to one of the $S_{1/2}$, $D_{3/2}$, and $D_{5/2}$ manifolds with probability $r_1$, $r_2$, and $r_3$, respectively ($r_1 + r_2 + r_3 = 1$), where these probabilities are known as the \textit{resonant branching fractions}. Typically, $r_1$ is several times larger than $r_2$ and $r_3$. For ground (metastable) qubits, if the atomic state decays to either qubit level of the $S_{1/2}$ ($D_{5/2}$) manifold, the resulting gate error can be described as a \textit{Pauli} error. On the other hand, if the atomic state decays to the $D_{3/2}$ manifold, or the $D_{5/2}$ ($S_{1/2}$) manifold, or the hyperfine states of the $S_{1/2}$ ($D_{5/2}$) manifold other than the qubit states, the resulting gate error is a \textit{leakage}. We now describe how the majority of the leakage can be detected when metastable qubits are used, similarly to the scheme proposed in Ref.~\cite{Wu22}.
Specifically, whenever the atomic state has decayed to either the $S_{1/2}$ or the $D_{3/2}$ manifold, the state can be detected using lasers that induce fluorescence on cycling transitions resonant to $S_{1/2} \rightarrow P_{1/2}$ and to $D_{3/2} \rightarrow P_{1/2}$, as described in Fig.~\ref{fig:fig1}(b). Unlike a typical qubit-state detection scheme where the $\ket{1}$ state is selectively optically cycled between $\ket{1}$ and appropriate sublevels in the $P_{1/2}$ manifold, this leakage detection can be performed using broadband lasers (such as in hyperfine-repumped laser cooling) such that all hyperfine levels in the $S_{1/2}$ and $D_{3/2}$ manifolds are cycled to $P_{1/2}$. In the rare event of detecting leakage, the qubit is reset to either $\ket{0}_m$ or $\ket{1}_m$, with probability $1/2$ each. This effectively replaces the leaked state with the maximally mixed state $I/2$ in the qubit subspace, which completes converting leakage to \textit{erasure}. Resetting the metastable qubit can be performed by the standard ground-qubit state preparation followed by a coherent electric-quadrupole transition. This has recently been experimentally demonstrated with high fidelity in less than \mbox{1 $\mu$s} using Yb$^+$ ions~\cite{Yang22}. Note that as the transition between the $P_{1/2}$ and $D_{5/2}$ states is forbidden, the photons fluoresced from the $P_{1/2}$ manifold during leakage detection and ground-qubit state preparation are not resonant to any transition involving the metastable-qubit states. This allows the erasure conversion to be performed on an ion while, with high probability, preserving the qubit states of the nearby ions. For ground qubits, an analogous erasure-conversion scheme of detecting leakage to the $D_{3/2}$ and $D_{5/2}$ manifolds would destroy the ground-qubit states of the nearby ions, as both the $P_{1/2}$ and the $P_{3/2}$ states decay to $S_{1/2}$ states with high probability.
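The reset step above can be verified with elementary linear algebra: averaging the two reset outcomes $\ket{0}_m\!\bra{0}_m$ and $\ket{1}_m\!\bra{1}_m$ with probability $1/2$ each yields the maximally mixed state $I/2$, independent of the pre-reset state. A minimal sketch using plain $2\times 2$ matrices (no quantum library assumed):

```python
# Projectors |0><0| and |1><1| on the metastable-qubit subspace,
# written as 2x2 nested lists.
P0 = [[1.0, 0.0], [0.0, 0.0]]
P1 = [[0.0, 0.0], [0.0, 1.0]]

# Equal-probability mixture of the two reset outcomes.
rho = [[0.5 * (P0[i][j] + P1[i][j]) for j in range(2)] for i in range(2)]

# rho equals I/2, the maximally mixed state: all memory of the
# leaked state is discarded, which is what defines an erasure.
identity_over_2 = [[0.5, 0.0], [0.0, 0.5]]
```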
In the scheme described above, leakage to $D_{5/2}$ states other than the qubit states remains undetected. Such leakage can be handled by selectively pumping the $D_{5/2}$ hyperfine states except for $\ket{0}_m$ and $\ket{1}_m$ to the $S_{1/2}$ manifold through $P_{3/2}$. With high probability, the atomic state eventually decays to either the $S_{1/2}$ or the $D_{3/2}$ manifold, which then can be detected as described above. However, this requires the laser polarization to be aligned with high precision such that the qubit states are not accidentally pumped~\cite{Brown18}. Therefore, we defer a careful analysis of whether such a process is feasible and classify leakage to other $D_{5/2}$ states as undetected leakage when the erasure-conversion scheme is used. \section{Two-qubit-gate Error model} \label{sec:sec3} \begin{figure} \caption{The various types of error rates of the Ba$^+$ (top) and Ca$^+$ (bottom) qubits as the detuning $\Delta_q$ ($q=g,m$) from the $P_{3/2}$ manifold is varied. \label{fig:fig2}} \end{figure} In this section, we describe the error model that we use for comparing the logical performance of ground and metastable qubits. The source of gate errors that we consider here is the spontaneous decay of excited states, which can cause various types of errors. When excited states decay back to one of the qubit states, either a bit flip or a phase flip occurs. When excited states decay to any other state, a leakage error occurs. Finally, for metastable qubits, when the state after decay is outside the $D_{5/2}$ manifold, such leakage can be converted to an erasure. Figure~\ref{fig:fig2} shows the various types of error rates of the Ba$^+$ and Ca$^+$ qubits as the lasers' detuning from the $P_{3/2}$ manifold is varied.
Here, subscripts $g$ and $m$ denote the ground and metastable qubit, respectively, and $p_q^{(xy)}$, $p_q^{(z)}$, $p_q^{(l)}$, $p_q^{(e)}$, and $p_q$ \mbox{($q=g,m$)} denote the rate of bit flip, phase flip, leakage, erasure, and total error, respectively, for each qubit on which a two-qubit gate is applied. Up to SubSec.~\ref{subsec:2C}, we provide qualitative explanations on how these error rates are calculated from atomic physics, following the discussion in Refs.~\cite{Moore23, Uys10}. The quantitative derivations are deferred to Appendix~\ref{app:B}. In our model, the only controllable parameter that determines the error rates is the laser detuning from the transition to excited states. In reality, the laser power is also important, as the gate time is determined by both the detuning and the laser power. In SubSec.~\ref{subsec:2D}, we provide methods of comparing ground and metastable qubits with a fixed gate time, such that the errors due to technical noise are upper bounded to the same amount. \subsection{Definitions} In order to compare the gate error rates between ground and metastable qubits, we need to define several quantities. First, we define the maximal one-photon Rabi frequency of the transition between a state in the manifold of the qubit states, denoted with subscript \mbox{$q \in \{g = S_{1/2},\: m = D_{5/2}\}$}, and an excited $P$ state \mbox{($L=1$)}, as \begin{equation} \label{eq:g} g_{q} := \frac{E_q}{2 \hbar} \mu_{q}, \end{equation} where \begin{equation} \label{eq:mu} \mu_{q} := \sqrt{k_q} \left| \langle L=1 || T^{(1)}({\vec{d}}) || L_q \rangle \right|. \end{equation} Here, $E_q$ is the electric field amplitude of the laser used for the qubit in manifold $q$, $\mu_{q}$ is the largest dipole-matrix element of transition between a state in manifold $q$ and a $P$ state, and $T^{(1)}({\vec{d}})$ is the dipole tensor operator of rank 1. 
Also, $k_q$ is a coefficient that relates $g_{q}$ to the \textit{orbital} dipole transition-matrix element, which is calculated in Appendix~\ref{app:A} using the Wigner-$3j$ and $6j$ coefficients. Next, in order to obtain the scattering rates that lead to various types of errors, we first define the decay rate of the manifold of the excited states, denoted with subscript $e \in \{P_{1/2}, P_{3/2}\}$, to the final manifold, denoted with subscript $f \in \{S_{1/2}, D_{3/2}, D_{5/2}\}$, as \begin{align} \label{eq:gamma} \gamma_{e,f} &:= \frac{\omega_{e,f}^3}{3\pi c^3 \hbar \epsilon_0} \sum_{F_f, M_f} \left| \langle L_e, J_e; F_e, M_e | \vec{d} | L_f, J_f ; F_f, M_f \rangle \right|^2 \nonumber \\ &= \frac{\alpha_{e,f} \omega_{e,f}^3}{3\pi c^3 \hbar \epsilon_0} \left| \langle L=1 || T^{(1)}({\vec{d}}) || L_f \rangle \right|^2, \end{align} where $\omega_{e,f}$ is the frequency difference between the manifolds of the excited and final states. Also, $\alpha_{e,f}$ is a coefficient that relates $\gamma_{e,f}$ to the \textit{orbital} dipole transition-matrix element, which is calculated in Appendix~\ref{app:A} using the Wigner-$3j$ and $6j$ coefficients. Note that $\gamma_{e,f}$ does not depend on $F_e$ and $M_e$ as the frequency differences between hyperfine states of the same manifold are ignored. The laser-based gate operations use two-photon Raman beams of frequency $\omega_L$ that are detuned from the transition between manifolds $e$ and $q$ by $-\Delta_{e,q}$. In such case, the decay rate is given by \cite{Moore23} \begin{equation} \label{eq:gammaprime} \gamma'_{e,f} := \gamma_{e,f} \left(\frac{\omega_{e,f} - \Delta_{e,q}}{\omega_{e,f}} \right)^3 = \gamma_{e,f} \left(\frac{\omega_L - \omega_{f,q}}{\omega_{e,f}} \right)^3, \end{equation} where $\omega_{f,q}$ is the energy of manifold $f$ minus the energy of manifold $q$. Note that the numerator of the cubed factor does not depend on the choice of the manifold $e$ of the excited states. 
While manifold $e$ can be either $P_{1/2}$ or $P_{3/2}$, Eqs. (\ref{eq:gamma}) and (\ref{eq:gammaprime}) remove the dependence of $\gamma'_{e,f}/\alpha_{e,f}$ on $e$. This allows us to calculate the rates of scattering from the states of both $P_{1/2}$ and $P_{3/2}$ manifolds using only $\gamma'_{f} / \alpha_f$, where we define \begin{gather} \alpha_f := \alpha_{P_{3/2}, f}, \quad \gamma'_f := \gamma'_{P_{3/2}, f}. \label{eq:alphaf} \end{gather} Specifically, combining Eq. (\ref{eq:gammaprime}) and the branching fractions of $P_{3/2}$ states, $\gamma'_f$ is given by \begin{equation} \label{eq:gammaprimef} \gamma'_f = \begin{cases} (1 - \Delta_q / \omega_{P_{3/2}, S_{1/2}})^3 \times r_1 \gamma, & f = S_{1/2},\\ (1 - \Delta_q / \omega_{P_{3/2}, D_{3/2}})^3 \times r_2 \gamma, & f = D_{3/2},\\ (1 - \Delta_q / \omega_{P_{3/2}, D_{5/2}})^3 \times r_3 \gamma, & f = D_{5/2},\\ \end{cases} \end{equation} where $\gamma$ is the total decay rate of a $P_{3/2}$ state, and $\Delta_q$ is the detuning defined as the laser frequency minus the frequency difference between manifold $q$ and the $P_{3/2}$ manifold. For the detunings considered in this paper, $\Delta_q / \omega_{P_{3/2}, f}$ is at most approximately 0.1, so $r_i \gamma$ ($i=1,2,3$) is a reasonably close upper bound for the corresponding $\gamma'_f$. Table~\ref{tab:ka} shows the values of $k_q$ ($\alpha_f$) for the manifolds of various qubit (final) states, and their derivations can be found in Appendix~\ref{app:A}.
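As a quick sanity check of Eq.~(\ref{eq:gammaprimef}), the sketch below evaluates the cubed frequency-correction factor with all rates normalized to the total decay rate $\gamma$; the 10\% detuning ratio is the largest value considered in the text, and the inputs are illustrative rather than species-specific.

```python
def gamma_prime_over_gamma(delta_over_omega, r_i):
    """gamma'_f in units of the total P_{3/2} decay rate gamma: the
    branching fraction r_i times the cubed correction factor
    (1 - Delta_q / omega_{P3/2,f})^3 from the modified decay rate."""
    return (1.0 - delta_over_omega) ** 3 * r_i

# At the largest detuning ratio considered (~0.1), the correction factor
# is 0.9^3 = 0.729, so r_i * gamma upper-bounds gamma'_f to within ~27%.
factor = gamma_prime_over_gamma(0.1, 1.0)
```

At zero detuning the factor reduces to $r_i$ exactly, which is the limit used later to define the zero-detuning erasure-conversion rate.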
\begin{table} \caption{\label{tab:ka} The values of the coefficient $k_q$ ($\alpha_f$), for the manifolds of various qubit (final) states.} \begin{ruledtabular} \begin{tabular}{ccccccc} & $q = S_{1/2}$ & $q = D_{5/2}$ & & $f = S_{1/2}$ & $f = D_{3/2}$ & $f = D_{5/2}$ \\ \hline $k_q$ & 1/3 & 1/5 & $\alpha_f$ & 1/3 & 1/30 & 3/10 \\ \end{tabular} \end{ruledtabular} \end{table} \subsection{How errors arise from spontaneous scattering} \label{subsec:2B} Spontaneous scattering of the short-lived $P_{1/2}$ and $P_{3/2}$ states is the fundamental source of errors for laser-based gates. The type of error (phase flip, bit flip, leakage, or erasure) depends on to which atomic state the short-lived states decay. Rayleigh and Raman scattering are the two types of spontaneous scattering. Rayleigh scattering is the elastic case where the scattered photons and the atom do not exchange energy or angular momentum. Raman scattering is the inelastic case where the photons and the atom exchange energy, thus changing the internal state of the atom. For Rayleigh scattering, the error occurs to the qubit only when the scattering rates differ between the two qubit states, which we call effective Rayleigh scattering. This results in the dephasing of the qubit, or \textit{phase-flip} ($\hat{Z}$) error. For Raman scattering, either \textit{bit-flip} ($\hat{X}$ or $\hat{Y}$) or \textit{leakage} error occurs. Finally, for metastable qubits, if the atomic state after Raman scattering is in either the $S_{1/2}$ or the $D_{3/2}$ manifold, the leakage can be detected and converted to \textit{erasure}, as described in Sec.~\ref{sec:sec2}. We note in passing that for ground qubits, physically converting leakage to Pauli errors may be considered. Specifically, leakage of ground qubits to the $D_{3/2}$ and $D_{5/2}$ manifolds can be pumped back to the $S_{1/2}$ manifold, and leaked states in the $S_{1/2}$ manifold can be selectively pumped to the qubit states. 
While the former process is straightforward, the latter process suffers when the laser polarization is imperfect and unwanted (qubit) states are pumped~\cite{Brown18}. Therefore, we assume for simplicity that for ground qubits, all leakage during gates remains as leakage. The scattering rates during the two-photon Raman transition can be calculated using the Kramers-Heisenberg formula, as outlined in Ref.~\cite{Uys10}. In this section, we only introduce the scaling behavior and we defer the quantitative equations to Appendix~\ref{app:B}. The contribution of each excited state $\ket{J}$ in manifold $e$ to the rate of scattering from qubit state $\ket{i}$ in manifold $q$ to final state $\ket{j}$ in manifold $f$ is given by \begin{equation} \label{eq:Gammaij} \Gamma_{i,j,J} = C_{i,j,J} \frac{k_q \gamma'_f}{\alpha_f} \left(\frac{g_q}{\Delta_{e,q}} \right)^2, \end{equation} where $C_{i,j,J}$ is a proportionality constant that depends on the hyperfine structure of the atom and the polarization of the laser beams. By summing up over all excited and final states appropriately, the scattering rate that leads to each type of gate error can be calculated. For ground qubits, both the $P_{1/2}$ and $P_{3/2}$ states contribute to the scattering rates. Thus, the rates consist of terms that are proportional to both $(\omega_F - \Delta_g)^{-2}$ and $\Delta_g^{-2}$, where $\omega_F$ is the frequency difference between the $P_{1/2}$ and $P_{3/2}$ manifolds. Meanwhile, for metastable qubits, only the $P_{3/2}$ states contribute, as the $D_{5/2}$ states do not transition to $P_{1/2}$ states. Thus, the scattering rates are directly proportional to $\Delta_m^{-2}$. This explains why, in Fig.~\ref{fig:fig2}, as $\Delta_q / \omega_F$ approaches 1, the error rates of ground qubits increase but those of metastable qubits continue to decrease. 
\subsection{Two-qubit-gate error rates and erasure-conversion rate} \label{subsec:2C} \begin{table*} \caption{\label{tab:coeffs} The values of the nuclear spin $I$, the hyperfine total angular-momentum number $F_0$ of $\ket{0}_m$, the lifetime $\tau_m$ of the metastable state~\cite{Zhang20, Kreuter04}, the lifetime $\gamma^{-1}$ of the $P_{3/2}$ state~\cite{Zhang20, Jin93}, the geometric coefficient $c_0$ for the qubit-state Rabi frequency, the branching fractions $r_1$, $r_2$, and $r_3$ of the $P_{3/2}$ state~\cite{Zhang20, Gerritsma08}, and the zero-detuning erasure-conversion rate $R_e^{(0)}$ (in bold type) of two metastable qubits chosen as examples. } \begin{ruledtabular} \begin{tabular}{cccccccccc} Isotope & $I$ & $F_0$ & $\tau_m$ (s) & $\gamma^{-1}$ (ns) & $c_0$ & $r_1$ & $r_2$ & $r_3$ & $R_e^{(0)}$ \\ \hline $^{133} {\rm Ba}^+$ & $1/2$ & 2 & 30.14 & 6.2615 & 1/10 & 0.7417 & 0.0280 & 0.2303 & \textbf{0.7941}\\ $^{43} {\rm Ca}^+$ & $7/2$ & 5 & 1.16 & 6.924 & $\sqrt{7/220}$ & 0.9347 & 0.0066 & 0.0587 & \textbf{0.9509}\\ \end{tabular} \end{ruledtabular} \end{table*} The error rates are obtained by the scattering rate times the gate time, which is determined by the Rabi frequency of the qubit-state transition. For ground qubits, both $S_{1/2} \rightarrow P_{1/2}$ and $S_{1/2} \rightarrow P_{3/2}$ transitions, detuned by $\omega_F - \Delta_g$ and $-\Delta_g$, respectively, contribute to the two-photon Raman transition and similarly to the scattering rate. When the two Raman beams are both linearly polarized and mutually perpendicular, the Rabi frequency of the ground qubit is given by~\cite{Wineland03, Ozeri07} \begin{equation} \label{eq:Rabiground} \Omega_g = \frac{g_g^2}{3} \left|\frac{\omega_F}{(\omega_F - \Delta_g) \Delta_g} \right|, \end{equation} where $g_g$ is the maximal one-photon Rabi frequency of a $S_{1/2}$ state. For metastable qubits, only the $D_{5/2} \rightarrow P_{3/2}$ transition, detuned by $-\Delta_m$, contributes to the Raman transition. 
The Rabi frequency of the metastable qubit is given by \begin{equation}\label{eq:Rabimeta} \Omega_m = \frac{c_0 g_m^2}{|\Delta_m|}, \end{equation} where $g_m$ is the maximal one-photon Rabi frequency of a $D_{5/2}$ state, and $c_0$ is a geometric coefficient determined by $I$ and $F_0$. The gate time for a two-qubit gate following the M\o{}lmer-S\o{}rensen (MS) scheme~\cite{Sorensen99} is typically given by \begin{equation}\label{eq:gatetime} t_{\rm gate} = \frac{\pi \sqrt{K}}{2 \eta \Omega}, \end{equation} where $\Omega$ is the qubit-state Rabi frequency, $\eta$ is the Lamb-Dicke parameter, and $K$ is the number of revolutions of the trajectory of the ions in phase space~\cite{Ozeri07}. The error rates of each qubit on which a two-qubit gate is applied can be obtained by multiplying the corresponding sums of scattering rates in Eq.~(\ref{eq:Gammaij}) and the gate time in Eq.~(\ref{eq:gatetime}). Importantly, the $g_q^2$ factors are canceled out, which removes the dependence of the error rates on the electric field amplitude. Thus, the error rates can be expressed as functions of only the detuning $\Delta_q$, as shown in Fig.~\ref{fig:fig2}. Here, $\eta = 0.05$ and $K=1$ are fixed to their typical values~\footnote{While we use a fixed value of $\eta$ for ground and metastable qubits in our calculations, in reality the Lamb-Dicke parameter is proportional to $2 \omega_L \sin(\theta/2)/c \times \sqrt{\hbar/2M\omega}$ (up to a normalization coefficient that depends on the length of ion chain), where $\theta$ ($0 \leq \theta \leq \pi)$ is the angle between the two Raman beams, $c$ is the speed of light, $M$ is the ion mass, and $\omega$ is the motional-mode frequency. As the laser frequency $\omega_L$ is smaller for metastable qubits than for ground qubits (by a factor of roughly 1.3 for Ba$^+$ and 2.2 for Ca$^+$), fixing $\eta$ between metastable and ground qubits may require using larger $\theta$ or smaller $\omega$ for metastable qubits than ground qubits. 
Alternatively, metastable qubits may require additional laser power to compensate for the effects of smaller $\eta$~\cite{Moore23}.}. The quantitative equations for the error rates of various types can be found in Appendix~\ref{app:B}. We note that single-qubit-gate error rates can also be similarly obtained by using $\pi/(2\Omega)$ as the gate time. We do not consider single-qubit-gate errors in this paper as two-qubit-gate errors are more than an order of magnitude larger, due to the additional factor $\sqrt{K}/\eta$. An important metric for metastable qubits is the ratio of the erasure rate to the total error rate \begin{equation}\label{eq:Re} R_e := p_m^{(e)} / p_m, \end{equation} which we denote as the \textit{erasure-conversion rate}, following the terminology of Ref.~\cite{Wu22}. An intuitive guess of $R_e$ from Fig.~\ref{fig:fig1} would be $r_1 + r_2$, the branching fraction to the $S_{1/2}$ and $D_{3/2}$ manifolds; however, $R_e$ is slightly larger than $r_1 + r_2$ for two reasons. First, while $\gamma'_f = r_i \gamma$ for the corresponding $i$ when $\Delta_m = 0$, as $\Delta_m$ increases, $\gamma'_{D_{5/2}}$ decreases faster than $\gamma'_{S_{1/2}}$ and $\gamma'_{D_{3/2}}$ [see Eq.~(\ref{eq:gammaprimef})], which leads to larger $R_e$. To set a constant lower bound on $R_e$, we define the zero-detuning erasure-conversion rate \begin{equation}\label{eq:Re0} R_e^{(0)} := \lim_{\Delta_m / \omega_F \rightarrow 0} R_e. \end{equation} For the detunings considered in this paper, $R_e$ is larger than $R_e^{(0)}$ by at most approximately $0.01$ for the Ba$^+$ qubit and $0.001$ for the Ca$^+$ qubit. Second, scattering to the qubit subspace of the $D_{5/2}$ manifold does not always cause an error. When the scattering is elastic, the phase-flip rate is not proportional to the Rayleigh-scattering rate but to the difference between the Rayleigh-scattering rates of the two qubit states, as explained in Sec.~\ref{subsec:2B}.
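The cancellation of the $g_q^2$ factors noted above can be made explicit with a short numeric sketch for a metastable qubit: the scattering rate scales as $(g_m/\Delta_m)^2$ while the gate time scales as $1/\Omega_m \propto \Delta_m/g_m^2$, so their product is independent of the field amplitude. All dimensionful prefactors are suppressed, and the numbers below are illustrative placeholders, not physical values.

```python
import math

ETA, K, C0 = 0.05, 1, 0.1  # Lamb-Dicke parameter, phase-space loops, and the
                           # geometric coefficient c_0 (the Ba+ value)

def ms_scattering_error(g_m, delta_m, gamma_eff):
    """Scattering error accumulated over one MS gate on a metastable qubit:
    the rate scaling gamma_eff * (g_m/delta_m)^2 times the gate time
    pi*sqrt(K)/(2*eta*Omega_m), with Omega_m = c_0 * g_m^2 / |delta_m|.
    Overall proportionality constants are suppressed (illustrative only)."""
    rabi = C0 * g_m ** 2 / abs(delta_m)
    t_gate = math.pi * math.sqrt(K) / (2 * ETA * rabi)
    rate = gamma_eff * (g_m / delta_m) ** 2
    return rate * t_gate

# The g_m^2 factors cancel: the error depends on the detuning but not on
# the laser's electric-field amplitude.
e_weak = ms_scattering_error(g_m=1.0, delta_m=1e5, gamma_eff=1.0)
e_strong = ms_scattering_error(g_m=7.0, delta_m=1e5, gamma_eff=1.0)
```

The same cancellation holds for ground qubits, which is why Fig.~\ref{fig:fig2} can plot the error rates as functions of the detuning alone.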
Therefore, we expect $R_e^{(0)}$ to be slightly larger than $r_1 + r_2$ as well. The quantitative equation for $R_e^{(0)}$ can be found in Appendix~\ref{app:B}. Table~\ref{tab:coeffs} shows the values of various parameters relevant to the Rabi frequency and the scattering rates for two metastable qubits chosen as examples. In this paper, for each of the two species Ba$^+$ and Ca$^+$, we choose the isotope and $F_0$, the hyperfine total angular-momentum number of $\ket{0}_m$, such that the qubit-splitting frequency is the largest among the candidates shown in Ref.~\cite{Allcock21}. The chosen isotopes and $F_0$ values are $^{133} {\rm Ba}^+$, $F_0=2$ and $^{43} {\rm Ca}^+$, $F_0=5$. The zero-detuning erasure-conversion rate is 0.7941 (0.9509) for the Ba$^+$ (Ca$^+$) metastable qubit, which is slightly larger than $r_1 + r_2$. For both species, a large portion of the errors can be converted to erasures, which significantly improves the logical performance, as shown in Sec.~\ref{sec:sec5}. \subsection{Comparison of ground and metastable qubits} \label{subsec:2D} To compare the logical performance of ground and metastable qubits, we consider the following three cases: \begin{itemize} \item Case \rom{1}: $p_g = p_m$ \item Case \rom{2}: $\Omega_g = \Omega_m$ and $E_g = E_m$ \item Case \rom{3}: $\Omega_g = \Omega_m$ and $\Delta_g = \Delta_m$ \end{itemize} where $E_g$ ($E_m$) is the electric field amplitude of the laser used for ground (metastable) qubits. In case \rom{1}, the total error rate is fixed between ground and metastable qubits, as in Ref.~\cite{Wu22}. Here, we expect metastable qubits to outperform ground qubits, as a significant portion ($R_e$) of the gate errors of the metastable qubits are erasures, which is more favorable than Pauli errors for QEC. However, such a comparison does not reflect an important disadvantage of metastable qubits. 
Namely, the transition between metastable and excited states is significantly weaker than that between ground and excited states, i.e., \mbox{$\mu_m \ll \mu_g$}. Therefore, given the same laser power, gates on metastable qubits require either a smaller detuning [see Eq.~(\ref{eq:Rabimeta})] or a longer gate time [see Eq.~(\ref{eq:gatetime})]. Note that $R_e$ being close to one, which is an advantage of metastable qubits, also comes from the fact that \mbox{$\mu_m \ll \mu_g$}, as we see in detail below. If we completely ignore noise in the experimental system, we can simply use a sufficiently longer gate time for metastable qubits than for ground qubits, such that the total error rates due to spontaneous scattering match. However, this is unrealistic, especially given that the dominant sources of two-qubit-gate errors in the current state-of-the-art trapped-ion systems are motional heating and motional dephasing~\cite{Wang20, Cetina22, Kang23ff}, the effects of which build up with gate time. Therefore, we also compare ground and metastable qubits with fixed Rabi frequency, i.e., $\Omega_g = \Omega_m$. This requires a smaller detuning for metastable qubits, which leads to a larger gate error due to spontaneous scattering for metastable qubits. However, the gate time is fixed, so the gate error due to technical noise is upper bounded to the same amount. We note that with metastable qubits and the erasure-conversion scheme, decreasing the detuning $\Delta_m$ is in some sense converting Pauli errors to erasures (and a small amount of undetected leakage), as the gate time decreases at the cost of a larger spontaneous-scattering rate. Given the magnitude of technical noise, an optimal amount of detuning should exist, where the optimum is determined by the \textit{logical} error rate (for a similar approach, see Ref.~\cite{Jandura22}).
On the other hand, for ground qubits, $\Delta_g$ needs to be set such that the Raman beams are far detuned from both $P_{1/2}$ and $P_{3/2}$ manifolds, as the detrimental leakage errors cannot be converted to erasures. To fix the Rabi frequencies, we first express the ratio $\mu_m/\mu_g$ using variables with experimentally known values. From Eqs. (\ref{eq:mu}) and (\ref{eq:gamma}), we have \begin{equation} \label{eq:muratio} \bigg( \frac{\mu_m}{\mu_g} \bigg)^2 = \frac{k_m \alpha_g \gamma_{e,m}}{k_g \alpha_m \gamma_{e,g}} \bigg(\frac{\omega_{e,g}}{\omega_{e,m}} \bigg)^3 = \frac{2 r_3}{3 r_1} \bigg(\frac{\omega_{e,g}}{\omega_{e,m}} \bigg)^3 , \end{equation} where $\omega_{e,g}$ ($\omega_{e,m}$) is the frequency difference between the $P_{3/2}$ and $S_{1/2}$ ($D_{5/2}$) manifolds. Note that $r_3/r_1$ is proportional to $(\mu_m/\mu_g)^2$, which shows that large $R_e$ stems from $\mu_m \ll \mu_g$. Then, the Rabi-frequency ratio $\Omega_m / \Omega_g$ is obtained by Eqs. (\ref{eq:g}), (\ref{eq:Rabiground}), and (\ref{eq:Rabimeta}) as \begin{align} \label{eq:Rabiratio} \frac{\Omega_m}{\Omega_g} &= 3c_0 \left| \frac{(\omega_F - \Delta_g) \Delta_g}{\omega_F \Delta_m} \right| \bigg( \frac{\mu_m E_m}{\mu_g E_g} \bigg)^2, \nonumber\\ &= 2c_0 \frac{r_3}{r_1} \left| \frac{(\omega_F - \Delta_g) \Delta_g}{\omega_F \Delta_m} \right| \bigg(\frac{\omega_{e,g}}{\omega_{e,m}} \bigg)^3 \bigg( \frac{E_m}{E_g} \bigg)^2. \end{align} The condition $\Omega_m / \Omega_g = 1$ is used for comparing ground and metastable qubits with fixed gate time. There are two additional choices of fixing variables: $E_g = E_m$ (case \rom{2}) and $\Delta_g = \Delta_m$ (case \rom{3}). 
In case \rom{2}, the ratio of detunings is given by \begin{equation} \label{eq:case2} {\rm case \: \rom{2} :} \quad \frac{\Delta_m}{\Delta_g} = 2c_0 \frac{r_3}{r_1} \left|1 - \frac{\Delta_g}{\omega_F} \right| \bigg(\frac{\omega_{e,g}}{\omega_{e,m}} \bigg)^3, \end{equation} which is typically on the order of $10^{-1}$ unless $\Delta_g$ is close to $\omega_F$. In case \rom{3}, the ratio of electric field amplitudes is given by \begin{equation} \label{eq:case3} {\rm case \: \rom{3} :} \quad \frac{E_m}{E_g} = \sqrt{\frac{r_1}{2c_0 r_3}} \left|1 - \frac{\Delta_g}{\omega_F} \right|^{-1/2} \bigg(\frac{\omega_{e,m}}{\omega_{e,g}} \bigg)^{3/2}, \end{equation} which is typically several times larger than one unless \mbox{$\Delta_g=\Delta_m$} is close to $\omega_F$. We note that case \rom{3}, where $E_m$ is larger than $E_g$, is experimentally motivated, as the limitation on laser power is often imposed by material loss in optical devices such as mirrors and waveguides~\cite{Brown21}. Such loss is less severe at longer laser wavelengths. As the $D_{5/2} \rightarrow P_{3/2}$ transition has a longer wavelength than the $S_{1/2} \rightarrow P_{3/2}$ transition, we expect that using metastable qubits allows significantly higher laser power for gate operations than using ground qubits. The laser power required to achieve a typical Rabi frequency is estimated for both ground and metastable qubits in Appendix~\ref{app:C}. \section{Surface-code simulation} \label{sec:sec4} It is well established that erasures, or errors with known locations, are more favorable than other types of errors for quantum~\cite{Grassl97, Bennett97, Lu08} and classical codes, as the decoder can use the information of the erasure locations.
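To make the ``order of $10^{-1}$'' and ``several times larger than one'' statements concrete, the sketch below evaluates Eqs.~(\ref{eq:case2}) and (\ref{eq:case3}) for the Ba$^+$ parameters of Table~\ref{tab:coeffs}. The frequency ratio $\omega_{e,g}/\omega_{e,m} \approx 1.3$ is an assumption inferred from the laser-frequency factor quoted in the footnote above, and $\Delta_g/\omega_F = 0.5$ is an arbitrary illustrative choice.

```python
import math

# Ba+ values from Table II; w_ratio = omega_{e,g}/omega_{e,m} ~ 1.3 is an
# assumed value, and x = Delta_g/omega_F = 0.5 an arbitrary illustration.
c0, r1, r3 = 0.1, 0.7417, 0.2303
w_ratio = 1.3
x = 0.5

# Case II (E_g = E_m): ratio of detunings, Eq. (case2) -> ~0.07.
detuning_ratio = 2 * c0 * (r3 / r1) * abs(1 - x) * w_ratio ** 3

# Case III (Delta_g = Delta_m): ratio of field amplitudes,
# Eq. (case3) -> ~3.8.
field_ratio = (math.sqrt(r1 / (2 * c0 * r3))
               * abs(1 - x) ** -0.5 / w_ratio ** 1.5)
```

With these (assumed) numbers, the two ratios come out near $0.07$ and $3.8$, consistent with the qualitative statements above. Note also the exact identity $(\Delta_m/\Delta_g)\,(E_m/E_g)^2 = 1$ at equal $\Delta_g/\omega_F$, which follows directly from Eq.~(\ref{eq:Rabiratio}).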
Thus, we expect the advantage of the erasure conversion of metastable qubits to be valid for all QEC codes; however, it is certainly valuable to estimate how much the QEC performance, such as the circuit-level threshold, of a particular code is improved by erasure conversion. In this paper, we choose to simulate the surface code~\cite{Raussendorf07, Fowler09, Fowler12} (for a detailed review, see Ref.~\cite{Fowler12}). In particular, we consider the rotated surface code~\cite{Tomita14}, which uses slightly fewer qubits than the standard surface code. Figure~\ref{fig:fig3}(a) shows the rotated surface code of distance $d=3$ consisting of $2d^2-1=17$ qubits. The logical qubit is encoded in $d^2$ data qubits (black circles) and the $Z$ and $X$ stabilizers (the red and blue plaquettes, respectively) are measured using $d^2-1$ syndrome qubits (white circles). The logical operator $\hat{Z}_L$ ($\hat{X}_L$) is the product of the $\hat{Z}$ ($\hat{X}$) operators of $d$ data qubits across a horizontal (vertical) line. The measured stabilizers are used by a decoder to infer the locations and types ($\hat{X}$, $\hat{Y}$, or $\hat{Z}$) of errors. \begin{figure} \caption{(a) The layout of the rotated surface code of distance $d=3$. The black (white) circles represent data (syndrome) qubits. The red (blue) plaquettes represent $Z$ ($X$) stabilizers. The red horizontal (blue vertical) line represents the logical $\hat{Z}_L$ ($\hat{X}_L$) operator. (b) The circuit for a single round of error correction. \label{fig:fig3}} \end{figure} The surface code is a viable candidate for QEC in an experimental system, as it has a high (approximately 1\%) circuit-level threshold and it can be implemented using only nearest-neighbor interactions on a two-dimensional layout. Recently, there has been wide experimental success in demonstrating fault-tolerant memory of a single logical qubit encoded in the surface code using superconducting qubits~\cite{Krinner22, Zhao22, Google23}. We note that for trapped-ion systems, the use of nearest-neighbor interactions is not required.
Thus, recent QEC experiments in trapped-ion systems have used the Bacon-Shor code~\cite{Egan21}, the five-qubit code~\cite{RyanAnderson22}, and the color code~\cite{RyanAnderson21, RyanAnderson22, Postler22}. For distance \mbox{$d=3$}, these codes allow fewer physical qubits and gate operations~\cite{Debroy20}. However, simulating the error-correction threshold using these codes can be complicated, as the family of each code of various distances is defined using code concatenation (for an example of the color code, see Ref.~\cite{Steane97}). Meanwhile, the family of rotated surface codes is straightforwardly defined on square lattices of various sizes, which leads to feasible threshold simulations. Figure~\ref{fig:fig3}(b) shows the circuit for a single round of error correction for surface codes. First, the syndrome qubits for measuring the $Z$ ($X$) stabilizers are initialized to state $\ket{0}$ ($\ket{+}$). Then, four CNOT gates are performed between the data qubit and the syndrome qubits, in the correct order. Finally, the syndrome qubits are measured in the respective basis to provide the error syndromes. This circuit is performed on all data qubits in parallel. As the measurements can be erroneous as well, typically $d$ rounds of error correction are consecutively performed for a distance-$d$ code. The most probable set of errors that could have caused the observed syndromes is inferred by a decoder run by a classical computer. Among the various efficient decoders for surface codes~\cite{Dennis02, Delfosse21, Huang20faulttolerant}, we choose the minimum-weight perfect-matching (MWPM) decoder, which finds the error chain of minimum weight using Edmonds' algorithm~\cite{Dennis02}. The errors are corrected by simply keeping track of the Pauli frame, thus not requiring any physical gate operations~\cite{Fowler12}. 
Erasure conversion on metastable qubits is performed by replacing both qubits with the maximally mixed state whenever leakage is detected during a two-qubit gate. In the actual implementation, this can be done by resetting both qubits to $\ket{0}_m$ and then performing an $\hat{X}$ gate with probability $1/2$, as shown for the case of the leakage of the third syndrome qubit in Fig.~\ref{fig:fig3}(b). The data qubit is also erased as a leakage error may propagate to the other qubit during a two-qubit gate (for details, see Appendix~\ref{app:D}). The decoder uses the information on the erasure locations by setting the weight of erased data qubits to zero, which decreases the weights of error chains consisting of the erased data qubits. We note that resetting to $\ket{0}_m$ instead of the maximally mixed state may be sufficient for erasure conversion. In this paper, we choose to reset to the maximally mixed state, as this can be described by a (maximally depolarizing) Pauli channel that is easy to simulate at the circuit level. To evaluate the QEC performance, we simulate the logical memory of rotated surface codes in the $Z$ basis. Specifically, we initialize the data qubits to $\ket{0}$, perform $d$ rounds of error correction for the distance-$d$ code, and then measure the data qubits in the $Z$ basis. Whether a logical error has occurred is determined by comparing the measurement of the $\hat{Z}_L$ operator [see Fig.~\ref{fig:fig3}(a)] and the expected value of $\hat{Z}_L$ after decoding. This is repeated many times to determine the logical error rate. The circuit-level simulations in this paper are performed using STIM~\cite{Gidney21}, a software package for fast simulations of quantum stabilizer circuits. The error syndromes generated by the circuit simulations are decoded using PyMatching~\cite{Higgott22}, a software package that executes the MWPM decoder.
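How the decoder benefits from zero-weight erased qubits can be seen with a toy calculation. In a common MWPM convention, a matching-graph edge with error probability $p$ carries the log-likelihood weight $\log[(1-p)/p]$; setting an erased qubit's edge weight to zero makes error chains through known erasures free. The sketch below illustrates this convention with hypothetical numbers (it is not the exact internal weighting of PyMatching):

```python
import math

def edge_weight(p_error, erased=False):
    """Matching-graph edge weight: the log-likelihood weight log((1-p)/p)
    for an ordinary edge, and zero for an edge on an erased data qubit."""
    if erased:
        return 0.0
    return math.log((1.0 - p_error) / p_error)

# A three-edge error chain whose middle qubit was erased: the erased edge
# contributes nothing, so this chain is cheaper for the decoder than an
# identical chain with no erasure information.
erased_chain = [edge_weight(0.01), edge_weight(0.01, erased=True),
                edge_weight(0.01)]
plain_chain = [edge_weight(0.01)] * 3
```

Lower-weight chains are preferentially selected by the minimum-weight matching, which is why knowing the erasure locations improves the correction.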
In particular, STIM allows the simulation of erasures and feeding of the location information into the decoder~\cite{stim_erasure}. \section{Results} \label{sec:sec5} \begin{figure*} \caption{(a) The logical error rates of ground (dashed) and metastable (solid) qubits for various code distances $d$ and two-qubit-gate error rates $p_q^{(2q)}$. (b) The corresponding effective error distances $\xi$.} \label{fig:fig4} \end{figure*} In this section, we compare the logical performance of ground and metastable qubits on the surface code, under the three cases in Sec.~\ref{subsec:2D}. For each two-qubit-gate error $p^{(2q)}_q := 2p_q - p^2_q$ ($q=g,m$), the composition of various types of errors is given by Fig.~\ref{fig:fig2}. For ground (metastable) qubits, the undetected-leakage (erasure) rate takes up the largest portion of the total error rate. First, we use the error model where the spontaneous scattering during the two-qubit gate is the only source of error. Both the undetected leakage and the erasure of rate $p$ are simulated as a depolarizing error, i.e., a Pauli error randomly chosen from $\{\hat{I}, \hat{X}, \hat{Y}, \hat{Z}\}$ with probability $p/4$ each. In the simulations, the only difference between undetected leakage and erasure is that the decoder knows and uses the locations of erasures but not the locations of undetected leakage. The errors during single-qubit gates, state preparation, idling, and measurement are not considered. During a two-qubit gate, an error on one of the two qubits may propagate to the other qubit. For the circuit-level simulations, we use a detailed error model that includes the propagation of Pauli~\cite{Schwerdt22} and leakage errors (for details, see Appendix~\ref{app:D}). Figure~\ref{fig:fig4}(a) shows the logical error rates of ground (dashed) and metastable (solid) qubits for various code distances and two-qubit-gate error rates. Here, \mbox{$p_g = p_m$} as in case \rom{1}.
As expected from Ref.~\cite{Wu22}, using erasure conversion on metastable qubits significantly improves the threshold, which leads to a dramatic reduction in the logical error rates, compared to using ground qubits. The thresholds for our error model and the MWPM decoder are determined by intersections of the curves for $d = 5$, 7, 9, and 11. For the Ba$^+$ ion, which has $R_e^{(0)} = 0.7941$, the threshold improves from 1.36\% to 2.97\% when metastable qubits are used. For the Ca$^+$ ion, which has $R_e^{(0)} = 0.9509$, the threshold improves from 1.22\% to 3.42\%. In addition to a higher threshold, erasure conversion also results in a steeper decrease of the logical error rate as the physical error rate decreases. Below the threshold, the logical error rate is proportional to $[p_q^{(2q)}]^\xi$, where $\xi$ is the \textit{effective error distance}, i.e., the slope of the logical error-rate curve. For odd $d$, $\xi = (d+1)/2$ for pure Pauli errors ($R_e=0$) and $\xi=d$ for pure erasures ($R_e=1$). Therefore, $\xi$ is expected to increase from $(d+1)/2$ to $d$ as $R_e$ increases from 0 to 1~\cite{Wu22}. Indeed, Fig.~\ref{fig:fig4}(b) shows that the effective error distance of metastable qubits is larger than $(d+1)/2$, and that of Ca$^+$ is larger than that of Ba$^+$. Meanwhile, the effective error distance of ground qubits is always $(d+1)/2$, as erasure conversion is not feasible. Thus, the advantage of metastable qubits over ground qubits grows even larger as the physical error rate decreases further below the threshold. While these results are promising, the disadvantages of metastable qubits are not yet reflected. The shorter lifetimes of metastable qubits are not considered. Also, Fig.~\ref{fig:fig2} shows that in order to achieve a fixed error rate (case \rom{1}), metastable qubits require larger detuning than ground qubits, which leads to a lower Rabi frequency and a longer gate time.
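The effective error distance can be read off as the slope of the logical-error-rate curve on a log-log plot; a minimal sketch with hypothetical data points (chosen so that $\xi = (d+1)/2 = 3$ for $d = 5$):

```python
import math

# Below threshold, p_L ∝ p^xi, so xi is the log-log slope between
# two points (p, p_L) of the logical-error-rate curve.
# The data points here are hypothetical, for illustration only.
p1, pL1 = 1e-3, 4e-8
p2, pL2 = 2e-3, 3.2e-7

xi = math.log(pL1 / pL2) / math.log(p1 / p2)
# xi ≈ 3, i.e., (d+1)/2 for a distance-5 code with pure Pauli errors;
# pure erasures would instead give a slope near d = 5.
```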
\begin{figure*} \caption{The logical error rates of ground (dashed) and metastable (solid) qubits for various code distances $d$ and two-qubit-gate error rates. Idling (measurement) errors of fixed rate are added for metastable (both) qubits. The thresholds are given with respect to $p_q^{(2q)}$.} \label{fig:fig5} \end{figure*} In the following simulations, we add idling errors of metastable qubits into the error model. Specifically, for a surface-code cycle time $T$, we assume that \textit{all} metastable qubits have an additional probability of erasure $p^{\rm (idle)}(T)/4$ [see Eq.~(\ref{eq:idleerror})] before each layer of CNOT gates, where the factor of $1/4$ comes from the fact that each error-correction round consists of four layers of CNOT gates. Here, we assume \mbox{$T = 3$ ms}, considering the time required for the state-of-the-art sideband cooling~\cite{Rasmusson21}. For the Ba$^+$ (Ca$^+$) ion, the idling error rate is fixed at $p^{\rm (idle)}(T)/4 = 2.49 \times 10^{-5}$ ($6.46 \times 10^{-4}$). The QEC performance of metastable qubits for various values of the idling-error rates is discussed in Appendix~\ref{app:E}. We also add measurement errors of fixed rate $10^{-4}$, which is roughly the state-of-the-art for trapped ions~\cite{Christensen20, Ransford21}, for both ground and metastable qubits. The addition of idling and measurement errors of fixed rates reduces the thresholds only slightly, as the two-qubit-gate error dominates for the range of $p_q^{(2q)}$ simulated here. Figure~\ref{fig:fig5}(a) shows the comparison result for \mbox{case \rom{2}}. The top and bottom horizontal axes represent the two-qubit-gate error rates of the ground and metastable qubits, respectively. For each $p^{(2q)}_g$, the vertically aligned $p^{(2q)}_m$ is determined by the ratio of detunings $\Delta_m / \Delta_g$ that satisfies the conditions $\Omega_g = \Omega_m$ and $E_g = E_m$ [see Eq.~(\ref{eq:case2})].
As a result, from the lowest to highest simulated $p^{(2q)}_m$, $\Delta_m / \Delta_g$ is varied from 0.0681 to 0.152 for the Ba$^+$ ion and from 0.111 to 0.229 for the Ca$^+$ ion. Thus, each $p^{(2q)}_m$ is compared with a $p^{(2q)}_g$ that is an order of magnitude lower. In this case, for both Ba$^+$ and Ca$^+$ ions, ground qubits outperform metastable qubits, despite having lower thresholds. The effects of idling and measurement errors are negligible in the range of two-qubit-gate errors simulated here. Figure~\ref{fig:fig5}(b) shows the comparison result for case \rom{3}. Here, $p^{(2q)}_g$ and $p^{(2q)}_m$ that are vertically aligned correspond to the same detuning ($\Delta_g = \Delta_m$). Thus, each $p^{(2q)}_m$ is compared with a $p^{(2q)}_g$ that is at most two times lower, which can be overcome by the improvement in the threshold due to erasure conversion. For both Ba$^+$ and Ca$^+$ ions, metastable qubits outperform ground qubits. The advantage of having a larger effective error distance shows up especially for smaller gate-error rates and larger code distances. The advantage of metastable qubits is greater for Ca$^+$, due to the larger erasure-conversion rate. Again, the effects of idling and measurement errors are negligible. For case \rom{3}, to achieve both $\Omega_g = \Omega_m$ and $\Delta_g = \Delta_m$, from the lowest to the highest simulated error rates, $E_m/E_g$ is varied from 3.83 to 2.56 for the Ba$^+$ ion and from 3.00 to 2.09 for the Ca$^+$ ion [see Eq.~(\ref{eq:case3})]. Therefore, in order to achieve the advantage shown in Fig.~\ref{fig:fig5}(b), metastable qubits require higher laser power than ground qubits (for typical values of the laser power, see Appendix~\ref{app:C}). We emphasize that it is reasonable to assume that metastable qubits allow higher laser power for gate operations, as material loss due to lasers is less severe for a longer laser wavelength. 
In reality, the advantage of metastable qubits may depend on the achievable laser powers. \section{Discussion and outlook} \label{sec:sec6} \subsection{Comparison with the Rydberg-atom platform} As our erasure-conversion scheme on metastable trapped-ion qubits is motivated by that on metastable Rydberg-atom qubits~\cite{Wu22}, in this section we compare the two platforms and highlight what is unique about erasure conversion on trapped-ion qubits. The two-qubit gates on Rydberg-atom qubits are performed by coupling the qubit state $\ket{1}$ to the Rydberg state $\ket{r}$ with Rabi frequency $\Omega$. The van der Waals interaction between the two atoms prevents them from being simultaneously excited to $\ket{r}$, a phenomenon known as the Rydberg blockade. This imprints a phase on the two-qubit state $\ket{11}$ that differs from the phase acquired by $\ket{01}$ and $\ket{10}$, which, if carefully controlled, leads to a fully entangling two-qubit gate~\cite{Jaksch00, Lukin01}. The fundamental sources of two-qubit-gate errors are the spontaneous decay of the Rydberg states and the finite Rydberg-blockade strength~\cite{Zhang12, Graham19}. It has been argued that the effects of the finite Rydberg-blockade strength can be compensated by tuning the laser parameters~\cite{Levine19}. Thus, we focus only on the spontaneous decay of the Rydberg states. This is analogous to the fact that the spontaneous decay of the excited $P$ states is the fundamental source of errors for laser-based gates on trapped ions. For both platforms, the state-of-the-art gates are dominated by other technical sources of errors in the experimental system~\cite{Wang20, Cetina22, Kang23ff, Graham19, Levine19}.
The crucial difference between the two platforms is that for trapped ions, the $P$ states are only virtually occupied as the lasers are far detuned from the transition to the $P$ states, while for Rydberg atoms, the Rydberg states need to be occupied for a sufficiently long time in order to acquire the entangling phases. Thus, for Rydberg atoms, the gate-error rate due to the spontaneous decay is reduced by minimizing the time that the atoms spend in the Rydberg states, denoted as $t_R$. The state-of-the-art method directly minimizes the error rate by finding the $t_R$-optimal pulse of length \mbox{$t_{\rm gate} > t_R$} using quantum optimal control techniques~\cite{Jandura22Time}. The minimal two-qubit-gate error rate is given by \begin{equation} \label{eq:Rydberg} p^{(2q)} = \gamma_R t_R = 2.947 \times \frac{\gamma_R}{\Omega_{\rm max}}, \end{equation} where $\gamma_R$ is the decay rate of the Rydberg states and $\Omega_{\rm max}$ is the maximal Rabi frequency of the time-optimal pulse~\cite{Jandura22Time}. Using \mbox{$1/\gamma_R = 540$ $\mu$s} and \mbox{$\Omega_{\rm max} = 10$ MHz} as suggested by Ref.~\cite{Jandura22Time}, the error rate is $8.7 \times 10^{-5}$, which is on par with the minimum two-qubit-gate error rate of ground Ba$^+$ ion qubits ($1.5 \times 10^{-4}$) over $\Delta_g \in [0, \omega_F]$. We note that with limited laser power, the lower bound of the two-qubit-gate error rate of the Rydberg atoms is determined by the maximum achievable $\Omega_{\rm max}$. For trapped ions, the error rate of ground (metastable) qubits can in principle be reduced to an arbitrarily small value by increasing the detuning from the $P_{1/2}$ ($P_{3/2}$) manifold~\cite{Moore23}, at the cost of a lower Rabi frequency. In Eq.~(\ref{eq:Rydberg}), the gate-error rate decreases as the Rabi frequency increases. Meanwhile, for trapped ions, the gate-error rate due to the spontaneous scattering does not explicitly depend on the Rabi frequency (see Appendix~\ref{app:B}).
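Plugging the quoted numbers into Eq.~(\ref{eq:Rydberg}) reproduces the error rate above; a minimal check, assuming the angular-frequency convention $\Omega_{\rm max} = 2\pi \times 10$ MHz:

```python
import math

# Minimal Rydberg two-qubit-gate error of the time-optimal pulse,
# p^(2q) = gamma_R * t_R = 2.947 * gamma_R / Omega_max.
gamma_R = 1.0 / 540e-6            # Rydberg decay rate (1/lifetime), s^-1
omega_max = 2 * math.pi * 10e6    # maximal Rabi frequency (angular), s^-1
p_2q = 2.947 * gamma_R / omega_max
# p_2q ≈ 8.7e-5, as quoted in the text.
```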
Instead, as the detuning $\Delta_q$ decreases, both the Rabi frequency and the gate-error rate increase. This trade-off is central to cases \rom{2} and \rom{3} of our comparison between ground and metastable trapped-ion qubits. Also, as noted in Sec.~\ref{subsec:2D}, in the presence of technical errors that increase with the gate time, decreasing the detuning $\Delta_m$ of metastable qubits converts the technical errors into erasures (and a small amount of undetected leakage). This is a method of erasure conversion that is unique to the trapped-ion platform. Exploiting this trade-off for minimizing the \textit{logical} error rate is an opportunity for metastable trapped-ion qubits. For both platforms, it is desirable to increase the Rabi frequency by using higher laser power. This reduces the technical errors for trapped ions, and both the technical and the fundamental errors for Rydberg atoms. For trapped ions, the erasure conversion of metastable qubits is performed at the cost of a higher laser-power requirement for achieving a fixed Rabi frequency, as the inherent matrix element of the transition from the metastable states to the $P$ states is weaker than that from the ground states. For Rydberg alkaline-earth-like atoms, the trade-off is more complicated, as the Rydberg excitation from the ground qubit requires two-photon transitions via $^3P_1$~\cite{Ma22} but only a single photon from the metastable $^3P_0$ qubit state~\cite{Wu22}. The complication is that if the time for state preparation followed by Rydberg excitation is included, the metastable qubit requires more laser power to create the Rydberg excitation in the same amount of time. However, in the active processing of quantum information, state preparation is rare, so the metastable qubit uses the $^1S \rightarrow {}^3P$ laser less often and therefore less laser power overall.
A more careful comparison of the laser-power requirements between ground and metastable Rydberg-atom qubits is beyond the scope of this paper. \subsection{Outlook} While the simulation results in this work may serve as a proof of principle, they have several limitations that lead to future directions. First, undetected-leakage errors are simulated as depolarizing errors. In reality, the fault-tolerant handling of leakage errors requires significant overhead, such as leakage-reducing operations~\cite{Wu02, Byrd04, Byrd05} and circuits~\cite{Fowler13, Ghosh15, Suchara15, Brown18, Brown19, Brown20}. In particular, the trade-offs of using leakage-reducing circuits on ground trapped-ion qubits have been discussed in Refs.~\cite{Brown18, Brown19}. After implementing such overhead, the effects of undetected-leakage errors are equivalent to those of depolarizing errors of a larger rate, in terms of the threshold and error distances~\cite{Brown18, Brown19}; thus, the effects of undetected leakage in our simulations may be considered as a lower bound on the actual effects. When the cost of handling leakage is considered, we expect the advantage of using metastable qubits to be significantly greater, as undetected leakage is the dominant type of error for ground qubits but not for metastable qubits equipped with erasure conversion. Second, we do not consider errors due to miscalibration of the physical parameters, which may cause overrotation errors. This is important because (i) overrotation errors are often larger than stochastic errors for state-of-the-art two-qubit gates~\cite{Cetina22, Kang23ff} and (ii) calibrating physical parameters to high precision may require a large number of shots and a long experiment time~\cite{Kang23mode}. Notably, overrotation errors can be converted to erasures using certified quantum gates on metastable qubits~\cite{Campbell20}.
In this gate scheme, auxiliary states outside the qubit subspace are used, such that overrotation causes residual occupation of the auxiliary states, which can be detected by optical pumping. While Ref.~\cite{Campbell20} considers a heralded gate of success probability smaller than one, from the perspective of QEC, this is essentially erasure conversion. More work needs to be done on generalizing certified quantum gates to other encodings of qubit states and to more commonly used two-qubit-gate schemes such as the MS scheme. Finally, we only consider the examples of $D_{5/2}$ metastable qubits, which have a limited lifetime (see Table~\ref{tab:coeffs}). Meanwhile, the metastable qubit of the Yb$^+$ ion encoded in the $F_{7/2}^o$ manifold is even more promising, as its lifetime ranges from several days to years~\cite{Allcock21}. Laser-based gate operations on such a metastable qubit may use excited states that are more ``exotic'' than the $P$ states. We hope that this work motivates future research on utilizing these exotic states, as well as on measuring their properties, such as their decay rates and branching fractions. \section{Conclusion} \label{sec:sec7} In this paper, we show that trapped-ion metastable qubits equipped with erasure conversion can outperform ground qubits in fault-tolerant logical memory using surface codes. Even when the Rabi frequency is fixed between ground and metastable qubits, the logical error rates of the metastable qubits are lower when reasonably higher laser power is allowed (such that the detuning from the $P_{3/2}$ manifold is the same). We hope that this paper motivates further research in using metastable trapped-ion qubits for scalable and fault-tolerant quantum computing~\cite{Allcock21}. Also, the methodology of using our detailed error model for comparing ground and metastable qubits may be applied back to the Rydberg-atom platform~\cite{Wu22}. \begin{acknowledgments} M.K.
thanks Craig Gidney for his advice on using STIM and thanks Shilin Huang for helpful discussions. W.C.C. acknowledges support from NSF grant no.\ PHY-2207985 and ARO grant no.\ W911NF-20-1-0037. M.K. and K.R.B acknowledge support from the Office of the Director of National Intelligence, Intelligence Advanced Research Projects Activity through ARO grant no. W911NF-16-1-0082 and the NSF-sponsored Quantum Leap Challenge Institute for Robust Quantum Simulation grant no.\ OMA-2120757. \end{acknowledgments} \appendix \section{Derivations of $k_q$ and $\alpha_f$} \label{app:A} In this appendix we calculate the coefficients $k_q$ and $\alpha_f$, introduced in (\ref{eq:mu}), (\ref{eq:gamma}), and (\ref{eq:alphaf}), using the Wigner $3j$ and $6j$ symbols. We mainly use two relations. First is the relation between the hyperfine transition element and the fine-structure-level transition element, given by~\cite{Zare88} \begin{widetext} \begin{align} \langle L_a, J_a ; F_a, M_a | T^{(1)}_\lambda(\vec{d}) | L_b J_b ; F_b, M_b \rangle &= (-1)^{F_a - M_a + J_a + I + F_b + 1} \sqrt{(2F_a+1)(2F_b+1)} \begin{pmatrix} F_a & 1 & F_b \\ -M_a & s & M_b \end{pmatrix} \nonumber \\ &\quad \times \begin{Bmatrix} J_a & F_a & I \\ F_b & J_b & 1 \end{Bmatrix} \langle L_a, J_a || T^{(1)}(\vec{d}) || L_b, J_b \rangle, \label{eq:ME1} \end{align} where $\lambda$ is the photon polarization. Next is the relation between the fine-structure-level transition element and the orbital transition element, given by~\cite{Zare88} \begin{equation} \langle L_a, J_a || T^{(1)}(\vec{d}) || L_b, J_b \rangle = (-1)^{(L_a + \frac{1}{2} + J_b + 1)} \sqrt{(2J_a+1)(2J_b+1)} \begin{Bmatrix} L_a & J_a & S \\ J_b & L_b & 1 \end{Bmatrix} \langle L_a || T^{(1)}(\vec{d}) || L_b \rangle, \label{eq:ME2} \end{equation} where $S = 1/2$ is the electron spin. We first calculate $k_q$. 
The largest dipole-matrix element of transition between a state in manifold $q$ and a $P$ state is given by \begin{equation} \mu_q := \langle 1, \frac{3}{2} ; I + \frac{3}{2}, I + \frac{3}{2} | T^{(1)}_{\frac{3}{2} - J_q}(\vec{d}) | L_q, J_q ; I + J_q, I + J_q \rangle, \end{equation} where $I$ is the nuclear spin, which we assume to be a half integer. Applying (\ref{eq:ME1}) and (\ref{eq:ME2}) sequentially, we obtain \begin{align} \mu_q &= (-1)^{J_q + 2I + \frac{1}{2}} \sqrt{(2I + 4) (2I + 2J_q + 1)} \begin{pmatrix} I + \frac{3}{2} & 1 & I + J_q \\ -I - \frac{3}{2} & \frac{3}{2} - J_q & I + J_q \end{pmatrix} \nonumber \\ &\quad \times \begin{Bmatrix} \frac{3}{2} & I + \frac{3}{2} & I \\ I + J_q & J_q & 1 \end{Bmatrix} \langle L=1, J=\frac{3}{2} || T^{(1)}(\vec{d}) || L_q, J_q \rangle \nonumber \\ &= (-1)^{2J_q + 2I + 1} \sqrt{(2I + 4) (2I + 2J_q + 1)(4)(2J_q+1)} \nonumber \\ &\quad \times \begin{pmatrix} I + \frac{3}{2} & 1 & I + J_q \\ -I - \frac{3}{2} & \frac{3}{2} - J_q & I + J_q \end{pmatrix} \begin{Bmatrix} \frac{3}{2} & I + \frac{3}{2} & I \\ I + J_q & J_q & 1 \end{Bmatrix} \begin{Bmatrix} 1 & \frac{3}{2} & \frac{1}{2} \\ J_q & L_q & 1 \end{Bmatrix} \langle L=1 || T^{(1)}(\vec{d}) || L_q \rangle. 
\label{eq:muq_app} \end{align} For ground qubits ($q=g$), inserting $L_q=0$ and $J_q=\frac{1}{2}$ to (\ref{eq:muq_app}) gives \begin{align} \mu_g &= (-1)^{2I} \sqrt{(2I+4)(2I+2)(4)(2)} \nonumber \\ &\quad \times \begin{pmatrix} I + \frac{3}{2} & 1 & I + \frac{1}{2} \\ -I - \frac{3}{2} & 1 & I + \frac{1}{2} \end{pmatrix} \begin{Bmatrix} \frac{3}{2} & I + \frac{3}{2} & I \\ I + \frac{1}{2} & \frac{1}{2} & 1 \end{Bmatrix} \begin{Bmatrix} 1 & \frac{3}{2} & \frac{1}{2} \\ \frac{1}{2} & 0 & 1 \end{Bmatrix} \langle L=1 || T^{(1)}(\vec{d}) || L_g=0 \rangle \nonumber \\ &= (-1)^{2I} \sqrt{(2I+4)(2I+2)(4)(2)} \times \frac{1}{\sqrt{2I+4}} \frac{(-1)^{2I+1}}{\sqrt{(2I+2)(4)}} \frac{(-1)}{\sqrt{6}} \langle L=1 || T^{(1)}(\vec{d}) || L_g=0 \rangle \nonumber \\ &= \frac{1}{\sqrt{3}} \langle L=1 || T^{(1)}(\vec{d}) || L_g=0 \rangle. \end{align} Therefore, from (\ref{eq:mu}), we have $k_g = 1/3$. Similarly, for metastable qubits ($q = m$), inserting $L_q=2$ and $J_q=5/2$ to (\ref{eq:muq_app}) gives \begin{align} \mu_m &= (-1)^{2I} \sqrt{(2I+4)(2I+6)(4)(6)} \nonumber \\ &\quad \times \begin{pmatrix} I + \frac{3}{2} & 1 & I + \frac{5}{2} \\ -I - \frac{3}{2} & -1 & I + \frac{5}{2} \end{pmatrix} \begin{Bmatrix} \frac{3}{2} & I + \frac{3}{2} & I \\ I + \frac{5}{2} & \frac{5}{2} & 1 \end{Bmatrix} \begin{Bmatrix} 1 & \frac{3}{2} & \frac{1}{2} \\ \frac{5}{2} & 2 & 1 \end{Bmatrix} \langle L=1 || T^{(1)}(\vec{d}) || L_m=2 \rangle \nonumber \\ &= (-1)^{2I} \sqrt{(2I+4)(2I+6)(4)(6)} \times \frac{1}{\sqrt{2I+6}} \frac{(-1)^{2I+1}}{\sqrt{(2I+4)(6)}} \frac{(-1)}{\sqrt{20}} \langle L=1 || T^{(1)}(\vec{d}) || L_m=2 \rangle \nonumber \\ &= \frac{1}{\sqrt{5}} \langle L=1 || T^{(1)}(\vec{d}) || L_m=2 \rangle. \end{align} From (\ref{eq:mu}), we have $k_m = 1/5$. Now we calculate $\alpha_f$. 
Before we start, we introduce two useful identities for Wigner $3j$ and $6j$ symbols: \begin{gather} \sum_{m_1, m_2} \begin{pmatrix} j_1 & j_2 & j_3 \\ m_1 & m_2 & m_3 \end{pmatrix} \begin{pmatrix} j_1 & j_2 & j'_3 \\ m_1 & m_2 & m'_3 \end{pmatrix} = \frac{\delta_{j_3, j'_3} \delta_{m_3, m'_3}}{2j_3+1}, \label{eq:sumrule1} \\ \sum_{j_3} (2j_3+1) \begin{Bmatrix} j_1 & j_2 & j_3 \\ m_1 & m_2 & m_3 \end{Bmatrix} \begin{Bmatrix} j_1 & j_2 & j_3 \\ m_1 & m_2 & m'_3 \end{Bmatrix} = \frac{\delta_{m_3, m'_3}}{2m_3+1}. \label{eq:sumrule2} \end{gather} We start with the left-hand side of the first equation of (\ref{eq:gamma}) and apply the Wigner-Eckart theorem, which leads to \begin{align} \sum_{F_f, M_f} |\langle L_e, J_e; F_e, M_e | \vec{d} | L_f, J_f ; F_f, M_f \rangle |^2 &= \sum_{F_f} \left| \langle L_e, J_e; F_e || T^{(1)}(\vec{d}) || L_f, J_f ; F_f \rangle \right|^2 \sum_{M_f} \left| \begin{pmatrix} F_f & 1 & F_e \\ -M_f & M_f - M_e & M_e \end{pmatrix} \right|^2 \nonumber \\ &= \sum_{F_f} \left| \langle L_e, J_e; F_e || T^{(1)}(\vec{d}) || L_f, J_f ; F_f \rangle \right|^2 \sum_{M_f, s} \left| \begin{pmatrix} F_f & 1 & F_e \\ -M_f & s & M_e \end{pmatrix} \right|^2 \nonumber \\ &= \frac{1}{2F_e+1} \sum_{F_f} \left| \langle L_e, J_e; F_e || T^{(1)}(\vec{d}) || L_f, J_f ; F_f \rangle \right|^2, \end{align} where the last equality uses (\ref{eq:sumrule1}). 
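These identities are standard orthogonality relations and are easy to spot-check numerically; a small sketch using SymPy's Wigner-symbol routines (the particular angular momenta are arbitrary):

```python
from sympy import Rational
from sympy.physics.wigner import wigner_3j

# Check Eq. (eq:sumrule1) for j1 = j2 = 1, j3 = j3' = 2, m3 = m3' = 0:
# the sum over m1, m2 of the squared 3j symbol equals 1/(2*j3 + 1).
j1, j2, j3, m3 = 1, 1, 2, 0
total = Rational(0)
for m1 in range(-j1, j1 + 1):
    m2 = -m1 - m3  # all other terms vanish by the m-selection rule
    if abs(m2) <= j2:
        total += wigner_3j(j1, j2, j3, m1, m2, m3) ** 2

assert total == Rational(1, 2 * j3 + 1)  # exactly 1/5
```

SymPy returns exact symbolic values, so the check is an exact rational identity rather than a floating-point comparison.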
Applying (\ref{eq:ME2}) with $J \rightarrow F$, $L \rightarrow J$, and $S \rightarrow I$ gives \begin{align} \sum_{F_f, M_f} |\langle L_e, J_e; F_e, M_e | \vec{d} | L_f, J_f ; F_f, M_f \rangle |^2 &= \frac{1}{2F_e+1} \sum_{F_f} (2F_e+1)(2F_f+1) \left| \begin{Bmatrix} J_e & F_e & I \\ F_f & J_f & 1 \end{Bmatrix} \right|^2 \left| \langle L_e, J_e || T^{(1)}(\vec{d}) || L_f, J_f \rangle \right|^2 \nonumber \\ &= \left| \langle L_e, J_e || T^{(1)}(\vec{d}) || L_f, J_f \rangle \right|^2 \sum_{F_f} (2F_f+1) \left| \begin{Bmatrix} J_e & F_e & I \\ F_f & J_f & 1 \end{Bmatrix} \right|^2 \nonumber\\ &= \frac{1}{2J_e+1} \left| \langle L_e, J_e || T^{(1)}(\vec{d}) || L_f, J_f \rangle \right|^2, \end{align} where the last equality uses (\ref{eq:sumrule2}). Applying (\ref{eq:ME2}) gives \begin{align} \sum_{F_f, M_f} |\langle L_e, J_e; F_e, M_e | \vec{d} | L_f, J_f ; F_f, M_f \rangle |^2 &= (2J_f+1) \left|\begin{Bmatrix} L_e & J_e & \frac{1}{2} \\ J_f & L_f & 1 \end{Bmatrix} \right|^2 \left| \langle L_e || T^{(1)}(\vec{d}) || L_f \rangle \right|^2. \end{align} Therefore, from (\ref{eq:gamma}), we obtain \begin{equation} \alpha_{e,f} = (2J_f+1) \left|\begin{Bmatrix} L_e & J_e & \frac{1}{2} \\ J_f & L_f & 1 \end{Bmatrix} \right|^2. \end{equation} Finally, inserting $L_e = 1$ and $J_e = 3/2$ gives the values of $\alpha_f$ shown in Table~\ref{tab:ka}. \end{widetext} \section{Derivations of two-qubit-gate error rates} \label{app:B} In this appendix we derive the various types of two-qubit-gate error rates in Fig.~\ref{fig:fig2}. Specifically, we show how the proportionality constants $C_{i,j,J}$ in (\ref{eq:Gammaij}) are calculated, which leads to equations for the error rates and the erasure-conversion rate.
Generalizing the notation of Ref.~\cite{Uys10}, we define the scattering amplitude from $\ket{i}$ in manifold $q$ to $\ket{j}$ in manifold $f$ via excited state $\ket{J}$ in manifold $e$ as~\cite{Cohen98} \begin{equation} A^{i \rightarrow j}_{J,\lambda} = -\frac{b_\lambda \langle j | \vec{d} | J \rangle \langle J | \vec{d} | i \rangle}{\Delta_{e,q} \mu_f \mu_q}, \end{equation} where $b_\lambda$ is the normalized amplitude of the polarization component $\hat{\epsilon}_\lambda$ of the Raman beam ($\lambda = 1, 0, -1$). Note that $A^{i \rightarrow j}_{J,\lambda} \neq 0$ only if the magnetic quantum number of $\ket{J}$ is equal to that of $\ket{i}$ plus $\lambda$. As explained in Sec.~\ref{subsec:2B}, phase-flip errors are caused by the effective Rayleigh scattering, and bit-flip and leakage errors are caused by the Raman scattering. The scattering rates of various types can be calculated using the Kramers-Heisenberg formula. First, the effective Rayleigh scattering rate during the qubit's Raman transition is given by~\cite{Uys10} \begin{equation}\label{eq:GammaZ} \Gamma^{(z)} = k_q g_q^2 \frac{\gamma'_q}{\alpha_q} \sum_\lambda \left(\sum_J A^{1 \rightarrow 1}_{J,\lambda} - \sum_{J'} A^{0 \rightarrow 0}_{J',\lambda} \right)^2, \end{equation} where 0 and 1 in the superscript denote the qubit states. 
Next, the rate of Raman scattering that leads to bit-flip errors is given by \cite{Uys10, Ozeri07} \begin{equation}\label{eq:GammaXY} \Gamma^{(xy)} = k_q g_q^2 \frac{\gamma'_q}{\alpha_q} \sum_\lambda \left[ \bigg(\sum_J A^{0 \rightarrow 1}_{J,\lambda} \bigg)^2 + \bigg( \sum_{J'} A^{1 \rightarrow 0}_{J',\lambda} \bigg)^2 \right], \end{equation} and the rate of Raman scattering that leads to leakage errors is given by \begin{equation}\label{eq:GammaL} \Gamma^{(l)} = k_q g_q^2 \sum_{j \neq 0,1} \frac{\gamma'_f}{\alpha_f} \sum_\lambda \left[ \bigg(\sum_J A^{0 \rightarrow j}_{J,\lambda} \bigg)^2 + \bigg( \sum_{J'} A^{1 \rightarrow j}_{J',\lambda} \bigg)^2 \right], \end{equation} where $j \neq 0,1$ denotes that $\ket{j}$ is not a qubit state. Finally, for $D_{5/2}$ metastable qubits with erasure conversion, the rate of Raman scattering that leads to erasures is given by \begin{equation}\label{eq:GammaE} \Gamma^{(e)} = k_q g_q^2 \sum_{j \notin D_{5/2}} \frac{\gamma'_f}{\alpha_f} \sum_\lambda \left[ \bigg(\sum_J A^{0 \rightarrow j}_{J,\lambda} \bigg)^2 + \bigg( \sum_{J'} A^{1 \rightarrow j}_{J',\lambda} \bigg)^2 \right], \end{equation} where $j \notin D_{5/2}$ denotes that $\ket{j}$ is not in the $D_{5/2}$ manifold. Now we perform the sums in (\ref{eq:GammaZ})-(\ref{eq:GammaE}). For notational convenience, we define the detuning-dependent branching fractions~\cite{Moore23} \begin{align} \label{eq:rprime} r'_1 := (1 - \Delta_q / \omega_{P_{3/2}, S_{1/2}})^3 \times r_1, \nonumber \\ r'_2 := (1 - \Delta_q / \omega_{P_{3/2}, D_{3/2}})^3 \times r_2, \\ r'_3 := (1 - \Delta_q / \omega_{P_{3/2}, D_{5/2}})^3 \times r_3, \nonumber \end{align} such that $\gamma'_f = r'_i \gamma$ for the corresponding $i$ for each manifold $f$ [see (\ref{eq:gammaprimef})]. Also, we assume that the Raman beams are linearly polarized and mutually perpendicular. 
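The cubic frequency scaling of Eq.~(\ref{eq:rprime}) is simple to encode; a minimal helper (the function name is ours):

```python
# Detuning-dependent branching fraction of Eq. (eq:rprime):
# the decay rate into final manifold f scales as the cube of the
# scattered-photon frequency, r'_i = (1 - Delta_q / omega_{P3/2,f})^3 * r_i.
def r_prime(r_i, delta_q, omega_pf):
    return (1.0 - delta_q / omega_pf) ** 3 * r_i
```

At zero detuning the bare branching fraction $r_i$ is recovered, and the correction stays small as long as $\Delta_q$ is small compared to the optical frequency $\omega_{P_{3/2}, f}$.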
Then for ground qubits, the scattering rates are given by \begin{align} \Gamma^{(z)}_g &= 0, \label{eq:GammaZg} \\ \Gamma^{(xy)}_g &= \frac{2}{9} r'_1 \gamma g_g^2 \frac{\omega_F^2}{(\omega_F - \Delta_g)^2 \Delta_g^2}, \label{eq:GammaXYg} \\ \Gamma^{(l)}_g &= \frac{2}{9} \gamma \big(\frac{g_g}{\Delta_g}\big)^2 \Bigg[ r'_1 \frac{\omega_F^2}{(\omega_F - \Delta_g)^2} \nonumber \\ &\quad\quad\quad\quad + 6r'_2 \frac{\omega_F^2 - 2\omega_F\Delta_g + 6 \Delta_g^2}{(\omega_F - \Delta_g)^2} + 6r'_3 \Bigg], \label{eq:GammaLg} \end{align} where we replace the subscripts $q$ with $g$. Note that (\ref{eq:GammaZg})-(\ref{eq:GammaLg}) are valid for any value of $I$. Similarly, for metastable qubits, \begin{align} &\Gamma^{(z)}_m = c_z r'_3 \gamma \big(\frac{g_m}{\Delta_m} \big)^2, \label{eq:GammaZm}\\ &\Gamma^{(xy)}_m = c_{xy} r'_3 \gamma \big(\frac{g_m}{\Delta_m} \big)^2, \label{eq:GammaXYm}\\ &\Gamma^{(l)}_m = (c_1 r'_1 + c_2 r'_2 + c_l r'_3) \gamma \big(\frac{g_m}{\Delta_m} \big)^2, \label{eq:GammaLm}\\ &\Gamma^{(e)}_m = (c_1 r'_1 + c_2 r'_2) \gamma \big(\frac{g_m}{\Delta_m} \big)^2,\label{eq:GammaEm} \end{align} where we replace the subscripts $q$ with $m$. Here, $c_z$, $c_{xy}$, $c_1$, $c_2$, and $c_l$ are geometric coefficients determined by $I$ and $F_0$. Note that $c_{xy}$ ($c_l$) comes from the Raman-scattering rates to the $D_{5/2}$ states within (outside) the qubit subspace. Also, $c_1$ ($c_2$) comes from the Raman-scattering rates to the $S_{1/2}$ ($D_{3/2}$) states. The values of these coefficients for the metastable Ba$^+$ and Ca$^+$ qubits considered in this paper can be found in Table \ref{tab:coeffs2}. \begin{table} \caption{\label{tab:coeffs2} Values of the geometric coefficients used in (\ref{eq:GammaZm})-(\ref{eq:GammaEm}) for two metastable qubits chosen as examples.
} \begin{ruledtabular} \begin{tabular}{cccccc} Metastable qubit & $c_z$ & $c_{xy}$ & $c_1$ & $c_2$ & $c_l$ \\ \hline $^{133} {\rm Ba}^+$, $F_0=2$ & 0 & 1/75 & 2/5 & 2/5 & 1/3\\ $^{43} {\rm Ca}^+$, $F_0=5$ & 0.0035 & 7/165 & 29/55 & 29/55 & 0.3904\\ \end{tabular} \end{ruledtabular} \end{table} We note in passing that the approximations used to derive (\ref{eq:GammaZg})-(\ref{eq:GammaEm}) are valid when $|\Delta_g|$ and $|\Delta_m|$ are much larger than the hyperfine-splitting frequencies and much smaller than the laser frequency. For extremely far-detuned illumination, other effects, such as coupling to higher excited states, may need to be taken into account (see, \textit{e.g.}, Ref.~\cite{Moore23} for a more thorough treatment). The error rates of each qubit on which a two-qubit gate is applied follow straightforwardly by multiplying the scattering rates by the gate time $t_{\rm gate}$ in (\ref{eq:gatetime}). Note that as $t_{\rm gate}$ is inversely proportional to $\Omega$ in (\ref{eq:Rabiground}) and (\ref{eq:Rabimeta}), the $g_q^2$ factor cancels out. The error rates of each ground qubit are given by \begin{align} &p^{(z)}_g = 0, \\ &p^{(xy)}_g = \frac{\pi \sqrt{K}}{3 \eta} r'_1 \gamma \left|\frac{\omega_F}{(\omega_F - \Delta_g) \Delta_g} \right|, \\ &p^{(l)}_g = \frac{\pi \sqrt{K}}{3 \eta} \frac{\gamma}{|\Delta_g|} \Bigg( r'_1 \left|\frac{\omega_F}{\omega_F - \Delta_g} \right| \nonumber\\ &\quad\quad + 6r'_2 \left|\frac{\omega_F^2 - 2\omega_F\Delta_g + 6 \Delta_g^2}{(\omega_F - \Delta_g) \omega_F} \right| + 6r'_3 \left| \frac{\omega_F - \Delta_g}{\omega_F} \right| \Bigg),\\ &p_g = p^{(z)}_g + p^{(xy)}_g + p^{(l)}_g.
\end{align} Similarly, the error rates of each metastable qubit are given by \begin{align} &p^{(z)}_m = \frac{\pi \sqrt{K}}{2 \eta} \frac{c_z r'_3}{c_0} \frac{\gamma}{|\Delta_m|}, \label{eq:pmz}\\ &p^{(xy)}_m = \frac{\pi \sqrt{K}}{2 \eta} \frac{c_{xy} r'_3}{c_0} \frac{\gamma}{|\Delta_m|}, \\ &p^{(l)}_m = \frac{\pi \sqrt{K}}{2 \eta} \frac{c_1 r'_1 + c_2 r'_2 + c_l r'_3}{c_0} \frac{\gamma}{|\Delta_m|},\\ &p^{(e)}_m = \frac{\pi \sqrt{K}}{2 \eta} \frac{c_1 r'_1 + c_2 r'_2}{c_0} \frac{\gamma}{|\Delta_m|},\\ &p_m = p^{(z)}_m + p^{(xy)}_m + p^{(l)}_m. \label{eq:pm} \end{align} For metastable qubits, we define $c_3 := c_z + c_{xy} + c_l$. Then, the erasure-conversion rate defined in (\ref{eq:Re}) is given by \begin{equation}\label{eq:Re2} R_e := \frac{p^{(e)}_m}{p_m} = \frac{c_1 r'_1 + c_2 r'_2}{c_1 r'_1 + c_2 r'_2 + c_3 r'_3}, \end{equation} and the zero-detuning erasure-conversion rate defined in (\ref{eq:Re0}) is given by \begin{equation}\label{eq:Re02} R_e^{(0)} := \lim_{\frac{\Delta_m}{\omega_F} \rightarrow 0} R_e = \frac{c_1 r_1 + c_2 r_2}{c_1 r_1 + c_2 r_2 + c_3 r_3}. \end{equation} As explained in Sec.~\ref{subsec:2C}, $R_e$ is slightly larger than $R_e^{(0)}$ for nonzero $\Delta_m$, and $R_e^{(0)}$ is slightly larger than $r_1 + r_2$. This completes the derivation of the error rates and the erasure-conversion rate used in the main text. \section{Laser power estimation} \label{app:C} In this appendix we provide estimates of the typical laser power for two-qubit gates on both ground and metastable qubits. Following Ref.~\cite{Ozeri07}, the laser power required for achieving a given qubit-state Rabi frequency can be calculated.
Assuming that the laser beams are Gaussian, the electric-field amplitude $E_q$ at the center of the beam is related to the power $P_q$ of each Raman beam by \begin{equation} E_q^2 = \frac{4P_q}{\pi c \epsilon_0 w_0^2} \end{equation} where $c$ is the speed of light, $\epsilon_0$ is the vacuum permittivity, and $w_0$ is the beam waist at the ion's position~\cite{Ozeri07}. Also, from (\ref{eq:g})--(\ref{eq:gamma}), we have the relation \begin{equation} \frac{g_q^2}{\gamma_q} = \frac{3k_q \pi \epsilon_0 c^3 E_q^2}{4 \alpha_q \hbar \omega_{e,q}^3} = \frac{3k_q c^2 P_q}{\alpha_q \hbar \omega_{e,q}^3 w_0^2}. \end{equation} Then, as the qubit-state Rabi frequency is proportional to $g_q^2$, the relation between the laser-beam power and the Rabi frequency can be found. For ground qubits ($q=g$), from (\ref{eq:Rabiground}) and $\gamma_g = r_1 \gamma$, \begin{equation} P_g = \frac{\alpha_g \hbar \omega_{e,g}^3 w_0^2}{k_g c^2 r_1 \gamma \omega_F} |\Delta_g (\omega_F - \Delta_g)| \Omega_g. \end{equation} Similarly, for metastable qubits ($q=m$), from (\ref{eq:Rabimeta}) and $\gamma_m = r_3 \gamma$, \begin{equation} P_m = \frac{\alpha_m \hbar \omega_{e,m}^3 w_0^2}{3 c_0 k_m c^2 r_3 \gamma} |\Delta_m| \Omega_m. \end{equation} \begin{figure} \caption{Laser power of each Raman beam as a function of the detuning $\Delta_q$ ($q=g,m$) from the $P_{3/2}$ level.} \label{fig:fig_power} \end{figure} Figure~\ref{fig:fig_power} shows the laser power of each Raman beam for various detunings, for a typical Rabi frequency \mbox{$\Omega_q = 2\pi \times 0.25$ MHz} (which gives \mbox{$t_{\rm gate} = 20$ $\mu$s} for \mbox{$\eta=0.05$} and \mbox{$K=1$}) and beam waist \mbox{$w_0 = 20$ $\mu$m}, following Ref.~\cite{Ozeri07}. Comparing with Fig.~\ref{fig:fig2}, the detunings that yield lower gate-error rates require higher laser power. As expected from Case \rom{3}, for a fixed Rabi frequency and detuning, the metastable qubit requires an order of magnitude higher laser power when $\Delta_q$ is not too close to $\omega_F$.
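The Gaussian-beam relation above can be sanity-checked numerically. A minimal sketch: only the \mbox{$w_0 = 20$ $\mu$m} waist is taken from the text; the 10 mW beam power is an arbitrary illustrative value.

```python
import math

# Physical constants in SI units.
C = 2.99792458e8         # speed of light, m/s
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def peak_field(power, waist):
    """Peak electric field of a Gaussian beam: E^2 = 4 P / (pi c eps0 w0^2)."""
    return math.sqrt(4.0 * power / (math.pi * C * EPS0 * waist ** 2))

def beam_power(field, waist):
    """Inverse relation: P = pi c eps0 w0^2 E^2 / 4."""
    return math.pi * C * EPS0 * waist ** 2 * field ** 2 / 4.0

# Illustrative: 10 mW per Raman beam focused to a 20 um waist
# gives a peak field on the order of 1e5 V/m.
E = peak_field(10e-3, 20e-6)
```

The round trip `beam_power(peak_field(P, w0), w0)` recovers $P$, which is a quick consistency check of the two expressions.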
In practice, the laser power is often limited by material loss in mirrors and waveguides~\cite{Brown21}, which is less severe for metastable qubits than for ground qubits as the laser wavelength is longer. We expect our results to provide a guideline for future experiments on whether the achievable laser power is large enough for metastable qubits to have an advantage over ground qubits through erasure conversion. \section{Error propagation during two-qubit gates} \label{app:D} In order to accurately evaluate the circuit-level performance of the surface code, we use a detailed model of how an error on one qubit due to spontaneous scattering propagates to the other qubit during a CNOT gate. As the MS gate is a widely used native two-qubit gate for trapped ions, we first decompose the CNOT gate into MS and single-qubit gates using the circuit in Fig.~\ref{fig:cnotcircuit}~\cite{Maslov17}. As single-qubit gates are assumed to be perfect, we analyze how errors propagate during an MS gate. The errors are then transformed by the subsequent single-qubit gates according to the standard rules. \begin{figure} \caption{Circuit diagram of a CNOT gate (upper: control, lower: target) decomposed into native gates for trapped ions. Here, $RX(\theta) = \exp(-i \frac{\theta}{2} X)$.} \label{fig:cnotcircuit} \end{figure} First, a bit-flip ($X$) error due to spontaneous scattering \textit{during} an MS gate does not propagate, regardless of when during the gate it occurs. This is because an $X$ error commutes with $XX(\theta)$ for any $\theta$. Next, for the propagation of a phase-flip ($Z$) error, we start with two extreme cases: an error occurring entirely before or entirely after the MS gate. A $Z \otimes I$ phase flip occurring before the MS gate is equivalent to a $(X \otimes X) (Z \otimes I)$ error occurring after the MS gate~\cite{Schwerdt22}. A phase flip after the MS gate trivially does not propagate.
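Both propagation rules can be checked directly on $4\times 4$ matrices. A minimal sketch, using the fact that $(X\otimes X)^2 = I$, so $\exp(-i\theta\, X\otimes X) = \cos\theta\, I - i\sin\theta\, X\otimes X$; the phase-flip identity holds up to a global phase:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def xx(theta):
    """exp(-i*theta*X(x)X); since (X(x)X)^2 = I this reduces to cos/sin terms."""
    return np.cos(theta) * np.eye(4, dtype=complex) - 1j * np.sin(theta) * np.kron(X, X)

def equal_up_to_phase(a, b, tol=1e-12):
    """Compare two matrices up to a global phase."""
    i, j = np.unravel_index(np.argmax(np.abs(b)), b.shape)
    return np.allclose(a, (a[i, j] / b[i, j]) * b, atol=tol)

XI, ZI, XXop = np.kron(X, I2), np.kron(Z, I2), np.kron(X, X)
ms = xx(np.pi / 4)  # full MS gate

# An X error commutes with XX(theta) for any theta.
assert np.allclose(xx(0.3) @ XI, XI @ xx(0.3))

# A Z(x)I error before the MS gate equals (X(x)X)(Z(x)I) after it,
# up to a global phase.
assert equal_up_to_phase(ms @ ZI, XXop @ ZI @ ms)
```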
Thus, in the general case where a phase-flip error occurs \textit{during} an MS gate, we expect an additional bit-flip error on one or both of the two qubits. To find the average rate of this additional bit-flip error, we use a master-equation simulation following the method in Ref.~\cite{Schwerdt22}. Specifically, we prepare the initial state $|00\rangle$, run a perfect $XX(-\pi/4)$ gate, run an $XX(\pi/4)$ gate with a phase flip injected at time $t$ ($0 \leq t \leq t_{\rm gate}$), and then measure the population of the $\ket{1}$ state of the qubits. The $\ket{1}$-state population indicates the additional bit-flip error rate. The populations are averaged over many values of $t$, drawn from a uniform distribution over the gate duration. According to our simulation results, when a phase flip occurs on one of the qubits during an MS gate, an additional $X \otimes I$, $I \otimes X$, or $X \otimes X$ error occurs with probability $r$, $r$, and $1/2-r$, respectively, where $r=0.1349$. Finally, we consider the propagation of a leakage error. If a leakage happens at time $t < t_{\rm gate}$ after an MS gate starts, the two-qubit rotation $XX(\theta)$ is not fully performed up to $\theta = \pi/4$. Thus, the effect of a leakage can be described by the two qubits undergoing a partial $XX$ rotation, followed by the leaked qubit being traced out. To start, we describe how the partial rotation angle evolves over time. During an MS gate, a normal mode of the ion chain's collective motion is briefly excited by laser beams near-resonant with the sideband transition. Here we denote by $\delta$ the detuning from the sideband transition.
Then, the action of an MS gate up to time $t$ ($0 \leq t \leq t_{\rm gate}$) in the subspace of the two qubits is given by $\exp(-i \theta(t) X \otimes X)$, where~\cite{Wu18} \begin{equation} \theta(t) = \frac{\delta t - \sin \delta t}{\delta t_{\rm gate} - \sin \delta t_{\rm gate}} \times \frac{\pi}{4} \end{equation} such that $\theta(0) = 0$ and $\theta(t_{\rm gate}) = \pi/4$. There is an additional condition that the motional mode needs to be completely disentangled from the qubits at the end of the gate, which is satisfied when $\delta t_{\rm gate} = 2K\pi$. We choose $K=1$ as in the main text and obtain \begin{equation} \theta(t) = \frac{\pi t}{4 t_{\rm gate}} - \frac{1}{8}\sin\left(2\pi \frac{t}{t_{\rm gate}} \right). \end{equation} We note that even when one of the qubits is leaked during the gate, the $\delta t_{\rm gate} = 2K\pi$ condition guarantees that the other qubit is completely disentangled from the motional mode after the gate. Now we describe the action of tracing out the leaked qubit after an incomplete $XX$ rotation. Such a channel is complicated to write in a general form that is valid for all initial states. However, for the surface code, in the absence of errors, the syndrome qubit is always in the $\ket{0}$ state right before an MS gate is applied. If the syndrome qubit is leaked at time $t$, it can be straightforwardly shown that the channel on the data qubit is a bit-flip channel with probability $\sin^2 \theta(t)$, regardless of its initial state. If the data qubit is leaked at time $t$, the channel on the syndrome qubit is also a bit-flip channel with probability $\sin^2 \theta(t)$, where we additionally use the fact that, in the subspace of the pair of data and syndrome qubits, the data qubit is in the maximally mixed state. Therefore, for both data and syndrome qubits, a leakage in one of the qubits at time $t$ results in a bit flip with probability $\sin^2 \theta(t)$ on the other qubit.
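The boundary conditions $\theta(0)=0$ and $\theta(t_{\rm gate})=\pi/4$, and the uniform-time average of $\sin^2\theta(t)$ used in the next step, can be verified numerically. A sketch with $t_{\rm gate}$ normalized to 1:

```python
import math

def theta(t, t_gate=1.0):
    """Partial MS rotation angle for K = 1:
    theta(t) = pi*t/(4*t_gate) - sin(2*pi*t/t_gate)/8."""
    return math.pi * t / (4.0 * t_gate) - math.sin(2.0 * math.pi * t / t_gate) / 8.0

# Boundary conditions of the gate.
assert abs(theta(0.0)) < 1e-12
assert abs(theta(1.0) - math.pi / 4.0) < 1e-12

# Uniform-time average of sin^2(theta(t)) over the gate (trapezoidal rule).
N = 200_000
avg = (sum(math.sin(theta(k / N)) ** 2 for k in range(1, N)) +
       0.5 * (math.sin(theta(0.0)) ** 2 + math.sin(theta(1.0)) ** 2)) / N
# avg is approximately 0.2078
```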
Assuming again that the time at which a leakage occurs is uniformly distributed over the gate duration, the average probability of bit-flip propagation is given by \begin{equation} \frac{1}{t_{\rm gate}} \int_0^{t_{\rm gate}} \sin^2 \theta(t)\, dt = 0.2078. \end{equation} Thus, whenever one of the two qubits is leaked (and becomes a maximally mixed state in our simulations), the other qubit undergoes a bit flip with probability 0.2078. This concludes the model of error propagation during an MS gate. Then, the errors go through the single-qubit gates following the MS gate in Fig.~\ref{fig:cnotcircuit}, leading to the error model for a CNOT gate. Specifically, the rates of two-qubit Pauli errors $IX$, $IY$, ..., $ZZ$ for each CNOT gate can be calculated in terms of $p_q^{(xy)}$, $p_q^{(z)}$, $p_q^{(l)}$, and $p_q^{(e)}$ ($q = g,m$), which are fed into the circuit simulated by Stim~\cite{Gidney21}. A caveat here is that the two-qubit Pauli errors are always appended after the execution of each CNOT gate. In reality, in the case of a leakage error, the MS gate is replaced by an incomplete $XX$ rotation that directly becomes the propagated error. Thus, the two-qubit Pauli errors are ideally appended after an $RZ(\pi/2) \otimes RX(\pi/2)$ gate rather than a CNOT gate (see Fig.~\ref{fig:cnotcircuit}). However, probabilistically switching between CNOT and $RZ(\pi/2) \otimes RX(\pi/2)$ gates is challenging to simulate efficiently. We justify always appending the two-qubit Pauli errors after the CNOT gate by the following argument. The CNOT gate can be expressed as \begin{equation} \text{CNOT} = I \otimes |+\rangle \langle +| + Z \otimes |-\rangle \langle -| = |0\rangle \langle 0| \otimes I + |1\rangle \langle 1| \otimes X. \nonumber \end{equation} In our simulations, a leaked qubit is in the maximally mixed state. Thus, when the target qubit is leaked, the CNOT gate becomes either $I$ or $Z$ on the control qubit with probability 1/2 each.
This can be approximated, on average, by $RZ(\pi/2)$ on the control qubit. Similarly, when the control qubit is leaked, the CNOT gate becomes either $I$ or $X$ on the target qubit with probability 1/2 each, and this can be approximated, on average, by $RX(\pi/2)$ on the target qubit. \section{Effects of metastable qubits' idling errors on QEC} \label{app:E} Here we simulate how the idling errors of metastable qubits affect the QEC performance. Due to the finite lifetime of the $D_{5/2}$ states, metastable qubits can spontaneously decay to the $S_{1/2}$ manifold during idling. Such leakage errors can always be converted to erasures, using the erasure-conversion scheme described in Sec.~\ref{sec:sec2}. The idling-error rate is given in (\ref{eq:idleerror}), where the upper bound of the time $t$ is determined by the surface-code cycle time $T$, which can be regarded as the clock time for fault-tolerant quantum computation. We consider the metastable Ca$^+$ qubit in Table~\ref{tab:coeffs} as an example, as its lifetime of \mbox{1.16 s} is significantly shorter than the metastable Ba$^+$ qubit's \mbox{30.14 s}. In the surface-code simulations, similarly to Sec.~\ref{sec:sec5}, we assume that all qubits can be erased, each with probability $p^{\rm (idle)}(T)/4$ before each layer of CNOT gates, where the factor $1/4$ comes from the four layers of CNOT gates per cycle. Various values of the surface-code cycle time $T$, ranging from \mbox{3 ms} to \mbox{1.19 s}, are considered, such that the idling-error rate $p^{\rm (idle)}(T)/4$ varies from $6.46 \times 10^{-4}$ to $0.161$. We also add two-qubit-gate errors of fixed rate $p_m^{(2q)} = 1.14 \times 10^{-3}$ and measurement errors of fixed rate $10^{-4}$, where the former is the lowest value considered in Fig.~\ref{fig:fig4}(a) for Ca$^+$ qubits.
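Equation (\ref{eq:idleerror}) is not reproduced in this appendix; assuming the standard exponential-decay form $p^{\rm (idle)}(T) = 1 - e^{-T/\tau}$ (our assumption here, consistent with the quoted endpoint values), the per-layer rates above follow from the Ca$^+$ lifetime $\tau = 1.16$ s:

```python
import math

TAU_CA = 1.16  # lifetime of the metastable Ca+ D_{5/2} level, seconds

def idle_error_per_layer(T, tau=TAU_CA):
    """Assumed exponential-decay idling error, split over the 4 CNOT layers."""
    return (1.0 - math.exp(-T / tau)) / 4.0

p_short = idle_error_per_layer(3e-3)  # T = 3 ms   -> about 6.46e-4
p_long = idle_error_per_layer(1.19)   # T = 1.19 s -> about 0.161
```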
\begin{figure} \caption{Logical error rates of metastable Ca$^+$ qubits for various code distances $d$ and idling-error rates $p^{\rm (idle)}(T)/4$.} \label{fig:fig_loss} \end{figure} Figure~\ref{fig:fig_loss} shows the logical error rates $p_L$ for various code distances $d$ and idling-error rates $p^{\rm (idle)}(T)/4$. The threshold idling-error rate is 5.14\%, which is higher than the threshold two-qubit-gate error rate of 3.42\%, as idling errors are completely converted to erasures. A distance-$d$ code is guaranteed to correct $d-1$ idling errors per cycle, as indicated by the slopes of the logical-error curves. The top horizontal axis represents the surface-code cycle time $T$. While Fig.~\ref{fig:fig_loss} plots the logical error rates for cycle times as long as \mbox{$\sim 1$ s} in order to show the threshold, such a long cycle time is impractical for quantum computation. For relatively feasible cycle times, say, \mbox{$T < 10$ ms}, $p_L$ quickly decreases below $10^{-6}$ for $d \geq 5$. For $d=3$, $p_L$ converges to a nonzero value as the idling-error rate decreases, which indicates that the effects of the two-qubit-gate errors of fixed rate $p_m^{(2q)} = 1.14 \times 10^{-3}$ dominate those of the idling errors when \mbox{$T \lesssim 3$ ms}. Therefore, for reasonably short surface-code cycle times, we expect that the metastable qubits' idling errors have a significantly smaller impact on the QEC performance than the gate errors. For metastable qubits with longer lifetimes, such as Ba$^+$ and Yb$^+$, the idling errors will be even less significant. \end{document}
\begin{document} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{question}[theorem]{Question} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{remark}[theorem]{Remark} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{example}[theorem]{Example} \newenvironment{proof}{\noindent {\bf Proof.}}{\rule{3mm}{3mm}\par } \newcommand{{\it p.}}{{\it p.}} \newcommand{\em}{\em} \allowdisplaybreaks[4] \newtheorem {Problem} {Problem}[section] \newtheorem {Theorem} [Problem]{Theorem} \newtheorem {Lemma}[Problem]{Lemma} \newtheorem{Conjecture}[Problem]{Conjecture} \newtheorem {Corollary}[Problem]{Corollary} \newenvironment {Proof}{\noindent {\bf Proof.}}{ \ensuremath{\square}} \newcommand*{\QEDB}{ \ensuremath{\square}} \newcommand{{\it Europ. J. Combinatorics}, }{{\it Europ. J. Combinatorics}, } \newcommand{{\it J. Combin. Theory Ser. B.}, }{{\it J. Combin. Theory Ser. B.}, } \newcommand{{\it J. Combin. Theory}, }{{\it J. Combin. Theory}, } \newcommand{{\it J. Graph Theory}, }{{\it J. Graph Theory}, } \newcommand{{\it Combinatorica}, }{{\it Combinatorica}, } \newcommand{{\it Discrete Math.}, }{{\it Discrete Math.}, } \newcommand{{\it Ars Combin.}, }{{\it Ars Combin.}, } \newcommand{{\it SIAM J. Discrete Math.}, }{{\it SIAM J. Discrete Math.}, } \newcommand{{\it SIAM J. Algebraic Discrete Methods}, }{{\it SIAM J. Algebraic Discrete Methods}, } \newcommand{{\it SIAM J. Comput.}, }{{\it SIAM J. Comput.}, } \newcommand{{\it Contemp. Math. AMS}, }{{\it Contemp. Math. AMS}, } \newcommand{{\it Trans. Amer. Math. Soc.}, }{{\it Trans. Amer. Math. Soc.}, } \newcommand{{\it Ann. Discrete Math.}, }{{\it Ann. Discrete Math.}, } \newcommand{{\it J. Res. Nat. Bur. Standards} {\rm B}, }{{\it J. Res. Nat. Bur. Standards} {\rm B}, } \newcommand{{\it Congr. Numer.}, }{{\it Congr. Numer.}, } \newcommand{{\it Canad. J. Math.}, }{{\it Canad. J. 
Math.}, } \newcommand{{\it J. London Math. Soc.}, }{{\it J. London Math. Soc.}, } \newcommand{{\it Proc. London Math. Soc.}, }{{\it Proc. London Math. Soc.}, } \newcommand{{\it Proc. Amer. Math. Soc.}, }{{\it Proc. Amer. Math. Soc.}, } \newcommand{{\it J. Combin. Math. Combin. Comput.}, }{{\it J. Combin. Math. Combin. Comput.}, } \newcommand{{\it Graphs Combin.}, }{{\it Graphs Combin.}, } \title{On the spectral radius of graphs without a star forest\thanks{ This work is supported by the National Natural Science Foundation of China (Nos. 11971311, 11531001), the Montenegrin-Chinese Science and Technology Cooperation Project (No.3-12) }} \author{Ming-Zhu Chen\thanks{E-mail: [email protected]}, A-Ming Liu,\thanks{E-mail: [email protected]}\\ School of Science, Hainan University, Haikou 570228, P. R. China, \\ \and Xiao-Dong Zhang$^{\dagger}$\thanks{Corresponding Author E-mail: [email protected]} \\School of Mathematical Sciences, MOE-LSC, SHL-MAC\\ Shanghai Jiao Tong University, Shanghai 200240, P. R. China} \date{} \maketitle \begin{abstract} In this paper, we determine the maximum spectral radius and all extremal graphs for (bipartite) graphs of order $n$ without a star forest, extending Theorem~1.4 (iii) and Theorem~1.5 for large $n$. As a corollary, we determine the minimum least eigenvalue of $A(G)$ and all extremal graphs for graphs of order $n$ without a star forest, extending Corollary~1.6 for large $n$. \\ \\ {\it AMS Classification:} 05C50, 05C35, 05C83\\ \\ {\it Key words:} Spectral radius; extremal graphs; star forests; graphs; bipartite graphs \end{abstract} \section{Introduction} Let $G$ be an undirected simple graph with vertex set $V(G)=\{v_1,\dots,v_n\}$ and edge set $E(G)$, where $n$ is called the order of $G$. The \emph{adjacency matrix} $A(G)$ of $G$ is the $n\times n$ matrix $(a_{ij})$, where $a_{ij}=1$ if $v_i$ is adjacent to $v_j$, and $0$ otherwise. The \emph{spectral radius} of $G$ is the largest eigenvalue of $A(G)$, denoted by $\rho(G)$.
The least eigenvalue of $A(G)$ is denoted by $\rho_n(G)$. For $v\in V(G)$, the \emph{neighborhood} $N_G(v)$ of $v$ is $\{u: uv\in E(G)\}$ and the \emph{degree} $d_G(v)$ of $v$ is $|N_G(v)|$. We write $N(v)$ and $d(v)$ for $N_G(v)$ and $d_G(v)$, respectively, if there is no ambiguity. Denote by $\Delta(G)$ the maximum degree of $G$. Let $S_{n-1}$ be a star of order $n$. The \emph{center} of a star is the vertex of maximum degree in the star. The \emph{centers} of a star forest are the centers of the stars in the star forest. A graph $G$ is \emph{$H$-free} if it does not contain $H$ as a subgraph. For two vertex-disjoint graphs $G$ and $H$, we denote by $G\cup H$ and $G\nabla H$ the \emph{union} of $G$ and $H$, and the \emph{join} of $G$ and $H$, which is obtained by joining every vertex of $G$ to every vertex of $H$, respectively. Denote by $kG$ the union of $k$ disjoint copies of $G$. For graph notation and terminology undefined here, readers are referred to \cite{BM}. Recall that the problem of maximizing the number of edges over all graphs without fixed subgraphs is one of the cornerstones of graph theory. \begin{Problem}\label{P0} Given a graph $H$, what is the maximum number of edges of a graph $G$ of order $n$ without $H$ $?$ \end{Problem} Many instances of Problem~\ref{P0} have been solved. For example, Lidick\'{y}, Liu, and Palmer \cite{LLP} determined the maximum number of edges of graphs without a star forest when the order of the graph is sufficiently large. \begin{Theorem}\label{thm-edge}\cite{LLP} Let $F=\cup_{i=1}^k S_{d_i}$ be a star forest with $k\geq2$ and $d_1\geq\cdots \geq d_k\geq2$.
If $G$ is an $F$-free graph of sufficiently large order $n$, then $$e(G)\leq \max_{1\leq i\leq k}\bigg\{(i-1)(n-i+1)+\binom{i-1}{2}+\bigg\lfloor\frac{(d_i-1)(n-i+1)}{2}\bigg\rfloor\bigg\}.$$ \end{Theorem} In spectral extremal graph theory, a similar central problem is of the following type: \begin{Problem}\label{P1} Given a graph $H$, what is the maximum $\rho(G)$ of a graph $G$ of order $n$ without $H$$?$ \end{Problem} Many instances of Problem~\ref{P1} have been solved, for example, see \cite{Chen2019-2,Cioaba2019,Gao2019,Nikiforov2010,Tait2019,Tait2017,Zhai2020}. In addition, if $H$ is a linear forest, Problem~\ref{P1} was solved in \cite{Chen2019}. For $H=kP_3$, the bipartite version of Problem~\ref{P1} was also proved in \cite{Chen2019}. In order to state these results, we need some symbols for given graphs. Let $S_{n,h}=K_h\nabla \overline{K}_{n-h}$. Furthermore, $S^+_{n,h}=K_h \nabla (K_2\cup \overline{K}_{n-h-2})$. Let $F_{n,k}=K_{k-1}\nabla ( (p K_2)\cup K_s)$, where $n-(k-1)=2p+s$ and $0\leq s<2$. In addition, for $k\geq2$ and $d_1\geq\cdots \geq d_k\geq1$, define \begin{eqnarray*} f(k,d_1,\dots,d_k)=\frac{k^2(\sum_{i=1}^k d_i+k-2)^2(\sum_{i=1}^k 2d_i+5k-4)^{4k-2}+2(k-2)(\sum_{i=1}^kd_i)}{k-2}. \end{eqnarray*} \begin{Theorem}\label{A}\cite{Chen2019} Let $F=\cup_{i=1}^k P_{a_i}$ be a linear forest with $k\geq2$ and $a_1\geq \cdots \geq a_k\geq2$. Denote $ h=\sum \limits_{i=1}^k \lfloor\frac{a_i}{2}\rfloor-1$ and suppose that $G$ is an $F$-free graph of sufficiently large order $n$.\\ (i) If there exists an even $a_i$, then $\rho(G)\leq \rho(S_{n,h})$ with equality if and only if $G = S_{n,h}$;\\ (ii) If all $a_i$ are odd and there exists at least one $a_i>3$, then $\rho(G)\leq \rho(S^+_{n,h})$ with equality if and only if $G = S^+_{n,h}$.\\ (iii) If all $a_i$ are 3, i.e., $F= k P_3$, then $\rho(G)\leq \rho(F_{n,k}) $ with equality if and only if $G = F_{n,k}$. 
\end{Theorem} \begin{Theorem}\label{bipartite}\cite{Chen2019} Let $G$ be a $k P_3$-free bipartite graph of order $n\geq 11k-4$ with $k\geq2 $. Then $$\rho(G)\leq \sqrt{(k-1)(n-k+1)}$$ with equality if and only if $G= K_{k-1,n-k+1}$. \end{Theorem} \begin{Corollary}\label{least eigenvalue}\cite{Chen2019} Let $G$ be a $kP_3$-free graph of order $n\geq 11k-4$ with $k\geq2 $. Then $$\rho_n(G)\geq -\sqrt{(k-1)(n-k+1)}$$ with equality if and only if $G= K_{k-1,n-k+1}$. \end{Corollary} In Theorem~\ref{A}, the extremal graph for $kP_3$ differs from those for the other linear forests. Note that $kP_3$ is also a star forest $kS_2$. Motivated by Problem~\ref{P1} and Theorems~\ref{thm-edge}, \ref{A} and \ref{bipartite}, we determine the maximum spectral radius and all extremal graphs for (bipartite) graphs of order $n$ without a star forest. As a corollary, we determine the minimum least eigenvalue of $A(G)$ and all extremal graphs for graphs of order $n$ without a star forest, extending Corollary~\ref{least eigenvalue} for large $n$. The main results of this paper are stated as follows. \begin{Theorem}\label{thm1} Let $F=\cup_{i=1}^k S_{d_i}$ be a star forest with $k\geq2$ and $d_1\geq\cdots \geq d_k\geq1$. If $G$ is an $F$-free graph of order $n\geq \frac{(\sum_{i=1}^k 2d_i+5k-8)^4(\sum_{i=1}^k d_i+k-2)^4}{k-2}$, then\\ $$\rho(G)\leq \frac{k+d_k-3+\sqrt{(k-d_k-1)^2+4(k-1)(n-k+1)}}{2}$$ with equality if and only if $G=K_{k-1}\nabla H$, where $H$ is a $(d_k-1)$-regular graph of order $n-k+1$. In particular, if $d_k=2$, then $$\rho(G)\leq \rho(F_{n,k})$$ with equality if and only if $G=F_{n,k}$. \end{Theorem} {\bf Remark 1.} The extremal graph in Theorem~\ref{thm1} only depends on the number of the components of $F$ and the minimum order of the stars in $F$. \begin{Theorem}\label{thm2} Let $F=\cup_{i=1}^k S_{d_i}$ be a star forest with $k\geq2$ and $d_1\geq\cdots \geq d_k\geq1$.
If $G$ is an $F$-free bipartite graph of order $n\geq \frac{f^2(k,d_1,\dots, d_k)}{4k-8}$, then $$\rho(G)\leq \sqrt{(k-1)(n-k+1)}$$ with equality if and only if $G=K_{k-1,n-k+1}$. \end{Theorem} \begin{Corollary}\label{Cor3} Let $F=\cup_{i=1}^k S_{d_i}$ be a star forest with $k\geq2$ and $d_1\geq\cdots \geq d_k\geq1$. If $G$ is an $F$-free graph of order $n\geq \frac{f^2(k,d_1,\dots, d_k)}{4k-8}$, then $$\rho_n(G)\geq -\sqrt{(k-1)(n-k+1)}$$ with equality if and only if $G=K_{k-1,n-k+1}$. \end{Corollary} {\bf Remark 2.} For sufficiently large $n$, the extremal graphs in Theorem~\ref{thm2} and Corollary~\ref{Cor3} only depend on the number of the components of $F$. \section{Preliminary} We first give a very rough estimate of the number of edges of a graph of order $n\geq \sum_{i=1}^k d_i+k$ without a star forest. \begin{Lemma}\label{e1} Let $F=\cup_{i=1}^k S_{d_i}$ be a star forest with $k\geq2$ and $d_1\geq\cdots \geq d_k\geq1$. If $G$ is an $F$-free graph of order $n\geq \sum_{i=1}^k d_i+k$, then $$e(G)\leq\bigg(\sum_{i=1}^k d_i+2k-3\bigg)n-(k-1)\bigg(\sum_{i=1}^k d_i+k-1\bigg).$$ \end{Lemma} \begin{Proof} Let $C=\{v\in V(G):d(v)\geq \sum_{i=1}^k d_i+k-1\}$. Since $G$ is $F$-free, $|C|\leq k-1$; otherwise we could embed an $F$ in $G$ by the definition of $C$. Hence \begin{eqnarray*} e(G) &\leq& \sum_{v\in C}d(v)+ \sum_{v\in V(G)\backslash C}d(v)\\ &\leq&(n-1)|C|+(n-|C|)\bigg(\sum_{i=1}^k d_i+k-2\bigg) \\ &=& \bigg(n- \sum_{i=1}^k d_i-k+1\bigg)|C|+\bigg(\sum_{i=1}^k d_i+k-2\bigg)n\\ &\leq&(k-1)\bigg(n- \sum_{i=1}^k d_i-k+1\bigg)+\bigg(\sum_{i=1}^k d_i+k-2\bigg)n\\ &=&\bigg(\sum_{i=1}^k d_i+2k-3\bigg)n-(k-1)\bigg(\sum_{i=1}^k d_i+k-1\bigg). \end{eqnarray*} \end{Proof} \begin{Lemma}\label{spec1} Let $F=\cup_{i=1}^k S_{d_i}$ be a star forest with $k\geq2$ and $d_1\geq\cdots \geq d_k\geq1$.
Let $G$ be an $F$-free connected bipartite graph of order $n\geq\frac{d_1^2}{k-1}+k-1$ with the maximum spectral radius $\rho(G)$, and let $\mathbf x=(x_u)_{u\in V(G)}$ be a positive eigenvector of $\rho(G)$ such that $\max\{x_u:u\in V(G)\}=1$. Then $x_u\geq \frac{1}{\rho (G)}$ for all $u\in V(G)$. \end{Lemma} \begin{Proof} Set for short $\rho=\rho (G)$. Choose a vertex $w\in V(G)$ such that $x_w=1$. Since $K_{k-1,n-k+1}$ is $F$-free, we have $$\rho\geq \rho(K_{k-1,n-k+1})=\sqrt{(k-1)(n-k+1)}.$$ If $u=w$, then $x_u=1\geq \frac{1}{\rho }$. So we next suppose that $u\neq w$. We consider the following two cases. {\bf Case~1.} $u$ is adjacent to $w$. By the eigenequation of $A(G)$ at $u$, $$\rho x_u=\sum_{uv\in E(G)}x_v\geq x_w=1,$$ which implies that $$x_u\geq \frac{1}{\rho }.$$ {\bf Case~2.} $u$ is not adjacent to $w$. Let $G_1$ be the graph obtained from $G$ by deleting all edges incident with $u$ and adding an edge $uw$. Note that $uw$ is a pendant edge in $G_1$. {\bf Claim.} $G_1$ is also $F$-free. Suppose that $G_1$ contains an $F$ as a subgraph. Since $G$ is $F$-free and $G_1$ contains an $F$ as a subgraph, we have $uw\in E(F)$. Since $uw$ is a pendant edge in $G_1$, $w$ is a center of $F$ with $d_F(w)=d_j$, where $1\leq j\leq k$. Let $G_2$ be the subgraph of $G_1$ obtained by deleting $w$ and all its neighbors in $F$. Note that $G_2$ is also a subgraph of $G$. Since $G_1$ contains an $F$ as a subgraph, $G_2$ contains $\cup_{ i\neq j}S_{d_i}$ as a subgraph. By the eigenequation of $G$ at $w$, $$d(w)\geq\sum_{vw\in E(G)}x_v=\rho x_w=\rho\geq \sqrt{(k-1)(n-k+1)}\geq d_1\geq d_j.$$ This implies that $G$ contains an $F$ as a subgraph, a contradiction. By the Claim, $G_1$ is $F$-free.
Then \begin{eqnarray*} 0 &\geq& \rho(G_1)-\rho\geq \frac{\mathbf x^{\mathrm{T}}A(G_1)\mathbf x}{\mathbf x^{\mathrm{T}}\mathbf x}- \frac{\mathbf x^{\mathrm{T}} A(G)\mathbf x}{\mathbf x^{\mathrm{T}}\mathbf x}\\ &=& \frac{2}{\mathbf x^{\mathrm{T}}\mathbf x} \Big(x_ux_w-x_u\sum_{uv\in E(G)}x_v\Big)\\ &=& \frac{2x_u}{\mathbf x^{\mathrm{T}}\mathbf x} \Big(1-\rho x_u \Big), \end{eqnarray*} which implies that $$x_u\geq \frac{1}{\rho }.$$ This completes the proof. \end{Proof} \begin{Lemma}\label{spec2} Let $d \geq 1$, $k\geq1$, $n\geq \frac{(d-1)^2+(k-1)^2}{k-1}$, and let $H$ be a graph of order $n -k+1$. If $G= K_{k-1} \nabla H$ and $\Delta(H) \leq d-1$, then $$\rho(G)\leq \frac{k+d-3+\sqrt{(k-d-1)^2+4(k-1)(n-k+1)}}{2}$$ with equality if and only if $H$ is a $(d-1)$-regular graph. \end{Lemma} \begin{Proof} If $d=1$, then $G=K_{k-1}\nabla \overline{K}_{n-k+1}$. It is easy to calculate that $$\rho (K_{k-1}\nabla \overline{K}_{n-k+1})=\frac{k-2+\sqrt{(k-2)^2+4(k-1)(n-k+1)}}{2}.$$ Next suppose that $d\geq 2$. Let $u_1,u_2,\cdots,u_{k-1}$ be the vertices of $G$ corresponding to $K_{k-1}$ in the representation $G := K_{k-1} \nabla H$. Set for short $\rho=\rho(G)$ and let ${\bf x}=(x_v)_{v\in V(G)}$ be a positive eigenvector of $\rho$. By symmetry, $x_{u_1}=\cdots=x_{u_{k-1}}$. Choose a vertex $v\in V (H)$ such that $$x_v = \max_{ w\in V (H)} x_w.$$ By the eigenequation of $A(G)$ at $u_1$ and $v$, we have \begin{equation}\label{6} \begin{aligned} \rho x_{u_1} &= (k-2)x_{u_1}+\sum_{u\in V(H)}x_u\leq(k-2)x_{u_1}+(n-k+1)x_v \end{aligned} \end{equation} \begin{equation}\label{7} \begin{aligned} \rho x_{v} &\leq (k-1)x_{u_1}+\sum_{uv\in E(H)}x_u \leq(k-1)x_{u_1}+(d-1)x_v, \end{aligned} \end{equation} which imply that \begin{eqnarray*} (\rho-k+2)x_{u_1}&\leq& (n-k+1)x_v \\ (\rho-d+1)x_v &\leq& (k-1)x_{u_1}.
\end{eqnarray*} Since $$\rho> \rho(K_{k-1})= k-2$$ and $$\rho>\rho(K_{k-1,n-k+1})=\sqrt{(k-1)(n-k+1)}\geq d-1,$$ we have $$ \rho^2-(k+d-3)\rho+ (k-2)(d-1)-(k-1)(n-k+1)\leq0.$$ Hence $$\rho\leq \frac{k+d-3+\sqrt{(k-d-1)^2+4(k-1)(n-k+1)}}{2}.$$ If equality holds, then all the equalities in (\ref{6}) and (\ref{7}) hold. So $d(v)=k+d-2$ and $x_u = x_v$ for any vertex $u \in V (H)$. Since for any $u\in V(H)$, \begin{eqnarray*} \rho x_u&=& (k-1)x_{u_1}+ \sum_{uz\in E(H)}x_z \leq(k-1)x_{u_1}+ (d-1)x_v=\rho x_v, \end{eqnarray*} we have $d(u)=d(v)=d+k-2$. So $H$ is $(d-1)$-regular. \end{Proof} \section{Proof of Theorem~\ref{thm1}} Before proving Theorem~\ref{thm1}, we first prove the following important result for connected graphs without a star forest. \begin{Theorem}\label{c-thm1} Let $F=\cup_{i=1}^k S_{d_i}$ be a star forest with $k\geq2$ and $d_1\geq\cdots \geq d_k\geq1$. If $G$ is an $F$-free connected graph of order $n\geq(\sum_{i=1}^k 2d_i+5k-7)^2(\sum_{i=1}^k d_i+k-2)^2$, then $$\rho(G)\leq \frac{k+d_k-3+\sqrt{(k-d_k-1)^2+4(k-1)(n-k+1)}}{2}$$ with equality if and only if $G=K_{k-1}\nabla H$, where $H$ is a $(d_k-1)$-regular graph of order $n-k+1$. In particular, if $d_k=2$, then $$\rho(G)\leq \rho(F_{n,k})$$ with equality if and only if $G=F_{n,k}$. \end{Theorem} \begin{Proof} Let $G$ be an $F$-free connected graph of order $n$ with the maximum spectral radius. Set for short $V=V(G)$, $E=E(G)$, $A=A(G)$, and $\rho=\rho(G)$. Let $\mathbf x=(x_v)_{v\in V(G)}$ be a positive eigenvector of $\rho $ such that $$x_w=\max\{x_u:u\in V(G)\}=1.$$ Since $K_{k-1,n-k+1}$ is $F$-free, we have $$\rho\geq \rho(K_{k-1,n-k+1})=\sqrt{(k-1)(n-k+1)}. $$ Let $L=\{v\in V: x_v> \epsilon\} $ and $S=\{v\in V: x_v\leq \epsilon\} $, where $\epsilon = \frac{1}{\sum_{i=1}^k 2d_i+5k-7}$. {\bf Claim.} $|L|=k-1$. If $|L|\neq k-1$, then $|L|\geq k$ or $|L|\leq k-2$. First suppose that $|L|\geq k$.
By the eigenequation of $A$ at any vertex $u\in L$, we have $$\sum_{i=1}^kd_i+k-2\leq \frac{\sqrt{(k-1)(n-k+1)}}{\sum_{i=1}^k 2d_i+5k-7}=\sqrt{(k-1)(n-k+1)}\epsilon<\rho x_u=\sum_{uv\in E} x_v\leq d(u),$$ where the first inequality holds because $n\geq(\sum_{i=1}^k 2d_i+5k-7)^2(\sum_{i=1}^k d_i+k-2)^2$. Hence $$ d(u)\geq\sum_{i=1}^kd_i+k-1.$$ Then we can embed an $F$ in $G$ with all its centers in $L$, a contradiction. Next suppose that $|L|\leq k-2$. Then $$e(L)\leq \binom{|L|}{2}\leq \frac{1}{2}(k-2)(k-3)$$ and $$e(L,S)\leq (k-2)(n-k+2).$$ In addition, by Lemma~\ref{e1}, $$e(S)\leq e(G)\leq\bigg(\sum_{i=1}^k d_i+2k-3\bigg)n.$$ By the eigenequation of $A^2$ at $w$, we have \begin{eqnarray*} (k-1)(n-k+1) &\leq&\rho^2=\rho^2x_w=\sum\limits_{vw\in E}\sum\limits_{uv\in E}x_u\leq\sum\limits_{uv\in E}(x_u+x_v)\\ &=& \sum\limits_{uv\in E(L,S)}(x_u+x_v)+\sum\limits_{uv\in E(S)}(x_u+x_v) +\sum\limits_{uv\in E(L)}(x_u+x_v)\\ &\leq& \sum\limits_{uv\in E(L,S)}(x_u+x_v)+ 2\epsilon e(S)+2e(L)\\ &\leq& \sum\limits_{uv\in E(L,S)}(x_u+x_v)+2\epsilon\bigg(\sum_{i=1}^k d_i+2k-3\bigg)n+(k-2)(k-3). \end{eqnarray*} Hence \begin{eqnarray*} \sum\limits_{uv\in E(L,S)}(x_u+x_v)&\geq& (k-1)(n-k+1)-2\epsilon\bigg(\sum_{i=1}^k d_i+2k-3\bigg)n-(k-2)(k-3). \end{eqnarray*} On the other hand, by the definition of $L$ and $S$, we have $$\sum\limits_{uv\in E(L,S)}(x_u+x_v)\leq(1+\epsilon)e(L,S)\leq (1+\epsilon)(k-2)(n-k+2).$$ Thus $$ (1+\epsilon)(k-2)(n-k+2)\geq (k-1)(n-k+1)-2\epsilon\bigg(\sum_{i=1}^k d_i+2k-3\bigg)n-(k-2)(k-3),$$ which implies that $$\bigg(\bigg(\sum_{i=1}^k 2d_i+5k-8\bigg)\epsilon-1\bigg)n\geq\epsilon(k-2)^2-(k^2-3k+3).$$ Since $\epsilon = \frac{1}{\sum_{i=1}^k 2d_i+5k-7}$, we have \begin{eqnarray*} n &\leq&(k^2-3k+3)\bigg(\sum_{i=1}^k 2d_i+5k-8\bigg)\bigg(\sum_{i=1}^k 2d_i+5k-7\bigg)-\\ &&(k-2)^2\bigg(\sum_{i=1}^k 2d_i+5k-8\bigg) \\ &\leq& \bigg(\sum_{i=1}^k 2d_i+5k-7\bigg)^2\bigg(\sum_{i=1}^k d_i+k-2\bigg)^2, \end{eqnarray*} a contradiction. This proves the Claim.
By the Claim, $|L|=k-1$ and thus $|S|=n-k+1$. Then the subgraph $H$ induced by $S$ in $G$ is $S_{d_k}$-free. Otherwise, we could embed an $F$ in $G$ with $k-1$ centers in $L$ and one center in $S$, since $d(u)\geq\sum_{i=1}^kd_i+k-1$ for any $u\in L$, a contradiction. Now $\Delta(H)\leq d_k-1$. Note that the graph obtained from $G$ by adding all edges within $L$ and all edges with one end in $L$ and the other in $S$ is also $F$-free, and adding any such edge strictly increases the spectral radius. By the extremality of $G$, we have $G=K_{k-1}\nabla H$. By Lemma~\ref{spec2} and the extremality of $G$, it follows that $H$ is a $(d_k-1)$-regular graph and $$\rho=\frac{k+d_k-3+\sqrt{(k-d_k-1)^2+4(k-1)(n-k+1)}}{2}.$$ In particular, if $d_k=2$ then $\Delta(H)\leq 1$, i.e., $H=pK_2\cup qK_1$, where $2p+q=n-k+1$. By the extremality of $G$, $G=F_{n,k}$. This completes the proof. \end{Proof} \noindent{\bf Proof of Theorem~\ref{thm1}.} Let $G$ be an $F$-free graph of order $n$ with the maximum spectral radius. If $G$ is connected, then the result follows directly from Theorem~\ref{c-thm1}. Next we suppose that $G$ is not connected. Since $K_{k-1,n-k+1}$ is $F$-free, we have $$\rho(G)\geq \sqrt{(k-1)(n-k+1)}.$$ Let $G_1$ be a component of $G$ such that $\rho(G_1)=\rho(G)$ and $n_1=|V(G_1)|$. Then \begin{eqnarray*} n_1-1 &\geq& \rho(G_1)=\rho(G)\geq \sqrt{(k-1)(n-k+1)}\geq \sqrt{(k-2)n}\\ &\geq& \bigg(\sum_{i=1}^k 2d_i+5k-8 \bigg)^2 \bigg(\sum_{i=1}^k d_i+k-2 \bigg)^2, \end{eqnarray*} which implies that \begin{eqnarray*} n_1 &\geq& \bigg(\sum_{i=1}^k 2d_i+5k-8\bigg)^2\bigg(\sum_{i=1}^k d_i+k-2\bigg)^2+1. \end{eqnarray*} By Theorem~\ref{c-thm1} again, \begin{eqnarray*} \rho(G) =\rho(G_1) &\leq& \frac{k+d_k-3+\sqrt{(k-d_k-1)^2+4(k-1)(n_1-k+1)}}{2} \\ &<& \frac{k+d_k-3+\sqrt{(k-d_k-1)^2+4(k-1)(n-k+1)}}{2}. \end{eqnarray*} In particular, if $d_k=2$, then it follows from Theorem~\ref{c-thm1} that $$\rho(G)=\rho(G_1)= \rho (F_{n_1,k})<\rho (F_{n,k}).$$ Hence the result follows.
\QEDB Note that the extremal graph in Theorem~\ref{A}~(iii) also holds for the signless Laplacian spectral radius $q(G)$ \cite{Chen2020}. We conjecture that the extremal graph in Theorem~\ref{thm1} also holds for the signless Laplacian spectral radius $q(G)$. \begin{Conjecture} Let $F=\cup_{i=1}^k S_{d_i}$ be a star forest with $k\geq2$ and $d_1\geq\cdots \geq d_k\geq1$. If $G$ is an $F$-free graph of sufficiently large order $n$, then $$q(G)\leq \frac{n+2k+2d_k-6+\sqrt{(n+2k-2d_k-2)^2-8(k-1)(k-d_k-1)}}{2}$$ with equality if and only if $G=K_{k-1}\nabla H$, where $H$ is a $(d_k-1)$-regular graph of order $n-k+1$. In particular, if $d_k=2$, then $$q(G)\leq q(F_{n,k})$$ with equality if and only if $G=F_{n,k}$. \end{Conjecture} \section{Proofs of Theorem~\ref{thm2} and Corollary~\ref{Cor3}} Before proving Theorem~\ref{thm2} and Corollary~\ref{Cor3}, we first prove the following important result for connected bipartite graphs without a given star forest. \begin{Theorem}\label{c-thm3} Let $F=\cup_{i=1}^k S_{d_i}$ be a star forest with $k\geq2$ and $d_1\geq\cdots \geq d_k\geq1$. If $G$ is an $F$-free connected bipartite graph of order $n\geq f(k,d_1,\dots,d_k)$, then $$\rho(G)\leq \sqrt{(k-1)(n-k+1)}$$ with equality if and only if $G=K_{k-1,n-k+1}$. \end{Theorem} \begin{Proof} Let $G$ be an $F$-free connected bipartite graph of order $n$ with the maximum spectral radius. Set for short $V=V(G)$, $E=E(G)$, $A=A(G)$, and $\rho=\rho(G)$. Let $\mathbf x=(x_v)_{v\in V(G)}$ be a positive eigenvector of $\rho $ such that $$x_w=\max\{x_u:u\in V(G)\}=1.$$ Since $K_{k-1,n-k+1}$ is $F$-free, we have \begin{equation}\label{8} \rho\geq \rho(K_{k-1,n-k+1})=\sqrt{(k-1)(n-k+1)}. \end{equation} Let $L=\{v\in V: x_v> \epsilon\} $ and $S=\{v\in V: x_v\leq \epsilon\} $, where $$\frac{\sum_{i=1}^kd_i+k-2}{\sqrt{(k-1)(n-k+1)}}\leq \epsilon \leq \frac{1}{k\left(\sum_{i=1}^k 2d_i+5k-4\right)^{2k-1}}\bigg(1-\frac{\sum_{i=1}^k d_i}{n}\bigg).$$ {\bf Claim~1.} $|L|\leq k-1$. Suppose that $|L|\geq k$.
By the eigenvalue equation of $A$ at any vertex $u\in L$, we have $$\sum_{i=1}^kd_i+k-2\leq\sqrt{(k-1)(n-k+1)}\epsilon<\rho x_u=\sum_{uv\in E} x_v\leq d(u).$$ Hence $$d(u)\geq \sum_{i=1}^kd_i+k-1.$$ Then we can embed a copy of $F$ in $G$ with all centers in $L$, a contradiction. This proves Claim~1. Since $|L|\leq k-1$, we have $$e(L)\leq \binom{|L|}{2}\leq \frac{1}{2}(k-1)(k-2)$$ and $$e(L,S)\leq (k-1)(n-k+1).$$ In addition, by Lemma~\ref{e1}, $$e(S)\leq e(G)\leq\bigg(\sum_{i=1}^k d_i+2k-3\bigg)n.$$ We next show that every vertex in $L$ has large degree. {\bf Claim~2.} Let $u\in L$ and $x_u=1-\delta$. Then $$ d(u)\geq \bigg(1-\bigg(\sum_{i=1}^k 2d_i+5k-5\bigg)(\delta+\epsilon)\bigg)n.$$ Let $B_u=\{v\in V: uv\notin E\}$. We first sum the eigenvector entries over all vertices of $G$. \begin{eqnarray*} \rho \sum_{v\in V}x_v &=& \sum_{v\in V} \rho x_v=\sum_{v\in V} \sum_{vz\in E } x_z=\sum_{v\in V} d(v) x_v= \sum_{v\in L} d(v)x_v + \sum_{v\in S} d(v)x_v \\ &\leq& \sum_{v\in L} d(v)+\epsilon \sum_{v\in S} d(v)= 2e(L)+e(L,S)+\epsilon(2e(S)+e(L,S))\\ &=& 2e(L)+2\epsilon e(S)+(1+\epsilon)e(L,S), \end{eqnarray*} which implies that \begin{equation}\label{4} \sum_{v\in V}x_v\leq \frac{ 2e(L)+2\epsilon e(S)+(1+\epsilon)e(L,S)}{\rho}. \end{equation} Next we bound the sum of the eigenvector entries over the vertices in $B_u$ using Eq.~(\ref{4}) and Lemma~\ref{spec1}.
Since \begin{equation*}\label{5} \begin{aligned} \frac{1}{ \rho }|B_u|&\leq \sum_{v\in B_u}x_v\leq\sum_{v\in V(G)}x_v-\sum_{uv\in E(G)} x_v=\sum_{v\in V(G)}x_v-\rho x_u\\ &\leq \frac{ 2e(L)+2\epsilon e(S)+(1+\epsilon)e(L,S)}{\rho}-\rho x_u, \end{aligned} \end{equation*} we have \begin{eqnarray*} |B_u| &\leq& 2e(L)+2\epsilon e(S)+(1+\epsilon)e(L,S)-\rho^2x_u \\ &\leq& 2e(L)+2\epsilon e(S)+(1+\epsilon)e(L,S)-(k-1)(n-k+1)(1-\delta) \\ &\leq&(k-1)(k-2)+2\epsilon \bigg(\sum_{i=1}^k d_i+2k-3\bigg)n+(1+\epsilon)(k-1)(n-k+1)-\\ &&(k-1)(n-k+1)(1-\delta) \\ &=&\bigg(2\epsilon \bigg(\sum_{i=1}^k d_i+2k-3\bigg)+(\delta+\epsilon)(k-1)\bigg)n+(k-1)(k-2)-(\delta+\epsilon)(k-1)^2\\ &\leq& \bigg(\sum_{i=1}^k 2d_i+4k-6+(k-1)+1\bigg)(\delta+ \epsilon)n\\ &=& \bigg(\sum_{i=1}^k 2d_i+5k-6\bigg)(\delta+ \epsilon)n, \end{eqnarray*} where the second-to-last inequality holds since $(k-1)(k-2)\leq \epsilon n<(\delta+\epsilon)n$ by the definition of $\epsilon$ and $n$. Hence $$d(u)\geq n-1-\bigg(\sum_{i=1}^k 2d_i+5k-6\bigg)(\delta+ \epsilon)n\geq\bigg(1-\bigg(\sum_{i=1}^k 2d_i+5k-5\bigg)(\delta+\epsilon)\bigg)n.$$ This completes Claim~2. {\bf Claim~3.} Let $ 1 \leq s<k-1 $. Suppose that $X=\{v\in V:x_v\geq 1-\eta ~\text{and} ~d(v)\geq(1 - \eta)n\} $ is a set of $s$ vertices.
Then there exists a vertex $u\in L\backslash X$ such that $$ x_{u}\geq 1 - \bigg(\sum_{i=1}^k2d_i+5k-5\bigg)^2(\eta + \epsilon)$$ and $$d(u)\geq\bigg(1 - \bigg(\sum_{i=1}^k2d_i+5k-5\bigg)^2(\eta + \epsilon)\bigg)n .$$ By the eigenvalue equation of $A^2$ at $w$, we have \begin{eqnarray*} \rho^2&=&\rho^2x_w=\sum\limits_{vw\in E}\sum\limits_{uv\in E}x_u\leq\sum\limits_{uv\in E}(x_u+x_v)\\ &=& \sum\limits_{uv\in E(S)}(x_u+x_v) +\sum\limits_{uv\in E(L)}(x_u+x_v)+\sum\limits_{uv\in E(L,S)}(x_u+x_v)\\ &\leq& 2\epsilon e(S)+2e(L)+\sum\limits_{uv\in E(L,S)}(x_u+x_v)\\ & \leq& 2\epsilon e(S)+2e(L)+\epsilon e(L,S)+\sum_{\substack{ uv\in E( L\backslash X,S)\\u \in L\backslash X }}x_u+\sum_{\substack{uv\in E(L\cap X,S)\\u\in L\cap X}}x_u, \end{eqnarray*} which implies that \begin{eqnarray*} &&\sum_{\substack{uv\in E(L\backslash X,S)\\ u\in L\backslash X}}x_u\\ &\geq& \rho^2- 2\epsilon e(S)-2e(L)-\epsilon e(L,S)-\sum_{\substack{uv\in E( L\cap X,S)\\u\in L\cap X}}x_u\\ &\geq&(k-1)(n-k+1)-2\epsilon\bigg (\sum_{i=1}^k d_i+2k-3\bigg)n-(k-1)(k-2)-\\ &&\epsilon(k-1)(n-k+1)-sn\\ &=&\bigg(k-1-s-\epsilon \bigg(\sum_{i=1}^k2d_i+5k-7\bigg)\bigg)n-(k-1)(2k-3)+\epsilon(k-1)^2\\ &\geq&\bigg(k-1-s-\epsilon \bigg(\sum_{i=1}^k2d_i+5k-7\bigg)\bigg)n-\epsilon n\\ &=&\bigg(k-1-s-\epsilon \bigg(\sum_{i=1}^k2d_i+5k-6\bigg)\bigg)n, \end{eqnarray*} where the last inequality holds since $(k-1)(2k-3)\leq \epsilon n$ by the definition of $\epsilon$ and $n$. In addition, \begin{eqnarray*} e(L\backslash X,S)&=& e(L,S)-e(L\cap X, S) \\ &\leq & (k-1)(n-k+1)-s(1-\eta)n+\binom{s}{2} \\ &\leq&(k-1-s(1-\eta))n-\bigg((k-1)^2-\binom{k-2}{2} \bigg)\\ &\leq&(k-1-s(1-\eta))n. \end{eqnarray*} Let $$g(s)=\frac{k-1-s-\epsilon \bigg(\sum_{i=1}^k2d_i+5k-6\bigg)}{k-1-s(1-\eta)}.$$ It is easy to see that $g(s)$ is decreasing in $s$ for $1\leq s\leq k-2$.
Then \begin{eqnarray*} \frac{\sum_{\substack{uv\in E(L\backslash X,S)\\u\in L\backslash X}}x_u}{e(L\backslash X,S)} &\geq& g(s)\geq g(k-2) =\frac{1-\epsilon \bigg(\sum_{i=1}^k2d_i+5k-6\bigg)}{1+(k-2)\eta}\\ &\geq&1-\bigg(\sum_{i=1}^k2d_i+5k-6\bigg)(\eta+\epsilon). \end{eqnarray*} Hence there exists a vertex $u\in L\backslash X$ such that $$ x_u\geq1-\bigg(\sum_{i=1}^k2d_i+5k-6\bigg)(\eta+\epsilon).$$ By Claim~2, \begin{eqnarray*} d(u) &\geq& \bigg(1-\bigg(\sum_{i=1}^k 2d_i+5k-5\bigg)\bigg(\bigg(\sum_{i=1}^k2d_i+5k-6\bigg)(\eta+\epsilon)+\epsilon\bigg)\bigg)n\\ &\geq &\bigg(1-\bigg(\sum_{i=1}^k2d_i+5k-5\bigg)^2(\eta+\epsilon)\bigg)n. \end{eqnarray*} This completes Claim~3. {\bf Claim~4.} $|L|=k-1$. Furthermore, for all $u\in L$, $$x_u\geq 1-\bigg (\sum_{i=1}^k 2d_i+5k-4\bigg)^{2k-1}\epsilon$$ and $$d(u)\geq\bigg(1-\bigg(\sum_{i=1}^k 2d_i+5k-4\bigg)^{2k-1}\epsilon\bigg)n.$$ Note that $w\in L$ and $x_{w}=1$. By Claim~2, $$d(w)\geq \bigg(1-\bigg(\sum_{i=1}^k 2d_i+5k-5\bigg)\epsilon\bigg)n. $$ Applying Claim~3 iteratively $k-2$ times, we can find a set $X\subseteq L\backslash \{w\}$ of $k-2$ vertices such that for any $u\in X$, \begin{eqnarray*} x_u &\geq& 1-\bigg(\sum_{j=1}^{k-2}\bigg(\sum_{i=1}^k 2d_i+5k-5\bigg)^{2j}+\bigg(\sum_{i=1}^k 2d_i+5k-5\bigg)^{2k-2}\bigg(\sum_{i=1}^k 2d_i+5k-4\bigg)\bigg)\epsilon\\ &\geq& 1- \bigg (\sum_{i=1}^k 2d_i+5k-4\bigg)^{2k-1}\epsilon \end{eqnarray*} and $$d(u)\geq\bigg(1-\bigg(\sum_{i=1}^k 2d_i+5k-4\bigg)^{2k-1}\epsilon\bigg)n.$$ Noting that $|L|\leq k-1$, we have $L=X\cup \{w\}$. Hence $|L|=k-1$. This proves Claim~4. Let $T$ be the common neighborhood of $L$, and let $R=S\backslash T$. By Claim~4, $$|L|=k-1$$ and $$|T|\geq\bigg(1-k\bigg(\sum_{i=1}^k 2d_i+5k-4\bigg)^{2k-1}\epsilon\bigg)n\geq \sum_{i=1}^k d_i.$$ Since $G$ is bipartite, $L$ and $T$ are both independent sets of $G$. {\bf Claim~5.} $R$ is empty. Suppose that $R$ is not empty, i.e., there is a vertex $v\in R$.
Then $v$ has at most $d_k-1$ neighbors in $S$, since otherwise we can embed an $F$ in $G$. Let $H$ be the graph obtained from $G$ by removing all edges incident with $v$ and then connecting $v$ to each vertex in $L$. Clearly, $H$ is still $F$-free. By the definition of $R$, $v$ is adjacent to at most $k-2$ vertices in $L$. Let $u\in L$ be a vertex not adjacent to $v$. Then by Claim~4, we have \begin{eqnarray*} \rho(H)- \rho &\geq& \frac{{\bf x}^T A(H){\bf x}}{ {\bf x}^T{\bf x}}-\frac{{\bf x}^T A{\bf x}}{ {\bf x}^T{\bf x}}\\ &\geq&\frac{2x_v}{{\bf x}^T{\bf x}}\bigg( x_u-\sum_{\substack{vz\in E\\ z\in S}} x_z\bigg)\\ &\geq& \frac{2x_v}{{\bf x}^T{\bf x}}\bigg(1-\bigg (\sum_{i=1}^k 2d_i+5k-4\bigg)^{2k-1}\epsilon-(d_k-1)\epsilon\bigg)\\ &=& \frac{2x_v}{{\bf x}^T{\bf x}}\bigg( 1-\bigg (\bigg (\sum_{i=1}^k 2d_i+5k-4\bigg)^{2k-1}+d_k-1\bigg )\epsilon\bigg )\\ &>&0. \end{eqnarray*} Hence $\rho(H)>\rho$, a contradiction. This proves Claim~5. By Claim~5, $S=T$. By the definition of $T$, we have $G=K_{k-1,n-k+1}$. This completes the proof. \end{Proof} \noindent {\bf Proof of Theorem~\ref{thm2}.} Let $G$ be an $F$-free bipartite graph of order $n$ with the maximum spectral radius. If $G$ is connected, then the result follows directly from Theorem~\ref{c-thm3}. Next we suppose that $G$ is not connected. Since $K_{k-1,n-k+1}$ is $F$-free, $$\rho(G)\geq\sqrt{ (k-1)(n-k+1)}.$$ Let $G_1$ be a component of $G$ such that $\rho(G_1)=\rho(G)$ and $n_1=|V(G_1)|$. Note that $G$ is triangle-free. By Wilf's theorem \cite[Theorem~2]{Wilf1986}, we have \begin{eqnarray*} \frac{ n_1^2}{4} &\geq& \rho^2(G_1)=\rho^2(G)\geq (k-1)(n-k+1)\geq (k-2)n \geq\frac{f^2(k,d_1,\dots,d_k)}{4}, \end{eqnarray*} which implies that $$n_1\geq f(k,d_1,\dots,d_k) . $$ By Theorem~\ref{c-thm3} again, \begin{eqnarray*} \rho(G_1) &\leq& \sqrt{ (k-1)(n_1-k+1)}< \sqrt{ (k-1)(n-k+1)}, \end{eqnarray*} a contradiction. This completes the proof.
\QEDB \noindent {\bf Proof of Corollary~\ref{Cor3}.} By a result of Favaron et al. \cite{FMS}, $\rho_n(G) \geq \rho_n(H)$ for some spanning bipartite subgraph $H$. Moreover, equality holds if and only if $G= H$, as can be deduced from the original proof. By Theorem~\ref{thm2}, $$\rho(H)\leq \sqrt{(k-1)(n-k+1)}$$ with equality if and only if $H = K_{k-1,n-k+1}$. Since the spectrum of a bipartite graph is symmetric \cite{LP}, $$\rho_n(H)\geq-\sqrt{(k-1)(n-k+1)}$$ with equality if and only if $H= K_{k-1,n-k+1}$. Thus we have $$ \rho_n(G)\geq -\sqrt{(k-1)(n-k+1)}$$ with equality if and only if $G= K_{k-1,n-k+1}$. \QEDB \end{document}
\begin{document} \title{Classification of bicovariant differential calculi over free orthogonal Hopf algebras} \begin{center} \textit{{Laboratoire de~Mathématiques (UMR 6620)}, Université Blaise Pascal, \\ Complexe universitaire des Cézeaux, 63171 Aubière Cedex, France.} \end{center} \begin{center} \href{mailto:[email protected]}{\tt [email protected]} \end{center} \begin{abstract} We show that if two Hopf algebras are monoidally equivalent, then their categories of bicovariant differential calculi are equivalent. We then classify, for $q \in \mathbb C^*$ not a root of unity, the finite dimensional bicovariant differential calculi over the Hopf algebra $ \cO_q(SL_2)$. Using a monoidal equivalence between free orthogonal Hopf algebras and $ \cO_q(SL_2)$ for a suitable $q$, this leads us to the classification of finite dimensional bicovariant differential calculi over free orthogonal Hopf algebras. \end{abstract} \section*{Introduction} The notion of \emph{differential calculus} over a Hopf algebra was introduced by Woronowicz in~\cite{wor_cd}, with the purpose of giving a natural adaptation of differential geometry over groups in the context of quantum groups. An important question in this topic is the classification of bicovariant differential calculi over a given Hopf algebra, see for example~\cite{BS_class}, \cite{maj_class} or \cite{hs_cd_SLq}. The aim of the present paper is to classify the finite dimensional (first order) bicovariant differential calculi over an important class of Hopf algebras, namely the \emph{free orthogonal Hopf algebras}, also called \emph{Hopf algebras associated to non-degenerate bilinear forms}~\cite{DV_BE}.
Given an invertible matrix $E \in GL_n(\mathbb C)$ with $n \geqslant 2$, the free orthogonal Hopf algebra $\cB(E)$ associated with $E$ is the universal Hopf algebra generated by a family of elements $(a_{ij})_{1\leqslant i,j \leqslant n}$ subject to the relations: \[E^{-1} a^t E a =I_n = a E^{-1}a^t E , \] where $ a $ is the matrix $ (a_{ij})_{1\leqslant i,j \leqslant n}$. Its coproduct, counit and antipode are defined by: \[\Delta (a_{ij}) = \sum\limits_{k=1}^n a_{ik} \otimes a_{kj}, \qquad \varepsilon (a) = I_n, \qquad S(a) = E^{-1}a^tE.\] The Hopf algebra $\cB(E)$ can also be obtained as an appropriate quotient of the FRT bialgebra associated to the Yang-Baxter operators constructed by Gurevich~\cite{gure}. If $E \overline{E} = \lambda I_n$, with $\lambda \in \mathbb R^*$, there exists an involution $*$ on $\cB(E)$ defined by $a_{ij}^* = \left(E^{-1} a^t E\right)_{ji}$, endowing $\cB(E)$ with a Hopf $*$-algebra structure. This Hopf $*$-algebra corresponds to a free orthogonal compact quantum group as defined in~\cite{wang_vandaele} or \cite{qortho}, and is generally denoted by $\cA_o((E^t)^{-1})$. This justifies the term ``free orthogonal Hopf algebra'' for $\cB(E)$. The starting point of our classification is a result of~\cite{bic_BE}, which states that if $q \in \mathbb C^*$ satisfies $q^2 + \tr(E^{-1}E^t)q + 1 = 0$, then the Hopf algebras $\cB(E)$ and $\cO_q(SL_2)$ are monoidally equivalent, \textit{i.e. } their categories of comodules are monoidally equivalent. The proof of~\cite{bic_BE} is based on a deep result of Schauenburg~\cite{Sch_big}, and gives an explicit description of the correspondence between $\cB(E)$-comodules and $\cO_q(SL_2)$-comodules. We use here similar arguments to show that if two Hopf algebras are monoidally equivalent, then their categories of bicovariant differential calculi are equivalent (Theorem~\ref{th_eq}).
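To illustrate the trace condition in the simplest case $n=2$, one may take (with one standard normalization of $E$, which may differ from the conventions of~\cite{bic_BE}): \[ E_q=\begin{pmatrix} 0 & 1\\ -q^{-1} & 0\end{pmatrix}, \qquad E_q^{-1}E_q^t=\begin{pmatrix} -q & 0\\ 0 & -q^{-1}\end{pmatrix}, \qquad \tr(E_q^{-1}E_q^t)=-q-q^{-1}, \] so that $X^2+\tr(E_q^{-1}E_q^t)X+1=(X-q)(X-q^{-1})$: the solutions of the equation are exactly $q$ and $q^{-1}$, in accordance with the monoidal equivalence between $\cB(E_q)$ and $\cO_q(SL_2)$.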
This theorem generalizes a result of~\cite{MO}, where the two monoidally equivalent Hopf algebras are assumed to be related by a cocycle twist. Applying Theorem~\ref{th_eq} to the Hopf algebras $\cB(E)$ and $\cO_q(SL_2)$, the study of bicovariant differential calculi over the Hopf algebra $\cB(E)$ reduces to the study of bicovariant differential calculi over $\cO_q(SL_2)$. This classification was carried out for $\cO_q(SL_2)$ in~\cite{hs_cd_SLq} for transcendental values of $q$ (which is not our setting, since here $q$ has to satisfy $q^2 + \tr(E^{-1}E^t)q + 1 = 0$). Our classification uses a different approach from that of~\cite{hs_cd_SLq}, and is based on the classification of the finite dimensional $ \cO_q(SL_2)$-Yetter-Drinfeld modules given in~\cite{TAK}. The paper is organized as follows. We gather in the first section some known results about bicovariant differential calculi over Hopf algebras, and their formulation in terms of Yetter-Drinfeld modules. Furthermore, we show that if the category of Yetter-Drinfeld modules over a Hopf algebra $H$ is semisimple, then the bicovariant differential calculi over $H$ are inner. In Section 2, using the language of cogroupoids~\cite{HG_co}, we prove that two monoidally equivalent Hopf algebras have equivalent categories of bicovariant differential calculi. We finally classify in Section 3 the finite dimensional bicovariant differential calculi over the Hopf algebra $\cO_q(SL_2)$ for $q \in \mathbb C^*$ not a root of unity, using the fact that by~\cite{TAK} the category of finite dimensional Yetter-Drinfeld modules over $\cO_q(SL_2)$ is semisimple. This allows us to classify the finite dimensional bicovariant differential calculi over $\cB(E)$, provided that the solutions of the equation $q^2 + \tr(E^{-1}E^t)q + 1 = 0$ are not roots of unity.
Its comultiplication, antipode and counit will respectively be denoted by $\Delta$, $S$ and $\varepsilon$. A coaction of a left (respectively right) $H$-comodule will generally be denoted by $\lambda$ (respectively $\rho$). We will use Sweedler's notations: $\Delta(x) = \sum x_{(1)} \otimes x_{(2)}$ for $x \in H$, and $\rho(v) = \sum v_{(0)} \otimes v_{(1)}$ for $v$ in a right comodule $V$. \section{Bicovariant differential calculi} We start this section by recalling the definition of a bicovariant differential calculus, and of the equivalent notion, expressed in terms of Yetter-Drinfeld modules (called \emph{reduced differential calculus} in this paper). We then prove some basic lemmas which will be useful in the sequel. The main result of this section states that if the category of (finite dimensional) Yetter-Drinfeld modules over a Hopf algebra $H$ is semisimple, then the (finite dimensional) bicovariant differential calculi over $H$ are inner. We refer to~\cite{KS} for background material on Hopf algebras and comodules. \begin{definition} Let $H$ be a Hopf algebra. A \emph{Hopf bimodule} $M$ over $H$ is an $H$-bimodule together with a left comodule structure $\lambda: M \rightarrow H \otimes M$ and a right comodule structure $\rho: M \rightarrow M \otimes H$ such that: \begin{itemize} \item $\forall x,y \in H, \forall v \in M$, $\lambda(x.v.y) = \Delta(x).\lambda(v).\Delta(y)$, \item $\forall x,y \in H, \forall v \in M$, $\rho(x.v.y) = \Delta(x).\rho(v).\Delta(y)$, \item $(id_H \otimes \rho) \circ \lambda = (\lambda \otimes id_H) \circ \rho$. \end{itemize} The category of Hopf bimodules over $H$, whose morphisms are the maps which are right and left linear and colinear over $H$, is denoted by $^H_H \cM^H_H$. \end{definition} \begin{definition} Let $H$ be a Hopf algebra. 
A (right) \emph{Yetter-Drinfeld module} over $H$ is a right $H$-module and a right $H$-comodule $V$ such that: \[\forall x \in H, \forall v \in V, \quad \sum (v.x)_{(0)} \otimes (v.x)_{(1)} = \sum v_{(0)}.x_{(2)} \otimes S(x_{(1)})v_{(1)}x_{(3)} .\] The category of Yetter-Drinfeld modules over $H$, whose morphisms are the maps which are both linear and colinear over $H$, is denoted by $\cY\cD(H)$. The category of finite dimensional Yetter-Drinfeld modules over $H$ is denoted by $\cY\cD_f(H)$. \end{definition} \begin{exemple}\label{ex_eps} Let $H$ be a Hopf algebra. We denote by $\mathbb C_\varepsilon$ the Yetter-Drinfeld module whose base-space is $\mathbb C$, with right coaction $\lambda \mapsto \lambda \otimes 1$ and right module structure defined by $\lambda \triangleleft x = \lambda \varepsilon(x)$ for $\lambda \in \mathbb C_\varepsilon$ and $x \in H$. \end{exemple} We recall from~\cite{YDmod} the correspondence between Yetter-Drinfeld modules and Hopf bimodules. \begin{theoreme}[{\cite[Theorem 5.7]{YDmod}}]\label{th_YD} Let $H $ be a Hopf algebra. The categories $^H _H \cM ^H _H$ and $\cY \cD(H)$ are equivalent. \end{theoreme} We describe for convenience the equivalence of categories involved in the previous theorem. Let $M$ be a Hopf bimodule over $H$, with right coaction $\rho$ and left coaction $\lambda$. The space $^{inv}M = \{v \in M \;\, ; \; \lambda(v) = 1 \otimes v\}$ of left-coinvariant elements of $M$ has a Yetter-Drinfeld module structure defined as follows. We have $\rho(^{inv}M) \subset \:^{inv}M \otimes H$, and the right coaction of $^{inv}M$ is just the restriction of $\rho$ to $^{inv}M$. The right module structure is defined by $w\triangleleft x =\sum S(x_{(1)}).w.x_{(2)}$. 
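Note that the formula above indeed defines a right $H$-action on $^{inv}M$ (a verification not spelled out here): for $w \in \,^{inv}M$ and $x,y \in H$, \[ (w\triangleleft x)\triangleleft y=\sum S(y_{(1)})S(x_{(1)}).w.x_{(2)}y_{(2)} =\sum S\left((xy)_{(1)}\right).w.(xy)_{(2)}=w\triangleleft (xy), \] using that $S$ is antimultiplicative and that $\Delta$ is an algebra morphism.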
Conversely, given a Yetter-Drinfeld module $V$, the space $H \otimes V$ can be equipped with a Hopf bimodule structure, with left and right actions given by: \[x.(y \otimes v).z = \sum xyz_{(1)} \otimes v\triangleleft z_{(2)},\] and the right ($\rho$) and left ($\lambda$) coactions given by: \begin{align*} \rho(x \otimes v) & = \sum x_{(1)} \otimes v_{(0)} \otimes x_{(2)}v_{(1)},\\ \lambda(x \otimes v) & = \sum x_{(1)} \otimes x_{(2)} \otimes v. \end{align*} We then have for $M \in\, ^H_H\cM ^H _H$, $M \cong H \otimes\,^{inv}M$ and for $V \in \cY\cD(H)$, $V \cong \,^{inv}(H \otimes V)$. The equivalence of categories between $^H _H \cM ^H _H$ and $\cY\cD(H)$ is then: \[ \appl[\cF:]{^H _H \cM ^H _H}{\cY \cD(H)}{M}{^{inv}M,} \quad \text{with quasi-inverse } \appl[\cG:]{\cY \cD(H)}{^H _H \cM ^H _H}{V}{H \otimes V.} \] A morphism $f: M \rightarrow N$ in $^H _H \cM ^H _H$ automatically satisfies $f(^{inv}M) \subset \,^{inv}N$, and $\cF(f) : \,^{inv}M \rightarrow \,^{inv}N$ is just the restriction of $f$. Conversely, if $f : V \rightarrow W$ is a morphism of Yetter-Drinfeld modules, then $\cG(f) = id_H \otimes f$. \begin{definition} Let $H$ be a Hopf algebra. A (first order) \emph{bicovariant differential calculus} $(M,d)$ over $H$ is a Hopf bimodule $M$ together with a left and right comodule morphism $d: H \rightarrow M$ such that $\forall x,y \in H, d(xy) = x.d(y)+d(x).y$ and such that $M = \vect \{x.d(y) \;\, ; \; x,y \in H \}.$ A bicovariant differential calculus $(M,d)$ is said to be \emph{inner} if there exists a bi-coinvariant element $\theta \in M$ (\textit{i.e. } satisfying $\rho(\theta) = \theta \otimes 1$ and $\lambda(\theta) = 1 \otimes \theta)$ such that $\forall x \in H, d(x) = \theta.x - x. \theta$. The \emph{dimension} of a bicovariant differential calculus $(M,d)$ is the dimension of the vector space $^{inv}M$.
A \emph{morphism of bicovariant differential calculi} $f : (M,d_M) \rightarrow (N,d_N)$ is a morphism of Hopf bimodules such that $f \circ d_M = d_N$. We denote by $\cD\cC(H)$ the category of bicovariant differential calculi over $H$. \end{definition} Bicovariant differential calculi were introduced by Woronowicz in~\cite{wor_cd}. An overview is given in~\cite[Part IV.]{KS}. The notion of bicovariant differential calculus has the following interpretation in terms of Yetter-Drinfeld modules. \begin{definition} Let $H$ be a Hopf algebra. A \emph{reduced differential calculus} over $H$ is a Yetter-Drinfeld module $V$ together with a surjective map $\omega:H \rightarrow V$ satisfying: \[\forall x,y \in H, \:\omega(xy) = \omega(x).y + \varepsilon(x) \omega(y) \quad \text{and} \quad \sum \omega(x)_{(0)} \otimes \omega(x)_{(1)} = \sum \omega(x_{(2)}) \otimes S(x_{(1)})x_{(3)}.\] A \emph{morphism of reduced differential calculi} $f: (V, \omega_V) \rightarrow (W,\omega_W)$ is a morphism of Yetter-Drinfeld modules such that $f \circ \omega_V = \omega_W$. We denote by $\cR\cD\cC(H)$ the category of reduced differential calculi over $H$. \end{definition} \begin{lemme}\label{lem_rdc} The equivalence of categories of Theorem~\ref{th_YD} induces an equivalence between the categories $\cD\cC(H)$ and $\cR\cD\cC(H)$: \[ \appl[\cF:]{\cD\cC(H)}{\cR\cD\cC(H)}{(M,d)}{(^{inv}M,\omega_d)} \quad \text{with quasi-inverse } \appl[\cG:]{\cR\cD\cC(H)}{\cD\cC(H)}{(V,\omega)}{(H \otimes V, d_\omega)} \] where for $x \in H$, $\omega_d(x) = \sum S(x_{(1)})d(x_{(2)})$ and $d_\omega(x) = \sum x_{(1)} \otimes \omega(x_{(2)})$. \end{lemme} \begin{proof} The one-to-one correspondence between bicovariant differential calculi and reduced differential calculi is described in~\cite[Section 14]{KS}. We may now focus on the functoriality of this correspondence. 
If $f : (M,d_M) \rightarrow (N,d_N)$ is a morphism of bicovariant differential calculi, then the restriction of $f$, $\cF(f) :\,^{inv}M \rightarrow \,^{inv}N$ satisfies for all $x \in H$, \[ \cF(f) \circ \omega_{d_M}(x) = \sum f\left(S(x_{(1)})d_M(x_{(2)})\right) = \sum S(x_{(1)})f(d_M(x_{(2)})) = \sum S(x_{(1)}) d_N(x_{(2)}) = \omega_{d_N}(x). \] Hence $\cF(f)$ is a morphism of reduced differential calculi. Conversely, if $f: (V, \omega_V) \rightarrow (W, \omega_W)$ is a morphism of reduced differential calculi, then \[(id_H \otimes f) \circ d_{\omega_V} (x) = \sum x_{(1)} \otimes f( \omega_V(x_{(2)})) = \sum x_{(1)} \otimes \omega_W(x_{(2)}) = d_{\omega_W}(x). \] Thus $\cG(f) = id_H \otimes f$ is a morphism of bicovariant differential calculi. Since $\cF$ and $\cG$ are quasi-inverse to each other between the categories $\cY\cD(H)$ and $^H _H \cM ^H _H$, it only remains to check that the natural transformation providing the equivalence $\cF \circ \cG \cong id$ (respectively $\cG \circ \cF \cong id$) consists of morphisms of reduced (respectively bicovariant) differential calculi. Let $(V, \omega)$ be a reduced differential calculus over $H$. The isomorphism of Yetter-Drinfeld modules \[ \appl[\theta :]{\cF \circ \cG(V) =\, ^{inv}(H \otimes V)}{V}{x \otimes v}{\varepsilon(x)v} \] satisfies for $x \in H$, \[ \theta \circ \omega_{d_\omega} (x) = \theta\left(\sum S(x_{(1)}) d_{\omega} (x_{(2)})\right) = \theta \left(\sum S(x_{(1)}) x_{(2)} \otimes \omega (x_{(3)})\right) = \theta(1 \otimes \omega(x)) = \omega(x). \] Thus $\theta$ is an isomorphism of reduced differential calculi. Conversely, let $(M,d)$ be a bicovariant differential calculus over $H$. 
The isomorphism of Hopf bimodules \[ \appl[\gamma :]{\cG \circ \cF (M) = H \otimes \,^{inv}M}{M}{x \otimes v}{x.v} \] satisfies for $x \in H$, \[ \gamma \circ d_{\omega_d}(x) = \gamma \left( \sum x_{(1)} \otimes \omega_d(x_{(2)}) \right) = \gamma \left( \sum x_{(1)} \otimes S(x_{(2)}) d (x_{(3)}) \right) = \sum \varepsilon(x_{(1)}) d(x_{(2)}) = d(x). \] Hence $\gamma$ is a morphism of bicovariant differential calculi, which ends the proof. \end{proof} \begin{rmq}\label{rmq_im} Let $V$ be a Yetter-Drinfeld module, and let $\omega : H \rightarrow V$ be a map satisfying all the axioms of a reduced differential calculus, except the surjectivity condition. Then $\im(\omega)$ is a Yetter-Drinfeld submodule of $V$. Indeed, we have $\omega(x).y = \omega(xy)- \varepsilon(x)\omega(y) = \omega(xy-\varepsilon(x)y) \in \im(\omega) $ for all $x, y \in H$, thus $\im(\omega)$ is a submodule of $V$, and $\sum \omega(x)_{(0)} \otimes \omega(x)_{(1)} = \sum \omega(x_{(2)}) \otimes S(x_{(1)})x_{(3)} \in \im(\omega) \otimes H$, thus $\im(\omega)$ is a subcomodule of $V$. \end{rmq} \begin{definition} A reduced differential calculus $(V, \omega)$ is said to be \emph{inner} if there exists a coinvariant element $\theta \in V$ (\textit{i.e. } satisfying $\rho(\theta) = \theta \otimes 1$) such that $\forall x \in H,$ $\omega(x) = \theta.x - \varepsilon(x) \theta$. A reduced differential calculus $\omega : H \rightarrow V$ is said to be \emph{simple} if $V$ is a simple Yetter-Drinfeld module, that is to say, if there is no non-trivial subspace $W \subset V$ which is both a submodule and a subcomodule of $V$. Let $(V,\omega)$, $(W_1, \omega_1)$, ($W_2, \omega_2)$ be reduced differential calculi. We say that $(V, \omega)$ is the direct sum of $(W_1, \omega_1)$ and $(W_2, \omega_2)$, and we write $(V,\omega) = ( W_1, \omega_1) \oplus (W_2, \omega_2)$, if $V = W_1 \oplus W_2$ and if for all $x \in H$, $\omega(x ) = (\omega_1(x),\omega_2(x))$.
\end{definition} Note that the direct sum of reduced differential calculi is not always well defined. The problem is that if $(V, \omega_V)$ and $(W, \omega_W)$ are reduced differential calculi over a Hopf algebra $H$, then the map \[ \appl[\omega:]{ H}{V \oplus W}{x}{(\omega_V(x), \omega_W(x))} \] can fail to be surjective. We give in the next lemma a necessary and a sufficient condition for the existence of the direct sum of simple reduced differential calculi. \begin{lemme}\label{lem_sum} Let $(V_1, \omega_1), \ldots , (V_n, \omega_n)$ be simple reduced differential calculi over a Hopf algebra $H$. We set \[ \appl[\omega:]{H}{V =\bigoplus\limits_{i =1}^n V_i}{x}{(\omega_1(x), \ldots , \omega_n(x))}. \] If the $V_i$'s are two-by-two non-isomorphic as Yetter-Drinfeld modules, then $(V,\omega)$ is a reduced differential calculus. Conversely, if $(V,\omega)$ is a reduced differential calculus, then the reduced differential calculi $(V_i, \omega_i)$ are two-by-two non-isomorphic. \end{lemme} \begin{proof} The map $\omega$ clearly satisfies all the axioms of a reduced differential calculus, except the surjectivity condition. In order to prove the lemma, we thus have to examine under which conditions $\omega$ is onto. Assume that the $V_i$'s are two-by-two non-isomorphic (as Yetter-Drinfeld modules). According to Remark~\ref{rmq_im}, the image of $\omega$ is a Yetter-Drinfeld submodule of $V$. There is therefore a subset $\cI \subset \{1, \ldots, n\}$ such that there exists an isomorphism of Yetter-Drinfeld modules $f : \im(\omega) \rightarrow \bigoplus\limits_{i\in\cI} V_i$. For $k \in \{1, \ldots, n\}$, we denote by $\pi_k :V \rightarrow V_k$ the canonical projection. The map $\pi_k \circ \omega = \omega_k$ is onto, thus the restriction of $\pi_k$ to $\im(\omega)$ is also onto.
This means that $\pi_k$ induces a non-zero morphism of Yetter-Drinfeld modules $\bigoplus\limits_{i\in\cI} V_i \rightarrow V_k$, hence an isomorphism of Yetter-Drinfeld modules $V_l \cong V_k$, with $l \in \cI$. Since by hypothesis the $V_i$'s are two-by-two non-isomorphic, we have $k= l$, hence $k \in \cI$. Thus $\cI = \{1, \ldots, n\}$, $\im(\omega) = V$, and we conclude that $\omega: H \rightarrow V$ is a reduced differential calculus. Assume now that there is an isomorphism $f : (V_j,\omega_j) \rightarrow (V_i, \omega_i)$ with $i \neq j$. We denote by $\eta : H \rightarrow V_i \oplus V_i$ the map defined by the composition \[\xymatrixcolsep{4em}\xymatrix{ H \ar[r]^-\omega & V \ar[r]^-{\pi_i \oplus \pi_j} & V_i \oplus V_j \ar[r]^-{id \oplus f} & V_i \oplus V_i . }\] The map $\eta$ is clearly not surjective, since $\eta (x) = (\omega_i(x) , f\circ \omega_j(x)) = (\omega_i(x), \omega_i(x))$. This implies that $\omega$ is not surjective, since $\pi_i \oplus \pi_j$ and $id \oplus f$ are both surjective. \end{proof} \begin{lemme}\label{lem_simple} Let $V$ be a simple Yetter-Drinfeld module over a Hopf algebra $H$, admitting a non-zero right-coinvariant element $\theta \in V$. If $V$ is not isomorphic to the Yetter-Drinfeld module $\mathbb C_\varepsilon$ (of Example~\ref{ex_eps}), then the map \[ \appl[\omega_\theta:]{H}{V}{x}{\theta.x-\varepsilon(x)\theta} \] defines a reduced differential calculus over $H$. 
\end{lemme} \begin{proof} We have for $x \in H$, \begin{align*} \omega_\theta (x).y + \varepsilon(x)\omega_\theta(y) &= (\theta.x - \varepsilon(x)\theta).y + \varepsilon(x)(\theta.y-\varepsilon(y)\theta) \\ & = (\theta.x).y - \varepsilon(x)\varepsilon(y)\theta = \omega_\theta (xy) \end{align*} and \begin{align*} \rho \circ \omega_\theta (x)& = \rho(\theta.x) -\varepsilon(x) \theta \otimes 1 \\ &= \sum \theta.x_{(2)} \otimes S(x_{(1)}).1.x_{(3)} -\varepsilon(x) \theta \otimes 1 \\ & = \sum \theta.x_{(2)} \otimes S(x_{(1)})x_{(3)} - \sum \varepsilon(x_{(2)})\theta \otimes S(x_{(1)})x_{(3)} \\ & = \sum \omega_\theta( x_{(2)}) \otimes S(x_{(1)})x_{(3)}. \end{align*} By Remark~\ref{rmq_im}, the image of $\omega_\theta$ is thus a Yetter-Drinfeld submodule of $V$. Since $V$ is simple, the image of $\omega_\theta$ is either $V$, in which case $\omega_\theta$ is indeed a reduced differential calculus, or $\im(\omega_\theta) = (0)$. In that case, since $\theta$ is coinvariant and $\theta . x = \varepsilon(x) \theta$ for all $x \in H$, the map $\mu : \mathbb C_\varepsilon \rightarrow V$ given by $\mu(\lambda) = \lambda \theta$ is a non-zero morphism between simple Yetter-Drinfeld modules, hence an isomorphism. \end{proof} The end of this section is devoted to the proof of the following lemma. \begin{lemme}\label{lem_inner} Let $H$ be a Hopf algebra such that the category $\cY\cD_f(H)$ is semisimple (\textit{i.e. } each finite dimensional Yetter-Drinfeld module over $H$ can be decomposed into a direct sum of simple Yetter-Drinfeld modules). Then each finite dimensional reduced differential calculus over $H$ is inner. \end{lemme} \begin{definition} Let $(V,\omega)$ be a reduced differential calculus. We denote by $V_\omega$ the Yetter-Drinfeld module over $H$ defined as follows. As a right comodule, $V_\omega = V \oplus \mathbb C$ (where the $H$-comodule structure on $\mathbb C$ is the canonical one: $\lambda \to \lambda \otimes 1$). 
Its right module structure is defined for $v \in V$, $\lambda \in \mathbb C$ and $x \in H$ by: $(v, \lambda) .x = (v.x + \lambda \omega(x), \lambda \varepsilon(x))$. Let us check that this formula defines an $H$-module structure on $V \oplus \mathbb C$. We have \begin{align*} ((v, \lambda). x).y &= (v.x + \lambda \omega(x), \lambda \varepsilon(x)).y = ((v.x + \lambda \omega(x)).y + \lambda \varepsilon(x) \omega(y), \lambda \varepsilon(x)\varepsilon(y)) \\ & = (v.(xy) + \lambda \omega(xy), \lambda \varepsilon(xy)) = (v,\lambda).(xy), \end{align*} and the other axioms of a right module are clearly satisfied. Before checking that the Yetter-Drinfeld condition is satisfied on $V_\omega$, let us note that, denoting by $j : V \rightarrow V \oplus \mathbb C$ the canonical injection and by $p: V \oplus \mathbb C \rightarrow \mathbb C_\varepsilon$ the canonical projection, $j$ and $p$ are both module and comodule maps, and the sequence \[{ \xymatrix { 0 \ar[r]& V \ar[r]^-j & V_\omega \ar[r]^-p & \mathbb C_\varepsilon \ar[r] &0 }} \] is exact. Since $V$ is a Yetter-Drinfeld module and $j : V \rightarrow V_\omega$ is a module and comodule morphism, the Yetter-Drinfeld condition: \[\forall x \in H, \rho(w.x) = \sum w _{(0)}.x _{(2)} \otimes S(x _{(1)}) w _{(1)} x _{(3)}\] is automatically satisfied for $w \in j(V)$. Hence it only remains to check that the Yetter-Drinfeld condition is also satisfied on $\mathbb C$, that is, that for all $x$ in $H$, $\rho((0, 1).x) = \sum (0, 1).x_{(2)} \otimes S(x_{(1)})x_{(3)}$.
We have for $x \in H$ \begin{align*} \sum (0, 1).x_{(2)} \otimes S(x_{(1)})x_{(3)} &= \sum (\omega(x_{(2)}), \varepsilon(x_{(2)})) \otimes S(x_{(1)})x_{(3)} \\ & = (j\otimes id) \left(\sum \omega(x_{(2)}) \otimes S(x_{(1)})x_{(3)}\right) + (0,1) \otimes \varepsilon(x) \\ & = (j \otimes id) \circ \rho (\omega(x)) + (0,1) \otimes \varepsilon (x) \\ &= \rho(j(\omega(x))) + \rho(0, \varepsilon(x)) = \rho( \omega(x), \varepsilon(x)) = \rho((0,1).x), \end{align*} hence $V_\omega$ is indeed a Yetter-Drinfeld module, and \[{ \xymatrix { 0 \ar[r]& V \ar[r]^-j & V_\omega \ar[r]^-p & \mathbb C_\varepsilon \ar[r] &0 }} \] is a short exact sequence of Yetter-Drinfeld modules. \end{definition} \begin{lemme} A reduced differential calculus $(V,\omega)$ is inner if and only if the short exact sequence of Yetter-Drinfeld modules \[{ \xymatrix { 0 \ar[r]& V \ar[r]^-j & V_\omega \ar[r]^-p & \mathbb C_\varepsilon \ar[r] &0 }} \] splits. \end{lemme} \begin{proof} Assume first that $(V,\omega)$ is inner. Let $\theta \in V$ be a right-coinvariant element such that $\omega = x \mapsto \theta.x - \varepsilon(x)\theta$. We set \[ \appl[r:]{V_\omega}{V}{(v, \lambda)}{v + \lambda \theta.} \] It is a comodule morphism since for $v \in V$, $ \lambda \in \mathbb C$, \begin{align*} (r \otimes id) \circ \rho(v, \lambda) & = (r \otimes id) \circ \rho \circ j (v) +(r \otimes id )((0,\lambda) \otimes 1) = ((r \circ j) \otimes id) \circ \rho (v) +\lambda \theta \otimes 1 \\ & = \rho(v) + \lambda \rho(\theta) =\rho \circ r (v,\lambda). \end{align*} And we have for $v \in V$, $\lambda \in \mathbb C$ and $x \in H$, \[ r((v, \lambda).x) = r(v.x +\lambda \omega(x), \lambda\varepsilon(x)) = v.x + \lambda \theta.x -\lambda\varepsilon(x)\theta + \lambda\varepsilon(x)\theta = (v+\lambda \theta).x = r(v, \lambda).x. \] Hence $r$ is a Yetter-Drinfeld module morphism satisfying $r\circ j = id_V$, so that the above sequence splits. 
Assume conversely that the short exact sequence of Yetter-Drinfeld modules associated to $(V,\omega)$ splits: \[{ \xymatrix { 0 \ar[r]& V \ar[r]^-j & V_\omega =V \oplus \mathbb C \ar@/^1.1pc/[l]^-r\ar[r]^-p & \mathbb C_\varepsilon \ar[r] &0 . }} \] We set $\theta = r(0,1)$. Then $\rho(\theta) =\rho\circ r(0,1) = (r\otimes id)\circ \rho (0,1) = \theta \otimes 1$ and for $x \in H$, \[ \theta.x -\varepsilon(x)\theta = r((0,1).x) - r(0, \varepsilon(x)) = r\left((\omega(x), \varepsilon(x))-(0, \varepsilon(x))\right) = r(j(\omega(x))) = \omega(x). \] Hence the result. \end{proof} Lemma~\ref{lem_inner} follows immediately: if the category $\cY\cD_f(H)$ is semisimple, then the short exact sequence associated to any finite dimensional reduced differential calculus $(V,\omega)$ splits, so that $(V,\omega)$ is inner by the previous lemma. \section{Monoidal equivalence} We show in this section that if two Hopf algebras are monoidally equivalent, then their categories of bicovariant differential calculi are also equivalent. In order to describe the equivalence between the categories $\cD\cC(H)$ and $\cD\cC(L)$, when $H$ and $L$ are monoidally equivalent Hopf algebras, we will need some definitions and results about cogroupoids, which we recall here. We refer to~\cite{HG_co} for a survey on the subject.
\begin{definition} A \emph{cocategory} $\cC$ consists of: \begin{itemize} \item a set of objects $\ob(\cC)$, \item for all $X,Y \in \ob(\cC)$, an algebra $\cC(X,Y)$, \item for all $X,Y,Z \in \ob(\cC)$, algebra morphisms $\Delta_{X,Y}^Z : \cC(X,Y) \rightarrow \cC(X,Z) \otimes \cC(Z,Y)$ and $\varepsilon_X : \cC(X,X) \rightarrow \mathbb C$ such that for all $X,Y,Z,T \in \ob(\cC)$, the following diagrams commute: \[{ \xymatrix @!0 @R=1.5cm @C6.5cm { \cC(X,Y) \ar[r]^-{\Delta_{X,Y}^Z} \ar[d]_{\Delta_{X,Y}^T} & \cC(X,Z) \otimes \cC(Z,Y) \ar[d]^{id \otimes \Delta_{Z,Y}^T} \\ \cC(X,T) \otimes \cC(T,Y) \ar[r]_-{\Delta_{X,T}^Z \otimes id} & \cC(X,Z) \otimes \cC(Z,T) \otimes \cC(T,Y) }}\] \[ \xymatrix { \cC(X,Y) \ar@{=}[rd] \ar[r]^-{\Delta_{X,Y}^X} & \cC(X,X) \otimes \cC(X,Y) \ar[d]^{\varepsilon_X \otimes id} \\ & \cC(X,Y) } \qquad \quad \xymatrix { \cC(X,Y) \ar@{=}[rd] \ar[r]^-{\Delta_{X,Y}^Y} & \cC(X,Y) \otimes \cC(Y,Y) \ar[d]^{id \otimes \varepsilon_Y} \\ & \cC(X,Y) } \] \end{itemize} A cocategory is said to be \emph{connected} if for all $X, Y \in \ob(\cC)$, $\cC(X,Y)$ is a non-zero algebra. \end{definition} \begin{definition} A \textit{cogroupoid} $\cC$ is a cocategory equipped with linear maps $S_{X,Y} : \cC(X,Y) \rightarrow \cC(Y,X)$ such that for all $X,Y \in \ob(\cC)$, the following diagrams commute: \[{ \xymatrix @!0 @R=1.5cm @C2.6cm { \cC(X,X) \ar[r]^-{\varepsilon_X} \ar[d]_{\Delta_{X,X}^Y} & \mathbb C \ar[r]^-{u} & \cC(Y,X) \\ \cC(X,Y) \otimes \cC(Y,X) \ar[rr]_-{S_{X,Y} \otimes id} & & \cC(Y,X) \otimes \cC(Y,X) \ar[u]^m }} \] \[ { \xymatrix @!0 @R=1.5cm @C2.6cm { \cC(X,X) \ar[r]^-{\varepsilon_X} \ar[d]_{\Delta_{X,X}^Y} & \mathbb C \ar[r]^-{u} & \cC(X,Y) \\ \cC(X,Y) \otimes \cC(Y,X) \ar[rr]_-{id \otimes S_{Y,X}} & & \cC(X,Y) \otimes \cC(X,Y) \ar[u]^m }} \] where $m$ denotes the multiplication and $u$ the unit. 
\end{definition} We will use Sweedler notations for cogroupoids: \[ \text{for } a^{X,Y} \in \cC(X,Y), \;\Delta_{X,Y}^Z(a^{X,Y}) = \sum a_{(1)}^{X,Z} \otimes a_{(2)}^{Z,Y}.\] \begin{theoreme}[{\cite[Proposition 1.16 and Theorem 6.1]{HG_co}}]\label{bic_YD} Let $H$ and $L$ be two Hopf algebras such that there exists a linear monoidal equivalence between their categories of right comodules $\cM^H$ and $\cM^L$. Then there exists a linear monoidal equivalence between $\cY\cD(H)$ and $\cY\cD(L)$, inducing an equivalence between the categories of finite dimensional Yetter-Drinfeld modules $\cY\cD_f(H)$ and $\cY\cD_f(L).$ \end{theoreme} Let us recall the construction of this equivalence. As a consequence of~\cite{Sch_big}, restated in the context of cogroupoids, the existence of a linear monoidal equivalence between the categories $\cM^H$ and $\cM^L$ is equivalent to the existence of a connected cogroupoid $\cC$ and two objects $X, Y \in \ob (\cC)$ such that $H \cong \cC(X, X)$ and $L \cong \cC(Y,Y)$ (see~\cite[Theorem 2.10]{HG_co}). Then the equivalence between the categories $\cY\cD(H)$ and $\cY\cD(L)$ is given by the functor: \[ \appl[\cF_X^Y:]{\cY\cD(\cC(X,X))}{\cY\cD(\cC(Y,Y))}{V}{V \cotens\limits_{\cC(X,X)} \cC(X,Y),} \] where \[V \! \cotens\limits_{\cC(X,X)}\! \cC(X,Y) = \left\{ \sum\limits_i v_i \otimes a_i^{X,Y} \!\in V \otimes \cC(X,Y) \, ; \sum v_{i(0)} \otimes v_{i(1)}^{X,X} \otimes a_i^{X,Y} = \sum v_i \otimes a_{i(1)}^{X,X} \otimes a_{i(2)}^{X,Y} \right\}.\] The right $L \cong \cC(Y,Y)$-module structure of $V \cotens\limits_{\cC(X,X)} \cC(X,Y) $ is given by: \[ \left(\sum_i v_i\otimes a_i^{X,Y}\right) \triangleleft b^{Y,Y} = \sum_i v_i.b_{(2)}^{X,X} \otimes S_{Y,X}(b_{(1)}^{Y,X})a_i b_{(3)}^{X,Y} \] and its right comodule structure is given by the map $id_V \otimes \Delta_{X,Y}^Y$. The quasi-inverse of $\cF_X^Y$ is the functor $\cF_Y^X$. 
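In this notation, the antipode diagrams in the definition of a cogroupoid take the following form, which is how they will be used in the computations below: for all $a^{X,X} \in \cC(X,X)$,
\[ \sum S_{X,Y}(a_{(1)}^{X,Y})\, a_{(2)}^{Y,X} = \varepsilon_X(a^{X,X})\, 1 \quad \text{and} \quad \sum a_{(1)}^{X,Y}\, S_{Y,X}(a_{(2)}^{Y,X}) = \varepsilon_X(a^{X,X})\, 1, \]
the first identity holding in $\cC(Y,X)$ and the second in $\cC(X,Y)$.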
By~\cite[Proposition 1.16]{HG_co}, the functor $\cF_X^Y$ induces an equivalence between the categories of finite dimensional Yetter-Drinfeld modules $\cY\cD_f(H)$ and $\cY\cD_f(L).$ \begin{lemme} Let $\cC$ be a cogroupoid and let $X,Y$ be in $\ob(\cC)$ such that $\cC(Y,X) \neq (0)$. Let $\omega : \cC(X,X) \rightarrow V$ be a reduced differential calculus over $\cC(X,X)$. The map \[ \appl[\overline{\omega}:]{\cC(Y,Y)}{V \cotens\limits_{\cC(X,X)} \cC(X,Y)}{a^{Y,Y}} { \sum \omega(a_{(2)}^{X,X}) \otimes S_{Y,X}(a_{(1)}^{Y,X})a_{(3)}^{X,Y}} \] is a reduced differential calculus over $\cC(Y,Y)$. \end{lemme} \begin{proof} We already know, by the previous theorem, that $V \cotens\limits_{\cC(X,X)} \cC(X,Y)$ is a Yetter-Drinfeld module over $\cC(Y,Y)$. We first have to check that the map $\overline{\omega}$ is well defined, that is, that \[ \sum \omega(a_{(2)}^{X,X})_{(0)} \otimes \omega(a_{(2)}^{X,X})_{(1)} \otimes S_{Y,X}(a_{(1)}^{Y,X})a_{(3)}^{X,Y} = \sum \omega(a_{(2)}^{X,X}) \otimes \Delta_{X,Y}^X \left(S_{Y,X}(a_{(1)}^{Y,X})a_{(3)}^{X,Y}\right). \] On the one hand, we have: \begin{align*} \sum \omega(a_{(2)}^{X,X})_{(0)} \otimes \omega(a_{(2)}^{X,X})_{(1)} \otimes S_{Y,X}(a_{(1)}^{Y,X})a_{(3)}^{X,Y} = \sum \omega(a_{(3)}^{X,X}) \otimes S_{X,X}(a_{(2)}^{X,X})a_{(4)}^{X,X} \otimes S_{Y,X}(a_{(1)}^{Y,X})a_{(5)}^{X,Y}.
\end{align*} And on the other hand, \begin{align*} \Delta_{X,Y}^X \left(S_{Y,X}(a^{Y,X})b^{X,Y}\right) &= \Delta_{X,Y}^X (S_{Y,X}(a^{Y,X}))\Delta_{X,Y}^X (b^{X,Y}) \\ & =\sum \left(S_{X,X}(a_{(2)}^{X,X}) \otimes S_{Y,X}(a_{(1)}^{Y,X})\right) \left(b_{(1)}^{X,X} \otimes b_{(2)}^{X,Y} \right) \\ & = \sum S_{X,X}(a_{(2)}^{X,X})b_{(1)}^{X,X} \otimes S_{Y,X}(a_{(1)}^{Y,X})b_{(2)}^{X,Y} \end{align*} so that \begin{align*} \sum \omega(a_{(2)}^{X,X}) \otimes \Delta_{X,Y}^X \left(S_{Y,X}(a_{(1)}^{Y,X})a_{(3)}^{X,Y}\right) = \sum \omega(a_{(3)}^{X,X}) \otimes S_{X,X}(a_{(2)}^{X,X})a_{(4)}^{X,X} \otimes S_{Y,X}(a_{(1)}^{Y,X})a_{(5)}^{X,Y} \end{align*} which shows that $\overline{\omega} : \cC(Y,Y) \rightarrow V \cotens\limits_{\cC(X,X)} \cC(X,Y)$ is well defined. We have \begin{align*} \overline{\omega}(a^{Y,Y}) \triangleleft b^{Y,Y} & = \sum \left(\omega(a_{(2)}^{X,X}) \otimes S_{Y,X}(a_{(1)}^{Y,X})a_{(3)}^{X,Y}\right) \triangleleft b^{Y,Y} \\ & = \sum \omega(a_{(2)}^{X,X}).b_{(2)}^{X,X} \otimes S_{Y,X}(b_{(1)}^{Y,X}) S_{Y,X}(a_{(1)}^{Y,X})a_{(3)}^{X,Y} b_{(3)}^{X,Y}. \end{align*} Consequently, we have \begin{align*} \overline{\omega}(a^{Y,Y}b^{Y,Y}) & = \sum \omega(a_{(2)}^{X,X}b_{(2)}^{X,X}) \otimes S_{Y,X}(a_{(1)}^{Y,X}b_{(1)}^{Y,X})a_{(3)}^{X,Y}b_{(3)}^{X,Y} \\ & = \sum \omega(a_{(2)}^{X,X}).b_{(2)}^{X,X} \otimes S_{Y,X}(a_{(1)}^{Y,X}b_{(1)}^{Y,X})a_{(3)}^{X,Y}b_{(3)}^{X,Y} \\ & \qquad + \sum \varepsilon_X(a_{(2)}^{X,X}) \omega(b_{(2)}^{X,X}) \otimes S_{Y,X}(a_{(1)}^{Y,X}b_{(1)}^{Y,X})a_{(3)}^{X,Y}b_{(3)}^{X,Y} \\ & = \overline{\omega}(a^{Y,Y}) \triangleleft b^{Y,Y} +\sum \omega(b_{(2)}^{X,X}) \otimes S_{Y,X} (b_{(1)}^{Y,X})S_{Y,X}(a_{(1)}^{Y,X})a_{(2)}^{X,Y}b_{(3)}^{X,Y} \\ & = \overline{\omega}(a^{Y,Y}) \triangleleft b^{Y,Y} + \varepsilon_Y(a^{Y,Y}) \overline{\omega}(b^{Y,Y}). 
\end{align*} Denoting by $\rho = id_V \otimes \Delta_{X,Y}^{Y}$ the $\cC(Y,Y)$-comodule structure of $V \cotens\limits_{\cC(X,X)} \cC(X,Y)$, we have for all $a^{Y,Y} \in \cC(Y,Y)$, \begin{align*} \rho \circ \overline{\omega}(a^{Y,Y}) & =\sum \omega(a_{(2)}^{X,X}) \otimes \Delta_{X,Y}^Y \left(S_{Y,X}(a_{(1)}^{Y,X})a_{(3)}^{X,Y}\right) \\ &= \sum \omega(a_{(3)}^{X,X}) \otimes S_{Y,X}(a_{(2)}^{Y,X})a_{(4)}^{X,Y} \otimes S_{Y,Y}(a_{(1)}^{Y,Y})a_{(5)}^{Y,Y} \\ & = \sum \overline{\omega}(a_{(2)}^{Y,Y}) \otimes S_{Y,Y}(a_{(1)}^{Y,Y})a_{(3)}^{Y,Y}. \end{align*} Now, in order to prove the lemma, it only remains to check that $\overline{\omega}$ is onto. Let $\sum\limits_{i} v_i \otimes a_i^{X,Y}$ be in $ V \cotens\limits_{\cC(X,X)} \cC(X,Y)$ and let $\varphi : \cC(Y,X) \rightarrow \mathbb C$ be a linear map satisfying $\varphi(1) = 1$. We have \[ \sum\limits_{i} v_{i(0)} \otimes v_{i(1)}^{X,X} \otimes a_i^{X,Y} = \sum\limits_{i} v_{i} \otimes a_{i(1)}^{X,X} \otimes a_{i(2)}^{X,Y} \] since $\sum\limits_{i} v_i \otimes a_i^{X,Y}$ is in $ V \cotens\limits_{\cC(X,X)} \cC(X,Y)$. Applying $id_V \otimes \Delta_{X,X}^Y \otimes id_{\cC(X,Y)}$ to both sides, we find \[ \sum\limits_{i} v_{i(0)} \otimes v_{i(1)}^{X,Y} \otimes v _{i(2)}^{Y,X} \otimes a_i^{X,Y} = \sum\limits_{i} v_{i} \otimes a_{i(1)}^{X,Y} \otimes a_{i(2)}^{Y,X} \otimes a_{i(3)}^{X,Y}. \] This shows that \begin{align*} \sum\limits_{i}\varphi \left( v _{i(2)}^{Y,X} S_{X,Y}( a_i^{X,Y})\right) v_{i(0)} \otimes v_{i(1)}^{X,Y} & = \sum\limits_{i} \varphi \left( a_{i(2)}^{Y,X} S_{X,Y}( a_{i(3)}^{X,Y}) \right) v_{i} \otimes a_{i(1)}^{X,Y} \\ & = \sum\limits_{i}\varepsilon_{Y} ( a_{i(2)}^{Y,Y}) v_{i} \otimes a_{i(1)}^{X,Y} = \sum\limits_{i} v_{i} \otimes a_{i}^{X,Y}. \end{align*} Since $\omega : \cC(X,X) \rightarrow V$ is onto, there exists $b_i^{X,X} \in \cC(X,X)$ such that $\omega(b_i^{X,X}) = v_i$.
We have then \[ \sum v_{i(0)} \otimes v_{i(1)}^{X,X} = \sum \omega(b_i^{X,X})_{(0)} \otimes \omega(b_i^{X,X})_{(1)} = \sum \omega(b_{i(2)}^{X,X}) \otimes S_{X,X}(b_{i(1)}^{X,X})b_{i(3)}^{X,X}, \] so that \begin{align*} \sum v_{i(0)} \otimes v_{i(1)}^{X,Y} \otimes v_{i(2)}^{Y,X} & =\sum \omega(b_{i(3)}^{X,X}) \otimes S_{Y,X}(b_{i(2)}^{Y,X})b_{i(4)}^{X,Y} \otimes S_{X,Y}(b_{i(1)}^{X,Y})b_{i(5)}^{Y,X} . \end{align*} We have therefore \begin{align*} \sum\limits_{i} v_{i} \otimes a_{i}^{X,Y} & = \sum_{i}\varphi \left( v _{i(2)}^{Y,X} S_{X,Y}( a_i^{X,Y})\right) v_{i(0)} \otimes v_{i(1)}^{X,Y} \\ & = \sum_{i}\varphi \left(S_{X,Y}(b_{i(1)}^{X,Y})b_{i(5)}^{Y,X} S_{X,Y}( a_i^{X,Y})\right) \omega(b_{i(3)}^{X,X}) \otimes S_{Y,X}(b_{i(2)}^{Y,X})b_{i(4)}^{X,Y} \\ & = \sum_{i}\varphi \left(S_{X,Y}(b_{i(1)}^{X,Y})b_{i(3)}^{Y,X} S_{X,Y}( a_i^{X,Y})\right) \overline{ \omega}(b_{i(2)}^{Y,Y}) \end{align*} which allows us to conclude that $\overline{\omega}$ is onto. \end{proof} \begin{rmq}\label{rmq_inner} If $\omega : \cC(X,X) \rightarrow V$ is an inner reduced differential calculus, let $\theta \in V$ be a right-coinvariant element such that $\forall a^{X,X} \in \cC(X,X)$, $\omega(a^{X,X}) = \theta.a^{X,X} - \varepsilon_X(a^{X,X})\theta$. We then have \begin{align*} \forall a^{Y,Y} \in \cC(Y,Y), \;\overline{\omega}(a^{Y,Y}) &= \sum \omega(a_{(2)}^{X,X}) \otimes S_{Y,X}(a_{(1)}^{Y,X})a_{(3)}^{X,Y} \\ & = \sum (\theta.a_{(2)}^{X,X} - \varepsilon_X(a_{(2)}^{X,X})\theta) \otimes S_{Y,X}(a_{(1)}^{Y,X})a_{(3)}^{X,Y} \\ & = \sum \theta.a_{(2)}^{X,X} \otimes S_{Y,X}(a_{(1)}^{Y,X})a_{(3)}^{X,Y} - \theta \otimes \varepsilon_Y(a^{Y,Y})\\ & = (\theta \otimes 1) \triangleleft a^{Y,Y} - \varepsilon_Y(a^{Y,Y})(\theta \otimes 1). \end{align*} Consequently, $\overline{\omega}$ is an inner reduced differential calculus, whose corresponding right-coinvariant element is $\theta \otimes 1.$ \end{rmq} Combining the previous lemma with Theorem~\ref{bic_YD}, we obtain the main result of this section.
It generalizes a result of~\cite{MO}, where the two monoidally equivalent Hopf algebras are assumed to be related by a cocycle twist. \begin{theoreme}\label{th_eq} Let $H$ and $L$ be two Hopf algebras such that there exists a linear monoidal equivalence between their categories of right comodules $\cM^H$ and $\cM^L$. Then there exists an equivalence between the categories: \begin{itemize} \item of bicovariant differential calculi $\cD\cC(H)$ and $\cD\cC(L)$, \item of finite dimensional bicovariant differential calculi $\cD\cC_f(H)$ and $\cD\cC_f(L)$. \end{itemize} \end{theoreme} \begin{proof} Let $\cC$ be a connected cogroupoid such that there exist $X,Y \in \ob(\cC)$ satisfying $\cC(X,X) \cong H$ and $\cC(Y,Y) \cong L$. We consider the functor induced by Theorem~\ref{bic_YD} and the previous lemma: \[ \appl[\cF_X^Y:]{\cR\cD\cC(H)}{\cR\cD\cC(L)}{(V,\omega)} {(V \cotens\limits_{H} \cC(X,Y), \overline{\omega})} \] which sends a morphism $f:V \rightarrow W$ in $\cR\cD\cC(H)$, to $\cF_X^Y(f) = f\otimes id : V \cotens\limits_{H} \cC(X,Y) \rightarrow W \cotens\limits_{H} \cC(X,Y)$. It is known to be a morphism of Yetter-Drinfeld modules, and one easily checks that it is a morphism of reduced differential calculi. Since $\cF_X^Y$ is an equivalence between the categories of Yetter-Drinfeld modules over $H$ and $L$, with quasi-inverse $\cF_Y^X$, we only have to check that the natural transformation providing the equivalence $\cF_X^Y \circ \cF_Y^X \cong id$ consists of morphisms of reduced differential calculi. In other words, we have to check that, for all $(V,\omega) \in \cR\cD\cC(H)$, the morphism of Yetter-Drinfeld modules: \[ \appl[\theta_V :]{V}{(V \cotens\limits_{H} \cC(X,Y)) \cotens\limits_{L} \cC(Y,X)} {v}{\sum v_{(0)} \otimes v_{(1)}^{X,Y} \otimes v_{(2)}^{Y,X}} \] is a morphism of reduced differential calculi. 
We have for $a^{X,X} \in H \cong \cC(X,X)$, \begin{align*} \theta_V \circ \omega(a^{X,X}) & = \sum(id \otimes \Delta_{X,X}^Y)\left(\omega(a_{(2)}^{X,X}) \otimes S_{X,X}(a_{(1)}^{X,X})a_{(3)}^{X,X}\right) \\ & = \sum \omega(a_{(3)}^{X,X}) \otimes S_{Y,X}(a_{(2)}^{Y,X})a_{(4)}^{X,Y} \otimes S_{X,Y}(a_{(1)}^{X,Y})a_{(5)}^{Y,X} \\ & = \sum \overline{\omega}(a_{(2)}^{Y,Y}) \otimes S_{X,Y}(a_{(1)}^{X,Y})a_{(3)}^{Y,X} = \overline{\overline{\omega}}(a^{X,X}). \end{align*} Thus $\theta_V$ is a morphism of reduced differential calculi, and $\cF_X^Y$ is an equivalence of categories. Gathering this with Lemma~\ref{lem_rdc}, we obtain an equivalence $\cD\cC(H) \cong \cR\cD\cC(H) \cong \cR\cD\cC(L) \cong \cD\cC(L)$, inducing an equivalence $\cD\cC_f(H) \cong \cR\cD\cC_f(H) \cong \cR\cD\cC_f(L) \cong \cD\cC_f(L)$. \end{proof} \section{Classification of bicovariant differential calculi over free orthogonal Hopf algebras} In this section, we gather the results of the previous sections in order to classify the finite dimensional reduced differential calculi over the free orthogonal Hopf algebras. To this end, we start by classifying the finite dimensional reduced differential calculi over the Hopf algebra $ \cO_q(SL_2)$, when $q \in \mathbb C^*$ is not a root of unity. This classification is based on the classification of finite dimensional $ \cO_q(SL_2)$-Yetter-Drinfeld modules given in~\cite{TAK}, and on Lemma~\ref{lem_inner}. \begin{definition} Let $q \in \mathbb C^*$ be a complex number which is not a root of unity. $ \cO_q(SL_2)$ is the Hopf algebra generated by four elements $a,b,c,d$ subject to the relations: \[ \left\{ \begin{array}{l} ba = q ab \; , \:ca = qac \; , \:db = qbd \; , \:dc = q cd \; , \:bc = cb \; , \\ ad -q^{-1}bc = da-qbc = 1. \end{array} \right.
\] Its comultiplication, counit and antipode are defined by: \[\begin{array}{llll} \Delta (a) = a \otimes a + b \otimes c, & \Delta (b) = a \otimes b + b \otimes d, & \Delta (c) = c \otimes a + d \otimes c, & \Delta (d) = c \otimes b + d \otimes d, \\ \varepsilon (a) = \varepsilon(d) = 1, & \varepsilon(b) = \varepsilon(c) = 0 , &&\\ S(a) = d, & S(b ) = -qb, & S(c) = -q^{-1}c, & S(d) = a. \end{array}\] \end{definition} \begin{definition} Let $n$ be in $\mathbb N$. We denote by $V_n$ the simple right $ \cO_q(SL_2)$-comodule with basis $(v_i^{(n)})_{0 \leqslant i \leqslant n}$, and coaction $\rho_n$ defined by: \[ \rho_n(v_i^{(n)}) = \sum_{k=0}^n v_k^{(n)} \otimes \left( \sum_{\substack{r+s=k \\ 0\leqslant r \leqslant i \\0 \leqslant s \leqslant n-i}} \cnk{i}{r}_{q^2}\cnk{n-i}{s}_{q^2}q^{(i-r)s}a^rb^sc^{i-r}d^{n-i-s} \right) \] where $\cnk{n}{k}_{q^2}$ denotes the $q^2$-binomial coefficient. That is to say: \[ \cnk{n}{k}_{q^2} = q^{k(n-k)}\frac{[n]_q!}{[n-k]_q![k]_q!} \quad \text{with } [k]_q = \frac{q^k-q^{-k}}{q-q^{-1}} \quad \text{and } [k]_q! = [1]_q.[2]_q \ldots [k]_q. \] \end{definition} \begin{definition} Let $n,m$ be in $\mathbb N$ and let $\epsilon \in \{-1,1\}.$ We denote by $V_{n,m}^\epsilon$ the $ \cO_q(SL_2)$-Yetter-Drinfeld module $V_n \otimes V_m$ equipped with its canonical right coaction, and with right module structure defined by: \begin{align*} (v_i^{(n)} \otimes v_j^{(m)}).a & = \epsilon q^{\frac{m-n}{2}+i-j}v_i^{(n)} \otimes v_j^{(m)}, \\ (v_i^{(n)} \otimes v_j^{(m)}).b &= -\epsilon q^{-\frac{n+m}{2}+i+j+1}(1-q^{-2})[j]_q v_i^{(n)} \otimes v_{j-1}^{(m)},\\ (v_i^{(n)} \otimes v_j^{(m)}).c &= \epsilon q^{\frac{m+n}{2}-i-j}(1-q^{-2})[n-i]_q v_{i+1}^{(n)} \otimes v_j^{(m)}, \\ (v_i^{(n)} \otimes v_j^{(m)}).d & = \epsilon q^{\frac{n-m}{2}+j-i} (v_i^{(n)} \otimes v_j^{(m)} -q(1-q^{-2})^2[j]_q[n-i]_q v_{i+1}^{(n)} \otimes v_{j-1}^{(m)}). \end{align*} $V_{n,n}^\epsilon$ will also be denoted by $V_n^\epsilon$. 
\end{definition} \begin{rmq}\label{rmq_class} By~\cite{TAK}, every simple finite dimensional $ \cO_q(SL_2)$-Yetter-Drinfeld module is of the form $V_{n,m}^\epsilon$, and each finite dimensional $ \cO_q(SL_2)$-Yetter-Drinfeld module can be decomposed into a direct sum of simple Yetter-Drinfeld modules. To see that our description of $V_{n,m}^\epsilon$ coincides with the one given in~\cite[(6.4)]{TAK}, consider the elements $(v_{i,j})_{\substack{0\leqslant i \leqslant n \\ 0\leqslant j \leqslant m}}$ given by \[v_{i,j} = \dfrac{1}{[n-i]_q![m-j]_q!} v_{n-i}^{(n)} \otimes v_j^{(m)}. \] One can check that the $v_{i,j}$'s satisfy the relations of~\cite[(6.4)]{TAK}, so that the map sending the basis element $v_{i,j}$ of~\cite{TAK} to the element above is an isomorphism of Yetter-Drinfeld modules. \end{rmq} \begin{rmq}\label{rmq_simple} Let $n,m$ be in $\mathbb N$ and $\epsilon$ be in $\{-1, 1\}$. The Clebsch-Gordan formula for the decomposition of $V_n \otimes V_m$ into simple comodules ensures that the space of right-coinvariant elements of $V_{n,m}^\epsilon$ is one-dimensional if $n= m$, and zero-dimensional otherwise. Hence if $n\neq m$, there is no inner reduced differential calculus of the form $\omega : \cO_q(SL_2) \rightarrow V_{n,m}^\epsilon$, and there is at most one (up to isomorphism) inner reduced differential calculus of the form $\omega : \cO_q(SL_2) \rightarrow V_n^\epsilon$. If $(n,\epsilon) \neq (0,1)$, then $V_n^\epsilon$ is not isomorphic to the Yetter-Drinfeld module $\mathbb C_\varepsilon$, and by Lemma~\ref{lem_simple}, there indeed exists such an inner reduced differential calculus, which we denote by $\omega_n^\epsilon : \cO_q(SL_2) \rightarrow V_n^\epsilon$. \end{rmq} As a direct consequence of Lemma~\ref{lem_inner} and of the fact that, by~\cite{TAK}, the category $\cY\cD_f(\cO_q(SL_2))$ is semisimple, we have the following result. \begin{proposition} Each finite dimensional bicovariant differential calculus over $\cO_q(SL_2)$ is inner.
\end{proposition} This allows us to deduce the classification of finite dimensional reduced differential calculi over $\cO_q(SL_2)$. \begin{theoreme}\label{th_class} Every simple finite dimensional reduced differential calculus over $\cO_q(SL_2)$ is of the form $(V_n^\epsilon, \omega_n^\epsilon)$, with $n \in \mathbb N$, $\epsilon \in \{-1,1\}$ and $(n,\epsilon) \neq (0,1)$. Furthermore, each finite dimensional reduced differential calculus $(V,\omega)$ over $\cO_q(SL_2)$ can be decomposed into a direct sum: \[(V,\omega) \cong \bigoplus\limits_{i=1}^d (V_{n_i}^{\epsilon_i}, \omega_{n_i}^{\epsilon_i}), \] where $(n_1, \ldots, n_d) \in \mathbb N^d$ and $(\epsilon_1, \ldots ,\epsilon_d) \in \{-1,1\}^d$ satisfy $(n_i, \epsilon_i) \neq (0,1)$ for all $i$ in $\{1, \ldots , d\}$ and $ (n_i,\epsilon_i) \neq (n_j, \epsilon_j)$ for all $i \neq j$. \end{theoreme} \begin{proof} Since each finite dimensional reduced differential calculus over $\cO_q(SL_2)$ is inner, and each simple finite dimensional Yetter-Drinfeld module over $\cO_q(SL_2)$ is of the form $V_{n,m}^\epsilon$, we conclude by Remark~\ref{rmq_simple} that the simple finite dimensional reduced differential calculi over $\cO_q(SL_2)$ are the $(V_n^\epsilon, \omega_n^\epsilon)$ with $(n,\epsilon) \neq (0,1)$. Now if $(V,\omega)$ is a finite dimensional reduced differential calculus over $\cO_q(SL_2)$, by~\cite{TAK}, we have an isomorphism of Yetter-Drinfeld modules $V \cong \bigoplus\limits_{i=1}^d V_i$ where each $V_i$ is a simple Yetter-Drinfeld module. One then easily checks that for $i \in \{1,\ldots, d\}$, $\omega_i = \pi_i \circ \omega : \cO_q(SL_2) \rightarrow V_i$ (where $\pi_i : V \rightarrow V_i$ is the canonical projection) is a reduced differential calculus. We thus have $(V_i, \omega_i) \cong (V_{n_i}^{\epsilon_i}, \omega_{n_i}^{\epsilon_i})$ for some $(n_i, \epsilon_i) \neq (0,1)$.
Then $(V, \omega) \cong \bigoplus\limits_{i=1}^d (V_{n_i}^{\epsilon_i}, \omega_{n_i}^{\epsilon_i})$, and by Lemma~\ref{lem_sum}, we have $(n_i,\epsilon_i) \neq (n_j, \epsilon_j)$ when $i\neq j$. \end{proof} In order to give the classification of finite dimensional reduced differential calculi over free orthogonal Hopf algebras, we need the definition of the bilinear cogroupoid $\cB$. It will provide an explicit description of the equivalence between the categories of reduced differential calculi over a free orthogonal Hopf algebra $\cB(E)$ and $\cO_q(SL_2)$, for a well-chosen $q$. \begin{definition} The \emph{bilinear cogroupoid} $\cB$ is defined as follows: \begin{itemize} \item $\ob(\cB) = \{E \in GL_n(\mathbb C) \;\, ; \; n \geqslant 1 \}$, \item For $E, F \in \ob(\cB)$, and $m,n \geqslant 1$ such that $E \in GL_m(\mathbb C)$ and $F \in GL_n(\mathbb C)$, $\cB(E,F)$ is the universal algebra generated by elements $(a_{ij})_{\substack{1\leqslant i \leqslant m \\ 1 \leqslant j \leqslant n}}$ subject to the relations: \[F^{-1} a^t E a =I_n \quad \text{and} \quad a F^{-1}a^t E = I_m, \] where $a = (a_{ij})_{\substack{1\leqslant i \leqslant m \\ 1 \leqslant j \leqslant n}}$. \item For $E,F,G \in \ob(\cB)$, $\Delta_{E,F}^G : \cB(E,F) \rightarrow \cB(E,G) \otimes \cB(G,F)$, $\varepsilon_E : \cB(E,E) \rightarrow \mathbb C$ and $S_{E,F} : \cB(E,F)\rightarrow\cB(F,E)$ are characterized by: \begin{align*} \Delta_{E,F}^G(a_{ij}) & = \sum\limits_{k=1}^p a_{ik} \otimes a_{kj}, \text{ where } p \geqslant 1 \text{ is such that } G \in GL_p(\mathbb C), \\ \varepsilon_E (a_{ij}) & = \delta_{ij},\\ S_{E,F}(a_{ij}) & = (E^{-1}a^tF)_{ij}. \end{align*} \end{itemize} For $E \in GL_n(\mathbb C)$, $\cB(E,E)$ is a Hopf algebra, which will also be denoted by $\cB(E)$, and called the \emph{free orthogonal Hopf algebra associated with $E$}. \end{definition} \begin{rmq} One easily checks that $\cO_q(SL_2) = \cB(E_q)$, where \[ E_q = \begin{pmatrix} 0 & 1 \\ -q^{-1} & 0 \end{pmatrix}.
\] \end{rmq} By~\cite[Corollary 3.5]{HG_co}, for $\lambda \in \mathbb C$, the subcogroupoid $\cB^\lambda$ of $\cB$ defined by \[\cB^\lambda = \{ E \in GL_n(\mathbb C) \;\, ; \; n\geqslant 2, \tr(E^{-1}E^t) = \lambda\}\] is connected (here ``$\tr$'' denotes the usual trace). In the following, $E \in GL_m(\mathbb C)$, with $m \geqslant 2$, denotes a matrix such that no solution of the equation $q^2 + \tr(E^{-1} E^t)q + 1 =0$ is a root of unity. If $q$ is a solution of this equation, we have $\tr(E_q^{-1} E_q^t) = -q-q^{-1} = \tr(E^{-1}E^t)$, thus $E$ and $E_q$ are in the connected cogroupoid $\cB^\lambda$, where $\lambda = -q-q^{-1}$. The Hopf algebras $\cB(E_q) = \cO_q(SL_2)$ and $\cB(E)$ are thus monoidally equivalent, and by Theorem~\ref{th_eq}, we have an equivalence between the categories of reduced differential calculi $\cR\cD\cC(\cO_q(SL_2))$ and $\cR\cD\cC(\cB(E))$ given by: \[ \appl[\cF_{E_q}^{E} :]{\cR\cD\cC(\cO_q(SL_2))}{\cR\cD\cC(\cB(E))} {(V,\omega)}{(V \cotens\limits_{\cO_q(SL_2)} \cB(E_q, E), \overline{\omega}).} \] \begin{definition} For $n$ in $\mathbb N$ and $\epsilon \in \{-1,1\}$ such that $(n, \epsilon) \neq (0,1)$, we denote by $W_n^\epsilon$ the $\cB(E)$-Yetter-Drinfeld module $V_n^\epsilon \cotens\limits_{\cO_q(SL_2)} \cB(E_q, E)$. We fix a non-zero right-coinvariant element $\theta_n \in V_n \otimes V_n$ and we denote by $\eta_n^\epsilon : \cB(E) \rightarrow W_n^\epsilon$ the inner reduced differential calculus defined by $\eta_n^\epsilon(x) = (\theta_n \otimes 1) \triangleleft x - \varepsilon(x)(\theta_n \otimes 1)$. \end{definition} By Remark~\ref{rmq_inner}, $\cF_{E_q}^{E}(V_n^\epsilon, \omega_n^\epsilon)$ is isomorphic to $(W_n^\epsilon, \eta_n^\epsilon)$ for all $n \in \mathbb N$ and all $\epsilon \in \{-1,1\}$ such that $(n, \epsilon) \neq (0,1)$. By Theorems~\ref{th_eq} and~\ref{th_class}, we obtain the following classification of finite dimensional reduced differential calculi over $\cB(E)$.
\begin{proposition} Each finite dimensional bicovariant differential calculus over $\cB(E)$ is inner. \end{proposition} \begin{theoreme}\label{th_class_free} Every simple finite dimensional reduced differential calculus over $\cB(E)$ is of the form $(W_n^\epsilon , \eta_n^\epsilon)$, with $n \in \mathbb N$, $\epsilon \in \{-1,1\}$ and $(n,\epsilon) \neq (0,1)$. Furthermore, each finite dimensional reduced differential calculus $(W,\eta)$ over $\cB(E)$ can be decomposed into a direct sum: \[(W,\eta) \cong \bigoplus\limits_{i=1}^d (W_{n_i}^{\epsilon_i}, \eta_{n_i}^{\epsilon_i}), \] where $(n_1, \ldots, n_d) \in \mathbb N^d$ and $(\epsilon_1, \ldots ,\epsilon_d) \in \{-1,1\}^d$ satisfy $(n_i, \epsilon_i) \neq (0,1)$ for all $i$ in $\{1, \ldots , d\}$ and $ (n_i,\epsilon_i) \neq (n_j, \epsilon_j)$ for all $i \neq j$. \end{theoreme} \end{document}
\begin{document} \title[Computing Hilbert modular forms]{Computing systems of Hecke eigenvalues associated to Hilbert modular forms} \author{Matthew Greenberg} \address{University of Calgary, 2500 University Drive NW, Calgary, AB, T2N 1N4, Canada} \email{[email protected]} \author{John Voight} \address{Department of Mathematics and Statistics, University of Vermont, 16 Colchester Ave, Burlington, VT 05401, USA} \email{[email protected]} \date{\today} \begin{abstract} We utilize effective algorithms for computing in the cohomology of a Shimura curve together with the Jacquet-Langlands correspondence to compute systems of Hecke eigenvalues associated to Hilbert modular forms over a totally real field $F$. \end{abstract} \maketitle The design of algorithms for the enumeration of automorphic forms has emerged as a major theme in computational arithmetic geometry. Extensive computations have been carried out for elliptic modular forms, and large databases of such forms now exist \cite{CremonaLatest,SteinWatkins}. As a consequence of the modularity theorem of Wiles and others, these tables enumerate all isogeny classes of elliptic curves over $\mathbb{Q}$ up to a very large conductor. The algorithms employed to list such forms rely heavily on the formalism of modular symbols, introduced by Manin \cite{Manin} and extensively developed by Cremona \cite{Cremona}, Stein \cite{Stein}, and others. For a positive integer $N$, the space of modular symbols on $\Gamma_0(N) \subset \SL_2(\mathbb{Z})$ is defined to be the group $H^1_c(Y_0(N)(\mathbb{C}),\mathbb{C})$ of compactly supported cohomology classes on the open modular curve $Y_0(N)(\mathbb{C})=\Gamma_0(N)\backslash\mathcal{H}$, where $\mathcal{H}$ denotes the upper half-plane. Let $S_2(\Gamma_0(N))$ denote the space of cuspidal modular forms for $\Gamma_0(N)$.
By the Eichler-Shimura isomorphism, the space $S_2(\Gamma_0(N))$ embeds into $H^1_c(Y_0(N)(\mathbb{C}),\mathbb{C})$ and the image can be characterized by the action of the Hecke operators. In sum, to compute with the space of modular forms $S_2(\Gamma_0(N))$, one can equivalently compute with the space of modular symbols $H^1_c(Y_0(N)(\mathbb{C}),\mathbb{C})$ together with its Hecke action. This latter space is characterized by a natural isomorphism of Hecke modules \[ H^1_c(Y_0(N)(\mathbb{C}),\mathbb{C})\cong \Hom_{\Gamma_0(N)}(\Div^0\mathbb{P}^1(\mathbb{Q}),\mathbb{C}), \] where a cohomology class $\omega$ is mapped to the linear functional which sends the divisor $s-r \in \Div^0\mathbb{P}^1(\mathbb{Q})$ to the integral of $\omega$ over the image on $Y_0(N)$ of a path in $\mathcal{H}$ between the cusps $r$ and $s$. Modular symbols have proved to be crucial in both computational and theoretical roles. They arise in the study of special values of $L$-functions of classical modular forms, in the formulation of $p$-adic measures and $p$-adic $L$-functions, as well as in the conjectural constructions of Gross-Stark units~\cite{DarmonDasgupta} and Stark-Heegner points~\cite{DarmonSHP}. It is therefore quite inconvenient that a satisfactory formalism of modular symbols is absent in the context of automorphic forms on other Shimura varieties. Consequently, the corresponding theory is not as well understood. From this point of view, alternative methods for the explicit study of Hilbert modular forms are of particular interest. Let $F$ be a totally real field of degree $n=[F:\mathbb{Q}]$ and let $\mathbb{Z}_F$ denote its ring of integers. Let $S_2(\mathfrak{N})$ denote the Hecke module of (classical) Hilbert modular cusp forms over $F$ of parallel weight $2$ and level $\mathfrak{N} \subset \mathbb{Z}_F$.
Demb\'el\'e~\cite{Dembele} and Demb\'el\'e and Donnelly~\cite{DD} have presented methods for computing with the space $S_2(\mathfrak{N})$ under the assumption that $n$ is even. Their strategy is to apply the Jacquet-Langlands correspondence in order to identify systems of Hecke eigenvalues occurring in $S_2(\mathfrak{N})$ inside spaces of automorphic forms on $B^\times$, where $B$ is the quaternion algebra over $F$ ramified precisely at the infinite places of $F$---whence their assumption that $n$ is even. In this way, Demb\'el\'e and his coauthors have convincingly demonstrated that automorphic forms on totally definite quaternion algebras, corresponding to Shimura varieties of dimension zero, are amenable to computation. Here, we provide an algorithm which permits this computation when $n$ is odd. We locate systems of Hecke eigenvalues in the (degree one) cohomology of a Shimura curve. We explain how a reduction theory for the associated quaternionic unit groups, arising from a presentation of a fundamental domain for the action of this group \cite{V-fd}, allows us to compute the Hecke module structure of these cohomology groups in practice. Our methods work without reference to cusps or a canonical moduli interpretation of the Shimura curve, as these features of the classical situation are (in general) absent. Our main result is as follows. \begin{theorema*} There exists an algorithm which, given a totally real field $F$ of strict class number $1$ and odd degree $n$, and an ideal $\mathfrak{N}$ of $\mathbb{Z}_F$, computes the system of Hecke eigenvalues associated to Hecke eigenforms in the space $S_2(\mathfrak{N})$ of Hilbert modular forms of parallel weight $2$ and level $\mathfrak{N}$. 
\end{theorema*} In fact, our methods work more generally for fields $F$ of even degree as well (under a hypothesis on $\mathfrak{N}$) and for higher weight $k$, and therefore overlap with the methods of Demb\'el\'e and his coauthors in many cases; see the precise statement in Theorem \ref{maintheorem} below. This overlap follows from the Jacquet-Langlands correspondence, but it can also be explained by the theory of nonarchimedean uniformization of Shimura curves: one can describe the $\mathbb{C}_\mathfrak{p}$-points of a Shimura curve, for suitable primes $\mathfrak{p}$, as the quotient of the $\mathfrak{p}$-adic upper half-plane $\mathcal{H}_\mathfrak{p}$ by a definite quaternion order of the type considered by Demb\'el\'e. (For a discussion of the assumption on the class number of $F$, see Remark \ref{strictclassno1}.) In particular, this theorem answers (in part) a challenge of Elkies \cite{Elkies} to compute modular forms on Shimura curves. The article is organized as follows. In \S\S 1--2, we introduce Hilbert modular forms (\S\ref{S:HMF}) and quaternionic modular forms (\S\ref{S:QMF}) and the correspondence of Jacquet-Langlands which relates them. In \S\ref{S:cohom}, we discuss how systems of Hecke eigenvalues associated to certain Hilbert or quaternionic modular forms may be found in cohomology groups of Shimura curves. The rest of the paper is devoted to explicit computation in these cohomology groups. In~\S\ref{S:quaternions}, we discuss algorithms for representing quaternion algebras and their orders. In \S\ref{S:fuchsian}, we show how the fundamental domain algorithms of the second author allow for computation in the cohomology of Fuchsian groups, and we show how algorithms for solving the word problem in these groups are the key to computing the Hecke action. We conclude by presenting applications and examples of our algorithms.
Beyond applications to the enumeration of automorphic forms, the techniques of this paper hold the promise of applications to Diophantine equations. The first author \cite{Greenberg} has proposed a conjectural, $p$-adic construction of algebraic Stark-Heegner points on elliptic curves over totally real fields. These points are associated to the data of an embedding of a non-CM quadratic extension $K$ of a totally real field $F$ into a quaternion $F$-algebra $B$. Note that such a quaternion algebra cannot be totally definite. We propose to generalize the formalism of overconvergent modular symbols employed in~\cite{DP} in the case $B=\MM_2(\mathbb{Q})$ to the general quaternionic situation in order to allow for the efficient calculation of these points. The authors gratefully acknowledge the hospitality of the \textsf{Magma} group at the University of Sydney and would like to thank Lassina Demb\'el\'e, Steve Donnelly, Benjamin Linowitz, and Ron Livn\'e for their helpful comments. \section{Hilbert modular forms}\label{S:HMF} We begin by defining the space of classical Hilbert modular cusp forms. Our main reference for standard facts about Hilbert forms is Freitag \cite{Freitag}. Let $F$ be a totally real field of degree $n=[F:\mathbb{Q}]$ with ring of integers $\mathbb{Z}_F$. Throughout, we assume that the strict class number of $F$ is equal to $1$ (see Remark \ref{strictclassno1} for comments on this assumption). Let $v_1,\ldots,v_n$ be the embeddings of $F$ into $\mathbb{R}$. If $x\in F$, we write $x_i$ as a shorthand for $v_i(x)$. Each embedding $v_i$ induces an embedding $v_i:\MM_2(F)\hookrightarrow \MM_2(\mathbb{R})$. Extending our shorthand to matrices, if $\gamma\in \MM_2(F)$ we write $\gamma_i$ for $v_i(\gamma)$. Let \[ \GL_2^+(F) = \{\gamma\in\GL_2(F) : \text{$\det \gamma_i>0$ for all $i=1,\ldots,n$}\}.
\] The group $\GL_2^+(F)$ acts on the cartesian product $\mathcal{H}^n$ by the rule \[ \gamma(\tau_1,\ldots,\tau_n)=(\gamma_1\tau_1,\ldots,\gamma_n\tau_n) \] where as usual $\GL_2^+(\mathbb{R})$ acts on $\mathcal{H}$ by linear fractional transformations. Let $\gamma=\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in \GL_2(\mathbb{R})$ and $\tau\in\mathcal{H}$. We define \begin{equation} \label{jlabel} j(\gamma,\tau) = c\tau+d\in\mathbb{C}. \end{equation} For a \emph{weight} \begin{equation}\label{E:weight} k=(k_1,\ldots,k_n) \in (2\mathbb{Z}_{>0})^n, \end{equation} we define a right \emph{weight $k$ action} of $\GL_2^+(F)$ on the space of complex-valued functions on $\mathcal{H}^n$ by \[ (f\slsh{k} \gamma)(\tau)=f(\gamma\tau)\prod_{i=1}^n (\det\gamma_i)^{k_i/2}j(\gamma_i,\tau_i)^{-k_i} \] for $f : \mathcal{H}^n \to \mathbb{C}$ and $\gamma \in \GL_2^+(F)$. The center $F^\times$ of $\GL_2^+(F)$ acts trivially on such $f$. Therefore, the weight $k$ action descends to an action of $\PGL_2^+(F)=\GL_2^+(F)/F^\times$. Now let $\mathfrak{N}$ be a (nonzero) ideal of $\mathbb{Z}_F$. Define \[ \Gamma_0(\mathfrak{N})=\left\{\begin{pmatrix}a&b\\c&d\end{pmatrix}\in\GL_2^+(\mathbb{Z}_F) : c\in\mathfrak{N}\right\}. \] A \emph{Hilbert modular cusp form of weight $k$ and level $\Gamma_0(\mathfrak{N})$} is an analytic function $f:\mathcal{H}^n\to\mathbb{C}$ such that $f\slsh{k}\gamma=f$ for all $\gamma\in\Gamma_0(\mathfrak{N})$ and such that $f$ vanishes at the cusps of $\Gamma_0(\mathfrak{N})$. We write $S_k(\mathfrak{N})=S_k(\Gamma_0(\mathfrak{N}))$ for the finite-dimensional $\mathbb{C}$-vector space of Hilbert modular cusp forms of weight $k$ and level $\Gamma_0(\mathfrak{N})$. (See Freitag \cite[Chapter 1]{Freitag} for proofs and a more detailed exposition.) The space $S_k(\mathfrak{N})$ is equipped with an action of Hecke operators as follows. Let $\mathfrak{p}$ be a (nonzero) prime ideal of $\mathbb{Z}_F$ with $\mathfrak{p}\nmid \mathfrak{N}$.
Write $\mathbb{F}_\mathfrak{p}$ for the residue field of $\mathbb{Z}_F$ at $\mathfrak{p}$. By our assumption that $F$ has strict class number one, there exists a totally positive element $p \in \mathbb{Z}_F$ which generates the ideal $\mathfrak{p}$. Let $\pi=\begin{pmatrix} 1 & 0 \\ 0 & p \end{pmatrix}$. Then there are elements $\gamma_a\in \Gamma=\Gamma_0(\mathfrak{N})$, indexed by $a \in \mathbb{P}^1(\mathbb{F}_\mathfrak{p})$, such that \begin{equation} \label{Gammagaa} \Gamma \pi \Gamma = \bigsqcup_{a\in\mathbb{P}^1(\mathbb{F}_\mathfrak{p})}\Gamma\alpha_a, \end{equation} where $\alpha_a = \pi \gamma_a$. If $f\in S_k(\mathfrak{N})$, we define \[ f\slsh{} T_\mathfrak{p}=\sum_{a\in\mathbb{P}^1(\mathbb{F}_\mathfrak{p})} f\slsh{k}\alpha_a. \] Then $f \slsh{} T_\mathfrak{p}$ also belongs to $S_k(\mathfrak{N})$ and the operator $T_\mathfrak{p}:S_k(\mathfrak{N})\to S_k(\mathfrak{N})$ is called the \emph{Hecke operator} associated to the ideal $\mathfrak{p}$. One verifies directly that $T_\mathfrak{p}$ depends only on $\mathfrak{p}$ and not on our choice of generator $p$ of $\mathfrak{p}$ or on our choice of representatives (\ref{Gammagaa}) for $\Gamma\backslash\Gamma\pi\Gamma$. Suppose instead now that $\mathfrak{p}$ is a prime ideal of $\mathbb{Z}_F$ such that $\mathfrak{p}^e\parallel\mathfrak{N}$. Let $p$ and $n$ be totally positive generators of $\mathfrak{p}$ and $\mathfrak{N}$, respectively. Then there exist $x,y\in \mathbb{Z}_F$ such that $xp^e - y(n/p^e)=1$. The element \[ \pi=\begin{pmatrix}xp^e & y\\ n & p^e\end{pmatrix} \] with $\det\pi=p^e$ normalizes $\Gamma$, and we have $\pi^2\in F^\times \Gamma$. Setting \[ f\slsh{} W_{\mathfrak{p}^e}=f\slsh{k}\pi, \] we verify that $W_{\mathfrak{p}^e}$ is an involution of $S_k(\mathfrak{N})$, called an \emph{Atkin-Lehner involution}, and this operator depends only on the prime power $\mathfrak{p}^e$ and not on its generator $p$ or the elements $x,y$. We conclude this section by defining the space of newforms.
Let $\mathfrak{M}$ be an ideal of $\mathbb{Z}_F$ with $\mathfrak{M} \mid \mathfrak{N}$. For any totally positive element $d \in \mathbb{Z}_F$ with $d \mid \mathfrak{N}\mathfrak{M}^{-1}$ we have a map \begin{align*} h_d: S_k(\Gamma_0(\mathfrak{M})) &\hookrightarrow S_k(\mathfrak{N}) \\ f &\mapsto f \slsh{} \left( \begin{smallmatrix} d & 0 \\ 0 & 1 \end{smallmatrix} \right). \end{align*} We say that $f \in S_k(\mathfrak{N})$ is an \emph{oldform} at $\mathfrak{M}$ if $f$ is in the image of $h_d$ for some $d \mid \mathfrak{N}\mathfrak{M}^{-1}$. Let $S_k(\mathfrak{N})^{\text{$\mathfrak{M}$-old}}$ denote the space of oldforms at $\mathfrak{M}$. Then we can orthogonally decompose the space $S_k(\mathfrak{N})$ as \[ S_k(\mathfrak{N})=S_k(\mathfrak{N})^{\text{$\mathfrak{M}$-old}} \oplus S_k(\mathfrak{N})^{\text{$\mathfrak{M}$-new}} \] and we say that $f \in S_k(\mathfrak{N})$ is a \emph{newform} at $\mathfrak{M}$ if $f \in S_k(\mathfrak{N})^{\text{$\mathfrak{M}$-new}}$. \section{Quaternionic modular forms}\label{S:QMF} Our main reference for this section is Hida~\cite[\S2.3]{HidaHMF}. Let $B$ be a quaternion algebra over $F$ which is split at the real place $v_1$ and ramified at the real places $v_2,\ldots,v_n$ (and possibly some finite places). Let $\mathfrak{D}$ be the \emph{discriminant} of $B$, the product of the primes of $\mathbb{Z}_F$ at which $B$ is ramified. Let $\omega(\mathfrak{D})$ denote the number of distinct primes dividing $\mathfrak{D}$. Then since a quaternion algebra is ramified at an even number of places, we have \begin{equation} \label{omegaD} \omega(\mathfrak{D}) \equiv n-1\pmod{2}. \end{equation} We note that the case $\mathfrak{D}=(1)$ is possible in (\ref{omegaD}) if and only if $n$ is odd. For uniformity of presentation, we assume that $\omega(\mathfrak{D}) + n>1$, or equivalently that $B\ncong \MM_2(\mathbb{Q})$, i.e., that $B$ is a division algebra.
Since $B$ is split at $v_1$, we may choose an embedding \begin{equation} \label{gidef} \iota_1:B \hookrightarrow B\otimes \mathbb{R}\xrightarrow{\sim} \MM_2(\mathbb{R}). \end{equation} We denote by $\nrd:B \to F$ the \emph{reduced norm} on $B$, defined by $\nrd(\gamma)=\gamma\overline{\gamma}$ where $\overline{\phantom{x}}:B \to B$ is the unique \emph{standard involution} (also called \emph{conjugation}) on $B$. Let $B^\times_+$ denote the subgroup of $B^\times$ consisting of elements with totally positive reduced norm. Since $B$ is ramified at all real places except $v_1$, an element $\gamma \in B^\times$ has totally positive norm if and only if $v_1(\nrd(\gamma))>0$, so that \begin{equation} \label{Bcpplus} B^\times_+=\{\gamma\in B^\times : v_1(\nrd \gamma) = \det \iota_1(\gamma)>0\}. \end{equation} The group $B^\times_+$ acts on $\mathcal{H}$ via $\iota_1$; we write simply $\gamma_1$ for $\iota_1(\gamma)$. As $F^\times\subset B^\times_+$ acts trivially on $\mathcal{H}$ via $\iota_1$, the action of $B^\times_+$ on $\mathcal{H}$ descends to the quotient $B^\times_+/F^\times$. For an integer $m\geq 0$, let $P_m=P_m(\mathbb{C})$ be the subspace of $\mathbb{C}[x,y]$ consisting of homogeneous polynomials of degree $m$. In particular, $P_0=\mathbb{C}$. For $\gamma\in\GL_2(\mathbb{C})$, let $\bar{\gamma}$ be the adjoint of $\gamma$, so that $\gamma\bar{\gamma}=\det\gamma$. Note that this notation is consistent with the bar notation used for conjugation in a quaternion algebra as $\iota_1$ is an isomorphism of algebras with involution: $\iota_1(\bar{\gamma})=\overline{\iota_1(\gamma)}$. Define a right action of $\GL_2(\mathbb{C})$ on $P_{m}(\mathbb{C})$ by \[ (q \cdot \gamma)(x,y) = q((x\,\,y)\bar{\gamma})=q(dx-cy,-bx+ay) \] for $\gamma=\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right) \in \GL_2(\mathbb{C})$ and $q \in P_m(\mathbb{C})$.
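That this substitution is indeed a right action follows from $\overline{\gamma\delta}=\bar{\delta}\bar{\gamma}$, and it can be checked numerically. The following Python sketch (an illustration of ours, not part of the algorithms of this paper) implements $q \cdot \gamma = q(dx-cy,\,-bx+ay)$ on a sample homogeneous polynomial and verifies $(q\cdot\gamma)\cdot\delta = q\cdot(\gamma\delta)$ by evaluation at a point:

```python
# Right action (q . gamma)(x, y) = q(d*x - c*y, -b*x + a*y) on homogeneous
# polynomials, represented here simply as Python functions of (x, y).

def mat_mul(g, h):
    (a, b), (c, d) = g
    (e, f), (u, v) = h
    return ((a*e + b*u, a*f + b*v), (c*e + d*u, c*f + d*v))

def act(q, g):
    (a, b), (c, d) = g
    # q . g: substitute (x, y) -> (d*x - c*y, -b*x + a*y), i.e. (x  y) * adj(g)
    return lambda x, y: q(d*x - c*y, -b*x + a*y)

q = lambda x, y: x**3 - 2*x*y**2 + y**3   # homogeneous of degree m = 3
gamma = ((2, 1), (1, 1))
delta = ((0, -1), (1, 4))

# (q . gamma) . delta = q . (gamma * delta), since adj(gamma*delta) = adj(delta)*adj(gamma)
pt = (3, 5)
assert act(act(q, gamma), delta)(*pt) == act(q, mat_mul(gamma, delta))(*pt)
```

Here polynomials are evaluated rather than manipulated symbolically; this suffices to illustrate the action, since two polynomials of bounded degree agreeing on enough points agree identically.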
For $\ell\in\mathbb{Z}$, define a modified right action $\cdot_\ell$ of $\GL_2(\mathbb{C})$ on $P_m(\mathbb{C})$ by \[ q\cdot_\ell \gamma = (\det\gamma)^{\ell}\, (q\cdot\gamma). \] We write $P_m(\ell)(\mathbb{C})$ for the resulting right $\GL_2(\mathbb{C})$-module. Let $k$ be a weight as in (\ref{E:weight}) and let $w_i=k_i-2$ for $i=1,\ldots,n$. Define the right $\GL_2(\mathbb{C})^{n-1}$-module \begin{equation} \label{WCC} W(\mathbb{C})= P_{w_2}(-w_2/2)(\mathbb{C})\otimes\cdots\otimes P_{w_n}(-w_n/2)(\mathbb{C}). \end{equation} For the ramified real places $v_2,\dots,v_n$ of $F$, we choose splittings \begin{equation} \label{Bcpplus2n} \iota_i : B \hookrightarrow B \otimes_F \mathbb{C} \cong \MM_2(\mathbb{C}). \end{equation} We abbreviate as above $\gamma_i = \iota_i(\gamma)$ for $\gamma \in B$. Then $W(\mathbb{C})$ becomes a right $B^\times$-module via $\gamma \mapsto (\gamma_2,\dots,\gamma_n) \in \GL_2(\mathbb{C})^{n-1}$. We write $(x,\gamma)\mapsto x^\gamma$ for this action. We define a \emph{weight $k$ action} of $B^\times_+$ on the space of $W(\mathbb{C})$-valued functions on $\mathcal{H}$ by \[ (f \slsh{k} \gamma)(\tau)=(\det \gamma_1)^{k_1/2}j(\gamma_1,\tau)^{-k_1}f(\gamma_1\tau)^\gamma \] for $f:\mathcal{H} \to W(\mathbb{C})$ and $\gamma \in B_+^\times$, where $j$ is defined as in (\ref{jlabel}). Note that the center $F^\times \subseteq B^\times_+$ again acts trivially, so the action descends to $B^\times_+/F^\times$. We endow $W(\mathbb{C})$ with an analytic structure via a choice of linear isomorphism $W(\mathbb{C})\cong \mathbb{C}^{(k_2-1)\cdots(k_n-1)}$; the resulting analytic structure does not depend on this choice. Let $\mathfrak{N}$ be an ideal of $\mathbb{Z}_F$ which is prime to $\mathfrak{D}$ and let $\mathcal{O}_0(\mathfrak{N})$ be an Eichler order in $B$ of level $\mathfrak{N}$.
We denote by $\mathcal{O}_0(\mathfrak{N})_+^\times \subseteq B_+^\times$ the units of $\mathcal{O}_0(\mathfrak{N})$ with totally positive reduced norm, as in (\ref{Bcpplus}), and we let $\Gamma_0^{\mathfrak{D}}(\mathfrak{N}) = \mathcal{O}_0(\mathfrak{N})_+^\times/\mathbb{Z}_{F,+}^\times$, where $\mathbb{Z}_{F,+}^\times$ denotes the group of totally positive units of $\mathbb{Z}_F$. A \emph{quaternionic modular form of weight $k$ and level $\mathcal{O}_0(\mathfrak{N})$} is an analytic function $f:\mathcal{H} \to W(\mathbb{C})$ such that $f \slsh{k} \gamma=f$ for all $\gamma\in \mathcal{O}_0(\mathfrak{N})^\times_+$. We write $S_k^{\mathfrak{D}}(\mathfrak{N})$ for the finite-dimensional $\mathbb{C}$-vector space of quaternionic modular forms of weight $k$ and level $\mathcal{O}_0(\mathfrak{N})$. Spaces of quaternionic modular forms can be equipped with the action of Hecke operators. Let $\mathfrak{p}$ be a prime ideal of $\mathbb{Z}_F$ with $\mathfrak{p}\nmid \mathfrak{D}\mathfrak{N}$. Since $F$ has strict class number $1$, by strong approximation \cite[Theor\`eme III.4.3]{Vigneras} there exists $\pi \in \mathcal{O}_0(\mathfrak{N})$ such that $\nrd \pi$ is a totally positive generator for the ideal $\mathfrak{p}$. It follows that there are elements $\gamma_a\in \Gamma = \Gamma_0^{\mathfrak{D}}(\mathfrak{N})$, indexed by $a\in \mathbb{P}^1(\mathbb{F}_\mathfrak{p})$, such that \begin{equation} \label{heckequat} \Gamma\pi\Gamma = \bigsqcup_{a\in\mathbb{P}^1(\mathbb{F}_\mathfrak{p})}\Gamma\alpha_a, \end{equation} where $\alpha_a= \pi\gamma_a$. We define the \emph{Hecke operator} $T_\mathfrak{p}: S_k^{\mathfrak{D}}(\mathfrak{N}) \to S_k^{\mathfrak{D}}(\mathfrak{N})$ by the rule \begin{equation} \label{heckequatdef} f\slsh{}T_\mathfrak{p} = \sum_{a\in\mathbb{P}^1(\mathbb{F}_\mathfrak{p})}f\slsh{k}\alpha_a. \end{equation} The space $S_k^{\mathfrak{D}}(\mathfrak{N})$ also admits an action of Atkin-Lehner operators.
Now suppose that $\mathfrak{p}^e \subseteq \mathbb{Z}_F$ is a prime power with $\mathfrak{p}^e \parallel \mathfrak{D}\mathfrak{N}$. (Recall that $e=1$ if $\mathfrak{p} \mid \mathfrak{D}$ and that $\mathfrak{D}$ and $\mathfrak{N}$ are coprime.) Then there exists an element $\pi \in \mathcal{O}_0(\mathfrak{N})$ whose reduced norm is a totally positive generator of $\mathfrak{p}^e$ and such that $\pi$ generates the unique two-sided ideal of $\mathcal{O}_0(\mathfrak{N})$ with norm $\mathfrak{p}^e$. The element $\pi$ normalizes $\mathcal{O}_0(\mathfrak{N})$ and satisfies $\pi^2\in \mathcal{O}_0(\mathfrak{N})^\times F^\times$ (see Vign\'eras \cite[Chapitre II, Corollaire 1.7]{Vigneras} for the case $\mathfrak{p}\mid\mathfrak{D}$ and the paragraph following \cite[Chapitre II, Lemme 2.4]{Vigneras} if $\mathfrak{p}\mid\mathfrak{N}$). Thus, we define the \emph{Atkin-Lehner involution} $W_{\mathfrak{p}^e}: S_k^{\mathfrak{D}}(\mathfrak{N}) \to S_k^{\mathfrak{D}}(\mathfrak{N})$ by \begin{equation} \label{AtkinLehner2} f \slsh{} W_{\mathfrak{p}^e}=f\slsh{k}\pi \end{equation} for $f \in S_k^{\mathfrak{D}}(\mathfrak{N})$. As above, this definition is independent of the choice of $\pi$ and so only depends on the ideal $\mathfrak{p}^e$. The following fundamental result---the Jacquet-Langlands correspondence---relates these two spaces of Hilbert and quaternionic modular forms. \begin{theorem} \label{JLthm} There is a vector space isomorphism \[ S_k^\mathfrak{D}(\mathfrak{N}) \xrightarrow{\sim} S_k(\mathfrak{D}\mathfrak{N})^{\text{$\mathfrak{D}$-new}} \] which is equivariant for the actions of the Hecke operators $T_\mathfrak{p}$ with $\mathfrak{p}\nmid\mathfrak{D}\mathfrak{N}$ and the Atkin-Lehner involutions $W_{\mathfrak{p}^e}$ with $\mathfrak{p}^e\parallel\mathfrak{D}\mathfrak{N}$. \end{theorem} A useful reference for the Jacquet-Langlands correspondence is Hida~\cite[Proposition 2.12]{HidaCM}, where Theorem~\ref{JLthm} is deduced from the representation theoretic results of Jacquet-Langlands.
In particular, when $n=[F:\mathbb{Q}]$ is odd, we may take $\mathfrak{D}=(1)$ to obtain an isomorphism $S_k^{(1)}(\mathfrak{N}) \xrightarrow{\sim} S_k(\mathfrak{N})$. \section{Quaternionic modular forms and the cohomology of Shimura curves}\label{S:cohom} In this section, we relate the spaces $S_k^{\mathfrak{D}}(\mathfrak{N})$ of quaternionic modular forms, together with their Hecke action, to the cohomology of Shimura curves. As above, let $\Gamma=\Gamma_0^{\mathfrak{D}}(\mathfrak{N})=\mathcal{O}_0(\mathfrak{N})^\times_+/\mathbb{Z}_{F,+}^\times$. The action of $\Gamma$ on $\mathcal{H}$ is properly discontinuous. Therefore, the quotient $\Gamma\backslash\mathcal{H}$ has a unique complex structure such that the natural projection $\mathcal{H}\to \Gamma\backslash\mathcal{H}$ is analytic. Since $B$ is, by assumption, a division algebra, $\Gamma\backslash\mathcal{H}$ has the structure of a compact Riemann surface. By the theory of canonical models due to Shimura \cite{ShimuraZeta} and Deligne \cite{Deligne}, this Riemann surface is the locus of complex points of the \emph{Shimura curve} $X=X_0^{\mathfrak{D}}(\mathfrak{N})$ which is defined over $F$, under our assumption that $F$ has strict class number $1$. Define the right $\GL_2(\mathbb{C})^n=\GL_2(\mathbb{C})\times\GL_2(\mathbb{C})^{n-1}$-module \[ V(\mathbb{C})=\bigotimes_{i=1}^{n} P_{w_i}(-w_i/2)(\mathbb{C}) =P_{w_1}(-w_1/2)(\mathbb{C})\otimes W(\mathbb{C}). \] The group $B^\times$ acts on $V(\mathbb{C})$ via the embedding $B^\times\hookrightarrow\GL_2(\mathbb{C})^n$ given by $\gamma\mapsto(\gamma_1,\ldots,\gamma_n)$ arising from (\ref{gidef}) and (\ref{Bcpplus2n}). As $F^\times$ acts trivially on $P_{w_1}(-w_1/2)(\mathbb{C})$ via $\iota_1$ and on $W(\mathbb{C})$ via $(\iota_2,\ldots,\iota_n)$, the action of $\mathcal{O}_0(\mathfrak{N})^\times_+ \subseteq B^\times$ on $V(\mathbb{C})$ descends to a right action of $\Gamma$ on $V(\mathbb{C})$.
It is convenient to identify $V(\mathbb{C})$ with the subspace of the algebra $\mathbb{C}[x_1,y_1,\ldots,x_n,y_n]$ consisting of those polynomials $q$ which are homogeneous in $(x_i,y_i)$ of degree $w_i$. Under this identification, the action of $B^\times$ on $V(\mathbb{C})$ takes the form \[ q^\gamma(x_1,y_1,\ldots,x_n,y_n)= \prod_{i=1}^n(\det\gamma_i)^{-w_i/2} q((x_1\,\,y_1)\bar{\gamma}_1,\ldots,(x_n\,\,y_n)\bar{\gamma}_n). \] Let $V=V(\mathbb{C})$. We now consider the cohomology group $H^1(\Gamma,V)$. We represent elements of $H^1(\Gamma,V)$ by (equivalence classes of) crossed homomorphisms. Recall that a \emph{crossed homomorphism} (or \emph{$1$-cocycle}) $f:\Gamma\to V$ is a map which satisfies the property \begin{equation} \label{crossedprop} f(\gamma\delta)=f(\gamma)^\delta + f(\delta) \end{equation} for all $\gamma,\delta \in \Gamma$. A crossed homomorphism $f$ is \emph{principal} (or a \emph{$1$-coboundary}) if there is an element $g\in V$ such that \begin{equation} \label{principalprop} f(\gamma) = g^{\gamma}- g \end{equation} for all $\gamma \in \Gamma$. Let $Z^1(\Gamma,V)$ and $B^1(\Gamma,V)$ denote the spaces of crossed homomorphisms and principal crossed homomorphisms from $\Gamma$ into $V$, respectively. Then \[ H^1(\Gamma,V)=Z^1(\Gamma,V)/B^1(\Gamma,V). \] We now define (for a third time) the action of the Hecke operators, this time in cohomology. Let $f:\Gamma\to V$ be a crossed homomorphism, and let $\gamma\in \Gamma$. Recall the definition of Hecke operators (\ref{heckequat}--\ref{heckequatdef}). There are elements $\delta_a \in \Gamma$ for $a \in \mathbb{P}^1(\mathbb{F}_\mathfrak{p})$ and a unique permutation $\gamma^*$ of $\mathbb{P}^1(\mathbb{F}_\mathfrak{p})$ such that \begin{equation} \label{deltaa} \alpha_a\gamma = \delta_a\alpha_{\gamma^*a} \end{equation} for all $a$.
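The relation (\ref{deltaa}) simply records that right multiplication by $\gamma$ permutes the cosets $\Gamma\alpha_a$. This can be made concrete in the simplest classical setting $F=\mathbb{Q}$, $B=\MM_2(\mathbb{Q})$ (excluded above, but convenient for illustration): with $\pi=\mathrm{diag}(1,p)$ and the standard representatives $\alpha_a$, each product $\alpha_a\gamma$ lies in exactly one coset $\Gamma_0(N)\alpha_b$, and the assignment $a \mapsto b$ is the permutation $\gamma^*$. The following Python sketch (ours, not the algorithm of this paper) checks this for a sample $\gamma \in \Gamma_0(N)$:

```python
from fractions import Fraction

def mat_mul(g, h):
    (a, b), (c, d) = g
    (e, f), (u, v) = h
    return ((a*e + b*u, a*f + b*v), (c*e + d*u, c*f + d*v))

def reps(p):
    # Standard representatives alpha_a, indexed by P^1(F_p), for
    # Gamma_0(N) \ Gamma_0(N) pi Gamma_0(N) with pi = diag(1, p), p not dividing N.
    return [((1, b), (0, p)) for b in range(p)] + [((p, 0), (0, 1))]

def same_coset(g, h, N):
    # Gamma_0(N) g = Gamma_0(N) h iff g h^{-1} is integral with determinant 1
    # and lower-left entry divisible by N.
    (e, f), (u, v) = h
    det = e*v - f*u
    adj = ((v, -f), (-u, e))               # h^{-1} = adj / det
    (a, b), (c, d) = mat_mul(g, adj)
    a, b, c, d = (Fraction(t, det) for t in (a, b, c, d))
    return (all(t.denominator == 1 for t in (a, b, c, d))
            and a*d - b*c == 1 and c % N == 0)

p, N = 5, 11
alphas = reps(p)
gamma = ((1, 0), (N, 1))                   # a sample element of Gamma_0(N)
perm = []
for g in (mat_mul(alpha, gamma) for alpha in alphas):
    matches = [b for b, h in enumerate(alphas) if same_coset(g, h, N)]
    assert len(matches) == 1               # alpha_a * gamma lies in a unique coset
    perm.append(matches[0])
assert sorted(perm) == list(range(p + 1))  # gamma* is a permutation of P^1(F_p)
```

With each permuted representative in hand, the element $\delta_a=\alpha_a\gamma\alpha_{\gamma^*a}^{-1}$ is exactly the matrix whose integrality the check above verifies.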
Define $f \slsh{} T_\mathfrak{p}:\Gamma\to V$ by \begin{equation}\label{E:heckeformula} (f \slsh{} T_\mathfrak{p})(\gamma) = \sum_{a\in\mathbb{P}^1(\mathbb{F}_\mathfrak{p})}f(\delta_a)^{\alpha_a}. \end{equation} It is a standard calculation~\cite[\S8.3]{ShimuraArith} that $f \slsh{} T_\mathfrak{p}$ is a crossed homomorphism and that $T_\mathfrak{p}$ preserves $1$-coboundaries. Moreover, $f\slsh{} T_\mathfrak{p}$ does not depend on our choice of the coset representatives $\alpha_a$. Therefore, $T_\mathfrak{p}$ yields a well-defined operator \[ T_\mathfrak{p}:H^1(\Gamma,V)\to H^1(\Gamma,V). \] \begin{remark} One may in a natural way extend this definition to compute the action of any Hecke operator $T_\mathfrak{a}$ for $\mathfrak{a}\subseteq \mathbb{Z}_F$ an ideal with $(\mathfrak{a}, \mathfrak{D}\mathfrak{N})=1$. \end{remark} We now define an operator corresponding to complex conjugation. Let $\mu \in \mathcal{O}_0(\mathfrak{N})^\times$ be an element such that $v_1(\nrd \mu)<0$; such an element exists again by strong approximation. Then $\mu$ normalizes $\Gamma$ and $\mu^2 \in \Gamma$. If $f\in Z^1(\Gamma,V)$ then the map $f \slsh{} W_{\infty}$ defined by \begin{equation} \label{winfty} (f \slsh{} W_{\infty})(\gamma) = f(\mu \gamma \mu^{-1})^{\mu} \end{equation} is also a crossed homomorphism which is principal if $f$ is principal, so it induces a linear operator \[ W_\infty:H^1(\Gamma,V)\to H^1(\Gamma,V). \] Since $\mu^2\in \Gamma$ and $\mathbb{Z}_{F,+}^\times$ acts trivially on $V$, the endomorphism $W_\infty$ has order two. Therefore, $H^1(\Gamma,V)$ decomposes into eigenspaces for $W_{\infty}$ with eigenvalues $+1$ and $-1$, which we denote \[ H^1(\Gamma,V) = H^1(\Gamma,V)^+\oplus H^1(\Gamma,V)^-. \] It is not hard to see that $T_\mathfrak{p}$ commutes with $W_{\infty}$. Therefore, $T_\mathfrak{p}$ preserves the $\pm$-eigenspaces of $H^1(\Gamma,V)$. The group $H^1(\Gamma,V)$ also admits an action of Atkin-Lehner involutions.
Letting $\mathfrak{p}$, $e$, and $\pi$ be as in the definition of the Atkin-Lehner involutions (\ref{AtkinLehner2}) in \S\ref{S:QMF}, we define the involution $W_{\mathfrak{p}^e}$ on $H^1(\Gamma, V)$ by \begin{equation} \label{AL3} (f \slsh{} W_{\mathfrak{p}^e})(\gamma)=f(\pi \gamma \pi^{-1})^{\pi}. \end{equation} We conclude this section by relating the space of quaternionic modular forms to cohomology by use of the \emph{Eichler-Shimura isomorphism} (in analogy with the classical case $B=\MM_2(\mathbb{Q})$ \cite[\S8.2]{ShimuraArith}). We choose a base point $\tau\in\mathcal{H}$, and for $f\in S_k^{\mathfrak{D}}(\mathfrak{N})$ we define a map \[ \text{ES}(f):\Gamma\to V \] by the rule \[ \text{ES}(f)(\gamma) = \int_{\gamma_1^{-1}\tau}^\tau f(z)(zx_1+y_1)^{w_1}dz\in V(\mathbb{C}). \] A standard calculation~\cite[\S8.2]{ShimuraArith} shows that $\text{ES}(f)$ is a crossed homomorphism which depends on the choice of base point $\tau$ only up to $1$-coboundaries. Therefore, $\text{ES}$ descends to a homomorphism \[ \text{ES}:S_k^{\mathfrak{D}}(\mathfrak{N})\to H^1(\Gamma,V). \] Let \[ \text{ES}^\pm:S_k^{\mathfrak{D}}(\mathfrak{N})\to H^1(\Gamma,V)^\pm \] be the composition of $\text{ES}$ with projection to the $\pm$-subspace, respectively. \begin{theorem}[{\cite[\S4]{MS}}] \label{EichlerShimurathm} The maps $\text{ES}^\pm$ are isomorphisms (of $\mathbb{C}$-vector spaces) which are equivariant with respect to the action of the Hecke operators $T_\mathfrak{p}$ for $\mathfrak{p} \nmid \mathfrak{D}\mathfrak{N}$ and $W_{\mathfrak{p}^e}$ for $\mathfrak{p}^e\parallel\mathfrak{D}\mathfrak{N}$. \end{theorem} In sum, to compute the systems of Hecke eigenvalues occurring in spaces of Hilbert modular forms, combining the Jacquet-Langlands correspondence (Theorem \ref{JLthm}) and the Eichler-Shimura isomorphism (Theorem \ref{EichlerShimurathm}), it suffices to enumerate those systems occurring in the Hecke module $H^1(\Gamma,V(\mathbb{C}))^+$. Let $\mathfrak{N}\subset\mathbb{Z}_F$ be an ideal.
Suppose that $\mathfrak{N}$ admits a factorization $\mathfrak{N}=\mathfrak{D}\mathfrak{M}$ such that \begin{equation} \label{factorz} \text{$\mathfrak{D}$ is squarefree, $\mathfrak{D}$ is coprime to $\mathfrak{M}$, and $\omega(\mathfrak{D})\equiv n-1\pmod{2}$.} \end{equation} Then there is a quaternion $F$-algebra $B$ ramified precisely at the primes dividing $\mathfrak{D}$ and at the infinite places $v_2,\ldots,v_n$ of $F$. The goal of the second part of this paper is to prove the following theorem, which describes how the spaces of automorphic forms discussed in the first part of this paper can be computed in practice. \begin{theorem} \label{maintheorem} There exists an explicit algorithm which, given a totally real field $F$ of strict class number $1$, an ideal $\mathfrak{N}\subset\mathbb{Z}_F$, a factorization $\mathfrak{N}=\mathfrak{D}\mathfrak{M}$ as in \textup{(\ref{factorz})}, and a weight $k \in (2\mathbb{Z}_{>0})^n$, computes the systems of eigenvalues for the Hecke operators $T_\mathfrak{p}$ with $\mathfrak{p}\nmid \mathfrak{D}\mathfrak{M}$ and the Atkin-Lehner involutions $W_{\mathfrak{p}^e}$ with $\mathfrak{p}^e\parallel\mathfrak{D}\mathfrak{M}$ which occur in the space of Hilbert modular cusp forms $S_k(\mathfrak{N})^{\text{$\mathfrak{D}$-new}}$. \end{theorem} By this we mean that we will exhibit an explicit finite procedure which takes as input the field $F$, the ideals $\mathfrak{D}$ and $\mathfrak{M}$, and the weight $k$, and produces as output a set of sequences $(a_\mathfrak{p})_{\mathfrak{p}}$ with $a_\mathfrak{p} \in \overline{\mathbb{Q}}$ which are the Hecke eigenvalues of the finite set of $\mathfrak{D}$-newforms in $S_k(\mathfrak{N})$. The algorithm will produce the eigenvalues $a_\mathfrak{p}$ in any desired ordering of the primes $\mathfrak{p}$. \begin{remark} \label{strictclassno1} Generalizations of these techniques will apply when the strict class number of $F$ is greater than $1$.
In this case, the canonical model $X$ for the Shimura curve associated to $B$ is the disjoint union of components defined over the strict class field of $F$ and indexed by the strict ideal class group of $\mathbb{Z}_F$. The Hecke operators and Atkin-Lehner involutions then permute these components nontrivially, and one must take account of this additional combinatorial data when doing computations. Because of this additional difficulty (see also Remark \ref{oopsfunddomtoo} below), we leave this natural extension as a future project. We note, however, that it is a folklore conjecture that if one orders totally real fields by their discriminant, then a (substantial) positive proportion of fields will have strict class number $1$. For this reason, we are content to consider a situation which is already quite common. \end{remark} \section{Algorithms for quaternion algebras}\label{S:quaternions} We refer to work of Kirschmer and the second author \cite{KV} as a reference for algorithms for quaternion algebras. We will follow the notation and conventions therein. To even begin to work with the algorithm implied by Theorem \ref{maintheorem}, we must first find a representative quaternion algebra $B=\quat{a,b}{F}$ with discriminant $\mathfrak{D}$ which is ramified at all but one real place. From the point of view of effective computability, one can simply enumerate elements $a,b \in \mathbb{Z}_F \setminus \{0\}$ and then compute the discriminant of the corresponding algebra \cite{VoightMatrixRing} until an appropriate representative is found. (Since such an algebra exists, this algorithm always terminates after a finite amount of time.) In practice, it is much more efficient to compute as follows.
\begin{alg} This algorithm takes as input a discriminant ideal $\mathfrak{D} \subseteq \mathbb{Z}_F$, the product of distinct prime ideals with $\omega(\mathfrak{D}) \equiv n-1 \pmod{2}$, and returns $a,b \in \mathbb{Z}_F$ such that the quaternion algebra $B=\quat{a,b}{F}$ has discriminant $\mathfrak{D}$. \begin{enumalg} \item Find $a \in \mathfrak{D}$ such that $v(a)>0$ for at most one real place $v_1$ of $F$ and such that $a\mathbb{Z}_F=\mathfrak{D}\mathfrak{b}$ with $\mathfrak{D}+\mathfrak{b}=\mathbb{Z}_F$ and $\mathfrak{b}$ odd. \item Find $t \in \mathbb{Z}_F/8a\mathbb{Z}_F$ such that the following hold: \begin{itemize} \item For all primes $\mathfrak{p} \mid \mathfrak{D}$, we have $\displaystyle{\legen{t}{\mathfrak{p}}=-1}$; \item For all primes $\mathfrak{q} \mid \mathfrak{b}$, we have $\displaystyle{\legen{t}{\mathfrak{q}}=1}$; and \item For all prime powers $\mathfrak{r}^e \parallel 8\mathbb{Z}_F$ with $\mathfrak{r} \nmid \mathfrak{D}$, we have $t \equiv 1 \pmod{\mathfrak{r}^e}$. \end{itemize} \item Find $m \in \mathbb{Z}_F$ such that $b=t+8am \in \mathbb{Z}_F$ is prime and such that $v(b)<0$ for all $v \neq v_1$ and either $v_1(a)>0$ or $v_1(b)>0$. \end{enumalg} \end{alg} Since our theorem does not depend on the correctness of this algorithm, we leave it to the reader to verify that the algebra $B=\quat{a,b}{F}$ output by this algorithm indeed has the correct set of ramified places. \begin{remark} When possible, it is often helpful in practice to have $a\mathbb{Z}_F=\mathfrak{D}$, though this requires the computation of a generator for the ideal $\mathfrak{D}$ with the correct real signs; for example, if $\mathfrak{D}=\mathbb{Z}_F$ and there exists a unit $u \in \mathbb{Z}_F^\times$ such that $v(u)>0$ for a unique real place $v=v_1$ and such that $u \equiv 1 \pmod{8}$, then we may simply take $B=\quat{-1,u}{F}$. One may wish to alternate between Steps 2 and 3 in searching for $b$.
Finally, we note that in Step 2 one may find the element $t$ by either deterministic or probabilistic means. \end{remark} Given this representative algebra $B$, there are algorithms \cite{VoightMatrixRing} to compute a maximal order $\mathcal{O} \subseteq B$, which is represented in bits by a pseudobasis. Furthermore, given a prime $\mathfrak{p} \nmid \mathfrak{D}$ there exists an algorithm to compute an embedding $\iota_\mathfrak{p}:\mathcal{O} \hookrightarrow \MM_2(\mathbb{Z}_{F,\mathfrak{p}})$ where $\mathbb{Z}_{F,\mathfrak{p}}$ denotes the completion of $\mathbb{Z}_F$ at $\mathfrak{p}$. As a consequence, one can compute an Eichler order $\mathcal{O}_0(\mathfrak{N})\subset\mathcal{O}$ for any level $\mathfrak{N} \subseteq \mathbb{Z}_F$. \section{Computing in the cohomology of a Shimura curve} \label{S:fuchsian} In this section, we show how to compute explicitly in the first cohomology group of a Shimura curve, equipped with its Hecke module structure. Throughout, we abbreviate $\Gamma=\Gamma_0^{\mathfrak{D}}(\mathfrak{N})$ and $\mathcal{O}=\mathcal{O}_0(\mathfrak{N})$ as above. In the choices of embeddings $\iota_1,\dots,\iota_n$ in (\ref{Bcpplus}) and (\ref{Bcpplus2n}), we may assume without loss of generality that their image is contained in $\MM_2(K)$ with $K \hookrightarrow \mathbb{C}$ a number field containing $F$; in other words, we may take $K$ to be any Galois number field containing $F$ which splits $B$. In particular, we note that the image of $\iota_1(B)$ is then contained in $K \cap \mathbb{R}$. Throughout, we may therefore work with the coefficient module \[ V(K)=\bigotimes_{i=1}^n P_{w_i}(K)(-w_i/2) \] since obviously $V(K) \otimes_K \mathbb{C} = V(\mathbb{C})$. The $K$-vector space $V(K)$ is then equipped with an action of $\Gamma$ via $\iota$ which can be represented using exact arithmetic over $K$.
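Concretely, each factor $P_{w_i}(K)$ is a space of homogeneous polynomials of degree $w_i$ in two variables, on which a matrix acts by linear substitution. The following sketch of ours illustrates the action on coefficient vectors (conventions for left versus right actions vary; this code substitutes $X \mapsto aX+bY$, $Y \mapsto cX+dY$, which gives a right action, and omits the determinant twist $(-w_i/2)$):

```python
def mul(f, g):
    """Multiply two binary forms given as coefficient lists."""
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

def power(lin, n):
    """n-th power of a linear form [u, v] = u*X + v*Y."""
    out = [1]
    for _ in range(n):
        out = mul(out, lin)
    return out

def act(gamma, coeffs):
    """Substitute X -> a*X + b*Y, Y -> c*X + d*Y in the form
    sum_k coeffs[k] * X^(w-k) * Y^k, for gamma = (a, b, c, d)."""
    a, b, c, d = gamma
    w = len(coeffs) - 1
    out = [0] * (w + 1)
    for k, ck in enumerate(coeffs):
        term = mul(power([a, b], w - k), power([c, d], k))
        for i, ti in enumerate(term):
            out[i] += ck * ti
    return out
```

One checks that acting by $\gamma_1$ after $\gamma_2$ agrees with acting by the matrix product $\gamma_2\gamma_1$, confirming that this convention is a right action.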
\begin{remark} In fact, one can take the image of $\iota$ to be contained in $\MM_2(\mathbb{Z}_K)$ for an appropriate choice of $K$, where $\mathbb{Z}_K$ denotes the ring of integers of $K$. Therefore, one could alternatively work with $V(\mathbb{Z}_K)$ as a $\mathbb{Z}_K$-module and thereby recover from this integral structure more information about this module. In the case where $k=(2,2,\ldots,2)$, we may work already with the coefficient module $\mathbb{Z}$, and we do so in Algorithm \ref{weight2} below. \end{remark} The first main ingredient we will use is an algorithm of the second author \cite{V-fd}. \begin{prop} \label{funddom} There exists an algorithm which, given $\Gamma$, returns a finite presentation for $\Gamma$ with a minimal set of generators and a solution to the word problem for the computed presentation. \end{prop} \begin{remark}\label{R:sidepairing} One must choose a point $p \in \mathcal{H}$ with trivial stabilizer $\Gamma_p=\{1\}$ in the above algorithm, but a random choice of a point in any compact domain will have trivial stabilizer with probability $1$. The algorithm also yields a fundamental domain $\mathcal{F}$ for $\Gamma$ which is, in fact, a hyperbolic polygon. For each element $\gamma$ of our minimal set of generators, $\mathcal{F}$ and $\gamma\mathcal{F}$ share a single side. In other words, there is a unique side $s$ of $\mathcal{F}$ such that $\gamma s$ is also a side of $\mathcal{F}$. We say that $\gamma$ is a \emph{side pairing element} for $s$ and $\gamma s$. \end{remark} \begin{remark} \label{oopsfunddomtoo} The algorithms of the second author~\cite{V-fd} concern the computation of a fundamental domain for $\mathcal{O}^\times_1/\{\pm 1\}$ acting on $\mathcal{H}$, where $\mathcal{O}^\times_1$ denotes the subgroup of $\mathcal{O}^\times$ consisting of elements of reduced norm $1$.
Since we have assumed that $F$ has narrow class number $1$, the natural inclusion $\mathcal{O}^\times_1 \hookrightarrow \mathcal{O}^\times_+$ induces an isomorphism \[ \mathcal{O}^\times_1/\{\pm 1\}\xrightarrow{\sim} \mathcal{O}^\times_+/\mathbb{Z}_F^\times=\Gamma. \] Indeed, if $\gamma\in\mathcal{O}_0(\mathfrak{N})^\times_+$, then $\nrd\gamma \in \mathbb{Z}_{F,+}^\times=\mathbb{Z}_F^{\times 2}$ so if $\nrd\gamma=u^2$ then $\gamma/u$ maps to $\gamma$ in the above map. \end{remark} Let $G$ denote a set of generators for $\Gamma$ and $R$ a set of relations in the generators $G$, computed as in Proposition \ref{funddom}. We identify $Z^1(\Gamma,V(K))$ with its image under the inclusion \begin{align*} j_G:Z^1(\Gamma,V(K)) &\to \bigoplus_{g \in G} V(K) \\ f &\mapsto (f(g))_{g\in G}. \end{align*} It follows that $Z^1(\Gamma,V(K))$ consists of those $f \in \bigoplus_{g \in G} V(K)$ which satisfy $f(r)=0$ for $r \in R$; these become linear relations written out using the crossed homomorphism property (\ref{crossedprop}), and so an explicit $K$-basis for $Z^1(\Gamma,V(K))$ can be computed using linear algebra. The space of principal crossed homomorphisms (\ref{principalprop}) is obtained similarly, where a basis is obtained from any basis for $V(K)$. We obtain from this a $K$-basis for the quotient \[ H^1(\Gamma,V(K)) = Z^1(\Gamma,V(K))/B^1(\Gamma,V(K)) \] and an explicit $K$-linear map $Z^1(\Gamma, V(K)) \to H^1(\Gamma,V(K))$. We first decompose the space $H^1(\Gamma,V(K))$ into $\pm$-eigenspaces for complex conjugation $W_\infty$ as in (\ref{winfty}). An element $\mu \in \mathcal{O}^\times$ with $v_1(\nrd(\mu))<0$ can be found simply by enumeration of elements in $\mathcal{O}$. Given such an element $\mu$, we can then find a $K$-basis for the subspace $H^1(\Gamma,V(K))^+$ by linear algebra. 
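For the trivial coefficient module this linear algebra collapses to simple bookkeeping: a crossed homomorphism is an ordinary homomorphism $\Gamma \to \mathbb{Q}$, the principal ones vanish, and each relation imposes one linear condition given by exponent sums. A generic sketch of ours of this special case, using \textsf{SymPy} for exact linear algebra:

```python
from sympy import Matrix

def h1_rank(gens, relations):
    """Rank of H^1(Gamma, Q) with trivial coefficients: a cocycle is a
    homomorphism f: Gamma -> Q, and a relation r forces the f-weighted
    exponent sum of the generators occurring in r to vanish."""
    rows = []
    for rel in relations:              # rel: list of (generator, exponent)
        row = [0] * len(gens)
        for g, e in rel:
            row[gens.index(g)] += e
        rows.append(row)
    return len(gens) - Matrix(rows).rank()

# Genus-2 surface group <a,b,c,d | [a,b][c,d]>: all exponent sums vanish,
# so the relation is vacuous and the rank is 4 = 2g.
rel = [('a', 1), ('b', 1), ('a', -1), ('b', -1),
       ('c', 1), ('d', 1), ('c', -1), ('d', -1)]
print(h1_rank(['a', 'b', 'c', 'd'], [rel]))  # -> 4
```

An elliptic relation $\gamma^m=1$ instead contributes the condition $m\,f(\gamma)=0$, killing that generator; this is why only nonelliptic generators survive in the parallel weight $2$ case.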
\begin{remark} This exhaustive search in practice benefits substantially from the methods of Kirschmer and the second author \cite{KV} using the absolute reduced norm on $\mathcal{O}$, which gives the structure of a lattice on $\mathcal{O}$ so that an LLL-lattice reduction can be performed. \end{remark} Next, we compute explicitly the action of the Hecke operators on the $K$-vector space $H^1(\Gamma,V)^+$. Let $\mathfrak{p}\subset\mathbb{Z}_F$ be an ideal with $\mathfrak{p}\nmid\mathfrak{D}\mathfrak{N}$ and let $\pi\in\mathcal{O}$ be such that $\nrd\pi$ is a totally positive generator of $\mathfrak{p}$. We need to compute explicitly a coset decomposition as in (\ref{heckequat}). Now the set $\mathcal{O}(\mathfrak{p})=\mathcal{O}^\times \pi \mathcal{O}^\times$ is precisely the set of elements of $\mathcal{O}$ whose reduced norm generates $\mathfrak{p}$; associating to such an element the left ideal that it generates gives a bijection between the set $\mathcal{O}^\times \backslash \mathcal{O}(\mathfrak{p})$ and the set of left ideals of reduced norm $\mathfrak{p}$, and in particular shows that the decomposition~\eqref{heckequat} is independent of $\pi$. But this set of left ideals in turn is in bijection \cite[Lemma 6.2]{KV} with the set $\mathbb{P}^1(\mathbb{F}_\mathfrak{p})$: explicitly, given a splitting $\iota_\mathfrak{p}:\mathcal{O} \hookrightarrow \MM_2(\mathbb{Z}_{F,\mathfrak{p}})$, the left ideal corresponding to a point $a=(x:y)\in\mathbb{P}^1(\mathbb{F}_\mathfrak{p})$ is \begin{equation} \label{Iadef} I_a:=\mathcal{O}\iota_\mathfrak{p}^{-1}\begin{pmatrix} x & y \\ 0 & 0 \end{pmatrix} + \mathcal{O}\mathfrak{p}. \end{equation} For $a \in \mathbb{P}^1(\mathbb{F}_\mathfrak{p})$, we let $\alpha_a \in \mathcal{O}$ be such that $\mathcal{O} \alpha_a = I_a$ and $\nrd \alpha_a = \nrd \pi$. We have shown that \[ \mathcal{O}(\mathfrak{p})=\mathcal{O}^\times \pi \mathcal{O}^\times = \bigsqcup_{a \in \mathbb{P}^1(\mathbb{F}_\mathfrak{p})} \mathcal{O}^\times \alpha_a.
\] To compute the $\alpha_a$, we use the following proposition {\cite{KV}}. \begin{prop} \label{KVprinc} There exists an (explicit) algorithm which, given a left $\mathcal{O}$-ideal $I$, returns $\alpha \in \mathcal{O}$ such that $\mathcal{O} \alpha = I$. \end{prop} If $v_1(\nrd\alpha_a)$ happens to be negative, then replace $\alpha_a$ by $\mu\alpha_a$. Intersection with $\mathcal{O}^\times_+$ then yields the decomposition~\eqref{heckequat}. Next, in order to use equation~\eqref{E:heckeformula} to compute $(f \slsh{} T_\mathfrak{p})(\gamma)$ for all $\gamma\in G$, we need to compute the permutations $a\mapsto\gamma^*a$ of $\mathbb{P}^1(\mathbb{F}_\mathfrak{p})=\mathbb{F}_\mathfrak{p} \cup \{\infty\}$. \begin{alg} \label{ba} Given $\gamma\in\Gamma$, this algorithm returns the permutation $a\mapsto \gamma^*a$. \begin{enumalg} \item Let $\beta=\iota_\mathfrak{p}(\alpha_a\gamma) \in \MM_2(\mathbb{Z}_{F,\mathfrak{p}})$ and let $\beta_{ij}$ denote the $ij$th entry of $\beta$ for $i,j=1,2$. \item If $\ord_\mathfrak{p}(\beta_{11}) = 0$ then return $\beta_{12}/\beta_{11} \bmod{\mathfrak{p}}$. \item If $\ord_\mathfrak{p}(\beta_{12})=0$ or $\ord_{\mathfrak{p}}(\beta_{21}) > 0$ then return $\infty$. \item Otherwise, return $\beta_{22}/\beta_{21} \bmod{\mathfrak{p}}$. \end{enumalg} \end{alg} The proof that this algorithm gives correct output is straightforward. Having computed the permutation $\gamma^*$, for each $a \in \mathbb{P}^1(\mathbb{F}_\mathfrak{p})$ we compute $\delta_a=\alpha_a\gamma\alpha_{\gamma^*a}^{-1} \in \Gamma$ as in (\ref{deltaa}). By Proposition \ref{funddom}, we can write $\delta_a$ as a word in the generators $G$ for $\Gamma$, and using the crossed homomorphism property (\ref{crossedprop}) for each $f \in Z^1(\Gamma,V(K))^+$ we compute $f \slsh{} T_\mathfrak{p} \in Z^1(\Gamma,V(K))^+$ by computing $(f \slsh{} T_\mathfrak{p})(\gamma) \in V(K)$ for $\gamma\in G$.
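When a matrix is invertible modulo $\mathfrak{p}$, the permutation computed by Algorithm \ref{ba} is just the usual projective action on $\mathbb{P}^1(\mathbb{F}_\mathfrak{p})$. The following sketch of ours (with a rational prime $p$ standing in for $\mathfrak{p}$, and a column-vector convention of our choosing) illustrates that case; it does not capture the $\mathfrak{p}$-adic normalization in Steps 2--4, which is needed because $\nrd(\alpha_a\gamma)$ is divisible by $\mathfrak{p}$:

```python
def p1_points(p):
    """P^1(F_p) as normalized pairs: (x, 1) for x in F_p, plus (1, 0) = infinity."""
    return [(x, 1) for x in range(p)] + [(1, 0)]

def normalize(x, y, p):
    """Scale a nonzero vector (x, y) to its normalized representative."""
    x, y = x % p, y % p
    if y != 0:
        return ((x * pow(y, p - 2, p)) % p, 1)   # Fermat inversion of y
    return (1, 0)

def act(mat, pt, p):
    """(x : y) -> M (x : y)^T for M = (a, b; c, d) invertible mod p."""
    a, b, c, d = mat
    x, y = pt
    return normalize(a * x + b * y, c * x + d * y, p)
```

For example, the unipotent matrix with rows $(1,1)$ and $(0,1)$ acts on $\mathbb{P}^1(\mathbb{F}_7)$ by $x \mapsto x+1$ and fixes $\infty = (1:0)$, and any matrix invertible modulo $p$ induces a permutation of the $p+1$ points.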
Finally, we compute the Atkin-Lehner involutions $W_{\mathfrak{p}^e}$ for $\mathfrak{p}^e \parallel \mathfrak{D}\mathfrak{N}$ as in (\ref{AL3}). We compute an element $\pi \in \mathcal{O}$ with totally positive reduced norm such that $\pi$ generates the unique two-sided ideal $I$ of $\mathcal{O}_0(\mathfrak{N})$ of norm $\mathfrak{p}^e$. The ideal $I$ can be computed easily \cite{KV}, and a generator $\pi$ can be computed again using Proposition \ref{KVprinc}. We then decompose the space $H^1(\Gamma,V(K))$ under the action of the Hecke operators into Hecke irreducible subspaces for each operator $T_\mathfrak{p}$, and from this we compute the systems of Hecke eigenvalues using straightforward linear algebra over $K$. (Very often it turns out in practice that a single operator $T_\mathfrak{p}$ is enough to break up the space into Hecke irreducible subspaces.) For concreteness, we summarize the algorithm in the simplest case of parallel weight $k=(2,2,\ldots,2)$. Here, we may work simply with the coefficient module $\mathbb{Z}$ since the $\Gamma$-action is trivial. Moreover, the output of the computation of a fundamental domain provided by Proposition \ref{funddom} is a minimal set of generators and relations with the following properties \cite[\S 5]{V-fd}: each generator $g \in G$ is labelled either elliptic or hyperbolic, and each relation becomes trivial in the free abelian quotient. \begin{alg} \label{weight2} Let $\mathfrak{p} \subseteq \mathbb{Z}_F$ be a prime coprime to $\mathfrak{D}\mathfrak{N}$. This algorithm computes the matrix of the Hecke operator $T_\mathfrak{p}$ acting on $H^1(\Gamma,\mathbb{Z})^+$. \begin{enumalg} \item Compute a minimal set of generators for $\Gamma$ using Proposition \ref{funddom}, and let $G$ denote the set of nonelliptic generators. Let $H = \bigoplus_{\gamma \in G} \mathbb{Z} \gamma$. \item Compute an element $\mu$ as in (\ref{winfty}) and decompose $H$ into $\pm$-eigenspaces $H^{\pm}$ for $W_\infty$.
\item Compute $\alpha_a$ for $a \in \mathbb{P}^1(\mathbb{F}_\mathfrak{p})$ as in (\ref{deltaa}) by Proposition \ref{KVprinc}. \item For each $\gamma \in G$, compute the permutation $\gamma^*$ of $\mathbb{P}^1(\mathbb{F}_\mathfrak{p})$ using Algorithm \ref{ba}. \item Initialize $T$ to be the zero matrix acting on $H$, with rows and columns indexed by $G$. \item For each $\gamma \in G$ and each $a \in \mathbb{P}^1(\mathbb{F}_\mathfrak{p})$, let $\delta_a := \alpha_a \gamma \alpha_{\gamma^*(a)}^{-1} \in \Gamma$. Write $\delta_a$ as a word in $G$ using Proposition \ref{funddom}, and add to the column indexed by $\gamma$ the image of $\delta_a \in H$. \item Compute the action $T^+$ of the matrix $T$ on $H^+$ and return $T^+$. \end{enumalg} \end{alg} We note that Step 1 is performed as a precomputation step as it does not depend on the ideal $\mathfrak{p}$. \section{Higher level and relation to homology} In this section, we consider two topics which may be skipped on a first reading. The first considers a simplification if one fixes the quaternion algebra and varies the level $\mathfrak{N}$. The second considers the relationship between our cohomological method and the well-known method of modular symbols. \subsection*{Higher level} Let $\mathcal{O}(1)$ be a maximal order of $B$ which contains $\mathcal{O}_0(\mathfrak{N})$ and let $\Gamma(1)=\mathcal{O}(1)^\times_+/\mathbb{Z}_F^\times$. From Shapiro's lemma, we have an isomorphism \[ H^1(\Gamma(1),\operatorname{Coind}_{\Gamma}^{\Gamma(1)} V(K)) \cong H^1(\Gamma, V(K)) \] for any $K[\Gamma]$-module $V$ where $\operatorname{Coind}_{\Gamma}^{\Gamma(1)} V$ denotes the module $V$ coinduced from $\Gamma$ to $\Gamma(1)$. In particular, if one fixes $\mathfrak{D}$ and wishes to compute systems of Hecke eigenvalues for varying level $\mathfrak{N}$, one can vary the coefficient module instead of the group and so the fundamental domain algorithm need only be called once to compute a presentation for $\Gamma(1)$.
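Returning briefly to Algorithm \ref{weight2}: the bookkeeping in Steps 5 and 6 is elementary, since each column of $T$ is a sum of images of words $\delta_a$ in the free abelian group $H$, with elliptic generators discarded. A toy sketch of ours, with hypothetical word data:

```python
def word_image(word, gens):
    """Image of a word (a list of (generator, exponent) pairs) in the free
    abelian group on gens; letters outside gens (e.g. elliptic generators,
    whose classes die in H) are simply dropped."""
    v = [0] * len(gens)
    for g, e in word:
        if g in gens:
            v[gens.index(g)] += e
    return v

def hecke_matrix(gens, delta_words):
    """delta_words[gamma] lists the words delta_a, one per a in P^1(F_p);
    the column of T indexed by gamma is the sum of their images in H."""
    columns = []
    for gamma in gens:
        col = [0] * len(gens)
        for word in delta_words[gamma]:
            col = [c + i for c, i in zip(col, word_image(word, gens))]
        columns.append(col)
    return [list(row) for row in zip(*columns)]  # transpose columns into T
```

For instance, with generators $g_1,g_2$, an elliptic letter $e$, and words $\{g_1\},\{g_2\}$ attached to $g_1$ and $\{g_1^{-1}e^3\},\{g_2^2\}$ attached to $g_2$, the resulting matrix is $\begin{pmatrix} 1 & -1 \\ 1 & 2 \end{pmatrix}$.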
To compute the action of $\Gamma(1)$ on the coinduced module then, we need first to enumerate the set of cosets $\Gamma \backslash \Gamma(1)$. We use a set of side pairing elements \cite{V-fd} for $\Gamma(1)$, which are computed as part of the algorithm to compute a presentation for $\Gamma(1)$. (Side pairing elements are defined in Remark~\ref{R:sidepairing}.) \begin{alg} \label{cosets} Let $\mathfrak{N}$ be a level and let $G$ be a set of side pairing elements for $\Gamma(1)$. This algorithm computes representatives $\alpha \in \Gamma(1)$ such that $\Gamma \backslash \Gamma(1) = \bigsqcup \Gamma\alpha$. \begin{enumalg} \item Initialize $\mathcal{F} := \{1\}$ and $A := \{1\}$. Let $G^{\pm} := G \cup G^{-1}$. \item Let \[ \mathcal{F} := \{g\gamma : g \in G^{\pm},\ \gamma \in \mathcal{F},\ g\gamma\alpha^{-1} \not\in \Gamma \text{ for all $\alpha \in A$}\} \] and let $A := A \cup \mathcal{F}$. If $\mathcal{F} = \emptyset$, then return $A$; else, return to Step 2. \end{enumalg} \end{alg} \begin{proof}[Proof of correctness] Consider the (left) Cayley graph of $\Gamma(1)$ on the set $G^{\pm}$: this is the graph with vertices indexed by the elements of $\Gamma(1)$ and directed edges $\gamma \to \delta$ if $\delta=g\gamma$ for some $g \in G^{\pm}$. Then the set $\Gamma \backslash \Gamma(1)$ is in bijection with the set of vertices of this graph under the relation which identifies two vertices if they are in the same coset modulo $\Gamma$. Since the set $G^{\pm}$ generates $\Gamma(1)$, this graph is connected, and Step 2 is simply an algorithmic way to traverse the finite quotient graph. \end{proof} \begin{remark} Although it is tempting to try to obtain a set of generators for $\Gamma$ from the above algorithm, we note that a presentation for $\Gamma$ may be arbitrarily more complicated than that of $\Gamma(1)$ and require many more elements than the number of cosets (due to the presence of elliptic elements).
For this reason, it is more efficient to work with the coinduced module than to compute separately a fundamental domain for $\Gamma$. \end{remark} To compute the Hecke operators as in \S 5 using this simplification, the same analysis applies and the only modification required is to ensure that the elements $\alpha_a$ arising in (\ref{deltaa}) satisfy $\alpha_a \in \mathcal{O}_0(\mathfrak{N})$---this can be obtained by simply multiplying by the appropriate element $\alpha \in \Gamma \backslash \Gamma(1)$, and in order to do this efficiently one can use a simple table lookup. \subsection*{Homology} We now relate the cohomological approach taken above to the method using homology. Although the relationship between these two approaches is intuitively clear as the two spaces are dual, this alternative perspective also provides a link to the theory of modular symbols, which we briefly review now. Let $S_2(N)$ denote the $\mathbb{C}$-vector space of classical modular cusp forms of level $N$ and weight $2$. Integration defines a nondegenerate Hecke-equivariant pairing between $S_2(N)$ and the homology group $H_1(X_0(N),\mathbb{Z})$ of the modular curve $X_0(N)$. Let $\mathcal{S}_2(N)$ denote the space of cuspidal modular symbols, i.e.\ linear combinations of paths in the completed upper half-plane $\mathcal{H}^*$ whose endpoints are cusps and whose images in $X_0(N)$ are linear combinations of loops. Manin showed that there is a canonical isomorphism \begin{center} $\mathcal{S}_2(N) \cong H_1(X_0(N),\mathbb{Z})$. \end{center} If $\SL_2(\mathbb{Z})=\bigsqcup_i \Gamma_0(N) \gamma_i$, then the set of symbols $\gamma_i\{0,\infty\}=\{\gamma_i(0),\gamma_i(\infty)\}$ generate the space $\mathcal{S}_2(N)$. We have an explicit description of the action of the Hecke operators on the space $\mathcal{S}_2(N)$, and the \emph{Manin trick} provides an algorithm for writing an arbitrary modular symbol as a $\mathbb{Z}$-linear combination of the symbols $\gamma_i\{0,\infty\}$.
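The Manin trick itself is short enough to sketch: the continued-fraction convergents $p_k/q_k$ of a rational $r$, padded with $0/1$ and $1/0$, give a chain $0 \to \infty \to a_0 \to \dots \to r$ of unimodular paths, each of the form $g\{0,\infty\}$ with $g \in \GL_2(\mathbb{Z})$. A sketch of ours, in one common convention:

```python
from fractions import Fraction

def manin_decomposition(r):
    """Matrices g_k = [[p_k, p_(k-1)], [q_k, q_(k-1)]] expressing {0, r}
    as the sum of the unimodular symbols g_k {0, oo}."""
    p, q = r.numerator, r.denominator
    digits = []
    while q:                        # Euclidean algorithm = continued fraction
        digits.append(p // q)
        p, q = q, p % q
    convs = [(0, 1), (1, 0)]        # p_(-2)/q_(-2) = 0/1, p_(-1)/q_(-1) = 1/0
    for a in digits:
        convs.append((a * convs[-1][0] + convs[-2][0],
                      a * convs[-1][1] + convs[-2][1]))
    return [((convs[k][0], convs[k - 1][0]), (convs[k][1], convs[k - 1][1]))
            for k in range(1, len(convs))]

# {0, 9/4} = {0, oo} + {oo, 2} + {2, 9/4}: three unimodular symbols
print(manin_decomposition(Fraction(9, 4)))
```

Each returned matrix has determinant $\pm 1$, and consecutive paths share endpoints, so the sum telescopes to $\{0,r\}$.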
The Shimura curves $X=X_0^{\mathfrak{D}}(\mathfrak{N})$ do not have cusps, and so this method does not generalize directly. Consider instead the sides of a fundamental domain $D$ for $\Gamma=\Gamma_0^{\mathfrak{D}}(\mathfrak{N})$, and let $G$ be the corresponding set of side pairing elements and $R$ the set of minimal relations among them \cite{V-fd}. The side pairing gives an explicit characterization of the gluing relations which describe $X$ as a Riemann surface, hence one obtains a complete description of the homology group $H_1(X,\mathbb{Z})$. Let $V$ be the set of midpoints of sides of $D$. Then for each $v \in V$, there is a unique $\gamma \in G$ such that $w=\gamma v \in V$, and the path from $v$ to $\gamma v$ in $\mathcal{H}$, which we denote by $p(\gamma)=\{v,\gamma v\}$, projects to a loop on $X$. Each relation $r=\gamma_1 \gamma_2 \cdots \gamma_t=1$ from $R$ induces the relation in the homology group $H_1(X,\mathbb{Z})$ \begin{align} \label{indrelat} 0 &= p(1)=p(\gamma_1 \cdots \gamma_t) \notag \\ &=\{v,\gamma_1 v\} + \{\gamma_1 v,\gamma_1\gamma_2 v\} + \dots + \{\gamma_1 \cdots \gamma_{t-1} v,\gamma_1 \cdots \gamma_{t-1}\gamma_t v\} \\ &=\{v,\gamma_1 v\} + \{v,\gamma_2 v\} + \dots + \{v,\gamma_t v\} = p(\gamma_1) + p(\gamma_2) + \dots + p(\gamma_t). \notag \end{align} In particular, if $\gamma \in G$ is an elliptic element then $p(\gamma)=0$ in $H_1(X,\mathbb{Q})$. Let $\mathcal{S}_2^{\mathfrak{D}}(\mathfrak{N})$ be the $\mathbb{Q}$-vector space generated by $p(\gamma)$ for $\gamma \in G$ modulo the relations (\ref{indrelat}) with $r \in R$. It follows that \[ H_1(X,\mathbb{Q}) \cong H_1(\Gamma,\mathbb{Q}) \cong \mathcal{S}_2^{\mathfrak{D}}(\mathfrak{N}) \] and we call $\mathcal{S}_2^{\mathfrak{D}}(\mathfrak{N})$ the space of \emph{Dirichlet-modular symbols} for $X$ (relative to $D$). The Hecke operators act in an analogous way, as follows.
If $\alpha_a$ for $a \in \mathbb{P}^1(\mathbb{F}_\mathfrak{p})$ is a set of representatives as in (\ref{deltaa}), then $\alpha_a$ acts on the path $p(\gamma)=\{v,\gamma v\}$ by $\alpha_a\{v,\gamma v\}=\{\alpha_a v, \alpha_a\gamma v\}$; if $\alpha_a \gamma = \delta_a \alpha_{\gamma^* a}= \delta_a \alpha_b$ as before, then in homology we obtain \begin{align*} \sum_{a \in \mathbb{P}^1(\mathbb{F}_\mathfrak{p})} \{\alpha_a v, \alpha_a \gamma v\} &= \sum_a \{\alpha_a v, \delta_a \alpha_b v\} = \sum_a \left(\{\alpha_a v, v\} + \{v,\delta_a v\} + \{\delta_a v, \delta_a \alpha_b v\}\right) \\ &= \sum_a \{v,\delta_a v\} + \sum_a \left(\{\alpha_a v, v\} + \delta_a\{v, \alpha_b v\}\right) \\ &= \sum_a \{v,\delta_a v\} + \sum_a \left(- \{v,\alpha_a v\} + \{v, \alpha_b v\}\right) = \sum_a \{v,\delta_a v\}. \end{align*} Thus, the action of the Hecke operators indeed agrees with that in cohomology, and so one could also rephrase our methods in terms of Dirichlet-modular symbols. The role of the Manin trick in our context is played by the solution to the word problem in $\Gamma$. When $\Gamma=\SL_2(\mathbb{Z})$, the Manin trick arises directly from the Euclidean algorithm; therefore, our methods may be seen in this light as a generalization of the Euclidean algorithm to the group $\Gamma$. This point of view has already been taken by Cremona \cite{CremonaIQF} and his students, who generalized methods of modular symbols to the case of $\SL_2(\mathbb{Z}_K)$ for $K$ an imaginary quadratic field. We point out that the analogy between fundamental domain algorithms for Fuchsian groups and continued fraction algorithms goes back at least to Eichler~\cite{Eichler}. It seems likely therefore that many other results which follow from the theory of modular symbols should hold in the context of Shimura curves and Hilbert modular varieties as well. \section{Cubic fields} In this section, we tabulate some examples computed with the algorithms illustrated above.
We perform our calculations in \textsf{Magma} \cite{Magma}. \subsection*{First example} We first consider the cubic field $F$ with $d_F=1101 = 3 \cdot 367$ and primitive element $w$ satisfying $w^3 - w^2 - 9w + 12=0$. The field $F$ has Galois group $S_3$ and strict class number $1$. The Shimura curve $X(1)=X_0^{(1)}(1)$ associated to $F$ has signature $(1; 2^2, 3^5)$---according to tables of the second author \cite{VoightShim}, this is the cubic field of strict class number $1$ with smallest discriminant such that the corresponding Shimura curve has genus $\geq 1$. We first compute the representative quaternion algebra $B=\quat{-1,-w^2 + w + 1}{F}$ with discriminant $\mathfrak{D}=(1)$ and a maximal order $\mathcal{O}$, generated as a $\mathbb{Z}_F$-algebra by $\alpha$ and the element \[ \frac{1}{4}\bigl( (-8w + 14) + (-2w + 4)\alpha + (-w + 2)\beta\bigr). \] We then compute a fundamental domain for $\Gamma(1)=\Gamma_0^{(1)}(1)$; it is shown in Figure 7.1. We may take $\mu=\beta$ as an element to represent complex conjugation; indeed, $\beta^2=-w^2 + w + 1 \in \mathbb{Z}_F^\times$ is a unit. Since the spaces $H^1(\Gamma,\mathbb{Z})^{\pm}$ are one-dimensional, the Hecke operators $T_\mathfrak{p}$ act by scalar multiplication, and the eigenvalues are listed in Table 7.2: for each prime $\mathfrak{p}$ of $\mathbb{Z}_F$ with $N(\mathfrak{p}) < 50$, we list a generator $\pi$ of $\mathfrak{p}$ and the eigenvalue $a(\mathfrak{p})$ of $T_\mathfrak{p}$.
\begin{table}[h] \[ \begin{array}{cc|cc} N \mathfrak{p} & \pi & a(\mathfrak{p}) & \#E(\mathbb{F}_{\mathfrak{p}}) \\ \hline 2\rule{0pt}{2.5ex} & w - 2 & 0 & 3 \\ 3 & w - 3 & -3 & 7 \\ 3 & w - 1 & -1 & 5 \\ 4 & w^2 + w - 7 & -3 & 8 \\ 19 & w + 1 & -6 & 26 \\ 23 & w^2 - 2w - 1 & 6 & 18 \\ 31 & 2w^2 - 19 & 3 & 29 \\ 31 & w^2 - 5 & 0 & 32 \\ 31 & 3w - 5 & 4 & 28 \\ 41 & w^2 + 2w - 7 & 0 & 42 \\ 43 & w^2 - 11 & 9 & 35 \\ 47 & 3w - 7 & -9 & 57 \end{array} \] \textbf{Table 7.2}: Hecke eigenvalues for the group $\Gamma(1)$ for $d_F=1101$ \end{table} The curve $X(1)$ has modular Jacobian $E$ and we have $\# E(\mathbb{F}_\mathfrak{p})=N \mathfrak{p} +1-a(\mathfrak{p})$, so we list these values in Table 7.2 as well. By the reduction theory of Carayol \cite{Carayol}, we know that $E$ is an elliptic curve over $F$ with everywhere good reduction. (Compare this result with calculations of Demb\'el\'e and Donnelly \cite{DD}.) We note that since the $a(\mathfrak{p})$ for the primes $\mathfrak{p}$ with $N\mathfrak{p}=31$ are not equal, the curve $E$ does not arise as the base change of a curve defined over $\mathbb{Q}$. To find a candidate curve $E$, we begin by searching for a curve over $F$ with everywhere good reduction. We follow the methods of Cremona and Lingham \cite{CremonaLingham}. \begin{remark} One possible alternative approach to find an equation for the curve $E$ would be to use the data computed above to give congruence conditions on a minimal Weierstrass model for $E$ \[ E:y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6 \] with $a_i \in \mathbb{Z}_F$.
For example, by coordinate change for $y$, without loss of generality we may assume that $a_1,a_3$ are chosen from representatives of the set $\mathbb{Z}_F/2\mathbb{Z}_F$, and since the reduction of $E$ modulo the prime $\mathfrak{p}$ with $N\mathfrak{p}=2$ is supersingular, with a Weierstrass model of the form $y^2+y=f(x)$, we may assume that $a_1 \equiv 0 \pmod{w-2}$ and $a_3 \equiv 1 \pmod{w-2}$, leaving $4$ possibilities each for $a_1,a_3$. In a similar way, we obtain further congruences. In our case, this approach fails to find a model, but we expect it will be useful in many cases. \end{remark} \begin{remark} When $F$ is a real quadratic field, there are methods of Demb\'el\'e which apply \cite{DembeleRealQuadratic} to find an equation for the curve $E$ by computing the real periods of $E$. \end{remark} In our situation, where $F$ has (strict) class number $1$, we conclude (see Cremona and Lingham \cite[Propositions 3.2--3.3]{CremonaLingham}) that there exists $u \in \mathbb{Z}_F^{\times}/\mathbb{Z}_F^{\times 6}$ and an integral point on the elliptic curve $E_u:y^2=x^3-1728u$ such that $E$ has $j$-invariant $j=x^3/u=1728+y^2/u$. We need not provably enumerate all such curves, and we content ourselves with finding as many such integral points as we can. Indeed, we find for the unit $u=-1506w^2 + 6150w - 5651 \in \mathbb{Z}_F^{\times}$ that the curve $E_u$ has rank $3$ and we find an integral point \[ (-11w^2 - 24w + 144, -445w^2 + 1245w - 132) \in E_u(F); \] this point corresponds to a curve with $j$-invariant \[ (-2w^2-4w+7)^9(2w^2+w-17)^3(w-2)^{18}(w-3)^6 = -1805w^2 - 867w + 14820 \] where we note that the first two terms are units and recall that $N(w-2)=2$, $N(w-3)=-3$.
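The norms just quoted are easy to reproduce: since the minimal polynomial $f$ of $w$ is monic, the norm of $g(w)$ is the resultant $\operatorname{Res}_x(f,g)$. A quick \textsf{SymPy} check of ours:

```python
from sympy import Symbol, resultant

x = Symbol('x')
f = x**3 - x**2 - 9*x + 12           # minimal polynomial of w, d_F = 1101

def norm(g):
    """N_{F/Q}(g(w)) = Res_x(f, g), the product of g over the roots of f."""
    return resultant(f, g, x)

print(norm(x - 2), norm(x - 3))      # -> 2 -3, as quoted above
```

The same check shows that $-2w^2-4w+7$ and $2w^2+w-17$ have norm $\pm 1$, i.e.\ are units.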
We then find an appropriate quadratic twist of this curve which has conductor $(1)=\mathbb{Z}_F$ as follows: \[ A: y^2 + w(w + 1)xy + (w + 1)y = x^3 + w^2x^2 + a_4x + a_6 \] where \begin{center} {\footnotesize $a_4 = -139671409350296864w^2 - 235681481839938468w + 623672370161912822$} \end{center} and \begin{center} {\footnotesize $a_6 = 110726054056401930182106463w^2 + 186839095087977344668356726w - 494423184252818697135532743$}. \end{center} We verify that $\#A(\mathbb{F}_\mathfrak{p})=N(\mathfrak{p})+1-a(\mathfrak{p})$ in agreement with the above table, so this strongly suggests that the Jacobian $E=J(1)$ of $X(1)$ is isogenous to $A$ (and probably isomorphic to $A$). To prove that in fact $E$ is isogenous to $A$, we use the method of Faltings and Serre. (For an exposition of this method, see Sch\"utt \cite[\S 5]{Schutt} and Dieulefait, Guerberoff, and Pacetti \cite[\S 4]{Dieule}, and the references contained therein.) Let $\rho_E,\rho_A: \Gal(\overline{F}/F) \to \GL_2(\mathbb{Z}_2)$ be the $2$-adic Galois representations associated to the $2$-adic Tate modules of $E$ and $A$, respectively, and let $\overline{\rho}_E,\overline{\rho}_A : \Gal(\overline{F}/F) \to \GL_2(\mathbb{F}_2)$ be their reductions modulo $2$. We will show that we have an isomorphism $\overline{\rho}_E \cong \overline{\rho}_A$ of absolutely irreducible representations and then lift this to an isomorphism $\rho_E \cong \rho_A$ by comparing traces of $\rho_E$ and $\rho_A$. It then follows from work of Faltings that $E$ is isogenous to $A$. First we show that the representations $\overline{\rho}_E$ and $\overline{\rho}_A$ are absolutely irreducible and isomorphic. (It is automatic that they have the same determinant, the cyclotomic character.) For $E$, it is clear from Table 7.2 that the image of $\overline{\rho}_E$ contains elements both of even and odd order and so must be all of $\GL_2(\mathbb{F}_2) \cong S_3$.
For $A$, we verify that the $2$-division polynomial is irreducible, and adjoining a root of this polynomial to $F$ we obtain a field which is more simply given by the polynomial \[ x^3 + (w + 1)x^2 + (w^2 + w - 5)x + (w^2 + w - 7) \] and with relative discriminant $4\mathbb{Z}_F$; the splitting field $L$ of this polynomial indeed has Galois group isomorphic to $S_3$. At the same time, the field cut out by $\overline{\rho}_E$ is an $S_3$-extension of $F$ which is unramified away from $2$. We may enumerate all such fields using class field theory, as follows. Any such extension has a unique quadratic subextension which is also unramified away from $2$ and by Kummer theory is given by adjoining the square root of a $2$-unit $\alpha \in \mathbb{Z}_F$. Since $2\mathbb{Z}_F=\mathfrak{p}_2\mathfrak{p}_2'$ with $\mathfrak{p}_2=(w-2)\mathbb{Z}_F$, $\mathfrak{p}_2'=(w^2+w-7)\mathbb{Z}_F$ (of inertial degrees $1,2$, respectively), we see that there are $31=2^{3+2}-1$ possibilities for $\alpha$. Now for each quadratic extension $K$, we look for a cyclic cubic extension which is unramified away from $2$ and which generates a Galois field over $F$. By class field theory, we compute the complete list, and in fact we find a unique such field and thereby recover the field $L$. The field $L$ arises from the quadratic subfield $K=F(\sqrt{u})$, where $u=-19w^2-32w+85 \in \mathbb{Z}_F^{\times}$; the class group of $K$ is trivial, but the ray class group modulo $\mathfrak{p}_2=(w-2)\mathbb{Z}_F$ is isomorphic to $\mathbb{Z}/3\mathbb{Z}$. We have therefore indeed shown that $\overline{\rho}_E$ and $\overline{\rho}_A$ are isomorphic. Next, we lift this isomorphism to one between $\rho_E$ and $\rho_A$. Following Faltings and Serre, we must first compute all quadratic extensions $M$ of $L$ which are unramified away from $2$.
We find that the $2$-unit group of $\mathbb{Z}_L$ modulo squares has rank $16$ over $\mathbb{Z}/2\mathbb{Z}$; in other words, we have $16$ independent quadratic characters of $L$. To each odd ideal $\mathfrak{A}$ of $\mathbb{Z}_L$, we associate the corresponding vector $v(\mathfrak{A}) = (\chi(\mathfrak{A}))_{\chi} \in \{\pm 1\}^{16}$ of values of these $16$ characters. Then for each $v \in \{\pm 1\}^{16}$, we need to find an odd ideal $\mathfrak{A}$ of $\mathbb{Z}_L$ such that $v(\mathfrak{A})=v$ and then verify that $\rho_A(\Frob_\mathfrak{a})=\rho_E(\Frob_\mathfrak{a})$, where $\mathfrak{a} = \mathbb{Z}_F \cap \mathfrak{A}$. For this, it is enough to verify that $\tr \rho_A(\Frob_\mathfrak{a})=\tr \rho_E(\Frob_\mathfrak{a})$. Moreover, by multiplicativity, thinking of $\{\pm 1\}^{16}$ as an $\mathbb{F}_2$-vector space in a natural way, it is enough to verify this for a set of odd primes $\mathfrak{P}$ of $\mathbb{Z}_L$ such that the values $v(\mathfrak{P})$ span $\{\pm 1\}^{16}$. We find that the set of primes $\mathfrak{P}$ of $\mathbb{Z}_L$ which lie above primes $\mathfrak{p}$ of $\mathbb{Z}_F$ of norm at most $239$ will suffice for this purpose, and for this set we indeed verify that the Hecke eigenvalues agree. (In other words, we verify that $\#E(\mathbb{F}_{\mathfrak{p}})=\#A(\mathbb{F}_{\mathfrak{p}})$ for enough primes $\mathfrak{p}$ of $\mathbb{Z}_F$ to ensure that in fact equality holds for all odd primes $\mathfrak{p}$ of $\mathbb{Z}_F$, whence $\rho_E \cong \rho_A$.) Therefore $E$ is isogenous to $A$, and we have explicitly identified the isogeny class of the Jacobian of the Shimura curve $X(1)$ over $F$. \subsection*{Second example} As a second example, we consider the Galois cubic field with $d_F=1369=37^2$, so that $F$ is the (unique totally real) cubic field in $\mathbb{Q}(\zeta_{37})$. The field $F$ is generated by an element $w$ satisfying $w^3-w^2-12w-11=0$.
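Before continuing, we note that the spanning condition in the verification of the first example is plain linear algebra over $\mathbb{F}_2$: recording each character value $\pm 1$ as a bit, one keeps a prime only when its vector enlarges the span, by incremental Gaussian elimination. A generic sketch of ours:

```python
def spanning_subset(vectors, dim):
    """Select a subset of bit-vectors spanning F_2^dim, by incremental
    Gaussian elimination; returns None if the input does not span."""
    basis, chosen = [], []
    for v in vectors:
        w = list(v)
        for b in basis:                         # reduce w against the basis
            pivot = next(i for i, x in enumerate(b) if x)
            if w[pivot]:
                w = [x ^ y for x, y in zip(w, b)]
        if any(w):                              # w enlarges the span: keep v
            basis.append(w)
            chosen.append(v)
        if len(basis) == dim:
            return chosen
    return None
```

In the verification above one would feed in the vectors $v(\mathfrak{P})$ prime by prime, stopping once $16$ independent vectors have been collected.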
Here, the associated Shimura curve of level and discriminant $1$ has signature $(1;2^3,3^3)$. Computing as above, we obtain the Hecke eigenvalues listed in Table 7.3. \begin{table}[h] \[ \begin{array}{c|cc} N \mathfrak{p} & a(\mathfrak{p}) & \#E(\mathbb{F}_\mathfrak{p}) \\ \hline 8\rule{0pt}{2.5ex} & -5 & 14 \\ 11 & -2 & 14 \\ 23 & -4 & 28 \\ 27 & 0 & 28 \\ 29 & 9 & 21 \\ 31 & -10 & 42 \\ 37 & -11 & 49 \\ 43 & 2 & 42 \\ 47 & 6 & 42 \end{array} \] \textbf{Table 7.3}: Hecke eigenvalues for the group $\Gamma(1)$ for $d_F=1369$ \end{table} Let $E$ denote the Jacobian of the Shimura curve $X(1)$ under consideration. Since $\#E(\mathbb{F}_\mathfrak{p})$ is always divisible by $7$, it is reasonable to believe that $E$ has a nontrivial $7$-torsion point. A quick search for $F$-rational points on $X_1(7)$ (using the Tate model) yields a curve $A$ with minimal model \[ A: y^2 + (w^2 + w + 1)xy = x^3 + (-w - 1)x^2 + (256w^2 + 850w + 641)x + (5048w^2 + 16881w + 12777) \] with $\#A(\mathbb{F}_\mathfrak{p})$ as above. Since $A[7](F)$ is nontrivial, an argument of Skinner-Wiles \cite{SkinnerWiles} shows that in fact $A$ is isogenous to $E$. We note that the eigenvalue $a(\mathfrak{p})$ does not depend on the choice of prime $\mathfrak{p}$ above $p$, which suggests that $E$ should come as a base change from a curve defined over $\mathbb{Q}$. Indeed, it can be shown by Galois descent (arising from the functoriality of the Shimura-Deligne canonical model) that the curve $X(1)$ has field of moduli equal to $\mathbb{Q}$. Looking at the curves of conductor $1369$ in Cremona's tables \cite{Cremona}, we find that $A$ is in fact the base change to $F$ of the curve \[ \text{\textsf{1369b1}}: y^2 + xy = x^3 - x^2 + 3166x - 59359. \] \subsection*{Third example} We conclude with a third example for which the dimension of the space of cuspforms is greater than $1$. The first cubic field $F$ for which this occurs has $d_F=961=31^2$, and the signature of the corresponding Shimura curve $X(1)$ is $(2;2^4,3)$.
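As an aside, the identification in the second example above admits a quick numerical sanity check: the rational prime $11$ splits completely there (the table lists $N\mathfrak{p}=11$), so the residue field of a prime above $11$ is $\mathbb{F}_{11}$, and a brute-force point count on \textsf{1369b1} over $\mathbb{F}_{11}$ must reproduce the entry $\#E(\mathbb{F}_\mathfrak{p})=14$ of Table 7.3. A sketch of ours:

```python
def count_points(p):
    """Brute-force count of points on y^2 + xy = x^3 - x^2 + 3166x - 59359
    (the curve 1369b1) over F_p, including the point at infinity."""
    count = 1                                   # the point at infinity
    for x in range(p):
        rhs = (x**3 - x**2 + 3166 * x - 59359) % p
        for y in range(p):
            if (y * y + x * y) % p == rhs:
                count += 1
    return count

print(count_points(11))   # -> 14 = 11 + 1 - a(p) with a(p) = -2
```

Note also that $14$ is divisible by $7$, as forced by the $7$-torsion point discussed above.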
Since this field $F$ is Galois, by Galois descent the corresponding $L$-function is a base change from $\mathbb{Q}$. We list the characteristic polynomial $\chi(T_\mathfrak{p})$ of Hecke operators $T_\mathfrak{p}$ in Table 7.4. \begin{table}[h] \[ \begin{array}{c|c} N \mathfrak{p} & \chi(T_\mathfrak{p}) \\ \hline 2\rule{0pt}{2.5ex} & x^2+2x-1 \\ 23 & x^2+8x+16 \\ 27 & x^2-4x-28 \\ 29 & x^2+8x+8 \\ 31 & x^2 + 20x + 100 \\ 47 & x^2 - 8x - 16 \end{array} \] \textbf{Table 7.4}: Hecke eigenvalues for the group $\Gamma(1)$ for $d_F=961$ \end{table} We see that $\mathbb{Q}(T_\mathfrak{p})=\mathbb{Q}(\sqrt{5})$, so the Jacobian $J$ of $X(1)$ is an abelian surface with real multiplication by $\mathbb{Q}(\sqrt{5})$. Computing via modular symbols the space of newforms for the classical modular group $\Gamma_0(961)$, we find that these characteristic polynomials indeed match a newform with $q$-expansion \[ q + wq^2 + wq^3 + (-2w - 1)q^4 + \dots -4q^{23} + \dots - (2w + 6)q^{29} + \dots \] where $w^2+2w-1=0$. It is interesting to note that there are six $2$-dimensional (irreducible) new Hecke eigenspaces for $\Gamma_0(961)$ and only one of the corresponding forms corresponds to an abelian surface which acquires everywhere good reduction over $F$. \subsection*{Comments} Combining the above methods with those of Demb\'el\'e and Donnelly, one can systematically enumerate Hilbert modular forms over a wide variety of totally real fields and conductors. This project has been initiated by Donnelly and the second author \cite{VoightDonnelly}, and we refer to this upcoming work for many, many more examples for fields up to degree $6$. We conclude with a few comments on the efficiency of our algorithms. Unfortunately, we have no provable assessment of the running time of our method. 
In practice, the computation of a fundamental domain seems to be the most (unpredictably) time-consuming step, taking on the order of minutes on a standard PC for cubic and quintic fields of small discriminant; but this step should be considered a precomputation step, as it needs to be done only once for each field $F$. The computation of a single Hecke operator takes on the order of seconds (for the examples above) to a few minutes (for quintic fields), the most onerous step being the principalization step (Proposition \ref{KVprinc}) and the bookkeeping involved in working with the induced module; with further careful optimization, we believe that this running time can be substantially lowered. \end{document}
\begin{document} \title{Simple Mechanisms for Welfare Maximization in Rich Advertising Auctions\thanks{Alexandros Psomas was supported in part by a Google Research Scholar Award. The work of Divyarthi Mohan was partially supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement no.~866132). Part of this work was done when Mohan and Psomas were employed at Google Research.}} \maketitle \begin{abstract} Internet ad auctions have evolved from a few lines of text to richer informational layouts that include images, sitelinks, videos, etc. Ads in these new formats occupy varying amounts of space, and an advertiser can provide multiple formats, only one of which can be shown. The seller is now faced with a multi-parameter mechanism design problem. Computing an efficient allocation is computationally intractable, and therefore the standard Vickrey-Clarke-Groves (VCG) auction, while truthful and welfare-optimal, is impractical. In this paper, we tackle a fundamental problem in the design of modern ad auctions. We adopt a ``Myersonian'' approach and study allocation rules that are monotone both in the bid and in the set of rich ads. We show that such rules can be paired with a payment function to give a truthful auction. Our main technical challenge is designing a monotone rule that yields a good approximation to the optimal welfare. Monotonicity does not hold for standard algorithms, e.g., greedy allocation in the incremental bang-per-buck order, that give good approximations to ``knapsack-like'' problems such as ours. In fact, we show that no deterministic monotone rule can approximate the optimal welfare within a factor better than $2$ (while there is a non-monotone FPTAS). Our main result is a new, simple, greedy and monotone allocation rule that guarantees a $3$-approximation. 
In ad auctions in practice, monotone allocation rules are often paired with the so-called \emph{Generalized Second Price (GSP)} payment rule, which charges the minimum threshold price below which the allocation changes. We prove that, even though our monotone allocation rule paired with GSP is not truthful, its Price of Anarchy (PoA) is bounded. Under the standard no-overbidding assumption, we prove a pure PoA bound of $6$ and a Bayes-Nash PoA bound of $\frac{6}{1 - 1/e}$. Finally, we experimentally test our algorithms on real-world data. \end{abstract} \section{Introduction} \label{sec:intro} Internet ad auctions, in addition to being influential in advancing auction theory and mechanism design, are a half-a-trillion dollar industry~\cite{emarketer}. A significant advertising channel is sponsored search advertising: ads that are shown along with search results when you type a query in a search box. These ads traditionally were a few lines of text and a link, leading to the standard abstraction for ad auctions: multiple items for sale to a set of unit-demand bidders, where each bidder $i$ has a private value $v_{i} \cdot \alpha_{is}$ for the ad in position $s$, which has click-through rate $\alpha_{is}$. However, when using your favorite search engine, you might instead encounter sitelinks/extensions leading to parts of the advertiser's website, seller ratings indicating how other users rate this advertiser, or offers for specific products. Advertisers can often explicitly opt in or out of showing different extensions with their ads. In fact, some extensions require the advertiser to provide additional assets, e.g., sitelinks, phone numbers, prices, promotions, etc., and an ad cannot be shown unless this additional information is available. All these extensions/decorations change the amount of \emph{space} the ad occupies, as well as affect the probability of a user clicking on the ad. 
The new and unexplored abstraction for modern ad auctions is now to select a set of ads that fit within the given total space. In this paper, we study the problem of designing a \emph{truthful} auction to determine the best set of ads that can be shown, with the goal of maximizing the social welfare. More formally, we consider the \emph{Rich Ads} problem. In our model, each advertiser specifies a value per click $v_i$ and a set of rich ads. Each ad has an associated probability of click $\alpha_{ij}$ and a space $w_{ij}$ that it would occupy if shown. The space and click probabilities are publicly known. Crucially, advertisers' private information is \emph{not} single-dimensional. In addition to misreporting her value, there is another strategic dimension available to an advertiser: she can report only a subset of the set of ads available if the allocation under this report improves her utility. The open problem we address in this paper is whether there exist simple, approximately optimal and truthful mechanisms for Rich Ads. \subsection{Results and Techniques} The classic Vickrey-Clarke-Groves (VCG) mechanism is truthful and maximizes welfare for our setting, but it is computationally intractable: maximizing welfare is NP-hard (even without truthfulness) since our problem generalizes the {\sc Knapsack} problem. It is also well known that coupling an approximation algorithm for welfare with VCG payments does not result in a truthful mechanism. Maximal-in-range mechanisms~\cite{NisanR01}, which optimize social welfare over a restricted domain, are one way around such situations, but they have limited use here, since the range of possible outcomes (allocations) has to be committed to before seeing the bidders' preferences (i.e., it needs to be independent of the bidders' reports). For single-parameter problems, Myerson's lemma~\cite{Myerson81} can be used to obtain a truthful mechanism, as long as the allocation rule is monotone. 
The Rich Ads problem is not a single-parameter problem, so this approach does not immediately work. However, similar to the inter-dimensional (or ``one-and-a-half'' dimensional) regime~\cite{FiatGKK16,devanur2020optimal1,devanur2020optimal2}, we can extend Myerson's lemma to our domain. We show that an allocation rule that is monotone in the bid and the set of rich ads\footnote{Here, an allocation rule is defined to be monotone in the set of rich ads if the expected clicks allocated to an advertiser can only increase when the advertiser reports a superset of rich ads.} can be paired with a payment rule to obtain a truthful mechanism. Incentive issues aside, the Rich Ads problem is an extension of the {\sc Knapsack} problem, called {\sc Multi-Choice Knapsack}: in addition to the knapsack constraint, we also have constraints that allow allocating (at most) one rich ad per advertiser. As an algorithmic problem, this is well studied~\cite{Sinha79,lawler1979fast}. The optimal fractional allocation can be derived using a simple \emph{greedy} algorithm that follows the \emph{incremental bang-per-buck} order. However, it turns out that the optimal (integral or otherwise) allocation, as well as other natural allocations, are not monotone. In fact, as we show, no deterministic (resp. randomized) monotone allocation rule can obtain more than half (resp. an $11/12$ fraction) of the optimal social welfare. In contrast, without the monotonicity constraint, there is an FPTAS for the {\sc Multi-Choice Knapsack} problem~\cite{lawler1979fast}. Our \emph{main result} is an integral allocation rule that is monotone and obtains at least a third of the optimal (fractional) social welfare. Pairing it with an appropriate payment function, we get the following (informal) theorem. \begin{informal}\label{informal1} There exists a simple truthful mechanism, computable in polynomial time, which obtains a $3$-approximation to the optimal social welfare. 
\end{informal} To obtain this result, we first find an allocation of space amongst the advertisers. In contrast to the optimal fractional algorithm described above, which allocates greedily using the incremental bang-per-buck order, our algorithm allocates greedily using an \emph{absolute bang-per-buck} order. Crucially, the \emph{space} allocated to each advertiser in this way is monotone, even though the expected number of clicks (i.e., the utility) of the bang-per-buck algorithm itself is \emph{not} monotone. By post-processing to utilize this space optimally for each advertiser, we obtain an integral allocation that is monotone in the expected number of clicks. We prove that this allocation gives a $2$-approximation to the optimal fractional welfare, minus the largest value ad. Finally, by randomizing between this integral allocation (with probability $2/3$) and the largest value ad (with probability $1/3$), we get a $3$-approximation to the optimal social welfare. Since the overall allocation rule is monotone, we can pair it with an appropriate payment rule to get a truthful mechanism. We proceed to further explore the merits of our monotone allocation rule by pairing it with the \emph{Generalized Second Price (GSP)} payment rule, which charges each advertiser the minimum threshold (on their bid) below which their allocation changes. The overall auction is not truthful. However, we can analyze its performance by bounding its social welfare in a worst-case equilibrium. In particular, we consider the full-information pure Nash equilibrium, where bidders best-respond to a profile of competitors' bids, as well as the incomplete-information Bayes-Nash equilibrium, where the bidders best-respond to a distribution of valuation draws and bids of the competitors. The corresponding pure Price of Anarchy (PoA) and Bayes-Nash Price of Anarchy are the ratios of the optimal social welfare to the welfare of the worst equilibrium. 
In either setting, we make the standard no-overbidding assumption, under which bidders do not bid more than their value. This assumption is required, as without it the PoA of even the single-item second-price auction (which is truthful) can be unbounded.\footnote{Consider, for example, the equilibrium where all bidders bid $0$, except the lowest bidder, who bids infinity.} \begin{informal} There exists a simple mechanism with a monotone allocation rule, paired with the GSP payment rule, which under the no-overbidding assumption guarantees a pure Price of Anarchy (resp. Bayes-Nash PoA) of at most $6$ (resp. $\frac{6}{1-1/e}$). \end{informal} We prove our PoA bounds by identifying a suitable deviation for each advertiser, and bounding the advertiser's utility in this deviation relative to the social welfare of the optimal integral allocation, of our integral bang-per-buck allocation (in the equilibrium), and of the largest value ad (in the equilibrium); as opposed to single-dimensional PoA bounds, the knapsack constraint in our setting introduces a number of technical obstacles we need to bypass. To prove a bound for the Bayes-Nash PoA, we combine techniques from our pure PoA bound with the standard smoothness framework~\cite{Roughgarden-intrinsic}. In particular, the smoothness part of our argument is very similar to that of~\cite{caragiannis} for the Bayes-Nash PoA of the standard GSP position auction. Due to the specific form of the smoothness framework that we use, our bound also applies to mixed, correlated, coarse-correlated and Bayes-Nash equilibria with correlated valuations. Finally, we provide an empirical evaluation of our mechanism on real-world data from a large search engine. We compare the performance of our mechanism with VCG and with the fractional-optimal allocation, which does not account for incentives. Our empirical results show that our allocation rule obtains at least a $0.4$ fraction of the optimal welfare in the worst case. 
However, there are many instances where our allocation rule is almost as good as VCG. In fact, the average approximation factor of our allocation rule is $0.97$. Furthermore, our mechanisms are significantly faster than VCG, even with the Myersonian payment computation. We also empirically evaluate heuristic extensions of our algorithms when there is a bound on the total number of distinct rich ads shown. \subsection{Related work} Traditional sponsored search auctions have been studied extensively~\cite{AggarwalGM06,Varian07, EdelmanOS07}. A number of recent works relax the traditional model of sponsored search auctions~\cite{Colini-Baldeschi20,HUMMEL2016} and introduce different versions of the ``rich ads'' problem~\cite{Deng10,CavalloKSW17,GhiasiHLY19,HartlineIKLN18}; the specific model we study in this paper is new.~\citet{Deng10} are the first to formulate a rich ad problem where ads can occupy multiple slots. They analyze VCG and GSP variants for a special version of the rich ad problem where ads can be of only one of two possible sizes. They leave the problem addressed in this paper as an open problem for future work. Much of the literature focuses on GSP-like rules (e.g., because the cost of switching from existing GSP to VCG can be high~\cite{VarianH14}).~\citet{CavalloKSW17} consider the more general rich ad problem where there are constraints on the number of ads shown and position effects in the click-through rate. But their setting is still single-parameter --- advertisers report a bid per click and cannot misreport the set of rich ads. They provide a local search algorithm that runs in polynomial time and a generalized GSP-like pricing to go with it. However, as opposed to our interest here, their auction is not truthful, nor do they give any approximation guarantees.~\citet{GhiasiHLY19} consider the optimization problem when the probability of click is submodular or subadditive in the size of the rich ad. 
They give an LP-rounding based algorithm that provides a $4$-approximation for submodular and an $\Omega(\frac{\log m}{\log \log m})$-approximation for subadditive, with respect to the social welfare assuming truthful bidding. They, however, do not provide a truthful payment rule, or any PoA guarantees. These works also focus on a single-dimensional setting (where the advertiser is strategic about its bid but the set of ads is publicly known). In contrast, we consider a multi-dimensional setting. Our simple and truthful mechanism also has a \emph{monotone} allocation function, so we pair it with GSP as well. The design of core or core-competitive auctions in the rich ad setting was presented in \cite{HartlineIKLN18, GoelKL15}. These works compromise on truthfulness or social welfare to achieve revenue competitiveness. The PoA of the GSP auction for text ads was studied in~\cite{caragiannis2011efficiency, lucier2010price, lucier2011gsp}. Our PoA bounds use the smoothness framework introduced in~\cite{Roughgarden-intrinsic}, and later extended by~\cite{caragiannis} to show PoA bounds for GSP (as well as in~\cite{Roughgarden-bayespoa} and~\cite{syrgkanis2013composable} for more general use). Finally, there is a separate, but related, research agenda of reducing Bayesian incentive compatible mechanism design to Bayesian algorithm design for welfare optimization~\cite{HartlineL10, BeiH11, HartlineKM15, Dughmi21}. We note that here we do not assume access to priors\footnote{This also rules out prophet-inequality type solutions, e.g.~\citet{DuttingFKL20}.} and focus on worst-case approximation guarantees and truthfulness in dominant strategies (not Bayesian incentive compatibility). \section{Preliminaries}\label{sec: prelims} \subsection{Rich Ad Model} We introduce the following model for the rich ads auction problem. There is a set $\mc{N}$ of $n$ advertisers and a universe of rich ads $\mathcal{S}$. 
Each advertiser $i \in \mc{N}$ has a private value per click $v_i$ and a private set of rich ads $A_i \subseteq \mathcal{S}$.\footnote{We expect the rich ads to be tailored to an advertiser, so we assume that $A_i\cap A_{i'} = \emptyset$, for all $i, i' \in \mc{N}$.} We use $\mathbf{v} = (v_1, v_2, \ldots, v_n)$ to denote the vector of values per click and $\mathbf{A} = (A_1, A_2, \ldots, A_n)$ to denote the vector of sets of rich ads. For every advertiser $i \in \mc{N}$, each rich ad $j \in \mathcal{S}$ has a publicly known space $w_{ij}$ and a publicly known probability of click $\alpha_{ij}$.\footnote{Note that this is safe to assume. The space consumed by a rich ad is evident when the rich ad is provided. The probability of click can be predicted by the platform (e.g., using machine learning models).} We use $v_{ij}$ for the value of rich ad $j$ for advertiser $i$. If $j \notin A_i$, then the value of advertiser $i$ is $v_{ij} = 0$; otherwise, $v_{ij} = \alpha_{ij} v_i$. An advertiser can be allocated at most one of the ads from the set $\mathcal{S}$. Finally, there is a limit $W$ on the total space occupied by the ads. We assume without loss of generality that for each $i$ and each $j$, $w_{ij} \leq W$, as any ad that is larger than $W$ cannot be allocated integrally in space $W$. A (randomized or fractional) allocation $\mathbf{x} \in [0,1]^{n \times |\mathcal{S}|}$ indicates the probability $x_{ij}$ that ad $j$ is allocated to advertiser $i$. An allocation is feasible if each advertiser gets at most one ad, i.e., $\sum_{j \in \mathcal{S}} x_{ij} \leq 1$ for all $i \in \mc{N}$, and the total space used is at most $W$, i.e., $\sum_{i \in \mc{N}, j \in \mathcal{S}} x_{ij} w_{ij} \leq W$. An allocation is integral if $x_{ij} \in \{ 0 , 1 \}$, for all $i \in \mc{N}$ and $j \in \mathcal{S}$. Our goal is to maximize social welfare. For an allocation $\mathbf{x} = Alg(\mathbf{v}, \mathbf{A})$, the welfare is $SW(Alg(\mathbf{v}, \mathbf{A})) = \sum_{i, j} x_{ij} v_{ij}$. 
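For concreteness, the welfare objective and the two feasibility constraints just described can be checked on toy instances against a brute-force benchmark. The following is an illustrative, exponential-time sketch (the function name is ours, not the paper's); it enumerates every integral choice of at most one ad per advertiser:

```python
from itertools import product

# Illustration only: exact optimal integral welfare by exhaustive search.
# Each advertiser picks at most one ad; None plays the role of a null ad.
# Exponential in the number of advertisers, so usable only on toy instances.

def opt_integral_welfare(ads, W):
    """ads: list (one entry per advertiser) of lists of (value, space)."""
    best = 0.0
    for pick in product(*[[None] + lst for lst in ads]):
        chosen = [ad for ad in pick if ad is not None]
        if sum(w for _, w in chosen) <= W:  # knapsack constraint
            best = max(best, sum(v for v, _ in chosen))
    return best
```

On the two-advertiser instance with ads $(1,1)$ and $(1.5,2)$ each and $W=3$, the enumeration returns $2.5$ (one small ad plus one large ad), matching the hand computation.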
We can write an integer program for the optimal allocation as follows, by introducing a binary variable $x_{ij} \in \{0, 1\}$ for the allocation of advertiser $i \in \mc{N}$ and rich ad $j \in \mathcal{S}$. The objective is to maximize welfare $\sum_{ij} x_{ij} v_{ij}$, subject to a knapsack constraint $\sum_{i} \sum_{j} w_{ij} x_{ij} \leq W$, and feasibility, i.e., $\sum_{j} x_{ij} \leq 1$ for all $i \in \mc{N}$ (expressing that each advertiser can get at most one ad). \subsection{Mechanism Design Considerations} By standard revelation principle arguments, it suffices to focus on direct revelation mechanisms. Each advertiser $i \in \mc{N}$ reports a bid $b_i$ and a set of rich ads $S_i \subseteq \mathcal{S}$. Similarly to many works in the inter-dimensional regime, e.g.~\cite{malakhov2009optimal,devanur2017optimal}, we assume that $S_i\subseteq A_i$, that is, an advertiser cannot report that they want an ad they don't have. Let $\mathbf{b} = (b_1, b_2, \ldots, b_n)$ denote the vector of bids and $\mathbf{S} = (S_1, S_2, \ldots, S_n)$ denote the vector of sets of rich ads. We use $b_{ij} = b_i \cdot \alpha_{ij}$ if $j \in S_i$ and $b_{ij} = 0$ otherwise, and refer to the rich ad using a (reported value, space) tuple $(b_{ij}, w_{ij})$. A mechanism selects a set of ads to show, of total space at most $W$, and charges a payment to each advertiser. Let $x_{ij}(\mathbf{b}, \mathbf{S})$ be the probability that ad $j$ is allocated to advertiser $i$, and let $p_i(\mathbf{b}, \mathbf{S})$ denote the expected payment of advertiser $i$. Let $\mathbf{x}_i(\mathbf{b},\mathbf{S})$ be the allocation vector of advertiser $i$. We assume that any valid allocation rule satisfies $x_{ij}(\mathbf{b}, \mathbf{S}) = 0$ for all $j \notin S_i$. We slightly overload notation, and use $x_i(\mathbf{b}, \mathbf{S})$ to denote the expected number of clicks the advertiser will get; that is, $x_i(\mathbf{b}, \mathbf{S}) = \sum_{j \in S_i} x_{ij}(\mathbf{b}, \mathbf{S}) \alpha_{ij}$. 
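The overloaded notation can be made concrete with a small sketch (illustrative only; the function names and data layout are our assumptions, not the paper's): the scalar $x_i(\mathbf{b}, \mathbf{S})$ aggregates the per-ad allocation probabilities $x_{ij}$ into an expected click count.

```python
# Notation-to-code sketch (hypothetical names): x_i(b, S) is the expected
# number of clicks, obtained from the per-ad probabilities x_ij and the
# click probabilities alpha_ij; utilities are quasi-linear.

def expected_clicks(x_frac, alpha, i, S_i):
    """x_i(b, S) = sum_{j in S_i} x_ij(b, S) * alpha_ij."""
    return sum(x_frac[i][j] * alpha[i][j] for j in S_i)

def utility(v_i, clicks, payment):
    """Quasi-linear utility: v_i * x_i(b, S) - p_i(b, S)."""
    return v_i * clicks - payment
```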
When needed, we refer to the cost per click $cpc_i(\mathbf{b}, \mathbf{S}) = p_i(\mathbf{b}, \mathbf{S})/x_i(\mathbf{b}, \mathbf{S})$. Advertisers have quasi-linear utilities. An advertiser with value $v_{i}$ and set $A_i$ has utility $v_i x_i(\mathbf{b}, \mathbf{S}) - p_i(\mathbf{b}, \mathbf{S})$ when reports are according to $\mathbf{b}$ and $\mathbf{S}$. Let $u_i(v_i, A_i \rightarrow b_i, S_i; \mathbf{b}_{-i}, \mathbf{S}_{-i})$ be the utility of advertiser $i$ when her true value and set of ads are $v_i, A_i$, but she reports $b_i, S_i$, and everyone else reports according to $\mathbf{b}_{-i}, \mathbf{S}_{-i}$. For ease of notation we often drop $\mathbf{b}_{-i}, \mathbf{S}_{-i}$ when they are clear from the context. When the profile of true types is fixed, we drop $(v_i, A_i)$ and use the notation $u_i(b_i, S_i, \mathbf{b}_{-i}, \mathbf{S}_{-i})$. A mechanism is \emph{truthful} if no advertiser has an incentive to lie, i.e., for all $\mathbf{b}_{-i}, \mathbf{S}_{-i}$, $u_i(v_i, A_i \rightarrow v_i, A_i; \mathbf{b}_{-i}, \mathbf{S}_{-i}) \geq u_i(v_i, A_i \rightarrow b_i, S_i; \mathbf{b}_{-i}, \mathbf{S}_{-i})$, for all $v_i, A_i, b_i, S_i$. A mechanism is individually rational if in all of its outcomes, all agents have non-negative utility. We are interested in auctions that are computationally tractable, truthful, and individually rational, with the goal of maximizing the social welfare $SW(\mathbf{x}(\mathbf{b}, \mathbf{S})) = \sum_i v_i x_{i}(\mathbf{b}, \mathbf{S})$. Even ignoring truthfulness and individual rationality, the computational constraints rule out achieving the optimal social welfare; therefore, we seek approximately optimal mechanisms. \begin{definition} A truthful mechanism $\mathcal{M}$ obtains an $\alpha$-approximation to the social welfare if $SW(\mathbf{x}_{\mathcal{M}} ({\mathbf{v}, \mathbf{A}})) \geq SW(\mathbf{x}_{OPT}({\mathbf{v}, \mathbf{A}})) / \alpha$. 
\end{definition} \paragraph{GSP Pricing and Price of Anarchy.} In sponsored search, a popular pricing choice is the Generalized Second Price (GSP)~\cite{Varian07,EdelmanOS07}. While this is classically defined for a position auction in which $k$ ads are selected, it can be defined for any allocation algorithm that is monotone in the bid. \begin{definition}[GSP] For any allocation rule $x_i$ that is monotone in the bid, and a bid profile $(\mathbf{b}, \mathbf{S})$, the GSP per-click price for advertiser $i$ is the minimum bid below which the advertiser obtains a smaller expected number of clicks: $cpc_i(\mathbf{b}, \mathbf{S}) = \inf_{b'_i: x_i(b'_i, S_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) = x_i(b_i, S_i, \mathbf{b}_{-i}, \mathbf{S}_{-i})} b'_i$. Given a GSP cost per click, the GSP payment is the cost per click times the expected number of clicks: $p_i(\mathbf{b}, \mathbf{S}) = x_i(\mathbf{b}, \mathbf{S}) cpc_i(\mathbf{b}, \mathbf{S})$. \end{definition} The mechanism that charges GSP prices may not be truthful. We can study the Price of Anarchy (PoA)~\cite{koutsoupias1999worst} to understand the effective social welfare. The notion of Price of Anarchy captures the inefficiency of a pure Nash equilibrium. Fix a valuation profile $(\mathbf{v}, \mathbf{A})$. A set of bids $(\mathbf{b}, \mathbf{S})$ is a \emph{pure Nash equilibrium} if, for each $i$ and any $(b'_i, S'_i)$, $u_i(v_i, A_i \rightarrow b_i, S_i; \mathbf{b}_{-i}, \mathbf{S}_{-i}) \geq u_i(v_i, A_i \rightarrow b'_i, S'_i; \mathbf{b}_{-i}, \mathbf{S}_{-i})$. The pure Price of Anarchy is the ratio of the optimal social welfare to the social welfare of the worst pure Nash equilibrium of the mechanism $\mathcal{M}$: $\text{pure PoA} = \sup_{\mathbf{v}, \mathbf{A},~\text{pure Nash} (\mathbf{b}, \mathbf{S})} \frac{SW(OPT(\mathbf{v}, \mathbf{A}))}{SW(\mathcal{M}(\mathbf{b}, \mathbf{S}))}$. We will also consider the Bayes-Nash Price of Anarchy. 
Let $(\mathbf{v}, \mathbf{A})$ be drawn from a (possibly correlated) distribution $\mathcal{D}$, and let $D_i$ be the marginal of $i$ in $\mathcal{D}$. A \emph{Bayes-Nash equilibrium} is a (possibly randomized) mapping from a value and set of rich ads $(v_i, A_i)$ to a reported type $(b_i(v_i, A_i), S_i(v_i, A_i))$ for each $i$ and $(v_i, A_i) \in Support({D}_i)$ such that, for any $b_i'$ and $S_i' \subseteq A_i$, $$ \mathbb{E}_{\mathbf{v}_{-i}, \mathbf{A}_{-i}, \mathbf{b}_{-i}, \mathbf{S}_{-i}}[u_i(v_i, A_i \rightarrow b_i, S_i; \mathbf{b}_{-i}, \mathbf{S}_{-i})] \geq \mathbb{E}_{\mathbf{v}_{-i}, \mathbf{A}_{-i}, \mathbf{b}_{-i}, \mathbf{S}_{-i} }[u_i(v_i, A_i \rightarrow b_i', S_i'; \mathbf{b}_{-i}, \mathbf{S}_{-i})] $$ In the expression above the expectation is conditioned on $(v_i, A_i)$ and is over the random draws of $\mathbf{v}_{-i}, \mathbf{A}_{-i}$ and the competitors' bids $\mathbf{b}_{-i}, \mathbf{S}_{-i}$ drawn from the mapping $b_j(v_j, A_j), S_j(v_j, A_j)$ for each $j\neq i$. The \emph{Bayes-Nash Price of Anarchy (PoA)} is the ratio of the optimal social welfare to that of the worst Bayes-Nash equilibrium of the mechanism $\mathcal{M}$: $$\text{Bayes-Nash PoA} = \sup_{\mathcal{D}, (\mathbf{b}, \mathbf{S}) \text{ Bayes-Nash eq.}}\frac{\mathbb{E}_{(\mathbf{v}, \mathbf{A}) } [SW(OPT(\mathbf{v}, \mathbf{A}))]}{\mathbb{E}_{(\mathbf{v}, \mathbf{A}), (\mathbf{b}, \mathbf{S})}[SW(\mathcal{M}(\mathbf{b}(\mathbf{v}, \mathbf{A}), \mathbf{S}(\mathbf{v}, \mathbf{A})))]}.$$ For the PoA bounds we also focus on equilibria that satisfy the \emph{no-overbidding} condition, i.e., equilibria where the bid profile $\mathbf{b}, \mathbf{S}$ satisfies $b_i \leq v_i$ for every advertiser $i$. Note that the equilibrium condition still allows for deviations that overbid. However, by the definition of GSP, overbidding is dominated. 
If an advertiser can obtain a higher expected number of clicks by bidding higher than their value, then their GSP cost per click will be larger than their value $v_i$, and the advertiser obtains negative utility. On the other hand, if the expected number of clicks is unchanged, then the GSP cost per click is also the same and the utility is unchanged as well; thus no advertiser can gain by overbidding. \subsection{Optimal fractional allocations} The algorithmic problem is a special case of a well-known variation of the {\sc Knapsack} problem, called {\sc Multi-Choice Knapsack}~\cite{Sinha79}. The integer program for {\sc Multi-Choice Knapsack} is the same as the integer program above, except that the inequality constraint $\sum_{j} x_{ij} \leq 1$ is replaced with an equality. Our problem is easily reduced to the {\sc Multi-Choice Knapsack} problem by introducing a \emph{null} ad with $(\alpha_{i0}, w_{i0}) = (0, 0)$. \citet{Sinha79} provide a characterization of the optimal fractional solution of {\sc Multi-Choice Knapsack} and a fast algorithm to compute it. As is customary in the {\sc Knapsack} literature, we refer to the ratio $\frac{b_{ik}}{w_{ik}}$ as \emph{Bang-per-Buck} and the ratio $\frac{b_{ij} - b_{ik}}{w_{ij} - w_{ik}}$ with $w_{ij} > w_{ik}$ as \emph{Incremental Bang-per-Buck}.~\citet{Sinha79} show that allocating ads in the incremental bang-per-buck order gives the optimal fractional solution. We state this standard algorithm in Appendix~\ref{app: missing prelims}. We refer to the solution constructed by this algorithm as OPT and use it as a benchmark in our approximation guarantees. We note a few more properties of OPT. \begin{fact}[\cite{Sinha79}] \label{fact: fracopt} In the optimal fractional allocation constructed by the algorithm, all advertisers except one have a rich ad allocated integrally. 
Also, for any advertiser $i$ allocated space $W_i^*$ in OPT, the allocation maximizes the value that advertiser $i$ can obtain in that space. \end{fact} This fact also implies a $2$-approximate integral allocation, as follows. Construct an optimal fractional solution using the incremental bang-per-buck order. Let $i'$ denote the advertiser that is allocated last: select the larger of the optimal fractional solution without $i'$ and the highest value ad of $i'$. \section{Monotonicity and Lower Bounds}\label{sec: monotone and lower bounds} \subsection{Monotonicity implies truthfulness} In single-parameter domains, Myerson's lemma provides a handy tool for constructing truthful mechanisms. One only has to construct a monotone allocation rule, and then the lemma provides a complementary payment rule such that the overall mechanism is truthful. We extend this approach to our particular multi-parameter domain. If the set of ads is fixed for each advertiser, then monotonicity in the bid and Myerson-like payments imply truthfulness. We give constraints between the allocation rules for different sets of ads and show that they imply truthfulness everywhere using a local-to-global argument. We begin by defining monotonicity in our setting. An allocation rule is said to be monotone if it is monotone in each dimension of the buyer's preferences. \begin{definition} An allocation rule $x(\mathbf{b}, \mathbf{S})$ is monotone in $b_i, S_i$ for each $i$ if (1) for all $\mathbf{b}_{-i}, \mathbf{S}_{-i}, S_i$ and $b_i' \geq b_i$, we have $x_i(b_i', S_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \ge x_i(b_i, S_i, \mathbf{b}_{-i}, \mathbf{S}_{-i})$; and (2) for all $\mathbf{b}_{-i}, \mathbf{S}_{-i}, b_i$ and $S_i' \supseteq S_i$, we have $x_i(b_i, S'_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \ge x_i(b_i, S_i, \mathbf{b}_{-i}, \mathbf{S}_{-i})$. \end{definition} As the following example shows, the optimal allocation rule is \emph{not} monotone. 
The example also shows that monotonicity is not necessary for truthfulness (since VCG is truthful and optimal). \begin{example} \label{ex: opt} Consider two advertisers with two rich ads each. Both have value $1$, and the rich ads have (value, size) $= (1, 1), (1 + \varepsilon, 2)$. The space available is $3$. In this case, the optimal integer solution chooses the smaller ad from one advertiser and the larger ad from the other. However, if one of them removes their smaller option, they get the larger option deterministically. \end{example} The next lemma shows that monotonicity is sufficient for truthfulness. \begin{lemma} \label{lem: monotone} If a valid allocation rule $\mathbf{x}(\mathbf{b}, \mathbf{S})$ is monotone in $b_i, S_i$ for each $i \in \mc{N}$, then charging payment $p_i(\mathbf{b}, \mathbf{S}) = b_i x_i(\mathbf{b}, \mathbf{S}) - \int_{0}^{b_i} x_i(b, \mathbf{b}_{-i}, \mathbf{S}) db$ results in a truthful auction. \end{lemma} \begin{proof} We note that, for a fixed set of rich ads $S'_i$, this is a single-parameter setting (in the bids $b_i$). Since we use the same payment rule as Myerson, his result implies that for any $b_i, S'_i$, $u_i(b_i, S'_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \leq u_i(v_i, S'_i, \mathbf{b}_{-i}, \mathbf{S}_{-i})$. Thus the mechanism is truthful in bids. Next we show that reporting a smaller set of rich ads and the true value is not beneficial, i.e., $u_i(v_i, S_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \leq u_i(v_i, A_i, \mathbf{b}_{-i}, \mathbf{S}_{-i})$, for any $S_i \subseteq A_i$. From the definition of the payment, $u_i(v_i, S_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) = \int_0^{v_i} x_i(b, S_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) db$. Since the allocation rule is monotone, we have $x_i(b, A_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \geq x_i(b, S_i, \mathbf{b}_{-i}, \mathbf{S}_{-i})$; the claim follows. Finally, we can chain these two results to show that misreporting both the bid and the set of rich ads will also not increase a buyer's utility. 
Let $S_i$ be the reported set of rich ads, and fix $\mathbf{b}_{-i}$ and $\mathbf{S}_{-i}$. Recall that in our model the buyer can only report a subset of rich ads, thus $S_i \subseteq A_i$. Further, for any valid allocation rule, $x_{ij} =0$ for all $j\notin S_i$. Thus the utility when the true type is $A_i$ is equal to the utility when the true type is $S_i$: $u_i(v_i, A_i\rightarrow v'_i,S_i) = u_i(v_i,S_i \rightarrow v_i',S_i )$ for all $v_i,v_i'$. Putting all the previous claims together we have: \begin{align*} u_i(v_i,A_i\rightarrow v_i,A_i) & \ge u_i(v_i,A_i\rightarrow v_i,S_i) \quad\text{(By local IC)}\\ & = u_i (v_i,S_i \rightarrow v_i,S_i ) \\ & \ge u_i (v_i,S_i \rightarrow b_i,S_i )\quad\text{(By local IC)}\\ & = u_i (v_i,A_i \rightarrow b_i,S_i ) \qedhere \end{align*} \end{proof} \subsection{Lower bounds} Next, we illustrate the challenge in coming up with allocation rules which are monotone in the set of rich ads. We also prove that monotonicity rules out approximation ratios strictly better than $2$ for deterministic mechanisms (and $12/11$ for randomized mechanisms). As we have seen in Example~\ref{ex: opt}, the algorithm that finds the optimal integer allocation is not monotone. The following example shows that simple algorithms such as selecting ads in the incremental bang-per-buck order, or the $2$-approximation algorithm presented in Section~\ref{sec: prelims}, are not monotone either. Recall that bang-per-buck = $b_{ij}/w_{ij}$ and incremental bang-per-buck = $\frac{b_{ij} - b_{ik}}{w_{ij} - w_{ik}}$ with $w_{ij} > w_{ik}$. \begin{example} We have two advertisers $A$ and $B$. $A$ has two rich ads with (value, size) $= (2,1), (3.5,3)$. $B$ has one ad with (value, size) $= (3,3)$. (i) Suppose $W=4$. The incremental bang-per-buck algorithm picks $A: (2, 1)$, followed by $B: (3, 3)$. The rich ad $A: (3.5, 3)$ does not get picked over $B: (3, 3)$, despite having higher bang-per-buck, since it has smaller incremental bang-per-buck (of $0.75$).
On the other hand, if $A$ removes the rich ad $(2, 1)$, the algorithm picks $A: (3.5, 3)$ at the very beginning, and $A$ obtains a higher value. (ii) Suppose $W = 3.5$. The optimal fractional solution is $A: (2, 1)$ with a weight of $1.0$ and $B: (3, 3)$ with weight $2.5/3$. The $2$-approximation algorithm of Section~\ref{sec: prelims} compares allocating just $A: (2, 1)$ or just $B: (3, 3)$, and chooses $B$. But if $A$ removes $(2, 1)$, then the optimal fractional solution is $A: (3.5, 3)$ with a weight of $1.0$ and $B: (3, 3)$ with weight $0.5/3$. The $2$-approximation algorithm compares $A: (3.5, 3)$ and $B: (3, 3)$, and chooses $A$ because it has higher value. Thus $A$ gets a higher value by removing $(2, 1)$.\footnote{Even though we only seek integer monotone algorithms, this example shows that even the fractional allocation is not monotone: when $A$ provides all ads its value is $2$, but when $A$ removes $(2, 1)$, its value is $3.5$.} \end{example} The next theorem gives lower bounds on the approximation factor a monotone algorithm can achieve. \begin{theorem}\label{thm: lower bounds} No monotone and deterministic (resp. randomized) algorithm has an approximation ratio better than $2 - \varepsilon$ (resp. $12/11 - \varepsilon$) for any $\varepsilon > 0$. \end{theorem} \begin{proof} There are two advertisers. Each advertiser has two rich ads: $(1, 1), (1+ \varepsilon, 2)$. The total space is $3$. The optimal solution has value $(2 + \varepsilon)$. To obtain an approximation better than $2-\varepsilon$, we must choose a small ad for at least one of the advertisers. Since the algorithm is monotone, when that advertiser does not provide the small ad, the algorithm cannot give them the larger ad. Thus when the advertiser does not provide the small ad, the algorithm must give this advertiser nothing, resulting in welfare at most $(1+\varepsilon)$. See Appendix~\ref{app:missing from sec 3} for the proof for randomized algorithms.
\end{proof} \section{A Simple Monotone $3$-Approximation}\label{sec: positive result} In this section we give our main result: a monotone algorithm that obtains a $3$-approximation to the optimal social welfare. First, we give a fractional algorithm for allocating \emph{space} to each advertiser $i$. Second, we show that optimally (and integrally) using the space given to each advertiser $i$ gives a monotone allocation. Finally, we show that randomizing between the former algorithm and simply allocating the max value ad is a monotone rule that obtains a $3$-approximation to $OPT$. Omitted proofs (and definitions) can be found in Appendix~\ref{app: missing from positive}. We show that the truthful payment function matching our allocation rule can be computed in polynomial time in Appendix~\ref{appsubsec: payments}. \subsection{Monotone space allocation algorithm}\label{subsec:monotone space} We start by giving a monotone algorithm for allocating space to each advertiser. The total space allocated is monotone in $b_i$ and $S_i$, for all $i \in \mc{N}$. Our algorithm also provides an allocation of rich ads to that space, but this allocation by itself may not be monotone, and may not provide a good approximation. Our algorithm, which we call $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$, works as follows. First, we order the ads in the bang-per-buck order. We iteratively choose the next ad in this order; let $i$ be the corresponding advertiser, and $j$ be the rich ad. We replace the previous ad of $i$ with $j$ if this choice results in more space allocated to $i$. If there is not enough space, we fractionally allocate $j$ and terminate. \begin{algo}[$\ifmmode{ALG_B}\else{$ALG_B$~}\fi$] Let $w$ denote the remaining space at any stage of the algorithm; initialize $w = W$. Let $E_i$ be the set of ads that are available to agent $i$ and {let $\mc{E}$ denote the set $\cup_{i=1}^n E_i$}; initialize $E_i = S_i$.
Let $x_{ij}$ denote the fractional allocation of advertiser $i$ for the rich ad $j$. The total space allocated to advertiser $i$ is $W_i = \sum_{j \in \mc{M}} w_{ij} x_{ij}$. While the remaining space $w$ is not zero: \begin{enumerate}[leftmargin=*] \item Let $i$ be the advertiser whose rich ad $(b_{ij}, w_{ij})$ has the highest bang-per-buck among all ads in $\mc{E}$. Let $(b_{ik}, w_{ik})$ be the ad previously allocated to $i$; $(b_{ik}, w_{ik}) = (0, 0)$ if no previous ad exists. \item Remove all ads of $i$ {(including ad $j$)} with space at most $w_{ij}$ from $E_i$. \item If $w \geq w_{ij} - w_{ik}$, add $w_{ij} - w_{ik}$ to the total space allocated to advertiser $i$, which now becomes $W_i = w_{ij}$. Allocate rich ad $j$ in that space, i.e., set $x_{ij} = 1$ (and $x_{ik} = 0$). Update $w = w - w_{ij} + w_{ik}$. \item If $w < w_{ij} - w_{ik}$, add $w$ to the total space allocated to advertiser $i$, that is, $W_i = w_{ik} + w$. Allocate rich ad $j$ to $i$ fractionally with $x_{ij} = W_i/w_{ij}$ (and set $x_{ik} = 0$). Update $w = 0$. \end{enumerate} \end{algo} The following observation shows that any ads removed without being selected will not be used by the fractional optimal solution either. See Appendix~\ref{app: missing prelims} for the precise definition of dominated. \begin{observation}\label{obv:ALGB ignores dominated} Let $j'\in E_i$ be some ad removed from $E_i$ in ``step 2'' of $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ for having space at most $w_{ij}$. Then either $j' = j$ or $j'$ is a ``dominated'' ad. \end{observation} Next, we prove that $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ allocates space that is monotone in $b_i$ and $S_i$, for all $i \in \mc{N}$. \begin{theorem}\label{thm: monotone-space} Let $\mathbf{x}(\ifmmode{ALG_B}\else{$ALG_B$~}\fi)$ denote the allocation of $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$.
Then, for all $i \in \mc{N}$ and $b_i \leq b'_i$, we have $ \sum_{j} w_{ij} x_{ij}(\ifmmode{ALG_B}\else{$ALG_B$~}\fi(\mathbf{b},\mathbf{S})) \leq \sum_{j} w_{ij} x_{ij}(\ifmmode{ALG_B}\else{$ALG_B$~}\fi(b'_i,\mathbf{b}_{-i},\mathbf{S}))$. Also, for all $i$ and $S_i \subseteq S'_i$, the space $W_i$ is monotone in $S_i$: $\sum_{j} w_{ij} x_{ij}(\ifmmode{ALG_B}\else{$ALG_B$~}\fi(\mathbf{b},S_i,\mathbf{S}_{-i})) \leq \sum_{j} w_{ij} x_{ij} (\ifmmode{ALG_B}\else{$ALG_B$~}\fi(\mathbf{b},S'_i,\mathbf{S}_{-i}))$. \end{theorem} \begin{proof} $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ allocates some ads that are later replaced. We refer to such ads as temporarily allocated. We first prove that an advertiser $i$ will not get allocated less space when bidding $b_i' > b_i$. For any agent $i \in \mc{N}$ and any $j\in S_i$, $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ temporarily allocates $j$ if and only if the total space occupied by ads with higher bang-per-buck (counting only the largest such ad for each advertiser) is strictly less than $W$. For any ad $j\in S_i$, $j$'s bang-per-buck $b_{ij}/w_{ij}$ is increasing in the bid. Therefore, if $j\in S_i$ was temporarily allocated to $i$ when reporting $b_i$, then $j$ will definitely be temporarily allocated to $i$ under $b_i' > b_i$. If $j$ is the last ad to be allocated by the algorithm when reporting $b_i$ (and thus might not fit integrally in the remaining space under bid $b_i$), it will only be considered earlier under $b'_i > b_i$, and therefore the remaining space (before allocating $j$) can only be (weakly) larger. Thus, in all cases, the space allocated to $i$ under $b_i'$ is at least as much as under $b_i$. We next show that by removing an ad $k \in S_i$ the space allocated to $i$ does not increase. {Note that it is sufficient to prove monotonicity when removing one rich ad at a time.
Monotonicity for removing a subset of rich ads follows by transitivity.} If no ad is allocated to $i$ under $S_i$, then certainly the same holds for $S_i \setminus \{ k \}$. Otherwise, let $j \in S_i$ be the final ad allocated to $i$ under $S_i$. If $k$ was never (temporarily or otherwise) allocated under $S_i$, then the allocation under $S_i \setminus\{k\}$ remains the same. Therefore, we focus on the case that $k$ was allocated to $i$ under $S_i$. First, consider the case that $k \neq j$. Let $\ell$ be an ad (temporarily or otherwise) allocated under $S_i \setminus \{ k \}$, but not under $S_i$ (if no such ad exists, the claim follows). It must be that $b_{ik}/w_{ik} \geq b_{i \ell}/w_{i \ell}$, as otherwise the algorithm under $S_i$ would have temporarily allocated $\ell$ before $k$. Note that if $w_{i\ell} > w_{ik}$ and the algorithm under $S_i$ did not allocate $\ell$, then it must be the case that space ran out before we got to $\ell$. This implies that, also under $S_i\setminus\{k\}$, the space will run out before we get to $\ell$. Hence $w_{i \ell} \leq w_{ik}$. At the time when $\ell$ is allocated to $i$ (under $S_i \setminus \{ k \}$), the total space allocated to bidders other than $i$ is weakly larger compared to the time when $k$ is allocated to $i$ (under $S_i$), while the space allocated to $i$ is weakly smaller. Second, consider the case that $k = j$ and $i$ is not the last advertiser allocated by the algorithm. Removing $k$ leads to a different last ad, say $\ell$, for $i$ under $S_i \setminus \{ k \}$. If this ad was temporarily allocated under $S_i$, the claim follows. Otherwise, $\ell$ must have a lower bang-per-buck than $k$. If $w_{i\ell} \leq w_{ik}$, the claim follows. Otherwise, all advertisers who got allocated after $i$ under $S_i$ (we know this set is non-empty) have the opportunity to claim a (weakly) larger amount of space under $S_i \setminus \{ k\}$, before $\ell$ is considered.
Thus, the maximum amount of space $i$ is allocated is $w_{ik}$. Finally, suppose $k=j$, and $i$ is the last advertiser allocated by the algorithm. Let $W_{-i} = W - w_{ij}x_{ij}$ be the space allocated to advertisers other than $i$, under $S_i$. Since they are allotted this space \emph{before} ad $k$ is considered, the maximum space $i$ can get is $W- W_{-i}$, her allocation under $S_i$. \end{proof} $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ is inefficient since the bang-per-buck allocation can change the relative order in which the advertisers are assigned space, generating sub-optimal outcomes. \begin{example} Let $M$ be a large integer. Let $W = M - 1$ and consider two advertisers $A$ and $B$. $A$ has two rich ads with (value, size) $= (1, 1), (1+\varepsilon, M-1)$. $B$ has one rich ad with (value, size) $= (\frac{M-1}{M}, M-1)$. Fractional OPT selects $A: (1, 1)$ fully and $B: (\frac{M-1}{M}, M-1)$ fractionally with weight $(M-2)/(M-1)$, and obtains social welfare $1 + \frac{M-2}{M}$. The bang-per-buck allocation {in \ifmmode{ALG_B}\else{$ALG_B$~}\fi} selects $A: (1+ \varepsilon, M-1)$ fully and obtains social welfare $(1+\varepsilon)$. \end{example} While the space allocated is monotone in the bid and rich ads, the allocation itself may not be monotone, since \ifmmode{ALG_B}\else{$ALG_B$~}\fi may allocate an advertiser a larger ad with lower value than another option; we provide an example in Appendix~\ref{app: examples}. \subsection{Integral Monotone Allocation} The allocation generated by $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ can be fractional for one advertiser, and can be sub-optimal (but integer) for some of the other advertisers. In the following algorithm, we post-process to find the best \emph{single ad} that fits in $W_i$. \begin{algo}[$\ifmmode{ALG_I}\else{$ALG_I$~}\fi$] First, run $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$. Let $W_i$ be the space allotted to advertiser $i$. Second, post-process to allocate the ad $j$ with maximum value that fits in $W_i$, i.e.
$j\in \mathrm{argmax}_{w_{ij}\le W_i} b_{ij}$. Any remaining space is left unallocated. \end{algo} Observe that $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ is monotone, since (1) the space allocated by $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ is monotone, and (2) if the space allocated by $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ is larger, then the post-processing that allocates the highest value ad that fits in this space will also result in the same or larger value. We note that any ``unused'' space in $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ remains unallocated. However, $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ alone might be an arbitrarily bad approximation: we provide an example in Appendix~\ref{app: examples}. \paragraph{Main result.} Our main result is the following theorem. \begin{theorem}\label{thm: simple 4-approx} The randomized algorithm that runs $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ with probability $2/3$, and otherwise allocates the maximum valued ad, is monotone in $b_i$ and $S_i$, obtains a $3$-approximation to the social welfare, and this approximation factor is tight. \end{theorem} \begin{proof} Let $b_{max}$ denote the value of the maximum valued ad. We will first show that $2 SW(\mathbf{x}(\ifmmode{ALG_I}\else{$ALG_I$~}\fi)) + b_{max} \geq SW(\mathbf{x}_{OPT})$. Let $\mathrm{Val}(\mathbf{x},A,\vec{s}) = \sum_{i\in A} (\sum_{j \in \mc{S}} b_{ij} \cdot x_{ij} ) \cdot \left(\frac{s_i}{\sum_{j \in \mc{S} } w_{ij} x_{ij} }\right)$ denote the fraction of the social welfare $SW(\mathbf{x})$ of allocation $\mathbf{x}$ contributed by a subset of advertisers $A$ for space $\vec{s}$. Let $\mathbf{x}^* = \mathbf{x}_{OPT(\mathbf{b},\mathbf{S})}$ denote an optimal fractional allocation. Let $W_i = W_i(\ifmmode{ALG_B}\else{$ALG_B$~}\fi(\mathbf{b},\mathbf{S}))$ and $W_i^* = W_i(OPT(\mathbf{b},\mathbf{S}))$ denote the total space allocated to advertiser $i$ in $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ and the optimal allocation $\mathbf{x}^*$, respectively.
The space allocated in $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ is exactly the same as $W_i$. Recall from Fact~\ref{fact: fracopt} that there is an optimal fractional allocation where at most one advertiser is allocated fractionally. Let $i'$ be the advertiser whose allocation in $\mathbf{x}^*$ is fractional. There is also at most one advertiser in $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ whose allocation is fractional: the advertiser corresponding to the very last ad that is included; let $i''$ be this advertiser. We start by giving a series of technical claims. First, we bound the part of $SW(\mathbf{x}(OPT))$ contributed by advertisers who are allocated more space in \ifmmode{ALG_B}\else{$ALG_B$~}\fi than in $OPT$. Let $\mc{I}$ denote the set of advertisers $i$ with $W_i \ge W_i^*$. \begin{claim}\label{claim: integral-OPT-I} $\mathrm{Val}(\mathbf{x}^*, \mc{I}\setminus \{i'\}, \vec{W}^*) \le \mathrm{Val}(\mathbf{x}(\ifmmode{ALG_I}\else{$ALG_I$~}\fi), \mc{I}\setminus \{i'\}, \vec{W})$. \end{claim} Let $\mc{K} = \mc{N}\setminus \mc{I}$ be the set of advertisers $k$ with $W_k < W_k^*$. If $\mc{K}=\emptyset$ then $W_i = W_i^*$ for all $i$. We note that $b_i\cdot x_i(\ifmmode{ALG_I}\else{$ALG_I$~}\fi) = b_i\cdot \mathbf{x}_i^*$ for all $i\neq i'$ by Claim~\ref{claim: integral-OPT-I}. Also $b_{max} \geq b_{i'} \cdot \mathbf{x}_{i'}^*$. Thus we get $SW(\mathbf{x}(\ifmmode{ALG_I}\else{$ALG_I$~}\fi)) + b_{max} \geq SW(OPT)$. So for the rest of the proof we assume $\mc{K} \neq \emptyset$. We bound the portions of $SW(\mathbf{x}_{OPT})$ contributed by $k\in \mc{K}$ using the following claim. \begin{claim}\label{claim: bpb-K} For all $k\in \mc{K}\setminus \{i'\}$ and $i \in \mc{N}$, we have $\frac{b_k\cdot \mathbf{x}_k^*}{W_k^*} \le \frac{b_i \cdot \mathbf{x}_i(\ifmmode{ALG_B}\else{$ALG_B$~}\fi)}{W_i}$.
\end{claim} That is, advertisers that are allocated less space in \ifmmode{ALG_I}\else{$ALG_I$~}\fi than in $\mathbf{x}^*$ must have lower bang-per-buck, as otherwise their rich ad would be considered by \ifmmode{ALG_I}\else{$ALG_I$~}\fi and they would be allocated more space. Finally, we bound the contribution of $i'$, that is, the ad allocated fractionally in $OPT$, as follows: \begin{claim}\label{claim:i'-contribution} Let $(b_s,s), (b_\ell,\ell)$ with $s < \ell$ be the ads used in $\mathbf{x}^*_{i'}$, the optimal fractional allocation for $i'$. It holds that: (i) $\frac{b_\ell - b_s}{\ell - s} \le \frac{b_i\cdot \mathbf{x}_{i}(\ifmmode{ALG_B}\else{$ALG_B$~}\fi)}{W_i}$ for all $i\in\mc{N}$, (ii) if $s > W_{i'}$, then $\frac{b_s}{s} \le \frac{b_i\cdot \mathbf{x}_{i}(\ifmmode{ALG_B}\else{$ALG_B$~}\fi)}{W_i}$ for all $i\in\mc{N}$, and (iii) if $s\le W_{i'}$ then $b_s \le b_{i'}\cdot \mathbf{x}_{i'}(\ifmmode{ALG_I}\else{$ALG_I$~}\fi)$. \end{claim} The proof of this claim is more involved. Intuitively, we can bound the smaller of $i'$'s allocated ads (if it is small enough) by the value $i'$ obtains in $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$; the larger ad, since it is allocated last, has a lower incremental bang-per-buck than any other advertiser's ad, and hence its incremental bang-per-buck is also lower than the other ads' actual bang-per-buck. We put all the claims together to bound $\mathrm{Val}(\mathbf{x}^*,\mc{K}\cup \{i'\}, \vec{W}^*)$. If $s > W_{i'}$, then by Claims~\ref{claim: bpb-K} and~\ref{claim:i'-contribution} we see that the bang-per-buck of every $k\in \mc{K}\cup \{i'\}$ is less than $b_i\cdot \mathbf{x}_i(\ifmmode{ALG_B}\else{$ALG_B$~}\fi)/W_i$ for all $i$. Then the value for $\mc{K}\cup \{i'\}$ in OPT is less than the contribution of advertisers using the \emph{same total space} in \ifmmode{ALG_B}\else{$ALG_B$~}\fi.
\begin{align} \mathrm{Val}(\mathbf{x}^*,\mc{K}\cup \{i'\}, \vec{W}^*) \le \mathrm{Val}(\mathbf{x}^*, \mc{K}\setminus \{i'\}, \vec{W}^*) + {b_{i'}\cdot \mathbf{x}_{i'}^*} \le \mathrm{Val}(\mathbf{x}(\ifmmode{ALG_B}\else{$ALG_B$~}\fi), \mc{N}, \vec{W})\label{eq:bound-opt-k-1} \end{align} If $s \le W_{i'}$, then by Claims~\ref{claim: bpb-K} and~\ref{claim:i'-contribution} we get that $b_s < b_{i'}\cdot \mathbf{x}_{i'}(\ifmmode{ALG_I}\else{$ALG_I$~}\fi)$ and the bang-per-buck of the ads in \ifmmode{ALG_B}\else{$ALG_B$~}\fi is higher than that of $\mc{K}\setminus\{i'\}$ and $\frac{b_\ell - b_s}{\ell - s}$. Thus we have \begin{align} \mathrm{Val}(\mathbf{x}^*,\mc{K}\cup \{i'\}, \vec{W}^*) &\le \mathrm{Val}(\mathbf{x}^*, \mc{K}\setminus \{i'\}, \vec{W}^*) + b_s + (W_{i'}^* - s)\frac{b_\ell - b_s}{\ell - s}\notag\\ &\le \mathrm{Val}(\mathbf{x}(\ifmmode{ALG_B}\else{$ALG_B$~}\fi), \mc{N}, \vec{W}) + b_{i'}\cdot \mathbf{x}_{i'}(\ifmmode{ALG_I}\else{$ALG_I$~}\fi)\label{eq:bound-opt-k-2} \end{align} Hence, putting $\mc{I}$ and $\mc{K}$ together, we get \begin{align*} SW(\mathbf{x}_{OPT}) &= \mathrm{Val}(\mathbf{x}^*, \mc{I}\setminus\{i'\}, \vec{W}^*) + \mathrm{Val}(\mathbf{x}^*, \mc{K}\cup\{i'\}, \vec{W}^*)\\ &\le \mathrm{Val}(\mathbf{x}(\ifmmode{ALG_I}\else{$ALG_I$~}\fi), \mc{I}\setminus \{i'\}, \vec{W}) + \mathrm{Val}(\mathbf{x}(\ifmmode{ALG_B}\else{$ALG_B$~}\fi), \mc{N}, \vec{W}) + b_{i'}\cdot \mathbf{x}_{i'}(\ifmmode{ALG_I}\else{$ALG_I$~}\fi) \\ &\leq \mathrm{Val}(\mathbf{x}(\ifmmode{ALG_I}\else{$ALG_I$~}\fi), \mc{N}, \vec{W}) + \mathrm{Val}(\mathbf{x}(\ifmmode{ALG_B}\else{$ALG_B$~}\fi), \mc{N}, \vec{W}) \\ &\le 2\mathrm{Val}(\mathbf{x}(\ifmmode{ALG_I}\else{$ALG_I$~}\fi), \mc{N}, \vec{W}) + b_{max} \end{align*} where the first equality is the definition of $SW(\mathbf{x}_{OPT})$, and the second inequality puts together Claim~\ref{claim: integral-OPT-I} and Equations~\eqref{eq:bound-opt-k-1} and~\eqref{eq:bound-opt-k-2}.
The third inequality holds because $\mc{I} \cup \{ i' \} \subseteq \mc{N}$. The final inequality holds because $b_i \cdot \mathbf{x}_i(\ifmmode{ALG_I}\else{$ALG_I$~}\fi) \ge b_{i}\cdot \mathbf{x}_{i}(\ifmmode{ALG_B}\else{$ALG_B$~}\fi)$ for all $i \neq i''$, and $b_{i''} \cdot \mathbf{x}_{i''}(\ifmmode{ALG_B}\else{$ALG_B$~}\fi) \le b_{max}$. Thus we have that $SW(\mathbf{x}_{OPT}) \le 2\mathrm{Val}(\mathbf{x}(\ifmmode{ALG_I}\else{$ALG_I$~}\fi), \mc{N}, \vec{W}) + b_{\max}.$ Therefore, running $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ with probability $2/3$ and allocating $b_{\max}$ with probability $1/3$ is a $3$-approximation to $SW(\mathbf{x}_{OPT})$. Since our algorithm is randomizing between two monotone rules, it is monotone as well. The following instance shows that the approximation of the algorithm is at least $3$ (see Appendix~\ref{app: missing from positive} for the proof). Let $M$ be a large integer. There are four advertisers named $A, B, C, D$. We describe the set of rich ads, bang-per-buck and incremental bang-per-buck for each advertiser in Table~\ref{table: lb_richads}. Total space $W = 2M - \varepsilon$. We assume ties are broken in the order $A, B, C, D$ (but the example can be constructed without ties). \begin{table}[h!] \centering \begin{tabular}{ |c|c|c|c| } \hline & rich ads & bpb & ibpb \\ \hline $A$ & $(M, 1), (M+\varepsilon, M)$ & $M, \frac{M+\varepsilon}{M}$ & $M, \frac{\varepsilon}{M-1}$\\ $B$ & $(1+\varepsilon, 1), (M + \varepsilon, M)$ & $1+\varepsilon, \frac{M + \varepsilon}{M}$ & $1+\varepsilon, 1$ \\ $C$ & $(M-1, M - 1)$ & $ 1 $ & $1$ \\ $D$ & $(M+ 2\varepsilon, 2M - \varepsilon)$ & $\frac{M+ 2\varepsilon}{2M-\varepsilon}$ & $\frac{M+ 2\varepsilon}{2M-\varepsilon}$ \\ \hline \end{tabular} \caption{Rich ads, bang-per-buck (bpb), and incremental bang-per-buck (ibpb) for each advertiser.} \label{table: lb_richads} \end{table} The fractional optimal solution can be constructed by allocating ads in the incremental bang-per-buck order.
The incremental bang-per-buck order is: $A: (M, 1), B: (1+\varepsilon, 1), B: (M + \varepsilon, M), C:(M-1, M-1), D:(M+2\varepsilon, 2M - \varepsilon), A: (M+\varepsilon, M) $. Any subsequent ad from the same advertiser fully replaces the previously selected ad. The allocation stops when the space runs out, so it will stop while allocating $C:(M-1, M-1)$ which will be allocated fractionally. The social welfare of the fractional optimal is $M + M + \varepsilon + \frac{(M-1-\varepsilon)}{M-1} \cdot (M-1) = 3M - 1$. \ifmmode{ALG_B}\else{$ALG_B$~}\fi considers ads in the order $A: (M, 1), B: (1+\varepsilon, 1), A: (M+\varepsilon, M), B: (M+\varepsilon, M), C: (M-1, M-1), D: (M+ 2\varepsilon, 2M - \varepsilon)$. Once again, any subsequent ad from the same advertiser fully replaces the previously selected ad. The algorithm stops when the space runs out. Thus the algorithm stops while allocating $B: (M+ \varepsilon, M)$. \ifmmode{ALG_B}\else{$ALG_B$~}\fi will allocate space of $M$ to advertiser $A$ and space $M-\varepsilon$ to advertiser $B$. \ifmmode{ALG_I}\else{$ALG_I$~}\fi runs \ifmmode{ALG_B}\else{$ALG_B$~}\fi and post-processes to find the best ad for the allocated space. Thus $A$ is allocated $(M+\varepsilon, M)$ and $B$ is allocated $(1+ \varepsilon, 1)$. The social welfare of \ifmmode{ALG_I}\else{$ALG_I$~}\fi is $1 + \varepsilon + M + \varepsilon = M + 1 + 2\varepsilon$. The maximum value allocation will select $D: (M + 2\varepsilon, 2M - \varepsilon)$. 
Thus the expected value of the algorithm that randomly chooses between \ifmmode{ALG_I}\else{$ALG_I$~}\fi with probability $2/3$ and the largest value single ad with probability $1/3$ is $\frac{2}{3}(M + 1 + 2\varepsilon) + \frac{1}{3}(M + 2\varepsilon)= M + 2/3 + 2\varepsilon$, and the ratio with the fractional optimal allocation is $\frac{3M - 1}{M + 2\varepsilon + 2/3} = \frac{3 - 1/M}{1 + \frac{2}{3M} + 2\frac{\varepsilon}{M}}.$ This ratio can be made arbitrarily close to $3$ by choosing $\varepsilon = 1/M$ and $M$ large enough. \end{proof} \subsection{Computing Myerson Payments}\label{appsubsec: payments} Finally, we note that the truthful payment function matching our allocation rule (that gives the overall auction) can be computed in time that is polynomial in the number of advertisers and rich ads. Let \ifmmode{ALG_{max}}\else{$ALG_{max}$~}\fi be the algorithm which simply allocates the maximum valued ad. To compute the payment for an advertiser $i$, we need to compute the expected allocation as a function of $i$'s bid $b_i$. We can compute the expected allocation from $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ and $\ifmmode{ALG_{max}}\else{$ALG_{max}$~}\fi$ independently, and the final allocation is just $\frac{2}{3} x_i(\ifmmode{ALG_I}\else{$ALG_I$~}\fi(b, \mathbf{b}_{-i}, \mathbf{S})) + \frac{1}{3} x_i(\ifmmode{ALG_{max}}\else{$ALG_{max}$~}\fi(b, \mathbf{b}_{-i}, \mathbf{S}))$. The payment is then $\frac{2}{3} p_i(\ifmmode{ALG_I}\else{$ALG_I$~}\fi(\mathbf{b}, \mathbf{S})) + \frac{1}{3} p_i(\ifmmode{ALG_{max}}\else{$ALG_{max}$~}\fi(\mathbf{b}, \mathbf{S}))$. The payment for $\ifmmode{ALG_{max}}\else{$ALG_{max}$~}\fi$ is simple: advertiser $i$'s allocation is $\max_{j \in S_i} \alpha_{ij}$ if she is the highest value bidder, and zero otherwise. The expected payment $p_i$ is the second highest value $\max_{i' \neq i, j \in S_{i'}} b_{i'j}$ if $i$ is allocated, and zero otherwise.
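The Myerson-style payment of Lemma~\ref{lem: monotone}, $p_i = b_i x_i - \int_0^{b_i} x_i\, db$, reduces for a monotone piecewise-constant allocation curve to a sum of threshold payments. The following is a minimal illustrative sketch (Python); the helper \texttt{alloc} and the threshold list are hypothetical stand-ins for the allocation rule, not code from this paper.

```python
# Sketch: Myerson payment for a monotone, piecewise-constant allocation.
# `alloc(b)` is assumed to return advertiser i's expected allocation at bid b
# (others fixed); `thresholds` are the bids where alloc(b) jumps, ascending.

def myerson_payment(bid, thresholds, alloc):
    """p_i(b) = b*x(b) - integral_0^b x(t) dt
             = sum over jumps t <= b of (x(t) - x(t^-)) * t."""
    payment = 0.0
    prev_x = alloc(0.0)
    for t in thresholds:
        if t > bid:
            break
        x_t = alloc(t)
        payment += (x_t - prev_x) * t  # pay each increment at its threshold
        prev_x = x_t
    return payment

# Hypothetical example: allocation jumps to 0.5 at bid 1 and to 1.0 at bid 3.
alloc = lambda b: 0.0 if b < 1 else (0.5 if b < 3 else 1.0)
print(myerson_payment(4.0, [1.0, 3.0], alloc))  # 0.5*1 + 0.5*3 = 2.0
```

Note that this matches the direct formula: at bid $4$ the integral $\int_0^4 x$ equals $0 + 2\cdot 0.5 + 1\cdot 1 = 2$, so $p = 4\cdot 1 - 2 = 2$.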
The payment for $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ can be computed by identifying the possible thresholds for $i$'s bid where $i$'s allocation changes, and computing the expected allocation at these thresholds. Advertiser $i$'s allocation can change \emph{only} when the bang-per-buck of one of her ads is tied with that of another ad: there are therefore at most $O(n|\mathcal{S}|^2)$ such thresholds. Once we identify the thresholds $t_1, t_2, \ldots$, the corresponding allocations can be computed by re-running the bang-per-buck allocation algorithm. Note that as $i$'s bid changes, the relative order of rich ads does not change for any advertiser $j \in \mc{N}$. Thus, the new bang-per-buck allocation can be computed in $O(n|\mathcal{S}|)$ time. Once we have the thresholds $t_0 = 0, t_1, t_2, \ldots$ and the corresponding expected allocations $x_i(\ifmmode{ALG_I}\else{$ALG_I$~}\fi(t_j, b_{-i}, \mathbf{S}))$, the final payment is $\sum_{j\geq 1} \left(x_i(\ifmmode{ALG_I}\else{$ALG_I$~}\fi(t_j, b_{-i}, \mathbf{S})) - x_i(\ifmmode{ALG_I}\else{$ALG_I$~}\fi(t_{j-1}, b_{-i}, \mathbf{S}))\right) t_j$. \section{Price of Anarchy Bounds for GSP} \label{app: poa} In this section we prove bounds on the pure and Bayes-Nash PoA when monotone algorithms are paired with the GSP payment rule. Note that unlike the previous section, in this section we bound relative to the optimal integer allocation, which we denote by \ifmmode{IntOPT}\else{$IntOPT$~}\fi. We consider a mechanism $\mathcal{M}$ that runs $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ with probability $1/2$ and allocates the maximum value ad with probability $1/2$. The corresponding GSP payment for either allocation rule is charged depending on the coin flip. Each bidder's utility is the expectation over the random coin flip that selects between the two allocation rules. We refer to the allocation of $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ as the bang-per-buck allocation and to that of the maximum-value ad as the max-value allocation.
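For concreteness, the bang-per-buck space-allocation pass $ALG_B$ of Section~\ref{subsec:monotone space} can be sketched as below. This is a simplified illustration (Python), not the paper's implementation: the dominated-ad bookkeeping of step 2 is collapsed into a skip condition, tie-breaking is left to the sort, and all names are our own.

```python
# Simplified sketch of ALG_B: greedy over ads in decreasing bang-per-buck
# (value/size) order; an advertiser's held ad is only ever replaced by a
# larger one, and the last ad may be allocated fractionally.
# Each ad is a tuple (advertiser, value, size); W is the total space.

def alg_b(ads, W):
    order = sorted(range(len(ads)),
                   key=lambda j: ads[j][1] / ads[j][2], reverse=True)
    space = {}    # W_i: space currently held by advertiser i
    x = {}        # fractional allocation per ad index
    current = {}  # index of the ad currently held by advertiser i
    w = W         # remaining space
    for j in order:
        i, _, size = ads[j]
        held = space.get(i, 0.0)
        if size <= held:
            continue                  # never shrink i's space (cf. step 2)
        if w >= size - held:          # step 3: replace i's ad integrally
            w -= size - held
            if i in current:
                x[current[i]] = 0.0
            x[j] = 1.0
            current[i], space[i] = j, size
        else:                         # step 4: fractional ad, then stop
            space[i] = held + w
            if i in current:
                x[current[i]] = 0.0
            x[j] = space[i] / size
            w = 0.0
            break
    return space, x

# Hypothetical instance: A has ads (2,1) and (3.5,3); B has (3,3); W = 4.
ads = [("A", 2.0, 1.0), ("A", 3.5, 3.0), ("B", 3.0, 3.0)]
space, x = alg_b(ads, 4.0)
print(space)  # {'A': 3.0, 'B': 1.0}
```

On this instance the pass first takes $A$'s small ad, replaces it by $A$'s larger ad, and then fits $B$'s ad fractionally ($x_B = 1/3$) in the remaining unit of space.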
The following example illustrates that GSP paired with our allocation rules can be non-truthful. \begin{example} Suppose there are two advertisers. Advertiser $1$ has value $v_1 = 1$ and a set of rich ads $A_1$ with one rich ad with (click probability, space): $(1/M, 1)$. Advertiser $2$ has value $v_2 = 1$ and a set of rich ads $A_2$ with two rich ads having (click probability, space): $(\varepsilon/M, 1), (1 + \varepsilon^2, M)$, respectively. Suppose the total space is $W = M$. In this example, truth-telling is not an equilibrium. Suppose both advertisers bid truthfully. The bang-per-buck order is $2: (1+ \varepsilon^2, M), 1:(1/M, 1), 2:(\varepsilon/M, 1)$. Thus advertiser $2$ gets allocated $(1+\varepsilon^2, M)$ and pays cost-per-click $\frac{1}{M} \cdot \frac{M}{1+\varepsilon^2}$, which is the minimum bid below which $2$'s bang-per-buck $(1+\varepsilon^2)\cdot \text{bid}/M$ is lower than advertiser $1$'s. In the max-value allocation advertiser $2$ is allocated $(1+\varepsilon^2, M)$ with the GSP cost-per-click of $\frac{1}{M} \cdot \frac{1}{1+\varepsilon^2}$. Advertiser $2$'s utility is \begin{align*} u_2(v_1, A_1, v_2, A_2) &= \frac{1}{2} (1+ \varepsilon^2)\left(1 - \frac{1}{M} \cdot \frac{M}{(1+\varepsilon^2)}\right) + \frac{1}{2} (1+\varepsilon^2)\left(1 - \frac{1}{M} \cdot \frac{1}{(1+\varepsilon^2)}\right) \\ &= \frac{1}{2} (1 + 2 \varepsilon^2 - 1/M) \end{align*} However, if advertiser $2$ bids $\frac{1}{2}$, advertiser $2$'s utility is $$u_2(1/2, A_1, v_2, A_2) = \frac{1}{2} (\varepsilon/M) + \frac{1}{2} (1+\varepsilon^2 - 1/M) $$ In the second case, the calculation for the max-value allocation is the same. In the bang-per-buck allocation, advertiser $1$ has higher bang-per-buck and is allocated first. The remaining space is allocated to advertiser $2$ and is filled with the smaller ad. The GSP cost-per-click is zero. The latter utility is higher for $\varepsilon < 1/ M$. \end{example} We will use $SW_{bpb}$ and $SW_{max}$ to denote the social welfare of the bang-per-buck and max-value algorithms.
Then, $SW_{\mathcal{M}}(\mathbf{b}, \mathbf{S}) = \frac{1}{2} SW_{bpb}(\mathbf{b}, \mathbf{S}) + \frac{1}{2} SW_{max}(\mathbf{b}, \mathbf{S}).$ We use $u_i^{max}$ and $u_i^{bpb}$ to denote the utility of advertiser $i$ in the max-value and bang-per-buck allocation, respectively. The key challenge is in bounding the social welfare of the bang-per-buck allocation. Recall that the bang-per-buck algorithm allocates in the order of $b_{ij}/w_{ij}$. Rich ads from an advertiser that occupy less space than a previously allocated higher bang-per-buck rich ad are not picked. As the algorithm continues, it might replace a previously allocated rich ad of an advertiser\footnote{We will refer to these ads as being temporarily allocated.} with another one that occupies more space (but may have lower value). The algorithm stops when the next rich ad cannot fit within the available space or the set of rich ads runs out. We also post-process each advertiser to choose a rich ad with the highest value that fits within the allocated space. We will develop a little notation to make the argument cleaner. For each $i$ and $j\in S_i$ we use $(i,j)$ to denote this rich ad. Without loss of generality, we can assume that the space $w_{ij}$ occupied by any rich ad $(i, j)$ is an integer and the total space available $W$ is also an integer. We think of the algorithm as consuming space in discrete units. With each unit of space, we associate the rich ad that was first allocated to cover that unit. For the $k$'th unit of space, let $i(k)$ be the advertiser to whom the $k$'th unit of space is allocated, and $j(k) \in S_{i(k)}$ the rich ad that was (temporarily) allocated for advertiser $i(k)$ when the $k$'th unit of space was first covered. Note that $j(k)$ may not be the final rich ad allocated to $i(k)$. In general, the algorithm stops before all the space runs out, in particular because the next rich ad in the bang-per-buck order is too large to fit in the remaining space.
We associate this rich ad with each of the remaining units of space. It helps to be able to identify $(i(k), j(k))$ because the bang-per-bucks $b_{i(k)j(k)}/w_{i(k)j(k)}$ are non-increasing as $k$ increases. The following lemma relates an upper bound on the payment that any ``small'' ad has to pay to the equilibrium social welfare. \begin{lemma} \label{lem: beta-bound} Consider an equilibrium profile of per-click bids $\mathbf{b}$ and sets of rich ads $\mathbf{S}$. We assume the bids satisfy no overbidding, $b_i \leq v_i$ for each $i$. Let $k = \lfloor W/2 \rfloor + 1$, $(r, j) = (i(k), j(k))$, and $\beta = \frac{b_{rj}}{w_{rj}}$. Then, $ \beta W \leq 2 \cdot SW_{bpb}(\mathbf{b}, \mathbf{S}) + 2 \cdot SW_{max}(\mathbf{b}, \mathbf{S})$. \end{lemma} \begin{proof} Recall that we defined $(i(k), j(k))$ as the rich ad allocated by $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ to the $k$'th unit of space. Then we let $k^* = \lfloor W/2 \rfloor + 1$, $(r, j) = (i(k^*), j(k^*))$ and $\beta = b_{rj}/w_{rj}$. Note that $(r, j)$ may not be allocated integrally if it is bigger than the remaining space. Case (i): $(r, j)$ is allocated integrally \\ The rich ads are allocated contiguously. Let $K_0 = 0$ and $K_t = K_{t-1} + w_{i(K_{t})j(K_{t})}$; $K_t$ is the cumulative number of units of space covered by the first $t$ rich ads. For each $k \in \{K_{t-1} + 1, K_{t-1} + 2, \ldots, K_t\}$, $i(k) = i(K_t)$ and $j(k) = j(K_t)$. Then $\sum_{k = K_{t-1} + 1}^{K_{t}} \frac{v_{i(k)j(k)}}{w_{i(k)j(k)}} = v_{i(K_{t})j(K_{t})}$. \begin{align*} \beta W &\leq 2 \beta k^* = 2 \frac{b_{i(k^*)j(k^*)}}{w_{i(k^*)j(k^*)}} k^*\\ &\leq 2\sum_{k=1}^{k^*} \frac{b_{i(k)j(k)}}{w_{i(k)j(k)}} \\ &\leq 2\sum_{k=1}^{k^*} \frac{v_{i(k)j(k)}}{w_{i(k)j(k)}} = 2 \sum_{t=1} \sum_{k = K_{t-1} + 1}^{K_{t}} \frac{v_{i(k)j(k)}}{w_{i(k)j(k)}} = 2 \sum_{t=1} v_{i(K_{t})j(K_{t})} \\ &\leq 2SW_{bpb}(\mathbf{b}, \mathbf{S}) \end{align*} In the first inequality, we use that $k^* > W/2$.
The second inequality follows since, for each $k \leq k^*$, $b_{i(k)j(k)}/w_{i(k)j(k)} \geq \beta$. In the third step, we use the no-overbidding assumption. The last inequality uses the fact that the temporarily allocated ad satisfies $v_{i(K_{t})j(K_{t})} \leq v_{i(K_t) \rho(i(K_t))}$, where $\rho(i)$ denotes the ad allocated to advertiser $i$ in the bang-per-buck allocation. \\ Case (ii): $(r, j)$ is not placed. \\ We have that $w_{rj} > W/2$ and $\beta \leq 2 b_{rj}/W$. Hence, $\beta W \leq 2 b_{rj} \leq 2 SW_{max}(\mathbf{b}, \mathbf{S})$. Combining the inequalities for the two cases, we have $\beta W \leq 2 SW_{bpb}(\mathbf{b}, \mathbf{S}) + 2SW_{max}(\mathbf{b}, \mathbf{S})$. \end{proof} The following lemma establishes a bound on the utility of an advertiser with rich ad $(i, t)$ of size less than $W/2$ bidding at least $\beta \frac{w_{it}}{\alpha_{it}}$, where $\alpha_{it}$ is the probability of a click on rich ad $(i, t)$. We use these deviations together with the equilibrium condition to relate the social welfare of a bid profile to that of a target optimal outcome. \begin{lemma} \label{lem: deviation} Let $k = \lfloor W/2 \rfloor + 1$, $(r, j) = (i(k), j(k))$, and $\beta = b_{rj}/w_{rj}$ as defined above. Then for an advertiser $i$ with rich ad $(i, t)$ with $w_{it} \leq W/2$ bidding $(y, \{t\})$ with $y = \min\{v_i, \beta w_{it}/\alpha_{it} + \varepsilon\}$, for some $\varepsilon$, $u_i^{bpb}(y, \{t\}, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \geq v_{it} - \beta w_{it}.$ \end{lemma} We prove the following lemma, which covers Lemma~\ref{lem: deviation}, and is also used in the Bayes-Nash POA proof. \begin{lemma} \label{lem: bayes-deviation} Let $k = \lfloor W/2 \rfloor + 1$, $(r, j) = (i(k), j(k))$, and $\beta = b_{rj}/w_{rj}$ as defined above.
Then for an advertiser $i$, with rich ad $(i, t)$ with $w_{it} \leq W/2$, bidding $(y, A_i)$ with $v_i \geq y \geq \beta\frac{w_{it}}{\alpha_{it}}$, $$u_i^{bpb}(y, A_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \geq v_{it} - y \alpha_{it}.$$ Moreover, for some $\varepsilon$ and $y = \min\{v_i, \beta w_{it}/\alpha_{it} + \varepsilon\}$, $$u_i^{bpb}(y, \{t\}, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \geq v_{it} - \beta w_{it}.$$ \end{lemma} \begin{proof} First, we argue that with any bid $b'_i \in (\beta \frac{w_{it}}{\alpha_{it}}, v_i]$ and set of rich ads $S'_i \subseteq A_i$ such that $t \in S'_i$, advertiser $i$ will be allocated a rich ad of value at least $v_{it}$. This is tricky, because changing $i$'s bid also changes the allocation of all advertisers allocated earlier in the bang-per-buck order. However, as long as the space occupied by all ads other than $i$'s when we get to $(i, t)$ is at most $W/2$, there is sufficient space remaining to place $(i, t)$. Since $b'_i > \beta\frac{w_{it}}{\alpha_{it}}$, we have $\frac{b'_i \alpha_{it}}{w_{it}} > \beta$, and $(i,t)$ appears before $(r,j)$ in the bang-per-buck order. Hence the space allocated to others before $(i,t)$ is at most $W/2$. Thus the bang-per-buck algorithm will allocate at least $(i, t)$. Suppose the algorithm allocates $(i, j)$ instead; then $w_{ij} \geq w_{it}$ and the post-processing step guarantees that $v_{ij} \geq v_{it}$ and $\alpha_{ij} \geq \alpha_{it}$. Since the GSP cost-per-click is at most the bid $b'_i$, we get \[ u_i^{bpb}(b'_i, S'_i, \mathbf{b}_{-i}, \mathbf{S}) \ge v_{ij} - b'_i \alpha_{ij} = \alpha_{ij}(v_i - b'_i) \ge v_{it} - b'_i\alpha_{it} \] In the last step we use that $b'_i \leq v_i$ and $\alpha_{ij} \geq \alpha_{it}$. If $v_i < \beta\frac{w_{it}}{\alpha_{it}}$, consider the deviation $(b'_i,\{t\})$ with $b'_i = v_i$; then $u_i^{bpb}(b'_i, \{t\}, \mathbf{b}_{-i}, \mathbf{S}) \geq 0 \geq v_{it} - \beta w_{it}$. This is because the GSP cost-per-click is at most the bid $b'_i \le\beta \frac{w_{it}}{\alpha_{it}}$.
Next we consider a deviation $(b'_i, \{t\})$ with $b'_i= \beta\frac{w_{it}}{\alpha_{it}} + \varepsilon < v_i$, for some $\varepsilon > 0$ such that $(r,j)$ immediately follows $(i,t)$ in the bang-per-buck order. In this case $b'_i$ is still sufficient for $i$ to be allocated at least $(i, t)$. The GSP payment of $(i,t)$ is at most $\beta\cdot{w_{it}}$, as the GSP payment is set by the bang-per-buck of $(r,j)$ or lower. Thus, \begin{align*} u_i^{bpb}(b'_i, \{t\}, \mathbf{b}_{-i}, \mathbf{S}) & \ge v_{it} - \beta w_{it} \qedhere \end{align*} \end{proof} Now we can prove the pure price of anarchy bound. In a pure PoA proof, we can consider deviations that depend on the other players' bids, which allows us to obtain a tighter analysis. \begin{theorem} With the no-overbidding assumption, the pure PoA of the mechanism that selects using $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ with probability $1/2$, selects the maximum value ad with probability $1/2$, and charges the GSP payment in each case is at most $6$. \end{theorem} \begin{proof} Consider the integer optimal allocation \ifmmode{IntOPT}\else{$IntOPT$~}\fi. For each $i$, let $\tau(i)$ denote the rich ad selected for advertiser $i$ in \ifmmode{IntOPT}\else{$IntOPT$~}\fi. If an advertiser $i$ is not allocated in \ifmmode{IntOPT}\else{$IntOPT$~}\fi, we set $\tau(i) = 0$, which indicates the null ad. Let $(\mathbf{b}, \mathbf{S})$ denote a pure Nash equilibrium bid profile. The allocation of the mechanism is composed of two parts: the bang-per-buck allocation and the max-value allocation. Let $\gamma(i)$ refer to the max-value allocation for advertiser $i$, but note that $\gamma(i) = 0$, i.e.\ the null ad, for all but one advertiser. Let $\rho(i)$ denote the rich ad allocated to advertiser $i$ in the bang-per-buck allocation. First note that for any bid $b_i' \leq v_i$ and $S'_i \subseteq A_i$, $u_i^{bpb}(b'_i, S'_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \geq 0$ and $u_i^{max}(b'_i, S'_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \geq 0$.
This is because the GSP cost-per-click is always at most the bid and the bid is at most the value. Thus in either mechanism, if the allocated rich ad is $(i, t)$, the utility is $\alpha_{it}(v_i - cpc_i) \geq 0$. We first bound the social welfare in the bang-per-buck allocation for advertisers that obtain an ad of space $\leq W/2$ in the optimal outcome $\tau$. Let $(r, j)$ be defined as in Lemma~\ref{lem: deviation} and let $\beta = \frac{b_{rj}}{w_{rj}}$. By Lemma~\ref{lem: deviation}, for each $i$ with $w_{i\tau(i)} \leq W/2$, there exists an $\varepsilon$ such that with $b'_i=\min \{v_i, \beta\frac{w_{i\tau(i)}}{\alpha_{i\tau(i)}} + \varepsilon\}$, $ u_i^{bpb}(b'_i, \{\tau(i)\}, \mathbf{b}_{-i}, \mathbf{S}) \ge v_{i\tau(i)} - \beta w_{i\tau(i)}.$ Thus we get \begin{align*} (p v_{i\rho(i)} + (1-p) v_{i\gamma(i)}) &\geq u_i(\mathbf{b}, \mathbf{S})\\ &\geq u_i(b'_i, \{\tau(i)\}, \mathbf{b}_{-i}, \mathbf{S}) \\ &\geq p u_i^{bpb}(b'_i,\{\tau(i)\}, \mathbf{b}_{-i}, \mathbf{S}) \\ &\geq p v_{i\tau(i)} - p \beta w_{i \tau(i)} \end{align*} Here, the first inequality uses the fact that the equilibrium payment is non-negative and the second is the equilibrium condition. The third inequality follows from the fact that the utility in max-value with GSP is non-negative. The last step is the bound derived above. We have the above inequality for all $i$ such that $w_{i \tau(i)} \leq W/2$. Note that there can be at most one advertiser with $w_{i \tau(i)} > W/2$; denote this advertiser by $i^*$. Let $T = \mc{N} \setminus \{i^*\}$. Summing the above inequality over all $i \neq i^*$ we get \begin{align} p SW(T, \rho) + (1-p) SW(T, \gamma) &= \sum_{i \neq i^* } (p v_{i\rho(i)} + (1-p) v_{i\gamma(i)}) \nonumber \\ &\geq \sum_{i \neq i^* } (p v_{i\tau(i)} - p \beta w_{i \tau(i)}) \nonumber \\ &\geq p SW(T, \tau) - p \sum_{i \neq i^*} \beta w_{i \tau(i)} \label{eq:eq1} \end{align} {\bf Case 1:} First we consider the case that there is no $i^*$ with $w_{i^* \tau(i^*)} > W/2$.
Starting from equation~\eqref{eq:eq1}, we can bound $\sum_i w_{i \tau(i)} \leq W$ and use Lemma~\ref{lem: beta-bound}. We have $p SW_{bpb}(\mathbf{b}, \mathbf{S}) + (1-p) SW_{max}(\mathbf{b}, \mathbf{S}) \geq p SW(\ifmmode{IntOPT}\else{$IntOPT$~}\fi) - 2p (SW_{bpb}(\mathbf{b}, \mathbf{S}) + SW_{max}(\mathbf{b}, \mathbf{S}))$. Setting $p=1/2$ and rearranging, we get that the price of anarchy $SW(\ifmmode{IntOPT}\else{$IntOPT$~}\fi)/SW_{\mathcal{M}}(\mathbf{b}, \mathbf{S})$ is at most $6$. {\bf Case 2:} Suppose there is an advertiser $i^*$ with $w_{i^* \tau(i^*)} > W/2$. Then we have that $\sum_{i \neq i^*} w_{i \tau(i)} < W/2$. We will prove the following inequality for $i^*$: \begin{equation} \label{eq:bound-S} p v_{i^*\rho(i^*)} + (1-p) SW_{max}(\mathbf{b}, \mathbf{S}) \geq (1-p) v_{i^* \tau(i^*)} \end{equation} Let $i^*$ deviate to bid truthfully. The utility upon deviating satisfies \begin{align} u_{i^*}(v_{i^*}, A_{i^*}, \mathbf{b}_{-i^*}, \mathbf{S}_{-i^*}) &\geq (1-p) u_{i^*}^{max}(v_{i^*}, A_{i^*}, \mathbf{b}_{-i^*}, \mathbf{S}_{-i^*}) \notag\\ &\geq (1-p) \max_{j \in A_{i^*}} v_{i^*j} - (1-p) \max_{i \neq i^*, j \in S_{i}} b_{ij} \notag\\ &\geq (1-p) v_{i^*\tau(i^*)} - (1-p) \max_{i \neq i^*, j \in S_{i}} b_{ij} \end{align} Here the first inequality uses the fact that with a bid at most the value, the utility in the bang-per-buck allocation with GSP cost-per-click is non-negative. The second inequality is because $i^*$ may not be allocated when bidding $v_{i^*}$, in which case the competing bid is larger than $i^*$'s value. If $i^*$ gets allocated in the max-value algorithm in equilibrium, $u_{i^*}(\mathbf{b},\mathbf{S})$ is at most $p v_{i^*\rho(i^*)} + (1-p) v_{i^*\gamma(i^*)} - (1-p) \max_{i \neq i^*, j \in S_{i}} b_{ij}$. Using the equilibrium condition with $u_{i^*}(\mathbf{b}, \mathbf{S}) \geq u_{i^*}(v_{i^*}, A_{i^*}, \mathbf{b}_{-i^*}, \mathbf{S}_{-i^*})$, we get $p v_{i^*\rho(i^*)} + (1-p) v_{i^*\gamma(i^*)} \geq (1-p) v_{i^*\tau(i^*)}$.
Equation~\eqref{eq:bound-S} follows because $v_{i^*\gamma(i^*)} = SW_{max}(\mathbf{b},\mathbf{S})$. If $i^*$ does not get allocated in the max-value algorithm in the equilibrium, then $u_{i^*}(\mathbf{b},\mathbf{S})$ is at most $p v_{i^* \rho(i^*)}$. Using $(1-p)\max_{i \neq i^*, j \in S_{i}} b_{ij} \leq(1-p) SW_{max}(\mathbf{b}, \mathbf{S})$, together with the pure Nash equilibrium condition, and rearranging, we get $p v_{i^*\rho(i^*)} + (1-p) SW_{max}(\mathbf{b}, \mathbf{S}) \geq (1-p) v_{i^* \tau(i^*)}$, i.e., equation~\eqref{eq:bound-S}. Then adding equations~\eqref{eq:eq1} and~\eqref{eq:bound-S}, we get \begin{align*} p SW_{bpb}(\mathbf{b}, \mathbf{S}) + 2(1-p) SW_{max}(\mathbf{b}, \mathbf{S}) \geq p (SW(\ifmmode{IntOPT}\else{$IntOPT$~}\fi) - v_{i^*\tau(i^*)}) + (1-p) v_{i^* \tau(i^*)} - p \beta \frac{W}{2} \end{align*} where we use $SW_{bpb}(\mathbf{b}, \mathbf{S}) = SW(T, \rho) + v_{i^*\rho(i^*)}$, $SW_{max}(\mathbf{b}, \mathbf{S}) \geq SW(T, \gamma)$, $SW(\ifmmode{IntOPT}\else{$IntOPT$~}\fi) = SW(T, \tau) + v_{i^* \tau(i^*)}$, and $\sum_{i \in T} w_{i \tau(i)} < W/2$. By Lemma~\ref{lem: beta-bound}, we have $\beta W/2 \le SW_{bpb}(\mathbf{b}, \mathbf{S}) + SW_{max}(\mathbf{b}, \mathbf{S})$. Thus, $$ pSW_{bpb}(\mathbf{b}, \mathbf{S}) + 2(1-p) SW_{max}(\mathbf{b}, \mathbf{S}) \geq p SW(\ifmmode{IntOPT}\else{$IntOPT$~}\fi) - p SW_{bpb}(\mathbf{b}, \mathbf{S}) - p SW_{max}(\mathbf{b}, \mathbf{S}) . $$ Setting $p = 1/2$, we get that $6\, SW_{\mathcal{M}}(\mathbf{b}, \mathbf{S}) \geq SW(\ifmmode{IntOPT}\else{$IntOPT$~}\fi)$, so the price of anarchy is at most 6. \end{proof} \paragraph{Bayes-Nash PoA} We also provide bounds on the Bayes-Nash price of anarchy when our allocation rule is paired with the GSP payment rule. Our proof technique is very similar to that of~\cite{caragiannis}: we borrow ideas from~\cite{caragiannis}, combine them with techniques from our pure-PoA bound to prove a smoothness inequality, and obtain a bound on the Bayes-Nash PoA.
\begin{theorem} \label{thm: bayes-poa} Under a no-overbidding assumption, the mechanism that runs $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ with probability $1/2$ and allocates to the maximum valued ad with probability $1/2$, and charges the corresponding GSP price, has a Bayes-Nash PoA of at most $\frac{6}{1-1/e}$. \end{theorem} We will prove the bound using the smoothness framework. Our proof approach is similar to that of~\cite{caragiannis} for proving bounds on the price of anarchy of the GSP auction in the traditional position auction setting. However, the knapsack constraint and the randomized allocation rule create unique challenges in our setting that we have to overcome. We recall the definition of $(\lambda, \mu)$ semi-smoothness, as defined in \cite{caragiannis}, which extends \cite{Roughgarden-intrinsic}, \cite{nadavR}. \begin{definition}[$(\lambda,\mu)$ semi-smooth games~\cite{caragiannis}] A game is $(\lambda,\mu)$ semi-smooth if for any bid profile $(\mathbf{b}, \mathbf{S})$, for each player $i$, there exists a randomized distribution over bids $b_i'$ such that $$\sum_i \mathbb{E}_{b_i'(v_i)}[u_i(b_i'(v_i), \mathbf{b}_{-i}, \mathbf{S})] \geq \lambda SW(OPT(\mathbf{v}, \mathbf{A})) - \mu SW(Alg(\mathbf{b}, \mathbf{S})) $$ \end{definition} The following lemma from~\cite{caragiannis} shows that a smoothness inequality of the above form provides a bound on the Bayes-Nash POA. \begin{lemma} [~\cite{caragiannis}] If a game is $(\lambda, \mu)$-semi-smooth and its social welfare is at least the sum of the players’ utilities, then the Bayes-Nash POA is at most $(\mu + 1)/\lambda$. \end{lemma} Thus, it only remains to prove the smoothness inequality. We prove that our mechanism is $(\frac{1}{2}(1-\frac{1}{e}), 2)$ semi-smooth, and hence obtain a Bayes-Nash POA bound of $6/(1 - \frac{1}{e}) \approx 9.49186$.
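Spelling out the arithmetic behind this bound, instantiating the lemma with $\lambda = \frac{1}{2}(1-\frac{1}{e})$ and $\mu = 2$ gives
$$\mathrm{POA} \;\leq\; \frac{\mu + 1}{\lambda} \;=\; \frac{2 + 1}{\frac{1}{2}\left(1 - \frac{1}{e}\right)} \;=\; \frac{6}{1 - 1/e} \;\approx\; 9.49186.$$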
\begin{theorem} Under a no-overbidding assumption, the mechanism that runs $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ with probability $1/2$ and allocates to the maximum valued ad with probability $1/2$, and charges the corresponding GSP price in each case, is $(\frac{1}{2}(1-\frac{1}{e}), 2)$ semi-smooth. \end{theorem} \begin{proof} Fix a valuation profile $(\mathbf{v}, \mathbf{A})$ and let $OPT$ denote the integer optimal allocation for $(\mathbf{v}, \mathbf{A})$. For each $i$, let $\tau(i)$ denote the rich ad selected for advertiser $i$ in $OPT$; let $\tau(i) = 0$ be the null ad with $\alpha_{i0} = 0$ and $w_{i0} = 0$ if advertiser $i$ is not allocated in $OPT$. Let $(\mathbf{b}, \mathbf{S})$ denote a bid profile. The allocation of the mechanism is composed of two parts: the bang-per-buck allocation and the max-value allocation. Let $\rho(i)$ denote the rich ad allocated to advertiser $i$ in the bang-per-buck allocation and $\gamma(i)$ denote the rich ad allocated to advertiser $i$ in the max-value allocation. If an advertiser is not allocated, we refer to the null ad with $\alpha_{i0} = 0$ and $w_{i0} = 0$. We will use $SW_{bpb}$ and $SW_{max}$ to denote the social welfare of the bang-per-buck and max-value allocation algorithms. Then, $$SW_{\mathcal{M}}(\mathbf{b}, \mathbf{S}) = p SW_{bpb}(\mathbf{b}, \mathbf{S}) + (1-p) SW_{max}(\mathbf{b}, \mathbf{S}) = p \sum_i v_{i\rho(i)} + (1-p) \sum_i v_{i\gamma(i)}.$$ Most of the difficulty in proving the smoothness inequality is in reasoning about what happens in the bang-per-buck allocation. As in Lemma~\ref{lem: beta-bound}, let $k^*=\lfloor W/2\rfloor + 1$ and let $(r, j) = (i(k^*), j(k^*))$ be the rich ad that would be allocated the $k^*$'th unit of space. Let $\beta = b_{rj}/w_{rj}$. For any advertiser $i$, suppose the advertiser deviates to a bid $y$ drawn from a distribution on $[0, v_i(1-1/e)]$ with density $f(y) = 1/(v_i - y)$.
Then by Lemma~\ref{lem: bayes-deviation}, $u_i^{bpb}(y, A_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \geq v_{i\tau(i)} - y \alpha_{i\tau(i)}$. {If $ \alpha_{i \tau(i)} \cdot y < \beta w_{i\tau(i)}$, we just lower bound the utility by $0$.} \begin{align*} \mathbb{E}_{y \sim f}[u_i^{bpb}(y, A_i, \mathbf{b}_{-i}, \mathbf{S}_{-i})] &\geq \mathbb{E}_{y \sim f} [\alpha_{i\tau(i)}(v_i - y) I(\alpha_{i \tau(i)} \cdot y \geq \beta w_{i\tau(i)})] \\ &= \int_0^{v_i(1-1/e)} \alpha_{i\tau(i)}(v_i - y) I(\alpha_{i \tau(i)} \cdot y \geq \beta w_{i\tau(i)}) \cdot \frac{1}{v_i-y} dy \\ &\geq \alpha_{i \tau(i)} v_i (1-1/e) - \beta w_{i\tau(i)} \end{align*} We have the above inequality for every $i$ with $w_{i \tau(i)} \leq W/2$. {\bf Case 1:} First consider the case where every advertiser has $w_{i \tau(i)} \leq W/2$ in OPT. Then we can sum over all $i$ and use the equilibrium condition to obtain a single smoothness inequality. \begin{align*} \sum_i &\mathbb{E}_{y \sim f} u_i(y, A_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \\ &= p \sum_i \mathbb{E}_{y \sim f} u_i^{bpb}(y, A_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) + (1-p) \sum_i \mathbb{E}_{y \sim f} u_i^{max}(y, A_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \\ &\geq p \sum_{i} \mathbb{E}_{y \sim f} u_i^{bpb}(y, A_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \\ &\geq p \sum_{i} [\alpha_{i \tau(i)} v_i (1-1/e) - \beta w_{i\tau(i)}] \\ &\geq p (1-1/e) SW(OPT) - p \beta W \end{align*} In the last step, we bound $\sum_i w_{i\tau(i)} \leq W$. If $p = 1/2$, by Lemma~\ref{lem: beta-bound} we have $p \beta W \leq 2p (SW_{bpb}(\mathbf{b}, \mathbf{S}) + SW_{max}(\mathbf{b}, \mathbf{S})) = 2 SW_{\mathcal{M}}(\mathbf{b}, \mathbf{S})$. And we have a smoothness inequality with parameters $(\frac{1}{2}(1-1/e), 2)$. {\bf Case 2:} Otherwise, suppose $OPT$ has an advertiser with $w_{i\tau(i)} > W/2$. Note that OPT can have at most one advertiser with ${w_{i \tau(i)}} > W/2$; denote this advertiser by $i^*$.
We consider the utility of advertiser $i^*$ upon deviating to a bid $y$ drawn from the distribution on $(0, v_{i^*}(1-1/e))$ with density $f(y) = 1/(v_{i^*} - y)$. With a bid of $y$, the bidder will definitely be allocated {in the max-value algorithm} if $ \alpha_{i^* \tau(i^*)} y \geq SW_{max}(\mathbf{b}, \mathbf{S})$. Note that this is a loose condition: $(i^*,\tau(i^*))$ may not be the most valuable rich ad for $i^*$, and if $i^*$ is allocated in the bid profile the threshold to win might be lower than $SW_{max}(\mathbf{b}, \mathbf{S})$. But we will use this weaker condition. Again, recall that the GSP cost-per-click will be at most the bid $y$. Then we have \begin{align*} \mathbb{E}_{y \sim f}[u_{i^*}^{max}(y, A_{i^*}, \mathbf{b}_{-i^*}, \mathbf{S}_{-{i^*}})] &\geq \mathbb{E}_{y \sim f}[\alpha_{i^*\tau(i^*)} (v_{i^*} - y) I(\alpha_{i^* \tau(i^*)} y \geq SW_{max}(\mathbf{b}, \mathbf{S}))] \\ &= \int_0^{v_{i^*}(1-1/e)} \alpha_{i^*\tau(i^*)} (v_{i^*} - y) \frac{1}{(v_{i^*} - y)} I(\alpha_{i^* \tau(i^*)} y \geq SW_{max}(\mathbf{b}, \mathbf{S}))dy \\ &\geq \int_{\frac{SW_{max}(\mathbf{b}, \mathbf{S})}{\alpha_{i^*\tau(i^*)}}}^{v_{i^*}(1-1/e)} \alpha_{i^*\tau(i^*)} dy \\ &\geq (1-1/e) \alpha_{i^*\tau(i^*)} v_{i^*} - SW_{max}(\mathbf{b}, \mathbf{S}) \end{align*} We can combine all the inequalities to obtain a single smoothness inequality.
\begin{align*} \sum_i &\mathbb{E}_{y \sim f} u_i(y, A_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \\ &= p \sum_i \mathbb{E}_{y \sim f} u_i^{bpb}(y, A_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) + (1-p) \sum_i \mathbb{E}_{y \sim f} u_i^{max}(y, A_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \\ &\geq p \sum_{i\neq i^*} \mathbb{E}_{y \sim f} u_i^{bpb}(y, A_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) + (1-p) \mathbb{E}_{y \sim f} u_{i^*}^{max}(y, A_{i^*}, \mathbf{b}_{-i^*}, \mathbf{S}_{-i^*}) \\ &\geq p \sum_{i \neq i^*} [\alpha_{i \tau(i)} v_i (1-1/e) - \beta w_{i\tau(i)}] + (1-p) [(1-1/e) \alpha_{i^*\tau(i^*)} v_{i^*} - SW_{max}(\mathbf{b}, \mathbf{S})] \\ &\geq p (1-1/e) SW(OPT) - p \beta W/2 + (1-2p) (1-1/e)v_{i^* \tau(i^*)} - (1-p) SW_{max}(\mathbf{b}, \mathbf{S}) \end{align*} Here in the last step we bound $\sum_{i \neq i^*} w_{i \tau(i)} < W/2$. By Lemma~\ref{lem: beta-bound}, we obtain an upper bound of $2 SW_{bpb}(\mathbf{b}, \mathbf{S}) + 2 SW_{max}(\mathbf{b}, \mathbf{S})$ on $\beta W$. Setting $p = 1/2$, $$\sum_i \mathbb{E}_{y \sim f} u_i(y, A_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \geq \frac{1}{2}(1-\frac{1}{e}) SW(OPT) - \frac{1}{2}( SW_{bpb}(\mathbf{b}, \mathbf{S}) + SW_{max}(\mathbf{b}, \mathbf{S})) - \frac{1}{2} SW_{max}(\mathbf{b}, \mathbf{S}). $$ Recall that $SW_{\mathcal{M}}(\mathbf{b}, \mathbf{S}) = \frac{1}{2} SW_{bpb}(\mathbf{b}, \mathbf{S}) + \frac{1}{2} SW_{max}(\mathbf{b}, \mathbf{S}).$ Hence, the smoothness inequality \begin{align*} \sum_i \mathbb{E}_{y \sim f} u_i(y, A_i, \mathbf{b}_{-i}, \mathbf{S}_{-i}) \geq \frac{1}{2}(1-1/e) SW(OPT) - 2 SW_{\mathcal{M}}(\mathbf{b}, \mathbf{S}), \end{align*} follows, and we obtain a bound on the Bayes-Nash POA of $6/(1-1/e) \approx 9.49186$. \end{proof} \section{Experiments} \label{sec: experiments} In this section we present some empirical results for our truthful mechanisms. The allocation rules we use for our theoretical results can be extended to obtain higher value.
First we extend $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ to skip past a high bang-per-buck ad that does not fit in the remaining space. More precisely, recall that $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ calls $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ to get the space allocation. $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ stops when the ad being considered cannot fit fully in the remaining space. The advertiser corresponding to this ad still gets allocated the remaining space, which is filled with the highest-value ad that fits in post-processing. In our modified version, we update $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ to skip past this large ad. This is equivalent to dropping step $(3)$ of $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ (i.e., we keep going until we run out of ads) and running the rest of $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ as it is. We call this modified algorithm \emph{\ifmmode{\mathrm{Greedy-BpB}}\else{GreedyByBangPerBuck}\fi}. For our theoretical result we also select the maximum value ad with probability $1/3$. In practice, this can be very inefficient. For our empirical evaluation, we extend this to continue allocating as long as space remains. Similar to \ifmmode{\mathrm{Greedy-BpB}}\else{GreedyByBangPerBuck}\fi, the algorithm skips past a high value but large ad that cannot fit, and continues allocating until the space or the set of rich ads runs out. We call this algorithm \emph{GreedyByValue}. It is worth noting that these extensions do not improve the worst-case approximation ratio of Theorem~\ref{thm: simple 4-approx}; we include a brief proof in Appendix~\ref{app:proof-experiments}. We implement our proposed randomized algorithm by flipping a coin with probability $2/3$ of heads for each query and selecting the result of the GreedyByBangPerBuck algorithm if it is heads and of GreedyByValue otherwise. We call this \emph{RandomizedGreedy}.
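The coin-flip wrapper together with the skip-past variant of the greedy loop can be sketched as follows. This is an illustrative sketch, not the evaluated implementation; rich ads are hypothetical \texttt{(advertiser, bid, space)} triples, and both allocators skip ads that do not fit instead of stopping.

```python
import random

def greedy_skip(ads, W, key):
    """Greedy allocation in decreasing `key` order; ads that do not
    fit are skipped (not a stopping condition), and a later, larger
    ad may replace the one its advertiser currently holds."""
    alloc, used = {}, 0
    for adv, bid, space in sorted(ads, key=key, reverse=True):
        held = alloc.get(adv)
        if held is not None and space <= held[1]:
            continue
        extra = space - (held[1] if held is not None else 0)
        if used + extra > W:
            continue  # skip past this ad and keep scanning
        alloc[adv] = (bid, space)
        used += extra
    return alloc

def randomized_greedy(ads, W, rng=random):
    # GreedyByBangPerBuck with probability 2/3, GreedyByValue otherwise
    if rng.random() < 2 / 3:
        return greedy_skip(ads, W, key=lambda a: a[1] / a[2])
    return greedy_skip(ads, W, key=lambda a: a[1])
```

Passing a fixed-seed random generator as `rng` makes the per-query coin flips reproducible across runs.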
As a baseline, we implement \emph{IntOPT}, which uses brute-force search to evaluate all possible allocations and compute the optimal integer allocation. This allocation paired with the VCG payment rule (computed by finding the integer optimum with advertiser $i$ removed, and subtracting from that the welfare of all advertisers other than $i$ in the optimal allocation) gives the VCG mechanism. Finally, we implement the incremental bang-per-buck order algorithm as \emph{IncrementalBPB} to compute the fractionally optimal allocation. We evaluated our algorithms on real-world data obtained from a large search advertising company. The data consists of a sample of approximately 11000 queries, selected to have at least 6 advertisers each. All the space values for the rich ads are integers. We use 500 as the space limit, as that is larger than the space of any individual rich ad. Table~\ref{tab:my_label} compares the average performance of these algorithms. Our algorithms are comparable to VCG on average, but require much less time to run. \begin{table} \centering \begin{tabular}{c|c|c} Algorithm & ApproxECPM & time-msec \\ \hline GreedyByBPB-Myerson & 0.9493 & 0.0033 \\ GreedyByValue-Myerson & 0.9196 & 0.0016 \\ RandomizedGreedy-Myerson & 0.9393 $\pm$ 0.0001 & 0.0027 $\pm$ 0.000 \\ VCG & 1.0 & 0.0308 \end{tabular} \caption{Average performance of the algorithms compared to VCG. We report the average approximation of eCPM relative to VCG and the average running time in milliseconds. We report confidence intervals for the randomized algorithm by noting the average performance over all queries over 100 runs.} \label{tab:my_label} \end{table} We first compare the approximation obtained by various algorithms to the fractional optimal.
\begin{figure} \caption{Histogram of the approximation factor for IntOPT, GreedyByValue, GreedyByBangPerBuck and RandomizedGreedy compared to the fractional optimal.} \label{fig:frac-opt-approx} \end{figure} In Figure~\ref{fig:frac-opt-approx}, we see that \ifmmode{\mathrm{Greedy-BpB}}\else{GreedyByBangPerBuck}\fi~and IntOPT obtain at least a $0.55$ fraction of the fractional optimal, while the approximation factor for GreedyByValue can be as low as $0.4$. There is also a substantial amount of mass in the $1.0$ bucket, where the integer and fractional optima coincide and the greedy algorithms also sometimes achieve that value. Next we compare the approximation obtained by various algorithms to IntOPT. \begin{figure} \caption{Histogram of the approximation factor for GreedyByValue, GreedyByBangPerBuck and RandomizedGreedy compared to IntOPT. The height of each bucket represents the fraction of queries in the bucket. The last figure shows the correlation of the approximation factor relative to IntOPT for GreedyByValue and GreedyByBangPerBuck. The color scale in the heat-map is by $\log(\text{number of queries})$ in the bucket.} \label{fig:vcg-approx} \end{figure} In Figure~\ref{fig:vcg-approx} we see the approximation obtained by various algorithms compared to the IntOPT allocation. There are more queries where we obtain the optimal approximation, but the worst case is still $\sim 0.6$ for GreedyByBangPerBuck and $\sim 0.4$ for GreedyByValue. For additional insight, we plot a heatmap to correlate the approximation factors obtained by GreedyByBangPerBuck and GreedyByValue, with VCG as the baseline. In Figure~\ref{fig:vcg-approx}, we compare the approximation factor of GreedyByValue and GreedyByBangPerBuck. Blank spaces in this plot correspond to not having any queries with that particular combination of approximation factors. We note that many of the queries have the same approximation factor for GreedyByValue and GreedyByBangPerBuck, indicating that on these queries RandomizedGreedy loses nothing by randomizing.
But GreedyByBangPerBuck more often has a better approximation factor than GreedyByValue, so sticking to GreedyByBangPerBuck as the only heuristic might perform better. Finally, in Appendix~\ref{app:experiments} (Figure~\ref{fig:running_time_alg}) we compare the clock-time of our allocation rules with that of the IntOPT allocation rule. These allocation rules are monotone, so they can be paired with Myerson's payment rule as implied by Lemma~\ref{lem: monotone}. We evaluate the time required to compute the payments and the relative revenue compared to VCG in Appendix~\ref{app:experiments}. Additionally, in Appendix~\ref{app:experiments} we evaluate our algorithms with an added cardinality constraint and show that we obtain reasonable approximations to the optimal allocation. \appendix \section{Missing Preliminaries}\label{app: missing prelims} \subsection{Fractional Optimal and the Incremental-bang-per-buck Algorithm}\label{appsubsec: incremental bang-per-buck order} \citet{Sinha79} introduce the notion of \emph{Dominated} and \emph{LP-dominated} ads and show that they are not used in the fractional optimal solution. \begin{definition}[Dominated/LP-dominated~\cite{Sinha79}]\label{def: dominated} For an advertiser $i$, the rich ads of the advertiser can be dominated in two ways. \begin{itemize}[leftmargin=*] \item{Dominated:} If two rich ads satisfy $w_{ij} \leq w_{ik}$ and $b_{ij} \geq b_{ik}$, then $k$ is dominated by $j$. \item {LP-dominated:} If three rich ads $j, k, l$ with $w_{ij} < w_{ik} < w_{il}$ and $b_{ij} < b_{ik} < b_{il}$ satisfy $\frac{b_{il} - b_{ik}}{w_{il} - w_{ik}} \geq \frac{b_{ik} - b_{ij}} {w_{ik} - w_{ij}}$, then $k$ is LP-dominated by $j$ and $l$. \end{itemize} \end{definition} Moreover,~\cite{Sinha79} showed that the fractional optimal solution can be obtained through the following \emph{incremental-bang-per-buck} algorithm. \begin{algo}~ \begin{itemize}[leftmargin=*] \item Eliminate all the Dominated and LP-dominated rich ads for each advertiser.
\item (Compute incremental bang-per-buck) For each advertiser, allocate the null ad. Sort the remaining rich ads by space (label them $(i,1), \ldots, (i,k), \ldots$). Construct a vector of scores $\frac{b_{ik} - b_{i(k-1)}}{w_{ik} - w_{i(k-1)}}$ for these. \item (Allocate in incremental bang-per-buck order) While space remains: select the rich ad $(i, k)$ with the highest score among the remaining rich ads. If the remaining space is at least as much as the incremental space required ($w_{ik} - w_{i(k-1)}$), this new rich ad is allocated to its advertiser and it fully replaces the previously allocated rich ad for the advertiser. Otherwise, allocate the advertiser fractionally in the remaining space. This fractional allocation puts a weight $x$ on the previously allocated rich ad of the advertiser and $(1-x)$ on the newly selected rich ad such that the fractional space equals the remaining space plus the previously allocated space of the advertiser.\footnote{For more details refer to Lemma~\ref{lem: 1optfrac} in the appendix.} \end{itemize} \end{algo} We prove some lemmas about the optimal fractional solution. The following lemma provides a simple characterization of the advertisers' welfare depending on whether the optimal solution uses one or two rich ads fractionally. \begin{lemma} \label{lem: 1optfrac} Let $W_i^*$ be the space allocated to advertiser $i$ with a non-null allocation in an optimal solution to the {\sc Multi-Choice Knapsack} problem. There are two cases. \begin{enumerate} \item The optimal allocation uses a single {non-null} rich ad with (value, size) $(b_l, l)$ ($l = W^*_{i}$) in space $W^*_i$. The advertiser is allocated integrally and its value is $b_l$. \item The optimal allocation uses two rich ads with (value, size) $(b_s, s)$, $(b_l, l)$, with $s < W^*_{i} < l$ and $(b_l, l)$ not null, in space $W^*_i$. We have that $b_l > b_s$ and the advertiser's value (i.e.\ their contribution to the social welfare) is $b_s + \frac{b_l - b_s}{l-s} (W^*_{i} - s)$.
If $(b_s, s)$ is not null, $\frac{b_s}{s} \geq \frac{b_l}{l} \geq \frac{b_l - b_s}{l-s}$. \end{enumerate} \end{lemma} \begin{proof} Recall that the main purpose of the null ad is to help make the fractional allocation of advertiser $i$ exactly equal to one. We can reason about the optimal allocation of advertiser $i$ in space $W_i^*$ without the null ad, and bring in the null ad if $i$'s allocation is less than one. Note that the null ad does not change the value or space occupied by advertiser $i$. The optimization problem for a single advertiser (without the null ad) is as follows. Let $S$ be the set of rich ads for advertiser $i$. \begin{align*} \max &\sum_{k \in S} x_k b_k \\ s.t. & \sum_{k \in S} x_k w_k \leq W_i^* \\ & \sum_{k \in S} x_k \leq 1 \\ & x_k \geq 0 \qquad \forall k \in S \end{align*} This LP has $|S|$ variables and $|S| + 2$ constraints, so there exists an optimal basic solution with at most $2$ non-zero variables. Suppose there is only one non-zero variable. Then the optimal fractional solution is made of a single ad $(b_l, l)$ with $l \geq W_i^*$. Suppose the advertiser is allocated $x \leq 1$. Then since $l x \leq W_i^*$, we have that $x \leq \frac{W_i^*}{l}$. Since the optimal fractional solution maximizes advertiser $i$'s value $b_l x$ in that space (as noted in Fact~\ref{fact: fracopt}), we have that $x = \frac{W_i^*}{l}$ and the advertiser's value is $b_l \frac{W_i^*}{l} = \frac{b_l}{l} W_i^*$. When $l = W_i^*$, $x = 1$ and the advertiser's allocation is integral. Otherwise, $x < 1$ and we can set $(b_s, s) = (0, 0)$ with $x_s = 1-x$; the advertiser's value is still $\frac{b_l}{l} W_i^* = \frac{b_l - 0}{l-0} (W_i^* - 0) + 0$. Suppose the optimal allocation uses two (non-null) rich ads $(b_s, s)$, $(b_l, l)$. Then both the knapsack and the unit-demand constraints must be tight. That is, $x_s + x_l = 1$ and $x_s s + x_l l = W_i^*$. The solution to this system is $x_s = \frac{l - W_i^*}{l-s}$ and $x_l = \frac{W_i^* - s}{l-s}$.
Both $x_s$ and $x_l$ are fractional if $s < W_i^* < l$, and the advertiser's value is \begin{align*} &b_s \frac{l - W_i^*}{l-s} + b_l \frac{W_i^* - s}{l-s} \\ &~~= b_s + (b_l - b_s) \frac{W_i^* - s}{l-s} \\ &~~= b_s + \frac{b_l - b_s}{l-s} (W_i^* - s) \qedhere \end{align*} \end{proof} The following lemma is a specialized property of the optimal fractional solution constructed by the incremental bang-per-buck order algorithm (Appendix~\ref{appsubsec: incremental bang-per-buck order}); it is easily derived, and we use it in our proofs. \begin{lemma}\label{lem:incremental bpb} Let $i$ be the last advertiser {that is allocated in the incremental bang-per-buck order algorithm} and suppose it is allocated fractionally. Let $\mathbf{x}^* = \mathbf{x}(OPT)$ denote the optimal fractional allocation. Let $(b_s,s),(b_\ell,\ell)$ be the ads used in $x_i^*$. For all $j\neq i$, $\frac{b_\ell - b_s}{\ell - s} \le \frac{b_{j}\cdot x_j^*}{ W_j^*}$. \end{lemma} \begin{proof} Let $j$ be an advertiser with $j \neq i$. Since $j \neq i$, by Fact~\ref{fact: fracopt} the allocation of $j$ in $\mathbf{x}^*$ is integral. Let $(b_{jk}, w_{jk})$ be the ad allocated to $j$. Let $(b_{jt}, w_{jt})$ be the ad that was previously allocated to $j$ when $(b_{jk}, w_{jk})$ was considered. Since the incremental bang-per-buck order is defined by sorting ads in increasing order of their space, $w_{jk} \geq w_{jt}$. Thus $\frac{b_{jk} - b_{jt}}{w_{jk}-w_{jt}}$ is the incremental bang-per-buck that allowed $j$ to be selected, and since $i$ is the last advertiser we have that $\frac{b_l - b_s}{l-s} \leq \frac{b_{jk} - b_{jt}}{w_{jk}-w_{jt}}$. To conclude the proof, we show that $\frac{b_{jk} - b_{jt}}{w_{jk}-w_{jt}} \leq \frac{b_{jk}}{w_{jk}}$. This is true if and only if $\frac{b_{jk}}{w_{jk}} \leq \frac{b_{jt}}{w_{jt}}$. We have that $0 \leq w_{jt} \leq w_{jk}$. We can conclude that $0 \leq b_{jt} \leq b_{jk}$, as otherwise $k$ is dominated by $t$.
Finally, suppose towards a contradiction that $\frac{b_{jk}}{w_{jk}} > \frac{b_{jt}}{w_{jt}}$. Then with a simple rearrangement we obtain that $\frac{b_{jt}}{w_{jt}} < \frac{b_{jk} - b_{jt}}{w_{jk} - w_{jt}}$, so $t$ is LP-dominated by $0$ and $k$; a contradiction, since LP-dominated ads were eliminated. \end{proof} \section{Proofs from Section~\ref{sec: monotone and lower bounds}}\label{app:missing from sec 3} \begin{proof}[Proof of Theorem~\ref{thm: lower bounds} continued] Consider an instance with two advertisers $A, B$, each with rich ads $\{(1, 1), (2-\varepsilon, 2)\}$. The total space available is $3$. The optimal allocation can randomly pick between $\{A: (1, 1), B: (2-\varepsilon, 2)\}$ and $\{A: (2-\varepsilon, 2), B: (1, 1)\}$, getting a total value of $3-\varepsilon$. Therefore, in the output of any algorithm in this instance, some advertiser, say $B$, obtains value $x \leq (3 - \varepsilon)/2$. A randomized monotone allocation rule must ensure that $B$'s value, if she hides $(1, 1)$, is at most $x$. In that case, the algorithm will randomize between $\{A: (1, 1), B: (2-\varepsilon, 2)\}$ and $\{A: (2-\varepsilon, 2)\}$, and it can choose the first allocation with probability at most $\frac{x}{2-\varepsilon}$. The social welfare of this randomized allocation is at most $\frac{x}{2-\varepsilon} \cdot (3-\varepsilon)+ (1 - \frac{x}{2-\varepsilon}) \cdot (2-\varepsilon) \leq (11/12) \cdot (3-\varepsilon)$. Recalling that OPT is $(3 - \varepsilon)$ concludes the proof. \end{proof} \section{Proofs from Section~\ref{sec: positive result}}\label{app: missing from positive} \begin{proof}[Proof of Observation~\ref{obv:ALGB ignores dominated}] We have that $\frac{b_{ij}}{w_{ij}} \geq \frac{b_{ij'}}{w_{ij'}}$ and $w_{ij} \geq w_{ij'}$. Multiplying these two inequalities, we get $b_{ij} \geq b_{ij'}$. If $w_{ij} = w_{ij'}$, then $j'$ is dominated by $j$. Otherwise, we will show that $j'$ is LP-dominated by $j$ and $0$. We have that $0 < w_{ij'} < w_{ij}$ and $0 \leq b_{ij'} \leq b_{ij}$.
It remains to show that $\frac{b_{ij} - b_{ij'}}{w_{ij} - w_{ij'}} \geq \frac{b_{ij'}}{w_{ij'}}$. This is equivalent to $\frac{b_{ij}}{w_{ij}} \geq \frac{b_{ij'}}{w_{ij'}}$, which is true. \end{proof} \begin{proof}[Proof of Claim~\ref{claim: integral-OPT-I}] The post-processing step in $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ integrally allocates the best ad that fits in $W_i$ for all advertisers $i\in \mc{N}$. Let $INT_i(w)$ denote the best ad in $S_i$ that fits in space $w$. For all $i\in \mc{I}\setminus \{i'\}$, since $W_i \ge W_i^*$ and the optimal allocation $\mathbf{x}^*_i$ is integral, we get $b_i\cdot \mathbf{x}_i(\ifmmode{ALG_I}\else{$ALG_I$~}\fi) = b_i\cdot \mathbf{x}_i(INT_i(W_i)) \ge b_i\cdot \mathbf{x}_i(INT_i(W_i^*)) = b_i \cdot \mathbf{x}_i(OPT_i(W_i^*))$. Recall that, by Lemma~\ref{lem: 1optfrac}, we have $ b_i\cdot \mathbf{x}_i(OPT_i(W_i^*)) = b_i\cdot \mathbf{x}_i^*$. Thus we get $b_i\cdot \mathbf{x}_i^* \le b_i \cdot \mathbf{x}_i(\ifmmode{ALG_I}\else{$ALG_I$~}\fi)$ for all $i\in \mc{I}\setminus\{i'\}$. Further, by summing up the contributions of all $i\in \mc{I}\setminus\{i'\}$, we get $\mathrm{Val}(\mathbf{x}^*, \mc{I}\setminus\{i'\}, \vec{W}^*) \le \mathrm{Val}(\mathbf{x}(\ifmmode{ALG_I}\else{$ALG_I$~}\fi), \mc{I}\setminus\{i'\}, \vec{W})$. \end{proof} \begin{proof}[Proof of Claim~\ref{claim: bpb-K}] $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ considers ads in decreasing order of bang-per-buck, and moreover by Observation~\ref{obv:ALGB ignores dominated} it never ``ignores'' ads that are allocated in $OPT$. So, if $k\in \mc{K}\setminus \{i'\}$ had higher bang-per-buck in $OPT$ than any advertiser $i$ in $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$, then the space allotted for $k$ in $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ would have been at least $W_{k}^*$; a contradiction. Note that this holds even if $i=i''$ is the last advertiser considered in $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ (who potentially gets a fractional allocation).
\end{proof} \begin{proof}[Proof of Claim~\ref{claim:i'-contribution}] From Lemma~\ref{lem: 1optfrac} we know that $b_\ell > b_s$ and $b_{i'}\cdot \mathbf{x}_{i'}^* = b_s + (b_\ell - b_s)\frac{W_{i'}^* - s}{\ell - s}$. Moreover, if $(b_s, s)$ is not the null ad, i.e., $s > 0$, then $\frac{b_\ell - b_s}{\ell-s} \leq \frac{b_\ell}{\ell} \leq \frac{b_s}{s}$. Suppose that $i' \in \mc{K}$. If $s > W_{i'}$, then by the same argument as Claim~\ref{claim: bpb-K} we have that for all $i\in \mc{N}$ the bang-per-buck of every ad in $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ is at least $b_s/s \ge b_\ell/\ell \ge (b_\ell - b_s)/(\ell-s)$. If $s \le W_{i'}$, we have $b_s \le b_{i'}\cdot \mathbf{x}_{i'}(\ifmmode{ALG_I}\else{$ALG_I$~}\fi)$. And, by the same argument as Claim~\ref{claim: bpb-K}, we have that $\frac{b_\ell - b_s}{\ell-s} \leq \frac{b_\ell}{\ell} \leq \frac{b_j \mathbf{x}_j(\ifmmode{ALG_I}\else{$ALG_I$~}\fi)}{W_j}$ for $j \in \mc{N}$. Suppose that $i'\in \mc{I}$. Clearly, $s \le W_{i'}^* \leq W_{i'}$, so we have $b_s \le b_{i'}\cdot \mathbf{x}_{i'}(\ifmmode{ALG_I}\else{$ALG_I$~}\fi)$. Let $k\in \mc{K}$ be some advertiser with $W_k^* > W_k$. By Lemma~\ref{lem:incremental bpb} we have $(b_\ell - b_s)/(\ell -s) \leq b_k\cdot \mathbf{x}_k^*/ W_k^*$. Claim~\ref{claim: bpb-K} gives us that $b_k\cdot \mathbf{x}_k^*/W_k^* \le b_i\cdot \mathbf{x}_i(\ifmmode{ALG_B}\else{$ALG_B$~}\fi)/W_i$ for all $i\in\mc{N}$. Thus we get $(b_\ell - b_s)/(\ell -s) \le b_i\cdot \mathbf{x}_i(\ifmmode{ALG_B}\else{$ALG_B$~}\fi)/W_i$ for all $i\in\mc{N}$. \end{proof} \section{Examples}\label{app: examples} The following example shows that $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ might not be monotone. \begin{example}\label{eg:algB-not-monotone} Consider two advertisers $A$ and $B$. $A$ has two rich ads with (value, size) = $(2,2)$ and $(1,3)$, and $B$ has one rich ad with (value, size) = $(0.5,3)$. Let the total space be $W=3$. $\ifmmode{ALG_B}\else{$ALG_B$~}\fi$ will allocate $(1,3)$.
But if $A$ removes $(1,3)$, then the algorithm allocates $(2,2)$ to $A$ and $(0.5,3)$ to $B$ fractionally. \end{example} The following example shows that $\ifmmode{ALG_I}\else{$ALG_I$~}\fi$ can be an arbitrarily bad approximation to the social welfare. \begin{example}\label{eg:algI-bad} We have two advertisers $A$ and $B$. $A$ has one rich ad with (value, size) = $(\varepsilon,\varepsilon/2)$, and $B$ has one rich ad $(M,M)$. The total space available is $W= M$. Clearly, the optimal integer allocation is to award the entire space to $B$, to obtain social welfare $= M$. The fractional opt selects $A: (\varepsilon,\varepsilon/2)$ and $B:(M,M)$ with weight $\frac{M-\varepsilon/2}{M}$, obtaining social welfare $=M+\varepsilon/2$. We note that, in this instance, the bang-per-buck algorithm also selects the fractional optimal allocation. At the same time, the integer allocation $\mathbf{x}(\ifmmode{ALG_I}\else{$ALG_I$~}\fi)$ drops $B$ and only obtains social welfare $= \varepsilon$. \end{example} \section{Proofs from Section~\ref{sec: experiments}}\label{app:proof-experiments} We briefly sketch a proof that the new ``heuristically better'' algorithms are still monotone. \begin{lemma} \ifmmode{\mathrm{Greedy-BpB}}\else{GreedyByBangPerBuck}\fi~ is monotone in both $b_i$ and $S_i$. \end{lemma} \begin{proof} Recall that in \ifmmode{\mathrm{Greedy-BpB}}\else{GreedyByBangPerBuck}\fi~we do not stop immediately when encountering an ad that doesn't fit, but instead we continue until we run out of ads or knapsack space. We will refer to this as the bang-per-buck allocation. Finally, we do a post-processing step as usual. Let $W_i$ denote the space allocated to agent $i$ before the post-processing step for each $i$. Since the post-processing step uses the space $W_i$ optimally, it is enough to show that the bang-per-buck allocation is space monotone. That is, we will show that $W_i$ is monotone in $b_i$ and $S_i$ (without worrying about the post-processing step).
Also, wlog we can assume that the algorithm continues to consider ads in bang-per-buck order until we run out of ads (that is, if the knapsack is full we will keep going without allocating anything else). Clearly, for any $b'_i> b_i$, the advertiser will not get allocated less space when bidding $b'_i$. This is because, when reporting $b'_i$, all the ads of $i$ now have higher bang-per-buck (compared to reporting $b_i$). So any ad $j\in S_i$ that was allocated (temporarily or otherwise) under $b_i$ will still be allocated under $b'_i$. In particular, the space available to $j$ is weakly higher under $b'_i$. We next show that by removing any ad $j\in S_i$ the space allocated to $i$ does not increase. Note that, by transitivity, it is sufficient to prove monotonicity for removing one ad at a time. We see that by removing ads we can only increase the amount of space allocated to other advertisers. This is because the algorithm will continue until we run out of space. First, if $j$ was never (temporarily) allocated by the algorithm, then either $j$ was dominated or there was not enough space to allocate $j$; in either case nothing changes under $S_i\setminus \{j\}$. Next consider the case where $j$ was the final ad allocated under $S_i$. This implies that either $j$ was the largest ad of $i$, or no ad $k\in S_i$ larger than $j$ (that is, with $w_{ik} > w_{ij}$) had enough space available. In either case, by removing $j$ the space allocated to $i$ can only be smaller, since no such larger ad $k$ (if any) will have enough space available. Next we consider the case where $j$ was temporarily allocated and finally a larger ad $k$ was allocated to $i$ instead. Then under $S_i\setminus \{j\}$ there can only be weakly less space available to $k$. To see why this is true, consider the ``time step'' at which $j$ was temporarily allocated.
{Note that $j$ being allocated does not reduce the space available to advertiser $i$, as subsequent rich ads of $i$ replace $j$ and can use the space previously allocated to $j$.} If $w-w_{ij}$ is the space available to other agents at this time step, then by removing $j$ there is still at least $w-w_{ij}$ space available for other agents at this time step (when $j$ is not available). {Also, while $j$ being allocated does cause the algorithm to drop other rich ads of lower bang-per-buck and lower size, since the size of these rich ads is necessarily smaller, they cannot make more space available under $S_{i} \setminus \{j\}$.} Hence we see that the space allocated to $i$ is weakly less under $S_{i}\setminus \{j\}$. \end{proof} \begin{lemma} GreedyByValue is monotone in both $b_i$ and $S_i$. \end{lemma} \begin{proof} Recall that in GreedyByValue we continue allocating ads in value order (while only allocating at most one ad per advertiser) until we run out of ads. That is, we skip past ads that do not fit in the available space and continue considering ads in value order. Once an agent $i$ gets allocated, we ignore all their remaining ads. Suppose $i$ changes her bid from $b_i$ to $b'_i > b_i$. This implies that the value of all of $i$'s ads increases. Therefore, her allocation under $b'_i$ can only be better. Similarly, by dropping an ad $j\in S_i$ the allocation of $i$ can only be worse. If $j$ was not allocated under $S_i$, then nothing changes, because either $j$ didn't fit (in which case we will still continue) or a better ad was allocated before $j$ and will be allocated under $S_i\setminus \{j\}$. If $j$ was allocated, then $j$ was the highest-valued ad of $i$ that fit. That is, all ads of $i$ with higher value than $j$ (if any) were considered before $j$ and did not fit. Thus they will still not be allocated even under $S_i \setminus \{j\}$.
\end{proof} \section{Further Empirical Evaluation}\label{app:experiments} \paragraph{Comparison with the Myerson payment rule.} We compare the revenue performance of our allocation algorithms when paired with truthful payment rules. \begin{table} \centering \begin{tabular}{c|c|c|c} Algorithm &ApproxECPM & ApproxPayment & time-msec \\ GreedyByBPB-Myerson & 0.9493 & 0.66 & 0.0033 \\ GreedyByValue-Myerson & 0.9196 & 1.00 & 0.0016 \\ RandomizedGreedy-Myerson & 0.9393 $\pm$ 0.0001 & 0.7758 $\pm$ 0.0007 & 0.0027 $\pm$ 0.000 \\ VCG & 1.0 & 1.0 & 0.0308 \end{tabular} \caption{Average performance of the algorithms compared to VCG. We report the average approximation of eCPM and payment relative to VCG, and the average running time in milliseconds. We report confidence intervals for the randomized algorithm.} \label{tab:payment} \end{table} Pairing IntOPT with the VCG payment rule gives the VCG mechanism. The VCG payment for advertiser $i$ is computed by solving the brute-force integer OPT with advertiser $i$ removed and subtracting from that the total value of all advertisers other than $i$ in the optimal allocation. For our monotone mechanisms we compute Myerson payments as implied by Lemma~\ref{lem: monotone}. We compute the Myerson payment for advertiser $i$ by first computing the GSP cost-per-click at the submitted bid, setting a new bid equal to the GSP cost-per-click minus $\varepsilon$, and rerunning the allocation algorithm. This procedure is repeated until the GSP cost-per-click or the allocation of advertiser $i$ is equal to zero. We first compare the revenue performance of the four different truthful mechanisms that we have. Since these mechanisms are truthful, revenue can be compared without having to reason about equilibrium. Note, however, that we do not set reserve prices; reserve prices can be set and tuned differently to fully compare the revenue from these mechanisms.
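The payment computation above is an instance of the standard identity for monotone single-parameter allocation rules, $p(b) = b\,x(b) - \int_0^b x(t)\,dt$. The following is a hedged sketch (not our experimental code): \texttt{alloc} is a hypothetical stand-in for any monotone allocation rule, mapping an advertiser's bid to her allocation fraction, and the integral is approximated on a uniform bid grid.

```python
# Illustrative sketch only (not the paper's implementation): the Myerson
# payment for a monotone single-parameter allocation rule, approximated by
# discretizing p(b) = b*x(b) - integral_0^b x(t) dt on a uniform bid grid.
# `alloc` is a hypothetical stand-in for any monotone allocation rule.

def myerson_payment(alloc, bid, steps=10_000):
    """Approximate the Myerson payment at `bid` for allocation rule `alloc`."""
    dt = bid / steps
    # Left Riemann sum of the allocation curve over [0, bid).
    integral = sum(alloc(k * dt) for k in range(steps)) * dt
    return bid * alloc(bid) - integral

# Toy monotone rule: the bidder wins fully once her bid clears a threshold.
threshold_alloc = lambda b: 1.0 if b >= 1.0 else 0.0

# A winner bidding 3 pays approximately her critical bid of 1.
print(round(myerson_payment(threshold_alloc, 3.0), 2))
```

For the threshold rule the payment recovers the critical bid, matching the intuition behind the iterative bid-lowering procedure described above: it searches for the lowest bid at which the allocation is retained.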
\begin{figure} \caption{Histogram of the ratio of revenue with the Myerson payment rule for GreedyByValue, GreedyByBangPerBuck (GreedyByBPB) and RandomizedGreedy compared to VCG} \label{fig:payment-approx} \end{figure} As Figure~\ref{fig:payment-approx} shows, the revenue from GreedyByBangPerBuck and GreedyByValue can be both higher and lower than VCG's. GreedyByBangPerBuck tends to have lower revenue on average. This is probably due to the bang-per-buck allocation --- large-value ads might also occupy larger space and have lower bang-per-buck. Thus, even if a large-value ad is used to price smaller ads that are selected, since the bang-per-buck is small, the payment for the smaller ad is still small. We might also make a better trade-off between revenue and efficiency by stopping the algorithms early. In Figure~\ref{fig:running_time_mech}, we compare the running time for computing truthful payments in each mechanism. We note that all the implementations can be further optimized, and the choice of programming language can influence the running time as well. The results here are from mechanisms implemented in Python. We see that GreedyByBangPerBuck (GreedyByBPB) and GreedyByValue run much faster than VCG. The greedy allocation rules themselves are much faster than brute-force OPT, but the truthful payment rule computation for the greedy algorithms requires more recursive calls to the greedy allocation rule than that for VCG; this can be further optimized if required. \begin{figure} \caption{Histogram of the ratio of running time in milliseconds for the GreedyByValue-Myerson, GreedyByBPB-Myerson and VCG mechanisms. We clip running times larger than 50 in the last bin.} \label{fig:running_time_mech} \end{figure} \paragraph{Empirical results with cardinality constraint.} We also implemented our algorithm with the cardinality constraint. Suppose there is a limit of $k=4$ distinct advertisers to be shown.
This changes the optimization problem, and the greedy-incremental-bang-per-buck algorithm no longer produces the optimal allocation. We compare the performance of the simple greedy algorithms with the Myerson payment rule against that of the VCG mechanism that computes the optimal allocation. The algorithm for computing the optimal allocation recurses on subsets of ads and can be easily extended to track the cardinality constraint. The algorithm that allocates greedily by value of the rich ad will allocate the highest-value ad that fits within the available space, and it can be stopped as soon as $k$ distinct advertisers have been selected. To obtain the best social welfare using the greedy by bang-per-buck heuristic, more care is required. We cannot stop as soon as $k$ distinct advertisers are selected; instead, we can improve social welfare further by replacing previously selected ads. Thus we extend our GreedyByBangPerBuck algorithm such that if the cardinality constraint is reached, it replaces the existing ad of the same advertiser if present (in this case the cardinality is unaffected) or replaces the allocated ad of the advertiser that has the lowest value among all allocated advertisers. \begin{figure} \caption{Approximation factor of algorithms relative to IntOPT: histogram of the approximation factor for GreedyByValue, GreedyByBangPerBuck and RandomizedGreedy compared to IntOPT with a cardinality constraint of 4 ads} \label{fig:vcg-approx-cardinality} \end{figure} In Figure~\ref{fig:vcg-approx-cardinality}, we compare the approximation factor of our greedy algorithms with the cardinality constraint relative to the optimal integer allocation. We find that the worst-case approximation factors are still $0.6$ and $0.4$ for the GreedyByBangPerBuck and GreedyByValue algorithms, but $60\%$ of the queries have an approximation of $1.0$. \begin{figure} \caption{CDF of running time in milliseconds for the GreedyByValue, GreedyByBangPerBuck and IntOPT algorithms.
} \label{fig:running_time_alg} \end{figure} \end{document}
\begin{document} \title{On regular but not completely regular spaces} \subjclass[2000]{Primary: 54D10; Secondary: 54G20} \keywords{Niemytzki plane; Sorgenfrey plane; Lusin gap} \date{} \maketitle \begin{abstract} We present how to obtain non-comparable regular but not completely regular spaces. We analyze a generalization of Mysior's example, extracting its underlying purely set-theoretic framework. This enables us to build simple counterexamples, using the Niemytzki plane, the Sorgenfrey plane or Lusin gaps. \end{abstract} \section{Introduction}\label{ss1} Our discussion focuses on the question: \textit{How can a completely regular space be extended by a point to a space that is only regular?} Before A. Mysior's example, such a construction seemed quite complicated, compare \cite{mys} and \cite{en}. R. Engelking included a description of Mysior's example in the Polish edition of his book \cite[p. 55-56]{eng}. In \cite{ciw}, a modification of Mysior's example is considered which requires no algebraic structure on the space. We present a purely set-theoretic approach which enables us to obtain non-comparable examples; such spaces are $X(\omega, \lambda_1)$ and $X( \lambda_2, \kappa)$, see Section \ref{s1}. This approach is a step towards a procedure to rearrange some completely regular spaces into only regular ones. One can find a somewhat similar idea in \cite{jon}, compare ``the Jones' counterexample machine'' in \cite[p. 317]{ciw}. The starting point of our discussion is the case of completely regular spaces which are not normal. For example, subspaces of the Niemytzki plane are examined in \cite{cho} or \cite{ss}, some $\Psi$-spaces are studied in \cite{hh}, and the Sorgenfrey plane is discussed in \cite{ss}. The key idea of our construction of counterexamples looks roughly as follows. Start from a completely regular space $X$, which is not normal.
In fact, we need $X$ to contain countably many pairwise disjoint closed subsets which, even after removing a small subset from each of them, cannot be separated by open sets. By numbering these closed sets as $\Delta_X(k)$ and assuming that the collections of small sets form proper ideals $I_X(k)$, we should check that the property $(*)$ is fulfilled. Copies of $X$ are numbered by integers, and then the $k$-th copy is glued along the set $\Delta_X(k)$ to the $(k-1)$-th copy; moreover, copies of the sets $\Delta_X(m)$, for $m\not=k$ and $m\not=k-1$, are removed from the $k$-th copy. As a result we get the completely regular space $ \textbf{Y}_X $, which has a one-point extension to a regular space which is not completely regular. In fact, given a completely regular space $X$ for which we do not know whether it has a one-point extension to a space which is only regular, we can build a space $\textbf{Y}_X$ which has such an extension. A somewhat similar method was presented in \cite{jon}. For this reason, we look for ways of comparing such spaces. Following the concept of topological ranks, compare \cite[p. 112]{kur} or \cite[p. 24]{sie}, which was developed in the Polish School of Mathematics, we say that spaces $X$ and $Y$ have non-comparable \textit{regularity ranks}, whenever $X$ and $Y$ are regular but not completely regular and there does not exist a regular but not completely regular space $Z$ such that $Z$ is homeomorphic to a subspace of $X$ and $Z$ is homeomorphic to a subspace of $Y$. \section{On Mysior's example}\label{s1} We modify the approach carried out in \cite{ciw}, which consists in a generalization of Mysior's example, compare \cite{mys}. Despite the fact that our arguments resemble those used in \cite{ciw}, we believe that this presentation is a bit simpler and enables us to construct some non-homeomorphic examples, such as the spaces $X(\lambda, \kappa)$.
Let $\kappa$ be an uncountable cardinal and $\{A(k): k \in \mathbb Z \}$ be a countably infinite partition of $\kappa$ into pairwise disjoint subsets of cardinality $\kappa$, where $\mathbb Z$ stands for the integers. Denote the diagonal of the Cartesian product $\kappa^2$ by $\Delta = \{(x,x): x \in \kappa \}$ and put $\Delta(k)= \Delta \cap A(k)^2$. Fix an infinite cardinal number $\lambda < \kappa$ and proper $\lambda^+$-complete ideals $I(k,\lambda)$ on the sets $A(k)$. In particular, we assume that singletons are in $ I(k,\lambda )$, hence $ H \in I(k,\lambda) $ for any $H \subseteq A(k)$ such that $|H| \leqslant \lambda$. Consider the topology $\mathcal T$ on $X=\kappa^2$ generated by the basis consisting of all singletons $\{ a \}$, whenever $a \in \kappa^2 \setminus \Delta $, and all the sets $$ \{(x,x)\}\cup (\{x\}\times (A(k-1)\setminus G)) \cup ((A(k+1) \setminus F)\times \{x\}) ,$$ where $x\in A(k)$ and $| G| < \lambda $ and $F\in I(k+1, \lambda)$. We denote the non-singleton basic sets by $\Gamma(x, G,F)$. \begin{lem}\label{1} Assume that $H \subseteq \Delta (k) \cap V,$ where $V$ is an open set in $X$. If the set $ \{x\in A(k): (x,x) \in H\}$ does not belong to the ideal $ I(k, \lambda)$, then the difference $\Delta(k-1)\setminus cl_X (V)$ has cardinality less than $\lambda$. \end{lem} \begin{proof} Suppose that a set $\{(b_\alpha,b_\alpha): \alpha < \lambda \} \subseteq \Delta(k-1)$ of cardinality $\lambda$ is disjoint from $ cl_X(V)$. For each $\alpha < \lambda$, fix a basic set $\Gamma(b_\alpha, G_\alpha, F_\alpha)$ disjoint from $V$, where $F_\alpha \in I(k, \lambda)$. The ideal $I(k, \lambda)$ is $\lambda^+$-complete and the set $\{x\in A(k): (x,x) \in H\} $ does not belong to this ideal.
So, there exists a point $(x,x) \in H$ such that $$x\in A(k)\setminus \bigcup \{ F_\alpha : \alpha < \lambda \}.$$ Therefore $$(x, b_\alpha) \in (A(k)\setminus F_\alpha) \times \{b_\alpha\} \subseteq \Gamma(b_\alpha, G_\alpha, F_\alpha)$$ for every $\alpha < \lambda$. Fix a basic set $\Gamma(x, G_x, F_x) \subseteq V$ and $\alpha < \lambda$ such that $b_\alpha \in A(k-1)\setminus G_x$. We get $(x, b_\alpha) \in \{x\}\times (A(k-1)\setminus G_x) \subseteq V $, a contradiction. \end{proof} \begin{cor} The space $X$ is completely regular, but not normal. \end{cor} \begin{proof} The base consists of closed-open sets, and one-point subsets of $X$ are closed. So $X$, being a zero-dimensional T$_1$ space, is completely regular. The subsets $\Delta(k+1)$ and $\Delta (k)$ are closed and disjoint. By Lemma \ref{1}, if a set $V\subseteq X$ is open and $ \Delta(k +1) \subseteq V$, then $cl_X(V) \cap \Delta(k)\not=\emptyset$, which implies that $X$ is not a normal space. \end{proof} \begin{pro}\label{3} Assume that the cardinal $\lambda$ has uncountable cofinality. If $f:X\to \mathbb R$ is a continuous real valued function, then for any point $x \in \kappa$ there exists a basic set $\Gamma(x, G_x, F_x)$ such that the function $f$ is constant on it. \end{pro} \begin{proof} Without loss of generality, we can assume that $f(x,x)=0$. For each $n>0$, fix a basic set $\Gamma(x, G_n,F_n) \subseteq f^{-1}((\frac{-1}{n},\frac1n))$. Then put $G_x= \cup \{G_n: n>0\}$ and $F_x= \cup \{F_n: n>0\}$. Since $\lambda$ has uncountable cofinality, we get that the set $ \Gamma(x, G_x,F_x)$ belongs to the base. Obviously, if $$\textstyle (a,b) \in \cap \{\Gamma(x, G_n, F_n): n>0\}= \Gamma(x, G_x,F_x) ,$$ then $f(a,b)=0$.
\end{proof} When $\lambda$ has countable cofinality, the above proof almost works, but the set $G_x = \cup \{G_n: n>0\} $ may have cardinality $\lambda$, and therefore $\Gamma(x, G_x,F_x)$ does not necessarily belong to the base; moreover, it may not even be open. Furthermore, any continuous real valued function must also be constant on other large subsets of $X$. \begin{lem}\label{l2} Let $k \in \mathbb Z$. If $f:X\to \mathbb R$ is a continuous real valued function, then for any $\varepsilon >0$ there exists a real number $a$ such that $f(x,x) \in[a,a+3\cdot \varepsilon]$ for all but less than $\lambda$ many points $(x,x) \in \Delta(k-1)$. \end{lem} \begin{proof} Fix a real number $b$ and $\varepsilon >0$. The ideal $I(k, \lambda)$ is $\lambda^+$-complete, so we can choose an integer $q \in \mathbb Z$ such that the subset $$ \{ x\in A(k): f(x,x) \in[b+q \cdot \varepsilon, b+(q+1) \cdot \varepsilon ]\}$$ does not belong to $I(k, \lambda)$. Use Lemma \ref{1}, putting $a= b+(q-1) \cdot \varepsilon$ and $H= f^{-1}([a+\varepsilon, a+2\cdot \varepsilon ]) \cap \Delta (k)$ and $V=f^{-1}((a, a+3\cdot \varepsilon ))$. Since $cl_X (V) \subseteq f^{-1}([a, a+3\cdot \varepsilon ])$, the proof is completed. \end{proof} \begin{cor}\label{c3} If $f:X\to \mathbb R$ is a continuous real valued function, then for any $k \in \mathbb Z$ there exists a real number $a_k$ such that $f(x,x) = a_k$ for all but $\lambda$ many points $(x,x) \in \Delta(k)$. Moreover, if $\lambda$ has uncountable cofinality, then $f(x,x) = a_k$ for all but less than $\lambda$ many $x \in A(k)$. \end{cor} \begin{proof} Apply Lemma \ref{l2}, substituting consecutively $ \frac1n$ for $\varepsilon $, for $n>0$, and $k+1$ for $k$. \end{proof} \begin{theorem} \label{6} If $f:X\to \mathbb R$ is a continuous real valued function, then there exists $a\in \mathbb R$ such that $f(x,x)=a$ for all but $\lambda$ many $x\in \kappa$.
Moreover, when $\lambda$ has uncountable cofinality, then $f(x,x)=a$ for all but less than $\lambda$ many $x\in \kappa$. \end{theorem} \begin{proof} We shall prove that the numbers $a_k$ which appear in Corollary \ref{c3} are all equal. To do this, suppose that $a_k \not= a_{k-1}$ for some $k\in \mathbb Z$. Choose disjoint open intervals $\mathbb J$ and $\mathbb I$ such that $ a_k \in \mathbb J$ and $a_{k-1}\in \mathbb I$. Apply Lemma \ref{1}, taking $ H=\{(x,x) \in \Delta(k): f(x,x) = a_k \}$ and $V= f^{-1} (\mathbb J)$. Since $cl_X(V) \cap f^{-1} (\mathbb I) =\emptyset$, we get $f(x,x) \not= a_{k-1} $ for all but less than $\lambda$ many $x\in A(k-1)$, a contradiction. \end{proof} Given infinite cardinal numbers $\lambda < \kappa$ and proper $\lambda^+$-complete ideals $I(k,\lambda)$ on the sets $A(k)$, one can extend the space $X$ by one or two points so as to get a regular space which is not completely regular. This is a standard construction, compare \cite{jon}, \cite{mys} and \cite{ciw} or \cite[Example 1.5.9]{eng}, so we will describe it briefly. Fix points $+\infty$ and $-\infty$ that do not belong to $X$. On the set $X^*= X \cup\{-\infty, +\infty\}$ we introduce the following topology. Let open sets in $X$ be open in $X^*$, too. Moreover, the sets $$\mathcal V^+_m=\{+\infty\} \cup \bigcup\{A(n) \times \kappa: n>m\}$$ form a base at the point $ +\infty $ and the sets $$\mathcal V^-_m=\{-\infty\} \cup \bigcup\{A(n) \times \kappa: n\leqslant m\}\setminus \Delta (m)$$ form a base at the point $ -\infty .$ Thus we have $$\Delta (m) = cl_{X^*}(\mathcal V^+_m) \cap cl_{X^*}(\mathcal V^-_m) = cl_{X}(\mathcal V^+_m \cap X) \cap cl_{X}(\mathcal V^-_m \cap X),$$ which gives that the space $X^*$ is regular and not completely regular. Indeed, consider a closed subset $D \subseteq X^*$ and a point $p\in X^* \setminus D$. When $p\in X$, then $p$ has a closed-open neighborhood in $X^*$ which is disjoint from $D$.
When $p=+\infty$, then consider a basic set $\mathcal V^+_m$ which is disjoint from $D$ and check that $cl_{X^*}(\mathcal V^+_{m+1}) \subseteq \mathcal V^+_m. $ Analogously, when $p=-\infty$, then consider a basic set $\mathcal V^-_m$ which is disjoint from $D$ and check that $cl_{X^*}(\mathcal V^-_{m-1}) \subseteq \mathcal V^-_m. $ By Theorem \ref{6}, no continuous real valued function separates a closed set $\Delta(k)$ from a point $p\in \{+\infty, -\infty\}$. Hence the space $X^*$ is not completely regular. The same holds for the subspaces $X^*\setminus \{+\infty\}$ and $X^*\setminus \{-\infty\}$. Moreover, if $f: X^* \to \mathbb R$ is a continuous function, then $f(+\infty) = f(-\infty).$ For convenience, the space $X$ defined above is denoted $ X(\lambda, \kappa)$ whenever the ideals $ I(\lambda, \kappa)$ consist of sets of cardinality less than $\lambda$. Assuming $\omega < \lambda_1 < \lambda_2 < \kappa$, we get two (non-comparable) non-homeomorphic spaces $X(\omega, \lambda_1)$ and $X( \lambda_2, \kappa)$, since the first one has cardinality $\lambda_1.$ But a subspace of $X( \lambda_2, \kappa)$ of cardinality $\lambda_1$ is discrete, and its closure in $(X(\omega, \lambda_2))^*$, being zero-dimensional, is completely regular. In other words, the spaces $(X(\omega, \lambda_1))^*$ and $(X( \lambda_2, \kappa))^*$ have non-comparable regularity ranks. \section{General approach} The analysis conducted above can be generalized using some known counterexamples. We apply such a generalization to the Niemytzki plane, cf. \cite[p. 34]{eng} or \cite[pp. 100--102]{ss}, the Sorgenfrey half-open square topology, cf. \cite[pp. 103--105]{ss}, and special Isbell-Mrówka spaces (which are also known as $\Psi$-spaces). Given a space $X$ and a closed and discrete subset $\Delta_X\subseteq X$, assume that $\Delta_X$ can be partitioned into pairwise disjoint subsets $\Delta_X(k)$.
For each $k\in \mathbb Z$, let $I_X(k)$ be a proper ideal on $\Delta_X(k)$. Suppose that the following property is fulfilled: ($*$). \textit{If a set $V \subseteq X$ is open and the set $\Delta_X(k) \setminus V$ belongs to $ I_X(k)$, then the set $\Delta_X(k-1) \setminus cl_X(V)$ belongs to $I_X(k-1)$.} \noindent Then it is possible to give a general scheme of a construction of a completely regular space $\textbf{Y}=\textbf{Y}_X$ which has a one-point extension to a regular space that is not completely regular, and a two-point extension to a regular space in which no continuous real valued function separates the two extra points, whereas removing a single point yields a regular space which is not completely regular. To get this we put $$\textbf{x}_k = \begin{cases} (k,x), \text{ when } x\in X\setminus \Delta_X;\\ \{(k,x), (k+1,x)\}, \text{ when } x\in \Delta_X(k).\\ \end{cases}$$ Then put $\textbf{Y}_X= \{\textbf{x}_k: x\in X \mbox{ and } k \in \mathbb Z \}.$ Endow $\textbf{Y}_X$ with the topology defined as follows. If $k \in \mathbb Z $ and $ V\subseteq X\setminus \Delta_X$ is an open subset of $X$, then the set $\{\textbf{x}_k: x \in V\}$ is open in $\textbf{Y}_X.$ This defines neighborhoods of the points $\textbf{x}_k$ with $x \notin \Delta_X$.
To define neighborhoods of the point $\textbf{x}_k$, where $x \in \Delta_X$, we use the following formula: If $k \in \mathbb Z$ and $V \subseteq X$ is an open subset, then the set $$\{\textbf{x}_k: x \in V\}\cup \{\textbf{x}_{k+1}: x \in V\setminus \Delta_X\}$$ is open in $\textbf{Y}_X.$ To get a version of $(*)$, we put the following: $\Delta_\textbf{Y}(k) = \{\textbf{x}_k: x \in \Delta_X(k)\}$; $\Delta_\textbf{Y}=\bigcup\{\Delta_\textbf{Y}(k): k\in \mathbb Z\}$; let $I_\textbf{Y}(k)$ be the proper ideal which consists of the sets $\{\textbf{x}_k: x \in A\}$ for $A \in I_X(k)$; $\textbf{Y}_k= \{\textbf{y}_k: y \in X\setminus \Delta_X\}.$ So, if $k\in \mathbb Z$, then $$\Delta_\textbf{Y}(k) = cl_\textbf{Y}(\{\textbf{y}_k: y \in X\setminus \Delta_X\})\cap cl_\textbf{Y}(\{\textbf{y}_{k+1}: y \in X\setminus \Delta_X\}).$$ The properties of the space $\textbf{Y}_X$ can be derived automatically from the relevant properties of $ X $, so we leave the details to the reader. \begin{pro}\label{p7} Assume that a space $X$ satisfies $(*)$ and the space $\textbf{Y}$ is as above. If a set $\textbf{V} \subseteq \textbf{Y}$ is open and the set $\Delta_\textbf{Y} (k) \setminus \textbf{V}$ belongs to $ I_\textbf{Y}(k)$, then the set $\Delta_\textbf{Y} (k-1) \setminus cl_\textbf{Y}(\textbf{V})$ belongs to $I_\textbf{Y}(k-1)$. $\Box$ \end{pro} \begin{pro} If a space $X$ is completely regular, then the space $\textbf{Y}$ is completely regular, too. $\Box$ \end{pro} Now, fix points $+\infty$ and $-\infty$ that do not belong to $\textbf{Y}$. On the set $\textbf{Y}^*= \textbf{Y} \cup\{-\infty, +\infty\}$ we introduce the following topology. Let open sets in $\textbf{Y}$ be open in $\textbf{Y}^*$, too.
But the sets $$\mathcal V^+_m=\{+\infty\} \cup \bigcup\{\textbf{Y}_n: n\geqslant m\}\cup \bigcup \{\Delta_\textbf{Y}(n): n>m\}$$ form a base at the point $ +\infty $ and the sets $$\mathcal V^-_m=\{-\infty\} \cup \bigcup\{\textbf{Y}_n: n\leqslant m\}\cup \bigcup \{\Delta_\textbf{Y}(n): n<m\}$$ form a base at the point $ -\infty .$ Thus we have $$\Delta_\textbf{Y} (m) \subseteq cl_{\textbf{Y}^*}(\mathcal V^+_m) \cap cl_{\textbf{Y}^*}(\mathcal V^-_m)= \Delta_\textbf{Y} (m) \cup \textbf{Y}_m,$$ which implies the following. \begin{theorem}\label{9} If $f:\textbf{Y}^*\to \mathbb R$ is a continuous real valued function, then $f(+\infty)=f(-\infty)$. \end{theorem} \begin{proof} Suppose $f: \textbf{Y}^* \to \mathbb R$ is a continuous function such that $f(+\infty) = 1$ and $f(-\infty) = 0.$ Fix a decreasing sequence $\{\epsilon_n\}$ which converges to $\frac12$. Thus $$f^{-1}((\epsilon_n, 1]) \subseteq cl_{\textbf{Y}^*}(f^{-1}((\epsilon_n, 1])) \subseteq f^{-1}([\epsilon_n, 1]) \subseteq f^{-1}((\epsilon_{n+1}, 1]).$$ By Proposition \ref{p7}, if $K_m\in I_\textbf{Y}(m)$ and $\Delta_\textbf{Y}(m) \setminus K_m \subseteq f^{-1}((\epsilon_n, 1]),$ then $$ f^{-1}((\epsilon_{n+1}, 1]) \supseteq \Delta_\textbf{Y}(m-1) \setminus K_{m-1}, $$ for some $K_{m-1}\in I_\textbf{Y}(m-1).$ Since there exists $m \in \mathbb Z$ such that $+\infty \in \mathcal V^+_m \subseteq f^{-1}((\epsilon_0, 1]) $, inductively, we get $$\Delta_{\textbf{Y}}\setminus \bigcup\{K_n: n\in \mathbb Z \} \subseteq f^{-1}([\frac12, 1]), $$ which implies that each $ \mathcal V^-_n$ contains a point $\textbf{y}\in \textbf{Y} $ such that $f(\textbf{y})\geqslant \frac12.$ Hence $f(-\infty) \geqslant \frac12$, a contradiction.
\end{proof} \subsection{Application of the Niemytzki plane} Recall that the Niemytzki plane $\mathbb P = \{(a,b) \in \mathbb R \times \mathbb R: 0\leqslant b \}$ is the closed half-plane endowed with the topology generated by open discs disjoint from the real axis $\Delta_{\mathbb P}=\{(x,0): x \in \mathbb R\}$ and all sets of the form $\{a\} \cup D$, where $D\subseteq \mathbb P$ is an open disc which is tangent to $\Delta_{\mathbb P}$ at the point $a \in \Delta_{\mathbb P}$. Choose pairwise disjoint subsets $\Delta_{\mathbb P} (k)\subseteq \Delta_{\mathbb P}$, where $k\in \mathbb Z$, such that each set $\Delta_{\mathbb P} (k)$ meets every dense $G_\delta$ subset of the real axis. To do that, it is enough to slightly modify the classic construction of a Bernstein set. Namely, fix an enumeration $\{A_\alpha: \alpha < \frak{c}\}$ of all dense $G_\delta$ subsets of the real axis. Proceeding inductively, at step $\alpha$ choose a (1-1)-enumerated subset $\{p_k^\alpha : k \in \mathbb Z\}\subseteq A_\alpha \setminus \{p_k^\beta : k \in \mathbb Z \mbox{ and } \beta < \alpha \}$. Then, for each $k \in \mathbb Z, $ put $\Delta_{\mathbb P} (k) = \{p^\alpha_k: \alpha < \frak{c} \}.$ If $F \subseteq \mathbb R \times \mathbb R$, then the topology on $F$ induced from the Euclidean topology will be called the \textit{natural topology} on $F$. A set which is a countable union of nowhere dense subsets in the natural topology on $F $ will be called \textit{a set of first category} in $F$. Our proof of the following lemma is a modification of the known reasoning justifying that $\mathbb P$ is not a normal space, compare \cite[pp. 101--102]{ss}. \begin{lem}\label{7} Let a set $F \subseteq \Delta_{\mathbb P}$ be a dense subset in the natural topology on the real axis $\Delta_{\mathbb P}$.
If a set $V$ is open in $\mathbb P$ and $F \subseteq V$, then the set $\Delta_{\mathbb P} \setminus cl_{\mathbb P}(V)$ is of first category in $\Delta_{\mathbb P}$. \end{lem} \begin{proof} To each point $a\in \Delta_{\mathbb P}\setminus cl_{\mathbb P}(V)$ there corresponds a disc $D_a \subseteq \mathbb P \setminus cl_{\mathbb P}(V)$ of radius $r_a$ tangent to $\Delta_{\mathbb P}$ at the point $a$. Put $$S_n=\{ a\in \Delta_{\mathbb P} \setminus cl_{\mathbb P}(V): r_a \geqslant \frac1n \} $$ and use the density of $F$ to check that each $S_n$ is nowhere dense in the natural topology on $\Delta_{\mathbb P}$. Since $\bigcup \{S_n: n>0\}=\Delta_{\mathbb P} \setminus cl_{\mathbb P}(V),$ the proof is complete. \end{proof} The space $\textbf{Y}_{\mathbb P}$ is completely regular. The subspaces $\textbf{Y}_{\mathbb P}\cup \{-\infty\}$, $\textbf{Y}_{\mathbb P}\cup \{+\infty\}$ and the space $\textbf{Y}_{\mathbb P}^*$ are regular. Moreover, if $f:\textbf{Y}^*_{ \mathbb P}\to \mathbb R$ is a continuous real valued function, then $f(+\infty)=f(-\infty)$. \subsection{Application of the Sorgenfrey plane,} i.e. application of the Sorgenfrey half-open square topology. Recall that the Sorgenfrey plane $\mathbb S = \{(a,b): a\in \mathbb R \mbox{ and } b \in \mathbb R \}$ is the plane endowed with the topology generated by rectangles of the form $[a,b) \times [c,d)$. Let $\Delta_{\mathbb S}=\{(x,-x): x \in \mathbb R\}$. Since $\Delta_{\mathbb S}$ with the topology induced from the Euclidean topology is homeomorphic to the real line, we can choose pairwise disjoint subsets $\Delta_{\mathbb S} (k)\subseteq \Delta_{\mathbb S}$ such that each set $\Delta_{\mathbb S} (k)$ meets every dense $G_\delta$ subset of $\Delta_{\mathbb S}$. The following lemma can be proved by the category argument used previously in the proof of Lemma \ref{7}, so we omit it; compare also \cite[pp. 103--104]{ss}.
\begin{lem}\label{8} Let a set $F \subseteq \Delta_{\mathbb S}$ be a dense subset in the topology on $\Delta_{\mathbb S}$ inherited from the Euclidean topology. If a set $V$ is open in $\mathbb S$ and $F \subseteq V$, then the set $\Delta_{\mathbb S} \setminus cl_{\mathbb S}(V)$ is of first category in $\Delta_{\mathbb S}$. $\Box$ \end{lem} Again, the space $\textbf{Y}_{\mathbb S}$ is completely regular. The subspaces $\textbf{Y}_{\mathbb S}\cup \{-\infty\}$, $\textbf{Y}_{\mathbb S}\cup \{+\infty\}$ and the space $\textbf{Y}_{\mathbb S}^*$ are regular. Moreover, if $f:\textbf{Y}^*_{ \mathbb S}\to \mathbb R$ is a continuous real valued function, then $f(+\infty)=f(-\infty)$. \subsection{Applications of some $\Psi$-spaces} Let us recall some notions needed to define a Lusin gap, compare \cite{luz}. A family of sets is called \textit{almost disjoint} whenever any two members of it have finite intersection. A set $C$ \textit{separates} two families whenever each member of the first family is almost contained in $C$, i.e. $B \setminus C$ is finite for each member $B$ of the first family, and each member of the other family is almost disjoint from $C$. An uncountable family $\mathcal L$, which consists of almost disjoint and infinite subsets of $\omega$, is called a \textit{Lusin gap} whenever no two of its uncountable and disjoint subfamilies can be separated by a subset of $\omega$. Adapting concepts discussed in \cite{mro} or \cite{hh} to a Lusin gap $\mathcal L$, let $ \Psi(\mathcal L) = \mathcal L \cup \omega$. A topology on $\Psi(\mathcal L)$ is generated as follows. Any subset of $\omega$ is open; also, for each point $A\in \mathcal L$ the sets $\{ A\} \cup (A \setminus F)$, where $F$ is finite, are open.
\begin{pro} If $\mathcal L$ is a Lusin gap and $\bigcup \{ \Delta_{\mathcal L}(k): k\in \mathbb Z \} = \mathcal L$, then the space $\Psi(\mathcal L)$ satisfies the property $(*)$, whenever the sets $\Delta_\mathcal L(k) $ are uncountable and pairwise disjoint and each ideal $I_{\mathcal L}(k)$ consists of all countable subsets of $\Delta_{\mathcal L}(k)$. \end{pro} \begin{proof} Consider uncountable and disjoint families $\mathcal{ A, B} \subseteq \mathcal L$. Suppose $\mathcal A \subseteq V$ and $\mathcal B \subseteq W,$ where the open sets $V$ and $ W$ are disjoint. Let $$ C= \bigcup \{A \subseteq \omega: \{A\} \cup A \mbox{ is almost contained in } V \}. $$ The set $C$ separates the families $\mathcal{ A}$ and $\mathcal B$, which contradicts the fact that $\mathcal L$ is a Lusin gap. Setting $\mathcal{ A} = \Delta_{ \mathcal L} (k)$ and $\mathcal{ B} = \Delta_{ \mathcal L}(k-1)$, we are done. \end{proof} The space $\textbf{Y}_{\mathcal L}$ is completely regular. Again by Theorem \ref{9}, we get the following. The subspaces $\textbf{Y}_{\mathcal L}\cup \{-\infty\}$, $\textbf{Y}_{\mathcal L}\cup \{+\infty\}$ and the space $\textbf{Y}_{\mathcal L}^*$ are regular. Moreover, if $f:\textbf{Y}^*_{ \mathcal L}\to \mathbb R$ is a continuous real valued function, then $f(+\infty)=f(-\infty)$. \section{Comment} In \cite{jon}, F. B. Jones formulated the following problem: \textit{Does a non-completely regular space always contain a substructure similar to that possessed by $Y$}? Jones' space $Y$ is constructed by gluing (sewing) countably many disjoint copies of a suitable space $X$. This method fixes two subsets of $X$ and consists in sewing alternately copies of either of them. On the other hand, our method consists in gluing different sets at each step. The problem of Jones may be understood as an incentive to study the structural diversity of regular spaces which are not completely regular.
Even though the meaning of ``a substructure similar to that possessed by $Y$'' seems vague, we think that an appropriate criterion for the aforementioned diversity is a slightly modified concept of a topological rank, compare \cite{kur} or \cite{sie}. We have introduced regularity ranks, but our counterexamples are only a preliminary step towards the study of the diversity of regular spaces. \end{document}
\begin{document} \title{On the bounded index property\\ for products of aspherical polyhedra} \author{Qiang ZHANG and Shengkui YE} \address{School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China} \email{[email protected]} \address{Department of Mathematical Sciences, Xi'an Jiaotong-Liverpool University, Jiangsu, China} \email{[email protected]} \thanks{The authors are partially supported by NSFC Grants \#11771345, \#11961131004 and \#11971389.} \subjclass[2010]{55M20, 55N10, 32Q45} \keywords{Bounded index property, fixed point, aspherical polyhedron, negatively curved manifold, product} \begin{abstract} A compact polyhedron $X$ is said to have the Bounded Index Property for Homotopy Equivalences (BIPHE) if there is a finite bound $\mathcal{B}$ such that for any homotopy equivalence $f:X\rightarrow X$ and any fixed point class $\mathbf{F}$ of $f$, the index $|\mathrm{ind}(f,\mathbf{F})|\leq \mathcal{B}$. In this note, we consider products of compact polyhedra and give some sufficient conditions for them to have BIPHE. Moreover, we show that products of closed Riemannian manifolds with negative sectional curvature, in particular hyperbolic manifolds, have BIPHE, which gives an affirmative answer to a special case of a question asked by Boju Jiang. \end{abstract} \maketitle \section{Introduction} Fixed point theory studies fixed points of a self-map $f$ of a space $X$. Nielsen fixed point theory, in particular, is concerned with the properties of the fixed point set \begin{equation*} \mathrm{Fix} f:=\{x\in X|f(x)=x\} \end{equation*} that are invariant under homotopy of the map $f$ (see \cite{fp1} for an introduction).
The fixed point set $\mathrm{Fix} f$ splits into a disjoint union of \emph{fixed point classes}: two fixed points $a$ and $a^{\prime }$ are in the same class if and only if there is a lifting $\tilde f: \widetilde X\to \widetilde X$ of $f$ such that $a, a^{\prime }\in p(\mathrm{Fix} \tilde f)$, where $p:\widetilde X\to X$ is the universal cover. Let $\mathrm{Fpc}(f)$ denote the set of all the fixed point classes of $f$. For each fixed point class $\mathbf{F}\in \mathrm{Fpc}(f)$, a homotopy invariant \emph{index} $ \mathrm{ind}(f,\mathbf{F})\in \mathbb{Z}$ is well-defined. A fixed point class is \emph{essential} if its index is non-zero. The number of essential fixed point classes of $f$ is called the \emph{Nielsen number} of $f$, denoted by $N(f)$. The famous Lefschetz-Hopf theorem says that the sum of the indices of the fixed points of $f$ is equal to the \emph{Lefschetz number} $ L(f) $, which is defined as \begin{equation*} L(f):=\sum_q(-1)^q\mathrm{Trace} (f_*: H_q(X;\mathbb{Q})\to H_q(X;\mathbb{Q} )). \end{equation*} In this note, all maps considered are continuous, and all spaces are triangulable, namely, they are homeomorphic to polyhedra. A compact polyhedron $X$ is said to have the \emph{Bounded Index Property (BIP)} (resp. \emph{Bounded Index Property for Homeomorphisms (BIPH)}, \emph{Bounded Index Property for Homotopy Equivalences (BIPHE)}) if there is an integer $\mathcal{B}>0$ such that for any map (resp. homeomorphism, homotopy equivalence) $f:X\rightarrow X$ and any fixed point class $\mathbf{F}$ of $f$, the index $|\mathrm{ind}(f, \mathbf{F})|\leq \mathcal{B}$. Clearly, if $X$ has BIP, then $X$ has BIPHE and hence has BIPH. For an aspherical closed manifold $M$, if the well-known Borel conjecture (any homotopy equivalence $f:M\rightarrow M$ is homotopic to a homeomorphism $g:M\rightarrow M$) is true, then $M$ has BIPHE if and only if it has BIPH. In \cite{JG}, Jiang and Guo proved that compact surfaces with negative Euler characteristics have BIPH.
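As a concrete illustration of the Lefschetz number defined above (a sketch of ours, not part of the paper's argument): for a self-map $f$ of the $2$-torus whose action on $H_1$ is given by an integer matrix $A$, the induced maps on $H_0$ and $H_2$ are $1$ and $\det A$, so $L(f)=1-\mathrm{Trace}(A)+\det A$. The function name below is hypothetical.

```python
# Sketch: Lefschetz number of a self-map of the 2-torus T^2.
# L(f) = sum_q (-1)^q Trace(f_* : H_q -> H_q), where f_* acts as
# 1 on H_0, as the 2x2 integer matrix A on H_1, and as det(A) on H_2.
def lefschetz_torus(A):
    (a, b), (c, d) = A
    trace, det = a + d, a * d - b * c
    return 1 - trace + det

# A = [[2, 1], [1, 1]] has trace 3 and determinant 1: L = 1 - 3 + 1 = -1.
print(lefschetz_torus([[2, 1], [1, 1]]))  # -1
# The identity map gives L = 1 - 2 + 1 = 0, the Euler characteristic of T^2.
print(lefschetz_torus([[1, 0], [0, 1]]))  # 0
```

By the Lefschetz-Hopf theorem, the first map must therefore have a fixed point, since $L(f)\neq 0$.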
Later, Jiang \cite{fp2} showed that graphs and surfaces with negative Euler characteristics not only have BIPH but also have BIP (see \cite{K1}, \cite{K2} and \cite{JWZ} for some parallel results). Moreover, Jiang asked the following question: \begin{ques} (\cite[Question 3]{fp2}) Does every compact aspherical polyhedron $X$ (i.e. $ \pi_i(X)=0$ for all $i>1$) have BIP or BIPH? \end{ques} In \cite{Mc}, McCord showed that infrasolvmanifolds (manifolds which admit a finite cover by a compact solvmanifold) have BIP. In \cite{JW}, Jiang and Wang showed that geometric 3-manifolds have BIPH for orientation-preserving self-homeomorphisms: the index of each essential fixed point class is $\pm 1$. In \cite{Z}, the first author showed that orientable compact Seifert 3-manifolds with hyperbolic orbifolds have BIPH, and later in \cite{Z2, Z3}, he showed that compact hyperbolic $n$-manifolds (not necessarily orientable) also have BIPH. Recently, in \cite{ZZ}, Zhang and Zhao showed that products of hyperbolic surfaces have BIPH. Note that in \cite[Section 6]{fp2}, Jiang gave an example showing that BIPH is not preserved by taking products: the 3-sphere $S^3$ has BIPH while the product $S^3\times S^3$ does not. In this note, we consider products of connected compact polyhedra and give some sufficient conditions for them to have BIPHE (and hence BIPH). The main result of this note is the following: \begin{thm} \label{main thm1} Suppose $X_1,\ldots,X_n$ are connected compact aspherical polyhedra satisfying the following two conditions: $\mathrm{(1)}$ $\pi_1(X_i)\not \cong\pi_1(X_j)$ for $i\neq j$, and all of them are centerless and indecomposable; $\mathrm{(2)}$ all of $X_1,\ldots,X_n$ have BIPHE. \noindent Then the product $X_1\times\cdots\times X_n$ also has BIPHE (and hence BIPH).
\end{thm} Moreover, we show that products of closed Riemannian manifolds with negative sectional curvature have BIPHE: \begin{thm} \label{main thm2} Let $M=M_1\times\cdots\times M_n$ be a product of finitely many connected closed Riemannian manifolds, each with negative sectional curvature everywhere but not necessarily of the same dimension (in particular, hyperbolic manifolds). Then $M$ has BIPHE. \end{thm} Recall that a closed $2$-dimensional Riemannian manifold $M$ with negative sectional curvature is a closed hyperbolic surface, and hyperbolic surfaces have BIP. As a corollary of Theorem \ref{main thm2}, we have \begin{cor} A closed Riemannian manifold with negative sectional curvature everywhere has BIPHE. \end{cor} To prove the above theorems, we first study automorphisms of products of groups in Section \ref{sect 2}, and give some facts about the bounded index property of fixed points in Section \ref{sect 3}. Then in Section \ref{sect 4}, we generalize the results on alternating homeomorphisms (see \cite[Section 3]{ZZ}) to cyclic homeomorphisms of products of surfaces. Finally, in Section \ref{sect 5}, we show that every homotopy equivalence of products of aspherical manifolds can be homotoped to two nice forms, and taking advantage of that, we finish the proofs. \section{Automorphisms of products of groups}\label{sect 2} In this section, we give some facts about automorphisms of direct products of finitely many groups. \begin{defn} A group $G$ is called \emph{unfactorizable} if whenever $G=HK$ for subgroups $H,K$ satisfying $hk=kh$ for any $h\in H,k\in K$, we have either $ H=1$ or $K=1.$ If $G=H\times K$ for some groups $H,K$ implies that either $H$ or $K$ is trivial, we say $G$ is \emph{indecomposable}. \end{defn} \begin{lem} Let $G$ be an unfactorizable group and $G=\Pi_{i=1}^n A_{i}$ a direct product.
Then $A_{i}=G $ for some $i$, and $A_{j}=1$ for all $j\neq i.$ \end{lem} \begin{proof} Suppose that there are at least two non-trivial components $A_{1},A_{2}.$ Then $G=A_{1}A_{2}\times \Pi _{i\neq 1,2}A_{i}.$ Hence $\Pi _{i\neq 1,2}A_{i}=1,$ since $G$ is unfactorizable. Therefore, $G=A_{1}A_{2}$ with non-trivial commuting subgroups $A_1,A_2$, a contradiction. \end{proof} \begin{lem} A group $G$ is unfactorizable if and only if $G$ is centerless and indecomposable. \end{lem} \begin{proof} Suppose that $G$ is unfactorizable. Since $G=GC$ for the center subgroup $C,$ we see that $C=1.$ An unfactorizable group is obviously indecomposable. Conversely, assume that $G$ is centerless and indecomposable. If $G=KH$ for commuting $K,H$, then any element $g\in K\cap H$ is central and thus trivial. This implies that $G=K\times H.$ Therefore, either $K$ or $H$ is trivial. \end{proof} For a direct product $G=G_1\times \cdots \times G_n$, we collect together the coordinates corresponding to isomorphic $G_i$'s and present $G$ in the form $G=G_1^{\,n_1}\times \cdots \times G_m^{\,n_m}$, where $n_i\geq 1$ and $ G_i\ncong G_j$ for $1\leq i\neq j\leq m$. Given automorphisms $\phi_i\in \mathrm{Aut}(G_i)$ for $i=1,\ldots ,n$, let $\prod_{i=1}^n \phi_i =\phi_1\times \cdots \times \phi_n \colon G\to G$, $(g_1,\ldots ,g_n)\mapsto (\phi_1(g_1), \ldots ,\phi_n(g_n))$ be the product of the $\phi_i$'s. We have the following analogue of \cite[Proposition 4.4]{ZVW}. \begin{prop} \label{aut of prouct group} Let each group $G_i, i=1,\ldots,m $, be unfactorizable, and $G=G_1^{\,n_1}\times \cdots \times G_m^{\,n_m}$ a direct product, where $m\geq 1$, $n_i\geq 1$, and $G_i\ncong G_j$ for $i\neq j$.
Then for every $\phi \in \mathrm{Aut}(G)$, there exist automorphisms $ \phi_{i,j}\in \mathrm{Aut}(G_i)$ and permutations $\sigma_i \in S_{n_i}$, such that \begin{equation*} \phi =\sigma_1 \circ\cdots \circ \sigma_m \circ (\prod_{i=1}^m \prod_{j=1}^{n_i} \phi_{i,j}) = \prod_{i=1}^m (\sigma_i \circ \prod_{j=1}^{n_i} \phi_{i,j}). \end{equation*} \end{prop} \begin{proof} For brevity, we first assume that $G=G_1\times G_2$ (here $G_1$ may be isomorphic to $G_2$), and let $\phi: G_1\times G_2\to G_1\times G_2$ be an automorphism. Let \begin{equation*} A_1=\{a_{1}|\phi(g_{1},1)=(a_{1},b_{1}),g_1\in G_1\},~~B_1=\{b_{1}|\phi(g_{1},1)=(a_{1},b_{1}),g_1\in G_1\}, \end{equation*} and \begin{equation*} A_2=\{a_{2}|\phi(1,g_{2} )=(a_{2},b_{2}),g_2\in G_2\},~~B_2=\{b_{2}|\phi(1,g_{2} )=(a_{2},b_{2}),g_2\in G_2\}. \end{equation*} Then $G_1=A_1A_2$ with $a_{1}a_{2}=a_{2}a_{1}$ for every $a_i\in A_i$; and $ G_2=B_1B_2$ with $b_{1}b_{2}=b_{2}b_{1}$ for every $b_i\in B_i$. Since $G_{1}, G_2$ are unfactorizable, we see that either $A_1$ or $A_2$ is trivial, and either $B_1$ or $B_2$ is trivial. If $A_1=G_1,A_2=1$, then $B_1=1, B_2=G_2$, and we have \begin{equation*} \phi=\phi_1\times \phi_2: G_1\times G_2\to G_1\times G_2,\quad (g_1,g_2)\mapsto (\phi_1(g_1),\phi_2(g_2))=(a_1,b_2), \end{equation*} where $\phi_i\in \mathrm{Aut}(G_i)$. If $A_1=1, A_2=G_1$, then $B_1=G_2, B_2=1$; hence $G_1\cong G_2$, and \begin{equation*} \phi=\sigma\circ (\phi_1\times \phi_2): G_1\times G_2\to G_1\times G_2,\quad (g_1,g_2)\mapsto (\phi_2(g_2),\phi_1(g_1))=(a_2, b_1) \end{equation*} for $\phi_i\in \mathrm{Aut}(G_i)$ and a permutation $\sigma\in S_2$. Now we have proved that Proposition \ref{aut of prouct group} holds in the case $G=G_1\times G_2$. For the general case, where $G=\prod_i G_i$ has more than 2 factors, the same argument as above shows that Proposition \ref{aut of prouct group} also holds.
\end{proof} Since the inner automorphism group $\mathrm{Inn}(\prod_i G_i)$ of a product is isomorphic to the product $\prod_i \mathrm{Inn}(G_i)$, summarizing the above results, we have proved the following: \begin{thm} \label{aut and out of product groups} Let $G_1,\ldots, G_m $ be finitely many centerless indecomposable groups with $G_i\ncong G_j$ for $i\neq j$. Then the automorphism group of the product $\prod_{i=1}^m G_i^{n_i}$ is \begin{equation*} \mathrm{Aut}(\prod_{i=1}^m G_i^{n_i})\cong \prod_{i=1}^m\Big(\big(\prod_{n_i} \mathrm{Aut}(G_{i})\big)\rtimes S_{n_i}\Big), \end{equation*} and the outer automorphism group is \begin{equation*} \mathrm{Out}(\prod_{i=1}^m G_i^{n_i})\cong \prod_{i=1}^m\Big(\big(\prod_{n_i} \mathrm{Out}(G_{i})\big)\rtimes S_{n_i}\Big), \end{equation*} where the symmetric group $S_{n_i}$ on $n_i\geq 1$ letters acts on $ \prod_{n_i}\mathrm{Aut}(G_{i})$ and $\prod_{n_i}\mathrm{Out}(G_{i})$ by natural permutations. \end{thm} There are many examples of centerless indecomposable groups. \begin{example} Non-abelian free groups are centerless indecomposable groups. \end{example} \begin{example}\label{centerless of negatived mfd} Non-abelian torsion-free Gromov hyperbolic groups are centerless indecomposable groups. In particular, the fundamental group of a closed Riemannian manifold with negative sectional curvature everywhere is centerless and indecomposable. \end{example} \begin{proof} Let $G$ be a non-abelian torsion-free Gromov hyperbolic group and $1\neq \gamma \in G.$ It is well-known that the centralizer subgroup $C(\gamma )=\{g\in G\mid g\gamma =\gamma g\}$ contains $\langle \gamma \rangle $ as a finite-index subgroup (see \cite{bh}, Corollary 3.10, p.462). Therefore, the center of $G$ is trivial and $G$ is indecomposable. \end{proof} \section{Facts about the bounded index properties}\label{sect 3} In this section, we give some facts about BIP, BIPHE and BIPH.
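Before proceeding, we note that the centerless hypothesis used in the examples of the previous section can be checked by brute force for small finite groups. A minimal Python sketch (illustrative only, not part of the argument; $S_3$ is realized as permutations of $\{0,1,2\}$, and the helper name `compose` is ours):

```python
from itertools import permutations

# Permutation composition: (p . q)(i) = p(q(i)).
def compose(p, q):
    return tuple(p[i] for i in q)

# S_3 as the set of all permutations of {0, 1, 2}.
S3 = list(permutations(range(3)))

# The center consists of the elements commuting with every element of S_3.
center = [g for g in S3 if all(compose(g, h) == compose(h, g) for h in S3)]
print(center)  # only the identity: [(0, 1, 2)]
```

The center is trivial, so $S_3$ is centerless; this is the kind of hypothesis ("centerless and indecomposable") under which Theorem \ref{aut and out of product groups} applies.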
In order to state results conveniently, we will use the following definition. \begin{definition} Let $X$ be a compact polyhedron and $C$ a family of self-maps of $X.$ We say that $X$ has the \emph{Bounded Index Property with respect to} $C$ (denoted by $\mathrm{BIPC}$) if there exists an integer $\mathcal{B}>0$ (depending only on $C$) such that for any map $f\in C$ and any fixed point class $\mathbf{F}$ of $f$, the index $|\mathrm{ind}(f,\mathbf{F})|\leq \mathcal{B}$. The minimal such $\mathcal{B}$ is called the bounded index for $C$. \end{definition} When $C$ is the set of all self-maps (self-homeomorphisms, self-homotopy equivalences, respectively), we simply denote $\mathrm{BIPC}$ by BIP (BIPH, BIPHE, respectively). It is obvious that $\mathrm{BIPC}_{1}$ implies $\mathrm{BIPC}_{2}$ when $C_{2}$ is a subset of $C_{1}.$ For maps of polyhedra, Jiang gave the following definition of a mutant, and showed that the Nielsen fixed point invariants are invariants of mutants (see \cite[Sect. 1]{fp2}). \begin{definition} Let $f:X\rightarrow X$ and $g:Y\rightarrow Y$ be self-maps of compact connected polyhedra. We say $g$ is obtained from $f$ by \emph{commutation} if there exist maps $\phi :X\rightarrow Y$ and $\psi :Y\rightarrow X$ such that $f=\psi \circ \phi $ and $g=\phi \circ \psi $. We say $g$ is a \emph{mutant} of $f$ if there is a finite sequence $\{f_i : X_i\to X_i | i = 1,2,\cdots,k\}$ of self-maps of compact polyhedra such that $f=f_1$, $g=f_k$, and for each $i$, either $X_{i+1}=X_i$ and $f_{i+1}\simeq f_i$, or $f_{i+1}$ is obtained from $f_i$ by commutation. \end{definition} \begin{lem}[Jiang, \cite{fp2}]\label{mutant invar.} Mutants have the same set of indices of essential fixed point classes, hence also the same Lefschetz number and Nielsen number. \end{lem} Note that being mutants is an equivalence relation on self-maps of compact polyhedra.
In other words, two self-maps $f:X\to X$ and $g:Y\to Y$ are mutant-equivalent if there exist finitely many maps of compact polyhedra $X_1=X,X_2,\ldots, X_k=Y,$ $$u_{i}:X_i\rightarrow X_{i+1},\quad v_{i}:X_{i+1}\rightarrow X_i,\quad\quad i=1,2,\cdots ,k-1,$$ such that $f\simeq v_{1}\circ u_{1}: X_1\to X_1, ~~g\simeq u_{k-1}\circ v_{k-1}:X_k\to X_k,$ and for $i<k-1$, $$u_{i}\circ v_{i}\simeq v_{i+1}\circ u_{i+1}.$$ For a self-map $f:X\to X$ of a connected compact polyhedron $X$, let $[f]_m$ denote the mutant-equivalence class of $f$, and for a family $C$ of self-maps of $X$, let $[C]_m:=\{[f]_m|f\in C\}$ be the set of mutant-equivalence classes of $C$. Since $f$ has only finitely many non-empty fixed point classes, each of which is a compact subset of $X$, there is a finite bound $\mathcal{B}_{f}$ on $|\mathrm{ind}(f, \mathbf{F})|$ over all the fixed point classes $\mathbf{F}$ of $f$. As an immediate consequence of Lemma \ref{mutant invar.}, we have the following: \begin{prop}\label{ind for muatant} \label{BIP of homotopy type} For any family $C$ of self-maps of a connected compact polyhedron $X$, the set $\{\mathrm{ind}(f,\mathbf{F})|f\in C, \mathbf{F}\in \mathrm{Fpc}(f)\}$ defined on $C$ factors through the equivalence classes $[C]_{m}$. Namely, for any set $[C]_m$ of mutant-equivalence classes, we have a set of indices $$\mathrm{ind}([C]_m):=\{\mathrm{ind}(f,\mathbf{F})|f\in C, \mathbf{F}\in \mathrm{Fpc}(f)\}$$ depending only on $[C]_m$. Moreover, if $[C]_m$ is finite, then $X$ has BIPC. \end{prop} Suppose that $X$ is aspherical and let $[X]$ (resp. $[X]_{h.e}$) be the set of all homotopy classes of self-maps (resp. self-homotopy equivalences) of $X$.
It is well-known that there is a bijective correspondence \begin{equation*} \eta:[X]\longleftrightarrow \mathrm{End}(\pi _{1}(X))/\mathrm{Inn}(\pi _{1}(X)), \end{equation*} given by sending a self-map $f$ to the induced endomorphism $f_{\pi}$ of the fundamental group, where the inner automorphism group $\mathrm{Inn}(\pi _{1}(X))$ acts on the semigroup $ \mathrm{End}(\pi _{1}(X))$ of all the endomorphisms of $\pi _{1}(X)$ by composition. Note that $\eta$ induces a bijection (still denoted by $\eta$) \begin{equation*} \eta: \lbrack X]_{h.e}\longleftrightarrow \mathrm{Out}(\pi _{1}(X)). \end{equation*} For any self-map $f$ of $X$, let $[f]$ denote the homotopy class of $f$, and for any family $C$ of self-homotopy equivalences of $X$, let $[C]:=\{[f]|f\in C\}$ be the set of homotopy classes of $C$. Then the image $\eta([C])$ is a subset of $\mathrm{Out} (\pi _{1}(X)).$ For a group $G$ and a subset $H$ of $G$, two elements $g,h\in H$ are conjugate if there exists an element $k\in G$ such that $g=khk^{-1}.$ Let $\bar{h}$ be the conjugacy class of $h$ in $G$, and $\mathrm{Conj}_{G}H:=\{\bar h|h\in H\}$ the set of conjugacy classes of elements of $H$ in $G$. We have the following: \begin{lem}\label{finite out implies BIPH} Let $X$ be a connected compact aspherical polyhedron. Suppose that $C$ is a family of self-homotopy equivalences of $X. $ Then we have a natural surjection $$\Phi:~~ \mathrm{Conj}_{\mathrm{Out}(\pi_{1}(X))}\eta([C])\longrightarrow [C]_m, \quad \text{defined by}\quad \overline{\eta([f])}\longmapsto [f]_m.$$ Moreover, if the set $\mathrm{Conj}_{\mathrm{Out}(\pi_{1}(X))}\eta([C])$ is finite, then $[C]_m$ is also finite and hence $X$ has BIPC. In particular, when $\mathrm{Out}(\pi _{1}(X))$ has finitely many conjugacy classes, the polyhedron $X$ has the bounded index property with respect to the set of all self-homotopy equivalences, i.e., $X$ has BIPHE.
\end{lem} \begin{proof} For self-homotopy equivalences $f, g\in C$, if $\overline{\eta([f])}=\overline{\eta([g])}$, then there exists $\eta([h])=\eta([h'])^{-1}\in \mathrm{Out}(\pi_1(X))$, for $h:X\to X$ a homotopy equivalence and $h': X\to X$ a homotopy inverse of $h$, such that $$\eta([g])=\eta([h])\cdot\eta([f])\cdot\eta([h])^{-1}=\eta([h])\cdot\eta([f])\cdot\eta([h'])\in \mathrm{Out}(\pi_1(X)).$$ This implies that $g\simeq h\comp (f\comp h')$. Note that $f\simeq (f\comp h')\comp h$, so we have $[f]_m=[g]_m$. Therefore, $\Phi$ is well-defined. It is obvious from the definition that $\Phi$ is surjective, and the proof is finished by Proposition \ref{ind for muatant}. \end{proof} \begin{thm} \label{main thm3} Let $M=M_1\times\cdots\times M_m$ be a product of finitely many connected closed Riemannian manifolds, each with negative sectional curvature everywhere, and with (not necessarily the same) dimension $\geq 3.$ Then $M$ has BIPHE. \end{thm} \begin{proof} Rips and Sela \cite{RS}, building on ideas of Paulin, proved that $\mathrm{Out} (G)$ is a finite group when $G$ is the fundamental group of a closed Riemannian manifold of dimension $\geq 3$ with negative sectional curvature everywhere. Since the factors $M_{i}^{n_{i}},i=1,\ldots ,m$, are closed Riemannian manifolds, each with negative sectional curvature everywhere, and the dimensions $n_{i}\geq 3$, we have that $\mathrm{Out}(\pi _{1}(M))$ is also finite by Theorem \ref{aut and out of product groups}. Therefore, $M$ has BIPHE (and hence BIPH) by Lemma \ref{finite out implies BIPH}. \end{proof} \section{Fixed points of cyclic homeomorphisms of products of surfaces}\label{sect 4} In this section, we will generalize the results on alternating homeomorphisms (see \cite[Section 3]{ZZ}) to \emph{cyclic homeomorphisms} of products of surfaces. Let $F$ be a connected closed hyperbolic surface, so that its Euler characteristic satisfies $\chi(F)<0$.
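As a toy warm-up for the index computations in this section (this example is not from the paper), consider the degree-$d$ self-map $\theta\mapsto d\theta$ of the circle: each of its $d-1$ transversal fixed points has local index $\mathrm{sgn}(1-d)$, and the indices sum to the Lefschetz number $1-d$, giving the simplest instance of a finite bound on fixed point indices. A minimal pure-Python sketch:

```python
import math

def circle_fixed_points(d):
    """Fixed points of f(theta) = d*theta (mod 2*pi), i.e. the
    d-1 solutions of (d-1)*theta = 0 (mod 2*pi), for d >= 2."""
    return [2.0 * math.pi * k / (d - 1) for k in range(d - 1)]

def local_index(d):
    # At a transversal fixed point of a smooth circle map f,
    # the local index is sgn(1 - f'(theta)); here f'(theta) = d.
    return 1 if 1 - d > 0 else -1

d = 3
fps = circle_fixed_points(d)
indices = [local_index(d) for _ in fps]

# The local indices sum to the Lefschetz number L(f) = 1 - deg(f) = 1 - d.
assert sum(indices) == 1 - d
```

Here each index is bounded by $\max(1,d-1)$ in absolute value, which plays the role of the bound $\mathcal{B}_f$ for this one map.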
\begin{defn} A self-homeomorphism $f$ of $F^m:=\overbrace{F\times\cdots\times F}^m$ is called a \emph{cyclic homeomorphism} if \begin{equation*} f=\tau\comp (\prod_{i=1}^m f_i): F^m\to F^m, \quad (a_1,a_2,\ldots,a_m)\mapsto (f_m(a_m),f_1(a_1),\ldots,f_{m-1}(a_{m-1})), \end{equation*} where $f_1,\ldots,f_m$ are self-homeomorphisms of $F$, and $\tau=(12\cdots m)\in S_m$ is an $m$-cycle. \end{defn} Note that on a compact hyperbolic surface every homeomorphism is isotopic to a diffeomorphism, so by the same argument as in the proof of \cite[Lemma 3.2]{ZZ}, we have \begin{lem} \label{transversal} Let $f_1,\ldots, f_m$ be self-homeomorphisms of $F$, and $f=\tau\comp (\prod_{i=1}^m f_i): F^m\to F^m$ a cyclic homeomorphism. Then $ f_1,\ldots, f_m$ can be isotoped to diffeomorphisms $g_1,\ldots, g_m$ respectively, such that the graph of the corresponding cyclic homeomorphism $ g=\tau\comp (\prod_{i=1}^m g_i): F^m\to F^m$ is transversal to the diagonal in $F^m$. Moreover, $f$ is homotopic to $g$ and, for each fixed point $(a_1, a_2,\ldots, a_m)$ of $g$, there are charts of $F^m$ at $(a_1, a_2,\ldots, a_m)$ such that under the charts, $g$ has a local canonical form \begin{eqnarray} &&(u_{11}, u_{12}, u_{21}, u_{22},\ldots, u_{m1},u_{m2}) \notag \\ &\mapsto& (g_{m1}(u_{m1}, u_{m2}), g_{m2}(u_{m1}, u_{m2}),g_{11}(u_{11}, u_{12}), g_{12}(u_{11}, u_{12}), \notag \\ &&\ldots, g_{(m-1)1}(u_{(m-1)1}, u_{(m-1)2}),g_{(m-1)2}(u_{(m-1)1}, u_{(m-1)2})) \notag \end{eqnarray} where $g_{i1}, g_{i2}$ are the components of $g_i$ under the charts.
\end{lem} \begin{lem} \label{index of cylic homeomorphism} If $f=\tau\comp (\prod_{i=1}^m f_i): F^m\to F^m$ is a cyclic homeomorphism, then the natural map \begin{equation*} \rho: F\to F^m, \quad a_1\mapsto (a_1, f_1(a_1),\ldots, f_{m-1}\comp\cdots \comp f_2\comp f_1(a_1)) \end{equation*} induces an index-preserving one-to-one correspondence between the set $ \mathrm{Fpc}(f_{m}\comp\cdots\comp f_2\comp f_1)$ of fixed point classes of $ f_{m}\comp\cdots\comp f_2\comp f_1$ and the set $\mathrm{Fpc}(f)$ of fixed point classes of $f$. \end{lem} \begin{proof} It is clear that \begin{eqnarray} \mathrm{Fix} f&=&\{(a_1,a_2,\ldots,a_m)|f_1(a_1)=a_2, f_2(a_2)=a_3,\ldots,f_m(a_m)=a_1\} \notag \\ &=&\{(a_1, f_1(a_1),\ldots, f_{m-1}\comp\cdots\comp f_2\comp f_1(a_1))|a_1\in \mathrm{Fix} (f_{m}\comp\cdots\comp f_2\comp f_1)\}. \notag \end{eqnarray} Suppose that $a_1$ and $a_1^{\prime }$ are in the same fixed point class of $ f_{m}\comp\cdots\comp f_1$, and $p: \widetilde F\to F$ is the universal cover. Then there is a lifting $\tilde f_i$ of $f_i$ such that $ a_1,a_1^{\prime }\in p(\mathrm{Fix} (\tilde f_m\comp\cdots\comp \tilde f_1))$, and there are points $\tilde a_1\in p^{-1}(a_1)$ and $\tilde a_1^{\prime }\in p^{-1}(a_1^{\prime })$ with $(\tilde f_m\comp\cdots\comp \tilde f_1)(\tilde a_1) = \tilde a_1$ and $(\tilde f_m\comp\cdots\comp \tilde f_1)(\tilde a_1^{\prime }) = \tilde a_1^{\prime }$. Hence, \begin{eqnarray} &&(\tau\comp(\tilde f_1\times \cdots\times\tilde f_m)) (\tilde a_1, \tilde f_1(\tilde a_1),\ldots,\tilde f_{m-1}\comp\cdots\comp \tilde f_1(\tilde a_1)) \notag \\ &=& ((\tilde f_m\comp\cdots\comp \tilde f_1)(\tilde a_1), \tilde f_1(\tilde a_1),\ldots,(\tilde f_{m-1}\comp\cdots\comp \tilde f_1)(\tilde a_1)) \notag \\ &=&(\tilde a_1, \tilde f_1(\tilde a_1),\ldots,\tilde f_{m-1}\comp\cdots\comp \tilde f_1(\tilde a_1)).
\notag \end{eqnarray} It follows that \begin{equation*} (a_1, f_1(a_1),\ldots, f_{m-1}\comp\cdots\comp f_1(a_1))\in (\prod_m p)( \mathrm{Fix}(\tau\comp(\tilde f_1\times \cdots\times\tilde f_m))). \end{equation*} Similarly, we also have \begin{equation*} (a^{\prime }_1, f_1(a^{\prime }_1),\ldots, f_{m-1}\comp\cdots\comp f_1(a^{\prime }_1))\in (\prod_m p)(\mathrm{Fix}(\tau\comp(\tilde f_1\times \cdots\times\tilde f_m))). \end{equation*} Since $(\tau\comp(\tilde f_1\times \cdots\times\tilde f_m))$ is a lifting of $f$, we obtain that $(a_1, f_1(a_1),\ldots, f_{m-1}\comp\cdots\comp f_1(a_1)) $ and $(a^{\prime }_1, f_1(a^{\prime }_1),\ldots, f_{m-1}\comp \cdots\comp f_1(a^{\prime }_1))$ are in the same fixed point class of $f$. Conversely, suppose that the two points above are in the same fixed point class of $f$. Then there is a lifting $\tilde f_i$ of $f_i$ such that both of them lie in $(\prod_m p)(\mathrm{Fix}(\tau\comp(\tilde f_1\times \cdots\times\tilde f_m))) $. Hence, $a_1,a_1^{\prime }\in p(\mathrm{Fix} (\tilde f_m\comp \cdots\comp\tilde f_1))$, and we conclude that $a_1$ and $ a_1^{\prime }$ are in the same fixed point class of $f_m\comp\cdots\comp f_2 \comp f_1$. Now we shall prove that, as a bijective correspondence between the sets of fixed point classes, $\rho$ is index-preserving. Since the indices of fixed point classes are invariant under homotopies, by Lemma \ref{transversal} we may homotope the $f_i$, $i=1,2,\ldots,m$, so that the graph of $f$ is transversal to the diagonal and $f$ has local canonical forms in a neighborhood of every fixed point.
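The determinant identity behind the index computation that follows, $\det(I_{2m}-N)=\det(I_2-N_m\cdots N_2N_1)$ for the block-cyclic matrix $N$ built from the differentials $N_i$, can be sanity-checked numerically. A minimal pure-Python sketch with arbitrary integer $2\times 2$ blocks (illustrative only, not part of the proof):

```python
def det(M):
    """Exact integer determinant by cofactor expansion (small matrices only)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

m = 3
blocks = [[[2, 1], [0, 3]], [[1, -1], [2, 0]], [[0, 2], [-1, 1]]]   # N_1, N_2, N_3

# Assemble N: block-row 1 carries N_m in the last block-column,
# block-row i+1 carries N_i in block-column i.
N = [[0] * (2 * m) for _ in range(2 * m)]
for i in range(m):
    r, cidx = 2 * ((i + 1) % m), 2 * i
    for a in range(2):
        for b in range(2):
            N[r + a][cidx + b] = blocks[i][a][b]

I_minus = lambda A: [[(1 if i == j else 0) - A[i][j] for j in range(len(A))]
                     for i in range(len(A))]

prod = blocks[0]
for B in blocks[1:]:
    prod = matmul(B, prod)        # prod = N_3 N_2 N_1

lhs = det(I_minus(N))             # det(I_{2m} - N)
rhs = det(I_minus(prod))          # det(I_2 - N_3 N_2 N_1)
assert lhs == rhs
```

The equality (and in particular the equality of signs used for the indices) is invariant under cyclic reordering of the blocks, since $\det(I-AB)=\det(I-BA)$.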
Suppose that the differential $Df_i$ of $ f_i$ at $a_i$ is $N_i=\left( \begin{array}{cc} \frac{\partial f_{i1}}{\partial u_{i1}} & \frac{\partial f_{i1}}{\partial u_{i2}}\\ \frac{\partial f_{i2}}{\partial u_{i1}} & \frac{\partial f_{i2}}{\partial u_{i2}} \end{array} \right).$ Then the differential $Df$ of $f$ at $(a_1,a_2,\ldots,a_m)$ is \begin{equation*} N=\left( \begin{array}{ccccc} 0 & 0 & \cdots & 0 & N_m \\ N_1 & 0 & \cdots & 0 & 0 \\ 0 & N_2 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & N_{m-1} & 0 \end{array} \right). \end{equation*} Therefore, the index of $f_m\comp\cdots\comp f_2\comp f_1$ at the fixed point $a_1$ is \begin{equation*} \mathrm{ind}(f_m\comp\cdots\comp f_2\comp f_1, a_1) = \mathrm{sgn}\det(I_2-N_m\cdots N_2N_1), \end{equation*} and the index of $f$ at the fixed point $(a_1,a_2,\ldots,a_m)$ is \begin{eqnarray} \mathrm{ind}(f, (a_1,a_2,\ldots,a_m))&=&\mathrm{sgn} \det(I_{2m}-N) \notag \\ &=&\mathrm{sgn}\det(I_2-N_{m-1}\cdots N_1N_m) \notag \\ &=& \mathrm{sgn}\det(I_2-N_m\cdots N_2N_1), \notag \end{eqnarray} where $I_k$ is the identity matrix of order $k$. Therefore, \begin{equation*} \mathrm{ind}(f_m\comp\cdots\comp f_2\comp f_1, a_1)=\mathrm{ind}(f, (a_1,a_2,\ldots,a_m)), \end{equation*} and the proof is finished. \end{proof} As a corollary, we have \begin{cor} \label{alter for N&L} \begin{equation*} N(f)=N(f_m\comp\cdots\comp f_2\comp f_1), \quad L(f)=L(f_m\comp\cdots\comp f_2\comp f_1). \end{equation*} \end{cor} Directly following from Lemma \ref{index of cylic homeomorphism}, Corollary \ref{alter for N&L} and \cite[Theorem 4.1]{JG}, we have \begin{prop} \label{bounds for cyclic homeomorphism} If $f: F^m\to F^m$ is a cyclic homeomorphism, then $\mathrm{(A)}$ For every fixed point class $\mathbf{F}$ of $f$, we have \begin{equation*} 2\chi(F)-1\leq \mathrm{ind}(f,\mathbf{F})\leq 1.
\end{equation*} Moreover, almost every fixed point class $\mathbf{F}$ of $f$ has index $\geq -1$, in the sense that \begin{equation*} \sum_{\mathrm{ind}(f,\mathbf{F})<-1}\{\mathrm{ind}(f,\mathbf{F})+1\}\geq 2\chi(F), \end{equation*} where the sum is taken over all fixed point classes $\mathbf{F}$ with $ \mathrm{ind}(f,\mathbf{F})<-1$; $\mathrm{(B)}$ Let $L(f)$ and $N(f)$ be the Lefschetz number and the Nielsen number of $f$ respectively. Then \begin{equation*} |L(f)-\chi(F)|\leq N(f)-\chi(F). \end{equation*} \end{prop} \section{Fixed points of product maps and proofs of Theorems \ref{main thm1} and \ref{main thm2}}\label{sect 5} \label{sect. of borel conjecture} To prove Theorem \ref{main thm1} and Theorem \ref{main thm2}, we need some facts about fixed points of product maps. \subsection{Fixed points of product maps} Let $X_1,\ldots,X_n$ be connected compact polyhedra. \begin{defn} A self-map $f: X_1\times\cdots\times X_n\to X_1\times\cdots\times X_n$ is called a \emph{product map} if \begin{equation*} f=f_1\times\cdots\times f_n: X_1\times\cdots\times X_n\to X_1\times\cdots\times X_n,\quad (a_1,\ldots,a_n)\mapsto (f_1(a_1),\ldots,f_n(a_n)), \end{equation*} where $f_i$ is a self-map of $X_i, i=1,\ldots,n$. \end{defn} By a proof analogous to that of \cite[Lemma 2.2]{ZZ}, we have the following lemma about the fixed point classes of product maps. \begin{lem} \label{product of index} If $f: X_1\times\cdots\times X_n\to X_1\times\cdots\times X_n$ is a product map, then $\mathrm{Fix} f= \mathrm{Fix} f_1\times\cdots\times \mathrm{Fix} f_n,$ and each fixed point class $\mathbf{F}\in \mathrm{Fpc}(f)$ splits into a product of fixed point classes of the $f_i$, i.e., \begin{equation*} \mathbf{F}=\mathbf{F}_1\times\cdots\times \mathbf{F}_n, \quad\mathrm{ind}(f, \mathbf{F})=\mathrm{ind}(f_1,\mathbf{F}_1)\cdots\mathrm{ind}(f_n, \mathbf{F} _n), \end{equation*} where $\mathbf{F}_i\in \mathrm{Fpc} (f_i)$ is a fixed point class of $f_i$ for $i=1,\ldots,n$.
Moreover, \begin{equation*} L(f)=L(f_1)\cdots L(f_n),\quad N(f)=N(f_1)\cdots N(f_n). \end{equation*} \end{lem} \subsection{Proofs of Theorem \ref{main thm1} and Theorem \ref{main thm2}} Now we can give the proofs of Theorems \ref{main thm1} and \ref{main thm2}. Since the index of fixed points is homotopy invariant, we omit the base points of fundamental groups in the following. \begin{proof}[Proof of Theorem \protect\ref{main thm1}] Let $X=X_1\times\cdots\times X_n$, and $f:X\to X$ a homotopy equivalence. Then $f$ induces an automorphism $f_{\pi}:\pi _{1}(X)\rightarrow \pi_{1}(X)$. Note that $\pi_1(X)$ is isomorphic to the direct product $ \prod_{i=1}^{n}\pi_1(X_i)$. By Proposition \ref{aut of prouct group}, the condition (1) in Theorem \ref{main thm1} implies $f_{\pi}=\phi_{1}\times \cdots\times \phi_n$ with $\phi_{i}$ an automorphism of $\pi _{1}(X_i), i=1,\dots, n$. Note that $X_i$ is a compact aspherical polyhedron, so $\phi_{i}$ can be induced by a homotopy equivalence $f_{i}:X_i\to X_i, i=1,\dots, n$. Since the product $X$ is also a compact aspherical polyhedron, $f$ is homotopic to the product map $f_{1}\times\cdots\times f_n$, which is also a homotopy equivalence. Since $X_i$ has BIPHE, the index $ \mathrm{ind}(f_i,\mathbf{F}_i)$ of any fixed point class $\mathbf{F}_i$ of $ f_i$ has a finite bound $\mathcal{B}_{X_i}$ depending only on $X_i$. By the product formula for the index in Lemma \ref{product of index}, we have $ |\mathrm{ind}(f,\mathbf{F})|<\mathcal{B}_X:=\prod_{i=1}^n\mathcal{B}_{X_i}$ for every fixed point class $\mathbf{F}$ of $f$. Therefore, $X$ has BIPHE. \end{proof} \begin{proof}[Proof of Theorem \protect\ref{main thm2}] Let $M_1,\ldots, M_n$ be connected closed Riemannian manifolds, each with negative sectional curvature everywhere, and $M=M_1\times\cdots\times M_n$.
Collect together the coordinates corresponding to homotopy equivalent $M_i$'s and write $M$ in the form \begin{equation*} M=\prod^s_{i=1}M_i^{\,n_i}\times \prod^m_{i=s+1}M_i^{\,n_i}, \end{equation*} where $M_1,\ldots, M_s$, $0\leq s\leq n$, are hyperbolic surfaces, $ M_{s+1},\ldots, M_m$ have dimensions $\geq 3$, $n_i\geq 1$ (recall that $n_i$ is not the dimension but the number of copies of $M_i$), $n_1+\cdots+n_m=n $ and $M_i\not\simeq M_j$ for $1\leq i\neq j\leq m$. Then by Example \ref{centerless of negatived mfd}, the fundamental group $G_i:=\pi_1(M_i)$ is centerless and indecomposable for $i=1,2,\ldots, n$. Therefore, \begin{equation*} \pi_1(M)=\prod^s_{i=1}G_i^{\,n_i}\times \prod^m_{i=s+1}G_i^{\,n_i}, \end{equation*} where $G_i\ncong G_j$ for $1\leq i\neq j\leq m$. Any homotopy equivalence $f:M\to M$ induces an automorphism $ f_{\pi} $ of $\pi_1(M)$, so by Proposition \ref{aut of prouct group}, there exist automorphisms $\phi_{i,j}\in \mathrm{Aut}(G_i)$ and permutations $\sigma_i \in S_{n_i}$, such that \begin{equation*} f_{\pi}= \prod_{i=1}^s (\sigma_i \comp \prod_{j=1}^{n_i} \phi_{i,j})\times \prod_{i=s+1}^m (\sigma_i \comp \prod_{j=1}^{n_i} \phi_{i,j}):=\phi\times \psi, \end{equation*} where $\phi=\prod_{i=1}^s (\sigma_i \comp \prod_{j=1}^{n_i} \phi_{i,j})$ and $\psi=\prod_{i=s+1}^m (\sigma_i \comp \prod_{j=1}^{n_i} \phi_{i,j})$. Since $G_i$ for $i>s$ is the fundamental group of a Riemannian manifold of dimension $\geq 3$, $\psi$ can be induced by a homotopy equivalence $ h: \prod^m_{i=s+1}M_i^{\,n_i}\to \prod^m_{i=s+1}M_i^{\,n_i}.$ On the other hand, $G_i$ ($i\leq s$) is the fundamental group of a closed hyperbolic surface, so the automorphism $\phi_{i,j}\in \mathrm{Aut}(G_i)$ can be induced by a homeomorphism $f_{i,j}$, and hence $\phi$ is induced by the homeomorphism \begin{equation*} g=\prod_{i=1}^s(\sigma_i \comp \prod_{j=1}^{n_i} f_{i,j}):\prod^s_{i=1}M_i^{\,n_i}\to \prod^s_{i=1}M_i^{\,n_i}.
\end{equation*} That is, $\phi=g_{\pi}$, and thus $f_{\pi}=g_{\pi}\times h_{\pi}=(g\times h)_{\pi}$. Since $M$ is also aspherical, $f$ is homotopic to the product map $g\times h$. By Theorem \ref{main thm3}, for every fixed point class $\mathbf{F}$ of $h$, we have $|\mathrm{ind}(h,\mathbf{F})|< \mathcal{B}_M$ for some finite bound $\mathcal{B}_M$ depending only on $M$. To complete the proof, by Lemma \ref{product of index}, it suffices to show that $|\mathrm{ind}(g,\mathbf{F}^{\prime })|<\mathcal{B}^{\prime }_M$ for some finite bound $\mathcal{B}^{\prime }_M$ depending only on $M$, for every fixed point class $\mathbf{F'}$ of $g$. Since every permutation $\sigma_i\in S_{n_i}$ is a product of disjoint cycles, and $M_1,\ldots, M_s$ are hyperbolic surfaces, we can rewrite \begin{equation*} g=\prod_{k} g_k:\prod^s_{i=1}M_i^{\,n_i}\to \prod^s_{i=1}M_i^{\,n_i} \end{equation*} as a product of finitely many cyclic homeomorphisms $g_k$ of products of hyperbolic surfaces. Then by Proposition \ref{bounds for cyclic homeomorphism}, we can choose $\mathcal{B}^{\prime }_M=\prod^s_{i=1}|2\chi(M_i)-1|^{\,n_i}.$ \end{proof} \begin{thebibliography}{JWZ} \bibitem[BH]{bh} M. Bridson and A. Haefliger, \textit{Metric Spaces of Non-Positive Curvature}, Grund. Math. Wiss. 319, Springer-Verlag, Berlin-Heidelberg-New York, 1999. \bibitem[J1]{fp1} B. Jiang, \emph{Lectures on Nielsen Fixed Point Theory}, Contemporary Mathematics vol.~14, American Mathematical Society, Providence (1983). \bibitem[J2]{fp2} B. Jiang, \emph{Bounds for fixed points on surfaces}, Math. Ann. 311 (1998), 467--479. \bibitem[JG]{JG} B. Jiang and J. Guo, \emph{Fixed points of surface diffeomorphisms}, Pac. J. Math. 160 (1) (1993), 67--89. \bibitem[JW]{JW} B. Jiang and S. Wang, \emph{Lefschetz numbers and Nielsen numbers for homeomorphisms on aspherical manifolds}, Topology Hawaii, 1990, World Sci. Publ., River Edge, NJ, 1992, 119--136. \bibitem[JWZ]{JWZ} B. Jiang, S.D. Wang and Q.
Zhang, \emph{Bounds for fixed points and fixed subgroups on surfaces and graphs}, Algebr. Geom. Topol. 11 (2011), 2297--2318. \bibitem[K1]{K1} M. Kelly, \emph{A bound for the fixed-point index for surface mappings}, Ergodic Theory Dynam. Systems 17 (1997), 1393--1408. \bibitem[K2]{K2} M. Kelly, \emph{Bounds on the fixed point indices for self-maps of certain simplicial complexes}, Topology Appl. 108 (2000), 179--196. \bibitem[Mc]{Mc} C. McCord, \emph{Estimating Nielsen numbers on infrasolvmanifolds}, Pac. J. Math. 154 (1992), 345--368. \bibitem[RS]{RS} E. Rips and Z. Sela, \emph{Structure and rigidity in hyperbolic groups. I}, Geom. Funct. Anal. 4 (1994), no. 3, 337--371. \bibitem[Z1]{Z} Q. Zhang, \emph{Bounds for fixed points on Seifert manifolds}, Topology Appl. 159 (15) (2012), 3263--3273. \bibitem[Z2]{Z2} Q. Zhang, \emph{Bounds for fixed points on hyperbolic 3-manifolds}, Topology Appl. 164 (2014), 182--189. \bibitem[Z3]{Z3} Q. Zhang, \emph{Bounds for fixed points on hyperbolic manifolds}, Topology Appl. 185-186 (2015), 80--87. \bibitem[ZVW]{ZVW} Q. Zhang, E. Ventura and J. Wu, \emph{Fixed subgroups are compressed in surface groups}, Internat. J. Algebra Comput. 25 (5) (2015), 865--887. \bibitem[ZZ]{ZZ} Q. Zhang and X. Zhao, \emph{Bounds for fixed points on products of hyperbolic surfaces}, J. Fixed Point Theory Appl. 21: 6 (2019), 1--11. \end{thebibliography} \end{document}
\begin{document} \title{On the Linear Convergence of the Cauchy Algorithm for a Class of Restricted Strongly Convex Functions} \author{Hui Zhang\thanks{ College of Science, National University of Defense Technology, Changsha, Hunan, 410073, P.R.China. Corresponding author. Email: \texttt{[email protected]}} } \date{\today} \maketitle \begin{abstract} In this short note, we extend the linear convergence result for the Cauchy algorithm, derived recently by E. de Klerk, F. Glineur, and A. Taylor, from the case of smooth strongly convex functions to the case of restricted strongly convex functions of a certain form. \end{abstract} \textbf{Keywords.} Cauchy algorithm, gradient method, linear convergence, restricted strongly convex \newline \textbf{AMS subject classifications.} 90C25, 90C22, 90C20. \section{Introduction} The Cauchy algorithm, which was proposed by Augustin-Louis Cauchy in 1847, is also known as the gradient method with exact line search. Although it is the first algorithm taught in introductory courses on nonlinear optimization, the worst-case convergence rate of this method was not precisely understood until very recently, when the authors of \cite{Klerk2016on} settled it for strongly convex, continuously differentiable functions $f$ with Lipschitz continuous gradient. This class of functions, denoted by ${\mathcal{F}}_{\mu,L}(\mathbb{R}^n)$, can be described by the following inequalities: \begin{align} & \|\nabla f(x)-\nabla f(y)\|\leq L\|x-y\|, \quad\forall x, y\in\mathbb{R}^n; \label{Lip} \\ &\langle \nabla f(x)-\nabla f(y), x-y\rangle \geq \mu \|x-y\|^2, \quad\forall x, y\in\mathbb{R}^n.\label{SC} \end{align} The gradient descent method with exact line search can be described as follows.
\begin{algorithm}[htb] \caption{The gradient descent method with exact line search} \label{alg0} \begin{tabbing} \textbf{Input:} $f\in {\mathcal{F}}_{\mu,L}(\mathbb{R}^n)$, $x_0\in\mathbb{R}^n$.\\ 1: \textbf{for} $i=0, 1, \cdots,$ \textbf{do}\\ 2: \quad $\gamma=\arg\min_{\gamma\in \mathbb{R}} f(x_i-\gamma\,\nabla f(x_i))$; \quad\quad\quad ${//}$ the exact line search\\ 3: \quad $x_{i+1}=x_i-\gamma\,\nabla f(x_i)$; \quad\quad\quad\quad\quad\quad\quad\quad ${//}$ the gradient descent step\\ 4: \textbf{end for} \end{tabbing} \end{algorithm} The authors of \cite{Klerk2016on} obtained the following result, which settles the worst-case convergence rate question of the Cauchy algorithm on ${\mathcal{F}}_{\mu,L}(\mathbb{R}^n)$. \begin{theorem}\label{mainresult0} Let $f\in {\mathcal{F}}_{\mu,L}(\mathbb{R}^n)$, $x_*$ a global minimizer of $f$ on $\mathbb{R}^n$, and $f_*=f(x_*)$. Each iteration of the gradient method with exact line search satisfies \begin{equation}\label{linearcon} f(x_{i+1})-f_*\leq \left( \frac{L-\mu}{L+\mu}\right)^2 (f(x_{i})-f_*), ~~i=0, 1, \cdots. \end{equation} \end{theorem} For the case of quadratic functions in ${\mathcal{F}}_{\mu,L}(\mathbb{R}^n)$ of the form $$f(x)=\frac{1}{2}x^TQx+ c^Tx,$$ where $Q\in\mathbb{R}^{n\times n}$ is a positive definite matrix and $c\in \mathbb{R}^n$ is a vector, the result in Theorem \ref{mainresult0} is well-known. The main contribution of \cite{Klerk2016on} is extending the linear convergence \eqref{linearcon} from the quadratic case to the whole class ${\mathcal{F}}_{\mu,L}(\mathbb{R}^n)$. However, Theorem \ref{mainresult0} cannot answer what happens when the matrix $Q$ is only positive semidefinite.
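For the quadratic case just mentioned, the exact line search step has the closed form $\gamma=\|\nabla f(x_i)\|^2/(\nabla f(x_i)^TQ\nabla f(x_i))$, and the per-iteration contraction \eqref{linearcon} can be observed numerically. A minimal pure-Python sketch with an illustrative diagonal $Q$ (the values are arbitrary; this is not code from the paper):

```python
# f(x) = 0.5 * x^T Q x + c^T x with diagonal Q, so mu = min(Q), L = max(Q)
Q = [1.0, 10.0]
c = [1.0, -2.0]

def f(x):
    return 0.5 * sum(q * xi * xi for q, xi in zip(Q, x)) \
           + sum(ci * xi for ci, xi in zip(c, x))

def grad(x):
    return [q * xi + ci for q, xi, ci in zip(Q, x, c)]

x_star = [-ci / qi for ci, qi in zip(c, Q)]     # solves Q x + c = 0
f_star = f(x_star)
mu, L = min(Q), max(Q)
rate = ((L - mu) / (L + mu)) ** 2               # the bound of Theorem 1

x = [5.0, 5.0]
for _ in range(20):
    g = grad(x)
    # exact line search has a closed form for quadratics: gamma = g^T g / g^T Q g
    gamma = sum(gi * gi for gi in g) / sum(qi * gi * gi for qi, gi in zip(Q, g))
    x_new = [xi - gamma * gi for xi, gi in zip(x, g)]
    # the per-step decrease never violates the worst-case bound
    assert f(x_new) - f_star <= rate * (f(x) - f_star) + 1e-12
    x = x_new
```

With $\mu=1$ and $L=10$, the observed per-step ratios stay below $(9/11)^2\approx 0.669$, in line with \eqref{linearcon}.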
In this note, we will further extend the linear convergence \eqref{linearcon} to a certain class of restricted strongly convex (RSC) functions, more concretely to functions of the form $f(x)=h(Ax)$, where $h\in {\mathcal{F}}_{\mu,L}(\mathbb{R}^m)$ and $A\in \mathbb{R}^{m\times n}$ with $m\leq n$; these cover the quadratic function $f(x)=\frac{1}{2}x^TQx+ c^Tx$ with positive semidefinite matrix $Q$. The concept of restricted strong convexity was proposed in our previous papers \cite{zhang2015restricted,zhang2013gradient} and is strictly weaker than strong convexity; for details and recent developments we refer the reader to \cite{zhang2015The,Frank2015linear,zhang2016new}. It is not difficult to see that $f(x)=h(Ax)$ is generally not strongly convex even if $h(\cdot)$ is, unless $A$ has full column-rank. But it is always restricted strongly convex, since it satisfies the following inequality that was proposed to define RSC functions: \begin{equation} \langle \nabla f(x), x-x^\prime\rangle \geq \nu \|x-x^\prime\|^2, \quad\forall x\in\mathbb{R}^n,\label{RSC} \end{equation} where $x^\prime$ is the projection of $x$ onto the minimizer set of $f(x)$, and $\nu$ is a positive constant. A decisive difference from strong convexity is that the set of minimizers of an RSC function need not be a singleton. Now, we state our main result as follows: \begin{theorem}\label{mainresult} Let $h\in {\mathcal{F}}_{\mu,L}(\mathbb{R}^m)$, $A\in \mathbb{R}^{m\times n}$ with $m\leq n$ and having full row-rank, $f(x)=h(Ax)$, $x_*$ belong to the minimizer set of $f$ on $\mathbb{R}^n$, and $f_*=f(x_*)$. Denote $\kappa_h=\frac{\mu}{L}$ and $\kappa(A)=\frac{\lambda_{\min}(AA^T)}{\lambda_{\max}(AA^T)}$, where $\lambda_{\min}(AA^T)$ and $\lambda_{\max}(AA^T)$ stand for the smallest and largest eigenvalues of $AA^T$, respectively.
Each iteration of the gradient method with exact line search satisfies \begin{equation}\label{mainconv} f(x_{i+1})-f_*\leq \left(\frac{2-\kappa(A)-\kappa(A)\kappa_h}{2-\kappa(A)+\kappa(A)\kappa_h}\right)^2 (f(x_{i})-f_*), ~~i=0, 1, \cdots. \end{equation} \end{theorem} When $A$ is the identity matrix, Theorem \ref{mainresult} exactly recovers Theorem \ref{mainresult0} since $\kappa(A)=1$. In this sense, our main result is a further extension of Theorem \ref{mainresult0}. Recently, several papers \cite{zhang2015restricted,Frank2015linear,I2015Linear,zhang2016new} have derived linear convergence results for the gradient method with \textsl{fixed} step-length for RSC functions. The author of \cite{Frank2015linear} analyzed the linear convergence of the gradient method with exact line search for RSC functions, but he only showed the existence of a linear convergence rate. Our novelty here lies in the exact expression \eqref{mainconv} for the linear convergence rate. For the general quadratic case, we have the following consequence. \begin{corollary}\label{corr} Let $f(x)=\frac{1}{2}x^TQx+ c^Tx,$ where $Q\in\mathbb{R}^{n\times n}$ is a given positive semidefinite matrix and $c\in \mathbb{R}^n$ is a given vector. Assume that there is a vector $x_*$ minimizing $f(x)$ on $\mathbb{R}^n$ and denote $f_*=f(x_*)$. Then, each iteration of the gradient method with exact line search satisfies \begin{equation}\label{lincon} f(x_{i+1})-f_*\leq \left(1- \frac{\lambda^{++}_{\min}(Q)}{ \lambda_{\max}(Q)}\right)^2 (f(x_{i})-f_*), ~~i=0, 1, \cdots, \end{equation} where $\lambda^{++}_{\min}(Q)$ stands for the smallest strictly positive eigenvalue of $Q$ and $\lambda_{\max}(Q)$ is the largest eigenvalue of $Q$.
\end{corollary} When $Q$ is positive definite, applying Theorem \ref{mainresult0} to the quadratic function $f(x)=\frac{1}{2}x^TQx+ c^Tx$, we can get $$f(x_{i+1})-f_*\leq \left(\frac{\lambda_{\max}(Q)-\lambda_{\min}(Q)}{ \lambda_{\max}(Q)+\lambda_{\min}(Q)}\right)^2(f(x_{i})-f_*), ~~i=0, 1, \cdots.$$ But from Corollary \ref{corr}, we can only obtain the worse linear convergence rate $\left(1- \frac{\lambda_{\min}(Q)}{ \lambda_{\max}(Q)}\right)^2$. Therefore, we wonder whether one can improve the rate in Corollary \ref{corr} from $\left(1- \frac{\lambda^{++}_{\min}(Q)}{ \lambda_{\max}(Q)}\right)^2$ to $$\left( \frac{\lambda_{\max}(Q)-\lambda^{++}_{\min}(Q)}{ \lambda_{\max}(Q)+\lambda^{++}_{\min}(Q)}\right)^2.$$ \section{Proof} \begin{proof}[Proof of Theorem \ref{mainresult}] We divide the proof into three steps. \textbf{Step 1.} Denote $\lambda=\sqrt{\lambda_{\max}(AA^T)}$ and let $\widetilde{h}_\lambda(y)=h(\lambda y)$. Since $\nabla \widetilde{h}_\lambda(y)=\lambda\cdot \nabla h(\lambda y)$, from the definition of $h\in {\mathcal{F}}_{\mu,L}(\mathbb{R}^m)$, we have that \begin{align} & \|\nabla \widetilde{h}_\lambda(y)-\nabla \widetilde{h}_\lambda(z)\|\leq \lambda^2 L\|y-z\|, \quad\forall y, z\in\mathbb{R}^m; \\ &\langle \nabla \widetilde{h}_\lambda(y)-\nabla \widetilde{h}_\lambda(z), y-z\rangle \geq \lambda^2 \mu \|y-z\|^2, \quad\forall y, z\in\mathbb{R}^m. \end{align} Denote $\widetilde{\mu}=\lambda^2 \mu$, $\widetilde{L}=\lambda^2 L$, and $\widetilde{A}=\lambda^{-1}A$. Then we can conclude that $\widetilde{h}_\lambda(y)\in {\mathcal{F}}_{\widetilde{\mu},\widetilde{L}}(\mathbb{R}^m)$ and $$f(x)=h(Ax)=\widetilde{h}_\lambda(\widetilde{A}x).$$ \textbf{Step 2.} The iterates, generated by the gradient method with exact line search, satisfy the following two conditions for $i=0, 1, \cdots,$ \begin{align} & x_{i+1}-x_i+\gamma \nabla f(x_i)= 0,~~ \textrm{for~some}~~ \gamma >0; \\ & \nabla f(x_{i+1})^T(x_{i+1}-x_i)= 0,\label{cond1} \end{align} where the first condition follows from the gradient descent step, and the second condition is an alternative expression of the exact line search step. Since $\gamma>0$, it holds that successive gradients are orthogonal, i.e., \begin{equation}\label{cond2} \nabla f(x_{i+1})^T\nabla f(x_i)= 0, ~i=0, 1, \cdots. \end{equation} Note that $\nabla f(x)=\widetilde{A}^T\nabla \widetilde{h}_\lambda(\widetilde{A}x)$. Denote $y_i=\widetilde{A}x_i, i=0, 1, \cdots$. Then, in light of the conditions \eqref{cond1} and \eqref{cond2}, we can get that \begin{align} & \nabla \widetilde{h}_\lambda(y_{i+1})^T(y_{i+1}-y_i)= 0, ~i=0, 1, \cdots; \label{cond3} \\ & \nabla \widetilde{h}_\lambda(y_{i+1})^T\widetilde{A}\widetilde{A}^T\nabla \widetilde{h}_\lambda(y_i)= 0, ~i=0, 1, \cdots. \label{cond4} \end{align} \textbf{Step 3.} Following the arguments in \cite{Klerk2016on}, we consider only the first iterate, given by $x_0$ and $x_1$, as well as the minimizer $y_*$ of $\widetilde{h}_\lambda(y)\in {\mathcal{F}}_{\widetilde{\mu},\widetilde{L}}(\mathbb{R}^m)$. Set $h_i=\widetilde{h}_\lambda(y_i)$ and $g_i=\nabla \widetilde{h}_\lambda(y_i)$ for $i\in\{*, 0, 1\}$. Let $\epsilon =1-\kappa(A)$.
Then, the following five inequalities are satisfied: \begin{enumerate} \item[1:]~~ $h_0\geq h_1+g^T_1(y_0-y_1)+\frac{1}{2(1-\widetilde{\mu}/\widetilde{L})}\left(\frac{1}{\widetilde{L}}\|g_0-g_1\|^2+\widetilde{\mu} \|y_0-y_1\|^2-2\frac{\widetilde{\mu}}{\widetilde{L}}(g_1-g_0)^T(y_1-y_0)\right)$ \item[2:]~~ $h_*\geq h_0+g^T_0(y_*-y_0)+\frac{1}{2(1-\widetilde{\mu}/\widetilde{L})}\left(\frac{1}{\widetilde{L}}\|g_*-g_0\|^2+\widetilde{\mu} \|y_*-y_0\|^2-2\frac{\widetilde{\mu}}{\widetilde{L}}(g_0-g_*)^T(y_0-y_*)\right)$ \item[3:]~~ $h_*\geq h_1+g^T_1(y_*-y_1)+\frac{1}{2(1-\widetilde{\mu}/\widetilde{L})}\left(\frac{1}{\widetilde{L}}\|g_*-g_1\|^2+\widetilde{\mu} \|y_*-y_1\|^2-2\frac{\widetilde{\mu}}{\widetilde{L}}(g_1-g_*)^T(y_1-y_*)\right)$ \item[4:]~~ $0\geq g^T_1(y_1-y_0)$ \item[5:]~~ $0\geq g^T_0g_1-\epsilon\|g_0\|\|g_1\|,$ \end{enumerate} where the first three inequalities are the ${\mathcal{F}}_{\mu,L}$-interpolability conditions (see Theorem 4 in \cite{Taylor2016smooth}), and the fourth inequality is a relaxation of \eqref{cond3}. It remains to show the fifth inequality. Indeed, in terms of \eqref{cond4}, we have $g^T_0\widetilde{A}\widetilde{A}^Tg_1=0$ and hence by the Cauchy-Schwarz inequality we can derive that $$g^T_0g_1=g^T_0(I-\widetilde{A}\widetilde{A}^T)g_1\leq \|I-\widetilde{A}\widetilde{A}^T\|\cdot\|g_0\|\|g_1\|,$$ where $I$ is the identity matrix. Since $\widetilde{A}=\lambda^{-1}A=\frac{A}{\sqrt{\lambda_{\max}(AA^T)}}$, it holds that $$\|\widetilde{A}\widetilde{A}^T\|=\frac{\|AA^T\|}{\lambda_{\max}(AA^T)}\leq 1$$ and hence $$\|I-\widetilde{A}\widetilde{A}^T\|=1-\lambda_{\min}(\widetilde{A}\widetilde{A}^T) =1-\frac{\lambda_{\min}(AA^T)}{\lambda_{\max}(AA^T)}=1-\kappa(A).$$ Therefore, $$g^T_0g_1 \leq (1-\kappa(A))\|g_0\|\|g_1\|=\epsilon \|g_0\|\|g_1\|.$$ Since $A$ has full row-rank, it is not difficult to see that $0\leq \epsilon <1$.
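The norm computation above, $\|I-\widetilde{A}\widetilde{A}^T\|=1-\kappa(A)$ for $\widetilde{A}=A/\sqrt{\lambda_{\max}(AA^T)}$, is easy to check numerically. A small pure-Python sketch for an arbitrary full row-rank $2\times 3$ matrix (illustrative values only):

```python
import math

A = [[1.0, 2.0, 0.0],
     [0.0, 1.0, 3.0]]            # an arbitrary full row-rank 2x3 matrix

# S = A A^T, a symmetric 2x2 matrix
S = [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(2)] for i in range(2)]

def sym2_eigs(B):
    """Eigenvalues (min, max) of a symmetric 2x2 matrix via the quadratic formula."""
    tr = B[0][0] + B[1][1]
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    d = math.sqrt(tr * tr - 4.0 * det)
    return (tr - d) / 2.0, (tr + d) / 2.0

lam_min, lam_max = sym2_eigs(S)
kappa_A = lam_min / lam_max

# Atilde Atilde^T = S / lam_max, so I - Atilde Atilde^T is symmetric PSD
M = [[(1.0 if i == j else 0.0) - S[i][j] / lam_max for j in range(2)] for i in range(2)]
norm_M = sym2_eigs(M)[1]          # spectral norm = largest eigenvalue here

assert abs(norm_M - (1.0 - kappa_A)) < 1e-12
```

The eigenvalues of $I-\widetilde{A}\widetilde{A}^T$ are $1-\lambda_i(AA^T)/\lambda_{\max}(AA^T)$, so its spectral norm is exactly $1-\kappa(A)$, which is the $\epsilon$ of the fifth inequality.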
Now, we can repeat the arguments in the proof of Theorem 5.1 in \cite{Klerk2016on} and get \begin{equation} h_1-h_*\leq \rho_\epsilon^2(h_0-h_*), \end{equation} where $\rho_\epsilon=\frac{1-\kappa_\epsilon}{1+\kappa_\epsilon}$ and $\kappa_\epsilon=\frac{\widetilde{\mu}(1-\epsilon)}{\widetilde{L}(1+\epsilon)}$. After a simple calculation, we obtain $$\rho_\epsilon=\frac{2-\kappa(A)-\kappa(A)\kappa_h}{2-\kappa(A)+\kappa(A)\kappa_h}.$$ Finally, noting that $f_*=h_*$ and that for $i\in\{0,1\}$ it holds that $$f(x_i)=h(Ax_i)=\widetilde{h}_\lambda(\widetilde{A}x_i)=\widetilde{h}_\lambda(y_i)=h_i,$$ we get $$f(x_1)-f_*\leq \left(\frac{2-\kappa(A)-\kappa(A)\kappa_h}{2-\kappa(A)+\kappa(A)\kappa_h}\right)^2(f(x_0)-f_*),$$ from which the conclusion follows. This completes the proof. \end{proof} \begin{proof}[Proof of Corollary \ref{corr}] Since $\nabla f(x)=Qx+c$, by the assumption we have that $-Qx_*=c$. Let $m=\textrm{rank}(Q)$ and let $Q=U\Sigma U^T$ be the reduced eigenvalue decomposition, where $U\in\mathbb{R}^{n\times m}$ has orthonormal columns, and $\Sigma\in\mathbb{R}^{m\times m}$ is a diagonal matrix with the nonzero eigenvalues of $Q$ as its diagonal entries. Denote $B=U\Sigma^{\frac{1}{2}}$ and $u=-\Sigma^{\frac{1}{2}}U^Tx_*$. Then $$f(x)=\frac{1}{2}x^TBB^Tx+(B^Tx)^Tu=\frac{1}{2}\|B^Tx+u\|^2-\frac{1}{2}\|u\|^2.$$ Let $h(y)=\frac{1}{2}\|y+u\|^2-\frac{1}{2}\|u\|^2$. Then we have $f(x)=h(B^Tx)$ with $h(y)\in {\mathcal{F}}_{1,1}(\mathbb{R}^m)$ and $B^T$ having full row-rank. Note that $\kappa_h=1$ and $$\kappa(B^T)=\frac{\lambda_{\min}(B^TB)}{\lambda_{\max}(B^TB)}=\frac{\lambda^{++}_{\min}(Q)}{ \lambda_{\max}(Q)}.$$ Therefore, the desired result follows from Theorem \ref{mainresult}. This completes the proof. \end{proof} \end{document}
\begin{document} \pagestyle{plain} \pagenumbering{arabic} \def\thesection{\Roman{section}.} \def\thesubsection{\Roman{section}-\Alph{subsection}.} \def\theequation{\Roman{section}.\arabic{equation}} \def\be{\begin{equation}} \def\ee{\end{equation}} \def\dint{\displaystyle\int} \def\rf#1{(\ref{#1})} \def\e#1{\mbox{e}^{#1}} \def\sgn{\mathop{\rm sgn}\nolimits} \font\Bbb=msbm10 \def\R{\mbox{\Bbb R}} \def\C{\mbox{\Bbb C}} \def\P{{\cal P}} \def\Ph{\widehat{{\cal P}}} \def\Pb{\overline{{\cal P}}} \def\chih{\widehat{\chi}} \def\hb{\overline{h}} \def\hbt{\widetilde{\overline{h}}} \def\Kb{\overline{\!K}} \def\Kbt{\widetilde{\overline{\!K}}} \def\Omegab{\overline{\Omega}} \def\qp{\overrightarrow{q},\overrightarrow{p}} \def\rhoh{\widehat{\rho}} \def\build#1#2{\mathrel{\mathop{\kern 0pt#1}\limits_{#2}}} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \title{\vspace*{-2cm} TIFR/TH/02-15 \\ PM/02-15 \\ \LARGE\bf Bell inequalities in four dimensional phase space and the three marginal theorem\thanks{Work supported by the Indo-French Centre for the Promotion of Advanced Research, Project Nb 1501-02.}.} \author{{\Large G. Auberson}\thanks{e-mail: [email protected]} \\ \sl Laboratoire de Physique Math\'ematique, UMR 5825-CNRS, \\ \sl Universit\'e Montpellier II, \\ \sl F-34095 Montpellier, Cedex 05, FRANCE. \and {\Large G. Mahoux}\thanks{e-mail: [email protected]} \\ \sl Service de Physique Th\'eorique, \\ \sl Centre d'\'Etudes Nucl\'eaires de Saclay, \\ \sl F-91191 Gif-sur-Yvette Cedex, FRANCE. \and {\Large S.M.
Roy}\thanks{e-mail: [email protected]} and {\Large Virendra Singh}\thanks{e-mail: [email protected]} \\ \sl Department of Theoretical Physics, \\ \sl Tata Institute of Fundamental Research, \\ \sl Homi Bhabha Road, Mumbai 400 005, INDIA. } \date{May 28, 2002.} \maketitle \renewcommand{\thefootnote}{\arabic{footnote}} \begin{abstract} We address the classical and quantum marginal problems, namely the question of simultaneous realizability through a common probability density in phase space of a given set of compatible probability distributions. We consider only distributions authorized by quantum mechanics, i.e. those corresponding to complete commuting sets of observables. For four-dimensional phase space with position variables $\vec q$ and momentum variables $\vec p$, we establish the two following points: i) given {\sl four} compatible probabilities for ($q_1,q_2$), ($q_1,p_2$), ($p_1,q_2$) and ($p_1,p_2$), there does not always exist a positive phase space density $\rho(\vec q, \vec p)$ reproducing them as marginals; this settles a long-standing conjecture; it is achieved by first deriving Bell-like inequalities in phase space which have their own theoretical and experimental interest. ii) given instead at most {\sl three} compatible probabilities, there always exists an associated phase space density $\rho(\vec q, \vec p)$; the solution is not unique and its general form is worked out. These two points constitute our ``three marginal theorem''. \noindent PACS : 03.65.Ta, 03.67.-a \end{abstract} \section{Introduction} \setcounter{equation}{0} In classical mechanics position and momentum can be simultaneously specified. Hence the phase space density has a well defined meaning in classical statistical mechanics. In quantum theory the probability density for observing eigenvalues of a complete commuting set (CCS) of observables is specific to the experimental context for measuring that CCS.
Joint probabilities for different CCS which contain mutually noncommuting operators are not defined. For example for a 2-dimensional configuration space, with $\vec q, \vec p$ denoting position and momentum, probability densities of any one of the four CCS $(q_1,q_2)$, $(q_1,p_2)$, $(p_1,q_2)$ or $(p_1,p_2)$ are defined, but not their joint probabilities. The question one may raise is : can one define such joint probabilities, e.g. a phase space probability density $\rho(\vec q,\vec p)$ such that all its marginals\footnote{ In agreement with common terminology, by ``marginal'' of a distribution over several variables, we denote integrals of the distribution over a subset of its variables.} coincide with the quantum mechanical probabilities for the different individual CCS? This general question was first raised by Martin and Roy \cite{MartinRoy1}. The Martin-Roy contextuality theorem demonstrates the impossibility of realizing the quantum probability densities of all possible choices of the CCS of observables as marginals of one positive definite phase space density. For example, consider a two dimensional configuration space. Let coordinates $q_{1\alpha}$, $q_{2\alpha}$ be obtained from $q_1,q_2$ by a rotation of arbitrary angle $\alpha$, and momenta $p_{1\alpha}$, $p_{2\alpha}$ be related similarly to $p_1,p_2$: \begin{eqnarray} \left(\matrix{q_{1\alpha} \cr q_{2\alpha}}\right) = V\left(\matrix{q_1 \cr q_2}\right), \ \left(\matrix{p_{1\alpha} \cr p_{2\alpha}}\right) = V\left(\matrix{p_1 \cr p_2}\right), \label{I.1} \end{eqnarray} where \begin{eqnarray} V = \left(\matrix{\cos\alpha & \sin\alpha \cr -\sin\alpha & \cos\alpha}\right).
\label{I.2} \end{eqnarray} Does there exist for every quantum state (with density operator $\widehat{\rho}$) a positive definite phase space density $\rho (\vec q, \vec p)$ such that its marginals agree with the corresponding quantum probabilities, i.e., \begin{eqnarray} \int dp_{1\alpha} dq_{2\alpha}\ \rho (\vec q,\vec p) = \langle q_{1\alpha}, p_{2\alpha} |\,\widehat{\rho}\,| q_{1\alpha},p_{2\alpha}\rangle \label{I.3} \end{eqnarray} for all $\alpha$ ranging from 0 to $2\pi$? They answered this question in the negative by finding a state $\widehat{\rho}$ for which eqs.~\rf{I.3} for all $\alpha$ are inconsistent with the positivity of $\rho$. Since different $\alpha$ correspond to different experimental contexts, the Martin-Roy theorem is a new Gleason-Kochen-Specker type contextuality theorem \cite{GKS2}. The positivity of the phase space density $\rho(\vec q,\vec p)$ is absolutely crucial for this theorem; otherwise the Wigner distribution function \cite{Wigner3} would be a solution of \rf{I.3}. Equations \rf{I.3} constitute conditions on an infinite set of marginals of $\rho(\vec q,\vec p)$ (corresponding to the continuously infinite choices of $\alpha$) to agree with the corresponding quantum probability densities. Their inconsistency still leaves open the question of consistency of a finite number of such marginal conditions. Indeed, the consistency of two marginal conditions where the marginals involve only nonintersecting sets of variables has been known for some time. Cohen and Zaparovanny \cite{CZ4} constructed the most general positive $\rho(\vec q,\vec p)$ obeying \[ \int d\vec p\ \rho(\vec q,\vec p) = \langle \vec q|\widehat{\rho}|\vec q\rangle, \ \int d\vec q\ \rho(\vec q,\vec p) = \langle \vec p|\widehat{\rho}|\vec p\rangle. \] Their solutions generalize the obvious simple uncorrelated solution for pure states $\psi$, \[ \rho(\vec q,\vec p) = |\psi(\vec q)|^2\ |\tilde\psi(\vec p)|^2, \] where the tilde denotes the Fourier transform.
Based on generalized phase space densities exhibiting position-momentum correlations, Roy and Singh \cite{RS5} constructed a causal quantum mechanics reproducing the quantum position and momentum probability densities, thus improving on De Broglie-Bohm mechanics \cite{DBB6} which only reproduced the quantum position probability densities. Later, going much further than the nonintersecting marginals of Cohen et al. \cite{CZ4}, Roy and Singh \cite{RS7} constructed a causal quantum mechanics based on a positive $\rho(\vec q,\vec p)$ whose marginals reproduce the quantum probability densities of a chain of $N+1$ different CCS, e.g. \[ (Q_1,Q_2,\cdots,Q_N), \ (P_1,Q_2,\cdots,Q_N), \ (P_1,P_2,Q_3,\cdots,Q_N), \cdots, (P_1,P_2,\cdots,P_N). \] Here $N$ is the dimension of the configuration space, and each CCS in the chain is obtained from the preceding one by replacing one of the position operators $Q_i$ by the conjugate momentum operator $P_i$. Roy and Singh proposed the following definition: a {\bf Maximally Realistic Causal Quantum Mechanics} is a causal mechanics which simultaneously reproduces the quantum probability densities of the maximum number of different (mutually noncommuting) CCS of observables as marginals of the same positive definite phase space density. They also conjectured that for an $N$ dimensional configuration space this maximum number is $N+1$. A proof of this long-standing conjecture is important for quantum mechanics, where it quantifies the extent of simultaneous realizability of noncommuting CCS. In this paper, we restrict ourselves to the case of $N=2$ degrees of freedom. The general case ($N>2$) will be dealt with in a forthcoming paper. In Section II below, we first state the {\bf classical and quantum marginal problems} and second, show that, given four classical compatible two-variable probability distributions, there does not always exist a positive phase space distribution reproducing them as marginals.
In Section III, we develop a new tool, ``the phase space Bell inequalities'', which are the phase space analogues of the standard Bell inequalities \cite{JSB8} for a system of two spin-half particles. We use them in Section IV to prove the conjecture for four-dimensional phase space ($N=2$), namely the impossibility of simultaneous realization of quantum probabilities of more than three CCS as marginals. In Section V, we explicitly construct the most general phase space distribution which reproduces probabilities of three CCS as marginals. These results, the {\bf three marginal theorem}, are relevant for the construction of maximally realistic quantum mechanics. As our results are essentially new theorems for multidimensional Fourier transforms, they are also expected to be useful for classical signal and image processing~\cite{Cohen}. The theorems of the present paper and their generalizations to arbitrary $N$ \cite{AMRS2N} considerably advance previous results in the field, which have only dealt with nonintersecting sets of marginals (e.g. time and frequency). A summary of the results of this paper without detailed proofs is being reported separately \cite{AMRSletter}. \section{Four marginal problem} \setcounter{equation}{0} Let us consider a physical system with 2-dimensional configuration space. Let $(q_1,p_1)$ and $(q_2,p_2)$ be a set of canonical variables in the corresponding phase space.
We look for a (normalized) probability distribution $\rho(q_1,q_2,p_1,p_2)$ such that \begin{equation} \rho(q_1,q_2,p_1,p_2) \geq 0\,, \end{equation} \begin{eqnarray} \int dp_1dp_2\,\rho(q_1,q_2,p_1,p_2) & = & R(q_1,q_2)\,, \label{II.2.R} \\ \int dp_1dq_2\,\rho(q_1,q_2,p_1,p_2) & = & S(q_1,p_2)\,, \label{II.2.S} \\ \int dq_1dp_2\,\rho(q_1,q_2,p_1,p_2) & = & T(p_1,q_2)\,, \label{II.2.T} \\ \int dq_1dq_2\,\rho(q_1,q_2,p_1,p_2) & = & U(p_1,p_2)\,, \label{II.2.U} \end{eqnarray} where the four marginals $R(q_1,q_2)$, $S(q_1,p_2)$, $T(p_1,q_2)$, and $U(p_1,p_2)$, the respective joint probabilities, are given. For consistency we must have \begin{equation} R,S,T,U\geq0\,, \label{II.3} \end{equation} and \begin{equation}\begin{array}{rcl} \displaystyle\int dq_2\,R(q_1,q_2)&=&\displaystyle\int dp_2\,S(q_1,p_2)\,,\\ \displaystyle\int dq_1\,R(q_1,q_2)&=&\displaystyle\int dp_1\,T(p_1,q_2)\,,\\ \displaystyle\int dq_1\,S(q_1,p_2)&=&\displaystyle\int dp_1\,U(p_1,p_2)\,,\\ \displaystyle\int dq_2\,T(p_1,q_2)&=&\displaystyle\int dp_2\,U(p_1,p_2)\,. \end{array} \label{II.4} \end{equation} We shall refer to the problem {\sl Given four distributions $R$, $S$, $T$ and $U$, satisfying the consistency conditions, does there always exist a positive $\rho(q_1,q_2,p_1,p_2)$ with these distributions as marginals?} as the {\sl Classical four marginal problem}. When the system is quantum mechanical and is described by a state vector $|\Psi\rangle$, each of the four marginals involves a pair of compatible observables and we have \begin{equation}\begin{array}{rcl} R(q_1,q_2) & = & |\langle q_1,q_2|\Psi\rangle|^2\,, \\ S(q_1,p_2) & = & |\langle q_1,p_2|\Psi\rangle|^2\,, \\ T(p_1,q_2) & = & |\langle p_1,q_2|\Psi\rangle|^2\,, \\ U(p_1,p_2) & = & |\langle p_1,p_2|\Psi\rangle|^2\,. \end{array} \label{II.5} \end{equation} In this case, the above consistency conditions are automatically satisfied.
We then refer to the problem as the {\sl Quantum four marginal problem}. A positive answer to it for all states $\widehat{\rho}$ would mean that a realistic interpretation of the quantum results is possible (to the extent that only measurements connected to the four marginals are involved). We shall see that the answer to both problems is negative. Let us first show that the classical four marginal problem does not always admit a solution. To this end, consider the following set of marginals \begin{eqnarray} R(q_1,q_2) & = & {1\over2}\left[\delta(q_1-a_1)\delta(q_2-a_2)+ \delta(q_1-a'_1)\delta(q_2-a'_2)\right]\,, \label{II.6.R} \\ S(q_1,p_2) & = & {1\over2}\left[\delta(q_1-a_1)\delta(p_2-b_2)+ \delta(q_1-a'_1)\delta(p_2-b'_2)\right]\,, \label{II.6.S} \\ T(p_1,q_2) & = & {1\over2}\left[\delta(p_1-b_1)\delta(q_2-a_2)+ \delta(p_1-b'_1)\delta(q_2-a'_2)\right]\,, \label{II.6.T} \\ U(p_1,p_2) & = & {1\over2}\left[\delta(p_1-b_1)\delta(p_2-b'_2)+ \delta(p_1-b'_1)\delta(p_2-b_2)\right]\,, \label{II.6.U} \end{eqnarray} which obviously satisfy the consistency conditions \rf{II.3} and \rf{II.4}. They possess two essential features. First, their non-factorized form. Second, in view of the expressions of $R$, $S$ and $T$, the positions of the factors $\delta(p_2-b_2)$ and $\delta(p_2-b'_2)$ in the expression of $U$ are not the ``natural ones''. Eq.\rf{II.6.R} means that the support of the distribution $R$ in the plane $(q_1,q_2)$ consists of the two points $(a_1,a_2)$ and $(a'_1,a'_2)$. As a consequence, any positive $\rho$ satisfying \rf{II.2.R} should have a support whose projection on the plane $(q_1,q_2)$ also consists of those two points. That is \begin{equation} \rho = \delta(q_1-a_1)\delta(q_2-a_2)\alpha(p_1,p_2)+ \delta(q_1-a'_1)\delta(q_2-a'_2)\alpha'(p_1,p_2)\,, \label{II.7.R} \end{equation} where $\alpha$ and $\alpha'$ are some positive distributions.
Similarly, from eqs.\rf{II.2.S} to \rf{II.2.U} \begin{eqnarray} \rho & = & \delta(q_1-a_1)\delta(p_2-b_2)\beta(p_1,q_2)+ \delta(q_1-a'_1)\delta(p_2-b'_2)\beta'(p_1,q_2)\,, \label{II.7.S} \\ & = & \delta(p_1-b_1)\delta(q_2-a_2)\gamma(q_1,p_2)+ \delta(p_1-b'_1)\delta(q_2-a'_2)\gamma'(q_1,p_2)\,, \label{II.7.T} \\ & = & \delta(p_1-b_1)\delta(p_2-b'_2)\eta(q_1,q_2)+ \delta(p_1-b'_1)\delta(p_2-b_2)\eta'(q_1,q_2)\,. \label{II.7.U} \end{eqnarray} According to eqs.\rf{II.7.R} to \rf{II.7.T} \begin{equation}\begin{array}{rcl} \rho & = & v\,\delta(q_1-a_1)\delta(q_2-a_2)\delta(p_1-b_1)\delta(p_2-b_2)\\ & & +v'\delta(q_1-a'_1)\delta(q_2-a'_2)\delta(p_1-b'_1)\delta(p_2-b'_2)\,, \end{array} \label{II.8} \end{equation} with $v\geq0$, $v'\geq0$, ($v+v'=1$). Clearly, eqs.\rf{II.7.U} and \rf{II.8} are incompatible, which establishes the non-existence of $\rho$, and settles the classical four marginal problem. This, however, does not settle the quantum problem. Actually, the above example obviously cannot be strictly realized through a wave function in accordance with eqs.\rf{II.5}. More than that, this example is so ``twisted'' that, even after smoothing out the $\delta$ measures in eqs.\rf{II.6.R} to \rf{II.6.U}, approaching it close enough through a wave function appears very difficult (if not impossible). Instead, we develop a new mathematical tool. \section{Phase space Bell inequalities} \setcounter{equation}{0} Consider any choice of functions $r(q_1,q_2)$, $s(q_1,p_2)$, $t(p_1,q_2)$, and $u(p_1,p_2)$, obeying \begin{equation} A \leq r(q_1,q_2)+s(q_1,p_2)+t(p_1,q_2)+u(p_1,p_2) \leq B \qquad (\forall q_1,q_2,p_1,p_2)\,. \label{II.9} \end{equation} Multiply by $\rho(\vec q, \vec p)$, integrate over phase space and use positivity and normalization of $\rho(\vec q, \vec p)$.
We deduce that the (classical as well as quantum) four marginal problem cannot have a solution unless \begin{equation}\begin{array}{rcccl} A & \leq & \displaystyle\int dq_1dq_2\,r(q_1,q_2)R(q_1,q_2) +\displaystyle\int dq_1dp_2\,s(q_1,p_2)S(q_1,p_2) & & \\ & & +\displaystyle\int dp_1dq_2\,t(p_1,q_2)T(p_1,q_2) +\displaystyle\int dp_1dp_2\,u(p_1,p_2)U(p_1,p_2) & \leq & B\,. \end{array} \label{II.10} \end{equation} Here $R$, $S$, $T$ and $U$ are defined by eqs.\rf{II.5} in the quantum case. It turns out that a particularly interesting choice is \begin{equation}\begin{array}{rcl} r(q_1,q_2) & = & \mathop{\rm sgn}\nolimits F_1(q_1)\,\mathop{\rm sgn}\nolimits F_2(q_2)\,, \\ s(q_1,p_2) & = & \mathop{\rm sgn}\nolimits F_1(q_1)\,\mathop{\rm sgn}\nolimits G_2(p_2)\,, \\ t(p_1,q_2) & = & \mathop{\rm sgn}\nolimits G_1(p_1)\,\mathop{\rm sgn}\nolimits F_2(q_2)\,, \\ u(p_1,p_2) & = & -\mathop{\rm sgn}\nolimits G_1(p_1)\,\mathop{\rm sgn}\nolimits G_2(p_2)\,, \end{array} \label{II.11} \end{equation} with $A=-2$, $B=+2$ and with $F_1$, $F_2$, $G_1$ and $G_2$ arbitrary non vanishing functions\footnote{Note that, with this choice, the sum $r+s+t+u$ assumes only its two extremal values $A=-2$ and $B=+2$, which makes it in a sense optimal.\label{footnote 1}}. Then the inequalities \rf{II.10} become a phase space analogue of the Bell inequalities for spin variables. The necessary conditions \rf{II.10} provide us with an alternative proof that the classical problem does not always admit a solution. Indeed, it is readily seen that they are violated for the marginals \rf{II.6.R} to \rf{II.6.U} and functions $F$'s and $G$'s such that \[\begin{array}{c} F_1(a_1),F_2(a_2),G_1(b_1),G_2(b_2) > 0\,, \\ F_1(a'_1),F_2(a'_2),G_1(b'_1),G_2(b'_2) < 0\,. \end{array}\] We shall see in the next section that the necessary conditions \rf{II.10} can be violated also in the quantum case. 
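Both the $\pm2$ identity in the footnote and the violation of \rf{II.10} by the marginals \rf{II.6.R} to \rf{II.6.U} can be checked by brute force in a two-point discretization (a sketch, not part of the paper: the label $+1$ stands for the unprimed values $a_i$, $b_i$, the label $-1$ for the primed ones, so that $\mathop{\rm sgn}\nolimits F_i$ and $\mathop{\rm sgn}\nolimits G_i$ are just these labels):

```python
from itertools import product

# CHSH-type identity: with s_i = sgn F_i(q_i) and g_i = sgn G_i(p_i), the sum
# r + s + t + u of eq. (II.11) only takes the extreme values -2 and +2.
values = {s1*s2 + s1*g2 + g1*s2 - g1*g2
          for s1, s2, g1, g2 in product((-1, 1), repeat=4)}
print(values)                               # {-2, 2}

# Correlators of r, s, t, u against the delta-function marginals (II.6):
# each marginal puts weight 1/2 on two sign configurations.
R_corr = 0.5 * (1*1) + 0.5 * ((-1)*(-1))    # pairs (a1,a2), (a'1,a'2)
S_corr = 0.5 * (1*1) + 0.5 * ((-1)*(-1))    # pairs (a1,b2), (a'1,b'2)
T_corr = 0.5 * (1*1) + 0.5 * ((-1)*(-1))    # pairs (b1,a2), (b'1,a'2)
U_corr = 0.5 * (1*(-1)) + 0.5 * ((-1)*1)    # pairs (b1,b'2), (b'1,b2)
total = R_corr + S_corr + T_corr - U_corr   # middle member of (II.10)
print(total)                                # 4, above the bound B = 2
```

Since the total equals $4>B=2$, no positive normalized $\rho$ can reproduce these four marginals, in agreement with the direct argument of Section II.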
There, the analogy between our correlation inequalities \rf{II.10} (with the choice \rf{II.11}) and Bell inequalities will become more apparent, especially as regards their implications. \section{Solving the four marginal quantum problem} \setcounter{equation}{0} This section is divided into four parts. In the first one, we prove the existence of wave functions which violate the correlation inequalities \rf{II.10}. Strictly speaking, this already settles the problem. However, the explicit construction of such wave functions, which we present in subsections B and C, is worthwhile in that it exhibits the physical implications of our inequalities. In subsection D, we elaborate on the formal analogy with Bell inequalities. \subsection{Non constructive proof} One first notices that $\chi_1(q_1) \equiv {1\over2} \left [1+\mathop{\rm sgn}\nolimits F_1(q_1) \right]$ is the characteristic function of some set $S_1\subset \mbox{\Bbb R}$, and similarly for $F_2$, $G_1$ and $G_2$, so that eqs.\rf{II.11} read \begin{equation}\begin{array}{rcl} r(q_1,q_2) & = & (2\chi_1-1)(2\chi_2-1)\,, \\ s(q_1,p_2) & = & (2\chi_1-1)(2\chi'_2-1)\,, \\ t(p_1,q_2) & = & (2\chi'_1-1)(2\chi_2-1)\,, \\ u(p_1,p_2) & = & -(2\chi'_1-1)(2\chi'_2-1)\,, \end{array} \label{III.1} \end{equation} where $\chi_i$ stands for $\chi_i(q_i)$ and $\chi'_i$ for $\chi'_i(p_i)$, ($i=1,2$). Inequalities \rf{II.9} then become \begin{equation} 0 \leq {\cal P} \leq 1 \,, \end{equation} and in fact ${\cal P}(1-{\cal P})=0$ (see footnote \ref{footnote 1}), where ${\cal P}(q_1,q_2,p_1,p_2)$ is given by \begin{equation} {\cal P} = \chi_1+\chi_2+\chi'_1\chi'_2-\chi_1\chi_2-\chi_1\chi'_2-\chi'_1\chi_2 \,.
\end{equation} Let us define a corresponding quantum operator $\widehat{{\cal P}}$ by \begin{equation} \widehat{{\cal P}} = \widehat{\chi}_1+\widehat{\chi}_2+\widehat{\chi}'_1\widehat{\chi}'_2-\widehat{\chi}_1\widehat{\chi}_2-\widehat{\chi}_1\widehat{\chi}'_2-\widehat{\chi}'_1\widehat{\chi}_2 \,, \label{III.4} \end{equation} where \begin{equation}\begin{array}{rcl} \widehat{\chi}_1 & = & \displaystyle\int_{S_1}dq_1\,|q_1\rangle\langle q_1|\,\otimes \,{\bf 1}_2\,,\\ \widehat{\chi}_2 & = & {\bf 1}_1\,\otimes\,\displaystyle\int_{S_2}dq_2\,|q_2\rangle \langle q_2|\,,\\ \widehat{\chi}'_1 & = & \displaystyle\int_{S'_1}dp_1\,|p_1\rangle\langle p_1|\,\otimes \,{\bf 1}_2 \,,\\ \widehat{\chi}'_2 & = & {\bf 1}_1\,\otimes\,\displaystyle\int_{S'_2}dp_2\,|p_2\rangle \langle p_2|\,. \end{array} \label{III.5} \end{equation} The $\widehat{\chi}$'s are orthogonal projectors\footnote{In eqs.\rf{III.5}, $S_1$, $S_2$, $S'_1$, $S'_2$ are the supports of $\widehat{\chi}_1$, $\widehat{\chi}_2$, $\widehat{\chi}'_1$, $\widehat{\chi}'_2$ respectively. Also, $\int_{S_1}dq_1\,|q_1\rangle\langle q_1|$ is, in standard Dirac notation, the orthogonal projection $\psi(q_1)\rightarrow\chi_1(q_1)\psi(q_1)$, whereas $\int_{S'_1}dp_1\,|p_1\rangle\langle p_1|$ is the orthogonal projection $\tilde{\psi}(p_1)\rightarrow\chi'_1(p_1) \tilde{\psi}(p_1)$, $\tilde{\psi}(p_1)$ being the Fourier transform of $\psi(q_1)$, and so on.} ($\widehat{\chi}^\dagger=\widehat{\chi}$,\ $\widehat{\chi}^2=\widehat{\chi}$) acting on ${\cal H}\equiv L^2(\mbox{\Bbb R},dq_1)\otimes L^2(\mbox{\Bbb R},dq_2)$. Two of them involving different indices commute, so that $\widehat{{\cal P}}$ is a (bounded) self-adjoint operator.
The inequalities \rf{II.10} to be tested in the quantum context then become, for pure states $\widehat{\rho}=|\Psi\rangle\langle\Psi|$, \begin{equation} 0\leq\langle\Psi|\widehat{{\cal P}}|\Psi\rangle\leq1\qquad\forall\ |\Psi\rangle \in{\cal H}\mbox{ with }\langle\Psi|\Psi\rangle=1\,, \label{III.6} \end{equation} or, equivalently, \begin{equation} \widehat{{\cal P}}\geq0 \mbox{\ \ \ and\ \ \ } {\bf 1}-\widehat{{\cal P}}\geq0 \mbox{\ \ \ in the operator sense}. \label{III.7} \end{equation} Because $\widehat{\chi}_j$ fails to commute with $\widehat{\chi}'_j$ ($j=1,2$), $\widehat{{\cal P}}$ is {\sl not} an orthogonal projector (see below), in contrast to the classical equality ${\cal P}^2={\cal P}$. Exploiting this fact leads to the \newtheorem{proposition}{Proposition} \begin{proposition} The operators $\widehat{{\cal P}}$ and (${\bf 1}-\widehat{{\cal P}}$) cannot be both positive. \label{proposition} \end{proposition} As a consequence, there is at least one $|\Psi\rangle\neq0$ such that the inequalities $\langle\Psi|\widehat{{\cal P}}|\Psi\rangle\geq0$ and $\langle\Psi|({\bf1}-\widehat{{\cal P}})|\Psi\rangle\geq0$ cannot be simultaneously true. This just means that one of the two inequalities \rf{III.6} is violated for that $|\Psi\rangle$, which settles the question. \noindent{\sl Proof of proposition 1:} \noindent Assume that $\widehat{{\cal P}}$ and (${\bf 1}-\widehat{{\cal P}}$) are both positive. This would imply \begin{equation} \widehat{{\cal P}}({\bf 1}-\widehat{{\cal P}}) \geq 0 \,, \label{III.9} \end{equation} (remember that the product of two positive {\sl commuting} operators is positive). \noindent Now, a straightforward calculation of $\widehat{{\cal P}}^2$ from eq.\rf{III.4} yields \begin{equation} \widehat{{\cal P}}^2 = \widehat{{\cal P}}-\left[\widehat{\chi}_1,\widehat{\chi}'_1\right] \left[\widehat{\chi}_2,\widehat{\chi}'_2\right]\,, \end{equation} and eq.\rf{III.9} would mean that $\left[\widehat{\chi}_1,\widehat{\chi}'_1\right] \left[\widehat{\chi}_2,\widehat{\chi}'_2\right]$ is a positive operator.
That this is wrong is not surprising. Let us show it. Take a factorized $|\Psi\rangle$, namely $|\Psi\rangle=|\Phi_1\rangle\otimes|\Phi_2\rangle$, so that \[ \langle\Psi|\widehat{{\cal P}}({\bf 1}-\widehat{{\cal P}})|\Psi\rangle = -\langle\Phi_1|i\left[\widehat{\chi}_1,\widehat{\chi}'_1\right]|\Phi_1\rangle \langle\Phi_2|i\left[\widehat{\chi}_2,\widehat{\chi}'_2\right]|\Phi_2\rangle\,. \] It is enough to show that, for a given choice of the characteristic functions $\chi$ and $\chi'$, the real number \begin{equation} R[\Phi] \equiv \langle\Phi|i\left[\widehat{\chi},\widehat{\chi}'\right]|\Phi\rangle \end{equation} can assume both signs when $|\Phi\rangle$ is varied. Let us define \[ |\Phi^+\rangle = \widehat{\chi}\,|\Phi\rangle\,, \qquad |\Phi^-\rangle = ({\bf 1}-\widehat{\chi})|\Phi\rangle\,. \] Using the identity \[ \left[\widehat{\chi},\widehat{\chi}'\right] = \widehat{\chi}\widehat{\chi}'({\bf 1}-\widehat{\chi})-({\bf 1}-\widehat{\chi})\widehat{\chi}'\widehat{\chi} \] gives $R[\Phi]$ the form \[ R[\Phi] = i\langle\Phi^+|\widehat{\chi}'|\Phi^-\rangle -i\langle\Phi^-|\widehat{\chi}'|\Phi^+\rangle\,. \] Obviously, for $|\widetilde{\Phi}\rangle=|\Phi^+\rangle-|\Phi^-\rangle$, one has $R[\widetilde{\Phi}]=-R[\Phi]$. This concludes the proof.
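The sign-flip mechanism can be illustrated on a finite-dimensional toy analogue (an illustration only, not part of the proof: the grid projectors below merely mimic $\widehat{\chi}$ and $\widehat{\chi}'$, with the discrete Fourier transform playing the role of the Fourier transform):

```python
import cmath

n = 4
chi = [1.0 if k == 0 else 0.0 for k in range(n)]   # indicator of S = {0} in "position" space
mask = [0.0, 1.0, 1.0, 0.0]                        # indicator of S' = {1,2} in "momentum" space

def dft(v):
    """Unitary discrete Fourier transform, the toy analogue of psi -> psi~."""
    return [sum(v[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n)) / n**0.5
            for k in range(n)]

def idft(w):
    return [sum(w[k] * cmath.exp(2j * cmath.pi * j * k / n) for k in range(n)) / n**0.5
            for j in range(n)]

def chi_prime(v):
    # chi' = F^dagger diag(mask) F: project onto S' in "momentum" space.
    w = dft(v)
    return idft([mask[k] * w[k] for k in range(n)])

def R(phi):
    """R[phi] = <phi| i [chi, chi'] |phi>, a real number."""
    a = chi_prime([chi[k] * phi[k] for k in range(n)])      # chi' chi |phi>
    b = [chi[k] * w for k, w in enumerate(chi_prime(phi))]  # chi chi' |phi>
    return (1j * sum(phi[k].conjugate() * (b[k] - a[k]) for k in range(n))).real

phi = [2**-0.5, 2**-0.5, 0.0, 0.0]                        # phi = (e0 + e1)/sqrt(2)
phi_tilde = [(2 * chi[k] - 1) * phi[k] for k in range(n)]  # phi+ - phi-
print(R(phi), R(phi_tilde))                                # equal magnitudes, opposite signs
```

As in the proof, $R$ is nonzero for a suitable vector and flips sign under $|\Phi^+\rangle-|\Phi^-\rangle$, so the commutator product cannot be a positive operator.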
\noindent {\sl Remarks:} \noindent 1) When the wave function $|\Psi\rangle$ factorizes, i.e.\ $\Psi(q_1,q_2) = \Phi_1(q_1)\Phi_2(q_2)$, a corresponding probability distribution $\rho$ always exists, namely \begin{equation} \rho(q_1,q_2,p_1,p_2) = |\Phi_1(q_1)|^2\,|\Phi_2(q_2)|^2\, |\tilde{\Phi}_1(p_1)|^2\,|\tilde{\Phi}_2(p_2)|^2 \,, \label{factor} \end{equation} where the $\tilde{\Phi}_i$'s are the Fourier transforms \begin{equation} \tilde{\Phi}_i(p_i) = {1\over\sqrt{2\pi}} \int_{-\infty}^{+\infty}dq_i\, \mbox{e}^{-ip_iq_i}\,\Phi_i(q_i)\,, \qquad (i=1,2). \end{equation} Of course, this implies that eqs.\rf{III.6} are automatically satisfied for such factorized $|\Psi\rangle$'s (which can also be checked from eq.\rf{III.4}). \noindent 2) The fact (used in the proof) that $\langle\Psi|\widehat{{\cal P}}({\bf 1}-\widehat{{\cal P}})|\Psi\rangle<0$ for some factorized $|\Psi\rangle$'s is {\sl not} inconsistent with the inequalities $0\leq\langle\Psi|\widehat{{\cal P}}|\Psi\rangle\leq1$ which are satisfied for those $|\Psi\rangle$'s. \subsection{Construction} We want to find wave functions $|\Psi\rangle$ violating the inequalities \rf{III.6}. According to the first of the above remarks, one has to depart from the class of factorized $|\Psi\rangle$'s. The simplest way to do it is to take just a sum of two such products. Choose first \[ S_1=S_2\equiv S \qquad \mbox{ and } \qquad S'_1=S'_2\equiv S'\,, \] so that \begin{equation} \widehat{{\cal P}} = \widehat{\chi}\otimes{\bf 1}_2+{\bf 1}_1\otimes\widehat{\chi}+\widehat{\chi}'\otimes\widehat{\chi}'-\widehat{\chi}\otimes\widehat{\chi} -\widehat{\chi}\otimes\widehat{\chi}'-\widehat{\chi}'\otimes\widehat{\chi} \,.
\label{III.12} \end{equation} Take next \begin{equation}\left\{\begin{array}{l} |\Psi\rangle={1\over \sqrt{1+|\lambda|^2}} (|\phi\rangle+\lambda|\varphi\rangle) \qquad (\lambda\in\mbox{\Bbb C}) \,, \\ \mbox{with } \quad \left\{ \begin{array}{l} |\phi\rangle=|\phi_1\rangle\otimes|\phi_2\rangle\,, \qquad |\varphi\rangle=|\varphi_1\rangle\otimes|\varphi_2\rangle\,, \\ \langle\phi_1|\phi_1\rangle=\langle\phi_2|\phi_2\rangle= \langle\varphi_1|\varphi_1\rangle=\langle\varphi_2|\varphi_2\rangle=1, \quad \langle\phi_1|\varphi_1\rangle=0\,, \end{array} \right. \end{array} \right. \label{III.13} \end{equation} so that $|\Psi\rangle$ is properly normalized. For the moment, choose also \begin{equation} \phi_1=\phi_2\equiv f\quad \mbox{ and } \quad\varphi_1=\varphi_2\equiv g\,, \label{III.14} \end{equation} with \begin{equation} \langle f|f\rangle=\langle g|g\rangle=1\,, \quad \langle f|g\rangle=0\,. \label{III.15} \end{equation} Then \begin{equation} \langle\Psi|\widehat{{\cal P}}|\Psi\rangle = {1\over{1+|\lambda|^2}} \left[ \langle\phi|\widehat{{\cal P}}|\phi\rangle +(\lambda\langle\phi|\widehat{{\cal P}}|\varphi\rangle+c.c.) +|\lambda|^2\langle\varphi|\widehat{{\cal P}}|\varphi\rangle \right]\,, \label{III.16} \end{equation} with \begin{equation}\begin{array}{rcl} \langle\phi|\widehat{{\cal P}}|\phi\rangle & = & 2\langle f|\widehat{\chi}|f\rangle +\langle f|\widehat{\chi}'|f\rangle^2 -\langle f|\widehat{\chi}|f\rangle^2 -2\langle f|\widehat{\chi}|f\rangle\langle f|\widehat{\chi}'|f\rangle \,, \\ \langle\varphi|\widehat{{\cal P}}|\varphi\rangle & = & 2\langle g|\widehat{\chi}|g\rangle +\langle g|\widehat{\chi}'|g\rangle^2 -\langle g|\widehat{\chi}|g\rangle^2 -2\langle g|\widehat{\chi}|g\rangle\langle g|\widehat{\chi}'|g\rangle \,, \\ \langle\phi|\widehat{{\cal P}}|\varphi\rangle & = & \langle f|\widehat{\chi}'|g\rangle^2 -\langle f|\widehat{\chi}|g\rangle^2 -2\langle f|\widehat{\chi}|g\rangle\langle f|\widehat{\chi}'|g\rangle\,.
\end{array} \label{III.17} \end{equation} We already know that $0\leq\langle\phi|\widehat{{\cal P}}|\phi\rangle\leq1$ and $0\leq\langle\varphi|\widehat{{\cal P}}|\varphi\rangle\leq1$\,. Clearly, in view of \rf{III.16}, our goal will be reached (namely $\langle\Psi|\widehat{{\cal P}}|\Psi\rangle <0$ or $\langle\Psi|\widehat{{\cal P}}|\Psi\rangle>1$) if one can find $f$ and $g$ such that \begin{equation} |\langle\phi|\widehat{{\cal P}}|\varphi\rangle|^2 > \langle\phi|\widehat{{\cal P}}|\phi\rangle\langle\varphi|\widehat{{\cal P}}|\varphi\rangle\,. \label{III.18} \end{equation} We claim that this can be achieved with $S=S'=(0,\infty)$, $f(q)\equiv \langle q|f\rangle$ an even, normalized function in $L^2(-\infty,\infty)$, and \begin{equation} g(q)\equiv\langle q|g\rangle = \mathop{\rm sgn}\nolimits(q)\,f(q)\,. \label{III.19} \end{equation} With this choice, eqs.\rf{III.15} are automatically satisfied and \begin{equation} \langle f|\widehat{\chi}|f\rangle = \langle g|\widehat{\chi}|g\rangle = \langle f|\widehat{\chi}|g\rangle = {1\over 2}\,. \label{III.20} \end{equation} Also, since the Fourier transforms $\tilde{f}(p)$ and $\tilde{g}(p)$ are respectively even and odd functions, \begin{equation} \langle f|\widehat{\chi}'|f\rangle = \langle g|\widehat{\chi}'|g\rangle = {1\over 2}\,. \label{III.21} \end{equation} As for the non trivial interference term $\langle f|\widehat{\chi}'|g\rangle$, it is given by (see appendix A) \begin{equation} \langle f|\widehat{\chi}'|g\rangle = -{i\over\pi} \int_0^\infty dq\int_0^\infty dq' \,f^*(q)f(q')\left({1\over q+q'}-{P\over q-q'}\right)\,. \label{III.22} \end{equation} At this stage, it is advantageous to take $f$ as a {\sl real} function, so that by symmetry \[ \langle f|\widehat{\chi}'|g\rangle = -{i\over\pi} \int_0^\infty dq\int_0^\infty dq' \,{f(q)f(q')\over q+q'}\,. \] Let us set \begin{eqnarray} h(q) & = & \sqrt{2}\,f(q)\,, \label{III.23} \\ K(q,q') & = & {1\over\pi}\,{1\over q+q'}\,.
\label{III.24} \end{eqnarray} Then \begin{equation} \langle f|\widehat{\chi}'|g\rangle = -{i\over 2}\,\gamma\,, \qquad\qquad (\gamma\in\mbox{\Bbb R}) \label{III.25} \end{equation} with \begin{equation} \gamma = \int_0^\infty dq\int_0^\infty dq'\,h(q)\,K(q,q')\,h(q')\,, \label{III.26} \end{equation} and \begin{equation} \|h\|_{L^2(0,\infty)} = 1\,. \label{III.27} \end{equation} The insertion of eqs.\rf{III.20}, \rf{III.21} and \rf{III.25} in \rf{III.17} gives \[\begin{array}{c} \langle\phi|\widehat{{\cal P}}|\phi\rangle = \langle\varphi|\widehat{{\cal P}}|\varphi\rangle = {1\over 2}\,, \\ \langle\phi|\widehat{{\cal P}}|\varphi\rangle = -{1\over 4}(1+\gamma^2) +{\displaystyle i\over 2}\,\gamma\,, \end{array} \] so that eq.\rf{III.18} reads \[ (\gamma^2+1)^2+4\gamma^2 > 4 \,, \] which is satisfied provided that \begin{equation} |\gamma| > \sqrt{2\sqrt{3}-3} \,\cong\, 0.6813\,. \label{III.28} \end{equation} Moreover, with $\lambda=\rho\,\e{i\theta}$, eq.\rf{III.16} becomes \begin{equation} \langle\Psi|\widehat{{\cal P}}|\Psi\rangle = {1\over 2}-{\rho\over 2(1+\rho^2)} \left[(1+\gamma^2)\cos\theta+2\gamma\sin\theta\right]\,. \label{III.29} \end{equation} We already know that $|\gamma|$ cannot exceed 1, because $|\langle f|\widehat{\chi}'|g\rangle|^2 \leq \langle f|\widehat{\chi}'|f\rangle \langle g|\widehat{\chi}'|g\rangle={1\over4}$. Are there however some $h$'s (subject to \rf{III.27}) such that $\gamma$ (given by eq.\rf{III.26}) fulfils \rf{III.28}? If this occurs we have reached our goal and it only remains to maximize $|\gamma|$ in order to obtain the extremal values of $\langle\Psi|\widehat{{\cal P}}|\Psi\rangle$ (within the present scheme) through eq.\rf{III.29}. In other words, one has to solve the problem \renewcommand{\arraystretch}{.7} \[ \gamma_0 \equiv \build{\sup} {\begin{array}{c} \scriptstyle \|h\|_{\scriptscriptstyle L^2(0,\infty)}=1 \\ \scriptstyle h=h^* \end{array}} |\langle h|K|h\rangle| = \ ?
\] \renewcommand{\arraystretch}{1.2} \noindent In appendix B, it is shown that the (bounded) integral operator $K$ with kernel \rf{III.24} on $L^2(0,\infty)$ is positive and has the purely continuous spectrum $[0,1]$. This immediately entails $\gamma_0=1$, and we get \[ \left.\langle\Psi|\widehat{{\cal P}}|\Psi\rangle\right|_{\gamma=1} = {1\over 2}-{\rho\over 1+\rho^2}(\cos\theta+\sin\theta)\,, \] \begin{equation}\begin{array}{rclclcl} \build{\inf}{\lambda} \left.\langle\Psi|\widehat{{\cal P}}|\Psi\rangle\right|_{\gamma=1} & = & \left.\langle\Psi|\widehat{{\cal P}}|\Psi\rangle\right|_{\gamma=\rho=1, \theta={\pi\over4}} & = & {1-\sqrt{2}\over 2 } & \cong & -0.2071 \\ \build{\sup}{\lambda} \left.\langle\Psi|\widehat{{\cal P}}|\Psi\rangle\right|_{\gamma=1} & = & \left.\langle\Psi|\widehat{{\cal P}}|\Psi\rangle\right|_{\gamma=\rho=1, \theta={-3\pi\over4}} & = & {1+\sqrt{2}\over 2 } & \cong & 1.2071 \end{array} \label{III.30} \end{equation} Actually, as discussed in appendix B, due to the continuous spectrum of $K$, these extremal values cannot be strictly reached, but only approached arbitrarily close via a family of normalized functions $h$, e.g. \[ h_L(q) = {\theta(L-q)\over \sqrt{\ln(L+1)}}{1\over \sqrt{q+1}}, \qquad L\rightarrow\infty \] or smoothed forms of this. Of course, other functions $h$ will also do the job (although less perfectly), that is meet the crucial requirement \rf{III.28}.
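As a consistency check of \rf{III.28}, \rf{III.29} and \rf{III.30} (a numerical sketch, not part of the original derivation), one can verify that the violation condition $(\gamma^2+1)^2+4\gamma^2>4$ switches on exactly at $|\gamma|=\sqrt{2\sqrt3-3}$, and that at $\gamma=\rho=1$ the extreme values $(1\mp\sqrt2)/2$ are reached at $\theta=\pi/4$ and $\theta=-3\pi/4$:

```python
import math

def expectation(rho, theta, gamma):
    """Eq. (III.29): <Psi|P^hat|Psi> for lambda = rho * exp(i*theta)."""
    return 0.5 - rho / (2 * (1 + rho**2)) * ((1 + gamma**2) * math.cos(theta)
                                             + 2 * gamma * math.sin(theta))

def violates(gamma):
    """Condition (III.18) in the form (gamma^2+1)^2 + 4*gamma^2 > 4."""
    return (gamma**2 + 1)**2 + 4 * gamma**2 > 4

threshold = math.sqrt(2 * math.sqrt(3) - 3)      # ~ 0.6813, eq. (III.28)
print(violates(threshold + 1e-9), violates(threshold - 1e-9))   # True False
print(expectation(1.0, math.pi / 4, 1.0))        # (1 - sqrt(2))/2 ~ -0.2071
print(expectation(1.0, -3 * math.pi / 4, 1.0))   # (1 + sqrt(2))/2 ~  1.2071
```

The first value drops below 0 and the second exceeds 1, i.e. both sides of \rf{III.6} are violated once $\gamma$ is close enough to 1.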
Taking for example $h(q)={1\over q+1}$ (which is normalized in $L^2(0,\infty)$), one gets \[ \gamma = {\pi\over 4} \cong 0.7854\,. \] Finally, collecting the equations \rf{III.13}, \rf{III.14}, \rf{III.19} and \rf{III.23}, together with $\rho=1$ and $\theta={\pi\over4},\,-{3\pi\over4}$, one obtains the wave functions leading to the maximal violations \rf{III.30} \begin{equation} \Psi_{\pm}(q_1,q_2) = {1\over 2\sqrt{2}}\left[1\pm \e{i{\pi\over 4}}\mathop{\rm sgn}\nolimits(q_1)\mathop{\rm sgn}\nolimits(q_2)\right]h(|q_1|)h(|q_2|)\,, \label{III.31} \end{equation} where $h(q)$ stands for some regularized form of ${1\over\sqrt{q}}$, with $\int_0^\infty dq\,h(q)^2=1$. \subsection{Introducing Einstein locality and relative motion} Let us now interpret $q_1$ and $q_2$ as the coordinates of two particles (rather than the $x$ and $y$ coordinates of the same particle). Then the wave functions \rf{III.31} describe states of two particles not spatially separated and with zero relative momentum. These two restrictions can easily be disposed of. First, it can be checked that nothing is essentially changed in the previous derivation if one keeps \begin{equation} S_1=S'_1=(0,\infty)\,; \qquad \phi_1(q_1)=f(q_1)\,, \quad \varphi_1(q_1) = \mathop{\rm sgn}\nolimits(q_1) f(q_1)\,, \label{III.32} \end{equation} but replaces \begin{equation} S_2=S'_2=(0,\infty)\,; \qquad \phi_2(q_2)=f(q_2)\,, \quad \varphi_2(q_2) = \mathop{\rm sgn}\nolimits(q_2) f(q_2)\,, \label{III.33} \end{equation} by \[ S_2=(a,\infty)\,, \quad S'_2=(0,\infty)\,; \qquad \phi_2(q_2)=f(q_2-a)\,, \quad \varphi_2(q_2) = \mathop{\rm sgn}\nolimits(q_2-a) f(q_2-a)\,. \] Then eq.\rf{III.31} becomes \[ \Psi_{\pm}(q_1,q_2) = {1\over 2\sqrt{2}}\left[1\pm \e{i{\pi\over 4}}\mathop{\rm sgn}\nolimits(q_1)\mathop{\rm sgn}\nolimits(q_2-a)\right]h(|q_1|)h(|q_2-a|)\,, \] with $a$ arbitrary. This allows us to let Einstein locality enter the game.
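The value $\gamma=\pi/4$ quoted above for $h(q)={1\over q+1}$ can be checked by direct quadrature of eq.\rf{III.26}; the sketch below maps $(0,\infty)$ to $(0,1)$ and uses a plain midpoint rule, so only a modest accuracy is claimed:

```python
import math

# gamma = (1/pi) * double integral of h(q) h(q') / (q + q') for h(q) = 1/(q+1),
# mapping (0, inf) to (0, 1) via q = u/(1-u) and using the midpoint rule.
N = 400
us = [(i + 0.5) / N for i in range(N)]
qs = [u / (1.0 - u) for u in us]
ws = [1.0 / ((1.0 - u) ** 2 * N) for u in us]   # Jacobian times step
h = [1.0 / (q + 1.0) for q in qs]

norm = sum(ws[i] * h[i] ** 2 for i in range(N))   # normalization, equals 1
gamma = sum(ws[i] * ws[j] * h[i] * h[j] / (math.pi * (qs[i] + qs[j]))
            for i in range(N) for j in range(N))

assert abs(norm - 1.0) < 1e-9
assert abs(gamma - math.pi / 4) < 0.02   # pi/4 is about 0.7854
```

Analytically, the inner integral reduces to $\ln q/(q-1)$ and $\int_0^\infty \ln q\,(q^2-1)^{-1}dq=\pi^2/4$, which gives $\gamma=\pi/4$ exactly.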
Similarly, nothing is essentially changed if one keeps eqs.\rf{III.32} but replaces eqs.\rf{III.33} by \[ S_2=(0,\infty)\,, \quad S'_2=(P,\infty)\,; \qquad \phi_2(q_2)=\e{iPq_2}f(q_2)\,, \quad \varphi_2(q_2) = \e{iPq_2}\mathop{\rm sgn}\nolimits(q_2) f(q_2)\,. \] Then eq.\rf{III.31} becomes \[ \Psi_{\pm}(q_1,q_2) = {1\over 2\sqrt{2}}\left[1\pm \e{i{\pi\over 4}}\mathop{\rm sgn}\nolimits(q_1)\mathop{\rm sgn}\nolimits(q_2)\right] \e{iPq_2}h(|q_1|)h(|q_2|)\,, \] with $P$ arbitrary. This allows us to put the two particles in relative motion. \subsection{Analogy with Bell spin $1\over2$ correlation inequalities} Let us denote by $|+\rangle$ a normalized function $f$ close to the (symmetrized) eigenfunction of the operator $K$ with ``eigenvalue'' $\lambda_0=1$ (i.e.\ $\gamma\cong1$ in eq.\rf{III.25}), and by $|-\rangle$ the orthogonal function $g$ (as given by eq.\rf{III.19}). Consider the subspace $V=\mbox{span}(|+\rangle,|-\rangle)$ of the full 1-particle Hilbert space, together with the orthogonal projector $\Pi$ onto $V$. Call $\Gamma$ (resp.\ $\Gamma'$) the restriction of $\widehat{\chi}$ (resp.\ $\widehat{\chi}'$) to the 2-dimensional space $V$ \[ \Gamma=\Pi\,\widehat{\chi}\,\Pi\,, \qquad \Gamma'=\Pi\,\widehat{\chi}'\,\Pi\,. \] Then eqs.\rf{III.20}, \rf{III.21} and \rf{III.25} tell us that $\Gamma$ and $\Gamma'$ are represented in the orthonormal basis $\{|+\rangle,|-\rangle\}$ by the matrices \[\begin{array}{rcl} \Gamma = \left(\begin{array}{lr} {1\over2} & {1\over2} \\ {1\over2} & {1\over2} \end{array}\right) & = & {1\over2}\,(1+\sigma_x)\,,\\ [.2in] \Gamma' = \left(\begin{array}{cc} {1\over2} & {i\over2}\gamma \\ -{i\over2}\gamma & {1\over2} \end{array}\right) & = & {1\over2}\,(1-\gamma\sigma_y)\,.
\end{array}\] In the idealized limit $\gamma\rightarrow1$ (and only in this limit), one observes that $\Gamma$ and $\Gamma'$ are themselves orthogonal projections $V\rightarrow V$ \[\begin{array}{rclrcl} \Gamma&=&\Gamma^\dagger\,, & \Gamma^2&=&\Gamma\,, \\ \Gamma'&=&\Gamma'^\dagger\,,\qquad & \Gamma'^2&=&\Gamma'\,. \end{array}\] This implies that both operators $\widehat{\chi}$ and $\widehat{\chi}'$ leave the subspace $V$ invariant \[ [\Pi,\widehat{\chi}] = [\Pi,\widehat{\chi}'] = 0\,. \] Indeed, a straightforward calculation shows that \[ [(\mbox{\bf 1}-\Pi)\widehat{\chi}\Pi]^\dagger[(\mbox{\bf 1}-\Pi)\widehat{\chi}\Pi] = 0\,, \] which entails $(\mbox{\bf 1}-\Pi)\widehat{\chi}\Pi=0$ and $\widehat{\chi}\Pi=\Pi\widehat{\chi}$. The same holds for $\widehat{\chi}'$. Hence, in the 2-particle Hilbert space, the operator \rf{III.12} also leaves $V\otimes V$ invariant, and its restriction $\widehat{P}_V:=\widehat{P}\,\big|_{V\otimes V}$ assumes the simple form \begin{equation} \widehat{P}_V = {1\over2}+{1\over4}\left(\sigma_y^{(1)}\sigma_y^{(2)} -\sigma_x^{(1)}\sigma_x^{(2)}+\sigma_x^{(1)}\sigma_y^{(2)} +\sigma_y^{(1)}\sigma_x^{(2)}\right)\,, \label{III.34} \end{equation} whereas the maximally violating wave functions \rf{III.31} read \begin{equation} |\Psi_\pm\rangle = {1\over\sqrt{2}}\left(|+\rangle^{(1)}|+\rangle^{(2)} \pm\e{i{\pi\over4}}|-\rangle^{(1)}|-\rangle^{(2)}\right)\,. \label{III.35} \end{equation} From eq.\rf{III.34} one can check that \begin{equation} \widehat{P}_V({\bf 1}-\widehat{P}_V) = -{1\over4}\,\sigma_z^{(1)}\sigma_z^{(2)}\,, \label{III.36} \end{equation} which is just the projected form of $\widehat{P}({\bf 1}-\widehat{P})=[\widehat{\chi}_1,\widehat{\chi}'_1]\, [\widehat{\chi}_2,\widehat{\chi}'_2]$, and the expectation value of the operator \rf{III.36} is \[ \langle\Psi_\pm|\widehat{P}_V({\bf 1}-\widehat{P}_V)|\Psi_\pm\rangle = -{1\over4} \] for the wave functions \rf{III.35}.
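Eqs.\rf{III.34}-\rf{III.36} are easy to verify with elementary $4\times4$ matrix algebra. The Python sketch below builds the projected operator of eq.\rf{III.34} (called `B` in the code), checks eq.\rf{III.36}, and scans two-particle states of the form $(|{+}{+}\rangle+e^{i\theta}|{-}{-}\rangle)/\sqrt2$ over the relative phase $\theta$; scanning the phase recovers the extrema $(1\mp\sqrt2)/2$ of eq.\rf{III.30} while sidestepping any phase-convention subtlety:

```python
import cmath, math

def mul(A, B):
    # plain matrix product for small square matrices
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(A, B):
    # Kronecker product of two 2x2 matrices
    return [[A[i][j] * B[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

I2 = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

# Gamma and Gamma' at gamma = 1 are orthogonal projections.
G1 = [[(I2[i][j] + sx[i][j]) / 2 for j in range(2)] for i in range(2)]
G2 = [[(I2[i][j] - sy[i][j]) / 2 for j in range(2)] for i in range(2)]
for M in (G1, G2):
    MM = mul(M, M)
    assert all(abs(MM[i][j] - M[i][j]) < 1e-12
               for i in range(2) for j in range(2))

# Projected operator of eq. (III.34).
S = [[kron(sy, sy)[i][j] - kron(sx, sx)[i][j]
      + kron(sx, sy)[i][j] + kron(sy, sx)[i][j] for j in range(4)]
     for i in range(4)]
I4 = kron(I2, I2)
B = [[I4[i][j] / 2 + S[i][j] / 4 for j in range(4)] for i in range(4)]

# eq. (III.36):  B(1 - B) = -(1/4) sigma_z x sigma_z
BB = mul(B, B)
zz = kron(sz, sz)
assert all(abs(B[i][j] - BB[i][j] + zz[i][j] / 4) < 1e-12
           for i in range(4) for j in range(4))

# Extrema over states (|++> + e^{it}|-->)/sqrt(2): (1 -+ sqrt(2))/2.
def expect(t):
    lam = cmath.exp(1j * t)
    psi = [1 / math.sqrt(2), 0, 0, lam / math.sqrt(2)]
    Bpsi = [sum(B[i][j] * psi[j] for j in range(4)) for i in range(4)]
    return sum(psi[i].conjugate() * Bpsi[i] for i in range(4)).real

vals = [expect(2 * math.pi * k / 2000) for k in range(2000)]
assert abs(min(vals) - (1 - math.sqrt(2)) / 2) < 1e-4
assert abs(max(vals) - (1 + math.sqrt(2)) / 2) < 1e-4
```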
The result \rf{III.30} is also directly recovered from eqs.\rf{III.34} and \rf{III.35} \[ \langle\Psi_\pm|\widehat{P}_V|\Psi_\pm\rangle = {1\mp\sqrt{2}\over2}\,. \] One then sees that, in the idealized limit $\gamma\rightarrow1$, the original phase space setting of the problem is formally equivalent to the standard EPR setting for a two-spin-${1\over2}$ system, together with its classical Bell inequalities. \section{General solution of the three marginal problem} \setcounter{equation}{0} We have proved here the impossibility of reproducing the quantum probabilities of four CCS as marginals. Roy and Singh \cite{RS7} have given examples showing that reproducing three CCS is possible. In this section, we construct the most general nonnegative phase space density which reproduces three different (noncommuting) CCS as marginals. Our results encapsulate the extent to which noncommuting CCS can be simultaneously realized in quantum mechanics. \indent Among the four marginals $R,S,T,U$ obeying the compatibility conditions \rf{II.4} which are at our disposal, the particular choice of three of them is completely irrelevant. For definiteness, we choose $R,T$ and $U$, which we rename $\sigma_0(q_1,q_2)$, $\sigma_1(p_1,q_2)$ and $\sigma_2(p_1,p_2)$. We assume that these marginals are probability densities in the full mathematical sense, that is, they are true (integrable and non negative) {\sl functions}. This means that we restrict our marginal probability distributions to {\sl absolutely continuous} measures (with respect to Lebesgue measure) in $\mbox{\Bbb R}^2$. Notice that such a restriction is automatic in the quantum case, due to eqs.\rf{II.5}. Likewise, we look for the general solution of the three marginal problem in the class of absolutely continuous measures in the phase space $\mbox{\Bbb R}^4$.
This means that we want to describe all the solutions $\rho$ of the equations \begin{equation}\begin{array}{c} \displaystyle \sigma_0(q_1,q_2)=\int dp_1dp_2\,\rho(\overrightarrow{q},\overrightarrow{p})\,,\\ \displaystyle \sigma_1(p_1,q_2)=\int dq_1dp_2\,\rho(\overrightarrow{q},\overrightarrow{p})\,,\\ \displaystyle \sigma_2(p_1,p_2)=\int dq_1dq_2\,\rho(\overrightarrow{q},\overrightarrow{p})\,, \end{array} \label{IV.1} \end{equation} which belong to $L^1(\mbox{\Bbb R}^4,d^2q\,d^2p)$. Notice that this is a restricted problem even in the quantum case, since nothing prevents a probability measure containing a singular part from projecting onto marginals which are $L^1$-functions. To some extent, the above restrictions can be removed, allowing us to include e.g. probability measures partly concentrated on submanifolds of the phase space. However, dealing with such extensions in any generality requires painful manipulations, and we shall ignore them here\footnote{Special cases are treated in \cite{RS5} and \cite{RS7}.}. As for the full inclusion of singular measures, it appears both delicate and of little practical interest. Let us introduce the one-variable marginals \begin{equation}\begin{array}{c} \displaystyle \sigma_{01}(q_2) = \int dq_1\,\sigma_0(q_1,q_2)\,, \\ \displaystyle \sigma_{12}(p_1) = \int dq_2\,\sigma_1(p_1,q_2)\,. \end{array}\label{IV.4} \end{equation} Owing to the compatibility conditions \rf{II.4}, these definitions are equivalent to \begin{equation}\begin{array}{c} \displaystyle \sigma_{01}(q_2) = \int dp_1\,\sigma_1(p_1,q_2)\,, \\ \displaystyle \sigma_{12}(p_1) = \int dp_2\,\sigma_2(p_1,p_2)\,. \end{array}\label{IV.5} \end{equation} As the support properties of the functions $\sigma_j$ (which are allowed to vanish on some parts of $\mbox{\Bbb R}^2$) are not innocent in the forthcoming construction, we need to pay attention to them. Let $\Sigma_j\subset\mbox{\Bbb R}^2$ ($j=0,1,2$) be the essential support of $\sigma_j$.
The above compatibility conditions, together with the positivity conditions $\sigma_j\geq0$, clearly yield two constraints on the supports $\Sigma_j$, namely \begin{equation}\begin{array}{r} \{q_2\in\mbox{\Bbb R}\ |\ \exists\ q_1\in\mbox{\Bbb R} \mbox{\ such that } (q_1,q_2)\in\Sigma_0\} \quad = \qquad\qquad\qquad\qquad \\ \{q_2\in\mbox{\Bbb R}\,|\,\exists\ p_1\in\mbox{\Bbb R} \mbox{\ such that } (p_1,q_2)\in\Sigma_1\}\,, \\ \{p_1\in\mbox{\Bbb R}\,|\,\exists\ q_2\in\mbox{\Bbb R} \mbox{\ such that } (p_1,q_2)\in\Sigma_1\} \quad = \qquad\qquad\qquad\qquad \\ \{p_1\in\mbox{\Bbb R}\,|\,\exists\ p_2\in\mbox{\Bbb R} \mbox{\ such that } (p_1,p_2)\in\Sigma_2\}\,. \end{array}\label{IV.6} \end{equation} To the $\Sigma_j$'s we associate the subsets $E_j$'s of the phase space defined by \begin{equation}\begin{array}{c} E_0 = \{\overrightarrow{q},\overrightarrow{p}\ |\ (q_1,q_2)\in\Sigma_0\,, (p_1,p_2)\in\mbox{\Bbb R}^2 \}\,, \\ E_1 = \{\overrightarrow{q},\overrightarrow{p}\ |\ (p_1,q_2)\in\Sigma_1\,, (q_1,p_2)\in\mbox{\Bbb R}^2 \}\,, \\ E_2 = \{\overrightarrow{q},\overrightarrow{p}\ |\ (p_1,p_2)\in\Sigma_2\,, (q_1,q_2)\in\mbox{\Bbb R}^2 \}\,. \end{array}\label{IV.7} \end{equation} Finally, we denote by $E$ the intersection of the $E_j$'s \begin{equation} E = E_0\cap E_1\cap E_2\,. \label{IV.8} \end{equation} Clearly, due to positivity again, any solution $\rho$ of eqs.\rf{IV.1} must have its essential support contained in $E$. The three marginal problem in the precise form stated above is then completely solved by \newtheorem{theorem}{Theorem} \begin{theorem} 1) The Lebesgue measure of $E$ is not zero and the function $\rho_0$ defined (a.e.) 
by \begin{equation} \rho_0(\overrightarrow{q},\overrightarrow{p}) = \left\{ \begin{array}{l} \sigma_0(q_1,q_2) {\displaystyle 1\over\displaystyle\sigma_{01}(q_2)} \sigma_1(p_1,q_2) {\displaystyle1\over\displaystyle\sigma_{12}(p_1)} \sigma_2(p_1,p_2) \mbox{\ \ if\ }(\overrightarrow{q},\overrightarrow{p})\in E\,, \\ 0 \qquad \mbox{ otherwise,} \end{array}\right. \label{IV.9} \end{equation} is a non negative solution of the problem \rf{IV.1} in $L^1(\mbox{\Bbb R}^4,d^2q\,d^2p)$. 2) The general solution $\rho$ of \rf{IV.1} in $L^1(\mbox{\Bbb R}^4,d^2q\,d^2p)$ is given by \begin{equation} \rho(\overrightarrow{q},\overrightarrow{p}) = \rho_0(\overrightarrow{q},\overrightarrow{p})+\lambda\,\Delta(\overrightarrow{q},\overrightarrow{p})\,, \label{IV.10} \end{equation} where \begin{equation} \lambda\in\left[-{1\over m_+},{1\over m_-}\right]\,, \label{IV.11} \end{equation} and \renewcommand{\arraystretch}{1.9} \begin{equation}\begin{array}{l} \Delta(\overrightarrow{q},\overrightarrow{p}) = F(\overrightarrow{q},\overrightarrow{p})-\rho_0(\overrightarrow{q},\overrightarrow{p}) \left[\displaystyle {1\over\sigma_0(q_1,q_2)}\displaystyle\int dp'_1dp'_2 \,F(q_1,q_2,p'_1,p'_2) \right. \\ \qquad +\displaystyle {1\over\sigma_1(p_1,q_2)}\displaystyle\int dq'_1dp'_2 \,F(q'_1,q_2,p_1,p'_2) + {1\over\sigma_2(p_1,p_2)}\displaystyle\int dq'_1dq'_2 \,F(q'_1,q'_2,p_1,p_2) \\ \qquad\left. -\displaystyle {1\over\sigma_{01}(q_2)}\displaystyle\int dq'_1dp'_1dp'_2 \,F(q'_1,q_2,p'_1,p'_2) - {1\over\sigma_{12}(p_1)}\displaystyle\int dq'_1dq'_2dp'_2 \,F(q'_1,q'_2,p_1,p'_2) \right], \end{array}\label{IV.12} \end{equation} \renewcommand{\arraystretch}{1.4} $F$ being an {\sl arbitrary} $L^1(\mbox{\Bbb R}^4,d^2q\,d^2p)$-function with essential support contained in $E$.
The (F-dependent) constants $m_\pm$ in \rf{IV.11} are defined as \begin{equation} m_+ = \ \build{\mbox{ ess sup }}{(\overrightarrow{\scriptstyle q}, \overrightarrow{\scriptstyle p})\in E} \ {\Delta(\overrightarrow{q},\overrightarrow{p})\over\rho_0(\overrightarrow{q},\overrightarrow{p})}\,, \qquad m_- = \ -\build{\mbox{ ess inf }}{(\overrightarrow{\scriptstyle q}, \overrightarrow{\scriptstyle p})\in E} \ {\Delta(\overrightarrow{q},\overrightarrow{p})\over\rho_0(\overrightarrow{q},\overrightarrow{p})}\,, \label{IV.13} \end{equation} and are both positive if $\Delta\not=0$ ($m_+=\infty$ or/and $m_-=\infty$ are not excluded). \label{theorem} \end{theorem} \noindent{\sl Proof:} \noindent 1) To begin with, $\rho_0$ given by \rf{IV.9} is well defined and non negative. Indeed, due to \rf{IV.4} (or \rf{IV.5}) and the positivity of the $\sigma_j$'s, $\sigma_{01}(q_2)$ and $\sigma_{12}(p_1)$ are a.e. non zero for $(\overrightarrow{q},\overrightarrow{p})\in E_0$ and $E_1$ (or $E_1$ and $E_2$), so that the denominators in eq.\rf{IV.9} do not vanish on $E$ (except maybe on sets of Lebesgue measure 0). Next, in order to check that $\rho_0$ obeys the first equation \rf{IV.1}, we consider the integral \begin{equation} \int dp_1 \int dp_2\,\rho_0(\overrightarrow{q},\overrightarrow{p}) \label{IV.14} \end{equation} with this specific order of the $p$ integrations. According to the relations \rf{IV.6} and the definition of $E$, one observes first that the projection of $E$ on the $(p_1,p_2)$ plane is the set $\Sigma_2$, so that the integration over $p_2$ removes the factor $\sigma_2/\sigma_{12}$ in $\rho_0$; and second, that the projections of $\Sigma_1$ and $\Sigma_2$ on $p_1$ coincide, so that the integration over $p_1$ removes the factor $\sigma_1/\sigma_{01}$ in $\rho_0$, and one is left with the expected result $\sigma_0(q_1,q_2)$. 
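The mechanism behind the product formula \rf{IV.9} and the marginal computation just described is transparent in a discrete analogue. The sketch below (an illustration only, not the measure-theoretic statement) draws a strictly positive joint distribution on finite ranges, forms $\sigma_0,\sigma_1,\sigma_2$ and the one-variable marginals \rf{IV.4}, builds the discrete counterpart of $\rho_0$, and checks that all three marginals are reproduced exactly:

```python
import itertools, random

random.seed(1)
n = 3  # each of q1, q2, p1, p2 takes values 0..n-1
idx = list(itertools.product(range(n), repeat=4))

# Any strictly positive joint distribution rho_true(q1, q2, p1, p2).
w = {x: random.uniform(0.1, 1.0) for x in idx}
tot = sum(w.values())
rho_true = {x: w[x] / tot for x in idx}

# The three pairwise marginals of eq. (IV.1).
s0, s1, s2 = {}, {}, {}
for (q1, q2, p1, p2), v in rho_true.items():
    s0[q1, q2] = s0.get((q1, q2), 0.0) + v
    s1[p1, q2] = s1.get((p1, q2), 0.0) + v
    s2[p1, p2] = s2.get((p1, p2), 0.0) + v

# One-variable marginals (IV.4); compatibility (IV.5) holds by construction.
s01 = {q2: sum(s0[q1, q2] for q1 in range(n)) for q2 in range(n)}
s12 = {p1: sum(s1[p1, q2] for q2 in range(n)) for p1 in range(n)}

# The product density of eq. (IV.9), discrete version.
rho0 = {(q1, q2, p1, p2):
        s0[q1, q2] * s1[p1, q2] * s2[p1, p2] / (s01[q2] * s12[p1])
        for (q1, q2, p1, p2) in idx}

# rho0 is nonnegative and reproduces all three marginals.
for (q1, q2), v in s0.items():
    assert abs(sum(rho0[q1, q2, p1, p2]
                   for p1 in range(n) for p2 in range(n)) - v) < 1e-12
for (p1, q2), v in s1.items():
    assert abs(sum(rho0[q1, q2, p1, p2]
                   for q1 in range(n) for p2 in range(n)) - v) < 1e-12
for (p1, p2), v in s2.items():
    assert abs(sum(rho0[q1, q2, p1, p2]
                   for q1 in range(n) for q2 in range(n)) - v) < 1e-12
```

Summing out $p_2$ first removes the factor $\sigma_2/\sigma_{12}$ and summing out $p_1$ then removes $\sigma_1/\sigma_{01}$, exactly as in the continuous computation of \rf{IV.14}.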
We can now write \begin{equation} \int dp_1dp_2\,\rho_0(\overrightarrow{q},\overrightarrow{p}) = \sigma_0(q_1,q_2) \label{IV.15} \end{equation} where, thanks to Fubini's theorem, the integration order is completely irrelevant. The other two equations \rf{IV.1} are derived in a similar way. This calculation shows at once that the Lebesgue measure of $E$ is not zero and that $\rho_0\in L^1(\mbox{\Bbb R}^4,d^2q\,d^2p)$. \noindent 2) That any non negative solution $\rho$ of eqs.\rf{IV.1} admits the representation \rf{IV.10}-\rf{IV.12} is easy to establish. Indeed, since the essential support of $\rho$ is necessarily contained in $E$, we are allowed to take $F=\rho$ in eq.\rf{IV.12}, which gives (using \rf{IV.1}) \begin{equation} \Delta(\overrightarrow{q},\overrightarrow{p}) = \rho(\overrightarrow{q},\overrightarrow{p})-\rho_0(\overrightarrow{q},\overrightarrow{p})\,. \label{IV.16} \end{equation} Then, from \rf{IV.13} \[ m_- = -\build{\mbox{ ess inf }}{(\overrightarrow{\scriptstyle q}, \overrightarrow{\scriptstyle p})\in E} \left({\rho\over\rho_0}-1\right) \leq 1\,. \] As $1/m_-\geq 1$ in \rf{IV.11}, we can choose $\lambda=1$, which makes eq.\rf{IV.16} equivalent to the representation \rf{IV.10}. It remains to show that any function $\rho$ defined by \rf{IV.10} to \rf{IV.13} (and thus with essential support $E$) is a non negative solution of eqs.\rf{IV.1} in $L^1(\mbox{\Bbb R}^4,d^2q\,d^2p)$. In order to prove that $\rho$ satisfies the first equation \rf{IV.1}, we rearrange pairwise the right-hand side of \rf{IV.12} as follows \begin{eqnarray} \Delta &=& \left[F-{\rho_0\over\sigma_0} \int dp'_1dp'_2\,F\right] -\left[{\rho_0\over\sigma_1} \int dq'_1dp'_2\,F - {\rho_0\over\sigma_{01}} \int dq'_1dp'_1dp'_2\,F\right] \nonumber \\ && -\left[{\rho_0\over\sigma_2} \int dq'_1dq'_2\,F - {\rho_0\over\sigma_{12}} \int dq'_1dq'_2dp'_2\,F\right]\,.
\label{IV.18} \end{eqnarray} Then, integrating the right-hand side over $p_1$ and $p_2$, one finds, by an extensive use of eqs.\rf{IV.4} to \rf{IV.6} as in part 1), that the two terms coming from each square bracket cancel each other, leading to \[ \int dp_1dp_2\,\Delta(\overrightarrow{q},\overrightarrow{p}) = 0\,. \] This, with \rf{IV.10} and \rf{IV.15}, implies that $\rho$ satisfies the first equation \rf{IV.1}. That it satisfies the other two equations \rf{IV.1} is proved in a similar way. This calculation also shows that $\rho\in L^1(\mbox{\Bbb R}^4,d^2q\,d^2p)$. Finally \[ \int_E d^2qd^2p\,\Delta(\overrightarrow{q},\overrightarrow{p}) = 0\,, \] which implies that $m_\pm$ in eqs.\rf{IV.13} are both strictly positive if $\Delta$ does not vanish a.e. on $\mbox{\Bbb R}^4$. The positivity of $\rho$ is then a trivial consequence of eqs.\rf{IV.10}, \rf{IV.11} and \rf{IV.13}. The proof is complete. \noindent {\sl Remark:} \noindent Theorem 1, as stated above, deals with $L^1$ functions, and thus excludes the occurrence of Dirac measures. We insist on the fact that this is unnecessarily restrictive. Indeed, Dirac measures can easily be accommodated and the theorem suitably rephrased, at the price, however, of cumbersome mathematical intricacies which we do not want to enter into. An immediate corollary of Proposition \ref{proposition} and Theorem \ref{theorem} is \begin{theorem}[Three marginal theorem] Let $R$, $S$, $T$ and $U$ be probability distributions for $(q_1,q_2)$, $(q_1,p_2)$, $(p_1,q_2)$ and $(p_1,p_2)$ obeying the consistency conditions \rf{II.4}. Given $n$ arbitrary distributions among $\{R,S,T,U\}$, a necessary and sufficient condition for them to be marginals of a probability density in the $4$-dimensional phase space is $n\leq3$.
\end{theorem} \section{Conclusions} \setcounter{equation}{0} We have solved the four marginal problem in four-dimensional phase space, thus proving a long-standing conjecture \cite{RS7} and vastly improving the first results of Martin and Roy \cite{MartinRoy1}, which dealt with an infinite number of marginals. To achieve this, we first derived ``phase space Bell inequalities'', which are of interest in their own right. Actually they allow, at least in principle, direct ``experimental'' tests of the orthodox-versus-hidden-variable interpretations of quantum mechanics within the position-momentum sector, analogous to those performed within the spin sector. The technique of phase space Bell inequalities established here has applications to quantum information processing. Generalizing the example \rf{factor}, one can show that for any separable density operator $\rho$ one can construct a phase space density obeying the four marginal conditions. Hence, the Bell inequalities \rf{II.10}, with $R$, $S$, $T$ and $U$ given by \rf{II.5}, must hold for every separable quantum state, irrespective of any physical interpretation of the associated phase space density. Their violation by a quantum state is a signature, and even a quantitative measure, of the entanglement of this state. We have also constructed the most general positive definite phase space density which has the maximum number of marginals (three) coinciding with the corresponding quantum probabilities of three different (noncommuting) CCS. These results should be useful in the construction of maximally realistic quantum theories. \section{Proof of equation \rf{III.22}} \setcounter{equation}{0} Since $S'=(0,\infty)$, one has \[ \widehat{\chi}'(p)\tilde{g}(p) = \theta(p)\tilde{g}(p)\,.
\] Assuming first that $g$ belongs to $\cal S$ (the Schwartz space of infinitely differentiable functions on $\mbox{\Bbb R}$ with fast decrease at infinity), one can write \[ (\widehat{\chi}'g)(q) = \int_{-\infty}^\infty dq'\,\tilde{\theta}(q-q')g(q')\,, \] where $\tilde{\theta}$ is the Fourier transform of $\theta$ in the distribution-theoretic sense \[ \tilde{\theta}(q) \equiv {1\over 2\pi}\int_{-\infty}^\infty dp\, \e{ipq}\,\theta(p) = {i\over 2\pi}{P\over q}+{1\over 2}\,\delta(q)\,. \] Then, if $f$ also belongs to $\cal S$ \[ \langle f|\widehat{\chi}'|g\rangle = {i\over 2\pi} \int_{-\infty}^{\infty}dq\,f^*(q) \int_{-\infty}^{\infty}dq'\,{P\over q-q'}\,g(q') +{1\over 2}\,\langle f|g\rangle\,. \] In particular, for even $f$ and odd $g$, $\langle f|g\rangle$ vanishes, and \[ \langle f|\widehat{\chi}'|g\rangle = -{i\over\pi} \int_0^{\infty}dq\,f^*(q) \int_0^{\infty}dq'\,\left({1\over q+q'}-{P\over q-q'}\right)g(q') \,, \] which gives eq.\rf{III.22} if $g$ coincides with $f$ on $(0,\infty)$. The continuation from $\cal S$ to $L^2(-\infty,\infty)$ is performed as usual by continuity, using the fact that $\cal S$ is a dense subspace in $L^2(-\infty,\infty)$. \section{Study of the operator K} \setcounter{equation}{0} From the very definition of $K$ through the integral kernel \rf{III.24}, one has \[ (Kh)(q) = {1\over\pi}\int_0^\infty dq'\,{h(q')\over q+q'}\,. \] Let us put \[ \overline{h}(u) = \e{{\scriptstyle u}\over 2}h(\e{u})\,. \] Since $\int_{-\infty}^\infty du\,|\overline{h}(u)|^2=\int_0^\infty dq\,|h(q)|^2$, the correspondence $h\mapsto\overline{h}$ defines a unitary mapping $L^2(0,\infty) \rightarrow L^2(-\infty,\infty)$ and \begin{equation} \overline{\!Kh}\,(u) = \int_{-\infty}^\infty dv\,\,\overline{\!K}(u-v)\overline{h}(v)\,, \label{B.1} \end{equation} where \[ \overline{\!K}(u) = {1\over 2\pi\cosh{{\displaystyle u}\over 2}}\,. 
\] Then, another unitary mapping $L^2(-\infty,\infty) \rightarrow L^2(-\infty,\infty)$, namely the Fourier transform \[ \tilde{\overline{h}}(k) = {1\over\sqrt{2\pi}} \int_{-\infty}^\infty du\,\e{iku}\, \overline{h}(u)\,, \] reduces the convolution product in \rf{B.1} to an ordinary product \[ \widetilde{\overline{\!Kh}}(k) = \tilde{\overline{\!K}}(k)\,\tilde{\overline{h}}(k)\,, \] where \begin{equation} \tilde{\overline{\!K}}(k)\equiv\int_{-\infty}^\infty du\,\e{iku}\,\overline{\!K}(u) = {1\over\cosh\pi k}\,. \label{B.2} \end{equation} Therefore, the operator $K$ on $L^2(0,\infty)$ is unitarily equivalent to the multiplicative operator \rf{B.2} on $L^2(-\infty,\infty)$. The latter is evidently a positive operator with purely continuous spectrum $[0,1]$. Its generalized (non normalizable) ``eigenfunctions'' are \[ \tilde{\overline{h}}_s(k) = \delta(k-s) \qquad (s\in\mbox{\Bbb R})\,, \] with ``eigenvalues'' $\lambda_s={1\over\displaystyle\cosh{\pi s}}$\,, and their preimages in $L^2(0,\infty)$ are \[ h_s(q) = {1\over\sqrt{2\pi q}} \e{-is\ln q}\,. \] Of particular interest for us is the extremal one, with ``eigenvalue'' $\lambda_0=1$ \[ h_0(q) = {1\over\sqrt{2\pi q}}\,. \] Of course, the corresponding maximal value $\gamma_0=1$ of $\gamma=\langle h|K|h\rangle$ cannot be attained, but only approached arbitrarily closely through a family of normalizable functions mimicking ${1\over\sqrt{q}}$.
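The Fourier transform \rf{B.2}, on which the spectral statement rests, is easy to confirm numerically; the sketch below approximates $\int du\, e^{iku}/(2\pi\cosh(u/2))$ by a midpoint rule (the integrand is even, so the cosine part suffices) and compares it with $1/\cosh\pi k$:

```python
import math

def K_bar_tilde(k, U=80.0, N=20000):
    # Midpoint rule for the integral of cos(k u) / (2*pi*cosh(u/2)) on (-U, U);
    # the tail beyond |u| = U decays like exp(-U/2) and is negligible.
    h = 2 * U / N
    s = 0.0
    for i in range(N):
        u = -U + (i + 0.5) * h
        s += math.cos(k * u) / (2 * math.pi * math.cosh(u / 2))
    return s * h

for k in [0.0, 0.3, 1.0, 2.0]:
    assert abs(K_bar_tilde(k) - 1.0 / math.cosh(math.pi * k)) < 1e-8
```

Since the integrand is analytic and rapidly decaying, the equally spaced rule converges far faster than its generic $O(h^2)$ rate, which is why such a tight tolerance is attainable.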
For instance, introducing two cutoffs, $\varepsilon$ at small $q$ and $L$ at large $q$, and setting \[ h_{\varepsilon,L}(q) = {1\over\sqrt{\ln {L\over\varepsilon}}}\, \chi_{(\varepsilon,L)}(q)\,{1\over\sqrt{q}}\qquad(\|h_{\varepsilon,L}\|=1)\,, \] one gets \[ \langle h_{\varepsilon,L}|K|h_{\varepsilon,L}\rangle = 1-{4\over\pi\ln{L\over\varepsilon}}\,\int_{\sqrt{\varepsilon/L}}^1dx\, {\arctan x\over x} = 1-\mbox{O}\left({1\over\ln{L\over\varepsilon}}\right)\,, \] so that \renewcommand{\arraystretch}{.7} $\build{\lim}{\begin{array}{cc} \scriptstyle\varepsilon\rightarrow0 \\ \scriptstyle L\rightarrow\infty \end{array}} \langle h_{\varepsilon,L}|K|h_{\varepsilon,L}\rangle = 1$\,. \renewcommand{\arraystretch}{1.4} Notice that one can keep $\varepsilon$ fixed (e.g.\ $\varepsilon=1$) and let $L$ alone go to $\infty$ without changing anything (this is in fact a consequence of the scale invariance of the operator $K$), or even choose a family of less singular functions $h$, like \[ h_L(q) = {1\over\sqrt{\ln(L+1)}}\,\theta(L-q)\,{1\over\sqrt{q+1}}\,. \] \end{document}
\begin{document} \title{Reply to Ryff's Comment on ``Experimental Nonlocality Proof of Quantum Teleportation and Entanglement Swapping''} \author{Thomas Jennewein$^1$, Gregor Weihs$^{1,2}$, Jian-Wei Pan$^1$, and Anton Zeilinger$^1$} \affiliation{$^1$Institut f\"ur Experimentalphysik, Universit\"at Wien, Boltzmanngasse 5, A--1090 Wien, Austria\\ $^2$Ginzton Laboratory, S-23 Stanford University, Stanford, CA 94304, USA} \begin{abstract} Ryff's Comment raises the question of the meaning of the quantum state. We argue that the quantum state is just the representative of the information available to a given observer. Then Ryff's interpretation of one of our experiments and our original one are both admissible. \end{abstract} \maketitle Ryff's criticism \cite{1} of our interpretation of part of our recent experiment \cite{2} on the teleportation of entanglement gives us the opportunity to present our position in more detail. The basic issues are, on the one hand, the role of the relative time order of the various detection events and, on the other hand, the meaning of a quantum state. In the experiment two pairs of entangled photons are produced and one photon from each pair is sent to Alice. The other photon from each pair is sent to Bob (these might actually be two, even spacelike separated, places). Bob is free to choose which polarizations to measure on his two photons separately. Likewise Alice is free to choose whether she wants to project her two photons onto an entangled state, and thus effect quantum teleportation, or measure them individually. Most importantly, each one of them decides which measurement to perform and registers the results without being aware at all of what kind of measurement the other performs, or when. Both Alice's data and Bob's data are completely independent of whatever the other decides to measure. Then they ask themselves how their data are to be interpreted. Obviously both Alice's and Bob's interpretations depend critically on the information they have.
It is assumed they both know the initial entangled states. Alice then, on the basis of her measurement results, can make certain statements about Bob's possible results. These can be collected into expectation catalogs that give lists of results Bob may obtain for the specific observables he might choose to measure. The quantum state is no more than a most compact representative of such expectation catalogs \cite{3}. If Alice decides to perform a Bell-state analysis she will use an entangled state for her prediction of Bob's results. If she measures the polarizations of the two photons separately, she will use an unentangled product state. In both cases she will be able to arrive at a correct (maximal and in general probabilistic) set of predictions, in both cases compatible with Bob's results. In the first case she concludes, certainly correctly, that Bob's two photons are entangled and teleportation has succeeded. In the second case she will conclude that there is no entanglement between Bob's two photons and that no teleportation has happened. But, as stressed above, the data obtained by Bob are independent of Alice's actions. Indeed, his data set taken alone is completely random. Likewise, Bob will always use a product state based on his measurement results, and he will thus be able to predict Alice's results both for the case when she performs a Bell-state measurement and when she does not. It is now important to analyze what we mean by ``prediction''. As the relative time ordering of Alice's and Bob's events is irrelevant, ``prediction'' cannot refer to the time order of the measurements. It is helpful to remember that the quantum state is just an expectation catalog. Its purpose is to make predictions about possible measurement results a specific observer does not know yet.
Thus which state is to be used depends on which information Alice and Bob have, and ``prediction'' means prediction about measurement results they will learn in the future, independent of whether these measurements have already been performed by someone or not. Also, in our point of view it is irrelevant whether Alice performs her measurement earlier than Bob in any reference frame, or later, or even if the two measurements are spacelike separated, in which case the seemingly paradoxical situation arises that the time order of the measurements depends on the reference frame. In all these cases Alice will use the same quantum state to predict the results she will learn from Bob. In short, we don't see any problem with Alice using her results to predict which kind of results she will learn from Bob even if he might already have obtained these results. There is no action into the past, since the events observed by Bob are independent of which measurements Alice performs and at which time. Thus we have no disagreement with Ryff's way of interpreting our experiment. But we certainly disagree with his position that his way of looking at the situation is the only possible one. Yet we would still agree with Peres \cite{4} that there is a possible paradox here. But this paradox does not arise if the quantum state is viewed as no more than a representative of information. \end{document}
\begin{document} \title{Erd\H{o}s-P\'osa property of minor-models with prescribed vertex sets} \author[1]{O-joung Kwon\thanks{Supported by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Education (No. NRF-2018R1D1A1B07050294).}} \author[2]{D\'aniel Marx\thanks{Supported by ERC Consolidator Grant SYSTEMATICGRAPH (No. 725978).}} \affil[1]{Department of Mathematics, Incheon National University, Incheon, South Korea.} \affil[2]{Institute for Computer Science and Control, Hungarian Academy of Sciences, Budapest, Hungary} \date{\today} \maketitle \begin{abstract} A minor-model of a graph $H$ in a graph $G$ is a subgraph of $G$ that can be contracted to $H$. We prove that for a positive integer $\ell$ and a non-empty planar graph $H$ with at least $\ell-1$ connected components, there exists a function $f_{H, \ell}:\mathbb{N}\rightarrow \mathbb{R}$ with the following property: for every positive integer $k$, every graph $G$ with a family of vertex subsets $Z_1, \ldots, Z_m$ contains either $k$ pairwise vertex-disjoint minor-models of $H$, each intersecting at least $\ell$ of the prescribed vertex sets, or a vertex subset of size at most $f_{H, \ell}(k)$ that meets all such minor-models of $H$. The function $f_{H, \ell}$ is independent of the number $m$ of given sets, and thus our result generalizes Mader's $\cS$-path Theorem, which is recovered by taking $\ell=2$ and $H$ to be the one-vertex graph. We prove that such a function $f_{H, \ell}$ does not exist if $H$ consists of at most $\ell-2$ connected components. \end{abstract} \section{Introduction} A class $\mathcal{C}$ of graphs is said to have the \emph{Erd\H{o}s-P\'osa property} if there exists a function $f$ satisfying the following property: for every graph $G$ and every positive integer $k$, $G$ contains either $k$ pairwise vertex-disjoint subgraphs, each isomorphic to a graph in $\mathcal{C}$, or a vertex set $T$ of size at most $f(k)$ such that $G-T$ has no subgraph isomorphic to a graph in $\mathcal{C}$.
Erd\H{o}s and P\'osa~\cite{ErdosP1965} showed that the class of all cycles has the Erd\H{o}s-P\'osa property. Later, several variations of cycles having the Erd\H{o}s-P\'osa property have been investigated; for instance, directed cycles~\cite{ReedRST1996}, long cycles~\cite{FioriniH2014, MoussetNSW2016, RobertsonS1986}, cycles intersecting a prescribed vertex set~\cite{KakimuraKM2011,PontecorviW2012, HuyneJW2017}, and holes~\cite{KimK2018}. We refer to a survey of the Erd\H{o}s-P\'osa property by Raymond and Thilikos~\cite{RaymondT2017} for more examples. This property has been extended to a class of graphs that contains some fixed graph as a minor. A \emph{minor-model function} of a graph $H$ in a graph $G$ is a function $\eta$ with the domain $V(H)\cup E(H)$, where \begin{itemize} \item for every $v\in V(H)$, $\eta(v)$ is a non-empty connected subgraph of $G$, and these subgraphs are pairwise vertex-disjoint; \item for every edge $e$ of $H$, $\eta(e)$ is an edge of $G$, and these edges are pairwise distinct; \item for every edge $e=uv$ of $H$, if $u\neq v$ then $\eta(e)$ has one end in $V(\eta(u))$ and the other in $V(\eta(v))$; and if $u = v$, then $\eta(e)$ is an edge of $G$ with both ends in $V(\eta(v))$. \end{itemize} We call the image of such a function an \emph{$H$-minor-model} in $G$, or shortly an \emph{$H$-model} in $G$. We remark that an $H$-model is not necessarily a minimal subgraph that admits a minor-model function from $H$. For instance, when $H$ is the one-vertex graph, any connected subgraph is an $H$-model. As an application of the Grid Minor Theorem, Robertson and Seymour~\cite{RobertsonS1986} proved that the class of all $H$-models has the Erd\H{o}s-P\'osa property if and only if $H$ is planar. Another remarkable result on packing and covering objects in a graph is Mader's $\cS$-path Theorem~\cite{Mader78}.
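The definition of a minor-model function above translates directly into a verification routine. The Python sketch below (using a hypothetical adjacency-set representation of graphs) checks the three conditions, and tests them on a $K_3$-model in the $6$-cycle obtained by contracting three alternate edges:

```python
from itertools import combinations

def is_minor_model(G, H_vertices, H_edges, eta_v, eta_e):
    """Check the three conditions of a minor-model function.

    G: dict vertex -> set of neighbours (simple graph)
    H_vertices: list of H's vertices; H_edges: list of pairs (u, v)
    eta_v: H-vertex -> set of G-vertices; eta_e: H-edge index -> G-edge (x, y)
    """
    # branch sets are non-empty, connected, and pairwise vertex-disjoint
    for v in H_vertices:
        S = eta_v[v]
        if not S:
            return False
        start = next(iter(S))
        seen, stack = {start}, [start]
        while stack:
            x = stack.pop()
            for y in G[x]:
                if y in S and y not in seen:
                    seen.add(y)
                    stack.append(y)
        if seen != set(S):
            return False
    for a, b in combinations(H_vertices, 2):
        if eta_v[a] & eta_v[b]:
            return False
    # edge images are pairwise distinct G-edges with the right endpoints
    norm = [tuple(sorted(eta_e[i])) for i in range(len(H_edges))]
    if len(set(norm)) != len(norm):
        return False
    for i, (u, v) in enumerate(H_edges):
        x, y = eta_e[i]
        if y not in G[x]:
            return False
        if u != v and not ((x in eta_v[u] and y in eta_v[v]) or
                           (x in eta_v[v] and y in eta_v[u])):
            return False
        if u == v and not (x in eta_v[u] and y in eta_v[u]):
            return False
    return True

# A K3-model in the 6-cycle 0-1-2-3-4-5-0: contract the edges {0,1}, {2,3}, {4,5}.
C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
eta_v = {0: {0, 1}, 1: {2, 3}, 2: {4, 5}}
eta_e = {0: (1, 2), 1: (3, 4), 2: (5, 0)}
assert is_minor_model(C6, [0, 1, 2], [(0, 1), (1, 2), (2, 0)], eta_v, eta_e)
```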
Mader's $\cS$-path Theorem states that for a family $\cS$ of vertex subsets of a graph $G$, $G$ contains either $k$ pairwise vertex-disjoint paths connecting two distinct sets in $\cS$, or a vertex subset of size at most $2k-2$ that meets all such paths. An interesting point of this theorem is that the number of sets in $\cS$ does not affect the bound $2k-2$. A simplified proof was later given by Schrijver~\cite{Schrijver01}. In this paper, we generalize Robertson and Seymour's theorem on the Erd\H{o}s-P\'osa\ property of $H$-models and Mader's $\cS$-path Theorem simultaneously. For a graph $G$ and a multiset $\cZ$ of vertex subsets of $G$, the pair $(G, \cZ)$ is called a \emph{rooted graph}. For a positive integer $\ell$ and a family $\cZ$ of vertex subsets, an $H$-model $F$ is called an \emph{$(H, \cZ, \ell)$-model} if there are at least $\ell$ distinct sets $Z$ of $\cZ$ that contain a vertex of $F$. A vertex set $S$ of $G$ is called an \emph{$(H, \cZ, \ell)$-deletion set} if $G- S$ has no $(H, \cZ, \ell)$-models. For a graph $G$, we denote by $\cc(G)$ the number of connected components of $G$. We completely classify when the class of $(H, \cZ, \ell)$-models has the Erd\H{o}s-P\'osa\ property. \begin{theorem}\label{thm:main} For a positive integer $\ell$ and a non-empty planar graph $H$ with $\cc(H)\ge \ell-1$, there exists $f_{H, \ell}:\mathbb{N}\rightarrow \mathbb{R}$ satisfying the following property. Let $(G, \cZ)$ be a rooted graph and $k$ be a positive integer. Then $G$ contains either $k$ pairwise vertex-disjoint $(H, \cZ, \ell)$-models, or an $(H, \cZ, \ell)$-deletion set of size at most $f_{H,\ell}(k)$. If $\cc(H)\le \ell-2$, then such a function $f_{H, \ell}$ does not exist. \end{theorem} Together with the result of Robertson and Seymour~\cite{RobertsonS1986} on $H$-models, Theorem~\ref{thm:main} can be reformulated as follows. \begin{theorem} The class of $(H, \cZ, \ell)$-models has the Erd\H{o}s-P\'osa\ property if and only if $H$ is planar and $\cc(H)\ge \ell-1$.
\end{theorem} By setting $\cZ=\{V(G)\}$ and $\ell=1$, Theorem~\ref{thm:main} recovers the Erd\H{o}s-P\'osa\ property of $H$-models. We point out that the size of a deletion set in Theorem~\ref{thm:main} does not depend on the number of prescribed vertex sets in $\cZ$. Thus it generalizes Mader's $\cS$-path Theorem, obtained when $H$ is the one-vertex graph and $\ell=2$. Here we give one example showing that the class of $(H, \cZ, \ell)$-models does not satisfy the Erd\H{o}s-P\'osa\ property when $H$ is planar but consists of at most $\ell-2$ connected components. Let $\ell=3$ and let $H$ be a connected graph. Let $G$ be an $(n\times n)$-grid with sufficiently large $n$, and let $Z_1, Z_2, Z_3$ be the set of all vertices in the first column, the first row, and the last column, respectively, except corner vertices. See Figure~\ref{fig:counterex1}. One can observe that there cannot exist two vertex-disjoint $H$-models in $G$ each meeting all of $Z_1, Z_2, Z_3$, because $H$ is connected. On the other hand, the minimum size of an $(H, \{Z_1, Z_2, Z_3\}, \ell)$-deletion set can be made arbitrarily large by taking a sufficiently large grid. In Section~\ref{sec:counterex}, we will generalize this argument to all pairs $H$ and $\ell$ with $\cc(H)\le \ell-2$. We remark that Bruhn, Joos, and Schaudt~\cite{BJS2018} considered labelled minors and provided a characterization of the 2-connected labelled graphs $H$ for which $H$-models intersecting a given set have the Erd\H{o}s-P\'osa property. However, their minor-models are minimal subgraphs containing a graph $H$ as a minor, so the setting is slightly different. One of the main tools to prove Theorem~\ref{thm:main} is a rooted variant of the Grid Minor Theorem (Theorem~\ref{thm:rootedgridminor}). Note that a large grid model alone may not contain many pairwise vertex-disjoint $(H, \cZ, \ell)$-models. We introduce a suitable notion, called a \emph{$(\cZ,k,\ell)$-rooted grid model}, which does contain many disjoint $(H, \cZ, \ell)$-models.
Briefly, we show that every graph with a large grid model contains either a $(\cZ, k, \ell)$-rooted grid model, or a separation of small order separating most of the sets of $\cZ$ from the given grid model. Previously, Marx, Seymour, and Wollan~\cite{MarxSW2013} proved a similar result with one prescribed vertex set. Their result was stated in terms of \emph{tangles}~\cite{RobertsonS91}, but we state ours in an elementary way without using tangles. The advantage of our formulation is that we do not need to define a relative tangle each time in the induction step. In Section~\ref{sec:pureep}, we introduce a pure $(H, \cZ, \ell)$-model, which consists of a minimal set of connected components of an $(H, \cZ, \ell)$-model intersecting $\ell$ sets of $\cZ$. A formal definition can be found at the beginning of Section~\ref{sec:pureep}. Note that in a pure $(H, \cZ, \ell)$-model, each component has to intersect a set of $\cZ$, while in an ordinary $(H, \cZ, \ell)$-model, a component does not necessarily intersect a set of $\cZ$. Because of this property, when we have a separation $(A,B)$ where $B-V(A)$ contains few sets of $\cZ$ but contains a large grid, we may find an irrelevant vertex to reduce the instance. Starting from this observation, we will obtain the Erd\H{o}s-P\'osa\ property for pure $(H, \cZ, \ell)$-models (Theorem~\ref{thm:mainpure}). In Section~\ref{sec:total}, we obtain the Erd\H{o}s-P\'osa\ property for $(H, \cZ, \ell)$-models, using the result for pure $(H, \cZ, \ell)$-models. An observation is that every $(H, \cZ, \ell)$-model contains a pure $(H, \cZ, \ell)$-model, obtained by taking the components that essentially hit $\ell$ sets of $\cZ$. So, any deletion set for pure $(H, \cZ, \ell)$-models hits all $(H, \cZ, \ell)$-models as well.
When a given graph has a large grid model (with a suitable separation) and $k$ pairwise vertex-disjoint pure $(H, \cZ, \ell)$-models, we complete these models to $(H, \cZ, \ell)$-models by taking the remaining components from the large grid model. This completes the argument for the Erd\H{o}s-P\'osa\ property of $(H, \cZ, \ell)$-models. \begin{figure} \caption{There are no two pairwise vertex-disjoint $H$-models meeting all of $Z_1, Z_2, Z_3$.} \label{fig:counterex1} \end{figure} \section{Preliminaries}\label{sec:prelim} All graphs in this paper are simple, finite and undirected. For a graph $G$, we denote by $V(G)$ and $E(G)$ the vertex set and the edge set of $G$, respectively. For $S\subseteq V(G)$, we denote by $G[S]$ the subgraph of $G$ induced by $S$. For $S\subseteq V(G)$ and $v\in V(G)$, let $G- S$ be the graph obtained from $G$ by removing all vertices in $S$, and let $G- v:=G- \{v\}$. For two graphs $G$ and $H$, we define $G\cap H$ as the graph on the vertex set $V(G)\cap V(H)$ and the edge set $E(G)\cap E(H)$, and define $G\cup H$ analogously. A \emph{separation of order} $k$ in a graph $G$ is a pair $(A,B)$ of subgraphs of $G$ such that $A\cup B=G$, $E(A)\cap E(B)=\emptyset$, and $\abs{V(A\cap B)}=k$. For two disjoint vertex subsets $A$ and $B$ of a graph $G$, we say that $A$ is \emph{complete} to $B$ if every vertex of $A$ is adjacent to every vertex of $B$. For a positive integer $n$, let $[n]:=\{1, \ldots, n\}$. For two positive integers $m$ and $n$ with $m\le n$, let $[m,n]:=\{m, \ldots, n\}$. For a set $A$, we denote by $2^A$ the set of all subsets of $A$. For positive integers $g$ and $h$, the \emph{$(g\times h)$-grid} is the graph on the vertex set $\{v_{i,j}:i\in [g], j\in [h]\}$ in which $v_{i,j}$ and $v_{i',j'}$ are adjacent if and only if $\abs{i-i'}+\abs{j-j'}=1$. For each $i\in [g]$, we call $\{v_{i,1}, \ldots, v_{i,h}\}$ \emph{the $i$-th row} of $G$, and define its columns similarly.
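As a quick illustration of the grid definition (a worked example of ours, not part of the later arguments):

```latex
% The (2 x 3)-grid: vertices v_{i,j} with i in [2] and j in [3].
In the $(2\times 3)$-grid, the vertex $v_{1,2}$ is adjacent exactly to
$v_{1,1}$, $v_{1,3}$, and $v_{2,2}$, since each of these index pairs
differs from $(1,2)$ by $1$ in exactly one coordinate. Its first row is
$\{v_{1,1}, v_{1,2}, v_{1,3}\}$ and its second column is
$\{v_{1,2}, v_{2,2}\}$.
```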
We denote by $\cG_g$ the $(g\times g)$-grid graph, and for a positive integer $\ell$, we denote by $\ell\cdot \cG_g$ the disjoint union of $\ell$ copies of the $(g\times g)$-grid graph. A graph is \emph{planar} if it can be embedded in the plane without crossing edges. A graph $H$ is a \emph{minor} of $G$ if $H$ can be obtained from a subgraph of $G$ by contracting edges. It is well known that $H$ is a minor of $G$ if and only if $G$ has an $H$-model. \paragraph{\bf Operations on multisets.} A \emph{multiset} is a collection in which elements may appear with repetition. In particular, for a rooted graph $(G, \cZ)$, we consider $\cZ$ as a multiset. For a multiset $\cZ$, we denote by $\multiabs{\cZ}$ the number of sets in $\cZ$, which counts elements with multiplicity and does not count a possible empty set. For example, $\multiabs{ \{A, B, B, C, C, \emptyset\} }=5$. Note that for an ordinary set $A$, we use $\abs{A}$ for the size of $A$ in the usual sense. Let $\cZ$ be a multiset of subsets of a set $A$. For a subset $B$ of $A$, we define $\cZ|_{B}:= \{X\cap B: X\in \cZ\}$ and $\cZ\setminus B:=\{X\setminus B:X\in \cZ\}$. For convenience, when $(G, \cZ)$ is a rooted graph and $H$ is a subgraph of $G$, we write $\cZ|_{H}:=\cZ|_{V(H)}$ and $\cZ\setminus H:=\cZ\setminus V(H)$. \paragraph{\bf Tree-width.} A \emph{tree-decomposition} of a graph $G$ is a pair $(T,\cB)$ of a tree $T$ and a family $\cB=\{B_t\}_{t\in V(T)}$ of vertex sets $B_t\subseteq V(G)$, called \emph{bags}, satisfying the following three conditions: \begin{enumerate} \item[(T1)] $V(G)=\bigcup_{t\in V(T)}B_t$. \item[(T2)] For every edge $uv$ of $G$, there exists a vertex $t$ of $T$ such that $\{u, v\}\subseteq B_t$. \item[(T3)] For $t_1$, $t_2$, and $t_3\in V(T)$, $B_{t_1}\cap B_{t_3}\subseteq B_{t_2}$ whenever $t_2$ is on the path from $t_1$ to $t_3$. \end{enumerate} The \emph{width} of a tree-decomposition $(T,\cB)$ is $\max\{ \abs{B_{t}}-1:t\in V(T)\}$.
The \emph{tree-width} of $G$, denoted by $\tw(G)$, is the minimum width over all tree-decompositions of $G$. \paragraph{\bf Grid minor-models.} In the course of the Graph Minors Project, Robertson and Seymour~\cite{RobertsonS1986} showed that every graph with sufficiently large tree-width contains a large grid as a minor. Our result is also based on this theorem. \begin{theorem}[Grid Minor Theorem~\cite{RobertsonS1986}]\label{thm:gridtheorem} For all $g\ge 1$, there exists $\kappa (g)\ge 1$ such that every graph of tree-width at least $\kappa (g)$ contains a minor isomorphic to $\cG_g$; in other words, it contains a $\cG_g$-model. \end{theorem} The original function $\kappa (g)$ due to Robertson and Seymour was a tower of exponential functions. Chuzhoy~\cite{Chuznoy2016} recently announced that $\kappa (g)$ can be taken to be $\mathcal{O}(g^{19}\operatorname{poly}\log g)$. This polynomial bound gives a polynomial bound on the function $f$ in Theorem~\ref{thm:main}. We also use the known result that if $H$ is planar and $n\ge 14\abs{V(H)}\ge 2\abs{V(H)}+4\abs{E(H)}$, then $\cG_{n}$ contains an $H$-model~\cite{RobertsonST1994}. Suppose that a graph $G$ contains a $\cG_{n}$-model. When we remove $k$ vertices contained in the model, we can obtain a $\cG_{n-k}$-model in a controlled way. The following lemma describes how such a smaller grid model can be taken. \begin{lemma}\label{lem:restrictgrid} Let $n>k\ge 0$. Let $G$ be a graph having a $\cG_{n}$-model $H$ with a model function $\eta$, and let $S\subseteq V(G)$ with $\abs{S}=k$. Then $G-S$ contains a $\cG_{n-k}$-model that is contained in $H$ and contains all rows and columns of $H$ that do not contain a vertex of $S$. \end{lemma} \begin{proof} Let $i_1<i_2< \cdots <i_a$ be the indices of all rows of $H$ that do not contain a vertex of $S$.
Similarly, let $j_1<j_2< \cdots <j_b$ be the indices of all columns of $H$ that do not contain a vertex of $S$. Clearly, $a\ge n-k$ and $b\ge n-k$. Let $i_0=j_0=0$. For $x\in [a]$ and $y\in [b]$, we define \begin{displaymath} \alpha(v_{x,y}): = \left\{ \begin{array}{ll} H[\bigcup_{i_{x-1}< x'\le i_x} V(\eta(v_{x',j_y}))\cup \bigcup_{j_{y-1}< y'\le j_y} V(\eta(v_{i_x, y'}))] \\ \qquad \qquad \qquad \textrm{if $x\in [a-1]$ and $y\in [b-1]$}\\ H[\bigcup_{i_{x-1}< x'\le i_x} V(\eta(v_{x',j_y}))\cup \bigcup_{j_{y-1}< y'\le n} V(\eta(v_{i_x, y'}))] \\ \qquad \qquad \qquad \textrm{if $x\in [a-1]$ and $y=b$}\\ H[\bigcup_{i_{x-1}< x'\le n} V(\eta(v_{x',j_y}))\cup \bigcup_{j_{y-1}< y'\le j_y} V(\eta(v_{i_x, y'}))] \\ \qquad \qquad \qquad \textrm{if $x=a$ and $y\in [b-1]$}\\ H[\bigcup_{i_{x-1}< x'\le n} V(\eta(v_{x',j_y}))\cup \bigcup_{j_{y-1}< y'\le n} V(\eta(v_{i_x, y'}))] \\ \qquad \qquad \qquad \textrm{if $x=a$ and $y=b$.}\\ \end{array} \right. \end{displaymath} These vertex-models, together with the edges of $H$ crossing between them, induce an $(a\times b)$-grid model that contains all rows and columns of $H$ not containing a vertex of $S$. Since $a\ge n-k$ and $b\ge n-k$, by merging consecutive columns or rows if necessary, it also induces a $\cG_{n-k}$-model. \end{proof} \section{Finding a rooted grid model}\label{sec:colorfulrootedgrid} In this section, we introduce a grid model with additional conditions, called a \emph{$(\cZ,k,\ell)$-rooted grid model}. The advantage of this notion is that, for a planar graph $H$ with at least $\ell-1$ connected components, every $(\cZ,k,\ell)$-rooted grid model of sufficiently large order contains many vertex-disjoint $(H, \cZ, \ell)$-models. Let $(G,\cZ=\{Z_i:i\in [m]\})$ be a rooted graph.
For a positive integer $k$, a vertex set $\{w_i:i\in [n]\}$ in $G$ is said to \emph{admit a $(\cZ, k)$-partition} if there exist a partition $L_1, \ldots, L_x$ of $\{w_i: i\in [n]\}$ and an injection $\gamma:[x]\rightarrow[m]$ such that for each $i\in [x]$, $\abs{L_i}\le k$ and $L_i\subseteq Z_{\gamma (i)}$. For positive integers $g, k, \ell$ with $g\ge k\ell$, we define a \emph{model function of a $(\cZ, k, \ell)$-rooted grid model of order $g$} as a model function $\eta$ of $\cG_g$ such that for each $i\in [k\ell]$, $V(\eta(v_{1,i}))$ contains a vertex $w_i$ and $\{w_i:i\in [k\ell]\}$ admits a $(\cZ, k)$-partition. We call $w_1, \ldots, w_{k\ell}$ the \emph{root vertices} of the model. The image of such a model function $\eta$ is called a \emph{$(\cZ,k,\ell)$-rooted grid model of order $g$}. A $(\emptyset, k, \ell)$-rooted grid model of order $g$ is simply a model of $\cG_g$. We prove the following. \begin{theorem}\label{thm:rootedgridminor} Let $g,k,\ell$, and $n$ be positive integers with $g\ge k\ell$ and $n\ge g(k^2 \ell^2+1)+k\ell$. Every rooted graph $(G, \cZ)$ having a $\cG_n$-model contains either \begin{enumerate}[(1)] \item a separation $(A,B)$ of order less than $k(\ell-\multiabs{\cZ\setminus A})$ where $\multiabs{\cZ\setminus A}\le \ell-1$ and $B-V(A)$ contains a $\cG_{n-\abs{V(A\cap B)}}$-model, or \item a $(\cZ, k, \ell)$-rooted grid model of order $g$. \end{enumerate} \end{theorem} To prove Theorem~\ref{thm:rootedgridminor}, we first prove a related variation of Menger's theorem. For positive integers $k$ and $n$, a set of pairwise vertex-disjoint paths $P_1, \ldots, P_n$ from $\bigcup_{Z\in \cZ}Z$ to $Y$ is called \emph{a $(\cZ,Y,k)$-linkage of order $n$} if the set of all end vertices of $P_1, \ldots, P_n$ in $\bigcup_{Z\in \cZ} Z$ admits a $(\cZ,k)$-partition. \begin{proposition}\label{prop:mengervar} Let $k$ and $\ell$ be positive integers.
Every rooted graph $(G,\cZ)$ with $Y\subseteq V(G)$ contains either \begin{enumerate}[(1)] \item a separation $(A,B)$ of order less than $k(\ell-\multiabs{\cZ\setminus A})$ such that $Y\subseteq V(B)$ and $\multiabs{\cZ\setminus A}\le \ell-1$, or \item a $(\cZ,Y,k)$-linkage of order $k\ell$. \end{enumerate} \end{proposition} \begin{proof} Let $\cZ:=\{Z_i:i\in [m]\}$. We obtain a graph $G'$ from $G$ as follows: \begin{itemize} \item for each $i\in [m]$, add a vertex set $W_i$ of size $k$ and make $W_i$ complete to $Z_i$. \end{itemize} It is not hard to observe that if there are $k\ell$ pairwise vertex-disjoint paths from $\bigcup_{i\in [m]}W_i$ to $Y$ in $G'$, then there is a $(\cZ,Y,k)$-linkage of order $k\ell$ in $G$. Thus, by Menger's theorem, we may assume that there is a separation $(C,D)$ in $G'$ of order less than $k\ell$ with $\bigcup_{i\in [m]} W_i\subseteq V(C)$ and $Y\subseteq V(D)$. We claim that there is a separation $(A,B)$ as described in (1). If $Z_j\setminus V(C)\neq\emptyset$ for some $j\in [m]$, then some vertex of $Z_j$ is contained in $V(D)\setminus V(C)$, and thus $W_j$ must be contained in $V(D)$, as $W_j$ is complete to $Z_j$. Since $(C,D)$ has order less than $k\ell$, we have \[\multiabs{\cZ\setminus C}\le \ell-1 .\] Moreover, for every $j$ with $Z_j\setminus V(C)\neq \emptyset$, all vertices of $W_j$ are contained in $V(C\cap D)$, because $\bigcup_{i\in [m]} W_i\subseteq V(C)$. Thus, if we take the restriction $(C\cap G,D\cap G)$ of $(C,D)$ to $G$, then at least $k\multiabs{\cZ\setminus C}$ vertices are removed from the separator $V(C\cap D)$. So, $(C\cap G, D\cap G)$ is a separation in $G$ of order less than $k(\ell-\multiabs{\cZ\setminus (C \cap G)})$ such that $Y\subseteq V(D\cap G)$ and $\multiabs{\cZ\setminus (C\cap G)}=\multiabs{\cZ\setminus C}\le \ell-1$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:rootedgridminor}] Let $\cZ:=\{Z_i: i\in [m]\}$ and let $H$ be the given $\cG_n$-model with a model function $\eta$.
We say that the image $\bigcup_{v\in R} \eta(v)$ of a column $R$ of $\cG_n$ is a column of $H$. We will mark a set of columns of $H$ and use a block of consecutive unmarked columns to construct the required grid model in (2). \begin{claim}\label{claim:seqcolumn} For $t\le \frac{n}{k\ell}$, $G$ contains a separation described in (1) or a sequence $(\cP_1, \cR_1), \ldots, (\cP_{t}, \cR_{t})$ where \begin{itemize} \item $\cR_i$ is a set of $k\ell$ columns of $H$, and for $i\neq j$, $\cR_i$ and $\cR_j$ are disjoint, \item $\cP_i$ is a $(\cZ,T,k)$-linkage of order $k\ell$ for some set $T$ of $k\ell$ vertices contained in pairwise distinct columns of $\cR_i$, \item for every column $R\notin \bigcup_{i\in [t]} \cR_i$, none of the paths in $\bigcup_{i\in [t]}\cP_i$ meets $R$. \end{itemize} \end{claim} \begin{clproof} We inductively find a sequence $(\cP_1, \cR_1), \ldots, (\cP_{t}, \cR_{t})$, assuming a separation described in (1) does not exist. Suppose that there is such a sequence $(\cP_1, \cR_1), \ldots, (\cP_{t-1}, \cR_{t-1})$. Choose a set $\cR$ of $k\ell$ arbitrary columns not in $\bigcup_{i\in [t-1]}\cR_i$. Such a set of columns exists as $n\ge t k\ell$. We choose $k\ell$ vertices from distinct columns of $\cR$, and call this set $T$. By Proposition~\ref{prop:mengervar}, $G$ contains a separation $(A,B)$ of order less than $k(\ell-\multiabs{\cZ\setminus A})$ where $T\subseteq V(B)$ and $\multiabs{\cZ\setminus A}\le \ell-1$, or a $(\cZ,T,k)$-linkage of order $k\ell$ in $G$. Suppose the former separation exists. By Lemma~\ref{lem:restrictgrid}, $G-V(A\cap B)$ contains a $\cG_{n-\abs{V(A\cap B)}}$-model $H'$ that contains all rows and columns of $H$ that do not contain a vertex of $V(A\cap B)$. Note that $\abs{V(A\cap B)}<k\ell = \abs{T}$. Since we have chosen $T$ from $k\ell$ distinct columns of $H$, $H$ contains a column that contains a vertex of $T$ but does not contain a vertex of $V(A\cap B)$. Thus, this column is contained in $H'$.
It implies that $H'$ is contained in $B- V(A)$. As $\abs{V(A\cap B)}< k\ell$, it follows that $(A,B)$ is a separation described in (1), a contradiction. Therefore, $G$ contains a $(\cZ,T,k)$-linkage $\cP$ of order $k\ell$. To guarantee the last condition of the claim, pick a path $P\in \cP$, and if possible, shorten it so that its new second end vertex is in a column not in $\bigcup_{i\in [t-1]}\cR_i$ and this column is different from those containing the second end vertices of the other paths in $\cP$. We perform such shortenings as long as possible. If no further shortening is possible, then we let $\cP_{t}=\cP$ and let $\cR_{t}$ be the set of columns containing the second end vertices of the paths in $\cP$. Now, it is clear that the paths in $\cP_{t}$ do not visit any column not in $\bigcup_{i\in [t]}\cR_i$. \end{clproof} Let $(\cP_1, \cR_1), \ldots, (\cP_{k\ell}, \cR_{k\ell})$ be a sequence given by Claim~\ref{claim:seqcolumn}. Since $n\ge g(k^2\ell^2+1)+k\ell$, there exists $p$ with $k\ell\le p\le n-g$ such that for every $1\le i\le g$, the $(p+i)$-th column is not in $\bigcup_{j\in [k\ell]}\cR_j$. Let \begin{align*} X:=\bigcup_{i\in [k\ell+1,2k\ell]} \eta(v_{i, p+1}), \qquad D:=\bigcup_{i\in [k\ell+1,n]}\bigcup_{j\in [g]} \eta(v_{i, p+j}). \end{align*} Let $G'$ be the graph obtained from $G$ by deleting $D\setminus X$ and contracting the set $\eta(v_{i, p+1})$ to a vertex $w_{i, p+1}$ for each $i\in [k\ell+1, 2k\ell]$. Let $W:=\{w_{i, p+1}:i\in [k\ell+1, 2k\ell]\}$ and $\cZ':=\cZ|_{V(G)\setminus D}$. \begin{claim} There is no separation $(A,B)$ in $G'$ of order less than $k(\ell-\multiabs{\cZ'\setminus A})$ such that $W\subseteq V(B)$ and $\multiabs{\cZ'\setminus A}\le \ell-1$. \end{claim} \begin{clproof} Assume that there is such a separation $(A,B)$ in $G'$. Since the sets of columns $\cR_1, \ldots, \cR_{k\ell}$ are disjoint and $\abs{V(A\cap B)}<k\ell$, there is an integer $1\le s\le k\ell$ such that $V(A\cap B)$ is disjoint from all columns in $\cR_s$.
First suppose that all columns in $\cR_s$ are contained in $B- V(A)$. Since $\abs{V(A\cap B)}\le k(\ell-\multiabs{\cZ'\setminus A})-1$, among the paths in $\cP_s$, there are at least $k\multiabs{\cZ'\setminus A} +1$ paths fully contained in $B- V(A)$. On the other hand, by the definition of $(\cZ,T,k)$-linkages, the set of all end vertices of paths in $\cP_s$ on the vertex set $\bigcup_{Z\in \cZ} Z$ admits a $(\cZ,k)$-partition, that is, \begin{itemize} \item there exist a partition $L_1, \ldots, L_x$ of the end vertices in $\bigcup_{i\in [m]} Z_i$ and an injection $\gamma:[x]\rightarrow[m]$ where for each $i\in [x]$, $\abs{L_i}\le k$ and $L_i$ is contained in $Z_{\gamma(i)}$. \end{itemize} Since each $L_i$ has size at most $k$, at least $\multiabs{\cZ'\setminus A} +1$ paths of $\cP_s$ are fully contained in $B- V(A)$ and have end vertices in pairwise distinct sets of $\{L_i: i\in [x]\}$. This implies that $\multiabs{\cZ'\setminus A}\ge \multiabs{\cZ'\setminus A} +1$, a contradiction. We conclude that there is a column $R$ in $\cR_s$ that is contained in $A- V(B)$. We observe that there are $k\ell$ vertex-disjoint paths from $R$ to $W$ in $G'$. If $R$ is the $i$-th column of $H$ for some $i\le p$, then we can simply use paths along the $(k\ell+1)$-th, $\ldots$, $(2k\ell)$-th rows. Assume that $R$ is the $i$-th column where $i\ge p+1$. Then for each $t\in [k\ell]$, we construct the $t$-th path so that it starts in $\eta(v_{t,i})$, goes to $\eta(v_{t, p+t-k\ell})$, then goes to $\eta(v_{2k\ell+1-t, p+t-k\ell})$, and then terminates in $w_{2k\ell+1-t, p+1}$. This is possible because $p\ge k\ell$. One of these $k\ell$ paths is disjoint from $V(A\cap B)$, hence there is a vertex of $W$ in $V(A)\setminus V(B)$, contradicting the assumption that $W\subseteq V(B)$. \end{clproof} Therefore, by Proposition~\ref{prop:mengervar}, $G'$ contains a $(\cZ',W,k)$-linkage $\cP$ of order $k\ell$.
Let $\eta'$ be a model of $\cG_g$ where $\eta'(v_{i,j})=\eta(v_{k\ell+i, p+j})$. This model can be extended by the paths in $\cP$, and it satisfies the conditions of the required model. \end{proof} We now show that every sufficiently large $(\cZ,k,\ell)$-rooted grid model contains $k$ pairwise vertex-disjoint $(H, \cZ, \ell)$-models. \begin{lemma}\label{lem:colorfulgrid} Let $k, \ell$, and $h$ be positive integers. Every rooted graph $(G,\cZ)$ having a $(\cZ,k, \ell)$-rooted grid model of order $k\ell(h+2) +1$ contains $k$ pairwise vertex-disjoint $(\ell\cdot\cG_{h}, \cZ, \ell)$-models. Moreover, if $\ell\ge 2$, then $G$ contains $k$ pairwise vertex-disjoint $((\ell-1)\cdot\cG_{h}, \cZ, \ell)$-models. \end{lemma} The usefulness of the $(\cZ, k)$-partition is given in the next lemma. \begin{lemma}\label{lem:zkpartition} Let $(G,\cZ=\{Z_1, \ldots, Z_m\})$ be a rooted graph. Every $(\cZ, k)$-partition of a vertex set $\{w_1, \ldots, w_{k\ell}\}$ admits a partition $I_1, \ldots, I_k$ of the index set $\{1,\ldots, k\ell\}$ such that for each $j\in [k]$, \begin{itemize} \item $\abs{I_j}=\ell$, \item there is an injection $\beta_j:I_j\rightarrow[m]$ where for each $i\in I_j$, $w_i$ is contained in $Z_{\beta_j (i)}$, and \item there are two integers $a_j, b_j\in I_j$ with $a_j<b_j$ such that there is no integer $c$ in $\bigcup_{i\in [j,k]}I_i$ with $a_j<c<b_j$. \end{itemize} \end{lemma} \begin{proof} We inductively find such a partition $I_1, \ldots, I_k$ of the index set $[k\ell]$. If $k=1$, then by the definition of a $(\cZ, k)$-partition, there is an injection $\gamma:\{1, \ldots, \ell\}\rightarrow[m]$ where for each $i\in [\ell]$, $w_i\in Z_{\gamma (i)}$. Thus, $I_1=\{1,\ldots, \ell\}$ satisfies the property. Let us assume that $k\ge 2$, and let $L_1, \ldots, L_x$ be a partition of $\{w_1, \ldots, w_{k\ell}\}$ and $\gamma:[x]\rightarrow[m]$ be an injection where for each $i\in [x]$, $\abs{L_i}\le k$ and $L_i$ is contained in $Z_{\gamma (i)}$.
Let us choose a vertex set $S$ of size $\ell$ from $\bigcup_{j\in [x]}L_j$ so that \begin{itemize} \item for each $j\in [x]$, $\abs{S\cap L_j}\le 1$, \item if $\abs{L_i}=k$, then $S\cap L_i\neq \emptyset$, \item there are two vertices $w_p$ and $w_{p+1}$ in $S$ with consecutive indices. \end{itemize} If there is a set $L_i$ with $\abs{L_i}=k$, then we first choose two consecutive vertices, one contained in $L_i$ and the other not in $L_i$, and then choose one vertex from each set $L$ of $\{L_1, \ldots, L_x\}$ with $\abs{L}=k$ that has not been selected before. If there is no set $L_i$ with $\abs{L_i}=k$, then we choose any two consecutive vertices that are contained in two distinct sets of $\cZ$. Note that the number of selected vertices cannot exceed $\ell$. If necessary, by adding vertices from sets of $\{L_1, \ldots, L_x\}$ which have not been selected before, we can complete $S$ so that $\abs{S}=\ell$, there are two vertices with consecutive indices, and there is an injection $\beta_1$ from the index set of $S$ to $[m]$ where for each $w_i\in S$, $w_i$ is contained in $Z_{\beta_1 (i)}$. By taking a restriction, we obtain a partition $L_1', \ldots, L_x'$ of $\{w_i:i\in [k\ell]\}\setminus S$ and an injection $\gamma':[x]\rightarrow[m]$ where $\abs{L_i'}\le k-1$ and $L_i'$ is contained in $Z_{\gamma'(i)}$ for each $i\in [x]$. We give a new ordering $w'_1, \ldots, w'_{(k-1)\ell}$ of the vertex set $\{w_i:i\in [k\ell]\}\setminus S$ following the order in $\{w_i:i\in [k\ell]\}$. By induction, there exists a partition $I_2', \ldots, I_{k}'$ of $[(k-1)\ell]$ such that for each $j\in [2,k]$, \begin{itemize} \item $\abs{I_j'}=\ell$ and there is an injection $\beta_j:I_j'\rightarrow[m]$ where for each $i\in I_j'$, $w_i'$ is contained in $Z_{\beta_j (i)}$, and \item there are two integers $a_j, b_j\in I_j'$ with $a_j<b_j$ such that there is no integer $c$ in $\bigcup_{j\le i\le k}I_i'$ with $a_j<c<b_j$.
\end{itemize} This gives a corresponding partition $I_2, \ldots, I_k$ of the index set of $\{w_i:i\in [k\ell]\}\setminus S$. Let $I_1$ be the set of indices of vertices in $S$. Then $\{I_i:i\in [k]\}$ is a required partition. \end{proof} \begin{figure} \caption{A $(\{Z_1, Z_2, Z_3\}, k, \ell)$-rooted grid model and the subgraphs used in the proof of Lemma~\ref{lem:colorfulgrid}.} \label{fig:rootedexpan} \end{figure} \begin{proof}[Proof of Lemma~\ref{lem:colorfulgrid}] We first construct $k$ pairwise vertex-disjoint $(\ell\cdot\cG_{h}, \cZ, \ell)$-models. Let $\eta$ be the model function of the $(\cZ,k, \ell)$-rooted grid model and let $w_1, \ldots, w_{k\ell}$ be the root vertices of the model where $w_i\in \eta(v_{i,1})$ for each $i\in [k\ell]$. By Lemma~\ref{lem:zkpartition}, there exists a partition $I_1, \ldots, I_k$ of $[k\ell]$ such that for each $j\in [k]$, \begin{itemize} \item $\abs{I_j}=\ell$ and there is an injection $\beta_j:I_j\rightarrow[m]$ where for each $i\in I_j$, $w_i$ is contained in $Z_{\beta_j (i)}$, and \item there are two integers $a_j, b_j\in I_j$ with $a_j<b_j$ such that there is no integer $c$ in $\bigcup_{i\in [j,k]}I_i$ with $a_j<c<b_j$. \end{itemize} For each $j\in [k\ell]$, we define a subgraph $Q_j$ of $\cG_{k\ell(h+2)+1}$ as \begin{align*} Q_j:=(\cG_{k\ell(h+2)+1})[&\{v_{a,j}:a\in [1,k\ell+2-j]\} \\ \cup &\{v_{k\ell+2-j,b}:b\in [j,k\ell+1+h(j-1)]\} \\ \cup &\{v_{a,k\ell+1+h(j-1)}:a\in [k\ell+2-j,k\ell+2]\} \\ \cup &\{v_{a,b}: a\in [k\ell+2,k\ell+1+h], \\ & \qquad \quad b\in [k\ell+1+h(j-1),k\ell+hj]\}]. \end{align*} Note that each $Q_j$ consists of a path from a vertex of $\{w_1, \ldots, w_{k\ell}\}$ to a $\cG_{h}$-model. See Figure~\ref{fig:rootedexpan} for an illustration. For each $j\in [k]$, let \[G_j:=\eta\left( \bigcup_{i\in I_j} V(Q_i) \right).\] It is not hard to observe that each $G_j$ is an $(\ell\cdot \cG_{h}, \cZ, \ell)$-model. Thus, we obtain $k$ pairwise vertex-disjoint $(\ell\cdot \cG_{h}, \cZ, \ell)$-models. Now, we prove the second statement. Let us assume that $\ell\ge 2$.
We first construct the subgraphs $Q_j$ in the same way. Then, for each $j\in [k]$, we add a path $P_j$ as follows: $P_j$ starts in $v_{k\ell+1+h, k\ell+1+h(a_j-1)}$, goes to $v_{k\ell+1+h+j, k\ell+1+h(a_j-1)}$, then goes to $v_{k\ell+1+h+j, k\ell+1+h(b_j-1)}$, and then terminates in $v_{k\ell+1+h, k\ell+1+h(b_j-1)}$. Note that $P_1, \ldots, P_k$ are pairwise vertex-disjoint in $\cG_{k\ell(h+2)+1}$ because of the second condition on the partition $I_1, \ldots, I_k$. So, for each $j\in [k]$, the union $G_j\cup \eta(V(P_j))$ is a $((\ell-1)\cdot \cG_{h}, \cZ, \ell)$-model, and thus every $(\cZ,k, \ell)$-rooted grid model of order $k\ell(h+2)+1$ contains $k$ pairwise vertex-disjoint $((\ell-1)\cdot \cG_{h}, \cZ, \ell)$-models. \end{proof} \section{Erd\H{o}s-P\'osa property of pure $(H, \cZ, \ell)$-models}\label{sec:pureep} Let $(G, \cZ)$ be a rooted graph and $H$ be a graph. A subgraph $F$ of $G$ is called a \emph{pure $(H, \cZ, \ell)$-model} if there exist a subset $\cH=\{H_1, \ldots, H_t\}$ of the set of connected components of $H$ and a function $\alpha:\{1, \ldots, t\}\rightarrow 2^{\cZ}\setminus \{\emptyset\}$ such that \begin{itemize} \item $F$ is the image of a model $\eta$ of $H_1\cup \cdots \cup H_t$ in $G$, \item for each $i\in [t]$ and each $Z\in \alpha(i)$, $\eta(V(H_i))\cap Z\neq \emptyset$, \item $\alpha(1), \ldots, \alpha(t)$ are pairwise disjoint and $\multiabs{\bigcup_{i\in [t]} \alpha(i)}=\ell$. \end{itemize} A pure $(H, \cZ, \ell)$-model can be seen as a minimal set of connected components of an $(H, \cZ, \ell)$-model that intersect at least $\ell$ sets of $\cZ$. Because of the second and third conditions, every pure $(H, \cZ, \ell)$-model consists of the images of at most $\ell$ connected components of $H$. A pure $(H,\cZ,\ell)$-model may be an $(H, \cZ, \ell)$-model itself when $H$ consists of $\ell-1$ or $\ell$ connected components. In this section, we establish the Erd\H{o}s-P\'osa\ property for pure $(H, \cZ, \ell)$-models.
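To illustrate the definition (a hypothetical instance of ours; the component names $H_1, H_2$ are introduced only for this example):

```latex
% Illustration of a pure (H, Z, 2)-model using one component of H.
Suppose $\ell=2$, $H=H_1\cup H_2$ has two connected components, and
$\cZ=\{Z_1, Z_2\}$. If $G$ contains a model of $H_1$ alone whose image
meets both $Z_1$ and $Z_2$, then this image is a pure
$(H,\cZ,2)$-model, witnessed by $\cH=\{H_1\}$ and
$\alpha(1)=\{Z_1,Z_2\}$; the component $H_2$ need not be realized at
all. In contrast, an $(H,\cZ,2)$-model must realize both $H_1$ and
$H_2$, even if only the image of $H_1$ meets sets of $\cZ$.
```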
The reason for using pure $(H, \cZ, \ell)$-models is that the procedure for finding an irrelevant vertex in a graph of large tree-width works well for pure $(H, \cZ, \ell)$-models. Later, we will show how the problem for $(H, \cZ, \ell)$-models can be reduced to the problem for pure $(H, \cZ, \ell)$-models. In general, it is possible that $G$ has more than $k$ pairwise vertex-disjoint pure $(H, \cZ, \ell)$-models even though it has no $k$ vertex-disjoint $(H, \cZ, \ell)$-models, so the relation is not entirely direct. A vertex set $S$ of $G$ is called a \emph{pure $(H, \cZ, \ell)$-deletion set} if $G- S$ has no pure $(H, \cZ, \ell)$-models. \begin{theorem}\label{thm:mainpure} For a positive integer $\ell$ and a non-empty planar graph $H$ with $\cc(H)\ge \ell-1$, there exists $f^1_{H, \ell}:\mathbb{N}\rightarrow \mathbb{R}$ satisfying the following property. Let $(G,\cZ)$ be a rooted graph and $k$ be a positive integer. Then $G$ contains either $k$ pairwise vertex-disjoint pure $(H, \cZ, \ell)$-models, or a pure $(H, \cZ, \ell)$-deletion set of size at most $f^1_{H, \ell}(k)$. \end{theorem} \subsection{Erd\H{o}s-P\'osa property of $(H, \cZ, \ell)$-models in graphs of bounded tree-width} When the underlying graph has bounded tree-width, any class of graphs each having at most $t$ connected components, for some fixed $t$, has the Erd\H{o}s-P\'osa\ property. This follows from the Erd\H{o}s-P\'osa\ property of subgraphs of a tree consisting of at most $d$ connected components for some fixed $d$, which was proved by Gy{\'a}rf{\'a}s and Lehel~\cite{GyarfasL70}. Later, the bound on a hitting set was improved by Berger~\cite{Berger2004}. For a tree $T$ and a positive integer $d$, a subgraph of $T$ is called a \emph{$d$-subtree} if it consists of at most $d$ connected components. \begin{theorem}[Berger~\cite{Berger2004}]\label{thm:dsubtree} Let $T$ be a tree and let $k$ and $d$ be positive integers. Let $\cF$ be a set of $d$-subtrees of $T$.
Then $T$ contains either $k$ pairwise vertex-disjoint subgraphs in $\cF$, or a vertex set $S$ of size at most $(d^2-d+1)(k-1)$ such that $T- S$ has no subgraphs in $\cF$. \end{theorem} \begin{proposition}\label{prop:eponboundedtw} Let $k,h, \ell$, and $w$ be positive integers and let $H$ be a graph with $h$ vertices. Let $(G,\cZ)$ be a rooted graph with $\tw(G)\le w$. Then $G$ contains either $k$ pairwise vertex-disjoint $(H, \cZ, \ell)$-models, or an $(H, \cZ, \ell)$-deletion set of size at most $(w+1)(h^2-h+1)(k-1)$. Furthermore, the same statement holds for pure $(H, \cZ, \ell)$-models. \end{proposition} \begin{proof} We first prove the statement for usual $(H, \cZ, \ell)$-models. Let $\cH$ be the class of all $(H, \cZ, \ell)$-models. Let $(T, \cB:=\{B_t\}_{t\in V(T)})$ be a tree-decomposition of $G$ of width at most $w$. For $v\in V(G)$, let $\cP(v)$ denote the set of the vertices $t$ in $T$ such that $B_t$ contains $v$. From the definition of tree-decompositions, for each $v\in V(G)$, $T[\cP(v)]$ is connected, and for every edge $xy$ in $G$, there exists $B\in \cB$ containing both $x$ and $y$. It implies that for a connected subgraph $F$ of $G$, $T[\bigcup_{v\in V(F)} \cP(v)]$ is connected. Let $\cF$ be the family of sets $\bigcup_{x\in V(F)} \cP(x)$ for all $F\in \cH$. Observe that for $F_1, F_2\in \cH$ where $\bigcup_{x\in V(F_1)} \cP(x)$ and $\bigcup_{x\in V(F_2)} \cP(x)$ are disjoint, $F_1$ and $F_2$ are vertex-disjoint. For a set $S\in \cF$, $T[S]$ consists of at most $h$ connected components, which means that it is an $h$-subtree. So, if $\cF$ contains $k$ pairwise disjoint sets, then $G$ contains $k$ pairwise vertex-disjoint subgraphs in $\cH$. Thus, we may assume that there are no $k$ pairwise disjoint subsets in $\cF$. Then by Theorem~\ref{thm:dsubtree}, $T$ has a vertex set $W$ of size at most $(h^2-h+1)(k-1)$ such that $W$ meets all sets in $\cF$.
It implies that $\bigcup_{t\in W} B_t$ has at most $(w+1)(h^2-h+1)(k-1)$ vertices and it meets all subgraphs in $\cH$. The same argument holds for pure $(H, \cZ, \ell)$-models. \end{proof} \subsection{Reduction to graphs of bounded tree-width} Let $(G, \cZ)$ be a rooted graph and $H$ be a graph. Let $\tau^*_{H}(G,\cZ, \ell)$ be the minimum size of a pure $(H, \cZ, \ell)$-deletion set of $G$. A vertex $v$ of $G$ is called \emph{irrelevant for pure $(H, \cZ, \ell)$-models} if $\tau^*_{H}(G,\cZ, \ell)=\tau^*_{H}(G- v, \cZ, \ell)$. If there is no confusion from the context, then we simply say that $v$ is an irrelevant vertex. In this subsection, we show that in two simpler cases where $G$ has large tree-width, one can find an irrelevant vertex. This will be used for the base cases of Theorem~\ref{thm:mainpure}. \begin{proposition}\label{prop:irrelevant} Let $g,h$, and $\ell$ be positive integers and $x$ be a non-negative integer such that $g\ge (x^2+14hx+2x+1)(x^2+1)$. Let $(G,\cZ)$ be a rooted graph, $H$ be a planar graph with $h$ vertices and $\cc(H)\ge \ell-1$, and $(A,B)$ be a separation in $G$ of order at most $x$ such that $\multiabs{\cZ\setminus A}=0$ and $B-V(A)$ contains a $\cG_{g}$-model. Then there exists an irrelevant vertex for pure $(H, \cZ, \ell)$-models. \end{proposition} \begin{proof} We proceed by induction on $x$. If $x=0$, then a pure $(H, \cZ, \ell)$-model cannot have a vertex of $B- V(A)$ because every connected component of a pure $(H, \cZ, \ell)$-model contains at least one vertex of $\bigcup_{Z\in \cZ} Z$. Thus, every vertex in $B- V(A)$ is irrelevant. We may assume that $x\ge 1$. Let $W:=V(A\cap B)$ and $x':=x^2+14hx+2x$. Note that $g\ge x'(x^2+1)+x^2$.
By applying Theorem~\ref{thm:rootedgridminor} to $B$ (with $(\cZ, g, k, \ell)\leftarrow (\{W\}, x', \abs{W}, 1)$), we deduce that $B$ contains either \begin{enumerate}[(1)] \item a separation $(A',B')$ of order less than $\abs{W}$ in $B$ such that $W\subseteq V(A')$ and $B'- V(A')$ contains a $\cG_{g-\abs{V(A'\cap B')}}$-model, or \item a $(\{W\}, \abs{W}, 1)$-rooted grid model of order $x'$. \end{enumerate} Suppose that there is a separation $(A',B')$ described in (1). Since $W\subseteq V(A')$, $((A- E(A\cap B))\cup A', B')$ is a separation in $G$ of order at most $\abs{W}-1\le x-1$ such that $\multiabs{\cZ\setminus (A\cup A')}=0$ and $B'- V(A\cup A')$ contains a $\cG_{g-\abs{V(A'\cap B')}}$-model. Thus $B'-V(A\cup A')$ contains a $\cG_{g-(x-1)}$-model. Note that \begin{align*} g-(x-1)&\ge (x^2+14hx+2x+1)((x-1)^2+1) \\ &\ge ((x-1)^2+14h(x-1)+2(x-1)+1)((x-1)^2+1). \end{align*} Thus, by the induction hypothesis, $G$ contains an irrelevant vertex. So, we may assume that $B$ contains a $(\{W\}, \abs{W}, 1)$-rooted grid model of order $x'$, say $M$. This means that there is a model function $\eta_1$ of $\cG_{x'}$ for $M$ such that for each $i\in [\abs{W}]$, $V(\eta_1(v_{1,i}))$ contains a vertex of $W$. We choose a vertex $w$ in $V(\eta_1(v_{x',x'}))$. We show that $w$ is an irrelevant vertex. To show this, it is sufficient to check that $\tau_{H}^*(G,\cZ, \ell)\le \tau_{H}^*(G- w, \cZ, \ell)$. Let $T$ be a pure $(H, \cZ, \ell)$-deletion set of $G- w$. We claim that $G$ contains a pure $(H, \cZ, \ell)$-deletion set of size at most $\abs{T}$. If $G- T$ has no pure $(H, \cZ, \ell)$-models, then we are done. We may assume that $G- T$ has a pure $(H, \cZ, \ell)$-model. Let $T_A:=T\cap V(A)$ and $T_B:=T\cap V(B)$. Let $W'$ be a minimum size subset of $W\setminus T$ such that $G- (T_A\cup W')$ contains no pure $(H, \cZ, \ell)$-models. Such a set exists, as $G- (T_A\cup W)$ has no pure $(H, \cZ, \ell)$-models.
If $\abs{T\setminus V(A)}\ge \abs{W'}$, then $T_A\cup W'$ is a pure $(H, \cZ, \ell)$-deletion set of $G$ of size at most $\abs{T_A}+\abs{W'}\le \abs{T}$. So, we may assume that $\abs{T\setminus V(A)}\le \abs{W'}-1\le \abs{W}-1$. \begin{figure} \caption{The image of $\eta_1$ in Proposition~\ref{prop:irrelevant}.\label{fig:irrelevant}} \end{figure} We prove that $G- (T\cup \{w\})$ has a pure $(H, \cZ, \ell)$-model, which yields a contradiction. \begin{claim} $G- (T\cup \{w\})$ has a pure $(H, \cZ, \ell)$-model. \end{claim} \begin{clproof} Note that all vertices of $W$ are contained in the first row of $M$ and $w$ is contained in the last column of $M$. See Figure~\ref{fig:irrelevant}. Observe that for every $i\in [\abs{W}+1, x']$, the $i$-th column of $M$ does not contain a vertex of $W$. Since $w$ is in the last column of $M$, there are at most $\abs{T\setminus V(A)}\le x-1$ columns of $M$ from the $(\abs{W}+1)$-th column to the $(x'-1)$-th column that have a vertex in $T\cup \{w\}$. Because \[x'-\abs{W}-\abs{T\setminus V(A)}-1\ge x(x+14h+2)-2x=x(x+14h),\] there is a set of $x+14h$ consecutive columns of $M$ that have no vertices in $T\cup W\cup \{w\}$. A similar argument holds for rows as well. So, there is a set of $x+14h$ consecutive rows of $M$ that have no vertices in $T\cup W\cup \{w\}$. It implies that there exist $1\le p\le x'-1-(x+14h)$ and $x\le q\le x'-1-(x+14h)$ such that for each $1\le i,j\le x+14h$, the $(p+i)$-th row and the $(q+j)$-th column do not contain a vertex of $T\cup W\cup \{w\}$. We will use the grid model induced by the sets $V(\eta_1(v_{p+i,q+j}))$ for $1\le i,j\le x+14h$. Let $G'$ be this subgraph. Note that $G'$ contains an $H$-model, as $G'$ is a grid model of order at least $14h$.
Now, we observe that there are $\abs{W}$ vertex-disjoint paths from $W$ to \[\eta_1(v_{p+1, q+1}), \ldots, \eta_1(v_{p+\abs{W}, q+1}).\] For each $i\in [\abs{W}]$, we construct the $i$-th path so that it starts in $\eta_1(v_{1,i})\cap W$, goes to $\eta_1(v_{p+1+\abs{W}-i,i})$, and then terminates in $\eta_1(v_{p+1+\abs{W}-i, q+1})$. Among those paths, there are $\abs{W} - \abs{T_B}$ paths $Q_1, \ldots, Q_{\abs{W} - \abs{T_B}}$ from $W\setminus T$ to $G'$ avoiding $T\cup \{w\}$. Let $C$ be the set of the end vertices of $Q_1, \ldots, Q_{\abs{W} - \abs{T_B}}$ in $W\setminus T$, and let $C':=(W\setminus T)\setminus C$. Since \[\abs{C}=\abs{W}-\abs{T_B}= \abs{W\setminus T}-\abs{T\setminus V(A)}\ge \abs{W\setminus T}-(\abs{W'}-1),\] we have $\abs{C'}\le \abs{W'}-1$. Therefore, by the choice of the set $W'$, $G- (T_A\cup C')$ contains a pure $(H, \cZ, \ell)$-model $F$. If $F$ is contained in $A$, then it contradicts the assumption that $(G-w)-T$ has no pure $(H, \cZ, \ell)$-models. Thus, $F$ contains a vertex of $B-V(A)$. Note that no connected component of $F$ can be fully contained in $B- V(A)$ because $\multiabs{\cZ\setminus A}=0$. Thus each connected component of $F$ containing a vertex of $B-V(A)$ also contains a vertex of $A$, and thus it contains a vertex of $W$. Let $F'$ be the subgraph of $F$ consisting of all the connected components fully contained in $A- V(B)$. Then there exist two disjoint sets $\cZ_1, \cZ_2\subseteq \cZ$ and a proper subset $\cC$ of the set of connected components of $H$ such that \begin{itemize} \item $\abs{\cZ_1}+\abs{\cZ_2}=\ell$, \item $F'$ is an $H[\bigcup_{C\in \cC} V(C)]$-model and $V(F')\cap Z\neq \emptyset$ for all $Z\in \cZ_1$, and \item for each $Z\in \cZ_2$, there is a path from $Z$ to $W\setminus (T\cup C')$ in $A- (T\cup C')$ avoiding the vertices of $F'$.
\end{itemize} Since $\cC$ is a proper subset of the set of connected components of $H$, we can obtain an $H'$-model in $G'$, where $H'$ consists of the connected components of $H$ not contained in $\cC$. Since all vertices of $W\setminus (T\cup C')$ are linked to $G'$ by the paths $Q_1, \ldots, Q_{\abs{W}- \abs{T_B}}$, we have a pure $(H, \cZ, \ell)$-model, which is contained in $G- (T\cup \{w\})$. \end{clproof} This contradicts our assumption that $G-(T\cup \{w\})$ has no pure $(H, \cZ, \ell)$-model. Therefore, $\tau_{H}^*(G,\cZ, \ell)\le \tau_{H}^*(G- w, \cZ, \ell)$ and we conclude that $w$ is an irrelevant vertex. \end{proof} If $H$ is connected and $\ell=2$, then we further analyze the case when $G$ has a separation $(A,B)$ with $\multiabs{\cZ\setminus A}=1$. It will be needed for the base cases. We remark that if $H$ is connected, then pure $(H, \cZ, 2)$-models are just $(H, \cZ, 2)$-models. \begin{proposition}\label{prop:irrelevant2} Let $g$ and $h$ be positive integers and $x$ be a non-negative integer such that $g\ge 2(4x^2+14hx+3x+1)(4x^2+1)$. Let $(G,\cZ)$ be a rooted graph, $H$ be a connected planar graph with $h$ vertices, and $(A,B)$ be a separation in $G$ of order at most $x$ such that $\multiabs{\cZ\setminus A}\le 1$ and $B-V(A)$ contains a $\cG_g$-model. Then $G$ contains an irrelevant vertex $w$ for pure $(H, \cZ, 2)$-models. \end{proposition} \begin{proof} We prove it by induction on $x$. If $x=0$, then no pure $(H, \cZ, 2)$-model has a vertex of $B- V(A)$, as $H$ is connected. Thus, every vertex in $B- V(A)$ is irrelevant, and since $g\ge 1$, there is an irrelevant vertex in $B- V(A)$. Let us assume that $x\ge 1$. Also, if $\multiabs{\cZ\setminus A}=0$, then by Proposition~\ref{prop:irrelevant}, $G$ contains an irrelevant vertex. So, we may assume that $\multiabs{\cZ\setminus A}=1$. Let $W:=V(A\cap B)$ and $x':=2(4x^2+14hx+3x)$. Let $Z_a$ be the unique set in $\cZ$ that intersects $B-V(A)$, and let $Y_a:=Z_a\setminus V(A)$ and $\cZ':=\{W, Y_a\}$.
Note that $g\ge x'(4x^2+1)+4x^2$. By applying Theorem~\ref{thm:rootedgridminor} to $B$ (with $(\cZ, g, k, \ell)\leftarrow (\cZ', x', \abs{W}, 2)$), $B$ contains either \begin{enumerate}[(1)] \item a separation $(A',B')$ of order less than $\abs{W}(2-\multiabs{\cZ'\setminus A'})$ in $B$ such that $\multiabs{\cZ'\setminus A'}\le 1$ and $B'- V(A')$ contains a $\cG_{g-\abs{V(A'\cap B')}}$-model, or \item a $(\cZ', \abs{W}, 2)$-rooted grid model of order $x'$. \end{enumerate} Suppose there is a separation $(A',B')$ described in (1). Since $\multiabs{\cZ'\setminus A'}\le 1$, there are three possibilities: either ($Y_a\setminus V(A')\neq \emptyset$ and $W\subseteq V(A')$), or ($Y_a\subseteq V(A')$ and $W\setminus V(A')\neq \emptyset$), or ($Y_a\cup W\subseteq V(A')$). In each case, we argue that there is an irrelevant vertex. \begin{itemize} \item (Case 1. $Y_a\setminus V(A')\neq \emptyset$ and $W\subseteq V(A')$.) \\ Then $((A- E(A\cap B))\cup A', B')$ is a separation in $G$ of order at most $\abs{W}-1\le x-1$, and $\multiabs{\cZ\setminus (A\cup A')}=1$ and $B'- V(A\cup A')$ contains a $\cG_{g-(x-1)}$-model. Since \begin{align*} g-(x-1)&\ge 2(4x^2+14hx+3x+1)(4x^2+1)-(x-1) \\ &\ge 2(4(x-1)^2+14h(x-1)+3(x-1)+1)(4(x-1)^2+1), \end{align*} by the induction hypothesis, $G$ contains an irrelevant vertex. \item (Case 2. $Y_a\cup W\subseteq V(A')$.) \\ Then $((A- E(A\cap B))\cup A', B')$ is a separation in $G$ of order at most $2\abs{W}-1\le 2x-1$, and $\multiabs{\cZ\setminus (A\cup A')}=0$ and $B'- V(A\cup A')$ contains a $\cG_{g-(2x-1)}$-model. Since \begin{align*} g-(2x-1)&\ge 2(4x^2+14hx+3x+1)(4x^2+1)-(2x-1)\\ &\ge (8x^2+28hx+6x+2)(4x^2+1)-(2x-1) \\ &\ge (4x^2+28hx)(4x^2+1) \\ &\ge (4x^2+28hx-14h)(4x^2-4x+2) \\ &= ((2x-1)^2+14h(2x-1)+2(2x-1)+1)((2x-1)^2+1), \end{align*} by Proposition~\ref{prop:irrelevant}, $G$ contains an irrelevant vertex. \item (Case 3. $Y_a\subseteq V(A')$ and $W\setminus V(A')\neq \emptyset$.)
\\ In this case, $((A- E(A\cap B))\cup A', B')$ is a separation in $G$ of order at most $\abs{W\cup V(A'\cap B')}\le 2x-1$ where $\multiabs{\cZ\setminus (A\cup A')}=0$ and $B'- V(A\cup A')$ contains a $\cG_{g-(2x-1)}$-model. By the same reasoning as in Case 2, $G$ contains an irrelevant vertex. \end{itemize} \vskip 0.3cm Now, we assume that $B$ contains a $(\cZ', \abs{W}, 2)$-rooted grid model of order $x'$, say $M$. So, there is a model function $\eta_1$ of $\cG_{x'}$ for $M$ such that for $1\le i\le 2\abs{W}$, $V(\eta_1(v_{1,i}))$ contains a vertex $w_i$ of $W\cup Y_a$, where $W\subseteq \{w_1, \ldots, w_{2\abs{W}}\}$. We choose a vertex $w$ in $V(\eta_1(v_{x',x'}))$. We show that $w$ is an irrelevant vertex. To show this, it is sufficient to check that $\tau_{H}^*(G,\cZ, 2)\le \tau_{H}^*(G- w, \cZ, 2)$. Let $T$ be a pure $(H, \cZ, 2)$-deletion set of $G- w$. We claim that $G$ contains a pure $(H, \cZ, 2)$-deletion set of size at most $\abs{T}$. If $G- T$ has no pure $(H, \cZ, 2)$-models, then we are done. We may assume that $G- T$ has a pure $(H, \cZ, 2)$-model. Let $T_A:=T\cap V(A)$ and $T_B:=T\cap V(B)$. Let $W'$ be a minimum size subset of $W\setminus T$ such that $G- (T_A\cup W')$ contains no pure $(H, \cZ, 2)$-models. Such a set exists, because $\multiabs{\cZ\setminus A}=1$ and thus $G- (T_A\cup W)$ has no pure $(H, \cZ, 2)$-models. If $\abs{T\setminus V(A)}\ge \abs{W'}$, then $T_A\cup W'$ is a pure $(H, \cZ, 2)$-deletion set of size at most $\abs{T}$. So, we may assume that $\abs{T\setminus V(A)}\le \abs{W'}-1$. We claim that $G- (T\cup \{w\})$ has a pure $(H, \cZ, 2)$-model, which yields a contradiction. \begin{claim} $G- (T\cup \{w\})$ has a pure $(H, \cZ, 2)$-model. \end{claim} \begin{clproof} Note that all vertices of $W$ are contained in the first row of $M$ and $w$ is contained in the last column of $M$.
Since $w$ is in the last column of $M$, there are at most $\abs{T\setminus V(A)}\le x-1$ columns of $M$ from the $(2\abs{W}+1)$-th column to the $(x'-1)$-th column that contain a vertex of $T\setminus V(A)$. Because \begin{align*} x'-2\abs{W}-\abs{T\setminus V(A)}-1&\ge 2x(4x+14h+3)-3x \\ &\ge x(x+14h), \end{align*} there is a set of $x+14h$ consecutive columns of $M$ that have no vertices in $T\cup \{w_1, w_2, \ldots, w_{2\abs{W}}, w\}$. A similar argument holds for rows as well. So, there is a set of $x+14h$ consecutive rows of $M$ that have no vertices in $T\cup W\cup \{w\}$. It implies that there exist $1\le p\le x'-1-(x+14h)$ and $x\le q\le x'-1-(x+14h)$ such that for each $1\le i,j\le x+14h$, the $(p+i)$-th row and the $(q+j)$-th column do not contain a vertex of $T\cup W\cup \{w\}$. We will use the grid model induced by the sets $V(\eta_1(v_{p+i,q+j}))$ for $1\le i,j\le x+14h$. Let $G'$ be this subgraph. Observe that there are $2\abs{W}$ vertex-disjoint paths from $\{w_1, \ldots, w_{2\abs{W}}\}$ to $\eta_1(v_{p+1, q+1}), \ldots, \eta_1(v_{p+2\abs{W}, q+1})$. Among those paths starting from $W$, there are at least $\abs{W}-\abs{T_B}$ paths $Q_1, \ldots, Q_{\abs{W}-\abs{T_B}}$ from $W\setminus T$ to $G'$ avoiding $T\cup \{w\}$. Similarly, among those paths starting from $\{w_1, \ldots, w_{2\abs{W}}\}\setminus W\subseteq Y_a$, there are at least $\abs{W}-\abs{T\setminus V(A)}\ge 1$ paths from $\{w_1, \ldots, w_{2\abs{W}}\}\setminus W$ to $G'$ avoiding $T\cup \{w\}$. Let $R$ be one of the latter paths. Note that $R$ connects the set $Z_a$ and $G'$. Let $C$ be the set of the end vertices of the paths $Q_1, \ldots, Q_{\abs{W}-\abs{T_B}}$ contained in $W\setminus T$, and let $C':=(W\setminus T)\setminus C$. Since \[\abs{C}=\abs{W}-\abs{T_B}= \abs{W\setminus T}-\abs{T\setminus V(A)}\ge \abs{W\setminus T}-(\abs{W'}-1),\] we have $\abs{C'}\le \abs{W'}-1$. Therefore, by the choice of the set $W'$, there is an $(H, \cZ, 2)$-model in $G- (T_A\cup C')$.
In particular, this model should intersect a set in $\cZ\setminus \{Z_a\}$. As $H$ is connected, there is a path $P$ from $\bigcup_{Z\in \cZ\setminus \{Z_a\}} Z$ to $G'$ avoiding $T_A$. Therefore, $G'\cup P\cup R$ contains an $(H, \cZ, 2)$-model, which is contained in $G- (T\cup \{w\})$. \end{clproof} This contradicts our assumption that $G- (T\cup \{w\})$ contains no pure $(H, \cZ, 2)$-model. Therefore, $\tau_{H}^*(G,\cZ, 2)\le \tau_{H}^*(G- w, \cZ, 2)$ and we conclude that $w$ is an irrelevant vertex. \end{proof} \subsection{Separating a grid model from sets of $\cZ$} In the proof of Theorem~\ref{thm:mainpure}, we will proceed by induction on $\ell$. The following proposition shows that given a sufficiently large grid model, we can find either many disjoint $(H, \cZ, \ell)$-models, or a separation that separates a large grid model from most of the sets in $\cZ$. The main difference from Theorem~\ref{thm:rootedgridminor} is that when we obtain the latter separation, we also guarantee that we retain a rooted grid model with a smaller subset of $\cZ$. This will help to reduce the instance to an instance with a smaller value of $\ell$, so that one can apply the induction hypothesis. \begin{proposition}\label{prop:excluding} Let $g,k,h$, and $\ell$ be positive integers with $g\ge 2(k\ell(14h+2)+1)(k^2\ell^2+1)$ and let $\ell^*:=\ell-1$ if $\ell\ge 2$, and $\ell^*:=\ell$ otherwise. Every rooted graph $(G, \cZ)$ having a $\cG_g$-model contains either \begin{enumerate} \item $k$ pairwise vertex-disjoint $(\ell^*\cdot \cG_{14h}, \cZ, \ell)$-models, or \item a separation $(A,B)$ of order less than $k\ell^2$ such that $\multiabs{\cZ\setminus A}<\ell$ and $B- V(A)$ contains a $\cG_{g-k\ell^2}$-model and also contains a $(\cZ\setminus A,k, \multiabs{\cZ\setminus A})$-rooted grid model of order $k\ell(14h+2)+1$.
\end{enumerate} \end{proposition} \begin{proof} We start by finding a separation $(A_0,B_0)$ in $G$ of order less than $k\ell$ where $\multiabs{\cZ\setminus A_0}\le \ell-1$ and $B_0- V(A_0)$ contains a $\cG_{g-k\ell}$-model. If $\multiabs{\cZ}\le \ell-1$, then the separation $(\emptyset, G)$ is such a separation. Suppose that $\multiabs{\cZ}\ge \ell$. Since $g\ge (k\ell(14h+2)+1)(k^2\ell^2+1)+k\ell$, by Theorem~\ref{thm:rootedgridminor}, $G$ contains either such a separation $(A_0, B_0)$, or a $(\cZ, k, \ell)$-rooted grid model of order $k\ell(14h+2)+1$. If $G$ contains such a rooted grid model, then by Lemma~\ref{lem:colorfulgrid}, $G$ contains $k$ pairwise vertex-disjoint $(\ell^*\cdot \cG_{14h}, \cZ, \ell)$-models. Therefore, we may assume that there is a separation $(A_0, B_0)$ in $G$ of order less than $k\ell$ where $\multiabs{\cZ\setminus A_0}\le \ell-1$ and $B_0- V(A_0)$ contains a $\cG_{g-k\ell}$-model. We show the following. \begin{claim} $G$ contains a separation $(A,B)$ of order less than $k\ell^2$ such that $\multiabs{\cZ\setminus A}=\ell'$ for some $0\le \ell'<\ell$, and $B- V(A)$ contains a $\cG_{g-k\ell^2}$-model and also contains a $(\cZ\setminus A,k,\ell')$-rooted grid model of order $k\ell(14h+2)+1$. \end{claim} \begin{clproof} We recursively construct a sequence $(A_0, B_0), \ldots, (A_{\ell-1}, B_{\ell-1})$ such that for each $0\le i\le \ell-1$, \begin{itemize} \item $(A_i, B_i)$ is a separation in $G$ of order less than $k\sum_{0\le j\le i}(\ell-j)$, \item $\multiabs{\cZ\setminus A_{i}}\le (\ell-1)-i$, and \item $B_{i}- V(A_{i})$ contains a $\cG_{g-k\sum_{0\le j\le i}(\ell-j)}$-model, \end{itemize} unless the separation described in the claim exists.
If there is such a sequence, then $(A_{\ell-1}, B_{\ell-1})$ is a separation of order less than \[k\sum_{0\le j\le \ell-1}(\ell-j)=k\sum_{1\le j\le \ell}j=\frac{k\ell(\ell+1)}{2}\le k\ell^2\] such that $\multiabs{\cZ\setminus A_{\ell-1}}=0$ and $B_{\ell-1}- V(A_{\ell-1})$ contains a $\cG_{g-k\ell^2}$-model. Since a $\cG_{g-k\ell^2}$-model is also a $(\emptyset, k, 0)$-rooted grid model of order at least $k\ell(14h+2)+1$, $(A_{\ell-1}, B_{\ell-1})$ is a required separation. Note that the sequence $(A_0, B_0)$ exists. Suppose that such a sequence $(A_0, B_0), \ldots, (A_{t}, B_{t})$ exists for some $0\le t<\ell-1$. If $\multiabs{\cZ\setminus A_{t}}< (\ell-1)-t$, then by taking $(A_{t+1}, B_{t+1}):=(A_{t}, B_{t})$, we have a required sequence $(A_0, B_0), \ldots, (A_{t+1}, B_{t+1})$. Thus, we may assume that $\multiabs{\cZ\setminus A_{t}}= (\ell-1)-t$. Let $\cY:=\cZ\setminus A_{t}$. Note that $\multiabs{\cY}=(\ell-1)-t$. By Theorem~\ref{thm:rootedgridminor}, $B_t-V(A_t)$ contains either \begin{enumerate}[(1)] \item a separation $(C,D)$ of order less than $k(\multiabs{\cY}-\multiabs{\cY\setminus C})$ where $\multiabs{\cY\setminus C}\le \multiabs{\cY}-1$ and $D- V(C)$ contains a $\cG_{g-k\sum_{0\le j\le t}(\ell-j)-\abs{V(C\cap D)}}$-model, or \item a $(\cY, k, \multiabs{\cY})$-rooted grid model of order $k\ell(14h+2)+1$. \end{enumerate} If the latter holds, then $(A_t, B_t)$ is a required separation where $\ell'=(\ell-1)-t$ and $\cZ\setminus A_t=\cY$, because $B_t-V(A_t)$ also contains a $\cG_{g-k\ell^2}$-model. Assume that we have a separation $(C,D)$ described in (1). Let $A_{t+1}:=G[V(A_t)\cup V(C)]$ and $B_{t+1}:=G[V(D)\cup V(A_t\cap B_t)]- E(G[V(A_t\cap B_t)\cup V(C\cap D)])$. Observe that $(A_{t+1}, B_{t+1})$ is a separation in $G$ such that \begin{itemize} \item $\multiabs{\cZ\setminus A_{t+1}}\le \multiabs{\cY}-1=\ell-1-(t+1)$, and \item $B_{t+1}- V(A_{t+1})$ contains a $\cG_{g-k\sum_{0\le j\le t}(\ell-j)-\abs{V(C\cap D)}}$-model.
\end{itemize} Since $\abs{V(C\cap D)}<k\multiabs{\cY}=k(\ell-1-t)$, the order of $(A_{t+1}, B_{t+1})$ is at most $\abs{V(A_t\cap B_t)}+\abs{V(C\cap D)}<k\sum_{0\le j\le t+1}(\ell-j)$, as required. \end{clproof} We conclude that $G$ contains either $k$ pairwise vertex-disjoint $(\ell^*\cdot \cG_{14h}, \cZ, \ell)$-models, or a separation $(A,B)$ of order less than $k\ell^2$ such that $\multiabs{\cZ\setminus A}=\ell'$ for some $0\le \ell'<\ell$ and $B- V(A)$ contains a $(\cZ\setminus A,k, \ell')$-rooted grid model of order $k\ell(14h+2)+1$. \end{proof} \begin{lemma}\label{lem:reduction} Let $\ell$ be a positive integer. Let $(G, \cZ)$ be a rooted graph and let $(A,B)$ be a separation in $G$ such that $\multiabs{\cZ\setminus A}=\ell'$ for some $1\le \ell'<\ell$. Let $\cZ'$ be the multiset of all sets $X$ in $\cZ$ where $X\subseteq V(A)$. If $T$ is a pure $(H, \cZ', \ell-\ell')$-deletion set of $A- V(B)$, then $T\cup V(A\cap B)$ is a pure $(H, \cZ, \ell)$-deletion set of $G$. \end{lemma} \begin{proof} Suppose that $G- (T\cup V(A\cap B))$ has a pure $(H, \cZ, \ell)$-model $F$. Since $\multiabs{\cZ\setminus A}=\ell'$, $F\cap (A- V(B))$ should meet at least $\ell-\ell'$ sets of $\cZ'$. It means that $F\cap (A- V(B))$ contains a pure $(H, \cZ', \ell-\ell')$-model. Since $T$ meets all such models, it contradicts our assumption. We conclude that $G- (T\cup V(A\cap B))$ has no pure $(H, \cZ, \ell)$-models. \end{proof} Now, we are ready to give the proof of Theorem~\ref{thm:mainpure}. \begin{proof}[Proof of Theorem~\ref{thm:mainpure}] We recall that $\kappa$ is the function in the Grid Minor Theorem.
For a positive integer $\ell$ and a non-empty planar graph $H$ with $\cc(H)\ge \ell-1$ and $h=\abs{V(H)}$, we define \begin{align*} x=x_{H, \ell}(k)&:=k\ell^2 \\ g=g_{H, \ell}(k)&:=2(4x^2+14hx+3x+1)(4x^2+1)+x\\ f^1_{H, \ell}(k)&:=\left\{ \begin{array}{ll} (\kappa(g_{H, \ell}(k))+1)(h^2-h+1)(k-1) & \textrm{if $\ell=1$}\\ (\kappa(g_{H, \ell}(k))+1)(h^2-h+1)(k-1) \\ \qquad + (\ell-1) f^1_{H, \ell-1}(k)+ k\ell^2 & \textrm{if $\ell\ge 2$.} \end{array} \right. \end{align*} Note that $g_{H, \ell}(k)\ge 2(k\ell(14h+2)+1)(k^2\ell^2+1)$, the bound required in Proposition~\ref{prop:excluding}. We will use the fact that $\cG_{14h}$ contains an $H$-model. We prove the statement by induction on $\ell+\cc(H)$. Suppose that the theorem does not hold, and let $G$ be a minimal counterexample. We first deal with two base cases. \begin{itemize} \item (Case 1-1. $\ell=1$) \\ If $G$ has tree-width at most $\kappa(g)$, then by Proposition~\ref{prop:eponboundedtw}, $G$ contains $k$ pairwise vertex-disjoint pure $(H, \cZ, 1)$-models or a pure $(H, \cZ, 1)$-deletion set of size at most $(\kappa(g)+1)(h^2-h+1)(k-1)$. Thus, we conclude that $\tau^*_H(G, \cZ, 1)\le (\kappa(g)+1)(h^2-h+1)(k-1)=f^1_{H, \ell}(k).$ Suppose $G$ has tree-width larger than $\kappa(g)$. By Theorem~\ref{thm:gridtheorem}, $G$ contains a $\cG_{g}$-model. So, by Proposition~\ref{prop:excluding}, $G$ contains either $k$ pairwise vertex-disjoint $(H, \cZ, 1)$-models, or a separation $(A,B)$ of order less than $x$ such that $\multiabs{\cZ\setminus A}=0$ and $B- V(A)$ contains a $\cG_{g-x}$-model. As each $(H, \cZ, 1)$-model contains a pure $(H, \cZ, 1)$-model, in the former case, we have $k$ pairwise vertex-disjoint pure $(H, \cZ, 1)$-models. Thus, we may assume that we have the latter separation $(A,B)$. Then, since \[ g-x\ge (x^2+14hx+2x+1)(x^2+1),\] by Proposition~\ref{prop:irrelevant}, there exists an irrelevant vertex $v$ for pure $(H, \cZ, 1)$-models in $G$. This contradicts the choice of $G$ as a minimal counterexample.
\item (Case 1-2. $\ell=2$ and $\cc(H)=1$.) This case is almost the same as Case 1-1, but we use Proposition~\ref{prop:irrelevant2} instead of Proposition~\ref{prop:irrelevant} to find an irrelevant vertex. This is possible because $g-x\ge 2(4x^2+14hx+3x+1)(4x^2+1)$. \end{itemize} Now we assume that $\ell\ge 2$, and that $\cc(H)\ge 2$ if $\ell=2$. We may assume that $G$ has tree-width larger than $\kappa(g)$; otherwise we can apply Proposition~\ref{prop:eponboundedtw}. Now, we reduce the given instance in two ways: either reduce its tree-width or the parameter $\ell$. By Theorem~\ref{thm:gridtheorem}, $G$ contains a $\cG_{g}$-model. By Proposition~\ref{prop:excluding}, $G$ contains either $k$ pairwise vertex-disjoint $(H, \cZ, \ell)$-models, or a separation $(A,B)$ of order less than $x$ such that \begin{itemize} \item $\multiabs{\cZ\setminus A}=\ell'<\ell$ and $B- V(A)$ contains a $\cG_{g-x}$-model and a $(\cZ\setminus A,k,\ell')$-rooted grid model of order $k\ell(14h+2)+1$. \end{itemize} We may assume that we have the latter separation. Let $\cZ'$ be the multiset of all sets $X$ in $\cZ$ where $X\subseteq V(A)$. We observe that if $\ell'=0$, then there is an irrelevant vertex $v$ by Proposition~\ref{prop:irrelevant}. In this case, $G-v$ satisfies the theorem because $G$ is chosen as a minimal counterexample, and then $G$ also satisfies the theorem as $\tau^*_H(G, \cZ, \ell)=\tau^*_H(G-v, \cZ, \ell)$, a contradiction. Thus, we can assume that $\ell'\ge 1$. In the remaining part, we will argue that one can reduce the instance to $A- V(B)$ with the parameter $\ell-\ell'$. Recall that $\cc(H)\ge \ell-1$. We divide into two cases depending on whether $\cc(H)\ge \ell$ or not. \begin{itemize} \item (Case 2-1. $\cc(H)\ge \ell$.) \\ Since $B- V(A)$ contains a $(\cZ\setminus A,k, \ell')$-rooted grid model of order $k\ell(14h+2)+1$, by Lemma~\ref{lem:colorfulgrid}, $B- V(A)$ contains $k$ pairwise vertex-disjoint $(\ell'\cdot \cG_{14h}, \cZ\setminus A, \ell')$-models. We will use this later.
Since $\ell'>0$, we have $\ell-\ell'<\ell$. So, we apply the induction hypothesis to $(A- V(B), \cZ', \ell-\ell', k)$, and obtain that $A-V(B)$ contains either $k$ pairwise vertex-disjoint pure $(H, \cZ', \ell-\ell')$-models, or a pure $(H, \cZ', \ell-\ell')$-deletion set $T$ of size at most $f^1_{H, \ell-\ell'}(k)$. If we obtain a deletion set $T$, then by Lemma~\ref{lem:reduction}, $G- \left(T\cup V(A\cap B)\right)$ has no pure $(H, \cZ,\ell)$-models. Since \[f^1_{H, \ell-\ell'}(k) + k\ell^2\le f^1_{H, \ell-1}(k) + k\ell^2 \le f^1_{H, \ell}(k), \] $T\cup V(A\cap B)$ is a required pure $(H, \cZ, \ell)$-deletion set in $G$. Suppose we have $k$ pairwise vertex-disjoint pure $(H, \cZ', \ell-\ell')$-models in $A- V(B)$. Since each model in $A- V(B)$ consists of the image of at most $\ell-\ell'$ connected components of $H$ and $H$ consists of at least $\ell$ connected components, we can complete it into a pure $(H, \cZ, \ell)$-model by taking a relevant pure $(H, \cZ\setminus A, \ell')$-model from an $(\ell'\cdot \cG_{14h}, \cZ\setminus A, \ell')$-model in $B- V(A)$. For instance, if $H_1, \ldots, H_{\ell}$ are connected components of $H$ and a pure $(H, \cZ', \ell-\ell')$-model in $A- V(B)$ is the image of the union of $H_1, \ldots, H_{\ell-\ell'}$, then we obtain images of $H_{\ell-\ell'+1}, \ldots, H_{\ell}$ from the connected components of the $\ell'\cdot \cG_{14h}$-model. Therefore, $G$ contains $k$ pairwise vertex-disjoint pure $(H, \cZ, \ell)$-models. \item (Case 2-2. $\cc(H)= \ell-1$.) \\ When $\ell'\ge 2$, we can argue as in Case 2-1. Note that every pure $(H, \cZ', \ell-\ell')$-model is the image of at most $\ell-\ell'$ connected components of $H$, and there are $k$ pairwise vertex-disjoint pure $((\ell'-1)\cdot \cG_{14h}, \cZ\setminus A, \ell')$-models in $B- V(A)$ by Lemma~\ref{lem:colorfulgrid}.
Thus, we can return $k$ pairwise vertex-disjoint pure $(H, \cZ, \ell)$-models, or a pure $(H, \cZ, \ell)$-deletion set of size at most $f^1_{H, \ell}(k)$. But we cannot do the same when $\ell'=1$, because we can only say that $B- V(A)$ contains $k$ pairwise vertex-disjoint pure $(\ell'\cdot \cG_{14h}, \cZ\setminus A, \ell')$-models, not pure $((\ell'-1)\cdot \cG_{14h}, \cZ\setminus A, \ell')$-models. So, we may assume that $\ell'=1$. If $\ell=2$, then $H$ is connected, and this case was resolved in Case 1-2. So, we may also assume that $\ell\ge 3$. Note that a pure $(H, \cZ', \ell-1)$-model in $A- V(B)$ may be the image of $H$ itself. But this cannot be part of a pure $(H, \cZ, \ell)$-model of $G$, because such a model already uses all connected components of $H$ to meet only $\ell-1$ sets of $\cZ$; thus, we ignore it. For this reason, we only consider pure $(H, \cZ', \ell-1)$-models in $A- V(B)$ that are not the images of $H$. For each subgraph $H'$ of $H$ induced by $\ell-2$ of its connected components, we apply the induction hypothesis to $(A- V(B), \cZ', \ell-1, k)$ with the graph $H'$. Since $\cc(H')<\cc(H)$, we obtain that $A-V(B)$ contains either $k$ pairwise vertex-disjoint pure $(H', \cZ', \ell-1)$-models or a pure $(H', \cZ', \ell-1)$-deletion set $T_{H'}$ of size at most $f^1_{H, \ell-1}(k)$. Suppose that for some subgraph $H'$, we have $k$ pairwise vertex-disjoint pure $(H', \cZ', \ell-1)$-models in $A- V(B)$. Then we can complete them into $k$ pairwise vertex-disjoint pure $(H, \cZ, \ell)$-models in $G$ using $k$ pairwise vertex-disjoint $(\cG_{14h}, \cZ\setminus A, 1)$-models in $B- V(A)$. Therefore, we may assume that for all possible subgraphs $H'$ of $H$ with $\ell-2$ connected components, we have a pure $(H', \cZ', \ell-1)$-deletion set $T_{H'}$ in $A- V(B)$. Let $T$ be the union of all such deletion sets $T_{H'}$. Note that $\abs{T}\le (\ell-1)f^1_{H, \ell-1}(k)$. We claim that $G- \left(T\cup V(A\cap B)\right)$ has no pure $(H, \cZ,\ell)$-models.
Suppose $G- \left(T\cup V(A\cap B)\right)$ has a pure $(H, \cZ,\ell)$-model $F$. First assume that $F$ is fully contained in $A- V(B)$. If $F$ is the image of a subgraph $H'$ of $H$ consisting of at most $\ell-2$ of its connected components, then there exists a subgraph $H''$ of $H$ induced by exactly $\ell-2$ of its connected components such that $H'$ is a subgraph of $H''$. Thus, by ignoring the set in $\cZ$ intersecting $B-V(A)$, $F$ contains a pure $(H'', \cZ', \ell-1)$-model. But $T_{H''}$ meets a vertex of $F$, contradicting the assumption that $V(F)\cap T=\emptyset$. Thus, we may assume that $F$ is the image of $H$. Since $\ell\ge 3$ and $H$ consists of $\ell-1$ connected components and $F$ intersects $\ell$ sets of $\cZ$, there must be a connected component of $F$ intersecting exactly one set among those $\ell$ sets. In other words, there is a subgraph $H'$ of $H$ consisting of $\ell-2$ of its connected components such that the subgraph of $F$ induced by the image of $H'$ intersects $\ell-1$ sets of $\cZ$. But this contradicts our assumption that $V(F)\cap T_{H'}=\emptyset$. So, we may assume that $F-V(A)\neq \emptyset$. In this case, since $\ell'=1$, only one connected component of $F$ can be contained in $B- V(A)$. So, there are at most $\ell-2$ connected components of $F\cap (A- V(B))$ whose union meets $\ell-1$ sets of $\cZ'$. This means that $F\cap (A- V(B))$ is a pure $(H', \cZ', \ell-1)$-model for some subgraph $H'$ of $H$ induced by $\ell-2$ of its connected components. Since $T$ meets all such models, we have a contradiction. We conclude that $G- \left(T\cup V(A\cap B)\right)$ has no pure $(H, \cZ,\ell)$-models. Since $(\ell-1) f^1_{H, \ell-1}(k) + k\ell^2\le f^1_{H, \ell}(k)$, we have a required pure $(H, \cZ, \ell)$-deletion set. \end{itemize} We conclude that $G$ contains either $k$ pairwise vertex-disjoint pure $(H, \cZ, \ell)$-models, or a pure $(H, \cZ, \ell)$-deletion set of size at most $f^1_{H, \ell}(k)$.
\end{proof} \section{Packing and covering $(H, \cZ, \ell)$-models}\label{sec:total} Now we prove the main result of this paper. \begin{thmmain} For a positive integer $\ell$ and a non-empty planar graph $H$ with $\cc(H)\ge \ell-1$, there exists $f_{H, \ell}:\mathbb{N}\rightarrow \mathbb{R}$ satisfying the following property. Let $(G, \cZ)$ be a rooted graph and $k$ be a positive integer. Then $G$ contains either $k$ pairwise vertex-disjoint $(H, \cZ, \ell)$-models, or an $(H, \cZ, \ell)$-deletion set of size at most $f_{H,\ell}(k)$. \end{thmmain} \begin{proof} We recall the functions $x, g$, and $f^1$ from the proof of Theorem~\ref{thm:mainpure}. For all positive integers $k, \ell$, and $h$, \begin{align*} x=x_{H, \ell}(k)&:=k\ell^2 \\ g=g_{H, \ell}(k)&:=2(4x^2+14hx+3x+1)(4x^2+1)+x\\ f^1_{H, \ell}(k)&:=\left\{ \begin{array}{ll} \kappa(g_{H, \ell}(k))(h^2-h+1)(k-1) & \textrm{if $\ell=1$}\\ \kappa(g_{H, \ell}(k))(h^2-h+1)(k-1) \\ \qquad + (\ell-1) f^1_{H, \ell-1}(k)+ k\ell^2 & \textrm{if $\ell\ge 2$.} \end{array} \right. \\ f_{H, \ell}(k)&:=\ell f^1_{H, \ell}(k)+ k\ell^2. \end{align*} We observe that if $\cc(H)=1$, then pure $(H, \cZ, \ell)$-models are exactly $(H, \cZ, \ell)$-models; therefore, Theorem~\ref{thm:mainpure} implies the statement. So we may assume that $\cc(H)\ge 2$. If $G$ has tree-width at most $\kappa (g)$, then by Proposition~\ref{prop:eponboundedtw}, $G$ contains either $k$ pairwise vertex-disjoint $(H, \cZ, \ell)$-models or an $(H, \cZ, \ell)$-deletion set $T$ of size at most $(\kappa (g)+1)(h^2-h+1)(k-1)\le f^1_{H, \ell}(k)\le f_{H, \ell}(k)$. Therefore, we may assume that $G$ has tree-width larger than $\kappa(g)$. By Theorem~\ref{thm:gridtheorem}, $G$ contains a $\cG_g$-model.
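As a side remark, the recursive definition of these bounds can be sketched in a few lines of Python. This is a hedged illustration of ours, not part of the proof: the function $\kappa$ from the grid-minor theorem and the constant $h$ are treated as given inputs.

```python
# Sketch of the recursive bounds x, g, f^1, f defined above.
# kappa (the grid-minor function) and h are supplied by the caller.
def f1(k, ell, h, kappa):
    x = k * ell ** 2
    g = 2 * (4 * x ** 2 + 14 * h * x + 3 * x + 1) * (4 * x ** 2 + 1) + x
    base = kappa(g) * (h ** 2 - h + 1) * (k - 1)
    if ell == 1:
        return base
    # ell >= 2: add the recursive term (ell-1) * f^1_{H, ell-1}(k) + k * ell^2
    return base + (ell - 1) * f1(k, ell - 1, h, kappa) + k * ell ** 2

def f(k, ell, h, kappa):
    return ell * f1(k, ell, h, kappa) + k * ell ** 2
```

Even with the identity in place of $\kappa$, the bound grows rapidly in $\ell$, reflecting the repeated applications of the pure-model theorem in the case analysis below.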
By Proposition~\ref{prop:excluding}, $G$ contains either $k$ pairwise vertex-disjoint $(H, \cZ, \ell)$-models, or a separation $(A,B)$ of order less than $x$ such that \begin{itemize} \item $\multiabs{\cZ\setminus A}=\ell'$ for some $0\le \ell'<\ell$, and $B- V(A)$ contains a $\cG_{g-x}$-model and a $(\cZ\setminus A,k, \ell')$-rooted grid model of order $k\ell(14h+2)+1$. \end{itemize} We may assume that we have the latter separation. If $\ell'\ge 1$, then we follow a procedure similar to that in the proof of Theorem~\ref{thm:mainpure}. However, when $\ell'=0$, the proof for pure models used an irrelevant vertex argument, and that argument cannot be extended to usual models. Instead, we reduce the instance to an instance of packing and covering $k$ pairwise disjoint pure $(H, \cZ, \ell)$-models in $A- V(B)$. \begin{itemize} \item (Case 1. $\ell'=0$.) \\ We apply Theorem~\ref{thm:mainpure} to the instance $(A- V(B), \cZ, \ell, k)$ for pure $(H, \cZ, \ell)$-models. Then $A-V(B)$ contains either $k$ pairwise vertex-disjoint pure $(H, \cZ, \ell)$-models, or a pure $(H, \cZ, \ell)$-deletion set $T$ of size at most $f^1_{H, \ell}(k)$. Suppose that $A-V(B)$ has $k$ pairwise vertex-disjoint pure $(H, \cZ, \ell)$-models. Note that some of these models may be images of $H$ itself, while others may not. Since $g-x\ge 14hk$, $B-V(A)$ contains $k$ vertex-disjoint $\cG_{14h}$-models. So, for the pure models in $A-V(B)$ that are not images of $H$, we can complete them into $(H, \cZ, \ell)$-models using the $\cG_{14h}$-models in $B-V(A)$. Therefore, $G$ has $k$ pairwise vertex-disjoint $(H, \cZ, \ell)$-models. On the other hand, if we have a pure $(H, \cZ, \ell)$-deletion set $T$, then $G- (T\cup V(A\cap B))$ has no $(H, \cZ, \ell)$-models. This is because every $(H, \cZ, \ell)$-model in $G- V(A\cap B)$ contains a pure $(H, \cZ, \ell)$-model in $A- V(B)$ (obtained by keeping only the components hitting $\ell$ sets of $\cZ$).
Since \[\abs{T\cup V(A\cap B)}\le f^1_{H, \ell}(k)+k\ell^2\le f_{H, \ell}(k),\] we have the required set. \end{itemize} Now, we assume that $\ell'\ge 1$. Let $\cZ'$ be the multiset of all sets $X$ in $\cZ$ with $X\subseteq V(A)$. We follow a procedure similar to that in Theorem~\ref{thm:mainpure}, distinguishing two cases depending on whether $\cc(H)\ge \ell$. \begin{itemize} \item (Case 2-1. $\cc(H)\ge \ell$.) \\ Since $B- V(A)$ contains a $(\cZ\setminus A,k, \ell')$-rooted grid model of order $k\ell(14h+2)+1$, by Lemma~\ref{lem:colorfulgrid}, $B- V(A)$ contains $k$ pairwise vertex-disjoint $(\ell'\cdot \cG_{14h}, \cZ\setminus A, \ell')$-models. We apply Theorem~\ref{thm:mainpure} to the instance $(A- V(B), \cZ', \ell-\ell', k)$. Then $A-V(B)$ contains either $k$ pairwise vertex-disjoint pure $(H, \cZ', \ell-\ell')$-models, or a pure $(H, \cZ', \ell-\ell')$-deletion set $T$ of size at most $f^1_{H, \ell-\ell'}(k)$. In the former case, we can obtain $k$ pairwise vertex-disjoint $(H, \cZ, \ell)$-models in $G$ using the $k$ pairwise vertex-disjoint $(\ell'\cdot \cG_{14h}, \cZ\setminus A, \ell')$-models in $B- V(A)$. In the latter case, $G- \left(T\cup V(A\cap B)\right)$ has no $(H, \cZ,\ell)$-models, where $\abs{T\cup V(A\cap B)}\le f^1_{H, \ell-\ell'}(k) + k\ell^2\le f_{H, \ell}(k)$. \item (Case 2-2. $\cc(H)= \ell-1$.) \\ If $\ell'\ge 2$, then we obtain the same result as in Case 2-1, because by Lemma~\ref{lem:colorfulgrid} there are in fact $k$ pairwise vertex-disjoint $((\ell'-1)\cdot \cG_{14h}, \cZ\setminus A, \ell')$-models in $B- V(A)$. We may thus assume that $\ell'=1$. Since $\cc(H)\ge 2$, we have $\ell\ge 3$. Now, for each subgraph $H'$ of $H$ induced by $\ell-2$ of its connected components, we apply Theorem~\ref{thm:mainpure} to the instance $(A- V(B), \cZ', \ell-1, k)$. We deduce that $A-V(B)$ contains either $k$ pairwise vertex-disjoint pure $(H', \cZ', \ell-1)$-models, or a pure $(H', \cZ', \ell-1)$-deletion set $T_{H'}$ of size at most $f^1_{H, \ell-1}(k)$.
If $A-V(B)$ contains $k$ pairwise vertex-disjoint pure $(H', \cZ', \ell-1)$-models for some subgraph $H'$ of $H$ induced by $\ell-2$ of its connected components, then we can complete them into $k$ pairwise vertex-disjoint $(H, \cZ, \ell)$-models using $(\cG_{14h}, \cZ\setminus A, 1)$-models in $B- V(A)$. Therefore, we may assume that for every such subgraph $H'$ there is a pure $(H', \cZ', \ell-1)$-deletion set $T_{H'}$ of size at most $f^1_{H, \ell-1}(k)$ in $A- V(B)$. Let $T$ be the union of all deletion sets $T_{H'}$. Note that $\abs{T}\le (\ell-1)f^1_{H, \ell-1}(k)$. We claim that $G- \left(T\cup V(A\cap B)\right)$ has no $(H, \cZ,\ell)$-models. Suppose that $G- \left(T\cup V(A\cap B)\right)$ has an $(H, \cZ, \ell)$-model $F$. Since $\ell'=1$, there are $\ell-2$ connected components of $F\cap (A- V(B))$ that meet $\ell-1$ sets of $\cZ'$, and their union contains a pure $(H'', \cZ', \ell-1)$-model for some subgraph $H''$ of $H$ induced by $\ell-2$ connected components of $H$. Since $T$ meets all such models, this is a contradiction. We conclude that $G- \left(T\cup V(A\cap B)\right)$ has no $(H, \cZ,\ell)$-models. Since \[\abs{T\cup V(A\cap B)}\le (\ell-1)f^1_{H, \ell-1}(k) + k\ell^2\le f_{H, \ell}(k), \] we have the required $(H, \cZ, \ell)$-deletion set. \end{itemize} We conclude that $G$ contains either $k$ pairwise vertex-disjoint $(H, \cZ, \ell)$-models, or an $(H, \cZ, \ell)$-deletion set of size at most $f_{H, \ell}(k)$.
\end{proposition} \begin{figure} \caption{The construction for showing that $(H, \cZ, \ell)$-models do not have the Erd\H{o}s-P\'osa\ property.} \label{fig:counterex2} \end{figure} \begin{proof} We show that for every positive integer $x$, there is a rooted graph $(G, \cZ)$ such that \begin{itemize} \item $G$ has one $(H, \cZ, \ell)$-model, but no two vertex-disjoint $(H, \cZ, \ell)$-models, \item for every vertex subset $S$ of size at most $x$, $G- S$ has an $(H, \cZ, \ell)$-model. \end{itemize} This implies that the function for the Erd\H{o}s-P\'osa\ property does not exist for $k=2$. Let $H_1, \ldots, H_t$ be the connected components of $H$. Note that $\ell-t\ge 2$ by assumption. Let $G$ be the disjoint union of $\cG_{(\ell-t+2)n}$, say $G_1$, and $t-1$ copies of $\cG_{n}$, say $G_2, \ldots, G_t$. For each $1\le k\le t$, we denote by $v^k_{i,j}$ the copy of $v_{i,j}$ in $G_k$. For each $1\le j\le \ell-t+1$, let $Z_j:=\{v^1_{1,i}:n(j-1)+1\le i\le nj \}$. Also, for each $\ell-t+2\le j\le \ell$, let $Z_{j}:=\{v^{j-\ell+t}_{1,i}:1\le i\le n\}$. Since $\ell-t+1\ge 3$, $G_1$ contains at least $3$ sets of $\cZ$. We will determine $n$ later. We depict this construction in Figure~\ref{fig:counterex2}. It is clear that if $n\ge 14h$, then $G$ contains one $(H, \cZ, \ell)$-model, because each $G_i$ contains a $\cG_{14h}$ subgraph, and thus an $H_i$-model, and we can take a path from each set of $\cZ$ to the constructed connected component of $H$ in $G_i$. We observe that there are no two vertex-disjoint $(H, \cZ, \ell)$-models. Let $F_1, F_2$ be $(H, \cZ, \ell)$-models. For each $F_i$, since $H$ consists of exactly $t$ connected components, each $G_j$ must contain the image of exactly one connected component of $H$. In particular, $F_i\cap G_1$ is the image of one connected component of $H$, which meets all sets of $\cZ$ contained in $G_1$. However, since $F_1\cap G_1$ and $F_2\cap G_1$ are connected, they must intersect.
Now we claim that if $n\ge (14h+x+1)(x+1)+x+1$ and $S$ is a vertex set of size at most $x$ in $G$, then $G- S$ contains an $(H, \cZ, \ell)$-model. Suppose that $n\ge (14h+x+1)(x+1)+x+1$ and $S$ is a vertex set of size at most $x$ in $G$. Fix $2\le j\le t$. Then there exist $1\le p_j\le n-(14h+x+1)$ and $0\le q_j\le n-(14h+x+1)$ such that $v^{j}_{p_j+a, q_j+b}\notin S$ for all $1\le a,b\le 14h+x+1$. Let $F_j$ be the subgraph of $G_j$ induced by the vertex set $\{v^{j}_{p_j+a, q_j+b}:1\le a,b\le 14h+x+1\}$. Clearly there are $x+1$ pairwise vertex-disjoint paths from $Z_{j+(\ell-t)}$ to $\{v^j_{p_j+1, q_j+b}:1\le b\le x+1\}$, and at least one of them does not meet $S$. Since $F_j$ contains an $H_j$-model, $G_j- S$ contains an $H_j$-model having a vertex of $Z_{j+(\ell-t)}$. Similarly, in $G_1$, there exist $1\le p_1\le n-(14h+x+1)$ and $(\ell-t+1)n\le q_1\le (\ell-t+1)n+n-(14h+x+1)$ such that $v^{1}_{p_1+a, q_1+b}\notin S$ for all $1\le a,b\le 14h+x+1$. Let $F_1$ be the subgraph of $G_1$ induced by the vertex set $\{v^{1}_{p_1+a, q_1+b}:1\le a,b\le 14h+x+1\}$. It is not hard to observe that there are $x+1$ pairwise vertex-disjoint paths from $\{v^1_{p_1+a, q_1+1}:1\le a\le x+1\}$ to each $Z_i$ for $1\le i\le \ell-t+1$. Therefore, $G_1- S$ contains an $H_1$-model having a vertex of each $Z_i$ for $1\le i\le \ell-t+1$. We conclude that $G- S$ has an $(H, \cZ, \ell)$-model, as required. \end{proof} \section{Conclusion} In this paper, we show that the class of $(H, \cZ, \ell)$-models has the Erd\H{o}s-P\'osa\ property if and only if $H$ is planar and $\cc(H)\ge \ell-1$. Among the many interesting results on the Erd\H{o}s-P\'osa\ property, objects that intersect $\ell$ sets among given vertex sets have not been studied much. In our result, we do not restrict an $H$-model to a minimal subgraph containing an $H$-minor. What if we consider minimal $H$-models or $H$-subdivisions? Then it may be difficult to obtain such a clean characterization.
As a first step towards studying such families, we pose one specific problem: \begin{itemize} \item Does the set of cycles intersecting at least two sets among given sets $Z_1, Z_2, \ldots, Z_m$ in a graph $G$ have the Erd\H{o}s-P\'osa\ property, with a bound on the deletion set that does not depend on $m$? \end{itemize} Huynh, Joos, and Wollan~\cite{HuyneJW2017} showed that $(S_1, S_2)$-cycles have the Erd\H{o}s-P\'osa\ property. Thus, an affirmative answer would generalize their result for $(S_1, S_2)$-cycles. \end{document}
\begin{document} \title{Experimental Violation of Bell Inequality beyond Cirel'son's Bound} \author{Yu-Ao Chen} \email{[email protected]} \affiliation{Physikalisches Institut, Universit\"{a}t Heidelberg, Philosophenweg 12, D-69120 Heidelberg, Germany} \author{Tao Yang} \affiliation{Physikalisches Institut, Universit\"{a}t Heidelberg, Philosophenweg 12, D-69120 Heidelberg, Germany} \affiliation{Hefei National Laboratory for Physical Sciences at Microscale, Department of Modern Physics, University of Science and Technology of China, Hefei, 230027, People's Republic of China} \author{An-Ning Zhang} \affiliation{Hefei National Laboratory for Physical Sciences at Microscale, Department of Modern Physics, University of Science and Technology of China, Hefei, 230027, People's Republic of China} \author{Zhi Zhao} \affiliation{Hefei National Laboratory for Physical Sciences at Microscale, Department of Modern Physics, University of Science and Technology of China, Hefei, 230027, People's Republic of China} \author{Ad\'{a}n Cabello} \email{[email protected]} \affiliation{Departamento de F\'{\i}sica Aplicada II, Universidad de Sevilla, E-41012 Sevilla, Spain} \author{Jian-Wei Pan} \affiliation{Physikalisches Institut, Universit\"{a}t Heidelberg, Philosophenweg 12, D-69120 Heidelberg, Germany} \affiliation{Hefei National Laboratory for Physical Sciences at Microscale, Department of Modern Physics, University of Science and Technology of China, Hefei, 230027, People's Republic of China} \date{\today} \begin{abstract} The correlations between two qubits belonging to a three-qubit system can violate the Clauser-Horne-Shimony-Holt-Bell inequality beyond Cirel'son's bound [A. Cabello, Phys. Rev. Lett. {\bf 88}, 060403 (2002)]. We experimentally demonstrate such a violation by 7 standard deviations by using a three-photon polarization-entangled Greenberger-Horne-Zeilinger state produced by Type-II spontaneous parametric down-conversion. 
In addition, using part of our results, we obtain a violation of the Mermin inequality by 39 standard deviations. \end{abstract} \pacs{03.65.Ud, 03.67.Mn, 42.50.Dv, 42.50.Xa} \maketitle As stressed by Peres \cite{Peres93}, Bell inequalities \cite{Bell64,Bell87} have nothing to do with quantum mechanics. They are constraints imposed by local realistic theories on the values of linear combinations of the averages (or probabilities) of the results of experiments on two or more separated systems. Therefore, when examining data obtained in experiments to test Bell inequalities, it is legitimate to do so from the perspective (i.e., under the assumptions) of local realistic theories, without any reference to quantum mechanics. This approach leads to some apparently paradoxical results. A remarkable one is that, while it is a standard result in quantum mechanics that no quantum state can violate the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality \cite{CHSH69} beyond Cirel'son's bound, namely $2 \sqrt 2$ \cite{Cirelson80}, the correlations between two qubits belonging to a three-qubit system can violate the CHSH-Bell inequality beyond $2 \sqrt{2}$ \cite{Cabello02}. In particular, if we use a three-qubit Greenberger-Horne-Zeilinger (GHZ) state \cite{GHZ89}, we can even obtain the maximum allowed violation of the CHSH-Bell inequality, namely $4$ \cite{Cabello02}. In this Letter, we report the first observation of a violation of the CHSH-Bell inequality beyond Cirel'son's bound, using a three-photon polarization-entangled GHZ state produced by Type-II spontaneous parametric down-conversion. In addition, since the experiment also provides all the data required for testing Mermin's three-party Bell inequality \cite{Mermin90}, we use our results to demonstrate the violation of this inequality.
The main idea behind the CHSH-Bell inequality \cite{CHSH69} is that, in local realistic theories, the absolute value of a particular combination of correlations between two distant particles $i$ and $j$ is bounded by 2: \begin{align} \vert C\left( A,B\right) -mC\left( A,b\right) & -nC\left( a,B\right) \nonumber\\ & -mnC\left( a,b\right) \vert\leqslant2\label{CHSH1} \end{align} where $m$ and $n$ can be either $-1$ or $1$, and $A$ and $a$ ($B$ and $b$) are physical observables taking values $-1$ or $1$, referring to local experiments on particle $i$ ($j$). The correlation $C\left( A,B\right)$ of $A$ and $B$ is defined as \begin{align} C\left( A,B\right) & =P_{AB}\left( 1,1\right) -P_{AB}\left( 1,-1\right) \nonumber\\ & -P_{AB}\left( -1,1\right) +P_{AB}\left( -1,-1\right), \label{correlation} \end{align} where $P_{AB}\left( 1,-1\right)$ denotes the joint probability of obtaining $A=1$ and $B=-1$ when $A$ and $B$ are measured. Cirel'son proved that, for a two-particle system prepared in any quantum state, the absolute value of the combination of quantum correlations appearing in inequality (\ref{CHSH1}) is bounded by $2\sqrt{2}$ \cite{Cirelson80}. However, from the point of view of local realistic theories, the correlations predicted by quantum mechanics between two distant qubits belonging to a three-qubit system can violate the CHSH-Bell inequality beyond Cirel'son's bound \cite{Cabello02}. In our experiment, the three distant qubits are polarization-entangled photons prepared in the GHZ state: \begin{equation} \vert\Psi\rangle=\frac{1}{\sqrt{2}}\left( |H\rangle|H\rangle|H\rangle +|V\rangle|V\rangle|V\rangle\right), \label{GHZ} \end{equation} where $H$ ($V$) denotes horizontal (vertical) linear polarization.
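As a quick sanity check of the local realistic bound (our own illustration, not part of the experiment), one can enumerate all deterministic local assignments, which are the extreme points of local realistic theories; the CHSH combination then never exceeds $2$ in absolute value:

```python
import itertools

# For deterministic local outcomes, C(X, Y) = X*Y, so the CHSH
# combination reduces to a product expression over +/-1 values.
def chsh(A, a, B, b, m, n):
    return abs(A * B - m * A * b - n * a * B - m * n * a * b)

values = [chsh(A, a, B, b, m, n)
          for A, a, B, b, m, n in itertools.product([1, -1], repeat=6)]
assert max(values) == 2  # the local realistic bound
```

Since any local realistic theory is a mixture of such deterministic assignments, the bound $2$ follows by convexity.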
During the experiment, we analyze the polarization of each photon in one of two different bases: either the $X$ basis, defined as the linear polarization basis $H/V$ rotated by $45^{\circ}$ and denoted by $H^{\prime}/V^{\prime}$; or the $Y$ basis, defined as the circular polarization basis $R/L$ (right-hand/left-hand). These polarization bases can be expressed in terms of the $H/V$ basis as \begin{align} & \left\vert H^{\prime}\right\rangle =\frac{1}{\sqrt{2}}\left(|H\rangle +|V\rangle\right) , \quad\left\vert V^{\prime}\right\rangle =\frac{1}{\sqrt {2}}\left(|H\rangle-|V\rangle\right), \nonumber\\ & \left\vert R\right\rangle =\frac{1}{\sqrt{2}}\left(|H\rangle +i|V\rangle\right) , \quad\left\vert L\right\rangle =\frac{1}{\sqrt{2}}\left( |H\rangle-i|V\rangle\right) . \label{basis} \end{align} The measurement results $H^{\prime}$ ($R$) and $V^{\prime}$ ($L$) are denoted by $1$ and $-1$, respectively. \begin{figure} \caption{Experimental setup for generating three-photon GHZ states. A UV pulse passes twice through the BBO crystal to generate, by Type-II spontaneous parametric down-conversion, the two pairs of polarization-entangled photons used to prepare the three-photon GHZ state. The UV laser, with a central wavelength of 394 nm, has a pulse duration of 200 fs, a repetition rate of 76 MHz, and an average pump power of 400 mW. We observe about $2\times10^{4}$} \label{setup} \end{figure} To generate the three-photon GHZ state (\ref{GHZ}), we use the technique developed in previous experiments \cite{BPDWZ99,PDGWZ01}. The experimental setup for generating three-photon entanglement is shown in Fig.~\ref{setup}. A pulse of ultraviolet (UV) light passes through a beta-barium borate (BBO) crystal twice to produce two polarization-entangled photon pairs, where both pairs are in the state \begin{equation} \left\vert \Psi_2\right\rangle =1/\sqrt{2}\left( |H\rangle|H\rangle +|V\rangle|V\rangle\right).
\end{equation} One photon from each pair is then steered to a polarization beam splitter (PBS), where the path lengths have been adjusted (by scanning the delay position) so that the photons arrive simultaneously. After the two photons pass through the PBS, each exiting by a different output port, there is no way whatsoever to distinguish from which emission each of the photons originated, so correlations due to four-photon GHZ entanglement \begin{equation} \left\vert \Psi_4\right\rangle =1/\sqrt {2}\left( |H\rangle|H\rangle|H\rangle|H\rangle+|V\rangle|V\rangle |V\rangle|V\rangle\right) \end{equation} can be observed \cite{PDGWZ01}. After that, by performing a $|H^{\prime}\rangle$ polarization projective measurement on one of the four outputs, the remaining three photons are prepared in the desired GHZ state (\ref{GHZ}). \begin{figure} \caption{Typical experimental results for polarization measurements on all three photons in the $X$ basis, triggered by photon $4$ at $H'$ polarization. The coincidence rates of the $H'H'H'$ and $H'H'V'$ components are shown as a function of the pump delay mirror position. The high visibility obtained at zero delay implies that the three photons are indeed in a coherent superposition.} \label{coherent} \end{figure} In the experiment, the observed fourfold coincidence rate of the desired component $HHHH$ or $VVVV$ is about $1.4$ per second. Performing an $H'$ projective measurement on photon $4$ as the trigger of the fourfold coincidence, the ratio of either of the desired events $HHH$ and $VVV$ to any of the six other nondesired ones, e.g., $HHV$, is about $65:1$. To confirm that these two events are indeed in a coherent superposition, we have performed polarization measurements in the $X$ basis. In Fig.~\ref{coherent}, we compare the count rates of the $H'H'H'$ and $H'H'V'$ components as we move the delay mirror (Delay), with the trigger photon $4$ at $H'$ polarization.
The latter component is suppressed with a visibility of $83\%$ at zero delay, which confirms the coherent superposition of $HHH$ and $VVV$. For each three-photon system prepared in the state (\ref{GHZ}), we define as photons $i$ and $j$ the two giving the result $-1$ when $X$ is measured on all three photons; the third photon is denoted by $k$. If all three photons give the result $1$, photons $i$ and $j$ can be any pair of them. Since no other combination of results is allowed for the state (\ref{GHZ}), $i$ and $j$ are well defined for every three-photon system. We are interested in the correlations between two observables, $A$ and $a$, of photon $i$ and two observables, $B$ and $b$, of photon $j$. In particular, let us choose $A=X _{i}$, $a=Y _{i}$, $B=X _{j}$, and $b=Y _{j}$, where $X _{q}$ and $Y _{q}$ are the polarizations of photon $q$ along the bases $X$ and $Y$, respectively. The particular CHSH-Bell inequality (\ref{CHSH1}) we are interested in is the one in which $m=n=y _{k}$, where $y _{k}$ is one of the possible results, $-1$ or $1$ (although we do not know which one), of measuring $Y_{k}$. With this choice we obtain the CHSH-Bell inequality \begin{align} \vert C & \left( X_{i},X_{j}\right) -y_{k} C\left( X_{i},Y_{j}\right) \nonumber\\ & -y_{k} C\left( Y_{i},X_{j}\right) -C\left( Y_{i},Y_{j}\right) \vert \leqslant2, \label{CHSH3} \end{align} which holds for local realistic theories, regardless of the particular value, either $-1$ or $1$, of $y_{k}$. We could force photons $i$ and $j$ to be those in locations $1$ and $2$ by measuring $X$ on the photon in location $3$, and then selecting only those events in which the result of this measurement is $1$. This procedure guarantees that our definition of photons $i$ and $j$ is physically meaningful. However, it does not allow us to measure $Y$ on photon $k$.
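That no other combination of results is allowed can also be checked numerically. The following sketch (our own check, with an assumed qubit ordering; not part of the experimental procedure) verifies that for the state (\ref{GHZ}) an $X$ measurement on all three photons only produces outcomes whose product is $+1$, so photons $i$ and $j$ are indeed well defined:

```python
import itertools
import numpy as np

H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
ghz = (np.kron(np.kron(H, H), H) + np.kron(np.kron(V, V), V)) / np.sqrt(2)

# X eigenstates: |H'> with outcome +1, |V'> with outcome -1
x_state = {+1: (H + V) / np.sqrt(2), -1: (H - V) / np.sqrt(2)}

probs = {}
for s in itertools.product([+1, -1], repeat=3):
    bra = np.kron(np.kron(x_state[s[0]], x_state[s[1]]), x_state[s[2]])
    probs[s] = float(abs(bra @ ghz) ** 2)

for s, p in probs.items():
    # outcomes with product +1 each occur with probability 1/4;
    # outcomes with product -1 never occur
    expected = 0.25 if s[0] * s[1] * s[2] == +1 else 0.0
    assert abs(p - expected) < 1e-12
```

In particular, either zero or exactly two photons give $-1$, matching the definition of the pair $(i, j)$ above.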
The key point for testing inequality (\ref{CHSH3}) is noticing that we do not need to know in which locations photons $i$, $j$, and $k$ are for every three-photon system. We can obtain the required data by performing suitable combinations of measurements of $X$ or $Y$ on the three photons. In order to see this, let us first translate inequality (\ref{CHSH3}) into the language of joint probabilities. Assuming that the expected value of any local observable cannot be affected by anything done to a distant particle, the CHSH-Bell inequality (\ref{CHSH3}) can be transformed into a more convenient experimental inequality \cite{CH74,Mermin95}: \begin{align} -1\leqslant & P_{X_{i}X_{j}}(-1,-1)-P_{X_{i}Y_{j}}(-1,-y_{k})\nonumber\\ & -P_{Y_{i}X_{j}}(-y_{k},-1)-P_{Y_{i}Y_{j}}(y_{k},y_{k})\leqslant 0. \label{CHSH4} \end{align} The bounds $l$ of inequalities (\ref{CHSH1}) and (\ref{CHSH3}) are transformed into the bounds $(l-2)/4$ of inequality (\ref{CHSH4}). Therefore, the local realistic bound in (\ref{CHSH4}) is $0$, Cirel'son's bound is $(\sqrt{2}-1)/2$, and the maximum value is $1/2$. To measure inequality (\ref{CHSH4}), we must relate the four joint probabilities appearing in (\ref{CHSH4}) to the probabilities of coincidences in an experiment with three spatial locations, $1$, $2$, and $3$. For instance, it can easily be seen that \begin{align} P_{X_{i}} & _{X_{j}}(-1,-1)=\nonumber\\ & P_{X_{1}X_{2}X_{3}}(1,-1,-1)+P_{X_{1}X_{2}X_{3}}(-1,1,-1)\nonumber\\ & +P_{X_{1}X_{2}X_{3}}(-1,-1,1)+P_{X_{1}X_{2}X_{3}}(-1,-1,-1). \label{prob1} \end{align} In addition, $P_{X_{i}Y_{j}}(-1,-y_{k})$ and $P_{Y_{i}X_{j}}(-y_{k},-1)$ are both less than or equal to \begin{align} P_{X_{1} Y_{2}} & _{Y_{3}}(-1,1,-1)+P_{X_{1} Y_{2} Y_{3}}(-1,-1,1)\nonumber\\ & +P_{Y_{1} X_{2} Y_{3}}(1,-1,-1)+P_{Y_{1} X_{2} Y_{3}}(-1,-1,1)\nonumber\\ & +P_{Y_{1} Y_{2} X_{3}}(1,-1,-1)+P_{Y_{1} Y_{2} X_{3}}(-1,1,-1).
\label{prob2} \end{align} Finally, \begin{align} P_{Y_{i}} & _{Y_{j}}(y_{k},y_{k})=P_{Y_{1} Y_{2} Y_{3}}(1,1,1)+P_{Y_{1} Y_{2} Y_{3}}(-1,-1,-1). \label{prob3} \end{align} Therefore, by performing measurements in $5$ specific configurations ($XXX$, $XYY$, $YXY$, $YYX$, and $YYY$), we can obtain the value of the middle side of inequality (\ref{CHSH4}). In the state (\ref{GHZ}), the first three probabilities on the right-hand side of (\ref{prob1}) are expected to be $1/4$, and the fourth is expected to be zero; the six probabilities in (\ref{prob2}) are expected to be zero; and the two probabilities on the right-hand side of (\ref{prob3}) are expected to be $1/8$. Therefore, the middle side of inequality (\ref{CHSH4}) is expected to be $1/2$, which means that the left-hand side of inequality (\ref{CHSH3}) is $4$; this is not only beyond Cirel'son's bound, $2 \sqrt{2}$, but is also the maximum possible violation of inequality (\ref{CHSH3}). \begin{figure} \caption{Experimental results observed for the 5 required configurations: $XXX$, $XYY$, $YXY$, $YYX$, and $YYY$.} \label{result} \end{figure} The experiment consists of performing measurements in these $5$ specific configurations. As shown in Fig.~\ref{setup}, we use polarizers oriented at $\pm45^{\circ}$ and $\lambda/4$ plates to perform the $X$ or $Y$ measurements. For these $5$ configurations, the experimental results for all possible outcomes are shown in Fig.~\ref{result}. Substituting the experimental results (shown in Fig.~\ref{result}) into the right-hand side of (\ref{prob1}), we obtain \begin{align} P_{X_{i}} & _{X_{j}}(-1,-1)=0.738\pm0.012. \label{ineq1} \end{align} Similarly, substituting the experimental results in (\ref{prob2}), we obtain \begin{align} P_{X_{i}Y_{j}}(-1,-y_{k})\leqslant0.072\pm0.007,\nonumber\\ P_{Y_{i}X_{j}}(-y_{k},-1)\leqslant0.072\pm0.007. \label{ineq2} \end{align} Finally, substituting the experimental results in (\ref{prob3}), we obtain \begin{align} P_{Y_{i}Y_{j}}(y_{k},y_{k})=0.254\pm0.011.
\label{ineq3} \end{align} Therefore, the prediction for the middle side of (\ref{CHSH4}) is greater than or equal to $0.340\pm0.019$, and the prediction for the left-hand side of (\ref{CHSH3}) is greater than or equal to $3.36\pm0.08$, which clearly violates Cirel'son's bound by $7$ standard deviations. In addition, using part of the results contained in Fig.~\ref{result}, we can test the three-particle Bell inequality derived by Mermin \cite{Mermin90}, \begin{align} \vert C & \left(X_1,Y_2,Y_3\right)+C\left(Y_1,X_2,Y_3\right) \nonumber\\ & +C\left(Y_1,Y_2,X_3\right)-C\left(X_1,X_2,X_3\right) \vert \leqslant2. \label{Mermin} \end{align} From the results in Fig.~\ref{result} we obtain $3.57\pm0.04$ for the left-hand side of (\ref{Mermin}), which violates inequality (\ref{Mermin}) by $39$ standard deviations. Note that the experiment for observing the violation beyond Cirel'son's bound also requires performing measurements in an additional configuration ($YYY$). In conclusion, we have demonstrated a violation of the CHSH-Bell inequality beyond Cirel'son's bound. It should be emphasized that such a violation is predicted by quantum mechanics and appears when examining the data from the perspective of local realistic theories \cite{Cabello02}. In addition, it should be stressed that the reported experiment is different from previous experiments testing local realism with three- or four-qubit GHZ states \cite{PBDWZ00,ZYCZZP03}, since it is based on a definition of pairs that is conditioned on the result of a measurement on a third qubit, and requires performing measurements in additional configurations. This work was supported by the Alexander von Humboldt Foundation, the Marie Curie Excellence Grant from the EU, the Deutsche Telekom Stiftung, the NSFC, and the CAS. A.C. acknowledges additional support from Projects No.~FIS2005-07689 and No.~FQM-239. \end{document}
\begin{document} \title{Transformed snapshot interpolation} \author{G. Welper\footnote{Department of Mathematics, Texas A\&M University, College Station, Texas 77843-3368, USA, email \href{mailto:[email protected]}{\texttt{[email protected]}}}} \date{} \maketitle \begin{abstract} Functions with jumps and kinks, typically arising from parameter dependent or stochastic hyperbolic PDEs, are notoriously difficult to approximate. If the jump location in physical space is parameter dependent or random, standard approximation techniques like reduced basis methods, PODs, polynomial chaos, etc. are known to yield poor convergence rates. In order to improve these rates, we propose a new approximation scheme. Like reduced basis methods, it relies on snapshots for the reconstruction of parameter dependent functions, so that it is efficiently applicable in a PDE context. However, we allow a transformation of the physical coordinates before a snapshot is used in the reconstruction, which makes it possible to realign the moving discontinuities and yields high convergence rates. The transforms are computed automatically by minimizing a training error. To demonstrate the feasibility of this approach, we test it in 1d and 2d numerical experiments. \end{abstract} \noindent \textbf{Keywords:} Parametric PDEs, reduced order modelling, shocks, transformations, interpolation, convergence rates, stability \noindent \textbf{AMS subject classifications:} 41A46, 41A25, 35L67, 65M15 \section{Introduction} A cornerstone of reduced order modelling, stochastic PDEs and uncertainty quantification is the efficient approximation of high dimensional PDE solutions $u(x, \mu)$ depending on physical variables $x \in \Omega$ and parametric or random variables $\mu \in \mathcal{P} \subset \mathbb{R}^d$. Many contemporary approximation techniques, such as
reduced basis methods \cite{RozzaHuynhPatera2008, SenVeroyHuynhEtAl2006, PateraRozza2006}, POD \cite{Sirovich1987, KunischVolkwein2001, KunischVolkwein2002}, the Karhunen Lo\`{e}ve expansion \cite{Loeve1978} or polynomial chaos \cite{Wiener1938, XiuKarniadakis2002, Schoutens2000}, build upon a reconstruction by a truncated sum \begin{equation} u(x,\mu) \approx \sum_{i} c_i(\mu) \psi_i(x), \label{eq:polyadic-decomposition} \end{equation} where the choice and computation of $c_i(\mu)$ and $\psi_i(x)$ depend on the specific method at hand: For reduced basis methods, $\psi_i(x) = u(x, \mu_i)$ are snapshots and the $c_i(\mu)$ are computed by a Galerkin projection. For POD and Karhunen Lo\`{e}ve expansions, one minimizes the error between $u(x,\mu)$ and any truncated representation of the form \eqref{eq:polyadic-decomposition}, and in the case of polynomial chaos the functions $c_i(\mu)$ are orthogonal polynomials. Borrowing from the tensor community, we refer to \eqref{eq:polyadic-decomposition} as a \emph{polyadic decomposition} and call methods based on it \emph{polyadic decomposition based methods}. The success of all these methods relies on the fact that for many problems one can truncate this sum to a few summands at the price of a very small error. However, this regularity assumption is not always satisfied. An important class of problems are functions $u(x,\mu)$ that have parameter dependent or random jumps or kinks, arising e.g. in parametric or stochastic hyperbolic PDEs. Polyadic decomposition based methods are expected to perform poorly for these types of problems. In fact, in Appendix \ref{sec:linear-width} we consider a counterexample for which no polyadic decomposition based method can achieve a convergence rate higher than one with respect to the number of summands in a polyadic decomposition. See also \cite{IaccarinoPetterssonNordstrEtAl2010} for a survey in the case of uncertainty quantification.
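This slow convergence is easy to observe numerically. The following toy computation of ours (not one of the paper's experiments) evaluates the singular values of a snapshot matrix for a parameter dependent jump; they decay only slowly, roughly like $1/k$, so a truncated sum of the form \eqref{eq:polyadic-decomposition} needs many terms:

```python
import numpy as np

# Snapshot matrix of u(x, mu) = 1 for x > mu, 0 otherwise,
# sampled on a 200 x 200 tensor grid in (x, mu).
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(0.0, 1.0, 200)
snapshots = (x[:, None] > mus[None, :]).astype(float)

s = np.linalg.svd(snapshots, compute_uv=False)
# No fast decay: the 50th singular value is still a visible
# fraction of the largest one for this (triangular) matrix.
assert s[50] > 1e-3 * s[0]
```

The best rank-$n$ error in this setting is governed by the tail of these singular values, which is consistent with the rate-one barrier discussed above.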
There are relatively few methods in the literature \cite{ConstantineIaccarino2012, OhlbergerRave2013, JakemanNarayanXiu2013} that directly address this poor performance for parameter dependent jumps and kinks. Instead, much of the existing work uses polyadic decompositions and focuses on different problems arising in the context of reduced order modelling of parametric hyperbolic PDEs and singularly perturbed problems: Solving the PDE directly in a reduced basis, online/offline decompositions and error estimators, see \cite{HaasdonkOhlberger2008, HaasdonkOhlberger2008a, NguyenRozzaPatera2009, YanoPateraUrban2014, DahmenPleskenWelper2014, Dahmen2015}. The goal of this paper is the construction of an alternative approximation method to replace standard polyadic decompositions in order to achieve higher convergence rates for functions $u(x,\mu)$ with parameter dependent jumps and kinks. In addition, the method relies on snapshots and optionally error estimators as input data so that it can be used efficiently and non-intrusively with existing PDE solvers. Somewhat similar to \cite{OhlbergerRave2013}, we allow a transformation $\phi(\mu, \eta): \Omega \to \Omega$ of the physical domain before we use a snapshot $u(x,\eta)$ in the reconstruction of $u(x,\mu)$, i.e. \begin{equation*} u(x,\mu) \approx \sum_{\eta \in \mathcal{P}_n} c_\eta(\mu) u(\phi(\mu, \eta)(x), \eta), \end{equation*} where $\eta$ is in some finite set of parameters $\mathcal{P}_n$. The purpose of the additional transform is an alignment of the discontinuities of $u(x,\eta)$ with the ones of $u(x,\mu)$. As a result, the discontinuities are ``invisible'' in parameter direction so that very few summands yield accurate approximations. More rigorously, we prove a high order error estimate that does not depend on the regularity of $u(x,\mu)$ itself, but on the regularity of the modified snapshots after alignment, which is considerably higher for many practical problems.
In addition, because exact alignment is rarely possible in practice, we also take perturbation results into account. Similar to greedy methods for the construction of reduced bases, or neural networks, the transform $\phi$ is computed by minimizing the approximation error on a training sample of snapshots. Although this might seem prohibitively complicated, in Section \ref{sec:optimization} we discuss some preliminary arguments to avoid being trapped in local minima, and in Section \ref{sec:numerical-experiments} some 2d numerical experiments are provided where simple subgradient methods yield good results. The outlined approximation scheme allows for various realizations with regard to the choices of the coefficients $c_\eta(\mu)$ or the inner transforms $\phi(\mu, \eta)$. Because the main objective of the paper is a proof of principle that one can approximate functions with parameter dependent jumps and kinks with high order from snapshots alone, we usually opt for the simplest possible choices and leave more sophisticated variants for future research. The paper is organized as follows: In Section \ref{sec:transformed-interpolation} we present the main approximation scheme and a basic error estimate. Then, in Section \ref{sec:stability} we prove stability results and consider approximations of the inner transform. Afterwards, we turn to the actual construction of $\phi(\mu,\eta)$. We first discuss some characteristic based approaches and their drawbacks in Section \ref{sec:transform-by-characteristics} and then the optimization of $\phi(\mu,\eta)$ by training errors in Section \ref{sec:optimization}. Finally, Section \ref{sec:numerical-experiments} provides some 1d and 2d numerical experiments. For the sake of completeness, in Appendix \ref{sec:linear-width} we consider a counterexample for which no polyadic decomposition based method can achieve high order convergence rates.
\section{Transformed snapshot interpolation} \label{sec:transformed-interpolation} In order to motivate the new approximation scheme, we consider the following prototype example throughout this section \begin{align} u(x, \mu) & := \psi \left( \frac{x}{0.4+\mu}-1 \right), & \psi(x) & := \left\{ \begin{array}{ll} \exp \left( - \frac{1}{1-x^2} \right) & -1 \le x < -\frac{1}{2} \\ 0 & \text{else} \end{array} \right. \label{eq:mollifier-cut-off} \end{align} where $\psi$ is the standard mollifier cut off at $x = -1/2$. This is not necessarily a solution of a PDE, but has the main features we are interested in: a discontinuity that is moving with the parameter. An example for a polyadic decomposition based approximation can be seen in Figure \ref{fig:interpolation} where we recover $u(x,\mu)$ from three snapshots by a polynomial interpolation \begin{equation*} u(x,\mu) \approx \sum_{\eta \in \mathcal{P}_n} \ell_\eta(\mu) u(x, \eta) \end{equation*} where $\mathcal{P}_n \subset \mathcal{P}$ are some interpolation points and $\ell_\eta$, $\eta \in \mathcal{P}_n$ are the corresponding Lagrange interpolation polynomials. We see the typical ``staircasing behaviour'' which significantly deteriorates the solution quality. Although our choice of the polyadic decomposition is perhaps overly simplistic, reduced basis methods and other more sophisticated schemes suffer from the same problem. \begin{figure} \caption{Left: Snapshots of the parametric function \eqref{eq:mollifier-cut-off}.} \label{fig:interpolation} \end{figure} Unlike this superposition of snapshots resulting in the ``staircasing'' phenomenon, it seems much more intuitive to compute one snapshot $u(x,\eta)$ and recover the function $u(x,\mu)$ for a different $\mu$ by stretching this snapshot such that the left end of the support is fixed and the jump locations match.
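To make the staircasing effect concrete, the following Python sketch reproduces the plain snapshot interpolation for the prototype example \eqref{eq:mollifier-cut-off}; the spatial grid, the interpolation points and the target parameter are our own illustrative choices, not the ones used in the paper's experiments.

```python
import numpy as np

def psi(z):
    """Standard mollifier cut off at z = -1/2, as in the prototype example."""
    y = np.zeros_like(z, dtype=float)
    m = (z >= -1.0) & (z < -0.5)
    y[m] = np.exp(-1.0 / (1.0 - z[m] ** 2))
    return y

def u(x, mu):
    """Parametric function with a mu-dependent jump at x = 1/5 + mu/2."""
    return psi(x / (0.4 + mu) - 1.0)

def lagrange_basis(mu, nodes):
    """Values ell_eta(mu) of the Lagrange basis polynomials for the nodes."""
    ell = np.ones(len(nodes))
    for i, eta in enumerate(nodes):
        for k, nu in enumerate(nodes):
            if i != k:
                ell[i] *= (mu - nu) / (eta - nu)
    return ell

x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
nodes = [0.2, 0.5, 0.8]              # interpolation points P_n (our choice)
mu = 0.35                            # target parameter, not a node

ell = lagrange_basis(mu, nodes)
u_interp = sum(l * u(x, eta) for l, eta in zip(ell, nodes))

# the interpolant superposes three smeared copies of the jump ("staircasing"),
# so the L1 error stays bounded away from zero near the moving discontinuity
err = float(np.sum(np.abs(u(x, mu) - u_interp)) * dx)
```

Refining the parameter grid moves the spurious intermediate jumps around but does not remove them, which is the behaviour visible in Figure \ref{fig:interpolation}.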
To state this intuition in mathematical terms, ``stretching'' one function $u(x,\eta)$ to match a second one $u(x,\mu)$ essentially boils down to a transform $\phi(\mu, \eta): \Omega \to \Omega$ of the physical variables so that we obtain the approximation \begin{equation*} u(x,\mu) \approx u(\phi(\mu, \eta)(x),\eta). \end{equation*} In general, even for optimal choices of $\phi$, we cannot obtain arbitrarily good approximation errors in this way. To obtain convergence, we therefore require in addition that $\eta$ is close to $\mu$, which yields the following simple approximation scheme: First, we choose a finite subset $\mathcal{P}_n \subset \mathcal{P}$ of the parameter domain $\mathcal{P}$ and compute the snapshots $u(x, \eta)$ for $\eta \in \mathcal{P}_n$. Then, given a new $\mu \in \mathcal{P}$, we find the $\eta_\mu \in \mathcal{P}_n$ closest to $\mu$ and approximate \begin{equation} u(x,\mu) \approx u(\phi(\mu, \eta_\mu)(x),\eta_\mu). \label{eq:piecewise-constant} \end{equation} Besides the snapshots themselves, this requires the knowledge of the transforms $(x,\mu) \to \phi(\mu, \eta)(x)$ for finitely many $\eta \in \mathcal{P}_n$. Thus, instead of approximating one single function depending on $x$ and $\mu$ we have to find many of them! However, whereas polyadic decompositions perform poorly for $u(x,\mu)$, they often yield good results for the transforms $\phi(\mu, \eta)$: Their smoothness with respect to $\mu$ depends on the smoothness of the jump or kink location with respect to the parameter and not the smoothness of $u(x,\mu)$ itself. For example \eqref{eq:mollifier-cut-off}, the jump location is $j(\mu) = \frac{1}{5} + \frac{1}{2} \mu$ so that a linear transform \begin{equation*} \phi(\mu, \eta)(x) = x - j(\mu) + j(\eta) \end{equation*} is sufficient to align the jumps.
As shown in Figure \ref{fig:transformed-interpolation}, this transform does not align the left end of the supports, but because $\eta$ is close to $\mu$, this is good enough, as we will see below. For more complicated problems the transform $\phi(\mu, \eta)$ is not explicitly known and we have to find efficient ways to compute it from the given data. We postpone this issue to Section \ref{sec:optimization} and assume for the remainder of this section that $\phi(\mu, \eta)(x)$ is given to us. \begin{figure} \caption{Left: Transformed snapshots of the parametric function \eqref{eq:mollifier-cut-off}.} \label{fig:transformed-interpolation} \end{figure} To assess the approximation error, we observe that our scheme \eqref{eq:piecewise-constant} is a piecewise constant approximation of the \emph{transformed snapshots} \begin{equation*} v_\mu(x,\eta) := u(\phi(\mu, \eta)(x),\eta) \end{equation*} with respect to $\eta$ at the point $\eta = \mu$. Thus, we obtain the error estimate \begin{equation} \|v_\mu(x, \mu) - v_\mu(x, \eta_\mu)\|_{L_p} = \mathcal{O}(n^{-1}) \label{eq:piecewise-const-error} \end{equation} where $n$ is the number of snapshots, provided that $v_\mu(x,\eta)$ is differentiable with respect to $\eta$. This is achieved by the inner transform $\phi(\mu, \eta)$: Whereas the original snapshots $u(x,\mu)$ have jumps in parameter direction, the transformed snapshots have jumps in fixed locations independent of $\eta$, resulting in a smooth dependence of $v_\mu(x, \eta)$ on $\eta$. Because $\phi(\mu, \eta)$ is supposed to align the discontinuities and kinks of $u(\cdot, \mu)$ and $u(\cdot, \eta)$, it is natural to require that \begin{equation} \phi(\mu,\mu)(x) = x \label{eq:id} \end{equation} which yields $u(x,\mu) = v_\mu(x,\mu)$. With \eqref{eq:piecewise-const-error}, it follows that \begin{equation*} \|u(x, \mu) - v_\mu(x, \eta_\mu)\|_{L_p} = \mathcal{O}(n^{-1}) \end{equation*} so that our approximation scheme achieves first order convergence.
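The first order rate can be observed numerically for the prototype example \eqref{eq:mollifier-cut-off} with the jump-aligning shift transform from above; the following Python sketch (our own illustrative setup) compares the error of the nearest-snapshot reconstruction \eqref{eq:piecewise-constant} for a coarse and a fine parameter grid.

```python
import numpy as np

def psi(z):
    y = np.zeros_like(z, dtype=float)
    m = (z >= -1.0) & (z < -0.5)
    y[m] = np.exp(-1.0 / (1.0 - z[m] ** 2))
    return y

def u(x, mu):
    # prototype example (eq:mollifier-cut-off), jump at j(mu) = 1/5 + mu/2
    return psi(x / (0.4 + mu) - 1.0)

def j(mu):
    return 0.2 + 0.5 * mu

x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
mu = 0.333                         # target parameter (our choice, not a node)

def nearest_snapshot_error(n):
    # piecewise constant transformed approximation (eq:piecewise-constant)
    # with n equispaced snapshots and the jump-aligning shift transform
    nodes = np.linspace(0.2, 0.8, n)
    eta = float(nodes[np.argmin(np.abs(nodes - mu))])
    approx = u(x - j(mu) + j(eta), eta)   # v_mu(x, eta) = u(phi(mu,eta)(x), eta)
    return float(np.sum(np.abs(u(x, mu) - approx)) * dx)

err_coarse = nearest_snapshot_error(3)    # parameter mesh width 0.3
err_fine = nearest_snapshot_error(17)     # parameter mesh width 0.0375
# the error decreases roughly in proportion to the distance |mu - eta_mu|
```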
In comparison, due to the lack of smoothness, for standard piecewise constant approximations \begin{equation} u(x,\mu) \approx u(x,\eta_\mu) \label{eq:piecewise-constant-no-transform} \end{equation} we expect convergence rates of $h^{1/p}$ for spatial errors in $L_p$. Thus, depending on the norm, the inner transform yields a gain in the convergence order for $p>1$ or none at all for $p=1$. However, the major impediment is no longer a lack of regularity but the low order convergence of the piecewise constant approximation of $\eta \to v_\mu(x,\eta)$. Therefore, we replace it by a higher order scheme. For simplicity, we confine ourselves to a simple polynomial interpolation and leave more sophisticated choices for future research. Thus, for interpolation points $\mathcal{P}_n \subset \mathcal{P}$ and corresponding Lagrange basis polynomials $\ell_\eta$ we define the \emph{transformed snapshot interpolation} by \begin{equation} u(x,\mu) \approx u_n(x,\mu) := \sum_{\eta \in \mathcal{P}_n} \ell_\eta(\mu) u(\phi(\mu, \eta)(x),\eta). \label{eq:interpol-transform} \end{equation} The input data for this reconstruction is identical to the previous piecewise constant case: We need $|\mathcal{P}_n|$ snapshots and $|\mathcal{P}_n|$ transforms $(x,\mu) \to \phi(\mu,\eta)(x)$, $\eta \in \mathcal{P}_n$. Only the reconstruction formula has been changed to a higher order interpolation. In order to state an error estimate, let $\mathbb{P}^n$ be the span of the Lagrange basis polynomials and recall that the Lebesgue constant is the norm of the polynomial interpolation operator in the $\sup$-norm, which is given by \begin{equation} \Lambda_n := \sup_{\mu \in \mathcal{P}} \sum_{\eta \in \mathcal{P}_n} |\ell_\eta(\mu)|. \label{eq:lebesgue-constant} \end{equation} We obtain the following error estimate. \begin{proposition} \label{prop:outer-error} Assume $u_n(x,\mu)$ is defined by the transformed snapshot interpolation \eqref{eq:interpol-transform}.
Then for all $\mu \in \mathcal{P}$ the error is bounded by \begin{equation*} \|u(\cdot, \mu) - u_n(\cdot, \mu)\|_{L_1} \le \Lambda_n \Big\| \inf_{p \in \mathbb{P}^n} |v(\cdot, \mu) - p| \Big\|_{L_1}. \end{equation*} \end{proposition} \begin{proof} The proof follows directly from $u(x, \mu) = v_\mu(x,\mu)$ and standard interpolation estimates applied to $\eta \to v_\mu(x,\eta)$. \end{proof} For this proposition, as well as for the remainder of this paper, we choose the $L_1$-norm to measure errors because it is the most common choice for hyperbolic PDEs. Also note that the given result is just one of the various estimates for polynomial interpolation. For example, if we assume analytic dependence of $v_\mu(x,\eta)$ on $\eta$ and use Chebyshev nodes for a one dimensional parameter, we can achieve exponential convergence rates. The most important observation, however, is that the estimate does not involve any regularity assumption on $u(x,\mu)$ itself. Instead it relies on the regularity of $v_\mu(x,\eta)$ with respect to $\eta$, which can be considerably better. The results for example \eqref{eq:mollifier-cut-off} are shown in the right picture in Figure \ref{fig:transformed-interpolation}. We see a very accurate approximation of the jump; however, the approximation quality around the left end of the support of $u(x,\mu)$ is slightly worse than for the original interpolation in Figure \ref{fig:interpolation}. The reason is that the left end of the support is parameter dependent after the transform so that $v_\mu(x,\eta)$ is no longer analytic in $\eta$, though still infinitely differentiable. Therefore, the loss we suffer at this point is orders of magnitude smaller than the staircasing behaviour around the jump of the simple interpolation in Figure \ref{fig:interpolation}.
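The comparison just described can be reproduced in a few lines of Python; the following sketch (grid, nodes and target parameter are our own illustrative choices) evaluates both the plain snapshot interpolation and the transformed snapshot interpolation \eqref{eq:interpol-transform} with the jump-aligning shift $\phi(\mu, \eta)(x) = x - j(\mu) + j(\eta)$ from above for the prototype example \eqref{eq:mollifier-cut-off}.

```python
import numpy as np

def psi(z):
    y = np.zeros_like(z, dtype=float)
    m = (z >= -1.0) & (z < -0.5)
    y[m] = np.exp(-1.0 / (1.0 - z[m] ** 2))
    return y

def u(x, mu):
    # prototype example (eq:mollifier-cut-off)
    return psi(x / (0.4 + mu) - 1.0)

def j(mu):
    # jump location j(mu) = 1/5 + mu/2
    return 0.2 + 0.5 * mu

def phi(mu, eta, x):
    # jump-aligning linear shift phi(mu, eta)(x) = x - j(mu) + j(eta)
    return x - j(mu) + j(eta)

def lagrange_basis(mu, nodes):
    ell = np.ones(len(nodes))
    for i, eta in enumerate(nodes):
        for k, nu in enumerate(nodes):
            if i != k:
                ell[i] *= (mu - nu) / (eta - nu)
    return ell

def tsi(x, mu, nodes):
    # transformed snapshot interpolation (eq:interpol-transform)
    ell = lagrange_basis(mu, nodes)
    return sum(l * u(phi(mu, eta, x), eta) for l, eta in zip(ell, nodes))

x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
nodes = [0.2, 0.5, 0.8]
mu = 0.35

plain = sum(l * u(x, e) for l, e in zip(lagrange_basis(mu, nodes), nodes))
err_plain = float(np.sum(np.abs(u(x, mu) - plain)) * dx)
err_tsi = float(np.sum(np.abs(u(x, mu) - tsi(x, mu, nodes))) * dx)
# aligning the jumps removes the staircasing, so err_tsi is much smaller
```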
In summary, instead of approximating the non-smooth function $u(x,\mu)$ directly, for every target $\mu \in \mathcal{P}$ we construct a new smooth function $(x,\eta) \to v_\mu(x, \eta)$ and approximate this function instead. The interpolation condition \eqref{eq:id} guarantees that $u(x,\mu) = v_\mu(x,\mu)$ so that this yields accurate approximations of $u(x,\mu)$ itself, as depicted in Figure \ref{fig:solution-manifold}. In addition, for our preliminary simple linear interpolation of $v_\mu(x,\eta)$ this allows an offline/online decomposition: in an offline phase, we compute the snapshots $u(x,\eta)$ as well as the transforms $\phi(\mu, \eta)(x)$ at the interpolation nodes $\eta$ (see Section \ref{sec:optimization} below). Then in an online phase we can efficiently approximate $u(x,\mu)$ for any $\mu \in \mathcal{P}$ by the transformed snapshot interpolation \eqref{eq:interpol-transform}. \begin{figure} \caption{The three dimensional coordinate axes indicate the linear space of $x$-dependent functions so that each point of the red line represents a snapshot $x \to u(x,\eta)$ for some parameter $\eta$. The kinks indicate that in our case this solution manifold is not smooth with respect to $\eta$; however, note that in reality it is non-smooth for every parameter. The blue dashed line indicates the smoother transformed snapshots $v_\mu(x, \eta)$.} \label{fig:solution-manifold} \end{figure} \section{Stability} \label{sec:stability} Of course, the high order smoothness of $v_\mu(x, \eta)$ with respect to $\eta$ needed for high approximation orders in Proposition \ref{prop:outer-error} requires that jumps and kinks are exactly aligned. However, for any finite approximation of the inner transform $\phi$, this is rarely possible. Therefore, we next consider two perturbation results that allow us to bound the error while taking approximation errors of the inner transform into account.
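Before stating the results, a quick numerical sanity check may be helpful. For shifts $\phi(x) = x$ and $\varphi(x) = x + \delta$ on $\Omega = [0,1]$ with the convex combination homotopy, the perturbation bound \eqref{eq:perturbation-measure} of Lemma \ref{lemma:perturbation-measure} below holds with $c = C = 1$ and reduces to $\|u \circ \phi - u \circ \varphi\|_{L_1} \le |\delta| \, \|u\|_{BV}$. The following Python sketch (our own toy example, not taken from the paper) verifies this for a $BV$ function with a jump.

```python
import numpy as np

# Shifts on Omega = [0,1]: phi(x) = x and varphi(x) = x + delta preserve the
# Lebesgue measure, so the pushforward bound holds with c = 1, and the convex
# combination homotopy Phi^s(x) = x + s*delta has speed |delta|, i.e. C = 1.
# The perturbation bound then reads ||u o phi - u o varphi||_L1 <= |delta|*||u||_BV.

N = 200_000
x = np.linspace(0.0, 1.0, N + 1)
dx = 1.0 / N

def u(y):
    # BV function: smooth part plus a jump of height 2 at y = 1/2
    # (constant extension outside [0, 1])
    y = np.clip(y, 0.0, 1.0)
    return np.where(y < 0.5, np.sin(2.0 * np.pi * y), 2.0)

delta = 200 * dx                                  # ||phi - varphi||_sup
lhs = float(np.sum(np.abs(u(x) - u(x + delta))) * dx)
tv = float(np.sum(np.abs(np.diff(u(x)))))         # discrete total variation
# lhs stays below the bound delta * tv, and is close to it for this example
```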
The first one, Lemma \ref{lemma:perturbation-measure}, relies on a measure theoretic argument and allows rather general transforms, including ones with kinks as found in e.g. finite element discretizations. The second one, Corollary \ref{cor:perturbation-diffeomorphism}, avoids measure theory, but requires the inner transforms $\phi(\mu, \eta)$ to be diffeomorphisms. In the following, let $\varphi(\mu, \eta)(x)$ be a perturbation of $\phi(\mu, \eta)(x)$. To simplify the arguments below, for the time being, we forget about the parameter dependence and consider two transforms $\varphi, \phi: \Omega \to \Omega$ instead. If we assume that each point $\phi(x)$ can be connected to the point $\varphi(x)$ along a curve $\Phi^s(x)$ for $s$ in the interval $[0,1]$, we can rewrite the perturbation by the fundamental theorem of line integrals \begin{equation*} (u \circ \phi)(x) - (u \circ \varphi)(x) = \int_0^1 u'(\Phi^s(x)) \partial_s \Phi^s(x) \, \text{d} s, \end{equation*} so that it remains to estimate the right hand side. The map $(x,s) \to \Phi^s(x)$ can be regarded as a function from $\Omega \times [0,1] \to \Omega$ such that \begin{align} \Phi^0(x) & = \phi(x) & \Phi^1(x) & = \varphi(x), \label{eq:homotopy} \end{align} which is a homotopy between $\phi$ and $\varphi$ if, in addition, it is continuous. Furthermore, let $\lambda$ be the Lebesgue measure and $\mathcal{A}$ the Lebesgue $\sigma$-algebra on $\Omega$ and let $\Phi^s_* \lambda$ denote the pushforward measure defined by \begin{equation*} \begin{aligned} \Phi^s_* \lambda(A) & = \lambda( (\Phi^s)^{-1}(A)) & & \text{for all } A \in \mathcal{A}. \end{aligned} \end{equation*} Then we have the following lemma.
\begin{lemma} \label{lemma:perturbation-measure} Assume that $u \in BV(\Omega)$ and $\Phi^s$, $0 \le s \le 1$ given by \eqref{eq:homotopy} is measurable and differentiable with respect to $s$ such that \begin{equation} \begin{aligned} \Phi^s_* \lambda(A) & \le c \lambda(A) & & \text{for all } A \in \mathcal{A} \text{ and } 0 \le s \le 1 \end{aligned} \label{eq:pushforward-bound} \end{equation} and \begin{equation} \sup_{\substack{0 \le s \le 1 \\ x \in \Omega}} |\partial_s \Phi^s(x)| \le C \|\phi - \varphi\|_{L_\infty(\Omega)} \label{eq:length} \end{equation} for constants $c, C \ge 0$. Then we have \begin{equation} \|u \circ \phi - u \circ \varphi \|_{L_1(\Omega)} \le c C \|u\|_{BV(\Omega)} \|\phi - \varphi\|_{L_\infty(\Omega)}. \label{eq:perturbation-measure} \end{equation} \end{lemma} Let us discuss the main assumptions before we prove the lemma. If the speed $|\partial_s \Phi^s(x)|$ of each curve $s \to \Phi^s(x)$ is quasi uniform, i.e. equivalent to a constant $S$ for all $x$ and $s$, we have \begin{equation*} \|\partial_s \Phi^s\|_{L_\infty(\Omega \times [0,1])} \sim \int_0^1 | \partial_s \Phi^s(x) | \, \text{d} s =: l(x) \end{equation*} where $l(x)$ is the length of the curve connecting $\phi(x)$ to $\varphi(x)$. In that case, assumption \eqref{eq:length} states that, up to a constant, the length of each curve $\Phi^s(x)$ is bounded by the distance $|\phi(x) - \varphi(x)|$ of its endpoints. In case the domain $\Omega$ is convex, a simple choice of the curves $\Phi^s(x)$ is the convex combination of the end points: \begin{equation*} \Phi^s(x) = (1-s) \phi(x) + s \varphi(x). \end{equation*} In that case, we have $\partial_s \Phi^s(x) = \varphi(x) - \phi(x)$ so that condition \eqref{eq:length} is satisfied. In order to justify the second assumption \eqref{eq:pushforward-bound}, let us consider the following scenario: Assume that $\phi(x) = x_0 \in \Omega$ and $\varphi(x) = x_1 \in \Omega$ map all of $\Omega$ to single points.
Furthermore, let $u$ be a piecewise constant function with a jump so that $x_0$ and $x_1$ are on different sides of this jump. On the one hand, we obtain $\|u \circ \phi - u \circ \varphi \|_{L_1(\Omega)} = \|u(x_0) - u(x_1)\|_{L_1(\Omega)} = h \lambda(\Omega)$ where $h$ is the height of the jump. On the other hand, we have $\|\phi - \varphi\|_{L_\infty(\Omega)} = |x_0 - x_1|$, which can be made arbitrarily small by suitable choices of $x_0$ and $x_1$ on each side of the jump. Thus, the main statement \eqref{eq:perturbation-measure} of the lemma is violated. This counterexample relies on the fact that both transforms concentrate all weight in a single point such that $\Phi^i_* \lambda(\{x_i\}) = \lambda(\Omega)$, $i=0,1$, which is ruled out by assumption \eqref{eq:pushforward-bound}. Finally, we assume that the outer function $u \in BV(\Omega)$ is of bounded variation. This allows jumps and kinks, and the $BV$-norm is one of the most common norms for stability results of hyperbolic PDEs. \begin{proof}[Proof of Lemma \ref{lemma:perturbation-measure}] For the time being, let us assume that $u \in C^1(\Omega)$.
Applying the fundamental theorem for line integrals, we obtain \begin{equation*} (u \circ \phi)(x) - (u \circ \varphi)(x) = \int_0^1 u'(\Phi^s(x)) \partial_s \Phi^s(x) \, \text{d} s. \end{equation*} Thus, we have \begin{align*} \| u \circ \phi - u \circ \varphi \|_{L_1(\Omega)} & = \int_\Omega \left| \int_0^1 u'(\Phi^s(x)) \partial_s \Phi^s(x) \, \text{d} s \right| \text{d} x \\ & \le \int_\Omega \int_0^1 |u'(\Phi^s(x))| |\partial_s \Phi^s(x)| \, \text{d} s \, \text{d} x \\ & \le \sup_{\substack{0 \le s \le 1 \\ x \in \Omega}} |\partial_s \Phi^s(x)| \int_\Omega \int_0^1 |u'(\Phi^s(x))| \, \text{d} s \, \text{d} x \\ & \le C \|\phi - \varphi\|_{L_\infty(\Omega)} \int_0^1 \int_\Omega |u'(\Phi^s(x))| \, \text{d} x \, \text{d} s. \end{align*} Using the pushforward measure $\Phi^s_* \lambda$ and its bound \eqref{eq:pushforward-bound} we conclude that \begin{equation} \int_\Omega |u'(\Phi^s(x))| \, \text{d} x = \int_\Omega |u'(y)| \, \text{d} \Phi^s_* \lambda(y) \le c \int_\Omega |u'(y)| \, \text{d} y. \label{eq:perturbation-measure-proof-1} \end{equation} Combining the last two estimates and using that $\int_0^1 \, \text{d} s = 1$ yields \begin{equation*} \| u \circ \phi - u \circ \varphi \|_{L_1(\Omega)} \le c C \|\phi - \varphi\|_{L_\infty(\Omega)} \int_\Omega | u'(y)| \, \text{d} y, \end{equation*} which is equivalent to the estimate \eqref{eq:perturbation-measure} we wish to prove. Finally, we extend the estimate to all $u \in BV(\Omega)$ by using a density argument. To this end, note that for all $\epsilon > 0$ there is a $u_\epsilon \in C^1(\Omega)$ such that \begin{align*} \|u - u_\epsilon\|_{L_1(\Omega)} & \le \epsilon & \|u_\epsilon'\|_{L_1(\Omega)} & \le \|u\|_{BV(\Omega)} + \epsilon. \end{align*} Thus, to apply a density argument, it suffices to bound $\|u \circ \phi - u_\epsilon \circ \phi\|_{L_1(\Omega)}$ and $\|u \circ \varphi - u_\epsilon \circ \varphi\|_{L_1(\Omega)}$.
Analogously to \eqref{eq:perturbation-measure-proof-1} we obtain \begin{align*} \|u \circ \phi - u_\epsilon \circ \phi\|_{L_1(\Omega)} & = \int_\Omega |u(\Phi^0(x)) - u_\epsilon(\Phi^0(x))| \, \text{d} x \\ & = \int_\Omega |u(y) - u_\epsilon(y)| \, \text{d} \Phi^0_* \lambda(y) \\ & \le c \int_\Omega |u(y) - u_\epsilon(y)| \, \text{d} y \\ & \le c \|u - u_\epsilon\|_{L_1(\Omega)}. \end{align*} The bound for $\|u \circ \varphi - u_\epsilon \circ \varphi\|_{L_1(\Omega)}$ follows analogously, which completes the proof. \end{proof} If the transforms $\Phi^s(x)$ can be chosen to be diffeomorphisms, the pushforward measure is explicitly given by the usual transformation rule \begin{equation} \Phi^s_* \lambda(A) = \int_A |\det D_x (\Phi^s)^{-1}(x)| \, \text{d} \lambda(x) \label{eq:pushforward-explicit} \end{equation} so that we obtain the following corollary. \begin{corollary} \label{cor:perturbation-diffeomorphism} Assume that $u \in BV(\Omega)$ and that $\Phi^s$, $0 \le s \le 1$ given by \eqref{eq:homotopy} are diffeomorphisms for fixed $s$ and differentiable with respect to $s$ such that \begin{equation*} \begin{aligned} |\det D_x (\Phi^s)^{-1}(x)| & \le c & & \text{for all } x \in \Omega \text{ and } 0 \le s \le 1 \end{aligned} \end{equation*} and \begin{equation*} \sup_{\substack{0 \le s \le 1 \\ x \in \Omega}} |\partial_s \Phi^s(x)| \le C \|\phi - \varphi\|_{L_\infty(\Omega)} \end{equation*} for constants $c, C \ge 0$. Then we have \begin{equation*} \|u \circ \phi - u \circ \varphi \|_{L_1(\Omega)} \le c C \|u\|_{BV(\Omega)} \|\phi - \varphi\|_{L_\infty(\Omega)}. \end{equation*} \end{corollary} \begin{proof} We just have to show the bound \eqref{eq:pushforward-bound} of the pushforward.
By its explicit formula \eqref{eq:pushforward-explicit} we have \begin{equation*} \Phi^s_* \lambda(A) = \int_A |\det D_x (\Phi^s)^{-1}(x)| \, \text{d} \lambda(x) \le c \lambda(A) \end{equation*} for all $A \in \mathcal{A}$ and $0 \le s \le 1$, so that the corollary follows from Lemma \ref{lemma:perturbation-measure}. \end{proof} Let us now consider the transformed snapshot interpolation \eqref{eq:interpol-transform} again. Assume that there is a transform $\phi(\mu, \eta)(x)$ that aligns the jumps and kinks exactly so that we obtain high convergence rates in Proposition \ref{prop:outer-error}. In general, we have to find a finite approximation to this exact transform, say $\phi_m(\mu, \eta)(x)$. Note that according to \eqref{eq:interpol-transform} we only need to know this function for the $|\mathcal{P}_n|$ nodes $\eta \in \mathcal{P}_n$, so that we have to approximate $|\mathcal{P}_n|$ functions depending on $x \in \Omega$ and a parameter $\mu \in \mathcal{P}$. Of course, this is exactly the same problem as approximating a function $u(x,\mu)$, which is our initial problem; however, the regularity of $\phi(\mu,\eta)(x)$ can be much more favorable, as we have seen in the introduction in Section \ref{sec:transformed-interpolation} or as we will see in Section \ref{sec:transform-by-characteristics} below. Therefore, we can apply a more classical polyadic decomposition based approach to find an approximation $\phi_m(\mu, \eta)(x)$ of the inner transform. Replacing the exact transform by the approximate one in the transformed snapshot interpolation yields \begin{equation} u(x,\mu) \approx u_{n,m}(x,\mu) := \sum_{\eta \in \mathcal{P}_n} \ell_\eta(\mu) u(\phi_m(\mu, \eta)(x),\eta). \label{eq:interpol-inner-outer} \end{equation} Combining the error estimate of Proposition \ref{prop:outer-error} with the perturbation result of Lemma \ref{lemma:perturbation-measure}, we arrive at the following proposition.
\begin{proposition} \label{prop:inner-outer-error} Assume that $u \in BV(\Omega)$ and that there are curves $\Phi^s(\mu, \eta)(x)$, $0 \le s \le 1$ for $x \in \Omega$, $\mu \in \mathcal{P}$, $\eta \in \mathcal{P}_n$ measurable and differentiable with respect to $s$ such that \begin{align*} \Phi^0(\mu,\eta)(x) & = \phi(\mu, \eta)(x) & \Phi^1(\mu,\eta)(x) & = \phi_m(\mu, \eta)(x) \end{align*} and \begin{equation*} \begin{aligned} \Phi^s(\mu,\eta)_* \lambda(A) & \le c \lambda(A) & & \text{for all } A \in \mathcal{A} \text{ and } 0 \le s \le 1 \end{aligned} \end{equation*} and \begin{equation*} \sup_{\substack{0 \le s \le 1 \\ x \in \Omega}} |\partial_s \Phi^s(\mu, \eta)(x)| \le C \|\phi(\mu, \eta) - \phi_m(\mu, \eta)\|_{L_\infty(\Omega)} \end{equation*} for constants $c, C \ge 0$. Furthermore, let $u_{n,m}(x,\mu)$ be defined by the transformed snapshot interpolation \eqref{eq:interpol-inner-outer}. Then for all $\mu \in \mathcal{P}$ we have the error estimate \begin{align*} \|u(\cdot, \mu) - u_{n,m}(\cdot, \mu)\|_{L_1} & \le \Lambda_n \Big\| \inf_{p \in \mathbb{P}^n} |v(\cdot, \mu) - p| \Big\|_{L_1} \\ & \quad + c C \Lambda_n \max_{\eta \in \mathcal{P}_n} \|u(\cdot, \eta)\|_{BV(\Omega)} \max_{\eta \in \mathcal{P}_n} \| \phi(\mu, \eta) - \phi_m(\mu, \eta) \|_{L_\infty(\Omega)}, \end{align*} where $\Lambda_n$ is the Lebesgue constant \eqref{eq:lebesgue-constant}. \end{proposition} \begin{proof} We have \begin{equation*} \|u(\cdot, \mu) - u_{n,m}(\cdot, \mu)\|_{L_1(\Omega)} \le \|u(\cdot, \mu) - u_n(\cdot, \mu)\|_{L_1(\Omega)} + \|u_n(\cdot, \mu) - u_{n,m}(\cdot, \mu)\|_{L_1(\Omega)}. \end{equation*} With Proposition \ref{prop:outer-error} the first term can be estimated by \begin{equation*} \|u(\cdot, \mu) - u_n(\cdot, \mu)\|_{L_1} \le \Lambda_n \Big\| \inf_{p \in \mathbb{P}^n} |v(\cdot, \mu) - p| \Big\|_{L_1}.
\end{equation*} In order to estimate the second term, using the definition of the Lebesgue constant and Lemma \ref{lemma:perturbation-measure}, we obtain \begin{align*} \|u_n(\cdot, \mu) - u_{n,m}(\cdot, \mu)\|_{L_1(\Omega)} & \le \sum_{\eta \in \mathcal{P}_n} |\ell_\eta(\mu)| \Big\| u(\phi(\mu, \eta)(x),\eta) - u(\phi_m(\mu, \eta)(x),\eta) \Big\|_{L_1(\Omega)} \\ & \le \Lambda_n \max_{\eta \in \mathcal{P}_n} \| u(\phi(\mu, \eta)(x),\eta) - u(\phi_m(\mu, \eta)(x),\eta) \|_{L_1(\Omega)} \\ & \le c C \Lambda_n \max_{\eta \in \mathcal{P}_n} \|u(\cdot, \eta)\|_{BV(\Omega)} \max_{\eta \in \mathcal{P}_n} \| \phi(\mu, \eta) - \phi_m(\mu, \eta) \|_{L_\infty(\Omega)}. \end{align*} Combining all three estimates completes the proof. \end{proof} If the $\mu$ dependence of $\phi(\mu, \eta)$ is smooth, we can use a polyadic decomposition for its approximation. Although there are much more sophisticated methods, possibly the simplest choice is a linear interpolation \begin{equation} \phi_m(\mu, \eta)(x) = \sum_{\nu \in \hat{\mathcal{P}}_m} \hat{\ell}_{\nu}(\mu) \phi(\nu,\eta)(x) \label{eq:transform-interpolation} \end{equation} where $\hat{\ell}_\nu$ are Lagrange basis polynomials with respect to nodes in some finite set $\hat{\mathcal{P}}_m$. With this inner approximation, the error bound of Proposition \ref{prop:inner-outer-error} depends on the smoothness of the transformed snapshot $v_\mu(x, \eta)$ with respect to $\eta$ and of the transforms $(x,\mu) \to \phi(\mu, \eta)(x)$ with respect to $\mu$. If both dependencies are analytic, for suitable interpolation points the error decays exponentially. \section{Inner transforms by characteristics} \label{sec:transform-by-characteristics} We still have to choose an inner transform $\phi(\mu, \eta)$ such that the transformed snapshots $v_\mu(x, \eta)$ are as smooth in $\eta$ as possible. One obvious idea that comes to mind is to somehow make use of characteristics.
In this section, we discuss some problems that arise from that approach for the Riemann problem for Burgers' equation. In our example, the parameter is the height of the jump in the initial condition, which yields the parametric PDE \begin{equation*} \begin{aligned} u_t + \left(\frac{1}{2} u^2 \right)_x & = 0 & \text{in } \mathbb{R} \times \mathbb{R}^+ \\ u(x,0) = g_\mu(x) & = \left\{ \begin{array}{ll} \mu & x \le 0 \\ 0 & x > 0 \end{array} \right. & \text{for } t = 0. \end{aligned} \end{equation*} In addition, we assume that $\mu > 0$ so that the solution has a shock along the curve $x = \frac{1}{2} \mu t$. To write down an explicit solution formula, let $\chi_\mu(x,t)$ be the origin (at time $t=0$) of the characteristic passing through the point $(x,t)$. It is easily seen to be \begin{equation} \chi_\mu(x,t) = \left\{ \begin{array}{ll} x - \mu t & x \le \frac{1}{2} \mu t \\ x & x > \frac{1}{2} \mu t. \end{array} \right. \label{eq:backward-characteristic} \end{equation} Because $u(x,t,\mu)$ is constant along characteristics, we obtain \begin{equation} u(x, t, \mu) = g_\mu(\chi_\mu(x,t)). \label{eq:burgers-solution} \end{equation} The previous discussion aside, a simple idea for an approximation scheme is to encode or approximate $g_\mu(x)$ and the characteristics $\chi_\mu(x,t)$ and then use the exact solution formula \eqref{eq:burgers-solution} to reconstruct $u(x,t,\mu)$. In spirit, this is similar to our original idea \eqref{eq:piecewise-constant} where the snapshots $u(\cdot, \eta)$ are replaced by $g_\eta(\cdot)$ and the transform $\phi(\mu, \eta)$ by the characteristic $\chi_\mu$. However, from the explicit formula \eqref{eq:backward-characteristic} for $\chi_\mu$, we see that this function has a parameter dependent jump.
Thus, in general, we have to face the same difficulties for approximating the parameter dependent characteristic $(x,t,\mu) \to \chi_\mu(x,t)$ as for the original solution $(x,t,\mu) \to u(x,t,\mu)$ so that there is no progress with respect to this issue. If we want to use characteristics to define the inner transform $\phi(\mu, \eta)$ of the transformed snapshot interpolation, the problems are even more complicated. As for \eqref{eq:burgers-solution}, we can follow the characteristics backward in time, but because we do not evaluate the initial condition $g_\mu$ but a snapshot $u(x,t,\eta)$, we then follow the characteristics forward in time with a different parameter. To this end, let $\varphi_\mu(y,t)$ be the position of the characteristic at time $t$, starting at the initial position $y$ at time $t=0$: \begin{equation} \varphi_\mu(y,t) = \left\{ \begin{array}{ll} y + \mu t & y \le - \frac{1}{2} \mu t \\ \frac{1}{2} \mu t & - \frac{1}{2} \mu t \le y < \frac{1}{2} \mu t \\ y & \frac{1}{2} \mu t < y. \end{array} \right. \label{eq:forward-characteristic} \end{equation} Then we can transform one solution for parameter $\eta$ into a solution for parameter $\mu$ by \begin{equation} u(x, t, \mu) = g_\mu(\chi_\mu(x,t)) = \frac{\mu}{\eta} g_\eta(\chi_\mu(x,t)) = \frac{\mu}{\eta} u(\varphi_\eta(\chi_\mu(x,t),t), t, \eta). \label{eq:solution-forward-backward} \end{equation} However, this formula is only correct for $\mu \ge \eta$. The reason is that the interval of points at $t=0$ that eventually end up in the shock at time $t$ is strictly larger for larger parameters. Thus, for $\mu < \eta$ there is an interval $I$ around the shock location of $\mu$ for which $\chi_\mu(I,t)$ is mapped into the shock location $\frac{1}{2} \eta t$ by the forward characteristic $\varphi_\eta$.
Thus, the right hand side of \eqref{eq:solution-forward-backward} has only one single value in the interval $I$ or is undefined, whereas the left hand side has two different values, so that the formula is not correct. Nonetheless, for $\mu \ge \eta$, we can define the transform \begin{equation} \phi(\mu, \eta)(x,t) = \varphi_\eta(\chi_\mu(x,t),t) \label{eq:transform-characteristics} \end{equation} so that by \eqref{eq:solution-forward-backward} the transformed snapshot is \begin{equation*} v_\mu(x, t, \eta) = u(\phi(\mu, \eta)(x,t), \eta) = u(\varphi_\eta(\chi_\mu(x,t),t), t, \eta) = \frac{\eta}{\mu} u(x,t,\mu), \end{equation*} which is clearly smooth and in fact even linear in $\eta$. Using $\mu \ge \eta$, it is easy to verify that $\phi(\mu, \eta)$ is \begin{align*} \phi(\mu, \eta)(x,t) & = \left\{ \begin{array}{ll} x - (\mu-\eta)t & x \le \frac{1}{2} \mu t \\ x & x > \frac{1}{2} \mu t. \end{array} \right. \end{align*} Note that for our approximation scheme \eqref{eq:interpol-transform} we need to know the function $(x,\mu) \to \phi(\mu, \eta)(x)$ for finitely many $\eta \in \mathcal{P}$. Again, this function has a $\mu$ dependent jump so that its approximation poses the same difficulties already encountered for $u(x,t,\mu)$ itself. However, we are not obliged to use the transform \eqref{eq:transform-characteristics} based on characteristics. By noting that the shock location is $\frac{1}{2} \mu t$, simply shifting the whole solution in $x$-direction by \begin{equation} \phi(\mu,\eta)(x,t) = x - \frac{1}{2} (\mu - \eta)t \label{eq:shift} \end{equation} aligns the shocks, i.e. the transformed snapshot $u(\phi(\mu,\eta)(x,t),t,\eta)$ has its shock in the location $\frac{1}{2} \mu t$, which is the shock location for parameter $\mu$. In addition, the interpolation condition \eqref{eq:id} is obviously satisfied.
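A minimal numerical check of this alignment (a sketch with made-up values; the sign of the shift is chosen so that the $\eta$-shock indeed lands at $\frac{1}{2}\mu t$):

```python
import numpy as np

def u(x, t, mu):
    # exact Riemann solution: state mu to the left of the shock x = mu*t/2
    return np.where(x <= 0.5 * mu * t, mu, 0.0)

def phi_shift(x, t, mu, eta):
    # shift transform: moves the eta-shock at eta*t/2 onto the mu-shock at mu*t/2
    return x - 0.5 * (mu - eta) * t

mu, eta, t = 2.0, 0.5, 1.0                 # works for mu < eta as well
x = np.linspace(-2.0, 3.0, 101)
v = u(phi_shift(x, t, mu, eta), t, eta)    # transformed snapshot v_mu(x,t,eta)
assert np.allclose(v, (eta / mu) * u(x, t, mu))
```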
Note that by \eqref{eq:backward-characteristic} and \eqref{eq:burgers-solution} the parametric solution is \begin{equation*} u(x,t,\mu) = \left\{ \begin{array}{ll} \mu & x \le \frac{1}{2} \mu t \\ 0 & x > \frac{1}{2} \mu t, \end{array} \right. \end{equation*} so that the transformed snapshot becomes \begin{equation*} v_\mu(x,t,\eta) = \left\{ \begin{array}{ll} \eta & x \le \frac{1}{2} \mu t \\ 0 & x > \frac{1}{2} \mu t \end{array} \right. = \frac{\eta}{\mu} u(x,t,\mu), \end{equation*} which is the same as for the characteristics based transform, but now for all $\mu, \eta \in \mathcal{P}$. Recall that the error estimate of Proposition \ref{prop:outer-error} just requires smoothness with respect to $\eta$, which is obviously the case. However, as opposed to the characteristic construction, also $\phi(\mu,\eta)(x)$ is smooth in $\mu$ so that polyadic decomposition based methods yield accurate approximations of $\phi$ at low cost. \section{Optimizing the interpolation error} \label{sec:optimization} \subsection{Generalized gradients} \label{sec:generalized-gradients} We still need to find a realistic way to actually compute the inner transform $\phi(\mu, \eta)$. Similar to the construction of reduced bases, PODs or neural networks, we aim at finding an inner transform $\phi$ that minimizes the approximation error. To this end, we measure the error in the $\sup$-norm with respect to the parameter, which is typical for reduced basis methods but not mandatory. It follows that the overall error is given by \begin{equation} \sigma_\mathcal{P}(\phi) := \sup_{\mu \in \mathcal{P}} \sigma_\mu(\phi), \label{eq:sup-error} \end{equation} where \begin{equation*} \sigma_\mu(\phi) := \|u(\cdot, \mu) - u_n(\cdot, \mu; \phi)\|_{L_1(\Omega)} \end{equation*} is the error for one fixed parameter. To make the dependence on the inner transform more explicit, in this section we denote the transformed snapshot interpolation \eqref{eq:interpol-transform} by $u_n(x, \mu; \phi) = u_n(x, \mu)$.
In practice it is not possible to minimize the error $\sigma_\mathcal{P}(\phi)$ directly because it would require the knowledge of the functions $u(\cdot, \mu)$ for all parameters $\mu \in \mathcal{P}$. To this end, we only assume that we know the errors $\sigma_\mu(\phi)$ for a finite training sample $\mu \in \mathcal{T} \subset \mathcal{P}$, so that the overall error \eqref{eq:sup-error} is replaced by the training error \begin{equation} \sigma_\mathcal{T}(\phi) := \sup_{\mu \in \mathcal{T}} \sigma_\mu(\phi). \label{eq:training-error} \end{equation} Although surrogates for the training error are available for some singularly perturbed problems \cite{NguyenRozzaPatera2009, YanoPateraUrban2014, DahmenPleskenWelper2014, Dahmen2015}, we leave these for future research. Instead, we resort to an explicit knowledge of some training snapshots $u(\cdot, \mu)$, $\mu \in \mathcal{T}$, in addition to the snapshots that are used for the reconstruction \eqref{eq:interpol-transform} itself. In contrast to the reduced basis method this severely limits the size of the training sample. Nevertheless, in Section \ref{sec:numerical-experiments} we consider examples which yield good results with roughly twice as many training snapshots as reconstruction snapshots, so that the additional burden of the training samples is reasonable. Because we are explicitly interested in non-smooth functions $u$, the error $\sigma_\mathcal{T}(\phi)$ is a non-trivial objective function to minimize. The next proposition shows that despite possible jumps of $u$, the error is nonetheless Lipschitz continuous. The assumptions of this proposition are essentially the same as for Lemma \ref{lemma:perturbation-measure} and are commented on right after it.
\begin{proposition} \label{prop:error-lipschitz} Assume that $u \in BV(\Omega)$ and that there are curves $\Phi^s(\mu, \eta)(x)$, $0 \le s \le 1$, for $x \in \Omega$, $\mu \in \mathcal{P}$, $\eta \in \mathcal{P}_n$, measurable and differentiable with respect to $s$, such that \begin{align} \Phi^0(\mu, \eta)(x) & = \phi(\mu, \eta)(x), & \Phi^1(\mu, \eta)(x) & = \varphi(\mu, \eta)(x), \label{eq:homotopy-transform} \end{align} and \begin{equation*} \begin{aligned} \Phi^s(\mu, \eta)_* \lambda(A) & \le c \lambda(A) & & \text{for all } A \in \mathcal{A} \text{ and } 0 \le s \le 1 \end{aligned} \end{equation*} and \begin{equation*} \sup_{\substack{0 \le s \le 1 \\ x \in \Omega}} |\partial_s \Phi^s(\mu,\eta)(x)| \le C \|\phi(\mu, \eta) - \varphi(\mu, \eta)\|_{L_\infty(\Omega)} \end{equation*} for constants $c, C \ge 0$. Then we have \begin{equation} |\sigma_\mathcal{T}(\phi) - \sigma_\mathcal{T}(\varphi)| \le cC \Lambda_n \sup_{\eta \in \mathcal{P}_n} \|u(\cdot, \eta)\|_{BV(\Omega)} \sup_{\substack{\mu \in \mathcal{P} \\ \eta \in \mathcal{P}_n}} \left\| \phi(\mu, \eta) - \varphi(\mu, \eta)\right\|_{L_\infty(\Omega)}. \label{eq:error-lipschitz} \end{equation} \end{proposition} \begin{proof} Note that the triangle inequality implies that \begin{equation} |\sigma_\mathcal{T}(\phi) - \sigma_\mathcal{T}(\varphi)| = \left|\sup_{\mu \in \mathcal{T}} \sigma_\mu(\phi) - \sup_{\mu \in \mathcal{T}} \sigma_\mu(\varphi)\right| \le \sup_{\mu \in \mathcal{T}} |\sigma_\mu(\phi) - \sigma_\mu(\varphi)|, \label{eq:error-lipschitz-1} \end{equation} so that it is sufficient to bound $|\sigma_\mu(\phi) - \sigma_\mu(\varphi)|$.
To this end note that \begin{align*} |\sigma_\mu(\phi) - \sigma_\mu(\varphi)| & = \Big| \|u(\cdot, \mu) - u_n(\cdot, \mu; \phi)\|_{L_1(\Omega)} - \|u(\cdot, \mu) - u_n(\cdot, \mu; \varphi)\|_{L_1(\Omega)} \Big| \\ & \le \Big\| \big[ u(\cdot, \mu) - u_n(\cdot, \mu; \phi) \big] - \big[ u(\cdot, \mu) - u_n(\cdot, \mu; \varphi) \big] \Big\|_{L_1(\Omega)} \\ & \le \left\| \sum_{\eta \in \mathcal{P}_n} \ell_\eta(\mu) \big[u(\phi(\mu, \eta)(x),\eta) - u(\varphi(\mu, \eta)(x),\eta) \big] \right\|_{L_1(\Omega)} \\ & \le \sum_{\eta \in \mathcal{P}_n} |\ell_\eta(\mu)| \left\| u(\phi(\mu, \eta)(x),\eta) - u(\varphi(\mu, \eta)(x),\eta) \right\|_{L_1(\Omega)}. \end{align*} Due to the given assumptions, we can now apply Lemma \ref{lemma:perturbation-measure} to conclude that \begin{align*} |\sigma_\mu(\phi) - \sigma_\mu(\varphi)| & \le cC \sum_{\eta \in \mathcal{P}_n} |\ell_\eta(\mu)| \|u(\cdot, \eta)\|_{BV(\Omega)} \left\| \phi(\mu, \eta) - \varphi(\mu, \eta)\right\|_{L_\infty(\Omega)} \\ & \le cC \Lambda_n \sup_{\eta \in \mathcal{P}_n} \|u(\cdot, \eta)\|_{BV(\Omega)} \sup_{\eta \in \mathcal{P}_n} \left\| \phi(\mu, \eta) - \varphi(\mu, \eta)\right\|_{L_\infty(\Omega)}. \end{align*} With \eqref{eq:error-lipschitz-1} this yields the claimed estimate \eqref{eq:error-lipschitz}. \end{proof} In order to optimize the training error $\sigma_\mathcal{T}(\phi)$, we search for a minimizer in a set of candidate transforms $\phi \in \Phi \subset X$ in a Banach space $X$. The space of continuous functions $C(\mathcal{P} \times \mathcal{P} \times \Omega)$ with the additional restrictions from Proposition \ref{prop:error-lipschitz} seems to be a reasonable choice for $\Phi$ because in the last proposition the transform error is measured in the supremum norm. In general this objective function is not differentiable, so that we cannot rely on standard gradient based optimizers.
However, because $\sigma_\mathcal{T}(\phi)$ is Lipschitz continuous according to the last proposition, we can use optimization methods from non-smooth optimization \cite{Kiwiel1985, BurkeLewisOverton2005} relying on the generalized Clarke gradient \cite{Clarke2013}. To this end, for a direction $v \in X$, we first define the generalized directional derivative \begin{equation*} \sigma^\circ(\phi; v) := \limsup_{\varphi \to \phi; \, h \downarrow 0} \frac{\sigma(\varphi + hv) - \sigma(\varphi)}{h}, \end{equation*} where we suppress the additional $\mathcal{T}$ subscript of $\sigma$ for simplicity. Note that this limit is well defined because $\sigma$ is Lipschitz continuous. In order to define a gradient from these directional derivatives, recall that in the differentiable case one can define the gradient $\nabla \sigma \in X^*$ variationally by \begin{equation*} \begin{aligned} \partial_v \sigma & = \dualp{\nabla \sigma, v}, & & \text{for all } v \in X, \end{aligned} \end{equation*} where $X^*$ is the dual space of $X$ and $\dualp{\cdot, \cdot}$ the corresponding dual pairing. Likewise, in the Lipschitz continuous case we define the generalized gradient by \begin{equation*} \begin{aligned} \partial_C \sigma = \{g \in X^* | \, \sigma^\circ(\phi; v) \ge \dualp{g, v}, \text{ for all } v \in X \}. \end{aligned} \end{equation*} In case $\sigma$ is differentiable this reduces to the standard gradient and in case $\sigma$ is convex to the subgradient. In the literature on non-smooth optimization one can find several algorithms to minimize $\sigma_\mathcal{T}(\phi)$ based on this generalized gradient. For some first numerical tests, we use a simple subgradient method: \begin{align} \phi^{k+1} & = \phi^{k} - h_k \Delta^k, & \Delta^k & \in \partial_C \sigma(\phi^k), & \phi^0 = Id, \label{eq:subgrad} \end{align} where $Id(x) = x$ is the identity transform. Note that this method does not use the full generalized gradient $\partial_C \sigma$ but just one element of it in each step.
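As an illustration of the subgradient iteration with a diminishing step-size rule (a toy sketch on the non-smooth model function $f(x)=|x-1|$, not the actual training error):

```python
import numpy as np

def f(x):
    # non-smooth toy objective with a kink at the minimizer x = 1
    return abs(x - 1.0)

def one_subgradient(x):
    # a single element of the Clarke subdifferential of f at x
    return np.sign(x - 1.0)

alpha, beta = 0.5, 0.6
x = 0.0                          # plays the role of the initial guess phi^0 = Id
best = f(x)
for k in range(2000):
    h = alpha / (k + 1) ** beta  # diminishing step size h_k
    x -= h * one_subgradient(x)  # step along the negative subgradient
    best = min(best, f(x))

assert best < 1e-2               # the iterates approach the kink at x = 1
```

Note that the iterates oscillate around the kink with an amplitude of the order of $h_k$, which is why $h_k \to 0$ with $\sum_k h_k = \infty$ is needed.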
This is typical for non-smooth/convex optimization methods because usually the full generalized gradient is not known. Since non-smooth optimization problems often have kinks at the minimum itself, we cannot use standard techniques to control the step size and use a fixed rule \begin{equation} \begin{aligned} h_k & = \alpha k^{-\beta}, & \alpha & > 0, & 0 & < \beta < 1 \end{aligned} \label{eq:step-size} \end{equation} instead. This simple method converges for convex functions \cite{Bertsekas1999, Kiwiel1985} and yields good results in the numerical experiments below. More sophisticated methods including convergence analysis for non-convex problems are available, see e.g. \cite{Kiwiel1985, BurkeLewisOverton2005}. \subsection{Global minima?} Because the objective function $\sigma_\mathcal{T}(\phi)$ is non-convex in general, we must make sure that we do not end up in a suboptimal local minimum. In this section, we discuss some preliminary ideas to overcome this issue for a simple class of 1d problems. To this end, let us consider piecewise constant functions \begin{equation} \begin{aligned} u(x,\mu) & = \left\{ \begin{array}{ll} u_0, & \text{for } x < x_1(\mu) \\ u_i, & \text{for } x_i(\mu) \le x < x_{i+1}(\mu) \\ u_n, & \text{for } x_n(\mu) \le x \end{array} \right. \end{aligned} \label{eq:piecewise-const} \end{equation} with smooth parameter dependent jump locations $x_i(\mu)$. We assume that the order $x_1(\mu) < \dots < x_n(\mu)$ never changes and that the jump locations are well separated, i.e., there is a constant $L > 0$ with $|x_i(\mu) - x_{i-1}(\mu)| \ge L$, $i=2, \dots, n$. \paragraph{Transformed snapshot interpolation and training error} Let us first state the transformed snapshot interpolation for these functions and the optimization problem to find the inner transform. Because $u(x,\mu)$ is piecewise constant in $x$, for transforms $\phi(\mu, \eta)$ that perfectly align the discontinuities the transformed snapshots $v_\mu(x, \eta)$ are constant in $\eta$.
Therefore, it is sufficient to confine ourselves to one single snapshot for the outer interpolation, say $u(x,\mu_0)$ with node $\mathcal{P}_n = \{\mu_0\}$, so that we obtain \begin{equation} u(x,\mu) = v_\mu(x,\mu_0) = u(\phi(\mu, \mu_0), \mu_0). \label{eq:tsi-constant-u} \end{equation} It follows that we have to compute inner transforms $\phi(\mu, \mu_0)$ for all $\mu \in \mathcal{P}$ and the single fixed node $\mu_0$. To this end, we assume that we know additional training snapshots $u(\cdot, \mu_1), \dots, u(\cdot, \mu_m)$ for interpolation points $\mu_1 < \dots < \mu_m$, with $\mu_0 \le \mu_1$ for simplicity. Because we just use one snapshot for the outer interpolation, the training error \eqref{eq:training-error} reduces to \begin{equation} \sigma_\mathcal{T}(\phi) = \sup_{1 \le i \le m} \sigma_{\mu_i}(\phi) = \sup_{1 \le i \le m} \|u(\cdot, \mu_i) - u(\phi(\mu_i, \mu_0), \mu_0)\|_{L_1(\Omega)}. \end{equation} Since all transforms $\phi(\mu_i, \mu_0)$, $1 \le i \le m$, are independent of each other, we can further simplify this and optimize for each transform $\phi(\mu_i, \mu_0)$ individually, which yields \begin{equation} \phi(\mu_i, \mu_0) = \argmin_{\varphi} \|u(\cdot, \mu_i) - u(\varphi(\cdot), \mu_0)\|_{L_1(\Omega)} \label{eq:opt-i} \end{equation} for $i=1, \dots, m$. Finally, with $\phi(\mu_0, \mu_0)(x) = x$ to ensure the interpolation condition \eqref{eq:id}, as in \eqref{eq:transform-interpolation} we can define the full transform by an interpolation \begin{equation*} \phi_m(\mu, \mu_0) = \sum_{i=0}^m \hat{\ell}_i(\mu) \phi(\mu_i, \mu_0), \end{equation*} where $\hat{\ell}_i$ are the Lagrange polynomials for the nodes $\mu_0, \dots, \mu_m$. \paragraph{Counterexample: Local minima} It remains to solve the $m$ optimization problems \eqref{eq:opt-i}. Already for this simple problem, optimization methods relying on local search for updates can easily be fooled into a non-optimal local minimum.
To this end, consider the example in Figure \ref{fig:example-local} defined by \begin{align} u(x,\mu) & = \chi_{I_1(\mu)} + \chi_{I_2(\mu)}, & I_1(\mu) & = [\mu, \mu+1), & I_2(\mu) & = [\mu+4, \mu+5), \label{eq:two-boxes} \end{align} where $\chi$ is the characteristic function and the parameter shifts the entire function. \begin{figure} \caption{Functions \eqref{eq:two-boxes}.} \label{fig:example-local} \end{figure} The snapshots $\mu_0$ and $\mu_2$ in Figure \ref{fig:example-local} are arranged such that the first interval of $u(x,\mu_2)$ intersects the second one of $u(x,\mu_0)$. Therefore the error $\sigma_{\mu_2}(\phi)$ is simply the area of the mismatch between the overlapping intervals plus the area of the two mismatched intervals. Note in particular that any small perturbation of $\phi(\mu_2, \mu_0)$ does not change the latter error contribution. It follows that optimization schemes exclusively relying on local information are fooled into a local minimum that matches the (wrong) intersecting intervals. However, the situation changes when the difference between the snapshot parameter $\mu_0$ and the training parameter $\mu$ is small, as e.g. for $\mu_1$ in Figure \ref{fig:example-local}. In this case there are no mismatched intervals, and intuitively already simple subgradient based optimization schemes converge to the correct global minimum, perfectly aligning the two functions. That this is in fact true is discussed in the following. \paragraph{Local convexity} We confine ourselves to spatially monotone transforms $\phi(\mu, \mu_0)$ in agreement with our assumption that the order of the jumps $x_i(\mu)$ does not change. Ideally, we search for transforms which exactly match the jump locations \begin{equation*} \phi(\mu, \mu_0)(x_i(\mu)) = x_i(\mu_0) \Leftrightarrow \phi(\mu,\mu_0)^{-1}(x_i(\mu_0)) = x_i(\mu).
\end{equation*} Practically, we have to deal with perturbations, so that the jumps only approximately match \begin{equation*} \phi(\mu,\mu_0)^{-1}(x_i(\mu_0)) \approx x_i(\mu). \end{equation*} If this matching error is sufficiently small compared to the minimal jump distance $L$, such that only adjacent intervals of $u(\cdot, \mu)$ and $v_\mu(x,\mu_0)$ overlap, the training error simplifies to \begin{equation} \sigma_\mu(\phi) = \sum_{i=1}^n |u_i - u_{i-1}| |x_i(\mu) - \phi(\mu,\mu_0)^{-1}(x_i(\mu_0))|. \label{eq:err-piecewise-const} \end{equation} This error is convex in $\phi(\mu, \mu_0)^{-1}$, which is not surprising because we assume that we are already close to a minimum. Whereas in principle the convexity allows us to compute optimal transforms, this is not yet very practical since it requires very good initial values. However, the identity transform $id(x) = x$ satisfies \begin{equation} |x_i(\mu) - id^{-1}(x_i(\mu_0))| = |x_i(\mu) - x_i(\mu_0)|, \label{eq:err-id} \end{equation} which is sufficiently small to guarantee the error representation \eqref{eq:err-piecewise-const} for $\mu$ sufficiently close to $\mu_0$, so that $id$ can be used as an initial value in that case. Formally, in order to obtain a convex optimization problem, we augment the original training error minimization \eqref{eq:opt-i} with the following constraints \begin{equation} \begin{aligned} & \sigma_\mu(\phi) \to \min \\ & \phi(\mu, \mu_0)^{-1} \in C(\Omega) \text{ strictly monotonically increasing} \\ & |x_i(\mu_0) - \phi(\mu,\mu_0)^{-1}(x_i(\mu_0))| \le B, \quad i=0, \dots, n \end{aligned} \end{equation} for some constant $B>0$. Both constraints are clearly convex in $\phi(\mu, \mu_0)^{-1}$. To show convexity of the objective function, note that by the triangle inequality we have \begin{equation*} |x_i(\mu) - \phi(\mu,\mu_0)^{-1}(x_i(\mu_0))| \le |x_i(\mu) - x_i(\mu_0)| + |x_i(\mu_0) - \phi(\mu,\mu_0)^{-1}(x_i(\mu_0))|.
\end{equation*} Thus, for $\mu$ sufficiently close to $\mu_0$ and $B$ sufficiently small, the objective function reduces to \eqref{eq:err-piecewise-const}, which is convex with respect to $\phi(\mu, \mu_0)^{-1}$. According to \eqref{eq:err-id} the identity is allowed by the constraints, so that we know a suitable initial value for iterative optimization methods. Moreover, for a transform $\hat{\phi}(\mu, \mu_0)$ perfectly aligning the discontinuities, we have \begin{equation*} |x_i(\mu_0) - \hat{\phi}(\mu,\mu_0)^{-1}(x_i(\mu_0))| = |x_i(\mu_0) - x_i(\mu)|, \end{equation*} which is also allowed by the constraints. It follows that the optimal error is $\sigma_\mu(\hat{\phi}) = 0$, which is therefore a global minimum. \paragraph{Locality by transitivity} In summary, for $\mu$ sufficiently close to $\mu_0$, we can reliably find a global minimum of the error $\sigma_\mu(\phi)$ by solving a convex optimization problem, possibly with the identity $id(x) = x$ as initial value. But what about larger differences of $\mu$ and $\mu_0$, as e.g. in our counterexample with $\mu_2$ in Figure \ref{fig:example-local}? To this end, recall that the main purpose of the transform $\phi(\mu, \eta)$ is the alignment of jumps and kinks. Thus, if $x(\mu)$ is the location of a jump for parameter $\mu$, we want the transforms to satisfy \begin{equation*} \phi(\mu, \eta)(x(\mu)) = x(\eta). \end{equation*} This condition guarantees that the transformed snapshot $v_\mu(x(\mu), \eta) = u(x(\eta), \eta)$ has a jump at $x(\mu)$, just as the target function $u(x,\mu)$. But this alignment condition is transitive in nature: For three consecutive parameters $\mu_0$, $\mu_1$ and $\mu_2$ we have \begin{equation*} \big( \phi(\mu_1, \mu_0) \circ \phi(\mu_2, \mu_1)\big) (x(\mu_2)) = x(\mu_0) \end{equation*} so that $\phi(\mu_1, \mu_0) \circ \phi(\mu_2, \mu_1)$ correctly aligns the jumps for parameters $\mu_0$ and $\mu_2$.
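This transitivity can be checked in a short sketch for shift-type local transforms (the jump locations below are made up, standing in for some $x(\mu_0) < x(\mu_1) < x(\mu_2) < x(\mu_3)$):

```python
# Illustrative jump locations x(mu_0), ..., x(mu_3)
jumps = [0.0, 0.1, 0.25, 0.37]

def phi_local(i, x):
    # local transform phi(mu_i, mu_{i-1}): shift aligning adjacent jumps
    return x + (jumps[i - 1] - jumps[i])

def phi_composed(i, x):
    # phi(mu_i, mu_0) = phi(mu_1, mu_0) o ... o phi(mu_i, mu_{i-1});
    # the innermost (rightmost) transform is applied first
    for j in range(i, 0, -1):
        x = phi_local(j, x)
    return x

# the composition maps the jump of mu_3 exactly onto the jump of mu_0
assert abs(phi_composed(3, jumps[3]) - jumps[0]) < 1e-12
```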
Because this alignment property is a major requirement for the transform $\phi(\mu_2, \mu_0)$, we can define it in exactly this way: \begin{equation} \phi(\mu_2, \mu_0) := \phi(\mu_1, \mu_0) \circ \phi(\mu_2, \mu_1). \label{eq:compose} \end{equation} Let us apply this construction to our example transformed snapshot interpolation \eqref{eq:tsi-constant-u} where we have one snapshot at $\mu_0$ and $m$ training snapshots at $\mu_1 \le \dots \le \mu_m$. If we enforce transitivity \eqref{eq:compose}, we define \begin{equation*} \phi(\mu_i, \mu_0) := \phi(\mu_{1}, \mu_0) \circ \dots \circ \phi(\mu_i, \mu_{i-1}), \end{equation*} so that we are left with the calculation of the ``local in $\mu$'' transforms $\phi(\mu_i, \mu_{i-1})$. Because they are supposed to align jumps and kinks of $u(\cdot, \mu_i)$ and $u(\cdot, \mu_{i-1})$, we can compute them by solving the optimization problem \begin{equation} \phi(\mu_i, \mu_{i-1}) = \argmin_{\varphi} \|u(\cdot, \mu_i) - u(\varphi(\cdot), \mu_{i-1})\|_{L_1(\Omega)}, \label{eq:convex-opt} \end{equation} which is the same as the original problem \eqref{eq:opt-i} with $\mu_0$ replaced by $\mu_{i-1}$. By choosing sufficiently many training snapshots, we can enforce $|\mu_i - \mu_{i-1}|$ to be sufficiently small such that the optimization problem \eqref{eq:convex-opt} becomes convex. Therefore, we can reliably find global minimizers $\phi(\mu_i, \mu_{i-1})$ and in turn, by \eqref{eq:compose}, a transform that perfectly aligns the discontinuities for the parameters $\mu_0$ and $\mu_i$. This leads to a zero training error $\sigma_\mathcal{T}(\phi)$, which is therefore a global minimum. In summary, for the reconstruction of piecewise constant functions in 1d from one snapshot, we can find $\phi(\mu, \eta)$ as the global minimum of the training error provided there are sufficiently many training snapshots. Of course the argument relies on a couple of assumptions that are not true in more general cases.
Notably, the functions $u(x,\mu)$ might not be piecewise constant, we may want to use more snapshots for the outer interpolation, and the parameter and spatial dimensions can be larger than one. Nonetheless, a transitivity property of the transforms is still realistic. As for the simple example of this section, this allows some locality in the parameter for the minimization of the training error. How to make use of this and to what extent this is helpful for more complicated scenarios is an open problem. \section{Numerical experiments} \label{sec:numerical-experiments} In this section, we consider some first numerical tests of the transformed snapshot interpolation \eqref{eq:interpol-transform}. First, in Section \ref{sec:gaussian}, a 1d example is presented, where the focus is on the approximation rate, while the inner transforms $\phi(\mu, \eta)$ are given explicitly. Then, in Section \ref{sec:burgers} the method is tested with a 2d Riemann problem for Burgers' equation where the solution is explicitly known. Finally, in Section \ref{sec:shock-bubble} the method is applied to a shock bubble interaction, which is a more challenging test case for the optimizer of the inner transform. \subsection{Cut off Gaussian} \label{sec:gaussian} For a first numerical experiment, we consider the parametric function \begin{align} u(x, \mu) & := N \left( \frac{x}{0.4+\mu}-1 \right), & N(x) & := \left\{ \begin{array}{ll} 0.4 \, e^{-7.0 \, x^2} & -1 \le x < -\frac{1}{2} \\ 0 & \text{else} \end{array} \right. \label{eq:normal-cut-off} \end{align} which is a scaled and shifted Gaussian, cut off at a parameter dependent location, see Figure \ref{fig:example-gaussian}. This function is not chosen with a parametric PDE in mind, but it has a parameter dependent jump, and because it is known explicitly, it is well suited for analyzing the performance of the transformed snapshot interpolation.
For the snapshots we consider two alternatives: first, we use the exact function \eqref{eq:normal-cut-off}, and second, we interpolate it by piecewise linear functions on a uniform grid. The latter choice should simulate the outcome of PDE solvers which yield similar approximations of the parametric solution. Due to the simplicity of the example, we choose shifts for the inner transform: \begin{equation*} \phi(\mu, \eta)(x) = x + s(\mu, \eta). \end{equation*} Recall that for the transformed snapshot interpolation \eqref{eq:interpol-inner-outer}, we only need to know $s(\mu, \eta)$ for interpolation points $\mu \in \hat{\mathcal{P}}_m$ and $\eta \in \mathcal{P}_n$. Thus, we can encode $\phi$ by storing $m \times n$ floating point numbers. For all examples, we choose $\mathcal{P}_n = \hat{\mathcal{P}}_n$. In addition, in this example we only consider the approximation properties of the transformed snapshot interpolation. In order not to interfere with the optimizer for actually finding the transform, we use an explicit formula for $s(\mu, \eta)$ that exactly aligns the jumps and consider the optimizer in the numerical examples below. The numerical results are summarized in Figure \ref{fig:example-gaussian}. In case we use exact snapshots, we see a more than polynomial convergence rate. Note that after aligning the snapshots, the transformed snapshots $v_\mu(x,\eta)$ are analytic in $\eta$, so that this behaviour is in line with the error bounds of Proposition \ref{prop:outer-error}. However, for the linearly interpolated snapshots the situation is different. The error first decays and then saturates at a level dependent on the spatial grid resolution. These levels correspond to the maximal error of the snapshots themselves, as shown in Table \ref{table:example-gaussian}. This makes sense because the transformed snapshot interpolation error can hardly be better than the error of the snapshots it relies on.
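For this example the aligning shift can be written down explicitly; the following sketch derives it from the jump location of \eqref{eq:normal-cut-off} (the formula for $s(\mu,\eta)$ is our own derivation, not quoted from the text):

```python
import numpy as np

def N(x):
    # cut-off Gaussian profile
    return np.where((x >= -1.0) & (x < -0.5), 0.4 * np.exp(-7.0 * x ** 2), 0.0)

def u(x, mu):
    return N(x / (0.4 + mu) - 1.0)

def jump(mu):
    # u(., mu) jumps where x/(0.4+mu) - 1 = -1/2, i.e. at x = (0.4+mu)/2
    return 0.5 * (0.4 + mu)

def s(mu, eta):
    # shift aligning the eta-jump with the mu-jump (derived, an assumption)
    return jump(eta) - jump(mu)          # = (eta - mu)/2

mu, eta, eps = 0.3, 0.1, 1e-6
x = jump(mu)
# the transformed snapshot u(x + s, eta) jumps exactly at x = jump(mu)
assert u(x - eps + s(mu, eta), eta) > 0.0
assert u(x + eps + s(mu, eta), eta) == 0.0
```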
For a comparison, Figure \ref{fig:example-gaussian} also contains the error of a simple polynomial interpolation without transform. We see the typical staircasing behaviour and an error that is orders of magnitude worse than the one with the transform. \begin{figure} \caption{Left: Errors of transformed snapshot interpolations of \eqref{eq:normal-cut-off}.} \label{fig:example-gaussian} \end{figure} \begin{table}[htb] \pgfplotstabletypeset[ every head row/.style={ before row={ \hline \multicolumn{1}{|c|}{} & \multicolumn{5}{|c|}{transformed snapshot interpolation} & \multicolumn{1}{c|}{interpolation} \\ \hline }, after row={\hline} }, col sep=comma, columns/{n parameters}/.style={column name=$n$, column type/.add={|}{}}, columns/{Example 0}/.style={column name=$0.1$, column type/.add={|}{}}, columns/{Example 1}/.style={column name=$0.01$, column type/.add={|}{}}, columns/{Example 2}/.style={column name=$0.001$, column type/.add={|}{}}, columns/{Example 3}/.style={column name=$0.0001$, column type/.add={|}{}}, columns/{Example 4}/.style={column name=exact, column type/.add={|}{}}, columns/{id}/.style={column name=exact, column type/.add={|}{|}}, every last row/.style={after row={ \hline \hline \multicolumn{1}{|c|}{} & \multicolumn{5}{|c|}{maximal $L_1(\Omega)$ error of the snapshots} & \multicolumn{1}{|c|}{} \\ \hline & $3.58 \cdot 10^{-3}$ & $3.53 \cdot 10^{-4}$ & $3.48 \cdot 10^{-5}$ & $3.66 \cdot 10^{-10}$ & & \\ \hline }} ]{pics/example_shift_normal_grid_errors.csv} \caption{Errors for example \eqref{eq:normal-cut-off} for different uniform spatial grids and number of snapshots ($n$).
The last line contains the maximal $L_1(\Omega)$ error of the respective snapshots.} \label{table:example-gaussian} \end{table} \subsection{2d Burgers' equation} \label{sec:burgers} For a second example, we consider the two dimensional Burgers' equation \begin{equation*} \partial_t u + \nabla \cdot \left(\frac{1}{2} u^2 \boldsymbol{v} \right) = 0 \end{equation*} with $\boldsymbol{v} = (1,1)^T$ and the initial condition \begin{equation*} u(x, y, 0) = \left\{ \begin{array}{rl} -0.2 & \text{if } x<0.5 \text { and } y>0.5 \\ -1.0 & \text{if } x>0.5 \text { and } y>0.5 \\ 0.5 & \text{if } x<0.5 \text { and } y<0.5 \\ 0.8 & \text{if } x>0.5 \text { and } y<0.5 \end{array} \right. \end{equation*} on the unit cube $\Omega = [0,1]^2$. According to \cite{GuermondPasquettiPopov2011, GerhardMuller2014} the exact solution for this problem is \begin{equation} u(x, y, t) = \left\{ \begin{array}{rlcl} \begin{array}{rr} -0.2 \\ 0.5 \end{array} & \text{if } x < \frac{1}{2} - \frac{3t}{5} & \text{and} & \left\{\begin{array}{l} y > \frac{1}{2} + \frac{3t}{20}, \\ \text{otherwise}, \end{array} \right. \\ \begin{array}{rr} -1.0 \\ 0.5 \end{array} & \text{if } \frac{1}{2} - \frac{3t}{5} < x < \frac{1}{2} - \frac{t}{4} & \text{and} & \left\{\begin{array}{l} y > -\frac{8x}{7} + \frac{15}{14} - \frac{15t}{28}, \\ \text{otherwise}, \end{array} \right. \\ \begin{array}{rr} -1.0 \\ 0.5 \end{array} & \text{if } \frac{1}{2} - \frac{t}{4} < x < \frac{1}{2} + \frac{t}{2} & \text{and} & \left\{\begin{array}{l} y > \frac{x}{6} + \frac{5}{12} - \frac{5t}{24}, \\ \text{otherwise}, \end{array} \right. \\ \begin{array}{rr} -1.0 \\ \frac{2x - 1}{2t} \end{array} & \text{if } \frac{1}{2} + \frac{t}{2} < x < \frac{1}{2} + \frac{4t}{5} & \text{and} & \left\{\begin{array}{l} y > x - \frac{5}{18t} \left( x + t - \frac{1}{2} \right)^2, \\ \text{otherwise}, \end{array} \right. 
\\ \begin{array}{rr} -1.0 \\ 0.8 \end{array} & \text{if } \frac{1}{2} + \frac{4t}{5} < x & \text{and} & \left\{\begin{array}{l} y > \frac{1}{2} - \frac{t}{10}, \\ \text{otherwise}, \end{array} \right. \\ \end{array} \right. \label{eq:burgers-exact} \end{equation} For a simple test of the transformed snapshot interpolation, we consider the time $t$ as the parameter of interest, so that the snapshots are solutions at various time instances used for the reconstruction of the solution at intermediate times. Note that with this choice of the parameter the solution has exactly the features in question: it has parameter dependent jumps and kinks along non-trivial curves. In addition, the exact solution is known, which is helpful for an exact assessment of the numerical errors. In order to simulate a numerical solution of Burgers' equation, we sample the snapshots on a $100 \times 100$ grid and use a piecewise linear reconstruction from these samples. Also the integrals for evaluating the errors during the optimization of the inner transform $\phi$ rely on this grid. Figure \ref{fig:burgers} shows the results for a reconstruction at time $0.45$ from two snapshots at times $0.3$ and $0.5$ with an additional training snapshot at time $0.4$ to define the training error $\sigma_\mathcal{T}(\phi)$. For a first test, the inner transforms $\phi(\mu, \eta)$ for $\mu, \eta \in \{0.3, 0.5\}$ are simply polynomials mapping $\mathbb{R}^2 \to \mathbb{R}^2$. In general this choice does not guarantee that $\Omega$ is mapped to itself; however, it is easy to enforce that the edges of the rectangular domain are mapped to themselves, so that small perturbations of the identity are diffeomorphisms. In order to be able to align both the kink and the jump in the lower right corner, for the $x$-component we choose a (multivariate) polynomial of degree $3 \times 2$ and for the $y$-component a polynomial of degree $2 \times 2$.
For the optimization of the training error with respect to $\phi$ we use a subgradient method \eqref{eq:subgrad}, \eqref{eq:step-size} with $500$ steps of the rather conservative fixed step size \begin{equation*} h_k = \frac{10^{-3}}{(k+1)^{0.1}}. \end{equation*} Compared to a classical polynomial interpolation, the additional transform almost completely removes the artificial staircasing behaviour. Also the kinks around the ``ramp'' in the upper left corner of the figures are much better resolved. The $L_1$ errors, computed by an adaptive quadrature instead of the grid of the snapshots, are as follows. \begin{center} \begin{tabular}{ll} $L_1$-error interpolation & 0.0355373675439 \\ $L_1$-error transformed snapshot interpolation & 0.00739513000396 \\ maximal snapshot $L_1$-error & 0.0051148730424 \end{tabular} \end{center} In conclusion, the additional inner transform reduces the error almost by a factor of five compared to a plain polynomial interpolation. Note that the error of the transformed snapshot interpolation is almost down to the maximal error of the snapshots themselves. As we have verified in Figure \ref{fig:example-gaussian} for the 1d example of Section \ref{sec:gaussian}, we expect the error to saturate somewhere around this level, so that more snapshots or degrees of freedom for the inner transform are not expected to yield major improvements. This is a serious bottleneck for computing convergence rates: Due to the jump discontinuities the maximal error of the snapshots converges with a low rate. The resulting high number of spatial degrees of freedom renders the computation of convergence rates challenging. \begin{figure} \caption{Reconstruction of the solution \eqref{eq:burgers-exact}.} \label{fig:burgers} \end{figure} \subsection{Shock bubble interaction} \label{sec:shock-bubble} For a last, more complicated example, we consider a compressible Euler simulation of a shock-bubble interaction \cite{Nazarov2015}.
Because the code for the above examples relies on piecewise linear interpolation on a uniform grid to represent the snapshots, it is straightforward to read them from pictures. For the shock-bubble interaction experiments, the snapshots are frames from a video showing the time evolution of the density provided by \cite{Nazarov2015a, Nazarov2015}. Figure \ref{fig:bubble} shows the snapshots and the reconstruction at a new time by linear interpolation and transformed snapshot interpolation. As in Section \ref{sec:burgers}, we simply choose third order polynomials for the inner transform $\phi$ mapping the edges of the domain to themselves. We see that the linear interpolation result basically shows the two bubbles from the original snapshots, whereas the true solution of course has just one bubble. Using the additional transform $\phi$, the second reconstruction finds the correct location of the shock and the bubble. Thus, despite much more complicated fine structure in the pictures, the optimizer reliably finds the correct transform. On closer inspection, the reconstructed bubble appears to be a little blurred. However, we only use third order polynomials, which may be insufficient for a perfect alignment of the shapes. \begin{figure} \caption{Left column: Snapshots for time indices $3.5$, $3.75$ and $4.0$ where the middle one is only used for the optimization of the transform $\phi$.
Right column: reconstruction at time index $3.7$ by linear interpolation (top) and transformed snapshot interpolation (bottom).} \label{fig:bubble} \end{figure} \begin{appendices} \section{Linear Width} \label{sec:linear-width} As an example of the limitations of polyadic decomposition based methods, let us consider their best possible performance for the following simple parametric transport problem: \begin{equation*} \begin{aligned} A_\mu u_\mu & := u_t + \mu u_x = 0 & & \text{for } 0<t<1, \, x \in \mathbb{R} \\ u(x,0) & = g(x) := \left\{ \begin{array}{ll} 0 & x<0 \\ 1 & x \ge 0 \end{array} \right. & & \text{for } t=0, \end{aligned} \end{equation*} with parameter $\mu \in \mathcal{P} = [\mu_{\min}, \mu_{\max}] \subset \mathbb{R}$. Its solution is given by \begin{equation} u(x,t) = g(x-\mu t). \label{eq:exact-solution} \end{equation} The typical benchmark for the performance of reduced basis methods is the Kolmogorov $n$-width \begin{equation*} d_n(\mathcal{F}) = \inf_{\dim Y = n} \sup_{u \in \mathcal{F}} \inf_{\phi \in Y} \|u - \phi\|_{L_1} \end{equation*} of the solution manifold \begin{equation} \mathcal{F} := \{u(\cdot, \mu) | \mu \in \mathcal{P} \} = \{g(x - \mu t) | \mu \in \mathcal{P} \}. \label{eq:solution-transport} \end{equation} However, with $X_n := \linspan \{\psi_i : \, i=1, \dots, n\}$ based on the $\psi_i$ of the polyadic decomposition \eqref{eq:polyadic-decomposition}, we conclude that \begin{equation*} d_n(\mathcal{F}) \le \sup_{\mu \in \mathcal{P}} \inf_{\phi \in X_n} \|u(\cdot, \mu) - \phi\|_{L_1} \le \sup_{\mu \in \mathcal{P}} \left\| u(\cdot, \mu) - \sum_{i=1}^n c_i(\mu) \psi_i \right\|_{L_1}. \end{equation*} Therefore, the errors of polyadic decomposition based methods, including but not restricted to reduced basis methods, are lower bounded by the Kolmogorov $n$-width if the error is measured in the $\sup$-norm with respect to the parameter variable.
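The slow width decay for families of shifted step functions can be made tangible numerically. The following sketch (purely illustrative, not part of the argument) computes the singular values of a snapshot matrix of shifted steps at a fixed time; this is an $L_2$ proxy for the $L_1$ widths considered here.

```python
import numpy as np

# shifted step functions u(x) = g(x - mu) at a fixed time, g = unit step
x = np.linspace(-1.0, 2.0, 2000)
mus = np.linspace(0.0, 1.0, 200)          # parameter samples
snapshots = np.stack([(x - mu >= 0.0).astype(float) for mu in mus])

# normalized singular values: an L2 proxy for the Kolmogorov widths
s = np.linalg.svd(snapshots, compute_uv=False)
s = s / s[0]
```

The normalized singular values decay only algebraically (on the order of $1/n$), in contrast to the rapid decay typical for families with smooth parameter dependence; this is consistent with the proposition below.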
Of course this measure for the error is not appropriate for all applications, but we use it here to exemplify the limitations of the standard polyadic decomposition based methods. For our simple model problem the Kolmogorov $n$-width is bounded according to the following proposition. The proof is similar to \cite{Donoho2001}, see also \cite{LorentzGolitschekMakovoz1996}. Note that order one is already achieved by a simple nonadaptive piecewise constant approximation as e.g. in \eqref{eq:piecewise-constant-no-transform}. \begin{proposition} The Kolmogorov width of the solution manifold $\mathcal{F}$ defined in \eqref{eq:solution-transport} satisfies \begin{equation*} d_n(\mathcal{F}) \sim n^{-1}, \end{equation*} i.e. $d_n(\mathcal{F})$ is bounded above and below by constant multiples of $n^{-1}$. \end{proposition} \begin{proof} We show the lower bound by comparing the Kolmogorov $n$-width of the solution manifold $\mathcal{F}$ to the known width of a ball in the $L_1$-norm. For the construction of this ball, we choose a uniform distribution of $k+1$ snapshots with parameters \begin{equation*} \begin{aligned} \mu_i & = \mu_{\min} + \frac{i}{k}(\mu_{\max} - \mu_{\min}), & i & = 0, \dots, k, \end{aligned} \end{equation*} where $k$ will be chosen below. Note that these specific snapshots are only used for this proof and are not necessarily used in actual approximation methods like e.g. reduced basis methods. We use the snapshots to define the functions \begin{equation*} \begin{aligned} \xi_i & = u_{\mu_i} - u_{\mu_{i-1}}, & i & = 1, \dots, k, \end{aligned} \end{equation*} which will be the corners of the $L_1$-ball. From \begin{equation*} \inf_{y \in Y} \|\xi_i - y\|_{L_1} \le \inf_{y \in Y} \|u_{\mu_i} - y\|_{L_1} + \inf_{y \in Y} \|u_{\mu_{i-1}} - y\|_{L_1} \le 2 \sup_{u \in \mathcal{F}} \inf_{y \in Y} \|u - y\|_{L_1} \end{equation*} for any linear space $Y$ of dimension at most $n$, it follows that \begin{equation*} d_n(\{ \xi_1, \dots, \xi_k \}) \le 2 d_n(\mathcal{F}).
\end{equation*} We complete the set $\{ \xi_1, \dots, \xi_k \}$ to a full ball without increasing the Kolmogorov width. To this end, assume that $\lambda_i$, $i=1, \dots, k$ satisfy $\sum_{i=1}^k |\lambda_i| \le 1$ and $y_i$, $i=1, \dots, k$ are the minimizers of $\inf_{y \in Y} \|\xi_i - y\|_{L_1}$. Then we have \begin{multline*} \inf_{y \in Y} \left \| \sum_{i=1}^k \lambda_i \xi_i - y \right\|_{L_1} \le \left \| \sum_{i=1}^k \lambda_i \xi_i - \sum_{i=1}^k \lambda_i y_i \right\|_{L_1} \\ \le \sum_{i=1}^k |\lambda_i| \|\xi_i - y_i\|_{L_1} \le \max_{1 \le i \le k} \|\xi_i - y_i\|_{L_1} \le 2 d_n(\mathcal{F}) \end{multline*} for any space $Y$ of dimension at most $n$ realizing the Kolmogorov width of $\mathcal{F}$. It follows that for the set \begin{equation*} \mathcal{F}' = \left\{ \sum_{i=1}^k \lambda_i \xi_i : \sum_{i=1}^k |\lambda_i| \le 1 \right\} \end{equation*} we have \begin{equation} d_n(\mathcal{F}') \le 2 d_n(\mathcal{F}). \label{eq:compare-width} \end{equation} Next, we show that $\mathcal{F}'$ is in fact an $L_1$-ball. To this end, note that according to the exact solution \eqref{eq:exact-solution} the functions $\xi_i$ have absolute value one on disjoint triangles and vanish elsewhere. The area of the triangles is $h/2$ with $h = (\mu_{\max} - \mu_{\min})/k$, because the time $t$ is restricted to $0 < t < 1$. Thus, we have \begin{equation*} \|\xi_i\|_{L_1} = \frac{h}{2}. \end{equation*} It follows that \begin{equation*} \left\| \sum_{i=1}^k \lambda_i \xi_i \right\|_{L_1} = \sum_{i=1}^k |\lambda_i| \|\xi_i\|_{L_1} = \frac{h}{2} \sum_{i=1}^k |\lambda_i|, \end{equation*} so that \begin{equation*} \mathcal{F}' = \linspan \{ \xi_1, \dots, \xi_k \} \cap B^{h/2}_{L_1} \end{equation*} where $B^{h/2}_{L_1}$ is the $L_1$-ball with radius $h/2$. Choosing $k = 2n$, we obtain the Kolmogorov width \begin{equation*} d_n(\mathcal{F}') = \frac{h}{2}, \end{equation*} see e.g. \cite{LorentzGolitschekMakovoz1996}.
Using \eqref{eq:compare-width} and $h \sim 1/n$ completes the proof of the lower bound. In order to prove the upper bound, note that $u_\mu - u_{\mu_i}$ has absolute value one on a triangular domain and vanishes elsewhere, where $u_{\mu_i}$ are the snapshots used in the proof of the lower bound. Calculating the area of the triangle as before yields \begin{equation*} \|u_\mu - u_{\mu_i}\|_{L_1} \le \frac{h}{2} \end{equation*} for the parameter $\mu_i$ closest to $\mu$. Thus, a piecewise constant approximation by $u_{\mu_0}, \dots, u_{\mu_k}$ yields the upper bound of the proposition. \end{proof} \end{appendices} \end{document}
\begin{document} \title{An approximation theorem for nuclear operator systems} \author{Kyung Hoon Han} \address{Department of Mathematical Sciences, Seoul National University, San 56-1 ShinRimDong, KwanAk-Gu, Seoul 151-747, Republic of Korea} \email{[email protected]} \author[V.~I.~Paulsen]{Vern I.~Paulsen} \address{Department of Mathematics, University of Houston, Houston, TX 77204-3476, U.S.A.} \email{[email protected]} \subjclass[2000]{46L06, 46L07, 47L07} \keywords{operator system, tensor product, nuclear} \date{} \dedicatory{} \commby{} \begin{abstract} We prove that an operator system $\mathcal S$ is nuclear in the category of operator systems if and only if there exist nets of unital completely positive maps $\varphi_\lambda : \cl S \to M_{n_\lambda}$ and $\psi_\lambda : M_{n_\lambda} \to \cl S$ such that $\psi_\lambda \circ \varphi_\lambda$ converges to ${\rm id}_{\cl S}$ in the point-norm topology. Our proof is independent of the Choi-Effros-Kirchberg characterization of nuclear $C^*$-algebras and yields this characterization as a corollary. We give an explicit example of a nuclear operator system that is not completely order isomorphic to a unital $C^*$-algebra. \end{abstract} \maketitle \section{Introduction} In summary, we prove that an operator system $\cl S$ has the property that for every operator system $\cl T$ the {\em minimal operator system tensor product} $\cl S \otimes_{\min} \cl T$ coincides with the {\em maximal operator system tensor product} $\cl S \otimes_{\max} \cl T$ if and only if there is a point-norm factorization of $\cl S$ through matrices of the type described in the abstract. Our proof of this fact is quite short, direct and independent of the corresponding factorization results of Choi, Effros and Kirchberg for nuclear $C^*$-algebras. Our proof uses in a key way a characterization of the maximal operator system tensor product given in \cite{KPTT1}. 
We are then able to deduce the Choi-Effros-Kirchberg characterization of nuclear $C^*$-algebras as an immediate corollary. The proof that one obtains in this way of the Choi-Effros-Kirchberg result combines elements of the proofs given in \cite{CE3} and \cite{Pi} but eliminates the need to approximate maps into the second dual or to introduce decomposable maps. Finally, we give a fairly simple example of an operator system that is {\em nuclear} in this sense, but is not completely order isomorphic to any $C^*$-algebra and yet has second dual completely order isomorphic to $B(\ell^2(\bb N)).$ Earlier, Kirchberg and Wassermann \cite{KW} constructed a nuclear operator system that is not even embeddable in any nuclear $C^*$-algebra. In \cite{Ka}, Kadison characterized the unital subspaces of a real continuous function algebra on a compact set by observing that the norm of a real continuous function algebra is determined by the unit and the order. As for its noncommutative counterpart, Choi and Effros gave an abstract characterization of the unital involutive subspaces of $\cl B(\cl H)$ \cite{CE1}. The observation that the unit and the matrix order in $\cl B(\cl H)$ determine the matrix norm is key to their characterization. The former is called a real function system or a real ordered vector space with an Archimedean order unit, while the latter is termed an operator system. Although the abstract characterization of an operator system played a key role in the work of Choi and Effros \cite{CE1} on the tensor products of $C^*$-algebras, there was little effort to study the categorical aspects of operator systems and their tensor theory until a series of papers \cite{PT, PTT, KPTT1, KPTT2}. In particular, \cite{KPTT1} introduced axioms for tensor products of operator systems and characterized the minimal and maximal tensor products of operator systems.
The positive cone of the minimal tensor product is the largest among all possible positive cones of operator system tensor products while that of the maximal tensor product is the smallest. These extend the minimal tensor product and the maximal tensor product of $C^*$-algebras. In other words, the minimal (respectively, maximal) operator system tensor product of two unital $C^*$-algebras is the operator subsystem of their minimal (respectively, maximal) $C^*$-tensor product. For the purposes of this paper, a unital $C^*$-algebra $\cl A$ will be called {\em $C^*$-nuclear} if and only if it has the property that for every unital $C^*$-algebra $\cl B$ the minimal $C^*$-tensor product $\cl A \otimes_{C^*\min} \cl B$ is equal to the maximal $C^*$-tensor product $\cl A \otimes_{C^*\max} \cl B$. We say that a $C^*$-algebra $\cl A$ has the {\em completely positive approximation property} (in short, CPAP) if there exists a net of unital completely positive maps $\varphi_\lambda : \cl A \to \cl A$ with finite rank which converges to ${\rm id}_{\cl A}$ in the point-norm topology. The Choi-Effros-Kirchberg result is that a $C^*$-algebra $\cl A$ is $C^*$-nuclear if and only if $\cl A$ has the CPAP if and only if there exist nets of unital completely positive maps $\varphi_\lambda : \cl A \to M_{n_\lambda}$ and $\psi_\lambda : M_{n_\lambda} \to \cl A$ such that $\psi_\lambda \circ \varphi_\lambda$ converges to ${\rm id}_{\cl A}$ in the point-norm topology \cite{CE3, Ki1}. For a recent proof which uses operator space methods and the decomposable approximation, we refer the reader to \cite[Chapter 12]{Pi}. An operator system will be called {\em nuclear} provided that the minimal tensor product of it with an arbitrary operator system coincides with the maximal tensor product. In \cite{KPTT1}, this property was called $(\min,\max)$-nuclear. 
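As a concrete, classical instance of a factorization through matrix algebras (not taken from this paper), the commutative $C^*$-algebra $C[0,1]$ has the CPAP: point evaluations $\varphi_n$ at $n+1$ uniform nodes map into the diagonal of $M_{n+1}$, and piecewise-linear hat-function reconstructions $\psi_n$ map back into $C[0,1]$; both maps are unital and positive (and completely so, the algebras being commutative), and $\psi_n \circ \varphi_n \to \mathrm{id}$ uniformly on each continuous function. The following sketch illustrates this under those standard assumptions.

```python
import numpy as np

def phi_n(f, n):
    """Evaluate f at n+1 uniform nodes: the diagonal of a unital
    completely positive map C[0,1] -> M_{n+1}."""
    return f(np.linspace(0.0, 1.0, n + 1))

def psi_n(d, x):
    """Reconstruct by piecewise-linear interpolation; the hat functions
    are nonnegative and sum to one, so the map is unital and positive."""
    return np.interp(x, np.linspace(0.0, 1.0, len(d)), d)

f = np.sin
x = np.linspace(0.0, 1.0, 1001)
err50 = np.max(np.abs(psi_n(phi_n(f, 50), x) - f(x)))
err200 = np.max(np.abs(psi_n(phi_n(f, 200), x) - f(x)))
```

The composition error decreases as the number of nodes grows, which is exactly the point-norm convergence $\psi_n \circ \varphi_n \to \mathrm{id}_{C[0,1]}$ required by the CPAP.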
It is natural to ask whether the approximation theorems of nuclear $C^*$-algebras \cite{CE3, Ki1} also hold in the category of operator systems. In section 3, we show that an operator system $\cl S$ is nuclear if and only if there exist nets of unital completely positive maps $\varphi_\lambda : \cl S \to M_{n_\lambda}$ and $\psi_\lambda : M_{n_\lambda} \to \cl S$ such that $\psi_\lambda \circ \varphi_\lambda$ converges to ${\rm id}_{\cl S}$ in the point-norm topology. We then prove, independent of the Choi-Effros-Kirchberg theorem, that a $C^*$-algebra is $C^*$-nuclear if and only if it is nuclear as an operator system. Thus, we obtain the Choi-Effros-Kirchberg characterization as a corollary of the factorization result for operator systems. In contrast, CPAP does not imply nuclearity in the category of operator systems. Let $$\cl S_0 = {\rm span} \{ E_{1,1}, E_{1,2}, E_{2,1}, E_{2,2}, E_{2,3}, E_{3,2}, E_{3,3} \} \subset M_3.$$ In \cite[Theorem~5.18]{KPTT1}, it is shown that this finite dimensional operator system $\cl S_0$ is not nuclear. On the other hand, \cite[Theorem~5.16]{KPTT1} shows that the minimal and maximal operator system tensor products of $\cl S_0 \otimes \cl B$ coincide for every unital $C^*$-algebra $\cl B.$ Thus, for operator systems, tensoring with $C^*$-algebras is not sufficient to discern ordinary nuclearity, i.e., $(\min,\max)$-nuclearity. However, it is easily seen that the minimal and maximal operator system tensor products of $\cl S \otimes \cl B$ coincide for every unital $C^*$-algebra $\cl B$ if and only if $\cl S$ is $(\min,{\rm c})$-nuclear, in the sense of \cite{KPTT1}. Finally, in section 4, we construct a nuclear operator system that is not unitally, completely order isomorphic to a unital $C^*$-algebra. This shows that the theory of nuclear operator systems properly extends the theory of nuclear $C^*$-algebras. 
In contrast, by \cite{CE1}, every injective operator system is unitally, completely order isomorphic to a unital $C^*$-algebra. \section{Preliminaries} Let $\cl S$ and $\cl T$ be operator systems. Following \cite{KPTT1}, an {\em operator system structure} on $\cl S \otimes \cl T$ is defined as a family of cones $M_n (\cl S \otimes_\tau \cl T)^+$ satisfying: \begin{enumerate} \item[(T1)] $(\cl S \otimes \cl T, \{ M_n (\cl S \otimes_\tau \cl T)^+ \}_{n=1}^\infty, 1_{\cl S} \otimes 1_{\cl T})$ is an operator system denoted by $\cl S \otimes_\tau \cl T$, \item[(T2)] $M_n(\cl S)^+ \otimes M_m(\cl T)^+ \subset M_{mn} (\cl S \otimes_\tau \cl T)^+$ for all $n,m \in \mathbb N$, and \item[(T3)] if $\varphi : \cl S \to M_n$ and $\psi : \cl T \to M_m$ are unital completely positive maps, then $\varphi \otimes \psi : \cl S \otimes_\tau \cl T \to M_{mn}$ is a unital completely positive map. \end{enumerate} By an {\em operator system tensor product,} we mean a mapping $\tau : \cl O \times \cl O \to \cl O$, such that for every pair of operator systems $\cl S$ and $\cl T$, $\tau (\cl S, \cl T)$ is an operator system structure on $\cl S \otimes \cl T$, denoted $\cl S \otimes_\tau \cl T$. We call an operator system tensor product $\tau$ {\em functorial,} if the following property is satisfied: \begin{enumerate} \item[(T4)] For any operator systems $\cl S_1, \cl S_2, \cl T_1, \cl T_2$ and unital completely positive maps $\varphi : \cl S_1 \to \cl T_1, \psi : \cl S_2 \to \cl T_2$, the map $\varphi \otimes \psi : \cl S_1 \otimes \cl S_2 \to \cl T_1 \otimes \cl T_2$ is unital completely positive. \end{enumerate} An operator system structure is defined on two fixed operator systems, while a functorial operator system tensor product can be thought of as a bifunctor on the category consisting of operator systems and unital completely positive maps. Given an operator system $\cl R$ we let $S_n(\cl R)$ denote the set of unital completely positive maps of $\cl R$ into $M_n$.
For operator systems $\cl S$ and $\cl T$, we put $$M_n(\cl S \otimes_{\min} \cl T)^+ = \{ [p_{i,j}]_{i,j} \in M_n(\cl S \otimes \cl T) : \forall \varphi \in S_k(\cl S), \psi \in S_m(\cl T), [(\varphi \otimes \psi)(p_{i,j})]_{i,j} \in M_{nkm}^+ \}.$$ Then the family $\{ M_n(\cl S \otimes_{\min} \cl T)^+ \}_{n=1}^\infty$ is an operator system structure on $\cl S \otimes \cl T.$ Moreover, if we let $\iota_{\cl S} : \cl S \to \cl B(\cl H)$ and $\iota_{\cl T} : \cl T \to \cl B(\cl K)$ be any unital completely order isomorphic embeddings, then it is shown in \cite{KPTT1} that this is the operator system structure on $\cl S \otimes \cl T$ arising from the embedding $\iota_{\cl S} \otimes \iota_{\cl T} : \cl S \otimes \cl T \to \cl B(\cl H \otimes \cl K)$. As in \cite{KPTT1}, we call the operator system $(\cl S \otimes \cl T, \{ M_n(\cl S \otimes_{\min} \cl T) \}_{n=1}^\infty, 1_{\cl S} \otimes 1_{\cl T})$ the {\em minimal} tensor product of $\cl S$ and $\cl T$ and denote it by $\cl S \otimes_{\min} \cl T$. The mapping $\min : \cl O \times \cl O \to \cl O$ sending $(\cl S, \cl T)$ to $\cl S \otimes_{\min} \cl T$ is an injective, associative, symmetric and functorial operator system tensor product. The positive cone of the minimal tensor product is the largest among all possible positive cones of operator system tensor products \cite[Theorem~4.6]{KPTT1}. For $C^*$-algebras $\cl A$ and $\cl B$, we have the completely order isomorphic inclusion $$\cl A \otimes_{\min} \cl B \subset \cl A \otimes_{\rm C^*\min} \cl B$$ \cite[Corollary~4.10]{KPTT1}. For operator systems $\cl S$ and $\cl T$, we put $$D_n^{\max}(\cl S, \cl T) = \{ \alpha(P \otimes Q) \alpha^* : P \in M_k(\cl S)^+, Q \in M_l(\cl T)^+, \alpha \in M_{n,kl},\ k,l \in \mathbb N \}.$$ Then it is a matrix ordering on $\cl S \otimes \cl T$ with order unit $1_{\cl S} \otimes 1_{\cl T}$. 
Let $\{ M_n(\cl S \otimes_{\max} \cl T)^+ \}_{n=1}^\infty$ be the Archimedeanization of the matrix ordering $\{ D_n^{\max}(\cl S, \cl T) \}_{n=1}^\infty$. Then it can be written as $$M_n(\cl S \otimes_{\max} \cl T)^+ = \{ X \in M_n(\cl S \otimes \cl T) : \forall \varepsilon>0, X+\varepsilon I_n \otimes 1_{\cl S} \otimes 1_{\cl T} \in D_n^{\max}(\cl S, \cl T) \}.$$ We call the operator system $(\cl S \otimes \cl T, \{ M_n(\cl S \otimes_{\max} \cl T)^+ \}_{n=1}^\infty, 1_{\cl S} \otimes 1_{\cl T})$ the {\em maximal} operator system tensor product of $\cl S$ and $\cl T$ and denote it by $\cl S \otimes_{\max} \cl T$. The mapping $\max : \cl O \times \cl O \to \cl O$ sending $(\cl S, \cl T)$ to $\cl S \otimes_{\max} \cl T$ is an associative, symmetric and functorial operator system tensor product. The positive cone of the maximal tensor product is the smallest among all possible positive cones of operator system tensor products \cite[Theorem~5.5]{KPTT1}. For $C^*$-algebras $\cl A$ and $\cl B$, we have the completely order isomorphic inclusion $$\cl A \otimes_{\max} \cl B \subset \cl A \otimes_{\rm C^*\max} \cl B$$ \cite[Theorem~5.12]{KPTT1}. \section{An approximation theorem for nuclear operator systems} We prove the main theorem of this paper which generalizes the Choi-Effros-Kirchberg approximation theorem. The proof is quite simple compared to the original one. In particular, the proof does not depend on the Kaplansky density theorem. \begin{thm}\label{main1} Suppose that $\Phi : \cl S \to \cl T$ is a unital completely positive map for operator systems $\cl S$ and $\cl T$. 
The following are equivalent: \begin{enumerate} \item[(i)] the map $${\rm id}_{\cl R} \otimes \Phi : \cl R \otimes_{\min} \cl S \to \cl R \otimes_{\max} \cl T$$ is completely positive for any operator system $\cl R$; \item[(ii)] the map $${\rm id}_E \otimes \Phi : E \otimes_{\min} \cl S \to E \otimes_{\max} \cl T$$ is completely positive for any finite dimensional operator system $E$; \item[(iii)] there exist nets of unital completely positive maps $\varphi_\lambda : \cl S \to M_{n_\lambda}$ and $\psi_\lambda : M_{n_\lambda} \to \cl T$ such that $\psi_\lambda \circ \varphi_\lambda$ converges to the map $\Phi$ in the point-norm topology. $$\xymatrix{\cl S \ar[rr]^\Phi \ar[dr]_{\varphi_\lambda} && \cl T \\ & M_{n_\lambda} \ar[ur]_{\psi_\lambda} &}$$ \end{enumerate} \end{thm} \begin{proof} Clearly, (i) implies (ii). $\rm (iii)\Rightarrow (i).$ For any operator system $\cl R$ and any $n \in \bb N,$ if we identify $M_k(M_n \otimes \cl R) = M_{nk} \otimes \cl R$ in the usual manner, then a somewhat tedious calculation shows that $D_k^{\max}(M_n, \cl R) = M_{nk}(\cl R)^+ = M_k(M_n \otimes_{\min} \cl R)^+.$ This gives an independent verification that $\cl R \otimes_{\max} M_n = \cl R \otimes_{\min} M_n,$ i.e., that the two operator system structures are identical. Alternatively, this fact follows from \cite[Corollary~6.8]{KPTT1}, which they point out is obtained independently of the Choi-Effros-Kirchberg theorem. From the maps $$\xymatrix{\cl R \otimes_{\min} \cl S \ar[rr]^-{{\rm id}_{\cl R} \otimes \varphi_\lambda} && \cl R \otimes_{\min} M_{n_\lambda} = \cl R \otimes_{\max} M_{n_\lambda} \ar[rr]^-{{\rm id}_{\cl R} \otimes \psi_\lambda} && \cl R \otimes_{\max} \cl T,}$$ we see that the map $${\rm id}_{\cl R} \otimes \psi_\lambda \circ \varphi_\lambda : \cl R \otimes_{\min} \cl S \to \cl R \otimes_{\max} \cl T$$ is completely positive for any operator system $\cl R$. 
Since $\|\cdot \|_{\cl R \otimes_{\max} \cl T}$ is a cross norm, ${\rm id}_{\cl R} \otimes (\psi_\lambda \circ \varphi_\lambda) (z)$ converges to ${\rm id}_{\cl R} \otimes \Phi(z)$ for each $z \in \cl R \otimes \cl S$. It follows that $z \in (\cl R \otimes_{\min} \cl S)^+$ implies ${\rm id}_{\cl R} \otimes \Phi (z) \in (\cl R \otimes_{\max} \cl T)^+$. \vskip 1pc $\rm (ii)\Rightarrow (iii).$ Let $E$ be a finite dimensional operator subsystem of $\cl S$. There exists a state $\omega_1$ on $E$ which plays the role of a (non-canonical) Archimedean order unit on the dual space $E^*$ \cite[Corollary~4.5]{CE1}. In other words, $(E^*, \omega_1)$ is an operator system. We can regard the inclusion $\iota : E \subset \cl S$ as an element in $(E^* \otimes_{\min} \cl S)^+$ \cite[Lemma~8.4]{KPTT2}. The restriction $\Phi |_E : E \to \cl T$ can be identified with the element $({\rm id}_{E^*} \otimes \Phi) (\iota)$. By assumption, it belongs to $(E^* \otimes_{\max} \cl T)^+$. We consider the directed set $$\Omega = \{(E, \varepsilon) : \text{$E$ is a finite dimensional operator subsystem of $\cl S$}, \varepsilon >0 \}$$ with the standard partial order. Let $\lambda = (E, \varepsilon)$. For any $\varepsilon >0$, we can write $$\Phi|_E + \varepsilon\, \omega_1 \otimes 1_{\cl T} = \alpha (f \otimes Q) \alpha^*$$ for $\alpha \in M_{1,n_{\lambda}m}, f \in M_{n_\lambda}(E^*)^+$ and $Q \in M_m(\cl T)^+$. The map $f : E \to M_{n_\lambda}$ is completely positive and the matrix $f(1_{\cl S})$ is positive semi-definite. Let $P$ be the support projection of $f(1_{\cl S})$. For $x \in E^+$, we have $$0 \le f(x) \le \|x\|f(1_{\cl S}) \le \|x\| \|f(1_{\cl S})\| P.$$ Since every element in $E$ can be written as a linear combination of positive elements in $E$, the range of $f$ is contained in $P M_{n_\lambda} P$. The positive semi-definite matrix $f(1_{\cl S})$ is invertible in $P M_{n_\lambda} P$.
We denote by $f(1_{\cl S})^{-1}$ its inverse in $P M_{n_\lambda} P$. Put $p = {\rm rank} P$ and let $U^* P U = I_{p} \oplus 0$ be the diagonalization of $P$. Since we can write $$\begin{aligned} & \alpha f \otimes Q \alpha^* \\ = & \alpha (f(1_{\cl S})^{1 \over 2} U \begin{pmatrix} I_p \\ 0 \end{pmatrix} \otimes I_m) \cdot [\begin{pmatrix} I_p & 0 \end{pmatrix} U^* f(1_{\cl S})^{-{1 \over 2}}\ f\ f(1_{\cl S})^{-{1 \over 2}} U \begin{pmatrix} I_p \\ 0 \end{pmatrix} \otimes Q] \ \\ & \cdot (f(1_{\cl S})^{1 \over 2} U \begin{pmatrix} I_{ p} \\ 0 \end{pmatrix} \otimes I_m)^* \alpha^*, \end{aligned}$$ we may assume that $f : E \to M_{n_\lambda}$ is a unital completely positive map. By the Arveson extension theorem, $f : E \to M_{n_\lambda}$ extends to a unital completely positive map $\varphi_\lambda : \cl S \to M_{n_\lambda}$. We define a completely positive map $\psi'_\lambda : M_{n_\lambda} \to \cl T$ by $$\psi'_\lambda(A) = \alpha A \otimes Q \alpha^*, \qquad A \in M_{n_\lambda}.$$ For $x \in E$, we have $$\|\Phi(x)-\psi'_\lambda \circ \varphi_\lambda (x) \| = \|\Phi(x)-\alpha f(x) \otimes Q \alpha^*\| = \varepsilon \| \omega_1(x) 1_{\cl T} \| \le \varepsilon \|x\|.$$ Hence, we can take nets of unital completely positive maps $\varphi_\lambda : \cl S \to M_{n_\lambda}$ and completely positive maps $\psi'_\lambda : M_{n_\lambda} \to \cl T$ such that $\psi'_\lambda \circ \varphi_\lambda$ converges to the map $\Phi$ in the point-norm topology. Since each $\varphi_\lambda$ is unital, $\psi'_\lambda (I_{n_\lambda})$ converges to $1_{\cl T}$. Let us choose a state $\omega_\lambda$ on $M_{n_\lambda}$ and set $$\psi_\lambda(A) = {1 \over \| \psi'_\lambda\|} \psi'_\lambda(A) + \omega_\lambda(A) (1_{\cl T} - {1 \over \| \psi'_\lambda \|} \psi'_\lambda (I_{n_{\lambda}})).$$ Then $\psi_\lambda : M_{n_\lambda} \to \cl T$ is a unital completely positive map such that $\psi_\lambda \circ \varphi_\lambda$ converges to the map $\Phi$ in the point-norm topology. 
\end{proof} Putting $\cl S = \cl T$ and $\Phi = {\rm id}_{\cl S}$, we obtain the following corollary. \begin{cor}\label{main2} Let $\cl S$ be an operator system. The following are equivalent: \begin{enumerate} \item[(i)] $\cl S$ is nuclear; \item[(ii)] we have $$E \otimes_{\min} \cl S = E \otimes_{\max} \cl S$$ for any finite dimensional operator system $E$; \item[(iii)] there exist nets of unital completely positive maps $\varphi_\lambda : \cl S \to M_{n_\lambda}$ and $\psi_\lambda : M_{n_\lambda} \to \cl S$ such that $\psi_\lambda \circ \varphi_\lambda$ converges to ${\rm id}_{\cl S}$ in the point-norm topology. \end{enumerate} \end{cor} \begin{cor}[Choi-Effros-Kirchberg Theorem] Let $\cl A$ be a unital $C^*$-algebra. Then $\cl A$ is $C^*$-nuclear if and only if there exist nets of unital completely positive maps $\varphi_\lambda : \cl A \to M_{n_\lambda}$ and $\psi_\lambda : M_{n_\lambda} \to \cl A$ such that $\psi_\lambda \circ \varphi_\lambda$ converges to ${\rm id}_{\cl A}$ in the point-norm topology. \end{cor} \begin{proof} It will be enough to prove that if $\cl A$ is $C^*$-nuclear, then for every operator system $\cl T,$ the minimal and maximal operator system tensor products coincide on $\cl A \otimes \cl T.$ Again this fact follows from \cite[Corollary~6.8]{KPTT1} which is independent of the Choi-Effros-Kirchberg theorem. Since the notation is somewhat different in \cite{KPTT1} and their result relies on several earlier results, we repeat the argument below. Let $C^*_u(\cl T)$ be the universal $C^*$-algebra generated by the operator system $\cl T$ as defined in \cite{KPTT1}. Since $\cl A$ is $C^*$-nuclear, we have that $\cl A \otimes_{C^*\min} C^*_u(\cl T) = \cl A \otimes_{C^*\max} C^*_u(\cl T).$ But we have that $\cl A \otimes_{\min} \cl T \subseteq \cl A \otimes_{C^*\min} C^*_u(\cl T)$ completely order isomorphically, by \cite[Corollary~4.10]{KPTT1}. 
Also, by \cite[Theorem~6.4]{KPTT1} the inclusion of the {\em commuting} tensor product $\cl A \otimes_{\rm c} \cl T \subseteq \cl A \otimes_{C^*\max} C^*_u(\cl T)$ is a complete order isomorphism. Thus, the fact that $\cl A$ is $C^*$-nuclear implies that $\cl A \otimes_{\min} \cl T = \cl A \otimes_{\rm c} \cl T$ completely order isomorphically. Finally, the result follows from the fact \cite[Theorem~6.7]{KPTT1} that for any $C^*$-algebra $\cl A$, $\cl A \otimes_{\rm c} \cl T = \cl A \otimes_{\max} \cl T,$ completely order isomorphically. \end{proof} \begin{rem} Suppose that we call an operator system $\cl S$ {\it $C^*$-nuclear} if $\cl S \otimes_{\min} \cl B = \cl S \otimes_{\max} \cl B$ for every unital $C^*$-algebra $\cl B.$ Then it follows by \cite[Theorem~6.4]{KPTT1} that an operator system $\cl S$ is $C^*$-nuclear if and only if $\cl S \otimes_{\min} \cl T = \cl S \otimes_{\rm c} \cl T$ for every operator system $\cl T$. In the terminology of \cite{KPTT1}, this latter property is the definition of $(\min,{\rm c})$-nuclearity. Thus, an operator system is $C^*$-nuclear if and only if it is $(\min,{\rm c})$-nuclear. A complete characterization of such operator systems is still unknown. \end{rem} By a result of Choi and Effros \cite{CE2}, a $C^*$-algebra $\cl A$ is nuclear if and only if its enveloping von Neumann algebra ${\cl A}^{**}$ is injective. We wish to extend this result to nuclear operator systems. In the next section we produce an example of a nuclear operator system that is not completely order isomorphic to any $C^*$-algebra. An operator space $X$ is called {\it nuclear} provided that there exist nets of complete contractions $\varphi_{\lambda}:X \to M_{n_{\lambda}}$ and $\psi_{\lambda}: M_{n_{\lambda}} \to X$ such that $\psi_\lambda \circ \varphi_\lambda$ converges to ${\rm id}_X$ in the point-norm topology.
Kirchberg \cite{Ki2} gives an example of an operator space $X$ that is not nuclear, but such that the bidual $X^{**}$ is completely isometric to an injective von~Neumann algebra. A later theorem of Effros, Ozawa and Ruan \cite[Theorem~4.5]{EOR} implies that Kirchberg's operator space $X$ is also not locally reflexive. See \cite{ER} for further details on local reflexivity. These pathologies do not occur for operator systems. This follows from the works of Kirchberg \cite{Ki2} and of Effros, Ozawa and Ruan \cite{EOR}. The following summarizes their results. \begin{thm}\label{main3} Let $\cl S$ be an operator system. Then the following are equivalent: \begin{enumerate} \item[(i)] $\cl S$ is a nuclear operator system; \item[(ii)] $\cl S$ is a nuclear operator space; \item[(iii)] $\cl S^{**}$ is unitally completely order isomorphic to an injective von~Neumann algebra. \end{enumerate} \end{thm} \begin{proof} Clearly, (i) implies (ii) by Theorem~\ref{main1}. For (ii) $\Rightarrow$ (iii), combine \cite[Theorem~4.5]{EOR}, \cite[Theorem~3.1]{CE1} and Sakai's theorem. Finally, the proof that (iii) implies (i) is due to Kirchberg \cite[Lemma~2.8(ii)]{Ki2}. \end{proof} Smith's characterization of nuclear $C^*$-algebras \cite[Theorem~1.1]{Sm} follows from (ii) $\Rightarrow$ (i). We now see another contrast between operator spaces and operator systems. \begin{cor} Let $\cl S$ be an operator system. If $\cl S^{**}$ is unitally completely order isomorphic to an injective von~Neumann algebra, then $\cl S$ is a locally reflexive operator space. \end{cor} \begin{proof} By the above result, $\cl S$ is a nuclear operator space and hence by \cite[Theorem~4.4]{EOR}, $\cl S$ is locally reflexive. \end{proof} \begin{cor} Every finite dimensional nuclear operator system is unitally completely order isomorphic to the direct sum of matrix algebras. \end{cor} \begin{proof} Let $\cl S$ be a finite dimensional nuclear operator system.
Then $\cl S = \cl S^{**},$ which by the above result is unitally completely order isomorphic to a finite dimensional $C^*$-algebra. \end{proof} \begin{rem} Kirchberg \cite[Theorem~1.1]{Ki2} proves that every nuclear separable operator system is unitally completely isometric to a quotient of the CAR-algebra by a hereditary $C^*$-subalgebra and that, conversely, every such quotient gives rise to a nuclear separable operator system. \end{rem} \section{A Nuclear Operator System that is not a $C^*$-algebra} Kirchberg and Wassermann \cite{KW} constructed a remarkable example of a nuclear operator system that has no unital complete order embedding into any nuclear $C^*$-algebra. So, in particular, they give an example of a nuclear operator system that is not unitally completely order isomorphic to a $C^*$-algebra. In this section we provide a very concrete example of this latter phenomenon. Let $\cl K_0 \subseteq \cl B(\ell^2(\bb N))$ denote the norm closed linear span of $\{ E_{i,j}: (i,j) \ne (1,1) \},$ where $E_{i,j}$ are the standard matrix units, and let $$\cl S_0= \{ \lambda I + K_0 : \lambda \in \bb C, K_0 \in \cl K_0 \}\subseteq \cl B(\ell^2(\bb N))$$ denote the operator system spanned by $\cl K_0$ and the identity operator.
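As a purely illustrative finite-dimensional sketch (not part of the argument), the unital compressions used below for $\cl S_0$, namely $\varphi_n(X) = V_n^*XV_n$ and $\psi_n(Y) = V_nYV_n^* + y_{1,1}Q_n$, can be modeled in a truncated ambient space; all dimensions in the snippet are arbitrary choices.

```python
import numpy as np

N_amb, n = 8, 3  # truncated ambient dimension and compression level; illustrative choices

# V: isometric inclusion of C^n into the ambient space; Q: projection onto its orthocomplement
V = np.zeros((N_amb, n))
V[:n, :n] = np.eye(n)
Q = np.eye(N_amb) - V @ V.T

def phi(X):
    # phi_n(X) = V_n^* X V_n (compression to the upper-left n x n corner)
    return V.T @ X @ V

def psi(Y):
    # psi_n(Y) = V_n Y V_n^* + y_{1,1} Q_n (unital lift back into the ambient space)
    return V @ Y @ V.T + Y[0, 0] * Q

# psi_n is unital: psi_n(I_n) equals the ambient identity
assert np.allclose(psi(np.eye(n)), np.eye(N_amb))

# psi_n(phi_n(X)) reproduces the upper-left n x n corner of X
X = np.arange(N_amb * N_amb, dtype=float).reshape(N_amb, N_amb)
X[0, 0] = 0.0   # "X in K_0": vanishing (1,1) entry
assert np.allclose(psi(phi(X))[:n, :n], X[:n, :n])
```

The same computation, with the ambient truncation growing, mirrors how $\psi_n \circ \varphi_n$ recovers a compact operator in the limit.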
The goals of this section are to show that $\cl S_0$ is a nuclear operator system, that it is not unitally completely order isomorphic to any $C^*$-algebra, and that $\cl S_0^{**}$ is unitally completely order isomorphic to $\cl B(\ell^2(\bb N)).$ Let $V_n: \bb C^n \to \ell^2(\bb N)$ be the isometric inclusion defined by $V_n(e_j) = e_j$, $1 \le j \le n$, and let $Q_n \in \cl B(\ell^2(\bb N))$ be the projection onto the orthocomplement of $V_n(\bb C^n).$ Finally, define unital completely positive maps $\varphi_n : \cl B(\ell^2(\bb N)) \to M_n$ and $\psi_n:M_n \to \cl B(\ell^2(\bb N))$ by $$\varphi_n(X) = V_n^*XV_n \qquad \text{and} \qquad \psi_n(Y) = V_nYV_n^* + y_{1,1}Q_n,\quad Y=(y_{i,j}).$$ \begin{prop} The following hold: \begin{enumerate} \item[(i)] $\psi_n(M_n) \subseteq \cl S_0;$ \item[(ii)] for any $m \in \bb N$ and $(X_{i,j}) \in M_m(\cl S_0),$ $\|(X_{i,j}) - (\psi_n \circ \varphi_n(X_{i,j})) \| \to 0$ as $n \to +\infty;$ \item[(iii)] $\cl S_0$ is a nuclear operator system. \end{enumerate} \end{prop} \begin{proof} Given $Y \in M_n,$ we have that $\psi_n(Y - y_{1,1}I_n) \in \cl K_0,$ and hence $\psi_n(Y) \in \cl S_0,$ and (i) follows. If $X \in \cl K_0,$ then the first $n \times n$ matrix entries of $\psi_n \circ \varphi_n(X)$ agree with those of $X$ and the remaining entries are $0.$ Since $X$ is compact, $\|X - \psi_n \circ \varphi_n(X) \| \to 0,$ and since both maps are unital, we have that (ii) holds for the case $m=1.$ The case $m >1$ follows similarly. Statement (iii) follows from (ii) and Theorem~\ref{main1}. \end{proof} \begin{thm} The nuclear operator system $\cl S_0$ is not unitally completely order isomorphic to a $C^*$-algebra. \end{thm} \begin{proof} Assume to the contrary that $\cl A$ is a unital $C^*$-algebra and that $\gamma: \cl A \to \cl S_0$ is a unital, complete order isomorphism. Then $\gamma$ is also a completely isometric isomorphism.
Use the Stinespring representation \cite[Theorem 4.1]{Pa} to write $\gamma(a) = P\pi(a)P,$ where $\pi:\cl A \to \cl B(\ell^2(\bb N) \oplus \cl H)$ is a unital $*$-homomorphism and $P: \ell^2(\bb N) \oplus \cl H \to \ell^2(\bb N)$ denotes the orthogonal projection. Let $a_{i,j}, (i,j) \ne (1,1)$ denote the unique elements of $\cl A,$ satisfying $\gamma(a_{i,j}) = E_{i,j}.$ Relative to the decomposition $\ell^2(\bb N) \oplus \cl H,$ we have that \[ \pi(a_{i,j}) = \begin{pmatrix} E_{i,j} & B_{i,j}\\C_{i,j} & D_{i,j} \end{pmatrix}, \] where $B_{i,j}: \cl H \to \ell^2(\bb N), C_{i,j}: \ell^2(\bb N) \to \cl H$ and $D_{i,j}: \cl H \to \cl H$ are bounded operators. By choosing an orthonormal basis $\{ u_t \}_{t \in T}$ we may regard $B_{i,j}$ as an $\bb N \times T$ matrix and $C_{i,j}$ as a $T \times \bb N$ matrix. Since $\|\pi(a_{i,j})\| = \|E_{i,j}\| =1,$ we must have that the $i$-th row of $B_{i,j}$ is $0$ and the $j$-th column of $C_{i,j}$ is $0.$ If $k \ne i,$ then \[ 1= \|( E_{i,j}, E_{k,k+1})\| = \|(\pi(a_{i,j}), \pi(a_{k,k+1})) \| \ge \|(E_{i,j}, B_{i,j}, E_{k,k+1}, B_{k,k+1})\| \] from which it follows that the $k$-th row of $B_{i,j}$ is also $0.$ This proves that $B_{i,j} =0$ for all $(i,j) \ne (1,1).$ A similar argument using the fact that $\|\begin{pmatrix} E_{i,j} \\ E_{k+1,k} \end{pmatrix} \| =1$ for $k \ne j$ yields that $C_{i,j} =0$ for all $(i,j) \ne (1,1).$ Since $\cl A$ is the closed linear span of $a_{i,j}, (i,j) \ne (1,1)$ and the identity it follows that for any $a \in \cl A,$ \[ \pi(a) = \begin{pmatrix} \gamma(a) & 0 \\ 0 & \rho(a) \end{pmatrix}, \] for some linear map $\rho: \cl A \to \cl B(\cl H).$ But since $\pi$ is a unital $*$-homomorphism, it follows that $\gamma: \cl A \to \cl B(\ell^2(\bb N))$ is a unital $*$-homomorphism and, consequently, that $\cl S_0$ is a $C^*$-subalgebra of $\cl B(\ell^2(\bb N)).$ But $E_{1,2}, E_{2,1} \in \cl S_0,$ while $E_{1,1} = E_{1,2}E_{2,1} \notin \cl S_0.$ This contradiction completes the proof. 
\end{proof} By Theorem \ref{main3}, we know that $\cl S_0^{**}$ is an injective von~Neumann algebra, so it is interesting to identify the precise algebra. \begin{thm} $\cl S_0^{**}$ is unitally completely order isomorphic to $\cl B(\ell^2(\bb N)).$ \end{thm} \begin{proof} We only prove that $\cl S_0^{**}$ is unitally order isomorphic to $\cl B(\ell^2(\bb N)).$ To this end, let $\cl S = \{ \lambda I + K: \lambda \in \bb C, K \in \cl K(\ell^2(\bb N)) \},$ denote the unital $C^*$-algebra spanned by the compact operators $\cl K(\ell^2(\bb N))$ and the identity. Thus, $\cl S_0 \subseteq \cl S$ is a codimension 1 subspace. As vector spaces, we have that $\cl S = \bb C \oplus \cl K(\ell^2(\bb N)),$ so that $\cl S^* = \bb C \oplus \cl T(\ell^2(\bb N)),$ where this latter space denotes the trace class operators. We let $\delta_{i,j}: \cl K(\ell^2(\bb N)) \to \bb C$ denote the linear functional satisfying \[\delta_{i,j}(E_{k,l}) = \begin{cases} 1& i=k, j=l\\ 0& \text{ otherwise} \end{cases} \] so that every element of $\cl K(\ell^2(\bb N))^*$ is of the form $\sum_{i,j} t_{i,j} \delta_{i,j}$ for some trace class matrix $T= (t_{i,j}).$ We identify $\cl S^* = \bb C \oplus \cl T(\ell^2(\bb N))$ where \[ \langle (\beta, T), \lambda I + K \rangle = \beta \lambda + \sum_{i,j} t_{i,j} k_{i,j} = \beta \lambda + {\rm tr}(T^tK) \] with $K=(k_{i,j})$. The functional $(\beta, T)$ is positive if and only if $T$ is a positive operator and $\beta \ge {\rm tr}(T).$ If $(\beta, T)$ is a positive functional on $\cl S$, then we have $$0 \le \langle (\beta,T), K \rangle = {\rm tr}(T^t K) \quad \text{and} \quad 0 \le \langle (\beta,T), I-I_n \rangle = \beta - {\rm tr}(T^t I_n)$$ for all positive compact operators $K$ and $n \in \bb N$. Let $\lambda I + K$ be a positive operator. Since $K$ is compact, we have $\lambda \ge 0$. 
The converse follows from $$\langle (\beta, T), \lambda I + K \rangle = \beta \lambda + {\rm tr}(T^t K) \ge {\rm tr}(T^t (\lambda I +K)) \ge 0.$$ Identify $\cl S_0^*$ with $\bb C \oplus \cl T_0$ where $\cl T_0$ denotes the trace class operators $T_0=(t_{i,j})$ with $t_{1,1} =0.$ Since every positive functional on $\cl S_0$ extends to a positive functional on $\cl S$ by the Krein theorem, we have that $(\beta, T_0)$ defines a positive functional if and only if there exists $\alpha \in \bb C,$ such that $T= T_0 + \alpha E_{1,1}$ is positive and $\beta \ge {\rm tr}(T_0) + \alpha.$ That is if and only if $\beta \ge {\rm tr}(T),$ where $T$ is some positive trace class operator equal to $T_0$ modulo the span of $E_{1,1}.$ In a similar fashion we may identify $\cl S_0^{**}$ as the vector space $\bb C \oplus \cl B_0$, where $X_0=(x_{i,j}) \in \cl B_0$ if and only if $X_0$ is bounded and $x_{1,1} =0.$ Moreover, $(\mu, X_0)$ will define a positive element of $\cl S_0^{**}$ if and only if \[ \mu \beta + \sum_{(i,j) \ne (1,1)} x_{i,j} t_{i,j} \ge 0,\] for every positive linear functional $(\beta, T_0).$ We claim that $(\mu, X_0)$ is positive if and only if $\mu I +X_0 \in \cl B(\ell^2(\bb N))$ is a positive operator. This will show that the bijection $$(\mu,X_0) \in \cl S_0^{**} \mapsto \mu I +X_0 \in \cl B(\ell^2(\bb N))$$ is an order isomorphism. Also, note that the identity of $\cl S_0^{**}$ is $(1,0),$ so that this map is unital. To see the claim, first let $(\mu, X_0) \in \cl S_0^{**}$ be positive. Given any $T= T_0 + \alpha E_{1,1}$ a positive trace class operator, let $\beta = \alpha + {\rm tr}(T_0) = {\rm tr}(T).$ Then $(\beta, T_0)$ is positive in $\cl S_0^*$ and, hence \[ 0 \le \mu \beta + \sum_{(i,j) \ne (1,1)} x_{i,j} t_{i,j} = \mu {\rm tr}(T) + {\rm tr}(X_0^t T) = {\rm tr}((\mu I+ X_0)^t T). 
\] Since $T$ was an arbitrary trace class operator, this shows that $\mu I +X_0$ is a positive operator in $\cl B(\ell^2(\bb N)).$ Conversely, if $\mu I + X_0$ is a positive operator, then for any positive $(\beta, T_0) \in \cl S_0^*$, pick $\alpha$ as above and set $T = \alpha E_{1,1} + T_0$. We have that \[ \mu \beta + \sum_{(i,j) \ne (1,1)} x_{i,j} t_{i,j} \ge \mu {\rm tr}(T) + {\rm tr}(X_0^t T) = {\rm tr}((\mu I + X_0)^t T) \ge 0, \] since both operators are positive. This completes the proof of the claim and of the theorem. \end{proof} \end{document}
\begin{document} \title{Beta Laguerre ensembles in the global regime} \begin{abstract} Beta Laguerre ensembles, generalizations of Wishart and Laguerre ensembles, can be realized as eigenvalues of certain random tridiagonal matrices. Analogous to the Wishart ($\beta=1$) case and the Laguerre $(\beta = 2)$ case, for fixed $\beta$, it is known that the empirical distribution of the eigenvalues of the ensembles converges weakly to Marchenko--Pastur distributions, almost surely. This paper revisits the limiting behavior of the empirical distribution in regimes where the parameter $\beta$ is allowed to vary as a function of the matrix size $N$. We show that the above Marchenko--Pastur law holds as long as $\beta N \to \infty$. When $\beta N \to 2c \in (0, \infty)$, the limiting measure is related to associated Laguerre orthogonal polynomials. Gaussian fluctuations around the limit are also studied. \noindent{\bf Keywords:} Beta Laguerre ensembles, Marchenko--Pastur distributions, associated Laguerre orthogonal polynomials, Poincar\'e inequality \noindent{\bf AMS Subject Classification: } Primary 60B20; Secondary 60F05 \end{abstract} \section{Introduction} Beta Laguerre ($\beta$-Laguerre) ensembles are ensembles of $N$ positive particles distributed according to the following joint probability density function \begin{equation}\label{bLE} \frac{1}{Z_{N, M}^{(\beta)}}\prod_{i < j}|\lambda_j - \lambda_i|^\beta \prod_{i = 1}^N \left( \lambda_i^{\frac{\beta}{2}(M-N+1) - 1} e^{-\frac{\beta M}{2} \lambda_i} \right), \quad (\lambda_i > 0), \end{equation} where $\beta > 0$ and $M > N -1$ are parameters, and $Z_{N, M}^{(\beta)}$ is the normalizing constant. For the two special values $\beta = 1,2$, this is the joint density of the eigenvalues of Wishart matrices and Laguerre matrices, respectively.
For general $\beta > 0$, the ensembles can be realized as eigenvalues of a random tridiagonal matrix model $J_N = B_N (B_N)^t$ with a bidiagonal matrix $B_N$ consisting of independent entries distributed as \[ B_N = \frac{1}{\sqrt{\beta M}} \begin{pmatrix} \chi_{\beta M} \\ \chi_{\beta (N-1)} &\chi_{\beta(M - 1)} \\ &\ddots &\ddots\\ &&\chi_\beta &\chi_{\beta(M - N + 1)} \end{pmatrix}, \] where $\chi_k$ denotes the chi distribution with $k$ degrees of freedom, and $(B_N)^t$ denotes the transpose of $B_N$. Wishart matrices (resp.~Laguerre matrices) are random matrices of the form $M^{-1}G^{t}G$ (resp.~$M^{-1}G^{*}G$), where $G$ is an $M \times N$ matrix consisting of i.i.d.~(independent identically distributed) entries of standard real (resp.~complex) Gaussian distribution. Here $G^*$ denotes the Hermitian conjugate of the matrix $G$. These random matrix models can be defined in a different way as invariant probability measures on the set of symmetric (resp.~Hermitian) matrices. The limiting behavior of their eigenvalues has been studied intensively, and it is known that as $N \to \infty$ with $N/M \to \gamma \in (0,1)$, the empirical distribution \[ L_N = \frac{1}{N}\sum_{i = 1}^N \delta_{\lambda_i} \] converges to the Marchenko--Pastur distribution with parameter $\gamma$, almost surely. Here $\delta_\lambda$ denotes the Dirac measure at $\lambda$. The convergence means that for any bounded continuous function $f$, \[ \int f(x) dL_N(x) = \frac{1}{N} \sum_{i = 1}^N f(\lambda_i) \to \int f(x) mp_\gamma(x) dx\quad \text{as}\quad N \to \infty, \text{almost surely,} \] where $mp_\gamma(x)$ is the density of the Marchenko--Pastur distribution with parameter $\gamma$, \[ mp_\gamma(x) = \frac{1}{2\pi \gamma x} \sqrt{(\lambda_+ - x)(x - \lambda_-)}, \quad (\lambda_- < x < \lambda_+), \quad \lambda_{\pm} = (1 \pm \sqrt \gamma)^2. \] Gaussian fluctuations around the limit were also established with explicit formula for the limiting variance. 
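As a quick numerical illustration of this classical convergence (a sketch only; the matrix sizes, random seed, and discretization below are arbitrary choices), one can compare moments of the Wishart eigenvalues with moments of the Marchenko--Pastur density:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.5
N, M = 400, 800                      # N/M = gamma; sizes are illustrative

# Wishart matrix M^{-1} G^t G, with G an M x N matrix of i.i.d. standard real Gaussians
G = rng.standard_normal((M, N))
eigs = np.linalg.eigvalsh(G.T @ G / M)

# Marchenko--Pastur density mp_gamma on (lambda_-, lambda_+)
lam_m, lam_p = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2
x = np.linspace(lam_m, lam_p, 200_001)[1:-1]   # open interval, avoids the endpoints
mp = np.sqrt((lam_p - x) * (x - lam_m)) / (2 * np.pi * gamma * x)
dx = x[1] - x[0]

print(mp.sum() * dx)                       # total mass: approximately 1
print(eigs.mean(), (x * mp).sum() * dx)    # first moments: both approximately 1
```

The empirical second moment similarly approaches $\int x^2\, mp_\gamma(x)\,dx = 1 + \gamma$ as $N$ grows.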
The convergence to a limit and fluctuations around the limit of the empirical distributions are two typical problems in the global regime when a random matrix model is considered. Refer to the book \cite{Pastur-book} for more details on Wishart and Laguerre matrices. The convergence to Marchenko--Pastur distributions and fluctuations around the limit were extended to $\beta$-Laguerre ensembles for general $\beta> 0$ in \cite{DE06} by using the random tridiagonal matrix model. Based on the same model, results in the local regime and the edge regime were also established \cite{Jacquot-Valko-2011, Sosoe-Wong-2014}. Note that the parameter $\beta$ is fixed in those studies. The aim of this paper is to refine results in the global regime of $\beta$-Laguerre ensembles. We assume that the parameter $\beta$ varies as a function of $N$, while the parameter $M$ depends on $N$ in such a way that $N/M \to \gamma \in (0,1)$ as usual. We show that the Marchenko--Pastur law holds as long as $\beta N \to \infty$. When $\beta N $ stays bounded, the limiting measure is related to associated Laguerre orthogonal polynomials. For the proof, we extend some ideas that were used in \cite{DS15, Nakano-Trinh-2018, Trinh-2017} in the case of Gaussian beta ensembles. Our main results are stated in the following two theorems. \begin{theorem}[Convergence to a limit distribution]\label{thm:LLN} \begin{itemize} \item[\rm(i)] As $\beta N \to \infty$, the empirical distribution $L_N$ converges weakly to the Marchenko--Pastur distribution with parameter $\gamma$, almost surely. Here $L_N = N^{-1} \sum_{i = 1}^N \delta_{\lambda_i}$ is the empirical distribution of the $\beta$-Laguerre ensembles~\eqref{bLE}. \item[\rm(ii)] As $\beta N \to 2c \in (0,\infty)$, the sequence $\{L_N\}$ converges weakly to the probability measure $\nu_{\gamma, c}$, almost surely, where $\nu_{\gamma, c}$ is given in Theorem~{\rm\ref{thm:2c-unscale}}.
\end{itemize} \end{theorem} To prove the above results, it is sufficient to show that any moment of $L_N$ converges almost surely to the corresponding moment of $\nu_{\gamma, c}$. Here, for convenience, we refer to $\nu_{\gamma, \infty}$ as the Marchenko--Pastur distribution with parameter $\gamma$. It then follows that (see \cite{Trinh-ojm-2018}) for any continuous function $f$ of polynomial growth (there is a polynomial $P(x)$ such that $|f(x)| \le P(x)$ for all $x \in \R$), \[ \int f(x) dL_N(x) = \frac{1}{N} \sum_{i = 1}^N f(\lambda_i) \to \int f(x) d\nu_{\gamma, c}(x)\quad \text{as}\quad N \to \infty, \text{almost surely.} \] \begin{theorem}[Gaussian fluctuations around the limit] \label{thm:CLT-full} Assume that the function $f$ has a continuous derivative of polynomial growth. Then the following hold. \begin{itemize} \item[\rm(i)] As $\beta N \to \infty$, \[ \sqrt{\beta} \left( \sum_{i = 1}^N f(\lambda_i) - \Ex\bigg[ \sum_{i = 1}^N f(\lambda_i) \bigg] \right) \dto \Normal(0, \sigma_f^2), \] where \[ \sigma_f^2 = \frac{1}{2\pi^2} \int_{\gamma_-}^{\gamma_+}\int_{\gamma_-}^{\gamma_+} \left(\frac{f(y) - f(x)}{y - x}\right)^2 \frac{4\gamma - (x - \gamma_m)(y - \gamma_m)}{\sqrt{4 \gamma - (x - \gamma_m)^2} \sqrt{4 \gamma - (y - \gamma_m)^2}}dx dy, \] with $\gamma_{\pm} = (1 \pm \sqrt{\gamma})^2$ and $\gamma_m = (\gamma_{-} + \gamma_{+})/2 = 1 + \gamma$. Here `$\dto$' denotes convergence in distribution. \item[\rm(ii)] As $\beta N \to 2c \in (0, \infty)$, \[ \sqrt{\beta} \left( \sum_{i = 1}^N f(\lambda_i) - \Ex\bigg[ \sum_{i = 1}^N f(\lambda_i) \bigg] \right) \dto \Normal(0, \sigma_{f,c}^2), \] where $\sigma_{f,c}^2$ is a constant. \end{itemize} \end{theorem} The paper is organized as follows. In the next section, we introduce the random tridiagonal matrix model and related concepts. Results on convergence to a limit and Gaussian fluctuations around the limit are shown in Section~3 and Section~4, respectively.
\section{Random tridiagonal matrix model and spectral measures} \subsection{Random tridiagonal matrix model} Let $G$ be an $M\times N$ random matrix consisting of i.i.d.\ real standard Gaussian random variables. Then $X = M^{-1} G^t G$ is called a Wishart matrix. When $M \ge N$, the eigenvalues $\lambda_1, \dots, \lambda_N$ of $X$ have the following joint density \[ \frac{1}{Z_{M, N}} |\Delta(\lambda)| \prod_{i = 1}^N \left( \lambda_i^{\frac12(M - N + 1) - 1} e^{-\frac M2 \lambda_i} \right),\quad (\lambda_i > 0), \] where $Z_{M,N}$ is the normalizing constant, and $\Delta(\lambda)=\prod_{i < j}(\lambda_j - \lambda_i)$ denotes the Vandermonde determinant. A Laguerre matrix $X = M^{-1} G^* G$ is obtained when the entries of $G$ are i.i.d.\ standard complex Gaussian random variables, where recall that $G^*$ stands for the Hermitian conjugate of $G$. In this case, the joint density of the eigenvalues is given by a formula analogous to the Wishart case. Then $\beta$-Laguerre ensembles are defined to be ensembles of $N$ positive particles with the joint density \begin{equation}\label{bLE-2} \frac{1}{Z_{M, N}^{(\beta)}} |\Delta(\lambda)|^\beta \prod_{i = 1}^N \left( \lambda_i^{\frac\beta2(M - N + 1) - 1} e^{-\frac {\beta M}{2} \lambda_i} \right),\quad (\lambda_i > 0), \end{equation} where $\beta > 0$ and $M > N - 1$, which generalizes Wishart $(\beta = 1)$ and Laguerre $(\beta = 2)$ ensembles. A random tridiagonal matrix model for $\beta$-Laguerre ensembles was introduced in \cite{DE02} based on tridiagonalizing Wishart or Laguerre matrices. Let \[ B_N = \frac{1}{\sqrt{\beta M}} \begin{pmatrix} \chi_{\beta M} \\ \chi_{\beta (N-1)} &\chi_{\beta(M - 1)} \\ &\ddots &\ddots\\ &&\chi_\beta &\chi_{\beta(M - N + 1)} \end{pmatrix} \] denote a random bidiagonal matrix with independent entries. Then the eigenvalues of the tridiagonal matrix $J_N = B_N (B_N)^t$ are distributed as the $\beta$-Laguerre ensembles~\eqref{bLE-2}.
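The bidiagonal model above is straightforward to simulate. The following sketch (illustrative sizes and seed; NumPy draws $\chi_k$ as the square root of a $\chi^2_k$ variable) samples the ensemble via $J_N = B_N (B_N)^t$:

```python
import numpy as np

def beta_laguerre_eigs(N, M, beta, rng):
    """Sample the beta-Laguerre ensemble as the eigenvalues of J_N = B_N B_N^t."""
    # diagonal of B_N: chi_{beta M}, chi_{beta(M-1)}, ..., chi_{beta(M-N+1)}
    diag = np.sqrt(rng.chisquare(beta * (M - np.arange(N))))
    # subdiagonal of B_N: chi_{beta(N-1)}, chi_{beta(N-2)}, ..., chi_beta
    sub = np.sqrt(rng.chisquare(beta * (N - 1 - np.arange(N - 1))))
    B = (np.diag(diag) + np.diag(sub, -1)) / np.sqrt(beta * M)
    return np.linalg.eigvalsh(B @ B.T)

eigs = beta_laguerre_eigs(N=500, M=1000, beta=4.0, rng=np.random.default_rng(1))
print(eigs.mean())   # concentrates near 1, the first Marchenko--Pastur moment
```

Since only an $N \times N$ tridiagonal matrix is diagonalized, sampling is far cheaper than forming a dense $M \times N$ Gaussian matrix when $\beta = 1, 2$.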
Using this random matrix model and combinatorial arguments, the convergence to Marchenko--Pastur distributions and Gaussian fluctuations around the limit (only for polynomial test functions) were established in \cite{DE06}. \subsection{Spectral measures of Jacobi matrices} A symmetric tridiagonal matrix is called a Jacobi matrix. Spectral measures of Jacobi matrices $\{J_N\}$ have been studied recently. Here the spectral measure of $J_N$ is defined to be the probability measure $\mu_N$ satisfying \[ \int x^k d\mu_N(x) = (J_N)^k(1,1), \quad k = 0,1,2,\dots. \] It follows from the spectral decomposition of $J_N$ that $\mu_N$ has the following form \[ \mu_N = \sum_{i = 1}^N q_i^2 \delta_{\lambda_i},\quad (q_i^2 = v_i(1)^2), \] where $v_1, \dots, v_N$ are the normalized eigenvectors of $J_N$ corresponding to the eigenvalues $\lambda_1, \dots, \lambda_N$. Spectral measures can also be defined for infinite Jacobi matrices. Let $J$ be an infinite Jacobi matrix, \[ J = \begin{pmatrix} a_1 &b_1 \\ b_1 &a_2 &b_2\\ &\ddots &\ddots &\ddots \end{pmatrix}, \quad a_i \in \R, b_i > 0. \] Then there exists a probability measure $\mu$ such that \[ \int x^k d\mu(x) = J^k(1,1), \quad k = 0,1,\dots. \] When the above moment problem has a unique solution $\mu$, the measure $\mu$ is called the spectral measure of $J$. A simple, but useful sufficient condition for the uniqueness is given by \cite[Corollary 3.8.9]{Simon-book-2011} \[ \sum_{n=1}^\infty \frac{1}{b_n} = \infty. \] By definition, moments of the spectral measure $\mu_N$ depend locally on upper left entries of $J_N$, and thus, the limiting behavior of $\mu_N$ follows easily from those of entries. 
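The defining identity $\int x^k\, d\mu_N = (J_N)^k(1,1)$, together with the representation $\mu_N = \sum_i q_i^2 \delta_{\lambda_i}$, is easy to check directly on any finite Jacobi matrix; a short sketch (the entries below are arbitrary random choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
a = rng.standard_normal(n)            # arbitrary real diagonal
b = rng.random(n - 1) + 0.1           # positive off-diagonal
J = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

lam, V = np.linalg.eigh(J)            # columns of V: normalized eigenvectors
q2 = V[0, :] ** 2                     # weights q_i^2 = v_i(1)^2 of the spectral measure

for k in range(5):
    moment_via_weights = np.sum(q2 * lam ** k)            # \int x^k d(mu_N)
    moment_via_matrix = np.linalg.matrix_power(J, k)[0, 0]
    assert np.isclose(moment_via_weights, moment_via_matrix)

print(q2.sum())   # the weights sum to 1
```

The locality of $(J_N)^k(1,1)$ is visible here as well: only the upper-left block of $J$ of size about $k/2 + 1$ enters the $k$-th moment.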
In particular, for fixed $\beta$, as $N \to \infty$ with $N/M \to \gamma \in (0,1)$, entry-wisely, \[ B_N = \frac{1}{\sqrt{\beta M}} \begin{pmatrix} \chi_{\beta M} \\ \chi_{\beta (N-1)} &\chi_{\beta(M - 1)} \\ &\ddots &\ddots\\ &&\chi_\beta &\chi_{\beta(M - N + 1)} \end{pmatrix} \to \begin{pmatrix} 1 \\ \sqrt{\gamma} &1 \\ &\ddots &\ddots\\ \end{pmatrix}. \] Here the convergence holds almost surely and in $L_q$ for any $q \in [1, \infty)$. It follows that almost surely, the spectral measure $\mu_N$ converges weakly to the spectral measure of the following infinite Jacobi matrix \begin{equation}\label{MP-gamma} MP_\gamma = \begin{pmatrix} 1\\ \sqrt{\gamma} &1\\ &\ddots &\ddots \end{pmatrix} \begin{pmatrix} 1 &\sqrt{\gamma}\\ &1 &\sqrt{\gamma} \\ &&\ddots &\ddots \end{pmatrix} = \begin{pmatrix} 1 &\sqrt{\gamma} \\ \sqrt{\gamma} &1+\gamma &\sqrt{\gamma}\\ &\sqrt{\gamma} &1+\gamma &\sqrt{\gamma}\\ &&\ddots &\ddots &\ddots \end{pmatrix}, \end{equation} which is nothing but the Marchenko--Pastur distribution with parameter $\gamma$ \cite{Trinh-ojm-2018}. For the Jacobi matrix $J_N$, the weights $q_1^2, \dots, q_N^2$ in the spectral measure $\mu_N$ are independent of the eigenvalues and have Dirichlet distribution with parameter $\beta/2$. From which, the empirical distribution $L_N$ and the spectral measure $\mu_N$ have the following relations \begin{align} \Ex[\bra{L_N, f}] &= \Ex[\bra{\mu_N, f}] , \label{same-mean}\\ \Var[\bra{L_N, f}] &= \frac{\beta N + 2}{\beta N} \Var[\bra{\mu_N, f}] - \frac{2}{\beta N} \left(\Ex[\bra{\mu_N, f^2}] - \Ex[\bra{\mu_N, f}]^2 \right), \label{relation-of-variance} \end{align} for suitable test functions $f$. Here we use the notation $\bra{\mu, f}$ to denote the integral $\int f d\mu$ for a measure $\mu$ and an integrable function $f$. We conclude this section by giving several remarks on spectral measures of Jacobi matrices. 
The spectral measure $\mu$ orthogonalizes the sequence of polynomials $\{P_n\}_{n \ge 0}$ defined by \begin{align*} &P_0 = 1, \quad P_1 = x - a_1,\\ &P_{n+1} = x P_n - a_{n + 1}P_n - b_n^2 P_{n - 1}, \quad n \ge 1. \end{align*} In a particular case, $a_n =(\alpha + 2n-1)$ and $b_n = \sqrt{n}\sqrt{\alpha + n}$, the sequence of polynomials $\{L_n\}$ defined by \begin{align*} &L_0 = 1, \quad L_1 = x - (\alpha + 1),\\ &L_{n+1} = x L_n - (\alpha + 2n + 1)L_n - n(\alpha + n) L_{n - 1}, \quad n \ge 1, \end{align*} coincides with the sequence of scaled generalized Laguerre polynomials. The spectral measure in this case is the Gamma distribution with parameters $(\alpha+1, 1)$, that is, the probability measure with density $\Gamma(\alpha + 1)^{-1} x^\alpha e^{-x}, x > 0$. In other words, the Gamma distribution with parameters $(\alpha + 1,1)$ is the spectral measure of the following infinite Jacobi matrix $J_\alpha$, \begin{align*} J_\alpha &= \begin{pmatrix} \alpha + 1 & \sqrt{\alpha + 1} \\ \sqrt{\alpha + 1} &\alpha + 3 & \sqrt{2}\sqrt{ \alpha + 2} \\ &&\ddots &\ddots &\ddots \end{pmatrix} \\ &= \begin{pmatrix} \sqrt{\alpha + 1} \\ \sqrt 1 &\sqrt{\alpha + 2} \\ &\ddots &\ddots \end{pmatrix} \begin{pmatrix} \sqrt{\alpha + 1} &\sqrt1 \\ &\sqrt{\alpha + 2} &\sqrt{2} \\ &&\ddots &\ddots \end{pmatrix}. \end{align*} When the entries of $J_\alpha$ are `shifted' by a constant $c \in \R$, the resulting orthogonal polynomials are called associated Laguerre polynomials. The spectral measure in this case was explicitly calculated in \cite{Ismail-et-al-1988} as Model II for associated Laguerre orthogonal polynomials. 
\begin{lemma}[\cite{Ismail-et-al-1988}]\label{lem:associated-Laguerre} For $c > -1$ and $\alpha + c + 1 > 0$, let \[ J_{\alpha, c} = \begin{pmatrix} \sqrt{\alpha + c + 1} \\ \sqrt{c + 1} & \sqrt{\alpha + c + 2}\\ &\ddots &\ddots \end{pmatrix} \begin{pmatrix} \sqrt{\alpha + c + 1} & \sqrt{c + 1} \\ &\sqrt{\alpha + c + 2} &\sqrt{c+2}\\ &&\ddots &\ddots \end{pmatrix}, \] and $\mu_{\alpha,c}$ be its spectral measure. Then the density and the Stieltjes transform of $\mu_{\alpha, c}$ are given by \begin{align*} \mu_{\alpha, c}(x) &= \frac{1}{\Gamma(c+1) \Gamma(1+c+\alpha)} \frac{x^{\alpha} e^{-x}}{|\Psi(c, -\alpha; x e^{-i \pi})|^2}, \quad x \ge 0,\\ S_{\mu_{\alpha, c}}(z) &= \int_0^\infty \frac{\mu_{\alpha, c} (x) dx}{x - z} = \frac{\Psi(c+1, 1 -\alpha; -z)}{\Psi(c, -\alpha; -z)}, \quad z \in \C \setminus \R. \end{align*} Here $\Psi(a, b; z)$ is Tricomi's confluent hypergeometric function. \end{lemma} Note that when $\alpha$ is not an integer, an alternative formula for $\Psi(a, b; z)$ could be used \begin{align*} \Psi(c, - \alpha; xe^{-i \pi}) &= \frac{\Gamma(\alpha + 1)}{\Gamma(\alpha + c + 1)} \,_1F_1(c; -\alpha; -x) \\ &\qquad - \frac{\Gamma(-\alpha - 1)}{\Gamma(c)} x^{\alpha + 1} e^{-i\pi \alpha} \,_1F_1 (\alpha + c + 1; 2+\alpha; -x) , \end{align*} where ${}_1F_1(a; b; z)$ is the Kummer function. \section{Convergence to a limit distribution} In what follows, the parameter $M$ depends on $N$ in the way that $N/M \to \gamma \in (0,1)$ as $N \to \infty$. We study the limiting behavior of the empirical distribution $L_N$ through its moments $\bra{L_N, x^r}, r=0,1,2,\dots$. Recall from the identity \eqref{same-mean} that for $r = 0,1,2,\dots,$ \[ \Ex[\bra{L_N, x^r}] = \Ex \bigg[\frac1N \tr [(J_N)^r] \bigg] = \Ex[\bra{\mu_N, x^r}] = \Ex[(J_N)^r(1,1)]. \] Here $\tr[A]$ denotes the trace of a matrix $A$. \subsection{The Marchenko--Pastur regime, $\beta N \to \infty$} A key observation in this regime is the following asymptotic behavior of chi distributions. 
\begin{lemma} As $k \to \infty$, \[ \frac{\chi_k}{\sqrt{k}} \to 1 \quad \text{ in $L^q$ for any $1 \le q < \infty$.} \] \end{lemma} Let $\{c_i\}_{i = 1}^N$ and $\{d_i\}_{i = 1}^{N-1}$ be the diagonal and the sub-diagonal of $B_N$, respectively. Note that although we do not write explicitly, $\{c_i\}$ and $\{d_i\}$ depend on the triple $(N, M, \beta)$. It is clear that as $\beta N \to \infty$, \[ B_N = \frac{1}{\sqrt{\beta M}} \begin{pmatrix} \chi_{\beta M} \\ \chi_{\beta (N-1)} &\chi_{\beta(M - 1)} \\ &\ddots &\ddots\\ &&\chi_\beta &\chi_{\beta(M - N + 1)} \end{pmatrix} \to \begin{pmatrix} 1 \\ \sqrt{\gamma} &1 \\ &\ddots &\ddots\\ \end{pmatrix}. \] Here the convergence means the pointwise convergence of entries, which holds in $L^q$ for any $q \in [1, \infty)$, that is, for fixed $i$, as $N \to \infty$ with $\beta N \to \infty$, \begin{equation}\label{Lp-convergence-of-cd} c_i \to 1, \quad d_i \to \sqrt{\gamma} \quad \text{in $L^q$ for $q\in [1, \infty)$.} \end{equation} Since $J_N$ is a tridiagonal matrix of the form \[ J_N = B_N (B_N)^t = \begin{pmatrix} c_1^2 &c_1d_1\\ c_1d_1 &c_2 ^2 + d_1^2 &c_2d_2\\ &\ddots &\ddots &\ddots\\ &&c_{N-1}d_{N - 1} &c_N^2 + d_{N-1}^2 \end{pmatrix}, \] it follows that for fixed $r \in \N$, when $N>r$, $(J_N)^r(1,1)$ is a polynomial of $\{c_i, d_i\}_{i \le \frac{r+1}2}$. Let us show some explicit formulae for $(J_N)^r(1,1)$, \begin{align*} J_N(1,1) &= c_1^2,\\ (J_N)^2(1,1) &= c_1^4 + c_1^2 d_1^2,\\ (J_N)^3(1,1) &= c_1^6 + 2 c_1^4 d_1^2 + c_1^2 c_2^2 d_1^2 + c_1^2 d_1^4. \end{align*} Then the pointwise convergence \eqref{Lp-convergence-of-cd} implies that as $N \to \infty$ with $\beta N \to \infty$, \[ (J_N)^r(1,1) \to (MP_\gamma)^r(1,1) \quad \text{in $L^q$ for any $q \in [1,\infty)$,} \] where recall that the Jacobi matrix $MP_\gamma$ is given in \eqref{MP-gamma}. Consequently, as $N \to \infty$, \[ \Ex[(J_N)^r(1,1) ] \to (MP_\gamma)^r(1,1). 
\] Recall also that the spectral measure of $MP_\gamma$ is the Marchenko--Pastur distribution with parameter $\gamma$. Therefore, in this regime, the following convergence of the mean values holds. \begin{lemma}\label{lem:mean-infinity} For any $r \in \N$, as $N \to \infty$ with $\beta N \to \infty$, \[ \Ex[\bra{L_N, x^r}] = \Ex\left[\frac1N \tr[ (J_N)^r] \right ] = \Ex[(J_N)^r(1,1) ] \to \bra{mp_\gamma, x^r}. \] \end{lemma} \subsection{High temperature regime, $\beta N \to 2c \in (0, \infty)$} Let \[ B_\infty :=\begin{pmatrix} \chitilde_{\frac{2c}\gamma} \\ \chitilde_{2c} &\chitilde_{\frac{2c}\gamma}\\ &\ddots &\ddots \end{pmatrix}, \quad \chitilde_k = \frac{1}{\sqrt2} \chi_k, \] be an infinite bidiagonal matrix with independent entries. Then as $\beta N \to 2c$, \[ B_N = \frac{1}{\sqrt{\beta M}} \begin{pmatrix} \chi_{\beta M} \\ \chi_{\beta (N-1)} &\chi_{\beta(M - 1)} \\ &\ddots &\ddots\\ &&\chi_\beta &\chi_{\beta(M - N + 1)} \end{pmatrix} \dto \frac{\sqrt{\gamma}}{\sqrt c}B_\infty, \] meaning that entries of $B_N$ converge in distribution to the corresponding entries of the infinite matrix $\frac{\sqrt{\gamma}}{\sqrt c}B_\infty$. Recall that $(J_N)^r(1,1)$ is a polynomial of $\{c_i, d_i\}_{i \le \frac{r+1}2}$. Moreover, the entries of $B_N$, as well as those of $B_\infty$, are independent. Therefore we can deduce that as $\beta N \to 2c$, \begin{align*} (J_N)^r(1,1) &\dto (J_\infty)^r(1,1),\\ \Ex[(J_N)^r(1,1) ] &\to \Ex[(J_\infty)^r(1,1) ], \end{align*} where \[ J_\infty = \frac{\gamma}{c} B_\infty (B_\infty)^t. \] By this approach, we have just shown the existence of the limit of the mean values when $\beta N \to 2c$. However, we are not able to identify the limit directly from $J_\infty$. To identify the limit, we now extend some ideas that were used in \cite{DS15} in the case of Gaussian beta ensembles to show the following.
\begin{theorem}\label{thm:2c-unscale} Let $\nu_{\gamma, c}$ be the spectral measure of the following Jacobi matrix \[ \frac{\gamma}{c} \begin{pmatrix} \sqrt{\frac{c}\gamma } \\ \sqrt{c + 1} & \sqrt{\frac c{\gamma} + 1}\\ &\ddots &\ddots \end{pmatrix} \begin{pmatrix} \sqrt{\frac{c}\gamma } &\sqrt{c + 1} \\ & \sqrt{\frac c{\gamma} + 1} &\sqrt{c + 2}\\ &&\ddots &\ddots \end{pmatrix} = \frac \gamma c J_{\frac c\gamma, c}. \] Then for any $r \in \N$, as $N \to \infty$ with $\beta N \to 2c$, \[ \Ex[\bra{L_N, x^r}] = \Ex \left[ \frac1N \tr [(J_N)^r ] \right] = \Ex[(J_N)^r(1,1) ] \to \bra{\nu_{\gamma, c} , x^r}. \] \end{theorem} Theorem~\ref{thm:2c-unscale} is equivalent to Theorem~\ref{thm:2c-scale} below which states a result for scaled $\beta$-Laguerre ensembles. Let us switch to the scaled version. Let $\tilde J_N := \tilde B_N (\tilde B_N)^t$ be a Jacobi matrix, where \[ \tilde B_N = \begin{pmatrix} \chitilde_{2\alpha + 2 + 2\kappa(N-1)} \\ \chitilde_{2\kappa (N-1)} &\chitilde_{2\alpha + 2 + 2\kappa(N-2)} \\ &\ddots &\ddots \\ &&\chitilde_{2\kappa} &\chitilde_{2\alpha + 2} \end{pmatrix}, \] (with $\kappa = \beta / 2 $ and $ \alpha = \frac{\beta}{2}(M - N + 1) - 1 = \kappa(M - N + 1) - 1)$. Then the joint density of the eigenvalues of $\tilde J_N$ is proportional to \[ |\Delta(\tilde\lambda)|^{2\kappa} \prod_{i = 1}^N \left( \tilde\lambda_i^{\alpha} e^{-\tilde\lambda_i} \right),\quad \tilde \lambda_i > 0. \] Let $\tilde \mu_N$ be the spectral measure of $\tilde J_N$ and \[ m_r(N, \kappa, \alpha) = \Ex[\bra{\tilde\mu_N, x^r}] = \Ex[(\tilde J_N)^r(1,1)]. \] Then $m_r(N, \kappa, \alpha)$ satisfies the following duality relation. \begin{lemma}[cf.~{\cite[Theorem~2.11]{DE06}}] The function $m_r(N, \kappa, \alpha)$ is a polynomial with respect to $N, \kappa$ and $\alpha$ and satisfies the following relation \[ m_r(N, \kappa, \alpha) = (-1)^r \kappa^r m_r(- \kappa N, \kappa^{-1}, -\alpha/\kappa). 
\] \end{lemma} Based on the above duality relation, we now identify the limit of $m_r(N, \kappa, \alpha) $ in the regime where $\kappa N \to c$. For fixed $N$, it is straightforward to calculate the limit of $\kappa^{-1/2} \tilde B_N$ with parameters $(N, \kappa, a \kappa)$ as $\kappa \to \infty$, where $a$ is fixed, \[ \left( \frac{1}{\sqrt{\kappa}} \tilde B_N \right) \to \begin{pmatrix} \sqrt{a + N - 1} \\ \sqrt{N - 1} & \sqrt{a + N - 2}\\ &\ddots &\ddots\\ &&\sqrt{1} &\sqrt{a} \end{pmatrix} =: D_N(a). \] Here the convergence holds entry-wise in $L^q$. Therefore, \[ \lim_{\kappa \to \infty} \kappa^{-r} m_r(N, \kappa, a \kappa) = \lim_{\kappa \to \infty} \Ex[\kappa^{-r} (\tilde J_N)^r(1,1)] = (D_N(a) D_N(a)^t)^r(1,1). \] Next, in view of the duality relation, it holds that \[ \lim_{N \to \infty, \kappa N \to c} m_r(N, \kappa, \alpha)= \lim_{\kappa \to \infty} (-1)^r \kappa^{-r} m_r(- c, \kappa, -\alpha \kappa). \] Let us consider the following infinite matrix, obtained by exchanging $N \leftrightarrow -c, a \leftrightarrow -\alpha$ and changing the signs inside the square roots accordingly, \[ W_{\alpha, c}= \begin{pmatrix} \sqrt{\alpha + c + 1} \\ \sqrt{c + 1} & \sqrt{\alpha + c + 2}\\ &\sqrt{c + 2} &\sqrt{\alpha + c + 3} \\ &&\ddots &\ddots \end{pmatrix}. \] Let $J_{\alpha, c} = W_{\alpha, c} W_{\alpha, c}^t$ and $l_r(\alpha, c)= (J_{\alpha, c})^r(1,1)$. Then it follows that \[ \lim_{\kappa N \to c} m_r(N, \kappa, \alpha) = \lim_{\kappa \to \infty} (-1)^r \kappa^{-r} m_r(- c, \kappa, -\alpha \kappa) = l_{r}(\alpha,c). \] In conclusion, we have just proved the following result. \begin{theorem}\label{thm:2c-scale} Let $\alpha > - 1$ and $c \ge 0$ be given. Then in the regime where $\beta N \to 2c$, \[ \Ex\left[ \frac 1N \tr ((\tilde J_N)^r)\right] = \Ex[(\tilde J_N)^r(1,1)] \to \bra{\mu_{\alpha, c}, x^r}, \] for any $r \in \N$.
Here recall that $\mu_{\alpha, c}$ is the spectral measure of $J_{\alpha, c}$ whose density is given in Lemma~{\rm\ref{lem:associated-Laguerre}}. \end{theorem} The limiting measure in this regime was calculated in \cite{Allez-Wishart-2013} by a different method. \subsection{Almost sure convergence} In this subsection, we complete the proof of Theorem~\ref{thm:LLN} by showing the following almost sure convergence. \begin{lemma} For any $r \in \N$, as $N\beta \to 2c \in (0, \infty]$, \[ S_N := \frac{1}{N} \tr[(J_N)^r] - \Ex\left [\frac{1}{N} \tr[(J_N)^r] \right] \to 0,\quad \text{almost surely.} \] \end{lemma} The idea is that for fixed $r$, $p_i := (J_N)^r(i, i)$ is independent of $p_j=(J_N)^r(j, j)$ whenever $|i - j| \ge D_r$, where $D_r$ is a constant depending only on $r$. Then write $S_N$ as a sum of $D_r$ sums of independent random variables \[ S_N = \frac{1}{N}\sum_{i} (p_{1+i D_r} - \Ex[p_{1+i D_r}]) + \cdots + \frac{1}{N}\sum_{i} (p_{D_r+i D_r} - \Ex[p_{D_r+i D_r} ]). \] To each sum of independent random variables, we apply the following result, whose proof can be found in the proof of Theorem 2.3 in \cite{Trinh-2017}, to obtain the almost sure convergence. \begin{lemma} Assume that for each $N$, the random variables $\{\xi_{N, i}\}_{i = 1}^{\ell_N}$ are independent and that \begin{equation}\label{4-moment} \sup_{N} \sup_{1\le i \le \ell_N} \Ex[(\xi_{N, i})^4] < \infty. \end{equation} Assume further that $\ell_N / N \to const \in (0, \infty)$ as $N \to \infty$. Then as $N \to \infty$, \[ \frac{1}N \sum_{i = 1}^{\ell_N} \left(\xi_{N, i} - \Ex[\xi_{N, i}] \right) \to 0, \quad \text{almost surely}. \] \end{lemma} The moment condition \eqref{4-moment} is easily checked for $p_i$ in the regime $\beta N \to 2c \in (0, \infty]$. The desired almost sure convergence then follows immediately. \section{Gaussian fluctuations around the limit} \subsection{Polynomial test functions} In this subsection, we establish central limit theorems (CLTs) for polynomial test functions.
Since the arguments are similar to those used in \cite{Trinh-2017} in the case of Gaussian beta ensembles, we only sketch the main steps. Without loss of generality, assume that $M = N/\gamma$, where $\gamma \in (0,1)$ is fixed. Let us write $B_N$ as \[ B_N = \frac{\sqrt{\gamma}}{\sqrt{\beta N}}\begin{pmatrix} c_1 \\ d_1 &c_2 \\ &\ddots &\ddots\\ &&d_{N - 1} &c_N \end{pmatrix} ,\quad \text{where } \begin{cases} c_i \sim \chi_{\frac{\beta N}{\gamma} - \beta(i - 1)} ,\\ d_i \sim \chi_{\beta N - \beta i}. \end{cases} \] Then \[ J_N = B_N (B_N)^t = \frac{\gamma}{\beta N}\begin{pmatrix} c_1^2 &c_1d_1\\ c_1d_1 &c_2 ^2 + d_1^2 &c_2d_2\\ &\ddots &\ddots &\ddots\\ &&c_{N-1}d_{N - 1} &c_N^2 + d_{N-1}^2 \end{pmatrix} . \] Recall that for fixed $r \ge 1$, the $r$th moment $\bra{\mu_N, x^r}$ is a polynomial in $\{c_i, d_i\}_{1 \le i \le \frac{r+1}{2}}$. It is actually a polynomial in $\{c_i^2, d_i^2\}$, that is, \[ \bra{\mu_N, x^r} = (J_N)^r(1,1) = \frac{\gamma^r}{(\beta N)^r} \sum_{\vec\eta, \vec\zeta} a(\vec\eta, \vec\zeta) \prod_{i=1}^r c_i^{2\eta_i} d_i^{2\zeta_i}, \] where the non-negative integer vectors $\vec\eta = (\eta_1, \dots, \eta_r)$ and $\vec\zeta = (\zeta_1, \dots, \zeta_r)$ satisfy $\sum_{i = 1}^r (\eta_i + \zeta_i) = r$. Then, from formulae for moments of chi-squared distributions, we conclude the following. \begin{lemma} \begin{itemize} \item[\rm(i)] For $r \in \N$, \[ \Ex[\bra{\mu_N, x^r}] = \sum_{k = 0}^r \frac{R_{r; k} (\beta)}{(\beta N)^k}, \] where $R_{r; k} (\beta)$ is a polynomial in $\beta$ of degree at most $k$. \item[\rm(ii)] For $r, s \in \N$, \[ \Ex[\bra{\mu_N, x^r}\bra{\mu_N, x^s}] = \sum_{k = 0}^{r+s} \frac{Q_{r,s; k} (\beta)}{(\beta N)^k}, \] where $Q_{r,s; k} (\beta)$ is a polynomial in $\beta$ of degree at most $k$. \end{itemize} \end{lemma} Let $p$ be a polynomial of degree $m$. From the above expressions, we can derive a general form of $\Var[\bra{\mu_N, p}]$, and then that of $\Var[\bra{L_N, p}]$ by taking into account the relation \eqref{relation-of-variance}.
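As a quick check of part (i) in the smallest cases (a computation we include only for illustration), use $\Ex[\chi_k^2] = k$ and $\Ex[\chi_k^4] = k(k+2)$ together with the independence of $c_1$ and $d_1$: \[ \Ex[\bra{\mu_N, x}] = \frac{\gamma}{\beta N}\, \Ex[c_1^2] = \frac{\gamma}{\beta N} \cdot \frac{\beta N}{\gamma} = 1, \] and \[ \Ex[\bra{\mu_N, x^2}] = \frac{\gamma^2}{(\beta N)^2}\, \Ex[c_1^4 + c_1^2 d_1^2] = (1 + \gamma) + \frac{(2 - \beta)\gamma}{\beta N}. \] Thus $R_{1;0} = 1$, $R_{2;0} = 1 + \gamma$ and $R_{2;1}(\beta) = (2 - \beta)\gamma$, in agreement with the degree bounds in the lemma; the constant terms are the first two moments of the Marchenko--Pastur law with parameter $\gamma$.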
Similar to the case of Gaussian beta ensembles, the formula for $\Var[\bra{L_N, p}]$ can be simplified as follows. The proof is omitted. \begin{lemma} Let $p$ be a polynomial of degree $m$. Then the variance $\Var[\bra{L_N, p}] $ can be expressed as \[ \Var[\bra{L_N, p}] = \sum_{k = 2}^{2m + 1} \frac{\beta \ell_{p; k} (\beta)}{(\beta N)^k}, \] where $ \ell_{p; k} (\beta)$ is a polynomial in $\beta$ of degree at most $(k -2)$. \end{lemma} \begin{corollary} \begin{itemize} \item[\rm(i)] As $\beta N \to \infty$, \[ \beta N^2 \Var[\bra{L_N, p}] \to \sigma_p^2. \] \item[\rm(ii)] As $\beta N \to 2c$, \[ \beta N^2 \Var[\bra{L_N, p}] \to \sigma_{p,c}^2. \] \end{itemize} \end{corollary} Theorem 3.4 in \cite{Trinh-2017} provides sufficient conditions under which CLTs for $\{\bra{L_N, p}\}$ hold in the case of Jacobi matrices with independent entries. The result can be easily extended to the Wishart-type Jacobi matrices $\{J_N\}$ here by considering the filtration $\{\cF_k = \sigma\{c_i, d_i : i = 1,\dots, k\}\}_k$. The convergence of variances as in the previous corollary is one of the two sufficient conditions. The remaining one is quite similar to that in the case of Gaussian beta ensembles, and hence we do not discuss it in detail here. Consequently, the following CLTs for polynomial test functions follow. \begin{theorem} Let $p$ be a polynomial. Then the following hold. \begin{itemize} \item[\rm(i)] As $\beta N \to \infty$, \[ \sqrt{\beta} N (\bra{L_N, p} - \Ex[\bra{L_N, p}] ) \dto \Normal(0, \sigma_p^2). \] \item[\rm(ii)] As $\beta N \to 2c$, \[ \sqrt{\beta} N (\bra{L_N, p} - \Ex[\bra{L_N, p}] ) \dto \Normal(0, \sigma_{p,c}^2).
\] \end{itemize} \end{theorem} \begin{remark} \begin{itemize} \item[(i)] For fixed $\beta$, it was shown in \cite{DE06} that as $N \to \infty$, \[ N \bra{L_N, p} - N\bra{mp_\gamma, p} - \left( \frac2\beta - 1\right)\bra{\mu_1, p} \dto \Normal(0, \sigma_p^2/\beta), \] where $\mu_1$ is given by \[ \mu_1= \frac14 \delta_{\lambda_{-}} + \frac14 \delta_{\lambda_{+}} + \frac{1}{2\pi} \frac{1}{\sqrt{(\lambda_+ - x) (x - \lambda_-)}} \one_{(\lambda_-, \lambda_+) } (dx). \] \item[(ii)] Using results in the cases $\beta = 1,2$, we deduce that the limiting variance in the regime $\beta N \to \infty$ is given by (cf.~\cite[Theorem 7.3.1]{Pastur-book}) \[ \sigma_f^2 = \frac{1}{2\pi^2} \int_{\gamma_-}^{\gamma_+}\int_{\gamma_-}^{\gamma_+} \left(\frac{f(y) - f(x)}{y - x}\right)^2 \frac{4\gamma - (x - \gamma_m)(y - \gamma_m)}{\sqrt{4 \gamma - (x - \gamma_m)^2} \sqrt{4 \gamma - (y - \gamma_m)^2}}dx dy, \] where $\gamma_m = (\gamma_{-} + \gamma_{+})/2 = 1 + \gamma$. \end{itemize} \end{remark} \subsection{$C^1$ test functions} To extend CLTs from polynomial test functions to $C^1$ test functions, one idea is to use a type of Poincar\'e inequality. Consider (scaled) $\beta$-Laguerre ensembles with the joint density proportional to \[ |\Delta(\lambda)|^\beta \prod_{i = 1}^N \left( \lambda_i^{\alpha} e^{-\eta\lambda_i} \right), \] where $\alpha = \frac{\beta}{2}(M - N + 1) - 1$, and $\eta > 0$. Directly from the joint density, we can derive a Poincar\'e inequality by using the following result. This approach, however, requires $\alpha > 0$. \begin{lemma} [{\cite[Proposition~2.1]{Bobkov-Ledoux-2000}}] \label{lem:Poincare} Let $d\nu = e^{-V} dx$ be a probability measure supported on an open convex set $\Omega \subset \R^N$. Assume that $V$ is twice continuously differentiable and strictly convex on $\Omega$. Then for any locally Lipschitz function $F$ on $\Omega$, \[ \Var_\nu[F] = \int F^2 d\nu - \left(\int F d\nu \right)^2 \le \int (\nabla F)^t\Hess(V)^{-1} \nabla F d\nu.
\] Here $\Hess(V)$ denotes the Hessian of $V$. \end{lemma} Let $\Omega = \{(\lambda_1, \dots, \lambda_N): 0 < \lambda_1 <\cdots < \lambda_N\} \subset \R^N$. Let $\nu$ be the distribution of the ordered eigenvalues of (scaled) $\beta$-Laguerre ensembles, that is, the probability measure on $\Omega$ of the form \[ d\nu = const \times |\Delta(\lambda)|^\beta \prod_{i = 1}^N \left( \lambda_i^{\alpha} e^{-\eta\lambda_i} \right) d\lambda_1 \cdots d\lambda_N= e^{-V} d\lambda_1 \cdots d\lambda_N, \] where \[ V = const + \eta\sum_{i = 1}^N \lambda_i - \alpha \sum_{i = 1}^N \log \lambda_i - \frac{\beta}{2} \sum_{i \neq j} \log|\lambda_j - \lambda_i|. \] The Hessian matrix of $V$ is easily calculated: \begin{align*} \frac{\partial^2 V}{\partial \lambda_i^2} &= \frac{\alpha}{\lambda_i^2} + \beta \sum_{j \neq i} \frac{1}{(\lambda_j - \lambda_i)^2},\\ \frac{\partial^2 V}{\partial \lambda_i \partial \lambda_j} &= -\beta \frac{1}{(\lambda_j - \lambda_i)^2}. \end{align*} Observe that $\Hess (V) \ge D= \diag(\frac{\alpha}{\lambda_i^2})$, because the interaction part $\Hess(V) - D$ is a symmetric, diagonally dominant matrix with non-negative diagonal entries and hence positive semi-definite. Here for two real symmetric matrices $A$ and $B$, the notation $A \ge B$ indicates that $A-B$ is positive semi-definite. It follows that $\Hess (V)^{-1} \le D^{-1}$. Hence, using Lemma~\ref{lem:Poincare} with \[ F = \frac{1}{N} \sum_{i = 1}^N f(\lambda_i), \] for a continuously differentiable function $f$, we get the following inequality \[ \Var_\nu[F] \le \int ( \nabla F)^t\Hess(V)^{-1}\nabla F d\nu \le \int (\nabla F)^t D^{-1}\nabla F d\nu. \] The inequality can be rewritten as \begin{equation} \Var[\bra{L_N, f}] \le \frac{1}{\alpha N} \Ex[\bra{L_N, (\lambda f'(\lambda))^2}]. \end{equation} This is one type of Poincar\'e inequality for $\beta$-Laguerre ensembles used in this paper. Its drawback is that $\alpha$ must be positive, which fails in the regime $\beta N \to 2c$ when $c$ is small. The second approach, based on the random Jacobi matrix model, removes that restriction.
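As a one-dimensional illustration of Lemma~\ref{lem:Poincare} (a standard example, recorded here only as a sanity check): for the standard Gaussian measure $d\nu = (2\pi)^{-1/2} e^{-x^2/2}\, dx$ on $\Omega = \R$ we have $V(x) = \frac{x^2}{2} + const$, so $\Hess(V) = 1$ and the lemma yields the classical Gaussian Poincar\'e inequality \[ \Var[f(X)] \le \Ex[f'(X)^2], \quad X \sim \Normal(0,1), \] with equality for linear $f$. The chi distributions treated below satisfy the same inequality with the same constant.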
We will end up with a slightly different inequality. Let us begin with several concepts. A real random variable $X$ is said to satisfy a Poincar\'e inequality if there is a constant $c>0$ such that for any smooth function $f \colon \R \to \R$, \[ \Var[f(X)] \le c \Ex[f'(X)^2]. \] Here, by smooth we mean enough regularity that the terms under consideration make sense. By definition, it is clear that $X$ satisfies a Poincar\'e inequality with a constant $c$ if and only if $aX$ satisfies a Poincar\'e inequality with a constant $a^2 c$, for any non-zero constant $a$. \begin{lemma} The chi distribution $\chi_k$ satisfies a Poincar\'e inequality with $c = 1$, that is, \[ \Var[f(X)] \le \Ex[f'(X)^2], \quad X \sim \chi_k. \] \end{lemma} \begin{proof} The probability density function of the chi distribution with $k$ degrees of freedom is given by \[ \frac{1}{2^{\frac k2 - 1} \Gamma(\frac k2)} x^{k - 1} e^{-\frac{x^2}{2}}, \quad (x > 0). \] Thus, for $k \ge 1$, the conclusion follows immediately from Lemma~\ref{lem:Poincare}. Next, we consider the case $0 < k <1$. Let $Y = X^{k}$. Then the probability density function of $Y$ is given by \[ const\times \exp({-\frac{y^{\frac 2k}}{2}}),\quad ( y > 0). \] By using Lemma~\ref{lem:Poincare} with $V = const + \frac{y^{\frac 2k}}2$, we obtain that \[ \Var[g(Y)] \le \frac{k^2}{2-k}\Ex[Y^{2 - \frac 2k} g'(Y)^2]. \] For given $f(x)$, let $g(y) = f(y^\frac1k)$. Then we see that \[ \Var[f(X)] = \Var[g(Y)] \le \frac{k^2}{2-k}\Ex[Y^{2 - \frac 2k} g'(Y)^2] = \frac{1}{2-k}\Ex[f'(X)^2] \le \Ex[f'(X)^2]. \] This means that $X$ satisfies a Poincar\'e inequality with a constant $c= 1$. The proof is complete. \end{proof} We need the following property. \begin{lemma}[{\cite[Corollary 5.7]{Ledoux-book}}] Assume that $X_i$ $(i = 1, \dots, k)$ satisfy Poincar\'e inequalities with constants $c_i$. Assume further that $\{X_i\}$ are independent.
Then for any smooth function $g \colon \R^k \to \R$, \[ \Var[g(X_1, \dots, X_k)] \le (\max_i c_i) \Ex[|\nabla g(X_1, \dots, X_k)|^2]. \] \end{lemma} Let $Y = (y_{mn})$ be an $M \times N$ real matrix, and $X = Y^t Y = (x_{ij})$. For a continuously differentiable function $f \colon \R \to \R$, let \[ g = g((y_{mn})) = \tr (f(X)). \] Then the partial derivatives of $g$ can be expressed as follows. \begin{lemma}[cf.~{Eq.~(7.2.5) in \cite{Pastur-book}}] It holds that \[ \left( \frac{\partial g}{\partial y_{mn}}\right)_{M\times N} = 2Y f'(X). \] Consequently, \[ \sum_{m,n} \left( \frac{\partial g}{\partial y_{mn}}\right)^2 = 4\tr(Y f'(X) f'(X) Y^t ) = 4\tr(X f'^2(X)) = 4 \sum_{i = 1}^N \lambda_i f'(\lambda_i)^2. \] Here $\lambda_1, \dots, \lambda_N$ are the eigenvalues of $X$. \end{lemma} Combining the three lemmas above, we arrive at another type of Poincar\'e inequality for $\beta$-Laguerre ensembles. \begin{theorem} Assume that $f$ is a continuously differentiable function. Then for $\beta$-Laguerre ensembles~\eqref{bLE}, the following inequality holds \begin{equation}\label{Poincare-for-Laguerre} \Var[\bra{L_N, f}] \le \frac{4}{\beta M N} \Ex[\bra{L_N, x f'(x)^2}]. \end{equation} \end{theorem} The above Poincar\'e inequality is a key ingredient in extending CLTs to $C^1$ test functions. Let us now prove Theorem~\ref{thm:CLT-full}. \begin{proof}[Proof of Theorem~{\rm\ref{thm:CLT-full}}] Assume that $f$ has a continuous derivative $f'$ of polynomial growth. This implies that $f' \in L^2 ((1+x^2)d\nu_{\gamma, c}(x))$. Recall that, for convenience, $\nu_{\gamma, \infty}$ denotes the Marchenko--Pastur distribution with parameter $\gamma$. We need the following property of measures determined by moments (called M.~Riesz's Theorem (1923) in \cite{Bakan-2001}): a measure $\mu$ is determined by its moments if and only if polynomials are dense in $L^2((1+x^2)d\mu(x))$. Consequently, there is a sequence of polynomials $\{p_k\}$ converging to $f'$ in $ L^2((1+x^2)d\nu_{\gamma,c}(x))$.
It then follows that \[ \int x(p_k - f')^2 d\nu_{\gamma, c}(x) \le \frac12\int (p_k - f')^2 (1+x^2) d\nu_{\gamma, c}(x) \to 0 \quad \text{as}\quad k \to \infty. \] Let $P_k$ be a primitive of $p_k$. Since $P_k$ is a polynomial, for fixed $k$, as $N \to \infty$, \begin{equation}\label{CLT-XNk} X_{N, k}:= \sqrt{\beta} N (\bra{L_N, P_k} - \Ex[\bra{L_N, P_k}]) \dto \Normal(0, \sigma_{P_k, c}^2), \quad \Var[X_{N, k}] \to \sigma_{P_k, c}^2. \end{equation} Let $Y_N = \sqrt{\beta} N (\bra{L_N, f} - \Ex[\bra{L_N, f}])$. Then by the Poincar\'e inequality~\eqref{Poincare-for-Laguerre}, \begin{equation}\label{Variance-approx} \Var[ Y_N - X_{N, k} ] \le \frac{4N}{ M} \Ex[\bra{L_N, x(f' - p_k)^2}], \end{equation} which implies that \[ \lim_{k \to \infty} \limsup_{N \to \infty} \Var[ Y_N - X_{N, k} ] \le 4 \gamma \lim_{k \to \infty} \bra{\nu_{\gamma, c}, x(f' - p_k)^2} = 0. \] Here we have used the property that for a continuous function $g$ of polynomial growth, \[ \Ex[\bra{L_N, g}] \to \bra{\nu_{\gamma, c}, g}. \] The CLT for $Y_N$ then follows from \eqref{CLT-XNk} and \eqref{Variance-approx} by using the following general result, whose proof can be found in \cite{Nakano-Trinh-2018} or \cite{Trinh-2018-CLT}. The proof is complete. \end{proof} \begin{lemma}\label{lem:CLT-triangle} Let $\{Y_N\}_{N=1}^\infty$ and $\{X_{N,k}\}_{N,k=1}^\infty$ be mean zero real random variables. Assume that \begin{itemize} \item[\rm(i)] for each $k$, as $N \to \infty$, $ X_{N,k} \dto \Normal(0, \sigma_k^2),$ and $\Var[X_{N, k}] \to \sigma_k^2$; \item[\rm(ii)] $ \lim_{k \to \infty} \limsup_{N \to \infty} \Var[X_{N,k} - Y_N] =0. $ \end{itemize} Then the limit $\sigma^2 = \lim_{k \to \infty} \sigma_k^2$ exists, and as $N \to \infty$, \[ Y_N \dto \Normal(0, \sigma^2), \quad \Var[Y_N] \to \sigma^2. \] \end{lemma} \noindent{\bf Acknowledgment.} This work is supported by the VNU University of Science under project number TN.18.03 (H.D.T) and by JSPS KAKENHI Grant Number JP19K14547 (K.D.T.).
\end{document}
\begin{document} \renewcommand{\labelenumi}{\emph{\arabic{enumi}.}} \title{The Carath\'eodory Topology for Multiply Connected Domains I} \begin{abstract} We consider the convergence of pointed multiply connected domains in the \Car topology. Behaviour in the limit is largely determined by the properties of the simple closed hyperbolic geodesics which separate components of the complement. Of particular importance are those whose hyperbolic length is as short as possible, which we call \emph{meridians} of the domain. We prove a continuity result on the convergence of such geodesics for sequences of pointed hyperbolic domains which converge in the \Car topology to another pointed hyperbolic domain. Using this we describe an equivalent condition to \Car convergence which is formulated in terms of Riemann mappings to standard slit domains. \end{abstract} \section{Introduction} The \Car topology for pointed domains was first introduced in 1952 by \Car \cite{Car}, who proved that, for simply connected domains, convergence with respect to this topology is equivalent to convergence of suitably normalized inverse Riemann mappings on compact subsets of the unit disc $\D$. This result is also mentioned by McMullen \cite{McM}, who uses it to prove a compactness result for polynomial-like mappings. Our work is also motivated by complex dynamics, in particular the area of non-autonomous iteration where one considers compositions arising from sequences of analytic functions which are allowed to vary. It turns out that in order to prove a non-autonomous version of the classical Sullivan straightening theorem, one must consider the behaviour of multiply connected pointed domains with respect to this topology. As we shall see (e.g. in Figure 2 below), one issue is that connectivity is not in general preserved for \Car limits and that some of the complementary components can shrink to a point.
This presents problems if one wants to perform quasiconformal surgery on multiply connected domains, as certain conformal invariants associated with the domains can become unbounded. One of our ultimate goals, then, will be to find necessary and sufficient conditions under which connectivity is preserved for \Car limits and none of the complementary components of the limit domain is a point (in the finitely connected case, such domains are called \emph{non-degenerate}). Epstein \cite{Ep} has shown that convergence in the \Car topology is equivalent to convergence of suitably normalized universal covering maps on compact subsets of $\D$ (see Theorem 1.2). However, it turns out that the limiting behaviour of a sequence of domains of the same connectivity is best understood in terms of certain simple closed hyperbolic geodesics associated with the domain. In Theorem 1.8 we prove the important result that if a pointed domain $\Uu$ is a \Car limit of a sequence of pointed domains $\{\Umum\}_{m=1}^\infty$, then every simple closed geodesic of $U$ is a uniform limit of simple closed geodesics of the domains $U_m$, and the corresponding hyperbolic lengths and distances of these geodesics to the basepoints also converge. Of particular importance are those geodesics known as \emph{meridians}, which are essentially the shortest simple closed geodesics which separate the complement of the domain in some prescribed way. In Theorem 3.2 we use meridians to prove a version of the above classical result concerning convergence of normalized inverse Riemann mappings for the multiply connected case, where we replace the unit disc by suitable slit domains. In the second part of this paper we use meridians to give a solution to our originally stated problem regarding the preservation of connectivity.
In fact in Theorem 4.2 we give several equivalent conditions for a family of non-degenerate $n$-connected pointed domains which ensure that any \Car limit is still $n$-connected and non-degenerate. We begin our exposition with a short r\'esum\'e of the well-known results about the \Car topology. For the most part we shall be working with the spherical metric $\rm d^{\scriptscriptstyle \#} (\cdot \,, \cdot)$ on $\cbar$ (rather than the Euclidean metric). Recall that the length element for this metric, $\dsharpz$, is given by \[ \dsharpz = \frac{|{\rm d}z|}{1 + |z|^2}\] and that for an analytic function $f$ we have the \emph{spherical derivative} \[ f^\# (z) = \frac{f'(z)}{1 + |f(z)|^2}.\] A {\it pointed domain} is a pair $(U,u)$ consisting of an open connected subset $U$ of $\cbar$ (possibly equal to $\cbar$ itself) and a point $u$ in $U$. We say that $\Umum \to \Uu$ in the \Car topology as $m$ tends to infinity if \begin{enumerate} \renewcommand{\labelenumi}{{\bf \roman{enumi})}} \renewcommand{\theenumi}{{\bf \roman{enumi})}} \item $u_m \to u$ in the spherical topology; \item For all compact sets $K \subset U$, $K \subset U_m$ for all but finitely many $m$; \item For any {\it connected} (spherically) open set $N$ containing $u$, if $N \subset U_m$ for infinitely many $m$, then $N \subset U$. \end{enumerate} We also wish to consider the degenerate case where $U = \{u\}$. In this case condition ii) is omitted ($U$ has no interior of which we can take compact subsets) while condition iii) becomes \begin{enumerate} \renewcommand{\labelenumi}{{\bf \roman{enumi})}} \renewcommand{\theenumi}{{\bf \roman{enumi})}} \setcounter{enumi}{2} \item For any {\it connected} open set $N$ containing $u$, $N$ is contained in at most finitely many of the sets $U_m$. \end{enumerate} The above definition is a slight modification of that given in the book of McMullen \cite{McM} and much of what follows in this section is based on his exposition.
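As a simple illustration of this definition, let $u_m = -\tfrac12$ and \[ U_m = \D \setminus \left\{x \in \R : \tfrac1m \le x < 1 \right\}, \quad m \ge 1. \] Then $\Umum \to (\D \setminus [0,1), -\tfrac12)$ in the \Car topology. Indeed, any compact subset $K$ of $\D \setminus [0,1)$ avoids every slit $[\tfrac1m, 1)$, so that $K \subset U_m$ for every $m$, while any connected open set $N$ containing $-\tfrac12$ which meets $[0,1)$ must, being open, contain some $x \in (0,1)$ and hence fails to lie in $U_m$ as soon as $m > 1/x$. Thus condition iii) holds precisely with $U = \D \setminus [0,1)$.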
However, the original reference for this material goes back to \Car \cite{Car} who in 1952 used an alternative definition which centered on the \emph{\Car kernel} (this approach was also used subsequently by Duren \cite{Dur}). For a sequence of pointed domains as above, one first requires that $u_m \to u$ in the spherical topology. If there is no open set containing $u$ which is contained in the intersection of all but finitely many of the sets $U_m$, one then defines the kernel of the sequence of pointed domains $\{\Umum\}_{m=1}^\infty$ to be $\{u\}$. Otherwise one defines the \Car kernel as the largest domain $U$ containing $u$ with the property ii) above, namely that every compact subset $K$ of $U$ must lie in $U_m$ for all but finitely many $m$. It is relatively easy to check that an arbitrary union of domains with this property will also inherit it. Hence a largest such domain does indeed exist. Convergence in this context is then defined by requiring that every subsequence of pointed domains has the same kernel as the whole sequence. It is not too hard to show that this version of \Car convergence is equivalent to the first one. In fact, one has the following. \begin{theorem} Let $\{\Umum\}_{m=1}^\infty$ be a sequence of pointed domains and $\Uu$ be another pointed domain where we allow the possibility that $\Uu = (\{u\},u)$. Then the following are equivalent: \begin{enumerate} \item $\Umum \to \Uu$; \item $u_m \to u$ in the spherical topology and $\{\Umum\}_{m=1}^\infty$ has \Car kernel $U$ as does every subsequence; \item $u_m \to u$ in the spherical topology and, for any subsequence where the complements of the sets $U_m$ converge in the Hausdorff topology (with respect to the spherical metric), $U$ corresponds with the connected component of the complement of the Hausdorff limit which contains $u$ (this component being empty in the degenerate case $U = \{u\}$).
\end{enumerate} \end{theorem} It follows easily from the compactness of $\cbar$ combined with the Blaschke selection theorem that, provided we use the spherical rather than the Euclidean metric, any sequence of non-empty closed subsets of $\cbar$ will have a subsequence which converges in the Hausdorff topology. Hence, from above, given any family of pointed domains we can always find a sequence in the family which converges in the \Car topology (although the limit pointed domain may well be degenerate). In fact, this convenient fact is the main reason we define things using the spherical topology rather than the more usual Euclidean topology. As we remarked earlier, connectivity cannot increase with respect to \Car limits. To be precise, if each $U_m$ above is at most $n$-connected, then so is the limit domain $U$. The reason for this is that by {\it 3.} above, complementary components are allowed to merge in the Hausdorff limit, but they cannot split up into more components (see Figure 2 for an illustration of what can happen in this situation). Recall that a Riemann surface is called \emph{hyperbolic} if its universal covering space is the unit disc $\D$. From the uniformization theorem, it is well-known that a domain $U \subset \cbar$ is hyperbolic if $\cbar \setminus U$ contains at least three points. For such a domain, the universal covering map allows us to define the \emph{hyperbolic metric} on $U$ which we denote by $\rho_U (\cdot\,, \cdot)$ or just $\rho (\cdot \,, \cdot)$, if the domain involved is clear from the context. Extending this notation slightly, we shall use $\rho_U(z, A)$ or $\rho(z, A)$ to denote the distance in the hyperbolic metric from a point $z \in U$ to a subset $A$ of $U$. Finally, for a curve $\gamma$ in $U$, let us denote the hyperbolic length of $\gamma$ in $U$ by $\ell_U(\gamma)$, or, again when the context is clear, simply by $\ell(\gamma)$. 
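For reference, with the curvature $-1$ normalization of the hyperbolic metric (conventions differing by a factor of $2$ also appear in the literature), the length element on $\D$ and the distance to the origin are given by \[ \frac{2|{\rm d}z|}{1 - |z|^2}, \qquad \rho_\D(0, z) = \log \frac{1 + |z|}{1 - |z|}, \] and the hyperbolic metric on a hyperbolic domain $U$ is obtained by pushing this metric forward under any universal covering map $\pi \colon \D \to U$, which is then a local isometry.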
Often, for the sake of convenience, we shall restrict ourselves to considering domains which are subsets of $\C$ so that the point at infinity is in one of the components of the complement. This simplification has the advantage that for a sequence of functions whose ranges lie in domains which are subsets of $\C$ and thus avoid infinity, convergence in the spherical topology is equivalent to the simpler condition of convergence in the Euclidean topology. To see why there is little loss of generality in making this assumption, suppose $\Umum$ converges to $\Uu$ with $U$ hyperbolic. Then any Hausdorff limit of the sets $\cbar \setminus U_m$ must contain at least three distinct points since otherwise, $U$ will fail to be hyperbolic. We can then apply a M\"obius transformation to $U$ to move these three points to $0$, $1$ and $\infty$. If we now apply the same transformation to the domains $U_m$, then we know that $0$, $1$ and $\infty$ are close to $\cbar \setminus U_m$ for $m$ large. We can therefore choose three points in $\cbar \setminus U_m$ which get moved to $0$, $1$, $\infty$ by a M\"obius transformation which is very close to the identity. It is easy to see from the definition of \Car convergence that this does not affect convergence to the limit domain $\Uu$ and so we have what we want. One of the nice features of the \Car topology is that the geometric and topological formulations of convergence given above correspond to the function-theoretic condition of the local uniform convergence of suitably normalized covering maps. Of course, in the simply connected case, these are just the inverses of Riemann mappings to the unit disc. We will prove the following result in Section 2. \begin{theorem} Let $\{\Umum\}_{m \ge 1}$ be a sequence of pointed hyperbolic domains and for each $m$ let $\pi_m$ be the unique normalized covering map from $\D$ to $U_m$ satisfying $\pi_m(0) = u_m$, $\pi'_m(0) >0$. 
Then $\Umum$ converges in the \Car topology to another pointed hyperbolic domain $\Uu$ if and only if the mappings $\pi_m$ converge with respect to the spherical metric uniformly on compact subsets of $\D$ to the covering map $\pi$ from $\D$ to $U$ satisfying $\pi(0)=u$, $\pi'(0) >0$. In addition, in the case of convergence, if $D$ is a simply connected subset of $U$ and $v \in D$, then locally defined branches $\omega_m$ of $\pi_m^{\circ -1}$ on $D$ for which $\omega_m(v)$ converges to a point in $\D$ will converge locally uniformly with respect to the spherical metric on $D$ to a uniquely defined branch $\omega$ of $\pi^{\circ -1}$. Finally, if $\pi_m$ converges with respect to the spherical topology locally uniformly on $\D$ to the constant function $u$, then $\Umum$ converges to $(\{u\},u)$. \end{theorem} One of the most important ways to characterize a multiply connected domain is in terms of the simple closed hyperbolic geodesics which separate components of the complement and we will use the tool of homology from complex analysis to classify these curves. The following four results are proved in \cite{Com2}. Note that in \cite{Com2} it is always assumed that if a simple closed curve $\gamma$ separates two disjoint closed sets $E$, $F$, then $\infty \in F$. This has the advantage of allowing us to assign a consistent orientation to such a curve so that the winding number $n(\gamma, z)$ is $1$ for all points of $E$ and $0$ for all points of $F$. However, it is obvious that, by applying a suitable M\"obius transformation if needed, we can assume that $E$ and $F$ are any two arbitrary disjoint closed subsets of $\cbar$. Another advantage of assuming $\infty \in F$ is that all positively oriented simple closed curves which separate $E$ and $F$ are then in the same homology class and vice versa. If $U \subset \C$, and $\gamma$, $\eta$ are curves in $U$, then we write $\gamma \underset{U} \approx \eta$ to denote homology in $U$. 
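In concrete terms, for $U \subset \C$ homology can be tested with winding numbers: two cycles $\gamma$ and $\eta$ in $U$ satisfy $\gamma \underset{U} \approx \eta$ if and only if \[ n(\gamma, z) = n(\eta, z) \quad \text{for every } z \in \C \setminus U, \] this being the form in which homology enters the homology version of Cauchy's theorem (applied to the cycle $\gamma - \eta$).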
On the other hand, if we allow $\infty \in U$, then it is easy to see that there can be curves which separate the complement of $U$ in the same way, but which are not homologous in $U$. This is important for the definition of meridians (see Definition 1.1 below) where we need to take this into account if we wish to consider subdomains of $\cbar$ instead of just subdomains of $\C$. \begin{theorem}[\cite{Com2} Theorem 2.1] Let $U$ be a domain and suppose we can find disjoint non-empty closed sets $E$, $F$ with $\cbar \setminus U = E \cup F$. Then there exists a piecewise smooth simple closed curve in $U$ which separates $E$ and $F$. \end{theorem} For the next three results, we assume the common hypothesis that $U$ is a hyperbolic domain and $E$ and $F$ are closed disjoint non-empty sets neither of which is a point and for which $\cbar \setminus U = E \cup F$. Let us call such a separation of the complement of $U$ \emph{non-trivial}. Also, since we are considering domains which are subsets of $\C$, let us assume that $E$ is bounded and $\infty \in F$. \begin{theorem}[\cite{Com2} Theorem 2.4] Let $\tilde \gamma$ be a simple closed curve which separates $E$ and $F$. Then there exists a unique simple closed smooth geodesic $\gamma$ which is the shortest curve in the free homotopy class of $\tilde \gamma$ in $U$ and in particular also separates $E$ and $F$. Conversely, given a simple closed smooth hyperbolic geodesic $\gamma$ in $U$, $\gamma$ separates $\cbar \setminus U$ non-trivially and is the unique geodesic in its homotopy class and also the unique curve of shortest possible length in this class. \end{theorem} Note that the fact that $\gamma$ must separate $E$ and $F$ in the first part of the statement follows easily from the Jordan curve theorem and the fact that $\gamma$ is simple and must be homologous in $U$ to $\tilde \gamma$. As we will see (e.g. in Figure 1), there may be many geodesics in different homotopy classes which separate $E$ and $F$.
However, we can always find one which is as short as possible. \begin{theorem}[\cite{Com2} Theorem 1.1] Let $U$, $E$ and $F$ be as above. Then there exists a geodesic $\gamma$ which separates $E$ and $F$ and whose length in the hyperbolic metric is as short as possible among all geodesics which separate $E$ and $F$. \end{theorem} Unfortunately, this geodesic need be neither simple nor uniquely defined (see \cite{Com2} for details). However, there does always exist a simple closed geodesic of minimum length which separates $E$ and $F$. \begin{theorem}[Existence Theorem, \cite{Com2} Theorem 1.3] There exists a simple closed geodesic $\gamma$ in $U$ which separates $E$ and $F$ and whose hyperbolic length is as short as possible in its homology class and is also as short as possible among all simple closed curves which separate $E$ and $F$. Furthermore, any curve in the homology class of $\gamma$ and which has the same length as $\gamma$ must also be a simple closed geodesic. \end{theorem} Note that $\gamma$ is the shortest curve in its homology class, which in general includes curves which may not be simple (possibly including $\tilde \gamma$ itself). The above statement is a simplified version of the original. In the original, the class of curves which separated $E$ and $F$ by parity was considered and this class is larger than the homology class of $\gamma$ (again, see \cite{Com2} for details). Let $\gamma$ be a simple closed smooth hyperbolic geodesic which is topologically non-trivial in $U$, let $\pi : \D \to U$ be a universal covering map and let $G$ be the corresponding group of covering transformations. Any lift of $\gamma$ to $\D$ is a hyperbolic geodesic in $\D$ and going once around $\gamma$ lifts to a covering transformation $A$ which fixes this geodesic. It is then not hard to see that $A$ must be a hyperbolic M\"obius transformation and the invariant geodesic is then $Ax_A$, the axis of $A$.
The hyperbolic length of $\gamma$ is then the same as the translation length $\ell(A)$ which is the hyperbolic distance $A$ moves points on $Ax_A$. Note that the quantity $\ell(A)$ does not depend on our choice of lift and is conformally invariant. We call a segment $\eta$ of $Ax_A$ which joins two points $z$, $A(z)$ on $Ax_A$ a \emph{full segment} of $Ax_A$. This discussion and the above result lead to the following definition. \begin{definition} Let $U$ be a hyperbolic domain and let $E$, $F$ be any non-trivial separation of $\cbar \setminus U$ as above (where we do not assume that $\infty \notin U$). A simple closed hyperbolic geodesic $\gamma$ in $U$ which separates $E$ and $F$ and whose hyperbolic length is as short as possible is called a \emph{meridian} of $U$ and the hyperbolic length $\ell_U(\gamma)$ is called the \emph{translation length} or simply the \emph{length} of $\gamma$. \end{definition} Note that in Definition 1.5 of \cite{Com2}, a slightly different definition was given where the meridian was defined to be the shortest possible simple closed geodesic in its homology class. As mentioned above, in that paper it was assumed that $\infty \in F$ and in this case the two definitions are equivalent. However, since we wish to consider arbitrary domains in $\cbar$ and not just in $\C$, we need the slightly more general definition above. An important special case and indeed the prototype for the above definition is the equator of a conformal annulus and just as the equator is important in determining the geometry of a conformal annulus, meridians are important in determining the geometry of domains of (possibly) higher connectivity. \begin{center} \scalebox{0.75}{\includegraphics{Counterexample1.pdf}} \end{center} The main problem with meridians is that, except in special cases such as an annulus, meridians may not be unique as Figure 1 above shows.
The two meridians shown are not homotopic but are in the same homology class and have equal length (see \cite{Com2} Theorem 1.4 for details). However, if one of the complementary components is connected, then we do have uniqueness. \begin{theorem}[\cite{Com2} Theorem 1.5] If at least one of the sets $E$, $F$ is connected, then there is only one simple closed geodesic $\gamma$ in $U$ which separates $E$ and $F$. In particular, $\gamma$ must be a meridian. In addition, any other geodesic which separates $E$ and $F$ must be longer than $\gamma$. \end{theorem} Let us call a meridian as above where at least one of the sets $E$, $F$ is connected a \emph{principal meridian} of $U$. The theorem then tells us that principal meridians are unique. These meridians have other nice properties. For example, they are disjoint and do not meet any other meridians of $U$ (\cite{Com2} Theorem 2.6). To see the pathologies which can arise when one takes a limit in the \Car topology, consider Figure 2 below. Note how the connectivity decreases when parts of the complement merge in the limit or are `pinched off'. \begin{center} \scalebox{1.061}{\includegraphics{Pathology2}} \unitlength1cm \begin{picture}(0.01,0.01) \put(-11.4,-.3){\footnotesize$(U_m,u_m)$ } \put(-7.42,2.4){\footnotesize $m \to \infty$} \put(-3.33,-.3){\footnotesize$(U,u)$ } \put(-10.73,1.93){$\scriptstyle u_m$} \put(-2.9,1.93){$\scriptstyle u$} \end{picture} \end{center} In the above figure the principal meridians which separate the semi-circular shaped complementary components on the left from the rest of $\cbar \setminus U_m$ have lengths which must tend to infinity. For the small complementary component in the middle which shrinks to a point, the opposite happens and the principal meridian which separates this component from the rest of the complement has length tending to zero.
Finally, for the circular complementary component on the right which is almost swallowed by the circular arc, the principal meridian which separates this component from the rest of the complement will tend to a circle (in fact the equator of a round annulus). However, the distance of this meridian from the base point $u_m$ will tend to infinity. The important issue here is that the fact that the limit domain is degenerate and of lower connectivity than the domains of the approximating sequence can be understood entirely in terms of the behaviour of the meridians and in fact of the principal meridians of these domains. Meridians are thus central to understanding the \Car topology in the multiply connected case. Even though simple closed geodesics can behave badly with respect to limits in the \Car topology, we can say something, as the theorem below, which is one of the main results of this paper, shows. Roughly, it states that a simple closed geodesic of the limit domain can be approximated by simple closed geodesics of the approximating domains. We say that a sequence of curves $\gamma_m$ converges uniformly to a curve $\gamma$ if we can find parametrizations for all the curves $\gamma_m$ over the same interval which converge uniformly to a parametrization of $\gamma$. \begin{theorem} Let $\{\Umum\}_{m=1}^\infty$ be a sequence of multiply connected hyperbolic pointed domains which converges in the \Car topology to a multiply connected hyperbolic pointed domain $\Uu$ (with $U \ne \{u\}$).
If $\gamma$ is a simple closed geodesic of $U$ whose length is $\ell$, then we can find simple closed geodesics $\gamma_m$ of each $U_m$ such that if $\ell_m$ is the length of each $\gamma_m$, then: \begin{enumerate} \item The hyperbolic distance in $U_m$ from $u_m$ to $\gamma_m$, $d_m = \rho_{U_m}(u_m, \gamma_m)$, converges to $d = \rho_U (u, \gamma)$, the hyperbolic distance in $U$ from $u$ to $\gamma$; \item The simple closed geodesics $\gamma_m$ converge uniformly to $\gamma$ while the corresponding lengths $\ell(\gamma_m)$ converge to $\ell(\gamma)$; \item If $u_m$ lies on $\gamma_m$ for infinitely many $m$, then $u$ lies on $\gamma$. \end{enumerate} \end{theorem} In the case of meridians for a domain, we can say the following. \begin{theorem} Let $\Umum$ and $\Uu$ be as above in Theorem 1.8, let $\tilde \gamma$ be a simple closed geodesic of $U$ and let $\tilde \gamma_m$ be the curves in each $U_m$ which converge to $\tilde \gamma$ as above. For each $m$, let $\gamma_m$ be a meridian of $U_m$ with $\gamma_m \underset{U_m} \approx \tilde \gamma_m$. Then the distances $d_m = \rho_{U_m}(u_m, \gamma_m)$ are uniformly bounded above and the lengths $\ell_m = \ell(\gamma_m)$ are uniformly bounded above and uniformly bounded below away from zero. \end{theorem} \begin{theorem} Again let $\Umum$ and $\Uu$ be as in Theorem 1.8 and suppose $E$, $F$ is a non-trivial separation of $\cbar \setminus U$ into disjoint closed subsets. 
Then we can find a meridian $\gamma$ which separates $E$ and $F$ and a subsequence $m_k$ and meridians $\gamma_{m_k}$ of $U_{m_k}$ such that if $\ell_{m_k}$ is the length of each $\gamma_{m_k}$ and $\ell$ the length of $\gamma$, then: \begin{enumerate} \item The hyperbolic distance in $U_{m_k}$ from $u_{m_k}$ to $\gamma_{m_k}$, $d_{m_k} = \rho_{U_{m_k}}(u_{m_k}, \gamma_{m_k})$, converges to $d = \rho_U (u, \gamma)$, the hyperbolic distance in $U$ from $u$ to $\gamma$; \item The meridians $\gamma_{m_k}$ converge uniformly to $\gamma$ while the corresponding lengths $\ell_{m_k}$ converge to $\ell$; \item If $u_{m_k}$ lies on $\gamma_{m_k}$ for infinitely many $k$, then $u$ lies on $\gamma$. \end{enumerate} Furthermore, if $\gamma$ is a principal meridian of $U$, then {\it 1.}, {\it 2.} and {\it 3.} hold for any subsequence. \end{theorem} An important special case is domains with finite connectivity. We adopt the convention that if $U$ is $n$-connected and $K^1, K^2, \ldots, K^n$ denote the components of $\cbar \setminus U$, then the last component $K^n$ will always be the unbounded one (note that Ahlfors uses the same convention in \cite{Ahl}). For a domain of finite connectivity $n$, one can see using elementary combinatorics that there are at most $E(n) := 2^{n-1} - 1$ different ways to separate $\cbar \setminus U$ non-trivially and thus at most this number of meridians which separate the complement of $U$ in distinct ways. One can also show that there are at most $P(n) := \min\{n,E(n)\}$ principal meridians. If we can find $P(n)$ principal meridians, let us call such a collection the \emph{principal system of meridians} or simply the \emph{principal system} for $U$. If we can find a full collection of $E(n)$ meridians, let us call such a collection an \emph{extended system of meridians} or simply an \emph{extended system} for $U$. If $n \le 3$, then any meridians of $U$ which exist must be principal.
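To make the counting above explicit (a routine verification, not part of the original text): a non-trivial separation of the $n$ complementary components amounts to an unordered partition of them into two non-empty collections, so that

```latex
\[
E(n) \;=\; \frac{2^{n} - 2}{2} \;=\; 2^{n-1} - 1,
\qquad
P(n) \;=\; \min\{n, E(n)\},
\]
```

giving, for example, $E(3) = P(3) = 3$, while $E(4) = 7$ and $P(4) = 4$. Every separation with $n \le 3$ isolates a single (and hence connected) complementary component, which is why all meridians are then principal, while for $n = 4$ there are three separations which split the components into two pairs and whose meridians are therefore not principal in general.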
The first case where we can have meridians which are not principal is $n=4$ as we see in Figure 1. Finally, as the principal meridians are always disjoint and in different and non-trivial homotopy classes, they form a geodesic multicurve in the sense of Definition 3.6.1 of \cite{Hub}. However, except when $n = 2$ or $3$, this multicurve will not be maximal (see Theorem 3.6.2 of \cite{Hub}). On the other hand, the meridians of an extended system may well intersect and so will not in general be a multicurve at all. A finitely connected domain $U$ is called \emph{non-degenerate} if none of the components of $\cbar \setminus U$ is a point. The principal meridians are precisely those meridians which can fail to exist if some of the complementary components are points and it is not hard to show the following. \begin{proposition}[\cite{Com2} Proposition 3.1] If $U$ is a domain of finite connectivity $n \ge 2$, then $U$ has at least $E(n) - P(n)$ meridians and any principal meridians of $U$ which exist are uniquely defined. Furthermore, the following are equivalent: \begin{enumerate} \item $U$ is non-degenerate; \item $U$ has $P(n)$ principal meridians; \item $U$ has $E(n)$ meridians in distinct homology classes. \end{enumerate} \end{proposition} If $U$ is a non-degenerate $n$-connected domain and $\Gamma = \{\gamma^i, 1 \le i \le E(n)\}$ is an extended system for $U$, we shall adopt the convention that the first $P(n)$ meridians are always those of the principal system and that for $1 \le i \le P(n)$, $\gamma^i$ separates $K^i$ from the rest of $\cbar \setminus U$. Let us denote the lengths of the meridians of $\Gamma$ by $\ell^i$, $1 \le i \le E(n)$. For a pointed domain $\Uu$, we will also need to consider the distances $d^i := \rho(u, \gamma^i)$, $1 \le i \le E(n)$, from the base point to these meridians.
The collection of numbers $\ell^i$ and $d^i$, $1 \le i \le E(n)$, we shall refer to as the \emph{lengths} and \emph{distances} of $\Gamma$ respectively and naturally we can make similar definitions for a principal system. Note that the lengths are independent of the choice of meridians for the system, but, except for the principal meridians, the distances in general are not. However, this will not be too much of a problem as we see from Theorem 1.9. Let us call a meridian \emph{significant} if it is a limit of meridians for a subsequence as above. If the domain is finitely connected and non-degenerate, let us call a system of meridians a \emph{significant system} if each meridian in the system is significant. We have the following useful corollary. \begin{corollary} Let $n \ge 2$ and let $\{\Umum\}_{m=1}^\infty$ be a sequence of $n$-connected hyperbolic pointed domains which converges in the \Car topology to a non-degenerate hyperbolic $n$-connected pointed domain $\Uu$ with $U \ne \{u\}$. Then we can find a significant system of meridians for $\Uu$. Furthermore, if all the domains $U_m$ are also $n$-connected and non-degenerate and for each $m$ we let $\Gamma_m = \{\gim, 1 \le i \le E(n)\}$ be any system of meridians for $U_m$, then the distances $d^i_m = \rho(u_m , \gim)$ are bounded above while the lengths $\leim = \ell_{U_m}(\gim)$ are bounded above and below away from zero. These bounds are uniform in $m$ and independent of our choice of the systems $\Gamma_m$. \end{corollary} Theorems 1.2, 1.8, 1.9 and 1.10 and Corollary 1.1 will be proved in Section 2. In Section 3 we will present some applications including a version of Theorem 1.2 stated in terms of Riemann mappings to slit domains instead of universal covering maps. \section{Convergence of Geodesics and Meridians} Starting with Theorem 1.2, we prove the theorems stated in the previous section, together with some supporting results.
For a family of M\"obius transformations $\Phi = \{\phi_\alpha, \alpha \in A\}$, we say that $\Phi$ is \emph{bi-equicontinuous on $\cbar$} if both $\Phi$ and the family $\Phi^{\circ -1} = \{\phi_\alpha^{\circ -1}, \alpha \in A\}$ of inverse mappings are uniformly Lipschitz families on $\cbar$ (with respect to the spherical metric). {\bf Proof of Theorem 1.2 \hspace{.4cm}} A proof of most of this result can be found in the Ph.D. thesis of Adam Epstein \cite{Ep} and the proof is similar to the better known special case where all the domains involved are discs and the mappings $\pi_m$ are then Riemann maps. A proof of the disc case can be found in {\Car}'s original exposition \cite{Car}. In order to extend Epstein's results to a full proof, we need to show in the non-degenerate case that if $\Umum$ converges to $\Uu$ with $U$ hyperbolic, then the covering maps $\pi_m$ give a normal family on $\D$ and that any limit function must be non-constant. Note that in the non-degenerate case, we may (if we like) assume that $U \subset \C$ so that the sequence $u_m$ is bounded in the case of either \Car convergence or convergence of normalized covering maps and so convergence in the spherical topology is equivalent to convergence in the Euclidean topology. Lastly, in the degenerate case we need to show that $\Umum$ converges to $(\{u\},u)$ as stated. Dealing first with the non-degenerate case, since $U$ is hyperbolic, it then follows from the Hausdorff version of \Car convergence that we can find $\delta >0$ such that for every $m$ large enough $\cbar \setminus U_m$ contains at least three points which are at least distance $\delta$ away from each other in terms of the spherical metric. 
The reason for this is that if this were not true we could find a subsequence which converged to a domain which was $\cbar$ with one or two points removed, both of which are impossible (note that this argument also shows that if $\Umum \to \Uu$ with $U$ hyperbolic, then $U_m$ must be hyperbolic for $m$ large enough). Using Theorem 2.3.3 on page 34 of \cite{Bear}, we can postcompose by a bi-equicontinuous family of M\"obius transformations and apply Montel's theorem to conclude that the covering maps $\pi_m$ give a normal family (in the spherical topology) on $\D$. Since $U \ne \{u\}$, it follows from i) and ii) of \Car convergence and applying the Koebe one-quarter theorem to branches of inverse maps on a suitable disc about $u$ in $U$ that all limit functions must be non-constant and this completes the proof in the non-degenerate case. For the degenerate case, suppose $\pi_m$ converges to the constant function $u$ locally uniformly on $\D$ but $\Umum$ does not converge to $(\{u\},u)$. By Theorem 1.1, using the Hausdorff version of \Car convergence, we can find a subsequence $(U_{m_k}, u_{m_k})$ so that these pointed domains converge to a pointed domain $(\tilde U, u)$ where $\tilde U$ is open with $u \in \tilde U$. From the non-degenerate case, it would then follow that $\pi_{m_k}$ would converge locally uniformly on $\D$ to $\tilde \pi$, the normalized covering map for $\tilde U$, and with this contradiction the proof is complete. $\Box$ As one might suspect from the statement of Theorem 1.2, it does not follow that if $\Umum$ converges to a degenerate pointed domain $(\{u\},u)$, then the normalized covering maps $\pi_m$ as above must converge locally uniformly to $u$ on $\D$. The basic reason this fails is that it is possible that the sequence $\{\pi_m\}_{m=1}^\infty$ does not give a normal family and we now give a counterexample which exhibits this behaviour.
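In checking the counterexample below, it is convenient to recall the standard facts about a round annulus ${\mathrm A}(0,r,R) = \{z : r < |z| < R\}$ (standard computations, not part of the original):

```latex
\[
\operatorname{mod}{\mathrm A}(0,r,R) \;=\; \frac{1}{2\pi}\log\frac{R}{r},
\qquad
\ell(\text{core geodesic}) \;=\; \frac{2\pi^{2}}{\log (R/r)},
\qquad
\text{equator} \;=\; \Bigl\{\,|z| = \sqrt{rR}\,\Bigr\}.
\]
```

In particular, ${\mathrm A}(0,\tfrac{1}{m^3},m)$ and ${\mathrm A}(0,\tfrac{1}{m^2},1)$ share the equator $\{|z| = \tfrac{1}{m}\}$, and their moduli are $\tfrac{2}{\pi}\log m$ and $\tfrac{1}{\pi}\log m$ respectively, so the second annulus has exactly half the modulus of the first.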
For each $m \ge 1$, let $U_m = {\mathrm A}(0,\tfrac{1}{m^3}, m)$, $u_m= \tfrac{1}{m}$ and consider the sequence of pointed domains $(U_m, u_m)$. This sequence clearly tends to $(\{0\},0)$ and if the family of covering maps had a locally convergent subsequence $\pi_{m_k}$, then it would follow from Rouch\'e's theorem and local compactness as argued by Epstein that $\pi_{m_k}$ must tend to the constant function $0$ locally uniformly on $\D$. However, it is easy to see that the annulus ${\mathrm A}(0,\tfrac{1}{m^2}, 1)$ has uniformly bounded hyperbolic diameter in the larger annulus ${\mathrm A}(0,\tfrac{1}{m^3}, m)$ (as it has the same equator and half the modulus) and thus in $U_m$ by the Schwarz lemma for the hyperbolic metric. Since this annulus contains the base point $u_m = \tfrac{1}{m}$ (which actually lies on its equator), it follows that we can find points $z_m$ within bounded hyperbolic distance of $0$ in $\D$ with $\pi_m(z_m) = \tfrac{1}{2}$, say. With this contradiction, we see that the sequence of covering maps cannot have a convergent subsequence and in particular cannot converge as required. As McMullen (\cite{McM} Page 67, Theorem 5.3) remarks in the disc case, we can move the base points for a convergent sequence of pointed discs by a uniformly bounded hyperbolic distance without affecting whether or not the sequence converges. The proof of this fact is a straightforward application of Theorem 1.2. \begin{corollary} Let $\{\Umum\}_{m=1}^\infty$ be a sequence of pointed hyperbolic domains which converges to $\Uu$ with $U$ hyperbolic. \begin{enumerate} \item If $w_m \in U_m$ for each $m$, $w \in U$ and $w_m \to w$ as $m \to \infty$, then $(U_m, w_m)$ converges to $(U,w)$. \item If $w_m \in U_m$ for each $m$ and we can find $d >0$ independent of $m$ so that $\rho_{U_m}(u_m, w_m) \le d$, then we can find $w \in U$ and a subsequence of the sequence of pointed domains $\{(U_m, w_m)\}_{m=1}^\infty$ which converges to $(U,w)$.
\end{enumerate} \end{corollary} The following lemma will be very useful to us in proving a number of results, especially Theorem 1.8. For $z \in \cbar$ and $r > 0$, let us denote the open spherical disc of radius $r$ about $z$ by ${\mathrm D^\#}(z,r)$. \begin{lemma} Suppose $\{\Umum\}_{m=1}^\infty$ is a sequence of pointed multiply connected domains which converges to a pointed multiply connected domain $\Uu$ in the \Car topology (where we include the degenerate case $U = \{u\}$) and suppose in addition that the complements $\cbar \setminus \Um$ converge in the Hausdorff topology (with respect to the spherical metric on $\cbar$) to a set $K$. Then $\partial U \subset K$. \end{lemma} \proof Suppose first that we are in the degenerate case where $U = \{u\}$. By iii) of \Car convergence in the degenerate case, for any $0 < \epsilon \le \pi$, ${\mathrm D^\#}(u, \epsilon)$ contains points of $\cbar \setminus U_m$ for all but finitely many $m$ and on letting $\epsilon \to 0$, we see that $\partial U = \{u\} \subset K$ as desired. Now suppose that $U \ne \{u\}$. By the Hausdorff version of \Car convergence, $U$ is the component of the complement of this Hausdorff limit which contains the point $u$. So suppose the conclusion fails. If $z$ were a point in $\partial U$ which missed $K$, we could find a spherical disc ${\mathrm D^\#}(z, \delta)$ of some radius $\delta >0$ about $z$ which missed $\cbar \setminus U_m$ for all $m$ sufficiently large. Then $U \cup {\mathrm D^\#}(z, \delta)$ would be a connected set which missed $K$ and since this set contains $z \notin U$, this would contradict the maximality of $U$ as a connected component of $\cbar \setminus K$ whence $\partial U \subset K$ as desired.
$\Box$ \begin{center} \scalebox{1.07}{\includegraphics{Counterexample4}} \unitlength1cm \begin{picture}(0.01,0.01) \put(-11.4,-.3){\footnotesize$(U_m,0)$ } \put(-7.4,2.8){\footnotesize$m \to \infty$} \put(-3.1,-.3){\footnotesize$(U,0)$ } \end{picture} \end{center} The reader might wonder whether, in the case where the limit domain $U$ above is $n$-connected, this forces the set $K$ to have exactly $n$ components. However, this is false as the counterexample depicted above shows. For each $m$ and each $ 2 \le i \le m$, let $\Kim$ be the circle ${\mathrm C}(0,1 - 1/i)$ with an arc of height $1/m$ centered about $1-1/i$ removed. If we then set $U_m = \D \setminus \bigcup_{2 \le i \le m} \Kim$, then the pointed domains $(U_m, 0)$ converge to $({\mathrm D}(0,1/2), 0)$ while their complements converge to the set $\bigcup_{i \ge 2}{\mathrm C}(0,1-1/i) \cup (\cbar \setminus \D)$ which clearly has infinitely many components. {\bf Proof of Theorem 1.8} \hspace{.4cm} Suppose that $\Umum$ converges to $\Uu$ as in the statement in which case we know from Theorem 1.2 that the corresponding normalized covering maps $\pi_m$ converge uniformly on compact sets of $\D$ to the normalized covering map $\pi$ for $U$. Let $\eta$ be a lifting of $\gamma$ to $\D$ which is as close as possible to $0$ and let $A$ be the corresponding hyperbolic covering transformation whose axis is $\eta$. Let $\sigma = [a,b]$ be a full segment of $\eta$ with $b = A(a)$ and which contains the closest point on $\eta$ to $0$. Let $\epsilon > 0$ be small and take a small hyperbolic disc $D$ in $\D$ about $a$ of radius $\epsilon$. $\tilde D = A(D)$ is then a disc of hyperbolic radius $\epsilon$ about $b= A(a)$ and we have that $\pi \equiv \pi \circ A$ on $D$. If we let $R$ be an elliptic rotation of \emph{angle} $\pi$ about $b$, then $R$ and hence $R \circ A$ cannot belong to the group of covering transformations of $U$ as this would violate the local injectivity at $b$ of the covering map $\pi$.
From above, the difference $\pi - \pi \circ R \circ A$ cannot be identically zero on $\D$ as otherwise it would follow from the monodromy theorem that $R \circ A$ and hence $R$ would belong to the group of covering transformations. $\pi - \pi \circ R \circ A$ then has an isolated zero at $a$ and so, for $\epsilon$ small enough, $\pi - \pi \circ R \circ A$ is non-zero on the boundary of $D$. By the local uniform convergence of $\pi_m$ to $\pi$ on $\D$, if we apply Rouch\'e's theorem to $D$, we see that for $m$ large enough, we can find points $a_m \in D$ and $b_m = R(A(a_m)) \in R(A(D))$ with $\pi_m(a_m) = \pi_m(b_m)$. Since $\epsilon$ was arbitrary, if we let $\sigma_m'$ be the geodesic segment between $a_m$ and $b_m$, then $a_m \to a$, $b_m \to b$ and $\sigma_m' \to \sigma$ as $m \to \infty$. Thus we can find covering transformations $A_m$ of $\D$ for $\pi_m$ with $A_m(a_m) = b_m$. Now let $\gamma_m'$ be the image of $\sigma_m'$ under $\pi_m$. Note that since $\gamma$ is simple while $\sigma_m'$ is very close to $\sigma$, it follows again from the convergence of $\pi_m$ to $\pi$ that, moving $a_m$ and $b_m$ slightly closer together along $\sigma_m'$ if needed by an amount which will tend to $0$ as $m \to \infty$, we can assume that there are no points of self-intersection on $\gamma_m'$. $\gamma_m'$ is then a simple closed curve which is a geodesic except at possibly one point where it is not smooth (i.e. there may be a corner). $\gamma$ is also a simple closed curve and as before we will let $E$ and $F$ denote the intersection of $\cbar \setminus U$ with each of the two complementary components of $\gamma$ and assume that $E$ is bounded and $\infty \in F$. Since $\gamma$ is a geodesic, each of $E$ and $F$ must contain at least two points in view of the second part of Theorem 1.4. If we let $z, w \in E$ be two such points, then we may assume that they are in $\partial E \subset \partial U$.
As $\gamma_m'$ is very close to $\gamma$, the winding number of $\gamma_m'$ about $z$ will be close to that of $\gamma$ about $z$ and the same will be true for $w$. As the curves $\gamma_m'$ are simple, $z$ and $w$ are then inside $\gamma_m'$ for $m$ large and it then follows from Lemma 2.1 that for $m$ large enough there are at least two points of $\cbar \setminus U_m$ inside $\gamma_m'$ while the same argument shows that we may also assume the same about the outside of $\gamma_m'$. $\gamma_m'$ is thus a simple closed curve which separates $\cbar \setminus U_m$ non-trivially and we may now apply Theorem 1.4 for $m$ large enough to find a simple closed geodesic $\gamma_m$ which is homotopic in $U_m$ to $\gamma_m'$. By lifting the homotopy, we can then find a lifting of $\gamma_m$ which coincides with the axis of $A_m$ which we will denote by $\eta_m$. \begin{center} \scalebox{1.037}{\includegraphics{HyperbolicGeometryPicture}} \unitlength1cm \begin{picture}(0.01,0.01) \put(-12.2,6.7){\tiny$\eta_m$} \put(-12.45,3.7){\tiny$\sigma_m$ } \put(-11.55,3.6){\tiny$\sigma_m'$} \put(-10.85,3.4){\tiny$\tau_m$ } \put(-12.45,2){\tiny$s_m$} \put(-12.45,5.5){\tiny$t_m$} \put(-11.87,1.9){\tiny$a_m$ } \put(-10.8,5.3){\tiny$b_m$ } \put(-7.66,3.5){$\pi_m$} \put(-3.22,.5){\tiny$\gamma_m$} \put(-3.22,1.12){\tiny$\gamma_m'$} \put(-3.52,1.9){\tiny$\pi_m(\tau_m)$} \put(-3.22,4.85){\tiny$z_m$} \put(-3.52,5.85){\tiny$\pi_m(s_m)$} \end{picture} \end{center} The circle which passes through $a_m$, $b_m$ and the fixed points of $A_m$ is invariant under $A_m$ and its image under $\pi_m$ is a smooth closed curve which in particular has no corner at the point $\pi_m(a_m) = \pi_m(b_m)$ which we will call $z_m$ (this is easiest to see in the upper half-plane model as in the figure above where we let $0$ and $\infty$ be the fixed points of $A_m$ and the imaginary axis the axis of $A_m$ where this circle corresponds to a ray connecting $0$ to $\infty$). 
Let $\tau_m$ be the segment of this circle which passes through $a_m$ and $b_m$. Now $\sigma_m'$ is very close to $\sigma$ for $m$ large and since $\pi_m$ converges locally uniformly on $\D$ to $\pi$, the derivatives $\pi_m'$ converge locally uniformly to $\pi'$. Thus the difference between the angles of the two tangents to $\gamma_m'$ at the corner at $z_m$ will be very small and will tend to $0$ as $m$ tends to infinity. Since the covering maps $\pi_m$ are angle-preserving, we can say the same about the angles of the tangents at the two endpoints $a_m$, $b_m$ of $\sigma_m'$ (note that this applies in the upper half-plane picture rather than that for the unit disc). Now the image $\pi_m(\tau_m)$ of the above invariant circle under $\pi_m$ is a smooth curve and it is clear from the picture above that $\sigma_m'$ must lie on one side of $\tau_m$. It then follows from above that $\sigma_m'$ must be very close to $\tau_m$. However, since the hyperbolic distance between $a_m$ and $b_m$ is bounded below, this can only happen if $\sigma_m'$ is very close to a segment $\sigma_m$ of the axis $\eta_m$ of $A_m$ which connects points $s_m$, $t_m$ with $t_m = A_m(s_m)$ (again this is easiest to see in the upper half-plane picture above). Hence $\sigma_m$ is very close to $\sigma'_m$ which in turn is very close to $\sigma$ and since all three of these are geodesic segments, their lengths in the hyperbolic metric of $\D$ will also be close. Since these segments are all mapped to simple closed curves by their corresponding covering maps, this gives us \emph{1.} and the second part of \emph{2.} immediately while the rest of \emph{2.} follows on applying the local uniform convergence of $\pi_m$ to $\pi$. Finally, \emph{3.} follows immediately from \emph{2.}, which finishes the proof. $\Box$ We remark that the proof above relied mostly on the convergence of normalized covering maps.
The only place where we needed \Car convergence directly was for Lemma 2.1 which was used just once to show that the curve $\gamma_m'$ separated $\cbar \setminus U_m$ non-trivially. We turn now to proving Theorem 1.9. We first need a lemma from \cite{Com2}. Note that the original version of this lemma was for subdomains of $\C$ where two positively oriented curves separate the complement of $U$ in the same way if and only if they are homologous in $U$. As usual, however, any hyperbolic domain in $\cbar$ can be mapped to a hyperbolic domain in $\C$ using a M\"obius transformation. \begin{lemma}[\cite{Com2} Lemma 2.3] Let $U \subset \cbar$ be a hyperbolic domain and let $\gamma_1$, $\gamma_2$ be two simple closed geodesics in $U$ which separate $\cbar \setminus U$ in the same way and suppose that one of these curves lies in the closure of one of the complementary components of the other. Then $\gamma_1 = \gamma_2$. \end{lemma} {\bf Proof of Theorem 1.9 \hspace{.4cm}} Let $E$ and $F$ be the subsets of $\cbar \setminus U$ separated by $\tilde \gamma$ and, as usual, we assume that $\infty \in F$. Now let $\tilde \gamma_m$ be the geodesics in $U_m$ which converge to $\tilde \gamma$ as in Theorem 1.8. For each $m$, let $\gamma_m$ be a meridian which separates the complement $\cbar \setminus U_m$ in the same way as $\tilde \gamma_m$. Applying a M\"obius transformation if needed to map one of the points of $\cbar \setminus U_m$ to $\infty$, we can conclude by Lemma 2.2 above that $\gamma_m$ must meet $\tilde \gamma_m$ and it follows from Theorem 1.8 that the hyperbolic distances $\rho_{U_m}(u_m, \gamma_m)$ must be uniformly bounded above. By Theorems 1.6 and 1.8, the lengths $\ell_m$ are obviously bounded above. To see that they must be bounded below, for each $m$ let $\pi_m$ be the normalized universal covering map for $U_m$ and let $\pi$ be the normalized universal covering map for $U$. By Theorem 1.2, $\pi_m$ then converges locally uniformly on $\D$ to $\pi$.
Now for each $m$, let $\sigma_m$ be a full segment of a lift of $\gamma_m$ which is as close as possible to $0$. The segments $\sigma_m$ are all within bounded distance of $0$ and have uniformly bounded hyperbolic lengths. It then follows that the lengths of these segments and hence of the curves $\gamma_m$ must be bounded below away from $0$ since otherwise, by the local uniform convergence of $\pi_m$ to $\pi$, we would obtain a contradiction to the fact that $\pi$ as a covering map must be locally injective. $\Box$ {\bf Proof of Theorem 1.10 \hspace{.4cm}} Let $\tilde \gamma$ be a meridian in $U$ which separates $E$ and $F$ which exists by virtue of Theorem 1.6. By the discussion at the end of page 3 and the start of page 4 about post-composing with suitably chosen M\"obius transformations, we can assume that $\infty \in F$ and also that $\infty \notin U_m$ for every $m$ which allows us to make use of homology as a tool to completely describe how a simple closed curve separates the complements of these domains. By Theorem 1.8, we can find a sequence of geodesics $\tilde \gamma_m$ which tends to $\tilde \gamma$. By Theorem 1.6 again, we can then find meridians $\gamma_m$ in the homology class of each $\tilde \gamma_m$. By Theorem 1.9, the associated distances $d_m$ for the curves $\gamma_m$ are uniformly bounded above while the lengths $\ell_m$ are again uniformly bounded above and bounded below away from zero. If we now let $\pi_m$ and $\pi$ be the normalized covering maps for each $U_m$ and $U$ respectively, then again $\pi_m$ converges locally uniformly to $\pi$ by Theorem 1.2. As above, we can then find full segments $\sigma_m$ of liftings of each $\gamma_m$ which are a uniformly bounded hyperbolic distance from $0$, these liftings being the axes of hyperbolic M\"obius transformations $A_m$ of bounded translation length.
It then follows that we can find a subsequence $m_k$ for which the corresponding segments $\sigma_{m_k}$ converge to a geodesic segment $\sigma$ which must have positive length, since otherwise we again obtain a contradiction to the local injectivity of $\pi$ as at the end of the proof of Theorem 1.9. If we then set $\gamma = \pi(\sigma)$, then $\gamma$ is a closed hyperbolic geodesic of $U$ and the meridians $\gamma_{m_k}$ must converge to $\gamma$. We still need to show that this geodesic is simple and a meridian which separates $\cbar \setminus U$ into the same sets $E$, $F$ as $\tilde \gamma$ does. As the curves $\tilde \gamma_{m_k}$, $\gamma_{m_k}$ converge to $\tilde \gamma$, $\gamma$ respectively, by ii) of \Car convergence, $\tilde \gamma_{m_k}$, $\gamma_{m_k}$ are bounded away from the boundaries $\partial U_{m_k}$. Thus if $z \in \partial U$ then, by Lemma 2.1, for $k$ large we can find a point $z_{m_k}$ which is very close to $z$ and thus in the same complementary region of $\gamma_{m_k}$. The same argument allows us to make a similar conclusion for the curves $\tilde \gamma_{m_k}$. Hence for $z \in \partial U$ and $k$ large, by homology in $U_{m_k}$, \[n(\gamma_{m_k}, z) = n(\gamma_{m_k}, z_{m_k}) = n(\tilde \gamma_{m_k}, z_{m_k}) = n(\tilde \gamma_{m_k}, z).\] It then follows from the above that for $k$ large enough $\tilde \gamma_{m_k}$ and $\gamma_{m_k}$ are homologous in $U$ as well as in $U_{m_k}$ (it is not hard to see that there is sufficient generality in considering winding numbers around points only of $\partial U$ rather than all of $\cbar \setminus U$). Also, by the uniform convergence of the curves $\tilde \gamma_{m_k}$ and $\gamma_{m_k}$ to $\tilde \gamma$ and $\gamma$ respectively, these curves eventually lie in $U$ and are homologous in $U$ to $\tilde \gamma$ and $\gamma$ respectively.
Hence, for large $k$ we have \[ \gamma \underset{U} \approx \gamma_{m_k} \underset{U} \approx \tilde \gamma_{m_k} \underset{U} \approx \tilde \gamma.\] Thus $\gamma$ is homologous in $U$ to $\tilde \gamma$ and, since $\tilde \gamma$ is a meridian, the length of $\gamma$ cannot be smaller than that of $\tilde \gamma$. On the other hand, by the convergence of the curves $\gamma_{m_k}$ to $\gamma$ using the universal covering maps above, and the fact that the curves $\gamma_{m_k}$ are meridians, it follows that $\gamma$ cannot be longer than $\tilde \gamma$ either. By Theorem 1.6, $\gamma$ is then a meridian which separates $E$ and $F$, and in particular simple, which completes the proof. $\Box$ {\bf Proof of Corollary 1.1} \hspace{.4cm} The existence of a significant system of meridians for $\Uu$ is immediate in view of Theorem 1.10. Now let $\gamma^i$, $1 \le i \le E(n)$ be any extended system of meridians for $U$ and let $\gim$ be the curves which converge to each $\gamma^i$ as in Theorem 1.8. By Lemma 2.1, we see that for $m$ large enough, the curves $\gim$ give different separations of the complement $\cbar \setminus U_m$. This implies that, for $m$ large enough, any meridian of $U_m$ separates the complement of $U_m$ in the same way as one of the curves $\gim$, and the uniform bounds on the distances and lengths of the system then follow from Theorem 1.9. $\Box$ \begin{center} \scalebox{1.04}{\includegraphics{Handschellen.pdf}} \unitlength1cm \begin{picture}(0.01,0.01) \put(-10.65,-.35){$U_1$ } \put(-3.33,-.35){$U_2$ } \end{picture} \end{center} The reader might wonder if, for a sequence $\{\Umum\}_{m=1}^\infty$, the convergence of meridians and their lengths, together with that of the sequence $\{u_m\}_{m=1}^\infty$, is sufficient to ensure that $\{\Umum\}_{m=1}^\infty$ converges in the \Car topology. The example in Figure 3 shows that this is not the case, the basic reason being that knowing the meridians of a domain does not allow one to determine the domain itself. 
In both $U_1$ and $U_2$ the circle indicated is the unit circle and the domains are symmetric under $1/z$, so that by Theorem 1.7 this circle is the equator of the topological annulus concerned. However, it is clear that a sequence of pointed domains which alternated between these two could not converge in the \Car topology. \section{Riemann Mappings} In this section we prove a version of Theorem 1.2 for Riemann maps instead of covering maps. This is useful in situations where one wants to investigate properties of a family of functions defined on different domains of the same connectivity. The usual thing to do is to normalize the domains to make them as similar as possible. However, given that even the normalized domains will likely be different, we need a notion of convergence for a sequence of functions defined on varying domains. Of course, this is only likely to make sense if the domains themselves are also converging. \begin{definition} Let $\{\Umum\}_{m=1}^\infty$ be a sequence of pointed domains which converges in the \Car topology to a pointed domain $\Uu$ with $\Uu \ne (\{u\},u)$. For each $m$ let $f_m$ be an analytic function (with respect to the spherical topology) defined on $U_m$ and let $f$ be an analytic function defined on $U$. We say that $f_m$ converges to $f$ {\rm uniformly on compact subsets of $U$} or simply {\rm locally uniformly to $f$ on $U$} if, for every compact subset $K$ of $U$ and every $\epsilon >0$, there exists $m_0$ such that ${\mathrm d^\#}(f_m(z), f(z)) < \epsilon$ on $K$ for all $m \ge m_0$. \end{definition} This is an adaptation to the spherical topology of the definition originally given in \cite{Ep}. Note that, in view of condition ii) of \Car convergence, for any such $K$, $f_m$ will be defined on $K$ for all sufficiently large $m$, and so the definition is meaningful. Clearly if all the domains involved are the same, then we recover the standard definition of uniform convergence on compact subsets. 
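As a concrete illustration of this definition, the following sketch measures the supremum of the spherical distance between two functions over a finite sample of a compact set. It is our own illustration only: the function names, the sample set and the tolerance are hypothetical, and we use the standard chordal distance on the Riemann sphere (restricted to finite points) as a stand-in for the paper's ${\mathrm d}^\#$.

```python
import math

def chordal(z, w):
    """Chordal (spherical) distance between two finite points on the
    Riemann sphere: d#(z, w) = 2|z - w| / sqrt((1+|z|^2)(1+|w|^2))."""
    return 2 * abs(z - w) / math.sqrt((1 + abs(z) ** 2) * (1 + abs(w) ** 2))

def sup_spherical_dist(f_m, f, sample):
    """Supremum over a finite sample of a compact set K of d#(f_m(z), f(z)).
    Definition 3.1 requires this supremum to be < eps for all large m."""
    return max(chordal(f_m(z), f(z)) for z in sample)

# Hypothetical example: f_m(z) = z + 1/m converges locally uniformly
# to f(z) = z, so the suprema shrink as m grows.
K = [0.1 + 0.2j, 0.5 - 0.4j, -0.3j]
sups = [sup_spherical_dist(lambda z, m=m: z + 1 / m, lambda z: z, K)
        for m in (1, 10, 100)]
assert sups[0] > sups[1] > sups[2]
```

In an actual application the functions $f_m$ would in addition only be evaluated on compact sets on which they are eventually defined, which is exactly what condition ii) of \Car convergence guarantees.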
This version of local uniform convergence is further related to the standard one in view of the following result, whose proof is a straightforward application of Theorem 1.2 combined with ii) of \Car convergence. \begin{proposition} Let $\{\Umum\}_{m=1}^\infty$ be a sequence of pointed domains which converges to $\Uu$ with $\Uu \ne (\{u\},u)$ and let $\pi_m$ and $\pi$ be the normalized covering maps from $\D$ to each $U_m$ and $U$ respectively. Let $f_m$ be defined on $U_m$ for each $m$ and $f$ be defined on $U$ and suppose $f_m$ converges uniformly to $f$ on compact subsets of $U$. Then the compositions $f_m \circ \pi_m$ converge locally uniformly on $\D$ to $f \circ \pi$. \end{proposition} Recall that there is a version of the Riemann mapping theorem for multiply connected domains which maps a given multiply connected domain onto a domain of the same connectivity which has some standard shape. There is some variation regarding the precise form of these standard domains; however, one of the most common is a round annulus from which a number of concentric circular slits have been removed, such as can be found in Ahlfors' book \cite{Ahl}. From now on, we shall refer to such domains as \emph{standard domains} (where in the case $n=1$ the standard domain is the unit disc). \begin{theorem}[\cite{Ahl} Page 255, Theorem 10] For an $n$-connected non-degenerate pointed domain $\Uu$ with $n > 1$, there is a conformal mapping $\phi(z)$ which maps $U$ to an annulus ${\mathrm A}(0,1,e^{\lambda^1})$ minus $n-2$ concentric arcs situated on the circles ${\mathrm C}(0,e^{\lambda^i})$, $i = 2, \ldots, n-1$. Furthermore, up to a choice of which complementary components of $U$ correspond to $\overline \D$ and $\cbar \setminus {{\mathrm D}(0,e^{\lambda^1})}$, the numbers $\lambda^i$, $i = 1, \ldots, n-1$ are uniquely determined, as are the positions of the slits up to a rotation. If in addition we require that $\phi^\#(u) > 0$, the map $\phi$ is uniquely determined. 
\end{theorem} We remark that, despite the fact that the Riemann mapping does not in general extend beyond $U$, the construction Ahlfors gives shows how the correspondence between the complementary components of $U$ and those of the image domain can be made in a way which is both well-defined and natural. We recall a well-known lemma concerning the behaviour of the hyperbolic metric near the boundary. A proof of the original version for the Euclidean metric can be found in \cite{CG} Page 13, Theorem 4.3, and it is the lower bound it gives on the hyperbolic metric which will be of particular importance for us. For a point $u \in U$, we shall denote the spherical distance to $\partial U$ by $\delta^\#_U (u)$, or just $\delta^\# (u)$ if once again the domain is clear from the context. \begin{lemma} Let $U \subset \cbar$ be a hyperbolic domain. Then there exists $C > 0$ for which the hyperbolic metric $\rho(\cdot\,, \cdot)$ on $U$ satisfies \[ \frac{C}{\delta^\#(z)\log(1/\delta^\#(z))} \dsharpz \le {\rm d}\rho (z) \le \frac{4}{\delta^\#(z)} \dsharpz, \qquad \mbox{as} \quad z \to \partial U.\] \end{lemma} The upper bound follows from the result in \cite{CG} combined with the facts that $\delta^\#(z)$ is less than or equal to the Euclidean distance to the boundary, that the spherical and Euclidean metrics are equivalent to within a factor of $2$ on the unit disc, and that the quantities $\dsharpz$, $\delta^\#(z)$ are invariant under the map $z \mapsto \tfrac{1}{z}$. To obtain the lower bound, one lets $z_1$ be the closest point in $\cbar \setminus U$ to $z$ and chooses two other points $z_2$, $z_3$ in $\partial U$. These three points are then mapped by a M\"obius transformation to $0$, $1$ and $\infty$ respectively, and one then obtains a lower bound on the hyperbolic metric for $\cbar \setminus \{0,1,\infty\}$ and applies the Schwarz lemma. 
It thus follows from Theorem 2.3.3 on page 34 of \cite{Bear} that these estimates are uniform with respect to the minimum separation in the spherical metric between $z_1$, $z_2$ and $z_3$. Meridians are conformally invariant in the following sense. \begin{lemma} If $U$ is a hyperbolic domain and $\phi$ is a univalent function defined on $U$, then $\gamma$ is a meridian of $U$ if and only if $\phi(\gamma)$ is a meridian of $\phi(U)$. Furthermore, $\gamma$ is a principal meridian if and only if $\phi(\gamma)$ is. \end{lemma} \proof As before, we can assume that both $U$ and $\phi(U)$ are subdomains of $\C$. $\gamma$ is a geodesic in $U$ if and only if $\phi(\gamma)$ is a geodesic in $\phi(U)$. Also, two curves $\gamma_1$ and $\gamma_2$ are homologous in $U$ if and only if $\phi(\gamma_1)$ and $\phi(\gamma_2)$ are homologous in $\phi(U)$. The first part of the statement now follows from the conformal invariance of hyperbolic length. For the second part, by invariance of homotopy or homology, if $\gamma$ is a simple closed curve in $U$, then $\gamma$ separates $\cbar \setminus U$ if and only if $\phi(\gamma)$ separates $\cbar \setminus \phi(U)$. It is then not too hard to see by Theorem 1.3 that if $\gamma$ separates $\cbar \setminus U$ into two non-empty subsets $E$, $F$ and $\phi(\gamma)$ separates $\cbar \setminus \phi(U)$ into non-empty subsets $\tilde E$, $\tilde F$, then $E$ and $F$ are both disconnected if and only if $\tilde E$ and $\tilde F$ are. $\Box$ We will also need the following lemma on the conformal invariance of non-degeneracy for finitely connected domains. \begin{lemma} Let $U$ be an $n$-connected domain with $n \ge 1$ and let $\phi$ be a univalent function defined on $U$. Then $U$ is non-degenerate if and only if $\phi(U)$ is. \end{lemma} \proof For the case $n=1$, this is immediate from the Riemann mapping theorem in the simply connected case and the fact that $\C$ and $\D$ are not conformally equivalent. 
For $n \ge 2$, recall that a domain is degenerate if and only if we can find a curve in the domain which is homotopic to a puncture and whose homotopy class contains curves of arbitrarily short hyperbolic length. Since hyperbolic length and homotopy are both preserved by $\phi$, the result follows. $\Box$ Recall that a Riemann map to an $n$-connected slit domain as above with $n > 1$ is specified by $3n-5$ real numbers $\lambda^1, \lambda^2, \ldots, \lambda^{n-1}, \theta^1, \theta^2, \ldots, \theta^{2n-4}$ (we remark that Ahlfors considers the domains rather than the mappings, in which case one can make an arbitrary rotation, which eliminates one parameter, so that the domain is specified by $3n-6$ real numbers). Representing this list of numbers as a vector $\Lambda$, let us designate the pointed standard domain by $\ALa$, where the inner radius is $1$, the outer radius is $e^{\lambda^1}$ and the remaining $n-2$ complementary components are circular slits which are arcs of the circles ${\mathrm C}(0, e^{\lambda^j})$ running from $e^{\lambda^j + i\theta^{2j - 3}}$ to $e^{\lambda^j + i\theta^{2j-2}}$. If $\Uu$ is mapped by the unique suitably normalized Riemann map $\phi$ to the pointed standard domain $\ALa$ where $\phi(u) = a$, $\phi'(u) >0$, we shall call $\ALa$ a \emph{standard domain for $U$}. This domain is unique up to the assignment of which complementary components of $U$ get mapped to $\overline \D$ and $\cbar \setminus \overline{{\mathrm D}(0,e^{\lambda^1})}$. Before stating the result, we remark that we only consider sequences of domains which have the same connectivity. To see why this is necessary, consider, for example, a pointed domain $(U,u)$ of low connectivity which is the limit of a sequence $\Umum$ where the domains $U_m$ have high connectivity which tends to infinity and where the diameters of the complementary components of $U_m$ all tend to zero. 
For $m$ large, at least one of the complementary components $L^i$ of $U$ is close (in the sense of the Euclidean or spherical distance between sets) to many complementary components of $U_m$. However, this leads to two problems: firstly, it is unclear which component of $\cbar \setminus U_m$ one should choose to correspond to a slit which is close to the corresponding slit for $L^i$; secondly, the components of $\cbar \setminus U_m$ could be very far apart relative to their size, which could make the outer radius of $\ALm$ potentially very large (or even infinite) if one of these widely separated components corresponds to either of the components $\overline \D$ or $\cbar \setminus {\mathrm D}(0,e^{\lambda^1})$ of the complement of the standard domain $\ALm$. For a sequence of standard pointed domains $\{\ALam\}_{m=1}^\infty$, convergence in the \Car topology to another $n$-connected pointed domain $\ALa$ is precisely equivalent to the convergence of the points $a_m$ to $a$ and the convergence in $\R^{3n -5}$ of the vectors $\Lambda_m$ to the corresponding vector $\Lambda$ for $\ALa$. Finally, we remark that the behaviour and conformal invariance of the meridians and their lengths and the use of Theorem 1.10 are right at the heart of the proof of this result. Not surprisingly, Theorem 1.2 also plays a major role. \begin{theorem} Let $n \ge 1$, let $\{\Umum\}_{m=1}^\infty$ be a sequence of $n$-connected non-degenerate pointed domains, let $\Uu$ be an $n$-connected non-degenerate pointed domain and let $\ALa$ be a pointed standard domain for $\Uu$ where $a$ is the image of $u$ under the corresponding normalized Riemann map $\phi$ as in Theorem 3.1 (where we make any choice we wish regarding which components of $\cbar \setminus U$ correspond to $\overline \D$ and the unbounded complementary component of $A^\Lambda$). 
Then $\Umum$ converges to $\Uu$ if and only if we can label the components of the complements $\cbar \setminus U_m$ and choose corresponding normalized Riemann mappings $\phi_m$ to standard domains $\ALam$ so that these standard domains converge to $\ALa$ and the inverses $\psi_m$ of the maps $\phi_m$ converge locally uniformly on $\ALa$ to $\psi = \phi^{\circ -1}$, the inverse Riemann map for $\Uu$. In addition, in the case of convergence, the Riemann maps $\phi_m$ converge locally uniformly on $\Uu$ to the Riemann map $\phi$ for $\Uu$. \end{theorem} \proof The case $n=1$ is already proved in Theorem 1.2, so let us from now on assume that $n \ge 2$ and that the standard domains are then annuli from which (possibly) some slits have been removed. Suppose first that $\Umum$ converges to $\Uu$ and assume without loss of generality that $U \subset \C$. The sequence $\{u_m\}$ of base points converges to $u$ and by discarding finitely many members if needed, we can assume that this sequence is bounded (in $\C$). Next, let $L^i$, $1 \le i \le n$ be the components of $\cbar \setminus U$ which correspond to our choice of standard domain (i.e. $L^1$ and $L^n$ correspond respectively to $\overline \D$ and $\cbar \setminus \overline {\rm D}(0, e^{\lambda^1})$). By the Hausdorff version of \Car convergence, any Hausdorff limit of the sets $\cbar \setminus U_m$ is contained in $\cbar \setminus U$. Using Lemma 2.1, we can label the components $\Kim$ of $\cbar \setminus U_m$ so that for each $1 \le i \le n$, and each component $L^i$ of $\cbar \setminus U$, any Hausdorff limit of the sets $\Kim$ is a subset of the component $L^i$ of $\cbar \setminus U$. We claim that the numbers $\lambda^1_m$ must be bounded above since otherwise, as each of the sets $\cbar \setminus \ALm$ has only $n$ components of which $n-2$ are slits, there would be a subsequence $m_k$ for which the standard annuli $A^{\Lambda}_{m_k}$ would contain round annuli whose moduli tended to infinity. 
By conformal invariance, we could say the same about the domains $U_{m_k}$ (where such thick annuli would separate the complements of these domains). The hyperbolic lengths of the equators of these annuli would then tend to $0$ and, by Theorem 1.6, the lengths of any meridians in the same homology classes as these equators would also tend to $0$. However, Corollary 1.1 tells us that the lengths $\leim$, $1 \le i \le E(n)$, for each $U_m$ are bounded below away from zero, which then gives us a contradiction. By Montel's theorem and ii) of \Car convergence, the Riemann maps then give a normal family on any subdomain of $U$ which is compactly contained in $U$. A standard diagonal argument then shows that they must give a normal family on $U$ in the sense that any sequence taken from this family will have a subsequence which converges uniformly on compact subsets of $U$ in the sense of Definition 3.1. Now let $\gamma^i$, $1 \le i \le P(n)$ be the principal system of meridians which exists by Proposition 1.1. Using Lemma 3.2, we can consider the corresponding meridians $\tilde \gamma^i_m$, $1 \le i \le P(n)$ of $\ALm$. By relabelling if needed, we can say that the meridian $\tilde \gamma^1_m$ then separates $\overline \D$ from the rest of $\ALm$. Since the numbers $\lambda^1_m$ are uniformly bounded above, it follows that the spherical diameters of these meridians are bounded below. Additionally, by Theorem 1.10 and the conformal invariance of the hyperbolic metric, the lengths of these curves are uniformly bounded above. In view of our remarks after Lemma 3.1, we can use the estimates this result gives on the hyperbolic metric in a uniform fashion and, since the improper integral \[ \int_0^{\tfrac{1}{2}} { \frac{1}{x \log (1/ x)}{\rm d}x} \] diverges, we can find $\delta>0$ such that for every $m$ a spherical $\delta$-neighbourhood of the meridian $\tilde \gamma^1_m$ is contained in $\ALm$. 
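For completeness, the divergence of this integral can be checked directly with the substitution $u = \log(1/x)$, for which ${\rm d}u = -\,{\rm d}x/x$:

```latex
\int_0^{1/2} \frac{{\rm d}x}{x \log(1/x)}
  \;=\; \int_{\log 2}^{\infty} \frac{{\rm d}u}{u}
  \;=\; \lim_{R \to \infty} \bigl( \log R - \log \log 2 \bigr)
  \;=\; \infty .
```

It is precisely this divergence which forces curves of bounded hyperbolic length to stay a definite spherical distance away from the boundary.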
It then follows again by Theorem 1.10 and conformal invariance combined with the same estimates on the hyperbolic metric that we can make $\delta >0$ smaller if needed so that the spherical distance to the boundary $\delta^\#_{U_m}(a_m) \ge \delta$ for every $m$ and a spherical $\delta$-neighbourhood of each of the principal meridians $\tilde \gamma^i_m$ of $\ALm$ will be contained in $\ALm$ for $1 \le i \le P(n)$ and every $m$. In particular this means that the complementary components of each $\ALm$ are at least $2\delta$ away from each other. By the Koebe one-quarter theorem, the absolute values of the derivatives $\phi_m'(u_m)$ are uniformly bounded above and below away from $0$, whence all limit functions for the sequence $\{\phi_m\}_{m=1}^\infty$ must be non-constant and in fact univalent in view of Hurwitz's theorem. We next want to show that the standard domains $\ALam$ converge in the \Car topology and we will do this by appealing to Theorem 1.2. Suppose that we can find a subsequence $m_k$ for which $\phi_{m_k}$ converges on $(U,u)$ to some univalent limit function $\phi$. Recall the normalized covering maps $\pi_m : \D \to U_m$ of Theorem 1.2 which by this result converge to the normalized covering map $\pi: \D \to U$. If we now set $\chi_{m_k} = \phi_{m_k} \circ \,\pi_{m_k}$, then $\chi_{m_k}$ is the unique normalized covering map for the standard pointed domain $(A^{\Lambda_{m_k}},a_{m_k})$. By Proposition 3.1, the functions $\chi_{m_k}$ then converge on compact subsets of $\D$ to $\phi \circ \pi$. Since $\phi$ is univalent and $\pi$ is a covering map, $\phi \circ \pi$ is itself a covering map which must in fact be $\chi$, the normalized covering map from $\D$ to $\phi(U)$. 
By Theorem 1.2, the domains $(A^{\Lambda_{m_k}},a_{m_k})$ then converge to a limit domain $(A',a')$ where $A' = \phi (U)$, and since $\delta^\#(a_m) \ge \delta$ and the complementary components of each $\ALm$ are at least $2\delta$ away from each other for every $m$, $(A',a') \ne (\{a'\},a')$ and $A'$ must be $n$-connected. Also, from the Hausdorff version of \Car convergence, it follows that $(A',a')$ must be a standard pointed domain. Finally, as $U$ is non-degenerate and $\phi$ is univalent, it follows again by Lemma 3.3 that $A'$ is also non-degenerate. Now $\phi$ is univalent on $U$ and clearly $\phi^\#(u) >0$, so $\phi$ is the normalized Riemann map from $\Uu$ to $(A',a')$. By Theorem 3.1, $A'$ is conformally equivalent to $A^{\Lambda}$ and, in order to show that these two domains are equal, we just need to show that $\phi$ preserves the labelling of the components of $\cbar \setminus U$. Let $\gamma$ be a simple closed curve around the complementary component $L^1$ of $U$ which does not encircle the other complementary components of $U$ and which exists in view of Theorem 1.3. By our labelling of the complementary components of the domains $U_m$ and ii) of \Car convergence, $\gamma$ separates $K^1_{m_k}$ from the other components of $\cbar \setminus U_{m_k}$ for $k$ large enough. From this it is not too hard to see that, for $k$ large enough, $\phi_{m_k}(\gamma)$ is then a simple closed curve which separates $\overline \D$ from the other components of $\cbar \setminus A^{\Lambda_{m_k}}$ and thus encloses $\overline \D$. If we then let $z$ be any point of $\overline \D$, then by the local uniform convergence of $\phi_{m_k}$ to $\phi$ on $U$, $n(\phi(\gamma),z) = n(\phi_{m_k}(\gamma),z) = \pm 1$ for $k$ large enough, whence $\phi(\gamma)$ also encloses $\overline \D$. It also follows from Lemma 2.1 and the convergence of the pointed domains $(A^{\Lambda_{m_k}},a_{m_k})$ to $(A',a')$ that $\phi(\gamma)$ does not enclose any of the other components of $\cbar \setminus A'$. 
$L^1$ thus corresponds under $\phi$ to the complementary component $\overline \D$ of $A'$ (and also $A^{\Lambda}$) and a similar argument shows that $L^n$ corresponds to the unbounded complementary component of $A'$. By the uniqueness part of Theorem 3.1, we must then have that $(A',a') = \ALa$. It then follows easily that $\ALam$ converges to $\ALa$ and that the mappings $\phi_m$ converge on compact subsets of $U$ to $\phi$. We still need to show the inverses $\psi_m$ converge. Since the domains $\Umum$ converge to another pointed $n$-connected domain none of whose complementary components is a point, by Lemma 2.1 the spherical diameters of the complements $\cbar \setminus U_m$ are bounded below and the usual argument of post-composing with a bi-equicontinuous family of M\"obius transformations and applying Montel's theorem shows that the functions $\psi_m$ give a family which is normal on $A^\Lambda$ in the sense given earlier. Applying the Koebe one-quarter theorem and Hurwitz's theorem as before shows that all limit functions must be non-constant and univalent. If we then have a sequence $\psi_{m_k}$ which converges uniformly on compact subsets of $A^\Lambda$ to a limit function $\tilde \psi$, then, by Proposition 3.1 again, $\psi_{m_k} \circ \chi_{m_k}$ converges uniformly on compact subsets of $\D$ to $\tilde \psi \circ \chi$. Using Rouch\'e's theorem and local compactness as in \cite{Ep} then shows that $\tilde \psi (A^\Lambda) = U$ with $\tilde \psi (a) = u$ and using ii) of \Car convergence, it follows easily that $\phi_{m_k} \circ \psi_{m_k}$ converges uniformly on compact subsets of $A^\Lambda$ to $\phi \circ \tilde \psi$ whence $\tilde \psi = \phi^{\circ -1}$. With this the proof of the first direction is finished. For the other direction, suppose now that the standard pointed domains $\ALam$ converge to $\ALa$, which is an $n$-connected non-degenerate standard domain and that the corresponding inverse Riemann maps $\psi_m$ converge to $\psi$. 
For each $m$ let $\chi_m$ be the normalized covering map from $\D$ to the standard domain $\ALm$ and let $\chi$ be the corresponding covering map for $\AL$ so that $\chi_m$ converges uniformly on compact subsets of $\D$ to $\chi$ by Theorem 1.2. Again by Proposition 3.1, $\pi_m = \chi_m \circ \psi_m$ will converge locally uniformly to $\chi \circ \psi = \pi$ on $\D$. By Theorem 1.2, it then follows that $\Umum$ converges to $\Uu$. On the other hand, as $\Uu$ is a \Car limit of pointed domains of connectivity $n$, $U$ has connectivity $\le n$ and since $A^\Lambda$ is $n$-connected and $\psi$ is univalent, it follows from Theorem 3.1 that $U$ must be $n$-connected. Finally, as $A^\Lambda$ is non-degenerate, it follows from Lemma 3.3 that $U$ must be non-degenerate. $\Box$ \end{document}
\begin{document} \title{Long cycles in fullerene graphs} \author{ Daniel Kr{\'a}l'$^a$\footnote{Institute for Theoretical Computer Science ({\sc iti}) is supported by Ministry of Education of the Czech Republic as project 1M0545.} \and Ond{\v r}ej Pangr{\'a}c$^a$ \and Jean-S{\'e}bastien Sereni$^a$\footnote{This author is supported by the European project {\sc ist fet Aeolus.}} \and Riste {\v S}krekovski$^b$\footnote{Supported in part by Ministry of Science and Technology of Slovenia, Research Program P1-0297.}} \date{} \maketitle \begin{center} $^a$ Department of Applied Mathematics ({\sc kam}) and Institute for Theoretical Computer Science ({\sc iti}), Faculty of Mathematics and Physics, Charles University, Malostransk\'e N\'am\v est\'i 25, 118 00 Prague, Czech Republic.\\ E-mails: \{{\tt kral,pangrac,sereni}\}{\tt @kam.mff.cuni.cz}.\\ $^b$ Department of Mathematics, University of Ljubljana, Jedranska 19, 1111 Ljubljana, Slovenia.\\ \end{center} \begin{abstract} It is conjectured that every fullerene graph is hamiltonian. Jendrol' and Owens proved [J. Math. Chem. 18 (1995), pp.~83--90] that every fullerene graph on $n$ vertices has a cycle of length at least $4n/5$. In this paper, we improve this bound to $5n/6-2/3$. \end{abstract} \section{Introduction} \emph{Fullerenes} are carbon-cage molecules in which the carbon atoms are arranged on a sphere forming twelve pentagonal faces, all other faces being hexagonal. The~ico\-sahedral $C_{60}$, well known as Buckminsterfullerene, was found by Kroto et~al.~\cite{KHBCS}, and its existence was later confirmed experimentally by Kr\"{a}tschmer et~al.~\cite{KLFH} and Taylor et~al.~\cite{THAK}. Since the discovery of the first fullerene molecule, fullerenes have been objects of interest to scientists in many disciplines. Many properties of fullerene molecules can be studied using mathematical tools and results. Thus, \emph{fullerene graphs} were defined as cubic (i.e.~$3$-regular) planar 3-connected graphs with pentagonal and hexagonal faces. 
Such graphs are suitable models for fullerene molecules: carbon atoms are represented by vertices of the graph, whereas the edges represent bonds between adjacent atoms. It is known that there exists a fullerene graph on $n$ vertices for every even $n\ge 20$, $n\not=22$. See the monograph of Fowler and Manolopoulos~\cite{FM} for more information on fullerenes. The hamiltonicity of planar 3-connected cubic graphs has attracted much interest among mathematicians since Tait~\cite{Tait} in 1878 gave a short and elegant (but incorrect) proof of the Four Color Theorem based on the ``fact'' that planar 3-connected cubic graphs are hamiltonian. The missing detail of the proof was precisely this ``fact'', which became known as Tait's Conjecture. Later, Tutte~\cite{Tutte} disproved Tait's Conjecture. The hamiltonicity of various subclasses of 3-connected planar cubic graphs was subsequently investigated. Gr\"unbaum and Zaks~\cite{GZ} asked whether the graphs in the family ${\cal G}_3(p, q)$ of 3-connected cubic planar graphs whose faces are of size $p$ and $q$ with $p<q$ are hamiltonian for any $p,q$. Note that $p \in \{3,4,5\}$ by Euler's formula. Also note that fullerene graphs correspond to ${\cal G}_3(5,6)$. Goodey~\cite{Goodey1,Goodey2} proved that all graphs contained in ${\cal G}_3(3,6)$ and ${\cal G}_3(4,6)$ are hamiltonian. Zaks~\cite{Zaks} found non-hamiltonian graphs in the family ${\cal G}_3(5, k)$ for $k\ge 7$. Similarly, Walther~\cite{Walther} showed that the families ${\cal G}_3(3, q)$ for $7\le q\le 10$ and ${\cal G}_3(4, 2k + 1)$ for $k\ge 3$ contain non-hamiltonian graphs. For more results in this area, see also~\cite{O1,O2,O3, Tkac}. Let us restrict our attention to ${\cal G}_3(5,6)$. Ewald~\cite{Ewald} proved that every fullerene graph contains a cycle which meets every face. This implies that there is a cycle through at least $n/3$ of the vertices of any fullerene graph on $n$ vertices. 
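The counting behind the facts that every fullerene graph has exactly twelve pentagonal faces and that its face and edge counts are determined by $n$ is elementary Euler-formula arithmetic. The sketch below (the function name and sample values are ours, purely for illustration) carries it out for cubic planar graphs with pentagonal and hexagonal faces.

```python
def face_counts(n):
    """Number of pentagonal and hexagonal faces of a fullerene graph on n
    vertices, via Euler's formula V - E + F = 2 for cubic planar graphs."""
    e = 3 * n // 2            # cubic: every vertex has degree 3, so 2E = 3n
    f = e - n + 2             # Euler's formula: F = E - V + 2
    # Each edge lies on two faces: 5*p5 + 6*p6 = 2E, together with p5 + p6 = F,
    # which solves to p5 = 6F - 2E.
    p5 = 6 * f - 2 * e
    p6 = f - p5
    return p5, p6

# Every fullerene graph has exactly 12 pentagonal faces, e.g. C60:
print(face_counts(60))  # (12, 20)
```

The same arithmetic shows that $p5 = 12$ is independent of $n$, so only the number of hexagons varies between fullerene graphs.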
It is well known that each graph $G\in{\cal G}_3(5,6)$ has a dominating cycle $C$, i.e. a cycle $C$ such that each edge of $G$ has an end-vertex on $C$ (in particular, a Tutte cycle is dominating since $G$ is cyclically $4$-edge-connected). This immediately improves the bound from $n/3$ to $3n/4$: the $w$ vertices off $C$ form an independent set, each with all three neighbors on $C$, while each vertex on $C$ has at most one neighbor off $C$; counting the edges leaving $C$ gives $3w\le n-w$, i.e. $w\le n/4$. Jendrol' and Owens~\cite{JO} gave a better bound of $4n/5$. In this paper, we improve the bound to $5n/6-2/3$. \section{Preliminary observations} We follow the terminology of Jendrol' and Owens~\cite{JO}. We consider a longest cycle $C$ of a fullerene graph; a vertex contained in $C$ is {\em black} and a vertex not contained in $C$ is {\em white}. Our aim is to show that there are at most $n/6+2/3$ white vertices for an $n$-vertex fullerene graph $G$. The following was shown in~\cite{JO}. \begin{lemma} \label{lm-path} Let $G$ be a fullerene graph and $C$ a longest cycle in $G$. The graph $G$ contains no path comprised of three white vertices. \end{lemma} \begin{figure} \caption{The possible ways for a cycle $C$ to traverse a face of size five or six (up to symmetry) without forming a path of three white vertices. The cycle $C$ is indicated by bold edges.} \label{fig-possible} \end{figure} Lemma~\ref{lm-path} implies that no face of $G$ is incident with more than two white vertices (see Figure~\ref{fig-possible} for all possibilities, up to symmetry, of how the cycle $C$ can traverse a face of $G$). The faces incident with two white vertices are called {\em white} and the faces incident with no white vertices are called {\em black}. Let us now observe the following simple fact. \begin{lemma} \label{lm-white5} If $C$ is a longest cycle in a fullerene graph $G$, then there are no white faces of size five. \end{lemma} \begin{figure} \caption{Prolonging the cycle $C$ if the graph $G$ contains a white face of size five.} \label{fig-white5} \end{figure} \begin{proof} Assume that there is a white face $v_1v_2v_3v_4v_5$. 
By symmetry, we can assume that the cycle $C$ contains the path $v_3v_4v_5$. Replacing the path $v_3v_4v_5$ with the path $v_3v_2v_1v_5$ (see Figure~\ref{fig-white5}) yields a cycle of $G$ longer than $C$, a contradiction. \end{proof} In the sequel, we use the following notion. Given a face $f$ with vertices $v_1,v_2,\ldots,v_k$ (in cyclic order), we let $f_{i,i+1}$ be the face different from $f$ that contains the edge $v_{i}v_{i+1}$ (the indices are taken modulo $k$). \section{Initial charge and discharging rules} \label{sect-rules} Using a discharging argument, we argue that the number of white vertices with respect to a longest cycle $C$ in an $n$-vertex fullerene graph $G$ is at most $n/6+2/3$. Fix such a cycle $C$. Each white vertex initially receives $3$ units of charge. Next, each white vertex sends $1$ unit of charge to each of its three incident faces. Observe that each white face now has $2$ units of charge, each black face has no charge and each remaining face has $1$ unit of charge. The charge is now redistributed based on the following rules (the indices are taken modulo the length of the considered face where appropriate). \begin{description} \item[Rule A] A black face $f_0=v_1\ldots v_6$ receives $1/2$ unit of charge from the face $f_{i,i+1}$ if the path $v_{i-1}v_iv_{i+1}v_{i+2}$ is contained in the cycle $C$ and the face $f_{i,i+1}$ is white. \item[Rule B] A black face $f_0=v_1\ldots v_6$ receives $1$ unit of charge from the face $f_{i,i+1}$ if the edge $v_iv_{i+1}$ is contained in the cycle $C$, neither the edge $v_{i-1}v_i$ nor the edge $v_{i+1}v_{i+2}$ is contained in $C$ and the face $f_{i,i+1}$ is white. \end{description} \noindent Rules A and B are illustrated in Figure~\ref{fig-rules}. \begin{figure} \caption{Configurations (up to symmetry) to which Rules A and B are applied.} \label{fig-rules} \end{figure} In Sections~\ref{sect-white} and~\ref{sect-black}, we show that each face has at most $1$ unit of charge after applying Rules A and B. 
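Granting that claim, the arithmetic converting it into the final cycle-length bound is mechanical: the total charge is $3w$ for $w$ white vertices, so $3w \le f$ where $f = n/2 + 2$ is the number of faces. The sketch below (our own illustration, with a hypothetical function name) records this bookkeeping in exact fractions.

```python
from fractions import Fraction

def cycle_length_bound(n):
    """Cycle-length bound for an n-vertex fullerene graph, assuming the
    discharging argument's conclusion that each face ends with at most
    1 unit of charge, so that 3w <= f for w white vertices."""
    f = Fraction(n, 2) + 2        # faces of a cubic planar graph (Euler)
    w_max = f / 3                 # 3w <= f  =>  w <= n/6 + 2/3
    return n - w_max              # cycle length >= n - w = 5n/6 - 2/3

print(cycle_length_bound(60))  # 148/3, i.e. 5*60/6 - 2/3
```

For $C_{60}$ this guarantees a cycle on at least $\lceil 148/3 \rceil = 50$ vertices.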
Based on this fact, we conclude in Section~\ref{sect-main} that the number of white vertices is at most $f/3$ where $f$ is the number of faces of $G$. The bound on the length of the cycle $C$ will then follow. \section{Final charge of white faces} \label{sect-white} In this section, we analyze the final amount of charge of white faces. By Lemma~\ref{lm-white5}, we can restrict our attention to faces of size six. \begin{lemma} \label{lm-white6-para} Let $C$ be a longest cycle of a fullerene graph $G$. Assume that the discharging rules as described in Section~\ref{sect-rules} have been applied. If $f=v_1v_2v_3v_4v_5v_6$ is a white face of $G$ such that the edges $v_2v_3$ and $v_5v_6$ are contained in $C$, then the final amount of charge of $f$ is $1$ unit. \end{lemma} \begin{figure} \caption{Configurations analyzed in the proof of Lemma~\ref{lm-white6-para}.} \label{fig-white6-para} \end{figure} \begin{proof} The initial amount of charge of the face $f$ is $2$ units. If both the faces $f_{23}$ and $f_{56}$ are black faces of size six, then the face $f$ sends $1/2$ unit of charge to each of them by Rule A and thus its final amount of charge is $1$ unit. Assume now that the face $f_{56}$ is not a black face of size six. Then the graph $G$, up to symmetry, contains one of the configurations depicted in the left column of Figure~\ref{fig-white6-para}. Rerouting the cycle $C$ as indicated in the figure yields a cycle longer than $C$, a contradiction. Since our arguments translate to the case where the face $f_{23}$ is not a black face of size six, the proof of the lemma is finished. \end{proof} \begin{lemma} \label{lm-white6-orto1} Let $C$ be a longest cycle of a fullerene graph $G$. Assume that the discharging rules as described in Section~\ref{sect-rules} have been applied. If $f=v_1v_2v_3v_4v_5v_6$ is a white face of $G$ such that the edges $v_4v_5$, $v_5v_6$ and $v_6v_1$ are contained in $C$, then the final amount of charge of $f$ is $1$ unit. 
\end{lemma} \begin{figure} \caption{Configurations analyzed in the proof of Lemma~\ref{lm-white6-orto1}.} \label{fig-white6-orto1} \end{figure} \begin{proof} First, suppose that the face $f_{56}$ has size five. Rerouting the cycle $C$ as indicated in the top line of Figure~\ref{fig-white6-orto1} yields a face of size five incident with two or more white vertices (the vertices $v_5$ and $v_6$ become white). This is excluded by Lemma~\ref{lm-white5}. We conclude that the face $f_{56}$ has size six. For $i\in\{5,6\}$, let $v'_i$ be the neighbor of the vertex $v_i$ that is not incident with the face $f$. The vertex $v'_5$ cannot be white: otherwise, rerouting the cycle $C$ as indicated in the bottom line of Figure~\ref{fig-white6-orto1} yields a path formed by three white vertices. This is impossible by Lemma~\ref{lm-path}. Thus, the vertex $v'_5$ is black. Similarly, the vertex $v'_6$ is black. Consequently, the face $f_{56}$ is black and by Rule B, the face $f_{56}$ receives $1$ unit of charge from the face $f$. Since the face $f$ sends charge to no other face and its initial amount of charge is $2$ units, its final amount of charge is $1$ unit. \end{proof} \begin{lemma} \label{lm-white6-orto2} Let $C$ be a longest cycle of a fullerene graph $G$. Assume that the discharging rules as described in Section~\ref{sect-rules} have been applied. If $f=v_1v_2v_3v_4v_5v_6$ is a white face of $G$ such that the edges $v_3v_4$ and $v_5v_6$ are contained in $C$ and the edge $v_4v_5$ is not contained in $C$, then the final amount of charge of $f$ is $1$ unit. \end{lemma} \begin{figure} \caption{Configurations analyzed in the proof of Lemma~\ref{lm-white6-orto2}.} \label{fig-white6-orto2} \end{figure} \begin{proof} The initial amount of charge of the face $f$ is $2$ units. If both the faces $f_{34}$ and $f_{56}$ are black faces of size six, then the face $f$ sends $1/2$ unit of charge to each of them by Rule A and thus its final amount of charge is $1$ unit.
Assume that the face $f_{56}$ is not a black face of size six. Hence, the graph $G$, up to symmetry, contains one of the configurations depicted in the left column of Figure~\ref{fig-white6-orto2}. Rerouting the cycle $C$ as indicated in the figure yields a cycle of $G$ longer than $C$, a contradiction. Since our arguments translate to the case where the face $f_{34}$ is not a black face of size six, the proof of the lemma is finished. \end{proof} Lemmas~\ref{lm-white5}, \ref{lm-white6-para}, \ref{lm-white6-orto1} and~\ref{lm-white6-orto2} yield the following. \begin{lemma} \label{lm-white} Let $C$ be a longest cycle of a fullerene graph $G$. Assume that the discharging rules as described in Section~\ref{sect-rules} have been applied. The final amount of charge of any white face of $G$ is $1$ unit. \end{lemma} \section{Final charge of black faces} \label{sect-black} This section is devoted to the analysis of the final charge of black faces. Since no black face of size five receives any charge, we can restrict our attention to black faces of size six. The final charge of a black face $f$ of size six is at most one unless the face $f$ is isomorphic to one of the faces depicted in Figure~\ref{fig-black}---note that the amount of charge of $f$ can exceed $1$ unit only if Rule A applies three times to $f$, Rule B applies twice to $f$, or both Rules A and B apply to $f$. We analyze each of the configurations separately in a series of three lemmas. \begin{figure} \caption{Black faces of size six that could receive more than $1$ unit of charge.} \label{fig-black} \end{figure} \begin{lemma} \label{lm-black-AAA} Let $C$ be a longest cycle of a fullerene graph $G$. Assume that the discharging rules as described in Section~\ref{sect-rules} have been applied.
If $f=v_1v_2v_3v_4v_5v_6$ is a black face of $G$ such that the edges $v_5v_6$, $v_6v_1$, $v_1v_2$, $v_2v_3$ and $v_3v_4$ are contained in $C$ and the edge $v_4v_5$ is not, then the final amount of charge of $f$ is at most $1$ unit. \end{lemma} \begin{figure} \caption{Configurations analyzed in the proof of Lemma~\ref{lm-black-AAA}.} \label{fig-black-AAA} \end{figure} \begin{proof} The face $f$ can receive charge only by Rule~A from the faces $f_{61}$, $f_{12}$ and $f_{23}$. Assume for the sake of contradiction that $f$ receives charge of $1/2$ unit from each of these three faces. In particular, $G$ contains, up to symmetry, one of the configurations depicted in Figure~\ref{fig-black-AAA} (recall that $G$ cannot contain a path formed by three white vertices by Lemma~\ref{lm-path}). Rerouting the cycle $C$ as indicated in the figure yields a cycle of $G$ longer than the cycle $C$ which contradicts our choice of $C$. We conclude that Rule~A can apply at most twice to the face $f$. \end{proof} \begin{lemma} \label{lm-black-AB} Let $C$ be a longest cycle of a fullerene graph $G$. Assume that the discharging rules as described in Section~\ref{sect-rules} have been applied. If $f=v_1v_2v_3v_4v_5v_6$ is a black face of $G$ such that the edges $v_2v_3$, $v_4v_5$, $v_5v_6$ and $v_6v_1$ are contained in $C$ and the edges $v_1v_2$ and $v_3v_4$ are not, then the final amount of charge of $f$ is at most $1$ unit. \end{lemma} \begin{figure} \caption{Configurations analyzed in the proof of Lemma~\ref{lm-black-AB}.} \label{fig-black-AB} \end{figure} \begin{proof} If the final amount of charge of $f$ is greater than $1$ unit, then $f$ receives $1$ unit of charge from the face $f_{23}$ by Rule B and $1/2$ unit of charge from the face $f_{56}$ by Rule A. Hence, $G$ contains one of the two configurations depicted in Figure~\ref{fig-black-AB}.
In either of the two cases, it is possible to reroute the cycle $C$ as indicated in Figure~\ref{fig-black-AB} to obtain a cycle of $G$ longer than $C$, a contradiction. \end{proof} \begin{lemma} \label{lm-black-BB} Let $C$ be a longest cycle of a fullerene graph $G$. Assume that the discharging rules as described in Section~\ref{sect-rules} have been applied. If $f=v_1v_2v_3v_4v_5v_6$ is a black face of $G$ such that the edges $v_2v_3$, $v_4v_5$ and $v_6v_1$ are contained in $C$ and the edges $v_1v_2$, $v_3v_4$ and $v_5v_6$ are not, then the final amount of charge of $f$ is at most $1$ unit. \end{lemma} \begin{figure} \caption{The configuration analyzed in the proof of Lemma~\ref{lm-black-BB}.} \label{fig-black-BB} \end{figure} \begin{proof} The face $f$ can receive charge only by Rule~B. Assume that Rule B applies twice to $f$. By symmetry, we may assume that the charge is given by the faces $f_{23}$ and $f_{45}$. In particular, the graph $G$ contains the configuration depicted in Figure~\ref{fig-black-BB}. Now reroute the cycle $C$ as indicated in the figure. Since the obtained cycle is longer than the cycle $C$, we conclude that Rule~B cannot apply twice to the face $f$. \end{proof} Lemmas~\ref{lm-black-AAA}, \ref{lm-black-AB} and~\ref{lm-black-BB} yield the following. \begin{lemma} \label{lm-black} Let $C$ be a longest cycle of a fullerene graph $G$. Assume that the discharging rules as described in Section~\ref{sect-rules} have been applied. The final amount of charge of any black face of $G$ is at most $1$ unit. \end{lemma} \section{Main result} \label{sect-main} \begin{theorem} \label{thm-main} Let $G$ be a fullerene graph with $n$ vertices. The graph $G$ contains a cycle of length at least $5n/6-2/3$. \end{theorem} \begin{proof} Consider a longest cycle $C$ contained in the graph $G$ and apply the discharging procedure described in Section~\ref{sect-rules}.
By Lemmas~\ref{lm-white} and~\ref{lm-black}, every white face and every black face has a final charge of at most $1$ unit. Since the initial amount of charge of other faces is $1$ unit and the other faces do not send out or receive any charge, we conclude that the final amount of charge of any face of $G$ is at most $1$ unit. Each white vertex has initially been assigned $3$ units of charge. Since the final amount of charge of every face is at most $1$ unit, the amount of charge was preserved during the discharging phase, and the vertices have no charge at the end of the process, there are at most $f/3$ white vertices, where $f$ is the number of faces of $G$. By Euler's formula, $n=2f-4$. Hence, there are at most $n/6+2/3$ white vertices. Consequently, there are at least $5n/6-2/3$ black vertices and thus the length of the cycle $C$ is at least $5n/6-2/3$. \end{proof} \end{document}
\begin{document} \begin{abstract} This article documents my journey down the rabbit hole, chasing what I have come to know as a particularly unyielding problem in Ramsey theory on the integers: the $2$-Large Conjecture. This conjecture states that if $D \subseteq \mathbb{Z}^+$ has the property that every $2$-coloring of $\mathbb{Z}^+$ admits arbitrarily long monochromatic arithmetic progressions with common difference from $D$ then the same property holds for any finite number of colors. We hope to provide a roadmap for future researchers and also provide some new results related to the $2$-Large Conjecture. \end{abstract} \title{Down the Large Rabbit Hole} \section{Prologue} Mathematicians tend not to write of their failures. This is rather unfortunate as there are surely countless creative ideas that have never seen the light of day; I have long believed that a \textsc{Journal of Failed Attempts} should exist. My goal with this article is 3-fold: (1) to chronicle my battle with what I consider a particularly difficult conjecture; (2) to present my progress on this conjecture; and (3) to provide a roadmap to those who want to take on this challenging conjecture. The majority of this work took place over the course of a year, circa 2010\footnote{Supported in part by the National Security Agency [grant number H98230-10-1-0204].}. Since that time I have frequently revisited this intriguing problem, even though that year was mostly an exercise in banging my head against various brick walls. I wish I knew how to quit it. I love this conjecture, so much so that I've followed it down the rabbit hole. However, if we are to take away one message from Steinbeck's {\it Of Mice and Men}, it's that sometimes the rabbit doesn't love you back. \section{What's Up, Doc?} Ramsey theory may best be summed up as ``the study of the preservation of structures under set partitions'' \cite{LR}.
For this article, we will restrict our attention to the positive integers, and our investigation to the set of arithmetic progressions (our structure). As is common in Ramsey theory, we will use colors to denote set partition membership. Formally, for $r \in \mathbb{Z}^+$, an $r$-coloring of the positive integers is defined by $\chi: \mathbb{Z}^+ \rightarrow \{0,1,\dots,r-1\}$. We say that $S \subseteq \mathbb{Z}^+$ is {\it monochromatic} under $\chi$ if $|\chi(S)|=1$. In order to discuss the preservation of structure, and, subsequently, state the 2-Large Conjecture, we turn to a fundamental result in Ramsey theory on the integers: van der Waerden's Theorem \cite{vdw}. \begin{theorem}[van der Waerden's Theorem] For any fixed positive integers $k$ and $r$, every $r$-coloring of $\mathbb{Z}^+$ admits a monochromatic $k$-term arithmetic progression. \label{vdw} \end{theorem} In a certain sense, we cannot break the existence of arithmetic progressions via set partitioning since van der Waerden's Theorem proves that one of the partition classes {\it must} contain an arithmetic progression. If you don't believe me, try $2$-coloring the first nine positive integers without creating a monochromatic $3$-term arithmetic progression (I'll wait). So now that we're all on board, the next attribute of arithmetic progressions to take note of is that they are closed under translation and dilation: if $S=\{a,a+d,a+2d,\dots,a+(k-1)d\}$ is a $k$-term arithmetic progression, and $b$ and $c$ are positive integers, then $c+bS =\{(ab+c), (ab+c)+bd,(ab+c)+2bd,\dots,(ab+c)+(k-1)bd\}$ is also a $k$-term arithmetic progression. It is this attribute that affords us a simple inductive argument when proving van der Waerden's Theorem. Specifically, assuming that the $r=2$ case of Theorem \ref{vdw} is true (for all $k$), we may prove that it is true for general $r$ rather simply. In order to proceed, we need a restatement of Theorem \ref{vdw}, which is often referred to as the finite version. 
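Before stating the finite version, a quick aside: the nine-integer challenge above can be settled by machine. The following minimal sketch (plain Python, assuming nothing beyond the definitions given so far) exhaustively checks all $2^9 = 512$ two-colorings of $\{1,\dots,9\}$, and also confirms that eight integers do not suffice.

```python
from itertools import product

def has_mono_3ap(coloring):
    """Check whether a 0/1 coloring of {1,...,n} (given as a tuple indexed
    from 0) contains a monochromatic 3-term arithmetic progression."""
    n = len(coloring)
    for a in range(1, n + 1):
        for d in range(1, (n - a) // 2 + 1):
            if coloring[a - 1] == coloring[a + d - 1] == coloring[a + 2 * d - 1]:
                return True
    return False

# Every 2-coloring of {1,...,9} contains a monochromatic 3-term AP ...
assert all(has_mono_3ap(c) for c in product((0, 1), repeat=9))
# ... while {1,...,8} can be 2-colored to avoid one.
assert any(not has_mono_3ap(c) for c in product((0, 1), repeat=8))
```

In the notation of the theorem below, this computation says precisely that $w(3;2)=9$.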
\begin{theorem}[van der Waerden's Theorem restatement] For any fixed positive integers $k$ and $r$, there exists a minimum integer $w(k;r)$ such that every $r$-coloring of $\{1,2,\dots,w(k;r)\}$ admits a monochromatic $k$-term arithmetic progression. \label{vdw2} \end{theorem} The proof of equivalence of Theorem \ref{vdw} and Theorem \ref{vdw2} (at least the nontrivial direction) is given by {\it The Compactness Principle}, which, in this setting, could also be called {\it Cantor's Diagonal Principle}, as the proof is an application and slight modification of the diagonal argument Cantor used to prove that the set of real numbers is uncountable. Now back to the induction argument. We may assume that $w(k;s)$ exists for $s=2,3,\dots,r-1$ for any $k\in \mathbb{Z}^+$. Let $m=w(k;r-1)$ so that $n=w(m;2)$ exists. Consider $\chi$, an arbitrary $r$-coloring of $\{1,2,\dots,n\}$. For ease of exposition, let the colors be red and $r-1$ different shades of blue. Consider someone who cannot distinguish between shades of blue so that the $r$-coloring looks like a $2$-coloring to this person. By the definition of $n$, such a person would conclude that a monochromatic $m$-term arithmetic progression exists under $\chi$. If this monochromatic progression is red, we are done, so we assume that it is ``blue.'' Let it be $a+d, a+2d, a+3d,\dots,a+md$ and note that, since we can distinguish between shades of blue, we have an $(r-1)$-colored $m$-term arithmetic progression. We have a one-to-one correspondence between $(r-1)$-colorings of $T=\{1,2,\dots,m\}$ and $a+dT = \{a+d,a+2d,a+3d,\dots,a+md\}$. By the definition of $m$ and because arithmetic progressions are closed under translation and dilation, we see that $T$, and hence $a+dT$, admits a monochromatic $k$-term arithmetic progression, thereby completing the inductive step.
Of course, the previous paragraph is only a partial proof since I made the significant assumption that Theorem \ref{vdw} holds for two colors; however, we can state the following: \begin{quote} \normalsize ($\star$) If every $2$-coloring of $\mathbb{Z}^+$ admits arbitrarily long monochromatic arithmetic progressions, then, for any $r \in \mathbb{Z}^+$, every $r$-coloring of $\mathbb{Z}^+$ admits arbitrarily long monochromatic arithmetic progressions. \end{quote} Having obtained this conditional result ($\star$), the rabbit hole is starting to come into view. Brown, Graham, and Landman \cite{BGL} investigated a strengthening of Theorem \ref{vdw} by restricting the set of allowable common differences. \begin{definition}[$r$-large, large, $D$-ap] Let $D\subseteq \mathbb{Z}^+$ and let $r \in \mathbb{Z}^+$. We refer to an arithmetic progression $a,a+d, a+2d,\dots,a+(k-1)d$ with $d \in D$ as a $k$-term {\it $D$-ap}. If for any $k \in \mathbb{Z}^+$, every $r$-coloring of $\mathbb{Z}^+$ admits a monochromatic $k$-term $D$-ap, then we say that $D$ is {\it $r$-large} . If $D$ is $r$-large for all $r \in \mathbb{Z}^+$, then we say that $D$ is {\it large}. \end{definition} Using this definition, we would restate ($\star$) as: \begin{quote}\normalsize ($\star$) If $\mathbb{Z}^+$ is $2$-large, then $\mathbb{Z}^+$ is large. \end{quote} We now can read the sign above that rabbit hole. It has the following conjecture, due to Brown, Graham, and Landman \cite{BGL}, scrawled on it: \begin{conj}[$2$-Large Conjecture] Let $D \subseteq \mathbb{Z}^+$. If $D$ is $2$-large, then $D$ is large. \end{conj} All known $2$-large sets are also large. Some $2$-large sets are: $m\mathbb{Z}^+$ for any positive integer $m$ (in particular, the set of even positive integers); the range of any integer-valued polynomial $p(x)$ with $p(0)=0$; any set $\{\left\lfloor \alpha n \right\rfloor: n \in \mathbb{Z}^+\}$ with $\alpha$ irrational. We will be visiting all of these sets on our journey. 
As we move forward, you may think you have spotted the rabbit, but that rabbit is cunning. Beware of false promise, which comes to you in hare clothing. \section{The Carrot} So, what makes this conjecture so appealing? Firstly, the $2$-Large Conjecture is so very natural given the proof of conditional statement ($\star$). Secondly, there are several {\it a priori} disparate tools in Ramsey theory at our disposal. Thirdly, who doesn't like a challenge? The lure of the carrot is strong (but don't disregard the stick). We can approach this problem: \begin{itemize} \item[] (1) purely measure-theoretically, \item[](2) using measure-theoretic ergodic systems, \item[](3) using discrete topological dynamical systems, \item[](4) algebraically through the Stone-\v{C}ech compactification of $\mathbb{Z}^+$, and \item[](5) combinatorially/using other ad hoc methods. \end{itemize} Even though I have described these approaches as disparate, there are connections between them that will become clear as we carry on the investigation. \subsection{Measure-theoretic Approach} On the measure-theoretic front, we must start with Szemer\'edi's \cite{Sze} celebrated result. For $A \subseteq \mathbb{Z}^+$, let $\bar{d}(A)$ denote the upper density of $A$: $\bar{d}(A) = \limsup_{n \rightarrow \infty} \frac{|A \cap \{1,2,\dots,n\}|}{n}$. \begin{theorem}[Szemer\'edi's theorem] Any subset $S \subseteq \mathbb{Z}^+$ with $\bar{d}(S)>0$ contains arbitrarily long arithmetic progressions. \end{theorem} Szemer\'edi's proof has been called elementary, but it is anything but easy, straightforward, or simple. In fact, contained within his proof is a logical flow chart on 24 vertices with 36 directed edges that furnishes the reader with an overview of the intricate web of logic used to prove the seminal result. So, how do we mesh this result with $2$-large sets?
Since every $2k$-term arithmetic progression with common difference $d$ contains a $k$-term arithmetic progression with common difference $2d$, we have large sets with positive density (the set of even positive integers). A result in \cite{BGL} shows that $\{10^n: n \in \mathbb{Z}^+\}$ is not $2$-large, so we have sets with zero density that are not $2$-large. Perhaps there is a density condition that distinguishes large and non-large sets. Unfortunately, further exploration shows this is not true. We can have sets with positive upper density that are not $2$-large and we can have sets with zero upper density that are $2$-large. To this end, first consider the set of odd integers $D_1$. Coloring $\mathbb{Z}^+$ by alternating red and blue, we do not even have a monochromatic $2$-term $D_1$-ap. Hence, $D_1$ has positive density but is not $2$-large. Now consider the set of squares $D_2$. As a very specific case of a far-reaching extension of Szemer\'edi's result, Bergelson and Leibman \cite{BL} have shown that $D_2$ is large. More generally (but still not as general as the full theorem), Bergelson and Leibman proved the following result. \begin{theorem}[Bergelson and Leibman] Let $p(x): \mathbb{Z}^+ \rightarrow \mathbb{Z}^+$ be a polynomial with $p(0)=0$. Then the set $D=\{p(i): i \in \mathbb{Z}^+\}$ is large. More precisely, any subset of $\mathbb{Z}^+$ of positive upper density contains arbitrarily long $D$-aps.\label{BandL} \end{theorem} In short order we have seen that distinguishing large and non-large sets solely by their densities is not the correct approach. However, the proof of Theorem \ref{BandL} leads us to our next approach. \subsection{Measure-theoretic Ergodic Approach} Closely related to the above approach is the use of ergodic systems. The connection between Szemer\'edi's Theorem and ergodic dynamical systems is provided by Furstenberg's correspondence principle \cite{Fur}, which uses the following notations.
We remark here that we are specializing all results to the integers and that the stated results do not necessarily hold in different ambient spaces; see, e.g., \cite{BM}. \begin{notation} For $S \subseteq \mathbb{Z}^+$ and $n \in \mathbb{Z}$, we let $S-n = \{s-n: s \in S\}$. For the remainder of the article, we reserve the symbol $T$ for the shift operator that acts on $\mathcal{X}$, the family of infinite sequences $x=(x_i)_{i \in \mathbb{Z}}$, by $(Tx)_n = x_{n+1}$. \end{notation} \begin{theorem}[Furstenberg's Correspondence Principle] Let $E \subseteq \mathbb{Z}^+$ with $\bar{d}(E)>0$. Then, for any $k \in \mathbb{Z}^+$, there exists a probability measure-preserving dynamical system $(\mathcal{X},\mathcal{B},\mu,T)$ with a set $A \in \mathcal{B}$ such that $\mu(A) = \bar{d}(E)$ and $$ \bar{d}\!\left(\bigcap_{i=0}^k (E-in)\right) \!\geq \!\mu\left( \bigcap_{i=0}^k T^{-in}A\right) $$ for any $n \in \mathbb{Z}^+$. \label{Furst} \end{theorem} The above result can be viewed as the impetus for ergodic Ramsey theory as a field of research. Furstenberg proved that, for any $A$ with $\mu(A)>0$, there exists $d \in \mathbb{Z}^+$ such that $$\mu\left(A \cap T^{-d}\!A \cap T^{-2d}\!A \cap \!\cdots \! \cap T^{-kd}\!A\right) > 0.$$ By Theorem \ref{Furst}, we have $E \cap (E-d) \cap (E-2d) \cap \!\cdots \!\cap (E-kd) \neq \emptyset$. Hence, by taking $a$ in this intersection, we have $\{a,a+d,a+2d,\dots,a+kd\} \subseteq E$. Consequently, Furstenberg provided an ergodic proof of Szemer\'edi's theorem\footnote{Furstenberg used Banach upper density and not upper density.}. Having followed this path, it seems we have hit another dead end in our journey; there appears to be no mechanism for controlling the number of colors in these arguments. Perhaps a non-measure-theoretic dynamical system approach can help.
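Before changing approach, note that the upper densities quoted in these two subsections are easy to approximate numerically. The sketch below (illustrative only; the $\limsup$ in the definition of $\bar{d}$ can never be computed by a finite truncation) evaluates $|A \cap \{1,\dots,n\}|/n$ for the even numbers, the squares, and the powers of ten.

```python
import math

def density_up_to(indicator, n):
    """|A ∩ {1,...,n}| / n for the set A described by the predicate `indicator`."""
    return sum(1 for i in range(1, n + 1) if indicator(i)) / n

is_even = lambda i: i % 2 == 0
is_square = lambda i: math.isqrt(i) ** 2 == i
is_power_of_ten = lambda i: i >= 10 and str(i) == "1" + "0" * (len(str(i)) - 1)

n = 10**5
assert density_up_to(is_even, n) == 0.5            # density 1/2
assert density_up_to(is_square, n) == 316 / n      # isqrt(10^5) = 316 squares
assert density_up_to(is_power_of_ten, n) == 5 / n  # 10, 100, ..., 10^5
```

The evens have density $1/2$, while the squares and the powers of ten both have upper density $0$; as we have just seen, this numerical distinction tells us nothing about largeness.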
\subsection{Topological Dynamical Systems Approach} As ergodic systems are specific types of dynamical systems, the $2$-Large Conjecture may be susceptible to the use of a different breed of dynamical system, namely a topological one. We will denote the space of infinite sequences $(x_n)_{n \in \mathbb{Z}}$ with $x_i \in \{0,1,\dots,r-1\}$ by $\mathcal{X}_r$ and let $T$ remain the shift operator acting on $\mathcal{X}_r$. Specializing to our situation, we state Birkhoff's Multiple Recurrence Theorem due to Furstenberg and Weiss \cite{FW} (see also \cite{Bir}). \begin{theorem}[Birkhoff's Multiple Recurrence Theorem] \label{th6} Let $k,r \in \mathbb{Z}^+$. Under the product topology, for any open set $U \subseteq \mathcal{X}_r$ there exists $d \in \mathbb{Z}^+$ so that $ U \cap T^{-d}U \cap T^{-2d}U \cap \cdots \cap T^{-kd}U \neq \emptyset. $ \end{theorem} This result allowed Furstenberg and Weiss to give a new proof of van der Waerden's Theorem: Define a metric for $x,y \in \mathcal{X}_r$ by $$ d(x,y) = \left(\min\{i \in \mathbb{Z}^+ : x(i) \neq y(i)\}\right)^{-1}, $$ where $x(i)$ denotes the value/color of the $i^{\mathrm{th}}$ positively-indexed term of $x$. A small $d(x,y)$ means we have value/color agreement in the initial terms of $x$ and $y$. Let $x\in \mathcal{X}_r$ be the sequence corresponding to an arbitrary $r$-coloring of $\mathbb{Z}^+$. Theorem \ref{th6} helps prove that there exists $y \in \{T^mx\}_{m \in \mathbb{Z}^+}$ such that all of $d(y,T^dy), d(y,T^{2d}y),\dots, d(y,T^{kd}y)$ are less than $1$ for some $d$. Hence, $y, T^dy, T^{2d}y, \dots, T^{kd}y$ all have the same first value/color. Since $y=T^{a}x$ for some $a$ we have $x_a, x_{a+d}, x_{a+2d},\dots,x_{a+kd}$ all of the same value/color, meaning that $a, a+d,\dots,a+kd$ is a monochromatic arithmetic progression.
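This orbit argument is concrete enough to play with in code. The sketch below (hypothetical helper names, with a finite inspection horizon standing in for the genuinely infinite sequences) implements the shift $T$ and the metric $d$, and checks the key equivalence used above: $d(y,T^dy) < 1$ exactly when $y$ and $T^dy$ agree in their first coordinate.

```python
def shift(x, m):
    """The shift operator: (T^m x)(i) = x(i + m)."""
    return lambda i: x(i + m)

def dist(x, y, horizon=10**4):
    """d(x, y) = 1 / min{ i in Z+ : x(i) != y(i) }, treated as 0 when the two
    sequences agree on every index up to the (finite) inspection horizon."""
    for i in range(1, horizon + 1):
        if x(i) != y(i):
            return 1.0 / i
    return 0.0

# x encodes the 2-coloring of Z+ by parity.
x = lambda i: i % 2
assert dist(x, shift(x, 2)) == 0.0  # T^2 x and x agree everywhere
assert dist(x, shift(x, 1)) == 1.0  # x(1) != (Tx)(1), so d = 1
assert dist(x, shift(x, 4)) < 1     # even shifts keep the first color
```

With this metric, a monochromatic progression $a, a+d,\dots,a+kd$ in the coloring corresponds exactly to the point $y = T^a x$ satisfying $d(y,T^{jd}y) < 1$ for $j=1,\dots,k$.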
\begin{remark} We can actually have a guarantee that all of $d(y,T^dy),\break d(y,T^{2d}y),\dots, d(y,T^{kd}y)$ are less than any $\epsilon > 0$; however, this is not needed to prove van der Waerden's Theorem. It does provide some very interesting results, like arbitrarily long progressions, all with the same common difference, each starting in an arbitrarily long interval of consecutive integers. It should also be remarked that this latter result can be shown combinatorially, too. \end{remark} So how can we use this to attack the $2$-Large Conjecture? Given a $2$-large set $D$, we have a guarantee that over the space $\mathcal{X}_2$ there exists $y \in \{T^mx\}_{m \in \mathbb{Z}^+}$ such that all of $d(y,T^dy), d(y,T^{2d}y),\dots, d(y,T^{kd}y)$ are less than $1$ for some $d \in D$. Our goal is to prove that this criterion implies the same over the space $\mathcal{X}_r$. Although Remark 7 states that all of $d(y,T^dy), d(y,T^{2d}y), \dots,\break d(y,T^{kd}y)$ can be arbitrarily small, we can only guarantee they are less than $1$ (with our given metric) if we require $d$ from a $2$-large set. Hence, we could convert an $r$-coloring to an equivalent $2$-coloring if we discovered a result stating that a long enough $(r-1)$-colored $D$-ap admits a monochromatic $k$-term $D$-ap (we have no such result, but this idea will prove fruitful in Section 6). \section{Back Where We Started} Presently, it seems we have ended up back where we started. Fittingly, this recurrence phenomenon is a key notion in dynamical systems. Momentarily, before getting to the Stone-\v{C}ech compactification, we'll have a diagram to aid in visualizing how the different types of results based on the above approaches relate to each other. In Figure 1, we give implications between the types of recurrence we have considered thus far, followed by their definitions.
\begin{figure} \caption{Relationship between types of recurrence considered thus far} \end{figure} All of the following definitions are given with respect to arithmetic progressions over the integers; as such, some of the definitions are specific cases of more general definitions. Some of the implications above fail in more general settings. \begin{definitions} Let $r \in \mathbb{Z}^+$. Denote the set of infinite sequences $(x_n)_{n \in \mathbb{Z}}$ with $x_i \in \{0,1,\dots,r-1\}$ by $\mathcal{X}_r$ and let $T$ be the shift operator acting on $\mathcal{X}_r$. For $D \subseteq \mathbb{Z}^+$, we say that $D$ is \begin{itemize} \item[](i) {\it chromatically $k$-recurrent} if, for any $r$, every $r$-coloring of $\mathbb{Z}^+$ admits a monochromatic $k$-term arithmetic progression $a, a+d,\dots,a+(k-1)d$ with $d \in D$. \\ \item[](ii) {\it topologically $k$-recurrent} if, for any $r$, the dynamical system $(\mathcal{X}_r,T)$ has the property that for every open set $U \subseteq \mathcal{X}_r$ there exists $d \in D$ such that $$ U \cap T^{-d}U \cap T^{-2d}U \cap \cdots \cap T^{-kd}U \neq \emptyset. $$ \\[-20pt] \item[](iii) {\it density $k$-intersective} if for every $A\subseteq \mathbb{Z}^+$ with $\bar{d}(A)>0$, there exists $d \in D$ such that $$ A \cap (A-d) \cap (A-2d) \cap \cdots \cap (A-kd) \neq \emptyset. $$ \\[-20pt] \item[](iv) {\it measurably $k$-recurrent} if for any probability measure-preserving dynamical system $(\mathcal{X}_r,\mathcal{B},\mu,T)$ and any $A \in \mathcal{B}$ with $\mu(A)>0$ there exists $d \in D$ such that $$\mu\left(A \cap T^{-d}\!A \cap T^{-2d}\!A \cap \!\cdots \! \cap T^{-kd}\!A\right) > 0.$$ \end{itemize} \end{definitions} The fact that the double implications shown in Figure 1 are true has already been partially discussed; see \cite{Jun} for details on the left double implication. The top-most implication is the definition of large.
The negated implication was proved by K\v{r}\'i\v{z} \cite{Kriz} with nice write-ups by Jungi\'c \cite{Jun} and McCutcheon \cite{McC}, while the remaining implication comes from the fact that any finite coloring of $\mathbb{Z}^+$ contains a color class of positive upper density. The negated implication offers a bit of insight -- a tiny flashlight for our travels, if you will. The set used to prove this negation is the set $C'$ given in \cite{McC}: $$ C' = \{c \in \{0,1,\dots,M\}: c \equiv \pm 1 \!\!\pmod{p_i} \mbox{ for at least $2r$ indices $i$}\} $$ where $r$ is sufficiently large, $p_1, p_2,\dots,p_{2r+2}$ are sufficiently large primes, and $M=\prod_{i=1}^{2r+2} p_i$. We do not know if $C'$ is $2$-large/large or not (this seems to be a difficult problem in and of itself), but if it is, then there exists a set of positive upper density that does not contain a $C'$-ap. We should take this uncertainty as a warning that we have no guarantee a large set $D$ has its $D$-aps lie inside a color class with positive upper density, even though Szemer\'edi's Theorem assures us that arbitrarily long arithmetic progressions are there. \vskip 20pt \section{Stone-\v{C}ech Compactification Approach} Although the three approaches in Section 3 all have nice links between them, the approach championed by Bergelson, Hindman, Strauss, and others is quite disparate from the previous methods we have seen. The approach is a blend of set theory, topology, and algebra. We'll start by describing the points in the Stone-\v{C}ech compactification on $\mathbb{Z}^+$, which requires the following definition (again, specialized to the positive integers). \begin{definition}[filter, ultrafilter] Let $p$ be a family of subsets of $\mathbb{Z}^+$ (this lowercase $p$ is the standard notation in this field).
If $p$ satisfies all of \begin{itemize} \item[](i) $\emptyset \not\in p$; \item[](ii) $A \in p$ and $A \subseteq B \Rightarrow B \in p$; and \item[](iii) $A,B \in p \Rightarrow A \cap B \in p$, \end{itemize} then we say that $p$ is a {\it filter}. If, in addition, $p$ satisfies \begin{itemize} \item[](iv) for any $C \subseteq \mathbb{Z}^+$ either $C \in p$ or $C^c=\mathbb{Z}^+ \setminus C \in p$ \end{itemize} then we say that $p$ is an {\it ultrafilter}. (Item (iv) means that $p$ is not properly contained in any other filter.) \end{definition} \begin{example} The set of subsets $\mathcal{F}=\{A \subseteq \mathbb{Z}^+: |\mathbb{Z}^+ \setminus A | < \infty\}$ is a filter but not an ultrafilter (it is known as the Fr\'echet filter). It is not an ultrafilter since, taking $C$ from (iv) above to be the set of even positive integers, we see that neither $C$ nor its complement is in $\mathcal{F}$. The set $\mathcal{G} = \{A \subseteq \mathbb{Z}^+: x \in A\}$ for any fixed $x \in \mathbb{Z}^+$ is an ultrafilter. \end{example} \begin{remark*} One hint that the ultrafilter direction may not prove useful is that the family of large sets is not a(n) (ultra)filter. Parts (i), (ii), and (iv) of the ultrafilter definition are satisfied, but part (iii) is not. To see this, consider $A = \{i^3: i \in \mathbb{Z}^+\}$ and $B = \{i^3+8: i \in \mathbb{Z}^+\}$. These are both large sets (see \cite{BGL}); however, $A \cap B = \emptyset$ (Fermat's Last Theorem serves as a very useful result for counterexamples) and so cannot be large. \end{remark*} The Stone-\v{C}ech compactification of $\mathbb{Z}^+$ is denoted by $\beta\mathbb{Z}^+$, and the points in $\beta\mathbb{Z}^+$ are the ultrafilters, i.e., $\beta\mathbb{Z}^+ = \{p: \mbox{$p$ is an ultrafilter}\}$. Having the space set, we need to define addition in $\beta\mathbb{Z}^+$. \begin{definition}[addition in $\beta\mathbb{Z}^+$] Let $A \subseteq \mathbb{Z}^+$ and let $p,q \in \beta\mathbb{Z}^+$.
As before, $A-x =\{y \in \mathbb{Z}^+: y+x \in A\}$. We define the addition of two ultrafilters by $ A \in p+q \Longleftrightarrow \{x \in \mathbb{Z}^+: A-x \in p\} \in q. $ \end{definition} The link between $r$-colorings and ultrafilters is provided by the following lemma. \begin{lemma} Let $r \in \mathbb{Z}^+$ and let $p \in \beta\mathbb{Z}^+$. For any $r$-coloring of $\mathbb{Z}^+$, one of the color classes is in $p$. \label{9} \end{lemma} \begin{proof} Let $\mathbb{Z}^+ = \cup_{i=1}^r C_i$, where $C_i$ is the $i^{\mathrm{th}}$ color class. By part (iv) of the definition of ultrafilter, if we assume that none of the $C_i$'s are in $p$, then each of their complements is in $p$. Applying part (iii) of the definition of a filter $r-2$ times, we have $\cap_{i=1}^{r-1} C_i^c \in p$. But $\cap_{i=1}^{r-1} C_i^c = C_r$ so $C_r \in p$, a contradiction. \end{proof} A number of results that use $\beta\mathbb{Z}^+$ to give Ramsey-type results rely on the existence of an additive idempotent in $\beta\mathbb{Z}^+$. To this end, if $p+p = p$ is such an element, then $A \in p+p = p$ means that $B=\{x \in \mathbb{Z}^+: A-x \in p\}$ is in $p$ as well. Since $p$ is a filter, by item (iii) above we have $A \cap B \in p$; in particular, $A \cap B \neq \emptyset$, so we may choose $d \in A \cap B$. Then $A-d \in p$, so $A \cap (A-d)$ is in $p$ and hence nonempty; choosing $y \in A \cap (A-d)$ and setting $a = y+d$, we see that $d$, $a-d$, and $a$ all lie in $A$. Appealing to Lemma \ref{9}, we can consider $A$ to be a color class in an $r$-coloring, so that we have a monochromatic solution to $x+y=z$ with $x=d$, $y=a-d$, and $z=a$. This result is known as {Schur's Theorem}. In order to obtain van der Waerden's Theorem through the use of ultrafilters, quite a bit of algebra of $\beta\mathbb{Z}^+$ is needed, so we will not present that here. The interested reader should consult the sublime book by Hindman and Strauss \cite{HS}. Unfortunately for our goal, while reading through this book it becomes quite clear that the number of colors used to state the Ramsey-type results is irrelevant and can be kept arbitrary in the arguments.
So there does not seem to be a natural way to control the number of colors. On the other hand, there are many types of {\it largeness} that can be proved using ultrafilters. A good description of largeness via ultrafilters is given by Bergelson and Downarowicz in \cite{BD}. An intuitive notion of largeness would be that a certain property $P$ is large (in the general sense) if $P$ itself is partition regular or if every set with property $P$ is partition regular in some (possibly different) sense. Perhaps one of these types of largeness will help achieve our goal. Figure 2, below, gives a summary of how these different types are related; most of the chart is due to Bergelson and Hindman \cite{BH}. \begin{figure} \caption{Type of largeness and implications. Missing implications are not true.} \end{figure} In defining the terms in Figure 2, we will start at the bottom and work our way up. As we move up the chart, the concepts are all types of largeness that increase in robustness. \begin{definitions} Let $S \subseteq \mathbb{Z}^+$. We say that $S$ is \begin{itemize} \item[](i) {\it accessible} if, for every $r$, every $r$-coloring admits arbitrarily long monochromatic sequences $x_1,x_2,\dots,x_n$ such that $x_{i+1}-x_i \in S$ for $i=1,2,\dots,n-1$. \item[](ii) a {\it $\Delta$-set} if there exists an infinite $T \subseteq \mathbb{Z}^+$ such that $T-T \subseteq S$. \item[](iii) an {\it IP-set} if there exists $T = \{t_i\} \subseteq \mathbb{Z}^+$ such that $FS(T)=\{\sum_{f \in F} t_f: F \subseteq \mathbb{Z}^+ \mbox{ with } |F|<\infty\} \subseteq S$ (the notation $FS(T)$ stands for the {\it finite sums} of $T$). \item[](iv) {\it piecewise syndetic} if there exists $r \in \mathbb{Z}^+$ such that for any $n \in \mathbb{Z}^+$ there exists $\{t_1<t_2<\cdots<t_n\} \subseteq S$ with $t_{i+1}-t_i \leq r$ for $1 \leq i \leq n-1$. \item[](v) {\it syndetic} if there exists $r \in \mathbb{Z}^+$ such that $S=\{s_1<s_2<\cdots\}$ satisfies $s_{i+1}-s_i \leq r$ for all $i \in \mathbb{Z}^+$.
\item[](vi) a {\it central set} if $S \in p$ where $p$ is an idempotent ultrafilter with the property that for all $A \in p$, the set $\{n \in \mathbb{Z}^+: A+n \in p\}$ is syndetic. (The original, equivalent, definition comes from Furstenberg (see \cite{Fur2}) in the area of dynamical systems.) \end{itemize} \end{definitions} The remaining categories in Figure 2 all have a $^{\star}$ on them. This is the designation for the {\it dual} property. If $X$ is one of the non-starred properties in Figure 2, then we say that a set $S$ is in $X^\star$ if $S$ intersects every set that has property $X$. All implications and non-implications that do not involve any property in $\{\mbox{$2$-large, large, accessible, large$^\star$}\}$ are from \cite{BH}. The fact that accessible does not imply large was first shown by Jungi\'c \cite{Jun}, who provided a non-explicit accessible set that is not $7$-large. Recently, Guerreiro, Ruzsa, and Silva \cite{GRS} provided an explicit accessible set that is not $3$-large. In the next section, we give an explicit accessible set that is not $2$-large, explaining why accessible does not imply $2$-large. The fact that a $\Delta$-set is also accessible is from \cite[Th. 10.27]{LR}, while the fact that large implies accessible is straightforward. The set of cubes is large (and, hence, accessible) but not a $\Delta$-set. To see this, assume otherwise and let $\{s_i\}$ be an increasing sequence of positive integers such that $s_j - s_i$ is a cube for all $j>i$. Then there exist $t<u<v$ such that $s_v - s_t, s_v - s_u,$ and $s_u-s_t$ are all cubes. But then we have $s_v-s_t = (s_v-s_u) + (s_u-s_t)$ as an integer solution to $z^3=x^3+y^3$, a contradiction. Large$^\star$ implying IP$^\star$ comes from the fact that all IP sets are large sets. To see that IP$^\star$ does not imply large$^\star$, let $D$ be the set of integer cubes.
Then $D \not \in $ IP (in fact, for any $x,y \in D$ we have $x+y \not \in D$)\footnote{I have a cute, short proof of this fact, but, like Fermat, there is not enough room in the bottom margin here, especially given the length of this lengthy, and unnecessary, footnote.}. Hence, any $A \in $ IP cannot be a subset of $D$, meaning that $A \cap D^c \neq \emptyset$. Thus, $D^c$ intersects all IP-sets; in other words, $D^c \in $ IP$^\star$. By Theorem \ref{BandL}, we know that $D$ is large. Now, because $D^c$ does not intersect the large set $D$ we see that $D^c \not \in$ large$^\star$. Investigating some of the properties in Figure 2, we have some interesting results that could aid in proving or disproving the $2$-Large Conjecture. \begin{theorem}[Furstenberg and Weiss \cite{FW}] Let $k,r \in \mathbb{Z}^+$. For any $r$-coloring of $\mathbb{Z}^+$, for some color $i$, the set of common differences of monochromatic $k$-term arithmetic progressions of color $i$ is in IP$^\star$. \label{IPstar} \end{theorem} Theorem \ref{IPstar} gives a very strong property for the set of common differences in monochromatic arithmetic progressions. Along these same lines, we have the following result. \begin{theorem}[Bergelson and Hindman \cite{BH2}] For any $k \in \mathbb{Z}^+$ and any infinite subset of positive integers $A$, under any finite coloring of $\mathbb{Z}^+$ there exists a monochromatic $k$-term arithmetic progression with common difference in $FS(A)$. \end{theorem} Unfortunately, neither of these last two theorems helps guide us out of the rabbit hole, as generic $2$-large sets don't necessarily have any obvious structures. So we now find ourselves moving into the darkest corner of the hole: the combinatorics encampment. \section{Some Combinatorial Results} We start this section by providing an accessible set that is not $2$-large.
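The sum-free property of the cubes used above is the $n=3$ case of Fermat's Last Theorem (settled by Euler); it can at least be spot-checked numerically. This is an illustrative sketch, not part of the argument:

```python
def cube_sum_exists(limit):
    """Search for x, y <= limit with x^3 + y^3 a perfect cube."""
    small = [i ** 3 for i in range(1, limit + 1)]
    # x^3 + y^3 <= 2 * limit^3 < (2 * limit)^3, so this set of
    # candidate cubes covers every possible sum below.
    cubes = {i ** 3 for i in range(1, 2 * limit + 1)}
    return any(a + b in cubes for a in small for b in small)
```

As expected, `cube_sum_exists(200)` returns `False`, consistent with the cubes being closed under no sums at all.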
Had this result not been possible, we would have disproved the $2$-Large Conjecture (since there exists an accessible set that is not large) and been saved from the depths of the rabbit hole. But, alas, it was not meant to be. \begin{theorem} There exists an accessible set that is not $2$-large. \end{theorem} \begin{proof} This proof takes as inspiration the proof from \cite{GRS}, which provides an accessible set that is not $3$-large. Let $S=\{2^{4i}: i \geq 0\}$. From \cite[Th. 10.27]{LR}, we know that $S-S = \{2^{4j} - 2^{4i}: 0 \leq i < j\}$ is an accessible set. We will provide a $2$-coloring of $\mathbb{Z}^+$ that avoids monochromatic $25$-term $(S-S)$-aps. For each $n \in \mathbb{Z}^+$, write $n$ in its binary representation so that $\sum_{i \geq 0} b_i(n) 2^i = n$. Next, partition the $b_i=b_i(n)$ into intervals of length $4$: $$ [\dots b_i b_{i-1} \dots b_2 b_1 b_0] = \bigcup_{j \geq 0} I_j(b), $$ where $I_j(b) = [b_{4j+3}, b_{4j+2}, b_{4j+1}, b_{4j}]$. Each of these $I_j(b)$'s is one of 16 possible binary sequences (if some or all of $b_{4j+3}, b_{4j+2},$ and $b_{4j+1}$ are missing for the largest $j$, we take the missing terms to be $0$). Apply the mapping $m$ to each $I_j(b)$: $$ m: \left\{ \hspace*{-5pt} \begin{array}{lr} [0,0,1,1], [0,1,a,b], [1,0,0,0], [1,0,1,0], [1,0,1,1] \! \longrightarrow \!0\\[5pt] [0,0,0,0], [0,0,0,1], [0,0,1,0], [1,0,0,1], [1,1,c,d] \!\longrightarrow \! 1,\\ \end{array} \right. $$ \normalsize where $a,b,c,d$ may be, independently, either $0$ or $1$. \normalsize Let $m_j(b) = m(I_j(b))$. We color the integer $n$ by $\chi(n) = \sum_{j} m_j(b) \,\,(\mbox{mod } 2)$. We will now show that $\chi$ is a $2$-coloring of $\mathbb{Z}^+$ that does not admit a monochromatic $25$-term arithmetic progression with common difference from $S-S$. Let $x_1 < x_2 < \cdots < x_{25}$ be an arithmetic progression with common difference $d=2^{4s} - 2^{4t}$. 
We will use the shorthand $I_j(\ell)$ to represent $I_j(x_{\ell})$ and $m_j(\ell)$ to represent $m(I_j(\ell))$. First, consider $U=\{I_t(\ell): 8 \leq \ell \leq 23\}$. By definition of $d$, we see that $I_t(\ell)$ and $I_t(\ell+1)$ will differ; moreover, by the definition of $d$, the set $U$ will contain all 16 possible binary strings of length 4. In particular, there exists $r \in \{8,9,\dots,23\}$ such that $I_t(r) = [0,0,1,0]$. Thus, by adding/subtracting multiples of $d$, we can conclude the following: $$ \begin{array}{c|c|c} j&I_t(j)&m(I_t(j))\\[3pt]\hline &\\[-7pt] r-7&[1,0,0,1]&1\\[3pt] r-6&[1,0,0,0]&0\\[3pt] r-5&[0,1,1,1]&0\\[3pt] r-4&[0,1,1,0]&0\\[3pt] r-3&[0,1,0,1]&0\\[3pt] r-2&[0,1,0,0]&0\\[3pt] r-1&[0,0,1,1]&0\\[3pt] r&[0,0,1,0]&1\\[3pt] r+1&[0,0,0,1]&1\\[3pt] r+2&[0,0,0,0]&1 \end{array} $$ \normalsize (we do not need $j \in \{r+3, r+4,\dots,r+8\}$ and these need not exist). Next, we consider how $d$ affects $I_s(\ell)$ for $\ell \in \{r-7, r-6, \dots,r-1,r+1, r+2\}$ for all possible cases of $I_s(r)$. If $I_s(r) = [1,0,0,0]$ we have $m_s(r)=0$ so that $m_s(r)+m_t(r) = 1$. This gives us $I_s(r+1) = [1,0,0,1]$ so that $m_s(r+1)+m_t(r+1) = 2$. All other $I_j(r)$ are unaffected by the addition of $d$ (to get to $I_j(r+1)$; there are no carries with addition by $d$). Hence, $\chi(x_r) \neq \chi(x_{r+1})$. This same analysis (perhaps switching the values of the sums $m_s(j)+m_t(j), j=r,r+1$) holds when $I_s(r) \in \{[1,0,0,1], [1,0,1,1]\}$. If $I_s(r)$ is in $\{\,[1,1,0,1],\,\, [1,1,1,0], [1,1,1,1],\,\, [0,0,0,1],\,\, [0,0,1,0],\,\, [0,1,0,0],\break [0,1,0,1],$ $[0,1,1,0], [0,1,1,1]\,\}$ then we have $\chi(x_r) \neq \chi(x_{r-1})$ easily since there are no carry issues. If $I_s(r) = [1,0,1,0]$ then $\chi(x_r) \neq \chi(x_{r+2})$; if $I_s(r) = [1,1,0,0]$ then $\chi(x_r) \neq \chi(x_{r-3})$. The remaining two cases $I_s(r) \in \{[0,0,0,0], [0,0,1,1]\}$ both involve carries and need extra attention.
If $I_s(r)=[0,0,0,0]$ then either $\chi(x_r) \neq \chi(x_{r-4})$ or $\chi(x_r) \neq \chi(x_{r-5})$ depending on whether or not any carries change the value of $\sum_{j > s} m_j(r-1)$. Lastly, if $I_s(r) = [0,0,1,1]$, then either $\chi(x_r) \neq \chi(x_{r-6})$ or $\chi(x_r) \neq \chi(x_{r-7})$ depending on whether or not any carries change the value of $\sum_{j > s} m_j(r-1)$. We have shown that for any possible $I_s(r)$, the $25$-term $(S-S)$-ap cannot be monochromatic, thereby proving the theorem. \end{proof} We will now present some positive results. But first, some definitions. \begin{definition}[$r$-syndetic] Let $S=\{s_1<s_2<\cdots\}$ be a syndetic set with $s_{i+1}-s_i \leq r$ for all $i \in \mathbb{Z}^+$. Then we say that $S$ is {\it $r$-syndetic}. \end{definition} \begin{definition}[anastomotic] Let $D \subseteq \mathbb{Z}^+$. If every syndetic set admits a $k$-term $D$-ap then we say that $D$ is {\it $k$-anastomotic}. If $D$ is $k$-anastomotic for all $k \in \mathbb{Z}^+$, then we say that $D$ is {\it anastomotic}. \end{definition} \begin{remark*} The term syndetic has been defined by other authors and is an adjective meaning serving to connect. The term anastomotic is new and I chose it since its meaning is: serving to communicate between parts of a branching system. \end{remark*} \begin{theorem} Let $D \subseteq \mathbb{Z}^+$. If $D$ is $r$-large, then every $r$-syndetic set admits arbitrarily long $D$-aps. Conversely, if every $(2r+1)$-syndetic set admits arbitrarily long $D$-aps, then $D$ is $r$-large. \label{13} \end{theorem} \begin{proof} The first statement is straightforward: for any $r$-syndetic set $S$, we define an $r$-coloring $\chi:\mathbb{Z}^+\rightarrow \{1,\dots,r\}$ by $\chi(i) = \min\{s-i: s \in S \mbox{ with } i \leq s\}+1$. Since $D$ is $r$-large we have arbitrarily long monochromatic $D$-aps under $\chi$. Since arithmetic progressions are translation invariant, by the definition of $\chi$ we see that $S$ admits arbitrarily long $D$-aps.
To prove the second statement, consider $\sigma$, an arbitrary $r$-coloring of $\mathbb{Z}^+$ using the colors $1,2,\dots,r$. For every color $i$, replace each occurrence of the color $i$ in $\sigma$ by the string of length $r$ with a $1$ in the $i$th position and $0$ in all others. This process gives us a $2$-coloring $\hat{\sigma}$ of $\mathbb{Z}^+$. Let $S$ be the set of positions of all $1$s under $\hat{\sigma}$. Note that $S$ is a $(2r+1)$-syndetic set. By assumption, $S$ admits arbitrarily long $D$-aps. In particular, it admits arbitrarily long $(rD)$-aps (by taking every $r$th term in a sufficiently long $D$-ap). Since we now have arbitrarily long $(rD)$-aps in $S$, in the original $r$-coloring $\sigma$ we have arbitrarily long monochromatic $D$-aps. \end{proof} \begin{remark*} Recently, Host, Kra, and Maass \cite{HKM} have independently proven a result similar to Theorem \ref{13}; their result is slightly stronger in that they prove that ``$(2r+1)$-syndetic" can be replaced by ``$(2r-1)$-syndetic." \end{remark*} An immediate consequence of Theorem \ref{13} is the following: \begin{corollary} Let $D \subseteq \mathbb{Z}^+$. Then $D$ is large if and only if $D$ is anastomotic. \label{cor14} \end{corollary} It would be nice if every syndetic set contained an infinite arithmetic progression, for then we would be done: since every $2$-large set contains a multiple of every integer, every $2$-large set would be anastomotic and, by Corollary \ref{cor14}, we would be done. Unfortunately, this is not true. Let $\alpha \in (0,1)$ be irrational and let $\chi: \mathbb{Z}^+ \rightarrow \{0,1\}$ be given by $\chi(n) = \lfloor \alpha n \rfloor - \lfloor \alpha (n-1) \rfloor$. Each color class under $\chi$ corresponds to a syndetic set (e.g., if $\alpha$ is the reciprocal of the golden ratio, then each color class is $4$-syndetic). Consider $S = \{i: \chi(i)=0\}$. Assume, for a contradiction, that there exist integers $a$ and $d$ such that $S$ contains $a+dn$ for all $n\in\mathbb{Z}^+$.
Then we have $\lfloor \alpha(a+dn)\rfloor = \lfloor \alpha(a+dn-1)\rfloor$ for all $n \in \mathbb{Z}^+$. Let $\beta = \alpha d$ and note that $\beta$ is also irrational. Let $\{ x \}$ be the fractional part of $x$. Then $\{ \{\beta n\}: n \in \mathbb{Z}^+\}$ is dense in $[0,1)$. Consider $y \in (\{-\alpha a\}, \{\alpha - \alpha a\})$. We claim that we cannot have $\{\beta n\} = y$ for any $n \in \mathbb{Z}^+$. Assume to the contrary that $\{\beta j\} = y$ so that $\beta j = \ell + y$ for some integer $\ell$. Then $\beta j - y$ is an integer. By choice of $y$, we have $\beta j - y$ strictly between $\beta j + \alpha a - \alpha$ and $\beta j + \alpha a$. But this is not possible since $\lfloor \beta j + \alpha a - \alpha\rfloor = \lfloor \beta j + \alpha a \rfloor$. Hence, $\{ \{\beta n\}: n \in \mathbb{Z}^+\} \cap (\{-\alpha a\}, \{\alpha - \alpha a\}) = \emptyset.$ But $\{ \{\beta n\}: n \in \mathbb{Z}^+\}$ is dense in $[0,1)$, a contradiction.\footnote{The preceding argument is based on an answer given by Mario Carneiro on {\tt math.stackexchange.com} for question 1487778; I was unable to find a published reference.} Hence, we have a syndetic set without an infinite arithmetic progression. The same analysis shows that $\{i: \chi(i)=1\}$ does not contain one either. Hence, we can cover the positive integers with two syndetic sets, neither of which contains an infinite arithmetic progression. \subsection{A Brief Detour} We take a brief side trip back to the dynamical system setting in order to expand on Figure 1 to include the new notions just introduced so that we have an overview of how the different types of recurrence are related. Furthermore, we define two other types of recurrence to display the relationships between a fixed number of colors and an arbitrary number of colors. \begin{definitions} Let $r \in \mathbb{Z}^+$ be fixed and consider $D \subseteq \mathbb{Z}^+$.
We say $D$ is {\it $r$-chromatically $k$-recurrent} if every $r$-coloring of $\mathbb{Z}^+$ admits a monochromatic $k$-term $D$-ap; we say $D$ is {\it $r$-syndetically $k$-recurrent} if every $r$-syndetic set contains a $k$-term $D$-ap. \end{definitions} In Figure \ref{fig3} below, we assume that the same restrictions as those in Figure 1 are still in place. Missing implications are unknown. \begin{figure} \caption{Relationship between types of recurrence considered in this article} \label{fig3} \end{figure} \subsection{Back to the Combinatorics Encampment} You are surely asking yourself, if you've traveled with me this far: do you have any positive results? Well, to help you keep a sanguine outlook, I'll now present a few items that offer a glimmer of hope for escape from our current residence in the dark depths of the large rabbit hole (pun intended). We start with a strong condition for a set to be large. \begin{definition}[bounded multiples condition] We say that $D\subseteq \mathbb{Z}^+$ satisfies the {\it bounded multiples condition} if there exists $M \in \mathbb{Z}^+$ such that for every $i \in \mathbb{Z}^+$, there exists $m \leq M$ such that $im \in D$. In other words, for every positive integer $i$, at least one element of $\{i, 2i, 3i,\dots,Mi\}$ is in $D$. \label{bmc} \end{definition} Using this definition, we have the following sufficient condition. \begin{theorem} If $D \subseteq \mathbb{Z}^+$ satisfies the bounded multiples condition, then $D$ is large. \label{prop1} \end{theorem} \begin{proof} Let $M$ be the constant that exists by the bounded multiples condition. We proceed by showing that $D$ is $k$-anastomotic for all $k$. Let $A \subseteq \mathbb{Z}^+$ be syndetic. If $A^c$ is not syndetic, then it has arbitrarily long gaps. This means that $A$ contains arbitrarily long intervals. In this situation, $A$ contains arbitrarily long $D$-aps.
Now let both of the sets $A=\{a_i\}_{i \in \mathbb{Z}^+}$ and $A^c=B=\{b_i\}_{i \in \mathbb{Z}^+}$ be $g$-syndetic with $g = \max_{i \in \mathbb{Z}^+}\{a_{i+1}-a_i, b_{i+1}-b_i\}$. Let $\chi(n)$ equal 1 if $n \in A$ and $0$ if $n \in B$. Define the $2^{g+1}$-coloring $\gamma: \mathbb{Z}^+ \rightarrow \{0,1\}^{g+1}$ by $\gamma(n) = (\chi(n), \chi(n+1),\dots,\chi(n+g))$. Note that $\gamma(n)$ cannot consist of all 0s or all 1s by the definition of $g$. Since $\gamma$ is a finite coloring of $\mathbb{Z}^+$, by van der Waerden's Theorem there exists a monochromatic $Mk$-term arithmetic progression under $\gamma.$ By the definition of $\gamma$ both $A$ and $B$ contain $Mk$-term arithmetic progressions, each with the same common difference. Let $d$ be this common difference. Since $D$ satisfies the bounded multiples condition, there exists $m \leq M$ such that $md \in D$. By taking every $m^{\mathrm{th}}$ term of the $Mk$-term arithmetic progressions, we see that both $A$ and $B$ have $k$-term $D$-aps. \end{proof} \begin{remark*} The converse of Theorem \ref{prop1} is not true. We know that the set of perfect squares is large; however, it does not satisfy the bounded multiples condition. To see this, consider a prime $p$. Then the smallest multiple of $p$ in the set of perfect squares is $p^2$. Since $p$ may be arbitrarily large, we do not have the existence of $M$ needed in the definition above. \end{remark*} As was done at the beginning of this article, we will now present a ``finite version" of the 2-large definition. Instead of appealing to the Compactness Principle, we will offer a terse proof of equivalence. \begin{lemma} Let $D$ be $2$-large. For each $k \in \mathbb{Z}^+$, there exists an integer $N=N(k,D)$ such that every $2$-coloring of $\{1,2,\dots,N\}$ admits a monochromatic $k$-term $D$-ap. 
\label{lem16} \end{lemma} \begin{proof} Assume not and, for each $i \in \mathbb{Z}^+$, let $\chi_i$ be a $2$-coloring of $\{1,2,\dots,i\}$ with no monochromatic $k$-term $D$-ap. Define $\gamma$, inductively, by $\gamma(j)=c_j$ where $\chi_i(j)=c_j$ occurs infinitely often among those $\chi_i$ with $\chi_i(\ell) = \gamma(\ell)$ for all $\ell<j$. Now note that $\gamma$ is a $2$-coloring of $\mathbb{Z}^+$ with no monochromatic $k$-term $D$-ap, a contradiction. \end{proof} We will have use for the following notation in the remainder of this section. \begin{notation} Let $m \in \mathbb{Z}^+$ and $D \subseteq \mathbb{Z}^+$. Then $mD = \{md: d \in D\}$ and $D^m = \displaystyle\left\{ \prod_{f \in F} d_{f}: d_{f} \in D, F \subseteq \mathbb{Z}^+ \mbox{ with } |F|=m \right\}$. \end{notation} We can now present two easy lemmas. We will provide a proof for the first and leave the very similar proof of the second to the reader. \begin{lemma} Let $D$ be $2$-large and let $m \in \mathbb{Z}^+$. Then $N(k,mD) \leq mN(k,D)$. \label{lem17} \end{lemma} \begin{proof} Consider any $2$-coloring of the first $mN(k,D)$ positive integers. We will show that there exists a monochromatic $k$-term $mD$-ap. Given our coloring, consider only those integers divisible by $m$. Via the obvious one-to-one correspondence between $\{m,2m,\dots,mN(k,D)\}$ and $\{1,2,\dots,N(k,D)\}$, we have a monochromatic $k$-term $D$-ap in the latter interval, meaning that we have a monochromatic $k$-term $mD$-ap in the former interval. \end{proof} \begin{lemma} Let $m\in \mathbb{Z}^+$ and let $D \subseteq \mathbb{Z}^+$. Then $D$ is $r$-large if and only if $mD$ is $r$-large. \label{lem18} \end{lemma} We also will use the following definition. \begin{definition} Let $D$ be $2$-large. Define $M(k,D;2) = N(k,D)$ and, for $r \geq 3$, $$ M(k,D;r)=N(M(k,D;r-1),D), $$ where $N(k,D)$ is the integer from Lemma \ref{lem16}. \end{definition} As $N(k,D)$ exists for all $k$ (by Lemma \ref{lem16}), we see that $M(k,D;r)$ is well-defined for all $k$ and $r$.
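The notation just introduced can be made concrete with a small sketch (the sets are truncated to finite lists purely for illustration; note that in $D^m$ each index $f \in F$ independently selects some $d_f \in D$, so repeated factors are allowed):

```python
from itertools import combinations_with_replacement
from math import prod

def scaled(m, D):
    """mD = {m * d : d in D}."""
    return {m * d for d in D}

def m_fold_products(D, m):
    """D^m = all products of m elements of D, with repetition
    allowed (each index f in F picks some d_f in D independently)."""
    return {prod(c) for c in combinations_with_replacement(sorted(D), m)}
```

For instance, with $D=\{1,2,3\}$ we get $2D=\{2,4,6\}$ and $D^2=\{1,2,3,4,6,9\}$.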
We are now ready to finally see a ray of hope in the next theorem, due to Brown, Graham, and Landman \cite{BGL}. The proof presented here fills in the details of the sketch given in \cite{BGL}, using the functions defined in the present article. \begin{theorem}[\cite{BGL}] Let $r \in \mathbb{Z}^+$. Let $D \subseteq \mathbb{Z}^+$ be $r$-large. Then $D^m$ is $r^m$-large. \label{main} \end{theorem} We will prove this for $r=2$. The generalization to arbitrary $r$ is clear but requires new notations and definitions. \begin{proof} The proof is by induction on $m$. We will show that any $2^m$-coloring of $[1,M(k,D^m;2^m)]$ admits a monochromatic $k$-term $D^{m}$-ap. For $m=1$, we have $M(k,D^m;2^m) = N(k,D)$ so that the base case holds by definition of $N(k,D)$. We now assume the statement for $m$ and will show it for $m+1$. Note that we will be using the definition $M(k,D^m;r) = N(M(k,D^m;r-1),D)$ as the function $N$ is only valid for the given $2$-large set $D$. Consider any $2^{m+1}$-coloring of $[1,M(k,D^{m};2^{m+1})]$. Partition the colors into two equal sets, each of size $2^m$. Calling these sets ``red" and ``blue," we can view the coloring as a 2-coloring of the interval $[1,N(M(k,D^m; 2^{m+1}-1),D)] \supseteq [1,N(M(k,D^m; 2^{m}),D)]$. By definition of $N(M(k,D^m; 2^{m}),D)$ we have either a ``red" or ``blue" $D$-ap of length $M(k,D^m; 2^{m})$. In either case, we have a $2^m$-coloring of a $D$-ap: $a+d[1,M(k,D^m; 2^{m})]$. By the induction hypothesis, $[1,M(k,D^m; 2^{m})]$ admits a monochromatic $D^m$-ap of length $k$, giving us a monochromatic $k$-term $D^{m+1}$-ap. \end{proof} \section{Epilogue} Having attained Theorem \ref{main}, a hidden door in the rabbit hole has opened, leading us back to the Stone-\v{C}ech locale; by Theorem \ref{main}, if $D$ is a $2$-large subsemigroup of $(\mathbb{Z}^+,\cdot)$, then $D$ is large. For example, this result gives us, for any monomial $x^n$: if $S=\{x^n: x \in \mathbb{Z}^+\}$ is $2$-large, then it is large.
This holds since $i^n \cdot j^n = (ij)^n$ gives us that $S$ is a semigroup (of course, we already know $S$ is large via other means). However, the set of odd positive integers is also a semigroup (under multiplication) but is not $2$-large since, from \cite{BGL}, a $2$-large set must have a multiple of every positive integer. Combining the range of polynomials and multiples of every positive integer, Frantzikinakis \cite{Fran} has shown that if $p(n): \mathbb{Z}^+ \rightarrow \mathbb{Z}^+$ is an integer-valued polynomial then $D=\{p(n): n \in \mathbb{Z}^+\}$ is measurably $k$-recurrent for all $k$ if and only if it contains multiples of every positive integer. And now we once again find ourselves wading in the dynamical systems pool after traveling through the Stone-\v{C}ech locale. Note that Frantzikinakis' result is stronger than $D$ being large (see Figure \ref{fig3}) and relies heavily on polynomials but also suggests that, perhaps, if $D$ contains multiples of every positive integer, then $D$ is large. If this were true, then the $2$-Large Conjecture is true. But yet again, we are foiled: the set $\{n! : n \in \mathbb{Z}^+\}$ clearly contains a multiple of every positive integer, but is not 2-large \cite{BGL}. And now we are back in the combinatorics encampment. \vskip 5pt Okay silly rabbit, enough tricks; I surrender. \vskip 5pt For now. \vskip 20pt \noindent {\bf Acknowledgments.} I would like to thank Dan Saracino for a very careful reading of an earlier version of this paper. I would also like to thank the anonymous referee for an incredibly helpful and detailed report; this paper has been made better because of it. Finally, thank you to Sohail Farhangi for finding a gap in a proof of a theorem that appeared in a previous version of this paper. \vskip 30pt \footnotesize \end{document} \begin{biog} \item[Aaron Robertson] ([email protected]) received his PhD in mathematics in 1999 from Temple University.
He went to Colgate University immediately after graduate school. In his recent free time he has served on the Board of Education for the Hamilton Central School district, been the faculty liaison for Colgate's (currently ranked \#8 nationally) women's ice hockey team, played guitar and drums, sang, and recorded music. \begin{affil} Department of Mathematics, Colgate University, Hamilton NY 13346\\ [email protected] \end{affil} \end{biog} \eject \end{document}
\begin{document} \title{An Extension of the Blow-up Lemma to arrangeable graphs\thanks{YK was partially supported by CNPq (Proc.~308509/2007-2, 484154/2010-9, 477203/2012-4). AT and AW were partially supported by DFG grant TA 309/2-2. This work was partially supported by the University of S\~{a}o Paulo.}} \hyphenation{pre-decessors} \begin{abstract} The Blow-up Lemma established by Koml\'os, S\'ark\"ozy, and Szemer\'edi in 1997 is an important tool for the embedding of spanning subgraphs of bounded maximum degree. Here we prove several generalisations of this result concerning the embedding of $a$-arrangeable graphs, where a graph is called $a$-arrangeable if its vertices can be ordered in such a way that the neighbours to the right of any vertex $v$ have at most $a$ neighbours to the left of $v$ in total. Examples of arrangeable graphs include planar graphs and, more generally, graphs without a $K_s$-subdivision for constant~$s$. Our main result shows that $a$-arrangeable graphs with maximum degree at most $\sqrt{n}/\log n$ can be embedded into corresponding systems of super-regular pairs. This is optimal up to the logarithmic factor. We also present two applications. We prove that any large enough graph~$G$ with minimum degree at least $\big(\frac{r-1}{r}+\gamma\big)n$ contains an $F$-factor of every $a$-arrangeable $r$-chromatic graph~$F$ with at most $\xi n$ vertices and maximum degree at most $\sqrt{n}/\log n$, as long as $\xi$ is sufficiently small compared to $\gamma/(ar)$. This extends a result of Alon and Yuster [J. Combin. Theory Ser. B 66(2), 269--282, 1996]. Moreover, we show that for constant~$p$ the random graph $\Gnp$ is universal for the class of $a$-arrangeable $n$-vertex graphs~$H$ of maximum degree at most $\xi n/\log n$, as long as $\xi$ is sufficiently small compared to $p/a$.
\end{abstract} \renewcommand{\theenumi}{\roman{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \section{Introduction} The last 15 years have witnessed an impressive series of results guaranteeing the presence of spanning subgraphs in dense graphs. In this area, the so-called \emph{Blow-up Lemma} has become one of the key instruments. It emerged out of a series of papers by Koml\'os, S\'ark\"ozy, and Szemer\'edi (see e.g.~\cite{KSS95,KSS_pos,KSS97,KSS98,KSS_seyweak,KSS_sey,KSS_alon}) and asserts, roughly speaking, that we can find bounded degree spanning subgraphs in $\varepsilon$-regular pairs. It was used for determining, among others, sufficient degree conditions for the existence of $F$-factors, Hamilton paths and cycles and their powers, spanning trees and triangulations, and graphs of sublinear bandwidth in graphs, digraphs and hypergraphs (see the survey~\cite{KueOstSurvey} for an excellent overview of these and related achievements). In this way, the Blow-up Lemma has reshaped extremal graph theory. However, with very few exceptions, the embedded spanning subgraphs~$H$ considered so far came from classes of graphs with constant maximum degree, because the Blow-up Lemma requires the subgraph it embeds to have constant maximum degree. In fact, the Blow-up Lemma is usually the only reason why the proofs of the above mentioned results only work for such subgraphs. The central purpose of this paper is to overcome this obstacle. We shall provide extensions of the Blow-up Lemma that can embed graphs whose degrees are allowed to grow with the number of vertices. These versions require that the subgraphs we embed are arrangeable.\footnote{We remark that it was already suggested in~\cite{Komlos99} to relax the maximum degree constraint to arrangeability.} We will formulate them in the following and subsequently present some applications. \paragraph{Blow-up Lemmas.} We first introduce some notation.
Let~$G$, $H$ and $R$ be graphs with vertex sets $V(G)$, $V(H)$, and $V(R)=\{1,\dots,r\}\mathrel{=\mathop:}[r]$. For $v\in V(G)$ and $S,U\subseteq V(G)$ we define $N(v,S)\mathrel{\mathop:}= N(v)\cap S$ and $N(U,S)=\bigcup_{v\in U} N(v,S)$. Let $A,B\subset V(G)$ be non-empty and disjoint, and let $\varepsilon,\delta\in[0,1]$. The \emph{density} of the pair $(A,B)$ is defined to be $d(A,B):=e(A,B)/(|A||B|)$. The pair $(A,B)$ is \emph{$\varepsilon$-regular}, if $|d(A,B)-d(A',B')|\le\varepsilon$ for all $A' \subseteq A$ and $B' \subseteq B$ with $|A'|\geq\varepsilon|A|$ and $|B'|\geq\varepsilon|B|$. An $\varepsilon$-regular pair $(A,B)$ is called \emph{$(\varepsilon,\delta)$-regular}, if $d(A,B)\ge\delta$ and \emph{$(\varepsilon,\delta)$-super-regular}, if $|N(v,B)| \ge \delta|B|$ for all $v\in A$ and $|N(v,A)|\ge \delta|A|$ for all $v\in B$. We say that~$H$ has an \emph{$R$-partition} $V(H)=X_1\dcup\dots\dcup X_r$, if for every edge $xy\in E(H)$ there are distinct $i,j\in[r]$ with $x\in X_i$, $y\in X_j$ and $ij\in E(R)$. $G$ has a \emph{corresponding} \emph{$(\varepsilon,\delta)$-super-regular $R$-partition} $V(G)=V_1\dcup\dots\dcup V_r$, if $|V_i|=|X_i|=:n_i$ for all $i\in [r]$ and every pair $(V_i,V_j)$ with $ij\in E(R)$ is $(\varepsilon,\delta)$-super-regular. In this case~$R$ is also called the \emph{reduced graph} of the super-regular partition. Moreover, these partitions are \emph{balanced} if $n_1\le n_2\le \dots \le n_r\le n_1+1$. They are \emph{$\kappa$-balanced} if $n_j\le \kappa n_i$ for all $i,j\in[r]$. The partition classes $V_i$ are also called \emph{clusters}. With this notation, a simple version of the Blow-up Lemma of Koml\'os, S\'ark\"ozy, and Szemer\'edi~\cite{KSS97} can now be formulated as follows. \begin{thm}[Blow-up Lemma~\cite{KSS97}] \label{thm:Blow-up:KSS} Given a graph $R$ of order $r$ and positive parameters $\delta, \Delta$, there exists a positive $\varepsilon = \varepsilon(r,\delta, \Delta)$ such that the following holds. 
Suppose that $H$ and $G$ are two graphs with the same number of vertices, where $\Delta(H) \le \Delta$ and $H$ has a balanced $R$-partition, and $G$ has a corresponding $(\varepsilon,\delta)$-super-regular $R$-partition. Then there exists an embedding of $H$ into $G$. \end{thm} We remark that R\"odl and Ruci\'nski~\cite{RodlRuci99} gave a different proof for this result. In addition, Koml\'os, S\'ark\"ozy, and Szemer\'edi~\cite{KSS98} gave an algorithmic proof. Our first result replaces the restriction on the maximum degree of $H$ in Theorem~\ref{thm:Blow-up:KSS} by a restriction on its arrangeability. This concept was first introduced by Chen and Schelp in~\cite{CheSch93}. \begin{defi}[$a$-arrangeable] \label{def:arrangeable} Let $a$ be an integer. A graph is called \emph{$a$-arrangeable} if its vertices can be ordered as $(x_1,\dots ,x_n)$ in such a way that $\big|N\big(N(x_i,\Right_i),\Left_i\big)\big| \le a$ for each $1 \le i \le n$, where $\Left_i = \{x_1, x_2,\dots, x_i \}$ and $\Right_i = \{x_{i+1}, x_{i+2},\dots, x_n\}$. \end{defi} Obviously, every graph $H$ with $\Delta(H)\le a$ is $(a^2-a+1)$-arrangeable. Other examples of arrangeable graphs are planar graphs: Chen and Schelp showed that planar graphs are 761-arrangeable \cite{CheSch93}; Kierstead and Trotter \cite{KieTro93} improved this to 10-arrangeable. In addition, R\"odl and Thomas \cite{RodTho97} showed that graphs without a $K_s$-subdivision are $s^8$-arrangeable. On the other hand, even 1-arrangeable graphs can have unbounded degree (e.g.~stars). \begin{thm}[Arrangeable Blow-up Lemma] \label{thm:Blow-up:arr} Given a graph $R$ of order $r$, a positive real $\delta$ and a natural number $a$, there exists a positive real $\varepsilon = \varepsilon(r,\delta,a)$ such that the following holds.
Suppose that $H$ and $G$ are two graphs with the same number of vertices, where $H$ is $a$-arrangeable, $\Delta(H) \le \sqrt{n}/\log n$ and $H$ has a balanced $R$-partition, and $G$ has a corresponding $(\varepsilon,\delta)$-super-regular $R$-partition. Then there exists an embedding of $H$ into $G$. \end{thm} Koml\'os, S\'ark\"ozy, and Szemer\'edi proved that the Blow-up Lemma allows for the following strengthenings that are useful in applications. We allow the clusters to differ in size by a constant factor and we allow certain vertices of $H$ to restrict their image in~$G$ to be taken from an a priori specified set of linear size. However, in contrast to the original Blow-up Lemma, we need to be somewhat more restrictive about the image restrictions: We still allow linearly many vertices in each cluster to have image restrictions, but now only a constant number of different image restrictions is permissible in each cluster (we shall show in Section~\ref{sec:opt} that this is best possible). In the following, we state an extended version of the Blow-up Lemma that makes this precise. \begin{thm}[Arrangeable Blow-up Lemma, full version] \label{thm:Blow-up:arr:full} For all $C,a,\Delta_R,\kappa \in \field{N}$ and for all $\delta,c>0$ there exist $\varepsilon,\alpha>0$ such that for every integer $r$ there is $n_0$ such that the following is true for every $n\ge n_0$. 
Assume that we are given \renewcommand{\theenumi}{\alph{enumi}} \begin{enumerate} \item a graph~$R$ of order $r$ with $\Delta(R)<\Delta_R$, \item\label{item:Blow-up:H} an $a$-arrangeable $n$-vertex graph $H$ with maximum degree $\Delta(H)\le \sqrt{n}/\log n$, together with a $\kappa$-balanced $R$-partition $V(H)=X_1\dcup\dots\dcup X_r$, \item a graph $G$ with a corresponding $(\varepsilon,\delta)$-super-regular $R$-partition $V(G)=V_1\dcup\dots\dcup V_r$ with $|V_i|=|X_i|=:n_i$ for every $i\in[r]$, \item for every $i\in [r]$ a set $S_i\subseteq X_i$ of \emph{image restricted vertices} with $|S_i|\le \alpha n_i$, such that $|N_H(S_i)\cap X_j|\le \alpha n_j$ for all $ij\in E(R)$, \item\label{item:Blow-up:ir} and for every $i\in[r]$ a family $\mathcal{I}_i=\{I_{i,1},\dots,I_{i,C}\}\subseteq 2^{V_i}$ of permissible \emph{image restrictions}, each of size $|I_{i,j}|\ge c n_i$, together with a mapping $I\colon S_i\to \mathcal{I}_i$, which assigns a permissible image restriction to each image restricted vertex. \end{enumerate} Then there exists an embedding $\varphi\colon V(H) \to V(G)$ such that $\varphi(X_i)=V_i$ and $\varphi(x) \in I(x)$ for every $i\in[r]$ and every $x\in S_i$. \end{thm} As we shall show, the upper bound on the maximum degree of~$H$ in Theorem~\ref{thm:Blow-up:arr:full} is optimal up to the $\log$-factor (see Section~\ref{sec:opt}). However, if we additionally require that every $(a+1)$-tuple of vertices of~$G$ has a large common neighbourhood, then this degree bound can be relaxed to $o(n/\log n)$. \begin{thm}[Arrangeable Blow-up Lemma, extended version] \label{thm:Blow-up:arr:ext} Let $a,\Delta_R,\kappa \in \field{N}$ and $\iota,\delta>0$ be given. Then there exist $\varepsilon,\xi>0$ such that for every $r$ there is $n_0\in \field{N}$ such that the following holds for every $n\ge n_0$.
Assume that we are given a graph $R$ of order $r$ with $\Delta(R)<\Delta_R$, an $a$-arrangeable $n$-vertex graph $H$ with $\Delta(H) \le \xi n/\log n$, together with a $\kappa$-balanced $R$-partition, and a graph~$G$ with a corresponding $(\varepsilon,\delta)$-super-regular $R$-partition $V=V_1\dcup\dots\dcup V_r$. Assume that in addition for every $i\in[r]$ every tuple $(u_1,\dots,u_{a+1})$ of vertices in $V\setminus V_i$ satisfies $|\bigcap_{j\in[a+1]}N_G(u_j) \cap V_i|\ge \iota |V_i|$. Then there exists an embedding of $H$ into $G$. \end{thm} Again, the degree bound of $\xi n/\log n$ for $H$ in Theorem~\ref{thm:Blow-up:arr:ext} is optimal up to the constant factor. The same degree bound can be obtained if we only require~$H$ to be an almost spanning subgraph, even if the additional condition on $(a+1)$-tuples from Theorem~\ref{thm:Blow-up:arr:ext} is dropped. \begin{thm}[Arrangeable Blow-up Lemma, almost spanning version] \label{thm:Blow-up:arr:almost-span} Let $\mu>0$ and assume that we have exactly the same setup as in Theorem~\ref{thm:Blow-up:arr:full}, but with $\Delta(H)\le \xi n/\log n$ instead of the maximum degree bound given in~\eqref{item:Blow-up:H}, where~$\xi$ is sufficiently small compared to all other constants. Fix an $a$-arrangeable ordering of~$H$, let~$X_i'$ be the first $(1-\mu)n_i$ vertices of $X_i$ in this ordering, and set $H':=H[X_1'\cup\dots\cup X_r']$. Then there exists an embedding $\varphi\colon V(H') \to V(G)$ such that $\varphi(X_i')\subseteq V_i$ and $\varphi(x) \in I(x)$ for every $i\in[r]$ and every $x\in S_i\cap X_i'$. \end{thm} Let us point out that one additional essential difference between these three versions of the Blow-up Lemma and Theorem~\ref{thm:Blow-up:KSS} concerns the order of the quantifiers: the regularity~$\varepsilon$ that we require depends only on the maximum degree $\Delta_R$ of the reduced graph~$R$, but \emph{not} on the number of vertices of~$R$. Sometimes this is useful in applications.
Clearly, we can reformulate our theorems to match the original order of quantifiers of Theorem~\ref{thm:Blow-up:KSS}; the lower bound on $n_0$ can be omitted in this case. \paragraph{Applications.} To demonstrate the usefulness of these extensions of the Blow-up Lemma, we consider two example applications that can now be derived in a relatively straightforward manner. At the end of this section we are going to mention a few further applications that are more difficult and will be proven in separate papers. Our first application concerns $F$-factors in graphs of high minimum degree. This is a topic which is well investigated for graphs~$F$ of \emph{constant} size. For a graph~$F$ on~$f$ vertices, an \emph{$F$-factor} in a graph~$G$ is a collection of vertex disjoint copies of~$F$ in~$G$ such that all but at most $f-1$ vertices of~$G$ are covered by these copies of $F$. A classical theorem by Hajnal and Szemer\'edi~\cite{HajSze70} states that each $n$-vertex graph~$G$ with minimum degree $\delta(G)\ge\frac{r-1}{r}n$ has a $K_r$-factor. Alon and Yuster~\cite{AloYus} considered arbitrary graphs $F$ and showed that, if $r$ denotes the chromatic number of $F$, every sufficiently large graph~$G$ with minimum degree $\delta(G)\ge(\frac{r-1}{r}+\gamma)n$ contains an $F$-factor. This was improved upon by Koml\'os, S\'ark\"ozy, and Szemer\'edi~\cite{KSS_alon}, who replaced the linear term~$\gamma n$ in the degree bound by a constant $C=C(F)$; and by K\"uhn and Osthus~\cite{KueOst_packings}, who, inspired by a result of Koml\'os~\cite{Kom00}, determined the precise minimum degree threshold for every constant size~$F$ up to a constant. In contrast to the previous results we consider graphs~$F$ whose size may grow with the number of vertices~$n$ of the host graph~$G$. More precisely, we allow graphs~$F$ of size linear in~$n$. 
To prove this result, we use Theorem~\ref{thm:Blow-up:arr:full} (see Section~\ref{sec:Applications}) and hence we require that~$F$ is $a$-arrangeable and has maximum degree at most $\sqrt{n}/\log n$. \begin{thm}\label{thm:GrowingHfactors} For every $a,r$ and $\gamma>0$ there exist $n_0$ and $\xi>0$ such that the following is true. Let $G$ be any graph on $n\ge n_0$ vertices with $\delta(G)\ge (\tfrac{r-1}{r}+\gamma)n$ and let $F$ be an $a$-arrangeable $r$-chromatic graph with at most $\xi n$ vertices and with maximum degree $\Delta(F)\le \sqrt{n}/\log n$. Then $G$ contains an $F$-factor. \end{thm} Our second application is a universality result for random graphs $\Gnp$ with constant~$p$ (that is, a graph on vertex set~$[n]$ for which every $e\in\binom{[n]}{2}$ is inserted as an edge independently with probability~$p$). A graph~$G$ is called \emph{universal} for a family~$\mathcal{H}$ of graphs if~$G$ contains a copy of each graph in~$\mathcal{H}$ as a subgraph. For instance, graphs that are universal for the family of forests, of planar graphs and of bounded degree graphs have been investigated (see~\cite{AloCap} and the references therein). Here we consider the class \[\mathcal{H}_{n,a,\xi} := \{H: \text{$|H|=n$, $H$ is $a$-arrangeable, $\Delta(H)\le \xi n/\log n$}\}\] of arrangeable graphs whose maximum degree is allowed to grow with~$n$. Using Theorem~\ref{thm:Blow-up:arr:ext}, we show that with high probability $\Gnp$ contains a copy of each graph in $\mathcal{H}_{n,a,\xi}$ (see Section~\ref{sec:Applications}). Universality problems for bounded degree graphs in (subgraphs of) random graphs with constant~$p$ were also considered in~\cite{HuaLeeSud}. Another result for subgraphs of potentially growing degree and $p$ tending to 0 can be found in~\cite{Rio00}. Theorem~2.1 of~\cite{Rio00} implies that any $a$-arrangeable graph of maximum degree $o(n^{1/4})$ can be embedded into $\Gnp$ with $p>0$ constant with high probability. 
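To illustrate why the additional hypothesis of Theorem~\ref{thm:Blow-up:arr:ext} is natural in this setting: in $\Gnp$ with constant~$p$, standard concentration arguments show that every $(a+1)$-tuple of vertices has a common neighbourhood of size close to $p^{a+1}n$, so the $\iota$-condition holds with high probability for any fixed $\iota<p^{a+1}$. The following sketch (our own illustration, which ignores the partition into clusters) checks this empirically on a sampled random graph:

```python
import itertools
import random

def worst_common_neighbourhood(n, p, a, seed=0):
    """Sample G(n,p) and return the smallest ratio
    |N(u_1) cap ... cap N(u_{a+1})| / n over all (a+1)-tuples of vertices.
    For constant p this stays bounded away from zero; typical tuples have
    ratio close to p^(a+1).  (Illustration only, not from the paper.)"""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return min(
        len(set.intersection(*(adj[u] for u in tpl))) / n
        for tpl in itertools.combinations(range(n), a + 1)
    )

# For n=200, p=1/2, a=1 even the worst pairwise co-degree ratio is bounded
# away from zero, while typical pairs have ratio close to p^2 = 1/4.
print(worst_common_neighbourhood(200, 0.5, 1))
```

Of course, the theorem requires the condition only for tuples outside the cluster $V_i$ under consideration; the sketch checks the stronger all-tuples version for simplicity.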
\begin{thm}\label{thm:GnpUniversal} For all constants $a$, $p>0$ there exists $\xi>0$ such that $\Gnp$ is universal for $\mathcal{H}_{n,a,\xi}$ with high probability. \end{thm} In addition, we use Theorem~\ref{thm:Blow-up:arr:full} in~\cite{ArrBolKom} to establish an analogue of the Bandwidth Theorem from~\cite{BoeSchTar09} for arrangeable graphs. More precisely, we prove the following result. \begin{thm}[Arrangeable Bandwidth Theorem~\cite{ArrBolKom}] \label{thm:BolKom:arr} For all $r,a\in\field{N}$ and $\gamma>0$, there exist constants $\beta>0$ and $n_0\in\field{N}$ such that for every $n\ge n_0$ the following holds. If $H$ is an $r$-chromatic, $a$-arrangeable graph on $n$ vertices with $\Delta(H)\le \sqrt{n}/\log n$ and bandwidth at most $\beta n$ and if $G$ is a graph on $n$ vertices with minimum degree $\delta(G)\ge \left(\tfrac{r-1}r+\gamma\right)n$, then there exists an embedding of $H$ into $G$. \end{thm} As we also show there, this implies for example that every graph~$G$ with minimum degree at least $(\frac34+\gamma)n$ contains \emph{almost every} planar graph~$H$ on~$n$ vertices, provided that $\gamma>0$. In addition it implies that almost every planar graph~$H$ has Ramsey number $R(H)\le12|H|$. Finally, another application of Theorem~\ref{thm:Blow-up:arr:full} appears in~\cite{ASW12}. In that paper Allen, Skokan, and W\"urfl prove the following result, closing a gap left in the analysis of large planar subgraphs of dense graphs by K\"uhn, Osthus, and Taraz~\cite{KueOstTar} and K\"uhn and Osthus~\cite{KueOst_triang}. \begin{thm}[Allen, Skokan, W\"urfl~\cite{ASW12}] \label{thm:DensePlanarSubgraphs} For every $\gamma\in (0,1/2)$ there exists $n_\gamma$ such that every graph on $n\ge n_\gamma$ vertices with minimum degree at least $\gamma n$ contains a planar subgraph with $2n-4k$ edges, where $k$ is the unique integer such that $k\le 1/(2\gamma)<k+1$. 
\end{thm} \paragraph{Methods.} To prove the full version of our Arrangeable Blow-up Lemma (Theorem~\ref{thm:Blow-up:arr:full}), we proceed in two steps. Firstly, we use a random greedy algorithm to embed an almost spanning subgraph~$H'$ of the target graph~$H$ into the host graph~$G$ (proving Theorem~\ref{thm:Blow-up:arr:almost-span} along the way). Secondly, we complete the embedding by finding matchings in suitable auxiliary graphs which concern the remaining vertices in $V(H)\setminus V(H')$ and the unused vertices~$V^\text{\it Free}$ of~$G$. The first step uses an approach similar to the one of Koml\'os, S\'ark\"ozy, and Szemer\'edi~\cite{KSS97}. The second step utilises ideas from R\"odl and Ruci\'nski's proof~\cite{RodlRuci99}. Let us briefly comment on the similarities and differences. The use of a random greedy algorithm to prove the Blow-up Lemma appears in~\cite{KSS97}. The idea is intuitive and simple: Order the vertices of the target graph~$H'$ arbitrarily and consecutively embed them into the host graph~$G$, in each step choosing a random image vertex $\varphi(x)$ in the set $A(x)$ of those vertices which are still possible as images for the vertex~$x$ of~$H'$ we are currently embedding. If for some unembedded vertex~$x$ the set~$A(x)$ gets too small, then~$x$ is declared critical and embedded immediately, but still randomly, in~$A(x)$. Our random greedy algorithm proceeds similarly, with one main difference. We cannot use an arbitrary order of the vertices of~$H'$, but have to use one which respects the arrangeability bound. Consequently, we also cannot embed critical vertices immediately -- each vertex has to be embedded when its turn comes according to the given order. So we need a different strategy for dealing with critical vertices. We solve this problem by reserving a linear-sized set of special vertices in~$G$ for the embedding of critical vertices, of which there are only very few. The second step is more intricate.
Similarly to the approach in~\cite{RodlRuci99} we construct for each cluster $V_i$ an auxiliary bipartite graph~$F_i$ with the classes $X_i\setminus V(H')$ and $V_i\cap V^\text{\it Free}$ and an edge between $x\in V(H)$ and $v\in V(G)$ whenever embedding $x$ into $v$ is a permissible extension of the partial embedding from the first step. Moreover, we guarantee that $V(H)\setminus V(H')$ is a stable set. Then, clearly, if each~$F_i$ has a perfect matching, there is an embedding of~$H$ into~$G$. So the question remains how to show that the auxiliary graphs have perfect matchings. R\"odl and Ruci\'nski approach this by showing that their auxiliary graphs are super-regular. We would like to use a similar strategy, but there are two main difficulties. Firstly, because the degrees in our auxiliary graphs vary greatly, they cannot be super-regular. Hence we have to appropriately adjust this notion to our setting, which results in a property that we call weighted super-regular. Secondly, the proof that our auxiliary graphs are weighted super-regular now has to proceed quite differently, because we are dealing with arrangeable graphs. \paragraph{Structure.} This paper is organised as follows. In Section~\ref{sec:notation} we provide notation and some tools. In Section~\ref{sec:almost-spanning} we show how to embed almost spanning arrangeable graphs, which will prove Theorem~\ref{thm:Blow-up:arr:almost-span}. In Section~\ref{sec:spanning} we extend this to a spanning embedding, proving Theorem~\ref{thm:Blow-up:arr:full}. At the end of Section~\ref{sec:spanning}, we also outline how a similar argument gives Theorem~\ref{thm:Blow-up:arr:ext}. In Section~\ref{sec:opt} we explain why the degree bounds in the new versions of the Blow-up Lemma and the requirements for the image restrictions are essentially best possible. In Section~\ref{sec:Applications}, we give the proofs for our applications, Theorem~\ref{thm:GrowingHfactors} and Theorem~\ref{thm:GnpUniversal}.
\section{Notation and preliminaries} \label{sec:notation} All logarithms are to base $e$. For a graph~$G$ we write $V(G)$ for its vertex set, $E(G)$ for its edge set and denote the number of its vertices by $|G|$, its \emph{maximum degree} by $\Delta(G)$ and its \emph{minimum degree} by $\delta(G)$. Let $u,v\in V(G)$ and $U,W\subset V(G)$. The \emph{neighbourhood} of~$u$ in~$G$ is denoted by $N_G(u)$, the neighbourhood of~$u$ in the set $U$ by $N_G(u,U):=N_G(u)\cap U$. Similarly $N_G(U)=\bigcup_{x\in U} N_G(x)$ and $N_G(U,W):=N_G(U)\cap W$. The \emph{co-degree} of~$u$ and~$v$ is $\deg_G(u,v)=|N_G(u)\cap N_G(v)|$. We often omit the subscript~$G$. For easier reading, we will often use $x$, $y$ or $z$ for vertices in the graph~$H$ that we are embedding, and $u$, $v$, $w$ for vertices of the host graph $G$. We shall also use the following version of the Hajnal--Szemer\'edi Theorem~\cite{HajSze70}. \begin{thm} \label{thm:HajnalSzemeredi} Every graph $G$ on $n$ vertices with maximum degree $\Delta(G)$ can be partitioned into $\Delta(G)+1$ stable sets, each of size $\lfloor n/(\Delta(G)+1)\rfloor$ or $\lceil n/(\Delta(G)+1)\rceil$. \end{thm} \subsection{Arrangeability} Let~$H$ be a graph and $(x_1,x_2,\dots,x_n)$ be an $a$-arrangeable ordering of its vertices. We write $x_i \prec x_j$ if and only if $i<j$ and say that $x_i$ is \emph{left} of~$x_j$ and $x_j$ is \emph{right} of $x_i$. We write $N^-(x)\mathrel{\mathop:}= \{y\in N_H(x) : y \prec x\}$ and $N^+(x)\mathrel{\mathop:}= \{y\in N_H(x): x \prec y\}$ and call these the set of \emph{predecessors} and the set of \emph{successors} of $x$, respectively. Predecessors and successors of vertex sets and in vertex sets are defined accordingly. Then $|N^+(x)| \le \Delta(H)$ for all $x\in V(H)$ and the definition of arrangeability says that $N^-\big(N^+(x_i)\big)\cap\{x_1,\dots,x_i\}$ is of size at most~$a$ for each $i\in[n]$.
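Arrangeability of a concrete ordering can be verified mechanically. The following sketch (our own illustration, not part of the paper) is a direct transcription of the condition that $N^-\big(N^+(x_i)\big)\cap\{x_1,\dots,x_i\}$ has size at most~$a$; it also shows that for the star example from the introduction the choice of ordering matters:

```python
def is_a_arrangeable(order, adj, a):
    """Check whether the given left-to-right ordering witnesses a-arrangeability:
    for every x_i, the predecessors of the successors of x_i that lie in
    {x_1, ..., x_i} number at most a.  adj maps each vertex to its neighbour set."""
    pos = {v: i for i, v in enumerate(order)}
    for i, x in enumerate(order):
        successors = {y for y in adj[x] if pos[y] > i}   # N^+(x_i)
        preds = {z for y in successors for z in adj[y] if pos[z] <= i}
        if len(preds) > a:                               # |N^-(N^+(x_i)) ∩ {x_1..x_i}|
            return False
    return True

# The star K_{1,4} is 1-arrangeable with the centre ordered first ...
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(is_a_arrangeable([0, 1, 2, 3, 4], star, 1))  # True
# ... but the ordering with the centre last does not witness 1-arrangeability:
print(is_a_arrangeable([1, 2, 3, 4, 0], star, 1))  # False
```

This matches the remark in the introduction that stars, despite unbounded degree, are 1-arrangeable.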
Moreover, it follows that all $x\in V(H)$ satisfy $|N^-(x)| \le a$ and \begin{align} \label{eq:degree-sum} e(H) = \sum_{x\in V(H)} |N^+(x)| = \sum_{x\in V(H)} |N^-(x)| \le a n\, . \end{align} In the proof of our main theorem, it will turn out to be desirable to have a vertex ordering which is not only arrangeable, but also has the property that its final $\mu n$ vertices form a stable set. More precisely we require the following properties. \begin{defi}[stable ending] \label{def:StableEnding} Let $\mu>0$ and let $H = (X_1\dcup\dots\dcup X_r,E)$ be an $r$-partite, $a$-arrangeable graph with partition classes of order $|X_i|=n_i$ with $\sum_{i\in[r]}n_i=n$. Let $(v_1,\dots,v_{n})$ be an $a$-arrangeable ordering of $H$. We say that the ordering has a \emph{stable ending of order} $\mu n$ if $W=\{v_{(1-\mu)n +1},\dots,v_{n}\}$ has the following properties \begin{enumerate} \item $|W\cap X_i| = \mu n_i$ for every $i\in[r]$, \item $H[W]$ is a stable set. \end{enumerate} \end{defi} The next lemma shows that an arrangeable order of a graph can be reordered to have a stable ending while only slightly increasing the arrangeability bound. \begin{lem} \label{lem:arr:StableEnding} Let $a,\Delta_R,\kappa$ be integers and let $H$ be an $a$-arrangeable graph that has a $\kappa$-balanced $R$-partition with $\Delta(R)<\Delta_R$. Then $H$ has a $(5a^2\kappa\Delta_R)$-arrangeable ordering with stable ending of order $\mu n$, where $\mu = 1/(10a(\kappa\Delta_R)^2)$. \end{lem} \begin{proof} Let $X=X_1\dcup\dots\dcup X_r$ be a $\kappa$-balanced $R$-partition of $H$ with $|X_i|=n_i$. Further let $(x_1,\dots,x_{n})$ be any $a$-arrangeable ordering of $H$. In a first step we will find a stable set $W\subseteq X$ with $|W\cap X_i|=\mu n_i$ for $\mu=1/(10a(\kappa\Delta_R)^2)$. Note that for every $i\in[r]$ a vertex $x\in X_i$ has only neighbours in sets $X_j$ with $ij\in E(R)$. Further $H[X_i\cup \{X_j:ij\in E(R)\}]$ has at most $\kappa\Delta_Rn_i$ vertices and is $a$-arrangeable. 
Therefore \begin{align*} \sum_{w\in X_i} \deg(w) \leByRef{eq:degree-sum} 2a\kappa\Delta_Rn_i. \end{align*} It follows that at least half the vertices $w\in X_i$ have $\deg(w)\le 4a\kappa\Delta_R$. Let $W_i'$ be the set of these vertices and $m'_i$ be their number. Now we greedily find a stable set $W\subseteq \bigcup_{i\in[r]}W'_i$ as follows. In the beginning we set $W=\emptyset$. Then we iteratively select an $i\in[r]$ with \begin{equation}\label{eq:arr:choice} |X_i\cap W|/n_i = \min_{j\in [r]} |X_j\cap W|/n_j\,, \end{equation} choose an arbitrary vertex $x\in W_i'$, move it to $W$ and delete $x$ from $W_i'$ and $N_H(x)$ from $W_j'$ for all $j\in[r]$. We perform this operation until we have found a stable set $W$ with $|W\cap X_i|=\mu n_i$ for all $i\in[r]$ or we attempt to choose a vertex from an empty set~$W_{i^*}'$. So assume that, at some point, we try to choose a vertex from an empty set $W_{i^*}'$. For each $i\in[r]$ let $m_i$ be the number of vertices chosen from $X_i$ (and moved to~$W$) so far. Moreover, let $i\in[r]$ be such that $m_i<\mu n_i$ and consider the last step when a vertex from $X_i$ was chosen. Before this step, $m_i-1$ vertices of~$X_i$ and at most $m_{i^*}$ vertices of~$X_{i^*}$ have been chosen. By~\eqref{eq:arr:choice} we thus have $(m_i-1)/n_i\le m_{i^*}/n_{i^*}$, which implies $m_i\le\kappa m_{i^*}+1$ because $n_i\le\kappa n_{i^*}$. Hence, since $W'_{i^*}$ became empty, we have \begin{align*} n_{i^*}/2 \le m_{i^*}' &\le m_{i^*} + \sum_{\{i^*,i\}\in E(R)}m_i 4a\kappa\Delta_R\\ &\le m_{i^*} + (\Delta_R-1)(\kappa m_{i^*}+1) 4a\kappa\Delta_R \le m_{i^*}\, 5a(\kappa\Delta_R)^2\,. \end{align*} Thus $m_{i^*}\ge n_{i^*}/(10a(\kappa\Delta_R)^2)$. Since we then try to choose from $W'_{i^*}$ we must have $m_{i^*}/n_{i^*}\le m_i/n_i$ by~\eqref{eq:arr:choice}, which implies $m_i\ge n_i/(10a(\kappa\Delta_R)^2)=\mu n_i$. Hence we indeed find a stable set $W$ with $|W\cap X_i|=\mu n_i$ for all $i\in[r]$. 
Given this stable set $W$ we define a new ordering in which these vertices are moved to the end in order to form the stable ending. To make this more precise let $(x_1',\dots,x_n')$ be the vertex ordering obtained from $(x_1,\dots,x_n)$ by moving all vertices of $W$ to the end (in any order). It remains to prove that $(x_1',\dots,x_n')$ is $(5a^2\kappa\Delta_R)$-arrangeable. Let $L_i'=\{x_1',\dots,x_i'\}$ and $R_i'=\{x_{i+1}',\dots,x_n'\}$ be the vertices left and right of $x'_i$ in the new ordering. We have to show that \begin{align*} \big|N\big(N(x_i',R_i'),L_i'\big)\big|\le 5a^2\kappa\Delta_R \end{align*} for all $i\in[n]$. This is obvious for the vertices in $W$ because they are now at the end and $W$ is stable. For $x_i\notin W$ let $N'_i=N\big(N(x_i,R'_i),L'_i\big)$ be the set of predecessors of successors of $x_i$ in the new ordering. $N_i$ is defined analogously for the original ordering. Then all vertices in $N_i'\setminus N_i$ are neighbours of predecessors $y$ of $x_i$ in the original ordering with $y\in W$. There are at most $a$ such left-neighbours of $x_i$ and each of these has at most $4a\kappa\Delta_R$ neighbours by definition of $W$. Hence \[ |N_i'| \le|N_i| +a\cdot 4a\kappa\Delta_R\le a + 4a^2\kappa\Delta_R \le 5a^2\kappa\Delta_R\,. \] \end{proof} \subsection{Weighted regularity} \label{sec:WeightedRegularity} In our proof we shall make use of a weighted version of $\varepsilon$-regularity. More precisely, we will have to deal with a bipartite graph whose vertices have very different degrees. The idea is then to give each vertex a weight inversely proportional to its degree and then say that the graph is weighted regular if the following holds. \begin{defi}[Weighted regular pairs] \label{def:weighted:regularity} Let $\varepsilon>0$ and consider a bipartite graph $G=(A\dcup B,E)$ with a weight function $\omega:A\to [0,1]$.
For $A'\subseteq A$ , $B'\subseteq B$ we define the \emph{weighted density} \[ d_\omega(A',B')\mathrel{\mathop:}=\frac{\sum_{x\in A'}\omega(x)|N(x,B')|}{|A'|\cdot|B'|}\, . \] We say that the pair $(A,B)$ with weight function $\omega$ is \emph{weighted $\varepsilon$-regular} (with respect to $\omega$) if for any $A'\subseteq A$ with $|A'|\ge \varepsilon|A|$ and any $B'\subseteq B$ with $|B'|\ge \varepsilon |B|$ we have \[ |d_\omega(A,B) - d_\omega(A',B')| \le \varepsilon\, . \] \end{defi} Many results for $\varepsilon$-regular pairs carry over to weighted $\varepsilon$-regular pairs. For one, subpairs of weighted regular pairs are weighted regular. \begin{prop} \label{prop:regular:sub} Let $G=(A\dcup B,E)$ with weight function $\omega:A\to [0,1]$ be weighted $\varepsilon$-regular. Further let $A'\subseteq A$, $B'\subseteq B$ with $|A'|\ge \gamma|A|$ and $|B'|\ge\gamma |B|$ for some $\gamma\ge\varepsilon$ and set $\varepsilon'\mathrel{\mathop:}=\max\{2\varepsilon,\varepsilon/\gamma\}$. Then $(A'\dcup B', E\cap A'\times B')$ is a weighted $\varepsilon'$-regular pair with respect to the restricted weight function $\omega':A'\to [0,1]$, $\omega'(x)=\omega(x)$. \end{prop} \begin{proof} Let $A'\subseteq A$ and $B'\subseteq B$ with $|A'|\ge \gamma |A|$, $|B'|\ge\gamma|B|$ be arbitrary. The definition of weighted $\varepsilon$-regularity implies that $|d_\omega(A,B)-d_\omega(A',B')|\le \varepsilon$. Moreover, $|d_\omega(A,B)-d_\omega(A^*,B^*)|\le \varepsilon$ for all $A^*\subseteq A'$ and $B^*\subseteq B'$ with $|A^*|\ge (\varepsilon/\gamma)|A'|\ge \varepsilon|A|$, $|B^*|\ge (\varepsilon/\gamma)|B'|\ge \varepsilon|B|$ for the same reason. It follows by triangle inequality that $|d_\omega(A',B')-d_\omega(A^*,B^*)|\le 2\varepsilon$. Hence $(A'\dcup B',E\cap A'\times B')$ with weight function $\omega':A'\to[0,1]$ is a weighted $\varepsilon'$-regular pair where $\varepsilon'=\max\{2\varepsilon,\varepsilon/\gamma\}$. 
\end{proof} If most vertices of a bipartite graph have the `right' degree and most pairs have the `right' co-degree then the graph is an $\varepsilon$-regular pair. This remains true for weighted regular pairs and weighted degrees and co-degrees. \begin{defi}[Weighted degree and co-degree] \label{def:weighted:degree} Let $G=(A\dcup B, E)$ be a bipartite graph and $\omega:A\to [0,1]$. For $x,y\in A$ we define the \emph{weighted degree} of $x$ as $\deg_\omega(x)\mathrel{\mathop:}= \omega(x)|N(x,B)|$ and the \emph{weighted co-degree} of $x$ and $y$ as $\deg_\omega(x,y)\mathrel{\mathop:}= \omega(x)\omega(y)|N(x,B)\cap N(y,B)|$. \end{defi} A proof of the following lemma can be found in the Appendix. \begin{lem} \label{lem:reg:weighted:deg-codeg} Let $\varepsilon>0$ and $n\ge \varepsilon^{-6}$. Further let $G=(A\dcup B,E)$ be a bipartite graph with $|A|=|B|=n$ and let $\omega: A \to [\varepsilon,1]$ be a weight function for $G$. If \begin{enumerate} \item $|\{x\in A: |\deg_\omega(x)-d_\omega(A,B)n| > \varepsilon^{14}n\}| < \varepsilon^{12}n$ \quad and \item $|\{\{x,y\}\in \binom A2: |\deg_\omega(x,y)-d_\omega(A,B)^2n| \ge \varepsilon^{9}n\}| \le \varepsilon^{6}\binom n2$ \end{enumerate} then $(A,B)$ is a weighted $3\varepsilon$-regular pair. \end{lem} It is well known that a balanced $(\varepsilon,\delta)$-super-regular pair has a perfect matching if $\delta>2\varepsilon$ (see, e.g., \cite{RodlRuci99}). Similarly, balanced weighted regular pairs with an appropriate minimum degree bound have perfect matchings (see the Appendix for a proof). \begin{lem} \label{lem:reg:weighted:matching} Let $\varepsilon>0$ and let $G=(A\dcup B,E)$ with $|A|=|B|=n$ and weight function $\omega: A \to [\sqrt\varepsilon,1]$ be a weighted $\varepsilon$-regular pair. If $\deg(x)>2\sqrt\varepsilon n$ for all $x\in A\cup B$ then $G$ contains a perfect matching. \end{lem} \subsection{Chernoff type bounds} \label{sec:ChernoffBounds} Our proofs will heavily rely on the probabilistic method.
In particular we will want to bound random variables that are close to being binomial. By close to binomial we mean that the individual events are not necessarily independent, but occur with a certain probability even when we condition on the outcomes of the other events. The following two variations on the classical bound by Chernoff make this more precise. \begin{lem} \label{lem:pseudo:Chernoff} Let $0\le p_1\le p_2 \le 1$, $0<c \le 1$. Further let $\mathcal{A}_i$ for $i\in[n]$ be 0-1-random variables and set $\mathcal{A}:=\sum_{i\in[n]}\mathcal{A}_i$. If \[ p_1 \le \PP\left[\mathcal{A}_i=1 \,\left|~\parbox{150pt}{ $\mathcal{A}_{j}=1$ for all $j \in J$ and\\ $\mathcal{A}_j=0$ for all $j\in [i-1]\setminus J$}\right. \right] \le p_2 \] for every $i\in[n]$ and every $J\subseteq[i-1]$ then \[ \PP[\mathcal{A} \le (1-c) p_1n] \le \exp\left(-\frac{c^2}3 p_1n \right) \] and \[ \PP[\mathcal{A} \ge (1+c)p_2n] \le \exp\left(-\frac{c^2}3 p_2n \right)\, . \] \end{lem} Similarly we can state a bound on the number of tuples of certain random variables. \begin{lem} \label{lem:pseudo:Chernoff:tuple} Let $0<p$ and $a,m,n\in \field{N}$. Further let $\mathcal{I}\subseteq\mathcal{P}([n])\setminus\{\emptyset\}$ be a collection of $m$ disjoint sets with at most $a$ elements each. For every $i\in [n]$ let $\mathcal{A}_i$ be a 0-1-random variable. Further assume that for every $I\in\mathcal{I}$ and every $k\in I$ we have \[ \PP\left[\mathcal{A}_k=1 \,\left|~\parbox{150pt}{ $\mathcal{A}_{j}=1$ for all $j \in J$ and\\ $\mathcal{A}_j=0$ for all $j\in [k-1]\setminus J$}\right. \right] \ge p \] for every $J\subseteq [k-1]$ with $[k-1]\cap I\subseteq J$. Then \[ \PP\Big[ \big|\{I\in \mathcal{I}: \mathcal{A}_{i}=1\text{ for all $i\in I$}\}\big|\ge \tfrac{1}{2} p^am\Big] \ge 1-2\exp\Big(-\frac1{12} p^am \Big)\, . \] \end{lem} The proofs for both lemmas can be found in the Appendix.
The first one is very close to the proof of the classical Chernoff bound while the second proof builds on the fact that the events $[\mathcal{A}_i=1 \text{ for all $i\in I$}]$ have probability at least $p^a$ for every $I\in\mathcal{I}$. In particular, in the special case $a=1$, Lemma~\ref{lem:pseudo:Chernoff} implies Lemma~\ref{lem:pseudo:Chernoff:tuple}. \section{An almost spanning version of the Blow-up Lemma} \label{sec:almost-spanning} This section is dedicated to the proof of Theorem~\ref{thm:Blow-up:arr:almost-span} which is a first step towards Theorem~\ref{thm:Blow-up:arr:full}. We give a randomised algorithm for the embedding of an almost spanning subgraph $H'$ into $G$ and show that it is well defined and that it succeeds with positive probability. This embedding of $H'$ is later extended to the embedding of a spanning subgraph $H$ in Section~\ref{sec:spanning}. Applying the randomised algorithm to $H$ while only embedding $H'$ provides the structural information necessary for the extension of the embedding. It is for this reason that we define a graph $H$ while only embedding a subgraph $H'\subseteq H$ into $G$ in this section. \begin{remark} In the following we shall always assume that each super-regular pair $(V_i,V_j)$ appearing in the proof has density \begin{equation*} d(V_i,V_j)=\delta \end{equation*} \emph{exactly}, and minimum degree \begin{equation} \label{eq:min-deg} \min\nolimits_{v\in V_i} \deg(v,V_j) \ge \tfrac12\delta|V_j| \,, \quad \min\nolimits_{v\in V_j} \deg(v,V_i) \ge \tfrac12\delta|V_i| \,, \end{equation} since otherwise we can simply appropriately delete random edges to obtain this situation (while possibly increasing regularity to~$2\varepsilon$). \end{remark} \subsection{Constants, constants} Since there will be plenty of constants involved in the following proofs we give a short overview first. 
\begin{tabbing} \quad \= $\Delta_R$: \= the maximum degree of $R$ is \emph{strictly} smaller than $\Delta_R$ \\ \> $r$: \> the number of clusters \\ \> $a$: \> the arrangeability of $H$ \\ \> $s$: \> the chromatic number of $H$ \\ \> $\delta$: \> the density of the pairs $(V_i,V_j)$ in $G$ \\ \> $\mu$: \> the proportion of $G$ that will be left after embedding $H$ \\ \> $\xi$: \> some constant in the degree-bound of $H$ \\ \> $\varepsilon$: \> the regularity of the pairs $(V_i,V_j)$ in $G$ \\ \> $\varepsilon'$: \> the weighted regularity of the auxiliary graphs $F_i(t)$ \\ \> $\kappa$: \> the maximum quotient between cluster sizes \\ \> $\gamma$: \> a threshold for moving a vertex into the critical set \\ \> $\lambda$: \> the fraction of vertices whose predecessors receive a special embedding \\ \> $\alpha$: \> the fraction of vertices with image restrictions \\ \> $c$: \> the relative size of the image restrictions \\ \> $C$: \> the maximum number of image restrictions per cluster \\ \end{tabbing} Now let $C,a,\Delta_R,\kappa \in\field{N}$ and $\delta,c,\mu>0$ be given. We define the following constants. \begin{align} \gamma &= \frac c2 \frac{\mu}{10} \delta^a\;, \label{eq:consts:gamma} \\ \lambda &= \frac{1}{25 a}\delta\gamma\;, \label{eq:consts:lambda}\\ \varepsilon' &= \min\left\{\left(\frac{\lambda \delta^a}{6\cdot 2^{a^2+1}3^a}\right)^2,\left(\frac{7\gamma}{30}\right)^2\right\}\;, \label{eq:consts:eps:p}\\ \varepsilon &= \min\left\{ \frac1{\Delta_R(1+C)2^{a+1}}\varepsilon', \left(\frac{\varepsilon'}{3}\right)^{36}\right\}\;, \label{eq:consts:eps} \\ \alpha &= \frac{\sqrt\varepsilon}6\;. \label{eq:consts:alpha} \intertext{Furthermore, let $r$ be given. Then we choose} \xi &= \frac{8\varepsilon^2}{9\gamma^2\kappa r}\,. 
\label{eq:consts:xi} \end{align} Moreover, we ensure that $n_0$ is big enough to guarantee \begin{align} \sqrt{n_0} \ge 48\frac{3^a2^{a^2+1}a \kappa r}{\lambda \delta^a}\;, \quad n_0 \ge 60\frac{\kappa r}{\varepsilon^2\delta\mu} \log (12 (n_0)^2) \text{,~~and~~~} \log n_0 \ge 36\frac{2^{a^2}a^2\kappa r}{\lambda} \,. \label{eq:consts:n} \end{align} All logarithms are base $e$. In short, the constants used relate as \[ 0 < \xi \ll \varepsilon \ll \alpha \ll \varepsilon' \ll \lambda \ll \gamma \ll \mu, \delta \le 1\, . \] Moreover, $\varepsilon \ll 1/\Delta_R$. Note that it follows from these definitions that $(1+\varepsilon/\delta)^a\le 1+\sqrt\varepsilon/3$ and $(1-\varepsilon/\delta)^a\ge 1-\sqrt\varepsilon/3$ which implies \begin{align} \label{eq:eps:1} \frac{(\delta+\varepsilon)^a}{1+\sqrt{\varepsilon}/3} \le \delta^a \le \frac{(\delta-\varepsilon)^a}{1-\sqrt{\varepsilon}/3}, \quad \text{in particular} \quad (\delta-\varepsilon)^a \ge \frac{9}{10} \delta^a. \end{align} \subsection{The randomised greedy algorithm} \label{sec:rga:definition} Let $V(H)=(x_1,\dots,x_n)$ be an $a$-arrangeable ordering of $H$ and let $H'\subseteq H$ be a subgraph induced by $\{x_1,\dots,x_{(1-\mu)n}\}$. In this section we define a \emph{randomised greedy algorithm} (RGA) for the embedding of $V(H')$ into $V(G)$. This algorithm processes the vertices of $H$ vertex by vertex and thereby defines an embedding $\varphi$ of $H'$ into $G$. We say that vertex $x_t$ gets embedded in time step $t$ where $t$ runs from 1 to $T=|H'|$. Accordingly $t(x)\in [n]$ is defined to be the time step in which vertex $x$ will be embedded. We explain the main ideas before giving an exact definition of the algorithm. \textbf{Preparing $H$:} Recall that $S_i$ is the set of \emph{image restricted vertices} in $X_i$ and set $S\mathrel{\mathop:}= \bigcup S_i$. We define~$L_i^*$ to be the last $\lambda n_i$ vertices in $X_i\setminus N(S)$ in the arrangeable ordering. 
Moreover, we define $X_i^*\mathrel{\mathop:}= N^-(L_i^*) \cup S_i$ and $X^*\mathrel{\mathop:}= \bigcup X_i^*$. Those vertices will be called the \emph{important vertices}. The name indicates that they will play a major r\^ole for the spanning embedding. Important vertices shall be treated specially by the embedding algorithm. The $a$-arrangeability of $H$ implies that \begin{align} \label{eq:important:ub} |X_i^*| \le a \lambda n_i + \alpha n_i \end{align} for all $i\in [r]$. \textbf{Preparing $G$:} Before we start embedding into $G$ we randomly set aside $(\mu/10) n_i$ vertices in $V_i$ for each $i\in[r]$. We denote these sets by $V^\textsc{\MakeTextLowercase{S}}_i$ and call them the \emph{special vertices}. All remaining vertices, i.e., $V^\textsc{\MakeTextLowercase{O}}_i\mathrel{\mathop:}= V_i\setminus V^\textsc{\MakeTextLowercase{S}}_i$ will be called \emph{ordinary vertices}. As the name suggests our algorithm will try to embed most vertices of $H'$ into the sets $V^\textsc{\MakeTextLowercase{O}}_i$ and only if this fails resort to embedding into $V^\textsc{\MakeTextLowercase{S}}_i$. The idea is that the special vertices will be reserved for the important vertices and for those vertices in $H'$ whose embedding turns out to be intricate. We define \[ V^\textsc{\MakeTextLowercase{O}} \mathrel{\mathop:}= \bigcup_{i=1}^r V^\textsc{\MakeTextLowercase{O}}_i, \quad V^\textsc{\MakeTextLowercase{S}} \mathrel{\mathop:}= \bigcup_{i=1}^r V^\textsc{\MakeTextLowercase{S}}_i . \] Note that $V^\textsc{\MakeTextLowercase{O}}\dcup V^\textsc{\MakeTextLowercase{S}}$ defines a partition of $V(G)$. \textbf{Candidate sets:} While our embedding process is running, more and more vertices of $G$ will be used up to accommodate vertices of $H$. For each time step $t\in[n]$ we denote by $V^{\text{\it Free}}(t)\mathrel{\mathop:}= V(G)\setminus \{v\in V(G): \exists t'<t : \varphi(x_{t'}) = v\}$ the set of vertices where no vertex of $H$ has been embedded yet. 
Obviously $\varphi(x_t) \in V^{\text{\it Free}}(t)$ for all $t$. The algorithm will define sets $C_{t,x} \subseteq V(G)$ for $1\le t \le T$, $x\in V(H)$, which we will call the \emph{candidate set} for $x$ at time $t$. Analogously \[ A_{t,x}\mathrel{\mathop:}= C_{t,x}\cap V^{\text{\it Free}}(t)\] will be called the \emph{available candidate set} for $x$ at time $t$. Again we distinguish between the \emph{ordinary candidate set} $C^\textsc{\MakeTextLowercase{O}}_{t,x} \mathrel{\mathop:}= C_{t,x}\cap V^\textsc{\MakeTextLowercase{O}}$ and the \emph{special candidate set} $C^\textsc{\MakeTextLowercase{S}}_{t,x} \mathrel{\mathop:}= C_{t,x} \cap V^\textsc{\MakeTextLowercase{S}}$, as well as their respective available versions $A^\textsc{\MakeTextLowercase{O}}_{t,x} \mathrel{\mathop:}= A_{t,x} \cap V^\textsc{\MakeTextLowercase{O}}$ and $A^\textsc{\MakeTextLowercase{S}}_{t,x} \mathrel{\mathop:}= A_{t,x}\cap V^\textsc{\MakeTextLowercase{S}}$. Finally we define a set $Q(t)\subseteq V(H)$ and call it the \emph{critical set} at time $t$. $Q(t)$ will contain the vertices whose available candidate set became too small at time $t$ or earlier. \noindent \textbf{Algorithm RGA} \noindent \textsc{Initialisation}\\[.5ex] Randomly select $V^\textsc{\MakeTextLowercase{S}}_i\subseteq V_i$ with $|V^\textsc{\MakeTextLowercase{S}}_i|=(\mu/10)|V_i|$ for each $i\in [r]$. For $x\in X_i\setminus S_i$ set $C_{1,x}=V_i$ and for $x\in S_i$ set $C_{1,x}=I(x)$. Set $Q(1)=\emptyset$.\\[0.5ex] Check that for every $i\in[r]$, $v \in V_i$, and every $j\in N_R(i)$ we have \begin{align} \label{eq:cond:size:special-neighbours} \left| \frac{|N_G(v)\cap V_j^\textsc{\MakeTextLowercase{S}}|}{|V_j^\textsc{\MakeTextLowercase{S}}|} - \frac{|N_G(v)\cap V_j|}{|V_j|} \right| \le \varepsilon\,. \end{align} Further check that every $x\in S_i$ has \begin{align} \label{eq:cond:size:image-restricted} |C_{1,x}^\textsc{\MakeTextLowercase{S}}| = |I(x) \cap V_i^\textsc{\MakeTextLowercase{S}}| \ge \tfrac{1}{20}c\mu\, n_i\;.
\end{align} \textbf{Halt with failure} if any of these does not hold.\\[.5ex] \noindent\textsc{Embedding Stage}\\[.5ex] For $t\ge 1$, \textbf{repeat} the following steps.\\[1ex] \emph{Step~1 -- Embedding $x_t$:} Let $x=x_t$ be the vertex of $H$ to be embedded at time $t$. \noindent Let $A'_{t,x}$ be the set of vertices~$v\in A_{t,x}$ which satisfy~\eqref{eq:cond:size-ordinary} and~\eqref{eq:cond:size-special} for all $y\in N^+(x)$: \begin{align} (\delta-\varepsilon) |C^\textsc{\MakeTextLowercase{O}}_{t,y}| \le& |N_G(v)\cap C^\textsc{\MakeTextLowercase{O}}_{t,y}| \le (\delta+\varepsilon) |C^\textsc{\MakeTextLowercase{O}}_{t,y}|, \label{eq:cond:size-ordinary}\\ (\delta-\varepsilon) |C^\textsc{\MakeTextLowercase{S}}_{t,y}| \le& |N_G(v)\cap C^\textsc{\MakeTextLowercase{S}}_{t,y}| \le (\delta+\varepsilon) |C^\textsc{\MakeTextLowercase{S}}_{t,y}|.\label{eq:cond:size-special} \end{align} \noindent Choose $\varphi(x)$ uniformly at random from \begin{align}\label{eq:Ax} A(x)&\mathrel{\mathop:}=\begin{cases} A^\textsc{\MakeTextLowercase{O}}_{t,x} \cap A'_{t,x} \quad & \text{ if $x \notin X^*$ and $x \notin Q(t)$,} \\ A^\textsc{\MakeTextLowercase{S}}_{t,x} \cap A'_{t,x} & \text{ else.} \end{cases} \intertext{ \emph{Step~2 -- Updating candidate sets:} for each unembedded vertex $y\in V(H)$, set} \nonumber C_{t+1,y} &\mathrel{\mathop:}= \begin{cases} C_{t,y} \cap N_G(\varphi(x)) & \text{ if $y \in N^+(x)$, } \\ C_{t,y} &\text{ otherwise.}\end{cases} \end{align} \emph{Step~3 -- Updating critical vertices:} We will call a vertex $y\in X_i$ \emph{critical} if $y\notin X_i^*$ and \begin{align} \label{eq:cond:queue:in} |A^\textsc{\MakeTextLowercase{O}}_{t+1,y}| &< \gamma n_i. \end{align} Obtain $Q(t+1)$ by adding to $Q(t)$ all critical vertices that have not been embedded yet. Set $Q_i(t+1)=Q(t+1)\cap X_i$.\\[.5ex] \textbf{Halt with failure} \textbf{if} there is $i\in [r]$ with \begin{align} \label{eq:cond:queue:abort} |Q_i(t+1)| &> \varepsilon' n_i\,. 
\end{align} \textbf{Else, if} there are no more unembedded vertices left in $V(H')$ \textbf{halt with success},\\ \textbf{otherwise} set $t \leftarrow t+1$ and go back to \emph{Step~1}.\\ We have now defined our randomised greedy algorithm for the embedding of an almost spanning subgraph $H'$ into $G$. The rest of this section is devoted to proving that it succeeds with positive probability. This then implies Theorem~\ref{thm:Blow-up:arr:almost-span}. In order to analyse the RGA we define auxiliary graphs which describe possible embeddings of vertices of $H'$ into $G$. These auxiliary graphs inherit a weighted form of regularity from $G$ with positive probability. We show that the algorithm terminates successfully whenever this happens. In the subsequent Section~\ref{sec:Initialisation} we show that conditions~\eqref{eq:cond:size:special-neighbours} and~\eqref{eq:cond:size:image-restricted} hold with probability at least $5/6$. The \textsc{Initialisation} of the RGA succeeds whenever this happens. Moreover, we prove that the embedding of each vertex is randomly chosen from a set of linear size in Step~1 of the \textsc{Embedding Stage}. In Section~\ref{sec:AuxiliaryGraph} we define auxiliary graphs and derive that all auxiliary graphs are weighted regular with probability at least $5/6$. We also show that condition~\eqref{eq:cond:queue:abort} never holds if this is the case. Thus the \textsc{Embedding Stage} also terminates successfully with probability at least $5/6$. We conclude that the whole RGA succeeds with probability at least $2/3$. This implies Theorem~\ref{thm:Blow-up:arr:almost-span}. \subsection{Initialisation and Step~1} \label{sec:Initialisation} In this section we prove that the \textsc{Initialisation} of the RGA succeeds with probability at least $5/6$ and that Step~1 of the \textsc{Embedding Stage} always chooses vertices from a non-empty set. \begin{lem}\label{lem:RGA:init} The \textsc{Initialisation} succeeds with probability at least $5/6$, i.e.
both conditions~\eqref{eq:cond:size:special-neighbours} and~\eqref{eq:cond:size:image-restricted} hold for every $i\in[r]$, $v\in V_i$, $j\in [r]\setminus\{i\}$, and $x\in S_i$ with probability at least $5/6$. \end{lem} \begin{proof}[of Lemma~\ref{lem:RGA:init}] Fix one $v\in V_i$, $j\in [r]\setminus \{i\}$. Since $V_j^\textsc{\MakeTextLowercase{S}}$ is a randomly chosen subset of $V_j$ we have \[ \EE[ |N_G(v)\cap V_j^\textsc{\MakeTextLowercase{S}}|] = |N_G(v)\cap V_j|\frac{|V_j^\textsc{\MakeTextLowercase{S}}|}{|V_j|} \ge \frac{\delta}2 n_j \frac{\mu}{10}\,. \] It follows from a Chernoff bound (see Theorem~\ref{thm:ChernoffBound} in the Appendix) that \[ \PP\Big[ | N_G(v)\cap V_j^\textsc{\MakeTextLowercase{S}}| - |N_G(v)\cap V_j| \frac{|V_j^\textsc{\MakeTextLowercase{S}}|}{|V_j|} > \varepsilon |V_j^\textsc{\MakeTextLowercase{S}}| \Big] \le \exp\left( -\frac{\varepsilon^2}6 \delta n_i \frac{\mu}{10}\right) \leByRef{eq:consts:n} \frac 1{12n^2}\;. \] Similarly $c|V_i^\textsc{\MakeTextLowercase{S}}|\ge c\frac{\mu}{10}n_i$ and \[ \PP[ c|V_i^\textsc{\MakeTextLowercase{S}}|-|I(x)\cap V_i^\textsc{\MakeTextLowercase{S}}| \ge \frac c2 |V_i^\textsc{\MakeTextLowercase{S}}|] \le \exp\left(-\frac1{12}c\frac{\mu}{10}n_i\right) \le \frac1{12 n}\;. \] A union bound over all $i\in [r]$, $v\in V_i$, and $j\in N_R(i)$, and over all $x\in S_i$, finishes the proof. \end{proof} Let us write $\pi(t,x)$ for the number of predecessors of $x$ that already got embedded by time $t$: \[ \pi(t,x)\mathrel{\mathop:}=|\{t'<t: \{x,x_{t'}\}\in E(H)\}|. \] Obviously $\pi(t,x)\le a$ by the definition of arrangeability. \begin{lem} \label{lem:RGA:candidate-sets} Let $x\in X_i\setminus S_i$ and $t\le T$ be arbitrary.
Then \begin{align*} (1-\mu/10)(\delta-\varepsilon)^{\pi(t,x)}n_i &\le|C^\textsc{\MakeTextLowercase{O}}_{t,x}| \le (1-\mu/10)(\delta+\varepsilon)^{\pi(t,x)} n_i\, , \\ (\mu/10)(\delta-\varepsilon)^{\pi(t,x)} n_i &\le |C^\textsc{\MakeTextLowercase{S}}_{t,x}| \le (\mu/10)(\delta+\varepsilon)^{\pi(t,x)} n_i\,. \end{align*} If $x\in S_i$, $t\le T$ then \[ \frac{9}{10}\gamma n_i \le |C^\textsc{\MakeTextLowercase{S}}_{t,x}|\,. \] \end{lem} \begin{proof} The \textsc{Initialisation} of the RGA defines the candidate sets such that $|C^\textsc{\MakeTextLowercase{O}}_{1,x}| = (1-\mu/10)n_i$ and $|C^\textsc{\MakeTextLowercase{S}}_{1,x}| = (\mu/10)n_i$ for every $x\in X_i\setminus S_i$. In the \textsc{Embedding Stage} conditions~\eqref{eq:cond:size-ordinary} and \eqref{eq:cond:size-special} guarantee that $C^\textsc{\MakeTextLowercase{O}}_{t,x}$ and $C^\textsc{\MakeTextLowercase{S}}_{t,x}$ respectively shrink by a factor of $(\delta \pm \varepsilon)$ whenever a vertex in $N^-(x)$ is embedded. If $x\in S_i$ we still have $|C_{1,x}^\textsc{\MakeTextLowercase{S}}|\ge (c\mu/20)n_i$ by \eqref{eq:cond:size:image-restricted}. The statement follows as conditions~\eqref{eq:cond:size-ordinary} and \eqref{eq:cond:size-special} again guarantee that $C_{t,x}^\textsc{\MakeTextLowercase{S}}$ shrinks by a factor of at least $(\delta-\varepsilon)$ in each of the at most $a$ steps in which a predecessor of $x$ is embedded, and hence in total by at least $(\delta-\varepsilon)^a$. Moreover, $\tfrac{1}{20}c\mu(\delta-\varepsilon)^a\ge \tfrac{9}{10}\gamma$ by~\eqref{eq:eps:1} and the definition of $\gamma$. \end{proof} We now argue that $\varphi(x)$ is chosen from a non-empty set at the end of Step~1 in the \textsc{Embedding Stage}. In fact, we will show that $\varphi(x)$ is chosen from a set of size linear in $n_i$.
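Before doing so, we note that the inequalities in the proof above rest entirely on the constant hierarchy fixed in \eqref{eq:consts:gamma}--\eqref{eq:consts:alpha}. The following Python snippet is an illustrative sanity check only, not part of the proof: the sample parameter values $a=2$, $C=1$, $\Delta_R=3$, $\delta=c=1/2$, $\mu=1/10$ are assumptions chosen for demonstration. It verifies the hierarchy $0<\varepsilon<\alpha<\varepsilon'<\lambda<\gamma<\mu,\delta\le 1$ together with the bound $\tfrac{1}{20}c\mu(\delta-\varepsilon)^a\ge\tfrac{9}{10}\gamma$ used above; exact rational arithmetic is required because $\varepsilon$ lies far below floating-point range.

```python
# Illustrative check of the constant hierarchy; all parameter values are
# sample assumptions, not taken from the text.
from fractions import Fraction as F

a, C, Delta_R = 2, 1, 3                    # sample arrangeability and degree bound
delta, c, mu = F(1, 2), F(1, 2), F(1, 10)  # sample density parameters

gamma = (c / 2) * (mu / 10) * delta**a                       # (eq:consts:gamma)
lam = delta * gamma / (25 * a)                               # (eq:consts:lambda)
eps_p = min((lam * delta**a / (6 * 2**(a**2 + 1) * 3**a))**2,
            (7 * gamma / 30)**2)                             # (eq:consts:eps:p)
eps = min(eps_p / (Delta_R * (1 + C) * 2**(a + 1)),
          (eps_p / 3)**36)                                   # (eq:consts:eps)

# alpha = sqrt(eps)/6 is irrational, so compare it to its neighbours via squares:
assert 0 < eps and eps**2 < eps / 36    # eps < alpha, equivalent to eps < 1/36
assert eps / 36 < eps_p**2              # alpha < eps'
assert eps_p < lam < gamma < min(mu, delta) <= 1

# the shrinking bound for image-restricted vertices used in the proof above:
assert (c * mu / 20) * (delta - eps)**a >= F(9, 10) * gamma
```

For these sample values $\gamma=1/1600$ and $\lambda=1/160000$, while the denominator of $\varepsilon$ has several hundred decimal digits, which is why \texttt{fractions.Fraction} is used instead of floats.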
\begin{lem} \label{lem:RGA:choices} For any vertex $x\in X_i$ that gets embedded in the \textsc{Embedding Stage}, $\varphi(x)$ is chosen randomly from a set $A(x)$ of size at least $(\gamma/2) n_i$.\\ Moreover, if $x$ gets embedded into $V_i^\textsc{\MakeTextLowercase{S}}$ then \[ |X_i^*| + |Q_i(t(x))| + |A_{t(x),x}^\textsc{\MakeTextLowercase{S}}\setminus A(x)| \le \frac{\delta}{18} |C_{t(x),x}^\textsc{\MakeTextLowercase{S}}|\; . \] If the RGA completes the \textsc{Embedding Stage} successfully but $x\in X_i$ does not get embedded in the \textsc{Embedding Stage}, we have \[ |A_{T,x}^\textsc{\MakeTextLowercase{S}}| \ge \frac{7\gamma}{10} n_i\;. \] \end{lem} \begin{proof} We claim that any $x\in X_i$ that gets embedded into $V_i^\sigma$ during the \textsc{Embedding Stage} has \begin{align} \label{eq:RGA:choices:cl} |A_{t(x),x}^\sigma| \ge \frac{7\gamma}{10}n_i\;. \end{align} We will establish equation~\eqref{eq:RGA:choices:cl} at the end of this proof. In order to show the first statement of the lemma we now bound $|A_{t(x),x}^\sigma\setminus A(x)|$, i.e., we determine the number of vertices that potentially violate conditions~\eqref{eq:cond:size-ordinary} or~\eqref{eq:cond:size-special}. As $H$ is $a$-arrangeable, the vertices $y \in N^+(x)$ share at most $2^a$ distinct ordinary candidate sets $C^\textsc{\MakeTextLowercase{O}}_{t(x),y}$ in each $V_j$. The number of special candidate sets $C^\textsc{\MakeTextLowercase{S}}_{t(x),y}$ in each $V_j$ might be larger by a factor of $C$ as they arise from the intersection with at most $C$ sets $I_{j,k}$ (with $k\in [C]$) which are the image restrictions. Moreover, there are fewer than $\Delta_R$ sets $V_j$ with $j\in N_R(i)$, bounding the total number of candidate sets we have to care about by $\Delta_R(1+C)2^a$.
As we embed $x$ into an $\varepsilon$-regular pair there are at most $2\varepsilon n_i$ vertices $v\in A^\sigma_{t(x),x}$ for each $C^\textsc{\MakeTextLowercase{O}}_{t(x),y}$ that violate~\eqref{eq:cond:size-ordinary} (and the same number for each $C^\textsc{\MakeTextLowercase{S}}_{t(x),y}$ that violate~\eqref{eq:cond:size-special}) with $y\in N^+(x)$. Hence \begin{align} \label{eq:RGA:excludes} |A_{t(x),x}^\sigma\setminus A(x)| \le \Delta_R(1+C)2^{a+1}\varepsilon n_i \end{align} if $x$ gets embedded into $V_i^\sigma$. Now $\Delta_R(1+C)2^{a+1}\varepsilon n_i \le \gamma/5 n_i$ by~\eqref{eq:consts:eps} and \[ |A(x)| = |A_{t(x),x}^\sigma| - |A_{t(x),x}^\sigma\setminus A(x)| \ge (\gamma/2)n_i \] follows. Next we show the second statement of the lemma. If $x\in X_i$ gets embedded into $V_i^\textsc{\MakeTextLowercase{S}}$ in the \textsc{Embedding Stage} we conclude \begin{align*} |X_i^*| + |Q_i(t(x))| + |A_{t(x),x}^\textsc{\MakeTextLowercase{S}}\setminus A(x)| &\le \left(a\lambda + \alpha\right)n_i+ \varepsilon' n_i + \Delta_R(1+C)2^{a+1}\varepsilon n_i \\ & \leBy{\eqref{eq:consts:lambda},\eqref{eq:consts:eps}} \left(\tfrac1{25}\delta\gamma + \alpha + 2\varepsilon'\right)n_i \leByRef{eq:consts:alpha} \tfrac{1}{20}\delta\gamma\,n_i \\ &\le \tfrac{\delta}{18} |C_{t(x),x}^\textsc{\MakeTextLowercase{S}}| \end{align*} where the first inequality is due to \eqref{eq:important:ub},~\eqref{eq:cond:queue:abort}, and~\eqref{eq:RGA:excludes} and the last inequality is due to $|C_{t(x),x}^\textsc{\MakeTextLowercase{S}}|\ge \tfrac{9}{10}\gamma n_i$ by Lemma~\ref{lem:RGA:candidate-sets}. We now return to Equation~\eqref{eq:RGA:choices:cl}. In order to prove it we distinguish between the two cases of~\eqref{eq:Ax} in \emph{Step 1} of the \textsc{Embedding Stage}. 
If $x\notin X^*$ has never entered the critical set, it is embedded into $A^\textsc{\MakeTextLowercase{O}}_{t(x),x}$ and $|A^\textsc{\MakeTextLowercase{O}}_{t(x),x}| \ge (7\gamma/10)n_i$ holds by condition~\eqref{eq:cond:queue:in}. Else $x$ gets embedded into $A^\textsc{\MakeTextLowercase{S}}_{t(x),x}$. As only vertices from $Q_i(t(x))$ or $X_i^*$ have been embedded into $V_i^\textsc{\MakeTextLowercase{S}}$ so far, we can bound $|A^\textsc{\MakeTextLowercase{S}}_{t(x),x}|$ by \begin{align*} |A^\textsc{\MakeTextLowercase{S}}_{t(x),x}| &\ge |C^\textsc{\MakeTextLowercase{S}}_{t(x),x}| - |Q_i(t(x))| - |X_i^*|\\ &\geByRef{eq:important:ub} \frac{9\gamma}{10}n_i - \varepsilon' n_i - (a\lambda+\alpha)n_i \ge \frac{7\gamma}{10} n_i \end{align*} where the second inequality is due to Lemma~\ref{lem:RGA:candidate-sets} and the third inequality is due to our choice of constants. In any case we have $|A_{t(x),x}^\sigma|\ge \frac{7\gamma}{10}n_i$ if $x$ gets embedded into $V_i^\sigma$ (with $\sigma\in\{\textsc{\MakeTextLowercase{O}},\textsc{\MakeTextLowercase{S}}\}$) in Step~1 of the \textsc{Embedding Stage}. If the RGA completes the \textsc{Embedding Stage} successfully but $x\in X_i$ does not get embedded during the \textsc{Embedding Stage} the analogous argument gives \begin{align*} |A^\textsc{\MakeTextLowercase{S}}_{T,x}| &\ge |C^\textsc{\MakeTextLowercase{S}}_{T,x}| - |Q_i(T)| - |X_i^*| \ge \frac{7\gamma}{10} n_i\; . \end{align*} \end{proof} \subsection{The auxiliary graph} \label{sec:AuxiliaryGraph} We run the RGA as described above. In order to analyse it, we define \emph{auxiliary graphs} $F_i(t)$ which monitor at every time step $t$ whether a vertex $v\in V(G)$ is still contained in the candidate set of a vertex $x\in V(H)$. Let $F_i(t) \mathrel{\mathop:}= (X_i\dcup V_i,E(F_i(t)))$ where $xv \in E(F_i(t))$ if and only if $v\in C_{t,x}$. We stress that we use the candidate sets $C_{t,x}$ and not the set of available candidates $A_{t,x}$. 
This is well defined as $C_{t,x}\subseteq V_i$ for every $x \in X_i$ and every~$t$. Note that $F_i(t)$ is a balanced bipartite graph. By Lemma~\ref{lem:RGA:candidate-sets} we have \begin{align}\label{eq:aux:min-max-deg} (\delta-\varepsilon)^{\pi(t,x)} n_i \le \deg_{F_i(t)}(x) \le (\delta+\varepsilon)^{\pi(t,x)} n_i \end{align} for every $x\in X_i\setminus S_i$, i.e., the degree of $x$ in $F_i(t)$ strongly depends on the number $\pi(t,x)$ of embedded predecessors. The main goal of this section is proving, however, that if we take this into account and weight the auxiliary graphs accordingly, then they are with high probability weighted regular (see Lemma~\ref{lem:aux:reg}). It will turn out that the RGA succeeds if this is the case (see Lemma~\ref{lem:reg-small-queue}). More precisely, for $F_i(t)$ we shall use the weight function $\omega_t\colon X_i\to[0,1]$ with \begin{equation} \label{eq:weighted:def} \omega_t(x):=\delta^{a-\pi(t,x)}\,. \end{equation} Observe that the weight function depends on~$t$. For nicer notation, we write $\deg_{\omega,t}(x)\mathrel{\mathop:}=\deg_{\omega(t)}(x)=\omega_t(x)|N_{F_i(t)}(x)|$ for $x\in X_i$ and $d_{\omega,t}(X,Y)\mathrel{\mathop:}= d_{\omega(t)}(X,Y)$ for $X\subseteq X_i$ and $Y\subseteq V_i$. By~\eqref{eq:aux:min-max-deg} we have \begin{align} \deg_{\omega,t}(x) &\ge \delta^{a-\pi(t,x)} (\delta-\varepsilon)^{\pi(t,x)} n_i \geByRef{eq:eps:1} (1-\sqrt\varepsilon/3)\delta^a n_i\,, \label{eq:weighted:lower-deg} \\ \deg_{\omega,t}(x) &\le \delta^{a-\pi(t,x)} (\delta+\varepsilon)^{\pi(t,x)} n_i \leByRef{eq:eps:1} (1+\sqrt\varepsilon/3)\delta^a n_i \label{eq:weighted:upper-deg} \end{align} for every $x\in X_i\setminus S_i$ and $t$. Thus for every $i\in[r]$ and $t\le T$ the auxiliary graph $F_i(t)$ satisfies \begin{align} \label{eq:aux:reg:weighted:density} (1-\sqrt\varepsilon/2)\delta^a \leByRef{eq:consts:alpha} (1-\alpha)(1-\sqrt\varepsilon/3)\delta^a \le d_{\omega,t}(X_i,V_i) \le (1+\sqrt{\varepsilon}/2)\delta^a\, .
\end{align} Let $\mathcal{R}_i(t)$ denote the event that $F_i(t)$ is weighted $\varepsilon'$-regular for $\varepsilon'$ as in~\eqref{eq:consts:eps:p}. Further let $\mathcal{R}_i$ be the event that $\mathcal{R}_i(t)$ holds for all $t\le T$. \begin{lem} \label{lem:aux:reg} We run the RGA in the setting of Theorem~\ref{thm:Blow-up:arr:almost-span}. Then $\mathcal{R}_i$ holds for all $i\in [r]$ with probability at least $5/6$. \end{lem} We will use Lemma~\ref{lem:reg:weighted:deg-codeg} and weighted degrees and co-degrees to prove Lemma~\ref{lem:aux:reg}. \begin{proof}[of Lemma~\ref{lem:aux:reg}] This proof checks the conditions of Lemma~\ref{lem:reg:weighted:deg-codeg}. Let \begin{align*} W_i^{(1)}(t) &= \big\{ x\in X_i : |\deg_{\omega,t}(x)-d_{\omega,t}(X_i,V_i) n_i| > \sqrt{\varepsilon} n_i \big\}\,,\\ W_i^{(2)}(t) &= \Big\{ \{x,y\} \in \binom{X_i}2 : |\deg_{\omega,t}(x,y)-d_{\omega,t}(X_i,V_i)^2 n_i| \ge \sqrt[4]{\varepsilon} n_i \Big\} \end{align*} be the sets of vertices and pairs which deviate from the expected (co-)degree. Let $W_i^{(1)}:=\bigcup_{t\in[T]} W_i^{(1)}(t)$ and $W_i^{(2)}:=\bigcup_{t\in[T]} W_i^{(2)}(t)$. We have $\varepsilon'\ge 3\varepsilon^{1/36}$ by~\eqref{eq:consts:eps}, and by Lemma~\ref{lem:reg:weighted:deg-codeg} all auxiliary graphs $F_i(t)$ with $t=1,\dots,T$ are weighted $\varepsilon'$-regular if both \begin{align} |W_i^{(1)}| &< \sqrt{\varepsilon} n_i\,, \label{eq:aux:reg:co-deg:cond:1}\\ |W_i^{(2)}| &\le \sqrt[4]{\varepsilon} \binom{n_i}2\, . \label{eq:aux:reg:co-deg:cond:2} \end{align} Thus $\mathcal{R}_i$ occurs whenever equations~\eqref{eq:aux:reg:co-deg:cond:1} and~\eqref{eq:aux:reg:co-deg:cond:2} are satisfied. We will prove that this happens for a fixed $i\in[r]$ with probability at least $1-n_i^{-1}$, which together with a union bound over $i\in[r]$ implies the statement of the lemma. So fix $i\in[r]$.
From~\eqref{eq:weighted:lower-deg},~\eqref{eq:weighted:upper-deg} and~\eqref{eq:aux:reg:weighted:density} we deduce that \begin{align*} | \deg_{\omega,t}(x) - d_{\omega,t}(X_i,V_i) n_i | \le \sqrt{\varepsilon} n_i\ \end{align*} for all $x \in X_i\setminus S_i$ and every $t\le T$. But $|S_i|\le\alpha n_i < \sqrt\varepsilon n_i$ by~\eqref{eq:consts:alpha} and equation~\eqref{eq:aux:reg:co-deg:cond:1} is thus \emph{always} satisfied. It remains to consider~\eqref{eq:aux:reg:co-deg:cond:2}. To this end let $P_i$ be the set of all pairs $\{y,z\}\in\binom{X_i\setminus S_i}2$ with $N^-(y)\cap N^-(z)=\emptyset$. Observe that $|\binom{X_i}2 \setminus \binom{X_i\setminus S_i}2| \le \alpha n_i^2 \le\frac{\sqrt\varepsilon}6 n_i^2$ by~\eqref{eq:consts:alpha} and \begin{equation*}\begin{split} \Big|\Big\{\{y,z\}\in \binom{X_i}2 : N^-(y)\cap N^-(z)\neq\emptyset\Big\}\Big| & \le a\Delta(H)n_i \le a \frac{\xi n}{\log n} n_i \\ & \le \frac{2a\xi\kappa r}{\log n}\binom{n_i}2 \leByRef{eq:consts:xi} \sqrt\varepsilon \binom{n_i}2\,. \end{split}\end{equation*} Hence it suffices to show that \begin{equation}\label{eq:aux:almost-final-cond} \PP\Big[ |W_i^{(2)} \cap P_i| \le \tfrac12\sqrt[4]{\varepsilon} \binom{n_i}2 \Big] \ge \PP\Big[ |W_i^{(2)} \cap P_i| \le \tfrac12\sqrt[4]{\varepsilon} |P_i| \Big] > 1-n_i^{-1} \,. \end{equation} For this we first partition $P_i$ into sets of mutually predecessor disjoint pairs, i.e., $P_i=K_1\dcup\dots\dcup K_\ell$ such that for every $k\in[\ell]$ no vertex of $X_i$ appears in two different pairs in $K_k$, and moreover no two pairs in $K_k$ contain two vertices that have a common predecessor. 
Theorem~\ref{thm:HajnalSzemeredi} applied to the following graph asserts that there is such a partition with almost equally sized classes~$K_k$: Let $\mathcal{P}$ be the graph on vertex set $P_i$ with edges between exactly those pairs $\{y_1,y_2\}, \{y'_1,y'_2\} \in P_i$ which have either $\{y_1,y_2\}\cap\{y_1',y_2'\}\neq\emptyset$ or $\big(N^-_H(y_1)\cup N^-_H(y_2)\big)\cap \big(N^-_H(y'_{1})\cup N^-_H(y'_2)\big)\neq \emptyset$. This graph has maximum degree $\Delta(\mathcal{P})< 2a\Delta(H)n_i \le 2a(\xi n/\log n) n_i$. Hence Theorem~\ref{thm:HajnalSzemeredi} gives a partition $K_1\dcup\dots\dcup K_\ell$ of~$P_i$ into stable sets with $|K_k|\ge \lfloor |P_i|/\big(\Delta(\mathcal{P})+1\big)\rfloor \ge \log n/(8a\xi\kappa r)$ for all $k\in[\ell]$, where we used $|P_i|\ge n_i^{2}/4$. Now fix $k\in[\ell]$ and consider the random variable $K'_k\mathrel{\mathop:}= K_k\cap W_i^{(2)}$. Our goal now is to show \begin{equation}\label{eq:aux:final-cond} \PP\Big[ |K'_k| > \tfrac12\sqrt[4]{\varepsilon} |K_k| \Big] \le n_i^{-3} \,, \end{equation} as this together with another union bound over $k\in[\ell]$ with $\ell<n_i^2$ implies~\eqref{eq:aux:almost-final-cond}. We shall first bound the probability that some fixed pair $\{y,z\}\in K_k$ gets moved to $W_i^{(2)}(t)$ (and hence to $K'_k$) at some time $t$. For a pair $\{y,z\}\in K_k$ and $t\in[T]$ let $\mathcal{C}o_{t,y,z}$ denote the event that $| \deg_{\omega,t+1}(y,z) - \deg_{\omega,t}(y,z)| \le \varepsilon n_i$. Why are we interested in these events? Obviously $\mathcal{C}o_{t,y,z}$ holds for all time steps $t$ with $x_t \notin N^-(y)\dcup N^-(z)$. This is because we have $\deg_{\omega,t+1}(y,z)=\deg_{\omega,t}(y,z)$ for such~$t$. Moreover $|d_{\omega,t'}(X_i,V_i)^2-\delta^{2a}|\le 2\sqrt{\varepsilon}\delta^{2a}$ by~\eqref{eq:aux:reg:weighted:density}. Thus the fact that $|N^-(y)\dcup N^-(z)|\le 2a$ and the definition of~$\omega$ from~\eqref{eq:weighted:def} imply the following. 
If $\mathcal{C}o_{t,y,z}$ holds for all $t\le T$, then \begin{align*} |\deg_{\omega,t'}(y,z) - d_{\omega,t'}(X_i,V_i)^2n_i| &\le |\deg_{\omega,t'}(y,z)-\delta^{2a}n_i| + |d_{\omega,t'}(X_i,V_i)^2-\delta^{2a}|\,n_i\\ &\le 2a\varepsilon n_i + 2\sqrt{\varepsilon}\delta^{2a} n_i \leByRef{eq:consts:eps} \sqrt[4]{\varepsilon} n_i \end{align*} for every $t'\le T$. In other words, if $\mathcal{C}o_{t,y,z}$ holds for all $t\le T$ then $\{y,z\}\not\in K_k\cap W_i^{(2)}$. More precisely, we have the following. \begin{fact}\label{fac:aux:Co} For the smallest~$t$ with $\{y,z\}\in W_i^{(2)}(t)$ we have that $\mathcal{C}o_{t',y,z}$ holds for all $t'<t$ but \emph{not} for $t'=t$. \end{fact} Moreover, if $\mathcal{C}o_{t',y,z}$ holds for all $t'<t$ then \[ |\deg_{\omega,t}(y,z) - \deg_{\omega,0}(y,z)| \le \big(\pi(t,y)+\pi(t,z)\big)\varepsilon n_i\,. \] Recall that $\deg_{\omega,t}(y,z)=\delta^{a-\pi(t,y)}\delta^{a-\pi(t,z)}|C_{t,y}\cap C_{t,z}|$ and in particular (since $y,z\in X_i\setminus S_i$) $\deg_{\omega,0}(y,z)=\delta^{2a}n_i$. Hence \begin{align}\label{eq:aux:assumingCo} |C_{t,y}\cap C_{t,z}|\ge (\delta^{\pi(t,y)+\pi(t,z)}-\varepsilon \delta^{\pi(t,y)+\pi(t,z)-2a}(\pi(t,y)+\pi(t,z)))n_i\geByRef{eq:consts:eps} \varepsilon n_i\,. \end{align} We now claim that \begin{align} \label{eq:aux:reg:prob} \PP[\mathcal{C}o_{t,y,z} \, | \, \mathcal{C}o_{t',y,z}\text{ for all $t'<t$}] \ge 1-\frac{4\varepsilon}\gamma\;. \end{align} This is obvious if $x_t\notin N^-(y)\dcup N^-(z)$. So assume we are about to embed an $x_t\in N^-(y)\dcup N^-(z)$, which happens to be in~$X_j$. Then $\varphi(x_t)$ is chosen randomly among at least $(\gamma/2) n_j$ vertices of $V_j$ by Lemma~\ref{lem:RGA:choices}. Out of those at most $2\varepsilon n_j$ vertices $v\in V_j$ have \[ \big|\deg(v,C_{t,y}\cap C_{t,z}) - d(V_i,V_j) \cdot |C_{t,y}\cap C_{t,z}| \big| > \varepsilon n_i \] because $|C_{t,y}\cap C_{t,z}|\ge \varepsilon n_i$ by~\eqref{eq:aux:assumingCo} and $G[V_i,V_j]$ is $\varepsilon$-regular. 
For every other choice of $\varphi(x_t)=v\in V_j$ we have \begin{align*} \big|\deg_{\omega,t+1}(y,z) &- \deg_{\omega,t}(y,z)\big| \\ & = \big| \omega_{t+1}(y)\omega_{t+1}(z) \deg(v,C_{t,y}\cap C_{t,z}) - \omega_t(y)\omega_t(z) \cdot |C_{t,y}\cap C_{t,z}|\big|\\ & = \omega_{t+1}(y)\omega_{t+1}(z) \cdot \big|\deg(v,C_{t,y}\cap C_{t,z}) - \delta\cdot |C_{t,y}\cap C_{t,z}| \big|\\ & = \omega_{t+1}(y)\omega_{t+1}(z) \cdot \big|\deg(v,C_{t,y}\cap C_{t,z}) - d(V_i,V_j)\cdot |C_{t,y}\cap C_{t,z}| \big|\\ & \le \omega_{t+1}(y)\omega_{t+1}(z) \cdot \varepsilon n_i \le \varepsilon n_i\,. \end{align*} Thus at most $2\varepsilon n_j$ out of $(\gamma/2)n_j$ choices for $\varphi(x_t)$ will result in $\overline{\mathcal{C}o_{t,y,z}}$, which implies~\eqref{eq:aux:reg:prob}, as claimed. Finally, in order to show concentration, we will apply Lemma~\ref{lem:pseudo:Chernoff}. For this purpose observe that by the construction of~$K_k$ for each time step $t\in[T]$ the embedding of~$x_t$ changes the co-degree of at most one pair in~$K_k$, which we denote by $\{y_t,z_t\}$ if present. That is, $x_t \in N^-(y_t)\cup N^-(z_t)$. Now let $T_k\subseteq [T]$ be the set of time steps $t$ with $\{y_t,z_t\}$ in $K_k$, i.e., let $T_k$ be the set of time steps which actually change the co-degree of a pair in $K_k$. Since $|N^-(y)\cup N^-(z)|\le 2a$ for every pair $\{y,z\}\in K_k$ we have $|T_k|\le 2a|K_k|$. We define the following 0-1-variables $\mathcal{A}(t)$ for $t\in T_k$: Let $\mathcal{A}(t)=1$ if and only if $\mathcal{C}o_{t',y_t,z_t}$ holds for all $t'\in [t-1]\cap T_k$ but not for $t'=t$. Fact~\ref{fac:aux:Co} then implies $|K'_k|\le\mathcal{A}:=\sum_{t\in T_k}\mathcal{A}(t)$. Moreover, for any $t'<t$ with $\{y_{t'},z_{t'}\}=\{y_t,z_t\}$ and $\mathcal{A}(t')=1$ we have $\mathcal{A}(t)=0$ by definition. 
Hence, for any $t\in T_k$ and $J\subseteq[t]\cap T_k$ we have \[ \PP\left[\mathcal{A}(t)=1\,\Big|\,\parbox{170pt}{$\mathcal{A}(t')=1$ for all $t'\in J$\\ $\mathcal{A}(t')=0$ for all $t'\in ([t]\cap T_k)\setminus J$}\right]\le 4\varepsilon/\gamma \] by~\eqref{eq:aux:reg:prob}. Now either $|T_k|<16a\varepsilon|K_k|/\gamma$ and thus $\mathcal{A} < 16a\varepsilon|K_k|/\gamma$ by definition. Or $|T_k|\ge 16a\varepsilon|K_k|/\gamma$ and \[ \PP\left[\mathcal{A}\ge \frac{16a\varepsilon}\gamma |K_k|\right]\le\PP\left[\mathcal{A}\ge \frac{8\varepsilon}\gamma |T_k|\right]\le\exp\left(-\frac{4\varepsilon}{3\gamma}|T_k|\right)\le n_i^{-3}, \] by Lemma~\ref{lem:pseudo:Chernoff}, where the last inequality follows from \[ \frac{4\varepsilon}{3\gamma}|T_k| \ge \frac{64a\varepsilon^2}{3\gamma^2}|K_k|\ge \frac{8\varepsilon^2\log n}{3\gamma^2\xi\kappa r} \geByRef{eq:consts:xi} 3\log n_i\,. \] Since $|K'_k|\le\mathcal{A}$ and $16a\varepsilon/\gamma <\frac12\sqrt[4]{\varepsilon}$ by~\eqref{eq:consts:eps} we obtain~\eqref{eq:aux:final-cond} as desired. \end{proof} We have now established that the auxiliary graph $F_i(t)$ for the embedding of $X_i$ into $V_i$ is weighted regular for all times $t\le T$ with positive probability. The following lemma states that no critical set ever gets large in this case, i.e., if all auxiliary graphs remain weighted regular, then the RGA terminates successfully. \begin{lem} \label{lem:reg-small-queue} For every $t\le T$ and $i\in[r]$ we have: $\mathcal{R}_i(t)$ implies that $|Q_i(t)| \le\varepsilon' n_i$. In particular, $\mathcal{R}_i$ for all $i\in[r]$ implies that the RGA completes the \textsc{Embedding Stage} successfully. \end{lem} \begin{proof} The idea of the proof is the following. Vertices only become critical because their available candidate set is significantly smaller than the average available candidate set. 
In other words, the weighted density between the set of critical vertices and $V_i^{\text{\it Free}}(t)$ deviates significantly from the weighted density of the auxiliary graph. Since the auxiliary graph is weighted regular it follows that there cannot be many critical vertices. Indeed, assume for contradiction that there is $i\in[r]$ and $t\le T$ with $|Q_i(t)|>\varepsilon' n_i$ and such that $F_i(t)$ is weighted $\varepsilon'$-regular. Let $x\in Q_i(t)$ be an arbitrary critical vertex. Then $x$ is an ordinary vertex and the available (ordinary) candidate set $A_{t,x}^\textsc{\MakeTextLowercase{O}}=C_{t,x}\cap V_i^\textsc{\MakeTextLowercase{O}}\cap V_i^{\text{\it Free}}(t)$ of $x$ got small, that is, \begin{align*} |C_{t,x}\cap V_i^\textsc{\MakeTextLowercase{O}}\cap V_i^{\text{\it Free}}(t)| \lByRef{eq:cond:queue:in} \gamma n_i \leByRef{eq:consts:gamma} \frac{\mu}{20}\delta^a n_i\,. \end{align*} In the language of the auxiliary graph this means that \begin{align*} \deg_{\omega,t}(x, V_i^\textsc{\MakeTextLowercase{O}} \cap V_i^{\text{\it Free}}(t)) &= \omega_t(x) |C_{t,x}\cap V_i^\textsc{\MakeTextLowercase{O}} \cap V_i^{\text{\it Free}}(t)| \le \frac{\mu}{20}\delta^a n_i . \end{align*} Moreover $|V_i^\textsc{\MakeTextLowercase{O}}\cap V_i^{\text{\it Free}}(t)| \ge |V_i^{\text{\it Free}}(t)|-|V_i^\textsc{\MakeTextLowercase{S}}| \ge \tfrac{9}{10} \mu n_i\ge \varepsilon' n_i$. This implies \begin{align} \label{eq:weighted:upper-irreg} d_{\omega,t}( Q_i(t), V_i^\textsc{\MakeTextLowercase{O}}\cap V_i^{\text{\it Free}}(t) ) \le \frac{\mu/20\, \delta^a n_i}{9/10\, \mu n_i} = \frac1{18} \delta^a\, . \end{align} Since~\eqref{eq:aux:reg:weighted:density} and \eqref{eq:weighted:upper-irreg} imply that \[ d_{\omega,t}(X_i,V_i) - d_{\omega,t}( Q_i(t), V_i^\textsc{\MakeTextLowercase{O}}\cap V_i^{\text{\it Free}}(t) ) \ge \frac12 \delta^a - \frac1{18} \delta^a > \varepsilon', \] but $F_i(t)$ is weighted $\varepsilon'$-regular we conclude that $|Q_i(t)|< \varepsilon' n_i$. 
\end{proof} Theorem~\ref{thm:Blow-up:arr:almost-span} is now immediate from the following lemma. \begin{lem}\label{lem:rga:success} If we apply the RGA in the setting of Theorem~\ref{thm:Blow-up:arr:almost-span}, then with probability at least $2/3$ the event $\mathcal{R}_i$ holds for all $i\in[r]$ and the RGA finds an embedding of $H'$ into $G$ (obeying the $R$-partitions of $H$ and $G$ and the image restrictions). \end{lem} \begin{proof}[of Lemma~\ref{lem:rga:success}] Let $C,a,\Delta_R,\kappa$ and $\delta,c,\mu$ be given. Set the constants $\gamma,\lambda,\varepsilon',\varepsilon,\alpha$ as in \eqref{eq:consts:gamma}-\eqref{eq:consts:alpha}. Let $r$ be given and choose $\xi,n_0$ as in~\eqref{eq:consts:xi}-\eqref{eq:consts:n}. Further let $R$ be a graph of order $r$ with $\Delta(R)<\Delta_R$ and let $G,H,H'$ have the required properties. Run the RGA with these settings. The \textsc{Initialisation} succeeds with probability at least $5/6$ by Lemma~\ref{lem:RGA:init}. It follows from Lemma~\ref{lem:aux:reg} that $\mathcal{R}_i$ occurs for all $i\in [r]$ with probability at least $5/6$. This implies that no critical set $Q_i$ ever violates the bound~\eqref{eq:cond:queue:abort} by Lemma~\ref{lem:reg-small-queue}. Thus the \textsc{Embedding Stage} also succeeds with probability at least $5/6$. We conclude that the RGA succeeds with probability at least $2/3$. Thus an embedding $\varphi$ of $H'=H[X_1'\dcup\dots\dcup X_r']$ into $G$ which maps $X_i'$ into $V_i$ exists. Moreover this embedding guarantees $\varphi(x)\in I(x)$ for all $x\in S_i\cap X_i'$ by definition of the algorithm. \end{proof} At the end of this section we want to point out that the maximum degree bound for $H$ in Theorem~\ref{thm:Blow-up:arr:almost-span} can be increased even further if we swap the order of the quantifiers. More precisely, for a fixed graph $R$ we may choose $\varepsilon$ such that almost spanning subgraphs of linear maximum degree can be embedded into a corresponding $(\varepsilon,\delta)$-regular $R$-partition.
\begin{thm} \label{thm:Blow-up:arr:almost-span:lin} Given a graph $R$ of order $r$ and positive parameters $a,\kappa,\delta,\mu$ there are $\varepsilon,\xi>0$ such that the following holds. Assume that we are given \renewcommand{\theenumi}{\alph{enumi}} \begin{enumerate} \item a graph $G$ with a $\kappa$-balanced $(\varepsilon,\delta)$-regular $R$-partition $V(G)=V_1\dcup\dots\dcup V_r$ with $|V_i|=:n_i$ and \item an $a$-arrangeable graph $H$ with maximum degree $\Delta(H)\le \xi n$ (where $n=\sum n_i$), together with a corresponding $R$-partition $V(H)=X_1\dcup\dots\dcup X_r$ with $|X_i|\le(1-\mu)n_i$. \end{enumerate} Then there is an embedding $\varphi\colon V(H) \to V(G)$ such that $\varphi(X_i)\subseteq V_i$. \end{thm} \begin{proof}[(sketch)] Theorem~\ref{thm:Blow-up:arr:almost-span:lin} is deduced along the lines of the proof of Theorem~\ref{thm:Blow-up:arr:almost-span}. Once more the randomised greedy algorithm from Section~\ref{sec:rga:definition} is applied. It finds an embedding of $H$ into $G$ if all auxiliary graphs $F_i(t)$ remain weighted regular throughout the \textsc{Embedding Stage}. This in turn happens if each auxiliary graph $F_i(t)$ contains few pairs $\{x,y\}\in \binom{X_i}2$ whose weighted co-degree deviates from the expected value. In the setting of Theorem~\ref{thm:Blow-up:arr:almost-span} this is the case with positive probability as has been proven in Lemma~\ref{lem:aux:reg}: Inequality~\eqref{eq:aux:almost-final-cond} states that with probability at least $1-n_i^{-1}$ the number of pairs with incorrect co-degree does not exceed the bound of~\eqref{eq:aux:reg:co-deg:cond:2}. This particular argument is the only part of the proof of Theorem~\ref{thm:Blow-up:arr:almost-span} that requires the degree bound of $\Delta(H)\le \xi n/\log n$. 
We then used~\eqref{eq:aux:almost-final-cond} and a union bound over $i\in[r]$ to show that all auxiliary graphs $F_i(t)$ remain weighted regular throughout the \textsc{Embedding Stage} with probability at least $5/6$. Since $r$ can be large compared to all constants except $n_0$ we need the bound $1-n_i^{-1}$ in~\eqref{eq:aux:almost-final-cond}. In the setting of Theorem~\ref{thm:Blow-up:arr:almost-span:lin} however it suffices to replace this bound by a constant. More precisely, since we are allowed to choose $\varepsilon$ depending on the order of $R$ the proof of Lemma~\ref{lem:aux:reg} becomes even simpler: set $\varepsilon,\xi$ small enough to ensure $8a\varepsilon/\gamma+2a\kappa r\xi\le \sqrt[4]\varepsilon/(6r)$. Note that inequality~\eqref{eq:aux:reg:prob} then implies that the expected number of pairs $\{y,z\}\in \binom{X_i}2$ with incorrect co-degree is bounded by $2a\frac{4\varepsilon}\gamma \binom{n_i}2+a\Delta(H)n_i \le \frac{\sqrt[4]{\varepsilon}}{6r}\binom{n_i}2$. It follows from Markov's inequality and the union bound over all $i\in[r]$ that all auxiliary graphs $F_i(t)$ remain weighted regular throughout the \textsc{Embedding Stage} with probability at least $5/6$. Choosing $\varepsilon$ sufficiently small we can thus guarantee that the randomised greedy algorithm successfully embeds $H$ into $G$ with positive probability. \end{proof} Using the classical approach of Chv\'atal, R\"odl, Szemer\'edi, and Trotter~\cite{CRST} Theorem~\ref{thm:Blow-up:arr:almost-span:lin} easily implies that all $a$-arrangeable graphs have linear Ramsey numbers. This result was first proven (using the approach of~\cite{CRST}) by Chen and Schelp~\cite{CheSch93}. \section{The spanning case} \label{sec:spanning} In this section we prove our main result, Theorem~\ref{thm:Blow-up:arr:full}. 
We use the randomised greedy algorithm and its analysis from Section~\ref{sec:almost-spanning} to infer that the almost spanning embedding found in Theorem~\ref{thm:Blow-up:arr:almost-span} can in fact be extended to a spanning embedding. We briefly describe our strategy in Section~\ref{sec:Proof:outline} and establish a minimum degree bound for the auxiliary graphs in Section~\ref{sec:Proof:aux:min-deg} before we give the proof of Theorem~\ref{thm:Blow-up:arr:full} in Section~\ref{sec:Proof:Blow-up:arr:full}. We conclude this section with a sketch of the proof of Theorem~\ref{thm:Blow-up:arr:ext} in Section~\ref{sec:Proof:Blow-up:arr:ext}. \subsection{Outline of the proof} \label{sec:Proof:outline} Let $G, H$ satisfy the conditions of Theorem~\ref{thm:Blow-up:arr:full}. We first use Lemma~\ref{lem:arr:StableEnding} to order the vertices of~$H$ such that the arrangeability of the resulting order is bounded and its last $\mu n$ vertices form a stable set~$W$. We then run the RGA to embed the almost spanning subgraph $H'=H[X\setminus W]$ into~$G$. The RGA is successful and the resulting auxiliary graphs $F_i(T)$ are all weighted regular (that is, $\mathcal{R}_i$ holds) with probability at least $2/3$ by Lemma~\ref{lem:rga:success}. It remains to extend the embedding of $H'$ to an embedding of $H$. Since~$W$ is stable it suffices to find for each $i\in[r]$ a bijection between \begin{align*} L_i \mathrel{\mathop:}= X_i \cap W \end{align*} and $V_i^{\text{\it Free}}(T)$ which respects the candidate sets, i.e., which maps $x$ into $C_{T,x}$. Such a bijection is given by a perfect matching in $F_i^*\mathrel{\mathop:}= F_i(T)[L_i\dcup V_i^{\text{\it Free}}(T)]$, which is the subgraph of $F_i(T)$ induced by the vertices left after the \textsc{Embedding Stage} of the RGA. By Lemma~\ref{lem:reg:weighted:matching} balanced weighted regular pairs with an appropriate minimum degree bound have perfect matchings. 
Now, $(L_i,V_i^{\text{\it Free}})$ is a subpair of a weighted regular pair and thus weighted regular itself by Proposition~\ref{prop:regular:sub}. Hence our main goal is to establish a minimum degree bound for $(L_i,V_i^{\text{\it Free}})$. More precisely, we shall explain in Section~\ref{sec:Proof:aux:min-deg} that it easily follows from the definition of the RGA that vertices in $L_i$ have the appropriate minimum degree if $\mathcal{R}_i$ holds. \begin{prop} \label{prop:aux:left-min-deg} Run the RGA in the setting of Theorem~\ref{thm:Blow-up:arr:full} and assume that $\mathcal{R}_j$ holds for all $j\in [r]$. Then every $x\in L_i$ has \[ \deg_{F_i(T)}(x,V_i^{\text{\it Free}}(T)) \ge 3\sqrt{\varepsilon'}n_i.\] \end{prop} For vertices in~$V_i^{\text{\it Free}}$, on the other hand, this is not necessarily true, but it holds with sufficiently high probability. This is also proved in Section~\ref{sec:Proof:aux:min-deg}. \begin{lem}\label{lem:aux:right-min-deg} Run the RGA in the setting of Theorem~\ref{thm:Blow-up:arr:full} and assume that $\mathcal{R}_j$ holds for all $j\in[r]$. Then we have \[ \PP\left[\forall i\in [r],~\forall v\in V_i^{\text{\it Free}}(T): \deg_{F_i(T)}(v,L_i) \ge 3\sqrt{\varepsilon'}n_i \right] \ge \frac23 \, . \] \end{lem} \subsection{Minimum degree bounds for the auxiliary graphs} \label{sec:Proof:aux:min-deg} In this section we prove Proposition~\ref{prop:aux:left-min-deg} and Lemma~\ref{lem:aux:right-min-deg}. For the former we need to show that vertices $x\in L_i$ have an appropriate minimum degree in $F_i^*$, which is easy. \begin{proof}[of Proposition~\ref{prop:aux:left-min-deg}] Since $\mathcal{R}_j$ holds for all $j\in [r]$ the RGA completed the \textsc{Embedding Stage} successfully by Lemma~\ref{lem:reg-small-queue}. Note that no $x \in L_i$ has been embedded yet. 
Thus \begin{align*} \deg_{F_i(T)}(x,V_i^{\text{\it Free}}(T)) = |A_{T,x}| \ge |A_{T,x}^\textsc{\MakeTextLowercase{S}}| &\ge \tfrac{7}{10}\gamma n_i \geByRef{eq:consts:eps:p} 3\sqrt{\varepsilon'} n_i\,, \end{align*} for every $x\in L_i$ by Lemma~\ref{lem:RGA:choices}. \end{proof} Lemma~\ref{lem:aux:right-min-deg} claims that vertices in $V_i^{\text{\it Free}}(T)$ with positive probability also have a sufficiently large degree in $F_i^*$. We sketch the idea of the proof. Let $x\in L_i$ and $v\in V_i^{\text{\it Free}}(T)$ for some $i\in[r]$. Recall that there is an edge $xv \in E(F_i(T))$ if and only if $\varphi(N^-(x)) \subseteq N_G(v)$. So we aim at lower-bounding the probability that $\varphi(N^-(x))\subseteq N_G(v)$ for many vertices $x\in L_i$. Now let $y \in N^-(x)$ be a predecessor of~$x$. Recall that~$y$ is randomly embedded into~$A(y)$, as defined in~\eqref{eq:Ax}. Hence the probability that $y$ is embedded into $N_G(v)$ is $|A(y)\cap N_G(v)|/|A(y)|$. Our goal will now be to show that these fractions are bounded from below by a constant for all predecessors of many vertices~$x\in L_i$, which will then imply Lemma~\ref{lem:aux:right-min-deg}. To motivate this constant lower bound observe that a random subset $A$ of $X_j$ satisfies $|A\cap N_G(v)|/|A|=|N_G(v)\cap V_j|/|V_j|$ in expectation, and the right hand fraction is bounded from below by $\delta/2$ by~\eqref{eq:min-deg}. For this reason we call the vertex $v$ \emph{likely for $y\in X_j$} and say that $\mathcal{A}_v(y)$ holds, if \begin{align*} \frac{|A(y)\cap N_G(v)|}{|A(y)|} \ge \frac{2}{3}\frac{|V_j \cap N_G(v)|}{|V_j|}\,. \end{align*} Hence it will suffice to prove that for every $v\in V_i$ there are many $x\in L_i$ such that $v$ is likely for all $y\in N^-(x)$. We will focus on the last $\lambda n_i$ vertices $x$ in $L_i\setminus N(S)$ (i.e., on vertices $x\in L_i^*$) as we have a good control over the embedding of their predecessors (who are in $X^*\setminus S$). 
Note that there are indeed at least $\lambda n_i$ vertices in $L_i \setminus N(S)$ as $\mu n_i - \alpha n_i \ge \lambda n_i$. For $i\in[r]$ and $v\in V_i$ we define \begin{align*} L_i^*(v)&\mathrel{\mathop:}=\{x\in L_i^*: \mathcal{A}_v(y)\text{ holds for all $y\in N^-(x)$}\}\,. \end{align*} Our goal is to show that a positive proportion of the vertices in~$L_i^*$ will be in $L_i^*(v)$. The following lemma makes this more precise. \begin{lem} \label{lem:aux:rmd:Avy} We run the RGA in the setting of Theorem~\ref{thm:Blow-up:arr:full} and assume that $\mathcal{R}_j$ holds for all $j\in[r]$. Then \[ \PP\left[\forall i\in[r], \forall v\in V_i: |L_i^*(v)|\ge 2^{-a^2-1}|L_i^*|\right] \ge \frac56\,. \] \end{lem} Lemma~\ref{lem:aux:rmd:Avy} together with the subsequent lemma will imply Lemma~\ref{lem:aux:right-min-deg}. \begin{lem}\label{lem:aux:rmd:pre} Run the RGA in the setting of Theorem~\ref{thm:Blow-up:arr:full} and assume that $\mathcal{R}_j$ holds for all $j\in[r]$ and that $|L_i^*(v)|\ge 2^{-a^2-1}|L_i^*|$ for all $i\in[r]$ and $v\in V_i$. Then we have \[ \PP\left[\forall i\in [r],~\forall v\in V_i^{\text{\it Free}}(T): \deg_{F_i(T)}(v,L_i) \ge 3\sqrt{\varepsilon'}n_i \right] \ge \frac56 \, . \] \end{lem} \begin{proof}[of Lemma~\ref{lem:aux:rmd:pre}] Let $i\in[r]$ and $v\in V_i$ be arbitrary and assume that the event of Lemma~\ref{lem:aux:rmd:Avy} occurs, that is, assume that we do have $|L_i^*(v)|\ge 2^{-a^2-1}|L_i^*|$. We claim that $v$ almost surely has high degree in $F_i(T)$ in this case. \begin{claim} If $|L_i^*(v)|\ge 2^{-a^2-1}|L_i^*|$, then \[ \PP\left[\deg_{F_i(T)}(v,L_i)\ge 3\sqrt{\varepsilon'}n_i\right] \ge 1-\frac{1}{n_i^2}\;. \] \end{claim} This claim, together with a union bound over all $i\in[r]$ and $v\in V_i$, implies that \[ \PP\left[\forall i\in[r], \forall v\in V_i: \deg_{F_i(T)}(v,L_i)\ge 3\sqrt{\varepsilon'}n_i\right] \ge \frac 56 \] if $|L_i^*(v)|\ge 2^{-a^2-1}|L_i^*|$ for all $i\in[r]$ and all $v\in V_i$. It remains to establish the claim. 
\begin{claimproof}[of Claim] Let~$x\in L_i^*(v)$. Recall that $xv\in E(F_i)$ if and only if $\varphi(y)\in N_G(v)$ for all $y\in N^-(x)$. If the events $[\varphi(y)\in N_G(v)]$ were independent for all $y \in N^-(L_i^*(v))$ we could apply a Chernoff bound to infer that almost surely a linear number of the vertices $x\in L_i^*(v)$ satisfy $\varphi(y)\in N_G(v)$ for all $y\in N^-(x)$. However, the events might be far from independent: just imagine two vertices $x$, $x'$ sharing a predecessor $y$. We address this issue by partitioning the vertices into classes that do not share predecessors. We then apply Lemma~\ref{lem:pseudo:Chernoff:tuple} to those classes to finish the proof of the claim. Here are the details. We partition $L_i^*(v)$ into predecessor disjoint sets. To do so we construct an auxiliary graph on vertex set $L_i^*(v)$ that has an edge $xx'$ for exactly those vertices $x\neq x'$ that share at least one predecessor in $H$. As~$H$ is $a$-arrangeable, the maximum degree of this auxiliary graph is bounded by $a\Delta(H)-1$. Hence we can apply Theorem~\ref{thm:HajnalSzemeredi} to partition the vertices of this auxiliary graph into stable sets $K_1\dcup\dots\dcup K_b$ with \begin{align} \label{eq:aux:rmd:Kl:1} |K_\ell| \ge \frac{|L_i^*(v)|}{a\Delta(H)} \ge \frac{2^{-a^2-1}\lambda \sqrt{n}}{a\,\kappa\, r}\log n\geByRef{eq:consts:n} 48\left(\frac{3}{\delta}\right)^a\log n_i \end{align} for $\ell \in [b]$. Those sets are predecessor disjoint in $H$. We now want to apply Lemma~\ref{lem:pseudo:Chernoff:tuple}. Let $\mathcal{I}=\{ N^-(x):x\in K_\ell\}$. The sets in $\mathcal{I}$ are pairwise disjoint and have at most $a$ elements each. Name the elements of $\bigcup_{I\in \mathcal{I}}I=\{y_1,\dots,y_s\}$ (with $s=|\bigcup_{I\in\mathcal{I}}I|$) in ascending order with respect to the arrangeable ordering. Furthermore, let $\mathcal{A}_k$ be a random variable which is 1 if and only if $y_k$ gets embedded into $N_G(v)$. 
By the definition of $L_i^*(v)$, the event $\mathcal{A}_v(y_k)$ holds for each $k\in[s]$. It follows from the definition of $\mathcal{A}_v(y_k)$ that \begin{equation*} \PP[\mathcal{A}_k=1] = \PP[\varphi(y_k) \in N_G(v)] = \frac{|A(y_k)\cap N_G(v)|}{|A(y_k)|} \ge \frac23\frac{|N_G(v)\cap V_j|}{|V_j|} \geByRef{eq:min-deg} \frac\delta3\;. \end{equation*} This lower bound on the probability of $\mathcal{A}_k=1$ remains true even if we condition on other events $\mathcal{A}_j=1$ (or their complements $\mathcal{A}_j=0$), because in this calculation the lower bound relies solely on $|A(y_k)\cap N_G(v)|/|A(y_k)|$, which is at least $\delta/3$ for all $k\in[s]$ regardless of the embedding of other $y_j$. Hence, \[ \PP\left[\mathcal{A}_k=1 \Big| \parbox{150pt}{$\mathcal{A}_{j}=1$ for all $j\in J$\\ $\mathcal{A}_j=0$ for all $j\in [k-1]\setminus J$}\right]\ge \frac\delta3 \] for every $k$ and every $J\subseteq [k-1]$ (this is stronger than the condition required by Lemma~\ref{lem:pseudo:Chernoff:tuple}). By Lemma~\ref{lem:pseudo:Chernoff:tuple}, we have \begin{align*} & \PP\left[ \big|\big\{x\in K_\ell:\varphi(N^-(x))\subseteq N_G(v)\big\}\big| \ge \frac12\Big(\frac{\delta}{3}\Big)^a|K_\ell|\right]\\ & \hspace{3cm} = \PP\left[ \big|\big\{I\in \mathcal{I}: \mathcal{A}_k=1\text{ for all $y_k\in I$}\big\}\big| \ge \frac12\Big(\frac{\delta}{3}\Big)^a|K_\ell|\right] \\ & \hspace{3cm} \ge 1-2\exp\left(-\frac1{12}\Big(\frac{\delta}{3}\Big)^a|K_\ell|\right) \\ & \hspace{3cm} \geByRef{eq:aux:rmd:Kl:1} 1-2\exp(-4\log n_i) = 1-2\cdot n_i^{-4}\,. 
\end{align*} Applying a union bound over all $\ell\in[b]$ we conclude that \begin{align*} \deg_{F_i(T)}(v,L_i) &\ge \big|\big\{x \in L_i^*(v): \varphi(N^-(x))\subseteq N_G(v)\big\}\big|\\ &\ge \sum_{\ell\in[b]} \frac12\Big(\frac{\delta}{3}\Big)^a|K_\ell| = \frac12\Big(\frac{\delta}{3}\Big)^{a} |L_i^*(v)| \ge \frac{\delta^a}{2\cdot 2^{a^2+1}3^a}\lambda n_i \geByRef{eq:consts:eps:p} 3\sqrt{\varepsilon'} n_i \end{align*} with probability at least $1-2n_i\exp(-4\log n_i)\ge 1-n_i^{-2}$. \end{claimproof} This concludes the proof of the lemma. \end{proof} The remainder of this section is dedicated to the proof of Lemma~\ref{lem:aux:rmd:Avy}. This proof will use similar ideas as the proof of Lemma~\ref{lem:aux:rmd:pre}. This time, however, we are not only interested in the predecessors of $x\in L_i^*$ but in the predecessors of the predecessors. We call those \emph{predecessors of second order} and say two vertices $x$, $x'$ are \emph{predecessor disjoint of second order} if $N^-(x)\cap N^-(x')=\emptyset$ and $N^-(N^-(x))\cap N^-(N^-(x'))=\emptyset$. To prove Lemma~\ref{lem:aux:rmd:Avy}, we have to show that for any vertex $v\in V_i$ many vertices $x$ in $L_i^*$ are such that all their predecessors $y\in N^-(x)$ are likely for $v$. Note that $x\in L_i^*$ implies that $y\in N^-(x)$ gets embedded into the special candidate set $C_{t(y),y}^\textsc{\MakeTextLowercase{S}}$. It depends only on the embedding of the vertices in $N^-(y)$ whether a given vertex $v\in V_i$ is likely for $y$ or not. Therefore, we formulate an event $\mathcal{B}_{v,x}(z)$, which, if satisfied for all $z\in N^-(y)$, will imply $\mathcal{A}_v(y)$ as we will show in the next proposition. Recall that $C_{1,y}^\textsc{\MakeTextLowercase{S}}=V_j^\textsc{\MakeTextLowercase{S}}$ for $y\in N^-(L_i^*)\subseteq X^*$ and $C_{t(z)+1,y}^\textsc{\MakeTextLowercase{S}}=C_{t(z),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(\varphi(z))$. 
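The update rule for the special candidate sets recalled above is a plain intersection with the neighbourhood of the newly embedded vertex; a toy sketch (all vertex names hypothetical):

```python
# Toy sketch of the special-candidate-set update rule
#   C^S_{t(z)+1, y} = C^S_{t(z), y}  intersect  N_G(phi(z)).
# The adjacency sets and vertex names below are hypothetical.
neighbours = {                 # N_G(phi(z)) for the embedded images phi(z)
    "w1": {"v1", "v2", "v3"},
    "w2": {"v2", "v3", "v4"},
}

candidates = {"v1", "v2", "v3", "v4"}   # C^S_{1,y} = V_j^S since y is not in S
for image in ("w1", "w2"):              # embed the predecessors z of y in order
    candidates &= neighbours[image]     # intersect with N_G(phi(z))

print(sorted(candidates))               # -> ['v2', 'v3']
```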
For $x\in L_i^*$ and $z\in N^-(N^-(x))$ let $\mathcal{B}_{v,x}(z)$ be the event that \[ \left|\frac{|C_{t(z),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|}{|C_{t(z),y}^\textsc{\MakeTextLowercase{S}}|} - \frac{|C_{t(z)+1,y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|}{|C_{t(z)+1,y}^\textsc{\MakeTextLowercase{S}}|} \right|\le \frac{2\varepsilon}{\delta-\varepsilon} \] for all $y\in N^-(x)$. \begin{prop} \label{prop:hdp:predecessors} Let $i\in[r]$, $v\in V_i$, $x \in L_i^*$, and $z\in N^-(N^-(x))$. Then \[ \PP\left[\mathcal{B}_{v,x}(z) \big|\big. \mathcal{B}_{v,x}(z')\text{ for all $z'\in N^-(N^-(x))$ with $t(z')<t(z)$}\right] \ge 1/2. \] This remains true if we additionally condition on other events $\mathcal{B}_{v,\widetilde x}(\widetilde z)$ (or their complements) with $\widetilde z\in N^-(N^-(\widetilde x))$ for $\widetilde x\in L_i^*$, as long as $x$ and $\widetilde x$ are predecessor disjoint of second order.\\ Moreover, if $\mathcal{B}_{v,x}(z)$ occurs for all $z\in N^-(N^-(x))$, then $\mathcal{A}_v(y)$ occurs for all $y\in N^-(x)$. \end{prop} \begin{proof}[of Proposition~\ref{prop:hdp:predecessors}] Let $x\in L_i^*$ and let $z\in N^-(N^-(x))$ lie in $X_\ell$. Further assume that $\mathcal{B}_{v,x}(z')$ holds for all $z'\in N^-(N^-(x))$ with $t(z')<t(z)$. For $y\in N^-(x)$ let $j(y)$ be such that $y\in X_{j(y)}$. Then $\mathcal{B}_{v,x}(z')$ for all $z'\in N^-(y)$ with $t(z')<t(z)$ implies \begin{align*} \frac{|C_{t(z),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|}{|C_{t(z),y}^\textsc{\MakeTextLowercase{S}}|} &\ge \frac{|C_{1,y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|}{|C_{1,y}^\textsc{\MakeTextLowercase{S}}|}-\frac{2\varepsilon\cdot a}{\delta-\varepsilon} = \frac{|V_{j(y)}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|}{|V_{j(y)}^\textsc{\MakeTextLowercase{S}}|}-\frac{2\varepsilon\cdot a}{\delta-\varepsilon} \\ &\ge \frac{|V_{j(y)}\cap N_G(v)|}{|V_{j(y)}|}-\varepsilon-\frac{2\varepsilon\cdot a}{\delta-\varepsilon} \geByRef{eq:min-deg} \frac\delta2 - \varepsilon - \frac{2\varepsilon\cdot a}{\delta-\varepsilon} \end{align*} where the identity $C_{1,y}^\textsc{\MakeTextLowercase{S}}=V_{j(y)}^\textsc{\MakeTextLowercase{S}}$ is due to $y\notin S$ and the second inequality uses condition~\eqref{eq:cond:size:special-neighbours}. 
Hence $|C_{t(z),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|\ge \varepsilon n_{j(y)}$ by~\eqref{eq:cond:size-special} and our choice of constants. Now fix a $y\in N^-(x)$. As $(V_{j(y)},V_\ell)$ is an $\varepsilon$-regular pair all but at most $4\varepsilon n_\ell$ vertices $w\in A_{t(z),z}\subseteq V_{\ell}$ simultaneously satisfy \begin{align*} \left|\frac{\big|N_G\big(w,C_{t(z),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)\big)\big|}{|C_{t(z),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|} - d(V_{j(y)},V_\ell)\right| &\le \varepsilon\,,\text{ and}\\ \left|\frac{|N_G(w, C_{t(z),y}^\textsc{\MakeTextLowercase{S}})|}{|C_{t(z),y}^\textsc{\MakeTextLowercase{S}}|} - d(V_{j(y)},V_\ell)\right|&\le\varepsilon\;. \end{align*} Hence, all but at most $4\varepsilon a n_\ell$ vertices in $V_\ell$ satisfy the above inequalities for all $y\in N^-(x)$. If $\varphi(z)=w$ for a vertex $w$ that satisfies the above inequalities for all $y\in N^-(x)$ we have \begin{align*} \left|\frac{|C_{t(z)+1,y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|}{|C_{t(z),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|} - \frac{|C_{t(z)+1,y}^\textsc{\MakeTextLowercase{S}}|}{|C_{t(z),y}^\textsc{\MakeTextLowercase{S}}|}\right| \le 2\varepsilon\, . \end{align*} This implies $\mathcal{B}_{v,x}(z)$ as \begin{align*} \left|\frac{|C_{t(z)+1,y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|}{|C_{t(z)+1,y}^\textsc{\MakeTextLowercase{S}}|} - \frac{|C_{t(z),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|}{|C_{t(z),y}^\textsc{\MakeTextLowercase{S}}|}\right| &\le 2\varepsilon \frac{|C_{t(z),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|}{|C_{t(z)+1,y}^\textsc{\MakeTextLowercase{S}}|}\\ &\le 2\varepsilon \frac{|C_{t(z),y}^\textsc{\MakeTextLowercase{S}}|}{|C_{t(z)+1,y}^\textsc{\MakeTextLowercase{S}}|} \leByRef{eq:cond:size-ordinary} \frac{2\varepsilon}{\delta-\varepsilon}\, . 
\end{align*} Since $\varphi(z)$ is chosen randomly from $A(z)\subseteq A_{t(z),z}$ with $|A(z)|\ge (\gamma/2) n_\ell$ by Lemma~\ref{lem:RGA:choices}, we obtain \begin{align*} \PP\left[ \mathcal{B}_{v,x}(z) \big|\big. \mathcal{B}_{v,x}(z')\text{ for all $z'\in N^-(N^-(x))$, $t(z')<t(z)$}\right] \ge 1- \frac{4\varepsilon a n_\ell}{(\gamma/2) n_\ell} \ge \frac12 \,. \end{align*} Note that this probability follows alone from the $\varepsilon$-regularity of the pairs $(V_{j(y)},V_\ell)$ and the fact that $A(z)$ and $C_{t(z),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)$ are large. If $x$ and $\widetilde x$ are predecessor disjoint of second order the outcome of the event $\mathcal{B}_{v,\widetilde x}(\widetilde z)$ for $\widetilde z\in N^-(N^-(\widetilde x))$ does not influence those parameters. We can therefore condition on other events $\mathcal{B}_{v,\widetilde x}(\widetilde z)$ as long as $x$ and $\widetilde x$ are predecessor disjoint of second order. It remains to show the second part of the proposition, that $v$ is likely for all $y\in N^-(x)$ if $\mathcal{B}_{v,x}(z)$ holds for all $z\in N^-(N^-(x))$. Again let $x\in L_i^*$ and let $y\in N^-(x)$ lie in $X_j$. Recall that condition~\eqref{eq:cond:size:special-neighbours} in the definition of the RGA guarantees \[ \left| \frac{|V_j^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|}{|V_j^\textsc{\MakeTextLowercase{S}}|} - \frac{|V_j\cap N_G(v)|}{|V_j|}\right| \le \varepsilon\,. \] Moreover, $\mathcal{B}_{v,x}(z)$ for all $z\in N^-(y)$, $C_{1,y}^\textsc{\MakeTextLowercase{S}}=V_j^\textsc{\MakeTextLowercase{S}}$ (as $y\notin S$) and the fact that $|N^-(y)|\le a$ imply \begin{align*} \label{eq:hdp:intersection} \left|\frac{|C_{t(y),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|}{|C_{t(y),y}^\textsc{\MakeTextLowercase{S}}|} - \frac{|V_j^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|}{|V_j^\textsc{\MakeTextLowercase{S}}|}\right| \le \frac{2\varepsilon\cdot a}{\delta-\varepsilon}\, . 
\end{align*} As $2\varepsilon a/(\delta-\varepsilon)+\varepsilon \le \delta^2/36 \le (\delta/18)|V_j\cap N_G(v)|/|V_j|$ we conclude that \begin{align} \label{eq:rmd:CV} \frac{|C_{t(y),y}^\textsc{\MakeTextLowercase{S}} \cap N_G(v)|}{|C_{t(y),y}^\textsc{\MakeTextLowercase{S}}|} \ge \frac{17}{18}\frac{|V_j\cap N_G(v)|}{|V_j|} \end{align} for all $y\in N^-(x)$ if $\mathcal{B}_{v,x}(z)$ holds for all $z\in N^-(N^-(x))$. Equation~\eqref{eq:rmd:CV} in turn implies $\mathcal{A}_v(y)$ as only few vertices get embedded into $V^\textsc{\MakeTextLowercase{S}}$ thus making $C_{t,y}^\textsc{\MakeTextLowercase{S}} \approx A_{t,y}^\textsc{\MakeTextLowercase{S}}$. More precisely, by Lemma~\ref{lem:RGA:choices} we have \begin{align*} \frac{|A(y)\cap N_G(v)|}{|A(y)|} &\ge \frac{|A_{t(y),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)| - |A_{t(y),y}^\textsc{\MakeTextLowercase{S}}\setminus A(y)|}{|A_{t(y),y}^\textsc{\MakeTextLowercase{S}}|} \\ &\ge \frac{|C_{t(y),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)| - |X_j^*| - |Q_j(t(y))| - |A_{t(y),y}^\textsc{\MakeTextLowercase{S}}\setminus A(y)|}{|C_{t(y),y}^\textsc{\MakeTextLowercase{S}}|}\\ &\ge \frac{|C_{t(y),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|}{|C_{t(y),y}^\textsc{\MakeTextLowercase{S}}|}-\frac{\delta}{18}\\ &\geByRef{eq:rmd:CV} \left( \frac{17}{18} -\frac{2}{18}\right) \frac{|V_j\cap N_G(v)|}{|V_j|}\;. \end{align*} \end{proof} We have seen that $x\in L_i^*$ also lies in $L_i^*(v)$ if $\mathcal{B}_{v,x}(z)$ holds for all $z\in N^-(N^-(x))$. To prove Lemma~\ref{lem:aux:rmd:Avy} it therefore suffices to show that an arbitrary vertex $v$ has a linear number of vertices $x\in L_i^*$ with $\mathcal{B}_{v,x}(z)$ for all $z\in N^-(N^-(x))$. \begin{proof}[of Lemma~\ref{lem:aux:rmd:Avy}] Let $i\in[r]$ and $v\in V_i$ be arbitrary. We partition $L_i^*$ into classes of vertices that are predecessor disjoint of second order. 
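Such a partition can be produced by colouring a conflict graph; the greedy sketch below only yields at most $\Delta+1$ classes, whereas the proof uses Theorem~\ref{thm:HajnalSzemeredi}, which in addition balances the class sizes (all names hypothetical):

```python
# Greedy sketch: partition vertices into classes that are pairwise
# non-adjacent in a conflict graph (here: an edge means sharing a first-
# or second-order predecessor).  Greedy colouring uses at most Delta+1
# colours; Hajnal-Szemeredi additionally gives classes of almost equal
# size, which the proof requires.
def greedy_classes(vertices, conflicts):
    colour = {}
    for v in vertices:
        used = {colour[u] for u in conflicts.get(v, ()) if u in colour}
        colour[v] = next(c for c in range(len(vertices)) if c not in used)
    classes = {}
    for v, c in colour.items():
        classes.setdefault(c, []).append(v)
    return list(classes.values())

# hypothetical conflict graph on five vertices
conflicts = {"x1": ["x2"], "x2": ["x1", "x3"], "x3": ["x2"], "x4": [], "x5": []}
classes = greedy_classes(["x1", "x2", "x3", "x4", "x5"], conflicts)

# every class is stable in the conflict graph
assert all(u not in conflicts.get(v, ()) for cl in classes for u in cl for v in cl)
```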
Observe that for every $x\in L_i^*$ we have \begin{align*} \left|\left\{ x'\in L_i^*: \parbox{160pt}{$N^-(x)\cap N^-(x')\neq \emptyset$ or \\[.5ex]$N^-(N^-(x))\cap N^-(N^-(x'))\neq \emptyset$} \right\}\right| &\le (a\Delta(H))^2 \le \frac{a^2n}{\log^2 n}\\ & \lByRef{eq:consts:n} \frac{\lambda n_i}{36\cdot 2^{a^2}\log n_i} \end{align*} as $H$ is $a$-arrangeable. Recall that $|L_i^*|=\lambda n_i$. Therefore, Theorem~\ref{thm:HajnalSzemeredi} gives a partition $L_i^*=K_1\dcup\dots\dcup K_b$ with \begin{align} \label{eq:aux:rmd:Kl:2} |K_\ell|\ge 36\cdot 2^{a^2}\log n_i \end{align} for all $\ell\in[b]$ such that the vertices in $K_\ell$ are predecessor disjoint of second order. Next we want to apply Lemma~\ref{lem:pseudo:Chernoff:tuple}. Let $\ell\in[b]$ be fixed. We define $\mathcal{I}=\{ N^-(N^-(x)): x\in K_\ell\}$. These sets are pairwise disjoint and have at most $a^2$ elements each. Name the elements of $\bigcup_{I\in \mathcal{I}} I=\{z_1,\dots,z_{|\cup_{I\in \mathcal{I}} I|}\}$ in ascending order with respect to the arrangeable ordering. Then for every $I\in \mathcal{I}$ and every $z_k \in I$ we have \[ \PP\left[ \mathcal{B}_{v,x}(z_k) \Big|\, \parbox{190pt}{ $\mathcal{B}_{v,x}(z_j)$ for all $z_j\in J$,\\[.5ex] $\overline{\mathcal{B}_{v,x}}(z_j)$ for all $z_j\in \{z_1,\dots,z_{k-1}\}\setminus J$}\right] \ge \frac 12 \] for every $J \subseteq \{z_1,\dots,z_{k-1}\}$ with $\{z_1,\dots,z_{k-1}\}\cap I \subseteq J$ by Proposition~\ref{prop:hdp:predecessors}. We set $K_\ell(v)\mathrel{\mathop:}= \left\{ x \in K_\ell: \mathcal{B}_{v,x}(z) \text{ for all }z \in N^-(N^-(x))\right\}$ and apply Lemma~\ref{lem:pseudo:Chernoff:tuple} to derive \begin{align*} \PP\left[ \left|K_\ell(v)\right| \ge 2^{-a^2-1}|K_\ell|\right] &\ge 1 - 2\exp\left(-\frac{1}{12}2^{-a^2}|K_\ell|\right)\\ &\geByRef{eq:aux:rmd:Kl:2} 1 - 2\exp\left(-3\log n_i \right)=1-2\cdot n_i^{-3}\,. 
\end{align*} Note that we have $\bigcup_{\ell\in[b]} K_\ell(v) \subseteq L_i^*(v)$ as the following is true for every $x\in K_\ell$ by Proposition~\ref{prop:hdp:predecessors}: $\mathcal{B}_{v,x}(z)$ for all $z\in N^-(N^-(x))$ implies $\mathcal{A}_v(y)$ for all $y\in N^-(x)$. Taking a union bound over all $\ell \in [b]$ we thus obtain that \[ \PP\left[ \left|L_i^*(v)\right| \ge 2^{-a^2-1}|L_i^*|\right] \ge 1-b\cdot 2n_i^{-3} \ge 1 - \frac{2}{n_i^2}\,. \] One further union bound over all $i\in[r]$ and $v\in V_i$ finishes the proof. \end{proof} \subsection{Proof of Theorem~\ref{thm:Blow-up:arr:full}} \label{sec:Proof:Blow-up:arr:full} Putting everything together, we conclude that the RGA gives a spanning embedding of $H$ into $G$ with probability at least~$1/3$. We now use Lemma~\ref{lem:reg:weighted:matching}, Proposition~\ref{prop:aux:left-min-deg}, and Lemma~\ref{lem:aux:right-min-deg} to prove our main result. \begin{proof}[of Theorem~\ref{thm:Blow-up:arr:full}] Let integers $C,a,\Delta_R,\kappa$ and $\delta,c>0$ be given. Set $a'= 5a^2\kappa\Delta_R$ and $\mu=1/(10a'(\kappa\Delta_R)^2)$. We invoke Theorem~\ref{thm:Blow-up:arr:almost-span} with parameters $C,a',\Delta_R,\kappa$ and $\delta,c,\mu>0$ to obtain $\varepsilon,\alpha>0$. Let $r$ be given and choose $n_0$ as in Theorem~\ref{thm:Blow-up:arr:almost-span}. Now let $R$ be a graph on $r$ vertices with $\Delta(R)<\Delta_R$. And let $G$ and $H$ satisfy the conditions of Theorem~\ref{thm:Blow-up:arr:full}, i.e., let $G$ have the $(\varepsilon,\delta)$-super-regular $R$-partition $V(G)=V_1\dcup\dots\dcup V_r$ and let $H$ have a $\kappa$-balanced $R$-partition $V(H)=X_1\dcup\dots\dcup X_r$. Further let $\{x_1,\dots,x_n\}$ be an $a$-arrangeable ordering of~$H$. We apply Lemma~\ref{lem:arr:StableEnding} to find an $a'$-arrangeable ordering $\{x_1',\dots,x_n'\}$ of $H$ with a stable ending of order $\mu n$. 
Let $H' = H[\{x_1',\dots,x_{(1-\mu)n}'\}]$ be the subgraph induced by the first $(1-\mu)n$ vertices of the new ordering. We take this ordering and run the RGA as described in Section~\ref{sec:rga:definition} to embed~$H'$ into~$G$. By Lemma~\ref{lem:rga:success} we have \begin{align}\label{eq:aux:reg:1} \PP\big[\text{RGA successful and } \mathcal{R}_i\text{ for all $i\in[r]$}\big] \ge \frac23 \,, \end{align} where~$\mathcal{R}_i$ is the event that the auxiliary graph $F_i(t)$ is weighted $\varepsilon'$-regular for all $t\le T$. Note that every image restricted vertex $x\in S_i\cap V(H')$ has been embedded into $I(x)$ by the definition of the RGA. Now assume that $\mathcal{R}_i$ holds for all $i\in[r]$. It remains to embed the stable set $V(H)\setminus V(H')$, that is, the sets $L_i=X_i\setminus V(H')$ for $i\in[r]$. To this end we shall find in each $F_i^*:=F_i(T)[L_i\dcup V_i^{\text{\it Free}}(T)]$ a perfect matching, which defines a bijection from $L_i$ to $V_i^{\text{\it Free}}(T)$ that maps every $x\in L_i$ to a vertex $v\in V_i^{\text{\it Free}}(T)\cap C_{T,x}$. Note that again every $x\in S_i$ is embedded into $I(x)$ by construction. Since $F_i(T)$ is weighted $\varepsilon'$-regular the subgraph $F_i^*$ is weighted $(\varepsilon'/\mu)$-regular by Proposition~\ref{prop:regular:sub}. Moreover \begin{align} \label{eq:aux:reg:2} \PP\left[\delta(F_i^*) \ge 3\sqrt{\varepsilon'}n_i\text{ for all $i\in[r]$}\right] \ge \frac23 \end{align} by Proposition~\ref{prop:aux:left-min-deg} and Lemma~\ref{lem:aux:right-min-deg}. In other words, with probability at least $2/3$ all graphs $F_i^* = F_i(T)[L_i \dcup V_i^\text{\it Free}]$ are balanced, bipartite graphs on $2 \mu n_i$ vertices with $\deg(x)\ge 3\sqrt{\varepsilon'/\mu} (\mu n_i)$ for all $x\in L_i\cup V_i^\text{\it Free}$. Also note that $\omega(x)\ge \delta^a \ge \sqrt{\varepsilon'/\mu}$ for all $x\in L_i$ by definition of $\varepsilon'$. 
We conclude from Lemma~\ref{lem:reg:weighted:matching} that $F_i^*$ has a perfect matching if $F_i^*$ has minimum degree at least $3\sqrt{\varepsilon'}n_i$. Hence, combining \eqref{eq:aux:reg:1} and~\eqref{eq:aux:reg:2} we obtain that the RGA terminates successfully and all $F_i^*$ have perfect matchings with probability at least $1/3$. Thus there is an almost spanning embedding of $H'$ into $G$ that can be extended to a spanning embedding of $H$ into $G$. \end{proof} \subsection{Proof of Theorem~\ref{thm:Blow-up:arr:ext}} \label{sec:Proof:Blow-up:arr:ext} We close this section by sketching the proof of Theorem~\ref{thm:Blow-up:arr:ext}, which is very similar to the proof of Theorem~\ref{thm:Blow-up:arr:full}. We start by quickly summarising the latter. For two graphs $G$ and $H$ let the partitions $V=V_1\dcup\dots\dcup V_r$ and $X=X_1\dcup\dots\dcup X_r$ satisfy the requirements of Theorem~\ref{thm:Blow-up:arr:full}. In order to find an embedding of $H$ into $G$ that maps the vertices of $X_i$ onto $V_i$ we proceeded in two steps. First we used a randomised greedy algorithm to embed an almost spanning part of $H$ into $G$. This left us with sets $L_i\subseteq X_i$ and $V_i^\text{\it Free}\subseteq V_i$. We then found a bijection between the $L_i$ and $V_i^\text{\it Free}$ that completed the embedding of $H$. More precisely, we did the following. We ran the randomised greedy algorithm from Section~\ref{sec:rga:definition} and defined auxiliary graphs $F_i(t)$ on vertex sets $V_i\dcup X_i$ that kept track of all possible embeddings at time $t$ of the embedding algorithm. We showed that the randomised greedy embedding succeeds for the almost spanning subgraph if all the auxiliary graphs remain weighted regular (Lemma~\ref{lem:reg-small-queue}). This in turn happens with probability at least $2/3$ by Lemma~\ref{lem:rga:success}. This finished stage one of the embedding (and also proved Theorem~\ref{thm:Blow-up:arr:almost-span}). 
For the second stage of the embedding we assumed that stage one found an almost spanning embedding by time $T$ and that all auxiliary graphs are weighted regular. We defined $F_i^*(T)$ to be the subgraph of $F_i(T)$ induced by $L_i\dcup V_i^\text{\it Free}$. This subgraph inherits (some) weighted regularity from $F_i(T)$. Moreover, we showed that all $F_i^*(T)$ have a minimum degree which is linear in $n_i$ with probability at least $2/3$ (see Proposition~\ref{prop:aux:left-min-deg} and Lemma~\ref{lem:aux:right-min-deg}). Each $F_i^*(T)$ has a perfect matching in this case by Lemma~\ref{lem:reg:weighted:matching}. Those perfect matchings then gave the bijection of $L_i$ onto $V_i^\text{\it Free}$ that completed the embedding of $H$ into $G$. We concluded that with probability at least $2/3$ the almost spanning embedding found by the randomised greedy algorithm in stage one can be extended to a spanning embedding of $H$ into $G$. For the proof of Theorem~\ref{thm:Blow-up:arr:ext} we proceed in exactly the same way. Note that Theorem~\ref{thm:Blow-up:arr:full} and Theorem~\ref{thm:Blow-up:arr:ext} differ only in the following aspects. The first allows a maximum degree of $\sqrt{n}/\log n$ for $H$ while the latter extends this to $\Delta(H)\le \xi n/\log n$. This does not come free of charge. Theorem~\ref{thm:Blow-up:arr:ext} not only requires the $R$-partition of $G$ to be super-regular but also imposes what we call the \emph{tuple condition}, that every tuple of $a+1$ vertices in $V\setminus V_i$ has a linearly sized joint neighbourhood in $V_i$. We now sketch how one has to change the proof of Theorem~\ref{thm:Blow-up:arr:full} to obtain Theorem~\ref{thm:Blow-up:arr:ext}. Again we proceed in two stages. The first of those, which gives the almost spanning embedding, is identical to the previously described one: here the larger maximum degree is not an obstacle (see also Theorem~\ref{thm:Blow-up:arr:almost-span}). 
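The completion step summarised above, a perfect matching in the bipartite graph $F_i^*(T)$ that defines the bijection of $L_i$ onto $V_i^\text{\it Free}$, can be illustrated by an augmenting-path computation on a toy instance (all names hypothetical):

```python
# Toy sketch: a perfect matching in a bipartite auxiliary graph defines
# the bijection L_i -> V_i^Free.  Kuhn's augmenting-path algorithm on a
# hypothetical three-by-three instance.
def perfect_matching(left, adj):
    match = {}                              # right vertex -> left vertex

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # take v if it is free, or reroute its current partner
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    return match if all(augment(u, set()) for u in left) else None

adj = {"x1": ["v1", "v2"], "x2": ["v2"], "x3": ["v1", "v3"]}
m = perfect_matching(["x1", "x2", "x3"], adj)
print(sorted((x, v) for v, x in m.items()))   # -> [('x1', 'v1'), ('x2', 'v2'), ('x3', 'v3')]
```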
Again all auxiliary graphs are weighted regular by the end of the \textsc{Embedding Phase} with probability at least $2/3$. Moreover, all vertices in $L_i$ have linear degree in $F_i^*(T)$ by the same argument as before (see Proposition~\ref{prop:aux:left-min-deg} and its proof). It now remains to show that every vertex $v$ in $V_i^\text{\it Free}$ has a linear degree in the auxiliary graph $F_i^*(T)$. At this point we deviate from the proof of Theorem~\ref{thm:Blow-up:arr:full}. Recall that $L_i^*$ was defined as the last $\lambda n_i$ vertices of $X_i\setminus N(S)$ in the arrangeable ordering and $L_i^*(v)$ was defined as the set of vertices $x\in L_i^*$ with $\mathcal{A}_v(y)$ for all $y\in N^-(x)$. We still want to prove that $L_i^*(v)$ is large for every $v$ as this again would imply the linear minimum degree for all $v\in V_i^\text{\it Free}$. However, the maximum degree $\Delta(H)\le \xi n/\log n$ does not allow us to partition $L_i$ into sets which are predecessor disjoint of second order any more. This, however, was crucial for our proof of $|L_i^*(v)|\ge 2^{-a^2-1}|L_i^*|$ (see the proof of Lemma~\ref{lem:aux:rmd:Avy}). We therefore alter the definition of the event $\mathcal{A}_v(y)$ to overcome this obstacle. Instead of requiring that $|A(y) \cap N_G(v)|/|A(y)|\ge (2/3)|V_j\cap N_G(v)|/|V_j|$, we now define $\mathcal{A}_v(y)$ in the proof of Theorem~\ref{thm:Blow-up:arr:ext} to be the event that \begin{align*} \frac{|A(y)\cap N(v)|}{|A(y)|} \ge \frac\iota2\;. \end{align*} We still denote by $L_i^*(v)$ the set of vertices $x\in L_i^*$ with $\mathcal{A}_v(y)$ for all $y\in N^-(x)$. Now the tuple condition guarantees that $|C_{t(y),y}\cap N_G(v)| \ge \iota n_j$ for any $y\in X_j$ and $v \in V\setminus V_j$. Since we chose $V_j^\textsc{\MakeTextLowercase{S}}\subseteq V_j$ randomly we obtain $|C_{t(y),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)| \ge (\mu/20) \iota n_j$ for all $y\in X_j$ \emph{almost surely}.
The same arguments as in the proof of Proposition~\ref{prop:hdp:predecessors} imply that $A(y)\approx C_{t(y),y}^\textsc{\MakeTextLowercase{S}}$ for all $y$ that are predecessors of vertices $x\in L_i^*$. Hence, \[ \frac{|A(y)\cap N_G(v)|}{|A(y)|} \approx \frac{|C_{t(y),y}^\textsc{\MakeTextLowercase{S}}\cap N_G(v)|}{|C_{t(y),y}^\textsc{\MakeTextLowercase{S}}|}\approx \frac{(\mu/20) \iota n_j}{(\mu/10) \delta^a n_j} \ge \frac\iota2 \] for all $x\in L_i^*$ and all $y\in N^-(x)$ almost surely. If this is the case we have $|L_i^*(v)|=|L_i^*|=\lambda n_i$ and therefore the assertion of Lemma~\ref{lem:aux:rmd:Avy} holds also in this setting. It remains to show that the same is true for Lemma~\ref{lem:aux:rmd:pre}. Indeed, after some appropriate adjustments of the constants, the very same argument implies $\deg_{F_i(T)}(v,L_i)\ge 3\varepsilon'n_i$ for all $i\in[r]$ and $v\in V_i^\text{\it Free}$ if $|L_i^*(v)|=|L_i^*|$. More precisely, the change in the definition of $\mathcal{A}_v(y)$ will force smaller values of $\varepsilon'$, that is, the constant in the bound of the joint neighbourhood of each $(a+1)$-tuple has to be large compared to the $\varepsilon$ in the $\varepsilon$-regularity of the partition $V_1\dcup\dots\dcup V_k$. The constants then relate as \[ 0 < \xi \ll \varepsilon \ll \varepsilon' \ll \lambda \ll \gamma \ll \mu, \delta, \iota \le 1\, . \] The remaining steps in the proof of Theorem~\ref{thm:Blow-up:arr:ext} are identical to those in the proof of Theorem~\ref{thm:Blow-up:arr:full}. For $i\in[r]$ the minimum degree in $F_i^*(T)$ together with the weighted regularity implies that $F_i^*(T)$ has a perfect matching. The perfect matching defines a bijection of $L_i$ onto $V_i^\text{\it Free}$ that in turn completes the embedding of $H$ into $G$. To wrap up, let us quickly comment on the different degree bounds for $H$ in Theorem~\ref{thm:Blow-up:arr:full} and Theorem~\ref{thm:Blow-up:arr:ext}. 
The proof of Theorem~\ref{thm:Blow-up:arr:ext} just sketched only requires $\Delta(H)\le \xi n/\log n$. This is needed to partition $L_i^*$ into \emph{predecessor disjoint sets} in the last step in order to prove the minimum degree for the auxiliary graphs. In contrast, the proof of Theorem~\ref{thm:Blow-up:arr:full} partitions the vertices of $L_i^*$ into sets which are \emph{predecessor disjoint of second order}, i.e., which satisfy $N^-(N^-(x))\cap N^-(N^-(x'))=\emptyset$ for all $x\neq x'$. This is necessary to ensure that there is a linear number of vertices $x$ in $L_i^*$ with $\mathcal{A}_v(y)$ for all $y\in N^-(x)$, i.e., whose predecessors all get embedded into $N_G(v)$ with probability $\delta/3$. More precisely, we ensure that all predecessors $y$ of $x$ have the following property. The predecessors $z_1,\dots,z_k$ of $y$ are embedded to a $k$-tuple $\left(\varphi(z_1),\dots,\varphi(z_k)\right)$ of vertices in $G$ such that $\bigcap_{i\in[k]} N(\varphi(z_i)) \cap N_G(v) \cap V_{j(y)}$ is large. This fact follows trivially from the tuple condition of Theorem~\ref{thm:Blow-up:arr:ext} and hence we don't need a partition into predecessor disjoint sets of second order. \section{Optimality} \label{sec:opt} The aim of this section is twofold. Firstly, we shall investigate why the degree bounds given in Theorem~\ref{thm:Blow-up:arr:full} and in Theorem~\ref{thm:Blow-up:arr:ext} are best possible. Secondly, we shall explain why the conditions Theorem~\ref{thm:Blow-up:arr:full} imposes on image restrictions are so restrictive. \paragraph{Optimality of Theorem~\ref{thm:Blow-up:arr:ext}.} To argue that the requirement $\Delta(H)\le n/\log n$ is optimal up to the constant factor we use a construction from~\cite{KSS95} and the following proposition. \begin{prop} \label{prop:Domination} For every $\varepsilon>0$ the domination number of a graph $\Gnp$ is with high probability larger than $(1-\varepsilon)\log_{1/(1-p)} n$. 
\end{prop} \begin{proof} The probability that a graph in $\Gnp$ has a dominating set of size $r$ is bounded by \begin{align*} \binom nr\left(1-(1-p)^r\right)^{n-r} &\le \exp\left(r\log n - (1-p)^r(n-r)\right)\,.\\ \intertext{Setting $r=(1-\varepsilon) \log_{1/(1-p)} n$, so that $(1-p)^r=n^{-(1-\varepsilon)}$, we obtain} \PP\left[\parbox{130pt}{$\Gnp$ has a dominating set of size $(1-\varepsilon) \log_{1/(1-p)} n$}\right] &\le \exp\left((1-\varepsilon)\frac{\log^2 n}{\log\frac1{1-p}} - \frac{n-(1-\varepsilon)\log_{1/(1-p)} n}{n^{1-\varepsilon}}\right) \to 0 \end{align*} for every (fixed) positive $\varepsilon$. \end{proof} Let $H$ be a tree with a root of degree $\tfrac12\log n$, such that each neighbour of this root has $2n/\log n$ leaves as neighbours. This graph $H$ almost surely is not a subgraph of $\Gnp[0.9]$ by Proposition~\ref{prop:Domination} as the neighbours of the root form a dominating set. \paragraph{Optimality of Theorem~\ref{thm:Blow-up:arr:full}.} The degree bound $\Delta(H)\le\sqrt{n}/\log n$ is optimal up to the $\log$-factor. More precisely, we can show the following. \begin{prop} \label{prop:UpperBound:deg} For every $\varepsilon>0$ and $n_0\in\field{N}$ there are $n\ge n_0$, an $(\varepsilon,1/2)$-super-regular pair $(V_1,V_2)$ with $|V_1|=|V_2|=n$ and a tree $T\subseteq K_{n,n}$ with $\Delta(T)\le \sqrt n +1$, such that $(V_1,V_2)$ does not contain~$T$. \end{prop} Condition~\eqref{item:Blow-up:ir} of Theorem~\ref{thm:Blow-up:arr:full} allows only a constant number of permissible image restrictions per cluster. The following proposition shows that this, too, is best possible (up to the value of the constant). \begin{prop} \label{prop:UpperBound:landing} For every $\varepsilon>0$, $n_0\in\field{N}$, and every $w\colon\field{N} \to \field{N}$ which goes to infinity arbitrarily slowly, there are $n\ge n_0$, an $(\varepsilon,1/2)$-super-regular pair $(V_1,V_2)$ with $|V_1|=|V_2|=n$ and a tree $T\subseteq K_{n,n}$ with $\Delta(T)\le w(n)$ such that the following is true.
The images of $w(n)$ vertices of~$T$ can be restricted to sets of size $n/2$ in $V_1\cup V_2$ such that no embedding of~$T$ into $(V_1,V_2)$ respects these image restrictions. \end{prop} We remark that our construction for Proposition~\ref{prop:UpperBound:landing} does not require a spanning tree~$T$, but only one on $w(n)+1$ vertices. Moreover, this proposition shows that the number of admissible image restrictions drops from linear (in the original Blow-up Lemma) to constant (in Theorem~\ref{thm:Blow-up:arr:full}), if the maximum degree of the target graph~$H$ increases from constant to an increasing function. We now give the constructions that prove these two propositions. \begin{proof}[of Proposition~\ref{prop:UpperBound:deg} (sketch)] Let $\varepsilon>0$ and $n_0$ be given, choose an integer~$k$ such that $1/k \ll \varepsilon$ and an integer~$n$ such that $k,n_0\ll n$, and consider the following bipartite graph $G_k=(V_1\dcup V_2,E)$ with $|V_1|=|V_2|=n$. Let $W_1,\dots,W_k$ be a balanced partition of $V_1$. Now for each \emph{odd} $i\in[k]$ we randomly and independently choose a subset $U_i\subseteq V_2$ of size $n/2$; and we set $U_{i+1}:= V_2\setminus U_i$. Then we insert exactly all those edges into~$E$ which have one vertex in~$W_i$ and the other in~$U_i$, for $i\in[k]$. Clearly, every vertex in~$G$ has degree $n/2$. In addition, using the degree co-degree characterisation of $\varepsilon$-regularity it is not difficult to check that $(V_1,V_2)$ almost surely is $\varepsilon$-regular. Next, we construct the tree~$T$ as follows. We start with a tree~$T'$, which consists of a root of degree $\sqrt n -1$ and is such that each child of this root has exactly $\sqrt{n}$ leaves as children. For obtaining~$T$, we then take two copies of~$T'$, call their roots~$x_1$ and~$x_2$, respectively, and add an edge between~$x_1$ and~$x_2$. Clearly, the two colour classes of~$T$ have size~$n$ and $\Delta(T)=\sqrt n +1$. It remains to show that $T\not\subseteq G_k$. 
Assume for contradiction that there is an embedding~$\varphi$ of~$T$ into $G_k$; by symmetry we may assume that $\varphi(x_1)\in W_1$. Note that $n-1$ vertices in~$T$ have distance~$2$ from~$x_1$. Since~$G_k$ is bipartite~$\varphi$ has to map these $n-1$ vertices to~$V_1$. In particular, since these vertices together with~$x_1$ fill all of~$V_1$, one of them has to be embedded in~$W_2$. However, the distance between~$W_1$ and~$W_2$ in~$G_k$ is greater than~$2$. \end{proof} The proof of Proposition~\ref{prop:UpperBound:landing} proceeds similarly. \begin{proof}[of Proposition~\ref{prop:UpperBound:landing} (sketch)] Let~$\varepsilon$, $n_0$ and $w$ be given, choose $n$ large enough so that $n_0\le n$ and $1/w(n) \ll \varepsilon$, and set $k:=w(n)$. We reuse the graph~$G_k=(V_1\dcup V_2,E)$ from the previous proof as $\varepsilon$-regular pair. Now consider any balanced tree~$T$ with a vertex~$x$ of degree $\Delta(T)=w(n)=k$. Let $\{y_1,\dots,y_k\}$ be the neighbours of~$x$ in~$T$. For $i\in[k]$ we then restrict the image of~$y_i$ to $V_2\setminus U_i$. We claim that there is no embedding of~$T$ into~$G$ that respects these image restrictions. Indeed, clearly~$x$ has to be embedded into~$W_j\subseteq V_1$ for some $j\in[k]$ because its neighbours are image restricted to subsets of~$V_2$. However, by the definition of~$U_j$ this prevents~$y_j$ from being embedded into $V_2\setminus U_j$. \end{proof} \section{Applications} \label{sec:Applications} \subsection{$F$-factors for growing degrees} In this section we prove Theorem~\ref{thm:GrowingHfactors}. Our strategy will be to repeatedly embed a collection of copies of $F$ into a super-regular $r$-tuple in $G$ with the help of the Blow-up Lemma version stated as Theorem~\ref{thm:Blow-up:arr:full}. The following result by B\"ottcher, Schacht, and Taraz~\cite[Lemma 6]{BoeSchTar09} says that for $\gamma>0$ any sufficiently large graph $G$ with $\delta(G)\ge ((r-1)/r+\gamma)|G|$ has a regular partition with a reduced graph $R$ that contains a $K_r$-factor.
Moreover, all pairs of vertices in $R$ that lie in the same $K_r$ span super-regular pairs in $G$. Let $K_r^{(k)}$ denote the disjoint union of $k$ complete graphs on $r$ vertices each. For all $n,k,r\in\field{N}$, we call an integer partition $(n_{i,j})_{i\in[k],j\in[r]}$ of $n$ (with $n_{i,j}\in\field{N}$ for all $i\in[k]$ and $j\in[r]$) $r$-equitable, if $|n_{i,j}-n_{i,j'}|\le 1$ for all $i\in[k]$ and $j,j'\in[r]$. \begin{lem}\label{lem:EquiSuperPartition} For all $r\in\field{N}$ and $\gamma>0$ there exist $\delta>0$ and $\varepsilon_0>0$ such that for every positive $\varepsilon\le\varepsilon_0$ there exist $K_0$ and $\xi_0>0$ such that for all $n\ge K_0$ and for every graph $G$ on vertex set $[n]$ with $\delta(G)\ge((r-1)/r+\gamma)n$ there exist $k\in \field{N}\setminus\{0\}$ and a graph $K_r^{(k)}$ on vertex set $[k]\times [r]$ with \renewcommand{\labelenumi}{(R\arabic{enumi})} \begin{enumerate} \item $k\le K_0$, \item there is an $r$-equitable integer partition $(m_{i,j})_{i\in[k],j\in[r]}$ of $n$ with $(1+\varepsilon)n/(kr)\ge m_{i,j}\ge (1-\varepsilon)n/(kr)$ such that the following holds.\footnote{The upper bound on $m_{i,j}$ is implicit in the proof of the lemma but not explicitly stated in~\cite{BoeSchTar09}.} \end{enumerate} For every partition $(n_{i,j})_{i\in[k],j\in[r]}$ of $n$ with $m_{i,j}-\xi_0 n \le n_{i,j} \le m_{i,j}+\xi_0 n$ there exists a partition $(V_{i,j})_{i\in[k],j\in[r]}$ of $V$ with \renewcommand{\theenumi}{V\arabic{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \begin{enumerate} \item\label{eq:lem:ESP:1} $|V_{i,j}|=n_{i,j}$, \item\label{eq:lem:ESP:2} $(V_{i,j})_{i\in[k],j\in[r]}$ is $(\varepsilon,\delta)$-super-regular on $K_r^{(k)}$. \end{enumerate} \end{lem} \renewcommand{\theenumi}{\roman{enumi}} \renewcommand{\labelenumi}{(\theenumi)} Using this partitioning result for $G$, Theorem~\ref{thm:GrowingHfactors} follows easily.
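An $r$-equitable integer partition as defined above is easy to produce explicitly. A minimal sketch (our own helper for illustration, not from~\cite{BoeSchTar09}): distribute the remainder of $n$ modulo $kr$ so that any two parts differ by at most one, which makes the partition $r$-equitable in particular.

```python
def r_equitable_partition(n, k, r):
    """Partition the integer n into k blocks of r parts each such that
    any two parts differ by at most 1; in particular
    |n[i][j] - n[i][j']| <= 1 within every block i (r-equitable)."""
    base, rem = divmod(n, k * r)
    flat = [base + 1] * rem + [base] * (k * r - rem)
    return [flat[i * r:(i + 1) * r] for i in range(k)]
```

The lemma itself of course asserts much more, namely that the clusters $V_{i,j}$ realising such sizes can be chosen super-regular on $K_r^{(k)}$.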
\begin{proof}[of Theorem~\ref{thm:GrowingHfactors}] We alternatingly choose constants as given by Theorem~\ref{thm:Blow-up:arr:full} and Lemma~\ref{lem:EquiSuperPartition}. So let $\delta,\varepsilon_0>0$ be the constants given by Lemma~\ref{lem:EquiSuperPartition} for $r$ and $\gamma>0$. Further let $\varepsilon,\alpha>0$ be the constants given by Theorem~\ref{thm:Blow-up:arr:full} for $C=0$, $a$, $\Delta_R=r$, $\kappa=2$, $c=1$ and $\delta$. We are setting $C=0$ as we do not use any image restrictions in this proof. If necessary we decrease $\varepsilon$ such that $\varepsilon\le\varepsilon_0$ holds. Let $K_0$ and $\xi_0>0$ be as in Lemma~\ref{lem:EquiSuperPartition} with $\varepsilon$ as set before. For $r$ let $n_0$ be given by Theorem~\ref{thm:Blow-up:arr:full}. If necessary increase $n_0$ such that $n_0\ge K_0$. Finally set $\xi=\xi_0$. In the following we assume that \begin{enumerate} \item $G$ is of order $n\ge n_0$ and has $\delta(G)\ge(\tfrac{r-1}{r}+\gamma)n$, and \item $H$ is an $a$-arrangeable, $r$-chromatic $F$-factor with $|F|\le \xi n$, $\Delta(F)\le\sqrt{n}/\log n$. \end{enumerate} We need to show that $H\subseteq G$. For this purpose we partition $H$ into subgraphs $H_1,\dots,H_k$, where $H_i$ is to be embedded into $\cup_{j\in[r]}V_{i,j}$ later, as follows. Let $(m_{i,j})_{i\in[k],j\in[r]}$ be an $r$-equitable partition of $[n]$ with $m_{i,j}\ge (1-\varepsilon)n/(kr)$ as given by Lemma~\ref{lem:EquiSuperPartition}. For $i=1,\dots,k-1$ we choose $\ell_i$ such that both \begin{align}\label{eq:GrowingHfactors:partition:1} \big|m_{i,j}-\ell_i |F|\big| \le |F| \text{, \qquad and \qquad} \left|\sum_{i'\le i} (m_{i',j}-\ell_{i'}|F|)\right| \le |F| \end{align} for all $j\in[r]$. Let $n_{i,j}= \ell_i |F|$ for all $j\in[r]$ and set $H_i$ to be $\ell_ir$ copies of $F$. Note that there exists an $r$-colouring of $H_i$ in which each colour class $X_{i,j}$ has exactly $n_{i,j}$ vertices. Finally $H_k$ is set to be $H\setminus (H_1\cup\dots\cup H_{k-1})$. 
Let $\chi:V(H_k)\to [r]$ be a colouring of $H_k$ whose colour classes are as equal in size as possible and set $n_{k,j}\mathrel{\mathop:}=|\chi^{-1}(j)|$ and $X_{k,j}\mathrel{\mathop:}= \chi^{-1}(j)$ for $j\in[r]$. It follows from~\eqref{eq:GrowingHfactors:partition:1} that \begin{align} \label{eq:GrowingHfactors:partition:2} |n_{i,j}-m_{i,j}| \le |F| \le \xi_0 n \end{align} for all $i\in[k]$, $j\in[r]$. Thus there exists a partition $(V_{i,j})_{i\in[k],j\in[r]}$ of $V(G)$ with properties~\eqref{eq:lem:ESP:1} and~\eqref{eq:lem:ESP:2} by Lemma~\ref{lem:EquiSuperPartition}. We apply Theorem~\ref{thm:Blow-up:arr:full} to embed $H_i$ into $G[V_{i,1}\cup\dots\cup V_{i,r}]$ for every $i\in[k]$. Note that we have partitioned $V(H_i)=X_{i,1}\cup\dots\cup X_{i,r}$ in such a way that $|X_{i,j}|=n_{i,j}$ and $vw\in E(H_i)$ implies $v\in X_{i,j}$, $w\in X_{i,j'}$ with $j\neq j'$. Now properties~\eqref{eq:lem:ESP:1},~\eqref{eq:lem:ESP:2} guarantee that $|V_{i,j}|=n_{i,j}$ and $(V_{i,j},V_{i,j'})$ is an $(\varepsilon,\delta)$-super-regular pair in $G$ for all $i\in[k]$ and $j,j'\in[r]$ with $j\neq j'$. It follows that $H_i$ is a subgraph of $G[V_{i,1}\cup\dots\cup V_{i,r}]$ by Theorem~\ref{thm:Blow-up:arr:full}. \end{proof} \subsection{Random graphs and universality} Next we prove Theorem~\ref{thm:GnpUniversal}, which states that $G=\Gnp$ is universal for the class of $a$-arrangeable bounded degree graphs, $\mathcal{H}_{n,a,\xi}= \{H: |H|=n, H \text{ $a$-arr., $\Delta(H)\le \xi n/\log n$}\}$. To prove this we will find a balanced partition of $G$ and apply Theorem~\ref{thm:Blow-up:arr:ext}. For this purpose we also have to find a balanced partition of the graphs $H\in\mathcal{H}_{n,a,\xi}$. To this end we shall use the following result of Kostochka, Nakprasit, and Pemmaraju~\cite{KosNakPem05}.
\begin{thm}[Theorem 4 from~\cite{KosNakPem05}] \label{thm:DegenerateEquit} Every $a$-arrangeable\footnote{In fact~\cite{KosNakPem05} shows this result for the more general class of $a$-degenerate graphs.} graph $H$ with $\Delta(H)\le n/15$ has a balanced $k$-colouring for each $k\ge 16a$. \end{thm} A graph has a \emph{balanced $k$-colouring} if the graph has a proper colouring with at most $k$ colours such that the sizes of the colour classes differ by at most 1. \begin{proof}[of Theorem~\ref{thm:GnpUniversal}] Let $a$ and $p$ be given. Set $\Delta_R\mathrel{\mathop:}= 16a$, $\kappa=1$, $\iota\mathrel{\mathop:}= \tfrac12 p^{a+1}$, $\delta\mathrel{\mathop:}= p/2$ and let $R$ be a complete graph on $16a$ vertices. Set $r\mathrel{\mathop:}= 16a$ and let $\varepsilon$, $\xi$, $n_0$ be as given by Theorem~\ref{thm:Blow-up:arr:ext}. Let $n\ge n_0$ and let $V=V_1\dcup\dots\dcup V_{r}$ be a balanced partition of $[n]$. Then we generate a random graph $G=\Gnp$ on vertex set $[n]$. Every pair $(V_i,V_j)$ is $(\varepsilon, p/2)$-super-regular in $G$ with high probability. Furthermore, with high probability we have that every tuple $(u_1,\dots,u_{a+1})\subseteq V\setminus V_i$ satisfies $|\cap_{j\in[a+1]}N_G(u_j)\cap V_i|\ge \iota |V_i|$. So assume this is the case and let $H\in\mathcal{H}_{n,a,\xi}$. We partition $H$ into $16a$ equally sized stable sets with the help of Theorem~\ref{thm:DegenerateEquit}. Thus $H$ satisfies the requirements of Theorem~\ref{thm:Blow-up:arr:ext} and $H$ embeds into $G$. \end{proof} \addcontentsline{toc}{section}{Appendix} \section*{Appendix} \subsection*{Weighted regularity} In this section we provide some background on \emph{weighted regularity}. In particular, we supplement the proofs of Lemma~\ref{lem:reg:weighted:deg-codeg} and Lemma~\ref{lem:reg:weighted:matching}. We start with a short introduction to the results on weighted regularity by Czygrinow and R\"odl~\cite{CzygRodl00}. Their focus lies on hypergraphs.
However, we only present the graph case here. Czygrinow and R\"odl define their weight function on the set of edges (whereas in our scenario we have a bipartite graph with weights on the vertices of one class). They consider weighted graphs $G=(V,\widetilde\omega)$ where $\widetilde\omega: V\times V\to \mathbb{N}_{\ge 0}$. One can think of $\widetilde\omega(x,y)$ as the multiplicity of the edge $(x,y)$. Their weighted degree and co-degree for $x,y\in V$ are then defined as \[ \deg_{\widetilde\omega}^*(x) \mathrel{\mathop:}= \sum_{y\in V}\widetilde\omega(x,y), \qquad \deg_{\widetilde\omega}^*(x,y) \mathrel{\mathop:}= \sum_{z\in V}\widetilde\omega(x,z)\widetilde\omega(y,z)\,. \] For disjoint $A,B\subseteq V$ they define \[ d_{\widetilde\omega}^*(A,B) = \frac{\sum\widetilde\omega(x,y)}{K|A|\,|B|}\, , \] where the sum is over all pairs $(x,y) \in A\times B$ and $K\mathrel{\mathop:}= 1+\max\{\widetilde\omega(x,y): (x,y)\in V\times V\}$.\footnote{Czygrinow and R\"odl require $K$ to be strictly larger than the maximal weight for technical reasons.} A pair $(A,B)$ in $G=(V,\widetilde\omega)$ with $A\cap B=\emptyset$ is called \emph{$(\varepsilon,\widetilde\omega)$-regular} if \[ |d_{\widetilde\omega}^*(A,B)-d_{\widetilde\omega}^*(A',B')|<\varepsilon \] for all $A'\subseteq A$ with $|A'|\ge \varepsilon |A|$ and all $B'\subseteq B$ with $|B'|\ge\varepsilon|B|$. As in the unweighted case, regular pairs can be characterised by the degree and co-degree distribution of their vertices. The following lemma (see~\cite[Lemma 4.2]{CzygRodl00}) shows that a pair is weighted regular in the setting of Czygrinow and R\"odl if most of the vertices have the correct weighted degree and most of the pairs have the correct weighted co-degree. \begin{lem}[Czygrinow, R\"odl~\cite{CzygRodl00}] \label{lem:weighted:CR} Let $G=(A\dcup B,\widetilde\omega)$ be a weighted graph with $|A|=|B|=n$ and let $\varepsilon,\xi\in (0,1)$, $\xi^2<\varepsilon$, $n\ge 1/\xi$. 
Assume that both of the following conditions are satisfied: \begin{enumerate} \renewcommand{\theenumi}{\roman{enumi}'} \renewcommand{\labelenumi}{(\theenumi)} \item\label{eq:reg:w:CR:1} $\left|\left\{x\in A:|\deg_{\widetilde\omega}^*(x)-K\,d_{\widetilde\omega}^*(A,B)n| > K\xi^2n\right\}\right| <\xi^2n$, and \item\label{eq:reg:w:CR:2} $\left|\left\{ \{x_i,x_j\} \in \binom A2 :\left|\deg_{\widetilde\omega}^*(x_i,x_j)-K^2d_{\widetilde\omega}^*(A,B)^2n\right| \ge K^2\xi n\right\}\right|\le \xi\binom n2$. \end{enumerate} Then for every $A'\subseteq A$ with $|A'|\ge \varepsilon n$ and every $B'\subseteq B$ with $|B'|\ge \varepsilon n$ we have \[ |d_{\widetilde\omega}^*(A',B') - d_{\widetilde\omega}^*(A,B)|\le 2\frac{\xi^2}\varepsilon + \frac{\sqrt{5\xi}}{\varepsilon^2-\varepsilon\xi^2}\,. \] \end{lem} The assertion of Lemma~\ref{lem:weighted:CR} implies that the pair $(A,B)$ is $(\varepsilon',\widetilde\omega)$-regular, where $\varepsilon'=\max\{\varepsilon, 2\xi^2/\varepsilon + \sqrt{5\xi}/(\varepsilon^2-\varepsilon\xi^2)\}$, if the conditions of the lemma are satisfied. Our goal is to translate this result into our setting of weighted regularity (see Section~\ref{sec:WeightedRegularity}). We briefly recall our definition of weighted graphs and weighted regularity before we restate and prove Lemma~\ref{lem:reg:weighted:deg-codeg}. Let $G=(A\dcup B,E)$ be a bipartite graph and $\omega: A\to [0,1]$ be our weight function for $G$. We define the weighted degree of a vertex $x\in A$ to be $\deg_\omega(x)= \omega(x)|N(x,B)|$ and the weighted co-degree of $x,y\in A$ as $\deg_\omega(x,y)=\omega(x)\omega(y)|N(x,B)\cap N(y,B)|$. Similarly, the weighted density of a pair $(A',B')$ is defined as \[ d_\omega(A',B')\mathrel{\mathop:}=\sum_{x\in A'}\frac{\omega(x)|N(x,B')|}{|A'|\cdot |B'|}\,.
\] Again the pair $(A,B)$ is called weighted $\varepsilon$-regular if \[ |d_\omega(A,B) - d_\omega(A',B')|\le \varepsilon \] for all $A'\subseteq A$ and $B'\subseteq B$ with $|A'|\ge \varepsilon |A|$ and $|B'|\ge \varepsilon |B|$. We now prove Lemma~\ref{lem:reg:weighted:deg-codeg}, which we restate here for the reader's convenience. \begin{lemNN}[Lemma~\ref{lem:reg:weighted:deg-codeg}] Let $\varepsilon>0$ and $n\ge \varepsilon^{-6}$. Further let $G=(A\dcup B,E)$ be a bipartite graph with $|A|=|B|=n$ and let $\omega: A \to [\varepsilon,1]$ be a weight function for $G$. If \begin{enumerate} \renewcommand{\theenumi}{\roman{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \item\label{eq:reg:w:d-g:1} $|\{x\in A: |\deg_\omega(x)-d_\omega(A,B)n| > \varepsilon^{14}n\}| < \varepsilon^{12}n$ \quad and \item\label{eq:reg:w:d-g:2} $|\{\{x,y\}\in \binom A2: |\deg_\omega(x,y)-d_\omega(A,B)^2n| \ge \varepsilon^{9}n\}| \le \varepsilon^{6}\binom n2$ \end{enumerate} then $(A,B)$ is a weighted $3\varepsilon$-regular pair. \end{lemNN} \begin{proof}[of Lemma~\ref{lem:reg:weighted:deg-codeg}] Let $\varepsilon>0$, $G=(A\cup B,E)$ and $\omega:A\to[\varepsilon,1]$ satisfy the requirements of the lemma. From this $\omega$ we define a weight function $\widetilde\omega:A\times B \to \mathbb{N}_{\ge 0}$ in the setting of Lemma~\ref{lem:weighted:CR}. For $(x,y)\in A\times B$ we set \[ \widetilde\omega(x,y) \mathrel{\mathop:}= \begin{cases} \left\lceil C \cdot \omega(x)\right\rceil & \text{ if $\{x,y\}\in E$,}\\ \, 0 & \text{ otherwise,} \end{cases} \] where $\varepsilon^{-13}-1\le C\le\varepsilon^{-14}$ is chosen such that $K=\max\{\widetilde\omega(x,y)+1: (x,y)\in A\times B\}\ge\varepsilon^{-13}$. (This is possible unless $E=\emptyset$.) Note that our choice of constants implies $K/C\le 1+2\varepsilon^{13}$. Moreover, let $d_{\widetilde\omega}^*(A,B)$ be defined as above.
The definition of $\widetilde\omega$ implies \begin{align} \label{eq:weight-rel:1} C \deg_\omega(x) &\le \deg_{\widetilde\omega}^*(x) \le C \deg_\omega(x) + |N(x,B)|\text{ and}\\ \label{eq:weight-rel:2} C^2 \deg_\omega(x,y) &\le \deg_{\widetilde\omega}^*(x,y) \le C^2\deg_\omega(x,y) + (2C+1)|N(x,B)\cap N(y,B)| \end{align} for all $x,y\in A$. Here the second inequality follows from \[\lceil C\cdot \omega(x)\rceil^2 \le \left( C\cdot \omega(x)+1\right)^2 \le C^2\big(\omega(x)\big)^2+2C+1\,. \] Moreover, \begin{align}\label{eq:weight-rel:3} C d_\omega(A',B') &\le K d_{\widetilde\omega}^*(A',B') \le C d_\omega(A',B') + 1 \end{align} for all $A'\subseteq A$, $B'\subseteq B$ which in turn implies that \begin{align}\label{eq:weight-rel:4} \big(C\, d_\omega(A',B')\big)^2 - \big(K\,d_{\widetilde\omega}^*(A',B')\big)^2 &\le 1\cdot(C+K) \end{align} for all $A'\subseteq A$, $B'\subseteq B$. We now verify that conditions~\eqref{eq:reg:w:d-g:1} and~\eqref{eq:reg:w:d-g:2} of Lemma~\ref{lem:reg:weighted:deg-codeg} imply conditions~\eqref{eq:reg:w:CR:1} and~\eqref{eq:reg:w:CR:2} of Lemma~\ref{lem:weighted:CR}. Set $\xi\mathrel{\mathop:}=\varepsilon^6$ and let $x\in A$ be such that $|\deg_\omega(x)-d_\omega(A,B)n|\le \varepsilon^{14}n$. It follows from~\eqref{eq:weight-rel:1},~\eqref{eq:weight-rel:3} and the triangle inequality that \begin{align*} |\deg_{\widetilde\omega}^*(x) - K\, d_{\widetilde\omega}^*(A,B)n|&\le |\deg_{\widetilde\omega}^*(x) - C \deg_\omega(x)|\\ &\qquad + |C\, \deg_\omega(x) - C\, d_\omega(A,B)n| \\ &\qquad + |C\, d_\omega(A,B)n - K\, d_{\widetilde\omega}^*(A,B)n|\\ &\le n + C \varepsilon^{14}n + n \le 3n\\ &\le K \xi^2 n\,. \end{align*} Hence, condition~\eqref{eq:reg:w:d-g:1} implies condition~\eqref{eq:reg:w:CR:1}. Now let $\{x,y\}\in\binom A2$ satisfy $|\deg_\omega(x,y)-d_\omega(A,B)^2n| < \varepsilon^{9}n$. 
It follows from~\eqref{eq:weight-rel:2},~\eqref{eq:weight-rel:3} and~\eqref{eq:weight-rel:4} that \begin{align*} \left|\deg_{\widetilde\omega}^*(x,y)-K^2d_{\widetilde\omega}^*(A,B)^2n\right| &\le |\deg_{\widetilde\omega}^*(x,y) - C^2\deg_{\omega}(x,y)|\\ &\qquad + |C^2 \deg_{\omega}(x,y) - C^2 d_\omega(A,B)^2n|\\ &\qquad + \left|C^2 d_\omega(A,B)^2n - K^2d_{\widetilde\omega}^*(A,B)^2n\right|\\ &< (2C+1)n + C^2 \varepsilon^9n + (C+K)n\\ &\le K^2 \xi n\,, \end{align*} where the last inequality is due to $C\le K/\varepsilon$. Thus, condition~\eqref{eq:reg:w:d-g:2} implies condition~\eqref{eq:reg:w:CR:2}. We conclude that $G=(A\dcup B,\widetilde\omega)$ satisfies the requirements of Lemma~\ref{lem:weighted:CR}. Hence every $A'\subseteq A$ with $|A'|\ge \varepsilon n$ and every $B'\subseteq B$ with $|B'|\ge \varepsilon n$ has \[ |d_{\widetilde\omega}^*(A',B')-d_{\widetilde\omega}^*(A,B)|\le 2\frac{\xi^2}\varepsilon + \frac{\sqrt{5\xi}}{\varepsilon^2-\varepsilon\xi^2}\le \frac52 \varepsilon \,. \] Together with~\eqref{eq:weight-rel:3} and the fact that $K/C\le 1+2\varepsilon^{13}$ this finishes the proof as we have \begin{align*} |d_\omega(A',B')-d_\omega(A,B)| &\le |d_\omega(A',B') - \tfrac KC d_{\widetilde\omega}^*(A',B')| \\ & \qquad + |\tfrac KC d_{\widetilde\omega}^*(A',B') - \tfrac KC d_{\widetilde\omega}^*(A,B)| \\ & \qquad + |\tfrac KC d_{\widetilde\omega}^*(A,B) - d_\omega(A,B)|\\ &\le \tfrac 1C + \tfrac KC \tfrac52\varepsilon +\tfrac 1C\\ &\le 3\varepsilon\,. \end{align*} \end{proof} We want to point out that the requirement that $\omega$ is at least $\varepsilon$ does not cause any problem when we apply Lemma~\ref{lem:reg:weighted:deg-codeg} because one could simply increase the weight of all vertices $x$ with $\omega(x)<\varepsilon$ to $\varepsilon$ without changing the weighted densities in the subpairs by more than $\varepsilon$. 
Hence a graph with an arbitrary weight function is weighted $2\varepsilon$-regular if the graph with the modified weight function is weighted $\varepsilon$-regular. The remainder of this section is dedicated to the proof of Lemma~\ref{lem:reg:weighted:matching} which we restate here. \begin{lemNN}[Lemma~\ref{lem:reg:weighted:matching}] Let $\varepsilon>0$ and let $G=(A\dcup B,E)$ with $|A|=|B|=n$ and weight function $\omega: A \to [\sqrt\varepsilon,1]$ be a weighted $\varepsilon$-regular pair. If $\deg(x)> 2\sqrt\varepsilon n$ for all $x\in A\cup B$ then $G$ contains a perfect matching. \end{lemNN} \begin{proof}[of Lemma~\ref{lem:reg:weighted:matching}] In order to prove that $G=(A\dcup B,E)$ has a perfect matching, we will verify the K\"onig--Hall criterion for $G$, i.e., we will show that $|N(S)| \ge |S|$ for every $S\subseteq A$. We distinguish three cases. \underline{Case 1, $|S| < \varepsilon n$}: The minimum degree $\deg(x) > 2\sqrt\varepsilon n$ implies $|N(S)|\ge 2\sqrt\varepsilon n\ge |S|$ for any non-empty set $S$. \underline{Case 2, $\varepsilon n \le |S| \le (1-\varepsilon) n$}: Note that $\deg(x)> 2\sqrt\varepsilon n$ and $\omega(x)\ge \sqrt\varepsilon$ for all $x\in A$ implies that $d_\omega(A,B)> 2\varepsilon$. We now set $T=B\setminus N(S)$. Since $d_\omega(S,T)=0$ and $(A,B)$ is a weighted $\varepsilon$-regular pair with weighted density greater than $2\varepsilon$ we conclude that $|T|< \varepsilon n$, and hence $|N(S)|=n-|T|>(1-\varepsilon)n\ge |S|$. \underline{Case 3, $|S| > (1-\varepsilon)n$}: For every $y\in B$ we have $|S|+|N(y)|\ge (1-\varepsilon+2\sqrt\varepsilon)n>n$ and thus $N(y)\cap S \neq \emptyset$. It follows that $N(S)=B$ if $|S|>(1-\varepsilon)n$. \end{proof} \subsection*{Chernoff type bounds} The analysis of our randomised greedy embedding (see Section~\ref{sec:rga:definition}) repeatedly uses concentration results for random variables. These random variables are sums of Bernoulli variables.
If these are mutually independent we use a Chernoff bound (see, e.g.,~\cite[Corollary 2.3]{RandomGraphs}). \begin{thm}[Chernoff bound] \label{thm:ChernoffBound} Let $\mathcal{A}=\sum_{i\in[n]}\mathcal{A}_i$ be a binomially distributed random variable, where the $\mathcal{A}_i$ are independent 0-1-random variables with $\PP[\mathcal{A}_i=1]=p$ for all $i\in[n]$. Further let $c\in[0,3/2]$. Then \[ \PP[|\mathcal{A} - pn| \ge c\cdot pn] \le \exp\left(-\frac{c^2}{3}pn\right).\] \end{thm} However, we also consider scenarios where the Bernoulli variables are not independent. \begin{lemNN}[Lemma~\ref{lem:pseudo:Chernoff}] Let $0\le p_1\le p_2 \le 1$, $0<c \le 1$. Further let $\mathcal{A}_i$, $i\in[n]$, be 0-1-random variables and set $\mathcal{A}:=\sum_{i\in[n]}\mathcal{A}_i$. If \begin{align} \label{eq:pseudo:Independence} p_1 \le \PP\left[\mathcal{A}_i=1 \,\left|~\parbox{150pt}{ $\mathcal{A}_{j}=1$ for all $j \in J$ and\\ $\mathcal{A}_j=0$ for all $j\in [i-1]\setminus J$}\right. \right] \le p_2 \end{align} for every $i\in[n]$ and every $J\subseteq[i-1]$ then \[ \PP[\mathcal{A} \le (1-c) p_1n] \le \exp\left(-\frac{c^2}3 p_1n \right) \] and \[ \PP[\mathcal{A} \ge (1+c)p_2n] \le \exp\left(-\frac{c^2}3 p_2n \right)\, . \] \end{lemNN} The somewhat technical conditioning in~\eqref{eq:pseudo:Independence} allows us to bound the probability for the event $\mathcal{A}_i=1$ even if we condition on any outcome of the events $\mathcal{A}_j$ with $j<i$. The idea of the proof now is to relate the random variable $\mathcal{A}$ to a truly binomially distributed random variable and then use a Chernoff bound. \begin{proof}[of Lemma~\ref{lem:pseudo:Chernoff}] For $k,\ell\in \field{N}_0$ define $a_{\ell,k}=\PP[\sum_{i\le\ell}\mathcal{A}_i \le k]$ and $b_{\ell,k}=\PP[B_{\ell,p_1}\le k]$ where $B_{\ell,p_1}$ is a binomially distributed random variable with parameters $\ell$ and $p_1$. So both $a_{\ell,k}$ and $b_{\ell,k}$ give the probability that a random variable (depending on $\ell$ and $p_1$) is at most a certain value $k$.
The following claim relates these two probabilities. \begin{claim} \label{cl:pseudo:Chernoff:Compare} For every $k\ge 0$, $\ell\ge 0$ we have $a_{\ell,k}\le b_{\ell,k}$. \end{claim} \begin{claimproof}[] We will prove the claim by induction on $\ell$. For $\ell=0$ we trivially have $a_{0,k} = 1 = b_{0,k}$ for all $k\ge 0$. Now assume that the claim is true for $\ell-1$ and every $k\ge 0$. For $k=0$ the lower bound in~\eqref{eq:pseudo:Independence} gives \[ a_{\ell,0} \le (1-p_1)a_{\ell-1,0}\le (1-p_1)b_{\ell-1,0} = b_{\ell,0}\,. \] As \[ \PP\left[\mathcal{A}_\ell=1 \,\left|~\parbox{150pt}{ $\mathcal{A}_{j}=1$ for all $j \in J$ and\\ $\mathcal{A}_j=0$ for all $j\in [\ell-1]\setminus J$}\right. \right] \ge p_1 \] for every $J\subseteq [\ell-1]$ it follows that for $k\ge 1$ \begin{align} \label{eq:pseudo:Chernoff:Compare:1} a_{\ell,k} &\le a_{\ell-1,k-1} + (a_{\ell-1,k}-a_{\ell-1,k-1})(1-p_1)\, . \end{align} This upper bound on $a_{\ell,k}$ implies that for every $k\ge 1$ we have \begin{align*} a_{\ell,k} &\leByRef{eq:pseudo:Chernoff:Compare:1} p_1\cdot a_{\ell-1,k-1} + (1-p_1) \cdot a_{\ell-1,k} \\ &\le p_1\cdot b_{\ell-1,k-1} + (1-p_1)\cdot b_{\ell-1,k}\\ &= p_1 \PP[B_{\ell-1,p_1}\le k-1] + (1-p_1) \PP[B_{\ell-1,p_1} \le k]\\ &= \PP[B_{\ell-1,p_1}\le k-1] + (1-p_1) \PP[B_{\ell-1,p_1} = k]\\ &= \PP[B_{\ell,p_1}\le k] = b_{\ell,k}\,. \end{align*} Here the second inequality is due to the induction hypothesis. This finishes the induction step and the proof of the claim. \end{claimproof} Now the first inequality of the lemma follows immediately. We set $\ell=n$, $k=(1-c)p_1n$ and obtain \begin{align*} \PP[\mathcal{A}\le (1-c)p_1n] = a_{n,(1-c)p_1n} &\le b_{n,(1-c)p_1n} = \PP[B_{n,p_1}\le (1-c)p_1n] \le \exp\left(-\frac{c^2}3p_1n\right), \end{align*} where the last inequality follows by Theorem~\ref{thm:ChernoffBound}.
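The comparison $a_{\ell,k}\le b_{\ell,k}$ can also be observed numerically. The following sketch (a hypothetical dependent process of our choosing; not part of the proof) simulates a 0-1 process whose conditional success probability never drops below $p_1=0.3$ and checks that its empirical CDF is dominated by the Binomial CDF with parameter $p_1$:

```python
# Numerical sanity check (hypothetical process, not part of the proof):
# if every conditional success probability is at least p1 = 0.3, then the
# CDF of the dependent sum is dominated by the Binomial(l, p1) CDF.
import math
import random

def binom_cdf(l, p, k):
    """P[Binomial(l, p) <= k], computed exactly."""
    return sum(math.comb(l, j) * p ** j * (1 - p) ** (l - j)
               for j in range(k + 1))

def dependent_run(l, rng):
    """A 0-1 process whose success probability depends on the previous
    outcome but never drops below p1 = 0.3."""
    total, prev = 0, 0
    for _ in range(l):
        prev = 1 if rng.random() < (0.6 if prev == 1 else 0.3) else 0
        total += prev
    return total

rng = random.Random(1)
l, p1, trials = 10, 0.3, 20000
counts = [0] * (l + 1)
for _ in range(trials):
    counts[dependent_run(l, rng)] += 1
a = [sum(counts[: k + 1]) / trials for k in range(l + 1)]  # empirical a_{l,k}
b = [binom_cdf(l, p1, k) for k in range(l + 1)]            # exact b_{l,k}
```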
The second assertion of the lemma follows by an analogous argument: set $a_{\ell,k}=\PP[\sum_{i\le\ell}\mathcal{A}_i \ge k]$ and $b_{\ell,k}=\PP[B_{\ell,p_2}\ge k]$ and obtain \begin{align} \label{eq:pseudo:Chernoff:Compare:2} a_{\ell,k} \le a_{\ell-1,k} + (a_{\ell-1,k-1}-a_{\ell-1,k})p_2\, . \end{align} It follows by induction on $\ell$ that \begin{align*} a_{\ell,k} &\leByRef{eq:pseudo:Chernoff:Compare:2} (1-p_2)\cdot a_{\ell-1,k} + p_2\cdot a_{\ell-1,k-1} \\ &\le (1-p_2)\cdot b_{\ell-1,k} + p_2\cdot b_{\ell-1,k-1}\\ &= \PP[B_{\ell-1,p_2}\ge k] + \PP[B_{\ell-1,p_2} = k-1] p_2\\ &= \PP[B_{\ell,p_2}\ge k] = b_{\ell,k}\,. \end{align*} Once more the second inequality follows from the induction hypothesis. Setting $\ell=n$ and $k=(1+c)p_2n$ and using Theorem~\ref{thm:ChernoffBound} again then finishes the proof. \end{proof} In addition we need a similar result with a more complex setup. \begin{lemNN}[Lemma~\ref{lem:pseudo:Chernoff:tuple}] Let $0<p$ and $a,m,n\in \field{N}$. Further let $\mathcal{I}\subseteq\mathcal{P}([n])\setminus\{\emptyset\}$ be a collection of $m$ disjoint sets with at most $a$ elements each. For every $i\in [n]$ let $\mathcal{A}_i$ be a 0-1-random variable. Further assume that for every $I\in\mathcal{I}$ and every $k\in I$ we have \[ \PP\left[\mathcal{A}_k=1 \,\left|~\parbox{150pt}{ $\mathcal{A}_{j}=1$ for all $j \in J$ and\\ $\mathcal{A}_j=0$ for all $j\in [k-1]\setminus J$}\right. \right] \ge p \] for every $J\subseteq [k-1]$ with $[k-1]\cap I\subseteq J$. Then \[ \PP\Big[ \big|\{I\in \mathcal{I}: \mathcal{A}_{i}=1\text{ for all $i\in I$}\}\big|\ge \tfrac{1}{2} p^am\Big] \ge 1-2\exp\Big(-\frac1{12} p^am \Big)\, . \] \end{lemNN} \begin{proof}[of Lemma~\ref{lem:pseudo:Chernoff:tuple}] Let $p>0$, $a,m,n\in\field{N}$ and $\mathcal{I}$ be given. We order the elements of $\mathcal{I}$ as $\mathcal{I}=\{I_1,\dots,I_m\}$ by their respective largest index. 
That is, the $I_j$ are sorted such that for $j'<j$ the largest index $i_j\in I_j$ satisfies $i<i_j$ for all $i\in I_{j'}$. For $i\in[m]$ we now define 0-1-random variables $\mathcal{B}_i$ as \[ \mathcal{B}_i \mathrel{\mathop:}= \begin{cases} 1 & \text{ if $\mathcal{A}_j=1$ for all $j\in I_i$,}\\ 0 & \text{ otherwise.}\end{cases} \] We claim that the random variables $\mathcal{B}_i$ satisfy the lower bound in~\eqref{eq:pseudo:Independence} with $p_1=p^a$. \begin{claim} For every $i\in[m]$ and $J\subseteq [i-1]$ we have \[ \PP\left[\mathcal{B}_i=1 \,\left|~\parbox{150pt}{ $\mathcal{B}_{j}=1$ for all $j \in J$ and\\ $\mathcal{B}_j=0$ for all $j\in [i-1]\setminus J$}\right. \right] \ge p^a\,. \] \end{claim} \begin{claimproof}[] Let $i\in[m]$ and $J\subseteq [i-1]$ be given. We assume that $|I_i|=a$ for ease of notation. (The proof is just the same if $|I_i|<a$.) So let $I_i=\{i_1,\dots,i_a\}$ be in ascending order and define $i_0\mathrel{\mathop:}= 0$. For $v\in \{0,1\}^{i_k-i_{k-1}-1}$ let $H_k(v)$ be the 0-1-random variable with \[ H_k(v) = \begin{cases} 1 & \text{ if $\mathcal{A}_{i_{k-1}+\ell}=v_\ell$ for all $\ell\in[i_k-i_{k-1}-1]$,}\\ 0 & \text{ otherwise.}\end{cases} \] The rationale for this definition is the following. The outcome of $\mathcal{B}_i$ is determined by the outcome of the random variables $\mathcal{A}_{i_j}$. However, we cannot neglect the random variables $\mathcal{A}_\ell$ for $\ell \notin I_i$ as the $\mathcal{A}_\ell$ are not mutually independent. Instead we condition the probability of $\mathcal{A}_{i_j}=1$ on possible outcomes of $\mathcal{A}_\ell$ with $\ell< i_j$. Now $H_k(v)=1$ with $v \in \{0,1\}^{i_k-i_{k-1}-1}$ represents one outcome for the $\mathcal{A}_\ell$ with $i_{k-1}<\ell<i_k$. We call the $v\in\{0,1\}^{i_k-i_{k-1}-1}$ the \emph{history} between $\mathcal{A}_{i_{k-1}}$ and $\mathcal{A}_{i_k}$.
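To illustrate the history notation on a concrete (hypothetical) instance:

```latex
% Hypothetical instance: suppose I_i = \{2,5\}, so a = 2 and
% i_0 = 0, i_1 = 2, i_2 = 5.  Then
\[
  v_1 \in \{0,1\}^{i_1-1} = \{0,1\}^{1}
    \quad\text{records the outcome of } \mathcal{A}_1,
  \qquad
  v_2 \in \{0,1\}^{i_2-i_1-1} = \{0,1\}^{2}
    \quad\text{records the outcomes of } \mathcal{A}_3, \mathcal{A}_4,
\]
% and H_1(v_1) = 1 (resp. H_2(v_2) = 1) exactly when the outcomes of
% A_1 (resp. A_3, A_4) match the entries of v_1 (resp. v_2).
```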
It follows from the requirements of Lemma~\ref{lem:pseudo:Chernoff:tuple} that for any tuple $(v_1,\dots,v_k)\in \{0,1\}^{i_1-1}\times \{0,1\}^{i_2-i_1-1}\times \dots \times \{0,1\}^{i_k-i_{k-1}-1}$ we have \begin{align} \label{eq:pseudo:Chernoff:tuple:1} \PP\left[\mathcal{A}_{i_k}=1 \left| ~ \parbox{150pt}{ $\mathcal{A}_{i_j}=1$ for all $j\in[k-1]$ and\\ $H_j(v_j)=1$ for all $j\in[k]$} \right. \right] \ge p\,. \end{align} However, we are not interested in every possible history $\left(v_1,\dots,v_k\right)$ as some of the histories cannot occur simultaneously with the event $\mathcal{B}=1$ where \begin{align*} \mathcal{B}=1 &\text{ if and only if } \left[\parbox{140pt}{$\mathcal{B}_j=1$ for all $j\in J$ and\\ $\mathcal{B}_j=0$ for all $j\in [i-1]\setminus J$}\right]\,. \\ \intertext{For ease of notation we define the following shortcuts} \mathcal{H}(v_1,\dots,v_k)=1 &\text{ if and only if } H_j(v_j)=1 \text{ for all $j\in [k]$,}\\ \mathcal{A}^{(k)}=1 &\text{ if and only if } \mathcal{A}_{i_j}=1 \text{ for all $j\in [k]$.} \end{align*} Moreover, we define $C_k$ to be the set of all tuples $(v_1,\dots,v_k) \in \{0,1\}^{i_1-1}\times \{0,1\}^{i_2-i_1-1}\times \dots \times \{0,1\}^{i_k-i_{k-1}-1}$ with \[ \PP\left[ \mathcal{H}(v_1,\dots,v_k)=1 \text{ and } \mathcal{B}=1 \right] >0\,. \] In other words, the elements of $C_k$ are those histories that are compatible with the event that we condition on in the claim. Note in particular that $\mathcal{B}=1$ if and only if there is a $(v_1,\dots,v_a)\in C_a$ with $\mathcal{H}(v_1,\dots,v_a)=1$. With these definitions we can rewrite the probability in the assertion of our claim as \begin{align*} \PP\left[\mathcal{B}_i=1 \mid \mathcal{B}=1 \right] = \sum_{(v_1,\dots,v_a)\in C_a} \PP\left[\parbox{90pt}{ $\mathcal{A}^{(a)}=1$ and \\$\mathcal{H}(v_1,\dots,v_a)=1$} \,\Big|~\mathcal{B}=1~\Big.\right]\,.
\end{align*} We now prove by induction on $k$ that \begin{align} P_k\mathrel{\mathop:}=\sum_{(v_1,\dots,v_k)\in C_k} \PP\left[\parbox{90pt}{ $\mathcal{A}^{(k)}=1$ and \\$\mathcal{H}(v_1,\dots,v_k)=1$} \,\Big|~\mathcal{B}=1~\Big.\right] \ge p^k \end{align} for all $k\in[a]$. The induction base $k=1$ is immediate from the requirements of the lemma as \[ P_1 = \sum_{v_1\in C_1} \PP\left[\parbox{65pt}{ $\mathcal{A}^{(1)}=1$ and \\$\mathcal{H}(v_1)=1$} \,\Big|~\mathcal{B}=1~\Big.\right] \geByRef{eq:pseudo:Chernoff:tuple:1} p \sum_{v_1\in C_1} \PP\left[\mathcal{H}(v_1)=1 \,\big|~\mathcal{B}=1~\big.\right] = p\,. \] The last equality above follows by total probability from the definition of $C_1$. So assume that the induction hypothesis holds for $k-1$. Then \begin{align*} P_k &= \sum_{(v_1,\dots,v_k)\in C_k} \PP\left[\parbox{90pt}{ $\mathcal{A}^{(k)}=1$ and \\$\mathcal{H}(v_1,\dots,v_k)=1$} \,\Big|~\mathcal{B}=1~\Big.\right] \\ &= \sum_{(v_1,\dots,v_k)\in C_k} \PP\left[\mathcal{A}_{i_k}=1 \, \Big| \parbox{130pt}{$\mathcal{B}=1$ and $\mathcal{A}^{(k-1)}=1$ and \\$\mathcal{H}(v_1,\dots,v_k)=1$} \Big. \right] \cdot \PP\left[\parbox{90pt}{$\mathcal{A}^{(k-1)}=1$ and \\$\mathcal{H}(v_1,\dots,v_k)=1$} \Big| \mathcal{B}=1 \Big. \right] \\ &\geByRef{eq:pseudo:Chernoff:tuple:1} p \cdot \sum_{(v_1,\dots,v_k)\in C_k} \PP\left[\parbox{90pt}{$\mathcal{A}^{(k-1)}=1$ and \\$\mathcal{H}(v_1,\dots,v_k)=1$} \Big| \mathcal{B}=1 \Big. \right] \\ &= p \cdot \sum_{(v_1,\dots,v_{k-1})\in C_{k-1}} \PP\left[\parbox{100pt}{$\mathcal{A}^{(k-1)}=1$ and \\$\mathcal{H}(v_1,\dots,v_{k-1})=1$} \Big| \mathcal{B}=1 \Big. \right] \\ &= p \cdot P_{k-1} \ge p^k\,. \end{align*} The claim now follows as \[ \PP[\mathcal{B}_i=1 \mid \mathcal{B}=1] = \sum_{(v_1,\dots,v_a)\in C_a} \PP\left[\parbox{90pt}{ $\mathcal{A}^{(a)}=1$ and \\$\mathcal{H}(v_1,\dots,v_a)=1$} \,\Big|~\mathcal{B}=1~\Big.\right] \ge p^a\,. 
\] \end{claimproof} We have seen that the $\mathcal{B}_i$ satisfy the conditioning requirement~\eqref{eq:pseudo:Independence} with $p_1=p^a$ (and, trivially, $p_2=1$). Thus we can apply Lemma~\ref{lem:pseudo:Chernoff} with $c=\tfrac12$ and derive \[ \PP\left[ |\{i\in[m]: \mathcal{B}_i=1\}| \ge \frac12 p^a m \right] \ge 1-2\exp\left(-\frac1{12}p^a m\right)\,. \] \end{proof} \end{document}
\begin{document} \title {Intersection disjunctions for reverse convex sets} \author{Eli Towle\thanks{[email protected]} \and James Luedtke\thanks{[email protected]}} \date{\small Department of Industrial and Systems Engineering, University of Wisconsin -- Madison} \maketitle \begin{abstract} We present a framework to obtain valid inequalities for a reverse convex set: the set of points in a polyhedron that lie outside a given open convex set. Reverse convex sets arise in many models, including bilevel optimization and polynomial optimization. An intersection cut is a well-known valid inequality for a reverse convex set that is generated from a basic solution that lies within the convex set. We introduce a framework for deriving valid inequalities for the reverse convex set from basic solutions that lie outside the convex set. We first propose an extension to intersection cuts that defines a two-term disjunction for a reverse convex set, which we refer to as an intersection disjunction. Next, we generalize this analysis to a multi-term disjunction by considering the convex set's recession directions. These disjunctions can be used in a cut-generating linear program to obtain valid inequalities for the reverse convex set. \noindent \rule{0pt}{1.5em}\textbf{Keywords:} Mixed-integer nonlinear programming; valid inequalities; reverse convex sets; disjunctive programming; intersection cuts \end{abstract} \begin{acknowledgments} \begin{sloppypar} This material is based upon work supported by the U.S.\ Department of Energy, Office of Science, Office of Advanced Scientific Computing Research (ASCR) under Contract DE-AC02-06CH11357. The authors acknowledge partial support through NSF grant SES-1422768. \end{sloppypar} \end{acknowledgments} \section{Introduction}\label{sec:intro} A reverse convex set is a set of the form $\P \setminus \Q$, where $\P \subseteq \R^n$ is a polyhedron and $\Q \subseteq \R^n$ is an open convex set. 
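To fix ideas, membership in a reverse convex set is easy to test pointwise. A minimal sketch on a hypothetical instance ($\P$ a box, $\Q$ an open ball; the function names are ours):

```python
# Hypothetical instance of a reverse convex set P \ Q:
# P is the box [0,2]^2 and Q is the open unit ball centred at (1,1).
import math

def in_P(x):
    """Polyhedron: 0 <= x_i <= 2 for both coordinates."""
    return all(0.0 <= xi <= 2.0 for xi in x)

def in_Q(x):
    """Open convex set: points at distance < 1 from (1,1)."""
    return math.dist(x, (1.0, 1.0)) < 1.0

def in_reverse_convex(x):
    """x lies in P \\ Q: inside the polyhedron, outside the convex set."""
    return in_P(x) and not in_Q(x)
```

For instance, the corner $(0,0)$ lies in $\P \setminus \Q$, while the centre $(1,1)$ does not.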
Reverse convex sets form a general structure arising in the context of mixed-integer nonlinear programming (MINLP). In this setting, $\P$ is a linear programming relaxation of the MINLP feasible region, and $\Q$ contains no solutions feasible to the problem. We are motivated by cases where $\cl(\Q)$ is either non-polyhedral or is defined by a large number of linear inequalities, because if $\cl(\Q)$ is a polyhedron defined by a small number of inequalities, we can optimize and separate over $\clconv(\P \setminus \Q)$ efficiently using the disjunctive programming techniques of \citet{balas1979,balas1974}. We study valid inequalities for reverse convex sets. These inequalities can be used to strengthen the convex relaxation of any problem for which an open convex set containing no feasible points can be identified; such sets are known as {\em convex $S$-free sets} (e.g., \citet{conforti2014b}). Intersection cuts are valid inequalities for $\P \setminus \Q$; they were introduced in the context of concave minimization by \citet{tuy1964} and later by \citet{balas1971} for integer programming. The inequalities of \citet{tuy1964} are often referred to as ``concavity cuts'' or ``$\gamma$-valid'' cuts. For ease of exposition, we refer to such inequalities as ``intersection cuts'' due to their similarity to the inequalities of \citet{balas1971}. An intersection cut is generated from a basic solution $\xbasis$ of $\P$ that satisfies $\xbasis \in \Q$. A basic solution $\xbasis$ of $\P$ corresponding to basis $B$ forms the apex of a translated simplicial cone $\PB$ defining a relaxation of $\P$. For each extreme ray of this cone, the point where the ray crosses $\bd(\Q)$ is found. A hyperplane $c^\tp x = d$ is formed that passes through all of these points and satisfies $c^\tp \xbasis > d$. The intersection cut $c^\tp x \leq d$ is valid for $\P \setminus \Q$. For a detailed review of intersection cuts, see Section~\ref{sec:ic}.
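When $\Q$ is an open ball, this construction can be carried out in closed form. The following sketch (a hypothetical two-dimensional instance; the apex, rays, and ball are our choices) computes the step $\lambda_j$ at which each extreme ray leaves $\Q$ and the resulting cut coefficients $1/\lambda_j$ in the nonbasic variables:

```python
# Intersection cut sketch for a hypothetical instance: apex at the origin
# (assumed to lie inside Q), unit extreme rays e_1, e_2, and Q an open
# ball of radius rho centred at q.  For each ray r we solve
# ||apex + t*r - q|| = rho for the positive root t = lambda_j.
import math

def exit_step(apex, ray, center, rho):
    """Positive root of ||apex + t*ray - center||^2 = rho^2 (unit ray)."""
    d = [a - c for a, c in zip(apex, center)]
    b = sum(r * di for r, di in zip(ray, d))
    c0 = sum(di * di for di in d) - rho ** 2   # < 0 since the apex is in Q
    return -b + math.sqrt(b * b - c0)

apex = (0.0, 0.0)
rays = [(1.0, 0.0), (0.0, 1.0)]
center, rho = (0.2, 0.2), 0.5
lambdas = [exit_step(apex, r, center, rho) for r in rays]
# Cut sum_j x_j / lambda_j >= 1: the apex violates it (0 < 1), and each
# boundary point apex + lambda_j * e_j satisfies it with equality.
coeffs = [1.0 / lam for lam in lambdas]
```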
Our main contribution in this work is to show how valid inequalities for $\P \setminus \Q$ can be obtained from a basic solution $\xbasis$ of $\P$ {\em in the case where $\xbasis \notin \cl(\Q)$}. Because $\xbasis \notin \Q$, intersection cuts generated using the cone $\PB$ are not valid or even well-defined in general. However, under the assumption that each extreme ray of $\PB$ that does not intersect $\Q$ lies in the recession cone of $\Q$, we present two linear inequalities that form a two-term disjunction that contains $\P \setminus \Q$. If $\P$ intersected with one of these inequalities is empty, the inequality defining the other disjunctive term is valid for $\P \setminus \Q$. We call inequalities obtained in this manner {\em external intersection cuts}. If both disjunctive terms are nonempty, we can generate valid inequalities for $\P \setminus \Q$ using the standard cut-generating linear program (CGLP) for disjunctive programming of \citet{balas1979,balas1974}. We refer to these disjunctions as {\em intersection disjunctions}. \citet{glover1974} exploits the recession structure of $\Q$ to strengthen the intersection cut. This motivates us to extend our analysis by considering $\recc(\Q)$, the recession cone of $\Q$. We present a relaxation of $\PB \setminus \Q$ that incorporates the structure of $\recc(\Q)$. We provide a class of valid inequalities for the relaxation, the size of which grows exponentially with the dimension of the problem. We derive a polynomial-size extended formulation that captures the full strength of this exponential family of inequalities. We then prove that the proposed relaxation of $\PB\setminus \Q$ is equivalent to the union of at most $n$ possibly nonconvex sets, thereby forming a disjunction that contains the reverse convex set. Under some assumptions, we propose a polyhedral relaxation of each disjunctive term individually. 
Given these polyhedral relaxations, we can use a CGLP to generate disjunctive cuts for $\P \setminus \Q$. This paper is organized as follows. In Section~\ref{subsec:relatedlit}, we review related literature. In Section~\ref{subsec:rc-examples}, we provide motivating examples of reverse convex sets. In Section~\ref{sec:ic}, we review intersection cuts. In Section~\ref{sec:cuts-1}, we present a two-term disjunction that contains $\P \setminus \Q$ and is generated by basic solutions of $\P$ that lie outside of $\Q$. In Section~\ref{sec:cuts-2}, we extend this analysis by presenting a multi-term disjunction for $\P \setminus \Q$ by considering $\recc(\Q)$. We propose extended formulations that can be used to define polyhedral relaxations of the disjunctive terms. \subsection{Related literature}\label{subsec:relatedlit} The problem of optimizing a linear function over a reverse convex set is known as linear reverse convex programming (LRCP). \citet{tuy1987} shows that any convex program with multiple reverse convex constraints can be reduced to one with a single reverse convex constraint with the introduction of an additional variable and an additional convex constraint. By reduction from a concave minimization problem, optimizing a linear function over a reverse convex set is NP-hard. \citet{matsui1996} shows that this holds even in special cases restricting the structure of the linear constraints or the convex set $\Q$. Reverse convex optimization problems were first studied from a global optimality perspective in the 1970s (e.g., \citet{bansal1975b,bansal1975}, \citet{hillestad1975}, and \citet{ueing1972}). \citet{hillestad1980b,hillestad1980} presented one of the first cutting plane algorithms for LRCP, though it does not always converge to an optimal solution (e.g., \citet{gurlitz1985}). Numerous algorithms for solving LRCP have been developed. 
\citet{gurlitz1991} present a partial enumeration procedure, \citet{thuong1984} propose sequentially solving concave minimization problems, and \citet{fulop1990} proposes a cutting plane algorithm that cuts off edges of the polyhedron that do not contain an optimal solution. \citet{bensaad1990} present a cutting plane algorithm to solve LRCP based on level sets, but later show \cite{bensaad1994} that it does not converge to a globally optimal solution. Branch-and-bound methods from the concave minimization literature have also been adapted to solve reverse convex optimization problems (e.g., \citet{horst1988}, \citet{horst1990}, \citet{muu1985}, and \citet{ueing1972}). \citet{hillestad1980} define the concept of a basic solution for LRCP. They show that the convex hull of the feasible region of LRCP is a polytope if the linear constraints form a polytope and the functions defining the reverse convex constraints are differentiable. \citet{sen1987} extend this result, showing that the closure of the convex hull of any polyhedron intersected with a finite number of reverse convex constraints is a polyhedron. In cutting plane algorithms for MINLP problems, intersection cuts may be constructed from the basis corresponding to an optimal solution to an LP relaxation of the problem. Infeasible basic solutions of $\P$ within $\Q$ are also candidates for generating intersection cuts for $\P \setminus \Q$ and may yield intersection cuts that are not dominated by those generated from feasible basic solutions. Gomory mixed-integer (GMI) cuts for mixed-integer linear programming behave similarly. In particular, \citet{nemhauser1990} showed that the intersection of GMI cuts from all basic solutions is equivalent to the split closure. This is not true when considering only GMI cuts from basic feasible solutions (e.g., \citet{cornuejols2001}). Intersection cuts can be generated from any convex set that does not contain feasible points in its interior.
In integer programming, these are maximal lattice-free convex sets. More generally, these types of sets are convex $S$-free sets. When considering a fixed basis, intersection cuts generated using a convex set $\Q$ can only be stronger than those produced using a subset of $\Q$. \citet{balas2013} generalize intersection cuts such that inequalities for $\P \setminus \Q$ can be obtained using a more general polyhedron rather than a translated simplicial cone. \citet{glover1974} proposes improved intersection cuts for the special case where $\Q$ is a polyhedron. Intersection cuts omit from the inequality the variables corresponding to extreme rays of $\PB$ that lie within the recession cone of $\Q$. \citet{glover1974} uses the polyhedron's recession information to include these terms with negative coefficients, thereby strengthening the cut. A similar strengthening is proposed for polynomial optimization problems by \citet{bienstock2020}. The idea of improving the intersection cut by considering $\recc(\Q)$ has been studied in the context of minimal valid functions. \citet{dey2010} consider minimal valid functions for a polyhedral $\cl(\Q)$ and note that the minimal valid function for $\PB \setminus \Q$ is the uniquely defined intersection cut if $\xbasis \in \Q$ and $\interior(\recc(\Q)) = \emptyset$. Results from this work were extended by \citet{basu2010} and \citet{basu2011}. \citet{fukasawa2011} use the nonnegativity of integer variables to derive minimal valid inequalities for a mixed-integer set. These inequalities consider recession directions of a relevant convex lattice-free set and thus may contain negative variable coefficients. Ultimately, standard approaches to generating intersection cuts for $\P \setminus \Q$ require a basic solution of $\P$ that lies within the convex set $\Q$. In this paper, we present a framework for constructing valid inequalities for $\P \setminus \Q$ using a basic solution that lies outside $\cl(\Q)$.
\subsection{Motivating examples}\label{subsec:rc-examples} In many MINLP problems, a reverse convex set $\P \setminus \Q$ can be derived via a problem reformulation. In this case, the polyhedron $\P$ is a relaxation of the MINLP feasible region, and the set $\Q$ is an open convex set which is known to contain no solutions feasible to the MINLP. We can use the set $\P \setminus \Q$ to derive valid inequalities for the original problem. We motivate our study of reverse convex sets by showing how this structure appears in a variety of MINLP contexts. For all of the examples that follow, the closure of the set $\Q$ we derive is non-polyhedral, or possibly defined by a large number of linear inequalities. One example of this reverse convex structure appears in difference of convex (DC) functions (e.g., \citet{tuy1986}). A function $f \colon \R^n \rightarrow \R$ is a DC function if there exist convex functions $g, h \colon \R^n \rightarrow \R$ such that $f(x) = g(x) - h(x)$ for all $x \in \R^n$. A DC set can be written as \begin{align} \{x \in \R^n \colon g(x) - h(x) \leq 0 \}. \label{eq:dcset} \end{align} Equivalently, we can write \eqref{eq:dcset} as $\proj_x (\dcset)$, where $\dcset \coloneqq \{(x,t) \in \R^n \times \R \colon g(x) - t \leq 0,\ h(x) - t \geq 0 \}$. The convex set $\Q = \{(x,t) \in \R^n \times \R \colon h(x) - t < 0 \}$ contains no points feasible to $\dcset$. \citet{hartman1959} shows the class of DC functions is broad, subsuming all twice continuously differentiable functions. Reverse convex sets also appear in the context of polynomial optimization. \citet{bienstock2020} consider the set of symmetric matrices representable as the outer-product of a vector with itself: $\{x x^\tp \colon x \in \R^n \}$. Polynomial optimization problems can be reformulated to include the constraint that a square matrix of variables is outer-product representable. 
\citet{bienstock2020} construct non-polyhedral {\em outer-product-free} sets $\Q$ that do not contain any matrices representable as an outer-product of some vector, and as such are not feasible to the problem. Accordingly, they present families of cuts for $\P \setminus \Q$, where $\P$ is formed by the linear constraints of the problem reformulation. They characterize sets that are {\em maximal} outer-product-free, that is, not contained in any other outer-product-free sets. For the specific case of quadratically constrained programs (QCPs), \citet{saxena2010} use disjunctive programming techniques to derive valid inequalities for a reverse convex set in an extended variable space. In a companion paper, \citet{saxena2011} suggest the following {\em eigen-reformulation} of the quadratic constraint $x^\tp A x + a^\tp x + b \leq 0$: \begin{align*} \mash{\sum\limits_{j \colon \lambda_j > 0}}\ &\lambda_j (v_j^\tp x)^2 + \mash{\sum\limits_{j \colon \lambda_j < 0}} \lambda_j s_j + a^\tp x + b \leq 0 \\ s_j =\ &(v_j^\tp x)^2 \quad \forall j \colon \lambda_j < 0, \end{align*} where $\lambda_1, \ldots, \lambda_n$ denote the eigenvalues of $A$ and $v_1, \ldots, v_n$ the corresponding eigenvectors. The convex set $\{ (x,s) \in \R^n \times \R^n \colon s_j > (v_j^\tp x)^2\ \forall j \textrm{ s.t. } \lambda_j < 0\}$ does not contain any points feasible to QCP. Reverse convex sets can also be used to define relaxations of bilevel optimization problems. Bilevel programs include constraints of the form $d^\tp y \leq \mathit{\Phi}(x)$, where $\mathit{\Phi}(x)$ is the {\em value function} of the lower-level problem for a fixed top-level decision $x$: \begin{align*} \mathit{\Phi}(x) &\coloneqq \min_{y} \{ d^\tp y \colon Ax + By \leq b\}. \end{align*} The function $\mathit{\Phi}(\cdot)$ is convex. The set $\{(x,y) \colon d^\tp y > \mathit{\Phi}(x)\}$ is defined by a reverse convex inequality and does not contain any points feasible to the bilevel program. 
The closure of this set is polyhedral, but may be defined by a large number of linear inequalities. \citet{fischetti2016} propose intersection cuts for a specific class of bilevel integer programming problems. \section{Intersection cut review}\label{sec:ic} We briefly review intersection cuts, following the presentation of \citet{conforti2014}. Let $A \in \R^{m \times n}$ be a matrix with full row rank and let $b \in \R^m$. Let $\P = \{x \in \R^n_{+} \colon Ax = b\}$ be a polyhedron. Let $\Q \subseteq \R^n$ be an open convex set. We are interested in valid inequalities for the reverse convex set $\P \setminus \Q$. For a basis $\B$ of $\P$, let $N = \{1, \ldots, n\} \setminus \B$ be the nonbasic variables. For some $\abar \in \R^{|\B| \times |N|}$ and $\bbar \in \R^{|\B|}_{+}$, we can rewrite $\P$ as \begin{align*} \P &= \Big\lbrace x \in \R^n \colon x_i = \bbar_i - \sum_{j \in N} \abar_{ij} x_j, i \in \B,\ x_j \geq 0, j = 1, \ldots, n \Big\rbrace. \end{align*} The basic solution corresponding to basis $\B$ is $\xbasis$, where $\xbasis_i = \bbar_i$ if $i \in \B$, and $0$ if $i \in N$. By removing the nonnegativity constraints on variables $x_i$, $i \in \B$, we obtain $\PB$, the cone admitted by the basis $\B$. The basic solution $\xbasis$ forms the apex of $\PB \supseteq \P$. There is an extreme ray $\rbar^j$ of $\PB$ for each $j \in N$: \begin{align*} \rbar^j_k &= \begin{cases} -\abar_{kj} & \textrm{if } k \in \B \\ 1 & \textrm{if } k = j \\ 0 & \textrm{if } k \in N \setminus \{j\}. \end{cases} \end{align*} The conic hull of the extreme rays $\{\rbar^j \colon j \in N\}$ forms the recession cone of $\PB$. Together, the basic solution $\xbasis$ and these extreme rays provide a complete internal representation of $\PB$, namely, $\PB = \{ \xbasis + \sum_{j \in N} x_j \rbar^j \colon x \in \R^{|N|}_{+} \}$. Intersection cuts are valid inequalities for $\PB \setminus \Q$ constructed from basic solutions of $\P$ that lie within $\Q$.
These cuts are transitively valid for $\P \setminus \Q \subseteq \PB \setminus \Q$. Assume $\xbasis \in \Q$. For each $j \in N$, let $\exit_j$ be defined as \begin{align}\label{eq:ic-exit-def} \exit_j &\coloneqq \sup\{ \exit \geq 0 \colon \xbasis + \exit \rbar^j \in \Q \}. \end{align} The set $\{\xbasis + \exit_j \rbar^j \colon j \in N \}$ is the set of points where the extreme rays of $\PB$ emanating from $\xbasis$ leave the set $\Q$. Because $\Q$ is open, $\exit_j > 0$ for all $j \in N$. If $\exit_j = +\infty$, $\rbar^j$ lies in the recession cone of $\Q$. The following inequality is valid for $\P \setminus \Q$ (\citet{balas1971}): \begin{align} \sum\limits_{j \in N} \frac{x_j}{\exit_j} \geq 1. \label{eq:int-cut-inequality} \end{align} We refer to \eqref{eq:int-cut-inequality} as the {\em standard intersection cut}. Here, and throughout the paper, we use the convention that $x / \pm\infty \coloneqq 0$. \textbf{Notation.} Let $\extreals \coloneqq \R \cup \{-\infty, +\infty\}$ be the extended real numbers. For a nonzero vector $r \in \R^n$ and $\enter,\exit \in \extreals$, we define the line segment $\intervalr{\enter}{\exit}{} \coloneqq \{\lambda r \colon \lambda \in (\enter, \exit) \}$. Closed brackets (e.g., $\clintervalr{\enter}{\exit}{}$) denote the inclusion of one or both endpoints of the line segment. The set $\intervalr{\enter}{\exit}{}$ is unbounded if and only if $\enter = -\infty$ or $\exit = +\infty$. We remark that $\lcintervalnor{0}{+\infty}r$ is equivalent to $\cone(r)$. However, we use the notation $\lcintervalnor{0}{+\infty}r$ for consistency. \section{Intersection disjunctions and external intersection cuts}\label{sec:cuts-1} \phantom{}In this paper, we consider a fixed basis $B$ and corresponding basic solution $\xbasis$. Let $\PB$ be defined as in Section~\ref{sec:ic}. For the remainder of this paper, we assume the basic solution $\xbasis$ lies outside of $\cl(\Q)$. 
Recall $\PB$ is a translated simplicial cone with apex $\xbasis$ and linearly independent extreme rays $\{\rbar^j \colon j \in N\}$. For all $j \in N$, let $\exit_j$ be defined as in \eqref{eq:ic-exit-def}, and let \begin{alignat*}{2} \enter_j &\coloneqq \inf\{ \enter \geq 0 \colon \xbasis + \enter \rbar^j \in \Q \}. \end{alignat*} For $j \in N$, we use the convention $\enter_j = +\infty$ and $\exit_j = -\infty$ if the set $\{\xbasis\} + \lcinterval{0}{+\infty}{j}$ does not intersect $\Q$. If $\cl(\Q)$ is polyhedral and $\{\xbasis\} + \lcinterval{0}{+\infty}{j}$ intersects $\Q$, $\enter_j$ and $\exit_j$ can be obtained by solving a linear program. If $\cl(\Q)$ is non-polyhedral, a convex program may be required to obtain these parameters. An exception is the case where $\Q$ is bounded and a point in $\Q \cap (\{\xbasis\} + \lcinterval{0}{+\infty}{j})$ is known a priori, in which case a binary search can be performed to find the values of $\enter_j$ and $\exit_j$. We partition $N$ into the following three sets: \begin{align*} \N0 &\coloneqq \{j \in N \colon \enter_j = +\infty, \exit_j = -\infty\} \\ \N1 &\coloneqq \{j \in N \colon \enter_j \in (0, +\infty), \exit_j = +\infty\} \\ \N2 &\coloneqq \{j \in N \colon \enter_j \in (0, +\infty), \exit_j \in (\enter_j, +\infty)\}. \end{align*} For $j \in \N0$, the halfline $\{\xbasis\} + \lcinterval{0}{+\infty}{j}$ does not intersect $\Q$. Observe $\rbar^j \in \recc(\Q)$ for $j \in \N1$. Throughout Section~\ref{sec:cuts-1}, we make the following assumption. \begin{assumption}\label{ass:1} It holds that $\rbar^j \in \recc(\Q)$ for all $j \in \N0$. \end{assumption} We say a disjunction is {\em valid} for a set if the disjunction contains the set. Theorem~\ref{thm:n1-n2} proposes a valid disjunction for $\PB \setminus \Q \supseteq \P \setminus \Q$. 
\begin{theorem}\label{thm:n1-n2} Under Assumption~\ref{ass:1}, for every $x \in \PB \setminus \Q$, either \begin{align} \sum_{j \in N} \frac{x_j}{\enter_j} &\leq 1, \textrm{ or } \sum_{j \in N} \frac{x_j}{\exit_j} \geq 1. \label{eq:dis} \end{align} \end{theorem} \begin{proof} If $N = \N0$, then $\enter_j = +\infty$ for all $j \in N$, and all $x \in \PB \setminus \Q$ trivially satisfy $\sum_{j \in N} x_j / \enter_j \leq 1$. We prove the result for $N \neq \N0$. Assume $\xhat \in \PB$ satisfies $u \coloneqq \sum_{j \in N} \xhat_j /\enter_j > 1$ and $\ell \coloneqq \sum_{j \in N} \xhat_j / \exit_j < 1$. We show $\xhat \in \Q$. Because $u > 1$ and $\ell < 1$, there exists $\gamma \in (0, 1)$ such that $\gamma u + (1 - \gamma)\ell = 1$. For all $j \in N$, let $\theta_j \coloneqq \gamma \xhat_j / \enter_j + (1 - \gamma) \xhat_j / \exit_j \in [0, 1]$. It holds that $\theta_j = 0$ if and only if $j \in \N0$ or $\xhat_j = 0$. Therefore, $\sum_{j \in \N1 \cup \N2 \colon \xhat_j > 0} \theta_j = 1$. We write $\xhat$ as \begin{align} \xhat &= \ \mash{\sum\limits_{\substack{j \in \N1 \cup \N2 \colon \\ \xhat_j > 0}}} \theta_j \bigg( \xbasis + \frac{\xhat_j}{\theta_j} \rbar^j \bigg) + \sum\limits_{j \in \N0} \xhat_j \rbar^j. \label{eq:thm1-case1} \end{align} Consider $j \in \N1 \cup \N2$ satisfying $\xhat_j > 0$. If $j \in \N1$, then $\xhat_j / \theta_j = \enter_j / \gamma \in (\enter_j, \exit_j)$. Similarly, if $j \in \N2$, then $\xhat_j / \theta_j = \enter_j \exit_j / (\gamma \exit_j + (1 - \gamma) \enter_j) \in (\enter_j, \exit_j)$. In both cases, $\xbasis + (\xhat_j / \theta_j) \rbar^j \in \Q$. By \eqref{eq:thm1-case1}, $\xhat$ is a convex combination of points in $\Q$ plus an element of $\recc(\Q)$. Thus, $\xhat \in \Q$. \end{proof} \begin{remark}\label{remark:0} The two-term disjunction \eqref{eq:dis} can be used in a disjunctive framework to generate valid inequalities for $\P \setminus \Q$. 
Specifically, the set $\P \setminus \Q$ is a subset of $\P_1 \cup \P_2$, where $\P_1 \coloneqq \{x \in \P \colon \sum_{j \in N} x_j / \enter_j \leq 1\}$ and $\P_2 \coloneqq \{x \in \P \colon \sum_{j \in N} x_j / \exit_j \geq 1\}$. The sets $\P_1$ and $\P_2$ are polyhedral, because $\P$ is a polyhedron and the inequalities \eqref{eq:dis} are linear. We can obtain valid inequalities for $\conv(\P \setminus \Q)$ by generating valid inequalities for $\conv(\P_1 \cup \P_2)$ using the disjunctive programming approach of \citet{balas1979,balas1974}. \end{remark} \begin{remark}\label{remark:1} We consider the relationship between the two-term disjunction \eqref{eq:dis} and the standard intersection cut. The two-term disjunction \eqref{eq:dis} assumes that the basic solution $\xbasis$ does not lie within $\cl(\Q)$. If $\xbasis \in \Q$, then $\N0 = \emptyset$ (trivially, every extreme ray of $\PB$ emanating from $\xbasis \in \Q$ intersects $\Q$) and $\enter_j = 0$ for all $j \in N$. Because $\enter_j = 0$ for all $j \in N$, the inequality $\sum_{j \in N} x_j / \enter_j \leq 1$ of \eqref{eq:dis} is ill-defined. Instead, we can show that all points in $\PB \setminus \Q$ lie in either $\{\xbasis\}$ or $\{x \in \R^n \colon \sum_{j \in N} x_j / \exit_j \geq 1\}$. Because $\xbasis \in \Q$, the point $\xbasis$ does not belong to $\PB \setminus \Q$, so the inequality $\sum_{j \in N} x_j / \exit_j \geq 1$ alone is valid for $\PB \setminus \Q$. This is precisely the standard intersection cut of \citet{balas1971}. \end{remark} \begin{example}\label{ex:pq} Let $\P = \R_{+}^2$ and $\Q = \{ x \in \R^2 \colon (x_1 - 1)^2 - x_2 < 1/2 \}$. Consider $\PB$ generated by the (only) basic solution of $\P$, $\xbasis = (0,0) \notin \Q$. In this case, $\PB = \P$. The feasible region $\P \setminus \Q$ is the disconnected set shaded in Figure~\ref{fig:t-1}. The inequalities \eqref{eq:dis} form a valid disjunction for $\P \setminus \Q$, shown in Figure~\ref{fig:two-way-dis}.
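The disjunction of Theorem~\ref{thm:n1-n2} can be verified numerically for this example. The following sketch (ours, not part of the formal development) uses the closed-form entry and exit values along the two coordinate rays and checks every grid point of $\P \setminus \Q$ against \eqref{eq:dis}.

```python
# Numerical check of the disjunction (eq:dis) for Example ex:pq, where
# P = PB = R^2_+, xbar = (0, 0), rbar^1 = e1, rbar^2 = e2, and
# Q = {x : (x1 - 1)^2 - x2 < 1/2}.  Along e1 the ray meets Q for
# t in (1 - sqrt(1/2), 1 + sqrt(1/2)); along e2 for t > 1/2, so 2 is in N1.
import math, itertools

in_Q = lambda x: (x[0] - 1.0) ** 2 - x[1] < 0.5
enter = [1.0 - math.sqrt(0.5), 0.5]
exit_ = [1.0 + math.sqrt(0.5), math.inf]  # exit_2 = +infinity

def satisfies_disjunction(x):
    left = x[0] / enter[0] + x[1] / enter[1] <= 1.0 + 1e-9
    right = x[0] / exit_[0] + x[1] / exit_[1] >= 1.0 - 1e-9  # x2/inf == 0
    return left or right

# Every sampled point of P \ Q satisfies at least one term of (eq:dis).
grid = [i * 0.1 for i in range(0, 41)]
violations = [x for x in itertools.product(grid, grid)
              if not in_Q(x) and not satisfies_disjunction(x)]
```

An empty `violations` list is consistent with Theorem~\ref{thm:n1-n2}; a grid search of course only samples $\P \setminus \Q$ and does not replace the proof.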
\begin{figure}
\begin{subfigure}[t]{0.48\textwidth}
\caption{The set $\P \setminus \Q$ for Example~\ref{ex:pq}.}
\label{fig:t-1}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.48\textwidth}
\caption{Every point in the darkened set $\P \setminus \Q$ satisfies one of the inequalities \eqref{eq:dis}.}
\label{fig:two-way-dis}
\end{subfigure}
\caption{The two-term disjunction of Theorem~\ref{thm:n1-n2}.}
\end{figure}
\end{example}
Proposition~\ref{prop:termwise-conv} states that if $\Q$ is bounded, then the inequality defining each term of \eqref{eq:dis} is sufficient to define the convex hull of the points in $\PB \setminus \Q$ satisfying that inequality. This is not true if the interior of $\recc(\Q)$ is nonempty, as shown by \citet{dey2010} for the standard intersection cut \eqref{eq:int-cut-inequality}. \begin{proposition}\label{prop:termwise-conv} Under Assumption~\ref{ass:1}, if $\Q$ is bounded, then \begin{align*} \conv(\{x \in \PB \setminus \Q \colon \textstyle\sum_{j \in N} x_j / \enter_j \leq 1\}) &= \{x \in \PB \colon \textstyle\sum_{j \in N} x_j / \enter_j \leq 1\}, \textrm{ and} \\ \conv(\{x \in \PB \setminus \Q \colon \textstyle\sum_{j \in N} x_j / \exit_j \geq 1\}) &= \{x \in \PB \colon \textstyle\sum_{j \in N} x_j / \exit_j \geq 1\}. \end{align*} \end{proposition} \begin{proof} We show only that $\conv(\{x \in \PB \setminus \Q \colon \textstyle\sum_{j \in N} x_j / \enter_j \leq 1\}) = \{x \in \PB \colon \textstyle\sum_{j \in N} x_j / \enter_j \leq 1\}$, as the second statement can be shown using similar techniques. Under Assumption~\ref{ass:1}, $\Q$ bounded implies $N = \N2$. Because $\PB \setminus \Q \subseteq \PB$ and the set $\{x \in \PB \colon \textstyle\sum_{j \in N} x_j / \enter_j \leq 1\}$ is convex, $\conv(\{x \in \PB \setminus \Q \colon \textstyle\sum_{j \in N} x_j / \enter_j \leq 1\}) \subseteq \{x \in \PB \colon \textstyle\sum_{j \in N} x_j / \enter_j \leq 1\}$. Next, let $\xhat \in \PB$ satisfy $\sum_{j \in N} \xhat_j / \enter_j \leq 1$.
Then \begin{align} \xhat &= \xbasis + \sum\limits_{j \in N} \xhat_j \rbar^j = \sum\limits_{j \in N} \frac{\xhat_j}{\enter_j} (\xbasis + \enter_j \rbar^j) + \bigg( 1 - \sum\limits_{j \in N} \frac{\xhat_j}{\enter_j} \bigg) \xbasis \nonumber \\ &\in \conv(\{\xbasis\} \cup \{\xbasis + \enter_j \rbar^j \colon j \in N\}) \subseteq \conv(\PB \setminus \Q). \label{eq:conv-points-ineq} \end{align} Because $\xbasis_j = 0$ for all $j \in N$, we have $\sum_{j \in N} \xbasis_j / \enter_j = 0$. For $i \in N$, let $y^i \coloneqq \xbasis + \enter_i \rbar^i$. For any $i,j \in N$, the component $y^i_j$ equals $\enter_j$ if $i = j$, and $0$ otherwise. Hence, for all $i \in N$, $\sum_{j \in N} y^i_j / \enter_j = 1$. Continuing from \eqref{eq:conv-points-ineq}, we have $\xhat \in \conv(\{x \in \PB \setminus \Q \colon \textstyle\sum_{j \in N} x_j / \enter_j \leq 1\})$. \end{proof} The disjunction presented in Theorem~\ref{thm:n1-n2} can be particularly useful if the intersection of $\P$ with one of the inequalities \eqref{eq:dis} is empty. In this case, the inequality defining the other disjunctive term is valid for $\P \setminus \Q$. \begin{definition} If $\{x \in \P \colon \sum_{j \in N} x_j / \exit_j \geq 1\} = \emptyset$, we refer to the inequality $\sum_{j \in N} x_j / \enter_j \leq 1$ as an {\em external intersection cut}. We say the same for the inequality $\sum_{j \in N} x_j / \exit_j \geq 1$ if $\{x \in \P \colon \sum_{j \in N} x_j / \enter_j \leq 1\} = \emptyset$. \end{definition} External intersection cuts are valid for $\P \setminus \Q$. We provide an example where intersection cuts are insufficient to define $\conv(\P \setminus \Q)$, but the facet-defining inequality for $\conv(\P \setminus \Q)$ can be obtained from an external intersection cut. In order to derive the inequalities \eqref{eq:dis}, we must first translate our polyhedral set to standard form, using additional slack variables as necessary. We then select a basis and calculate $\enter_j$ and $\exit_j$ for all $j \in N$.
In this example and all that follow, we intentionally omit the intermediate steps required to obtain these inequalities, presenting them in the original variable space. \begin{example}\label{ex:intro} Let \begin{align*} \P &= \{ x \in \R_{+}^2 \colon -x_1 + 3x_2 \leq 3/2 \} \\ \Q &= \{ x \in \R^2 \colon ||x||_2 < 1 \}. \end{align*} As can be seen in Figure~\ref{fig:unobtainable}, no standard intersection cut is able to generate the inequality that is facet-defining for $\conv(\P \setminus \Q)$. However, the basic solution $\xbasis = (-3/2, 0) \notin \cl(\Q)$ corresponding to the constraints $x_2 \geq 0$ and $-x_1 + 3x_2 \leq 3/2$ generates this inequality as an external intersection cut. For this basic solution, the set $\{x \in \P \colon \sum_{j \in N} x_j / \enter_j \leq 1\}$ is empty, implying that the inequality $\sum_{j \in N} x_j / \exit_j \geq 1$ is valid for $\P \setminus \Q$. Figure~\ref{fig:obtainable} shows the inequalities \eqref{eq:dis} for this example. We note that in this example, there does exist an open convex set $\Q' \supseteq \Q$ such that $\xbasis \in \Q'$ and the intersection cut defined by $\xbasis$ with respect to $\Q$ generates the facet-defining inequality for $\conv(\P \setminus \Q)$. For instance, one such set is $\Q' = \Q \cup \{ x \in \R^2 \colon -1 < x_2 < 1,\ x_1 < 0\}$. Methods for enlarging the set $\Q$ to generate intersection cuts stronger than those produced by $\Q$ are outside the scope of this work, though this topic has been studied by \citet{balas1972}. 
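The emptiness of the "enter" disjunctive term can be checked by hand. The sketch below (ours, not part of the formal development) works in the nonbasic coordinates $(t_1, t_2)$ of the basis at $\xbasis = (-3/2, 0)$, with rays $(1,0)$ and $(3,1)$; since $x_1$ is affine in $(t_1, t_2)$, it suffices to evaluate the vertices of the simplex defined by $t_1/\enter_1 + t_2/\enter_2 \leq 1$, $t \geq 0$.

```python
# Sketch for Example ex:intro: the "enter" term of (eq:dis) has empty
# intersection with P, so the "exit" inequality is an external
# intersection cut.  Points of PB are x(t) = (-3/2, 0) + t1*(1,0) + t2*(3,1);
# the enter values are the first intersections with the unit circle,
# computed in closed form.  Variable names are ours, not the paper's.
import math

e1 = 0.5                              # ray (1,0): |t - 3/2| = 1 at t = 1/2
e2 = (9.0 - math.sqrt(31.0)) / 20.0   # ray (3,1): 10 t^2 - 9 t + 5/4 = 0

def x_of_t(t1, t2):
    return (-1.5 + t1 + 3.0 * t2, t2)

# The "enter" term restricted to PB is the simplex spanned by these
# vertices; x1 is affine in (t1, t2), so its maximum is at a vertex.
vertices = [x_of_t(0.0, 0.0), x_of_t(e1, 0.0), x_of_t(0.0, e2)]
max_x1 = max(v[0] for v in vertices)
# max_x1 < 0: no point of the simplex satisfies x1 >= 0, hence the
# "enter" disjunctive term does not intersect P.
```

This confirms, for this instance, that $\{x \in \P \colon \sum_{j \in N} x_j / \enter_j \leq 1\} = \emptyset$, so the exit-side inequality is valid for $\P \setminus \Q$.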
\begin{figure}
\begin{subfigure}[t]{0.48\textwidth}
\caption{The facet-defining inequality (solid line) for the reverse convex set in Example~\ref{ex:intro}.}
\label{fig:unobtainable}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.48\textwidth}
\caption{An external intersection cut from the basic solution $\xbasis = (-3/2, 0)$ defines the facet-defining inequality for $\conv(\P \setminus \Q)$ for Example~\ref{ex:intro}.}
\label{fig:obtainable}
\end{subfigure}
\caption{An external intersection cut for Example~\ref{ex:intro}.}
\end{figure}
\end{example}
\begin{example}\label{ex:consequence} Figure~\ref{fig:consequence} depicts another example of an external intersection cut. The extreme rays of $\PB$ enter the convex set $\Q$ and remain within $\Q$ on an unbounded interval, so $\{ x \in \P \colon \sum_{j \in N} x_j / \exit_j \geq 1 \} = \emptyset$. The external intersection cut $\sum_{j \in N} x_j / \enter_j \leq 1$ is valid for $\P \setminus \Q$.
\begin{figure}
\caption{The external intersection cut $\sum_{j \in N} x_j / \enter_j \leq 1$ for Example~\ref{ex:consequence}.}
\label{fig:consequence}
\end{figure}
\end{example}
\begin{definition} If $\{ x \in \P \colon \sum_{j \in N} x_j / \exit_j \geq 1 \} \neq \emptyset$ and $\{ x \in \P \colon \sum_{j \in N} x_j / \enter_j \leq 1 \} \neq \emptyset$, we say the disjunction \eqref{eq:dis} is an {\em intersection disjunction} for $\P \setminus \Q$. \end{definition} If \eqref{eq:dis} is an intersection disjunction for $\P \setminus \Q$, we can use a disjunctive CGLP to generate valid inequalities for $\conv (\P \setminus \Q)$ using the techniques of \citet{balas1979,balas1974}. We provide an example of why Assumption~\ref{ass:1} is necessary for the validity of the two-term disjunction of Theorem~\ref{thm:n1-n2}. \begin{example}\label{ex:ass1} Consider the reverse convex set shown in Figure~\ref{fig:n0-1}. The extreme ray $\rbar^2$ does not intersect the bounded convex set $\Q$, so \eqref{eq:dis} is not a valid disjunction for $\P \setminus \Q$. Figure~\ref{fig:n0-2} shows the same example but with the halfline $\lcinterval{0}{+\infty}{2}$ added to the set $\Q$.
Because Assumption~\ref{ass:1} holds, Theorem~\ref{thm:n1-n2}'s disjunction is valid for $\P \setminus \Q$.
\begin{figure}
\begin{subfigure}[t]{0.48\textwidth}
\caption{Theorem~\ref{thm:n1-n2} does not yield a valid disjunction, because the extreme ray $\rbar^2$ does not intersect $\Q$ and Assumption~\ref{ass:1} fails.}
\label{fig:n0-1}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.48\textwidth}
\caption{The validity of Theorem~\ref{thm:n1-n2} is restored once the halfline $\lcinterval{0}{+\infty}{2}$ is added to $\Q$.}
\label{fig:n0-2}
\end{subfigure}
\caption{Assumption~\ref{ass:1} is necessary for the validity of Theorem~\ref{thm:n1-n2}.}
\end{figure}
\end{example}
Our final example of this section motivates considering how the recession cone can be used to derive more general valid disjunctions for $\PB \setminus \Q$. \begin{example}\label{ex:useless} Let $\P = \R^2_{+}$, and \begin{align*} \Q &= \Big\lbrace (x_1,x_2) \colon \Big(x_1 - \frac{3}{4} \Big)^2 + \Big(x_2 - \frac{1}{4} \Big)^2 < \frac{1}{4} \Big\rbrace + \cone\Big( \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ 1 \end{bmatrix} \Big). \end{align*} Figure~\ref{fig:n0-insufficient-mod} provides a graphical representation of $\PB \setminus \Q$, where $\PB$ is generated from the basic solution $\xbasis = (0,0)$. Assumption~\ref{ass:1} does not hold; namely, $2 \in \N0$, but $\rbar^2 \notin \recc(\Q)$. However, there exists a valid two-term disjunction for $\PB \setminus \Q$ that cannot be obtained with the theory of this section.
\begin{figure}
\caption{Although a valid two-term disjunction exists for Example~\ref{ex:useless}, it cannot be obtained with the theory of this section.}
\label{fig:n0-insufficient-mod}
\end{figure}
\end{example}
\section{Valid inequalities and intersection disjunctions using \texorpdfstring{$\recc(\Q)$}{recc(C)}}\label{sec:cuts-2} In this section, we generalize the results of Section~\ref{sec:cuts-1} by considering the full recession cone of $\Q$. In Section~\ref{subsec:inner}, we construct an inner approximation of $\Q$ and analyze its relationship to $\PB \setminus \Q$. We derive inequalities to define a polyhedral relaxation of $\PB \setminus \Q$ in Section~\ref{subsec:tq-cuts}. In Section~\ref{subsec:multiterm}, we generalize the two-term disjunction of Theorem~\ref{thm:n1-n2} to a multi-term disjunction that uses the recession cone of $\Q$.
We propose polyhedral relaxations of these disjunctive terms in Section~\ref{subsec:skq-cuts-combined}. \subsection{An inner approximation of \texorpdfstring{$\Q$}{C}}\label{subsec:inner} Let $\Nonetwo \coloneqq \N1 \cup \N2$. We define $\T$ and $\T^\Q$ as follows: \begin{align} \T &\coloneqq \{\xbasis\} + \conv \big(\textstyle\bigcup_{j \in \Nonetwo} \interval{\enter_j}{\exit_j}{j} \big),\enspace \T^\Q \coloneqq \T + \recc(\Q). \label{eq:def-tq} \end{align} Both $\T$ and $\T^\Q$ are subsets of $\Q$. Additionally, we define $\Rset$ and $\Rset^\Q$ as follows: \begin{align*} \Rset &\coloneqq \{\xbasis\} + \conv \big( \textstyle\bigcup_{j \in \N1} \interval{\enter_j}{\exit_j}{j} \big),\enspace \Rset^\Q \coloneqq \Rset + \recc(\Q). \end{align*} Note that $\Rset^\Q \subseteq \T^\Q$. We derive inequalities valid for $\PB \setminus \T^\Q \supseteq \PB \setminus \Q$. We illustrate the sets $\T^\Q$ and $\Rset^\Q$ graphically in the example that follows. \begin{example}\label{ex:tq} Let $\P = \R^2_{+}$ and $\Q = \{(x_1,x_2) \colon x_2 > \sqrt{(x_1 - 1)^2+1} - 1.1 \}$. Let $\xbasis = (0,0)$ be the basic solution of $\P$, corresponding to basis $B$. The sets $\PB$ and $\Q$ are shown in Figure~\ref{fig:pb-tq-1}. Figures \ref{fig:pb-tq-2} and~\ref{fig:pb-tq-3} show the sets $\T^\Q$ and $\PB \setminus \T^\Q$, respectively. Figure~\ref{fig:pb-rq-1} shows the set $\Rset^\Q$. The set $\PB \setminus \Rset^\Q$ is depicted in Figure~\ref{fig:pb-rq-2}. 
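The classification of the extreme rays for this example can be verified directly. The sketch below (ours, not part of the formal development) states the closed-form entry and exit values along $e_1$ and $e_2$ and spot-checks them against a membership oracle for $\Q$.

```python
# Classifying the extreme rays of PB for Example ex:tq, where xbar = (0,0),
# Q = {x : x2 > sqrt((x1-1)^2 + 1) - 1.1}, and the rays are e1, e2.
# Closed forms (ours): along e1, (t,0) in Q iff (t-1)^2 < 0.21, so 1 in N2;
# along e2, (0,t) in Q iff t > sqrt(2) - 1.1, and the ray then stays in Q,
# so 2 in N1.
import math

in_Q = lambda x: x[1] > math.sqrt((x[0] - 1.0) ** 2 + 1.0) - 1.1

enter1, exit1 = 1.0 - math.sqrt(0.21), 1.0 + math.sqrt(0.21)  # index 1 in N2
enter2, exit2 = math.sqrt(2.0) - 1.1, math.inf                # index 2 in N1

# Spot-check the closed forms against the membership oracle.
eps = 1e-6
checks = [
    not in_Q((enter1 - eps, 0.0)), in_Q((enter1 + eps, 0.0)),
    in_Q((exit1 - eps, 0.0)),      not in_Q((exit1 + eps, 0.0)),
    not in_Q((0.0, enter2 - eps)), in_Q((0.0, enter2 + eps)),
    in_Q((0.0, 1e6)),  # exit2 = +inf: the ray remains in Q
]
```

With $\Nonetwo = \{1, 2\}$ and $\N1 = \{2\}$, the interval along $e_1$ is bounded while the one along $e_2$ is a halfline, which is what the figures below depict for $\T^\Q$ and $\Rset^\Q$.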
\begin{figure}
\begin{subfigure}[t]{0.32\textwidth}
\caption{The set $\PB \setminus \Q$ for Example~\ref{ex:tq}.}
\label{fig:pb-tq-1}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.32\textwidth}
\caption{The set $\T^\Q$ is an inner approximation of $\Q$.}
\label{fig:pb-tq-2}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.32\textwidth}
\caption{The darkened and disconnected set $\PB \setminus \T^\Q$ is a relaxation of $\PB \setminus \Q$.}
\label{fig:pb-tq-3}
\end{subfigure}
\caption{The construction of $\PB \setminus \T^\Q$ for Example~\ref{ex:tq}.}
\label{fig:pb-tq}
\end{figure}
\end{example}
\begin{figure}
\begin{subfigure}[t]{0.48\textwidth}
\caption{The set $\Rset^\Q$ is an inner approximation of $\Q$ that does not consider points along extreme rays of $\PB$ corresponding to indices in $\N2$.}
\label{fig:pb-rq-1}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.48\textwidth}
\caption{The darkened set $\PB \setminus \Rset^\Q$ is a relaxation of $\PB \setminus \Q$. Its relationship to $\PB \setminus \Q$ and $\PB \setminus \T^\Q$ is established in Theorem~\ref{thm:conv-cq}.}
\label{fig:pb-rq-2}
\end{subfigure}
\caption{The construction of $\Rset^\Q$ for Example~\ref{ex:tq}.}
\end{figure}
We motivate the study of $\PB \setminus \T^\Q$ and $\PB \setminus \Rset^\Q$ by showing that each set retains the strength of $\PB \setminus \Q$ under the convex hull operator. \begin{theorem}\label{thm:conv-cq} It holds that \begin{align*} \clconv(\PB \setminus \Q) = \clconv(\PB \setminus \T^\Q) = \clconv(\PB \setminus \Rset^\Q). \end{align*} \end{theorem} \begin{proof} Observe $\{\xbasis\} + \bigcup_{j \in \N1} \interval{\enter_j}{\exit_j}{j} \subseteq \{\xbasis\} + \bigcup_{j \in \Nonetwo} \interval{\enter_j}{\exit_j}{j} \subseteq \Q$. By definition, $\Rset^\Q \subseteq \T^\Q \subseteq \Q$, which implies \begin{align*} \PB \setminus \Q \subseteq \PB \setminus \T^\Q \subseteq \PB \setminus \Rset^\Q. \end{align*} To complete the proof, we show $\PB \setminus \Rset^\Q \subseteq \clconv(\PB \setminus \Q)$. Let $y \in \PB \setminus \Rset^\Q$. Assume $y \in \Q$; otherwise, there is nothing to prove. Because $y \in \PB$, we have $y = \xbasis + \sum_{j \in N} y_j \rbar^j$, where $y_j \geq 0$ for all $j \in N$. Let $\eta \coloneqq \sum_{j \in \N1} y_j / \enter_j$.
To begin, assume $\eta < 1$. Assume also that $\sum_{j \in \N0 \cup \N2} y_j > 0$, otherwise $y$ is a convex combination of the points $\{\xbasis\} \cup \{\xbasis + \enter_j \rbar^j \colon j \in \N1\} \subseteq \PB \setminus \Q$: \begin{align*} y &= (1 - \eta)\xbasis + \sum_{j \in \N1} \frac{y_j}{\enter_j} (\xbasis + \enter_j \rbar^j). \end{align*} Let $\lambday \coloneqq \sum_{j \in \N0 \cup \N2} y_j / (1 - \eta)$. We rewrite $y$ as \begin{align*} y &= \sum_{j \in \N1} \frac{y_j}{\enter_j} (\xbasis + \enter_j \rbar^j) + \mash{\sum_{j \in \N0 \cup \N2}} \ \frac{y_j}{\lambday} (\xbasis + \lambday \rbar^j). \end{align*} We have $\xbasis + \enter_j \rbar^j \in \PB \setminus \Q$ for all $j \in \N1$. Additionally, $\xbasis + \lambday \rbar^j \in \PB \setminus \Q$ for all $j \in \N0$. For $j \in \N2$, $\xbasis + \lambday \rbar^j \in \conv(\PB \setminus \Q)$, because $\xbasis \in \PB \setminus \Q$ and $\xbasis + \delta \rbar^j \in \PB \setminus \Q$ for a sufficiently large $\delta > \lambday$. The coefficients on the vectors $\{\xbasis + \enter_j \rbar^j \colon j \in \N1\} \cup \{\xbasis + \lambday \rbar^j \colon j \in \N0 \cup \N2\}$ are nonnegative and sum to one. Then $y \in \conv (\PB \setminus \Q)$. Next, assume $\eta = 1$. Because $\eta = 1$ and $y_j \geq 0$ for all $j \in N$, we have $\sum_{j \in \N1} y_j > 0$. Then there exists $k \in \N1$ satisfying $y_k > 0$. Let $y^{\epsilon} \coloneqq y - \epsilon \rbar^k$. For all $\epsilon \in (0, y_k]$, it holds that $y^\epsilon \in \PB$. Furthermore, $\sum_{j \in \N1} y^\epsilon_j / \enter_j = \eta - \epsilon / \enter_k < 1$. It follows from the above analysis that $y^\epsilon \in \conv(\PB \setminus \Q)$. Consequently, $y = \lim_{\epsilon \rightarrow 0} y^\epsilon \in \clconv(\PB \setminus \Q)$. Finally, assume $\eta > 1$.
Let $\lambdaz \coloneqq \eta$, and note $\lambdaz > 1$. For $\epsilon \in [0,1)$, let $z^\epsilon$ be the following: \begin{align*} z^\epsilon &\coloneqq \xbasis + \sum_{j \in \N1} \frac{y_j}{\lambdaz} [(1 - \epsilon)\lambdaz + \epsilon] \rbar^j \end{align*} Consider a fixed $\epsilon \in [0,1)$. It holds that $z^\epsilon \in \Rset$, because \begin{align*} z^\epsilon = \xbasis + \sum_{j \in \N1} \frac{y_j / \enter_j}{\lambdaz} [(1 - \epsilon)\lambdaz + \epsilon] \enter_j \rbar^j \in \{\xbasis\} + \conv \big( \textstyle\bigcup_{j \in \N1} \interval{\enter_j}{\exit_j}{j} \big). \end{align*} It must be the case that $y - z^\epsilon \notin \recc(\Q)$. If not, we have $y = z^\epsilon + q$ for some $q \in \recc(\Q)$, implying $y \in \Rset^\Q$ and contradicting $y \in \PB \setminus \Rset^\Q$. It holds that $y - z^\epsilon \in \recc(\PB)$, because the coefficients on the terms $\{\rbar^j \colon j \in N\}$ are nonnegative: \begin{align*} y - z^\epsilon &= \mash{\sum\limits_{j \in \N0 \cup \N2}} y_j \rbar^j + \sum\limits_{j \in \N1} \frac{y_j}{\lambdaz}(\lambdaz - 1) \epsilon \rbar^j. \end{align*} Then we have $y - z^\epsilon \in \recc(\PB) \setminus \recc(\Q)$. For a sufficiently large $\gamma_{\epsilon} > 0$, $y + \gamma_{\epsilon}(y - z^\epsilon) \notin \Q$, because $y - z^\epsilon \notin \recc(\Q)$. Because $y - z^\epsilon \in \recc(\PB)$, it follows that $y + \gamma_{\epsilon}(y - z^\epsilon) \in \PB \setminus \Q$.
Let $\zhat \in \conv(\PB \setminus \Q)$ be defined as follows: \begin{align*} \zhat &\coloneqq \xbasis + \sum_{j \in \N1} \frac{y_j}{\lambdaz} \rbar^j = \sum_{j \in \N1} \frac{y_j / \enter_j}{\lambdaz} (\xbasis + \enter_j \rbar^j) \in \conv(\{ \xbasis + \enter_j\rbar^j \colon j \in \N1 \}) \end{align*} For any $\epsilon \in [0,1)$, let $\vv^{\epsilon}$ be the following convex combination of $\zhat$ and $y + \gamma_{\epsilon}(y - z^\epsilon)$: \begin{align} \vv^{\epsilon} \coloneqq \frac{\gamma_{\epsilon}}{\gamma_{\epsilon} + 1} \zhat + \frac{1}{\gamma_{\epsilon} + 1} \left( y + \gamma_{\epsilon}(y - z^\epsilon) \right) \in \conv(\PB \setminus \Q). \label{eq:zhat-def} \end{align} We have $\lim_{\epsilon \rightarrow 1} z^\epsilon = \zhat$, and $\gamma_{\epsilon} / (\gamma_{\epsilon} + 1) \in [0,1)$ for all $\gamma_{\epsilon} > 0$. Then \begin{align*} \lim\limits_{\epsilon \rightarrow 1} \bigg\lvert \bigg\lvert \frac{\gamma_{\epsilon}}{\gamma_{\epsilon} + 1}(\zhat - z^\epsilon) \bigg\rvert \bigg\rvert \leq \lim\limits_{\epsilon \rightarrow 1} \bigg\lvert \frac{\gamma_{\epsilon}}{\gamma_{\epsilon} + 1} \bigg\rvert \lvert\lvert \zhat - z^\epsilon \rvert\rvert &= 0. \end{align*} Rearranging the definition of $\vv^\epsilon$ from \eqref{eq:zhat-def}, we have $y = \vv^\epsilon + [\gamma_{\epsilon} / (\gamma_{\epsilon} + 1)](z^\epsilon - \zhat)$. Thus, \begin{align*} y &= \lim\limits_{\epsilon \rightarrow 1} \vv^{\epsilon} \in \clconv(\PB \setminus \Q). \tag*{\qedhere} \end{align*} \vspace*{-\belowdisplayskip} \end{proof} Theorem~\ref{thm:conv-cq} supports our selection of $\PB \setminus \T^\Q$ as a relaxation of $\PB \setminus \Q$, as we do not lose anything when considering $\clconv (\PB \setminus \T^\Q)$. For the remainder of Section~\ref{sec:cuts-2}, we make the following assumption. \begin{assumption}\label{ass:recc} The recession cone of $\Q$ is contained in the recession cone of $\PB$. 
\end{assumption} If Assumption~\ref{ass:recc} does not hold, we can consider the convex set $\PB \cap \Q$ instead of $\Q$. Indeed, $\recc(\PB \cap \Q) \subseteq \recc(\PB)$. Our analysis only requires the set $\Q$ to be relatively open in $\PB$, not necessarily open. By Corollary~\ref{cor:replace}, replacing $\Q$ with $\PB \cap \Q$ does not change the strength of our relaxation of $\PB \setminus \Q$ with respect to the convex hull operator. \begin{corollary}\label{cor:replace} It holds that $\clconv(\PB \setminus \Q) = \clconv(\PB \setminus \T^{\PB \cap \Q})$. \end{corollary} \begin{proof} The statement follows directly from Theorem~\ref{thm:conv-cq}: \begin{align*} \clconv(\PB \setminus \Q) &= \clconv(\PB \setminus (\PB \cap \Q)) = \clconv(\PB \setminus \T^{\PB \cap \Q}). \tag*{\qedhere} \end{align*} \end{proof} If $\PB$ in Corollary~\ref{cor:replace} is replaced with $\P$, the statement is no longer true in general. This is relevant, because we present a valid disjunction for $\PB \setminus \T^\Q$ in Section~\ref{subsec:multiterm}. When we add the constraints of $\P$ to the disjunctive formulation of $\PB \setminus \T^\Q$, the cuts obtained from a CGLP could be stronger than those obtained from a CGLP built from a valid disjunction for $\PB \setminus \T^{\PB \cap \Q}$. Thus, substituting $\Q$ with $\PB \cap \Q$ in order to satisfy Assumption~\ref{ass:recc} has the potential to weaken the generated disjunctive cuts. \subsection{Polyhedral relaxation of \texorpdfstring{$\PB \setminus \T^\Q$}{P\carrot B \back T\carrot C}}\label{subsec:tq-cuts} We present valid inequalities for $\PB \setminus \genset_{\D}^{\Q}$, where $\D \subseteq \Nonetwo$ is fixed and \begin{align} \genset_{\D} &\coloneqq \{\xbasis\} + \conv \big(\textstyle\bigcup_{j \in \D} \interval{\enter_j}{+\infty}{j} \big),\enspace \genset_{\D}^{\Q} \coloneqq \genset_{\D} + \recc(\Q). 
\label{eq:genset-def} \end{align} In this section, we are interested in deriving valid inequalities for $\PB \setminus \genset_{\D}^{\Q}$ when $\D = \N1$, in which case $\genset_{\D}^{\Q} = \Rset^\Q$. Because $\Rset^\Q \subseteq \T^\Q$, inequalities valid for $\PB \setminus \Rset^\Q$ are also valid for $\PB \setminus \T^\Q$. We consider the more general set $\genset_{\D}^{\Q}$ to be able to apply this analysis in Section~\ref{subsec:skq-cuts-combined}. Observe that $\recc(\genset_{\D}^{\Q}) = \cone(\{ \rbar^j \colon j \in \D\}) + \recc(\Q)$. Furthermore, $\genset_{\D}^{\Q} \subseteq \Q$ if and only if $\D \subseteq \N1$. For $(i,j) \in \D \times (N \setminus \D)$, let $\epshatone_{ij}$ be the following: \begin{align*} \epshatone_{ij} &\coloneqq \sup \{ \epshatone \geq 0 \colon \enter_i \rbar^i + \epshatone \rbar^j \in \recc(\genset_{\D}^{\Q}) \}. \end{align*} It holds that $\epshatone_{ij} = +\infty$ if $\rbar^j \in \recc(\genset_{\D}^{\Q})$. In all other cases, $\epshatone_{ij}$ is finite and its supremum is attained, because $\recc(\genset_{\D}^{\Q})$ is a closed convex cone. The parameter $\epshatone_{ij}$ depends on $\D$, but we suppress this dependence for notational simplicity. Let $\Msetone$ be defined as follows: \begin{align*} \Msetone &\coloneqq \{i \in \D \colon \epshatone_{ij} > 0\ \forall j \in N \setminus \D\}. \end{align*} Let $\F_{ij} \coloneqq \cone(\rbar^i,\rbar^j)$ be the cone formed by extreme rays $\rbar^i$ and $\rbar^j$ ($i,j \in N$). The set $\Msetone$ is composed of indices $i \in \D$ corresponding to extreme rays of $\PB$ that exhibit the following property: for every $j \in N \setminus \D$, the cone $\F_{ij}$ contains a nontrivial element of $\recc(\genset_{\D}^{\Q})$ (i.e., anything outside of $\lcinterval{0}{+\infty}{i}$). \begin{proposition}\label{prop:epshat-recc} Let $(i,j) \in \Msetone \times (N \setminus \D)$. For any $\epshatone \in [0, \epshatone_{ij})$, $\enter_i \rbar^i + \epshatone \rbar^j \in \recc(\genset_{\D}^\Q)$. 
\end{proposition} \begin{proof} If $\epshatone_{ij} = +\infty$, then $\rbar^j \in \recc(\genset_{\D}^{\Q})$, and the point $\enter_i \rbar^i + \epshatone \rbar^j$ lies in $\F_{ij} \subseteq \recc(\genset_{\D}^{\Q})$. Assume $\epshatone_{ij} < +\infty$. The point $\enter_i \rbar^i + \epshatone \rbar^j$ is a convex combination of $\enter_i \rbar^i + \epshatone_{ij} \rbar^j$ and $\enter_i \rbar^i$, both of which lie in $\recc(\genset_{\D}^{\Q})$: \begin{align*} \enter_i \rbar^i + \epshatone \rbar^j&= \frac{\epshatone}{\epshatone_{ij}} ( \enter_i \rbar^i + \epshatone_{ij} \rbar^j ) + \bigg(1 - \frac{\epshatone}{\epshatone_{ij}} \bigg) \enter_i \rbar^i \in \recc(\genset_{\D}^{\Q}). \tag*{\qedhere} \end{align*} \end{proof} For $j \in N \setminus \D$ and $\Sset \subseteq \Msetone$, we define $\epsone_j(\Sset)$ to be \begin{align*} \epsone_j(\Sset) &= \begin{cases} \min_{i \in \Sset} \epshatone_{ij} & \textrm{ if } \Sset \neq \emptyset \\ +\infty & \textrm{ otherwise}. \end{cases} \end{align*} The parameter $\epsone_j(\Sset)$ also depends on $\D$. We again omit this dependence for notational simplicity. Theorem~\ref{thm:tq-cuts} presents a family of valid inequalities for $\PB \setminus \genset_{\D}^{\Q}$. \begin{theorem}\label{thm:tq-cuts} Let $\Sset \subseteq \Msetone$. The inequality \begin{align} \sum\limits_{j \in \Sset} \frac{x_j}{\enter_j} - \mash{\sum\limits_{j \in N \setminus \D}} \ \frac{x_j}{\epsone_j(\Sset)} &\leq 1 \label{eq:tq-inequality} \end{align} is valid for $\PB \setminus \genset_{\D}^{\Q}$. \end{theorem} Prior to proving Theorem~\ref{thm:tq-cuts}, we use Farkas' lemma to derive a result on the existence of a solution to a particular family of linear equations. \begin{lemma}\label{lem:farkas-cons-2} Let $\indone, \indtwo$ be two finite index sets. Let $\aparam \in \R^{|\indone|}_{+}$ and $\cparam \in \R^{|\indtwo|}_{+}$ satisfy $\sum_{i \in \indone} \aparam_i - \sum_{j \in \indtwo} \cparam_j > 0$.
Then there exists $\theta \in \R^{|\indone| \times |\indtwo|}_{+}$ such that \begin{equation}\label{eq:fark-sys-2} \begin{alignedat}{2} \sum\limits_{i \in \indone} \theta_{ij} &= 1 &&\forall j \in \indtwo \\ \sum\limits_{j \in \indtwo} \cparam_j \theta_{ij} &\leq \aparam_i \qquad &&\forall i \in \indone. \end{alignedat} \end{equation} \end{lemma} \begin{proof} Let $\bhat \coloneqq \sum_{j \in \indtwo} \cparam_j$. If $\bhat = 0$, then any $\theta \in \R^{|\indone| \times |\indtwo|}_{+}$ satisfying $\sum_{i \in \indone} \theta_{ij} = 1$ for all $j \in \indtwo$ is a solution to system \eqref{eq:fark-sys-2}. Therefore, assume $\bhat > 0$. Assume for contradiction \eqref{eq:fark-sys-2} does not have a solution. By Farkas' lemma, there exist $y \in \R^{|\indtwo|}$ and $z \in \R^{|\indone|}_{+}$ such that \begin{subequations}\label{eq:fark-family2} \begin{alignat}{2} y_j + \cparam_j z_i &\geq 0 \qquad && \forall i \in \indone,\ j \in \indtwo \label{eq:fark2-sub1} \\ \mash{\sum\limits_{j \in \indtwo}} y_j + \mash{\sum\limits_{i \in \indone}} \aparam_i z_i &< 0. && \label{eq:fark2-sub2} \end{alignat} \end{subequations} We multiply \eqref{eq:fark2-sub1} by $\aparam_i/\bhat$ to obtain \begin{align*} \frac{\aparam_i}{\bhat} y_j + \frac{\cparam_j}{\bhat} \aparam_i z_i &\geq 0 \qquad \forall i \in \indone,\ j \in \indtwo. \end{align*} Summing this expression over $i \in \indone$ and $j \in \indtwo$ produces the inequality \begin{align} \frac{\textstyle\sum_{i \in \indone} \aparam_i}{\bhat} \mash{\sum\limits_{j \in \indtwo}} y_j + \mash{\sum\limits_{i \in \indone}} \aparam_i z_i &\geq 0. \label{eq:farkas-proof-reduction} \end{align} By assumption, $\sum_{i \in \indone} \aparam_i - \bhat > 0$, which implies $\sum_{i \in \indone} \aparam_i / \bhat > 1$. Combining \eqref{eq:fark2-sub2} with the inequality $\sum_{i \in \indone} \aparam_i z_i \geq 0$, we conclude that $\sum_{j \in \indtwo} y_j < 0$. 
Because $\sum_{j \in \indtwo} y_j < 0$ and $\textstyle\sum_{i \in \indone} \aparam_i / \bhat > 1$, it holds that $\big(\textstyle\sum_{i \in \indone} \aparam_i / \bhat\big) \textstyle\sum_{j \in \indtwo} y_j \leq \textstyle\sum_{j \in \indtwo} y_j$. Thus, \eqref{eq:farkas-proof-reduction} implies \begin{align} \mash{\sum_{j \in \indtwo}} y_j + \mash{\sum_{i \in \indone}} \aparam_i z_i \geq 0 \label{eq:farkas-contradiction} \end{align} Inequality \eqref{eq:farkas-contradiction} contradicts \eqref{eq:fark2-sub2}. Therefore, \eqref{eq:fark-sys-2} has a solution. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:tq-cuts}.] The statement is trivially true if $\Sset = \emptyset$. Therefore, assume $\Sset \neq \emptyset$. For ease of notation, let $\epsone_j \coloneqq \epsone_j(\Sset)$ for $j \in N \setminus \D$. Let $\helper \coloneqq \{j \in N \setminus \D \colon \epsone_j < +\infty\}$ and $\helperc \coloneqq \{j \in N \setminus \D \colon \epsone_j = +\infty\}$. Note $j \in \helperc$ if and only if $\rbar^j \in \recc(\genset_{\D}^{\Q})$. Let $\xhat \in \PB$ satisfy $\sum_{j \in \Sset} \xhat_j / \enter_j - \sum_{j \in \helper} \xhat_j / \epsone_j > 1$. We show $\xhat \in \genset_{\D}^{\Q}$. We apply Lemma~\ref{lem:farkas-cons-2} with $\indone \coloneqq \Sset$, $\indtwo \coloneqq \helper$, $\aparam_i \coloneqq \xhat_i / \enter_i$ for $i \in \Sset$, and $\cparam_j \coloneqq \xhat_j / \epsone_j$ for $j \in \helper$. Thus, there exists $\theta \in \R^{|\Sset| \times |\helper|}_{+}$ satisfying \begin{subequations}\label{eq:fark2} \begin{alignat}{2} \sum\limits_{i \in \Sset} \theta_{ij} &= 1 &&\forall j \in \helper \label{eq:fark2-sum1} \\ \mash{\sum\limits_{j \in \helper}} \ \frac{\xhat_j}{\epsone_j} \theta_{ij} &\leq \frac{\xhat_i}{\enter_i} \qquad &&\forall i \in \Sset. \label{eq:fark2-theta} \end{alignat} \end{subequations} By Proposition~\ref{prop:epshat-recc}, $q^{ij} \coloneqq \enter_i \rbar^i + \epsone_j \rbar^j \in \recc(\genset_{\D}^{\Q})$ for all $i \in \Sset$ and $j \in \helper$, because $0 < \epsone_j \leq \epshatone_{ij}$ (membership also holds when $\epsone_j = \epshatone_{ij}$, because the supremum defining $\epshatone_{ij}$ is attained). Consequently, \begin{align} \rbar^j &= \frac{1}{\epsone_j} q^{ij} - \frac{1}{\epsone_j} \enter_i \rbar^i \qquad \forall i \in \Sset,\ j \in \helper.
\label{eq:tq-rays-intermediate} \end{align} We use \eqref{eq:tq-rays-intermediate} and $\theta$ from \eqref{eq:fark2} to rewrite $\rbar^j$, $j \in \helper$: \begin{align} \rbar^j &= \sum\limits_{i \in \Sset} \theta_{ij} \bigg( \frac{1}{\epsone_j} q^{ij} - \frac{1}{\epsone_j} \enter_i \rbar^i \bigg). \label{eq:tq-rays-final} \end{align} We have $N = \D \cup \helper \cup \helperc$. Substituting \eqref{eq:tq-rays-final} into the definition of $\xhat$, we have: \begin{align} \xhat &= \xbasis + \sum\limits_{i \in \Sset} \xhat_i \rbar^i + \mash{\sum\limits_{j \in \D \setminus \Sset}} \xhat_j \rbar^j + \sum\limits_{j \in \helper} \xhat_j \rbar^j + \sum\limits_{j \in \helperc} \xhat_j \rbar^j \nonumber \\ &= \xbasis + \sum\limits_{i \in \Sset} \xhat_i \rbar^i + \ \mash{\sum\limits_{j \in \helper}} \ \sum\limits_{i \in \Sset} \xhat_j \theta_{ij} \left( \frac{1}{\epsone_j} q^{ij} - \frac{1}{\epsone_j} \enter_i \rbar^i \right) + \mash{\sum\limits_{\quad j \in \helperc \cup (\D \setminus \Sset)}} \xhat_j \rbar^j \nonumber \\ &= \xbasis + \sum\limits_{i \in \Sset} \bigg( \frac{\xhat_i}{\enter_i} - \mash{\sum\limits_{j \in \helper}} \theta_{ij} \frac{\xhat_j}{\epsone_j} \bigg) \enter_i \rbar^i + \sum\limits_{i \in \Sset} \ \mash{\sum\limits_{\ j \in \helper}} \theta_{ij} \frac{\xhat_j}{\epsone_j} q^{ij} + \mash{\sum\limits_{\quad j \in \helperc \cup (\D \setminus \Sset)}} \xhat_j \rbar^j. \label{eq:new-xhat} \end{align} By \eqref{eq:fark2-sum1}, the sum of the weights on the terms $\enter_i \rbar^i$, $i \in \Sset$ in \eqref{eq:new-xhat} is greater than $1$: \begin{align} \sum_{i \in \Sset} \bigg( \frac{\xhat_i}{\enter_i} - \mash{\sum_{j \in \helper}} \theta_{ij} \frac{\xhat_j}{\epsone_j} \bigg) = \sum_{i \in \Sset} \frac{\xhat_i}{\enter_i} - \mash{\sum_{j \in \helper}} \frac{\xhat_j}{\epsone_j} &> 1. \label{eq:tq:sumgeq1} \end{align} By \eqref{eq:fark2-theta}, each individual coefficient on $\enter_i \rbar^i$, $i \in \Sset$ in \eqref{eq:new-xhat} is nonnegative.
Together with \eqref{eq:tq:sumgeq1}, we have \begin{align} \xbasis + \sum\limits_{i \in \Sset} \bigg( \frac{\xhat_i}{\enter_i} - \mash{\sum\limits_{j \in \helper}} \theta_{ij} \frac{\xhat_j}{\epsone_j} \bigg) \enter_i \rbar^i \in \{\xbasis\} + \conv\bigg( \bigcup_{i \in \Sset} \interval{\enter_i}{+\infty}{i} \bigg). \label{eq:xhat-part1} \end{align} Continuing from \eqref{eq:xhat-part1}, $\{\xbasis\} + \conv ( \cup_{i \in \Sset} \interval{\enter_i}{+\infty}{i} ) \subseteq \genset_{\D}$, because $\Sset \subseteq \D$. Furthermore, because recession cone membership is preserved under addition, \begin{align} \sum\limits_{i \in \Sset} \ \mash{\sum\limits_{\ j \in \helper}} \theta_{ij} \frac{\xhat_j}{\epsone_j} q^{ij} + \mash{\sum\limits_{j \in \D \setminus \Sset}} \xhat_j \rbar^j + \sum\limits_{j \in \helperc} \xhat_j \rbar^j \in \recc(\genset_{\D}^{\Q}). \label{eq:xhat-part2} \end{align} This holds because $q^{ij} \in \recc(\genset_{\D}^{\Q})$ for all $i \in \Sset$ and $j \in \helper$, $\rbar^j \in \recc(\genset_{\D}^{\Q})$ for all $j \in \D$ by \eqref{eq:genset-def}, and $\rbar^j \in \recc(\genset_{\D}^{\Q})$ for all $j \in \helperc$. By \eqref{eq:xhat-part1} and \eqref{eq:xhat-part2}, we have $\xhat \in \genset_{\D} + \recc(\Q) = \genset_{\D}^{\Q}$. \end{proof} As a result of Theorem~\ref{thm:tq-cuts} and Proposition~\ref{prop:epshat-recc}, for any $\Sset \subseteq \Msetone$ and $\epshatone \in \R^{|N \setminus \D|}_{+}$ satisfying $\epshatone_j \in (0,\epsone_j(\Sset))$ for all $j \in N \setminus \D$, the inequality \begin{align*} \sum_{j \in \Sset} \frac{x_j}{\enter_j} - \mash{\sum_{j \in N \setminus \D}} \ \frac{x_j}{\epshatone_j} \leq 1 \end{align*} is valid for $\PB \setminus \genset_{\D}^{\Q}$. This is useful when calculating $\epsone_j(\Sset)$ approximately (e.g., via binary search), as it is sufficient to calculate a positive lower bound on $\epsone_j(\Sset)$. 
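The binary-search computation of such a lower bound can be sketched as follows. This is a minimal illustration (ours, not part of the formal development), assuming a membership oracle for $\recc(\genset_{\D}^{\Q})$ and a known value beyond which the combination leaves the cone; the toy cone and all names are ours.

```python
# Sketch: a positive lower bound on epshat_ij by bisection, assuming a
# membership oracle `in_cone` for recc(G_D^Q) and that the point
# enter_i * r^i + eps * r^j leaves the cone before eps_hi.  Any value in
# (0, epshat_ij) still yields a valid inequality, so an approximate lower
# bound suffices.  All names are illustrative.

def epshat_lower_bound(in_cone, ri_scaled, rj, eps_hi, iters=60):
    """ri_scaled is the vector enter_i * r^i; returns eps_lo <= epshat_ij."""
    lo, hi = 0.0, eps_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        v = [a + mid * b for a, b in zip(ri_scaled, rj)]
        if in_cone(v):
            lo = mid     # still inside: epshat_ij >= mid
        else:
            hi = mid
    return lo

# Toy cone cone((1,1), (2,1)): membership via the 2x2 generator system.
def in_toy_cone(v):
    # v = a*(1,1) + b*(2,1) with a, b >= 0.
    b = v[0] - v[1]
    a = v[1] - b
    return a >= -1e-12 and b >= -1e-12

# Along (1,1) + eps*(1,0), the point leaves the cone at eps = 1.
eps_lo = epshat_lower_bound(in_toy_cone, [1.0, 1.0], [1.0, 0.0], eps_hi=4.0)
```

Since $\epsone_j(\Sset)$ is a minimum of such values over $i \in \Sset$, taking the minimum of per-pair lower bounds again yields a positive lower bound, which is all the validity argument requires.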
However, our choice of $\epsone_j$ yields a stronger inequality than those corresponding to smaller choices of $\epshatone$. \begin{contexample} Consider $\D = \N1$. Figures \ref{fig:pb-tq-4} and~\ref{fig:pb-tq-5} provide a graphical representation of Theorem~\ref{thm:tq-cuts} applied to Example~\ref{ex:tq}. The selection of $\epsone_1(\Sset)$ is shown in Figure~\ref{fig:pb-tq-4}, where $\Sset = \{2\}$. The vector $\epsone_1(\Sset) \rbar^1 + \enter_2 \rbar^2$ lies on the boundary of $\recc(\genset_{\D}^{\Q}) \cap \F_{12}$. We note $\recc(\genset_{\D}^{\Q}) = \recc(\Q)$. Figure~\ref{fig:pb-tq-5} shows the valid inequality of Theorem~\ref{thm:tq-cuts} using this selection of $\Sset$. \begin{figure} \caption{The maximal selection of $\epsone_1(\Sset)$, where $\Sset = \{2\}$, and the resulting valid inequality of Theorem~\ref{thm:tq-cuts}.} \label{fig:pb-tq-4} \label{fig:pb-tq-5} \end{figure} \end{contexample} We next consider the problem of selecting a subset of $\Msetone$ that yields the most violated inequality of the form \eqref{eq:tq-inequality} to cut off a candidate solution $\xhat \in \PB$. That is, we are interested in the separation problem \begin{align} \max_{\Sset \subseteq \Msetone} \sum\limits_{j \in \Sset} \frac{\xhat_j}{\enter_j} - \mash{\sum\limits_{j \in N \setminus \D}} \ \frac{\xhat_j}{\epsone_j(\Sset)}. \label{eq:tq-separation} \end{align} For each $j \in N \setminus \D$, we define the function $\funcone_j \colon 2^{\Msetone} \rightarrow \R$ to be $\funcone_j(\Sset) = -\xhat_j / \min_{i \in \Sset} \epshatone_{ij}$ if $\Sset \neq \emptyset$, and $0$ otherwise. The value $\funcone_j(\Sset)$ is the contribution of index $j \in N \setminus \D$ to the objective function \eqref{eq:tq-separation} for a given $\Sset$. \begin{proposition}\label{prop:tq-separation} The maximization problem \eqref{eq:tq-separation} is a supermodular maximization problem.
\end{proposition} \begin{proof} The separation problem \eqref{eq:tq-separation} can be equivalently written as \begin{align} \max_{\Sset \subseteq \Msetone} \sum\limits_{j \in \Sset} \frac{\xhat_j}{\enter_j} + \mash{\sum\limits_{j \in N \setminus \D}} \funcone_j(\Sset). \label{eq:tq-separation-rewrite} \end{align} From standard properties of the $\min$ operator, the objective function of \eqref{eq:tq-separation-rewrite} is the sum of supermodular (and modular) functions. \end{proof} By Proposition~\ref{prop:tq-separation}, the separation problem \eqref{eq:tq-separation} can be solved in strongly polynomial time (e.g., \citet{grotschel1981}, \citet{grotschel2012}, \citet{orlin2009}). We next propose an extended formulation for the relaxation of $\PB \setminus \genset_{\D}^{\Q}$ defined by inequality \eqref{eq:tq-inequality} for all $\Sset \subseteq \Msetone$: \begin{align*} \tqrelax_{\D} \coloneqq \bigg\lbrace x \in \R^{|N|}_{+} \colon \sum\limits_{j \in \Sset} \frac{x_j}{\enter_j} - \mash{\sum\limits_{j \in N \setminus \D}} \ \frac{x_j}{\epsone_j(\Sset)} \leq 1 \ \forall \Sset \subseteq \Msetone \bigg\rbrace. \end{align*} Let $\Msetone = \{1, \ldots, \tqcardone\}$ and $\tqcardtwo \coloneqq |N \setminus \D|$. For each $j \in N \setminus \D$, let $\pi_j(1), \pi_j(2), \ldots, \pi_j(\tqcardone)$ be ordered such that $\epshatone_{\pi_j(1),j} \leq \epshatone_{\pi_j(2),j} \leq \ldots \leq \epshatone_{\pi_j(\tqcardone),j}$. Conversely, for any $i \in \Msetone$, let $\ellone_j(i)$ denote the position of $i$ in this ordering, so that $\pi_j(\ellone_j(i)) = i$. For all $j \in N \setminus \D$, let $\epshatone_{0j} \coloneqq +\infty$, $\theta_{0j} \coloneqq 0$, $v_{0j} \coloneqq 0$, $v_{\tqcardone+1,j} \coloneqq 0$, $\pi_j(0) \coloneqq 0$, and $\pi_j(\tqcardone+1) \coloneqq 0$.
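Before turning to the extended formulation, it may help to see these ingredients computed explicitly. The sketch below builds the orderings $\pi_j$ and positions $\ellone_j$ by sorting (0-indexed), and solves the separation problem \eqref{eq:tq-separation} by brute-force enumeration of subsets. The function names and the toy data are hypothetical, and the enumeration is exponential; real instances would instead use the polynomial-time supermodular maximization algorithms cited above.

```python
from itertools import combinations

def orderings(eps_hat):
    """eps_hat[j] is the list of values eps_hat_{ij} over i in M1
    (0-indexed here). Returns pi[j] (indices of M1 sorted by
    increasing eps_hat_{ij}) and ell[j] (position of each i in
    that order), mirroring pi_j and ell_j in the text."""
    pi, ell = {}, {}
    for j, row in eps_hat.items():
        pi[j] = sorted(range(len(row)), key=lambda i: row[i])
        ell[j] = {i: pos for pos, i in enumerate(pi[j])}
    return pi, ell

def separate_bruteforce(xhat_M1, enter, xhat_out, eps_hat):
    """Enumerate all subsets S of M1 and return (best value, best S)
    for the separation objective: sum_{i in S} xhat_i / enter_i
    minus sum_j xhat_j / eps_j(S), where eps_j(S) = min_{i in S}
    eps_hat_{ij}. The empty subset scores 0, matching f_j(empty) = 0."""
    m = len(xhat_M1)
    best, best_S = 0.0, ()
    for r in range(1, m + 1):
        for S in combinations(range(m), r):
            val = sum(xhat_M1[i] / enter[i] for i in S)
            for j, row in eps_hat.items():
                val -= xhat_out[j] / min(row[i] for i in S)
            if val > best:
                best, best_S = val, S
    return best, best_S
```

Sorting each column of $\epshatone$ once is exactly the preprocessing used by the extended formulation that follows.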
We define $\tqextended_{\D}$ to be the set of $(x,\theta,v,\lambda) \in \R_{+}^{|N|} \times \R_{+}^{\tqcardone \times \tqcardtwo} \times \R_{+}^{\tqcardone \times \tqcardtwo} \times \R^{\tqcardtwo}$ such that \begin{equation*}\label{eq:sepip-dual} \begin{alignedat}{2} \sum\limits_{i \in \Msetone} \enspace \mash{\sum\limits_{j \in N \setminus \D}} \theta_{ij} + \mash{\sum\limits_{j \in N \setminus \D}} \lambda_j &\leq 1 \\ \theta_{ij} + v_{ij} - v_{i+1,j} + \bigg( \frac{1}{\epshatone_{\pi_j(i+1),j}} - \frac{1}{\epshatone_{\pi_j(i),j}} \bigg) x_j &\geq 0 \quad && \forall i = 0, \ldots, \tqcardone,\ j \in N \setminus \D \\ \mash{\sum\limits_{j \in N \setminus \D}} \theta_{\ellone_j(i),j} - \frac{1}{\enter_i}x_i &\geq 0 \quad && \forall i = 1, \ldots, \tqcardone. \end{alignedat} \end{equation*} Theorem~\ref{thm:extendedform1} establishes the relationship between $\tqextended_{\D}$ and the relaxation $\tqrelax_{\D}$. \begin{theorem}\label{thm:extendedform1} The polyhedron $\tqextended_{\D}$ is an extended formulation of $\tqrelax_{\D}$: \begin{align*} \proj_x(\tqextended_{\D}) = \tqrelax_{\D}. 
\end{align*} \end{theorem} \begin{proof} We first argue that the following linear program solves the separation problem \eqref{eq:tq-separation} for a fixed $\xhat \in \PB$: \begin{subequations}\label{eq:sepip} \begin{alignat}{4} \max_{y,z}\ && \sum\limits_{i \in \Msetone} \frac{\xhat_i}{\enter_i} z_i - &\mash{\sum\limits_{j \in N \setminus \D}} \enspace\ \sum\limits_{i = 1}^{\tqcardone} && \frac{\xhat_j}{\epshatone_{\pi_j(i),j}} (y_{i-1,j} - y_{ij}) && \label{eq:sepip-obj} \\ \textrm{s.t.}\ && y_{0j} &= 1 && \forall j \in N \setminus \D && (\lambda_j) \label{eq:sepip-1} \\ && y_{ij} + z_{\pi_j(i)} &\leq 1 && \forall i = 1, \ldots, \tqcardone,\ j \in N \setminus \D \qquad && (\theta_{ij})\label{eq:sepip-2} \\ && y_{ij} - y_{i-1,j} &\leq 0 && \forall i = 1, \ldots, \tqcardone,\ j \in N \setminus \D && (v_{ij}) \label{eq:sepip-3} \\ && y_{ij} &\geq 0 && \forall i = 0, \ldots, \tqcardone,\ j \in N \setminus \D \label{eq:sepip-4} \\ && z_i &\geq 0 && \forall i \in \Msetone. \label{eq:sepip-5} \end{alignat} \end{subequations} The constraint matrix of $\eqref{eq:sepip}$ is totally unimodular. To see this, we complement the $z_i$ variables with $1 - z_i$ for all $i \in \Msetone$ to obtain an equivalent problem. The resulting constraint matrix has $0,\pm 1$ entries, and each row contains no more than one $1$ and one $-1$. We show \eqref{eq:sepip} correctly models the separation problem \eqref{eq:tq-separation}. First, let $\Sset^*$ be the optimal solution of \eqref{eq:tq-separation}. We construct $(y^*,z^*)$ feasible to \eqref{eq:sepip} with objective function value equal to $\sum_{i \in \Sset^*} \xhat_i / \enter_i - \sum_{j \in N \setminus \D} \xhat_j / \epsone_j(\Sset^*)$. If $\Sset^* = \emptyset$, then the optimal objective value of the separation problem is $0$. In this case, set $y_{ij}^*=1$ for all $i = 0,1, \ldots, \tqcardone$ and $j \in N \setminus \D$, and set $z_i^*=0$ for all $i \in \Msetone$. 
Then $(y^*,z^*)$ is feasible to \eqref{eq:sepip} with objective value $0$. If $\Sset^* \neq \emptyset$, set $z_i^* = 1$ if $i \in \Sset^*$, and $0$ otherwise. For all $j \in N \setminus \D$, let $k_j$ be the smallest position satisfying $\pi_j(k_j) \in \arg \min_{i \in \Sset^*} \epshatone_{ij}$; that is, $k_j = \min \{k \colon \epshatone_{\pi_j(k),j} = \epsone_j(\Sset^*)\}$. For each $j \in N \setminus \D$, set $y^*_{ij} = 1$ for all $i = 0, 1, \ldots, k_j - 1$, and set $y^*_{ij} = 0$ for all $i = k_j, \ldots, \tqcardone$. By construction, $(y^*,z^*)$ satisfies \eqref{eq:sepip-1}--\eqref{eq:sepip-5}. For a fixed $j \in N \setminus \D$, we have \begin{align*} \sum\limits_{i = 1}^{\tqcardone} \frac{\xhat_j}{\epshatone_{\pi_j(i),j}} (y^*_{i-1,j} - y^*_{ij}) = \frac{\xhat_j}{\epshatone_{\pi_j(k_j),j}} = \frac{\xhat_j}{\epsone_j(\Sset^*)}, \end{align*} and the objective function \eqref{eq:sepip-obj} evaluates to the desired value of \begin{align*} \sum\limits_{i \in \Msetone} \frac{\xhat_i}{\enter_i} z^*_i - \mash{\sum\limits_{j \in N \setminus \D}} \enspace\ \sum\limits_{i = 1}^{\tqcardone}\frac{\xhat_j}{\epshatone_{\pi_j(i),j}} (y^*_{i-1,j} - y^*_{ij}) = \sum\limits_{i \in \Sset^*} \frac{\xhat_i}{\enter_i} - \mash{\sum\limits_{j \in N \setminus \D}} \ \frac{\xhat_j}{\epsone_j(\Sset^*)}. \end{align*} Now, let $(y^*,z^*)$ be an optimal solution to \eqref{eq:sepip}. Set $\Sset^* = \{i \in \Msetone \colon z^*_i = 1\}$. It remains to show $\sum_{i \in \Sset^*} \xhat_i / \enter_i - \sum_{j \in N \setminus \D} \xhat_j / \epsone_j(\Sset^*)$ is not less than the optimal objective value of \eqref{eq:sepip}. Recall the constraint matrix of \eqref{eq:sepip} is totally unimodular, so we may assume $(y^*,z^*)$ is $0$--$1$ valued. If $z^*_i = 0$ for all $i \in \Msetone$, then the separation problem objective evaluated at $\Sset^* = \emptyset$ is $0$ and the optimal objective value of \eqref{eq:sepip} is nonpositive, as desired. Next, assume $\sum_{i \in \Msetone} z^*_i \geq 1$.
By constraints \eqref{eq:sepip-2}, for each $j \in N \setminus \D$ there exists a position $i$ with $z^*_{\pi_j(i)} = 1$, which forces $y^*_{ij} = 0$. By constraints \eqref{eq:sepip-1} and \eqref{eq:sepip-3}, for each $j \in N \setminus \D$, there exists $k_j$ such that $y^*_{ij} = 1$ for $i = 0, \ldots, k_j - 1$ and $y^*_{ij} = 0$ for $i = k_j, \ldots, \tqcardone$. Then the optimal objective value of \eqref{eq:sepip} is \begin{align} \sum\limits_{i \in \Msetone} \frac{\xhat_i}{\enter_i} z^*_i - \mash{\sum\limits_{j \in N \setminus \D}} \enspace\ \sum\limits_{i = 1}^{\tqcardone}\frac{\xhat_j}{\epshatone_{\pi_j(i),j}} (y^*_{i-1,j} - y^*_{ij}) = \sum\limits_{i \in \Msetone} \frac{\xhat_i}{\enter_i} z^*_i - \mash{\sum\limits_{j \in N \setminus \D}} \ \frac{\xhat_j}{\epshatone_{\pi_j(k_j),j}}. \label{eq:sepipi-optobj} \end{align} Consider a fixed $j \in N \setminus \D$. By constraints \eqref{eq:sepip-2}, $z^*_{\pi_j(i)} = 0$ for $i = 1, \ldots, k_j - 1$. Hence every $i \in \Sset^*$ satisfies $\ellone_j(i) \geq k_j$. Due to the ordering $\epshatone_{\pi_j(1),j} \leq \ldots \leq \epshatone_{\pi_j(\tqcardone),j}$, we have $\epsone_j(\Sset^*) = \min_{i \in \Sset^*} \epshatone_{ij} \geq \epshatone_{\pi_j(k_j),j}$. Therefore, the separation objective evaluated at $\Sset^*$ is at least as large as \eqref{eq:sepipi-optobj}: \begin{align*} \sum\limits_{i \in \Sset^*} \frac{\xhat_i}{\enter_i} - \mash{\sum\limits_{j \in N \setminus \D}} \ \frac{\xhat_j}{\epsone_j(\Sset^*)} &\geq \sum\limits_{i \in \Msetone} \frac{\xhat_i}{\enter_i} z^*_i - \mash{\sum\limits_{j \in N \setminus \D}} \ \frac{\xhat_j}{\epshatone_{\pi_j(k_j),j}}. \end{align*} Hence, \eqref{eq:sepip} models the separation problem \eqref{eq:tq-separation} for a fixed $\xhat \in \PB$. We conclude by relating $\tqextended_{\D}$ to \eqref{eq:sepip}. The point $\xhat$ lies in $\tqrelax_{\D}$ if and only if the primal objective \eqref{eq:sepip-obj} does not exceed $1$.
The linear program \eqref{eq:sepip} is feasible and bounded, so strong duality applies. Let $\lambda$, $\theta$, and $v$ be the linear program's dual variables, as labeled in \eqref{eq:sepip}. By strong duality, $\xhat \in \tqrelax_{\D}$ if and only if the dual of \eqref{eq:sepip} has objective value less than or equal to $1$. Because the dual of \eqref{eq:sepip} is a minimization problem, we enforce this condition with the constraint $\sum_{i \in \Msetone} \sum_{j \in N \setminus \D} \theta_{ij} + \sum_{j \in N \setminus \D} \lambda_j \leq 1$. We also replace the fixed $\xhat$ in the dual of \eqref{eq:sepip} with the nonnegative variable $x \in \R^{|N|}_{+}$. Thus, $x \in \tqrelax_{\D}$ if and only if there exists $(\theta,v,\lambda)$ satisfying the dual constraints of \eqref{eq:sepip} and the aforementioned dual objective cut. These constraints define $\tqextended_{\D}$. \end{proof} Within the proof of Theorem~\ref{thm:extendedform1}, we show that the linear program \eqref{eq:sepip} can be used to solve the separation problem \eqref{eq:tq-separation}. The remainder of the proof uses the separation linear program \eqref{eq:sepip} and duality theory to derive an extended formulation, a technique that was first proposed by \citet{martin1991}. For a fixed $j \in N \setminus \D$, the constraints \eqref{eq:sepip-1}, \eqref{eq:sepip-3}, and \eqref{eq:sepip-4} form an instance of the mixing set, first studied by \citet{gunluk2001}. The proof's derivation of the extended formulation $\tqextended_{\D}$ follows results from \citet{luedtke2008} and \citet{miller2003}. Proposition~\ref{prop:no-tq-cuts} states that if no cuts of the form \eqref{eq:tq-inequality} exist, then there exist no valid inequalities for $\clconv (\PB \setminus \Q)$ other than those defining $\PB$. \begin{proposition}\label{prop:no-tq-cuts} Under Assumption~\ref{ass:recc}, if $\Msetone = \emptyset$, then $\clconv(\PB \setminus \genset_{\D}^{\Q}) = \PB$. 
\end{proposition} \begin{proof} It suffices to show $\{\xbasis\} + \lcinterval{0}{+\infty}{i} \subseteq \clconv(\PB \setminus \genset_{\D}^{\Q})$ for $i \in N$. We first show $\{\xbasis\} + \lcinterval{0}{+\infty}{i} \subseteq \PB \setminus \genset_{\D}^{\Q}$ for $i \in N \setminus \D$. Assume for contradiction there exists $k \in N \setminus \D$ and $\gamma \geq 0$ such that $\xbasis + \gamma \rbar^k \in \genset_{\D}^{\Q}$. By the definition of $\genset_{\D}^{\Q}$ in \eqref{eq:genset-def}, there exists $\lambda \in \R^{|\D|}_{+}$, $\theta \in \R^{|\D|}_{+}$, and $q \in \recc(\Q)$ such that $\lambda_j > \enter_j$ for all $j \in \D$, $\sum_{j \in \D} \theta_j = 1$, and \begin{align*} \xbasis + \gamma \rbar^k &= \xbasis + \sum\limits_{j \in \D} \theta_j \lambda_j \rbar^j + q. \end{align*} Equivalently, we have $q = \gamma \rbar^k - \sum_{j \in \D} \theta_j \lambda_j \rbar^j$. Because the vectors $\{\rbar^j \colon j \in N\}$ are linearly independent and, since $\sum_{j \in \D} \theta_j = 1$, there exists $j \in \D$ with $\theta_j \lambda_j > 0$, the coefficient of $\rbar^j$ in $q$ is negative; it follows that $q \notin \cone(\{\rbar^j \colon j \in N\}) = \recc(\PB)$. This contradicts Assumption~\ref{ass:recc}, which states $\recc(\Q) \subseteq \recc(\PB)$. Now, consider $i \in \D$, $\lambda > 0$, and $\epshatone > 0$. Because $\Msetone = \emptyset$, there exists $j \in N \setminus \D$ such that $\lambda \rbar^i + \epshatone \rbar^j \notin \recc(\genset_{\D}^{\Q})$. Then for a sufficiently large $M > 1$, $\xbasis + M (\lambda \rbar^i + \epshatone \rbar^j) \notin \genset_{\D}^{\Q}$. We have that $\xbasis + \lambda \rbar^i + \epshatone \rbar^j$ is a convex combination of $\xbasis \notin \genset_{\D}^{\Q}$ and $\xbasis + M (\lambda \rbar^i + \epshatone \rbar^j) \notin \genset_{\D}^{\Q}$. Thus, $\xbasis + \lambda \rbar^i + \epshatone \rbar^j \in \conv(\PB \setminus \genset_{\D}^{\Q})$ for all $\epshatone > 0$, so $\xbasis + \lambda \rbar^i \in \clconv(\PB \setminus \genset_{\D}^{\Q})$.
\end{proof} In the case where $\D = \N1$ ($\genset_{\D}^{\Q} = \Rset^\Q$) and $\Msetone = \emptyset$, Theorem~\ref{thm:conv-cq} and Proposition~\ref{prop:no-tq-cuts} together give us $\Msetone = \emptyset \implies \clconv(\PB \setminus \Q) = \PB$. \subsection{A multi-term disjunction valid for \texorpdfstring{$\PB \setminus \T^\Q$}{P\carrot B \back T\carrot C}}\label{subsec:multiterm} The set $\PB \setminus \T^\Q$ has the potential to be a tighter relaxation of $\PB \setminus \Q$ than the two-term disjunction of Theorem~\ref{thm:n1-n2}, because it considers the full structure of $\recc(\Q)$. In this section, we derive a valid disjunction for $\PB \setminus \T^\Q$ that contains $|\N2| + 1$ terms. These terms are defined by nonconvex sets, but in Section~\ref{subsec:skq-cuts-combined} we derive polyhedral relaxations of each term. Given the valid disjunction we derived for $\PB \setminus \T^\Q$, these polyhedral relaxations can be used with other inequalities defining $\P$ to obtain a union of polyhedra that contains $\P \setminus \T^\Q$. The disjunctive programming approach of Balas can be applied to construct a CGLP to find a valid inequality that separates a candidate solution from $\clconv(\P \setminus \T^\Q)$. Let $\So^\Q$ be defined as follows: \begin{align}\label{eq:def-s0q} \So^\Q &\coloneqq \{\xbasis\} + \conv \big( \textstyle\bigcup_{j \in \Nonetwo} \interval{\enter_j}{+\infty}{j} \big) + \recc(\Q). \end{align} We define the following sets for $k \in \N2$: \begin{alignat}{2} \Sk^\Q &\coloneqq \{\xbasis\} + \conv \big( \textstyle\bigcup_{j \in \N2} \lcinterval{0}{\exit_j}{j} \big) + \rcinterval{-\infty}{0}{k} + \recc(\Q). \label{eq:def-skq} \end{alignat} The sets $\So^\Q$ and $\Sk^\Q$ ($k \in \N2$) are the foundation of our multi-term valid disjunction for $\PB \setminus \T^\Q$. In Proposition~\ref{prop:rewrite-skq}, we present an equivalent construction of $\Sk^\Q$ ($k \in \N2$). 
\begin{proposition}\label{prop:rewrite-skq} For $k \in \N2$, $\Sk^\Q$ can be written as \begin{align} \Sk^\Q &= \{\xbasis\} + \conv \big( \textstyle\bigcup_{j \in \Nonetwo} \interval{\enter_j}{\exit_j}{j} \big) + \rcinterval{-\infty}{0}{k} + \recc(\Q). \label{eq:def-skq-alt} \end{align} \end{proposition} \begin{proof} For $A_1,A_2 \subseteq \R^n$, it holds that $\conv(A_1 + A_2) = \conv(A_1) + \conv(A_2)$. For $B \subseteq \R^n$, we also have $(A_1 \cup A_2) + B = (A_1 + B) \cup (A_2 + B)$. This gives us: \begin{align} \Sk^\Q &= \{\xbasis\} + \conv \big( \textstyle\bigcup_{j \in \N2} \lcinterval{0}{\exit_j}{j} \big) + \rcinterval{-\infty}{0}{k} + \recc(\Q) \nonumber \\ &= \{\xbasis\} + \conv \big( \textstyle\bigcup_{j \in \N2} ( \lcinterval{0}{\exit_j}{j} + \rcinterval{-\infty}{0}{k} + \recc(\Q) ) \big). \label{eq:skq-stepone} \end{align} Observe that $\interval{\enter_j}{\exit_j}{j} \subseteq \{0\} + \recc(\Q)$ for all $j \in \N1$. This allows us to rewrite $\Sk^\Q$ from \eqref{eq:skq-stepone} as \begin{align} \Sk^\Q &= \{\xbasis\} + \conv \big( \textstyle\bigcup_{j \in \Nonetwo} ( \lcinterval{0}{\exit_j}{j} + \rcinterval{-\infty}{0}{k} + \recc(\Q) ) \big). \label{eq:skq-prep-replace} \end{align} Finally, note that $0 \in \interval{\enter_k}{\exit_k}{k} + \rcinterval{-\infty}{0}{k}$. Hence, for all $j \in \Nonetwo$, we can replace $\lcinterval{0}{\exit_j}{j}$ in the convex hull operator of \eqref{eq:skq-prep-replace} with $\interval{\enter_j}{\exit_j}{j}$: \begin{align*} \Sk^\Q &= \{\xbasis\} + \conv \big( \textstyle\bigcup_{j \in \Nonetwo} ( \interval{\enter_j}{\exit_j}{j} + \rcinterval{-\infty}{0}{k} + \recc(\Q) ) \big) \\ &= \{\xbasis\} + \conv \big( \textstyle\bigcup_{j \in \Nonetwo} \interval{\enter_j}{\exit_j}{j} \big) + \rcinterval{-\infty}{0}{k} + \recc(\Q). \tag*{\qedhere} \end{align*} \end{proof} Theorem~\ref{thm:disjunction} presents a disjunctive representation of $\PB \setminus \T^\Q$. Throughout, let $\Ncup \coloneqq \N2 \cup \{0\}$. 
\begin{theorem}\label{thm:disjunction} It holds that \begin{align} \PB \setminus \T^\Q = \mash{\bigcup_{k \in \Ncup}} \ (\PB \setminus \Sk^\Q). \label{eq:disjunction} \end{align} \end{theorem} Before proving Theorem~\ref{thm:disjunction}, we prove a consequence of Farkas' lemma \cite{farkas1902}. \begin{lemma}\label{lem:farkas-cons-1} Let $\aparam, \cparam \in \R^{\lparam}_{+}$, where $\sum_{i = 1}^{\lparam} \aparam_i > 0$. There exists $\theta \in \R^{\lparam + 1}_{+}$ such that \begin{align}\label{eq:fark-sys-1} \begin{split} \sum\limits_{i = 0}^{\lparam} \theta_i &= 1 \\ \aparam_i \theta_0 - \cparam_i \theta_i &= 0 \qquad i = 1, \ldots, \lparam. \end{split} \end{align} \end{lemma} \begin{proof} By Farkas' lemma, either system \eqref{eq:fark-sys-1} has a solution, or there exists $y \in \R^{\lparam + 1}$ such that \begin{subequations}\label{eq:fark-family1} \begin{align} y_0 + \sum\limits_{i = 1}^{\lparam} \aparam_i y_i &\geq 0 \label{eq:fark-sub1} \\ y_0 - \cparam_i y_i &\geq 0 \qquad i = 1, \ldots, \lparam \label{eq:fark-sub2} \\ y_0 &< 0. \label{eq:fark-sub3} \end{align} \end{subequations} Assume for contradiction there exists a $y$ satisfying \eqref{eq:fark-family1}. The nonnegativity of $\cparam$, \eqref{eq:fark-sub2}, and \eqref{eq:fark-sub3} imply $y_i < 0$ for all $i = 1, \ldots, \lparam$. The vector $\aparam$ is nonnegative and by assumption sums to a strictly positive value. We conclude $y_0 + \sum_{i = 1}^{\lparam} \aparam_i y_i < 0$, contradicting \eqref{eq:fark-sub1}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:disjunction}.] It suffices to show $\T^\Q = \bigcap_{k \in \Ncup} \Sk^\Q$. If $\N2 = \emptyset$, we have $\T^\Q = \So^\Q$ by \eqref{eq:def-tq} and \eqref{eq:def-s0q}, and the result holds. Therefore, assume $\N2 \neq \emptyset$. By construction, $\T^\Q \subseteq \Sk^\Q$ for all $k \in \Ncup$, implying $\T^\Q \subseteq \cap_{k \in \Ncup} \Sk^\Q$. Let $\xhat \in \cap_{k \in \Ncup} \Sk^\Q$. 
By $\xhat$'s membership in $\So^\Q$, there exist $\lambda^0 \in \R^{|\Nonetwo|}_{++}$, $\mu \in \R^{|\N2|}_{++}$, $\delta^0 \in \R_{+}^{|\Nonetwo|}$, and $q^0 \in \recc(\Q)$ such that $\lambda^0_j \in (\enter_j, \exit_j)$ for all $j \in \Nonetwo$, $\sum_{j \in \Nonetwo} \delta^0_j = 1$, and \begin{align}\label{eq:proof-xhat-s0} \xhat &= \xbasis + \sum\limits_{j \in \N1} \delta^0_j \lambda^0_j \rbar^j + \sum\limits_{j \in \N2} \delta^0_j (\lambda^0_j + \mu_j) \rbar^j + q^0. \end{align} If $\delta^0_j = 0$ for all $j \in \N2$, then $\xhat \in \{\xbasis\} + \conv( \cup_{j \in \N1} \interval{\enter_j}{\exit_j}{j} ) + \recc(\Q) \subseteq \T^\Q$ by \eqref{eq:proof-xhat-s0} and we have nothing left to prove. We therefore assume $\sum_{j \in \N2} \delta^0_j > 0$. From \eqref{eq:def-skq-alt}, $\xhat \in \cap_{k \in \N2} \Sk^\Q$ implies that for all $k \in \N2$, there exist $\lambda^k \in \R^{|\Nonetwo|}_{++}$, $\eta_k \in \R_{+}$, $\delta^k \in \R_{+}^{|\Nonetwo|}$, and $q^k \in \recc(\Q)$ such that $\lambda^k_j \in (\enter_j, \exit_j)$ for all $j \in \Nonetwo$, $\sum_{j \in \Nonetwo} \delta^k_j = 1$, and \begin{align}\label{eq:proof-xhat-sk} \xhat &= \xbasis + \mash{\sum\limits_{j \in \Nonetwo}} \ \delta^k_j \lambda^k_j \rbar^j - \eta_k \rbar^k + q^k. \end{align} Because $\mu \in \R^{|\N2|}_{++}$ and $\sum_{j \in \N2} \delta^0_j > 0$, it holds that $\sum_{j \in \N2} \delta^0_j \mu_j > 0$. We apply Lemma~\ref{lem:farkas-cons-1} with $\lparam \coloneqq |\N2|$, $\aparam_j \coloneqq \delta_j^0 \mu_j$ for $j \in \N2$, and $\cparam_j \coloneqq \eta_j$ for $j \in \N2$. Then there exists $\theta \in \R^{|\N2| + 1}_{+}$ such that $\sum_{j \in \Ncup} \theta_j = 1$ and $\theta_0 \delta^0_k \mu_k = \theta_k \eta_k$ for all $k \in \N2$. 
We use this $\theta$ as convex combination multipliers on \eqref{eq:proof-xhat-s0} and \eqref{eq:proof-xhat-sk} to rewrite $\xhat$ as \begin{align} \xhat &= \xbasis + \mash{\sum\limits_{k \in \Ncup }} \ \mash{\sum\limits_{\ j \in \Nonetwo}} \theta_k \delta^k_j \lambda^k_j \rbar^j + \mash{\sum\limits_{k \in \Ncup}} \theta_k q^k. \label{eq:xhat-conv-rep} \end{align} For every $j \in \Nonetwo$ and $k \in \Ncup$, $\lambda^k_j \rbar^j \in \interval{\enter_j}{\exit_j}{j}$. The coefficients on the terms $\lambda^k_j \rbar^j$ ($j \in \Nonetwo$, $k \in \Ncup$) in \eqref{eq:xhat-conv-rep} are nonnegative and sum to one: \begin{align*} \mash{\sum\limits_{k \in \Ncup }} \ \mash{\sum\limits_{\ j \in \Nonetwo}} \theta_k \delta^k_j &= \mash{\sum\limits_{k \in \Ncup }} \theta_k \mash{\sum\limits_{j \in \Nonetwo}} \delta^k_j = 1. \end{align*} It follows that $\xbasis + \sum_{k \in \Ncup} \sum_{j \in \Nonetwo} \theta_k \delta^k_j \lambda^k_j \rbar^j \in \T$. Lastly, we have $\sum_{k \in \Ncup} \theta_k q^k \in \recc(\Q)$. Thus, by \eqref{eq:xhat-conv-rep}, $\xhat \in \T^\Q$. \end{proof} The multi-term disjunction \eqref{eq:disjunction} is a generalization of the two-term disjunction of Theorem~\ref{thm:n1-n2}. Recall that this two-term disjunction does not account for the recession structure of $\Q$ beyond the property that $\rbar^j \in \recc(\Q)$ for all $j \in \N1$ and the assumption that $\rbar^j \notin \recc(\Q)$ for all $j \in \N0$. If $\Q$ is bounded and Assumption~\ref{ass:1} holds, it can be shown that the multi-term disjunction reduces to the simple two-term disjunction of Theorem~\ref{thm:n1-n2}. In particular, we have \begin{align*} \PB \setminus \So^\Q &= \{x \in \PB \colon \textstyle\sum_{j \in N} x_j / \enter_j \leq 1\} \\ \PB \setminus \Sk^\Q &= \{x \in \PB \colon \textstyle\sum_{j \in N} x_j / \exit_j \geq 1\} \quad \forall k \in \N2. \end{align*} In the remainder of this paper, we derive polyhedral relaxations for each of the terms in the disjunction \eqref{eq:disjunction}.
Given a polyhedral relaxation of each disjunctive term, we can obtain valid inequalities for $\P \setminus \Q$ using a disjunctive approach analogous to the method outlined in Remark~\ref{remark:0}. \begin{remark} The multi-term disjunction \eqref{eq:disjunction} for $\P \setminus \Q$ can be extended to the case $\xbasis \in \Q$. Specifically, if $\enter_j = 0$ for all $j \in N$, the set $\So^\Q$ defined in \eqref{eq:def-s0q} contains every point in $\PB$ except for $\xbasis$. Because $\xbasis \in \Q$, we know that $\PB \setminus \So^\Q$ (one of the terms of the disjunction \eqref{eq:disjunction}) is empty. \end{remark} \addtocounter{example}{-1} \begin{contexample} Using the two-term disjunction from Section~\ref{sec:cuts-1}, we were unable to derive meaningful cuts for $\P \setminus \Q$ from Example~\ref{ex:useless}. In contrast, Theorem~\ref{thm:disjunction} provides a disjunction for $\P \setminus \Q$. A graphical representation of the relationship $\T^\Q = \bigcap_{k \in \Ncup} \Sk^\Q$ for this example is shown in Figures \ref{fig:tq-skq-2}--\ref{fig:tq-skq-1}. In this example, $|\N0| = |\N2| = 1$. The disjunction of Theorem~\ref{thm:disjunction} can be seen in Figure~\ref{fig:tq-dis-1}. \begin{figure} \caption{Construction of $\T^\Q$ for Example~\ref{ex:useless}: the set $\So^\Q$, the set $\Sbare_1^\Q$, and their intersection $\T^\Q = \So^\Q \cap \Sbare_1^\Q$.} \label{fig:tq-skq-2} \label{fig:tq-skq-3} \label{fig:tq-skq-1} \end{figure} \begin{figure} \caption{The disjunction of Theorem~\ref{thm:disjunction} for Example~\ref{ex:useless}.} \label{fig:tq-dis-1} \end{figure} \end{contexample} \addtocounter{example}{1} Based on the disjunction \eqref{eq:disjunction}, the inequalities \eqref{eq:tq-inequality}, which are valid for $\PB \setminus \T^\Q$, are also valid for $\PB \setminus \Sk^\Q$ for all $k \in \Ncup$. The sets $\PB \setminus \Sk^\Q$, $k \in \Ncup$ are nonconvex in general.
In Section~\ref{subsec:skq-cuts-combined}, we derive polyhedral relaxations of these sets. Together, these relaxations form $|\N2| + 1$ polyhedra whose union contains the feasible region $\P \setminus \Q$. \subsection{Polyhedral relaxation of \texorpdfstring{$\PB \setminus \Sk^\Q$, $k \in \Ncup$}{P\carrot B \back S\under k\carrot C, k \elementof N\subtwo\carrot 0}}\label{subsec:skq-cuts-combined} In this section, we describe a polyhedral relaxation of the set $\PB \setminus \Sk^\Q$ for $k \in \Ncup$. To begin, we consider the set $\PB \setminus \So^\Q$. The set $\So^\Q$ is equivalent to $\genset_{\D}^{\Q}$ from Section~\ref{subsec:tq-cuts} when $\D = \Nonetwo$. As such, the theory of Section~\ref{subsec:tq-cuts} can be applied to the specific case $\D = \Nonetwo$ to obtain an exponential family of inequalities for $\PB \setminus \So^\Q$ and a polynomial-size extended formulation of the polyhedron defined by these inequalities. \addtocounter{example}{-1} \begin{contexample} Let $\D$ from Section~\ref{subsec:tq-cuts} equal $\Nonetwo$. Consider $\P$ and $\Q$ defined in Example~\ref{ex:useless}. Figure~\ref{fig:pb-s0q-1} shows the selection of $\epsone_2(\Sset)$ for $\Sset = \{1\}$. This $\epsone_2(\Sset)$ is then used to construct the inequality of Theorem~\ref{thm:tq-cuts} in Figure~\ref{fig:pb-s0q-2}. \begin{figure} \caption{The maximal selection of $\epsone_2(\Sset)$, where $\Sset = \{1\}$, and the resulting valid inequality of Theorem~\ref{thm:tq-cuts}.} \label{fig:pb-s0q-1} \label{fig:pb-s0q-2} \end{figure} \end{contexample} \addtocounter{example}{1} Now, let $k \in \N2$ be fixed. For the remainder of this section, we describe a polyhedral relaxation of $\PB \setminus \Sk^\Q$. Let $\J$ be defined as follows: \begin{align*} \J &\coloneqq \{i \in N \colon \rbar^i \in \recc(\Sk^\Q) \}. \end{align*} Because $\recc(\Q) \subseteq \recc(\Sk^\Q)$, we have $\N1 \subseteq \J$.
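Computing $\J$ reduces to $|N|$ membership queries against $\recc(\Sk^\Q)$, and a rough screen for the set $\Msetthree$ defined below can probe a single small $\epshatthree$. The following sketch makes this concrete; the oracle \texttt{in\_recc\_sk} and the probe value are hypothetical, and probing one $\epshatthree$ is only a heuristic stand-in for evaluating the supremum $\epshatthree_{ij}$ exactly.

```python
def compute_J(rays, in_recc_sk):
    """J = {i : rbar^i in recc(Sk^Q)}, via a (hypothetical)
    membership oracle for the recession cone."""
    return {i for i, ray in enumerate(rays) if in_recc_sk(ray)}

def compute_M3(rays, J, in_recc_sk, probe=1e-6):
    """Heuristic screen for M3 = {i in J : eps3_hat_{ij} > 0 for all
    j outside J}: probe a single small epsilon instead of computing
    the exact supremum eps3_hat_{ij}."""
    outside = [j for j in range(len(rays)) if j not in J]
    return {i for i in J
            if all(in_recc_sk([a + probe * b
                               for a, b in zip(rays[i], rays[j])])
                   for j in outside)}
```

A probe that succeeds certifies $\epshatthree_{ij} > 0$; a probe that fails is inconclusive for smaller $\epshatthree$, which is why this is only a screen.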
\begin{observation}\label{obs:recc-skq} It holds that $\recc(\Sk^\Q) = \recc(\Q) + \rcinterval{-\infty}{0}{k}$. \end{observation} \begin{proposition} The index $k$ is not in $\J$. \end{proposition} \begin{proof} Assume for contradiction $k \in \J$. By Observation~\ref{obs:recc-skq}, there exists $q \in \recc(\Q)$ and $\lambda \geq 0$ such that $\rbar^k = q - \lambda \rbar^k$, which implies $\rbar^k \in \recc(\Q)$. This is a contradiction; $k \in \N2$, so the halfline $\lcinterval{0}{+\infty}{k}$ extending from $\xbasis$ intersects $\Q$ on a finite interval. \end{proof} Proposition~\ref{prop:skq-pb-int} characterizes the points where $\Sk^\Q$ intersects each edge of $\PB$. \begin{proposition}\label{prop:skq-pb-int} Let $j \in N$. If Assumption~\ref{ass:recc} holds, then \begin{align*} \exitstar_j \coloneqq \sup \{\lambda \geq 0 \colon \xbasis + \lambda \rbar^j \in \Sk^\Q\} &= \begin{dcases} 0 &\textrm{if } j \in \N0 \setminus \J \\ \exit_j &\textrm{if } j \in \N2 \setminus \J \\ +\infty &\textrm{if } j \in \J. \end{dcases} \end{align*} \end{proposition} \begin{proof} Let $j \in \J$. By Observation~\ref{obs:recc-skq}, there exists $\lambda \geq 0$ such that $\rbar^j + \lambda \rbar^k \in \recc(\Q)$. Consider any $\gamma > 0$. We have $\xbasis + \gamma(\rbar^j + \lambda \rbar^k) \in \Sk^\Q$. Because $-\rbar^k \in \recc(\Sk^\Q)$, we have $\xbasis + \gamma \rbar^j \in \Sk^\Q$. Thus, $\exitstar_j = +\infty$. Next, let $j \in \N2 \setminus \J$. By the construction of $\Sk^\Q$ in \eqref{eq:def-skq}, $\exitstar_j \geq \exit_j$. Assume for contradiction $\exitstar_j > \exit_j$. 
There exists $\theta \in \R^{|\N2|}_{+}$, $\delta \in \R^{|\N2|}_{+}$, $\gamma \geq 0$, and $q \in \recc(\Q)$ such that $\sum_{i \in \N2} \theta_i = 1$, $\delta_i \in [0, \exit_i)$ for $i \in \N2$, and \begin{align} \xbasis + \exitstar_j \rbar^j &= \xbasis + \sum\limits_{i \in \N2} \theta_i \delta_i \rbar^i - \gamma \rbar^k + q \nonumber \\ \implies q &= \exitstar_j \rbar^j - \sum\limits_{i \in \N2} \theta_i \delta_i \rbar^i + \gamma \rbar^k. \label{eq:x-skq-cone} \end{align} Observe $\theta_i \delta_i = 0$ for all $i \in \N2 \setminus \{j,k\}$ and $\gamma \geq \theta_k \delta_k$; if not, $q \notin \recc(\PB)$ from \eqref{eq:x-skq-cone}, contradicting Assumption~\ref{ass:recc}. Therefore, \begin{align*} q &= (\exitstar_j - \theta_j \delta_j) \rbar^j + (\gamma - \theta_k \delta_k) \rbar^k. \end{align*} Because $\rbar^k \in \recc(\Sk^\Q)$, we have $q - (\gamma - \theta_k \delta_k) \rbar^k = (\exitstar_j - \theta_j \delta_j) \rbar^j \in \recc(\Sk^\Q)$. This contradicts $j \notin \J$. Finally, let $j \in \N0 \setminus \J$. Assume for contradiction $\exitstar_j > 0$. We follow the definitions in the previous case ($j \in \N2 \setminus \J$) to obtain \begin{align*} q &= \exitstar_j \rbar^j + (\gamma - \theta_k \delta_k) \rbar^k. \end{align*} Again, we obtain $\exitstar_j \rbar^j \in \recc(\Sk^\Q)$, contradicting $j \notin \J$. \end{proof} The proof of Proposition~\ref{prop:skq-pb-int} shows that without Assumption~\ref{ass:recc}, it may be the case that $\exitstar_j > \exit_j$ for some $j \in N$. This is due to the addition of $\rcinterval{-\infty}{0}{k}$ to $\recc(\Q)$. Corollary~\ref{cor:n0-j} follows from Proposition~\ref{prop:skq-pb-int}. \begin{corollary}\label{cor:n0-j} If $\N0 \subseteq \J$, then there exists $\epsilon > 0$ such that $\xbasis + \epsilon \rbar^j \in \Sk^\Q$ for all $j \in N$. \end{corollary} By Corollary~\ref{cor:n0-j}, if $\N0 \subseteq \J$, $\xbasis$ lies in the relative interior of $\Sk^\Q$. 
We can construct a polyhedral relaxation of $\PB \setminus \Sk^\Q$ by using intersection cuts generated by the cone $\PB$. Methods for strengthening intersection cuts (e.g., \citet{glover1974}) can be used to obtain a strengthened polyhedral relaxation. For this reason, we present inequalities only for the case $\N0 \nsubseteq \J$. \begin{assumption}\label{ass:2} There exists $j \in \N0$ such that $\rbar^j \notin \recc(\Sk^\Q)$, i.e., $\N0 \nsubseteq \J$. \end{assumption} For $i \in \J$ and $j \in N \setminus \J$, let \begin{align*} \epshatthree_{ij} &\coloneqq \sup \{ \epshatthree \geq 0 \colon \rbar^i + \epshatthree \rbar^j \in \recc(\Sk^\Q) \}. \end{align*} We define $\Msetthree$ to be the indices of $\J$ that satisfy the following property: \begin{align*} \Msetthree &\coloneqq\{i \in \J \colon \epshatthree_{ij} > 0\ \forall j \in N \setminus \J \}. \end{align*} For any $i \in \Msetthree$ and $j \in N \setminus \J$, the intersection of $\recc(\Sk^\Q)$ with the cone $\F_{ij}$ contains more than the trivial directions $\lcinterval{0}{+\infty}{i} \subseteq \recc(\Sk^\Q)$. The proof of Proposition~\ref{prop:epshatthree-recc} is similar to that of Proposition~\ref{prop:epshat-recc}. \begin{proposition}\label{prop:epshatthree-recc} Let $(i,j) \in \Msetthree \times (N \setminus \J)$. For any $\epshatthree \in [0,\epshatthree_{ij})$, we have $\rbar^i + \epshatthree \rbar^j \in \recc(\Sk^\Q)$. \end{proposition} For $\Sset \subseteq \Msetthree$ and $j \in N \setminus \J$, define $\epsthree_j(\Sset)$ to be \begin{align*} \epsthree_j(\Sset) &= \begin{cases} \min_{i \in \Sset} \epshatthree_{ij} & \textrm{ if } \Sset \neq \emptyset \\ +\infty & \textrm{ otherwise}. \end{cases} \end{align*} By Proposition~\ref{prop:epshatthree-recc}, if $\Sset \neq \emptyset$, $\rbar^i + \epsthree_j(\Sset) \rbar^j \in \recc(\Sk^\Q)$ for all pairs $(i,j) \in \Sset \times (N \setminus \J)$. \begin{theorem}\label{thm:skq-cuts} Let $\Sset \subseteq \Msetthree$.
The inequality \begin{align} \sum\limits_{i \in U} x_i - \mash{\sum\limits_{j \in N \setminus \J}} \ \frac{x_j}{\epsthree_j(\Sset)} &\leq 0 \label{eq:skq-inequality} \end{align} is valid for $\PB \setminus \Sk^\Q$. \end{theorem} \begin{proof} If $\Sset = \emptyset$, the result holds trivially, so assume $\Sset \neq \emptyset$. By construction, $\epsthree_j(\Sset) > 0$. Let $\xhat \in \PB$ satisfy $\sum_{i \in U} \xhat_i - \sum_{j \in N \setminus \J} \xhat_j / \epsthree_j(\Sset) > 0$. We show $\xhat \in \Sk^\Q$. For ease of notation, let $\epsthree_j \coloneqq \epsthree_j(\Sset)$. By Proposition~\ref{prop:epshatthree-recc}, for $(i,j) \in \Sset \times (N \setminus \J)$, there exists $q^{ij} \in \recc(\Sk^\Q)$ such that $q^{ij} = \rbar^i + \epsthree_j \rbar^j$. Then \begin{alignat*}{2} \rbar^j &= \frac{1}{\epsthree_j} q^{ij} - \frac{1}{\epsthree_j} \rbar^i \qquad &&\forall i \in \Sset,\ j \in N \setminus \J. \end{alignat*} By Lemma~\ref{lem:farkas-cons-2}, there exists $\theta \in \R_{+}^{|\Sset| \times |N \setminus \J|}$ such that \begin{subequations}\label{eq:farkas-in-skq} \begin{alignat}{2} \sum\limits_{i \in \Sset} \theta_{ij} &= 1 \qquad && \forall j \in N \setminus \J \label{eq:farkas-in-skq-1} \\ \mash{\sum\limits_{j \in N \setminus \J}} \ \theta_{ij} \frac{\xhat_j}{\epsthree_j} &\leq \xhat_i && \forall i \in \Sset. \label{eq:farkas-in-skq-2} \end{alignat} \end{subequations} This result is obtained with $\indone \coloneqq \Sset$, $\indtwo \coloneqq N \setminus \J$, $\aparam_i \coloneqq \xhat_i$ for all $i \in \Sset$, and $\cparam_j \coloneqq \xhat_j / \epsthree_j$ for all $j \in N \setminus \J$. With the $\theta$ satisfying \eqref{eq:farkas-in-skq}, we have \begin{align} \rbar^j &= \sum\limits_{i \in \Sset} \theta_{ij} \bigg( \frac{1}{\epsthree_j} q^{ij} - \frac{1}{\epsthree_j} \rbar^i \bigg) \qquad \forall j \in N \setminus \J.
\label{eq:new-rbarj} \end{align} Using \eqref{eq:new-rbarj}, we can rewrite $\xhat$ as \begin{align*} \xhat &= \xbasis + \sum\limits_{i \in \Sset} \xhat_i \rbar^i + \mash{\sum\limits_{i \in \J \setminus \Sset}} \xhat_i \rbar^i + \mash{\sum\limits_{j \in N \setminus \J}} \xhat_j \rbar^j \\ &= \xbasis + \sum\limits_{i \in \Sset} \xhat_i \rbar^i + \mash{\sum\limits_{j \in N \setminus \J}} \xhat_j \sum\limits_{i \in \Sset} \theta_{ij} \bigg( \frac{1}{\epsthree_j} q^{ij} - \frac{1}{\epsthree_j} \rbar^i \bigg) + \mash{\sum\limits_{i \in \J \setminus \Sset}} \xhat_i \rbar^i \\ &= \xbasis + \sum\limits_{i \in \Sset} \bigg( \xhat_i - \mash{\sum\limits_{j \in N \setminus \J}} \theta_{ij} \frac{\xhat_j}{\epsthree_j} \bigg) \rbar^i + \mash{\sum\limits_{j \in N \setminus \J}} \enspace \sum\limits_{i \in \Sset} \theta_{ij} \frac{\xhat_j}{\epsthree_j} q^{ij} + \mash{\sum\limits_{i \in \J \setminus \Sset}} \xhat_i \rbar^i. \end{align*} By \eqref{eq:farkas-in-skq-2}, the coefficients on the terms $\rbar^i$, $i \in \Sset$, are nonnegative. Observe that \begin{alignat*}{2} \bigg( \xhat_i - \mash{\sum\limits_{j \in N \setminus \J}} \theta_{ij} \frac{\xhat_j}{\epsthree_j} \bigg) \rbar^i &\in \recc(\Sk^\Q) \qquad && \forall i \in \Sset \\ \theta_{ij} \frac{\xhat_j}{\epsthree_j} q^{ij} &\in \recc(\Sk^\Q) && \forall i \in \Sset,\ j \in N \setminus \J \\ \xhat_i \rbar^i &\in \recc(\Sk^\Q) && \forall i \in \J \setminus \Sset. \end{alignat*} It follows that $\xhat \in \{\xbasis\} + \recc(\Sk^\Q) \subseteq \Sk^\Q$. \end{proof} We next consider the separation problem for $\skrelax \supseteq \PB \setminus \Sk^\Q$, where \begin{align*} \skrelax &\coloneqq \bigg\lbrace x \in \R^{|N|}_{+} \colon \sum\limits_{i \in U} x_i - \mash{\sum\limits_{j \in N \setminus \J}} \ \frac{x_j}{\epsthree_j(\Sset)} \leq 0 \ \forall \Sset \subseteq \Msetthree \bigg\rbrace.
\end{align*} In particular, given some $\xhat \in \PB$, we are interested in finding a subset of $\Msetthree$ that maximizes the violation of an inequality of the form \eqref{eq:skq-inequality}: \begin{align} \max_{\Sset \subseteq \Msetthree}\sum\limits_{i \in U} \xhat_i - \mash{\sum\limits_{j \in N \setminus \J}} \ \frac{\xhat_j}{\epsthree_j(\Sset)}. \label{eq:skq-separation} \end{align} \begin{proposition}\label{prop:skq-supermodular} The separation problem \eqref{eq:skq-separation} is a supermodular maximization problem. \end{proposition} Similar to the derivation of $\tqextended_{\D}$ in Section~\ref{subsec:tq-cuts}, we derive an extended formulation for the relaxation of $\PB \setminus \Sk^\Q$ defined by inequality \eqref{eq:skq-inequality} for all $\Sset \subseteq \Msetthree$. Let $\Msetthree = \{1, \ldots, \skcardone\}$, where $\skcardone \coloneqq |\Msetthree|$. Let $\skcardtwo \coloneqq |N \setminus \J|$. For all $j \in N \setminus \J$, let $\pi_j(1), \pi_j(2), \ldots, \pi_j(\skcardone)$ be ordered to satisfy $\epshatthree_{\pi_j(1),j} \leq \epshatthree_{\pi_j(2),j} \leq \ldots \leq \epshatthree_{\pi_j(\skcardone),j}$. For $i \in \Msetthree$, let $\ellone_j(i)$ be the unique integer satisfying $\pi_j(\ellone_j(i)) = i$. For all $j \in N \setminus \J$, let $\epshatthree_{0j} \coloneqq +\infty$, $\theta_{0j} \coloneqq 0$, $v_{0j} \coloneqq 0$, $v_{\skcardone+1,j} \coloneqq 0$, $\pi_j(0) \coloneqq 0$, and $\pi_j(\skcardone+1) \coloneqq 0$. 
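For small $|\Msetthree|$, the separation problem \eqref{eq:skq-separation} can also be solved by brute-force subset enumeration, which is useful as a correctness check against a dedicated supermodular-maximization routine. A sketch with hypothetical data (and reading the leading sum of the violation as running over the chosen subset $\Sset$, as the Farkas step in the proof of Theorem~\ref{thm:skq-cuts} suggests):

```python
from itertools import combinations
import math

def eps3(S, j, epshat):
    """eps^3_j(S) = min_{i in S} epshat[i, j]; +infinity when S is empty."""
    return min((epshat[i, j] for i in S), default=math.inf)

def separate(xhat, M3, NJ, epshat):
    """Enumerate all subsets S of M3 and return the most violated
    inequality (exponential in |M3|; a sketch, not the supermodular
    maximization algorithm itself)."""
    best_viol, best_S = -math.inf, ()
    for size in range(len(M3) + 1):
        for S in combinations(M3, size):
            viol = (sum(xhat[i] for i in S)
                    - sum(xhat[j] / eps3(S, j, epshat) for j in NJ))
            if viol > best_viol:
                best_viol, best_S = viol, S
    return best_viol, best_S

# Hypothetical instance: M3 = {0, 1}, N \ J = {2}.
epshat = {(0, 2): 2.0, (1, 2): 0.5}
xhat = {0: 1.0, 1: 1.0, 2: 1.0}
viol, S = separate(xhat, [0, 1], [2], epshat)  # picks S = (0,) with violation 0.5
```

The enumeration is only practical for a handful of indices; Proposition~\ref{prop:skq-supermodular} is what justifies using supermodular maximization techniques at larger scale.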
We define $\skextended$ to be the set of $(x,\theta,v,\lambda) \in \R_{+}^{|N|} \times \R_{+}^{\skcardone \times \skcardtwo} \times \R_{+}^{\skcardone \times \skcardtwo} \times \R^{\skcardtwo}$ such that \begin{equation*} \begin{alignedat}{2} \sum\limits_{i \in \Msetthree} \enspace\ \mash{\sum\limits_{j \in N \setminus \J}} \theta_{ij} + \mash{\sum\limits_{j \in N \setminus \J}} \lambda_j &\leq 0 \\ \theta_{ij} + v_{ij} - v_{i+1,j} + \bigg( \frac{1}{\epshatthree_{\pi_j(i+1),j}} - \frac{1}{\epshatthree_{\pi_j(i),j}} \bigg) x_j &\geq 0 \quad && \forall i = 0, \ldots, \skcardone,\ j \in N \setminus \J \\ \mash{\sum\limits_{j \in N \setminus \J}} \theta_{\ellone_j(i),j} - x_i &\geq 0 \quad && \forall i = 1, \ldots, \skcardone. \end{alignedat} \end{equation*} \begin{theorem}\label{thm:extendedform3} It holds that $\proj_x(\skextended) = \skrelax$. \end{theorem} The proof of Theorem~\ref{thm:extendedform3} is omitted because it mirrors that of Theorem~\ref{thm:extendedform1}. We can use the extended formulation $\proj_x(\skextended)$ to construct a polyhedral relaxation of $\PB \setminus \Sk^\Q$ from the multi-term disjunction \eqref{eq:disjunction}. The nontrivial inequalities of Theorem~\ref{thm:skq-cuts} are predicated on the existence of a nonempty $\Sset \subseteq \Msetthree$. We end this section by showing that if no such subset exists (i.e., $\Msetthree = \emptyset$), then no nontrivial inequalities exist for $\PB \setminus \Sk^\Q$. \begin{proposition}\label{prop:msetthree-sufficient} Under Assumption~\ref{ass:2}, if $\Msetthree = \emptyset$, then $\clconv(\PB \setminus \Sk^\Q) = \PB$. \end{proposition} \begin{proof} By Assumption~\ref{ass:2}, $\N0 \setminus \J \neq \emptyset$. We show $\{\xbasis\} + \lcinterval{0}{+\infty}{i} \subseteq \clconv(\PB \setminus \Sk^\Q)$ for all $i \in N$. Observe $\xbasis \in \cl(\PB \setminus \Sk^\Q)$ by Proposition~\ref{prop:skq-pb-int}. Consider any $i \in N \setminus \J$ and $\gamma > 0$.
We show $\xbasis + \gamma \rbar^i \in \clconv(\PB \setminus \Sk^\Q)$. By Proposition~\ref{prop:skq-pb-int}, $\exitstar_i$ is finite. Then for a sufficiently large $M > \gamma$, $\xbasis + M \rbar^i \notin \Sk^\Q$. We have that $\xbasis + \gamma \rbar^i$ is a convex combination of $\xbasis \in \cl(\PB \setminus \Sk^\Q)$ and $\xbasis + M \rbar^i \in \PB \setminus \Sk^\Q$. Hence, $\xbasis + \gamma \rbar^i \in \clconv(\PB \setminus \Sk^\Q)$. Now, consider $i \in \J$, $\lambda > 0$, and $\epshatthree > 0$. Because $\Msetthree = \emptyset$, there exists $j \in N \setminus \J$ such that $\lambda \rbar^i + \epshatthree \rbar^j \notin \recc(\Sk^\Q)$. Then there exists $M > 1$ such that $\xbasis + M(\lambda \rbar^i + \epshatthree \rbar^j)$ lies outside of $\Sk^\Q$. Therefore, $\xbasis + \lambda \rbar^i + \epshatthree \rbar^j$ is a convex combination of $\xbasis \in \cl(\PB \setminus \Sk^\Q)$ and $\xbasis + M(\lambda \rbar^i + \epshatthree \rbar^j) \in \PB \setminus \Sk^\Q$. This holds for an arbitrary $\epshatthree > 0$, so $\xbasis + \lambda \rbar^i \in \clconv(\PB \setminus \Sk^\Q)$. \end{proof} \section{Discussion and future work} Our analysis requires the basic solution $\xbasis$ to lie outside $\cl(\Q)$. We showed in Section~\ref{sec:cuts-1} that if $\xbasis \in \Q$, we obtain the standard intersection cut of Balas. It remains to discuss how we can derive valid inequalities for $\P \setminus \Q$ when $\xbasis \in \bd(\Q)$. Under Assumption~\ref{ass:1}, our analysis still applies if $\xbasis \in \bd(\Q)$. To demonstrate this, assume for simplicity that $\N0 = \emptyset$ (this is a more restrictive version of Assumption~\ref{ass:1}). It follows that $\enter_j = 0$ for all $j \in N$. Similar to the observation made in Remark~\ref{remark:1} for the case $\xbasis \in \Q$, we can show that every point in $\PB \setminus \Q$ lies in $\{\xbasis\}$ or $\{x \in \PB \colon \sum_{j \in N} x_j / \exit_j \geq 1 \}$.
We can generate inequalities for $\P \setminus \Q$ in a disjunctive CGLP using the two polyhedra defined by the constraints of $\P$ added to each of these two sets. Similarly, if $\xbasis \in \bd(\Q)$ and Assumption~\ref{ass:1} holds, the term $\PB \setminus \So^\Q$ of the multi-term disjunction \eqref{eq:disjunction} is equal to $\{\xbasis\}$. We can again use disjunctive programming to generate cuts for $\P \setminus \Q$ with the knowledge that $\PB \setminus \So^\Q = \{\xbasis\}$. Polyhedral relaxations for the remaining disjunctive terms can still be generated using the methods discussed in Section~\ref{subsec:skq-cuts-combined}. We conclude with some ideas for future work. One direction is to study the computational strength of cuts obtained using these ideas. Another possibility is to generalize this disjunctive framework to allow for cuts to be generated by bases of rank less than $m$ (i.e., bases that do not admit a basic solution). Additionally, the strength of $\tqrelax_{\D}$ relative to $\PB \setminus \genset_{\D}^\Q$ could be analyzed. Specifically, it remains to be seen if $\tqrelax_{\D} = \conv(\PB \setminus \genset_{\D}^\Q)$, which by Theorem~\ref{thm:extendedform1} would imply that we have a polynomial-size extended formulation of $\conv(\PB \setminus \So^\Q)$. The same applies to the strength of $\skrelax$ relative to $\PB \setminus \Sk^\Q$ for $k \in \N2$. \end{document}
\begin{document} \title{Congruences for the number of partitions and bipartitions with distinct even parts} \author{ Haobo Dai\footnote{Department of Mathematics, Shanghai Jiao Tong University, Shanghai, 200240, China; e-mail: {\tt [email protected]}} } \date{} \maketitle \begin{abstract} Let $ped(n)$ denote the number of partitions of $n$ wherein even parts are distinct (and odd parts are unrestricted). We show infinite families of congruences for $ped(n)$ modulo $8$. We also examine the behavior of $ped_{-2}(n)$ modulo $8$ in detail, where $ped_{-2}(n)$ denotes the number of bipartitions of $n$ with even parts distinct. As a result, we find infinite families of congruences for $ped_{-2}(n)$ modulo $8$. \\ \noindent\textbf{Keywords}\quad partitions and bipartitions with even parts distinct; congruences; binary quadratic forms. \\ \noindent\textbf{Mathematics Subject Classification (2000)}: 05A17; 11P83 \end{abstract} \section{Introduction} Let $ped(n)$ denote the number of partitions of $n$ wherein even parts are distinct (and odd parts are unrestricted). The generating function for $ped(n)$ (\cite{AHS}) is $$\sum_{n=0}^{\infty}ped(n)q^n:=\frac{(q^4;q^4)_{\infty}}{(q;q)_{\infty}}=\prod_{m=1}^{\infty}\frac{(1-q^{4m})}{(1-q^m)}. \eqno(1.1)$$ Note that by (1.1), the number of partitions of $n$ wherein even parts are distinct equals the number of partitions of $n$ with no parts divisible by $4$, i.e., the $4$-regular partitions (see \cite{AHS} and references therein). The arithmetic properties of $ped(n)$ were studied by Andrews, Hirschhorn and Sellers \cite{AHS} and by Chen \cite{Ch}. For example, in \cite{AHS}, Andrews et al. proved that for all $n\geq 0$, $$ped(9n+4)\equiv 0 \quad (\Mod 4) \eqno(1.2)$$ and $$ped(9n+7)\equiv 0 \quad (\Mod 4).\eqno(1.3)$$ Suppose that $r$ is an integer such that $1\leq r <8p$, $rp\equiv 1(\Mod 8)$, and $(r, p)=1$.
By using modular forms, Chen \cite{Ch} showed that if $c(p)\equiv 0(\Mod 4)$, then, for all $n\geq 0$, $\alpha\geq 1$, $$ped\left(p^{2\alpha}n+\frac{rp^{2\alpha-1}-1}{8}\right)\equiv 0 \quad (\Mod 4) \eqno(1.4)$$ where $c(p)$ is the $p$-th coefficient of $\frac{\eta^4(16z)}{\eta(8z)\eta(32z)}:=\sum_{n=1}^{\infty}c(n)q^n$. Note that in \cite{Ch}, Chen did not give the coefficients $c(p)$ explicitly. Note also that in a beautiful paper \cite{Ch1}, Chen studied arithmetic properties of the number of $k$-tuple partitions with even parts distinct modulo $2$ for any positive integer $k$ by using Hecke nilpotency. Berkovich and Patane \cite{B-K} calculated $c(n)$ explicitly. In particular, they showed that $c(p)=0$ if and only if $p=2$, $p\equiv 5 (\Mod 8)$ or $p\equiv 3 (\Mod 4)$. As a direct application of the theorems of Chen and of Berkovich and Patane, we have the following. \begin{cor} Let $p$ be a prime which is congruent to $5$ modulo $8$ or congruent to $3$ modulo $4$, and suppose that $r$ is an integer such that $1\leq r <8p$, $rp\equiv 1(\Mod 8)$, and $(r, p)=1$. Then $$ped\left(p^{2\alpha}n+\frac{rp^{2\alpha-1}-1}{8}\right)\equiv 0 \quad (\Mod 4)$$ for all $n\geq 0$ and $\alpha\geq 1$. \end{cor} Ono and Penniston \cite{O-P} showed an explicit formula for $Q(n)$ modulo $8$ by using the arithmetic of the ring $\mathbb{Z}[\sqrt{-6}]$, where $Q(n)$ denotes the number of partitions of an integer $n$ into distinct parts. We are unable to determine $ped(n)$ modulo $8$ explicitly, but we can prove infinite families of congruences for $ped(n)$ modulo $8$. Our first main result is the following. \begin{thm} Let $p$ be a prime which is congruent to $7 (\Mod 8)$.
Suppose that $r$ is an integer such that $1\leq r <8p$, $rp\equiv 1(\Mod 8)$, and $(r, p)=1$. Then for all $n\geq 0$, $\alpha\geq 0$, we have $$ped\left(p^{2\alpha+2}n+\frac{rp^{2\alpha+1}-1}{8}\right)\equiv 0 \quad (\Mod 8).$$ \end{thm} \begin{example} For all $n\geq 0$, $\alpha\geq 0$, $$ped\left(7^{2\alpha+2}n+\frac{r\times 7^{2\alpha+1}-1}{8}\right)\equiv 0 \quad (\Mod 8),$$ for $r=15,23,31,39$ and $47$. \end{example} Let $ped_{-2}(n)$ be the number of bipartitions of $n$ with even parts distinct. The generating function of $ped_{-2}(n)$ \cite{Lin} is $$\sum_{n=0}^{\infty}ped_{-2}(n)q^n:=\frac{(q^4;q^4)^2_{\infty}}{(q;q)^2_{\infty}}=\prod_{m=1}^{\infty}\frac{(1-q^{4m})^2}{(1-q^m)^2}. \eqno(1.5)$$ Recently, in \cite{Lin}, Lin investigated arithmetic properties of $ped_{-2}(n)$. In particular, he showed the following theorems: \begin{thm}(\cite{Lin}) For $\alpha\geq 0$ and any $n\geq 0$, we have $$ped_{-2}\left(3^{2\alpha+2}n+\frac{11\times 3^{2\alpha+1}-1}{4}\right)\equiv 0 \quad (\Mod 3), $$ $$ped_{-2}\left(3^{2\alpha+3}n+\frac{5\times 3^{2\alpha+2}-1}{4}\right)\equiv 0 \quad (\Mod 3). $$ \end{thm} \begin{thm}[\cite{Lin}] $ped_{-2}(n)$ is even unless $n$ is of the form $k(k+1)$ for some $k\geq 0$. Furthermore, $ped_{-2}(n)$ is a multiple of $4$ if $n$ is not the sum of two triangular numbers. \end{thm} As a corollary of Theorems 1.3 and 1.4, Lin proved an infinite family of congruences for $ped_{-2}(n)$ modulo $12$: $$ped_{-2}\left(3^{2\alpha+2}n+\frac{11\times 3^{2\alpha+1}-1}{4}\right)\equiv 0 \quad (\Mod 12),$$ for any integers $\alpha\geq 0$ and $n\geq 0$. As in \cite{O-P}, our second main achievement is to examine $ped_{-2}(n)$ modulo $8$ in detail. \begin{thm} Let $n$ be a non-negative integer, and let $N$ and $M$ be the unique positive integers for which $$4n+1=N^2M,$$ where $M$ is square-free. Then the following are true: \begin{itemize} \item [(1)] If $M=1$, then $ped_{-2}(n)$ is odd.
\item [(2)] If $M=p$ for a prime $p$ and ord$_p(4n+1)\equiv 1 (\Mod 4)$, then $ped_{-2}(n)\equiv 2 (\Mod 4)$. \item [(3)] If $M=p$ for a prime $p$ and ord$_p(4n+1)\equiv 3 (\Mod 8)$, then $ped_{-2}(n)\equiv 4 (\Mod 8)$. \item [(4)] If $M=p_1p_2$, where $p_1$ and $p_2$ are distinct primes, $p_i\equiv 1 (\Mod 4)$ and ord$_{p_i}(4n+1)\equiv 1 (\Mod 4)$ for $i=1,2$, then $ped_{-2}(n)\equiv 4(\Mod 8)$. \item [(5)] In all other cases we have that $ped_{-2}(n)\equiv 0(\Mod 8).$ \end{itemize} \end{thm} As a corollary to this theorem, we can show infinite families of congruences for $ped_{-2}(n)$ modulo $8$. \begin{cor} \begin{itemize} \item [(i)] Let $p$ be a prime with $p\equiv 3 (\Mod 4)$, and let $1\leq r< 4p$ satisfy $(r,p)=1$ and $rp\equiv 1 (\Mod 4)$. Then we have $$ped_{-2}\left(p^{2\alpha+2}n+\frac{r\times p^{2\alpha+1}-1}{4}\right)\equiv 0 \quad (\Mod 8), \eqno(1.6)$$ for $\alpha\geq 0$ and any $n\geq 0$. \item [(ii)] Let $p$ be a prime with $p\equiv 1 (\Mod 4)$, and let $1\leq r< 4p$ satisfy $(r,p)=1$ and $rp\equiv 1 (\Mod 4)$. Then we have $$ped_{-2}\left(p^{8\alpha+8}n+\frac{r\times p^{8\alpha+7}-1}{4}\right)\equiv 0 \quad (\Mod 8),\eqno(1.7)$$ for $\alpha\geq 0$ and any $n\geq 0$. \end{itemize} \end{cor} \begin{proof} Since the proofs of (i) and (ii) are similar, we only give the proof of (ii). Note that $$4\left(p^{8\alpha+8}n+\frac{r\times p^{8\alpha+7}-1}{4}\right)+1=p^{8\alpha+7}(4pn+r),$$ so for any $n$, (1.7) follows from Theorem 1.7. \end{proof} \begin{example} For $r=7,11$ and all $n\geq 0$, $\alpha\geq 0$, we have $$ped_{-2}\left(3^{2\alpha+2}n+\frac{r\times 3^{2\alpha+1}-1}{4}\right)\equiv 0 \quad (\Mod 8).$$ Combining this with Lin's congruence modulo $12$, we have $$ped_{-2}\left(3^{2\alpha+2}n+\frac{11\times 3^{2\alpha+1}-1}{4}\right)\equiv 0 \quad (\Mod 24).$$ \end{example} In Section 2 we prove Theorem 1.2, and in Section 3 we prove Theorem 1.7. Our method is elementary and is motivated by that of Mahlburg \cite{Ma}. We use only basic facts about binary quadratic forms.
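The congruences above can be checked numerically from the generating functions (1.1) and (1.5); a small sketch (the truncation bound $N=500$ is arbitrary):

```python
def ped_series(N):
    """Coefficients of prod_m (1 - q^{4m}) / (1 - q^m): by (1.1) these are
    ped(n), i.e. the 4-regular partitions of n."""
    c = [0] * N
    c[0] = 1
    for part in range(1, N):
        if part % 4 == 0:
            continue  # skip parts divisible by 4
        for n in range(part, N):
            c[n] += c[n - part]
    return c

N = 500
ped = ped_series(N)
# By (1.5), the ped_{-2} series is the convolution square of the ped series.
ped2 = [sum(ped[i] * ped[n - i] for i in range(n + 1)) for n in range(N)]

# (1.2)/(1.3): ped(9n+4) and ped(9n+7) are divisible by 4.
assert all(ped[9 * n + 4] % 4 == 0 for n in range(55))
assert all(ped[9 * n + 7] % 4 == 0 for n in range(54))
# Theorem 1.2 with p = 7, r = 15, alpha = 0: ped(49n+13) is divisible by 8.
assert all(ped[49 * n + 13] % 8 == 0 for n in range(9))
# Corollary (i) with p = 3, r = 7, alpha = 0: ped_{-2}(9n+5) is divisible by 8.
assert all(ped2[9 * n + 5] % 8 == 0 for n in range(55))
```

Such a finite check is of course no proof, but it is a quick way to catch transcription errors in the moduli and arithmetic progressions.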
{\bf \noindent Acknowledgments.} \\ We wish to thank the NSF of China (No.11071160) for its generous support. We would also like to thank the referee for his/her helpful comments. \section{Proof of Theorem 1.2} In this section, we prove Theorem 1.2. We need the following well-known identities \cite{On}: \setcounter{equation}{0} \begin{eqnarray} \prod_{n=1}^{\infty}\frac{(1-q^n)^2}{(1-q^{2n})}&=&\sum_{n=-\infty}^{\infty}(-1)^nq^{n^2}=1+2\sum_{n=1}^{\infty}(-1)^nq^{n^2},\\ q\prod_{n=1}^{\infty}\frac{(1-q^{16n})^2}{(1-q^{8n})}&=&\sum_{n=0}^{\infty}q^{(2n+1)^2}. \end{eqnarray} \begin{proof}[Proof of Theorem 1.2] Since $$\sum_{n=0}^{\infty}ped(n)q^n:=\prod_{n=1}^{\infty}\frac{(1-q^{4n})}{(1-q^n)}=\prod_{n=1}^{\infty}\frac{(1-q^{2n})^2}{(1-q^n)}\cdot \frac{(1-q^{4n})}{(1-q^{2n})^2},$$ we have \begin{eqnarray*} & &\sum_{n=0}^{\infty}ped(n)q^{8n+1}\\ &=& q\prod_{n=1}^{\infty}\frac{(1-q^{16n})^2}{1-q^{8n}}\cdot \frac{(1-q^{32n})}{(1-q^{16n})^2}\\ &=& \left(\sum_{n=0}^{\infty}q^{(2n+1)^2}\right)\cdot \frac{1}{1+2\sum_{n=1}^{\infty}(-1)^n q^{16n^2}} \quad \quad (\text{by}\ (2.1),(2.2))\\ &\equiv& \left(\sum_{n=0}^{\infty}q^{(2n+1)^2}\right) \left(1-2\sum_{n=1}^{\infty}(-1)^nq^{16n^2}+4\left(\sum_{n=1}^{\infty}(-1)^nq^{16n^2}\right)^2\right) (\Mod 8). \end{eqnarray*} From the above, it is clear that $ped(n)$ is odd if and only if $8n+1$ is a square. Note that \begin{eqnarray*} \left(\sum_{n=1}^{\infty}(-1)^nq^{16n^2}\right)^2 &=&\left(\sum_{m_1,m_2=1}^{\infty}(-1)^{m_1+m_2}q^{16m_1^2+16m_2^2}\right)\\ &=& 2\left(\sum_{m_1,m_2=1 \atop m_1< m_2}^{\infty}(-1)^{m_1+m_2}q^{16m_1^2+16m_2^2}\right) +\left(\sum_{n=1}^{\infty}q^{32n^2}\right). \end{eqnarray*} So we have \begin{eqnarray} \sum_{n=0}^{\infty}ped(n)q^{8n+1} &\equiv&\left(1-2\sum_{n=1}^{\infty}(-1)^nq^{16n^2}+4\left(\sum_{n=1}^{\infty}q^{32n^2}\right)\right) \\ &&\times\left(\sum_{n=0}^{\infty}q^{(2n+1)^2}\right)\quad (\Mod 8).
\end{eqnarray} Note that if $8n+1=p^{2\alpha+1}(8pm+r)$, where $p\equiv 7 (\Mod 8)$, $(r,p)=1$, $rp\equiv 1 (\Mod 8)$, then $8n+1$ cannot be represented as $x^2$, $x^2+y^2$ or $x^2+2y^2$. So for these $8n+1$, from (2.3) and (2.4), it is easy to see that $ped(n)\equiv 0 (\Mod 8)$. This completes the proof of Theorem 1.2. \end{proof} \section{Proof of Theorem 1.7} In this section, we prove Theorem 1.7. The method is similar to that of Theorem 1.2. \begin{proof}[Proof of Theorem 1.7] Since $$\sum_{n=0}^{\infty}ped_{-2}(n)q^n:=\prod_{n=1}^{\infty}\frac{(1-q^{4n})^2}{(1-q^n)^2}=\prod_{n=1}^{\infty}\frac{(1-q^{4n})^2}{(1-q^{2n})}\cdot \frac{(1-q^{2n})}{(1-q^{n})^2},$$ by (2.1) and (2.2) we have \begin{eqnarray*} \sum_{n=0}^{\infty}ped_{-2}(n)q^{4n+1} &=& q\prod_{n=1}^{\infty}\frac{(1-q^{16n})^2}{1-q^{8n}}\cdot \frac{(1-q^{8n})}{(1-q^{4n})^2}\\ &=&\frac{1}{1+2\sum_{n=1}^{\infty}(-1)^n q^{4n^2}}\times \left(\sum_{n=0}^{\infty}q^{(2n+1)^2}\right)\\ &\equiv& \left(1-2\sum_{n=1}^{\infty}(-1)^nq^{4n^2}+4\left(\sum_{n=1}^{\infty}(-1)^nq^{4n^2}\right)^2\right)\\ &&\times\left(\sum_{n=0}^{\infty}q^{(2n+1)^2}\right)\quad (\Mod 8). \end{eqnarray*} Note that \begin{eqnarray*} \left(\sum_{n=1}^{\infty}(-1)^nq^{4n^2}\right)^2 &=& \left(\sum_{m_1,m_2=1}^{\infty}(-1)^{m_1+m_2}q^{4m_1^2+4m_2^2}\right)\\ &=& 2\left(\sum_{m_1,m_2=1 \atop m_1< m_2}^{\infty}(-1)^{m_1+m_2}q^{4m_1^2+4m_2^2}\right)+\sum_{n=1}^{\infty}q^{8n^2}\\ &=&2\left(\sum_{m_1,m_2=1 \atop m_1< m_2}^{\infty}(-1)^{m_1+m_2}q^{4m_1^2+4m_2^2}\right)+\frac{1}{2}\left(\sum_{n=-\infty}^{\infty}q^{8n^2}-1\right). \end{eqnarray*} Note also that \begin{eqnarray*} \sum_{n=1}^{\infty}(-1)^nq^{4n^2} &=& \frac{1}{2}\left(\sum_{n=-\infty}^{\infty}(-1)^nq^{4n^2}-1\right)\\ &=& -\frac{1}{2}-\frac{1}{2}\left(\sum_{n=-\infty}^{\infty}q^{4n^2}\right) +\left(\sum_{n=-\infty}^{\infty}q^{16n^2}\right).
\end{eqnarray*} So we have \begin{eqnarray} \sum_{n=0}^{\infty}ped_{-2}(n)q^{4n+1}&\equiv& \left(\sum_{n=-\infty}^{\infty}q^{4n^2}-2\sum_{n=-\infty}^{\infty}q^{16n^2}+2\sum_{n=-\infty}^{\infty}q^{8n^2}\right)\nonumber\\ && \times\frac{1}{2}\left(\sum_{n=-\infty}^{\infty}q^{(2n+1)^2}\right) \nonumber\\ &\equiv& \left(\frac{1}{2}\sum_{n=-\infty}^{\infty}q^{4n^2}+\sum_{n=-\infty}^{\infty}q^{8n^2}-\sum_{n=-\infty}^{\infty}q^{16n^2}\right)\\ & &\times \left(\sum_{n=-\infty}^{\infty}q^{(2n+1)^2}\right)\quad (\Mod 8). \end{eqnarray} If we put $$\left(\sum_{n=-\infty}^{\infty}q^{(2n+1)^2}\right)\left(\sum_{n=-\infty}^{\infty}q^{4n^2}\right):=\sum_{n=0}^{\infty}a(n)q^{4n+1},\eqno(3.3)$$ $$\left(\sum_{n=-\infty}^{\infty}q^{(2n+1)^2}\right)\left(\sum_{n=-\infty}^{\infty}q^{8n^2}\right):=\sum_{n=0}^{\infty}b(n)q^{4n+1},\eqno(3.4)$$ and $$\left(\sum_{n=-\infty}^{\infty}q^{(2n+1)^2}\right)\left(\sum_{n=-\infty}^{\infty}q^{16n^2}\right):=\sum_{n=0}^{\infty}c(n)q^{4n+1},\eqno(3.5)$$ then it is easy to see from (3.1), (3.2) that \begin{itemize} \item [(1)] $ped_{-2}(n)$ is odd if and only if $\frac{a(n)}{2}$ is odd, \item [(2)] $ped_{-2}(n)\equiv 2 (\Mod 4)$ if and only if $\frac{a(n)}{2}+b(n)-c(n)\equiv 2 (\Mod 4)$, \item [(3)] $ped_{-2}(n)\equiv 4 (\Mod 8)$ if and only if $\frac{a(n)}{2}+b(n)-c(n)\equiv 4 (\Mod 8)$, \item [(4)] In all other cases, we have that $ped_{-2}(n)\equiv 0(\Mod 8)$. \end{itemize} We separate into two cases according to the parity of $n$.
(i) If $n=2m$ is even, then it is easy to see that \begin{eqnarray*} &&\#\{(x,y)\in \mathbb{Z}\times \mathbb{Z}:8m+1=x^2+16y^2, 8m+1 \ \text{is not a square}\}\\ &=&\#\{(x,y)\in \mathbb{Z}\times \mathbb{Z}:8m+1=x^2+4y^2, 8m+1 \ \text{is not a square}\}\\ &=& \frac{1}{2}\#\{(x,y)\in \mathbb{Z}\times \mathbb{Z}:8m+1=x^2+y^2, 8m+1 \ \text{is not a square}\}, \end{eqnarray*} and \begin{eqnarray*} &&\#\{(x,y)\in \mathbb{Z}\times \mathbb{Z}:8m+1=x^2+8y^2, 8m+1 \ \text{is not a square}\}\\ &=&\#\{(x,y)\in \mathbb{Z}\times \mathbb{Z}:8m+1=x^2+2y^2, 8m+1 \ \text{is not a square}\}. \end{eqnarray*} Note that if $8m+1=p_1^{\alpha_1}\cdots p_k^{\alpha_k}q_1^{\beta_1}\cdots q_l^{\beta_l}r_1^{\gamma_1}\cdots r_u^{\gamma_u}s_1^{\delta_1}\cdots s_v^{\delta_v}$, where $p_i\equiv 1$ $(\Mod 8)$, $q_i\equiv 3 (\Mod 8)$, $r_i\equiv 5 (\Mod 8)$ and $s_i\equiv 7 (\Mod 8)$, then by using the decomposition of prime ideals in $\mathbb{Z}[i]$, we know that $8m+1=x^2+y^2$ has integral solutions if and only if $\beta_j,\delta_t\equiv 0 (\Mod 2)$ for $1\leq j\leq l$ and $1\leq t\leq v$. If all $\beta_j,\delta_t$ are even, then it is easy to see that $$\#\{(x,y)\in \mathbb{Z}\times \mathbb{Z}:8m+1=x^2+y^2\}=4(\alpha_1+1)\cdots (\alpha_k+1)(\gamma_1+1)\cdots(\gamma_u+1).$$ We obtain that $a(n)=2(\alpha_1+1)\cdots (\alpha_k+1)(\gamma_1+1)\cdots(\gamma_u+1)$ if all $\beta_j,\delta_t$ are even. Now it is clear that $\frac{a(n)}{2}$ is odd if and only if $4n+1=8m+1$ is a square. Similarly, by using the decomposition of prime ideals in $\mathbb{Z}[\sqrt{-2}]$, we know that if all $\gamma_j,\delta_t\equiv 0(\Mod 2)$, then (there are only two roots of unity, $\pm 1$, in $\mathbb{Z}[\sqrt{-2}]$) $$\#\{(x,y)\in \mathbb{Z}\times \mathbb{Z}:8m+1=x^2+2y^2\}=2(\alpha_1+1)\cdots (\alpha_k+1)(\beta_1+1)\cdots(\beta_l+1).$$ From the above argument, we obtain the following results.
Suppose all $\delta_i$ are even, then \begin{itemize} \item [(a)] If $\beta_i$ is odd for some $1\leq i\leq l$ and all $\gamma_j$ are even, then $\frac{a(n)}{2}+b(n)-c(n)\equiv b(n)\equiv 2(\alpha_1+1)\cdots (\alpha_k+1)(\beta_1+1)\cdots(\beta_l+1)\equiv 0(\Mod 8)$ (because another $\beta_{i'}$ must also be odd, since the $q_i\equiv 3 (\Mod 8)$ while $8m+1\equiv 1 (\Mod 8)$). \item [(b)] If $\gamma_i$ is odd for some $1\leq i\leq u$ and all $\beta_j$ are even, then $\frac{a(n)}{2}+b(n)-c(n)\equiv \frac{a(n)}{2}-c(n)\equiv -(\alpha_1+1)\cdots (\alpha_k+1)(\gamma_1+1)\cdots(\gamma_u+1)(\Mod 8)$. \item [(c)] If all $\beta_i$ and $\gamma_j$ are even, then $\frac{a(n)}{2}+b(n)-c(n)\equiv -(\alpha_1+1)\cdots (\alpha_k+1)(\gamma_1+1)\cdots(\gamma_u+1)+2(\alpha_1+1)\cdots (\alpha_k+1)(\beta_1+1)\cdots(\beta_l+1) (\Mod 8)$. \item [(d)] If $\beta_i$ and $\gamma_j$ are odd for some $i$ and $j$, then clearly $ped_{-2}(n)\equiv 0 (\Mod 8)$. \end{itemize} From (a), (b), (c) and (d), it is not difficult to see that (for $n=2m$) \begin{itemize} \item [(1)] if $4n+1$ is a square, then $ped_{-2}(n)$ is odd, \item [(2)] if $4n+1=pa^2$ where ord$_p(4n+1)\equiv 1 (\Mod 4)$, then $ped_{-2}(n)\equiv 2 (\Mod 4)$, \item [(3)] if $4n+1=p_1p_2a^2$ where $p_i\equiv1 (\Mod 4)$ and ord$_{p_i}(4n+1)\equiv 1 (\Mod 4)$, or $4n+1=p^3a^2$ and ord$_p(4n+1)\equiv 3(\Mod 8)$, then $ped_{-2}(n)\equiv 4 (\Mod 8)$, \item [(4)] and for all other cases, $ped_{-2}(n)\equiv 0 (\Mod 8)$. \end{itemize} (ii) If $n=2m+1$, then $4n+1=8m+5$. Since $(2m_1+1)^2+8m_2^2\equiv(2m_1+1)^2+16m_2^2\equiv 1(\Mod 8)$, $4n+1$ cannot be represented as $(2m_1+1)^2+8m_2^2$ or $(2m_1+1)^2+16m_2^2$. Note also that \begin{eqnarray*} &&\#\{(x,y)\in \mathbb{Z}\times \mathbb{Z}:8m+5=x^2+4y^2\}\\ &=& \frac{1}{2}\#\{(x,y)\in \mathbb{Z}\times \mathbb{Z}:8m+5=x^2+y^2 \}.
\end{eqnarray*} So if $8m+5=p_1^{\alpha_1}\cdots p_k^{\alpha_k}q_1^{\beta_1}\cdots q_l^{\beta_l}r_1^{\gamma_1}\cdots r_u^{\gamma_u}s_1^{\delta_1}\cdots s_v^{\delta_v}$, where $p_i\equiv 1 (\Mod 8)$, $q_i\equiv 3 (\Mod 8)$, $r_i\equiv 5 (\Mod 8)$ and $s_i\equiv 7 (\Mod 8)$, then $$\frac{a(n)}{2}\equiv (\alpha_1+1)\cdots (\alpha_k+1)(\gamma_1+1)\cdots(\gamma_u+1) \quad (\Mod 8). \eqno(3.6)$$ As in (i), it is easy to determine $ped_{-2}(n)\ (\Mod 8)$ for $n=2m+1$ case by case from (3.6): \begin{itemize} \item [(1)] if $4n+1=pa^2$ where ord$_p(4n+1)\equiv 1 (\Mod 4)$, then $ped_{-2}(n)\equiv 2 (\Mod 4)$, \item [(2)] if $4n+1=p_1p_2a^2$ and ord$_{p_i}(4n+1)\equiv 1 (\Mod 4)$, or $4n+1=p^3a^2$ and ord$_p(4n+1)\equiv 3(\Mod 8)$, then $ped_{-2}(n)\equiv 4 (\Mod 8)$, \item [(3)] and for all other cases, $ped_{-2}(n)\equiv 0 (\Mod 8)$. \end{itemize} Combining (i) and (ii) completes the proof of Theorem 1.7. \end{proof} \end{document}
\begin{document} \title{A Riemann-Hurwitz Formula for Skeleta in Non-Archimedean Geometry} \begin{abstract} Let $k$ be an algebraically closed field that is complete with respect to a non-trivial non-Archimedean real valuation. Let $\phi : C' \to C$ be a finite morphism between smooth projective irreducible $k$-curves. The morphism $\phi$ induces a morphism $\phi^{\mathrm{an}} : C'^{\mathrm{an}} \to C^{\mathrm{an}}$ between the Berkovich analytifications of the curves. We construct a pair of deformation retractions of $C'^{\mathrm{an}}$ and $C^{\mathrm{an}}$ which are compatible with the morphism $\phi^{\mathrm{an}}$ and whose images $\Upsilon_{C'^{\mathrm{an}}}$, $\Upsilon_{C^{\mathrm{an}}}$ are closed subspaces of $C'^{\mathrm{an}}$, $C^{\mathrm{an}}$ that are homeomorphic to finite metric graphs. We refer to such closed subspaces as skeleta. In addition, the subspaces $\Upsilon_{C'^{\mathrm{an}}}$ and $\Upsilon_{C^{\mathrm{an}}}$ are such that their complements in their respective analytifications decompose into the disjoint union of isomorphic copies of Berkovich open balls. The skeleta can be seen as the union of vertices and edges, thus allowing us to define their genus. The genus of a skeleton in a curve $C$ is in fact an invariant of the curve, which we call $g^{\mathrm{an}}(C)$. The pair of compatible deformation retractions forces the morphism $\phi^{\mathrm{an}}$ to restrict to a map $\Upsilon_{C'^{\mathrm{an}}} \to \Upsilon_{C^{\mathrm{an}}}$. We study how the genus of $\Upsilon_{C'^{\mathrm{an}}}$ can be calculated using the morphism $\phi^{\mathrm{an}}_{|\Upsilon_{C'^{\mathrm{an}}}}$ and invariants defined on $\Upsilon_{C^{\mathrm{an}}}$. \end{abstract} \tableofcontents \emph{Acknowledgments:} This research was funded by the ERC Advanced Grant NMNAG. I would like to thank my advisor Professor François Loeser for his support and guidance during this period of work.
I am grateful as well to J\'er\^ome Poineau and Matt Baker for their suggestions and encouragement. I would also like to thank Marco Maculan, Antoine Ducros, Giovanni Rosso and Yimu Yin for the discussions and comments which have been integral to the development of this article. \section{Introduction} Our goal in this paper is to define and study a topological invariant on algebraic curves defined over suitable non-Archimedean real valued fields that arises naturally from their analytifications. Let $k$ be an algebraically closed field that is complete with respect to a non-trivial non-Archimedean real valuation. Let $C$ be a $k$-curve. By $k$-curve, we mean a one dimensional connected reduced separated scheme of finite type over the field $k$. It is well known that there exists a deformation retraction of $C^{\mathrm{an}}$ onto a closed subspace $\Upsilon$ which is homeomorphic to a finite metric graph [\cite{berk}, Chapter 4], [\cite{HL}, Section 7]. We call such subspaces \emph{skeleta}. The skeleton $\Upsilon$ can be decomposed into a set of vertices $V({\Upsilon})$ and a set of edges $E({\Upsilon})$. We define the genus of the skeleton $\Upsilon$ as follows. \begin{align*} g(\Upsilon) = 1 - |V({\Upsilon})| + |E({\Upsilon})|. \end{align*} In Proposition 2.25, we show that $g(\Upsilon)$ is a well defined invariant of the curve and does not depend on the retract $\Upsilon$. Let $g^{\mathrm{an}}(C) := g(\Upsilon)$ for any such $\Upsilon$. We study how $g^{\mathrm{an}}$ varies for a finite morphism using a compatible pair of deformation retractions. Let $C',C$ be smooth projective irreducible $k$-curves and $\phi : C' \to C$ be a finite morphism. The morphism $\phi$ induces a morphism between the respective analytifications, which we denote by $\phi^{\mathrm{an}}$. Hence we have \begin{align*} \phi^{\mathrm{an}} : C'^{\mathrm{an}} \to C^{\mathrm{an}}.
\end{align*} We prove that there exists a pair of \emph{compatible} deformation retractions \begin{align*} \psi : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}} \end{align*} and \begin{align*} \psi' : [0,1] \times C'^{\mathrm{an}} \to C'^{\mathrm{an}} \end{align*} with the following properties. \begin{enumerate} \item The sets $\Upsilon_{C'^{\mathrm{an}}} := \psi'(1,C'^{\mathrm{an}})$ and $\Upsilon_{C^{\mathrm{an}}} := \psi(1,C^{\mathrm{an}})$ are closed subspaces of $C'^{\mathrm{an}}$ and $C^{\mathrm{an}}$ which are homeomorphic to finite metric graphs. Furthermore, we have that $\Upsilon_{C'^{\mathrm{an}}} = (\phi^{\mathrm{an}})^{-1}(\Upsilon_{C^{\mathrm{an}}})$. \item The analytic spaces $C'^{\mathrm{an}} \smallsetminus \Upsilon_{C'^{\mathrm{an}}}$ and $C^{\mathrm{an}} \smallsetminus \Upsilon_{C^{\mathrm{an}}}$ decompose into the disjoint union of isomorphic copies of Berkovich open disks, i.e., there exist weak semistable vertex sets (cf. Definition 2.19) $\mathfrak{A} \subset C^{\mathrm{an}}$ and $\mathfrak{A}' \subset C'^{\mathrm{an}}$ such that $\Upsilon_{C^{\mathrm{an}}} = \Sigma(C^{\mathrm{an}},\mathfrak{A})$ and $\Upsilon_{C'^{\mathrm{an}}} = \Sigma(C'^{\mathrm{an}},\mathfrak{A}')$. \item The deformation retractions $\psi$ and $\psi'$ are said to be \textbf{compatible} in the sense that the following diagram is commutative. \setlength{\unitlength}{1cm} \begin{picture}(10,5) \put(3.5,1){$[0,1] \times C^{\mathrm{an}}$} \put(7,1){$C^{\mathrm{an}}$} \put(3.5,3.5){$[0,1] \times C'^{\mathrm{an}}$} \put(7,3.5){$C'^{\mathrm{an}}$} \put(4.4,3.3){\vector(0,-1){1.75}} \put(7.1,3.3){\vector(0,-1){1.75}} \put(5.4,1.1){\vector(1,0){1.55}} \put(5.4,3.6){\vector(1,0){1.55}} \put(5.8,0.7){$\psi$} \put(5.6,3.2){$\psi'$} \put(4.5,2.3){$id \times \phi^{\mathrm{an}}$} \put(7.2,2.3){$\phi^{\mathrm{an}}$}
\end{picture} \end{enumerate} In Sections 4 and 5, we study how $g^{\mathrm{an}}(C')$ and $g^{\mathrm{an}}(C)$ relate to each other under the added assumption that $\phi : C' \to C$ is a finite morphism between smooth projective irreducible curves. The notation needed to state the following result, Corollary 4.9, can be found in Section 4.1 and in Definitions 4.6 and 4.8. \\ \noindent $\mathbf{Corollary}$ $\mathbf{4.9}$ \emph{ Let $\phi : C' \to C$ be a finite separable morphism between smooth projective irreducible curves over the field $k$. Let $g^{\mathrm{an}}(C'), g^{\mathrm{an}}(C)$ be as in Definition 2.26. We have the following equation. \begin{align*} 2g^{\mathrm{an}}(C') - 2 = \mathrm{deg}(\phi)(2g^{\mathrm{an}}(C) - 2) + \sum_{p \in C^{\mathrm{an}}} 2 i(p) g_p + R - \sum_{p \in C^{\mathrm{an}}} R^1_p. \end{align*} } In Section 5, we present another method to calculate the invariant $g^{\mathrm{an}}(C')$ using the existence of a pair of compatible deformation retractions $\psi$ and $\psi'$ on $C^{\mathrm{an}}$ and $C'^{\mathrm{an}}$ whose images are skeleta $\Upsilon_{C^{\mathrm{an}}}$ and $\Upsilon_{C'^{\mathrm{an}}}$. We assume in addition that the morphism $\phi : C' \to C$ is such that the induced extension of function fields $k(C) \hookrightarrow k(C')$ is Galois. By construction of $\psi'$ and $\psi$, $\phi^{\mathrm{an}}$ restricts to a morphism between the two skeleta. We show that the genus of the skeleton $\Upsilon_{C'^{\mathrm{an}}}$ can be calculated using invariants associated to the points of $\Upsilon_{C^{\mathrm{an}}}$. In order to do so, we define a divisor $w$ on $\Upsilon_{C^\mathrm{an}}$ whose degree is $2g(\Upsilon_{C'^{\mathrm{an}}}) - 2$. A divisor on a finite metric graph is an element of the free abelian group generated by the points of the graph. We define $w$ as follows. For a point $p \in \Upsilon_{C^\mathrm{an}}$, let $w(p)$ denote the order of the divisor at $p$.
We set \begin{align*} w(p) := (\sum_{e_p \in E_p, p' \in (\phi^{\mathrm{an}})^{-1}(p)} l(e_p,p')) - 2n_p. \end{align*} The terms in this expression are defined as follows. Let $T_p$ denote the tangent space at the point $p$ (cf. 2.2.3, 2.4.1). \begin{enumerate} \item Let $E_p \subset T_p$ be the set of those elements for which there exists a representative starting from $p$ and contained completely in $\Upsilon_{C^{\mathrm{an}}}$. \item Let $p' \in C'^{\mathrm{an}}$ be such that $\phi^{\mathrm{an}}(p') = p$. The morphism $\phi^{\mathrm{an}}$ induces a map $d\phi_{p'}$ between the tangent spaces $T_{p'}$ and $T_{p}$ (cf. 2.2.3, 2.4.1). Let $e_p \in E_p$. We define $L(e_p,p') \subset T_{p'}$ to be the set of preimages of $e_p$ under the map $d\phi_{p'}$. As $\Upsilon_{C'^{\mathrm{an}}} = (\phi^{\mathrm{an}})^{-1}(\Upsilon_{C^{\mathrm{an}}})$, any element of $L(e_p,p')$ can be represented by a geodesic segment that is contained completely in $\Upsilon_{C'^{\mathrm{an}}}$. Let $l(e_p,p')$ denote the cardinality of the set $L(e_p,p')$. \item We define $n_p$ to be the cardinality of the set of preimages of the point $p$, i.e. $n_p := \mathrm{card} \{(\phi^{\mathrm{an}})^{-1}(p)\}$. \end{enumerate} In Proposition 5.4, we show that $w$ is indeed a well-defined divisor whose degree is equal to $2g(\Upsilon_{C'^{\mathrm{an}}}) - 2$. We then study the values $n_p$ and $l(e_p,p')$ described above. These results are sketched below. \\ We study the value $n_p$ for $p \in \Upsilon_{C^{\mathrm{an}}}$ in terms of two invariants, $\mathrm{ram}(p)$ and $c_1(p)$, which are defined as follows. Let $p \in \Upsilon_{C^{\mathrm{an}}}$. \begin{enumerate} \item If $p$ is a point of type I then we set $\mathrm{ram}(p)$ to be the ramification degree $\mathrm{ram}(p'/p)$ for any $p' \in C'^{\mathrm{an}}$ such that $\phi^{\mathrm{an}}(p') = p$. As the morphism $\phi$ is Galois, $\mathrm{ram}(p)$ is well defined. If $p$ is not of type I then we set $\mathrm{ram}(p) := 1$.
\item In order to define the invariant $c_1$, we introduce an equivalence relation on $C'(k)$. For $y_1,y_2 \in C'(k)$, we set $y_1 \sim_{c(1)} y_2$ if $\phi(y_1) = \phi(y_2)$ and $\psi'(1,y_1) = \psi'(1,y_2)$. Let $c_1(y)$ denote the cardinality of the equivalence class that contains $y$. In Lemma 5.8, we show that the function $c_1 : C(k) \to \mathbb{Z}_{\geq 0}$ defined by setting $c_1(x) = c_1(y)$ for any $y \in \phi^{-1}(x)$ is well defined. We proceed to show that if $x \in C(k)$ then $c_1(x)$ depends only on the point $\psi(1,x) \in \Upsilon_{C^{\mathrm{an}}}$. This defines $c_1 : \Upsilon_{C^{\mathrm{an}}} \to \mathbb{Z}_{\geq 0}$. \end{enumerate} The values $c_1(p)$ and $\mathrm{ram}(p)$ can be used to calculate $n_p$ by the following relation (Proposition 5.10). \begin{align*} n_p = [k(C') : k(C)]/(c_{1}(p)\mathrm{ram}(p)). \end{align*} \\ We simplify the term $l(e_p,p')$ which appears in the expression defining $w$. Let $p \in \Upsilon_{C^{\mathrm{an}}}$ and $e_p \in E_p$. In Lemma 5.12 we show that $l(e_p,p')$ remains constant as $p'$ varies through the set of preimages $p' \in (\phi^{\mathrm{an}})^{-1}(p)$. We set $l(e_p) := l(e_p,p')$. We introduce two invariants, $\widetilde{\mathrm{ram}}(e_p)$ and $\widetilde{\mathrm{ram}}(p)$, to study $l(e_p)$. \begin{enumerate} \item Let $p \in \Upsilon_{C^{\mathrm{an}}}$. By definition, $e_p$ is an element of the tangent space $T_p$ at $p$ (cf. Sections 2.2.3, 2.4.1). When $p$ is of type II, $e_p$ corresponds to a discrete valuation of the $\tilde{k}$-function field $\widetilde{\mathcal{H}(p)}$. For any $p' \in (\phi^{\mathrm{an}})^{-1}(p)$, the extension of fields $\widetilde{\mathcal{H}(p)} \hookrightarrow \widetilde{\mathcal{H}(p')}$ can be decomposed into the composite of a purely inseparable extension and a Galois extension. Hence the ramification degree $\mathrm{ram}(e'/e_p)$ is constant as $e'$ varies through the set of preimages of $e_p$ in $T_{p'}$ under the map $d\phi_{p'}^{alg} : T_{p'} \to T_p$ (cf.
2.4.1). Let $\widetilde{\mathrm{ram}}(e_p)$ be this number. When $p$ is of type I, we set $\widetilde{\mathrm{ram}}(e_p) = \mathrm{ram}(p)$ and when $p$ is of type III, we set $\widetilde{\mathrm{ram}}(e_p) = c_1(p)$. \item For $p \in \Upsilon_{C^{\mathrm{an}}}$, we define $\widetilde{\mathrm{ram}}(p) := \sum_{e_p \in E_p} 1/ \widetilde{\mathrm{ram}}(e_p)$. \end{enumerate} In Proposition 5.15, we show that if $p \in \Upsilon_{C^{\mathrm{an}}}$ and $e_p \in E_p$ then \begin{align*} l(e_p) = [k(C'):k(C)]/(n_p \widetilde{\mathrm{ram}}(e_p)). \end{align*} \\ The results of Section 5 are compiled so that the value $2g^{\mathrm{an}}(C') - 2$ can be computed in terms of the invariants $c_1, \widetilde{\mathrm{ram}}$ and $\mathrm{ram}$. \\ \noindent $\mathbf{Theorem}$ $\mathbf{5.17}$ \emph{ Let $\phi : C' \to C$ be a finite morphism between smooth projective irreducible $k$-curves such that the extension of function fields $k(C) \hookrightarrow k(C')$ induced by $\phi$ is Galois. Let $g^{\mathrm{an}}(C')$ be as in Definition 2.26. We have that \begin{align*} 2g^{\mathrm{an}}(C') - 2 = \mathrm{deg}(\phi) \sum_{p \in \Upsilon_{C^{\mathrm{an}}}} [\widetilde{\mathrm{ram}}(p) - 2/(c_1(p)\mathrm{ram}(p))]. \end{align*} } Recently, in \cite{TEM2} and \cite{TEM3}, Michael Temkin, Adina Cohen and Dmitri Trushin have obtained results on wild ramification for finite morphisms between quasi-smooth Berkovich curves which bear some resemblance to results considered in this paper. \section{Preliminaries} \subsection{ The analytification of a $k$-variety} Let $k$ be an algebraically closed field, complete with respect to a non-trivial non-Archimedean real valuation. Let $|.|$ denote the valuation on $k$. By definition, we have $|.| : k^* \to \mathbb{R}_{> 0}$, which we extend to $|.| : k \to \mathbb{R}_{\geq 0}$ by setting $|0| := 0$.
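Although the field $k$ of this paper is algebraically closed and complete, the two defining properties of such an absolute value, multiplicativity and the ultrametric inequality, can already be seen in the $3$-adic absolute value on $\mathbb{Q}$. The following sketch is purely illustrative and is not part of the paper's setting; it only uses the conventions $|0| = 0$ and $|x| = p^{-\mathrm{val}(x)}$.

```python
from fractions import Fraction

def p_adic_val(x, p):
    """Additive p-adic valuation: val(x) = n where x = p^n * (u/v) with p dividing
    neither u nor v. By convention val(0) = +infinity."""
    if x == 0:
        return float("inf")
    x = Fraction(x)
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def p_adic_abs(x, p):
    """Multiplicative absolute value |x|_p = p^(-val(x)), extended by |0|_p = 0."""
    v = p_adic_val(x, p)
    return Fraction(0) if v == float("inf") else Fraction(p) ** (-v)

a, b = Fraction(18), Fraction(15, 4)
# |.|_3 is multiplicative and satisfies the ultrametric inequality:
assert p_adic_abs(a * b, 3) == p_adic_abs(a, 3) * p_adic_abs(b, 3)
assert p_adic_abs(a + b, 3) <= max(p_adic_abs(a, 3), p_adic_abs(b, 3))
```

Here $\mathbb{Q}$ with $|.|_3$ stands in for $k$ only to make the axioms concrete; it is neither complete nor algebraically closed.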
Similarly, we define $\mathrm{val} : k^* \to \mathbb{R}$ by setting $\mathrm{val} = -\mathrm{log}(|.|)$ and extend it to $\mathrm{val} : k \to \mathbb{R} \cup \{\infty\}$ by setting $\mathrm{val}(0) = \infty$. We begin with a brief discussion of the analytification of a $k$-variety, after which we deal with the analytification of a $k$-curve. By curve, we mean a one-dimensional connected reduced separated scheme of finite type over $k$. Let $X$ be a reduced, separated scheme of finite type over the field $k$. Associated functorially to $X$ is a Berkovich analytic space $X^{\mathrm{an}}$. We examine this notion in more detail. Let $\mathrm{k-an}$ denote the category of $k$-analytic spaces [\cite{berk2}, 1.2.4], $\mathrm{Set}$ denote the category of sets and $\mathrm{Sch_{lft/k}}$ denote the category of schemes which are locally of finite type over $k$. We define a functor \begin{align*} F &: \mathrm{k-an} \to \mathrm{Set} \\ & Y \mapsto \mathrm{Hom}(Y,X) \end{align*} where $\mathrm{Hom}(Y,X)$ is the set of morphisms of $k$-ringed spaces. The following theorem defines the space $X^{\mathrm{an}}$. \begin{thm} [\cite{berk}, 3.4.1] The functor $F$ is representable by a $k$-analytic space $X^{\mathrm{an}}$ and a morphism $\pi : X^{\mathrm{an}} \to X$. For any non-Archimedean field $K$ extending $k$, there is a bijection $X^{\mathrm{an}}(K) \to X(K)$. Furthermore, the map $\pi$ is surjective. \end{thm} The associated $k$-analytic space $X^{\mathrm{an}}$ is \emph{good}. This means that for every point $x \in X^{\mathrm{an}}$ there exists a neighbourhood of $x$ isomorphic to an affinoid space. Theorem 2.1 implies the existence of a well-defined functor \begin{align*} ()^{\mathrm{an}} :& \mathrm{Sch_{lft/k}} \to \mbox{\emph{good} } k-\mathrm{an} \\ & X \mapsto X^{\mathrm{an}}.
\end{align*} As a set, $X^{\mathrm{an}}$ is the collection of pairs $\{(x,\eta)\}$ where $x$ is a scheme-theoretic point of $X$ and $\eta$ is a valuation on the residue field $k(x)$ which extends the valuation on the field $k$. We endow this set with a topology as follows. A pre-basic open set is of the form $\{(x,\eta) \in U^{\mathrm{an}}\;|\;|f(\eta)| \in W\}$ where $U$ is a Zariski open subset of $X$ with $f \in O_X(U)$, $W$ is an open subspace of $\mathbb{R}_{\geq 0}$ and $|f(\eta)|$ is the evaluation at $\eta$ of the image of $f$ in the residue field $k(x)$. A basic open set is any set which is equal to the intersection of a finite number of pre-basic open sets. Properties of the scheme translate to properties of the associated analytic space. If $X$ is proper then $X^{\mathrm{an}}$ is compact, and if $X$ is connected then $X^{\mathrm{an}}$ is pathwise connected. Let $C$ be a $k$-curve. As above, the set $C^{\mathrm{an}}$ is the collection of pairs $\{(x,\eta)\}$ where $x$ is a scheme-theoretic point of $C$ and $\eta$ is a rank one valuation on the residue field $k(x)$ which extends the valuation on the field $k$. We divide the points of $C^{\mathrm{an}}$ into four groups using this description. For a point $\mathbf{x} := (x,\eta) \in C^{\mathrm{an}}$, let $\mathcal{H}(\mathbf{x})$ denote the completion of the residue field $k(x)$ with respect to the valuation $\eta$. Let $s(\mathbf{x})$ denote the transcendence degree of the residue field $\widetilde{\mathcal{H}(\mathbf{x})}$ over $\tilde{k}$ and $t(\mathbf{x})$ the rank of the group $|\mathcal{H}(\mathbf{x})^*|/|k^*|$. Abhyankar's inequality implies that $s(\mathbf{x}) + t(\mathbf{x}) \leq 1$. This allows us to classify points. We call $\mathbf{x}$ a type I point if it is a $k$-point of the curve; in this case $t(\mathbf{x}) = s(\mathbf{x}) = 0$. If $s(\mathbf{x}) = 1$ then $t(\mathbf{x}) = 0$ and such a point is said to be of type II.
If $t(\mathbf{x}) = 1$ then $s(\mathbf{x}) = 0$ and such a point is said to be of type III. Lastly, if $t(\mathbf{x}) = s(\mathbf{x}) = 0$ and $\mathbf{x}$ is not a $k$-point of the curve then we call $\mathbf{x}$ a point of type IV. The fact that $C$ is connected and separated implies that the analytification $C^{\mathrm{an}}$ is Hausdorff and pathwise connected. When $C$ is in addition projective, the analytification $C^{\mathrm{an}}$ is compact. As an example, we describe the analytification of the projective line $\mathbb{P}^{1,\mathrm{an}}_k$. \subsection{Semistable vertex sets} \subsubsection{$\mathbb{P}^{1,\mathrm{an}}_k$: the analytification of the projective line over $k$} The points of $\mathbb{P}^{1,\mathrm{an}}_k$ can be classified as follows. The set of type I points are the $k$-points $\mathbb{P}^1_k(k)$ of the projective line. The type II, III and IV points are of the form $(\zeta, \mu)$ where $\zeta$ is the generic point of $\mathbb{P}^1_k$ and $\mu$ is a multiplicative norm on the function field $k(\mathbb{P}^1_k)$ which extends the valuation on the field $k$. The field $k(\mathbb{P}^1_k)$ can be identified with $k(T)$ by choosing coordinates. Hence describing the set of points of $\mathbb{P}^{1,an}_k \setminus \mathbb{P}^1_k(k)$ is equivalent to describing the set of multiplicative norms on the function field $k(T)$ which extend the valuation on $k$. Let $a \in \mathbb{P}^1_k(k)$ be a $k$-point and $B(a,r) \subset k$ denote the closed disk around $a$ of radius $r$ contained in $\mathbb{P}^1_k(k)$. We define a multiplicative norm $\eta_{a,r}$ on $k(T)$ as follows. Let $f \in k(T)$. We set $|f(\eta_{a,r})| := \mathrm{sup}_{y \in B(a,r)} \{|f(y)|\}$. It can be checked that this is a multiplicative norm on the function field. If $r$ belongs to $|k^*|$ then $(\zeta,\eta_{a,r})$ is a type II point. Otherwise $(\zeta,\eta_{a,r})$ defines a type III point. It can be shown that every type II and type III point is of this form.
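For a polynomial the supremum defining $\eta_{a,r}$ can be computed explicitly: for $f = \sum_i c_i (T-a)^i$ one has $|f(\eta_{a,r})| = \max_i |c_i|\, r^i$, the Gauss (sup) norm, and multiplicativity is a standard non-Archimedean fact. The toy computation below works over $\mathbb{Q}$ with the $3$-adic absolute value rather than over the algebraically closed complete field $k$ of the text, and checks multiplicativity on a single product; it is an illustration, not part of the paper's construction.

```python
from fractions import Fraction

def p_adic_abs(x, p):
    # |x|_p = p^(-v), where v is the p-adic valuation of x; |0|_p = 0
    if x == 0:
        return Fraction(0)
    x = Fraction(x)
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(p) ** (-v)

def gauss_norm(coeffs, r, p):
    """|f(eta_{0,r})| for f = sum_i c_i T^i: the sup over the closed disk B(0, r)
    equals max_i |c_i|_p * r^i."""
    return max(p_adic_abs(c, p) * r ** i for i, c in enumerate(coeffs))

def poly_mul(f, g):
    # coefficient lists, constant term first
    h = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h

p, r = 3, Fraction(1, 3)           # r = |3|_3 lies in |k^*|, the "type II" case
f = [Fraction(1), Fraction(3)]     # f = 1 + 3T
g = [Fraction(9), Fraction(1)]     # g = 9 + T
# eta_{0,r} is multiplicative (a norm), not merely submultiplicative:
assert gauss_norm(poly_mul(f, g), r, p) == gauss_norm(f, r, p) * gauss_norm(g, r, p)
```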
A type IV point corresponds to a family of nested closed disks with empty intersection. Let $J$ be a directed index set and for every $j \in J$, $B(a_j,r_j)$ be a closed disk around $a_j \in k$ of radius $r_j$ such that $\bigcap_{j \in J} B(a_j,r_j) = \emptyset$. Let $\mathfrak{E} := \{B(a_j,r_j)|j \in J\}$. We define a multiplicative norm $\eta_{\mathfrak{E}}$ on the function field as follows. For $f \in k(T)$, let $|f(\eta_\mathfrak{E})| := \mathrm{inf}_{j \in J} \{\mathrm{sup}_{y \in B(a_j,r_j)} |f(y)|\}$. The set of multiplicative norms on $k(T)$ defined in this manner corresponds to the set of type IV points in $\mathbb{P}^{1,an}_k$. It is standard practice to describe the points of $\mathbb{A}^{1,an}_k$ as the collection $\mathcal{M}(k[T])$ of multiplicative seminorms on the algebra $k[T]$ which extend the valuation of the field $k$. As a set $\mathbb{P}^{1,an}_k = \mathbb{A}^{1,an}_k \cup \{\infty\}$ where $\infty \in \mathbb{P}^1_k(k)$ is the complement of the affine subspace $\mathrm{Spec}(k[T]) \subset \mathbb{P}^1_k$. \subsubsection{The standard analytic domains in $\mathbb{A}^{1,\mathrm{an}}$.} We follow the treatment in Section 2 of \cite{BPR}. The topological space $\mathbb{P}^{1,an}_k$ is compact, simply connected and Hausdorff. We now describe certain subspaces of $\mathbb{A}^{1,an}_k \subset \mathbb{P}^{1,an}_k$. The tropicalization map, $\mathrm{trop} : \mathcal{M}(k[T]) = \mathbb{A}^{1,\mathrm{an}} \to \mathbb{R} \cup \infty$ is defined by $p \mapsto -\mathrm{log}|T(p)|$. Using $\mathrm{trop}$, we define certain analytic domains contained in $\mathbb{A}^{1,\mathrm{an}}$. \begin{enumerate} \item For $r \in |k^*|$, the standard closed ball of radius $r$, $\mathbf{B}(r)$ is the set \\ $\mathrm{trop}^{-1}([-\mathrm{log}(r),\infty])$. The space $\mathbf{B}(r)$ is the affinoid space $\mathcal{M}(k\{r^{-1}T\})$. 
\item For $r \in |k^*|$, the standard open ball of radius $r$, denoted $\mathbf{O}(r)$, is the set $\mathrm{trop}^{-1}((-\mathrm{log}(r),\infty])$. The space $\mathbf{O}(r)$ is an open analytic domain contained in $\mathbb{A}^{1,\mathrm{an}}$. \item For $r_1,r_2 \in |k^*|$ with $r_1 \leq r_2$, the standard closed annulus $\mathbf{S}(r_1,r_2)$ of inner radius $r_1$ and outer radius $r_2$ is the set $\mathrm{trop}^{-1}([-\mathrm{log}(r_2),-\mathrm{log}(r_1)])$. It is the affinoid space $\mathcal{M}(k\{r_1T^{-1},r_2^{-1}T\})$. The (logarithmic) modulus of $\mathbf{S}(r_1,r_2)$ is defined to be the value $\mathrm{log}(r_2) - \mathrm{log}(r_1)$. \item For $r_1,r_2 \in |k^*|$ with $r_1 < r_2$, the standard open annulus of inner radius $r_1$ and outer radius $r_2$, denoted $\mathbf{S}(r_1,r_2)_+$, is the set $\mathrm{trop}^{-1}((-\mathrm{log}(r_2),-\mathrm{log}(r_1)))$. The (logarithmic) modulus of $\mathbf{S}(r_1,r_2)_+$ is defined to be the value $\mathrm{log}(r_2) - \mathrm{log}(r_1)$. \item Let $r \in |k^*|$. The standard punctured Berkovich open disk of radius $r$ is the set $\mathbf{O}(r) \smallsetminus \{0\}$, which we denote $\mathbf{S}(0,r)_+$. \end{enumerate} We now highlight certain subspaces of the analytic domains defined above. The tropicalization map defined above restricts to a map $\mathrm{trop} : \mathbf{G}_m^{\mathrm{an}} \to \mathbb{R}$. We define a section $\sigma : \mathbb{R} \to \mathbf{G}_m^{\mathrm{an}}$ of the restriction of the tropicalization map to $\mathbf{G}_m^{\mathrm{an}}$ by mapping $r \in \mathbb{R}$ to the point $\eta_{0,\mathrm{exp}(-r)}$ (cf. 2.2.1). \begin{defi} \emph{Let $A$ be a standard open annulus, a standard closed annulus or a standard punctured open disk.
The} skeleton of $A$ \emph{denoted $\Sigma(A)$ is the set $\sigma(\mathbb{R}) \cap A$.} \end{defi} \begin{es} \begin{enumerate} \item \emph{If $r_1, r_2 \in |k^*|$ with $r_1 < r_2$ then the skeleton of the standard open annulus $\mathbf{S}(r_1,r_2)_+$ is the set $\sigma((-\mathrm{log}(r_2),-\mathrm{log}(r_1)))$.} \item \emph{If $r_1, r_2 \in |k^*|$ with $r_1 \leq r_2$ then the skeleton of the standard closed annulus $\mathbf{S}(r_1,r_2)$ is the set $\sigma([-\mathrm{log}(r_2),-\mathrm{log}(r_1)])$.} \end{enumerate} \end{es} Following \cite{BPR}, we introduce the following definition to distinguish those properties of the standard analytic domains above and their skeleta which are invariant under isomorphism. \begin{defi} \emph{A} general closed disk (resp. general closed annulus, resp. general open annulus, resp. general open disk, resp. general punctured Berkovich open disk) \emph{is an analytic space that is isomorphic to a standard closed disk (resp. standard closed annulus, resp. standard open annulus, resp. standard open disk, resp. standard punctured open disk)}. \end{defi} \begin{prop} [\cite{BPR}, 2.8] Let $A,A'$ be standard closed annuli or open annuli or punctured open disks. Let $\phi : A \to A'$ be an isomorphism. Then $\Sigma(A) = \phi^{-1}(\Sigma(A'))$. \end{prop} \begin{defi} \emph{Let $A$ be a general open annulus (resp. general closed annulus, resp. general punctured Berkovich open disk). Let $A'$ be a standard open annulus (resp. standard closed annulus, resp. standard punctured Berkovich open disk) such that there exists an isomorphism of analytic spaces $\phi : A \to A'$. The} skeleton $\Sigma(A)$ of $A$ \emph{is the set $\phi^{-1}(\Sigma(A'))$. The skeleton of $A$ is well defined by Proposition 2.5. 
} \end{defi} When $A$ is a general open or closed annulus or a general punctured open disk, the skeleton $\Sigma(A)$ can be identified with a real interval up to linear transformations of the form $x \mapsto x + \mathrm{val}(\alpha)$ for some $\alpha \in k^*$. The skeleton $\Sigma(A)$ is endowed with the structure of a metric space. We introduce the notion of a semistable vertex set of a smooth, projective curve and then generalize this notion to the case of any curve $C$ over $k$. As above, given a semistable vertex set, we associate to it a closed subspace called its skeleton. We then show that the homotopy type of $C^{\mathrm{an}}$ is determined by such skeleta. What follows is inspired by the treatment in [\cite{aminibaker}, 4.4], [\cite{HL}, Section 7] and [\cite{BPR}, Section 5]. \begin{defi} \emph{ Let $C$ be a smooth, projective, irreducible curve defined over the field $k$ and $C^{\mathrm{an}}$ be its analytification.} A semistable vertex set $\mathfrak{V}$ for $C^{\mathrm{an}}$ \emph{is a finite collection of type II points such that if $\mathcal{C}$ denotes the set of connected components of $C^{\mathrm{an}} \smallsetminus \mathfrak{V}$ then there exists a finite subset $S \subset \mathcal{C}$ such that every $A \in S$ is isomorphic to a standard open annulus whose inner and outer radii belong to $|k^*|$ and every $A \in \mathcal{C} \smallsetminus S$ is isomorphic to the standard open disk of unit radius $\mathbf{O}(1)$. Such a decomposition of the space $C^{\mathrm{an}} \smallsetminus \mathfrak{V}$ is called a} semistable decomposition. \end{defi} The existence of semistable vertex sets in $C^{\mathrm{an}}$ follows from Section 4 in \cite{BPR}. \\ \begin{defi} An abstract finite metric graph \emph{comprises the following data: a finite set of} vertices \emph{$V$, a set of} edges \emph{$E \subset V \times V$ which is symmetric and a function $l:E \to \mathbb{R}_{> 0} \cup \{\infty\}$ such that if $(x,y) \in E$ then $l(x,y) = l(y,x)$}.
\end{defi} The function $l$ is called the length function. \begin{defi} A finite metric graph $G$ \emph{is the geometric realisation of an abstract finite metric graph $(V,E,l)$ in which every edge $e$ can be identified with a real interval of length $l(e)$.} The genus $g(G)$ of the graph $G$ \emph{is defined to be the number $1 - \mathrm{card}(V) + \mathrm{card}(E)$.} \end{defi} It can be verified that if $G$ is a graph which is the geometric realisation of two abstract finite metric graphs $(V,E,l)$ and $(V',E',l')$ then $1 - \mathrm{card}(V) + \mathrm{card}(E) = 1 - \mathrm{card}(V') + \mathrm{card}(E')$. \begin{defi} \emph{Let $C$ be a smooth projective irreducible curve over $k$.} The skeleton associated to a semistable vertex set $\mathfrak{V}$ in $C^{\mathrm{an}}$ \emph{is defined to be the union of the skeleta of all open annuli which occur in the semistable decomposition along with the vertex set $\mathfrak{V}$. It is denoted $\Sigma(C^{\mathrm{an}},\mathfrak{V})$.} \end{defi} Let $C$ be as in the definition above. The skeleton $\Sigma(C^{\mathrm{an}},\mathfrak{V})$ can be seen as a graph whose edges correspond to the closures of the skeleta of the generalized open annuli that occur in the semistable decomposition associated to the set $\mathfrak{V}$. The space $C^{\mathrm{an}}$ is pathwise connected. Hence for any two points $v_1,v_2 \in \mathfrak{V}$, there exists a path between them. From the nature of the semistable decomposition, each such path can be taken to be the union of a finite number of edges of the skeleton $\Sigma(C^{\mathrm{an}},\mathfrak{V})$. It follows that $\Sigma(C^{\mathrm{an}},\mathfrak{V})$ is pathwise connected. By Corollary 2.6 in \cite{BPR}, the modulus of the skeleton of every open annulus which occurs in the semistable decomposition defines a length function on the set of edges of $\Sigma(C^{\mathrm{an}},\mathfrak{V})$. The skeleton $\Sigma(C^{\mathrm{an}},\mathfrak{V})$ is thus a finite, metric graph. 
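The genus formula $g(G) = 1 - \mathrm{card}(V) + \mathrm{card}(E)$ and its independence of the chosen abstract realisation can be checked on small examples. The sketch below is a toy illustration outside the paper's setting: the graph is given combinatorially, and edge lengths play no role in the genus.

```python
def graph_genus(num_vertices, edges):
    """First Betti number g(G) = 1 - card(V) + card(E) of a connected graph;
    edges is a list of (vertex, vertex) pairs, possibly with repeats (multi-edges)."""
    return 1 - num_vertices + len(edges)

# "Theta"-like graph: two vertices joined by three parallel edges -> genus 2.
assert graph_genus(2, [(0, 1), (0, 1), (0, 1)]) == 2

# Subdividing one edge (another realisation of the same metric graph) adds one
# vertex and one edge, so the genus is unchanged:
assert graph_genus(3, [(0, 2), (2, 1), (0, 1), (0, 1)]) == 2
```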
Let $x,y \in \Sigma(C^{\mathrm{an}},\mathfrak{V})$ and $P$ be an injective path from $x$ to $y$. The path $P$ can be seen as a finite union of injective closed paths $\bigcup_i P_i$ such that for every $i$, $P_i$ is contained in the closure of the skeleton of a general open annulus which occurs in the semistable decomposition associated to $\mathfrak{V}$. The skeleton of such an open annulus is a metric space, and its metric extends to its closure in $C^{\mathrm{an}}$. It follows that the length $l(P_i)$ of the path $P_i$ is well defined. For instance, if $P_i$ is an injective path from $x_i$ to $y_i$ then $l(P_i) := d(x_i,y_i)$ where $d$ is the metric on the closure of the skeleton of the open annulus that contains $x_i$ and $y_i$. We set $l(P) := \sum_i l(P_i)$. The graph $\Sigma(C^{\mathrm{an}},\mathfrak{V})$ can be given the structure of a metric space by defining the distance between two points $x,y$ in $\Sigma(C^{\mathrm{an}},\mathfrak{V})$ to be $\mathrm{min}_{P \in \mathcal{P}(x,y)} \{l(P)\}$ where $\mathcal{P}(x,y)$ is the set of injective paths between $x$ and $y$. This defines a metric on $\Sigma(C^{\mathrm{an}},\mathfrak{V})$. Let $\mathfrak{V}'$ be a semistable vertex set that contains $\mathfrak{V}$. By Propositions 3.13 and 5.3 in \cite{BPR}, $\Sigma(C^{\mathrm{an}},\mathfrak{V}) \subseteq \Sigma(C^{\mathrm{an}},\mathfrak{V}')$ and the inclusion is an isometry. Let $\mathbf{H}_0(C^{\mathrm{an}})$ denote the set of points in $C^{\mathrm{an}}$ of type II or III. By Corollary 5.1 in loc. cit., \begin{align*} \mathbf{H}_0(C^{\mathrm{an}}) = \varinjlim_{\mathfrak{V}} \Sigma(C^{\mathrm{an}},\mathfrak{V}). \end{align*} The limit in the above equation is taken over the family of semistable vertex sets $\mathfrak{V}$ in $C^{\mathrm{an}}$.
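The distance on $\Sigma(C^{\mathrm{an}},\mathfrak{V})$ defined above, a minimum of lengths over injective paths, is the usual shortest-path metric of a weighted graph: for strictly positive edge lengths an optimal path is automatically injective, so Dijkstra's algorithm computes it. The following sketch is illustrative only; the vertex names and edge lengths are invented and the graph stands in for (the vertex set of) a skeleton.

```python
import heapq
from collections import defaultdict

def skeletal_distance(edges, x, y):
    """Shortest-path distance min over paths P from x to y of l(P) in a finite
    metric graph; edges is a list of (u, v, length) with length > 0, and
    parallel edges (multi-edges) are allowed."""
    adj = defaultdict(list)
    for u, v, l in edges:
        adj[u].append((v, l))
        adj[v].append((u, l))
    dist = {x: 0}
    heap = [(0, x)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == y:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, l in adj[u]:
            nd = d + l
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")  # y not reachable from x

# A circle of circumference 3 realised with three vertices and unit edges:
edges = [("a", "b", 1), ("b", "c", 1), ("c", "a", 1)]
assert skeletal_distance(edges, "a", "c") == 1  # the short way round
```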
As each of the $\Sigma(C^{\mathrm{an}},\mathfrak{V})$ are metric spaces and the inclusions in the inductive limit are isometries, we have a metric on the space $\mathbf{H}_0(C^{\mathrm{an}})$ which is called its skeletal metric. By Corollary 5.7 in \cite{BPR}, this metric extends in a unique way to the space $\mathbf{H}(C^{\mathrm{an}}) := C^{\mathrm{an}} \smallsetminus C(k)$. \subsubsection{ The tangent space at a point on $C^{\mathrm{an}}$ } Let $C$ be a smooth projective irreducible $k$-curve. We begin with the notion of a geodesic segment in a metric space $T$. \begin{defi} \emph{A} geodesic segment from $x$ to $y$ in a metric space $T$ \emph{is the image of an isometric embedding $[a,b] \to T$ with $[a,b] \subset \mathbb{R}$ and $a \mapsto x$, $b \mapsto y$. We often identify a geodesic segment with its image in $T$ and denote it $[x,y]$. } \end{defi} Let $p \in \mathbf{H}(C^{\mathrm{an}})$. A \emph{non trivial geodesic segment starting at $p$} is a geodesic segment $\alpha : [0,a] \hookrightarrow \mathbf{H}(C^{\mathrm{an}})$ such that $a > 0$ and $\alpha(0) = p$. We say that two non trivial geodesic segments starting from $p$ are \emph{equivalent at $p$} if they agree in a neighborhood of zero. If $\alpha$ is a geodesic segment starting at the point $p$ then we refer to the equivalence class defined by $\alpha$ as its \emph{germ}. These notions can be adapted to the case $p \in C(k)$ as follows. A \emph{non trivial geodesic segment starting at $p$} is an embedding $\alpha : [\infty,a] \hookrightarrow C^{\mathrm{an}}$ such that $a < \infty$, $\alpha(\infty) = p$, $\alpha((\infty,a]) \subset \mathbf{H}(C^{\mathrm{an}})$ and the restriction $\alpha_{|(\infty,a]}$ is an isometry. 
As before, we say that two non trivial geodesic segments starting from $p$ are \emph{equivalent at $p$} if they agree in a neighborhood of $\infty$, and if $\alpha$ is a geodesic segment starting at the point $p$ then we refer to the equivalence class defined by $\alpha$ as its \emph{germ}. We now define the tangent space at a point of $C^{\mathrm{an}}$. \begin{defi} \emph{Let $x \in C^{\mathrm{an}}$}. The tangent space at $x$ \emph{denoted $T_x$ is the set of non trivial geodesic segments starting from $x$, up to equivalence at $x$.} \end{defi} Let $x \in C^{\mathrm{an}}$. The tangent space at $x$ depends solely on a neighborhood of $x$. Following Sections 4 and 5 of \cite{BPR}, we introduce the concept of a simple neighborhood $U$ of $x$ and state the result which relates the tangent space $T_x$ to $\pi_0(U \smallsetminus x)$. \begin{prop} (\cite{BPR}, Corollary 4.27) Let $C$ be a smooth projective irreducible $k$-curve. Let $x \in C^{\mathrm{an}}$. There is a fundamental system of open neighborhoods $\{U_\alpha\}$ of $x$ of the following form: \begin{enumerate} \item If $x$ is a type-I or a type-IV point then the $U_{\alpha}$ are open balls. \item If $x$ is a type-III point then the $U_{\alpha}$ are open annuli with $x \in \Sigma(U_\alpha)$. \item If $x$ is a type-II point then $U_\alpha = \tau^{-1}(W_\alpha)$ where $W_\alpha$ is a simply-connected open neighborhood of $x$ in $\Sigma(C^{\mathrm{an}},\mathfrak{V})$ for some semistable vertex set $\mathfrak{V}$ of $C^{\mathrm{an}}$ that contains $x$ and $\tau : C^{\mathrm{an}} \to C^{\mathrm{an}}$ is defined by $x \mapsto \lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}(1,x)$ (Proposition 2.21). Each $U_{\alpha} \smallsetminus \{x\}$ is a disjoint union of open balls and open annuli. \end{enumerate} \end{prop} \begin{defi} [\cite{BPR}, Definition 4.28] \emph{Let $C$ be a smooth projective irreducible $k$-curve.
A neighborhood of $x \in C^{\mathrm{an}}$ of the form described in Proposition 2.13 is called a} simple neighborhood of $x$. \end{defi} The following proposition is a minor modification of Lemma 5.12 of \cite{BPR} to include points of type I as well. \begin{prop} Let $x \in C^{\mathrm{an}}$ and let $U$ be a simple neighborhood of $x$ in $C^{\mathrm{an}}$. Then $[x,y] \mapsto y$ establishes a bijection $T_x \to \pi_0(U \smallsetminus \{x\})$. Moreover, \begin{enumerate} \item If $x$ is of type I or IV then there is only one tangent direction at $x$. \item If $x$ has type III then there are two tangent directions at $x$. \item If $x$ has type II then $U = \mathrm{red}^{-1}(E)$ for a smooth irreducible component $E$ of the special fiber of a semistable formal model $\mathfrak{C}$ of $C$ (cf. 4.29 loc. cit.) and $T_x \tilde{\to} \pi_0(U \smallsetminus \{x\}) \tilde{\to} E(\tilde{k})$. \end{enumerate} \end{prop} \begin{rem} It should be pointed out that the notation $[x,y]$ was introduced only when $x,y \in \mathbf{H}(C^{\mathrm{an}})$. When $x \in C^{\mathrm{an}}$ is a point of type I and $\alpha : [\infty,a] \hookrightarrow C^{\mathrm{an}}$ is a geodesic segment starting from $x$, by $[x,\alpha(a)]$ we mean the image of the embedding $\alpha([\infty,a])$. \end{rem} Let $\rho : C' \to C$ be a finite morphism between smooth projective $k$-curves. If $x' \in C'^{\mathrm{an}}$ then the tangent space at $x'$ maps to the tangent space at $\rho(x')$ in an obvious fashion. Suppose $x'$ is not of type I. Let $\lambda :[0,1] \to C'^{\mathrm{an}}$ be a representative of a point on the tangent space at $x'$. Let $U$ be a simple neighborhood (cf. Definition 2.14) of the point $\rho(x')$. We can find $a > 0$ such that $\rho \circ \lambda((0,a])$ lies in a connected component of the space $U \smallsetminus \rho(x')$. This connected component which contains $\rho \circ \lambda((0,a])$ depends only on the equivalence class of $\lambda$, i.e.
on the element of the tangent space that is represented by $\lambda$. A similar argument can be used when $x' \in C'(k)$. By 2.15, we have thus defined a map \begin{align*} d\rho_{x'} &: T_{x'} \to T_{\rho(x')}. \end{align*} \subsection{Weak semistable vertex sets} Let $C$ be a smooth projective irreducible $k$-curve and let $\mathfrak{V}$ be a semistable vertex set in $C^{\mathrm{an}}$. Recall that we defined the skeleton associated to $\mathfrak{V}$ and denoted it $\Sigma(C^{\mathrm{an}},\mathfrak{V})$. Observe that by construction, the connected components of the space $C^{\mathrm{an}} \smallsetminus \Sigma(C^{\mathrm{an}}, \mathfrak{V})$ are isomorphic to Berkovich open balls. If $C$ is not smooth or not complete then there does not exist a finite set of type II points $\mathfrak{V} \subset C^{\mathrm{an}}$ such that $C^{\mathrm{an}}$ decomposes into the disjoint union of general open annuli and general open disks. However, we can find a finite set of points $\mathfrak{V}$ in $C^{\mathrm{an}}$ and, as before, define a finite graph $\Sigma(C^{\mathrm{an}},\mathfrak{V})$ such that the space $C^{\mathrm{an}} \smallsetminus \Sigma(C^{\mathrm{an}},\mathfrak{V})$ is the disjoint union of general open disks. It is with this goal in mind that we introduce the notion of weak semistable vertex sets, first for smooth projective irreducible curves and then for any $k$-curve.
\begin{defi} \emph{Let $C$ be a smooth projective irreducible $k$-curve.} A weak semistable vertex set $\mathfrak{W}$ in $C^{\mathrm{an}}$ \emph{is defined to be a finite collection of points of type I or II in $C^{\mathrm{an}}$ such that if $\mathcal{C}$ denotes the set of connected components of $C^{\mathrm{an}} \smallsetminus \mathfrak{W}$ then there exists a finite subset $S \subset \mathcal{C}$ such that every $A \in S$ is isomorphic to a standard open annulus or a standard punctured Berkovich open unit disk and every $A \in \mathcal{C} \smallsetminus S$ is isomorphic to a standard Berkovich open unit disk.} \end{defi} As before, we define the skeleton $\Sigma(C^{\mathrm{an}},\mathfrak{W})$ associated to such a set. Let $\Sigma(C^{\mathrm{an}},\mathfrak{W})$ be the union of $\mathfrak{W}$ and the skeleton of every open annulus and punctured open disk in the decomposition of $C^{\mathrm{an}} \smallsetminus \mathfrak{W}$. The closed subspace $\Sigma(C^{\mathrm{an}},\mathfrak{W})$ is homeomorphic to a connected, finite metric graph whose length function is not necessarily finite by which we mean that there could be edges of length $\infty$. We generalise this notion of weak semistable vertex sets to the case of curves over $k$. \begin{rem}Let $C$ be a $k$-curve. Let $j : C \hookrightarrow \bar{C}$ be a dense open immersion where $\bar{C}$ is projective over $k$. The pair $(j,\bar{C})$ is called a compactification of $C_{/k}$. Let $F := \bar{C} \smallsetminus C$. We know that $F$ is a finite set of points and $C^{\mathrm{an}} = \bar{C}^{\mathrm{an}} \smallsetminus F$. Let $\bar{C}_i$ denote the irreducible components of $\bar{C}$ and $\bar{C'}_i$ denote their respective normalisations. The canonical morphisms $\bar{C'}_i \to \bar{C}$ define a morphism $\rho_{\bar{C}} : \bigcup_i \bar{C}'_i \to \bar{C}$. \end{rem} We make use of the notation introduced in Remark 2.18 in the definition that follows. \begin{defi} \emph{Let $C$ be a $k$-curve. 
Let $\bar{C}$ be a compactification of $C$.} A weak semistable vertex set $\mathfrak{W}$ for $C^{\mathrm{an}}$ \emph{ is a finite collection of points of type I or II in $\bar{C}^{\mathrm{an}}$ such that} \emph{ \begin{enumerate} \item The set $\mathfrak{W}$ contains the set of singular points of $\bar{C}$ and the points of $\bar{C} \smallsetminus C$. \item $\rho_{\bar{C}}^{-1}(\mathfrak{W}) \cap \bar{C}'^{\mathrm{an}}_i$ is a weak semistable vertex set of the irreducible smooth projective curve $\bar{C}'_i$. \end{enumerate} } \end{defi} As the above definition requires a compactification $j : C \hookrightarrow \bar{C}$, we should have said a weak semistable vertex set for the pair $(C^{\mathrm{an}}, j : C \hookrightarrow \bar{C})$. However, we abbreviate notation and refer to a set $\mathfrak{W}$ which satisfies the conditions of the above definition simply as a weak semistable vertex set for $C^{\mathrm{an}}$. As before, we define the skeleton associated to a weak semistable vertex set for $C^{\mathrm{an}}$ as follows. \begin{defi} \emph{Let $C$ be a $k$-curve and $\bar{C}$ be a compactification of $C_{/k}$. Let $\mathfrak{W}$ be a weak semistable vertex set for $C^{\mathrm{an}}$. Let $\mathfrak{W}'_i := \rho_{\bar{C}}^{-1}(\mathfrak{W}) \cap \bar{C}'^{\mathrm{an}}_i$. We define} the skeleton associated to $\mathfrak{W}$ to be $\Sigma(C^{\mathrm{an}},\mathfrak{W}) := [\bigcup_i \rho_{\bar{C}}(\Sigma({\bar{C}'_i}{}^{\mathrm{an}},\mathfrak{W}'_i))] \cap C^{\mathrm{an}}$. \end{defi} It can be verified directly from the definition of the skeleton $\Sigma(C^{\mathrm{an}},\mathfrak{W})$ associated to $\mathfrak{W}$ that the space $C^{\mathrm{an}} \smallsetminus \Sigma(C^{\mathrm{an}},\mathfrak{W})$ decomposes into the disjoint union of sets each of which is isomorphic as analytic spaces to the Berkovich open disk $\mathbf{O}(0,1)$. \begin{prop} Let $C$ be a $k$-curve. Let $\mathfrak{V} \subset \mathfrak{W}$ be weak semistable vertex sets of $C^{\mathrm{an}}$.
There exists a deformation retraction $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})} : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ whose image is the skeleton $\Sigma(C^{\mathrm{an}},\mathfrak{V})$ and a deformation retraction $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{\mathfrak{W}}$ with image $\Sigma(C^{\mathrm{an}},\mathfrak{W})$. \emph{(The image of a deformation retraction $\lambda : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ is the set $\lambda(1,C^{\mathrm{an}}) := \{\lambda(1,p) | p \in C^{\mathrm{an}} \}$).} \end{prop} \begin{proof} We begin by constructing a deformation retraction $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})} : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ with image $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}(1,C^{\mathrm{an}}) = \Sigma(C^{\mathrm{an}},\mathfrak{V})$. Let $\mathcal{D}$ denote the set of connected components of the space $C^{\mathrm{an}} \smallsetminus \Sigma(C^{\mathrm{an}},\mathfrak{V})$. By definition, each element $D \in \mathcal{D}$ is isomorphic to the Berkovich open ball $\mathbf{O}(0,1)$. We fix isomorphisms $\rho_D : D \to \mathbf{O}(0,1)$ for every $D \in \mathcal{D}$. We define $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})} : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ as follows. For $p \in \Sigma(C^{\mathrm{an}},\mathfrak{V})$, we set $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}(t,p) := p$ for every $t \in [0,1]$. Let $p \in C^{\mathrm{an}} \smallsetminus \Sigma(C^{\mathrm{an}},\mathfrak{V})$. There exists $D \in \mathcal{D}$ such that $p \in D$. Suppose $p$ is not a point of type IV. By 2.1.1, there exist $a \in k$ and $r \in [0,1)$ such that $\rho_D(p) = \eta_{a,r}$. When $r = 0$, we adopt the convention that $\eta_{a,r}$ is the point $a$. For $t \in [0,r]$, we set $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}(t,p) := p$ and when $t \in (r,1)$, let $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}(t,p) := \rho_D^{-1}(\eta_{a,t})$.
Lastly, let $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}(1,p) := \bar{D} \cap \Sigma(C^{\mathrm{an}},\mathfrak{V})$ where $\bar{D}$ is the closure of $D$ in $C^{\mathrm{an}}$. When $p$ is of type IV, $\rho_D(p)$ corresponds to the semi-norm associated to a nested sequence of closed disks $\{B(x_i,u_i) \subset k \}_i$ whose intersection is empty. Let $u := \mathrm{lim}_i(u_i)$. For any $t > u$, there exists a unique closed disk $B(x,t)$ such that its analytification $\mathbf{B}(x,t)$ contains the point $\rho_D(p)$. We set $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}(t,p) := p$ when $t \in [0,u]$ and $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}(t,p) := \rho_D^{-1}(\eta_{x,t})$ for $t \in (u,1)$ where $\mathbf{B}(x,t) \subset \mathbf{O}(0,1)$ is the unique Berkovich closed disk of radius $t$ that contains the point $\rho_D(p)$. As before, let $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}(1,p) := \bar{D} \cap \Sigma(C^{\mathrm{an}},\mathfrak{V})$. The function $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})} : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ is well defined, and the only points $p$ in $C^{\mathrm{an}}$ such that $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}(t,p) = p$ for every $t \in [0,1]$ are those points which belong to $\Sigma(C^{\mathrm{an}},\mathfrak{V})$. Furthermore, $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}(1,C^{\mathrm{an}}) = \Sigma(C^{\mathrm{an}},\mathfrak{V})$. We show that $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}$ is continuous when $[0,1] \times C^{\mathrm{an}}$ is endowed with the product topology. Let $W \subset C^{\mathrm{an}}$ be a connected open set. If $W$ is disjoint from $\Sigma(C^{\mathrm{an}},\mathfrak{V})$ then it must be contained in some $D \in \mathcal{D}$ and $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{-1}(W)$ does not intersect $\{1\} \times C^{\mathrm{an}} \subset [0,1] \times C^{\mathrm{an}}$. We show that $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{-1}(W)$ is open in $[0,1] \times C^{\mathrm{an}}$.
As $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}([0,1) \times D) \subseteq D$, the map ${\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}}_{|[0,1) \times D}$ defines a map $\lambda' : [0,1) \times \mathbf{O}(0,1) \to \mathbf{O}(0,1)$ given by $\lambda'(t,x) := \rho_D(\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}(t,\rho^{-1}_D(x)))$ for $t \in [0,1)$ and $x \in \mathbf{O}(0,1)$. We need only check that $(\lambda')^{-1}(\rho_D(W))$ is open in $[0,1) \times \mathbf{O}(0,1)$. This can be verified using the explicit description of connected open sets in $\mathbf{O}(0,1)$ provided by Lemma 2.32. Let $W$ be a connected open set which intersects $\Sigma(C^{\mathrm{an}},\mathfrak{V})$ in an open set $W'$. Let $\mathcal{D'} := \{D \in \mathcal{D} | \bar{D} \cap \Sigma(C^{\mathrm{an}},\mathfrak{V}) \subset W'\}$. The semistable decomposition of $C^{\mathrm{an}}$ and the connectedness of $W$ imply that $W$ must be contained in $(\bigcup_{D \in \mathcal{D}'} D) \cup W'$. We can decompose $W$ as the disjoint union $\bigcup_{D \in \mathcal{D}'} (W \cap D) \cup W'$. By the previous case, the set $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{-1}(W \cap D)$ is open in $[0,1] \times C^{\mathrm{an}}$ for every $D \in \mathcal{D}'$. By construction $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{-1}(W') = ([0,1] \times W') \cup (\{1\} \times (\bigcup_{D \in \mathcal{D}'} D))$. We show that every point in $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{-1}(W')$ has an open neighbourhood contained in $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{-1}(W)$. It can be verified that $[0,1] \times W \subset \lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{-1}(W)$; this set is an open neighbourhood of every point in $[0,1] \times W'$. It remains to show that every point in $\{1\} \times (\bigcup_{D \in \mathcal{D}'} D)$ has an open neighbourhood contained in $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{-1}(W)$.
Let $x \in D$ for some $D \in \mathcal{D}'$. As $W$ is connected and $W$ is an open neighbourhood of $\overline{D} \smallsetminus D$, we must have that $W \cap D$ is connected as well. By Remark 2.31, we can reduce to the case when $W \cap D$ is the complement in $D$ of the union of a finite number of Berkovich closed disks and points of types I and IV. It follows that there exists $r_D$ such that $(r_D,1] \times D \subset \lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{-1}(W)$. The set $(r_D,1] \times D$ is an open neighbourhood of $(1,x)$ contained in $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{-1}(W)$. We have thus proved that $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})} : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ is continuous and hence a deformation retraction. We now prove the second part of the proposition. We define a deformation retraction $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^\mathfrak{W} : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ with image $\Sigma(C^{\mathrm{an}},\mathfrak{W})$ as follows. For $p \in C^{\mathrm{an}}$, let $s_p \in [0,1]$ be the smallest real number such that $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}(s_p,p) \in \Sigma(C^{\mathrm{an}},\mathfrak{W})$. We set $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^\mathfrak{W}(t,p) := \lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}(t,p)$ when $t \in [0,s_p]$ and $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^\mathfrak{W}(t,p) := \lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}(s_p,p)$ when $t \in (s_p,1]$. Using arguments as before, it can be checked that $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^\mathfrak{W}$ is indeed a deformation retraction with image $\Sigma(C^{\mathrm{an}},\mathfrak{W})$.
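The stopped retraction just defined can be summarised in a single formula: abbreviating $\lambda := \lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}$, we have for every $p \in C^{\mathrm{an}}$ \begin{align*} \lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^\mathfrak{W}(t,p) = \begin{cases} \lambda(t,p) & \text{if } t \in [0,s_p],\\ \lambda(s_p,p) & \text{if } t \in (s_p,1], \end{cases} \end{align*} so that the path $t \mapsto \lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^\mathfrak{W}(t,p)$ follows the flow of $\lambda$ towards $\Sigma(C^{\mathrm{an}},\mathfrak{V})$ and halts at the first moment $s_p$ at which it meets $\Sigma(C^{\mathrm{an}},\mathfrak{W})$.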
\end{proof} \begin{rem} \emph{Recall that if $C$ is a smooth projective $k$-curve then the space $\mathbf{H}(C^{\mathrm{an}}) := C^{\mathrm{an}} \smallsetminus C(k)$ is a metric space. We can hence define isometries $\alpha : [a,b] \hookrightarrow \mathbf{H}(C^{\mathrm{an}})$ where $[a,b] \subset \mathbb{R}$. This fact can be generalised to any $k$-curve. Let $C$ be a $k$-curve. By Remark 2.18, there exists a finite set of smooth projective curves $\{\bar{C}'_i\}$ such that $\bigcup_i \mathbf{H}((\bar{C}'_i)^{\mathrm{an}}) = \mathbf{H}(C^{\mathrm{an}}) := C^{\mathrm{an}} \smallsetminus C(k)$. We say that a continuous function $\alpha : [a,b] \hookrightarrow \mathbf{H}(C^{\mathrm{an}})$ is an \emph{isometry} if $\alpha([a,b]) \subset \mathbf{H}((\bar{C}'_i)^{\mathrm{an}})$ for some $i$ and $\alpha : [a,b] \hookrightarrow \mathbf{H}((\bar{C}'_i)^{\mathrm{an}})$ is an isometry.} \emph{Let the notation be as in Proposition 2.21. For $p \in C^{\mathrm{an}}$, let $(\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{\mathfrak{W}})^p : [0,1] \to C^{\mathrm{an}}$ be the path defined by $t \mapsto \lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{\mathfrak{W}}(t,p)$. Observe that the deformation retraction $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{\mathfrak{W}}$ is such that if $a,b \in [0,1]$ and $p \in \mathbf{H}(C^{\mathrm{an}})$ then there exist $a_1 < b_1$ in $[a,b]$ such that $(\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{\mathfrak{W}})^p$ is constant on the segments $[a,a_1]$ and $[b_1,b]$ and $(\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^{\mathfrak{W}})^p \circ -\mathrm{exp} : [-\mathrm{log}(b_1), -\mathrm{log}(a_1)] \to \mathbf{H}(C^{\mathrm{an}})$ is an isometry.} \end{rem} \begin{defi} \emph{ Let $C$ be a $k$-curve. Let $\mathfrak{V} \subset \mathfrak{W}$ be weak semistable vertex sets. Let $\lambda : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ be a deformation retraction with image $\Sigma(C^{\mathrm{an}},\mathfrak{V})$.
For $p \in C^{\mathrm{an}}$, let $s_p \in [0,1]$ be the smallest real number such that $\lambda(s_p,p) \in \Sigma(C^{\mathrm{an}},\mathfrak{W})$. A deformation retraction $\lambda' : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ is said to} extend the deformation retraction $\lambda$ by $\mathfrak{W}$ \emph{if for $p \in C^{\mathrm{an}}$, $\lambda'(t,p) = \lambda(t,p)$ when $t \in [0,s_p]$ and $\lambda'(t,p) = \lambda(s_p,p)$ when $t \in (s_p,1]$.} \end{defi} \begin{rem} \emph{In the proof of Proposition 2.21, we constructed deformation retractions $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}$ and $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^\mathfrak{W}$. Observe that $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}^\mathfrak{W}$ is an extension of $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}$ by $\mathfrak{W}$.} \end{rem} As outlined in the introduction, we show that given a weak semistable vertex set $\mathfrak{W}$ of a complete curve $C$, the genus of the finite graph $\Sigma(C^{\mathrm{an}},\mathfrak{W})$ is an invariant of the curve. \begin{prop} Let $C$ be a complete $k$-curve and $\mathfrak{W}$ be a weak semistable vertex set in $C^{\mathrm{an}}$. Let $\Upsilon \subset C^{\mathrm{an}}$ be a closed subset that does not contain any points of type IV and is a finite graph. Suppose that there exists a deformation retraction $\lambda : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ with image $\lambda(1,C^{\mathrm{an}}) = \Upsilon$. Then \begin{align*} g(\Sigma(C^{\mathrm{an}},\mathfrak{W})) = g(\Upsilon). \end{align*} \end{prop} \begin{proof} Let $\psi : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ be the deformation retraction associated to the set $\mathfrak{W}$ with image $\Sigma(C^{\mathrm{an}},\mathfrak{W})$ as constructed in Proposition 2.21. As the graph $\Upsilon$ is finite and does not contain any points of type IV, we can find a weak semistable vertex set $\mathfrak{W}'$ such that $\Sigma(C^{\mathrm{an}},\mathfrak{W}')$ contains $\Upsilon$.
We can choose $\mathfrak{W}'$ so that $\mathfrak{W} \subset \mathfrak{W}'$. The restriction of $\lambda$ to $[0,1] \times \Sigma(C^{\mathrm{an}},\mathfrak{W}')$ implies that $\Sigma(C^{\mathrm{an}},\mathfrak{W}')$ and $\Upsilon$ are homotopy equivalent. It follows that $\Sigma(C^{\mathrm{an}},\mathfrak{W})$ is homotopy equivalent to $\Upsilon$ as $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{W})} : [0,1] \times \Sigma(C^{\mathrm{an}},\mathfrak{W}') \to \Sigma(C^{\mathrm{an}},\mathfrak{W}')$ is a deformation retraction onto $\Sigma(C^{\mathrm{an}},\mathfrak{W})$. Hence $g(\Sigma(C^{\mathrm{an}},\mathfrak{W})) = g(\Upsilon)$. \end{proof} \begin{defi} \emph{Let $C$ be a $k$-curve and $\bar{C}$ be a compactification of $C$. Let $f$ denote the cardinality of the finite set of points $\bar{C}(k) \smallsetminus C(k)$. We define} $g^{\mathrm{an}}(C)$ \emph{to be $g(\Sigma(\bar{C}^{\mathrm{an}},\mathfrak{W})) + f$ where $\Sigma(\bar{C}^{\mathrm{an}},\mathfrak{W})$ is the skeleton associated to a weak semistable vertex set $\mathfrak{W}$ for $C^{\mathrm{an}}$.} \end{defi} It can be checked easily that this definition does not depend on the compactification of $C$ chosen. Proposition 2.25 implies that $g^{\mathrm{an}}(C)$ is a well defined invariant of the $k$-curve $C$. We end this section with the following proposition concerning finite graphs. \begin{prop} Let $\phi : C' \to C$ be a finite morphism of smooth projective irreducible curves. Let $H \subset C^{\mathrm{an}}$ be a finite graph which does not contain any points of type IV. We then have that $(\phi^{\mathrm{an}})^{-1}(H)$ is a finite graph. \end{prop} \begin{proof} We may suppose at the outset that the graph $H$ is connected. We show that we may reduce to the case when the extension of function fields $k(C) \hookrightarrow k(C')$ induced by $\phi$ is Galois.
The extension of function fields $k(C) \hookrightarrow k(C')$ decomposes into a pair of extensions $k(C) \hookrightarrow L$ which is separable and $L \hookrightarrow k(C')$ which is purely inseparable. Let $C_1$ denote the smooth projective irreducible curve that corresponds to the function field $L$. The morphisms $C' \to C_1$ and $C'^{\mathrm{an}} \to C_1^{\mathrm{an}}$ are homeomorphisms. It follows that if the preimage of $H$ for the morphism $C_1^{\mathrm{an}} \to C^{\mathrm{an}}$ is a finite graph then $(\phi^{\mathrm{an}})^{-1}(H)$ is a finite graph as well. We may hence suppose that $k(C) \hookrightarrow k(C')$ is separable. Let $L'$ denote the Galois closure of the extension $k(C) \hookrightarrow k(C')$ and $C''$ be the smooth projective irreducible curve that corresponds to the function field $L'$. We have a sequence of morphisms $C'' \to C' \to C$. Let $\psi : C'' \to C'$ denote the induced morphism. If the preimage $H''$ of $H$ for the morphism $C''^{\mathrm{an}} \to C^{\mathrm{an}}$ is a finite graph then its image $\psi^{\mathrm{an}}(H'') = (\phi^{\mathrm{an}})^{-1}(H)$ under the morphism $\psi^{\mathrm{an}} : C''^{\mathrm{an}} \to C'^{\mathrm{an}}$ is a finite graph as well. Indeed, the group $G' := \mathrm{Gal}(k(C'')/k(C'))$ acts on $H''$. The graph $H''$ is defined by combinatorial data, i.e. a finite set of vertices $W \subset C''^{\mathrm{an}}$ and a set of edges $E \subset W \times W$ which can be realized as subspaces of $C''^{\mathrm{an}}$. The group $G'$ must act on the sets $W$ and $E$. It follows that the quotient of $H''$ under the action of the group $G'$ can be described in terms of the $G'$-orbits in $W$ and $E$. Hence $\psi^{\mathrm{an}}(H'')$ is a finite graph. We have thus reduced to the case when the morphism $\phi : C' \to C$ is Galois. Let $\mathfrak{W}$ be a semistable vertex set in $C'^{\mathrm{an}}$.
Let $\mathfrak{V}$ be a weak semistable vertex set in $C^{\mathrm{an}}$ that contains $\phi^{\mathrm{an}}(\mathfrak{W})$ and is such that $\Sigma(C^{\mathrm{an}},\mathfrak{V})$ contains the finite graph $H$. We may suppose in addition that $\mathfrak{V}$ was chosen so that the graph $\Sigma(C^{\mathrm{an}}, \mathfrak{V})$ contains no loop edges. It suffices to prove the proposition for $H = \Sigma(C^{\mathrm{an}},\mathfrak{V})$. We show that there exists a finite graph $H' \subset C'^{\mathrm{an}}$ such that $\phi^{\mathrm{an}}(H') = H$. Let $\mathcal{C}$ denote the set of connected components of the space $C^{\mathrm{an}} \smallsetminus \mathfrak{V}$. As $\mathfrak{V}$ is a weak semistable vertex set, there exists a finite set $S \subset \mathcal{C}$ such that every $A \in S$ is isomorphic to a standard Berkovich punctured open disk of unit radius or a standard open annulus and if $A \in \mathcal{C} \smallsetminus S$ then $A$ is isomorphic to a standard Berkovich open disk. Likewise, let $\mathcal{C}'$ denote the set of connected components of the space $C'^{\mathrm{an}} \smallsetminus \mathfrak{W}$. As $\mathfrak{W}$ is a semistable vertex set, there exists a finite set $S' \subset \mathcal{C}'$ such that every $A \in S'$ is isomorphic to a standard open annulus and if $A \in \mathcal{C}' \smallsetminus S'$ then $A$ is isomorphic to a standard Berkovich open disk. Let $\mathfrak{V}' := (\phi^{\mathrm{an}})^{-1}(\mathfrak{V})$. The morphism $\phi^{\mathrm{an}}$ is surjective, open and closed (cf. 6.1). It follows that the restriction of $\phi^{\mathrm{an}}$ to $C'^{\mathrm{an}} \smallsetminus \mathfrak{V}'$ is a surjective, open and closed morphism onto $C^{\mathrm{an}} \smallsetminus \mathfrak{V}$. Hence if $D'$ is a connected component of the space $C'^{\mathrm{an}} \smallsetminus \mathfrak{V}'$ then there exists a connected component $D$ of the space $C^{\mathrm{an}} \smallsetminus \mathfrak{V}$ such that $\phi^{\mathrm{an}}$ restricts to a surjective morphism from $D'$ onto $D$.
Let $D$ be a connected component of the space $C^{\mathrm{an}} \smallsetminus \mathfrak{V}$ which is not a general Berkovich open ball. We show that if $D'$ is a connected component of the space $C'^{\mathrm{an}} \smallsetminus \mathfrak{V}'$ such that $\phi^{\mathrm{an}}(D') = D$ then $D'$ cannot be a general Berkovich open ball. Suppose that $D'$ is a general Berkovich open ball. There exists a point $q \in C'^{\mathrm{an}}$ such that $D' \cup \{q\}$ is compact. It follows that $D \cup \{\phi^{\mathrm{an}}(q)\}$ is compact. The only elements in $\mathcal{C}$ for which this is possible are general Berkovich open disks, which contradicts our assumption. Let $D \in \mathcal{C}$ be a punctured open disk or open annulus. Let $D'$ be a connected component in $C'^{\mathrm{an}} \smallsetminus \mathfrak{V}'$ such that $\phi^{\mathrm{an}}(D') = D$. There exists a finite set of points $P_{D'}$ such that $D' \smallsetminus P_{D'}$ is the disjoint union of general Berkovich open disks and finitely many general open annuli or punctured Berkovich open disks. Let $\mathcal{C}'_{D'}$ denote the connected components of $D' \smallsetminus P_{D'}$. Let $O$ be a Berkovich open disk in $D' \smallsetminus P_{D'}$. The image $\phi^{\mathrm{an}}(O)$ is a connected open subset of $D$ for which there exists $p \in D$ such that $\phi^{\mathrm{an}}(O) \cup \{p\}$ is compact. It follows from Lemma 2.32 that $\phi^{\mathrm{an}}(O)$ must be a Berkovich open disk in $D$ and hence lies in the complement of the skeleton $\Sigma(D)$. Let $S_{D'}$ be the set of open annuli or punctured open disks in $\mathcal{C}'_{D'}$. The set $S_{D'}$ is finite. If $A \in S_{D'}$ then by Proposition 2.5 in \cite{BPR}, we must have that $\phi^{\mathrm{an}}(\Sigma(A)) \subset \Sigma(D)$. We showed that if $O \in \mathcal{C}'_{D'} \smallsetminus S_{D'}$ then $\phi^{\mathrm{an}}(O) \subset D \smallsetminus \Sigma(D)$.
It follows that $\Sigma(D) \smallsetminus \bigcup_{A \in S_{D'}} \phi^{\mathrm{an}}(\Sigma(A))$ is at most a finite set of points. Let $\Sigma(D') := \bigcup_{A \in S_{D'}} \Sigma(A) \cup P_{D'}$. The set $\Sigma(D')$ is a closed connected subset of $D'$. As $\phi^{\mathrm{an}}$ restricted to $D'$ is closed, its image $\phi^{\mathrm{an}}(\Sigma(D')) = \{ \phi^{\mathrm{an}}(p) | p \in P_{D'} \} \cup \bigcup_{A \in S_{D'}} \phi^{\mathrm{an}}(\Sigma(A))$ is a closed connected subset of $D$. Hence $\{ \phi^{\mathrm{an}}(p) | p \in P_{D'} \} \subset \Sigma(D)$ and $\phi^{\mathrm{an}}(\Sigma(D')) = \Sigma(D)$. For every $D \in S$, let $D'_D$ be a connected component of $C'^{\mathrm{an}} \smallsetminus \mathfrak{V}'$ such that $\phi^{\mathrm{an}}(D'_D) = D$. We showed that there exists $\Sigma(D'_D) \subset D'_D$ such that $\phi^{\mathrm{an}}(\Sigma(D'_D)) = \Sigma(D)$. Setting $H'_0 := \bigcup_{D \in S} \Sigma(D'_D)$, the set $H' := H'_0 \cup \mathfrak{V}'$ is a finite graph. We must have that $\phi^{\mathrm{an}}(H') = H$. Let $G := \mathrm{Gal}(k(C')/k(C))$. The set $\bigcup_{g \in G} g(H')$ is a finite graph and since the morphism $C' \to C$ is Galois, we must have that $(\phi^{\mathrm{an}})^{-1}(H) = \bigcup_{g \in G} g(H')$. \end{proof} \subsection{The Non-Archimedean Poincaré-Lelong Theorem} The non-Archimedean Poincaré-Lelong theorem is used in Sections 6.3 and 6.4. Our treatment follows that of \cite{BPR}. Let $C$ be a smooth projective irreducible $k$-curve. Let $x \in C^{\mathrm{an}}$ be a point of type II. The field $\widetilde{\mathcal{H}(x)}$ (cf. 2.1) is an algebraic function field over $\tilde{k}$. Let $\tilde{C}_x$ denote the smooth projective $\tilde{k}$-curve that corresponds to the field $\widetilde{\mathcal{H}(x)}$. Let $\mathrm{Prin}(C)$ and $\mathrm{Prin}(\tilde{C}_x)$ denote the groups of principal divisors on the curves $C_{/k}$ and $\tilde{C}_{x /\tilde{k}}$ respectively. We define a map $\mathrm{Prin}(C) \to \mathrm{Prin}(\tilde{C}_x)$ as follows.
Let $f \in k(C)$ be a rational function on $C$ and $c$ be any element in $k$ such that $|f(x)| = |c|$. This implies that $c^{-1}f \in \mathcal{H}(x)^0$. Let $f_x$ denote the image of $c^{-1}f$ in $\widetilde{\mathcal{H}(x)}$. Although $f_x \in \widetilde{\mathcal{H}(x)}$ depends on the choice of $c \in k$, the divisor that $f_x$ defines on $\tilde{C}_x$ is independent of $c$. Hence we have a well defined map $\mathrm{Prin}(C) \to \mathrm{Prin}(\tilde{C}_x)$. It can be shown that this map is a homomorphism of groups. A function $F : C^{\mathrm{an}} \to \mathbb{R}$ is said to be piecewise affine if for any geodesic segment $\lambda : [a,b] \to \mathbf{H}(C^{\mathrm{an}})$, the composition $F \circ \lambda : [a,b] \to \mathbb{R}$ is piecewise affine. The outgoing slope of a piecewise affine function $F$ at a point $x \in \mathbf{H}(C^{\mathrm{an}})$ along a tangent direction $v \in T_x$ is defined to be \begin{align*} \delta_vF(x) := \mathrm{lim}_{\epsilon \to 0} (F \circ \lambda)'(\epsilon), \end{align*} where $\lambda : [0,a] \hookrightarrow \mathbf{H}(C^{\mathrm{an}})$ is a representative of the element $v$. It is evident from the definition that $\delta_vF(x)$ depends only on the equivalence class of $\lambda$, i.e. it depends only on the element $v \in T_x$. \begin{thm}\emph{(Non-Archimedean Poincaré-Lelong Theorem)} Let $f \in k(C)$ be a non-zero rational function on the curve $C$ and $S$ denote the set of zeros and poles of $f$. Let $\mathfrak{W}$ be a weak semistable vertex set whose set of $k$-points is the set $S$. Let $\Sigma(C^{\mathrm{an}},\mathfrak{W})$ be the skeleton associated to $\mathfrak{W}$ and $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{W})} :[0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ be the deformation retraction with image $\Sigma(C^{\mathrm{an}},\mathfrak{W})$. We will use $\lambda_e$ to denote the morphism $\lambda_{\Sigma(C^{\mathrm{an}},\mathfrak{W})}(1,\_) : C^{\mathrm{an}} \to C^{\mathrm{an}}$ (cf. Proposition 2.21).
Let $F := -\mathrm{log}|f| : C^{\mathrm{an}} \smallsetminus S \to \mathbb{R}$. Then we have that \begin{enumerate} \item $F = F \circ \lambda_e$. \item $F$ is piecewise affine with integer slopes and $F$ is affine on each edge of $\Sigma(C^{\mathrm{an}},\mathfrak{W})$. \item If $x$ is a type II point of $C^{\mathrm{an}}$ and $v$ is an element of the tangent space $T_x$, then $\mathrm{ord}_{\tilde{v}}(f_x) := \delta_vF(x)$ defines a discrete valuation $\mathrm{ord}_{\tilde{v}}$ on the $\tilde{k}$-function field $\tilde{k}(\tilde{C}_x)$. \item If $x \in C^{\mathrm{an}}$ is of type II or III then $\sum_{v \in T_x} \delta_vF(x) = 0$. \item Let $x \in S$, let $c$ be the ray in $\Sigma(C^{\mathrm{an}},\mathfrak{W})$ whose closure in $C^{\mathrm{an}}$ contains $x$ and let $y \in \mathfrak{W}$ be the other end point of $c$. If $v \in T_y$ is that element of the tangent space $T_y$ for which $c$ is a representative then $\delta_vF(y) = \mathrm{ord}_x(f)$. \end{enumerate} \end{thm} \subsubsection{ An alternate description of the tangent space at a point $x$ of type II} Let $x \in C^{\mathrm{an}}$ be a point of type II. We define the algebraic tangent space at a point of type II and show how this notion relates to the definition we introduced above. Recall that the field $\widetilde{\mathcal{H}(x)}$ is of transcendence degree $1$ over $\tilde{k}$ and uniquely associated to this $\tilde{k}$-function field is a smooth, projective $\tilde{k}$-curve which is denoted $\tilde{C}_x$. \begin{defi} The algebraic tangent space at $x$ \emph{denoted $T^\mathrm{alg}_x$ is the set of closed points of the curve $\tilde{C}_x$}. \end{defi} We now describe a map $B: T_x \to T^\mathrm{alg}_x$. The closed points of the $\tilde{k}$-curve $\tilde{C}_x$ correspond to discrete valuations on the field $\widetilde{\mathcal{H}(x)}$. Given a germ $e_x \in T_x$ and $f \in \widetilde{\mathcal{H}(x)}$, there exists $g \in \mathcal{H}(x)$ such that $|g(x)| = 1$ and $\tilde{g} = f$.
Let $B(e_x)(f)$ be the slope of the function $-\mathrm{log}|g|$ along the germ $e_x$ directed outwards. By the Non-Archimedean Poincaré-Lelong Theorem, $B(e_x)$ defines a discrete valuation on the function field $\widetilde{\mathcal{H}(x)}$ i.e. a closed point of the curve $\tilde{C}_x$. The map $B$ is a well defined bijection. Let $C'$ be a smooth, projective, irreducible curve over the field $k$ and $\rho : C' \to C$ a finite morphism. If $x'$ is a preimage of the point $x$ then it must be of type II as well. The inclusion of non-Archimedean valued complete fields $\mathcal{H}(x) \hookrightarrow \mathcal{H}(x')$ induces an extension of $\tilde{k}$-function fields $\widetilde{\mathcal{H}(x)} \hookrightarrow \widetilde{\mathcal{H}(x')}$. This defines a morphism $d\rho_{x'}^{\mathrm{alg}} : T^\mathrm{alg}_{x'} \to T^\mathrm{alg}_x$ between the algebraic tangent space at $x'$ and the algebraic tangent space at $x$. Recall that we have in addition a map $d\rho_{x'} : T_{x'} \to T_x$. These maps are compatible in the sense that the following diagram is commutative. \setlength{\unitlength}{1cm} \begin{picture}(10,5) \put(4,1){$T^\mathrm{alg}_{x'}$} \put(6.9,1){$T^\mathrm{alg}_x$} \put(4.3,3.5){$T_{x'}$} \put(7,3.5){$T_x$} \put(4.4,3.3){\vector(0,-1){1.75}} \put(7.1,3.3){\vector(0,-1){1.75}} \put(4.8,1.1){\vector(1,0){2}} \put(4.8,3.6){\vector(1,0){2}} \put(5.6,0.7){$d\rho_{x'}^\mathrm{alg}$} \put(5.6,3.2){$d\rho_{x'}$} \put(4.5,2.3){$B$} \put(7.2,2.3){$B$} \end{picture} \subsection{Continuity of lifts} Let $\phi : C' \to C$ be a finite morphism between irreducible smooth projective curves. In Section 3, we construct a pair of deformation retractions $\lambda' : [0,1] \times C'^{\mathrm{an}} \to C'^{\mathrm{an}}$ and $\lambda : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ which are compatible for the morphism $\phi^{\mathrm{an}}$. 
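By compatible we mean, informally, that the retractions commute with the morphism: for every $t \in [0,1]$ and $q \in C'^{\mathrm{an}}$, \begin{align*} \phi^{\mathrm{an}}(\lambda'(t,q)) = \lambda(t,\phi^{\mathrm{an}}(q)). \end{align*} The precise statement is given in Section 3.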
Our method of proof is to first construct a suitable deformation retraction $\lambda$ on $C^{\mathrm{an}}$ and then lift it to a function $\lambda' : [0,1] \times C'^{\mathrm{an}} \to C'^{\mathrm{an}}$ such that for every $q \in C'^{\mathrm{an}}$, the map ${\lambda'}^q : [0,1] \to C'^{\mathrm{an}}$ defined by setting ${\lambda'}^{q}(t) = \lambda'(t,q)$ is continuous. Our goal in this section is to show that given a deformation retraction $\lambda$ and a lift $\lambda'$ as above, the function $\lambda'$ is continuous. \begin{lem} Let $\phi : C' \to C$ be a finite morphism between $k$-curves and suppose in addition that $C$ is normal. Let $\mathfrak{V} \subset C^{\mathrm{an}}$ be a weak semistable vertex set and suppose $\mathfrak{V}'$ is a weak semistable vertex set for $C'^{\mathrm{an}}$ such that $\Sigma(C'^{\mathrm{an}},\mathfrak{V}') = (\phi^{\mathrm{an}})^{-1}(\Sigma(C^{\mathrm{an}},\mathfrak{V}))$. Let $\mathcal{D}$ denote the set of connected components of the space $C^{\mathrm{an}} \smallsetminus \Sigma(C^{\mathrm{an}},\mathfrak{V})$ and likewise, $\mathcal{D}'$ denote the set of connected components of the space $C'^{\mathrm{an}} \smallsetminus \Sigma(C'^{\mathrm{an}},\mathfrak{V}')$. If $D' \in \mathcal{D}'$ then there exists $D \in \mathcal{D}$ such that $\phi^{\mathrm{an}}(D') = D$. Furthermore, the restriction $\phi^{\mathrm{an}}_{|D'} : D' \to D$ is both closed and open. \end{lem} \begin{proof} Let $D' \in \mathcal{D}'$. As $D'$ is connected and $\phi^{\mathrm{an}}$ is continuous, we must have that $\phi^{\mathrm{an}}(D')$ is connected. Furthermore, $\phi^{\mathrm{an}}(D') \subset C^{\mathrm{an}} \smallsetminus \Sigma(C^{\mathrm{an}},\mathfrak{V})$ because $\Sigma(C'^{\mathrm{an}},\mathfrak{V}') = (\phi^{\mathrm{an}})^{-1}(\Sigma(C^{\mathrm{an}},\mathfrak{V}))$. The open subspace $C^{\mathrm{an}} \smallsetminus \Sigma(C^{\mathrm{an}},\mathfrak{V})$ decomposes into the disjoint union $\bigcup_{A \in \mathcal{D}} A$. 
It follows that there exists $D \in \mathcal{D}$ such that $\phi^{\mathrm{an}}(D') \subset D$. Let $A'$ be a connected component of the space $(\phi^{\mathrm{an}})^{-1}(D)$ that contains $D'$. As $A' \subset C'^{\mathrm{an}} \smallsetminus \Sigma(C'^{\mathrm{an}},\mathfrak{V}')$ and $C'^{\mathrm{an}} \smallsetminus \Sigma(C'^{\mathrm{an}},\mathfrak{V}')$ decomposes into the disjoint union $\bigcup_{U \in \mathcal{D}'} U$, we must have that $D' = A'$. The morphism $\phi^{\mathrm{an}}$ is a finite morphism and hence closed. By Lemma 6.1, it is open as well. It follows that $\phi^{\mathrm{an}}$ restricts to a morphism $D' \to D$ which is both open and closed. As $D$ is connected, we must have that $\phi^{\mathrm{an}}(D') = D$. \end{proof} \begin{rem} \emph{Recall that for $a \in k$ such that $|a| < 1$ and $r < 1$, we used $\mathbf{O}(a,r)$ to denote the Berkovich open disk around $a$ of radius $r$ and $\mathbf{B}(a,r)$ to denote the Berkovich closed disk around $a$ of radius $r$. By Proposition 1.6 in \cite{BR}, a basis $\mathcal{B}$ for the open sets of $\mathbf{O}(0,1)$ is given by the sets \begin{align*} \mathbf{O}(a,r),\mbox{ } \mathbf{O}(a,r) \smallsetminus \bigcup_{i \in I} X_i, \mbox{ } \mathbf{O}(0,1) \smallsetminus \bigcup_{i \in I} X_i \end{align*} where $I$ ranges over finite index sets, $a$ ranges over $O(0,1)$, $r \in (0,1)$ and $X_i$ is either a Berkovich closed sub disk of the form $\mathbf{B}(a_i,r_i)$ with $a_i \in O(0,1)$ and $r_i \in [0,1)$ or a point of type I or IV. We classify the elements of this basis by referring to Berkovich open sub disks as sets of form 1, Berkovich open disks from which a finite number of closed disks have been removed as sets of form 2 and the complement of the union of a finite number of closed sub disks as sets of form 3.} \end{rem} \begin{lem} Let $U \subset \mathbf{O}(0,1)$ be a connected open set.
Then $U$ is of the form $\mathbf{O}(a,r) \smallsetminus \bigcup_{j \in J} X_j$ where $a \in O(0,1)$, $r \in (0,1]$, $J$ is an index set and the $X_j$ are Berkovich closed disks or points of type I or IV. In addition, the $X_j$ can be taken to be disjoint from each other. \end{lem} Note that we do not claim every set of the form $\mathbf{O}(a,r) \smallsetminus \bigcup_{j \in J} X_j$ is open, as this is false. For instance the set $\mathbf{H}(\mathbf{O}(0,1)) := \mathbf{O}(0,1) \smallsetminus O(0,1)$ is not open in $\mathbf{O}(0,1)$ as $O(0,1)$ is dense in $\mathbf{O}(0,1)$. \begin{proof} Let $x \in U$ be a point of type I. We define $r_x$ to be the supremum of the set $\{r \in [0,1) | \eta_{x,r} \in U\}$ where the notation $\eta_{x,r}$ was introduced in 2.2.1. We adopt the convention that $\eta_{x,0} := x$. The point $\eta_{x,r_x}$ either does not belong to $\mathbf{O}(0,1)$ or belongs to the set $\bar{U} \smallsetminus U$ where $\bar{U}$ denotes the closure of $U$ in $\mathbf{O}(0,1)$. Indeed, the interval $[0,1)$ can be identified with the set $\{\eta_{x,r} | r \in [0,1)\}$ by $r \mapsto \eta_{x,r}$ and this map is a homeomorphism when $[0,1)$ is equipped with the linear topology and $\{\eta_{x,r} | r \in [0,1)\} \subset \mathbf{O}(0,1)$ is given the subspace topology. We claim that if $x,y \in U$ are points of type I and if $\eta_{x,r_x} \in \mathbf{O}(0,1)$ then $\eta_{x,r_x} = \eta_{y,r_y}$. Since $\mathbf{O}(0,1) \smallsetminus \{\eta_{y,r_y}\}$ decomposes into a disjoint union of open sets and $U$ is a connected set containing $y$, we must have that $U \subset \mathbf{O}(y,r_y)$. Hence $r_x \leq r_y$, which implies that $r_x = r_y$. Furthermore, since $x \in \mathbf{O}(y,r_y)$, we must have that $\eta_{x,r_x} = \eta_{y,r_y}$. Likewise, it can be shown that if $x,y \in U$ are points of type I and $r_x = 1$ then $r_y = 1$. Let $\eta := \eta_{x,r_x}$ where $x \in U$ is a point of type I. Let $J := \bar{U} \smallsetminus (U \cup \{\eta\})$ where $\bar{U}$ is the closure of $U$ in $\mathbf{O}(0,1)$.
If $x \in J$ is not a point of type IV then it can be written as $\eta_{a,r}$ for some $a \in O(0,1)$ and $r \in [0,1)$. Let $X_j$ denote the Berkovich closed disk $\mathbf{B}(a,r)$. Let $y \in \mathbf{B}(a,r)$ be a point of type I. If $y \in U$ then we must have that $U \subset \mathbf{O}(y,r)$ since $\mathbf{O}(0,1) \smallsetminus \{\eta_{a,r}\}$ is the disjoint union of open sets and $U$ is connected. As $r_y$ is constant as $y$ varies along the set of type I points in $U$, it can be deduced that $r_y = r$ and that $x = \eta_{y,r_y}$, which is not possible. Hence $U \subset \mathbf{O}(0,1) \smallsetminus X_j$. Let $x \in U$ be a point of type I. If $j \in J$ then since $j \neq \eta$ and $j$ lies in the closure of $U$, we must have that $j \in \mathbf{O}(x,r_x)$ and $X_j \subset \mathbf{O}(x,r_x)$. It can be verified using the results above that $U = \mathbf{O}(x,r_x) \smallsetminus \bigcup_{j \in J} X_j$, which concludes the proof. \end{proof} \begin{lem} Let $f : \mathbf{O}(0,1) \to \mathbf{O}(0,1)$ be a surjective, open and closed continuous function. \begin{enumerate} \item If $U \in \mathcal{B}$ is an open set of form $i$ where $i$ is 1 or 3 then $f(U)$ is of form $i$ as well. If $U \in \mathcal{B}$ is of form 2 then $f(U)$ is of form 1 or 2. If we suppose in addition that $f$ is bijective and $U$ is of form 2 then $f(U)$ is also of form 2. \item If $Y \subset \mathbf{O}(0,1)$ is a Berkovich closed disk then $f(Y)$ is a Berkovich closed disk. \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item Let $U$ be a Berkovich open sub ball. We show that $f(U)$ is a Berkovich open ball. The closure $\overline{U}$ is a compact subspace of $\mathbf{O}(0,1)$. Observe that $\overline{U} \smallsetminus U$ is a single point, which we denote $p$. As $f$ is continuous, $f(\overline{U}) = f(U) \cup \{f(p)\}$ must be compact as well. As $U$ is connected, $f(U)$ must be a connected open set as well.
By Lemma 2.32, it suffices to determine which connected open sets in $\mathbf{O}(0,1)$ can be compactified by adding a single point of $\mathbf{O}(0,1)$. It can be checked by hand that the only possibility for $f(U)$ is a Berkovich open ball contained in $\mathbf{O}(0,1)$. Let $\{D_1,\ldots,D_m\}$ be a finite collection of Berkovich closed disks or points of types I or IV in $\mathbf{O}(0,1)$ and $U := \mathbf{O}(0,1) \smallsetminus (\bigcup_i D_i)$. The set $U$ is of form 3 and we show that $f(U)$ is also of form 3. As $U$ is connected, the image $f(U)$ is a connected open set as well. By Lemma 2.32, $f(U)$ must be of the form $\mathbf{O}(a,r) \smallsetminus \bigcup_{j \in J} X_j$ where $a \in O(0,1)$, $r \in (0,1]$, $J$ is an index set and the $X_j$ are Berkovich closed disks or points of type I or IV. In addition, the $X_j$ can be taken to be disjoint. We claim that $r = 1$. Suppose $r < 1$. Then the closure of $f(U)$ in $\mathbf{O}(0,1)$, denoted $\overline{f(U)}$, is compact. The $D_i$ are Berkovich closed disks or points of types I or IV and hence compact. As a result, the $f(D_i)$ are compact subsets of $\mathbf{O}(0,1)$. The surjectivity of $f$ implies that $\mathbf{O}(0,1) = \bigcup_i f(D_i) \cup (\overline{f(U)})$ is compact. This is a contradiction and we must hence have that $r = 1$. We claim that the index set $J$ is finite. There exists a finite set of points $S := \{p_1,\ldots,p_n\}$ in $\mathbf{O}(0,1)$ such that $U \cup S$ is closed in $\mathbf{O}(0,1)$. As the map $f$ is closed, $f(U \cup S)$ is closed in $\mathbf{O}(0,1)$. Uniquely associated to each $j \in J$ is an element $x_j \in \mathbf{O}(0,1)$ that lies in the closure of $f(U)$. As $f(U \cup S)$ is a closed set containing $f(U)$, each $x_j$ belongs to $f(U \cup S) \smallsetminus f(U) \subset f(S)$, and distinct indices $j$ yield distinct points $x_j$. Hence we must have that the index set $J$ is finite. This implies that $f(U) \in \mathcal{B}$ and is of form 3. Let $U \in \mathcal{B}$ be an open set of form 2.
As $U$ is contained in a Berkovich open disk $U'$, we must have that its image is a connected open set which is contained in the Berkovich open disk $f(U')$ strictly contained in $\mathbf{O}(0,1)$. Repeating the arguments above, it can be shown that $f(U) \in \mathcal{B}$ is of form either 1 or 2. Suppose that $f$ is bijective and let $U = U' \smallsetminus (\bigcup_i D_i)$ where the $D_i \subset U'$ are Berkovich closed sub disks or points of type I or IV. It follows that $f(U) = f(U') \smallsetminus \bigcup_i f(D_i)$. As the only connected open subsets of $\mathbf{O}(0,1)$ which are open balls from which a finite number of closed subspaces have been removed are of form 2, we conclude that $f(U)$ is of form 2. \item Let $Y$ be a closed disk of radius $r$ in $\mathbf{O}(0,1)$. The closed disk $Y$ can be seen as the union of a family of Berkovich open sub disks in $\mathbf{O}(0,1)$ of radius $r$ and a point. Hence we can write $Y = (\bigcup_{i \in I} V_i) \cup \{q\}$ where $I$ is an index set, the $V_i$ are Berkovich open disks of radius $r$ and $q$ is the unique point such that for every $i$, $V_i \cup \{q\}$ is compact. It follows that $f(V_i \cup \{q\}) = f(V_i) \cup \{f(q)\}$ is a compact set. Let $U_i := f(V_i)$ and $p = f(q)$. By part (1), the $U_i$ are Berkovich open balls contained in $\mathbf{O}(0,1)$. The point $p$ of $\mathbf{O}(0,1)$ such that $U_i \cup \{p\}$ is compact is uniquely determined by $U_i$. Furthermore, this point $p$ determines the radius of the Berkovich open ball $U_i$. It follows that the radii of the Berkovich open balls $U_i$ are all the same. Let $t$ be the radius of the Berkovich open balls $U_i$. Let $X$ denote the Berkovich closed ball corresponding to the point $p$. The radius of $X$ is $t$. The tangent space at $p$ is in bijection with the set consisting of the Berkovich open balls of radius $t$ contained in $X$ and the open annulus $\mathbf{O}(0,1) \smallsetminus X$.
Likewise, the tangent space at $q$ is in bijection with the set consisting of the Berkovich open balls $V_i$ of radius $r$ contained in $Y$ and the open annulus $\mathbf{O}(0,1) \smallsetminus Y$. The tangent space at $q$ surjects onto the tangent space at $p$. Furthermore, for every $i \in I$, the image $U_i$ of the Berkovich open ball $V_i$ is a Berkovich open ball of radius $t$ contained in $X$. Hence if $U \subset X$ is a Berkovich open disk of radius $t$ then there exists $j \in I$ such that $f(V_j) = U$. It follows that $f(Y) = X$. \end{enumerate}  \end{proof} \begin{lem} Let $a \in k$ and $r$ be a positive real number belonging to $|k^*|$. Let $\mathbf{B}(a,r)$ denote the Berkovich closed disk around $a$ of radius $r$ and let $\sigma : \mathbf{B}(a,r) \to \mathbf{B}(a,r)$ be an automorphism of analytic spaces. If $W \subset \mathbf{B}(a,r)$ is a Berkovich open disk of radius $s$ with $0 < s < r$ then $\sigma(W)$ is a Berkovich open disk of radius $s$. Likewise, if $W$ is a Berkovich closed disk of radius $s$ with $0 \leq s < r$ then $\sigma(W)$ is a Berkovich closed disk of radius $s$. \end{lem} \begin{proof} As $r \in |k^*|$, we can suppose that $r = 1$ and $a = 0$. We choose coordinates and write $\mathbf{B}(0,1) = \mathcal{M}(k\{T\})$. The automorphism $\sigma$ induces an automorphism $\sigma' : k\{T\} \to k\{T\}$ of affinoid algebras. By the Weierstrass preparation theorem, we must have that $\sigma'(T) = f(T)u$ where $f(T) = c(T - a_1)\cdots(T - a_m)$ is a polynomial in $T$ with $c \in k^*$ and $u$ is an invertible element in $k\{T\}$. As $\sigma'$ is an automorphism, we must have that $|c| = 1$ and that $f(T)$ is of degree $1$. It follows that $f(T) = c(T - b)$ for some $b \in B(0,1)$. We now show that if $W$ is a Berkovich open sub ball around a point $x \in B(0,1)$ of radius $s$ then $\sigma(W)$ is a Berkovich open ball around $\sigma(x)$ of radius $s$. By definition, $W = \{p \in \mathbf{B}(0,1)| |(T-x)(p)| < s \}$. It follows that $\sigma(W) = \{q \in \mathbf{B}(0,1) \,|\, |((1/(cu))T - (x - b))(q)| < s\}$.
As $|cu| = 1$, the claim follows. The same argument applies when $W$ is the closed disk $\{p \in \mathbf{B}(0,1)| |(T-x)(p)| \leq s \}$. \end{proof}  We make use of the following notation in the statements that follow. Let $\phi : C' \to C$ be a finite morphism between smooth projective curves. Let $\mathfrak{W}', \mathfrak{W}$ be weak semistable vertex sets for $C'^{\mathrm{an}}$ and $C^{\mathrm{an}}$ respectively such that $\Sigma(C'^{\mathrm{an}},\mathfrak{W}') = (\phi^{\mathrm{an}})^{-1}(\Sigma(C^{\mathrm{an}},\mathfrak{W}))$. Let $\mathcal{D}'$ denote the set of connected components of the space $C'^{\mathrm{an}} \smallsetminus \Sigma(C'^{\mathrm{an}},\mathfrak{W}')$. If $D' \in \mathcal{D}'$ then $D'$ is isomorphic to the Berkovich unit ball $\mathbf{O}(0,1)$ and we identify $D'$ with $\mathbf{O}(0,1)$ via this isomorphism. Likewise, let $\mathcal{D}$ denote the set of connected components of the space $C^{\mathrm{an}} \smallsetminus \Sigma(C^{\mathrm{an}},\mathfrak{W})$. \begin{lem} Let $\phi : C' \to C$ be a finite morphism between irreducible projective smooth $k$-curves. Let $\mathfrak{V} \subset \mathfrak{W}$ be weak semistable vertex sets of $C^{\mathrm{an}}$. Let $\mathfrak{W}' \subset C'^{\mathrm{an}}$ be a weak semistable vertex set such that $\Sigma(C'^{\mathrm{an}},\mathfrak{W}') = (\phi^{\mathrm{an}})^{-1}(\Sigma(C^{\mathrm{an}},\mathfrak{W}))$. Let $\lambda^\mathfrak{W}_{\Sigma(C^{\mathrm{an}},\mathfrak{V})} : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ be the deformation retraction constructed in Proposition 2.21 whose image is $\Sigma(C^{\mathrm{an}},\mathfrak{W})$. Let $\lambda' : [0,1] \times C'^{\mathrm{an}} \to C'^{\mathrm{an}}$ be a function such that for every $q \in C'^{\mathrm{an}}$, the path ${\lambda'}^q : [0,1] \to C'^{\mathrm{an}}$ defined by $t \mapsto {\lambda'}(t,q)$ is continuous and $\lambda'^{q}(1) \in \Sigma(C'^{\mathrm{an}},\mathfrak{W}')$.
Furthermore, if $\phi^{\mathrm{an}}(q) = p$ then $\lambda'^q$ is the unique path starting from $q$ such that $\phi^{\mathrm{an}} \circ \lambda'^q = (\lambda^\mathfrak{W}_{\Sigma(C^{\mathrm{an}},\mathfrak{V})})^{p}$. Let $D' \in \mathcal{D}'$ and $x_1,x_2 \in D'$. There exists $r \in [0,1]$ such that $\lambda'^{x_1}(r), \lambda'^{x_2}(r) \in D'$ and $\lambda'^{x_1}_{|[r,1]} = \lambda'^{x_2}_{|[r,1]}$. \end{lem} We simplify notation and write $\lambda$ in place of $\lambda^\mathfrak{W}_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}$. \begin{proof} Recall that when constructing the deformation retraction $\lambda$, we identified every $D \in \mathcal{D}$ with the standard Berkovich open unit disk. Let $D'$ be as in the statement of the lemma. By Lemma 2.30, there exists $D \in \mathcal{D}$ such that $\phi^{\mathrm{an}}(D') = D$. If $\overline{D}$ is the closure of $D$ in $C^{\mathrm{an}}$ then $\overline{D} \smallsetminus D$ is a single point $\eta$. By construction of the deformation retraction $\lambda$ there exists $s \in [0,1]$ such that for every $y \in D$, $\lambda(s,y) = \eta$ and the restriction $[0,s) \times D \to D$ given by $(t,x) \mapsto \lambda(t,x)$ is well defined. We must hence have that the restriction $\lambda' : [0,s) \times D' \to D'$ is well defined. If $\overline{D'}$ is the closure of $D'$ in $C'^{\mathrm{an}}$ then $\overline{D'} \smallsetminus D'$ is a single point $\eta'$. Furthermore, for every $x \in D'$, $\lambda'(s,x) = \eta'$. If $U'$ is a simple neighborhood of the point $\eta'$ then the tangent space $T_{\eta'}$ is in bijection with the connected components of the space $U' \smallsetminus \eta'$. The set $D'$ corresponds to a single element of the tangent space at $\eta'$. It follows that for some $r \in [0,s)$, $\lambda'^{x_1}(r) \in D' \cap \lambda'^{x_2}([0,s])$. Let $q := \lambda'^{x_1}(r)$ and $p := \phi^{\mathrm{an}}(q)$. By construction of the deformation retraction $\lambda$, $\lambda(t,p) = p$ for every $t \in [0,r]$. 
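In symbols, the hypotheses on $\lambda'$ assert that for every $q \in C'^{\mathrm{an}}$ with $p := \phi^{\mathrm{an}}(q)$, the path $\lambda'^q$ is the unique continuous lift of $\lambda^p$ starting at $q$, that is,
\begin{align*}
\lambda'^q(0) = q, \qquad \phi^{\mathrm{an}} \circ \lambda'^q = \lambda^p, \qquad \lambda'^q(1) \in \Sigma(C'^{\mathrm{an}},\mathfrak{W}').
\end{align*}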
Also, the deformation retraction $\lambda$ satisfies the following property: for every $a, b \in [0,1]$ with $a < b$ and every $y \in C^{\mathrm{an}}$, $\lambda(b,\lambda(a,y)) = \lambda(b,y)$. It follows that $\lambda'^q_{|[r,1]}$, $\lambda'^{x_1}_{|[r,1]}$ and $\lambda'^{x_2}_{|[r,1]}$ are all lifts of the path $\lambda^p_{|[r,1]}$. As the lift of the path $\lambda^p_{|[r,1]}$ starting from $q$ is unique, we must have that $\lambda'^{q}_{|[r,1]} = \lambda'^{x_1}_{|[r,1]} = \lambda'^{x_2}_{|[r,1]}$. \end{proof} \begin{prop} Let $\phi : C' \to C$ be a finite morphism between irreducible projective smooth $k$-curves. Assume in addition that the extension of function fields $k(C) \hookrightarrow k(C')$ is a finite Galois extension and let $G := \mathrm{Gal}(k(C')/k(C))$. Let $\mathfrak{V} \subset \mathfrak{W}$ be weak semistable vertex sets of $C^{\mathrm{an}}$. Let $\mathfrak{W}' \subset C'^{\mathrm{an}}$ be a weak semistable vertex set such that $\Sigma(C'^{\mathrm{an}},\mathfrak{W}') = (\phi^{\mathrm{an}})^{-1}(\Sigma(C^{\mathrm{an}},\mathfrak{W}))$. Let $\lambda^\mathfrak{W}_{\Sigma(C^{\mathrm{an}},\mathfrak{V})} : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ be the deformation retraction as constructed in Proposition 2.21 whose image is $\Sigma(C^{\mathrm{an}},\mathfrak{W})$. Let $\lambda' : [0,1] \times C'^{\mathrm{an}} \to C'^{\mathrm{an}}$ be a function such that for every $q \in C'^{\mathrm{an}}$, the path ${\lambda'}^q : [0,1] \to C'^{\mathrm{an}}$ defined by $t \mapsto {\lambda'}(t,q)$ is continuous and also that the following diagram commutes.
\setlength{\unitlength}{1cm} \begin{picture}(10,5) \put(3.5,1){$[0,1] \times C^{\mathrm{an}}$} \put(8,1){$C^{\mathrm{an}}$} \put(3.5,3.5){$[0,1] \times C'^{\mathrm{an}}$} \put(8,3.5){$C'^{\mathrm{an}}$} \put(4.4,3.3){\vector(0,-1){1.75}} \put(8.1,3.3){\vector(0,-1){1.75}} \put(5.4,1.1){\vector(1,0){2.55}} \put(5.4,3.6){\vector(1,0){2.55}} \put(5.7,0.7){$\lambda^\mathfrak{W}_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}$} \put(6.2,3.2){$\lambda'$} \put(4.5,2.3){$id \times \phi^{\mathrm{an}}$} \put(8.2,2.3){$\phi^{\mathrm{an}}$}. \end{picture} We suppose in addition that for every $q \in C'^{\mathrm{an}}$, the path $\lambda'^{q}$ is the unique lift starting from $q$ of the path $(\lambda^\mathfrak{W}_{\Sigma(C^{\mathrm{an}},\mathfrak{V})})^{\phi^{\mathrm{an}}(q)}$ and also that $\lambda'$ is $G$-invariant, i.e., for every $g \in G$, $t \in [0,1]$ and $x \in C'^{\mathrm{an}}$, $g(\lambda'(t,x)) = \lambda'(t,g(x))$. The following statements are then true. \begin{enumerate} \item Let $\mathcal{D}'$ denote the set of connected components of the space $C'^{\mathrm{an}} \smallsetminus \Sigma(C'^{\mathrm{an}},\mathfrak{W}')$. If $D' \in \mathcal{D}'$ then $D'$ is isomorphic to the Berkovich unit ball $\mathbf{O}(0,1)$ and we identify $D'$ with $\mathbf{O}(0,1)$ via this isomorphism. By Lemma 2.30, the group $G$ has a well defined action on the set $\mathcal{D}'$. Let $H \subset G$ be the sub group which fixes $D'$. Let $W$ be a Berkovich closed or open ball strictly contained in $D'$. There exists a Berkovich closed sub ball $\mathbf{B}(0,r) \subset D'$ with $r \in |k^*|$ such that $H$ stabilizes $\mathbf{B}(0,r)$ and $W \subset \mathbf{B}(0,r)$. \item The map $\lambda' : [0,1] \times C'^{\mathrm{an}} \to C'^{\mathrm{an}}$ is continuous. \end{enumerate} \end{prop} Over the course of the proof, we simplify notation and write $\lambda$ in place of $\lambda^\mathfrak{W}_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}$.
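Explicitly, the commutativity of the diagram above states that
\begin{align*}
\phi^{\mathrm{an}}(\lambda'(t,q)) = \lambda(t,\phi^{\mathrm{an}}(q)) \qquad \text{for every } t \in [0,1] \text{ and } q \in C'^{\mathrm{an}}.
\end{align*}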
The hypothesis that $\lambda'$ is $G$-invariant is redundant, as it can be deduced from the uniqueness of the lifts. \begin{proof} \begin{enumerate} \item Let $\mathcal{D}$ denote the set of connected components of the space $C^{\mathrm{an}} \smallsetminus \Sigma(C^{\mathrm{an}},\mathfrak{W})$. In the proof of Proposition 2.21, we constructed the deformation retraction $\lambda$ by identifying every $D \in \mathcal{D}$ with the Berkovich open unit disk centered at $0$. By Lemma 6.1, the morphism $\phi^{\mathrm{an}}$ is open. As $\phi$ is a finite morphism, the morphism $\phi^{\mathrm{an}}$ is closed as well. Let $D' \in \mathcal{D}'$. By Lemma 2.30, there exists $D \in \mathcal{D}$ such that $\phi^{\mathrm{an}}(D') = D$. We also showed in 2.30 that $D'$ is a connected component of the space $(\phi^{\mathrm{an}})^{-1}(D)$ and hence the morphism $\phi^{\mathrm{an}}$ restricts to a closed and open morphism from $D'$ onto $D$. We use $\phi^{\mathrm{an}}_{D'}$ to denote the restriction of $\phi^{\mathrm{an}}$ to $D'$. Let $x \in D$ and $R := (\phi_{D'}^{\mathrm{an}})^{-1}(x) = \{ y_1,\ldots,y_m\}$. Recall that $H$ is the sub group of $G$ which stabilizes $D'$. It follows that $R = \{h(y_1)\}_{h \in H}$. There exists $s \in [0,1]$ such that for every $z \in D$, $\lambda(t,z) \in D$ for $t \in [0,s)$ and $\lambda(s,z) \in \overline{D} \smallsetminus D$. For every $i$, let $p_{y_i}$ denote the path $[0,s] \to \overline{D'}$ defined by $t \mapsto \lambda'(t,y_i)$. The paths $p_{y_i}$ are each lifts of the path $\lambda^x : [0,s] \to \overline{D}$ defined by $t \mapsto \lambda(t,x)$. Observe that $p_{y_i}(s)$ is the unique point of $\overline{D'} \smallsetminus D'$. By Lemma 2.35, there exists $r \in [0,s)$ such that for every $y,y' \in R$, ${p_{y'}}_{|[r,s)} = {p_{y}}_{|[r,s)}$. Let $r' \in [r,s)$ be such that $\lambda^x(r') = \eta_{0,u}$ (cf. 2.1.1) for some $u \in |k^*|$ and $\mathbf{B}(0,u)$ contains $\phi_{D'}^{\mathrm{an}}(W)$.
Since $p_{y_i}(r') = p_{y_j}(r')$ for every $y_i,y_j \in R$ and the paths $p_y$ for $y \in R$ are Galois conjugates of each other, we must have that $q := p_{y}(r')$ is fixed by $H$ and hence $\{q\} = (\phi_{D'}^{\mathrm{an}})^{-1}(\lambda(r',x))$. We simplify notation and use $X$ to denote the closed disk $\mathbf{B}(0,u)$. The action of the group $H$ restricts to the space $Y := (\phi^{\mathrm{an}}_{D'})^{-1}(X)$. We claim that $Y$ is a Berkovich closed disk in $D'$. Let $Y'$ denote the complement of $Y$ in $D'$. The image $\phi^{\mathrm{an}}(Y')$ is the complement of $X$ in $D$ as $\phi^{\mathrm{an}}_{D'}$ is surjective. Let $X' := D \smallsetminus X$. We claim that the space $Y'$ is a connected open set. The morphism $\phi^{\mathrm{an}}_{D'}$ is clopen and hence maps each connected component of $Y'$ onto the complement of $X$ in $D$. Let $Z$ be a connected component of $Y'$. Lemmas 2.32 and 2.33 can be used to show that $Z$ must be of the form $\mathbf{O}(0,1) \smallsetminus \bigcup_{j \in J} X_j$ where the $X_j$ are Berkovich closed disks or points of type I or IV. As the morphism $\phi^{\mathrm{an}}$ restricts to a finite morphism from $Z$ onto $X'$, it can be deduced that there can be only a finite number of points in $\overline{Y'} \smallsetminus Y'$ where $\overline{Y'}$ denotes the closure of $Y'$ in $D'$. We must hence have that each connected component of $Y'$ belongs to $\mathcal{B}$ and is of form 3. As the union of open sets in $\mathcal{B}$ of form 3 is connected, we conclude that $Y'$ is a connected open set in $\mathcal{B}$ of form 3. It must be the complement of the union of a finite number of Berkovich closed disks or points of type I or IV. The complement of $Y'$ is the space $Y$ and $Y = (\phi^{\mathrm{an}}_{D'})^{-1}(X)$. We showed that there exists a point $q \in Y$ which is $H$-invariant. As $\phi^{\mathrm{an}}_{D'}$ is clopen, every connected component of $Y$ must contain the point $q$ and hence $Y$ is connected.
If the union of a finite number of Berkovich closed disks and points of types I or IV in $D'$ is connected then that union must be a Berkovich closed disk as well. Hence, $Y$ is a Berkovich closed disk. The morphism $\phi^{\mathrm{an}}_{D'}$ restricts to a finite morphism from $Y$ onto $X$. As $k$ is algebraically closed, the radius of $Y$ belongs to the group $|k^*|$. This proves the first part of the proposition. \item We make use of the notation introduced in the proof of part 1 of the proposition. Let $W \subset C'^{\mathrm{an}}$ be a connected open set. We must show that $\lambda'^{-1}(W)$ is an open subset of $[0,1] \times C'^{\mathrm{an}}$. We divide the proof into two cases: when $W \cap \Sigma(C'^{\mathrm{an}},\mathfrak{W}')$ is empty and when it is non-empty. We treat the first case: $W \cap \Sigma(C'^{\mathrm{an}},\mathfrak{W}') = \emptyset$. As $W$ is connected, there exists $D' \in \mathcal{D}'$ such that $W \subset D'$. By 2.30, there exists $D \in \mathcal{D}$ such that $\phi^{\mathrm{an}}(D') = D$. We may suppose further that $W$ belongs to $\mathcal{B}$. It must be of form 1, 2 or 3. Suppose that $W$ is a Berkovich open disk contained in $D'$. Let $V := \phi^{\mathrm{an}}(W) \subset D$. By Lemma 2.33, $V$ is a Berkovich open disk in $D$. By construction of $\lambda$, we must have that there exists $s \in [0,1]$ such that $\lambda^{-1}(V) = [0,s) \times V$. By assumption, $\lambda'$ is compatible with $\lambda$ in that the following diagram is commutative.
\setlength{\unitlength}{1cm} \begin{picture}(10,5) \put(3.5,1){$[0,1] \times C^{\mathrm{an}}$} \put(8,1){$C^{\mathrm{an}}$} \put(3.5,3.5){$[0,1] \times C'^{\mathrm{an}}$} \put(8,3.5){$C'^{\mathrm{an}}$} \put(4.4,3.3){\vector(0,-1){1.75}} \put(8.1,3.3){\vector(0,-1){1.75}} \put(5.4,1.1){\vector(1,0){2.55}} \put(5.4,3.6){\vector(1,0){2.55}} \put(6.2,0.7){$\lambda$} \put(6.2,3.2){$\lambda'$} \put(4.5,2.3){$id \times \phi^{\mathrm{an}}$} \put(8.2,2.3){$\phi^{\mathrm{an}}$}. \end{picture} It follows that $\lambda'^{-1}(W) \subset [0,s) \times (\phi^{\mathrm{an}})^{-1}(V)$. Let $A_1,\ldots,A_m$ denote the connected components of $(\phi^{\mathrm{an}})^{-1}(V)$. Observe that $(\phi^{\mathrm{an}})^{-1}(V) = \bigcup_{\sigma \in G} \sigma(W)$. We suppose without loss of generality that $W \subset A_1$. As $A_1$ is connected, we must have that $A_1 \subset D'$. We claim that $W = A_1$. By 2.30, if $\sigma \in G$ then $\sigma(D') \in \mathcal{D}'$. Let $H := \{ \sigma \in G | \sigma(D') = D' \}$. We must have that $A_1 \subset \bigcup_{\sigma \in H} \sigma(W)$ and $A_1 \cap \sigma(W) = \emptyset$ if $\sigma \notin H$. By part (1) of the proposition and Lemma 2.34, if $\sigma \in H$ then $\sigma(W)$ is a Berkovich open ball whose radius is equal to that of $W$. It follows that one of the connected components of $\bigcup_{\sigma \in H} \sigma(W)$ is the ball $W$. Hence $W = A_1$. Observe that if $A_i$ is a connected component and $x \in A_i$ then for every $t \in [0,s)$, we must have that $\lambda'(t,x) \in A_i$. Indeed, the path $\lambda'^x : [0,s) \to C'^{\mathrm{an}}$ defined by $t \mapsto \lambda'(t,x)$ is contained in $(\phi^{\mathrm{an}})^{-1}(V)$. As $\{A_j\}$ is the set of connected components of $(\phi^{\mathrm{an}})^{-1}(V)$ we must have that $\lambda'(t,x) \in A_i$ for every $t \in [0,s)$. It follows from this observation that $\lambda'^{-1}(W) = [0,s) \times W$. 
Let $W \subseteq D'$ be a Berkovich open ball and $Y_1,\ldots,Y_m$ be disjoint Berkovich closed sub disks of $W$ or points of type I or IV. Let $Z := \bigcup_{1 \leq i \leq m} Y_i$. We show that $\lambda'^{-1}(W \smallsetminus Z)$ is an open subset of $[0,1] \times C'^{\mathrm{an}}$. We have already shown that there exists $s \in [0,1]$ such that $\lambda'^{-1}(W) = [0,s) \times W$. Hence $\lambda'^{-1}(W \smallsetminus Z) = ([0,s) \times W) \smallsetminus \lambda'^{-1}(Z)$. As $Z$ is the disjoint union of the $Y_i$, we must have that $\lambda'^{-1}(Z) = \bigcup_i \lambda'^{-1}(Y_i)$. It hence suffices to show that if $Y$ is a Berkovich closed disk contained in $D'$ then there exists $t \in [0,1]$ such that $\lambda'^{-1}(Y) = [0,t] \times Y$. By Lemma 2.33, the image of $Y$ under the morphism $\phi^{\mathrm{an}}_{D'}$ is a Berkovich closed disk or a point of type I or IV. We can use essentially the same argument as above, wherein we showed that the preimage of a Berkovich open disk $O$ in $D'$ under the function $\lambda'$ is an open subset of $[0,1] \times C'^{\mathrm{an}}$ of the form $[0,s') \times O$, to show that $\lambda'^{-1}(Y) = [0,t] \times Y$. To conclude the proof, we treat the case when $W \subset C'^{\mathrm{an}}$ is a connected open set that intersects $\Sigma(C'^{\mathrm{an}},\mathfrak{W}')$. Let $S := W \cap \Sigma(C'^{\mathrm{an}},\mathfrak{W}')$ and let $\mathcal{D}'_S := \{D' \in \mathcal{D}' | \overline{D'} \cap S \neq \emptyset \}$. As $W$ is connected, for every $D' \in \mathcal{D}'_S$, $W \cap D'$ is a non-empty connected open set which is the union of open sets in $\mathcal{B}$ of form 3. It suffices to prove that $\lambda'^{-1}(W)$ is open under the assumption that for every $D' \in \mathcal{D}'_S$, the intersection $W \cap D'$ belongs to $\mathcal{B}$ and is of form 3. We have the equality $W = \bigcup_{D' \in \mathcal{D}'_S} (W \cap D') \cup (W \cap \Sigma(C'^{\mathrm{an}},\mathfrak{W}'))$.
It follows that $\lambda'^{-1}(W) = \bigcup_{D' \in \mathcal{D}'_S} \lambda'^{-1}(W \cap D') \cup \lambda'^{-1}(\Sigma(C'^{\mathrm{an}},\mathfrak{W}') \cap W)$. We showed that for every $D' \in \mathcal{D}'_S$, $\lambda'^{-1}(W \cap D')$ is an open subset of $[0,1] \times D'$. Furthermore, it can be checked that $[0,1] \times W \subset \lambda'^{-1}(W)$. By construction, $\lambda'^{-1}(\Sigma(C'^{\mathrm{an}},\mathfrak{W}') \cap W) = ([0,1] \times (\Sigma(C'^{\mathrm{an}},\mathfrak{W}') \cap W)) \cup (\{1\} \times \bigcup_{D' \in \mathcal{D}'_S} D')$. The set $[0,1] \times W$ is an open subset of $[0,1] \times C'^{\mathrm{an}}$ that is contained in $\lambda'^{-1}(W)$ and is a neighborhood of every point of $[0,1] \times (\Sigma(C'^{\mathrm{an}},\mathfrak{W}') \cap W)$. It remains to show that if $x \in D'$ for some $D' \in \mathcal{D}'_S$ then there exists an open subset of $[0,1] \times C'^{\mathrm{an}}$ contained in $\lambda'^{-1}(W)$ that is a neighborhood of $(1,x)$. We proceed as follows. As $W \cap D'$ is a connected open subset in $\mathcal{B}$ of form 3, it can be verified using the arguments above (case 1) that there exists $r_{W \cap D'} \in [0,1)$ such that $(r_{W \cap D'},1] \times D' \subset \lambda'^{-1}(W)$. As $(r_{W \cap D'},1] \times D'$ is an open subset of $[0,1] \times C'^{\mathrm{an}}$ that contains $\{1\} \times D'$, we conclude the claim and the proof. \end{enumerate} \end{proof} \section{Compatible deformation retractions} Our goal in this section is to prove the existence of a pair of compatible deformation retractions in the case of a finite morphism between curves. The precise statement is the following. \begin{thm} Let $C$ and $C'$ be smooth projective irreducible $k$-curves and $\phi : C' \to C$ be a finite morphism.
There exists a pair of deformation retractions \begin{align*} \psi : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}} \end{align*} and \begin{align*} \psi' : [0,1] \times C'^{\mathrm{an}} \to C'^{\mathrm{an}} \end{align*} with the following properties. \begin{enumerate} \item The images $\Upsilon_{C'^{\mathrm{an}}} := \psi'(1,C'^{\mathrm{an}})$ and $\Upsilon_{C^{\mathrm{an}}} := \psi(1,C^{\mathrm{an}})$ are closed subspaces of $C'^{\mathrm{an}}$ and $C^{\mathrm{an}}$ respectively, each with the structure of a connected, finite metric graph. Furthermore, we have that $\Upsilon_{C'^{\mathrm{an}}} = (\phi^{\mathrm{an}})^{-1}(\Upsilon_{C^{\mathrm{an}}})$. \item There exist weak semistable vertex sets $\mathfrak{A}' \subset C'^{\mathrm{an}}$ and $\mathfrak{A} \subset C^{\mathrm{an}}$ such that $\Upsilon_{C'^{\mathrm{an}}} = \Sigma(C'^{\mathrm{an}},\mathfrak{A}')$ and $\Upsilon_{C^{\mathrm{an}}} = \Sigma(C^{\mathrm{an}},\mathfrak{A})$. \item The deformation retractions $\psi$ and $\psi'$ are \textbf{compatible}, i.e., the following diagram is commutative. \setlength{\unitlength}{1cm} \begin{picture}(10,5) \put(2.5,1){$[0,1] \times C^{\mathrm{an}}$} \put(6,1){$C^{\mathrm{an}}$} \put(2.5,3.5){$[0,1] \times C'^{\mathrm{an}}$} \put(6,3.5){$C'^{\mathrm{an}}$} \put(3.4,3.3){\vector(0,-1){1.75}} \put(6.1,3.3){\vector(0,-1){1.75}} \put(4.4,1.1){\vector(1,0){1.55}} \put(4.4,3.6){\vector(1,0){1.55}} \put(4.7,0.7){$\psi$} \put(4.7,3.2){$\psi'$} \put(3.5,2.3){$id \times \phi^{\mathrm{an}}$} \put(6.2,2.3){$\phi^{\mathrm{an}}.$} \end{picture} \end{enumerate} \end{thm} \begin{rem} In \cite{HL}, Hrushovski and Loeser construct compatible deformation retractions in greater generality. Let $\pi : V' \to V$ be a finite morphism between quasi-projective varieties $V'$ and $V$ over a non-Archimedean non-trivially valued field $F$.
There exists a generalised real interval $I$ (\cite{HL}, Section 3.9) and a pair of deformation retractions $H : I \times V^{\mathrm{an}} \to V^{\mathrm{an}}$ and $H' : I \times V'^{\mathrm{an}} \to V'^{\mathrm{an}}$ which are compatible in the sense defined above. This result follows from Remark 10.1.2 (2) and Corollary 13.1.6 in loc. cit. \end{rem} To prove Theorem 3.1, we adapt the strategy employed in Section 7 of \cite{HL}. Our method of proof is as follows. We begin by proving the theorem for a finite morphism $\phi : C \to \mathbb{P}^1_k$ where $C$ is a smooth, projective curve and the extension of function fields $k(\mathbb{P}^1_k) \hookrightarrow k(C)$ induced by the morphism $\phi$ is Galois. We then use this result to prove the theorem for a finite morphism $\phi : C' \to C$ between smooth projective curves. We begin with the following lemma which provides us with compatible weak semistable vertex sets for a finite morphism between smooth projective irreducible curves. \begin{lem} Let $\phi : C' \to C$ be a finite morphism between smooth projective irreducible curves. Let $S \subset C^{\mathrm{an}}$ be a finite set of points, none of which is of type IV. There exists a weak semistable vertex set $\mathfrak{W} \subset C^{\mathrm{an}}$ such that $\Sigma(C^{\mathrm{an}},\mathfrak{W})$ contains $S$, $\mathfrak{V} := (\phi^{\mathrm{an}})^{-1}(\mathfrak{W})$ is a weak semistable vertex set of $C'^{\mathrm{an}}$ and $\Sigma(C'^{\mathrm{an}}, \mathfrak{V}) = (\phi^{\mathrm{an}})^{-1}(\Sigma(C^{\mathrm{an}},\mathfrak{W}))$. \end{lem} \begin{proof} Let $\mathfrak{W}_0$ be a weak semistable vertex set for $C^{\mathrm{an}}$ such that $\Sigma(C^{\mathrm{an}},\mathfrak{W}_0)$ contains $S$. As the morphism $\phi^{\mathrm{an}}$ is finite, the preimage $(\phi^{\mathrm{an}})^{-1}(\Sigma(C^{\mathrm{an}},\mathfrak{W}_0))$ is a finite graph which does not contain a point of type IV (Proposition 2.27).
Let $\mathfrak{V}_0$ be a weak semistable vertex set such that $\Sigma(C'^{\mathrm{an}},\mathfrak{V}_0)$ contains $(\phi^{\mathrm{an}})^{-1}(\Sigma(C^{\mathrm{an}},\mathfrak{W}_0))$. Let $\mathfrak{W}_1$ be a weak semistable vertex set such that the skeleton $\Sigma(C^{\mathrm{an}},\mathfrak{W}_1)$ contains $\phi^{\mathrm{an}}(\Sigma(C'^{\mathrm{an}},\mathfrak{V}_0))$. We claim that the preimage $A := (\phi^{\mathrm{an}})^{-1}(\Sigma(C^{\mathrm{an}},\mathfrak{W}_1))$ is connected. Let $A_1,\ldots,A_m$ denote the connected components of $A$, with $A_1$ the component containing $\Sigma(C'^{\mathrm{an}},\mathfrak{V}_0)$. The morphism $\phi^{\mathrm{an}}$ is an open and closed morphism (cf. Lemma 6.1). It follows that $\phi^{\mathrm{an}}$ restricts to a surjective map from each of the $A_i$ onto $\Sigma(C^{\mathrm{an}},\mathfrak{W}_1)$. However, since $A_1$ contains the set $(\phi^{\mathrm{an}})^{-1}(\Sigma(C^{\mathrm{an}},\mathfrak{W}_0))$, we must have that $A = A_1$. It follows that $A$ is a connected graph that contains the skeleton $\Sigma(C'^{\mathrm{an}},\mathfrak{V}_0)$, using which it can be checked that $C'^{\mathrm{an}} \smallsetminus A$ is the disjoint union of sets, each of which is isomorphic to a Berkovich open ball. We claim that these open balls have radii belonging to $|k^*|$. Let $D'$ be a connected component of $C'^{\mathrm{an}} \smallsetminus A$. As the morphism $\phi^{\mathrm{an}}$ is clopen and $A = (\phi^{\mathrm{an}})^{-1}(\Sigma(C^{\mathrm{an}},\mathfrak{W}_1))$, there exists a connected component $D$ of $C^{\mathrm{an}} \smallsetminus \Sigma(C^{\mathrm{an}},\mathfrak{W}_1)$ such that the morphism $\phi^{\mathrm{an}}$ restricts to a finite morphism from $D'$ onto $D$. As $D$ is isomorphic to a Berkovich open ball of radius belonging to $|k^*|$, we must have that $D'$ is a Berkovich open ball whose radius belongs to $|k^*|$.
It follows that there exists a weak semistable vertex set $\mathfrak{V}_1$ in $C'^{\mathrm{an}}$ such that $\Sigma(C'^{\mathrm{an}},\mathfrak{V}_1) = A$. The set $\mathfrak{W} := \mathfrak{W}_1 \cup \phi^{\mathrm{an}}(\mathfrak{V}_1)$ is a weak semistable vertex set for $C^{\mathrm{an}}$ and $\Sigma(C^{\mathrm{an}},\mathfrak{W}) = \Sigma(C^{\mathrm{an}},\mathfrak{W}_1)$. Let $\mathfrak{V} := (\phi^{\mathrm{an}})^{-1}(\mathfrak{W})$. As $\mathfrak{V}$ contains $\mathfrak{V}_1$ and is contained in $\Sigma(C'^{\mathrm{an}},\mathfrak{V}_1)$, we must have that $\mathfrak{V}$ is a weak semistable vertex set and $\Sigma(C'^{\mathrm{an}},\mathfrak{V}) = \Sigma(C'^{\mathrm{an}},\mathfrak{V}_1)$. The sets $\mathfrak{V}$ and $\mathfrak{W}$ satisfy the claims made in the lemma. \end{proof} \subsection{Lifting paths}  Let $\phi : C' \to C$ be a finite morphism between $k$-curves. A path in $C^{\mathrm{an}}$ is a continuous function $u : [a,b] \to C^{\mathrm{an}}$ where $[a,b]$ is a real interval. To construct deformation retractions which are compatible for the morphism $\phi$, we must understand to what extent certain paths on $C^{\mathrm{an}}$ can be lifted. By a lift of a path, we mean the following. \begin{defi} \emph{Let $a < b$ be real numbers and $u : [a,b] \to C^{\mathrm{an}}$ be a continuous function.} A lift of the path $u$ \emph{is a path $u' :[a,b] \to C'^{\mathrm{an}}$ such that $u = \phi^{\mathrm{an}} \circ u'$.} \end{defi} \begin{lem} Let $\phi : C' \to C$ be a finite morphism between irreducible projective smooth $k$-curves such that the extension of function fields induced by $\phi$ is separable. Let $\mathfrak{V} \subset \mathfrak{W}$ be weak semistable vertex sets of $C^{\mathrm{an}}$. Assume that $\mathfrak{W}$ contains the set of $k$-points of $C$ over which the morphism $\phi$ is ramified.
Let $\mathfrak{W}' \subset C'^{\mathrm{an}}$ be a weak semistable vertex set such that $\Sigma(C'^{\mathrm{an}},\mathfrak{W}') = (\phi^{\mathrm{an}})^{-1}(\Sigma(C^{\mathrm{an}},\mathfrak{W}))$. Let $\lambda^\mathfrak{W}_{\Sigma(C^{\mathrm{an}},\mathfrak{V})} : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ be the deformation retraction constructed in Proposition 2.21 whose image is $\Sigma(C^{\mathrm{an}},\mathfrak{W})$. We simplify notation and write $\lambda$ in place of $\lambda^\mathfrak{W}_{\Sigma(C^{\mathrm{an}},\mathfrak{V})}$. Let $p \in C^{\mathrm{an}}$. Let $\lambda^p : [0,1] \to C^{\mathrm{an}}$ be the path defined by $t \mapsto \lambda(t,p)$. Let $q \in (\phi^{\mathrm{an}})^{-1}(p)$. There exists a unique path $u : [0,1] \to C'^{\mathrm{an}}$ such that $u(0) = q$ and $\phi^{\mathrm{an}} \circ u = \lambda^p$. \end{lem} \begin{proof} We split the proof into two cases. \begin{enumerate} \item \emph{Let $p \in C^{\mathrm{an}}$ be a point which is not of type IV.} We can suppose that $p \notin \Sigma(C^{\mathrm{an}},\mathfrak{W})$ since when $p \in \Sigma(C^{\mathrm{an}},\mathfrak{W})$, the path $\lambda^p$ is constant and hence can always be lifted. We show firstly that for every $t \in [0,1)$, there exists $\epsilon > 0$ such that if $z' \in (\phi^{\mathrm{an}})^{-1}(\lambda^p(t))$ then $\lambda^p_{|[t,t + \epsilon]}$ lifts \emph{uniquely} to a path starting from $z'$. Suppose $z := \lambda^p(t)$ is a point of type I. By construction of $\lambda$, we must have that $z = p$ and $t = 0$. Our choice of weak semistable vertex sets implies that $\phi$ is étale over $\lambda^p(t)$. It follows from Hensel's lemma that there exist neighborhoods $V_{z'}$ in $C'^{\mathrm{an}}$ around $z'$ and $V_p$ around $p$ such that $\phi^{\mathrm{an}}$ restricts to a homeomorphism from $V_{z'}$ onto $V_p$. We conclude from this fact that there does indeed exist $\epsilon > 0$ such that $\lambda^p_{|[0,\epsilon]}$ lifts uniquely to a path starting from $z'$.
Let $z := \lambda^p(t)$ be a point of type II or III. By construction, for every $s \in [t,1]$, $\lambda^p(s) = \lambda^z(s)$ where $\lambda^z : [0,1] \to C^{\mathrm{an}}$ is the path defined by $s \mapsto \lambda(s,z)$. Furthermore, for every $s \in [0,t]$, $\lambda^z(s) = z$. It suffices to show that there exists $\epsilon > 0$ such that the path $\lambda^z_{|[t,t + \epsilon]}$ lifts uniquely to a path starting from $z'$. If there exists $\epsilon > 0$ such that $\lambda^z_{|[t,t + \epsilon]}$ is constant then our claim is obviously true. Let us hence suppose no such $\epsilon$ exists. By assumption, $z \notin \Sigma(C^{\mathrm{an}},\mathfrak{W})$. It follows that $z' \in C'^{\mathrm{an}} \smallsetminus \Sigma(C'^{\mathrm{an}},\mathfrak{W}')$. Recall that we used $\mathcal{D}$ to denote the set of connected components of the space $C^{\mathrm{an}} \smallsetminus \Sigma(C^{\mathrm{an}},\mathfrak{W})$ and when constructing the deformation $\lambda$ we identified each $D \in \mathcal{D}$ with a Berkovich open ball whose radius belongs to the value group $|k^*|$. Likewise, let $\mathcal{D}'$ denote the set of connected components of the space $C'^{\mathrm{an}} \smallsetminus \Sigma(C'^{\mathrm{an}},\mathfrak{W}')$. We identify each $D' \in \mathcal{D}'$ with a Berkovich open ball of unit radius. Let $D \in \mathcal{D}$ be such that $z \in D$ and $D' \in \mathcal{D}'$ such that $z' \in D'$. By Lemma 2.30, we have that $\phi^{\mathrm{an}}(D') = D$ and in addition $\phi^{\mathrm{an}}$ restricts to an open and closed map on $D'$. By construction of the deformation retraction $\lambda$, there exists $\beta \in [0,1]$ such that for every $x \in D$, $\lambda(s,x) \in D$ when $s \in [0,\beta)$ i.e. $\lambda : [0,\beta) \times D \to D$ is well defined and $\lambda(s,x) = \lambda(\beta,x)$ when $s \in [\beta,1]$. 
Our assumption that there does not exist $\epsilon > 0$ such that $\lambda^z_{|[t,t + \epsilon]}$ is constant and the construction of $\lambda$ imply that the path $\lambda^z_{|[t,\beta]}$ is injective and that $\lambda^z([t,\beta]) \subset \mathbf{H}(C^{\mathrm{an}})$. Furthermore, the composition $\lambda^z \circ (-\mathrm{exp}) : [-\mathrm{log}(\beta),-\mathrm{log}(t)] \to C^{\mathrm{an}}$ is an isometry (cf. Remark 2.22). Let $r \in |k^*|$ denote the radius of the ball $D$. The point $z$ must be of the form $\eta_{a,t}$ for some $a \in D$. Likewise, the point $z' \in D'$ must be of the form $\eta_{b,t'}$ for some $b \in D'$ and $t' \in (0,1)$. We may choose $b$ so that $\phi^{\mathrm{an}}(b) = a$. We show now that we can reduce to the case when $a = b = 0$. The translation automorphism $t_{-a}: O(0,r) \to O(0,r)$ defined by $x \mapsto x - a$ induces an automorphism $t_{-a}^{\mathrm{an}} : D \to D$ that maps the point $z$ to $\eta_{0,t}$. We have that $\lambda : [0,\beta) \times D \to D$ is well defined and it can be checked that $\lambda$ is $t_{-a}^{\mathrm{an}}$ invariant i.e. for every $s \in [0,\beta)$ and $x \in D$, $t_{-a}^{\mathrm{an}}(\lambda(s,x)) = \lambda(s,t_{-a}^{\mathrm{an}}(x))$. Similarly, let $t_{-b} : O(0,1) \to O(0,1)$ denote the translation morphism $y \mapsto y - b$. The map $t_{-b}$ induces an automorphism $t_{-b}^{\mathrm{an}} : D' \to D'$ that maps $z' = \eta_{b,t'}$ to $\eta_{0,t'}$. Let $\phi^{\mathrm{an}}_{D'}$ denote the restriction of $\phi^{\mathrm{an}}$ to $D'$. As $t_{-a}^{\mathrm{an}}$ and $t_{-b}^{\mathrm{an}}$ are automorphisms, there exists a morphism $f : D' \to D$ such that the following diagram commutes. 
\setlength{\unitlength}{1cm} \begin{picture}(10,5) \put(3.3,1){$D$} \put(6,1){$D$} \put(3.3,3.5){$D'$} \put(6,3.5){$D'$} \put(3.4,3.3){\vector(0,-1){1.75}} \put(6.1,3.3){\vector(0,-1){1.75}} \put(3.9,1.1){\vector(1,0){1.9}} \put(3.9,3.6){\vector(1,0){1.9}} \put(4.7,0.7){$t^{\mathrm{an}}_{-a}$} \put(4.7,3.2){$t^{\mathrm{an}}_{-b}$} \put(3.5,2.3){$\phi_{D'}^{\mathrm{an}}$} \put(6.2,2.3){$f$} \end{picture} Suppose there exists $\epsilon > 0$ and a unique path $u' : [t,t+\epsilon] \to D'$ starting from $\eta_{0,t'}$ such that $f \circ u' = \lambda^{\eta_{0,t}}_{|[t,t+\epsilon]}$. The commutativity of the above diagram and the fact that $t_{-a}^{\mathrm{an}}$ and $t_{-b}^{\mathrm{an}}$ are automorphisms imply that there exists a unique path $u : [t,t+\epsilon] \to D'$ starting at $z'$ such that $\phi^{\mathrm{an}} \circ u = \lambda^z_{|[t,t+\epsilon]}$. We may hence assume that $z = \eta_{0,t}$ and $z' = \eta_{0,t'}$ and that $\phi^{\mathrm{an}}(0) = 0$. Let $F \subset D'$ denote the set $(\phi^{\mathrm{an}})^{-1}(0)$. Let $\mathfrak{A} \subset D'$ be a finite set of points of type II with the following property. Let $\mathcal{C}$ denote the set of connected components of $D' \smallsetminus (\mathfrak{A} \cup F)$. There exists a finite set $\Upsilon \subset \mathcal{C}$ such that if $A \in \Upsilon$ then $A$ is isomorphic to a standard open annulus or a standard punctured Berkovich open disk and if $A \in \mathcal{C} \smallsetminus \Upsilon$ then $A$ is isomorphic to a Berkovich open ball with radius in $|k^*|$. Let $\Sigma(D')$ be the union of $\mathfrak{A} \cup F$ and the skeleton of every element $A \in \Upsilon$. By assumption, we must have that $z' \in \Sigma(D')$. Suppose $z'$ is not a vertex of the finite graph $\Sigma(D')$. It follows that there exists a standard open annulus $A' \in \Upsilon$ that contains $z'$ and in particular does not intersect $F$. The map $\phi^{\mathrm{an}}$ restricts to a morphism $ A' \to D \smallsetminus \{0\}$.
By [\cite{BPR}, Proposition 2.5], $\phi^{\mathrm{an}}(A') \subset D \smallsetminus \{0\}$ must be a standard open annulus $A$ as well and $\phi^{\mathrm{an}}(\Sigma(A')) = \Sigma(A)$. By assumption, we have that $z \in \Sigma(A)$ and for small enough $\epsilon > 0$, the path $\lambda^z_{|[t,t + \epsilon]}$ is contained in $\Sigma(A)$. Recall that we defined sections $\sigma : \mathrm{trop}(A) \to \Sigma(A)$ and $\sigma' : \mathrm{trop}(A') \to \Sigma(A')$ of the tropicalization maps $\mathrm{trop} : \Sigma(A) \to \mathrm{trop}(A)$ and $\mathrm{trop} : \Sigma(A') \to \mathrm{trop}(A')$ (cf. 2.2.2). By definition of the tropicalization map, we must have that $[-\mathrm{log}(t + \epsilon),-\mathrm{log}(t)] \subset \mathrm{trop}(A)$. By construction, $\lambda^z \circ (-\mathrm{exp}) : [-\mathrm{log}(t + \epsilon), -\mathrm{log}(t)] \to \Sigma(A)$ coincides with $\sigma_{|[-\mathrm{log}(t + \epsilon), -\mathrm{log}(t)]}$. As $\sigma$ and $\sigma'$ are homeomorphisms, the morphism $\phi^{\mathrm{an}}_{|\Sigma(A')} : \Sigma(A') \to \Sigma(A)$ induces a map $\phi^{\mathrm{trop}} : \mathrm{trop}(A') \to \mathrm{trop}(A)$. By loc.cit, there exists a nonzero $d \in \mathbb{Z}$ such that $\phi^{\mathrm{trop}}$ is of the form $d(\cdot) - \mathrm{log}(|\delta|)$ for some $\delta \in k^*$. Let $(\phi^{\mathrm{trop}})^{-1}$ denote the inverse of $\phi^{\mathrm{trop}}$. It can be verified that $u := \sigma' \circ (\phi^{\mathrm{trop}})^{-1} \circ (-\mathrm{log}) : [t,t+\epsilon] \to D'$ is a lift of $\lambda^z_{|[t,t+\epsilon]}$ starting from $z'$. In fact, $u$ is the unique lift of $\lambda^z_{|[t,t+\epsilon]}$ starting from $z'$. Indeed, by Proposition 2.5 in loc.cit., it can be deduced that $\phi^{\mathrm{an}}(A' \smallsetminus \Sigma(A')) \subset A \smallsetminus \Sigma(A)$ and furthermore, the map $\phi^{\mathrm{an}}$ restricted to $\Sigma(A')$ is a bijection onto $\Sigma(A)$. As $\lambda^z_{|[t,t+\epsilon]}$ is a path along $\Sigma(A)$, $u$ must be a lift along $\Sigma(A')$ and hence unique.
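To illustrate the preceding construction with a simple (purely hypothetical) instance: suppose that, after the reductions above, $\phi^{\mathrm{an}}$ restricts on $A'$ to the map induced by $T \mapsto T^d$ with $d > 0$, so that $\delta = 1$ and $\phi^{\mathrm{trop}}(s) = ds$. Writing $\sigma_{A'} : \mathrm{trop}(A') \to \Sigma(A')$ for the section of the tropicalization map of $A'$, the lift of $\lambda^z_{|[t,t+\epsilon]}$ starting from $z'$ can be computed explicitly: \begin{align*} u(w) = \sigma_{A'}\Big(\frac{-\mathrm{log}(w)}{d}\Big) = \eta_{0,w^{1/d}}, \qquad w \in [t,t+\epsilon], \end{align*} and indeed $\phi^{\mathrm{an}}(u(w)) = \eta_{0,(w^{1/d})^d} = \eta_{0,w} = \lambda^z(w)$.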
Suppose $z' = \eta_{0,t'} \in \Sigma(D')$ is a vertex. We must have that $z'$ is a type II point. It follows that there exists a standard open annulus $A' \in \Upsilon$ with inner radius $t'$ and outer radius belonging to $|k^*|$. By \cite{BPR}, the image $\phi^{\mathrm{an}}(A')$ is a standard open annulus $A \subset D$ and we choose $\epsilon > 0$ small enough so that $\lambda^z_{|(t,t+\epsilon]}$ is a path contained in $\Sigma(A)$. We now proceed as above to obtain a lift $u$ of $\lambda^z_{|(t,t+\epsilon]}$ starting from $z'$ and for reasons mentioned above it is the unique lift contained in $\Sigma(A')$. It remains to show that $u$ is the unique lift in $C'^{\mathrm{an}}$ and this requires an additional argument for the following reason. Observe that the annulus $A'$ contains exactly one element of the tangent space $T_{z'}$, whereas previously when $z'$ was in the interior of $A'$, every tangent direction was contained in $A'$. It follows that although $u$ might be the unique lift in $A'$, it might not be the only lift in $C'^{\mathrm{an}}$. However, it is the only possible lift since by Lemma 2.33, the Berkovich closed ball $\mathbf{B}(0,t')$ maps onto the closed ball $\mathbf{B}(0,t)$ and the path $\lambda^z_{|(t,t + \epsilon]}$ is contained in the complement of $\mathbf{B}(0,t)$. Hence any lift of $\lambda^z_{|(t,t + \epsilon]}$ must be contained entirely in $A'$. This proves that the lift $u$ is unique. Let $L$ denote the set of $s \in [0,1]$ for which there exists a lift of the path $\lambda^p_{|[0,s]}$ starting from $q$. Let $s \in L$. We claim that there exists a unique lift of the path $\lambda^p_{|[0,s]}$ starting from $q$. Let $u',u$ be two lifts of the path $\lambda^p_{|[0,s]}$ starting from $q$. Let $s_0 \in [0,s]$ be the largest real number such that $u'_{|[0,s_0]} = u_{|[0,s_0]}$. Let $q' := u'(s_0) = u(s_0)$ and $p' := \phi^{\mathrm{an}}(q')$.
By the first part of the proof, there exists $\epsilon > 0$ and a unique lift $v$ of the path $\lambda^p_{|[s_0,s_0 + \epsilon]}$ starting from $q'$. As $u'_{|[s_0,s_0 + \epsilon]}$ and $u_{|[s_0,s_0 + \epsilon]}$ are both lifts of $\lambda^p_{|[s_0,s_0 + \epsilon]}$ starting from $q'$, we must have that $v = u_{|[s_0,s_0 + \epsilon]} = u'_{|[s_0,s_0 + \epsilon]}$. This implies that $u_{|[0,s_0 + \epsilon]} = u'_{|[0,s_0 + \epsilon]}$, which contradicts the maximality of $s_0$ when $s_0 < s$. Hence $s_0 = s$ and we have thus verified our claim. It can be deduced from this that the set $L$ contains its supremum. Let $t$ be the largest real number in $[0,1]$ such that the path $\lambda^p_{|[0,t]}$ lifts to a path $u : [0,t] \to C'^{\mathrm{an}}$ starting from $q$. Suppose $t < 1$. By the first half of the proof, there exists a small real number $\epsilon > 0$ such that $\lambda^p_{|[t,t + \epsilon]}$ lifts to a path $u' : [t,t+\epsilon] \to C'^{\mathrm{an}}$ starting from $u(t)$. Glueing $u$ and $u'$ gives a path $u'' : [0,t+\epsilon] \to C'^{\mathrm{an}}$ which lifts $\lambda^p_{|[0,t+\epsilon]}$. Hence we have a contradiction to our assumption that $t < 1$. Therefore there exists a unique lift of the path $\lambda^p$. \item \emph{Let $p$ be a point of type IV.} We must have that $p \notin \Sigma(C^{\mathrm{an}},\mathfrak{W})$. We make use of the notation introduced in part (1) of the proof. There exists $D \in \mathcal{D}$ such that $p \in D$. By construction of $\lambda$, there exist real numbers $0 \leq a < b \leq 1$ such that the path $\lambda^p$ is constant on $[0,a]$ and injective on $[a,b]$. Let $U$ be a connected neighborhood in $D$ of the point $p$ such that $(\phi^{\mathrm{an}})^{-1}(U)$ decomposes into the disjoint union of connected open sets $\{U_1,\ldots,U_m\}$ and each $U_i$ contains exactly one preimage of the point $p$. We claim that we can shrink $U$ and choose $a < t'_1 < b$ such that $\lambda^p([0,t'_1]) \subset U$ and for every $x \in \lambda^p([0,t'_1])$ there exists exactly one preimage of $x$ in each of the $U_i$. This can be accomplished as follows.
We show that there exists $t' \in (a,b)$ such that for every element $x \in \lambda^p([a,t'))$ the cardinality of the set $(\phi^{\mathrm{an}})^{-1}(x)$ is constant. We can then shrink $U$ so that it does not intersect $\lambda^p([t',b])$ and choose $t'_1 \in (a,t')$ suitably small. Such a $U$ must satisfy the claim since the morphism $\phi^{\mathrm{an}}$ being closed and open (cf. Lemma 6.1) is surjective from each of the $U_i$ onto $U$. Observe that if $t_1,t_2 \in (a,b]$ are such that $t_1 < t_2$ then the number of preimages of $\lambda^p(t_1)$ is greater than or equal to the number of preimages of $\lambda^p(t_2)$. This follows from the uniqueness of lifts from part (1) and that if $P$ is a lift of the path $\lambda^p_{|[t_1,b]}$ then $P_{|[t_2,b]}$ is a lift of the path $\lambda^p_{|[t_2,b]}$. As the morphism $\phi^{\mathrm{an}}$ is finite, there exists $t' \in (a,b)$ such that the number of preimages of every point $x \in \lambda^p((a,t'))$ is constant. The preimages in $C'^{\mathrm{an}}$ of the point $p$ are of type IV and the tangent space at any such point is a single element. It follows that the number of preimages of every point in $\lambda^p([a,t'))$ is a constant. Let $t'_1 \in (a,t')$ be such that $\lambda^p([0,t'_1]) \subset U$. This verifies the claim. We suppose without loss of generality that $q \in U_1$. We show firstly that the path $\lambda^p_{|[0,t'_1]}$ can be lifted to a path in $C'^{\mathrm{an}}$ starting from $q$. It suffices to show that the path $\lambda^p_{|[a,t'_1]}$ can be lifted to a path in $C'^{\mathrm{an}}$ starting from $q$. Let $I$ denote the set of real numbers $r \in [a,t'_1]$ for which there exists a lift $P_r$ of the path $\lambda^p_{|[r,t'_1]}$ contained in $U_1$. As $U_1$ contains exactly one preimage of the point $\lambda^p(r)$, the uniqueness of lifts from part (1) of the proof implies that the set $I$ is closed. Let $t_0$ denote the smallest element of the set $I$. Suppose $t_0 > a$.
Let $a' \in (a,t_0)$ and $p' := \lambda^p(a')$. By construction, $p'$ is not a point of type IV. Let $q'$ be the unique preimage of $p'$ in $U_1$. By part (1), there exists a lift $P'$ of the path $\lambda^{p'}_{|[a',t'_1]}$ starting from $q'$. By construction, $\lambda^{p'}_{|[t_0,t'_1]}$ coincides with $\lambda^p_{|[t_0,t'_1]}$. As the lifts are unique, we must have that $P'_{|[t_0,t'_1]} = P_{t_0}$. Since $\lambda^p([a',t'_1]) \subset U$ and $P'$ starts in $U_1$, the lift $P'$ is contained in $U_1$, so $a' \in I$. This contradicts the minimality of $t_0$, proving that $t_0 = a$. It follows that there exists a lift of the path $\lambda^p_{|[a,t'_1]}$ in $U_1$ which in turn implies that there exists a lift of the path $\lambda^p_{|[0,t'_1]}$ in $U_1$ as $\lambda^p_{|[0,a]}$ is constant. We abuse notation and refer to this lift as $P'$ as well. Let $p'' := \lambda^p(t'_1)$. By construction, $\lambda^{p''}_{|[t'_1,1]}$ coincides with the path $\lambda^p_{|[t'_1,1]}$. Let $q'' := P'(t'_1)$. By part (1), there exists a lift $P''$ of the path $\lambda^{p''}_{|[t'_1,1]}$ starting from $q''$. Glueing the paths $P'$ and $P''$ results in a lift of the path $\lambda^p$ starting from $q$. \end{enumerate} \end{proof} \subsection{Finite morphisms to $\mathbb{P}^1_k$} Let $C$ be a smooth projective irreducible $k$-curve. Let $\phi : C \to \mathbb{P}^1_k$ be a finite morphism such that the extension of function fields $k(\mathbb{P}^1_k) \hookrightarrow k(C)$ induced by $\phi$ is separable. Let $R$ be the finite set of $k$-points of $\mathbb{P}^1_k$ over which the morphism $\phi$ is ramified. Let $\mathfrak{W}$ be a weak semistable vertex set that contains $R$ such that $\mathfrak{V} := (\phi^{\mathrm{an}})^{-1}(\mathfrak{W})$ is a weak semistable vertex set for $C^{\mathrm{an}}$ and $\Sigma(C^{\mathrm{an}},\mathfrak{V}) = (\phi^{\mathrm{an}})^{-1}(\Sigma(\mathbb{P}^{1,\mathrm{an}}_k,\mathfrak{W}))$.
By Proposition 2.21, there exists a deformation retraction \begin{align*} \lambda_{\Sigma(\mathbb{P}_k^{1,\mathrm{an}},\mathfrak{W})} : [0,1] \times \mathbb{P}_k^{1,\mathrm{an}} \to \mathbb{P}_k^{1,\mathrm{an}} \end{align*} whose image is the skeleton $\Sigma(\mathbb{P}_k^{1,\mathrm{an}},\mathfrak{W})$. Recall that for a point $p \in \mathbb{P}^{1,\mathrm{an}}_k$, the deformation retraction $\lambda_{\Sigma(\mathbb{P}_k^{1,\mathrm{an}},\mathfrak{W})}$ defines a path $\lambda^p_{\Sigma(\mathbb{P}_k^{1,\mathrm{an}},\mathfrak{W})} : [0,1] \to \mathbb{P}^{1,\mathrm{an}}_k$ by $t \mapsto \lambda_{\Sigma(\mathbb{P}_k^{1,\mathrm{an}},\mathfrak{W})}(t,p)$. We are now in a position to prove Theorem 3.1 for the morphism $\phi : C \to \mathbb{P}^1_k$. We suppose in addition that the extension of function fields $k(\mathbb{P}^1_k) \hookrightarrow k(C)$ induced by $\phi$ is Galois. \begin{prop} Let $C$ be a smooth projective irreducible curve and let $\phi : C \to \mathbb{P}^1_k$ be a finite morphism such that the extension of function fields $k(\mathbb{P}^{1}_k) \hookrightarrow k(C)$ is Galois. Let $\mathfrak{W}$ be a weak semistable vertex set for $\mathbb{P}^{1,\mathrm{an}}_k$ that contains the closed points over which the morphism is ramified such that $\mathfrak{V} := (\phi^{\mathrm{an}})^{-1}(\mathfrak{W})$ is a weak semistable vertex set for $C^{\mathrm{an}}$ and $\Sigma(C^{\mathrm{an}},\mathfrak{V}) = (\phi^{\mathrm{an}})^{-1}(\Sigma(\mathbb{P}^{1,\mathrm{an}}_k,\mathfrak{W}))$. There exists a pair of compatible deformation retractions $\psi' : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ and $\psi : [0,1] \times \mathbb{P}_k^{1,\mathrm{an}} \to \mathbb{P}_k^{1,\mathrm{an}}$ whose images are the connected finite graphs $\Sigma(C^{\mathrm{an}},\mathfrak{V})$ and $\Sigma(\mathbb{P}_k^{1,\mathrm{an}},\mathfrak{W})$ respectively. \end{prop} \begin{proof} Let $\psi := \lambda_{\Sigma(\mathbb{P}^{1,\mathrm{an}}_k,\mathfrak{W})}$. 
We define the deformation retraction $\psi' : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ as follows. Let $q' \in C^{\mathrm{an}}$ and $q := \phi^{\mathrm{an}}(q')$. By Lemma 3.5, there exists a unique lift $\psi'^{q'}$ of the path $\psi^q$ starting at $q'$. For $t \in [0,1]$ and $q' \in C^{\mathrm{an}}$, we set $\psi'(t,q') := \psi'^{q'}(t)$. The uniqueness of the lifts $\psi'^{q'}$ implies that $\psi'$ is well defined. Let $G = \mathrm{Gal}(k(C)/k(\mathbb{P}^1_k))$. The uniqueness of the lift implies that for every $g \in G$, $g \circ \psi'^{q'} = \psi'^{g(q')}$. It follows that for every $t \in [0,1]$, $g(\psi'(t,q')) = \psi'(t,g(q'))$. The compatibility of $\psi'$ and $\psi$ implies that $\psi'(1,C^{\mathrm{an}})$ is equal to $(\phi^{\mathrm{an}})^{-1}(\Sigma(\mathbb{P}_k^{1,\mathrm{an}},\mathfrak{W})) = \Sigma(C^{\mathrm{an}},\mathfrak{V})$. The continuity of $\psi'$ follows from 2.36. \end{proof} We show that Theorem 3.1 can be deduced from Proposition 3.6. \begin{proof} Let $\phi : C' \to C$ be a finite morphism between smooth projective irreducible $k$-curves. It suffices to prove the theorem when the extension of function fields $k(C) \hookrightarrow k(C')$ induced by the morphism $\phi$ is separable. Indeed, the extension $k(C) \hookrightarrow k(C')$ can be decomposed into a separable field extension $k(C) \hookrightarrow L$ and a purely inseparable extension $L \hookrightarrow k(C')$. Let $C''$ denote the smooth projective irreducible $k$-curve that corresponds to the function field $L$. The corresponding morphism of curves $C' \to C''$ and its analytification $C'^{\mathrm{an}} \to C''^{\mathrm{an}}$ are homeomorphisms.
If $\mathfrak{V}''$ is a weak semistable vertex set for $C''^{\mathrm{an}}$ then its preimage $\mathfrak{V}'$ in $C'^{\mathrm{an}}$ is a weak semistable vertex set as well and a deformation retraction of $C''^{\mathrm{an}}$ with image $\Sigma(C''^{\mathrm{an}},\mathfrak{V}'')$ lifts to a deformation retraction on $C'^{\mathrm{an}}$ with image $\Sigma(C'^{\mathrm{an}},\mathfrak{V}')$. Let $a : C \to \mathbb{P}^1_k$ be a finite separable morphism and let $K$ be a finite Galois extension of $k(\mathbb{P}^1_k)$ that contains $k(C')$. Let $C''$ denote the smooth projective irreducible curve corresponding to the function field $K$. By construction we have the following sequence of morphisms : $C'' \to C' \to C \to \mathbb{P}^1_k$. Let $c : C'' \to \mathbb{P}^1_k$ denote this composition. Using Lemma 3.3, it can be checked that there exists a weak semistable vertex set $\mathfrak{A}$ for $\mathbb{P}^{1,\mathrm{an}}_k$ that contains the points over which the morphism $c : C'' \to \mathbb{P}^1_k$ is ramified and in addition that $(a^{\mathrm{an}})^{-1}(\mathfrak{A})$ is a weak semistable vertex set of $C^{\mathrm{an}}$, ${(a \circ \phi)^{\mathrm{an}}}^{-1}(\mathfrak{A})$ is a weak semistable vertex set of $C'^{\mathrm{an}}$ and $(c^{\mathrm{an}})^{-1}(\mathfrak{A}) = \mathfrak{A}''$ is a weak semistable vertex set for $C''^{\mathrm{an}}$. Furthermore, $\Sigma(C^{\mathrm{an}},(a^{\mathrm{an}})^{-1}(\mathfrak{A})) = (a^{\mathrm{an}})^{-1}(\Sigma(\mathbb{P}^{1,\mathrm{an}}_k,\mathfrak{A}))$, $\Sigma(C'^{\mathrm{an}},{(a \circ \phi)^{\mathrm{an}}}^{-1}(\mathfrak{A})) = {(a \circ \phi)^{\mathrm{an}}}^{-1}(\Sigma(\mathbb{P}^{1,\mathrm{an}}_k,\mathfrak{A}))$ and $\Sigma(C''^{\mathrm{an}},\mathfrak{A}'') = (c^{\mathrm{an}})^{-1}(\Sigma(\mathbb{P}^{1,\mathrm{an}}_k,\mathfrak{A}))$. 
The deformation retraction $\lambda_{\Sigma(\mathbb{P}^{1,\mathrm{an}}_k,\mathfrak{A})}$ has image $\Sigma(\mathbb{P}^{1,\mathrm{an}}_k,\mathfrak{A})$ and lifts to a deformation retraction $\lambda''^{\mathfrak{A}''}$ with image $\Sigma(C''^{\mathrm{an}},\mathfrak{A}'')$. Let $G := \mathrm{Gal}(k(C'')/k(\mathbb{P}^1_k))$. The deformation retraction $\lambda''^{\mathfrak{A}''}$ is $G$-invariant. There exist subgroups $H_{C'} \subset G$ and $H_{C} \subset G$ such that $C'' \to C'$ and $C'' \to C$ are the quotient morphisms $C'' \to C''/H_{C'}$ and $C'' \to C''/H_{C}$. As $\lambda''^{\mathfrak{A}''}$ is $H_{C}$ and $H_{C'}$ invariant, it must induce deformation retractions $\lambda_{C}$ and $\lambda_{C'}$ whose images are $(a^{\mathrm{an}})^{-1}(\Sigma(\mathbb{P}^{1,\mathrm{an}}_k,\mathfrak{A}))$ and $((a \circ \phi)^{\mathrm{an}})^{-1}(\Sigma(\mathbb{P}^{1,\mathrm{an}}_k,\mathfrak{A}))$ respectively. This proves the theorem. \end{proof} \section{Calculating the genera $g^{\mathrm{an}}(C')$ and $g^{\mathrm{an}}(C)$} In the previous sections we showed that given a $k$-curve, there exists a deformation retraction of the curve onto a closed subspace which is a finite metric graph. We called such subspaces skeleta. In Definition 2.26, we introduced the genus of a skeleton and by Proposition 2.25 it is independent of the weak semistable vertex sets that define it, implying that it is in fact an invariant of the curve. In what follows we study how these invariants relate to each other given a finite morphism between the spaces they are associated to. The theorem that follows is analogous to the Riemann-Hurwitz formula in algebraic geometry. We introduce the notation involved in the statement of Theorem 4.1. Let $\phi : C' \to C$ be a finite separable morphism between smooth projective curves over the field $k$. \subsection{Notation} We define the genus of a point $p \in C^{\mathrm{an}}$ as follows.
If $p \in C^\mathrm{an}$ is of type II then let $g_p$ denote the genus of the smooth projective curve $\tilde{C}_p$ which corresponds to the $\tilde{k}$-function field $\widetilde{\mathcal{H}(p)}$ and if $p \in C^{\mathrm{an}}$ is not of type II then we set $g_p = 0$. Let $p' \in C'^{\mathrm{an}}$ be a point of type II and let $p := \phi^{\mathrm{an}}(p')$. The $\tilde{k}$-function fields $\widetilde{\mathcal{H}(p')}$, $\widetilde{\mathcal{H}(p)}$ define smooth projective $\tilde{k}$-curves $\tilde{C}'_{p'}$, $\tilde{C}_p$ respectively. The morphism $\phi^{\mathrm{an}}$ induces an injection $\widetilde{\mathcal{H}(p)} \hookrightarrow \widetilde{\mathcal{H}(p')}$ which in turn induces a morphism $\tilde{C'}_{p'} \to \tilde{C}_p$. This morphism is not necessarily separable. The extension $\widetilde{\mathcal{H}(p)} \hookrightarrow \widetilde{\mathcal{H}(p')}$ can be decomposed so that there exists an intermediate $\tilde{k}$-function field $\widetilde{I(p',p)}$ and the extension $\widetilde{\mathcal{H}(p)} \hookrightarrow \widetilde{I(p',p)}$ is purely inseparable while $\widetilde{I(p',p)} \hookrightarrow \widetilde{\mathcal{H}(p')}$ is separable of degree $s(p',p)$. Such a decomposition exists by \cite{MOU}. Let $\tilde{C}_{p',p}$ denote the smooth projective $\tilde{k}$-curve which corresponds to the field $\widetilde{I(p',p)}$. By construction, the genus of the curve $\tilde{C}_{p',p}$ is equal to $g_p$. The finite separable morphism $\tilde{C'}_{p'} \to \tilde{C}_{p',p}$ can be used to relate the genera of the two curves via the Riemann-Hurwitz formula. As in [\cite{hart}, IV.2], let \begin{align*} R_{p',p} := \Sigma_{P \in \tilde{C'}_{p'}} \mathrm{length} (\Omega_{\tilde{C'}_{p'}/\tilde{C}_{p',p}})_P \cdot P \end{align*} and \begin{align*} R := \Sigma_{P \in C'} \mathrm{length} (\Omega_{C'/C})_P \cdot P. \end{align*} We define invariants on the points of $C^{\mathrm{an}}$ which relate the values $g^{\mathrm{an}}(C'), g^{\mathrm{an}}(C)$ from Definition 2.26.
For $p \in C^{\mathrm{an}}$ of type II, let \begin{align*} s(p) &:= \Sigma_{p' \in (\phi^{\mathrm{an}})^{-1}(p)} s(p',p), \\ R^1_{p',p} &:= \mathrm{deg}(R_{p',p}) - (2s(p',p) - 2), \\ R^1_p &:= \Sigma_{p' \in (\phi^{\mathrm{an}})^{-1}(p)} R^1_{p',p}. \end{align*} When $p$ is not of type II, let $s(p)$ be the cardinality of the fibre $(\phi^{\mathrm{an}})^{-1}(p)$ and $R^1_p := 0$. \subsection{A Riemann-Hurwitz formula for the analytic genus} \begin{thm} Let $\phi : C' \to C$ be a finite separable morphism between smooth projective curves over the field $k$. Let $g^{\mathrm{an}}(C'), g^{\mathrm{an}}(C)$ be as in Definition 2.26. We have the following equation. \begin{align*} 2g^{\mathrm{an}}(C') - 2 = \mathrm{deg}(\phi)(2g^{\mathrm{an}}(C) - 2) + \Sigma_{p \in C^{\mathrm{an}}} 2 (\mathrm{deg}(\phi) - s(p)) g_p + \mathrm{deg}(R) - \Sigma_{p \in C^{\mathrm{an}}} R^1_p. \end{align*} \end{thm} \begin{proof} In order to prove Theorem 4.1, we make use of the fact that there exists a pair of deformation retractions $\psi' : [0,1] \times C'^{\mathrm{an}} \to C'^{\mathrm{an}}$ and $\psi : [0,1] \times C^{\mathrm{an}} \to C^{\mathrm{an}}$ which are compatible with the morphism $\phi^{\mathrm{an}}$ (Theorem 3.1). Let $\Upsilon_{C'^{\mathrm{an}}}$ and $\Upsilon_{C^{\mathrm{an}}}$ denote the images of the deformation retractions $\psi'$ and $\psi$ respectively. We can assume that $\Upsilon_{C^{\mathrm{an}}}$ contains the ramification locus of the morphism $\phi$. Furthermore, there exist weak semistable vertex sets $\mathfrak{A} \subset C^{\mathrm{an}}$ and $\mathfrak{A}' \subset C'^{\mathrm{an}}$ such that $\Upsilon_{C^{\mathrm{an}}} = \Sigma(C^{\mathrm{an}},\mathfrak{A})$ and $\Upsilon_{C'^{\mathrm{an}}} = \Sigma(C'^{\mathrm{an}},\mathfrak{A}')$. We identify a set of vertices $V(\Upsilon_{C^{\mathrm{an}}}), V(\Upsilon_{C'^{\mathrm{an}}})$ for the skeleta $\Upsilon_{C^{\mathrm{an}}}$ and $\Upsilon_{C'^{\mathrm{an}}}$ which satisfy the following conditions.
\begin{enumerate} \item $V(\Upsilon_{C'^{\mathrm{an}}}) = (\phi^{\mathrm{an}})^{-1}(V(\Upsilon_{C^{\mathrm{an}}})).$ \item $\mathfrak{A} \subset V(\Upsilon_{C^{\mathrm{an}}})$ and $\mathfrak{A'} \subset V(\Upsilon_{C'^{\mathrm{an}}})$. \item If $p$ (resp. $p'$) is a point on the skeleton $\Upsilon_{C^{\mathrm{an}}}$ (resp. $\Upsilon_{C'^{\mathrm{an}}}$) for which there exists a sufficiently small open neighbourhood $U \subset \Upsilon_{C^{\mathrm{an}}}$ (resp. $U' \subset \Upsilon_{C'^{\mathrm{an}}}$) such that $U \smallsetminus \{p\}$ (resp. $U' \smallsetminus \{p'\}$) has at least three connected components then $p \in V(\Upsilon_{C^{\mathrm{an}}})$ (resp. $p' \in V(\Upsilon_{C'^{\mathrm{an}}})$). \end{enumerate} It can be verified that a pair $(V(\Upsilon_{C^{\mathrm{an}}}),V(\Upsilon_{C'^{\mathrm{an}}}))$ satisfying these properties does indeed exist. We define the set of edges $E(\Upsilon_{C^{\mathrm{an}}})$ (resp. $E(\Upsilon_{C'^{\mathrm{an}}})$) for the skeleton $\Upsilon_{C^{\mathrm{an}}}$ (resp. $\Upsilon_{C'^{\mathrm{an}}}$) to be the collection of paths contained in $\Upsilon_{C^{\mathrm{an}}}$ (resp. $\Upsilon_{C'^{\mathrm{an}}}$) which connect two vertices and contain no vertex in their interior. Since $\Upsilon_{C^{\mathrm{an}}}$ (resp. $\Upsilon_{C'^{\mathrm{an}}}$) is the skeleton associated to a weak semistable vertex set, the edges of the skeleton are identified with real intervals. This defines a length function on the set of edges. By definition, $g^{\mathrm{an}}(C) = g(\Upsilon_{C^{\mathrm{an}}})$ and $g^{\mathrm{an}}(C') = g(\Upsilon_{C'^{\mathrm{an}}})$. The genus formula [\cite{aminibaker}, 4.5] implies that \begin{align*} g(C) = g(\Upsilon_{C^{\mathrm{an}}}) + \Sigma_{p \in V(\Upsilon_{C^{\mathrm{an}}})} g_p \end{align*} and \begin{align*} g(C') = g(\Upsilon_{C'^{\mathrm{an}}}) + \Sigma_{p' \in V(\Upsilon_{C'^{\mathrm{an}}})} g_{p'}.
\end{align*} By definition, the spaces $C^{\mathrm{an}} \smallsetminus \mathfrak{A}$ and $C'^{\mathrm{an}} \smallsetminus \mathfrak{A'}$ decompose into the disjoint union of Berkovich open balls and open annuli. It follows that if $p \notin \mathfrak{A}$ then $g_p = 0$ and if $p' \notin \mathfrak{A'}$ then $g_{p'} = 0$. As $\mathfrak{A} \subset V(\Upsilon_{C^{\mathrm{an}}})$ and $\mathfrak{A'} \subset V(\Upsilon_{C'^{\mathrm{an}}})$, the equations above can be rewritten as \begin{align} g(C) = g(\Upsilon_{C^{\mathrm{an}}}) + \Sigma_{p \in C^{\mathrm{an}}} g_p \end{align} and \begin{align} g(C') = g(\Upsilon_{C'^{\mathrm{an}}}) + \Sigma_{p' \in C'^{\mathrm{an}}} g_{p'}. \end{align} The morphism $\phi : C' \to C$ is a finite separable morphism between smooth, projective curves. The Riemann-Hurwitz formula [\cite{hart}, Corollary IV.2.4] enables us to relate the genera of the curves $C'$ and $C$. Precisely, \begin{align} 2g(C') - 2 = \mathrm{deg}(\phi)(2g(C) - 2) + \mathrm{deg}(R) \end{align} where $R$ is a divisor on the curve $C'$ such that if $\phi$ is tamely ramified at $x' \in C'$ then $\mathrm{ord}_{x'}(R) = \mathrm{ram}(x',x) -1$. Using the above, we obtain the following equation relating $g^{\mathrm{an}}(C')$ and $g^{\mathrm{an}}(C)$. \begin{align*} 2g^{\mathrm{an}}(C') - 2 + 2(\Sigma_{p' \in C'^{\mathrm{an}}} g_{p'} ) = \mathrm{deg}(\phi)(2g^{\mathrm{an}}(C) -2) + \\ \mathrm{deg}(\phi)(2\Sigma_{p \in C^{\mathrm{an}}} g_p) + \mathrm{deg}(R). \end{align*} The only points $p' \in C'^{\mathrm{an}}$ for which $g_{p'} \neq 0$ belong to $\mathfrak{A'}$ and are of type II. Let $p'$ be such a point and $p := \phi^{\mathrm{an}}(p')$. Applying the Riemann-Hurwitz formula to the extension $\widetilde{I(p',p)} \hookrightarrow \widetilde{\mathcal{H}(p')}$ relates $g_{p'}$ and $g_p$ by the following equation. \begin{align*} 2g_{p'} - 2 = s(p',p)(2g_p - 2) + \mathrm{deg}(R_{p',p}). \end{align*} This equation holds for all points of type II.
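As a concrete illustration of the last equation (a hypothetical example, with the residue characteristic assumed different from $2$): suppose $s(p',p) = 2$, $g_p = 0$ and the degree two cover $\tilde{C'}_{p'} \to \tilde{C}_{p',p}$ is tamely ramified at exactly four closed points, so that $\mathrm{deg}(R_{p',p}) = 4$. Then \begin{align*} 2g_{p'} - 2 = 2(2 \cdot 0 - 2) + 4 = 0, \end{align*} so $g_{p'} = 1$, i.e. the residue curve $\tilde{C'}_{p'}$ is a curve of genus one and the point $p'$ contributes a nonzero genus term.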
When $p'$ and $p$ are not of type II, we set $s(p',p) := 1$ and $R_{p',p} := 0$. With these conventions, we obtain the following equation. \begin{align*} 2g^{\mathrm{an}}(C') - 2 = \mathrm{deg}(\phi)(2g^{\mathrm{an}}(C) -2) - \Sigma_{p' \in C'^{\mathrm{an}}} [s(p',p)(2g_p -2) + \mathrm{deg}(R_{p',p}) + 2] + \\ \mathrm{deg}(\phi)\Sigma_{p \in C^{\mathrm{an}}}(2g_p) + \mathrm{deg}(R). \end{align*} Let $s(p) := \Sigma_{p' \in (\phi^{\mathrm{an}})^{-1}(p)} s(p',p)$, $R^1_{p',p} := \mathrm{deg}(R_{p',p}) - (2s(p',p) - 2)$ and $R^1_p := \Sigma_{p' \in (\phi^{\mathrm{an}})^{-1}(p)} R^1_{p',p}$. These invariants further simplify the equation above to the following form. \begin{align*} 2g^{\mathrm{an}}(C') - 2 = \mathrm{deg}(\phi)(2g^{\mathrm{an}}(C) -2) - \Sigma_{p \in C^{\mathrm{an}}} 2s(p)g_p - \Sigma_{p \in C^{\mathrm{an}}} R^1_{p} + \mathrm{deg}(R). \end{align*} \end{proof} The rest of this section is dedicated to studying the invariant $s(p)$ arising in the equation above. \subsubsection{Calculating $i(p')$ and the defect} Let $M$ be a non-Archimedean valued field with valuation $v$. Let $|M^*|$ denote the value group and $\tilde{M}$ denote the residue field. Let $M'$ be a finite extension of the field $M$ such that the valuation $v$ extends uniquely to $M'$. By Ostrowski's lemma, we have the following equality. \begin{align*} [M':M] = (|M'^{*}| : |M^*|)[\tilde{M'}:\tilde{M}]c^r. \end{align*} Here $c$ is the characteristic of the residue field if it is positive and one otherwise. The value $d(M',M) := c^r$ is called the \emph{defect} of the extension. If $r=0$ then we call the extension $M'/M$ \emph{defectless}. We now relate this definition to the situation we are dealing with. Let $p$ be a point of type II belonging to $C^{\mathrm{an}}$ and $p' \in (\phi^{\mathrm{an}})^{-1}(p)$. Since the field $k$ is algebraically closed and non-Archimedean valued and the points $p,p'$ are of type II, the value groups of the fields $\mathcal{H}(p)$ and $\mathcal{H}(p')$ coincide.
We have the following equality \begin{align*} [\mathcal{H}(p') : \mathcal{H}(p)] = [\widetilde{\mathcal{H}(p')} : \widetilde{\mathcal{H}(p)}]d(p',p) \end{align*} where $d(p',p)$ is the defect of the extension ${\mathcal{H}(p')} / {\mathcal{H}(p)}$. \begin{lem} Let $p \in C^{\mathrm{an}}$ and $p' \in (\phi^\mathrm{an})^{-1}(p)$. The extension $\mathcal{H}(p) \hookrightarrow \mathcal{H}(p')$ is defectless, i.e., $d(p',p) = 1$. \end{lem} \begin{proof} We make use of the Poincaré-Lelong theorem and our construction in Section 3 of the pair of compatible deformation retractions $\psi$ and $\psi'$. Let $r \in [0,1]$ be the smallest real number such that $p \in \psi(r,C(k)) = \{\psi(r,x) | x \in C(k)\}$. Since the deformation retractions are compatible, it follows that if $p' \in (\phi^{\mathrm{an}})^{-1}(p)$ then $p' \in \psi'(r,C'(k))$. Let $x \in C(k)$ be such that $\psi(r,x) = p$. Observe that our choice of $\Upsilon_{C^{\mathrm{an}}}$ implies that the morphism $\phi$ is unramified over $x$. Let $P_x$ denote the path $\psi(\_,x) : [0,r] \to C^{\mathrm{an}}$. Given a simple neighborhood [\cite{BPR}, Definition 4.28] $U$ of $p$, the germ of the path $P_x$ at $p$ lies in a connected component of $U \smallsetminus \{p\}$ and hence defines an element of the tangent space which we refer to as $e_x$. Equivalently, for some $a > 0$, the path $(P_x)_{|[a,r]} \circ -\mathrm{exp} : [-\mathrm{log}(a), -\mathrm{log}(r)] \to C^{\mathrm{an}}$ is a geodesic segment and its germ defines the element $e_x$ of the tangent space $T_p$ (cf. Remark 2.22). Let $t_x$ be a uniformisant of the local ring $O_{C,x}$ such that $|t_x(p)| = 1$ and such that $\tilde{t}_x$, the image of $t_x$ in the residue field $\widetilde{\mathcal{H}(p)}$, is a uniformisant at the point $e_x$.
This can be accomplished by choosing $t_x$ so that it has no zeros or poles at any $k$-point $y$ for which the path $\psi(\_,y) : [0,r] \to C^{\mathrm{an}}$ coincides with $e_x$ in the tangent space and using the Poincar\'e-Lelong theorem. Such a choice is possible by the semistable decomposition associated to the skeleton $\Upsilon_{C^{\mathrm{an}}}$. Let $t'_x$ denote the image of $t_x$ in the function field $k(C')$. Our choice of $t_x$ implies that for every $p' \in (\phi^{\mathrm{an}})^{-1}(p)$, $|t'_x(p')| = 1$. The inclusion $\mathcal{H}(p) \hookrightarrow \mathcal{H}(p')$ induces an inclusion of $\tilde{k}$-function fields $\widetilde{\mathcal{H}(p)} \hookrightarrow \widetilde{\mathcal{H}(p')}$. As before, let $\tilde{C}_p$ and $\tilde{C}'_{p'}$ denote the smooth projective curves associated to these function fields. As explained above, the path $P_x$ defines a $\tilde{k}$-point $e_x$ of the curve $\tilde{C}_p$. Let $E(e_x,p')$ denote the set of preimages of this point on the curve $\tilde{C}'_{p'}$. Let $S := \{x'_1,\ldots,x'_k\}$ denote the set of preimages of the point $x$ and $\mathrm{ram}(x'_i,x)$ denote the ramification index of the morphism $\phi$ at the point $x'_i$. Since the skeleton $\Upsilon_{C^{\mathrm{an}}}$ contains the set of $k$-points over which the morphism is ramified, we have that $\mathrm{ram}(x'_i,x) = 1$ for all $i$. If $x' \in S$ then the path $P_{x'} := \psi'(\_,x') : [0,r] \to C'^{\mathrm{an}}$ defines an element of the tangent space at $\psi'(r,x')$. Indeed, if $U'$ is a simple neighborhood of the point $\psi'(r,x')$ then there exists $a \in [0,r)$ such that ${P_{x'}}_{|[a,r)}$ is contained in exactly one connected component of the space $U' \smallsetminus \{\psi'(r,x')\}$. The set of elements of the tangent spaces $T_{p'}$ for $p' \in (\phi^{\mathrm{an}})^{-1}(p)$ that are defined by the paths $\{\psi'(\_,x'_i) : [0,r] \to C'^{\mathrm{an}}\}$ coincides with the set $E(e_x) := \cup_{p' \in (\phi^{\mathrm{an}})^{-1}(p)} E(e_x,p')$.
Our choice of $t_x$ implies that if $y' \in C'(k) \smallsetminus \phi^{-1}(x)$ and the germ defined by the path $\psi'(\_,y') : [0,r] \to C'^{\mathrm{an}}$ lies in $E(e_x)$, then $t'_x$ cannot have a zero or pole at $y'$. For $p' \in (\phi^{\mathrm{an}})^{-1}(p)$ and $e' \in E(e_x,p')$, let $S_{e',p'}$ be the collection of those $x' \in S$ such that $\psi'(r,x') = p'$ and the path $\psi'(\_,x') : [0,r] \to C'^{\mathrm{an}}$ defines the germ $e'$. The non-Archimedean Poincaré-Lelong theorem implies that \begin{align*} \delta_{e'}(-|\mathrm{log}(t'_x)|)(p') = \Sigma_{x' \in S_{e',p'}} \mathrm{ram}(x',x) = \mathrm{card}(S_{e',p'}). \end{align*} The second equality follows from the fact that $\mathrm{ram}(x',x) = 1$. Furthermore, \begin{align*} \delta_{e'}(-|\mathrm{log}(t'_x)|)(p') = \mathrm{ord}_{e'}(\tilde{t}'_x). \end{align*} Since $\Sigma_{e' \in E(e_x,p')} \mathrm{ord}_{e'}(\tilde{t}'_x) = [\widetilde{\mathcal{H}(p')} : \widetilde{\mathcal{H}(p)}]$, we must have that \begin{align*} \Sigma_{p' \in (\phi^{\mathrm{an}})^{-1}(p)} [\widetilde{\mathcal{H}(p')} : \widetilde{\mathcal{H}(p)}] = \Sigma_{x' \in S} \mathrm{ram}(x',x) = \Sigma_{e',p'} \mathrm{card}(S_{e',p'}). \end{align*} Hence \begin{align*} \Sigma_{p' \in (\phi^{\mathrm{an}})^{-1}(p)} [\widetilde{\mathcal{H}(p')} : \widetilde{\mathcal{H}(p)}] = \mathrm{card}(S). \end{align*} As the field $k$ is algebraically closed, the expression on the right is equal to the degree of the morphism $\phi$ and we have that \begin{align*} \Sigma_{p' \in (\phi^{\mathrm{an}})^{-1}(p)} [\widetilde{\mathcal{H}(p')} : \widetilde{\mathcal{H}(p)}] = \Sigma_{p' \in (\phi^{\mathrm{an}})^{-1}(p)} [\mathcal{H}(p') : \mathcal{H}(p)]. \end{align*} This implies that for every $p' \in (\phi^{\mathrm{an}})^{-1}(p)$ the extension $\mathcal{H}(p) \hookrightarrow \mathcal{H}(p')$ is defectless.
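Indeed, spelling out Ostrowski's lemma in this situation: as the value groups of $\mathcal{H}(p')$ and $\mathcal{H}(p)$ coincide,
\begin{align*}
[\mathcal{H}(p') : \mathcal{H}(p)] = [\widetilde{\mathcal{H}(p')} : \widetilde{\mathcal{H}(p)}]d(p',p) \geq [\widetilde{\mathcal{H}(p')} : \widetilde{\mathcal{H}(p)}]
\end{align*}
for every $p' \in (\phi^{\mathrm{an}})^{-1}(p)$, so the equality of the sums above forces $d(p',p) = 1$ for every such $p'$.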
\end{proof} The result above follows from the more general fact that the completed residue field $\mathcal{H}(p)$ associated to a point $p$ of type II on the analytification $C^{\mathrm{an}}$ of a $k$-curve $C$ is stable [\cite{TEM}, Corollary 6.3.6], \cite{DUC}. Lemma 4.2 can in fact be used to prove this result. Propositions 2 and 4 of Section 3.6 in \cite{BGR} allow us to give the following definition of a stable complete field. \begin{defi} \emph{A complete field $K$ is} stable \emph{if and only if for every finite separable field extension $L/K$ the following equality holds \begin{align*} [L:K] = [|L^*| : |K^*|][\widetilde{L} : \widetilde{K}]. \end{align*}} \end{defi} \begin{prop} Let $S$ be a $k$-curve. Let $p \in S^{\mathrm{an}}$ be a point of type II. The complete field $\mathcal{H}(p)$ is stable. \end{prop} \begin{proof} Let $S_i$ be an irreducible component of $S$ such that $p \in S_i^{\mathrm{an}}$. Let $S'_i$ denote the normalisation of $S_i$. There exist finite sets of $k$-points $F'$ and $F$ in $S'_i$ and $S_i$ respectively such that ${S'}_i^{\mathrm{an}} \smallsetminus F' = S_i^{\mathrm{an}} \smallsetminus F$. It follows that we may reduce to the case when $S$ is smooth, projective and integral. Let $L$ be a finite separable extension of $\mathcal{H}(p)$. By definition, $\mathcal{H}(p)$ is the completion of the function field of the curve $S$ with respect to the valuation associated to $p$. Let $L_0$ denote the integral closure of $k(S)$ in $L$. By construction, $L_0$ is a finite separable field extension of $k(S)$. Hence there exists a smooth projective $k$-curve $S'$ such that $k(S') = L_0$. Let $\widehat{L_0}$ denote the completion of $L_0$ induced by the restriction of the valuation of $L$. We claim that $\widehat{L_0} = L$. Let $\alpha \in L$. By construction, $\alpha$ is algebraic over $\widehat{L_0}$. Let $g \in \widehat{L_0}[X]$ be the minimal polynomial of $\alpha$. Suppose, for contradiction, that $\mathrm{deg}(g) \geq 2$.
As $\widehat{L_0}$ is the completion of $L_0$, there exists a sequence $(f_i)_i$ of polynomials of degree $\mathrm{deg}(g)$ in $L_0[X]$ which converge to $g$ with respect to the Gauss norm, i.e., the coefficients of the $(f_i)_i$ converge to the coefficients of $g$. Let $\alpha_i$ denote a root of $f_i$ for each $i$. By Corollary 2 of [\cite{BGR}, Section 3.4], there exists a subsequence of $(\alpha_i)_i$ which converges to a root $\alpha'$ of $g$. As the $\alpha_i$ are algebraic over $L_0$, they must be algebraic over $k(S)$. Furthermore, by Proposition 3 in [\cite{BGR}, Section 3.4], for large enough $i$ we must have that $\alpha_i \in L$. Hence, by definition of $L_0$, $\alpha_i \in L_0$. Hence $\alpha' \in \widehat{L_0}$. This contradicts our assumption that $g$ is irreducible of degree at least $2$. Consequently, $\alpha \in \widehat{L_0}$ and $\widehat{L_0} = L$. Hence there exists a point $p'$ of type II on $S'^{\mathrm{an}}$ such that $\mathcal{H}(p') = L$. The proposition follows from Lemma 4.2. \end{proof} We study the invariants $s(p)$ and $s(p',p)$ of Theorem 4.1 using the deformation retractions $\psi$ and $\psi'$. \begin{defi} (The equivalence relation $\sim_{i(r)}$) \emph{Let $r \in [0,1]$. We define an equivalence relation $\sim_{i(r)}$ on $C'(k)$ as follows. We set $x'_1 \sim_{i(r)} x'_2$ if and only if $\phi(x'_1) = \phi(x'_2)$, $\psi'(r,x'_1) = \psi'(r,x'_2)$ and the elements of the tangent space $T_{\psi'(r,x'_1)} = T_{\psi'(r,x'_2)}$ defined by the paths $\psi'(\_,x'_1) : [0,r] \to C'^{\mathrm{an}}$, $\psi'(\_,x'_2) : [0,r] \to C'^{\mathrm{an}}$ coincide. For $x' \in C'(k)$, let $\mathrm{card} [x']_{i(r)}$ be the cardinality of the equivalence class which contains $x'$. } \end{defi} \begin{defi} (The real number $r_p$, the set $Q_{p',p}$ and the invariant $i(p',p)$) \emph{Let $p \in C^{\mathrm{an}}$ be a point which is not of type IV and $p' \in (\phi^{\mathrm{an}})^{-1}(p)$.
\begin{enumerate} \item We define $r_p \in [0,1]$ to be the smallest real number for which $p \in \psi(r_p,C(k))$ where $\psi(r_p,C(k)) := \{\psi(r_p,x) | x \in C(k)\}$. \item We define $Q_{p',p} := \{x' \in C'(k)|\psi'(r_p,x') = p'\}$. \item Let $i(p',p) := \mathrm{min}_{x' \in Q_{p',p}} \{\mathrm{card} [x']_{i(r_p)}\}$. \end{enumerate} } \end{defi} Recall that if $p$ is a point of type II then we used $s(p',p)$ to denote the separable degree of the field extension $\widetilde{\mathcal{H}(p)} \hookrightarrow \widetilde{\mathcal{H}(p')}$ and we set $s(p',p) = 1$ otherwise. \begin{prop} Let $p' \in C'^{\mathrm{an}}, p := \phi^{\mathrm{an}}(p')$. \begin{enumerate} \item When $p$ is of type II, the number $i(p',p)$ (cf. Definition 4.6) is the degree of inseparability of the extension $\widetilde{\mathcal{H}(p)} \hookrightarrow \widetilde{\mathcal{H}(p')}$. Hence $s(p',p) = [\mathcal{H}(p'):\mathcal{H}(p)]/i(p',p)$. \item When $p$ is not of type II or IV, $s(p) := \Sigma_{p' \in (\phi^{\mathrm{an}})^{-1}(p)} s(p',p)$ is the number of $\sim_{i(r_p)}$ equivalence classes in $\phi^{-1}(x)$ for any $x \in C(k)$ such that $\psi(r_p,x) = p$. \end{enumerate} \end{prop} \begin{proof} The second assertion can be verified directly and we restrict to proving the proposition for points of type II. Let $\tilde{C}_p$ and $\tilde{C}'_{p'}$ denote the smooth projective curves corresponding to the function fields $\widetilde{\mathcal{H}(p)}$ and $\widetilde{\mathcal{H}(p')}$ respectively. For a point $e \in \tilde{C}_p$, let $s_e$ denote a uniformisant of the local ring $O_{\tilde{C}_p,e}$. Let $x \in C(k)$ be such that $\psi(r_p,x) = p$. Since $\Upsilon_{C^{\mathrm{an}}}$ contains every $k$-point over which the morphism $\phi$ is ramified, $\phi$ is unramified over $x$. Let $e_x \in \tilde{C}_p$ correspond to the path $\psi(\_,x) : [0,r_p] \to C^{\mathrm{an}}$.
Let $t_x$ be a uniformisant of the local ring $O_{C,x}$ such that $|t_x(p)| = 1$ and such that it does not have any zeros or poles at any $y$ for which the element of the tangent space $T_p$ defined by $\psi(\_,y) : [0,r_p] \to C^{\mathrm{an}}$ coincides with $e_x$. It follows that the image of $t_x$ in the field $\widetilde{{\mathcal{H}(p)}}$ is a uniformisant at the point $e_x$. We can hence assume $\tilde{t}_x = s_{e_x}$. Let $e' \in \tilde{C}'_{p'}$ map to $e_x$ and $y' \in C'(k)$ be such that the element of $T_{p'}$ defined by the path $\psi'(\_,y') : [0,r_p] \to C'^{\mathrm{an}}$ coincides with $e'$. By the non-Archimedean Poincaré-Lelong theorem, the order of vanishing of the uniformisant $\tilde{t}_x$ at $e'$ is equal to the cardinality of the equivalence class $[y']_{i(r_p)}$. The inseparable degree of $\widetilde{\mathcal{H}(p')}/\widetilde{\mathcal{H}(p)}$ is equal to $\mathrm{min}_{\{(e',e)|e' \in \tilde{C}'_{p'}, e' \mapsto e\}} \{\mathrm{ord}_{e'}(s_e)\}$, i.e., $\mathrm{min}_{\{(e',e)|e' \in \tilde{C}'_{p'}, e' \mapsto e\}} \{\mathrm{ord}_{e'}(\tilde{t}_x)\}$. Hence $i(p',p) = \mathrm{min}_{x' \in Q_{p',p}} \{\mathrm{card} [x']_{i(r_p)}\}$ is the degree of inseparability of the extension $\widetilde{\mathcal{H}(p')}/\widetilde{\mathcal{H}(p)}$. The equality $s(p',p) = [\mathcal{H}(p'):\mathcal{H}(p)]/i(p',p)$ follows from Lemma 4.2. \end{proof} \begin{defi} \emph{ Let $p \in C^{\mathrm{an}}$. We define $i(p) := \Sigma_{p' \in (\phi^{\mathrm{an}})^{-1}(p)} [\mathcal{H}(p'):\mathcal{H}(p)]/i(p',p)$ where $i(p',p)$ is as in Definition 4.6.} \end{defi} Proposition 4.7, Theorem 4.1 and the fact that $g_p = 0$ when $p$ is not of type II imply the following corollary. \begin{cor} Let $\phi : C' \to C$ be a finite separable morphism between smooth projective curves over the field $k$. Let $g^{\mathrm{an}}(C'), g^{\mathrm{an}}(C)$ be as in Definition 2.26. We have the following equation.
\begin{align*} 2g^{\mathrm{an}}(C') - 2 = \mathrm{deg}(\phi)(2g^{\mathrm{an}}(C) - 2) + \Sigma_{p \in C^{\mathrm{an}}} 2 i(p) g_p + \mathrm{deg}(R) - \Sigma_{p \in C^{\mathrm{an}}} R^1_p. \end{align*} \end{cor} \section{A second calculation of $g^{\mathrm{an}}(C')$} Let $\phi : C' \to C$ be a finite morphism between smooth projective curves over the field $k$. Our results in Section 3 imply the existence of a pair of deformation retractions $\psi'$, $\psi$ on $C'^{\mathrm{an}}$ and $C^{\mathrm{an}}$ which are compatible with the morphism $\phi^{\mathrm{an}}$. We choose $\psi$ and $\psi'$ as in the proof of Theorem 4.1. Let $\Upsilon_{C'^{\mathrm{an}}}$ and $\Upsilon_{C^{\mathrm{an}}}$ denote the images of the retractions $\psi'$ and $\psi$ respectively. The deformation retractions $\psi, \psi'$ can be constructed so that $\Upsilon_{C^{\mathrm{an}}}$ contains those points of $C(k)$ over which the morphism $\phi$ is ramified and does not contain any point of type IV. We have that $g^{\mathrm{an}}(C') = g(\Upsilon_{C'^{\mathrm{an}}})$ and $g^{\mathrm{an}}(C) = g(\Upsilon_{C^{\mathrm{an}}})$. \begin{defi} \emph{A} divisor \emph{on a finite metric graph is an element of the free abelian group generated by the points of the graph.} \end{defi} As outlined in the introduction, in this section we introduce a divisor $w$ on the skeleton $\Upsilon_{C^{\mathrm{an}}}$ and relate the degree of this divisor to the genus of the skeleton $\Upsilon_{C'^{\mathrm{an}}}$. The point of doing so is to study how $g(\Upsilon_{C'^{\mathrm{an}}})$ can be calculated in terms of $g(\Upsilon_{C^{\mathrm{an}}})$ and the behaviour of the morphism between the sets of vertices. We preserve our choices of vertex sets and edge sets for the two skeleta from the proof of Theorem 4.1. \begin{defi} (The invariant $n_p$ and the sets of tangent directions $E_p$, $L(e_p,p')$.) \emph{Let $p \in \Upsilon_{C^{\mathrm{an}}}$.
\begin{enumerate} \item Let $n_p$ denote the number of preimages of $p$ for the morphism $\phi^{\mathrm{an}}$. \item Let $T_p$ denote the tangent space at the point $p$ (cf. 2.2.3, 2.4.1). We define $E_{p,\Upsilon_{C^{\mathrm{an}}}} \subset T_p$ to be those elements for which there exists a representative starting from $p$ and contained completely in $\Upsilon_{C^{\mathrm{an}}}$. When there is no ambiguity concerning the graph $\Upsilon_{C^{\mathrm{an}}}$, we simplify notation and write $E_p$. \item For any $p' \in C'^{\mathrm{an}}$ such that $\phi^{\mathrm{an}}(p') = p$, the morphism $\phi^{\mathrm{an}}$ induces a map $d\phi_{p'}$ between the tangent spaces $T_{p'}$ and $T_{p}$ (cf. 2.2.3, 2.4.1). For $p' \in (\phi^{\mathrm{an}})^{-1}(p)$ and $e_p \in E_p$, we define $L(e_p,p') \subset T_{p'}$ to be the set of preimages of $e_p$ for the map $d\phi_{p'}$ and $l(e_p,p')$ to be the cardinality of the set $L(e_p,p')$. \end{enumerate}} \end{defi} Observe that as $\Upsilon_{C'^{\mathrm{an}}} = (\phi^{\mathrm{an}})^{-1}(\Upsilon_{C^{\mathrm{an}}})$, any element of $L(e_p,p')$ can be represented by a geodesic segment that is contained completely in $\Upsilon_{C'^{\mathrm{an}}}$. \begin{defi} (The divisor $w$ of $\Upsilon_{C^{\mathrm{an}}}$) \emph{Let the notation be as in Definition 5.2. For a point $p \in \Upsilon_{C^{\mathrm{an}}}$, let $w(p) := (\sum_{e_p \in E_p, p' \in (\phi^{\mathrm{an}})^{-1}(p)} l(e_p,p')) - 2n_p$. We define $w$ to be the divisor $\Sigma_{p \in \Upsilon_{C^{\mathrm{an}}}} w(p)p$.} \end{defi} \begin{prop} The degree of the divisor $w$ is equal to $2g(\Upsilon_{C'^{\mathrm{an}}}) - 2$. \end{prop} \begin{proof} We begin by stating the following fact concerning connected, finite metric graphs. Let $\Sigma$ be a connected, finite metric graph. Let $p \in \Sigma$. Let $U$ be a simply connected neighborhood of $p$ in $\Sigma$. 
We define $t_p$ to be the cardinality of the set of connected components of the space $U \smallsetminus \{p\}$ and $D_\Sigma := \sum_{p \in \Sigma} (t_p - 2)p$. It can be verified that $D_\Sigma$ is a divisor on the finite graph $\Sigma$ whose degree is equal to $2g(\Sigma) - 2$. The connected, finite graphs $\Upsilon_{C'^{\mathrm{an}}}$ and $\Upsilon_{C^{\mathrm{an}}}$ are the images of a pair of compatible deformation retractions. Hence the morphism $\phi^{\mathrm{an}}$ restricts to a continuous map $\Upsilon_{C'^{\mathrm{an}}} \to \Upsilon_{C^{\mathrm{an}}}$. This map induces a homomorphism $\phi_* : \mathrm{Div}(\Upsilon_{C'^{\mathrm{an}}}) \to \mathrm{Div}(\Upsilon_{C^{\mathrm{an}}})$ defined as follows. We define $\phi_*$ only on the generators of the group $\mathrm{Div}(\Upsilon_{C'^{\mathrm{an}}})$. If $1 \cdot p' \in \mathrm{Div}(\Upsilon_{C'^{\mathrm{an}}})$ then we set $\phi_*(1 \cdot p') = 1 \cdot \phi^{\mathrm{an}}(p')$. Note that for any divisor $D' \in \mathrm{Div}(\Upsilon_{C'^{\mathrm{an}}})$, $\mathrm{deg}(\phi_*(D')) = \mathrm{deg}(D')$. We will show that $w = \phi_*(D_{\Upsilon_{C'^{\mathrm{an}}}})$. By definition, \begin{align*} \phi_*(D_{\Upsilon_{C'^{\mathrm{an}}}})(p) = (\sum_{p' \in (\phi^{\mathrm{an}})^{-1}(p)} t_{p'}) - 2n_p . \end{align*} Let $p' \in \Upsilon_{C'^{\mathrm{an}}}$ and $p = \phi^{\mathrm{an}}(p')$. We must have that the number of distinct germs of geodesic segments starting from $p'$ and contained in $\Upsilon_{C'^{\mathrm{an}}}$ is $t_{p'}$. We have a map $d\phi_{p'} : T_{p'} \to T_p$ which maps germs of geodesic segments starting at $p'$ to germs of geodesic segments starting at $p$. As $\Upsilon_{C'^{\mathrm{an}}} = (\phi^{\mathrm{an}})^{-1}(\Upsilon_{C^{\mathrm{an}}})$, we must have that the image via $d\phi_{p'}$ of a germ for which there exists a representative contained in $\Upsilon_{C'^{\mathrm{an}}}$ and starting from $p'$ must be a germ starting at $p$ for which there exists a representative contained in $\Upsilon_{C^{\mathrm{an}}}$.
Likewise, if $e_p$ is a germ starting at $p$ which has a representative contained in $\Upsilon_{C^{\mathrm{an}}}$ then its preimage for the map $d\phi_{p'}$ is a germ starting at $p'$ for which there exists a representative contained in $\Upsilon_{C'^{\mathrm{an}}}$. It follows that $\sum_{p' \in (\phi^{\mathrm{an}})^{-1}(p)}t_{p'} = \sum_{e_p \in E_p, p' \in (\phi^{\mathrm{an}})^{-1}(p)} l(e_p,p')$. Hence $\phi_*(D_{\Upsilon_{C'^{\mathrm{an}}}}) = w$. \end{proof} \subsection{Calculating $n_p$} We extend the invariant $n_p$ of Definition 5.2 to all points of $C^{\mathrm{an}}$. \begin{defi}(The invariant $n_p$) \emph{Let $p \in C^{\mathrm{an}}$. Let $n_p$ denote the number of preimages of $p$ for the morphism $\phi^{\mathrm{an}}$.} \end{defi} In this section we study $n_p$ for $p \in C^{\mathrm{an}}$ with the added restriction that the extension of function fields $k(C) \hookrightarrow k(C')$ associated to the morphism $\phi$ is Galois. \begin{defi}(The invariant $\mathrm{ram}(p)$ for $p \in C^{\mathrm{an}}$) \emph{ Let $p \in C^{\mathrm{an}}$. \begin{enumerate} \item Let $p$ be a point of type I, i.e., $p \in C(k)$. Let $p' \in C'(k)$ be such that $\phi(p') = p$. Let $\mathrm{ram}(p',p)$ denote the ramification degree associated to the extension of the discrete valuation rings $O_{C,p} \hookrightarrow O_{C',p'}$. Since the morphism $\phi$ is Galois, for $p \in C(k)$, the ramification degree $\mathrm{ram}(p',p)$ is constant as $p'$ varies along the set of preimages of the point $p$. The ramification degree depends only on the point $p \in C(k)$ and we denote it by $\mathrm{ram}(p)$. As $k$ is algebraically closed we have that \begin{align*} [k(C') : k(C)] = n_p \mathrm{ram}(p). \end{align*} \item When $p$ is not of type I, we define $\mathrm{ram}(p) := 1$. \end{enumerate}} \end{defi} Let $p$ be a point of $C^{\mathrm{an}}$ which is not of type IV.
Recall that $r_p$ is the smallest real number in the real interval $[0,1]$ such that $p$ belongs to $\psi(r_p,C(k)) = \{\psi(r_p,x) | x \in C(k)\}$. Since the pair of deformation retractions $\psi'$ and $\psi$ are compatible with the morphism $\phi^{\mathrm{an}}$, we must have that $(\phi^{\mathrm{an}})^{-1}(p) \subset \psi'(r_p,C'(k))$. \begin{defi}(The equivalence relation $\sim_{c(r)}$ on $C'(k)$) \emph{ For $r \in [0,1]$, we define an equivalence relation $\sim_{c(r)}$ on the set of $k$-points of the curve $C'$. Let $x'_1,x'_2 \in C'(k)$. We set $x'_1 \sim_{c(r)} x'_2$ if $\phi(x'_1) = \phi(x'_2)$ and $\psi'(r,x'_1) = \psi'(r,x'_2)$. Observe that each equivalence class is finite. For $x' \in C'(k)$, let $[x']_{c(r)}$ denote the equivalence class containing the point $x'$.} \end{defi} \begin{lem} If $x'_1, x'_2 \in C'(k)$ are such that $\phi(x'_1) = \phi(x'_2)$ then \begin{align*} \mathrm{card} [x'_1]_{c(r)} = \mathrm{card} [x'_2]_{c(r)} \end{align*} for all $r \in [0,1]$. \end{lem} \begin{proof} The lemma is tautological when $x'_1 \sim_{c(r)} x'_2$. Let us hence assume that $\psi'(r,x'_1) = p'_1$ and $\psi'(r,x'_2) = p'_2$ where $p'_1$ and $p'_2$ are two distinct points on $C'^{\mathrm{an}}$. Observe that since $\Upsilon_{C'^{\mathrm{an}}}$ and $\Upsilon_{C^{\mathrm{an}}}$ are the images of a pair of compatible deformation retractions, $\phi^{\mathrm{an}}(p'_1) = \phi^{\mathrm{an}}(p'_2)$. Let $p := \phi^{\mathrm{an}}(p'_1)$. The Galois group $G := \mathrm{Gal}(k(C')/k(C))$ acts transitively on the set of preimages $(\phi^{\mathrm{an}})^{-1}(p)$. Let $\sigma \in G$ be an element of the Galois group such that $\sigma(p'_1) = p'_2$. By construction, the deformation retraction $\psi'$ is Galois invariant, i.e., if $t \in [0,1], q \in C'^{\mathrm{an}}$ and $g \in \mathrm{Gal}(k(C')/k(C))$ then $\psi'(t,g(q)) = g(\psi'(t,q))$. It follows that if $a \sim_{c(r)} x'_1$ then $\sigma(a) \sim_{c(r)} x'_2$. As $\sigma$ is bijective, $\mathrm{card} [x'_1]_{c(r)} \leq \mathrm{card} [x'_2]_{c(r)}$.
By symmetry, the reverse inequality also holds, and the lemma follows. \end{proof} \begin{defi} (The invariant $c_r(x)$ for $x \in C(k)$) \emph{ Let $x \in C(k)$ and $x' \in C'(k)$ be such that $\phi(x') = x$. We define \begin{align*} c_r(x) := \mathrm{card} [x']_{c(r)}. \end{align*} Lemma 5.8 implies that $c_r(x)$ is well defined.} \end{defi} \begin{prop} Let $p \in C^{\mathrm{an}}$ be a point which is not of type IV. We have the following equality. \begin{align*} n_p = [k(C') : k(C)]/ (c_{r_p}(x)\mathrm{ram}(x)) \end{align*} for any $x \in C(k)$ such that $\psi(r_p,x) = p$. \end{prop} \begin{proof} When $p \in C(k)$, we must have that if $x \in C(k)$ is such that $\psi(r_p,x) = p$ then $x = p$ and $r_p = 0$. Hence $c_{r_p}(p) = 1$ and the proposition amounts to showing that $[k(C') : k(C)] = n_p\mathrm{ram}(p)$, which is a well-known calculation. Suppose $p \in C^{\mathrm{an}} \smallsetminus C(k)$. Let $x \in C(k)$ be such that $\psi(r_p,x) = p$. As the deformation retractions $\psi$ and $\psi'$ are compatible, we must have that $\psi'(r_p,y) \in (\phi^{\mathrm{an}})^{-1}(p)$ for every $y \in \phi^{-1}(x)$. Furthermore, given $q \in C'^{\mathrm{an}}$ which maps to $p$ via $\phi^{\mathrm{an}}$, there exists a $y \in \phi^{-1}(x)$ such that $\psi'(r_p,y) = q$. This can be deduced from the Galois invariance of the deformation retraction $\psi'$. As $\psi$ fixes the $k$-points of $C$ over which $\phi$ is ramified, any $x \in C(k)$ such that $\psi(r_p,x) = p$ is a point over which $\phi$ is unramified, so that $n_x = [k(C') : k(C)]$ and $\mathrm{ram}(x) = 1$. The proposition can be deduced from these observations. \end{proof} Observe that if $x \in C(k)$ is such that $\psi(r_p,x) = p$ then $c_{r_p}(x) = c_s(x)$ for any $s \in [r_p,1]$. This observation and Proposition 5.10 allow us to define the following invariant $c_{1}(p)$ for $p \in \Upsilon_{C^{\mathrm{an}}}$. \begin{defi}(The invariant $c_1(p)$ for $p \in \Upsilon_{C^{\mathrm{an}}}$) \emph{Let $p \in \Upsilon_{C^{\mathrm{an}}}$.
The function $c_1 : C(k) \to \mathbb{Z}_{\geq 0}$ factors through $\Upsilon_{C^{\mathrm{an}}}$ via the retraction $\psi(1,\_)$. Hence we have $c_1 : \Upsilon_{C^{\mathrm{an}}} \to \mathbb{Z}_{\geq 0}$. By Proposition 5.10,} \begin{align*} n_p = [k(C') : k(C)]/ (c_1(p)\mathrm{ram}(p)) \end{align*} \emph{where $\mathrm{ram}(p)$ is as in Definition 5.6.} \end{defi} \subsection{Calculating $l(e_p,p')$} \begin{lem} Let $p \in \Upsilon_{C^{\mathrm{an}}}$ and $e_p \in E_p$. Then $l(e_p,p')$ is constant as $p'$ varies through the set of preimages of $p$ for the morphism $\phi^{\mathrm{an}}$. \end{lem} \begin{proof} Let $p'_1,p'_2 \in (\phi^{\mathrm{an}})^{-1}(p)$. The Galois group $\mathrm{Gal}(k(C')/k(C))$ acts transitively on the set of preimages of the point $p$. As $\Upsilon_{C'^{\mathrm{an}}} = (\phi^{\mathrm{an}})^{-1}(\Upsilon_{C^{\mathrm{an}}})$, the elements of the Galois group are homeomorphisms of $C'^{\mathrm{an}}$ which restrict to homeomorphisms of $\Upsilon_{C'^{\mathrm{an}}}$. It follows that if $\sigma \in \mathrm{Gal}(k(C')/k(C))$ is such that $\sigma(p'_1) = p'_2$ then $\sigma$ maps the set of germs $L(e_p,p'_1)$ injectively into the set $L(e_p,p'_2)$. By symmetry, the reverse inequality also holds, which completes the proof. \end{proof} \begin{defi} (The invariants $l(e_p)$ and $\widetilde{\mathrm{ram}}(e_p)$ for $p \in \Upsilon_{C^{\mathrm{an}}}$ and $e_p \in E_p$) \emph{Let $p \in \Upsilon_{C^{\mathrm{an}}}$ and $e_p \in E_p$ (Definition 5.2). \begin{enumerate} \item We define $l(e_p) := l(e_p,p')$ for any $p' \in (\phi^{\mathrm{an}})^{-1}(p)$. Lemma 5.12 implies that $l(e_p)$ is well defined. \item \begin{enumerate} \item By Section 2.4.3, when $p$ is a point of type II, $e_p$ corresponds to a discrete valuation of the $\tilde{k}$-function field $\widetilde{\mathcal{H}(p)}$.
For any $p' \in (\phi^{\mathrm{an}})^{-1}(p)$, the extension of fields $\widetilde{\mathcal{H}(p)} \hookrightarrow \widetilde{\mathcal{H}(p')}$ can be decomposed into the composite of a purely inseparable extension and a Galois extension. Hence the ramification degree ${\mathrm{ram}}(e'/e_p)$ is constant as $e'$ varies through the set of preimages of $e_p$ for the morphism $d\phi^{\mathrm{alg}}_{p'} : T_{p'} \to T_p$ (cf. 2.4.1). Let $\widetilde{\mathrm{ram}}(e_p)$ be this number. \item When $p$ is of type I, the set $E_p$ contains only one element and we set $\widetilde{\mathrm{ram}}(e_p) := \mathrm{ram}(p)$. \item When $p$ is of type III, let $\widetilde{\mathrm{ram}}(e_p) := c_1(p)$. \end{enumerate} \end{enumerate}} \end{defi} Applying Proposition 5.4 and Lemma 5.12, the value $2g^{\mathrm{an}}(C') - 2$ can be calculated in terms of $l(e_p)$ as follows. \begin{prop} Let the notation be as in Definition 5.13. We have that \begin{align*} 2g^{\mathrm{an}}(C') - 2 = \Sigma_{p \in \Upsilon_{C^{\mathrm{an}}}} n_p ((\Sigma_{e_p \in E_p} l(e_p)) - 2). \end{align*} \end{prop} \begin{prop} Let $p \in \Upsilon_{C^{\mathrm{an}}}$ and $e_p \in E_p$. The following equality holds. \begin{align*} l(e_p) = [k(C'):k(C)]/(n_p\widetilde{\mathrm{ram}}(e_p)). \end{align*} \end{prop} \begin{proof} When $p$ is a point of type I or III, we must have that $l(e_p)$ is $1$ and hence the proposition can be easily verified by applying Proposition 5.10. Let us suppose that $p$ is a point of type II. The morphism $\phi : C' \to C$ corresponds to an extension of function fields $k(C) \hookrightarrow k(C')$ which is Galois. As $p \in C^{\mathrm{an}}$ is of type II, it corresponds to a multiplicative norm on the function field $k(C)$. The set of preimages $(\phi^{\mathrm{an}})^{-1}(p)$ corresponds to the set of multiplicative norms on $k(C')$ which extend the multiplicative norm $p$ on $k(C)$.
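To illustrate the counting in the simplest hypothetical case: if $[k(C'):k(C)] = 2$, then either the norm $p$ admits two extensions to $k(C')$, each with $[\mathcal{H}(p'):\mathcal{H}(p)] = 1$, or it admits a single extension $p'$ with $[\mathcal{H}(p'):\mathcal{H}(p)] = 2$. In either case
\begin{align*}
\Sigma_{p' \in (\phi^{\mathrm{an}})^{-1}(p)} [\mathcal{H}(p') : \mathcal{H}(p)] = [k(C'):k(C)],
\end{align*}
in accordance with the defectlessness established in Lemma 4.2.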
For every $p' \in (\phi^{\mathrm{an}})^{-1}(p)$, $\mathcal{H}(p')$ is the completion of $k(C')$ with respect to $p'$ and is a finite extension of the non-Archimedean valued complete field $\mathcal{H}(p)$. The Galois group $\mathrm{Gal}(k(C')/k(C))$ acts transitively on the set $(\phi^{\mathrm{an}})^{-1}(p)$. It follows that the degree of the extension $[\mathcal{H}(p'):\mathcal{H}(p)]$ is constant as $p'$ varies through the set $(\phi^{\mathrm{an}})^{-1}(p)$. We denote this number by $f(p)$. Hence we have that \begin{align*} [k(C'):k(C)] = n_pf(p). \end{align*} By Lemma 4.2, $f(p) = [\widetilde{\mathcal{H}(p')}:\widetilde{\mathcal{H}(p)}]$. Uniquely associated to the $\tilde{k}$-function fields $\widetilde{\mathcal{H}(p)}$ and $\widetilde{\mathcal{H}(p')}$ are smooth, projective $\tilde{k}$-curves denoted $\tilde{C}_p$ and $\tilde{C}'_{p'}$. The germ $e_p$ corresponds to a closed point on the former of these curves. The number $l(e_p)$ is the cardinality of the set of preimages of the closed point $e_p$ for the morphism $\tilde{C}'_{p'} \to \tilde{C}_p$ induced by $\phi^{\mathrm{an}}$. The result now follows from [\cite{Liu}, Theorem 7.2.18] applied to the $\tilde{k}$-function fields $\widetilde{\mathcal{H}(p)}, \widetilde{\mathcal{H}(p')}$ and the divisor $e_p$. \end{proof} The results of this section can be compiled so that the value $2g^{\mathrm{an}}(C') - 2$ can be computed in terms of the invariant $\widetilde{\mathrm{ram}}$ introduced below and the invariants $\mathrm{ram}$ and $c_1$ from Definitions 5.6 and 5.11. \begin{defi}(The invariant $\widetilde{\mathrm{ram}}(p)$ for $p \in \Upsilon_{C^{\mathrm{an}}}$) \emph{Let $p \in \Upsilon_{C^{\mathrm{an}}}$. We define $\widetilde{\mathrm{ram}}(p) := \Sigma_{e_p \in E_p} (1/ \widetilde{\mathrm{ram}}(e_p))$.} \end{defi} The following theorem can be verified using Propositions 5.14 and 5.10.
\begin{thm} Let $\phi : C' \to C$ be a finite morphism between smooth projective irreducible $k$-curves such that the extension of function fields $k(C) \hookrightarrow k(C')$ induced by $\phi$ is Galois. Let $g^{\mathrm{an}}(C')$ be as in Definition 2.26. For $p \in \Upsilon_{C^{\mathrm{an}}}$, let $\widetilde{\mathrm{ram}}(p), c_1(p)$ and $\mathrm{ram}(p)$ be the invariants introduced in Definitions 5.6, 5.11 and 5.16. We have that \begin{align*} 2g^{\mathrm{an}}(C') - 2 = \mathrm{deg}(\phi) \Sigma_{p \in \Upsilon_{C^{\mathrm{an}}}} [\widetilde{\mathrm{ram}}(p) - 2/(c_1(p)\mathrm{ram}(p))]. \end{align*} \end{thm} \section{Appendix} At several instances over the course of this paper, we have used the following fact. Let $\phi : C' \to C$ be a finite surjective morphism between $k$-curves where $C$ is in addition normal. The induced morphism $\phi^{\mathrm{an}} :C'^{\mathrm{an}} \to C^{\mathrm{an}}$ is then open. The following lemma justifies this statement. \begin{lem}  Let $F$ be a non-Archimedean complete nontrivially real-valued field. Let $\phi : V \to W$ be a finite surjective morphism between irreducible $F$-varieties with $W$ normal. The induced morphism $\phi^{\mathrm{an}} : V^{\mathrm{an}} \to W^{\mathrm{an}}$ is an open morphism. \end{lem}  \begin{proof} We apply Lemma 3.2.4 in \cite{berk} to prove the lemma. Clearly, we need only show that if $W$ is normal then $W^{\mathrm{an}}$ is locally irreducible. By 3.4.3 in loc. cit., $W^{\mathrm{an}}$ is a normal $F$-analytic space. Let $x \in W^{\mathrm{an}}$ and $U \subset W^{\mathrm{an}}$ be an $F$-analytic neighborhood of $x$. Let $U' \subset U$ be the connected component that contains $x$. The space $U'$ is a normal $F$-analytic space. By 3.1.8 in loc. cit., it must be irreducible. This completes the proof. \end{proof} \end{document}
\begin{document} \title{\STErev{Full quantum state reconstruction of symmetric two-mode squeezed thermal states via spectral homodyne detection}} \author{Simone Cialdi} \affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano, I-20133 Milano, Italy} \affiliation{Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milan, Italy} \author{Carmen Porto} \affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano, I-20133 Milano, Italy} \author{Daniele Cipriani} \affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano, I-20133 Milano, Italy} \author{Stefano Olivares} \affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano, I-20133 Milano, Italy} \affiliation{Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milan, Italy} \author{Matteo G. A. Paris} \affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano, I-20133 Milano, Italy} \affiliation{Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milan, Italy} \date{\today} \begin{abstract} \STErev{ We suggest and demonstrate a scheme to reconstruct the symmetric two-mode squeezed thermal states of spectral sideband modes from an optical parametric oscillator. The method is based on a single homodyne detector and active stabilization of the cavity. The measurement scheme has been successfully tested on different two-mode squeezed thermal states, ranging from uncorrelated coherent states to entangled states.} \end{abstract} \pacs{03.65.Wj, 42.50.Lc, 42.50.Dv} \maketitle {\it Introduction} -- Homodyne detection (HD) is an effective tool to characterize the quantum state of light in either the time \cite{tim1,tim2,tim3,tim4,tim5,tim6,tim7,tim8} or the frequency \cite{pik75,yue83,sch84,abb83,yur85,smi9xa,smi9xb,smi9xc,vog89,dar94,mun95,sch96a,sch96b,rev03,rev09,dis1,dis2,dis3,dis4,dis5} domain.
In a spectral homodyne detector, the signal under investigation interferes at a balanced beam splitter with a local oscillator (LO) with frequency $\omega_0$. The two outputs undergo a photodetection process and their photocurrents are combined, leading to a photocurrent continuously varying in time. The information about the spectral field modes at frequencies $\omega_0 \pm \Omega$ (sidebands) is then retrieved by electronically mixing the photocurrent with a reference signal with frequency $\Omega$ and phase $\Psi$. Upon varying the phase $\theta$ of the LO, we may access different field quadratures, whereas the phase $\Psi$ can be adjusted to select the symmetric ${\mathcal S}$ or antisymmetric ${\mathcal A}$ balanced combinations of the upper and lower sideband modes. \par Measuring only the modes ${\mathcal S}$ and ${\mathcal A}$ \STErev{through homodyne detection is not enough} to assess the spectral correlation between the modes under investigation \cite{bar:PRA:13} and, in turn, to fully characterize a generic quantum state. In order to retrieve the full information about the sidebands it has been suggested that one should spatially separate the two modes \cite{hunt:05,hun02} or implement more sophisticated setups \cite{bar:PRA:13,bar:PRL:13} involving resonator detection. On the other hand, it would be desirable to have schemes which do not require structural modifications of the experimental setup. In turn, this would make it possible to embed diagnostic tools more easily in interferometry \cite{som03} and continuous-variable-based quantum technology.
\STErev{Remarkably, in the relevant cases of interest for continuous variable quantum information, such as squeezed state generation by spontaneous parametric down-conversion, the correlation between the modes vanishes, due to the symmetric nature of the generated state.} \par \STErev{ In this Letter we suggest and demonstrate a measurement scheme where the relevant information for the quantum state reconstruction of symmetric spectral modes is obtained by using a single homodyne detector with active stabilization through the Pound-Drever-Hall (PDH) technique \cite{PDH}. In particular, the reconstruction is achieved by exploiting the phase coherence of the setup, guaranteed in every step of the experiment, and two auxiliary combinations of the sideband modes selected by setting the mixer phase at $\Psi = \pm \pi/4$.} \begin{figure} \includegraphics[width=0.99\columnwidth]{Fig1.pdf} \caption{\label{f:scheme} Schematic diagram of the experimental setup. See the main text for details.} \end{figure} \par {\it Homodyne detection and state reconstruction} -- A schematic diagram of our apparatus is sketched in Fig.~\ref{f:scheme}. The principal radiation source is provided by a home-made Nd:YAG laser ($\sim$300~mW @1064~nm and 532~nm) internally-frequency-doubled by a periodically poled MgO:LiNbO$_3$ (PPLN in Fig.~\ref{f:scheme}). To obtain single-mode operation, an optical diode is placed inside the laser cavity. One laser output (@532~nm) pumps the MgO:LiNbO$_3$ crystal of the optical parametric oscillator (OPO) whereas the other output (@1064~nm) is sent to a polarizing beam splitter (PBS) to generate the local oscillator (LO) and the seed for the OPO. The power of the LO ($\sim$10~mW) is set by an amplitude modulator (AM). Two phase modulators (PMa and PMb in Fig.~\ref{f:scheme}) generate the sidebands used both as OPO coherent seeds and for the active stabilization of the OPO cavity with the PDH technique \cite{PDH}.
For the OPO stabilization we use a frequency of 110~MHz (HF) while the frequency $\Omega$ for the generation of the input seed is about 3~MHz. \STErev{This is indeed a major effort, but it will turn out to be fundamental for the {\em full} reconstruction of the symmetric states addressed below.} The OPO cavity is linear with a free spectral range (FSR) of 3300~MHz, the output mirror has a reflectivity of 92\% and the rear mirror of 99\%. The linewidth is about 55~MHz, thus the OPO stabilization frequency HF is well above the OPO linewidth while the frequency $\Omega$ is well inside. In order to actively control the length of the OPO cavity its rear mirror is connected to a piezo that is controlled by the error signal of the PDH apparatus. \par The detector consists of a 50:50 beam splitter, two low noise detectors and a differential amplifier based on a LMH6624 operational amplifier. The interferometer visibility is about 95\%. We remove the low frequency signal through a high-pass filter @500~kHz and then the signal is sent to the demodulation stage. To extract the information about the signal at frequency $\Omega$ we use an electronic setup consisting of a phase shifter, a mixer ($\bigotimes$ in Fig.~\ref{f:scheme}) and a low-pass filter @300~kHz. Since, as we will see in the following, we need to measure the signal at two orthogonal phases, $\Psi_1$ and $\Psi_2=\Psi_1+\pi/2$, for the sake of simplicity we implemented a double electronic setup to observe the two outputs at the same time (see Fig.~\ref{f:scheme}). Finally, the LO phase $\theta$ is scanned between 0 and $2\pi$ by a piezo connected with a mirror before the beam splitter of the HD. The acquisition time is 20~ms and we collect about 100~000 points with a 2~GHz oscilloscope.
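The action of the mixer and low-pass filter described above can be illustrated with a short classical simulation (a sketch with made-up amplitudes, not the experimental analysis code): multiplying a photocurrent component at frequency $\Omega$ by the reference $\cos(\Omega t + \Psi)$ and averaging over many periods selects a $\Psi$-dependent combination of its two quadrature components.

```python
import numpy as np

# Classical sketch of the demodulation stage: a photocurrent component at
# frequency Omega, I(t) = A cos(Omega t) + B sin(Omega t), is multiplied by
# the reference cos(Omega t + Psi) and low-pass filtered (here: averaged
# over many periods).  Amplitudes A, B are arbitrary illustrative values.
Omega = 2 * np.pi * 3e6                   # 3 MHz, as in the experiment
t = np.linspace(0, 2000 / 3e6, 400001)    # 2000 periods, 200 samples/period
A, B = 0.7, -0.4                          # in-phase / quadrature amplitudes

def demodulate(psi):
    I = A * np.cos(Omega * t) + B * np.sin(Omega * t)
    return np.mean(I * np.cos(Omega * t + psi))  # mixer + low-pass

# The average equals (A cos(Psi) - B sin(Psi)) / 2 for any mixer phase Psi.
for psi in (0.0, np.pi / 2, np.pi / 4):
    expected = 0.5 * (A * np.cos(psi) - B * np.sin(psi))
    assert abs(demodulate(psi) - expected) < 1e-3
```

Setting $\Psi$ and $\Psi+\pi/2$ in the two parallel electronic chains thus extracts two orthogonal combinations from the same trace.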
\par If $a_0(\omega_0)$ is the photon annihilation operator of the signal mode at the input of the HD, it is easy to show that the detected photocurrent can be written as (note that the ``fast term'' $\omega_0$ is canceled by the presence of the LO at the same frequency) $I(t) \propto a_0(t)\, \mbox{e}^{-i\theta} + a_0^{\dag}(t)\, \mbox{e}^{i\theta}$ \cite{bachor}, where $\theta$ is the phase difference between signal and LO and we introduced the time-dependent field operator $a_0(t)$, which is slowly varying with respect to the carrier at $\omega_0$ and is defined through $\int d\omega\, F(\omega)\, a_0(\omega)\,\mbox{e}^{-i\omega t} \equiv \mbox{e}^{-i\omega_0 t} a_0(t)$, $F(\omega)$ being the apparatus spectral response function. \par To retrieve the information about the sidebands at frequencies $\omega_0\pm \Omega$, described by the time-dependent field operators $\hat{a}_{\pm\Omega}(t)$, we use electronic mixers set at the frequency $\Omega$ with phase shift $\Psi$ with respect to the signal, leading to the current $I_{\Omega}(t,\Psi) = I(t) \cos(\Omega t + \Psi)$. Neglecting the terms proportional to $\exp(\pm 2 i \Omega t)$ (low-pass filter), we find the following expression for the operator describing the (spectral) photocurrent $I_{\Omega}(t,\Psi) \propto X_\theta (t,\Psi | \Omega)$, where $X_\theta(t,\Psi | \Omega) = b (t,\Psi | \Omega)\, \mbox{e}^{-i \theta} + b^\dag (t,\Psi | \Omega)\, \mbox{e}^{i \theta}$ is the quadrature operator associated with the field operator (note the dependence on the two sidebands): \begin{equation}\label{transform} b (t,\Psi | \Omega) = \frac{a_{+\Omega}(t)\, \mbox{e}^{ i \Psi} + a_{-\Omega}(t)\, \mbox{e}^{- i \Psi}}{\sqrt{2}}. \end{equation} Note that $\left[ b (t,\Psi | \Omega) , b^\dag (t',\Psi | \Omega) \right] = \chi_{(\Delta \omega)^{-1}}(|t-t'|)$. \par The interaction inside the OPO is bilinear and involves the sideband modes $a_{\pm\Omega}$ \cite{bachor}.
It is described by the effective Hamiltonian $H_{\Omega} \propto a_{+\Omega}^{\dag} a_{-\Omega}^{\dag} + {\rm h.c.}$, which is a two-mode squeezing interaction. Due to the linearity of $H_{\Omega}$, if the initial state is a coherent state or the vacuum, the generated two-mode state $\varrho_{\Omega}$ is a Gaussian state, namely, a state described by a Gaussian Wigner function and, thus, fully characterized by its covariance matrix (CM) ${\boldsymbol{\sigma}}_{\Omega}$ and first moment vector ${\boldsymbol{R}}$ \cite{oli:st,weed}. It is worth noting that due to the symmetry of $H_\Omega$, the two-sideband state is symmetric \cite{bar:PRA:13} and can be written as $\varrho_\Omega = D_{2}(\alpha)S_{2}(\xi) \nu_{+\Omega}(N)\otimes\nu_{-\Omega}(N) S_{2}^{\dag}(\xi)D_{2}^{\dag}(\alpha)$, where $D_2(\alpha) = \exp\{[\alpha(a_{+\Omega}^{\dag}+a_{-\Omega}^{\dag}) -\mbox{h.c.}]/\sqrt{2}\}$ is the symmetric displacement operator, $S_2(\xi) = \exp(\xi a_{+\Omega}^{\dag} a_{-\Omega}^{\dag}-\mbox{h.c.})$ is the two-mode squeezing operator, and $\nu_{\pm\Omega}(N)$ is the thermal state of mode $ a_{\pm\Omega}$ with $N$ average photons \cite{oli:st}. The state $\varrho_{\Omega}$ belongs to the so-called class of the two-mode squeezed thermal states, generated by the application of $D_{2}(\alpha)S_{2}(\xi)$ to two thermal states with (in general) different energies. In order to test our experimental setup, we acted on the OPO pump and on the phase modulation to generate and characterize three classes of states: the coherent ($\alpha \ne 0$ and $N,\xi=0$), the squeezed ($\xi,N \ne 0$ and $\alpha=0$) and the squeezed-coherent ($\alpha,\xi,N \ne 0$) two-mode sideband state.
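As a numerical sketch of this state family (our own check, not the authors' code; vacuum CM normalized to the identity, parameter values arbitrary), the CM of a two-mode squeezed thermal state can be built by applying the symplectic matrix of $S_2$ to a product of thermal CMs, and its physicality condition ${\boldsymbol{\sigma}} + i{\boldsymbol{\Omega}} \geq 0$ can be verified directly:

```python
import numpy as np

# Build the covariance matrix of a two-mode squeezed thermal state,
# sigma = S2 . sigma_th . S2^T, with the symplectic two-mode squeezer S2
# for a real squeezing parameter r.  Quadrature ordering (q_+, p_+, q_-, p_-);
# the vacuum CM is the identity in this normalization.
def tmst_cm(r, N1, N2):
    I2 = np.eye(2)
    sz = np.diag([1.0, -1.0])                      # Pauli sigma_z
    S2 = np.block([[np.cosh(r) * I2, np.sinh(r) * sz],
                   [np.sinh(r) * sz, np.cosh(r) * I2]])
    sigma_th = np.diag([2 * N1 + 1] * 2 + [2 * N2 + 1] * 2)
    return S2 @ sigma_th @ S2.T

def is_physical(sigma):
    # Physicality (uncertainty) condition: sigma + i*Omega >= 0,
    # with Omega the symplectic form.
    w = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    return bool(np.all(np.linalg.eigvalsh(sigma + 1j * w) > -1e-9))

sigma = tmst_cm(r=0.8, N1=0.3, N2=0.1)
assert is_physical(sigma)
```

Any choice of $r \geq 0$ and $N_1, N_2 \geq 0$ yields a physical CM here, as expected for a bona fide Gaussian state.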
We now consider the mode operators: \begin{equation}\label{AS} b (t, 0 | \Omega) \equiv a_{\rm s}, \quad \mbox{and} \quad b (t,\pi/2 | \Omega) \equiv a_{\rm a} , \end{equation} which correspond to the symmetric (${\mathcal S}$) and antisymmetric (${\mathcal A}$) combinations of the sideband modes, respectively, and the corresponding quadrature operators $q_{k} = X_{0}(t,\Psi_k |\Omega)$, $p_{k} = X_{\pi/2}(t,\Psi_k |\Omega)$, and $z^{\pm}_{k} = X_{\pm\pi/4}(t,\Psi_k |\Omega)$, $k={\rm a}, {\rm s}$, with $\Psi_{\rm s} =0$ and $\Psi_{\rm a} = \pi/2 $. In the ${\mathcal S} / {\mathcal A}$ modal basis, the first moment vector of $\varrho_{\Omega}$ reads ${\boldsymbol{R}}' = (\ave{q_{\rm s}},\ave{p_{\rm s}}, \ave{q_{\rm a}},\ave{p_{\rm a}})^T$ and \STErev{ its $4 \times 4$ CM can be written in the following block-matrix form: \begin{equation}\label{sigma:p:stationary} {\boldsymbol{\sigma}}'= \left(\begin{array}{cc} {\boldsymbol{\sigma}}_{\rm s} & {\boldsymbol{\sigma}}_\delta \\[1ex] {\boldsymbol{\sigma}}_\delta^T &{\boldsymbol{\sigma}}_{\rm a} \end{array} \right),\quad {\boldsymbol{\sigma}}_\delta = \left(\begin{array}{cc} \epsilon_q & \delta_{qp} \\[1ex] \delta_{pq} & \epsilon_p \end{array} \right), \end{equation} where \cite{dauria:05}: \begin{equation}\label{sigma:p:s:m} {\boldsymbol{\sigma}}_{k} = \left(\begin{array}{cc} \ave{q_{k}^2} - \ave{q_{k}}^2 & \frac12 \ave{(z_{k}^{+})^2-(z_{k}^{-})^2} - \ave{q_{k}}\ave{p_{k}} \\[1ex] \frac12\ave{(z_{k}^{+})^2-(z_{k}^{-})^2} - \ave{q_{k}}\ave{p_{k}} & \ave{p_{k}^2} - \ave{p_{k}}^2 \end{array} \right) \end{equation} is the CM of the mode $k= {\rm a}, {\rm s}$, $\epsilon_l= \ave{l_{\rm s}l_{\rm a}} - \ave{l_{\rm s}}\ave{l_{\rm a}}$, $\delta_{l \bar{l}}= \ave{l_{\rm s} \bar{l}_{\rm a}} - \ave{l_{\rm s}}\ave{\bar{l}_{\rm a}}$ with $l,\bar{l}=q,p$ and $l \ne \bar{l}$.
The matrix elements of ${\boldsymbol{\sigma}}_k$ can be directly measured from the homodyne traces of the corresponding mode $a_k$ \cite{SM}, whereas the entries of ${\boldsymbol{\sigma}}_{\delta}$ cannot. However, the information about $\epsilon_{l}$ can be retrieved by changing the value of the mixer phase to $\Psi=\pm\pi/4$. In fact, it is easy to show that \cite{dauria:05,dauria:PRL,buono:JOSAB} $\epsilon_l = \frac12 \left(\ave{l_{+}^2} - \ave{l_{-}^2}\right) - \ave{l_{\rm s}}\ave{l_{\rm a}}$, $l=q,p$, where $q_{\pm} = X_{0}(t,\pm\pi/4 |\Omega)$ and $p_{\pm} = X_{\pi/2}(t,\pm\pi/4 |\Omega)$.} \par \STErev{We now focus on $\delta_{l\bar{l}}$. For the state $\varrho_{\Omega}$ with (in general) different thermal contributions, these elements are equal to the energy unbalance between the sidebands (without the contribution due to the displacement, which does not affect the CM) \cite{SM}, namely, $\delta_{qp} = -\delta_{pq} =\Delta N_{\Omega} = (N_{+\Omega}-N_{-\Omega})$, which cannot be directly accessed by spectral homodyne detection alone. To overcome this issue, a resonator detection method has been proposed and demonstrated in Refs.~\cite{bar:PRA:13,bar:PRL:13}. In our case we can exploit the error signal from the PDH stabilization to check the symmetry of the sideband state and also to measure the presence of some energy unbalance between the two sidebands, leading to non-vanishing $\delta_{l\bar{l}}$. More in detail, given the cavity bandwidth, the PDH error signal allows one to measure the unbalance as \cite{SM} $\Delta N_{\Omega} = (\tau_{+\Omega} - \tau_{-\Omega})N_{\Omega}$, where $\tau_{\pm \Omega}$ are the relative transmission coefficients associated with the two sideband modes and $N_{\Omega} = N_{+\Omega} + N_{-\Omega}$ can be obtained from the (reconstructed) diagonal elements of ${\boldsymbol{\sigma}}_{\rm s}$ and ${\boldsymbol{\sigma}}_{\rm a}$ \cite{SM,FOP}.
} \begin{figure}[t] \includegraphics[width=0.938\columnwidth]{Fig2.pdf} \caption{\label{f:trace:ch} Homodyne traces referring to the coherent two-mode sideband state and the reconstructed ${\boldsymbol{R}}'$ and ${\boldsymbol{\sigma}}'$. The purities of the modes $\cal S$ and $\cal A$ are $\mu_{\rm s} = 0.99 \pm 0.02$ and $\mu_{\rm a} = 0.99 \pm 0.01$, respectively. Only the relevant elements are shown.} \end{figure} \begin{figure}[t] \includegraphics[width=0.95\columnwidth]{Fig3.pdf} \caption{\label{f:trace:sq} Homodyne traces referring to the squeezed two-mode sideband state and the reconstructed ${\boldsymbol{R}}'$ and ${\boldsymbol{\sigma}}'$. The noise reduction is $3.1 \pm 0.3$~dB for both the modes $\cal S$ and $\cal A$, whereas their purities are $\mu_{\rm s} = 0.68 \pm 0.07$ and $\mu_{\rm a} = 0.67 \pm 0.02$, respectively. Only the relevant elements are shown.} \end{figure} \begin{figure}[t] \includegraphics[width=0.95\columnwidth]{Fig4.pdf} \caption{\label{f:trace:sq:ch} Homodyne traces referring to the squeezed-coherent two-mode sideband state and the reconstructed ${\boldsymbol{R}}'$ and ${\boldsymbol{\sigma}}'$. The noise reduction is $2.7 \pm 0.3$~dB for the $\cal S$ mode and $2.4 \pm 0.2$~dB for the $\cal A$ mode, whereas the purities are $\mu_{\rm s} = 0.68 \pm 0.07$ and $\mu_{\rm a} = 0.64 \pm 0.02$, respectively. Only the relevant elements are shown.} \end{figure} \begin{table}[tb!]
\begin{tabular}{l} \hline\hline $\bullet$ {\it Two-mode coherent state:} \\[1ex] \includegraphics[width=0.8\columnwidth]{R_ch.pdf} \\[1ex] \includegraphics[width=0.8\columnwidth]{omega_ch.pdf} \\[1ex] \hline\hline $\bullet$ {\it Two-mode squeezed state:} \\[1ex] \includegraphics[width=0.8\columnwidth]{R_sq.pdf} \\[1ex] \includegraphics[width=0.8\columnwidth]{omega_sq.pdf}\\[1ex] \hline\hline $\bullet$ {\it Two-mode squeezed-coherent state:} \\[1ex] \includegraphics[width=0.84\columnwidth]{R_sqch.pdf} \\[1ex] \includegraphics[width=0.8\columnwidth]{omega_sqch.pdf}\\ \hline\hline \end{tabular} \caption{\label{t:sidebands} Reconstructed first moment vectors ${\boldsymbol{R}}$ and CMs ${\boldsymbol{\sigma}}_\Omega$ of the two-mode sideband states $\varrho_\Omega$ corresponding to the states of Figs.~\ref{f:trace:ch}, \ref{f:trace:sq} and \ref{f:trace:sq:ch}, respectively.} \end{table} \par {\it Experimental results} -- Given the state $\varrho_{\Omega}$, the full reconstruction of the CM requires the measurement of the quadratures of the modes $a_{\rm s}, a_{\rm a},$ and $a_{\pm} = b(t,\pm\pi/4|\Omega)$. Once the mode had been selected by choosing the suitable mixer phase $\Psi$, the LO phase $\theta$ was scanned from $0$ to $2\pi$ to acquire the corresponding homodyne trace. The statistical analysis of each trace allows one to reconstruct the moments of the quadratures required to reconstruct the CM ${\boldsymbol{\sigma}}'$ and the first moment vector ${\boldsymbol{R}}'$. \par Figures~\ref{f:trace:ch}, \ref{f:trace:sq} and \ref{f:trace:sq:ch} show the experimental spectral homodyne traces corresponding to the coherent, squeezed and squeezed-coherent two-mode sideband states, respectively. In the same figures we report the corresponding ${\boldsymbol{\sigma}}'$ and ${\boldsymbol{R}}'$.
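The identity quoted earlier for retrieving $\epsilon_l$ from the auxiliary $\Psi = \pm\pi/4$ traces can be checked on synthetic Gaussian data (a sketch; the means and covariance below are arbitrary illustrative values, not measured ones):

```python
import numpy as np

# Numerical check of the identity used to retrieve epsilon_q:
# with q_pm = (q_s ± q_a)/sqrt(2),
#   (1/2)(<q_+^2> - <q_-^2>) - <q_s><q_a>  =  <q_s q_a> - <q_s><q_a>.
rng = np.random.default_rng(1)
mean = [0.4, -0.2]                       # nonzero first moments (displaced state)
cov = [[1.3, 0.5], [0.5, 0.9]]           # correlated S/A quadratures
qs, qa = rng.multivariate_normal(mean, cov, size=200000).T

q_plus = (qs + qa) / np.sqrt(2)
q_minus = (qs - qa) / np.sqrt(2)

eps_from_pm = 0.5 * (np.mean(q_plus**2) - np.mean(q_minus**2)) - qs.mean() * qa.mean()
eps_direct = np.mean(qs * qa) - qs.mean() * qa.mean()
# The two expressions coincide sample-by-sample, up to floating-point error.
assert abs(eps_from_pm - eps_direct) < 1e-9
```

The identity is purely algebraic, $(q_+^2 - q_-^2)/2 = q_{\rm s} q_{\rm a}$, so it holds for every trace, not just on average.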
All the reconstructed ${\boldsymbol{\sigma}}'$ satisfy the physical condition ${\boldsymbol{\sigma}}' +i {\boldsymbol{\Omega}} \geq 0$ where ${\boldsymbol{\Omega}}=i {\boldsymbol{\sigma}}_y \oplus i{\boldsymbol{\sigma}}_y$, ${\boldsymbol{\sigma}}_y$ being the Pauli matrix \cite{oli:st}. Moreover, the symmetry of the state implies that the modes ${\cal S}$ and ${\cal A}$ represent the same local quantum state, namely, ${\boldsymbol{\sigma}}_{\rm s}= {\boldsymbol{\sigma}}_{\rm a}$: this is in agreement with our measurements within statistical errors, as one can check from Figs.~\ref{f:trace:ch}, \ref{f:trace:sq} and \ref{f:trace:sq:ch}. Furthermore, the diagonal elements of the off-diagonal blocks are zero within their statistical errors, in agreement with the expectation for a factorized state of the two modes. \par We can now calculate the corresponding CMs in the modal basis $\hat{a}_{+\Omega}$ and $\hat{a}_{-\Omega}$ of the upper and lower sidebands, respectively. Because of Eqs.~(\ref{AS}) we can write ${\boldsymbol{\sigma}}_\Omega = {\boldsymbol{S}}^{T}\,{\boldsymbol{\sigma}}' {\boldsymbol{S}}$ and ${\boldsymbol{R}} = {\boldsymbol{S}}^{T} {\boldsymbol{R}}'$, where \begin{equation}\label{simplettica} {\boldsymbol{S}}=\frac{1}{\sqrt{2}} \left(\begin{array}{cc} {\mathbbm I} & {\mathbbm I} \\ -i {\boldsymbol{\sigma}}_y & i {\boldsymbol{\sigma}}_y \end{array} \right) \end{equation} is the symplectic transformation associated with the mode transformations of Eqs.~(\ref{AS}). The results are summarized in Table~\ref{t:sidebands}. Whereas the reconstructed two-mode sideband coherent state is indeed a product of two coherent states, the other two reconstructed states exhibit non-classical features.
In particular, the minimum symplectic eigenvalues of the corresponding partially transposed CMs \cite{simon,seraf} read $\tilde{\lambda} = 0.50 \pm 0.02$ and $\tilde{\lambda} = 0.55 \pm 0.03$ for the two-mode squeezed and squeezed-coherent states, respectively: since in both cases $\tilde{\lambda} < 1$, we conclude that the sideband modes are entangled. \par {\it Concluding remarks} -- \STErev{In conclusion, we have presented a measurement scheme to fully reconstruct the class of symmetric two-mode squeezed thermal states of spectral sideband modes, a class of states with Gaussian Wigner functions widely exploited in continuous-variable quantum technology. The scheme is based on homodyne detection and active stabilization, which guarantees phase coherence in every step of the experiment, and on a suitable analysis of the detected photocurrents. We have shown that by properly choosing the electronic mixer phase it is possible to select four different combinations of the upper and lower sidebands which, together with the information from the PDH error signal, allow us to reconstruct the elements of the covariance matrix of the state under consideration.} The scheme has been successfully demonstrated to reconstruct both factorized and entangled sideband states. \par In our implementation we have used two electronic mixers and retrieved information about two modes at a time. It is also possible to use four mixers and extract information about the four modes at the same time. The method is based on a single homodyne detector and does not involve elements outside the main detection tools of continuous-variable optical systems. As such, our procedure is indeed a versatile diagnostic tool, suitable to be embedded in quantum information experiments with continuous-variable systems in the spectral domain.
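The entanglement test invoked above, the minimum symplectic eigenvalue $\tilde{\lambda}$ of the partially transposed CM, can be sketched numerically; the CM below is an illustrative two-mode squeezed vacuum with $r = 1$, not the reconstructed experimental data:

```python
import numpy as np

# PPT test for two-mode Gaussian states: partially transpose the CM
# (flip the sign of p_-), then compute the symplectic eigenvalues as the
# moduli of the eigenvalues of i*Omega*sigma.  lambda~ < 1 signals
# entanglement in the convention where the vacuum CM is the identity.
def min_ptranspose_sympl_eig(sigma):
    P = np.diag([1.0, 1.0, 1.0, -1.0])   # partial transpose: p_- -> -p_-
    w = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    sig_pt = P @ sigma @ P
    return float(np.min(np.abs(np.linalg.eigvals(1j * w @ sig_pt))))

# Illustrative CM: two-mode squeezed vacuum with r = 1, in the block form
# [[A I, C sz], [C sz, B I]] used in the text.
I2, sz = np.eye(2), np.diag([1.0, -1.0])
r = 1.0
A = B = np.cosh(2 * r)
C = np.sinh(2 * r)
sigma = np.block([[A * I2, C * sz], [C * sz, B * I2]])
assert min_ptranspose_sympl_eig(sigma) < 1.0   # lambda~ = exp(-2r): entangled
```

For the two-mode squeezed vacuum, $\tilde{\lambda} = e^{-2r}$, so any nonzero squeezing is flagged as entanglement by this criterion.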
\par {\it Acknowledgments} -- This work has been supported by UniMI through the UNIMI14 grant 15-6-3008000-609 and the H2020 Transition Grant 15-6-3008000-625, and by EU through the collaborative project QuProCS (Grant Agreement 641277). MGAP and SO thank Alberto Porzio for useful discussions. \begin{thebibliography}{20} \bibitem{tim1} A. I. Lvovsky, H. Hansen, T. Aichele, O. Benson, J. Mlynek, and S. Schiller, Phys. Rev. Lett. {\bf 87}, 050402 (2001). \bibitem{tim2} A. Zavatta, M. Bellini, P. L. Ramazza, F. Marin, and F. T. Arecchi, J. Opt. Soc. Am. B {\bf 19}, 1189 (2002). \bibitem{tim3} A. I. Lvovsky and J. H. Shapiro, Phys. Rev. A {\bf 65}, 033830 (2002). \bibitem{tim4} S. A. Babichev, B. Brezger, and A. I. Lvovsky, Phys. Rev. Lett. {\bf 92}, 047903 (2004). \bibitem{tim5} A. Zavatta, S. Viciani, and M. Bellini, Phys. Rev. A {\bf 70}, 053821 (2004). \bibitem{tim6} A. Zavatta, S. Viciani, and M. Bellini, Science {\bf 306}, 660 (2004). \bibitem{tim7} V. Parigi, A. Zavatta, M. S. Kim, and M. Bellini, Science {\bf 317}, 1890 (2007). \bibitem{tim8} S. Grandi, A. Zavatta, M. Bellini, and M. G. A. Paris, preprint arXiv:1505.03297. \bibitem{pik75} E. Jakeman, C. J. Oliver, and E. R. Pike, Adv. Phys. {\bf 24}, 349 (1975). \bibitem{yue83} H. P. Yuen and W. S. Chan, Opt. Lett. {\bf 8}, 177 (1983). \bibitem{sch84} B. L. Schumaker, Opt. Lett. {\bf 9}, 189 (1984). \bibitem{abb83} G. L. Abbas, V. W. S. Chan, and S. T. Yee, Opt. Lett. {\bf 8}, 419 (1983). \bibitem{yur85} B. Yurke, Phys. Rev. A {\bf 32}, 300 (1985). \bibitem{smi9xa} D. T. Smithey, M. Beck, M. G. Raymer, and A. Faridani, Phys. Rev. Lett. {\bf 70}, 1244 (1993). \bibitem{smi9xb} M. G. Raymer, M. Beck, and D. F. McAlister, Phys. Rev. Lett. {\bf 72}, 1137 (1994). \bibitem{smi9xc} D. T. Smithey, M. Beck, J. Cooper, and M. G. Raymer, Phys. Rev. A {\bf 48}, 3159 (1993). \bibitem{vog89} K. Vogel and H. Risken, Phys. Rev. A {\bf 40}, 2847 (1989). \bibitem{dar94} G. M. D'Ariano, C. Macchiavello, and M. G. A. Paris, Phys. Rev.
A {\bf 50}, 4298 (1994). \bibitem{mun95} M. Munroe, D. Boggavarapu, M. E. Anderson, and M. G. Raymer, Phys. Rev. A {\bf 52}, R924 (1995). \bibitem{sch96a} S. Schiller, G. Breitenbach, S. F. Pereira, T. Muller, and J. Mlynek, Phys. Rev. Lett. {\bf 77}, 2933 (1996). \bibitem{sch96b} G. Breitenbach, S. Schiller, and J. Mlynek, Nature {\bf 387}, 471 (1997). \bibitem{rev03} G. M. D'Ariano, M. G. A. Paris, and M. F. Sacchi, Adv. Imag. Electr. Phys. {\bf 128}, 205-308 (2003). \bibitem{rev09} A. I. Lvovsky and M. G. Raymer, Rev. Mod. Phys. {\bf 81}, 299 (2009). \bibitem{dis1} M. Gu, H. M. Chrzanowski, S. M. Assad, T. Symul, K. Modi, T. C. Ralph, V. Vedral, and P. K. Lam, Nat. Phys. {\bf 8}, 671 (2012). \bibitem{dis2} L. S. Madsen, A. Berni, M. Lassen, and U. L. Andersen, Phys. Rev. Lett. {\bf 109}, 030402 (2012). \bibitem{dis3} R. Blandino, M. G. Genoni, J. Etesse, M. Barbieri, M. G. A. Paris, P. Grangier, and R. Tualle-Brouri, Phys. Rev. Lett. {\bf 109}, 180402 (2012). \bibitem{dis4} C. Peuntinger, V. Chille, L. Mista, Jr., N. Korolkova, M. F\"ortsch, J. Korger, C. Marquardt, and G. Leuchs, Phys. Rev. Lett. {\bf 111}, 230506 (2013). \bibitem{dis5} V. Chille, N. Quinn, C. Peuntinger, C. Croal, L. Mista, Jr., C. Marquardt, G. Leuchs, and N. Korolkova, Phys. Rev. A {\bf 91}, 050301(R) (2015). \bibitem{bar:PRA:13} F.~A.~S.~Barbosa, A.~S.~Coelho, K.~N.~Cassemiro, P.~Nussenzveig, C.~Fabre, A.~S.~Villar, and M.~Martinelli, Phys. Rev. A {\bf 88}, 052113 (2013). \bibitem{hunt:05} E.~H.~Huntington, G.~N.~Milford, C. Robilliard, T.~C.~Ralph, O.~Gl\"ockl, U.~L.~Andersen, S.~Lorenz, and G.~Leuchs, Phys. Rev. A {\bf 71}, 041802(R) (2005). \bibitem{hun02} E. H. Huntington and T. C. Ralph, J. Opt. B {\bf 4}, 123 (2002). \bibitem{bar:PRL:13} F.~A.~S.~Barbosa, A.~S.~Coelho, K.~N.~Cassemiro, P.~Nussenzveig, C.~Fabre, M.~Martinelli, and A.~S.~Villar, Phys. Rev. Lett. {\bf 111}, 052113 (2013). \bibitem{som03} K. Somiya, Phys. Rev. D {\bf 67}, 122001 (2003). \bibitem{PDH} R. W. P. Drever, J. L.
Hall, F. V. Kowalski, J. Hough, G. M. Ford, A. J. Munley, and H. Ward, Appl. Phys. B {\bf 31}, 97 (1983). \bibitem{bachor} H.-A.~Bachor and T.~C.~Ralph, {\it A Guide to Experiments in Quantum Optics} (Wiley, 2004). \bibitem{oli:st} S.~Olivares, Eur. Phys. J. Special Topics {\bf 203}, 3 (2012). \bibitem{weed} C. Weedbrook, S. Pirandola, R. Garcia-Patr\'on, N. J. Cerf, T. C. Ralph, J. H. Shapiro, and S. Lloyd, Rev. Mod. Phys. {\bf 84}, 621 (2012). \bibitem{dauria:05} V.~D'Auria, A.~Porzio, S.~Solimeno, S.~Olivares, and M.~G.~A.~Paris, J. Opt. B: Quantum and Semiclass. Opt. {\bf 7}, S750 (2005). \bibitem{SM} See the Supplemental Material for details about the calculations. \bibitem{dauria:PRL} V.~D'Auria, S.~Fornaro, A.~Porzio, S.~Solimeno, S.~Olivares, and M.~G.~A.~Paris, Phys. Rev. Lett. {\bf 102}, 020502 (2009). \bibitem{buono:JOSAB} D.~Buono, G.~Nocerino, V.~D'Auria, A.~Porzio, S.~Olivares, and M.~G.~A.~Paris, J. Opt. Soc. Am. B {\bf 27}, A110 (2010). \bibitem{FOP} A. Ferraro, S. Olivares, and M. G. A. Paris, {\it Gaussian States in Quantum Information} (Bibliopolis, Napoli, 2005). \bibitem{simon} R. Simon, Phys. Rev. Lett. {\bf 84}, 2726 (2000). \bibitem{seraf} A. Serafini, F. Illuminati, and S. De Siena, J. Phys. B: At. Mol. Opt. Phys. {\bf 37}, L21 (2004). \end{thebibliography} \appendix \widetext \section*{SUPPLEMENTAL MATERIAL} \subsection{Covariance matrix elements of the two-mode squeezed thermal state} In this section we explicitly show how we can calculate the elements of the CM ${\boldsymbol{\sigma}}'$, given in Eq.~(3) of the main text.
We recall that, according to our definitions: \begin{equation}\label{mode:transf} b (t, 0 | \Omega) = \frac{\hat{a}_{+\Omega}(t) + \hat{a}_{-\Omega}(t)}{\sqrt{2}} \equiv a_{\rm s}, \quad\mbox{and}\quad b (t,\pi/2 | \Omega) = i\, \frac{\hat{a}_{+\Omega}(t) - \hat{a}_{-\Omega}(t)}{\sqrt{2}} \equiv a_{\rm a}, \end{equation} therefore the quadrature operator $X_\theta(t,\Psi | \Omega) = b (t, \Psi | \Omega) \, \mbox{e}^{-i\theta} + b^{\dag} (t, \Psi | \Omega) \, \mbox{e}^{i\theta}$ can be written as: \begin{align} X_\theta(t,\Psi | \Omega) = \cos \Psi \left[ q_{\rm s} \cos \theta + p_{\rm s} \sin \theta \right] + \sin \Psi \left[ q_{\rm a} \cos \theta + p_{\rm a} \sin \theta \right]. \end{align} If we set $\Psi=0$, we have: \begin{subequations}\label{s:quad} \begin{align} X_0(t, 0 | \Omega) & \equiv q_{\rm s} = a_{\rm s}+a_{\rm s}^\dag = \frac{q_{+\Omega} + q_{-\Omega}}{\sqrt{2}} \Rightarrow \langle q_{\rm s}^2 \rangle - \langle q_{\rm s} \rangle^2,\\ X_{\pi/2}(t, 0 | \Omega) &\equiv p_{\rm s} = i(a_{\rm s}^\dag-a_{\rm s}) = \frac{p_{+\Omega} + p_{-\Omega}}{\sqrt{2}} \Rightarrow \langle p_{\rm s}^2 \rangle - \langle p_{\rm s} \rangle^2,\\ X_{\pm\pi/4}(t, 0 | \Omega) & \equiv \frac{q_{\rm s} \pm p_{\rm s}}{\sqrt{2}} \Rightarrow \frac12 \langle q_{\rm s} p_{\rm s} + p_{\rm s} q_{\rm s} \rangle - \langle q_{\rm s} \rangle \langle p_{\rm s} \rangle, \end{align} \end{subequations} for $\Psi = \pi/2$ we obtain: \begin{subequations}\label{a:quad} \begin{align} X_0(t, \pi/2 | \Omega) & \equiv q_{\rm a} = a_{\rm a}+a_{\rm a}^\dag = \frac{p_{-\Omega} - p_{+\Omega}}{\sqrt{2}} \Rightarrow \langle q_{\rm a}^2 \rangle - \langle q_{\rm a} \rangle^2,\\ X_{\pi/2}(t, \pi/2 | \Omega) &\equiv p_{\rm a} = i(a_{\rm a}^\dag-a_{\rm a}) = \frac{q_{+\Omega} - q_{-\Omega}}{\sqrt{2}} \Rightarrow \langle p_{\rm a}^2 \rangle - \langle p_{\rm a} \rangle^2,\\ X_{\pm\pi/4}(t, \pi/2 | \Omega) & \equiv \frac{q_{\rm a} \pm p_{\rm a}}{\sqrt{2}} \Rightarrow \frac12 \langle q_{\rm a} p_{\rm a} + p_{\rm a} q_{\rm a} \rangle - \langle q_{\rm a} \rangle \langle p_{\rm a} \rangle. \end{align} \end{subequations} On the other hand, if we set $\Psi=\pm\pi/4$ we find: \begin{align} X_0(t, \pm\pi/4 | \Omega) = \frac{q_{\rm s} \pm q_{\rm a}}{\sqrt{2}}, \quad \mbox{and} \quad X_{\pi/2}(t, \pm\pi/4 | \Omega) = \frac{p_{\rm s} \pm p_{\rm a}}{\sqrt{2}}, \end{align} and we have the following identities: \begin{align} \langle X_0^2(t, \pi/4 | \Omega) &- X_0^2(t, -\pi/4 | \Omega) \rangle = 2 \langle q_{\rm a} q_{\rm s} \rangle \;\Rightarrow\; \epsilon_q = \langle q_{\rm a} q_{\rm s} \rangle - \langle q_{\rm a} \rangle \langle q_{\rm s} \rangle, \\[1ex] \langle X_{\pi/2}^2(t, \pi/4 | \Omega) &- X_{\pi/2}^2(t, -\pi/4 | \Omega) \rangle = 2 \langle p_{\rm a} p_{\rm s} \rangle \;\Rightarrow\; \epsilon_p = \langle p_{\rm a} p_{\rm s} \rangle - \langle p_{\rm a} \rangle \langle p_{\rm s} \rangle. \end{align} \par As mentioned in the main text, it is not possible to calculate the elements $\delta_{qp}$ and $\delta_{pq}$ directly from the spectral homodyne traces \cite{bar:PRA:13}. However, when the state under consideration is a two-mode squeezed thermal state $\varrho_{\Omega} = D_{2}(\alpha)S_{2}(\xi) \nu_{+\Omega}(N_1)\otimes\nu_{-\Omega}(N_2) S_{2}^{\dag}(\xi)D_{2}^{\dag}(\alpha)$, where $D_2(\alpha) = \exp\{[\alpha(a_{+\Omega}^{\dag}+a_{-\Omega}^{\dag}) -\mbox{h.c.}]/\sqrt{2}\}$ is the symmetric displacement operator, $S_2(\xi) = \exp(\xi a_{+\Omega}^{\dag} a_{-\Omega}^{\dag}-\mbox{h.c.})$ is the two-mode squeezing operator, and $\nu_{\pm\Omega}(N)$ is the thermal state of mode $ a_{\pm\Omega}$ with $N$ average photons \cite{oli:st}, we can calculate $\delta_{qp}$ and $\delta_{pq}$ as follows. \par Since the covariance matrix does not depend on the displacement operator, we can assume $\alpha = 0$. Furthermore, it is useful to introduce the following parameterization: we define the number of squeezed photons per mode $N_{\rm sq} = \sinh^2 r$, the total number of thermal photons $N_{\rm th}= N_1+N_2$, and the thermal-photon fraction $R_{\rm th} = N_1/N_{\rm th}$.
Thereafter, the energies of the two sidebands are given by:
\begin{align}
N_{+\Omega} &= N_{\rm sq} (1 + N_{\rm th}) + R_{\rm th} N_{\rm th}, \\
N_{-\Omega} &= N_{\rm sq} (1 + N_{\rm th}) + (1 - R_{\rm th}) N_{\rm th},
\end{align}
respectively, and thus:
\begin{equation}
N_{+\Omega} + N_{-\Omega} = 2 N_{\rm sq} + N_{\rm th} (1+ 2N_{\rm sq})\,\quad \mbox{and} \quad N_{+\Omega} - N_{-\Omega} = N_{\rm th} (2R_{\rm th} - 1).
\end{equation}
\par
The covariance matrix associated with $\varrho_{\Omega}$ has the following block-matrix form:
\begin{equation}\label{CM:sb}
{\boldsymbol{\sigma}}_{\Omega}= \left(\begin{array}{cc} A \, {\mathbbm I} & C \, {\boldsymbol{\sigma}}_z \\[1ex] C \, {\boldsymbol{\sigma}}_z & B \, {\mathbbm I} \end{array} \right),
\end{equation}
where ${\boldsymbol{\sigma}}_z$ is the Pauli matrix and:
\begin{subequations}
\begin{align}
A &= 1 + 2 N_{\rm sq} (1 +N_{\rm th}) + 2 R_{\rm th} N_{\rm th}, \\
B &= 1 + 2 N_{\rm sq} (1 +N_{\rm th}) + 2 (1 - R_{\rm th}) N_{\rm th}, \\
C &= 2 (1 +N_{\rm th}) \sqrt{N_{\rm sq}(1+N_{\rm sq})}\,.
\end{align}
\end{subequations}
The corresponding {\em measured} covariance matrix reads (see the main text for details):
\begin{equation}\label{CM:meas}
{\boldsymbol{\sigma}}'= \left(\begin{array}{cc} \frac12(A+B) \, {\mathbbm I} + C \, {\boldsymbol{\sigma}}_z & (N_{+\Omega}-N_{-\Omega}) \, i {\boldsymbol{\sigma}}_y \\[1ex] (N_{+\Omega}-N_{-\Omega}) \, i {\boldsymbol{\sigma}}_y & \frac12(A+B) \, {\mathbbm I} + C \, {\boldsymbol{\sigma}}_z \end{array} \right),
\end{equation}
${\boldsymbol{\sigma}}_y$ being the Pauli matrix. Note that while ${\boldsymbol{\sigma}}_{\Omega}$ is always symmetric, ${\boldsymbol{\sigma}}'$ can be asymmetric.
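The parameterization above admits a quick numerical sanity check. The following Python sketch is not part of the original derivation; it assumes the standard identification of the diagonal entries of a single-mode block with the mode energy, $A = 1 + 2N_{+\Omega}$ and $B = 1 + 2N_{-\Omega}$, and verifies the sum, difference and trace relations for arbitrary test values of $(N_{\rm sq}, N_{\rm th}, R_{\rm th})$:

```python
import numpy as np

# Illustrative parameter values (arbitrary, for checking only)
N_sq, N_th, R_th = 0.8, 1.5, 0.3

N_plus  = N_sq * (1 + N_th) + R_th * N_th          # N_{+Omega}
N_minus = N_sq * (1 + N_th) + (1 - R_th) * N_th    # N_{-Omega}

# Sum and difference relations quoted in the text
assert np.isclose(N_plus + N_minus, 2 * N_sq + N_th * (1 + 2 * N_sq))
assert np.isclose(N_plus - N_minus, N_th * (2 * R_th - 1))

# Assumed identification of the diagonal blocks: A = 1 + 2 N_{+Omega}, etc.
A, B = 1 + 2 * N_plus, 1 + 2 * N_minus

# Total energy from the trace: N_tot = (1/4) sum_k sigma_kk - 1
assert np.isclose((2 * A + 2 * B) / 4 - 1, N_plus + N_minus)
print("sideband-energy relations verified")
```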
\subsection{Retrieving the energy unbalance of the two-mode squeezed thermal state}
In order to determine the energy {\em difference} between the sidebands, we should measure the cavity bandwidth and, by exploiting the error signal of the PDH \cite{PDH}, assess the relative cavity transmission coefficients $\tau_{\pm \Omega} = T_{\pm \Omega}/(T_{+\Omega} + T_{-\Omega})$ associated with the two sideband modes, where $T_{\pm \Omega}$ are the actual transmission coefficients. Therefore, the energy difference can be obtained as:
\begin{equation}
N_{+\Omega}-N_{-\Omega} = \frac{T_{+\Omega} - T_{-\Omega}}{T_{+\Omega} + T_{-\Omega}}\, (N_{+\Omega}+N_{-\Omega}).
\end{equation}
In general, given the covariance matrix ${\boldsymbol{\sigma}}$ of a Gaussian state, the total energy can be obtained from the sum of its diagonal elements $[{\boldsymbol{\sigma}}]_{kk}$ as (without loss of generality we are still assuming the absence of the displacement):
\begin{equation}
N_{\rm tot} = \frac14 \sum_{k=1}^{4} [{\boldsymbol{\sigma}}]_{kk} -1.
\end{equation}
Experimentally, we can find the total energy $N_{+\Omega}+N_{-\Omega}$ from the first and second moments of the operators in Eqs.~(\ref{s:quad}) and (\ref{a:quad}), which are measured via homodyne detection.
\begin{thebibliography}{5}
\bibitem{bar:PRA:13} F.~A.~S.~Barbosa, A.~S.~Coelho, K.~N.~Cassemiro, P.~Nussenzveig, C.~Fabre, A.~S.~Villar, and M.~Martinelli, Phys. Rev. A {\bf 88}, 052113 (2013).
\bibitem{oli:st} S.~Olivares, Eur. Phys. J. Special Topics {\bf 203}, 3 (2012).
\bibitem{PDH} R.~W.~P.~Drever, J.~L.~Hall, F.~V.~Kowalski, J.~Hough, G.~M.~Ford, A.~J.~Munley, and H.~Ward, Appl. Phys. B {\bf 31}, 97 (1983).
\end{thebibliography}
\end{document}
\begin{document}
\setcounter{page}{1}
\title{Scaling Limit of the Fleming-Viot Multi-Colour Process}
\author{Oliver Tough\footnote{Institut de Mathématiques, Université de Neuchâtel, Switzerland. ([email protected])}}
\date{October 11, 2021}
\maketitle
\begin{abstract}
The Fleming-Viot process associated to a killed normally reflected diffusion $X_t$ is known to provide a particle representation for the quasi-stationary distribution of $X_t$. In this paper we establish three results for this particle system. We firstly establish that it also provides a particle representation for the principal right eigenfunction of the submarkovian infinitesimal generator of $X_t$. Secondly, we establish that the Fleming-Viot process provides a representation for the $Q$-process. Thirdly, we prove a conjecture due to Bieniek and Burdzy on the asymptotic distribution of the spine of the Fleming-Viot process and its side branches. We obtain these as corollaries of the following result. The Fleming-Viot multi-colour process is obtained by attaching genetic information to the particles in the Fleming-Viot process. We establish that under a suitable rescaling it converges to the Fleming-Viot superprocess from population genetics, with spatial inhomogeneity having the effect of speeding up the rate of genetic drift according to a simple formula.
\end{abstract}
\section{Introduction}
In this paper we study the behaviour of a system of interacting diffusion processes, known as a Fleming-Viot particle system. This is known to provide a particle representation for killed Markov processes and their quasi-stationary distributions (QSDs). Throughout this paper, $(X_t)_{0\leq t<\tau_{\partial}}$ will be defined to be a diffusion process evolving in the closure $\bar D$ of an open, bounded domain $D\subseteq {\mathbb R}^d$, normally reflected at the ($C^{\infty}$) boundary $\partial D$, and killed at position-dependent rate $\kappa(x)$ (\textit{soft killing}).
Nevertheless, we emphasise that it should be possible to use the same proof strategy to obtain similar results to those of this paper for more general killed Markov processes, subject to overcoming additional difficulties (see Remark \ref{rmk:extension to hard killing etc}). By checking \cite[Assumption (A)]{Champagnat2014}, we establish in Theorem \ref{theo:convergence to QSD for reflected diffusion with soft killing} (proven in the appendix) that there exists a unique quasi-stationary distribution (QSD) $\pi$ such that
\begin{equation}\label{eq:exponential convergence to QSD reflected diffusion intro}
\mathcal{L}_{\mu}(X_t\lvert \tau_{\partial}>t)\overset{\text{TV}}{\rightarrow} \pi\quad\text{as}\quad t\rightarrow\infty
\end{equation}
uniformly over all initial conditions $\mu \in \mathcal{P}(\bar D)$. We write $L$ for the infinitesimal generator of $X_t$. The QSD $\pi$ corresponds to the principal left eigenmeasure of $L$ by \cite[Proposition 4]{Meleard2011}; we write $-\lambda$ and $\phi$ for the corresponding principal eigenvalue and right eigenfunction respectively. This provides for the leading order behaviour of $X_t$ over large timescales since, when $\phi$ is normalised so that $\langle \pi,\phi\rangle=1$, we have
\begin{equation}\label{eq:leading order behaviour of killed Markov processes in terms of prin etriple}
{\mathbb P}_x(X_t\in \cdot)\sim \phi(x)e^{-\lambda t}\pi(\cdot)\quad\text{as}\quad t\rightarrow\infty.
\end{equation}
The Fleming-Viot process is constructed by taking $N$ copies of $X_t$ evolving independently until one of the particles is killed, at which time the particle which is killed jumps onto the location of another particle chosen independently and uniformly at random.
This process is then repeated inductively over an infinite time horizon, obtaining the Fleming-Viot process $(\vec{X}^N_t)_{t\geq 0}=(X^{N,1}_t,\ldots,X^{N,N}_t)_{t\geq 0}$. The process $\vec{X}^N_t$ has empirical measure-valued process $m^N_t$ and renormalised jump process $J^N_t$,
\[
m^N_t:=\frac{1}{N}\sum_{k=1}^N\delta_{X^{N,k}_t},\quad J^N_t:=\frac{1}{N}\#\{\text{jumps up to time }t\}.
\]
It is straightforward to combine \eqref{eq:exponential convergence to QSD reflected diffusion intro}, \cite[Theorem 2.2]{Villemonais2011} and \cite[$1^{\text{st}}$ eq. on p.450]{Villemonais2011} to see that $\vec{X}^N_t$ provides the following representation for both the QSD $\pi$ and the principal eigenvalue $-\lambda$,
\begin{equation}\label{eq:jt conv of (m,J) intro for FV multi-colour intro}
(m^N_t,J^N_t-J^N_{t-1})\rightarrow (\pi,\lambda)\quad\text{in probability as}\quad N\wedge t\rightarrow \infty.
\end{equation}
In this paper we will consider the Fleming-Viot multi-colour process introduced by Grigorescu and Kang \cite[Section 5.1]{Grigorescu2012}, which is obtained by attaching genetic information to the particles in the Fleming-Viot particle system. We will obtain a scaling limit of this genetic information as $N\rightarrow\infty$ and time is rescaled. We will use this to establish three results on the Fleming-Viot process, which we now outline.
\subsection{Particle Representation for the Principal Right Eigenfunction of $L$}
We will establish a Fleming-Viot representation for $\phi$. Thus Fleming-Viot processes provide a complete description of the large time behaviour of $X_t$ to leading order by \eqref{eq:leading order behaviour of killed Markov processes in terms of prin etriple}.
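The Fleming-Viot construction described above is straightforward to simulate. The following Python sketch is purely illustrative and not taken from the paper: the domain $[0,1]$, the killing rate \texttt{kappa}, and all numerical parameters are arbitrary choices. It runs $N$ reflected Brownian particles with soft killing and resampling, so that the returned positions approximate $m^N_T\approx\pi$ and the jump rate per particle approximates $\lambda$:

```python
import numpy as np

rng = np.random.default_rng(0)

def kappa(x):
    # Hypothetical position-dependent soft killing rate (illustrative only)
    return 1.0 + 4.0 * x**2

def fleming_viot(N=500, T=20.0, dt=1e-3):
    """Sketch of the Fleming-Viot particle system for a Brownian motion on
    [0, 1], normally reflected at the boundary and killed at rate kappa(x).
    Returns the particle positions at time T (whose empirical measure
    approximates the QSD pi) and the number of jumps per particle per unit
    time (which approximates the principal eigenvalue lambda)."""
    x = rng.uniform(0.0, 1.0, size=N)
    jumps = 0
    for _ in range(int(T / dt)):
        x = x + np.sqrt(dt) * rng.standard_normal(N)   # Euler step
        x = np.abs(x)                                  # reflect at 0
        x = 1.0 - np.abs(1.0 - x)                      # reflect at 1
        killed = np.flatnonzero(rng.random(N) < kappa(x) * dt)
        for i in killed:          # killed particle jumps onto another one
            j = rng.integers(N - 1)
            x[i] = x[j if j < i else j + 1]
            jumps += 1
    return x, jumps / (N * T)

positions, lam_hat = fleming_viot()
```

Time discretisation introduces a bias of order $\sqrt{dt}$; the sketch is only meant to make the construction of $m^N_t$ and $J^N_t$ concrete.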
Throughout this paper, for a probability measure $\mu$ and non-negative measurable function $f$ on the same measurable space with $0<\langle \mu,f\rangle<\infty$, we write $\mu^f$ for the tilted measure,
\begin{equation}\label{eq:tilted measure}
\mu^f(dx):=\frac{f(x)\mu(dx)}{\langle \mu,f\rangle}.
\end{equation}
We fix $\mu\in {\mathcal{P}}(\bar D)$ and use \eqref{eq:leading order behaviour of killed Markov processes in terms of prin etriple} to calculate
\begin{equation}\label{eq:convergence of Law of X0 given killing time > t calculation}
\begin{split}
{\mathcal{L}}_{\mu}(X_0\lvert \tau_{\partial}>t)=\frac{\int_{\bar D}{\mathbb P}_x(\tau_{\partial}>t)\delta_x\mu(dx)}{{\mathbb P}_{\mu}(\tau_{\partial}>t)}\\
=\frac{\int_{\bar D}e^{\lambda t}{\mathbb P}_x(\tau_{\partial}>t)\delta_x\mu(dx)}{e^{\lambda t}{\mathbb P}_{\mu}(\tau_{\partial}>t)}\overset{\text{TV}}{\rightarrow} \frac{\int_{\bar D}\phi(x)\delta_x\mu(dx)}{\langle \mu,\phi\rangle}=\mu^{\phi}.
\end{split}
\end{equation}
We further observe that knowing both $\mu$ and $\mu^{\phi}$ determines the value of $\phi$ on $\text{supp}(\mu)$ (up to rescaling), so that a particle representation for $\mu^{\phi}$ for any given $\mu$ gives us a representation for $\phi$. This motivates consideration of the Markov chain $(X_t,X_0)_{0\leq t<\infty}$ which evolves like $(X_t)_{0\leq t <\infty}$ in the first variable but is constant in the second variable.
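For a discrete measure, the tilting operation $\mu^f(dx)=f(x)\mu(dx)/\langle\mu,f\rangle$ is an elementary reweighting; the following short sketch (with arbitrary illustrative atoms and values) makes it concrete:

```python
import numpy as np

# Sketch of the tilting mu^f(dx) = f(x) mu(dx) / <mu, f> for a
# discrete measure on three atoms (values are arbitrary examples).
mu = np.array([0.2, 0.5, 0.3])   # probability weights of mu
f  = np.array([1.0, 2.0, 4.0])   # values of f at the atoms

mu_f = f * mu / np.dot(mu, f)    # tilted weights

assert np.isclose(mu_f.sum(), 1.0)
# Atoms where f is larger gain mass: here mu_f = [0.2, 1.0, 1.2] / 2.4
```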
We consider the Fleming-Viot process associated to $(X_t,X_0)_{0\leq t<\infty}$, which we write as
\begin{equation}\label{eq:FV assoc to (Xt,X0)}
\{(X^{N,i}_t,\eta^{N,i}_t)_{0\leq t<\infty}:1\leq i\leq N\}.
\end{equation}
We then have by \cite[Theorem 2.2]{Villemonais2011} that
\begin{equation}\label{eq:hydro limit part syst made from (Xt,X0)}
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{i=1}^N\delta_{(X^{N,i}_t,\eta^{N,i}_t)}={\mathcal{L}}((X_t,X_0)\lvert \tau_{\partial}>t),
\end{equation}
so that using \eqref{eq:convergence of Law of X0 given killing time > t calculation} we have
\[
\lim_{t\rightarrow\infty}\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{i=1}^N\delta_{\eta^{N,i}_t}=\mu^{\phi}.
\]
In fact, \cite[Theorem 2.2]{Villemonais2011} gives a quantitative estimate for \eqref{eq:hydro limit part syst made from (Xt,X0)}, which we may combine with \eqref{eq:convergence of Law of X0 given killing time > t calculation} to see that
\begin{equation}\label{eq:estimate for right efn using Villemonais}
\lim_{\substack{N\wedge t\rightarrow \infty\\N\gg e^{2\lambda t}}}\frac{1}{N}\sum_{i=1}^N\delta_{\eta^{N,i}_t}= \mu^{\phi}.
\end{equation}
Clearly this requirement on the size of $N$ is unsatisfying. We will strengthen the notion of convergence in Corollary \ref{cor:representation for right e-fn}, establishing that $N\gg t$ is sufficient (and optimal; see Remark \ref{rmk:optimality approx method for phi}):
\[
\lim_{\substack{N\wedge t\rightarrow \infty\\N\gg t}}\frac{1}{N}\sum_{i=1}^N\delta_{\eta^{N,i}_t}= \mu^{\phi}.
\]
Thus we obtain a particle representation for $\mu^{\phi}$, for any given $\mu\in{\mathcal{P}}(\bar D)$, and hence a particle representation for the principal right eigenfunction $\phi$.
\subsection{Particle Representation for the $Q$-Process}
We write $X^{\infty}_t$ for the $Q$-process, which corresponds to $X_t$ conditioned never to be killed and whose existence will be proven in Lemma \ref{lem:law of Q-process given by tilting} by verifying \cite[Assumption (A)]{Champagnat2014}. It corresponds to the limit
\[
{\mathcal{L}}((X^{\infty}_t)_{0\leq t\leq T}):=\lim_{h\rightarrow\infty}{\mathcal{L}}((X_t)_{0\leq t\leq T}\lvert \tau_{\partial}>T+h).
\]
A rigorous definition of the $Q$-process will be given by \eqref{eq:family of prob measures Q-process}. Whereas the QSD $\pi$ describes the distribution of $X_t$ conditioned on survival at specific instants of large time $t$, the $Q$-process provides a pathwise description of the paths of $X_t$ which survive for a very long time. Dynamical Historical Processes (DHPs), which we shall define in \eqref{eq:DHP multi-colour chapter}, correspond to the paths we obtain by tracing the ancestry of the particles backwards in time. In particular, for $i\in \{1,\ldots,N\}$ and $t<\infty$ the DHP
\[
(\mathcal{H}^{N,i,t}_s)_{0\leq s\leq t}\in C([0,t];\bar D)
\]
is the path we obtain by tracing backwards in time from particle $i$ at time $t$. Equivalently, it is the unique continuous path from time $0$ to time $t$ made from the paths of the particles which terminates with particle $i$ at time $t$. We illustrate this in Figure \ref{fig:DHPs intro}.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.7]
\node[right] at (12,2) {$t=1$};
\node[right] at (12,4) {$t=2$};
\node[right] at (12,6) {$t=3$};
\draw[-,thick,black](-3,0) -- ++(0,6);
\draw[-,thick,black](12,0) -- ++(0,6);
\draw[->,snake=snake,black](0,0) -- ++(0,4);
\draw[->,snake=snake,black](3,0) -- ++(0,2);
\draw[-,snake=snake,ultra thick,red](6,0) -- ++(0,4);
\draw[-,snake=snake,ultra thick,blue](9,0) -- ++(0,4);
\node[below] at (0,0) {$i=1$};
\node[below] at (3,0) {$i=2$};
\node[below] at (6,0) {$i=3$};
\node[below] at (9,0) {$i=4$};
\draw[-,dashed,black](3,2) -- ++(3,0);
\draw[->,snake=snake,black](6,2) -- ++(1.5,2);
\node[left] at (6,3) {$i=3$};
\node[right] at (6.75,3) {$i=2$};
\draw[-,dashed,black](0,4) -- ++(6,0);
\draw[-,dashed,black](7.5,4) -- ++(1.5,0);
\draw[->,snake=snake,ultra thick,red](6,4) -- ++(-3,2);
\draw[->,snake=snake,black](6,4) -- ++(0,2);
\draw[->,snake=snake,ultra thick,blue](9,4) -- ++(-1.5,2);
\draw[->,snake=snake,black](9,4) -- ++(0,2);
\node[above] at (3,6) {$i=1$};
\node[above] at (6,6) {$i=3$};
\node[above] at (7.5,6) {$i=2$};
\node[above] at (9,6) {$i=4$};
\end{tikzpicture}
\caption{The DHPs $(\mathcal{H}^{4,1,3}_t)_{0\leq t\leq 3}$ and $(\mathcal{H}^{4,2,3}_t)_{0\leq t\leq 3}$ corresponding to particles $1$ and $2$ at time $t=3$ are in thick red and blue respectively.}
\label{fig:DHPs intro}
\end{center}
\end{figure}
Bieniek and Burdzy \cite[Theorem 4.2]{Bieniek2018} provided a representation for the pre-limit ${\mathcal{L}}((X_t)_{0\leq t\leq T }\lvert \tau_{\partial}>T)$ based on these DHPs,
\[
\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{i=1}^N\delta_{(\mathcal{H}^{N,i,T}_s)_{0\leq s\leq T}}= {\mathcal{L}}((X_t)_{0\leq t\leq T }\lvert \tau_{\partial}>T)
\]
for $T<\infty$ fixed.
In Corollary \ref{cor:representation for Q-process} we will extend \cite[Theorem 4.2]{Bieniek2018} to obtain a representation for the $Q$-process,
\[
\lim_{\substack{N\wedge t\rightarrow \infty\\N\gg t}}\frac{1}{N}\sum_{i=1}^N\delta_{(\mathcal{H}^{N,i,t}_s)_{0\leq s\leq T}}= {\mathcal{L}}((X^{\infty}_s)_{0\leq s\leq T})
\]
for $T<\infty$ fixed. Furthermore, we will see that $N\gg t$ is optimal (see Remark \ref{rmk:optimality approx method for Q-process}).
\subsection{Asymptotic Distribution of the Spine and Side Branches of the Fleming-Viot Process}
In \cite{Grigorescu2012}, Grigorescu and Kang constructed the spine of the Fleming-Viot process: the unique continuous path from time 0 to time $\infty$ constructed from the paths of the particles. We will provide a rigorous definition of the spine in \eqref{eq:spine}, which may be thought of as the unique DHP up to time $\infty$. This construction was later extended to a very general setting by Bieniek and Burdzy \cite[Theorem 3.1]{Bieniek2018}. One can also define side branches along this spine, forming together what we call the skeleton of the Fleming-Viot process, as in Figure \ref{fig:Skeleton example intro to section}.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.7]
\foreach \y [count=\n] in {0,2}{
\draw[-,dashed,black](3,1+\y) -- ++(-2,0);
\draw[->,snake=snake,black](3,\y) -- ++(0,1);
}
\foreach \y [count=\n] in {0,2}{
\draw[-,dashed,black](3.5,1.75+\y) -- ++(-1.75,0);
\draw[->,snake=snake,black](3.5,0.75+\y) -- ++(0,1);
}
\foreach \y [count=\n] in {2,4}{
\draw[-,dashed,black](-3,\y) -- ++(3,0);
\draw[->,snake=snake,black](-3,-1+\y) -- ++(0,1);
}
\foreach \y [count=\n] in {0,2}{
\draw[-,snake=snake,ultra thick,blue](0,\y) -- ++(1,1);
\draw[->,snake=snake,red](1,1+\y) -- ++(1.5,1.5);
}
\draw[-,snake=snake,ultra thick,blue](1,1) -- ++(-1,1);
\draw[->,snake=snake,ultra thick,blue](1,3) -- ++(-2,2);
\foreach \y [count=\n] in {0,2}{
\draw[->,snake=snake,red](1.75,1.75+\y) -- ++(-0.5,0.5);
}
\draw[->,snake=snake,red](0,4) -- ++(1,1);
\draw[->,snake=snake,red](0,2) -- ++(-1,1);
\draw[-,thick,black](4,0) -- ++(0,5);
\draw[-,thick,black](-4,0) -- ++(0,5);
\end{tikzpicture}
\caption[The skeleton of the Fleming-Viot process]{The thick blue path denotes the path of the spine, whilst the side-trees are in red. Together, these form the skeleton.}
\label{fig:Skeleton example intro to section}
\end{center}
\end{figure}
Whereas this has a different construction to the classical spine decomposition of a critical branching process, Bieniek and Burdzy \cite[Section 5]{Bieniek2018} established that asymptotically as $N\rightarrow\infty$ it has the same description when the state space is finite. They conjectured that this is also true for general state spaces \cite[p.3752]{Bieniek2018}. Since then, Burdzy, Kołodziejek and Tadić in \cite{Burdzy2019,Burdzy2020} have established a law of the iterated logarithm \cite[Theorem 7.1]{Burdzy2020} which, as they explain, hints that the conjecture of Bieniek and Burdzy should hold in the setting considered in \cite{Burdzy2020}.
Nevertheless, the conjecture of Bieniek and Burdzy has not been established under any other conditions (as far as this author is aware). We will obtain in Corollary \ref{cor:asymptotic distribution of the spine decomposition} a proof of Bieniek and Burdzy's conjecture in the setting considered in this paper: when $X_t$ is a normally reflected diffusion in a compact domain $\bar D$ with soft killing. Whereas Bieniek and Burdzy's proof for a finite state space \cite[Section 5]{Bieniek2018} used the finiteness of the state space in an essential way, it should be possible to extend our proof to more general settings, subject to overcoming additional difficulties.
\subsection{Scaling Limit of the Fleming-Viot Multi-Colour Process}
Introduced by Grigorescu and Kang \cite[Section 5.1]{Grigorescu2012}, the Fleming-Viot multi-colour process is obtained by attaching genetic information to the particles in the Fleming-Viot particle system. To be more precise, the Fleming-Viot multi-colour process,
\[
(\vec{X}^N_t,\vec{\eta}^N_t)_{0\leq t<\infty}=((X^1_t,\eta^1_t),\ldots,(X^N_t,\eta^N_t))_{0\leq t<\infty},
\]
is defined by taking the Fleming-Viot particle system $(X^1_t,\ldots,X^N_t)_{0\leq t<\infty}$ and associating to each $X^i$ a colour $\eta^i$ taking values in the colour space ${\mathbb{K}}$ (assumed to be a Polish space), which is constant between killing times. Moreover, if $X^i$ is killed at time $t$ and jumps onto $X^j$, we set
\[
(X^i_t,\eta^i_t)=(X^j_{t-},\eta^j_{t-}).
\]
These colours track ancestral information about the corresponding particles in the Fleming-Viot particle system $(X^1_t,\ldots,X^N_t)_{0\leq t<\infty}$ as the particle system evolves forwards in time. Given a gene with two alleles, $a$ and $A$, the SDE
\[
dp_t=\sqrt{p_t(1-p_t)}dW_t
\]
models the evolution of the proportion $p_t$ of the population carrying the $a$-allele. This is the classical Wright-Fisher diffusion.
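The Wright-Fisher SDE above can be simulated with a simple Euler-Maruyama scheme. The sketch below is purely illustrative (initial frequency, step size and horizon are arbitrary choices): since $p_t$ is a bounded martingale absorbed at $\{0,1\}$, the probability that the $a$-allele fixes equals its initial frequency $p_0$, and the empirical fixation fraction reflects this.

```python
import numpy as np

rng = np.random.default_rng(1)

def wright_fisher_path(p0=0.3, dt=1e-3, t_max=50.0):
    """Euler-Maruyama sketch of dp_t = sqrt(p_t (1 - p_t)) dW_t.
    Returns the (almost surely absorbed) value of p at time t_max:
    1.0 if the a-allele fixed, 0.0 if it was lost."""
    p = p0
    for _ in range(int(t_max / dt)):
        p += np.sqrt(max(p * (1.0 - p), 0.0) * dt) * rng.standard_normal()
        p = min(max(p, 0.0), 1.0)   # clamp; the boundaries are absorbing
        if p == 0.0 or p == 1.0:
            break
    return p

# The fraction of runs fixing at 1 approximates p0 (martingale property):
runs = [wright_fisher_path() for _ in range(400)]
```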
The Fleming-Viot superprocess \cite{Fleming1979} is a measure-valued generalisation of this Wright-Fisher diffusion, which allows for the possibility that the set of alleles is infinite. To avoid confusion with the Fleming-Viot processes which are the subject of this paper, we shall instead refer to Fleming-Viot superprocesses as Wright-Fisher superprocesses, a precise definition of which will be given in Definition \ref{defin:MVWF}. We shall extract the Wright-Fisher superprocess as the limit over $O(N)$ timescales of the Fleming-Viot multi-colour process (Theorem \ref{theo:Convergence to FV diffusion}). We will obtain corollaries \ref{cor:representation for right e-fn} (our particle representation for $\phi$) and \ref{cor:representation for Q-process} (our representation for the $Q$-process) by way of Theorem \ref{theo:Convergence to FV diffusion} applied to different choices of the colour space ${\mathbb{K}}$. For instance, we may observe that the Fleming-Viot process given in \eqref{eq:FV assoc to (Xt,X0)} is precisely the Fleming-Viot multi-colour process we obtain by taking ${\mathbb{K}}=\bar D$. Consequently, our particle representation for $\phi$ (Corollary \ref{cor:representation for right e-fn}) will be obtained by taking the colour space ${\mathbb{K}}$ to be the spatial domain $\bar D$ in Theorem \ref{theo:Convergence to FV diffusion}. We will obtain a separation of timescales: whilst Theorem \ref{theo:Convergence to FV diffusion} will give that the empirical measure of the colours evolves on a slow ${\mathcal{O}}(N)$ timescale, \eqref{eq:measure of mNE} will give that the particles corresponding to any particular colour converge to the QSD $\pi$ on a fast ${\mathcal{O}}(1)$ timescale. A similar separation of timescales has been obtained by Méléard and Tran in \cite{Meleard2012}.
There they considered the evolution of traits in a population of individuals (without spatial structure), where the individuals give birth (passing on their trait) and die in an age-dependent manner, and interact with each other through the effect of the common empirical measure of their traits upon their death rate (representing competition for resources). They found that the age component evolves on a fast timescale, converging to a deterministic equilibrium (which is dependent upon the traits), while the trait distributions evolve on a slow timescale, converging to a superprocess over the slow timescale as the population converges to infinity. For each $1\leq i\leq N$ the colour $\eta^i_t$ tracks ancestral information about the corresponding particle $X^i_t$ as the particle system $\vec{X}^N_t$ evolves forwards in time. If we track the ancestries of the particles backwards in time, we obtain a coalescent process. A scaling limit for this backwards-in-time coalescent has been established in a similar setting by Brown, Jenkins, Johansen and Koskela in \cite[Theorem 3.2]{Brown2021}. They considered the genealogy of a sequential Markov chain Monte Carlo algorithm. This captures the phenomenon of ancestral degeneracy, which has a substantial impact on the performance of the algorithm. They established that the genealogy converges to Kingman's coalescent (which is dual to the Wright-Fisher superprocess \cite[Appendix A]{Labbe2013}) as the number of particles goes to infinity and time is suitably rescaled. We conjecture that Kingman's coalescent can similarly be obtained as the scaling limit of the ancestries of the Fleming-Viot process. In population genetics, the variance effective population size refers to the size of an idealised, spatially unstructured population with the same genetic drift per generation.
For a variety of reasons, this effective population size is generally observed to be considerably less than the census population size (the actual number of individuals in the population) \cite{Frankham1995}. We will see that spatial inhomogeneity has the effect of reducing the effective population size of the Fleming-Viot multi-colour process according to a simple formula. To calculate the variance effective population size, we compare the Fleming-Viot multi-colour process to the Wright-Fisher model, a classical model from population genetics with non-overlapping generations. We shall obtain in Theorem \ref{theo:Convergence to FV diffusion} that, after rescaling time by $N$, the Fleming-Viot multi-colour process converges to a Wright-Fisher superprocess with a certain parameter $\Theta$ (which we shall define in \eqref{eq:const theta}). We recall that $(\pi,-\lambda,\phi)$ is the principal eigentriple of the infinitesimal generator $L$. It is straightforward to combine \cite[Theorem 2.2]{Villemonais2011} and \cite[$1^{\text{st}}$ eq. on p.450]{Villemonais2011} with Theorem \ref{theo:convergence to QSD for reflected diffusion with soft killing} to establish that individuals in the Fleming-Viot multi-colour process die, on average, $\lambda$ times per unit time. We should therefore take a Wright-Fisher model with $\lambda$ generations per unit time. If we take such a Wright-Fisher model with population $\Big\lfloor\frac{1}{2}\Big(\frac{\lvert\lvert \phi\rvert\rvert_{L^1(\pi)}}{\lvert\lvert \phi\rvert\rvert_{L^2(\pi)}}\Big)^2N\Big\rfloor$, then after rescaling time by $N$ we see that
\[
\vec{\eta}^{\text{Wright-Fisher},\Big\lfloor\frac{1}{2}\Big(\frac{\lvert\lvert \phi\rvert\rvert_{L^1(\pi)}}{\lvert\lvert \phi\rvert\rvert_{L^2(\pi)}}\Big)^2N\Big\rfloor}_{Nt}
\]
converges to a Wright-Fisher superprocess with the same parameter $\Theta$.
Thus the Fleming-Viot multi-colour process has a variance effective population size of
\[
N_{\text{eff}}\sim \frac{1}{2}\Big(\frac{\lvert\lvert \phi\rvert\rvert_{L^1(\pi)}}{\lvert\lvert \phi\rvert\rvert_{L^2(\pi)}}\Big)^2N.
\]
The Wright-Fisher superprocess also arises as the limit of suitably rescaled Moran processes, which are similar to the Fleming-Viot multi-colour processes we consider here. In particular, if we remove the spatial structure from the Fleming-Viot multi-colour process, so that each individual is killed at fixed Poisson rate $\lambda$ rather than position-dependent rate $\kappa(x)$, we obtain the Moran model $\vec{\eta}^{\text{Moran},N}$. This Moran model has a variance effective population size of
\[
N_{\text{eff}}^{\text{Moran}}\sim \frac{1}{2}N.
\]
Thus the incorporation of spatial structure multiplies the variance effective population size by
\begin{equation}\label{eq:effect of spatial structure on effective population}
\frac{\lvert\lvert \phi\rvert\rvert_{L^1(\pi)}^2}{\lvert\lvert \phi\rvert\rvert_{L^2(\pi)}^2}\leq 1.
\end{equation}
Equivalently, with the population and number of generations per unit time being kept constant, the incorporation of spatial inhomogeneity multiplies the rate of genetic drift by
\begin{equation}\label{eq:effect of spatial structure on genetic drift}
\frac{\lvert\lvert \phi\rvert\rvert_{L^2(\pi)}^2}{\lvert\lvert \phi\rvert\rvert_{L^1(\pi)}^2}\geq 1.
\end{equation}
We note that we have equality in \eqref{eq:effect of spatial structure on effective population} and \eqref{eq:effect of spatial structure on genetic drift} if and only if the killing rate $\kappa$ is everywhere constant, which is equivalent to the removal of spatial structure. In population genetics, if eventually all individuals in a given population have the same variant of a particular gene (allele), we say this allele is fixed.
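The inequality $\lvert\lvert \phi\rvert\rvert_{L^1(\pi)}^2\leq \lvert\lvert \phi\rvert\rvert_{L^2(\pi)}^2$ is an instance of the Cauchy-Schwarz inequality (equivalently, Jensen's inequality applied to the probability measure $\pi$). The following toy numerical check, not from the paper, samples random discrete measures $\pi$ and positive functions $\phi$ and confirms that the factor lies in $(0,1]$, with equality for constant $\phi$:

```python
import numpy as np

rng = np.random.default_rng(2)

def factor(pi, phi):
    """Spatial-inhomogeneity factor ||phi||_{L1(pi)}^2 / ||phi||_{L2(pi)}^2
    for a discrete probability measure pi and positive function phi."""
    l1 = np.sum(pi * np.abs(phi))        # ||phi||_{L1(pi)}
    l2 = np.sqrt(np.sum(pi * phi**2))    # ||phi||_{L2(pi)}
    return (l1 / l2) ** 2

for _ in range(100):
    w = rng.random(10)
    pi = w / w.sum()                     # random discrete probability measure
    phi = rng.random(10) + 0.1           # random positive "eigenfunction"
    assert factor(pi, phi) <= 1.0 + 1e-12

# Equality iff phi is pi-a.e. constant
assert np.isclose(factor(np.full(4, 0.25), np.full(4, 3.0)), 1.0)
```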
The fixation probability of an allele refers to the probability that the allele will become fixed. The use of diffusion approximations to calculate fixation probabilities is classical in population genetics \cite[p.1280]{Patwa2008}. We are likewise able to use our diffusion approximation, Theorem \ref{theo:Convergence to FV diffusion}, to calculate the fixation probabilities of our colours (Theorem \ref{theo:convergence of the fixed colour}). Corollary \ref{cor:convergence of the spine} establishes that the spine of the Fleming-Viot process converges in distribution to the $Q$-process. Corollary \ref{cor:asymptotic distribution of the spine decomposition} extends this, establishing that the spine and its side branches converge in distribution to the spine decomposition of the critical branching process, thereby establishing Bieniek and Burdzy's conjecture \cite[p.3752]{Bieniek2018} in this setting. To prove this, we take a Fleming-Viot multi-colour process whose fixed colour corresponds to the spine and side branches of the Fleming-Viot process, and apply Theorem \ref{theo:convergence of the fixed colour}.
\subsection{Overview of the Paper}
In Section \ref{section:FV multi-colour chapter, def and background} we present the necessary background and definitions for our general results on the Fleming-Viot multi-colour process, before presenting these main results (theorems \ref{theo:Convergence to FV diffusion} and \ref{theo:convergence of the fixed colour}) in Section \ref{section:Main results}. In sections \ref{section:principal right e-fn approximation}-\ref{section:Skeleton of the FV process} we present the corollaries of our general results; the proofs of these corollaries using theorems \ref{theo:Convergence to FV diffusion} and \ref{theo:convergence of the fixed colour} are presented immediately after their statements in the relevant sections.
In Section \ref{section:principal right e-fn approximation} we present our first application: a particle representation for the principal right eigenfunction $\phi$ of the infinitesimal generator $L$. In Section \ref{section:Q-process approximation} we present our second application: a representation for the $Q$-process. In the same section we present Corollary \ref{cor:convergence of the spine}, providing for convergence in distribution of the spine of the Fleming-Viot process to the $Q$-process. We will see that corollaries \ref{cor:representation for Q-process} and \ref{cor:convergence of the spine} are obtained by applying theorems \ref{theo:Convergence to FV diffusion} and \ref{theo:convergence of the fixed colour} respectively to the same Fleming-Viot multi-colour process. Whilst Corollary \ref{cor:convergence of the spine} provides for convergence of the spine of the Fleming-Viot process, Bieniek and Burdzy conjectured convergence of both the spine and its side branches to the classical spinal decomposition of the corresponding critical branching process. Corollary \ref{cor:asymptotic distribution of the spine decomposition} in Section \ref{section:Skeleton of the FV process} establishes this. Note that whilst the proofs of corollaries \ref{cor:convergence of the spine} and \ref{cor:asymptotic distribution of the spine decomposition} use precisely the same argument, the proof of Corollary \ref{cor:asymptotic distribution of the spine decomposition} also requires background on branching processes as well as Theorem \ref{theo:convergence of augmented DHPs}. The first-time reader is therefore directed towards the proof of Corollary \ref{cor:convergence of the spine} to understand our strategy to prove Bieniek and Burdzy's conjecture. We recall that the particles in our Fleming-Viot multi-colour process possess a spatial position $X^i_t$ and colour $\eta^i_t$ at time $t$. 
The key idea in proving our general theorems (theorems \ref{theo:Convergence to FV diffusion} and \ref{theo:convergence of the fixed colour}) will be to consider the quantity
\begin{equation}\label{eq:tilted empirical measure}
{\mathcal{Y}}^N_t=\frac{\frac{1}{N}\sum_{i=1}^N\phi(X^{i}_t)\delta_{\eta^{i}_t}}{\frac{1}{N}\sum_{i=1}^N\phi(X^{i}_t)},
\end{equation}
corresponding to tilting the empirical measure of the colours of the Fleming-Viot multi-colour process using the principal right eigenfunction evaluated at the corresponding spatial positions. We will see that ${\mathcal{Y}}^N_t$ evolves on an ${\mathcal{O}}(N)$ timescale in a manner we characterise in Theorem \ref{theo:characterisation of Y}. Two heuristic explanations motivating the choice of ${\mathcal{Y}}^N_t$, each suggesting that ${\mathcal{Y}}^N_t$ should evolve on an ${\mathcal{O}}(N)$ timescale, are given in Section \ref{section: characterisation of Y}. We shall then prove Theorem \ref{theo:characterisation of Y} in the same section by performing the requisite calculations. These calculations make heavy use of notation that we introduce in Section \ref{section:O and U notation}. With Theorem \ref{theo:characterisation of Y} at hand, the proofs of theorems \ref{theo:Convergence to FV diffusion} and \ref{theo:convergence of the fixed colour} in sections \ref{section:proof of convergence to FV diffusion} and \ref{section:proof of convergence of the fixed colour} respectively are fairly straightforward; these theorems in turn imply corollaries \ref{cor:representation for right e-fn}, \ref{cor:representation for Q-process} and \ref{cor:convergence of the spine}. Finally, in Section \ref{section:convergence of augmented DHPs} we prove Theorem \ref{theo:convergence of augmented DHPs}, an extension of \cite[Theorem 4.2]{Bieniek2018} which allows us to prove Corollary \ref{cor:asymptotic distribution of the spine decomposition} using Theorem \ref{theo:convergence of the fixed colour}.
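As a concrete illustration, the tilted measure \eqref{eq:tilted empirical measure} is simple to compute for a finite particle configuration. The following Python sketch is an illustration only (the eigenfunction \texttt{phi} and the particle data below are placeholder stand-ins, not quantities from the paper); it returns the weight that ${\mathcal{Y}}^N_t$ assigns to each colour.

```python
import numpy as np

def tilted_colour_measure(positions, colours, phi):
    """Weights assigned to each colour by the phi-tilted empirical measure.

    positions : particle positions X^i_t
    colours   : particle colours eta^i_t (hashable labels)
    phi       : callable, the principal right eigenfunction (placeholder here)
    Returns a dict {colour: weight}, with the weights summing to 1.
    """
    w = np.array([phi(x) for x in positions], dtype=float)
    w /= w.sum()  # normalise by (1/N) * sum_i phi(X^i_t)
    out = {}
    for c, wi in zip(colours, w):
        out[c] = out.get(c, 0.0) + wi
    return out

# Toy example: 4 particles, 2 colours, phi(x) = 1 + x as a stand-in eigenfunction.
weights = tilted_colour_measure([0.0, 1.0, 1.0, 0.0], ["a", "b", "b", "a"],
                                lambda x: 1.0 + x)
```

Here colour \texttt{b} receives weight $2/3$ rather than $1/2$, since its particles sit where the stand-in eigenfunction is larger.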
\section{Definitions and Background}\label{section:FV multi-colour chapter, def and background}
For a given topological space ${\bf S}$ we write ${\mathcal{B}}({\bf S})$ for the Borel $\sigma$-algebra on ${\bf S}$, and write ${\mathcal{P}}({\bf S})$ for the space of probability measures on ${\mathcal{B}}({\bf S})$, equipped with the topology of weak convergence of measures. We write $\mathcal{M}({\bf S})$ for the space of all bounded Borel measurable functions on ${\bf S}$. For a general separable metric space $({\bf S},d)$ we let $\text{W}_1$ denote the Wasserstein-$1$ metric on $\mathcal{P}({\bf S})$ generated by the metric $d\wedge 1$, which metrises the topology of weak convergence on ${\mathcal{P}}({\bf S})$ \cite[Theorem 6]{Gibbs2002}. We write $\mathcal{P}_{\text{W}_1}({\bf S})$ for the metric space $(\mathcal{P}({\bf S}),\text{W}_1)$. We prove the following proposition in the appendix.
\begin{prop}\label{prop:convergence in W in prob equivalent to weakly prob}
Let $({\bf S},d)$ be a separable metric space. Let $(\mu_n)_{n\geq 1}$ be a sequence of ${\mathcal{P}}({\bf S})$-valued random measures, and let $\mu\in{\mathcal{P}}({\bf S})$ be a deterministic measure. Then the following are equivalent:
\begin{enumerate}
\item $\text{W}_1(\mu_n,\mu)\overset{p}{\rightarrow} 0$ as $n\rightarrow\infty$;\label{enum:equiv prop conv in W}
\item $\langle \mu_n,f\rangle\overset{p}{\rightarrow} \langle \mu,f\rangle$ as $n\rightarrow\infty$ for all $f\in C_b({\bf S})$.\label{enum:equiv prop conv against test fns}
\end{enumerate}
\end{prop}
We will introduce another metric on ${\mathcal{P}}({\bf S})$ in Subsection \ref{subsection:Weak atomic metric}, the weak atomic metric $\text{W}_a$.
We consider an open, connected, bounded and non-empty domain $D\subseteq {\mathbb R}^d$ with closure $\bar D$ and $C^{\infty}$ boundary $\partial D$ having inward unit normal $\vec{n}(x)$ $(x\in \partial D)$, along with a separate cemetery state $\partial=\{0\}$. We take $C^{\infty}$ functions $\kappa:\bar D\rightarrow {\mathbb R}_{>0}$, $b:\bar D\rightarrow {\mathbb R}^d$ and $\sigma:\bar D\rightarrow M_{d\times m}$ (where $M_{d\times m}$ is the set of real $d\times m$ matrices) with $\sigma\sigma^T$ uniformly positive definite. Note in particular that $\kappa$ is assumed to be everywhere positive (though some of our results will not require this assumption). We consider a normally reflected diffusion $(X^0_t)_{0\leq t<\infty}$ in the domain $\bar D$ corresponding to a solution of the Skorokhod problem. In particular, for any filtered probability space on which is defined an $m$-dimensional Brownian motion $W_t$ and any initial condition $x\in\bar D$, there exists by \cite[Theorem 3.1]{Lions1984a} a pathwise unique strong solution of the Skorokhod problem
\begin{equation}\label{eq:Skorokhod problem}
X^0_t=x+\int_0^tb(X^0_s)ds+\int_0^t\sigma(X^0_s)dW_s+\int_0^t\vec{n}(X^0_s)d\xi_s\in \bar D,\quad 0\leq t<\infty,\quad \int_0^{\infty}{\mathbbm{1}}_D(X^0_t)d\xi_t=0,
\end{equation}
where the local time $\xi_t$ is a non-decreasing process with $\xi_0=0$. This corresponds to a solution of the submartingale problem introduced by Stroock and Varadhan \cite{Stroock1971}, and is a Feller process \cite[Theorem 5.8, Remark 2]{Stroock1971} (hence strong Markov).
It is then straightforward (using a separate probability space on which is defined an exponential random variable) to construct an enlarged filtered probability space on which $(X^0,W,\xi)$ is a solution of the Skorokhod problem and on which there is a stopping time $\tau_{\partial}$ corresponding to the ringing time of a Poisson clock with position-dependent rate $\kappa(X^0_t)$, from which is constructed the killed process $(X_t)_{0\leq t<\tau_{\partial}}$. This killed process is a solution to
\begin{equation}\label{eq:killed process SDE}
{\mathbbm{1}}(t\geq \tau_{\partial})-\int_0^{t\wedge \tau_{\partial}}\kappa(X^0_s)ds\quad\text{is a martingale},\quad X_t:=\begin{cases} X^0_t,\quad t<\tau_{\partial},\\ 0,\quad t\geq \tau_{\partial}. \end{cases}
\end{equation}
Since $X^0_t$ is Feller, the process $X_t$ is also Feller (hence strong Markov). We write $L^0$ and $L=L^0-\kappa$ for the infinitesimal generators of $X^0$ and $X$ respectively, having the same domains $\mathcal{D}(L^0)=\mathcal{D}(L)$. We further define the carr\'e du champ operator $\Gamma_0$ on the algebra ${\mathcal{A}}$,
\begin{equation}\label{eq:Carre du champs}
\begin{split}
\Gamma_0(f,g):=L^0(fg)-fL^0(g)-gL^0(f),\quad \Gamma_0(f):=\Gamma_0(f,f),\\
f,g\in {\mathcal{A}}:=\{f\in C^2(\bar D):\vec{n}\cdot \nabla f\equiv 0\quad\text{on}\quad \partial D\},
\end{split}
\end{equation}
so that for $f\in {\mathcal{A}}$ the quadratic variation of $f(X^0)$ is given by
\[
[f(X^0)]_t=\int_0^t\Gamma_0(f)(X^0_s)ds.
\]
The following theorem is proven in the appendix by checking \cite[Assumption (A)]{Champagnat2014}. By considering the case whereby the killing rate $\kappa(x)$ is replaced by $\kappa(x)+1$, we see that it remains valid even if $\kappa$ is not assumed to be everywhere positive.
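For orientation, the carr\'e du champ may be computed explicitly: writing the generator of the reflected diffusion \eqref{eq:Skorokhod problem} in the standard form $L^0f=b\cdot\nabla f+\frac{1}{2}\mathrm{Tr}(\sigma\sigma^T\nabla^2 f)$ for $f\in{\mathcal{A}}$, the first-order and second-order terms cancel, leaving
\begin{align*}
\Gamma_0(f) &= L^0(f^2)-2fL^0f\\
&= \Big(2f\,b\cdot\nabla f+f\,\mathrm{Tr}\big(\sigma\sigma^T\nabla^2 f\big)+\nabla f^T\sigma\sigma^T\nabla f\Big)-\Big(2f\,b\cdot\nabla f+f\,\mathrm{Tr}\big(\sigma\sigma^T\nabla^2 f\big)\Big)\\
&= \nabla f^T\sigma\sigma^T\nabla f=\lvert \sigma^T\nabla f\rvert^2,
\end{align*}
the familiar square-of-the-gradient form.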
\begin{theo}\label{theo:convergence to QSD for reflected diffusion with soft killing}
There exists a quasi-stationary distribution (QSD) $\pi\in \mathcal{P}(\bar D)$ for $X_t$ and constants $C<\infty$, $k>0$ such that
\begin{equation}\label{eq:exponential convergence to QSD reflected diffusion}
\lvert\lvert \mathcal{L}_{\mu}(X_t\lvert \tau_{\partial}>t)-\pi\rvert\rvert_{\text{TV}}\leq Ce^{-kt}\quad\text{for all}\quad\mu\in\mathcal{P}(\bar D)\quad\text{and}\quad t\geq 0.
\end{equation}
Moreover $\pi$ is a left eigenmeasure of $L$ with eigenvalue $-\lambda<0$,
\begin{equation}\label{eq:pi left eigenmeasure}
\langle \pi, Lf\rangle=-\lambda \langle \pi,f\rangle,\quad f\in{\mathcal{D}}(L),
\end{equation}
and with corresponding positive right eigenfunction $\phi\in {\mathcal{A}}\cap C^{2}(\bar D; {\mathbb R}_{>0})$. This right eigenfunction is unique up to rescaling and, when normalised so that $\langle \pi,\phi\rangle=1$, corresponds to the uniform limit
\begin{equation}\label{eq:phi uniform limit}
\phi(x)=\lim_{t\rightarrow\infty}e^{\lambda t}{\mathbb P}_x(\tau_{\partial}>t).
\end{equation}
\end{theo}
\begin{rmk}\label{rmk:extension to hard killing etc}
All that is essential for our proofs is that $X_t$ is a Feller process satisfying the conclusions of Theorem \ref{theo:convergence to QSD for reflected diffusion with soft killing}. However, removing the assumption that the spatial domain is compact (for instance, to allow killing at the boundary) would require controls on the particles valid over large timescales, which should be possible albeit difficult.
\end{rmk}
We define $(X^{\infty}_t)_{0\leq t<\infty}$ to be the $Q$-process, that is the process $(X_t)_{0\leq t<\infty}$ conditioned never to be killed.
The following lemma, proven in the appendix, establishes the existence of this $Q$-process and provides a characterisation of the law of $X^{\infty}$ which will prove essential to our results.
\begin{lem}\label{lem:law of Q-process given by tilting}
We write $(\Omega,{\mathcal{G}},({\mathcal{G}}_t)_{t\geq 0},(X_t)_{0\leq t<\infty},({\mathbb P}_x)_{x\in \bar D})$ for the killed Markov process defined in \eqref{eq:killed process SDE} (whereby ${\mathbb P}_x(X_0=x)=1$). There exists a family $({\mathbb P}^{\infty}_x)_{x\in \bar D}$ of probability measures on $(\Omega,{\mathcal{G}},({\mathcal{G}}_t)_{t\geq 0})$ defined by
\begin{equation}\label{eq:family of prob measures Q-process}
\lim_{t\rightarrow \infty}{\mathbb P}_x(A\lvert t<\tau_{\partial})={\mathbb P}^{\infty}_x(A)
\end{equation}
for all $A\in \cup_{s\geq 0}{\mathcal{G}}_s$. The process $(\Omega,({\mathcal{G}}_t)_{t\geq 0},(X_t)_{t\geq 0},({\mathbb P}^{\infty}_x)_{x\in \bar D})$ is then a $\bar D$-valued homogeneous strong Markov process, which we call the $Q$-process. We write $(X^{\infty}_t)_{t\geq 0}$ for this $Q$-process, that is for $(X_t)_{t\geq 0}$ under ${\mathbb P}^{\infty}$, whose law we characterise as follows. For $T<\infty$ we define
\begin{equation}\label{eq:phiT defn}
\phi^T:C([0,T];\bar D)\ni y\mapsto \phi(y_T)\in {\mathbb R}
\end{equation}
and fix $\mu\in{\mathcal{P}}(\bar D)$. Then we have
\begin{equation}\label{eq:law of Q-process expression}
{\mathcal{L}}_{\mu^{\phi}}((X^{\infty}_t)_{0\leq t\leq T})=\big({\mathcal{L}}_{\mu}((X_t)_{0\leq t\leq T}\lvert \tau_{\partial}>T)\big)^{\phi^T}.
\end{equation}
\end{lem}
Note that uniqueness in law of the $Q$-process $X^{\infty}_t$ is immediate from the definition.
Since we have checked \cite[Assumption (A)]{Champagnat2014} (in the proof of Theorem \ref{theo:convergence to QSD for reflected diffusion with soft killing}), \cite[Theorem 3.1 (ii)]{Champagnat2014} gives us the transition kernel of $X^{\infty}$, so that in particular we may calculate that $(X^{\infty}_t)_{0\leq t<\infty}$ is a normally reflected diffusion with drift $b+\frac{\sigma\sigma^{T}\nabla \phi}{\phi}$ and diffusivity $\sigma\sigma^{T}$.
\subsection{Fleming-Viot Processes and Multi-Colour Processes}\label{subsection:FV multi-colour process}
We firstly define the Fleming-Viot process corresponding to \eqref{eq:killed process SDE}.
\begin{defin}[$N$-Particle Fleming-Viot Process]
We generate $N$ $(\geq 2)$ particles
\[
\vec{X}^N_t=(X^{N,1}_t,\ldots,X^{N,N}_t),\quad t\geq 0,
\]
in the domain $\bar D$ evolving independently according to \eqref{eq:killed process SDE}. When a particle is killed we relocate it to the position of a different particle chosen uniformly at random. We write $\tau_n$ for the $n^{\text{th}}$ time at which any particle is killed $(\tau_0:=0)$ and define $J^N_t:=\frac{1}{N}\sup\{n:\tau_n\leq t\}$ to be the number of deaths up to time $t$ normalised by $\frac{1}{N}$. We write $\tau^i_k$ for the $k^{\text{th}}$ death time of particle $X^i$ ($\tau^i_0:=0$) and $U^i_k\in\{1,\ldots,N\}\setminus\{i\}$ for the index of the particle it jumps onto at this time.
\label{defin:FV particle system reflected diffusions}
\end{defin}
This is well-defined up to the time $\tau_{\text{WD}}=\tau_{\infty}\wedge\tau_{\text{stop}}$, whereby $\tau_{\infty}=\inf\{t>0:J^N_{t-}=\infty\}$ and $\tau_{\text{stop}}=\inf\{t>0:\text{two particles are killed at time }t\}$. Since $\kappa$ is bounded, $\tau_{\text{WD}}=\infty$ almost surely. We take $({\mathbb{K}},d)$ to be a complete separable metric space, which we call the colour space.
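Before adding colours, we note that the kill-and-relocate dynamics of Definition \ref{defin:FV particle system reflected diffusions} are easy to sketch numerically. The following toy Python simulation uses our own illustrative choices, not the general setting of the paper: a one-dimensional domain $[0,1]$, $b=0$, $\sigma=1$, a constant killing rate $\kappa$, and Euler time-stepping.

```python
import numpy as np

def simulate_fleming_viot(N=50, T=1.0, dt=1e-3, kappa=1.0, seed=0):
    """Toy Fleming-Viot simulation: N Brownian particles reflected into [0,1];
    each is killed with probability ~ kappa*dt per step, and a killed particle
    is relocated to the position of a uniformly chosen other particle."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=N)
    deaths = 0
    for _ in range(int(T / dt)):
        x = x + np.sqrt(dt) * rng.standard_normal(N)  # Euler step, b=0, sigma=1
        x = np.abs(x)                # reflect at the boundary 0
        x = 1.0 - np.abs(1.0 - x)    # reflect at the boundary 1
        for i in np.flatnonzero(rng.random(N) < kappa * dt):
            j = rng.integers(N - 1)
            j = j if j < i else j + 1  # uniform over the *other* N-1 particles
            x[i] = x[j]
            deaths += 1
    return x, deaths

positions, deaths = simulate_fleming_viot(N=20, T=0.5, seed=1)
```

The relocation step is exactly the mechanism described above; everything else (domain, coefficients, discretisation) is an ad hoc simplification for illustration.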
Introduced by Grigorescu and Kang \cite[Section 5.1]{Grigorescu2012}, the Fleming-Viot multi-colour process is defined as follows.
\begin{defin}[Fleming-Viot Multi-Colour Process]
We define $(\vec{X}^N_t,\vec{\eta}^N_t)_{0\leq t<\infty}\linebreak =\{(X^{N,i}_t,\eta^{N,i}_t)_{0\leq t<\infty}:i=1,\ldots,N\}$ as follows:
\begin{equation}
\left \{
\begin{split}
(i) & \quad \text{Initial condition: $((X^{N,1}_0,\eta^{N,1}_0),\ldots,(X^{N,N}_0,\eta^{N,N}_0))\sim \upsilon^N\in \mathcal{P}((\bar D\times{\mathbb{K}})^N)$.}\\
(ii) & \quad \text{For $t \in [0,\infty)$ and between killing times the particles $(X^{N,i}_t,\eta^{N,i}_t)$ evolve and are}\\
& \quad\text{killed independently according to $\eqref{eq:killed process SDE}$ in the first variable, and are constant in}\\
& \quad\text{the second variable.}\\
(iii) & \quad \text{We write $\tau^i_k$ for the death times of particle $(X^{N,i},\eta^{N,i})$ (with $\tau^i_0:=0$). When }\\
& \quad\text{particle $(X^{N,i},\eta^{N,i})$ is killed at time $\tau^i_k$ it jumps to the location of particle}\\
& \quad\text{$j=U^i_k\in\{1,\ldots,N\}\setminus\{i\}$ chosen independently and uniformly at random, at}\\
& \quad \text{which time we set $(X^{N,i}_{\tau^i_k},\eta^{N,i}_{\tau^i_k})=(X^{N,j}_{\tau^i_k-},\eta^{N,j}_{\tau^i_k-})$. Moreover we write $\tau_n$ for the $n^{\text{th}}$}\\
& \quad \text{time at which any particle is killed (with $\tau_0:=0$).}
\end{split}
\right.
\label{eq:N-particle m label (X,eta) system}
\end{equation}
We then define
\[
J^N_t:=\frac{1}{N}\sup\{n>0:\tau_n\leq t\}
\]
to be the number of deaths up to time $t$ normalised by $\frac{1}{N}$, and define the empirical measures
\begin{equation}\label{eq:spatial and colour empirical measures}
m^N_t:=\frac{1}{N}\sum_{i=1}^N\delta_{X^{N,i}_t}\quad\text{and}\quad \chi^{N}_t:=\frac{1}{N}\sum_{i=1}^N\delta_{\eta^{N,i}_t}.
\end{equation}
We further define the fixation time
\[
\tau^N_{\text{fix}}=\inf\{t>0:\eta^{N,1}_t=\ldots=\eta^{N,N}_t\},
\]
and write $\iota^N$ for the fixed colour
\[
\iota^N=\eta^{N,1}_{\tau^N_{\text{fix}}}=\ldots=\eta^{N,N}_{\tau^N_{\text{fix}}}.
\]
\label{defin:Multi-Colour Process}
\end{defin}
\begin{rmk}\label{rmk:fixation time finite almost surely}
Since $\kappa$ is bounded and uniformly bounded away from $0$, it is easy to see that $\tau^N_{\text{fix}}<\infty$ with probability $1$.
\end{rmk}
As with the Fleming-Viot process, this is well-defined since $\kappa$ is bounded.
\subsection{The Weak Atomic Metric}\label{subsection:Weak atomic metric}
Convergence in our scaling limit shall be given in terms of the weak atomic metric, introduced by Ethier and Kurtz \cite{Ethier1994}. We shall define the weak atomic metric on the colour space $({\mathbb{K}},d)$ (which we recall is assumed to be a Polish space). In \cite{Ethier1994}, Ethier and Kurtz defined the weak atomic metric on the space of all finite, positive, Borel measures, whereas we restrict our attention to probability measures on ${\mathbb{K}}$. We fix $\Psi(u)=(1-u)\vee 0$ and define the weak atomic metric to be
\begin{equation}
\text{W}_a(\mu,\nu)=\text{W}_1(\mu,\nu)+\sup_{0<\epsilon\leq 1}\Big\lvert \int_{{\mathbb{K}}}\int_{{\mathbb{K}}}\Psi\Big(\frac{d(x,y)}{\epsilon}\Big)\mu(dx)\mu(dy)-\int_{{\mathbb{K}}}\int_{{\mathbb{K}}}\Psi\Big(\frac{d(x,y)}{\epsilon}\Big)\nu(dx)\nu(dy)\Big\rvert.
\label{eq:Weak-atomic metric}
\end{equation}
In \cite{Ethier1994} the L\'evy-Prokhorov metric is used in place of $\text{W}_1$, and $\Psi$ is allowed to be an arbitrary continuous, non-increasing function such that $\Psi(0)=1$ and $\Psi(1)=0$.
We make the above choices for simplicity (note that $\text{W}_1$ is equivalent to the L\'evy-Prokhorov metric \cite[Theorem 2]{Gibbs2002}). Convergence in the weak atomic metric is equivalent to weak convergence together with convergence of the locations and sizes of the atoms \cite[Lemma 2.5]{Ethier1994}.
\begin{lem}[Lemma 2.5, \cite{Ethier1994}]\label{lem:Weak atomic metric = conv of atoms and weak}
Consider a sequence of probability measures $(\mu_n)_{n=1}^{\infty}$ and a probability measure $\mu$, all in ${\mathcal{P}}({\mathbb{K}})$. The following are equivalent:
\begin{enumerate}
\item $\text{W}_a(\mu_n,\mu)\rightarrow 0$ as $n\rightarrow\infty$.
\item We have both of the following:
\begin{enumerate}
\item $\text{W}_1(\mu_n,\mu)\rightarrow 0$ as $n\rightarrow\infty$;
\item there exists an ordering of the atoms $\{\alpha_i\delta_{x_i}\}$ of $\mu$ and of the atoms $\{\alpha^n_i\delta_{x^n_i}\}$ of $\mu_n$ such that $\alpha_1\geq \alpha_2\geq \ldots$ and $\lim_{n\rightarrow\infty}(\alpha^n_i,x^n_i)=(\alpha_i,x_i)$ for all $i$.
\end{enumerate}
\end{enumerate}
\end{lem}
Thus measures are close in the weak atomic metric if and only if they are both close in the Wasserstein-$1$ metric $\text{W}_1$ and have similar atoms.
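To make the atomic part of the metric \eqref{eq:Weak-atomic metric} concrete, the following Python sketch evaluates its second term under our own simplifying choices (purely atomic measures on the real line, with the supremum over $\epsilon$ approximated on a finite grid); it is an illustration only.

```python
import numpy as np

def psi(u):
    # Psi(u) = max(1 - u, 0), as fixed in the definition of the metric.
    return np.maximum(1.0 - u, 0.0)

def self_correlation(atoms, eps):
    """Double integral of Psi(d(x,y)/eps) against mu x mu for a purely atomic
    measure mu = sum_i w_i * delta_{x_i} on the real line.

    atoms : list of (weight, location) pairs."""
    w = np.array([a[0] for a in atoms])
    x = np.array([a[1] for a in atoms])
    d = np.abs(x[:, None] - x[None, :])
    return float(w @ psi(d / eps) @ w)

def atomic_discrepancy(mu, nu, n_grid=1000):
    """Crude numerical stand-in for the supremum over 0 < eps <= 1 in the
    second term of the weak atomic metric, taken over a finite grid."""
    grid = np.linspace(1e-4, 1.0, n_grid)
    return max(abs(self_correlation(mu, e) - self_correlation(nu, e)) for e in grid)

# delta_0 versus (1/2) delta_0 + (1/2) delta_1: the self-correlations are
# 1 and 1/2 respectively for every eps <= 1, so the discrepancy is 1/2.
gap = atomic_discrepancy([(1.0, 0.0)], [(0.5, 0.0), (0.5, 1.0)])
```

The discrepancy here detects the differing atom sizes: the diagonal terms contribute the sum of squared atom weights for every $\epsilon$, which is exactly the sensitivity to atoms described above.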
For instance, $\frac{1}{2}\text{Leb}_{[0,1]}+\frac{1}{2}\delta_{\frac{1}{2}}$ is close in the weak atomic metric to $(\frac{1}{2}-\epsilon)\text{Leb}_{[0,1]}+(\frac{1}{2}+\epsilon)\delta_{\frac{1}{2}+\epsilon}$ (for small $\epsilon>0$), but not to $\frac{1}{2}\text{Leb}_{[0,1]}+(\frac{1}{4}\delta_{\frac{1}{2}-\epsilon}+\frac{1}{4}\delta_{\frac{1}{2}+\epsilon})$ (as the atoms are quite different), nor to $\frac{1}{3}\text{Leb}_{[0,1]}+\frac{2}{3}\delta_{\frac{1}{2}}$ (as they are $\text{W}_1$-far apart). It will be useful to be able to characterise tightness in both ${\mathcal{P}}({\mathcal{P}}_{\text{W}_a}({\mathbb{K}}))$ and ${\mathcal{P}}(D([0,T];{\mathcal{P}}_{\text{W}_a}({\mathbb{K}})))$. We note that ${\mathcal{B}}({\mathcal{P}}({\mathbb{K}}))={\mathcal{B}}({\mathcal{P}}_{\text{W}_a}({\mathbb{K}}))$ \cite[p.5]{Ethier1994}, so that probability measures in ${\mathcal{P}}({\mathcal{P}}_{\text{W}_a}({\mathbb{K}}))$ are probability measures in ${\mathcal{P}}({\mathcal{P}}({\mathbb{K}}))$ and vice-versa. Ethier and Kurtz established in \cite[Lemma 2.9]{Ethier1994} the following tightness criterion.
\begin{lem}[Lemma 2.9, \cite{Ethier1994}]\label{lem:relatively compact family of measures in weak atomic topology}
Consider a sequence of measures $(\mu_n)_{n=1}^{\infty}$ in ${\mathcal{P}}({\mathcal{P}}({\mathbb{K}}))$. Then the following are equivalent:
\begin{enumerate}
\item $(\mu_n)_{n=1}^{\infty}$ is tight in ${\mathcal{P}}({\mathcal{P}}_{\text{W}_a}({\mathbb{K}}))$.
\item $(\mu_n)_{n=1}^{\infty}$ is tight in ${\mathcal{P}}({\mathcal{P}}_{\text{W}_1}({\mathbb{K}}))$ and we also have
\begin{equation}\label{eq:Compact containment condition}
\sup_n{\mathbb E}\Big[\int_{{\mathbb{K}}}\int_{{\mathbb{K}}}\Psi\Big(\frac{d(x,y)}{\epsilon}\Big){\mathbbm{1}}(x\neq y)\mu_n(dx)\mu_n(dy)\Big]\rightarrow 0\quad\text{as}\quad \epsilon\rightarrow 0.
\end{equation}
\end{enumerate}
\end{lem}
Note that the above statement is slightly different from the statement of \cite[Lemma 2.9]{Ethier1994}. It is straightforward to see that the two statements are equivalent for families of probability measures; for our purposes this statement will be easier to use. Ethier and Kurtz also established in \cite[Theorem 2.12]{Ethier1994} a tightness criterion for sets of probability measures on $D([0,T];{\mathcal{P}}_{\text{W}_a}({\mathbb{K}}))$. We will make use of a version of this criterion which they provide in \cite[Remark 2.13]{Ethier1994}.
\begin{theo}[Theorem 2.12 and Remark 2.13, \cite{Ethier1994}]\label{theo:compact containment condition}
Consider $((\mu^n_t)_{0\leq t\leq T})_{n=1}^{\infty}$, a sequence of processes with sample paths in $D([0,T];{\mathcal{P}}_{\text{W}_a}({\mathbb{K}}))$. Then the following are equivalent:
\begin{enumerate}
\item $\{{\mathcal{L}}((\mu^n_t)_{0\leq t\leq T})\}$ is tight in ${\mathcal{P}}(D([0,T];{\mathcal{P}}_{\text{W}_a}({\mathbb{K}})))$.
\item $\{{\mathcal{L}}((\mu^n_t)_{0\leq t\leq T})\}$ is tight in ${\mathcal{P}}(D([0,T];{\mathcal{P}}_{\text{W}_1}({\mathbb{K}})))$ and for each $\delta>0$ there exists $\epsilon>0$ such that
\begin{equation}\label{eq:Compact containment condition processes}
\liminf_{n\rightarrow\infty}{\mathbb P}\Big[\sup_{0\leq t\leq T}\Big(\int_{{\mathbb{K}}}\int_{{\mathbb{K}}}\Psi\Big(\frac{d(x,y)}{\epsilon}\Big){\mathbbm{1}}(x\neq y)\mu^n_t(dx)\mu^n_t(dy)\Big)\leq \delta\Big]\geq 1-\delta.
\end{equation}
\end{enumerate}
\end{theo}
\subsection{The Wright-Fisher Superprocess}
Our scaling limit will be given by the Fleming-Viot superprocess, a generalisation of the Wright-Fisher diffusion introduced by Fleming and Viot \cite{Fleming1979}. To avoid confusion with the Fleming-Viot particle processes which are the subject of this paper, we instead refer to these as Wright-Fisher superprocesses. We recall that, given a gene with two alleles $a$ and $A$, the SDE
\[
dp_t=\sqrt{p_t(1-p_t)}dW_t
\]
models the evolution of the proportion $p_t$ of the population carrying the $a$-allele. This is the classical Wright-Fisher diffusion. Generalising this to $n$ alleles, the driftless $n$-type Wright-Fisher diffusion of rate $\theta>0$ takes values in the simplex $\Delta_n:=\{p=(p_1,\ldots,p_n)\in {\mathbb R}_{\geq 0}^n:\sum_j p_j=1\}$ and is characterised by the generator
\begin{equation}
L_{\text{WF}}=\frac{1}{2}\theta\sum_{i,j=1}^np_i(\delta_{ij}-p_j)\frac{\partial^2}{\partial p_i\partial p_j},\quad \mathcal{D}(L_{\text{WF}})=C^2({\mathbb R}^n).
\label{eq:generator of K dim WF-diff}
\end{equation}
This was generalised by Fleming and Viot \cite{Fleming1979} to a probability measure-valued process, the Wright-Fisher superprocess, modelling the evolution of the proportions of infinitely many alleles.
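As a numerical aside, the $n$-type diffusion \eqref{eq:generator of K dim WF-diff} is straightforward to simulate. The following Euler-Maruyama sketch is an illustration only; the clip-and-renormalise projection keeping the state on the simplex is our ad hoc choice, not a construction used in the paper.

```python
import numpy as np

def wright_fisher_step(p, theta, dt, rng):
    """One Euler-Maruyama step for the driftless n-type Wright-Fisher diffusion
    with covariance matrix theta * (diag(p) - p p^T), followed by an ad hoc
    projection (clip and renormalise) keeping the state on the simplex."""
    C = theta * (np.diag(p) - np.outer(p, p))
    # Positive semi-definite square root via eigendecomposition
    # (C is singular, since its rows sum to zero).
    vals, vecs = np.linalg.eigh(C)
    root = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    p = p + np.sqrt(dt) * (root @ rng.standard_normal(len(p)))
    p = np.clip(p, 0.0, None)
    return p / p.sum()

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])
for _ in range(1000):
    p = wright_fisher_step(p, theta=1.0, dt=1e-4, rng=rng)
```

Since the covariance degenerates as any coordinate approaches $0$ or $1$, the simulated proportions remain in the simplex up to the discretisation error absorbed by the projection.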
We shall define this process on ${\mathcal{P}}({\mathbb{K}})$ as the solution of a martingale problem, whereby the Polish space $({\mathbb{K}},d)$ is the colour space defined in Subsection \ref{subsection:FV multi-colour process}. We shall then present some basic facts about the Wright-Fisher superprocess; Property \ref{enum:MVWF WF on meas subsets} provides intuition for how one may think of it.
\begin{defin}[Wright-Fisher superprocess]
A Wright-Fisher superprocess on $\mathcal{P}({\mathbb{K}})$ of rate $\theta> 0$ with initial condition $\nu^0\in{\mathcal{P}}({\mathbb{K}})$ is a c\`adl\`ag $\mathcal{P}({\mathbb{K}})$-valued process $(\nu_t)_{0\leq t<\infty}$ such that $\nu_0:=\nu^0$ and which is characterised by the generator
\begin{equation}
(\mathcal{L}\varphi)(\nu)=\frac{1}{2}\theta\int_{{\mathbb{K}}}\int_{{\mathbb{K}}}\frac{\delta^2 \varphi(\nu)}{\delta \nu(x)\delta \nu(y)}\nu(dx)\big(\delta_x(dy)-\nu(dy)\big),
\label{eq:generator of MVWF diff}
\end{equation}
whereby the domain is given by
\begin{equation}
\mathcal{D}(\mathcal{L})=\{\varphi:\varphi(\nu)\equiv F(\langle f_1,\nu\rangle,\ldots,\langle f_n,\nu\rangle),F\in C^2({\mathbb R}^n),f_i\in C_b({\mathbb{K}})\},\quad \langle g,\nu\rangle:=\int_{{\mathbb{K}}}g\,d\nu,
\end{equation}
and $\frac{\delta \varphi(\nu)}{\delta \nu(x)}$ is the G\^ateaux derivative $\frac{\delta \varphi(\nu)}{\delta \nu(x)}:=\lim_{\epsilon\rightarrow 0}\frac{\varphi(\nu+\epsilon \delta_x)-\varphi(\nu)}{\epsilon}$, so that
\begin{equation}
\frac{\delta^2 \varphi(\nu)}{\delta \nu(x)\delta \nu(y)}=\sum_{i,j=1}^n\frac{\partial^2F}{\partial p_i\partial p_j}\Big\lvert_{p_k=\langle f_k,\nu \rangle}f_i(x)f_j(y).
\label{eq:wxpression for deriv of varphi in mu}
\end{equation}
\label{defin:MVWF}
We further define the fixation time $\tau_{\text{fix}}=\inf\{t>0:\nu_t=\delta_{\alpha}\text{ for some } \alpha\in {\mathbb{K}}\}$ and write $\iota$ for the corresponding fixed colour,
\begin{equation}
\nu_t=\delta_{\iota},\quad t\geq \tau_{\text{fix}}.
\end{equation}
\end{defin}
\begin{prop}\label{prop:basic facts WF superprocess}
We fix $\nu^0\in{\mathcal{P}}({\mathbb{K}})$ and $\theta>0$. There exists a unique in law Wright-Fisher superprocess on ${\mathcal{P}}({\mathbb{K}})$ with initial condition $\nu_0=\nu^0$. This has the following properties:
\begin{enumerate}
\item \label{enum:MVWF purely atomic} Almost surely, $\nu_t$ is purely atomic with finitely many atoms for all $t>0$.
\item \label{enum:fixation times almost surely finite} The fixation time $\tau_{\text{fix}}$ is almost surely finite.
\item \label{enum:MVWF WF on meas subsets} For every finite disjoint union of measurable subsets $\dot{\cup}_{j=1}^n{\mathcal{A}}_j={\mathbb{K}}$, the process
\begin{equation}\label{eq:WF diff sect measure of disjoint sets}
(\nu_t({\mathcal{A}}_1),\ldots,\nu_t({\mathcal{A}}_n)),\quad t\geq 0,
\end{equation}
is an $n$-type Wright-Fisher diffusion of rate $\theta$.
\item \label{enum:MVWF cts} $(\nu_t)_{0\leq t<\infty}\in C([0,\infty);\mathcal{P}_{\text{W}_a}({\mathbb{K}}))\subseteq C([0,\infty);\mathcal{P}_{\text{W}_1}({\mathbb{K}}))$ almost surely.
\item \label{enum:fixed colour dist like ic} The fixed colour is distributed like the initial condition,
\begin{equation}\label{eq:law of iota = nu}
{\mathcal{L}}(\iota)=\nu^0.
\end{equation}
\end{enumerate}
\end{prop}
We note that the Wright-Fisher superprocess is usually defined via a martingale problem featuring extra terms representing mutation, selection and recombination. We will not need this generality here.
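Property \ref{enum:fixed colour dist like ic} mirrors the classical fact that a neutral allele fixes with probability equal to its initial frequency. As a quick numerical illustration of that fact (using the discrete neutral Moran model, a standard finite-population counterpart of the Wright-Fisher diffusion; our choice here is for illustration only and is not used in the paper), the empirical fixation frequency over many runs approaches the initial proportion $i_0/N$:

```python
import random

def moran_fixation_frequency(N=10, i0=3, reps=4000, seed=0):
    """Fraction of runs of the neutral Moran model, started with i0 copies of
    allele a among N individuals, in which allele a fixes; the exact fixation
    probability is i0 / N."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(reps):
        i = i0
        while 0 < i < N:
            p = i / N
            # One Moran event: a uniformly chosen individual reproduces (+1 copy
            # of a with probability p) and an independently chosen one dies
            # (-1 copy of a with probability p).
            i += int(rng.random() < p) - int(rng.random() < p)
        fixed += (i == N)
    return fixed / reps

estimate = moran_fixation_frequency()
```

The count of $a$-alleles is a bounded martingale, so optional stopping gives the fixation probability $i_0/N$ exactly; the Monte Carlo estimate fluctuates around $0.3$ here.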
At the same time, it is often assumed that the underlying space ${\mathbb{K}}$ is either compact, or at least locally compact. We will need, however, to allow ${\mathbb{K}}$ to be a (possibly not locally compact) Polish space. We will provide a proof of Proposition \ref{prop:basic facts WF superprocess} in the appendix.
\section{Main Results}\label{section:Main results}
We define the constant
\begin{equation}\label{eq:const theta}
\Theta=\frac{2\lambda \lvert\lvert \phi\rvert\rvert_{L^2(\pi)}^2}{\lvert\lvert \phi\rvert\rvert_{L^1(\pi)}^2}.
\end{equation}
We take some deterministic initial profile $\nu^0\in\mathcal{P}({\mathbb{K}})$ and fix $(\nu_t)_{0\leq t<\infty}$, a Wright-Fisher superprocess of rate $\Theta$ and initial condition $\nu^0$. We then consider a sequence of Fleming-Viot multi-colour processes $(\vec{X}^N_t,\vec{\eta}^N_t)_{0\leq t<\infty}$. We recall that the empirical measures $\chi^{N}_t$ and the tilted empirical measures ${\mathcal{Y}}^N_t$ were defined in \eqref{eq:spatial and colour empirical measures} and \eqref{eq:tilted empirical measure} respectively as
\[
\chi^{N}_t=\frac{1}{N}\sum_{i=1}^N\delta_{\eta^{N,i}_t}\quad\text{and}\quad {\mathcal{Y}}^N_t=\frac{\frac{1}{N}\sum_{i=1}^N\phi(X^{N,i}_t)\delta_{\eta^{N,i}_t}}{\frac{1}{N}\sum_{i=1}^N\phi(X^{N,i}_t)}.
\]
We assume that ${\mathcal{Y}}^N_0\rightarrow \nu^0$ in $\text{W}_a$ in probability.
\begin{theo}\label{theo:Convergence to FV diffusion}
We fix some time horizon $T<\infty$.
Then we have the following:
\begin{enumerate}
\item\label{enum:conv of measure-valued process to FV in Weak atomic metric} After rescaling time, $(\nu_t)_{t\geq 0}$ corresponds to the limit of $({\mathcal{Y}}^N_{Nt})_{t\geq 0}$,
\begin{equation}\label{eq:conv of measure-valued process to FV in Weak atomic metric}
({\mathcal{Y}}^N_{Nt})_{0\leq t\leq T}\rightarrow (\nu_t)_{0\leq t\leq T}\quad\text{in}\quad D([0,T];\mathcal{P}_{\text{W}_a}({\mathbb{K}}))\quad\text{in distribution as}\quad N\rightarrow\infty.
\end{equation}
\item\label{enum:convergence of empirical measure to FV} Moreover $(\nu_t)_{t\geq 0}$ also corresponds to the limit of $(\chi^{N}_{Nt})_{t\geq 0}$ in a weaker sense,
\begin{equation}\label{eq:convergence of empirical measure to FV}
\chi^{N}_{t_N}\rightarrow \nu_{t}\quad\text{in}\quad \text{W}_a\quad \text{in distribution as}\quad N\wedge t_N\rightarrow\infty\quad\text{with}\quad \frac{t_N}{N}\rightarrow t.
\end{equation}
\end{enumerate}
\end{theo}
\begin{rmk}
The proof of Theorem \ref{theo:Convergence to FV diffusion} does not use the positivity of $\kappa$ (see Remark \ref{rmk:positivity of kappa characterisation of Y}), so the theorem remains true if we only have $\kappa\in C^{\infty}(\bar D;{\mathbb R}_{\geq 0})$.\label{rmk:WF diff thm still true if only kappa non-neg}
\end{rmk}
\begin{rmk}
We will prove \eqref{eq:convergence of empirical measure to FV} by establishing that $\text{W}_a(\chi^N_{t_N},{\mathcal{Y}}^N_{t_N})\rightarrow 0$ as $N\rightarrow\infty$, so that the notion of convergence in \eqref{eq:convergence of empirical measure to FV} may be strengthened to convergence of the finite-dimensional distributions. Nevertheless \eqref{eq:convergence of empirical measure to FV} shall be sufficient for our purposes.
\end{rmk}
Determining fixation probabilities by way of a diffusion approximation is classical in population genetics \cite[p.1280]{Patwa2008}.
We are likewise able to use Theorem \ref{theo:Convergence to FV diffusion} to prove the following theorem, giving the asymptotic distribution of the fixed colour $\iota^N$.
\begin{theo}\label{theo:convergence of the fixed colour}
We have convergence in distribution of the fixed colour,
\begin{equation}
\iota^N\overset{d}{\rightarrow} \iota.
\end{equation}
\end{theo}
While we establish in Theorem \ref{theo:convergence of the fixed colour} convergence in distribution of the fixed colour, we do not establish convergence in distribution of the fixation times, $\tau^N_{\text{fix}}\overset{d}{\rightarrow} \tau_{\text{fix}}$, leaving this as a conjecture. Given a Fleming-Viot process $(\vec{X}^N_t)_{0\leq t<\infty}$, a colour space ${\mathbb{K}}$ and a choice of initial condition $\vec{\eta}^N_0$, the corresponding Fleming-Viot multi-colour process $(\vec{X}^N_t,\vec{\eta}^N_t)_{0\leq t<\infty}$ is uniquely determined, and corresponds to tracking information (encoded by the colours) about the ancestries of the particles as the particle system evolves forwards in time. One can also trace the ancestries of the particles backwards in time, obtaining a coalescent process. In Theorem \ref{theo:Convergence to FV diffusion} we obtained the Wright-Fisher superprocess, which is dual to Kingman's coalescent \cite[Appendix A]{Labbe2013} (a definition of which can be found in \cite[Section 2.1.1]{Berestycki2009}), as the scaling limit of the colours as $N\rightarrow\infty$ and time is rescaled. This motivates the conjecture that the coalescent process obtained by tracing the ancestries of the particles in the Fleming-Viot process backwards in time should converge, as $N\rightarrow\infty$ and time is rescaled, to Kingman's coalescent. A similar scaling limit result, \cite[Theorem 3.2]{Brown2021}, was recently established by Brown, Jenkins, Johansen and Koskela.
They considered the genealogy of a sequential Markov chain Monte Carlo algorithm, obtaining Kingman's coalescent as the scaling limit as the number of particles goes to infinity and time is suitably rescaled. This captures the phenomenon of ancestral degeneracy, which has a substantial impact on the performance of the algorithm. \section{Fleming-Viot Particle Representation for $\phi$}\label{section:principal right e-fn approximation} We take a sequence of Fleming-Viot multi-colour processes with colour space ${\mathbb{K}}=\bar D$ and initial conditions satisfying \[ \frac{1}{N}\sum_{i=1}^N\delta_{\eta^{N,i}_0}\rightarrow \mu\quad\text{weakly in probability.} \] As we explain in the introduction to this paper on page \pageref{eq:estimate for right efn using Villemonais}, we can use \cite[Theorem 2.2]{Villemonais2011} to see that \[ \lim_{\substack{N\wedge t\rightarrow \infty\\N\gg e^{2\lambda t}}}\frac{1}{N}\sum_{i=1}^N\delta_{\eta^{N,i}_t}= \mu^{\phi}. \] The requirement $N\gg e^{2\lambda t}$ is clearly problematic. We now use Theorem \ref{theo:Convergence to FV diffusion} to strengthen the notion of convergence, providing a particle representation for the principal right eigenfunction $\phi$. \begin{cor}[Particle Representation for $\phi$]\label{cor:representation for right e-fn} We fix $\mu\in{\mathcal{P}}(\bar D)$, take the space of colours to be the spatial domain ${\mathbb{K}}=\bar D$ and take a sequence of Fleming-Viot multi-colour processes $(\vec{X}^N_t,\vec{\eta}^N_t)_{0\leq t<\infty}$ such that \[ \eta^{N,i}_0:=X^{N,i}_0,\quad 1\leq i\leq N\quad\text{and}\quad m^N_0=\chi^{N}_0\rightarrow \mu\quad\text{in}\quad \text{W}_1t\quad\text{in probability as}\quad N\rightarrow\infty.
\] We then obtain from Theorem \ref{theo:Convergence to FV diffusion} that \begin{equation} \chi^{N}_t\rightarrow \mu^{\phi}\quad \text{in}\quad \text{W}_1t\quad \text{in probability as}\quad N\wedge t\rightarrow \infty\quad\text{with}\quad \frac{t}{N}\rightarrow 0. \end{equation} \end{cor} Note that by Remark \ref{rmk:WF diff thm still true if only kappa non-neg}, Corollary \ref{cor:representation for right e-fn} is still true if we only have $\kappa\in C^{\infty}(\bar D;{\mathbb R}_{\geq 0})$. \begin{rmk}[Optimality of $1\ll t\ll N$]\label{rmk:optimality approx method for phi} It is clear from Theorem \ref{theo:Convergence to FV diffusion} that $N\gg t$ is optimal. In particular, if $\frac{t}{N}$ converges to a positive constant as $N\rightarrow\infty$ then Theorem \ref{theo:Convergence to FV diffusion} implies that $\chi^N_t$ will converge to a random measure. \end{rmk} The simplest example is given by defining the particles $\eta^{N,i}_0=X^{N,i}_0$ to be independently and uniformly distributed at time $0$, so that \[ \chi^N_t\approx \text{Leb}^{\phi}\quad\text{for}\quad 1\ll t\ll N, \] providing a particle representation for the probability measure whose Radon-Nikodym derivative with respect to Lebesgue measure is the right eigenfunction $\phi$ (multiplied by some normalising constant). The fact that we have convergence in the weak atomic metric gives us a particle representation for the value of the principal right eigenfunction $\phi$ of $L$ at a given set of points. For instance, if we are only interested in the value of $\phi$ at a given point $x\in \bar D$ (with $\phi$ normalised so that $\langle \nu,\phi\rangle =1$, where $\nu(\{x\})=0$), then we may replace $\nu$ with $\frac{\nu+\delta_x}{2}$.
Corollary \ref{cor:representation for right e-fn} and Lemma \ref{lem:Weak atomic metric = conv of atoms and weak} guarantee that for $1\ll t\ll N$, $\chi^N_t$ has an atom $\alpha^N\delta_{x^N}$ such that $(\alpha^N,x^N)\approx (\frac{\phi(x)}{\phi(x)+\langle \nu,\phi\rangle},x)$, so that \[ \frac{1}{\frac{1}{\alpha^N}-1}\approx \frac{\phi(x)}{\langle \nu,\phi\rangle}. \] The same strategy may be employed to obtain $\phi$ (for a given normalisation) at a given finite set of points. We now consider a toy example. \begin{example} We consider the differential operator \[ L=\frac{1}{2}\Delta-\Big(2-\frac{\pi^2\cos(\pi x)}{4+2\cos(\pi x)}\Big) \] corresponding to reflected Brownian motion in $[0,1]$ with soft killing rate \[ \kappa(x)=2-\frac{\pi^2\cos(\pi x)}{4+2\cos(\pi x)}. \] The Neumann eigenproblem \[ L\phi=-\lambda \phi,\quad \partial_x\phi\rvert_{\{0,1\}}\equiv 0 \] has a unique (up to rescaling) positive right eigenfunction \[ \phi=2+\cos(\pi x), \] with corresponding eigenvalue $-\lambda =-2$. We used Python to simulate the particle system given in Corollary \ref{cor:representation for right e-fn} with $N=10^6$, $t=5$ and initial condition $\mu=\frac{1}{21}\sum_{k=0}^{20}\delta_{\frac{k}{20}}$. We note that here $e^{2\lambda t}\approx 5\times 10^{8}\gg N$, so \eqref{eq:estimate for right efn using Villemonais} does not apply. Corollary \ref{cor:representation for right e-fn} and Lemma \ref{lem:Weak atomic metric = conv of atoms and weak} give that for $1\ll t\ll N$, $\chi^N_t$ has an atom of mass approximately equal to $\frac{\phi(\frac{i}{20})}{\sum_{k=0}^{20}\phi(\frac{k}{20})}$ at each point $\frac{i}{20}$ ($0\leq i\leq 20$). The result is given in Figure \ref{fig:partrepnforPrincipalRightNeumEfn}.
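For completeness, one can verify the eigenpair directly. Since $\phi''(x)=-\pi^2\cos(\pi x)$ and $\frac{\pi^2\cos(\pi x)}{4+2\cos(\pi x)}(2+\cos(\pi x))=\frac{\pi^2\cos(\pi x)}{2}$, we have \[ L\phi=\frac{1}{2}\phi''-\kappa\phi=-\frac{\pi^2\cos(\pi x)}{2}-2(2+\cos(\pi x))+\frac{\pi^2\cos(\pi x)}{2}=-2\phi, \] while $\phi'(x)=-\pi\sin(\pi x)$ vanishes at $x=0$ and $x=1$, so the Neumann boundary condition is satisfied.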
\end{example} \begin{figure}[h] \includegraphics[scale=0.7]{PartRepnforPrincipalRightNeumEfn} \caption{The simulation value at each point $\frac{i}{20}$ ($0\leq i\leq 20$), linearly interpolated between adjacent points, is given by the blue path. The orange path shows the right eigenfunction $\phi$, normalised so that $\sum_{k=0}^{20}\phi(\frac{k}{20})=1$.} \label{fig:partrepnforPrincipalRightNeumEfn} \end{figure} \section{Representation for the Q-process and Asymptotic Distribution of the Spine of the Fleming-Viot Process}\label{section:Q-process approximation} \subsection{Background}\label{subsection:Background for Asympt dist of Q-process} The dynamical historical process (DHP) $(\mathcal{H}^{N,i,t}_s)_{0\leq s\leq t}\in C([0,t];\bar D)$ is the path we obtain by tracing the ancestral path of particle $i$ back from time $t$. We define the DHPs for $1\leq i\leq N$ and $0\leq t<\infty$ using the following family of random functions $\varrho^N_{t_0,t_1}:\{1,\ldots,N\}\rightarrow \{1,\ldots,N\}$ for $0\leq t_0\leq t_1<\infty$, \begin{equation}\label{eq:functions PhiN} \begin{split} \varrho^N_{t,t}\equiv \text{Id},\quad \varrho^N_{t,t_1}\text{ is piecewise constant in $t$ on }[\tau_k,\tau_{k+1})\cap [0,t_1],\\ \text{if $X^i$ jumps onto $X^j$ at time $\tau_k$ then }\varrho^N_{\tau_k-,t_1}=\begin{cases} i\mapsto j\\ \ell\mapsto \ell\quad\text{for}\quad \ell\neq i \end{cases}\circ \varrho^N_{\tau_k,t_1}\quad\text{for } \tau_k\leq t_1. \end{split} \end{equation} The random function $\varrho^N_{s,t}(i)$ corresponds to tracing the continuous ancestral path backwards from the particle with index $i$ at time $t$ to its ancestor at time $s$.
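The maps $\varrho^N_{t_0,t_1}$ admit a simple algorithmic description: record each jump as a triple (jump time, index of the dying particle, index of the particle it jumps onto) and scan these events backwards in time. The following minimal Python sketch implements this description; the function name and event-log format are our own illustrative choices, not notation from the construction above.

```python
def trace_back(jumps, s, t, i):
    """Compute rho^N_{s,t}(i): the index at time s of the ancestor of the
    particle carrying index i at time t.

    jumps: iterable of (tau, dying, target) triples, meaning the particle
    with index `dying` dies at time tau and jumps onto the particle with
    index `target`.
    """
    # Scan the jump times in (s, t] from latest to earliest; when the path
    # we are following belongs to the dying particle, it was continuing
    # the target particle's path before the jump.
    for tau, dying, target in sorted(jumps, reverse=True):
        if s < tau <= t and i == dying:
            i = target
    return i

# Illustrative event log: particle 2 jumps onto 1 at time 0.5,
# then particle 3 jumps onto 2 at time 1.5.
jumps = [(0.5, 2, 1), (1.5, 3, 2)]
```

The semigroup property $\varrho^N_{t_0,t_2}=\varrho^N_{t_0,t_1}\circ\varrho^N_{t_1,t_2}$ noted below is immediate from this description, since the backward scan over $(t_0,t_2]$ splits into a scan over $(t_1,t_2]$ followed by one over $(t_0,t_1]$.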
The DHPs $(\mathcal{H}^{N,i,t}_s)_{0\leq s\leq t}\in C([0,t];\bar D)$ are then defined for $1\leq i\leq N$ and $0\leq t<\infty$ as \begin{equation}\label{eq:DHP multi-colour chapter} \mathcal{H}^{N,i,t}_s=X^{N,\varrho^N_{s,t}(i)}_s,\quad 0\leq s\leq t. \end{equation} Grigorescu and Kang introduced the Fleming-Viot multi-colour process in \cite[Section 5.1]{Grigorescu2012} to construct the spine of the Fleming-Viot particle system. Heuristically speaking, the spine is the unique DHP of infinite length. It is the unique path from time $0$ to time $\infty$ obtained by following the paths of the particles $X^1,\ldots,X^N$. We reformulate Grigorescu and Kang's construction as follows. We firstly observe that for $t_0\leq t_1\leq t_2$ we have $\varrho^N_{t_0,t_2}=\varrho^N_{t_0,t_1}\circ\varrho^N_{t_1,t_2}$. This motivates the definition of the fixation time \begin{equation} \mathcal{T}_t^N=\inf\{t'\geq t:\varrho^N_{t,t'}\text{ is constant}\}, \label{eq:defin of tauTN} \end{equation} so that $\varrho_{t,t'}^N$ is constant for all $t'\geq {\mathcal{T}}^N_t$. We therefore define the fixed index at time $t$, \begin{equation}\label{eq:fixed index} \zeta_t^N=\varrho_{t,\mathcal{T}_t^N}^N(1)=\ldots=\varrho_{t,\mathcal{T}_t^N}^N(N). \end{equation} We use the fixed index to define the spine, $(\mathcal{H}^{N,\infty}_t)_{0\leq t<\infty}\in C([0,\infty);\bar D)$, of the Fleming-Viot $N$-particle system, \begin{equation}\label{eq:spine} \mathcal{H}^{N,\infty}_t:=X^{N,\zeta^N_t}_t,\quad 0\leq t<\infty. \end{equation} The fixed index $\zeta^N_t$ is therefore the index of the particle the spine is following at time $t$, and is time-dependent.
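Continuing in this vein, given a finite log of jump events one can compute the fixation time ${\mathcal{T}}^N_t$ and the fixed index $\zeta^N_t$ by tracing every index back to time $t$ after each successive jump and stopping once all traced indices agree. The self-contained Python sketch below uses a hypothetical event-log format (triples of jump time, dying index, target index); it illustrates the definitions and is not part of the construction.

```python
def ancestor(jumps, s, t, i):
    # rho^N_{s,t}(i): trace index i at time t back to its ancestor at time s.
    for tau, dying, target in sorted(jumps, reverse=True):
        if s < tau <= t and i == dying:
            i = target
    return i

def fixation(jumps, n, t):
    """Return (T, zeta): the first jump time T >= t at which rho^N_{t,T}
    is constant, together with the common value zeta (the fixed index).
    Returns (None, None) if fixation does not occur within the log."""
    # Constancy of rho^N_{t,.} can only change at jump times, so it
    # suffices to check after each jump at or after time t.
    for tau, _, _ in sorted(jumps):
        if tau < t:
            continue
        ancestors = {ancestor(jumps, t, tau, i) for i in range(1, n + 1)}
        if len(ancestors) == 1:
            return tau, ancestors.pop()
    return None, None

# Three particles: 2 jumps onto 1 at time 1.0, then 3 jumps onto 1 at 2.0,
# after which every index traces back to 1.
jumps = [(1.0, 2, 1), (2.0, 3, 1)]
```

With this log, `fixation(jumps, 3, 0.0)` returns `(2.0, 1)`: the fixation time of time $0$ is the second jump time and the fixed index is $1$.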
We note that in order to determine the spine over the time horizon $T<\infty$, it is sufficient to determine the fixed index $\zeta^N_T$ at time $T$ since \[ (\mathcal{H}^{N,\infty}_t)_{0\leq t\leq T}=(\mathcal{H}^{N,\zeta^N_T,T}_t)_{0\leq t\leq T}=(\mathcal{H}^{N,i,t'}_t)_{0\leq t\leq T}\quad \text{for all}\quad 1\leq i\leq N\quad\text{and}\quad t'\geq \mathcal{T}_T^N. \] Therefore the DHPs obtained by tracing backwards from any time after time $\mathcal{T}_T^N$ share the same path over the time horizon $[0,T]$, which is the path of the spine up to time $T$, as in Figure \ref{fig:spine from random fns example}. This spine is then the unique DHP up to time $\infty$. \begin{figure}[h]\label{fig:spine from random fns example} \begin{center} \begin{tikzpicture}[scale=0.7] \draw[->,snake=snake,ultra thick,red](-2,0) -- ++(0,2); \draw[-,snake=snake,ultra thick,blue](-2,2) -- ++(-1.25,2.5); \node[left] at (-2.625,3.25) {$1$}; \draw[-,snake=snake,thick,blue](-3.25,4.5) -- ++(-0.25,0.5); \draw[->,snake=snake,blue](-3.5,5) -- ++(-1.25,2.5); \node[left] at (-4.5,7) {$1$}; \draw[->,snake=snake,red](-2,2) -- ++(0,1.5); \node[right] at (-2,2.75) {$2$}; \draw[->,snake=snake,blue](-3,0) -- ++(-1,2); \draw[-,dashed,blue](-4,2) -- ++(2,0); \draw[->,snake=snake,black](2,0) -- ++(0,5); \node[left] at (2,4.5) {$4$}; \draw[->,snake=snake,green](1,0) -- ++(-2.25,4.5); \node[right] at (-0.5,3) {$3$}; \draw[-,dashed,green](-1.25,4.5) -- ++(-2,0); \draw[->,snake=snake,thick,green](-3.25,4.5) -- ++(5.25,3.5); \node[above] at (2,8) {$3$}; \node[right] at (-1.75,5.5) {$3$}; \draw[-,dashed,blue](-4.75,7.5) -- ++(6,0); \draw[->,snake=snake,thick,blue](1.25,7.5) -- ++(-0.5,0.5); \node[above] at (0.75,8) {$1$}; \draw[-,dashed,red](-2,3.5) -- ++(4,0); \draw[->,snake=snake,red](2,3.5) -- ++(2,3.5); \node[right] at (3,5.25) {$2$};
\draw[-,dashed,red](4,7) -- ++(-3.5,0); \draw[->,snake=snake,thick,red](0.5,7) -- ++(-0.5,1); \node[above] at (0,8) {$2$}; \draw[-,dashed,black](2,5) -- ++(-5.5,0); \draw[->,snake=snake,thick,black](-3.5,5) -- ++(1,3); \node[above] at (-2.5,8) {$4$}; \node[right] at (6,2) {$t=1$}; \node[right] at (6,4) {$t=2$}; \node[right] at (6,6) {$t=3$}; \node[right] at (6,8) {$t=4$}; \node[below] at (-3,0) {$1$}; \node[below] at (-2,0) {$2$}; \node[below] at (1,0) {$3$}; \node[below] at (2,0) {$4$}; \draw[-,thick,black](6,0) -- ++(0,8); \draw[-,thick,black](-6,0) -- ++(0,8); \draw[-,dashed,black](-6,2) -- ++(12,0); \draw[-,dashed,black](-6,4) -- ++(12,0); \draw[-,dashed,black](-6,6) -- ++(12,0); \end{tikzpicture} \caption[Construction of the spine using random functions]{Tracing backwards from time $t=3.5$, all of the paths converge to the particle with index $1$ at time $t=2.25$, so that $\varrho^4_{2.24,3.5}(1)=\varrho^4_{2.24,3.5}(2)=\varrho^4_{2.24,3.5}(3)=\varrho^4_{2.24,3.5}(4)=1$. However $\varrho^4_{2.24,3.49}(2)=2$ while $\varrho^4_{2.24,3.49}(3)=1$, so that ${\mathcal{T}}_{2.24}=3.5$ and $\zeta_{2.24}=1$. By visual inspection, we can see that ${\mathcal{T}}_t=3.5$ for all $0\leq t<2.25$, but $\zeta_t$ is not constant over this time interval: $\zeta_t=2$ for $0\leq t<1$ and $\zeta_t=1$ for $1\leq t<2.25$. Thus the path of the spine ${\mathcal{H}}^{\infty}_t$ up to time $t=2.25$ - denoted by the thick path - is given by $\mathcal{H}^{\infty}_t=X^2_t$ for $0\leq t<1$ and ${\mathcal{H}}^{\infty}_t=X^1_t$ for $1\leq t<2.25$.
However, for $t=2.25$, $\varrho^4_{2.25,4}(4)=1$ while $\varrho^4_{2.25,4}(3)=3$ so that ${\mathcal{T}}_{2.25}>4$ - thus information about the paths after time $4$ is needed to determine the spine after time $2.25$.} \end{center} \end{figure} This construction relies on being able to show that for all $N\in\mathbb{N}$, \begin{equation}\label{eq:finiteness of fixation times for all times} {\mathcal{T}}^N_t<\infty\quad\text{for all}\quad t<\infty\quad\text{almost surely.} \end{equation} \begin{proof}[Proof of \eqref{eq:finiteness of fixation times for all times}] We firstly fix $T\in \mathbb{N}$, and observe that $\{(\tilde{X}^{N,i}_t,\eta^{N,i}_t)_{0\leq t\leq T}:1\leq i\leq N\}$ defined by \begin{equation}\label{eq:multi-colour process for well-posedness of the spine} \tilde{X}^{N,i}_t=X^{N,i}_{T+t},\quad \eta^{N,i}_t=\varrho^N_{T,T+t}(i), \end{equation} has the dynamics of the Fleming-Viot multi-colour process, where the space of colours is given by ${\mathbb{K}}=\{1,\ldots,N\}$. In particular, if $\tilde{X}^{N,i}_t$ dies at time $t$, or equivalently if $X^{N,i}$ dies at time $T+t$, and jumps onto the particle with index $j$, then $\eta^{N,i}_t=\varrho^N_{T,T+t}(i)$ becomes \[ \varrho^N_{T,T+t-}\circ \varrho^N_{T+t-,T+t}(i)=\varrho^N_{T,T+t-}(j)=\eta^{N,j}_{t-}. \] At the same time, $\eta^{N,\ell}_t=\varrho^N_{T,T+t}(\ell)$ for $\ell\neq i$ becomes \[ \varrho^N_{T,T+t-}\circ \varrho^N_{T+t-,T+t}(\ell)=\varrho^N_{T,T+t-}(\ell)=\eta^{N,\ell}_{t-}. \] Remark \ref{rmk:fixation time finite almost surely} then implies that ${\mathcal{T}}^N_T<\infty$ almost surely. Since $T\in \mathbb{N}$ was arbitrary, we have that \[ {\mathbb P}({\mathcal{T}}^N_T<\infty\quad \text{for all}\quad T\in\mathbb{N})=1.
\] Since ${\mathcal{T}}^N_{T}<\infty$ implies that ${\mathcal{T}}^N_{t}<\infty$ for all $t\leq T$ (the composition of any function with a constant function is constant), we have \eqref{eq:finiteness of fixation times for all times}. \end{proof} Thus we have established the existence of the spine; the uniqueness is immediate. The spine was first constructed by Grigorescu and Kang in \cite{Grigorescu2012}, and then under very general conditions by Bieniek and Burdzy in \cite[Theorem 3.1]{Bieniek2018}. Bieniek and Burdzy \cite[Theorem 5.3]{Bieniek2018} established that it converges in distribution to the $Q$-process as $N\rightarrow\infty$ when the state space is finite, and conjectured this is also true for general state spaces \cite[p.3752]{Bieniek2018}. We will use Theorem \ref{theo:convergence of the fixed colour} to prove this conjecture in Corollary \ref{cor:convergence of the spine}, when the fixed killed Markov process is a normally reflected diffusion with soft killing. In fact, they also defined side branches of the spine, conjecturing an asymptotic description of both the spine and its side branches. We will prove this extension in the following section. We may observe that $\{(\tilde{X}^{N,i}_t,\eta^{N,i}_t)_{0\leq t\leq T}:1\leq i\leq N\}$ defined as follows, \begin{equation}\label{eq:multi-colour process for spine} \tilde{X}^{N,i}_t=X^{N,i}_{T+t},\quad \eta^{N,i}_t=(\mathcal{H}^{N,\varrho^N_{T,T+t}(i),T}_s)_{0\leq s\leq T}, \end{equation} has the dynamics of the Fleming-Viot multi-colour process where the space of colours is given by ${\mathbb{K}}=C([0,T];\bar D)$. This differs from \eqref{eq:multi-colour process for well-posedness of the spine} in that we have made here a different choice for the colour space ${\mathbb{K}}$.
Whereas the fixed colour of the multi-colour process we considered in \eqref{eq:multi-colour process for well-posedness of the spine} corresponds to the fixed index of the spine at time $T$, $\zeta^N_T$, with this choice of colour space the fixed colour corresponds to the path of the spine up to time $T$, \begin{equation}\label{eq:Spine = fixed colour} \iota^N=({\mathcal{H}}^{N,\infty}_t)_{0\leq t\leq T}. \end{equation} This will allow us in Corollary \ref{cor:convergence of the spine} to obtain an asymptotic description of the spine. \subsection{Statement of Results} We fix a finite time horizon $T<\infty$. In the following, we write $\text{W}_1h$ and $\text{W}_1t$ for the Wasserstein and weak atomic metrics on ${\mathcal{P}}(C([0,T];\bar D))$ respectively. We fix $\mu\in{\mathcal{P}}(\bar D)$ and consider a sequence of Fleming-Viot processes $(\vec{X}^N_t)_{0\leq t<\infty}$ such that \[ \frac{1}{N}\sum_{i=1}^N\delta_{X^{N,i}_0}\rightarrow \mu\quad\text{weakly in probability as}\quad N\rightarrow\infty. \] We recall that the $Q$-process corresponds to the killed Markov process $(X_s)_{0\leq s<\infty}$ conditioned never to be killed. Its rigorous definition and existence are provided in Lemma \ref{lem:law of Q-process given by tilting}, uniqueness in law being immediate from the definition. We let $(X^{\infty}_t)_{0\leq t<\infty}$ be the $Q$-process with initial condition $\mu^{\phi}$, take the DHPs $({\mathcal{H}}^{N,i,t}_s)_{0\leq s\leq t}\in C([0,t];\bar D)$ defined in \eqref{eq:DHP multi-colour chapter}, and recall that $({\mathcal{H}}^{N,\infty}_t)_{0\leq t<\infty}$ is the spine of the $N$-particle system defined in \eqref{eq:spine}. We obtain the following representation for the $Q$-process. \begin{cor} We fix $T<\infty$.
Then we have \begin{equation} \begin{split} \frac{1}{N}\sum_{i=1}^N\delta_{({\mathcal{H}}^{N,i,t}_s)_{0\leq s\leq T}}\rightarrow {\mathcal{L}}_{\mu^{\phi}}((X^{\infty}_s)_{0\leq s\leq T})\quad \text{in}\quad \text{W}_1h\quad \text{in probability as}\quad N\wedge t\rightarrow \infty\quad\text{with}\quad \frac{t}{N}\rightarrow 0. \end{split} \end{equation} \label{cor:representation for Q-process} \end{cor} Note that as with Corollary \ref{cor:representation for right e-fn}, Corollary \ref{cor:representation for Q-process} is still true if we only have $\kappa\in C^{\infty}(\bar D;{\mathbb R}_{\geq 0})$ by Remark \ref{rmk:WF diff thm still true if only kappa non-neg}. \begin{rmk}[Optimality of $N\gg t$]\label{rmk:optimality approx method for Q-process} As in Remark \ref{rmk:optimality approx method for phi}, it is clear from Theorem \ref{theo:Convergence to FV diffusion} that $N\gg t$ is optimal. In particular, if $\frac{t}{N}$ converges to a positive constant as $N\rightarrow\infty$ then Theorem \ref{theo:Convergence to FV diffusion} implies that $\chi^N_t$ will converge to a random measure. \end{rmk} We may use Theorem \ref{theo:convergence of the fixed colour} to prove the following corollary. \begin{cor}\label{cor:convergence of the spine} We have convergence in distribution of the spine to the $Q$-process over any fixed time horizon $T<\infty$: \begin{equation} ({\mathcal{H}}^{N,\infty}_t)_{0\leq t\leq T}\rightarrow (X^{\infty}_t)_{0\leq t\leq T}\quad\text{in}\quad C([0,T];\bar D)\quad \text{in distribution as}\quad N\rightarrow\infty. \end{equation} \end{cor} \begin{proof}[Proof of Corollaries \ref{cor:representation for Q-process} and \ref{cor:convergence of the spine}] We assume without loss of generality that $T>0$, the $T=0$ case being an immediate consequence of the $T>0$ case.
We take the Fleming-Viot multi-colour process given by \eqref{eq:multi-colour process for spine}, which we recall is obtained by taking the space of colours to be ${\mathbb{K}}=C([0,T];\bar D)$ and setting \[ \tilde{X}^{N,i}_t:=X^{N,i}_{T+t},\quad \eta^{N,i}_t:=({\mathcal{H}}^{N,i,T+t}_s)_{0\leq s\leq T}. \] We recall that the fixed colour $\iota^N$ is precisely the path of the spine of $(\vec{X}^N_t)_{t\geq 0}$ up to time $T$, \begin{equation}\tag{\ref{eq:Spine = fixed colour}} ({\mathcal{H}}^{N,\infty}_t)_{0\leq t\leq T}=\iota^N. \end{equation} We have by \cite[Theorem 4.2]{Bieniek2018} and Proposition \ref{prop:convergence in W in prob equivalent to weakly prob} that \[ \text{W}_1h\Big(\frac{1}{N}\sum_{i=1}^N\delta_{\eta^{N,i}_0}, {\mathcal{L}}_{\mu}((X_t)_{0\leq t\leq T}\lvert \tau_{\partial}>T)\Big)\rightarrow 0\quad\text{in probability as}\quad N\rightarrow\infty. \] Since ${\mathcal{L}}_{\mu}((X_t)_{0\leq t\leq \tau\wedge T})$ is non-atomic for $T>0$, so too is ${\mathcal{L}}_{\mu}((X_t)_{0\leq t\leq T}\lvert \tau_{\partial}>T)$, hence we have \[ \text{W}_1t\Big(\frac{1}{N}\sum_{i=1}^N\delta_{\eta^{N,i}_0}, {\mathcal{L}}_{\mu}((X_t)_{0\leq t\leq T}\lvert \tau_{\partial}>T)\Big)\rightarrow 0\quad\text{in probability as}\quad N\rightarrow\infty. \] Therefore the tilted empirical measures given by \eqref{eq:tilted empirical measure} satisfy \begin{equation}\label{eq:convergence of tilted empiricial measure at time 0 to tilted law of killed process conditioned not to be killed up to time T} \text{W}_1t\Big({\mathcal{Y}}^N_0, \big({\mathcal{L}}_{\mu}((X_t)_{0\leq t\leq T}\lvert \tau_{\partial}>T)\big)^{\phi^T}\Big)\rightarrow 0\quad\text{in probability as}\quad N\rightarrow\infty, \end{equation} whereby we recall $\phi^T$ is defined in \eqref{eq:phiT defn} to be \[ \phi^T:C([0,T];\bar D)\ni y\mapsto \phi(y_T)\in {\mathbb R}.
\] We recall that Lemma \ref{lem:law of Q-process given by tilting} gives that $\big({\mathcal{L}}_{\mu}((X_t)_{0\leq t\leq T}\lvert \tau_{\partial}>T)\big)^{\phi^T}$ is precisely the law of the $Q$-process $X^{\infty}$ with initial condition $X^{\infty}_0\sim \mu^{\phi}$. We take $(\nu_t)_{0\leq t<\infty}$ to be the corresponding Wright-Fisher superprocess on ${\mathbb{K}}=C([0,T];\bar D)$ of rate $\Theta$ (the constant defined in \eqref{eq:const theta}) and initial condition $\nu_0=\nu^0={\mathcal{L}}_{\mu^{\phi}}((X^{\infty}_t)_{0\leq t\leq T})\in {\mathcal{P}}(C([0,T];\bar D))={\mathcal{P}}({\mathbb{K}})$. Theorem \ref{theo:Convergence to FV diffusion} implies that \[ \begin{split} \frac{1}{N}\sum_{i=1}^N\delta_{\eta^{N,i}_t}\rightarrow \nu_0={\mathcal{L}}_{\mu^{\phi}}((X^{\infty}_s)_{0\leq s\leq T})\quad \text{in}\quad \text{W}_1t\quad \text{in probability as}\quad N\wedge t\rightarrow \infty\quad\text{with}\quad \frac{t}{N}\rightarrow 0, \end{split} \] so that we have Corollary \ref{cor:representation for Q-process}. Moreover by Proposition \ref{prop:basic facts WF superprocess}, Property \ref{enum:fixed colour dist like ic}, the fixed colour $\iota$ of $(\nu_t)_{0\leq t<\infty}$ has distribution ${\mathcal{L}}(\iota)=\nu^0$ so that \begin{equation}\label{eq:fixed FV colour = Q-process} \iota\sim{\mathcal{L}}_{\mu^{\phi}}((X^{\infty}_s)_{0\leq s\leq T}). \end{equation} Combining \eqref{eq:Spine = fixed colour} and \eqref{eq:fixed FV colour = Q-process}, Theorem \ref{theo:convergence of the fixed colour} implies Corollary \ref{cor:convergence of the spine}.
\end{proof} \section{Asymptotic Distribution of the Skeleton of the Fleming-Viot Process} \label{section:Skeleton of the FV process} We proved in Corollary \ref{cor:convergence of the spine} that the spine of the Fleming-Viot process converges in distribution to the $Q$-process as $N\rightarrow\infty$. In fact, Bieniek and Burdzy conjectured more than this in \cite[p.3752]{Bieniek2018}. As we shall explain, one can also define the spine of the Fleming-Viot process as having side branches; we will refer to the tree obtained by attaching the side branches to the spine of the Fleming-Viot process as the skeleton of the Fleming-Viot process. Bieniek and Burdzy conjectured \cite[p.3752]{Bieniek2018} that the skeleton of the Fleming-Viot process has the following asymptotic description as $N\rightarrow\infty$, and proved their conjecture \cite[Section 5]{Bieniek2018} when the state space is finite: the spine is distributed like the $Q$-process, along which branch events occur at twice the rate of the corresponding critical binary branching process, and at each of these branch events the side branch is distributed like an independent copy of the corresponding critical binary branching process. This asymptotic description corresponds to the well-known spine decomposition of the critical binary branching process. We will prove this conjecture in Corollary \ref{cor:asymptotic distribution of the spine decomposition}. Before stating our results, we must firstly define these branching processes.
\subsection{Branching Processes}\label{subsection: Branching processes intro} We fix $T<\infty$ and define $\mathscr{C}([0,T];\bar D)$ to be the space of partial functions \[ \begin{split} \mathscr{C}([0,T];\bar D)=\{(t_b,f,t_d)\in [0,T]\times \big(\cup_{0\leq t_0\leq t_1\leq T}C([t_0,t_1];\bar D)\big)\times \big([0,T]\cup \{\ast\}\big):\\ \text{Dom}(f)=[t_b,t_d]\text{ if } 0\leq t_d\leq T\quad \text{and}\quad\text{Dom}(f)=[t_b,T]\text{ if } t_d=\ast\} \end{split} \] where $\ast$ is an isolated point, so that $\lvert t-\ast\rvert:=1$ for all $t\in [0,T]$. We equip $\mathscr{C}([0,T];\bar D)$ with the metric \[ d_{\mathscr{C}}((t^1_b,f^1,t^1_d),(t^2_b,f^2,t^2_d)):=\lvert t^1_b-t^2_b\rvert+\sup_{0\leq t\leq T}\lvert f^1(t^1_b\vee t\wedge t^1_d)- f^2(t^2_b\vee t\wedge t^2_d)\rvert+\lvert t^1_d-t^2_d\rvert. \] Then $(\mathscr{C}([0,T];\bar D),d_{\mathscr{C}})$ is a Polish space, corresponding to the space of possible paths a particle may take between the time of its birth, $t_b$, and the time of its death, $t_d$ ($t_d=\ast$ corresponding to not dying prior to or at time $T$). We define the closed subset of $\mathscr{C}([0,T];\bar D)$ corresponding to paths which survive at time $T$, \[ \mathscr{C}^{\ast}([0,T];\bar D):=\{(t_b,f,\ast)\in \mathscr{C}([0,T];\bar D)\}. \] We define $\Xi$ to be the set of finite, rooted, binary ordered trees, which we label with standard Ulam-Harris notation and define as follows. We firstly define \[ \Gamma=\cup_{n=0}^{\infty}\{1,2\}^n\quad (\{1,2\}^0:=\{\emptyset\}) \] and write $\emptyset$ for the root. For $u=(u_1,\ldots,u_n)$, $v=(v_1,\ldots,v_m)\in \Gamma$ we write $uv$ for the concatenation $uv=(u_1,\ldots,u_n,v_1,\ldots,v_m)$. We similarly define $uk=(u_1,\ldots,u_n,k)$ and $ku=(k,u_1,\ldots,u_n)$ for $k\in \{1,2\}$ and $u\in \Gamma$.
We say that $uk$ is a child of $u$ and $u$ is the parent of $uk$. For $u=(u_1,\ldots,u_n)\neq \emptyset$ we write \begin{equation}\label{eq:parent and sibling of u} p(u)=(u_1,\ldots,u_{n-1})\quad \text{and}\quad s(u)=\begin{cases} p(u)1,\quad u=p(u)2\\ p(u)2,\quad u=p(u)1 \end{cases} \end{equation} for the parent and sibling of $u$ respectively. More generally, if $v=uw$ for some $w\in \Gamma\setminus\{\emptyset\}$ then we say $u$ is an ancestor of $v$, and write $u<v$. We define a finite, rooted, binary ordered tree $\Pi\in \Xi$ to be a finite subset of $\Gamma$ such that \[ \begin{split} \emptyset\in \Pi,\quad u\in \Pi\setminus\{\emptyset\}\Rightarrow p(u), s(u)\in \Pi. \end{split} \] We define $\Lambda=\Lambda(\Pi)$ to be the set of leaves of $\Pi$, which are those branches of $\Pi$ without any children. We then fix $T<\infty$ and define the set of time $T$ marked trees to be \[ \begin{split} {\bf{C}}_T=\cup_{\Pi\in\Xi}\Big\{(t^v_b,f^v,t^v_d)_{v\in \Pi}\in \big(\mathscr{C}([0,T];\bar D)\big)^{\Pi}:\text{if $v\in \Pi$ and $v\neq \emptyset$ then }t_b^v=t_d^{p(v)}\text{ and}\\ f^{v}(t_b^{v})=f^{p(v)}(t_d^{p(v)})\Big\}. \end{split} \] One should interpret $t_b^{v}$ and $t_d^v$ to be the times of the birth and death events of particle $v$ (with $\ast$ the event of not dying), and $f^v$ to be the path particle $v$ takes during its lifetime $[t^v_b,t^v_d)$ (we define $f^v$ at time $t^v_d$ for convenience). Note that this differs from the usual definition of a marked tree in that we define here the birth and death times of each particle, rather than marking each vertex with the lifetime of the particle (from which the birth and death times may be calculated).
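The Ulam-Harris bookkeeping above is straightforward to mechanise: labels are tuples over $\{1,2\}$, and a finite, rooted, binary ordered tree is a finite set of labels containing the root and closed under taking parents and siblings. The following Python sketch (function names are our own, for illustration only) encodes $p(u)$, $s(u)$, the tree condition and the leaves $\Lambda(\Pi)$.

```python
def parent(u):
    # p(u) for u != (): drop the last coordinate.
    return u[:-1]

def sibling(u):
    # s(u): flip the last coordinate between 1 and 2.
    return u[:-1] + (3 - u[-1],)

def is_tree(pi):
    # A finite, rooted, binary ordered tree contains the root () and is
    # closed under taking parents and siblings.
    return () in pi and all(
        parent(u) in pi and sibling(u) in pi for u in pi if u != ())

def leaves(pi):
    # Lambda(Pi): the branches without any children.
    return {u for u in pi if u + (1,) not in pi and u + (2,) not in pi}

# The tree with root, children (1), (2), and grandchildren (1,1), (1,2).
pi = {(), (1,), (2,), (1, 1), (1, 2)}
```

Here `is_tree(pi)` holds and `leaves(pi)` is `{(2,), (1, 1), (1, 2)}`, while `{(), (1,)}` fails the tree condition because the sibling `(2,)` is missing.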
Moreover the spine of a branching process is commonly defined by extending the underlying probability space in order to distinguish a (random) ancestral path, whereas we shall distinguish the ancestral path obtained by inductively choosing the first child (what we call the primary path). These conventions shall be more useful for our purposes. For ${\mathcal{T}}\in {\bf{C}}_T$ we write $\Pi=\Pi({\mathcal{T}})=\text{Dom}({\mathcal{T}})$ for the unmarked tree on which ${\mathcal{T}}=(t^v_b,f^v,t^v_d)_{v\in \Pi}$ is defined and write $\Lambda({\mathcal{T}})=\Lambda(\Pi({\mathcal{T}}))$ for the leaves of this tree. We equip ${\bf{C}}_T$ with the metric \begin{equation}\label{eq:metric dCT} d_{{\bf{C}}_T}({\mathcal{T}}^1,{\mathcal{T}}^2)=\begin{cases} 1\wedge \sup_{v\in \Pi({\mathcal{T}}^1)}d_{\mathscr{C}}({\mathcal{T}}^1(v),{\mathcal{T}}^2(v)),\quad \Pi({\mathcal{T}}^1)=\Pi({\mathcal{T}}^2)\\ 1,\quad \Pi({\mathcal{T}}^1)\neq\Pi({\mathcal{T}}^2) \end{cases} \end{equation} which turns ${\bf{C}}_T$ into a Polish space. We inductively define $e_0=\emptyset$ and $e_{n+1}=e_n1$ for $n\geq 0$. We write \[ \ell({\mathcal{T}})=\sup\{n:e_{n}\in \Pi({\mathcal{T}})\} \] and refer to $e_0,e_1,\ldots,e_{\ell({\mathcal{T}})}$ as the primary path of the tree, with length $\ell({\mathcal{T}})$. We define $\lvert (v_1,\ldots,v_n)\rvert =n$ to be the depth of the branch $v=(v_1,\ldots,v_n)$ for $v\in \Pi({\mathcal{T}})$, and further define $\lvert \Pi\rvert$ to be the cardinality of the tree $\Pi$ (the number of branches it contains). We then define the set of marked trees whose primary path survives until time $T$, \[ {\bf{C}}^{\ast}_T=\{{\mathcal{T}}=(t^v_b,f^v,t^v_d)_{v\in \Pi}:t^{e_{\ell({\mathcal{T}})}}_d=\ast\}, \] which we equip with the metric $d_{{\bf{C}}_T}$, under which it is a closed subset of ${\bf{C}}_T$.
We further define the primary process \begin{equation}\label{eq:primary process} (t^{{\mathcal{T}}}_b,f^{{\mathcal{T}}},t^{{\mathcal{T}}}_d):=\Big(t^{\emptyset}_b,\;[t^{\emptyset}_b,t^{e_{\ell({\mathcal{T}})}}_d]\ni t\mapsto f^{e_i}(t)\quad\text{for}\quad t\in [t^{e_i}_b,t^{e_{i}}_d],\;t^{e_{\ell({\mathcal{T}})}}_d\Big)\in \mathscr{C}([0,T];\bar D). \end{equation} Given a marked tree $({\mathcal{T}},\Pi)$ with branch $v\in \Pi$ and $t_b^v\leq t< t_d^v$ (or $\leq T$ if $t_d^v=\ast$) we define the $v$-subtree by \begin{equation}\label{eq:v-subtree} \begin{split} \Pi_{(v)}=\{w:vw\in \Pi\},\quad {\mathcal{T}}_{(v)}(w)={\mathcal{T}}(vw),\quad w\in \Pi_{(v)}. \end{split} \end{equation} Given $(t_b,f,t_d)\in \mathscr{C}([0,T];\bar D)$ and trees ${\mathcal{T}}^k=(t^{k,v}_b,f^{k,v},t^{k,v}_d)_{v\in \Pi^k}$ such that $t_b\leq t^{1,\emptyset}_b<\ldots< t^{n,\emptyset}_b\leq t_d$ ($\leq T$ if $t_d=\ast$) and $f^{k,\emptyset}(t^{k,\emptyset}_b)=f(t^{k,\emptyset}_b)$ for $1\leq k\leq n$ we define the tree \begin{equation}\label{eq:trees glued to path} ({\mathcal{T}},\Pi)=\Psi((t_b,f,t_d),({\mathcal{T}}^k)_{1\leq k\leq n}) \end{equation} by gluing ${\mathcal{T}}^1,\ldots,{\mathcal{T}}^n$ onto $f$, \[ \begin{split} \Pi=\{e_k:0\leq k\leq n\}\cup \{e_{k-1}2v:1\leq k\leq n,\; v\in \Pi^k\}, \\ {\mathcal{T}}(e_k)=(t_k,f\rvert_{[t_{k},t_{k+1}]},t_{k+1}),\quad 0\leq k\leq n,\quad {\mathcal{T}}(e_{k-1}2v)={\mathcal{T}}^k(v),\quad v\in \Pi^k,\quad 1 \leq k\leq n, \end{split} \] whereby we write $t_0=t_b$, $t_k=t^{k,\emptyset}_b$ ($1\leq k\leq n$) and $t_{n+1}=t_d$. Note that trees $({\mathcal{T}}^k,\Pi^k)$ are always glued onto $(t_b,f,t_d)$ in the order of their birth times $t^{k,\emptyset}_b$ (an arbitrary order may be chosen if there are simultaneous birth times, but this will always be a null event).
This forms the tree with primary path $f$ and with sub-trees ${\mathcal{T}}^1,\ldots,{\mathcal{T}}^n$ along the primary path. Since in the following constructions the number of particles $N$ in the Fleming-Viot particle system is fixed, we suppress the $N$ superscript where this does not create confusion. Given a Fleming-Viot process $(\vec{X}^N_t)_{0\leq t\leq T}$, the marked tree ${\mathcal{U}}^{N,i,t}$ (for $1\leq i\leq N$ and $0\leq t\leq T$) consists of the particle $X^{i}$ at time $t$ and its descendants. It is defined inductively as follows. \begin{defin}\label{defin:branching tree particle i at time t} For $v\in \Pi({\mathcal{U}}^{N,i,t})$, $\varsigma(v)$ will represent the index of the particle to which the vertex $v$ corresponds, so that $f^v$ will represent the path of the particle $X^{\varsigma(v)}$ between times $t^v_b$ and $t^v_d$. In particular $t_b^{\emptyset}=t$ and $\varsigma(\emptyset)=i$. The time $t_d^{\emptyset}$ is the first time after time $t_b^{\emptyset}=t$ at which the particle $X^{N,\varsigma(\emptyset)}=X^i$ has another particle (say the particle $X^j$) jump onto it, dies, or reaches time $T$ without either of the first two possibilities happening (in which case we set $t^{\emptyset}_d=\ast$). We then set $f^{\emptyset}$ to be the path of $X^{\varsigma(\emptyset)}=X^{i}$ between times $t^{\emptyset}_b$ and $t^{\emptyset}_d$ (time $T$ if $t^{\emptyset}_d=\ast$), defining $f^{\emptyset}$ at time $t^{\emptyset}_d$ by left-continuity if this time corresponds to the particle being killed. This defines ${\mathcal{U}}^{N,i,t}(\emptyset)=(t^{\emptyset}_b,f^{\emptyset},t^{\emptyset}_d)$. If we have the first possibility - the time $t_d^{\emptyset}$ corresponds to the particle $X^j$ jumping onto the particle $X^i$ - then $\emptyset$ has two children $(1), (2)\in \Pi({\mathcal{U}}^{N,i,t})$.
In this case we set $\varsigma((1))=i$, $\varsigma((2))=j$, and $t^{(1)}_b=t^{(2)}_b=t^{\emptyset}_d$ (in general we adopt the convention that $\varsigma(v1)=\varsigma(v)$ for all $v,v1\in \Pi({\mathcal{U}}^{N,i,t})$). Otherwise $\emptyset$ is a leaf and no new branches are added. We repeat inductively as above on each child until no new branches are added. \end{defin} We note that this defines the index function $\varsigma$ such that \begin{equation}\label{eq:index function iota} f^v(t)=X^{N,\varsigma(v)}_t,\quad t^v_b\leq t< t^v_d. \end{equation} We now define for $h\geq 0$ the augmented DHPs ${\bf{H}}^{N,i,T+h}_T$ and the skeleton ${\bf{H}}^{N,\infty}_T$ up to time $T$. We recall that the random maps $\varrho^N_{t_0,t_1}$ defined in \eqref{eq:functions PhiN} trace the ancestry of the particles at time $t_1$ backwards to time $t_0$. We further recall that $U^j_m$ is the index of the particle onto which $X^j$ jumps at its $m^{\text{th}}$ death. We say that the DHP $({\mathcal{H}}^{N,i,T+h}_t)_{0\leq t\leq T}$ branches at time $t\leq T$ if some particle $X^j$ dies for the $m^{\text{th}}$ time at time $t=\tau^j_m$ and jumps onto $({\mathcal{H}}^{N,i,T+h}_t)_{0\leq t\leq T}$, so that $U^j_m=\varrho^N_{\tau^j_m-,T+h}(i)$. The DHP may continue following $X^{\varrho^N_{\tau^j_m-,T+h}(i)}$, so that $\varrho^N_{\tau^j_m,T+h}(i)=\varrho^N_{\tau^j_m-,T+h}(i)$, or may start following the particle $X^j$, so that $\varrho^N_{\tau^j_m,T+h}(i)=j$. We then write $0\leq t_1<\ldots<t_n\leq T$ for the branch events of the DHP up to time $T$, corresponding to particle $j_k$ dying at time $t_k$ and jumping onto $X^{\varrho^N_{t_k-,T}(i)}$.
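The backwards ancestry tracing performed by the maps $\varrho^N_{t_0,t_1}$ can be sketched as follows. The event-log representation (a list of triples: death time, index of the dying particle, index of the particle it jumps onto) is a hypothetical encoding for illustration, not the paper's construction.

```python
def ancestor(events, i, t0, t1):
    """Sketch of the ancestral map rho^N_{t0,t1}(i): trace the index of the
    particle whose path particle i was following, from time t1 back to t0.
    `events` is a list of (time, dying_index, target_index) triples."""
    idx = i
    for t, j, u in sorted(events, reverse=True):  # walk events backwards in time
        if t0 < t <= t1 and idx == j:
            idx = u  # before its death, particle j's path was particle u's
    return idx
```

Walking the log backwards, whenever the currently-followed particle is one that died and jumped at that event, its pre-jump path coincides with that of the particle it jumped onto.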
The $k^{\text{th}}$ side tree is the marked tree formed by the particle involved in the $k^{\text{th}}$ branch event which the DHP does not then proceed to follow, \[ ({\mathcal{U}}^k,\Pi^k):=\begin{cases} ({\mathcal{U}}^{N,j_k,t_k},\Pi^{j_k,t_k}),&\varrho^N_{t_k,T}(i)=\varrho^N_{t_k-,T}(i),\\ ({\mathcal{U}}^{N,\varrho^N_{t_k-,T}(i),t_k},\Pi^{\varrho^N_{t_k-,T}(i),t_k}),&\varrho^N_{t_k,T}(i)\neq\varrho^N_{t_k-,T}(i). \end{cases} \] We then define the augmented DHP by gluing these side trees onto the DHP, \begin{equation} {\bf{H}}^{N,i,T+h}_T=\Psi((0,({\mathcal{H}}^{N,i,T+h}_t)_{0\leq t \leq T},\ast),\{{\mathcal{U}}^k\}_{k=1}^n). \end{equation} We define the side trees of the spine and the skeleton ${\bf{H}}^{N,\infty}_T$ up to time $T$ in the same manner. We shall obtain convergence in distribution of the skeleton to a limit which we now describe. We fix $\mu\in {\mathcal{P}}(\bar D)$ in the following construction. We define the branching rate of the critical branching process, \begin{equation} \lambda_t(\mu):=\langle {\mathcal{L}}_{\mu}(X_t\lvert \tau_{\partial}>t),\kappa\rangle. \end{equation} For given $0\leq t\leq T$ and $x\in \bar D$ we inductively construct the critical branching process ${\mathcal{T}}^{t,x}=(t^v_b,f^v_{[t^v_b,t^v_d]},t^v_d)_{v\in \Pi^{t,x}}$ as follows. This is a normally reflected diffusion starting from position $x$ at time $t$, which branches at rate $\lambda_s(\mu)$ at time $s$ and is killed at the position-dependent rate $\kappa$. \begin{defin}\label{defin:critical branching tree from time t pos x} We firstly set $t^{\emptyset}_b=t$. Corresponding to branching, we run a Poisson clock of rate $\lambda_s(\mu)$ for $s\geq t^{\emptyset}_b$, writing $t^{\emptyset}_{\text{branch}}$ for the ringing time.
The path $f^{\emptyset}(s)$ then evolves independently for $s\geq t$ like the killed normally reflected diffusion process $X_s$ (i.e. as a solution of \eqref{eq:killed process SDE}) with initial condition $f^{\emptyset}(t)=x$, until either it is killed, the Poisson clock rings, or it reaches time $T$. We then define $t^{\emptyset}_d$ to be the first of these times ($t^{\emptyset}_d=\ast$ if it reaches time $T$), so that the path $f^{\emptyset}$ is defined on the time interval $[t,t^{\emptyset}_d]$ ($[t,T]$ if it survives until time $T$, in which case $t^{\emptyset}_d=\ast$). In the event of it being killed at time $t^{\emptyset}_d$, we define $f^{\emptyset}(t^{\emptyset}_d)$ via the left limit. This defines ${\mathcal{T}}^{t,x}(\emptyset)=(t_b^{\emptyset},f^{\emptyset},t_d^{\emptyset})$. In the case of the Poisson clock having rung first, $\emptyset$ branches to produce two children, so we add $(1)$ and $(2)$ to $\Pi^{t,x}$. Otherwise $\emptyset$ is a leaf. In the event of there being children we repeat the above process for each of these children starting from position $f^{\emptyset}(t^{\emptyset}_d)$ at time $t^{\emptyset}_d$, and so on inductively until no new branches are added. Since the Fleming-Viot $N$-particle system almost surely has finitely many killing events up to time $T$, the inductive process terminates almost surely. \end{defin} This defines a family of probability measures ${\mathcal{T}}^{t,x}\sim {\mathbb P}^{t,x}\in {\mathcal{P}}({\bf{C}}_T)$ corresponding to the Markov kernel \begin{equation}\label{eq:Markov kernel for marked tree T} K^{{\bf{C}}_T}(t,x;\cdot):={\mathbb P}^{t,x}(\cdot). \end{equation} We prove the following lemma in the appendix. \begin{lem}\label{lem:cty of kernel} The map $[0,T]\times \bar D\ni (t,x)\mapsto K^{{\bf{C}}_T}(t,x;\cdot)\in {\mathcal{P}}({\bf{C}}_T)$ is continuous.
\end{lem} We then take the $Q$-process $X^{\infty}_t$ with initial condition $X^{\infty}_0\sim \mu^{\phi}$, and create branch events at Poisson rate $2\lambda_t(\mu)$, at times $0\leq t_1<\ldots<t_n\leq T$ say. We take independently at random the trees ${\mathcal{T}}^{t_k,X^{\infty}_{t_k}}\sim K^{{\bf{C}}_T}(t_k,X^{\infty}_{t_k};\cdot)$, which are copies of the critical branching process started at the temporal and spatial locations of each of the branch events. We then glue ${\mathcal{T}}^{t_1,X^{\infty}_{t_1}},\ldots,{\mathcal{T}}^{t_n,X^{\infty}_{t_n}}$ to the $Q$-process $(X^{\infty}_t)_{0\leq t\leq T}$ using $\Psi$ (defined in \eqref{eq:trees glued to path}), obtaining the $\mu$-spine decomposition of the critical branching process \begin{equation}\label{eq:mu-spine decomposition} ({\bf{X}}^{\infty,\mu}_T,\Pi^{\infty,\mu}_T)=\Psi((0,(X^{\infty}_t)_{0\leq t\leq T},\ast),\{{\mathcal{T}}^{t_k,X^{\infty}_{t_k}}\}). \end{equation} \subsection{Statement of Results} The proof of Corollary \ref{cor:convergence of the spine} relied on \cite[Theorem 4.2]{Bieniek2018}. We now seek to extend Corollary \ref{cor:convergence of the spine} to obtain convergence of the skeleton, so that the branching rate along the spine and the distribution of the side branches also converge. This will require an extension of \cite[Theorem 4.2]{Bieniek2018}. In the following, we write $\text{W}_1h$ and $\text{W}_1t$ for the Wasserstein and weak atomic metrics respectively on ${\mathcal{P}}({\bf{C}}^{\ast}_T)$. We fix $\mu\in{\mathcal{P}}(\bar D)$ and some time horizon $T<\infty$, and take the process $(X^T_t)_{0\leq t\leq T}$ with distribution \[ (X^T_t)_{0\leq t\leq T}\sim{\mathcal{L}}_{\mu}((X_t)_{0\leq t\leq T}\lvert \tau_{\partial}>T).
\] As in the construction of ${\bf{X}}^{\infty,\mu}_T$ \eqref{eq:mu-spine decomposition}, we create branch events at rate $2\lambda_t(\mu)=2\langle {\mathcal{L}}_{\mu}(X_t\lvert \tau_{\partial}>t),\kappa\rangle$, at times $0\leq t_1<\ldots<t_n\leq T$. We then create independently at random the trees ${\mathcal{T}}^{t_k,X^T_{t_k}}\sim K^{{\bf{C}}_T}(t_k,X^T_{t_k};\cdot)$ ($k=1,\ldots,n$) which we glue to $X^T$, \begin{equation} {\bf{X}}^{T,\mu}:=\Psi((0,(X^T_t)_{0\leq t\leq T},\ast),\{{\mathcal{T}}^{t_k,X^T_{t_k}}\}). \end{equation} \begin{theo}\label{theo:convergence of augmented DHPs} We take a sequence of Fleming-Viot processes $\vec{X}^N$, with augmented DHPs ${\bf{H}}^{N,i,T}$, such that $\frac{1}{N}\sum_{i=1}^N\delta_{X^{N,i}_0}\rightarrow \mu$ weakly in probability. Then we have \begin{equation} \text{W}_1h\Big(\frac{1}{N}\sum_{i=1}^N\delta_{{\bf{H}}^{N,i,T}},{\mathcal{L}}({\bf{X}}^{T,\mu})\Big)\rightarrow 0\quad\text{as}\quad N\rightarrow\infty. \end{equation} \end{theo} Note that the proof of Theorem \ref{theo:convergence of augmented DHPs} is entirely different from Bieniek and Burdzy's proof of \cite[Theorem 4.2]{Bieniek2018}, and thus provides an alternative proof of \cite[Theorem 4.2]{Bieniek2018} (under more restrictive conditions). This theorem then allows us to prove the following corollary in precisely the same way in which we proved Corollary \ref{cor:convergence of the spine}. \begin{cor}\label{cor:asymptotic distribution of the spine decomposition} We take a sequence of Fleming-Viot processes $\vec{X}^N$, with skeletons ${\bf{H}}^{N,\infty}_T$, such that $\frac{1}{N}\sum_{i=1}^N\delta_{X^{N,i}_0}\rightarrow \mu$ weakly in probability. We let ${\bf{X}}^{\infty,\mu}_T$ be the $\mu$-spine decomposition of the critical branching process up to time $T$.
Then we have \begin{equation} {\bf{H}}^{N,\infty}_T\rightarrow {\bf{X}}^{\infty,\mu}_T\quad\text{in}\quad {\bf{C}}^{\ast}_T\quad\text{in distribution}\quad\text{as}\quad N\rightarrow\infty. \end{equation} \end{cor} \begin{proof}[Proof of Corollary \ref{cor:asymptotic distribution of the spine decomposition}] As in the proof of Corollary \ref{cor:convergence of the spine}, we assume without loss of generality that $T>0$. We take the Fleming-Viot multi-colour process $(\vec{\tilde{X}}^{N}_t,\vec{\eta}^N)$ obtained by taking the colour space to be ${\mathbb{K}}={\bf{C}}^{\ast}_T$ and set \[ \tilde{X}^{N,i}_t:=X^{N,i}_{T+t},\quad \eta^{N,i}_t:={\bf{H}}^{N,i,T+t}_T. \] We define $\Phi^T$, which evaluates $\phi$ at the time-$T$ value of the primary path, as \[ \Phi^T:{\bf{C}}^{\ast}_T\ni {\mathcal{T}}=(t^v_b,f^v,t^v_d)_{v\in \Pi}\mapsto \phi(f^{e_{\ell({\mathcal{T}})}}(T)). \] We repeat the argument used to establish \eqref{eq:convergence of tilted empiricial measure at time 0 to tilted law of killed process conditioned not to be killed up to time T} in the proof of Corollary \ref{cor:convergence of the spine}, using Theorem \ref{theo:convergence of augmented DHPs} in place of \cite[Theorem 4.2]{Bieniek2018} and noting that since ${\mathcal{L}}_{\mu}((X_t)_{0\leq t\leq T}\lvert \tau_{\partial}>T)$ is non-atomic, ${\mathcal{L}}({\bf{X}}^{T,\mu}_T)$ is also non-atomic (the former being the push-forward of the latter under the map taking trees to their primary paths). This establishes that \[ \text{W}_1t({\mathcal{Y}}^N_0,({\mathcal{L}}({\bf{X}}^{T,\mu}_T))^{\Phi^T})\rightarrow 0\quad\text{in probability as}\quad N\rightarrow\infty. \] Here the fixed colour $\iota^N$ of the $N$-particle Fleming-Viot multi-colour process is precisely the skeleton ${\bf{H}}^{N,\infty}_T$.
Moreover we may deduce from Lemma \ref{lem:law of Q-process given by tilting} that $\big({\mathcal{L}}({\bf{X}}^{T,\mu}_T)\big)^{\Phi^T}$ is precisely ${\mathcal{L}}({\bf{X}}^{\infty,\mu}_T)$. Applying Theorem \ref{theo:convergence of the fixed colour} as in the proof of Corollary \ref{cor:convergence of the spine}, we obtain Corollary \ref{cor:asymptotic distribution of the spine decomposition}.\end{proof} \section{${\mathcal{O}}$ and ${\mathcal{U}}$ Notation}\label{section:O and U notation} Before turning to our proof, we introduce the following notation, which shall significantly simplify our proofs. For any finite variation process $(X_t)_{0\leq t<\infty}$ we write $V_t(X)$ for the total variation \[ V_t(X)=\sup_{0=t_0<t_1<\ldots<t_n=t}\sum_{i=0}^{n-1}\lvert X_{t_{i+1}}-X_{t_i}\rvert. \] Moreover for all c\`adl\`ag processes $(X_t)_{0\leq t<\infty}$ we write \[ \Delta X_t= X_t-X_{t-}. \] Given some family of random variables $\{X^{N}:N\in\mathbb{N}\}$ and non-negative random variables $\{Y^{N}:N\in\mathbb{N}\}$, we say $X^{N}={\mathcal{O}}(Y^{N})$ (respectively $X^{N}={\mathcal{U}}(Y^{N})$) if there exists a uniform constant $C<\infty$ (respectively $c>0$) such that $\lvert X^{N}\rvert\leq CY^{N}$ (respectively $X^{N}\geq cY^{N}$). Note that we shall abuse notation by using an equals sign rather than an inclusion sign. We now define the notion of a process sequence class.
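Before moving to the process-level classes, the scalar relations $X^N={\mathcal{O}}(Y^N)$ and $X^N={\mathcal{U}}(Y^N)$, and the total variation along a finite partition, can be sketched as simple predicates. This is a hypothetical Python illustration, not part of the formal development.

```python
def total_variation(samples):
    """V_t(X) along one finite partition: the sum of absolute increments.
    (The definition in the text takes a supremum over all partitions.)"""
    return sum(abs(b - a) for a, b in zip(samples, samples[1:]))

def is_O(xs, ys, C):
    """The relation X^N = O(Y^N): |X^N| <= C * Y^N for every N."""
    return all(abs(x) <= C * y for x, y in zip(xs, ys))

def is_U(xs, ys, c):
    """The relation X^N = U(Y^N): X^N >= c * Y^N for every N."""
    return all(x >= c * y for x, y in zip(xs, ys))
```

Note the one-sided asymmetry: ${\mathcal{O}}$ bounds the absolute value from above, while ${\mathcal{U}}$ bounds the (signed) variable from below.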
Given sequences of processes $\{(X^N_t)_{t\geq 0}:N\in\mathbb{N}\}$ and $\{(Y^N_t)_{t\geq 0}:N\in\mathbb{N}\}$, we say that: \begin{enumerate} \item\label{enum:process sequence classes MG controlled QV} $X^N_t={\mathcal{O}}^{{\text{MG}}}_t(Y^N)$ if for all $N\geq N_0$ (for some $N_0<\infty$) and for some $C<\infty$, $X^N_t$ is a martingale whose quadratic variation is such that \begin{equation} [X^N]_t-\int_0^tCY^N_sds\quad\text{is a supermartingale.} \end{equation} \item $X^N_t={\mathcal{U}}^{{\text{MG}}}_t(Y^N)$ if for all $N\geq N_0$ (for some $N_0<\infty$) and for some $c>0$, $X^N_t$ is a martingale whose quadratic variation is such that \begin{equation} [X^N]_t-\int_0^tcY^N_sds\quad\text{is a submartingale.} \end{equation} \item $X^N_t={\mathcal{O}}^{{\mathcal{F}}V}_t(Y^N)$ if for all $N\geq N_0$ (for some $N_0<\infty$) and for some $C<\infty$, $X^N_t$ is a finite variation process whose total variation is such that \begin{equation} V_t(X^N)-\int_0^tCY^N_sds\quad\text{is a supermartingale.} \end{equation} \item\label{enum:process seqeunce classes FV controlled jumps} $X^N_t={\mathcal{O}}^{\Delta}_t(Y^N)$ if for all $N\geq N_0$ (for some $N_0<\infty$) and for some $C<\infty$, $X^N_t$ is such that \begin{equation} \lvert \Delta X^N_t\rvert\leq CY^N_{t-}\quad\text{for all }0\leq t<\infty,\quad \text{almost surely.} \end{equation} \end{enumerate} We refer to ${\mathcal{O}}^{{\text{MG}}}_t(Y^N)$, ${\mathcal{U}}^{{\text{MG}}}_t(Y^N)$, ${\mathcal{O}}^{{\mathcal{F}}V}_t(Y^N)$ and ${\mathcal{O}}^{\Delta}_t(Y^N)$, for $\{(Y^N_t)_{0\leq t<\infty}:N\in\mathbb{N}\}$ a given sequence of processes, as process sequence classes. Note that, as with sequences of random variables, we abuse notation by using an equals sign rather than an inclusion sign. Suppose that we have constants $r_N>0$ ($N\in\mathbb{N}$).
For a given sequence of processes $Y^N$, write $Z^N_s:=Y^N_{r_Ns}$. The statements $X^N_t={\mathcal{O}}^{{\text{MG}}}_t(Y^N_{r_N\cdot})$, $X^N_t={\mathcal{U}}^{{\text{MG}}}_t(Y^N_{r_N\cdot})$, $X^N_t={\mathcal{O}}^{{\mathcal{F}}V}_t(Y^N_{r_N\cdot})$ and $X^N_t={\mathcal{O}}^{\Delta}_t(Y^N_{r_N\cdot})$ should be interpreted as the statements $X^N_t={\mathcal{O}}^{{\text{MG}}}_t(Z^N_{\cdot})$, $X^N_t={\mathcal{U}}^{{\text{MG}}}_t(Z^N_{\cdot})$, $X^N_t={\mathcal{O}}^{{\mathcal{F}}V}_t(Z^N_{\cdot})$ and $X^N_t={\mathcal{O}}^{\Delta}_t(Z^N_{\cdot})$ respectively. Given an index set $\mathbb{A}$, a family of sequences of processes $\{((X^{N,\alpha}_t)_{0\leq t<\infty})_{N=1}^{\infty}:\alpha\in \mathbb{A}\}$, and a family of process sequence classes $\{{\mathcal{A}}^{N,\alpha}_t:\alpha\in\mathbb{A}\}$, we say $X^{N,\alpha}_t={\mathcal{A}}^{N,\alpha}_t$ uniformly if the constants $C^{\alpha}$ (or $c^{\alpha}$) and $N^{\alpha}_0$ used to define $X^{N,\alpha}_t={\mathcal{A}}^{N,\alpha}_t$ as in \ref{enum:process sequence classes MG controlled QV}-\ref{enum:process seqeunce classes FV controlled jumps} can be chosen uniformly in $\alpha\in\mathbb{A}$. It will be useful to take the sum and intersection of process sequence classes and specific sequences of processes.
To be more precise, for any process sequence classes ${\mathcal{A}}^N_t$ and ${\mathcal{B}}^N_t$, and any sequence of processes $F^N_t$, we say that: \begin{enumerate} \item $X^N_t={\mathcal{A}}^N_t\cap{\mathcal{B}}^N_t$ if $X^N_t={\mathcal{A}}^N_t$ and $X^N_t={\mathcal{B}}^N_t$; \item $X^N_t=F^N_t+{\mathcal{A}}^N_t$ if there exists a sequence of processes $G^N_t$ such that $G^N_t={\mathcal{A}}^N_t$ and $X^N_t=F^N_t+G^N_t$; \item $X^N_t={\mathcal{A}}^N_t+{\mathcal{B}}^N_t$ if there exist sequences of processes $G^N_t$ and $H^N_t$ such that $G^N_t={\mathcal{A}}^N_t$, $H^N_t={\mathcal{B}}^N_t$ and $X^N_t=G^N_t+H^N_t$; \item $dX^N_t=dF^N_t+d{\mathcal{A}}^N_t+d{\mathcal{B}}^N_t$ if there exist sequences of processes $G^N_t$ and $H^N_t$ such that $G^N_t={\mathcal{A}}^N_t$, $H^N_t={\mathcal{B}}^N_t$ and $dX^N_t=dF^N_t+dG^N_t+dH^N_t$. \end{enumerate} For example, if $X^N_t={\mathcal{O}}_t^{{\text{MG}}}(1)\cap{\mathcal{U}}^{{\text{MG}}}_t(1)+{\mathcal{O}}_t^{{\mathcal{F}}V}(\frac{1}{N})\cap{\mathcal{O}}^{\Delta}_t(\frac{1}{N^2})$ then there exist $G^N_t$ and $H^N_t$ such that $X^N_t=G^N_t+H^N_t$, whereby $G^N_t={\mathcal{O}}_t^{{\text{MG}}}(1)\cap{\mathcal{U}}^{{\text{MG}}}_t(1)$ and $H^N_t={\mathcal{O}}_t^{{\mathcal{F}}V}(\frac{1}{N})\cap{\mathcal{O}}^{\Delta}_t(\frac{1}{N^2})$.
Thus $X^N_t={\mathcal{O}}_t^{{\text{MG}}}(1)\cap{\mathcal{U}}^{{\text{MG}}}_t(1)+{\mathcal{O}}_t^{{\mathcal{F}}V}(\frac{1}{N})\cap{\mathcal{O}}^{\Delta}_t(\frac{1}{N^2})$ means that for some $0<c<C<\infty$ there exist, for all $N$ large enough, martingales $G^{N}_t$ and finite variation processes $H^{N}_t$ such that: \begin{align} X^N_t=G^N_t+H^N_t,\\ [G^N]_t-Ct\quad\text{is a supermartingale since}\quad G^N_t={\mathcal{O}}^{{\text{MG}}}_t(1),\\ [G^N]_t-ct\quad\text{is a submartingale since}\quad G^N_t={\mathcal{U}}^{{\text{MG}}}_t(1),\\ V_t(H^N)-\frac{Ct}{N}\quad\text{is a supermartingale since}\quad H^N_t={\mathcal{O}}^{{\mathcal{F}}V}_t(\tfrac{1}{N}),\\ \text{and}\quad \lvert \Delta H^N_t\rvert\leq \frac{C}{N^2}\quad\text{for all}\quad 0\leq t<\infty,\quad \text{almost surely, since}\quad H^N_t= {\mathcal{O}}^{\Delta}_t(\tfrac{1}{N^2}). \end{align} \section{Characterisation of ${\mathcal{Y}}^N_t$}\label{section: characterisation of Y} In this section, we write $(\Omega,{\mathcal{G}},({\mathcal{G}}_t)_{t\geq 0},{\mathbb P})$ for the underlying filtered probability space. \begin{rmk} In the present section, all statements as to processes belonging to various process sequence classes should be interpreted as being uniform over all choices of ${\mathcal{E}},{\mathcal{F}}\in{\mathcal{B}}({\mathbb{K}})$ (or over all sequences of ${\mathcal{G}}_0$-measurable random sets ${\mathcal{E}}^N,{\mathcal{F}}^N\in{\mathcal{B}}({\mathbb{K}})$, in the case of Part \ref{enum:Thm 8.2 true for random sets} of Theorem \ref{theo:characterisation of Y}).
\end{rmk} The results of this paper hinge on consideration of the quantity ${\mathcal{Y}}^N_t$, which we recall is given by tilting the empirical measure of the Fleming-Viot multi-colour process using the principal right eigenfunction, \begin{equation}\tag{\ref{eq:tilted empirical measure}} {\mathcal{Y}}^N_t=\frac{\frac{1}{N}\sum_{i=1}^N\phi(X^{i}_t)\delta_{\eta^{i}_t}}{\frac{1}{N}\sum_{i=1}^N\phi(X^{i}_t)}. \end{equation} The goal of this section is to characterise the evolution of ${\mathcal{Y}}^N$ over large timescales through fast-variable elimination. In particular we prove the following theorem. \begin{theo}\label{theo:characterisation of Y} We define for ${\mathcal{E}}\in{\mathcal{B}}({\mathbb{K}})$, \[ P^{N,{\mathcal{E}}}_t:=\frac{1}{N}\sum_{i=1}^N{\mathbbm{1}}_{\eta^i_t\in {\mathcal{E}}}\phi(X^i_t),\quad Q^N_t:=P^{N,{\mathbb{K}}}_t=\frac{1}{N}\sum_{i=1}^N\phi(X^i_t),\quad\text{and}\quad Y^{N,{\mathcal{E}}}_t:={\mathcal{Y}}^N_t({\mathcal{E}})=\frac{P^{N,{\mathcal{E}}}_t}{Q^N_t}. \] We further define \[ \Lambda^{N,{\mathcal{E}}}_t:=\langle m^{N,{\mathcal{E}}}_t,\Gamma_{0}(\phi)+\kappa\phi^2\rangle +\langle m^{N,{\mathcal{E}}}_t,\phi^2\rangle \langle m^{N}_t,\kappa\rangle\quad \text{and}\quad \Lambda^N_t:=\Lambda^{N,{\mathbb{K}}}_t. \] Then we have the following: \begin{enumerate} \item\label{enum:bound on cov of YE YF} The covariation $[Y^{N,{\mathcal{E}}},Y^{N,{\mathcal{F}}}]_t$ is such that $[Y^{N,{\mathcal{E}}},Y^{N,{\mathcal{F}}}]_t={\mathcal{O}}^{{\mathcal{F}}V}_t(\frac{Y^{N,{\mathcal{E}}}Y^{N,{\mathcal{F}}}}{N})$ for disjoint ${\mathcal{E}}, {\mathcal{F}}\in{\mathcal{B}}({\mathbb{K}})$.
\item There exist martingales ${\mathcal{K}}^{N,{\mathcal{E}}}_t$ for ${\mathcal{E}}\in{\mathcal{B}}({\mathbb{K}})$ such that $Y^{N,{\mathcal{E}}}_t$ satisfies \begin{equation}\label{eq:Y in terms of K and extra terms} \begin{split} Y^{N,{\mathcal{E}}}_t={\mathcal{K}}^{N,{\mathcal{E}}}_t+\int_0^t\Big[-\frac{1}{(N-1)Q_s^N}\langle m^{N,{\mathcal{E}}}_s-Y^{N,{\mathcal{E}}}_s m^N_s,\kappa\phi\rangle -\frac{1}{NQ^N_s}\langle Y^{N,{\mathcal{E}}}_sm^N_s-m^{N,{\mathcal{E}}}_s,\kappa\phi\rangle\\+\frac{1}{N(Q^N_s)^2}\big(Y^{N,{\mathcal{E}}}_s\Lambda^{N}_s-\Lambda^{N,{\mathcal{E}}}_s\big) \Big]ds +{\mathcal{O}}_t^{{\text{MG}}}\big(\frac{Y^{N,{\mathcal{E}}}}{N^3}\big)+{\mathcal{O}}_t^{{\mathcal{F}}V}\big(\frac{Y^{N,{\mathcal{E}}}}{N^2}\big)\cap{\mathcal{O}}^{\Delta}_t\big(\frac{1}{N^3}\big) \end{split} \end{equation} for ${\mathcal{E}}\in{\mathcal{B}}({\mathbb{K}})$, and such that \begin{equation}\label{eq:covariation of KE and KF} \begin{split} [{\mathcal{K}}^{N,{\mathcal{E}}},{\mathcal{K}}^{N,{\mathcal{F}}}]_t=\int_0^t\frac{1}{N(Q^N_s)^2}\Big( \Lambda^{N,{\mathcal{E}}\cap {\mathcal{F}}}_s -Y^{N,{\mathcal{E}}}_s \Lambda^{N,{\mathcal{F}}}_s -Y^{N,{\mathcal{F}}}_s \Lambda^{N,{\mathcal{E}}}_s +Y^{N,{\mathcal{E}}}_sY^{N,{\mathcal{F}}}_s \Lambda^{N}_s \Big)ds\\ +{\mathcal{O}}_t^{{\text{MG}}}\Big(\frac{Y^{{\mathcal{E}}}Y^{{\mathcal{F}}}+Y^{{\mathcal{E}}\cap {\mathcal{F}}}}{N^3}\Big)+{\mathcal{O}}_t^{{\mathcal{F}}V}\Big(\frac{Y^{{\mathcal{E}}}Y^{{\mathcal{F}}}+Y^{{\mathcal{E}}\cap {\mathcal{F}}}}{N^2}\Big)\cap{\mathcal{O}}^{\Delta}_t(0)\quad\text{for all}\quad {\mathcal{E}},{\mathcal{F}}\in{\mathcal{B}}({\mathbb{K}}).
\end{split} \end{equation} \item\label{enum:Y O FV plus U MG} Furthermore $Y^{N,{\mathcal{E}}}_t$ satisfies \begin{equation}\label{eq:Y O FV plus U MG} Y^{N,{\mathcal{E}}}_t={\mathcal{O}}^{{\mathcal{F}}V}_t\big(\frac{Y^{N,{\mathcal{E}}}}{N}\big)\cap{\mathcal{O}}^{\Delta}_t\big(\frac{1}{N^3}\big)+{\mathcal{O}}^{{\text{MG}}}_t\big(\frac{Y^{N,{\mathcal{E}}}}{N}\big)\cap {\mathcal{U}}^{{\text{MG}}}_t\big(\frac{(1-Y^{N,{\mathcal{E}}})^2Y^{N,{\mathcal{E}}}}{N}\big)\cap{\mathcal{O}}^{\Delta}_t\big(\frac{1}{N}\big). \end{equation} \item Parts \ref{enum:bound on cov of YE YF}-\ref{enum:Y O FV plus U MG} remain true if ${\mathcal{E}}$ and ${\mathcal{F}}$ are replaced with a sequence of ${\mathcal{G}}_0$-measurable random sets ${\mathcal{E}}^N$ and ${\mathcal{F}}^N$.\label{enum:Thm 8.2 true for random sets} \end{enumerate} \end{theo} \begin{rmk}\label{rmk:positivity of kappa characterisation of Y} Note that the positivity of $\kappa$ is used here only to establish that the martingale term on the right-hand side of \eqref{eq:Y O FV plus U MG} is ${\mathcal{U}}^{{\text{MG}}}_t\big(\frac{(1-Y^{N,{\mathcal{E}}})^2Y^{N,{\mathcal{E}}}}{N}\big)$, which will be used in the proof of Theorem \ref{theo:convergence of the fixed colour} but not in the proof of Theorem \ref{theo:Convergence to FV diffusion}. \end{rmk} Before proving Theorem \ref{theo:characterisation of Y}, we motivate our choice of ${\mathcal{Y}}^N$. \subsection{Motivation for Choosing ${\mathcal{Y}}^N$} Heuristically, there are two reasons why ${\mathcal{Y}}^N$ is a sensible quantity to investigate. \begin{enumerate} \item If $x(t)$ and $y(t)$ satisfy the ODEs $\dot{x}=c(t)x$ and $\dot{y}=c(t)y$ with the same $c(t)$, then $\frac{y(t)}{x(t)}$ is constant.
Thus if to leading order both $P^{N,{\mathcal{E}}}$ and $Q^N$ evolve with drift terms proportional to themselves (note that we should expect the martingale terms to be small for large $N$), with the same constant of proportionality, then $\frac{P^{N,{\mathcal{E}}}}{Q^{N}}$ should evolve on a longer timescale. The killed process $X_t$ satisfies \[ d\phi(X_t)=L\phi(X_t)dt+\text{martingale terms}=-\lambda \phi(X_t)dt+\text{martingale terms}, \] hence between jumps, and including the process of killing the particles, the quantities $P^{N,{\mathcal{E}}}$ and $Q^N$ evolve with drift terms proportional to themselves, with the same constant of proportionality. Furthermore, if the particle $(X^{N,i},\eta^{N,i})$ dies at time $t$, then $\frac{1}{N}\phi(X^{N,i}_t){\mathbbm{1}}(\eta^{N,i}_t\in {\mathcal{E}})$ (respectively $\frac{1}{N}\phi(X^{N,i}_t)$) is added to the value of $P^{N,{\mathcal{E}}}_t$ (respectively $Q^{N}_t$), the expected value of which is $P^{N,{\mathcal{E}}}_{t-}+{\mathcal{O}}(\frac{1}{N})$ (respectively $Q^{N}_{t-}+{\mathcal{O}}(\frac{1}{N})$). \item Consider a dynamical system in Euclidean space, $\dot{x}_t=b(x_t)$, with an attractive manifold of equilibria and flow map $\varphi(x,s)$. Katzenberger \cite{Katzenberger1991} established (under reasonable conditions) that the long-term dynamics of the randomly perturbed dynamical system \begin{equation}\label{eq:simple randomly perturbed dynamical system Katzenberger analogy} dx^{\epsilon}_t=b(x^{\epsilon}_t)dt+\epsilon dW_t \end{equation} can be obtained by considering \begin{equation}\label{eq:pi function Katzenberger analogy} \varpi(x):=\lim_{s\rightarrow\infty}\varphi(x,s). \end{equation} We will argue that ${\mathcal{Y}}^N_t$ can be thought of as being analogous to the quantity $\varpi$ considered by Katzenberger.
We summarise Katzenberger's idea as follows. Since $\nabla \varpi\cdot b\equiv 0$, Ito's lemma implies that \[ d\varpi(x^{\epsilon}_t)=\epsilon \nabla \varpi(x^{\epsilon}_t)\cdot dW_t+\frac{1}{2}\epsilon^2\Delta \varpi(x^{\epsilon}_t)dt. \] Then by arguing that $x^{\epsilon}_{\frac{t}{\epsilon^2}}\approx \varpi(x^{\epsilon}_{\frac{t}{\epsilon^2}})$ (since the dynamical system should be pushed towards the attractive manifold of equilibria on a fast timescale), we can obtain a scaling limit for $x^{\epsilon}_{\frac{t}{\epsilon^2}}$, since both the drift and diffusion terms are ${\mathcal{O}}(1)$ over this timescale. Consider the $\bar D\times{\mathbb{K}}$-valued killed Markov process $(X_t,\eta_t)_{0\leq t<\tau_{\partial}}$, where $(X_t)_{0\leq t<\tau_{\partial}}$ evolves according to \eqref{eq:killed process SDE} and $\eta_t=\eta_0$ is constant for all $0\leq t<\tau_{\partial}$. Applying \cite[Theorem 2.2]{Villemonais2011} to the Fleming-Viot multi-colour process $\frac{1}{N}\sum_{i=1}^N\delta_{(X^{N,i}_t,\eta^{N,i}_t)}$, we have that \[ \frac{1}{N}\sum_{i=1}^N\delta_{(X^{N,i}_t,\eta^{N,i}_t)}\rightarrow {\mathcal{L}}((X_t,\eta_t)\lvert \tau_{\partial}>t)\quad\text{as}\quad N\rightarrow\infty. \] Therefore we can think of the Fleming-Viot multi-colour process as a random perturbation of the dynamical system with flow map \begin{equation}\label{eq:flow of laws for katzenberger analogy} {\mathcal{P}}(\bar D\times {\mathbb{K}})\times [0,\infty)\ni (\upsilon,s)\mapsto {\mathcal{L}}_{\upsilon}((X_s,\eta_s)\lvert \tau_{\partial}>s)\in {\mathcal{P}}(\bar D\times {\mathbb{K}}).
\end{equation} The analogue of $\varpi(x)$ is therefore \begin{equation}\label{eq:flow map for multi-colour particle system} \begin{split} \lim_{s\rightarrow\infty}{\mathcal{L}}_{\upsilon}((X_s,\eta_s)\lvert \tau_{\partial}>s)=\lim_{s\rightarrow\infty}\frac{\int_{\bar D\times {\mathbb{K}}}{\mathbb P}_x(\tau_{\partial}>s){\mathcal{L}}_x(X_s\lvert \tau_{\partial}>s)\otimes\delta_{\eta}d\upsilon((x,\eta))}{\int_{\bar D\times {\mathbb{K}}}{\mathbb P}_x(\tau_{\partial}>s)d\upsilon((x,\eta))}\\ =\lim_{s\rightarrow\infty}\frac{\int_{\bar D\times {\mathbb{K}}}e^{\lambda s}{\mathbb P}_x(\tau_{\partial}>s){\mathcal{L}}_x(X_s\lvert \tau_{\partial}>s)\otimes\delta_{\eta}d\upsilon((x,\eta))}{\int_{\bar D\times {\mathbb{K}}}e^{\lambda s}{\mathbb P}_x(\tau_{\partial}>s)d\upsilon((x,\eta))}= \frac{\int_{\bar D\times {\mathbb{K}}}\phi(x)\pi\otimes\delta_{\eta}d\upsilon((x,\eta))}{\int_{\bar D\times {\mathbb{K}}}\phi(x)d\upsilon((x,\eta))}. \end{split} \end{equation} Substituting $\frac{1}{N}\sum_{i=1}^N\delta_{(X^{N,i}_t,\eta^{N,i}_t)}$ for $\upsilon$ in \eqref{eq:flow map for multi-colour particle system}, we see that the analogue of $\varpi(x^{\epsilon}_t)$ is \[ \frac{\frac{1}{N}\sum_{i=1}^N\phi(X^{N,i}_t)\pi\otimes\delta_{\eta^{N,i}_t}}{\frac{1}{N}\sum_{i=1}^N\phi(X^{N,i}_t)}=\pi\otimes {\mathcal{Y}}^N_t. \] We discard $\pi$, since it is constant, and consider only ${\mathcal{Y}}^N_t$. \end{enumerate} These considerations motivate the quantity ${\mathcal{Y}}^N_t$, but do not prove anything. We now perform the calculations necessary to prove Theorem \ref{theo:characterisation of Y}. \subsection{Proof of Theorem \ref{theo:characterisation of Y}} We firstly introduce some definitions. We define \[ F(\vec{r})=\frac{p}{q}\quad \text{for}\quad\vec{r}=\begin{pmatrix} p\\ q \end{pmatrix}\in {\mathbb R}_{>0}^2.
\] We write $H=H(\vec{r})$ for the Hessian of $F$ and calculate \begin{equation} \nabla F(\vec{r})=\begin{pmatrix} \frac{1}{q} \\ -\frac{p}{q^2} \end{pmatrix}\quad\text{and}\quad H(\vec{r})=\begin{pmatrix} 0 & -\frac{1}{q^2}\\ -\frac{1}{q^2} & \frac{2p}{q^3} \end{pmatrix}\quad \text{for}\quad\vec{r}=\begin{pmatrix} p\\ q \end{pmatrix}\in {\mathbb R}_{>0}^2. \label{eq:formula for nabla F and Hessian} \end{equation} We have the key property \begin{equation} \nabla F(\vec{r})\cdot \vec{r}=0\quad\text{and}\quad \vec{r}\cdot H(\vec{r})\vec{r}=0\quad \text{for}\quad\vec{r}=\begin{pmatrix} p\\ q \end{pmatrix}\in {\mathbb R}_{>0}^2. \label{eq:formula for nabla F dot r} \end{equation} We further define \[ \vec{R}^{N,{\mathcal{E}}}_t:=\begin{pmatrix} P^{N,{\mathcal{E}}}_t\\ Q^{N}_t \end{pmatrix}\quad\text{so that}\quad Y^{N,{\mathcal{E}}}_t=F(\vec{R}^{N,{\mathcal{E}}}_t). \] We shall firstly establish the following proposition, which characterises $P^{N,{\mathcal{E}}}$.
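The key property above ($\nabla F(\vec r)\cdot \vec r=0$ and $\vec r\cdot H(\vec r)\vec r=0$ for $F(p,q)=p/q$) can be checked numerically. The following standalone Python sketch, with hypothetical helper names, evaluates both quantities from the explicit gradient and Hessian formulas.

```python
def grad_F(p, q):
    # gradient (1/q, -p/q^2) of F(p, q) = p/q
    return (1.0 / q, -p / q ** 2)

def hess_F(p, q):
    # Hessian of F(p, q) = p/q: rows (0, -1/q^2) and (-1/q^2, 2p/q^3)
    return ((0.0, -1.0 / q ** 2), (-1.0 / q ** 2, 2.0 * p / q ** 3))

def key_identities(p, q):
    """Return (grad F(r) . r, r . H(r) r); both should vanish for p, q > 0."""
    gp, gq = grad_F(p, q)
    H = hess_F(p, q)
    Hr = (H[0][0] * p + H[0][1] * q, H[1][0] * p + H[1][1] * q)
    return (gp * p + gq * q, p * Hr[0] + q * Hr[1])
```

Both identities reflect the scale invariance $F(c\vec r)=F(\vec r)$: differentiating in $c$ once and twice at $c=1$ gives the first- and second-order relations respectively.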
\begin{prop}\label{prop:proposition characterising pk} We have for all ${\mathcal{E}}\in{\mathcal{B}}({\mathbb{K}})$: \begin{equation} dP^{N,{\mathcal{E}}}_t=P^{N,{\mathcal{E}}}_t\big(-\lambda +\frac{N}{N-1}\langle m^{N}_t,\kappa\rangle\big)dt -\frac{1}{N-1}\langle m^{N,{\mathcal{E}}}_t,\kappa\phi\rangle dt+dM^{N,{\mathcal{E}}}_t \label{eq:sde for pk} \end{equation} whereby $M^{N,{\mathcal{E}}}$ are martingales which satisfy for all ${\mathcal{E}},{\mathcal{F}}\in{\mathcal{B}}({\mathbb{K}})$: \begin{equation} \begin{split} [M^{N,{\mathcal{E}}},M^{N,{\mathcal{F}}}]_t=-\frac{1}{N}\int_0^tP^{N,{\mathcal{E}}}_s\langle m^{N,{\mathcal{F}}}_s,\kappa\phi\rangle+P^{N,{\mathcal{F}}}_s\langle m^{N,{\mathcal{E}}}_s,\kappa\phi\rangle ds\\ +\frac{1}{N}\int_0^t\Lambda^{N,{\mathcal{E}}\cap{\mathcal{F}}}_s ds +{\mathcal{O}}_t^{{\text{MG}}}\big(\frac{P^{{\mathcal{E}}}P^{{\mathcal{F}}}+P^{{\mathcal{E}}\cap {\mathcal{F}}}}{N^3}\big)+{\mathcal{O}}_t^{{\mathcal{F}}V}\big(\frac{P^{{\mathcal{E}}}P^{{\mathcal{F}}}+P^{{\mathcal{E}}\cap {\mathcal{F}}}}{N^2}\big)\cap{\mathcal{O}}^{\Delta}_t(0). \end{split} \label{eq:covariation for ME MF} \end{equation} \end{prop} We write $M^N_t$ for $M^{N,{\mathbb{K}}}_t$. We will then establish Part \ref{enum:bound on cov of YE YF} of Theorem \ref{theo:characterisation of Y}, followed by the following version of Ito's lemma. \begin{lem}[Ito's Lemma]\label{lem:Ito for F} We have \begin{equation} Y^{N,{\mathcal{E}}}_t=Y^{N,{\mathcal{E}}}_0+\int_0^t\nabla F(\vec{R}^{N,{\mathcal{E}}}_{s-})\cdot d\vec{R}^{N,{\mathcal{E}}}_s+\frac{1}{2}\int_0^td\vec{R}^{N,{\mathcal{E}}}_s\cdot H(\vec{R}^{N,{\mathcal{E}}}_{s-})d\vec{R}^{N,{\mathcal{E}}}_s+{\mathcal{O}}^{{\mathcal{F}}V}_t(\frac{Y^{N,{\mathcal{E}}}}{N^2})\cap{\mathcal{O}}^{\Delta}_t(\frac{1}{N^3}).
\end{equation} \end{lem} Combining \eqref{eq:formula for nabla F and Hessian} with Proposition \ref{prop:proposition characterising pk} and Lemma \ref{lem:Ito for F}, we obtain \eqref{eq:Y in terms of K and extra terms} by calculation, whereby \begin{equation}\label{eq:formula for KNE} {\mathcal{K}}^{N,{\mathcal{E}}}_t:=\int_0^t\frac{1}{Q^N_{s-}}\big(dM^{N,{\mathcal{E}}}_s-Y^{N,{\mathcal{E}}}_{s-}dM^{N}_s\big). \end{equation} We then obtain \eqref{eq:covariation of KE and KF} from \eqref{eq:covariation for ME MF} and \eqref{eq:formula for KNE}. We now use the positivity of $\kappa$ to calculate \[ \begin{split} \Lambda^{N,{\mathcal{E}}}_t -2Y^{N,{\mathcal{E}}}_t \Lambda^{N,{\mathcal{E}}}_t +(Y^{N,{\mathcal{E}}}_t)^2 \Lambda^{N}_t=\big(1-2Y^{N,{\mathcal{E}}}_t+(Y^{N,{\mathcal{E}}}_t)^2\big)\Lambda^{N,{\mathcal{E}}}_t+(Y^{N,{\mathcal{E}}}_t)^2 \Lambda^{N,{\mathcal{E}}^c}_t\\ ={\mathcal{U}}\big(Y^{N,{\mathcal{E}}}(1-Y^{N,{\mathcal{E}}})^2\big) \end{split} \] so that by \eqref{eq:Y in terms of K and extra terms} we have \[ Y^{N,{\mathcal{E}}}_t={\mathcal{O}}^{{\mathcal{F}}V}_t\big(\frac{Y^{N,{\mathcal{E}}}}{N}\big)\cap{\mathcal{O}}^{\Delta}_t\big(\frac{1}{N^3}\big)+{\mathcal{O}}^{{\text{MG}}}_t\big(\frac{Y^{N,{\mathcal{E}}}}{N}\big)\cap {\mathcal{U}}^{{\text{MG}}}_t\big(\frac{(1-Y^{N,{\mathcal{E}}})^2Y^{N,{\mathcal{E}}}}{N}\big). \] Since there are no simultaneous killing events (almost surely) and $\phi$ is both bounded and bounded away from $0$ (by Theorem \ref{theo:convergence to QSD for reflected diffusion with soft killing}), $Y^{N,{\mathcal{E}}}_t= {\mathcal{O}}^{\Delta}_t(\frac{1}{N})$. Therefore we have Part \ref{enum:Y O FV plus U MG} of Theorem \ref{theo:characterisation of Y}.
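The first equality in the display above uses only the decomposition $\Lambda^{N}_t=\Lambda^{N,{\mathcal{E}}}_t+\Lambda^{N,{\mathcal{E}}^c}_t$ and elementary algebra. The following spot-check of this polynomial identity (plain Python, arbitrary invented values; illustration only) confirms the rearrangement.

```python
import random

random.seed(0)
for _ in range(1000):
    # lam_E, lam_Ec stand in for Lambda^{N,E}_t and Lambda^{N,E^c}_t;
    # their sum plays the role of Lambda^N_t, and y that of Y^{N,E}_t.
    lam_E = random.uniform(0.0, 10.0)
    lam_Ec = random.uniform(0.0, 10.0)
    lam_N = lam_E + lam_Ec
    y = random.uniform(0.0, 1.0)

    lhs = lam_E - 2 * y * lam_E + y**2 * lam_N
    rhs = (1 - y)**2 * lam_E + y**2 * lam_Ec
    assert abs(lhs - rhs) < 1e-9
```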
Since in Parts \ref{enum:bound on cov of YE YF}-\ref{enum:Y O FV plus U MG} of Theorem \ref{theo:characterisation of Y} the statements of processes belonging to the various process sequence classes are uniform over all choices of ${\mathcal{E}},{\mathcal{F}}\in{\mathcal{B}}({\mathbb{K}})$, we may take $N_0<\infty$ such that every $N\geq N_0$ is large enough for all such statements. We take sequences of $\sigma_0$-measurable sets ${\mathcal{E}}^N$ and ${\mathcal{F}}^N$. We take for each $\epsilon>0$ and $N\geq N_0$ disjoint ${\mathcal{S}}^{N,\epsilon}_1,\ldots,{\mathcal{S}}^{N,\epsilon}_{n^N_{\epsilon}}\in{\mathcal{B}}({\mathbb{K}})$ such that \begin{equation}\label{eq:prob of event F less than epsilon} \cup_{i=1}^{n^{N}_{\epsilon}}{\mathcal{S}}^{N,\epsilon}_i={\mathbb{K}}\quad\text{and}\quad {\mathbb P}(F^N_{\epsilon})<\epsilon\quad\text{whereby}\quad F^N_{\epsilon}:=\{\exists\, 1\leq i\leq n^N_{\epsilon}: \lvert {\mathcal{S}}^{N,\epsilon}_i\cap \text{supp}({\mathcal{Y}}^N_0)\rvert \geq 2\}.
\end{equation} We then define \[ \begin{split} I^N_{\epsilon}:=\{1\leq i\leq n^N_{\epsilon}:\lvert {\mathcal{S}}^{N,\epsilon}_i\cap \text{supp}(\chi^N_0)\cap {\mathcal{E}} \rvert \geq 2\},\quad {\mathcal{E}}^{N,\epsilon}:=\cup_{i\in I^N_{\epsilon}}{\mathcal{S}}^{N,\epsilon}_i,\\ J^N_{\epsilon}:=\{1\leq i\leq n^N_{\epsilon}:\lvert {\mathcal{S}}^{N,\epsilon}_i\cap \text{supp}(\chi^N_0)\cap {\mathcal{F}} \rvert \geq 2\}\quad\text{and}\quad {\mathcal{F}}^{N,\epsilon}:=\cup_{i\in J^N_{\epsilon}}{\mathcal{S}}^{N,\epsilon}_i \end{split} \] for $\epsilon>0$ and $N\geq N_0$. Since for each $N\geq N_0$ and $\epsilon>0$ there are finitely many possibilities for the random sets ${\mathcal{E}}^{N,\epsilon}$ and ${\mathcal{F}}^{N,\epsilon}$, the statements of Parts \ref{enum:bound on cov of YE YF}-\ref{enum:Y O FV plus U MG} of Theorem \ref{theo:characterisation of Y} remain true with ${\mathcal{E}}$ and ${\mathcal{F}}$ replaced with the sequences of random sets ${\mathcal{E}}^{N,\epsilon}$ and ${\mathcal{F}}^{N,\epsilon}$ respectively, for any $\epsilon>0$. Moreover, on the event $F^{N}_{\epsilon}$, ${\mathcal{E}}^{N,\epsilon}={\mathcal{E}}^N$ and ${\mathcal{F}}^{N,\epsilon}={\mathcal{F}}^N$. Thus by \eqref{eq:prob of event F less than epsilon} and the uniformity of the above process sequence classes, we obtain Part \ref{enum:Thm 8.2 true for random sets} of Theorem \ref{theo:characterisation of Y}.
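The index sets $I^N_{\epsilon}$ and $J^N_{\epsilon}$ (and the condition defining $F^N_{\epsilon}$) are determined by a purely combinatorial rule: which cells of the partition contain at least two support points. A minimal sketch of that rule, with a hypothetical finite colour space and partition invented purely for illustration:

```python
def indices_with_at_least_two(cells, support):
    """Indices i with |S_i ∩ support| >= 2 -- the condition defining
    I^N_eps, J^N_eps and F^N_eps (all data here are hypothetical
    finite sets standing in for the S_i and the supports)."""
    return {i for i, cell in enumerate(cells) if len(cell & support) >= 2}

# Hypothetical partition of a 6-point colour space K = {0,...,5}.
cells = [{0, 1}, {2, 3}, {4, 5}]
support = {0, 1, 4}          # stand-in for the support of the initial measure
E = {0, 1, 2}                # stand-in for a measurable set E

F_idx = indices_with_at_least_two(cells, support)      # cells meeting support twice
I_eps = indices_with_at_least_two(cells, support & E)  # ... intersected with E
E_eps = set().union(*[cells[i] for i in I_eps])        # the induced random set

assert F_idx == {0}
assert I_eps == {0}
assert E_eps == {0, 1}
```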
\subsubsection*{Proof of Proposition \ref{prop:proposition characterising pk}} Since $N$ is fixed throughout this proof, we suppress the $N$ superscript for the sake of notation, where it would not create confusion. We recall that $\tau^i_n$ represents the $n^{\text{th}}$ killing time of particle $(X^i,\eta^i)$ ($\tau^i_0:=0$), $\tau_n$ the $n^{\text{th}}$ killing time of any particle ($\tau_0=0$), and $J^N_t:=\frac{1}{N}\sup\{n:\tau_n\leq t\}$ the number of killing times up to time $t$, renormalised by $N$. We fix ${\mathcal{E}}\in{\mathcal{B}}({\mathbb{K}})$ and set $\phi^{{\mathcal{E}}}(x,\eta)=\phi(x){\mathbbm{1}}(\eta\in {\mathcal{E}})$. We define the processes \begin{equation} \begin{split} A^{{\mathcal{E}}}_t=\frac{1}{N}\sum_{i=1}^N\sum_{\tau^i_n\leq t}\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})-\frac{N}{N-1}\int_0^tP^{{\mathcal{E}}}_s\langle m^N_s,\kappa\rangle ds+\frac{1}{N(N-1)}\sum_{i=1}^N\sum_{\tau^i_n\leq t}\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-}), \\ B^{{\mathcal{E}}}_t=\langle m^{N,{\mathcal{E}}}_t,\phi\rangle-\langle m^{N,{\mathcal{E}}}_0,\phi\rangle -\frac{1}{N}\sum_{i=1}^N\sum_{\tau^i_n\leq t}\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})+\lambda\int_0^t\langle m^{N,{\mathcal{E}}}_s,\phi\rangle ds,\\ \text{and}\quad C^{{\mathcal{E}}}_t=\frac{1}{N}\sum_{i=1}^N\sum_{\tau^i_n\leq t}\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})-\int_0^t\langle m^{N,{\mathcal{E}}}_s,\kappa\phi\rangle ds.
\end{split} \label{eq:defin of Ak Bk Ck} \end{equation} We firstly establish that $A^{{\mathcal{E}}}_t$, $B^{{\mathcal{E}}}_t$ and $C^{{\mathcal{E}}}_t$ are martingales, so that \begin{equation} \begin{split} M^{{\mathcal{E}}}_t=A^{{\mathcal{E}}}_t+B^{{\mathcal{E}}}_t-\frac{1}{N-1}C^{{\mathcal{E}}}_t =-\frac{N}{N-1}\int_0^tP^{N,{\mathcal{E}}}_s\langle m^N_s,\kappa\rangle ds \\+\langle m^{N,{\mathcal{E}}}_t,\phi\rangle-\langle m^{N,{\mathcal{E}}}_0,\phi\rangle +\lambda\int_0^t\langle m^{N,{\mathcal{E}}}_s,\phi\rangle ds +\frac{1}{N-1}\int_0^t\langle m^{N,{\mathcal{E}}}_s,\kappa\phi\rangle ds \end{split} \label{eq:formula for martingale Mk} \end{equation} is a martingale. We therefore have \eqref{eq:sde for pk}. We will then establish \eqref{eq:covariation for ME MF} by establishing it for ${\mathcal{E}}={\mathcal{F}}$ and for ${\mathcal{E}},{\mathcal{F}}$ disjoint. \underline{$A^{{\mathcal{E}}}_t$ is a martingale} We have that if particle $i$ dies at time $t$, then each $j\neq i$ is selected with probability $\frac{1}{N-1}$, so that the expected value of $\phi^{{\mathcal{E}}}(X^i_t,\eta^i_t)$ is given by: \[ \frac{1}{N-1}\sum_{j\neq i}\phi^{{\mathcal{E}}}(X^j_{t-},\eta^j_{t-})=\frac{N}{N-1}\times \big[P^{{\mathcal{E}}}_{t-} - \frac{1}{N}\phi^{{\mathcal{E}}}(X^i_{t-},\eta^i_{t-}) \big].
\] Therefore \[ \begin{split} \frac{1}{N}\sum_{i=1}^N\sum_{\tau^i_n\leq t}\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})-\frac{1}{N}\sum_{i=1}^N\sum_{\tau^i_n\leq t}\Big(\frac{N}{N-1}\times \big[P^{{\mathcal{E}}}_{\tau^i_n-} - \frac{1}{N}\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\big] \Big)\\ =\frac{1}{N}\sum_{i=1}^N\sum_{\tau^i_n\leq t}\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})-\frac{N}{N-1}\int_0^tP^{{\mathcal{E}}}_{s-}dJ^N_s+\frac{1}{N(N-1)}\sum_{i=1}^N\sum_{\tau^i_n\leq t}\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-}) \end{split} \] is a martingale. We finally note that \begin{equation}\label{eq:MG for J} J^N_t-\int_0^t\langle m^N_s,\kappa\rangle ds\quad\text{is a martingale,} \end{equation} so that \[ \int_0^tP_{s-}^{N,{\mathcal{E}}}dJ^N_s-\int_0^tP_s^{N,{\mathcal{E}}}\langle m^N_s,\kappa\rangle ds \] is a martingale. Therefore $A^{{\mathcal{E}}}_t$ is a martingale. \underline{$B^{{\mathcal{E}}}_t$ is a martingale} We define the martingale \[ B^{{{\mathcal{E}}},i,n}_t:=\begin{cases} 0,\quad t<\tau^i_n\\ \phi^{{\mathcal{E}}}(X^i_t,\eta^i_t)-\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})-\int_{\tau^i_n}^tL\phi(X^i_s){\mathbbm{1}}(\eta^i_s\in{\mathcal{E}})ds,\quad \tau^i_n\leq t<\tau^i_{n+1}\\ -\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})-\int_{\tau^i_n}^{\tau^i_{n+1}}L\phi(X^i_s){\mathbbm{1}}(\eta^i_s\in{\mathcal{E}})ds,\quad t\geq \tau^i_{n+1} \end{cases}. \] Since $L\phi=-\lambda\phi$, we can write \[ \begin{split} B_t^{{{\mathcal{E}}},i,n}={\mathbbm{1}}(\tau^i_n\leq t<\tau^i_{n+1})\phi^{{\mathcal{E}}}(X^i_t,\eta^i_{t})-\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n}){\mathbbm{1}}(t \geq\tau^i_{n}) +\lambda\int_0^t{\mathbbm{1}}(\tau^i_n\leq s<\tau^i_{n+1})\phi^{{\mathcal{E}}}(X^i_{s},\eta^i_{s})ds.
\end{split} \] Therefore $\sum_{n<n_0}B_t^{{{\mathcal{E}}},i,n}$ is a martingale for all $n_0<\infty$. Since for some $C<\infty$ we have $\sum_{n<n_0}\lvert B^{{\mathcal{E}},i,n}_t\rvert\leq C(1+t+NJ^N_t)$ for all $n_0<\infty$, $(\sum_{n<\infty}B_t^{{{\mathcal{E}}},i,n})_{0\leq t<\infty}$ is a martingale. Therefore \[ B_t^{{\mathcal{E}}}:=\frac{1}{N}\sum_{i=1}^N\sum_{n<\infty}B_t^{{{\mathcal{E}}},i,n} \] is a martingale. \underline{$C^{{\mathcal{E}}}_t$ is a martingale} We have that \[ C_t^{{{\mathcal{E}}},i,n}:={\mathbbm{1}}(t\geq \tau^i_{n+1})\phi^{{\mathcal{E}}}(X^i_{\tau^i_{n+1}-},\eta^i_{\tau^i_{n+1}-})-\int_0^{t}{\mathbbm{1}}(\tau^i_n\leq s<\tau^i_{n+1})\kappa(X^i_s)\phi^{{\mathcal{E}}}(X^i_s,\eta^i_s)ds \] is a martingale. Since for some $C<\infty$ we have $\sum_{n<n_0}\lvert C^{{\mathcal{E}},i,n}_t\rvert\leq C(1+t+NJ^N_t)$ for all $n_0<\infty$, $\sum_{n<\infty}C^{{\mathcal{E}},i,n}$ is a martingale. Therefore \[ C^{{\mathcal{E}}}_t:=\frac{1}{N}\sum_{i=1}^N\sum_{n<\infty}C_t^{{{\mathcal{E}}},i,n} \] is a martingale. \underline{The Quadratic Variation of $M^{{\mathcal{E}}}$} We firstly decompose $M^{{\mathcal{E}}}_t$ into \[ M^{{\mathcal{E}}}_t:=M^{{\mathcal{E}},J}_t+M^{{\mathcal{E}},C}_t, \] whereby \begin{equation}\label{eq:equation for MEJ} M^{{{\mathcal{E}}},J}_t:=\sum_{\tau^i_n\leq t}\Delta M^{{\mathcal{E}}}_{\tau^i_n}=\frac{1}{N}\sum_{\tau^i_n\leq t}\big[\phi^{{\mathcal{E}}}(X^{i}_{\tau^i_n},\eta^{i}_{\tau^i_n}) -\phi^{{\mathcal{E}}}(X^{i}_{\tau^i_n-},\eta^{i}_{\tau^i_n-})\big]. \end{equation} Since the underlying dynamics of the Markovian process are continuous, $M^{{{\mathcal{E}}},C}$ is then the continuous part of $M^{{\mathcal{E}}}_t$. We observe that \begin{equation}\label{eq:covariation of MEC MEJ} [M^{{{\mathcal{E}}},C},M^{{{\mathcal{E}}},J}]_t=0.
\end{equation} We now show that \begin{equation} [M^{{{\mathcal{E}}},C}]_t=\frac{1}{N}\int_0^t\langle m^{N,{\mathcal{E}}}_s,\Gamma_{0}(\phi)\rangle ds. \label{eq:martingale problem for Mkc} \end{equation} We define \[ H_t=\langle m^{N,{\mathcal{E}}}_t,\phi\rangle-\langle m^{N,{\mathcal{E}}}_0,\phi\rangle-\sum_{\tau^i_n\leq t}\big[\langle m^{N,{\mathcal{E}}}_{\tau^i_n},\phi\rangle-\langle m^{N,{\mathcal{E}}}_{\tau^i_n-},\phi\rangle\big] \] so that we have \[ [M^{{{\mathcal{E}}},C}]_t=[H]_t. \] We then define \[ H^{i,n}_t=\begin{cases} 0,\quad t<\tau^i_n\\ \phi^{{\mathcal{E}}}(X^i_t,\eta^i_t)-\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n}),\quad \tau^i_n\leq t<\tau^i_{n+1}\\ \phi^{{\mathcal{E}}}(X^i_{\tau^i_{n+1}-},\eta^i_{\tau^i_{n+1}-})-\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n}),\quad t\geq \tau^i_{n+1} \end{cases}. \] Thus we have \[ \begin{split} [H^{i,n}]_t=\int_0^t\Gamma_{0}(\phi)(X^i_s){\mathbbm{1}}(\eta^i_s\in{\mathcal{E}}){\mathbbm{1}}(\tau^i_n\leq s<\tau^i_{n+1})ds, \end{split} \] whereby $\Gamma_0$ is the carr\'e du champ operator defined in \eqref{eq:Carre du champs}. Since for fixed $i\in\{1,\ldots,N\}$ the sets $[\tau^i_n,\tau^i_{n+1})$ ($0\leq n<\infty$) are disjoint, we have $[\sum_n H^{i,n}]_t=\sum_n[H^{i,n}]_t$. Moreover, since the particles evolve independently between jumps, we have $[\sum_i\sum_nH^{i,n}]_t=\sum_i[\sum_n H^{i,n}]_t$. Therefore we have $[M^{{{\mathcal{E}}},C}]_t=[H]_t=\frac{1}{N^2}\sum_{i,n}[H^{i,n}]_t$, so that we have \eqref{eq:martingale problem for Mkc}.
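The additivity step $[\sum_n H^{i,n}]_t=\sum_n[H^{i,n}]_t$ rests on the summands having increments supported on disjoint time sets. In discrete time the same mechanism is simply that squared increments of a sum split when the increments never overlap; the following toy illustration (plain Python, hypothetical paths invented for illustration) demonstrates this.

```python
def quadratic_variation(path):
    """Discrete quadratic variation: the sum of squared increments."""
    return sum((b - a)**2 for a, b in zip(path, path[1:]))

# Two toy paths that are constant outside disjoint time windows:
# h1 moves only on steps 0-3, h2 only on steps 3-6.
h1 = [0.0, 1.0, -0.5, 2.0, 2.0, 2.0, 2.0]
h2 = [0.0, 0.0, 0.0, 0.0, 1.5, 0.5, 3.0]
total = [a + b for a, b in zip(h1, h2)]

# Because the increments never overlap, [h1 + h2] = [h1] + [h2].
assert abs(quadratic_variation(total)
           - quadratic_variation(h1) - quadratic_variation(h2)) < 1e-12
```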
We now show that \begin{equation}\label{eq:QV of MJ} \begin{split} [M^{{\mathcal{E}},J}]_t=\frac{1}{N}\int_0^t\langle m^{N}_s,\kappa\rangle \langle m^{N,{\mathcal{E}}}_s,\phi^2\rangle +\langle m^{N,{\mathcal{E}}}_s,\kappa\phi^2\rangle -2P^{{\mathcal{E}}}_s\langle m^{N,{\mathcal{E}}}_s,\kappa\phi\rangle ds\\ +{\mathcal{O}}^{{\mathcal{F}}V}_t(\frac{P^{{\mathcal{E}}}}{N^2})\cap{\mathcal{O}}^{\Delta}_t(0)+{\mathcal{O}}^{{\text{MG}}}_t(\frac{P^{{\mathcal{E}}}}{N^3}). \end{split} \end{equation} We have from \eqref{eq:equation for MEJ} that \begin{equation}\label{eq:Quadratic variation of jump part of ME} [M^{{{\mathcal{E}}},J}]_t=\frac{1}{N^2}\sum_{\tau^i_n\leq t}\big[\phi^{{\mathcal{E}}}(X^{i}_{\tau^i_n},\eta^{i}_{\tau^i_n}) -\phi^{{\mathcal{E}}}(X^{i}_{\tau^i_n-},\eta^{i}_{\tau^i_n-})\big]^2. \end{equation} At time $\tau^i_n-$, the expected value of $\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})$ is \[ P_{\tau^i_n-}^{{\mathcal{E}}}+{\mathcal{O}}\Big(\frac{P_{\tau^i_n-}^{{\mathcal{E}}}+\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})}{N}\Big), \] whilst (using that $\phi$ is bounded) that of $\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})^2$ is \[ \langle m^{N,{\mathcal{E}}}_{\tau^i_n-},\phi^2 \rangle +{\mathcal{O}}\Big(\frac{P_{\tau^i_n-}^{{\mathcal{E}}}+\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})}{N}\Big).
\] Therefore the expected value of $\big[\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})-\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\big]^2$ at time $\tau^i_n-$ is \[ \begin{split} \langle m^{N,{\mathcal{E}}}_{\tau^i_n-},\phi^2\rangle-2P^{{\mathcal{E}}}_{\tau^i_n-}\phi^{{\mathcal{E}}}(X^{i}_{\tau^i_n-},\eta^{i}_{\tau^i_n-})+\big(\phi^{{\mathcal{E}}}(X^{i}_{\tau^i_n-},\eta^{i}_{\tau^i_n-})\big)^2 +{\mathcal{O}}\Big(\frac{P_{\tau^i_n-}^{{\mathcal{E}}}+\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})}{N}\Big). \end{split} \] We now note that for $a,b\geq 0$ \begin{equation}\label{eq:a,b eqn} (a-b)^4\leq \max(a,b)^3(a+b). \end{equation} Since $X^i$ is killed at rate $\kappa(X^i_t)$, we have that \[ \begin{split} \tilde{M}_t:=[M^{{\mathcal{E}},J}]_t-\frac{1}{N^2}\int_0^t\sum_{i=1}^N\kappa(X^i_{s-})\Big[\langle m^{N,{\mathcal{E}}}_{s-},\phi^2\rangle-2P^{{\mathcal{E}}}_{s-}\phi^{{\mathcal{E}}}(X^{i}_{s-},\eta^{i}_{s-})+\big(\phi^{{\mathcal{E}}}(X^{i}_{s-},\eta^{i}_{s-})\big)^2 \\ +{\mathcal{O}}\Big(\frac{P_{s-}^{{\mathcal{E}}}+\phi^{{\mathcal{E}}}(X^i_{s-},\eta^i_{s-})}{N}\Big)\Big]ds \end{split} \] is a martingale, whose quadratic variation is bounded by (using \eqref{eq:a,b eqn}) \[ \begin{split} [\tilde{M}]_{t_2}-[\tilde{M}]_{t_1}=[[M^{{\mathcal{E}},J}]]_{t_2}-[[M^{{\mathcal{E}},J}]]_{t_1}\leq \frac{\lvert\lvert\phi\rvert\rvert_{\infty}^3}{N^4}\sum_{t_1<\tau^i_n\leq t_2}\big[\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})+\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})\big]\\ \leq \frac{\lvert\lvert\phi\rvert\rvert_{\infty}^3}{N^3}\Big((A^{{\mathcal{E}}}_{t_2}-A^{{\mathcal{E}}}_{t_1})+\frac{N}{N-1}\int_{t_1}^{t_2}P^{{\mathcal{E}}}_s\langle m^{N}_s,\kappa\rangle
ds+\frac{N-2}{N-1}(C^{{\mathcal{E}}}_{t_2}-C^{{\mathcal{E}}}_{t_1})+\frac{N-2}{N-1}\int_{t_1}^{t_2}\langle m^{N,{\mathcal{E}}}_s,\kappa\phi\rangle ds\Big). \end{split} \] Therefore we have \eqref{eq:QV of MJ}. Combining this with \eqref{eq:covariation of MEC MEJ} and \eqref{eq:martingale problem for Mkc} we have \begin{equation}\label{eq:quadratic variation of ME} \begin{split} [M^{{\mathcal{E}}}]_t=\frac{1}{N}\int_0^t\langle m^{N}_s,\kappa\rangle \langle m^{N,{\mathcal{E}}}_s,\phi^2\rangle +\langle m^{N,{\mathcal{E}}}_s,\kappa\phi^2\rangle -2P^{{\mathcal{E}}}_s\langle m^{N,{\mathcal{E}}}_s,\kappa\phi\rangle ds\\ +\frac{1}{N}\int_0^t\langle m^{N,{\mathcal{E}}}_s,\Gamma_{0}(\phi)\rangle ds +{\mathcal{O}}^{{\mathcal{F}}V}_t(\frac{P^{{\mathcal{E}}}}{N^2})\cap{\mathcal{O}}^{\Delta}_t(0)+{\mathcal{O}}^{{\text{MG}}}_t(\frac{P^{{\mathcal{E}}}}{N^3}). \end{split} \end{equation} \underline{The Quadratic Covariation $[M^{{\mathcal{E}}},M^{{\mathcal{F}}}]$ for Disjoint ${\mathcal{E}},{\mathcal{F}}\in{\mathcal{B}}({\mathbb{K}})$} It is tempting at this point to apply the polarisation identity to \eqref{eq:quadratic variation of ME}. Unfortunately this does not provide sufficiently good control on the small finite variation terms, so we calculate the covariation directly, in the same manner as the calculation of $[M^{{\mathcal{E}}}]_t$. Since the particles evolve independently between jump times, we have \[ \begin{split} [M^{{\mathcal{E}}},M^{{\mathcal{F}}}]_t=[M^{{\mathcal{E}},J},M^{{\mathcal{F}},J}]_t\\ =\frac{1}{N^2}\sum_{\tau^i_n\leq t}\big[\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})-\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\big]\big[\phi^{{\mathcal{F}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})-\phi^{{\mathcal{F}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\big].
\end{split} \] Since ${\mathcal{E}}\cap{\mathcal{F}}=\emptyset$ we have \begin{equation}\label{eq:cross phiE phiF terms =0} \phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})\phi^{{\mathcal{F}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})=\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\phi^{{\mathcal{F}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})=0, \end{equation} so that \[ [M^{{\mathcal{E}}},M^{{\mathcal{F}}}]_t=-\frac{1}{N^2}\sum_{\tau^i_n\leq t}\big[\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\phi^{{\mathcal{F}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})+\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})\phi^{{\mathcal{F}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\big]. \] Arguing as before and using \eqref{eq:cross phiE phiF terms =0}, the expected value of \[ \phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\phi^{{\mathcal{F}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})+\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})\phi^{{\mathcal{F}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-}) \] at time $\tau^i_n-$ is given by \[ \begin{split} \Big[P_{\tau^i_n-}^{{\mathcal{E}}}+{\mathcal{O}}\Big(\frac{P_{\tau^i_n-}^{{\mathcal{E}}}+\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})}{N}\Big)\Big]\phi^{{\mathcal{F}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\\ +\Big[P_{\tau^i_n-}^{{\mathcal{F}}} +{\mathcal{O}}\Big(\frac{P_{\tau^i_n-}^{{\mathcal{F}}}+\phi^{{\mathcal{F}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})}{N}\Big)\Big]\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-}) \\ =\big[P_{\tau^i_n-}^{{\mathcal{E}}}\phi^{{\mathcal{F}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})+P_{\tau^i_n-}^{{\mathcal{F}}}\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\big](1+{\mathcal{O}}(\frac{1}{N})).
\end{split} \] Arguing as before, we have that \[ \begin{split} \tilde{M}_t:=[M^{{\mathcal{E}}},M^{{\mathcal{F}}}]_t+\frac{1}{N^2}\int_0^t\sum_{i=1}^N\kappa(X^i_{s})[P^{{\mathcal{E}}}_{s}\phi^{{\mathcal{F}}}(X^i_s,\eta^i_s)+P^{{\mathcal{F}}}_{s}\phi^{{\mathcal{E}}}(X^i_s,\eta^i_s)](1+{\mathcal{O}}(\frac{1}{N}))ds\\ =[M^{{\mathcal{E}}},M^{{\mathcal{F}}}]_t+\frac{1}{N}\int_0^t\langle m^{N,{\mathcal{E}}}_sP^{{\mathcal{F}}}_s+m^{N,{\mathcal{F}}}_sP^{{\mathcal{E}}}_s,\kappa\phi\rangle (1+{\mathcal{O}}(\frac{1}{N}))ds \end{split} \] is a martingale, whose quadratic variation satisfies \[ \begin{split} [\tilde{M}]_{t_2}-[\tilde{M}]_{t_1}=[[M^{{\mathcal{E}}},M^{{\mathcal{F}}}]]_{t_2}-[[M^{{\mathcal{E}}},M^{{\mathcal{F}}}]]_{t_1}\\ \leq \frac{2\lvert\lvert\phi\rvert\rvert_{\infty}^2}{N^4}\sum_{t_1<\tau_n^i\leq t_2}\big[\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\phi^{{\mathcal{F}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})+\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})\phi^{{\mathcal{F}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\big]\\ =-\frac{2\lvert\lvert\phi\rvert\rvert_{\infty}^2}{N^2}([M^{{\mathcal{E}}},M^{{\mathcal{F}}}]_{t_2}-[M^{{\mathcal{E}}},M^{{\mathcal{F}}}]_{t_1}). \end{split} \] Thus \[ \begin{split} [[M^{{\mathcal{E}}},M^{{\mathcal{F}}}]]_t+2\frac{\lvert\lvert \phi\rvert\rvert_{\infty}^2}{N^2}[M^{{\mathcal{E}}},M^{{\mathcal{F}}}]_{t} \end{split} \] is a supermartingale.
Therefore \[ \begin{split} [[M^{{\mathcal{E}}},M^{{\mathcal{F}}}]]_t-3\frac{\lvert\lvert \phi\rvert\rvert_{\infty}^2}{N^3}\int_0^t\langle m^{N,{\mathcal{E}}}_sP^{{\mathcal{F}}}_s+m^{N,{\mathcal{F}}}_sP^{{\mathcal{E}}}_s,\kappa\phi\rangle ds \end{split} \] is a supermartingale for all $N$ large enough, so that we have \begin{equation}\label{eq:quadratic covariation of ME MF disjoint} \begin{split} [M^{{\mathcal{E}}},M^{{\mathcal{F}}}]_t=-\frac{1}{N}\int_0^tP^{{\mathcal{E}}}_s\langle m^{N,{\mathcal{F}}}_s,\kappa\phi\rangle+P^{{\mathcal{F}}}_s\langle m^{N,{\mathcal{E}}}_s,\kappa\phi\rangle ds\\+{\mathcal{O}}^{{\mathcal{F}}V}_t(\frac{P^{{\mathcal{E}}}P^{{\mathcal{F}}}}{N^2})\cap{\mathcal{O}}^{\Delta}_t(0)+{\mathcal{O}}^{{\text{MG}}}_t(\frac{P^{{\mathcal{E}}}P^{{\mathcal{F}}}}{N^3}). \end{split} \end{equation} Combining \eqref{eq:quadratic covariation of ME MF disjoint} with \eqref{eq:quadratic variation of ME} we have \eqref{eq:covariation for ME MF}. \qed \subsubsection*{Proof of Part \ref{enum:bound on cov of YE YF} of Theorem \ref{theo:characterisation of Y}} We decompose \[ \vec{R}^{N,{\mathcal{E}}}_t=\vec{R}^{N,{\mathcal{E}},C}_t+\vec{R}^{N,{\mathcal{E}},J}_t\quad \text{and} \quad Y^{N,{\mathcal{E}}}_t=F(\vec{R}^{N,{\mathcal{E}}}_t)=Y^{N,{\mathcal{E}},C}_t+Y^{N,{\mathcal{E}},J}_t \] for ${\mathcal{E}}\in{\mathcal{B}}({\mathbb{K}})$, whereby \[ Y^{N,{\mathcal{E}},J}_t=\sum_{s\leq t}\Delta Y^{N,{\mathcal{E}}}_s\quad\text{and}\quad \vec{R}^{N,{\mathcal{E}},J}_t:=\sum_{s\leq t}\Delta\vec{R}^{N,{\mathcal{E}}}_s=\begin{pmatrix} P^{{\mathcal{E}},J}_t\\ Q^J_t \end{pmatrix}. \] Then by Ito's lemma we have \begin{equation}\label{eq:Ito for YE cts part} dY^{N,{\mathcal{E}},C}_t=\nabla F(\vec{R}^{N,{\mathcal{E}}}_t)\cdot d\vec{R}^{N,{\mathcal{E}},C}_t+\frac{1}{2}d\vec{R}^{N,{\mathcal{E}},C}_t\cdot H(F)(\vec{R}^{N,{\mathcal{E}}}_t)d\vec{R}^{N,{\mathcal{E}},C}_t.
\end{equation} We can therefore calculate \begin{equation}\label{eq:cov YEC YFC calc} \begin{split} d[Y^{N,{\mathcal{E}},C},Y^{N,{\mathcal{F}},C}]_t=\frac{1}{(Q^N_t)^2}\big(dP^{N,{\mathcal{E}},C}_t-Y^{N,{\mathcal{E}}}_{t}dQ^{N,C}_t\big)\cdot\big(dP^{N,{\mathcal{F}},C}_t-Y^{N,{\mathcal{F}}}_{t}dQ^{N,C}_t\big). \end{split} \end{equation} Proposition \ref{prop:proposition characterising pk} implies that \begin{equation}\label{eq:cov PEC PFC} \begin{split} d[P^{N,{\mathcal{E}},C},P^{N,{\mathcal{F}},C}]_t={\mathcal{O}}^{{\mathcal{F}}V}_t(\frac{Y^{N,{\mathcal{E}}}Y^{N,{\mathcal{F}}}}{N}+\frac{Y^{N,{\mathcal{E}}\cap{\mathcal{F}}}}{N^2}) \end{split} \end{equation} for all ${\mathcal{E}},{\mathcal{F}}\in{\mathcal{B}}({\mathbb{K}})$. Combining \eqref{eq:cov YEC YFC calc} with \eqref{eq:cov PEC PFC} we have \begin{equation}\label{eq:cov YEC YFC} d[Y^{N,{\mathcal{E}},C},Y^{N,{\mathcal{F}},C}]_t={\mathcal{O}}^{{\mathcal{F}}V}_t(\frac{Y^{N,{\mathcal{E}}}Y^{N,{\mathcal{F}}}}{N}) \end{equation} for all disjoint ${\mathcal{E}},{\mathcal{F}}\in{\mathcal{B}}({\mathbb{K}})$. We also have that \[ [Y^{N,{\mathcal{E}},J},Y^{N,{\mathcal{F}},J}]_t=\sum_{\tau^i_n\leq t}\Delta Y^{N,{\mathcal{E}}}_{\tau^i_n}\Delta Y^{N,{\mathcal{F}}}_{\tau^i_n}.
\] Since $Q^N_t$ is bounded below away from $0$, by bounding the partial derivatives of $F$ and using \eqref{eq:cross phiE phiF terms =0} we can calculate for all disjoint ${\mathcal{E}},{\mathcal{F}}\in{\mathcal{B}}({\mathbb{K}})$ that \[ \begin{split} \lvert\Delta Y^{N,{\mathcal{E}}}_{\tau^i_n}\Delta Y^{N,{\mathcal{F}}}_{\tau^i_n}\rvert={\mathcal{O}}\Big(\lvert \Delta P^{N,{\mathcal{E}}}_{\tau^i_n}\Delta P^{N,{\mathcal{F}}}_{\tau^i_n}\rvert+\frac{P^{N,{\mathcal{E}}}_{\tau^i_n-}\lvert \Delta P^{N,{\mathcal{F}}}_{\tau^i_n}\rvert+P^{N,{\mathcal{F}}}_{\tau^i_n-}\lvert \Delta P^{N,{\mathcal{E}}}_{\tau^i_n}\rvert}{N}+\frac{P^{N,{\mathcal{E}}}_{\tau^i_n-}P^{N,{\mathcal{F}}}_{\tau^i_n-}}{N^2}\Big)\\ ={\mathcal{O}}\Big(\frac{\lvert\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})\phi^{{\mathcal{F}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\rvert}{N^2}\Big)+{\mathcal{O}}\Big(\frac{\lvert\phi^{{\mathcal{F}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\rvert}{N^2}\Big)+{\mathcal{O}}\Big(\frac{\lvert\phi^{{\mathcal{E}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})-\phi^{{\mathcal{E}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\rvert}{N^2}P^{N,{\mathcal{F}}}_{\tau^i_n-}\Big)\\ +{\mathcal{O}}\Big(\frac{\lvert\phi^{{\mathcal{F}}}(X^i_{\tau^i_n},\eta^i_{\tau^i_n})-\phi^{{\mathcal{F}}}(X^i_{\tau^i_n-},\eta^i_{\tau^i_n-})\rvert}{N^2}P^{N,{\mathcal{E}}}_{\tau^i_n-}\Big)+{\mathcal{O}}\Big(\frac{P^{N,{\mathcal{E}}}_{\tau^i_n-}P^{N,{\mathcal{F}}}_{\tau^i_n-}}{N^2}\Big).
\end{split} \] Since $\kappa$ is bounded, it is then straightforward to see that \[ \sum_{\tau^i_n\leq t}\lvert\Delta Y^{N,{\mathcal{E}}}_{\tau^i_n}\Delta Y^{N,{\mathcal{F}}}_{\tau^i_n}\rvert={\mathcal{O}}^{{\mathcal{F}}V}_t(\frac{P^{N,{\mathcal{E}}}P^{N,{\mathcal{F}}}}{N})={\mathcal{O}}^{{\mathcal{F}}V}_t(\frac{Y^{N,{\mathcal{E}}}Y^{N,{\mathcal{F}}}}{N}), \] so that \begin{equation}\label{eq:cov YEJ YFJ} [Y^{N,{\mathcal{E}},J},Y^{N,{\mathcal{F}},J}]_t=\sum_{\tau^i_n\leq t}\Delta Y^{N,{\mathcal{E}}}_{\tau^i_n}\Delta Y^{N,{\mathcal{F}}}_{\tau^i_n}={\mathcal{O}}^{{\mathcal{F}}V}_t(\frac{P^{N,{\mathcal{E}}}P^{N,{\mathcal{F}}}}{N})={\mathcal{O}}^{{\mathcal{F}}V}_t(\frac{Y^{N,{\mathcal{E}}}Y^{N,{\mathcal{F}}}}{N}) \end{equation} for all disjoint ${\mathcal{E}},{\mathcal{F}}\in{\mathcal{B}}({\mathbb{K}})$. Combining \eqref{eq:cov YEC YFC} with \eqref{eq:cov YEJ YFJ} we have Part \ref{enum:bound on cov of YE YF} of Theorem \ref{theo:characterisation of Y}. \qed \subsubsection*{Proof of Lemma \ref{lem:Ito for F}} We take $0\leq t_0\leq t_1\leq t$ and write \[ Y^{N,{\mathcal{E}},J}_{t_1}-Y^{N,{\mathcal{E}},J}_{t_0}=\sum_{t_0<s\leq t_1}\big(Y^{N,{\mathcal{E}}}_{s}-Y^{N,{\mathcal{E}}}_{s-}\big). \] We may calculate \begin{equation}\label{eq:3rd derivs for Taylor} \frac{\partial^3 F}{\partial p^3}=\frac{\partial^3 F}{\partial p^2\partial q}=0,\quad \frac{\partial^3 F}{\partial p\partial q^2}=\frac{2}{q^3},\quad \frac{\partial^3 F}{\partial q^3}=\frac{-6p}{q^4}.
\end{equation} Thus by Taylor's theorem, \eqref{eq:3rd derivs for Taylor}, the fact that almost surely there are no simultaneous killing events, and the fact that $Q^{N}_t$ is bounded above and below away from $0$, we have \[ \begin{split} \Big\lvert Y^{N,{\mathcal{E}},J}_{s}-Y^{N,{\mathcal{E}},J}_{s-}-\nabla F(\vec{R}^{N,{\mathcal{E}}}_{s-})\cdot (\vec{R}^{N,{\mathcal{E}},J}_{s}-\vec{R}^{N,{\mathcal{E}},J}_{s-})-\frac{1}{2}(\vec{R}^{N,{\mathcal{E}},J}_{s}-\vec{R}^{N,{\mathcal{E}},J}_{s-})\cdot H(F)(\vec{R}^{N,{\mathcal{E}}}_{s-})( \vec{R}^{N,{\mathcal{E}},J}_{s}-\vec{R}^{N,{\mathcal{E}},J}_{s-})\Big\rvert\\ ={\mathcal{O}}\Big(P^{N,{\mathcal{E}}}_{s-}\lvert\Delta Q^N_{s}\rvert^3+\lvert\Delta P^{N,{\mathcal{E}}}_{s}\rvert\lvert\Delta Q^{N}_{s}\rvert^2\Big). \end{split} \] Since $\kappa$ and $\phi$ are bounded, it is then straightforward to see that \begin{equation} \begin{split} Y^{N,{\mathcal{E}},J}_t-Y^{N,{\mathcal{E}},J}_0-\int_0^t\nabla F(\vec{R}^{N,{\mathcal{E}}}_{s-})\cdot d\vec{R}^{N,{\mathcal{E}},J}_s-\frac{1}{2}\int_0^td\vec{R}^{N,{\mathcal{E}},J}_s\cdot H(F)(\vec{R}^{N,{\mathcal{E}}}_{s-})d\vec{R}^{N,{\mathcal{E}},J}_s\\={\mathcal{O}}_t^{{\mathcal{F}}V}\Big(\frac{Y^{N,{\mathcal{E}}}}{N^3}\Big)\cap{\mathcal{O}}^{\Delta}_t\Big(\frac{1}{N^3}\Big). \end{split} \end{equation} Combining with \eqref{eq:Ito for YE cts part} we have Lemma \ref{lem:Ito for F}. \qed \section{Proof of Theorem \ref{theo:Convergence to FV diffusion}}\label{section:proof of convergence to FV diffusion} We now use the calculations of Section \ref{section: characterisation of Y} to prove Theorem \ref{theo:Convergence to FV diffusion}.
\subsection{Proof of Part \ref{enum:conv of measure-valued process to FV in Weak atomic metric} of Theorem \ref{theo:Convergence to FV diffusion}} Our proof is structured as follows: \begin{enumerate} \item\label{enum:conv of dist for given colour} We shall firstly prove that for ${\mathcal{E}}\in{\mathcal{B}}({\mathbb{K}})$ and $f\in C_b(\bar D)$ we have \begin{equation}\label{eq:measure of mNE} \langle m^{N,{\mathcal{E}}}_t-Y^{N,{\mathcal{E}}}_t\pi,f\rangle\rightarrow 0\quad \text{in probability as}\quad t\wedge N\rightarrow\infty. \end{equation} \item \label{enum:conv on disj meas sets to WF diff} We fix $\epsilon>0$ and take $\{k_1,k_2,\ldots\}$, a dense subset of ${\mathbb{K}}$. Then for all $i$ we can find $\frac{\epsilon}{2}<r_i<\epsilon$ such that $\nu^0(\partial B(k_i,r_i))=0$. We set $A_i=B(k_i,r_i)\setminus(\cup_{j=1}^{i-1}A_j)$. Since the disjoint union of the $A_i$ is ${\mathbb{K}}$, we can find $n<\infty$ such that $\nu^0((\cup_{i=1}^n A_i)^c)<\epsilon$. We set $A_0:=(\cup_{i=1}^n A_i)^c$ and pick arbitrary $k_0\in {\mathbb{K}}$. We shall prove that $(Y^{N,A_0}_{Nt},\ldots,Y^{N,A_n}_{Nt})_{0\leq t\leq T}$ converges in $D([0,T];{\mathbb R}^{n+1})$ in distribution to a Wright-Fisher diffusion of rate $\Theta$ and initial condition $(\nu^0(A_0),\ldots,\nu^0(A_n))$.
\item \label{enum:conv to WF super in Wass} We then use this to prove that \begin{equation}\label{eq:conv of measure-valued type process in Wasserstein} ({\mathcal{Y}}^N_{Nt})_{0\leq t\leq T}\rightarrow (\nu_t)_{0\leq t\leq T}\quad\text{in}\quad D([0,T];\mathcal{P}_{\text{W}_1h}({\mathbb{K}}))\quad\text{in distribution.} \end{equation} \item \label{enum:conv to WF super in Wat} We will then strengthen this to convergence in $D([0,T];{\mathcal{P}}_{\text{W}_1t}({\mathbb{K}}))$ in distribution by verifying the compact containment condition of Theorem \ref{theo:compact containment condition}, thereby establishing \eqref{eq:conv of measure-valued process to FV in Weak atomic metric}. \end{enumerate} \subsubsection*{Step \ref{enum:conv of dist for given colour}} We define the $\{A,B\}$-valued processes \[ \zeta^{N,i}_t=\begin{cases} A,\quad \eta^{N,i}_t\in {\mathcal{E}},\\ B,\quad \eta^{N,i}_t\notin {\mathcal{E}}, \end{cases}\quad 1\leq i\leq N. \] This defines a Fleming-Viot process $((X^{N,1}_t,\zeta^{N,1}_t),\ldots,(X^{N,N}_t,\zeta^{N,N}_t))$ on $\bar D\times \{A,B\}$ corresponding to the killed Markov process $(X_t,\zeta_t)_{0\leq t<\tau_{\partial}}$, which evolves according to \eqref{eq:killed process SDE} in the first variable and is constant (up to the killing time) in the second variable. This Fleming-Viot process has empirical measure \[ \upsilon^{N}_t=\frac{1}{N}\sum_{i=1}^N\Big({\mathbbm{1}}(\eta^{N,i}_t\in {\mathcal{E}})\delta_{(X^{N,i}_t,A)}+{\mathbbm{1}}(\eta^{N,i}_t\notin {\mathcal{E}})\delta_{(X^{N,i}_t,B)}\Big). \] Given $\mu\in \mathcal{P}(\bar D\times \{A,B\})$ we can write $\mu=c_A\mu^A\otimes \delta_{A}+c_B\mu^B\otimes \delta_{B}$ for some $c_A,c_B\geq 0$ with $c_A+c_B=1$ and $\mu^A,\mu^B\in\mathcal{P}(\bar D)$.
Then we may calculate \[ \begin{split} \mathcal{L}_{\mu}((X_t,\zeta_t)\lvert \tau_{\partial}>t)=\frac{c_A{\mathbb P}_{\mu^A}(\tau_{\partial}>t)\mathcal{L}_{\mu^A}(X_t\lvert \tau_{\partial}>t)\otimes\delta_A+c_B{\mathbb P}_{\mu^B}(\tau_{\partial}>t)\mathcal{L}_{\mu^B}(X_t\lvert \tau_{\partial}>t)\otimes\delta_B}{{\mathbb P}_{\mu}(\tau_{\partial}>t)}. \end{split} \] Therefore by \eqref{eq:exponential convergence to QSD reflected diffusion} there exist $0\leq c(\mu,t)\leq 1$ and uniform constants $C<\infty$, $k>0$ such that \begin{equation}\label{eq:(X,zeta) for large times like pi in each variable} \lvert\lvert \mathcal{L}_{\mu}((X_t,\zeta_t)\lvert \tau_{\partial}>t)-(c(\mu,t)\pi\otimes \delta_A+(1-c(\mu,t))\pi\otimes \delta_B)\rvert\rvert_{\text{TV}}\leq Ce^{-kt}\quad\text{for all}\quad \mu\in\mathcal{P}(\bar D\times \{A,B\}). \end{equation} Using \cite[Theorem 2.2]{Villemonais2011} we obtain \[ \limsup_{t\wedge N\rightarrow\infty}{\mathbb E}\Big[\Big\lvert\langle \upsilon_{t}^N,\tilde{f}\rangle -\langle {\mathcal{L}}_{\upsilon^N_{t-h}}((X_h,\zeta_h)\lvert \tau_{\partial}>h),\tilde{f}\rangle \Big\rvert\Big]=0 \] for all $h>0$ and $\tilde{f}\in C_b(\bar D\times\{A,B\})$, so that by \eqref{eq:(X,zeta) for large times like pi in each variable} we have \begin{equation}\label{eq:convergence to multiple of pi delta A and pi delta B} \limsup_{t\wedge N\rightarrow\infty}{\mathbb E}\Big[\Big\lvert\Big\langle \upsilon_{t}^N -\Big(c(\upsilon^{N}_{t-h},h)\pi\otimes \delta_A+( 1-c(\upsilon^{N}_{t-h},h))\pi\otimes \delta_B\Big),\tilde{f}\Big\rangle \Big\rvert\Big]\leq Ce^{-kh}.
\end{equation} By taking $\tilde{f}(x,a)$ in \eqref{eq:convergence to multiple of pi delta A and pi delta B} to be $\phi(x){\mathbbm{1}}(a=A)$ and $\phi(x){\mathbbm{1}}(a=B)$ respectively, we see that \[ \limsup_{t\wedge N\rightarrow\infty}{\mathbb E}[\lvert P^{N,{\mathcal{E}}}_t-c(\upsilon_{t-h}^N,h)\langle \pi,\phi\rangle\rvert],\; \limsup_{t\wedge N\rightarrow\infty}{\mathbb E}[\lvert P^{N,{\mathcal{E}}^c}_t-(1-c(\upsilon_{t-h}^N,h))\langle \pi,\phi\rangle\rvert]\leq Ce^{-kh}. \] Since $h>0$ is arbitrary we have \[ \Big\langle \upsilon^{N}_t-\Big(\frac{P^{N,{\mathcal{E}}}_{t}}{\langle \pi,\phi\rangle}\pi\otimes \delta_A+\frac{P^{N,{\mathcal{E}}^c}_t}{\langle \pi,\phi\rangle}\pi\otimes\delta_B\Big),\tilde{f}\Big\rangle\rightarrow 0\quad \text{in probability as}\quad N\wedge t\rightarrow \infty \] for all $\tilde{f}\in C_b(\bar D\times\{A,B\})$. By taking $\tilde{f}(x,a)$ in \eqref{eq:convergence to multiple of pi delta A and pi delta B} to be $\phi(x)$ we see that \[ Q^N_t\rightarrow \langle \pi,\phi\rangle\quad\text{in probability as}\quad N\wedge t\rightarrow\infty. \] Therefore for all $\tilde{f}\in C_b(\bar D\times\{A,B\})$ we have \[ \Big\langle \upsilon^{N}_t-\Big(Y^{N,{\mathcal{E}}}_{t}\pi\otimes \delta_A+Y^{N,{\mathcal{E}}^c}_t\pi\otimes\delta_B\Big),\tilde{f}\Big\rangle\rightarrow 0\quad \text{in probability as}\quad N\wedge t\rightarrow \infty. \] Taking $\tilde{f}(x,a)$ to be $f(x){\mathbbm{1}}(a=A)$ for arbitrary $f\in C_b(\bar D)$, we obtain \eqref{eq:measure of mNE}. \subsubsection*{Step \ref{enum:conv on disj meas sets to WF diff}} We recall that ${\mathcal{K}}^{N,{\mathcal{E}}}_t$, $\Lambda^{N,{\mathcal{E}}}_t$ (for ${\mathcal{E}}\in {\mathcal{B}}({\mathbb{K}})$), ${\mathcal{K}}^N_t$ and $\Lambda^N_t$ were defined in Theorem \ref{theo:Convergence to FV diffusion}.
We define \[ (\vec{Y}^N_{Nt})_{0\leq t\leq T}:=(Y^{N,A_0}_{Nt},\ldots,Y^{N,A_n}_{Nt})_{0\leq t\leq T}, \] and use Aldous' criterion \cite[Theorem 1]{Aldous1978} to verify that $\{{\mathcal{L}}((\vec{Y}^N_{Nt})_{0\leq t\leq T})\}$ is tight in ${\mathcal{P}}(D([0,T];{\mathbb R}^{n+1}))$. Since $0\leq Y^{N,A_i}_{Nt}\leq 1$, \cite[Condition (3)]{Aldous1978} is satisfied. We now take a sequence $(\tau_N,\delta_N)_{N=1}^{\infty}$ of stopping times $\tau_N$ and constants $\delta_N>0$ satisfying \cite[Condition (1)]{Aldous1978}, for the purpose of checking \cite[Condition (A)]{Aldous1978}. In particular, by \eqref{eq:Y O FV plus U MG} there exist $F^N={\mathcal{O}}^{{\text{FV}}}(1)$ and $M^N={\mathcal{O}}^{{\text{MG}}}(1)$ such that \[ Y^{N,A_i}_{N(\tau_N+\delta_N)}-Y^{N,A_i}_{N\tau_N}=F^N_{\tau_N+\delta_N}-F^N_{\tau_N}+M^N_{\tau_N+\delta_N}-M^N_{\tau_N}\rightarrow 0\quad\text{in probability.} \] Thus $\{(\vec{Y}^N_{Nt})_{0\leq t\leq T}\}$ satisfies \cite[Condition (A)]{Aldous1978}, \[ \vec{Y}^N_{N(\tau_N+\delta_N)}-\vec{Y}^N_{N\tau_N}\rightarrow 0\quad\text{in probability,} \] and hence $\{{\mathcal{L}}((\vec{Y}^N_{Nt})_{0\leq t\leq T})\}$ is tight in ${\mathcal{P}}(D([0,T];{\mathbb R}^{n+1}))$ by \cite[Theorem 1]{Aldous1978}. For all ${\mathcal{E}}\in{\mathcal{B}}({\mathbb{K}})$, \eqref{eq:measure of mNE} implies \begin{equation}\label{eq:convergence of ANE} \begin{split} \Lambda^{N,{\mathcal{E}}}_t-Y^{N,{\mathcal{E}}}_t[\langle \pi,\Gamma_{0}(\phi)+\kappa\phi^2\rangle+\langle \pi,\phi^2\rangle \langle \pi,\kappa\rangle],\; Q^N_t-\langle \pi,\phi\rangle \rightarrow 0\quad \text{in probability as}\quad t\wedge N\rightarrow\infty.
\end{split} \end{equation} Then applying \eqref{eq:measure of mNE} and Fubini's theorem to \eqref{eq:Y in terms of K and extra terms} we obtain \begin{equation}\label{eq:K minus Y converges to 0} \begin{split} \sup_{0\leq t\leq T}\lvert (Y^{N,{\mathcal{E}}}_{Nt}-Y^{N,{\mathcal{E}}}_{0})-({\mathcal{K}}^{N,{\mathcal{E}}}_{Nt}-{\mathcal{K}}^{N,{\mathcal{E}}}_{0})\rvert \rightarrow 0\quad\text{in probability as}\quad N\rightarrow\infty. \end{split} \end{equation} We consider a subsequential limit in distribution of $\{(\vec{Y}^N_{Nt})_{0\leq t\leq T}\}$, \[ (\vec{Y}_t)_{0\leq t\leq T}=(Y^{A_0}_t,\ldots,Y^{A_n}_t)_{0\leq t\leq T}, \] which by Part \ref{enum:Y O FV plus U MG} of Theorem \ref{theo:characterisation of Y} must have continuous paths. Using \eqref{eq:K minus Y converges to 0} we conclude that $({\mathcal{K}}^{N,A_0}_{Nt},\ldots,{\mathcal{K}}^{N,A_n}_{Nt})_{0\leq t\leq T}$ converges in $D([0,T];{\mathbb R}^{n+1})$ in distribution along this subsequence to \[ (\vec{Y}_t-\vec{Y}_0)_{0\leq t\leq T}. \] Since $({\mathcal{K}}^{N,A_0}_{Nt},\ldots,{\mathcal{K}}^{N,A_n}_{Nt})_{0\leq t\leq T}$ is a martingale for each $N$, $(Y^{A_0}_{t},\ldots,Y^{A_n}_{t})_{0\leq t\leq T}$ is a martingale with respect to its natural filtration $(\sigma_t)_{t\geq 0}$.
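As an illustrative aside (not part of the proof), the limit object identified in this step, a neutral $(n+1)$-type Wright-Fisher diffusion with quadratic covariation $d[Y_i,Y_j]_t=\Theta\,({\mathbbm{1}}(i=j)Y_i-Y_iY_j)\,dt$ and zero drift, can be sketched numerically by an Euler-Maruyama scheme. The function name and the crude boundary handling below are our own choices, not taken from the paper.

```python
import numpy as np

def simulate_wright_fisher(y0, theta, T=1.0, n_steps=1000, seed=0):
    """Euler-Maruyama sketch of an (n+1)-type neutral Wright-Fisher
    diffusion: a martingale on the simplex with quadratic covariation
    d[Y_i, Y_j]_t = theta * (1{i=j} Y_i - Y_i Y_j) dt."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y0, dtype=float)
    dt = T / n_steps
    path = [y.copy()]
    for _ in range(n_steps):
        # Multinomial-type covariance matrix of the Gaussian increment.
        cov = theta * (np.diag(y) - np.outer(y, y)) * dt
        y = y + rng.multivariate_normal(np.zeros(len(y)), cov)
        y = np.clip(y, 0.0, None)  # crude handling of the simplex boundary
        y = y / y.sum()            # project back onto the simplex
        path.append(y.copy())
    return np.array(path)

path = simulate_wright_fisher([0.5, 0.3, 0.2], theta=1.0)
```

By construction each simulated state stays on the probability simplex, mirroring the fact that the coordinates $(Y^{A_0},\ldots,Y^{A_n})$ are nonnegative and sum to one.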
We obtain from \eqref{eq:covariation of KE and KF} that for all $0\leq i,j\leq n$, \[ \begin{split} {\mathcal{K}}^{N,A_i}_{Nt}{\mathcal{K}}^{N,A_j}_{Nt}-\int_0^t\frac{1}{(Q^N_s)^2}\Big( {\mathbbm{1}}(i=j)\Lambda^{N,A_i}_{Ns} -Y^{N,A_i}_{Ns} \Lambda^{N,A_j}_{Ns} -Y^{N,A_j}_{Ns} \Lambda^{N,A_i}_{Ns} +Y^{N,A_i}_{Ns}Y^{N,A_j}_{Ns} \Lambda^{N}_{Ns} \Big)ds\\ -{\mathcal{O}}_t^{{\text{MG}}}\Big(\frac{Y^{N,A_i}_{N\cdot}Y^{N,A_j}_{N\cdot}+{\mathbbm{1}}(i=j)Y^{N,A_i}_{N\cdot}}{N^2}\Big)-{\mathcal{O}}_t^{{\text{FV}}}\Big(\frac{Y^{N,A_i}_{N\cdot}Y^{N,A_j}_{N\cdot}+{\mathbbm{1}}(i=j)Y^{N,A_i\cap A_j}_{N\cdot}}{N}\Big)\cap{\mathcal{O}}^{\Delta}_t(0) \end{split} \] is a martingale for all $N$, so that by \eqref{eq:convergence of ANE}, \[ \begin{split} Y^{A_i}_{t}Y^{A_j}_{t}-\int_0^t\frac{\langle \pi,\Gamma_{0}(\phi)+\kappa\phi^2\rangle+\langle \pi,\phi^2\rangle \langle \pi,\kappa\rangle}{\langle \pi,\phi\rangle^2}\Big( {\mathbbm{1}}(i=j)Y^{A_i}_{s} -Y^{A_i}_{s} Y^{A_j}_{s} \Big) ds \end{split} \] is a $(\sigma_t)_{t\geq 0}$-martingale. Thus \[ \begin{split} [Y^{A_i},Y^{A_j}]_{t}=\int_0^t\frac{\langle \pi,\Gamma_{0}(\phi)+\kappa\phi^2\rangle+\langle \pi,\phi^2\rangle \langle \pi,\kappa\rangle}{\langle \pi,\phi\rangle^2}\Big( {\mathbbm{1}}(i=j)Y^{A_i}_{s} -Y^{A_i}_{s} Y^{A_j}_{s} \Big) ds. \end{split} \] We have (note that this is the only place where we use $1\in{\mathcal{D}}(L)$) \begin{equation}\label{eq:calculation for diffusivity equal to theta} \begin{split} \langle \pi,\Gamma_{0}(\phi)+\kappa\phi^2\rangle=\langle \pi,L(\phi^2)-2\phi L(\phi)\rangle=\lambda \langle \pi,\phi^2\rangle\quad \text{and}\quad \langle \pi,\kappa\rangle=\langle \pi,-L(1)\rangle=\lambda.
\end{split} \end{equation} Since $\nu^0(\partial A_i)=0$ for all $0\leq i\leq n$, \[ \vec{Y}^N_0\rightarrow (\nu^0(A_0),\ldots,\nu^0(A_n))\quad\text{in probability.} \] Thus each subsequential limit $(\vec{Y}_t)_{0\leq t\leq T}$ must be a solution of the $(n+1)$-type Wright-Fisher diffusion of rate $\Theta$ with initial condition $(\nu^0(A_0),\ldots,\nu^0(A_n))$, which is unique in law. Therefore the whole sequence converges in $D([0,T];{\mathbb R}^{n+1})$ in distribution to this Wright-Fisher diffusion. \subsubsection*{Step \ref{enum:conv to WF super in Wass}} We take $\epsilon_{\ell}\rightarrow 0$, giving $k^{\ell}_0,k^{\ell}_1,\ldots,k^{\ell}_{n_{\ell}}\in{\mathbb{K}}$ for each $\ell\in \mathbb{N}$ as provided for in Step \ref{enum:conv on disj meas sets to WF diff}. We define for each $\ell\in\mathbb{N}$ the projection \[ {\bf{P}}^{\ell}:{\mathcal{P}}(\bar D)\ni \mu\mapsto \sum_{j=0}^{n_{\ell}}\mu(A_{k^{\ell}_j})\delta_{k^{\ell}_j}\in {\mathcal{P}}(\bar D). \] We therefore have \[ \text{W}_1h({\bf{P}}^{\ell}(\mu),\mu)\leq \epsilon_{\ell}+\mu(A^{\ell}_0)\quad\text{for all}\quad\mu\in{\mathcal{P}}(\bar D). \] We metrise ${\mathcal{P}}(D([0,T];{\mathcal{P}}_{\text{W}_1h}({\mathbb{K}})))$ using the Wasserstein-$1$ metric associated to the metric $d_{D([0,T];{\mathcal{P}}_{\text{W}_1h}({\mathbb{K}}))}\wedge 1$, again writing $\text{W}_1h$ for this metric. We write $(\nu^{\ell}_t)_{0\leq t\leq T}$ for a Wright-Fisher superprocess of rate $\Theta$ and initial condition ${\bf{P}}^{\ell}(\nu^0)$. Step \ref{enum:conv on disj meas sets to WF diff} implies that \[ \text{W}_1h({\mathcal{L}}(({\bf{P}}^{\ell}({\mathcal{Y}}^{N}_{Nt}))_{0\leq t\leq T}),{\mathcal{L}}((\nu^{\ell}_t)_{0\leq t\leq T}))\rightarrow 0\quad \text{as}\quad N\rightarrow\infty.
\] Therefore by the triangle inequality we have \begin{equation}\label{eq:triangle inequality for Wass Law of YN and mu} \begin{split} \limsup_{N\rightarrow\infty}\text{W}_1h({\mathcal{L}}(({\mathcal{Y}}^{N}_{Nt})_{0\leq t\leq T}),{\mathcal{L}}((\nu_t)_{0\leq t\leq T}))\leq \limsup_{N\rightarrow\infty}\Big[\text{W}_1h({\mathcal{L}}(({\mathcal{Y}}^{N}_{Nt})_{0\leq t\leq T}),{\mathcal{L}}(({\bf{P}}^{\ell}({\mathcal{Y}}^{N}_{Nt}))_{0\leq t\leq T}))\\+\text{W}_1h({\mathcal{L}}(({\bf{P}}^{\ell}({\mathcal{Y}}^{N}_{Nt}))_{0\leq t\leq T}),{\mathcal{L}}((\nu^{\ell}_t)_{0\leq t\leq T}))\Big] +\text{W}_1h({\mathcal{L}}((\nu^{\ell}_t)_{0\leq t\leq T}),{\mathcal{L}}((\nu_t)_{0\leq t\leq T}))\\\leq 2\epsilon_{\ell} +\limsup_{N\rightarrow\infty}{\mathbb E}[\sup_{0\leq t\leq T}Y^{N,A_0^{\ell}}_{Nt}]+{\mathbb E}[\sup_{0\leq t\leq T}\nu_{t}(A^{\ell}_0)]. \end{split} \end{equation} Using Step \ref{enum:conv on disj meas sets to WF diff} we have \[ \limsup_{N\rightarrow\infty}{\mathbb E}[\sup_{0\leq t\leq T}Y^{N,A_0^{\ell}}_{Nt}]={\mathbb E}[\sup_{0\leq t\leq T}\nu_{t}(A^{\ell}_0)]. \] Since $(\nu_t(A^{\ell}_0))_{0\leq t\leq T}$ is a Wright-Fisher diffusion of rate $\Theta$ and initial condition $\nu^0(A^{\ell}_0)<\epsilon_{\ell}$, \[ {\mathbb E}[\sup_{0\leq t\leq T}\nu_{t}(A^{\ell}_0)]\rightarrow 0\quad\text{as}\quad \ell\rightarrow\infty. \] Therefore taking $\limsup_{\ell\rightarrow\infty}$ of both sides of \eqref{eq:triangle inequality for Wass Law of YN and mu} we obtain \eqref{eq:conv of measure-valued type process in Wasserstein}. \subsubsection*{Step \ref{enum:conv to WF super in Wat}} It is straightforward to see that $({\mathcal{Y}}^N_t)_{0\leq t\leq T}$ has sample paths almost surely contained in $D([0,T];{\mathcal{P}}_{\text{W}_1t}({\mathbb{K}}))$. We recall that $\Psi(u)=(1-u)\vee 0$ is the function used to define the $\text{W}_1t$ metric on Page \pageref{eq:Weak-atomic metric}.
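As an illustrative aside, the pairwise atom functional $\sum_{k\neq \ell}\mu(\{k\})\mu(\{\ell\})\Psi(d(k,\ell)/\epsilon)$, which controls compact containment in the weak atomic topology, is easy to evaluate directly for a finitely supported measure. The sketch below is our own (with $d$ taken as the Euclidean distance on ${\mathbb R}$ and a hypothetical function name); small values indicate that no two distinct atoms of appreciable mass lie within distance of order $\epsilon$ of each other.

```python
def psi(u):
    # Psi(u) = (1 - u) v 0, as used to define the weak atomic metric.
    return max(1.0 - u, 0.0)

def atom_interaction(atoms, eps):
    """Sum over ordered pairs of distinct atoms of
    mass(k) * mass(l) * Psi(d(k, l) / eps), for a purely atomic measure
    given as a dict {location: mass} on the real line."""
    pts = list(atoms.items())
    total = 0.0
    for i, (k, mk) in enumerate(pts):
        for j, (l, ml) in enumerate(pts):
            if i != j:
                total += mk * ml * psi(abs(k - l) / eps)
    return total
```

For instance, two atoms of mass $1/2$ at distance $1$ contribute nothing when $\epsilon=0.1$, whereas the same atoms at distance $0.05$ contribute $2\cdot\frac{1}{4}\cdot\Psi(\frac{1}{2})=\frac{1}{4}$.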
Having established \eqref{eq:conv of measure-valued type process in Wasserstein}, Theorem \ref{theo:compact containment condition} gives that in order to establish \eqref{eq:conv of measure-valued process to FV in Weak atomic metric}, it suffices to establish the following condition. \begin{cond} For every $\delta>0$, there exists $\epsilon>0$ such that \begin{equation}\label{eq:sum of prod of nearby Ys} \limsup_{N\rightarrow\infty}{\mathbb P}\Big(\sup_{0\leq t\leq T}\sum_{\substack{k,\ell\in {\mathbb{K}}\\ k\neq \ell}}Y^{N,\{k\}}_{Nt}Y^{N,\{\ell\}}_{Nt}\Psi\Big(\frac{d(k,\ell)}{\epsilon}\Big)\leq \delta\Big)\geq 1-\delta. \end{equation} \label{cond:condition equivalent to weak atomic compact containment} \end{cond} Note that the above sum is well-defined as the terms are non-zero only for $k,\ell\in \text{supp}({\mathcal{Y}}^{N}_0)$, which is a finite (random) set. We calculate using Parts \ref{enum:bound on cov of YE YF}, \ref{enum:Y O FV plus U MG} and \ref{enum:Thm 8.2 true for random sets} of Theorem \ref{theo:characterisation of Y} that \[ \begin{split} d(Y^{N,\{k\}}_{t}Y^{N,\{\ell\}}_{t})=Y^{N,\{k\}}_{t}dY^{N,\{\ell\}}_{t}+Y^{N,\{\ell\}}_{t}dY^{N,\{k\}}_{t}+d[Y^{N,\{k\}},Y^{N,\{\ell\}}]_t\\ =Y^{N,\{k\}}_{t}\Big[d{\mathcal{O}}^{{\text{FV}}}_t\Big(\frac{Y^{N,\{\ell\}}}{N}\Big)+d{\mathcal{O}}^{{\text{MG}}}_t\Big(\frac{Y^{N,\{\ell\}}}{N}\Big)\Big]+Y^{N,\{\ell\}}_{t}\Big[d{\mathcal{O}}^{{\text{FV}}}_t\Big(\frac{Y^{N,\{k\}}}{N}\Big)+d{\mathcal{O}}^{{\text{MG}}}_t\Big(\frac{Y^{N,\{k\}}}{N}\Big)\Big]\\+d{\mathcal{O}}^{{\text{FV}}}_t\Big(\frac{Y^{N,\{k\}}Y^{N,\{\ell\}}}{N}\Big)=d{\mathcal{O}}^{{\text{FV}}}_t\Big(\frac{Y^{N,\{k\}}Y^{N,\{\ell\}}}{N}\Big)+d{\mathcal{O}}^{{\text{MG}}}_t\Big(\frac{Y^{N,\{k\}}Y^{N,\{\ell\}}}{N}\Big), \end{split} \] uniformly over all random $k,\ell\in\text{supp}({\mathcal{Y}}^{N}_0)$.
Thus \[ \sum_{k,\ell\in\mathbb{K}} Y^{N,\{k\}}_{Nt}Y^{N,\{\ell\}}_{Nt}={\mathcal{O}}^{{\text{FV}}}_t\Big(\sum_{k,\ell\in\mathbb{K}} Y^{N,\{k\}}_{N\cdot}Y^{N,\{\ell\}}_{N\cdot}\Big)+{\mathcal{O}}^{{\text{MG}}}_t\Big(\sum_{k,\ell\in\mathbb{K}} Y^{N,\{k\}}_{N\cdot}Y^{N,\{\ell\}}_{N\cdot}\Big). \] Therefore, using Gronwall's inequality, there exists a uniform $C<\infty$ such that \[ e^{-Ct}\sum_{\substack{k,\ell\in {\mathbb{K}}\\ k\neq \ell}}Y^{N,\{k\}}_{Nt}Y^{N,\{\ell\}}_{Nt}\Psi\Big(\frac{d(k,\ell)}{\epsilon}\Big) \] is a supermartingale for all $N$ large enough. Therefore \[ \begin{split} {\mathbb P}\Big(\sup_{0\leq t\leq T}\sum_{\substack{k,\ell\in {\mathbb{K}}\\ k\neq \ell}}Y^{N,\{k\}}_{Nt}Y^{N,\{\ell\}}_{Nt}\Psi\Big(\frac{d(k,\ell)}{\epsilon}\Big)\leq \delta\Big)\\ \geq {\mathbb P}\Big(\sup_{0\leq t\leq T}e^{-Ct}\sum_{\substack{k,\ell\in {\mathbb{K}}\\ k\neq \ell}}Y^{N,\{k\}}_{Nt}Y^{N,\{\ell\}}_{Nt}\Psi\Big(\frac{d(k,\ell)}{\epsilon}\Big)\leq e^{-CT}\delta\Big)\\ \geq 1-\frac{1}{e^{-CT}\delta}{\mathbb E}\Big[\sum_{\substack{k,\ell\in {\mathbb{K}}\\ k\neq \ell}}Y^{N,\{k\}}_{0}Y^{N,\{\ell\}}_{0}\Psi\Big(\frac{d(k,\ell)}{\epsilon}\Big)\Big]. \end{split} \] We have assumed that the initial conditions converge in the weak atomic topology, so that Lemma \ref{lem:relatively compact family of measures in weak atomic topology} gives \[ \sup_N{\mathbb E}\Big[\sum_{\substack{k,\ell\in {\mathbb{K}}\\ k\neq \ell}}Y^{N,\{k\}}_{0}Y^{N,\{\ell\}}_{0}\Psi\Big(\frac{d(k,\ell)}{\epsilon}\Big)\Big]\rightarrow 0\quad\text{as}\quad \epsilon\rightarrow 0. \] We have therefore verified Condition \ref{cond:condition equivalent to weak atomic compact containment} and hence established \eqref{eq:conv of measure-valued process to FV in Weak atomic metric}.
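As an aside, the last display rests on the maximal inequality for nonnegative supermartingales, ${\mathbb P}(\sup_{t\leq T}S_t\geq a)\leq {\mathbb E}[S_0]/a$ (Ville's inequality). The following purely illustrative numerical check, with an exponential martingale of our own choosing standing in for the supermartingale above, confirms the bound empirically.

```python
import numpy as np

# Ville's maximal inequality: for a nonnegative supermartingale (S_t),
# P( sup_{t <= T} S_t >= a ) <= E[S_0] / a.
# Empirical check with the exponential martingale S_t = exp(W_t - t/2),
# for which S_0 = 1, so the bound at level a = 3 is 1/3.
rng = np.random.default_rng(1)
n_paths, n_steps, dt, a = 20000, 200, 0.01, 3.0
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
t = dt * np.arange(1, n_steps + 1)
S = np.exp(W - t / 2)                        # nonnegative martingale
S = np.hstack([np.ones((n_paths, 1)), S])    # prepend S_0 = 1
prob = (S.max(axis=1) >= a).mean()           # empirical P(sup S >= a)
```

The empirical exceedance probability stays below the theoretical bound $1/a$, exactly as the proof above requires of the exponentially discounted atom sum.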
\qed \subsection{Proof of Part \ref{enum:convergence of empirical measure to FV} of Theorem \ref{theo:Convergence to FV diffusion}} We take $T>\sup_{N}\frac{t_N}{N}$. Since $(\nu_t)_{0\leq t\leq T}\in C([0,T];{\mathcal{P}}({\mathbb{K}}))$ (by Part \ref{enum:MVWF cts} of Proposition \ref{prop:basic facts WF superprocess}) we have by \eqref{eq:conv of measure-valued process to FV in Weak atomic metric} that \[ {\mathcal{Y}}^N_{t_N}\rightarrow \nu_t\quad \text{in}\quad {\mathcal{P}}({\mathbb{K}})\quad \text{in distribution.} \] Therefore \eqref{eq:measure of mNE} implies that \[ \chi^{N}_{t_N}\rightarrow \nu_t\quad \text{in}\quad {\mathcal{P}}({\mathbb{K}})\quad \text{in distribution.} \] Since ${\mathcal{Y}}^N$ satisfies \eqref{eq:sum of prod of nearby Ys} and there exists $C<\infty$ such that $\chi^{N}_t\leq C{\mathcal{Y}}^N_t$ for all $N$, for every $\delta>0$ there exists $\epsilon>0$ such that \[ \inf_{N}{\mathbb P}\Big(\sup_{0\leq t\leq T}\sum_{\substack{k,\ell\in {\mathbb{K}}\\ k\neq \ell}}\chi^{N}_{Nt}(\{k\})\chi^{N}_{Nt}(\{\ell\})\Psi\Big(\frac{d(k,\ell)}{\epsilon}\Big)\leq \delta\Big)\geq 1-\delta. \] Therefore \cite[Lemma 2.9]{Ethier1994} implies that $\{{\mathcal{L}}(\chi^{N}_{t_N})\}$ is tight in ${\mathcal{P}}({\mathcal{P}}_{\text{W}_1t}({\mathbb{K}}))$, so that we have \eqref{eq:convergence of empirical measure to FV}.\qed \section{Proof of Theorem \ref{theo:convergence of the fixed colour}}\label{section:proof of convergence of the fixed colour} We fix $\epsilon>0$ and take $T<\infty$ such that with probability at least $1-\epsilon$, $\nu_T$ has an atom of mass $1$ (which we can do by Part \ref{enum:fixation times almost surely finite} of Proposition \ref{prop:basic facts WF superprocess}).
We define $\beta^N_T$ and $\beta_T$ to be the largest atom of ${\mathcal{Y}}^N_T$ (respectively $\nu_T$), and write $\alpha^N_T$ (respectively $\alpha_T$) for the mass of this atom. In both cases, if there is more than one such atom, we choose from amongst the largest independently and uniformly at random. If $\nu_T$ has an atom of mass $1$, then this atom must be $\iota$, so that $\beta_T=\iota$. Therefore we have \begin{equation}\label{eq:k=iota with large prob} {\mathbb P}(\beta_T\neq \iota)\leq \epsilon. \end{equation} We shall prove the following proposition. \begin{prop}\label{prop:N-particle atom and mass close to that of FV} For every $\delta>0$, \[ \limsup_{N\rightarrow\infty}{\mathbb P}(\alpha^N_T< 1-\delta)\leq \epsilon\quad\text{and}\quad \limsup_{N\rightarrow\infty}\text{W}_1h({\mathcal{L}}(\beta^N_T),{\mathcal{L}}(\beta_T))\leq \epsilon. \] \end{prop} We then prove the following lemma. \begin{lem}\label{lem:semimg hitting 0 lemma} Consider a sequence of $[0,1]$-valued semimartingales $Z^N_t$ of the form \begin{equation}\label{eq:SDE for ZN OFV UMG} Z^N_t={\mathcal{O}}^{{\text{FV}}}_t(Z^N)\cap{\mathcal{O}}^{\Delta}_t\Big(\frac{1}{N}\Big)+{\mathcal{U}}^{{\text{MG}}}_t(Z^N(1-Z^N)^2)\cap{\mathcal{O}}^{\Delta}_t\Big(\frac{1}{N}\Big). \end{equation} Then we have \begin{equation}\label{eq:hitting time of semimg with 0} \limsup_{N\rightarrow\infty}{\mathbb P}(Z^N_t\text{ does not hit 0})\leq 2\limsup_{\delta\rightarrow 0}\limsup_{N\rightarrow\infty}{\mathbb P}(Z^N_0>\delta). \end{equation} \end{lem} We now apply Lemma \ref{lem:semimg hitting 0 lemma} to the semimartingales \[ \tilde{Y}^N_t:=Y^{N,\{\beta^N_T\}^c}_{T+t}.
\] Using Parts \ref{enum:Y O FV plus U MG} and \ref{enum:Thm 8.2 true for random sets} of Theorem \ref{theo:characterisation of Y}, $\tilde{Y}^N_t$ satisfies \eqref{eq:SDE for ZN OFV UMG}, so that \[ \begin{split} \limsup_{N\rightarrow\infty}{\mathbb P}(\iota^N\neq \beta^N_T)=\limsup_{N\rightarrow\infty}{\mathbb P}(\tilde{Y}^N_t\text{ does not hit 0})\\ \leq 2\limsup_{\delta\rightarrow 0}\limsup_{N\rightarrow\infty}{\mathbb P}(\tilde{Y}^N_0>\delta)\leq 2\limsup_{\delta\rightarrow 0}\limsup_{N\rightarrow\infty}{\mathbb P}(\alpha^N_T< 1-\delta)\leq 2\epsilon. \end{split} \] Therefore we have \[ \begin{split} \limsup_{N\rightarrow\infty}\text{W}_1h({\mathcal{L}}(\iota^N),{\mathcal{L}}(\iota))\leq \limsup_{N\rightarrow\infty}{\mathbb P}(\iota^N\neq \beta^N_T) + \limsup_{N\rightarrow\infty}\text{W}_1h({\mathcal{L}}(\beta^N_T),{\mathcal{L}}(\beta_T))+{\mathbb P}(\beta_T\neq \iota) \leq 4\epsilon. \end{split} \] Since $\epsilon>0$ is arbitrary, we are done. \qed \subsection{Proof of Proposition \ref{prop:N-particle atom and mass close to that of FV}} Using Theorem \ref{theo:Convergence to FV diffusion} and Skorokhod's representation theorem, we may support ${\mathcal{Y}}^N_T$ and $\nu_T$ on a common probability space $(\Omega',{\mathcal{G}}',{\mathbb P}')$ on which \[ {\mathcal{Y}}^N_T\rightarrow \nu_T\quad\text{in}\quad \text{W}_1t\quad{\mathbb P}'\text{-almost surely.} \] If $\nu_T=\delta_{\beta_T}$, then $\beta^N_T\rightarrow \beta_T$ and $\alpha^N_T\rightarrow 1$ ${\mathbb P}'$-almost surely. Thus $\alpha^N_T$ converges to $1$ and $\beta^N_T$ converges to $\beta_T$ as $N\rightarrow\infty$ with ${\mathbb P}'$-probability at least $1-\epsilon$.
Therefore we have \[ \begin{split} \limsup_{N\rightarrow\infty}{\mathbb P}(\alpha^N_T< 1-\delta)=\limsup_{N\rightarrow\infty}{\mathbb P}'(\alpha^N_T <1-\delta)\leq {\mathbb P}'(\alpha^N_T\nrightarrow 1)\leq \epsilon,\\ \limsup_{N\rightarrow\infty}\text{W}_1h({\mathcal{L}}(\beta^N_T),{\mathcal{L}}(\beta_T))\leq \limsup_{\delta'\rightarrow 0}\limsup_{N\rightarrow\infty}\Big[\delta'+{\mathbb P}'\Big(d(\beta^N_T,\beta_T)\geq \delta'\Big)\Big] \leq \limsup_{N\rightarrow\infty}{\mathbb P}'(\beta^N_T\nrightarrow \beta_T)\leq \epsilon. \end{split} \] \qed \subsection{Proof of Lemma \ref{lem:semimg hitting 0 lemma}} We define for $c>0$ the following time-change and stopping time, \[ \begin{split} t(u)=\int_0^{u}\frac{1}{Z^N_{t(v)}}dv,\quad 0\leq u\leq \tau^c:=\inf\{u'>0:Z^N_{t(u')}\in \{0\}\cup [c,1]\},\quad S^N_u:=Z^N_{t(u)},\quad 0\leq u\leq \tau^c. \end{split} \] We fix $h,\delta,k>0$, to be determined, such that $k\delta<\frac{1}{2}$, so that taking $c=k\delta$ we have \[ S^N_u=F^N_u+M^N_u,\quad 0\leq u\leq \tau^{k\delta}, \] where \[ F^N_u= {\mathcal{O}}^{{\text{FV}}}_u(1)\cap{\mathcal{O}}^{\Delta}_u\Big(\frac{1}{N}\Big),\quad M^N_u={\mathcal{U}}^{{\text{MG}}}_u(1)\cap{\mathcal{O}}^{\Delta}_u\Big(\frac{1}{N}\Big). \] We now define the stopping time \[ \tau^F=\inf\{u>0:V_u(F^N)\geq \delta\}.
\] We have for all $N$ large enough that \[ \begin{split} V_{\tau^{k\delta}\wedge \tau^F\wedge h}(F^N)\leq V_{\tau^{k\delta}\wedge \tau^F\wedge h-}(F^N)+\lvert \Delta F^N_{\tau^{k\delta}\wedge \tau^F\wedge h}\rvert \leq \delta+{\mathcal{O}}\Big(\frac{1}{N}\Big)\leq 2\delta,\\ \lvert S^{N}_{\tau^{k\delta}\wedge \tau^F\wedge h}-S^{N}_0\rvert \leq\lvert S^{N}_{\tau^{k\delta}\wedge \tau^F\wedge h-}-S^{N}_0\rvert+\lvert \Delta S^{N}_{\tau^{k\delta}\wedge \tau^F\wedge h}\rvert \leq k\delta+{\mathcal{O}}\Big(\frac{1}{N}\Big)\leq (k+1)\delta. \end{split} \] Moreover for some $C<\infty$ we have \begin{equation}\label{eq:FV of FN} {\mathbb P}(V_{\tau^{k\delta}\wedge \tau^F\wedge h}(F^N)\geq \delta)\leq \frac{{\mathbb E}[V_{\tau^{k\delta}\wedge \tau^F\wedge h}(F^N)]}{\delta}\leq \frac{Ch}{\delta}. \end{equation} From this we conclude that for all $N$ large enough we have \begin{align} \lvert M^{N}_{\tau^{k\delta}\wedge \tau^F\wedge h}-M^{N}_0\rvert \leq \lvert S^{N}_{\tau^{k\delta}\wedge \tau^F\wedge h}-S^{N}_0\rvert +V_{\tau^{k\delta}\wedge\tau^F\wedge h}(F^N)\leq (k+3)\delta,\label{eq:bound on absolute value of Mart after time change}\\ -[(M^{N}_{\tau^{k\delta}\wedge \tau^F\wedge h}-M^{N}_0)\wedge 0]\leq -[(S^{N}_{\tau^{k\delta}\wedge \tau^F\wedge h}-S^{N}_0)\wedge 0]+2\delta,\label{eq:bound on minimum of Mart after time change}\\ (M^{N}_{\tau^{k\delta}\wedge \tau^F\wedge h}-M^{N}_0)\vee 0 \geq (S^{N}_{\tau^{k\delta}\wedge \tau^F\wedge h}-S^{N}_0)\vee 0-2\delta.
\label{eq:bound on maximum of Mart after time change} \end{align} Using \eqref{eq:bound on absolute value of Mart after time change} and the fact that $M^{N}={\mathcal{U}}^{{\text{MG}}}(1)$ we have \[ {\mathbb E}[\tau^F\wedge \tau^{k\delta}\wedge h]\leq C{\mathbb E}[[M^{N}]_{\tau^F\wedge \tau^{k\delta}\wedge h}]= C\text{Var}(M^{N}_{\tau^F\wedge \tau^{k\delta}\wedge h}-M^{N}_0)\leq C(k+3)^2\delta^2 \] for some $C<\infty$, so that \begin{equation}\label{eq:estimate on prob of not hitting bdy over h time horizon} {\mathbb P}(\tau^F\wedge \tau^{k\delta}\wedge h=h)\leq \frac{C(k+3)^2\delta^2}{h}. \end{equation} Note that if $S^{N}_0\geq k\delta$ then $\tau^{k\delta}=0$, so that $S^{N}_{\tau^{k\delta}\wedge \tau^F\wedge h}=S^{N}_0$. Therefore using \eqref{eq:bound on minimum of Mart after time change} we have \begin{equation}\label{eq:expE of -Mwedge 0} {\mathbb E}\Big[-[(M^{N}_{\tau^F\wedge \tau^{k\delta}\wedge h}-M^{N}_0)\wedge 0]\Big]\leq {\mathbb E}[S^{N}_0\wedge k\delta]+2\delta\leq 3\delta+k\delta{\mathbb P}(S^{N}_0>\delta). \end{equation} Thus using \eqref{eq:bound on maximum of Mart after time change}, \eqref{eq:expE of -Mwedge 0} and the fact that $M^N_u$ is a martingale, we have \begin{equation}\label{eq:prob of S at least kdelta} \begin{split} {\mathbb P}(S^{N}_{\tau^F\wedge \tau^{k\delta}\wedge h}\geq k\delta)\leq {\mathbb P}[(S^{N}_{\tau^F\wedge \tau^{k\delta}\wedge h}-S^{N}_0)\vee 0-2\delta \geq (k-3)\delta]+{\mathbb P}(S^{N}_0>\delta) \\\leq {\mathbb P}[M^{N}_{\tau^F\wedge \tau^{k\delta}\wedge h}-M^{N}_0\geq (k-3)\delta]+{\mathbb P}(S^{N}_0>\delta)\leq \frac{{\mathbb E}[(M^{N}_{\tau^F\wedge \tau^{k\delta}\wedge h}-M^{N}_0)\vee 0]}{(k-3)\delta}+{\mathbb P}(S^{N}_0>\delta)\\ =\frac{{\mathbb E}\Big[-[(M^{N}_{\tau^F\wedge \tau^{k\delta}\wedge h}-M^{N}_0)\wedge 0]\Big]}{(k-3)\delta}+{\mathbb P}(S^{N}_0>\delta)\leq \frac{3}{k-3}+\frac{2k-3}{k-3}{\mathbb P}(S^{N}_0>\delta)
\end{split} \end{equation} for all $N$ large enough. We may therefore calculate using \eqref{eq:FV of FN}, \eqref{eq:estimate on prob of not hitting bdy over h time horizon} and \eqref{eq:prob of S at least kdelta} that \[ \begin{split} {\mathbb P}(S^{N}_{\tau^F\wedge \tau^{k\delta}\wedge h}=0)={\mathbb P}(\tau^{k\delta}<h\wedge \tau^F,\; S^{N}_{\tau^F\wedge \tau^{k\delta}\wedge h}<k\delta)\\\geq 1-{\mathbb P}(S^{N}_{\tau^F\wedge \tau^{k\delta}\wedge h}\geq k\delta) -{\mathbb P}(\tau^F\wedge \tau^{k\delta}\wedge h=h)-{\mathbb P}(V_{\tau^F\wedge \tau^{k\delta}\wedge h}(F^N)\geq \delta) \\\geq 1- \frac{Ch}{\delta}-\frac{3}{k-3}-\frac{C(k+3)^2\delta^2}{h}-\frac{2k-3}{k-3}{\mathbb P}(S^{N}_0>\delta). \end{split} \] We now choose $h=k^3\delta^2$ and $k>3$, so that \[ \begin{split} {\mathbb P}(S^N_{\tau^F\wedge \tau^{k\delta}\wedge h}\neq 0) \leq \frac{3}{k-3}+\frac{2k-3}{k-3}{\mathbb P}(S^{N}_0>\delta)+\frac{4C}{k}+Ck^3\delta. \end{split} \] Taking $\limsup_{k\rightarrow\infty}\limsup_{\delta\rightarrow 0}\limsup_{N\rightarrow\infty}$ of both sides we have \[ \begin{split} \limsup_{N\rightarrow\infty}{\mathbb P}(Z^N_t\text{ does not hit 0})\leq \limsup_{k\rightarrow\infty}\limsup_{\delta\rightarrow 0}\limsup_{N\rightarrow\infty}{\mathbb P}(S^N_{\tau^F\wedge \tau^{k\delta}\wedge h}\neq 0) \\\leq 2\limsup_{\delta\rightarrow 0}\limsup_{N\rightarrow\infty}{\mathbb P}(S^{N}_0>\delta). \end{split} \] \qed \section{Proof of Theorem \ref{theo:convergence of augmented DHPs}}\label{section:convergence of augmented DHPs} We refer the reader back to Section \ref{subsection: Branching processes intro} for the definitions of the branching processes and related objects used in this section. We define, for given $({\mathcal{T}},\Pi)$ and for the leaf $v=(v_1,\ldots,v_n)\in\Lambda({\mathcal{T}})$, the $v$-primary marked tree $({\mathcal{T}}_{v\text{ primary}},\Pi_{v\text{ primary}})$ as follows.
We write \[ v^m=(v_1,\ldots,v_m)\quad\text{for}\quad 0\leq m\leq n\quad \text{and}\quad n(u)=\sup\{m:v^m\leq u\}\quad\text{for} \quad u\in \Pi, \] so that there exists a unique \[ w(u)=(w_1(u),\ldots,w_m(u))\quad\text{such that}\quad u=v^{n(u)}w(u). \] We then write \[ \tilde{w}(u)=\begin{cases} (2,w_2(u),\ldots,w_m(u)),\quad w(u)\neq \emptyset,\\ \emptyset,\quad w(u)=\emptyset, \end{cases} \] and define \begin{equation}\label{eq:v-primary tree} \Pi_{v\text{ primary}}:=\{e_{n(u)}\tilde{w}(u):u\in \Pi\},\quad {\mathcal{T}}_{v\text{ primary}}(e_{n(u)}\tilde{w}(u)):={\mathcal{T}}(u),\quad u\in \Pi. \end{equation} This is the marked tree we obtain by relabelling the underlying tree so that $v$ is the primary path. \begin{figure}[h] \begin{center} \begin{tikzpicture}[scale=0.7] \draw[-,snake=snake,blue](-4.5,0) -- ++(1,1); \draw[-,snake=snake,blue](-3.5,1) -- ++(-1,1); \draw[-,snake=snake,red](-3.5,1) -- ++(0.75,0.75); \draw[->,snake=snake,red](-2.75,1.75) -- ++(0.75,0.75); \draw[-,snake=snake,red](-4.5,2) -- ++(1,1); \draw[-,snake=snake,blue](-3.5,3) -- ++(-1,1); \draw[-,snake=snake,red](-3.5,3) -- ++(0.75,0.75); \draw[->,snake=snake,red](-2.75,3.75) -- ++(0.75,0.75); \draw[-,snake=snake,red](-4.5,4) -- ++(1,1); \draw[->,snake=snake,red](-3.5,5) -- ++(1,1); \foreach \y [count=\n]in {2,4}{ \draw[-,snake=snake,blue](-4.5,\y) -- ++(-0.75,0.75); \draw[->,snake=snake,blue](-5.25,\y+0.75) -- ++(-0.75,0.75); } \draw[->,snake=snake,blue](-3.5,5) -- ++(-1,1); \draw[-,thick,black](-0.5,0) -- ++(0,6); \draw[-,thick,black](-8.5,0) -- ++(0,6); \draw[->,snake=snake,blue](-2.75,1.75) -- ++(-0.5,0.5); \draw[->,snake=snake,ultra thick,blue](-2.75,3.75) -- ++(-0.5,0.5); \draw[->,snake=snake,red](-5.25,4.75) -- ++(0.5,0.5); \draw[->,snake=snake,red](-5.25,2.75) -- ++(0.5,0.5); \foreach \y [count=\n]in {0,2}{ \draw[-,snake=snake,ultra thick,blue](4.5,\y) -- ++(1,1); } \draw[->,snake=snake,red](6.25,1.75) -- ++(0.75,0.75);
\draw[-,snake=snake,red](5.5,1) -- ++(0.75,0.75); \draw[-,snake=snake,ultra thick,blue](5.5,3) -- ++(0.75,0.75); \draw[->,snake=snake,red](6.25,3.75) -- ++(0.75,0.75); \draw[-,snake=snake,ultra thick,blue](5.5,1) -- ++(-1,1); \draw[-,snake=snake,red](5.5,3) -- ++(-1,1); \draw[-,snake=snake,red](4.5,4) -- ++(1,1); \draw[->,snake=snake,red](5.5,5) -- ++(1,1); \draw[-,snake=snake,red](4.5,2) -- ++(-0.75,0.75); \draw[->,snake=snake,blue](3.75,2.75) -- ++(-0.75,0.75); \draw[-,snake=snake,blue](4.5,4) -- ++(-0.75,0.75); \draw[->,snake=snake,blue](3.75,4.75) -- ++(-0.75,0.75); \draw[->,snake=snake,blue](5.5,5) -- ++(-1,1); \draw[-,thick,black](8.5,0) -- ++(0,6); \draw[-,thick,black](0.5,0) -- ++(0,6); \draw[->,snake=snake,blue](6.25,1.75) -- ++(-0.5,0.5); \draw[->,snake=snake,ultra thick,blue](6.25,3.75) -- ++(-0.5,0.5); \draw[->,snake=snake,red](3.75,4.75) -- ++(0.5,0.5); \draw[->,snake=snake,red](3.75,2.75) -- ++(0.5,0.5); \end{tikzpicture} \caption[The $v$-primary tree]{On the left the original marked tree $({\mathcal{T}},\Pi)$ is shown, whilst on the right the $v$-primary tree $({\mathcal{T}}_{v\text{ primary}},\Pi_{v\text{ primary}})$ is shown. In both diagrams the first child at each branch is in blue and the second is in red, so that in each diagram the continuous blue path shows the primary path. On the left, $v$ is drawn thick, whilst on the right the new primary path is drawn thick.} \label{fig:v-rooter tree example} \end{center} \end{figure} We recall that the random functions $\varrho^N_{s,t}$ ($0\leq s\leq t$) were defined in \eqref{eq:functions PhiN}, and correspond to tracing the continuous ancestral path backwards from the particle with index $i$ at time $t$ to its ancestor at time $s$. We consider the random trees ${\mathcal{U}}^{N,j,t}$ defined in Definition \ref{defin:branching tree particle i at time t} with $t=0$ and make the following observation.
For each $1\leq i\leq N$ there exists a unique index $j$ $(=\varrho^N_{0,T+h}(i))$ and leaf $v\in \Lambda({\mathcal{U}}^{N,j,0})$ such that the $v$-primary tree ${\mathcal{U}}^{N,j,0}_{v\text{ primary}}$ is equal to the augmented DHP ${\bf{H}}^{N,i,T+h}$. Moreover every such $v$-primary tree whose primary path survives (${\mathcal{U}}^{N,j,0}_{v\text{ primary}}\in{\bf{C}}^{\ast}_T$) corresponds to an augmented DHP. Therefore we have \begin{equation}\label{eq:DHPs equal to v-primary trees} \frac{1}{N}\sum_{i=1}^N\delta_{{\bf{H}}^{N,i,T}}=\frac{1}{N}\sum_{i=1}^N\sum_{v\in \Lambda({\mathcal{U}}^{N,i,0})}{\mathbbm{1}}({\mathcal{U}}^{N,i,0}_{v\text{ primary}}\in{\bf{C}}^{\ast}_T)\delta_{{\mathcal{U}}^{N,i,0}_{v\text{ primary}}}. \end{equation} In this proof we will make use of the space ${\mathcal{M}}$ of finite, non-negative Borel measures, equipped with the topology of weak convergence of measures. We define the map \begin{equation}\label{eq:map P hat} \hat{P}:{\bf{C}}_T\ni {\mathcal{T}}\mapsto \sum_{v\in \Lambda ({\mathcal{T}})}{\mathbbm{1}}({\mathcal{T}}_{v\text{ primary}}\in {\bf{C}}^{\ast}_T)\delta_{{\mathcal{T}}_{v\text{ primary}}}\in {\mathcal{M}}({\bf{C}}^{\ast}_T). \end{equation} We then define for each $1\leq i\leq N$ the random measures \begin{equation}\label{eq:random measures XiNi} \xi^N_i:=\hat{P}({\mathcal{U}}^{N,i,0}). \end{equation} We note that $\frac{1}{N}\sum_{i=1}^N\xi^N_i$ must be a probability measure by \eqref{eq:DHPs equal to v-primary trees}. Our goal is to establish that \begin{equation}\label{eq:conv of sum xiNi in Wass in prob} \text{W}_1\big(\frac{1}{N}\sum_{i=1}^N\xi^N_i, {\mathcal{L}}({\bf{X}}^{T,\mu})\big)\rightarrow 0\quad\text{as}\quad N\rightarrow\infty, \end{equation} so that by \eqref{eq:DHPs equal to v-primary trees} we have Theorem \ref{theo:convergence of augmented DHPs}.
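The relabeling \eqref{eq:v-primary tree} behind this correspondence is pure bookkeeping on vertex labels. As an illustration only (the function names are ours; vertices are encoded as tuples over $\{1,2\}$, and $e_k$ denotes the all-ones tuple of length $k$), it may be sketched as:

```python
def is_prefix(p, u):
    # p <= u in the tree order: p is an ancestor of (or equal to) u.
    return len(p) <= len(u) and u[:len(p)] == p

def v_primary_label(u, v):
    # n(u) = sup{m : v^m <= u}, the depth to which u follows the leaf v.
    n_u = max(m for m in range(len(v) + 1) if is_prefix(v[:m], u))
    w = u[n_u:]                            # u = v^{n(u)} w(u)
    w_tilde = (2,) + w[1:] if w else ()    # first step off v becomes child 2
    return (1,) * n_u + w_tilde            # e_{n(u)} followed by w~(u)
```

For the leaf $v=(2,1)$, the leaf itself is relabeled $(1,1)$, the all-ones primary path, while each vertex branching off $v$ receives label $2$ at the point where it leaves the path.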
Our strategy will be to show that \begin{equation}\label{eq:conv of sum of xiNi,f} \langle\frac{1}{N}\sum_{i=1}^N\xi^N_i,f\rangle \rightarrow \langle {\mathcal{L}}({\bf{X}}^{T,\mu}),f\rangle\quad\text{in probability as}\quad N\rightarrow\infty,\quad \text{for all}\quad f\in C_b({\bf{C}}^{\ast}_T), \end{equation} by applying the following lemma, which is straightforward to prove. \begin{lem} Let $\{\gamma^{N}_k:1\leq k\leq N\in \mathbb{N}\}$ be a triangular array of uniformly integrable random variables, and let $S_N:=\sum_{k\leq N}\gamma^{N}_k$. We further assume that $(\gamma^N_1,\ldots,\gamma^N_N)$ is exchangeable for each $N$, and that $(\gamma^{N}_1,\gamma^{N}_2)\rightarrow (\gamma_1,\gamma_2)$ in distribution as $N\rightarrow \infty$, for a pair of independent random variables $(\gamma_1,\gamma_2)$. Then we have $\frac{S_N}{N}\rightarrow {\mathbb E}[\gamma_1]$ in probability. \label{lem:weak convergence lemma Multi-Colour} \end{lem} Using Proposition \ref{prop:convergence in W in prob equivalent to weakly prob}, this implies \eqref{eq:conv of sum xiNi in Wass in prob} and hence Theorem \ref{theo:convergence of augmented DHPs}. We proceed as follows. \begin{enumerate} \item\label{enum:random permutation step} We apply a random permutation (independently and uniformly selected from the symmetric group) to the indices for each $N$, so that the particle system $\vec{X}^N_t$ is exchangeable.
Since we have assumed that $\frac{1}{N}\sum_{i=1}^N\delta_{X^{N,i}_0}\rightarrow \mu$ weakly in probability, we have that \begin{equation}\label{eq:conv in dist of XN1 XN2} (X^{N,1}_0,X^{N,2}_0)\overset{d}{\rightarrow} \mu\otimes \mu\quad\text{as}\quad N\rightarrow\infty. \end{equation} \item\label{enum:uniform integrability of xiNi(A)} We prove that $\{\lvert \Pi({\mathcal{U}}^{N,1,0})\rvert: N\in\mathbb{N}\}$, $\{\xi^{N}_1({{\bf{C}}}^{\ast}_T):N\in{\mathbb{N}}\}$ and $\{\zeta_{x,t}({{\bf{C}}}^{\ast}_T):x\in\bar D,\;t\in [0,T]\}$ are uniformly integrable. \item \label{enum:coupling tree step} Using Step \ref{enum:uniform integrability of xiNi(A)}, we couple $(\vec{X}^N,{\mathcal{U}}^{N,1,0},{\mathcal{U}}^{N,2,0})$ with a pair of marked trees $({\mathcal{V}}^{N,1},{\mathcal{V}}^{N,2})$ such that, conditional on $(X^{N,1}_0,X^{N,2}_0)$, ${\mathcal{V}}^{N,1}$ and ${\mathcal{V}}^{N,2}$ are independent with distribution ${\mathcal{V}}^{N,i}\overset{d}{=} {\mathcal{T}}^{X^{N,i}_0,0}$ $(i=1,2)$, and such that we have \begin{equation}\label{eq:prob of U and V coupled trees differing} {\mathbb P}({\mathcal{V}}^{N,i}={\mathcal{U}}^{N,i,0})\rightarrow 1\quad\text{as}\quad N\rightarrow\infty\quad \text{for}\quad i=1,2. \end{equation} \item\label{enum:sum of emp meas of v-primary paths} We take independent $u^i\sim \mu$ for $i=1,2$. Conditional on the outcome of $u^i$, we independently take the tree ${\mathcal{T}}^{x,0}$ with $x=u^i$, obtaining the tree ${\mathcal{T}}^i$ and the measure $\zeta^i:=\hat{P}({\mathcal{T}}^i)\in {\mathcal{M}}({\bf{C}}_T^{\ast})$ ($i=1,2$). Note in particular that ${\mathcal{T}}^1$ and ${\mathcal{T}}^2$ are independent and identically distributed (as are $\zeta^1$ and $\zeta^2$ by extension). We prove that \begin{equation}\label{eq:expected value of zetachi} {\mathbb E}[\zeta^1(\cdot)]={\mathcal{L}}({\bf{X}}^{T,\mu})(\cdot).
\end{equation} \item\label{enum:convergence of (UN10,UN20)} We use \eqref{eq:conv in dist of XN1 XN2} and the coupling of Step \ref{enum:coupling tree step} to prove that \begin{equation}\label{eq:convergence of (UN10,UN20)} ({\mathcal{U}}^{N,1,0},{\mathcal{U}}^{N,2,0})\rightarrow ({\mathcal{T}}^1,{\mathcal{T}}^2)\quad\text{in distribution as}\quad N\rightarrow\infty. \end{equation} \item\label{enum:cty of hatP map} We prove that $\hat{P}:{\bf{C}}_T\rightarrow {\mathcal{M}}({\bf{C}}_T^{\ast})$ is continuous. \item\label{enum:convergence of xiN1 xiN2(A)} We define for $f\in C_b({\bf{C}}^{\ast}_T)$ the continuous function \begin{equation} F^f:{\mathcal{M}}({\bf{C}}_T^{\ast})\ni \nu\mapsto \langle \nu,f\rangle. \end{equation} Thus $\langle \zeta^i,f\rangle=F^f\circ \hat{P}({\mathcal{T}}^i)$ and $\langle \xi^N_i,f\rangle =F^f\circ \hat{P}({\mathcal{U}}^{N,i,0})$ ($i=1,2$). Therefore \eqref{eq:convergence of (UN10,UN20)} and the continuity of $F^f\circ \hat{P}$ (by Step \ref{enum:cty of hatP map}) imply that \begin{equation}\label{eq:conv in distn of xiN1,xiN2} (\langle \xi^N_1,f\rangle,\langle \xi^N_2,f\rangle )\rightarrow (\langle\zeta^1,f\rangle ,\langle \zeta^2,f\rangle)\quad\text{in distribution for all}\quad f\in C_b({\bf{C}}_T^{\ast}). \end{equation} \item\label{enum:conv in prob of xiNi against test fns} We fix $f\in C_b({\bf{C}}_T^{\ast})$. We are now in a position to apply Lemma \ref{lem:weak convergence lemma Multi-Colour} to $\gamma^N_k:=\langle \xi^N_k,f\rangle$ ($1\leq k\leq N<\infty$) and $\gamma_i:=\langle \zeta^i,f\rangle$ ($i=1,2$). Step \ref{enum:random permutation step} implies that $(\gamma^N_1,\ldots,\gamma^N_N)$ is exchangeable for all $N$.
We observe that $\xi^N_i$ is a random measure with mass $\xi^N_i({\bf{C}}^{\ast}_T)$, so that $\lvert \langle \xi^N_i,f\rangle\rvert\leq \lvert\lvert f\rvert\rvert_{\infty}\xi^N_i({\bf{C}}^{\ast}_T)$. Therefore Step \ref{enum:uniform integrability of xiNi(A)} and the boundedness of $f$ imply the uniform integrability of $\{\gamma^N_k:1\leq k\leq N<\infty\}$. We have the convergence in distribution of $(\gamma^N_1,\gamma^N_2)$ to $(\gamma_1,\gamma_2)$ as $N\rightarrow\infty$ by \eqref{eq:conv in distn of xiN1,xiN2}. Therefore we can apply Lemma \ref{lem:weak convergence lemma Multi-Colour} and \eqref{eq:expected value of zetachi} to see that we have \eqref{eq:conv of sum of xiNi,f}: \[ \langle\frac{1}{N}\sum_{i=1}^N\xi^N_i,f\rangle =\frac{1}{N}\langle\sum_{i=1}^N\xi^N_i,f\rangle \overset{p}{\rightarrow} {\mathbb E}[\langle \zeta^1,f\rangle]= \langle {\mathcal{L}}({\bf{X}}^{T,\mu}),f\rangle\quad\text{as}\quad N\rightarrow\infty. \] \end{enumerate} \subsection{Step \ref{enum:uniform integrability of xiNi(A)}} We recall that $\varsigma$ is the index function defined in \eqref{eq:index function iota}. We define $B^N_t=\{v\in \Pi({\mathcal{U}}^{N,1,0}): t^v_b\leq t\}$ for $0\leq t\leq T$, the set of vertices of ${\mathcal{U}}^{N,1,0}$ born prior to or at time $t$.
Since the death rate of each particle is bounded by $\lvert \lvert \kappa\rvert\rvert_{\infty}$, and a dying particle jumps onto a particle in $\{\varsigma(v):v\in B^{N}_t\}$ (thereby causing $\lvert B^{N}_t\rvert$ to increase by $1$) with probability at most $\frac{\lvert \{\varsigma(v):v\in B^{N}_t\}\rvert}{N-1}\leq \frac{\lvert B^{N}_t\rvert}{N-1}\leq \frac{2\lvert B^{N}_t\rvert}{N}$, we see that $\lvert B^{N}_t\rvert$ may be coupled with a jump process $M_t$ which starts at $1$ and jumps $n\mapsto n+1$ at rate $2\lvert\lvert \kappa\rvert\rvert_{\infty}n$. Moreover \[ e^{-2\lvert\lvert \kappa\rvert\rvert_{\infty} t}M_t\wedge C \quad\text{is a supermartingale} \] for all $C<\infty$, so that \[ {\mathbb E}[M_Te^{-2\lvert\lvert \kappa\rvert\rvert_{\infty} T}\wedge C]\leq 1. \] Taking $C\rightarrow \infty$, we see that $M_T\in L^1$. We have that \[ \xi^N_1({{\bf{C}}}^{\ast}_T)=\lvert\{v\in \Lambda({\mathcal{U}}^{N,1,0}):t^v_d=\ast\}\rvert\leq \lvert B^{N}_T\rvert, \] so that both $\lvert \Pi({\mathcal{U}}^{N,1,0})\rvert$ and $\xi^N_1({{\bf{C}}}^{\ast}_T)$ are stochastically dominated by $M_T\in L^1$ for all $N$, hence $\{\lvert \Pi({\mathcal{U}}^{N,1,0})\rvert:N\in\mathbb{N}\}$ and $\{\xi^N_1({{\bf{C}}}^{\ast}_T):N\in\mathbb{N}\}$ are uniformly integrable. The proof that $\{\zeta_{x,t}({{\bf{C}}}^{\ast}_T):x\in \bar D,\; t\in [0,T]\}$ is uniformly integrable is identical.
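As a numerical sanity check (not part of the proof; the parameter values below are our own stand-ins for $\lvert\lvert\kappa\rvert\rvert_{\infty}$ and the horizon), one can simulate the dominating jump process $M_t$ and observe that the discounted process $e^{-2\lvert\lvert\kappa\rvert\rvert_{\infty}t}M_t$ has mean $1$, which is what makes the capped process a supermartingale:

```python
import math
import random

def jump_process_at_T(rate_per_individual, T, rng):
    # Pure birth process started from 1: jumps n -> n+1 at total rate
    # rate_per_individual * n, observed at time T.
    n, t = 1, 0.0
    while True:
        t += rng.expovariate(rate_per_individual * n)
        if t > T:
            return n
        n += 1

kappa_inf, T = 1.0, 0.5   # illustrative stand-ins, not values from the text
rng = random.Random(1)
discounted = [math.exp(-2 * kappa_inf * T) * jump_process_at_T(2 * kappa_inf, T, rng)
              for _ in range(20_000)]
mean = sum(discounted) / len(discounted)   # close to 1 by the martingale property
```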
\subsection{Step \ref{enum:coupling tree step}} Similarly to \eqref{eq:trees glued to path}, for marked trees ${\mathcal{T}}^i=(t^{i,v}_b,f^{i,v},t^{i,v}_d)_{v\in \Pi^i}$ ($i=1,2$) and $v\in \Pi^1$ such that $t^{1,v}_b\leq t^{2,\emptyset}_b<t^{1,v}_d$ and $f^{2,\emptyset}(t^{2,\emptyset}_b)=f^{1,v}(t^{2,\emptyset}_b)$, we may glue ${\mathcal{T}}^2$ to ${\mathcal{T}}^1$ at the vertex $v$ by defining \[ ({\mathcal{T}},\Pi)=\Psi({\mathcal{T}}^1,v,{\mathcal{T}}^2) \] according to \[ \begin{split} \Pi=\{u\in \Pi^1:v\nless u\}\cup\{v1w:vw\in \Pi^1\}\cup \{v2w:w\in \Pi^2\},\\ {\mathcal{T}}(u)=\begin{cases} {\mathcal{T}}^1(u),\quad u\in\Pi^1\quad \text{and}\quad u\ngeq v\\ (t^{1,v}_b,f^{1,v}_{[t^{1,v}_b,t^{2,\emptyset}_b]},t^{2,\emptyset}_b),\quad u=v\\ (t^{2,\emptyset}_b,f^{1,v}_{[t^{2,\emptyset}_b,t^{1,v}_d]},t^{1,v}_d),\quad u=v1\\ {\mathcal{T}}^1(vw),\quad u=v1w\quad\text{for some}\quad vw\in \Pi^1\setminus \{v\}\\ {\mathcal{T}}^2(w),\quad u=v2w\quad\text{for some}\quad w\in \Pi^2 \end{cases} \end{split} \] Note in particular that the marks for $p(v)$ and $s(v)$ become a single mark. Then for a family of marked trees ${\mathcal{T}}^k=(t^{k,v}_b,f^{k,v},t^{k,v}_d)_{v\in \Pi^k}$ $(0\leq k\leq n)$ and a family of vertices $v_k\in \Pi^0$ such that $t^{k,\emptyset}_b\in [t^{0,v_k}_b,t^{0,v_k}_d]$ ($1\leq k\leq n$), we may glue ${\mathcal{T}}^1,\ldots,{\mathcal{T}}^n$ onto ${\mathcal{T}}^0$, obtaining $\Psi({\mathcal{T}}^0,\{{\mathcal{T}}^k:1\leq k\leq n\})$, as follows. We say that $k$ is an ancestor of $k'$ if $v_k\leq v_{k'}$ and $t^{k,\emptyset}_b<t^{k',\emptyset}_b$. We reorder the ${\mathcal{T}}^k$ so that $k$ is not an ancestor of $k'$ for $1\leq k<k'\leq n$. The purpose of this is to prevent the gluing of the $k^{\text{th}}$ tree from affecting the label of the branch on which the $k'^{\text{th}}$ tree should be glued.
We then inductively define \[ {\mathcal{T}}^{(1)}:=\Psi({\mathcal{T}}^0,v_1,{\mathcal{T}}^1),\quad {\mathcal{T}}^{(k+1)}:=\Psi({\mathcal{T}}^{(k)},v_{k+1},{\mathcal{T}}^{k+1}),\quad \Psi({\mathcal{T}}^0,\{{\mathcal{T}}^k:1\leq k\leq n\}):={\mathcal{T}}^{(n)}. \] This glues the trees $\{{\mathcal{T}}^k:1\leq k\leq n\}$ onto ${\mathcal{T}}^0$ at the vertices $v_k$. For a branch $\emptyset\neq v\in \Pi$ of a binary tree, we recall that $p(v)$ and $s(v)$ are the parent and sibling of $v$ respectively \eqref{eq:parent and sibling of u}. We now define the process of removing a branch $\emptyset \neq v\in \Pi$ and its descendants from a marked tree $({\mathcal{T}},\Pi)$. In particular, for a marked tree ${\mathcal{T}}=(t_b^v,f^v,t_d^v)_{v\in \Pi}$ with $\emptyset \neq v\in \Pi$ we remove $v$ and all of its descendants by defining the new tree \[ ({\mathcal{T}}',\Pi')=\Upsilon({\mathcal{T}},v) \] according to \[ \begin{split} \Pi'=\{u:p(v)\nless u\}\cup \{p(v)w:s(v)w\in \Pi\},\\ {\mathcal{T}}'(u)=\begin{cases} {\mathcal{T}}(u),\quad p(v)\nleq u\\ \Big(t_b^{p(v)},\Big([t_b^{p(v)},t_d^{s(v)}]\ni t\mapsto \begin{cases} f^{p(v)}(t),\quad t\in [t_b^{p(v)},t_d^{p(v)}]\\ f^{s(v)}(t),\quad t\in [t_b^{s(v)},t_d^{s(v)}] \end{cases}\Big),t_d^{s(v)}\Big),\quad u=p(v)\\ {\mathcal{T}}(s(v)w),\quad u=p(v)w\quad\text{for some}\quad w\neq \emptyset \end{cases} \end{split} \] This removes $v$ and its descendants, fuses together the sibling and parent of $v$, and adjusts the index of the descendants of the sibling of $v$ appropriately. We couple ${\mathcal{U}}^{N,i,0}=(t^{i,v}_b,f^{i,v},t^{i,v}_d)_{v\in \Pi^i}$ to copies ${\mathcal{V}}^{N,i}$ of the trees ${\mathcal{T}}^{X^{N,i}_0,0}$ (defined in Definition \ref{defin:critical branching tree from time t pos x} with $x=X^{N,i}_0$ and $t=0$) for $i=1,2$. We write $\varsigma^{N,i}$ for the index functions corresponding to ${\mathcal{U}}^{N,i,0}$ \eqref{eq:index function iota} for $i=1,2$.
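The vertex bookkeeping in the pruning map $\Upsilon$ can be sketched on bare label sets as follows (our own illustration; marks and the fusing of paths are omitted):

```python
def parent(v):
    return v[:-1]

def sibling(v):
    return v[:-1] + (3 - v[-1],)   # swaps child label 1 <-> 2

def remove_branch(Pi, v):
    # Upsilon on labels: drop v and its descendants, and re-root the
    # subtree of the sibling s(v) at the parent p(v), relabelling
    # each vertex s(v)w as p(v)w.
    p, s = parent(v), sibling(v)
    kept = {u for u in Pi if not (len(u) > len(p) and u[:len(p)] == p)}
    moved = {p + u[len(s):] for u in Pi if u[:len(s)] == s}
    return kept | moved

Pi = {(), (1,), (2,), (1, 1), (1, 2), (1, 2, 1), (1, 2, 2)}
pruned = remove_branch(Pi, (1, 1))
```

Here the branch $(1,1)$ is removed and the subtree rooted at its sibling $(1,2)$ is re-rooted at the parent $(1)$, so that $(1,2,1)$ and $(1,2,2)$ become $(1,1)$ and $(1,2)$.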
We define \[ {\mathcal{P}}^{N,i}_t=\{\varsigma^{N,i}(v):v\in \Pi({\mathcal{U}}^{N,i,0})\text{ and }t^{i,v}_b\leq t<t^{i,v}_d\} \quad\text{for}\quad i=1,2 \] to be the indices of those particles alive at time $t$, with empirical measure \[ m^{N,i}_t=\frac{1}{N}\sum_{k\in {\mathcal{P}}^{N,i}_t}\delta_{X^k_t},\quad i=1,2. \] We now take $(\vec{X}^N,{\mathcal{U}}^{N,1,0},{\mathcal{U}}^{N,2,0})$ and construct ${\mathcal{V}}^{N,1}$ as follows: \begin{enumerate} \item We recall the convention that if $\varsigma^{N,i}(v)=j$, and vertex $v$ branches when $X^{N,k}$ jumps onto $X^{N,j}$, then $\varsigma(v1):=j$ and $\varsigma(v2):=k$. Using $\Upsilon$, we inductively remove every branch $v\in \Pi({\mathcal{U}}^{N,1,0})$ of the form $v=p(v)2$ which corresponds to a particle in ${\mathcal{P}}^{N,1}_{t^{1,v}_b-}\cup {\mathcal{P}}^{N,2}_{t^{1,v}_b-}$ dying and jumping onto $p(v)$. We obtain the tree $\tilde{{\mathcal{U}}}^{N,1,0}$. \item\label{enum:removal of vertices construction of coupling of trees} Note that ${\mathcal{P}}^{N,1}_{t}\cap {\mathcal{P}}^{N,2}_{t}=\emptyset$, so that \[ \frac{N}{N-1}\langle m^N_{t}-m^{N,1}_{t}-m^{N,2}_{t}, \kappa\rangle \] represents the rate at which a particle not in ${\mathcal{P}}^{N,1}_{t}\cup {\mathcal{P}}^{N,2}_{t}$ dies and jumps onto any given particle in ${\mathcal{P}}^{N,1}_{t}$. Inductively for all $v\in \Pi(\tilde{{\mathcal{U}}}^{N,1,0})$, if \[ \frac{N}{N-1}\langle m^N_{t^v_b-}-m^{N,1}_{t^v_b-}-m^{N,2}_{t^v_b-}, \kappa\rangle>\lambda_{t^v_b-}(\mu), \] then with probability \[ 1-\frac{\lambda_{t^v_b-}(\mu)}{\frac{N}{N-1}\langle m^N_{t^v_b-}-m^{N,1}_{t^v_b-}-m^{N,2}_{t^v_b-}, \kappa\rangle} \] we remove $v$ and all its descendants using $\Upsilon$. We obtain $\hat{{\mathcal{U}}}^{N,1,0}$.
\item\label{enum:gluing on extra trees tree coupling} For each remaining branch $v\in\Pi(\hat{{\mathcal{U}}}^{N,1,0})$, we generate an independent Poisson process of rate $\big(\lambda_{t}(\mu)-\frac{N}{N-1}\langle m^N_{t}-m^{N,1}_{t}-m^{N,2}_{t}, \kappa\rangle\big)\vee 0$ on $[t^v_b,t^v_d]$, obtaining times $t^v_1,\ldots,t^v_{n_v}$, and glue independent copies of ${\mathcal{T}}^{f^{1,v}(t^v_k),t^v_k}$ (for each $1\leq k\leq n_v$ and each remaining $v$) onto the tree $\hat{{\mathcal{U}}}^{N,1,0}$ using $\Psi$. We write ${\mathcal{V}}^{N,1}$ for the resulting tree. \item We then repeat the above process for ${\mathcal{U}}^{N,2,0}$, obtaining ${\mathcal{V}}^{N,2}$. \end{enumerate} Then by construction, conditional on $(X^{N,1}_0,X^{N,2}_0)$, ${\mathcal{V}}^{N,1}$ and ${\mathcal{V}}^{N,2}$ are independent and distributed like ${\mathcal{T}}^{X^{N,1}_0,0}$ and ${\mathcal{T}}^{X^{N,2}_0,0}$ respectively. We now establish \eqref{eq:prob of U and V coupled trees differing} using Step \ref{enum:uniform integrability of xiNi(A)}. We write $A_N$ for the event that some particle in ${\mathcal{P}}_t^{N,1}\cup {\mathcal{P}}_t^{N,2}$ dies and jumps onto another particle in ${\mathcal{P}}_t^{N,1}$ for some $0\leq t\leq T$. We write $B_N$ for the event that for some $v\in\Pi(\tilde{{\mathcal{U}}}^{N,1,0})$, the vertex $v$ is removed as in Step \ref{enum:removal of vertices construction of coupling of trees} of the construction of the coupling. We write $C_N$ for the event that for some $v\in\Pi(\hat{{\mathcal{U}}}^{N,1,0})$ at least one critical branching tree is glued on as in Step \ref{enum:gluing on extra trees tree coupling} of the construction of the coupling. Then we have \[ {\mathbb P}({\mathcal{U}}^{N,1,0}={\mathcal{V}}^{N,1})\geq 1-{\mathbb P}(A_N)-{\mathbb P}(B_N)-{\mathbb P}(C_N).
\] We will show that ${\mathbb P}({\mathcal{U}}^{N,1,0}={\mathcal{V}}^{N,1})$ converges to $1$ as $N\rightarrow\infty$ by showing that the probability of each of $A_N$, $B_N$ and $C_N$ converges to $0$ as $N\rightarrow\infty$. The rate at which a particle in ${\mathcal{P}}^{N,1}_t\cup {\mathcal{P}}^{N,2}_t$ dies and jumps onto a particle in ${\mathcal{P}}^{N,1}_t$ is bounded by \[ \Big( \lvert {\mathcal{P}}^{N,1}_t\rvert+\lvert {\mathcal{P}}^{N,2}_t\rvert \Big) \lvert\lvert \kappa\rvert\rvert_{\infty}\frac{\lvert{\mathcal{P}}^{N,1}_t\rvert}{N-1}. \] Therefore, using that $\lvert {\mathcal{P}}^{N,1}_t\rvert\leq \lvert \Pi({\mathcal{U}}^{N,1,0})\rvert$ for all $0\leq t\leq T$, we have \[ \begin{split} {\mathbb P}(A_N)\leq {\mathbb P}\Big(\sup_{0\leq t\leq T}\lvert {\mathcal{P}}^{N,1}_t\rvert >\epsilon (N-1)\Big)+{\mathbb E}\Big[\epsilon\lvert\lvert \kappa\rvert\rvert_{\infty} \int_0^T\big(\lvert {\mathcal{P}}^{N,1}_t\rvert+\lvert {\mathcal{P}}^{N,2}_t\rvert\big) dt\Big]\\ \leq {\mathbb P}\big(\lvert \Pi({\mathcal{U}}^{N,1,0})\rvert >\epsilon (N-1)\big)+2\epsilon\lvert\lvert \kappa\rvert\rvert_{\infty}T{\mathbb E}[\lvert \Pi({\mathcal{U}}^{N,1,0})\rvert] \end{split} \] for all $\epsilon>0$. Using Step \ref{enum:uniform integrability of xiNi(A)} and taking $\limsup_{\epsilon\rightarrow 0}\limsup_{N\rightarrow\infty}$ of both sides, we see that ${\mathbb P}(A_N)\rightarrow 0$ as $N\rightarrow\infty$. We define \[ p^N_t:=\Big(1-\frac{\lambda_{t}(\mu)}{\frac{N}{N-1}\langle m^N_{t}-m^{N,1}_{t}-m^{N,2}_{t}, \kappa\rangle}\Big)\vee 0,\quad q^N_t=\big(\lambda_{t}(\mu)-\frac{N}{N-1}\langle m^N_{t}-m^{N,1}_{t}-m^{N,2}_{t}, \kappa\rangle\big)\vee 0.
\] Since $\lvert \langle m^{N,i}_t,\kappa\rangle\rvert \leq \frac{1}{N}\lvert \lvert\kappa\rvert\rvert_{\infty}\lvert \Pi({\mathcal{U}}^{N,i,0})\rvert$, we can combine Step \ref{enum:uniform integrability of xiNi(A)} with the hydrodynamic limit theorem for the Fleming-Viot process, \cite[Theorem 2.2]{Villemonais2011}, to conclude that \[ \sup_{0\leq t\leq T}p^N_t,\quad \sup_{0\leq t\leq T}q^N_t\rightarrow 0\quad\text{in probability.} \] It is straightforward to see that for all $\epsilon>0$, \[ {\mathbb P}(B_N)\leq {\mathbb P}(\sup_{0\leq t \leq T}p^N_t>\epsilon)+\epsilon {\mathbb E}[\lvert \Pi({\mathcal{U}}^{N,1,0})\rvert],\quad {\mathbb P}(C_N)\leq {\mathbb P}(\sup_{0\leq t \leq T}q^N_t>\epsilon)+\epsilon T {\mathbb E}[\lvert \Pi({\mathcal{U}}^{N,1,0})\rvert]. \] Taking $\limsup_{\epsilon\rightarrow 0}\limsup_{N\rightarrow\infty}$ of both sides and using Step \ref{enum:uniform integrability of xiNi(A)}, we have ${\mathbb P}(B_N), {\mathbb P}(C_N)\rightarrow 0$ as $N\rightarrow\infty$. Thus we have \eqref{eq:prob of U and V coupled trees differing}. \subsection{Step \ref{enum:sum of emp meas of v-primary paths}} We recall that for a given tree ${\mathcal{T}}$, we write ${\mathcal{T}}_{(v)}$ for the $v$-subtree \eqref{eq:v-subtree}, whereas ${\mathcal{T}}_{v\text{ primary}}$ is the $v$-primary tree \eqref{eq:v-primary tree}, whose primary path is of length $\ell({\mathcal{T}}_{v\text{ primary}})$. We claim that \begin{equation}\label{eq:zeta=trees tilted by length} {\mathbb E}[\zeta^1(\cdot)]={\mathbb E}[2^{\ell({\mathcal{T}}^{1})}{\mathbbm{1}}({\mathcal{T}}^{1}\in \cdot\cap {\bf{C}}^{\ast}_T)].
\end{equation} It is sufficient to consider sets of the form \[ A=\{{\mathcal{T}}\in {\bf{C}}^{\ast}_T:{\mathcal{T}}(e_i)\in G_i,\quad 0\leq i\leq n\quad\text{and}\quad {\mathcal{T}}_{(e_{i-1}2)}\in F_i,\quad 1\leq i\leq n\}\in {\mathcal{B}}({\bf{C}}^{\ast}_T) \] where $F_i\in {\mathcal{B}}({\bf{C}}_T)$ and $G_i\in {\mathcal{B}}(\mathscr{C}([0,T];\bar D))$ with $G_n\in {\mathcal{B}}(\mathscr{C}^{\ast}([0,T];\bar D))$. For $a=(k_1,\ldots,k_n)\in \{1,2\}^n$ we write $a_i=(k_1,\ldots,k_i)$ $(a_0:=\emptyset)$. Then we can calculate: \[ \begin{split} {\mathbb E}[\zeta^1(A)]=\sum_{a\in \{1,2\}^n}{\mathbb P}\big(a\in \Lambda({\mathcal{T}}^{1}),\quad {\mathcal{T}}^{1}(a_i)\in G_i\quad\text{for}\quad 0\leq i\leq n \quad \text{and}\\ {\mathcal{T}}^{1}_{(s(a_{i}))}\in F_i\quad\text{for}\quad 1\leq i\leq n\big) =2^n{\mathbb P}({\mathcal{T}}^{1}\in A). \end{split} \] Therefore we have \eqref{eq:zeta=trees tilted by length}. We now claim that \begin{equation}\label{eq:law of bfX = trees tilted by length} {\mathcal{L}}({\bf{X}}^{T,\mu})(\cdot)={\mathbb E}[2^{\ell({\mathcal{T}}^{1})}{\mathbbm{1}}({\mathcal{T}}^{1}\in \cdot\cap {\bf{C}}^{\ast}_T)]. \end{equation} We recall that for ${\mathcal{T}}\in {\bf{C}}_T$, $(t^{{\mathcal{T}}}_b,f^{{\mathcal{T}}},t^{{\mathcal{T}}}_d)\in \mathscr{C}([0,T];\bar D)$ is the primary process \eqref{eq:primary process}, of length $\ell({\mathcal{T}})$. It is sufficient to consider sets of the form \[ A=\{{\mathcal{T}}:f^{{\mathcal{T}}}\in H,\quad \ell({\mathcal{T}})=n,\quad t^{e_i}_b\in [t_i,t_i+h_i]\quad\text{and}\quad {\mathcal{T}}_{(s(e_{i}))}\in F_i\quad \text{for}\quad 1\leq i\leq n\}, \] where $H\in {\mathcal{B}}(\mathscr{C}^{\ast}([0,T];\bar D))$, $F_i\in {\mathcal{B}}({\bf{C}}_T)$ $(1\leq i\leq n)$ and $t_1\leq t_1+h_1\leq t_2\leq t_2+h_2\leq \ldots \leq t_n\leq t_n+h_n$.
Then we can calculate \[ \begin{split} {\mathcal{L}}({{\bf{X}}}^{T,\mu})(A) =\int_{C([0,T];\bar D)}\int_{t_n}^{t_n+h_n}\ldots\int_{t_1}^{t_1+h_1} \prod_{i=1}^n\big[K^{{{\bf{C}}_T}}(s_i,f(s_i);F_i)2\lambda_{s_i}(\mu)\big]\\ e^{-2\int_0^T\lambda_s(\mu)ds}ds_1\ldots ds_n {\mathcal{L}}_{\mu}(X_t\lvert \tau_{\partial}>T)(df)\\ =2^n\int_{C([0,T];\bar D)}\int_{t_n}^{t_n+h_n}\ldots\int_{t_1}^{t_1+h_1} \prod_{i=1}^n\big[K^{{\bf{C}}_T}(s_i,f(s_i);F_i)\lambda_{s_i}(\mu)\big]\\ e^{-\int_0^T\lambda_s(\mu)ds} ds_1\ldots ds_n \underbrace{\frac{e^{-\int_0^T\lambda_s(\mu)ds}}{{\mathbb P}_{\mu}(\tau_{\partial}>T)}}_{=1}{\mathcal{L}}_{\mu}(X_t)(df) =2^n{\mathbb P}({\mathcal{T}}^{1}\in A). \end{split} \] Therefore we have \eqref{eq:law of bfX = trees tilted by length}, so that by \eqref{eq:zeta=trees tilted by length} we have \[ {\mathcal{L}}({\bf{X}}^{T,\mu})(\cdot)={\mathbb E}[2^{\ell({\mathcal{T}}^{1})}{\mathbbm{1}}({\mathcal{T}}^{1}\in \cdot\cap {{\bf{C}}}^{\ast}_T)]={\mathbb E}[\zeta^{1}(\cdot)]. \] \subsection{Step \ref{enum:convergence of (UN10,UN20)}} We recall that ${\mathcal{T}}^{t,x}\sim K^{{\bf{C}}_T}(t,x;\cdot)$. We define the Markov kernel \[ \tilde{K}((x_1,x_2);\cdot):=K^{{\bf{C}}_T}(0,x_1;\cdot)\otimes K^{{\bf{C}}_T}(0,x_2;\cdot),\quad x_1,x_2\in \bar D. \] Lemma \ref{lem:cty of kernel} (proven in the appendix) implies that \[ \bar D\times \bar D\ni (x_1,x_2)\mapsto \tilde{K}((x_1,x_2);\cdot)\in {\mathcal{P}}({\bf{C}}_T\times {\bf{C}}_T) \] is continuous.
We take the coupled marked trees ${\mathcal{V}}^{N,1},{\mathcal{V}}^{N,2}\in {\bf{C}}_T$ constructed in Step \ref{enum:coupling tree step}, whose joint distribution is given by \[ ({\mathcal{V}}^{N,1},{\mathcal{V}}^{N,2})\sim \tilde{K}({\mathcal{L}}((X^{N,1}_0,X^{N,2}_0));\cdot). \] Since $(X^{N,1}_0,X^{N,2}_0)\overset{d}{\rightarrow} \mu\otimes \mu$ by \eqref{eq:conv in dist of XN1 XN2}, we have that \[ ({\mathcal{V}}^{N,1},{\mathcal{V}}^{N,2})\overset{d}{\rightarrow} \tilde{K}(\mu\otimes\mu ;\cdot)=K^{{\bf{C}}_T}(0,\mu;\cdot)\otimes K^{{\bf{C}}_T}(0,\mu;\cdot)={\mathcal{L}}(({\mathcal{T}}^1,{\mathcal{T}}^2))\quad \text{as}\quad N\rightarrow \infty. \] We therefore have \eqref{eq:convergence of (UN10,UN20)} by \eqref{eq:prob of U and V coupled trees differing}. \subsection{Step \ref{enum:cty of hatP map}} We consider trees ${\mathcal{T}}^i=(t^{i,v}_b,f^{i,v},t^{i,v}_d)_{v\in \Pi^i}$ ($i=1,2$). If $d_{{\bf{C}}_T}({\mathcal{T}}^1,{\mathcal{T}}^2)<1$, then by the definition of the metric $d_{{\bf{C}}_T}$ \eqref{eq:metric dCT}, $\Pi({\mathcal{T}}^1)=\Pi({\mathcal{T}}^2)$ and \begin{equation}\label{eq:Step 6 same set of vertices} \begin{split} \{v\in \Lambda({\mathcal{T}}^1):{\mathcal{T}}^1_{v\text{ primary}}\in {\bf{C}}^{\ast}_T\}=\{v\in \Lambda ({\mathcal{T}}^1):t^{1,v}_d=\ast\}\\ =\{v\in \Lambda ({\mathcal{T}}^2):t^{2,v}_d=\ast\}=\{v\in \Lambda({\mathcal{T}}^2):{\mathcal{T}}^2_{v\text{ primary}}\in {\bf{C}}^{\ast}_T\}. \end{split} \end{equation} We consider a sequence of trees ${\mathcal{T}}^n\rightarrow {\mathcal{T}}$ in $d_{{\bf{C}}_T}$ as $n\rightarrow\infty$, assuming without loss of generality that $d_{{\bf{C}}_T}({\mathcal{T}}^n,{\mathcal{T}})<1$ for all $n$ in order to make use of \eqref{eq:Step 6 same set of vertices}.
We define \[ {\bf{V}}:=\{v\in \Lambda({\mathcal{T}}):{\mathcal{T}}_{v\text{ primary}}\in {\bf{C}}^{\ast}_T\}=\{v\in \Lambda({\mathcal{T}}^n):{\mathcal{T}}^n_{v\text{ primary}}\in {\bf{C}}^{\ast}_T\}\quad\text{for all}\quad n. \] Therefore \[ \hat{P}({\mathcal{T}})=\sum_{v\in {\bf{V}}}\delta_{{\mathcal{T}}_{v\text{ primary}}}\quad\text{and}\quad \hat{P}({\mathcal{T}}^n)=\sum_{v\in {\bf{V}}}\delta_{{\mathcal{T}}^n_{v\text{ primary}}}\quad \text{for all}\quad n. \] For all $v\in{\bf{V}}$ we have that ${\mathcal{T}}^n_{v\text{ primary}}\rightarrow {\mathcal{T}}_{v\text{ primary}}$ in $d_{{\bf{C}}_T}$ as $n\rightarrow\infty$, so that \[ \delta_{{\mathcal{T}}^n_{v\text{ primary}}}\rightarrow \delta_{{\mathcal{T}}_{v\text{ primary}}}\quad\text{in}\quad {\mathcal{M}}({\bf{C}}^{\ast}_T)\quad \text{as}\quad n\rightarrow\infty. \] Thus we have $\hat{P}({\mathcal{T}}^n)\rightarrow \hat{P}({\mathcal{T}})$ in ${\mathcal{M}}({\bf{C}}^{\ast}_T)$ as $n\rightarrow\infty$. Since ${\bf{C}}^{\ast}_T$ is a Polish space, ${\mathcal{M}}({\bf{C}}_T^{\ast})$ is metrisable, hence sequential continuity implies continuity. \qed \section{Appendix} Here we prove various technical lemmas whose proofs were deferred from the main text. \subsection{Proof of Proposition \ref{prop:convergence in W in prob equivalent to weakly prob}} We begin by establishing that \ref{enum:equiv prop conv in W} implies \ref{enum:equiv prop conv against test fns}. By Skorokhod's representation theorem, there exists a sequence of random variables ${\bf U}_n\overset{d}{=} \mu_n$ on a common probability space $(\Omega',{\mathcal{G}}',{\mathbb P}')$, such that $W({\bf U}_n,\mu)\rightarrow 0$ ${\mathbb P}'$-almost surely. Therefore $\langle {\bf U}_n,f\rangle\rightarrow \langle \mu,f\rangle$ ${\mathbb P}'$-almost surely by \cite[Theorem 6]{Gibbs2002}, hence we have \ref{enum:equiv prop conv against test fns}. We now establish the reverse implication.
We begin by establishing that for \begin{equation}\label{eq:sets A cty set} A\in{\mathcal{B}}({\bf S})\quad\text{such that}\quad\mu(\partial A)=0 \end{equation} we have \begin{equation}\label{eq:convergence in prob of xiNi(A)} \mu_n(A)\rightarrow \mu(A)\quad \text{in probability as}\quad n\rightarrow\infty. \end{equation} \begin{proof}[Proof of \eqref{eq:convergence in prob of xiNi(A)}] We consider a set $A\in{\mathcal{B}}({\bf S})$ of the form \eqref{eq:sets A cty set} and fix $\epsilon>0$. We take two sequences of continuous and bounded functions on ${\bf S}$, $(f^+_n)_{n=1}^{\infty}$ and $(f^-_n)_{n=1}^{\infty}$, such that ${\mathbbm{1}}_{\bar A}\leq f^+_n\leq 1$ (respectively $0\leq f^-_n\leq {\mathbbm{1}}_{A^o}$) and which converge pointwise to ${\mathbbm{1}}_{\bar A}$ (respectively ${\mathbbm{1}}_{A^o}$) as $n\rightarrow\infty$. We then use the dominated convergence theorem and \eqref{eq:sets A cty set} to obtain $f^-_{\epsilon},f^+_{\epsilon}\in C_b({\bf S})$ such that \[ 0\leq f^-_{\epsilon}\leq {\mathbbm{1}}_A\leq f^+_{\epsilon}\leq 1,\quad \langle \mu,f^-_{\epsilon}\rangle \geq \mu(A)-\epsilon\quad\text{and}\quad \langle \mu,f^+_{\epsilon}\rangle \leq \mu(A)+\epsilon.
\] Then by assumption we have \[ \begin{split} {\mathbb P}(\mu_n(A)\geq \mu(A)+2\epsilon) \leq {\mathbb P} (\langle\mu_n,f^+_{\epsilon}\rangle \geq \langle \mu,f^+_{\epsilon}\rangle+\epsilon)\rightarrow 0\quad\text{as}\quad n\rightarrow\infty,\\ {\mathbb P}(\mu_n(A)\leq \mu(A)-2\epsilon) \leq {\mathbb P} (\langle\mu_n,f^-_{\epsilon}\rangle \leq \langle \mu,f^-_{\epsilon}\rangle-\epsilon)\rightarrow 0\quad\text{as}\quad n\rightarrow\infty. \end{split} \] Since $\epsilon>0$ is arbitrary, we have \eqref{eq:convergence in prob of xiNi(A)}. \end{proof} We now fix $\epsilon>0$ and take $(A_j)_{j=0}^m$ of the form \eqref{eq:sets A cty set} such that $\text{diam}(A_j)<\epsilon$ for all $1\leq j\leq m$ and $\mu(A_0)<\epsilon$. We then have that \[ \begin{split} W(\mu_n,\mu)\leq \epsilon+\sum_{j=1}^m\Big\lvert \mu_n(A_j)-\mu(A_j)\Big\rvert +\mu(A_0)+\mu_n(A_0)\overset{p}{\rightarrow} \epsilon+2\mu(A_0)<3\epsilon\quad \text{as}\quad n\rightarrow\infty. \end{split} \] Since $\epsilon>0$ is arbitrary, we have established that \ref{enum:equiv prop conv against test fns} implies \ref{enum:equiv prop conv in W}.
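As an aside, the indicator-sandwich step above can be checked numerically. The sketch below (all function names are illustrative, not from the paper) takes $\mu$ uniform on $[0,1]$ and $A=[a,b]$, builds piecewise-linear $f^-_{\epsilon}\leq {\mathbbm{1}}_A\leq f^+_{\epsilon}$, and verifies that their integrals pin $\mu(A)$ to within $\epsilon$ on each side.

```python
# Numerical illustration of the indicator sandwich: for uniform mu on [0,1]
# and a mu-continuity set A = [a, b], continuous piecewise-linear functions
# f_minus <= 1_A <= f_plus pin mu(A) to within eps on each side.

def f_plus(x, a, b, eps):
    """Continuous upper approximation: 1 on [a,b], 0 outside [a-eps, b+eps]."""
    if a <= x <= b:
        return 1.0
    if a - eps < x < a:
        return (x - (a - eps)) / eps
    if b < x < b + eps:
        return ((b + eps) - x) / eps
    return 0.0

def f_minus(x, a, b, eps):
    """Continuous lower approximation: 1 on [a+eps, b-eps], 0 outside [a,b]."""
    if a + eps <= x <= b - eps:
        return 1.0
    if a < x < a + eps:
        return (x - a) / eps
    if b - eps < x < b:
        return (b - x) / eps
    return 0.0

def integrate(g, n=100000):
    """Midpoint Riemann sum of g against the uniform measure on [0,1]."""
    return sum(g((i + 0.5) / n) for i in range(n)) / n

a, b, eps = 0.2, 0.7, 0.05
mu_A = b - a  # mu(A) for the uniform measure
lower = integrate(lambda x: f_minus(x, a, b, eps))
upper = integrate(lambda x: f_plus(x, a, b, eps))
assert lower <= mu_A <= upper
assert upper - lower <= 2 * eps + 1e-3
```

Here the triangular ramps each contribute $\epsilon/2$, so `upper - lower` equals $2\epsilon$ up to quadrature error.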
\qed \subsection{Proof of Theorem \ref{theo:convergence to QSD for reflected diffusion with soft killing}}\label{subsection:convergence to QSD for reflected diffusion with soft killing} Our strategy is to check \cite[Assumption (A)]{Champagnat2014}, starting with \cite[Assumption (A1)]{Champagnat2014}. We let \[ P^{\partial}_t(x,\cdot)={\mathbb P}(X_t\in\cdot\quad\text{and}\quad X_s\notin \partial D\quad\text{for all}\quad 0\leq s\leq t) \] denote the semigroup corresponding to the SDE \[ dX^{\partial}_t=b(X^{\partial}_t)dt+\sigma(X^{\partial}_t)dW_t, \] killed at the boundary $\partial D$ rather than reflected as in \eqref{eq:Skorokhod problem}. We define \[ K_k=\{x\in D:d(x,D^c)\geq \frac{1}{k}\} \] as in \cite[p.19]{Champagnat2018a}. Since the killing rate $\kappa$ of $X_t$ is bounded, \cite[Proposition 12.1]{Champagnat2018a} implies that there exist $\nu\in{\mathcal{P}}(\bar D)$ and $t_{\ast}>0$ such that, for all $k\in \mathbb{N}$, there exists $c_k>0$ such that \begin{equation}\label{eq:diffusion killed at bdy at least nu} e^{t_{\ast}\lVert \kappa\rVert_{\infty}}P_{t_{\ast}}(x,\cdot)\geq P^{\partial}_{t_{\ast}}(x,\cdot)\geq c_k\nu(\cdot)\quad \text{for all}\quad x\in K_k. \end{equation} We now prove the following elementary fact: there exist $\epsilon>0$ and $k\in\mathbb{N}$ such that \begin{equation}\label{eq:positive prob of being in interior reflected diffusion} P_{t_{\ast}}(x,K_k)\geq \epsilon\quad \text{for all}\quad x\in \bar D. \end{equation} \begin{proof}[Proof of \eqref{eq:positive prob of being in interior reflected diffusion}] We assume not, for contradiction, so that for all $k$ there exists a sequence $(x_{k,n})_{n=1}^{\infty}$ in $\bar D$ such that $P_{t_{\ast}}(x_{k,n},K_{k+1})\rightarrow 0$ as $n\rightarrow\infty$.
We take a convergent subsequence such that $x_{k,n}\rightarrow x_k$ as $n\rightarrow\infty$, and define $\phi_k\in C(D)$ such that ${\mathbbm{1}}_{K_k}\leq \phi_k\leq {\mathbbm{1}}_{K_{k+1}}$ for all $k\in\mathbb{N}$. Therefore \[ P_{t_{\ast}}(x_{k},\phi_k)\leq \limsup_{n\rightarrow\infty} P_{t_{\ast}}(x_{k,n},\phi_k)=0 \] along this subsequence. We take a convergent subsequence such that $x_k\rightarrow x$, so that \[ P_{t_{\ast}}(x,\phi_m)=\limsup_{k\rightarrow\infty}P_{t_{\ast}}(x_{k},\phi_m)\leq \limsup_{k\rightarrow\infty}P_{t_{\ast}}(x_{k},\phi_k)=0 \] along this subsequence, for all $m\in\mathbb{N}$. Since $m\in \mathbb{N}$ is arbitrary, $P_{t_{\ast}}(x,\partial D)=1$. We now write $\tau_k=\inf\{t>0:X^0_t\in K_k\}$, and note that by the strong Markov property, \eqref{eq:diffusion killed at bdy at least nu} and the fact that the killing rate $\kappa$ is bounded, ${\mathbb P}(X_{t_{\ast}}\in D)=0$ implies that ${\mathbb P}(\tau_k\leq t_{\ast})=0$ for all $k$, so that ${\mathbb P}_x(X^0_s\in \partial D\quad \text{for all}\quad 0\leq s\leq t_{\ast})=1$. This is a contradiction, so we have \eqref{eq:positive prob of being in interior reflected diffusion}. \end{proof} Combining \eqref{eq:diffusion killed at bdy at least nu} with \eqref{eq:positive prob of being in interior reflected diffusion} we have \[ {\mathcal{L}}_x(X_{2t_{\ast}}\in\cdot\,\lvert\, \tau_{\partial}>2t_{\ast})\geq P_{2t_{\ast}}(x,\cdot)\geq \epsilon \inf_{y\in K_k}P_{t_{\ast}}(y,\cdot)\geq \epsilon c_ke^{-t_{\ast}\lVert \kappa\rVert_{\infty}}\nu(\cdot). \] Therefore we have \cite[Assumption (A1)]{Champagnat2014}, and so now turn to checking \cite[Assumption (A2)]{Champagnat2014}.
In \cite[p.6]{Schwab2005} they use the Krein-Rutman theorem to prove that there exist $\phi\in {\mathcal{A}}\cap C^{2}(\bar D; {\mathbb R}_{>0})$ and $\lambda\in {\mathbb R}$ such that \[ L\phi+\lambda\phi=0\quad\text{on}\quad \bar D,\quad \phi>0\quad\text{on}\quad \bar D,\quad \nabla\phi\cdot\vec{n}=0\quad\text{on}\quad \partial D. \] We may see that $e^{\lambda t}\phi(X_t){\mathbbm{1}}(t<\tau_{\partial})$ is a martingale, so that \begin{equation}\label{eq:integral of right eigenfunction against dist} \langle P_{t}(\mu,\cdot),\phi\rangle= e^{-\lambda t}\langle\mu,\phi\rangle\quad \text{for all}\quad \mu\in {\mathcal{P}}(\bar D). \end{equation} Therefore by \eqref{eq:integral of right eigenfunction against dist} we have for all $\mu\in{\mathcal{P}}(\bar D)$ that \begin{equation}\label{eq:lower bound on Pnu of killing in time t proof for reflected diffusion} {\mathbb P}_{\mu}(t<\tau_{\partial})\geq \frac{\langle P_{t}(\mu,\cdot),\phi\rangle}{\sup_{x'\in \bar D}\phi(x')}= \frac{e^{-\lambda t}\langle \mu,\phi\rangle}{\sup_{x'\in \bar D}\phi(x')}\geq \frac{\inf_{x'\in \bar D}\phi(x')}{\sup_{x'\in \bar D}\phi(x')}e^{-\lambda t}. \end{equation} Similarly, \eqref{eq:integral of right eigenfunction against dist} gives that for all $x\in \bar D$ we have \begin{equation}\label{eq:bound of Px killing in time t proof for reflected diffusion} {\mathbb P}_{x}(t<\tau_{\partial})\leq \frac{\langle P_{t}(x,\cdot),\phi\rangle}{\inf_{x'\in \bar D}\phi(x')}\leq \frac{\sup_{x'\in \bar D}\phi(x')}{\inf_{x'\in \bar D}\phi(x')}e^{-\lambda t}.
\end{equation} Therefore we have \[ {\mathbb P}_{x}(t<\tau_{\partial})\leq \Big(\frac{\sup_{x'\in \bar D}\phi(x')}{\inf_{x'\in \bar D}\phi(x')}\Big)^2{\mathbb P}_{\mu}(t<\tau_{\partial})\quad \text{for all}\quad t\geq 0\quad\text{and}\quad x\in \bar D. \] Thus we have verified \cite[Assumption (A)]{Champagnat2014}, so that \cite[Theorem 1.1]{Champagnat2014} implies the existence of a quasi-stationary distribution $\pi\in{\mathcal{P}}(\bar D)$ satisfying \eqref{eq:exponential convergence to QSD reflected diffusion}, while \cite[Proposition 2.3 and Corollary 2.4]{Champagnat2014} imply that $\phi$ is unique up to rescaling and, when normalised so that $\langle \pi,\phi\rangle =1$, corresponds to the uniform limit \eqref{eq:phi uniform limit}. Finally, the QSD $\pi$ corresponds to the left eigenmeasure of $L$ for some eigenvalue $-\lambda'<0$ by \cite[Proposition 4]{Meleard2011}. This eigenvalue must be equal to $-\lambda$, since \[ -\lambda'\langle \pi,\phi\rangle=\langle \pi, L\phi\rangle=-\lambda \langle \pi,\phi\rangle, \] so that we have \eqref{eq:pi left eigenmeasure}. \qed \subsection{Proof of Lemma \ref{lem:law of Q-process given by tilting}} We recall that we checked \cite[Assumption (A)]{Champagnat2014} in the proof of Theorem \ref{theo:convergence to QSD for reflected diffusion with soft killing}, which by \cite[Theorem 3.1]{Champagnat2014} implies the existence of the family of probability measures $({\mathbb P}^{\infty}_x)_{x\in\bar D}$ satisfying \eqref{eq:family of prob measures Q-process}.
Moreover \cite[Theorem 3.1]{Champagnat2014} further gives that $(\Omega,{\mathcal{G}},({\mathcal{G}}_t)_{t\geq 0},(X_t)_{t\geq 0},({\mathbb P}^{\infty}_x)_{x\in \bar D})$ is a strong Markov process with transition kernel $p^{\infty}$ given by \[ p^{\infty}_t(x,dy)=e^{\lambda t}\frac{\phi(y)}{\phi(x)}p_t(x,dy), \] where $p_t(x,\cdot)$ is the sub-Markovian transition kernel for $X_t$. We now verify \eqref{eq:law of Q-process expression}. It is sufficient to check \eqref{eq:law of Q-process expression} for sets of the form \[ A=\{f\in C([0,T];\bar D):f_{t_i}\in A_i\quad\text{for all}\quad 0\leq i\leq n\}\in {\mathcal{B}}(C([0,T];\bar D)), \] whereby we fix $0=t_0<t_1<\ldots<t_n=T$ and take $A_0,\ldots,A_n\in{\mathcal{B}}(\bar D)$. In the following, by $x(A)\propto y(A)$ we mean $x(A)=cy(A)$ for some constant $c\neq 0$ independent of our choice of $A$. We calculate \[ \begin{split} {\mathcal{L}}_{\mu^{\phi}}((X^{\infty}_t)_{0\leq t\leq T})(A) \propto\int_{A_0}\int_{A_1\times\ldots\times A_n}\prod_{i=1}^n p^{\infty}_{t_{i}-t_{i-1}}(x_{t_{i-1}},dx_{t_{i}})\phi(x_{t_0})\mu(dx_{t_0})\\ =\int_{A_0}\int_{A_1\times\ldots\times A_n}\prod_{i=1}^n p_{t_{i}-t_{i-1}}(x_{t_{i-1}},dx_{t_{i}})\prod_{i=1}^n\Big(\frac{\phi(x_{t_{i}})}{\phi(x_{t_{i-1}})}e^{\lambda(t_{i}-t_{i-1})}\Big)\phi(x_{t_0})\mu(dx_{t_0}) \\ \propto \int_{A_0\times\ldots\times A_n}\phi(x_{t_n})d{\mathcal{L}}_{\mu}((X_{t_{0}},\ldots,X_{t_n}))(x_{t_0}, \ldots ,x_{t_n})\\ \propto \int_{A_0\times\ldots\times A_n}\phi(x_{t_n})d{\mathcal{L}}_{\mu}((X_{t_{0}},\ldots,X_{t_n})\lvert \tau_{\partial}>T)(x_{t_0}, \ldots ,x_{t_n}) \propto \big({\mathcal{L}}_{\mu}((X_t)_{0\leq t\leq T}\lvert \tau_{\partial}>T)\big)^{\phi^T}(A).
\end{split} \] Since the constant of proportionality is independent of $A$, and both ${\mathcal{L}}_{\mu^{\phi}}((X^{\infty}_t)_{0\leq t\leq T})$ and $\big({\mathcal{L}}_{\mu}((X_t)_{0\leq t\leq T}\lvert \tau_{\partial}>T)\big)^{\phi^T}$ are probability measures, we have \eqref{eq:law of Q-process expression}. \qed \subsection{Proof of Proposition \ref{prop:basic facts WF superprocess}} Without loss of generality we fix $\theta=1$, referring to a Wright-Fisher superprocess of rate $1$ simply as a Wright-Fisher superprocess. \subsubsection*{Uniqueness and properties \ref{enum:MVWF purely atomic}, \ref{enum:fixation times almost surely finite} and \ref{enum:MVWF WF on meas subsets}} In the case whereby ${\mathbb{K}}$ is compact, uniqueness in law is given by \cite[Theorem 3.2 (a)]{Ethier1993c}, while properties \ref{enum:MVWF purely atomic}, \ref{enum:fixation times almost surely finite} and \ref{enum:MVWF WF on meas subsets} are provided for by \cite[Corollary 1.5]{Ethier1993b}. It is classical that every Polish space is homeomorphic to a subset of the Hilbert cube $\mathbb{H}=[0,1]^{\mathbb{N}}$, a compact metrisable space. We may therefore take a continuous injection \begin{equation}\label{eq:cts inj into Hilbert cube} \vartheta:{\mathbb{K}}\rightarrow \mathbb{H}. \end{equation} By considering the martingale problem, we see that for any Wright-Fisher superprocess $\nu_t$ on ${\mathcal{P}}({\mathbb{K}})$, $\vartheta_{\#}\nu_t$ is a Wright-Fisher superprocess on $\mathbb{H}$. Thus uniqueness in law, as well as properties \ref{enum:MVWF purely atomic}, \ref{enum:fixation times almost surely finite} and \ref{enum:MVWF WF on meas subsets}, transfer from the compact case to the case of general Polish ${\mathbb{K}}$.
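The classical embedding into the Hilbert cube used above can be sketched concretely: for a countable dense sequence $(q_i)$ in a Polish space $(\mathbb{K},d)$, the map $x\mapsto (d(x,q_i)/(1+d(x,q_i)))_i$ is a continuous injection into $[0,1]^{\mathbb{N}}$. The following illustrative code (names are ours, with $\mathbb{K}=\mathbb{R}$ and finitely many truncated coordinates) checks the basic properties on a few sample points.

```python
# Sketch of the embedding of a Polish space into the Hilbert cube [0,1]^N:
# x |-> ( d(x, q_i) / (1 + d(x, q_i)) )_i for a countable dense sequence (q_i).
# We truncate to finitely many coordinates and take K = R with the usual metric.

def embed(x, anchors):
    """Map x to a point of the (truncated) Hilbert cube via distances to anchors."""
    return tuple(abs(x - q) / (1.0 + abs(x - q)) for q in anchors)

# A few rationals standing in for a dense sequence in R.
anchors = [0.0, 1.0, -1.0, 0.5, 2.0]

points = [0.3, 0.31, -2.0, 1.7]
images = [embed(p, anchors) for p in points]

# Distinct points get distinct images (injectivity survives the truncation here
# because these anchors already separate these particular points).
assert len(set(images)) == len(points)

# Each coordinate lies in [0, 1), as required for the Hilbert cube.
assert all(0.0 <= c < 1.0 for img in images for c in img)
```

In the full construction, injectivity on all of $\mathbb{K}$ requires the whole dense sequence; the truncation is only for illustration.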
\subsubsection*{Existence and Property \ref{enum:MVWF cts}} Well-posedness of the $n$-type Wright-Fisher diffusion is guaranteed by \cite[Theorem 3.2]{Ethier1993c} (or by Cauchy-Lipschitz), as it corresponds to the Wright-Fisher superprocess on the compact set $\{1,\ldots,n\}$. We define $\tilde{W}_1$ to be the standard Wasserstein-$1$ metric on ${\mathcal{P}}(C([0,\infty);{\mathcal{P}}_{W_1}({\mathbb{K}})))$ generated by the minimum of the uniform metric and $1$. We define \[ F(\epsilon):={\mathbb E}[\sup_{0\leq t<\infty}p_t],\quad\text{where}\quad dp_t=\sqrt{p_t(1-p_t)}dW_t,\quad p_0=\epsilon. \] Note that $F(\epsilon)\rightarrow 0$ as $\epsilon\rightarrow 0$. We take $\epsilon_k>0$ for $k\geq 1$ such that $F(\epsilon_k)<2^{-k}$. We claim that for each $k\geq 1$ we can choose disjoint sets $(A_{ki})_{i=0}^{n_k}\in {\mathcal{B}}({\mathbb{K}})$ with union $\dot{\cup}_{0\leq i\leq n_k}A_{ki}={\mathbb{K}}$ such that \begin{equation}\label{eq:condition on sets inductive step exis WF} \nu^0(A_{k0})<\epsilon_k\quad\text{and}\quad \text{diam}(A_{ki})<2^{-k}\quad\text{for all} \quad 1\leq i\leq n_k. \end{equation} We take $x_{ki}\in A_{ki}$ for each $k\geq 1$ and $0\leq i\leq n_k$. Since $\{x_{ki}:0\leq i\leq n_k\}$ is finite, we may (by taking an $(n_k+1)$-type Wright-Fisher diffusion) take a Wright-Fisher superprocess $(\nu^k_t)_{t\geq 0}\in C([0,\infty);{\mathcal{P}}_{W_1}({\mathbb{K}}))$ with initial condition \[ \nu^k_0=\sum_{i=0}^{n_k}\nu^0(A_{ki})\delta_{x_{ki}}.
\] We further claim that these may be chosen such that \begin{equation}\label{eq:inequality approx WF} \tilde{W}_1\big({\mathcal{L}}((\nu^k_t)_{0\leq t<\infty}),{\mathcal{L}}((\nu^{k+1}_t)_{0\leq t<\infty})\big)<10\cdot 2^{-k}\quad \text{for all}\quad k\geq 1. \end{equation} We inductively assume for fixed $k\geq 1$ that we have chosen sets $(A_{ki})_{i=0}^{n_k}$ satisfying \eqref{eq:condition on sets inductive step exis WF} (it is straightforward to do this for $k=1$). Then for $0\leq i\leq n_k$ we split $A_{ki}$ into $(A_{kij})_{j=0}^{n_{ki}}\in{\mathcal{B}}({\mathbb{K}})$ such that $\nu^0(A_{ki0})<\frac{\epsilon_{k+1}}{n_{k}+1}$ and $\text{diam}(A_{kij})<2^{-(k+1)}$ for $1\leq j\leq n_{ki}$, taking $x_{kij}\in A_{kij}$ for each $i,j$. We take $X^{kij}_t$ to be a $\sum_{0\leq i\leq n_k}(n_{ki}+1)$-type Wright-Fisher diffusion with initial condition \[ X^{kij}_0:=\nu^0(A_{kij}). \] We define the following two Wright-Fisher superprocesses, \[ \hat \nu^k_t:=\sum_{0\leq i\leq n_{k}}\Big(\sum_{0\leq j\leq n_{ki}}X^{kij}_t\Big)\delta_{x_{ki}}\quad\text{and}\quad \bar \nu^k_t:=\sum_{0\leq i\leq n_k}\sum_{1\leq j\leq n_{ki}}X^{kij}_t\delta_{x_{kij}}+\sum_{0\leq i\leq n_k}X^{ki0}_t\delta_{x_{k00}}. \] Using that $\text{diam}(A_{ki})<2^{-k}$ for all $1\leq i\leq n_k$, we now calculate \[ \begin{split} W_1(\hat \nu^k_t,\bar \nu^k_t)\leq \sum_{1\leq i\leq n_k}\sum_{1\leq j\leq n_{ki}}X^{kij}_t\,d(x_{ki},x_{kij})+\sum_{0\leq i\leq n_k}X^{ki0}_t+\sum_{0\leq j\leq n_{k0}}X^{k0j}_t\\ \leq 2^{-k}+\sum_{0\leq i\leq n_k}X^{ki0}_t+\sum_{0\leq j\leq n_{k0}}X^{k0j}_t.
\end{split} \] We note that $\sum_{0\leq i\leq n_k}X^{ki0}_t$ and $\sum_{0\leq j\leq n_{k0}}X^{k0j}_t$ are Wright-Fisher diffusions with initial conditions $\sum_{0\leq i\leq n_k}X^{ki0}_0=\sum_{0\leq i\leq n_k}\nu^0(A_{ki0})<\epsilon_{k+1}$ and $\sum_{0\leq j\leq n_{k0}}X^{k0j}_0=\nu^0(A_{k0})<\epsilon_k$ respectively, so that \[ {\mathbb E}[\sup_{0\leq t<\infty}W_1(\hat \nu^k_t,\bar \nu^k_t)]\leq 2^{-k}+F(\epsilon_{k+1})+F(\epsilon_k)<10\cdot 2^{-k}. \] We now define $A_{(k+1)0}:=\cup_{0\leq i\leq n_k}A_{ki0}$ and $x_{(k+1)0}:=x_{k00}$, and relabel $(A_{kij},x_{kij})$ for $0\leq i\leq n_k$, $1\leq j\leq n_{ki}$ as $(A_{(k+1)\ell},x_{(k+1)\ell})$ for $1\leq \ell\leq \sum_{0\leq i\leq n_k}n_{ki}$. This defines $x_{(k+1)i}\in A_{(k+1)i}\in {\mathcal{B}}({\mathbb{K}})$ for $0\leq i\leq n_{k+1}$ satisfying \eqref{eq:condition on sets inductive step exis WF}. Since in each case they are Wright-Fisher superprocesses with the same initial condition, we have by uniqueness in law that \[ (\nu^k_t)_{t\geq 0}\overset{d}{=} (\hat \nu^k_t)_{t\geq 0},\quad (\nu^{k+1}_t)_{t\geq 0}\overset{d}{=} (\bar \nu^k_t)_{t\geq 0}. \] Thus we have \eqref{eq:inequality approx WF} for our given $k$, and by induction in $k$ we are done. Therefore ${\mathcal{L}}((\nu^k_t)_{0\leq t<\infty})$ is a Cauchy sequence in the complete space $({\mathcal{P}}(C([0,\infty);{\mathcal{P}}_{W_1}({\mathbb{K}}))),\tilde{W}_1)$, hence converges to an element of ${\mathcal{P}}(C([0,\infty);{\mathcal{P}}_{W_1}({\mathbb{K}})))$, which must correspond to the law of a process $(\nu_t)_{0\leq t<\infty}$ defined on the canonical probability space $C([0,\infty);{\mathcal{P}}_{W_1}({\mathbb{K}}))$ with the canonical filtration.
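The quantity $F(\epsilon)={\mathbb E}[\sup_{t\geq 0}p_t]$ with $dp_t=\sqrt{p_t(1-p_t)}dW_t$, $p_0=\epsilon$, which drives the estimate above, can be probed by simulation. Since $p_t$ is a bounded martingale, the optional stopping theorem gives ${\mathbb P}(\sup_t p_t\geq a)=\epsilon/a$ for $a\geq\epsilon$, so $F(\epsilon)\to 0$ as $\epsilon\to 0$. The following crude Euler-Maruyama sketch (step size, horizon and path counts are our own choices, not from the paper) illustrates this decay.

```python
# Monte Carlo sketch of F(eps) = E[sup_t p_t] for the Wright-Fisher diffusion
# dp_t = sqrt(p_t (1 - p_t)) dW_t, p_0 = eps, via Euler-Maruyama with a
# truncated horizon; 0 and 1 are absorbing, so paths typically stop quickly.
import math
import random

def sup_of_wf_path(eps, rng, dt=0.01, horizon=25.0):
    p, sup_p, t = eps, eps, 0.0
    while t < horizon and 0.0 < p < 1.0:
        p += math.sqrt(p * (1.0 - p) * dt) * rng.gauss(0.0, 1.0)
        p = min(max(p, 0.0), 1.0)  # clamp into [0,1]; boundary is absorbing
        sup_p = max(sup_p, p)
        t += dt
    return sup_p

rng = random.Random(1)

def F_hat(eps, n_paths=300):
    """Sample mean of sup_t p_t over n_paths simulated paths."""
    return sum(sup_of_wf_path(eps, rng) for _ in range(n_paths)) / n_paths

f_small, f_big = F_hat(0.01), F_hat(0.2)
assert f_small < f_big < 1.0  # monotone in eps, consistent with F(eps) -> 0
assert f_small < 0.2          # Doob-type bound: F(eps) <= eps * (1 + log(1/eps))
```

This is only a sanity check of the qualitative behaviour $F(\epsilon)\to 0$; the proof itself uses no simulation.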
By considering the martingale problem, we see that $(\nu_t)_{t\geq 0}\in C([0,\infty);{\mathcal{P}}_{W_1}({\mathbb{K}}))$ is a Wright-Fisher superprocess with initial condition $\nu_0=\nu^0$, so we have established existence. Furthermore, the fact that $(\nu_t)_{t\geq 0}\in C([0,\infty);{\mathcal{P}}_{W_1}({\mathbb{K}}))$ almost surely, Property \ref{enum:MVWF purely atomic} and Lemma \ref{lem:Weak atomic metric = conv of atoms and weak} combine to imply that $(\nu_t)_{t\geq 0}\in C([0,\infty);{\mathcal{P}}_{\mathrm{wa}}({\mathbb{K}}))$ almost surely, where ${\mathcal{P}}_{\mathrm{wa}}({\mathbb{K}})$ denotes ${\mathcal{P}}({\mathbb{K}})$ equipped with the weak atomic topology. By uniqueness we have established Property \ref{enum:MVWF cts}. \subsubsection*{Property \ref{enum:fixed colour dist like ic}} From the martingale problem we see that $\langle \nu_t,f\rangle$ is a martingale for all $f\in C_b({\mathbb{K}})$, so that \[ \langle {\mathcal{L}}(\iota),f\rangle={\mathbb E}[f(\iota)]={\mathbb E}[\delta_{\iota}(f)]={\mathbb E}[\langle \nu_0,f\rangle]=\langle \nu^0,f\rangle. \] Since $f\in C_b({\mathbb{K}})$ is arbitrary we have \eqref{eq:law of iota = nu}. \qed \subsection{Proof of Lemma \ref{lem:cty of kernel}} Given $(t,x)\in [0,T]\times \bar D$, we consider $(X_s)_{t\leq s\leq \tau_{\partial}}$, a solution to \eqref{eq:killed process SDE} (defined also at time $\tau_{\partial}$ by the left limit) started at time $t$ and position $x$. We write $K(t,x;\cdot)\in {\mathcal{P}}(C([0,T];\bar D))$ for the law of $(t,(X_s)_{t\leq s\leq \tau_{\partial}\wedge T},t_d)$, whereby $t_d:=\tau_{\partial}$ if $\tau_{\partial}\leq T$ and $t_d:=\ast$ if $\tau_{\partial}>T$. Arguing as in \cite[Theorem 5.8, Remark 2]{Stroock1971}, the map $(t,x)\mapsto K(t,x;\cdot)$ is continuous. We recall that for a tree $\Pi$ and $v\in \Pi$, $\lvert v\rvert$ is the depth of the branch $v$ and $\lvert \Pi\rvert$ is the cardinality of the tree.
We define \[ P_m({\mathcal{T}}):={\mathcal{T}}_{\lvert_{\{v\in \Pi({\mathcal{T}}):\lvert v\rvert\leq m\}}} \] and take $(t_n,x_n)\rightarrow (t,x)$. Since the branching rate is a continuous function of time, and the killing rate is a continuous function of position, we may argue inductively that \[ [0,T]\times \bar D\ni (t,x)\mapsto {\mathcal{L}}(P_m({\mathcal{T}}^{t,x}))\in {\mathcal{P}}({\bf{C}}_T) \] is continuous. Since the branching rate of each branch is bounded by $\lambda_t(\mu)\leq \lVert \kappa\rVert_{\infty}$, so that $e^{-t\lVert \kappa\rVert_{\infty}}\lvert \Pi({\mathcal{T}})\rvert$ is a supermartingale, for all $\epsilon>0$ there exists $m(\epsilon)$ such that \[ {\mathbb P}({\mathcal{T}}^{t,x}=P_{m(\epsilon)+1}({\mathcal{T}}^{t,x}))\geq 1-\epsilon\quad\text{and}\quad {\mathbb P}({\mathcal{T}}^{t_n,x_n}=P_{m(\epsilon)+1}({\mathcal{T}}^{t_n,x_n}))\geq 1-\epsilon\quad\text{for all}\quad n. \] \qed {\textbf{Acknowledgement:}} This work was partially funded by grant 200020 196999 from the Swiss National Science Foundation. \bibliography{scalinglimitfvlib} \bibliographystyle{plain} \end{document}
\begin{document} \title{Why Do Competitive Markets Converge to First-Price Auctions?} \author{ Renato Paes Leme \\ Google Research \\ {\tt [email protected]} \and Balasubramanian Sivan \\ Google Research \\ {\tt [email protected]} \and Yifeng Teng \\ UW-Madison \\ {\tt [email protected]} } \date{} \maketitle \thispagestyle{empty} \begin{abstract} We consider a setting in which bidders participate in multiple auctions run by different sellers, and optimize their bids for the \emph{aggregate} auction. We analyze this setting by formulating a game between sellers, where a seller's strategy is to pick an auction to run. Our analysis aims to shed light on the recent change in the Display Ads market landscape: here, ad exchanges (sellers) were mostly running second-price auctions earlier and over time they switched to variants of the first-price auction, culminating in Google's Ad Exchange moving to a first-price auction in 2019. Our model and results offer an explanation for why the first-price auction occurs as a natural equilibrium in such competitive markets. \end{abstract} \setcounter{page}{1} \section{Introduction} The research questions investigated in this paper arise in the backdrop of numerous ad exchanges in the display ads market having switched to first-price auctions in recent years, culminating in Google Ad Exchange's move to first-price auction in September 2019~\cite{googFPA}. This is well summarized in this quote from Scott Mulqueen in~\cite{blogFPA}: ``\emph{Moving to a first-price auction puts Google at parity with other exchanges and SSPs in the market, and will contribute to a much fairer transactional process across demand sources.}'' In this paper we study the display ads market as a game between ad exchanges where the strategy of each exchange is the auction mechanism it picks.
Our goal in analyzing this game is to shed light on the incentives for an exchange to prefer one auction over another, and in particular, to study whether there is a strong game-theoretic justification for all exchanges converging to a first-price auction. Why is the first-price auction (FPA) special? In our model we consider a display ads market with $m$ sellers $\{1,2,\dots,m\}$ representing the ad exchanges, and $n$ buyers representing the bidding networks. Every day each of these exchanges runs billions of auctions, with buyer values drawn independently from a distribution $F$ in each auction. Exchange $j$ controls a $\lambda_j$ fraction of the auctions, i.e., any single query could originate from exchange $j$ with probability $\lambda_j$. Every exchange $j$ chooses the mechanism $\mathcal{M}_j$ it runs. The buyers then decide on a bidding strategy mapping values to bids and use the same strategy \emph{across all exchanges}, i.e., they bid in equilibrium for the average auction $\sum_j \lambda_j \mathcal{M}_j$. In this competition between exchanges, what will be the outcome? Is there an equilibrium in this game between exchanges? If so, over what class of allowed auctions? Is the equilibrium symmetric? Is it unique? These are the central questions studied in this paper. The answers to these questions are nuanced. The novelty in this competition is that when an exchange updates its auction, the change in its buyer response crucially depends on what the other exchanges were running and what their market powers were. We begin by discussing a key modeling assumption we make before discussing results and interpretations. \paragraph{Key assumption: Bidders respond to the average auction:} Why do we assume that a bidder responds to the average auction rather than choosing a different bidding strategy for each exchange? To set the context: buyers derive value by maximizing their reach, and therefore usually buy inventory from numerous ad exchanges.
This entails buyers bidding in numerous auctions simultaneously. While large sophisticated buyers devise exchange-specific strategies, smaller buyers often design a uniform strategy for the entire market. Practical reasons why buyers do not have exchange-specific strategies include: (a) bidders often track their Key Performance Indicators (KPIs), such as impressions won, clicks, conversions and ad spend, across all exchanges, and do not maintain exchange-specific goals, which leads to not having exchange-specific strategies; (b) advertisers often participate in a preliminary auction within an ad network that represents the advertisers, and the ad network picks the winning advertiser's bid or the clearing price of the preliminary auction and passes it transparently as the bid to the main auction; this again tends to encourage uniform strategies across exchanges, as the preliminary auction run by the ad network is not exchange specific; (c) developing a bidding strategy for non-truthful auctions is often costly, and the gain from customizing it per exchange might not justify the cost, especially when many exchanges use close enough auction formats. While we do not claim that every bidder is agnostic to the specific auction used in a query, the fact that a significant fraction of advertisers are justifies the model. Another assumption we make is that the buyer values are drawn iid. Is this justified? Two relevant points here are: (a) while the iid assumption does not capture all scenarios (e.g.
retargeting, where some advertiser has very high values for a returning user), it is not too removed from reality --- we note that the buyers are competing to show an ad that captures the \emph{same user's attention}; (b) even with the iid assumption, the main result is quite technically involved; when buyer values are non-iid, even the bidder equilibrium of a single-exchange first-price auction is very hard to reason about for general distributions and does not have a closed form, so gathering general insights about the equilibrium among exchanges would be practically infeasible. \paragraph{Auctions without reserves, revenue equivalence, and why is FPA sought after:} We start by analyzing a setting where each exchange always allocates to the highest bidder (i.e., no reserves), but is allowed to charge the winner an arbitrary payment no larger than his bid (as long as losers pay $0$). This serves to illustrate both (a) why first-price auctions are universally sought after and (b) the central role of the revenue equivalence theorem. When exchanges are allowed to run auctions within this class, it turns out that revenue equivalence simplifies the problem: the total revenue of all exchanges remains constant \emph{regardless of what auction any exchange runs}. So the only question that remains is how the constant-sized pie gets split between the exchanges. We argue that if not all exchanges are running first-price, at least one of those not running the first-price auction can capture a larger fraction of the pie by switching to a first-price auction. This fact makes the first-price auction the \emph{unique Nash equilibrium} of this competition between exchanges. \paragraph{Auctions with reserves, why revenue equivalence is useless, and why is FPA still sought after:} The simplicity afforded by the seemingly innocuous assumption of no reserves completely vanishes in the more realistic setting where exchanges are allowed to set reserve prices.
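The revenue equivalence fact invoked above for the no-reserve setting can be illustrated numerically in the single-auction case. The sketch below (our own setup, not the paper's multi-exchange model) compares the second-price auction under truthful bidding with the first-price auction under the standard symmetric equilibrium bid $\beta(v)=\frac{n-1}{n}v$ for $n$ iid $U[0,1]$ bidders; both yield expected revenue $\frac{n-1}{n+1}$.

```python
# Monte Carlo check of revenue equivalence for n iid U[0,1] bidders:
# SPA (truthful bids, winner pays second-highest value) vs. FPA (winner pays
# the standard symmetric equilibrium bid beta(v) = (n-1)/n * v).
import random

rng = random.Random(7)
n, trials = 3, 200000

spa_total = fpa_total = 0.0
for _ in range(trials):
    vals = sorted(rng.random() for _ in range(n))
    spa_total += vals[-2]                # SPA revenue: second-highest value
    fpa_total += (n - 1) / n * vals[-1]  # FPA revenue: winner's equilibrium bid

spa_rev, fpa_rev = spa_total / trials, fpa_total / trials
expected = (n - 1) / (n + 1)  # = 0.5 for n = 3
assert abs(spa_rev - expected) < 0.01
assert abs(fpa_rev - expected) < 0.01
```

This equality is exactly what breaks once exchanges may set distinct reserves, as discussed next.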
Indeed, even reasoning about the setting where exchanges are only allowed to run first-price auctions, but are able to set arbitrary reserves, is non-trivial (see Section~\ref{sec:simpleex} for an example). Several complications arise here, even with just two competing exchanges and two bidders. First, two exchanges running first-price auctions with reserves $r_1$ and $r_2$ lead to a different outcome (different allocations, different payments, different sum of revenues of all exchanges) than two exchanges running second-price auctions with reserves $r_1$ and $r_2$ (compare this to the case where exchanges are forced to use the same fixed reserve or have no reserve price at all, in which the auction choice does not matter). As a result, the total size of the revenue pie is no longer constant. Second, the equilibrium bidding function is not even continuous, with every distinct reserve contributing an additional segment to the bidding function. Third, a symmetric pure-strategy equilibrium among the exchanges ceases to exist in general. Fourth, the pure-strategy equilibrium is neither revenue optimal nor welfare optimal. \paragraph{First-price auction's desirability is robust:} Notwithstanding all the complications listed earlier, we show that when each exchange is allowed to choose between a first-price auction with an arbitrary reserve of the exchange's choice and a second-price auction with an arbitrary reserve of the exchange's choice, every exchange will pick the first-price auction with a reserve! Thus, the first-price auction's desirability is not limited to the case where we are able to apply revenue equivalence. The proof of this fact is quite involved and entails analyzing the different segments of the bidding functions to infer consequential properties.
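To see concretely how a reserve reshapes the equilibrium bidding function in a single first-price auction, recall the textbook symmetric equilibrium with reserve $r$ and two iid $U[0,1]$ bidders: $\beta(v)=v-\frac{\int_r^v F(t)\,dt}{F(v)}=\frac{v^2+r^2}{2v}$ for $v\geq r$ (bidders with $v<r$ abstain). The sketch below (illustrative names, single-auction setting rather than the paper's competing-exchange game) verifies this formula numerically as a best response.

```python
# Symmetric equilibrium of a single first-price auction with reserve r,
# two bidders with iid U[0,1] values (standard textbook formula):
#   beta(v) = (v*v + r*r) / (2*v)  for v >= r;  values below r abstain.
# We verify beta by grid search over best responses against beta itself.
import math

def beta(v, r):
    return (v * v + r * r) / (2.0 * v)

def win_prob(b, r):
    """P(win) with bid b when the opponent bids beta(V), V ~ U[0,1],
    and abstains if V < r."""
    if b < r:
        return 0.0
    # Invert beta: beta(V) < b  iff  V < b + sqrt(b^2 - r^2).
    return min(b + math.sqrt(max(b * b - r * r, 0.0)), 1.0)

def best_response(v, r, grid=4000):
    bids = [r + (v - r) * i / grid for i in range(grid + 1)]
    return max(bids, key=lambda b: (v - b) * win_prob(b, r))

r, v = 0.2, 0.6
assert abs(beta(r, r) - r) < 1e-9              # bidding starts at the reserve
assert abs(best_response(v, r) - beta(v, r)) < 1e-2
```

The jump at $v=r$ (abstain below, bid $r$ at $r$) is the simplest instance of the segment structure mentioned above; with two competing exchanges, each distinct reserve adds a further segment.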
While our analysis justifies a switch to first-price auctions from a game-theoretic angle, there are several other arguments supporting a first-price auction in general, e.g., the extra transparency/credibility offered by first-price auctions~\cite{AL18}, and the uniqueness of equilibrium~\cite{CH13} offered by first-price semantics compared to second-price semantics. \paragraph{Related Work.} Auctions have been compared along various desiderata in the past. In their famous essay ``The Lovely But Lonely Vickrey Auction''~\cite{AM06}, Ausubel and Milgrom discuss how the Vickrey auction is rarely used in practice despite its academic importance, and how the very closely related ascending auction happens to be quite popular (Christie's, Sotheby's and eBay all use some variant of the ascending auction). In a related paper~\cite{AM02}, Ausubel and Milgrom compare the VCG auction with ascending package auctions in terms of the bidders' ability to understand them, their incentive properties, their equilibrium outcomes, etc. However, this is the first work to compare auctions from a revenue-performance-in-a-competitive-marketplace standpoint and establish the superior performance of the first-price auction. Closely related to this paper is the stream of work on \emph{competing mechanism designers} by McAfee \cite{mcafee1993mechanism}, Peters and Severinov \cite{peters1997competition}, Burguet and Sakovics \cite{burguet1999imperfect}, Pavan and Calzolari \cite{pavan2010truthful} and Pai \cite{Pai09}. We refer to the excellent survey by Mallesh Pai on the topic \cite{Pai10}. In McAfee's model \cite{mcafee1993mechanism} there is a (repeated) two-stage game where sellers first propose an auction mechanism and then unit-demand buyers decide which mechanism they will participate in. A seller's choice of mechanism needs to balance extracting revenue against attracting buyers to the auction.
The crucial difference between that model and ours is that buyers in our model want to buy as much inventory as possible and therefore bid in all mechanisms simultaneously. Instead of choosing one mechanism to participate in, our buyers respond to the average mechanism. \paragraph{Organization.} In Section~\ref{sec:prelim} we provide some preliminary details. In Section~\ref{sec:noreserve} we consider the case of auctions without reserves. In Section~\ref{sec:withreserve} we consider the case of auctions with reserves. In particular, in Section~\ref{sec:mainresults} we state our main results. In Section~\ref{sec:simpleex} we consider a simple example with two bidders and two exchanges to illustrate the complications that manifest themselves when we allow auctions with reserves. In Section~\ref{sec:thmproofs} we give the proofs of all our results. \section{Notations and Models} \label{sec:prelim} \subsection{Sealed-bid auctions and Bayes-Nash equilibrium} In a single-item sealed-bid auction with $n$ bidders, each bidder's private value $v_i$ is drawn from a publicly known distribution $F_i$. A mechanism $(\textbf{x},\textbf{p})$ maps the bids $\textbf{b} = (b_1, \hdots, b_n)$ submitted by the bidders to an allocation $x_i(\textbf{b})$, the probability that bidder $i$ is allocated the item, and a payment $p_i(\textbf{b})$ that bidder $i$ needs to pay. Each bidder's strategy corresponds to a bidding function $\beta_i:\mathbb{R}\to\mathbb{R}$ mapping his value to a bid. Denote by $\betas(\textbf{v})=(\beta_1(v_1), \beta_2(v_2), \cdots, \beta_n(v_n))$ the vector of bids from all bidders with value profile $\textbf{v}=(v_1,v_2,\cdots,v_n)$. The goal of each bidder $i$ is to maximize his (quasi-linear) utility $u_i(\textbf{b})=x_i(\textbf{b})v_i-p_i(\textbf{b})$.
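As a concrete illustration, the first-price instantiation of $(\textbf{x},\textbf{p})$ and the quasi-linear utility above can be written out in a few lines. This is a minimal Python sketch of our own, not part of the paper's formal development; the function names are ours.

```python
# Minimal sketch (not from the paper): a single-item first-price sealed-bid
# auction as an (allocation, payment) pair, plus a bidder's quasi-linear utility.
from typing import List, Tuple

def first_price(bids: List[float]) -> Tuple[List[float], List[float]]:
    """Highest bidder wins (ties broken toward the lowest index) and pays his bid."""
    winner = max(range(len(bids)), key=lambda i: (bids[i], -i))
    x = [0.0] * len(bids)
    p = [0.0] * len(bids)
    x[winner] = 1.0
    p[winner] = bids[winner]  # the winner pays his own bid
    return x, p

def utility(i: int, values: List[float], bids: List[float]) -> float:
    """Quasi-linear utility u_i(b) = x_i(b) * v_i - p_i(b)."""
    x, p = first_price(bids)
    return x[i] * values[i] - p[i]
```

For instance, with values $(0.9, 0.5)$ and bids $(0.45, 0.25)$, bidder 1 wins, pays his bid $0.45$, and obtains utility $0.9 - 0.45 = 0.45$.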
We say that the bidders' bidding strategy profile $(\beta_1,\cdots,\beta_n)$ forms a Bayes-Nash equilibrium if under this profile no single bidder has an incentive to switch his bidding strategy, assuming the bidders only know the other bidders' value distributions and not their true values. That is, \begin{equation*} \mathbb{E}_{\textbf{v}_{-i}\sim \bm{F}_{-i}}[u_i(\beta_i(v_i),\betas_{-i}(\textbf{v}_{-i}))]\geq \mathbb{E}_{\textbf{v}_{-i}\sim \bm{F}_{-i}}[u_i(\gamma_i(v_i),\betas_{-i}(\textbf{v}_{-i}))] \end{equation*} for any other bidding function $\gamma_i$. \subsection{Competitive market as a two-stage game} We consider an auction market with a set of bidders $N=\{1,2,\cdots,n\}$ and a set of sellers/exchanges $\{1,2,\cdots,m\}$. Each bidder $i$ has a value $v_i\geq 0$ for the item sold by the exchanges in the market. In this paper we focus on the symmetric setting, where each bidder's valuation $v_i$ is drawn independently and identically from a publicly known distribution $F$. Throughout the paper, we assume the value distribution $F$ is differentiable on $[0,\infty)$, has no point mass, and that its continuous density function $f$ is positive on $(0,\infty)$. While bidders repeatedly interact with the exchanges billions of times a day, every such interaction is a priori identical: namely, with probability $\lambda_j$, a query arrives from exchange $j$ (here $\lambda_j$ indicates exchange $j$'s market power) asking the bidder to submit a bid in exchange $j$'s mechanism $\mathcal{M}_j$, and in each such interaction every bidder's value is drawn independently from $F$ (independently of previous rounds and of the other bidders). A bidder $i$'s bid in every individual interaction is a function exclusively of his own value $v_i$ and the other bidders' distribution $F$, and in particular \emph{not a function of which specific exchange} $j$ sent this query (see the Introduction for motivation).
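The Bayes-Nash condition can be made concrete with a small numerical sketch (our own illustration, under assumptions not stated in the paper: two bidders with $U[0,1]$ values, a first-price rule, and an opponent following the textbook equilibrium bid $v/2$). A grid search over deviations then recovers $\beta(v)=v/2$ as a best response.

```python
# Sketch (assumptions: two bidders, values ~ U[0,1], opponent bids v_op/2,
# the textbook first-price equilibrium). We check numerically that bidding
# half one's own value is a best response, matching the BNE condition.

def expected_utility(v: float, b: float) -> float:
    # Opponent's bid is v_op/2 with v_op ~ U[0,1], so Pr[opponent's bid < b] = min(2b, 1).
    win_prob = min(2.0 * b, 1.0)
    return win_prob * (v - b)

def best_response(v: float, grid: int = 10001) -> float:
    # Exhaustive search over a fine grid of candidate deviations gamma(v) = b.
    bids = [k / (grid - 1) for k in range(grid)]
    return max(bids, key=lambda b: expected_utility(v, b))

v = 0.6
print(best_response(v))  # close to v / 2 = 0.3
```

The same search at other values of $v$ again returns approximately $v/2$, which is the sense in which no unilateral deviation $\gamma_i$ helps.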
Given the a priori identical interactions, it follows that it is sufficient to model and study a single such interaction, which is what we do in this paper. We model the whole market as a two-stage game. The game is parametrized by a class of mechanisms $\mathcal{C}$: \begin{itemize} \item In the first stage, each exchange $j$ proposes a mechanism $\mathcal{M}_j$ simultaneously. The mechanism $\mathcal{M}_j=(\textbf{x}^j,\textbf{p}^j) \in \mathcal{C}$ is a sealed-bid auction that sells a single item to $n$ bidders. For any bidding profile $\textbf{b}=(b_1,b_2,\cdots,b_n)\in\mathbb{R}^n$ submitted by the bidders, the auction $\mathcal{M}_j$ charges each bidder $i$ a non-negative price $p^j_i(\textbf{b})$, and allocates the item to bidder $i$ with probability $x^j_i(\textbf{b})$. \item In the second stage, each bidder $i$ proposes a bid $b_i=\beta_i(v_i)$ according to his private value $v_i$, where $\beta_i:\mathbb{R}\to\mathbb{R}$ is the bidding function of bidder $i$. The same bid will be submitted regardless of which exchange $j$ solicits bids for this query. Given that each query comes from exchange $j$ with probability $\lambda_j$, the bidder will get allocation $x_i(\textbf{b})=\sum_{j}\lambda_j x^j_i(\textbf{b})$ and be charged $p_i(\textbf{b})=\sum_{j}\lambda_j p^j_i(\textbf{b})$. In other words, the bidders respond to the average mechanism $(\textbf{x},\textbf{p})=(\sum_{j}\lambda_j\textbf{x}_j,\sum_{j}\lambda_j\textbf{p}_j)$ proposed by the exchanges (again, see the Introduction for motivation). Later in the paper we may also refer to such an auction as $\sum_{j=1}^{m}\lambda_j\mathcal{M}_j$. \end{itemize} In this two-stage game, the objective of each bidder $i$ is to maximize his expected utility. Since the setting is symmetric, we are concerned with symmetric bidding equilibria only, where all bidders use the same bidding function $\beta=\beta_1=\beta_2=\cdots=\beta_n$ in the Bayes-Nash equilibrium.
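The second-stage averaging can be sketched in code. The following is an illustrative Python sketch of our own (the helper names and signatures are ours, not from the paper): each exchange contributes its allocation/payment rule with weight $\lambda_j$, yielding the average mechanism the bidders respond to.

```python
# Sketch (not the paper's code): the "average mechanism" that bidders face.
# Each exchange j contributes its allocation/payment rule with weight lambda_j.
from typing import Callable, List, Tuple

Mechanism = Callable[[List[float]], Tuple[List[float], List[float]]]

def fp_with_reserve(r: float) -> Mechanism:
    def mech(bids: List[float]):
        n = len(bids)
        x, p = [0.0] * n, [0.0] * n
        winner = max(range(n), key=lambda i: (bids[i], -i))
        if bids[winner] >= r:                 # highest bid must clear the reserve
            x[winner], p[winner] = 1.0, bids[winner]
        return x, p
    return mech

def sp_with_reserve(r: float) -> Mechanism:
    def mech(bids: List[float]):
        n = len(bids)
        x, p = [0.0] * n, [0.0] * n
        order = sorted(range(n), key=lambda i: (bids[i], -i), reverse=True)
        winner = order[0]
        if bids[winner] >= r:                 # pays max(second-highest bid, reserve)
            second = bids[order[1]] if n > 1 else 0.0
            x[winner], p[winner] = 1.0, max(second, r)
        return x, p
    return mech

def average_mechanism(mechs: List[Mechanism], lam: List[float]) -> Mechanism:
    def mech(bids: List[float]):
        n = len(bids)
        x, p = [0.0] * n, [0.0] * n
        for m, l in zip(mechs, lam):
            xj, pj = m(bids)
            for i in range(n):
                x[i] += l * xj[i]             # x_i(b) = sum_j lambda_j x^j_i(b)
                p[i] += l * pj[i]             # p_i(b) = sum_j lambda_j p^j_i(b)
        return x, p
    return mech
```

For example, with $\lambda=(0.5,0.5)$, one exchange running a first-price auction and the other a second-price auction (both with reserve $0.2$), and bids $(0.6, 0.4)$, the winner is allocated with probability $1$ and pays $0.5\cdot 0.6+0.5\cdot 0.4=0.5$ on average.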
In the classes $\mathcal{C}$ of mechanisms that we analyze, a symmetric bidding equilibrium always exists. For the class of mechanisms $\mathcal{C}$ considered in Section~\ref{sec:withreserve}, the symmetric bidding equilibrium is essentially unique\footnote{I.e., unique if we ignore the bids below the smallest reserve price (it is okay to ignore what happens in this regime because any such bid leads to no allocation and generates zero payment), and if we ignore the bids at points of discontinuity of the bidding function, where the bidder is indifferent between the two bids at the discontinuity.}. For the class $\mathcal{C}$ considered in Section~\ref{sec:noreserve}, every symmetric bidding equilibrium implements the same social choice function (i.e., the allocation) and charges identical interim payments, thereby making uniqueness of the symmetric bidding equilibrium irrelevant. The objective of each exchange $j$ is to maximize its expected revenue when bidders are bidding against the average mechanism, which is $$\normalfont\textsc{Rev}_j = \mathbb{E}_v\left[\sum_{i}p^j_i(\betas(\textbf{v}))\right].$$ As usual in mechanism design, when choosing a new auction rule the exchanges need to reason about how it will affect the bidding strategies of the buyers. The twist in this model is that the chosen mechanism only partially affects the average mechanism that the buyers are reacting to. This leads to a situation very much like \emph{tragedy of the commons} games. \section{Warm Up: High-bid auctions} \label{sec:noreserve} As a warm up we start by analyzing the game between exchanges where each exchange is restricted to using a \emph{high-bid auction}, i.e., an auction where the item is always allocated to the highest bidder. This is satisfied by various natural auction formats: second-price, first-price, pricing at any convex combination of the first and second prices, and so on.
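That formats in this high-bid class extract the same expected revenue in equilibrium can be sanity-checked numerically. The following is a hedged Monte Carlo sketch of our own (assumptions: two bidders with i.i.d. $U[0,1]$ values, the first-price equilibrium bid $v/2$, and truthful second-price bidding); both estimates come out near $1/3$, previewing the revenue-equivalence argument below.

```python
# Sketch (assumptions: two bidders, i.i.d. U[0,1] values; first-price
# equilibrium bid is v/2, second-price bidding is truthful). Both high-bid
# formats allocate to the highest value and share expected revenue 1/3.
import random

random.seed(0)
N = 200_000
fp_rev = sp_rev = 0.0
for _ in range(N):
    v1, v2 = random.random(), random.random()
    hi, lo = max(v1, v2), min(v1, v2)
    fp_rev += hi / 2.0   # winner pays his own bid beta(v) = v/2
    sp_rev += lo         # winner pays the second-highest (truthful) bid
print(fp_rev / N, sp_rev / N)  # both approximately 1/3
```

The two sample means agree up to Monte Carlo noise, as revenue equivalence predicts for auctions implementing the same allocation.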
This apparently innocuous assumption, however, prevents the use of reserve prices -- which is the main lever used in practice for revenue optimization. We will come back to reserve prices in the next section. Analyzing the equilibrium of the market seems challenging, as the bidders' bidding strategy changes dramatically whenever an exchange proposes a different auction. Surprisingly, if each exchange proposes an auction from the above class of mechanisms, we can show that every exchange proposing the first-price auction is the unique Nash equilibrium. \begin{theorem} \label{thm:noreserve} If every exchange is only allowed to use auctions with the following properties: \begin{enumerate} \item Highest bidder always wins (this implies a zero reserve price); \item Winner pays no more than his bid; \item Losers always pay 0; \item The bidders have a pure-strategy symmetric Bayes-Nash bidding equilibrium. \end{enumerate} Then every exchange proposing the first-price auction is the unique Nash equilibrium. \end{theorem} \begin{proof} Consider the aggregated average mechanism. Since the highest-valued bidder always wins in any auction proposed by any exchange, the highest-valued bidder always wins in the aggregated average mechanism. Notice that the Revenue Equivalence Theorem (see \citet{vickrey1961counterspeculation,Riley1981optimal,myerson1981optimal}) implies that auctions whose equilibria implement the same social choice function always extract the same revenue in those equilibria. Given that the bidders have a pure-strategy symmetric BNE and the highest bidder always wins, the highest-valued buyer always wins --- i.e., all such auctions implement the same social choice function. Thus the expected sum of revenue over all exchanges $\sum_i \lambda_i \normalfont\textsc{Rev}_i$ will be a constant, no matter what auctions are proposed by the exchanges (i.e., any auction satisfying properties (1) and (4) in the theorem statement, which get used in the Revenue Equivalence Theorem).
Fixing the bids of the bidders, and respecting properties (2) and (3) in the theorem statement, a first-price auction extracts the highest possible revenue. An exchange running the first-price auction will therefore extract at least the average revenue among all exchanges, with equality only if all exchanges are running the first-price auction. If all exchanges are running the first-price auction, then no exchange would deviate to another auction to get below-average revenue. If not all exchanges are running the first-price auction, there are two possible cases: (i) some but not all exchanges are using first-price auctions. In this case some exchange $j$ must be getting below-average revenue. By deviating to first-price, this exchange can guarantee at least the average revenue, since $\normalfont\textsc{Rev}_j \geq \normalfont\textsc{Rev}_i$ for all $i$ and hence $\normalfont\textsc{Rev}_j \geq \sum_i \lambda_i \normalfont\textsc{Rev}_i$, which is constant. (ii) If no exchange is using first-price, then there is some exchange $j$ that is at the average or below. By switching to first-price, this exchange can guarantee that $\normalfont\textsc{Rev}_j > \normalfont\textsc{Rev}_i$ for all $i \neq j$ and hence $\normalfont\textsc{Rev}_j > \sum_i \lambda_i \normalfont\textsc{Rev}_i$. In either case, if not all exchanges are running first-price, there is at least one exchange that would switch to the first-price auction to get a strict revenue improvement. Thus in a Nash equilibrium, every exchange must propose the first-price auction. \end{proof} \paragraph{Necessity of properties 2 and 3:} The second and third properties in Theorem~\ref{thm:noreserve} cannot be removed. For the second property, suppose that every exchange proposes the first-price auction, and that one exchange switches to the following auction: the highest bidder wins and pays 100 times his bid. Then that exchange will get more revenue than all the other exchanges using the first-price auction, and thus above-average revenue.
This means that its revenue increases after such a change to its proposed auction. For the third property, again suppose that every exchange uses the first-price auction. Then an exchange would prefer to switch to the all-pay auction, where each bidder pays his bid. This leads to its revenue being above average among all exchanges, thus increasing its revenue. We argue that the second and third properties are also natural in real-world auctions. However, the first property is a huge restriction on the class of possible mechanisms. For example, most ad exchanges use a first-price auction or a second-price auction with reserve prices. An auction with a reserve price violates the first property, since if the bids of all bidders are below the reserve price, no bidder will get served. In the next section, we discuss what happens when exchanges are allowed to use auctions with reserve prices. As for property 4, most natural auctions have a pure-strategy symmetric BNE. For example, the first-price auction, the second-price auction, an auction where the highest bidder wins and pays a convex combination of the first and second highest bids, and even the all-pay auction (which violates property (3)), all satisfy property (4). \section{Auctions with reserve price} \label{sec:withreserve} \subsection{Main results} \label{sec:mainresults} When the highest bidder does not always get the item, the convenient and straightforward application of the revenue equivalence theorem is no longer possible\footnote{Although the revenue equivalence theorem itself remains true, any straightforward application of it is not possible.}. Even in very simple settings, as we demonstrate in Section~\ref{sec:simpleex}, the details are quite involved. The most ubiquitous auctions used in real-world ad exchanges are the first-price auction with reserve prices and the second-price auction with reserve prices. We show that given these choices, no exchange will propose a second-price auction with reserve in an equilibrium.
\begin{theorem} \label{thm-fpspne} In a market with $m\geq 2$ exchanges, if each exchange is only allowed to use a first-price auction with reserve or a second-price auction with reserve, then in a pure-strategy equilibrium, every exchange will use a first-price auction with reserve. \end{theorem} Now that we have established that every exchange would pick a first-price auction with a reserve, we study the properties of the exchanges' equilibrium in such a first-price market. The first property is that every exchange using a first-price auction \emph{without reserve} cannot be an equilibrium of the market. Note that this immediately justifies the necessity of property 1 in Theorem \ref{thm:noreserve}. \begin{theorem}\label{thm-allzero} Every exchange proposing the first-price auction with no reserve is not a Nash equilibrium. \end{theorem} The second property is that the market does not admit a symmetric pure-strategy equilibrium. \begin{theorem}\label{thm-asymmetric} Every exchange proposing the first-price auction with the same reserve $r$ is not a Nash equilibrium. \end{theorem} A direct corollary of the above two theorems is that the competition in the market leads to a decrease in the total revenue of all exchanges. We know that for bidders with a regular value distribution $F$, the second-price auction with reserve is the revenue-optimal auction \cite{myerson1981optimal}, and every revenue-optimal auction must have the same allocation rule\footnote{Indeed the revenue-optimal auction is unique for strictly regular distributions because Myerson's theorem~\cite{myerson1981optimal} shows that the optimal auction should not have a non-zero probability of allocation to any agent with a negative virtual value or to any agent with lower than the highest virtual value.}. By the revenue equivalence theorem, the first-price auction with the same reserve is also revenue-optimal.
However, since every exchange proposing the first-price auction with the same reserve cannot be an equilibrium, it means that in a competitive market the total revenue of the exchanges decreases. To summarize, we have the following corollary. \begin{corollary}\label{cor-lessrev} Suppose the bidders' valuation distribution $F$ is strictly regular (virtual values are strictly increasing). Then in an equilibrium of the market, the total revenue of all exchanges is lower than in the revenue-optimal auction. \end{corollary} In the next section, we study the above theorems in a specific market setting, and prove the theorems in the general case in Section~\ref{sec:thmproofs}. \subsection{An example study with two exchanges and two bidders, in a first-price market} \label{sec:simpleex} Let us consider a simple example with two bidders and two exchanges, and study the equilibrium of the market. Suppose that there are two bidders with values drawn from the uniform distribution $F=U[0,1]$. There are two exchanges, each with $\lambda_1=\lambda_2=0.5$ fraction of the traffic. Let $\normalfont\textsc{Rev}_j(\mathcal{M}_1,\mathcal{M}_2)$ denote the revenue obtained by exchange $j$ when the two exchanges propose mechanisms $\mathcal{M}_1$ and $\mathcal{M}_2$. Let $\normalfont\textrm{FP}_r$ denote the first-price auction with reserve $r$. Suppose we only allow each exchange to propose a first-price auction, but with an arbitrary reserve of its choosing. Assume that exchange 1 proposes $\normalfont\textrm{FP}_{r_1}$, while exchange 2 proposes $\normalfont\textrm{FP}_{r_2}$. Since the setting is symmetric, without loss of generality assume that $r_1\leq r_2$.
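The proposition below gives closed forms for the thresholds $a$, $c$ and the two exchanges' revenues in this example. As a numerical sanity check (a Python sketch of our own, under the section's assumptions of two bidders with $U[0,1]$ values and $\lambda_1=\lambda_2=0.5$), evaluating those closed forms reproduces the revenue figures quoted later in this section.

```python
# Sketch: evaluate the closed forms for the thresholds a, c and the exchanges'
# revenues (derived in the proposition below) for reserves r1 <= r2, under
# two bidders with U[0,1] values and lambda_1 = lambda_2 = 0.5.
import math

def thresholds(r1: float, r2: float):
    assert 0 <= r1 <= r2
    a = (2 * r2 + math.sqrt(4 * r2 * r2 - 3 * r1 * r1)) / 3
    c = 2 * a * r2 - a * a          # continuity of beta at v = a: beta(a) = r2
    return a, c

def revenues(r1: float, r2: float):
    a, c = thresholds(r1, r2)
    rev1 = -4 / 3 * r1 ** 3 + r1 ** 2 * a + c - c * a + 1 / 3
    rev2 = 1 / 3 - a ** 3 / 3 + c - c * a
    return rev1, rev2

print(revenues(0.0, 0.0))        # (1/3, 1/3): no reserves
print(revenues(0.0, 0.1)[1])     # about 0.3402: deviating to reserve 0.1 helps
print(revenues(0.2402, 0.3157))  # about (0.397, 0.378): the market equilibrium
```

The three printed cases match, respectively, the no-reserve revenue of $\frac{1}{3}$, the deviation revenue $\approx 0.3402$ in Corollary~\ref{cor-reserve-useful}, and the equilibrium revenues $0.397$ and $0.378$ in Corollary~\ref{cor-speciallessrev}.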
\paragraph{Straightforward revenue equivalence is fruitless:} First we note that the sum of the revenues of the two exchanges when they run $\normalfont\textrm{FP}_{r_1}$ and $\normalfont\textrm{FP}_{r_2}$ is \emph{not revenue equivalent} to the setting where both exchanges run second-price auctions with reserves $r_1$ and $r_2$, i.e., when they run $\normalfont\textrm{SP}_{r_1}$ and $\normalfont\textrm{SP}_{r_2}$. To see this, consider a setting where $r_1 = 0.7 \leq r_2 = 0.9$. When the exchanges run $\normalfont\textrm{SP}_{r_1}$ and $\normalfont\textrm{SP}_{r_2}$, the bidders bid truthfully. Suppose $v_2 = r_2 + \epsilon = 0.9 + \epsilon$ for $\epsilon > 0$. Bidder $2$ will win the item with probability $0.9 + \epsilon$ (because bidder $1$'s value is drawn from $U[0,1]$). On the other hand, in the first-price auction, bidder $2$ will shade his bids (i.e., not bid truthfully) and therefore will not get allocated with probability more than $0.5$, even when his value is $v_2 = 0.9 + \epsilon$. Indeed, suppose bidder $2$ was allocated when exchange $2$ was running the auction\footnote{It is enough to consider deterministic bidding strategies for bidder $2$, as his utility would be equal in every single best response he has, and clearly there is always a deterministic best response.}. Then clearly $b_2$ should have been at least $r_2$, making $2$'s payment (which is just $b_2$) also at least $r_2$, and his utility at most $\epsilon$. On the other hand, suppose bidder $2$ just set $b_2 = 0.7$. Since $b_1 \leq v_1$, bidder $2$ will be served with probability at least $0.7 * 0.5$ (here $0.5$ captures $\lambda_1$ and $0.7$ captures the $U[0,1]$ distribution) and therefore get a utility of at least $0.5 * 0.7 * (0.9+\epsilon-0.7) = 0.07 + 0.35\epsilon > \epsilon$ for sufficiently small $\epsilon$.
Thus, bidders with values slightly higher than $r_2$ would bid lower than $r_2$, leading to a different allocation than in the second-price auction, and thus ruling out an easy application of revenue equivalence. \paragraph{Discontinuous bidding functions:} Another peculiarity that manifests is the discontinuity of the equilibrium bidding function, even when the distribution is as nice as $U[0,1]$. To see this, let $\beta(\cdot)$ be the equilibrium bidding function (which is clearly weakly monotone). Consider the smallest value $v$ s.t. $\beta(v) = r_2$. Under any best-response bidding function, the utility of a buyer with value $v-\epsilon$ is at most $\epsilon$ smaller than the utility of a buyer with value $v$ (the buyer with $v-\epsilon$ can simply imitate $v$'s bid to satisfy this). But a bidder with $v > r_2$ bidding at $b \geq r_2$ gets allocated with about twice the probability and gets about twice the utility when compared to the buyer with value infinitesimally smaller than $v$ bidding $b < r_2$. This is a contradiction, thereby ruling out continuity of the bidding function. Let us first compute the equilibrium bidding strategy of the bidders and the revenue obtained by each exchange. \begin{proposition} Let $a=\frac{2r_2+\sqrt{4r_2^2-3r_1^2}}{3}$ and $c=\frac{4r_2^2+3r_1^2+2r_2\sqrt{4r_2^2-3r_1^2}}{9}$. Then \begin{itemize} \item Under equilibrium bidding, both bidders use the bidding function $$\beta(v)=\frac{v^2+r_1^2}{2v} \text{ for } v\in[r_1,a)$$ $$\beta(v)=\frac{v^2+c}{2v} \text{ for } v\in[a,1].$$ \item The revenue is described by the following formulas: $$\normalfont\textsc{Rev}_1(\normalfont\textrm{FP}_{r_1},\normalfont\textrm{FP}_{r_2})=-\frac{4}{3}r_1^3+r_1^2a+c-ca+\frac{1}{3}$$ $$\normalfont\textsc{Rev}_2(\normalfont\textrm{FP}_{r_1},\normalfont\textrm{FP}_{r_2})=\frac{1}{3}-\frac{1}{3}a^3+c-ca.$$ \end{itemize} \end{proposition} \begin{proof} The bidding function $\beta$ has the following properties.
When $v<r_1$, $\beta(v)<r_1$, as the winner has to pay at least $r_1$ when he wins; therefore a bid of $0$ is as good for the bidder as any other bid below $r_1$. There exists some value $a$ such that $\beta^+(a)=r_2$ and $\beta^-(a)<r_2$, where $\beta^+$ and $\beta^-$ denote the right-side and left-side limits of the bidding function $\beta$. Since $\beta$ is a symmetric equilibrium, the bidder with value $v$ will bid the $b=\beta(v)$ that maximizes his expected utility, which is $F(\beta^{-1}(b))(v-b)$, i.e., the probability of winning multiplied by the utility when he wins. Let $f=F'$ be the density function of $F$. Taking the first-order condition we have \[0=\frac{f(\beta^{-1}(b))}{\beta'(\beta^{-1}(b))}(v-b)-F(\beta^{-1}(b)).\] Applying $b=\beta(v)$ to the above equation we get \begin{equation}\label{eqn-first-order} f(v)v=\beta'(v)F(v)+f(v)\beta(v)=\frac{d}{dv}\big(F(v)\beta(v)\big). \end{equation} Integrating both sides from $r_1$ to $v$ for $v\in(r_1,a)$ we have \begin{equation}\label{eqn-first-part} \int_{r_1}^{v}tf(t)dt=F(v)\beta^-(v)-F(r_1)\beta^+(r_1). \end{equation} Applying $F(v)=v$ to the above equation we get $\beta(v)=\frac{v^2+r_1^2}{2v}$ for $v\in[r_1,a)$. Similarly, integrating both sides of equation (\ref{eqn-first-order}) for $v\in[a,1]$ and applying $F(v)=v$ we get $\beta(v)=\frac{v^2+c}{2v}$ for $v\in[a,1]$, where $c=2ar_2-a^2$ is a constant. \begin{figure} \caption{An example of the equilibrium bidding function when the exchanges propose auctions $\normalfont\textrm{FP}_{r_1}$ and $\normalfont\textrm{FP}_{r_2}$.} \label{fig-example} \end{figure} Now we solve for the value of $a$. A bidder with value $a$ is indifferent between bidding $\beta^+(a)$ and $\beta^-(a)$; the utility he gets is the same under these two bids, in other words, $a-\beta^-(a)=2(a-\beta^+(a))=2(a-r_2)$. Applying $\beta^-(a)=\frac{a^2+r_1^2}{2a}$ we get $a=\frac{2r_2+\sqrt{4r_2^2-3r_1^2}}{3}$ and $c=\frac{4r_2^2+3r_1^2+2r_2\sqrt{4r_2^2-3r_1^2}}{9}$. Notice that the distribution of the higher of the two bidders' values has cumulative distribution function $v^2$ and density function $2v$.
Thus the revenue of exchange 1 is \begin{eqnarray*} \normalfont\textsc{Rev}_1(\normalfont\textrm{FP}_{r_1},\normalfont\textrm{FP}_{r_2})&= &\int_{r_1}^{a}2v\beta(v)dv+\int_{a}^{1}2v\beta(v)dv\\ &=&\int_{r_1}^{a}(v^2+r_1^2)dv+\int_{a}^{1}(v^2+c)dv\\ &=&-\frac{4}{3}r_1^3+r_1^2a+c-ca+\frac{1}{3}. \end{eqnarray*} The revenue of exchange 2 is \[\normalfont\textsc{Rev}_2(\normalfont\textrm{FP}_{r_1},\normalfont\textrm{FP}_{r_2})=\int_{a}^{1}2v\beta(v)dv=\frac{1}{3}-\frac{1}{3}a^3+c-ca.\] \end{proof} Given the characterization of the bidding function and the revenue functions in the market, we are ready to verify the theorems and corollaries of the previous section. \begin{corollary}\label{cor-reserve-useful} (Special case of Theorem \ref{thm-allzero}) If one exchange proposes the first-price auction with zero reserve, the other exchange may propose a first-price auction with some reserve price to gain more revenue. \end{corollary} \begin{proof} When both exchanges use no reserve, the revenue of exchange 1 is $\frac{1}{3}$. However, if it switches to the first-price auction with reserve 0.1, its revenue would be $\normalfont\textsc{Rev}_1(\normalfont\textrm{FP}_{0.1},\normalfont\textrm{FP}_0)\approx0.3402>\frac{1}{3}$. Thus setting up a reserve may lead to higher revenue for the exchange. \end{proof} \begin{corollary}\label{cor-asymmetric-reserve} (Special case of Theorem \ref{thm-asymmetric}) The pure-strategy equilibrium of the market will not be symmetric. That is, both exchanges proposing the first-price auction with the same reserve cannot be a Nash equilibrium. \end{corollary} \begin{proof} Corollary \ref{cor-reserve-useful} shows that $(\normalfont\textrm{FP}_0,\normalfont\textrm{FP}_0)$ cannot be an equilibrium. It remains to show that $(\normalfont\textrm{FP}_r,\normalfont\textrm{FP}_r)$ cannot be an equilibrium for any $r\in(0,1]$.
The idea is that for any $r\in(0,1]$, there exists a small $\epsilon>0$ such that $\normalfont\textsc{Rev}_1(\normalfont\textrm{FP}_{r-\epsilon},\normalfont\textrm{FP}_{r})>\normalfont\textsc{Rev}_1(\normalfont\textrm{FP}_{r},\normalfont\textrm{FP}_{r})$. This can be done by verifying that $\frac{\partial}{\partial r_1}\normalfont\textsc{Rev}_1(\normalfont\textrm{FP}_{r_1},\normalfont\textrm{FP}_{r_2})\Big|_{r_1=r_2}=-2r_2^2\leq0$ for any $r_2\in[0,1]$. \end{proof} \begin{corollary}\label{cor-speciallessrev} (Special case of Corollary \ref{cor-lessrev}) The pure-strategy equilibrium of the exchanges in the market will not be revenue-optimal. \end{corollary} \begin{proof} Given the revenue functions of the exchanges, we can verify that the reserve prices $r_1=0.2402$, $r_2=0.3157$ correspond to the unique equilibrium of the market. The revenue of exchange 1 is 0.397, while the revenue of exchange 2 is 0.378. The total revenue of the market under equilibrium is $\frac{1}{2}*0.397+\frac{1}{2}*0.378=0.3875$. However, in the revenue-optimal auction (i.e., the first-price auction with reserve $0.5$), the revenue would be $\frac{5}{12}\approx0.4167>0.3875$. \end{proof} Although the revenue decreases in the equilibrium of the market, the social welfare (i.e., the expected value obtained by the bidders) increases. \begin{corollary} The pure-strategy equilibrium of the market generates more welfare than the revenue-optimal auction. \end{corollary} \begin{proof} The reserve prices $r_1=0.2402$, $r_2=0.3157$ correspond to the unique equilibrium of the market. We can compute the lowest winner's value at which both exchanges sell the item: $a=\frac{2r_2+\sqrt{4r_2^2-3r_1^2}}{3}=0.369$. Since the revenue-optimal auction only sells the item when the winner's value is at least 0.5, the welfare in such a first-price market's equilibrium is higher than in the revenue-optimal auction. \end{proof} \subsection{Proof of theorems in Section \ref{sec:mainresults}} \label{sec:thmproofs} \begin{proof}[Proof of Theorem \ref{thm-fpspne}.]
The proof of the theorem is structured as follows. First, we characterize the bidding equilibrium of the bidders in a market with first-price and second-price auctions. We show the equilibrium bidding functions have discontinuous segments, and analyze the equation the bidding function needs to satisfy in each segment. Second, we prove that for any exchange with reserve $r_{\ell}$, if all exchanges with smaller reserves use first-price auctions, then that exchange will prefer to propose a first-price auction with reserve $r_{\ell}$ rather than a second-price auction with the same reserve $r_{\ell}$. We prove this by showing that, conditioned on the highest-valued bidder having any value $v$, the expected payment received by that exchange when proposing $\normalfont\textrm{FP}_{r_{\ell}}$ is always at least as large as when proposing $\normalfont\textrm{SP}_{r_{\ell}}$. This will imply that no exchange in an equilibrium will prefer to propose a second-price auction with reserve. \textit{\textbf{Step 1: Characterize the symmetric bidding equilibrium of bidders.}} Given a bidder with value $v$, let $G(v)=F^{n-1}(v)$ be the probability that the highest value among the other $n-1$ bidders is smaller than $v$, and let $g(v)=G'(v)$ be the corresponding density function. Assume that the exchanges proposing $\mathcal{M}^1_{r_1},\mathcal{M}^2_{r_2},\cdots,\mathcal{M}^m_{r_m}$ form an auction equilibrium, where each auction $\mathcal{M}^i\in\{\normalfont\textrm{FP},\normalfont\textrm{SP}\}$ can only be first-price or second-price, and without loss of generality $r_1\leq r_2\leq\cdots\leq r_m$. To prove the theorem, it suffices to show that if in an equilibrium the auctions with lower reserves are first-price auctions, i.e., $\mathcal{M}^1=\mathcal{M}^2=\cdots=\mathcal{M}^{k-1}=\normalfont\textrm{FP}$ for some $k\in[m]$, then $\mathcal{M}^{k}=\normalfont\textrm{FP}$. We first investigate what the symmetric equilibrium bidding function $\beta$ of the bidders looks like.
Similar to the case with two exchanges that we have studied before, the equilibrium bidding function has at most $m+1$ continuous segments (also see Figure \ref{fig-example}). To be more precise, there exist $a_1\leq a_2\leq\cdots\leq a_m$ such that $\beta^+(a_\ell)=r_\ell$, while $\beta^-(a_\ell)<r_\ell$ whenever $r_{\ell-1}<r_\ell$. Let $a_0=0$ and $a_{m+1}=+\infty$. For $v\in[a_\ell,a_{\ell+1})$, let $\bar{p}_\ell(b)$ denote the average payment of the winner in the aggregated average auction of $\mathcal{M}^1_{r_1},\mathcal{M}^2_{r_2},\cdots,\mathcal{M}^\ell_{r_\ell}$ (which is $\frac{1}{\sum_{j=1}^{\ell}\lambda_j}\sum_{j=1}^{\ell}\lambda_j\mathcal{M}^j_{r_j}$), conditioned on the winner's bid being $b$. When the highest-valued bidder has value $a_\ell$, he is indifferent between bidding $\beta^-(a_\ell)$ and $\beta^+(a_\ell)$. When bidding $\beta^-(a_\ell)$, he wins in the first $\ell-1$ exchanges, and gets profit $\left(\sum_{j=1}^{\ell-1}\lambda_j\right)\Big(a_\ell-\bar{p}_{\ell-1}(\beta^-(a_\ell))\Big)$. When bidding $\beta^+(a_\ell)$, he wins in the first $\ell$ exchanges, and gets profit $\left(\sum_{j=1}^{\ell}\lambda_j\right)\Big(a_\ell-\bar{p}_{\ell}(\beta^+(a_\ell))\Big)$. Then the value of $a_\ell$ is determined by the following equation: \begin{equation}\label{eqn-al} \left(\sum_{j=1}^{\ell-1}\lambda_j\right)\Big(a_\ell-\bar{p}_{\ell-1}(\beta^-(a_\ell))\Big)=\left(\sum_{j=1}^{\ell}\lambda_j\right)\Big(a_\ell-\bar{p}_{\ell}(\beta^+(a_\ell))\Big). \end{equation} Now we analyze the first-order condition, similar to the previous section. Since $\beta$ is a bidding equilibrium, when the winner's value is $v\in [a_\ell,a_{\ell+1})$, he bids $b=\beta(v)$ to maximize his utility $G(\beta^{-1}(b))\left(\sum_{j=1}^{\ell}\lambda_j\right)(v-\bar{p}_\ell(b))$.
Setting the derivative to zero, we get \[\frac{g(\beta^{-1}(b))}{\beta'(\beta^{-1}(b))}(v-\bar{p}_\ell(b))-G(\beta^{-1}(b))\bar{p}'_\ell(b)=0.\] Applying $b=\beta(v)$ we get \[vg(v)=\beta'(v)G(v)\bar{p}'_\ell(\beta(v))+g(v)\bar{p}_\ell(\beta(v))=\frac{d}{dv}\big(G(v)\bar{p}_\ell(\beta(v))\big).\] Integrating both sides from $a_\ell$ to $v$ we get \begin{equation}\label{eqn-integral-ell} \int_{a_\ell}^{v}g(t)tdt=G(v)\bar{p}_\ell(\beta^-(v))-G(a_\ell)\bar{p}_\ell(\beta^+(a_\ell)). \end{equation} The following lemma ensures that there is an equilibrium bidding function $\beta$ that satisfies (\ref{eqn-integral-ell}) for every $\ell$. The proof of the lemma is deferred to the appendix. \begin{lemma}\label{lem-existence} There exists a solution bidding function $\beta$ to (\ref{eqn-integral-ell}) for all $\ell$, and such a bidding function $\beta$ is an equilibrium among the bidders. Furthermore, $\beta(v)\leq v$ for all $v$. \end{lemma} \textit{\textbf{Step 2: Find a lower bound for exchange $k$'s revenue when he proposes $\normalfont\textrm{FP}_{r_k}$.}} Now we are ready to analyze the revenue obtained by exchange $k$, when all exchanges with smaller reserve prices use first-price auctions. We start with a lower bound on the revenue of exchange $k$ when he proposes $\normalfont\textrm{FP}_{r_k}$. Consider any winner's value $v\geq a_k$, and assume $v\in[a_{s},a_{s+1})$ for some $s$. Then \begin{eqnarray*} \int_{a_k}^{v}tg(t)dt &=&\int_{a_s}^{v}tg(t)dt+\int_{a_{s-1}}^{a_s}tg(t)dt+\cdots+\int_{a_k}^{a_{k+1}}tg(t)dt\\ &=&\left(G(v)\bar{p}_s(\beta^-(v))-G(a_s)\bar{p}_s(\beta^+(a_s))\right)\\ & &+\left(G(a_{s})\bar{p}_{s-1}(\beta^-(a_s))-G(a_{s-1})\bar{p}_{s-1}(\beta^+(a_{s-1}))\right)\\ & &+\cdots+\left(G(a_{k+1})\bar{p}_k(\beta^-(a_{k+1}))-G(a_k)\bar{p}_k(\beta^+(a_k))\right)\\ &\leq&G(v)\bar{p}_{s}(\beta^-(v))-G(a_k)\bar{p}_k(\beta^+(a_k))\\ &\leq&G(v)\beta^-(v)-G(a_k)\bar{p}_k(\beta^+(a_k))\\ &=&G(v)\beta^-(v)-G(a_k)r_k.
\end{eqnarray*} Here the second equality follows from equation (\ref{eqn-integral-ell}); the third inequality holds since $\bar{p}_{\ell-1}(\beta^-(a_\ell))<\bar{p}_{\ell}(\beta^+(a_\ell))$ by equation (\ref{eqn-al}) whenever $r_{\ell-1}<r_{\ell}$; the fourth inequality holds because $\bar{p}_s(\beta^-(v))\leq \beta^-(v)$: every auction proposed by an exchange is first-price or second-price with reserve, so it cannot charge the winner more than his bid, and hence neither can the aggregated average auction; equality holds there only if every exchange with reserve at most $r_s$ uses a first-price auction. The last equality follows from the fact that $\beta^+(a_k)=r_k$ and that all exchanges with smaller reserves propose first-price auctions, so $\bar{p}_k$ is the identity function. Thus when exchange $k$ uses a first-price auction with reserve $r_k$ and the highest-valued bidder has value $v\geq a_k$, the revenue is $\beta^-(v)$\footnote{Here we need to assume $\beta^-(v)=\beta(v)$. This is not true for $v=a_\ell$ for some $\ell$, but we can ignore the revenue contribution from these points, as we assume the value distribution function has no point mass.}, which is lower bounded by \begin{equation}\label{eqn-fplb} \beta^-(v)\geq\frac{1}{G(v)}\left(G(a_k)r_k+\int_{a_k}^{v}tg(t)dt\right). \end{equation} For large enough $v$, equality holds only if all exchanges use first-price auctions. \textit{\textbf{Step 3: Find an upper bound for exchange $k$'s revenue when he proposes $\normalfont\textrm{SP}_{r_k}$.}} Now we upper bound the revenue of exchange $k$ when he uses a second-price auction with reserve $r_k$ and the highest-valued bidder has value $v\geq a_k$. Let $v^{(1)}=v$ be the highest value among the bidders and $v^{(2)}$ the second highest value.
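The bound (\ref{eqn-fplb}) holds with equality when every exchange runs a first-price auction. As a quick numerical sanity check of (\ref{eqn-integral-ell}) and of the no-overbidding property $\beta(v)\leq v$ from Lemma \ref{lem-existence}, the sketch below uses a hypothetical setup that is not part of the proof: uniform values on $[0,1]$, a common reserve $r$ for all exchanges (so $a_k=r$ and the aggregated payment rule is the identity); the closed form $\beta(v)=\frac{(n-1)v^n+r^n}{nv^{n-1}}$ is hand-derived for this uniform case.

```python
# Sanity check of (eqn-integral-ell) in a hypothetical setting: uniform values
# on [0,1], n bidders, every exchange first-price with reserve r, so the
# aggregated payment rule is the identity and a_k = r.
n = 4          # number of bidders (assumption for this sketch)
r = 0.3        # common reserve price (assumption)

G = lambda v: v ** (n - 1)            # G(v) = F^{n-1}(v) for uniform F
g = lambda t: (n - 1) * t ** (n - 2)  # g = G'

def int_tg(lo, hi, steps=100_000):
    # midpoint rule for the integral of t*g(t) over [lo, hi]
    h = (hi - lo) / steps
    return sum((lo + (i + 0.5) * h) * g(lo + (i + 0.5) * h) for i in range(steps)) * h

def beta(v):
    # solution of (eqn-integral-ell) with identity payment rule and beta(r) = r
    return (r * G(r) + int_tg(r, v)) / G(v)

for v in [0.4, 0.6, 0.9]:
    # closed form for uniform values, derived by hand for this sketch
    assert abs(beta(v) - ((n - 1) * v ** n + r ** n) / (n * v ** (n - 1))) < 1e-6
    assert beta(v) <= v   # bidders never overbid
```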
Notice that when $v^{(1)}=v$, the revenue of exchange $k$ is at most $\max(\beta(v^{(2)}),r_k)$, the maximum of the reserve and the second highest bid; moreover, the bidders never overbid in the aggregated auction, by Lemma \ref{lem-existence}. Thus \begin{eqnarray} \mathbb{E}_{\textbf{v}}[\max(\beta(v^{(2)}),r_k)|v^{(1)}=v] &=&\Pr[v^{(2)}<a_k|v^{(1)}=v]r_k\nonumber\\ & &+\Pr[v^{(2)}\geq a_k|v^{(1)}=v]\cdot\mathbb{E}[\beta(v^{(2)})|v^{(2)}\geq a_k,v^{(1)}=v]\nonumber\\ &=&\frac{G(a_k)}{G(v)}r_k+\frac{G(v)-G(a_k)}{G(v)}\cdot\frac{\int_{a_k}^{v}\beta(t)d\frac{G(t)}{G(v)}}{\int_{a_k}^{v}d\frac{G(t)}{G(v)}}\nonumber\\ &=&\frac{1}{G(v)}\left(G(a_k)r_k+\int_{a_k}^{v}\beta(t)g(t)dt\right)\nonumber\\ &\leq&\frac{1}{G(v)}\left(G(a_k)r_k+\int_{a_k}^{v}tg(t)dt\right)\label{eqn-second-bid}. \end{eqnarray} The last inequality follows from $\beta(t)\leq t$. For large enough $v$, equality holds only if every exchange proposes a second-price auction, which means that equality in (\ref{eqn-second-bid}) cannot hold simultaneously with equality in (\ref{eqn-fplb}), unless there is only one exchange. Thus for every possible winner's value $v$, proposing $\normalfont\textrm{SP}_{r_k}$ never yields more revenue than proposing $\normalfont\textrm{FP}_{r_k}$. Taking the expectation over all possible values of $v^{(1)}$, we conclude that proposing $\normalfont\textrm{FP}_{r_k}$ is always a strictly better strategy than proposing $\normalfont\textrm{SP}_{r_k}$. Thus in an equilibrium of the market, no exchange will propose a second-price auction with reserve. \end{proof} \begin{proof}[Proof of Theorem \ref{thm-allzero}] By the definition of Nash equilibrium, we need to prove that an exchange in the market $(\normalfont\textrm{FP}_0,\normalfont\textrm{FP}_0,\cdots,\normalfont\textrm{FP}_0)$ will deviate to a positive reserve price to gain revenue. Since the other exchanges have the same reserve prices, we can merge them and treat them as a single exchange.
It suffices to prove that when two exchanges propose $(\normalfont\textrm{FP}_0,\normalfont\textrm{FP}_0)$, there exists $\epsilon>0$ such that the first exchange can deviate to $\normalfont\textrm{FP}_{\epsilon}$ to get more revenue. By equation (\ref{eqn-integral-ell}), when both exchanges use $\normalfont\textrm{FP}_{0}$, the bidding function is \begin{equation*} \betan(v)=\frac{\int_{0}^{v}tg(t)dt}{G(v)},\ v\geq 0. \end{equation*} When exchange 1 switches to $\normalfont\textrm{FP}_{\epsilon}$, the bidding function $\beta$ of the bidder will be as follows: \begin{equation*} \beta(v)=\begin{cases} \frac{\int_{0}^{v}tg(t)dt}{G(v)}, &0\leq v<a;\\ \frac{\epsilon G(a)+\int_{a}^{v}tg(t)dt}{G(v)}, &v\geq a. \end{cases} \end{equation*} Here $a$ is the discontinuity point at which the bidder is indifferent between bidding $\beta^-(a)$ and $\beta^+(a)$; it is determined by $a-\epsilon=\lambda_1(a-\beta^-(a))$. This implies $\int_{0}^atg(t)dt=\frac{\epsilon-(1-\lambda_1)a}{\lambda_1}G(a)$ and $\epsilon=(1-\lambda_1)a+\lambda_1\frac{\int_{0}^atg(t)dt}{G(a)}$. When exchange 1 switches from $\normalfont\textrm{FP}_0$ to $\normalfont\textrm{FP}_{\epsilon}$, he gains revenue when $v^{(1)}\geq a$ and loses revenue when $v^{(1)}<a$. Conditioned on $v=v^{(1)}\in[a,\infty)$, the revenue gain is \begin{eqnarray*} \beta(v)-\betan(v) &=&\frac{\epsilon G(a)+\int_{a}^{v}tg(t)dt}{G(v)}-\frac{\int_{0}^{v}tg(t)dt}{G(v)}\\ &=&\frac{\epsilon G(a)-\int_{0}^{a}tg(t)dt}{G(v)}\\ &=&\frac{(1-\lambda_1)}{G(v)}\left(aG(a)-\int_{0}^atg(t)dt\right). \end{eqnarray*} Here the last equality is by $\epsilon=(1-\lambda_1)a+\lambda_1\frac{\int_{0}^atg(t)dt}{G(a)}$. Conditioned on $v=v^{(1)}\in[0,a)$, the revenue loss is \[\betan(v)-\beta(v)=\betan(v)=\frac{\int_{0}^{v}tg(t)dt}{G(v)}.\] Take $\epsilon$ small enough that $F(a)\leq\frac{n-1}{n}$. Recall that $G(v)=F^{n-1}(v)$.
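For intuition, assume (purely for illustration, not as part of the proof) that values are uniform on $[0,1]$. Then $\betan(v)=\int_0^v tg(t)dt/G(v)$ reduces to the classical first-price bid $(n-1)v/n$, and the factor $aG(a)-\int_0^a tg(t)dt$ in the conditional revenue gain equals $a^n/n>0$. A minimal numerical sketch:

```python
# Illustrative check (uniform values on [0,1]): the all-FP_0 bidding function
# reduces to (n-1)v/n, and the conditional revenue-gain factor is positive.
n = 3
lam1 = 0.5   # \lambda_1, assumed market share for this sketch

G = lambda v: v ** (n - 1)
g = lambda t: (n - 1) * t ** (n - 2)

def int_tg(hi, steps=100_000):
    # midpoint rule for the integral of t*g(t) over [0, hi]
    h = hi / steps
    return sum((i + 0.5) * h * g((i + 0.5) * h) for i in range(steps)) * h

def beta_n(v):
    # \betan(v) = \int_0^v t g(t) dt / G(v)
    return int_tg(v) / G(v)

for v in [0.2, 0.5, 0.8]:
    assert abs(beta_n(v) - (n - 1) * v / n) < 1e-6  # closed form for uniform values

a = 0.1  # hypothetical discontinuity point induced by some small reserve epsilon
gain = (1 - lam1) * (a * G(a) - int_tg(a))  # numerator of the conditional gain
assert gain > 0  # aG(a) - \int_0^a t g(t) dt = a^n / n > 0 for uniform values
```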
Since the winner's value $v^{(1)}$ has cumulative distribution function $F^n(v)$ and density $nF^{n-1}(v)f(v)=nG(v)f(v)$, the total revenue gain is at least \begin{eqnarray*} \int_{0}^{\infty}nf(t)G(t)(\beta(t)-\betan(t))dt &=&\int_{a}^{\infty}nf(t)G(t)\frac{(1-\lambda_1)}{G(t)}\left(aG(a)-\int_{0}^asg(s)ds\right)dt\\ & &-\int_{0}^{a}nf(t)G(t)\frac{\int_{0}^{t}sg(s)ds}{G(t)}dt\\ &=&n(1-F(a))(1-\lambda_1)\left(aG(a)-\int_{0}^atg(t)dt\right)\\ & &-\int_{0}^{a}nf(t)\int_{0}^{t}sg(s)dsdt\\ &\geq&(1-\lambda_1)\left(aG(a)-\int_{0}^atg(t)dt\right)-\int_{0}^{a}nf(t)\int_{0}^{t}sg(s)dsdt. \end{eqnarray*} Here the last inequality is by $F(a)\leq \frac{n-1}{n}$. Define \[h(a)=(1-\lambda_1)\left(aG(a)-\int_{0}^atg(t)dt\right)-\int_{0}^{a}nf(t)\int_{0}^{t}sg(s)dsdt,\] and it is easy to see that $h(0)=0$. Choose a constant $c$ such that $f(a)\leq c$ for all small enough $a$. To prove that exchange 1's revenue gain is positive when switching to $\normalfont\textrm{FP}_{\epsilon}$ for some small $\epsilon>0$, it suffices to show that $h'(a)>0$ for every small enough $0<a<\frac{1-\lambda_1}{nc}$. Indeed, \begin{equation*} h'(a)=(1-\lambda_1)G(a)-nf(a)\int_{0}^{a}tg(t)dt. \end{equation*} Then \begin{eqnarray*} h'(a)&\geq&(1-\lambda_1)\int_{0}^{a}g(t)dt-nc\int_{0}^{a}tg(t)dt\\ &=&\int_{0}^{a}\left(1-\lambda_1-nct\right)g(t)dt>0. \end{eqnarray*} Thus there exists $a>0$ such that $h(a)>0$, which implies that exchange 1 gains revenue by switching to $\normalfont\textrm{FP}_{\epsilon}$. Therefore $(\normalfont\textrm{FP}_0,\normalfont\textrm{FP}_0)$ cannot be an equilibrium. \end{proof} \begin{proof}[Proof of Theorem \ref{thm-asymmetric}] We prove that an exchange in the market $(\normalfont\textrm{FP}_r,\normalfont\textrm{FP}_r,\cdots,\normalfont\textrm{FP}_r)$ will deviate to another reserve price to gain revenue. Theorem \ref{thm-allzero} settles the case $r=0$, so here we only need to analyze the case $r>0$.
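The positivity of $h'(a)$ for small $a$ can also be checked numerically. The sketch below again assumes uniform values on $[0,1]$ (so $f\equiv 1$ and we may take $c=1$); it is illustrative only, not part of the argument.

```python
# Illustrative check that h'(a) = (1-lam1)G(a) - n f(a) * int_0^a t g(t) dt > 0
# for 0 < a < (1-lam1)/(n c), assuming uniform values on [0,1] (f = 1, c = 1).
n = 3
lam1 = 0.5
c = 1.0  # bound on f near 0 (f is identically 1 for uniform values)

G = lambda v: v ** (n - 1)
g = lambda t: (n - 1) * t ** (n - 2)
f = lambda v: 1.0

def int_tg(hi, steps=100_000):
    # midpoint rule for the integral of t*g(t) over [0, hi]
    h = hi / steps
    return sum((i + 0.5) * h * g((i + 0.5) * h) for i in range(steps)) * h

def h_prime(a):
    return (1 - lam1) * G(a) - n * f(a) * int_tg(a)

upper = (1 - lam1) / (n * c)  # the range 0 < a < (1-lam1)/(nc) from the text
for k in range(1, 10):
    a = upper * k / 10
    assert h_prime(a) > 0
```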
Since the other exchanges have the same reserve prices, we merge them and treat them as a single exchange. It suffices to prove that when two exchanges propose $(\normalfont\textrm{FP}_r,\normalfont\textrm{FP}_r)$, there exists $\epsilon>0$ such that the first exchange can deviate to $\normalfont\textrm{FP}_{r-\epsilon}$ to get more revenue. By equation (\ref{eqn-integral-ell}), when both exchanges use $\normalfont\textrm{FP}_{r}$, the bidding function is \begin{equation*} \betan(v)=\begin{cases} 0, &0\leq v<r;\\ \frac{rG(r)+\int_{r}^{v}tg(t)dt}{G(v)}, &v\geq r. \end{cases} \end{equation*} When exchange 1 switches to $\normalfont\textrm{FP}_{r-\epsilon}$, the bidding function $\beta$ of the bidder will be as follows: \begin{equation*} \beta(v)=\begin{cases} 0, &0\leq v<r-\epsilon;\\ \frac{(r-\epsilon)G(r-\epsilon)+\int_{r-\epsilon}^{v}tg(t)dt}{G(v)}, &r-\epsilon\leq v<a;\\ \frac{rG(a)+\int_{a}^{v}tg(t)dt}{G(v)},&v\geq a. \end{cases} \end{equation*} Here $a$ is the discontinuity point at which the bidder is indifferent between bidding $\beta^-(a)$ and $\beta^+(a)$; it is determined by $a-r=\lambda_1(a-\beta^-(a))$. An upper bound on $a$ is $r+\frac{\lambda_1}{1-\lambda_1}\epsilon$, since $\beta^-(a)\geq r-\epsilon$. As before, let $v^{(1)}$ denote the winner's value. When exchange 1 switches from $\normalfont\textrm{FP}_r$ to $\normalfont\textrm{FP}_{r-\epsilon}$, he gains revenue when $r-\epsilon\leq v^{(1)}<r$ and loses revenue when $v^{(1)}\geq r$.
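The discontinuity point $a$ can be located numerically from the indifference condition $a-r=\lambda_1(a-\beta^-(a))$, e.g. by bisection: the difference of the two sides is negative at $a=r$ and non-negative at $a=r+\frac{\lambda_1}{1-\lambda_1}\epsilon$. The sketch below assumes uniform values on $[0,1]$ and particular $r$, $\epsilon$, $\lambda_1$; it is illustrative only.

```python
# Illustrative bisection for the discontinuity point a solving
# a - r = lam1 * (a - beta_minus(a)), with uniform values on [0,1].
n = 3
lam1 = 0.4   # exchange 1's market share (assumption for this sketch)
r = 0.5
eps = 0.05

G = lambda v: v ** (n - 1)
g = lambda t: (n - 1) * t ** (n - 2)

def int_tg(lo, hi, steps=20_000):
    # midpoint rule for the integral of t*g(t) over [lo, hi]
    h = (hi - lo) / steps
    return sum((lo + (i + 0.5) * h) * g(lo + (i + 0.5) * h) for i in range(steps)) * h

def beta_minus(a):
    # middle branch of beta: ((r-eps)G(r-eps) + int_{r-eps}^a t g(t) dt) / G(a)
    return ((r - eps) * G(r - eps) + int_tg(r - eps, a)) / G(a)

def phi(a):
    # phi(a) < 0 below the indifference point, phi(a) >= 0 above it
    return (a - r) - lam1 * (a - beta_minus(a))

lo, hi = r, r + lam1 * eps / (1 - lam1)   # phi(lo) < 0 <= phi(hi), as in the text
for _ in range(40):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if phi(mid) < 0 else (lo, mid)

a = (lo + hi) / 2
assert r < a <= r + lam1 * eps / (1 - lam1) + 1e-9  # the upper bound from the text
```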
Conditioned on $v=v^{(1)}\in[r-\epsilon,r)$, the revenue gain is \[\beta(v)-\betan(v)=\beta(v)\geq r-\epsilon.\] Conditioned on $v=v^{(1)}\in[r,a)$, the revenue loss is \begin{eqnarray*} \betan(v)-\beta(v) &=&\frac{rG(r)+\int_{r}^{v}tg(t)dt}{G(v)}-\frac{(r-\epsilon)G(r-\epsilon)+\int_{r-\epsilon}^{v}tg(t)dt}{G(v)}\\ &=&\frac{rG(r)-(r-\epsilon)G(r-\epsilon)-\int_{r-\epsilon}^{r}tg(t)dt}{G(v)}\\ &\leq&\frac{rG(r)-(r-\epsilon)G(r-\epsilon)-\int_{r-\epsilon}^{r}(r-\epsilon)g(t)dt}{G(v)}\\ &=&\frac{G(r)}{G(v)}\epsilon\leq\frac{1}{G(v)}\epsilon. \end{eqnarray*} Conditioned on $v=v^{(1)}\in[a,\infty)$, the revenue loss is \begin{eqnarray*} \betan(v)-\beta(v) &=&\frac{rG(r)+\int_{r}^{v}tg(t)dt}{G(v)}-\frac{rG(a)+\int_{a}^{v}tg(t)dt}{G(v)}\\ &=&\frac{\int_{r}^{a}(t-r)g(t)dt}{G(v)}\\ &\leq&\frac{g(a)}{G(v)}(a-r)^2\\ &=&\frac{(n-1)f(a)F^{n-2}(a)}{G(v)}(a-r)^2<\frac{nf(a)}{G(v)}(a-r)^2. \end{eqnarray*} Since $f(v)>0$ for $v>0$, we can find $c,\delta>0$ such that $f(v)\in[c,c+\delta]$ for $v\in[r-\epsilon,r+\frac{\lambda_1}{1-\lambda_1}\epsilon]$, with $\delta\to0$ as $\epsilon \to 0$. Since the winner's value $v^{(1)}$ has cumulative distribution function $F^n(v)$ and density $nF^{n-1}(v)f(v)=nf(v)G(v)$, the total revenue gain is at least \begin{eqnarray*} \int_{r-\epsilon}^{\infty}nf(t)G(t)(\beta(t)-\betan(t))dt &>&\int_{r-\epsilon}^{r}nf(t)G(t)(r-\epsilon)dt-\int_{r}^{a}nf(t)G(t)\cdot\frac{1}{G(t)}\epsilon dt\\ & &-\int_{a}^{\infty}nf(t)G(t)\cdot\frac{nf(a)}{G(t)}(a-r)^2dt\\ &\geq&\int_{r-\epsilon}^{r}ncG(r-\epsilon)(r-\epsilon)dt-\int_{r}^{a}n(c+\delta)\epsilon dt\\ & &-\int_{0}^{\infty}f(t)dt\cdot n^2(a-r)^2(c+\delta)\\ &=&ncG(r-\epsilon)\cdot(r-\epsilon)\epsilon-n(c+\delta)(a-r)\epsilon\\ & &-n^2(a-r)^2(c+\delta)\\ &\geq&ncF^{n-1}(r-\epsilon)\cdot(r-\epsilon)\epsilon-n(c+\delta)\frac{\lambda_1}{1-\lambda_1}\epsilon^2\\ & &-n^2(c+\delta)\left(\frac{\lambda_1}{1-\lambda_1}\right)^2\epsilon^2\\ &\geq& 0 \end{eqnarray*} for all sufficiently small $\epsilon>0$.
The first inequality applies the previous bounds on the revenue gain and losses. The second inequality holds since $f(t)\in[c,c+\delta]$ for $t\in[r-\epsilon,a]$, and $g(s)=(n-1)f(s)F^{n-2}(s)\leq (n-1)(c+\delta)F^{n-2}(a)$ for $s\in[r,a]$. The third equality holds since $\int_{0}^{\infty}f(t)dt=1$. The fourth inequality holds since $a\leq r+\frac{\lambda_1}{1-\lambda_1}\epsilon$. The last inequality holds since, as $\epsilon\to 0$, the lowest-degree term has a positive coefficient. Thus there exists $\epsilon>0$ such that the first exchange prefers to switch to $\normalfont\textrm{FP}_{r-\epsilon}$. \end{proof} \appendix \section{Appendix} \begin{proof}[Proof of Lemma \ref{lem-existence}] We first prove that there exists a unique equilibrium that satisfies equation (\ref{eqn-integral-ell}) for each $\ell$. Suppose that we have already found a bidding function $\beta$ that satisfies (\ref{eqn-integral-ell}) for all indices smaller than $\ell$. Now we want to solve for $\beta$ on $[a_\ell,a_{\ell+1})$. Let $I\subseteq[\ell]$ be the set of exchanges with reserve at most $r_\ell$ that use first-price auctions, and let $J=[\ell]\setminus I$ be the set of exchanges with reserve at most $r_\ell$ that use second-price auctions. Then the payment function $\bar{p}_{\ell}$ for $v\in[a_\ell,a_{\ell+1})$ is the sum of first-price payments and second-price payments, which is \begin{eqnarray*} \bar{p}_{\ell}(\beta(v)) &=&\frac{1}{\sum_{j\leq \ell}\lambda_j}\left(\sum_{j\in I}\lambda_j\beta(v)+\sum_{j\in J}\lambda_j\mathbb{E}_{\textbf{v}}\left[\max(\beta(v^{(2)}),r_j)\,\middle|\,v^{(1)}=v\right]\right)\\ &=&\frac{1}{\sum_{j\leq \ell}\lambda_j}\left(\sum_{j\in I}\lambda_j\beta(v)+\sum_{j\in J}\lambda_j\frac{1}{G(v)}\left(G(a_j)r_j+\int_{a_j}^{v}\beta(t)g(t)dt\right)\right)\\ &=&\frac{1}{\sum_{j\leq \ell}\lambda_j}\Bigg(\beta(v)\bigg(\sum_{j\in I}\lambda_j\bigg)+\frac{1}{G(v)}\sum_{j\in J}\lambda_jG(a_j)r_j\\ & &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{1}{G(v)}\bigg(\sum_{j\in J}\lambda_j\int_{a_j}^{v}\beta(t)g(t)dt\bigg)\Bigg).
\end{eqnarray*} Here the second equality follows from equation (\ref{eqn-second-bid}). Substituting into (\ref{eqn-integral-ell}), multiplying both sides by $\sum_{j\leq \ell}\lambda_j$, and denoting $\lambda_I=\sum_{j\in I}\lambda_j$, $\lambda_J=\sum_{j\in J}\lambda_j$, we get \begin{eqnarray*} (\lambda_I+\lambda_J)\int_{a_\ell}^{v}tg(t)dt &=&\left(\lambda_I\beta(v)G(v)+\sum_{j\in J}\lambda_jG(a_j)r_j+\sum_{j\in J}\lambda_j\int_{a_j}^{v}\beta(t)g(t)dt\right)\\ & &-\left(\lambda_I\beta(a_\ell)G(a_\ell)+\sum_{j\in J}\lambda_jG(a_j)r_j+\sum_{j\in J}\lambda_j\int_{a_j}^{a_{\ell}}\beta(t)g(t)dt\right)\\ &=&\lambda_I\beta(v)G(v)-\lambda_I\beta(a_\ell)G(a_\ell)+\lambda_J\int_{a_\ell}^{v}\beta(t)g(t)dt. \end{eqnarray*} Taking the derivative on both sides, we get \[(\lambda_I+\lambda_J)vg(v)=\lambda_IG(v)\beta'(v)+(\lambda_I+\lambda_J)\beta(v)g(v).\] If $\lambda_I=0$, the above equation admits the solution $\beta(v)=v$. This solution is consistent with $\beta(a_\ell)=r_{\ell}$: if all exchanges with reserve at most $r_\ell$ use second-price auctions, the equilibrium bidding function is the identity. If $\lambda_I>0$, this differential equation always has a solution that satisfies $\beta(a_\ell)=r_{\ell}$. Thus in every case there is a bidding function $\beta$ that satisfies equation (\ref{eqn-integral-ell}) for all $\ell$. Notice that the above analysis only shows that bidding according to $\beta$ is locally optimal. Now we prove that bidding according to $\beta$ is also globally optimal, i.e., a bidder with value $v$ prefers to bid $\beta(v)$ rather than $\beta(v')$ for any $v'\neq v$. Let $h_\ell(v)=\bar{p}_\ell(\beta(v))$ be the expected payment of the winner when $v$ is the winner's value, and let $\Lambda_\ell=\sum_{j\leq \ell}\lambda_j$ be the market share of the first $\ell$ exchanges. For $s\in[a_{\ell},a_{\ell+1})$, let $u(s,v)=\Lambda_\ell G(s)(v-h_\ell(s))$ be the expected utility obtained by a bidder with value $v$ when bidding $\beta(s)$.
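The displayed differential equation is a standard first-order linear ODE and can be integrated numerically. The sketch below (uniform values on $[0,1]$ and hypothetical parameters, purely for illustration) solves it by forward Euler and, for $\lambda_J=0$, compares the result with the explicit all-first-price solution $\beta(v)=\big(rG(r)+\int_r^v tg(t)dt\big)/G(v)$.

```python
# Forward-Euler integration of lam_I*G(v)*beta'(v) = (lam_I+lam_J)*(v-beta(v))*g(v)
# with beta(a_l) = r_l, checked against the explicit all-first-price solution.
# Uniform values on [0,1] are an illustrative assumption.
n = 3
lam_I, lam_J = 1.0, 0.0   # all exchanges first-price in this check
r = 0.4                   # a_l = r_l = r (assumption for this sketch)

G = lambda v: v ** (n - 1)
g = lambda t: (n - 1) * t ** (n - 2)

def beta_exact(v, steps=50_000):
    # (r G(r) + int_r^v t g(t) dt) / G(v) via the midpoint rule
    h = (v - r) / steps
    integral = sum((r + (i + 0.5) * h) * g(r + (i + 0.5) * h) for i in range(steps)) * h
    return (r * G(r) + integral) / G(v)

def beta_euler(v_end, h=1e-4):
    v, b = r, r   # initial condition beta(a_l) = r_l
    while v < v_end:
        b += h * (lam_I + lam_J) * (v - b) * g(v) / (lam_I * G(v))
        v += h
    return b

assert abs(beta_euler(0.8) - beta_exact(0.8)) < 1e-3
```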
It remains to show that, for any $v$, $u(s,v)$ is maximized at $s=v$. For any $v''<v<v'$, assume that $v''\in[a_{j},a_{j+1})$, $v'\in[a_{k},a_{k+1})$ and $v\in[a_{\ell},a_{\ell+1})$, where $j\leq \ell\leq k$. We prove that $u(v',v)\leq u(v,v)$ and $u(v'',v)\leq u(v,v)$. We first prove $u(v',v)\leq u(v,v)$. Notice that for any $j$, bidding $\beta^+(a_j)$ and bidding $\beta^-(a_j)$ lead to different allocations. For $\ell+1\leq j\leq k$, we write $u(a_j,v)$ and $u(a_j^-,v)$ for the utilities of bidding $\beta^+(a_j)$ and $\beta^-(a_j)$, respectively. We observe that \begin{eqnarray*} u(a_j,v)-u(a_j^-,v)&=&u(a_j,a_j)-u(a_j^-,a_j)-\lambda_jG(a_j)(a_j-v)\\ &=&-\lambda_jG(a_j)(a_j-v)\leq 0. \end{eqnarray*} The second equality comes from the fact that a bidder with value $a_j$ is indifferent between bidding $\beta^-(a_j)$ and $\beta^+(a_j)$. Then \begin{eqnarray} u(v,v)-u(v',v) &=&u(v,v)-u(a_{\ell+1}^-,v)+u(a_{\ell+1}^-,v)-u(a_{\ell+2}^-,v)\nonumber\\ & &+\cdots+u(a_{k-1}^-,v)-u(a_{k}^-,v)+u(a_k^-,v)-u(v',v)\nonumber\\ &\geq&\left(u(v,v)-u(a_{\ell+1}^-,v)\right)+\left(u(a_{\ell+1},v)-u(a_{\ell+2}^-,v)\right)\nonumber\\ & &+\cdots+\left(u(a_{k-1},v)-u(a_{k}^-,v)\right)+\left(u(a_k,v)-u(v',v)\right)\label{eqn-difference}. \end{eqnarray} Now it suffices to show that each difference in the above sum is non-negative. Define $w(t)=u(t,t)/\Lambda_k$ and, for brevity, write $h$ for $h_k$. Observe that for $t\in[a_k,v']$, \begin{eqnarray*} 0&=&\frac{\partial}{\partial s}\frac{u(s,t)}{\Lambda_{k}}\Big|_{s=t}\\ &=&g(t)(t-h(t))-G(t)h'(t)\\ &\geq&g(t)(t-h(t))-G(t)h'(t)+G(t)-G(v')\\ &=&w'(t)-G(v'). \end{eqnarray*} Here the first equality holds because $s=t$ is a local maximum of $u(s,t)$; the inequality follows from $G(v')\geq G(t)$. Integrating $w'(t)-G(v')$ from $a_k$ to $v'$, we get \begin{eqnarray*} 0&\geq&\int_{a_k}^{v'}(w'(t)-G(v'))dt\\ &=&w(v')-w(a_k)-G(v')(v'-a_k)\\ &=&G(v')(v'-h(v'))-G(a_k)(a_k-h(a_k))-G(v')(v'-a_k)\\ &=&G(v')(a_k-h(v'))-G(a_k)(a_k-h(a_k))\\ &\geq&G(v')(v-h(v'))-G(a_k)(v-h(a_k))\\ &=&\frac{1}{\Lambda_k}\big(u(v',v)-u(a_k,v)\big).
\end{eqnarray*} Here the fifth line follows from $a_k\geq v$ and $G(v')\geq G(a_k)$. Thus $u(a_k,v)-u(v',v)\geq 0$, and in the same way we can prove $u(v,v)-u(a_{\ell+1}^-,v)\geq 0$, $u(a_{\ell+1},v)-u(a_{\ell+2}^-,v)\geq 0$, $\cdots$, $u(a_{k-1},v)-u(a_{k}^-,v)\geq 0$. Applying these results to inequality (\ref{eqn-difference}), we conclude that $u(v,v)\geq u(v',v)$. The proof of $u(v'',v)\leq u(v,v)$ is completely symmetric and can be done in the same way. \end{proof} \end{document}
\begin{document} \title[$G_\delta$-refinements]{$G_\delta$-refinements} \author{Robson A. Figueiredo} \address{Instituto de Matem\'atica e Estat\'istica da Universidade de S\~ao Paulo\\ Rua do Mat\~ao, 1010, Cidade Universit\'aria, CEP 05508-090, S\~ao Paulo, SP, Brazil} \email{[email protected]} \keywords{} \begin{abstract} In this work we deal with the preservation of covering properties by $G_\delta$-refinements. We prove that for $\mathrm{SP}$-scattered spaces, metacompactness, paralindel\"ofness, metalindel\"ofness and linear Lindel\"ofness are preserved by $G_\delta$-refinements. In this context we also consider some other generalizations of scattered spaces, such as $\omega$-scattered and $N$-scattered spaces. In the final part of this paper we look at a question of Juh\'asz, Soukup, Szentmikl\'ossy and Weiss concerning the tightness of the $G_\delta$-refinement of a $\sigma$-product. \end{abstract} \maketitle \section{Preliminaries} \begin{definition} For any space $\langle X,\tau\rangle$, the topology $\tau_\delta$ obtained by letting every $G_\delta$ subset of $X$ be open is called the \textbf{$G_\delta$-topology}, and the space so obtained is denoted by $X_\delta$. \end{definition} \begin{definition} Let $X$ be a set and let $\mu$ be a cardinal such that $\mu\leq |X|$. We say that $\mathscr{C}\subseteq[X]^\mu$ is \textit{cofinal} in $[X]^\mu$ if, for all $x\in [X]^\mu$, there exists $y\in\mathscr{C}$ such that $x\subseteq y$. For cardinals $\mu\leq\kappa$, we define $\mathrm{cf}\left([\kappa]^\mu,\subseteq\right)$ as the least cardinality of a cofinal family in $[\kappa]^\mu$. Given an infinite cardinal $\kappa$, we define $\mathrm{Cov}_\omega(\kappa)=\mathrm{cf}\left([\kappa]^{\aleph_0},\subseteq\right)$. \end{definition} \begin{theorem}[Passos \cite{Pas2007}]\label{1} Let $\kappa$ be an infinite cardinal such that $\mathrm{Cov}_\omega(\kappa)=\kappa$.
Given a set $X$ of cardinality $\kappa$, there exists an $\omega$-covering elementary submodel $M$ such that $X\subseteq M$ and $|M|=\kappa$. \end{theorem} Recall that a subset $F$ of $X$ is $\kappa$-closed, where $\kappa$ is an infinite cardinal, iff whenever $S\subseteq F$ and $|S|\leq\kappa$, then $\mathrm{cl}_X(S)\subseteq F$. It is well known that $t(X)\leq\kappa$ iff every $\kappa$-closed set in $X$ is closed. \section{\texorpdfstring{$\mathrm{SP}$}{SP}-scattered spaces} \begin{definition} A point $p$ in a topological space $X$ is called a \textit{strong $P$-point} if it has a neighborhood consisting of $P$-points. The set of all strong $P$-points of $X$ is denoted by $\mathrm{SP}(X)$. \end{definition} Observe that $\mathrm{SP}(X) = \mathrm{int}_XP(X)$. \begin{definition}[\cite{HRW2007}] Recursively, define: \begin{itemize} \item $S_0(X) = X$ and $S_1(X) = X\setminus\mathrm{SP}(X)$; \item $S_{\alpha+1}(X) = S_1(S_\alpha(X))$ for any ordinal $\alpha\geq 1$; \item $S_\lambda(X) =\bigcap\{\,S_\alpha(X):\alpha<\lambda\,\}$, if $\lambda$ is a limit ordinal. \end{itemize} \end{definition} In \cite{HRW2007}, Henriksen, Raphael and Woods proved the following generalizations of the well-known Theorems 5.1 and 5.2 of \cite{LR1981}, respectively: \begin{theorem} If $X$ is a Lindel\"of $\mathrm{SP}$-scattered regular space, then $X_\delta$ is Lindel\"of. \end{theorem} \begin{theorem}\label{HRW2007:4.2} If $X$ is a paracompact $\mathrm{SP}$-scattered Hausdorff space, then $X_\delta$ is paracompact. \end{theorem} In the same article they asked: \begin{question} If $X$ is a metacompact $\mathrm{SP}$-scattered regular space, is $X_\delta$ also metacompact? \end{question} In this section, we will see that not only metacompactness but also paralindel\"ofness, metalindel\"ofness and linear Lindel\"ofness are preserved by $G_\delta$-refinements in the class of $\mathrm{SP}$-scattered regular spaces.
\begin{theorem}[\cite{Mis1972}]\label{142} A regular $P$-space $X$ is paralindel\"of if, and only if, it is paracompact. \end{theorem} \begin{proposition}[\cite{HRW2007}]\label{HRW2007:2.8} If $X$ is a regular space, then the following are equivalent: \begin{enumerate} \item $X$ is $\mathrm{SP}$-scattered; \item if $A\subseteq X$ is nonempty, then $\mathrm{int}_A\{\,a:\text{$a$ is a $P$-point of $A$}\,\}\ne\emptyset$. \end{enumerate} \end{proposition} \begin{theorem}\label{2} If $X$ is a regular $\mathrm{SP}$-scattered paralindel\"of space, then $X_\delta$ is paracompact. \end{theorem} \begin{proof} By Theorem \ref{142}, it is enough to show that $X_\delta$ is paralindel\"of. Let $\mathscr{C}$ be an open cover of $X_\delta$. Let $O$ be the set of all points $x\in X$ such that $x\in \mathrm{int}_\tau\bigcup\mathscr{C}'$ for some locally countable open partial refinement $\mathscr{C}'$ of $\mathscr{C}$ in $X_\delta$. If $O=X$ then, for each $x\in X$, there exists a locally countable open partial refinement $\mathscr{C}_x$ of $\mathscr{C}$ in $X_\delta$ such that $x\in V_x=\mathrm{int}_\tau\bigcup\mathscr{C}_x$. Since $\langle X,\tau\rangle$ is paralindel\"of, the open cover $\mathscr{V}=\{\,V_x:x\in X\,\}$ admits a locally countable open refinement $\{\,W_s:s\in S\,\}$, with $W_s\ne W_{s'}$ whenever $s\ne s'$. For each $s\in S$, take $x_s\in X$ such that $W_s\subseteq V_{x_s}$. So \[ \{\,W_s\cap C:s\in S\text{ and }C\in\mathscr{C}_{x_s}\,\} \] is a locally countable open refinement of $\mathscr{C}$ in $X_\delta$. Now it remains to show that in fact $O=X$. Suppose this is not the case. Since $\langle X,\tau\rangle$ is $\mathrm{SP}$-scattered, by Proposition \ref{HRW2007:2.8}, there exist $y\in X\setminus O$ and an open neighborhood $U$ of $y$ in $X$ such that $(X\setminus O)\cap U$ is a $P$-subspace of $X$. Take $C_y\in\mathscr{C}$ such that $y\in C_y$.
We can suppose that $C_y=\bigcap\{\,U_n:n\in\omega\,\}$, where, for each $n\in\omega$, $U_n\in \tau$ and $\mathrm{cl}_\tau(U_n)\subseteq U$. So $C_y\cup O$ is an open subset of $X$, since $(X\setminus O)\cap C_y$ is a subspace of the $P$-space $(X\setminus O)\cap U$. As $X$ is regular, $y$ has an open neighborhood $U_y$ in $X$ such that $\mathrm{cl}_{\tau}(U_y)\subseteq C_y\cup O$. Fix $n\in\omega$. Let $F=\mathrm{cl}_\tau(U_y)\setminus U_n$. Since $F\subseteq O$, for each $x\in F$, there exists a locally countable open partial refinement $\mathscr{C}_x$ of $\mathscr{C}$ in $X_\delta$ such that $x\in V_x=\mathrm{int}_\tau\bigcup\mathscr{C}_x$. As $\langle X,\tau\rangle$ is paralindel\"of and $F$ is closed, $\mathscr{V}=\{\,V_x:x\in F\,\}$ admits a locally countable open partial refinement $\mathscr{W}=\{\,W_s:s\in S\,\}$ that covers $F$, where $W_s\ne W_{s'}$ whenever $s\ne s'$. For each $s\in S$, choose $x_s\in F$ such that $W_s\subseteq V_{x_s}$. Consider the family \[ \mathscr{D}_n=\{\,W_s\cap C:s\in S\text{ and }C\in\mathscr{C}_{x_s}\,\}. \] \textit{The family $\mathscr{D}_n$ is a locally countable open cover of $F$ in $X_\delta$.} Indeed, let $x\in X$. Since $\mathscr{W}$ is a locally countable open family in $X$, there exists an open neighborhood $Z_x$ of $x$ in $X$ such that $T=\{\,s\in S:W_s\cap Z_x\ne\emptyset\,\}$ is countable. For each $t\in T$, take an open neighborhood $O_t$ of $x$ in $X_\delta$ such that $\{\,C\in\mathscr{C}_{x_t}:C\cap O_t\ne\emptyset\,\}$ is countable. Consider the following open neighborhood of $x$ in $X_\delta$: \[ Z=Z_x\cap\bigcap\{\,O_t:t\in T\,\}. \] Note that for each $t\in T$, $\mathscr{C}_{x_t}(Z)=\{\,C\in\mathscr{C}_{x_t}:C\cap Z\ne\emptyset\,\}$ is countable.
Seeing that \begin{align*} \mathscr{D}_n({Z})&=\{\,D\in\mathscr{D}_n:D\cap Z\ne\emptyset\,\}\\ &=\{\,W_s\cap C: s\in S,\,C\in\mathscr{C}_{x_s}\text{ and }W_s\cap C\cap Z\ne\emptyset\,\}\\ &\subseteq\{\,W_t\cap C: t\in T\text{ and }C\in\mathscr{C}_{x_t}({Z})\,\}, \end{align*} the family $\mathscr{D}_n({Z})$ is countable. Thus, \[ \mathscr{C}'=\{C_y\}\cup\bigcup\{\,\mathscr{D}_n:n\in\omega\,\} \] is a locally countable open partial refinement of $\mathscr{C}$ in $X_\delta$ such that $y\in U_y\subseteq\mathrm{int}_\tau\bigcup\mathscr{C}'$, contradicting the fact that $y\notin O$. \end{proof} \begin{theorem} If $X$ is a regular $\mathrm{SP}$-scattered metalindel\"of space, then $X_\delta$ is metalindel\"of. \end{theorem} \begin{proof} Let $\mathscr{C}$ be an open cover of $X_\delta$ and consider the set $O$ whose elements are all those $x\in X$ such that $x\in\mathrm{int}_\tau\bigcup\mathscr{C}'$ for some pointwise countable open partial refinement $\mathscr{C}'$ of $\mathscr{C}$ in $X_\delta$. If $O=X$ then, proceeding in the same way as in the proof of Theorem \ref{2}, we can obtain a pointwise countable open refinement of $\mathscr{C}$ in $X_\delta$. In order to complete the proof, it is enough to show that $O=X$. Suppose on the contrary that $O\ne X$. As $X$ is regular and $\mathrm{SP}$-scattered, by Proposition \ref{HRW2007:2.8}, there exist a point $y\in X\setminus O$ and an open neighborhood $U$ of $y$ in $X$ such that $U\cap (X\setminus O)$ is a $P$-subspace of $X$. Choose $C_y\in\mathscr{C}$ such that $y\in C_y$. We can suppose that $C_y=\bigcap\{\,U_n:n\in\omega\,\}$, where for each $n\in\omega$, $U_n\in\tau$ and $\mathrm{cl}_\tau(U_n)\subseteq U$. Note that $C_y\cup O$ is an open subset of $X$. Hence, from the regularity of $X$, it follows that there exists an open neighborhood $U_y$ of $y$ in $X$ such that $\mathrm{cl}_\tau(U_y)\subseteq C_y\cup O$. Fix $n\in\omega$. Note that $F=\mathrm{cl}_\tau(U_y)\setminus U_n\subseteq O$.
Then, for each $x\in F$, there is a pointwise countable open partial refinement $\mathscr{C}_x$ of $\mathscr{C}$ in $X_\delta$ such that $x\in V_x=\mathrm{int}_\tau(\bigcup\mathscr{C}_x)$. Because $X$ is metalindel\"of and $F$ is closed, $\{\,V_x:x\in F\,\}$ has a pointwise countable open partial refinement $\mathcal{W}=\{\,W_i:i\in I\,\}$ which covers $F$, where $W_i\ne W_j$ whenever $i\ne j$. For each $i\in I$, choose $x_i\in F$ such that $W_i\subseteq V_{x_i}$. Consider the family \[ \mathscr{D}_n=\{\,W_i\cap C:i\in I\text{ and }C\in\mathscr{C}_{x_i}\,\}. \] It is easily checked that each $\mathscr{D}_n$ is a pointwise countable open partial refinement of $\mathscr{C}$ which covers $\mathrm{cl}_{\tau}(U_y)\setminus U_n$. So, \[ \mathscr{C}'=\{C_y\}\cup\bigcup\{\,\mathscr{D}_n:n\in\omega\,\} \] is a pointwise countable open partial refinement of $\mathscr{C}$ in $X_\delta$ such that $y\in U_y\subseteq\mathrm{int}_\tau(\bigcup\mathscr{C}')$, contradicting the fact that $y\notin O$. Thus, $O=X$. \end{proof} \begin{theorem} If $X$ is a regular $\mathrm{SP}$-scattered metacompact space, then $X_\delta$ is metacompact. \end{theorem} \begin{proof} Let $\mathscr{C}$ be an open cover of $X_\delta$ and consider the set $O$ whose elements are all $x\in X$ such that $x\in\mathrm{int}_\tau(\bigcup\mathscr{C}')$ for some pointwise finite open partial refinement $\mathscr{C}'$ of $\mathscr{C}$ in $X_\delta$. Similarly to what was done in Theorem \ref{2}, we can get, from the assumption $O=X$, a pointwise finite open refinement of $\mathscr{C}$ in $X_\delta$. We complete the proof by showing that $O=X$. Suppose that $O\ne X$. Since $\langle X,\tau\rangle$ is $\mathrm{SP}$-scattered and regular, by Proposition \ref{HRW2007:2.8}, there are a point $y\in X\setminus O$ and an open neighborhood $U$ of $y$ in $X$ such that $(X\setminus O)\cap U$ is a $P$-subspace of $X$. Take $C_y\in\mathscr{C}$ such that $y\in C_y$.
We can suppose that $C_y=\bigcap\{\,U_n:n\in\omega\,\}$, where for each $n\in\omega$, $U_n\in\tau$ and $\mathrm{cl}_\tau(U_{n+1})\subseteq U_n\subseteq\mathrm{cl}_\tau(U_n)\subseteq U$. Note that $C_y\cup O$ is an open subset of $X$. Since $X$ is regular, $y$ has an open neighborhood $U_y$ in $X$ such that $\mathrm{cl}_\tau(U_y)\subseteq(C_y\cup O)\cap U_0$. Fix $n\in\omega$ and let $F_n=\mathrm{cl}_\tau(U_y)\setminus U_{n+1}$. As $F_n\subseteq O$, for each $x\in F_n$, there exists a pointwise finite open partial refinement $\mathscr{C}_x$ of $\mathscr{C}$ in $X_\delta$ such that $x\in V_x=\mathrm{int}_\tau(\bigcup\mathscr{C}_x)$. Since $\langle X,\tau\rangle$ is metacompact and $F_n$ is closed, $\mathcal{V}=\{\,V_x:x\in F_n\,\}$ has a pointwise finite open partial refinement $\mathcal{W}$ which covers $F_n$. We can suppose that $\mathcal{W}=\{\,W_i:i\in I\,\}$, where $W_i\ne W_j$ whenever $i\ne j$. For each $i\in I$, choose $x_i\in F_n$ such that $W_i\subseteq V_{x_i}$. Consider the family \[ \mathscr{D}_n=\{\,W_i\cap C\cap(U_n\setminus\mathrm{cl}_\tau(U_{n+2})):i\in I\text{ and }C\in\mathscr{C}_{x_i}\,\}. \] Note that each $\mathscr{D}_n$ is a pointwise finite open partial refinement of $\mathscr{C}$ in $X_\delta$ such that $F_n\cap\left(U_n\setminus \mathrm{cl}_\tau(U_{n+2})\right)\subseteq\bigcup\mathscr{D}_n\subseteq U_n\setminus \mathrm{cl}_\tau(U_{n+2})$. Therefore, \[ \mathscr{C}'=\{C_y\}\cup\bigcup\{\,\mathscr{D}_n:n\in\omega\,\} \] is a pointwise finite open partial refinement of $\mathscr{C}$ in $X_\delta$ such that $y\in U_y\subseteq\mathrm{int}_\tau(\bigcup\mathscr{C}')$, contradicting the fact that $y\notin O$. \end{proof} \begin{theorem} If $X$ is a regular $\mathrm{SP}$-scattered linearly Lindel\"of space, then $X_\delta$ is linearly Lindel\"of. \end{theorem} \begin{proof} Let $\mathscr{C}=\{\,C_\alpha:\alpha<\kappa\,\}$ be an open cover of $X_\delta$, where $\kappa$ is an uncountable regular cardinal. For each $\alpha<\kappa$, let \[ V_\alpha=\mathrm{int}_\tau\left(\bigcup\{\,C_\beta:\beta\leq\alpha\,\}\right).
\] Define \[ O=\left\{\,x\in X:\text{ there exists $\alpha(x)<\kappa$ such that $x\in V_{\alpha(x)}$}\,\right\}. \] \begin{claim*} $O=X$. \end{claim*} \begin{proof of claim} Suppose on the contrary that $O\ne X$. As $\langle X,\tau\rangle$ is an $\mathrm{SP}$-scattered regular space, by Proposition \ref{HRW2007:2.8}, there are $y\in X\setminus O$ and an open neighborhood $U$ of $y$ in $X$ such that $(X\setminus O)\cap U$ is a $P$-subspace of $X$. Choose $\alpha_y<\kappa$ such that $y\in C_{\alpha_y}$. We can suppose that $C_{\alpha_y}=\bigcap\{\,U_n:n\in\omega\,\}$, where, for each $n\in\omega$, $U_n\in\tau$ and $\mathrm{cl}_\tau(U_n)\subseteq U$. Note that $C_{\alpha_y}\cup O$ is an open subset of $X$. Since $X$ is regular, $y$ has an open neighborhood $U_y$ in $X$ such that $\mathrm{cl}_\tau(U_y)\subseteq C_{\alpha_y}\cup O$. Fix $n\in\omega$. Let $F_n=\mathrm{cl}_\tau(U_y)\setminus U_n$. Note that $F_n$ is a closed subset of $X$ and so it is linearly Lindel\"of. Moreover, $F_n\subseteq O$; this implies that, for each $x\in F_n$, we can take $\alpha(x)<\kappa$ such that $x\in V_{\alpha(x)}$. Then $\left\{\,V_{\alpha(x)}:x\in F_n\,\right\}$ is a family of open subsets of $X$ which covers $F_n$ and is linearly ordered by inclusion. Therefore, there is a countable subset $E_n\subseteq F_n$ such that $\left\{\,V_{\alpha(x)}:x\in E_n\,\right\}$ covers $F_n$. So, $\alpha=\sup\left(\{\alpha_y\}\cup\{\,\alpha(x):x\in E_n,\ n\in\omega\,\}\right)<\kappa$ and \[ y\in U_y\subseteq C_{\alpha_y}\cup\bigcup\{\,F_n:n\in\omega\,\}\subseteq\bigcup\{\,C_\beta:\beta\leq\alpha\,\}. \] Then $y\in V_\alpha$ and, thus, $y\in O$. This is a contradiction. \end{proof of claim} By the claim above, $\mathcal{V}=\{\,V_\alpha:\alpha<\kappa\,\}$ is an open cover of $\langle X,\tau\rangle$. Since $\langle X,\tau\rangle$ is a linearly Lindel\"of space, $\mathcal{V}$ has a subcover whose cardinality is less than $\kappa$. Because $\kappa$ is regular, $V_\alpha=X$ for some $\alpha<\kappa$.
Thus, $\{\,C_\beta:\beta\leq\alpha\,\}$ is a subcover of $\mathscr{C}$ whose cardinality is less than $\kappa$. \end{proof} \section{Other generalizations of scattered} Clearly, if a regular Lindel\"of space $X$ is a countable union of scattered closed subspaces, then $X_\delta$ is Lindel\"of. As we shall see, at least consistently, this is not the case when it is not required that the subspaces are closed. A space is \textbf{$\sigma$-scattered} if it is a union of a countable family of scattered subspaces. \begin{example}\label{3} Assuming $\mathsf{CH}$, there exists a regular $\sigma$-scattered Lindel\"of space whose $G_\delta$-refinement is not Lindel\"of. \end{example} \begin{proof} It is enough to take a Luzin subset of the real line containing the rational numbers and consider it as a subspace of the Michael line. \end{proof} \begin{question} Is there, in $\mathsf{ZFC}$, a regular $\sigma$-scattered Lindel\"of space whose $G_\delta$-refinement is not Lindel\"of? \end{question} \begin{question} Is there a regular $\sigma$-scattered paracompact space whose $G_\delta$-refinement is not paracompact? \end{question} Hdeib and Pareek introduced in \cite{HP1989} the following natural generalization of scattered spaces: a space $X$ is \textbf{$\omega$-scattered} if, for each non-empty subset $A$ of $X$, there exist a point $x\in A$ and an open neighborhood $U_x$ of $x$ such that $U_x\cap A$ is countable. Every scattered space is $\omega$-scattered, but the converse does not hold: the set of rational numbers with the usual topology is $\omega$-scattered and non-scattered. Theorem 3.12 of \cite{HP1989} states that in the class of regular $\omega$-scattered spaces the Lindel\"of property is preserved by $G_\delta$-refinements. However, this is not true, since the space of Example \ref{3} is $\omega$-scattered. \begin{question} Is there a Hausdorff $\omega$-scattered paracompact space $X$ such that $X_\delta$ is not paracompact?
\end{question} A space $X$ is \textbf{$N$-scattered} if every nowhere dense subset of $X$ is a scattered subspace of $X$. The next example was noticed by Santi Spadaro. \begin{example} Assuming $\mathsf{CH}$, there exists an $N$-scattered Lindel\"of space whose $G_\delta$-refinement is not Lindel\"of. \end{example} \begin{proof} Let $\mathcal{M}$ be the family of all Lebesgue measurable subsets of the real line. For each $E\in\mathcal{M}$, define \[ \Phi(E)=\left\{\,x\in\mathbb{R}:\lim_{h\to 0}\frac{m\left(E\cap{]x-h,x+h[}\right)}{2h}=1\,\right\}. \] Then \[ \tau_d=\left\{\,E\in\mathcal{M}:E\subseteq\Phi(E)\,\right\} \] is a topology on $\mathbb{R}$ finer than the usual one, known as the \textit{density topology}. Denote by $\mathbb{R}_d$ the topological space $\langle\mathbb{R},\tau_d\rangle$. By Corollary 4.3 of \cite{Tal1976}, $\mathsf{CH}$ implies that $\mathbb{R}_d$ has a hereditarily Lindel\"of, non-separable, regular and Baire subspace $X$. By Theorem 2.7 of \cite{Tal1976}, every nowhere dense subset of $X$ is discrete (and closed). Therefore, $X$ is $N$-scattered. On the other hand, the pseudocharacter of $X$ is countable, since $X$ is Hausdorff and hereditarily Lindel\"of. Then $X_\delta$ is discrete and uncountable and, thus, it is not Lindel\"of. \end{proof} \begin{question} Is there a Hausdorff paracompact $N$-scattered space whose $G_\delta$-refinement is not a paracompact space? \end{question} \section{The tightness of \texorpdfstring{$G_\delta$}{Gdelta}-refinement of \texorpdfstring{$\sigma$}{sigma}-products} Given a family of topological spaces $\{\,X_i:i\in I\,\}$ and a point $x^\ast\in X=\prod\{\,X_i:{i\in I}\,\}$, define \[ \sigma=\sigma(X)=\sigma(X,x^\ast)=\left\{\,x\in \prod_{i\in I}X_i:\mathrm{supp}(x)\text{ is finite}\,\right\}, \] where $\mathrm{supp}(x)=\{\,i\in I:x(i)\ne x^\ast(i)\,\}$.
The \textit{$\sigma$-product of $X$ at $x^\ast$} is the set $\sigma$ equipped with the topology induced by the Tychonoff product $\prod\{\,X_i:{i\in I}\,\}$\@. In \cite{JSSW}, Juh\'asz, Soukup, Szentmikl\'ossy and Weiss proved: \begin{theorem} Let $\kappa$ and $\lambda$ be cardinals, with $\kappa\leq\aleph_1$. Let $X$ be the one-point Lindel\"ofication of a discrete space of cardinality $\kappa$ by a point $p$ and let $x^\ast\in X^\kappa$, where $x^\ast(\alpha)=p$ for all $\alpha<\kappa$. Then $\left(\sigma(X^\kappa,x^\ast)\right)_\delta$ has tightness $\aleph_1$. \end{theorem} In the same article, it was asked: \begin{question} Assume that $X$ is a Lindel\"of $P$-space such that $t(X)=\aleph_1$. Is it true that \[t(\sigma(X^\kappa)_\delta)=\aleph_1\] for every cardinal $\kappa$? \end{question} We will see that the answer is positive. \begin{lemma}[\cite{DW1986}]\label{160} If $\{\,X_i:i\leq n\,\}$ is a finite family of regular locally Lindel\"of $P$-spaces, then \[ t\left(\prod\{\,X_i:i\leq n\,\}\right)=\max\{\,t(X_i):i\leq n\,\}. \] \end{lemma} \begin{lemma}\label{4} If $X=\sigma\{\,X_n:n\in\omega\,\}$ is a $\sigma$-product of regular locally Lindel\"of $P$-spaces, then \[ t\left(X_\delta\right)=\sup\{\,t(X_n):n\in\omega\,\}. \] \end{lemma} \begin{proof} Let $\lambda=\sup\{\,t(X_n):n\in\omega\,\}$. Let $Y$ be a non-closed subset of $\sigma_\delta=X_\delta$ and let $q\in \mathrm{cl}(Y)\setminus Y$. For each $n\in\omega$, let \[ Y_n=\{\,y\in Y:\mathrm{supp}(y)\subseteq n\,\}. \] Since $Y=\bigcup\{\,Y_n:n\in\omega\,\}$ and $X_\delta$ is a $P$-space, there exists $m\in\omega$ such that $q\in\mathrm{cl}(Y_m)$. Now, $\pi_m(q)\in\mathrm{cl}\left(\pi_{m}[Y_m]\right)$, where $\pi_m$ is the natural projection from $\prod\{\,X_n:n\in\omega\,\}$ onto $\prod\{\,X_i:i\in m\,\}$.
Since, by Lemma \ref{160}, the tightness of $\prod\{\,X_i:i\in m\,\}$ is $\leq\lambda$, there exists $Z'\subseteq \pi_{m}[Y_m]$ of cardinality $\leq\lambda$ such that $\pi_m(q)\in\mathrm{cl}(Z')$. Then $Z=Z'\times\prod\{\,\{x^\ast(n)\}:n\geq m\,\}\subseteq Y_m\subseteq Y$ and since $\mathrm{supp}(q)\subseteq m$, we have $q\in\mathrm{cl}(Z)$. \end{proof} \begin{lemma}\label{161} Let $\kappa$ be an infinite cardinal. Let $X=\prod\{\,X_\alpha:\alpha<\kappa\,\}$. If for each countable subset $I\subseteq \kappa$, $\sigma\{\,X_\alpha:\alpha\in I\,\}_\delta$ has tightness $\leq\lambda$, with $\mathrm{Cov}_\omega(\lambda)=\lambda$, then $\sigma(X)_\delta$ has tightness $\leq\lambda$. \end{lemma} \begin{proof} Let $Y$ be a non-closed subset of $\sigma_\delta=\sigma(X,x^\ast)_\delta$ and let $q\in\mathrm{cl}(Y)\setminus Y$. By Theorem \ref{1}, there exists an $\omega$-covering elementary submodel $M$ of cardinality $\lambda$ such that $\{X,x^\ast,\kappa,\lambda,q,Y\}\cup\lambda\subseteq M$. We are going to show that $q\in\mathrm{cl}(Y\cap M)$. Suppose that \[ q\in U=\prod\{\,U_\alpha:\alpha\in I\,\}\times \prod\{\,X_\alpha:\alpha\in\kappa\setminus I\,\}, \] where $I$ is a countable subset of $\kappa$ and each $U_\alpha$ is an open subset of $X_\alpha$\@. Since $M$ is $\omega$-covering, there exists $J\in M$, a countable subset of $\kappa$, such that $I\cap M\subseteq J$. Now note that $\pi_J(q)\in \mathrm{cl}(\pi_J[Y])$ and $\pi_J[Y]\in M$; moreover, $\pi_J[\sigma(X,x^\ast)_\delta]$ belongs to $M$ and, by hypothesis, its tightness is $\leq\lambda$. So, by elementarity, there exists $Z\in M$, a subset of $\pi_J[Y]$ whose cardinality is at most $\lambda$, such that $\pi_J(q)\in\mathrm{cl}(Z)$. Let $z\in \pi_J[U]\cap Z$. Note that $z\in M$: since $Z\in M$, $|Z|\leq\lambda$ and $\lambda\subseteq M$, we have $Z\subseteq M$. So, by elementarity, there exists $y\in Y\cap M$ such that $\pi_J(y)=z$. We claim that $y\in U$.
Indeed, since $\mathrm{supp}(y)$, $\mathrm{supp}(q)\subseteq M$, it follows that if $\alpha\in I\setminus M$, then $y(\alpha)=x^\ast(\alpha)=q(\alpha)\in U_\alpha$\@. On the other hand, if $\alpha\in I\cap M$, then $\alpha\in J$ and thus $y(\alpha)=z(\alpha)\in U_\alpha\cap M$\@. Therefore, $y\in \prod\{\,U_\alpha:{\alpha\in I}\,\}\times\prod\{\,X_\alpha:\alpha\in\kappa\setminus I\,\}$\@. \end{proof} \begin{theorem} If $X=\prod\{\,X_\alpha:\alpha<\kappa\,\}$, with each $X_\alpha$ being a Lindel\"of $P$-space such that $t(X_\alpha)\leq\lambda$, with $\mathrm{Cov}_\omega(\lambda)=\lambda$, then \[ t\left(\sigma\left(X\right)_\delta\right)\leq \lambda. \] In particular, if $X$ is a Lindel\"of $P$-space whose tightness is $\aleph_n$, then the tightness of $\sigma\left(X^\kappa, x^\ast\right)_\delta$ is $\aleph_n$. \end{theorem} As a corollary of the previous theorem we have that, for a regular Lindel\"of $P$-space $X$, \[ t\left(\sigma\left(X^\kappa\right)_\delta\right)\leq t(X)^{\aleph_0}. \] It remains to be seen whether: \begin{question} Is there a Lindel\"of $P$-space $X$ such that $t(X)=\lambda$, with $\mathrm{Cov}_\omega(\lambda)>\lambda$, and \[ t\left(\sigma\left(X^\kappa\right)_\delta\right)>\lambda? \] \end{question} \begin{question} Assuming that $\mathrm{Cov}_\omega(\aleph_\omega)=\aleph_{\omega+1}$, is there a Lindel\"of $P$-space $X$ such that $t(X)=\aleph_\omega$ and \[ t\left(\sigma\left(X^\kappa\right)_\delta\right)=\aleph_{\omega+1}? \] \end{question} Based on Theorem 3.1 of \cite{DW1986}, we have: \begin{lemma}\label{6} Let $\lambda$ be an infinite cardinal. If $X=\sigma\{\,X_\alpha:\alpha<\lambda\,\}$ is a $\sigma$-product of regular Lindel\"of $P$-spaces, then \[ t\left(X_\delta\right)\leq \mathrm{Cov}_\omega(\lambda)\cdot \sup\{\,t(X_\alpha):\alpha<\lambda\,\}. \] \end{lemma} \begin{proof} Let $\kappa=\mathrm{Cov}_\omega(\lambda)\cdot \sup\{\,t(X_\alpha):\alpha<\lambda\,\}$.
For each $I\subseteq \lambda$, let $\sigma_I=\sigma\{\,X_\alpha:\alpha\in I\,\}_\delta$. Suppose that $A\subseteq \sigma_\delta$ is $\kappa$-closed, and $a\in\mathrm{cl}_{\sigma_\delta}(A)$. Note that, for each countable subset $J\subseteq \lambda$, $\pi_J[\mathrm{cl}_{\sigma(\lambda)}(A)]\subseteq \mathrm{cl}_{\sigma_J}(\pi_J[A])$. Indeed, let $x\in \pi_J[\mathrm{cl}_{\sigma(\lambda)}(A)]$. Then there exists $z\in \mathrm{cl}_{\sigma(\lambda)}(A)$ such that $\pi_J(z)=x$. If $\prod\{\,U_j:j\in J\,\}$ is a basic neighborhood of $x$ in $\sigma_J$, then $\prod\{\,U_j:j\in J\,\}\times\prod\{\,X_\alpha:\alpha\in{\lambda\setminus J}\,\}$ is an open neighborhood of $z$. So $\left(\prod\{\,U_j:j\in J\,\}\times \prod\{\,X_\alpha:\alpha\in{\lambda\setminus J}\,\}\right)\cap A\ne \emptyset$ and, thus, $\left(\prod\{\,U_j:j\in J\,\}\right)\cap\pi_J[A]\ne \emptyset$. Therefore, $x\in \mathrm{cl}_{\sigma_J}(\pi_J[A])$. Then, for each countable subset $J\subseteq \lambda$, $\pi_J(a)\in \mathrm{cl}_{\sigma_J}(\pi_J[A])$. By Lemma \ref{4}, $t(\sigma_J)\leq\kappa$, so we can take $B_J\subseteq \pi_J[A]$ of cardinality $\leq \kappa$ such that $\pi_J(a)\in\mathrm{cl}_{\sigma_J}(B_J)$. For each $b\in B_J$ choose $x_b\in A$ such that $\pi_J(x_b)=b$, and let $C_J=\{\,x_b:b\in B_J\,\}$. Now, let $\mathcal{J}$ be a cofinal family in $[\lambda]^{\aleph_0}$ and let \[C=\bigcup\{\,C_J:J\in\mathcal{J}\,\}.\] Note that $|C|\leq \mathrm{Cov}_\omega(\lambda)\cdot\kappa=\kappa$. Then $\mathrm{cl}_{\sigma(\lambda)}(C)\subseteq A$. So, it remains to be proved that $a\in \mathrm{cl}_{\sigma(\lambda)}(C)$. Let $U=\prod\{\,U_j:{j\in J'}\,\}\times\prod\{\,X_\alpha:\alpha\in{\lambda\setminus J'}\,\}$ be a basic neighborhood of $a$ in $\sigma(\lambda)$. Let $J\in\mathcal{J}$ be such that $J'\subseteq J$. Since $\pi_J(a)\in \mathrm{cl}_{\sigma_J}(B_J)=\mathrm{cl}_{\sigma_J}\left(\pi_J[C_J]\right)$, then $\pi_J[U]\cap\pi_J[C_J]\ne\emptyset$; so $U\cap C_J\ne\emptyset$. Therefore, $U\cap C\ne\emptyset$.
\end{proof} In the same way as we proved Lemma \ref{4}, we can show the following result for the cases in which $\mathrm{Cov}_\omega(t(X))>t(X)$: \begin{lemma}\label{5} If $X$ is a Lindel\"of $P$-space then \[ t(\sigma(X^{\aleph_\omega},x^\ast)_\delta)=t(X). \] \end{lemma} \begin{proof} Let $\kappa=t(X)$. Suppose that $\mathrm{Cov}_\omega(\kappa)>\kappa$. Note that $\kappa\geq\aleph_\omega$. Let $Y$ be a non-closed subset of $\sigma(X^{\aleph_\omega},x^\ast)_\delta$ and let $q\in \mathrm{cl}(Y)\setminus Y$. For each $n\in\omega$, let \[ Y_n=\{\,y\in Y:\mathrm{supp}(y)\subseteq \omega_n\,\}. \] Since $Y=\bigcup\{\,Y_n:n\in\omega\,\}$ and $\sigma(X^{\aleph_\omega},x^\ast)_\delta$ is a $P$-space, there exists $m\in\omega$ such that $q\in\mathrm{cl}(Y_m)$. Now, $\pi_m(q)\in\mathrm{cl}\left(\pi_{m}[Y_m]\right)$, where $\pi_m$ is the natural projection from $X^{\aleph_\omega}$ onto $X^{\aleph_m}$. Since, by Lemma \ref{6}, the tightness of $X^{\aleph_m}$ is $\kappa$, there exists $Z'\subseteq \pi_{m}[Y_m]$ of cardinality $\leq\kappa$ such that $\pi_m(q)\in\mathrm{cl}(Z')$. Then $Z=Z'\times\prod\{\,\{x^\ast(\alpha)\}:\omega_m\leq\alpha<\omega_\omega\,\}\subseteq Y_m\subseteq Y$ and since $\mathrm{supp}(q)\subseteq \omega_m$, we have $q\in\mathrm{cl}(Z)$. \end{proof} \begin{bibdiv} \begin{biblist} \bib{JSSW}{article}{ title={Lindel\"of $P$-spaces}, author={I. Juh\'asz}, author={L. Soukup}, author={Z. Szentmikl\'ossy}, author={W. Weiss}, journal={Preprint} } \bib{Pas2007}{thesis}{ title={Extens\~{o}es de submodelos elementares por forcing}, author={M. D. Passos}, type={Ph.D. Thesis}, organization={Universidade de S\~ao Paulo}, date={2007} } \bib{LR1981}{article}{ title={Normal $P$-spaces and the $G_\delta$-topology}, author={R. Levy}, author={M. D. Rice}, journal={Colloquium Mathematicum}, volume = {44}, number = {2}, date={1981}, pages={227--240} } \bib{Mis1972}{article}{ author = {A. K.
Misra}, journal = {Topology and its Applications}, pages = {349--362}, title = {{A topological view of $P$-spaces}}, volume = {2}, year = {1972} } \bib{HRW2007}{article}{ author = {M. Henriksen}, author = {R. Raphael}, author = {R. G. Woods}, issn = {0010-2628}, journal = {Commentationes Mathematicae Universitatis Carolinae}, number = {3}, pages = {487--505}, title = {SP-scattered spaces; a new generalization of scattered spaces}, volume = {48}, year = {2007} } \bib{HP1989}{article}{ author = {H. Z. Hdeib}, author = {C. M. Pareek}, issn = {0146-4124}, journal = {Topology Proceedings}, pages = {59--74}, title = {A generalization of scattered spaces}, volume = {14}, year = {1989} } \bib{Tal1976}{article}{ author = {F. D. Tall}, journal = {Pacific Journal of Mathematics}, title = {The density topology}, volume = {62}, number = {1}, pages = {275--284}, year = {1976} } \bib{DW1986}{article}{ author = {U. N. B. Dissanayake}, author = {S. W. Willard}, journal = {Proceedings of the American Mathematical Society}, title = {Tightness in product spaces}, volume = {96}, number = {1}, pages = {136--140}, year = {1986} } \end{biblist} \end{bibdiv} \end{document}
\begin{document} \preprint{APS/123-QED} \title{ Measurement of the energy relaxation time of quantum states in quantum annealing with a D-Wave machine } \author{Takashi Imoto} \email{[email protected]} \affiliation{Research Center for Emerging Computing Technologies, National Institute of Advanced Industrial Science and Technology (AIST), 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568, Japan.} \author{Yuki Susa} \email{[email protected]} \affiliation{Secure System Platform Research Laboratories, NEC Corporation, Kawasaki, Kanagawa 211-8666, Japan} \affiliation{NEC-AIST Quantum Technology Cooperative Research Laboratory, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568, Japan} \author{Tadashi Kadowaki} \email{[email protected]} \affiliation{DENSO CORPORATION, Kounan, Minato-ku, Tokyo 108-0075, Japan} \affiliation{Research Center for Emerging Computing Technologies, National Institute of Advanced Industrial Science and Technology (AIST), 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568, Japan.} \author{Ryoji Miyazaki} \email{[email protected]} \affiliation{Secure System Platform Research Laboratories, NEC Corporation, Kawasaki, Kanagawa 211-8666, Japan} \affiliation{NEC-AIST Quantum Technology Cooperative Research Laboratory, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568, Japan} \author{Yuichiro Matsuzaki} \email{[email protected]} \affiliation{Research Center for Emerging Computing Technologies, National Institute of Advanced Industrial Science and Technology (AIST), 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568, Japan.} \affiliation{NEC-AIST Quantum Technology Cooperative Research Laboratory, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568, Japan} \date{\today} \begin{abstract} Quantum annealing has been demonstrated with superconducting qubits. 
Such a quantum annealer has been used to solve combinatorial optimization problems and is also useful as a quantum simulator to investigate the properties of quantum many-body systems. However, the coherence properties of actual devices provided by D-Wave Quantum Inc. are not sufficiently explored. Here, we propose and demonstrate a method to measure the coherence time of the excited state in quantum annealing with the D-Wave device. More specifically, we investigate the energy relaxation time of the first excited states of a fully connected Ising model with a transverse field. We find that the energy relaxation time of the excited states of the model is orders of magnitude longer than that of the excited state of a single qubit, and we qualitatively explain this phenomenon by using a theoretical model. The reported technique provides new possibilities to explore the decoherence properties of quantum many-body systems with the D-Wave machine. \end{abstract} \maketitle \section{Introduction} \begin{figure*} \caption{(a) We illustrate the energy level diagram of the fully connected Ising model with the transverse field. For the small transverse magnetic field, we can use the lowest-order perturbation theory and obtain $\ket{E_{0}}$.\label{fig:long-lived-fig}} \end{figure*} There have been significant developments in quantum annealing (QA). QA is useful to solve combinatorial optimization problems \cite{kadowaki1998quantum, farhi2000quantum, farhi2001quantum}. D-Wave Quantum Inc. has developed a quantum annealing machine consisting of thousands of qubits \cite{johnson2011quantum}. It is possible to map the combinatorial optimization problem into the search for the ground state of the Ising Hamiltonian \cite{lucas2014ising, lechner2015quantum}.
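As a toy illustration of this mapping (not part of the experiments reported in this paper), the MAX-CUT problem on a hypothetical four-vertex graph can be encoded as the ground-state search of an antiferromagnetic Ising Hamiltonian; the graph, variable names, and brute-force search below are our own illustrative choices, with exhaustive enumeration standing in for the annealer:

```python
import itertools

# Hypothetical 4-vertex graph: maximize the number of edges cut by a
# two-coloring.  Each vertex i carries a spin s_i = +/-1 labeling its side
# of the cut; a cut edge (i, j) contributes s_i * s_j = -1 to the energy,
# so minimizing H = sum_{(i,j)} s_i s_j maximizes the cut.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def ising_energy(spins):
    """Ising energy of a spin configuration."""
    return sum(spins[i] * spins[j] for i, j in edges)

def cut_size(spins):
    """Number of edges whose endpoints lie on opposite sides."""
    return sum(1 for i, j in edges if spins[i] != spins[j])

# Exhaustive search over the 2^4 spin configurations plays the role of
# finding the Ising ground state.
best = min(itertools.product([-1, 1], repeat=4), key=ising_energy)
print(best, ising_energy(best), cut_size(best))
```

The triangle $0$-$1$-$2$ makes a full cut of all five edges impossible, so the Ising ground state realizes the maximum cut of four edges.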
Efficient clustering and machine learning using QA have been reported \cite{kurihara2014quantum, kumar2018quantum, amin2015searching, neven2008training, korenkevych2016benchmarking, benedetti2017quantum, willsch2020support, wilson2021quantum}. Moreover, QA can be used for topological data analysis (TDA) \cite{berwald2018computing}. Furthermore, QA is used not only for solving problems but also as a quantum simulator of quantum many-body systems in and out of equilibrium \cite{king2018observation, kairys2020simulating, harris2018phase, zhou2021experimental}. There is a great interest in whether the D-Wave machines exploit quantum properties, and recent experiments with a D-Wave machine have confirmed good agreement with theoretical predictions based on Schr\"odinger dynamics \cite{king2022coherent, king2022quantum}. Focusing on this point, an attempt has been reported to use longitudinal magnetic fields to probe the dynamics of a qubit and distinguish the effects of noise in the D-Wave machine \cite{morrell2022signatures}. However, the coherence properties of actual devices provided by D-Wave are not sufficiently explored, and further studies are still needed. In particular, since an eigenstate of the Hamiltonian provided by the D-Wave machine can be entangled, the coherence properties of entangled states need to be clarified. In this paper, we propose and demonstrate a method to measure the energy relaxation time (called $T_1$) of the excited states during QA by using the D-Wave machine. We adopt a fully connected Ising model of four qubits for the problem Hamiltonian of QA. We show that the energy relaxation time of the first excited states of the model is much longer than that of a single qubit. Our numerical simulations show that the first excited state during QA for the model is entangled, and so our experimental results indicate that a long-lived entangled state can be generated during QA with the D-Wave machine.
Also, the energy diagram illustrated in Fig.~\ref{fig:long-lived-fig}(a) and (b) theoretically shows that the fully connected Ising model can lead to an improved energy relaxation time. \section{Experimental Results using D-Wave machine}\subsection{$T_1$ Measurement Setup} Let us explain our method to measure the energy relaxation time of the excited states during QA by using the D-Wave machine. We perform the reverse quantum annealing (RQA) with a hot start where the initial state is the first excited state of the problem Hamiltonian, and we investigate the energy relaxation time of the state during RQA. We remark that, throughout this paper, we use the D-Wave Advantage system 6.1 for the demonstration. Although embedding techniques have been developed for a large number of qubits \cite{yang2016graph, okada2019improving, boothby2020next, cai2014practical, klymko2014adiabatic, boothby2016fast, zaribafiyan2017systematic}, we employ a fully connected Ising model with 4 qubits, which is the maximum system size without embedding. Moreover, we use several varieties of qubits in the D-Wave machine, and so we believe that our results are not limited to specific qubits but can be applied to general qubits in the D-Wave machine. The Hamiltonian is given as follows: \begin{align} H(k)&=A(k)H_{D}+B(k)\biggl(g(k)H_{L}+H_{P}\biggr)\label{eq:gen_ann_ham}\\ H_{P}&=-J\sum_{j<l}\sigma_{j}^{(z)}\sigma_{l}^{(z)}\\ H_{L}&=\frac{h}{2}\sum_{j=1}^{N}\sigma_{j}^{(z)}\\ H_{D}&=-\Gamma\sum_{j=1}^{N}\sigma_{j}^{(x)}, \end{align} where $H_{P}$ denotes the problem Hamiltonian, $H_{D}$ denotes the drive Hamiltonian, $H_{L}$ denotes the Hamiltonian of the longitudinal magnetic field, $\Gamma$ ($h$) denotes the amplitude of the transverse (longitudinal) magnetic field, $J$ denotes the coupling strength, and $k$ denotes the time-dependent parameter to control the schedule of QA. We set $A(k)=1-k$, $B(k)=g(k)=k$. Throughout our paper, we set $h=1$ and $\Gamma=1$ unless otherwise mentioned.
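A minimal numerical sketch of this Hamiltonian can be written with dense Pauli matrices; the following (our own illustrative helper names, assuming the fully connected case $N=4$, $J=1$, $h=\Gamma=1$ discussed in the text) builds $H(k)$ and checks that, at $k=1$, the all-up state $\ket{\uparrow\uparrow\uparrow\uparrow}$ is indeed the first excited state of the classical problem Hamiltonian:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.diag([1., -1.])  # |0> = |up> has eigenvalue +1

def op(single, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit register."""
    return reduce(np.kron, [single if i == site else I2 for i in range(n)])

N, J, h, Gamma = 4, 1.0, 1.0, 1.0
HP = -J * sum(op(sz, j, N) @ op(sz, l, N)
              for j in range(N) for l in range(j + 1, N))
HL = (h / 2) * sum(op(sz, j, N) for j in range(N))
HD = -Gamma * sum(op(sx, j, N) for j in range(N))

def H(k):
    """Annealing Hamiltonian with the schedule A(k)=1-k, B(k)=g(k)=k."""
    return (1 - k) * HD + k * (k * HL + HP)

# At k = 1 the drive is off and H is diagonal; index 0 is |up,up,up,up>.
evals = np.sort(np.diag(H(1.0)))
print(evals[:3], H(1.0)[0, 0])
```

With these parameters the all-down state is the ground state at energy $-8$, and the all-up state sits at energy $-4$, separated by a gap of $3$ from the next level, consistent with choosing it as the initial state of the RQA protocol.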
Also, when we consider a fully connected Ising model, we set $J=1$ and $N=4$. On the other hand, when we consider a non-interacting model (or a single-qubit model), we set $J=0$ and $N=1$ for simplicity. We choose the first excited state of the problem Hamiltonian as an initial state. For the fully connected Ising model, the all-up state $\ket{\uparrow\uparrow\uparrow\uparrow}$ is the first excited state. The schedule of the Hamiltonian is described as follows (see Fig.~\ref{fig:long-lived-fig}(c)). Firstly, from $t=0$ to $t=t_1$, we change the parameter of the Hamiltonian from $k=1$ to $k=h_{d}$ as a linear function of $t$, where we can control the strength of the transverse magnetic field by tuning $h_d$. Secondly, from $t=t_1$ to $t=t_1 + t_2$, we let the Hamiltonian remain in the same form at $k=h_{d}$. Thirdly, from $t=t_1 + t_2$ to $t=t_1 + t_2 +t_3$, we gradually change the parameter of the Hamiltonian from $k=h_{d}$ to $k=1$. Fourthly, we measure the state with a computational basis. Finally, we repeat this protocol with many different values of $t_2$. We obtain the survival probability of the initial state after these processes to estimate the energy relaxation time. Throughout this paper, the annealing times $t_1$ and $t_3$ are set to be $1$ $\mu$s. \begin{figure*} \caption{(a) The survival probability against $t_2$ for the fully connected Ising model ($J=-1$) with the transverse field. The solid line shows a fitted exponential decay curve. (b) and (c) The survival probability against $t_2$ for the single qubit. The solid line shows a fitted exponential decay curve. We set the fitting function to $a\exp(-t/T_{1})$.\label{fig:D-wave_result}} \end{figure*} \subsection{Experimental Result} By using our method described above, we measure the energy relaxation time of the first excited state. Moreover, we compare the energy relaxation time of the state of the transverse-field Ising model with that of the single-qubit model. Fig.
\ref{fig:D-wave_result} (a) shows the survival probability against $t_2$ in the case of the fully connected Ising model with the transverse field. We can see that the survival probability decays exponentially. Furthermore, as we decrease $h_d$, the energy relaxation time becomes shorter. Fig. \ref{fig:D-wave_result} (b) and (c) show the survival probability of the single-qubit model against $t_2$. Similar to the case of the fully connected Ising model, the coherence time becomes shorter as we decrease $h_{d}$. Importantly, the energy relaxation time of the state of the single-qubit model is shorter than that of the fully connected Ising model with the transverse field. The details of the experimental data about the measured energy relaxation time are shown in Appendix \ref{sec:t1_table}. To gain a deeper understanding of what occurs in the RQA, we separately perform numerical calculations. More specifically, we calculate the entanglement entropy of the first excited state during RQA when the problem Hamiltonian is the fully connected Ising model of four qubits. The entanglement entropy is defined by $-\Tr[\rho_{A}\log\rho_{A}]$, where $\rho_{A}$ denotes a reduced density matrix of two of the qubits. We plot the results in Fig.~\ref{fig:long-lived-fig}(c), and then find that the first excited state during QA is entangled if decoherence is negligible. The calculations also show that the entanglement entropy grows more for smaller $h_d$. Thus our results indicate that the energy relaxation time of the entangled state is longer than that of the separable state. We will discuss the origin of the long-lived state later. \begin{figure} \caption{ The survival probability against $h_d$ for the fully connected Ising model ($J=-1$) with the transverse field and the single-qubit model. Each point is obtained with $100000$ measurements. Also, we set $t_1=1\mu s$, $t_3=1\mu s$, and $h=1.0$. We observe `bump'-like features, which we indicate by using $\bigtriangledown$.
} \label{fig:s_prob_against_hd} \end{figure} In Fig. \ref{fig:s_prob_against_hd}, we plot the survival probability against $h_d$. Again, we confirm that the survival probability in the case of the fully connected Ising model is larger than that of the non-interacting model. Although the survival probability tends to be larger for smaller $h_d$, we observe that the survival probability becomes especially small for $h_d=0.43$ ($h_d=0.79$) for the case of the fully connected Ising model (non-interacting model). As we will discuss later, this is possibly due to noise sources at a specific frequency. \section{Discussion} Let us discuss the possible mechanism of the long-lived entangled states in our experiments. For a superconducting flux qubit composed of three Josephson junctions \cite{chiorescu2003coherent,van2000quantum,mooij1999josephson}, which has been developed for a gate-type quantum computer, the main source of decoherence is the change in the magnetic flux penetrating the superconducting loop \cite{yoshihara2006decoherence,kakuyanagi2007dephasing}. This induces the amplitude fluctuation of $\hat{\sigma}_z$. Moreover, although the coherence time was not directly measured for the conventional RF SQUID, low-frequency flux noise was observed \cite{lanting2009geometrical,harris2010experimental}, and the flux fluctuations induce the change in the amplitude of $\hat{\sigma}_z$. From the observations mentioned above, we infer that the qubits in the D-Wave machine are also affected by the amplitude fluctuation of $\sigma_z$. The energy relaxation rate of the first excited state typically increases as the transition-matrix element $|\langle g|\hat{\sigma}_j^{(z)}|e\rangle |$ increases \cite{hornberger2009introduction}, where $|g\rangle$ and $|e\rangle$ denote the ground and first excited state of the Hamiltonian, respectively.
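This comparison can be sketched with a short exact-diagonalization script (a minimal illustration with our own helper names; the value $k=0.9$ below is an illustrative point in the quasi-classical regime, not one of the experimental $h_d$ values):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.diag([1., -1.])

def op(single, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit register."""
    return reduce(np.kron, [single if i == site else I2 for i in range(n)])

def annealing_H(k, n, J, h=1.0, Gamma=1.0):
    """H(k) = (1-k) H_D + k (k H_L + H_P) with the couplings of the text."""
    HP = -J * sum(op(sz, j, n) @ op(sz, l, n)
                  for j in range(n) for l in range(j + 1, n))
    HL = (h / 2) * sum(op(sz, j, n) for j in range(n))
    HD = -Gamma * sum(op(sx, j, n) for j in range(n))
    return (1 - k) * HD + k * (k * HL + HP)

def sz_element(Hmat, n):
    """|<g| sigma_0^z |e>| between the two lowest eigenstates of Hmat."""
    _, vecs = np.linalg.eigh(Hmat)
    return abs(vecs[:, 0] @ op(sz, 0, n) @ vecs[:, 1])

k = 0.9
elem4 = sz_element(annealing_H(k, n=4, J=1.0), 4)  # fully connected model
elem1 = sz_element(annealing_H(k, n=1, J=0.0), 1)  # single-qubit model
print(elem4, elem1)
```

The four-qubit element comes out orders of magnitude smaller than the single-qubit one, in line with the suppressed relaxation observed for the fully connected model.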
We numerically show that the transition-matrix element of the four qubits in the transverse-field Ising model is much smaller than that of a single qubit in the transverse and longitudinal fields Hamiltonian (see Fig. \ref{fig:noise_analysis}). This could be the origin of the long-lived entangled state during RQA. Also, to elucidate the mechanism, we analytically calculate the transition-matrix element by using perturbation theory (see Appendix \ref{sec:perturbation_analysis}). In Fig. \ref{fig:s_prob_against_hd}, we observe `bump'-like features around $h_d=0.43$ ($h_d=0.79$) for the case of the fully connected Ising model (single-qubit model), as we discussed before. The energy relaxation time becomes shorter for these values of $h_d$. For the superconducting qubit, it is known that two-level systems (TLSs) cause decoherence \cite{simmonds2004decoherence,shalibo2010lifetime}, and the energy relaxation time can be smaller when the frequency of the qubit is resonant with that of a TLS \cite{muller2015interacting,klimov2018fluctuations,abdurakhimov2020driven}. We expect that the state during RQA is also affected by TLSs, and the energy relaxation time becomes smaller when the energy of the state is resonant with the TLS. By diagonalizing the Hamiltonian, we show that the energy gap between the ground and first excited states with $h_d=0.43$ for the fully connected Ising model is similar to that with $h_d=0.79$ for the single-qubit model (see Fig. \ref{fig:noise_analysis} (b)). This result is consistent with our expectation that the bump structure is due to TLSs. Importantly, from these observations, our method to measure the energy relaxation time shows a potential to characterize the decoherence source during QA. \begin{figure} \caption{Comparison between four qubits in the transverse-field Ising model and a single qubit in the transverse and longitudinal fields Hamiltonian.
(a) Plot of the transition-matrix element of the noise operator $\sigma^{(z)}$.\label{fig:noise_analysis}} \end{figure} Let us compare our results to measure the energy relaxation time with previous results. In \cite{ozfidan2020demonstration}, the energy relaxation time of an RF-SQUID with nonstoquastic interaction was measured, and the energy relaxation time is 17.5 ns, which is much shorter than that measured with our method by using a D-Wave machine. There are two possible reasons why the energy relaxation time in \cite{ozfidan2020demonstration} is shorter than ours. First, to demonstrate a nonstoquastic Hamiltonian, they couple two qubits via charge and flux degrees of freedom. If there is a local basis in which all off-diagonal elements of the Hamiltonian are real and non-positive, the Hamiltonian is called stoquastic. The Hamiltonian in the D-Wave machine is stoquastic, because only inductive coupling is used. In \cite{ozfidan2020demonstration}, to utilize the charge degree of freedom for the nonstoquastic coupling, a more complicated structure is adopted, and this may decrease the energy relaxation time of the qubit. Second, when the energy relaxation time was measured in \cite{ozfidan2020demonstration}, they set a large tunneling energy corresponding to a transverse magnetic field such as 2 GHz. Here, the energy of the transverse magnetic field is much larger than that of the longitudinal magnetic field. On the other hand, when we measure the energy relaxation time of the single qubit, the transverse magnetic field is smaller than the longitudinal magnetic field, which makes the transition-matrix element $|\langle g|\hat{\sigma}_j^{(z)}|e\rangle |$ smaller. These are the possible reasons why the energy relaxation time measured with our method is much longer than that in \cite{ozfidan2020demonstration}. \section{Conclusion and future work} In conclusion, we propose and demonstrate a method to measure the energy relaxation time of the first excited state during QA by using a D-Wave machine.
We found that the energy relaxation time of the first excited state of the transverse-field Ising model is much longer than that of the single-qubit model. Using numerical simulations, we also find that the first excited state of the model is entangled. These findings indicate a long-lived entangled state during QA performed in the D-Wave machine. The origin of the long-lived entangled state is the small transition-matrix element of the noise operator between the ground state and the first excited state. Our work can be regarded as the first proof-of-principle experiment of a direct measurement of the energy relaxation time of quantum states during QA with the D-Wave machine. Moreover, our results open new possibilities in the D-Wave machine to explore the coherence properties of the excited states of quantum many-body systems. This work was supported by Leading Initiative for Excellent Young Researchers MEXT Japan and JST PRESTO (Grant No. JPMJPR1919) Japan. This paper is partly based on results obtained from a project, JPNP16007, commissioned by the New Energy and Industrial Technology Development Organization (NEDO), Japan. This work was supported by JST Moonshot R\&D (Grant Number JPMJMS226C). \appendix \section{Perturbative analysis for the long-lived qubit}\label{sec:perturbation_analysis} As mentioned in the main text, the energy relaxation time of the first excited state of the fully connected Ising model with the longitudinal and transverse fields is much longer than that of a single-qubit model. In order to elucidate the mechanism of its long energy relaxation time, we adopt perturbation theory and obtain the explicit form of the ground state when the transverse magnetic field is small. This allows us to obtain the transition-matrix element, which is useful to quantify the robustness of the first excited state against decoherence \cite{hornberger2009introduction}.
We rewrite the total Hamiltonian (\ref{eq:gen_ann_ham}) as \begin{eqnarray} H&=&H_0+H'\nonumber \\ H_0&=&H_{P}+H_{L}\nonumber \\ H'&=&\lambda\biggl(H_{D}-H_{P}-2H_{L}+\lambda^{2} H_{L}\biggr), \end{eqnarray} where we set $\lambda=1-k$ and $\Gamma=1$. We can easily diagonalize $H_0$, and we treat $H'$ as a perturbative term. Let us define $\hat{S}^{(a)}=\sum_{j=1}^{4}\hat{\sigma}_{j}^{(a)}\ (a=x,y,z)$ and define $S^{(z)}$ as the eigenvalue of $\hat{S}^{(z)}$. We consider the fully symmetric representation corresponding to the maximum total spin, and this subspace is spanned by the Dicke states. In this subspace, we can specify a state by $S^{(z)}$. Using perturbation theory, we can describe the $n$-th excited state as \begin{align} \ket{\phi_{n}}&=\ket{\phi_{n}^{(0)}}+\lambda\sum_{m\neq n}\frac{\bra{\phi_{m}^{(0)}}H_{D}\ket{\phi_{n}^{(0)}}}{E_{m}^{(0)}-E_{n}^{(0)}}\ket{\phi_{m}^{(0)}} +O(\lambda^{2}),\label{eq:purturbatid_state_formula} \end{align} where $\ket{\phi_{m}^{(0)}}$ ($E_{m}^{(0)}$) is the $m$-th energy eigenstate (eigenenergy) of $H_0$, and we assume $E_{m}^{(0)} \leq E_{m'}^{(0)}$ for $m\leq m'$. Here, we remark that $\ket{\phi_{m}^{(0)}}$ is a computational-basis state, and so $\bra{\phi_{n}^{(0)}}H_{L}\ket{\phi_{m}^{(0)}}=\bra{\phi_{n}^{(0)}}H_{P}\ket{\phi_{m}^{(0)}}=0$ if $m\neq n$. From Fig.~\ref{fig:long-lived-fig} (a), we obtain all the eigenstates as follows: \begin{align} \ket{\phi_{0}^{(0)}}&=\ket{-4}\\ \ket{\phi_{1}^{(0)}}&=\ket{+4}\\ \ket{\phi_{2}^{(0)}}&=\ket{-2}\\ \ket{\phi_{3}^{(0)}}&=\ket{+2}\\ \ket{\phi_{4}^{(0)}}&=\ket{0}. \end{align} Also, each energy eigenvalue is described as follows: \begin{align} E_{0}^{(0)}&=-8J-2h\\ E_{1}^{(0)}&=-8J+2h\\ E_{2}^{(0)}&=-2J-h\\ E_{3}^{(0)}&=-2J+h\\ E_{4}^{(0)}&=0. \end{align} Substituting these values into the formula in Eq.
(\ref{eq:purturbatid_state_formula}), we obtain an explicit form of the ground state and the first excited state as follows: \begin{align} \ket{\phi_{0}}&=\ket{-4}+\frac{\lambda}{6J+h}\ket{-2}+O(\lambda^{2})\\ \ket{\phi_{1}}&=\ket{+4}+\frac{\lambda}{6J-h}\ket{+2}+O(\lambda^{2}). \end{align} Note that these are entangled states. The transition-matrix element between the ground state and the first excited state is given by \begin{align} \bra{\phi_{0}}\hat{\sigma}_{j}^{(z)}\ket{\phi_{1}}=O(\lambda^{2}). \end{align} We can also calculate the transition-matrix elements of the other local operators. We obtain $\bra{\phi_{0}}\hat{\sigma}_{j}^{(x)}\ket{\phi_{1}}=O(\lambda^{2})$ and $\bra{\phi_{0}}\hat{\sigma}_{j}^{(y)}\ket{\phi_{1}}=O(\lambda^{2})$. These show that this excited state is robust against the other local noise operators as well. Similarly, we calculate the eigenvectors of the single-qubit model by using perturbation theory for a small transverse magnetic field. The single-qubit model is given by \begin{align} H_{single}=\hat{\sigma}^{(z)}+\lambda\hat{\sigma}^{(x)}. \end{align} We describe the eigenstates as \begin{align} \ket{\psi_{0}}=\ket{\downarrow}+\lambda\ket{\uparrow}+O(\lambda^{2})\\ \ket{\psi_{1}}=\ket{\uparrow}+\lambda\ket{\downarrow}+O(\lambda^{2}), \end{align} where $\ket{\psi_{0}}$ ($\ket{\psi_{1}}$) denotes the ground state (first excited state) of the single qubit. The transition-matrix element between the ground state and the first excited state is given by \begin{align} \bra{\psi_{0}}\hat{\sigma}^{(z)}\ket{\psi_{1}}=O(\lambda). \end{align} Comparing Eq. (A18) with Eq. (A15), we can confirm that the first excited state of the fully connected Ising model with the longitudinal and transverse fields is more robust against noise represented by $\hat{\sigma}^{(z)}$ than that of the single-qubit model.
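The scaling contrast between the two transition-matrix elements can be checked by exact diagonalization. The following Python sketch is purely illustrative (it is not the code used for the paper): it assumes an unperturbed Hamiltonian $H_0=-(J/2)(\hat{S}^{(z)})^2+(h/2)\hat{S}^{(z)}$ acting in the spin-2 symmetric subspace, which reproduces the five eigenenergies listed above, and uses a pure transverse-field term $\lambda\hat{S}^{(x)}$ as a stand-in for the perturbation $H'$. It then fits the log-log slope of the transition-matrix element against $\lambda$ for both models.

```python
import numpy as np

# Spin-2 (four-qubit fully symmetric) operators: S^(a) = sum_j sigma_j^(a) = 2*J_a.
mvals = np.array([2, 1, 0, -1, -2])           # J_z eigenvalues, basis |m=2>,...,|m=-2>
Jz = np.diag(mvals.astype(float))
Jp = np.zeros((5, 5))
for k in range(1, 5):                          # raising operator: |m> -> |m+1>
    mv = mvals[k]
    Jp[k - 1, k] = np.sqrt(2 * 3 - mv * (mv + 1))
Jx = (Jp + Jp.T) / 2
Sz, Sx = 2 * Jz, 2 * Jx                        # S^(z) has eigenvalues -4,...,+4

# Assumed H0; with these J, h it gives the spectrum -8J-2h, -8J+2h, -2J-h, -2J+h, 0.
J, h = 1.0, 0.5
H0 = -(J / 2) * Sz @ Sz + (h / 2) * Sz

def gap_element(lam, H0, V, O):
    """|<ground| O |first excited>| for H = H0 + lam*V."""
    w, v = np.linalg.eigh(H0 + lam * V)
    return abs(v[:, 0] @ O @ v[:, 1])

lams = np.array([0.02, 0.04, 0.08])
multi = [gap_element(l, H0, Sx, Sz) for l in lams]

# Single qubit: H = sigma_z + lam*sigma_x.
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
single = [gap_element(l, sz, sx, sz) for l in lams]

slope_multi = np.polyfit(np.log(lams), np.log(multi), 1)[0]
slope_single = np.polyfit(np.log(lams), np.log(single), 1)[0]
print(round(slope_multi, 2), round(slope_single, 2))  # the multi-qubit element vanishes faster
```

For small $\lambda$ the fitted slope for the single qubit is close to $1$ (linear in $\lambda$), while the slope for the collective model is well above $2$, consistent with the suppressed relaxation discussed above.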
\section{Energy relaxation time}\label{sec:t1_table} As described in the main text, we measured the energy relaxation time with the fully connected Ising model in the longitudinal and transverse fields, and we also measured that of the single-qubit model. Here, we show the details of the energy relaxation time of the fully connected Ising model in the longitudinal and transverse fields (single-qubit model) in Table~\ref{table:t1_and_hd_fully} (Table~\ref{table:t1_and_hd_single}). \begin{table}[hbtp] \caption{Energy relaxation time for each $h_d$ for the fully connected Ising model ($J=-1$) with the transverse field}\label{table:t1_and_hd_fully} \centering \begin{tabular}{cc} \hline $h_d$ & energy relaxation time ($\mu$s) \\ \hline \hline 0.47 & 933.5 \\ 0.48 & 2991.0 \\ 0.49 & 10022.9 \\ 0.50 & 32729.6 \\ 0.51 & 104001.0 \\ 0.52 & 255612.5 \\ \hline \end{tabular} \end{table} \begin{table}[hbtp] \caption{Energy relaxation time for each $h_d$ for the single qubit}\label{table:t1_and_hd_single} \centering \begin{tabular}{cc} \hline $h_d$ & energy relaxation time ($\mu$s) \\ \hline \hline 0.65 & 48.2 \\ 0.66 & 69.3 \\ 0.67 & 99.2 \\ 0.68 & 142.2 \\ 0.69 & 206.3 \\ 0.70 & 291.0 \\ 0.80 & 907.9 \\ 0.81 & 2426.6 \\ 0.82 & 9049.8 \\ 0.83 & 45920.6 \\ 0.84 & 262665.3 \\ 0.85 & 1032667.2 \\ \hline \end{tabular} \end{table} \nocite{*} \end{document}
\begin{document} \title{Towards computing the harmonic-measure distribution function for the middle-thirds Cantor set} \author{Christopher C. Green \& Mohamed M. S. Nasser} \date{} \maketitle \vskip-0.8cm \centerline{Wichita State University, Department of Mathematics, Statistics \& Physics,} \centerline{Wichita KS 67260, USA} \centerline{\tt [email protected], [email protected]} \begin{abstract} The harmonic-measure distribution function, or $h$-function for short, associated with a particular planar domain describes the probability that a Brownian walker released from some basepoint in the domain hits the boundary of the domain within a given distance of that point. In this paper, we present a fast and accurate boundary integral method for the numerical computation of the $h$-functions for symmetric multiply connected slit domains with high connectivity. In view of the fact that the middle-thirds Cantor set $\mathcal{C}$ is a special limiting case of these slit domains, the proposed method is used to approximate the $h$-function for the set $\mathcal{C}$. \end{abstract} \begin{center} \begin{quotation} {\noindent {{\bf Keywords}.\;\; Harmonic-measure distribution function; multiply connected slit domain; middle-thirds Cantor set; conformal mapping; boundary integral equation; Neumann kernel.} } \end{quotation} \end{center} \begin{center} \begin{quotation} {\noindent {{\bf MSC 2020}.\;\; 65E05; 30C85; 30E25; 45B05.} } \end{quotation} \end{center} \section{Introduction} \label{sec:int} The so-called $h$-function, or harmonic-measure distribution function, associated with a particular planar domain $\Omega$ in the extended complex plane $\overline{{\mathbb C}}={\mathbb C}\cup\{\infty\}$ can be viewed as a signature of the principal geometric characteristics of the boundary components of the domain $\Omega$ relative to some basepoint $z_0\in\Omega$.
Such functions are also the probabilities that a Brownian particle reaches a boundary component of $\Omega$ within a certain distance from its point of release $z_0$. The properties of two-dimensional Brownian motion were investigated extensively by Kakutani~\cite{ka44}, who found a deep connection between Brownian motion and harmonic functions (see also~\cite{ka45,SnipWard16,bookBM}). Stemming from a problem proposed by Stephenson and listed in~\cite{bh89}, $h$-functions were first introduced as objects of study by Walden \& Ward~\cite{wawa96}. Since then, the theory of $h$-functions has been successively developed in several works~\cite{asww,bawa14,beso03,gswc,SnipWard05,SnipWard08,wawa96,wawa01,wawa07}. For an overview of the main properties of $h$-functions, the reader is referred to the survey paper~\cite{SnipWard16}. Given a domain $\Omega$ in the extended complex plane $\overline{{\mathbb C}}$ and a fixed basepoint $z_0\in\Omega$, the $h$-function is the piecewise smooth, non-decreasing function $h\,:\,[0,\infty)\to[0,1]$ defined by \begin{equation}\label{eq:hr} h(r)= \omega(z_0,\partial\Omega\cap\overline{B(z_0,r)},\Omega), \end{equation} where $\omega$ denotes harmonic measure and $B(z_0,r)=\{z\in{\mathbb C}\,:\,|z-z_0|<r\}$, i.e., $h(r)$ is the value of the harmonic measure of $\partial\Omega\cap\overline{B(z_0,r)}$ with respect to $\Omega$ at the point $z_0$. Thus $h(r)$ is equal to $u(z_0)$, where $u(z)$ is the solution to the following boundary value problem (BVP) of Dirichlet type: \begin{subequations}\label{eq:bdv-u} \begin{align} \label{eq:u-Lap} \nabla^2 u(z) &= 0 \quad \mbox{if }z\in \Omega, \\ \label{eq:u-1} u(z)&= 1 \quad \mbox{if }z\in \partial\Omega\cap\overline{B(z_0,r)}, \\ \label{eq:u-0} u(z)&= 0 \quad \mbox{if }z\in \partial\Omega\backslash \overline{B(z_0,r)}.
\end{align} \end{subequations} This is illustrated in Figure~\ref{fig:h30} for the domain $\Omega_m$ defined in~\eqref{eq:Omeg-m} below, and we refer to $\partial B(z_0,r)$ as a `capture circle' of radius $r$ and center $z_0$. The main purpose of this work is to develop a fast and accurate numerical method for approximating the values of $h$-functions which are close to the actual, hypothetical, $h$-function associated with the middle-thirds Cantor set $\mathcal{C}$, which is defined by \begin{equation}\label{eq:E} \mathcal{C}=\bigcap_{\ell=0}^{\infty} E_\ell, \end{equation} where the sets $E_\ell$ are defined recursively by \begin{equation}\label{eq:Ek} E_\ell=\left(\frac{1}{3}E_{\ell-1}-\frac{1}{3}\right)\bigcup\left(\frac{1}{3}E_{\ell-1}+\frac{1}{3}\right), \quad \ell\ge 1, \end{equation} and $E_0=[-1/2,1/2]$. The complement of $\mathcal{C}$ is a fractal-type domain which is infinitely connected. We will demonstrate that the $h$-function for the Cantor set $\mathcal{C}$ can be approximated by computing $h$-functions for multiply connected slit domains of finite, but high, connectivity. For a fixed $\ell=0,1,2,\ldots$, the set $E_\ell$ consists of $m = 2^\ell$ slits $I_{j}$, $j = 1, 2, \ldots, m$, numbered from left to right. The slits have the same length $L = (1/3)^\ell$. The center of the slit $I_{j}$ is denoted by $w_{j}$. The domain $\Omega_m$ is the complement of the closed set $E_\ell$ with respect to the extended complex plane $\overline{{\mathbb C}}$, i.e., \begin{equation}\label{eq:Omeg-m} \Omega_m = \overline{{\mathbb C}}\setminus\bigcup_{j=1}^{m} I_j, \quad m=2^\ell. \end{equation} As the value of $m$ is increased, the computed $h$-function for the domain $\Omega_m$ provides a successively better approximation of the actual $h$-function associated with the middle-thirds Cantor set $\mathcal{C}$.
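The slit data entering~\eqref{eq:Omeg-m} can be generated exactly in rational arithmetic. The short Python sketch below (illustrative only, not part of the paper) builds the endpoints of the $2^\ell$ slits of $E_\ell$ by applying the two affine maps $x\mapsto(x-1)/3$ and $x\mapsto(x+1)/3$, which carry $E_{\ell-1}$ onto the left and right thirds, starting from $E_0=[-1/2,1/2]$:

```python
from fractions import Fraction

def cantor_slits(ell):
    """Endpoints (a, b) of the 2**ell slits of E_ell, with E_0 = [-1/2, 1/2]."""
    slits = [(Fraction(-1, 2), Fraction(1, 2))]
    for _ in range(ell):
        # Each slit spawns one copy in the left third and one in the right third.
        slits = [((a - 1) / 3, (b - 1) / 3) for a, b in slits] + \
                [((a + 1) / 3, (b + 1) / 3) for a, b in slits]
    return sorted(slits)

slits = cantor_slits(2)     # E_2: m = 2**2 = 4 slits, each of length (1/3)**2 = 1/9
print(len(slits), slits[0])
```

The slit centers $w_j$ are then the midpoints $(a+b)/2$, and the common length is $b-a=(1/3)^\ell$, matching the description above.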
The particular location of the basepoint $z_0$ is intrinsic to $h$-function calculations. In the current work, we consider $h$-functions for $\Omega_m$ with two different basepoint locations: one strictly to the left of all slits at $z_0=-3/2$, and another at $z_0=0$, as illustrated in Figure~\ref{fig:h30} for $m=2$. It should be highlighted that the method proposed in this paper can, with modifications, still be used to calculate $h$-functions for other basepoint locations. \begin{figure}[ht] \centerline{ \scalebox{0.4}{\includegraphics[trim=0 0 0 0,clip]{figh3}} \scalebox{0.4}{\includegraphics[trim=0 0 0 0,clip]{figh0}} } \caption{The domain $\Omega_2$ and a typical capture circle $\partial B(z_0,r)$ of radius $r$ used in the definition of the $h$-function with basepoint $z_0=-3/2$ (left) and with basepoint $z_0=0$ (right).} \label{fig:h30} \end{figure} There have been several calculations and analyses of $h$-functions over simply connected domains \cite{SnipWard16}. Recently, the first explicit formulae for $h$-functions in the multiply connected setting were derived by Green \textit{et al.}~\cite{gswc}. Their formulae were constructed by making judicious use of the calculus of the Schottky-Klein prime function, a special transcendental function which is ideally suited to solving problems in multiply connected domains; the reader is referred to the monograph by Crowdy \cite{CrowdyBook} for further details. The formulae in \cite{gswc} have been generalized in~\cite{ArMa} to other classes of multiply connected domains and basepoint locations. In~\cite{gswc}, the planar domains of interest were chosen to be a particular class of multiply connected slit domain, i.e. unbounded domains in the complex plane whose boundary components consist of a finite number of finite-length linear segments lying on the real axis. Indeed, this choice of planar domain was motivated in part by the middle-thirds Cantor set.
Whilst it is important to point out that there is now reliable software available to compute the prime function, based on the analysis of Crowdy \textit{et al.}~\cite{ckgn}, the number of computational operations required is relatively high for domains with many boundary components such as those considered in this paper. To this end, to compute $h$-functions for \textit{highly} multiply connected domains, we turn to a boundary integral equation (BIE) method which significantly reduces the number of computational operations required while also maintaining a good level of accuracy. The proposed method is based on a BIE with the Neumann kernel (see~\cite{Weg-Nas,Nas-ETNA}). The integral equation method employed in this paper has also been used in the new numerical scheme for computing the prime function \cite{ckgn}. This integral equation has found numerous applications in several areas, including fluid stirrers, vortex dynamics, composite materials, and conformal mappings (see~\cite{Nas-ETNA,NG18,NK} and the references cited therein). In particular, the integral equation has been used in~\cite{LSN17,Nvm} to compute the logarithmic and conformal capacities of the Cantor set and the Cantor dust. With the middle-thirds Cantor set $\mathcal{C}$ in mind, the goal herein is to numerically calculate $h$-functions in domains which effectively imitate the $h$-function associated with $\mathcal{C}$. As mentioned above, to imitate $\mathcal{C}$ in a practical sense, we again consider multiply connected slit domains as in \cite{gswc}, but significantly increase the number of boundary components by successively removing the middle-third portion of each slit, and then calculate the corresponding $h$-functions. We also use the computed values of these $h$-functions to study their asymptotic behavior numerically.
The calculation of such $h$-functions, together with their asymptotic features, was motivated by the list of open problems in Snipes \& Ward \cite{SnipWard16}. Computing the $h$-function requires solving a Dirichlet BVP in the domain $\Omega_m$, which is not an easy task since the boundary of the domain consists of the slits $I_1,\ldots,I_m$ and hence the solution of the BVP admits singularities at the slit endpoints. Since the Dirichlet BVP is invariant under conformal mapping, one way to overcome this difficulty is to map the domain $\Omega_m$ onto another domain with a simpler geometry. We therefore appeal to the iterative method presented in~\cite{NG18} to numerically compute an unbounded preimage domain $G_m$ bordered by smooth Jordan curves. We will summarize the construction of this preimage domain in \S\ref{sec:pre} below. The main objective of this paper is to present a fast numerical method towards computing the $h$-function for the middle-thirds Cantor set. The method is based on using the BIE to compute the $h$-function in terms of the solution of the BVP~\eqref{eq:bdv-u}. Our paper largely has a potential-theoretic flavor and is structured as follows. In Section 2, we introduce the conformal mapping we use, together with an outline of the BIE with the Neumann kernel. When the capture circle passes through the gaps between the slits, the values that the $h$-function $h(r)$ takes are equal to the harmonic measures of the slits enclosed by the capture circle with respect to the domain $\Omega_m$. In this case, with the help of a conformal mapping, the BVP~\eqref{eq:bdv-u} will be transformed into an equivalent problem in an unbounded circular domain $G_m$. The transformed problem is a particular case of the BVP considered in~\cite{Nvm} and hence it can be solved using the BIE method as in~\cite{Nvm}. This will be considered in Section~\ref{sec:step}.
We calculate the values of $h$-functions associated with slit domains of connectivity $64$ and $1024$ for the basepoint $z_0=-3/2$, and of connectivity $64$ and $2048$ for the basepoint $z_0=0$. However, when the capture circle intersects one of the slits, the right-hand side of the BVP~\eqref{eq:bdv-u} becomes discontinuous. As in Section~\ref{sec:step}, we use conformal mapping to transform the slit domain $\Omega_m$ onto a circular domain $G_m$, and hence the BVP~\eqref{eq:bdv-u} will be turned into an equivalent problem in $G_m$ whose right-hand side is also discontinuous. With the help of two M{\"o}bius mappings, the BVP in $G_m$ can be modified so that it has a continuous right-hand side. The BIE method is then used to solve this modified BVP and to compute the values of the $h$-function $h(r)$. This case will be considered in Section~\ref{sec:h-fun}. In Section 5, we analyze numerically the behavior of the $h$-function $h(r)$ when $r$ is close to $1$ for the basepoint $z_0=-3/2$, and when $r$ is close to $1/6$ for the basepoint $z_0=0$. Section 6 provides our concluding remarks. \section{The conformal mapping}\label{sec:cm} Computing the values of the $h$-function requires solving the BVP~\eqref{eq:bdv-u} in the unbounded multiply connected domain $\Omega_m$ bordered by slits. In this section, we review an iterative conformal mapping method from~\cite{AST13,NG18} which `opens up' the slits $I_k$ and consequently maps the domain $\Omega_m$ onto an unbounded multiply connected circular domain $G_m$. The BVP~\eqref{eq:bdv-u} will then be transformed into an equivalent problem in the circular domain $G_m$ where it can be solved in a straightforward manner. Solving the transformed problem in the new circular domain $G_m$ will be discussed in Sections~\ref{sec:step} and~\ref{sec:h-fun} below.
\subsection{The Neumann kernel}\label{sec:NK} Let $G_m$ be an unbounded multiply connected circular domain obtained by removing $m$ non-overlapping disks from the extended complex plane $\overline{{\mathbb C}}$. The boundaries of these disks are the circles $C_j$ with centers $c_j$ and radii $r_j$, $j=1,\ldots,m$. We parametrize each circle $C_j$ by \begin{equation}\label{eq:eta-j} \eta_j(t)=c_j+r_j e^{-\mathrm{i} t}, \quad t\in J_j=[0,2\pi], \quad j=1,\ldots,m. \end{equation} We define the total parameter domain $J$ as the disjoint union of the $m$ intervals $J_j=[0,2\pi]$, $j=1,\ldots,m$. Thus, the whole boundary $\Gamma$ is parametrized by \begin{equation}\label{eq:eta} \eta(t)= \left\{ \begin{array}{l@{\hspace{0.5cm}}l} \eta_1(t),&t\in J_1, \\ \quad\vdots & \\ \eta_m(t),&t\in J_m. \end{array} \right. \end{equation} See~\cite{Nas-ETNA,NG18} for more details. The Neumann kernel $N(s,t)$ is defined for $(s,t)\in J\times J$ by \begin{equation}\label{eq:N} N(s,t) = \frac{1}{\pi}\mathop{\mathrm{Im}}\left(\frac{\eta'(t)}{\eta(t)-\eta(s)}\right). \end{equation} It is a particular case of the generalized Neumann kernel considered in~\cite{Weg-Nas} with $A(t)=1$, $t\in J$. We also define the kernel \begin{equation}\label{eq:M} M(s,t) = \frac{1}{\pi}\mathop{\mathrm{Re}}\left(\frac{\eta'(t)}{\eta(t)-\eta(s)}\right), \quad (s,t)\in J\times J, \end{equation} which is a particular case of the kernel $M$ considered in~\cite{Weg-Nas} with $A(t)=1$. The kernel $N(s,t)$ is continuous and the kernel $M(s,t)$ is singular. Hence, the integral operator \[ {\bf N}\mu(s) = \int_J N(s,t) \mu(t) dt, \quad s\in J, \] is compact and the integral operator \[ {\bf M}\mu(s) = \int_J M(s,t) \mu(t) dt, \quad s\in J, \] is singular. Further details can be found in~\cite{Weg-Nas}.
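To make the continuity of $N$ concrete, the following Python snippet (purely illustrative, and independent of the {\sc Matlab} implementation used in this paper) evaluates the Neumann kernel~\eqref{eq:N} on a single clockwise-oriented circle. In this one-circle case the kernel is identically equal to the constant $-1/(2\pi)$, so in particular its diagonal limit exists:

```python
import numpy as np

def neumann_kernel(eta, deta, s, t):
    """N(s,t) = (1/pi) * Im(eta'(t) / (eta(t) - eta(s))), for s != t."""
    return np.imag(deta(t) / (eta(t) - eta(s))) / np.pi

# One boundary circle, clockwise as in the parametrization above: eta(t) = c + r*exp(-i t).
c, r = 0.3, 0.7
eta  = lambda t: c + r * np.exp(-1j * t)
deta = lambda t: -1j * r * np.exp(-1j * t)

n = 16
t = 2 * np.pi * np.arange(n) / n
S, T = np.meshgrid(t, t, indexing="ij")
mask = ~np.eye(n, dtype=bool)                 # skip the diagonal s = t
vals = neumann_kernel(eta, deta, S[mask], T[mask])

# For a single circle the kernel is the constant -1/(2*pi), independent of c and r.
print(np.allclose(vals, -1 / (2 * np.pi)))
```

For several boundary circles the kernel is no longer constant, but it remains continuous, which is precisely what makes the operator ${\bf N}$ compact while ${\bf M}$ stays singular.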
\subsection{The preimage domain}\label{sec:pre} Let $\Omega_m$ be the multiply connected slit domain obtained by removing the $m$ horizontal slits $I_1,\ldots,I_m$, described in Section~\ref{sec:int}, from the extended complex plane $\overline{{\mathbb C}}$. In this subsection, we summarize an iterative method from~\cite{AST13,NG18} for the construction of a preimage unbounded circular domain $G_m$. The method has been tested numerically in several works~\cite{LSN17,NK,Nvm}. For a fixed $\ell=0,1,2,\ldots$, the closed set $E_\ell$, defined in~\eqref{eq:Ek}, consists of $m=2^\ell$ slits, $I_1,\ldots,I_m$, of equal length $L=(1/3)^\ell$ and with centers $w_j$ for $j=1,\ldots,m$. Our objective now is to find the centers $c_j$ and radii $r_j$ of non-overlapping circles $C_j$ and a conformal mapping $z=F(\zeta)$ from the circular domain $G_m$ exterior to $C=\cup_{j=1}^m C_j$ onto $\Omega_m$. With the normalization \[ F(\infty)=\infty, \quad \lim_{\zeta\to\infty}(F(\zeta)-\zeta)=0, \] such a conformal mapping is unique. The conformal mapping $z=F(\zeta)$ can be computed using the following boundary integral equation method from~\cite{Nas-Siam1}. Let the function $\gamma$ be defined by \begin{equation}\label{eq:gam} \gamma(t)=\mathop{\mathrm{Im}}\eta(t), \quad t\in J. \end{equation} Let also $\mu$ be the unique solution of the boundary integral equation with the Neumann kernel \begin{equation}\label{eq:ie-g} ({\bf I}-{\bf N})\mu=-{\bf M}\gamma, \end{equation} and let the piecewise constant function $\nu=(\nu_1,\nu_2,\ldots,\nu_m)$ be given by \begin{equation}\label{eq:h-g} \nu=\left({\bf M}\mu-({\bf I}-{\bf N})\gamma\right)/2, \end{equation} i.e., the function $\nu$ is constant on each boundary component $C_j$ and its value on $C_j$ is a real constant $\nu_j$.
Then the function $f$ with the boundary values \begin{equation}\label{eq:f-rec} f(\eta(t))=\gamma(t)+\nu(t)+\mathrm{i}\mu(t) \end{equation} is analytic in $G_m$ with $f(\infty)=0$, and the conformal mapping $F$ is given by \begin{equation}\label{eqn:omega-app} F(\zeta)=\zeta-\mathrm{i} f(\zeta), \quad \zeta\in G_m\cup\Gamma. \end{equation} The application of this method requires that the domain $G_m$ be known. However, in our case, the slit domain $\Omega_m$ is known while the domain $G_m$ is unknown and needs to be determined alongside the conformal mapping $z=F(\zeta)$ from $G_m$ onto $\Omega_m$. This preimage domain $G_m$ as well as the conformal mapping $z=F(\zeta)$ will be computed using the iterative method presented in~\cite{NG18} (see also~\cite{AST13}). For the convenience of the reader and for the completeness of this paper, we now briefly review a slightly modified version of this iterative method. In~\cite{NG18}, the boundary components of the preimage domain $G_m$ are assumed to be ellipses. Here, the slits $I_{j}$, $j=1,2,\ldots,m$, are well-separated and hence the boundary components of the preimage domain $G_m$ are assumed to be circles. The method generates a sequence of multiply connected circular domains $G_m^{(0)},G_m^{(1)},G_m^{(2)},\ldots,$ which converges numerically to the required preimage domain $G_m$. Recall that the length of each slit $I_{j}$ is $L$ and the center of $I_{j}$ is $w_j$ for $j=1,\ldots,m$. In the iteration step $i=0,1,2,\ldots$, we assume that $G_m^{(i)}$ is bordered by the $m$ circles $C^{(i)}_1,\ldots,C^{(i)}_m$ parametrized by \begin{equation}\label{eq:eta-i} \eta^{(i)}_j(t)=c^{(i)}_j+r^{(i)}_je^{\mathrm{i} t}, \quad 0\le t\le 2\pi, \quad j=1,\ldots,m. \end{equation} The centers $c^{(i)}_j$ and radii $r^{(i)}_j$ are computed using the following iterative method: \begin{enumerate} \item Set \[ c^{(0)}_j=w_j, \quad r^{(0)}_j=\frac{\,L\,}{2}, \quad j=1,\ldots,m.
\] \item For $i=1,2,3,\ldots,$ \begin{itemize} \item Compute the conformal mapping from the preimage domain $G_m^{(i-1)}$ to a canonical horizontal rectilinear slit domain $\Omega^{(i)}$, which is the entire $\zeta$-plane with $m$ horizontal slits $I^{(i)}_{j}$, $j=1,\ldots,m$ (using the method presented in equations~\eqref{eq:gam}--\eqref{eqn:omega-app} above). Let $L^{(i)}_j$ denote the length of the slit $I^{(i)}_{j}$ and let $w^{(i)}_j$ denote its center. \item Define \[ c^{(i)}_j = c^{(i-1)}_j-(w^{(i)}_j-w_j), \quad r^{(i)}_j = r^{(i-1)}_j-\frac{\,1\,}{4}(L^{(i)}_j -L), \quad j=1,\ldots,m. \] \end{itemize} \item Stop the iteration if \[ \frac{1}{2m}\sum_{j=1}^{m}\left(|w^{(i)}_j - w_j|+|L^{(i)}_j -L|\right)<\varepsilon \quad{\rm or}\quad i>{\tt Max}, \] where $\varepsilon$ is a given tolerance and ${\tt Max}$ is the maximum number of iterations allowed. \end{enumerate} The above iterative method generates sequences of parameters $c^{(i)}_j$ and $r^{(i)}_j$ that converge numerically to $c_j$ and $r_j$, respectively, and then the boundary components of the preimage domain $G_m$ are parametrized by~\eqref{eq:eta-j}. In our numerical implementations, we used $\varepsilon=10^{-14}$ and ${\tt Max}=100$. It is clear that each iteration of the above method requires solving the integral equation with the Neumann kernel~(\ref{eq:ie-g}) and computing the function $\nu$ in~(\ref{eq:h-g}), which can be done with the fast method presented in~\cite{Nas-ETNA} by applying the {\sc Matlab} function \verb|fbie|. In \verb|fbie|, the integral equation is discretized by the Nystr\"om method using the trapezoidal rule with $n$ equidistant nodes \begin{equation}\label{eq:sji} s_{j,i}=(i-1)\frac{2\pi}{n}\in J_j=[0,2\pi], \quad i=1,2,\ldots,n, \end{equation} for each sub-interval $J_j$, $j=1,2,\ldots,m$.
This yields an $mn\times mn$ linear system which is solved by the generalized minimal residual (GMRES) method through the {\sc Matlab} function \verb|gmres|, where the matrix-vector product in GMRES is computed using the {\sc Matlab} function \verb|zfmm2dpart| from the fast multipole method (FMM) toolbox~\cite{Gre-Gim12}. The boundary components of $\Omega_m$, i.e., the $m$ slits $I_1,\ldots,I_m$, are well-separated and hence the circles $C_1,\ldots,C_m$, which are the boundary components of the domain $G_m$, are also well-separated (see Figure~\ref{fig:map} for $m=4$). Thus, accurate results can be obtained even with small values of $n$~\cite{Nvm}. In our numerical computations, we used $n=16$. For the {\sc Matlab} function \verb|zfmm2dpart|, we chose the FMM precision flag \verb|iprec=5|, which means that the tolerance of the FMM method is $0.5\times10^{-15}$. In the function \verb|gmres|, we chose the tolerance of the method to be $10^{-13}$ and the maximum number of iterations to be $100$, and we applied the method without restart. The complexity of solving this integral equation, and hence the complexity of each iteration in the above iterative method, is $\mathcal{O}(mn\log n)$ operations. For more details, we refer the reader to~\cite{Nas-ETNA}. \section{Step heights of the $h$-function}\label{sec:step} In this section, we determine the step heights of the $h$-functions of interest. The `step heights' are the constant values that $h(r)$ takes when the capture circle passes through the gaps between the slits, for the two basepoint locations $z_0=-3/2$ and $z_0=0$ (see Figure~\ref{fig:h30}). The restriction of the $h$-function $h(r)$ to these values of $r$ will be denoted by $\omega(r)$. The step heights of the $h$-function for an example with two slits have been computed in~\cite{SnipWard16}, and for two, four and eight slits in~\cite{gswc}.
Furthermore, it should be pointed out that there are several methods for computing harmonic measures in multiply connected domains, each of which may alternatively be used to compute the step heights of the $h$-function; see, e.g.,~\cite{CrowdyBook,Del-harm,Tre-Gre,garmar,Tre-Ser} and the references cited therein. By the mapping function $\zeta=F^{-1}(z)$ described in \S\ref{sec:pre}, the multiply connected slit domain $\Omega_m$ (exterior to the $m$ slits $I_j$, $j=1,\ldots,m$) is mapped conformally onto a multiply connected circular domain $G_m$ exterior to $m$ disks such that each slit $I_{j}$ is mapped onto the circle $C_j$ with center $c_j$ and radius $r_j$, $j=1,\ldots,m$. The basepoint $z_0$ is also mapped by $\zeta=F^{-1}(z)$ onto a point $\zeta_0$ in the domain $G_m$. Owing to the symmetry of $\Omega_m$ with respect to the real axis, the centers of the disks as well as the point $\zeta_0$ lie on the real line (see Figure~\ref{fig:map} for $m=4$ and $z_0=-3/2$). \begin{figure}[ht] \centerline{ \scalebox{0.4}{\includegraphics[trim=0 0 0 0,clip]{map3s}} \scalebox{0.4}{\includegraphics[trim=0 0 0 0,clip]{map3d}} } \caption{The slit domain $\Omega_m$ (left) and the circular domain $G_m$ (right) with basepoint $z_0=-3/2$ in the case when $m=4$.} \label{fig:map} \end{figure} \subsection{Harmonic measures} For $k=1,\ldots,m$, let $\sigma_k$ be the harmonic measure of $C_k$ with respect to $G_m$, i.e., $\sigma_k(\zeta)$ is the unique solution of the Dirichlet problem: \begin{subequations}\label{eq:bdv-sig} \begin{align} \label{eq:sig-Lap} \nabla^2 \sigma_k(\zeta) &= 0 \quad\quad \mbox{if }\zeta\in G_m, \\ \label{eq:sig-j} \sigma_k(\zeta)&= \delta_{k,j} \quad \mbox{if }\zeta\in C_j, \quad j=1,\ldots,m, \end{align} \end{subequations} where $\delta_{k,j}$ is the Kronecker delta. The function $\sigma_k$ is assumed to be bounded at infinity.
The harmonic function $\sigma_k$ is the real part of an analytic function $g_k$ in $G_m$ which is not necessarily single-valued. The function $g_k$ can be written as~\cite{Gak,garmar} \begin{equation}\label{eq:F-u} g_k(\zeta)=b_k+f_k(\zeta)-\sum_{j=1}^{m} a_{k,j}\log(\zeta-c_j), \end{equation} where $c_j$ is the center of the circle $C_j$, $f_k$ is a single-valued analytic function in $G_m$ with $f_k(\infty)=0$, and $b_k$ and $a_{k,j}$ are undetermined real constants such that $\sum_{j=1}^{m}a_{k,j}=0$ for $k=1,\ldots,m$. The condition $\sum_{j=1}^{m}a_{k,j}=0$ implies that $g_k(\infty)=b_k$ since $f_k(\infty)=0$. Since we are interested in computing the real function $\sigma_k=\mathop{\mathrm{Re}}[g_k]$, we may assume that $b_k$ is real, and then $b_k=g_k(\infty)=\sigma_k(\infty)$. The BVP~\eqref{eq:bdv-sig} above is a particular case of the problem considered in~\cite[Eq.~(4)]{Nvm}; hence, it can be solved by the method presented in~\cite{Nvm}, which is reviewed in the following subsection. \subsection{Computing the harmonic measures} For $k=1,\ldots,m$, it follows from \eqref{eq:F-u} that computing the harmonic measure $\sigma_k=\mathop{\mathrm{Re}}[g_k]$ requires computing the values of the analytic function $f_k$ and the values of the $m+1$ real constants $a_{k,1},\ldots,a_{k,m},b_k$. The constants $a_{k,1},\ldots,a_{k,m}$ in~\eqref{eq:F-u} can be computed as described in Theorem~3 in~\cite{Nvm} (with $\ell=0$ and $A=1$).
For each $j=1,2,\ldots,m$, let the function $\gamma_j$ be defined by \begin{equation}\label{eq:gam-j} \gamma_j(t)=\log|\eta(t)-c_j|, \end{equation} let $\mu_j$ be the unique solution of the boundary integral equation with the Neumann kernel \begin{equation}\label{eq:ie} ({\bf I}-{\bf N})\mu_j=-{\bf M}\gamma_j, \end{equation} and let the piecewise constant function $\nu_j=(\nu_{1,j},\nu_{2,j},\ldots,\nu_{m,j})$ be given by \begin{equation}\label{eq:hj} \nu_j=\left({\bf M}\mu_j-({\bf I}-{\bf N})\gamma_j\right)/2. \end{equation} Then, for each $k=1,2,\ldots,m$, the boundary values of the function $f_k(\zeta)$ in~\eqref{eq:F-u} are given by \begin{equation}\label{eq:fk} f_k(\eta(t))=\sum_{j=1}^m a_{k,j}\left(\gamma_j(t)+\nu_j(t)+\mathrm{i}\mu_j(t)\right), \end{equation} and the $m+1$ unknown real constants $a_{k,1},\ldots,a_{k,m},b_k$ are the unique solution of the linear system \begin{equation}\label{eq:sys-method} \left[\begin{array}{ccccc} \nu_{1,1} &\nu_{1,2} &\cdots &\nu_{1,m} &1 \\ \nu_{2,1} &\nu_{2,2} &\cdots &\nu_{2,m} &1 \\ \vdots &\vdots &\ddots &\vdots &\vdots \\ \nu_{m,1} &\nu_{m,2} &\cdots &\nu_{m,m} &1 \\ 1 &1 &\cdots &1 &0 \\ \end{array}\right] \left[\begin{array}{c} a_{k,1} \\a_{k,2} \\ \vdots \\ a_{k,m} \\ b_k \end{array}\right] = \left[\begin{array}{c} \delta_{k,1} \\ \delta_{k,2} \\ \vdots \\ \delta_{k,m} \\ 0 \end{array}\right]. \end{equation} The integral operators ${\bf N}$ and ${\bf M}$ in~\eqref{eq:ie} and~\eqref{eq:hj} are the same operators introduced in Section~\ref{sec:NK}. It is clear that computing the $m+1$ unknown real constants $a_{k,1},\ldots,a_{k,m},b_k$ requires solving the $m$ integral equations with the Neumann kernel~\eqref{eq:ie} and computing the $m$ piecewise constant functions $\nu_j$ in~\eqref{eq:hj} for $j=1,\ldots,m$. This can be done using the {\sc Matlab} function \verb|fbie| as described in Section~\ref{sec:pre}.
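The algebraic structure of this linear system already enforces two useful identities: the last equation gives $\sum_{j=1}^m a_{k,j}=0$ for each $k$, and summing the $m$ right-hand sides shows that $\sum_{k=1}^m b_k=1$ (the summed system is solved by $a=0$, $b=1$), consistent with the fact that the harmonic measures $\sigma_1,\ldots,\sigma_m$ sum to $1$. The Python sketch below, purely for illustration, assembles the system with random stand-in values for the entries $\nu_{i,j}$ and verifies both identities:

```python
import numpy as np

rng = np.random.default_rng(7)
m = 4
nu = rng.normal(size=(m, m))        # stand-in for the piecewise-constant values nu_{i,j}

# Coefficient matrix of the (m+1) x (m+1) system: [nu | 1; 1...1 | 0].
A = np.zeros((m + 1, m + 1))
A[:m, :m] = nu
A[:m, m] = 1.0
A[m, :m] = 1.0

# One right-hand side per k = 1,...,m: (delta_{k,1}, ..., delta_{k,m}, 0).
B = np.vstack([np.eye(m), np.zeros((1, m))])
X = np.linalg.solve(A, B)           # column k holds (a_{k,1}, ..., a_{k,m}, b_k)

a, b = X[:m, :], X[m, :]
# Last equation forces sum_j a_{k,j} = 0 per k; summing over k gives sum_k b_k = 1.
print(np.allclose(a.sum(axis=0), 0.0), np.isclose(b.sum(), 1.0))
```

In practice one factorizes (or inverts) the single coefficient matrix once and reuses it for all $m$ right-hand sides, exactly as described in the complexity discussion that follows.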
The complexity of solving each of these integral equations is $\mathcal{O}(mn\log n)$ operations, and hence solving the $m$ integral equations~\eqref{eq:ie} requires $\mathcal{O}(m^2n\log n)$ operations. The linear system~\eqref{eq:sys-method} has an $(m+1)\times(m+1)$ constant coefficient matrix and $m$ different right-hand sides. These $m$ linear systems are solved in $\mathcal{O}(m^3)$ operations by computing the inverse of the coefficient matrix and then using this inverse to compute the solution for each right-hand side. In doing so, we obtain the values of the real constants $a_{k,1},\ldots,a_{k,m},b_k$. The boundary values of the analytic function $f_k$ are given by~\eqref{eq:fk}, and hence its values in the domain $G_m$ can be computed by the Cauchy integral formula. The harmonic measure $\sigma_k$ can then be computed for $\zeta\in G_m$ by \begin{equation}\label{eq:sgj-gj} \sigma_k(\zeta)=\mathop{\mathrm{Re}} g_k(\zeta) \end{equation} where $g_k(\zeta)$ is given by~\eqref{eq:F-u}. \subsection{Basepoint $z_0=-3/2$} The capture circle of radius $r$ intersects the real line at two real numbers, the larger of which is denoted by $z_r$ (see Figure~\ref{fig:h30} (left)). By `step heights', we mean the constant values of $h(r)$ when $r$ is such that $z_r$ lies between a pair of slits $I_{j}$ and $I_{j+1}$, $j=1,\ldots,m-1$. In addition, $h(r) = 0$ when $r$ is such that $z_r$ lies strictly to the left of all $m$ slits (i.e., $0\le r\le1$), and $h(r) = 1$ when $r$ is such that $z_r$ lies strictly to the right of all $m$ slits (i.e., $2\le r<\infty$). Here, we use the harmonic measures $\sigma_1,\ldots,\sigma_m$ to compute the $m-1$ step heights of the $h$-functions for the slit domains $\Omega_m$ (the exterior domain of the closed set $E_\ell$) with basepoint location $z_0=-3/2$.
That is, we compute the values $\omega(r)$ of these $h$-functions at those values of $r$ corresponding to capture circles that pass through the $m-1$ gaps between the slits $I_1,\ldots,I_m$. In the next section, we compute the values of the $h$-functions for $\Omega_m$ for the remaining values of $r$, i.e., those values of $r$ which correspond to capture circles intersecting a slit $I_{j}$ for some $j=1,\ldots,m$. More precisely, the height $\omega(r)$ of the $k$th step of the $h$-function associated with $\Omega_m$ can be written in terms of the harmonic measures $\sigma_1,\ldots,\sigma_m$ as \begin{equation}\label{eq:wr-3} \omega(r)=\sum_{j=1}^{k}\sigma_j(\zeta_0), \quad k=1,2,\ldots,m-1, \end{equation} where $\zeta_0=F^{-1}(z_0)$, $\sigma_j$ is the harmonic measure of $C_j$ with respect to $G_m$, and $k$ is the largest integer such that the slit $I_{k}$ is inside the circle $|z-z_0|=r$ and the slit $I_{k+1}$ is outside the circle $|z-z_0|=r$. The computed numerical values for the step heights of the $h$-function for the domains $\Omega_2$, $\Omega_4$, and $\Omega_8$ are presented in Table~\ref{tab:3}. These values agree with the values computed by the method presented in~\cite{gswc}. The plots of the step height as a function of $r$ for the domains $\Omega_{64}$ and $\Omega_{1024}$ are shown in Figure~\ref{fig:hm-3}.
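Since~\eqref{eq:wr-3} is a cumulative sum of the harmonic measures evaluated at $\zeta_0$, the full list of step heights is a one-line computation once those values are in hand. A minimal sketch (the $\sigma_j(\zeta_0)$ values below are hypothetical, not computed):

```python
import numpy as np

def step_heights(sigma_at_zeta0):
    """Step heights omega_k = sum_{j<=k} sigma_j(zeta_0), k = 1, ..., m-1.

    The harmonic measures of the m boundary components sum to 1, so the
    final cumulative value is 1 and is dropped (it is not a step height).
    """
    return np.cumsum(sigma_at_zeta0)[:-1]

# Hypothetical harmonic-measure values at zeta_0 (they must sum to 1).
sigma = [0.23, 0.37, 0.25, 0.15]
heights = step_heights(sigma)
```

The resulting heights are automatically increasing and lie strictly between $0$ and $1$, matching the staircase shape of the $h$-function.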
\begin{table}[h] \caption{The $m-1$ step heights of the $h$-function for the domains $\Omega_m$ when $z_0=-3/2$ for $m=2,4,8$, computed using the proposed method and the method presented in~\cite{gswc}.} \label{tab:3} \centering \begin{tabular}{lll|lll} \hline \multicolumn{3}{c|}{The proposed method} & \multicolumn{3}{c}{Green \emph{et al.}~\cite{gswc}} \\ \hline & & $0.23081722$ & & & $0.23081722$ \\ & $0.37725094$ & $0.37469279$ & & $0.37725094$ & $0.37469279$ \\ & & $0.48515843$ & & & $0.48515843$ \\ $0.60527819$ & $0.60254652$ & $0.60117033$ & $0.60527819$ & $0.60254652$ & $0.60117033$ \\ & & $0.70056784$ & & & $0.70056784$ \\ & $0.78306819$ & $0.78288753$ & & $0.78306819$ & $0.78288753$ \\ & & $0.87276904$ & & & $0.87276904$ \\ \hline \end{tabular} \end{table} \begin{figure}[ht] \centerline{ \scalebox{0.6}{\includegraphics[trim=0 0 0 0,clip]{hmfun64}} \scalebox{0.6}{\includegraphics[trim=0 0 0 0,clip]{hmfun1024}} } \caption{The step height of the $h$-function for the domain $\Omega_{64}$ (left) and $\Omega_{1024}$ (right) when $z_0=-3/2$.} \label{fig:hm-3} \end{figure} \subsection{Basepoint $z_0=0$} We now compute the step heights of the $h$-functions for $\Omega_m$ with basepoint location $z_0=0$. That is, we compute the values of these $h$-functions at those values of $r$ corresponding to capture circles that pass through gaps between the slits $I_{k}$ for $k=1,\ldots,m$. In this case, the capture circle of radius $r$ intersects the real line at the two real numbers $\pm z_r$ where $z_r>0$ (see Figure~\ref{fig:h30} (right)). Here, $h(r) = 0$ when $r$ is such that $z_r$ lies strictly between the two slits $I_{m/2}$ and $I_{m/2+1}$ (i.e., $0\le r\le1/6$), and $h(r) = 1$ when $r$ is such that $z_r$ lies strictly to the right of all $m$ slits (i.e., $1/2\le r<\infty$).
For this case, the height $\omega(r)$ of the $k$th step of the $h$-function associated with $\Omega_m$ is thus \begin{equation}\label{eq:wr-0} \omega(r)=\sum_{j=1}^{k}\left(\sigma_{\frac{m}{2}-j+1}(\zeta_0)+\sigma_{\frac{m}{2}+j}(\zeta_0)\right) \end{equation} where $\zeta_0=F^{-1}(z_0)$, $\sigma_j$ is the harmonic measure of $C_j$ with respect to $G_m$, and $k$ is the largest integer such that the two slits $I_{m/2-k+1}$ and $I_{m/2+k}$ are inside the circle $|z-z_0|=r$ and the two slits $I_{m/2-k}$ and $I_{m/2+k+1}$ are outside the circle $|z-z_0|=r$. The computed numerical values for the step heights of the $h$-function for the domains $\Omega_4$, $\Omega_8$, and $\Omega_{16}$ are presented in Table~\ref{tab:0}. The plots of the step height as a function of $r$ for the domains $\Omega_{64}$ and $\Omega_{2048}$ are shown in Figure~\ref{fig:hm-0}. \begin{table}[h] \caption{The $m/2-1$ step heights of the $h$-function for the domains $\Omega_m$ when $z_0=0$ for $m=4,8,16$.} \label{tab:0} \centering \begin{tabular}{ccc} \hline $\Omega_4$ & $\Omega_8$ & $\Omega_{16}$ \\ \hline & & $0.32412730$ \\ & $0.50657767$ & $0.50171513$ \\ & & $0.61794802$ \\ $0.73555154$ & $0.72992958$ & $0.72702487$ \\ & & $0.80333761$ \\ & $0.86334249$ & $0.86206402$ \\ & & $0.92098300$ \\ \hline \end{tabular} \end{table} \begin{figure}[ht] \centerline{ \scalebox{0.6}{\includegraphics[trim=0 0 0 0,clip]{hmfunO64}} \scalebox{0.6}{\includegraphics[trim=0 0 0 0,clip]{hmfunO2048}}} \caption{The step height of the $h$-function for the domain $\Omega_{64}$ (left) and $\Omega_{2048}$ (right) when $z_0=0$.} \label{fig:hm-0} \end{figure} \section{Numerical computation of the $h$-function}\label{sec:h-fun} The step heights $\omega(r)$ of the $h$-function, i.e., the values of $h(r)$ when the capture circle passes through the gaps between the slits $I_k$, $k=1,\ldots,m$, were computed in the previous section.
In this section, we present a method for computing the values of the $h$-function $h(r)$ when the capture circle intersects the slits $I_k$, for the two cases of basepoints considered in the previous section. \subsection{Basepoint $z_0=-3/2$} Assume that the capture circle intersects the slit $I_k$ for some $k=1,\ldots,m$. Owing to the up-down symmetry of the domain $\Omega_m$ in the $z$-plane, we can assume that the pair of preimages of the intersection point of the capture circle with the slit $I_k$ are the points $\xi$ and $\overline{\xi}$ on the circle $C_k$. Let us also assume that $\xi_1$ is the intersection of the circle $C_k$ with the real axis lying on the arc between $\overline{\xi}$ and $\xi$ that is closest to the point $\zeta_0$. This arc will be denoted by $C'_k$. The remaining part of the circle is denoted by $C''_k$ (see Figure~\ref{fig:h3d}). Let the function $U_k(\zeta)$ be the unique solution of the Dirichlet problem: \begin{subequations}\label{eq:bdv-U3} \begin{align} \label{eq:U3-Lap} \nabla^2 U_k(\zeta) &= 0 \quad \mbox{if }\zeta\in G_m, \\ \label{eq:U3-m} U_k(\zeta)&= 1 \quad \mbox{if }\zeta\in C_j, \quad j=1,\ldots,k-1, \\ \label{eq:U3-k'} U_k(\zeta)&= 1 \quad \mbox{if }\zeta\in C'_k, \\ \label{eq:U3-k''} U_k(\zeta)&= 0 \quad \mbox{if }\zeta\in C''_k, \\ \label{eq:U3-p} U_k(\zeta)&= 0 \quad \mbox{if }\zeta\in C_j, \quad j=k+1,\ldots,m, \end{align} \end{subequations} where the function $U_k$ is assumed to be bounded at infinity. \begin{figure}[ht] \centerline{ \scalebox{0.4}{\includegraphics[trim=0 0 0 0,clip]{figh3d}} } \caption{The arcs $C'_k$ and $C''_k$ for $z_0=-3/2$.} \label{fig:h3d} \end{figure} Note that the boundary conditions on the circle $C_k$ are not continuous.
To remove this discontinuity, let us now introduce a function $\Psi_k$ in terms of the two M\"obius maps \begin{equation} \psi(\zeta,\xi,\xi_1)=\frac{(\zeta-\xi)(\xi_1-\overline{\xi})+\mathrm{i}(\zeta-\overline{\xi})(\xi_1-\xi)}{(\zeta-\overline{\xi})(\xi_1-\xi)+\mathrm{i}(\zeta-\xi)(\xi_1-\overline{\xi})}, \quad \phi(w)=\frac{w-\mathrm{i}}{\mathrm{i} w-1}. \end{equation} Note that $\psi(\zeta,\xi,\xi_1)$ maps the exterior of the unit disc onto the unit disc such that the three points $\overline{\xi}$, $\xi_1$, and $\xi$ on the unit circle are mapped onto the three points $-\mathrm{i}$, $1$, and $\mathrm{i}$, respectively, and $\phi(w)$ maps the unit disc onto the upper half-plane such that the three points $-\mathrm{i}$, $1$, and $\mathrm{i}$ are mapped onto the three points $\infty$, $-1$, and $0$, respectively. Then, for $\xi\in C_k$, we define \begin{equation}\label{eq:Psi} \Psi_k(\zeta)=\frac{1}{\pi}\mathop{\mathrm{Im}} \log \phi(\psi(\zeta,\xi,\xi_1)), \quad \zeta\in G_m. \end{equation} This function $\Psi_k(\zeta)$ is harmonic everywhere in the domain $G_m$ exterior to the $m$ discs, bounded at infinity, equal to 1 on the arc $C'_k$, and equal to $0$ on the arc $C''_k$. Hence, for $k=1,\ldots,m$, the function \begin{equation}\label{eq:Uk-Vk} V_k(\zeta)=U_k(\zeta)-\Psi_k(\zeta) \end{equation} is bounded at infinity and satisfies the following Dirichlet problem: \begin{subequations}\label{eq:bdv-V3} \begin{align} \label{eq:V3-Lap} \nabla^2 V_k(\zeta) &= 0 ~~~~~~~~~~~~~~\, \mbox{if }\zeta\in G_m, \\ \label{eq:V3-m} V_k(\zeta)&= 1-\Psi_k(\zeta) \quad \mbox{if }\zeta\in C_j, \quad j=1,\ldots,k-1, \\ \label{eq:V3-k'} V_k(\zeta)&= 0 ~~~~~~~~~~~~~~\, \mbox{if }\zeta\in C_k, \\ \label{eq:V3-p} V_k(\zeta)&= -\Psi_k(\zeta) ~~~~~~ \mbox{if }\zeta\in C_j, \quad j=k+1,\ldots,m.
\end{align} \end{subequations} The boundary conditions of the new problem~\eqref{eq:bdv-V3} are now continuous on all circles. Note that $\phi(\psi(\zeta,\xi,\xi_1))$ is itself a M\"obius map which maps the circle $C_k$ onto the real line. For $j=1,\ldots,m$, $j\ne k$, it maps the circle $C_j$ onto a circle of very small radius in the upper half-plane (see Figure~\ref{fig:small} (left)). Hence, in problem~\eqref{eq:bdv-V3}, the values of the function on the right-hand side of the boundary conditions are almost constant (see Figure~\ref{fig:small} (right)). As such, and in view of~\eqref{eq:Psi}, we may approximate \begin{equation} \Psi_k(\zeta) \approx P_{kj}= \Psi_k(c_j), \quad \zeta \in C_j, \quad j=1,\ldots,m, \end{equation} where $c_j$ is the center of the circle $C_j$. Thus, the Dirichlet problem~\eqref{eq:bdv-V3} becomes \begin{subequations}\label{eq:bdv-V23} \begin{align} \label{eq:V23-Lap} \nabla^2 V_k(\zeta) &= 0 ~~~~~~~~~~\:\: \mbox{if }\zeta\in G_m, \\ \label{eq:V23-m} V_k(\zeta)&= 1-P_{kj} \quad \mbox{if }\zeta\in C_j, \quad j=1,\ldots,k-1, \\ \label{eq:V23-k'} V_k(\zeta)&= 0 ~~~~~~~~~~\:\: \mbox{if }\zeta\in C_k, \\ \label{eq:V23-p} V_k(\zeta)&= -P_{kj} ~~~~\:\:\: \mbox{if }\zeta\in C_j, \quad j=k+1,\ldots,m. \end{align} \end{subequations} The function $V_k$ can then be written in terms of the harmonic measures $\sigma_1,\ldots,\sigma_m$ through \[ V_k(\zeta)=\sum_{j=1}^{k-1} (1-P_{kj})\sigma_j(\zeta)-\sum_{j=k+1}^{m} P_{kj}\sigma_j(\zeta). \] Thus, by~\eqref{eq:Uk-Vk}, the unique solution $U_k(\zeta)$ to the Dirichlet problem~\eqref{eq:bdv-U3} is given by \begin{equation}\label{eq:U_k} U_k(\zeta) = \Psi_k(\zeta)+\sum_{j=1}^{k-1} \sigma_j(\zeta) -\sum_{\begin{subarray}{c} j=1\\j\ne k\end{subarray}}^{m} P_{kj}\sigma_j(\zeta).
\end{equation} \begin{figure}[htb] \centerline{ \scalebox{0.4}{\includegraphics[trim=0 0 0 0,clip]{figh3smallcir1}} \scalebox{0.4}{\includegraphics[trim=0 0 0 0,clip]{figh3smallcir2}} } \caption{The image of the circles $C_j$ (left) and the values of the functions on the right-hand side of the boundary conditions in~\eqref{eq:bdv-V3} (right) for $m=8$, $k=1$, and $\xi=c_k+r_k e^{3\pi\mathrm{i}/4}\in C_k$.} \label{fig:small} \end{figure} Due to the symmetry of both domains $\Omega_m$ and $G_m$ with respect to the real line and since the image of the circle $C_k$ under the conformal mapping $F$ is the slit $I_k$, we have $F(\xi)=F(\overline{\xi})\in I_k$, and the image of the arc $C'_k$ is the part of $I_k$ lying to the left of $F(\xi)$ (see Figure~\ref{fig:3hr}). For a given $r$ such that the capture circle passes through the slit $I_k$ for some $k=1,\ldots,m$, the point $\xi$ lies on the circle $C_k$ and depends on $r$ through (see Figure~\ref{fig:3hr}) \begin{equation}\label{eq:r-xi} r= F(\xi)-z_0. \end{equation} For this value of $r$, the value of the $h$-function is then given by \begin{equation}\label{eq:hr3-xi} h(r)=U_k(\zeta_0). \end{equation} In~\eqref{eq:r-xi}, we may assume that $r$ is given and then solve the non-linear equation~\eqref{eq:r-xi} for $\xi\in C_k$. Alternatively, we may assume that $\xi$ on the circle $C_k$ is given and then compute the value of $r$ through~\eqref{eq:r-xi}, which is simpler. For computing the value of $r$ in~\eqref{eq:r-xi}, the value of the mapping function $F(\xi)$ can be approximated numerically for $\xi\in C_k$ as follows. Since $\xi\in C_k$, it follows from the parametrization~\eqref{eq:eta-j} of the circle $C_k$ that $\xi=\eta_k(s)$ for some $s\in J_k=[0,2\pi]$.
Then, it follows from the method described in Section~\ref{sec:pre} that $F(\xi)=\xi-\mathrm{i} f(\xi)$, where \[ f(\xi)=f(\eta_k(s))=\gamma_k(s)+h_k(s)+\mathrm{i}\mu_k(s) \] and $\gamma_k$, $h_k$, and $\mu_k$ are the restrictions of the functions $\gamma$, $h$, and $\mu$ to the interval $J_k=[0,2\pi]$. Note that, by~\eqref{eq:gam}, $\gamma_k(s)=\mathop{\mathrm{Im}}\eta_k(s)$ is known and $h_k(s)=h_k$, where the constant $h_k$ is also known. However, the value of $\mu_k(s)$ will be known only if $s$ is one of the discretization points $s_{k,i}$, $i=1,\ldots,n$, given by~\eqref{eq:sji}. If $s$ is not one of these points, then the value of $\mu_k(s)$ can be approximated using the Nystr{\"o}m interpolation formula (see~\cite{Atk97}). Here, we will approximate $\mu_k(s)$ by finding a trigonometric interpolation polynomial that interpolates the function $\mu_k$ at its known values $\mu_k(s_{k,i})$, $i=1,\ldots,n$. The polynomial can then be used to approximate $\mu_k(s)$ for any $s\in[0,2\pi]$. Finding this polynomial and computing its values is done using the fast Fourier transform (FFT). In our numerical computations below, to compute the non-constant components of the $h$-function, i.e., to compute the values of $h(r)$ when the capture circle intersects a slit $I_k$ for $k=1,\ldots,m$, we choose $31$ equidistant values of $\xi$ on the upper half of the circle $C_k$. For each of these points $\xi$, we compute the value of $r$ through~\eqref{eq:r-xi} and the value of $h(r)$ through~\eqref{eq:hr3-xi}. The values of $h(r)$ for $r$ corresponding to the capture circle passing through the gaps between the slits $I_k$, $k=1,\ldots,m$, are computed as described earlier in Section~\ref{sec:step}. The graphs of the function $h(r)$ for $m=16$ and $m=32$ are given in Figure~\ref{fig:h-3}.
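The FFT-based interpolation step just described can be sketched as follows: from $n$ equidistant samples of a real $2\pi$-periodic function, the FFT yields the coefficients of the trigonometric interpolant, which can then be evaluated at arbitrary $s$. A minimal sketch with generic periodic test data (not the actual $\mu_k$):

```python
import numpy as np

def trig_interp(samples, s):
    """Evaluate the trigonometric interpolant of n equidistant samples of a
    real 2*pi-periodic function at the points s (scalar or array)."""
    n = len(samples)
    c = np.fft.fft(samples) / n                 # Fourier coefficients
    k = np.fft.fftfreq(n, d=1.0 / n)            # wavenumbers 0, 1, ..., -1
    s = np.atleast_1d(np.asarray(s, dtype=float))
    # Interpolant: sum_k c_k e^{i k s}; taking the real part is valid for
    # real data (the Nyquist coefficient is then real, so this matches the
    # symmetric cos(n s / 2) treatment of the highest mode).
    return (c[None, :] * np.exp(1j * np.outer(s, k))).sum(axis=1).real

# Smooth periodic test function sampled at n = 32 points.
t = 2 * np.pi * np.arange(32) / 32
f = np.exp(np.sin(t))
```

At the sample points the interpolant reproduces the data exactly, and for smooth periodic data the off-grid error decays spectrally with $n$.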
\begin{figure}[htb] \centerline{ \scalebox{0.4}{\includegraphics[trim=0 0 0 0,clip]{figh3r}} \scalebox{0.4}{\includegraphics[trim=0 0 0 0,clip]{figh3dr}} } \caption{Computing the value of $r$ in~\eqref{eq:r-xi}.} \label{fig:3hr} \end{figure} \begin{figure}[htb] \centerline{ \scalebox{0.6}{\includegraphics[trim=0 0 0 0,clip]{hfun16}} \scalebox{0.6}{\includegraphics[trim=3 0 0 0,clip]{hfun32}} } \caption{The $h$-function for the domain $\Omega_{16}$ (left) and the domain $\Omega_{32}$ (right) when $z_0=-3/2$.} \label{fig:h-3} \end{figure} \subsection{Basepoint $z_0=0$} For the basepoint $z_0=0$, we assume that the capture circle intersects the two slits $I_{\frac{m}{2}-k+1}$ and $I_{\frac{m}{2}+k}$ for some $k=1,\ldots,\frac{m}{2}$. Again, due to the symmetry of $G_m$, let $\xi$ and $\overline{\xi}$ be the pair of preimages of the intersection of the capture circle with the slit $I_{\frac{m}{2}+k}$. Also, let $\xi_1$ be the intersection of the circle $C_{\frac{m}{2}+k}$ with the positive real axis lying on the arc between $\overline{\xi}$ and $\xi$ that is closest to the point $\zeta_0$. It follows from the symmetry of $G_m$ and $\Omega_m$ that $-\overline{\xi}$ and $-\xi$ are the corresponding pair of preimages of the intersection of the capture circle with the slit $I_{\frac{m}{2}-k+1}$. See Figure~\ref{fig:h0d}.
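The construction for $z_0=0$ reuses the M\"obius maps $\psi$ and $\phi$ introduced for $z_0=-3/2$, now also at the reflected points $-\xi$, $-\xi_1$. A quick numerical check of the three-point correspondences stated in the text (the sample point $\xi$ below is chosen arbitrarily, for checking only) guards against sign and conjugation slips:

```python
import cmath

def psi(z, xi, xi1):
    """Mobius map sending conj(xi), xi1, xi to -i, 1, i respectively."""
    xb = xi.conjugate()
    num = (z - xi) * (xi1 - xb) + 1j * (z - xb) * (xi1 - xi)
    den = (z - xb) * (xi1 - xi) + 1j * (z - xi) * (xi1 - xb)
    return num / den

def phi(w):
    """Mobius map of the unit disc onto the upper half-plane,
    sending -i, 1, i to infinity, -1, 0 respectively."""
    return (w - 1j) / (1j * w - 1)

# Hypothetical sample points on the unit circle, for checking only.
xi = cmath.exp(2j)     # a point in the upper half of the unit circle
xi1 = 1 + 0j
```

Both correspondences are fractional-linear identities, so they hold exactly (up to rounding) for any admissible choice of $\xi$ and $\xi_1$.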
\begin{figure}[htb] \centerline{ \scalebox{0.4}{\includegraphics[trim=0 0 0 0,clip]{figh0d}} } \caption{The arcs $C'_{\frac{m}{2}+k}$, $C''_{\frac{m}{2}+k}$, $C'_{\frac{m}{2}-k+1}$ and $C''_{\frac{m}{2}-k+1}$ for $z_0=0$.} \label{fig:h0d} \end{figure} For $k=1,\ldots,m/2$, the function $U_k(\zeta)$ is the unique solution of the following Dirichlet problem: \begin{subequations}\label{eq:bdv-U0} \begin{align} \label{eq:U-Lap} \nabla^2 U_k(\zeta) &= 0 \quad \mbox{if }\zeta\in G_m, \\ \label{eq:U-m} U_k(\zeta)&= 1 \quad \mbox{if }\zeta\in C_{\frac{m}{2}-j+1}\cup C_{\frac{m}{2}+j}, \quad j=1,\ldots,k-1, \\ \label{eq:U-k'} U_k(\zeta)&= 1 \quad \mbox{if }\zeta\in C'_{\frac{m}{2}-k+1}\cup C'_{\frac{m}{2}+k}, \\ \label{eq:U-k''} U_k(\zeta)&= 0 \quad \mbox{if }\zeta\in C''_{\frac{m}{2}-k+1}\cup C''_{\frac{m}{2}+k}, \\ \label{eq:U-p} U_k(\zeta)&= 0 \quad \mbox{if }\zeta\in C_{\frac{m}{2}-j+1}\cup C_{\frac{m}{2}+j}, \quad j=k+1,\ldots,m/2. \end{align} \end{subequations} Here, $C'_{\frac{m}{2}+k}$ is the arc joining $\overline{\xi}$, $\xi_1$, and $\xi$ on the circle $C_{\frac{m}{2}+k}$ and $C''_{\frac{m}{2}+k}$ is the adjacent arc; $C'_{\frac{m}{2}-k+1}$ is the arc joining $-\overline{\xi}$, $-\xi_1$, and $-\xi$ on the circle $C_{\frac{m}{2}-k+1}$ and $C''_{\frac{m}{2}-k+1}$ is the adjacent arc. The function $U_k(\zeta)$ is assumed to be bounded at infinity. Note that the boundary conditions on the two circles $C_{\frac{m}{2}-k+1}$ and $C_{\frac{m}{2}+k}$ are again not continuous. In a similar fashion to the problem~\eqref{eq:bdv-U3}, we reduce the current problem~\eqref{eq:bdv-U0} to one with continuous boundary data. For $k=1,\ldots,m/2$, the function $\Psi_k(\zeta)$ defined by~\eqref{eq:Psi} is harmonic everywhere in the domain exterior to the $m$ discs, equal to 1 on the arc $C'_{\frac{m}{2}+k}$, and equal to $0$ on the arc $C''_{\frac{m}{2}+k}$.
Similarly, the function \begin{equation}\label{eq:Phi_k} \Phi_k(\zeta)=\frac{1}{\pi}\mathop{\mathrm{Im}} \log \phi(\psi(\zeta,-\xi,-\xi_1)), \end{equation} is harmonic everywhere in the domain exterior to the $m$ discs, equal to 1 on the arc $C'_{\frac{m}{2}-k+1}$, and equal to $0$ on the arc $C''_{\frac{m}{2}-k+1}$. Thus, the function \begin{equation}\label{eq:Vk-Uk-0} V_k(\zeta)=U_k(\zeta)-\Psi_k(\zeta)-\Phi_k(\zeta) \end{equation} is a solution of the following Dirichlet problem: \begin{subequations}\label{eq:bdv-V0} \begin{align} \label{eq:V-Lap} \nabla^2 V_k(\zeta) &= 0 ~~~~~~~~~~~~~~~~~~~~~~~\:\; \mbox{if }\zeta\in G_m, \\ \label{eq:V-m} V_k(\zeta)&= 1-\Psi_k(\zeta)-\Phi_k(\zeta) ~~ \mbox{if }\zeta\in C_{\frac{m}{2}-j+1}\cup C_{\frac{m}{2}+j}, ~ j=1,\ldots,k-1, \\ \label{eq:V-k'} V_k(\zeta)&= -\Phi_k(\zeta) ~~~~~~~~~~~~~~~\:\: \mbox{if }\zeta\in C_{\frac{m}{2}+k}, \\ \label{eq:V-k''} V_k(\zeta)&= -\Psi_k(\zeta) ~~~~~~~~~~~~~~~\:\: \mbox{if }\zeta\in C_{\frac{m}{2}-k+1}, \\ \label{eq:V-p} V_k(\zeta)&= -\Psi_k(\zeta)-\Phi_k(\zeta) ~~~~~ \mbox{if }\zeta\in C_{\frac{m}{2}-j+1}\cup C_{\frac{m}{2}+j}, ~ j=k+1,\ldots,m/2. \end{align} \end{subequations} The function $V_k$ is bounded at infinity. The boundary data of the new problem~\eqref{eq:bdv-V0} is now continuous on all circles. As before, it can also be observed that the image circles under the mappings $\phi(\psi(\zeta,\xi,\xi_1))$ and $\phi(\psi(\zeta,-\xi,-\xi_1))$ are very small, and hence in problem~\eqref{eq:bdv-V0} the boundary data is almost constant. Hence, in view of~\eqref{eq:Psi} and~\eqref{eq:Phi_k}, we may approximate \begin{equation} \Psi_k(\zeta) \approx P_{kj}= \Psi_k(c_j), \quad \Phi_k(\zeta) \approx Q_{kj}=\Phi_k(c_j), \quad \zeta \in C_j, \quad j=1,\ldots,m, \end{equation} where $c_j$ is the center of the circle $C_j$.
Thus, the Dirichlet problem~\eqref{eq:bdv-V0} becomes \begin{subequations}\label{eq:bdv-V2} \begin{align} \label{eq:V2-Lap} \nabla^2 V_k(\zeta) &= 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\:\:\; \mbox{if }\zeta\in G_m, \\ \label{eq:V2-m} V_k(\zeta)&= 1-P_{k,\frac{m}{2}-j+1}-Q_{k,\frac{m}{2}-j+1} ~~ \mbox{if }\zeta\in C_{\frac{m}{2}-j+1}, ~ j=1,\ldots,k-1, \\ \label{eq:V2+m} V_k(\zeta)&= 1-P_{k,\frac{m}{2}+j}-Q_{k,\frac{m}{2}+j} ~~~~~~~~ \mbox{if }\zeta\in C_{\frac{m}{2}+j}, ~ j=1,\ldots,k-1, \\ \label{eq:V2-k'} V_k(\zeta)&= -Q_{k,\frac{m}{2}+k} ~~~~~~~~~~~~~~~~~~~~~~~~ \mbox{if }\zeta\in C_{\frac{m}{2}+k}, \\ \label{eq:V2-k''} V_k(\zeta)&= -P_{k,\frac{m}{2}-k+1} ~~~~~~~~~~~~~~~~~~~~\:\: \mbox{if }\zeta\in C_{\frac{m}{2}-k+1}, \\ \label{eq:V2-p} V_k(\zeta)&= -P_{k,\frac{m}{2}-j+1}-Q_{k,\frac{m}{2}-j+1} ~~~~\: \mbox{if }\zeta\in C_{\frac{m}{2}-j+1}, ~ j=k+1,\ldots,m/2, \\ \label{eq:V2+p} V_k(\zeta)&= -P_{k,\frac{m}{2}+j}-Q_{k,\frac{m}{2}+j} ~~~~~~~~~~\; \mbox{if }\zeta\in C_{\frac{m}{2}+j}, ~ j=k+1,\ldots,m/2. \end{align} \end{subequations} The function $V_k$ can then be written in terms of the harmonic measures $\{\sigma_j\}_{j=1}^m$ through \begin{eqnarray} \nonumber V_k(\zeta)&=& \sum_{j=1}^{k-1}\left(\sigma_{\frac{m}{2}-j+1}(\zeta)+\sigma_{\frac{m}{2}+j}(\zeta)\right)\\ \label{eq:Vk-0} &-& \sum_{\begin{subarray}{c} j=1\\j\ne k\end{subarray}}^{m/2} \left([P_{k,\frac{m}{2}-j+1}+Q_{k,\frac{m}{2}-j+1}]\sigma_{\frac{m}{2}-j+1}(\zeta) +[P_{k,\frac{m}{2}+j}+Q_{k,\frac{m}{2}+j}]\sigma_{\frac{m}{2}+j}(\zeta)\right)\\ \nonumber &-&P_{k,\frac{m}{2}-k+1}\sigma_{\frac{m}{2}-k+1}(\zeta)-Q_{k,\frac{m}{2}+k}\sigma_{\frac{m}{2}+k}(\zeta). \end{eqnarray} Then the unique solution $U_k(\zeta)$ to the Dirichlet problem~\eqref{eq:bdv-U0} can be computed through~\eqref{eq:Vk-Uk-0}.
Note that the images of the circles $C_{\frac{m}{2}-k+1}$ and $C_{\frac{m}{2}+k}$ under the conformal mapping $F$ are the slits $I_{\frac{m}{2}-k+1}$ and $I_{\frac{m}{2}+k}$, respectively. Due to the symmetry of $\Omega_m$ and $G_m$ with respect to the real line, we have $F(\xi)=F(\overline{\xi})\in I_{\frac{m}{2}+k}$ and $F(-\xi)=F(-\overline{\xi})\in I_{\frac{m}{2}-k+1}$ (see Figure~\ref{fig:h0d}). For a given $r$ such that the capture circle intersects the two slits $I_{\frac{m}{2}+k}$ and $I_{\frac{m}{2}-k+1}$ for some $k=1,\ldots,m/2$, there are points $\xi\in C_{\frac{m}{2}+k}$ and $-\xi\in C_{\frac{m}{2}-k+1}$ which depend on $r$ via \begin{equation}\label{eq:r0-xi} r= F(\xi). \end{equation} The values of $F(\xi)$ can be computed as in the case of $z_0=-3/2$. Thus, the value of the $h$-function is given by \begin{equation}\label{eq:hr0-xi} h(r)=U_k(\zeta_0). \end{equation} To compute the values of $h(r)$ when the capture circle intersects the two slits $I_{\frac{m}{2}+k}$ and $I_{\frac{m}{2}-k+1}$ for some $k=1,\ldots,m/2$, as before, we take $\xi$ on the circle $C_{\frac{m}{2}+k}$ and then compute the values of $r$ through~\eqref{eq:r0-xi}. Again, we choose $31$ values of $\xi$ on the upper half of the circle $C_{\frac{m}{2}+k}$. For each of these points $\xi$, we compute the value of $r$ through~\eqref{eq:r0-xi} and then the value of $h(r)$ through~\eqref{eq:hr0-xi}. For the values of $r$ corresponding to the capture circle passing through the gaps between the slits $I_k$, the values of $h(r)$ are computed as described in Section~\ref{sec:step}. The graphs of the function $h(r)$ for $m=16$ and $m=64$ are given in Figure~\ref{fig:h-0}. The CPU times (in seconds) required for computing the step heights of the $h$-function and the $h$-function for $z_0=-3/2$ and $z_0=0$ are presented in Table~\ref{tab:time}.
All computations in this paper are performed in {\sc Matlab} R2022a on an MSI desktop with an AMD Ryzen 7 5700G processor (3801 MHz, 8 cores, 16 logical processors) and 16 GB RAM. \begin{figure}[htb] \centerline{ \scalebox{0.6}{\includegraphics[trim=0 0 0 0,clip]{hfunO16}} \scalebox{0.6}{\includegraphics[trim=0 0 0 0,clip]{hfunO64}} } \caption{The $h$-function for the domain $\Omega_{16}$ (left) and the domain $\Omega_{64}$ (right) when $z_0=0$.} \label{fig:h-0} \end{figure} \begin{table}[h] \caption{The CPU time (sec) required for computing the step heights of the $h$-function (left) and the total time required for computing the $h$-function (right).} \label{tab:time} \centering \begin{tabular}{l|cc|cc} \hline & \multicolumn{2}{c|}{The step height} & \multicolumn{2}{c}{The $h$-function} \\ \cline{2-5} & $z_0=-3/2$ & $z_0=0$ & $z_0=-3/2$ & $z_0=0$ \\ \hline $\Omega_4$ & $0.4120$ & $0.4066$ & $0.4345$ & $0.4363$\\ $\Omega_8$ & $0.5007$ & $0.5117$ & $0.5186$ & $0.5316$ \\ $\Omega_{16}$ & $0.8826$ & $0.9272$ & $0.9227$ & $0.9619$ \\ $\Omega_{256}$ & $24.622$ & $24.715$ & $29.251$ & $29.212$ \\ $\Omega_{512}$ & $87.541$ & $85.479$ & $104.76$ & $103.08$ \\ $\Omega_{1024}$ & $330.53$ & $326.17$ & $397.72$ & $392.40$ \\ \hline \end{tabular} \end{table} \section{Asymptotic behavior of the $h$-function}\label{sec:asy} To complement the preceding computations, in this section we analyze numerically some asymptotic features of the $h$-functions as $r\to1^+$ for the basepoint $z_0=-3/2$ and as $r\to(1/6)^+$ for the basepoint $z_0=0$. \subsection{The basepoint $z_0=-3/2$} For this case, we study numerically the behavior of the $h$-function $h(r)$ when $r$ is near $1$ for several values of $\ell$.
For $E_0=[-1/2,1/2]$, i.e., $\ell=0$, we have~\cite{gswc} \begin{equation}\label{eq:h-ell0} h(r)=\frac{2}{\pi}\tan^{-1}\left(\sqrt{2}\sqrt{\frac{r-1}{2-r}}\right) \end{equation} which implies that \[ h(r)\sim C_0(r-1)^{\beta_0}, \qquad C_0=\frac{2\sqrt{2}}{\pi}, \quad \beta_0=\frac{1}{2}. \] For $\ell\ge1$, to the best of our knowledge, there are no results about the asymptotic behavior of the $h$-function as $r\to1^+$. Here, we attempt to address this issue numerically. We choose $20$ values of $r$ in the interval $(1,1+\varepsilon)$ with $\varepsilon=10^{-6}$ (as illustrated in Figure~\ref{fig:3hr}, these $20$ values of $r$ are obtained by taking $20$ points on the upper half of the circle $C_1$ such that the images of these points under the conformal mapping $F$ are in $(1,1+\varepsilon)$). Then, we use the method described in the previous section to compute the values of the $h$-function $h(r)$ for these values of $r$. We consider only the values $\ell=1,\ldots,8$; considering larger values of $\ell$ would require smaller values of $\varepsilon$, which is numerically challenging. For $\ell=1,\ldots,8$, the graphs of the functions $h(r)$ on $(1,1+\varepsilon)$ are shown in Figure~\ref{fig:h-asy} (left). The figure also shows the graph of the function $h(r)$ for $\ell=0$, which is given by~\eqref{eq:h-ell0}. Figure~\ref{fig:h-asy} (right) shows the graphs of $\log(h(r))$ as a function of $\log(r-1)$ for $\ell=0,1,\ldots,8$.
\begin{figure}[htb] \centerline{ \scalebox{0.5}{\includegraphics[trim=0 0 0 0,clip]{fig_h_asy_sq}} \scalebox{0.5}{\includegraphics[trim=0 0 0 0,clip]{fig_h_asy_L}} } \caption{The graphs of the $h$-function (left) and $\log(h(r))$ (right) for $\ell=0,1,\ldots,8$ and $r\in(1,1+\varepsilon)$ with $\varepsilon=10^{-6}$ for $z_0=-3/2$.} \label{fig:h-asy} \end{figure} Figure~\ref{fig:h-asy} suggests that the functions $h(r)$ for $\ell=1,\ldots,8$ have the same behavior near $r=1$ as in the case $\ell=0$. For each $\ell=1,\ldots,8$, we use these $20$ values of $r$, i.e., $r_1,\ldots,r_{20}$, and the approximate values of the $h$-function $h(r)$ at these values to find approximations of the real constants $C_\ell$ and $\beta_\ell$ such that \begin{equation}\label{eq:hrj-C} h(r_j)\approx C_\ell (r_j-1)^{\beta_\ell}. \end{equation} The approximations of $C_\ell$ and $\beta_\ell$ will be computed using the least squares method, i.e., we find the values of $C_\ell$ and $\beta_\ell$ that minimize the least squares error \begin{equation}\label{eq:LSE} \mathcal{E}_\ell = \sum_{j=1}^{20}\left(h(r_j)-C_\ell(r_j-1)^{\beta_\ell}\right)^2. \end{equation} By taking the logarithm of both sides of~\eqref{eq:hrj-C}, we obtain \[ \log(h(r_j))\approx \log(C_\ell)+\beta_\ell\log(r_j-1), \] which can be written as \[ y_j\approx\log(C_\ell)+\beta_\ell x_j, \quad j=1,\ldots,20, \] where $y_j= \log(h(r_j))$ and $x_j=\log(r_j-1)$. The approximate values of the constants $\log(C_\ell)$ and $\beta_\ell$ are computed using the {\sc Matlab} function \verb|polyfit| to find the best-fit line for the points $(x_j,y_j)$, $j=1,\ldots,20$.
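A minimal sketch of this log-log fit, using NumPy's \verb|polyfit| in place of the {\sc Matlab} function of the same name (the data below are synthetic, generated from a known power law with hypothetical constants rather than computed $h$-values):

```python
import numpy as np

def fit_power_law(r, h, r0=1.0):
    """Fit h(r) ~ C (r - r0)^beta via a straight-line fit of
    log h against log(r - r0), as described in the text."""
    x = np.log(np.asarray(r) - r0)
    y = np.log(np.asarray(h))
    beta, log_c = np.polyfit(x, y, 1)   # slope, intercept
    return np.exp(log_c), beta

# Synthetic data from a known power law (C = 0.9, beta = 0.5 are
# hypothetical test values, not the paper's computed constants).
r = 1.0 + np.linspace(1e-8, 1e-6, 20)
h = 0.9 * np.sqrt(r - 1.0)
C, beta = fit_power_law(r, h)
```

On data generated exactly from a power law, the fit recovers the constants to machine precision, which is a useful self-check before applying it to computed $h$-values.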
The values of these constants, as well as the error $\mathcal{E}_\ell$ in~\eqref{eq:LSE}, are given in Table~\ref{tab:Error} for $\ell=1,\ldots,8$. \begin{table}[h] \caption{The values of the constants $C_\ell$ and $\beta_\ell$, as well as the error $\mathcal{E}_\ell$ for $z_0=-3/2$ (left) and $z_0=0$ (right).} \label{tab:Error} \centering \begin{tabular}{l|lll|lll} \hline & \multicolumn{3}{c|}{$z_0=-3/2$} & \multicolumn{3}{c}{$z_0=0$} \\ \cline{2-7} $\ell$ & $C_\ell$ & $\beta_\ell$ & $\mathcal{E}_\ell$ & $C_\ell$ & $\beta_\ell$ & $\mathcal{E}_\ell$ \\ \hline $0$ & $0.900316$ & $0.5$ & --- & --- & --- & --- \\ $1$ & $0.939343$ & $0.500000$ & $2.71\times10^{-21}$ & $2.351932$ & $0.500000$ & $9.00\times10^{-18}$ \\ $2$ & $0.977556$ & $0.500000$ & $1.31\times10^{-20}$ & $2.395871$ & $0.500000$ & $5.42\times10^{-18}$ \\ $3$ & $1.018398$ & $0.500000$ & $1.89\times10^{-19}$ & $2.466099$ & $0.500000$ & $3.14\times10^{-18}$ \\ $4$ & $1.061124$ & $0.500000$ & $2.67\times10^{-18}$ & $2.555452$ & $0.500000$ & $4.37\times10^{-19}$ \\ $5$ & $1.105679$ & $0.500002$ & $1.08\times10^{-17}$ & $2.655781$ & $0.500001$ & $6.70\times10^{-17}$ \\ $6$ & $1.152042$ & $0.500000$ & $5.67\times10^{-16}$ & $2.763722$ & $0.500004$ & $8.10\times10^{-16}$ \\ $7$ & $1.200444$ & $0.500001$ & $4.72\times10^{-15}$ & $2.878107$ & $0.500011$ & $1.07\times10^{-14}$ \\ $8$ & $1.251569$ & $0.500037$ & $2.03\times10^{-14}$ & $2.998958$ & $0.500032$ & $1.31\times10^{-13}$ \\ \hline \end{tabular} \end{table} It is clear from Table~\ref{tab:Error} that the value of the constant $C_\ell$ increases as $\ell$ increases. However, it is not clear from the presented values whether $C_\ell$ remains bounded as $\ell\to\infty$.
As a numerical attempt to study the behavior of $C_\ell$ for large values of $\ell$, we use extrapolation to find approximate values of $C_\ell$ for $\ell>8$. To use the exact value $C_0$ and the computed values $C_1,\ldots,C_8$ to extrapolate the values of $C_\ell$ for $\ell>8$, we note that $\log(C_\ell)$ behaves linearly as a function of $\ell$ (see Figure~\ref{fig:C08} (left)). We again use the {\sc Matlab} function \verb|polyfit| to compute the best line $a\ell+b$ that fits the points $(\ell,\log C_\ell)$ for $\ell=0,1,\ldots,8$. By computing the approximate values of the coefficients, the values of $C_\ell$ can be approximated by \begin{equation}\label{eq:C-ell} C_\ell \approx 0.900613 e^{0.041069\ell}\quad {\rm for}\quad \ell\ge 0. \end{equation} The least squares error in this formula is \[ \sum_{\ell=0}^{8} \left(C_\ell - 0.900613 e^{0.041069\ell}\right)^2\approx 1.78\times10^{-6}. \] The formula~\eqref{eq:C-ell} suggests that $C_\ell$ is unbounded as $\ell\to\infty$. The graph of $C_\ell$ as a function of $\ell$ is shown in Figure~\ref{fig:C08} (right), where the (red) circles are the values of $C_\ell$ presented in Table~\ref{tab:Error} for $\ell=0,1,\ldots,8$, and the (blue) circles are the extrapolated values for $\ell=9,\ldots,20$. \begin{figure}[htb] \centerline{ \scalebox{0.5}{\includegraphics[trim=0 0 0 0,clip]{fig_C3_L}} \scalebox{0.5}{\includegraphics[trim=0 0 0 0,clip]{fig_C3_E}} } \caption{The values of $\log C_\ell$ (left) and $C_\ell$ (right) for the case $z_0=-3/2$.} \label{fig:C08} \end{figure} \subsection{The basepoint $z_0=0$} For this case, it is not possible to consider $\ell=0$ because $z_0=0$ is in the middle of the interval $[-1/2,1/2]$.
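The extrapolation of $C_\ell$ can likewise be reproduced from the tabulated values alone. The following Python sketch (again with NumPy's \verb|polyfit| standing in for the {\sc Matlab} function) fits the line $a\ell+b$ through $(\ell,\log C_\ell)$ using the nine values for $z_0=-3/2$ from Table~\ref{tab:Error}, then extrapolates to $\ell=9,\ldots,20$.

```python
import numpy as np

# C_0 (exact) and C_1,...,C_8 (computed) for z_0 = -3/2, copied from
# the table of constants in the text.
C = np.array([0.900316, 0.939343, 0.977556, 1.018398, 1.061124,
              1.105679, 1.152042, 1.200444, 1.251569])
ell = np.arange(9)

# Best line a*ell + b through (ell, log C_ell); exponentiating gives
# the approximation C_ell ~ exp(b) * exp(a * ell) of the displayed formula.
a, b = np.polyfit(ell, np.log(C), 1)
C_extrapolated = np.exp(b + a * np.arange(9, 21))  # ell = 9, ..., 20
```

The fitted coefficients match the constants $0.041069$ and $0.900613$ quoted in the text to the displayed precision.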
For $\ell\ge1$, as in the previous case, we choose $20$ values of $r$ in the interval $(1/6,1/6+\varepsilon)$ with $\varepsilon=10^{-6}$ (by choosing $20$ points on the upper half of the circle $C_1$ such that the images of these points under the conformal mapping $F$ are in $(1/6,1/6+\varepsilon)$). We then use the method described in Section~\ref{sec:h-fun} to compute the values of the $h$-function $h(r)$ for these values of $r$. As in the previous subsection, we consider only the values of $\ell$, $\ell=1,\ldots,8$. The graphs of the functions $h(r)$ for these values of $\ell$ on $(1/6,1/6+\varepsilon)$ are shown in Figure~\ref{fig:h0-asy} (left). Figure~\ref{fig:h0-asy} (right) shows the graphs of the function $\log(h(r))$ for $\ell=1,\ldots,8$. \begin{figure}[htb] \centerline{ \scalebox{0.5}{\includegraphics[trim=0 0 0 0,clip]{fig_h0_asy_sq}} \scalebox{0.5}{\includegraphics[trim=0 0 0 0,clip]{fig_h0_asy_L}} } \caption{The graphs of the $h$-function (left) and $\log(h(r))$ (right) for $\ell=1,\ldots,8$ and $r\in(1/6,1/6+\varepsilon)$ with $\varepsilon=10^{-6}$ for $z_0=0$.} \label{fig:h0-asy} \end{figure} Figure~\ref{fig:h0-asy} suggests that the functions $h(r)$ for $\ell=1,\ldots,8$ have the same behavior near $r=1/6$ as in the case $z_0=-3/2$ near $r=1$. We use the $20$ values of $r$, i.e., $r_1,\ldots,r_{20}\in(1/6,1/6+\varepsilon)$, and the approximate values of the $h$-function $h(r)$ at these values to find approximations of real constants $C_\ell$ and $\beta_\ell$ such that \begin{equation}\label{eq:hrj-C0} h(r_j)\approx C_\ell (r_j-1/6)^{\beta_\ell} \end{equation} for each $\ell$, $\ell=1,\ldots,8$. The approximations of $C_\ell$ and $\beta_\ell$ are computed using the least squares method and the {\sc Matlab} function \verb|polyfit| as in the case $z_0=-3/2$.
The values of these constants, as well as the least squares error $\mathcal{E}_\ell$, are given in Table~\ref{tab:Error} for $\ell=1,\ldots,8$. By analogy with the case $z_0=-3/2$, it seems from Table~\ref{tab:Error} that the values of $C_\ell$ are also unbounded as $\ell\to\infty$. However, we have been unsuccessful in finding a best-fit formula for $C_\ell$ as a function of $\ell$ akin to~\eqref{eq:C-ell} in the case $z_0=-3/2$. \section{Conclusions}\label{sec:con} In this paper, we have presented a fast and accurate numerical method for approximating the $h$-function for the middle-thirds Cantor set $\mathcal{C}$. We achieved this by computing $h$-functions for multiply connected slit domains $\Omega_m$ of high connectivity. Computing the $h$-functions for the slit domains $\Omega_m$ requires solving Dirichlet BVPs on these domains. We opted to map the slit domains $\Omega_m$ onto conformally equivalent circular domains $G_m$, where the transformed Dirichlet problems are solved and the $h$-functions are then calculated. Computing the conformal mapping from $\Omega_m$ onto $G_m$ as well as solving the transformed Dirichlet BVPs in the domain $G_m$ are performed using the BIE with the Neumann kernel~\eqref{eq:ie-g}. This is the same BIE that was used in~\cite{LSN17} to approximate the logarithmic capacity of $\mathcal{C}$. We presented results for the step heights of the $h$-function up to connectivity $m=1024$ for the basepoint $z_0=-3/2$ and $m=2048$ for the basepoint $z_0=0$ (see Figures~\ref{fig:hm-3} and~\ref{fig:hm-0}). Our method could certainly be used for larger values of $m$, with still arguably competitive computation times. It is also important to point out that the actual graph of the $h$-function associated with $\mathcal{C}$, regardless of basepoint location, will consist of a union of infinitely many points and constant intervals.
However, the main qualitative features of the $h$-functions we have presented for slit domains of high connectivity, such as those in Figures~\ref{fig:hm-3} and~\ref{fig:hm-0}, will differ only very slightly from those exhibited by the actual graph of the $h$-function associated with $\mathcal{C}$. \begin{thebibliography}{10} \bibitem{asww} J.~Aar{\~a}o, M.A. Snipes, B.L. Walden, and L.A. Ward. \newblock {\em Four ways to calculate harmonic measure in a doubly connected domain}, in preparation. \bibitem{AST13} N.~Aoyama, T.~Sakajo, and H.~Tanaka. \newblock A computational theory for spiral point vortices in multiply connected domains with slit boundaries. \newblock {\em Japan J. Indust. Appl. Math.}, 30:485--509, 2013. \bibitem{Atk97} K.E. Atkinson. \newblock {\em The Numerical Solution of Integral Equations of the Second Kind}. \newblock Cambridge University Press, Cambridge, 1997. \bibitem{bawa14} A.~Barton and L.A. Ward. \newblock A new class of harmonic measure distribution functions. \newblock {\em J. Geom. Anal.}, 24(4):2035--2071, 2014. \bibitem{beso03} D.~Betsakos and A.~Solynin. \newblock On the distribution of harmonic measure on simply connected planar domains. \newblock {\em J. Aust. Math. Soc.}, 75:145--151, 2003. \bibitem{bh89} D.A. Brannan and W.K. Hayman. \newblock Research problems in complex analysis. \newblock {\em Bull. London Math. Soc.}, 21:1--35, 1989. \bibitem{CrowdyBook} Darren Crowdy. \newblock {\em Solving Problems in Multiply Connected Domains}. \newblock SIAM, 2020. \bibitem{ckgn} D.G. Crowdy, E.H. Kropf, C.C. Green, and M.M.S. Nasser. \newblock The {S}chottky-{K}lein prime function: a theoretical and computational tool for applications. \newblock {\em IMA J. App. Math.}, 81:589--628, 2016. \bibitem{Del-harm} Thomas~K. DeLillo and John~A. Pfaltzgraff. \newblock Extremal distance, harmonic measure and numerical conformal mapping. \newblock {\em J. Comput. Appl.
Math.}, 46(1-2):103--113, 1993. \bibitem{Tre-Gre} Mark Embree and Lloyd~N. Trefethen. \newblock Green's functions for multiply connected domains via conformal mapping. \newblock {\em SIAM Rev.}, 41(4):745--761, 1999. \bibitem{Gak} F.D. Gakhov. \newblock {\em Boundary Value Problems}. \newblock Pergamon Press, Oxford, 1966. \bibitem{garmar} J.B. Garnett and D.E. Marshall. \newblock {\em Harmonic Measure}. \newblock Cambridge University Press, Cambridge, 2008. \bibitem{gswc} C.C. Green, M.A. Snipes, L.A. Ward, and D.G. Crowdy. \newblock Harmonic-measure distribution functions for a class of multiply connected symmetrical slit domains. \newblock {\em Proceedings of the Royal Society A}, 478(2259):20210832, 2022. \bibitem{Gre-Gim12} L.~Greengard and Z.~Gimbutas. \newblock {\em {FMMLIB2D}: A {MATLAB} toolbox for fast multipole method in two dimensions}, version 1.2 edition, 2012. \newblock \url{http://www.cims.nyu.edu/cmcl/fmm2dlib/fmm2dlib.html}. Accessed 17 Nov 2022. \bibitem{ka44} S.~Kakutani. \newblock Two-dimensional {B}rownian motion and harmonic functions. \newblock {\em Proc. Imp. Acad. Tokyo}, 20:706--714, 1944. \bibitem{ka45} Shizuo Kakutani. \newblock Two-dimensional {B}rownian motion and the type problem of {R}iemann surfaces. \newblock {\em Proceedings of the Japan Academy}, 21:138--140, 1945. \bibitem{bookBM} Ioannis Karatzas and Steven Shreve. \newblock {\em Brownian Motion and Stochastic Calculus}. \newblock Springer Science \& Business Media, 2nd edition, 2012. \bibitem{LSN17} J.~Liesen, O.~S{\'e}te, and M.M.S. Nasser. \newblock Fast and accurate computation of the logarithmic capacity of compact sets. \newblock {\em Comput. Methods Funct. Theory}, 17:689--713, 2017. \bibitem{ArMa} A.~Mahenthiram. \newblock {\em Harmonic-measure distribution functions, and related functions, for simply connected and multiply connected two-dimensional regions}.
\newblock PhD thesis (submitted), University of South Australia, 2022. \bibitem{Nas-Siam1} M.M.S. Nasser. \newblock Numerical conformal mapping via a boundary integral equation with the generalized {N}eumann kernel. \newblock {\em SIAM J. Sci. Comput.}, 31:1695--1715, 2009. \bibitem{Nas-ETNA} M.M.S. Nasser. \newblock Fast solution of boundary integral equations with the generalized {N}eumann kernel. \newblock {\em Electron. Trans. Numer. Anal.}, 44:189--229, 2015. \bibitem{NG18} M.M.S. Nasser and C.C. Green. \newblock A fast numerical method for ideal fluid flow in domains with multiple stirrers. \newblock {\em Nonlinearity}, 31:815--837, 2018. \bibitem{NK} M.M.S. Nasser and E.~Kalmoun. \newblock Application of integral equations to simulating local fields in carbon nanotube reinforced composites. \newblock In R.~McPhedran, S.~Gluzman, V.~Mityushev, and N.~Rylko, editors, {\em 2D and Quasi-2D Composite and Nanocomposite Materials}, pages 233--248. Elsevier, 2020. \bibitem{Nvm} M.M.S. Nasser and M.~Vuorinen. \newblock Numerical computation of the capacity of generalized condensers. \newblock {\em J. Comput. Appl. Math.}, 377:112865, 2020. \bibitem{SnipWard05} M.A. Snipes and L.A. Ward. \newblock Realizing step functions as harmonic measure distributions of planar domains. \newblock {\em Ann. Acad. Sci. Fenn. Math.}, 30(2):353--360, 2005. \bibitem{SnipWard08} M.A. Snipes and L.A. Ward. \newblock Convergence properties of harmonic measure distributions for planar domains. \newblock {\em Complex Var. Elliptic Equ.}, 53(10):897--913, 2008. \bibitem{SnipWard16} M.A. Snipes and L.A. Ward. \newblock Harmonic measure distribution functions of planar domains: A survey. \newblock {\em J. Anal.}, 24:293--330, 2016. \bibitem{Tre-Ser} Lloyd~N. Trefethen. \newblock Series solution of {L}aplace problems. \newblock {\em ANZIAM J.}, 60(1):1--26, 2018. \bibitem{wawa96} B.L. Walden and L.A. Ward.
\newblock Distributions of harmonic measure for planar domains. \newblock In I.~Laine and O.~Martio, editors, {\em Proceedings of 16th Nevanlinna Colloquium, {J}oensuu}, pages 289--299. Walter de Gruyter, Berlin, 1996. \bibitem{wawa01} B.L. Walden and L.A. Ward. \newblock Asymptotic behaviour of distributions of harmonic measure for planar domains. \newblock {\em Complex Variables: Theory and Applications}, 46(2):157--177, 2001. \bibitem{wawa07} B.L. Walden and L.A. Ward. \newblock A harmonic measure interpretation of the arithmetic--geometric mean. \newblock {\em Amer. Math. Monthly}, 114(7):610--622, 2007. \bibitem{Weg-Nas} R.~Wegmann and M.M.S. Nasser. \newblock The {R}iemann-{H}ilbert problem and the generalized {N}eumann kernel on multiply connected regions. \newblock {\em J. Comput. Appl. Math.}, 214:36--57, 2008. \end{thebibliography} \end{document}
\begin{document} \title[Mean proximality and mean Li-Yorke chaos]{Mean proximality and mean Li-Yorke chaos} \author[F. Garcia-Ramos]{Felipe Garcia-Ramos} \address{Felipe Garcia-Ramos: Instituto de Fisica, Universidad Autonoma de San Luis Potosi, Av. Manuel Nava 6, SLP, 78290 Mexico} \email{[email protected]} \author[L. Jin]{Lei Jin} \address{Lei Jin: Department of Mathematics, University of Science and Technology of China, Hefei, Anhui, 230026, P.R. China \& Institute of Mathematics, Polish Academy of Sciences, Warsaw, 00656, Poland} \email{[email protected]} \thanks{The first author was supported by IMPA, CAPES (Brazil) and NSERC (Canada). The second author is supported by NNSF of China 11225105, 11371339 and 11431012.} \subjclass[2010]{37B05; 54H20} \keywords{mean Li-Yorke chaos, mean Devaney chaos, mean Li-Yorke set, mean proximal pair, mean asymptotic pair} \begin{abstract} We prove that if a topological dynamical system is mean sensitive and contains a mean proximal pair consisting of a transitive point and a periodic point, then it is mean Li-Yorke chaotic (DC2 chaotic). On the other hand, we show that a system is mean proximal if and only if it is uniquely ergodic and the unique measure is supported on one point. \end{abstract} \maketitle \section{Introduction} In the study of topological dynamical systems, different versions of chaos, which represent complexity in various ways, have been defined and studied. Some properties that are considered forms of chaos are positive entropy, topological mixing, Devaney chaos, and Li-Yorke chaos. The relationship among them has been one of the main interests of this topic. Solving open questions, Huang and Ye \cite{HY} proved that Devaney chaos implies Li-Yorke chaos; and Blanchard, Glasner, Kolyada and Maass \cite{BGKM} showed that positive entropy also implies Li-Yorke chaos. It is also known that topological weak mixing implies Li-Yorke chaos \cite{I}.
For all the other implications among these notions there are counterexamples. Many of the classical notions in topological dynamics have an analogous version in the mean sense (e.g. mean sensitivity, mean equicontinuity, and mean distality \cite{LTY,G,DG,OW}). In a similar way, mean forms of Li-Yorke chaos, also known as distributional chaos (DC), have been defined and studied \cite{SS,D}. A nice generalization proved by Downarowicz \cite{D} says that positive topological entropy implies mean Li-Yorke chaos, which strengthens the result of \cite{BGKM}. It was also mentioned in \cite{D} that mean Li-Yorke chaos is equivalent to the so-called chaos DC2, which was first studied for interval systems in \cite{SS}. In our present paper, we would like to use the former terminology (i.e. calling it mean Li-Yorke). Mean Li-Yorke chaos does not imply positive entropy (see e.g. \cite{DS,FPS} for details). For related topics we also recommend \cite{BHS,LY}. Besides positive topological entropy, we do not know of any other condition that implies mean Li-Yorke chaos. Oprocha showed that Devaney chaos does not imply mean Li-Yorke chaos \cite{O}. Furthermore, there are topologically mixing systems with dense periodic points and no mean Li-Yorke pairs \cite{BGKM}. Motivated by the ideas and results above, we ask if there is some strong form of Devaney chaos (\textquotedblleft mean Devaney chaos\textquotedblright\ one could say) that is stronger than mean Li-Yorke chaos. We show that with mean sensitivity and a stronger relationship between transitivity and periodicity we can obtain mean Li-Yorke chaos. As a consequence of this result we show that it is easy to construct subshifts with dense mean Li-Yorke subsets. We also characterize mean proximal systems, which are a subclass of the classic proximal systems. It is known that a system is proximal if and only if its unique minimal subset is a fixpoint \cite{AK}.
Among other characterizations we show that a system is mean proximal if and only if its unique invariant measure is a delta measure on a fixpoint $x_{0}$. As a corollary we also obtain that mean proximal systems contain no mean Li-Yorke pairs. We recall some necessary definitions in the following. By a \textit{topological dynamical system} (TDS, for short), we mean a pair $(X,T)$, where $X$ is a compact metric space with the metric $d$, and $T:X\rightarrow X$ is continuous. Let $(X,T)$ be a TDS. A point $x\in X$ is called a \textit{transitive point} if its orbit is dense in $X$, i.e., $\overline{\{T^{n}x:n\geq0\}}=X$; and called a \textit{periodic point} of period $n\in\mathbb{N}$ if $T^{n}x=x$ but $T^{i}x\neq x$ for $1\leq i\leq n-1$. A pair $(x,y)\in X\times X$ is said to be \textit{proximal} if \begin{equation*} \liminf\limits_{n\rightarrow\infty}d(T^{n}x,T^{n}y)=0. \end{equation*} A pair $(x,y)\in X\times X$ is called a \textit{Li-Yorke pair} if $(x,y)$ is proximal and \begin{equation*} \limsup\limits_{n\rightarrow\infty}d(T^{n}x,T^{n}y)>0. \end{equation*} A subset $S\subset X$ is called a \textit{Li-Yorke set} (or a scrambled set) if every pair $(x,y)\in S\times S$ of distinct points is a Li-Yorke pair. We say that $(X,T)$ is \textit{Li-Yorke chaotic} if $X$ contains an uncountable Li-Yorke subset. It was shown in \cite{DL2} that Li-Yorke sets have measure zero for every invariant measure. We say that $(X,T)$ is \textit{sensitive} if there exists $\delta>0$ such that for every $x\in X$ and every $\epsilon>0$, there is $y\in B(x,\epsilon)$ satisfying \begin{equation*} \limsup\limits_{n\rightarrow\infty}d(T^{n}x,T^{n}y)>\delta. \end{equation*} Originally a TDS was defined to be \textit{Devaney chaotic} if it is transitive, sensitive, and has dense periodic points (it was shown later that the sensitivity hypothesis can be removed).
As we noted, Devaney chaos implies Li-Yorke chaos \cite{HY}. Now we define the equivalent ``mean'' forms. A pair $(x,y)\in X\times X$ is said to be \textit{mean proximal} if \begin{equation*} \liminf\limits_{n\rightarrow\infty}\frac{1}{n}\sum\limits_{i=0}^{n-1} d(T^{i}x,T^{i}y)=0. \end{equation*} A TDS $(X,T)$ is \textit{mean proximal} if every pair $(x,y)\in X\times X$ is mean proximal. Note that when studying invertible TDS this property is known as the so-called ``forward mean proximal'' \cite{DL,OW}. Given $\eta>0$, a pair $(x,y)\in X\times X$ is called a \textit{mean Li-Yorke pair} (with modulus $\eta$) if $(x,y)$ is mean proximal and \begin{equation*} \limsup\limits_{n\rightarrow\infty}\frac{1}{n}\sum \limits_{i=0}^{n-1}d(T^{i}x,T^{i}y)\,\;(\geq\eta)\,>0. \end{equation*} A subset $S\subset X$ is called a \textit{mean Li-Yorke set} (with modulus $\eta$) if every pair $(x,y)\in S\times S$ of distinct points is a mean Li-Yorke pair (with modulus $\eta$). We say that $(X,T)$ is \textit{mean Li-Yorke chaotic} if $X$ contains an uncountable mean Li-Yorke subset. We say that $(X,T)$ is \textit{mean sensitive} if there exists $\delta>0$ such that for every $x\in X$ and every $\epsilon>0$, there is $y\in B(x,\epsilon)$ satisfying \begin{equation*} \limsup\limits_{n\rightarrow\infty}\frac{1}{n}\sum\limits_{i=0}^{n-1} d(T^{i}x,T^{i}y)>\delta. \end{equation*} Next we turn to introducing a new version of chaos. Our aim is to add something to the version of Devaney chaos such that the chaos in the new sense implies mean Li-Yorke chaos. It is worth mentioning that there are Devaney chaotic examples (even with positive entropy) that are not mean sensitive \cite{GL}. So the first extra hypothesis we add is mean sensitivity.
Now, by observing that if a TDS is Devaney chaotic then we can find a periodic point and a transitive point such that they are proximal, we consider the stronger condition: \textquotedblleft there exists a mean proximal pair which consists of a periodic point and a transitive point\textquotedblright. Indeed, this condition together with mean sensitivity is enough, and we do not need the dense periodic points. \begin{Theorem} \label{thm1} If a TDS $(X,T)$ is mean sensitive and there is a mean proximal pair of $X\times X$ consisting of a transitive point and a periodic point, then $(X,T)$ is mean Li-Yorke chaotic; more precisely, there exist a positive number $\eta$ and a subset $K\subset X$ which is a union of countably many Cantor sets such that $K$ is a mean Li-Yorke set with modulus $\eta$. \end{Theorem} As an application of this result we show that with this condition it is easy to construct mean Li-Yorke chaotic systems. See Example \ref{exam}. Another aim of this paper is to characterize mean proximal systems using mean asymptotic pairs. For a TDS $(X,T)$, we say that a pair $(x,y)\in X\times X$ is \textit{asymptotic} if \begin{equation*} \lim\limits_{n\rightarrow\infty}d(T^{n}x,T^{n}y)=0, \end{equation*} and we say that $(x,y)\in X\times X$ is \textit{mean asymptotic} if \begin{equation*} \lim\limits_{n\rightarrow\infty}\frac{1}{n}\sum \limits_{i=0}^{n-1}d(T^{i}x,T^{i}y)=0. \end{equation*} A TDS is mean asymptotic if every pair $(x,y)\in X\times X$ is mean asymptotic. It is known that proximal systems (i.e., systems $(X,T)$ satisfying that every pair $(x,y)\in X\times X$ is proximal) may have no asymptotic pairs \cite[Theorem 6.1]{LT} (and hence it may happen that every pair is a Li-Yorke pair). However, for mean proximal systems the situation is completely different; see Corollary \ref{diff}.
Clearly, mean proximal pairs are proximal (hence mean proximal systems are proximal), but not vice versa; while asymptotic pairs are mean asymptotic, but not vice versa. Thus, a priori, we have the following implications (both for pairs and for systems): \begin{equation*} \text{asymptotic}\Rightarrow\text{mean asymptotic}\Rightarrow\text{mean proximal}\Rightarrow\text{proximal.} \end{equation*} The next theorem reverses the central implication for systems. We denote by $M(X,T)$ the set of all $T$-invariant Borel probability measures on $X$. Given a set $Y$, we denote the diagonal in the product space by $\Delta_{Y}:=\{(y,y):y\in Y\}$. \begin{Theorem} \label{thm3} Suppose that $(X,T)$ is a TDS. Then the following are equivalent: \begin{enumerate} \item $(X,T)$ is mean proximal. \item $(X,T)$ is mean asymptotic. \item Every measure $\lambda \in M(X\times X,T\times T)$ satisfies $\lambda (\Delta _{X})=1$. \item $(X,T)$ is uniquely ergodic and the unique measure of $M(X,T)$ is a delta measure $\delta_{x_0}$ on a fixed point $x_0$ (the unique fixed point). \item $(X\times X,T\times T)$ is mean proximal. \end{enumerate} \end{Theorem} As we mentioned before, proximal systems may have no asymptotic pairs; thus on one hand the situation for mean proximal systems is very different. Nonetheless, the equivalence $1)\Leftrightarrow 4)$ provides a measure-theoretic analogue of a characterization of proximal systems: a system is proximal if and only if its unique minimal subset is a fixpoint $x_{0}$ \cite{AK} (note that in this case invariant measures other than $\delta_{x_{0}}$ are possible). Mean proximality is a stronger condition: the fixpoint supports a unique invariant measure. In particular, we also have the following corollary. \begin{Corollary}\label{diff} If $(X,T)$ is a mean proximal system, then $(X,T)$ has no mean Li-Yorke pairs. \end{Corollary} This paper is organized as follows.
In Section 2, we prove Theorem \ref{thm1}. In Section 3, using measure-theoretic tools, we focus on mean proximal pairs; we provide a proof of Theorem \ref{thm3}. \noindent\textbf{Acknowledgements:} We would like to thank professors Wen Huang and Xiangdong Ye for useful comments and very helpful suggestions concerning this paper. The first author would like to thank the dynamical systems group at the University of Science and Technology of China (in particular Yixiao Qiao and Jie Li) for their hospitality. We also thank the referee for the provided suggestions, in particular for very helpful comments regarding Theorem \ref{thm3}. \section{Proof of Theorem \protect\ref{thm1}} We first state the following useful result due to Mycielski. For details see \cite[Theorem 5.10]{A}. \begin{Lemma}[Mycielski's lemma] \label{mycl} Let $Y$ be a perfect compact metric space and $C$ be a symmetric dense $G_{\delta}$ subset of $Y\times Y$. Then there exists a dense subset $K\subset Y$ which is a union of countably many Cantor sets such that $K\times K\subset C\cup\Delta_{Y}$. \end{Lemma} \begin{proof}[\textbf{Proof of Theorem \protect\ref{thm1}}] Denote by $d$ the metric on $X$. Since $(X,T)$ is mean sensitive, there exists $\delta>0$ such that for every $x\in X$ and every $\epsilon>0$, there is $y\in B(x,\epsilon)$ with \begin{equation*} \limsup\limits_{N\rightarrow\infty}\frac{1}{N}\sum \limits_{k=0}^{N-1}d(T^{k}x, T^{k}y)>\delta. \end{equation*} Let $\eta=\delta/2>0$, and \begin{equation*} D_{\eta}=\{(x,y)\in X\times X: \limsup\limits_{N\rightarrow\infty}\frac{1}{N} \sum\limits_{k=0}^{N-1}d(T^{k}x, T^{k}y)\ge\eta\}.
\end{equation*} By noting that $D_{\eta}$ can also be written in the following form \begin{equation*} D_{\eta}=\bigcap\limits_{m=1}^{\infty}\bigcap\limits_{l=1}^{\infty}\bigcup \limits_{n\ge l} \{(x,y)\in X\times X: \frac{1}{n}\sum \limits_{i=0}^{n-1}d(T^{i}x,T^{i}y)>\eta-\frac{1}{m}\}, \end{equation*} we know that $D_{\eta}$ is a $G_{\delta}$ subset of $X\times X$. If $D_{\eta}$ is not dense in $X\times X$, then there exist $\epsilon>0$ and $x,z\in X$ such that for every $y\in B(x,\epsilon)$, we have \begin{equation*} \limsup\limits_{N\rightarrow\infty}\frac{1}{N}\sum \limits_{k=0}^{N-1}d(T^{k}y, T^{k}z)<\eta. \end{equation*} It follows that for every $y\in B(x,\epsilon)$, we have \begin{align*} & \limsup\limits_{N\rightarrow\infty}\frac{1}{N}\sum \limits_{k=0}^{N-1}d(T^{k}x, T^{k}y) \\ \le & \limsup\limits_{N\rightarrow\infty}\frac{1}{N}\sum\limits_{k=0}^{N-1} \big(d(T^{k}x, T^{k}z)+d(T^{k}y,T^{k}z)\big) \\ \le & \limsup\limits_{N\rightarrow\infty}\frac{1}{N}\sum \limits_{k=0}^{N-1}d(T^{k}x, T^{k}z) +\limsup\limits_{N\rightarrow\infty} \frac{1}{N}\sum\limits_{k=0}^{N-1}d(T^{k}y, T^{k}z) \\ < & \eta+\eta=\delta. \end{align*} This is a contradiction with the fact that $(X,T)$ is mean sensitive. So $D_{\eta}$ is dense in $X\times X$. Thus, \begin{align} \label{deta} D_{\eta}\text{ is a dense } G_{\delta}\text{ subset of } X\times X. \end{align} Let \begin{equation*} MP=\{(x,y)\in X\times X: \liminf\limits_{N\rightarrow\infty}\frac{1}{N} \sum\limits_{k=0}^{N-1}d(T^{k}x, T^{k}y)=0\} \end{equation*} be the set of all mean proximal pairs of $X\times X$. Then \begin{align} \label{mpg} MP \text{ is a } G_{\delta}\text{ subset of } X\times X \end{align} since it is easy to check that \begin{equation*} MP=\bigcap\limits_{m=1}^{\infty}\bigcap\limits_{l=1}^{\infty}\bigcup \limits_{n\ge l} \{(x,y)\in X\times X: \frac{1}{n}\sum \limits_{i=0}^{n-1}d(T^{i}x,T^{i}y)<\frac{1}{m}\}.
\end{equation*} Take \begin{equation*} MLY_{\eta}=MP\cap D_{\eta}. \end{equation*} Clearly, $MLY_{\eta}\subset X\times X$ is the set of all mean Li-Yorke pairs with modulus $\eta$ in $X\times X$. By hypothesis there exist $p\in X$ a periodic point of period $t$ and $q\in X$ a transitive point such that the pair $(p,q)\in X\times X$ is mean proximal. Let \begin{equation*} X_{j}=\overline{\{T^{nt+j}q:n\geq0\}} \end{equation*} for $0\leq j\leq t-1$. Since $(p,q)\in MP$, there exists an increasing sequence of positive integers $\{N_{i}\}_{i=1}^{\infty}$ with $N_{i}\rightarrow\infty$ such that \begin{equation*} \lim\limits_{i\rightarrow\infty}\frac{1}{N_{i}}\sum \limits_{k=0}^{N_{i}-1}d(T^{k}p,T^{k}q)=0. \end{equation*} Hence for any fixed $n\in\mathbb{N}$ and $0\le j\le t-1$ we have \begin{equation*} \lim\limits_{i\rightarrow\infty}\frac{1}{N_{i}}\sum \limits_{k=0}^{N_{i}-1}d(T^{k+nt+j}p,T^{k+nt+j}q)=0, \end{equation*} which, together with the fact that $T^{t}p=p$, implies that \begin{equation*} \lim\limits_{i\rightarrow\infty}\frac{1}{N_{i}}\sum \limits_{k=0}^{N_{i}-1}d(T^{k+nt+j}q,T^{k+j}p)=0. \end{equation*} Thus, for any $n_{1},n_{2}\ge0$ and $0\le j\le t-1$, we have \begin{align*} & \lim\limits_{i\rightarrow\infty}\frac{1}{N_{i}}\sum \limits_{k=0}^{N_{i}-1}d(T^{k+n_{1}t+j}q,T^{k+n_{2}t+j}q) \\ \le & \lim\limits_{i\rightarrow\infty}\frac{1}{N_{i}}\sum \limits_{k=0}^{N_{i}-1}d(T^{k+n_{1}t+j}q,T^{k+j}p) +\lim\limits_{i\rightarrow\infty}\frac{1}{N_{i}}\sum \limits_{k=0}^{N_{i}-1}d(T^{k+n_{2}t+j}q,T^{k+j}p) \\ = & 0, \end{align*} which implies that $(T^{n_{1}t+j}q,T^{n_{2}t+j}q)\in MP$. Thus, \begin{align} \label{mpd} MP \text{ is dense in } X_{j}\times X_{j} \text{\,\;\, for each } 0\le j\le t-1. \end{align} Since it is clear that \begin{equation*} X=\bigcup_{j=0}^{t-1} X_{j}, \end{equation*} there exists some $j$ such that $X_{j}$ has non-empty interior, which is denoted by $A_{j}$.
Put \begin{equation*} A=\overline{A_{j}}. \end{equation*} Since $A_{j}\times A_{j}$ is open (in both $X\times X$ and $X_{j}\times X_{j}$), by \eqref{deta} and \eqref{mpd}, we know that $D_{\eta}$ and $MP$ are dense in $A_{j}\times A_{j}$, and hence are dense in $A\times A$. Thus, by \eqref{deta} and \eqref{mpg}, we have that \begin{align} \label{dg} MP\cap D_{\eta}\cap(A\times A) \text{ is a symmetric dense } G_{\delta}\text{ subset of } A\times A. \end{align} From the definition of the mean sensitivity of $(X,T)$, we know that $X$ is perfect. Then by noting that $A_{j}$ is open, we have that $A_{j}$ is perfect, and hence $A$ is perfect. Thus, \begin{align} \label{perfect} A \text{ is a perfect compact metric space.} \end{align} Now applying Mycielski's lemma (Lemma \ref{mycl}) with \eqref{perfect} and \eqref{dg}, and by the definition of $MLY_{\eta}$, we obtain a dense subset $K$ of $A$ such that $K$ is a union of countably many Cantor sets and satisfies \begin{equation*} K\times K\subset MLY_{\eta}\cup\Delta_{X}. \end{equation*} This implies that $(X,T)$ is mean Li-Yorke chaotic and $K\subset X$ is a mean Li-Yorke set with modulus $\eta$. This completes the proof. \end{proof} \begin{Remark} \label{rmk} According to the proof of Theorem \ref{thm1}, we know that if, in addition, either \newline $\cdot$ $(X,T)$ is \textit{totally transitive}, i.e., $T^{n}$ is transitive for each $n\geq1$, or \newline $\cdot$ the periodic point $p$ is a fixed point, \newline then $(X,T)$ is \textit{densely mean Li-Yorke chaotic}; that is, it admits a dense uncountable mean Li-Yorke subset (which is $K$ in the proof). \end{Remark} An important class of topological dynamical systems are subshifts. Let $\mathcal{A}$ be a finite set.
For $x\in\mathcal{A}^{\mathbb{Z}_{+}}$ and $i\in\mathbb{Z}_{+}$ we use $x_{i}$ to denote the $i$th coordinate of $x$ and $\sigma:X\rightarrow X$ to denote the \textit{shift map} ($(\sigma^{i}x)_{j}=x_{i+j}$ for all $x\in\mathcal{A}^{\mathbb{Z}_{+}}$ and $j\in\mathbb{Z}_{+}$). Using the product topology of discrete spaces, we have that $\mathcal{A}^{\mathbb{Z}_{+}}$ is a compact metrizable space. The topology of this space is compatible with the metric \begin{equation*} d(x,y)=\left\{ \begin{array}{cc} 1/\inf\left\{ m+1:x_{m}\neq y_{m}\right\} & \text{if }x\neq y \\ 0 & \text{if }x=y \end{array} \right. \end{equation*} A subset $X\subset\mathcal{A}^{\mathbb{Z}_{+}}$ is a \textit{subshift (or shift space)} if it is closed and $\sigma$-invariant; in this case $(X,\sigma)$ is a TDS. As was noted in \cite{LTY} and \cite{G}, mean sensitivity and (with similar arguments) mean proximality can be expressed in terms of densities. In particular, if $(X,\sigma)$ is a subshift, $x,y\in X$, and \begin{equation*} \liminf_{n\rightarrow\infty}\frac{\left\vert \left\{ i\leq n \mid x_{i}\neq y_{i}\right\} \right\vert }{n}=0, \end{equation*} then $x$ and $y$ are mean proximal. A finite string of symbols is a \textit{word}. Given a word $w$, the set $Pow(w)$ is the set that contains all the possible subwords of $w$. For example, if $w=011$, then $Pow(w)=\left\{ 0,1,01,11,011\right\}$. \begin{Example} \label{exam} There exists a mean sensitive subshift $X\subset\left\{ 0,1\right\} ^{\mathbb{Z}_{+}}$ with a transitive point $x\in X$ such that $x$ and $0^{\infty}$ are mean proximal (and thus this system contains a dense uncountable mean Li-Yorke subset). \end{Example} \begin{proof} We will construct a sequence of words inductively with \begin{equation*} w_{1}=0.
\end{equation*} For $k\geq 2$, let $A_{k}:=Pow(w_{k-1})=\left\{ v_{i}\right\} _{i=1}^{\left\vert Pow(w_{k-1})\right\vert }$ and define \begin{equation*} w_{k}:=w_{k-1}0^{n_{k}}v_{1}0^{k}v_{1}1^{k}v_{2}0^{k}v_{2}1^{k}\cdots v_{\left\vert Pow(w_{k-1})\right\vert }0^{k}v_{\left\vert Pow(w_{k-1})\right\vert }1^{k}, \end{equation*} where $n_{k}\geq\left\vert w_{k}\right\vert (1-1/k)$. Let $x=\lim_{k\rightarrow\infty}w_{k}$ and let $X$ be the shift orbit closure of $x$. The construction implies that for every finite word $v$ appearing in $x$ we have $v0^{\infty}\in X$ and $v1^{\infty}\in X$. This means that $(X,\sigma)$ is mean sensitive. Using that $n_{k}\geq\left\vert w_{k}\right\vert (1-1/k)$ and that every $w_{k}$ contains $0^{n_{k}}$, we obtain that the density of $0$s in $x$ is one. This implies that $x$ and $0^{\infty}$ are mean proximal. \end{proof} \section{On Mean Proximal Sets} In this section we characterize mean proximal systems. We remark that, as an application of (the corollary of) Theorem \ref{thm3}, the whole space $X$ cannot be a mean Li-Yorke set for any nontrivial TDS $(X,T)$. However, if we only want to show that for any $\eta>0$ the space $X$ cannot be a mean Li-Yorke set with modulus $\eta$, the argument is much easier. Indeed, suppose it were; then, on the one hand, for every pair $(x,y)\in X\times X$ of distinct points we would have \begin{equation*} \limsup_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}d(T^{i}x,T^{i}y)\geq \eta; \end{equation*} on the other hand, according to \cite[Theorem 2.1]{K}, we can find a pair $(x,y)\in X\times X$ of distinct points such that $d(T^{i}x,T^{i}y)<\eta/2$ for all $i\in\mathbb{N}$. This implies that \begin{equation*} \limsup_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}d(T^{i}x,T^{i}y)\leq \eta/2<\eta, \end{equation*} a contradiction.
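Returning briefly to the subshift example of the previous section: the inductive word construction there is easy to experiment with numerically. The sketch below (plain Python, not part of the original argument) takes the smallest admissible choice $n_{k}=(k-1)r$, where $r$ is the combined length of $w_{k-1}$ and the concatenated $v_{i}0^{k}v_{i}1^{k}$ blocks; this makes $|w_{k}|=kr$, so $n_{k}=|w_{k}|(1-1/k)$ holds exactly, and the density of $0$s in the prefixes $w_{k}$ approaches one.

```python
def subwords(w):
    """All distinct nonempty contiguous subwords of w (the set Pow(w))."""
    return {w[i:j] for i in range(len(w)) for j in range(i + 1, len(w) + 1)}

def next_word(w, k):
    """Build w_k from w_{k-1}: w_{k-1} 0^{n_k} v_1 0^k v_1 1^k v_2 0^k v_2 1^k ..."""
    tail = "".join(v + "0" * k + v + "1" * k for v in sorted(subwords(w)))
    r = len(w) + len(tail)
    n_k = (k - 1) * r          # smallest n_k with n_k >= |w_k| (1 - 1/k)
    return w + "0" * n_k + tail

w = "0"                        # w_1
for k in range(2, 4):          # build w_2 and w_3
    w = next_word(w, k)
print(len(w), w.count("0") / len(w))   # the density of 0s is already close to 1
```

The choice of $n_k$ here is one concrete realization of the inequality in the construction; any larger value works as well.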
Before giving the proof of Theorem \ref{thm3}, we provide another result, which concerns only the condition of mean proximality. We remind the reader that $M(X,T)$ denotes the set of all $T$-invariant Borel probability measures on $X$. \begin{Theorem} \label{similar} Let $(X,T)$ be a TDS and let $A$ be a Borel subset of $X$ such that every pair $(x,y)\in A\times A$ is mean proximal. Then for every non-atomic measure $\mu\in M(X,T)$, we have $\mu(A)=0$. \end{Theorem} \begin{proof} Suppose $\mu\in M(X,T)$ is non-atomic and $\mu(A)>0$. Let \begin{equation*} \mu\times\mu=\int_{\Omega}\lambda_{\omega}\, d\xi(\omega) \end{equation*} be the ergodic decomposition of $\mu\times\mu$ with respect to $T\times T$. Since $\mu$ is non-atomic, which implies that $\mu\times\mu(\Delta_{X})=0$, we have $\mu\times\mu(A\times A\setminus\Delta_{X})>0$. Hence there exists an ergodic measure $\lambda_{\omega}\in M(X\times X,T\times T)$ with $\lambda_{\omega}(A\times A\setminus\Delta_{X})>0$. By Birkhoff's Pointwise Ergodic Theorem, we can find a generic point $(x_{0},y_{0})\in A\times A\setminus\Delta_{X}$ of $\lambda_{\omega}$ for $T\times T$, i.e., a pair $(x_{0},y_{0})\in A\times A\setminus\Delta_{X}$ such that \begin{equation*} \frac{1}{N}\sum\limits_{i=0}^{N-1}\delta_{(T\times T)^{i}(x_{0},y_{0})}\rightarrow\lambda_{\omega} \end{equation*} in the weak$^{\ast}$-topology as $N\rightarrow\infty$. Here the probability measure $\delta_{(T\times T)^{i}(x_{0},y_{0})}$ denotes the point mass at $(T\times T)^{i}(x_{0},y_{0})=(T^{i}x_{0},T^{i}y_{0})$ in $X\times X$. On the one hand, since the pair $(x_{0},y_{0})\in A\times A\setminus\Delta_{X}$ is mean proximal, we have \begin{equation*} \liminf_{N\rightarrow\infty}\frac{1}{N} \sum_{i=0}^{N-1}d(T^{i}x_{0},T^{i}y_{0})=0.
\end{equation*} On the other hand, since $\lambda_{\omega}(X\times X\setminus\Delta_{X})\geq\lambda_{\omega}(A\times A\setminus\Delta_{X})>0$, there exists $\epsilon>0$ satisfying $\lambda_{\omega}(D_{\epsilon})>0$, where $D_{\epsilon}=\{(x,y)\in X\times X:d(x,y)>\epsilon\}$. Noting that $D_{\epsilon}$ is open, it then follows that \begin{equation*} \liminf_{N\rightarrow\infty}\frac{1}{N} \sum_{i=0}^{N-1}d(T^{i}x_{0},T^{i}y_{0})\geq\epsilon\cdot\liminf_{N\rightarrow\infty}\frac{1}{N}\sum_{i=0}^{N-1} \delta_{(T^{i}x_{0},T^{i}y_{0})}(D_{\epsilon})\geq\epsilon \lambda_{\omega}(D_{\epsilon})>0, \end{equation*} a contradiction. This proves the result. \end{proof} \begin{proof}[\textbf{Proof of Theorem \protect\ref{thm3}}] (2)$\Longrightarrow$(1). Obvious. (3)$\Longrightarrow$(2). Suppose that (2) does not hold. Then we can find a pair $(x_{1},y_{1})\in X\times X$ which is not mean asymptotic. This implies that the orbit of $(x_1,y_1)$ (in $X\times X$) spends, outside some neighborhood $U$ of the diagonal $\Delta_X$, a set of times of positive lower density. Thus, the pair $(x_1,y_1)$ semi-generates (i.e., generates along a subsequence of averages) a measure $\nu\in M(X\times X,T\times T)$ with $\nu(X\times X\setminus U)>0$. This means that the measure $\nu$ is not supported on $\Delta_X$. (1)$\Longrightarrow$(3). Suppose that (3) does not hold. Then there is some $\lambda\in M(X\times X, T\times T)$ satisfying $\lambda(\Delta_{X})<1$, which implies \begin{equation*} \lambda(X\times X\setminus\Delta_{X})>0. \end{equation*} From here on, we argue as in the proof of Theorem \ref{similar}, as follows.
Since $\lambda(X\times X\setminus\Delta_{X})>0$ and $\lambda\in M(X\times X,T\times T)$, by the Ergodic Decomposition Theorem there exists an ergodic measure $\lambda_{\omega}\in M(X\times X,T\times T)$ with \begin{equation*} \lambda_{\omega}(X\times X\setminus\Delta_{X})>0. \end{equation*} It then follows from the Birkhoff Pointwise Ergodic Theorem that there exists a pair $(x_{2},y_{2})\in X\times X\setminus\Delta_{X}$ such that \begin{equation*} \frac{1}{N}\sum\limits_{k=0}^{N-1}\delta_{(T\times T)^{k}(x_{2},y_{2})}\rightarrow\lambda_{\omega} \end{equation*} as $N\rightarrow\infty$ in the weak$^{\ast}$-topology. Since $\lambda_{\omega}(X\times X\setminus\Delta_{X})>0$, we can take some $\epsilon>0$ and put \begin{equation*} D_{\epsilon}=\{(x,y)\in X\times X: d(x,y)>\epsilon\} \end{equation*} such that $\lambda_{\omega}(D_{\epsilon})>0$. Then, noting that $D_{\epsilon}$ is open, we have \begin{equation*} \liminf_{N\rightarrow\infty}\frac{1}{N} \sum_{k=0}^{N-1}d(T^{k}x_{2},T^{k}y_{2})\geq\epsilon\cdot\liminf_{N\rightarrow\infty}\frac{1}{N}\sum_{k=0}^{N-1} \delta_{(T^{k}x_{2},T^{k}y_{2})}(D_{\epsilon})\geq\epsilon \lambda_{\omega}(D_{\epsilon})>0, \end{equation*} which contradicts the assumption that every pair $(x,y)\in X\times X$ is mean proximal. (3)$\Longrightarrow$(4). Suppose that there exists $\mu\in M(X,T)$ whose support contains at least two distinct points $x_{0}$ and $y_{0}$. Then the support of $\lambda=\mu\times\mu$ contains the point $(x_{0},y_{0})$, so $\lambda(\Delta_{X})<1$. This implies that if (3) holds, then every invariant measure is supported on a fixed point.
Moreover, we already know that (3) implies that $(X,T)$ is proximal and hence contains only one fixed point \cite{AK}; thus $(X,T)$ is uniquely ergodic. (4)$\Longrightarrow$(3). Let $x_{0}$ be the unique fixed point, $\delta_{x_{0}}$ the unique invariant measure of $(X,T)$, and $\lambda\in M(X\times X,T\times T)$. Then $\lambda(\{x_{0}\}\times X)=\delta_{x_{0}}(\{x_{0}\})=1$ and, analogously, $\lambda(X\times \{x_{0}\})=1$; thus $\lambda(\{(x_{0},x_{0})\})=1$. (4)$\Longrightarrow$(5). In the previous step we showed that $\lambda(\{(x_{0},x_{0})\})=1$. To conclude, simply apply (4)$\Longrightarrow$(1) to the system $(X\times X,T\times T)$. (5)$\Longrightarrow$(3). Using (1)$\Longrightarrow$(4) for the system $(X\times X,T\times T)$, we conclude that it has a unique fixed point and a unique invariant (delta) measure. The fixed point must lie on the diagonal, so for the unique invariant measure $\lambda\in M(X\times X,T\times T)$ we have $\lambda(\Delta_{X})=1$. \end{proof} \begin{thebibliography}{99} \bibitem{A} E. Akin, \textit{Lectures on Cantor and Mycielski sets for dynamical systems}, Chapel Hill Ergodic Theory Workshops, 21--79, Contemp. Math., 356, Amer. Math. Soc., Providence, RI, 2004. \bibitem{AK} E. Akin and S. Kolyada, \textit{Li-Yorke sensitivity}, Nonlinearity, \textbf{16}(2003), no. 4, 1421--1433. \bibitem{BGKM} F. Blanchard, E. Glasner, S. Kolyada and A. Maass, \textit{On Li-Yorke pairs}, J. Reine Angew. Math., \textbf{547}(2002), 51--68. \bibitem{BHS} F. Blanchard, W. Huang and L. Snoha, \textit{Topological size of scrambled sets}, Colloq. Math., \textbf{110}(2008), no. 2, 293--361. \bibitem{D} T. Downarowicz, \textit{Positive topological entropy implies chaos DC2}, Proc. Amer. Math. Soc., \textbf{142}(2014), 137--149. \bibitem{DG} T. Downarowicz and E. Glasner, \textit{Isomorphic extensions and applications}, arXiv:1502.06999, 2015. \bibitem{DL} T. Downarowicz and Y.
Lacroix, \textit{Forward mean proximal pairs and zero entropy}, Israel J. Math., \textbf{191}(2012), no. 2, 945--957. \bibitem{DL2} T. Downarowicz and Y. Lacroix, \textit{Measure-theoretic chaos}, Ergod. Th. \& Dynam. Sys., \textbf{34}(2012), 110--131. \bibitem{DS} T. Downarowicz and M. Stefankova, \textit{Embedding Toeplitz systems in triangular maps: The last but one problem of the Sharkovsky classification program}, Chaos, Solitons \& Fractals, \textbf{45}(2012), no. 12, 1566--1572. \bibitem{FKKL} F. Falniowski, M. Kulczycki, D. Kwietniak and J. Li, \textit{Two results on entropy, chaos, and independence in symbolic dynamics}, arXiv:1502.03981, 2015. \bibitem{FPS} G. L. Forti, L. Paganoni and J. Smital, \textit{Dynamics of homeomorphisms on minimal sets generated by triangular mappings}, Bull. Austral. Math. Soc., \textbf{59}(1999), no. 1, 1--20. \bibitem{G} F. Garc\'{\i}a-Ramos, \textit{Weak forms of topological and measure theoretical equicontinuity: relationships with discrete spectrum and sequence entropy}, Ergod. Th. \& Dynam. Sys. (to appear), 2015. \bibitem{GL} F. Garc\'{\i}a-Ramos, J. Li and R. Zhang, \textit{When is a system mean sensitive?}, in progress. \bibitem{HLY} W. Huang, J. Li and X. Ye, \textit{Stable sets and mean Li-Yorke chaos in positive entropy systems}, J. Funct. Anal., \textbf{266}(2014), no. 6, 3377--3394. \bibitem{HY} W. Huang and X. Ye, \textit{Devaney's chaos or 2-scattering implies Li-Yorke's chaos}, Topol. Appl., \textbf{117}(2002), no. 3, 259--272. \bibitem{I} A. Iwanik, \textit{Independence and scrambled sets for chaotic mappings}, The mathematical heritage of C. F. Gauss, World Sci. Publ., River Edge, NJ, 1991, pp. 372--378. \bibitem{K} J. L. King, \textit{A map with topological minimal self-joinings in the sense of del Junco}, Ergod. Th. \& Dynam. Sys., \textbf{10}(1990), no. 4, 745--761. \bibitem{LT} J. Li and S. Tu, \textit{On proximality with Banach density one}, J. Math. Anal.
Appl., \textbf{416}(2014), no. 1, 36--51. \bibitem{LTY} J. Li, S. Tu and X. Ye, \textit{Mean equicontinuity and mean sensitivity}, Ergod. Th. \& Dynam. Sys., \textbf{35}(2015), no. 8, 2587--2612. \bibitem{LY} J. Li and X. Ye, \textit{Recent development of chaos theory in topological dynamics}, Acta Math. Sin. (Engl. Ser.), \textbf{32}(2016), no. 1, 83--114. \bibitem{O} P. Oprocha, \textit{Relations between distributional and Devaney chaos}, Chaos, \textbf{16}(2006), 033112. \bibitem{OW} D. Ornstein and B. Weiss, \textit{Mean distality and tightness}, Proc. Steklov Inst. Math., \textbf{244}(2004), no. 1, 295--302. \bibitem{SS} B. Schweizer and J. Smital, \textit{Measures of chaos and a spectral decomposition of dynamical systems on the interval}, Trans. Amer. Math. Soc., \textbf{344}(1994), no. 2. \end{thebibliography} \end{document}
\begin{document} \author{Steven B. Damelin} \address{Department of Mathematics, The University of Michigan, 2074 East Hall, 530 Church Street, Ann Arbor, MI, 48109, USA} \email{[email protected]} \author{Kai Diethelm} \address{Fakultät Angewandte Natur- und Geisteswissenschaften, Hochschule Würzburg-Schweinfurt, Ignaz-Schön-Str.\ 11, 97421 Schweinfurt, Germany} \email{[email protected]} \title[Weighted Singular Cauchy Integrals with Exponential Weights on $\mathbb R$]{An Analytic and Numerical Analysis of Weighted Singular Cauchy Integrals with Exponential Weights on $\mathbb R$} \date{\today} \begin{abstract} This paper concerns an analytic and numerical analysis of a class of weighted singular Cauchy integrals with exponential weights $w:=\exp(-Q)$ with finite moments and with smooth external fields $Q:\mathbb R\to [0,\infty)$, with varying smooth convex rate of increase for large argument. Our analysis relies in part on weighted polynomial interpolation at the zeros of orthonormal polynomials with respect to $w^2$. We also study bounds for the first derivatives of a class of functions of the second kind for $w^2$. \end{abstract} \maketitle {\bf Keywords}: Cauchy principal value integral; Exponential weight; Erd\H{o}s weight, Freud weight, Numerical approximation, Orthogonal polynomial, Quadrature, Singular integral, Weighted approximation. \setcounter{equation}{0} \section{Introduction} Let $Q:\mathbb R\to [0,\infty)$ belong to a class of continuously differentiable functions with varying smooth convex rate of increase for large argument. For a class of exponential weight functions $w := \exp(-Q)$ with finite moments, we investigate the weighted Cauchy principal value integral \begin{equation*} H_{w^2}[f;x] := -\!\!\!\!\!\!\!\!\;\int_{\mathbb R} w^2(t) \frac{f(t)}{t-x} dt = \! \lim_{\varepsilon \to 0+} \! \left( \int_{-\infty}^{x-\varepsilon} \!w^2(t) \frac{f(t)}{t-x} dt + \! \int_{x+\varepsilon}^{\infty} \! 
w^2(t) \frac{f(t)}{t-x} dt \right) \end{equation*} with respect to its analytical properties, and we develop and analyze numerical methods for the approximate calculation of such integrals. Here, we work on the real line, i.e.\ $x \in \mathbb R$ is arbitrary but fixed and $f:\mathbb R\to \mathbb R$ belongs to a class of functions for which, in particular, $H_{w^2}[f;x]$ is finite. When we say that $w$ has finite moments, we mean that $\int_{\mathbb R}x^nw^2(x)\,dx<\infty$ for $n=0,1,2,\ldots$. Notice that the numerator in the integrand of the operator $H_{w^2}[f;x]$ is $f w^2$. One reason for our investigation is that integral equations with weighted Cauchy principal value integral kernels have been shown to be an important tool for the modelling of many physical situations. See for example \cite{CDLM1, DD1, DD2, DD3, DD4, D, D1, K, M} and the references cited therein. In the case of ordinary integrals (without strong singularities) on unbounded intervals, various interesting results can be found for example in \cite{LM,SS}. In a series of papers \cite{DD1,DD2,DD3,DD4}, the authors studied this problem and some of its applications for a class of weights $w:=\exp(-Q)$ with finite moments and with even external fields $Q:\mathbb R\to [0,\infty)$ belonging to a class of continuously differentiable functions with smooth polynomial rate of increase for large argument. An example of such an external field $Q$ is $|x|^{\alpha}$ with $\alpha>1$. In this paper, we extend the results of \cite{DD1} to a class of exponential weight functions $w=\exp(-Q)$ with finite moments and with external fields $Q:\mathbb R\to [0,\infty)$ that are continuously differentiable with a certain convex increase for large argument.
In particular, the external fields $Q$ studied here need not be even (a considerably weaker condition on $Q$) and may exhibit widely varying convex rates of increase for large argument, for example smooth polynomial increase, or increase faster than any smooth polynomial. Typical examples \cite{LL,Lub} of admissible external fields $Q$ are, for some $\beta \ge \alpha > 1$ and $\ell,k\geq 0$, \begin{equation*} Q_{1,\alpha,\beta}(x):= \begin{cases} x^{\alpha}, & x \in [0,\infty) \\ |x|^{\beta}, & x \in (-\infty,0) \end{cases} \end{equation*} or \begin{equation*} Q_{2,\alpha,\beta,\ell,k}(x):= \begin{cases} \exp_{\ell}(x^{\alpha})-\exp_{\ell}(0), & x \in [0,\infty) \\ \exp_k(|x|^{\beta})-\exp_k(0), & x \in (-\infty,0). \end{cases} \end{equation*} Here, for $x\in \mathbb R$, $\exp_{0}(x):=x$ and, for $j\geq 1$, $\exp_j(x):=\underbrace{\exp(\exp(\cdots\exp(x)))}_{j \,\,\textrm{times}}$ denotes the $j$th iterated exponential; in particular, $\exp_j(x)=\exp(\exp_{j-1}(x))$.\footnote{$\exp(-Q_{1,\alpha,\beta})$ and $\exp(-Q_{2,\alpha,\beta,\ell,k})$ are historically called Freud and Erd\H{o}s weights, respectively. See \cite{LL,Lub} and the many references cited therein.} \subsection{A note on notation and constants} Throughout, $|\cdot|$ is the Euclidean metric on $\mathbb R$. $\mathcal{P}_n$ will denote the class of polynomials of degree at most $n\geq 1$. $\| \cdot \|_{L_{p}}$, $0< p\leq \infty$, will denote the usual $L_p$ function space norm; we will sometimes write the shorthand form $\| \cdot \|_p$ when the context is clear. For $\gamma > 0$, we write $\mathrm{Lip}(\gamma)$ for the space of functions $f:\mathbb R\to \mathbb R$ which are Lipschitz of order $\gamma$. $C, C_1, C_2,\ldots$ will denote positive constants independent of $n$ and $x$ which may take on different values at different occurrences. Function and operator notation (for example $f,H$) may likewise denote different or the same function/operator at different occurrences; the context will make this clear.
When we write, for a function, constant or operator, $f(\cdot)$, $C(\cdot)$ or $H(\cdot)$, we mean that the function/constant/operator depends on the indicated quantity; dependence on several quantities follows the same convention. Finally, for nonzero real sequences $\alpha_n$ and $\beta_n$, we write $\alpha_n=O(\beta_n)$ if there exists a constant $C>0$ such that, uniformly in all other parameters that $\alpha_n$ and $\beta_n$ may depend on, $\left|\frac{\alpha_n}{\beta_n}\right|\leq C$; $\alpha_n=o(\beta_n)$ if $\frac{\alpha_n}{\beta_n}\to 0$ as $n\to \infty$; and $\alpha_n\sim\beta_n$ if $\alpha_n=O(\beta_n)$ and $\beta_n=O(\alpha_n)$. Similar notation holds for sequences of functions or operators. \setcounter{equation}{0} \section{The class of weights, $a_t$, $a_{-t}$ and some further important quantities} In this section, we introduce our class of weights and some important quantities needed in what follows, including the critical functions $a_t$ and $a_{-t}$ which we use throughout. \subsection{The class of admissible weights} Motivated by the external fields $Q_{1,\alpha,\beta}$ and $Q_{2,\alpha,\beta,\ell,k}$ defined above, we define our class of weights \cite{LL, Lub}. To formulate our definition, we shall say that a function $f : \mathbb R\rightarrow [0,\infty)$ is quasi-increasing on $[0,\infty)$ if there exists $C > 0$ such that $f(x) \le C f(y)$ for all $x, y$ with $0 < x\leq y<\infty$. The notion of quasi-decreasing is defined analogously. Our class of weights is then the following: \begin{definition}\label{Definition 1.1} Let $Q: \mathbb R \rightarrow [0,\infty)$ satisfy the following properties: \begin{enumerate} \item[(a)] $Q'(x)$ exists and is continuous in $\mathbb R$, with $Q(0)=0$. Moreover, $Q''$ exists in $\mathbb R$. \item[(b)] $Q'(x)$ is non-decreasing in $\mathbb R$. \item[(c)] \[ \lim_{x\rightarrow \infty} Q(x) = \lim_{x\to -\infty} Q(x) = \infty.
\] \item[(d)] The function \begin{equation*} T(x) := \frac{xQ'(x)}{Q(x)}, \quad x\neq 0, \end{equation*} is quasi-increasing in $(0,\infty)$, is quasi-decreasing in $(-\infty,0)$ and satisfies \begin{equation*} T(x) \ge C > 1, \quad x \in \mathbb R\setminus \{ 0 \}. \end{equation*} \item[(e)] For all $x \in \mathbb R$, \[ Q''(x) Q(x) \le C_1 (Q'(x))^2, \] and there exist a compact subinterval $I$ of $\mathbb R$ and $C_2>0$ such that, for a.e.\ $x \in I\setminus \{ 0 \}$, \[ Q''(x) Q(x) \geq C_2 (Q'(x))^2. \] \item[(f)] There exists $\varepsilon_0\in(0,1)$ such that for $y\in \mathbb R\backslash\{0\}$, \begin{equation*} T(y) \sim T \left( y \left| 1 - \frac{\varepsilon_0}{T(y)} \right| \right). \end{equation*} \item[(g)] There exist $C$, $\varepsilon_1 > 0$ such that \begin{equation*} \int_{x-\frac{\varepsilon_1 |x|}{T(x)}}^x \frac{|Q'(s)-Q'(x)|}{|s-x|^{3/2}} ds \le C |Q'(x)| \sqrt{\frac{T(x)}{|x|}}, \quad x \in \mathbb R \setminus \{ 0 \}. \end{equation*} \end{enumerate} Then we say that $w=\exp(-Q)$ is an admissible weight with external field $Q$. \end{definition} Let us illustrate Definition \ref{Definition 1.1} using the examples $Q_{1,\alpha,\beta}$ and $Q_{2,\alpha,\beta,\ell,k}$ of Section 1. \subsubsection{The Freud-type weight $Q_{1,\alpha,\beta}$} Here, a straightforward calculation shows that $T\sim 1$ in $\mathbb R$, so (d) holds. Notice that $T=O(1)$ forces $Q_{1,\alpha,\beta}$ to be of smooth polynomial growth for large argument. Indeed, it is straightforward to show that $T(x)=\alpha$ for $x\in (0,\infty)$ and $T(x)=\beta$ for $x\in (-\infty,0)$. The conditions (a,b,c,e,f,g) are straightforward to check. \subsubsection{The Erd\H os-type weight $Q_{2,\alpha,\beta,\ell,k}$} Here, a straightforward calculation shows that $T$ grows without bound for large argument, so (d) holds. Notice that $T$ growing without bound for large argument forces $Q_{2,\alpha,\beta,\ell,k}$ to be of faster than smooth polynomial growth for large argument.
Indeed, it is straightforward to check that if $\ell\geq 1$ and $x>0$, \[ T(x)=\alpha x^{\alpha}\left[\prod_{j=1}^{\ell-1}\exp_j(x^{\alpha})\right]\frac{\exp_{\ell}(x^{\alpha})}{\exp_{\ell}(x^{\alpha})-\exp_{\ell}(0)}. \] In particular, $T(x)\to \alpha$ as $x\to 0+$, while \[ T(x)=\alpha x^{\alpha}\left[\prod_{j=1}^{\ell-1}\exp_j(x^{\alpha})\right](1+o(1)), \quad x\to \infty, \] with a similar expression holding for $x<0$. The conditions (a,b,c,e,f,g) are straightforward to check. \subsection{The quantities $a_t$ and $a_{-t}$} \begin{definition}[cf.\ \cite{Deift,LL,Lub,Mh}] Given an admissible weight $w$ and some $t > 0$, we define the quantities $a_t > 0$ and $a_{-t} < 0$ as the unique solutions to the equations \begin{subequations} \label{1.1} \begin{eqnarray} t & = & \frac{1}{\pi} \int_{a_{-t}}^{a_t} \frac{x Q'(x)}{\sqrt{(x-a_{-t})(a_t-x)}} dx, \\ 0 & = & \frac{1}{\pi} \int_{a_{-t}}^{a_t} \frac{Q'(x)}{\sqrt{(x-a_{-t})(a_t-x)}} dx. \end{eqnarray} \end{subequations} \end{definition} \begin{remark} In the special case where $Q$ is even, the uniqueness of $a_{\pm t}$ forces $a_{-t} = -a_t$ for all $t > 0$. In this case, $a_t$ is the unique positive solution of the equation \[ t = \frac 2 \pi \int_0^1 \frac{a_t u Q'(a_t u)}{\sqrt{1-u^2}} du. \] \end{remark} Following \cite{LL}, we further use the notations \[ \Delta_t = [a_{-t}, a_t] \] and \[ \beta_t := \frac 1 2 (a_t + a_{-t}); \quad \delta_t := \frac 1 2 (a_t + |a_{-t}|), \] and define \begin{equation*} \varphi_n(x) := \begin{cases} \displaystyle \frac{|x-a_{-2n}| \cdot |a_{2n}-x|}{\sqrt{(|x-a_{-n}| + |a_{-n}| \eta_{-n})(|x-a_{n}| + |a_{n}| \eta_{n})}}, & x \in \Delta_n,\\ \varphi_n(a_n), & x > a_n,\\ \varphi_n(a_{-n}), & x < a_{-n}, \end{cases} \end{equation*} where \begin{equation}\label{eta} \eta_{\pm n} := \left( n T(a_{\pm n}) \sqrt{\frac{|a_{\pm n}|}{\delta_n}} \right)^{-2/3}.
\end{equation} \begin{example} \begin{itemize} \item[(a)] Consider the weight $Q_{1,\alpha,\beta}$: Here we have for $n \to \infty$, \[ T(a_{\pm n})\sim 1 \] and \[ a_n \sim n^{1/\alpha}, \quad |a_{-n}| \sim n^{(2\alpha-1)/(\alpha(2\beta-1))}. \] \item[(b)] Consider the weight $Q_{2,\alpha,\beta,\ell,k}$: Here we have for $n \to \infty$, \[ T(a_n)\sim \prod_{j=1}^{\ell}\log_jn, \quad T(a_{-n})\sim \prod_{j=1}^{k}\log_jn \] and \[ a_n = (\log_{\ell}n)^{1/\alpha} (1+o(1)); \quad a_{-n} = -(\log_{k}n)^{1/\beta} (1 + o(1)), \] where we recall that \[ \exp_{j}(x) = \underbrace{\exp(\exp(\exp\ldots\exp(x)))}_{j \,\,\textrm{times}} \] is the $j$th iterated exponential and, for $x > 0$, $\log_0(x)=x$ and, for $x>\exp_{j-1}(0)$ and $j\geq 1$, \[ \log_j(x) = \underbrace{\log(\log(\log\ldots\log(x)))}_{j \,\,\textrm{times}} \] is the $j$th iterated logarithm (and not the logarithm with respect to base $j$). \end{itemize} \end{example} The precise interpretations of the functions $a_{t}$ and $a_{-t}$, $t>0$, arise from logarithmic potential theory, where they are in fact scaled endpoints of the support of a minimizer for a certain weighted variational problem in the complex plane (hence the term external field for $Q$). When $Q$ is convex, the support of the minimizer is one interval, and when $Q$ is in addition even, $a_{-t}=-a_t$ holds for every $t>0$, cf.\ \cite{Deift,LL,Lub,Mh}. We shall use the important identity (and its $L_p$ cousins for different $0<p<\infty$) \[ \|P w\|_{L_{\infty}(\mathbb R)} = \|P w\|_{L_{\infty}[a_{-n},a_n]}, \] valid for every polynomial $P\in \mathcal{P}_n$, $n \geq 1$. In the case when $w$ is even, $a_n$, $n\geq 1$, is asymptotically the smallest number for which this identity holds \cite{SD,LL,Lub,Mh}.
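The defining equations for $a_{\pm t}$ can be checked numerically in the even case of the remark above. For $Q(x)=x^{2}$ one has $Q'(x)=2x$, and the equation $t=\frac{2}{\pi}\int_{0}^{1}a_{t}uQ'(a_{t}u)(1-u^{2})^{-1/2}\,du$ reduces to $t=a_{t}^{2}$, i.e.\ $a_{t}=\sqrt{t}$. The following sketch (plain Python; the substitution $u=\sin\theta$ removes the endpoint singularity and bisection exploits the monotonicity of the right-hand side in $a_{t}$; an illustration only, not part of the analysis) recovers this value.

```python
import math

def rhs(a, dQ, m=4001):
    # (2/pi) * integral_0^1 a u Q'(a u) / sqrt(1 - u^2) du, via u = sin(theta):
    # the integrand becomes a sin(theta) Q'(a sin(theta)) on [0, pi/2].
    thetas = [0.5 * math.pi * i / (m - 1) for i in range(m)]
    vals = [a * math.sin(t) * dQ(a * math.sin(t)) for t in thetas]
    h = 0.5 * math.pi / (m - 1)
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule
    return (2.0 / math.pi) * integral

def a_t(t, dQ, hi=1e6, tol=1e-10):
    # rhs is increasing in a (since Q' is non-decreasing), so bisection works.
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rhs(mid, dQ) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

dQ = lambda x: 2.0 * x          # Q(x) = x^2, so Q'(x) = 2x
print(a_t(4.0, dQ))             # should be close to sqrt(4) = 2
```

The same solver applies to any even admissible field once $Q'$ is supplied; only the closed-form answer $a_t=\sqrt t$ is special to $Q(x)=x^2$.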
This identity is useful in that it gives an intuitive idea of the growth of $a_{n}$ and $a_{-n}$ for large $n$ for different admissible weights $w$, for example for the admissible weights $w_{1,\alpha,\beta}:=\exp(-Q_{1,\alpha,\beta})$ and $w_{2,\alpha,\beta,\ell,k}:=\exp(-Q_{2,\alpha,\beta,\ell,k})$. \setcounter{equation}{0} \section{Main Result and Important Quantities} In this section, we state one of our main results, Theorem \ref{Theorem 2.3}, and introduce and define various important quantities. In the remaining sections, we provide the proof of Theorem \ref{Theorem 2.3} and state and prove several more main results which are consequences of our machinery. Let $w$ be admissible. Then we can construct orthonormal polynomials $p_n(x)=p_n(w^2,x)$ of degree $n = 0, 1, 2, \ldots$ for $w^2(x)$ satisfying \begin{equation} \label{eq:onp} \int_{\mathbb R} p_n(x) p_m(x) w^2(x) dx = \delta_{mn}. \end{equation} Here, for $m,n\geq 0$, $\delta_{mn}$ takes the value $1$ when $m=n$ and $0$ otherwise. The zeros $x_{j,n}$, $j = 1, 2, \ldots, n$, of $p_n$ will serve as the nodes of certain interpolatory quadrature formulas $Q_n$, defined by \begin{equation}\label{w_jn} Q_n[f;x] = \sum_{j=1}^n \zeta_{jn}(x) f(x_{j,n}) \end{equation} where the weights $\zeta_{jn}(\cdot)$ are chosen such that the quadrature error $R_n$ satisfies \[ R_n[f;x] := H_{w^2}[f;x] - Q_n[f;x] = 0 \] for every $x\in \mathbb R$ and every $f \in \mathcal{P}_{n-1}$. In other words, \begin{equation} \label{eq:qf-hl} Q_n[f;x] = H_{w^2}[L_n[f];x] \end{equation} where $L_n[f]$ is the Lagrange interpolation polynomial for the function $f$ with nodes $x_{j,n}$. Let \[ E_n\left[f\right]_{w,\infty} := \inf_{P\in \mathcal{P}_n} \left \| \left( f - P \right) w \right \|_{L_{\infty}(\mathbb R)} \] be the error of best weighted polynomial approximation of a given $f$.
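The exactness requirement $R_n[f;x]=0$ for $f\in\mathcal{P}_{n-1}$ rests entirely on the fact that $L_n[f]=f$ for such $f$, whatever the $n$ distinct nodes are. A short stand-alone check (plain Python; generic distinct nodes stand in for the zeros $x_{j,n}$, which would require computing the orthonormal polynomials themselves):

```python
def lagrange_eval(nodes, values, x):
    """Evaluate the Lagrange interpolation polynomial L_n[f] at x."""
    total = 0.0
    for j, (xj, fj) in enumerate(zip(nodes, values)):
        weight = 1.0
        for m, xm in enumerate(nodes):
            if m != j:
                weight *= (x - xm) / (xj - xm)
        total += fj * weight
    return total

# f in P_{n-1} with n = 4 nodes: L_n[f] reproduces f exactly (up to rounding),
# which is exactly why the interpolatory rule Q_n integrates P_{n-1} exactly.
f = lambda t: 2.0 * t**3 - t + 0.5
nodes = [-1.5, -0.4, 0.3, 1.2]          # any 4 distinct points
values = [f(t) for t in nodes]
for x in (-2.0, 0.0, 0.7, 3.0):
    assert abs(lagrange_eval(nodes, values, x) - f(x)) < 1e-8
```

Applying the (linear) operator $H_{w^2}$ to $L_n[f]$ then yields \eqref{eq:qf-hl}; the analytical work of the paper lies in estimating this composition, not in the interpolation step itself.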
We shall prove the following error bound for this numerical approximation scheme for the singular integral: \begin{theorem}\label{Theorem 2.3} Let $w$ be admissible and let $\xi \in (0,1)$ be fixed. Consider a sequence $(\mu_n)_{n=1}^{\infty}$ which converges to $0$ as $n \to \infty$ and satisfies $0 < \mu_n \le \min \{a_n \eta_n, |a_{-n}|\eta_{-n} \}$ for each $n\geq 1$. Let $x \in \mathbb R$ and let $f\in \mathrm{Lip}(\gamma, w)$ for some $\gamma >0$. Then uniformly for $n$ large enough, \begin{equation*} |R_n[f;x]| \le C \left[ (1+ n^{-2} w^{-1}(x)) \log n + \gamma_n(x) \right] E_{n-1}[f]_{w,\infty}, \end{equation*} where \begin{eqnarray*} \gamma_n(x) & := & \mu_n^{-1} \delta_n^{5/4} \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{1/4} \log n\\ && {} + \mu_n \times \begin{cases} A_n, & a_n ( 1 + C \eta_n) \le x \le 2a_n, \\ B_n, & a_{\xi n} \le x \le a_n ( 1 + C \eta_n), \\ C_n, & a_{-\xi n} \le x \le a_{\xi n}, \\ D_n, & a_{-n} ( 1 + C \eta_{-n}) \le x \le a_{-\xi n}, \\ E_n, & 2a_{-n} \le x \le a_{-n} ( 1 + C \eta_{-n} ), \\ 0, & \textrm{otherwise,} \end{cases} \end{eqnarray*} and \begin{eqnarray*} A_n & := & n^{5/6} \delta_n^{1/3} a_n^{-5/6} T^{5/6}(a_n) \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right \}^{1/2},\\ B_n & := & n \delta_n^{1/4} a_n^{-3/4} T^{3/4}(a_n) \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right \}^{1/2},\\ C_n & := & n^{7/6} \delta_n^{-1/3} \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right \}^{2/3} \log n,\\ D_n & := & n \delta_n^{1/4} |a_{-n}|^{-3/4} T^{3/4}(a_{-n}) \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right \}^{1/2},\\ E_n & := & n^{5/6} \delta_n^{1/3} |a_{-n}|^{-5/6} T^{5/6}(a_{-n}) \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right \}^{1/2}.
\end{eqnarray*} \end{theorem} \subsection{The sequence of functions $\gamma_n(\cdot)$} In this section, we look at the sequence of functions $\gamma_n(\cdot)$ for the two examples $Q_{1,\alpha,\beta}$ and $Q_{2,\alpha,\beta,\ell,k}$ defined in Section 1, to help the reader digest Theorem \ref{Theorem 2.3}. We recall the definitions of $Q_{1,\alpha,\beta}$ and $Q_{2,\alpha,\beta,\ell,k}$. Let $\beta \ge \alpha > 1$ and $\ell,k\geq 0$. Then: \begin{equation*} Q_{1,\alpha,\beta}(x):= \begin{cases} x^{\alpha}, & x \in [0,\infty) \\ |x|^{\beta}, & x \in (-\infty,0) \end{cases} \end{equation*} and \begin{equation*} Q_{2,\alpha,\beta,\ell,k}(x):= \begin{cases} \exp_{\ell}(x^{\alpha})-\exp_{\ell}(0), & x \in [0,\infty) \\ \exp_k(|x|^{\beta})-\exp_k(0), & x \in (-\infty,0). \end{cases} \end{equation*} Then straightforward calculations yield the following properties uniformly for $n$ large enough. \begin{itemize} \item $Q_{1,\alpha,\beta}:$ \begin{itemize} \item[(a)] $T(a_n)=\alpha$. \item[(b)] $T(a_{-n})=\beta$. \item[(c)] $a_n\sim n^{\frac{1}{\alpha}}.$ \item[(d)] $|a_{-n}|\sim n^{\frac{1}{\alpha}\cdot\frac{2\alpha-1}{2\beta-1}}$. \item[(e)] $\delta_n\sim a_n\sim n^{1/\alpha}$. \item[(f)] $\eta_n\sim n^{-2/3}$. \item[(g)] $\eta_{-n}\sim \left[n^{1-\frac{1}{\alpha}\cdot\frac{\beta-\alpha}{2\beta-1}}\right]^{-2/3}$. \end{itemize} \end{itemize} \begin{itemize} \item $Q_{2,\alpha,\beta,\ell,k}:$ \begin{itemize} \item[(a)] $T(a_n)\sim \prod_{j=1}^{\ell}\log_{j}(n)$. \item[(b)] $T(a_{-n})\sim \prod_{j=1}^{k}\log_{j}(n)$. \item[(c)] $a_n = (\log_{\ell}(n))^{1/\alpha} (1+o(1))$. \item[(d)] $a_{-n} = -(\log_{k}(n))^{1/\beta} (1+o(1)).$ \item[(e)] $\delta_n\sim a_n\sim (\log_{\ell}(n))^{1/\alpha}$. \item[(f)] $\eta_n\sim \left(n\prod_{j=1}^{\ell}\log_{j}(n) \right)^{-2/3}$.
\item[(g)] $\eta_{-n}\sim \left(n\prod_{j=1}^{k}\log_j(n)\left[\frac{\left(\log_k(n)\right)^{1/\beta}}{\left(\log_{\ell}(n)\right)^{1/\alpha}}\right]^{1/2} \right)^{-2/3}.$ \end{itemize} \end{itemize} \begin{remark} In the case when $Q$ is even and $T\sim 1$, that is, when $Q$ is of smooth polynomial growth for large argument (for example $Q_{1,\alpha,\beta}$), Theorem \ref{Theorem 2.3} is essentially Theorem 1.3 of \cite{DD1}. \end{remark} \setcounter{equation}{0} \section{Partial Proof of Theorem \ref{Theorem 2.3} and some other Main Results} In this section, we provide the necessary machinery for the partial proof of Theorem \ref{Theorem 2.3} and, along the way, state and prove several other main results. We need the following two lemmas, taken from \cite{LL}. \begin{lemma}\label{Lemma 2.3} Let $w$ be admissible. Set for $n\geq 1$: \[ h_n := \frac{n}{\sqrt{\delta_n}} \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right \}^{1/2} \] and \[ k_n := n^{1/6} \delta_n^{-1/3} \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right \}^{1/6}. \] Then the following hold: \begin{enumerate} \item[(a)] Let $0 < p \le \infty$. Then for $n \ge 1$ and $P \in \mathcal{P}_n$, \begin{equation*} \| P'w \|_{L_p(\mathbb R)} \le C h_n \| Pw \|_{L_p(\mathbb R)}. \end{equation*} \item[(b)] For $n\geq 1$, \begin{equation*} \sup_{x\in \mathbb R} |p_n(x) w(x)| \cdot |(x-a_{-n})(a_n-x)|^{1/4} \sim 1. \end{equation*} \item[(c)] For $n\geq 1$, \begin{equation*} \sup_{x\in \mathbb R} |p_n(x) w(x)| \sim k_n. \end{equation*} \item[(d)] For $n\geq 1$, \begin{equation*} \|p_n w\|_{L_p(\mathbb R)} \sim \begin{cases} \delta_n^{\frac1p-\frac12}, & 0 < p < 4, \\ \delta_n^{-1/4}(\log (n+1))^{1/4}, & p = 4, \\ \delta_n^{\frac1{3p}-\frac13} \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right \}^{\frac23(\frac14-\frac1p)}, & p > 4.
\end{cases} \end{equation*} \item[(e)] There exists $L>0$ such that, uniformly for large enough $t>0$, \[ \left| \delta_{t} \frac{T(a_{\pm t})}{a_{\pm t}} \right| \sim \left( \frac{t}{Q(a_{\pm t})} \right)^2 \le C t^{2(1-\frac{\log 2}{\log L})} \] and thus \[ Q(a_{\pm t}) \ge C t^{\frac{\log 2}{\log L}} \quad \mbox{ and } \quad w(a_{\pm t}) = \exp(-Q(a_{\pm t})) \le \exp \! \left(-C t^{\frac{\log 2}{\log L}} \right). \] \item[(f)] The following hold: \begin{itemize} \item[(f1)] For large enough $t>0$, \[ \left| Q'(a_{\pm t}) \right| \sim t \sqrt{\frac{T(a_{\pm t})}{\delta_t |a_{\pm t}|}} \le C t^{2}. \] \item[(f2)] For fixed $L>0$, $j=0,1$ and uniformly for $t>0$, \[ Q^{(j)}(a_{Lt})\sim Q^{(j)}(a_{t}) \] and \[ T(a_{Lt})\sim T(a_{t}). \] \item[(f3)] For $t\neq 0$ and $\frac{1}{2}\leq \frac{t}{s}\leq 2$, \[ \left|1-\frac{a_s}{a_t}\right|\sim \frac{1}{T(a_t)}\left|1-\frac{s}{t}\right|. \] \item[(f4)] For fixed $L>1$ and uniformly for $t>0$, $a_{Lt}\sim a_t.$ \end{itemize} \item[(g)] For $n\geq 1$, \begin{equation*} |p_n' (x_{j,n}) w(x_{j,n})| \sim \varphi_n^{-1}(x_{j,n}) | (x_{j,n} - a_{-n}) (a_n - x_{j,n}) |^{-1/4} \end{equation*} holds uniformly for all $1\leq j\leq n$. \item[(h)] For $n\geq 2$, \begin{equation*} x_{j,n} - x_{j+1,n} \sim \varphi_n(x_{j,n}) \end{equation*} holds uniformly for all $2\leq j\leq n$. \item[(i)] For $x \in (x_{j+1, n}, x_{j, n})$, $n\geq 1$ and $1\leq j\leq n$, \begin{eqnarray*} | p_n(x) w(x) | & \le & C \min\{ |x-x_{j,n}|, |x-x_{j+1,n}| \} \\ && \qquad \times \varphi_n^{-1}(x_{j,n})|(x_{j,n}-a_{-n})(a_n-x_{j,n})|^{-1/4}. \end{eqnarray*} \item[(j)] Let $0 < \alpha < 1$ and $n\geq 1$. For $x \in (a_{-\alpha n},a_{\alpha n})$, \begin{equation*} | (x-a_{-n}) (a_n-x) |^{-1} \le C \frac1{\delta_n} \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right \}. \end{equation*} \end{enumerate} \end{lemma} \begin{lemma} \label{Lemma 2.4} Let $w$ be admissible, $0 < p < \infty$, $0 < \alpha < 1$, and $L >0$.
Denote the $p$th power Christoffel functions by $\lambda_{n,p}(w,x):=\inf_{P\in \mathcal{P}_n}\left(\frac{\|Pw\|_{L_{p}(\mathbb R)}}{P(x)}\right)^{p}$ for $n\geq 1$ and $x\in \mathbb R$. Then uniformly for $n \ge 1$ and $x \in [a_{-n} (1 + L \eta_{-n}), a_n (1 + L \eta_n)]$, \[ \lambda_{n,p}(w, x) \sim \varphi_n(x) w^p(x). \] Moreover, there exist $C$, $n_0 > 0$ such that uniformly for $n \ge n_0$ and $x \in \mathbb R$, \[ \lambda_{n,p}(w, x) \ge C \varphi_n(x) w^p(x). \] \end{lemma} \subsection{Functions of the second kind} Let $w$ be admissible and let $p_n$ be the $n$th degree orthonormal polynomial for $w^2$. We define a sequence of functions of the second kind $q_n : \mathbb R \to \mathbb R$, $n\geq 1$ by (cf.\ \cite{CDLM,DD1,DD2,DD3,DD4}) \[ q_n(x) := -\!\!\!\!\!\!\!\!\;\int_{\mathbb R} w^2(t) \frac{p_n(t)}{t-x} dt, \qquad n\geq 1. \] The following holds: \begin{theorem}\label{Theorem 2.4} Let $w$ be admissible and $0 < \xi < 1$. Then, \begin{equation}\label{q'} |q_n'(x)| \le C \! \times \! \begin{cases} n^{7/6} \delta_n^{-5/6} \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right \}^{2/3} \log h_n, & a_{-\xi n} \le x \le a_{\xi n}, \\ n a_n^{-1} T(a_n) \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right \}^{1/2}, & x \ge a_{\xi n}, \\ n |a_{-n}|^{-1} T(a_{-n}) \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right \}^{1/2}, & x \le a_{-\xi n}. \end{cases} \end{equation} \end{theorem} \begin{proof} For $x\in \mathbb R$ and $n\geq 1$, let \[ \rho_n(x) := w^2(x) p_n(x). \] Then \[ q_n'(x) = -\!\!\!\!\!\!\!\!\;\int_{\mathbb R} \rho_n'(t) \frac{1}{t-x} dt.
\] Introducing a positive sequence $\varepsilon_n$ that we shall define precisely later, we write for $x\in \mathbb R$ and $n\geq 1$ \[ q_n'(x) = A_1(x) + A_2(x) =: A_1 + A_2 \] with \[ A_1 = \int_{|t-x| \ge \varepsilon_n} \rho_n'(t) \frac{1}{t-x} dt \] and \[ A_2 = -\!\!\!\!\!\!\!\!\;\int_{|t-x|< \varepsilon_n} \rho_n'(t) \frac{1}{t-x} dt = -\!\!\!\!\!\!\!\!\;\int_{x- \varepsilon_n}^{x + \varepsilon_n} \frac{\rho_n'(t) - \rho_n'(x)}{t-x} dt. \] Let us collect some auxiliary results: Since for $x\in \mathbb R$ \[ \rho_n'(x) = p_n'(x)w^2(x) - 2 Q'(x) p_n(x)w^2(x) \] and \[ \rho_n''(x) = p_n''(x)w^2(x)- 4Q'(x)p_n'(x) w^2(x) + \left(- 2Q''(x) + 4 {Q'}^2(x) \right) p_n(x) w^2(x), \] we have for $x \in \mathbb R$, in view of Lemma \ref{Lemma 2.3}(a) and the definition of $h_n$ given in the preamble of Lemma \ref{Lemma 2.3}, \begin{eqnarray*} | \rho_n'(x) w^{-1/2}(x) | & \le & | p_n'(x) w(x) w^{1/2}(x) | + 2 | Q'(x) w^{1/2}(x) |\cdot | p_n(x) w(x) | \\ & \le & C \left( h_n w^{1/2}(x) + |Q'(x) w^{1/2}(x) | \right) \| w p_n \|_{L_{\infty}(\mathbb R)}. \end{eqnarray*} Continuing, \[ |\rho_n'(x)| \le C \left( h_n w(x) + |Q'(x) w(x)| \right) \| w p_n \|_{L_{\infty}(\mathbb R)} \le C h_n\|w p_n\|_{L_{\infty}(\mathbb R)}. \] Using (d) and (e) of Definition \ref{Definition 1.1} and the definition of $h_n$, we have for $x\in \mathbb R$, \begin{eqnarray*} | Q''(x) w(x) | & \le & C \frac{(Q'(x))^2}{|Q(x)|} w(x) \\ & = & C |Q'(x)| \frac{T(x)}{x} w(x) \\ & \le & C |Q'(x)| w(x) \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right \} \\ & \le & C |Q'(x)| w(x) h_n^2 \frac{\delta_n}{n^2} \\ & \le & C |Q'(x)| w(x) h_n \end{eqnarray*} and hence we see that \begin{eqnarray*} \lefteqn{|\rho_n''(x)|} \\ & \le & C \left( h_n^2 w(x) + h_n | Q'(x) w(x) | + | Q''(x) w(x) | + | Q'(x) w^{1/2}(x) |^2 \right) \| w p_n \|_{L_{\infty}(\mathbb R)} \\ & \le & C \left( h_n^2 w(x) + h_n | Q'(x) w(x) | + | Q'(x) w^{1/2}(x) |^2 \right) \| w p_n \|_{L_{\infty}(\mathbb R)}.
\end{eqnarray*} Finally, in view of Lemma \ref{Lemma 2.3}(e), Lemma \ref{Lemma 2.3}(f), and the relations $\lim_{t \to \infty} a_t = \infty$ and $\lim_{t \to \infty} a_{-t} = -\infty$ (which follow from their definitions) we see that \begin{equation}\label{Q'w} |Q'(x) w^{1/2}(x)| \le C, \quad x \in \mathbb R. \end{equation} Now we are in a position to deal with the case $a_{-\xi n} \le x \le a_{\xi n}$. We apply H\"older's inequality and derive \[ |A_1| = \left| \int_{|t-x|\ge \varepsilon_n} \rho_n'(t) \frac{1}{t-x} dt \right| \le \left\| \rho_n' w^{-1/2} \right \|_{L_{\infty}(\mathbb R)} \left \| (\cdot-x)^{-1} w^{1/2} \right \|_{L_1(S_n)} \] where $S_n = \{t \in \mathbb R : |t-x| \ge \varepsilon_n \}$. An explicit calculation gives \[ \left\|(\cdot-x)^{-1}w^{1/2}\right\|_{L_1(S_n)} \sim |\log \varepsilon_n| \] uniformly in $n$. Then \[ |A_1| \le C h_n \| w p_n \|_{L_{\infty}(\mathbb R)} |\log \varepsilon_n|. \] For $A_2$, we observe that \[ |A_2| \le 2 \varepsilon_n \| \rho_n'' \|_{L_{\infty}(\mathbb R)} \le C \varepsilon_n h_n^2 \| w p_n \|_{L_{\infty}(\mathbb R)}. \] Bearing in mind that, by definition, $h_n \to \infty$ as $n \to \infty$, we may choose $\varepsilon_n := h_n^{-1}$, which is arbitrarily small for sufficiently large $n$. Then \[ |q_n'(x)| \le C h_n \| w p_n \|_{L_{\infty}(\mathbb R)} \log h_n \sim C h_n k_n \log h_n \] because of Lemma \ref{Lemma 2.3}(c). In view of the definitions of $h_n$ and $k_n$, this completes the proof in the first case. Next, we consider the case $x \ge a_{\xi n}$. We define for $n\geq 1$, the sequence $\varepsilon_n := a_{\xi n} - a_{\xi n/2}$. The behavior of $\varepsilon_n$ uniformly for large enough $n$ is determined by Lemma \ref{Lemma 2.3}(f3), which we recall says the following: For $t\neq 0$ and $\frac{1}{2}\leq \frac{t}{s}\leq 2$, \[ \left|1-\frac{a_s}{a_t}\right|\sim \frac{1}{T(a_t)}\left|1-\frac{s}{t}\right|.
\] In particular, for $Q_{1,\alpha,\beta}$ the sequence $\varepsilon_n$ grows without bound as $n\to\infty$, while for $Q_{2,\alpha,\beta,\ell,k}$ it tends to $0$. We write \begin{equation} q_n'(x) = \left( \int_{|t-x|\ge \varepsilon_n} + \int_{n^{-10} <|t-x|< \varepsilon_n} + \int_{|t-x|\le n^{-10}} \right) \frac{\rho_n'(t)}{t-x} dt = A_3 + A_4 + A_5. \end{equation} Note that this decomposition is possible since, by Lemma \ref{Lemma 2.3}(f3), we have $a_{\xi n} - a_{\xi n/2} \sim a_{\xi n} / T(a_{\xi n}) > n^{-10}$ for sufficiently large $n$. For $A_5$, we argue in a similar way as for $A_2$ above and find \begin{eqnarray*} |A_5| & = & \left| -\!\!\!\!\!\!\!\!\;\int_{x-n^{-10}}^{x+n^{-10}} \frac{\rho_n'(t) - \rho_n'(x)}{t-x} dt \right| \le C n^{-10} \| \rho_n'' \|_{L_{\infty}(\mathbb R)} \\ & \le & C n^{-10} h_n^2 \| w p_n \|_{L_{\infty}(\mathbb R)} \\ & \le & C n^{-10} h_n^2 k_n = O(n^{-2}) \end{eqnarray*} where the last estimate follows from the fact that, in view of Lemma \ref{Lemma 2.3}(f), \[ \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right \} \le C n^2 \delta_n \] and hence \[ h_n^2 k_n \le C n^{9/2} \delta_n^{-1/6} \le C n^{9/2} \] since $\delta_n$ is an increasing sequence of positive numbers. For $A_4$, \begin{eqnarray*} |A_4| & \le & \int_{n^{-10} <|t-x|< \varepsilon_n} \frac{1}{|t-x|} \| \rho_n' \|_{L_{\infty}(n^{-10} <|t-x|< \varepsilon_n)} dt \\ & \le & C \| \rho_n' \|_{L_{\infty}(n^{-10} < |t-x| < \varepsilon_n)} |\log \varepsilon_n| \\ & \le & C h_n k_n w \left( a_{\xi n/2} \right) |\log \varepsilon_n| \le C h_n k_n o(\exp(-n^{\alpha_2})) \textrm{ for some } \alpha_2 > 0.
\end{eqnarray*} Here, we used the following properties: For $a_{\xi n/2} < t < 2 a_{\xi n} - a_{\xi n/2}$ we have \begin{eqnarray*} \lefteqn{\| \rho_n' \|_{L_{\infty}(n^{-10} < |t-x| < \varepsilon_n)}} \\ & \le & C \| \rho_n' \|_{L_{\infty}(x \ge a_{\xi n/2})} \\ & \le & C \sup_{x \in \mathbb R, \, x \ge a_{\xi n / 2}} \big| h_n w(x) + | Q'(x) w(x) | \, \big| \cdot \|w p_n \|_{L_{\infty}(\mathbb R)} \\ & \le & C \sup_{x \in \mathbb R, \, x \ge a_{\xi n / 2}} \!\!\!\!\! w^{1/2}(x) \cdot \sup_{x \in \mathbb R, \, x \ge a_{\xi n / 2}} \!\!\!\!\! \big| h_n w^{1/2}(x) + | Q'(x) w^{1/2}(x) | \, \big| \cdot \|w p_n \|_{L_{\infty}(\mathbb R)} \\ & \le & C h_n w^{1/2} (a_{\xi n/ 2}) \| w p_n \|_{L_{\infty}(\mathbb R)} \quad (\mbox{because } |Q' w^{1/2}| \mbox{ and } w^{1/2} \mbox{ are bounded} \\ & & \mbox{ \hspace{5cm} and } h_n \to \infty) \\ & \le & C h_n k_n w^{1/2} (a_{\xi n/2}) \\ & \le & C h_n k_n o(\exp(-n^{\alpha_2})) \mbox{ for some }\alpha_2 >0, \end{eqnarray*} because $w^{1/2} (a_{\xi n/2}) = O(\exp(-n^{\alpha_1}))$ for some $\alpha_1 >0$. Finally, for $|A_3|$, since $\varepsilon_n \sim a_n/T(a_n)$ and using \[ \left| (p_n w^2)'(x) \right| \le C \left( \left| p_n'(x) w(x) \right| + \left| p_n(x) w(x) \right| \right) \quad \mbox{ because } \| Q' w \|_{L_\infty(\mathbb R)} < \infty, \] we see \begin{eqnarray*} \| \rho_n' \|_{L_1(\mathbb R)} = \| (p_nw^2)' \|_{L_1(\mathbb R)} \le C h_n \| p_n w \|_{L_1(\mathbb R)} \sim h_n \delta_n^{1/2}, \end{eqnarray*} and so we have \begin{eqnarray*} |A_3| & \le & \frac{1}{\varepsilon_n} \int_{\mathbb R} |\rho_n'(t)| dt \le C \frac{T(a_n)}{a_{n}} \| \rho_n' \|_{L_{1}(\mathbb R)} \le C \frac{T(a_n)}{a_{n}} h_n \delta_n^{1/2}. \end{eqnarray*} Then for $x \ge a_{\xi n}$ \[ |q_n'(x)| \le C \frac{T(a_n)}{a_{n}} h_n \delta_n^{1/2}. \] Finally, for $x \le a_{-\xi n}$ we proceed in almost the same way as for $x \ge a_{\xi n}$; the only difference is that we now obtain \[ |q_n'(x)| \le C \frac{T(a_{-n})}{|a_{-n}|} h_n \delta_n^{1/2}. 
\] Therefore, we can summarize the results as follows. For $n$ large enough, \begin{eqnarray*} |q_n'(x)| & \le & C \times \begin{cases} h_n \| w p_n \|_{L_{\infty}(\mathbb R)} \log h_n, & a_{-\xi n} \le x \le a_{\xi n}\\ \frac{T(a_n)}{a_{n}} h_n \delta_n^{1/2}, & x \ge a_{\xi n}\\ \frac{T(a_{-n})}{|a_{-n}|} h_n \delta_n^{1/2}, & x \le a_{-\xi n}. \end{cases} \\ & \sim & \begin{cases} n^{7/6} \delta_n^{-5/6} \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right \}^{2/3} \log h_n, & a_{-\xi n} \le x \le a_{\xi n} \\ n a_n^{-1} T(a_n) \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{1/2}, & x \ge a_{\xi n} \\ n |a_{-n}|^{-1} T(a_{-n}) \max \left \{ \frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{1/2}, & x \le a_{-\xi n}. \end{cases} \end{eqnarray*} \end{proof} We also need the following result, cf.\ \cite[Lemma 3.1]{DD1}. \begin{lemma}\label{Lemma 2.5} For the weights $\zeta_{jn}(\cdot)$ of the quadrature formula $Q_n$ defined in (\ref{w_jn}), we have for $x\in \mathbb R$ and $1\leq j\leq n$ \[ \zeta_{jn}(x) = \begin{cases} \displaystyle \frac{q_n(x_{j,n}) - q_n(x)}{(x_{j,n} - x) p_n'(x_{j,n})}, & \textrm{ if } x \neq x_{j,n},\\ \displaystyle \frac{q_n'(x_{j,n})}{p_n'(x_{j,n})}, & \textrm{ if } x = x_{j,n}. \end{cases} \] \end{lemma} \begin{proof} This result has been shown in \cite[Lemma 3.1]{DD1} for a narrower class of weight functions than the class under consideration here. However, the proof given there only exploits properties that are satisfied in the present situation too, and so it can be carried over directly. \end{proof} \begin{theorem}\label{Theorem 2.6} Let $w$ be admissible. Let $x \in \mathbb R$, $n\geq 2$ and let $f\in \mathrm{Lip}(\gamma, w)$ for some $\gamma >0$. Let $P_{n-1}^* \in \mathcal P_{n-1}$ satisfy \[ \|w(f-P_{n-1}^*)\|_{L_{\infty}(\mathbb R)} =\inf_{P\in \mathcal{P}_{n-1}}\|w(f-P)\|_{L_{\infty}(\mathbb R)} =E_{n-1}[f]_{w,\infty}.
\] Then, for $x\in \mathbb{R}$ \[ |H_{w^2}[f-P_{n-1}^*;x] | \le C(1+ n^{-2} w^{-1}(x))E_{n-1}[f]_{w,\infty} \log n, \] where $C$ is a constant depending on $w$, but independent of $f$, $n$, and $x$. \end{theorem} \begin{proof} We estimate $|H_{w^2}[f-P_{n-1}^*;x]|$. Let $x\in \mathbb R$ and $0<\varepsilon <1$. Then we have \begin{eqnarray*} \lefteqn{\left|\int_{|t-x|\ge \varepsilon} w^2(t)\frac{f(t)-P_{n-1}^*(t)}{t-x}dt\right|} \\ &\le& CE_{n-1}[f]_{w,\infty} \left(\|w\|_{L_{\infty}(\mathbb R)}\log \varepsilon^{-1} \right) \end{eqnarray*} and \begin{eqnarray*} \lefteqn{ -\!\!\!\!\!\!\!\!\;\int_{x-\varepsilon}^{x+\varepsilon}w^2(t)\frac{f(t)-P_{n-1}^*(t)}{t-x}dt } \\ &=&-\!\!\!\!\!\!\!\!\;\int_{x-\varepsilon}^{x+\varepsilon}w^2(t)\frac{(f(t)-P_{n-1}^*(t))-(f(x)-P_{n-1}^*(x))}{t-x}dt \\ &&\quad +w(x)(f(x)-P_{n-1}^*(x))-\!\!\!\!\!\!\!\!\;\int_{x-\varepsilon}^{x+\varepsilon} \frac{w^2(t)}{w(x)}\frac{dt}{t-x}. \end{eqnarray*} Now, we see that because of the weighted Lipschitz condition on $f$, \begin{equation*} w(t)\frac{\left|(f(t)-P_{n-1}^*(t))-(f(x)-P_{n-1}^*(x))\right|}{|t-x|^{\gamma^*}} \le C\varepsilon^{-\gamma^*} E_{n-1}[f]_{w,\infty} \end{equation*} for some $\gamma^* \in (0,\gamma/2)$. Thus, \begin{eqnarray}\label{(1)} \nonumber \lefteqn{\left|-\!\!\!\!\!\!\!\!\;\int_{x-\varepsilon}^{x+\varepsilon}w^2(t) \frac{(f(t)-P_{n-1}^*(t))-(f(x)-P_{n-1}^*(x))}{t-x}dt\right|} \\ \nonumber &\le& \|w\|_{L_{\infty}(\mathbb R)} \\ \nonumber && \quad \times -\!\!\!\!\!\!\!\!\;\int_{x-\varepsilon}^{x+\varepsilon}w(t) \frac{\left|(f(t)-P_{n-1}^*(t))-(f(x)-P_{n-1}^*(x))\right|}{|t-x|^{\gamma^*}|t-x|^{1-\gamma^*}}dt\\ \nonumber &\le& C\varepsilon^{-\gamma^*} \|w\|_{L_{\infty}(\mathbb R)} E_{n-1}[f]_{w,\infty} \int_0^{\varepsilon}y^{\gamma^*-1}dy\\ \nonumber &\le& C\varepsilon^{-\gamma^*} \|w\|_{L_{\infty}(\mathbb R)} E_{n-1}[f]_{w,\infty} \varepsilon^{\gamma^*}\\ &\le& C\|w\|_{L_{\infty}(\mathbb R)} E_{n-1}[f]_{w,\infty}.
\end{eqnarray} Now, we estimate \begin{equation*} \left|-\!\!\!\!\!\!\!\!\;\int_{x-\varepsilon}^{x+\varepsilon} \frac{w^2(t)}{w(x)}\frac{dt}{t-x}\right|. \end{equation*} By the mean value theorem, for each $t$ there exists $t_x$ between $t$ and $x$ such that \begin{eqnarray}\label{(2)} \nonumber \left|-\!\!\!\!\!\!\!\!\;\int_{x-\varepsilon}^{x+\varepsilon} \frac{w^2(t)}{w(x)}\frac{dt}{t-x}\right| &=&w^{-1}(x)\left|-\!\!\!\!\!\!\!\!\;\int_{x-\varepsilon}^{x+\varepsilon} \frac{w^2(t)-w^2(x)}{t-x}dt\right|\\ \nonumber &=&w^{-1}(x)\left|-\!\!\!\!\!\!\!\!\;\int_{x-\varepsilon}^{x+\varepsilon} -2Q'(t_x)w^2(t_x)dt\right|\\ &\le& Cw^{-1}(x)\|Q'w^2\|_{L_{\infty}(\mathbb R)}\varepsilon \le Cw^{-1}(x)\varepsilon. \end{eqnarray} Therefore, we have by (\ref{(1)}) and (\ref{(2)}) \begin{eqnarray*} \lefteqn{ \left|-\!\!\!\!\!\!\!\!\;\int_{x-\varepsilon}^{x+\varepsilon}w^2(t)\frac{f(t)-P_{n-1}^*(t)}{t-x}dt\right| } \\ &\le& C_1\|w\|_{L_{\infty}(\mathbb R)} E_{n-1}[f]_{w,\infty} + C_2w^{-1}(x)\varepsilon E_{n-1}[f]_{w,\infty} \\ &\le& C\left(\|w\|_{L_{\infty}(\mathbb R)} + \varepsilon w^{-1}(x)\right)E_{n-1}[f]_{w,\infty}. \end{eqnarray*} Consequently, taking $\varepsilon=n^{-2}$, we have \begin{eqnarray*} \lefteqn{|H_{w^2}[f-P_{n-1}^*;x]|}\\ &=& \left|-\!\!\!\!\!\!\!\!\;\int_{\mathbb R} w^2(t)\frac{f(t)-P_{n-1}^*(t)}{t-x}dt \right| \\ &\le& \left|\int_{|t-x|\ge \varepsilon} w^2(t)\frac{f(t)-P_{n-1}^*(t)}{t-x}dt\right| + \left|-\!\!\!\!\!\!\!\!\;\int_{x-\varepsilon}^{x+\varepsilon}w^2(t)\frac{f(t)-P_{n-1}^*(t)}{t-x}dt\right| \\ &\le& CE_{n-1}[f]_{w,\infty} \left(\|w\|_{L_{\infty}(\mathbb R)}\log \varepsilon^{-1} \right) \\ && + C\left(\|w\|_{L_{\infty}(\mathbb R)} + \varepsilon w^{-1}(x)\right) E_{n-1}[f]_{w,\infty} \\ &\le&C (1+ \varepsilon w^{-1}(x))E_{n-1}[f]_{w,\infty} \log \varepsilon^{-1} \\ &\le&C (1+ n^{-2} w^{-1}(x))E_{n-1}[f]_{w,\infty} \log n. \end{eqnarray*} \end{proof} \begin{theorem}\label{Theorem 1.3_1} Let $w$ be admissible and assume that $0<\mu_n \! \le \!
\min\{a_n \eta_n, |a_{-n}|\eta_{-n}\}$. Let $x \in \mathbb R$, $n$ large enough and let $f\in \mathrm{Lip}(\gamma, w)$ for some $\gamma >0$. Then \begin{equation*} |R_n[f;x]| \le C\left\{ (1+ n^{-2} w^{-1}(x))\log n + \gamma_n(x)\right\}E_{n-1}[f]_{w,\infty}, \end{equation*} where \begin{equation*} \gamma_n(x) := \|q_n\|_{\infty}\mu_n^{-1} \delta_n^{3/2} + \mu_n \begin{cases} A_n, & a_n(1+C\eta_n) \le x \le 2a_n, \\ B_n, & a_{\xi n} \le x \le a_n(1+C\eta_n), \\ C_n, & a_{-\xi n} \le x \le a_{\xi n}, \\ D_n, & a_{-n}(1+C\eta_{-n}) \le x \le a_{-\xi n}, \\ E_n, & 2a_{-n} \le x \le a_{-n}(1+C\eta_{-n}), \\ 0, & \textrm{otherwise}. \end{cases} \end{equation*} \end{theorem} \begin{proof} The proof of Theorem \ref{Theorem 1.3_1} is based on the fact that our quadrature formula $Q_n$ is of interpolatory type, i.e., it is exact for all polynomials of degree $\le n-1$. Thus, \begin{eqnarray*} |R_n[f;x]|&=& |R_n[f-P_{n-1}^*;x]| \\ &\le& |H_{w^2}[f-P_{n-1}^*;x]| + \sum_{j=1}^n \frac{|\zeta_{jn}(x)|}{w(x_{j,n})} [w(x_{j,n})|f(x_{j,n})-P_{n-1}^*(x_{j,n})|], \end{eqnarray*} where $P_{n-1}^*$ is the polynomial of the best uniform approximation for $f$ from $\mathcal{P}_{n-1}$ with respect to the weight function $w$. Hence, we now have to prove that \[ \sum_{j=1}^n \frac{|\zeta_{jn}(x)|}{w(x_{j,n})} \le C \gamma_n(x). \] Let $x >0$. Assume $0<\mu_n \le C \min\{a_n \eta_n, |a_{-n}|\eta_{-n}\}$. Then \[ \sum_{j=1}^n \frac{|\zeta_{jn}(x)|}{w(x_{j,n})} =\sum_{\substack{j=1\\ |x-x_{j,n}| > \mu_n}}^n \frac{|\zeta_{jn}(x)|}{w(x_{j,n})} + \sum_{\substack{j=1\\ |x-x_{j,n}| \le \mu_n}}^n \frac{|\zeta_{jn}(x)|}{w(x_{j,n})}. 
\] For the first part, we have from Lemma \ref{Lemma 2.3}(g), Lemma \ref{Lemma 2.3}(h) and Lemma \ref{Lemma 2.5} \begin{eqnarray*} \lefteqn{\sum_{\substack{j=1\\ |x-x_{j,n}| > \mu_n}}^n \frac{|\zeta_{jn}(x)|}{w(x_{j,n})}} \\ &\le& C \sum_{\substack{j=1\\ |x-x_{j,n}| > \mu_n}}^n \frac{1}{w(x_{j,n})} \left|\frac{q_n(x_{j,n})-q_n(x)}{(x_{j,n}-x)p_n'(x_{j,n})}\right| \\ &\le& C \|q_n\|_{\infty}\mu_n^{-1} \sum_{j=1}^n (x_{j,n}-x_{j+1,n})|(x_{j,n}-a_{-n})(a_n-x_{j,n})|^{1/4}\\ &\le& C \|q_n\|_{\infty}\mu_n^{-1} \int_{[a_{-n}(1-C\eta_{-n}),a_n(1-C\eta_{n})] } |(t-a_{-n})(a_n-t)|^{1/4} dt \\ &\le& C \|q_n\|_{\infty}\mu_n^{-1}\delta_n^{3/2} \int_{[-1,1] } (1-u^2)^{1/4} du \\ &\le& C \|q_n\|_{\infty}\mu_n^{-1} \delta_n^{3/2}. \end{eqnarray*} Now, we consider the second part: \[ \sum_{\substack{j=1\\ |x-x_{j,n}| \le \mu_n}}^n \frac{|\zeta_{jn}(x)|}{w(x_{j,n})}. \] \noindent {\bf Case 1.} $x \ge 2a_n$ : We know that all the zeros of $p_n$ are in the interval \[ [a_{-n}(1-C\eta_{-n}), a_n(1-C\eta_n)]. \] Then since \[ \frac{\eta_n^{-1}}{T(a_n)}=\left(n^2\frac{a_n}{\delta_nT(a_n)}\right)^{1/3}= O(n^{\alpha_1}), \quad \textrm{ for some } \alpha_1 >0, \] we have for some $\alpha_2, \alpha_3 >0$ \begin{eqnarray*} |x-x_{j,n}| &\ge& a_n \ge \min \left\{ a_n, \frac{a_{2n}-a_n}{2}\right\} \\ &\ge& C \frac{a_n}{T(a_n)} \ge O(n^{\alpha_2})a_n\eta_n \ge O(n^{\alpha_3})\mu_n. \end{eqnarray*} Therefore, the second sum is empty in this case. \\ For the other cases, by the mean value theorem, there exists $\xi_{j,n}$ between $x$ and $x_{j,n}$ such that \begin{eqnarray*} \sum_{\substack{j=1\\ |x-x_{j,n}| < \mu_n}}^n \frac{|\zeta_{jn}(x)|}{w(x_{j,n})} &\le& C \sum_{\substack{j=1\\ |x-x_{j,n}| < \mu_n}}^n \frac{1}{w(x_{j,n})} \left|\frac{q_n(x_{j,n})-q_n(x)}{(x_{j,n}-x)p_n'(x_{j,n})}\right| \\ &\le& C \sum_{\substack{j=1\\ |x-x_{j,n}| < \mu_n}}^n \left|\frac{q_n'(\xi_{j,n})}{p_n'(x_{j,n})w(x_{j,n})}\right|.
\end{eqnarray*} \noindent {\bf Case 2.} $a_n(1-C\eta_n) \le x \le 2a_n$ : Since $\mu_n \le C a_n\eta_n$, we have \begin{eqnarray*} |x-x_{j,n}| < \mu_n &\Rightarrow& x-\mu_n < x_{j,n} \le x+\mu_n \\ &\Rightarrow& a_n(1-C\eta_n) < x_{j,n} \le x+\mu_n. \end{eqnarray*} Then we have \[ |(x_{j,n}-a_{-n})(a_n-x_{j,n})| \le \delta_n a_n\eta_n, \] and from (\ref{q'}) \[ |q_n'(\xi_{j,n})| \le C na_n^{-1}T(a_n) \max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{\frac12}. \] Using Lemma \ref{Lemma 2.3}(g) and Lemma \ref{Lemma 2.3}(h), we have \begin{eqnarray*} \lefteqn{ \sum_{\substack{j=1\\ |x-x_{j,n}| < \mu_n}}^n \left|\frac{q_n'(\xi_{j,n})}{p_n'(x_{j,n})w(x_{j,n})}\right| }\\ &\le& C n^{\frac56}\delta_n^{\frac1{3}}a_n^{-\frac{5}{6}}T^{\frac56}(a_n) \max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{\frac12} \sum_{\substack{j=1\\ |x-x_{j,n}| < \mu_n}}^n(x_{j,n}-x_{j+1,n}) \\ &\le& C\mu_n A_n. \end{eqnarray*} {\bf Case 3.} $ a_{\xi n} \le x \le a_n(1-C\eta_n)$ : Similarly to Case 2, since $\mu_n \le C a_n\eta_n$, there exists $0 < \eta_1 < 1$ such that \begin{eqnarray*} |x-x_{j,n}| < \mu_n &\Rightarrow& x-\mu_n < x_{j,n} \le x+\mu_n \\ &\Rightarrow& a_{\eta_1 n} < x_{j,n} \le x+\mu_n \le a_n(1+C\eta_n). \end{eqnarray*} Then we have \[ |(x_{j,n}-a_{-n})(a_n-x_{j,n})| \le \delta_n(a_n-a_{\eta_1 n}) \le C \delta_n\frac{a_n}{T(a_n)}, \] and from (\ref{q'}) \[ |q_n'(\xi_{j,n})| \le C na_n^{-1}T(a_n) \max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{\frac12}. \] Using Lemma \ref{Lemma 2.3}(g) and Lemma \ref{Lemma 2.3}(h), we have \begin{eqnarray*} \lefteqn{ \sum_{\substack{j=1\\ |x-x_{j,n}| < \mu_n}}^n \left|\frac{q_n'(\xi_{j,n})}{p_n'(x_{j,n})w(x_{j,n})}\right| }\\ &\le& Cn\delta_n^{\frac1{4}}a_n^{-\frac{3}{4}}T^{\frac34}(a_n) \max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{\frac12} \sum_{\substack{j=1\\ |x-x_{j,n}| < \mu_n}}^n(x_{j,n}-x_{j+1,n}) \\ &\le& C\mu_n B_n. 
\end{eqnarray*} {\bf Case 4.} $0 \le x \le a_{\xi n} $ : Since $\mu_n \le |a_{-n}|\eta_{-n}$ and $\mu_n \le a_{n}\eta_{n}$, there exist constants $L >0$ and $0 < \eta_1, \eta_2 < 1$ independent of $n$ with \begin{eqnarray*} |x-x_{j,n}| < \mu_n &\Rightarrow& x-\mu_n < x_{j,n} \le x+\mu_n \\ &\Rightarrow& a_{-n}\eta_{-n} < x_{j,n} \le x+\mu_n \le a_{\xi n}(1+L\eta_n) \le a_{\eta_2 n}\\ &\Rightarrow& a_{-\eta_1 n} < x_{j,n} \le x+\mu_n \le a_{\eta_2 n}. \end{eqnarray*} Then we have \[ |(x_{j,n}-a_{-n})(a_n-x_{j,n})| \le \delta_n(a_n-a_{\eta_1 n}) \le \delta_n^2, \] and from (\ref{q'}) \[ |q_n'(\xi_{j,n})| \le C n^{\frac76}\delta_n^{-\frac5{6}} \max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{\frac23}\log n. \] Using Lemma \ref{Lemma 2.3}(g) and Lemma \ref{Lemma 2.3}(h), we have \begin{eqnarray*} \lefteqn{\sum_{\substack{j=1\\ |x-x_{j,n}| < \mu_n}}^n \left|\frac{q_n'(\xi_{j,n})}{p_n'(x_{j,n})w(x_{j,n})}\right| } \\ &\le& Cn^{\frac76}\delta_n^{-\frac1{3}} \max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{\frac23}\log n \sum_{\substack{j=1\\ |x-x_{j,n}| < \mu_n}}^n(x_{j,n}-x_{j+1,n}) \\ &\le& C\mu_n C_n. \end{eqnarray*} Thus, we have for $x\geq 0$ \begin{eqnarray*} \sum_{j=1}^n \frac{|\zeta_{jn}(x)|}{w(x_{j,n})} &\le& C_1\|q_n\|_{\infty}\mu_n^{-1} \delta_n^{3/2}\\ && {} + C_2 \mu_n \times \begin{cases} A_n, & a_n(1+C\eta_n) \le x \le \min\{\frac{d+a_n}{2}, 2a_n\} \\ B_n, & a_{\xi n} \le x \le a_n(1+C\eta_n) \\ C_n, & 0 \le x \le a_{\xi n}, \\ 0, & \textrm{otherwise } \end{cases} \\ &\le&C\gamma_n(x) \end{eqnarray*} Similarly, we have for $x \le 0$ \begin{eqnarray*} \sum_{j=1}^n \frac{|\zeta_{jn}(x)|}{w(x_{j,n})} &\le& C\|q_n\|_{\infty}\mu_n^{-1} \delta_n^{3/2}\\ && {} +C \mu_n \times \begin{cases} E_n, & \max\{\frac{c+a_{-n}}{2}, 2a_{-n}\} \le x \le a_{-n}(1+C\eta_{-n}) \\ D_n, & a_{-n}(1+C\eta_{-n}) \le x \le a_{-\xi n} \\ C_n, & a_{-\xi n} \le x \le 0 \\ 0, & \textrm{ otherwise } \end{cases} \\ &\le&C\gamma_n(x). 
\end{eqnarray*} Thus, we have \begin{eqnarray*} \sum_{j=1}^n \frac{|\zeta_{jn}(x)|}{w(x_{j,n})} [w(x_{j,n})|f(x_{j,n})-P_{n-1}^*(x_{j,n})|] &\le& C E_{n-1}[f]_{w,\infty}\sum_{j=1}^n \frac{|\zeta_{jn}(x)|}{w(x_{j,n})} \\ &\le& C E_{n-1}[f]_{w,\infty} \gamma_n(x). \end{eqnarray*} Consequently, we obtain using Theorem \ref{Theorem 2.6} \begin{eqnarray*} \lefteqn{|R_n[f;x]|} \\ &\le& |H_{w^2}[f-P_{n-1}^*;x]| + \sum_{j=1}^n \frac{|\zeta_{jn}(x)|}{w(x_{j,n})} [w(x_{j,n})|f(x_{j,n})-P_{n-1}^*(x_{j,n})|] \\ &\le& |H_{w^2}[f-P_{n-1}^*;x]| + C E_{n-1}[f]_{w,\infty}\sum_{j=1}^n \frac{|\zeta_{jn}(x)|}{w(x_{j,n})} \\ &\le& C_1 (1+ n^{-2} w^{-1}(x))E_{n-1}[f]_{w,\infty} \log n + C_2 E_{n-1}[f]_{w,\infty}\gamma_n(x) \\ &\le& C\left\{ (1+ n^{-2} w^{-1}(x))\log n + \gamma_n(x)\right\}E_{n-1}[f]_{w,\infty}. \end{eqnarray*} \end{proof} \section{Estimation of the Functions of the Second Kind and Proof of Theorem \protect{\ref{Theorem 2.3}}} In this section, we complete the proof of Theorem \ref{Theorem 2.3} and prove upper bounds for the Chebyshev norms of the functions of the second kind \[ q_n(x) := -\!\!\!\!\!\!\!\!\;\int_{\mathbb R} \frac{p_n (t) w^2(t)}{t-x} dt,\quad x\in \mathbb R. \] Specifically, Criscuolo et al.\ \cite[Theorem 2.2(a)]{CDLM} have shown that for large enough $n$, \begin{equation} \label{eq:bound-qn} \| q_n \|_{L_\infty(\mathbb R)} \sim a_n^{-1/2} \end{equation} if $w^2$ is a symmetric weight of smooth polynomial decrease for large argument that satisfies some mild additional smoothness conditions; cf.\ \cite[Definition 2.1]{CDLM} for precise details. Our goal is to extend this result to a much larger class of weight functions. We start with an alternative representation for $q_n$. Here, again, we use the notation $\zeta_{jn}(\cdot)$ for the weights of the Gaussian quadrature formula with respect to the weight function $w^2$ associated with the node $x_{jn}$. \begin{lemma} \label{lem:51} We have the following identities for $n\geq 1$.
\begin{enumerate} \item[(a)] If $x \ne x_{jn}$ for all $j$ then \[ q_n(x) = p_n(x) \left( -\!\!\!\!\!\!\!\!\;\int_{\mathbb R} \frac{w^2(t)}{t-x} dt - \sum_{j=1}^n \frac{\zeta_{jn}(x)}{x_{jn} - x} \right). \] \item[(b)] For $j=1,2,\ldots,n$ we have \[ q_n(x_{jn}) = \zeta_{jn} p_n'(x_{jn}). \] \item[(c)] If $x \in (x_{\ell n}, x_{\ell-1,n})$ for some $2 \le \ell \le n$ then \[ |q_n(x)| \le |p_n(x)| \left( \sum_{j=\ell-1}^\ell \frac{\zeta_{jn}(x)}{|x - x_{jn}|} + \left| -\!\!\!\!\!\!\!\!\;\int_{x_{\ell n}}^{x_{\ell-1,n}} \frac{w^2(t)}{x-t} dt \right| \right) . \] \item[(d)] If $x > x_{1n}$ then \[ |q_n(x)| \le |p_n(x)| \left( \frac{\zeta_{1 n}}{|x - x_{1 n}|} + \left| -\!\!\!\!\!\!\!\!\;\int_{x_{1 n}}^{d} \frac{w^2(t)}{x-t} dt \right| \right) . \] \item[(e)] If $x < x_{nn}$ then \[ |q_n(x)| \le |p_n(x)| \left( \frac{\zeta_{n n}}{|x - x_{n n}|} + \left| -\!\!\!\!\!\!\!\!\;\int_{c}^{x_{n n}} \frac{w^2(t)}{x-t} dt \right| \right) . \] \end{enumerate} \end{lemma} \begin{proof} Parts (a), (b), (c), and (d) are shown for a special class of weights in \cite[eqs.\ (5.2), (5.3), (5.4) and (5.5)]{CDLM}. An inspection of the proofs reveals that no special properties of the weight functions are used there, so exactly the same arguments apply in our case. Part (e) can be shown by arguments analogous to those in the proof of (d). \end{proof} Our main result for this section then reads as follows. \begin{theorem}\label{Theorem 3.2} We have uniformly for $n$ large enough, \begin{equation}\label{q_n} \| q_n\|_{L_\infty(\mathbb R)} \le C\frac1{\delta_n^{1/4}}\max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{1/4}\log n. \end{equation} \end{theorem} \begin{proof} We first consider the case that $x \ge a_{n/2}$.
Then we write \[ q_n(x) = -\!\!\!\!\!\!\!\!\;\int_{\mathbb R} \frac{p_n (t) w^2(t)}{t-x} dt=I_1 + I_2 + I_3 \] where $\varepsilon_n=a_{n/2}-a_{n/4}$ and \begin{eqnarray*} I_1 &=& \int_{|t-x| \ge \varepsilon_n} \frac{p_n (t) w^2(t)}{t-x} dt, \\ I_2 &=& \int_{n^{-10} < |t-x| < \varepsilon_n} \frac{p_n (t) w^2(t)}{t-x} dt, \\ I_3 &=& -\!\!\!\!\!\!\!\!\;\int_{x - n^{-10}}^{x + n^{-10}} \frac{p_n (t) w^2(t)}{t-x} dt. \end{eqnarray*} It then follows that \begin{eqnarray*} |I_1| &\le& C\frac1{\varepsilon_n^{1/4}} \int_{|t-x| \ge \varepsilon_n} |p_n (t) w^2(t)| dt \\ &\le& C\frac1{\varepsilon_n^{1/4}} \left\|\left(\frac{w(t)}{|t-x|^{3/4}}\right)\right\|_{L_{\frac43}(|t-x| \ge \varepsilon_n)} \|p_nw\|_{L_4(\mathbb R)} \\ &\le& C \left(\frac{T(a_n)}{a_n}\right)^{1/4} (\log n)^{3/4} \delta_n^{-\frac14}(\log (n+1))^{\frac14}\\ &\le& C \frac1{\delta_n^{1/4}}\max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{1/4}\log n. \end{eqnarray*} Here, we proceeded as follows: Since \begin{eqnarray*} \int_{|t-x| \ge \varepsilon_n} \frac{w^{4/3}(t)}{|t-x|} dt &\le& \int_{|t-x| \ge 1 } \frac{w^{4/3}(t)}{|t-x|} dt + \int_{ \min\{\varepsilon_n, 1\} < |t-x| \le 1 } \frac{w^{4/3}(t)}{|t-x|} dt \\ &\le& C_1 + \int_{ \min\{\varepsilon_n, 1\} < |t-x| \le 1 } \frac{1}{|t-x|} dt \\ &\le& C\log n, \end{eqnarray*} we see \[ \left\|\left(\frac{w(t)}{|t-x|^{3/4}}\right)\right\|_{L_{\frac43}(|t-x| \ge \varepsilon_n)} \le C (\log n)^{3/4}. \] Moreover, \begin{eqnarray*} |I_2| & \le & \|p_n w\|_{L_\infty(\mathbb R)} w(a_{n/4}) \int_{n^{-10} < |t-x| < \varepsilon_n} \frac{dt}{|t-x|} \\ & \le & C \|p_n w\|_{L_\infty(\mathbb R)} w(a_{n/4}) \log n \\ & \le & C \frac1{\delta_n^{1/4}}\max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{1/4}\log n, \end{eqnarray*} in view of the decay behaviour of $w(a_{n/4})$.
Finally, \begin{eqnarray*} |I_3| & = & \left| -\!\!\!\!\!\!\!\!\;\int_{x - n^{-10}}^{x + n^{-10}} \frac{p_n (t) w^2(t) - p_n (x) w^2(x)}{t-x} dt \right| \\ & \le & 2 n^{-10} \| (p_n w^2)' \|_{L_\infty(\mathbb R)} \\ & \le&C\frac1{\delta_n^{1/4}}\max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{1/4}\log n, \end{eqnarray*} using $\|Q'w\|_{L_\infty(\mathbb R)} < \infty$ and $\left|(p_nw^2)'\right| \le C \left( \left|p_n'(x)w(x)\right| +\left|p_n(x)w(x)\right|\right)$. Thus we conclude that for $x \ge a_{n/2}$ \[ |q_n(x)| \le C \frac1{\delta_n^{1/4}}\max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{1/4}\log n. \] The bound for $x \le a_{-n/2}$ follows by an analogous argument. It remains to show the required inequality for $a_{-n/2}\le x \le a_{n/2}$. We split up this case into a number of sub-cases. First, if $x = x_{jn}$ for some $j$, then we obtain from Lemma \ref{Lemma 2.3}(g) and Lemma \ref{Lemma 2.4} that \begin{eqnarray*} q_n(x_{jn}) &=& \zeta_{jn}(x_{jn}) p_n'(x_{jn}) \\ & \sim & |(x_{j,n}-a_{-n})(a_n-x_{j,n})|^{-1/4}. \end{eqnarray*} In the remaining case, $x$ does not coincide with any of the zeros of the orthogonal polynomial $p_n$. We only treat the case $x \ge 0$ explicitly; the case $x<0$ can be handled in a similar fashion. In this situation, we have that $x \in (x_{\ell n}, x_{\ell-1, n})$ for some $\ell \in \{2,3,\ldots,n\}$, and therefore we may invoke the representation from Lemma \ref{lem:51}(c). This yields \[ |q_n(x)| \le \sum_{j=\ell-1}^\ell \frac{\zeta_{jn}(x)|p_n(x)|}{|x - x_{jn}|} + \left| p_n(x) -\!\!\!\!\!\!\!\!\;\int_{x_{\ell n}}^{x_{\ell-1,n}} \frac{w^2(t)}{x-t} dt \right|. \] Here we first look at the two terms inside the summation operator.
From Lemma \ref{Lemma 2.3} (i), Lemma \ref{Lemma 2.4} and Lemma \ref{Lemma 2.3}(j) we have for $x \in (x_{j+1, n}, x_{j, n})$, \begin{eqnarray} \frac{\zeta_{jn}(x)|p_n(x)|}{|x - x_{jn}|} &\le& C w^{-1}(x)w^2(x_{j,n})|(x_{j,n}-a_{-n})(a_n-x_{j,n})|^{-1/4} \nonumber \\ \label{eq:qn1} &\le& C\frac1{\delta_n^{1/4}}\max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{1/4}. \end{eqnarray} Moreover, the remaining term can be bounded as follows. We define \[ h := \min \left\{ \frac{1}{n}\sqrt{\frac{\delta_n a_n}{T(a_n)}} , x - x_{\ell n}, x_{\ell-1,n} - x \right\} \] and write \[ -\!\!\!\!\!\!\!\!\;\int_{x_{\ell n}}^{x_{\ell-1,n}} \frac{w^2(t)}{x-t} dt = I_4 + I_5 + I_6 \] where \begin{eqnarray*} I_4 &=& \int_{x_{\ell n}}^{x-h} \frac{w^2(t)}{x-t} dt, \\ I_5 &=& -\!\!\!\!\!\!\!\!\;\int_{x-h}^{x+h} \frac{w^2(t)}{x-t} dt, \\ I_6 &=& \int_{x + h}^{x_{\ell-1,n}} \frac{w^2(t)}{x-t} dt. \end{eqnarray*} Looking at $I_5$ first and using the definition of $h$ and the monotonicity properties of $w$ and $Q'$, we obtain \begin{eqnarray*} |I_5| & = & \left| \int_{x-h}^{x+h} \frac{w^2(t) - w^2(x)}{x-t} dt \right| \le 2 h \|(w^2)'\|_{L_\infty[x-h,x+h]} \\ & \le & 4 h w^2(x-h) Q'(x+h) \le C w^2(x_{\ell n}), \end{eqnarray*} because \[ |h Q'(x+h)| \le C \frac{1}{n}\sqrt{\frac{\delta_n a_n}{T(a_n)}}Q'(a_n) \le C. \] Thus, we see by Lemma \ref{Lemma 2.3}(b) \[ |p_n(x)| \cdot |I_5| \le C w^2(x_{\ell n}) w^{-1}(x) |(x-a_{-n})(a_n-x)|^{-1/4}. \] Moreover, we know that our function $w$ is decreasing in $(0,\infty)$ and satisfies $w(x) \sim 1$ whenever $x$ is confined to a fixed finite interval. Thus, \begin{eqnarray} \nonumber |I_4| &\le& C w^2(x_{\ell n}) \int_{x_{\ell n}}^{x-h} \frac{dt}{x-t} = C w^2(x_{\ell n}) \log \frac{x-x_{\ell n}}{h} \\ \label{eq:i4a} &\le& C w^2(x_{\ell n}) \log \frac{x_{\ell-1,n}-x_{\ell n}}{h}.
\end{eqnarray} Another estimate for the quantity $|I_4|$ will also be useful later: We can see that \begin{equation} \label{eq:i4b} |I_4| \le \frac1h \int_{x_{\ell n}}^d w^2(t) dt \le \frac1{2 h Q'(x_{\ell n})} \int_{x_{\ell n}}^d 2 Q'(t) w^2(t) dt = \frac{w^2(x_{\ell n})}{2h Q'(x_{\ell n})}. \end{equation} Using essentially the same arguments, we can provide corresponding bounds for $|I_6|$, viz. \begin{equation} \label{eq:i6a} |I_6| \le C w^2(x_{\ell n}) \log \frac{x_{\ell-1,n}-x_{\ell n}}{h} \end{equation} and \begin{equation} \label{eq:i6b} |I_6| \le \frac{w^2(x_{\ell n})}{2h Q'(x_{\ell n})}. \end{equation} Now we recall that $h$ was defined as the minimum of three quantities and we check with which of these quantities it coincides. \begin{itemize} \item If $h = \frac{1}{n}\sqrt{\frac{\delta_n a_n}{T(a_n)}}$ then we use eqs.\ (\ref{eq:i4a}), (\ref{eq:i6a}), Lemma \ref{Lemma 2.3} (b) and Lemma \ref{Lemma 2.3}(j) to obtain \begin{eqnarray*} (|I_4| + |I_6|) \cdot |p_n(x)| & \le & C w^2(x_{\ell n}) |p_n(x)| \log \left\{(x_{\ell-1,n}-x_{\ell n})n\sqrt{\frac{T(a_n)}{\delta_n a_n}}\right\} \\ & \le & C w^2(x_{\ell n}) |p_n(x)|\log (T(a_n)) \\ & \le & C w^2(x_{\ell n}) w^{-1}(x)|(x-a_{-n})(a_n-x)|^{-1/4}\log (T(a_n))\\ & \le & C w^2(x_{\ell n}) w^{-1}(x)\frac1{\delta_n^{1/4}}\max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{1/4}\log n \end{eqnarray*} because we see from Lemma \ref{Lemma 2.3}(h) \[ (x_{\ell-1,n}-x_{\ell n}) \sim \varphi_n(x) \le C\frac{\sqrt{\delta_n a_nT(a_n)}}{n}. \] \item If $h = x - x_{\ell n}$ then we require the known bound \[ \left| \frac{p_n(x) w(x)}{x - x_{jn}}\right| \le C \varphi_n^{-1}(x_{j,n})|(x_{j,n}-a_{-n})(a_n-x_{j,n})|^{-1/4}. 
\] This estimate holds uniformly for all $j$ and all $x \in \mathbb R$; using it, we derive \begin{eqnarray*} \lefteqn{(|I_4| + |I_6|) \cdot |p_n(x)| } \\ & \le & C w^2(x_{\ell n}) (x - x_{\ell n}) w^{-1}(x) \varphi_n^{-1}(x_{j,n}) \log \frac{x_{\ell-1,n} - x_{\ell n}}{x - x_{\ell n}} \\ && \quad \times |(x_{j,n}-a_{-n})(a_n-x_{j,n})|^{-1/4}\\ & \le & C w^2(x_{\ell n}) \frac{x - x_{\ell n}}{x_{\ell-1,n} - x_{\ell n}} w^{-1}(x) \log \frac{x_{\ell-1,n} - x_{\ell n}}{x - x_{\ell n}} \\ && \quad \times |(x_{j,n}-a_{-n})(a_n-x_{j,n})|^{-1/4}\\ & \le & C w^2(x_{\ell n}) w^{-1}(x) |(x_{j,n}-a_{-n})(a_n-x_{j,n})|^{-1/4}\\ & \le & C w^2(x_{\ell n}) w^{-1}(x)\frac1{\delta_n^{1/4}}\max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{1/4}\log n. \end{eqnarray*} Here we used Lemma \ref{Lemma 2.3}(h) about the spacing of the nodes $x_{jn}$ and Lemma \ref{Lemma 2.3}(j). \item The final case $h = x_{\ell-1,n} - x$ is essentially the same as the previous one and leads to the same bounds. \end{itemize} Combining eq.\ (\ref{eq:qn1}) with the estimates for $I_4$, $I_5$ and $I_6$, we thus obtain, for our range of $x$, \begin{eqnarray*} |q_n(x)| & \le & C w^2(x_{\ell n}) w^{-1}(x)\frac1{\delta_n^{1/4}}\max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{1/4}\log n\\ & \le & C \frac1{\delta_n^{1/4}}\max\left\{\frac{T(a_n)}{a_n}, \frac{T(a_{-n})}{|a_{-n}|} \right\}^{1/4}\log n. \end{eqnarray*} \end{proof} \begin{proof}[Proof of Theorem \protect{\ref{Theorem 2.3}}] Lemma \ref{Lemma 2.4} and Theorem \ref{Theorem 1.3_1} imply the result. \end{proof} \begin{remark} We conjecture that the factor $\log n$ in the upper bound above may be improved for large enough $n$ when $T\sim 1$, i.e., when $Q$ is of smooth polynomial growth for large argument. Indeed, the sequence $\varepsilon_n$ then grows without bound uniformly for large enough $n$.
When $T$ grows without bound for large argument, i.e., when $Q$ is of smooth faster than polynomial growth for large argument, this is not the case. See Lemma \ref{Lemma 2.3} (f3) and the proof of Theorem \ref{Theorem 2.4}. \end{remark} \setcounter{equation}{0} \section{Numerical examples} \label{sec:num-examples} In this section, we provide some numerical results to illustrate our theoretical findings. As the algorithmic aspects of the concrete implementation are not the main focus of this paper, we have relegated the discussion of such details to Appendix \ref{sec:num-comments}. In all our examples, we have chosen the Freud-type weight function $w = \exp(-Q)$ with the external field $Q(t) = t^4$. \begin{example} \label{ex:1} The first example deals with the function $f(t) = \sin t$. We have computed the values of $H_{w^2}[f;x]$ numerically for $x \in \{ 0.01, 0.1, 0.5, 1, 2 \}$. The approximation was done with our algorithm with $n$ nodes, where $n \in \{1, 2, 3, \ldots, 30 \}.$ In Figures \ref{fig:sin1}--\ref{fig:sin5}, we plot the associated absolute errors versus the number of nodes. All plots have a logarithmic scale on the vertical axis. Note that not all values of~$n$ are included in all plots. This is due to the fact that, for larger values of $n$, the relative error was smaller than machine accuracy, so the errors were effectively zero in these cases, which precludes their inclusion in a logarithmic plot. The rapid (essentially exponential) decay of the errors as $n$ increases is clearly evident.
\end{example} \begin{figure} \caption{Absolute errors versus the number of nodes for Example \ref{ex:1}, $x = 0.01$.\label{fig:sin1}} \end{figure} \begin{figure} \caption{Absolute errors versus the number of nodes for Example \ref{ex:1}, $x = 0.1$.\label{fig:sin2}} \end{figure} \begin{figure} \caption{Absolute errors versus the number of nodes for Example \ref{ex:1}, $x = 0.5$.\label{fig:sin3}} \end{figure} \begin{figure} \caption{Absolute errors versus the number of nodes for Example \ref{ex:1}, $x = 1$.\label{fig:sin4}} \end{figure} \begin{figure} \caption{Absolute errors versus the number of nodes for Example \ref{ex:1}, $x = 2$.\label{fig:sin5}} \end{figure} \begin{example} The second example is very similar to the first one, the only change being that we now use the function $f(t) = \log(1+t^2)$. For this example, we can observe the same very good convergence properties as in Example \ref{ex:1}, see Figures \ref{fig:log1}--\ref{fig:log5}. \end{example} \begin{figure} \caption{Absolute errors versus the number of nodes for $f(t) = \log(1+t^2)$, $x = 0.01$.\label{fig:log1}} \end{figure} \begin{figure} \caption{Absolute errors versus the number of nodes for $f(t) = \log(1+t^2)$, $x = 0.1$.\label{fig:log2}} \end{figure} \begin{figure} \caption{Absolute errors versus the number of nodes for $f(t) = \log(1+t^2)$, $x = 0.5$.\label{fig:log3}} \end{figure} \begin{figure} \caption{Absolute errors versus the number of nodes for $f(t) = \log(1+t^2)$, $x = 1$.\label{fig:log4}} \end{figure} \begin{figure} \caption{Absolute errors versus the number of nodes for $f(t) = \log(1+t^2)$, $x = 2$.\label{fig:log5}} \end{figure} \appendix \setcounter{equation}{0} \section{Some needed potential theoretical background} In order to understand the deep differences between the problems we consider in this paper and those studied in our papers \cite{DD1,DD2,DD3,DD4}, as well as the complexities involved in moving from the results of \cite{DD1,DD2,DD3,DD4} to those in this paper, we believe it useful to add the following last section as an appendix. The fundamental weighted energy problem on the real line: Let $\Sigma$ be a closed set on the real line and $w=\exp(-Q):\Sigma\to [0,\infty) $ be an upper semi-continuous weight function on $\Sigma$ with external field $Q$ that is positive on a set of positive linear Lebesgue measure. If $\Sigma$ is unbounded, we assume that \[ \lim_{|x|\to \infty,\, x\in \Sigma}(Q(x)-\log|x|)=\infty.
\] Fix $\Sigma$ and $Q$ and consider \[ \inf_{\mu}\left(\int\int \log\frac{1}{|x-t|}d\mu(x)d\mu(t)+2\int Qd\mu\right) \] where the infimum is taken over all positive Borel measures $\mu$ with support ${\rm supp}(\mu)$ contained in $\Sigma$ and with $\mu(\Sigma)=1$. The infimum is attained by a unique minimizer $\mu_w$. Let \[ V^{\mu_w}(x)=\int \log\left(\frac{1}{|x-t|}\right)d\mu_w(t),\, x\in \mathbb R \] be the logarithmic potential for $\mu_w$. Then the following variational inequalities hold: \begin{equation} \begin{array}{ccc} V^{\mu_w}(x)+Q(x)\geq F_w & {\rm q.e.} & x\in\Sigma \\ V^{\mu_w}(x)+Q(x)=F_w & {\rm q.e.} & x\in {\rm supp}(\mu_w) \end{array} \end{equation} Here, $F_w$ is a constant and q.e.\ (quasi everywhere) means with the exception of a set of logarithmic capacity zero. The support of $\mu_w$ is one of the most important and fundamental quantities needed to determine the minimizer $\mu_w$; its determination is an extremely important and challenging problem which appears in diverse areas of mathematics and physics such as orthogonal polynomials, random matrix theory, combinatorics, approximation theory, electron configurations on conductors, integrable systems, number theory and many more. When $Q$ is identically zero, the support of $\mu_w$ typically ``lives'' close to the boundary of the set $\Sigma$, but when $Q$ is no longer identically zero, the support of $\mu_w$ depends heavily on $Q$, for example on its regularity and smoothness, and can be quite arbitrary. The more complicated support of $\mu_w$ in the case of the work in this paper, compared to the support of $\mu_w$ in our papers \cite{DD1,DD2,DD3,DD4}, is one important reason why the research in the current paper differs so substantially from our previous work in \cite{DD1,DD2,DD3,DD4}.
\subsection{The case $\Sigma=\mathbb R$ and one interval: $Q$ even.} The research on orthogonal polynomials, their zeros and associated Christoffel functions, used as critical tools in our papers \cite{DD1,DD2,DD3,DD4}, was developed by Lubinsky and Levin \cite{LL}. In particular, if $Q$ is even, $Q'$ exists in $(0,\infty)$, and $xQ'$ is increasing and positive on $(0,\infty)$, the support of $\mu_w$ is given in Remark 2.3. \subsection{The case $\Sigma=\mathbb R$ and one interval.} Here, Lubinsky and Levin, in their classic monograph \cite{LL}, established remarkable results on orthogonal polynomials, their zeros and Christoffel functions which allow for the research in this paper. See Sections 2--5. In particular, under hypotheses on $Q$ such as those given in Definition 2.1, the support of $\mu_w$ is given as in Definition 2.2. \subsection{The case $\Sigma=\mathbb R$} Deift and his collaborators in their papers \cite{Deift, Deift1, Deift2} studied the support of $\mu_w$ for smooth $Q$, for example polynomials, and obtained many-term asymptotics for the associated orthogonal polynomials and their zeros. We did not use their research in this paper and leave that for future work. In this case, the support of $\mu_w$ typically need not be one interval; indeed it often splits into a finite number of intervals (sometimes with gaps) with endpoints often described using tools such as Riemann--Hilbert problems. \subsection{Some other cases.} Damelin, Benko, Dragnev, Kuijlaars, Deift, Olver \cite{S1,S2,S3,O,O1} and many others have studied cases of $Q$ and $\Sigma$ where the support of $\mu_w$ splits into a finite number of intervals (often with gaps) and with endpoints not necessarily known. Lubinsky and Levin have in recent years established remarkable results on asymptotics of orthogonal polynomials, their zeros and Christoffel functions under very mild conditions on $Q$ and on various sets $\Sigma$. We do not use their research in this paper. See \cite{LL1,LL2}.
In summary, the description of the supports of minimizers for logarithmic energy variational problems such as (6.1) (and we do not discuss other kernels!), together with the associated research on orthogonal polynomials, zeros and Christoffel functions for different $Q$, is a huge area of research spanning many areas of mathematics and physics. In particular, and in this regard, our results and methods in this paper generalize, highly non-trivially, our work in \cite{DD1,DD2,DD3,DD4}. \setcounter{equation}{0} \section{Comments on the Numerical Method} \label{sec:num-comments} In this appendix we collect some information that is helpful in the construction and implementation of the numerical algorithm for the quadrature formula \eqref{w_jn} required for the numerical examples presented in Section \ref{sec:num-examples}. To avoid excessive technical complications, the discussion here will not cover the very general weight functions investigated in the main part of the paper. Rather, we will restrict our attention to the special case discussed in Section \ref{sec:num-examples}, i.e.\ we shall assume throughout this appendix that $w$ is the Freud-type weight given by \begin{equation} \label{eq:wnum} w(t) = \exp(-Q(t)) \qquad \mbox{ with } \qquad Q(t) = t^4 \end{equation} for $t \in \mathbb R$. Furthermore, as indicated in Section \ref{sec:num-examples}, the algorithmic aspects are not the main point of this more theoretically oriented paper. Therefore, we emphasize here that the description below is also rather theoretical and does not cover issues like the numerical stability of the approach. It is well known that this is a highly nontrivial matter that, however, needs to be discussed elsewhere. The main observation in the present context is that, in view of eq.\ \eqref{eq:qf-hl}, the construction of the formula $Q_n[f;x]$ requires the following ingredients: \begin{enumerate} \item We need to be able to compute the Lagrange interpolation polynomial $L_n[f]$ for the given function $f$.
In view of the well known general relation \begin{equation} \label{eq:lagrange} L_n[f] (t) = \sum_{j=1}^n f(x_{j,n}) \prod_{\stackrel{\scriptstyle k=1}{k\ne j}}^n \frac{t - x_{k,n}}{x_{j,n} - x_{k,n}}, \end{equation} this means that we need to know the location of the nodes $x_{j,n}$ ($j = 1, 2, \ldots, n$), i.e.\ the zeros of the orthonormal polynomials $p_n = p_n(w^2, \cdot)$ with respect to the weight function $w^2$. \item In the second step, it is necessary to apply the weighted Hilbert transform operator $H_{w^2}$ to this interpolation polynomial. In view of the linearity of the Hilbert transform, this demands the knowledge of the values of $H_{w^2}[\pi_k; x]$ for $k = 0, 1, 2, \ldots, n$ where $\pi_k(t) = t^k$ is the $k$th monomial. \end{enumerate} We shall now describe how this information can be obtained. \subsection{The moments of the weight function $w^2$} It turns out that, for both required items, it is necessary to compute the moments \begin{equation} \label{eq:def-moments} \mu_k := \int_{-\infty}^\infty w^2(t) t^k dt \end{equation} of the given weight function $w^2$ for $k = 0, 1, 2, \ldots, 2n$, so this is our first result. Indeed, we can see that \begin{equation} \label{eq:moments} \mu_k = \begin{cases} 0 & \mbox{ if } k \mbox{ is odd,} \\ 2^{-(k+5)/4} \cdot \Gamma\left(\frac{k + 1}4 \right) & \mbox{ if } k \mbox{ is even,} \end {cases} \end{equation} where $\Gamma$ denotes Euler's Gamma function. The result for odd values of $k$ immediately follows from a symmetry argument because the weight function $w^2$ is even, and the result for even values of $k$ can be obtained by a symbolic integration using a computer algebra package like Mathematica \cite{Math}. \subsection{The nodes of the interpolation operator $L_n$} As indicated above, the nodes of the interpolation operator $L_n$ are the zeros of the orthonormal polynomial $p_n(w^2, \cdot)$. To determine these values, we follow a strategy outlined in \cite{Gau}. 
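The closed-form moments in eq.\ \eqref{eq:moments} are easy to sanity-check numerically. The following sketch (our own illustration in Python, separate from the paper's Mathematica-based derivation) compares the formula against a direct quadrature of \eqref{eq:def-moments}, truncated at $|t| = 4$, where the integrand $\exp(-2t^4)\,t^k$ is negligible:

```python
import math

def moment_closed_form(k):
    # mu_k for w^2(t) = exp(-2 t^4): zero for odd k,
    # 2^{-(k+5)/4} * Gamma((k+1)/4) for even k, cf. eq. (moments)
    if k % 2 == 1:
        return 0.0
    return 2.0 ** (-(k + 5) / 4) * math.gamma((k + 1) / 4)

def integrand(t, k):
    return math.exp(-2 * t ** 4) * t ** k

def moment_numeric(k, upper=4.0, steps=100_000):
    # composite trapezoidal rule on [0, upper]; for even k the integrand is
    # even, so the result is doubled to cover the whole real line
    h = upper / steps
    s = 0.5 * (integrand(0.0, k) + integrand(upper, k))
    s += sum(integrand(i * h, k) for i in range(1, steps))
    return 2 * s * h
```

The truncation point $t = 4$ is our choice; there $\exp(-2t^4) = \exp(-512)$, far below machine precision.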
Specifically, we consider the orthogonal polynomials $\tilde p_n(w^2, \cdot)$ for the weight function $w^2$ that, instead of being normalized according to \eqref{eq:onp}, are normalized such that their leading coefficient is 1. Clearly, this means that, for each $n$, there exists some real number $C_n$ such that $p_n(w^2, x) = C_n \tilde p_n(w^2, x)$ holds for all $x \in \mathbb R$, and hence the zeros of $p_n(w^2, \cdot)$ coincide with those of $\tilde p_n(w^2, \cdot)$. It is then a well known general property of orthogonal polynomials that there exist real numbers $\alpha_k, \beta_k$ ($k = 0, 1, 2, \ldots$) depending on the weight function $w^2$ such that the polynomials $\tilde p_n(w^2, \cdot)$ satisfy the three-term recurrence relation \begin{subequations} \begin{equation} \label{eq:ttrr} \tilde p_{k+1}(w^2, t) = (t - \alpha_k) \tilde p_k(w^2, t) - \beta_k \tilde p_{k-1}(w^2, t) \qquad k = 0, 1, 2, \ldots \end{equation} with starting values \begin{equation} \tilde p_0(w^2, t) = 1 \qquad \mbox{ and } \qquad \tilde p_{-1}(w^2, t) = 0, \end{equation} \end{subequations} cf., e.g., \cite[eq.\ (1.3)]{Gau}. From \cite[Section 6.1]{Gau} we can then conclude that the desired zeros of the polynomial $\tilde p_n(w^2, \cdot)$ (and hence also the zeros of $p_n(w^2, \cdot)$, i.e.\ the required interpolation nodes) are the eigenvalues of the tridiagonal matrix \[ M_n = \begin{pmatrix} \alpha_0 & \sqrt{\beta_1} & & & & \\ \sqrt{\beta_1} & \alpha_1 & \sqrt{\beta_2} & & & \\ & \ddots & \ddots & \ddots & \\ & & \sqrt{\beta_{n-2}} & \alpha_{n-2} & \sqrt{\beta_{n-1}} \\ & & & \sqrt{\beta_{n-1}} & \alpha_{n-1} \end{pmatrix}. \] So, to compute the interpolation nodes, we have to find the entries of the matrix $M_n$ and then calculate its eigenvalues. For the former step, we must determine the values $\alpha_k$ and $\beta_k$.
To this end, we first note that, in our case, \[ \alpha_k = 0 \qquad \mbox{ for all } k; \] this follows because the weight function $w^2$ that we have chosen is even. For the $\beta_k$, we use the bootstrap method (also known as the Stieltjes procedure) described in \cite[Section 4.1]{Gau} that can be formulated in the following way: \begin{itemize} \item Set $\beta_0 = \int_{-\infty}^\infty w^2(t) dt = \mu_0$ (which is known from eq.\ \eqref{eq:moments}). \item For $k = 0, 1, 2, \ldots, n-1$: \begin{itemize} \item Compute $\tilde p_{k+1}(w^2,\cdot)$ by means of eq.\ \eqref{eq:ttrr}. \item Compute \begin{equation} \label{eq:beta-stieltjes} \beta_{k+1} = \frac{\int_{-\infty}^\infty w^2(t) \left( \tilde p_{k+1}(w^2, t) \right)^2 d t} {\int_{-\infty}^\infty w^2(t) \left( \tilde p_{k}(w^2, t) \right)^2 d t}. \end{equation} \end{itemize} \end{itemize} Note that the computation of the integrals in eq.\ \eqref{eq:beta-stieltjes} is technically possible because the coefficients of the polynomials $\tilde p_\ell(w^2, \cdot)$ ($\ell = k, k+1$) in the integrands have already been computed, so the squares of these polynomials can be computed as well, and therefore these integrals can be expressed as linear combinations of the known moments $\mu_\ell$ with known coefficients. It then remains to compute the eigenvalues of $M_n$. In view of the tridiagonal structure of $M_n$, this is a straightforward process in numerical linear algebra; the QR method, for example, is a reliable, stable and efficient method to accomplish this goal. \subsection{Evaluation of $H_{w^2}[L_n; x]$} We now have all the components of the right-hand side of eq.\ \eqref{eq:lagrange} available, and so we can compute the interpolation polynomial $L_n[f]$ and express it in the canonical form \[ L_n[f](t) = \sum_{k=0}^{n-1} \lambda_{kn}[f] t^k \] with certain coefficients $\lambda_{kn}[f]$ that depend on the given function $f$.
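The node computation described above ($\alpha_k = 0$, the Stieltjes procedure for the $\beta_k$, and the eigenvalues of the tridiagonal matrix $M_n$) can be sketched as follows. This is only an illustrative Python sketch of ours: it uses the closed-form moments from eq.\ \eqref{eq:moments}, and numpy's symmetric eigensolver stands in for the QR method mentioned above.

```python
import math
import numpy as np

def moment(k):
    # mu_k for w^2(t) = exp(-2 t^4), cf. eq. (moments)
    if k % 2 == 1:
        return 0.0
    return 2.0 ** (-(k + 5) / 4) * math.gamma((k + 1) / 4)

def inner(p, q):
    # integral of w^2 * p * q, with p, q coefficient arrays (low degree
    # first), expressed as a linear combination of the moments mu_k
    prod = np.convolve(p, q)
    return sum(c * moment(k) for k, c in enumerate(prod))

def interpolation_nodes(n):
    # Stieltjes procedure: alpha_k = 0 since w^2 is even; beta_{k+1} from
    # eq. (beta-stieltjes); the nodes are the eigenvalues of M_n.
    polys = [np.array([1.0])]            # monic \tilde p_0
    betas = [moment(0)]                  # beta_0 = mu_0
    for k in range(n - 1):
        pk = polys[-1]
        p_next = np.concatenate(([0.0], pk))   # t * \tilde p_k
        if k >= 1:
            # subtract beta_k * \tilde p_{k-1}
            p_next[: len(polys[-2])] -= betas[-1] * polys[-2]
        polys.append(p_next)
        betas.append(inner(p_next, p_next) / inner(pk, pk))
    off = np.sqrt(betas[1:])             # sqrt(beta_1), ..., sqrt(beta_{n-1})
    M = np.diag(off, 1) + np.diag(off, -1)
    return np.sort(np.linalg.eigvalsh(M))
```

For instance, the two nodes for $n = 2$ are $\pm\sqrt{\mu_2/\mu_0}$, since the zeros of the monic $\tilde p_2(w^2, t) = t^2 - \beta_1$ are $\pm\sqrt{\beta_1}$.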
It then follows that \[ H_{w^2}[L_n; x] = -\!\!\!\!\!\!\!\!\;\int_{-\infty}^\infty w^2(t) \frac{L_n[f](t)}{t-x} dt = J_1[f](x) + L_n[f](x) J_2(x) \] where \[ J_1[f](x) = \int_{-\infty}^\infty w^2(t) \frac{L_n[f](t) - L_n[f](x)}{t-x} dt \] and \[ J_2(x) = -\!\!\!\!\!\!\!\!\;\int_{-\infty}^\infty w^2(t) \frac{1}{t-x} dt. \] For $J_1[f](x)$, we see for any $x \in \mathbb R$ that \begin{eqnarray*} J_1[f](x) &=& \int_{-\infty}^\infty w^2(t) \frac{\sum_{k=0}^{n-1} \lambda_{kn}[f] t^k - \sum_{k=0}^{n-1} \lambda_{kn}[f] x^k}{t-x} dt \\ &=& \int_{-\infty}^\infty w^2(t) \sum_{k=1}^{n-1} \lambda_{kn}[f] \frac{t^k - x^k}{t-x} dt \\ &=& \int_{-\infty}^\infty w^2(t) \sum_{k=1}^{n-1} \lambda_{kn}[f] \sum_{\ell=0}^{k-1} t^\ell x^{k-\ell-1} dt \\ &=& \sum_{k=1}^{n-1} \lambda_{kn}[f] \sum_{\ell=0}^{k-1} x^{k-\ell-1} \mu_\ell \end{eqnarray*} which can be evaluated with the help of eq.\ \eqref{eq:moments}. For the integral $J_2(x)$, we first note that, owing to the fact that $w^2$ is an even function, $J_2$ is an odd function. Therefore, $J_2(0) = 0$ and $J_2(x) = - J_2(-x)$ for all $x < 0$. Hence it suffices to explicitly consider the computation of $J_2(x)$ for $x > 0$; the remaining cases can be covered by symmetry arguments. 
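The moment-based expression for $J_1[f](x)$ derived above can likewise be checked against a direct quadrature of the divided-difference integral, which is nonsingular because the divided difference of a polynomial is again a polynomial in $t$. A minimal Python sketch of ours, with hypothetical coefficients $\lambda_{kn}[f]$:

```python
import math

def moment(k):
    # mu_k for w^2(t) = exp(-2 t^4), cf. eq. (moments)
    if k % 2 == 1:
        return 0.0
    return 2.0 ** (-(k + 5) / 4) * math.gamma((k + 1) / 4)

def j1_via_moments(lam, x):
    # J_1[f](x) = sum_{k=1}^{n-1} lambda_k sum_{l=0}^{k-1} x^{k-l-1} mu_l
    return sum(
        lam[k] * sum(x ** (k - l - 1) * moment(l) for l in range(k))
        for k in range(1, len(lam))
    )

def j1_direct(lam, x, upper=4.0, steps=100_000):
    # trapezoidal quadrature of int w^2(t) (p(t) - p(x)) / (t - x) dt
    def p(t):
        return sum(c * t ** k for k, c in enumerate(lam))

    def integrand(t):
        if abs(t - x) < 1e-9:
            # limit of the divided difference is p'(x)
            return math.exp(-2 * t ** 4) * sum(
                k * c * t ** (k - 1) for k, c in enumerate(lam) if k >= 1)
        return math.exp(-2 * t ** 4) * (p(t) - p(x)) / (t - x)

    h = 2 * upper / steps
    s = 0.5 * (integrand(-upper) + integrand(upper))
    s += sum(integrand(-upper + i * h) for i in range(1, steps))
    return s * h
```

Note that a constant interpolant gives $J_1[f](x) = 0$, consistent with the sum starting at $k = 1$.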
In this case we can use the fact that our specific application uses $w^2(t) = \exp(-2t^4)$, cf.\ eq.\ \eqref{eq:wnum}, and argue as follows: \[ J_2(x) = \int_{-\infty}^0 \frac{\exp(-2t^4)}{t-x} dt + -\!\!\!\!\!\!\!\!\;\int_0^{\infty} \frac{\exp(-2t^4)}{t-x} dt = J_{21}(x) + J_{22}(x), \] say, where (using the substitution $t = -u^{1/4}$ in the first step and the symbolic integration capabilities of Mathematica \cite{Math} in the last one) \begin{eqnarray*} J_{21}(x) &=& \int_{-\infty}^0 \frac{\exp(-2t^4)}{t-x} dt \\ &=& \frac 1 4 \int_0^\infty \frac{\exp(-2u)}{-u - x u^{3/4}} du \\ &=& \frac 1 4 \int_0^\infty \exp(-2u) \left[ \frac 1 {x^3 u^{1/4}} - \frac 1 {x^2 u^{1/2}} + \frac 1 {x u^{3/4}} - \frac 1 {x^3 (x + u^{1/4})} \right] du \\ &=& \frac{\Gamma(3/4)}{2^{11/4} x^3} + \frac{\Gamma(1/4)}{2^{9/4} x} \\ & & {} + \exp(-2x^4) \Bigg[ \frac {\pi \mathrm i} 4 \mathop{\mathrm{erf}}(\mathrm i \sqrt{2} x^2) - \frac 1 4 \mathop{\mathrm{Ei}}(2 x^4) - \frac {\pi \mathrm i}{2} \\ & & \phantom{\exp(-2x^4) \Bigg] abc} {} + \frac{3 \sqrt 2}{128} (-1 + \mathrm i) \Gamma \left( - \frac 1 4 \right) \Gamma \left( - \frac 3 4, -2 x^4 \right) \\ & & \phantom{\exp(-2x^4) \Bigg] abc} {} - \frac{\sqrt 2}{8} (1 + \mathrm i) \Gamma \left(\frac 5 4 \right) \Gamma \left( - \frac 1 4, -2 x^4 \right) \Bigg]. \end{eqnarray*} In this formula, $\mathrm i$ is the imaginary unit, $\mathop{\mathrm{erf}}$ denotes the error function, $\mathop{\mathrm{Ei}}$ is the exponential integral, and $\Gamma(\cdot, \cdot)$ is the incomplete Gamma function. Note that, as one may expect since $J_{21}(x)$ is the integral of a real valued function over a real interval, all the imaginary parts of the components of the final expression for $J_{21}(x)$ cancel each other, and so the result is purely real. Finally, using the substitution $t = u^{1/4}$ in the first step and once again the symbolic integration capabilities of Mathematica in the last one, we find \begin{eqnarray*} J_{22}(x) \!\!\!\!\! &=& \!\!\!\!\! 
-\!\!\!\!\!\!\!\!\;\int_0^{\infty} \frac{\exp(-2t^4)}{t-x} dt \\ &=& \!\!\!\!\! \frac 1 4 -\!\!\!\!\!\!\!\!\;\int_0^\infty \frac{\exp(-2u)}{u - x u^{3/4}} du \\ &=& \!\!\!\!\! \frac 1 4 -\!\!\!\!\!\!\!\!\;\int_0^\infty \exp(-2u) \left[- \frac 1 {x^3 u^{1/4}} - \frac 1 {x^2 u^{1/2}} - \frac 1 {x u^{3/4}} + \frac 1 {x^3 (u^{1/4} - x)} \right] du \\ &=& \!\!\!\!\! - \frac{\sqrt 2 \pi} {8 x^2} - \frac{\Gamma(\nicefrac 3 4)}{2^{11/4} x^3} - \frac{\Gamma(\nicefrac 1 4)}{2^{9/4} x} - \frac{\pi}{2x^4} G_{8,9}^{5,4} \! \left( \! \begin{matrix} \nicefrac 1 4, \nicefrac 1 2, \nicefrac 3 4, 1, \nicefrac 1 8, \nicefrac 3 8, \nicefrac 5 8, \nicefrac 7 8 \\ \nicefrac 1 4, \nicefrac 1 2, \nicefrac 3 4, 1, 1, \nicefrac 1 8, \nicefrac 3 8, \nicefrac 5 8, \nicefrac 7 8 \end{matrix} \, \Bigg| \, 2 x^4 \!\! \right) \end{eqnarray*} where $G_{8,9}^{5,4}$ is a member of the class of Meijer's $G$-functions (see, e.g., \cite{BS}). Combining all the results listed in Appendix \ref{sec:num-comments}, we can implement the algorithm used for computing the numerical results of Section \ref{sec:num-examples}. \setcounter{equation}{0} \section*{Declaration of interest statement} The authors report there are no competing interests to declare. \end{document}
\begin{document} \title{Resonance tongues for the Hill equation with Duffing coefficients\\ and instabilities in a nonlinear beam equation} \author{Carlo GASPARETTO -- Filippo GAZZOLA\\ {\small Dipartimento di Matematica del Politecnico, Piazza L. da Vinci 32 - 20133 Milano (Italy)}} \date{} \maketitle \begin{abstract} We consider a class of Hill equations where the periodic coefficient is the squared solution of some Duffing equation plus a constant. We study the stability of the trivial solution of this Hill equation and we show that a criterion due to Burdina \cite{burdina} is very helpful for this analysis. In some cases, we are also able to determine exact solutions in terms of Jacobi elliptic functions. Overall, we obtain a fairly complete picture of the stability and instability regions. These results are then used to study the stability of nonlinear modes in some beam equations. \par\noindent {\bf Keywords:} Duffing equation, Hill equation, stability, nonlinear beam equation.\par\noindent {\bf AMS Subject Classification (2010):} 34D20, 35G31. \end{abstract} \section{Introduction} In order to explain the contents of the present paper, we briefly introduce the Duffing and Hill equations. The Duffing equation \cite{duffing} (see also \cite{stoker}) is a nonlinear ODE and reads \neweq{ODE} \ddot{y}(t)+y(t)+y(t)^3=0\qquad(t>0)\, . \end{equation} To \eq{ODE} we associate the initial values \neweq{alphabeta} y(0)=\delta\ ,\qquad \dot{y}(0)=0\, ,\qquad(\delta\in{\mathbb R}\setminus\{0\})\, . \end{equation} The unique solution of \eq{ODE}-\eq{alphabeta} is periodic, its period depends on $\delta$ and may be computed in terms of Jacobi elliptic functions, see Proposition \ref{prop:sol_duffing} in Section \ref{properties}.\par The Hill equation \cite{Hill} was introduced for the study of the lunar perigee and it has been the object of many subsequent studies, see e.g.\ \cite{cesari,magnus,stoker,yakubovich}. 
It is a linear ODE with periodic coefficient, that is, \neweq{hill} \ddot{\xi}(t)+p(t)\xi(t)=0\, ,\qquad p\in C^0[0,T]\, ,\quad p(t+T)=p(t)\quad \forall t \end{equation} where we intend that $T>0$ is the smallest period of $p$ (in particular, $p$ is nonconstant). The main concern is to establish whether the trivial solution $\xi\equiv0$ of \eq{hill} is stable or, equivalently, if all the solutions of \eq{hill} are bounded in ${\mathbb R}$. If $p(t)=a+2q\cos(2t)$ for some $a,q>0$, then \eq{hill} is named after Mathieu \cite{mathieu} and, in this case, the stability analysis for \eq{hill} is well-understood: in the $(q,a)$-plane, the so-called {\em resonance tongues} (or instability regions) for \eq{hill} emanate from the points $(0,\ell^2)$, with $\ell\in{\mathbb N}$, see \cite[fig.8A]{mcl}: these tongues are separated from the stability regions by some {\em resonance lines} which are explicitly known. This is one of the few cases where the stability for \eq{hill} has reached a complete understanding. In general, the resonance tongues have strange geometries, see e.g.\ \cite{broer,broer2,simakhina,svetlana,weinstein}.\par The first purpose of the present paper is to study the stability of the Hill equation \eq{hill} when the periodic coefficient $p$ is related to a solution of the Duffing equation \eq{ODE}. More precisely, we will consider the cases where \neweq{where} p(t)=\gamma+\beta y(t)^2 \end{equation} with $\gamma\ge0$, $\beta>0$, and $y$ being a solution of \eq{ODE} or of a scaled version of it. We first consider the simpler case where $\beta=1$. Since the solution of \eq{ODE} is even with respect to $t$, the stability diagram for \eq{hill} in the $(\delta,\gamma)$-plane is symmetric with respect to the axis $\delta=0$. 
By allowing $\gamma$ to become negative we obtain the explicit form of the first resonance tongue and the stability behavior of \eq{hill} in a large neighborhood of $(\delta,\gamma)=(0,0)$, see Theorems \ref{exactsolutions} and \ref{striscia}, as well as Figure \ref{zonevere}. In our analysis we take advantage of some explicit solutions of \eq{hill} that we are able to determine thanks to the particular form of the coefficient $p$ in \eq{where} when $\beta=1$.\par In Theorem \ref{primo} we find sufficient conditions on $\gamma$ and $\delta$ for the stability of \eq{hill} when $p$ is as in \eq{where} and $\beta=1$. The proof contains two main ideas. First, we apply the Burdina criterion \cite{burdina}, reported in Proposition \ref{lyapzhu} $(iii)$. Second, we prove that this criterion {\em does not} need the explicit form of the solution $y$ of \eq{ODE} and the stability analysis may be reduced to the study of some elliptic integrals. In order to show that the sufficient conditions are accurate, we proceed numerically. In Figure \ref{num_B} we plot the stability regions obtained through the Burdina criterion whereas in Figure \ref{realplot} we plot the stability regions obtained through the trace of the monodromy matrix. These plots confirm that the Burdina criterion is indeed quite accurate. Probably, it is the most accurate for the Hill equation with squared Duffing coefficients: in Figure \ref{num_B} we also compare the Burdina criterion with the Zhukovskii criterion \cite{zhk} (reported in Proposition \ref{lyapzhu} $(ii)$) and we show that the former is much more precise.\par The Hill equation \eq{hill} with squared Duffing coefficients \eq{where} for general $\beta>0$ arises when studying the stability of modes of nonlinear strings \cite{cazw,cazw2,ghg}, beams \cite{bbfg}, and plates \cite{bfg1,fergazmor}. 
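The monodromy-matrix computation behind Figure \ref{realplot} can be sketched numerically as follows: integrate the Duffing equation \eq{ODE} jointly with two independent solutions of \eq{hill}, with $p$ as in \eq{where} and $\beta=1$, over one period $T(\delta)$ (evaluated from the elliptic-integral formula for the period via the arithmetic--geometric mean), and test whether the trace of the monodromy matrix lies in $[-2,2]$. This Python sketch is our own illustration, not the authors' code; it uses a plain RK4 scheme:

```python
import math

def agm(a, b, tol=1e-15):
    # arithmetic-geometric mean; K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def period(delta):
    # T(delta) = 4 / sqrt(1 + delta^2) * K(delta / sqrt(2 (1 + delta^2)))
    k = delta / math.sqrt(2 * (1 + delta ** 2))
    K = math.pi / (2 * agm(1.0, math.sqrt(1 - k * k)))
    return 4 / math.sqrt(1 + delta ** 2) * K

def monodromy_data(delta, gamma, steps=20000):
    # State (y, y', xi1, xi1', xi2, xi2'): y solves the Duffing equation,
    # xi1, xi2 solve xi'' + (gamma + y^2) xi = 0 with data (1,0) and (0,1).
    def rhs(s):
        y, yp, a, ap, b, bp = s
        p = gamma + y * y
        return (yp, -y - y ** 3, ap, -p * a, bp, -p * b)

    s = (delta, 0.0, 1.0, 0.0, 0.0, 1.0)
    h = period(delta) / steps
    for _ in range(steps):  # classical RK4 over one period
        k1 = rhs(s)
        k2 = rhs(tuple(si + h / 2 * ki for si, ki in zip(s, k1)))
        k3 = rhs(tuple(si + h / 2 * ki for si, ki in zip(s, k2)))
        k4 = rhs(tuple(si + h * ki for si, ki in zip(s, k3)))
        s = tuple(si + h / 6 * (u + 2 * v + 2 * w + z)
                  for si, u, v, w, z in zip(s, k1, k2, k3, k4))
    y, yp, a, ap, b, bp = s
    # trace and determinant of the monodromy matrix; |trace| <= 2 <=> stable
    return a + bp, a * bp - ap * b, y, yp
```

The determinant of the monodromy matrix must equal 1 (Liouville's theorem), and $y$ must return to its initial data after one period; both serve as built-in consistency checks for the integrator.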
In 1950, Woinowsky-Krieger \cite{woinowsky} modified the classical beam models of Bernoulli and Euler by assuming a nonlinear dependence of the axial strain on the deformation gradient, taking into account the stretching of the beam due to its elongation. Independently, Burgreen \cite{burg} derived the very same nonlinear beam equation. After normalization of some physical constants and after scaling the space variable, the problem reads \neweq{truebeam} \left\{\begin{array}{ll} u_{tt}+u_{xxxx}-\frac{2}{\pi}\, \|u_x\|^2_{L^2(0,\pi)}\, u_{xx}=0\quad & x\in(0,\pi)\, ,\ t>0\, ,\\ u(0,t)=u(\pi,t)=u_{xx}(0,t)=u_{xx}(\pi,t)=0\quad & t>0\, , \end{array}\right. \end{equation} where the Navier boundary conditions model a beam hinged at its endpoints. The term $\frac{2}{\pi}\|u_x\|^2_{L^2(0,\pi)}$ measures the geometric nonlinearity of the beam due to its stretching. In Section \ref{appl}, we recall the definitions of nonlinear modes of \eq{truebeam} and of their linear stability, and how this study naturally leads to \eq{hill} with $p$ as in \eq{where} with $\beta\neq1$. The linear stability for problem \eq{truebeam} was recently tackled in \cite{bbfg} by using both the Zhukovskii criterion \cite{zhk} and a generalization of the Lyapunov \cite{lyapunov} criterion due to Li-Zhang \cite[Theorem 1]{lizhang}. These criteria did not allow a full understanding of the linear stability for \eq{truebeam} and several problems had to be left open. In this paper we apply the Burdina criterion \cite{burdina} and we show that it allows us to solve some of the open problems and to give a better description of the resonance tongues for couples of modes of \eq{truebeam}.\par This paper is organized as follows. In the next section we recall some basic facts about equations \eq{ODE} and \eq{hill}. In Section \ref{duffsqMOD} we state our results about the Hill equation $\ddot{\xi}(t)+\big(\gamma+y(t)^2\big)\xi(t)=0$, where $y$ is the solution of \eq{ODE}-\eq{alphabeta}.
In Section \ref{appl} we define the nonlinear modes of \eq{truebeam} and we perform the same analysis for a more general Hill equation in order to study the stability of these modes. All the proofs are postponed until Section~\ref{proofs}. In the final section we list some open problems. \section{Basic properties of the Duffing and the Hill equations}\label{properties} The Duffing equation \eq{ODE} admits periodic solutions whose period depends on the parameter $\delta$ in \eq{alphabeta}. An explicit solution of~\eqref{ODE}, in terms of Jacobi elliptic functions, has been given by Burgreen~\cite{burg}: its period may be computed by using some properties of elliptic functions, without knowing its explicit form. We refer to \cite{abram} for the definitions and the basic properties of the elliptic functions and integrals. For the sake of completeness, and because we need a formula therein, we briefly sketch a different proof of the computation of the period. \begin{proposition} \label{prop:sol_duffing} For all $\delta\in{\mathbb R}$ the solution of the Cauchy problem~\eqref{ODE}-\eqref{alphabeta} is \begin{equation}\label{soluzione_duffing} y(t) = \delta \cn{\bigg[t\sqrt{1+\delta^2},\frac{\delta}{\sqrt{2(1+\delta^2)}}\bigg]}\, , \end{equation} where $\cn{[u,k]}$ is the Jacobi elliptic cosine function. Moreover, the period of the solution~\eqref{soluzione_duffing} is given by \neweq{TE} T(\delta)=4\sqrt2\int_0^1\frac{d\theta}{\sqrt{(2+\delta^2+\delta^2\theta^2)(1-\theta^2)}}\, . \end{equation} \end{proposition} {\em Proof.} The equation \eq{ODE} is conservative; its solutions have constant energy, defined by \neweq{energyy} E(\delta)=\frac{\dot{y}^2}{2}+\frac{y^2}{2}+\frac{y^4}{4}\equiv\frac{\delta^2}{2}+\frac{\delta^4}{4}\, , \end{equation} where $\delta$ is the initial condition in \eq{alphabeta}.
We rewrite \eq{energyy} as \neweq{ancoraenergia} 2\dot{y}^2=(2+\delta^2+y^2)(\delta^2-y^2)\qquad\forall t \end{equation} so that $-|\delta|\leqslant y(t)\le|\delta|$ for all $t$ and $y$ oscillates in this range. If $y(t)$ solves \eq{ODE}-\eq{alphabeta}, then $y(-t)$ solves the same problem: this shows that the period $T(\delta)$ of $y$ is twice the length of an interval of monotonicity of $y$. Since the problem is autonomous, we may assume that $y(0)=-\delta<0$ and $\dot{y}(0)=0$; then we have that $y(T/2)=\delta$ and $\dot{y}(T/2)=0$. By rewriting \eq{ancoraenergia} as \neweq{derivata} \sqrt2 \dot{y}=\sqrt{(2+\delta^2+y^2)(\delta^2-y^2)}\qquad\forall t\in\left(0,\frac{T}{2}\right)\, , \end{equation} by separating variables, and upon integration over the time interval $(0,T/2)$ we obtain $$\frac{T(\delta)}{2}=\sqrt2 \int_{-\delta}^{\delta}\frac{d\, y}{\sqrt{(2+\delta^2+y^2)(\delta^2-y^2)}}\,.$$ Then, using the fact that the integrand is even with respect to $y$ and the change of variable $y=\delta\theta$, we obtain \eq{TE}. $\Box$\par\vskip3mm Proposition \ref{prop:sol_duffing} tells us that the period of $y$ depends on its initial value $\delta$ and therefore on its energy \eq{energyy}: this is clearly due to the nonlinear nature of \eq{ODE}. We point out that a solution $y$ of \eq{ODE} has mean value 0 over a period and that $y(t)$ and its translations $y(t+t_0)$ have the same energy \eq{energyy} for any $t_0\in{\mathbb R}$; therefore, initial conditions different from \eq{alphabeta}, but with the same energy, lead to the same solution of \eq{ODE}, up to a time translation.\par From \eq{TE} we infer that the map $\delta\mapsto T(\delta)$ is strictly decreasing on $(0,+\infty)$ and $$ \lim_{\delta\downarrow0}T(\delta)=2\pi\, ,\qquad\lim_{\delta\to+\infty}T(\delta)=0\, . $$ The period $T(\delta)$ can also be expressed differently.
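Both the explicit solution \eqref{soluzione_duffing} and the period formula \eqref{TE} are easy to check numerically. The following Python sketch is ours and not part of the original analysis; it assumes \texttt{scipy}, whose \texttt{ellipj} takes the parameter $m=k^2$ rather than the modulus $k$:

```python
import numpy as np
from scipy.special import ellipj
from scipy.integrate import quad

delta = 1.5
k2 = delta**2 / (2 * (1 + delta**2))   # squared elliptic modulus k^2

def y(t):
    # the solution (soluzione_duffing); scipy's ellipj returns (sn, cn, dn, ph),
    # so the Jacobi elliptic cosine is the second component
    return delta * ellipj(t * np.sqrt(1 + delta**2), k2)[1]

# residual of y'' + y + y^3 = 0, with y'' approximated by centered differences
h = 1e-4
ts = np.linspace(0.0, 5.0, 200)
max_res = np.max(np.abs((y(ts + h) - 2 * y(ts) + y(ts - h)) / h**2
                        + y(ts) + y(ts)**3))

# the period from the integral (TE); y must be invariant under this time shift
T = quad(lambda th: 4 * np.sqrt(2)
         / np.sqrt((2 + delta**2 + delta**2 * th**2) * (1 - th**2)), 0, 1)[0]
shift_err = np.max(np.abs(y(ts + T) - y(ts)))
```

Both the residual of the Duffing equation and the error in the $T$-periodicity are then at the level of the discretization and quadrature errors.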
With the change of variables $\theta=\cos\alpha$, \eq{TE} becomes \neweq{firstkind} T(\delta)=4\sqrt2 \int_0^{\pi/2}\frac{d\alpha}{\sqrt{2+\delta^2+\delta^2\cos^2\alpha}}= \frac{4}{\sqrt{1+\delta^2}}\, K\left(\frac{\delta}{\sqrt{2(1+\delta^2)}}\right)\, , \end{equation} where $K(\cdot)$ is the complete elliptic integral of the first kind, see \cite{abram}.\par A further constant which plays an important role in our analysis is \begin{equation}\label{costantesigma} \sigma:=\int_0^1\frac{d\theta}{\sqrt{1-\theta^4}}=\int_0^{\pi/2}\frac{d\alpha}{\sqrt{1+\sin^2{\alpha}}}=\frac{1}{\sqrt{2}}K\Bigg(\frac{1}{\sqrt{2}}\Bigg) =\frac{\sqrt{\pi}}{4}\frac{\Gamma(\frac{1}{4})}{\Gamma(\frac{3}{4})}\, . \end{equation} It is in general extremely difficult to establish whether the parameters of the Hill equation \eq{hill} lead to a stable regime. In most cases, one uses suitable criteria which yield sufficient conditions for stability. There are many such criteria, see e.g.\ \cite{cesari,magnus,stoker} and references therein. We state here three criteria which appear appropriate to tackle our problem. \begin{proposition}\label{lyapzhu} Assume that one of the three following facts holds:\par $$(i)\ p\ge0\ \mbox{ and }\ T^3\int_0^T p(t)^2\, dt< \frac{64}{3}\sigma^4\qquad(\sigma\mbox{ as in \eqref{costantesigma}})\, ,$$ $$(ii)\ p\ge0\mbox{ and } \exists\ell\in{\mathbb N}\mbox{ s.t. }\frac{\ell^2\pi^2}{T^2}\leqslant p(t)\le\frac{(\ell+1)^2\pi^2}{T^2}\quad\forall t\, ,$$ $$(iii)\ p>0,\ p\mbox{ admits a unique maximum point and a unique minimum point in }[0,T),\mbox{ and }$$ $$\exists\ell\in{\mathbb N}\mbox{ s.t. }\ \ell\pi<\int_0^T\!\!\sqrt{p(t)}\, dt-\tfrac12\log\tfrac{\max p}{\min p}\le \int_0^T\!\!\sqrt{p(t)}\, dt+\tfrac12\log\tfrac{\max p}{\min p}<(\ell+1)\pi\, .$$ Then the trivial solution of \eqref{hill} is stable. \end{proposition} {\em Proof.} The first criterion is due to Li-Zhang \cite{lizhang} and generalizes the original Lyapunov criterion \cite{lyapunov}.
The second criterion is due to Zhukovskii \cite{zhk}; see also \cite[Test 1, \S 3, Chapter VIII]{yakubovich}. The third criterion is due to Burdina \cite{burdina}; see also \cite[Test 3, \S 3, Chapter VIII]{yakubovich}. $\Box$\par\vskip3mm \section{Stability for the Hill equation with a squared Duffing coefficient}\label{duffsqMOD} In this section, we analyze the stability regions for the equation \begin{equation}\label{squareduffing} \ddot{\xi}(t)+\Big(\gamma+y(t)^2\Big)\xi(t)=0 \end{equation} in the parameter $(\delta,\gamma)$-plane; here $y$ is the solution of~\eqref{ODE}-\eqref{alphabeta}. As a straightforward consequence of Proposition \ref{prop:sol_duffing}, we infer that if $y$ solves \eqref{ODE}-\eqref{alphabeta}, then the squared function $y^2$ is periodic and its period is given by $T(\delta)/2$, that is, $$ \frac{T(\delta)}{2}=2\sqrt2\int_0^1\frac{d\theta}{\sqrt{(2+\delta^2+\delta^2\theta^2)(1-\theta^2)}}\, . $$ Our first result concerns the limit case $\gamma=0$. \begin{theorem}\label{intornorigine} If $\gamma=0$, then the trivial solution of~\eqref{squareduffing} is stable for all $\delta>0$. \end{theorem} This result is a particular case of Theorem~\ref{striscia} below. However, since our proof of Theorem~\ref{intornorigine} makes use of the Li-Zhang \cite{lizhang} generalized Lyapunov criterion, it has its own independent interest and we give its proof in Section~\ref{proofs}. As we know from~\cite[Chapter VIII]{yakubovich}, for all $\ell\in{\mathbb N}$, there exists a resonant tongue $U_\ell$ emanating from the point $(\delta,\gamma)=(0,\ell^2)$. If $(\delta,\gamma)$ belongs to one of these tongues, then the trivial solution of \eqref{squareduffing} is unstable. In Section~\ref{proofs}, we prove the following precise characterization of the first instability region $U_1$. \begin{theorem}\label{exactsolutions} Let $U_1$ be the resonant tongue of \eqref{squareduffing} emanating from $(\delta,\gamma)=(0,1)$. 
Then $$U_1=\left\{(\delta,\gamma)\in{\mathbb R}^2_+;\, 1<\gamma<1+\frac{\delta^2}{2}\right\}\, .$$ \end{theorem} In particular, Theorem \ref{intornorigine} states that, contrary to what happens for the Mathieu equations, the resonance lines do not bend downwards since they remain above the parabola $\gamma=1+\delta^2/2$. Moreover, as a consequence of Theorems~\ref{intornorigine} and~\ref{exactsolutions}, we see that the strip $0\leq\gamma<1$ is a region of stability for~\eqref{squareduffing}; this follows by applying Theorem II, p.~695 in~\cite{yakubovich}. In fact, more can be said on the stability behavior of~\eqref{squareduffing} in a neighborhood of $(\delta,\gamma)=(0,0)$. Even if we initially assumed that $\gamma\geq0$, we may give a full description of what happens when $\gamma<0$. \begin{theorem}\label{striscia} The trivial solution of~\eqref{squareduffing} is: $$ \mbox{stable if}\quad-\frac{\delta^2}{2}<\gamma<1\quad\forall\delta\in{\mathbb R}\, ,\qquad \mbox{unstable if}\quad\gamma<-\frac{\delta^2}{2}\quad\forall\delta\in{\mathbb R}\, . $$ \end{theorem} The proof of this result is also given in Section~\ref{proofs}: in particular, we use there the fact that, if $\gamma\le-\delta^2$, then $\gamma+y(t)^2\le0$ and the instability is trivial. A picture of the stability region described by Theorem~\ref{striscia} is shown in Figure~\ref{zonevere}, where we have included negative values for both $\gamma$ and $\delta$. \begin{figure} \caption{Stability regions (white) and resonance tongues (gray) for~\eqref{squareduffing}}\label{zonevere} \end{figure} Unfortunately, we could not find an explicit representation of the remaining resonant lines, namely the boundaries of the resonance tongues $U_\ell$ for $\ell\ge2$. Thus, we now use the Burdina criterion to determine sufficient conditions for the stability of the trivial solution of \eqref{squareduffing}.
To this end, we introduce the function $$\Phi(\delta,\gamma):=2\sqrt{2}\int_0^{\pi/2}\sqrt{\tfrac{\gamma+\delta^2\sin^2\theta}{2+\delta^2+\delta^2\sin^2\theta}}\, d\theta \qquad\forall\delta,\gamma>0\, .$$ Note that $\Phi$ is positive and smooth; this function may also be expressed in terms of elliptic integrals.\par In Section~\ref{proofs} we will prove the following statement. \begin{theorem}\label{primo} Let $y$ be the solution of the Duffing equation \eqref{ODE} with initial conditions \eqref{alphabeta}. If there exists $\ell\in{\mathbb N}$ such that \neweq{primavera} \log\left(1+\frac{\delta^2}{\gamma}\right)<2\cdot\min\Big\{\Phi(\delta,\gamma)-\ell\pi\, ,\, (\ell+1)\pi-\Phi(\delta,\gamma)\Big\}\, , \end{equation} then the trivial solution of \eqref{squareduffing} is stable. \end{theorem} The trick in the proof is to use the Burdina criterion {\em without} using the explicit form of the solution of \eq{ODE}. In particular, Theorem \ref{primo} has the following elegant consequence: \begin{corollary}\label{corparabola} Let $y$ be the solution of the Duffing equation \eqref{ODE} with initial conditions \eqref{alphabeta}. Then the trivial solution of $$\ddot{\xi}(t)+\Big(2+\delta^2+y(t)^2\Big)\xi(t)=0$$ is stable. \end{corollary} The proof follows directly from Theorem~\ref{primo} (case $\ell=1$) by observing that $\Phi(\delta,2+\delta^2)=\sqrt{2}\pi$ and $$ \log\left(1+\frac{\delta^2}{2+\delta^2}\right)<\log 2<2\pi(\sqrt2 -1)=2\pi\cdot\min\Big\{\sqrt2 -1\, ,\, 2-\sqrt2 \Big\}\qquad\forall\delta>0\, . $$ In order to obtain a more precise picture of the resonant tongues other than the first, we use Theorem \ref{primo} to deduce some information on their behavior as $\delta\to0$. \begin{theorem}\label{asymp} Let $\ell\in{\mathbb N}$ $(\ell\ge2)$ and let $U_\ell$ be the resonant tongue emanating from $(\delta,\gamma)=(0,\ell^2)$. 
If $(\delta,\gamma)\in U_\ell$, then \neweq{twoparabolas} \ell^2+\left(\frac{3\ell^2}{4}-\frac12 -\frac{1}{\pi\ell}\right)\delta^2+O(\delta^4)\le\gamma\le \ell^2+\left(\frac{3\ell^2}{4}-\frac12 +\frac{1}{\pi\ell}\right)\delta^2+O(\delta^4)\quad\mbox{as }\delta\to0\, . \end{equation} Therefore, for all $\gamma\ge0$ (with $\gamma\neq1$) there exists $\delta_\gamma>0$ such that if $0<\delta<\delta_\gamma$ and if $y$ is the solution of \eqref{ODE}-\eqref{alphabeta}, then the trivial solution of \eqref{squareduffing} is stable. \end{theorem} A further remark about the comparison between the three criteria reported in Proposition \ref{lyapzhu} is in order. In the left picture of Figure \ref{num_B} we plot the (white) stability regions obtained numerically through the Burdina criterion, see \eq{primavera}. The second lowest stable (white) region contains the parabola $\gamma=2+\delta^2$. In the right picture of Figure \ref{num_B} we see that the Burdina criterion performs better than the Zhukovskii criterion, at least in the region where $\gamma>1$: except for the first stability region, the regions found stable by the Zhukovskii criterion are strictly included in those determined by the Burdina criterion. On the other hand, back to the left picture, we see that the Burdina criterion does not ensure stability in a neighborhood of $(0,0)$; in this case the Zhukovskii criterion performs slightly better. But the $L^2$ generalized Lyapunov criterion by Li-Zhang is the one with the best performance in the region $\gamma<1$: a hint of this fact is highlighted in the proof of Theorem \ref{intornorigine}. \begin{figure} \caption{Stability regions (white) for \eq{squareduffing}}\label{num_B} \end{figure} Theorem \ref{primo} only gives sufficient conditions for stability, and the actual stability regions are considerably wider than those defined by \eq{primavera}. In order to have a more precise picture of the resonant tongues, we proceed numerically.
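The computation behind such pictures can be sketched as follows in Python (our sketch, with \texttt{scipy} playing the role of Matlab's \texttt{ellipj} and \texttt{ode45}): the fundamental matrix of \eq{squareduffing} is integrated over one period $T(\delta)/2$ of the coefficient, and $|\mathrm{trace}|>2$ detects instability.

```python
import numpy as np
from scipy.special import ellipj, ellipk
from scipy.integrate import solve_ivp

def monodromy_trace(delta, gamma):
    """Trace of the monodromy matrix of xi'' + (gamma + y(t)^2) xi = 0,
    computed over one period T(delta)/2 of the coefficient y^2;
    |trace| < 2 means stability, |trace| > 2 instability."""
    k2 = delta**2 / (2 * (1 + delta**2))          # parameter m = k^2
    half_T = 2 * ellipk(k2) / np.sqrt(1 + delta**2)   # T(delta)/2

    def rhs(t, v):
        cn = ellipj(t * np.sqrt(1 + delta**2), k2)[1]
        q = gamma + (delta * cn)**2
        # v = (xi1, xi1', xi2, xi2'): the two columns of the fundamental matrix
        return [v[1], -q * v[0], v[3], -q * v[2]]

    sol = solve_ivp(rhs, (0.0, half_T), [1.0, 0.0, 0.0, 1.0],
                    rtol=1e-10, atol=1e-12)
    xi1, _, _, xi2p = sol.y[:, -1]
    return xi1 + xi2p
```

For instance, $(\delta,\gamma)=(0.5,0.5)$ lies in the stability strip of Theorem \ref{striscia}, while $(0.5,1.1)$ lies inside the tongue $U_1$ of Theorem \ref{exactsolutions}, and the computed traces are consistent with this.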
In Figure \ref{realplot} we plot the obtained stability (white) and instability (gray) regions for \eq{squareduffing}. It turns out that the resonant tongues $U_\ell$ (with $\ell\ge2$) are extremely narrow, see \eq{twoparabolas}. \begin{figure} \caption{Stability regions (white) for \eq{squareduffing}}\label{realplot} \end{figure} The boundaries of $U_\ell$ (resonant lines) are found numerically by computing the trace of the monodromy matrix of the Hill equation \eqref{squareduffing} and by plotting its level lines $\pm2$. To do so, we used the software Matlab. The first step was to introduce a grid of values of $\delta$ and $\gamma$: for each value of $\delta$, with the built-in function \texttt{ellipj}, which evaluates the Jacobi elliptic functions, we defined a Matlab function which computed the solution $y_\delta$ of the Duffing equation with initial conditions \eq{alphabeta}, using~\eqref{soluzione_duffing}. Then, by using the Matlab ODE solver \texttt{ode45}, we integrated the principal fundamental matrix (normalized at $t=0$) over the interval $[0,T(\delta)]$ for every point $(\delta,\gamma)$. This method introduced some errors, both in the computation of the Jacobi cosine elliptic function and in the integration of the ODE. For this reason, and because the resonant tongues are very thin in a neighborhood of $\delta=0$, the plot was quite delicate. In order to give a clearer representation of the tongues, we plotted the level lines $\pm1.98$ instead of $\pm2$ of the trace of the monodromy matrix. The result is shown in Figure \ref{realplot}. \section{Instabilities in a nonlinear nonlocal beam equation}\label{appl} \subsection{Nonlinear modes} In this section we define the nonlinear modes of \eq{truebeam} and what we mean by their stability. We start by seeking particular solutions of \eq{truebeam} by separating variables, that is, in the form \neweq{form} u_m(x,t)=\Theta_m(t)\sin(mx)\qquad(m\in{\mathbb N})\, .
\end{equation} A simple computation shows that the function $\Theta_m$ satisfies \neweq{ODED} \ddot{\Theta}_m(t)+m^4\Theta_m(t)+m^4\Theta_m(t)^3=0\qquad(t>0)\, , \end{equation} which is a time-scaled version ($t\mapsto m^2t$) of the Duffing equation \eq{ODE}.\par We call a function $u_m$ of the form \eq{form} an $m$-th {\bf nonlinear mode} of \eq{truebeam}. Note that for any $m$ there exist infinitely many $m$-th nonlinear modes, depending on the initial value; they are not proportional to each other and they have different periods. Their shape is described by the solutions $\Theta_m$ of \eq{ODED}: for this reason, with an abuse of language, we also call $\Theta_m$ a nonlinear mode of \eq{truebeam}.\par We are interested in studying the stability of the nonlinear modes. To this end, we consider solutions of \eq{truebeam} of the form \neweq{form2} u(x,t)=w(t)\sin(mx)+z(t)\sin(nx) \end{equation} for some integers $n,m\ge1$, $n\neq m$. After inserting \eq{form2} into \eq{truebeam} we reach the following (nonlinear) system of ODEs: \neweq{cw} \left\{\begin{array}{l} \ddot{w}(t)+m^4w(t)+m^2\big(m^2w(t)^2+n^2z(t)^2\big)w(t)=0\, ,\\ \ddot{z}(t)+n^4z(t)+n^2\big(m^2w(t)^2+n^2z(t)^2\big)z(t)=0\ , \end{array}\right. \end{equation} to which we associate the initial conditions \neweq{initialsyst} w(0)=w_0\, ,\ \dot{w}(0)=w_1\, ,\quad z(0)=z_0\, ,\ \dot{z}(0)=z_1\, . \end{equation} The constant energy of the Hamiltonian system \eq{cw} is given by \begin{eqnarray} {\mathcal E}(w_0,w_1,z_0,z_1)&=&\frac{\dot{w}^2}{2}+\frac{\dot{z}^2}{2}+m^4\frac{w^2}{2}+n^4\frac{z^2}{2}+ \frac{(m^2w^2+n^2z^2)^2}{4}\notag \\ &\equiv&\frac{w_1^2}{2}+\frac{z_1^2}{2}+m^4\frac{w_0^2}{2}+n^4\frac{z_0^2}{2}+\frac{(m^2w_0^2+n^2z_0^2)^2}{4}\, .\label{uffa} \end{eqnarray} Note that if $z_0=z_1=0$, then the solution of \eq{cw}-\eq{initialsyst} is $(w,z)=(\Theta_m,0)$ where $\Theta_m$ is a nonlinear mode, namely a solution of the Duffing equation \eq{ODED}.
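The computation leading from the ansatz \eq{form2} to the system \eq{cw} can be checked symbolically. A sketch of ours for a concrete couple $(m,n)$, assuming \texttt{sympy} is available:

```python
import sympy as sp

x, t = sp.symbols('x t')
w, z = sp.Function('w'), sp.Function('z')
m, n = 1, 2   # a concrete couple; any integers m != n work the same way

u = w(t) * sp.sin(m * x) + z(t) * sp.sin(n * x)           # the ansatz (form2)
norm_ux2 = sp.integrate(sp.diff(u, x)**2, (x, 0, sp.pi))  # ||u_x||^2_{L^2(0,pi)}
res = (sp.diff(u, t, 2) + sp.diff(u, x, 4)
       - (2 / sp.pi) * norm_ux2 * sp.diff(u, x, 2))       # left-hand side of (truebeam)

# project the residual on sin(mx) and sin(nx): the two equations of (cw)
eq_w = sp.simplify((2 / sp.pi) * sp.integrate(res * sp.sin(m * x), (x, 0, sp.pi)))
eq_z = sp.simplify((2 / sp.pi) * sp.integrate(res * sp.sin(n * x), (x, 0, sp.pi)))

cw_w = sp.diff(w(t), t, 2) + m**4 * w(t) + m**2 * (m**2 * w(t)**2 + n**2 * z(t)**2) * w(t)
cw_z = sp.diff(z(t), t, 2) + n**4 * z(t) + n**2 * (m**2 * w(t)**2 + n**2 * z(t)**2) * z(t)

ok_w = sp.simplify(eq_w - cw_w) == 0   # both projections reduce exactly to (cw)
ok_z = sp.simplify(eq_z - cw_z) == 0
```

Taking $z\equiv0$ in the same computation recovers \eq{ODED}.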
In order to analyze the stability of the nonlinear mode $\Theta_m$ with respect to the nonlinear mode $\Theta_n$ we argue as follows. We take initial data in \eq{initialsyst} such that $$ 0<|z_0|+|z_1|\ll|w_0|+|w_1|\, . $$ This means that the energy \eq{uffa} is initially almost totally concentrated on the $m$-th mode, that is, ${\mathcal E}(w_0,w_1,z_0,z_1)\approx{\mathcal E}(w_0,w_1,0,0)$. We then wonder whether this remains true for all time $t>0$ for the solution of \eq{cw}. To this end, we introduce the following notion of stability. \begin{definition}\label{defstabb} The mode $\Theta_m$ is said to be {\bf linearly stable} ({\bf unstable}) with respect to the $n$-th mode $\Theta_n$ if $\xi\equiv0$ is a stable (unstable) solution of the linear Hill equation \neweq{hill2} \ddot{\xi}(t)+\Big(n^4+m^2n^2\Theta_m(t)^2\Big)\xi(t)=0\, ,\quad \forall t\, . \end{equation} \end{definition} There also exist stronger definitions of stability which, in some cases, can be shown to be equivalent; see e.g.\ \cite{ghg}. For nonlinear PDEs such as \eq{truebeam}, Definition \ref{defstabb} is sufficiently precise to characterize the instabilities of the nonlinear modes of the equation, see \cite{bbfg,bfg1,bergaz,fergazmor} and also the example at the end of the next subsection.\par The relevant parameter for the stability of the trivial solution of \eq{hill2} turns out to be \[\omega:=\frac{n^2}{m^2}\, .\] Note that, if $\Theta_m(t)$ is a solution of~\eqref{ODED} with initial conditions \neweq{initialOmega} \Theta_m(0)=\delta\quad\mbox{and}\quad\dot{\Theta}_m(0)=0\, , \end{equation} then $\Theta_\omega(t)=\Theta_m(t/mn)$ solves the following time-scaled version of the Duffing equation \eq{ODE}: \begin{equation}\label{duffomega} \ddot{\Theta}_\omega(t)+\frac{1}{\omega}\Theta_\omega(t)+\frac{1}{\omega}\Theta_\omega(t)^3=0, \end{equation} with the same initial conditions.
From Burgreen~\cite{burg}, we know that \begin{equation}\label{solomega} \Theta_\omega(t)=\delta\cn\Bigg[t\,\sqrt{\frac{1+\delta^2}{\omega}},\frac{\delta}{\sqrt{2(1+\delta^2)}}\Bigg]. \end{equation} Moreover, its period is given by \begin{equation*}\label{Tomega} T_\omega(\delta)=4\,\sqrt{\frac{\omega}{1+\delta^2}}\,K\bigg(\frac{\delta}{\sqrt{2(1+\delta^2)}}\bigg), \end{equation*} so that $\lim_{\delta\to 0}T_\omega(\delta)=2\pi\sqrt{\omega}$. The counterpart of~\eqref{ancoraenergia} reads \begin{equation} 2\dot{\Theta}_\omega^2=\frac{1}{\omega}(2+\delta^2+\Theta_\omega^2)(\delta^2-\Theta_\omega^2)\qquad\forall t\, . \end{equation} Through the time scaling $t\mapsto\frac{t}{mn}$,~\eqref{hill2} is equivalent to the Hill equation $$ \ddot{\xi} + \bigg(\frac{n^2}{m^2} + \Theta_m\bigg(\frac{t}{mn}\bigg)^2\bigg)\xi=0\, , $$ that is, \begin{equation}\label{hill_scalata} \ddot{\xi}(t)+(\omega+\Theta_\omega(t)^2)\xi(t)=0\, , \end{equation} where $\Theta_\omega$ is given in \eq{solomega}. \subsection{Stability of the nonlinear modes} We study here in detail the stability of the nonlinear modes of \eq{truebeam}, according to Definition \ref{defstabb}. In particular, we prove the following statement, left open in \cite{bbfg}. \begin{theorem}\label{facile} If $m>n$ then the mode $\Theta_m$ is linearly stable with respect to the mode $\Theta_n$, independently of the initial value $\delta\neq0$ in \eqref{initialOmega}. \end{theorem} The proof of this result follows from a more general statement, Theorem \ref{primalingua} below; therefore we omit it.\par The situation is far more complicated when $m<n$: several subcases have to be distinguished. From~\cite[Theorem 15]{bbfg} and~\cite[Theorem 1.1]{cazw2}, we learn that an asymptotic representation of the resonant tongues as $\delta\to\infty$ in \eq{initialOmega} can be given.
Let us define the two sets \begin{equation}\label{strisceinfinito} I_U:=\bigcup_{k\in{\mathbb N}}\Big((k+1)(2k+1),(k+1)(2k+3)\Big)\qquad I_S:=\bigcup_{k\in{\mathbb N}}\Big(k(2k+1),(k+1)(2k+1)\Big). \end{equation} Note that $\overline{I_S\cup I_U}=[0,\infty)$. Then, by using the very same method as in \cite{cazw2}, the following result was obtained in \cite{bbfg}: \begin{proposition}\label{known} Let $I_S$ and $I_U$ be as in \eqref{strisceinfinito} and let $\Theta_\omega$ be as in \eqref{solomega}. For every $\omega>0$, there exists $\bar{\delta}_\omega>0$ such that, for all $\delta>\bar{\delta}_\omega$:\par\noindent $(i)$ if $\omega\in I_U$, then the trivial solution of equation~\eqref{hill_scalata} is unstable and $\Theta_m$ is linearly unstable with respect to $\Theta_n$;\par\noindent $(ii)$ if $\omega\in I_S$, then the trivial solution of equation~\eqref{hill_scalata} is stable and $\Theta_m$ is linearly stable with respect to $\Theta_n$. \end{proposition} Similarly to what we did in Section~\ref{duffsqMOD}, we apply the Burdina criterion to equation~\eqref{hill_scalata}. To this end, we define the function \begin{equation}\label{Psi} \Psi(\delta,\omega)=2\sqrt{2\,\omega}\int_0^{\pi/2}\sqrt{\tfrac{\omega+\delta^2\sin^2\theta}{2+\delta^2+\delta^2\sin^2\theta}}\, d\theta \qquad\forall\delta,\omega>0\, . \end{equation} Note that the map $\delta\mapsto\Psi(\delta,\omega)$ is strictly decreasing for all $\omega\ge1$ and that $$\Psi(0,\omega)=\pi\omega\, ,\qquad\lim_{\delta\to\infty}\Psi(\delta,\omega)= 2\sqrt{2\,\omega}\int_0^{\pi/2}\frac{\sin\theta}{\sqrt{1+\sin^2\theta}}\, d\theta=\pi\, \sqrt{\frac{\omega}{2}}\, .$$ By applying the Burdina criterion, see Proposition \ref{lyapzhu} $(iii)$, we obtain the following sufficient condition for the stability of the trivial solution of~\eqref{hill_scalata}. \begin{theorem}\label{burdinaomega} Let $\Theta_\omega$ be as in \eqref{solomega} and let $\Psi$ be as in \eqref{Psi}.
If there exists $\ell\in{\mathbb N}$ such that \begin{equation*}\label{fantasia} \log\Big(1+\frac{\delta^2}{\omega}\Big)<2\cdot\min\Big\{\Psi(\delta,\omega)-\ell\pi\, ,\, (\ell+1)\pi-\Psi(\delta,\omega)\Big\}, \end{equation*} then the trivial solution of~\eqref{hill_scalata} is stable and $\Theta_m$ is linearly stable with respect to $\Theta_n$. \end{theorem} In Figure \ref{burdina2} we display the regions described by Theorem \ref{burdinaomega}. \begin{figure} \caption{Stability regions (white) obtained with the sufficient condition of Theorem \ref{burdinaomega}}\label{burdina2} \end{figure} In particular, Theorem~\ref{burdinaomega} enables us to determine the behavior of the resonant tongues as $\delta\to0$. \begin{corollary}\label{asympomega} Let $\ell\in{\mathbb N}$ $(\ell\ge2)$ and let $U_\ell$ be the resonant tongue of \eqref{hill_scalata} emanating from $(\delta,\omega)=(0,\ell)$. If $(\delta,\omega)\in U_\ell$, then $$ \ell+\left(\frac{3\ell}{8}-\frac{1}{4} -\frac{1}{2\pi\ell}\right)\delta^2+O(\delta^4)\le\omega\le \ell+\left(\frac{3\ell}{8}-\frac{1}{4} +\frac{1}{2\pi\ell}\right)\delta^2+O(\delta^4)\quad\mbox{as }\delta\to0\, . $$ \end{corollary} The next result, whose proof is given in Section~\ref{proofs}, characterizes the first resonance tongue. \begin{theorem}\label{primalingua} The resonant tongue $U_1$ of \eqref{hill_scalata} emanating from $(\delta,\omega)=(0,1)$ is given by $$ U_1 =\big\{(\delta,\omega)\in\mathbb{R}_+^2:\, 1<\omega<\varphi(\delta)\big\} $$ where $\varphi:\mathbb{R}_+\to\mathbb{R}_+$ is a continuous function such that $$ \varphi(\delta)>1\ \forall\delta>0\, ,\qquad\varphi(\delta)\le1+\bigg(\frac{1}{8}+\frac{1}{2\pi}\bigg)\delta^2+O(\delta^4)\mbox{ as }\delta\to0\, ,\qquad \lim_{\delta\to\infty}\varphi(\delta)=3\, . $$ Moreover, the strip $(0,\infty)\times(0,1)$ is a stability region of the $(\delta,\omega)$-plane.
\end{theorem} A straightforward consequence of Corollary \ref{asympomega} and Theorem \ref{primalingua} reads \begin{corollary}\label{newcor} For all $\omega>0$ (with $\omega\neq1$) there exists $\delta_\omega>0$ such that if $0<\delta<\delta_\omega$ and if $\Theta_\omega$ is as in \eqref{solomega}, then the trivial solution of~\eqref{hill_scalata} is stable. Therefore, for any integers $m\neq n$, there exists $\widehat{\delta}=\widehat{\delta}(n/m)>0$ such that the nonlinear mode $\Theta_m$, solution of \eqref{ODED}-\eqref{initialOmega}, is linearly stable with respect to $\Theta_n$ whenever $0<|\delta|<\widehat{\delta}$; moreover, $\widehat{\delta}=+\infty$ if and only if $n<m$. \end{corollary} By proceeding as in Section~\ref{duffsqMOD}, we numerically obtained a complete picture of the resonance tongues. We fixed a value of $m$ in~\eqref{hill2} and we integrated system~\eqref{ODED}-\eqref{hill2} allowing $n$ to take non-integer values, so that the squared ratio $\omega$ in~\eqref{hill_scalata} could also take any positive real value. The resonant lines were obtained by computing the trace of the monodromy matrix of the Hill equation~\eqref{hill2} for different values of $\delta$ and $\omega$: since the resonance tongues are very narrow for small $\delta$, the plot of the level curves $\pm2$ in the $(\delta,\omega)$-plane was quite delicate, and we obtained an approximate picture of the stability regions by plotting instead the level curves $\pm1.98$. The result is shown in Figure~\ref{monodromymnd1}. \begin{figure} \caption{Stability regions (white) for~\eq{hill_scalata}}\label{monodromymnd1} \end{figure} Due to the asymptotic behavior of the stability regions stated in Proposition~\ref{known}, the resonance tongues are much wider for large values of $\delta$ than in a neighborhood of the $\omega$ axis.
For this reason, if we choose a value $\omega$ of the squared ratio $n^2/m^2$ and we increase $\delta$ starting from the point $(0,\omega)$, we cross some tiny resonance tongues before reaching the final stability or instability region, as described in Proposition~\ref{known}. For a given squared ratio $\omega=n^2/m^2$ of spatial frequencies, in Table \ref{crossings} we report the (minimum) number of crossings of the resonance lines, starting from $\delta=0$ up to $\delta\to\infty$. Since by Corollary \ref{newcor} we know that for $\delta\to0^+$ we always start in a stability region, if this number is even (resp.\ odd) then we end up in a stability (resp.\ instability) region as $\delta\to\infty$. \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline $\omega\in$ & $(0,1)$ & $(1,2]$ & $(2,3)$ & $\{3\}$ & $(3,4]$ & $(4,5]$ & $(5,6)$ & $\{6\}$ & $(6,7]$\\ \hline crossings & $0$ & $1$ & $3$ & $2$ & $4$ & $6$ & $8$ & $7$ & $9$\\ \hline \end{tabular} \caption{Number of resonance lines crossed before reaching the final stability/instability region.}\label{crossings} \end{center} \end{table} Let us emphasize that different couples $(m,n)$ may have the very same stability behavior. Fix some $(m,n)$ and consider all the couples $(km,kn)$ for any integer $k$. Then the corresponding $\omega$ is the same and hence also the stability behavior. What changes is the time scaling: by applying the scaling $t\mapsto kt$, we recover the very same system \eq{cw}, and the occurrence of a possible instability (as in the plots of Figure \ref{plots3} below) will appear delayed in time. Hence, what really counts for the stability of the nonlinear modes is the {\em ratio} $\omega=n^2/m^2$ of the squared spatial frequencies.\par Let us conclude with a particular example which shows how the instability for \eq{cw}-\eq{initialsyst} appears. We take $m=1$ and $n=2$, so that $\omega=4$ and we have the following consequence of Theorem \ref{burdinaomega}.
\begin{corollary}\label{cor4} If one of the following two conditions holds $$\log\left(1+\frac{\delta^2}{4}\right)<2\cdot\min\big\{\Psi(\delta,4)-2\pi,3\pi-\Psi(\delta,4)\big\}\, ,$$ $$\log\left(1+\frac{\delta^2}{4}\right)<2\cdot\min\big\{\Psi(\delta,4)-3\pi,4\pi-\Psi(\delta,4)\big\}\, ,$$ then the mode $\Theta_1$ is linearly stable with respect to the mode $\Theta_2$. \end{corollary} Numerically, one sees that Corollary \ref{cor4} guarantees the linear stability of the mode $\Theta_1$ with respect to the mode $\Theta_2$ whenever \neweq{interval} \delta\in(0,1.167)\cup(1.277,2.63)\, , \end{equation} while Proposition \ref{known} guarantees linear stability for $\delta>\overline{\delta}$ for a sufficiently large $\overline{\delta}$. Therefore, the instability range for the initial semi-amplitude $\delta$ lies in the complement of \eq{interval}. According to Figure \ref{monodromymnd1} we expect it to be composed of two disconnected intervals. In order to find it with precision, we numerically plotted the solution of \eq{hill_scalata} with $\Theta_\omega$ defined in \eq{solomega} and $\omega=4$. We could observe instability, that is, exponential-like blow-up of the solution towards $\pm\infty$ as $t\to+\infty$, for $\delta\in(2.93,3.45)$, which lies in the complement of \eq{interval} and belongs to the resonance tongue emanating from $(0,2)$ in the $(\delta,\omega)$-plane. This also means that $\overline{\delta}\approx3.45$. Then we increased $\delta$ with step $10^{-4}$ but we detected no instability for $\delta\in(1.167,1.277)$, which contains the intersection of the resonant tongue emanating from $(\delta,\omega)=(0,3)$ with the line $\omega=4$: this means that this region is extremely thin and/or that the modulus of the trace of the monodromy matrix only slightly exceeds 2.
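The interval \eq{interval} can be reproduced by evaluating the condition of Corollary \ref{cor4} directly, with $\Psi$ computed by quadrature; a Python sketch of ours (the function names are illustrative):

```python
import numpy as np
from scipy.integrate import quad

def Psi(delta, omega):
    # the function (Psi): 2*sqrt(2*omega) times an integral over (0, pi/2)
    f = lambda th: np.sqrt((omega + delta**2 * np.sin(th)**2)
                           / (2 + delta**2 + delta**2 * np.sin(th)**2))
    return 2 * np.sqrt(2 * omega) * quad(f, 0, np.pi / 2)[0]

def cor4_stable(delta):
    """True if one of the two sufficient conditions of Corollary cor4 holds."""
    lhs = np.log(1 + delta**2 / 4)
    psi = Psi(delta, 4)
    return any(lhs < 2 * min(psi - ell * np.pi, (ell + 1) * np.pi - psi)
               for ell in (2, 3))
```

Scanning $\delta$ on a fine grid with this test recovers \eq{interval} up to the grid resolution: for instance the condition holds at $\delta=0.5$ and $\delta=2$, and fails at $\delta=1.2$ and $\delta=3$.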
Overall, this means that these instabilities are practically irrelevant: on the one hand, they have a very small probability of appearing; on the other hand, even if they do appear, they only give rise to tiny transfers of energy. For any $\omega$ one then expects to see ``true'' instabilities only for large values of $\delta$.\par We also plotted the solution of \eq{cw}-\eq{initialsyst}: we considered the system \neweq{cw12} \left\{\begin{array}{l} \ddot{w}(t)+w(t)+\big(w(t)^2+4z(t)^2\big)w(t)=0\, ,\\ \ddot{z}(t)+16z(t)+4\big(w(t)^2+4z(t)^2\big)z(t)=0\, ,\\ w(0)=\delta\, ,\ z(0)=10^{-3}\delta\, ,\ \dot{w}(0)=\dot{z}(0)=0\, . \end{array}\right. \end{equation} In Figure \ref{plots3} we display some of the pictures that we obtained. \begin{figure} \caption{The solutions $w$ (gray) and $z$ (black) of \eq{cw12}}\label{plots3} \end{figure} It appears clearly that, in the first picture, $z$ remains small and no instability occurs. In the second picture, $z$ suddenly grows, which means instability. In the third picture the same phenomenon is accentuated, while in the fourth picture it is delayed, thereby getting ready to return to a stability regime for $\delta>3.454$. This is the pattern that can be observed in \eq{cw}-\eq{initialsyst} for any value of $\omega=n^2/m^2$. This perfect agreement with the linear stability analysis justifies Definition \ref{defstabb} and the use of linearization. \section{Proofs}\label{proofs} Since we need a formula stated therein, we prove Theorem~\ref{primo} first. \noindent{\bf Proof of Theorem~\ref{primo}.} We may assume that $y(0)=-\delta<0$ so that $\dot y>0$ in $(0,T/2)$.
Then we get \begin{eqnarray} \int_0^{T/2}\sqrt{\gamma+y(t)^2}\, dt &=& 2\int_0^{T/4}\sqrt{\gamma+y(t)^2}\, dt \notag \\ \mbox{by \eq{derivata} \ } &=& 2\sqrt{2}\int_0^{T/4}\sqrt{\tfrac{\gamma+y(t)^2}{(2+\delta^2+y(t)^2)(\delta^2-y(t)^2)}}\, \dot{y}(t)\, dt \notag \\ \mbox{using $y(t)=\delta\sin\theta$ \ } &=& \Phi(\delta,\gamma)\, .\label{super} \end{eqnarray} Moreover, if we put $p(t)=\gamma+y(t)^2$, we see that $$\log\frac{\max p}{\min p}=\log\left(1+\frac{\delta^2}{\gamma}\right)\, .$$ Then, by Proposition \ref{lyapzhu} ($iii$), we infer that the trivial solution of \eqref{squareduffing} is stable whenever \eq{primavera} holds. $\Box$\par\vskip3mm \noindent {\bf Proof of Theorem~\ref{intornorigine}.} By using \eqref{TE} and by arguing as for~\eqref{super}, we obtain \begin{align} g(\delta)&:=\frac{T(\delta)^3}{8} \int_{0}^{T(\delta)/2} y(t)^4\,dt\notag\\ \ &=(2\sqrt{2})^4 \bigg(\int_{0}^{T(\delta)/4}\frac{y(t)^4\dot{y}(t)}{\sqrt{(2+\delta^2+y(t)^2)(\delta^2-y(t)^2)}}\,dt\bigg) \bigg(\int_{0}^{1}\frac{d\theta}{\sqrt{(2+\delta^2+\delta^2\theta^2)(1-\theta^2)}}\bigg)^3 \notag\\ \ &=64\bigg(\int_{0}^{\pi/2}\frac{\sin^4\alpha}{\sqrt{\frac{2}{\delta^2} + 1 + \sin^2\alpha}}\,d\alpha\bigg)\bigg(\int_0^{\pi/2}\frac{d\alpha}{\sqrt{\frac{2}{\delta^2} + 1 + \sin^2\alpha}}\bigg)^3.\notag \end{align} Hence, the map $\delta\mapsto g(\delta)$ is strictly increasing; furthermore, $g(0)=0$. Thus, for all $\delta>0$, \begin{align} 0<g(\delta)<\lim_{\delta\to\infty}g(\delta)&=64\bigg(\int_{0}^{\pi/2}\frac{\sin^4\alpha}{\sqrt{1+\sin^2\alpha}}\,d\alpha\bigg) \bigg(\int_{0}^{\pi/2}\frac{d\alpha}{\sqrt{1+\sin^2\alpha}}\bigg)^3 \notag\\ \ \mbox{using~\eqref{costantesigma}}\quad &=64\bigg(\int_{0}^{\pi/2}\frac{\sin^4\alpha}{\sqrt{1+\sin^2\alpha}}\,d\alpha\bigg)\sigma^3. 
\label{bound_g} \end{align} Let us now put $$ I := \int_{0}^{\pi/2}\frac{\sin^4\alpha}{\sqrt{1+\sin^2\alpha}}\,d\alpha =\int_{0}^{\pi/2}\frac{\sin^2\alpha}{\sqrt{1+\sin^2\alpha}}\,d\alpha - \int_{0}^{\pi/2}\frac{\sin^2\alpha\cos^2\alpha}{\sqrt{1+\sin^2\alpha}}\,d\alpha\, , $$ so that, integrating by parts in the second addend, we obtain $$ I=\bigg(\int_{0}^{\pi/2}\frac{\sin^2\alpha}{\sqrt{1+\sin^2\alpha}}\,d\alpha\bigg) +\bigg(\sigma-\int_{0}^{\pi/2}\frac{\sin^2\alpha}{\sqrt{1+\sin^2\alpha}}\,d\alpha -2I\bigg)\, , $$ which yields $I=\frac{\sigma}{3}$. Combined with \eqref{bound_g} and the monotonicity of $g$, this shows that $g(\delta)<\frac{64}{3}\sigma^4$ for all $\delta>0$. We have thus proved that $$ \frac{T(\delta)^3}{8} \int_{0}^{T(\delta)/2} y(t)^4\,dt<\frac{64}{3}\sigma^4\qquad\forall\delta>0\, , $$ where $T(\delta)$ is the period of the solution $y(t)$ of the Duffing equation, as defined in Proposition~\ref{prop:sol_duffing}. By recalling that the period of $y^2$ is $T(\delta)/2$, Proposition~\ref{lyapzhu} ($i$) applied to equation~\eq{squareduffing}, together with the latter inequality, ensures the stability of the trivial solution of~\eqref{squareduffing} if $\gamma=0$. $\Box$\par\vskip3mm \noindent{\bf Proof of Theorem~\ref{exactsolutions}.} Let us take $\gamma=1$. Then, one of the solutions of~\eq{squareduffing} is \[\xi(t) = y(t) = \delta \cn{\bigg[t\sqrt{1+\delta^2},\frac{\delta}{\sqrt{2(1+\delta^2)}}\bigg]}\, ,\] where, as usual, $y(t)$ is the solution of \eq{ODE}-\eq{alphabeta}, explicitly given by \eq{soluzione_duffing}. Since this solution is $T(\delta)$-periodic, its period is twice the period of the coefficient of the Hill equation \eq{squareduffing}. Thus, using Floquet theory, we deduce that the eigenvalues of the monodromy matrix for $\gamma=1$ are both $-1$.
This shows that the line $\gamma=1$ is part of the boundary of $U_1$.\par From \cite{abram} we recall that, for all $0<k<1$: \[\frac{\partial \sn(u,k)}{\partial u} = \cn(u,k)\dn(u,k),\qquad \frac{\partial \cn(u,k)}{\partial u} = -\sn(u,k)\dn(u,k),\] \begin{equation}\frac{\partial \dn(u,k)}{\partial u} = -k^2\cn(u,k)\sn(u,k),\qquad\dn(u,k)^2-k^2\cn(u,k)^2=1-k^2.\label{idenjacobi} \end{equation} Let us take $k = \frac{\delta}{\sqrt{2(1+\delta^2)}}$ and, since there are no ambiguities, we put $\sn(u,k)=\sn(u)$. Consider the function \neweq{considerxi} \xi(t) = \sn\Big(t\sqrt{1+\delta^2}\Big) \end{equation} so that, from the differential equalities \eq{idenjacobi}, we infer that: \[\dot{\xi}(t) =\sqrt{1+\delta^2}\cn(t\sqrt{1+\delta^2})\dn(t\sqrt{1+\delta^2}),\] \begin{eqnarray*} \ddot{\xi}(t) &=& -(1+\delta^2)\sn(t\sqrt{1+\delta^2})\Big(\dn(t\sqrt{1+\delta^2})^2+\frac{\delta^2}{2(1+\delta^2)}\cn(t\sqrt{1+\delta^2})^2\Big)\\ &=& -(1+\delta^2)\sn(t\sqrt{1+\delta^2})\Big(\frac{2+\delta^2}{2(1+\delta^2)}+\frac{\delta^2}{1+\delta^2}\cn(t\sqrt{1+\delta^2})^2\Big)\\ &=& -\bigg[1+\frac{\delta^2}{2} + \delta^2\cn\Big(t\sqrt{1+\delta^2}\Big)^2\bigg]\, \xi(t)\, . \end{eqnarray*} This shows that the choice \eq{considerxi} of $\xi(t)$ satisfies equation \eq{squareduffing} whenever $\gamma=1+\frac{\delta^2}{2}$. Whence, on this parabola the eigenvalues of the monodromy matrix are $-1$. This proves that also the curve $\gamma=1+\delta^2/2$ is part of the boundary of $U_1$.\par Therefore, the boundary of $U_1$ consists of the line $\gamma=1$ and the parabola $\gamma=1+\delta^2/2$: the statement is so proved. $\Box$\par\vskip3mm \noindent{\bf Proof of Theorem~\ref{striscia}.} In Theorem~\ref{exactsolutions} we showed that $\gamma=1$ is part of the boundary of $U_1$. Let $S_0$ be the first stability region for equation~\eqref{squareduffing}. Then, the line segment $\delta=0,\,0<\gamma<1$, is included in $S_0$. 
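The differential computation above, showing that \eq{considerxi} solves \eq{squareduffing} when $\gamma=1+\frac{\delta^2}{2}$, can also be reproduced numerically: the Python sketch below (our own helper names; we take $\delta=1$) integrates the derivative rules \eq{idenjacobi} for $(\sn,\cn,\dn)$ with a Runge--Kutta scheme and evaluates the residual of the Hill equation in the variable $u=t\sqrt{1+\delta^2}$:

```python
import math

def jacobi_grid(u_max, k2, h=1e-3):
    # RK4 integration of sn' = cn*dn, cn' = -sn*dn, dn' = -k2*sn*cn
    # from (sn, cn, dn)(0) = (0, 1, 1); returns the trajectory on a grid of step h
    def f(s, c, d):
        return (c * d, -s * d, -k2 * s * c)
    s, c, d = 0.0, 1.0, 1.0
    grid = [(s, c, d)]
    for _ in range(int(round(u_max / h))):
        a1 = f(s, c, d)
        a2 = f(s + h / 2 * a1[0], c + h / 2 * a1[1], d + h / 2 * a1[2])
        a3 = f(s + h / 2 * a2[0], c + h / 2 * a2[1], d + h / 2 * a2[2])
        a4 = f(s + h * a3[0], c + h * a3[1], d + h * a3[2])
        s += h / 6 * (a1[0] + 2 * a2[0] + 2 * a3[0] + a4[0])
        c += h / 6 * (a1[1] + 2 * a2[1] + 2 * a3[1] + a4[1])
        d += h / 6 * (a1[2] + 2 * a2[2] + 2 * a3[2] + a4[2])
        grid.append((s, c, d))
    return grid

delta = 1.0
k2 = delta ** 2 / (2 * (1 + delta ** 2))  # k^2 with k = delta/sqrt(2(1+delta^2))
h = 1e-3
g = jacobi_grid(3.0, k2, h)

# residual of (1+delta^2)*sn''(u) + (1 + delta^2/2 + delta^2*cn(u)^2)*sn(u) = 0,
# the Hill equation with gamma = 1 + delta^2/2 in the variable u = t*sqrt(1+delta^2)
def residual(i):
    snpp = (g[i + 1][0] - 2 * g[i][0] + g[i - 1][0]) / h ** 2
    s, c = g[i][0], g[i][1]
    return (1 + delta ** 2) * snpp + (1 + delta ** 2 / 2 + delta ** 2 * c * c) * s

max_res = max(abs(residual(i)) for i in range(1, len(g) - 1, 50))
```

The residual stays at the level of the discretization error, and the identities $\sn^2+\cn^2=1$ and $\dn^2-k^2\cn^2=1-k^2$ are preserved along the integration. Returning to the proof of Theorem~\ref{striscia}: recall that the segment $\delta=0$, $0<\gamma<1$, lies in the first stability region $S_0$.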
Using~\cite[Chapter VIII, \S1-4]{yakubovich}, we conclude that $\gamma=1$ must be also part of the boundary of $S_0$. From~\cite{abram} we recall that the following identity holds: \begin{equation}\label{cos2sin2} \cn(u,k)^2+\sn(u,k)^2=1\qquad(0<k<1). \end{equation} We take again $k=\frac{\delta}{\sqrt{2(1+\delta^2)}}$ and we drop it in the argument of $\sn$, $\cn$, $\dn$. Then we consider the function \neweq{considerxidn} \xi(t) = \dn\Big(t\sqrt{1+\delta^2}\Big)\, . \end{equation} By arguing as for Theorem~\ref{exactsolutions} and by using~\eqref{idenjacobi} and~\eqref{cos2sin2}, we obtain: \begin{equation*} \dot{\xi}(t) = -\frac{\delta^2}{2\sqrt{1+\delta^2}}\cn(t\sqrt{1+\delta^2})\sn(t\sqrt{1+\delta^2})\, , \end{equation*} \begin{align*} \ddot{\xi}(t) &= -\frac{\delta^2}{2}\dn(t\sqrt{1+\delta^2})\big[-\sn(t\sqrt{1+\delta^2})^2 +\cn(t\sqrt{1+\delta^2})^2\big]\\ &=-\frac{\delta^2}{2}\big[-1+2\cn(t\sqrt{1+\delta^2})^2\big]\xi(t)\\ &=-\Big(-\frac{\delta^2}{2}+\delta^2\cn(t\sqrt{1+\delta^2})^2\Big)\xi(t)\, . \end{align*} This shows that~\eqref{considerxidn} satisfies equation~\eq{squareduffing} whenever $\gamma=-\frac{\delta^2}{2}$. Furthermore, this solution has the same period $T(\delta)/2$ as the coefficient of the Hill equation~\eq{squareduffing}. Thus, using the Floquet theory, we conclude that the characteristic multipliers of~\eqref{squareduffing} are both equal to $1$ when $\gamma = -\frac{\delta^2}{2}$. From (4.2.i) p.60 in \cite{cesari}, we also know that if $p(t)\le0$, then the trivial solution of the Hill equation $\ddot{\xi}(t) +p(t)\xi(t)=0$ is unstable. Therefore, if $\gamma\le-\delta^2$ the zero solution of~\eqref{squareduffing} is unstable. Thus, the parabola $\gamma=-\frac{\delta^2}{2}$ is part of the boundary of $S_0$ and this concludes the proof. $\Box$\par\vskip3mm \noindent{\bf Proof of Theorem~\ref{asymp}.} Assume that $(\delta,\gamma)\to(0,\ell^2)$. In order to apply the Burdina criterion we need asymptotic estimates for $\Phi(\delta,\gamma)$. 
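A similar numerical check applies to \eq{considerxidn}: with $\gamma=-\frac{\delta^2}{2}$, the function $\xi(t)=\dn(t\sqrt{1+\delta^2})$ should annihilate the Hill operator. A Python sketch (our own helper names; again $\delta=1$):

```python
import math

def jacobi_grid(u_max, k2, h=1e-3):
    # RK4 integration of sn' = cn*dn, cn' = -sn*dn, dn' = -k2*sn*cn from (0, 1, 1)
    def f(s, c, d):
        return (c * d, -s * d, -k2 * s * c)
    s, c, d = 0.0, 1.0, 1.0
    grid = [(s, c, d)]
    for _ in range(int(round(u_max / h))):
        a1 = f(s, c, d)
        a2 = f(s + h / 2 * a1[0], c + h / 2 * a1[1], d + h / 2 * a1[2])
        a3 = f(s + h / 2 * a2[0], c + h / 2 * a2[1], d + h / 2 * a2[2])
        a4 = f(s + h * a3[0], c + h * a3[1], d + h * a3[2])
        s += h / 6 * (a1[0] + 2 * a2[0] + 2 * a3[0] + a4[0])
        c += h / 6 * (a1[1] + 2 * a2[1] + 2 * a3[1] + a4[1])
        d += h / 6 * (a1[2] + 2 * a2[2] + 2 * a3[2] + a4[2])
        grid.append((s, c, d))
    return grid

delta = 1.0
k2 = delta ** 2 / (2 * (1 + delta ** 2))
h = 1e-3
g = jacobi_grid(3.0, k2, h)

# residual of (1+delta^2)*dn''(u) + (-delta^2/2 + delta^2*cn(u)^2)*dn(u) = 0,
# the Hill equation with gamma = -delta^2/2 in the variable u = t*sqrt(1+delta^2)
def residual(i):
    dnpp = (g[i + 1][2] - 2 * g[i][2] + g[i - 1][2]) / h ** 2
    c, d = g[i][1], g[i][2]
    return (1 + delta ** 2) * dnpp + (-delta ** 2 / 2 + delta ** 2 * c * c) * d

max_res = max(abs(residual(i)) for i in range(1, len(g) - 1, 50))
```

The residual is again of the size of the discretization error. We now turn to the asymptotic estimates for $\Phi(\delta,\gamma)$ required by the Burdina criterion.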
By neglecting $o(\delta^2)$, we obtain \begin{eqnarray} \Phi(\delta,\gamma) &=& 2\sqrt{2}\int_0^{\pi/2}\sqrt{\tfrac{\gamma+\delta^2\sin^2\theta}{2+\delta^2+\delta^2\sin^2\theta}}\, d\theta \notag \\ \ &\sim& 2\sqrt{\gamma}\int_0^{\pi/2}\left(1+\tfrac{\delta^2}{2\gamma}\sin^2\theta\right) \left(1-\tfrac{\delta^2}{4}-\tfrac{\delta^2}{4}\sin^2\theta\right)\, d\theta \notag \\ \ &\sim& 2\sqrt{\gamma}\int_0^{\pi/2}\left(1-\tfrac{\delta^2}{4}+\tfrac{\delta^2}{2\gamma}\sin^2\theta-\tfrac{\delta^2}{4}\sin^2\theta\right) \, d\theta \notag \\ \ &=& \pi\sqrt{\gamma}\left(1+\big(\tfrac{1}{\ell^2}-\tfrac32\big)\tfrac{\delta^2}{4}\right)\, .\label{stimona} \end{eqnarray} Next, we remark that $$ \log\left(1+\frac{\delta^2}{\gamma}\right)\ \sim\ \frac{\delta^2}{\ell^2}\qquad\mbox{as }(\delta,\gamma)\to(0,\ell^2)\, . $$ Combined with \eq{stimona}, this shows that the bounds in \eq{primavera} asymptotically become $$ \gamma\le\ell^2+\left(\frac{3\ell^2}{4}-\frac12 -\frac{1}{\pi\ell}\right)\delta^2\ ,\quad \gamma\ge\ell^2+\left(\frac{3\ell^2}{4}-\frac12 +\frac{1}{\pi\ell}\right)\delta^2\, . $$ Whence, $U_\ell$ is asymptotically contained in the region where none of these two facts holds, that is, in the region defined by \eq{twoparabolas}.\par Finally, the statement about the existence of $\delta_\gamma$ follows from what we have just proved and from Theorems~\ref{exactsolutions} and~\ref{striscia}. $\Box$\par\vskip3mm \noindent{\bf Proof of Theorem~\ref{primalingua}.} If $\omega=1$, equations~\eqref{duffomega} and~\eqref{hill_scalata} are equivalent to~\eqref{ODE} and~\eqref{squareduffing}, respectively. Therefore, it follows from the proof of Theorem~\ref{intornorigine} that \[\xi(t) = \cn\bigg(t\sqrt{1+\delta^2},\frac{\delta}{\sqrt{2(1+\delta^2)}}\bigg)\] is a solution of~\eqref{hill_scalata} whenever $\omega=1$. Thus the line $\omega=1$ is a resonant line and is part of the boundary of the resonance tongue $U_1$. 
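The expansion \eq{stimona} can itself be cross-checked by quadrature: for fixed $\gamma=\ell^2$, the difference between $\Phi(\delta,\gamma)$ and $\pi\sqrt{\gamma}\big(1+\big(\tfrac{1}{\ell^2}-\tfrac32\big)\tfrac{\delta^2}{4}\big)$ must vanish faster than $\delta^2$ as $\delta\to0$. A Python sketch (our own helper names; we take $\ell=2$):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule on [a, b]; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def phi(delta, gamma):
    # Phi(delta, gamma) = 2*sqrt(2)*int_0^{pi/2} sqrt((gamma + delta^2 sin^2)/(2 + delta^2 + delta^2 sin^2))
    def f(th):
        s2 = math.sin(th) ** 2
        return math.sqrt((gamma + delta ** 2 * s2) / (2 + delta ** 2 + delta ** 2 * s2))
    return 2 * math.sqrt(2) * simpson(f, 0.0, math.pi / 2)

ell = 2.0
gamma = ell ** 2

def err(delta):
    # discrepancy between Phi and the second order expansion (stimona)
    approx = math.pi * math.sqrt(gamma) * (1 + (1 / ell ** 2 - 1.5) * delta ** 2 / 4)
    return abs(phi(delta, gamma) - approx)
```

For $\delta=0.01$ the discrepancy is several orders of magnitude below $\delta^2=10^{-4}$, consistently with \eq{stimona}.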
Let us now recall, from~\cite[Theorem 13]{bbfg}, that if $0<\omega<\big(\frac{21}{22}\big)^2$, then the trivial solution of~\eqref{hill_scalata} is stable for all $\delta>0$. These two facts prove that the strip $(0,\infty)\times(0,1)$ is a stability region of the $(\delta,\omega)$-plane. If $U_1=\{\omega:\tau(\delta)<\omega<\varphi(\delta)\}$ then, by combining the previous observations with Proposition~\ref{known}, we infer that we must have $\tau(\delta)\equiv1$, $\varphi(\delta)>1$ for all $\delta$, and $\varphi(\delta)\to3$ as $\delta\to\infty$. The second inequality for $\varphi(\delta)$ follows from Corollary~\ref{asympomega}. $\Box$\par\vskip3mm \section{Remarks and open problems} $\bullet$ In Figure \ref{monodromymnd1}, pairs of resonant lines meet at the points $(\delta,\omega)=(0,\ell)$ with $\ell\in{\mathbb N}$. Between two consecutive ``double points'' there is a gap of 1 in the $\omega$-direction. For $\delta=\infty$ the gap is larger and the $j$-th resonant line emanates from the ``point'' $(\delta,\omega)=(\infty,\frac{j(j+1)}{2})$. Whence, below the line $\omega=s$ with $s\in{\mathbb R}_+\setminus{\mathbb N}$, there exist $2[s]$ resonant lines emanating from points on the $\omega$-axis and $[\frac{-1+\sqrt{1+8s}}{2}]$ resonant lines emanating from points at $\delta=\infty$. This shows that there are ``more'' resonant lines for $\delta\to0$ than for $\delta\to\infty$. Here, $[\varrho]$ represents the integer part of $\varrho$.\par \noindent $\bullet$ Once a nontrivial solution $\xi_1$ of~\eqref{hill} is known, the standard method to find a second (linearly independent) solution $\xi_2$ yields \neweq{xi2} \xi_2(t)=\xi_1(t)\int^t\frac{d\tau}{\xi_1^2(\tau)}\, .
\end{equation} Therefore, in the cases where we found an explicit solution of \eqref{hill}, see the proofs of Theorems \ref{exactsolutions} and \ref{striscia}, one can also find another linearly independent explicit solution by using \eq{xi2} combined with the computation of the indefinite integrals of Jacobi elliptic functions, see~\cite{carlson}. We omit the details.\par \noindent $\bullet$ The plots in Figure \ref{monodromymnd1} suggest the following conjecture, which appears quite challenging to prove (or disprove): {\em all the resonant lines emanating from $(0,\ell)$ (except $\omega\equiv1$) are graphs of strictly increasing functions with a unique flex point}.\par \noindent $\bullet$ From \eq{twoparabolas} we infer that, when $\delta\to0$, the resonant tongues $U_\ell$ asymptotically lie between two upward parabolas provided that $\ell\ge2$. An interesting problem would be to investigate whether the gap between these two parabolas is of the order of $\delta^2$ or of higher order $o(\delta^2)$ as $\delta\to0$.\par \noindent $\bullet$ It would be interesting to investigate the stability for the following damped version of \eq{truebeam}: $$u_{tt}+\rho u_t+u_{xxxx}-\tfrac{2}{\pi}\, \|u_x\|^2_{L^2(0,\pi)}\, u_{xx}=0\, ,\quad x\in(0,\pi)\, ,\ t>0\, ,$$ where $\rho>0$. How does $\rho$ modify the shape of the resonant tongues in Figure \ref{monodromymnd1}? We expect the tongues to retract in the horizontal direction, but does a {\em uniform bound} $\varepsilon_\rho>0$ exist such that all the nonlinear modes of \eq{truebeam} are linearly stable whenever $\delta<\varepsilon_\rho$? We point out that the simple argument of multiplying by an exponential, as for the Mathieu equation, does not work for the Hill equation with squared Duffing coefficients. \par \noindent \textbf{Acknowledgments.} This work is the continuation of the graduate dissertation of the first Author \cite{gaspa}.
The second Author is partially supported by the PRIN project {\em Equazioni alle derivate parziali di tipo ellittico e parabolico: aspetti geometrici, disuguaglianze collegate, e applicazioni} and by the {\em Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA)} of the Istituto Nazionale di Alta Matematica (INdAM). \end{document}
\begin{document} \author{Francis OGER} \title{\textbf{Paperfolding sequences, paperfolding curves and local isomorphism}} \date{} \begin{center} \textbf{Paperfolding sequences, paperfolding curves} \textbf{and local isomorphism} Francis OGER \end{center} \noindent ABSTRACT. For each integer $n$, an $n$-folding curve is obtained by folding $n$ times a strip of paper in two, possibly up or down, and unfolding it with right angles. Generalizing the usual notion of infinite folding curve, we define complete folding curves as the curves without endpoint which are unions of increasing sequences of $n$-folding curves for $n$ integer. We prove that there exists a standard way to extend any complete folding curve into a covering of $\mathbb{R}^{2}$ by disjoint such curves, which satisfies the local isomorphism property introduced to investigate aperiodic tiling systems. This covering contains at most six curves. \noindent 2000 Mathematics Subject Classification. Primary 05B45; Secondary 52C20, 52C23. \noindent Key words and phrases. Paperfolding sequence, paperfolding curve, tiling, local isomorphism, aperiodic. The infinite folding sequences (resp. curves) usually considered are sequences $(a_{k})_{k\in \mathbb{N}^{\ast }}\subset \{+1,-1\}$ (resp. infinite curves with one endpoint) obtained as direct limits of $n$-folding sequences (resp. curves) for $n\in \mathbb{N}$. It is well known (see [3] and [4]) that paperfolding curves are self-avoiding and that, in some cases, including the Heighway Dragon curve, a small number of copies of the same infinite folding curve can be used to cover $\mathbb{R}^{2}$ without overlapping. However, in some other cases, this property fails. In the present paper, we define complete folding sequences (resp. curves) as the sequences $(a_{k})_{k\in \mathbb{Z}}\subset \{+1,-1\}$ (resp. the infinite curves without endpoint) which are direct limits of $n$-folding sequences (resp. curves) for $n\in \mathbb{N}$. Any infinite folding sequence (resp.
curve) in the classical sense can be extended into a complete folding sequence (resp. curve). On the other hand, most of the complete folding sequences (resp. curves) cannot be obtained in that way. We prove that any complete folding curve, and therefore any infinite folding curve, can be extended in an essentially unique way into a covering of $ \mathbb{R} ^{2}$ by disjoint complete folding curves which satisfies the local isomorphism property. We show that a covering obtained from an infinite folding curve can contain complete folding curves which are not extensions of infinite folding curves. One important argument in the proofs is the derivation of paperfolding curves, which is investigated in Section 2. Another one is the local isomorphism property for complete folding sequences (cf. Section 1) and for coverings of $ \mathbb{R} ^{2}$ by sets of disjoint complete folding curves (cf. Section 3). The local isomorphism property was originally used to investigate aperiodic tiling systems. Actually, we have an interpretation of complete folding sequences as tilings of $ \mathbb{R} $, and an interpretation of coverings of $ \mathbb{R} ^{2}$ by disjoint complete folding curves as tilings of $ \mathbb{R} ^{2}$. \textbf{1. Paperfolding sequences.} The notions usually considered (see for instance [5]), and which we define first, are those of $n$-folding sequence (sequence obtained by folding $n$ times a strip of paper in two), and $\infty $-folding sequence (sequence indexed by $ \mathbb{N} ^{\ast }$ which is obtained as a direct limit of $n$-folding sequences for $ n\in \mathbb{N} $). Then we introduce complete folding sequences, which are sequences indexed by $ \mathbb{Z} $, also obtained as direct limits of $n$-folding sequences for $n\in \mathbb{N} $. We describe the finite subwords of each such sequence. 
Using this description, we show that complete folding sequences satisfy properties similar to those of aperiodic tiling systems: they form a class defined by a set of local rules, none of them is periodic, but all of them satisfy the local isomorphism property introduced for tilings. It follows that, for each such sequence, there exist $2^{\omega }$ isomorphism classes of sequences which are locally isomorphic to it. \noindent \textbf{Definitions.} For each $n\in \mathbb{N}$ and each sequence $S=(a_{1},...,a_{n})\subset \{+1,-1\}$, we write $\left\vert S\right\vert =n$ and $\overline{S}=(-a_{n},...,-a_{1})$. We say that a sequence $(a_{1},...,a_{n})$ is a \emph{subword} of a sequence $(b_{1},...,b_{p})$ or $(b_{k})_{k\in \mathbb{N}^{\ast }}$ or $(b_{k})_{k\in \mathbb{Z}}$ if there exists $h$ such that $a_{k}=b_{k+h}$ for $1\leq k\leq n$. \noindent \textbf{Definition.} For each $n\in \mathbb{N}$, an $n$\emph{-folding sequence} is a sequence $(a_{1},...,a_{2^{n}-1})\subset \{+1,-1\}$ obtained by folding $n$ times a strip of paper in two, with each folding being done independently up or down, unfolding it, and writing $a_{k}=+1$ (resp. $a_{k}=-1$) for each $k\in \{1,...,2^{n}-1\}$ such that the $k$-th fold from the left has the shape of a $\vee$ (resp. $\wedge$) (we obtain the empty sequence for $n=0$). \noindent \textbf{Properties.} The following properties are true for each $n\in \mathbb{N}$: \noindent 1) If $S$ is an $n$-folding sequence, then $\overline{S}$ is also an $n$-folding sequence. \noindent 2) The $(n+1)$-folding sequences are the sequences $(\overline{S},+1,S)$ and the sequences $(\overline{S},-1,S)$, where $S$ is an $n$-folding sequence. \noindent 3) There exist $2^{n}$ $n$-folding sequences (proof by induction on $n$ using 2)).
\noindent 4) Any sequence $(a_{1},...,a_{2^{n+1}-1})$ is a $(n+1)$-folding sequence if and only if $(a_{2k})_{1\leq k\leq 2^{n}-1}$\ is an $n$-folding sequence and $a_{1+2k}=(-1)^{k}a_{1}$ for $0\leq k\leq 2^{n}-1$. \noindent 5) If $n\geq 2$ and if $(a_{1},...,a_{2^{n}-1})$ is an $n$-folding sequence, then $a_{2^{r}(1+2k)}=(-1)^{k}a_{2^{r}}$ for $0\leq r\leq n-2$ and $0\leq k\leq 2^{n-r-1}-1$ (proof by induction on $n$ using 4)). \noindent \textbf{Definition.} An $\infty $\emph{-folding sequence} is a sequence $(a_{n})_{n\in \mathbb{N} ^{\ast }}$ such that \noindent $(a_{1},...,a_{2^{n}-1})$ is an $n$-folding sequence for each $ n\in \mathbb{N} ^{\ast }$. \noindent \textbf{Definition.} A \emph{finite folding sequence} is a subword of an $n$-folding sequence for an integer $n$. \noindent \textbf{Examples.} The sequence $(+1,+1,+1)$\ is a finite folding sequence since it is a subword of the $3$-folding sequence $ (-1,+1,+1,+1,-1,-1,+1)$. On the other hand, $(+1,+1,+1)$ is not a $2$ -folding sequence and $(+1,+1,+1,+1)$\ is not a finite folding sequence. \noindent \textbf{Definition.} A\emph{\ complete folding sequence} is a sequence $(a_{k})_{k\in \mathbb{Z} }\subset \{+1,-1\}$ such that its finite subwords are folding sequences. \noindent \textbf{Examples.} For each $\infty $-folding sequence $ S=(a_{n})_{n\in \mathbb{N} ^{\ast }}$, write $\overline{S}=(-a_{-n})_{n\in - \mathbb{N} ^{\ast }}$. Then $(\overline{S},+1,S)$ and $(\overline{S},-1,S)$ are complete folding sequences since \noindent $(-a_{2^{n}-1},...,-a_{1},+1,a_{1},...,a_{2^{n}-1})$ and $ (-a_{2^{n}-1},...,-a_{1},-1,a_{1},...,a_{2^{n}-1})$ \noindent are $(n+1)$-folding sequences for each $n\in \mathbb{N} $. In Section 3, we give examples of complete folding sequences which are not obtained in that way. 
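The definitions above are easy to experiment with on a computer. The following Python sketch (the function names are ours) builds the $n$-folding sequences from the fold directions via property 2) and checks properties 3), 4) and 5), as well as the Examples above, for $n\leq 4$:

```python
from itertools import product

def fold(signs):
    # property 2): the (n+1)-folding sequences are (bar(S), +1, S) and (bar(S), -1, S);
    # signs lists the successive fold directions
    seq = []
    for eps in signs:
        seq = [-x for x in reversed(seq)] + [eps] + seq
    return tuple(seq)

three = {fold(s) for s in product((1, -1), repeat=3)}
four = {fold(s) for s in product((1, -1), repeat=4)}

# property 3): there exist exactly 2^n n-folding sequences
counts_ok = (len(three) == 8 and len(four) == 16)

# property 4): in a 4-folding sequence (a_1,...,a_15), the terms a_{2k} form a
# 3-folding sequence and a_{1+2k} = (-1)^k a_1
prop4_ok = all(
    S[1::2] in three and all(S[2 * k] == (-1) ** k * S[0] for k in range(8))
    for S in four
)

# property 5): a_{2^r(1+2k)} = (-1)^k a_{2^r} for 0 <= r <= n-2
prop5_ok = all(
    S[2 ** r * (1 + 2 * k) - 1] == (-1) ** k * S[2 ** r - 1]
    for S in four for r in range(3) for k in range(2 ** (3 - r))
)

# the Examples: (-1,+1,+1,+1,-1,-1,+1) is a 3-folding sequence (it contains
# (+1,+1,+1)), while (+1,+1,+1,+1) occurs in no 4-folding sequence
example_ok = (-1, 1, 1, 1, -1, -1, 1) in three
no_quad = not any(S[i:i + 4] == (1, 1, 1, 1) for S in four for i in range(12))
```

All these checks succeed, in accordance with the properties and Examples stated above.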
It follows from property 5) above that, for each complete folding sequence $(a_{h})_{h\in \mathbb{Z}}$ and each $n\in \mathbb{N}$, there exists $k\in \mathbb{Z}$ such that $a_{k+l.2^{n+1}}=(-1)^{l}a_{k}$ for each $l\in \mathbb{Z}$. Moreover we have: \noindent \textbf{Proposition 1.1.} Consider a sequence $S=(a_{h})_{h\in \mathbb{Z}}\subset \{+1,-1\}$. For each $n\in \mathbb{N}$, suppose that there exists $h_{n}\in \mathbb{Z}$ such that $a_{h_{n}+k.2^{n+1}}=(-1)^{k}a_{h_{n}}$ for each $k\in \mathbb{Z}$, and consider $E_{n}=h_{n}+2^{n}\mathbb{Z}$ and $F_{n}=h_{n}+2^{n+1}\mathbb{Z}$. Then, for each $n\in \mathbb{N}$: \noindent 1) $\mathbb{Z}-E_{n}=\{h\in \mathbb{Z}$~$\mathbf{\mid }$ $a_{h+k.2^{n+1}}=a_{h}$ for each $k\in \mathbb{Z}\mathbf{\}}$ and $\mathbb{Z}-E_{n}$ is the disjoint union of $F_{0},...,F_{n-1}$; \noindent 2) for each $h\in E_{n}$, $(a_{h-2^{n}+1},...,a_{h+2^{n}-1})$ is an $(n+1)$-folding sequence; \noindent 3) for each $h\in \mathbb{Z}$, if $(a_{h-2^{n+1}+1},...,a_{h+2^{n+1}-1})$ is an $(n+2)$-folding sequence, then $h\in E_{n}$. \noindent \textbf{Proof.} It follows from the definition of the integers $h_{n}$ that the sets $F_{n}$ are disjoint. For each $n\in \mathbb{N}$, we have $E_{n}=\mathbb{Z}-(F_{0}\cup ...\cup F_{n-1})$ since $\mathbb{Z}-(F_{0}\cup ...\cup F_{n-1})$ is of the form $h+2^{n}\mathbb{Z}$ and $h_{n}$ does not belong to $F_{0}\cup ...\cup F_{n-1}$. For each $n\in \mathbb{N}$, there exists no $h\in E_{n}$ such that $a_{h+k.2^{n+1}}=a_{h}$ for each $k\in \mathbb{Z}$, since we have $E_{n}=(h_{n}+2^{n+1}\mathbb{Z})\cup (h_{n+1}+2^{n+1}\mathbb{Z})$, $a_{h_{n}+2^{n+1}}=-a_{h_{n}}$ and $a_{h_{n+1}+2^{n+2}}=-a_{h_{n+1}}$. On the other hand, we have $a_{h_{m}+k.2^{m+2}}=a_{h_{m}}$ for $0\leq m\leq n-1$ and $k\in \mathbb{Z}$, and therefore $a_{h+k.2^{n+1}}=a_{h}$ for $h\in F_{0}\cup ...\cup F_{n-1}$ and $k\in \mathbb{Z}$, which completes the proof of 1). We show 2) by induction on $n$.
The case $n=0$\ is clear. If 2) is true for $ n$, then, for each $h\in E_{n+1}$, the induction hypothesis applied to $ (a_{h+2k})_{k\in \mathbb{Z} }$ implies that $(a_{h-2^{n+1}+2k})_{1\leq k\leq 2^{n+1}-1}$\ is a $(n+1)$ -folding sequence; it follows that $(a_{h-2^{n+1}+1},...,a_{h+2^{n+1}-1})$\ is a $(n+2)$-folding sequence, since $ a_{h-2^{n+1}+1+2k}=(-1)^{k}a_{h-2^{n+1}+1}$\ for $0\leq k\leq 2^{n+1}-1$. Concerning 3), we observe that, for each $h\in \mathbb{Z} $, if $(a_{h-2^{n+1}+1},...,a_{h+2^{n+1}-1})$\ is a $(n+2)$-folding sequence, then $a_{h+2^{n}}=-a_{h-2^{n}}$. According to 1), it follows $ h-2^{n}\in E_{n}$, and therefore\ $h\in E_{n}$.~~$\blacksquare $ \noindent \textbf{Corollary 1.2.} Any sequence $(a_{h})_{h\in \mathbb{Z} }\subset \{+1,-1\}$ is a complete folding sequence if and only if, for each $ n\in \mathbb{N} $, there exists $h_{n}\in \mathbb{Z} $\ such that $a_{h_{n}+k.2^{n+1}}=(-1)^{k}a_{h_{n}}$\ for each $k\in \mathbb{Z} $. For each complete folding sequence $S$ and each $n\in \mathbb{N} $, the sets $E_{n}$ and $F_{n}$\ of Proposition 1.1 do not depend on the choice of $h_{n}$. We denote them by $E_{n}(S)$ and $F_{n}(S)$.\ We write $ E_{n}$ and $F_{n}$\ instead of $E_{n}(S)$ and $F_{n}(S)$ if it creates no ambiguity. \noindent \textbf{Corollary 1.3.} Any complete folding sequence is nonperiodic. \noindent \textbf{Proof. }Let $S=(a_{h})_{h\in \mathbb{Z} }$ be such a sequence, and let $r$ be an integer such that $a_{h+r}=a_{h}$\ for each $h\in \mathbb{Z} $.\ For each $n\in \mathbb{N} $, it follows from 1) of Proposition 1.1 that $r+( \mathbb{Z} -E_{n})= \mathbb{Z} -E_{n}$, whence $r+E_{n}=E_{n}$ and $r\in 2^{n} \mathbb{Z} $. Consequently, we have $r=0$.~~$\blacksquare $ Now, for each complete folding sequence $S=(a_{h})_{h\in \mathbb{Z} }$, we describe the finite subwords of $S$ and we count those which have a given length. 
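Corollaries 1.2 and 1.3 can be illustrated on a concrete complete folding sequence. The Python sketch below (our own names) builds the central window $(-a_{2^{14}-1},...,-a_{1},+1,a_{1},...,a_{2^{14}-1})$ of the complete folding sequence $(\overline{S},+1,S)$ of the Examples above, with all fold directions equal to $+1$; it exhibits, for each $n\leq 5$, a residue class along which the signs alternate with step $2^{n+1}$, and verifies that no shift $1\leq r\leq 100$ is a period on the window:

```python
def fold(signs):
    # iterated construction (bar(S), eps, S); the all-(+1) choices give the
    # central window of length 2^15 - 1 of a complete folding sequence
    seq = []
    for eps in signs:
        seq = [-x for x in reversed(seq)] + [eps] + seq
    return seq

W = fold([1] * 15)

# Corollary 1.2: for each n there is a residue class h (mod 2^{n+1}) along which
# the signs alternate with step 2^{n+1}
def alternating_classes(n):
    step = 2 ** (n + 1)
    good = []
    for start in range(step):
        samples = W[start::step]
        if all(samples[k] == (-1) ** k * samples[0] for k in range(len(samples))):
            good.append(start)
    return good

exists_alt = all(len(alternating_classes(n)) >= 1 for n in range(6))

# Corollary 1.3: no nonzero period; every shift 1 <= r <= 100 fails somewhere
aperiodic = all(any(W[i] != W[i + r] for i in range(len(W) - r)) for r in range(1, 101))
```

Both flags come out true, as predicted by Corollaries 1.2 and 1.3 (the window is long enough for every tested shift to fail).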
\noindent \textbf{Lemma 1.4.} For each $n\in \mathbb{N}$ and for any $r,s\in \mathbb{Z}$, we have $r-s\in 2^{n+1}\mathbb{Z}$ if $(a_{r+1},...,a_{r+t})=(a_{s+1},...,a_{s+t})$ for $t=\sup (2^{n},7)$. \noindent \textbf{Proof.} If $r-s\notin 2\mathbb{Z}$, then we have for instance $r\in E_{1}$ and $s\in F_{0}$. It follows that $a_{r+1}=-a_{r+3}=a_{r+5}=-a_{r+7}$ since $r+1\in F_{0}$. Moreover, we have $a_{s+5}=-a_{s+1}$ if $s+1\in F_{1}$, and $a_{s+7}=-a_{s+3}$ if $s+3\in F_{1}$. One of these two possibilities is necessarily realized since $s+1\in E_{1}$, which contradicts $(a_{r+1},...,a_{r+7})=(a_{s+1},...,a_{s+7})$. If $r-s\in 2^{k}\mathbb{Z}-2^{k+1}\mathbb{Z}$ with $1\leq k\leq n$, then we consider $h\in \{1,...,2^{k}\}$ such that $r+h\in F_{k-1}$. We have $a_{r+h+m.2^{k}}=(-1)^{m}a_{r+h}$ for each $m\in \mathbb{Z}$, and in particular $a_{s+h}=-a_{r+h}$, which contradicts $(a_{r+1},...,a_{r+t})=(a_{s+1},...,a_{s+t})$ since $1\leq h\leq 2^{k}\leq t$.~~$\blacksquare $ \noindent \textbf{Proposition 1.5.} Consider $n\in \mathbb{N}$ and write $T=(a_{h+1},...,a_{h+2^{n}-1})$ with $h\in E_{n}$. Then any sequence of length $\leq 2^{n+1}-1$ is a subword of $S$ if and only if there exist $\zeta ,\eta \in \{-1,+1\}$ such that it can be written in one of the forms: \noindent (1) $(T_{1},\zeta ,T_{2})$ with $T_{1}$ final segment of $T$ and $T_{2}$ initial segment of $\overline{T}$; \noindent (2) $(T_{1},\zeta ,T_{2})$ with $T_{1}$ final segment of $\overline{T}$ and $T_{2}$ initial segment of $T$; \noindent (3) $(T_{1},\zeta ,\overline{T},\eta ,T_{2})$ with $T_{1}$ final segment and $T_{2}$ initial segment of $T$; \noindent (4) $(T_{1},\zeta ,T,\eta ,T_{2})$ with $T_{1}$ final segment and $T_{2}$ initial segment of $\overline{T}$. \noindent If $\sup (2^{n},7)\leq t\leq 2^{n+1}-1$, then any subword of length $t$ of $S$ can be written in exactly one way in one of the forms (1), (2), (3), (4).
\noindent \textbf{Proof.} We can suppose $h\in F_{n}$ since $T$ and $ \overline{T}$\ play symmetric roles in the Proposition. Then we have $ (a_{k+1},...,a_{k+2^{n}-1})=T$ for $k\in F_{n}$ and $ (a_{k+1},...,a_{k+2^{n}-1})=\overline{T}$ for $k\in E_{n+1}$. It follows that each subword of $S$ of length $\leq 2^{n+1}-1$ can be written in one of the forms (1), (2), (3), (4). Now, we are going to prove that each sequence of one of these forms can be expressed as a subword of $S$ in such a way that the part $T_{1}$ is associated to the final segment of a sequence $(a_{k+1},...,a_{k+2^{n}-1})$ with $k\in E_{n}$. First we show this property for the sequences of the form (1) or (3). It suffices to prove that, for any $\zeta ,\eta \in \{-1,+1\}$, there exists $ k\in F_{n}$ such that $a_{k-2^{n}}=\zeta $ and $a_{k}=\eta $, since these two equalities imply $(a_{k-2^{n+1}+1},...,a_{k+2^{n}-1})=(T,\zeta , \overline{T},\eta ,T)$. We consider $l\in F_{n}$\ such that $a_{l}=\eta $. We have $a_{l+r.2^{n+2}}=\eta $\ for each $r\in \mathbb{Z} $. Moreover, $\{l+r.2^{n+2}-2^{n}\mid r\in \mathbb{Z} \}$ is equal to $F_{n+1}$ or $E_{n+2}$. In both cases, there exists $r\in \mathbb{Z} $ such that $a_{l+r.2^{n+2}-2^{n}}=\zeta $, and it suffices to take $ k=l+r.2^{n+2}$ for such an $r$. Now we show the same property for the sequences of the form (2) or (4). It suffices to prove that, for any $\zeta ,\eta \in \{-1,+1\}$, there exists $ k\in F_{n}$ such that $a_{k}=\zeta $ and $a_{k+2^{n}}=\eta $, since these two equalities imply $(a_{k-2^{n}+1},...,a_{k+2^{n+1}-1})=(\overline{T} ,\zeta ,T,\eta ,\overline{T})$. We consider $l\in F_{n}$\ such that $ a_{l}=\zeta $. We have $a_{l+r.2^{n+2}}=\zeta $\ for each $r\in \mathbb{Z} $. Moreover, $\{l+r.2^{n+2}+2^{n}\mid r\in \mathbb{Z} \}$ is equal to $F_{n+1}$ or $E_{n+2}$. In both cases, there exists $r\in \mathbb{Z} $ such that $a_{l+r.2^{n+2}+2^{n}}=\eta $, and it suffices to take $ k=l+r.2^{n+2}$ for such an $r$. 
Now, suppose that two expressions of the forms (1), (2), (3), (4) give the same sequence of length $t$ with $\sup (2^{n},7)\leq t\leq 2^{n+1}-1$. Consider two sequences $(a_{r+1},...,a_{r+t})$ and $(a_{s+1},...,a_{s+t})$ which realize these expressions in such a way that, in each of them, the part $T_{1}$ of the expression is associated to a final segment of a sequence $(a_{k+1},...,a_{k+2^{n}-1})$ with $k\in E_{n}$, while the part $ T_{2}$ is associated to an initial segment of a sequence $ (a_{l+1},...,a_{l+2^{n}-1})$ with $l=k+2^{n}$ or $l=k+2^{n+1}$. Then, by\ Lemma 1.4, the equality $(a_{r+1},...,a_{r+t})=(a_{s+1},...,a_{s+t})$\ implies $r-s\in 2^{n+1} \mathbb{Z} $.\ It follows that the two expressions are equal.~~$\blacksquare $ \noindent \textbf{Corollary 1.6.} Any finite folding sequence $U$ is a subword of $S$ if and only if $\overline{U}$ is a subword of $S$. \noindent \textbf{Proof.} For each sequence $T=(a_{h+1},...,a_{h+2^{n}-1})$ with $n\in \mathbb{N} $ and $h\in E_{n}$, the sequence $U$ is of the form (1) (resp. (2), (3), (4)) relative to $T$ if and only if $\overline{U}$ is of the form (2) (resp. (1), (4), (3)) relative to $T$.~~$\blacksquare $ It follows from the Corollary below that, for each integer $n\geq 3$, each complete folding sequence has exactly $8$ subwords which are $n$-folding sequences: \noindent \textbf{Corollary 1.7.} Consider $n\in \mathbb{N} $ and write\ $T=(a_{h+1},...,a_{h+2^{n}-1})$ with $h\in E_{n}$. Then any $ (n+2)$-folding sequence is a subword of $S$ if and only if it can be written in the form $(\overline{T},-\zeta ,T,\eta ,\overline{T},\zeta ,T)$ or $ (T,-\zeta ,\overline{T},\eta ,T,\zeta ,\overline{T})$ with $\zeta ,\eta \in \{+1,-1\}$. \noindent \textbf{Proof. }For each $k\in \mathbb{Z} $, if $(a_{k-2^{n+1}+1},...,a_{k+2^{n+1}-1})$\ is a $(n+2)$-folding sequence, then $k\in E_{n}$ by 3) of Proposition 1.1. 
Consequently, we have $ (a_{k+1},...,a_{k+2^{n}-1})=T$ or $(a_{k+1},...,a_{k+2^{n}-1})=\overline{T}$ , and $(a_{k-2^{n+1}+1},...,a_{k+2^{n+1}-1})$ is of the required form. In order to prove that each sequence of that form is a subword of $S$, we consider $k\in E_{n+1}$ and we write $U=(a_{k+1},...,a_{k+2^{n+1}-1})$. We have $U=(\overline{T},\varepsilon ,T)$ or $U=(T,\varepsilon ,\overline{T})$ with $\varepsilon =\mp 1$. Here we only consider the first case; the second one can be treated in the same way since $T$ and $\overline{T}$ play symmetric roles in the Corollary. We apply Proposition 1.5 for $n+1$ instead of $n$, and we consider the forms (1), (2), (3), (4) relative to $U$. For any $\zeta ,\eta \in \{+1,-1\}$, the sequence $(\overline{T},-\zeta ,T,\eta ,\overline{T},\zeta ,T)$ is a subword of $S$ because it is equal to $(U,\eta ,\overline{U})$ or to $(\overline{U} ,\eta ,U)$, and therefore of the form (1) or (2) relative to $U$. The sequence $(T,-\zeta ,\overline{T},\eta ,T,\zeta ,\overline{T})$ is also a subword of $S$ because it is of the form $(T,\alpha ,\overline{U},\beta , \overline{T})$ or $(T,\alpha ,U,\beta ,\overline{T})$ with $\alpha ,\beta \in \{+1,-1\}$, and therefore of the form (3) or (4) relative to $U$.~~$ \blacksquare $ The following result generalizes [1, Th., p. 27] to complete folding sequences: \noindent \textbf{Theorem 1.8.} The sequence $S$ has $4t$ subwords of length $t$ for each integer $t\geq 7$ and $2$, $4$, $8$, $12$, $18$, $23$ subwords of length $t=1$, $2$, $3$, $4$, $5$, $6$. \noindent \textbf{Proof.} The proof of the Theorem for $t=1$, $2$, $3$, $4$, $5$, $6$ is based on Proposition 1.5. We leave it to the reader. For $t\geq 7$, we consider the integer $n\geq 2$ such that $2^{n}\leq t\leq 2^{n+1}-1$, and we write $T=(a_{h+1},...,a_{h+2^{n}-1})$ with $h\in E_{n}$.\ By Proposition 1.5, it suffices to count the subwords of length $t$ of $S$ which are in each of the forms (1), (2), (3), (4) relative to $T$. 
Each of the forms (1), (2) gives $\left\vert T_{1}\right\vert +\left\vert T_{2}\right\vert =t-1$, and therefore $\left\vert T_{1}\right\vert \geq (t-1)-(2^{n}-1)=t-2^{n}$. As $\left\vert T_{1}\right\vert \leq 2^{n}-1$, we have $(2^{n}-1)-(t-2^{n})+1=2^{n+1}-t$\ possible values for $\left\vert T_{1}\right\vert $. Consequently, there exist $4(2^{n+1}-t)$\ sequences associated to these two forms, since there are $2$ possible values for $ \zeta $. Each of the forms (3), (4) gives $\left\vert T_{1}\right\vert +\left\vert T_{2}\right\vert =t-2^{n}-1$, and therefore $\left\vert T_{1}\right\vert \leq t-2^{n}-1$.\ We have $t-2^{n}$\ possible values for $\left\vert T_{1}\right\vert $. Consequently, there exist $8(t-2^{n})$\ sequences associated to these two forms, since there are $4$ possible values for $ (\zeta ,\eta )$. Now, the total number of subwords of length $t$ in $S$ is $ 4(2^{n+1}-t)+8(t-2^{n})=4t$.~~$\blacksquare $ For each sequence $(a_{h})_{h\in \mathbb{Z} }\subset \{+1,-1\}$, we define a tiling of $ \mathbb{R} $ as follows: the tiles are the intervals $[k,k+1]$ for $k\in \mathbb{Z} $, where the \textquotedblleft colour\textquotedblright\ of the endpoint $k$ \ (resp. $k+1$) is\ the sign of $a_{k}$\ (resp. $a_{k+1}$). Each tile is of one of the forms $[+,+]$, $[+,-]$, $[-,+]$, $[-,-]$, and each pair of consecutive tiles is of one of the forms $([+,+],[+,+])$, $([+,+],[+,-])$, $ ([+,-],[-,+])$, $([+,-],[-,-])$, $([-,+],[+,+])$, $([-,+],[+,-])$, $ ([-,-],[-,+])$, $([-,-],[-,-])$. Concerning the theory of tilings, the reader is referred to [7], which presents classical results and gives generalizations based on mathematical logic. Two tilings of $ \mathbb{R} ^{n}$ are said to be \emph{isomorphic} if they are equivalent up to translation, and \emph{locally isomorphic} if they contain the same bounded sets of tiles modulo translations. We say that two sequences $(a_{h})_{h\in \mathbb{Z} },(b_{h})_{h\in \mathbb{Z} }\subset \{+1,-1\}$ are \emph{isomorphic} (resp. 
\emph{locally isomorphic}) if they are equivalent up to translation (resp. they have the same finite subwords). This property is true if and only if the associated tilings are isomorphic (resp. locally isomorphic). It follows from the definitions that any sequence $(a_{h})_{h\in \mathbb{Z} }\subset \{+1,-1\}$ is a complete folding sequence if it is locally isomorphic to such a sequence. \noindent \textbf{Corollary 1.9.} Any complete folding sequence $ S=(a_{h})_{h\in \mathbb{Z} }$ is locally isomorphic to $\overline{S}=(-a_{-h})_{h\in \mathbb{Z} }$, but not locally isomorphic to $-S=(-a_{h})_{h\in \mathbb{Z} }$. \noindent \textbf{Proof.} The first statement is a consequence of Corollary 1.6 since each finite folding sequence $T$\ is a subword of $S$ if and only if $\overline{T}$\ is a subword of $\overline{S}$. In order to prove the second statement, we consider $ T=(a_{h+1},a_{h+2},a_{h+3})$ with $h\in E_{2}$. We have $-T\neq \overline{T}$ \ since $a_{h+3}=-a_{h+1}$. By Corollary 1.7, the $4$-folding sequences which are subwords of $S$ are the sequences $(\overline{T},-\zeta ,T,\eta , \overline{T},\zeta ,T)$ and $(T,-\zeta ,\overline{T},\eta ,T,\zeta , \overline{T})$ for $\zeta ,\eta \in \{+1,-1\}$, while the $4$-folding sequences which are subwords of $-S$ are the sequences $(-\overline{T} ,-\zeta ,-T,\eta ,-\overline{T},\zeta ,-T)$ and $(-T,-\zeta ,-\overline{T} ,\eta ,-T,\zeta ,-\overline{T})$ for $\zeta ,\eta \in \{+1,-1\}$.\ Consequently, $S$ and $-S$ have no $4$-folding sequence in common.~~$ \blacksquare $ \noindent \textbf{Remark.} For each $\infty $-folding sequence $S$, it follows from Corollary 1.9 that $T=(\overline{S},+1,S)$ and $U=(\overline{S} ,-1,S)$ are locally isomorphic, since $U=\overline{T}$. 
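Theorem 1.8 and Corollaries 1.7 and 1.9 can be checked by brute force on a long window of a complete folding sequence. The Python sketch below (our own names) uses the central $2^{15}-1$ terms of $(\overline{S},+1,S)$ for the all-$(+1)$ fold choices, counts the distinct subwords of each length, and compares the $4$-folding subwords of $S$ and of $-S$:

```python
from itertools import product

def fold(signs):
    # iterated construction (bar(S), eps, S); the all-(+1) choices give a central
    # window of length 2^15 - 1 of a complete folding sequence
    seq = []
    for eps in signs:
        seq = [-x for x in reversed(seq)] + [eps] + seq
    return tuple(seq)

W = fold((1,) * 15)

def factors(t):
    # distinct subwords of length t occurring in the window
    return {W[i:i + t] for i in range(len(W) - t + 1)}

# Theorem 1.8: 2, 4, 8, 12, 18, 23 subwords of lengths 1..6, then 4t for t >= 7
counts = [len(factors(t)) for t in range(1, 11)]

# Corollaries 1.7 and 1.9: exactly 8 of the 16 four-folding sequences occur in S,
# and the 8 others (their negatives) occur in -S
four = {fold(s) for s in product((1, -1), repeat=4)}
f15 = factors(15)
in_S = four & f15
in_minus_S = {T for T in four if tuple(-x for x in T) in f15}
```

The counts come out as $2,4,8,12,18,23$ and then $4t$, and the sixteen $4$-folding sequences split evenly and disjointly between $S$ and $-S$, in accordance with Theorem 1.8 and Corollaries 1.7 and 1.9.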
We say that a tiling $\mathcal{T}$ of $\mathbb{R} ^{n}$ \emph{satisfies the local isomorphism property} if, for each bounded set of tiles $\mathcal{F}\subset \mathcal{T}$, there exists $r\in \mathbb{R} _{\ast }^{+}$ such that each ball of radius $r$ in $\mathbb{R} ^{n}$ contains the image of $\mathcal{F}$ under a translation. Then any tiling $\mathcal{U}$ is locally isomorphic to $\mathcal{T}$ if each bounded set of tiles contained in $\mathcal{U}$ is the image under a translation of a set of tiles contained in $\mathcal{T}$. We say that a sequence $(a_{h})_{h\in \mathbb{Z} }\subset \{+1,-1\}$ \emph{satisfies the local isomorphism property} if the associated tiling satisfies the local isomorphism property. Like Robinson tilings and Penrose tilings, complete folding sequences are \emph{aperiodic} in the following sense: \noindent 1) they form a class defined by a set of rules which can be expressed by first-order sentences (for each $n\in \mathbb{N} $, we write a sentence which says that each subword of length $2^{n}$ of the sequence considered is a subword of an $(n+1)$-folding sequence); \noindent 2) none of them is periodic, but all of them satisfy the local isomorphism property. \noindent The second statement of 2) follows from the Theorem below: \noindent \textbf{Theorem 1.10.} Let $S=(a_{h})_{h\in \mathbb{Z} }$\ be a complete folding sequence, let $T$ be a finite subword of $S$, and let $r$ be an integer such that $\left\vert T\right\vert \leq 2^{r}$.\ Then $T$ is a subword of $(a_{h+1},...,a_{h+10.2^{r}-2})$ for each $h\in \mathbb{Z} $. \noindent \textbf{Proof.} There exists $k\in E_{r}$ such that $T$ is a subword of $(a_{k-2^{r}+1},...,a_{k+2^{r}-1})$. We have $(a_{k-2^{r}+1},...,a_{k+2^{r}-1})=(\overline{U},\zeta ,U)$ with $\zeta =a_{k}$ and $U=(a_{k+1},...,a_{k+2^{r}-1})$. If $k\in E_{r+1}$, we consider $m\in F_{r+1}$ such that $a_{m}=\zeta $; we have $a_{m+n.2^{r+2}}=(-1)^{n}\zeta $\ for each $n\in \mathbb{Z} $.
If $k\in F_{r}$, we write $m=k$; we have $a_{m+n.2^{r+1}}=(-1)^{n}\zeta $ \ for each $n\in \mathbb{Z} $. In both cases, for each $n\in \mathbb{Z} $, we have $a_{m+n.2^{r+3}}=\zeta $ and $ (a_{m+n.2^{r+3}-2^{r}+1},...,a_{m+n.2^{r+3}+2^{r}-1})=(\overline{U},\zeta ,U) $. For each $h\in \mathbb{Z} $, there exists $n\in \mathbb{Z} $ such that $h-m+2^{r}\leq n.2^{r+3}\leq h-m+9.2^{r}-1$, which implies $ h+1\leq m+n.2^{r+3}-2^{r}+1$\ and $h+10.2^{r}-2\geq m+n.2^{r+3}+2^{r}-1$. Then $(a_{m+n.2^{r+3}-2^{r}+1},...,a_{m+n.2^{r+3}+2^{r}-1})$\ is a subword of $(a_{h+1},...,a_{h+10.2^{r}-2})$,\ which completes the proof of the Theorem since $T$ is a subword of $ (a_{m+n.2^{r+3}-2^{r}+1},...,a_{m+n.2^{r+3}+2^{r}-1})$.~~$\blacksquare $ The second part of the Theorem below is similar to results which were proved for Robinson tilings and Penrose tilings: \noindent \textbf{Theorem 1.11.} 1) There exist $2^{\omega }$ complete folding sequences which are pairwise not locally isomorphic. \noindent 2) For each complete folding sequence $S$, there exist $2^{\omega } $ isomorphism classes of sequences which are locally isomorphic to $S$ . \noindent \textbf{Proof of 1).} It follows from Proposition 1.1 that each complete folding sequence $(a_{h})_{h\in \mathbb{Z} }$ is completely determined by the following operations: \noindent - successively for each $n\in \mathbb{N} $, we choose among the $2$ possible values the smallest $h\in \mathbb{N} \cap F_{n}$, then we fix $a_{h}\in \{+1,-1\}$; \noindent - for the unique $h\in \cap _{n\in \mathbb{N} }E_{n}$\ if it exists, we fix $a_{h}\in \{+1,-1\}$. \noindent Moreover, each possible sequence of choices determines a complete folding sequence. Now, it follows from Corollary 1.7 that, for each complete folding sequence $ S$ and each integer $m$, there exist an integer $n>m$ and a complete folding sequence $T$ such that $S$ and $T$ contain as subwords the same $m$-folding sequences, but not the same $n$-folding sequences. 
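The bound of Theorem 1.10 can also be observed numerically. The following sketch (not part of the paper) uses the regular paperfolding sequence, which extends to a complete folding sequence, and checks the case $r=3$: every subword of length $8$ occurs in every window of length $10.2^{3}-2=78$.

```python
# Numerical illustration of Theorem 1.10 for r = 3: every factor of
# length 2^r = 8 of the regular paperfolding sequence occurs in every
# window of length 10 * 2^r - 2 = 78.

def paperfold(n):
    """First n terms of the regular paperfolding sequence."""
    seq = []
    for k in range(1, n + 1):
        m = k
        while m % 2 == 0:
            m //= 2
        seq.append(1 if m % 4 == 1 else -1)
    return seq

S = paperfold(1 << 12)
r = 3
t, w = 2 ** r, 10 * 2 ** r - 2        # factor length 8, window length 78

# All subwords of length 8 occurring anywhere in the prefix.
all_factors = {tuple(S[i:i + t]) for i in range(len(S) - t + 1)}

# Each window of length 78 must contain every one of them.
for h in range(len(S) - w + 1):
    window = S[h:h + w]
    in_window = {tuple(window[i:i + t]) for i in range(w - t + 1)}
    assert all_factors <= in_window
```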
\noindent \textbf{Proof of 2).} The sequence $S$ is not periodic by Corollary 1.3, and satisfies the local isomorphism property according to Theorem 1.10. By [7, Corollary 3.7], it follows that there exist $2^{\omega } $ isomorphism classes of sequences which are locally isomorphic to $S$.~~$ \blacksquare $ \noindent \textbf{Remark.} Concerning logic, we note two differences between complete folding sequences and Robinson or Penrose tilings. First, the set of all complete folding sequences is defined by a countable set of first-order sentences, and not by only one sentence. Second, it is the union of $2^{\omega }$ classes for elementary equivalence, i.e. local isomorphism, instead of being a single class. \textbf{2. Paperfolding curves: self-avoiding, derivatives, exterior.} In the present section, we define $n$-folding curves, finite folding curves, $\infty $-folding curves and complete folding curves associated to $n$ -folding sequences, finite folding sequences, $\infty $-folding sequences and complete folding sequences. We show their classical properties: self-avoiding, existence of \textquotedblleft derivatives\textquotedblright . Then we prove that any complete folding curve divides\ the set of all points of $ \mathbb{Z} ^{2}$ which are \textquotedblleft exterior\textquotedblright\ to it into zero, one or two \textquotedblleft connected components\textquotedblright , and that these components are infinite. As an application, we consider curves which are limits of successive antiderivatives of a complete folding curve. Any such curve is equal to the closure of its interior. We show that, except in a special case, its exterior is the union of zero, one or two connected components. In some cases, its boundary is a fractal. Finally we prove that, for each finite subcurve $F$ of a complete folding curve $C$, there exist everywhere in $C$ some subcurves which are parallel to $F$. 
We provide $\mathbb{R} ^{2}$ with the Euclidean distance defined as $d((x,y),(x^{\prime },y^{\prime }))=\sqrt{(x^{\prime }-x)^{2}+(y^{\prime }-y)^{2}}$\ for any $x,y,x^{\prime },y^{\prime }\in \mathbb{R} $. \begin{center} \includegraphics[scale=0.20]{FoldFig1.jpg} \end{center} We fix $\alpha \in \mathbb{R} _{+}^{\ast }$ small compared to $1$. For any $x,y\in \mathbb{Z} $ and any $\zeta ,\eta \in \{+1,-1\}$, we consider (cf. Fig. 1) the \emph{segments of curves} \noindent $C_{H}(x,y,\zeta ,\eta )=[(x+\alpha ,y+\zeta \alpha ),(x+2\alpha ,y)]\cup \lbrack (x+2\alpha ,y),(x+1-2\alpha ,y)]$ $\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cup $ $[(x+1-2\alpha ,y),(x+1-\alpha ,y+\eta \alpha )]$ and \noindent $C_{V}(x,y,\zeta ,\eta )=[(x+\zeta \alpha ,y+\alpha ),(x,y+2\alpha )]\cup \lbrack (x,y+2\alpha ),(x,y+1-2\alpha )]$ $\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cup $ $[(x,y+1-2\alpha ),(x+\eta \alpha ,y+1-\alpha )]$. \noindent We say that $[(x,y),(x+1,y)]$ is the \emph{support} of $C_{H}(x,y,\zeta ,\eta )$\ and $[(x,y),(x,y+1)]$ is the \emph{support} of $C_{V}(x,y,\zeta ,\eta )$. We denote by $C_{H}^{+}(x,y,\zeta ,\eta )$ the segment $C_{H}(x,y,\zeta ,\eta )$\ oriented from left to right, and $C_{H}^{-}(x,y,\zeta ,\eta )$ the segment $C_{H}(x,y,\zeta ,\eta )$\ oriented from right to left. Similarly, we denote by $C_{V}^{+}(x,y,\zeta ,\eta )$ the segment $C_{V}(x,y,\zeta ,\eta )$\ oriented from bottom to top, and $C_{V}^{-}(x,y,\zeta ,\eta )$ the segment $C_{V}(x,y,\zeta ,\eta )$\ oriented from top to bottom. From now on, all the segments considered are oriented.
We associate to $C_{H}^{+}(x,y,\zeta ,\eta )$\ the tile \noindent $P_{H}^{+}(x,y,\zeta ,\eta )=\{(u,v)\in \mathbb{R} ^{2}\mid \left\vert u-(x+1/2)\right\vert +\left\vert v-y\right\vert \leq 1/2\}$ \noindent $\cup \;\{(u,v)\in \mathbb{R} ^{2}\mid \sup (\left\vert u-(x+1-\alpha )\right\vert ,\left\vert v-(y+\eta \alpha )\right\vert )\leq \alpha \}$ \noindent $-\;\{(u,v)\in \mathbb{R} ^{2}\mid \sup (\left\vert u-(x+\alpha )\right\vert ,\left\vert v-(y+\zeta \alpha )\right\vert )<\alpha \}$, \noindent\ and to $C_{H}^{-}(x,y,\zeta ,\eta )$\ the tile \noindent $P_{H}^{-}(x,y,\zeta ,\eta )=\{(u,v)\in \mathbb{R} ^{2}\mid \left\vert u-(x+1/2)\right\vert +\left\vert v-y\right\vert \leq 1/2\}$ \noindent $\cup \;\{(u,v)\in \mathbb{R} ^{2}\mid \sup (\left\vert u-(x+\alpha )\right\vert ,\left\vert v-(y+\zeta \alpha )\right\vert )\leq \alpha \}$ \noindent $-\;\{(u,v)\in \mathbb{R} ^{2}\mid \sup (\left\vert u-(x+1-\alpha )\right\vert ,\left\vert v-(y+\eta \alpha )\right\vert )<\alpha \}$. Similarly, we associate to $C_{V}^{+}(x,y,\zeta ,\eta )$\ the tile \noindent $P_{V}^{+}(x,y,\zeta ,\eta )=\{(u,v)\in \mathbb{R} ^{2}\mid \left\vert u-x\right\vert +\left\vert v-(y+1/2)\right\vert \leq 1/2\}$ \noindent $\cup \;\{(u,v)\in \mathbb{R} ^{2}\mid \sup (\left\vert u-(x+\eta \alpha )\right\vert ,\left\vert v-(y+1-\alpha )\right\vert )\leq \alpha \}$ \noindent $-\;\{(u,v)\in \mathbb{R} ^{2}\mid \sup (\left\vert u-(x+\zeta \alpha )\right\vert ,\left\vert v-(y+\alpha )\right\vert )<\alpha \}$, \noindent\ and to $C_{V}^{-}(x,y,\zeta ,\eta )$\ the tile \noindent $P_{V}^{-}(x,y,\zeta ,\eta )=\{(u,v)\in \mathbb{R} ^{2}\mid \left\vert u-x\right\vert +\left\vert v-(y+1/2)\right\vert \leq 1/2\}$ \noindent $\cup \;\{(u,v)\in \mathbb{R} ^{2}\mid \sup (\left\vert u-(x+\zeta \alpha )\right\vert ,\left\vert v-(y+\alpha )\right\vert )\leq \alpha \}$ \noindent $-\;\{(u,v)\in \mathbb{R} ^{2}\mid \sup (\left\vert u-(x+\eta \alpha )\right\vert ,\left\vert v-(y+1-\alpha )\right\vert )<\alpha \}$. 
We say that two segments $C_{1},C_{2}$ are \emph{consecutive} if they have exactly one common point and if the end of $C_{1}$ is the beginning of $C_{2}$. This property is true if and only if the intersection of the associated tiles consists of one of their four edges (see Fig. 1). The supports of two consecutive segments form a right angle. A\emph{\ finite} (resp. \emph{infinite}, \emph{complete}) \emph{curve} is a sequence $(C_{1},...,C_{n})$ (resp. $(C_{i})_{i\in \mathbb{N} ^{\ast }}$, $(C_{i})_{i\in \mathbb{Z} }$) of consecutive segments which all have distinct supports. We identify two finite curves if they only differ in the beginning of the first segment and the end of the last one. The tiles associated to the segments of a curve are nonoverlapping. If we erase the bumps on their edges, we obtain nonoverlapping square tiles which cover the same part of $\mathbb{R} ^{2}$ if the curve is complete. We consider that two curves $(C_{1},...,C_{m})$ and $(D_{1},...,D_{n})$ can be concatenated if their segments have distinct supports and if the end of the support of $C_{m}$ and the beginning of the support of $D_{1}$ form a right angle. Then we modify the end of $C_{m}$ and the beginning of $D_{1}$\ in order to make them compatible. For each finite curve $(C_{i})_{1\leq i\leq n}$ (resp. infinite curve $(C_{i})_{i\in \mathbb{N} ^{\ast }}$, complete curve $(C_{i})_{i\in \mathbb{Z} }$), we consider the sequence $(\eta _{i})_{1\leq i\leq n-1}$\ (resp. $(\eta _{i})_{i\in \mathbb{N} ^{\ast }}$, $(\eta _{i})_{i\in \mathbb{Z} }$) defined as follows: for each $i$, we write $\eta _{i}=+1$ (resp. $\eta _{i}=-1$) if we turn left (resp. right) when we pass from $C_{i}$\ to $C_{i+1}$. Two curves are associated to the same sequence\ if and only if they are equivalent modulo a positive isometry. For each segment of curve $D$, we denote by $\overline{D}$ the segment obtained from $D$\ by changing the orientation.
If a finite curve $ C=(C_{1},...,C_{n})$ is associated to $S=(\eta _{1},...,\eta _{n-1})$, then $ \overline{C}=(\overline{C}_{n},...,\overline{C}_{1})$ is associated to $ \overline{S}=(-\eta _{n-1},...,-\eta _{1})$. If a complete curve $ C=(C_{i})_{i\in \mathbb{Z} }$ is associated to $S=(\eta _{i})_{i\in \mathbb{Z} }$, then $\overline{C}=(\overline{C}_{-i+1})_{i\in \mathbb{Z} }$ is associated to $\overline{S}=(-\eta _{-i})_{i\in \mathbb{Z} }$. For each segment of curve $D$ with support $[X,Y]$ and oriented from $X$ to $ Y$, we call $X,Y$ the \emph{endpoints}, $X$ the \emph{initial point} and $Y$ the \emph{terminal point} of $D$.\ The \emph{initial point} of a curve $ (C_{1},...,C_{n})$ is the initial point of $C_{1}$, and its \emph{terminal point} is the terminal point of $C_{n}$. The $\emph{vertices}$ of a curve are the endpoints of its segments. We say that two segments of curves, or two curves, are \emph{parallel} (resp. \emph{opposite}) if they are equivalent modulo a translation (resp. a rotation of angle $\pi $). We have $ \mathbb{Z} ^{2}=M_{1}\cup M_{2}$ and $M_{1}\cap M_{2}=\varnothing $ for $ M_{1}=\{(x,y)\in \mathbb{Z} ^{2}\mid x+y$ odd$\}$ and $M_{2}=\{(x,y)\in \mathbb{Z} ^{2}\mid x+y$ even$\}$.\ We denote by $M$ one of these two sets and we consider, on the one hand the curves with supports of length $1$\ and vertices in $ \mathbb{Z} ^{2}$, on the other hand the curves with supports of length $\sqrt{2}$\ and vertices in $M$. \begin{center} \includegraphics[scale=0.17]{FoldFig2.jpg} \end{center} Let $C$ be a segment of the second system, let $X$\ be its initial point and let $X^{\prime }$ be its terminal point. Then, in the first system, there exist two curves $ (A_{1},A_{2})$ and $(B_{1},B_{2})$, associated to the sequences $(-1)$ and $ (+1)$, such that $X$ is the initial point of $A_{1}$ and $B_{1}$, and $ X^{\prime }$ is the terminal point of $A_{2}$ and $B_{2}$ (see Fig. 2A). 
Now, consider in the second system a segment $C^{\prime }$ such that $ (C,C^{\prime })$ is a curve associated to a sequence $(\varepsilon )$\ with $ \varepsilon \in \{+1,-1\}$. Let $X^{\prime \prime }$ be the terminal point of $C^{\prime }$. In the first system, denote by $(A_{1}^{\prime },A_{2}^{\prime })$ and $(B_{1}^{\prime },B_{2}^{\prime })$ the curves associated to the sequences $(-1)$ and $(+1)$, such that $X^{\prime }$ is the initial point of $A_{1}^{\prime }$ and $B_{1}^{\prime }$, and $X^{\prime \prime }$ is the terminal point of $A_{2}^{\prime }$ and $B_{2}^{\prime }$. Then (see Fig. 2B), $(A_{1},A_{2},B_{1}^{\prime },B_{2}^{\prime })$ and $ (B_{1},B_{2},A_{1}^{\prime },A_{2}^{\prime })$ are curves associated to $ (-1,\varepsilon ,+1)$ and $(+1,\varepsilon ,-1)$. Each of these curves has $ X $, $X^{\prime }$, $X^{\prime \prime }$ among its vertices, and crosses the curve $(C,C^{\prime })$\ near $X^{\prime }$. Moreover $(A_{1},A_{2},A_{1}^{ \prime },A_{2}^{\prime })$ and $(B_{1},B_{2},B_{1}^{\prime },B_{2}^{\prime }) $ are not curves. For each curve $(C_{1},...,C_{2n})$ (resp. $(C_{i})_{i\in \mathbb{N} ^{\ast }}$, $(C_{i})_{i\in \mathbb{Z} }$) of the first system and each curve $(D_{1},...,D_{n})$ (resp. $ (D_{i})_{i\in \mathbb{N} ^{\ast }}$, $(D_{i})_{i\in \mathbb{Z} }$) of the second system, we say that $C$ is an \emph{antiderivative} of $D$ or that $D$ is the \emph{derivative} of $C$ if, for each integer $i$: \noindent a) if $D_{i+1}$ exists, then $C_{2i+1}$ and $D_{i+1}$ have the same initial point; \noindent b) if $D_{i}$ exists, then $C_{2i}$ and $D_{i}$ have the same terminal point; \noindent c) if $D_{i}$ and $D_{i+1}$ exist, then $C$ crosses $D$ near the terminal point of $D_{i}$ (we say that $C$ \emph{alternates} around $D$). Each curve of the second system has exactly two antiderivatives in the first one. Each curve $C$ of the first system has at most one derivative in the second one. 
If that derivative exists, then the sequence $(\eta _{1},...,\eta _{2n-1})$ (resp. $(\eta _{i})_{i\in \mathbb{N} ^{\ast }}$, $(\eta _{i})_{i\in \mathbb{Z} }$) of elements of $\{-1,+1\}$\ associated to $C$ satisfies $\eta _{2i+1}=(-1)^{i}\eta _{1}$\ for each integer $i$ such that $\eta _{2i+1}$\ exists. Conversely, if this condition is satisfied, then the \emph{derivative } of $C$ is defined by taking for $M$ the set\ $M_{1}$ or $M_{2}$\ which contains the initial point of $C_{1}$, and replacing each pair of segments $ (C_{2i-1},C_{2i})$\ with a segment $D_{i}$. For the definition of the derivative of a complete curve $(C_{i})_{i\in \mathbb{Z} }$, we permit ourselves to change the initial point of indexation, i.e.\ to replace $(C_{i})_{i\in \mathbb{Z} }$ with $(C_{i+k})_{i\in \mathbb{Z} }$ for an integer $k$. With this convention, the derivative exists if and only if $\eta _{2i}=(-1)^{i}\eta _{0}$\ for each $i\in \mathbb{Z} $ or $\eta _{2i+1}=(-1)^{i}\eta _{1}$\ for each $i\in \mathbb{Z} $.\ If these two conditions are simultaneously satisfied, we obtain two different derivatives; in that case, we consider that the derivative is not defined. This situation never appears for complete folding curves, which will be considered here. We define by induction the\emph{\ }$n$\emph{-th} \emph{derivative} $C^{(n)}$ \ of a curve $C$, with $C^{(0)}=C$ and $C^{(n+1)}$\ derivative of $C^{(n)}$\ for each $n\in \mathbb{N} $, as well as the\emph{\ }$n$\emph{-th} \emph{antiderivatives}. It is convenient to represent the successive derivatives of a curve $C$ on the same figure in such a way that $C^{(n)}$ alternates around $C^{(n+1)}$ for each $n\in \mathbb{N} $\ such that $C^{(n+1)}$ exists. This convention will be used later in the paper. For each $n\in \mathbb{N} $, an $n$\emph{-folding curve} is a curve associated to an $n$-folding sequence. 
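The alternation condition $\eta _{2i+1}=(-1)^{i}\eta _{1}$ and the resulting self-similarity can be seen concretely on the regular paperfolding sequence, the turning sequence of the curve obtained by always folding in the same direction: its odd-indexed terms alternate, and deleting them leaves the same sequence, so the curve is indefinitely derivable. The sketch below is not part of the paper; note the $0$-based list index, so that `eta[2*i]` holds $\eta _{2i+1}$.

```python
# The odd-indexed terms of the regular paperfolding sequence alternate
# (condition for the derivative to exist), and the even-indexed terms
# reproduce the sequence itself (the derivative is again a folding curve).

def paperfold(n):
    """First n terms of the regular paperfolding sequence."""
    seq = []
    for k in range(1, n + 1):
        m = k
        while m % 2 == 0:
            m //= 2
        seq.append(1 if m % 4 == 1 else -1)
    return seq

eta = paperfold(1 << 12)

# Derivability condition: eta_{2i+1} = (-1)^i * eta_1.
assert all(eta[2 * i] == (-1) ** i * eta[0] for i in range(len(eta) // 2))

# The even-indexed subsequence (eta_2, eta_4, ...), which corresponds to
# the turning sequence of the derivative, is the same sequence again.
assert eta[1::2] == paperfold(1 << 11)
```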
For each $n$-folding sequence obtained by folding $n$ times a strip of paper, we obtain the associated $n$-folding curve by keeping the strip folded according to right angles instead of unfolding it completely. \begin{center} \includegraphics[scale=0.18]{FoldFig3.jpg} \end{center} We see by induction on $n$ that the $n$-folding curves are the $n$-th antiderivatives of the curves which consist of one segment. Consequently, up to isometry and up to the orientation, there exist one $2$-folding curve (cf. Fig. 3A), two $3$ -folding curves (cf. Fig. 3B), and four $4$-folding curves (cf. Fig. 3C). We call $\infty $\emph{-folding curve} (resp.\emph{\ finite} \emph{folding curve}, \emph{complete folding curve}) each curve associated to an $\infty $ -folding sequence (resp. a finite folding sequence, a complete folding sequence). Any curve $(C_{i})_{i\in \mathbb{N} ^{\ast }}$ (resp. $(C_{i})_{i\in \mathbb{Z} }$) is an $\infty $-folding curve (resp. a complete folding curve) if and only if it is indefinitely derivable. The successive antiderivatives of a paperfolding curve, as well as its successive derivatives if they exist, are also paperfolding curves. We say that a curve $(C_{1},...,C_{n})$ (resp. $(C_{i})_{i\in \mathbb{N} ^{\ast }}$, $(C_{i})_{i\in \mathbb{Z} }$) is \emph{self-avoiding} if we have $C_{i}\cap C_{j}=\emptyset $ for $ \left\vert j-i\right\vert \geq 2$. Such a curve defines an injective continuous function from a closed connected subset of $ \mathbb{R} $ to $ \mathbb{R} ^{2}$. \noindent \textbf{Proposition 2.1.} Antiderivatives of self-avoiding curves are self-avoiding. \noindent \textbf{Proof.} Consider a curve $C$ whose derivative $D$ is self-avoiding. If $C$ is not self-avoiding, then there exist two segments of $C$ which have the same support. These two segments are necessarily coming from segments of $D$ which have a common endpoint. 
In order to prove that this situation is impossible, we consider the function $\tau $ which is defined on the set of all supports of segments of $D$ with $\tau ([(u,v),(u+1,v+\varepsilon )])=+1$ (resp. $-1$) for each $(u,v)\in \mathbb{Z} ^{2}$ and each $\varepsilon \in \{-1,+1\}$ such that $C$ is above (resp. below) $D$\ on $[(u,v),(u+1,v+\varepsilon )]$. It suffices to observe that the equality $\tau ([(u^{\prime },v^{\prime }),(u^{\prime }+1,v^{\prime }+\varepsilon ^{\prime })])=(-1)^{u^{\prime }-u}\tau ([(u,v),(u+1,v+\varepsilon )])$ is true wherever $\tau $ is defined. In fact, it is true for the supports of consecutive segments of $D$ because $C$ alternates around $D$, and it is proved in the general case by induction on the number of consecutive segments between the two segments considered.~~$\blacksquare $ \noindent \textbf{Corollary 2.2.} Paperfolding curves are self-avoiding. \noindent \textbf{Proof.} For each integer $n$, each $n$-folding curve is self-avoiding because it is the $n$-th antiderivative of a self-avoiding curve which consists of one segment. Each finite folding curve is self-avoiding since it is a subcurve of an $n$-folding curve for an integer $n$. Complete folding curves and $\infty $-folding curves are self-avoiding because their finite subcurves are self-avoiding. Another proof is given by [4, Observation 1.11, p. 134].~~$\blacksquare $ For each self-avoiding curve $C$ and any $x,y\in \mathbb{Z} $, we write: \noindent $\rho _{C}([(x,y),(x+1,y)])=+1$ (resp. $-1$) if $C$ contains a segment with the initial point $(x,y)$\ (resp. $(x+1,y)$) and the terminal point $(x+1,y)$\ (resp. $(x,y)$); \noindent $\rho _{C}([(x,y),(x,y+1)])=+1$ (resp. $-1$) if $C$ contains a segment with the initial point $(x,y)$\ (resp. $(x,y+1)$) and the terminal point $(x,y+1)$\ (resp. $(x,y)$).

There exists $\varepsilon \in \{-1,+1\}$ such that $\rho _{C}([(x,y),(x+1,y)])=(-1)^{y-x+\varepsilon }$ and $\rho _{C}([(x,y),(x,y+1)])=(-1)^{y-x+\varepsilon +1}$ wherever $\rho _{C}$\ is defined. In fact, $\varepsilon $ is the same for the supports of two consecutive segments, and we see that it is the same for the supports of any two segments by induction on the number of consecutive segments between them. We extend the definition of $\rho _{C}$, according to this property, to the set of all intervals $[(x,y),(x+1,y)]$\ or\ $[(x,y),(x,y+1)]$\ with $ x,y\in \mathbb{Z} $. For each self-avoiding curve $C=(C_{i})_{i\in \mathbb{Z} }$ and any pairs $(C_{i},C_{i+1})$, $(C_{j},C_{j+1})$ of consecutive segments, if $C_{i}$ and $C_{j}$ have the same terminal point, then, by the property of $\rho _{C}$\ stated above, we turn left when passing from $C_{i}$ to $C_{i+1}$ if and only if we turn left when passing from $C_{j}$ to $ C_{j+1}$. For each $X\in \mathbb{Z} ^{2}$, we write $\sigma _{C}(X)=+1$\ (resp. $-1$) if $X$ is the common endpoint of two consecutive segments $C_{i}$, $C_{i+1}$ of $C$ and if we turn left (resp. right) when passing from $C_{i}$ to $C_{i+1}$. It follows from the definition of derivatives that, for each $k\in \mathbb{N} $: \noindent a) if $C^{(2k)}$ exists, then there exists $X_{2k}\in \mathbb{Z} ^{2}$\ such that the set of all vertices of $C^{(2k)}$ is contained in $ E_{2k}(C)=X_{2k}+2^{k} \mathbb{Z} ^{2}$; \noindent b) if $C^{(2k+1)}$ exists, then there exists $X_{2k+1}\in \mathbb{Z} ^{2}$\ such that the set of all vertices of $C^{(2k+1)}$ is contained in $ E_{2k+1}(C)=X_{2k+1}+(2^{k},2^{k}) \mathbb{Z} +(2^{k},-2^{k}) \mathbb{Z} $. The set $E_{n}(C)$ is defined for each $n\in \mathbb{N} $\ such that $C^{(n)}$ exists.\ If $C^{(n+1)}$ exists, then we have $ E_{n+1}(C)\subset E_{n}(C)$; we write $F_{n}(C)=E_{n}(C)-E_{n+1}(C)$. 
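The distinctness of supports (Corollary 2.2) and the parity rule for $\rho _{C}$ can both be observed on a concrete folding curve. The sketch below (not part of the paper) walks the curve whose turning sequence is the regular paperfolding sequence, using unit segments in place of the $\alpha $-rounded ones, turning left when $\eta _{i}=+1$ and right when $\eta _{i}=-1$.

```python
# Walk the curve on Z^2: start at (0,0) heading east; after each unit
# step, turn left (eta = +1) or right (eta = -1).  Then check that all
# supports are distinct and that the parity rule for rho_C holds.

def paperfold(n):
    """First n terms of the regular paperfolding sequence."""
    seq = []
    for k in range(1, n + 1):
        m = k
        while m % 2 == 0:
            m //= 2
        seq.append(1 if m % 4 == 1 else -1)
    return seq

x, y, dx, dy = 0, 0, 1, 0
edges = []                               # ((type, u, v), direction sign)
for eta in paperfold(1 << 12) + [0]:     # a final step with no turn
    x2, y2 = x + dx, y + dy
    if dx:                               # horizontal support [(u,v),(u+1,v)]
        edges.append((('H', min(x, x2), y), 1 if dx > 0 else -1))
    else:                                # vertical support [(u,v),(u,v+1)]
        edges.append((('V', x, min(y, y2)), 1 if dy > 0 else -1))
    x, y = x2, y2
    if eta == 1:
        dx, dy = -dy, dx                 # turn left
    elif eta == -1:
        dx, dy = dy, -dx                 # turn right

# Self-avoidance: no support is traversed twice.
supports = [support for support, sign in edges]
assert len(set(supports)) == len(supports)

# Parity rule: the sign on a horizontal support [(u,v),(u+1,v)] is
# (-1)^(v-u+eps), on a vertical support [(u,v),(u,v+1)] it is
# (-1)^(v-u+eps+1), for one fixed eps in {0, 1}.
eps = 0 if edges[0][1] == 1 else 1       # first support is [(0,0),(1,0)]
for (kind, u, v), sign in edges:
    assert sign == (-1) ** (v - u + eps + (0 if kind == 'H' else 1))
```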
If $S=(\eta _{i})_{i\in \mathbb{Z} }$ is the sequence associated to a complete folding curve $C=(C_{i})_{i\in \mathbb{Z} }$, then, for each $i\in \mathbb{Z} $ and each $n\in \mathbb{N} $, the terminal point of $C_{i}$\ belongs to $E_{n}(C)$\ if and only if $i$\ belongs to $E_{n}(S)$. The following lemma applies, in particular, to complete folding curves: \noindent \textbf{Lemma 2.3.} Let $C$ be a derivable self-avoiding complete curve. Consider a square $Q=[x,x+1]\times \lbrack y,y+1]$ with $x,y\in \mathbb{Z} $. If four vertices of $Q$ are endpoints of segments of $C$, then at least three segments of $C$, with two of them consecutive, have supports which are edges of $Q$. If three vertices of $Q$ are endpoints, then the two edges determined by these vertices are supports of segments of $C$, or neither of them is a support. If $C$ is derivable twice and if two vertices of $Q$ are endpoints, then they are necessarily adjacent. \begin{center} \includegraphics[scale=0.17]{FoldFig4.jpg} \end{center} \noindent \textbf{Proof.} We denote by $W,X,Y,Z$ the vertices of $Q$ taken consecutively, and we show that the cases excluded by the Lemma are impossible. First suppose\ that an edge of $Q$, for instance $WX$, is the support of a segment of $C$, that a vertex of $Q$ which does not belong to this edge, for instance $Z$, is a vertex of $C$, and that the edges of $Q$ which contain this vertex are not supports of segments of $C$. Consider the two pairs of consecutive segments of $C$ which respectively have $W$ and $Z$\ as a common endpoint. Then the property of $\rho _{C}$ implies that these two pairs both have the orientation shown by Figure 4A, or both have the contrary orientation, which contradicts the connectedness of $C$ (see Fig. 4A). Then suppose that two opposite edges of $Q$, for instance $WX$ and $YZ$, are supports of segments of $C$, and that the two preceding segments, as well as the two following segments, have supports which are not edges of $Q$. 
Then the property of $\rho _{C}$ implies that the two sequences of three consecutive segments formed from these six segments both have the orientation shown by Figure 4B, or both have the contrary orientation. Suppose for instance that $X$ and $Z$ belong to $F_{0}(C)$. Then the two pairs of segments of $C$, extracted from the two sequences, which respectively have $X$ and $Z$ as a common endpoint, give opposite segments of $D$, the derivative of $C$, which contradicts the property of $\rho _{D}$ (see Fig. 4B). Now suppose that $W,X,Y,Z$ are vertices of $C$ and that no edge of $Q$ is the support of a segment of $C$. Then the property of $\rho _{C}$ implies that the pairs of segments of $C$ which respectively have $W,X,Y,Z$ as a common endpoint all have the orientation shown by Figure 4C, or all have the contrary orientation. Suppose for instance that $W$ and $Y$ belong to $F_{0}(C)$. Then the pairs of segments of $C$ which respectively have $W$ and $Y$ as a common endpoint give opposite segments of $D$, which contradicts the property of $\rho _{D}$ (see Fig. 4C). Finally suppose that $C$ is derivable twice and that only two opposite vertices of $Q$, for instance $W$ and $Y$, are vertices of $C$. If $W$ and $Y$ belong to $F_{0}(C)$, then, as in the previous case, the pairs of segments of $C$ which respectively have $W$ and $Y$ as a common endpoint give opposite segments of $D$, which contradicts the property of $\rho _{D}$. If $W$ and $Y$ belong to $E_{1}(C)$, we consider the two squares of width $\sqrt{2}$ which have $WY$ as their common edge. As $W$ and $Y$ are vertices of $D$, one of the squares has an edge adjacent to $WY$ which is the support of a segment of $D$.
On the other hand, as the center $X$ or $Z$ of that square is not a vertex of $C$, the edge $WY$ and the opposite edge are not supports of segments of $D$, which contradicts the first two statements of the Lemma applied to $D$.~~$\blacksquare $ \noindent \textbf{Definitions.} For any $X,Y\in \mathbb{Z} ^{2}$, a \emph{path} from $X$ to $Y$ is a sequence $(X_{0},...,X_{n})\subset \mathbb{Z} ^{2}$\ with $n\in \mathbb{N} $, $X_{0}=X$, $X_{n}=Y$ and $d(X_{i-1},X_{i})=1$ for $1\leq i\leq n$. For each complete curve $C$, we call \emph{exterior} of $C$ and we denote by $\mathrm{Ext}(C)$ the set of all points of $\mathbb{Z} ^{2}$ which are not vertices of $C$. A \emph{connected component} of $\mathrm{Ext}(C)$ is a subset $K$ which is maximal for the following property: any two points of $K$ are connected by a path which only contains points of $K$. \noindent \textbf{Theorem 2.4.} The exterior $\mathrm{Ext}(C)$ of a complete folding curve $C$ is the union of $0$, $1$ or $2$ infinite connected components, and each of these components is the intersection of $\mathrm{Ext}(C)$ with one of the $2$ connected components of $\mathbb{R} ^{2}-C$. \noindent \textbf{Lemma 2.4.1.} The connected components of $\mathrm{Ext}(C)$\ are infinite. \noindent \textbf{Proof of the Lemma.} For each $n\in \mathbb{N} $,\ we have $\mathrm{Ext}(C^{(n)})=\mathrm{Ext}(C)\cap E_{n}(C)$ since each point of $E_{n}(C)$ is a vertex of $C^{(n)}$ if and only if it is a vertex of $C$. If $K$ is a connected component of $\mathrm{Ext}(C)$, then $K\cap E_{n}(C)$ is a union of connected components of $\mathrm{Ext}(C^{(n)})$ for each $n\in \mathbb{N} $. Otherwise, the smallest integer $n$ such that this property is false satisfies $n\geq 1$, and $E_{n-1}(C)$ contains the consecutive vertices $W,X,Y,Z$ of a square of width $(\sqrt{2})^{n-1}$ with $W,Y\in \mathrm{Ext}(C^{(n)})$, $W\in K$, $Y\notin K$ and $X,Z$\ vertices of $C^{(n-1)}$, which contradicts Lemma 2.3 applied to $C^{(n-1)}$.
For each connected component $K$ of $\mathrm{Ext}(C)$ and each $n\in \mathbb{N} $,\ we have $\varnothing \subsetneq K\cap E_{n+1}(C)\subsetneq K\cap E_{n}(C)$, or $K\cap E_{n}(C)$\ is a union of connected components which consist of one point; in fact, for any $X,Y\in E_{n}(C)$\ with $d(X,Y)=(\sqrt{2})^{n}$, we have $X\in F_{n}(C)$\ and $Y\in E_{n+1}(C)$, or $Y\in F_{n}(C)$\ and $X\in E_{n+1}(C)$. Consequently, in order to prove that $\mathrm{Ext}(C)$\ has no finite connected component, it suffices to show that each $\mathrm{Ext}(C^{(n)})$\ has no connected component which consists of one point. Suppose that there exist a twice derivable complete curve $D$ and a point $U=(u,v)\in \mathbb{Z} ^{2}$ such that $\left\{ U\right\} $ is a connected component of $\mathrm{Ext}(D)$. Write $W=(u-1,v)$, $X=(u,v+1)$, $Y=(u+1,v)$ and $Z=(u,v-1)$. If $U$ belongs to $F_{0}(D)$, then $W,X,Y,Z$\ belong to $E_{1}(D)$ and they are vertices of $D^{(1)}$. By Lemma 2.3, two consecutive segments of $D^{(1)}$ have supports which are edges of $WXYZ$. As $D$\ alternates around $D^{(1)}$, it follows that $U$ is a vertex of $D$, whence a contradiction. If $U$ belongs to $E_{1}(D)$, consider the point $S$ (resp. $T$) which forms a square with $X,U,W$ (resp. $X,U,Y$). Then $SX$ or $XT$ is the support of a segment of $D$ since $X$ is a vertex of $D$. Suppose for instance that $XT$ is the support of a segment of $D$. As $Y$ is a vertex of $D$ contrary to $U$, Lemma 2.3 implies that $TY$ is also the support of a segment of $D$. As $X$ and $Y$ belong to $F_{0}(D)$, it follows from the property of $\rho _{D}$ (see Fig. 4D) that there exist two parallel segments of $D^{(1)}$ such that $T$ is the terminal point of one of them and the initial point of the other one, whence a contradiction.~~$\blacksquare $ \noindent \textbf{Proof of the Theorem.} Write $C=(C_{i})_{i\in \mathbb{Z} }$.\ Consider a connected component $K$ of $\mathrm{Ext}(C)$ and write $M=\{X\in \mathbb{Z} ^{2}-K\mid d(X,K)=1\}$.
Denote by $\Omega $\ the set of all squares $S=[x,x+1]\times \lbrack y,y+1]$ with $x,y\in \mathbb{Z} $ such that $K$\ contains one or two vertices of $S$. For each $S\in \Omega $ , if $K$ contains one vertex $X$ of $S$, consider the segment $E_{S}$ of length $\sqrt{2}$ joining the vertices of $S$ adjacent to $X$. If $K$ contains two vertices of $S$, consider the segment $E_{S}$ of length $1$ joining the two other vertices, which are adjacent by Lemma 2.3. The endpoints of the segments $E_{S}$ for $S\in \Omega $ belong to $M$. We are going to prove that each $U\in M$ is an endpoint of exactly two such segments.\ We consider the points $V,W,X,Y\in \mathbb{Z} ^{2}$\ with $d(U,V)=d(U,W)=d(U,X)=d(U,Y)=1$ such that $VWXY$\ is a square, $ V,W$\ are vertices of $C$, and $X\in K$. We denote by $P,Q,R,S$\ the squares determined by the pairs of edges $(UV,UW)$, $(UW,UX)$, $(UX,UY)$, $(UY,UV)$. As $C$\ is connected, the fourth vertex of $P$\ does not belong to $K$, and $ P$ does not belong to $\Omega $. On the other hand, $Q$\ belongs to $\Omega $ \ since $X$\ belongs to $K$\ contrary to $U$\ and $W$. If $Y$ is a vertex of $C$, then $R$\ belongs to $\Omega $\ since $X$\ belongs to $K$\ and $U,Y$\ do not belong to $K$. Moreover, Lemma 2.3 implies that the fourth vertex of $S$\ is a vertex of $C$, since $UV$\ is the support of a segment of $C$\ contrary to $UY$. Consequently, $S$\ does not belong to $\Omega $. If $Y$ is not a vertex of $C$, then, by Lemma 2.3, the fourth vertex of $R$\ is not a vertex of $C$, and therefore belongs to $K$. Consequently, $Y$\ belongs to $K$, $S$\ belongs to $\Omega $\ and $R$\ does not belong to $ \Omega $. Moreover, $U$\ is the common endpoint of $E_{Q}$ and $E_{R}$ if $Y$\ is a vertex of $C$, and the common endpoint of $E_{Q}$ and $E_{S}$ if $Y$\ is not a vertex of $C$. As $K$ is infinite by Lemma 2.4.1, it follows that the segments $E_{S}$ for $ S\in \Omega $ form an unbounded self-avoiding curve $E$. 
The vertices of $E$\ are the points of $M$. One connected component of $\mathbb{R} ^{2}-E$ contains $K$, but contains no point of $C$\ and no point of\ $\mathbb{Z} ^{2}$\ which does not belong to $K$. The points of $M$ taken along $E$ form a sequence $(X_{i})_{i\in \mathbb{Z} }$. For each $X\in M$, denote by $r(X)$ the unique integer $j$ such that $X$ is the terminal point of $C_{j}$ and the initial point of $C_{j+1}$. Suppose for instance $r(X_{0})<r(X_{1})$. Then, using the connectedness of $C$, we see by induction on $i$ that $r(X_{i})<r(X_{i+1})$ for each\ $i\in \mathbb{Z} $. Now suppose that there exists a connected component $L\neq K$ of $\mathrm{Ext}(C)$ such that $K$ and $L$ are contained in the same connected component of $\mathbb{R} ^{2}-C$. Then there exists\ $i\in \mathbb{Z} $ such that $L$ is contained in the loop formed by $C$ between $X_{i}$ and $X_{i+1}$, and $L$ is finite, contrary to Lemma 2.4.1.~~$\blacksquare $

Now, we apply Theorem 2.4 to limits of complete folding curves. We give fewer details in this part, which will not be used in the remainder of the paper. We consider some complete folding curves $C_{n}=(C_{n,p})_{p\in \mathbb{Z} }$\ with $C_{n}=C_{n+1}^{(1)}$\ for each $n\in \mathbb{N} $.\ We suppose that the curves $C_{n}$\ are represented on the same figure in such a way that:

\noindent - $C_{0}$ has vertices in $\mathbb{Z} ^{2}$\ and supports of length $1$;

\noindent - all the segments $C_{n,1}$\ have the same initial point;

\noindent - $C_{n+1}$\ alternates around $C_{n}$\ for each $n\in \mathbb{N} $.

\noindent We denote by $L$ the limit of the curves $C_{n}$ considered as representations of functions from $\mathbb{R} $\ to $\mathbb{R} ^{2}$. The curve $L$ is associated to a function from $\mathbb{R} $\ to $\mathbb{R} ^{2}$\ which is continuous everywhere, but derivable nowhere. Moreover, $L$\ is closed as a subset of $\mathbb{R} ^{2}$. By Theorem 3.1 below, $L$\ contains arbitrarily large open balls.
It follows from Proposition 2.6 and Theorem 3.1 that $L$ is equal to the closure of its interior. Now, we consider the complete folding sequences $(\eta _{n,p})_{p\in \mathbb{Z} }$\ associated to the curves $C_{n}$. We say that $(\eta _{n,1})_{n\in \mathbb{N} }$\ is \emph{ultimately alternating} if we have $\eta _{n,1}=(-1)^{n}$\ for $ n$\ large enough, or\ $\eta _{n,1}=(-1)^{n+1}$\ for $n$\ large enough. \noindent \textbf{Corollary 2.5.} If $(\eta _{n,1})_{n\in \mathbb{N} }$\ is not ultimately alternating, then $ \mathbb{R} ^{2}-L$ is the union of $0$, $1$\ or $2$\ unbounded connected components. \noindent \textbf{Proof.} For each $n\in \mathbb{N} $, we have $\mathrm{Ext}(C_{n})\subset \mathbb{R} ^{2}-L$ since $(\eta _{m,1})_{m\in \mathbb{N} }$\ is not ultimately alternating. By Theorem 2.4, it suffices to show that,\ for each $n\in \mathbb{N} $, any $X,X^{\prime }\in $ $\mathrm{Ext}(C_{n})$ belong to the same connected component of $ \mathbb{R} ^{2}-L$ if they belong to the same connected component of $\mathrm{Ext} (C_{n})$. Moreover, it is enough to prove this property when $d(X,X^{\prime })=(1/\sqrt{2})^{n}$. We consider some distinct points $Y,Y^{\prime },Z,Z^{\prime },U,V$\ with $ YY^{\prime }$\ and $ZZ^{\prime }$\ parallel to $XX^{\prime }$\ such that $ XX^{\prime }Y^{\prime }Y$ (resp. $XX^{\prime }Z^{\prime }Z$) is a square of center $U$ (resp. $V$). Then $XX^{\prime }$, $XY$, $XZ$, $X^{\prime }Y^{\prime }$, $X^{\prime }Z^{\prime }$ are not supports of segments of $ C_{n}$. Consequently, at least one of the points $U,V$ is not a vertex of $ C_{n+1}$. If $U$ ( resp. $V$) is not a vertex of $C_{n+1}$, then the open triangle $ XUX^{\prime }$\ (resp. $XVX^{\prime }$) is contained in $ \mathbb{R} ^{2}-L$. 
It follows that $X$\ and $X^{\prime }$ belong to the same connected component of $ \mathbb{R} ^{2}-L$.~~$\blacksquare $ \noindent \textbf{Remark.} Let $L$ be the limit of the curves $C_{n}$, where $C_{0}$\ is the curve $C$\ of Example 3.13 below and $\eta _{n,1}=(-1)^{n+1}$ \ for $n\geq 1$. Then we have $[(0,1),(2,1)]\subset L$ even though $(1,1)$ is not a vertex of $C_{0}$. It follows that $(1,0)$\ belongs to a bounded connected component of $ \mathbb{R} ^{2}-L$. It can be proved that $ \mathbb{R} ^{2}-L$ has infinitely many such components. \noindent \textbf{Examples. }The limit curve $L$\ obtained from the curve $C$ of Example 3.13 by writing $\eta _{n,1}=+1$ for $n\geq 1$ is called a \emph{ dragon curve}. It follows from [6] that the interior of $L$ is a union of countably many bounded connected components. According to [2], the boundary of $L$ is a fractal. On the other hand, Example 3.8 gives a case with $L= \mathbb{R} ^{2}$. Using a similar construction, we can obtain $L$ such that its boundary is a line. If $L$ is one of the limit curves obtained from the curves of Example 3.14 by writing $\eta _{n,1}=(-1)^{n+1}$\ for $n\geq 1$, then its boundary is the union of two intersecting lines, or the union of two half-lines with a common endpoint. The Proposition below is not a priori obvious since two curves associated to the same folding sequence are not necessarily parallel. \noindent \textbf{Proposition 2.6.} For each complete folding curve $ C=(C_{h})_{h\in \mathbb{Z} }$, for each $n\in \mathbb{N} $ and for any $i,j\in \mathbb{Z} $, $(C_{j+1},...,C_{j+88n})$\ contains a curve which is parallel and another curve which is opposite to $(C_{i+1},...,C_{i+n})$. For the proof of Proposition 2.6, we fix $C,n,i,j$ and we write $ A=(C_{i+1},...,C_{i+n})$. We consider the integer $m$ such that $ 2^{m-1}<n\leq 2^{m}$, and the sequence $S=(\eta _{h})_{h\in \mathbb{Z} }$ associated to $C$. 
\noindent \textbf{Lemma 2.6.1.} There exists $k\in E_{m+2}(S)$\ such that $ (C_{k+1},...,C_{k+2^{m+2}})$ contains a subcurve which is parallel or opposite to $A$. \noindent \textbf{Proof of the Lemma.} We can suppose that there exists $ l\in E_{m+2}(S)$ such that $i+1\leq l\leq i+n-1$ since, otherwise, there exists $k\in E_{m+2}(S)$\ such that $A\subset (C_{k+1},...,C_{k+2^{m+2}})$. Then, for each $h\in \{i+1,...,i+n-1\}-\{l\}$, we have $h\in \mathbb{Z} -E_{n}$, and therefore $\eta _{h}=\eta _{h+2^{m+1}+r.2^{m+2}}$ for each $ r\in \mathbb{Z} $. We also have $\eta _{l+2^{m+1}+r.2^{m+2}}=(-1)^{r}\eta _{l+2^{m+1}}$ for each $r\in \mathbb{Z} $. Consequently, there exists $r\in \mathbb{Z} $ such that $\eta _{l}=\eta _{l+2^{m+1}+r.2^{m+2}}$, and therefore $\eta _{h}=\eta _{h+2^{m+1}+r.2^{m+2}}$\ for $i+1\leq h\leq i+n-1$. Now the supports of $C_{i+1}$ and $C_{i+1+2^{m+1}+r.2^{m+2}}$\ are parallel or opposite since $(i+1+2^{m+1}+r.2^{m+2})-(i+1)$\ is even. It follows that \noindent $(C_{i+1+2^{m+1}+r.2^{m+2}},...,C_{i+n+2^{m+1}+r.2^{m+2}})$ is parallel or opposite to $A$.~~$\blacksquare $ \noindent \textbf{Proof of the Proposition.} By Lemma 2.6.1, we can suppose that there exists $k\in E_{m+2}(S)$ such that $k\leq i$ and $k+2^{m+2}\geq i+n$. For each $r\in \mathbb{Z} $, we write $C_{r}^{\ast }=(C_{k+1+r.2^{m+2}},...,C_{k+(r+1).2^{m+2}})$. We consider the $(m+2)$-th derivative $D=(D_{r})_{r\in \mathbb{Z} }$\ of $C$, indexed in such a way that, for each $r\in \mathbb{Z} $, the initial point of $D_{r}$ is the initial point of $C_{k+1+r.2^{m+2}}$. For any $r,s\in \mathbb{Z} $, we have \noindent $(\eta _{k+1+r.2^{m+3}},...,\eta _{k+2^{m+2}-1+r.2^{m+3}})=(\eta _{k+1+s.2^{m+3}},...,\eta _{k+2^{m+2}-1+s.2^{m+3}})$. \noindent Consequently, $C_{2r}^{\ast }$ and $C_{2s}^{\ast }$\ are parallel (resp. opposite) if and only if $D_{2r}$\ and $D_{2s}$\ are parallel (resp. opposite). The same property is true for the copies of $A$\ contained in $ C_{2r}^{\ast }$ and $C_{2s}^{\ast }$. 
As we have $88n>11.2^{m+2}$, there exists $r\in \mathbb{Z} $ such that $(C_{j+1},...,C_{j+88n})$\ contains $(C_{2r}^{\ast },...,C_{2r+8}^{\ast })$. Moreover, $(D_{2r},...,D_{2r+8})$ is necessarily contained in a $4$-folding curve. It follows (see Fig. 3C) that there exist $ p,q\in \{0,1,2,3,4\}$\ such that $D_{2r+p}$ and $D_{2r+q}$ are opposite. Then $C_{2r+p}^{\ast }$ and $C_{2r+q}^{\ast }$ are also opposite, as well as the copies of $A$ that they contain. Consequently, one of these copies is parallel to $A$.~~$\blacksquare $ \textbf{3. Coverings of $ \mathbb{R} ^{2}$ by sets of disjoint complete folding curves.} Consider $\Omega = \mathbb{R} ^{2}$ or $\Omega =[a,b]\times \lbrack c,d]$\ with\ $a,b,c,d\in \mathbb{Z} $, $b\geq a+1$ and $d\geq c+1$. Let $\mathcal{E}$\ be a set of segments with supports contained in $\Omega $. We say that $\mathcal{E}$\ is a \emph{ covering} of $\Omega $\ if it satisfies the following conditions: \noindent 1) each interval $[(x,y),(x+1,y)]$ or $[(x,y),(x,y+1)]$ contained in $\Omega $, with $x,y\in \mathbb{Z} $, is the support of a unique segment of $\mathcal{E}$; \noindent 2) if two distinct non consecutive segments of $\mathcal{E}$ have a common endpoint $X$, then they can be completed into pairs of consecutive segments, with all four segments having distinct supports which contain $X$. \noindent The set $\mathcal{E}$\ is a covering of $ \mathbb{R} ^{2}$ (resp. $[a,b]\times \lbrack c,d]$) if and only if the tiles associated to the segments belonging to $\mathcal{E}$\ form a tiling of $ \mathbb{R} ^{2}$ (resp. a patch which covers $[a,b]\times \lbrack c,d]$). For each covering $\mathcal{E}$ of $ \mathbb{R} ^{2}$, if each finite sequence of consecutive segments belonging to $ \mathcal{E}$\ is a folding curve, then $\mathcal{E}$\ is a \emph{covering of }$ \mathbb{R} ^{2}$\emph{\ by complete folding curves}, in the sense that $\mathcal{E}$ is a union of disjoint complete folding curves. 
We say that a covering of $ \mathbb{R} ^{2}$ by complete folding curves \emph{satisfies the local isomorphism property} if the associated tiling satisfies the local isomorphism property. Two such coverings are said to be \emph{locally isomorphic} if the associated tilings are locally isomorphic. It often happens that a covering of $ \mathbb{R} ^{2}$ by complete folding curves satisfies the local isomorphism property. In particular, we show that this property is satisfied by any covering of $ \mathbb{R} ^{2}$ by $1$ complete folding curve, or by $2$ complete folding curves associated to the \textquotedblleft positive\textquotedblright\ folding sequence. Two complete folding curves associated to the \textquotedblleft alternating\textquotedblright\ folding sequence do not give a covering of $ \mathbb{R} ^{2}$, but we prove that a covering of $ \mathbb{R} ^{2}$ which satisfies the local isomorphism property is obtained naturally from these $2$ curves by adding $4$ other complete folding curves. We characterize the coverings of $ \mathbb{R} ^{2}$ by sets of complete folding curves which satisfy the local isomorphism property, and the pairs of locally isomorphic such coverings. We show that each complete folding curve can be completed in a quasi-unique way into such a covering and that, for each complete folding sequence $S$, there exists a covering of $ \mathbb{R} ^{2}$ by a complete folding curve associated to a sequence which is locally isomorphic to $S$. Finally, we prove that each complete folding curve covers a \textquotedblleft significant\textquotedblright\ part of $ \mathbb{R} ^{2}$. In that way, we show that the maximum number of disjoint complete folding curves in $ \mathbb{R} ^{2}$, and therefore the maximum number of complete folding curves in a covering of $ \mathbb{R} ^{2}$, is at most $24$. We also prove that such a covering cannot contain more than $6$ curves if it satisfies the local isomorphism property. 
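\noindent \textbf{Remark.} The objects above are easy to experiment with on a computer. The following sketch is only an illustration, not part of the arguments of this paper: it uses the classical \textquotedblleft positive\textquotedblright\ folding sequence, and the left/right turn convention is a choice made here.

```python
# Illustrative sketch (assumptions noted above): a folding curve on Z^2
# built from the classical "positive" (regular paperfolding) sequence.

def fold(n):
    """n-th term (n >= 1) of the regular paperfolding sequence, in {-1, +1}."""
    while n % 2 == 0:          # strip factors of 2: n = k * 2^m with k odd
        n //= 2
    return 1 if n % 4 == 1 else -1

def curve(steps):
    """Vertices of the curve: unit segments in Z^2, with a 90-degree
    turn (left for +1, right for -1) after each segment."""
    x, y, dx, dy = 0, 0, 1, 0
    pts = [(0, 0)]
    for i in range(1, steps + 1):
        x, y = x + dx, y + dy
        pts.append((x, y))
        if fold(i) == 1:
            dx, dy = -dy, dx   # turn left
        else:
            dx, dy = dy, -dx   # turn right
    return pts

pts = curve(2 ** 10)
edges = [frozenset(e) for e in zip(pts, pts[1:])]
# A folding curve never runs through the same unit segment twice,
# although it may revisit a vertex:
assert len(edges) == len(set(edges))
```

With the same walking routine, replacing \texttt{fold} by another $\pm 1$ sequence produces the curve associated to that folding sequence.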
The following result will be used frequently in the proofs:

\noindent \textbf{Theorem 3.1.} There exists a function $f: \mathbb{N} \rightarrow \mathbb{N} $\ with exponential growth such that, for each integer $n$, each $n$-folding curve contains a covering of a square $[x,x+f(n)]\times \lbrack y,y+f(n)]$ with $x,y\in \mathbb{Z} $.

\begin{center}
\includegraphics[scale=0.15]{FoldFig5.jpg}
\end{center}

\noindent \textbf{Proof.} By Figure 3C, each $4$-folding curve contains a copy, up to isometry and modulo the orientation, of the curve given by Figure 5A. Consequently, each $5$-folding curve contains a copy of the curve given by Figure 5B, and each $6$-folding curve contains a copy of one of the two curves given by Figures 5C and 5D. For each $X=(x,y)\in \mathbb{Z} ^{2}$ and each $k\in \mathbb{N} ^{\ast }$, we write $L(X,k)=\{(u,v)\in \mathbb{R} ^{2}\mid \left\vert u-x\right\vert +\left\vert v-y\right\vert \leq k\}$. We say that a folding curve $C$ covers $L(X,k)$ if each interval $[(u,v),(u+1,v)]$ or $[(u,v),(u,v+1)]$ with $u,v\in \mathbb{Z} $ contained in $L(X,k)$ is the support of a segment of $C$. The curve in Figure 5C covers $L(Y,2)$, and each of its two antiderivatives covers $L(X,2)$ where $X$ is the point corresponding to $Y$. Each antiderivative of the curve in Figure 5D covers $L(W,2)$ or $L(X,2)$ where $W$ and $X$ are the points corresponding to $Y$ and $Z$. Consequently, each $7$-folding curve covers an $L(U,2)$. For each $k\in \mathbb{N} ^{\ast }$, if a folding curve $C$ covers an $L(X,k)$, then each second antiderivative $D$ of $C$ covers an $L(Y,2k-1)$. More precisely, if we put $D$ on the figure containing $C$, then $D$ covers $L(X,k-1/2)$, and we obtain a curve which covers an $L(Y,2k-1)$ when we apply a homothety of ratio $2$\ in order to give the length $1$ to the supports of the segments of $D$.
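The doubling step can be solved explicitly (a side computation added here for the reader's convenience): starting from $k_{0}=2$ for $7$-folding curves, each pair of antiderivatives gives $k_{j+1}=2k_{j}-1$, so
\[
k_{j}=2^{j}(k_{0}-1)+1=2^{j}+1 .
\]
As $L(X,k)$ contains a square of side of order $k$, a $(2j+7)$-folding curve covers a square of side of order $2^{j}$; hence $f$ may indeed be taken with exponential growth, of order $(\sqrt{2})^{n}$ in $n$.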
We see by induction on $n$ that each $(2n+7)$-folding curve covers an $ L(X,2^{n}+1)$.~~$\blacksquare $ For each complete folding curve $C=(C_{i})_{i\in \mathbb{Z} }$, there are two possibilities: \noindent - either all the segments $C_{i}$ for $i\in E_{1}$\ have horizontal supports, and we say that $C$ has the \emph{type }$H$; \noindent - or all the segments $C_{i}$ for $i\in E_{1}$\ have vertical supports, and we say that $C$ has the \emph{type }$V$. \noindent \textbf{Theorem 3.2.} Let $\mathcal{C},\mathcal{D}$ be coverings of $ \mathbb{R} ^{2}$ by sets of complete folding curves which satisfy the local isomorphism property. Then $\mathcal{C}$ and $\mathcal{D}$ are locally isomorphic if and only if each curve of $\mathcal{C}$ and each curve of $\mathcal{D}$ have the same type and have locally isomorphic associated sequences. \noindent \textbf{Remark.} In particular, if a covering of $ \mathbb{R} ^{2}$ by a set of complete folding curves satisfies the local isomorphism property, then all the curves have the same type and have locally isomorphic associated sequences. \noindent \textbf{Proof of the Theorem.} First we show that the condition is necessary. Let $C=(C_{i})_{i\in \mathbb{Z} }$ be a curve of $\mathcal{C}$ and let $D=(D_{i})_{i\in \mathbb{Z} }$ be a curve of $\mathcal{D}$. Consider a finite subcurve $F$ of $C$. As $\mathcal{C}$ satisfies the local isomorphism property, there exists an integer $n$ such that each square $[x,x+n]\times \lbrack y,y+n]$ with $ x,y\in \mathbb{Z} $ contains a subcurve of a curve of $\mathcal{C}$ which is parallel to $F$. As $\mathcal{C}$ is locally isomorphic to $\mathcal{D}$, each square $ [x,x+n]\times \lbrack y,y+n]$ with $x,y\in \mathbb{Z} $ also contains a subcurve of a curve of $\mathcal{D}$ which is parallel to $ F$. According to Theorem 3.1, there exist $x,y\in \mathbb{Z} $ such that $[x,x+n]\times \lbrack y,y+n]$ is covered by $D$. It follows that $D$ contains a subcurve which is parallel to $F$. 
In particular, the folding sequence associated to $F$ is a subword of the folding sequence associated to $D$. Now, take for $F$ a $3$-folding subcurve $(C_{i+1},...,C_{i+8})$ of $C$, and consider $j\in \mathbb{Z} $ such that $(D_{j+1},...,D_{j+8})$ is parallel to $F$. Then we have $i\in E_{1}(C)$ and $j\in E_{1}(D)$. It follows that $C$ and $D$ have the same type. It remains to be proved that the condition is sufficient. We show that, for each finite set $\mathcal{E}$ of tiles, if $\mathcal{C}$ contains the image of $\mathcal{E}$\ under a translation, then $\mathcal{D}$ contains the image of $\mathcal{E}$\ under a translation; the converse can be proved in the same way. As $\mathcal{C}$ satisfies the local isomorphism property, there exists an integer $m$ such that each square $[a,a+m]\times \lbrack b,b+m]$ contains a set of tiles of $\mathcal{C}$ which is the image of $\mathcal{E}$ under a translation.\ By Theorem 3.1, there exists an integer $n$ such that each $n$ -folding curve contains a covering of a square $[a,a+m]\times \lbrack b,b+m]$ . For such an $n$, each $n$-folding subcurve of a curve of $\mathcal{C}$ contains the image of $\mathcal{E}$ under a translation. We can take $n\geq 3$. Then each $n$-folding subcurve $F$ of a curve of $ \mathcal{D}$ is parallel or opposite to a subcurve of a curve of $\mathcal{C} $ since each curve of $\mathcal{C}$ and each curve of $\mathcal{D}$ have the same type and have locally isomorphic associated sequences. Now, it follows from Proposition 2.6 applied to $\mathcal{C}$ that such an $F$ is parallel to a subcurve of a curve of $\mathcal{C}$, and therefore contains the image of $\mathcal{E}$ under a translation.~~$\blacksquare $ For any disjoint complete folding curves $C,D$, we call \emph{boundary between }$C$\emph{\ and }$D$ the set of all points of $ \mathbb{Z} ^{2}$ which are vertices of two segments of $C$ and two segments of $D$ . 
\noindent \textbf{Proposition 3.3.} Let $n\geq 1$\ be an integer and let $ \mathcal{C}$ be a covering of $ \mathbb{R} ^{2}$ by a set of complete folding curves which satisfies the local isomorphism property. Then the curves of $\mathcal{C}$ define the same $ E_{n} $, and their $n$-th derivatives form a covering of $ \mathbb{R} ^{2}$ by a set of complete folding curves which satisfies the local isomorphism property. If the boundary between two curves $C,D\in \mathcal{C}$ is nonempty, then the boundary between $C^{(n)}$ and $D^{(n)}$ is nonempty. \noindent \textbf{Proof.} By induction, it suffices to show the Proposition for $n=1$. Consequently, it suffices to prove that, for each covering $ \mathcal{C}$ of $ \mathbb{R} ^{2}$ by a set of complete folding curves which satisfies the local isomorphism property, if the boundary between two curves $C,D\in \mathcal{C}$ is nonempty, then we have $E_{1}(C)=E_{1}(D)$, the curves $C^{(1)}$ and $ D^{(1)}$ are disjoint and the boundary between $C^{(1)}$ and $D^{(1)}$ is nonempty. Then each point of $E_{1}(\mathcal{C})$ will be an endpoint of $4$ segments of $\mathcal{C}^{(1)}=\{C^{(1)}\mid C\in \mathcal{C}\}$ since it is an endpoint of $4$ segments of $\mathcal{C}$, and $\mathcal{C}^{(1)}$\ will satisfy the local isomorphism property like $\mathcal{C}$. We write $C=(C_{i})_{i\in \mathbb{Z} }$ and $D=(D_{i})_{i\in \mathbb{Z} }$. We denote by $S=(\zeta _{i})_{i\in \mathbb{Z} }$ and $T=(\eta _{i})_{i\in \mathbb{Z} }$ the associated sequences. We consider a point $X$ which belongs to the boundary between $C$ and $D$. We write $A=(C_{i-4},...,C_{i+3})$ and $ B=(D_{j-4},...,D_{j+3})$, where $i$ (resp. $j$) is the integer such that $X$ is the common endpoint of $C_{i}$ and $C_{i+1}$\ (resp. $D_{j}$ and $D_{j+1}$ ). 
As $\mathcal{C}$ satisfies the local isomorphism property, it follows from Theorem 3.1 applied to $C$ that there exist a translation $\tau $ of $ \mathbb{R} ^{2}$ and two sequences $A^{\prime }=(C_{k-4},...,C_{k+3})$ and $B^{\prime }=(C_{l-4},...,C_{l+3})$\ such that $\tau (A)=A^{\prime }$ and $\tau (B)=B^{\prime }$. If $X$ belongs to $F_{0}(C)$, then we have $i\in F_{0}(S)$, and therefore $ \zeta _{i-4}=-\zeta _{i-2}=\zeta _{i}=-\zeta _{i+2}$, which implies $\zeta _{k-4}=-\zeta _{k-2}=\zeta _{k}=-\zeta _{k+2}$ and $k\in F_{0}(S)$. Consequently, we have $\tau (X)\in F_{0}(C)$ and $l\in F_{0}(S)$, which implies $\zeta _{l-4}=-\zeta _{l-2}=\zeta _{l}=-\zeta _{l+2}$ and $\eta _{j-4}=-\eta _{j-2}=\eta _{j}=-\eta _{j+2}$. It follows that $X$ belongs to $ F_{0}(D)$. We show in the same way that $X$ belongs to $F_{0}(C)$ if it belongs to $F_{0}(D)$. Consequently, we have $F_{0}(C)=F_{0}(D)$. If $X$ belongs to $F_{0}(C)=F_{0}(D)$, then the segment of $C^{(1)}$ obtained from $(C_{i},C_{i+1})$ and the segment of $D^{(1)}$ obtained from $ (D_{j},D_{j+1})$ have supports which are opposite edges of a square of center $X$ and width $\sqrt{2}$. Moreover, the segments of $C^{(1)}$ obtained from $(C_{k},C_{k+1})$ and $(C_{l},C_{l+1})$ have supports which are opposite edges of a square of center $\tau (X)$ and width $\sqrt{2}$. By Lemma 2.3, a third edge of the second square is the support of a segment of $ C^{(1)}$ obtained from one of the pairs $(C_{k-2},C_{k-1})$, $ (C_{k+2},C_{k+3})$, $(C_{l-2},C_{l-1})$, $(C_{l+2},C_{l+3})$. Consequently, a third edge of the first square is the support of a segment of $C^{(1)}$ obtained from one of the pairs $(C_{i-2},C_{i-1})$, $(C_{i+2},C_{i+3})$, or the support of a segment of $D^{(1)}$ obtained from one of the pairs $ (D_{j-2},D_{j-1})$, $(D_{j+2},D_{j+3})$. In both cases, $C^{(1)}$ and $ D^{(1)}$ have a common vertex. If $X$ belongs to $E_{1}(C)=E_{1}(D)$, then $X$ is a common vertex of $ C^{(1)}$ and $D^{(1)}$. 
Moreover, the two segments of $C^{(1)}$ and the two segments of $D^{(1)}$ which have $X$ as an endpoint all have different supports since they are the images under $\tau ^{-1}$ of the four segments of $C^{(1)}$ which have $\tau (X)$ as an endpoint. As this property is true for each point of the boundary between $C$\ and $D$\ which belongs to $E_{1}(C)=E_{1}(D)$, the curves $C^{(1)}$ and $D^{(1)}$ are disjoint.~~$\blacksquare $

\noindent \textbf{Remark.} If the boundary between two curves $C,D\in \mathcal{C}$ is finite, then it contains a point of $E_{\infty }$. Otherwise, for $n$ large enough, it would contain no point of $E_{n}$, and the boundary between $C^{(n)}$ and $D^{(n)}$ would be empty, contrary to Proposition 3.3. If two disjoint complete folding curves $C,D$ have the same type and if $E_{1}(C)=E_{1}(D)$, then we have $\sigma _{C}(X)=\sigma _{D}(X)$\ for each point $X$ of the boundary between $C$ and $D$. Consequently, for each covering $\mathcal{C}$ of $\mathbb{R} ^{2}$ by a set of complete folding curves which have the same type and define the same set $E_{1}$, there exists a unique function $\sigma _{\mathcal{C}}: \mathbb{Z} ^{2}\rightarrow \{-1,+1\}$ such that $\sigma _{\mathcal{C}}(X)=\sigma _{C}(X)$\ for each curve $C\in \mathcal{C}$ and each vertex $X$ of $C$.

\noindent \textbf{Lemma 3.4.} Let $\mathcal{C}$ and $\mathcal{D}$ be coverings of $\mathbb{R} ^{2}$ by sets of complete folding curves. Suppose that all the curves have the same type, define the same sets $E_{k}$ and have locally isomorphic associated sequences. Then, for each $n\in \mathbb{N} ^{\ast }$, each $X\in \mathbb{Z} ^{2}-E_{2n-1}$ and each $U\in 2^{n} \mathbb{Z} ^{2}$, we have $\sigma _{\mathcal{C}}(X)=\sigma _{\mathcal{D}}(U+X)$.

\noindent \textbf{Proof.}\ For each integer $k\leq 2n-1$, we have $F_{k}+2^{n} \mathbb{Z} ^{2}=F_{k}$ and therefore $U+F_{k}=F_{k}$. In particular, $X$ and $U+X$ belong to $F_{m}$ for the same integer $m\leq 2n-2$.
As each curve of $ \mathcal{C}$ and each curve of $\mathcal{D}$ have the same type and define the same $E_{1}$, the map $Z\rightarrow U+Z$ induces a bijection from the set of all supports of segments of $\mathcal{C}$ to the set of all supports of segments of $\mathcal{D}$ which respects the orientation of the segments. We consider a curve $C=(C_{i})_{i\in \mathbb{Z} }\in \mathcal{C}$ indexed in such a way that the terminal point of $ C_{2^{m}} $ is $X$, and a curve $D=(D_{i})_{i\in \mathbb{Z} }\in \mathcal{D}$ indexed in such a way that the support of $D_{2^{m}}$\ is $ U+S$, where $S$ is the support of $C_{2^{m}}$. The initial point $Y$ of $ C_{1}$ and the initial point $Z$ of $D_{1}$ belong to $E_{m+1}$ since $X$ and $U+X$\ belong to $F_{m}$. The sequences $(\zeta _{i})_{i\in \mathbb{Z} }$ and $(\eta _{i})_{i\in \mathbb{Z} }$ associated to $C$ and $D$ are locally isomorphic. Consequently, we have $ (\zeta _{1},...,\zeta _{2^{m}-1})=(\eta _{1},...,\eta _{2^{m}-1})$ since $X$ and $U+X$\ belong to $F_{m}$, and therefore $ (D_{1},...,D_{2^{m}})=U+(C_{1},...,C_{2^{m}})$ and $Z=U+Y$. As $ U+F_{m+1}=F_{m+1}$, it follows that $Y$ and $Z$ both belong to $F_{m+1}$, or both belong to $E_{m+2}$. In both cases, we have $(\zeta _{1},...,\zeta _{2^{m+1}-1})=(\eta _{1},...,\eta _{2^{m+1}-1})$ since $C$ is locally isomorphic to $D$, and therefore $\sigma _{\mathcal{C}}(X)=\zeta _{2^{m}}=\eta _{2^{m}}=\sigma _{\mathcal{D}}(U+X)$.~~$\blacksquare $ \noindent \textbf{Theorem 3.5.} Let $\mathcal{C}$ be a covering of $ \mathbb{R} ^{2}$ by a set of complete folding curves. Then $\mathcal{C}$ satisfies the local isomorphism property if and only if all the curves have the same type, define the same sets $E_{k}$ and have locally isomorphic associated sequences. \noindent \textbf{Proof.} The condition is necessary according to Proposition 3.3 and the remark after Theorem 3.2. Now we show that it is sufficient. 
For each $X=(x,y)\in \mathbb{Z} ^{2}$ and each $k\in \mathbb{N} ^{\ast }$, we write\ $S_{X,k}=[x,x+k]\times \lbrack y,y+k]$, and we denote by $\mathcal{E}_{X,k}$ the set of all segments of $\mathcal{C}$ whose supports are contained in $S_{X,k}$. It suffices to prove that, for each $ X\in \mathbb{Z} ^{2}$ and each $k\in \mathbb{N} ^{\ast }$, there exists $l\in \mathbb{N} ^{\ast }$ such that each $S_{Y,l}$\ contains some $Z\in \mathbb{Z} ^{2}$ with $\mathcal{E}_{Z,k}=(Z-X)+\mathcal{E}_{X,k}$. We consider the largest integer $m$ such that $S_{X,k}$\ contains a point of $F_{m}$, and an integer $n$ such that $m\leq 2n-2$. For each $U\in 2^{n} \mathbb{Z} ^{2}$, we have, according to Lemma 3.4, $\sigma _{\mathcal{C}}(U+Y)=\sigma _{ \mathcal{C}}(Y)$ for each $Y\in \mathbb{Z} ^{2}-E_{2n-1}$, and in particular for each $Y\in S_{X,k}$ which does not belong to $E_{\infty }$. If $S_{X,k}\cap E_{\infty }=\emptyset $, it follows that $\mathcal{E} _{U+X,k}=U+\mathcal{E}_{X,k}$\ for each $U\in 2^{n} \mathbb{Z} ^{2}$, since the curves of $\mathcal{C}$ have the same type. Then we have the required property for $l=2^{n}$. If $S_{X,k}$ contains the unique point $W$ of $E_{\infty }$, then we still have $\mathcal{E}_{U+X,k}=U+\mathcal{E}_{X,k}$\ for each $U\in 2^{n} \mathbb{Z} ^{2}$\ such that $\sigma _{\mathcal{C}}(U+W)=\sigma _{\mathcal{C}}(W)$,\ since the curves of $\mathcal{C}$ have the same type. Moreover, we have $ E_{2n}=W+2^{n} \mathbb{Z} ^{2}$\ since $W$ belongs to $E_{2n}$. We consider $V\in 2^{n} \mathbb{Z} ^{2}$ such that $V+W\in F_{2n}$ and $\sigma _{\mathcal{C}}(V+W)=\sigma _{ \mathcal{C}}(W)$.\ We have $V+W+2^{n+1} \mathbb{Z} ^{2}=\{Y\in \mathbb{Z} ^{2}\mid Y\in F_{2n}$ and $\sigma _{\mathcal{C}}(Y)=\sigma _{\mathcal{C} }(W)\}$, and therefore $\mathcal{E}_{U+V+X,k}=U+V+\mathcal{E}_{X,k}$\ for each $U\in 2^{n+1} \mathbb{Z} ^{2}$. 
Consequently we have the required property for $l=2^{n+1}$.~~$ \blacksquare $ Now we consider the coverings which consist of one complete folding curve. The following result is a particular case of Theorem 3.5. It applies to the folding curves considered in Theorem 3.7 and Example 3.8 below. \noindent \textbf{Corollary 3.6.} Any covering of $ \mathbb{R} ^{2}$ by a complete folding curve satisfies the local isomorphism property. \noindent \textbf{Theorem 3.7.} For each complete folding sequence $S$, there exists a covering of $ \mathbb{R} ^{2}$ by a complete folding curve associated to a sequence which is locally isomorphic to $S$. \noindent \textbf{Proof.} Consider a curve $C=(C_{i})_{i\in \mathbb{Z} }$\ associated to $S$. Let $\Omega $ consist of the finite curves parallel to subcurves of $C$, such that $(0,0)$\ is one of their vertices. For each $ F\in \Omega $, denote by $N(F)$ the largest integer $n$ such that $F$ contains a covering of $[-n,+n]^{2}$. For any $F,G\in \Omega $, write $F<G$ if $F\subset G$ and $N(F)<N(G)$. As $S$ satisfies the local isomorphism property, the union of any strictly increasing sequence in $\Omega $\ is a covering of $ \mathbb{R} ^{2}$ by a complete folding curve associated to a sequence which is locally isomorphic to $S$. It remains to be proved that, for each $F\in \Omega $, there exists $G>F$ in $\Omega $. According to Proposition 2.6, there is an integer $m$ such that each $(C_{i+1},...,C_{i+m})$ contains a curve parallel to $F$. By Theorem 3.1, there exists a finite subcurve $K$ of $C$ which contains a covering of a square $X+[-n,+n]^{2}$ with $X\in \mathbb{Z} ^{2}$ and $n=m+N(F)+1$. Let $i$ be an integer such that $X$ is the terminal point of $C_{i}$. Consider a curve $H$ parallel to $F$\ and contained in $ (C_{i+1},...,C_{i+m}) $. Denote by $\tau $ the translation such that $\tau (F)=H$. Then $\tau ^{-1}(X)$\ belongs to $[-m,+m]^{2}$ since $\tau ((0,0))$ belongs to $X+[-m,+m]^{2}$. 
For $G=\tau ^{-1}(K)$, we have $F\subset G$ and $G$ covers $\tau ^{-1}(X)+[-n,+n]^{2}$. In particular, $G$ covers $[-N(F)-1,+N(F)+1]^{2}$.~~$\blacksquare $

\noindent \textbf{Remark.} There is no covering of $\mathbb{R} ^{2}$ by a curve associated to $(\overline{S},+1,S)$ or to $(\overline{S},-1,S)$, where $S$ is an $\infty $-folding sequence. In fact, such a curve $C$ would contain $4$ segments having the point of $E_{\infty }(C)$ as an endpoint, and $E_{\infty }(\overline{S},+1,S)$\ (resp. $E_{\infty }(\overline{S},-1,S)$) would contain $2$ integers.

\begin{center}
\includegraphics[scale=0.16]{FoldFig6.jpg}
\end{center}

\noindent \textbf{Example 3.8.} There exists a covering of $\mathbb{R} ^{2}$ by a complete folding curve defined in an effective way.

\noindent \textbf{Proof.} For each $(x,y)\in \mathbb{Z} ^{2}$ and each $n\in \mathbb{N} ^{\ast }$, we say that a $(2n)$-folding (resp. $(2n+1)$-folding) curve $C$ covers the isosceles right triangle $T=\langle (x,y),(x+2^{n},y),(x,y+2^{n})\rangle $\ (resp. $T=\langle (x,y),(x+2^{n},y+2^{n}),(x+2^{n+1},y)\rangle $) if it satisfies the following conditions (cf. Fig. 6A, 6B, 6C):

\noindent - each interval $[(u,v),(u+1,v)]$ or $[(u,v),(u,v+1)]$ with $u,v\in \mathbb{Z} $, contained in the interior of $T$, is the support of a segment of $C$;

\noindent - among the intervals $[(u,v),(u+1,v)]$ or $[(u,v),(u,v+1)]$ with $u,v\in \mathbb{Z} $, contained in the same vertical or horizontal edge of $T$, alternately every other one is the support of a segment of $C$;

\noindent - no interval $[(u,v),(u+1,v)]$ or $[(u,v),(u,v+1)]$ with $u,v\in \mathbb{Z} $, contained in the exterior of $T$, is the support of a segment of $C$;

\noindent - the vertex of the right angle of $T$ is the initial or the terminal point of $C$;

\noindent - the vertex of one of the acute angles of $T$ is the initial or the terminal point of $C$.
\noindent We extend this definition to the isosceles right triangles with vertices in $\mathbb{Z} ^{2}$ obtained from $T$ by rotations of angles $\pi /2$, $\pi $, $3\pi /2$ (cf. Fig. 6D). Now, let $T$ be one of the isosceles right triangles considered, let $k$ be an integer, let $C$ be a $k$-folding curve which covers $T$, and let $S$ be the sequence associated to $C$. If the initial (resp. terminal) point of $C$ is the vertex of the right angle of $T$, we associate to $(\overline{S},+1,S)$ and $(\overline{S},-1,S)$ (resp. $(S,+1,\overline{S})$ and $(S,-1,\overline{S})$) two $(k+1)$-folding curves $C_{1}$ and $C_{2}$ which contain $C$. In both cases, the parts of $C_{1}$ and $C_{2}$ which correspond to $\overline{S}$ respectively cover the isosceles right triangles $T_{1}$ and $T_{2}$ which have one edge of their right angle in common with $T$, in such a way that $C_{1}$ and $C_{2}$ respectively cover the isosceles right triangles $T\cup T_{1}$ and $T\cup T_{2}$ (cf. Fig. 6B, 6C and 6D). For each $n\in \mathbb{N} ^{\ast }$ and each triangle $T=\langle (x,y),(x+2^{n},y+2^{n}),(x+2^{n+1},y)\rangle $ with $(x,y)\in \mathbb{Z} ^{2}$, repeat the operation above six times according to Figure 6E, beginning with a $(2n+1)$-folding curve $C$ which covers $T$. Then we obtain a $(2n+7)$-folding curve $C^{\prime }$ which contains $C$. Moreover, $C^{\prime }$ covers a triangle $T^{\prime }=\langle (x^{\prime },y^{\prime }),(x^{\prime }+2^{n+3},y^{\prime }+2^{n+3}),(x^{\prime }+2^{n+4},y^{\prime })\rangle $ with $(x^{\prime },y^{\prime })\in \mathbb{Z} ^{2}$ and $T$ contained in the interior of $T^{\prime }$. By iterating this process, we obtain a covering of $\mathbb{R} ^{2}$ by a complete folding curve.~~$\blacksquare $

\noindent \textbf{Proposition 3.9.} Let $\mathcal{C}$ be a covering of $\mathbb{R} ^{2}$ by a set of complete folding curves which satisfies the local isomorphism property.
For each covering $\mathcal{D}$ of $ \mathbb{R} ^{2}$, the following properties are equivalent: \noindent 1) $\mathcal{D}$ consists of complete folding curves, and the curves of $\mathcal{C}\cup \mathcal{D}$ have the same type, define the same sets $E_{k}$ and have locally isomorphic associated sequences. \noindent 2) $\mathcal{C}=\mathcal{D}$, or $E_{\infty }(\mathcal{C})\neq \varnothing $ and $\mathcal{C},\mathcal{D}$ only differ in the way to connect the four segments which have the unique point of $E_{\infty }(\mathcal{C})$ as an endpoint. \noindent \textbf{Proof.} If 1) is true, then 2) is also true since Lemma 3.4 implies $\sigma _{\mathcal{C}}(X)=\sigma _{\mathcal{D}}(X)$\ for each $X\in \mathbb{Z} ^{2}-E_{\infty }$. Conversely, if 2) is true, then 1) follows from the remark after Corollary 1.9.~~$\blacksquare $ \noindent \textbf{Remark.} If 1) and 2) are true, then $\mathcal{D}$ satisfies the local isomorphism property by Theorem 3.5, and $\mathcal{D}$ is locally isomorphic to $\mathcal{C}$ by Theorem 3.2. \noindent \textbf{Theorem 3.10.} For each complete folding curve $C$, if $E_{\infty }(C)$ is empty or if the unique point of $E_{\infty }(C)$ is a vertex of $C$, then $C$ is contained in a unique covering of $ \mathbb{R} ^{2}$ by a set of complete folding curves which satisfies the local isomorphism property. Otherwise, $C$ is contained in exactly two such coverings, which only differ in the way to connect the four segments having the unique point of $E_{\infty }$ as an endpoint. \noindent \textbf{Proof.} It suffices to show that $C$ is contained in a covering of $ \mathbb{R} ^{2}$ by a set of complete folding curves which satisfies the local isomorphism property. In fact, for any such coverings $\mathcal{C},\mathcal{D}$, Proposition 3.3 and the remark after Theorem 3.2 imply that each curve of $\mathcal{C}$ and each curve of $\mathcal{D}$ have the same type, define the same sets $E_{k}$ and have locally isomorphic associated sequences. 
Then the Theorem is a consequence of Proposition 3.9 and the Remark just after. For each $m\in \mathbb{N} ^{\ast }$, denote by $\mathcal{C}_{m}$ the set of all segments of $C$ with supports in $[-m,+m]^{2}$. Let $\Omega $ consist of the pairs $(\mathcal{E} ,m)$, where $m\in \mathbb{N} ^{\ast }$ and $\mathcal{E}$ is a covering of $[-m,+m]^{2}$ containing $ \mathcal{C}_{m}$, for which there exists $X\in \mathbb{Z} ^{2}$\ such that $X+\mathcal{E}\subset C$. For any $(\mathcal{E},m),(\mathcal{F},n)\in \Omega $, write $(\mathcal{E} ,m)<(\mathcal{F},n)$ if $\mathcal{E}\subset \mathcal{F}$ and $m<n$. If $( \mathcal{F},n)\in \Omega $ and $m\in \{1,...,n-1\}$, then we have $(\mathcal{ E},m)\in \Omega $ and $(\mathcal{E},m)<(\mathcal{F},n)$ for the set $ \mathcal{E}$ of all segments of $\mathcal{F}$ with supports in $[-m,+m]^{2}$. First we show that, for each $m\in \mathbb{N} ^{\ast }$, there exists $(\mathcal{E},m)\in \Omega $. We can take $m$ large enough so that $C$\ contains some segments with supports in $[-m,+m]^{2}$. We consider a finite curve $A\subset C$\ which contains all these segments. According to Proposition 2.6, there exists an integer $k$ such that each subcurve of length $k$ of $C$ contains a curve parallel to $A$. By Theorem 3.1, $C$ contains a covering of a square $X+[-k-2m,+k+2m]^{2}$ with $X\in \mathbb{Z} ^{2}$. The covering of $X+[-k,+k]^{2}$\ extracted from $C$ contains a curve of length $\geq k$, which itself contains a curve $B$ parallel to $A$. We consider $Y\in \mathbb{Z} ^{2}$\ such that $Y+A=B$. We have $Y\in X+[-k-m,+k+m]^{2}$, because $A$\ contains a point of $[-m,+m]^{2}$ and $B$ is contained in\ $X+[-k,+k]^{2}$. Consequently, $C$ contains a covering $\mathcal{F}$ of $Y+[-m,+m]^{2}$. We have $(\mathcal{E},m)\in \Omega $ for $\mathcal{E}=-Y+\mathcal{F}$ since $ \mathcal{F}$ contains the set $Y+\mathcal{C}_{m}$ of all segments of $B$ with supports in $Y+[-m,+m]^{2}$. 
Now, according to K\"{o}nig's Lemma, $\Omega $ contains a strictly increasing sequence $(\mathcal{E}_{m},m)_{m\in \mathbb{N} ^{\ast }}$. The union $\mathcal{C}$ of the sets $\mathcal{E}_{m}$\ is a covering of $ \mathbb{R} ^{2}$ which contains $C$. Any finite curve $A\subset \mathcal{C}$ is parallel to a subcurve of $C$ since it is contained in one of the sets $ \mathcal{E}_{m}$. In particular, $\mathcal{C}$ is a covering of $ \mathbb{R} ^{2}$ by a set of complete folding curves. It remains to be proved that $ \mathcal{C}$ satisfies the local isomorphism property. It suffices to show that, for each $m\in \mathbb{N} ^{\ast }$, there exists $n\in \mathbb{N} ^{\ast }$ such that each square $X+[-n,+n]^{2}$\ contains the image of $ \mathcal{E}_{m}$\ under a translation. We consider a point $Y\in \mathbb{Z} ^{2}$\ such that $Y+\mathcal{E}_{m}\subset C$, and a finite curve $A\subset C $\ which contains $Y+\mathcal{E}_{m}$. By Proposition 2.6, there exists $ n\in \mathbb{N} ^{\ast }$\ such that each subcurve of length $n$ of $C$, and therefore each subcurve of length $n$ of a curve of $\mathcal{E}$, contains a curve parallel to $A$.\ Each square $X+[-n,+n]^{2}$\ contains a subcurve of length $n$ of a curve of $\mathcal{E}$, and therefore contains a curve parallel to $ A$ and the image of $\mathcal{E}_{m}$\ under a translation.~~$\blacksquare $ \noindent \textbf{Remark.} It follows that any $\infty $-folding curve $C$ is contained in exactly two coverings of $ \mathbb{R} ^{2}$ by sets of complete folding curves which satisfy the local isomorphism property. These coverings only differ in the way to connect the four segments whose supports contain the origin of $C$. \noindent \textbf{Remark.} Let $C$ be a complete folding curve and let $S$ be the associated sequence. By Theorem 3.10, $C$ is contained in a covering $ \mathcal{C}$ of $ \mathbb{R} ^{2}$ by a set of complete folding curves which satisfies the local isomorphism property. 
According to Theorem 3.7, there is also a covering $ \mathcal{D}$ of $ \mathbb{R} ^{2}$ by a complete folding curve $D$ associated to a sequence which is locally isomorphic to $S$. Moreover, $\mathcal{D}$\ satisfies the local isomorphism property by Corollary 3.6. If we choose $D$ with the same type as $C$, then $\mathcal{C}$ and $\mathcal{D}$ are locally isomorphic according to Theorem 3.2. On the other hand, $\mathcal{C}$ and $\mathcal{D}$ do not contain the same number of curves if $\{C\}$\ is not a covering of $ \mathbb{R} ^{2}$. \noindent \textbf{Corollary 3.11.} 1) There exist $2^{\omega }$ pairwise not locally isomorphic coverings of $ \mathbb{R} ^{2}$ by sets of complete folding curves which satisfy the local isomorphism property. \noindent 2) If $\mathcal{C}$ is such a covering, then there exist $ 2^{\omega }$ isomorphism classes of coverings of $ \mathbb{R} ^{2}$ which are locally isomorphic to $\mathcal{C}$. \noindent \textbf{Proof.} For any complete folding sequences $S,T$,\ consider two curves $C,D$\ associated to $S,T$ which have the same type. By Theorem 3.10, there exist two coverings $\mathcal{C},\mathcal{D}$ of $ \mathbb{R} ^{2}$, respectively containing $C,D$, by sets of complete folding curves which satisfy the local isomorphism property. By Theorem 3.2, $\mathcal{C}$ and $\mathcal{D}$ are locally isomorphic if and only if $S$ and $T$ are locally isomorphic. In particular, the property 1) above follows from the property 1) of Theorem 1.11. Moreover, if $\mathcal{C}$ and $\mathcal{D}$ are isomorphic, then $C$ is isomorphic to one of the curves of $\mathcal{D}$,\ and $S$\ is isomorphic to one of their associated sequences. 
Consequently, the property 2) above follows from the property 2) of Theorem 1.11, since any covering of $ \mathbb{R} ^{2}$ by a set of complete folding curves which satisfies the local isomorphism property contains at most countably many curves (and in fact at most $6$ curves by Theorem 3.12 below).~~$\blacksquare $ Now we investigate the number of curves in a covering of $ \mathbb{R} ^{2}$ by complete folding curves. \noindent \textbf{Theorem 3.12.} If a covering of $ \mathbb{R} ^{2}$ by a set of complete folding curves satisfies the local isomorphism property, then it contains at most $6$ curves. \noindent \textbf{Proof.} For each $X\in \mathbb{R} ^{2}$, each complete folding curve $C$ and each $n\in \mathbb{N} $, write $\delta _{n}(X,C)=4^{-n}\cdot \inf_{S\in E_{4n}(C)}d(X,S)$. We have $\delta _{n}(X,C)=\delta _{0}(X_{n},C^{(4n)})$, where $X_{n}$ is the image of $X$ in a representation of $C^{(4n)}$ which gives the length $1$ to the supports of the segments. Now consider $R\in E_{0}(C)$ and $S,T\in E_{4}(C)$ which are joined by a $4$-folding subcurve of $C$ having $R$ as a vertex. Then, according to Figure 3C, we have \noindent $\inf (d(X,S),d(X,T))\leq \sqrt{(d(X,R)+3)^{2}+2^{2}}=\sqrt{d(X,R)^{2}+6\cdot d(X,R)+13}$; \noindent this bound is attained with the second of the four $4$-folding curves of Figure 3C, for each point $X$ which is at the left and at a distance $\geq 3$ from the middle of the segment $ST$. Consequently we have $\delta _{1}(X,C)\leq (1/4)\sqrt{\delta _{0}(X,C)^{2}+6\cdot \delta _{0}(X,C)+13}$. For $\delta _{0}(X,C)\leq 1.16$, it follows that \noindent $\delta _{1}(X,C)\leq (1/4)\sqrt{1.16^{2}+6\cdot 1.16+13}<1.16$. \noindent For $\delta _{0}(X,C)\geq 1.16$, it follows that \noindent $\delta _{1}(X,C)/\delta _{0}(X,C)\leq (1/4)\sqrt{1+6/\delta _{0}(X,C)+13/\delta _{0}(X,C)^{2}}$ \noindent\ $\ \ \ \ \ \ \ \ \leq (1/4)\sqrt{1+6/1.16+13/1.16^{2}}<0.995$. 
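The two numerical estimates above can be verified directly. The following sketch is ours (illustrative only, not part of the proof); $f(x)$ is the bound on $\delta _{1}$ when $\delta _{0}=x$:

```python
import math

# Numerical sanity check (ours, not part of the proof) of the two
# estimates above: f(x) = (1/4)*sqrt(x^2 + 6x + 13) bounds delta_1
# when delta_0 = x.
def f(x):
    return 0.25 * math.sqrt(x * x + 6 * x + 13)

# If delta_0 <= 1.16 then delta_1 < 1.16:
assert f(1.16) < 1.16

# If delta_0 >= 1.16 then delta_1 < 0.995 * delta_0; the bound on the
# ratio, (1/4)*sqrt(1 + 6/x + 13/x**2), is decreasing in x, so it is
# maximal at x = 1.16:
for x in (1.16, 2.0, 5.0, 100.0):
    assert f(x) / x < 0.995
```

Both assertions pass, matching the displayed inequalities.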
For each complete folding curve $C$, each $X\in \mathbb{R} ^{2}$ and each $n\in \mathbb{N} $, the argument above applied to $C^{(n)}$ gives $\delta _{n+1}(X,C)<1.16$ if $\delta _{n}(X,C)<1.16$ and $\delta _{n+1}(X,C)<0.995\cdot \delta _{n}(X,C)$ if $\delta _{n}(X,C)\geq 1.16$. In particular, we have $\delta _{n}(X,C)<1.16$ for $n$ large enough. For each covering $\{C_{1},...,C_{k}\}$ of $ \mathbb{R} ^{2}$ by a set of complete folding curves which satisfies the local isomorphism property, consider $X\in \mathbb{R} ^{2}$ and $n\in \mathbb{N} ^{\ast }$ such that $\delta _{n}(X,C_{i})<1.16$ for $1\leq i\leq k$. Let $Y$ be the image of $X$ in a representation of $C_{1}^{(4n)},...,C_{k}^{(4n)}$ which gives the length $1$ to the supports of the segments. Then we have $\delta _{0}(Y,C_{i}^{(4n)})<1.16$ for $1\leq i\leq k$. Write $Y=(y,z)$ and consider $u,v\in \mathbb{Z} $ such that $\left\vert y-u\right\vert \leq 1/2$ and $\left\vert z-v\right\vert \leq 1/2$. Then each $C_{i}^{(4n)}$ has a vertex among the points listed below,\ since no other point $(w,x)\in \mathbb{Z} ^{2}$ satisfies $d((w,x),(y,z))<1.16$: \noindent $(u,v),(u-1,v),(u+1,v),(u,v-1),(u,v+1)$ if $\left\vert y-u\right\vert \leq 0.16$ and $\left\vert z-v\right\vert \leq 0.16$; \noindent $(u-1,v-1),(u-1,v),(u-1,v+1),(u,v-1),(u,v),(u,v+1)$ if $y<u-0.16$; \noindent $(u,v-1),(u,v),(u,v+1),(u+1,v-1),(u+1,v),(u+1,v+1)$ if $y>u+0.16$; \noindent $(u-1,v-1),(u,v-1),(u+1,v-1),(u-1,v),(u,v),(u+1,v)$ if $z<v-0.16$; \noindent $(u-1,v),(u,v),(u+1,v),(u-1,v+1),(u,v+1),(u+1,v+1)$ if $z>v+0.16$. In the first case, there exist $12$ intervals of length $1$ with endpoints in $ \mathbb{Z} ^{2}$\ which have exactly one endpoint among the $5$ points considered. In each of the four other cases, there exist $10$ intervals of length $1$ with endpoints in $ \mathbb{Z} ^{2}$\ which have exactly one endpoint among the $6$ points considered. In all cases, each $C_{i}^{(4n)}$ has at least $2$ segments with supports among these intervals. 
It follows $k\leq 6$ in the first case and $k\leq 5$ in each of the four other cases since, by Proposition 3.3, $C_{1}^{(4n)},...,C_{k}^{(4n)}$ are disjoint.~~$\blacksquare $ \begin{center} \includegraphics[scale=0.16]{FoldFig7.jpg} \end{center} \noindent \textbf{Example 3.13 (curves associated to the positive folding sequence).} The \emph{positive folding sequence} mentioned in [4, p. 192] is the $\infty $-folding sequence $R$ obtained as the direct limit of the $n$-folding sequences $R_{n}$ defined by $R_{n+1}=(R_{n},+1,\overline{R_{n}})$ for each $n\in \mathbb{N} $. According to [3, Th. 4, p. 78], or by Theorem 3.15 below, there exists a covering $\mathcal{C}$ of $ \mathbb{R} ^{2}$ by $2$ complete folding curves $C,D$ associated to $S=(\overline{R},+1,R)$ and having the same type (see Fig. 7). We have $E_{\infty }(C)=E_{\infty }(D)=\{(0,0)\}$, and therefore $E_{k}(C)=E_{k}(D)$ for each $k\in \mathbb{N} $. It follows from Theorem 3.5 that $\mathcal{C}$ satisfies the local isomorphism property. \begin{center} \includegraphics[scale=0.14]{FoldFig8.jpg} \end{center} \noindent \textbf{Example 3.14 (curves associated to the alternating folding sequence).} The \emph{alternating folding sequence} described in [4, p. 134] is the $\infty $-folding sequence $R$ obtained as the direct limit of the $n$-folding sequences $R_{n}$ defined by $R_{n+1}=(R_{n},(-1)^{n+1},\overline{R_{n}})$ for each $n\in \mathbb{N} $. Contrary to Example 3.13, there exists no covering of $ \mathbb{R} ^{2}$ by $2$ complete folding curves associated to $S=(\overline{R},+1,R)$. Now, let $T$ be the complete folding sequence obtained as the direct limit of the sequences $R_{2n}$, where each $R_{2n}$\ is identified with its second copy in $R_{2n+2}=(R_{2n},-1,\overline{R_{2n}},+1,R_{2n},+1,\overline{R_{2n}})$. Then there exists (cf. Fig. 
8) a covering $\mathcal{C}$ of $ \mathbb{R} ^{2}$ by $2$ curves associated to $S$, $2$ curves associated to $T$ and $2$ curves associated to $\overline{T}$, with all the curves having the same type. The $6$ curves are associated to locally isomorphic sequences. The point $(0,0)$\ belongs to $E_{\infty }$ in the $2$ curves which contain it. For each $n\in \mathbb{N} $, and in each of the $2$ curves which contain it, the point $(2^{n},0)$ (resp. $(-2^{n},0)$, $(0,2^{n})$, $(0,-2^{n})$) belongs to $F_{2n}$, while the point $(2^{n},2^{n})$ (resp. $(-2^{n},2^{n})$, $(2^{n},-2^{n})$, $(-2^{n},-2^{n})$) belongs to $F_{2n+1}$. Consequently, the $6$ curves define the same sets $E_{k}$, and $\mathcal{C}$ satisfies the local isomorphism property by Theorem 3.5. We do not presently know whether a covering $\mathcal{C}$ of $ \mathbb{R} ^{2}$ by a set of complete folding curves which satisfies the local isomorphism property can consist of $3$, $4$ or $5$ curves. If $E_{\infty }(\mathcal{C})\neq \varnothing $, then $\mathcal{C}$ consists of $2$ or $6$ curves according to the Theorem below: \noindent \textbf{Theorem 3.15.} Let $C$ be a curve associated to $S=(\overline{R},+1,R)$ or $S=(\overline{R},-1,R)$ where $R=(a_{h})_{h\in \mathbb{N} ^{\ast }}$\ is an $\infty $-folding sequence. Consider the unique covering $\mathcal{C}\supset C$ of $ \mathbb{R} ^{2}$ by a set of complete folding curves which satisfies the local isomorphism property. \noindent 1) If $\{n\in \mathbb{N} \mid a_{2^{n}}=(-1)^{n}\}$ is finite or cofinite, then $\mathcal{C}$ consists of $2$ curves associated to $S$ and $4$ other curves. \noindent 2)\ Otherwise, $\mathcal{C}$ consists of $2$ curves associated to $S$. \noindent \textbf{Proof.} As the other cases can be treated in the same way, we suppose that $C=(C_{h})_{h\in \mathbb{Z} }$\ is a curve associated to $S=(\overline{R},+1,R)=(a_{h})_{h\in \mathbb{Z} }$, and that $E_{\infty }(C)=\{(0,0)\}$. 
Then $\mathcal{C}$\ contains a curve $D\neq C$\ such that $(0,0)$\ is a vertex of $D$. We write $ D=(D_{h})_{h\in \mathbb{Z} }$\ with $(0,0)$\ terminal point of $D_{0}$ and initial point of $D_{1}$. The curve $D$ is associated to $S$, since it is associated to a sequence $ T=(b_{h})_{h\in \mathbb{Z} }$ with $T$ locally isomorphic to $S$, $E_{\infty }(T)=\{0\}$ and $b_{0}=+1$. If $\{n\in \mathbb{N} \mid a_{2^{n}}=(-1)^{n}\}$ is cofinite (resp. finite), we consider an odd (resp. even) integer $k$ such that $a_{2^{n}}=(-1)^{n}$ (resp. $ a_{2^{n}}=(-1)^{n+1}$) for $n\geq k$. In order to prove that $\mathcal{C}$\ contains $6$\ curves, it suffices to show that $C^{(k)}$ is contained in a covering of $ \mathbb{R} ^{2}$ which satisfies the local isomorphism property and which consists of $ 6 $\ complete folding curves. Consequently, it suffices to consider the case where $a_{2^{n}}=(-1)^{n+1}$ for each $n\in \mathbb{N} $. But this case is treated in Example 3.14 (see Fig. 8). The proof of 2) uses arguments similar to those in the proof of Theorem 3.1. For each $k\in \mathbb{N} ^{\ast }$, we say that a set $\mathcal{E}$ of disjoint curves covers $ L_{k}=\{(u,v)\in \mathbb{R} ^{2}\mid \left\vert u\right\vert +\left\vert v\right\vert \leq k\}$ if each interval $[(u,v),(u+1,v)]$ or $[(u,v),(u,v+1)]$ with $u,v\in \mathbb{Z} $, contained in $L_{k}$, is the support of a segment of one of the curves. For each set $\mathcal{E}$ of disjoint complete folding curves which have the same type, define the same sets $E_{n}$ and are associated to locally isomorphic sequences, if $\mathcal{E}^{(2)}$ covers $L_{k}$ for an integer $ k\geq 2$, then $\mathcal{E}$ covers $L_{2k-1}$, and therefore covers $ L_{k+1} $. By induction, it follows that, if $\mathcal{E}^{(2k)}$ covers $ L_{2}$ for an integer $k\geq 2$, then $\mathcal{E}$ covers $L_{k+2}$. For each $k\in \mathbb{N} $, consider $l\geq 2k$ such that $a_{2^{l}}=a_{2^{l+1}}$. 
Then $ \{(C_{-3}^{(l)},...,C_{4}^{(l)}),$\allowbreak $ (D_{-3}^{(l)},...,D_{4}^{(l)})\}$ covers\ $L_{2}$\ (see Fig. 7 for the case $a_{2^{l}}=a_{2^{l+1}}=+1$). Consequently, $\{C^{(l-2k)},D^{(l-2k)}\}$ covers $L_{k+2}$, and the same property is true for $\{C,D\}$.~~$\blacksquare $ \begin{center} \includegraphics[scale=0.13]{FoldFig9.jpg} \end{center} A covering of $ \mathbb{R} ^{2}$ by a set of complete folding curves can contain more than $6$ curves if it does not satisfy the local isomorphism property. For the complete folding sequence $T$ of Example 3.14, Figure 9 gives an example of a covering of $ \mathbb{R} ^{2}$ by $4$ curves associated to $T$ and $4$ curves associated to $-\overline{T}$ ($T$ and $-\overline{T}$ are not locally isomorphic by Corollary 1.9). In any case, the Theorem below implies that the number of complete folding curves in a covering of $ \mathbb{R} ^{2}$ is at most $24$: \noindent \textbf{Theorem 3.16.} There exist no more than $24$ disjoint complete folding curves in $ \mathbb{R} ^{2}$. In the proof of Theorem 3.16, we write $\delta ((x,y),(x^{\prime },y^{\prime }))=\sup (\left\vert x^{\prime }-x\right\vert ,\left\vert y^{\prime }-y\right\vert )$ for any $(x,y),(x^{\prime },y^{\prime })\in \mathbb{R} ^{2}$. \noindent \textbf{Lemma 3.16.1.} We have $\delta (X,Y)\leq 7\cdot 2^{n-2}-2$ for each integer $n\geq 2$ and for any vertices $X,Y$ of a $(2n)$-folding curve. \noindent \textbf{Proof of the Lemma.} The Lemma is true for $n=2$ according to Figure 3C. Now we show that, if it is true for $n\geq 2$, then it is true for $n+1$. Let $C$ be a $(2n+2)$-folding curve, and let $Z_{0},...,Z_{2^{2n+2}}$ be its vertices taken consecutively. Then $(Z_{4i})_{0\leq i\leq 2^{2n}}$ is the sequence of all vertices of the $(2n)$-folding curve $C^{(2)}$ represented on the same figure as $C$. It follows from the induction hypothesis applied to $C^{(2)}$ that we have $\delta (X,Y)\leq 2(7\cdot 2^{n-2}-2)$ for any $X,Y\in (Z_{4i})_{0\leq i\leq 2^{2n}}$. 
Moreover, for each vertex\ $U$ of $C$, there exists $V\in (Z_{4i})_{0\leq i\leq 2^{2n}}$ such that $\delta (U,V)\leq 1$. Consequently, we have $\delta (X,Y)\leq 2(7\cdot 2^{n-2}-2)+2=7\cdot 2^{n-1}-2$ for any vertices $X,Y$ of $C$.~~$\blacksquare $ \noindent \textbf{Lemma 3.16.2.} Let $n\geq 2$ be an integer and let $C$ be a finite folding curve with two vertices $X,Y$ such that $\delta (X,Y)\geq 7\cdot 2^{n-2}-1$. Then $C$\ contains at least $2^{2n-1}$\ segments. \noindent \textbf{Proof of the Lemma.} By Lemma 3.16.1, we have $k\geq 2n+1$ for the smallest integer $k$ such that $C$ is contained in a $k$-folding curve $D$. We consider the $(k-2)$-folding curves $D_{1},D_{2},D_{3},D_{4}$\ such that $D=(D_{1},D_{2},D_{3},D_{4})$. The curve $C$ contains one of the curves\ $D_{2},D_{3}$ since the $(k-1)$-folding curves\ $(D_{1},D_{2})$, $(D_{2},D_{3})$, $(D_{3},D_{4})$ do not contain $C$. Consequently, $C$ contains a $(2n-1)$-folding curve, which consists of $2^{2n-1}$\ segments.~~$\blacksquare $ \noindent \textbf{Proof of the Theorem.} Let $r\geq 2$ be an integer and let $C_{1},...,C_{r}$\ be disjoint complete folding curves. Consider an integer $k$ and some vertices $X_{1},...,X_{r}$ of $C_{1},...,C_{r}$\ belonging to $\left[ -k,+k\right] ^{2}$. Now consider an integer $n\geq 2$ and write $N=7\cdot 2^{n-2}+k$. For each $i\in \{1,...,r\}$, there exist some vertices $Y_{i},Z_{i}$ of $C_{i}$, with $\delta (0,Y_{i})=N$ and $\delta (0,Z_{i})=N$, such that $\left] -N,+N\right[ ^{2}$ contains a subcurve of $C_{i}$ which has $X_{i}$ as a vertex and $Y_{i},Z_{i}$ as endpoints. For each $i\in \{1,...,r\}$, we have $\delta (X_{i},Y_{i})\geq N-k=7\cdot 2^{n-2}$ and $\delta (X_{i},Z_{i})\geq N-k=7\cdot 2^{n-2}$. By Lemma 3.16.2, the part of the subcurve of $C_{i}$\ between $X_{i}$ and $Y_{i}$ (resp. between $X_{i}$ and $Z_{i}$) contains at least $2^{2n-1}$\ segments. 
As $C_{1},...,C_{r}$\ are disjoint, it follows that $\left] -N,+N\right[ ^{2}$ contains at least $2r\cdot 2^{2n-1}=2^{2n}r$\ supports of segments of $C_{1}\cup ...\cup C_{r}$. But $\left] -N,+N\right[ ^{2}$ only contains $2(2N)(2N-1)<8N^{2}$ intervals of the form $](u,v),(u+1,v)[$ or $](u,v),(u,v+1)[$ with $u,v\in \mathbb{Z} $. Consequently, we have $2^{2n}r<8N^{2}$ and $r<N^{2}/2^{2n-3}=(7\cdot 2^{n-2}+k)^{2}/2^{2n-3}$. As the last inequality is true for each $n\geq 2$, and $(7\cdot 2^{n-2}+k)^{2}/2^{2n-3}$ tends to $49/2$ as $n$ tends to infinity, it follows that $r\leq 49/2<25$.~~$\blacksquare $ \begin{center} \textbf{References} \end{center} \noindent [1] J.-P. Allouche, The number of factors in a paperfolding sequence, Bull. Austral. Math. Soc. 46 (1992), 23-32. \noindent [2] A. Chang, T. Zhang, The fractal geometry of the boundary of dragon curves, J. Recreational Math. 30 (1999), 9-22. \noindent [3] C. Davis and D.E. Knuth, Number representations and dragon curves I, J. Recreational Math. 3 (1970), 66-81. \noindent [4] M. Dekking, M. Mend\`{e}s France, A. van der Poorten, Folds!, Math. Intelligencer 4 (1982), 130-195. \noindent [5] M. Mend\`{e}s France, A.J. van der Poorten, Arithmetic and analytic properties of paperfolding sequences, Bull. Austral. Math. Soc. 24 (1981), 123-131. \noindent [6] S.-M. Ngai, N. Nguyen, The Heighway dragon revisited, Discrete Comput. Geom. 29 (2003), 603-623. \noindent [7] F. Oger, Algebraic and model-theoretic properties of tilings, Theoret. Comput. Sci. 319 (2004), 103-126. Francis Oger C.N.R.S. - Equipe de Logique D\'{e}partement de Math\'{e}matiques Universit\'{e} Paris 7 case 7012, site Chevaleret 75205 Paris Cedex 13 France E-mail: [email protected]. \end{document}
\begin{document} \title{When Sets Can and Cannot Have Sum-Dominant Subsets} \author{H\`ung Vi\d{\^e}t Chu} \email{\textcolor{blue}{\href{mailto:[email protected]}{[email protected]}}} \address{Department of Mathematics, Washington and Lee University, Lexington, VA 24450} \author{Nathan McNew} \email{\textcolor{blue}{\href{mailto:[email protected]}{[email protected]}}} \address{Department of Mathematics, Towson University, Towson, MD 21252} \author{Steven J. Miller} \email{\textcolor{blue}{\href{mailto:[email protected]}{[email protected]}}, \textcolor{blue}{\href{[email protected]}{[email protected]}}} \address{Department of Mathematics and Statistics, Williams College, Williamstown, MA 01267} \author{Victor Xu} \email{\textcolor{blue}{\href{mailto:[email protected]}{[email protected]}}} \address{Department of Mathematics, Carnegie Mellon University, Pittsburgh, PA 15213} \author{Sean Zhang} \email{\textcolor{blue}{\href{mailto:[email protected]}{[email protected]}}} \address{Department of Mathematics, Carnegie Mellon University, Pittsburgh, PA 15213} \subjclass[2000]{11P99 (primary), 11K99 (secondary).} \date{\today} \thanks{The first named author was supported by Washington \& Lee University, and the third named author was partially supported by NSF grants DMS1265673 and DMS1561945. We thank the students from the Math 21-499 Spring '16 research class at Carnegie Mellon and the participants from CANT 2016 for many helpful conversations, Angel Kumchev for comments on an earlier draft, and the referee for many suggestions which improved and clarified the exposition, as well as suggestions on good topics for future research.} \begin{abstract} A finite set of integers $A$ is a sum-dominant (also called a More Sums Than Differences, or MSTD) set if $|A+A| > |A-A|$. While almost all subsets of $\{0, \dots, n\}$ are not sum-dominant, interestingly a small positive percentage are. 
We explore sufficient conditions on infinite sets of positive integers such that there are either no sum-dominant subsets, at most finitely many sum-dominant subsets, or infinitely many sum-dominant subsets. In particular, we prove no subset of the Fibonacci numbers is a sum-dominant set, establish conditions such that solutions to a recurrence relation have only finitely many sum-dominant subsets, and show there are infinitely many sum-dominant subsets of the primes. \end{abstract} \maketitle \tableofcontents \section{Introduction} For any finite set of natural numbers $A \subset \mathbb{N}$, we define the sumset \begin{equation} A+A \ \coloneqq \ \{a + a' : a, a' \in A\}\end{equation} and the difference set \begin{equation} A-A \ \coloneqq \ \{a - a' : a, a' \in A\};\end{equation} $A$ is sum-dominant (also called a More Sums Than Differences or MSTD set) if $|A+A| > |A-A|$ (if the two cardinalities are equal it is called balanced, and otherwise difference-dominant). As addition is commutative and subtraction is not, it was natural to conjecture that sum-dominant sets are rare. Conway gave the first example of such a set, $\{0, 2, 3, 4, 7, 11, 12, 14\}$, and this is the smallest such set. Later authors constructed infinite families, culminating in the work of Martin and O'Bryant, which proved a small positive percentage of subsets of $\{0, \dots, n\}$ are sum-dominant as $n\to\infty$, and Zhao, who estimated this percentage at around $4.5 \cdot 10^{-4}$. See \cite{FP, He, HM, Ma, MO, Na1, Na2, Na3, Ru1, Ru2, Zh3} for general overviews, examples, constructions, bounds on percentages and some generalizations, \cite{MOS, MPR, MS, Zh1} for some explicit constructions of infinite families of sum-dominant sets, and \cite{DKMMW, DKMMWW, MV, Zh2} for some extensions to other settings. Much of the above work looks at finite subsets of the natural numbers, or equivalently subsets of $\{0, 1, \dots, n\}$ as $n\to\infty$. 
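Conway's example can be checked mechanically. The following sketch is ours (illustrative, not part of the paper) and computes the sumset and difference set sizes directly:

```python
# Verify (independently of the paper) that Conway's set is sum-dominant.
def sums_and_diffs(A):
    """Return (|A+A|, |A-A|) for a finite set of integers A."""
    sums = {a + b for a in A for b in A}
    diffs = {a - b for a in A for b in A}
    return len(sums), len(diffs)

conway = {0, 2, 3, 4, 7, 11, 12, 14}
s, d = sums_and_diffs(conway)
assert (s, d) == (26, 25)  # |A+A| = 26 > 25 = |A-A|
```

Of the $29$ conceivable sums $0,\dots,28$ only $1$, $20$ and $27$ are missing, while of the $29$ conceivable differences $-14,\dots,14$ the four values $\pm 6$ and $\pm 13$ are missing.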
We investigate the effect of restricting the initial set on the existence of sum-dominant subsets. In particular, given an infinite set $A = \{a_k\}_{k=1}^\infty$, when does $A$ have no sum-dominant subsets, only finitely many sum-dominant subsets, or infinitely many sum-dominant subsets? \emph{We assume throughout the rest of the paper that every such sequence $A$ is strictly increasing and non-negative.} Our first result shows that if the sequence grows sufficiently rapidly and there are no `small' subsets which are sum-dominant, then there are no sum-dominant subsets. \begin{thm}\label{thm:gen} Let $A = \{a_k\}_{k=1}^\infty$ be a strictly increasing sequence of non-negative numbers. If there exists a positive integer $r$ such that \begin{enumerate} \item $a_k > a_{k-1} + a_{k-r}$ for all $k\ge r+1$, and \item $A$ does not contain any sum-dominant set $S$ with $|S| \le 2r-1$, \end{enumerate} then $A$ contains no sum-dominant set. \end{thm} We prove this in \S\ref{sec:noMSTD}. As the smallest sum-dominant set has 8 elements (see \cite{He}), the second condition is trivially true if $r \le 4$. In particular, we immediately obtain the following interesting result. \begin{cor} No subset of the Fibonacci numbers $\{0, 1, 2, 3, 5, 8, \dots\}$ is a sum-dominant set. \end{cor} The proof is trivial, and follows by taking $r=3$ and noting \begin{equation} F_k \ = \ F_{k-1} + F_{k-2} \ > \ F_{k-1} + F_{k-3} \end{equation} for $k \ge 4$. After defining a class of subsets we present a partial result on when there are at most finitely many sum-dominant subsets. \begin{defi}[Special Sum-Dominant Set] For a sum-dominant set $S$, we call $S$ a special sum-dominant set if $|S+S| - |S-S| \ge |S|$. \end{defi} We prove special sum-dominant sets exist in \S\ref{sec:specialsumdom}. Note that if $S$ is a special sum-dominant set and $S' = S \cup \{x\}$ for any sufficiently large $x$, then $S'$ is also a sum-dominant set. 
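The effect of adjoining one large element can be made concrete. The sketch below is ours (not from the paper); it uses Conway's set, which is sum-dominant but not special since $|S+S|-|S-S| = 1 < |S|$, and takes $x \ge \sum_{a \in S} a$ so that every new sum and difference is genuinely new:

```python
# Illustration (ours): adjoining x >= sum(S) to a finite set S of
# non-negative integers creates exactly |S|+1 new sums (x+s for s in S,
# plus x+x) and 2|S| new differences (+/-(x-s) for s in S).
def num_sums(A):
    return len({a + b for a in A for b in A})

def num_diffs(A):
    return len({a - b for a in A for b in A})

S = {0, 2, 3, 4, 7, 11, 12, 14}  # Conway's set: sum-dominant, not special
x = sum(S)                       # any x >= sum(S) works
Sx = S | {x}

assert num_sums(Sx) - num_sums(S) == len(S) + 1    # 9 new sums
assert num_diffs(Sx) - num_diffs(S) == 2 * len(S)  # 16 new differences
# Since S is not special, the enlarged set is no longer sum-dominant:
assert num_sums(Sx) < num_diffs(Sx)
```

For a special sum-dominant set the same count would leave the enlarged set sum-dominant, which is exactly the mechanism exploited below.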
We have the following result about a sequence having at most finitely many sum-dominant sets (see \S\ref{sec:finitelymany} for the proof). \begin{thm}\label{thm:finite} Let $A = \{a_k\}_{k=1}^\infty$ be a strictly increasing sequence of non-negative numbers. If there exists a positive integer $s$ such that the sequence $\{a_k\}$ satisfies \begin{enumerate} \item $a_k > a_{k-1} + a_{k-3}$ for all $k\ge s$, and \item $\{a_1, \dots, a_{4s+6}\}$ has no special sum-dominant subsets, \end{enumerate} then $A$ contains at most finitely many sum-dominant sets. \end{thm} The above results concern situations where there are not many sum-dominant sets; we end with an example of the opposite behavior. \begin{thm}\label{thm:prime} There are infinitely many sum-dominant subsets of the primes. \end{thm} We will see later that this result follows immediately from the Green-Tao Theorem \cite{GT}, which asserts that the primes contain arbitrarily long arithmetic progressions. We also give a conditional proof in \S\ref{sec:primes}. There we assume the Hardy-Littlewood conjecture (see Conjecture \ref{conj:HL}) holds. The advantage of such an approach is that we have an explicit formula for the number of the needed prime tuples up to $x$, which gives a sense of how many such solutions exist in a given window. \section{Subsets with no sum-dominant sets}\label{sec:noMSTD} We prove Theorem \ref{thm:gen}, establishing a sufficient condition to ensure the non-existence of sum-dominant subsets. \begin{proof}[Proof of Theorem \ref{thm:gen}] Let $S=\{s_1,s_2,\dots, s_k\} = \{a_{g(1)}, a_{g(2)}, \dots, a_{g(k)}\}$ be a finite subset of $A$, where $g : \ensuremath{\mathbb{Z}}^+ \to \ensuremath{\mathbb{Z}}^+$ is an increasing function. We show that $S$ is not a sum-dominant set by strong induction on $g(k)$. 
We show that if $A$ has no sum-dominant subsets of size $k$, then it has no sum-dominant subsets of size $k+1$; as any sum-dominant set has only finitely many elements, this completes the proof. For the Basis Step, we know (see \cite{He}) that all sum-dominant sets have at least 8 elements, so any subset $S$ of $A$ with exactly $k$ elements is not a sum-dominant set if $k \le 7$; in particular, $S$ is not a sum-dominant set if $g(k) \le 7$. Thus we may assume for $g(k) \ge 8$ that all $S'$ of the form $\{s_1,\dots, s_{k-1}\}$ with $s_{k-1} < a_{g(k)}$ are not sum-dominant sets. The proof is completed by showing \begin{equation} S \ = \ S' \cup \{a_{g(k)}\} \ =\ \{s_{1},\dots, s_{k-1}, a_{g(k)}\}\end{equation} is not a sum-dominant set for any $a_{g(k)}$. We now turn to the Inductive Step. We know that $S'$ is not a sum-dominant set by the inductive assumption. Also, if $k \le 2r-1$ then $|S| \le 2r-1$ and $S$ is not a sum-dominant set by the second assumption of the theorem. If $k \ge 2r$, consider the number of new sums and differences obtained by adding $a_{g(k)}$. As we have at most $k$ new sums, the proof is completed by showing there are at least $k$ new differences. Since $k \ge 2r$, we have $k - \floor{\frac{k+1}{2}} \ge r$. Let $t = \floor{\frac{k+1}{2}}$. Then $t \le k - r$, which implies $s_{t} \le s_{k-r}$. The largest difference in absolute value between elements in $S'$ is $s_{k-1}-s_1$; we now show that we have added at least $k$ distinct differences greater than $s_{k-1}-s_1$ in absolute value, which will complete the proof. We have \begin{align} a_{g(k)} - s_{t} &\ \ge \ a_{g(k)} - s_{k-r} \ = \ a_{g(k)} - a_{g(k-r)} \nonumber\\ &\ \ge \ a_{g(k)} - a_{g(k)-r} \nonumber\\ &\ > \ a_{g(k)-1}-a_1 & \text{(by the first assumption on $\{a_n\}$)} \nonumber\\ &\ \ge \ s_{k-1} - a_{1} \ \ge \ s_{k-1} - s_{1}. 
\end{align} Since $a_{g(k)} - s_{t} > s_{k-1}-s_1$, we know that $$a_{g(k)} - s_{t},\ \dots,\ a_{g(k)} - s_{2},\ a_{g(k)} - s_{1}$$ are $t$ differences greater than the greatest difference in $S'$. As we could subtract in the opposite order, $S$ contains at least \begin{equation} 2t\ =\ 2\left\lfloor\frac{k+1}{2}\right\rfloor\ \ge \ k\end{equation} new differences. Thus $S+S$ has at most $k$ more sums than $S'+S'$ but $S-S$ has at least $k$ more differences compared to $S'-S'$. Since $S'$ is not a sum-dominant set, we see that $S$ is not a sum-dominant set. \end{proof} \begin{rek} We thank the referee for the following alternative proof. Given any infinite increasing sequence $\{a_{g(i)}\}$ that is a subset of a set $A$ satisfying $a_k > a_{k-1} + a_{k-r}$ for all $k > r$, let $S_k = \{a_{g(1)}, \dots, a_{g(k)}\}$ and $\Delta_k = |S_k - S_k| - |S_k + S_k|$. Similar arguments as above show that $\{\Delta_k\}$ is increasing for $k \ge 2r$. \end{rek} We immediately obtain the following. \begin{cor}\label{cor:abs} Let $A = \{a_k\}_{k=1}^\infty$ be a strictly increasing sequence of non-negative numbers. If $a_k > a_{k-1} + a_{k-4}$ for all $k\ge 5$, then $A$ contains no sum-dominant subsets. \end{cor} \begin{proof} From \cite{He} we know that all sum-dominant sets have at least 8 elements. When $r = 4$ the second condition of Theorem \ref{thm:gen} holds, completing the proof. \end{proof} For another example, we consider shifted geometric progressions. \begin{cor}\label{thm:geo} Let $A = \{a_k\}_{k=1}^\infty$ with $a_k = c \rho^k + d$ for all $k\ge 1$, where $0 \neq c \in \mathbb{N}$, $d\in\mathbb{N}$, and $1 < \rho \in \mathbb{N}$. Then $A$ contains no sum-dominant subsets. \end{cor} \begin{proof} Without loss of generality we may shift and assume $d=0$ and $c=1$. Since $\rho \ge 2$ gives $\rho^4 \ge 2\rho^3 > \rho^3 + 1$, multiplying through by $\rho^{k-4}$ yields $a_k = \rho^k > \rho^{k-1} + \rho^{k-4} = a_{k-1} + a_{k-4}$ for all $k \ge 5$, and the result follows from Corollary \ref{cor:abs}.
\end{proof} \begin{rek} Note that if $\rho$ is an integer greater than the positive root of $x^4 - x^3 - 1$ (the characteristic polynomial associated to the recurrence $a_k = a_{k-1} + a_{k-4}$ from Theorem \ref{thm:finite}; this root is approximately $1.3803$), then the above corollary holds for $\{c \rho^k + d\}$. \end{rek} \section{Subsets with Finitely Many Sum-Dominant Sets}\label{sec:finitelymany} We start with some properties of special sum-dominant sets, and then prove Theorem \ref{thm:finite}. The arguments are similar to those used in proving Theorem \ref{thm:gen}. \emph{In this section, in particular in all the statements of the lemmas, we assume the conditions of Theorem \ref{thm:finite} hold.} Thus $A = \{a_k\}_{k=1}^\infty$ and there is an integer $s$ such that the sequence $\{a_k\}$ satisfies \begin{enumerate} \item $a_k > a_{k-1} + a_{k-3}$ for all $k\ge s$, and \item $\{a_1, \dots, a_{4s+6}\}$ has no special sum-dominant subsets. \end{enumerate} \subsection{Special Sum-Dominant Sets}\label{sec:specialsumdom} Recall that a sum-dominant set $S$ is special if $|S+S| - |S-S| \ge |S|$. For any $x \ge \sum_{a\in S} a$, adding $x$ creates $|S|+1$ new sums and $2|S|$ new differences. Let $S^* = S\cup \{x\}$. Then \begin{equation} |S^* + S^*| - |S^* - S^*|\ \ge\ |S| + (|S|+1) - 2|S|\ =\ 1,\end{equation} and $S^*$ is also a sum-dominant set. Hence, from one special sum-dominant set $S \subset \{a_n\}_{n=1}^\infty =: A$, we can generate infinitely many sum-dominant sets by adding any large integer in $A$. We immediately obtain the following converse. \begin{lem}\label{lem:appendingelementtonotspecial} If a set $S$ is not a special sum-dominant set, then $|S+S| - |S-S| < |S|$, and by adding any large $x \ge \sum_{a\in S} a$, the set $S\cup \{x\}$ has at least as many differences as sums. Thus only finitely many sum-dominant sets can be generated by appending one integer from $A$ to a non-special sum-dominant set $S$.
\end{lem} Note that special sum-dominant sets exist. We use the base expansion method (see \cite{He}), which states that given a set $A$, for all $m$ sufficiently large if \begin{equation} A_t \ = \ \left\{\sum_{i=1}^t a_i m^{i-1}: a_i \in A\right\} \end{equation} then \begin{equation} |A_t \pm A_t| \ = \ |A \pm A|^t; \end{equation} the reason is that for $m$ large the various elements are clustered, with different pairs of clusters yielding well-separated sums. To construct the desired special sum-dominant set, consider the smallest sum-dominant set $S = \{0, 2, 3, 4, 7, 11, 12, 14\}$. Using the method of base expansion with $m = 10^{2017}$, we obtain $S_3$ containing $|S_3| = 8^3 = 512$ elements such that $|S_3 + S_3| = |S + S|^3 = 26^3 = 17576$ and $|S_3 - S_3| = |S - S|^3 = 25^3 = 15625$. Then $|S_3 + S_3| - |S_3 - S_3| > |S_3|$. \subsection{Finitely Many Sum-Dominant Sets on a Sequence} If a sequence $A = \{a_n\}_{n=1}^\infty$ contains a special sum-dominant set $S$, then we can obtain infinitely many sum-dominant subsets of the sequence just by adding sufficiently large elements of $A$ to $S$. Therefore, for a sequence $A$ to have at most finitely many sum-dominant subsets, it is necessary that it contain no special sum-dominant sets. Using the result from the previous subsection, we can prove Theorem \ref{thm:finite}. We establish some notation before turning to the proof in the next subsection. We can write $A$ as the union of $A_1 = \{a_1, \dots, a_{s-1}\}$ and $A_2 = \{a_{s}, a_{s+1}, \dots \}$. By Corollary \ref{cor:abs}, we know that $A_2$ contains no sum-dominant sets. Thus any sum-dominant set must contain some elements from $A_1$. We first prove a lemma about $A_2$. \begin{lem}\label{lem:add} Let $S' = \{s_1, \dots, s_{k-1}\}$ be a subset of $A$ containing at least 3 elements $a_{r_1}, a_{r_2}, a_{r_3}$ in $A_2$, with $r_3 > r_2 > r_1$. Consider the index $g(k) > r_3$, and let $S = S' \cup \{a_{g(k)}\}$.
Then either $S$ is not a sum-dominant set, or $S$ satisfies $|S-S| - |S+S| > |S' - S'| - |S' + S'|$. Thus the excess of sums to differences from $S$ is \emph{less than} the excess from $S'$. \end{lem} \begin{proof} We follow a similar argument as in Theorem \ref{thm:gen}. If $k \le 7$, then $S$ is not a sum-dominant set. If $k \ge 8$, then $k - \floor{\frac{k+3}{2}} \ge 3$. Let $t = \floor{\frac{k+3}{2}}$. Then $t \le k - 3$, so $s_{t} \le s_{k-3}$, and \begin{align} a_{g(k)} - s_{t} &\ \ge\ a_{g(k)} - s_{k-3} \ = \ a_{g(k)} - a_{g(k-3)} \nonumber\\ &\ \ge\ a_{g(k)} - a_{g(k)-3} \nonumber\\ & \ >\ a_{g(k)-1} \ \ge\ a_{g(k)-1} - a_{1} & \text{(by the first assumption on $\{a_k\}$)} \nonumber\\ &\ \ge\ s_{k-1} - a_{1} \ \ge\ s_{k-1} - s_{1}. \end{align} In the set $S'$, the greatest difference is $s_{k-1}-s_1$. Since $a_{g(k)} - s_{t} > s_{k-1}-s_1$, we know that $a_{g(k)} - s_{t}, \dots, a_{g(k)} - s_{2}, a_{g(k)} - s_{1}$ are all differences greater than the greatest difference in $S'$. By a similar argument, $s_{t} - a_{g(k)}, \dots, s_{2} - a_{g(k)}, s_{1} - a_{g(k)}$ are all differences smaller than the smallest difference in $S'$. So $S$ contains at least $2t = 2\floor{\frac{k+3}{2}} > 2\cdot \frac{k+1}{2} = k+1$ new differences compared to $S'$, and $S$ satisfies \begin{equation} |S-S| - |S+S| \ >\ |S' - S'| - |S' + S'|, \end{equation} completing the proof. \end{proof} \subsection{Proof of Theorem \ref{thm:finite}} Recall that we write $A = A_1 \cup A_2$ with $A_1 = \{a_1$, $\dots$, $a_{s-1}\}$, $A_2 = \{a_{s}$, $a_{s+1}$, $\dots \}$, and by Corollary \ref{cor:abs} $A_2$ contains no sum-dominant sets (thus any sum-dominant set must contain some elements from $A_1$). We first prove a series of useful results which imply the main theorem. Our first result classifies the possible sum-dominant subsets of $A$.
Since any such set must have at least one element of $A_1$ in it but not necessarily any elements of $A_2$, we use the subscript $n$ below to indicate how many elements of $A_2$ are in our sum-dominant set. \begin{lem}[Classification of Sum-Dominant Subsets of $A$]\label{lem:classificationA} Notation as above, let $K_n$ be a sum-dominant subset of $A = A_1 \cup A_2$ with $n$ elements in $A_2$. Thus we may write $$K_n\ =\ S \cup \{a_{r_1}, \dots, a_{r_n}\}$$ for some $$S \ \subset \ A_1 \ = \ \{a_1, \dots, a_{s-1}\}, \ \ \ s \le r_1 < r_2 < \dots < r_n.$$ Set $$d\ =\ \max\Bigl\{\max_{K_3} \bigl(|K_3 + K_3| - |K_3 - K_3|\bigr),\ 1\Bigr\},$$ where the inner maximum is over all sum-dominant subsets $K_3$ of $A$ with exactly three elements in $A_2$. Then $n \le d+3$. In other words, a sum-dominant subset of $A$ can have at most $d+3$ elements of $A_2$. \end{lem} \begin{proof} Let $S_m$ be any subset of $A$ with $m$ elements of $A_2$. Lemma \ref{lem:add} tells us that for any $S_m$ with $m \ge 3$, when we add any new element $a_{r_{m+1}}$ to get $S_{m+1}$, either $S_{m+1}$ is not a sum-dominant set, or $$|S_{m+1}-S_{m+1}| - |S_{m+1}+S_{m+1}| \ \ge\ |S_m - S_m| - |S_m + S_m| + 1.$$ Assume for some $n > d+3$ that there exists a sum-dominant set, and denote it by $K_n$. For $3 \le k \le n$, define $S_k$ as the set obtained by deleting from $K_n$ the $n-k$ largest elements of $A_2$ (equivalently, keeping only the $k$ smallest elements of $K_n$ which are in $A_2$). We prove that each $S_k$ is sum-dominant, and then show that this forces $S_n$ not to be sum-dominant; this contradiction proves the theorem as $K_n = S_n$. If $S_k$ is not a sum-dominant set for some $k \ge 3$, by Lemma \ref{lem:add} either $S_{k+1}$ is not a sum-dominant set, or $$|S_{k+1} - S_{k+1}| - |S_{k+1} + S_{k+1}|\ \ge \ |S_k - S_k| - |S_k + S_k| + 1\ \ge \ 0,$$ in which case $S_{k+1}$ is also not a sum-dominant set (because $S_k$ is not sum-dominant, the set $S_{k+1}$ generates at least as many differences as sums). As we are assuming $K_n$ (which is just $S_n$) is a sum-dominant set, we find $S_{n-1}$ is sum-dominant.
Repeating the argument, we find that $S_{n-2}$ down to $S_3$ must also all be sum-dominant sets, and we have \begin{equation} |S_n - S_n| - |S_n + S_n|\ \ge \ |S_3 - S_3| - |S_3 + S_3| + (n-3). \end{equation} Since $S_3$ is one of the $K_3$'s (i.e., it is a sum-dominant subset of $A$ with exactly three elements of $A_2$), by the definition of $d$ the right-hand side above is at least $n-3-d$. As we are assuming $n > d+3$, we see it is positive, and hence $S_n$ is not sum-dominant. As $S_n = K_n$, we see that $K_n$ is not a sum-dominant set, contradicting our assumption that there is a sum-dominant set $K_n$ with $n > d+3$ and proving the theorem. \end{proof} \begin{lem}\label{lem:precursortothm} For $n \ge 0$ let $k_n$ denote the number of subsets $K_n \subset A$ which are sum-dominant and contain exactly $n$ elements from $A_2$. We write \begin{equation}\label{eq:writingKn} K_n \ = \ S \cup \{a_{r_1}, \dots, a_{r_n}\} \ \ \ {\rm with}\ S \subset A_1.\end{equation} Then \begin{enumerate} \item $k_n$ is finite for all $n \ge 0$, and \item no $K_n$ is a special sum-dominant set. \end{enumerate} \end{lem} \begin{proof} We prove both parts simultaneously by induction on $n$. We break the analysis into $n \in \{0, 1, 2, 3\}$ and $n \ge 4$. The proof for $n=0$ is immediate, the cases $n \in \{1,2,3\}$ follow by obtaining bounds on the indices permissible in a $K_n$, and then $n \ge 4$ follows by induction. We thus must check (1) and (2) for $n \le 3$. While the arguments for $n \le 3$ are all similar, it is convenient to handle each case differently so that we can control the indices and use earlier results; in particular, removing the largest element of $A_2$ yields a set which is not a special sum-dominant set.
\\ \ \noindent \emph{Case $n=0$:} As $A_1$ is finite, it has finitely many subsets and thus $k_0$, which is the number of sum-dominant subsets of $A_1$, is finite (it is at most $2^{|A_1|}$). Further, any $K_0$ is a subset of $$ A_1\ =\ \{a_1, \dots, a_{s-1}\},$$ which is a subset of \begin{equation}\label{eq:Aprime} A'\ = \ \{a_1, \dots, a_{4s+6}\}.\end{equation} As we have assumed $A'$ has no special sum-dominant set, no $K_0$ can be a special sum-dominant set. \\ \ \noindent \emph{Case $n=1$:} We start by obtaining upper bounds on $r_1$, the index of the smallest (and only) element in our set coming from $A_2$. Consider the index $4s$. We claim that \begin{equation} \label{eq:s-1} a_{4s} \ >\ \sum_{a \in A_1} a. \end{equation} This is because $|A_1| < s$ and $a_k > a_{k-1} + a_{k-3}$ for all $k\ge s$, and hence \begin{align*} \sum_{a \in A_1} a & \ <\ s\cdot a_{s} \\ & \ <\ \frac{s}{2} \left(a_{s} + a_{s+2} \right) \ <\ \frac{s}{2} \cdot a_{s+3} \\ & \ <\ \frac{s}{4} \left(a_{s+3} + a_{s+5} \right) \ <\ \frac{s}{4} \cdot a_{s+6} \ <\ \cdots \\ & \ <\ \frac{s}{2^{\lceil \log_2 s\rceil}} a_{s+3\lceil\log_2(s)\rceil} \\ & \ \le\ a_{s+3s} \ =\ a_{4s} \end{align*} (by doing the above $\lceil\log_2 s\rceil$ times we ensure that $s/2^{\lceil \log_2 s\rceil} \le 1$, and since $s \ge 1$ we have $3s \ge 3 \lceil\log_2(s)\rceil$). Therefore for all $r_{1} > 4s$, \begin{equation} \label{eq:t-1} a_{r_{1}} \ >\ a_{4s} \ > \ \sum_{a \in A_1} a. \end{equation} Clearly there are only finitely many sum-dominant subsets $K_1$ with $r_1 \le 4s$; the analysis is completed by showing there are no sum-dominant sets with $r_1 > 4s$. Suppose there were a sum-dominant $K_1$ with $a_{r_1} > a_{4s}$. Then $K_1$ is the union of a set of elements $S=\{s_1, \dots, s_m\}$ in $A_1$ and $a_{r_1}$ in $A_2$.
As $\sum_{s\in S} s < a_{r_1}$, by Lemma \ref{lem:appendingelementtonotspecial} we find $K_1$ is not a sum-dominant set. All that remains is to show that none of the $K_1$ are special sum-dominant sets. This is immediate, as each sum-dominant $K_1$ is a subset of $\{a_1, \dots, a_{4s}\}$, which is a subset of $A'$ (defined in \eqref{eq:Aprime}). As we have assumed $A'$ has no special sum-dominant set, no $K_1$ can be a special sum-dominant set. \\ \ \noindent \emph{Case $n=2$:} Consider the index $4s+3$. If $K_2$ is a sum-dominant set then it has two elements, $a_{r_1} < a_{r_2}$, that are in $A_2$. We show that if $r_2 \ge 4s+3$ then there can be no sum-dominant sets, and thus there are only finitely many $K_2$. For all $r_{2} \ge 4s+3$, \begin{equation} \label{eq:t-2} a_{r_2} - a_{r_2 - 1} \ >\ a_{r_2-3} \ \ge\ a_{4s} \ >\ \sum_{a \in A_1} a. \end{equation} Assume there is a sum-dominant $K_2$ with $r_2 \ge 4s+3$. It contains some elements $S=\{s_1, \dots, s_m\}$ in $A_1$ and $a_{r_1}, a_{r_2}$ in $A_2$. We have $$a_{r_2} - a_{r_1}\ \ge\ a_{r_2} - a_{r_2-1} \ >\ \sum_{a\in S} a.$$ Therefore $a_{r_2} > \left( \sum_{a\in S} a\right) + a_{r_1}$, and $S\cup\{a_{r_1}\}$ is not a special sum-dominant set by the $n=1$ case\footnote{If $S' = S \cup \{a_{r_1}\}$ is sum-dominant then it is not special, while if it is not sum-dominant then clearly it is not a special sum-dominant set.}. Hence, by Lemma \ref{lem:appendingelementtonotspecial} we find $K_2 = (S\cup \{a_{r_1}\}) \cup \{a_{r_2}\}$ is not a sum-dominant set. Finally, as each sum-dominant $K_2$ is a subset of $\{a_1, \dots, a_{4s+2}\}$, which is a subset of $A'$, by assumption $K_2$ is not a special sum-dominant set. \\ \ \noindent \emph{Case $n=3$:} Let $K_3$ be a sum-dominant set with three elements from $A_2$. We show that if $r_3 \ge 4s+6$ then there are no such $K_3$; as there are only finitely many sum-dominant sets with $r_3 < 4s+6$, this completes the counting proof in this case. Consider the index $4s+6$.
For all $r_{3} \ge 4s+6$, \begin{equation} \label{eq:t-3} a_{r_3-3} - a_{r_3-4} \ >\ a_{r_3-6} \ \ge\ a_{4s} \ >\ \sum_{a \in A_1} a. \end{equation} Consider any $K_3$ with $r_3 \ge 4s+6$. We write $K_3$ as $S \cup \{a_{r_1}, a_{r_2}, a_{r_3}\}$ with $S \subset A_1$. If $|S| < 5$, we know that $|K_3| < 8$, and $K_3$ is not a sum-dominant set as any such set has at least 8 elements. We can therefore assume that $|S| \ge 5$. We have two cases.\\ \ \noindent \emph{Subcase 1: $r_2 \le r_3-3$:} Thus $$a_{r_3} - a_{r_2} - a_{r_1}\ \ge\ a_{r_3} - a_{r_3-3} - a_{r_3-4}\ >\ a_{r_3-1} - a_{r_3-4}\ >\ a_{r_3 -2} \ >\ a_{r_3 -6} \ >\ \sum_{a \in S} a.$$ As $S\cup\{a_{r_1}, a_{r_2}\}$ is not a special sum-dominant set by the $n=2$ case\footnote{As before, if it is sum-dominant it is not special, while if it is not sum-dominant it cannot be a special sum-dominant set; thus we have the needed inequalities concerning the sizes of the sets.}, adding $a_{r_3}$ with $$a_{r_3}\ >\ \left( \sum_{s\in S} s\right) + a_{r_1} + a_{r_2}$$ creates a non-sum-dominant set by Lemma \ref{lem:appendingelementtonotspecial}. \\ \ \noindent \emph{Subcase 2: $r_2 > r_3 - 3$:} Using \eqref{eq:t-3} we find $$a_{r_3} - a_{r_2}\ \ge\ a_{r_3} - a_{r_3-1}\ >\ \sum_{a\in S} a$$ and $$a_{r_2} - a_{r_1} \ \ge \ a_{r_2} - a_{r_2-1}\ >\ a_{r_2-3}\ \ge\ a_{r_3-6}\ >\ \sum_{a\in S} a.$$ Therefore the differences between $a_{r_1}$, $a_{r_2}$, $a_{r_3}$ are large relative to the sum of the elements in $S$, and our new sums and new differences are well-separated from the old sums and differences. Explicitly, $K_3 + K_3$ consists of $S + S$, $a_{r_1} + S$, $a_{r_2} + S$, $a_{r_3} + S$, plus at most 6 more elements (from the sums of the $a_r$'s), while $K_3 - K_3$ consists of $S-S$, $\pm(a_{r_1} - S)$, $\pm(a_{r_2} - S)$, $\pm(a_{r_3} - S)$, plus possibly some differences from the differences of the $a_r$'s.
As $S$ is not a special sum-dominant set, we know $|S+S| - |S-S| < |S|$ (if $S$ is not sum-dominant the claim holds trivially, while if it is sum-dominant it holds because $S$ is not special). Thus for $K_3$ to be sum-dominant, we must have \begin{eqnarray} 0 & \ < \ & |K_3 + K_3| - |K_3 - K_3| \nonumber\\ & \le & \left(|S+S| + 3|S| + 6\right) - \left(|S-S| + 6|S|\right) \nonumber\\ & < & 6 -2|S|; \nonumber \end{eqnarray} as $|S| \ge 5$ this is impossible, and thus $K_3$ cannot be sum-dominant. Finally, as again $K_3$ is a subset of $A' = \{a_1, \dots, a_{4s+6}\}$, no $K_3$ is a special sum-dominant set.\\ \ \noindent \emph{Case $n\ge 4$ (inductive step):} We proceed by induction. We may assume that $k_n$ is finite for some $n\ge 3$, and must show that $k_{n+1}$ is finite. By the earlier cases we know there is an integer $t_n$ such that if $K_n$ is a sum-dominant subset of $A$ with exactly $n$ elements of $A_2$, then the largest index $r_n$ of an $a_i \in K_n$ is less than $t_n$. We claim that if $K_{n+1}$ is a sum-dominant subset of $A$ then each index is less than $t_{n+1}$, where $t_{n+1}$ is the smallest index such that if $r_{n+1} \ge t_{n+1}$ then \begin{equation} \label{eq:t} a_{r_{n+1}} \ >\ \sum_{i < t_{n}} a_i. \end{equation} We write $$K_{n+1} \ = \ S \cup \{a_{r_1}, \dots, a_{r_n}, a_{r_{n+1}}\}, \ \ S \subset A_1, \ \ \{a_{r_1}, \dots, a_{r_n}\} \subset A_2.$$ We show that if $r_{n+1} \ge t_{n+1}$ then $K_{n+1}$ is not sum-dominant. Let $S_n = K_{n+1} \setminus \{a_{r_{n+1}}\}$. We have two cases. \begin{itemize} \item If $r_n < t_{n}$, then by the inductive hypothesis $S_n$ is not a special sum-dominant set. So adding $a_{r_{n+1}} > \sum_{x\in S_{n}} x$ to $S_n$ gives a non-sum-dominant set by Lemma \ref{lem:appendingelementtonotspecial}. \item If $r_n \ge t_{n}$, then by the inductive hypothesis $S_n$ is not a sum-dominant set. So $|S_n - S_n| - |S_n + S_n| \ge 0$.
Since $n \ge 3$, we can apply Lemma \ref{lem:add}, and either $K_{n+1} = S_n \cup \{a_{r_{n+1}}\}$ is not a sum-dominant set, or $$|K_{n+1} - K_{n+1}| - |K_{n+1} + K_{n+1}|\ >\ |S_n - S_n| - |S_n + S_n| \ \ge\ 0,$$ in which case $K_{n+1}$ is again not a sum-dominant set. \end{itemize} We conclude that for all sum-dominant sets $K_{n+1}$, we must have $r_{n+1} < t_{n+1}$. So $k_{n+1}$ is finite. Consider any sum-dominant set $K_{n+1} = S_n \cup \{a_{r_{n+1}}\}$. Applying Lemma \ref{lem:add} again, we have $|K_{n+1} - K_{n+1}| - |K_{n+1} + K_{n+1}| > |S_n - S_n| - |S_n + S_n|$. We know, from the inductive hypothesis, that $S_n$ is not a special sum-dominant set. Therefore no $K_{n+1}$ is a special sum-dominant set. By induction, $k_n$ is finite for all $n\ge 0$, and no $K_n$ is a special sum-dominant set. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:finite}] By Lemma \ref{lem:classificationA} every sum-dominant subset of $A$ is of the form $K_0$, $K_1$, $K_2$, $\dots$, $K_{d+3}$ where the $K_n$ are as in \eqref{eq:writingKn}. By Lemma \ref{lem:precursortothm} there are only finitely many sets of the form $K_n$ for $n \le d+3$, and thus there are only finitely many sum-dominant subsets of $A$. \end{proof} \section{Sum-Dominant Subsets of the Prime Numbers}\label{sec:primes} We now investigate sum-dominant subsets of the primes. While Theorem \ref{thm:prime} follows immediately from the Green-Tao theorem, we first conditionally prove there are infinitely many sum-dominant subsets of the primes, as this argument gives a better sense of what the `truth' should be (i.e., how far we must go before we find sum-dominant subsets). \subsection{Admissible Prime Tuples and Prime Constellations} We first consider the idea of prime $m$-tuples. A prime $m$-tuple $(b_1, b_2, \dots, b_m)$ represents a pattern of differences between prime numbers. An integer $n$ matches this pattern if $(b_1 + n, b_2 + n, \dots, b_m + n)$ are all primes.
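To make ``matching'' concrete, here is a minimal Python sketch (the triple $(0,2,6)$ is our own illustrative choice of pattern, not one used in the text):

```python
def is_prime(n):
    # trial division; adequate for the small values used here
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def matches(n, pattern):
    # n matches the pattern (b_1, ..., b_m) if every b + n is prime
    return all(is_prime(b + n) for b in pattern)

# integers below 100 matching the pattern (0, 2, 6)
hits = [n for n in range(2, 100) if matches(n, (0, 2, 6))]
print(hits)  # [5, 11, 17, 41]
```

For instance, $n=5$ matches $(0,2,6)$ since $5$, $7$, and $11$ are all prime.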
A prime $m$-tuple $(b_1, b_2, \dots, b_m)$ is called admissible if for all integers $k \ge 2$, the set $\{b_1, b_2, \dots, b_m\}$ does not cover all residues modulo $k$. If a prime $m$-tuple is not admissible, say it covers all residues modulo $k$, then for every $n > k$ at least one of $b_1 + n, b_2 + n, \dots, b_m + n$ is divisible by $k$ and greater than $k$, so this cannot be an $m$-tuple of prime numbers (in this case the only $n$ which can lead to an $m$-tuple of primes are $n \le k$, and there are only finitely many of these). It is conjectured in \cite{HL} that all admissible $m$-tuples are matched by infinitely many integers. \begin{conj}[Hardy-Littlewood \cite{HL}]\label{conj:HL} Let $b_1, b_2, \dots, b_m$ be $m$ distinct integers, $v_p(b) = v(p; b_1, b_2, \dots , b_m)$ the number of distinct residues of $b_1, b_2, \dots, b_m$ modulo $p$, and $P(x$; $b_1$, $b_2$, $\dots$, $b_m)$ the number of integers $1 \le n \le x$ such that every element in $\{n + b_1, n + b_2, \dots , n + b_m\}$ is prime. Assume $(b_1, b_2, \dots, b_m)$ is admissible (thus $v_p(b) \neq p$ for all $p$). Then \begin{equation}\label{eq:HLforPx} P(x) \ \sim \ \mathfrak{S}(b_1, b_2, \dots, b_m) \int_2^x \frac{du}{(\log u)^m} \end{equation} as $x \to \infty$, where $$ \mathfrak{S}(b_1, b_2, \dots, b_m)\ =\ \prod_{p \ge 2} \left(\left( \frac{p}{p-1}\right)^{m-1} \frac{p-v_p(b)}{p-1} \right) \ \neq \ 0. $$ \end{conj} As $(b_1, b_2, \dots, b_m)$ is an admissible $m$-tuple, $v(p; b_1, b_2, \dots, b_m)$ is never equal to $p$ and equals $m$ for $p> \max\{|b_i-b_j|\}$. The product $\mathfrak{S}(b_1, b_2, \dots, b_m)$ thus converges to a positive number, as each factor is non-zero and is $1 + O_m(1/p^2)$. Therefore this conjecture implies that every admissible $m$-tuple is matched by infinitely many integers. \subsection{Infinitude of sum-dominant subsets of the primes} We now show the Hardy-Littlewood conjecture implies there are infinitely many subsets of the primes which are sum-dominant sets.
\begin{thm}\label{thm:HLprimes} If the Hardy-Littlewood conjecture holds for all admissible $m$-tuples then the primes have infinitely many sum-dominant subsets. \end{thm} \begin{proof} Consider the smallest sum-dominant set $S=\{0, 2, 3, 4, 7, 11, 12, 14\}$. We know that $\{p, p+2s, p+3s, p+4s, p+7s, p+11s, p+12s, p+14s\}$ is a sum-dominant set for all positive integers $p, s$. Set $s=30$ and let $T = (0, 60, 90, 120, 210, 330, 360, 420)$. We deduce that if there are infinitely many $n$ such that $n+T = (n, n+60, n+90, n+120, n+210, n+330, n+360, n+420)$ is an 8-tuple of prime numbers, then there are infinitely many sum-dominant sets of prime numbers. We check that $T$ is an admissible prime 8-tuple. For moduli $k>8$, the eight numbers in $T$ clearly cannot cover all residues modulo $k$. For $k \le 8$, one sees by straightforward computation that $T$ does not cover all residues modulo $k$. By Conjecture \ref{conj:HL}, there are infinitely many integers $p$ such that every element of $\{p, p+60, p+90, p+120, p+210, p+330, p+360, p+420\}$ is prime. These are all sum-dominant sets, so there are infinitely many sum-dominant subsets of the primes. \end{proof} Of course, all we need is that the Hardy-Littlewood conjecture holds for one admissible $m$-tuple which has a sum-dominant subset. We may take $p=19$, which gives an explicit sum-dominant subset of the primes: $\{19, 79, 109, 139, 229, 349, 379, 439\}$ (a natural question is which sum-dominant subset of the primes has the smallest diameter). If one wishes, one can use the conjecture to get some lower bounds on the number of sum-dominant subsets of the primes at most $x$. The proof of Theorem \ref{thm:prime} follows similarly. \begin{proof}[Proof of Theorem \ref{thm:prime}] By the Green-Tao theorem, the primes contain arbitrarily long arithmetic progressions.
Thus for each $N \ge 14$ there are infinitely many pairs $(p,d)$ such that \begin{equation} \{p, p + d, p + 2d, \dots, p + Nd\} \end{equation} are all prime. We can then take subsets as in the proof of Theorem \ref{thm:HLprimes}. \end{proof} \section{Future Work} We list some natural topics for further research. \begin{itemize} \item Can the conditions in Theorem \ref{thm:gen} or \ref{thm:finite} be weakened? \item What is the smallest special sum-dominant set by diameter, and by cardinality? \item What is the smallest, in terms of its largest element, set of primes that is sum-dominant? \end{itemize} \begin{thebibliography}{AAAAAAA} \bibitem[DKMMW]{DKMMW} T. Do, A. Kulkarni, S. J. Miller, D. Moon and J. Wellens, \emph{Sums and Differences of Correlated Random Sets}, Journal of Number Theory \textbf{147} (2015), 44--68. \bibitem[DKMMWW]{DKMMWW} T. Do, A. Kulkarni, S. J. Miller, D. Moon, J. Wellens and J. Wilcox, \emph{Sets Characterized by Missing Sums and Differences in Dilating Polytopes}, Journal of Number Theory \textbf{157} (2015), 123--153. \bibitem[FP]{FP} G. A. Freiman and V. P. Pigarev, \emph{The relation between the invariants R and T}, In: Number theoretic studies in the Markov spectrum and in the structural theory of set addition (Russian), pp. 172--174. Kalinin. Gos. Univ., Moscow (1973). \bibitem[GT]{GT} B. Green and T. Tao, \emph{The primes contain arbitrarily long arithmetic progressions}, Annals of Mathematics \textbf{167} (2008), no. 2, 481--547. \bibitem[HL]{HL} G. H. Hardy and J. Littlewood, \emph{Some Problems of `Partitio Numerorum'; III: On the Expression of a Number as a Sum of Primes}, Acta Math. \textbf{44} (1923), 1--70. \bibitem[He]{He} P. V. Hegarty, \emph{Some explicit constructions of sets with more sums than differences}, Acta Arithmetica \textbf{130} (2007), no. 1, 61--77. \bibitem[HM]{HM} P. V. Hegarty and S. J.
Miller, \emph{When almost all sets are difference dominated}, Random Structures and Algorithms \textbf{35} (2009), no. 1, 118--136. \bibitem[ILMZ]{ILMZ} G. Iyer, O. Lazarev, S. J. Miller and L. Zhang, \emph{Generalized more sums than differences sets}, Journal of Number Theory \textbf{132} (2012), no. 5, 1054--1073. \bibitem[Ma]{Ma} J. Marica, \emph{On a conjecture of Conway}, Canad. Math. Bull. \textbf{12} (1969), 233--234. \bibitem[MO]{MO} G. Martin and K. O'Bryant, \emph{Many sets have more sums than differences}, in Additive Combinatorics, CRM Proc. Lecture Notes, vol. 43, Amer. Math. Soc., Providence, RI, 2007, pp. 287--305. \bibitem[MOS]{MOS} S. J. Miller, B. Orosz and D. Scheinerman, \emph{Explicit constructions of infinite families of MSTD sets}, Journal of Number Theory \textbf{130} (2010), 1221--1233. \bibitem[MS]{MS} S. J. Miller and D. Scheinerman, \emph{Explicit constructions of infinite families of MSTD sets}, Additive Number Theory, Springer, 2010, pp. 229--248. \bibitem[MPR]{MPR} S. J. Miller, S. Pegado and L. Robinson, \emph{Explicit Constructions of Large Families of Generalized More Sums Than Differences Sets}, Integers \textbf{12} (2012), \#A30. \bibitem[MV]{MV} S. J. Miller and K. Vissuet, \emph{Most Subsets are Balanced in Finite Groups}, Combinatorial and Additive Number Theory, CANT 2011 and 2012 (Melvyn B. Nathanson, editor), Springer Proceedings in Mathematics \& Statistics (2014), 147--157. \bibitem[Na1]{Na1} M. B. Nathanson, \emph{Sums of finite sets of integers}, The American Mathematical Monthly \textbf{79} (1972), no. 9, 1010--1012. \bibitem[Na2]{Na2} M. B. Nathanson, \emph{Problems in additive number theory, 1}, Additive combinatorics, 263--270, CRM Proc. Lecture Notes \textbf{43}, Amer. Math. Soc., Providence, RI, 2007. \bibitem[Na3]{Na3} M. B. Nathanson, \emph{Sets with more sums than differences}, Integers: Electronic Journal of Combinatorial Number Theory \textbf{7} (2007), Paper A5 (24pp). \bibitem[Ru1]{Ru1} I. Z.
Ruzsa, \emph{On the cardinality of $A + A$ and $A - A$}, Combinatorics (Keszthely, 1976), vol. 18, Coll. Math. Soc. J. Bolyai, North-Holland--Bolyai T\'arsulat, 1978, 933--938. \bibitem[Ru2]{Ru2} I. Z. Ruzsa, \emph{Sets of sums and differences}. In: S\'eminaire de Th\'eorie des Nombres de Paris 1982--1983, pp. 267--273. Birkh\"auser, Boston (1984). \bibitem[Ru3]{Ru3} I. Z. Ruzsa, \emph{On the number of sums and differences}, Acta Math. Sci. Hungar. \textbf{59} (1992), 439--447. \bibitem[Zh1]{Zh1} Y. Zhao, \emph{Constructing MSTD sets using bidirectional ballot sequences}, Journal of Number Theory \textbf{130} (2010), no. 5, 1212--1220. \bibitem[Zh2]{Zh2} Y. Zhao, \emph{Counting MSTD sets in finite abelian groups}, Journal of Number Theory \textbf{130} (2010), no. 10, 2308--2322. \bibitem[Zh3]{Zh3} Y. Zhao, \emph{Sets characterized by missing sums and differences}, Journal of Number Theory \textbf{131} (2011), no. 11, 2107--2134. \end{thebibliography} \end{document}
\begin{document} \begin{frontmatter} \title{Likelihood Approximation With Hierarchical Matrices For Large Spatial Datasets} \author[Alex]{Alexander Litvinenko\corref{cor1}} \ead{[email protected]} \ead[url]{bayescomp.kaust.edu.sa, sri-uq.kaust.edu.sa} \author[Alex]{Ying Sun} \ead{[email protected]} \ead[url]{es.kaust.edu.sa} \author[Alex]{Marc G. Genton} \ead{[email protected]} \ead[url]{stsda.kaust.edu.sa} \author[Alex]{David E. Keyes} \ead{[email protected]} \ead[url]{ecrc.kaust.edu.sa} \address[Alex]{King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia} \cortext[cor1]{Corresponding author} \begin{abstract} We use available measurements to estimate the unknown parameters (variance, smoothness, and covariance length) of a covariance function by maximizing the joint Gaussian log-likelihood function. To overcome the cubic complexity in the linear algebra, we approximate the discretized covariance function in the hierarchical ($\mathcal{H}$-) matrix format. The $\mathcal{H}$-matrix format has a log-linear computational cost and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer and $n$ is the number of locations. The $\mathcal{H}$-matrix technique allows us to work with general covariance matrices efficiently, since $\mathcal{H}$-matrices can approximate inhomogeneous covariance functions, with a fairly general mesh that is not necessarily axes-parallel, and neither the covariance matrix itself nor its inverse has to be sparse. We investigate how the $\mathcal{H}$-matrix approximation error influences the estimated parameters. We demonstrate our method with Monte Carlo simulations with known true values of the parameters and with an application to soil moisture data with unknown parameters. The C and C++ codes and data are freely available.
\end{abstract} \begin{keyword} Computational statistics; Hierarchical matrix; Large dataset; Mat\'ern covariance; Random field; Spatial statistics. \end{keyword} \end{frontmatter} \section{Introduction}\label{sec:intro} The number of measurements that must be processed for statistical modeling in environmental applications is usually very large, and these measurements may be located irregularly across a given geographical region. This makes the computing procedure expensive and the data difficult to manage. These data are frequently modeled as a realization from a stationary Gaussian spatial random field. Specifically, we let $\mathbf{Z}=\{Z(\mathbf{s}_1),\ldots,Z(\mathbf{s}_n)\}^\top$, where $Z(\mathbf{s})$ is a Gaussian random field indexed by a spatial location $\mathbf{s} \in \mathbb{R}^d$, $d\geq 1$. Then, we assume that $\mathbf{Z}$ has mean zero and a stationary parametric covariance function $C(\mathbf{h};{\boldsymbol \theta})=\mathop{\rm cov}\nolimits\{Z(\mathbf{s}),Z(\mathbf{s}+\mathbf{h})\}$, where $\mathbf{h}\in\mathbb{R}^d$ is a spatial lag vector and ${\boldsymbol \theta}\in\mathbb{R}^q$ is the unknown parameter vector of interest. Statistical inferences about ${\boldsymbol \theta}$ are often based on the Gaussian log-likelihood function: \begin{equation} \label{eq:likeli} \mathcal{L}({\boldsymbol \theta})=-\frac{n}{2}\log(2\pi) - \frac{1}{2}\log \det\{\mathbf{C}({\boldsymbol \theta})\}-\frac{1}{2}\mathbf{Z}^\top \mathbf{C}({\boldsymbol \theta})^{-1}\mathbf{Z}, \end{equation} where the covariance matrix $\mathbf{C}({\boldsymbol \theta})$ has entries $C(\mathbf{s}_i-\mathbf{s}_j;{\boldsymbol \theta})$, $i,j=1,\ldots,n$. The maximum likelihood estimator of ${\boldsymbol \theta}$ is the value $\widehat{\boldsymbol \theta}$ that maximizes (\ref{eq:likeli}).
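For small $n$, (\ref{eq:likeli}) can be evaluated directly with one Cholesky factorization. The following Python/NumPy sketch is our own illustration (it uses the exponential covariance, i.e., the Mat\'ern model with smoothness $1/2$, on synthetic 1-D locations) of this standard dense computation:

```python
import numpy as np

def exp_cov(s, sigma2, length):
    # exponential covariance (Matern with smoothness 1/2): C(h) = sigma2 * exp(-|h|/length)
    h = np.abs(s[:, None] - s[None, :])
    return sigma2 * np.exp(-h / length)

def log_likelihood(Z, C):
    # -(n/2) log(2 pi) - (1/2) log det C - (1/2) Z^T C^{-1} Z, via C = L L^T
    n = len(Z)
    L = np.linalg.cholesky(C)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    u = np.linalg.solve(L, Z)          # L u = Z, so Z^T C^{-1} Z = u^T u
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + u @ u)

s = np.array([0.0, 0.7, 1.5, 3.2])     # irregular 1-D locations
C = exp_cov(s, sigma2=1.0, length=2.0)
Z = np.array([0.3, -0.1, 0.7, 0.2])
ll = log_likelihood(Z, C)
```

The factorization gives both the log-determinant (from the diagonal of the Cholesky factor) and the quadratic form, but at a cubic cost in $n$, which is what makes large datasets challenging.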
When the sample size $n$ is large, the evaluation of (\ref{eq:likeli}) becomes challenging, due to the computation of the inverse and log-determinant of the $n$-by-$n$ covariance matrix $\mathbf{C}({\boldsymbol \theta})$. Indeed, this requires ${\cal O}(n^2)$ memory and ${\cal O}(n^3)$ computational steps. Hence, scalable methods that can process larger sample sizes are needed. Stationary covariance functions, discretized on a rectangular grid, have block Toeplitz structure. This structure can be further extended to a block circulant form and resolved with the Fast Fourier Transform (FFT) \cite{WHITTLE54, DAHLHAUS87, guinness2017circulant, stroud2017bayesian, Dietrich2}. The computing cost in this case is $\mathcal{O}(n\log n)$. However, this approach either does not work for data measured at irregularly spaced locations or requires expensive, non-trivial modifications. During the past two decades, a large amount of research has been devoted to developing scalable methods that tackle this computational challenge: for example, low-rank tensor methods \cite{litv17Tensor, nowak2013kriging}, covariance tapering \citep{Furrer2006, Kaufman2008, sang2012full}, likelihood approximations in both the spatial \citep{Stein:Chi:Wetly:2004, Stein2013} and spectral \citep{Fuentes:2007} domains, latent processes such as Gaussian predictive processes \citep{Banerjee:Gelfand:Finley:Sang:2008} and fixed-rank kriging \citep{Cressie:Johannesson:2008}, and Gaussian Markov random-field approximations \citep{rue2002fitting, rue2005gaussian, fuglstad2015does}; see \citet{Sun2012} for a review.
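The circulant trick mentioned above rests on one identity: the eigenvalues of a circulant matrix are the discrete Fourier transform of its first column, so the log-determinant is a sum of logarithms of DFT coefficients. A minimal sketch of that identity (pure Python with an $\mathcal{O}(n^2)$ DFT instead of an actual FFT, on an assumed ring of 8 points; real implementations embed the Toeplitz matrix into a larger circulant one):

```python
import cmath
import math

def circulant_logdet(c):
    # Eigenvalues of a circulant matrix are the DFT of its first column; for a
    # symmetric positive definite circulant they are real and positive, so
    # log det = sum_k log lambda_k. (O(n^2) DFT here; an FFT gives O(n log n).)
    n = len(c)
    logdet = 0.0
    for k in range(n):
        lam = sum(c[m] * cmath.exp(-2j * math.pi * m * k / n) for m in range(n))
        logdet += math.log(lam.real)
    return logdet

def dense_logdet(A):
    # Reference log-determinant via Gaussian elimination (no pivoting; fine
    # for the well-conditioned SPD example below).
    n = len(A)
    A = [row[:] for row in A]
    logdet = 0.0
    for i in range(n):
        logdet += math.log(A[i][i])
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for col_idx in range(i, n):
                A[r][col_idx] -= f * A[i][col_idx]
    return logdet

# Symmetric circulant from an exponential covariance on a ring of 8 points.
n = 8
col = [math.exp(-min(j, n - j) / 2.0) for j in range(n)]
C = [[col[(j - i) % n] for j in range(n)] for i in range(n)]
print(abs(circulant_logdet(col) - dense_logdet(C)) < 1e-9)
```

This is exactly the structure that irregularly spaced locations destroy, which motivates the grid-free $\mathcal{H}$-matrix approach taken in this paper.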
A generalization of the Vecchia approach and a general Vecchia framework were introduced in \cite{Katzfuss17, Vecchia88}. Each of these methods has its strengths and drawbacks. For instance, covariance tapering sometimes performs even worse than assuming independent blocks in the covariance \citep{stein2013statistical}; low-rank approximations have their own limitations \citep{stein2014limitations}; and Markov models depend on the observation locations, requiring irregular locations to be realigned on a much finer grid with missing values~\citep{sun2016statistically}. A matrix-free approach for solving the multi-parametric Gaussian maximum likelihood problem was developed in \cite{Anitescu2012}. To further improve on these issues, other recently developed methods include nearest-neighbor Gaussian process models \citep{Datta:Banerjee:Finley:Gelfand:2015}, low-rank updates \cite{SaibabaKitanidis15UQ}, multiresolution Gaussian process models \citep{nychka2015multiresolution}, equivalent kriging \citep{kleiber2015equivalent}, multi-level restricted Gaussian maximum likelihood estimators \citep{Castrillon16}, and hierarchical low-rank approximations \citep{Huang2016}. Bayesian approaches to identify unknown or uncertain parameters could also be applied \cite{rosic2013parameter, rosic2012sampling, matthies2012parametric, litvinenko2013inverse, pajonk2012deterministic}.
In this paper, we propose using so-called hierarchical ($\mathcal{H}$-) matrices for approximating dense matrices with numerical complexity and storage $\mathcal{O}(k^{\alpha}n \log^{\alpha} n)$, where $n$ is the number of measurements; $k\ll n$ is the rank of the hierarchical matrix, which defines the quality of the approximation; and $\alpha=1$ or 2. $\mathcal{H}$-matrices are suitable for general grids and also work well for matrices with large condition numbers. Previous results \cite{HackHMEng} show that the $\mathcal{H}$-matrix technique is very stable when approximating the matrix itself \cite{Li14, saibaba2012application, BallaniKressner, harbrecht2015efficient, ambikasaran2013large, BoermGarcke2007, SaibabaKitanidis12}, its inverse \cite{Ambikasaran16, ambikasaran2013large, bebendorf2003existence}, its Cholesky decomposition \cite{BebendorfSpEq16, bebendorf2007Why}, and the Schur complement (i.e., the conditional covariance matrix) \cite{HackHMEng, MYPHD, litvinenko2017partial}. The $\mathcal{H}$-matrix technique for inference, including parameter estimation and MLE in Gaussian process regression, has also been proposed in \cite{ambikasaran2013large, Ambikasaran16}, and for uncertainty quantification and conditional realizations in \cite{SaibabaKitanidis12}.
\begin{figure} \centering \includegraphics[width=3.5in]{scheme2.pdf} \caption{Scheme of approximations to the underlying dense covariance matrix (and corresponding methods).
Hierarchically low-rank matrices generalize purely structure-based or global schemes with analysis-based, locally adaptive schemes.} \label{fig:Hscheme} \end{figure}
Other motivating factors for applying the $\mathcal{H}$-matrix technique include the following:
\begin{enumerate}
\item $\mathcal{H}$-matrices are more general than other compressed matrix representations (see the scheme in Fig.~\ref{fig:Hscheme});
\item The $\mathcal{H}$-matrix technique allows us to compute not only matrix-vector products (as, e.g., fast multipole methods do), but also more general classes of functions, such as $\mathbf{C}({\boldsymbol \theta})^{-1}$, $\mathbf{C}({\boldsymbol \theta})^{1/2}$, $\det{\mathbf{C}({\boldsymbol \theta})}$, $\exp\{\mathbf{C}({\boldsymbol \theta})\}$, resolvents, the Cholesky decomposition, and many others \citep{HackHMEng};
\item The $\mathcal{H}$-matrix technique is well studied and has a solid theory, many examples, and multiple sequential and parallel implementations;
\item The $\mathcal{H}$-matrix accuracy is controllable by the rank, $k$, or by the accuracy, $\varepsilon$. The full rank gives an exact representation;
\item The $\mathcal{H}$-matrix technique preserves the structure of the matrix after the Cholesky decomposition and the inverse have been computed (see Fig.~\ref{fig:Hexample3});
\item There are efficient rank-truncation techniques to keep the rank small after matrix operations; for instance, the Schur complement and the matrix inverse can be approximated again in the $\mathcal{H}$-matrix format.
\end{enumerate}
Figure~\ref{fig:C} shows an $\mathcal{H}$-matrix approximation of the covariance matrix from a discretized ($n=16{,}641$) exponential covariance function on the unit square, its Cholesky approximation in Fig.~\ref{fig:L}, and its $\mathcal{H}$-matrix inverse in Fig.~\ref{fig:iC}.
The dark (or red) blocks indicate the dense matrices and the grey (green) blocks indicate the rank-$k$ matrices; the number inside each block is its rank. The steps inside the blocks show the decay of the singular values in $\lambdaog$ scale. The white blocks are empty. \mathitbf{b}egin{eqnarray}gin{figure}[htbp!] \centering \mathitbf{b}egin{eqnarray}gin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=5cm]{Cexp_129.png} \lambdaabel{fig:C} \end{subfigure} \mathitbf{b}egin{eqnarray}gin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=5cm]{Cexp_chol129.png} \lambdaabel{fig:L} \end{subfigure} \mathitbf{b}egin{eqnarray}gin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=5cm]{Cexp_inv129.png} \lambdaabel{fig:iC} \end{subfigure} \caption{(a) The $\mathcal{H}$-matrix approximation of an $n\times n$ covariance matrix from a discretized exponential covariance function on the unit square, with $n=16,641$, unit variance, and length scale of 0.1. The dimensions of the densest (dark) blocks are $32\times 32$ and the maximal rank is $k=13$. (b) An example of the corresponding Cholesky factor with maximal rank $k=14$. (c) The inverse of the exponential covariance matrix (precision matrix) with maximal rank $k=14$.} \lambdaabel{fig:Hexample3} \end{figure} In the last few years, there has been great interest in numerical methods for representing and approximating large covariance matrices in the applied mathematics community \cmmnt{\perp\!\!\!\perptep}\perp\!\!\!\perpte{Rasmussen05, BoermGarcke2007,saibaba2012application,nowak2013kriging,ambikasaran2013large,ambikasaran2014fast,si2014memoryef,BallaniKressner}. 
Recently, the maximum likelihood estimator for fitting a Mat\'ern covariance matrix to Gaussian observations was computed via a framework for unstructured observations in two spatial dimensions, which allowed the evaluation of the log-likelihood and its gradient with computational complexity $\mathcal{O}(n^{3/2})$; the method relied on the recursive skeletonization factorization procedure \cite{ho2015hierarchical, martinsson2005fast}. However, the consequences of the approximation on the maximum likelihood estimator were not studied. In \cite{BoermGarcke2007}, the authors computed the exact solution of a Gaussian process regression by replacing the kernel matrix with a data-sparse approximation, which they called the $\mathcal{H}^2$-matrix technique, cf.~\cite{Li14}. It is more complicated than the $\mathcal{H}$-matrix technique, but has computational complexity and storage cost of $\mathcal{O}(kn)$. \blue{The same $\mathcal{H}^2$-matrix technique for solving large-scale stochastic linear inverse problems, with applications in subsurface modeling, was demonstrated in \cite{ambikasaran2013large}. The authors exploited the sparsity of the underlying measurement operator, demonstrated the effectiveness of the method by solving a realistic crosswell tomography problem, quantified the uncertainty in the solution, and provided an optimal capture geometry for this problem. Their algorithm was implemented in C++ and is available online.} The authors of \cite{si2014memoryef} observed that the structure of shift-invariant kernels changed from low-rank to block-diagonal (without any low-rank structure) when they varied the scale parameter. Based on this observation, they proposed a new kernel approximation algorithm, which they called the Memory-Efficient Kernel Approximation. This approximation considers both the low-rank and the clustering structure of the kernel matrix.
They also showed that the resulting algorithm outperformed state-of-the-art low-rank kernel approximation methods in terms of speed, approximation error, and memory usage. The BigQUIC method for sparse inverse covariance estimation with a million variables was introduced in \cite{QUIC13}. This method can solve $\ell_1$-regularized Gaussian maximum likelihood estimation (MLE) problems with dimensions of one million. In \cite{BallaniKressner}, the authors estimated the covariance matrix of a set of normally distributed random vectors. To overcome numerical issues in the high-dimensional regime, they computed the (dense) inverses of sparse covariance matrices using $\mathcal{H}$-matrices. This explicit representation enabled them to ensure the positive definiteness of each Newton-like iterate in the resulting convex optimization problem. In conclusion, the authors compared their new $\mathcal{H}$-QUIC method with the existing BigQUIC method~\cite{QUIC13}. In \cite{harbrecht2015efficient}, the authors proposed using $\mathcal{H}$-matrices for the approximation of random fields. In particular, they approximated Mat\'ern covariance matrices in the $\mathcal{H}$-matrix format, suggested the pivoted $\mathcal{H}$-Cholesky method, and provided an a posteriori error estimate in the trace norm. In \cite{saibaba2012application, SaibabaKitanidis12}, the authors applied $\mathcal{H}$-matrices to linear inverse problems in large-scale geostatistics. \blue{They started with a detailed explanation of the $\mathcal{H}$-matrix technique, then reduced the cost of dense matrix-vector products, combined $\mathcal{H}$-matrices with a matrix-free Krylov subspace method, and solved the system of equations that arises from the geostatistical approach. They illustrated the performance of their algorithm on an application monitoring CO$_2$ concentrations using crosswell seismic tomography.
The largest problem size was $n=10^6$. The code is available online.} \blue{In \cite{ambikasaran2014fast}, the authors considered a new matrix factorization for Hierarchical Off-Diagonal Low-Rank (HODLR) matrices. A HODLR matrix is an $\mathcal{H}$-matrix with the weak admissibility condition (see Appendix~\ref{sec:adm}). All off-diagonal sub-blocks of a HODLR matrix are low-rank matrices, and this fact significantly simplifies many algorithms. The authors showed that typical covariance functions can be hierarchically factored into a product of block low-rank updates of the identity matrix, yielding an $\mathcal{O}(n \log^2 n)$ algorithm for inversion, and used this product for the evaluation of the determinant at cost $\mathcal{O}(n \log n)$, with further direct calculation of probabilities in high dimensions. Additionally, they used this HODLR factorization to speed up prediction and to infer unknown hyper-parameters. The provided formulas for the factorization and the determinant hold only for HODLR matrices. How to extend them to general $\mathcal{H}$-matrices, where off-diagonal blocks are allowed to be $\mathcal{H}$-matrices themselves, is not clear. For instance, conditional covariance matrices may easily lose the HODLR structure. The largest demonstrated problem size was $n=10^6$. The authors made their code freely available on GitHub.} \textcolor{black}{Summarizing the literature review, we conclude that $\mathcal{H}$-matrices are already known to the statistical community. It is well known that Mat\'ern covariance functions can be approximated in the $\mathcal{H}$-matrix format ($\mathcal{H}$, $\mathcal{H}^2$, HODLR). In \cite{BallaniKressner}, the authors used $\mathcal{H}$-matrices to compute derivatives of the likelihood function.
Approximation of the inverse and of the Cholesky factorization works in practice, but requires more theoretical analysis for error estimates (the existing theory is mostly developed for elliptic PDEs and integral equations). The influence of the $\mathcal{H}$-matrix approximation error on the quality of the estimated unknown parameters is not yet well studied. Studying this influence with general $\mathcal{H}$-matrices (and not only with the simple HODLR format) is the main contribution of this work.}
The rest of this paper is structured as follows. In Section~\ref{sec:Hcov}, we review the $\mathcal{H}$-matrix technique and the $\mathcal{H}$-matrix approximation of Mat\'ern covariance functions. Section~\ref{sec:HAGL} contains the hierarchical approximation of the Gaussian likelihood function, the algorithm for parameter estimation, and a study of the theoretical properties and complexity of the approximate log-likelihood \cite{Ipsen15, Ipsen05}. Results from Monte Carlo simulations, where the true parameters are known, are reported in Section~\ref{sec:MC}. An application of the hierarchical log-likelihood methodology to soil moisture data, where the parameters are unknown, is considered in Section~\ref{sec:moisture}. We end the paper with a discussion in Section~\ref{sec:Conclusion}. The derivations of the theoretical results are provided in Appendix~\ref{appendix:A}; more details about $\mathcal{H}$-matrices are given in Appendix~\ref{sec:adm}.
\section{Hierarchical Approximation of Covariance Matrices} \label{sec:Hcov}
Hierarchical matrices have been described in detail in \cite{HackHMEng, Part1, GH03, weak, MYPHD}. Applications of the $\mathcal{H}$-matrix technique to covariance matrices can be found in \cite{Ambikasaran16, SaibabaKitanidis12, saibaba2012application, BoermGarcke2007, harbrecht2015efficient, ambikasaran2013large, ambikasaran2014fast, khoromskij2009application}. Many implementations exist.
To the best of our knowledge, the HLIB library\footnote{http://www.hlib.org/} is no longer supported, but it can still serve academic purposes very well. The H2Lib library\footnote{https://github.com/H2Lib, developed by Steffen Boerm and his group, Kiel, Germany} operates with $\mathcal{H}^2$-matrices, is actively supported, and contains new features such as a parallel implementation on GPUs. The HLIBPro library\footnote{https://www.hlibpro.com/, developed by R. Kriemann, Leipzig, Germany} is an actively supported commercial library: robust, parallel, highly tuned, and well tested, but not open source. It contains about 10 open-source examples, which can be modified for the user's personal needs. There are some other implementations (e.g., from M. Bebendorf, or Matlab implementations), but we do not have experience with them. We recommend that new users start with the free open-source HLIB or H2Lib libraries and then move to HLIBPro, which is free for academic purposes.\\
\subsection{Hierarchical matrices} \label{sec:Happrox}
In this section, we review the definition of $\mathcal{H}$-matrices and show how to approximate covariance matrices in the $\mathcal{H}$-matrix format. The $\mathcal{H}$-matrix technique is based on a hierarchical partitioning of a given matrix into sub-blocks, followed by the approximation of the majority of these sub-blocks by low-rank matrices (see Fig.~\ref{fig:Hexample3}). To decide which sub-blocks can be approximated well by low-rank matrices and which cannot, a so-called admissibility condition is used (see more details in Appendix~\ref{sec:adm}). Different admissibility conditions are possible: weak, strong, domain-decomposition based, and others (Fig.~\ref{fig:Hexample_adm}). The admissibility condition may depend on parameters (e.g., the covariance length).
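For point clusters with known coordinates, the standard admissibility condition $\min\{\mathrm{diam}(\tau),\mathrm{diam}(\sigma)\}\leq \eta\,\mathrm{dist}(\tau,\sigma)$ can be checked directly. A minimal brute-force sketch (pure Python; the cluster data and $\eta$ are illustrative, and production libraries use bounding boxes rather than all pairwise distances):

```python
def diam(points):
    # Diameter of a cluster's axis-aligned bounding box.
    lo = [min(p[d] for p in points) for d in range(len(points[0]))]
    hi = [max(p[d] for p in points) for d in range(len(points[0]))]
    return sum((h - l) ** 2 for h, l in zip(hi, lo)) ** 0.5

def dist(pa, pb):
    # Minimal pairwise distance between two clusters (brute force, O(|pa||pb|)).
    return min(sum((a[d] - b[d]) ** 2 for d in range(len(a))) ** 0.5
               for a in pa for b in pb)

def admissible(pa, pb, eta=1.0):
    # Standard (strong) admissibility: min(diam(t), diam(s)) <= eta * dist(t, s).
    # Admissible blocks are well separated and can be replaced by rank-k factors;
    # inadmissible blocks are subdivided further or stored densely.
    return min(diam(pa), diam(pb)) <= eta * dist(pa, pb)

# Two clusters in the unit square: well-separated clusters are admissible
# (compressed); overlapping clusters are not (kept dense or subdivided).
near = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
far = [(0.8, 0.8), (0.9, 0.8), (0.8, 0.9)]
print(admissible(near, far))   # well separated -> admissible
print(admissible(near, near))  # dist = 0 -> never admissible
```

Blocks that fail this test are exactly the dense (dark) diagonal blocks of Fig.~\ref{fig:Hexample3}; blocks that pass it become the rank-$k$ (grey) blocks.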
To define an $\mathcal{H}$-matrix, some further notions are needed: an index set $I$, clusters (denoted by $\tau$ and $\sigma$), a cluster tree $T_{I}$, a block cluster tree $T_{I\times I}$, and an admissibility condition (see Appendix~\ref{sec:adm}). We start with the index set $I=\{0,\ldots,n-1\}$, which corresponds to the available measurements at $n$ locations. After the hierarchical decomposition of the index set $I$ into sub-index sets has been completed (in other words, after a cluster tree $T_I$ has been constructed), the block cluster tree (denoted by $T_{I\times I}$; see more details in Appendix~\ref{sec:adm}), together with the admissibility condition, decides which sub-blocks can be approximated by low-rank matrices. For definitions and examples of cluster trees and the corresponding block cluster trees (block partitionings), see Appendix~\ref{sec:adm} and Fig.~\ref{fig:ct_bct}. In the first step, the matrix is divided into four sub-blocks; the hierarchical tree $T_{I \times I}$ prescribes how to divide it. Then each (or some) sub-block is divided again and again until the sub-blocks are sufficiently small; the resulting division is hierarchical. The procedure stops when either one of the sub-block sizes is $n_{\min}$ or smaller ($n_{\min}\leq 128$), or when the sub-block can be approximated by a low-rank matrix. Another important question is how to compute these low-rank approximations. \textcolor{black}{The HLIB library uses a well-known method, the Adaptive Cross Approximation (ACA)} algorithm \cite{TyrtyshACA, ACA, Winter}, which performs the approximations with linear complexity $\mathcal{O}(kn)$, in contrast to the $\mathcal{O}(n^3)$ cost of the SVD.
\begin{rem} Errors in the $\mathcal{H}$-matrix approximation may destroy the symmetry of a symmetric positive definite covariance matrix, causing symmetric blocks to have different ranks. As a result, the standard algorithms used to compute the Cholesky decomposition may fail.
A remedy to this is to define $\mathbf{C}:=\frac{1}{2}(\mathbf{C}+\mathbf{C}^\top)$. \end{rem}
\begin{rem} \textcolor{black}{Errors in the $\mathcal{H}$-matrix approximation may also destroy the positive definiteness of the covariance matrix, especially for matrices with eigenvalues very close to zero. Remedies are: 1) to use a more robust algorithm, e.g., a block $\mathcal{H}$-Cholesky; 2) to use an $\mathbf{L}\mathbf{D}\mathbf{L}^\top$ factorization instead of $\mathbf{L}\mathbf{L}^\top$; or 3) to add a positive diagonal $\tau^2 \cdot\mathbf{I}$ to $\mathbf{C}$.} \end{rem}
\begin{rem} \textcolor{black}{We use both notations $\mathbf{C}\in \mathbb{R}^{n \times n}$ and $\mathbf{C}\in \mathbb{R}^{I \times I}$. In this work they mean the same, since $\vert I \vert = n$. The notation $\mathbf{C}\in \mathbb{R}^{I \times I}$ is useful when it is important to differentiate between $\mathbf{C}\in \mathbb{R}^{I \times I}$ and $\mathbf{C}\in \mathbb{R}^{J \times J}$, where $I$ and $J$ are two different index sets of the same size.} \end{rem}
To define the class of $\mathcal{H}$-matrices, we assume that the cluster tree $T_{I}$ and the block cluster tree $T_{I\times I}$ have already been constructed.
\begin{defi} \label{def:Hmatrix} Let $I$ be an index set and $T_{I\times I}$ a hierarchical division of the index set product $I\times I$ into sub-blocks. The set of $\mathcal{H}$-matrices with maximal sub-block rank $k$ is
\begin{equation*}
\mathcal{H}(T_{I\times I},k):=\{\mathbf{C} \in \mathbb{R}^{I \times I}\, \vert \, \mathrm{rank}(\mathbf{C} \vert_b) \leq k \, \text{ for all admissible blocks } b \text{ of } T_{I\times I}\},
\end{equation*}
where $k$ is the maximum rank. Here, $\mathbf{C} \vert_b = (c_{ij})_{(i,j)\in b}$ denotes the matrix block of $\mathbf{C} = (c_{ij})_{i,j\in I}$ corresponding to the sub-block $b \in T_{I\times I}$.
\end{defi}
Blocks that satisfy the admissibility condition can be approximated by low-rank matrices; see \cite{Part1}. An $\mathcal{H}$-matrix approximation of $\mathbf{C}$ is denoted by $\widetilde{\mathbf{C}}$. The storage requirement of $\widetilde{\mathbf{C}}$ and the cost of a matrix-vector multiplication are $\mathcal{O}(kn\log n)$, a matrix-matrix addition costs $\mathcal{O}(k^2n\log n)$, and the matrix-matrix product and the matrix inverse cost $\mathcal{O}(k^2n\log^2 n)$; see \cite{Part1}.
\begin{rem} \label{rem:adap_fixed} There are two different $\mathcal{H}$-matrix approximation strategies \cite{Winter, HackHMEng}. 1) \textit{The fixed-rank strategy} (Fig.~\ref{fig:Hexample3}), in which each sub-block has maximal rank $k$. This may or may not be sufficiently accurate, but it simplifies the theoretical estimates of computing time and storage. In this case we write $\widetilde{\mathbf{C}} \in \mathcal{H}(T_{I\times I};k)$. 2) \textit{The adaptive-rank strategy} (Fig.~\ref{fig:Hexample_adm}, left sub-block), in which the relative accuracy (in the spectral norm) of each sub-block $\mathbf{M}$ is $\varepsilon$ or better (smaller):
\begin{equation*}
k:=\min\{\tilde{k}\in \mathbb{N}_0 \,\vert \,\exists\, \tilde{\mathbf{M}}\in\mathcal{R}(\tilde{k},n,m):\Vert \mathbf{M}-\tilde{\mathbf{M}} \Vert \leq \varepsilon \Vert \mathbf{M}\Vert\},
\end{equation*}
where $\mathcal{R}(\tilde{k},n,m):=\{\mathbf{M}\in \mathbb{R}^{n\times m} \,\vert\, \mathrm{rank}(\mathbf{M})\leq \tilde{k}\}$ is the set of matrices of size $n\times m$ with rank at most $\tilde{k}$. The second strategy is better in the sense that it allows an adaptive rank in each sub-block, but it makes it difficult to estimate the total computing cost and storage.
In this case, we write $\widetilde{\mathbf{C}} \in \mathcal{H}(T_{I\times I};\varepsilon)$.\\ \end{rem}
The fixed-rank strategy is useful for a priori evaluations of the required computational resources and storage memory. The adaptive-rank strategy is preferable for practical approximations and is useful when the accuracy of each sub-block is crucial. \textcolor{black}{Similarly, we introduce $\widetilde{\mathcal{L}}({\boldsymbol \theta}; k)$ and $\widetilde{\mathcal{L}}({\boldsymbol \theta}; \varepsilon)$ as $\mathcal{H}$-matrix approximations of the log-likelihood $\mathcal{L}$ from (\ref{eq:likeli}).} Sometimes we omit $k$ (or $\varepsilon$) and write $\widetilde{\mathcal{L}}({\boldsymbol \theta})$. \textcolor{black}{Thus, we conclude that different $\mathcal{H}$-matrix formats exist. For example, weakly admissible matrices (HODLR) result in a simpler block structure (Fig.~\ref{fig:Hexample_adm}, right), but the $\mathcal{H}$-matrix ranks can be larger than those for matrices with the standard admissibility condition. In \cite{weak}, the authors observed a factor of $\approx 3$ between the ranks of weakly and standard admissible sub-blocks. In \cite{MYPHD}, the author observed that the HODLR matrix format may fail or result in very large ranks for 2D/3D computational domains or when computing the Schur complement. The $\mathcal{H}^2$-matrix format removes the $\log$ factor in the complexity and storage, but it is more difficult to understand and implement. We note that conversions from the $\mathcal{H}$- to the $\mathcal{H}^2$-matrix format are implemented (see the HLIB and H2Lib libraries).}
\subsection{Mat\'{e}rn covariance functions} \label{sec:Matern}
Among the many covariance models available, the Mat\'{e}rn family \citep{Matern1986a} has gained widespread interest in recent years.
The Mat\'{e}rn form of spatial correlations was introduced into statistics as a flexible parametric class, with one parameter determining the smoothness of the underlying spatial random field \cite{Handcock1993a}. The varied history of this family of models can be found in \cite{Guttorp2006a}. The Mat\'{e}rn covariance depends only on the distance $h:=\Vert \mathbf{s}-\mathbf{s}'\Vert$, where $\mathbf{s}$ and $\mathbf{s}'$ are any two spatial locations. The Mat\'{e}rn class of covariance functions is defined as
\begin{equation} \label{eq:MaternCov}
C(h;{\boldsymbol \theta})=\frac{\sigma^2}{2^{\nu-1}\Gamma(\nu)}\left(\frac{h}{\ell}\right)^\nu K_\nu\left(\frac{h}{\ell}\right),
\end{equation}
with ${\boldsymbol \theta}=(\sigma^2,\ell,\nu)^\top$, where $\sigma^2$ is the variance; $\nu>0$ controls the smoothness of the random field, with larger values of $\nu$ corresponding to smoother fields; and $\ell>0$ is a spatial range parameter that measures how quickly the correlation of the random field decays with distance, with larger $\ell$ corresponding to a slower decay (keeping $\nu$ fixed) \cite{harbrecht2015efficient}. Here $K_\nu$ denotes the modified Bessel function of the second kind of order $\nu$. When $\nu=1/2$, the Mat\'{e}rn covariance function reduces to the exponential covariance model and describes a rough field. The value $\nu=\infty$ corresponds to a Gaussian covariance model that describes a very smooth, infinitely differentiable field. Random fields with a Mat\'{e}rn covariance function are $\lceil \nu \rceil - 1$ times mean square differentiable.
\subsection{Mat\'ern covariance matrix approximation}
By definition, covariance matrices are symmetric and positive semi-definite.
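At half-integer values of $\nu$, the Bessel function in (\ref{eq:MaternCov}) reduces to elementary functions, which is convenient for checking a discretized covariance matrix entry by entry. A small pure-Python sketch in the parameterization of (\ref{eq:MaternCov}) (the $\nu=3/2$ closed form $\sigma^2(1+h/\ell)e^{-h/\ell}$ follows from $K_{3/2}(x)=\sqrt{\pi/(2x)}\,e^{-x}(1+1/x)$; parameter values are illustrative):

```python
import math

def matern_half(h, sigma2=1.0, ell=0.1):
    # nu = 1/2: C(h) = sigma^2 * exp(-h/ell)  (exponential model, rough field)
    return sigma2 * math.exp(-h / ell)

def matern_three_half(h, sigma2=1.0, ell=0.1):
    # nu = 3/2 in the parameterization of eq. (2):
    # C(h) = sigma^2 * (1 + h/ell) * exp(-h/ell)  (once differentiable field)
    x = h / ell
    return sigma2 * (1.0 + x) * math.exp(-x)

# Both reduce to the variance sigma^2 at lag h = 0, and the smoother nu = 3/2
# model keeps a higher correlation at short lags than the exponential model.
print(matern_half(0.0), matern_three_half(0.0))
print(matern_half(0.05), matern_three_half(0.05))
```

Assembling $C(\Vert\mathbf{s}_i-\mathbf{s}_j\Vert;{\boldsymbol\theta})$ over all location pairs with either function gives exactly the dense matrices whose $\mathcal{H}$-approximation is studied in the experiments below.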
The decay of the eigenvalues (and hence the quality of the $\mathcal{H}$-matrix approximation) depends on the type of the covariance matrix and on its smoothness, covariance length, computational geometry, and dimensionality. In this section, we perform numerical experiments with $\mathcal{H}$-matrix approximations. To underline the fact that $\mathcal{H}$-matrix approximations can be applied to irregular sets of locations, we use irregularly spaced locations in the unit square (only for Tables~\ref{table:eps_det} and \ref{table:approx_compare_rank}):
\begin{equation} \label{eq:mesh_pert}
\frac{1}{\sqrt{n}}\left( i-0.5+X_{ij}, \,\, j-0.5+Y_{ij}\right), \quad \text{for}\,\, i,j \in \{1,2,\ldots,\sqrt{n}\},
\end{equation}
where $X_{ij}$, $Y_{ij}$ are i.i.d.\ uniform on $(-0.4,0.4)$, for a total of $n$ observations; see \cite{sun2016statistically}. The observations are ordered lexicographically by the ordered pairs $(i,j)$. All of the numerical experiments herein are performed on a Dell workstation with 20 processors (40 cores) and 128 GB RAM in total. The parallel $\mathcal{H}$-matrix library HLIBPro \citep{HLIBPRO} was used to build the Mat\'{e}rn covariance matrix, compute the Cholesky factorization, solve linear systems, and calculate the determinant and the quadratic form. HLIBPro is fast and efficient; see its theoretical parallel scalability in Table \ref{tab:1}. Here $V(T)$ denotes the set of vertices and $L(T)$ the set of leaves of the block cluster tree $T=T_{I\times I}$, and $n_{\min}$ is the size of a block at which we stop further division into sub-blocks (see Section \ref{sec:Happrox}). Usually $n_{\min}\in\{32, 64, 128\}$, since a very deep hierarchy slows down computations.
\begin{table}[h!]
\begin{center} \caption{Sequential and parallel complexity of the main linear algebra operations on $p$ processors.}\label{tab:1}
\begin{tabular}{|l|l|l|} \hline
Operation & Sequential Complexity & Parallel Complexity \citep{HLIBPRO} (Shared Memory)\\ \hline\hline
build $\widetilde{\mathbf{C}}$ & $\mathcal{O}(n\log n)$ & $\frac{\mathcal{O}(n\log n)}{p}+\mathcal{O}(\vert V(T) \setminus{L}(T) \vert)$\\ \hline
store $\widetilde{\mathbf{C}}$ & $\mathcal{O}(kn\log n)$ & $\mathcal{O}(kn\log n)$\\ \hline
$\widetilde{\mathbf{C}}\cdot \mathbf{z}$ & $\mathcal{O}(kn\log n)$ & $\frac{\mathcal{O}(kn\log n)}{p} + \frac{n}{\sqrt{p}}$\\ \hline
$\widetilde{\mathbf{C}}^{-1}$ & $\mathcal{O}(k^2n\log^2 n)$ & $\frac{\mathcal{O}(n\log n)}{p}+\mathcal{O}(nn_{\min}^2)$, $1\leq n_{\min} \leq 128$\\ \hline
$\mathcal{H}$-Cholesky $\widetilde{\mathbf{L}}$ & $\mathcal{O}(k^2n\log^2 n)$ & $\frac{\mathcal{O}(n\log n)}{p}+\mathcal{O}(\frac{k^2n\log^2 n}{n^{1/d}})$\\ \hline
$\det{\widetilde{\mathbf{C}}}$ & $\mathcal{O}(k^2n\log^2 n)$ & $\frac{\mathcal{O}(n\log n)}{p}+\mathcal{O}(\frac{k^2n\log^2 n}{n^{1/d}})$, $d=1,2,3$\\ \hline
\end{tabular} \end{center} \end{table}
Table \ref{table:eps_det} shows the $\mathcal{H}$-matrix approximation errors for $\log \det{\widetilde{\mathbf{C}}}$, $\widetilde{\mathbf{C}}$, and the inverse $(\widetilde{\mathbf{L}}\widetilde{\mathbf{L}}^\top)^{-1}$, where $\widetilde{\mathbf{L}}$ is the $\mathcal{H}$-Cholesky factor. The errors are computed in the Frobenius and spectral norms, where $\mathbf{C}$ is the exact Mat\'{e}rn covariance matrix with $\nu=0.5$ and $\sigma^2=1$. The local accuracy in each sub-block is $\varepsilon$.
The number of locations is $n=16{,}641$. The last column shows the total compression ratio, c.r., which is equal to $1-$size($\widetilde{\mathbf{C}}$)/size($\mathbf{C}$). The exact values are $\log \det{\mathbf{C}}=2.63$ for $\ell=0.0334$ and $\log \det{\mathbf{C}}=6.36$ for $\ell=0.2337$. The uniformly distributed mesh points are taken in the unit square and perturbed as in (\ref{eq:mesh_pert}).
\begin{table}[h!]
\centering
\caption{The $\mathcal{H}$-matrix accuracy and compression rates (c.r.). Accuracy in each sub-block is $\varepsilon$; $n=16{,}641$; $\widetilde{\mathbf{C}}$ is a Mat\'{e}rn covariance with $\nu=0.5$ and $\sigma^2=1$. The spatial domain is the unit square with locations irregularly spaced as in (\ref{eq:mesh_pert}).}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
$\varepsilon$ & {\scriptsize $\big|\log \det{\mathbf{C}} -\log \det{\widetilde{\mathbf{C}}}\big|$} & $\big| \frac{\log \det{\mathbf{C}} -\log \det{\widetilde{\mathbf{C}}}}{\log \det{\widetilde{\mathbf{C}}}}\big|$ & {\scriptsize $\Vert \mathbf{C}-\widetilde{\mathbf{C}} \Vert_F$} & $\frac{\Vert \mathbf{C}-\widetilde{\mathbf{C}} \Vert_2}{\Vert \widetilde{\mathbf{C}}\Vert_2}$ & {\scriptsize $\Vert \mathbf{I}-(\widetilde{\mathbf{L}}\widetilde{\mathbf{L}}^\top)^{-1}{\mathbf{C}} \Vert_2$} & c.r.
($\%$) \\ \hline
$\ell=0.0334$ &&&&&&\\ \hline
$10^{-1}$ & $3.2\cdot 10^{-4}$ & $1.2\cdot 10^{-4}$ & $7.0\cdot 10^{-3}$ & $7.6\cdot 10^{-3}$ & 2.9 & 91.8\\
$10^{-2}$ & $1.6\cdot 10^{-6}$ & $6.0\cdot 10^{-7}$ & $1.0\cdot 10^{-3}$ & $6.7\cdot 10^{-4}$ & $9.9\cdot 10^{-2}$ & 91.6\\
$10^{-4}$ & $1.8\cdot 10^{-9}$ & $7.0\cdot 10^{-10}$ & $1.0\cdot 10^{-5}$ & $7.3\cdot 10^{-6}$ & $2.0\cdot 10^{-3}$ & 89.8\\
$10^{-8}$ & $4.7\cdot 10^{-13}$ & $1.8\cdot 10^{-13}$ & $1.3\cdot 10^{-9}$ & $6\cdot 10^{-10}$ & $2.1\cdot 10^{-7}$ & 87.3\\ \hline
$\ell=0.2337$ &&&&&&\\ \hline
$10^{-4}$ & $9.8\cdot 10^{-5}$ & $1.5\cdot 10^{-5}$ & $8.1\cdot 10^{-5}$ & $1.4\cdot 10^{-5}$ & $2.5\cdot 10^{-1}$ & 91.5\\
$10^{-8}$ & $1.5\cdot 10^{-9}$ & $2.3\cdot 10^{-10}$ & $1.1\cdot 10^{-8}$ & $1.5\cdot 10^{-9}$ & $4\cdot 10^{-5}$ & 88.7\\ \hline
\end{tabular}
\label{table:eps_det}
\end{table}
\subsection{Convergence of the $\mathcal{H}$-matrix error vs. the rank $k$}
In Table~\ref{table:approx_compare_rank} we show the dependence of the Kullback-Leibler divergence (KLD) and two matrix errors on the $\mathcal{H}$-matrix rank $k$ for the Mat\'{e}rn covariance function with parameters $\ell=\{0.25, 0.75\}$ and $\nu=1.5$, computed on the domain $\mathcal{G}=[0,1]^2$. \textcolor{black}{The KLD was computed as follows:
\begin{equation*}
D_{KL}(\mathbf{C},\widetilde{\mathbf{C}}) = \frac{1}{2}\left\{\mbox{trace}(\widetilde{\mathbf{C}}^{-1}\mathbf{C}) - n + \log\left(\frac{\det \widetilde{\mathbf{C}}}{\det \mathbf{C}}\right) \right\}.
\end{equation*}}
We can bound the relative error $\Vert \mathbf{C}^{-1} - \widetilde{\mathbf{C}}^{-1}\Vert/\Vert \mathbf{C}^{-1} \Vert$ for the approximation of the inverse as
\begin{equation*}
\frac{\Vert \mathbf{C}^{-1} - \widetilde{\mathbf{C}}^{-1}\Vert}{\Vert \mathbf{C}^{-1} \Vert}=\frac{\Vert (\mathbf{I}- \widetilde{\mathbf{C}}^{-1}\mathbf{C})\mathbf{C}^{-1}\Vert}{\Vert \mathbf{C}^{-1} \Vert} \leq \Vert \mathbf{I}- \widetilde{\mathbf{C}}^{-1}\mathbf{C}\Vert.
\end{equation*}
The spectral norm $\Vert \mathbf{I}- \widetilde{\mathbf{C}}^{-1}\mathbf{C}\Vert$ can be estimated by a few steps of the power iteration method (the HLIB library uses 10 steps). A rank $k\leq 20$ is not sufficient to approximate the inverse, and the resulting error $\Vert {\mathbf{C}} (\widetilde{\mathbf{C}})^{-1} -\mathbf{I} \Vert_2$ is large. One remedy could be to increase the rank, but this may not help for ill-conditioned matrices. The spectral norms of $\widetilde{\mathbf{C}}$ are $\Vert \widetilde{\mathbf{C}}_{(\ell=0.25)}\Vert_2=720$ and $\Vert \widetilde{\mathbf{C}}_{(\ell=0.75)}\Vert_2=1068$. \textcolor{black}{
\begin{rem}
Very often, the nugget $\tau^2\mathbf{I}$ is part of the model or is simply added to the main diagonal of $\mathbf{C}$ to stabilize numerical calculations, i.e., $\widetilde{\mathbf{C}}:=\widetilde{\mathbf{C}}+\tau^2 \mathbf{I}$ \cite{LitvGentonSunKeyes17}. By adding a nugget, we ``push'' all the singular values away from zero. Adding the nugget to the list of unknown parameters, ${\boldsymbol \theta}=(\ell,\nu, \sigma^2,\tau)^\top$, is straightforward.
See the modified procedure in the GitHub repository \cite{LitvGitHubHcov}.
\end{rem}}
\begin{table}[h!]
\centering
\caption{Convergence of the $\mathcal{H}$-matrix approximation error vs. the $\mathcal{H}$-matrix rank $k$ for a Mat\'{e}rn covariance function with parameters $\ell=\{0.25, 0.75\}$, $\nu=1.5$, domain $\mathcal{G}=[0,1]^2$, and $n=16{,}641$.}
\begin{tabular}{|c|cc|cc|cc|} \hline
$k$ & \multicolumn{2}{c|}{KLD} & \multicolumn{2}{c|}{$\Vert \mathbf{C} - \widetilde{\mathbf{C}} \Vert_2$} & \multicolumn{2}{c|}{$\Vert {\mathbf{C}} (\widetilde{\mathbf{C}})^{-1} -\mathbf{I} \Vert_2$} \\
& $\ell=0.25$ & $\ell=0.75$ & $\ell=0.25$ & $\ell=0.75$ & $\ell=0.25$ & $\ell=0.75$ \\ \hline
20 & 0.12 & 2.7 & 5.3e-7 & 2e-7 & 4.5 & 72\\
30 & 3.2e-5 & 0.4 & 1.3e-9 & 5e-10 & 4.8e-3 & 20\\
40 & 6.5e-8 & 1e-2 & 1.5e-11 & 8e-12 & 7.4e-6 & 0.5\\
50 & 8.3e-10 & 3e-3 & 2.0e-13 & 1.5e-13 & 1.5e-7 & 0.1\\ \hline
\end{tabular}
\label{table:approx_compare_rank}
\end{table}
\section{Hierarchical Approximation of the Gaussian Likelihood}
\label{sec:HAGL}
\subsection{Parameter estimation}
We use the $\mathcal{H}$-matrix technique to approximate the Gaussian likelihood function.
The $\mathcal{H}$-matrix approximation of the exact log-likelihood $\mathcal{L}({\boldsymbol \theta})$ defined in (\ref{eq:likeli}) is denoted by $\widetilde{\mathcal{L}}({\boldsymbol \theta};k)$:
\begin{equation}
\label{eq:likeH}
\widetilde{\mathcal{L}}({\boldsymbol \theta};k)=-\frac{n}{2}\log(2\pi) - \sum_{i=1}^n\log \{\widetilde{L}_{ii}({\boldsymbol \theta};k)\}-\frac{1}{2}\mathbf{v}({\boldsymbol \theta})^\top \mathbf{v}({\boldsymbol \theta}),
\end{equation}
where $\widetilde{\mathbf{L}}({\boldsymbol \theta};k)$ is the rank-$k$ $\mathcal{H}$-matrix approximation of the Cholesky factor $\mathbf{L}({\boldsymbol \theta})$ in $\mathbf{C}({\boldsymbol \theta})=\mathbf{L}({\boldsymbol \theta})\mathbf{L}({\boldsymbol \theta})^\top$, and $\mathbf{v}({\boldsymbol \theta})$ is the solution of the linear system $\widetilde{\mathbf{L}}({\boldsymbol \theta};k)\mathbf{v}({\boldsymbol \theta})=\mathbf{Z}$. Analogously, we can define $\widetilde{\mathcal{L}}({\boldsymbol \theta};\varepsilon)$. To maximize $\widetilde{\mathcal{L}}({\boldsymbol \theta};k)$ in (\ref{eq:likeH}), we use Brent's method \cite{Brent73, BrentMethod07}, also known as the Brent-Dekker method\footnote{Implemented in the GNU Scientific Library, \url{https://www.gnu.org/software/gsl/}.}. The Brent-Dekker algorithm first uses the fast-converging secant method or inverse quadratic interpolation to maximize $\widetilde{\mathcal{L}}({\boldsymbol \theta};\cdot)$. If these fail, it falls back to the more robust bisection method. We note that the maximization of the log-likelihood function is an ill-posed problem, since even very small perturbations in the covariance matrix $\mathbf{C}({\boldsymbol \theta})$ may result in huge perturbations in the log-determinant and the log-likelihood.
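To make (\ref{eq:likeH}) concrete, here is a small dense-matrix sketch in NumPy (our illustration, not the paper's HLIBPro implementation): an exact Cholesky factor stands in for the rank-$k$ $\mathcal{H}$-Cholesky factor, and the Mat\'ern case $\nu=0.5$ reduces to the exponential covariance.

```python
import numpy as np

def matern_nu_half(locs, ell, sigma2):
    """Matern covariance with nu = 0.5 (exponential kernel)."""
    d = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / ell)

def log_likelihood(C, z):
    """Gaussian log-likelihood via a Cholesky factor, as in Eq. (likeH):
    -n/2 log(2 pi) - sum_i log L_ii - (1/2) v^T v, where L v = z."""
    n = len(z)
    L = np.linalg.cholesky(C)
    v = np.linalg.solve(L, z)             # forward solve L v = z
    return (-0.5 * n * np.log(2.0 * np.pi)
            - np.sum(np.log(np.diag(L)))  # equals -(1/2) log det C
            - 0.5 * v @ v)                # equals -(1/2) z^T C^{-1} z
```

Maximizing this function over $(\ell,\nu,\sigma^2)$ with a derivative-free method such as Brent's mirrors the estimation loop described above, with the dense factorization replaced by the $\mathcal{H}$-Cholesky one.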
An alternative to $\mathbf{C}({\boldsymbol \theta})=\mathbf{L}({\boldsymbol \theta})\mathbf{L}({\boldsymbol \theta})^\top$ is the $\mathbf{L}({\boldsymbol \theta}) \mathbf{D}({\boldsymbol \theta}) \mathbf{L}^\top({\boldsymbol \theta})$ decomposition, which is more stable since it avoids extracting square roots of diagonal elements, i.e., $\mathbf{L} \mathbf{D} \mathbf{L}^\top=(\mathbf{L}\mathbf{D}^{1/2})(\mathbf{L}\mathbf{D}^{1/2})^\top$. Very small negative diagonal elements can appear due to, e.g., rounding errors.
\subsection{Computational complexity and accuracy}
We let $\mathbf{C}({\boldsymbol \theta})\in \mathbb{R}^{n \times n}$ be approximated by an $\mathcal{H}$-matrix $\widetilde{\mathbf{C}}({\boldsymbol \theta};k)$ with a maximal rank $k$. The $\mathcal{H}$-Cholesky decomposition of $\widetilde{\mathbf{C}}({\boldsymbol \theta};k)$ costs $\mathcal{O}(k^2 n\log^2 n)$. The solution of the linear system $\widetilde{\mathbf{L}}({\boldsymbol \theta};k)\mathbf{v}({\boldsymbol \theta})=\mathbf{Z}$ costs $\mathcal{O}(k^2 n\log^2 n)$. The log-determinant $\log \det{\widetilde{\mathbf{C}}({\boldsymbol \theta};k)}=2\sum_{i=1}^n \log \{\widetilde{L}_{ii}({\boldsymbol \theta};k)\}$ is then available for free. The cost of computing the log-likelihood function $\widetilde{\mathcal{L}}({\boldsymbol \theta};k)$ is $\mathcal{O}(k^2 n\log^2 n)$, and the cost of computing the MLE in $m$ iterations is $\mathcal{O}(m k^2 n\log^2 n)$. Once we observe a realization $\mathbf{z}$ of the random vector $\mathbf{Z}$, we can quantify the accuracy of the $\mathcal{H}$-matrix approximation of the log-likelihood function. Our main theoretical result is formulated below in Theorem~\ref{thm:MainLL}.
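Both numerical facts above, the stability identity $\mathbf{L}\mathbf{D}\mathbf{L}^\top=(\mathbf{L}\mathbf{D}^{1/2})(\mathbf{L}\mathbf{D}^{1/2})^\top$ and the log-determinant read off the diagonal of the Cholesky factor, can be checked on a small dense example (a sketch using SciPy's dense `ldl` as a stand-in for the hierarchical factorization; the matrix here is synthetic):

```python
import numpy as np
from scipy.linalg import ldl

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
C = A @ A.T + 6.0 * np.eye(6)     # a small, well-conditioned SPD matrix

# L D L^T factorization: no square roots of diagonal elements are taken.
L, D, _ = ldl(C, lower=True)      # for SPD C, D is diagonal and positive
Ls = L @ np.sqrt(D)               # L D^{1/2}
assert np.allclose(Ls @ Ls.T, C)  # (L D^{1/2})(L D^{1/2})^T = C

# The log-determinant is "free" once a Cholesky factor is available:
Lc = np.linalg.cholesky(C)
logdet = 2.0 * np.sum(np.log(np.diag(Lc)))
assert np.isclose(logdet, np.linalg.slogdet(C)[1])
```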
\begin{theorem}(Accuracy of the hierarchical log-likelihood)\\
\label{thm:MainLL}
Let $\widetilde{\mathbf{C}}({\boldsymbol \theta})$ be an $\mathcal{H}$-matrix approximation of the matrix $\mathbf{C}({\boldsymbol \theta})\in \mathbb{R}^{n\times n}$, and $\mathbf{Z}=\mathbf{z}$ a data vector. Let also the spectral radius $\rho(\widetilde{\mathbf{C}}({\boldsymbol \theta})^{-1}{\mathbf{C}}({\boldsymbol \theta})-\mathbf{I}) <\varepsilon<1$. Then the following statements hold:
\begin{eqnarray*}
\big| \log \det{\widetilde{\mathbf{C}}({\boldsymbol \theta};k)} - \log \det{\mathbf{C}({\boldsymbol \theta})} \big| &\leq& -n \log(1-\varepsilon)\approx n\varepsilon \quad \text{for small } \varepsilon,\\
\vert \widetilde{\mathcal{L}}({\boldsymbol \theta};k) - \mathcal{L}({\boldsymbol \theta}) \vert &\leq& \frac{1}{2} n \varepsilon + \frac{1}{2}\Vert \mathbf{z}\Vert^2_2\cdot \Vert \widetilde{\mathbf{C}}^{-1}\Vert_2 \cdot \varepsilon.
\end{eqnarray*}
\end{theorem}
\textbf{Proof:} See Appendix A.\\
The estimate in Remark~\ref{rem:NormZ} states how fast the probability decays as the norm of $\mathbf{Z}$ increases. For simplicity, from here on we assume $\Vert \mathbf{z} \Vert_2 \leq c_0$. In \cite{BallaniKressner}, the authors observed that the factor $n$ in the inequalities above is pessimistic and is hardly observed in numerical simulations.
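The first inequality of Theorem~\ref{thm:MainLL} can be verified numerically on a small synthetic example (our sketch; a diagonal perturbation plays the role of the $\mathcal{H}$-matrix approximation error):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
A = rng.standard_normal((n, n))
C = A @ A.T + n * np.eye(n)           # "exact" SPD covariance
Ct = C + 1e-3 * np.diag(np.diag(C))   # crude stand-in for the H-approximation

# epsilon = spectral radius of C~^{-1} C - I (the theorem's hypothesis)
E = np.linalg.solve(Ct, C) - np.eye(n)
eps = np.max(np.abs(np.linalg.eigvals(E)))
assert eps < 1.0

lhs = abs(np.linalg.slogdet(Ct)[1] - np.linalg.slogdet(C)[1])
rhs = -n * np.log(1.0 - eps)          # the theorem's log-determinant bound
assert lhs <= rhs
```

The bound follows because every eigenvalue of $\widetilde{\mathbf{C}}^{-1}\mathbf{C}$ lies in $[1-\varepsilon, 1+\varepsilon]$, so each term $\log \lambda_i$ is at most $-\log(1-\varepsilon)$ in absolute value.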
Although Theorem~\ref{thm:MainLL} is stated for fixed-rank arithmetic, it also holds for adaptive-rank arithmetic with $\widetilde{\mathbf{C}}({\boldsymbol \theta};\varepsilon)$ and $\widetilde{\mathcal{L}}({\boldsymbol \theta};\varepsilon)$.
\section{Monte Carlo Simulations}
\label{sec:MC}
We performed numerical experiments with simulated data to recover the true values of the parameters of the Mat\'ern covariance matrix, known to be ${\boldsymbol \theta}^*:=(\ell^*, \nu^*,\sigma^*)=(0.7, 0.9, 1.0)$. In the first step, we constructed 50 independent data sets (replicates) of size $n\in \{128,64,\ldots,4,2\}\times 1{,}000$ by multiplying the Cholesky factor $\widetilde{\mathbf{L}}({\boldsymbol \theta};10^{-10})$, where $\widetilde{\mathbf{C}}({\boldsymbol \theta}^*)=\widetilde{\mathbf{L}}({\boldsymbol \theta}^*)\widetilde{\mathbf{L}}({\boldsymbol \theta}^*)^\top$, by a Gaussian random vector $\mathbf{W} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. We took the locations (not the data) from the daily moisture example, which is described below in Sect.~\ref{sec:moisture}. After that, we ran the optimization algorithm and tried to identify (recover) the ``unknown'' parameters $(\ell^*, \nu^*,\sigma^*)$. The boxplots for each parameter over 50 replicates are shown in Figure~\ref{fig:synthetic_boxes}.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.48\textwidth} \centering \caption{} \includegraphics[width=8cm]{7Mai2018_box_plots_ell.pdf} \label{fig:ell_synt} \end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth} \centering \caption{} \includegraphics[width=8cm]{7Mai2018_box_plots_nu.pdf} \label{fig:nu_synt} \end{subfigure}\\
\begin{subfigure}[b]{0.48\textwidth} \centering \caption{} \includegraphics[width=8cm]{7Mai2018_box_plots_sigma2.pdf} \label{fig:sigma_synt} \end{subfigure}
\begin{subfigure}[b]{0.48\textwidth} \centering \caption{} \includegraphics[width=8cm]{7Mai2018_box_plots_coeff.pdf} \label{fig:coeff_synt} \end{subfigure}
\caption{Boxplots for (a) $\ell$; (b) $\nu$; (c) $\sigma^2$; and (d) $\sigma^2/\ell^{2\nu}$ for $n \in \{128, 64, 32, 16, 8, 4, 2\}\times 1{,}000$. Simulated data with known parameters $(\ell^*, \nu^*,\sigma^*)=(0.7, 0.9, 1.0)$. Boxplots are obtained from 50 replicates; the $\mathcal{H}$-matrix sub-block accuracy is $\varepsilon=10^{-7}$. The green dotted horizontal line represents the true value of the parameters.}
\label{fig:synthetic_boxes}
\end{figure}
\textcolor{black}{This simulation study (Fig.~\ref{fig:synthetic_boxes}) shows boxplots vs. $n$. We are able to estimate the unknown parameters of a Mat\'ern covariance function on an irregular grid with a certain accuracy. The parameter $\nu$ (Fig.~\ref{fig:nu_synt}) was identified more accurately than the parameters $\ell$ (Fig.~\ref{fig:ell_synt}) and $\sigma^2$ (Fig.~\ref{fig:sigma_synt}). It is difficult to say why we do not see a clear pattern for $\ell$ and $\sigma^2$; we believe there are a few reasons. First, the iterative optimization method had difficulty converging for large $n$ (we needed to significantly decrease the threshold and increase the maximal number of iterations), which is very expensive in terms of computing time.
Second, the log-likelihood is often very flat, and there are multiple solutions that fulfill the threshold requirements. We think that the reason is not the $\mathcal{H}$-matrix accuracy, since we already used a very fine accuracy $\varepsilon=10^{-10}$. Third, we fixed the locations for a given size $n$. Finally, under infill asymptotics, the parameters $\ell$, $\nu$, $\sigma^2$ cannot be estimated consistently, but the quantity $\sigma^2/\ell^{2\nu}$ can, as seen in Fig.~\ref{fig:coeff_synt}.}
\textcolor{black}{In Fig.~\ref{fig:50eps_fbplots} we present functional boxplots \cite{SunGenton11} of the estimated parameters as a function of the accuracy $\varepsilon$, based on 50 replicates with $n=\{4000, 8000, 32000\}$ observations. The true values to be identified are ${\boldsymbol \theta}^*=({\ell}, {\nu}, {\sigma^2},{\sigma^2/\ell^{2\nu}})=(0.7, 0.9, 1.0, 1.9)$, denoted by the dotted green line. The functional boxplots in (a), (b), (c) identify ${\ell}$; those in (d), (e), (f) identify ${\nu}$; those in (g), (h), (i) identify ${\sigma}^2$; and those in (j), (k), (l) identify ${\sigma^2/\ell^{2\nu}}$. One can see that we are able to identify all parameters with good accuracy: the median, denoted by the solid black curve, is very close to the dotted green line. The parameter $\nu$ and the auxiliary coefficient ${\sigma^2/\ell^{2\nu}}$ are identified better than $\ell$ and $\sigma^2$, similarly to Fig.~\ref{fig:synthetic_boxes}. The size of the middle box in each boxplot indicates the variability of the estimates; it decreases as $n$ increases from $4{,}000$ to $32{,}000$.}
\textcolor{black}{The functional boxplots in Fig.~\ref{fig:50eps_fbplots} demonstrate that the estimates are fairly insensitive to the choice of the accuracy $\varepsilon$. Plots of the estimated parameter curves for 30 replicates with $n=64{,}000$ are presented in Fig.~\ref{fig:50eps} in Appendix~\ref{app:B}.
All these plots suggest when to stop decreasing $\varepsilon$ further. The MLE estimates are already good enough for relatively large $\varepsilon$. In other words, the $\mathcal{H}$-matrix approximations are accurate enough, and to improve the estimates of the parameters, one should instead improve the optimization procedure (the initial guess, the threshold, the maximal number of iterations).}
\begin{figure}[htbp!]
\centering
\begin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=0.99\textwidth]{B_fbplot_ell_2Sept2018_4.pdf} \label{fig:ell4} \end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=0.99\textwidth]{B_fbplot_ell_2Sept2018_8.pdf} \label{fig:ell8} \end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=0.99\textwidth]{C_fbplot_ell_2Sept2018_32.pdf} \label{fig:ell64} \end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=0.99\textwidth]{B_fbplot_nu_2Sept2018_4.pdf} \label{fig:nu4} \end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=0.99\textwidth]{B_fbplot_nu_2Sept2018_8.pdf} \label{fig:nu8} \end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=0.99\textwidth]{B_fbplot_nu_2Sept2018_32.pdf} \label{fig:nu64} \end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=0.99\textwidth]{B_fbplot_sigma_2Sept2018_4.pdf} \label{fig:sigma4} \end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=0.99\textwidth]{B_fbplot_sigma_2Sept2018_8.pdf}
\label{fig:sigma8} \end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=0.99\textwidth]{B_fbplot_sigma_2Sept2018_32.pdf} \label{fig:sigma64} \end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=0.99\textwidth]{C_fbplot_coeff_2Sept2018_4.pdf} \label{fig:coeff4} \end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=0.99\textwidth]{C_fbplot_coeff_2Sept2018_8.pdf} \label{fig:coeff8} \end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=0.99\textwidth]{C_fbplot_coeff_2Sept2018_32.pdf} \label{fig:coeff32} \end{subfigure}
\caption{Functional boxplots of the estimated parameters as a function of the accuracy $\varepsilon$, based on 50 replicates with $n=\{4000, 8000, 32000\}$ observations (left to right columns). The true parameters ${\boldsymbol \theta}^*=({\ell}, {\nu}, {\sigma^2},{\sigma^2/\ell^{2\nu}})=(0.7, 0.9, 1.0, 1.9)$ are represented by the green dotted lines. The replicates in (a), (b), (c) identify ${\ell}$; in (d), (e), (f) identify ${\nu}$; in (g), (h), (i) identify ${\sigma}^2$; and in (j), (k), (l) identify ${\sigma^2/\ell^{2\nu}}$.}
\label{fig:50eps_fbplots}
\end{figure}
\section{Application to Soil Moisture Data}
\label{sec:moisture}
First, we introduce the daily soil moisture data set, which comes from a numerical model. Then we demonstrate how to apply $\mathcal{H}$-matrices to fit a Mat\'ern covariance model; we use just one replicate sampled at $n$ locations. We then perform numerical tests to see which $n$ is sufficient and explain how to choose the appropriate $\mathcal{H}$-matrix accuracy. We provide the corresponding computational times and storage costs.
\subsection{Problem and data description} In climate and weather studies, numerical models play an important role in improving our knowledge of the characteristics of the climate system and the causes behind climate variations. These numerical models describe the evolution of many variables, such as temperature, wind speed, precipitation, humidity, and pressure, by solving a set of equations. The process is usually very complicated, involving physical parameterization, initial condition configurations, numerical integration, and data output schemes. In this section, we use the proposed $\mathcal{H}$-matrix methodology to investigate the spatial variability of soil moisture data, as generated by numerical models. Soil moisture is a key factor in evaluating the state of the hydrological process and has a broad range of applications in weather forecasting, crop yield prediction, and early warnings of flood and drought. It has been shown that a better characterization of soil moisture can significantly improve weather forecasting. However, numerical models often generate very large datasets due to the high spatial resolutions of the measured data, which makes the computation of the widely used Gaussian process models infeasible. Consequently, the whole region of interest must be divided into blocks of smaller size in order to fit the Gaussian process models independently to each block; alternatively, the size of the dataset must be reduced by averaging to a lower spatial resolution. Compared to fitting a consistent Gaussian process model to the entire region, it is unclear how much statistical efficiency is lost by such an approximation. Since our proposed $\mathcal{H}$-matrix technique can handle large covariance matrix computations, and the parallel implementation of the algorithm significantly reduces the computational time, we are able to fit Gaussian process models to a set of large subsamples in a given spatial region. 
Therefore, we next explore the effect of sample size on the statistical efficiency. We consider high-resolution soil moisture data from January 1, 2014, measured in the topsoil layer of the Mississippi River basin, U.S.A. The spatial resolution is 0.0083 degrees, and a one-degree difference in this region corresponds to a distance of approximately 87.5 km. The grid consists of $1830 \times 1329 = 2{,}432{,}070$ locations with $2{,}000{,}000$ observations and $432{,}070$ missing values. Therefore, the available spatial data are not on a regular grid. We use the same model for the mean process as in Huang and Sun (2018) and fit a zero-mean Gaussian process model with a Mat\'ern covariance function to the residuals; see Huang and Sun (2018) for more details on the data description and exploratory data analysis.
\subsection{Estimation, computing times and required storage costs}
\label{sec:TimeMoist}
In the next experiment, we estimated the unknown parameters ${\ell}$, ${\nu}$, and ${\sigma^2}$ for different sample sizes $n$; each time, the $n$ samples were chosen randomly from the whole set. Table~\ref{table:approx_compare15} shows the computational time and storage for the $\mathcal{H}$-matrix approximations for $n=\{2000,\ldots, 2000000\}$. All computations are done with the parallel $\mathcal{H}$-matrix toolbox HLIBPro; the number of computing cores is 40, and the RAM is 128 GB. It is important to note that the computing time (columns 2 and 5) and the storage cost (columns 3 and 6) grow nearly linearly with $n$. These numerical computations illustrate the theoretical formulas from Table~\ref{tab:1}. Additionally, we provide the accuracy of the $\mathcal{H}$-Cholesky inverse and the estimated parameters. The choice of the starting value is important.
First, we ran the optimization procedure with $n=2{,}000$ randomly sampled locations, the starting value $(\ell_0, \nu_0,\sigma_0^2)=(3, 0.5, 1)$, $\varepsilon=10^{-7}$, a residual threshold of $10^{-7}$, and a maximal number of iterations of $1{,}000$. After $78$ iterations the threshold was reached, with the solution $(\hat{\ell}, \hat{\nu}, \hat{\sigma}^2)=(2.71, 0.257, 1.07)$. This is the starting value for the experiment with $n=4{,}000$. The estimated parameters, calculated with $n=4{,}000$ and 80 iterations, are $(\hat{\ell}, \hat{\nu}, \hat{\sigma}^2)=(3.03, 0.228, 1.02)$. Second, we took these values as the new starting values for another randomly chosen $n=8{,}000$ locations. After 60 iterations, the residual threshold was again reached; the new estimated values are $(\hat{\ell}, \hat{\nu}, \hat{\sigma}^2)=(3.62, 0.223, 1.03)$. We repeated this procedure for each new $n$ up to $2{,}000{,}000$. At the end, for $n=2{,}000{,}000$, we obtained $(\hat{\ell}, \hat{\nu}, \hat{\sigma}^2)=(1.38, 0.34, 1.1)$.
\begin{table}[htbp!]
\centering
\caption{Estimation of the unknown parameters for various $n$ (see columns 1, 8, 9, 10). Computing times and storage costs are given for one iteration. The $\mathcal{H}$-matrix accuracy is $\varepsilon=10^{-7}$.}
\begin{small}
\begin{tabular}{|c|ccc|ccc|ccc|} \hline
$n$ & \multicolumn{3}{c|}{$\widetilde{\mathbf{C}}$} & \multicolumn{3}{c|}{$\widetilde{\mathbf{L}}\widetilde{\mathbf{L}}^\top$} & \multicolumn{3}{c|}{\textbf{Parameters}} \\
& time & size & kB/$n$ & time & size & $\Vert \mathbf{I}-(\widetilde{\mathbf{L}}\widetilde{\mathbf{L}}^\top)^{-1}{\mathbf{C}} \Vert_2$ & $\hat{\ell}$ & $\hat{\nu}$ & $\hat{\sigma}^2$ \\
& sec. & MB & & sec.
& MB & & & & \\ \hline
2{,}000 & 0.04 & 5.2 & 2.7 & 0.07 & 5.2 & $6.7\cdot 10^{-4}$ & 2.71 & 0.257 & 1.07 \\
4{,}000 & 0.07 & 12.7 & 3.2 & 0.17 & 13 & $1.5\cdot 10^{-4}$ & 3.03 & 0.228 & 1.02 \\
8{,}000 & 0.12 & 29.8 & 3.8 & 0.36 & 32 & $3.1\cdot 10^{-4}$ & 3.62 & 0.223 & 1.03 \\
16{,}000 & 0.52 & 72.0 & 4.6 & 1.11 & 75 & $2.6\cdot 10^{-4}$ & 1.73 & 0.252 & 0.95 \\
32{,}000 & 0.93 & 193 & 6.2 & 3.04 & 210 & $7.4\cdot 10^{-5}$ & 1.93 & 0.257 & 1.03 \\
64{,}000 & 3.3 & 447 & 7.1 & 11.0 & 486 & $1.0\cdot 10^{-4}$ & 1.48 & 0.272 & 1.01\\
128{,}000 & 7.7 & 1160 & 9.5 & 36.7 & 1310 & $3.8\cdot 10^{-5}$ & 0.84 & 0.302 & 0.94\\
256{,}000 & 13 & 2550 & 10.5 & 64.0 & 2960 & $7.1\cdot 10^{-5}$ & 1.41 & 0.327 & 1.24\\
512{,}000 & 23 & 4740 & 9.7 & 128 & 5800 & $7.1\cdot 10^{-4}$ & 1.41 & 0.331 & 1.25\\
1{,}000{,}000 & 53 & 11260 & 11 & 361 & 13910 & $3.0\cdot 10^{-4}$ & 1.29 & 0.308 & 1.00\\
2{,}000{,}000 & 124 & 23650 & 12.4 & 1001 & 29610 & $5.2\cdot 10^{-4}$ & 1.38 & 0.340 & 1.10\\ \hline
\end{tabular}
\end{small}
\label{table:approx_compare15}
\end{table}
In Table~\ref{table:costs_vs_eps} we list the computing time and the storage cost (total and divided by $n$) vs. the $\mathcal{H}$-matrix accuracy $\varepsilon$ in each sub-block. This table connects the $\mathcal{H}$-matrix sub-block accuracy, the required computing resources, and the inversion error $\Vert \mathbf{I}-(\widetilde{\mathbf{L}}\widetilde{\mathbf{L}}^\top)^{-1}{\mathbf{C}} \Vert_2$. It provides some intuition about how much computational resource is required for each $\varepsilon$ at fixed $n$. We started with $\varepsilon=10^{-7}$ (the first row) and then multiplied it each time by a factor of $4$.
The last row corresponds to $\varepsilon=6.4\cdot 10^{-3}$, which is sufficient to compute $\widetilde{\mathbf{C}}$ but not sufficient to compute the $\mathcal{H}$-Cholesky factor $\widetilde{\mathbf{L}}$ (the $\mathcal{H}$-Cholesky procedure crashes). We see that the storage and time increase only moderately with decreasing $\varepsilon$.
\begin{table}[h!]
\centering
\caption{Computing time and storage cost vs. accuracy $\varepsilon$ for the parallel $\mathcal{H}$-matrix approximation; the number of cores is 40, $\hat{\nu}=0.302$, $\hat{\ell}=0.84$, $\hat{\sigma}^2=0.94$, $n=128{,}000$. The $\mathcal{H}$-matrix accuracy in each sub-block for both $\widetilde{\mathbf{C}}$ and $\widetilde{\mathbf{L}}$ is $\varepsilon$.}
\begin{small}
\begin{tabular}{|c|ccc|ccc|} \hline
$\varepsilon$ & \multicolumn{3}{c|}{$\widetilde{\mathbf{C}}$} & \multicolumn{3}{c|}{$\widetilde{\mathbf{L}}\widetilde{\mathbf{L}}^\top$} \\
& time & size & kB/$n$ & time & size & $\Vert \mathbf{I}-(\widetilde{\mathbf{L}}\widetilde{\mathbf{L}}^\top)^{-1}\mathbf{C} \Vert_2$ \\
& sec. & MB & & sec.
& kB/dof & \\ \hline
$1.0\cdot 10^{-7}$ & 5.8 & 1110 & 9.16 & 21.5 & 10.0 & $8.4\cdot 10^{-5}$ \\
$4.0\cdot 10^{-7}$ & 5.4 & 1021 & 8.16 & 16.2 & 9.0 & $3.5\cdot 10^{-4}$ \\
$1.6\cdot 10^{-6}$ & 4.5 & 913 & 7.3 & 13.3 & 8.1 & $1.3\cdot 10^{-3}$ \\
$6.4\cdot 10^{-6}$ & 4.0 & 800 & 6.4 & 10.7 & 7.3 & $4.4\cdot 10^{-3}$ \\
$2.6\cdot 10^{-5}$ & 3.5 & 698 & 5.6 & 8.7 & 6.3 & $2.0\cdot 10^{-2}$ \\
$1.0\cdot 10^{-4}$ & 3.1 & 625 & 5.0 & 7.0 & 5.5 & $6.4\cdot 10^{-2}$ \\
$4.1\cdot 10^{-4}$ & 2.7 & 550 & 4.4 & 6.2 & 5.1 & $2.8\cdot 10^{-1}$ \\
$1.6\cdot 10^{-3}$ & 2.4 & 467 & 3.7 & 5.0 & 4.5 & $2.5$ \\
$6.4\cdot 10^{-3}$ & 2.3 & 403 & 3.2 & - & - & - \\ \hline
\end{tabular}
\end{small}
\label{table:costs_vs_eps}
\end{table}
In Table~\ref{table:costs_2MI} we use the whole daily moisture data set with $2\cdot 10^{6}$ locations to estimate the unknown parameters. We provide the computing times to set up the matrix $\widetilde{\mathbf{C}}$ and to compute its $\mathcal{H}$-Cholesky factorization $\widetilde{\mathbf{L}}$. Additionally, we list the total storage requirement and the storage divided by $n$ for both $\widetilde{\mathbf{C}}$ and $\widetilde{\mathbf{L}}$. The initial point for the optimization algorithm is $(\ell_0, \nu_0, \sigma_0^2)=(1.18, 0.42, 1.3)$. \textcolor{black}{We note that the choice of the initial point is very important and may have a strong influence on the final solution, since the log-likelihood can be very flat around the optimum. We started our computations with a rough $\mathcal{H}$-matrix accuracy $\varepsilon=10^{-4}$, for which the $\mathcal{H}$-Cholesky procedure crashes. We then tried $\varepsilon=10^{-5}$ and obtained $(\hat{\ell}, \hat{\nu}, \hat{\sigma})=(1.39, 0.325, 1.01)$. We took these as the initial condition for $\varepsilon=10^{-6}$ and obtained very similar values, and so on.
To find out which $\mathcal{H}$-matrix accuracy is sufficient, we provide tests for different values of $\varepsilon \in \{10^{-4}, 10^{-5}, 10^{-6}, 10^{-7}, 10^{-8}\}$. One can see that $\varepsilon=10^{-4}$ is not sufficient (the $\mathcal{H}$-Cholesky procedure crashes), whereas $\varepsilon=10^{-5}$ provides results similar to those for $\varepsilon \in \{10^{-6}, 10^{-7}, 10^{-8}\}$. Therefore, our recommendation is to start with a rather low $\mathcal{H}$-matrix accuracy, use the obtained $\hat{{\boldsymbol \theta}}$ as the new initial guess, then decrease $\varepsilon$, and repeat this a few times.}
\begin{table}[h!]
\centering
\caption{Estimating the unknown parameters for various $\mathcal{H}$-matrix accuracies $\varepsilon$; the number of cores is 40, $n=2{,}000{,}000$. The $\mathcal{H}$-matrix accuracy in each sub-block for both $\widetilde{\mathbf{C}}$ and $\widetilde{\mathbf{L}}$ is $\varepsilon$; the max.
number of iterations is 60, and the threshold in the iterative solver is $10^{-4}$.} \begin{small} \begin{tabular}{|c|ccc|ccc|cccc|} \hline $\varepsilon$ & \multicolumn{3}{c|}{Estimated parameters} & \multicolumn{3}{c|}{Costs, $\widetilde{\mathbf{C}}$}& \multicolumn{4}{c|}{Costs, $\widetilde{\mathbf{L}}$} \\ \hline & $\hat{\ell}$ & $\hat{\nu}$ & $\hat{\sigma}^2$ & time, & size, & size/$n$ & time, & size, & size/$n$, & $\Vert \mathbf{I}-(\widetilde{\mathbf{L}}\widetilde{\mathbf{L}}^\top)^{-1}\mathbf{C} \Vert_2$ \\ & & & & set up & GB & kB & sec. & GB & kB & \\ \hline $10^{-4}$ & - & - & - & 58 & 11 & 6 & - & - & - & -\\ $10^{-5}$ & 1.39 & 0.325 & 1.01 & 61 & 12 & 6 & 241 & 15.4 & 8.11 & $8.1\cdot 10^{-2}$\\ $10^{-6}$ & 1.41 & 0.323 & 1.04 & 66 & 15 & 7.7 & 379 & 20.1 & 10.3 & $5.7\cdot 10^{-2}$\\ $10^{-7}$ & 1.39 & 0.330 & 1.05 & 114 & 22 & 11 & 601 & 27.4 & 14.0 & $2.7\cdot 10^{-3}$\\ $10^{-8}$ & 1.38 & 0.329 & 1.06 & 138 & 29 & 15 & 1057 & 34.4 & 18.0 & $1.5\cdot 10^{-4}$\\ \hline \end{tabular} \end{small} \label{table:costs_2MI} \end{table} \subsection{Reproducibility of the numerical results} To reproduce the presented numerical simulations, the first step is to download the HLIB (www.hlib.org) or HLIBPro (www.hlibpro.com) library. Both are free for academic purposes. HLIB is a sequential, open-source C library, which is relatively easy to use and understand. HLIBPro is a highly tuned parallel library, of which only the header and object files are accessible. After signing the $\mathcal{H}$-matrix (HLIB or HLIBPro) license and downloading and installing the library, our modules for approximating covariance matrices and computing log-likelihoods, together with the soil moisture data, can be downloaded from \mbox{https://github.com/litvinen/HLIBCov.git}.
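The central quantity that these libraries accelerate is the Gaussian log-likelihood built from a Mat\'ern covariance matrix. For orientation, here is a minimal dense-matrix sketch (our own illustrative NumPy/SciPy code, not part of HLIB or HLIBPro, and feasible only for small $n$, since the dense Cholesky factorization is exactly the $\mathcal{O}(n^3)$ bottleneck that the $\mathcal{H}$-Cholesky factorization replaces):

```python
import numpy as np
from scipy.special import kv, gamma

def matern(h, ell, nu, sigma2):
    """Matern covariance C(h) with length scale ell, smoothness nu, variance sigma2."""
    h = np.asarray(h, dtype=float)
    scaled = np.sqrt(2.0 * nu) * h / ell
    out = np.full(h.shape, sigma2)          # C(0) = sigma^2
    m = scaled > 0
    out[m] = sigma2 * 2.0**(1.0 - nu) / gamma(nu) * scaled[m]**nu * kv(nu, scaled[m])
    return out

def neg2_loglik(locs, z, ell, nu, sigma2):
    """-2 log-likelihood (constants dropped): log det C + z^T C^{-1} z, dense version."""
    d = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=-1)
    C = matern(d, ell, nu, sigma2) + 1e-10 * np.eye(len(z))  # tiny jitter for stability
    L = np.linalg.cholesky(C)               # the O(n^3) step replaced by H-Cholesky
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    w = np.linalg.solve(L, z)               # z^T C^{-1} z = ||L^{-1} z||^2
    return logdet + w @ w
```

For $\nu=1/2$ the Mat\'ern function reduces to the exponential covariance $\sigma^2 e^{-h/\ell}$, which gives a convenient sanity check.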
HLIB requires not only the set of locations, but also a corresponding triangulation of the computational domain \cite{MYPHD, khoromskij2008domain}. The vertices of the triangles are used as the location points. To construct a triangulation, we recommend employing the routines of MATLAB, R, or any other third-party software. HLIBPro does not require a triangulation; only the coordinates of the locations are needed \cite{litvHLIBPro17}. \section{Conclusion and Discussion} \label{sec:Conclusion} We have applied a well-known tool from linear algebra, the $\mathcal{H}$-matrix technique, to spatial statistics. This technique makes it possible to work with large sets of measurements observed on unstructured grids, and in particular to estimate the unknown parameters of a covariance model. The statistical model considered yields Mat\'{e}rn covariance matrices with three unknown parameters, $\ell$, $\nu$, and $\sigma^2$. We applied the $\mathcal{H}$-matrix technique to approximate multivariate Gaussian log-likelihood functions and Mat\'{e}rn covariance matrices. The $\mathcal{H}$-matrix technique allowed us to reduce the required memory and computing time drastically. Within the $\mathcal{H}$-matrix approximation, we can increase the spatial resolution, take more measurements into account, and consider larger regions with larger data sets. $\mathcal{H}$-matrices require neither axis-parallel grids nor homogeneity of the covariance function. From the $\mathcal{H}$-matrix approximation of the log-likelihood, we computed the $\mathcal{H}$-Cholesky factorization, the KLD, the log-determinant, and the quadratic form $\mathbf{Z}^{\top}\mathbf{C}^{-1}\mathbf{Z}$ (Tables~\ref{table:eps_det}, \ref{table:approx_compare_rank}). We demonstrated the computing time, storage requirements, relative errors, and convergence of the $\mathcal{H}$-matrix technique (Tables~\ref{table:approx_compare15}, \ref{table:costs_vs_eps}, \ref{table:costs_2MI}).
By employing $\mathcal{H}$-matrix approximations, we reduced the cost of computing the log-likelihood from cubic to log-linear, without significantly changing the log-likelihood. We considered both simulated examples, where we identified the known parameters, and a large, real-data (soil moisture) example, where the parameters were unknown. We were able to calculate the maximum likelihood estimate ($\hat{\ell}$, $\hat{\nu}$, $\hat{\sigma}^2$) for all examples. We investigated the impact of the $\mathcal{H}$-matrix approximation error and the statistical error on the total error (see Fig.~\ref{fig:50eps}). For both examples we repeated the calculations for 50 replicates and computed box plots. We analyzed the dependence of ($\hat{\ell}$, $\hat{\nu}$, $\hat{\sigma}^2$) on $\varepsilon$ and on the sample size $n$. With the parallel $\mathcal{H}$-matrix library HLIBPro, we were able to compute the log-likelihood function for $2{,}000{,}000$ locations in a few minutes (Fig.~\ref{table:approx_compare15}) on a desktop machine that is $\approx 5$ years old and costs nowadays $\approx 5{,}000$ USD. At the same time, the computation of the maximum likelihood estimates is much more expensive: it depends on the number of iterations in the optimization and can take from a few hours to a few days. In total, the algorithm may need 100--200 iterations (or more, depending on the initial guess and the threshold). If each iteration takes 10 minutes, then we may need 24 hours to obtain ($\hat{\ell}$, $\hat{\nu}$, $\hat{\sigma}^2$). Possible extensions of this work are 1) to reduce the number of required iterations by implementing the first and second derivatives of the likelihood; 2) to add a nugget to the set of unknown parameters; 3) to combine the current framework with the domain decomposition method.\\ \textbf{Acknowledgment} The research reported in this publication was supported by funding from King Abdullah University of Science and Technology (KAUST).
Additionally, we would like to express our special gratitude to Ronald Kriemann (for the HLIBPro software library) as well as to Lars Grasedyck and Steffen Boerm (for the HLIB software library). Alexander Litvinenko was supported by the Bayesian Computational Statistics $\&$ Modeling group (KAUST) and by the SRI-UQ group (KAUST).\\ \textbf{References} \bibliography{Hcovariance} \begin{appendices} \section{Appendix: Error estimates} \label{appendix:A} Lemma 3.3 (page 5 in \cite{BallaniKressner}) gives the following result. Let $\mathbf{C}\in \mathbb{R}^{n\times n}$ and $\mathbf{E}:=\mathbf{C}-\widetilde{\mathbf{C}}$, so that $\widetilde{\mathbf{C}}^{-1}\mathbf{E}=\widetilde{\mathbf{C}}^{-1} \mathbf{C}- \mathbf{I}$, and assume for the spectral radius that $\rho(\widetilde{\mathbf{C}}^{-1} \mathbf{E})=\rho(\widetilde{\mathbf{C}}^{-1}\mathbf{C}-\mathbf{I}) <\varepsilon<1$. Then $$ \vert \log \det{\mathbf{C}} - \log \det{\widetilde{\mathbf{C}}} \vert \leq -n \log(1-\varepsilon).$$ To prove Theorem~\ref{thm:MainLL} we use the result above and \begin{align*} \vert \widetilde{\mathcal{L}}({\boldsymbol \theta};k) - \mathcal{L}({\boldsymbol \theta}) \vert &\leq \frac{1}{2}\left\vert\log\frac{\det \mathbf{C}}{\det{ \widetilde{\mathbf{C}}}}\right\vert + \frac{1}{2} \vert \mathbf{z}^\top \left(\mathbf{C}^{-1} - \widetilde{\mathbf{C}}^{-1}\right )\mathbf{z} \vert \\ &\leq -\frac{1}{2}\, n \log(1-\varepsilon) + \frac{1}{2} \vert \mathbf{z}^\top \left(\mathbf{I} - \widetilde{\mathbf{C}}^{-1}\mathbf{C}\right )\mathbf{C}^{-1}\mathbf{z} \vert \\ &\leq \frac{1}{2}\, n \varepsilon + \frac{1}{2}\Vert \mathbf{z} \Vert^2\cdot c_1\cdot \varepsilon, \end{align*} where we used $\mathbf{C}^{-1} - \widetilde{\mathbf{C}}^{-1} = (\mathbf{I} - \widetilde{\mathbf{C}}^{-1}\mathbf{C})\mathbf{C}^{-1}$, the assumption $\Vert \mathbf{C}^{-1}\Vert \leq c_1$, and $-\log(1-\varepsilon)\approx \varepsilon$ for small $\varepsilon$. \begin{rem} \label{rem:NormZ} Let $\mathbf{Z} \sim \mathcal{N}(\mathbf{0},\sigma^2\mathbf{I})$. Theorem 2.2.7 in \cite{Nickl_2015} provides estimates for a norm of a Gaussian vector $\Vert \mathbf{Z}\Vert$, for instance for $\sup_{t\in T} \vert Z(t) \vert$: \begin{equation*} \Pr \left \{ \left\vert \sup_{t\in T} \vert Z(t) \vert-M \right\vert >u \right \} \leq 2\left (1-\Phi \left (\frac{u}{\sigma} \right) \right)\leq e^{-\frac{u^2}{2\sigma^2}}, \quad M>0. \end{equation*} \end{rem} \begin{rem} The assumption $\Vert \mathbf{C}^{-1} \Vert \leq c_1$ is strong. This estimate depends on the matrix $\mathbf{C}$, the smoothness parameter $\nu$, the covariance length $\ell$, and the $\mathcal{H}$-matrix rank $k$. First, we find an appropriate block decomposition for the covariance matrix $\mathbf{C}$. Second, we estimate the ranks of $\widetilde{\mathbf{C}}$. Third, we prove that the inverse/Cholesky factor can also be approximated in the $\mathcal{H}$-matrix format. Then we estimate the ranks for the inverse $\widetilde{\mathbf{C}}^{-1}$ and the Cholesky factor $\widetilde{\mathbf{L}}$. Finally, we estimate the $\mathcal{H}$-matrix approximation accuracies; see \cite{HackHMEng}.
In the worst case, the rank $k$ will be of order $n$. We also note that some covariance matrices are singular, so that $\widetilde{\mathbf{C}}^{-1}$ and $\widetilde{\mathbf{L}}$ may not exist. The computation of $\log\det{\widetilde{\mathbf{C}}}$ can be an ill-posed problem, in the sense that small perturbations in $\mathbf{C}$ result in large changes in $\log\det{\mathbf{C}}$. \end{rem} Below we estimate the error caused by using $\mathcal{H}$-matrices to generate simulated data. Let $\widetilde{\mathbf{C}}=\mathbf{C}+\varepsilon \mathbf{C}$ be an $\mathcal{H}$-matrix approximation, and let $\mathbf{Z}=\mathbf{L}\mathbf{W}$ be the data generated without $\mathcal{H}$-matrix approximation. \begin{lemma} \label{lem:errZ} Let $\widetilde{\mathbf{Z}}=\widetilde{\mathbf{L}}\mathbf{W}$ and $\Vert \widetilde{\mathbf{L}}-\mathbf{L}\Vert \leq \varepsilon_L$, where $\varepsilon_L$ tends to zero as the $\mathcal{H}$-matrix ranks grow. Let $\mathbf{Z}=\mathbf{z}$, $\widetilde{\mathbf{Z}}=\widetilde{\mathbf{z}}$, $\mathbf{W}=\mathbf{w}$ denote the data realizations. Then \begin{equation*} \Vert \widetilde{\mathbf{z}}-\mathbf{z} \Vert \leq \Vert \widetilde{\mathbf{L}}-\mathbf{L}\Vert\, \Vert \mathbf{w} \Vert \leq \varepsilon_L \Vert \mathbf{w} \Vert. \end{equation*} \end{lemma} The error $\varepsilon_L$ depends on the condition number of $\mathbf{C}$ and can be large, for instance, for ill-conditioned matrices. The next lemma estimates the error in the quadratic form caused by replacing the original data set $\mathbf{Z}=\mathbf{z}$ by its $\mathcal{H}$-matrix approximation.
\begin{lemma} \label{lem:L} Let $\mathbf{L}=\widetilde{\mathbf{L}}+\varepsilon_L \mathbf{L}$ be an $\mathcal{H}$-matrix approximation of $\mathbf{L}$. Then $$ \vert \widetilde{\mathbf{z}}^\top {\mathbf{C}}^{-1} \widetilde{\mathbf{z}} - \mathbf{z}^\top \mathbf{C}^{-1} \mathbf{z} \vert \leq 2\varepsilon_L \vert \widetilde{\mathbf{z}}^\top {\mathbf{C}}^{-1} \widetilde{\mathbf{z}} \vert +\mathcal{O}(\varepsilon_L^2).$$ \end{lemma} \begin{proof} Let $\widetilde{\mathbf{Z}}=\widetilde{\mathbf{L}}\mathbf{W}$, $\mathbf{W}\sim \mathcal{N}(\mathbf{0},\mathbf{I})$, and ${\mathbf{Z}} = {\mathbf{L}}\mathbf{W} = (\widetilde{\mathbf{L}} + \varepsilon_L\widetilde{\mathbf{L}} )\mathbf{W}=\widetilde{\mathbf{Z}}+\varepsilon_L \widetilde{\mathbf{Z}}$. Let $\mathbf{Z}=\mathbf{z}$ and $\widetilde{\mathbf{Z}}=\widetilde{\mathbf{z}}$ denote the data realizations. Then, after a simple calculation, we obtain \begin{equation*} \vert \widetilde{\mathbf{z}}^\top {\mathbf{C}}^{-1} \widetilde{\mathbf{z}} - (\widetilde{\mathbf{z}} +\varepsilon_L \widetilde{\mathbf{z}})^\top \mathbf{C}^{-1} (\widetilde{\mathbf{z}} +\varepsilon_L \widetilde{\mathbf{z}}) \vert \leq 2\varepsilon_L \vert \widetilde{\mathbf{z}}^\top {\mathbf{C}}^{-1} \widetilde{\mathbf{z}} \vert + \varepsilon_L^2 \vert \widetilde{\mathbf{z}}^\top {\mathbf{C}}^{-1} \widetilde{\mathbf{z}} \vert \rightarrow 0,\; \text{as }\; \varepsilon_L \rightarrow 0.
\end{equation*} \end{proof} \begin{lemma} \label{lem:inv} Let $\mathbf{z}\in \mathbb{R}^{n}$ and $\Vert \widetilde{\mathbf{C}}^{-1} - \mathbf{C}^{-1} \Vert\leq \varepsilon_c$. Then $ \vert {\mathbf{z}}^\top \widetilde{\mathbf{C}}^{-1}{\mathbf{z}} - \mathbf{z}^\top \mathbf{C}^{-1} \mathbf{z} \vert \leq \Vert \mathbf{z}\Vert^2 \varepsilon_c$. \end{lemma} \begin{proof} \begin{equation*} \vert {\mathbf{z}}^\top \widetilde{\mathbf{C}}^{-1}{\mathbf{z}} - \mathbf{z}^\top \mathbf{C}^{-1} \mathbf{z} \vert = \vert {\mathbf{z}}^\top( \widetilde{\mathbf{C}}^{-1} - \mathbf{C}^{-1} )\mathbf{z} \vert \leq \Vert \mathbf{z}\Vert^2 \varepsilon_c. \end{equation*} \end{proof} \begin{lemma} \label{lem:error1} Let $\mathbf{z},\widetilde{\mathbf{z}} \in \mathbb{R}^{n}$ and let the conditions of Lemmas~\ref{lem:L}--\ref{lem:inv} hold. Then $$ \vert \widetilde{\mathbf{z}}^\top \widetilde{\mathbf{C}}^{-1} \widetilde{\mathbf{z}} - \mathbf{z}^\top \mathbf{C}^{-1} \mathbf{z} \vert \leq \mathcal{O}(\varepsilon_L) + \mathcal{O}(\varepsilon_c).$$ \end{lemma} \begin{proof} Adding and subtracting an additional term and applying Lemmas~\ref{lem:L}--\ref{lem:inv}, we obtain \begin{align*} \vert \widetilde{\mathbf{z}}^\top \widetilde{\mathbf{C}}^{-1} \widetilde{\mathbf{z}} - \mathbf{z}^\top \mathbf{C}^{-1} \mathbf{z} \vert &\leq \vert \widetilde{\mathbf{z}}^\top \widetilde{\mathbf{C}}^{-1} \widetilde{\mathbf{z}} - \mathbf{z}^\top \widetilde{\mathbf{C}}^{-1} \mathbf{z} \vert + \vert \mathbf{z}^\top \widetilde{\mathbf{C}}^{-1} \mathbf{z}- \mathbf{z}^\top
\mathbf{C}^{-1} \mathbf{z} \vert \leq 2\varepsilon_L \vert \widetilde{\mathbf{z}}^\top \widetilde{\mathbf{C}}^{-1} \widetilde{\mathbf{z}} \vert + \Vert \mathbf{z} \Vert^2 \varepsilon_c . \end{align*} \end{proof} Thus, if the approximations of the matrices $\mathbf{C}$, $\mathbf{C}^{-1}$ and $\mathbf{L}$ are accurate (the error tends to zero relatively fast as the ranks increase), then the $\mathcal{H}$-matrix approximation error in the quadratic form also tends to zero with increasing $k$ or decreasing $\varepsilon_L$. A more rigorous proof establishing the order of convergence is beyond the scope of this paper. The dependence on the condition number is studied in \cite{BebendorfSpEq16, bebendorf2007Why} (only for matrices that originate from elliptic partial differential equations and integral equations). In Table~\ref{tab:Tnorms} we list the Frobenius and spectral norms of $\widetilde{\mathbf{C}}$, $\widetilde{\mathbf{L}}$, and $\widetilde{\mathbf{C}}^{-1}$ vs.\ $n$. This table shows that $\Vert \widetilde{\mathbf{C}} \Vert_2$ grows with $n$, whereas $\Vert \widetilde{\mathbf{C}}^{-1} \Vert_2$ hardly grows with $n$.
\begin{table}[] \begin{center} \caption{Norms of the Mat\'ern covariance matrix $\widetilde{\mathbf{C}}$ and its Cholesky factor $\widetilde{\mathbf{L}}$; $\ell=0.089$, $\nu=0.22$, $\sigma^2=1$, relative $\mathcal{H}$-matrix block-wise accuracy $\varepsilon=10^{-5}$.} \begin{tabular}{|c|c|c|c|c|c|} \hline $n$ & $\Vert \widetilde{\mathbf{C}} \Vert_F$& $\Vert \widetilde{\mathbf{C}} \Vert_2$ & $\Vert \widetilde{\mathbf{L}} \Vert_F$& $\Vert\widetilde{ \mathbf{L}} \Vert_2$& $\Vert \widetilde{\mathbf{C}}^{-1} \Vert_2$ \\ \hline $32\cdot 10^3$ & $241$ & $7.70$ & $182$ & $3.2$ & $2.90$\\ \hline $64\cdot 10^3$ & $410$ & $13.5$ & $262$ & $4.5$ & $2.97$\\ \hline $128\cdot 10^3$ & $739$ & $24.0$ & $379$ & $6.4$ & $3.05$ \\ \hline $256\cdot 10^3$ & $1388$ & $46.0$ & $552$ & $9.2$& $3.08$\\ \hline \end{tabular} \label{tab:Tnorms} \end{center} \end{table} \section{Appendix: Admissibility condition and cluster and block-cluster trees} \label{sec:adm} The admissibility condition (criterion) is used to divide a given matrix into sub-blocks and to decide which sub-blocks can be approximated well by low-rank matrices and which cannot. There are many different admissibility criteria (see Fig.~\ref{fig:Hexample_adm}). The typical criteria are the strong, the weak, and the domain decomposition-based ones \cite{HackHMEng}. The user can also develop his own admissibility criterion, which may depend, for example, on the covariance length. The criterion regulates, for instance, the block partitioning, the size of the largest block, and the depth of the hierarchical block partitioning. Large low-rank blocks are good, but they may also require larger ranks.
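For illustration, the standard admissibility check $\min\{\mathrm{diam}(B_{\tau}), \mathrm{diam}(B_{\sigma})\} \leq \eta\, \mathrm{dist}(B_{\tau}, B_{\sigma})$ for axis-parallel bounding boxes, which classifies a sub-block as low-rank-approximable or not, can be sketched as follows (our own illustrative code with a hypothetical box representation, not the HLIB/HLIBPro API):

```python
import numpy as np

def diam(box):
    """Diameter of an axis-parallel box given as a (lower_corner, upper_corner) pair."""
    lo, hi = box
    return float(np.linalg.norm(hi - lo))

def dist(box1, box2):
    """Distance between two axis-parallel boxes: componentwise gap, then Euclidean norm."""
    (lo1, hi1), (lo2, hi2) = box1, box2
    gap = np.maximum(0.0, np.maximum(lo1 - hi2, lo2 - hi1))
    return float(np.linalg.norm(gap))

def admissible(box1, box2, eta=2.0):
    """Standard admissibility: min{diam(B_tau), diam(B_sigma)} <= eta * dist(B_tau, B_sigma)."""
    return min(diam(box1), diam(box2)) <= eta * dist(box1, box2)
```

Blocks that pass this test are stored in low-rank form; the remaining (inadmissible) blocks are either subdivided further along the block cluster tree or stored as dense matrices.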
Figure~\ref{fig:Hexample_adm} shows three examples of $\mathcal{H}$-matrices with three different admissibility criteria: (left) standard, (middle) domain decomposition-based, and (right) weak (HODLR). The matrices are taken from different applications and illustrate only the diversity of block partitionings. \begin{figure}[htbp!] \centering \includegraphics[width=0.28\textwidth]{L.png} \includegraphics[width=0.28\textwidth]{sparsealg_LU.pdf} \includegraphics[width=0.28\textwidth]{weak_adm.pdf} \caption{Examples of $\mathcal{H}$-matrices with three different admissibility criteria: (left) standard, (middle) domain decomposition-based, and (right) weak (HODLR).} \label{fig:Hexample_adm} \end{figure} Figure~\ref{fig:ct_bct} shows examples of cluster trees (1-2) and block cluster trees (3-4). \begin{figure}[htbp!] \centering \includegraphics[width=0.3\textwidth]{ct.eps} \includegraphics[width=0.3\textwidth]{sparsealg_ct.eps} \includegraphics[width=0.15\textwidth]{laplace_bct.eps} \includegraphics[width=0.15\textwidth]{sparsealg_bct.eps} \caption{(1-2) Two examples of cluster trees: (1st) standard, (2nd) domain decomposition-based; (3) an example of a block cluster tree; (4) a block cluster tree constructed from the domain decomposition-based cluster tree shown in (2nd).} \label{fig:ct_bct} \end{figure} Let $I$ be the index set of all locations. For each index $i \in I$ corresponding to a basis function $b_i$ (e.g., the ``hat'' function), denote the support by $\mathcal{G}_i := \operatorname{supp} b_i \subset \mathbb{R}^d$, where $d\in \{1,2,3\}$ is the spatial dimension. Now we define two trees which are necessary for the definition of hierarchical matrices. These trees are labeled trees, where the label of a vertex $t$ is denoted by $\hat{t}$.
\begin{defi}(Cluster Tree $T_{I}$) \cite{Part1, GH03}\\ A finite tree $T_I$ is a cluster tree over the index set $I$ if the following conditions hold: \begin{itemize} \item $I$ is the root of $T_I$, and $\hat{t} \subseteq I$ holds for all $t \in T_I$. \item If $t \in T_I$ is not a leaf, then the set of sons, $\mbox{sons}(t)$, contains disjoint subsets of $I$, and $\hat{t}$ is the disjoint union of its sons, $\hat{t}=\displaystyle{\bigcup_{s \in \mbox{sons}(t)}{\hat{s}}}$. \item If $t \in T_I$ is a leaf, then $ \vert \hat{t} \vert \leq n_{min}$ for a fixed number $n_{min}$. \end{itemize} \label{def:ClTr} \end{defi} We generalise $\mathcal{G}_i$ to clusters $\tau \in T_I$ by setting $\mathcal{G}_{\tau} := \bigcup_{i\in \tau} \mathcal{G}_i$, i.e., $\mathcal{G}_{\tau}$ is the minimal subset of $\mathbb{R}^d$ that contains the supports of all basis functions $b_i$ with $i \in \tau$. \begin{defi} (Block Cluster Tree $T_{I\times I}$) \cite{Part1, GH03}\\ \label{def:BLClTree} Let $T_{I}$ be a cluster tree over the index set $I$. A finite tree $T_{I\times I}$ is a block cluster tree based on $T_I$ if the following conditions hold: \begin{itemize} \item $\mbox{root}(T_{I\times I})=I \times I$. \item Each vertex $b$ of $T_{I\times I}$ has the form $b=(\tau,\sigma)$ with clusters $\tau,\sigma \in T_I$.
\item For each vertex $(\tau,\sigma)$ with $\mbox{sons}(\tau,\sigma)\neq \varnothing$, we have \begin{equation*} \label{eq:sons} \mbox{sons}(\tau,\sigma)= \left\lbrace \begin{array}{cl} \{(\tau,\sigma^{'}) : \sigma^{'} \in \mbox{sons}(\sigma)\}, &\text{if sons}(\tau)= \varnothing \wedge \text{sons}(\sigma) \neq \varnothing, \\ \{(\tau^{'},\sigma) : \tau^{'} \in \text{sons}(\tau)\}, &\text{if sons}(\tau)\neq \varnothing \wedge \text{sons}(\sigma)= \varnothing, \\ \{(\tau^{'},\sigma^{'}) : \tau^{'} \in \text{sons}(\tau), \sigma^{'} \in \text{sons}(\sigma)\}, &\text{otherwise.} \end{array} \right. \end{equation*} \item The label of a vertex $(\tau,\sigma)$ is given by $\widehat{(\tau,\sigma)}=\widehat{\tau} \times \widehat{\sigma} \subseteq I \times I$. \end{itemize} \end{defi} We can see that $\widehat{\mbox{root}(T_{I \times I})}=I \times I$. This implies that the set of leaves of $T_{I \times I}$ is a partition of $I \times I$. \begin{defi} The standard admissibility condition (Adm$_{\eta}$) for two domains $B_{\tau}$ and $B_{\sigma}$ (which correspond to two clusters $\tau$ and $\sigma$) is defined as follows: \begin{equation*} \label{eq:stand_cond} \min\{\mathrm{diam}(B_{\tau}), \mathrm{diam}(B_{\sigma})\} \leq \eta\, \mathrm{dist}(B_{\tau} , B_{\sigma}), \end{equation*} \end{defi} where $B_{\tau},\, B_{\sigma} \subset \mathbb{R}^d$ are axis-parallel bounding boxes of the clusters $\tau$ and $\sigma$ such that $\mathcal{G}_{\tau} \subset B_{\tau}$ and $\mathcal{G}_{\sigma} \subset B_{\sigma}$. Here $\mathrm{diam}$ and $\mathrm{dist}$ denote the usual diameter and distance; by default, $\eta= 2.0$. \section{Appendix: Accuracy Stability Plots} \label{app:B} \begin{figure}[htbp!]
\begin{subfigure}[b]{0.33\textwidth} \centering \caption{} \includegraphics[width=0.99\textwidth]{9Sept2018_graphs_ell_32K.pdf} \label{fig:ell64} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=0.99\textwidth]{9Sept2018_graphs_nu_32K.pdf} \label{fig:nu64} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \caption{} \includegraphics[width=0.99\textwidth]{9Sept2018_graphs_sigma2_32K.pdf} \label{fig:sigma64} \end{subfigure} \caption{Estimated parameters as a function of the accuracy $\varepsilon$, based on 30 replicates (black solid curves) with $n=64{,}000$ observations. The true parameters ${\boldsymbol \theta}=(\ell,\nu,\sigma^2)=(0.7, 0.9, 1.0)$ are represented by the green dotted lines. Replicates in (a) identify $\hat{\ell}$; in (b) $\hat{\nu}$; and in (c) $\hat{\sigma}^2$.} \label{fig:50eps} \end{figure} \end{appendices} \end{document}
\begin{document} \title{Orthogonal functions related to Lax pairs in Lie algebras} \author{Wolter Groenevelt} \address{Delft University of Technology\\ Delft Institute of Applied Mathematics\\ P.O.~Box 5031\\ 2600 GA Delft\\ The Netherlands} \email{[email protected]} \author{Erik Koelink} \address{ IMAPP\\ Radboud Universiteit\\ P.O.~Box 9010\\ 6500 GL Nijmegen\\ The Netherlands } \email{[email protected]} \date{\today} \dedicatory{Dedicated to the memory of Richard Askey} \begin{abstract} We study a Lax pair in a $2$-parameter Lie algebra in various representations. The overlap coefficients of the eigenfunctions of $L$ and the standard basis are given in terms of orthogonal polynomials and orthogonal functions. Eigenfunctions of the operator $L$ for a Lax pair for $\mathfrak{sl}(d+1,\mathbb C)$ are studied in certain representations. \end{abstract} \maketitle \section{Introduction} The link of the Toda lattice to three-term recurrence relations via the Lax pair after the Flaschka coordinate transform is well understood, see e.g. \cite{BabeBT}, \cite{Tesc}. We consider a Lax pair in a specific Lie algebra, such that in irreducible $*$-representations the Lax operator is a Jacobi operator. A Lax pair is a pair of time-dependent matrices or operators $L(t)$ and $M(t)$ satisfying the Lax equation \[ \dot{L}(t) = [M(t),L(t)], \] where $[\, , \, ]$ is the commutator and the dot represents differentiation with respect to time. The Lax operator $L$ is isospectral, i.e.~the spectrum of $L$ is independent of time. A famous example is the Lax pair for the Toda chain, in which $L$ is a self-adjoint Jacobi operator, \[ L(t) e_n = a_n(t) e_{n+1} + b_n(t) e_{n} + a_{n-1}(t) e_{n-1}, \] where $\{e_n\}$ is an orthonormal basis for the Hilbert space, and $M$ is the skew-adjoint operator given by \[ M(t) e_n = a_n(t) e_{n+1} - a_{n-1}(t) e_{n-1}.
\] In this case the Lax equation describes the equations of motion (after a change of variables) of a chain of interacting particles with nearest neighbour interactions. The eigenvalues of $L$, $L$ being isospectral, constitute integrals of motion. In this paper we define a Lax pair in a 2-parameter Lie algebra. In the special case of $\mathfrak{sl}(2,\mathbb C)$ we recover the Lax pair for the $\mathfrak{sl}(2,\mathbb C)$ Kostant Toda lattice, see \cite[\S 4.6]{BabeBT} and references given there. We give a slight generalization by allowing for a more general $M(t)$. We discuss the corresponding solutions to the corresponding differential equations in various representations of the Lie algebra. In particular, one obtains the classical relation to the Hermite, Krawtchouk, Charlier, Meixner, Laguerre and Meixner-Pollaczek polynomials from the Askey scheme of hypergeometric functions \cite{KLS} for which the Toda modification, see \cite[\S 2.8]{Isma}, remains in the same class of orthogonal polynomials. This corresponds to the results established by Zhedanov \cite{Zhed}, who investigated the situation where $L$, $M$ and $\dot{L}$ act as three-term recurrence operators and close up to a Lie algebra of dimension $3$ or $4$. In the current paper Zhedanov's result is explained, starting from the other end. In Zhedanov's approach the condition on forming a low-dimensional Lie algebra forces a factorization of the functions as a function of time $t$ and place $n$, which is immediate from representing the Lax pair from the Lie algebra element. The solutions of the Toda lattice arising in this way, i.e. which are factorizable as functions of $n$ and $t$, have also been obtained by Kametaka \cite{Kame} stressing the hypergeometric nature of the solutions. The link to Lie algebras and Lie groups in Kametaka \cite{Kame} is implicit, see especially \cite[Part I]{Kame}. 
The results and methods of the short paper by Kametaka \cite{Kame} have been explained and extended later by Okamoto \cite{Okam}. In particular, Okamoto \cite{Okam} gives the relation to the $\tau$-function formulation and the B\"acklund transformations. Moreover, we extend to non-polynomial settings by considering representations of the corresponding Lie algebras in $\ell^2(\mathbb Z)$ corresponding to the principal unitary series of $\mathfrak{su}(1,1)$ and the representations of $\mathfrak{e}(2)$, the Lie algebra of the group of motions of the plane. In this way we find solutions to the Toda lattice equations labelled by $\mathbb Z$. There is a (non-canonical) way to associate to recurrences on $\ell^2(\mathbb Z)$ three-term recurrences for $2\times 2$-matrix-valued polynomials, see e.g. \cite{Bere}, \cite{Koel}. However, this does not lead to explicit $2\times 2$-matrix-valued solutions of the non-abelian Toda lattice as introduced and studied in \cite{BrusMRL}, \cite{Gekh} in relation to matrix-valued orthogonal polynomials; see also \cite{IsmaKR} for an explicit example and the relation to the modification of the matrix weight. The general Lax pair for the Toda lattice in finite dimensions, as studied by Moser \cite{Mose}, can also be considered and slightly extended in the same way as an element of the Lie algebra $\mathfrak{sl}(d+1,\mathbb C)$. This involves $t$-dependent finite discrete orthogonal polynomials, and these polynomials occur in describing the action of $L(t)$ in highest weight representations. We restrict to representations for the symmetric powers of the fundamental representations; then the eigenfunctions can be described in terms of multivariable Krawtchouk polynomials following Iliev \cite{Ilie}, establishing them as overlap coefficients between natural bases for two different Cartan subalgebras. Similar group-theoretic interpretations of these multivariable Krawtchouk polynomials have been established by Cramp\'e et al.
\cite{CramvdVV} and Genest et al. \cite{GeneVZ}. We briefly discuss the $t$-dependence of the corresponding eigenvectors of $L(t)$. In brief, in \S \ref{sect:g(a,b)} we recall the $2$-parameter Lie algebra as in \cite{Mi} and the Lax pair. In \S \ref{sec:su2} we discuss $\mathfrak{su}(2)$ and its finite-dimensional representations, and in \S \ref{sec:su11} we discuss the case of $\mathfrak{su}(1,1)$, where we consider both discrete series representations and principal unitary series representations. The latter leads to new solutions of the Toda equations and the generalization in terms of orthogonal functions. The corresponding orthogonal functions are the overlap coefficients between the standard basis in the representations and the $t$-dependent eigenfunctions of the operator $L$. In \S \ref{sec:oscillatoralgebra} we look at the oscillator algebra as a specialization, and in \S \ref{sec:casee2} we consider the Lie algebra of the group of plane motions, leading to a solution in connection with Bessel functions. In \S \ref{sec:modification} we indicate how the measures for the orthogonal functions involved have to be modified in order to give solutions of the coupled differential equations. For the Toda case related to orthogonal polynomials, this coincides with the Toda modification \cite[\S 2.8]{Isma}. Finally, in \S \ref{sec:sld+1} we consider the case of finite-dimensional representations of such a Lax pair for a higher-rank Lie algebra in specific finite-dimensional representations for which all weight spaces are $1$-dimensional. A question following up on \S \ref{sec:modification} is whether the modification of the weight is of general interest, cf. \cite[\S 2.8]{Isma}. A natural question following up on \S \ref{sec:sld+1} is what happens in other finite-dimensional representations, and what happens in infinite-dimensional representations corresponding to non-compact real forms of $\mathfrak{sl}(d+1,\mathbb C)$, as is done in \S \ref{sec:su11} for the case $d=1$.
We could also ask if it is possible to associate Racah polynomials, as the most general finite discrete orthogonal polynomials in the Askey scheme, to the construction of \S \ref{sec:sld+1}. Moreover, the relation to the interpretation as in \cite{KVdJ98} suggests that it might be possible to extend to the quantum algebra setting, but this is quite open. \subsection*{Dedication} This paper is dedicated to Richard A. Askey (1933--2019) who has done an incredible amount of fascinating work in the area of special functions, and who always had an open mind, in particular concerning relations with other areas. We hope this spirit is reflected in this paper. Moreover, through his efforts for mathematics education, Askey's legacy will be long-lived. \subsection*{Acknowledgement} We thank Luc Vinet for pointing out references. We also thank both referees for their comments, and especially for pointing out the papers by Kametaka \cite{Kame} and Okamoto \cite{Okam}. \section{The Lie algebra $\mathfrak g(a,b)$} \label{sect:g(a,b)} Let $a,b \in \mathbb C$. The Lie algebra $\mathfrak g(a,b)$ is the 4-dimensional complex Lie algebra with basis $H,E,F,N$ satisfying \begin{equation} \label{eq:commuation relations} \begin{gathered} {} [E,F]=aH+bN, \quad [H,E]=2E, \quad [H,F]=-2F, \\ [H,N]=[E,N]=[F,N]=0. \end{gathered} \end{equation} For $a,b \in \mathbb R$ there are two inequivalent $*$-structures on $\mathfrak g(a,b)$ defined by \[ E^*=\varepsilon F, \quad H^*=H, \quad N^*=N, \] where $\varepsilon \in \{+,-\}$. We define the following Lax pair in $\mathfrak g(a,b)$. \begin{Definition} Let $r,s \in C^1 [0,\infty)$ and $u \in C[0,\infty)$ be real-valued functions and let $c \in \mathbb R$. The Lax pair $L,M \in \mathfrak g(a,b)$ is given by \begin{equation} \label{eq:Lax pair} \begin{split} L(t) &= cH+ s(t)(aH+bN)+r(t) \big(E+E^*\big), \\ M(t)&= u(t)\big(E-E^*\big).
\end{split} \end{equation} \end{Definition} Note that $L^*=L$ and $M^*=-M$. Being a Lax pair means that $\dot L = [M,L]$, which leads to the following differential equations. \begin{proposition} \label{prop:relations r s u} The functions $r,s$ and $u$ satisfy \[ \begin{split} \dot s(t) & = 2\varepsilon r(t)u(t)\\ \dot r(t) & = -2 (as(t)+c)u(t). \end{split} \] \end{proposition} \begin{proof} From the commutation relations \eqref{eq:commuation relations} it follows that \[ [M,L]= 2\varepsilon r(t)u(t)(aH+bN) - 2 (as(t)+c)u(t)(E+E^*). \] Since $[M,L]=\dot L = \dot s(t)(aH+bN) + \dot r(t) (E+E^*)$, the result follows. \end{proof} \begin{corollary} \label{cor:constant function} The function $I(r,s)=\varepsilon r^2+ (as+2c)s$ is an invariant. \end{corollary} \begin{proof} Differentiating gives \[ \begin{split} \frac{d}{dt} (\varepsilon r(t)^2+as(t)^2+2cs(t)) &= 2\varepsilon r(t) \dot r(t)+ 2 (as(t)+c) \dot s(t), \end{split} \] which equals zero by Proposition \ref{prop:relations r s u}. \end{proof} In the following sections we consider the Lax operator $L$ in an irreducible $*$-representation of $\mathfrak g(a,b)$, and we determine its spectrum and explicit eigenfunctions. We restrict to the following special cases of the Lie algebra $\mathfrak g(a,b)$: \begin{itemize} \item $\mathfrak g(1,0) \cong \mathfrak{sl}(2,\mathbb C)\oplus \mathbb C$ \item $\mathfrak g(0,1) \cong \mathfrak b(1)$ is the generalized oscillator algebra \item $\mathfrak g(0,0) \cong \mathfrak e(2) \oplus \mathbb C$, with $\mathfrak e(2)$ the Lie algebra of the group of plane motions \end{itemize} These are the only essential cases, as $\mathfrak g(a,b)$ is isomorphic as a Lie algebra to one of these cases, see \cite[Section 2-5]{Mi}.
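Proposition \ref{prop:relations r s u} and Corollary \ref{cor:constant function} can also be verified numerically. The following sketch (not part of the formal development; the parameter values and the choice $u(t)=\cos t$ are arbitrary test data) integrates the coupled equations with a Runge--Kutta step and checks that $I(r,s)$ stays constant for each of the three special cases listed above.

```python
import numpy as np

# Check that I(r,s) = eps*r^2 + (a*s + 2c)*s is conserved under
#   s' = 2*eps*r*u,   r' = -2*(a*s + c)*u.
# The values of (a, c, eps) and u(t) = cos(t) are arbitrary test data.
def flow(a, c, eps, s0, r0, h=1e-3, T=2.0):
    u = np.cos                          # any continuous u works

    def f(t, y):
        s, r = y
        return np.array([2 * eps * r * u(t), -2 * (a * s + c) * u(t)])

    y, t = np.array([s0, r0]), 0.0
    while t < T:                        # classical RK4 step
        k1 = f(t, y); k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2); k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4); t += h
    return y

def invariant(a, c, eps, s, r):
    return eps * r ** 2 + (a * s + 2 * c) * s

# g(1,0) with both *-structures, the oscillator algebra g(0,1), and g(0,0)
for (a, c, eps) in [(1, 0, 1), (1, 0, -1), (0, 1, 1), (0, 0, 1)]:
    s1, r1 = flow(a, c, eps, 0.3, 0.7)
    assert abs(invariant(a, c, eps, s1, r1) - invariant(a, c, eps, 0.3, 0.7)) < 1e-8
```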
\section{The Lie algebra $\mathfrak{su}(2)$}\label{sec:su2} In this section we consider the Lie algebra $\mathfrak g(a,b)$ from Section \ref{sect:g(a,b)} with $(a,b)=(1,0)$ and $\varepsilon = +$, i.e.~the Lie algebra $\mathfrak{su}(2) \oplus \mathbb C$. The basis element $N$ plays no role in this case, therefore we omit it. So we consider the Lie algebra with basis $H,E,F$ satisfying commutation relations \[ [H,E]=2E, \qquad [H,F]=-2F, \qquad [E,F]=H, \] and the $*$-structure is defined by $H^*=H, E^*=F$. The Lax pair \eqref{eq:Lax pair} is given by \[ L(t) = s(t)H + r(t)(E+F), \qquad M(t)=u(t)(E-F), \] where (without loss of generality) we set $c=0$. The differential equations for $r$ and $s$ from Proposition \ref{prop:relations r s u} read in this case \begin{equation} \label{eq:relations r s u su(2)} \begin{split} \dot s(t) &= 2u(t)r(t) \\ \dot r(t) & = -2u(t)s(t) \end{split} \end{equation} and the invariant in Corollary \ref{cor:constant function} is given by $I(r,s)=r^2+s^2$. \begin{lemma} \label{lem:sign r s for su(2)} Assume $\sgn(u(t))=\sgn(r(t))$ for all $t>0$, $s(0)>0$ and $r(0)>0$. Then $s(t)>0$ and $r(t)>0$ for all $t>0$. \end{lemma} \begin{proof} From $\dot s = 2 ur$ it follows that $s$ is increasing. Since $(r(t),s(t))$ in phase space is a point on the invariant $I(r,s)=I(r(0),s(0))$, which describes a circle around the origin, it follows that $r(t)$ and $s(t)$ remain positive. \end{proof} Throughout this section we assume that the conditions of Lemma \ref{lem:sign r s for su(2)} are satisfied, so that $r(t)$ and $s(t)$ are positive. Note that if we change the condition on $r(0)$ to $r(0)<0$, then $r(t)<0$ for all $t>0$. For $j \in \frac12\mathbb N$ let $\ell^2_j$ be the $2j+1$ dimensional complex Hilbert space with standard orthonormal basis $\{e_n \mid n=0,\ldots,2j\}$.
An irreducible $*$-representation $\pi_j$ of $\mathfrak{su}(2)$ on $\ell^2_j$ is given by \[ \begin{split} \pi_j(H)e_n & = 2(n-j)\, e_n \\ \pi_j(E) e_n & = \sqrt{(n+1)(2j-n)}\, e_{n+1} \\ \pi_j(F) e_n & = \sqrt{n(2j-n+1)}\, e_{n-1}, \end{split} \] where we use the notation $e_{-1}=e_{2j+1}=0$. In this representation the Lax operator $\pi_j(L)$ is the Jacobi operator \begin{equation} \label{eq:L Krawtchouck} \pi_j(L(t)) e_n = r(t) \sqrt{(n+1)(2j-n)} \, e_{n+1} + 2s(t)(n-j) \, e_n + r(t) \sqrt{n(2j-n+1)}\, e_{n-1}. \end{equation} We can diagonalize the Lax operator $\pi_j(L)$ using orthonormal Krawtchouk polynomials \cite[Section 9.11]{KLS}, which are defined by \[ K_n(x) = K_n(x;p,N) = \left(\frac{p}{1-p}\right)^\frac{n}{2} \sqrt{\binom{N}{n}} \rFs{2}{1}{-n,-x}{-N}{\frac{1}{p}}, \] where $N\in \mathbb N$, $0<p<1$ and $n,x \in \{0,1,\ldots,N\}$. The three-term recurrence relation is \[ \begin{split} &\frac{\frac12 N- x}{\sqrt{p(1-p)}}K_n(x) = \\ &\quad \sqrt{(n+1)(N-n)}\, K_{n+1}(x) + \frac{p-\frac12}{\sqrt{p(1-p)}}(2n-N)K_n(x) + \sqrt{n(N-n+1)}\, K_{n-1}(x), \end{split} \] with the convention $K_{-1}(x) = K_{N+1}(x)=0$. The orthogonality relations read \[ \sum_{x=0}^N \binom{N}{x} p^x (1-p)^{N-x} K_{n}(x) K_{n'}(x) = \delta_{n,n'}. \] \begin{theorem} \label{thm:Krawtchouk} Define for $x \in \{0,\ldots,2j\}$ \[ W_t(x) = \binom{2j}{x}p(t)^x(1-p(t))^{2j-x} \] where $p(t) = \frac12 + \frac{s(t)}{2C}$ and $C = \sqrt{s^2 + r^2}$. For $t>0$ let $U_t: \ell^2_j \to \ell^2(\{0,\ldots,2j\}, W_t)$ be defined by \[ [U_te_n](x) = K_n(x;p(t),2j), \] then $U_t$ is unitary and $U_t \circ \pi_j(L(t)) \circ U_t^* = M(2C(j-x))$. \end{theorem} Here $M$ denotes the multiplication operator given by $[M(f)g](x) = f(x)g(x)$.
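As a numerical sanity check of the spectrum in Theorem \ref{thm:Krawtchouk} (a sketch with numpy; the values of $s$ and $r$ are arbitrary test data), the eigenvalues of the Jacobi matrix \eqref{eq:L Krawtchouck} indeed form the arithmetic grid $2C(j-x)$:

```python
import numpy as np

# Build the (2j+1)x(2j+1) Jacobi matrix pi_j(L) for sample s, r and compare
# its spectrum with {2C(j-x) : x = 0,...,2j}, C = sqrt(s^2 + r^2).
j = 3            # spin label; dimension 2j+1 = 7
s, r = 0.6, 0.8  # sample values of s(t), r(t)
N = 2 * j
n = np.arange(N)
off = r * np.sqrt((n + 1) * (N - n))       # r * sqrt((n+1)(2j-n))
diag = 2 * s * (np.arange(N + 1) - j)      # 2s(n-j)
Lmat = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
C = np.hypot(s, r)
expected = 2 * C * (j - np.arange(N + 1))  # eigenvalues 2C(j-x)
eigs = np.linalg.eigvalsh(Lmat)
print(np.allclose(np.sort(eigs), np.sort(expected)))  # True
```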
\begin{proof} From \eqref{eq:L Krawtchouck} and the recurrence relation of the Krawtchouk polynomials we obtain \[ [U_t\,r^{-1}\pi_j(L) U_t^* K_\cdot(x)](n) = \frac{j-x}{\sqrt{p(1-p)}} K_n(x), \] where \[ \frac{s}{r} = \frac{p-\frac12}{\sqrt{p(1-p)}}. \] The last identity implies \[ p= \frac12+ \frac12\sqrt{\frac{s^2}{s^2+r^2}}, \] so that \[ p(1-p)= \frac{r^2}{4(s^2+r^2)}. \] Then we find that the eigenvalue is \[ \frac{j-x}{\sqrt{p(1-p)}}= \frac{\sqrt{s^2+r^2}}{r} 2(j-x). \] Since $s^2+r^2$ is constant, the result follows. \end{proof} \section{The Lie algebra $\mathfrak{su}(1,1)$}\label{sec:su11} In this section we consider representations of $\mathfrak g(a,b)$ with $(a,b)=(1,0)$ and $\varepsilon=-$, i.e.~the Lie algebra $\mathfrak{su}(1,1) \oplus \mathbb C$. We omit the basis element $N$ again. The commutation relations are the same as in the previous section. The $*$-structure in this case is defined by $H^*=H$ and $E^*=-F$. The Lax pair \eqref{eq:Lax pair} is given by \[ L(t) = s(t)H + r(t)(E-F), \qquad M(t)=u(t)(E+F), \] where we set $c=0$ again. The functions $r$ and $s$ satisfy \[ \begin{split} \dot s(t) &= -2u(t)r(t) \\ \dot r(t) & = -2u(t)s(t) \end{split} \] and the invariant is given by $I(r,s)=s^2-r^2$. \begin{lemma} \label{lem:sign r s for su(1,1)} Assume $\sgn(u(t))=-\sgn(r(t))$ for all $t>0$, $s(0)>0$ and $r(0)>0$. Then $s(t)>0$ and $r(t)>0$ for all $t>0$. \end{lemma} \begin{proof} The proof is similar to the proof of Lemma \ref{lem:sign r s for su(2)}, where in this case $I(r,s)=I(r(0),s(0))$ describes a hyperbola or a straight line. \end{proof} Throughout this section we assume that the assumptions of Lemma \ref{lem:sign r s for su(1,1)} are satisfied.\\ We consider two families of irreducible $*$-representations of $\mathfrak{su}(1,1)$. The first family is the positive discrete series representations $\pi_k$, $k>0$, on $\ell^2(\mathbb N)$.
The actions of the basis elements on the standard orthonormal basis $\{e_n \mid n \in \mathbb N\}$ are given by \[ \begin{split} \pi_k(H)e_n &= 2(k+n)\, e_n \\ \pi_k(E)e_n & = \sqrt{(n+1)(2k+n)}\, e_{n+1} \\ \pi_k(F)e_n & = -\sqrt{n(2k+n-1)}\, e_{n-1}. \end{split} \] We use the convention $e_{-1}=0$. The second family of representations we consider is the principal unitary series representation $\pi_{\lambda,\varepsilon}$, $\lambda \in -\frac12+i\mathbb R_+$, $\varepsilon \in [0,1)$ with $(\lambda,\varepsilon) \neq (-\frac12,\frac12)$, on $\ell^2(\mathbb Z)$. The actions of the basis elements on the standard orthonormal basis $\{ e_n \mid n \in \mathbb Z \}$ are given by \[ \begin{split} \pi_{\lambda,\varepsilon}(H)e_n &= 2(\varepsilon+n)\, e_n \\ \pi_{\lambda,\varepsilon}(E)e_n & = \sqrt{(n+\varepsilon-\lambda)(n+\varepsilon+\lambda+1)}\, e_{n+1} \\ \pi_{\lambda,\varepsilon}(F)e_n & = -\sqrt{(n+\varepsilon-\lambda-1)(n+\varepsilon+\lambda)}\, e_{n-1}. \end{split} \] Note that both representations $\pi_k$ and $\pi_{\lambda,\varepsilon}$ as given above define unbounded representations. The operators $\pi(X)$, $X \in \mathfrak{su}(1,1)$, are densely defined operators on their representation space, where as a dense domain we take the set of finite linear combinations of the standard orthonormal basis $\{e_n\}$. \begin{remark} The Lie algebra $\mathfrak{su}(1,1)$ has two more families of irreducible $*$-representations: the negative discrete series and the complementary series. The negative discrete series representation $\pi_k^-$, $k>0$, can be obtained from the positive discrete series representation $\pi_k$ by setting \[ \pi_k^-(X) = \pi_k(\vartheta(X)), \qquad X \in \mathfrak{su}(1,1) \] where $\vartheta$ is the Lie algebra isomorphism defined by $\vartheta(H)=-H$, $\vartheta(E)=F$, $\vartheta(F)=E$.
The complementary series are defined in the same way as the principal unitary series, but the labels $\lambda,\varepsilon$ satisfy $\varepsilon \in [0,\frac12)$, $\lambda \in (-\frac12,-\varepsilon)$ or $\varepsilon \in (\frac12,1)$, $\lambda \in (-\frac12, \varepsilon-1)$. The results obtained in this section about the Lax operator in the positive discrete series and principal unitary series representations can easily be extended to these two families of representations. \end{remark} \subsection{The Lax operator in the positive discrete series} The Lax operator $L$ acts in the positive discrete series representation as a Jacobi operator on $\ell^2(\mathbb N)$ by \[ \pi_k(L(t)) e_n = r(t)\sqrt{(n+1)(n+2k)}\, e_{n+1} + s(t)(2k+2n) e_n + r(t)\sqrt{n(n+2k-1)}\, e_{n-1}. \] The operator $\pi_k(L)$ can be diagonalized using explicit families of orthogonal polynomials. We need to distinguish between three cases corresponding to the invariant $s^2-r^2$ being positive, zero or negative. This corresponds to hyperbolic, parabolic and elliptic elements, and the eigenvalues and eigenfunctions have different behaviour per class, cf. \cite{KVdJ98}. \subsubsection{Case 1: $s^2-r^2>0$} In this case eigenfunctions of $\pi_k(L)$ can be given in terms of Meixner polynomials. The orthonormal Meixner polynomials \cite[Section 9.10]{KLS} are defined by \[ M_n(x)= M_n(x;\beta,c) = (-1)^n \sqrt{ \frac{(\beta)_n }{n!}c^n} \rFs{2}{1}{-n,-x}{\beta}{1-\frac{1}{c}}, \] where $\beta>0$ and $0<c<1$. They satisfy the three-term recurrence relation \[ \begin{split} &\frac{(1-c)(x+\frac12\beta)}{\sqrt c} M_n(x) = \\ &\sqrt{(n+1)(n+\beta)} M_{n+1}(x) + \frac{ (c+1)(n+\frac12\beta) }{\sqrt c} M_n(x) + \sqrt{n(n-1+\beta)} M_{n-1}(x). \end{split} \] Their orthogonality relations are given by \[ \sum_{x\in \mathbb N} \frac{ (\beta)_x }{x!}c^x (1-c)^{\beta} M_n(x) M_{n'}(x) = \delta_{n,n'}.
\] \begin{theorem} \label{thm:Meixner} Let \[ W_t(x) = \frac{ (2k)_x }{x!}c(t)^x (1-c(t))^{2k}, \qquad x \in \mathbb N,\, t>0, \] where $c(t) \in (0,1)$ is determined by $\frac{s}{r}=\frac{1+c}{2\sqrt c}$, or equivalently $c(t) = e^{-2 \arccosh(\frac{s(t)}{r(t)})}$. Define for $t>0$ the operator $U_t:\ell^2(\mathbb N) \to \ell^2(\mathbb N,W_t)$ by \[ [U_te_n](x) = M_n(x;2k,c(t)), \] then $U_t$ is unitary and $U_t\circ \pi_k(L(t)) \circ U_t^* = M(2C(x+k))$ where $C = \sqrt{s^2-r^2}$. \end{theorem} \begin{proof} The proof runs along the same lines as the proof of Theorem \ref{thm:Krawtchouk}. The condition $s^2-r^2>0$ implies $s/r>1$, so there exists a $c = c(t) \in (0,1)$ such that \[ \frac{s}{r} = \frac{ 1+c }{2\sqrt c}. \] It follows from the three-term recurrence relation for Meixner polynomials that $r^{-1}L$ has eigenvalues $\frac{(1-c)(x+k)}{\sqrt c}$, $x \in \mathbb N$. Write $c=e^{-2a}$ with $a>0$, then $\frac{ 1+c }{2\sqrt c}= \cosh(a)$, so that \[ \frac{1-c}{2\sqrt{c}} = \sinh(a)= \sqrt{\cosh^2(a) -1 } = \sqrt{\frac{s^2}{r^2}-1} = \frac{C}{r}, \] where $C = \sqrt{s^2 -r^2}$. \end{proof} \subsubsection{Case 2: $s^2-r^2=0$} In this case we need the orthonormal Laguerre polynomials \cite[Section 9.12]{KLS}, which are defined by \[ L_n(x)= L_n(x;\alpha) = (-1)^n \sqrt{ \frac{(\alpha+1)_n}{n!} } \rFs{1}{1}{-n}{\alpha+1}{x}. \] They satisfy the three-term recurrence relation \[ x L_n(x) = \sqrt{(n+\alpha+1)(n+1)}\, L_{n+1}(x) + (2n+\alpha+1)L_n(x) + \sqrt{n(n+\alpha)}\, L_{n-1}(x), \] and the orthogonality relations are \[ \int_0^\infty L_n(x) L_{n'}(x) \, \frac{x^\alpha e^{-x}}{\Gamma(\alpha+1)}\, dx = \delta_{n,n'}. \] The set $\{L_n \mid n \in \mathbb N\}$ is an orthonormal basis for the corresponding weighted $L^2$-space. Using the three-term recurrence relation for the Laguerre polynomials we obtain the following result.
\begin{theorem} \label{thm:Laguerre} Let \[ W_t(x) = \frac{x^{2k-1} r(t)^{-2k} e^{-\frac{x}{r(t)}} }{\Gamma(2k)},\qquad x \in [0,\infty), \] and let $U_t:\ell^2(\mathbb N) \to L^2([0,\infty),W_t(x)dx)$ be defined by \[ [U_te_n](x) = L_n\left(\frac{x}{r(t)};2k-1\right), \] then $U_t$ is unitary and $U_t \circ \pi_k(L(t)) \circ U_t^{*} = M(x)$. \end{theorem} \subsubsection{Case 3: $s^2-r^2<0$} In this case we need the orthonormal Meixner-Pollaczek polynomials \cite[Section 9.7]{KLS} given by \[ P_n(x) = P_n(x;\lambda,\phi) = e^{in\phi} \sqrt{ \frac{(2\lambda)_n}{n!}} \rFs{2}{1}{-n, \lambda+ix}{2\lambda}{1-e^{-2i\phi}}, \] where $\lambda>0$ and $0<\phi<\pi$. The three-term recurrence relation for these polynomials is \[ 2x\sin\phi\, P_n(x) = \sqrt{(n+1)(n+2\lambda)}\,P_{n+1}(x) - 2(n+\lambda)\cos\phi \, P_n(x) + \sqrt{n(n+2\lambda-1)}\, P_{n-1}(x), \] and the orthogonality relations read \[ \begin{gathered} \int_{-\infty}^\infty P_n(x) P_{n'}(x)\, w(x;\lambda,\phi)\, dx = \delta_{n,n'},\\ w(x;\lambda,\phi) = \frac{(2\sin\phi)^{2\lambda}}{2\pi\,\Gamma(2\lambda)} e^{(2\phi-\pi)x} |\Gamma(\lambda+ix)|^2. \end{gathered} \] The set $\{P_n \mid n \in \mathbb N\}$ is an orthonormal basis for the weighted $L^2$-space. \begin{theorem} \label{thm:Meixner-Pollaczek} For $\phi(t) = \arccos(\frac{s(t)}{r(t)})$ let \[ W_t(x) = w(x;k,\phi(t)), \qquad x \in \mathbb R, \] and let $U_t : \ell^2(\mathbb N) \to L^2(\mathbb R,W_t(x)dx)$ be defined by \[ [U_t e_n](x) = P_n(x;k,\phi(t)) \] then $U_t$ is unitary and $U_t \circ \pi_k(L(t)) \circ U_t^{*} = M(-2Cx)$, where $C = \sqrt{r^2-s^2}$. \end{theorem} \begin{proof} The proof is similar to the previous ones. Using the three-term recurrence relation for the Meixner-Pollaczek polynomials it follows that the generalized eigenvalue of $r^{-1}\pi_k(L)$ is $- 2x \sin(\phi)$, where $\phi \in (0,\pi)$ is determined by $-\frac{s}{r} = \cos\phi$.
Then \[ \sin\phi = \sqrt{1-\frac{s^2}{r^2}} = \frac{C}{r}, \] from which the result follows. \end{proof} \subsection{The Lax operator in the principal unitary series} The action of the Lax operator $L$ in the principal unitary series as a Jacobi operator on $\ell^2(\mathbb Z)$ is given by \[ \begin{split} \pi_{\lambda,\varepsilon} (L(t)) e_n &= r(t)\sqrt{(n+\varepsilon+\lambda+1)(n+\varepsilon-\lambda)}\, e_{n+1} + s(t)(2\varepsilon+2n) e_n \\ & \quad + r(t)\sqrt{ (n+\varepsilon+\lambda)(n+\varepsilon-\lambda-1)}\, e_{n-1}. \end{split} \] Again we distinguish between the cases where the invariant $s^2-r^2$ is either positive, negative or zero. \subsubsection{Case 1: $s^2-r^2>0$} The Meixner functions \cite{GroeK2002} are defined by \[ \begin{split} m_n(x) = m_n(x;\lambda,\varepsilon,c) &= \left(\frac{\sqrt{c}}{c-1}\right)^n \frac{ \sqrt{ \Gamma(n+\varepsilon+\lambda+1) \Gamma(n+\varepsilon-\lambda) } }{(1-c)^\varepsilon \Gamma(n+1-x)} \\ & \quad \times \rFs{2}{1}{n+\varepsilon+\lambda+1,n+\varepsilon-\lambda}{n+1-x}{\frac{c}{c-1}}, \end{split} \] for $x,n \in \mathbb Z$ and $c \in (0,1)$. The parameters $\lambda$ and $\varepsilon$ are the labels from the principal unitary series. The Meixner functions satisfy the three-term recurrence relation \[ \begin{split} \frac{(1-c)(x+\varepsilon)}{\sqrt{c}}m_n(x) & = \sqrt{(n+\varepsilon+\lambda+1)(n+\varepsilon-\lambda)}\, m_{n+1}(x) \\ & \quad + \frac{(c+1)(n+\varepsilon)}{\sqrt{c}} m_n(x) + \sqrt{(n+\varepsilon+\lambda)(n+\varepsilon-\lambda-1)} \, m_{n-1}(x), \end{split} \] and the orthogonality relations read \[ \sum_{x \in \mathbb Z} \frac{ c^{-x} }{ \Gamma(x+ \varepsilon + \lambda +1) \Gamma(x+\varepsilon-\lambda) } m_n(x) m_{n'}(x) = \delta_{n,n'}. \] The set $\{ m_n \mid n \in \mathbb Z\}$ is an orthonormal basis for the weighted $L^2$-space.
\begin{theorem} \label{thm:Meixner functions} For $t>0$ let \[ W_t(x) = \frac{ c(t)^{-x} }{ \Gamma(x+ \varepsilon + \lambda +1) \Gamma(x+\varepsilon-\lambda)}, \] where $c(t) \in (0,1)$ is determined by $\frac{s(t)}{r(t)}=\frac{1+c(t)}{2\sqrt{c(t)}}$, or equivalently $c(t) = e^{-2 \arccosh(\frac{s(t)}{r(t)})}$. Define $U_t:\ell^2(\mathbb Z) \to \ell^2(\mathbb Z,W_t)$ by \[ [U_t e_n](x) = m_n(x;\lambda,\varepsilon,c(t)), \] then $U_t$ is unitary and $U_t \circ \pi_{\lambda,\varepsilon}(L(t)) \circ U_t^* = M(2C(x+\varepsilon))$, where $C = \sqrt{s^2 - r^2}$. \end{theorem} \subsubsection{Case 2: $s^2-r^2=0$} In this case we need Laguerre functions \cite{Groe2003} defined by \[ \psi_n(x) = \psi_n(x;\lambda,\varepsilon) = \begin{cases} \begin{split} (-1)^n & \sqrt{\Gamma(n+\varepsilon+\lambda+1) \Gamma(n+\varepsilon-\lambda)} \\ &\quad \times U(n+\varepsilon+\lambda+1;2\lambda+2;x) \end{split} & x>0\\ \\ \begin{split} &\sqrt{ \Gamma(-n-\varepsilon-\lambda) \Gamma(1-n-\varepsilon+\lambda)} \\ & \quad \times U(-n-\varepsilon-\lambda;-2\lambda;-x) \end{split} & x<0, \end{cases} \] where $x\in\mathbb R$, $n \in \mathbb Z$, and $U(a;b;z)$ is Tricomi's confluent hypergeometric function, see e.g.~ \cite[(1.3.1)]{Sl}, for which we use its principal branch with branch cut along the negative real axis. The Laguerre functions $\{\psi_n \mid n \in \mathbb Z\}$ form an orthonormal basis for $L^2(\mathbb R,w(x)dx)$ where \[ w(x)=w(x;\lambda,\varepsilon) = \frac1{\pi^2} \sin\left( \pi(\varepsilon+\lambda+1) \right) \sin \left(\pi( \varepsilon-\lambda)\right) e^{-|x|}. \] The three-term recurrence relation reads \[ \begin{split} -x \psi_n(x) &= \sqrt{(n+\varepsilon+\lambda+1)(n+\varepsilon-\lambda)} \,\psi_{n+1}(x) \\ & \quad + 2(n+\varepsilon)\, \psi_n(x) + \sqrt{(n+\varepsilon+\lambda)(n+\varepsilon-\lambda-1)}\, \psi_{n-1}(x).
\end{split} \] \begin{theorem} \label{thmL Laguerre functions} Let \[ W_t(x) = \frac{1}{r(t)} w\left( \frac{x}{r(t)};\lambda,\varepsilon\right), \qquad x \in \mathbb R, \] and let $U_t : \ell^2(\mathbb Z) \to L^2(\mathbb R,W_t(x)dx)$ be defined by \[ [U_t e_n](x) = \psi_n\left( \frac{x}{r(t)};\lambda,\varepsilon\right), \] then $U_t$ is unitary and $U_t \circ \pi_{\lambda,\varepsilon}(L(t)) \circ U_t^{*} = M(-x)$. \end{theorem} \subsubsection{Case 3: $s^2-r^2<0$} The Meixner-Pollaczek functions \cite[\S4.4]{Koel2004} are defined by \[ \begin{split} u_n(x) = u_n(x;\lambda,\varepsilon,\phi) & = (2i\sin\phi)^{-n} \frac{ \sqrt{\Gamma(n+1+\varepsilon+\lambda)\Gamma(n+\varepsilon-\lambda) }}{\Gamma(n+1+\varepsilon-ix)} \\ & \quad \times \rFs{2}{1}{n+1+\varepsilon+\lambda,n+\varepsilon-\lambda}{n+1+\varepsilon-ix}{\frac{1}{1-e^{-2i\phi}}}. \end{split} \] Define \[ W(x;\lambda,\varepsilon,\phi)= w_0(x)\begin{pmatrix} 1 & - w_1(x) \\ -\overline{w_1}(x) & 1 \end{pmatrix}, \qquad x \in \mathbb R, \] where $\overline{f}(x) = \overline{f(x)}$ and \[ \begin{split} w_1(x;\lambda,\varepsilon) &= \frac{ \Gamma(\lambda+1+ix) \Gamma(-\lambda+ix) }{\Gamma(ix-\varepsilon)\Gamma(1+\varepsilon-ix)}\\ w_0(x;\varepsilon,\phi) &= (2\sin\phi)^{-2\varepsilon} e^{(2\phi-\pi)x}. \end{split} \] Let $L^2(\mathbb R,W(x)dx)$ be the Hilbert space consisting of functions $\mathbb R \to \mathbb C^2$ with inner product \[ \langle f,g \rangle = \int_{-\infty}^\infty g^t(x) W(x) f(x)\, dx \] where $g^t(x)$ denotes the conjugate transpose of $g(x) \in \mathbb C^2$. The set $\{(\begin{smallmatrix}u_n \\ \overline{u_n} \end{smallmatrix}) \mid n \in \mathbb Z\}$ is an orthonormal basis for $L^2(\mathbb R,W(x)dx)$.
The three-term recurrence relation for the Meixner-Pollaczek functions is \[ \begin{split} 2x \sin\phi\, u_n(x) & = \sqrt{(n+\varepsilon+\lambda+1)(n+\varepsilon-\lambda)}\, u_{n+1}(x) \\ &\quad + 2(n+\varepsilon)\cos\phi\, u_n(x) + \sqrt{(n+\varepsilon+\lambda)(n+\varepsilon-\lambda-1)}\, u_{n-1}(x). \end{split} \] The function $\overline{u_n}$ satisfies the same recurrence relation. \begin{theorem} \label{thm:Meixner-Pollaczek functions} For $\phi(t) = \arccos(\frac{s(t)}{r(t)})$ let \[ W_t(x) = W(x;\lambda,\varepsilon,\phi(t)), \] and let $U_t : \ell^2(\mathbb Z) \to L^2(\mathbb R,W_t(x)dx)$ be defined by \[ [U_t e_n](x) = \begin{pmatrix} u_n(x;\lambda,\varepsilon,\phi(t)) \\ \overline{u_n}(x;\lambda,\varepsilon,\phi(t)) \end{pmatrix}, \] then $U_t$ is unitary and $U_t \circ \pi_{\lambda,\varepsilon}(L(t)) \circ U_t^{*} = M(2Cx)$, where $C = \sqrt{r^2-s^2}$. \end{theorem} Note that the spectrum of $\pi_{\lambda,\varepsilon}(L(t))$ has multiplicity 2. \begin{remark} Transferring a three-term recurrence on $\mathbb Z$ to a three-term recurrence for $2\times 2$ matrix orthogonal polynomials, see \cite[\S VII.3]{Bere}, \cite[\S 3.2]{Koel}, does not lead to an example of the nonabelian Toda lattice \cite{BrusMRL}, \cite{Gekh}, \cite{IsmaKR}. \end{remark} \section{The oscillator algebra $\mathfrak b(1)$}\label{sec:oscillatoralgebra} The oscillator algebra $\mathfrak b(1)$ is the Lie $*$-algebra $\mathfrak g(a,b)$ with $(a,b)=(0,1)$ and $\varepsilon=+$. Then $\mathfrak b(1)$ has a basis $E,F,H,N$ satisfying \[ [E,F]=N, \quad [H,E]=2E, \quad [H,F]=-2F, \quad [N,E]=[N,F]=[N,H]=0. \] The $*$-structure is defined by $H^*=H$, $N^*=N$, $E^*=F$. The Lax pair $L,M$ is given by \[ L(t) = cH + r(t)(E+F) + s(t) N, \qquad M(t) = u(t) (E- F). \] The differential equations for $s$ and $r$ are in this case given by \[ \begin{split} \dot s &= 2ru\\ \dot r &= -2cu \end{split} \] and the invariant is $r^2+2cs$.
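For the Toda choice $u=r$ with $c \neq 0$ these equations integrate in closed form, $r(t)=r(0)e^{-2ct}$ and $s(t)=s(0)+\frac{r(0)^2}{2c}\bigl(1-e^{-4ct}\bigr)$; this is a direct integration, not a formula taken from the text. The sketch below compares a Runge--Kutta integration with these expressions and checks conservation of $r^2+2cs$ (all numerical values are arbitrary test data).

```python
import numpy as np

# For u = r (the Toda normalization) and c != 0 the b(1) system
#   s' = 2 r u = 2 r^2,   r' = -2 c u = -2 c r
# has the closed-form solution  r(t) = r0*exp(-2ct),
#   s(t) = s0 + (r0^2/(2c))*(1 - exp(-4ct)).   All values are test data.
c, s0, r0 = 0.5, 0.3, 1.2
h, steps = 1e-3, 1500

def f(y):
    s_, r_ = y
    return np.array([2 * r_ ** 2, -2 * c * r_])

y = np.array([s0, r0])
for _ in range(steps):                 # classical RK4
    k1 = f(y); k2 = f(y + h / 2 * k1)
    k3 = f(y + h / 2 * k2); k4 = f(y + h * k3)
    y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
s, r = y
t = steps * h
assert abs(r - r0 * np.exp(-2 * c * t)) < 1e-8
assert abs(s - (s0 + r0 ** 2 / (2 * c) * (1 - np.exp(-4 * c * t)))) < 1e-8
# the invariant r^2 + 2cs keeps its initial value
assert abs(r ** 2 + 2 * c * s - (r0 ** 2 + 2 * c * s0)) < 1e-8
```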
\begin{lemma} \label{lem:sign r s for b(1)} Assume $\sgn(u(t))=\sgn(r(t))$ for all $t>0$, $s(0)>0$ and $r(0)>0$. Then $s(t)>0$ and $r(t)>0$ for all $t>0$. \end{lemma} \begin{proof} The proof is similar to the proof of Lemma \ref{lem:sign r s for su(1,1)}, where in this case $I(r,s)=I(r(0),s(0))$ describes a parabola ($c \neq 0$) or a straight line ($c=0$). \end{proof} Throughout this section we assume the conditions of Lemma \ref{lem:sign r s for b(1)} are satisfied.\\ There is a family of irreducible $*$-representations $\pi_{k,h}$, $h>0$, $k\geq0$, on $\ell^2(\mathbb N)$ defined by \[ \begin{split} \pi_{k,h}(N) e_n &= -h\, e_n\\ \pi_{k,h}(H) e_n &= 2(k+n)\, e_n \\ \pi_{k,h}(E) e_n &= \sqrt{h (n+1)} \, e_{n+1}\\ \pi_{k,h}(F) e_n &= \sqrt{ hn}\, e_{n-1}. \end{split} \] The action of the Lax operator on the basis of $\ell^2(\mathbb N)$ is given by \[ \pi_{k,h}(L(t)) e_n = r(t)\sqrt{h(n+1)}\, e_{n+1}+\left[2c(n+k) - h s(t) \right]e_n + r(t)\sqrt{hn}\, e_{n-1}. \] For the diagonalization of $\pi_{k,h}(L)$ we distinguish between the cases $c \neq 0$ and $c = 0$. \subsection{Case 1: $c \neq 0$} In this case we need the orthonormal Charlier polynomials \cite[Section 9.14]{KLS}, which are defined by \[ C_n(x) = C_n(x;a) = \sqrt{ \frac{ a^n }{n!} } \rFs{2}{0}{-n,-x}{\text{--}}{-\frac1a}, \] where $a>0$ and $n,x \in \mathbb N$. The orthogonality relations are \[ \sum_{x=0}^\infty \frac{a^x e^{-a}}{x!} C_n(x) C_{n'}(x) = \delta_{n,n'}, \] and $\{C_n \mid n \in \mathbb N\}$ is an orthonormal basis for the corresponding $L^2$-space. The three-term recurrence relation reads \[ -xC_n(x) = \sqrt{a(n+1)} \, C_{n+1}(x) - (n+a) C_n(x) + \sqrt{an} \, C_{n-1}(x).
\] \begin{theorem} \label{thm:Charlier} For $t>0$ define \[ W_t(x) = \frac{1}{x!}\left( \frac{h r^2(t) }{4c^2} \right)^x e^{-\frac{hr^2(t)}{4c^2}} \] and let $U_t:\ell^2(\mathbb N) \to \ell^2(\mathbb N, W_t)$ be defined by \[ [U_t e_n](x) = \left( -\sgn(r/c) \right)^n C_n\left(x ;\frac{hr^2(t)}{4c^2}\right), \qquad x \in \mathbb N. \] Then $U_t$ is unitary and $U_t \circ \pi_{k,h}(L(t)) \circ U_t^{*} = M(2c(x+k) - Ch)$, where $C=\frac1{2c}r^2+s$. \end{theorem} \begin{proof} The action of $L$ can be written in the following form, \[ \begin{split} \pi_{k,h}\Big(\frac{1}{2c}L & + \frac{hr^2}{4c^2} + \frac{hs}{2c}-k \Big) e_n = \\ & \sgn(r/c) \sqrt{\frac{hr^2(n+1)}{4c^2}}\, e_{n+1} +\left(n + \frac{hr^2}{4c^2}\right) e_n + \sgn(r/c) \sqrt{\frac{hr^2 n}{4c^2} } e_{n-1} \end{split} \] and recall that $\frac{1}{2c}r^2+ s$ is constant. The result then follows from comparing with the three-term recurrence relation for the Charlier polynomials. \end{proof} \subsection{Case 2: $c=0$} In this case $\dot r=0$, so $r$ is a constant function. We use the orthonormal Hermite polynomials \cite[Section 9.15]{KLS}, which are given by \[ H_n(x) = \frac{(\sqrt 2 \, x)^n }{\sqrt{n!}} \rFs{2}{0}{-\frac{n}{2}, - \frac{n-1}{2} }{\text{--}}{-\frac{1}{x^2} }. \] They satisfy the orthogonality relations \[ \frac{1}{\sqrt{\pi}} \int_\mathbb R H_n(x) H_{n'}(x) e^{-x^2}\, dx = \delta_{n,n'}, \] and $\{H_n \mid n \in \mathbb N\}$ is an orthonormal basis for $L^2(\mathbb R, e^{-x^2}dx/\sqrt{\pi})$. The three-term recurrence relation is given by \[ \sqrt{2}\,x H_n(x) = \sqrt{n+1}\, H_{n+1}(x) + \sqrt{n} H_{n-1}(x). \] \begin{theorem} \label{thm:Hermite} For $t>0$ define \[ W_t(x) = \frac{1}{r\sqrt{2h\pi}} e^{-\frac{(x+h s(t))^2}{2hr^2}}, \] and let $U_t:\ell^2(\mathbb N) \to L^2(\mathbb R,W_t(x)\,dx)$ be defined by \[ [U_t e_n](x) = H_n\left(\frac{x+h s(t)}{r\sqrt{2h}}\right), \] then $U_t$ is unitary and $U_t \circ \pi_{k,h}(L(t)) \circ U_t^{*} = M(x)$.
\end{theorem} \begin{proof} We have \[ \pi_{k,h}\left(\frac{1}{r \sqrt{h}}(L+sh)\right)e_n = \sqrt{n+1}\, e_{n+1} + \sqrt{n}\, e_{n-1}, \] which corresponds to the three-term recurrence relation for the Hermite polynomials. \end{proof} \section{The Lie algebra $\mathfrak{e}(2)$}\label{sec:casee2} We consider the Lie algebra $\mathfrak g(a,b)$ with $a=b=0$ and $\varepsilon=+$. As in the case of $\mathfrak{sl}(2,\mathbb C)$, we omit the basis element $N$ again. The remaining Lie algebra is $\mathfrak e(2)$ with basis $H, E, F$ satisfying \[ [E,F]=0, \quad [H,E]=2E, \quad [H,F]=-2F, \] and the $*$-structure is determined by $E^*=F, \quad H^*=H$. The Lax pair is given by \[ L(t) = cH + r(t)(E+F), \qquad M(t) = u(t) (E-F), \] with $\dot r = -2cu$. The Lie algebra $\mathfrak e(2)$ has a family of irreducible $*$-representations $\pi_k$, $k>0$, on $\ell^2(\mathbb Z)$ given by \[ \begin{split} \pi_k(H) e_n &= 2n\, e_n, \\ \pi_k(E) e_n &= k e_{n+1}, \\ \pi_k(F)e_n &= k e_{n-1}. \end{split} \] This defines an unbounded representation. As a dense domain we use the set of finite linear combinations of the basis elements. Assume $c \neq 0$. The Lax operator $\pi_k(L(t))$ is a Jacobi operator on $\ell^2(\mathbb Z)$ given by \[ \pi_k(L(t))e_n = kr(t) e_{n+1} + 2cn e_n + kr(t) e_{n-1}. \] For the diagonalization of $\pi_k(L)$ we use the Bessel functions $J_n$ \cite{Wat, AAR} given by \[ J_n(z) = \frac{ z^n }{2^n \Gamma(n+1) } \rFs{0}{1}{\text{--}}{n+1}{-\frac{z^2}{4}}, \] with $z \in \mathbb R$ and $n \in \mathbb Z$. They satisfy the Hansen-Lommel type orthogonality relations, which follow from \cite[(4.9.15), (4.9.16)]{AAR}, \[ \sum_{m \in \mathbb Z} J_{m-n}(z) J_{m-n'}(z) = \delta_{n,n'}, \] and the set $\{J_{\cdot -n}(z) \mid n \in \mathbb Z\}$ is an orthonormal basis for $\ell^2(\mathbb Z)$.
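The Hansen-Lommel relations can be tested numerically with \texttt{scipy} (a sketch; $z$ is an arbitrary test value, and truncating the sum is harmless because $J_\nu(z)$ decays superexponentially as $|\nu|\to\infty$):

```python
import numpy as np
from scipy.special import jv

# Check sum_m J_{m-n}(z) J_{m-n'}(z) = delta_{n,n'} for a few n, n'.
z = 2.5                      # arbitrary test value (plays the role of kr/c)
m = np.arange(-60, 61)       # truncation; jv is tiny for |order| >> z
for n in range(-2, 3):
    for np_ in range(-2, 3):
        total = np.sum(jv(m - n, z) * jv(m - np_, z))
        assert abs(total - (1.0 if n == np_ else 0.0)) < 1e-10
```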
A well-known recurrence relation for $J_n$ is \[ J_{n-1}(z) + J_{n+1}(z) = \frac{2n}{z}J_n(z), \] which is equivalent to \[ zJ_{m-n-1}(z)+2nJ_{m-n}(z) + z J_{m-n+1}(z) = 2m J_{m-n}(z). \] \begin{theorem} For $t>0$ define $U_t: \ell^2(\mathbb Z) \to \ell^2(\mathbb Z)$ by \[ [U_t e_n](m) = J_{m-n}\left(\frac{kr(t)}{c}\right), \] then $U_t$ is unitary and $U_t \circ \pi_k(L(t)) \circ U_t^{*} = M(2cm)$. \end{theorem} Finally, let us consider the completely degenerate case $c=0$. In this case $r$ is also a constant function, so there are no differential equations to solve. We can still diagonalize the (degenerate) Lax operator, which is now independent of time. \begin{theorem} Define $U:\ell^2(\mathbb Z) \to L^2[0,2\pi]$ by \[ [Ue_n](x) = \frac{e^{inx}}{\sqrt{2\pi}}, \] then $U$ is unitary and $U \circ \pi_k(L)\circ U^* = M(2kr \cos(x))$. \end{theorem} \section{Modification of orthogonality measures}\label{sec:modification} In this section we briefly investigate the orthogonality measures from the previous sections in case the Lax operator $L(t)$ acts as a finite or semi-infinite Jacobi matrix. In these cases the functions $U_te_n$ are $t$-dependent orthogonal polynomials and we see that the weight function $W_t$ of the orthogonality measure for $U_t e_n$ is a modification of the weight function $W_0$ in the sense that \[ W_t(x) = K_t W_0(x) m(t)^x, \] where $K_t$ is independent of $x$. The modification function $m(t)$ depends on the functions $s$ or $r$, which (implicitly) depend on the function $u$. We show how the choice of $u$ affects the modification function $m$. \begin{theorem} \label{thm:modifications} There exists a constant $K$ such that \[ m(t) = \exp\left( K\int_0^t \frac{ u(\tau) }{r(\tau)}\, d\tau \right), \qquad t \geq 0. \] \end{theorem} \begin{remark} In the Toda-lattice case, $u(t) = r(t)$, this gives back the well-known modification function $m(t) = e^{Kt}$, see e.g.~\cite[Theorem 2.8.1]{Isma}.
\end{remark} Theorem \ref{thm:modifications} can be checked for each case by a straightforward calculation: we express $m$ as a function of $s$ and $r$, \[ m(t) = A_0 F(s(t),r(t)), \] where $A_0$ is a normalizing constant such that $m(0)=1$. Then differentiating and using the differential equations for $r$ and $s$ we can express $\dot m/ m$ in terms of $u$ and $r$. \subsection{$\mathfrak{su}(2)$} From Theorem \ref{thm:Krawtchouk} we see that \[ m(t) = A_0 \frac{p(t)}{1-p(t)} = A_0 \frac{C+s(t)}{C-s(t)} \] with $C = \sqrt{s^2+r^2}$. Differentiating with respect to $t$ and using the relation $\dot s(t) = 2 u(t) r(t)$ then gives \[ \frac{\dot m(t) }{m(t) } = \frac{ 4C u(t)r(t) }{C^2-s(t)^2} = 4C \frac{ u(t)}{r(t)}. \] \subsection{$\mathfrak{su}(1,1)$} For $s^2-r^2>0$ Theorem \ref{thm:Meixner} shows that \[ m(t) = A_0e^{-2\arccosh\left( \frac{s(t)}{r(t)} \right)}. \] Then from $\dot s(t) = -2u(t)r(t)$ and $\dot r(t) = -2u(t)s(t)$ it follows that \[ \frac{\dot m(t) }{m(t) } = \frac{-2}{\sqrt{\frac{s(t)^2}{r(t)^2}-1}} \frac{ r(t) \dot s(t) - s(t) \dot r(t) }{r(t)^2} = -4C \frac{u(t) }{r(t)}, \] where $C = \sqrt{s^2-r^2}$. For $s^2-r^2=0$ Theorem \ref{thm:Laguerre} shows that \[ m(t)= A_0 e^{-\frac{1}{r(t)}}. \] Then using $\dot r(t) = -2u(t)r(t)$, which follows from $s=r$ in this case, we find \[ \frac{\dot m(t) }{m(t)} = \frac{\dot r(t)}{r(t)^2} = -2\, \frac{u(t)}{r(t)} . \] For $s^2-r^2<0$ it follows from Theorem \ref{thm:Meixner-Pollaczek} that \[ m(t) = A_0 e^{2\arccos\left( \frac{s(t) }{r(t)} \right)}. \] Then from $\dot s(t) = -2u(t)r(t)$ and $\dot r(t) = -2u(t)s(t)$ it follows that \[ \frac{\dot m(t) }{m(t) } = \frac{-2}{\sqrt{1-\frac{s(t)^2}{r(t)^2}}} \frac{ r(t) \dot s(t) - s(t) \dot r(t) }{r(t)^2} = 4C \frac{u(t) }{r(t)}, \] where $C = \sqrt{r^2-s^2}$. \subsection{$\mathfrak b(1)$} For $c \neq 0$ we see from Theorem \ref{thm:Charlier} that \[ m(t) = A_0 r(t)^2. \] The relation $\dot r(t) = -2c u(t)$ then leads to \[ \frac{ \dot m(t) }{m(t) } = -4c\frac{u(t)}{r(t)}.
\] For $c=0$ Theorem \ref{thm:Hermite} shows that \[ m(t) = A_0 e^{\frac{s(t)}{r}}. \] Note that $r=r(t)$ is constant in this case. Then $\dot s(t) = 2ru(t)$ leads to \[ \frac{\dot m(t)}{m(t) }= 2u(t) = 2r \frac{ u(t) }{r}. \] \begin{remark} The result from Theorem \ref{thm:modifications} is also valid for the orthogonal functions from Theorems \ref{thm:Meixner functions} and \ref{thm:Meixner-Pollaczek functions}, i.e.~for $L(t)$ acting as a Jacobi operator on $\ell^2(\mathbb Z)$ in the principal unitary series for $\mathfrak{su}(1,1)$ in the cases $r^2-s^2 \neq 0$. However, there is no similar modification function in the other cases where $L(t)$ acts as a Jacobi operator on $\ell^2(\mathbb Z)$. Furthermore, the corresponding recurrence relations for the functions on $\mathbb Z$ can be rewritten as recurrence relations for $2\times 2$ matrix orthogonal polynomials, but in none of these cases is the modification of the weight function as in Theorem \ref{thm:modifications}. \end{remark} \section{The case of $\mathfrak{sl}(d+1,\mathbb C)$}\label{sec:sld+1} We generalize the situation of the Lax pair for the finite-dimensional representation of $\mathfrak{sl}(2,\mathbb C)$ to the higher rank case of $\mathfrak{sl}(d+1,\mathbb C)$. Let $E_{i,j}$ be the matrix units forming a basis of $\mathfrak{gl}(d+1,\mathbb C)$. We label $i,j\in \{0,1,\cdots, d\}$. We put $H_i= E_{i-1,i-1}-E_{i,i}$, $i\in \{1,\cdots, d\}$, for the elements spanning the Cartan subalgebra of $\mathfrak{sl}(d+1,\mathbb C)$.
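The $\mathfrak{su}(2)$ computation in this section can also be verified symbolically. The following sketch (an illustration, not part of the paper) assumes only the invariant $C^2 = s^2 + r^2$ and the relation $\dot s = 2ur$ quoted above, and checks that $\dot m/m = 4C\,u/r$ for $m = (C+s)/(C-s)$:

```python
import sympy as sp

t = sp.symbols('t')
C = sp.symbols('C', positive=True)        # the invariant C = sqrt(s^2 + r^2)
u = sp.Function('u')(t)
s = sp.Function('s')(t)
r = sp.sqrt(C**2 - s**2)                  # express r through the invariant

m = (C + s) / (C - s)                     # modification function, up to A_0
# substitute the differential equation s' = 2 u r after differentiating
mdot = sp.diff(m, t).subs(sp.Derivative(s, t), 2*u*r)

# claim from the text: m'/m = 4C u/r
print(sp.simplify(mdot / m - 4*C*u/r))    # 0
```

The same pattern (differentiate, substitute the flow, simplify using the invariant) verifies the other cases of Theorem \ref{thm:modifications}.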
\subsection{The Lax pair} \begin{proposition}\label{prop:sld+1-Laxpair} Let \begin{align*} L(t) = \sum_{i=1}^d s_i(t) H_i + \sum_{i=1}^d r_i(t) \bigl( E_{i-1,i}+ E_{i,i-1}\bigr), \qquad M(t) = \sum_{i=1}^d u_i(t) \bigl( E_{i-1,i} - E_{i,i-1} \bigr) \end{align*} and assume that the functions $u_i$ and $r_i$ are non-zero for all $i$ and \[ \frac{r_{i-1}(t)}{r_i(t)} =\frac{u_{i-1}(t)}{u_i(t)}, \qquad i\in \{2,\cdots, d\}, \] then the Lax pair condition $\dot{L}(t)=[M(t),L(t)]$ is equivalent to \begin{align*} \dot{s}_i(t) &= 2r_i(t) u_i(t), \qquad i\in \{1,\cdots, d\} \\ \dot{r}_i(t) &= u_i(t) \bigl( s_{i-1}(t) - 2s_i(t) + s_{i+1}(t) \bigr), \qquad i\in \{2,\cdots, d-1\} \\ \dot{r}_1(t) &= u_1(t) \bigl( s_{2}(t) - 2s_1(t) \bigr), \\ \dot{r}_d(t) &= u_d(t) \bigl( s_{d-1}(t) -2s_d(t) \bigr). \end{align*} \end{proposition} Note that these equations can be written uniformly as \[ \dot{r}_i(t) = u_i(t) \bigl( s_{i-1}(t) - 2s_i(t) + s_{i+1}(t)\bigr), \qquad i\in \{1,\cdots, d\}, \] assuming the convention that $s_0(t)=s_{d+1}(t)=0$, which we adopt for the remainder of this section. The Toda case follows by taking $u_i=r_i$ for all $i$, see \cite{BabeBT}, \cite{Mose}. \begin{proof} The proof essentially follows as in \cite[\S 4.6]{BabeBT}, but since the situation is slightly more general we present the proof, see also \cite[\S 5]{Mose}. A calculation in $\mathfrak{sl}(d+1,\mathbb C)$ gives \begin{multline*} [M(t),L(t)] = \sum_{i=1}^d 2r_i(t)u_i(t) H_i + \sum_{i=1}^d u_i(t) \bigl( s_{i-1}(t) - 2s_i(t) + s_{i+1}(t) \bigr)(E_{i-1,i}+E_{i,i-1}) \\ + \sum_{i=1}^{d-1} \bigl(r_{i+1}(t) u_i(t) - r_{i}(t) u_{i+1}(t) \bigr) (E_{i-1,i+1}+E_{i+1,i-1}) \end{multline*} and the last term needs to vanish, since this term occurs neither in $L(t)$ nor in its derivative $\dot{L}(t)$. Now the stated coupled differential equations correspond to $\dot{L}=[M,L]$.
\end{proof} \begin{remark}\label{rmk:Ltinsld+1fromKrawtchouk} Taking the Lax pair for the $\mathfrak{su}(2)$ case in the $(d+1)$-dimensional representation as in Section \ref{sec:casee2}, we get, with $d=2j$, as an example \[ s_i(t) = s(t)i(i-1-d), \quad r_i(t) = r(t)\sqrt{i(d+1-i)}, \quad u_i(t) = u(t)\sqrt{i(d+1-i)}. \] Then the coupled differential equations of Proposition \ref{prop:sld+1-Laxpair} are equivalent to \eqref{eq:relations r s u su(2)}. \end{remark} Let $\{e_n\}_{n=0}^d$ be the standard orthonormal basis for $\mathbb C^{d+1}$, the natural representation of $\mathfrak{sl}(d+1,\mathbb C)$. Then $L(t)$ is a $t$-dependent tridiagonal matrix. Moreover, we assume that $r_i$ and $s_i$ are real-valued functions for all $i$, so that $L(t)$ is self-adjoint in the natural representation. \begin{lemma}\label{lem:polsLsld+1} Assume that the conditions of Proposition \ref{prop:sld+1-Laxpair} hold. Let the polynomials $p_n(\cdot;t)$ of degree $n \in \{0,1,\cdots, d\}$ in $\lambda$ be generated by the initial value $p_0(\lambda;t)=1$ and the recursion \[ \lambda p_n(\lambda;t) = \begin{cases} r_1(t) p_1(\lambda;t) + s_1(t) p_0(\lambda;t), & n=0 \\ r_{n+1}(t) p_{n+1}(\lambda;t) + (s_{n+1}(t)-s_n(t)) p_n(\lambda;t) + r_n(t) p_{n-1}(\lambda;t), & 1\leq n < d. \end{cases} \] Let $\{ \lambda_0, \cdots, \lambda_d\}$ be the set of solutions of \[ \lambda p_d(\lambda;t) = -s_d(t) p_d(\lambda;t) + r_d(t) p_{d-1}(\lambda;t). \] In the natural representation $L(t)$ has simple spectrum $\sigma(L(t))= \{ \lambda_0, \cdots, \lambda_d\}$, which is independent of $t$, and $\sum_{r=0}^d \lambda_r=0$ and \[ L(t) \sum_{n=0}^d p_n(\lambda_r;t)e_n = \lambda_r \, \sum_{n=0}^d p_n(\lambda_r;t)e_n, \quad r\in \{0,1,\cdots, d\}. \] \end{lemma} Note that with the choice of Remark \ref{rmk:Ltinsld+1fromKrawtchouk}, the polynomials in Lemma \ref{lem:polsLsld+1} are Krawtchouk polynomials, see Theorem \ref{thm:Krawtchouk}.
Explicitly, \begin{equation}\label{eq:rmk:Ltinsld+1fromKrawtchouk} p_n(C(d-2r);t) = \left( \frac{p(t)}{1-p(t)}\right)^{\frac12 n} \binom{d}{n}^{1/2} \rFs{2}{1}{-n, -r}{-d}{\frac{1}{p(t)}} = K_n(r;p(t),d), \end{equation} where $C=\sqrt{r^2(t)+s^2(t)}$ is invariant, see Theorem \ref{thm:Krawtchouk} and its proof. \begin{proof} In the natural representation we have \[ L(t) e_n = \begin{cases} r_{1}(t) e_1 + s_1(t) e_0, & n=0 \\ r_{n+1}(t)e_{n+1} + (s_{n+1}(t)-s_n(t))e_n +r_{n}(t) e_{n-1}, & 1\leq n <d \\ -s_d(t) e_d + r_d(t) e_{d-1}, & n= d \end{cases} \] so that $L(t)$ acts as a Jacobi operator. So the spectrum of $L(t)$ is simple, and the spectrum is time independent, since $(L(t),M(t))$ is a Lax pair. We can generate the corresponding eigenvectors as $\sum_{n=0}^d p_n(\lambda;t) e_n$, where the recursion follows from the expression of Lemma \ref{lem:polsLsld+1}. The eigenvalues are then determined by the final equation, and since $\mathrm{Tr}(L(t))=0$ we have $\sum_{i=0}^d \lambda_i=0$. \end{proof} Let $P(t) = \bigl(p_i(\lambda_j;t)\bigr)_{i,j=0}^d$ be the corresponding matrix of eigenvectors, so that \[ L(t) P(t) = P(t) \Lambda, \qquad \Lambda = \mathrm{diag}(\lambda_0, \lambda_1,\cdots, \lambda_d). \] Since $L(t)$ is self-adjoint in the natural representation, we find \begin{equation}\label{eq:orthorelpolLtsld+1} \sum_{n=0}^d p_n(\lambda_r;t) \overline{p_n(\lambda_s;t)} = \frac{\delta_{r,s}}{w_r(t)}, \qquad w_r(t)>0, \end{equation} and the matrix $Q(t) = \bigl(p_i(\lambda_j;t)\sqrt{w_j(t)} \bigr)_{i,j=0}^d$ is unitary. As $r_i$ and $s_i$ are real-valued, we have $\overline{p_n(\lambda_s;t)} = p_n(\lambda_s;t)$, so that $Q(t)$ is a real matrix, hence orthogonal. So the dual orthogonality relations to \eqref{eq:orthorelpolLtsld+1} hold as well. We will assume moreover that the $r_i$ are positive functions.
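The claim that the spectrum is $\{C(d-2r)\}$ for the choice of Remark \ref{rmk:Ltinsld+1fromKrawtchouk} can be checked numerically. A small sketch with illustrative values (the numbers $d=6$, $s=0.7$, $r=1.3$ are ours, not from the text); note that $s_{n+1}-s_n = s(2n-d)$ for $s_i = s\,i(i-1-d)$:

```python
import numpy as np

d, s, r = 6, 0.7, 1.3                       # illustrative values
C = np.hypot(s, r)                          # the invariant C = sqrt(s^2 + r^2)
n = np.arange(d + 1)
diag = s * (2*n - d)                        # diagonal entries s_{n+1} - s_n
k = np.arange(1, d + 1)
off = r * np.sqrt(k * (d + 1 - k))          # off-diagonal entries r_k
L = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

eigs = np.sort(np.linalg.eigvalsh(L))
expected = np.sort(C * (d - 2*np.arange(d + 1)))
print(np.allclose(eigs, expected))          # True
```

The eigenvalues form the arithmetic grid $C(d-2r)$, $r\in\{0,\dots,d\}$, on which the Krawtchouk polynomials are orthogonal.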
The dual orthogonality relations to \eqref{eq:orthorelpolLtsld+1} read \begin{equation}\label{eq:dualorthorelpolLtsld+1} \sum_{r=0}^d p_n(\lambda_r;t) p_m(\lambda_r;t) w_r(t) = \delta_{n,m}. \end{equation} Note that the $w_r(t)$ are essentially time-dependent Christoffel numbers \cite[\S 3.4]{Szeg}. By \cite[\S 2]{Mose}, see also \cite[Thm.~2]{DeifNT}, the eigenvalues and the $w_r(t)$'s determine the operator $L(t)$, and in the case of the Toda lattice, i.e.~$u_i(t) = r_i(t)$, the time evolution corresponds to linear first order differential equations for the Christoffel numbers \cite[\S 3]{Mose}. Since the spectrum is time-independent, the invariants for the system of Proposition \ref{prop:sld+1-Laxpair} are given by the coefficients of the characteristic polynomial of $L(t)$ in the natural representation. Since the characteristic polynomial is obtained by switching to the three-term recurrence for the corresponding monic polynomials, see \cite[\S 2.2]{Isma}, \cite[\S 2]{Mose}, this gives the same computation. For a Lax pair, the traces $\mathrm{Tr}(L(t)^k)$ are invariants, and in this case the invariant for $k=1$ is trivial since $L(t)$ is traceless. In this way we have $d$ invariants, $\mathrm{Tr}(L(t)^k)$, $k\in \{2,\cdots, d+1\}$.
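Conservation of $\mathrm{Tr}(L(t)^k)$ can be illustrated numerically in the Toda case $u_i = r_i$. A minimal sketch for $d=3$, with a hand-rolled RK4 integrator and arbitrary initial data of our choosing:

```python
import numpy as np

d = 3

def build_L(s, r):
    """Tridiagonal L(t) in the natural representation (s_0 = s_{d+1} = 0)."""
    se = np.concatenate(([0.0], s, [0.0]))
    diag = se[1:] - se[:-1]                 # diagonal entries s_{n+1} - s_n
    return np.diag(diag) + np.diag(r, 1) + np.diag(r, -1)

def flow(y):
    """Right hand side of the proposition's equations with u_i = r_i."""
    s, r = y[:d], y[d:]
    se = np.concatenate(([0.0], s, [0.0]))
    sdot = 2 * r * r                        # s_i' = 2 r_i u_i
    rdot = r * (se[:-2] - 2*se[1:-1] + se[2:])   # r_i' = r_i (s_{i-1} - 2 s_i + s_{i+1})
    return np.concatenate((sdot, rdot))

y = np.array([0.3, -0.5, 0.8, 1.0, 0.7, 1.2])   # arbitrary s_1..s_3, r_1..r_3
L0 = build_L(y[:d], y[d:])
I0 = np.trace(L0 @ L0)
h = 1e-3
for _ in range(2000):                       # classical RK4 steps up to t = 2
    k1 = flow(y); k2 = flow(y + h/2*k1)
    k3 = flow(y + h/2*k2); k4 = flow(y + h*k3)
    y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
L1 = build_L(y[:d], y[d:])
I1 = np.trace(L1 @ L1)
print(abs(I1 - I0) < 1e-8)                  # True: Tr(L^2) is conserved
```

The drift in $\mathrm{Tr}(L^2)$ is at the level of the integrator's truncation error, consistent with it being an exact invariant of the flow.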
\begin{lemma}\label{lem:Lsld+1-invariants} With the convention that $r_n$ and $s_n$ are zero for $n\notin\{1,\cdots,d\}$ we have the invariants \begin{align*} \mathrm{Tr}(L(t)^2) &= \sum_{n=0}^d (s_{n+1}(t)-s_n(t))^2 + 2\sum_{n=1}^d r_n(t)^2 \\ \mathrm{Tr}(L(t)^3) &= \sum_{n=0}^d (s_{n+1}(t)-s_n(t))^3 + 3\sum_{n=0}^d (s_{n+1}(t)-s_n(t)) r_n^2(t) \\ &\qquad \qquad + 3\sum_{n=0}^d (s_{n}(t)-s_{n-1}(t)) r_n^2(t) \end{align*} \end{lemma} \begin{proof} Write $L(t) = DS + D_0 + S^\ast D$ with $D=\mathrm{diag}(r_0(t), r_1(t),\cdots, r_d(t))$, $S\colon e_n\mapsto e_{n+1}$ the shift operator, $S^\ast \colon e_n\mapsto e_{n-1}$ its adjoint (with the convention $e_{-1}=e_{d+1}=0$ and $r_0(t)=0$), and $D_0$ the diagonal part of $L(t)$. Then \[ \mathrm{Tr}(L(t)^k) = \mathrm{Tr}((DS + D_0 + S^\ast D)^k) \] and we need to collect the terms that have the same number of $S$ and $S^\ast$ in the expansion. The trace property then allows us to collect terms, and we get \begin{align*} \mathrm{Tr}(L(t)^2) &= \mathrm{Tr}(D_0^2) + 2\mathrm{Tr}(D^2), \\ \mathrm{Tr}(L(t)^3) &= \mathrm{Tr}(D_0^3) + 3\mathrm{Tr}(D_0D^2) + 3\mathrm{Tr}(SD_0S^\ast D^2) \end{align*} and this gives the result, since $(SD_0S^\ast)_{n,n}= (D_0)_{n-1,n-1}$. \end{proof} We do not use Lemma \ref{lem:Lsld+1-invariants}, but we have included it to indicate the analogue of Corollary \ref{cor:constant function}. We can continue this and find e.g. \begin{align*} \mathrm{Tr}(L(t)^4) &= \mathrm{Tr}(D_0^4) + 2\mathrm{Tr}(D^4) + 4\mathrm{Tr}(D_0^2D^2) + 4\mathrm{Tr}(SD_0S^\ast D_0D^2) \\ & \qquad + 4\mathrm{Tr}(SD_0^2S^\ast D^2) + 4\mathrm{Tr}(SD^2S^\ast D^2) \end{align*} \subsection{Action of $L(t)$ in representations} We relate the eigenvectors of $L(t)$ in some explicit representations of $\mathfrak{sl}(d+1)$ to multivariable Krawtchouk polynomials, and we follow Iliev's paper \cite{Ilie}.
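Returning briefly to the invariants: the identity $\mathrm{Tr}(L(t)^2)=\mathrm{Tr}(D_0^2)+2\,\mathrm{Tr}(D^2)$ used in the proof above can be confirmed symbolically, e.g.\ for $d=3$ (a sketch, with the symbol names ours):

```python
import sympy as sp

d = 3
s = [0] + list(sp.symbols('s1:4')) + [0]    # convention s_0 = s_{d+1} = 0
r = [0] + list(sp.symbols('r1:4'))          # convention r_0 = 0
diag = [s[n+1] - s[n] for n in range(d+1)]  # diagonal of L(t)

L = sp.zeros(d + 1)
for n in range(d + 1):
    L[n, n] = diag[n]
for n in range(1, d + 1):
    L[n-1, n] = L[n, n-1] = r[n]            # symmetric off-diagonal

lhs = (L * L).trace()                       # Tr(L^2) directly
rhs = sum(x**2 for x in diag) + 2 * sum(x**2 for x in r[1:])
print(sp.expand(lhs - rhs))                 # 0
```

The factor $2$ on the off-diagonal sum comes from $2\,\mathrm{Tr}(D^2)$, i.e.\ from each pair of symmetric entries.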
Let $N\in \mathbb N$, and let $\mathbb C_N[x]=\mathbb C_N[x_0,\cdots, x_d]$ be the space of homogeneous polynomials of degree $N$ in $d+1$ variables; then $\mathbb C_N[x]$ is an irreducible representation of $\mathfrak{sl}(d+1)$ and $\mathfrak{gl}(d+1)$ given by $E_{i,j} \mapsto x_i \frac{\partial}{\partial x_j}$. It is a highest weight representation corresponding to $N\omega_1$, $\omega_1$ being the first fundamental weight for type $A_d$. Then $x^\rho = x_0^{\rho_0}\cdots x_d^{\rho_d}$, $|\rho|=\sum_{i=0}^d\rho_i=N$, is an eigenvector of $H_i$; $H_i\cdot x^\rho = (\rho_{i-1}-\rho_i)x^\rho$, and so we have a basis of joint eigenvectors of the Cartan subalgebra spanned by $H_1,\cdots, H_d$, and each joint eigenspace, i.e.~each weight space, is $1$-dimensional. It is a unitary representation for the inner product \[ \langle x^\rho, x^\sigma \rangle = \delta_{\rho,\sigma} \binom{N}{\rho}^{-1}= \delta_{\rho,\sigma} \frac{\rho_0! \cdots \rho_d!}{N!} \] and it gives a unitary representation of $SU(d+1)$ as well. Then the eigenfunctions of $L(t)$ in $\mathbb C_N[x]$ are $\tilde{x}^\rho$, where \[ (\tilde{x}_0, \cdots, \tilde{x}_d) = (x_0, \cdots, x_d) Q(t), \] since $Q(t)$ changes from eigenvectors for the Cartan subalgebra to eigenvectors for the operator $L(t)$, cf.~\cite[\S 3]{Ilie}. It corresponds to the action of $SU(d+1)$ (and of $U(d+1)$) on $\mathbb C_N[x]$. Since $Q(t)$ is unitary, we have \begin{equation}\label{eq:sld+1orthrelevLt} \langle \tilde{x}^\rho, \tilde{x}^\sigma\rangle = \langle x^\rho, x^\sigma\rangle = \delta_{\rho,\sigma} \binom{N}{\rho}^{-1}.
\end{equation} We recall the generating function for the multivariable Krawtchouk polynomials as introduced by Griffiths \cite{Grif}, see \cite[\S 1]{Ilie}: \begin{equation}\label{eq:sld+1genfunmvKrawtchouk} \prod_{i=0}^d \Bigl(z_0 + \sum_{j=1}^d u_{i,j} z_j\Bigr)^{\rho_i} = \sum_{|\sigma|=N} \binom{N}{\sigma} P(\sigma',\rho') z_0^{\sigma_0}\cdots z_d^{\sigma_d}, \end{equation} where $\rho'= (\rho_1,\cdots, \rho_d) \in \mathbb N^d$, and similarly for $\sigma'$. We consider $P(\rho',\sigma')$ as polynomials in $\sigma'\in \mathbb N^d$ of degree $\rho'$ depending on $U=(u_{i,j})_{i,j=1}^d$, see \cite[\S 1]{Ilie}. \begin{lemma}\label{lem:sld+1eigvectL} The eigenvectors of $L(t)$ in $\mathbb C_N[x]$ are \[ \tilde{x}^\rho = \prod_{i=0}^d \bigl( w_i(t)\bigr)^{\frac12 \rho_i} \sum_{|\sigma|=N} \binom{N}{\sigma} P(\sigma',\rho') x^\sigma \] for $u_{i,j} = \frac{Q(t)_{j,i}}{Q(t)_{0,i}}= p_j(\lambda_i;t)$, $1 \leq i,j\leq d$, in \eqref{eq:sld+1genfunmvKrawtchouk}, and $L(t) \tilde{x}^\rho = (\sum_{i=0}^d \lambda_i \rho_i )\tilde{x}^\rho$. The eigenvalue follows from the conjugation with the diagonal element $\Lambda$. \end{lemma} From now on we assume this value for $u_{i,j}$, $1\leq i,j\leq d$. Explicit expressions for $P(\sigma',\rho')$ in terms of Gelfand hypergeometric series are due to Mizukawa and Tanaka \cite{MizuT}, see \cite[(1.3)]{Ilie}. See also Iliev \cite{Ilie} for an overview of special and related cases of the multivariable case. \begin{proof} Observe that \[ \tilde{x}_i = \sum_{j=0}^d x_j Q(t)_{j,i} = Q(t)_{0,i} \Bigl( x_0 + \sum_{j=1}^d \frac{Q(t)_{j,i}}{Q(t)_{0,i}} x_j\Bigr), \] where $Q(t)_{0,i}= \sqrt{w_i(t)}$ is non-zero. Now expanding $\tilde{x}^\rho$ using \eqref{eq:sld+1genfunmvKrawtchouk} and $Q(t)_{i,j} = p_i(\lambda_j;t) \sqrt{w_j(t)}$ gives the result.
\end{proof} By the orthogonality \eqref{eq:sld+1orthrelevLt} of the eigenvectors of $L(t)$ we find \begin{align*} \sum_{|\sigma|=N} &\binom{N}{\sigma} P(\sigma',\rho')P(\sigma',\eta') = \frac{\delta_{\rho,\eta}}{\binom{N}{\rho} \prod_{i=0}^d w_i(t)^{\rho_i}}, \\ \sum_{|\rho|=N} &\binom{N}{\rho} \Bigl( \prod_{i=0}^d w_i(t)^{\rho_i}\Bigr) P(\sigma',\rho')P(\tau',\rho') = \frac{\delta_{\sigma,\tau}}{\binom{N}{\sigma}}, \end{align*} where we use that all entries of $Q(t)$ are real. The second orthogonality relation follows by duality, and the orthogonality corresponds to \cite[Cor.~5.3]{Ilie}. In case $N=1$ we find $P(f_i',f_j')= p_i(\lambda_j;t)$, where $f_i\in \mathbb N^{d+1}$ is given by $(0,\cdots, 0, 1,0,\cdots, 0)$ with the $1$ in the $i$-th spot. \begin{lemma}\label{lem:recursioneqPtaurho} For all $\rho, \tau \in \mathbb N^{d+1}$ with $|\rho|=|\tau|$ we have for the $P$ from Lemma \ref{lem:sld+1eigvectL} the recurrence \begin{gather*} \Bigl( \sum_{i=0}^d \lambda_i\rho_i\Bigr) P(\tau',\rho') = \Bigl( \sum_{i=0}^d s_i(t) (\tau_{i-1}-\tau_i)\Bigr) P(\tau',\rho')\\ + \sum_{i=0}^d r_i(t) \bigl( \tau_{i-1} P((\tau-f_{i-1}+f_i)',\rho') + \tau_{i} P((\tau+f_{i-1}-f_i)',\rho')\bigr). \end{gather*} \end{lemma} Note that Lemma \ref{lem:recursioneqPtaurho} does not follow from \cite[Thm.~6.1]{Ilie}. \begin{proof} Apply Lemma \ref{lem:sld+1eigvectL} to expand $\tilde{x}^\rho$ in $L(t)\tilde{x}^\rho = (\sum_{i=0}^d \lambda_i\rho_i) \tilde{x}^\rho$, and use the explicit expression of $L(t)$ and the corresponding action. Comparing the coefficient of $x^\tau$ on both sides gives the result. \end{proof} \begin{remark}\label{rmk:Ltinsld+1fromKrawtchouk2} In the context of Remark \ref{rmk:Ltinsld+1fromKrawtchouk} and \eqref{eq:rmk:Ltinsld+1fromKrawtchouk} we have that the $u_{i,j}$ are Krawtchouk polynomials.
Then the left hand side in \eqref{eq:sld+1genfunmvKrawtchouk} is related to the generating function for the Krawtchouk polynomials, see \cite[(9.11.11)]{KLS}, i.e.~the case $d=1$ of \eqref{eq:sld+1genfunmvKrawtchouk}. Putting $z_j = (\frac{p}{1-p})^{-\frac12 j} \binom{d}{j}^{\frac12}w^j$, we see that in this situation $\sum_{j=0}^d u_{i,j} z_j$ corresponds to $(1+w)^{d-i} (1- \frac{1-p(t)}{p(t)}w)^i$. Using this in the generating function, the left hand side of \eqref{eq:sld+1genfunmvKrawtchouk} gives a generating function for Krawtchouk polynomials. Comparing the powers of $w^k$ on both sides gives \begin{multline*} \left( \frac{p}{1-p} \right)^{\frac12 k} \binom{dN}{k} \rFs{2}{1}{-\sum_{i=0}^d i\rho_i, -k}{-dN}{\frac{1}{p}} = \\ \sum_{|\sigma|=N,\, \sum_{j=0}^d j\sigma_j=k} \left(\prod_{j=0}^d \binom{d}{j}^{\frac12 \sigma_j}\right) \binom{N}{\sigma} P(\sigma',\rho'). \end{multline*} The left hand side is, up to a normalization, the overlap coefficient of $L(t)$ in the $\mathfrak{sl}(2,\mathbb C)$ case for the representation of dimension $Nd+1$, see \S \ref{sec:su2}. Indeed, the composition $\mathfrak{sl}(2,\mathbb C) \to \mathfrak{sl}(d+1,\mathbb C) \to \mathrm{End}(\mathbb C_N[x])$ yields a reducible representation of $\mathfrak{sl}(2,\mathbb C)$, and the vector $x^{(0,\cdots, 0,N)}$ is a highest weight vector of $\mathfrak{sl}(2,\mathbb C)$ for the highest weight $dN$. Restricting to this space then gives the above connection. \end{remark} \subsection{$t$-Dependence of multivariable Krawtchouk polynomials} Let $L(t) v(t) = \lambda v(t)$; taking the $t$-derivative gives $\dot{L}(t)v(t) + L(t)\dot{v}(t) =\lambda \dot{v}(t)$, since $\lambda$ is independent of $t$, and using the Lax pair $\dot{L}=[M,L]$ gives \[ (\lambda - L(t)) (M(t) v(t) -\dot{v}(t))=0.
\] Since $L(t)$ has simple spectrum, we conclude that \[ M(t) v(t) = \dot{v}(t) + c(t,\lambda) v(t) \] for some constant $c(t,\lambda)$ depending on the eigenvalue $\lambda$ and on $t$. Note that this differs from \cite[Lemma~2]{Pehe}. For the case $N=1$ we get \[ M(t)v_{\lambda_r}(t) = \sum_{n=0}^d \bigl( p_{n-1}(\lambda_r;t) u_n(t) - p_{n+1}(\lambda_r;t) u_{n+1}(t)\bigr) x_n \] with the convention that $u_0(t)=u_{d+1}(t)=0$, $p_{-1}(\lambda_r;t)=0$. So \[ (M(t)-c(t,\lambda_r))v_{\lambda_r}(t) = \dot{v}_{\lambda_r}(t) = \sum_{n=0}^d \dot{p}_n(\lambda_r;t) \, x_n, \] and comparing the coefficient of $x_0$, we find $c(t,\lambda_r) = - p_1(\lambda_r;t)u_1(t)$. So we have obtained the following proposition. \begin{proposition} The polynomials satisfy \begin{equation*} \begin{split} \dot{p}_n(\lambda_r;t) &= u_n(t) p_{n-1}(\lambda_r;t) - u_{n+1}(t) p_{n+1}(\lambda_r;t) + u_1(t) p_1(\lambda_r;t) p_n(\lambda_r;t), \qquad 1\leq n<d \\ \dot{p}_d(\lambda_r;t) &= u_d(t) p_{d-1}(\lambda_r;t) + u_1(t) p_1(\lambda_r;t) p_d(\lambda_r;t) \end{split} \end{equation*} for all eigenvalues $\lambda_r$ of $L(t)$, $r\in \{0,\cdots, d\}$. \end{proposition} Note that for $0 \leq n<d$ we have \begin{equation}\label{eq:derivpnlat-N=1} \dot{p}_n(\lambda;t) = u_n(t) p_{n-1}(\lambda;t) - u_{n+1}(t) p_{n+1}(\lambda;t) + u_1(t) p_1(\lambda;t) p_n(\lambda;t) \end{equation} as a polynomial identity. Indeed, for $n=0$ this is trivially satisfied, and for $1\leq n<d$ this is a polynomial identity of degree $n$ which holds for all $\lambda_r$, and hence is a polynomial identity. Note that the right hand side is a polynomial of degree $n$, and not of degree $n+1$, since the coefficient of $\lambda^{n+1}$ is zero because of the relation on the $u_i$ and $r_i$ in Proposition \ref{prop:sld+1-Laxpair}.
Writing out the identity for the Krawtchouk polynomials we obtain, after simplifying, \begin{gather*} n \rFs{2}{1}{-n,-r}{-d}{\frac{1}{p(t)}} + \frac{2nr(1-p(t))}{dp(t)}\rFs{2}{1}{1-n,1-r}{1-d}{\frac{1}{p(t)}} = \\ n(1-p(t)) \rFs{2}{1}{1-n,-r}{-d}{\frac{1}{p(t)}} - p(t) (d-n) \rFs{2}{1}{-1-n,-r}{-d}{\frac{1}{p(t)}}\\ + (dp(t)-r) \rFs{2}{1}{-n,-r}{-d}{\frac{1}{p(t)}}, \end{gather*} where the left hand side is related to the derivative. Note that the derivative of $p$ cancels with factors $u$, see Theorem \ref{thm:Krawtchouk} and its proof and \S \ref{sec:modification}. In order to obtain a similar expression for the multivariable $t$-dependent Krawtchouk polynomials we need to assume that the spectrum of $L(t)$ is simple, i.e.~we assume that for $\rho,\tilde{\rho} \in \mathbb N^{d+1}$ with $|\rho|=|\tilde{\rho}|$ the equality $\sum_{i=0}^d \lambda_i(\rho_i-\tilde{\rho}_i)=0$ implies $\rho=\tilde{\rho}$. Assuming this we calculate, using Proposition \ref{prop:sld+1-Laxpair}, \begin{equation*} M(t) \tilde{x}^\rho = W_\rho(t) \sum_{|\sigma|=N} \binom{N}{\sigma} P(\sigma',\rho') \sum_{r=1}^d u_r(t) (\sigma_r x^{\sigma+f_{r-1}-f_r} - \sigma_{r-1} x^{\sigma-f_{r-1}+f_r}), \end{equation*} using the notation $W_\rho(t) = \prod_{i=0}^d w_i(t)^{\frac12 \rho_i}$ and $f_i=(0,\cdots,0,1, 0,\cdots, 0)\in \mathbb N^{d+1}$, with the $1$ at the $i$-th spot. Now the $t$-derivative of $\tilde{x}^\rho$ is \begin{equation*} \dot{W}_\rho(t) \sum_{|\sigma|=N} \binom{N}{\sigma} P(\sigma',\rho') x^\sigma + W_\rho(t) \sum_{|\sigma|=N} \binom{N}{\sigma} \dot{P}(\sigma',\rho') x^\sigma, \end{equation*} and it remains to determine the constant in $M(t) \tilde{x}^\rho - C\tilde{x}^\rho = \frac{\partial}{\partial t}\tilde{x}^\rho$. We determine $C$ by looking at the coefficient of $x_0^N$ using $P(0, \rho')= P((N,0,\cdots,0)',\rho')=1$. This gives $C= N u_1(t)W_\rho(t)^{-1} - \frac{\partial}{\partial t} \ln W_\rho(t)$.
Comparing the coefficients of $x^\tau$ on both sides gives the following result. \begin{theorem} Assume that $L(t)$ acting in $\mathbb C_N[x]$ has simple spectrum. The $t$-derivative of the multivariable Krawtchouk polynomials satisfies \begin{gather*} \dot{W}_\rho(t) P(\tau',\rho') + W_\rho(t) \dot{P}(\tau',\rho')= \bigl( \dot{W}_\rho(t) -N u_1(t)\bigr) P(\tau',\rho') + \\ W_\rho(t) \sum_{r=1}^d u_r(t) \bigl( \tau_{r-1} P((\tau-f_{r-1}+f_r)',\rho') - \tau_{r} P((\tau+f_{r-1}-f_r)',\rho')\bigr) \end{gather*} for all $\rho,\tau\in \mathbb N^{d+1}$, $|\tau|=|\rho|=N$. \end{theorem} \begin{thebibliography}{99} \bibitem{AAR} G.E. Andrews, R. Askey, R. Roy, \textit{Special Functions}, Encycl. Math. Appl. 71, Cambridge Univ. Press, 1999. \bibitem{BabeBT} O.~Babelon, D.~Bernard, M.~Talon, \emph{Introduction to classical integrable systems}, Cambridge Univ. Press, 2003. \bibitem{Bere} Ju.M.~Berezanski\u\i , \emph{Expansions in eigenfunctions of selfadjoint operators}, Translations of Math. Monographs \textbf{17}, AMS, 1968. \bibitem{BrusMRL} M.~Bruschi, S.V.~Manakov, O.~Ragnisco, D.~Levi, \emph{The nonabelian Toda lattice-discrete analogue of the matrix Schr\"odinger spectral problem}, J. Math. Phys. \textbf{21} (1980), 2749--2753. \bibitem{CramvdVV} N.~Cramp\'e, W.~van de Vijver, L.~Vinet, \emph{Racah problems for the oscillator algebra, the Lie algebra $\mathfrak{sl}_n$, and multivariate Krawtchouk polynomials}, Ann. Henri Poincar\'e \textbf{21} (2020), 3939--3971. \bibitem{DeifNT} P.~Deift, T.~Nanda, C.~Tomei, \emph{Ordinary differential equations and the symmetric eigenvalue problem}, SIAM J. Numer. Anal. \textbf{20} (1983), 1--22. \bibitem{Gekh} M.~Gekhtman, \emph{Hamiltonian structure of non-abelian Toda lattice}, Lett. Math. Phys. \textbf{46} (1998), 189--205.
\bibitem{GeneVZ} V.X.~Genest, L.~Vinet, A.~Zhedanov, \emph{The multivariate Krawtchouk polynomials as matrix elements of the rotation group representations on oscillator states}, J. Phys. A \textbf{46} (2013), 505203, 24 pp. \bibitem{Grif} R.C.~Griffiths, \emph{Orthogonal polynomials on the multinomial distribution}, Austral. J. Statist. \textbf{13} (1971), 27--35. \bibitem{Groe2003} W.~Groenevelt, \textit{Laguerre functions and representations of $su(1,1)$}, Indag.~Math. (N.S.) \textbf{14} (2003), 329--352. \bibitem{GroeK2002} W.~Groenevelt, E.~Koelink, \emph{Meixner functions and polynomials related to Lie algebra representations}, J. Phys. A \textbf{35} (2002), 65--85. \bibitem{Ilie} P.~Iliev, \emph{A Lie-theoretic interpretation of multivariate hypergeometric polynomials}, Compos. Math. \textbf{148} (2012), 991--1002. \bibitem{Isma} M.E.H.~Ismail, \emph{Classical and quantum orthogonal polynomials in one variable}, Encycl. Math. Appl. 98, Cambridge Univ. Press, 2005. \bibitem{IsmaKR} M.E.H.~Ismail, E.~Koelink, P.~Rom\'an, \emph{Matrix valued Hermite polynomials, Burchnall formulas and non-abelian Toda lattice}, Adv. in Appl. Math. \textbf{110} (2019), 235--269. \bibitem{Kame} Y.~Kametaka, \emph{On the Euler--Poisson--Darboux equation and the Toda equation. I, II}, Proc. Japan Acad. Ser. A Math. Sci. \textbf{60} (1984), 145--148, 181--184. \bibitem{KLS} R.~Koekoek, P.A.~Lesky, R.~Swarttouw, \textit{Hypergeometric orthogonal polynomials and their $q$-analogues}, Springer Monographs in Math., Springer, 2010. \bibitem{Koel2004} E.~Koelink, \textit{Spectral theory and special functions}, pp.~45--84 in ``Laredo Lectures on Orthogonal Polynomials and Special Functions'' (eds. R.~\'Alvarez-Nodarse, F.~Marcell\'an, W.~Van Assche), Adv. Theory Spec. Funct. Orthogonal Polynomials, Nova Sci. Publ., 2004.
\bibitem{Koel} E.~Koelink, \emph{Applications of spectral theory to special functions}, pp.~131--212 in ``Lectures on Orthogonal Polynomials and Special Functions'' (eds. H.S. Cohl, M.E.H. Ismail), Lecture Notes of the London Math. Soc. \textbf{464}, Cambridge Univ. Press, 2021. \bibitem{KVdJ98} H.T.~Koelink, J.~Van der Jeugt, \textit{Convolutions for orthogonal polynomials from Lie and quantum algebra representations}, SIAM J.~Math.~Anal.~\textbf{29} (1998), 794--822. \bibitem{Mi} W.~Miller Jr., \textit{Lie theory and special functions}, Math. in Science and Engineering \textbf{43}, Academic Press, 1968. \bibitem{MizuT} H.~Mizukawa, H.~Tanaka, \emph{$(n+1,m+1)$-hypergeometric functions associated to character algebras}, Proc. Amer. Math. Soc. \textbf{132} (2004), 2613--2618. \bibitem{Mose} J.~Moser, \emph{Finitely many mass points on the line under the influence of an exponential potential -- an integrable system}, pp.~467--497 in ``Dynamical systems, theory and applications'' (ed. J.~Moser), Lecture Notes in Phys. \textbf{38}, Springer, 1975. \bibitem{Okam} K.~Okamoto, \emph{Sur les \'echelles associ\'ees aux fonctions sp\'eciales et l'\'equation de Toda}, J. Fac. Sci. Univ. Tokyo Sect. IA Math. \textbf{34} (1987), 709--740. \bibitem{Pehe} F.~Peherstorfer, \emph{On Toda lattices and orthogonal polynomials}, J. Comput. Appl. Math. \textbf{133} (2001), 519--534. \bibitem{Sl} L.J.~Slater, \textit{Confluent hypergeometric functions}, Cambridge Univ. Press, 1960. \bibitem{Szeg} G.~Szeg\H{o}, \emph{Orthogonal polynomials}, 4th ed., Colloquium Publ. \textbf{23}, AMS, 1975. \bibitem{Tesc} G.~Teschl, \emph{Almost everything you always wanted to know about the Toda equation}, Jahresber. Deutsch. Math.-Verein. \textbf{103} (2001), 149--162. \bibitem{Wat} G.N.~Watson, \textit{A Treatise on the Theory of Bessel Functions}, Cambridge Univ. Press, 1944.
\bibitem{Zhed} A.S.~Zhedanov, \emph{Toda lattice: solutions with dynamical symmetry and classical orthogonal polynomials}, Theoret. and Math. Phys. \textbf{82} (1990), 6--11. \end{thebibliography} \end{document}
\begin{document} \begin{abstract} This paper is an attempt to show that, parallel to Elliott's classification of AF $C^*$-algebras by means of $K$-theory, the graded $K_0$-group classifies Leavitt path algebras completely. In this direction, we prove this claim at two extremes, namely, for the class of acyclic graphs (graphs with no cycles) and multi-headed comets or rose graphs (graphs in which each head is connected to a cycle or to a collection of loops), or a mixture of these graphs. \end{abstract} \maketitle \section{Introduction} \label{introf} In \cite{vitt62} Leavitt considered the free associative $K$-algebra $A$ generated by symbols $\{x_i,y_i \mid 1\leq i \leq n\}$ subject to the relations \begin{equation}\label{jh54320} x_iy_j =\delta_{ij}, \text{ for all } 1\leq i,j \leq n, \text{ and } \sum_{i=1}^n y_ix_i=1, \end{equation} where $K$ is a field, $n\geq 2$ and $\delta_{ij}$ is the Kronecker delta. The relations guarantee that the right $A$-module homomorphism \begin{align}\label{is329ho} \phi:A&\longrightarrow A^n\\ a &\mapsto (x_1a ,x_2a,\dots,x_na)\notag \end{align} has an inverse \begin{align}\label{is329ho9} \psi:A^n&\longrightarrow A\\ (a_1,\dots,a_n) &\mapsto y_1a_1+\dots+y_na_n, \notag \end{align} so $A\cong A^n$ as a right $A$-module. He showed that $A$ is universal with respect to this property, is of type $(1,n-1)$ (see \S\ref{gtr5654}), and is a simple ring. Modeled on this, Leavitt path algebras were introduced \cite{aap05,amp}; they attach to a directed graph a certain algebra. In the case that the graph has one vertex and $n$ loops, one recovers Leavitt's algebra (\ref{jh54320}) (where the $y_i$ are the loops and the $x_i$ are the ghost loops). In \cite{cuntz1} Cuntz considered the universal unital $C^*$-algebra $\mathcal{O}_n$ generated by isometries $\{s_i \mid 1\leq i\leq n\}$ subject to the relation \[ \sum_{i=1}^ns_is_i^*=1,\] where $n\geq 2$. He showed that $\mathcal O_n$ is a purely infinite, simple $C^*$-algebra.
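As an aside, the Cuntz relation for $n=2$ can be made concrete on $\ell^2(\mathbb N)$ via the standard shift-type realization $s_1 e_k = e_{2k}$, $s_2 e_k = e_{2k+1}$ (an illustration of ours, not taken from the paper). A minimal sketch checking $s_i^*s_j=\delta_{ij}$ and $s_1s_1^* + s_2s_2^* = 1$ on basis vectors:

```python
# s_1 e_k = e_{2k}, s_2 e_k = e_{2k+1} on the standard basis of l^2(N)
def S(i, k):                # the isometry s_{i+1} on basis indices, i in {0, 1}
    return 2 * k + i

def Sstar(i, k):            # the adjoint s_{i+1}^*, a partial inverse; None means 0
    return (k - i) // 2 if k % 2 == i else None

ks = range(200)
# s_i^* s_j = delta_{ij} on basis vectors
ok1 = all(Sstar(i, S(j, k)) == (k if i == j else None)
          for i in (0, 1) for j in (0, 1) for k in ks)
# s_1 s_1^* + s_2 s_2^* = 1: the ranges of s_1 and s_2 partition the basis
ok2 = all(sum(Sstar(i, k) is not None for i in (0, 1)) == 1 for k in ks)
print(ok1 and ok2)          # True
```

Each $s_i$ is an isometry with range the even (resp.\ odd) indexed basis vectors, so the range projections sum to the identity, exactly as the Cuntz relation requires.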
Modeled on this, graph $C^*$-algebras were introduced \cite{paskrae,Raegraph}; they attach to a directed graph a certain $C^*$-algebra. In the case that the graph has one vertex and $n$ loops, one recovers Cuntz's algebra. The study of Leavitt path algebras and that of their analytic counterparts have developed in parallel and in strikingly similar ways. Cuntz computed $K_0$ of $\mathcal{O}_n$ \cite{cuntz2} and in~\cite{Raeburn113} Raeburn and Szyma\'nski carried out the calculation for graph $C^*$-algebras (see also~\cite[Ch.~7]{Raegraph}). Ara, Brustenga and Cortin\~as~\cite{arawillie} calculated the $K$-theory of Leavitt path algebras. It was shown that when $K=\mathbb C$, the $K_0$-groups of graph $C^*$-algebras and Leavitt path algebras coincide~\cite[Theorem~7.1]{amp}. The Grothendieck group $K_0$ has long been recognized as an essential tool in classifying certain types of $C^*$-algebras. Elliott proved that $K_0$ as a ``pointed'' pre-ordered group classifies the AF $C^*$-algebras completely. Since the ``underlying'' algebras of Leavitt path algebras are ultramatricial algebras (as in AF $C^*$-algebras), Elliott's work (see also~\cite[\S 8]{rordam}) has prompted the consideration of the $K_0$-group as the main tool for classifying certain types of Leavitt path algebras~\cite{aalp}. This note is an attempt to justify that, parallel to Elliott's work on AF $C^*$-algebras, the ``pointed'' pre-ordered {\it graded} Grothendieck group, $K^{\operatorname{gr}}_0$, classifies the Leavitt path algebras completely. This note proves this claim at two extremes, namely for acyclic graphs (graphs with no cycles) and for graphs in which each head is connected to a cycle or to a collection of loops (see~(\ref{pid98})), or a mixture of these graphs (see~(\ref{monster})). Leavitt's algebra constructed in~(\ref{jh54320}) has a natural grading: assigning degree $1$ to $y_i$ and $-1$ to $x_i$, $1\leq i \leq n$, the relations become homogeneous (of degree zero), so the algebra $A$ is a $\mathbb Z$-graded algebra.
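The $K_0$-groups mentioned above are readily computable: for a finite graph with no sinks and adjacency matrix $N$, $K_0$ of the Leavitt path algebra is the cokernel of $I-N^t$ acting on $\mathbb Z^{E^0}$, which the Smith normal form over $\mathbb Z$ determines. A sketch (the helper name is ours), recovering the fact that the one-vertex, two-loop graph, i.e.\ $\operatorname{\mathcal L}_2$, has trivial $K_0$:

```python
import sympy as sp
from sympy.matrices.normalforms import smith_normal_form

def coker_invariants(N):
    """Elementary divisors of coker(I - N^t) over Z:
    an entry 0 gives a Z summand, m > 1 gives Z/mZ, 1 is trivial."""
    A = sp.eye(N.shape[0]) - N.T
    D = smith_normal_form(A, domain=sp.ZZ)   # diagonalize over the integers
    return [abs(D[i, i]) for i in range(D.shape[0])]

# one vertex with two loops: adjacency matrix (2), so I - N^t = (-1)
print(coker_invariants(sp.Matrix([[2]])))    # [1], i.e. the trivial group
```

The graded refinement discussed below is finer: it works with $K^{\operatorname{gr}}_0$ as a $\mathbb Z[x,x^{-1}]$-module rather than with this cokernel alone.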
The isomorphism~(\ref{is329ho}) induces a graded isomorphism \begin{align}\label{is329ho22} \phi:A&\longrightarrow A(-1)^n\\ a &\mapsto (x_1a ,x_2a,\dots,x_na), \notag \end{align} where $A(-1)$ is the suspension of $A$ by $-1$. In the non-graded setting, the relation $A\cong A^n$ translates to $(n-1)[A]=0$ in $K_0(A)$. In fact for $n=2$, one can show that $K_0(A)=0$. However, when considering the grading, for $n=2$, $A \cong_{\operatorname{gr}} A(-1)\oplus A(-1)$ and $A(i)\cong_{\operatorname{gr}} A(i-1)\oplus A(i-1)$, $i\in \mathbb Z$, and therefore $A \cong_{\operatorname{gr}} A(-i)^{2^i}$, $i \in \mathbb N$. This implies not only $K^{\operatorname{gr}}_0(A)\not = 0$ but one can also show that $K^{\operatorname{gr}}_0(A)\cong \mathbb Z[1/2]$. This is an indication that graded $K$-groups can capture more information than the non-graded $K$-groups (see Example~\ref{smallgraphs} for more examples). In fact, we will observe that there is a close relation between graded $K$-groups of Leavitt path algebras and their graph $C^*$-algebra counterparts. For example, if the graph is finite with no sink, we have isomorphisms in the middle of the diagram below which induce isomorphisms on the right and left hand sides of the diagram. This immediately implies $ K_0(C^*(E)) \cong K_0(\operatorname{\mathcal L}_{\mathbb C}(E))$. (See Remark~\ref{hjsonf} for the general case of row-finite graphs.) \[ \xymatrix{ 0 \ar[r] & K_1(C^*(E)) \ar[r] \ar@{.>}[d] & K_0\big(C^*(E\times_1 \mathbb Z)\big) \ar[d]^{\cong} \ar[r]^{1-\beta^{-1}_*} & K_0\big(C^*(E\times_1 \mathbb Z)\big) \ar[d]^{\cong} \ar[r]& K_0(C^*(E))\ar@{.>}[d]\ar[r]& 0\\ 0 \ar[r] & \ker(1-N^t) \ar[r] & K^{\operatorname{gr}}_0(\operatorname{\mathcal L}_{\mathbb C}(E)) \ar[r]^{1-N^{t}} & K^{\operatorname{gr}}_0(\operatorname{\mathcal L}_{\mathbb C}(E)) \ar[r]& K_0(\operatorname{\mathcal L}_{\mathbb C}(E)) \ar[r] &0}.
\] In the graded $K$-theory we study in this note, suspensions play a pivotal role. For example, Abrams \cite[Proposition~1.3]{arock} has given a number-theoretic criterion for when matrices over Leavitt algebras with no suspensions are graded isomorphic. Abrams' criterion shows \[\operatorname{\mathbb M}_3(\operatorname{\mathcal L}_2)\not \cong_{\operatorname{gr}} \operatorname{\mathbb M}_4(\operatorname{\mathcal L}_2),\] where $\operatorname{\mathcal L}_2$ is the Leavitt algebra constructed in (\ref{jh54320}) for $n=2$. However, using graded $K$-theory (Theorem~\ref{re282} and Theorem~\ref{mani543}), we shall see that the Leavitt path algebras of the following two graphs \begin{equation*} \xymatrix@=10pt{ &\bullet \ar[dr] & &&&& & \bullet \ar[dr] \\ E: && \bullet \ar@(ur,rd) \ar@(u,r) & & && F: & & \bullet \ar[r] & \bullet \ar@(ur,rd) \ar@(u,r) &\\ &\bullet \ar[ur] & &&&& & \bullet \ar[ur] } \end{equation*} are graded isomorphic, i.e., $\operatorname{\mathcal L}(E) \cong_{\operatorname{gr}} \operatorname{\mathcal L}(F)$, which would then imply \begin{equation} \operatorname{\mathbb M}_3(\operatorname{\mathcal L}_2)(0,1,1) \cong_{\operatorname{gr}} \operatorname{\mathbb M}_4(\operatorname{\mathcal L}_2)(0,1,2,2). \end{equation} This also shows that, by considering suspensions, we obtain much wider classes of graded isomorphisms of matrices over Leavitt algebras (see Theorem~\ref{cfd2497}). In fact, the suspensions of graded modules induce a $\mathbb Z[x,x^{-1}]$-module structure on the graded $K_0$-group. This extra structure helps us to characterize the Leavitt path algebras. Elliott showed that $K_0$-groups classify ultramatricial algebras completely (\cite[\S15]{goodearlbook}).
Namely, for two ultramatricial algebras $R$ and $S$, if $\phi:K_0(R)\rightarrow K_0(S)$ is an isomorphism such that $\phi([R])=[S]$ and $\phi$ maps the set of isomorphism classes of finitely generated projective $R$-modules onto the set of isomorphism classes of finitely generated projective $S$-modules, i.e., $\phi$ and $\phi^{-1}$ are order preserving, then $R$ and $S$ are isomorphic. This isomorphism is written as $\big(K_0(R),[R]\big)\cong \big (K_0(S),[S]\big)$. Replacing $K_0$ by $K^{\operatorname{gr}}_0$, we will prove that a similar statement holds for the class of Leavitt path algebras arising from acyclic, multi-headed comet and multi-headed rose graphs (see Figure~\ref{pid98}) or a mixture of these (called a polycephaly graph, see Figure~\ref{monster}) in Theorem~\ref{mani543}. Here the isomorphisms between $K^{\operatorname{gr}}_0$-groups are considered as order-preserving $\mathbb Z[x,x^{-1}]$-module isomorphisms (see Example~\ref{upst} for the action of $\mathbb Z[x,x^{-1}]$ on the $K^{\operatorname{gr}}_0$-groups of acyclic and comet graphs). \begin{equation}\label{pid98} \xymatrix@=16pt{ & \bullet \ar[d] \ar[r] & \bullet & && \bullet \ar[d] \ar[r] & \bullet \ar@(ur,dr)\\ \bullet \ar[dr] & \bullet\ar[d] & & &&\bullet\ar@/^/[dr] &&& && & & && \bullet \ar@(ul,ur) \ar@(u,r) \ar@{.}@(ur,dr) \ar@(r,d)& \\ & \bullet && \bullet \ar[r] \ar[dr] & \bullet \ar@/^/[ur] &&\bullet \ar@/^1pc/[ll] & \bullet \ar[l] & && \bullet \ar[r] & \bullet \ar[r] & \bullet \ar[r] \ar[urr] & \bullet \ar[r] \ar[dr] & \bullet \ar[r] & \bullet \ar[r] & \bullet \ar@(ul,ur) \ar@(u,r) \ar@{.}@(ur,dr) \ar@(r,d)& \\ && & & \bullet \ar@/^1pc/[rr] & & \bullet \ar@/^1.3pc/[ll]&&& &&& \bullet \ar[r] & \bullet \ar[r] & \bullet \ar@(u,r) \ar@{.}@(ur,dr) \ar@(r,d)& } \end{equation} This paves the way to pose the following conjecture. \begin{conjecture}[Weak classification conjecture]\label{weakconj} Let $E$ and $F$ be row-finite graphs.
Then $\operatorname{\mathcal L}(E)\cong_{\operatorname{gr}} \operatorname{\mathcal L}(F)$ if and only if there is an order-preserving $\mathbb Z[x,x^{-1}]$-module isomorphism \[\big (K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E)),[\operatorname{\mathcal L}(E)]\big ) \cong \big (K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(F)),[\operatorname{\mathcal L}(F)]\big ).\] \end{conjecture} Whereas in Elliott's case $K_0$ is a functor to the category of pre-ordered abelian groups, $K^{\operatorname{gr}}_0$ is a functor to the category of $\Gamma$-pre-ordered abelian groups, taking into account the $\Gamma$-grading of the rings (see~\S\ref{pregg5}). In fact the proof of Theorem~\ref{mani543} shows that an isomorphism between the graded Grothendieck groups induces a $K$-algebra isomorphism between the Leavitt path algebras. Therefore, starting from a ring isomorphism and passing through $K$-theory, we obtain $K$-algebra isomorphisms between Leavitt path algebras. This yields the following conjecture for the class of polycephaly graphs (see Corollary~\ref{griso7dgre}). \begin{conjecture}\label{cofian} Let $E$ and $F$ be row-finite graphs. Then $\operatorname{\mathcal L}_K(E) \cong_{\operatorname{gr}} \operatorname{\mathcal L}_K(F)$ as rings if and only if $\operatorname{\mathcal L}_K(E) \cong_{\operatorname{gr}} \operatorname{\mathcal L}_K(F)$ as $K$-algebras. \end{conjecture} In fact we prove that the functor $K^{\operatorname{gr}}_0$ is a fully faithful functor from the category of acyclic Leavitt path algebras to the category of pre-ordered abelian groups (see \S\ref{pregg5} and Theorem~\ref{catgrhsf}). From the results of the paper, one is tempted to make the following conjecture.
\begin{conjecture}[Strong classification conjecture]\label{strongconj} The graded Grothendieck group $K^{\operatorname{gr}}_0$ is a fully faithful functor from the category of Leavitt path algebras with graded homomorphisms modulo inner-automorphisms to the category of pre-ordered abelian groups with order-units. \end{conjecture} \section{The graded Grothendieck group} In this note all modules are considered right modules unless stated otherwise. For a set $\Gamma$, by $\mathbb Z ^{\Gamma}$ we mean $\Gamma$-copies of $\mathbb Z$, i.e., $\bigoplus_{\gamma \in \Gamma} \mathbb Z_{\gamma}$ where $\mathbb Z_{\gamma} = \mathbb Z$ for each $\gamma \in \Gamma$. \subsection{Graded rings}\label{pregr529} A ring $A = \textstyle{\bigoplus_{ \gamma \in \Gamma}} A_{\gamma}$ is called a \emph{$\Gamma$-graded ring}, or simply a \emph{graded ring}, \index{graded ring} if $\Gamma$ is an (abelian) group, each $A_{\gamma}$ is an additive subgroup of $A$ and $A_{\gamma} A_{\delta} \subseteq A_{\gamma + \delta}$ for all $\gamma, \delta \in \Gamma$. The elements of $A_\gamma$ are called \emph{homogeneous of degree $\gamma$} and we write $\deg(a) = \gamma$ if $a \in A_{\gamma}$. We let $A^{h} = \bigcup_{\gamma \in \Gamma} A_{\gamma}$ be the set of homogeneous elements of $A$. We call the set $\Gamma_A=\{ \gamma \in \Gamma \mid A_\gamma \not = 0 \}$ the {\it support} of $A$. A $\Gamma$-graded ring $A=\textstyle{\bigoplus_{ \gamma \in \Gamma}} A_{\gamma}$ is called a \emph{strongly graded ring} if $A_{\gamma} A_{\delta} = A_{\gamma +\delta}$ for all $\gamma, \delta \in \Gamma$. We say $A$ has a trivial grading, or $A$ is concentrated in degree zero, if the support of $A$ is the trivial group, i.e., $A_0=A$ and $A_\gamma=0$ for $\gamma \in \Gamma \backslash \{0\}$. Let $A$ be a $\Gamma$-graded ring.
A \emph{graded right $A$-module} $M$ is defined to be a right $A$-module with a direct sum decomposition $M=\bigoplus_{\gamma \in \Gamma} M_{\gamma}$, where each $M_{\gamma}$ is an additive subgroup of $M$ such that $M_{\lambda} \cdot A_{\gamma} \subseteq M_{\gamma + \lambda}$ for all $\gamma, \lambda \in \Gamma$. For $\delta \in \Gamma$, we define the $\delta$-{\it suspended} $A$-module $M(\delta)$ \label{deshiftedmodule} as $M(\delta) =\bigoplus_{\gamma \in \Gamma} M(\delta)_\gamma$, where $M(\delta)_\gamma = M_{\gamma+\delta}$. For two graded $A$-modules $M$ and $N$, a {\it graded $A$-module homomorphism of degree $\delta$} is an $A$-module homomorphism $f:M\rightarrow N$ such that $f(M_\gamma)\subseteq N_{\gamma+\delta}$ for any $\gamma \in \Gamma$. By a {\it graded homomorphism} we mean a graded homomorphism of degree $0$. By $\mbox{gr-}A$ we denote the category of graded right $A$-modules with graded homomorphisms. For $\alpha \in \Gamma$, the {\it $\alpha$-suspension functor} $\mathcal T_\alpha:\mbox{gr-}A\rightarrow \mbox{gr-}A$, $M \mapsto M(\alpha)$, is an isomorphism with the property $\mathcal T_\alpha \mathcal T_\beta=\mathcal T_{\alpha + \beta}$, $\alpha,\beta\in \Gamma$. For graded modules $M$ and $N$, if $M$ is finitely generated, then $\operatorname{Hom}_{A}(M,N)$ has a natural $\Gamma$-grading \begin{equation}\label{hgd543p} \operatorname{Hom}_{A}(M,N)=\bigoplus_{\gamma \in \Gamma} \operatorname{Hom}(M,N)_{\gamma}, \end{equation} where $\operatorname{Hom}(M,N)_{\gamma}$ is the set of all graded $A$-module homomorphisms of degree $\gamma$ (see \cite[\S2.4]{grrings}). A graded $A$-module $P$ is called a {\it graded projective} module if $P$ is a projective module.
One can check that $P$ is graded projective if and only if the functor $\operatorname{Hom}_{\mbox{\tiny gr-}A}(P,-)$ is an exact functor in $\mbox{gr-}A$, if and only if $P$ is graded isomorphic to a direct summand of a graded free $A$-module. In particular, if $P$ is a graded finitely generated projective $A$-module, then there is a graded finitely generated projective $A$-module $Q$ such that $P\oplus Q\cong_{\operatorname{gr}} A^n(\overline \alpha)$, where $\overline \alpha =(\alpha_1,\dots,\alpha_n)$, $\alpha_i \in \Gamma$. By $\mathcal P \mathrm{gr}(A)$ we denote the category of graded finitely generated projective right $A$-modules with graded homomorphisms. Note that the suspension functor $\mathcal T_\alpha$ restricts to the category of graded finitely generated projective modules. Let $A= \textstyle{\bigoplus_{\gamma \in \Gamma}} A_{\gamma}$ and $B= \textstyle{\bigoplus_{\gamma \in \Gamma}} B_{\gamma}$ be graded rings. Then $A\times B$ has a natural grading given by $A\times B = \textstyle{\bigoplus_{\gamma \in \Gamma}} (A \times B)_{\gamma}$, where $(A \times B)_{\gamma}=A_\gamma \times B_\gamma$. Similarly, if $A$ and $B$ are $K$-algebras for a field $K$ (where here $K$ has a trivial grading), then $A \otimes_K B$ has a natural grading given by $A \otimes_K B = \textstyle{\bigoplus_{\gamma \in \Gamma}} (A \otimes_K B)_{\gamma}$, where \begin{equation}\label{tengr} (A \otimes_K B)_{\gamma} = \big \{ \sum_i a_i \otimes b_i \mid a_i \in A^h, b_i \in B^h, \deg(a_i)+\deg(b_i) = \gamma \big\}. \end{equation} Some of the rings we are dealing with in this note are of the form $K[x,x^{-1}]$, where $K$ is a field. This is an example of a graded field. A nonzero commutative $\Gamma$-graded ring $A$ is called a {\it graded field} if every nonzero homogeneous element has an inverse. It follows that $A_0$ is a field and $\Gamma_A$ is a subgroup of $\Gamma$.
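As a quick illustration, the ring $A=K[x^2,x^{-2}]$, viewed as a $\mathbb Z$-graded ring with $A_{2m}=Kx^{2m}$ and $A_\gamma=0$ for $\gamma$ odd, is a graded field: a nonzero homogeneous element $ax^{2m}$, $a\in K^\times$, has the homogeneous inverse
\[
(ax^{2m})^{-1}=a^{-1}x^{-2m}\in A_{-2m}.
\]
Here $A_0=K$ is a field and the support $\Gamma_A=2\mathbb Z$ is a subgroup of $\mathbb Z$.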
Similar to the non-graded setting, one can show that any $\Gamma$-graded module $M$ over a graded field $A$ is graded free, i.e., it is generated by a homogeneous basis, and the graded bases have the same cardinality (see \cite[Proposition~4.6.1]{grrings}). Moreover, if $N$ is a graded submodule of $M$, then \begin{equation}\label{dimcouti} \dim_A(N)+\dim_A(M/N)=\dim_A(M). \end{equation} In this note, all graded fields have torsion-free abelian group gradings (in fact, in all our statements $\Gamma=\mathbb Z$ and $A=K[x^n,x^{-n}]$ is a $\Gamma$-graded field with $\Gamma_A=n \mathbb Z$, for some $n\in \mathbb N$). However, this assumption is not necessary for the statements below. \subsection{Grading on matrices}\label{matgrhe} Let $A$ be a $\Gamma$-graded ring and $M=M_1\oplus \dots \oplus M_n$, where the $M_i$ are graded finitely generated right $A$-modules. So $M$ is also a graded $A$-module. Let $\pi_j:M\rightarrow M_j$ and $\kappa_j:M_j\rightarrow M$ be the (graded) projection and injection homomorphisms. Then there is a graded isomorphism \begin{equation}\label{fgair} \operatorname{End}_A(M)\rightarrow \big[\operatorname{Hom}(M_j,M_i)\big]_{1\leq i,j\leq n} \end{equation} defined by $\phi \mapsto [\pi_i\phi\kappa_j]$, $1\leq i,j \leq n$. For a graded ring $A$, observe that $\operatorname{Hom}_A\big (A(\delta_i),A(\delta_j)\big )\cong_{\operatorname{gr}} A(\delta_j-\delta_i)$ (see~(\ref{hgd543p})).
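To see where the suspension $\delta_j-\delta_i$ comes from, note that a homomorphism $f\in\operatorname{Hom}_A\big(A(\delta_i),A(\delta_j)\big)$ of degree $\gamma$ is determined by $f(1)$; since $1\in A(\delta_i)_{-\delta_i}$, we have $f(1)\in A(\delta_j)_{-\delta_i+\gamma}=A_{\gamma+\delta_j-\delta_i}$, so
\[
\operatorname{Hom}\big(A(\delta_i),A(\delta_j)\big)_{\gamma}\cong A_{\gamma+\delta_j-\delta_i}=A(\delta_j-\delta_i)_{\gamma}.
\]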
If \[V=A(-\delta_1)\oplus A(-\delta_2) \oplus \dots \oplus A(-\delta_n),\] then by~(\ref{fgair}), \[\operatorname{End}_A(V)\cong_{\operatorname{gr}} \big [\operatorname{Hom}\big (A(-\delta_j),A(-\delta_i)\big )\big ]\cong_{\operatorname{gr}}\big[A(\delta_j-\delta_i)\big]_{1\leq i,j\leq n}.\] Denoting this graded matrix ring by $\operatorname{\mathbb M}_n(A)(\delta_1,\dots,\delta_n)$, we have \begin{equation}\label{pjacko} \operatorname{\mathbb M}_n(A)(\delta_1,\dots,\delta_n) = \begin{pmatrix} A(\delta_1 - \delta_1) & A(\delta_2 - \delta_1) & \cdots & A(\delta_n - \delta_1) \\ A(\delta_1 - \delta_2) & A(\delta_2 - \delta_2) & \cdots & A(\delta_n - \delta_2) \\ \vdots & \vdots & \ddots & \vdots \\ A(\delta_1 - \delta_n) & A(\delta_2 - \delta_n) & \cdots & A(\delta_n - \delta_n) \end{pmatrix}. \end{equation} Therefore for $\lambda \in \Gamma$, the $\lambda$-homogeneous elements ${\operatorname{\mathbb M}_n (A)(\delta_1,\dots,\delta_n)}_{\lambda}$ are the $n \times n$-matrices over $A$ with the degrees shifted (suspended) as follows: \begin{equation}\label{mmkkhh} {\operatorname{\mathbb M}_n(A)(\delta_1,\dots,\delta_n)}_{\lambda} = \begin{pmatrix} A_{ \lambda+\delta_1 - \delta_1} & A_{\lambda+\delta_2 - \delta_1} & \cdots & A_{\lambda +\delta_n - \delta_1} \\ A_{\lambda + \delta_1 - \delta_2} & A_{\lambda + \delta_2 - \delta_2} & \cdots & A_{\lambda+\delta_n - \delta_2} \\ \vdots & \vdots & \ddots & \vdots \\ A_{\lambda + \delta_1 - \delta_n} & A_{ \lambda + \delta_2 - \delta_n} & \cdots & A_{\lambda + \delta_n - \delta_n} \end{pmatrix}. \end{equation} This also shows that \begin{equation}\label{hogr} \deg(e_{ij}(x))=\deg(x)+\delta_i-\delta_j. \end{equation} Setting $\overline \delta=(\delta_1, \dots,\delta_n)\in \Gamma^n$, one denotes the graded matrix ring~(\ref{pjacko}) by $\operatorname{\mathbb M}_n(A)(\overline \delta)$.
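As a concrete sanity check of~(\ref{mmkkhh}), take $A=K[x,x^{-1}]$ with its usual $\mathbb Z$-grading and $\overline\delta=(0,1)$. Then
\[
\operatorname{\mathbb M}_2(A)(0,1)_{\lambda}=\begin{pmatrix} A_{\lambda} & A_{\lambda+1}\\ A_{\lambda-1} & A_{\lambda} \end{pmatrix},
\qquad\text{so in particular}\qquad
\operatorname{\mathbb M}_2(A)(0,1)_{0}=\begin{pmatrix} K & Kx\\ Kx^{-1} & K \end{pmatrix}.
\]
For instance $e_{12}(x)$ is homogeneous of degree $\deg(x)+\delta_1-\delta_2=1+0-1=0$, in agreement with~(\ref{hogr}).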
Consider the graded $A$-bimodule \[A^n(\overline \delta)=A(\delta_1)\oplus \dots \oplus A(\delta_n).\] Then one can check that $A^n(\overline \delta)$ is a graded right $\operatorname{\mathbb M}_n(A)(\overline \delta)$-module and $A^n(-\overline \delta)$ is a graded left $\operatorname{\mathbb M}_n(A)(\overline \delta)$-module, where $-\overline \delta=(-\delta_1, \dots,-\delta_n)$. Note that if $A$ has a trivial grading, this construction induces a {\it good grading} on $\operatorname{\mathbb M}_n(A)$. These group gradings on matrix rings have been studied by D\u{a}sc\u{a}lescu et al.~\cite{dascalescu}. In this case, for $x \in A$, \begin{equation}\label{oiuytr} \deg(e_{ij}(x))=\delta_i - \delta_j. \end{equation} We will use these gradings on matrices to describe the graded structure of Leavitt path algebras of acyclic and comet graphs. For a $\Gamma$-graded ring $A$, $\overline \alpha = (\alpha_1 , \ldots, \alpha_m) \in \Gamma^m$ and $\overline \delta=(\delta_1, \ldots ,\delta_n) \in \Gamma^n$, let \[ \operatorname{\mathbb M}_{m \times n} (A)[\overline \alpha][\overline \delta] = \begin{pmatrix} A_{\alpha_1 -\delta_1} & A_{\alpha_1 -\delta_2} & \cdots & A_{\alpha_1 -\delta_n } \\ A_{\alpha_2 -\delta_1} & A_{\alpha_2 -\delta_2} & \cdots & A_{\alpha_2 -\delta_n } \\ \vdots & \vdots & \ddots & \vdots \\ A_{\alpha_m-\delta_1} & A_{\alpha_m -\delta_2} & \cdots & A_{\alpha_m -\delta_n} \end{pmatrix}. \] So $\operatorname{\mathbb M}_{m\times n} (A)[\overline \alpha][\overline \delta]$ consists of matrices with the $ij$-entry in $A_{\alpha_i -\delta_j}$. \begin{prop} \label{rndconggrrna} Let $A$ be a $\Gamma$-graded ring and let $\overline \alpha = (\alpha_1 , \ldots , \alpha_m) \in \Gamma^m$, $\overline \delta=(\delta_1, \ldots,\delta_n)\in \Gamma^n$.
Then the following are equivalent: \begin{enumerate}[\upshape(1)] \item $A^m (\overline \alpha) \cong_{\operatorname{gr}} A^n(\overline \delta) $ as graded right $A$-modules. \item $A^m (-\overline \alpha) \cong_{\operatorname{gr}} A^n(-\overline \delta) $ as graded left $A$-modules. \item There exist $a=(a_{ij}) \in \operatorname{\mathbb M}_{n\times m} (A)[\overline \delta][\overline \alpha]$ and $b=(b_{ij}) \in \operatorname{\mathbb M}_{m\times n} (A)[\overline \alpha][\overline \delta]$ such that $ab=I_{n}$ and $ba=I_{m}$. \end{enumerate} \end{prop} \begin{proof} (1) $\Rightarrow$ (3) Let $\phi: A^m(\overline \alpha) \rightarrow A^n (\overline \delta)$ and $\psi: A^n(\overline \delta) \rightarrow A^m (\overline \alpha)$ be graded right $A$-module isomorphisms such that $\phi\psi=1$ and $\psi\phi=1$. Let $e_j$ denote the standard basis element of $A^m(\overline \alpha)$ with $1$ in the $j$-th entry and zeros elsewhere. Then let $\phi(e_j)=(a_{1j},a_{2j},\dots,a_{nj})$, $1\leq j \leq m$. Since $\phi$ is a graded map, comparing the grading of both sides, one can observe that $\deg(a_{ij})=\delta_i-\alpha_j$. Note that the map $\phi$ is represented by left multiplication by the matrix $a=(a_{ij})_{n\times m} \in \operatorname{\mathbb M}_{n\times m} (A)[\overline \delta][\overline \alpha]$. In the same way one can construct $b \in \operatorname{\mathbb M}_{m\times n} (A)[\overline \alpha][\overline \delta]$ which induces $\psi$. Now $\phi\psi=1$ and $\psi\phi=1$ translate to $ab=I_{n}$ and $ba=I_{m}$. (3) $\Rightarrow$ (1) If $a \in \operatorname{\mathbb M}_{n\times m} (A)[\overline \delta][\overline \alpha]$, then multiplication from the left induces a graded right $A$-module homomorphism $\phi_a:A^m(\overline \alpha) \longrightarrow A^n(\overline \delta)$.
Similarly $b$ induces $\psi_b: A^n(\overline \delta) \longrightarrow A^m(\overline \alpha)$. Now $ab=I_{n}$ and $ba=I_{m}$ translate to $\phi_a\psi_b=1$ and $\psi_b\phi_a=1$. (2) $\Longleftrightarrow$ (3) This part is similar to the previous cases, considering matrix multiplication from the right. Namely, a graded left $A$-module homomorphism $\phi: A^m (-\overline \alpha) \rightarrow A^n(-\overline \delta) $ is represented by multiplication from the right by a matrix in $\operatorname{\mathbb M}_{m\times n} (A)[\overline \alpha][\overline \delta]$, and similarly $\psi$ gives a matrix in $\operatorname{\mathbb M}_{n\times m} (A)[\overline \delta][\overline \alpha]$. The rest follows easily. \end{proof} Setting $m=n=1$ in Proposition~\ref{rndconggrrna}, we obtain the following. \begin{cor} \label{rndcongcori} Let $A$ be a $\Gamma$-graded ring and $\alpha \in \Gamma$. Then the following are equivalent: \begin{enumerate}[\upshape(1)] \item $A(\alpha) \cong_{\operatorname{gr}} A$ as graded right $A$-modules. \item $A(-\alpha) \cong_{\operatorname{gr}} A$ as graded right $A$-modules. \item $A(\alpha) \cong_{\operatorname{gr}} A$ as graded left $A$-modules. \item $A(-\alpha) \cong_{\operatorname{gr}} A$ as graded left $A$-modules. \item There is an invertible homogeneous element of degree $\alpha$. \end{enumerate} \end{cor} \begin{proof} This follows from Proposition~\ref{rndconggrrna}.
\end{proof} \subsection{Leavitt algebras as graded rings} \label{creekside} Let $K$ be a field, $n$ and $k$ positive integers, and $A$ the free associative $K$-algebra generated by symbols $\{x_{ij},y_{ji} \mid 1\leq i \leq n+k, 1\leq j \leq n \}$ subject to the relations (coming from) \[ Y\cdot X=I_{n,n} \qquad \text{ and } \qquad X\cdot Y=I_{n+k,n+k}, \] where \begin{equation} \label{breaktr} Y=\left( \begin{matrix} y_{11} & y_{12} & \dots & y_{1,n+k}\\ y_{21} & y_{22} & \dots & y_{2,n+k}\\ \vdots & \vdots & \ddots & \vdots\\ y_{n,1} & y_{n,2} & \dots & y_{n,n+k} \end{matrix} \right), ~~~~ X=\left( \begin{matrix} x_{11} & x_{12} & \dots & x_{1,n}\\ x_{21} & x_{22} & \dots & x_{2,n}\\ \vdots & \vdots & \ddots & \vdots\\ x_{n+k,1} & x_{n+k,2} & \dots & x_{n+k,n} \end{matrix} \right). \end{equation} This algebra was studied by Leavitt in relation to its type in \cite{vitt56,vitt57,vitt62}. In~\cite[p.190]{vitt56} he studied this algebra for $n=2$ and $k=1$, where he showed that the algebra has no zero divisors, in~\cite[p.322]{vitt57} for arbitrary $n$ and $k=1$, and in~\cite[p.130]{vitt62} for arbitrary $n$ and $k$, where he established that these algebras are of type $(n,k)$ (see \S\ref{gtr5654}) and that for $n\geq 2$ they are domains. We denote this algebra by $\operatorname{\mathcal L}(n,k+1)$. (Cohn's notation in~\cite{cohn11} for this algebra is $V_{n,n+k}$.) Throughout the text we sometimes denote $\operatorname{\mathcal L}(1,k)$ by $\operatorname{\mathcal L}_k$. Assigning $\deg(y_{ji})=(0,\dots,0,1,0,\dots,0)$ and $\deg(x_{ij})=(0,\dots,0,-1,0,\dots,0)$, $1\leq i \leq n+k$, $1\leq j \leq n$, in $\bigoplus_n \mathbb Z$, where $1$ and $-1$ are in the $j$-th entries respectively, makes the free algebra generated by the $x_{ij}$ and $y_{ji}$ a graded ring.
Furthermore, one can easily observe that the relations coming from~(\ref{breaktr}) are all homogeneous with respect to this grading, so that the Leavitt algebra $\operatorname{\mathcal L}(n,k+1)$ is a $\bigoplus_n \mathbb Z$-graded ring. Therefore, $\operatorname{\mathcal L}(1,k+1)$ is a $\mathbb Z$-graded ring. In fact this is a strongly graded ring by Theorem~\ref{hazst}. \subsection{Graded IBN and graded type}\label{gtr5654} A ring $A$ has an {\it invariant basis number} (IBN) if any two bases of a free (right) $A$-module have the same cardinality, i.e., if $A^n \cong A^{m}$ as $A$-modules, then $n=m$. When $A$ does not have IBN, the {\it type} of $A$ is defined as a pair of positive integers $(n,k)$ such that $A^n \cong A^{n+k}$ as $A$-modules and these are the smallest numbers with this property. It was shown that if $A$ has type $(n,k)$, then $A^m\cong A^{m'}$ if and only if $m=m'$, or $m,m' \geq n$ and $m \equiv m' \pmod{k}$ (see \cite[p.~225]{cohn11}, \cite[Theorem~1]{vitt62}). A graded ring $A$ has a {\it graded invariant basis number} (gr-IBN) if any two homogeneous bases of a graded free (right) $A$-module have the same cardinality, i.e., if $ A^m(\overline \alpha)\cong_{\operatorname{gr}} A^n(\overline \delta) $, where $\overline \alpha=(\alpha_1,\dots,\alpha_m)$ and $\overline \delta=(\delta_1,\dots,\delta_n)$, then $m=n$. Note that, contrary to the non-graded case, this does not imply that two graded free modules with bases of the same cardinality are graded isomorphic (see Proposition~\ref{rndconggrrna}). A graded ring $A$ has {\it IBN in} $\mbox{gr-}A$ if $A^m \cong_{\operatorname{gr}} A^n$ implies $m=n$. If $A$ has IBN in $\mbox{gr-}A$, then $A_0$ has IBN. Indeed, if $A_0^m \cong A_0^n$ as $A_0$-modules, then $A^m\cong_{\operatorname{gr}} A_0^m \otimes_{A_0} A \cong A_0^n \otimes_{A_0} A \cong_{\operatorname{gr}}A^n$, so $n=m$ (see \cite[p.~215]{grrings}).
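For example, with the $\mathbb Z$-grading above, the isomorphism~(\ref{is329ho22}) for the Leavitt algebra $A=\operatorname{\mathcal L}_2$ reads
\[
A\cong_{\operatorname{gr}} A(-1)\oplus A(-1)=A^2(-1,-1),
\]
so $A$ does not have gr-IBN, while (as recorded in Proposition~\ref{grtypel1}) it does have IBN in $\mbox{gr-}A$: an isomorphism $A^m\cong_{\operatorname{gr}}A^n$ with no suspensions forces $m=n$.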
When the graded ring $A$ does not have gr-IBN, the {\it graded type} of $A$ is defined as a pair of positive integers $(n,k)$ such that $A^n(\overline \delta) \cong_{\operatorname{gr}} A^{n+k}(\overline \alpha)$ as $A$-modules, for some $\overline \delta=(\delta_1,\dots,\delta_n)$ and $\overline \alpha=(\alpha_1,\dots,\alpha_{n+k})$, and these are the smallest numbers with this property. In Proposition~\ref{grtypel1} we show that the Leavitt algebra $\operatorname{\mathcal L}(n,k+1)$ has graded type $(n,k)$. Using graded $K$-theory we will also show (see Corollary~\ref{k43shab}) that \[\operatorname{\mathcal L}(1,n)^k (\lambda_1,\dots,\lambda_k) \cong_{\operatorname{gr}} \operatorname{\mathcal L}(1,n)^{k'}(\gamma_1,\dots,\gamma_{k'})\] if and only if $\sum_{i=1}^k n^{\lambda_i}=\sum_{i=1}^{k'} {n}^{\gamma_i}$. Let $A$ be a $\Gamma$-graded ring such that $A^m(\overline \alpha) \cong_{\operatorname{gr}} A^n(\overline \delta)$, where $\overline \alpha=(\alpha_1,\dots,\alpha_m)$ and $\overline \delta=(\delta_1,\dots,\delta_n)$. Then there is a universal $\Gamma$-graded ring $R$ such that $ R^m(\overline \alpha) \cong_{\operatorname{gr}} R^n(\overline \delta)$, and a graded ring homomorphism $R\rightarrow A$ which induces the graded isomorphism $A^m(\overline \alpha) \cong_{\operatorname{gr}} R^m(\overline \alpha) \otimes_R A \cong_{\operatorname{gr}} R^n(\overline \delta) \otimes_R A \cong_{\operatorname{gr}} A^n(\overline \delta)$. Indeed, by Proposition~\ref{rndconggrrna}, there are matrices $a=(a_{ij}) \in \operatorname{\mathbb M}_{n\times m} (A)[\overline \delta][\overline \alpha]$ and $b=(b_{ij}) \in \operatorname{\mathbb M}_{m\times n} (A)[\overline \alpha][\overline \delta]$ such that $ab=I_{n}$ and $ba=I_{m}$.
The free ring generated by symbols in place of the $a_{ij}$ and $b_{ij}$, subject to the relations imposed by $ab=I_{n}$ and $ba=I_{m}$, is the desired universal graded ring. In detail, let $F$ be the free ring generated by $x_{ij}$, $1\leq i \leq n$, $1\leq j \leq m$, and $y_{ij}$, $1\leq i \leq m$, $1\leq j \leq n$. Assign the degrees $\deg(x_{ij})=\delta_i-\alpha_j$ and $\deg(y_{ij})=\alpha_i-\delta_j$. This makes $F$ a $\Gamma$-graded ring. Let $R$ be the ring $F$ modulo the relations $\sum_{s=1}^m x_{is}y_{sk}=\delta_{ik}$, $1\leq i,k \leq n$, and $\sum_{t=1}^n y_{it}x_{tk}=\delta_{ik}$, $1\leq i,k \leq m$, where $\delta_{ik}$ is the Kronecker delta. Since all the relations are homogeneous, $R$ is a $\Gamma$-graded ring. Clearly the map sending $x_{ij}$ to $a_{ij}$ and $y_{ij}$ to $b_{ij}$ induces a graded ring homomorphism $R \rightarrow A$. Again Proposition~\ref{rndconggrrna} shows that $R^m(\overline \alpha) \cong_{\operatorname{gr}} R^n(\overline \delta) $. \begin{proposition}\label{grtypel1} Let $R=\operatorname{\mathcal L}(n,k+1)$ be the Leavitt algebra of type $(n,k)$. Then \begin{enumerate}[\upshape(1)] \item $R$ is a universal $\bigoplus_n \mathbb Z$-graded ring which does not have gr-IBN. \item $R$ has graded type $(n,k)$. \item For $n=1$, $R$ has IBN in $\mbox{gr-}R$. \end{enumerate} \end{proposition} \begin{proof} (1) Consider the Leavitt algebra $\operatorname{\mathcal L}(n,k+1)$ constructed in (\ref{breaktr}), which is a $\bigoplus_n \mathbb Z$-graded ring and is universal. Furthermore, (\ref{breaktr}) combined with Proposition~\ref{rndconggrrna}(3) shows that $R^n \cong_{\operatorname{gr}} R^{n+k}(\overline \alpha)$. Here $\overline \alpha =(\alpha_1,\dots,\alpha_{n+k})$, where $\alpha_i=(0,\dots,0,1,0,\dots,0)$ with $1$ in the $i$-th entry. This shows that $R=\operatorname{\mathcal L}(n,k+1)$ does not have gr-IBN.
(2) By \cite[Theorem~6.1]{cohn11}, $R$ is of type $(n,k)$. This immediately implies that the graded type of $R$ is also $(n,k)$. (3) Suppose $R^n \cong_{\operatorname{gr}} R^m$. Then $R_0^n \cong R_0^m$ as $R_0$-modules. But $R_0$ is an ultramatricial algebra, i.e., a direct limit of matrix algebras over a field. Since IBN respects direct limits (\cite[Theorem~2.3]{cohn11}), $R_0$ has IBN. Therefore, $n=m$. \end{proof} \begin{remark} Assigning $\deg(y_{ij})=1$ and $\deg(x_{ij})=-1$, for all $i,j$, makes $R=\operatorname{\mathcal L}(n,k+1)$ a $\mathbb Z$-graded algebra of graded type $(n,k)$ with $R^n\cong_{\operatorname{gr}} R^{n+k}(1)$. \end{remark} Recall from~\S\ref{pregr529} that $\mathcal P \mathrm{gr}(A)$ denotes the category of graded finitely generated projective right $A$-modules with graded homomorphisms, and that $\mathcal T_\alpha (P)=P(\alpha)$, where $\mathcal T_\alpha$ is the suspension functor and $P$ a graded finitely generated $A$-module. \begin{prop}[Graded Morita Equivalence] \label{grmorita} Let $A$ be a $\Gamma$-graded ring and let $\overline \delta = (\delta_1 , \ldots , \delta_n)$, where $\delta_i \in \Gamma$, $1\leq i \leq n$. Then the functors \begin{align*} \psi : \mathcal P \mathrm{gr}( \operatorname{\mathbb M}_n(A)(\overline \delta) ) & \longrightarrow \mathcal P \mathrm{gr}( A) \\ P & \longmapsto P \otimes_{\operatorname{\mathbb M}_n(A)(\overline \delta)} A^n(-\overline \delta) \\ \textrm{ and \;\;\;\; } \varphi : \mathcal P \mathrm{gr}( A) & \longrightarrow \mathcal P \mathrm{gr} ( \operatorname{\mathbb M}_n(A)(\overline \delta)) \\ Q & \longmapsto Q \otimes_A A^n(\overline \delta) \end{align*} form equivalences of categories and commute with suspensions, i.e., $\psi \mathcal T_{\alpha}=\mathcal T_{\alpha} \psi$, $\alpha \in \Gamma$.
\end{prop} \begin{proof} One can check that there is a graded $A$-module isomorphism \begin{align*} \theta : \; A^n(\overline\delta)\otimes_{\operatorname{\mathbb M}_n(A)(\overline\delta)} A^n(-\overline \delta) & \longrightarrow A \\ (a_1 , \ldots , a_n ) \otimes (b_1, \ldots , b_n) & \longmapsto a_1 b_1 + \cdots + a_n b_n. \end{align*} Furthermore, there is a graded $\operatorname{\mathbb M}_n(A)(\overline \delta)$-isomorphism \begin{align*} \theta' : \; A^n(-\overline\delta) \otimes_{A} A^n(\overline\delta) & \longrightarrow \operatorname{\mathbb M}_n(A)(\overline\delta)\\ \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix} \otimes \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix} & \longmapsto \begin{pmatrix} a_{1} b_1 & \cdots & a_{1} b_n \\ \vdots & & \vdots \\ a_{n} b_1 & \cdots & a_{n} b_n \end{pmatrix}. \end{align*} Now it easily follows that $\varphi \psi$ and $\psi \varphi$ are equivalent to the identity functors. The general fact that $(P\otimes Q) (\alpha)= P(\alpha)\otimes Q =P \otimes Q(\alpha)$ shows that the suspension functor commutes with $\psi$ and $\varphi$. \end{proof} \subsection{The Graded Grothendieck group} Let $A$ be a $\Gamma$-graded ring with identity and let $\mathcal V^{\operatorname{gr}}(A)$ denote the monoid of isomorphism classes of graded finitely generated projective modules over $A$. Then the graded Grothendieck group, $K_0^{\operatorname{gr}}(A)$, is defined as the group completion of $\mathcal V^{\operatorname{gr}}(A)$. Note that for $\alpha \in \Gamma$, the {\it $\alpha$-suspension functor} $\mathcal T_\alpha:\mbox{gr-}A\rightarrow \mbox{gr-}A$, $M \mapsto M(\alpha)$, is an isomorphism with the property $\mathcal T_\alpha \mathcal T_\beta=\mathcal T_{\alpha + \beta}$, $\alpha,\beta\in \Gamma$. Furthermore, $\mathcal T_\alpha$ restricts to the category of graded finitely generated projective modules.
This induces a $\mathbb Z[\Gamma]$-module structure on $K_i^{\operatorname{gr}}(A)$, $i\geq 0$, defined by $\alpha [P] =[P(\alpha)]$ on the generators and extended naturally. In particular, if $A$ is a $\mathbb Z$-graded ring, then $K_i^{\operatorname{gr}}(A)$ is a $\mathbb Z[x,x^{-1}]$-module for $i\geq 0$. Recall that there is a description of $K_0$ in terms of idempotent matrices. In the following, we can always enlarge matrices of different sizes by adding zeros in the lower right hand corner, so that they can be considered in a ring $\operatorname{\mathbb M}_k(A)$ for a suitable $k\in \mathbb N$. Any idempotent matrix $p \in \operatorname{\mathbb M}_n(A)$ gives rise to the finitely generated projective right $A$-module $pA^n$. On the other hand, any finitely generated projective module gives rise to an idempotent matrix. We say two idempotent matrices $p$ and $q$ are equivalent if (after suitably enlarging them) there are matrices $x$ and $y$ such that $xy=p$ and $yx=q$. One can show that $p$ and $q$ are equivalent if and only if they are conjugate, if and only if the corresponding finitely generated projective modules are isomorphic. Therefore $K_0(A)$ can be defined as the group completion of the monoid of equivalence classes of idempotent matrices with the addition $[p]+[q]=\Big[\begin{pmatrix} p & 0\\ 0 & q \end{pmatrix}\Big]$. In fact, this is the definition one adopts for $K_0$ when the ring $A$ does not have an identity. A similar construction can be given in the setting of graded rings. Since this does not seem to be documented in the literature, we give the details here for the convenience of the reader. Let $A$ be a $\Gamma$-graded ring and $\overline \alpha=(\alpha_1,\dots,\alpha_n)$, where $\alpha_i \in \Gamma$.
In the following, if we need to enlarge a homogeneous matrix $p\in \operatorname{\mathbb M}_n(A)(\overline \alpha)$ by adding zeros in the lower right hand corner, then we also extend $\overline \alpha=(\alpha_1,\dots,\alpha_n)$ on the right hand side by zeros accordingly (and call it $\overline \alpha$ again), so that $p$ is a homogeneous matrix in $\operatorname{\mathbb M}_k(A)(\overline \alpha)$, where $k\geq n$. Recall the definition of $\operatorname{\mathbb M}_k(A)[\overline \alpha][\overline \delta]$ from \S\ref{matgrhe} and note that if $x \in \operatorname{\mathbb M}_k(A)[\overline \alpha][\overline \delta]$ and $y \in \operatorname{\mathbb M}_k(A)[\overline \delta][\overline \alpha]$, then $xy \in \operatorname{\mathbb M}_k(A)(-\overline \alpha)_0$ and $yx\in \operatorname{\mathbb M}_k(A)(-\overline\delta)_0$. \begin{deff}\label{equison} Let $A$ be a $\Gamma$-graded ring, $\overline \alpha=(\alpha_1,\dots,\alpha_n)$ and $\overline \delta=(\delta_1,\dots,\delta_m)$, where $\alpha_i, \delta_j \in \Gamma$. Let $p\in \operatorname{\mathbb M}_n(A)(\overline \alpha)_0$ and $q\in \operatorname{\mathbb M}_m(A)(\overline \delta)_0$ be idempotent matrices (which are homogeneous of degree zero). Then $p$ and $q$ are {\it graded equivalent} if (after suitably enlarging them) there are $x \in \operatorname{\mathbb M}_k(A)[-\overline \alpha][-\overline \delta]$ and $y \in \operatorname{\mathbb M}_k(A)[-\overline \delta][-\overline \alpha]$ such that $xy=p$ and $yx=q$. \end{deff} \begin{lemma}\label{kzeiogr} \begin{enumerate}[\upshape(1)] \item Any graded finitely generated projective module gives rise to a homogeneous idempotent matrix of degree zero. \item Any homogeneous idempotent matrix of degree zero gives rise to a graded finitely generated projective module.
\item Two homogeneous idempotent matrices of degree zero are graded equivalent if and only if the corresponding graded finitely generated projective modules are graded isomorphic. \end{enumerate} \end{lemma} \begin{proof} (1) Let $P$ be a graded finitely generated projective (right) $A$-module. Then there is a graded module $Q$ such that $P\oplus Q\cong_{\operatorname{gr}} A^n(-\overline \alpha)$ for some $n \in \mathbb N$ and $\overline \alpha=(\alpha_1,\dots,\alpha_n)$, where $\alpha_i \in \Gamma$. Define the homomorphism $f \in \operatorname{End}_A(A^n(-\overline \alpha))$ which sends $Q$ to zero and acts as the identity on $P$. Clearly, $f$ is an idempotent graded homomorphism of degree $0$. Thus (see~(\ref{hgd543p}) and \S\ref{matgrhe}) \[f \in \operatorname{End}_A(A^n(-\overline \alpha))_0\cong \operatorname{\mathbb M}_n(A)(\overline \alpha)_0.\] (2) Let $p\in \operatorname{\mathbb M}_n(A)(\overline \alpha)_0$, $\overline \alpha=(\alpha_1,\dots,\alpha_n)$, where $\alpha_i \in \Gamma$. Then $1-p \in \operatorname{\mathbb M}_n(A)(\overline \alpha)_0$ and \[A^n(-\overline \alpha) = pA^n(-\overline \alpha) \oplus (1-p)A^n(-\overline \alpha).\] This shows that $pA^n(-\overline \alpha)$ is a graded finitely generated projective $A$-module. (3) Let $p\in \operatorname{\mathbb M}_n(A)(\overline \alpha)_0$ and $q\in \operatorname{\mathbb M}_m(A)(\overline \delta)_0$ be graded equivalent idempotent matrices. By Definition~\ref{equison}, there are $x' \in \operatorname{\mathbb M}_k(A)[-\overline \alpha][-\overline \delta]$ and $y' \in \operatorname{\mathbb M}_k(A)[-\overline \delta][-\overline \alpha]$ such that $x'y'=p$ and $y'x'=q$. Let $x=px'q$ and $y=qy'p$. Then $xy=px'qy'p=p(x'y')^2p=p$ and similarly $yx=q$. Furthermore, $x=px=xq$ and $y=yp=qy$.
Now left multiplication by $x$ and $y$ induces right graded $A$-homomorphisms $qA^k(-\overline \delta)\rightarrow pA^k(-\overline \alpha)$ and $pA^k(-\overline \alpha)\rightarrow qA^k(-\overline \delta)$, respectively, which are inverse to each other. Therefore $pA^k(-\overline \alpha)\cong_{\operatorname{gr}} qA^k(-\overline \delta)$. On the other hand, if $f:pA^k(-\overline \alpha)\cong_{\operatorname{gr}} qA^k(-\overline \delta)$, then extend $f$ to $A^k(-\overline \alpha)$ by sending $(1-p)A^k(-\overline \alpha)$ to zero, and thus define a map \[\theta:A^k(-\overline \alpha)=pA^k(-\overline \alpha)\oplus (1-p)A^k(-\overline \alpha) \longrightarrow qA^k(-\overline \delta)\oplus (1-q)A^k(-\overline \delta)=A^k(-\overline \delta).\] Similarly, extending $f^{-1}$ to $A^k(-\overline \delta)$, we get a map \[\phi:A^k(-\overline \delta)=qA^k(-\overline \delta)\oplus (1-q)A^k(-\overline \delta) \longrightarrow pA^k(-\overline \alpha)\oplus (1-p)A^k(-\overline \alpha)=A^k(-\overline \alpha)\] such that $\theta \phi=p$ and $\phi \theta=q$. It follows (see the proof of Proposition~\ref{rndconggrrna}) that $\theta \in \operatorname{\mathbb M}_k(A)[-\overline \alpha][-\overline \delta]$, whereas $\phi \in \operatorname{\mathbb M}_k(A)[-\overline \delta][-\overline \alpha]$. This shows that $p$ and $q$ are graded equivalent. \end{proof} Lemma~\ref{kzeiogr} shows that $K_0^{\operatorname{gr}}(A)$ can be defined as the group completion of the monoid of equivalence classes of homogeneous idempotent matrices of degree zero, with addition $[p]+[q]=\Big[\begin{pmatrix} p & 0\\ 0 & q \end{pmatrix}\Big]$. In fact, this is the definition we adopt for $K_0^{\operatorname{gr}}$ when the graded ring $A$ does not have an identity.
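As a quick sanity check of the lemma in the simplest situation (an illustration we add here, with all shifts trivial), take $\overline \alpha=\overline \delta=(0,0)$ and consider the standard matrix units in $\operatorname{\mathbb M}_2(A)(0,0)_0$:
\[ p=e_{11}=\begin{pmatrix}1&0\\0&0\end{pmatrix},\qquad q=e_{22}=\begin{pmatrix}0&0\\0&1\end{pmatrix},\qquad x=e_{12},\qquad y=e_{21}.\]
Here $x$ and $y$ are homogeneous of degree zero, and $xy=e_{11}=p$, $yx=e_{22}=q$, so $p$ and $q$ are graded equivalent; correspondingly $pA^2\cong_{\operatorname{gr}} A\cong_{\operatorname{gr}} qA^2$, as the lemma predicts.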
Note that the action of $\Gamma$ on the idempotent matrices (so that $K^{\operatorname{gr}}_0(A)$ becomes a $\mathbb Z[\Gamma]$-module with this definition as well) is as follows: for $\gamma \in \Gamma$ and $p\in \operatorname{\mathbb M}_n(A)(\overline \alpha)_0$, $\gamma p$ is represented by the same matrix as $p$, but considered in $\operatorname{\mathbb M}_n(A)(\overline \alpha+\gamma)_0$, where $\overline \alpha+\gamma=(\alpha_1+\gamma,\dots, \alpha_n+\gamma)$. A quick inspection of the proof of Lemma~\ref{kzeiogr} shows that the action of $\Gamma$ is compatible with both definitions of $K^{\operatorname{gr}}_0$. In the case of graded fields, one can compute the graded Grothendieck group completely. This will be used to compute $K^{\operatorname{gr}}_0$ of acyclic and comet graphs in Theorems~\ref{f4j5h6h8} and \ref{cothemp}. \begin{prop} \label{k0grof} Let $A$ be a $\Gamma$-graded field with support the subgroup $\Gamma_A$. Then the monoid of isomorphism classes of $\Gamma$-graded finitely generated projective $A$-modules is $\mathbb N[\Gamma /\Gamma_A]$ and $K_0^{\operatorname{gr}}(A) \cong \mathbb Z[\Gamma /\Gamma_A]$ as a $\mathbb Z[\Gamma]$-module. Furthermore, $[A^n(\delta_1,\dots,\delta_n)]\in K_0^{\operatorname{gr}}(A)$ corresponds to $\sum_{i=1}^n \underline \delta_i$ in $\mathbb Z[\Gamma/\Gamma_A]$, where $\underline \delta_i=\Gamma_A+\delta_i$. In particular, if $A$ is a (trivially graded) field and $\Gamma$ a group, with $A$ considered as a $\Gamma$-graded field concentrated in degree zero, then $K_0^{\operatorname{gr}}(A) \cong \mathbb Z[\Gamma]$ and $[A^n(\delta_1,\dots,\delta_n)]\in K_0^{\operatorname{gr}}(A)$ corresponds to $\sum_{i=1}^n \delta_i$ in $\mathbb Z[\Gamma]$. \end{prop} \begin{proof} By Proposition~\ref{rndconggrrna}, $A(\delta_1) \cong_{\operatorname{gr}} A(\delta_2)$ if and only if $\delta_1-\delta_2 \in \Gamma_A$.
Thus any graded free module of rank $1$ is graded isomorphic to some $A(\delta_i)$, where $\{\delta_i\}_{i \in I}$ is a complete set of coset representatives of the subgroup $\Gamma_A$ in $\Gamma$, i.e., $\{\Gamma_A+\delta_i \mid i\in I\}$ represents $\Gamma/\Gamma_A$. Since any graded finitely generated module $M$ over $A$ is graded free (see~\S\ref{pregr529}), \begin{equation}\label{m528h} M \cong_{\operatorname{gr}} A(\delta_{i_1})^{r_1} \oplus \dots \oplus A(\delta_{i_k})^{r_k}, \end{equation} where $\delta_{i_1},\dots, \delta_{i_k}$ are distinct elements of the coset representatives. Now suppose \begin{equation}\label{m23gds} M \cong_{\operatorname{gr}} A(\delta_{{i'}_1})^{{s}_1} \oplus \dots \oplus A(\delta_{{i'}_{k'}})^{s_{k'}}. \end{equation} Considering the $A_0$-module $M_{-\delta_{i_1}}$, from~(\ref{m528h}) we have $M_{-\delta_{i_1}}=A_0^{r_1}$. This implies that one of the $\delta_{{i'}_j}$, $1\leq j \leq k'$, say $\delta_{{i'}_1}$, has to be $\delta_{i_1}$, and so $r_1=s_1$ as $A_0$ is a field. Repeating the same argument for each $\delta_{i_j}$, $1\leq j \leq k$, we see that $k=k'$, $\delta_{{i'}_j}=\delta_{i_j}$ and $r_j=s_j$ for all $1\leq j \leq k$ (possibly after a suitable permutation). Thus any graded finitely generated projective $A$-module can be written uniquely as $M \cong_{\operatorname{gr}} A(\delta_{i_1})^{r_1} \oplus \dots \oplus A(\delta_{i_k})^{r_k}$, where $\delta_{i_1},\dots, \delta_{i_k}$ are distinct elements of the coset representatives.
The (well-defined) map \begin{align} \mathcal V^{\operatorname{gr}}(A) & \rightarrow \mathbb N[\Gamma/\Gamma_A] \label{attitude}\\ [A(\delta_{i_1})^{r_1} \oplus \dots \oplus A(\delta_{i_k})^{r_k} ] & \mapsto r_1(\Gamma_A+\delta_{i_1})+\dots+ r_k(\Gamma_A+\delta_{i_k}), \notag \end{align} gives an $\mathbb N[\Gamma]$-monoid isomorphism between the monoid $\mathcal V^{\operatorname{gr}} (A)$ of isomorphism classes of $\Gamma$-graded finitely generated projective $A$-modules and $\mathbb N[\Gamma /\Gamma_A]$. The rest of the proof follows easily. \end{proof} \begin{example}\label{upst} Using Proposition~\ref{k0grof}, we calculate the graded $K_0$ of two types of graded fields and determine the action of $\mathbb Z[x,x^{-1}]$ on these groups. These are the graded fields which arise in the graded $K_0$ of Leavitt path algebras of acyclic and $C_n$-comet graphs, respectively (see Definitions~\ref{mulidef} and~\ref{cometi}). \begin{enumerate} \item Let $K$ be a field. Consider $A=K$ as a $\mathbb Z$-graded field with support $\Gamma_A=0$, i.e., $A$ is concentrated in degree $0$. By Proposition~\ref{k0grof}, $K_0^{\operatorname{gr}}(A)\cong \mathbb Z[x,x^{-1}]$ as a $\mathbb Z[x,x^{-1}]$-module. \item Let $A=K[x^n,x^{-n}]$ be the $\mathbb Z$-graded field with $\Gamma_A=n\mathbb Z$. By Proposition~\ref{k0grof}, \[ K_0^{\operatorname{gr}}(A)\cong \mathbb Z \big [ \mathbb Z / n \mathbb Z \big ]\cong \bigoplus_n \mathbb Z \] as a $\mathbb Z[x,x^{-1}]$-module. The action of $x$ on $(a_1,\dots,a_n) \in \bigoplus_n \mathbb Z$ is given by $x (a_1,\dots,a_n)= (a_n,a_1,\dots,a_{n-1})$. \end{enumerate} \end{example} \subsection{Dade's theorem}\label{dadestmal} Let $A$ be a strongly $\Gamma$-graded ring. (This implies $\Gamma_A=\Gamma$.)
By Dade's Theorem \cite[Thm.~3.1.1]{grrings}, the functor $(-)_0:\mbox{gr-}A\rightarrow \mbox{mod-}A_0$, $M\mapsto M_0$, is an additive functor with an inverse $-\otimes_{A_0} A: \mbox{mod-}A_0 \rightarrow \mbox{gr-}A$, so that it induces an equivalence of categories. This implies that $K_i^{\operatorname{gr}}(A) \cong K_i(A_0)$ for $i\geq 0$. Furthermore, since $A_\alpha \otimes_{A_0}A_\beta \cong A_{\alpha +\beta}$ as $A_0$-bimodules, the functor $\mathcal T_\alpha$ on $\mbox{gr-}A$ induces a functor on the level of $\mbox{mod-}A_0$, namely $\mathcal T_\alpha:\mbox{mod-}A_0 \rightarrow \mbox{mod-}A_0$, $M\mapsto M \otimes_{A_0} A_\alpha$, such that $\mathcal T_\alpha \mathcal T_\beta\cong\mathcal T_{\alpha +\beta}$, $\alpha,\beta\in \Gamma$, so that the following diagram is commutative. \begin{equation}\label{veronaair} \xymatrix{ \mbox{gr-}A \ar[r]^{\mathcal T_\alpha} \ar[d]_{(-)_0}& \mbox{gr-}A \ar[d]^{(-)_0}\\ \mbox{mod-}A_0 \ar[r]_{\mathcal T_\alpha} & \mbox{mod-}A_0 } \end{equation} Therefore $K_i(A_0)$ is also a $\mathbb Z[\Gamma]$-module and \begin{equation} \label{dade} K_i^{\operatorname{gr}}(A) \cong K_i(A_0) \end{equation} as $\mathbb Z[\Gamma]$-modules. \subsection{} \label{kidenh} Let $A$ be a graded ring and $f:A\rightarrow A$ an inner-automorphism defined by $f(a)=r a r^{-1}$, where $r$ is a homogeneous invertible element of $A$ of degree $\tau$. Then, considering $A$ as a graded left $A$-module via $f$, it is easy to observe that there is a right graded $A$-module isomorphism $P(-\tau) \rightarrow P \otimes_f A$, $p \mapsto p \otimes r$ (with inverse $p\otimes a \mapsto p r^{-1}a$). This induces an isomorphism between the functor $-\otimes_f A :\mathcal P \mathrm{gr}(A) \rightarrow \mathcal P \mathrm{gr}(A)$ and the $\tau$-suspension functor $\mathcal T_\tau:\mathcal P \mathrm{gr}(A) \rightarrow \mathcal P \mathrm{gr}(A)$.
Recall that Quillen's $K_i$-group, $i\geq 0$, is a functor from the category of exact categories with exact functors to the category of abelian groups. Furthermore, isomorphic functors induce the same map on the $K$-groups \cite[p.~19]{quillen}. Thus $K^{\operatorname{gr}}_i(f)=K^{\operatorname{gr}}_i(\mathcal T_\tau)$. Therefore, if $r$ is a homogeneous element of degree zero, i.e., $\tau=0$, then $K^{\operatorname{gr}}_i(f):K^{\operatorname{gr}}_i(A)\rightarrow K^{\operatorname{gr}}_i(A)$ is the identity map. This will be used in Theorem~\ref{mani543}. \subsection{Homogeneous idempotents}\label{idemptis} The following facts about idempotents are well known in the non-graded setting, and one can check that they carry over to the graded setting with similar proofs (cf. \cite[\S 21]{lamfc}). Let $P_i$, $1\leq i \leq l$, be homogeneous right ideals of $A$ such that $A=P_1\oplus\dots \oplus P_l$. (Recall that a homogeneous ideal, or a graded ideal, is an ideal which is generated by homogeneous elements.) Then there are homogeneous orthogonal idempotents $e_i$ (hence of degree zero) such that $1=e_1+\dots+e_l$ and $e_iA=P_i$. Let $e$ and $f$ be homogeneous idempotent elements in the graded ring $A$. (Note that, in general, there are non-homogeneous idempotents in a graded ring.) Let $\theta:eA \rightarrow fA$ be a graded $A$-module homomorphism. Then $\theta(e)=\theta(e^2)=\theta(e)e=fae$ for some $a\in A$, and for $b \in eA$, $\theta(b)=\theta(eb)=\theta(e)b$. This shows that there is a map \begin{align}\label{hgwob} \operatorname{Hom}_A(eA,fA) & \rightarrow fAe\\ \theta & \mapsto \theta(e) \notag \end{align} and one can prove that this is a graded group isomorphism. Now if $\theta:eA\rightarrow fA$ is a graded $A$-isomorphism, then $x=\theta(e)\in fAe$ and $y=\theta^{-1}(f)\in eAf$ are homogeneous of degree zero and satisfy $yx=e$ and $xy=f$.
Finally, for $f=e$, the map~(\ref{hgwob}) gives a graded ring isomorphism $\operatorname{End}_A(eA) \cong_{\operatorname{gr}} eAe$. These facts will be used in Theorem~\ref{catgrhsf}. \subsection{$\mathbf \Gamma$-pre-ordered groups} \label{pregg5} Here we define the category of $\Gamma$-pre-ordered abelian groups. Classically, the objects are abelian groups with a pre-ordering. However, since we will consider graded Grothendieck groups, our groups are $\mathbb Z[x,x^{-1}]$-modules, so we need to adapt the definitions to this setting. Let $\Gamma$ be a group and $G$ a (left) $\Gamma$-module. Let $\leq$ be a reflexive and transitive relation on $G$ which respects the group and the module structures, i.e., for $\gamma \in \Gamma$ and $x,y,z \in G$, if $x\leq y$, then $\gamma x \leq \gamma y$ and $x+z\leq y+z$. We call $G$ a {\it $\Gamma$-pre-ordered group}, or simply a pre-ordered group when $\Gamma$ is clear from the context. The {\it cone} of $G$ is defined as $\{x \in G \mid x\geq 0\}$ and denoted by $G^{+}$. The set $G^{+}$ is a $\Gamma$-submonoid of $G$, i.e., a submonoid which is closed under the action of $\Gamma$. In fact, giving such a pre-ordering on $G$ is equivalent to specifying a $\Gamma$-submonoid of $G$ as its cone. (Since $G$ is a $\Gamma$-module, it can be considered as a $\mathbb Z[\Gamma]$-module.) An {\it order-unit} in $G$ is an element $u \in G^{+}$ such that for any $x\in G$, there are $\alpha_1,\dots,\alpha_n \in \Gamma$, $n \in \mathbb N$, such that $x\leq \sum_{i=1}^n \alpha_i u$. As usual, we only consider homomorphisms which preserve the pre-ordering, i.e., $\Gamma$-homomorphisms $f:G\rightarrow H$ such that $f(G^{+}) \subseteq H^{+}$.
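For a concrete illustration of an order-unit (an example we add here), take $\Gamma=\mathbb Z$, $G=\mathbb Z[x,x^{-1}]$ with cone $G^{+}=\mathbb N[x,x^{-1}]$, and $u=1$. For any $g=\sum_i a_i x^i \in G$, taking each exponent $i$ with $a_i>0$, repeated $a_i$ times, gives
\[ g \;\leq\; \sum_{a_i>0} a_i\, x^i\cdot 1, \]
since the difference $\sum_{a_i<0}(-a_i)x^i$ has non-negative coefficients. Hence $1$ is an order-unit in $\mathbb Z[x,x^{-1}]$; this mirrors the role of $[A]$ as an order-unit in $K^{\operatorname{gr}}_0(A)$ described in the example below.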
We denote by $\mathcal P_{\Gamma}$ the category of pointed $\Gamma$-pre-ordered abelian groups with pointed order-preserving homomorphisms, i.e., the objects are pairs $(G,u)$, where $G$ is a $\Gamma$-pre-ordered abelian group and $u$ is an order-unit, and a morphism $f:(G,u)\rightarrow (H,v)$ is an order-preserving $\Gamma$-homomorphism such that $f(u)=v$. Note that when $\Gamma$ is the trivial group, we are in the classical setting of pre-ordered abelian groups. In this paper $\Gamma=\mathbb Z$, so $\mathbb Z[\Gamma]=\mathbb Z[x,x^{-1}]$, and we simply write $\mathcal P$ for $\mathcal P_{\mathbb Z}$ throughout. \begin{example} Let $A$ be a $\Gamma$-graded ring. Then $K^{\operatorname{gr}}_0(A)$ is a $\Gamma$-pre-ordered abelian group, with the set of isomorphism classes of graded finitely generated projective right $A$-modules as the cone of the ordering and $[A]$ as an order-unit. Indeed, if $x\in K^{\operatorname{gr}}_0(A)$, then there are graded finitely generated projective modules $P$ and $P'$ such that $x=[P]-[P']$. But there is a graded module $Q$ such that $P\oplus Q \cong A^n(\overline \alpha)$, where $\overline \alpha =(\alpha_1,\dots,\alpha_n)$, $\alpha_i \in \Gamma$ (see \S\ref{pregr529}). Now \[ [A^n(\overline \alpha)] -x = [P]+[Q]-[P]+[P']=[Q]+[P']=[Q\oplus P'] \in K^{\operatorname{gr}}_0(A)^{+}.\] This shows that $\sum_{i=1}^n \alpha_i [A]=[A^n(\overline \alpha)]\geq x$. \end{example} \subsection{Graded matricial algebras}\label{grmat43} The Leavitt path algebras of acyclic and $C_n$-comet graphs are finite direct products of matrix rings over graded fields of the form $K$ and $K[x^n,x^{-n}]$, respectively (see~\cite[Theorem~4.11 and Theorem~4.17]{haz}). In this section we establish that the graded $K_0$ is a complete invariant for these algebras.
This will be used in the main Theorem~\ref{mani543} to settle Conjecture~\ref{weakconj} in the case of acyclic and multi-headed comet graphs. \begin{deff} Let $A$ be a $\Gamma$-graded field. A $\Gamma$-{\it graded matricial} $A$-algebra is a graded $A$-algebra of the form \[\operatorname{\mathbb M}_{n_1}(A)(\overline \delta_1) \times \dots \times \operatorname{\mathbb M}_{n_l}(A)(\overline \delta_l),\] where $\overline \delta_i=(\delta_{1}^{(i)},\dots,\delta_{n_i}^{(i)})$, $\delta_{j}^{(i)} \in \Gamma$, $1\leq j \leq n_i$ and $1\leq i \leq l$. \end{deff} \begin{lemma}\label{fdpahf} Let $A$ be a $\Gamma$-graded field and $R$ a $\Gamma$-graded matricial $A$-algebra. Let $P$ and $Q$ be graded finitely generated projective $R$-modules. Then $[P]=[Q]$ in $K^{\operatorname{gr}}_0(R)$ if and only if $P \cong _{\operatorname{gr}} Q$. \end{lemma} \begin{proof} Since the functor $K^{\operatorname{gr}}_0$ respects direct sums, it suffices to prove the statement for a matricial algebra of the form $R=\operatorname{\mathbb M}_{n}(A)(\overline {\delta})$. Let $P$ and $Q$ be graded finitely generated projective $R$-modules such that $[P]=[Q]$ in $K^{\operatorname{gr}}_0(R)$. By Proposition~\ref{grmorita}, $R=\operatorname{\mathbb M}_{n}(A)(\overline {\delta})$ is graded Morita equivalent to $A$. So there are functors $\psi$ and $\phi$ with $\psi\phi\cong 1$ and $\phi \psi \cong 1$, which induce an isomorphism $K^{\operatorname{gr}}_0(\psi):K^{\operatorname{gr}}_0(R) \rightarrow K^{\operatorname{gr}}_0(A)$, $[P]\mapsto [\psi(P)]$. Now since $[P]=[Q]$, it follows that $[\psi(P)]=[\psi(Q)]$ in $K^{\operatorname{gr}}_0(A)$. But since $A$ is a graded field, by the proof of Proposition~\ref{k0grof}, any graded finitely generated projective $A$-module can be written uniquely as a direct sum of suitably shifted copies of $A$.
Writing $\psi(P)$ and $\psi(Q)$ in this form, the homomorphism~(\ref{attitude}) shows that $\psi(P)\cong_{\operatorname{gr}} \psi(Q)$. Now applying the functor $\phi$ we obtain $P\cong_{\operatorname{gr}} Q$. \end{proof} Let $A$ be a $\Gamma$-graded field (with support $\Gamma_A$) and let $\mathcal C$ be the category consisting of $\Gamma$-graded matricial $A$-algebras as objects and $A$-graded algebra homomorphisms as morphisms. We consider the quotient category $\mathcal C^{\operatorname{out}}$ obtained from $\mathcal C$ by identifying homomorphisms which differ up to a degree zero graded inner-automorphism. That is, graded homomorphisms $\phi, \psi \in \operatorname{Hom}_{\mathcal C}(R,S)$ represent the same morphism in $\mathcal C^{\operatorname{out}}$ if there is an inner-automorphism $\theta:S\rightarrow S$, defined by $\theta(s)=xsx^{-1}$ with $\deg(x)=0$, such that $\phi=\theta\psi$. The following theorem shows that $K^{\operatorname{gr}}_0$ ``classifies'' the category $\mathcal C^{\operatorname{out}}$. This will be used in Theorem~\ref{mani543}, where Leavitt path algebras of acyclic and multi-headed comet graphs are classified by means of their $K$-groups. This is a graded analogue of a similar theorem for matricial algebras (see~\cite{goodearlbook}). \begin{theorem}\label{catgrhsf} Let $A$ be a $\Gamma$-graded field and let $\mathcal C^{\operatorname{out}}$ be the category consisting of $\Gamma$-graded matricial $A$-algebras as objects and $A$-graded algebra homomorphisms modulo graded inner-automorphisms as morphisms. Then $K^{\operatorname{gr}}_0: \mathcal C^{\operatorname{out}}\rightarrow \mathcal P$ is a fully faithful functor.
Namely, \begin{enumerate}[\upshape(1)] \item (well-defined and faithful) For any graded matricial $A$-algebras $R$ and $S$ and $\phi,\psi\in \operatorname{Hom}_{\mathcal C}(R, S)$, we have $\phi(r)=x\psi(r)x^{-1}$, $r\in R$, for some invertible homogeneous element $x$ of $S$ of degree zero, if and only if $K^{\operatorname{gr}}_0(\phi)=K^{\operatorname{gr}}_0(\psi)$. \item (full) For any graded matricial $A$-algebras $R$ and $S$ and any morphism $f:(K^{\operatorname{gr}}_0(R),[R])\rightarrow (K^{\operatorname{gr}}_0(S),[S])$ in $\mathcal P$, there is $\phi\in \operatorname{Hom}_{\mathcal C}(R, S)$ such that $K^{\operatorname{gr}}_0(\phi)=f$. \end{enumerate} \end{theorem} \begin{proof} (1) (well-defined.) Let $\phi,\psi\in \operatorname{Hom}_{\mathcal C}(R, S)$ such that $\phi=\theta\psi$, where $\theta(s)=xsx^{-1}$ for some invertible homogeneous element $x$ of $S$ of degree $0$. Then $K^{\operatorname{gr}}_0(\phi)=K^{\operatorname{gr}}_0(\theta\psi)=K^{\operatorname{gr}}_0(\theta)K^{\operatorname{gr}}_0(\psi)=K^{\operatorname{gr}}_0(\psi)$, since $K^{\operatorname{gr}}_0(\theta)$ is the identity map (see \S\ref{kidenh}). (faithful.) The rest of the proof is similar to the non-graded version, with extra attention given to the grading (cf. \cite[p.~218]{goodearlbook}). We give it here for completeness. Let $K^{\operatorname{gr}}_0(\phi)=K^{\operatorname{gr}}_0(\psi)$. Let $R=\operatorname{\mathbb M}_{n_1}(A)(\overline \delta_1) \times \dots \times \operatorname{\mathbb M}_{n_l}(A)(\overline \delta_l)$ and set $g_{jk}^{(i)}=\phi(e^{(i)}_{jk})$ and $h_{jk}^{(i)}=\psi(e^{(i)}_{jk})$ for $1\leq i \leq l$ and $1\leq j,k \leq n_i$, where the $e^{(i)}_{jk}$ are the standard matrix units of $\operatorname{\mathbb M}_{n_i}(A)$. Since $\phi$ and $\psi$ are graded homomorphisms, $\deg(e^{(i)}_{jk})=\deg(g^{(i)}_{jk})=\deg(h^{(i)}_{jk})=\delta^{(i)}_j-\delta^{(i)}_k$.
Since the $e^{(i)}_{jj}$ are pairwise orthogonal homogeneous idempotents (of degree zero) in $R$ with $\sum_{i=1}^l \sum_{j=1}^{n_i} e^{(i)}_{jj}=1$, the same is true of the $g^{(i)}_{jj}$ and the $h^{(i)}_{jj}$. Then \[[g^{(i)}_{11}S]=K^{\operatorname{gr}}_0(\phi)([e^{(i)}_{11}R])=K^{\operatorname{gr}}_0(\psi)([e^{(i)}_{11}R])=[h^{(i)}_{11}S].\] By Lemma~\ref{fdpahf}, $g^{(i)}_{11}S \cong_{\operatorname{gr}} h^{(i)}_{11}S$. Thus there are homogeneous elements $x_i$ and $y_i$ of degree zero such that $x_iy_i=g^{(i)}_{11}$ and $y_ix_i=h^{(i)}_{11}$ (see \S\ref{idemptis}). Let $x=\sum_{i=1}^{l}\sum_{j=1}^{n_i}g^{(i)}_{j1}x_ih^{(i)}_{1j}$ and $y=\sum_{i=1}^{l}\sum_{j=1}^{n_i}h^{(i)}_{j1}y_ig^{(i)}_{1j}$. Note that $x$ and $y$ are homogeneous elements of degree zero. One can easily check that $xy=yx=1$. Now for $1\leq i \leq l$ and $1\leq j,k \leq n_i$ we have \begin{align*} x h^{(i)}_{jk}&=\sum_{s=1}^{l}\sum_{t=1}^{n_s}g^{(s)}_{t1}x_sh^{(s)}_{1t}h^{(i)}_{jk}\\ & = g^{(i)}_{j1}x_ih^{(i)}_{1j}h^{(i)}_{jk}=g^{(i)}_{jk}g^{(i)}_{k1}x_ih^{(i)}_{1k}\\ &= \sum_{s=1}^{l}\sum_{t=1}^{n_s}g^{(i)}_{jk}g^{(s)}_{t1}x_sh^{(s)}_{1t}= g^{(i)}_{jk}x. \end{align*} Let $\theta:S\rightarrow S$ be the graded inner-automorphism $\theta(s)=xsy$. Then $\theta\psi(e^{(i)}_{jk})= xh^{(i)}_{jk}y=g^{(i)}_{jk}=\phi(e^{(i)}_{jk})$. Since the $e^{(i)}_{jk}$, $1\leq i \leq l$, $1\leq j,k \leq n_i$, form a homogeneous $A$-basis for $R$, we get $\theta\psi=\phi$. (2) Let $R=\operatorname{\mathbb M}_{n_1}(A)(\overline \delta_1) \times \dots \times \operatorname{\mathbb M}_{n_l}(A)(\overline \delta_l)$ and consider $R_i= \operatorname{\mathbb M}_{n_i}(A)(\overline \delta_i)$, $1\leq i \leq l$. Each $R_i$ is a graded finitely generated projective $R$-module, so $f([R_i])$ is in the cone of $K^{\operatorname{gr}}_0(S)$, i.e., there is a graded finitely generated projective $S$-module $P_i$ such that $f([R_i])=[P_i]$.
Then \[[S]=f([R])=f([R_1]+\dots+[R_l])=[P_1]+\dots+[P_l]=[P_1\oplus\dots\oplus P_l].\] Since $S$ is a graded matricial algebra, by Lemma~\ref{fdpahf}, $P_1\oplus\dots\oplus P_l \cong_{\operatorname{gr}} S$ as right $S$-modules. So there are homogeneous orthogonal idempotents $g_1,\dots,g_l \in S$ such that $g_1+\dots+g_l=1$ and $g_iS\cong_{\operatorname{gr}} P_i$ (see \S\ref{idemptis}). Note that each $R_i=\operatorname{\mathbb M}_{n_i}(A)(\overline \delta_i)$ is a graded simple algebra. Set $\overline \delta_i=(\delta_1^{(i)},\dots,\delta_n^{(i)})$ (here $n=n_i$). Let $e_{jk}^{(i)}$, $1\leq j,k\leq n$, be the matrix units of $R_i$ and consider the graded finitely generated projective (right) $R_i$-module $V=e_{11}^{(i)} R_i=A(\delta_1^{(i)} - \delta_1^{(i)}) \oplus A(\delta_2^{(i)} - \delta_1^{(i)}) \oplus \cdots \oplus A(\delta_n^{(i)} - \delta_1^{(i)})$. Then~(\ref{pjacko}) shows that \[R_i \cong_{\operatorname{gr}} V(\delta_1^{(i)} - \delta_1^{(i)})\oplus V(\delta_1^{(i)} - \delta_2^{(i)}) \oplus \dots \oplus V(\delta_1^{(i)} - \delta_n^{(i)})\] as graded $R$-modules. Thus \begin{equation}\label{lkheohg} [P_i]=[g_iS]=f([R_i])=f([V(\delta_1^{(i)} - \delta_1^{(i)})])+f([V(\delta_1^{(i)} - \delta_2^{(i)})])+\dots+f([V(\delta_1^{(i)} - \delta_n^{(i)})]). \end{equation} There is a graded finitely generated projective $S$-module $Q$ such that $f([V])=f([V(\delta_1^{(i)} - \delta_1^{(i)})])=[Q]$.
Since $f$ is a $\mathbb Z[\Gamma]$-module homomorphism, for $1\leq k \leq n$, \[f([V(\delta_1^{(i)} - \delta_k^{(i)})])=f((\delta_1^{(i)} - \delta_k^{(i)})[V])=(\delta_1^{(i)} - \delta_k^{(i)})f([V])=(\delta_1^{(i)} - \delta_k^{(i)})[Q]=[Q(\delta_1^{(i)} - \delta_k^{(i)})].\] From~(\ref{lkheohg}) and Lemma~\ref{fdpahf} it now follows that \begin{equation}\label{bvgdks} g_iS\cong_{\operatorname{gr}} Q(\delta_1^{(i)} - \delta_1^{(i)})\oplus Q(\delta_1^{(i)} - \delta_2^{(i)}) \oplus \dots \oplus Q(\delta_1^{(i)} - \delta_n^{(i)}). \end{equation} Let $g^{(i)}_{jk}\in \operatorname{End}(g_iS)\cong_{\operatorname{gr}} g_iSg_i$ be the endomorphism which maps the $j$-th summand of the right hand side of~(\ref{bvgdks}) to its $k$-th summand and everything else to zero. Observe that $\deg(g^{(i)}_{jk})=\delta^{(i)}_j-\delta^{(i)}_k$ and that the $g^{(i)}_{jk}$, $1\leq j,k\leq n$, form a set of matrix units. Furthermore, $g^{(i)}_{11}+\dots+g^{(i)}_{nn}=g_i$ and $g^{(i)}_{11}S=Q(\delta^{(i)}_1-\delta^{(i)}_1)=Q$. Thus $[g^{(i)}_{11}S]=[Q]=f([V])=f([e_{11}^{(i)}R_i])$. Now for $1\leq i\leq l$, define the $A$-algebra homomorphism $R_i\rightarrow g_iSg_i$, $e_{jk}^{(i)} \mapsto g_{jk}^{(i)}$. This is a graded homomorphism, and these maps together induce a graded homomorphism $\phi:R\rightarrow S$ such that $\phi(e_{jk}^{(i)})= g_{jk}^{(i)}$. Clearly \[K_0^{\operatorname{gr}}(\phi)([e_{11}^{(i)}R_i])=[\phi(e_{11}^{(i)})S]=[g_{11}^{(i)}S]=f([e_{11}^{(i)}R_i])\] for $1\leq i \leq l$. Now $K^{\operatorname{gr}}_0(R)$ is generated by the $[e_{11}^{(i)}R_i]$, $1\leq i \leq l$, as a $\mathbb Z[\Gamma]$-module. This implies that $K^{\operatorname{gr}}_0(\phi)=f$. \end{proof} \forget \begin{theorem}\label{gfw76} Let $E$ and $F$ be finite graphs with no sinks.
Then $\operatorname{\mathcal L}(E)\cong_{\operatorname{gr}} \operatorname{\mathcal L}(F)$ if and only if there is a $\mathbb Z[x,x^{-1}]$-module isomorphism \begin{equation}\label{ncfd1} \big (K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E)),[\operatorname{\mathcal L}(E)]\big ) \cong \big (K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(F)),[\operatorname{\mathcal L}(F)]\big ). \end{equation} \end{theorem} \begin{proof} Set $R=\operatorname{\mathcal L}(E)$ and $S=\operatorname{\mathcal L}(F)$. By Theorem~\ref{hazst}, $R$ and $S$ are strongly graded rings. Using Dade's theorem (see \S\ref{dadestmal}), the isomorphism (\ref{ncfd1}), call it $f$, induces an order-preserving isomorphism \begin{equation}\label{lkadff} f_0: \big (K_0(R_0),[R_0]\big ) \cong \big (K_0(S_0),[S_0]\big ) \end{equation} (see Diagram~\ref{gsqqwq}), defined on generators by $f_0([M])=f([M\otimes_{R_0}R])_0$, where $M$ is a finitely generated projective $R_0$-module. \begin{equation}\label{gsqqwq} \xymatrix{ K^{\operatorname{gr}}_0(R) \ar[rr]^{f} && K^{\operatorname{gr}}_0(S) \ar[d]^{(-)_0}\\ K_0(R_0) \ar[u]^{-\otimes_{R_0}R} \ar@{.>}[rr]^{f_0} && K_0(S_0) } \end{equation} Since $R_0$ and $S_0$ are ultramatricial algebras, by Elliott's theorem (see \cite[Theorem 15.26]{goodearl}), there exists an isomorphism $\rho:R_0 \rightarrow S_0$ which induces the isomorphism $f_0$ on $K_0$-groups, i.e., $f_0=K_0(\rho)$. Now define the functor $\mathcal G: \mbox{gr-}R \rightarrow \mbox{gr-}S$ as the composition of the functors in the diagram below. \[ \xymatrix{ \mbox{gr-}R \ar@{.>}[rr]^{\mathcal G} \ar[d]_{(-)_0}&& \mbox{gr-}S \\ \mbox{mod-}R_0 \ar[rr]^{-\otimes_{R_0} {S_0}} && \mbox{mod-}S_0 \ar[u]_{-\otimes_{S_0}S} } \] Thus for a graded finitely generated projective $R$-module $P$, $\mathcal G(P)=P_0\otimes_{R_0}S$. Note that since all the functors involved in the diagram are equivalences, so is $\mathcal G$.
We show that $\mathcal G$ commutes with the suspension functors $\mathcal T_\alpha$ for any $\alpha \in \Gamma$, up to isomorphism, i.e., there are natural graded $S$-isomorphisms $\mathcal G (P(\alpha)) \cong \mathcal G(P)(\alpha)$, where $P$ is a graded finitely generated projective $R$-module. Since $S$ is strongly graded, it suffices to prove $\mathcal G \big (P(\alpha)\big)_0 \cong \big (\mathcal G (P)(\alpha)\big)_0$ as $S_0$-modules. Since $S_0$ is an ultramatricial algebra and ultramatricial algebras are unit-regular, by Proposition~15.2 in~\cite{goodearlbook}, it suffices to show \begin{equation} \label{ks5543} \big [\mathcal G \big (P(\alpha)\big)_0\big] = \big [\big(\mathcal G (P)(\alpha)\big)_0\big] \end{equation} in $K_0(S_0)$. But this is the case, as all the functors involved (Diagram~\ref{gsqqwq}) induce $\mathbb Z[\Gamma]$-module homomorphisms on the level of $K$-groups. In detail, if $\big [\mathcal G \big (P(\alpha)\big)\big] = \big [\mathcal G (P)(\alpha)\big]$ in $K^{\operatorname{gr}}_0(S)$, then applying the functor $(-)_0$ we obtain (\ref{ks5543}). But since $K_0(\mathcal G)=f$, we have \[\big [\mathcal G \big (P(\alpha)\big)\big]=f([P(\alpha)])= f(\alpha [P]) =\alpha f([P]) =\alpha [\mathcal G (P)]=\big [\mathcal G (P)(\alpha)\big].\] So $\mathcal G$ is a graded equivalence between $\mbox{gr-}R$ and $\mbox{gr-}S$ which commutes with the suspension functors up to isomorphism, i.e., $\mathcal G\mathcal T_\alpha\cong \mathcal T_\alpha \mathcal G$ for $\alpha \in \Gamma$.
Consider the group homomorphism induced by the functor $\mathcal G$, \[\mathcal G_\alpha: \operatorname{Hom}_{\mbox{gr-}R}\big (R,R(\alpha)\big )\longrightarrow \operatorname{Hom}_{\mbox{gr-}S}\big (S,S(\alpha)\big ).\] Thus for $h:R \rightarrow R(\alpha)$ we have \[\mathcal G_\alpha(h): S=(R_0\otimes_{R_0} S_0)\otimes_{S_0} S \stackrel{h_0\otimes 1}\longrightarrow (R_\alpha\otimes_{R_0} S_0)\otimes_{S_0} S=S(\alpha).\] One checks that if $h\in \operatorname{Hom}_{\mbox{gr-}R}\big (R,R(\alpha)\big )$ and $k\in \operatorname{Hom}_{\mbox{gr-}R}\big (R,R(\beta)\big )$, then \[\mathcal G_{\alpha+\beta}(kh)=\mathcal G_{\beta} (k)\mathcal G_{\alpha} (h) \in \operatorname{Hom}_{\mbox{gr-}S}\big (S,S(\alpha+\beta)\big ).\] Since $\mathcal G$ is an equivalence of categories (so it has an inverse), the maps $\mathcal G_\alpha$ are isomorphisms. Therefore \[\operatorname{\mathcal L}(E)= R\cong_{\operatorname{gr}} \operatorname{Hom}_R(R,R) = \bigoplus_{\alpha \in \Gamma} \operatorname{Hom}(R,R(\alpha)) \longrightarrow \bigoplus_{\alpha \in \Gamma} \operatorname{Hom}(S,S(\alpha)) = \operatorname{Hom}_S(S,S) \cong_{\operatorname{gr}} S=\operatorname{\mathcal L}(F).\] \end{proof} \begin{remark} The last part of the proof was inspired by Proposition~5.3 in \cite{greengordon}. This proposition states that if $\mathcal U:\mbox{gr-}R\rightarrow \mbox{gr-}S$ is a graded equivalence, then $\mathcal U$ is isomorphic to a graded functor $\mathcal U'$ which also induces an equivalence between $\mbox{mod-}R$ and $\mbox{mod-}S$, i.e., there is an equivalence $\mathcal M$ such that the following diagram commutes \[ \xymatrix{ \mbox{gr-}R \ar[rr]^{\mathcal U'} \ar[d]_{\mathcal F}&& \mbox{gr-}S \ar[d]_{\mathcal F}\\ \mbox{mod-}R \ar[rr]^{\mathcal M} && \mbox{mod-}S } \] where $\mathcal F$ is the forgetful functor. This can't be used directly here. 
For, in \cite{greengordon} a functor is defined to be graded if it commutes with suspensions, i.e., $\mathcal U \mathcal T_\alpha=\mathcal T_\alpha \mathcal U$. In our setting this is not the case, i.e., we only have $\mathcal U\mathcal T_\alpha\cong \mathcal T_\alpha \mathcal U$ for $\alpha \in \Gamma$. \end{remark} \section{Leavitt path algebras} In this section we gather some graph-theoretic definitions and recall the basics on Leavitt path algebras, including the calculation of the Grothendieck group $K_0$. The reader familiar with this topic can skip to Section~\ref{daryak}. A {\it directed graph} $E=(E^0,E^1,r,s)$ consists of two countable sets $E^0$, $E^1$ and maps $r,s:E^1\rightarrow E^0$. The elements of $E^0$ are called {\it vertices} and the elements of $E^1$ {\it edges}. If $s^{-1}(v)$ is a finite set for every $v \in E^0$, then the graph is called {\it row-finite}. In this note we will consider only row-finite graphs. In this setting, if the number of vertices, i.e., $|E^0|$, is finite, then the number of edges, i.e., $|E^1|$, is finite as well and we call $E$ a {\it finite} graph. For a graph $E=(E^0,E^1,r,s)$, a vertex $v$ for which $s^{-1}(v)$ is empty is called a {\it sink}, while a vertex $w$ for which $r^{-1}(w)$ is empty is called a {\it source}. An edge with the same source and range is called a {\it loop}. A path $\mu$ in a graph $E$ is a sequence of edges $\mu=\mu_1\dots\mu_k$ such that $r(\mu_i)=s(\mu_{i+1})$ for $1\leq i \leq k-1$. In this case, $s(\mu):=s(\mu_1)$ is the {\it source} of $\mu$, $r(\mu):=r(\mu_k)$ is the {\it range} of $\mu$, and $k$ is the {\it length} of $\mu$, which is denoted by $|\mu|$. We consider a vertex $v\in E^0$ as a {\it trivial} path of length zero with $s(v)=r(v)=v$. If $\mu$ is a nontrivial path in $E$, and if $v=s(\mu)=r(\mu)$, then $\mu$ is called a {\it closed path based at} $v$. 
If $\mu=\mu_1\dots\mu_k$ is a closed path based at $v=s(\mu)$ and $s(\mu_i) \not = s(\mu_j)$ for every $i \not = j$, then $\mu$ is called a {\it cycle}. For two vertices $v$ and $w$, the existence of a path with the source $v$ and the range $w$ is denoted by $v\geq w$. Here we allow paths of length zero. By $v\geq_n w$, we mean there is a path of length $n$ connecting these vertices. Therefore $v\geq_0 v$ represents the vertex $v$. Also, by $v>w$, we mean that there is a path from $v$ to $w$ with $v\not = w$. In this note, by $v\geq w' \geq w$, it is understood that there is a path connecting $v$ to $w$ and going through $w'$ (i.e., $w'$ is on the path connecting $v$ to $w$). For $n\geq 2$, we define $E^n$ to be the set of paths of length $n$ and $E^*=\bigcup_{n\geq 0} E^n$, the set of all paths. \begin{deff}\label{llkas}{\sc Leavitt path algebras.} \label{LPA} \\For a graph $E$ and a field $K$, we define the {\it Leavitt path algebra of $E$}, denoted by $\operatorname{\mathcal L}_K(E)$, to be the algebra generated by the sets $\{v \mid v \in E^0\}$, $\{ \alpha \mid \alpha \in E^1 \}$ and $\{ \alpha^* \mid \alpha \in E^1 \}$ with coefficients in $K$, subject to the relations \begin{enumerate} \item $v_iv_j=\delta_{ij}v_i \textrm{ for every } v_i,v_j \in E^0$. \item $s(\alpha)\alpha=\alpha r(\alpha)=\alpha \textrm{ and } r(\alpha)\alpha^*=\alpha^*s(\alpha)=\alpha^* \textrm{ for all } \alpha \in E^1$. \item $\alpha^* \alpha'=\delta_{\alpha \alpha'}r(\alpha)$, for all $\alpha, \alpha' \in E^1$. \item $\sum_{\{\alpha \in E^1, s( \alpha)=v\}} \alpha \alpha^*=v$ for every $v\in E^0$ for which $s^{-1}(v)$ is non-empty. \end{enumerate} \end{deff} Here the field $K$ commutes with the generators $\{v,\alpha, \alpha^* \mid v \in E^0,\alpha \in E^1\}$. Throughout the note, we sometimes write $\operatorname{\mathcal L}(E)$ instead of $\operatorname{\mathcal L}_K(E)$. 
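As a small worked instance of Definition~\ref{llkas} (added here for illustration), consider the rose with two petals: a single vertex $v$ and two loops $e,f$. The relations reduce to
\[
e^*e = f^*f = v, \qquad e^*f = f^*e = 0, \qquad ee^* + ff^* = v,
\]
which say precisely that the row $(e \;\; f)$ and the column $(e^* \;\; f^*)^t$ are mutually inverse over $R=\operatorname{\mathcal L}_K(1,2)$: their product in one order is $ee^*+ff^*=v=1$, and in the other order it is the $2\times 2$ identity matrix. This yields an isomorphism $R\cong R^2$ of right $R$-modules, recovering Leavitt's algebra of module type $(1,2)$.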
The elements $\alpha^*$ for $\alpha \in E^1$ are called {\it ghost edges}. Setting $\deg(v)=0$, for $v\in E^0$, $\deg(\alpha)=1$ and $\deg(\alpha^*)=-1$ for $\alpha \in E^1$, we obtain a natural $\mathbb Z$-grading on the free $K$-ring generated by $\{v,\alpha, \alpha^* \mid v \in E^0,\alpha \in E^1\}$. Since the relations in the above definition are all homogeneous, the ideal generated by these relations is homogeneous and thus we have a natural $\mathbb Z$-grading on $\operatorname{\mathcal L}_K(E)$. If $\mu=\mu_1\dots\mu_k$, where $\mu_i \in E^1$, is an element of $\operatorname{\mathcal L}(E)$, then we denote by $\mu^*$ the element $\mu_k ^*\dots \mu_1^* \in \operatorname{\mathcal L}(E)$. Since $\alpha^* \alpha'=\delta_{\alpha \alpha'}r(\alpha)$, for all $\alpha, \alpha' \in E^1$, any word can be written as $\mu \gamma ^*$ where $\mu$ and $\gamma$ are paths in $E$. The elements of the form $\mu\gamma^*$ are called {\it monomials}. Taking the grading into account, one can write $\operatorname{\mathcal L}(E) =\textstyle{\bigoplus_{k \in \mathbb Z}} \operatorname{\mathcal L}(E)_k$ where, \begin{equation}\label{grrea} \operatorname{\mathcal L}(E)_k= \big \{ \sum_i r_i \alpha_i \beta_i^*\mid \alpha_i,\beta_i \textrm{ are paths}, r_i \in K, \textrm{ and } |\alpha_i|-|\beta_i|=k \textrm{ for all } i \big\}. \end{equation} For simplicity we denote $\operatorname{\mathcal L}(E)_k$, the homogeneous elements of degree $k$, by $\operatorname{\mathcal L}_k$. We define an (anti-graded) involution on $\operatorname{\mathcal L}(E)$ by $\overline {\mu\gamma^*}=\gamma\mu^*$ for the monomials and extend it to the whole $\operatorname{\mathcal L}(E)$ in the obvious manner. Note that if $x\in \operatorname{\mathcal L}(E)_n$, then $\overline x \in \operatorname{\mathcal L}(E)_{-n}$. 
By constructing a representation of $\operatorname{\mathcal L}_K(E)$ in $\operatorname{End}_K(V)$, for a suitable $K$-vector space $V$, one can show that the vertices of the graph $E$ are linearly independent in $\operatorname{\mathcal L}_K(E)$ and the edges and ghost edges are not zero (see Lemma~1.5 in \cite{goodearl}). Throughout the note we need some more definitions, which we gather here. \begin{deff} \label{mulidef} \begin{enumerate} \item A path which does not contain a cycle is called an {\it acyclic} path. \item A graph without cycles is called an {\it acyclic} graph. \item Let $v \in E^0$. Then the {\it out-degree} and the {\it total-degree} of $v$ are defined as $\operatorname{outdeg}(v)=\operatorname{card}(s^{-1}(v))$ and $\operatorname{totdeg}(v)=\operatorname{card}(s^{-1}(v) \cup r^{-1}(v))$, respectively. \item A finite graph $E$ is called a {\it line graph} if it is connected, acyclic and $\operatorname{totdeg}(v)\leq 2$ for every $v \in E^0$. If we want to emphasize the number of vertices, we say that $E$ is an $n$-line graph whenever $n=\operatorname{card}(E^0)$. An {\it oriented} $n$-line graph $E$ is an $n$-line graph such that $E^{n-1} \not =\emptyset$. \item For any vertex $v \in E^0$, the cardinality of the set $R(v)=\{\alpha \in E^* \mid r(\alpha)=v\}$ is denoted by $n(v)$. \item For any vertex $v \in E^0$, the {\it tree} of $v$, denoted by $T(v)$, is the set $\{w\in E^0 \mid v\geq w \}$. Furthermore, for $X\subseteq E^0$, we define $T(X)=\bigcup_{x \in X}T(x)$. \item A subset $H$ of $E^0$ is called {\it hereditary} if $v\geq w$ and $v \in H$ imply that $w \in H$. \item A hereditary set $H$ is {\it saturated} if, whenever $s^{-1}(v) \not = \emptyset$ and $r(s^{-1}(v)) \subseteq H$, we have $v \in H$. \end{enumerate} \end{deff} \begin{deff}\label{petaldef} A {\it rose with $k$-petals} is a graph which consists of one vertex and $k$ loops. We denote this graph by $L_k$ and its vertex by $s(L_k)$. 
The Leavitt path algebra of this graph with coefficients in $K$ is denoted by $\operatorname{\mathcal L}_K(1,k)$. We allow $k$ to be zero, and in this case $L_0$ is just a vertex with no loops. With this convention, one can easily establish that $\operatorname{\mathcal L}_K(1,0)\cong K$ and $\operatorname{\mathcal L}_K(1,1)\cong K[x,x^{-1}]$. In a graph which contains a rose $L_k$, we say that $L_k$ has no exit if there is no edge $e$ with $s(e)=s(L_k)$ and $r(e) \not = s(L_k)$. \end{deff} We need to recall the definition of morphisms between two graphs in order to consider the category of directed graphs. For two directed graphs $E$ and $F$, a {\it complete graph homomorphism} $f:E\rightarrow F$ consists of maps $f^0:E^0 \rightarrow F^0$ and $f^1:E^1 \rightarrow F^1$ such that $r(f^1(\alpha))=f^0(r(\alpha))$ and $s(f^1(\alpha))=f^0(s(\alpha))$ for any $\alpha \in E^1$; additionally, $f^0$ is injective and $f^1$ restricts to a bijection from $s^{-1}(v)$ to $s^{-1}(f^0(v))$ for every $v\in E^0$ which emits edges. One can check that such a map induces a graded homomorphism on the level of Leavitt path algebras, i.e., there is a graded homomorphism $\operatorname{\mathcal L}(E) \rightarrow \operatorname{\mathcal L}(F)$. \subsection{Polycephaly graphs}\label{poiyre} Two distinguished types of strongly graded Leavitt path algebras arise from $C_n$-comet graphs and multi-headed rose graphs (see Figure~\ref{monster2}). We consider polycephaly graphs (Definition~\ref{popyt}), which are a mixture of these graphs (and so include all these types of graphs). The graded structure of Leavitt path algebras of polycephaly graphs was determined in \cite[Theorem~4.7]{haz}. In this note we prove Conjectures~\ref{weakconj} and~\ref{cofian} for the category of polycephaly graphs. 
\begin{deff}\label{cometi} A finite graph $E$ is called a {\it $C_n$-comet} if $E$ has exactly one cycle $C$ (of length $n$), and $C^0$ is dense, i.e., $T(v) \cap C^0 \not = \emptyset$ for any vertex $v \in E^0$. \end{deff} Note that the uniqueness of the cycle $C$ in the definition of a $C_n$-comet, together with its density, implies that the cycle has no exits. \begin{deff}\label{wert} A finite graph $E$ is called a {\it multi-headed comet} if $E$ consists of $C_{l_s}$-comets, $1\leq s\leq t$, of length $l_s$, such that the cycles are mutually disjoint and any vertex is connected to at least one cycle. (Recall that the graphs in this paper are all connected.) More formally, $E$ consists of $C_{l_s}$-comets, $1\leq s\leq t$, and for any vertex $v$ in $E$, there is at least one cycle, say $C_{l_k}$, such that $T(v) \cap C_{l_k}^0 \not = \emptyset$; furthermore no cycle has an exit. \end{deff} \begin{deff} \label{popyt} A finite graph $E$ is called a {\it polycephaly} graph if $E$ consists of a multi-headed comet and/or an acyclic graph with its sinks attached to roses, such that all the cycles and roses are mutually disjoint and any vertex is connected to at least one cycle or rose. More formally, $E$ consists of $C_{l_s}$-comets, $1\leq s\leq h$, and a finite acyclic graph with sinks $\{v_{h+1},\dots,v_t\}$ together with $n_s$-petal graphs $L_{n_s}$ attached to $v_s$, where $n_s \in \mathbb N$ and $h+1\leq s \leq t$. Furthermore any vertex $v$ in $E$ is connected to at least one of the $v_s$, $h+1\leq s \leq t$, or to at least one cycle, i.e., there is $C_{l_k}$ such that $T(v) \cap C_{l_k}^0 \not = \emptyset$, and no cycle or rose has an exit (see Definition~\ref{petaldef}). When $h=0$, $E$ does not have any cycles, and when $t=h$, $E$ does not have any roses. \end{deff} \begin{remark} Note that a cycle of length one can also be considered as a rose with one petal. This should not cause any confusion in the examples and theorems below. 
In some proofs (for example the proof of Theorem~\ref{mani543}), we collect all the cycles of length one in the graph as comet types, so that all the roses have either zero petals or more than one petal, i.e., $n_s=0$ or $n_s>1$, for any $h+1\leq s \leq t$. \end{remark} \begin{example}\label{popyttr} Let $E$ be a polycephaly graph. \begin{enumerate} \item If $E$ contains no cycles, and for any rose $L_{n_s}$ in $E$, $n_s=0$, then $E$ is a finite acyclic graph. \item If $E$ consists of only one cycle (of length $n$) and no roses, then $E$ is a $C_n$-comet. \item If $E$ contains no roses, then $E$ is a multi-headed comet. \item If $E$ contains no cycles, and for any rose $L_{n_s}$ in $E$, $n_s \geq 1$, then $E$ is a multi-headed rose graph (see Figure~\ref{monster2}). \end{enumerate} \end{example} \begin{example} The following graph is a (three-headed rose) polycephaly graph. \begin{equation}\label{monster2} \xymatrix{ & & && \bullet \ar@(ul,ur)^{\alpha_{1}} \ar@(u,r)^{\alpha_{2}} \ar@{.}@(ur,dr) \ar@(r,d)^{\alpha_{n_1}}& \\ & \bullet \ar[r] & \bullet \ar[r] \ar[urr] & \bullet \ar[r] \ar[dr] \ar[ur] & \bullet \ar[r] \ar[dr] & \bullet \ar[r] & \bullet \ar@(ul,ur)^{\beta_{1}} \ar@(u,r)^{\beta_{2}} \ar@{.}@(ur,dr) \ar@(r,d)^{\beta_{n_2}}& \\ & & & & \bullet \ar[r] & \bullet \ar@(ul,ur)^{\gamma_{1}} \ar@(u,r)^{\gamma_{2}} \ar@{.}@(ur,dr) \ar@(r,d)^{\gamma_{n_3}}& } \end{equation} \end{example} \begin{example} The following graph is a (five-headed) polycephaly graph, with two roses, two sinks and a comet. 
\begin{equation}\label{monster} \xymatrix@=13pt{ & & & & & \bullet \ar@/^/[dr] \\ & & \bullet & & \bullet \ar@/^/[ur] &&\bullet \ar@/^/[dl] & \\ & & & & & \bullet \ar@/^/[ul] \\ \bullet \ar[r] & \bullet \ar[r] \ar[uur] \ar[ddr] & \bullet \ar[r] & \bullet \ar[r] \ar[ruu] & \bullet \ar[r] \ar[ddr] \ar[ru] & \bullet \ar[r] & \bullet \ar[r] & \bullet \ar@(ul,ur) \ar@(u,r) \ar@{.}@(ur,dr) \ar@(r,d)& \\ \\ & & \bullet & & \bullet \ar[r] & \bullet \ar[r] & \bullet \ar@(ul,ur) \ar@(u,r) \ar@{.}@(ur,dr) \ar@(r,d)& } \end{equation} \end{example} The following theorem classifies Leavitt path algebras of polycephaly graphs. This is Theorem~4.7 of~\cite{haz}. This will be used in \S\ref{daryak} to calculate the graded $K_0$ of polycephaly graphs (including acyclic, comet and multi-headed rose graphs) and to prove Conjecture~\ref{weakconj} for the category of polycephaly graphs (see Theorem~\ref{mani543}). \begin{theorem}\label{polyheadb} Let $K$ be a field and $E$ be a polycephaly graph consisting of cycles $C_{l_s}$, $1\leq s\leq h$, of length $l_s$ and an acyclic graph with sinks $\{v_{h+1},\dots,v_t\}$ which are attached to $L_{n_{h+1}},\dots,L_{n_t}$, respectively. For any $1\leq s\leq h$ choose $v_s$ (an arbitrary vertex) in $C_{l_s}$ and remove the edge $c_s$ with $s(c_s)=v_s$ from the cycle $C_{l_s}$. Furthermore, for any $v_s$, $h+1\leq s \leq t$, remove the rose $L_{n_s}$ from the graph. In this new acyclic graph $E_1$, let $\{p^{v_s}_i \mid 1\leq i \leq n(v_s)\}$ be the set of all paths which end in $v_s$, $1\leq s \leq t$. Then there is a $\mathbb Z$-graded isomorphism \begin{equation}\label{dampaib} \operatorname{\mathcal L}_K(E) \cong_{\operatorname{gr}} \bigoplus_{s=1}^h \operatorname{\mathbb M}_{n(v_s)}\big(K[x^{l_s},x^{-l_s}]\big)(\overline{p_s}) \bigoplus_{s=h+1}^t \operatorname{\mathbb M}_{n(v_s)} \big(\operatorname{\mathcal L}_K(1,n_s)\big) (\overline{p_s}), \end{equation} where $\overline{p_s}=\big(|p_1^{v_s}|,\dots, |p_{n(v_{s})}^{v_s}|\big)$. 
\end{theorem} \begin{remark}\label{kjsdyb} Theorem~\ref{polyheadb} shows that the Leavitt path algebra of a polycephaly graph decomposes into a direct sum of three types of algebras: matrices over the field $K$ (which correspond to the acyclic parts of the graph, i.e., $n_s=0$ in~(\ref{dampaib})), matrices over $K[x^l,x^{-l}]$, $l \in \mathbb N$ (which correspond to the comet parts of the graph), and matrices over Leavitt algebras $\operatorname{\mathcal L}(1,n_s)$, $n_s \in \mathbb N$, $n_s \geq 1$ (which correspond to the rose parts of the graph). Also note that a cycle of length one can be considered as a rose with one petal; in either case, on the level of Leavitt path algebras, we obtain matrices over the algebra $K[x,x^{-1}]$. \end{remark} \begin{example} \label{nopain} Consider the polycephaly graph $E$ \[ \xymatrix{ & & \bullet \ar@(ul,ur) \ar@(u,r) \\ E: \bullet \ar[r] & \bullet \ar@<1.5pt>[r] \ar@<-1.5pt>[r] \ar@<0.5ex>[ur] \ar@<-0.5ex>[ur] \ar@<0ex>[ur] \ar[dr] & \bullet \ar@(rd,ru) &. \\ & & \bullet \ar@/^1.5pc/[r] & \bullet \ar@/^1.5pc/[l]& \\ } \] Then by Theorem~\ref{polyheadb}, \[\operatorname{\mathcal L}_K(E)\cong_{\operatorname{gr}} \operatorname{\mathbb M}_5(K[x,x^{-1}])(0,1,1,2,2) \oplus \operatorname{\mathbb M}_4(K[x^2,x^{-2}])(0,1,1,2) \oplus \operatorname{\mathbb M}_7(\operatorname{\mathcal L}(1,2))(0,1,1,1,2,2,2).\] \end{example} \subsection{$K_0$ of Leavitt path algebras} For an abelian monoid $M$, we denote by $M^{+}$ the group completion of $M$. This gives a left adjoint functor to the forgetful functor from the category of abelian groups to abelian monoids. Starting from a ring $R$, the isomorphism classes of finitely generated projective (right) $R$-modules equipped with the direct sum give an abelian monoid, denoted by $\mathcal V(R)$. The group completion of this monoid is denoted by $K_0(R)$ and called the Grothendieck group of $R$. 
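As a quick illustration of group completion (a well-known computation, included here for illustration): for the Leavitt algebra $\operatorname{\mathcal L}_K(1,n)$ with $n\geq 2$, the module isomorphism $R\cong R^n$ forces $[R]=n[R]$ in $\mathcal V(R)$, so $\mathcal V(R)$ is generated by $v=[R]$ subject to $v=nv$. The group completion is therefore
\[
K_0\big(\operatorname{\mathcal L}_K(1,n)\big) \cong \mathbb Z\langle v\rangle \big/ \langle (n-1)v\rangle \cong \mathbb Z/(n-1)\mathbb Z.
\]
In particular the monoid relation $v=nv$, which has no inverse-free solution other than collapsing, becomes the single group relation $(n-1)v=0$ after completion.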
For a Leavitt path algebra $\operatorname{\mathcal L}_K(E)$, the monoid $\mathcal V(\operatorname{\mathcal L}_K(E))$ is studied in~\cite{amp}. In particular, using~\cite[Theorem~3.5]{amp}, one can calculate the Grothendieck group of a Leavitt path algebra from the adjacency matrix of the graph (see~\cite[p.1998]{aalp}). We include this calculation in this subsection for completeness. Let $F$ be a free abelian monoid generated by a countable set $X$. The nonzero elements of $F$ can be written as $\sum_{t=1}^n x_t$ where $x_t \in X$. Let $r_i, s_i$, $i\in I \subseteq \mathbb N$, be elements of $F$. We define an equivalence relation on $F$, denoted by $\langle r_i=s_i\mid i\in I \rangle$, as follows: define a binary relation $\rightarrow$ on $F\backslash \{0\}$ by $r_i+\sum_{t=1}^n x_t \rightarrow s_i+\sum_{t=1}^n x_t$, $i\in I$, and generate the equivalence relation on $F$ using this binary relation. Namely, $a \sim a$ for any $a\in F$, and for $a,b \in F \backslash \{0\}$, $a \sim b$ if there is a sequence $a=a_0,a_1,\dots,a_n=b$ such that for each $t=0,\dots,n-1$ either $a_t \rightarrow a_{t+1}$ or $a_{t+1}\rightarrow a_t$. We denote the quotient monoid by $F/\langle r_i=s_i\mid i\in I\rangle$. Then one can see that there is a canonical group isomorphism \begin{equation} \label{monio} \big (\frac{F}{\langle r_i=s_i\mid i\in I \rangle}\big)^{+} \cong \frac{F^{+}}{\langle r_i-s_i\mid i\in I \rangle}. \end{equation} \begin{deff}\label{adji} Let $E$ be a graph and $N'$ be the adjacency matrix $(n_{ij}) \in \mathbb Z^{E^0\oplus E^0}$, where $n_{ij}$ is the number of edges from $v_i$ to $v_j$. Furthermore let $I'$ be the identity matrix. Let $N^t$ and $I$ be the matrices obtained from $N'$ and $I'$ by first taking the transpose and then removing the columns corresponding to sinks, respectively. \end{deff} Clearly the adjacency matrix depends on the ordering we put on $E^0$. 
We usually fix an ordering on $E^0$ such that the elements of $E^0 \backslash \operatorname{sink}$ appear first in the list, followed by the elements of $\operatorname{sink}$. Multiplying by the matrix $N^t-I$ from the left defines a homomorphism $\mathbb Z^{E^0\backslash \operatorname{sink}} \longrightarrow \mathbb Z^{E^0}$, where $\mathbb Z^{E^0\backslash \operatorname{sink}} $ and $\mathbb Z^{E^0}$ are the direct sums of copies of $\mathbb Z$ indexed by $E^0\backslash \operatorname{sink}$ and $E^0$, respectively. The next theorem shows that the cokernel of this map gives the Grothendieck group of Leavitt path algebras. \begin{theorem}\label{wke} Let $E$ be a graph and $\operatorname{\mathcal L}(E)$ a Leavitt path algebra. Then \begin{equation} K_0(\operatorname{\mathcal L}(E))\cong \operatorname{coker}\big(N^t-I:\mathbb Z^{E^0\backslash \operatorname{sink}} \longrightarrow \mathbb Z^{E^0}\big). \end{equation} \end{theorem} \begin{proof} Let $M_E$ be the abelian monoid generated by $\{v \mid v \in E^0 \}$ subject to the relations \begin{equation}\label{phgqcu} v=\sum_{\{\alpha\in E^{1} \mid s(\alpha)=v \}} r(\alpha), \end{equation} for every $v\in E^0\backslash \operatorname{sink}$. Arrange $E^0$ in a fixed order such that the elements of $E^0\backslash \operatorname{sink}$ appear first in the list, followed by the elements of $\operatorname{sink}$. The relations~(\ref{phgqcu}) can then be written as $N^t \overline v_i= I \overline v_i$, where $v_i \in E^0\backslash \operatorname{sink} $ and $\overline v_i$ is the vector $(0,\dots,0,1,0,\dots)$ with $1$ in the $i$-th component. Therefore, \[M_E\cong \frac{F}{ \langle N^t \overline v_i = I \overline v_i , v_i \in E^0 \backslash \operatorname{sink} \rangle},\] where $F$ is the free abelian monoid generated by the vertices of $E$. By~\cite[Theorem~3.5]{amp} there is a natural monoid isomorphism $\operatorname{\mathcal V}(\operatorname{\mathcal L}_K(E))\cong M_E$. 
So using~(\ref{monio}) we have, \begin{equation}\label{pajd} K_0(\operatorname{\mathcal L}(E))\cong \operatorname{\mathcal V}(\operatorname{\mathcal L}_K(E))^{+}\cong M_E^{+}\cong \frac{F^+}{ \langle (N^t-I) \overline v_i, v_i \in E^0 \backslash \operatorname{sink} \rangle}. \end{equation} Now $F^{+}$ is $\mathbb Z^{E^0}$ and it is easy to see that the denominator in~(\ref{pajd}) is the image of $N^t-I:\mathbb Z^{E^0\backslash \operatorname{sink}} \longrightarrow \mathbb Z^{E^0}$. \end{proof} \section{Graded $K_0$ of Leavitt path algebras}\label{daryak} For a strongly graded ring $R$, Dade's Theorem implies that $K_0^{\operatorname{gr}}(R) \cong K_0(R_0)$ (\S\ref{dadestmal}). In~\cite{haz}, we determined the strongly graded Leavitt path algebras. In particular we proved \begin{theorem}[\cite{haz}, Theorem~3.15]\label{hazst} Let $E$ be a finite graph. The Leavitt path algebra $\operatorname{\mathcal L}(E)$ is strongly graded if and only if $E$ does not have a sink. \end{theorem} This theorem shows that many interesting classes of Leavitt path algebras fall into the category of strongly graded LPAs, such as purely infinite simple Leavitt path algebras (see~\cite{aap06}). The following theorem shows that there is a close relation between the graded $K_0$ and the non-graded $K_0$ of strongly graded Leavitt path algebras. \begin{theorem}\label{ultrad} Let $E$ be a row-finite graph. Then there is an exact sequence \begin{equation}\label{zerhom} K_0(\operatorname{\mathcal L}(E)_0) \stackrel{N^t-I}{\longrightarrow} K_0(\operatorname{\mathcal L}(E)_0) \longrightarrow K_0(\operatorname{\mathcal L}(E)) \longrightarrow 0. 
\end{equation} Furthermore if $E$ is a finite graph with no sinks, the exact sequence takes the form \begin{equation}\label{kaler} K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E)) \stackrel{N^t-I}{\longrightarrow} K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E)) \longrightarrow K_0(\operatorname{\mathcal L}(E)) \longrightarrow 0. \end{equation} In this case, \begin{equation}\label{imip} K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E)) \cong \varinjlim \mathbb Z^{E^0} \end{equation} of the inductive system $\mathbb Z^{E^0} \stackrel{N^t}{\longrightarrow} \mathbb Z^{E^0} \stackrel{N^t}\longrightarrow \mathbb Z^{E^0} \stackrel{N^t}\longrightarrow \cdots$. \end{theorem} \begin{proof} First let $E$ be a finite graph. Write $E^0=V\cup W$ where $W$ is the set of sinks and $V=E^0 \backslash W$. The ring $\operatorname{\mathcal L}(E)_0$ can be described as follows (see the proof of Theorem~5.3 in \cite{amp}): $\operatorname{\mathcal L}(E)_0=\bigcup_{n=0}^{\infty}L_{0,n}$, where $L_{0,n}$ is the linear span of all elements of the form $pq^*$ with $r(p)=r(q)$ and $|p|=|q|\leq n$. The transition inclusion $L_{0,n}\rightarrow L_{0,n+1}$ is the identity on all elements $pq^*$ with $r(p) \in W$, and identifies $pq^*$, where $r(p)=v \in V$, with \[\sum_{\{ \alpha \mid s(\alpha)=v\}} p\alpha (q\alpha)^*.\] Note that since $V$ is the set of vertices which are not sinks, for any $v\in V$ the set $\{ \alpha \mid s(\alpha)=v\}$ is not empty. For a fixed $v \in E^0$, let $L_{0,n}^v$ be the linear span of all elements of the form $pq^*$ with $|p|=|q|=n$ and $r(p)=r(q)=v$. Arrange the paths of length $n$ with the range $v$ in a fixed order $p_1^v,p_2^v,\dots,p_{k^v_n}^v$, and observe that the correspondence of $p_i^v{p_j^v}^*$ to the matrix unit $e_{ij}$ gives rise to a ring isomorphism $L_{0,n}^v\cong\operatorname{\mathbb M}_{k^v_n}(K)$. 
Furthermore, $L_{0,n}^v$, $v\in E^0$, and $L_{0,m}^w$, $m<n$, $w\in W$, form a direct sum. This implies that \[L_{0,n}\cong \big ( \bigoplus_{v\in V}\operatorname{\mathbb M}_{k_n^v}(K) \times \bigoplus_{v\in W}\operatorname{\mathbb M}_{k_n^v}(K) \big) \times \big( \bigoplus_{m=0}^{n-1} \big ( \bigoplus_{v \in W} \operatorname{\mathbb M}_{k_m^v}(K) \big )\big),\] where $k_m^v$, $v \in E^0$, $0\leq m\leq n$, is the number of paths of length $m$ with the range $v$. The inclusion map $L_{0,n}\rightarrow L_{0,n+1}$ is then the identity on the factors where $v \in W$ and is \begin{equation}\label{volleyb} N^t: \bigoplus_{v\in V} \operatorname{\mathbb M}_{k^v_n}(K) \longrightarrow \bigoplus_{v\in V} \operatorname{\mathbb M}_{k^v_{n+1}}(K) \times \bigoplus_{v\in W} \operatorname{\mathbb M}_{k^v_{n+1}}(K), \end{equation} where $N$ is the adjacency matrix of the graph $E$ (see Definition~\ref{adji}). This means that $(A_1,\dots,A_l)\in \bigoplus_{v\in V} \operatorname{\mathbb M}_{k^v_n}(K) $ is sent to $(\sum_{j=1}^l n_{j1}A_j,\dots,\sum_{j=1}^l n_{jl}A_j) \in \bigoplus_{v\in E^0} \operatorname{\mathbb M}_{k^v_{n+1}}(K)$, where $n_{ji}$ is the number of edges connecting $v_j$ to $v_i$ and \[\sum_{j=1}^l k_jA_j=\left( \begin{matrix} A_1 & & & \\ & A_1 & & \\ & & \ddots &\\ & & & A_l\\ \end{matrix} \right) \] in which each matrix $A_j$ is repeated $k_j$ times down the leading diagonal, and if $k_j=0$, then $A_j$ is omitted. 
This can be seen as follows: for $v\in E^0$, arrange the set of paths of length $n+1$ with the range $v$ as \begin{align} \big \{ & p_1^{v_1}\alpha_1^{v_1v}, \dots, p_{k_n^{v_1}}^{v_1}\alpha_1^{v_1v}, p_1^{v_1}\alpha_2^{v_1v}, \dots, p_{k_n^{v_1}}^{v_1}\alpha_2^{v_1v},\dots, p_1^{v_1}\alpha_{n_{1v}}^{v_1v}, \dots, p_{k_n^{v_1}}^{v_1}\alpha_{n_{1v}}^{v_1v},\\ \notag & p_1^{v_2}\alpha_1^{v_2v}, \dots, p_{k_n^{v_2}}^{v_2}\alpha_1^{v_2v}, p_1^{v_2}\alpha_2^{v_2v}, \dots, p_{k_n^{v_2}}^{v_2}\alpha_2^{v_2v},\dots, p_1^{v_2}\alpha_{n_{2v}}^{v_2v}, \dots, p_{k_n^{v_2}}^{v_2}\alpha_{n_{2v}}^{v_2v},\\\notag & \dots \big \}, \end{align} where $\{p_1^{v_i},\dots,p_{k_n^{v_i}}^{v_i}\}$ are the paths of length $n$ with the range $v_i$ and $\{\alpha_1^{v_iv},\dots,\alpha_{n_{iv}}^{v_iv}\}$ are all the edges with the source $v_i\in V$ and range $v$. Now this shows that if $p_s^{v_i}(p_t^{v_i})^*\in L_{0,n}^{v_i}$ (which corresponds to the matrix unit $e_{st}$), then $\sum_{j=1}^{n_{iv}} p_s^{v_i}\alpha_{j}^{v_iv} (p_t^{v_i}\alpha_{j}^{v_iv})^*\in L_{0,n+1}^v$ corresponds to the matrix with the matrix unit $e_{st}$ repeated down the diagonal $n_{iv}$ times. Writing $\operatorname{\mathcal L}(E)_0=\varinjlim_{n} L_{0,n}$, since the Grothendieck group $K_0$ respects direct limits, we have $K_0(\operatorname{\mathcal L}(E)_0)\cong \varinjlim_{n}K_0(L_{0,n})$. 
Since the $K_0$ of an (Artinian) simple algebra is $\mathbb Z$, the ring homomorphism $L_{0,n}\rightarrow L_{0,n+1}$ induces the group homomorphism \[\mathbb Z^V \times \mathbb Z^W \times \bigoplus_{m=0}^{n-1} \mathbb Z^W\stackrel{N^t\times I_n}{\longrightarrow} \mathbb Z^V \times \mathbb Z^W \times \bigoplus_{m=0}^{n}\mathbb Z^W,\] where $N^t:\mathbb Z^V \rightarrow \mathbb Z^V \times \mathbb Z^W$ is multiplication by the matrix $N^t=\left(\begin{array}{c} B^t \\ C^t \end{array}\right) $ from the left, which is induced by the homomorphism~(\ref{volleyb}), and $I_n: \mathbb Z^W \times \bigoplus_{m=0}^{n-1} \mathbb Z^W \rightarrow \bigoplus_{m=0}^{n}\mathbb Z^W$ is the identity map. Consider the commutative diagram \[ \xymatrix{ K_0(L_{0,n}) \ar[r] \ar[d]^{\cong} & K_0(L_{0,n+1}) \ar[d]^{\cong}\ar[r] & \cdots \\ \mathbb Z^V \times \mathbb Z^W \times \bigoplus_{m=0}^{n-1} \mathbb Z^W \ar[r]^{N^t\oplus I_{n}} \ar@{^{(}->}[d] & \mathbb Z^V \times \mathbb Z^W \times \bigoplus_{m=0}^{n}\mathbb Z^W \ar@{^{(}->}[d] \ar[r] & \cdots \\ \mathbb Z^V \times \mathbb Z^W \times \mathbb Z^W \times \cdots \ar[r]^{N^t \oplus I_{\infty}}& \mathbb Z^V \times \mathbb Z^W \times \mathbb Z^W \times \cdots \ar[r] & \cdots } \] where \[N^t\oplus I_{\infty}= \left(\begin{array}{ccccc} B^t & 0 & 0 & 0 & \cdot\\ C^t & 0 & 0 & 0 & \cdot \\ 0 & 1 & 0 & 0 & \cdot\\ 0 & 0 & 1 & 0 & \cdot \\ \cdot & \cdot & \cdot & \cdot & \ddots \end{array}\right).\] It is easy to see that the direct limit of the second row in the diagram is isomorphic to the direct limit of the third row. Thus $K_0(\operatorname{\mathcal L}(E)_0) \cong \varinjlim \big (\mathbb Z^V \times \mathbb Z^W \times \mathbb Z^W \times \cdots \big)$. Consider the commutative diagram on the left below, which induces a natural map, denoted by $N^t$ again, on the direct limits. 
\[ \xymatrix{ \mathbb Z^{V} \times \mathbb Z^W \times \cdots \ar[r]^{N^t} \ar[d]^{N^t} & \mathbb Z^{V} \times \mathbb Z^W \times \cdots \ar[r]^{\quad \qquad N^t} \ar[d]^{N^t} & \cdots & & \varinjlim \cong K_0(\operatorname{\mathcal L}(E)_0) \ar@{.>}[d]^{N^t}\\ \mathbb Z^{V} \times \mathbb Z^W \times \cdots\ar[r]^{N^t} & \mathbb Z^{V} \times \mathbb Z^W \times \cdots\ar[r]^{\quad \qquad N^t} & \cdots& &\varinjlim \cong K_0(\operatorname{\mathcal L}(E)_0) } \] Now Lemma 7.15 in~\cite{Raegraph} implies that \begin{equation*} \operatorname{coker}\big (K_0(\operatorname{\mathcal L}(E)_0)\stackrel{N^t-I}{\longrightarrow} K_0(\operatorname{\mathcal L}(E)_0)\big ) \cong \operatorname{coker}\big(\mathbb Z^{V} \times \mathbb Z^W \times \cdots \stackrel{N^t-I}{\longrightarrow} \mathbb Z^{V} \times \mathbb Z^W \times \cdots \big). \end{equation*} And Lemma~3.4 in~\cite{Raeburn113} implies that \begin{equation}\label{thlap} \operatorname{coker}\big(\mathbb Z^{V} \times \mathbb Z^W \times \cdots \stackrel{N^t-I}{\longrightarrow} \mathbb Z^{V} \times \mathbb Z^W \times \cdots \big)\cong \operatorname{coker}\big(\mathbb Z^V \stackrel{N^t-I}{\longrightarrow} \mathbb Z^V \times \mathbb Z^W\big). \end{equation} Finally by Theorem~\ref{wke}, replacing $\operatorname{coker}\big (\mathbb Z^V \stackrel{N^t-I}{\longrightarrow} \mathbb Z^V \times \mathbb Z^W\big)$ by $K_0(\operatorname{\mathcal L}(E))$ in~(\ref{thlap}), we obtain the exact sequence \begin{equation}\label{zerhom11} K_0(\operatorname{\mathcal L}(E)_0) \stackrel{N^t-I}{\longrightarrow} K_0(\operatorname{\mathcal L}(E)_0) \longrightarrow K_0(\operatorname{\mathcal L}(E)) \longrightarrow 0 \end{equation} when the graph $E$ is finite. 
Now a row-finite graph can be written as a direct limit of finite complete subgraphs~\cite[Lemmas 3.1, 3.2]{amp}, i.e., $E=\varinjlim E_i$, $\operatorname{\mathcal L}(E)\cong \varinjlim \operatorname{\mathcal L}(E_i)$ and $\operatorname{\mathcal L}(E)_0\cong \varinjlim \operatorname{\mathcal L}(E_i)_0$. Since the direct limit is an exact functor, applying $\varinjlim$ to the exact sequence~(\ref{zerhom11}) for $E_i$, we get the first part of the theorem. When the graph $E$ is finite with no sinks, then $\operatorname{\mathcal L}(E)$ is strongly graded (Theorem~\ref{hazst}). Therefore \[ K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E)) \cong K_0(\operatorname{\mathcal L}(E)_0).\] This gives the exact sequence~(\ref{kaler}). \end{proof} \begin{example} Let $E$ be the following graph. \begin{equation*} \xymatrix{ & \bullet \ar[dr] \ar@/_1pc/[dl] & \\ \bullet \ar[ur] \ar@/_1pc/[rr] & & \bullet \ar[ll] \ar@/_1pc/[ul] } \end{equation*} By Theorem~\ref{hazst}, $\operatorname{\mathcal L}(E)$ is strongly graded and so by Theorem~\ref{ultrad}(\ref{imip}), \[K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E)) \cong \varinjlim_{N^t} \mathbb Z\oplus \mathbb Z \oplus \mathbb Z,\] where $N^t= \left(\begin{array}{ccc} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \\ \end{array}\right). $ Since $\det(N)=2 \neq 0$, one can see that \[\varinjlim_{N^t} \mathbb Z\oplus \mathbb Z \oplus \mathbb Z \cong \mathbb Z\big [\frac{1}{\det(N)}\big] \oplus \mathbb Z\big [\frac{1}{\det(N)}\big] \oplus \mathbb Z\big [\frac{1}{\det(N)}\big] \cong \bigoplus_3\mathbb Z[1/2].\] On the other hand, by Theorem~\ref{wke}, $K_0(\operatorname{\mathcal L}(E)) \cong \mathbb Z /2\mathbb Z \oplus \mathbb Z /2\mathbb Z$ (see~\cite[Example~3.8]{aalp} for the detailed computation).
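The computation $K_0(\operatorname{\mathcal L}(E)) \cong \mathbb Z/2\mathbb Z \oplus \mathbb Z/2\mathbb Z$ amounts to the cokernel of $N^t-I$ acting on $\mathbb Z^3$, which is determined by the invariant factors of that integer matrix. As a sanity check, they can be read off from gcds of $k\times k$ minors; the following Python sketch is ours (the helper names are not from any library) and assumes the matrix has full rank, as it does here:

```python
from itertools import combinations
from math import gcd
from functools import reduce

def det(M):
    # Laplace expansion along the first row (fine for small matrices).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def invariant_factors(A):
    # d_k = gcd of all k x k minors; the k-th invariant factor is d_k / d_{k-1}.
    # (Valid here since the matrix has full rank, so no d_k vanishes.)
    n = len(A)
    d = [1]
    for k in range(1, n + 1):
        minors = [det([[A[i][j] for j in cols] for i in rows])
                  for rows in combinations(range(n), k)
                  for cols in combinations(range(n), k)]
        d.append(reduce(gcd, (abs(m) for m in minors)))
    return [d[k] // d[k - 1] for k in range(1, n + 1)]

# N^t - I for the graph E in the example above.
A = [[-1, 1, 1],
     [1, -1, 1],
     [1, 1, -1]]
print(invariant_factors(A))  # [1, 2, 2]
```

The invariant factors $(1,2,2)$ give $\mathbb Z^3/(N^t-I)\mathbb Z^3 \cong \mathbb Z/2\mathbb Z \oplus \mathbb Z/2\mathbb Z$, matching the computation cited above.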
By Theorem~\ref{ultrad}(\ref{kaler}), these groups fit into the following exact sequence. \[ \bigoplus_3\mathbb Z[1/2] \stackrel{N^t-I}{\longrightarrow} \bigoplus_3\mathbb Z[1/2] \longrightarrow \mathbb Z /2\mathbb Z \oplus \mathbb Z /2\mathbb Z \longrightarrow 0.\] \end{example} \begin{remark}\label{hjsonf} Let $E$ be a row-finite graph. The $K_0$ of the graph $C^*$-algebra of $E$ was first computed in~\cite{Raeburn113}, where Raeburn and Szyma\'nski obtained the top exact sequence of the following commutative diagram. Here $E\times_1 \mathbb Z$ is the graph with $(E\times_1 \mathbb Z)^0=E^0\times \mathbb Z$, $(E\times_1 \mathbb Z)^1=E^1\times \mathbb Z$, $s(e,n)=(s(e),n-1)$ and $r(e,n)=(r(e),n)$. Looking at the proof of \cite[Theorem~3.2]{Raeburn113} and Theorem~\ref{ultrad}, one can see that $K_0\big(C^*(E\times_1 \mathbb Z)\big) \cong K_0(\operatorname{\mathcal L}_{\mathbb C}(E)_0)$ and the following diagram commutes. \begin{equation}\label{lol1532} \xymatrix{ 0 \ar[r] & K_1(C^*(E)) \ar[r] \ar@{.>}[d] & K_0\big(C^*(E\times_1 \mathbb Z)\big) \ar[d]^{\cong} \ar[r]^{1-\beta^{-1}_*} & K_0\big(C^*(E\times_1 \mathbb Z)\big) \ar[d]^{\cong} \ar[r]& K_0(C^*(E))\ar@{.>}[d]\ar[r]& 0\\ 0 \ar[r] & \ker(1-N^t) \ar[r] & K_0(\operatorname{\mathcal L}_{\mathbb C}(E)_0) \ar[r]^{1-N^{t}} & K_0(\operatorname{\mathcal L}_{\mathbb C}(E)_0) \ar[r]& K_0(\operatorname{\mathcal L}_{\mathbb C}(E)) \ar[r] &0.} \end{equation} This immediately shows that for a row-finite graph $E$, $K_0(C^*(E)) \cong K_0(\operatorname{\mathcal L}_{\mathbb C}(E))$. This was also proved in \cite[Theorem~7.1]{amp}. In particular, if $E$ is a finite graph with no sinks, then \[K_0\big(C^*(E\times_1 \mathbb Z)\big) \cong K_0^{gr}(\operatorname{\mathcal L}_{\mathbb C}(E)).\] \end{remark} The graded structure of Leavitt path algebras of polycephaly graphs (which include acyclic and comet graphs) was classified in~\cite{haz} (see Theorem~\ref{polyheadb}).
This, coupled with the graded Morita theory~(cf. Proposition~\ref{grmorita}) and~(\ref{dade}), enables us to calculate the graded $K_0$ of these graphs. We record them here. \begin{theorem}[$K_0^{\operatorname{gr}}$ of acyclic graphs]\label{f4j5h6h8} Let $E$ be a finite acyclic graph with sinks $\{v_1,\dots,v_t\}$. For any sink $v_s$, let $\{p^{v_s}_i \mid 1\leq i \leq n(v_s)\}$ be the set of all paths which end in $v_s$. Then there is a $\mathbb Z[x,x^{-1}]$-module isomorphism \begin{equation}\label{skyone231} \big (K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E)),[\operatorname{\mathcal L}(E)] \big ) \cong \big ( \bigoplus_{s=1}^t \mathbb Z[x,x^{-1}], (d_1,\dots,d_t)\big ), \end{equation} where $d_s=\sum_{i=1}^{n(v_s)}x^{-|p_i^{v_s}|}$. \end{theorem} \begin{proof} By~\cite[Theorem~4.11]{haz} (see also Theorem~\ref{polyheadb}, in the absence of comet and rose graphs), \begin{equation}\label{skyone23} \operatorname{\mathcal L}(E) \cong_{\operatorname{gr}} \bigoplus_{s=1}^t \operatorname{\mathbb M}_{n(v_s)} (K)\big (|p^{v_s}_1|,\dots, |p^{v_s}_{n(v_s)}|\big). \end{equation} Since $K_0^{\operatorname{gr}}$ is an exact functor, $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E))$ is isomorphic to the direct sum of the $K_0^{\operatorname{gr}}$ of the matrix algebras in~(\ref{skyone23}).
The graded Morita equivalence (Proposition~\ref{grmorita}) induces a $\mathbb Z[x,x^{-1}]$-isomorphism \[K_0^{\operatorname{gr}}\big (\operatorname{\mathbb M}_{n(v_s)} (K)\big (|p^{v_s}_1|,\dots, |p^{v_s}_{n(v_s)}|\big) \big)\longrightarrow K_0^{\operatorname{gr}}(K),\] with \[\big [\operatorname{\mathbb M}_{n(v_s)} (K)\big (|p^{v_s}_1|,\dots, |p^{v_s}_{n(v_s)}|\big)\big ] \mapsto [K(-|p^{v_s}_1|)\oplus \dots \oplus K( -|p^{v_s}_{n(v_s)}|)].\] Now the theorem follows from Proposition~\ref{k0grof}, by considering $A=K$ as a graded field concentrated in degree zero, i.e., $\Gamma_A=0$ and $\Gamma=\mathbb Z$ (see Example~\ref{upst}). \end{proof} Leavitt path algebras of $C_n$-comets were classified in \cite[Theorem~4.17]{haz} as follows: Let $E$ be a $C_n$-comet with the cycle $C$ of length $n \geq 1$. Let $u$ be a vertex on the cycle $C$. Eliminate the edge in the cycle whose source is $u$ and consider the set $\{p_i \mid 1\leq i \leq m\}$ of all paths which end in $u$. Then \begin{equation}\label{ytsnf} \operatorname{\mathcal L}(E) \cong_{\operatorname{gr}} \operatorname{\mathbb M}_{m}\big(K[x^n,x^{-n}] \big)\big (|p_1|,\dots, |p_m|\big). \end{equation} Let $d_l$, $0 \leq l \leq n-1$, be the number of $i$ such that $|p_i|$ represents $\overline l$ in $\mathbb Z/n \mathbb Z$. Then \begin{equation}\label{ytsnfpos} \operatorname{\mathcal L}(E) \cong_{\operatorname{gr}} \operatorname{\mathbb M}_{m}\big(K[x^n,x^{-n}] \big)\big (0,\dots,0,1,\dots,1,\dots, n-1,\dots,n-1\big), \end{equation} where each $l$, $0\leq l \leq n-1$, occurs $d_l$ times. It is now easy to see that \begin{equation}\label{zgdak} \operatorname{\mathcal L}(E)_0 \cong \operatorname{\mathbb M}_{d_0}(K)\times \dots \times \operatorname{\mathbb M}_{d_{n-1}}(K). \end{equation} Furthermore, let $F$ be another $C_{n'}$-comet with the cycle $C'$ of length $n' \geq 1$ and $u'$ a vertex on the cycle $C'$.
Eliminate the edge in the cycle whose source is $u'$ and consider the set $\{q_i \mid 1\leq i \leq m'\}$ of all paths which end in $u'$. Then $\operatorname{\mathcal L}(E) \cong_{\operatorname{gr}}\operatorname{\mathcal L}(F)$ if and only if $n=n'$, $m=m'$, and there are a permutation $\pi \in S_m$ and $r \in \mathbb N$ such that $r+|p_i|+n\mathbb Z=|q_{\pi(i)}|+n \mathbb Z$ for $1\leq i \leq m$. Before determining the $\mathbb Z[x,x^{-1}]$-module structure of the graded $K_0$ of a $C_n$-comet graph, we define a $\mathbb Z[x,x^{-1}]$-module structure on the group $\bigoplus_n \mathbb Z$. Let $\phi: \mathbb Z[x,x^{-1}] \longrightarrow \operatorname{\mathbb M}_n(\mathbb Z)=\operatorname{End}_{\mathbb Z}(\mathbb Z^n)$ be the evaluation ring homomorphism, where \[\phi(x)=\left(\begin{array}{ccccc} 0 & \dots & 0 & 0 & 1 \\ 1 & 0 & \dots & \dots & 0\\ 0 & 1 & 0 & \dots & 0 \\ \vdots & 0 & \ddots & 0 & 0\\ 0 & \dots & 0 & 1 & 0 \end{array}\right).\] This homomorphism induces a $\mathbb Z[x,x^{-1}]$-module structure on $\bigoplus_n \mathbb Z$, where $x (a_1,\dots,a_n)=(a_n,a_1,\dots, a_{n-1})$. \begin{theorem}[$K_0^{\operatorname{gr}}$ of $C_n$-comet graphs]\label{cothemp} Let $E$ be a $C_n$-comet with the cycle $C$ of length $n \geq 1$. Then there is a $\mathbb Z[x,x^{-1}]$-module isomorphism \begin{equation}\label{ytsnf1} \big ( K_0^{\operatorname{gr}} (\operatorname{\mathcal L}(E)), [\operatorname{\mathcal L}(E)] \big ) \cong \big ( \bigoplus_{i=0}^{n-1} \mathbb Z, (d_0,\dots,d_{n-1}) \big ), \end{equation} where the $d_l$, $0 \leq l \leq n-1$, are as in {\upshape~(\ref{ytsnfpos})}.
\end{theorem} \begin{proof} By~(\ref{ytsnf}), $K^{\operatorname{gr}}_0(\operatorname{\mathcal L}(E)) \cong_{\operatorname{gr}} K^{\operatorname{gr}}_0\big( \operatorname{\mathbb M}_{m}\big(K[x^n,x^{-n}] \big)\big (|p_1|,\dots, |p_m|\big)\big).$ The graded Morita equivalence (Proposition~\ref{grmorita}) induces a $\mathbb Z[x,x^{-1}]$-isomorphism \[K_0^{\operatorname{gr}}\big (\operatorname{\mathbb M}_{m}\big(K[x^n,x^{-n}] \big)\big (|p_1|,\dots, |p_m|\big)\big) \longrightarrow K_0^{\operatorname{gr}}(K[x^n,x^{-n}]),\] with \[\big [\operatorname{\mathbb M}_{m}\big(K[x^n,x^{-n}] \big)\big (|p_1|,\dots, |p_m|\big)\big ] \mapsto \big [K[x^n,x^{-n}](-|p_1|)\oplus \dots \oplus K[x^n,x^{-n}]( -|p_m|)\big].\] Now the theorem follows from Proposition~\ref{k0grof}, by considering $A=K[x^n,x^{-n}]$ as a graded field with $\Gamma_A=n\mathbb Z$ and $\Gamma=\mathbb Z$ (see Example~\ref{upst}). \end{proof} Before determining the $\mathbb Z[x,x^{-1}]$-module structure of the graded $K_0$ of multi-headed rose graphs, we define a $\mathbb Z[x,x^{-1}]$-module structure on the group $\mathbb Z[1/n]$. Consider the evaluation ring homomorphism $\phi:\mathbb Z[x,x^{-1}] \rightarrow \mathbb Z[1/n]$ given by $\phi(x)=n$. This gives a $\mathbb Z[x,x^{-1}]$-module structure on $\mathbb Z[1/n]$, where $xa=na$ for $a \in \mathbb Z[1/n]$. \begin{theorem}[$K_0^{\operatorname{gr}}$ of multi-headed rose graphs]\label{re282} Let $E$ be a polycephaly graph consisting of an acyclic graph $E_1$ with sinks $\{v_1,\dots,v_t\}$ which are attached to $L_{n_1},\dots,L_{n_t}$, respectively, where $n_s\geq 1$, $1\leq s \leq t$. For any $v_s$, let $\{p^{v_s}_i \mid 1\leq i \leq n(v_s)\}$ be the set of all paths in $E_1$ which end in $v_s$.
Then there is a $\mathbb Z[x,x^{-1}]$-module isomorphism \begin{equation}\label{dampai} \big(K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E)),[\operatorname{\mathcal L}(E)]\big) \cong \big ( \bigoplus_{s=1}^t \mathbb Z[1/n_s], (d_1,\dots,d_t)\big), \end{equation} where $d_s=\sum_{i=1}^{n(v_s)}n_s^{-|p_i^{v_s}|}$. \end{theorem} \begin{proof} By Theorem~\ref{polyheadb} (in the absence of acyclic and comet graphs with cycles of length greater than one), \begin{equation}\label{dampai2} \operatorname{\mathcal L}(E) \cong_{\operatorname{gr}} \bigoplus_{s=1}^t \operatorname{\mathbb M}_{n(v_s)} \big(\operatorname{\mathcal L}(1,n_s)\big)\big (|p^{v_s}_1|,\dots, |p^{v_s}_{n(v_s)}|\big). \end{equation} Since $K_0^{\operatorname{gr}}$ is an exact functor, $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E))$ is isomorphic to the direct sum of the $K_0^{\operatorname{gr}}$ of the matrix algebras in~(\ref{dampai2}). The graded Morita equivalence (Proposition~\ref{grmorita}) induces a $\mathbb Z[x,x^{-1}]$-isomorphism \[K_0^{\operatorname{gr}}\big (\operatorname{\mathbb M}_{n(v_s)} (\operatorname{\mathcal L}(1,n_s))\big (|p^{v_s}_1|,\dots, |p^{v_s}_{n(v_s)}|\big) \big)\longrightarrow K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(1,n_s)),\] with $\big [\operatorname{\mathbb M}_{n(v_s)} (\operatorname{\mathcal L}(1,n_s))\big (|p^{v_s}_1|,\dots, |p^{v_s}_{n(v_s)}|\big)\big ] \mapsto [\operatorname{\mathcal L}(1,n_s)^{n(v_s)}(-|p^{v_s}_1|,\dots,- |p^{v_s}_{n(v_s)}|)]$. Now by Theorem~\ref{ultrad}(\ref{imip}), $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(1,n_s))\cong \mathbb Z[1/n_s]$. Observe that $[\operatorname{\mathcal L}(1,n_s)]$ represents $1$ in $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(1,n_s))$.
Since for any $i \in \mathbb Z$, \[\operatorname{\mathcal L}(1,n_s)(i)\cong \bigoplus_{n_s}\operatorname{\mathcal L}(1,n_s)(i-1),\] it immediately follows that $\big [\operatorname{\mathcal L}(1,n_s)^{n(v_s)}(-|p^{v_s}_1|,\dots,- |p^{v_s}_{n(v_s)}|)\big]$ maps to $\sum_{i=1}^{n(v_s)}n_s^{-|p_i^{v_s}|}$ in $\mathbb Z[1/n_s]$. This also shows that the isomorphism is a $\mathbb Z[x,x^{-1}]$-module isomorphism. \end{proof} \begin{theorem}\label{mani543} Let $E$ and $F$ be polycephaly graphs. Then $\operatorname{\mathcal L}(E)\cong_{\operatorname{gr}} \operatorname{\mathcal L}(F)$ if and only if there is an order preserving $\mathbb Z[x,x^{-1}]$-module isomorphism \[\big (K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E)),[\operatorname{\mathcal L}(E)]\big ) \cong \big (K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(F)),[\operatorname{\mathcal L}(F)]\big ).\] \end{theorem} \begin{proof} One direction is straightforward. For the other direction, suppose there is an order-preserving $\mathbb Z[x,x^{-1}]$-module isomorphism $\phi: K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E)) \rightarrow K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(F))$ such that $\phi([\operatorname{\mathcal L}(E)])=[\operatorname{\mathcal L}(F)]$. By Theorem~\ref{polyheadb}, $\operatorname{\mathcal L}(E)$ is a direct sum of matrices over the field $K$, the Laurent polynomial rings $K[x^l,x^{-l}]$, $l \in \mathbb N$, and Leavitt algebras, corresponding to the acyclic, comet and rose graphs in $E$, respectively (see Remark~\ref{kjsdyb}). Since $K_0^{\operatorname{gr}}$ is an exact functor, by Theorems~\ref{f4j5h6h8},~\ref{cothemp} and~\ref{re282}, there is an order-preserving $\mathbb Z[x,x^{-1}]$-isomorphism \[ K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E))\cong \big ( \bigoplus_k \mathbb Z[x,x^{-1}] \big ) \oplus \big ( \bigoplus_h \bigoplus_{l_h} \mathbb Z \big ) \oplus \big ( \bigoplus_t \mathbb Z [1/n_t] \big ).
\] Similarly, for the graph $F$ we have \[ K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(F))\cong \big ( \bigoplus_{k'} \mathbb Z[x,x^{-1}] \big ) \oplus \big ( \bigoplus_{h'} \bigoplus_{l'_{h'}} \mathbb Z \big ) \oplus \big ( \bigoplus_{t'} \mathbb Z [1/n'_{t'}] \big ). \] Throughout the proof, we consider a cycle of length one in the graphs as a comet, and thus all $n_s>1$, $1\leq s \leq t$, and $n'_s>1$, $1\leq s \leq t'$. {\it Claim:} The isomorphism $\phi$ maps each type of group in $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E))$ to its corresponding type in $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(F))$, and so the restriction of $\phi$ to each type of group gives an isomorphism, i.e., $\phi: \bigoplus_k \mathbb Z[x,x^{-1}] \rightarrow \bigoplus_{k'} \mathbb Z[x,x^{-1}]$, $\phi: \bigoplus_h \bigoplus_{l_h} \mathbb Z \rightarrow \bigoplus_{h'} \bigoplus_{l'_{h'}} \mathbb Z$ and $\phi: \bigoplus_t \mathbb Z [1/n_t] \rightarrow \bigoplus_{t'} \mathbb Z [1/n'_{t'}]$ are order-preserving $\mathbb Z[x,x^{-1}]$-isomorphisms. Let $a \in \mathbb Z[1/n_s]$, for some $1\leq s \leq t$ (note that $n_s>1$), and suppose \[\phi(0,\dots,0,a,0\dots,0)=(b_1,\dots,b_{k'},c_1,\dots,c_{h'},d_1,\dots, d_{t'}),\] where $(b_1,\dots,b_{k'}) \in \bigoplus_{k'} \mathbb Z[x,x^{-1}]$, $(c_1,\dots,c_{h'}) \in \bigoplus_{h'} \bigoplus_{l'_{h'}} \mathbb Z$ and $(d_1,\dots, d_{t'}) \in \bigoplus_{t'} \mathbb Z [1/n'_{t'}]$. We show that the $b_i$'s and $c_i$'s are all zero. Let $m=\operatorname{LCM}(l'_{s})$, $1\leq s \leq h'$. Recall that the action of $x$ on $a\in \mathbb Z[1/n_s]$ is $xa= n_s a$, and the action of $x$ on $(a_1,\dots,a_l) \in \bigoplus_l \mathbb Z$ is $x (a_1,\dots,a_l)=(a_l,a_{1},\dots,a_{l-1})$ (see Example~\ref{upst}).
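As an illustrative sketch with toy values (not tied to any particular graph), the two module actions just recalled can be played out in a few lines of Python: applying $x^m$, with $m$ a common multiple of the comet lengths, fixes every comet component while multiplying a rose component of $\mathbb Z[1/n]$ by $n^m \neq 1$, and this disparity is what forces components to vanish in the computation that follows.

```python
from fractions import Fraction

# Toy model of the two Z[x,x^{-1}]-actions (values are illustrative):
# on a comet component in Z^l, x cyclically shifts the coordinates;
# on a rose component in Z[1/n], x multiplies by n.
def x_on_comet(v):
    # x . (a_1, ..., a_l) = (a_l, a_1, ..., a_{l-1})
    return (v[-1],) + v[:-1]

def x_on_rose(a, n):
    # x . a = n a  on Z[1/n]
    return n * a

l, n = 3, 2                 # one comet of length l, one rose L_n
c = (5, -1, 7)              # comet component in Z^3
d = Fraction(3, 4)          # rose component in Z[1/2]

m = l                       # m = LCM of the comet lengths
cm, dm = c, d
for _ in range(m):
    cm = x_on_comet(cm)
    dm = x_on_rose(dm, n)

assert cm == c              # x^m fixes every comet component,
assert dm == n ** m * d     # but scales a rose component by n^m != 1
```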
Consider \begin{multline*} (n_s^m b_1,\dots, n_s^m b_{k'}, n_s^m c_1,\dots,n_s^m c_{h'},n_s^m d_1,\dots, n_s^m d_{t'})= \\ \phi (0,\dots,0,n_s^m a,0\dots,0)= \phi (0,\dots,0,x^m a,0\dots,0)=x^m \phi (0,\dots,0, a,0\dots,0)=\\(x^m b_1,\dots, x^m b_{k'}, c_1,\dots, c_{h'},{n'_1}^m d_1,\dots, {n'_{t'}}^m d_{t'}). \end{multline*} This immediately implies that the $b_i$'s and $c_i$'s are all zero. Thus the restriction of $\phi$ to the Leavitt types gives the isomorphism \begin{equation}\label{lkara} \phi: \bigoplus_t \mathbb Z [1/n_t] \longrightarrow \bigoplus_{t'} \mathbb Z [1/n'_{t'}]. \end{equation} Now consider $a \in \bigoplus_{l_s} \mathbb Z$, for some $1\leq s \leq h$, and suppose \[\phi(0,\dots,0,a,0\dots,0)=(b_1,\dots,b_{k'},c_1,\dots,c_{h'},d_1,\dots, d_{t'}).\] This time, let $m$ be the least common multiple of all the $l_s$ and $l'_{s'}$, $1\leq s \leq h$, $1\leq s' \leq h'$ (note that $m$ could be 1), so that $x^m a=a$, and consider \begin{multline*} (b_1,\dots, b_{k'}, c_1,\dots, c_{h'}, d_1,\dots, d_{t'})= \\ \phi (0,\dots,0, a,0\dots,0)= \phi (0,\dots,0,x^m a,0\dots,0)=x^m \phi (0,\dots,0, a,0\dots,0)=\\(x^m b_1,\dots, x^m b_{k'}, c_1,\dots, c_{h'},{n'_1}^m d_1,\dots, {n'_{t'}}^m d_{t'}). \end{multline*} This immediately implies that the $b_i$'s and $d_i$'s are all zero. Thus the restriction of $\phi$ to the comet types gives the isomorphism \begin{equation}\label{lkara2} \phi: \bigoplus_h \bigoplus_{l_h} \mathbb Z \longrightarrow \bigoplus_{h'} \bigoplus_{l'_{h'}} \mathbb Z. \end{equation} Finally, we show that $\phi$ induces an isomorphism on the acyclic parts. Let \[\phi(1,0,\dots,0)=(b_1,\dots, b_{k'}, c_1,\dots, c_{h'}, d_1,\dots, d_{t'}).\] Since $(1,0,\dots,0)$ is in the positive cone of $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E))$, $(b_1,\dots, b_{k'}, c_1,\dots, c_{h'}, d_1,\dots, d_{t'})$ is in the positive cone of $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(F))$.
Since the restrictions of $\phi$ to the Leavitt and comet types are isomorphisms (see~(\ref{lkara}) and (\ref{lkara2})), there is $(0,\dots,0,c_1',\dots,c_h',d_1',\dots,d_t')$ in the positive cone of $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E))$ such that $\phi(0,\dots,0,c_1',\dots,c_h',d_1',\dots,d_t')=(0,\dots, 0, c_1,\dots, c_{h'}, d_1,\dots, d_{t'})$. It follows that \[\phi(1,0,\dots,0, -c_1',\dots,-c_h',-d_1',\dots,-d_t')=(b_1,\dots, b_{k'},0,\dots,0, 0,\dots,0).\] But $(b_1,\dots, b_{k'},0,\dots,0, 0,\dots,0)$ is in the positive cone. This forces the $c_i'$'s and $d_i'$'s to be zero. This shows that $\phi(1,0,\dots,0) \in \bigoplus_{k'} \mathbb Z[x,x^{-1}]$ and thus $\phi\big(\mathbb Z[x,x^{-1}],0,\dots,0\big)\subseteq \bigoplus_{k'} \mathbb Z[x,x^{-1}].$ A similar argument applies to the other components of the acyclic parts. Thus the restriction of $\phi$ to the acyclic types gives the isomorphism \[\phi: \bigoplus_k \mathbb Z[x,x^{-1}] \rightarrow \bigoplus_{k'} \mathbb Z[x,x^{-1}].\] This finishes the proof of the claim. Thus we can reduce the theorem to graphs of the same type, namely, $E$ and $F$ both acyclic, multi-headed comets or multi-headed rose graphs. We consider each case separately. (i) $E$ and $F$ are acyclic: In this case $\operatorname{\mathcal L}(E)$ and $\operatorname{\mathcal L}(F)$ are $K$-algebras (see~(\ref{skyone23})). The isomorphism between the Leavitt path algebras now follows from Theorem~\ref{catgrhsf}, by setting $A=K$ and $\Gamma_A=0$. (ii) $E$ and $F$ are multi-headed comets: By Theorems~\ref{polyheadb} and~\ref{cothemp} (since $K_0^{\operatorname{gr}}$ is an exact functor), $K_0^{\operatorname{gr}} (\operatorname{\mathcal L}(E)) \cong \bigoplus_{s=1}^h \bigoplus_{l_s} \mathbb Z$ and $K_0^{\operatorname{gr}} (\operatorname{\mathcal L}(F))\cong \bigoplus_{s=1}^{h'} \bigoplus_{l'_{s}} \mathbb Z$.
Tensoring with $\mathbb Q$, the isomorphism $\phi$ extends to an isomorphism of $\mathbb Q$-vector spaces, which then immediately implies $\sum_{s=1}^h l_s=\sum_{s=1}^{h'} l'_s$. We claim that $h=h'$ and (after a suitable permutation) $l_s=l'_s$ for any $1\leq s\leq h$, and that the restriction of $\phi$ gives an isomorphism $\phi: \bigoplus_{l_s} \mathbb Z \rightarrow \bigoplus_{l'_s} \mathbb Z$, $1\leq s\leq h$. Let $r=\sum_{s=1}^h l_s=\sum_{s=1}^{h'} l'_s$. Consider the matrix $a=(a_{ij})$, $1\leq i,j \leq r$, representing the isomorphism $\phi$. We first show that $a$ has to be a permutation matrix. Since $\phi$ is an order preserving homomorphism, all the entries of the matrix $a$ are nonnegative integers. Since $\phi$ is an isomorphism, $a$ is an invertible matrix, with inverse $b=(b_{ij})$, where $b_{ij} \geq 0$, $1\leq i,j \leq r$, as $b$ is also order preserving. Thus there is $1\leq j\leq r$ such that $a_{1j}\not = 0$. Since $\sum_{i=1}^{r}a_{1i} b_{il}=0$ for $1<l\leq r$, $a_{1j}\not = 0$ and all the numbers involved are nonnegative, it follows that $b_{jl}=0$ for all $1<l \leq r$. That is, the entries of the $j$-th row of the matrix $b$ are all zero except $b_{j1}$, which has to be nonzero. If there is another $j'\not = j$ such that $a_{1j'}\not = 0$, a similar argument shows that $b_{j'l}=0$ for all $1<l \leq r$. But this then implies that the matrix $b$ is singular, which is not the case. That is, there is only one non-zero entry in the first row of the matrix $a$. Furthermore, since $a_{1j}b_{j1}=1$, we get $a_{1j}=1$. Repeating the same argument for the other rows of the matrix $a$, we obtain that $a$ contains exactly one $1$ in each row and column, i.e., it is a permutation matrix. Let $e_s$ be the standard generators of $K_0^{\operatorname{gr}} (\operatorname{\mathcal L}(E))$. Since $a$ is a permutation matrix, $\phi(e_1)=e_k$ for some $k$.
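The linear-algebra fact just used, that an invertible integer matrix which is order preserving in both directions must be a permutation matrix, can be sanity-checked by brute force in the $2\times 2$ case. The following Python sketch (an illustration only, not part of the proof) enumerates all candidates with small entries:

```python
from itertools import product

def inverse_2x2(a):
    # Integer inverse of a 2x2 integer matrix, or None if det is not +-1.
    d = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    if d not in (1, -1):
        return None
    return [[a[1][1] // d, -a[0][1] // d],
            [-a[1][0] // d, a[0][0] // d]]

def nonneg(m):
    return all(x >= 0 for row in m for x in row)

# Enumerate all 2x2 matrices with entries 0..3 that are invertible over Z
# and order preserving in both directions (a and a^{-1} both nonnegative).
found = []
for e in product(range(4), repeat=4):
    a = [list(e[:2]), list(e[2:])]
    b = inverse_2x2(a)
    if b is not None and nonneg(a) and nonneg(b):
        found.append(a)

# Only the two 2x2 permutation matrices survive.
assert sorted(found) == [[[0, 1], [1, 0]], [[1, 0], [0, 1]]]
```

Only the identity and the swap matrix pass, in line with the row-by-row argument in the text, which handles the general $r\times r$ case.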
Suppose that for a suitable permutation of the $l'_s$, $1\leq s \leq h'$, we have $1\leq k\leq l'_1$, i.e., $e_k$ is a non-zero element of $\bigoplus_{l'_1} \mathbb Z$. Then \[x^{l_1} e_k=x^{l_1} \phi(e_1)= \phi(x^{l_1} e_1)= \phi(e_1) =e_k.\] This implies that $l_1 \geq l'_1$. On the other hand, \[\phi(e_1)= e_k= x^{l'_1} e_k= x^{l'_1} \phi(e_1)= \phi(x^{l'_1} e_1).\] So $e_1= x^{l'_1} e_1$. This implies that $l'_1 \geq l_1$. Thus $l_1=l'_1$. Again, since $\phi$ is a $\mathbb Z[x,x^{-1}]$-module isomorphism, the generators of $\bigoplus_{l_1} \mathbb Z$ map to the generators of $\bigoplus_{l'_1} \mathbb Z$, so it follows that the restriction of $\phi$ gives an isomorphism $\phi: \bigoplus_{l_1} \mathbb Z \rightarrow \bigoplus_{l'_1} \mathbb Z$. Repeating the same argument for $2\leq s \leq h$, we obtain the claim. This shows that $\operatorname{\mathcal L}(E)$ and $\operatorname{\mathcal L}(F)$ are direct sums of matrix algebras over $K[x^{l_s},x^{-l_s}]$, $1\leq s \leq h$, such that the graded $K_0$-group of each summand of $\operatorname{\mathcal L}(E)$ is $\mathbb Z[x,x^{-1}]$-module isomorphic to that of the corresponding summand of $\operatorname{\mathcal L}(F)$. The isomorphism between each pair of summands, and thus the isomorphism between the Leavitt path algebras $\operatorname{\mathcal L}(E)$ and $\operatorname{\mathcal L}(F)$, now follows from applying Theorem~\ref{catgrhsf} $h$ times (i.e., once for each summand), by setting $A=K[x^{l_s},x^{-l_s}]$, $\Gamma_A=l_s\mathbb Z$, $1\leq s \leq h$, and $\Gamma=\mathbb Z$. (iii) $E$ and $F$ are multi-headed rose graphs: By~(\ref{dampai}), $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E)) \cong \bigoplus_{s=1}^t \mathbb Z[1/n_s]$ and $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(F)) \cong \bigoplus_{s=1}^{t'} \mathbb Z[1/n'_s]$. Tensoring with $\mathbb Q$, the isomorphism $\phi$ extends to an isomorphism of $\mathbb Q$-vector spaces, which then immediately implies $t=t'$.
Since the group of $n$-adic fractions $\mathbb Z[1/n]$ is a flat $\mathbb Z$-module, $\phi$ can be represented by a $t\times t$ matrix with entries from $\mathbb Q$. Write the sequence $(n_1,\dots,n_t)$ in ascending order (changing the labels if necessary) and write the vertices $\{v_1,\dots,v_t\}$ (again, changing the labels if necessary) such that $L_{n_1},\dots , L_{n_t}$ correspond to $(n_1,\dots,n_t)$. Do this for $(n'_1,\dots,n'_t)$ and $\{v'_1,\dots,v'_{t'}\}$ as well. We first show that $n_s=n_s'$, $1\leq s \leq t$, and that $\phi$ is a block diagonal matrix. Write the ascending sequence \begin{equation}\label{potrw} (n_1,\dots,n_t)=({\sf n}_1,\dots,{\sf n}_1,{\sf n}_2,\dots,{\sf n}_2,\dots,{\sf n}_k,\dots,{\sf n}_k), \end{equation} where each ${\sf n}_l$ occurs $r_l$ times, $1\leq l \leq k$. Let $\phi$ be represented by the matrix $a=(a_{ij})_{t\times t}$. Choose the smallest $s_1$, $1\leq s_1\leq t$, such that all entries below the $s_1$-th row and between the first and the $r_1$-th column (including the $r_1$-th column) are zero. Clearly $s_1\geq 1$, as there is at least one nonzero entry in the $r_1$-th column, since $\phi$ is an isomorphism and so $a$ is an invertible matrix. Formally, $s_1$ is chosen such that $a_{ij}=0$ for all $s_1 < i \leq t$ and $1\leq j \leq r_1$, and there is $1\leq j \leq r_1$ such that $a_{s_1j}\not = 0$. We will show that $s_1=r_1$. Pick $a_{s_1j}\not = 0$, for some $1\leq j \leq r_1$. Then $\phi(e_{j})=\sum_{s=1}^{t}a_{sj}e'_s$, where $e_{j}\in \bigoplus_{s=1}^t \mathbb Z[1/n_s] $ and $e'_j \in \bigoplus_{s=1}^{t} \mathbb Z[1/n'_s]$ denote the elements with $1$ in the $j$-th entry and zero elsewhere, respectively. This shows $a_{s_1j}e'_{s_1} \in \mathbb Z[1/n'_{s_1}]$. But $\phi$ is a $\mathbb Z[x,x^{-1}]$-module homomorphism. Thus \begin{equation}\label{chrisdeb} \sum_{s=1}^{t}n'_s a_{sj}e'_s=x \phi(e_j) =\phi(x e_j)=\phi(n_1 e_j)= \sum_{s=1}^{t}n_1 a_{sj}e'_s. \end{equation} Since $a_{s_1j}\not = 0$, we obtain $n'_{s_1}=n_1$.
Next we show $a_{ij}=0$ for all $1 \leq i \leq s_1$ and $r_1 < j \leq t$ (see the matrix below). \begin{equation}\label{jh7h6g} a=\left(\begin{array}{cccccc} a_{11} & \cdots & a_{1r_1} & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & 0 & \cdots & 0 \\ a_{s_1,1} & \cdots & a_{s_1,r_1} & 0 & \cdots& 0 \\ 0 & 0 & 0 & a_{s_1+1,r_1+1} & \cdots & a_{s_1+1,t} \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & a_{t,r_1+1} & \cdots & a_{t,t} \\ \end{array}\right)_{t\times t}. \end{equation} Suppose $a_{ij} \not = 0$ for some $1 \leq i \leq s_1$ and $r_1 < j \leq t$. Since $j>r_1$, $n_j>n_1$. Applying $\phi$ to $e_j$, a similar calculation to~(\ref{chrisdeb}) shows that, since $a_{ij} \not = 0$, $n'_i=n_j$. But the sequence $(n'_1,\dots,n'_t)$ is arranged in ascending order, so if $i\leq s_1$, then $n_j=n'_i \leq n'_{s_1}=n_1$, which is a contradiction. Therefore the matrix $a$ consists of two blocks as in~(\ref{jh7h6g}). Next we show that for $1\leq i\leq s_1$, $n'_i=n'_{s_1}=n_1$. If $1\leq i\leq s_1$, there is $a_{ij}\not = 0$ for some $1\leq j\leq r_1$ (as $a_{ij}= 0$ for all $j>r_1$, and $a$ is invertible). Then considering $\phi(e_j)$, again a similar calculation to~(\ref{chrisdeb}) shows that $n'_i=n_1$. Now restricting $\phi$ to the first $r_1$ summands of $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E))$, we have a homomorphism \begin{equation}\label{beau54} \phi_{r_1}: \bigoplus_{s=1}^{r_1} \mathbb Z[1/n_1]\longrightarrow \bigoplus_{s=1}^{s_1} \mathbb Z[1/n_1]. \end{equation} Being the restriction of the isomorphism $\phi$, $\phi_{r_1}$ is also an isomorphism. Tensoring~(\ref{beau54}) with $\mathbb Q$, we deduce $s_1=r_1$ as claimed. Repeating the same argument ($k-1$ times) for the second block of the matrix $a$ in~(\ref{jh7h6g}), we obtain that $a$ is a block diagonal matrix consisting of $k$ blocks of size $r_i\times r_i$, $1\leq i \leq k$.
Since $\phi$ is an order preserving homomorphism, all the entries of the matrix $a$ are nonnegative numbers. Similarly to the case of multi-headed comets, we show that each $\phi_{r_i}$ is a permutation matrix. Consider the matrix $a_{r_1}=(a_{ij})$, $1\leq i,j \leq r_1$, representing the isomorphism $\phi_{r_1}$. Since $\phi_{r_1}$ is an isomorphism, $a_{r_1}$ is an invertible matrix, with inverse $b_{r_1}=(b_{ij})$, where $b_{ij} \geq 0$, $1\leq i,j \leq r_1$, as $b_{r_1}$ is also order preserving. Thus there is $1\leq j\leq r_1$ such that $a_{1j}\not = 0$. Since $\sum_{i=1}^{r_1}a_{1i} b_{il}=0$ for $1<l\leq r_1$, $a_{1j}\not = 0$ and all the numbers involved are nonnegative, it follows that $b_{jl}=0$ for all $1<l \leq r_1$. That is, the entries of the $j$-th row of the matrix $b_{r_1}$ are all zero except $b_{j1}$, which has to be nonzero. If there is another $j'\not = j$ such that $a_{1j'}\not = 0$, a similar argument shows that $b_{j'l}=0$ for all $1<l \leq r_1$. But this then implies that the matrix $b_{r_1}$ is singular, which is not the case. That is, there is only one non-zero entry in the first row of the matrix $a_{r_1}$. Furthermore, since $a_{1j}b_{j1}=1$, $a_{1j}={n_1}^{c_1}$ for some $c_1 \in \mathbb Z$. Repeating the same argument for the other rows of the matrix $a_{r_1}$, we obtain that $a_{r_1}$ contains exactly one number of the form ${n_1}^c$, $c\in \mathbb Z$, in each row and column, i.e., it is a permutation matrix with entries invertible in $\mathbb Z[1/n_1]$. As in Theorem~\ref{re282}, we can write \begin{equation*} \operatorname{\mathcal L}(E) \cong_{\operatorname{gr}} \bigoplus_{s=1}^t \operatorname{\mathbb M}_{n(v_s)} \big(\operatorname{\mathcal L}(1,n_s)\big)\big (|p^{v_s}_1|,\dots, |p^{v_s}_{n(v_s)}|\big). \end{equation*} Furthermore, one can re-group this into $k$ summands, with each summand consisting of the matrices over the Leavitt algebra $\operatorname{\mathcal L}(1,{\sf n}_u)$, $1\leq u \leq k$ (see~(\ref{potrw})).
Do the same re-grouping for $\operatorname{\mathcal L}(F)$. On the level of $K_0$, since the isomorphism $\phi$ decomposes into a block diagonal matrix, it is enough to consider each $\phi_{r_i}$ separately. For $i=1$, \begin{equation} \phi_{r_1}: \bigoplus_{s=1}^{r_1} K^{\operatorname{gr}}_0\big(\operatorname{\mathbb M}_{n(v_s)} \big(R\big)\big (|p^{v_s}_1|,\dots, |p^{v_s}_{n(v_s)}|\big)\big) \longrightarrow \bigoplus_{s=1}^{r_1} K^{\operatorname{gr}}_0\big(\operatorname{\mathbb M}_{n(v'_s)} \big(R\big)\big (|p'^{v'_s}_1|,\dots, |p'^{v'_s}_{n(v'_s)}|\big)\big) \end{equation} with \[\phi_{r_1}\big(\bigoplus_{s=1}^{r_1} \big[\operatorname{\mathbb M}_{n(v_s)} \big(R\big)\big (|p^{v_s}_1|,\dots, |p^{v_s}_{n(v_s)}|\big)\big ]\big )= \bigoplus_{s=1}^{r_1} \big [\operatorname{\mathbb M}_{n(v'_s)} \big(R\big)\big (|p'^{v'_s}_1|,\dots, |p'^{v'_s}_{n(v'_s)}|\big)\big],\] where $R=\operatorname{\mathcal L}(1,{\sf n}_1)$ and ${\sf n}_1=n_1$. Using the graded Morita theory (as in Theorem~\ref{re282}), we can write \[\phi_{r_1}\big( \bigoplus_{s=1}^{r_1} \big [R(-|p^{v_s}_1|)\oplus\dots\oplus R(-|p^{v_s}_{n(v_s)}|)\big ]\big)= \bigoplus_{s=1}^{r_1} \big[R(-|p'^{v'_s}_1|)\oplus\dots\oplus R(-|p'^{v'_s}_{n(v'_s)}|)\big]. \] But $\phi_{r_1}$ is a permutation matrix (with powers of $n_1$ as entries). Thus we have (after suitable relabeling if necessary), for any $1\leq s \leq r_1$, \begin{equation}\label{h32319} n_1^{c_s}\big(\big[R(-|p^{v_s}_1|)\oplus\dots\oplus R(-|p^{v_s}_{n(v_s)}|)\big ]\big)= \big[R(-|p'^{v'_s}_1|)\oplus\dots\oplus R(-|p'^{v'_s}_{n(v'_s)}|)\big], \end{equation} where $c_s \in \mathbb Z$.
Since for any $i \in \mathbb Z$, \[\operatorname{\mathcal L}(1,n_1)(i)\cong \bigoplus_{n_1}\operatorname{\mathcal L}(1,n_1)(i-1),\] if $c_s>0$, we have \begin{equation}\label{gfqw32} \big[R(-|p^{v_s}_1|+c_s)\oplus\dots\oplus R(-|p^{v_s}_{n(v_s)}|+c_s)\big ]= \big[R(-|p'^{v'_s}_1|)\oplus\dots\oplus R(-|p'^{v'_s}_{n(v'_s)}|)\big], \end{equation} otherwise \begin{equation}\label{gfqw35} \big [R(-|p^{v_s}_1|)\oplus\dots\oplus R(-|p^{v_s}_{n(v_s)}|)\big ]= \big[R(-|p'^{v'_s}_1|+c_s)\oplus\dots\oplus R(-|p'^{v'_s}_{n(v'_s)}|+c_s)\big]. \end{equation} Suppose, say, that~(\ref{gfqw32}) holds; the case of~(\ref{gfqw35}) is similar. Recall that by Dade's theorem (see \S\ref{dadestmal}), the functor $(-)_0:\mbox{gr-}A\rightarrow \mbox{mod-}A_0$, $M\mapsto M_0$, is an equivalence, so it induces an isomorphism $K_0^{\operatorname{gr}}(R) \rightarrow K_0(R_0)$. Applying this to~(\ref{gfqw32}), we obtain \begin{equation}\label{gfqw30021} \big[R_{-|p^{v_s}_1|+c_s}\oplus\dots\oplus R_{-|p^{v_s}_{n(v_s)}|+c_s}\big ]= \big[R_{-|p'^{v'_s}_1|}\oplus\dots\oplus R_{-|p'^{v'_s}_{n(v'_s)}|}\big] \end{equation} in $K_0(R_0)$. Since $R_0$ is an ultramatricial algebra, it is unit-regular, and by Proposition~15.2 in~\cite{goodearlbook}, \begin{equation}\label{gfqw3002} R_{-|p^{v_s}_1|+c_s}\oplus\dots\oplus R_{-|p^{v_s}_{n(v_s)}|+c_s}\cong R_{-|p'^{v'_s}_1|}\oplus\dots\oplus R_{-|p'^{v'_s}_{n(v'_s)}|} \end{equation} as $R_0$-modules.
Tensoring this with the graded ring $R$ over $R_0$, since $R$ is strongly graded (Theorem~\ref{hazst}) and so $R_{\alpha} \otimes_{R_0} R \cong_{\operatorname{gr}} R(\alpha)$ as graded $R$-modules (\cite[3.1.1]{grrings}), we have a (right) graded $R$-module isomorphism \begin{equation} R(-|p^{v_s}_1|+c_s)\oplus\dots\oplus R(-|p^{v_s}_{n(v_s)}|+c_s)\cong_{\operatorname{gr}} R(-|p'^{v'_s}_1|)\oplus\dots\oplus R(-|p'^{v'_s}_{n(v'_s)}|). \end{equation} Applying the $\operatorname{End}$-functor to this isomorphism, we get (see \S\ref{matgrhe}) \[\operatorname{\mathbb M}_{n(v_s)} \big(\operatorname{\mathcal L}(1,{\sf n}_1)\big)\big (|p^{v_s}_1|-c_s,\dots, |p^{v_s}_{n(v_s)}|-c_s\big) \cong_{\operatorname{gr}} \operatorname{\mathbb M}_{n(v_s)} \big(\operatorname{\mathcal L}(1,{\sf n}_1)\big)\big (|p'^{v'_s}_1|,\dots, |p'^{v'_s}_{n(v'_s)}|\big), \] which is a $K$-algebra isomorphism. But clearly (\S\ref{matgrhe}) \[\operatorname{\mathbb M}_{n(v_s)} \big(\operatorname{\mathcal L}(1,{\sf n}_1)\big)\big (|p^{v_s}_1|-c_s,\dots, |p^{v_s}_{n(v_s)}|-c_s\big) \cong_{\operatorname{gr}} \operatorname{\mathbb M}_{n(v_s)} \big(\operatorname{\mathcal L}(1,{\sf n}_1)\big)\big (|p^{v_s}_1|,\dots, |p^{v_s}_{n(v_s)}|\big). \] So \[ \operatorname{\mathbb M}_{n(v_s)} \big(\operatorname{\mathcal L}(1,{\sf n}_1)\big)\big (|p^{v_s}_1|,\dots, |p^{v_s}_{n(v_s)}|\big) \cong_{\operatorname{gr}} \operatorname{\mathbb M}_{n(v_s)} \big(\operatorname{\mathcal L}(1,{\sf n}_1)\big)\big (|p'^{v'_s}_1|,\dots, |p'^{v'_s}_{n(v'_s)}|\big). \] Repeating this argument for all $1 \leq s \leq r_1$ and then for all $r_i$, $1\leq i\leq k$, we get \[\operatorname{\mathcal L}(E) \cong_{\operatorname{gr}} \operatorname{\mathcal L}(F)\] as $K$-algebras. \end{proof} Using Theorem~\ref{mani543} we show that Conjecture~\ref{cofian} holds for the category of polycephaly graphs. \begin{corollary}\label{griso7dgre} Let $E$ and $F$ be polycephaly graphs.
Then $\operatorname{\mathcal L}_K(E) \cong_{\operatorname{gr}} \operatorname{\mathcal L}_K(F)$ as rings if and only if $\operatorname{\mathcal L}_K(E) \cong_{\operatorname{gr}} \operatorname{\mathcal L}_K(F)$ as $K$-algebras. \end{corollary} \begin{proof} Let $E$ and $F$ be two polycephaly graphs such that $\operatorname{\mathcal L}(E)\cong_{\operatorname{gr}} \operatorname{\mathcal L}(F)$. This gives a $\mathbb Z[x,x^{-1}]$-module isomorphism \[\big (K_0^{\operatorname{gr}}(\operatorname{\mathcal L}_K(E)),[\operatorname{\mathcal L}_K(E)]\big ) \cong \big (K_0^{\operatorname{gr}}(\operatorname{\mathcal L}_K(F)),[\operatorname{\mathcal L}_K(F)]\big ).\] As in the first part of the proof of Theorem~\ref{mani543}, the graded $K_0$ of each type of the graph $E$ (acyclic, comets or rose graphs) is isomorphic to that of the corresponding type of the graph $F$. For the acyclic and comet parts of $E$ and $F$, the combination of parts (1) and (2) of Theorem~\ref{catgrhsf} shows that their corresponding Leavitt path algebras are isomorphic as $K$-algebras. (In fact if $E$ and $F$ are $C_n$-comets, then there is a $K[x^n,x^{-n}]$-graded isomorphism.) The $K$-algebra isomorphism of the Leavitt path algebras of the rose-graph parts of $E$ and $F$ was proved in the last part of the proof of Theorem~\ref{mani543}. \end{proof} \forget \begin{remark}\label{ng4352} Let $E$ and $F$ be graphs with no sinks such that the weak classification Conjecture~\ref{weakconj} holds. Then $C^*(E)\cong C^*(F)$ implies $\operatorname{\mathcal L}_{\mathbb C}(E) \cong_{\operatorname{gr}} \operatorname{\mathcal L}_{\mathbb C} (F)$. Indeed, if $C^*(E)\cong C^*(F)$, then $C^*(E)\rtimes_{\gamma} \mathbb T \cong C^*(F)\rtimes_{\gamma} \mathbb T$. By \cite[Lemma~3.1]{Raeburn113}, $C^*(E\times_1 \mathbb Z)\cong C^*(E)\rtimes_{\gamma} \mathbb T$.
From sequence~(\ref{lol1532}) it now follows that \[K^{\operatorname{gr}}_0(\operatorname{\mathcal L}_{\mathbb C}(E)) \cong K_0(C^*(E\times_1 \mathbb Z)) \cong K_0(C^*(F\times_1 \mathbb Z)) \cong K^{\operatorname{gr}}_0(\operatorname{\mathcal L}_{\mathbb C}(F)).\] So if Conjecture~\ref{weakconj} holds, we then have $\operatorname{\mathcal L}_{\mathbb C}(E) \cong_{\operatorname{gr}} \operatorname{\mathcal L}_{\mathbb C}(F)$. Thus by Theorem~\ref{mani543}, if $E$ and $F$ are comet or polycephaly graphs and $C^*(E)\cong C^*(F)$, then $\operatorname{\mathcal L}_{\mathbb C}(E) \cong_{\operatorname{gr}} \operatorname{\mathcal L}_{\mathbb C} (F)$. \end{remark} \forgotten \begin{remark} In Theorem~\ref{mani543}, the assumption that the isomorphism between the $K$-groups is a $\mathbb Z[x,x^{-1}]$-module homomorphism is used in a significant way. The following example shows that one cannot relax this assumption (see also Example~\ref{smallgraphs2}). Consider $R=\operatorname{\mathcal L}_2$ and $S=\operatorname{\mathcal L}_4$. By Theorem~\ref{re282}, $(K^{\operatorname{gr}}_0(R),[R])=(\mathbb Z[1/2],1)$ and $(K^{\operatorname{gr}}_0(S),[S])=(\mathbb Z[1/4],1)$. So the identity map gives an isomorphism $(K^{\operatorname{gr}}_0(R),[R]) \cong (K^{\operatorname{gr}}_0(S),[S])$. But we know $\operatorname{\mathcal L}_2 \not \cong \operatorname{\mathcal L}_4$, since by Theorem~\ref{wke}, $K_0(\operatorname{\mathcal L}_2)=0$ whereas $K_0(\operatorname{\mathcal L}_4)=\mathbb Z/3\mathbb Z$. Theorem~\ref{mani543} cannot be applied here, as this isomorphism is not a $\mathbb Z[x,x^{-1}]$-module isomorphism. For, otherwise $2[S(-1)]=2[R(-1)]=[R]=[S]=4[S(-1)]$. This implies $[R]=2[S(-1)]=0$, which is clearly absurd. \end{remark} \subsection{Matrices over Leavitt algebras} In \cite{arock}, Abrams gives necessary and sufficient conditions for matrices over Leavitt path algebras to be graded isomorphic.
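Abrams' criterion is phrased via the factorization of the matrix size $k$ along $n$, recalled in the next paragraph: write $k=td$ where $d\mid n^i$ for some positive integer $i$ and $\gcd(t,n)=1$. A small computational sketch of this factorization (the function name `factor_along` is ours, not from \cite{arock}):

```python
from math import gcd

def factor_along(k, n):
    """Factor k = t*d with d | n**i for some i and gcd(t, n) = 1."""
    t, d = k, 1
    # Repeatedly strip from t every prime factor it shares with n;
    # all stripped factors divide a power of n, so they accumulate in d.
    g = gcd(t, n)
    while g > 1:
        t //= g
        d *= g
        g = gcd(t, n)
    return t, d

# Along n = 2: 3 = 3*1 (t = 3) while 4 = 1*4 (t = 1).
assert factor_along(3, 2) == (3, 1)
assert factor_along(4, 2) == (1, 4)
```

For instance, along $n=2$ the sizes $3$ and $4$ have prime-to-$2$ parts $t=3$ and $t=1$, which is why $M_3(\operatorname{\mathcal L}_2)\not\cong_{\operatorname{gr}} M_4(\operatorname{\mathcal L}_2)$ in the trivial-suspension setting discussed below.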
For a $\Gamma$-graded ring $A$, he considers the grading induced on $\operatorname{\mathbb M}_k(A)$ by setting $\operatorname{\mathbb M}_k(A)_\gamma=\operatorname{\mathbb M}_k(A_\gamma)$, for $\gamma \in \Gamma$, i.e., the $(0,\dots,0)$-suspension. For positive integers $k$ and $n$, a {\it factorization of $k$ along $n$} means $k=td$, where $d\mid n^i$ for some positive integer $i$ and $\gcd(t,n)=1$. Then Theorem~1.3 in \cite{arock} states that for positive integers $k,k',n$, $n \geq 2$, $M_k(\operatorname{\mathcal L}_n)\cong_{\operatorname{gr}} M_{k'}(\operatorname{\mathcal L}_n)$ if and only if $t=t'$, where $k=td$, respectively $k'=t'd'$, are the factorizations of $k$, respectively $k'$, along $n$. With this classification, \[M_3(\operatorname{\mathcal L}_2)\not \cong_{\operatorname{gr}} M_4(\operatorname{\mathcal L}_2).\] However, combining Theorem~\ref{re282} and Theorem~\ref{mani543}, we shall see that the following two graphs \begin{equation*} \xymatrix@=10pt{ &\bullet \ar[dr] & &&&& & \bullet \ar[dr] \\ E: && \bullet \ar@(ur,rd) \ar@(u,r) & & && F: & & \bullet \ar[r] & \bullet \ar@(ur,rd) \ar@(u,r) &\\ &\bullet \ar[ur] & &&&& & \bullet \ar[ur] } \end{equation*} give graded isomorphic Leavitt path algebras, i.e., \begin{equation}\label{b53s} \operatorname{\mathbb M}_3(\operatorname{\mathcal L}_2)(0,1,1) \cong_{\operatorname{gr}} \operatorname{\mathbb M}_4(\operatorname{\mathcal L}_2)(0,1,2,2). \end{equation} This shows that, by considering suspensions, we obtain much wider classes of graded isomorphisms of matrices over Leavitt algebras. Indeed, by Theorem~\ref{re282}, $(K^{\operatorname{gr}}_0(\operatorname{\mathcal L}(E)),[\operatorname{\mathcal L}(E)])=( \mathbb Z[1/2],2)$ and $(K^{\operatorname{gr}}_0(\operatorname{\mathcal L}(F)),[\operatorname{\mathcal L}(F)])= (\mathbb Z[1/2],2)$.
Now the identity map gives an order-preserving isomorphism between these $K$-groups, and so by Theorem~\ref{mani543} we have $\operatorname{\mathcal L}(E)\cong_{\operatorname{gr}} \operatorname{\mathcal L}(F)$; applying~(\ref{dampai}) we then obtain the graded isomorphism~(\ref{b53s}). We can now give a criterion for when two matrices over Leavitt algebras are graded isomorphic. It is an easy exercise to see that this covers Abrams' theorem \cite[Proposition~1.3]{arock} when there is no suspension. \begin{theorem}\label{cfd2497} Let $k,k',n,n'$ be positive integers, where $n$ is prime. Then \begin{equation}\label{tes5363} \operatorname{\mathbb M}_k(\operatorname{\mathcal L}_n)(\lambda_1,\dots,\lambda_k) \cong_{\operatorname{gr}} \operatorname{\mathbb M}_{k'}(\operatorname{\mathcal L}_{n'})(\gamma_1,\dots,\gamma_{k'}) \end{equation} if and only if $n=n'$ and there is $j \in \mathbb Z$ such that $n^j\sum_{i=1}^k n^{-\lambda_i}=\sum_{i=1}^{k'} {n}^{-\gamma_i}$. \end{theorem} \begin{proof} By Theorem~\ref{re282} (and its proof), the isomorphism~(\ref{tes5363}) induces a $\mathbb Z[x,x^{-1}]$-isomorphism $\phi:\mathbb Z[1/n]\rightarrow \mathbb Z[1/n']$ such that $\phi(\sum_{i=1}^k n^{-\lambda_i})=\sum_{i=1}^{k'} {n'}^{-\gamma_i}$. Now \[n \phi(1)=\phi(n 1) =\phi(x 1) =x\phi(1)=n'\phi(1), \] which implies $n=n'$. (This could also have been obtained by applying non-graded $K_0$ to the isomorphism~(\ref{tes5363}) and applying Theorem~\ref{wke}.) Since $\phi$ is an isomorphism, it has to be multiplication by $n^j$ for some $j \in \mathbb Z$. Thus $\sum_{i=1}^{k'} {n}^{-\gamma_i}=\phi(\sum_{i=1}^k n^{-\lambda_i})=n^j \sum_{i=1}^k n^{-\lambda_i}$. Conversely, suppose there is $j \in \mathbb Z$ such that $n^j\sum_{i=1}^k n^{-\lambda_i}=\sum_{i=1}^{k'} {n}^{-\gamma_i}$. Set $A=\operatorname{\mathcal L}_n$.
Since Leavitt algebras are strongly graded (Theorem~\ref{hazst}), it follows that $R=\operatorname{\mathbb M}_k(A)(\lambda_1,\dots,\lambda_k)$ and $S=\operatorname{\mathbb M}_{k'}(A)(\gamma_1,\dots,\gamma_{k'})$ are strongly graded (\cite[2.10.8]{grrings}). Again Theorem~\ref{re282} shows that multiplication by $n^j$ gives an isomorphism $(K^{\operatorname{gr}}_0(R),[R]) \cong (K^{\operatorname{gr}}_0(S),[S])$. Therefore $n^j[R]=[S]$. Using graded Morita theory, we get \[ n^j [A(-\lambda_1)\oplus \dots \oplus A(-\lambda_k)]= [A(-\gamma_1)\oplus\dots\oplus A(-\gamma_{k'})]. \] Using Dade's theorem (\S\ref{dadestmal}), passing this equality to $A_0$, we get \[ n^j [A_{-\lambda_1}\oplus\dots \oplus A_{-\lambda_k}]= [A_{-\gamma_1}\oplus\dots\oplus A_{-\gamma_{k'}}]. \] If $j\geq 0$ (the case $j<0$ is argued similarly), since $A_0$ is an ultramatricial algebra, it is unit-regular and by Proposition~15.2 in~\cite{goodearlbook}, \[A^{n^j}_{-\lambda_1}\oplus\dots \oplus A_{-\lambda_k}^{n^j} \cong A_{-\gamma_1}\oplus\dots\oplus A_{-\gamma_{k'}}.\] Tensoring this with the graded ring $A$ over $A_0$, since $A=\operatorname{\mathcal L}(1,n)$ is strongly graded (Theorem~\ref{hazst}) and so $A_{\lambda} \otimes_{A_0} A \cong_{\operatorname{gr}} A(\lambda)$ as graded $A$-modules (\cite[3.1.1]{grrings}), we have a (right) graded $A$-module isomorphism \[ A(-\lambda_1)^{n^j}\oplus\dots \oplus A(-\lambda_k)^{n^j} \cong_{\operatorname{gr}} A(-\gamma_1)\oplus\dots\oplus A(-\gamma_{k'}).
\] Since for any $i \in \mathbb Z$, $A(i)\cong \bigoplus_{n} A(i-1)$, we have \[ A(-\lambda_1+j)\oplus\dots \oplus A(-\lambda_k+j) \cong_{\operatorname{gr}} A(-\gamma_1)\oplus\dots\oplus A(-\gamma_{k'}). \] Now applying the $\operatorname{End}$-functor to this isomorphism, we get (see \S\ref{matgrhe}) \[ \operatorname{\mathbb M}_k(A)(\lambda_1-j,\dots,\lambda_k-j) \cong_{\operatorname{gr}} \operatorname{\mathbb M}_{k'}(A)(\gamma_1,\dots,\gamma_{k'}), \] which gives the isomorphism~(\ref{tes5363}). \end{proof} \forget Applying Dade's theorem (see \S\ref{dadestmal}), we obtain the isomorphism $\phi_0$ below, \begin{equation}\label{gsqqwq} \xymatrix{ \big(K^{\operatorname{gr}}_0(R),[R]\big) \cong \big (\mathbb Z[1/n], \sum_{i=1}^k n^{-\lambda_i} \big) \ar[rr]^{\phi} && \big (\mathbb Z[1/n], \sum_{i=1}^{k'} n^{-\gamma_i} \big) \cong \big(K^{\operatorname{gr}}_0(S),[S]\big) \ar[d]^{(-)_0} \\ \big(K_0(R_0),[R_0]\big) \ar[u]^{-\otimes_{R_0}R} \ar@{.>}[rr]^{\phi_0} && \big(K_0(S_0),[S_0]\big) } \end{equation} Since $R_0$ and $S_0$ are ultramatricial algebras, by Elliott's theorem (see~\cite[Theorem 15.26]{goodearlbook}), there exists an isomorphism $R_0 \cong S_0$.
Tensoring this with $\operatorname{\mathcal L}_n$, since $\operatorname{\mathcal L}_n$ is strongly graded, we obtain \begin{multline*} \operatorname{\mathbb M}_k(\operatorname{\mathcal L}_n)(\lambda_1,\dots,\lambda_k) \cong_{\operatorname{gr}} \operatorname{\mathbb M}_k(\operatorname{\mathcal L}_n)(\lambda_1,\dots,\lambda_k)_0 \otimes_{\mathbb Z} \operatorname{\mathcal L}_n \cong_{\operatorname{gr}} \\ \operatorname{\mathbb M}_{k'}(\operatorname{\mathcal L}_{n})(\gamma_1,\dots,\gamma_{k'})_0 \otimes_{\mathbb Z} \operatorname{\mathcal L}_n \cong_{\operatorname{gr}} \operatorname{\mathbb M}_{k'}(\operatorname{\mathcal L}_{n})(\gamma_1,\dots,\gamma_{k'}) \end{multline*} \forgotten \begin{remarks} \begin{enumerate} \item The `if' part of Theorem~\ref{cfd2497} does not need the assumption that $n$ is prime. In fact, the same proof shows that if $\operatorname{\mathbb M}_k(\operatorname{\mathcal L}_n)(\lambda_1,\dots,\lambda_k) \cong_{\operatorname{gr}} \operatorname{\mathbb M}_{k'}(\operatorname{\mathcal L}_{n'})(\gamma_1,\dots,\gamma_{k'})$, then $n=n'$ and there is $j \in \mathbb Z$ such that $p^\delta \sum_{i=1}^k n^{-\lambda_i}=\sum_{i=1}^{k'} {n}^{-\gamma_i}$, where $p \mid n^j$ and $\delta=\pm 1$. By setting $k=1$, $\lambda_1=0$ and $\gamma_i=0$, $1\leq i \leq k'$, we recover Theorem~6.3 of \cite{aapardo}. \item The proof of Theorem~\ref{cfd2497} shows that the isomorphism (\ref{tes5363}) is in fact an induced graded isomorphism. Recall that for graded (right) modules $N_1$ and $N_2$ over the graded ring $A$, a graded ring isomorphism $\Phi: \operatorname{End}(N_1) \rightarrow \operatorname{End}(N_2)$ is called {\it induced} if there is a graded $A$-module isomorphism $\phi:N_1\rightarrow N_2$ which lifts to $\Phi$ (see \cite[p.4]{arock}). \end{enumerate} \end{remarks} By a proof similar to that of Theorem~\ref{cfd2497} we can obtain the following result.
\begin{corollary} \label{k43shab} There is a graded $\operatorname{\mathcal L}(1,n)$-isomorphism \[\operatorname{\mathcal L}(1,n)^k (\lambda_1,\dots,\lambda_k) \cong_{\operatorname{gr}} \operatorname{\mathcal L}(1,n)^{k'}(\gamma_1,\dots,\gamma_{k'})\] if and only if $\sum_{i=1}^k n^{\lambda_i}=\sum_{i=1}^{k'} {n}^{\gamma_i}$. \end{corollary} \subsection{Graded $K_0$ of finite graphs with no sinks} In the last two theorems we restrict ourselves to Leavitt path algebras associated to finite graphs with no sinks. By Theorem~\ref{hazst}, these are precisely the strongly graded unital Leavitt path algebras. One would expect the conjectures stated in the Introduction (\S\ref{introf}) to be checked first for this kind of graphs. Theorem~\ref{ictp1} (and Corollary~\ref{hgfwks6}) shows that the structure of $\operatorname{\mathcal L}(E)$ is closely reflected in the structure of the group $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E))$. \begin{theorem} Let $E$ and $F$ be finite graphs with no sinks. Then the following are equivalent: \begin{enumerate}[\upshape(1)] \item $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E)) \cong K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(F))$ as partially ordered groups. \item $\operatorname{\mathcal L}(E)$ is gr-Morita equivalent to $\operatorname{\mathcal L}(F).$ \item $\operatorname{\mathcal L}(E)_0$ is Morita equivalent to $\operatorname{\mathcal L}(F)_0.$ \end{enumerate} \end{theorem} \begin{proof} The equivalence of the statements follows from Theorems~\ref{hazst} and~\ref{ultrad} and Dade's theorem (\S\ref{dadestmal}), in combination with Corollary~15.27 in~\cite{goodearlbook}, which states that two ultramatricial algebras are Morita equivalent if their $K_0$-groups are isomorphic as partially ordered groups. \end{proof} Let $E$ be a finite graph with no sinks. Set $A=\operatorname{\mathcal L}_K(E)$.
For any $u \in E^0$ and $i \in \mathbb Z$, $uA(i)$ is a graded finitely generated projective right $A$-module, and any graded finitely generated projective $A$-module is generated by these modules up to isomorphism, i.e., $$\mathcal V^{\operatorname{gr}}(A)=\big \langle \big [uA(i)\big ] \mid u \in E^0, i \in \mathbb Z \big \rangle, $$ and $K_0^{\operatorname{gr}}(A)$ is the group completion of $\mathcal V^{\operatorname{gr}}(A)$. The action of $\mathbb Z[x,x^{-1}]$ on $\mathcal V^{\operatorname{gr}}(A)$, and thus on $K_0^{\operatorname{gr}}(A)$, is defined on generators by $x^j [uA(i)]=[uA(i+j)]$, where $i,j \in \mathbb Z$. We first observe that for $i\geq 0$, \begin{equation}\label{hterw} x[uA(i)]=[uA(i+1)]=\sum_{\{\alpha \in E^1 \mid s(\alpha)=u\}}[r(\alpha)A(i)]. \end{equation} Indeed, notice that for $i\geq 0$, $A_{i+1}=\sum_{\alpha \in E^1} \alpha A_i$. It follows that $uA_{i+1}=\bigoplus_{\{\alpha \in E^1 \mid s(\alpha)=u\}} \alpha A_i$ as $A_0$-modules. Using (\ref{veronaair}), and the fact that $\alpha A_i \cong r(\alpha) A_i$ as $A_0$-modules, we get $uA(i+1) \cong \bigoplus_{\{\alpha \in E^1 \mid s(\alpha)=u\}} r(\alpha) A(i)$ as graded $A$-modules. This gives~(\ref{hterw}). A subgroup $I$ of $K_0^{\operatorname{gr}}(A)$ is called a {\it graded ordered ideal} if $I$ is closed under the action of $\mathbb Z[x,x^{-1}]$, $I=I^{+}-I^{+}$, where $I^{+}=I\cap K_0^{\operatorname{gr}}(A)^{+}$, and $I^{+}$ is hereditary, i.e., if $a,b \in K_0^{\operatorname{gr}}(A)$ and $0 \leq a \leq b \in I$, then $a\in I$. \begin{theorem}\label{ictp1} Let $E$ be a finite graph with no sinks. Then there is a one-to-one correspondence between the hereditary and saturated subsets of $E^0$ and the graded ordered ideals of $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}_K(E))$.
\end{theorem} \begin{proof} Let $\mathcal H$ be the set of all hereditary and saturated subsets of $E^0$ and $\mathcal L(K_0^{\operatorname{gr}}(A))$ the set of all graded ordered ideals of $K_0^{\operatorname{gr}}(A)$, where $A=\operatorname{\mathcal L}_K(E)$. Define $\phi: \mathcal H \longrightarrow \mathcal L(K_0^{\operatorname{gr}}(A))$ as follows: for $H \in \mathcal H$, $\phi(H)$ is the graded ordered ideal generated by $\big \{[vA(i)]\mid v\in H, i\in \mathbb Z\big\}$. Define also $\psi: \mathcal L(K_0^{\operatorname{gr}}(A)) \longrightarrow \mathcal H$ as follows: for a graded ordered ideal $I$ of $K_0^{\operatorname{gr}}(A)$, set $\psi(I)=\big \{ v\in E^0 \mid [vA(i)] \in I, \text{ for some } i \in \mathbb Z \big \}$. We show that these maps are order-preserving mutually inverse maps. Let $I$ be a graded ordered ideal of $K_0^{\operatorname{gr}}(A)$. We show that $H=\psi(I)$ is hereditary and saturated. Let $v \in H$ and let $w\in E^0$ be a vertex adjacent to $v$, i.e., there is an $e\in E^1$ such that $s(e)=v$ and $r(e)=w$. Since $v\in H$, $[vA(i)]\in I$ for some, and therefore all, $i \in \mathbb Z$. Since $I$ is closed under the action of $\mathbb Z[x,x^{-1}]$, by (\ref{hterw}), for $i\geq 0$, \begin{equation}\label{hgfd} x[vA(i)]=\sum_{\{\alpha \in E^1 \mid s(\alpha)=v\}}[r(\alpha)A(i)]=[wA(i)]+\sum_{\{\alpha \in E^1\backslash \{e\} \mid s(\alpha)=v\}}[r(\alpha)A(i)] \in I. \end{equation} So (\ref{hgfd}) shows that $[wA(i)] \leq x[vA(i)] \in I$. Since $I$ is an ordered ideal, it follows that $[wA(i)]\in I$, so $w \in H$. Now an easy induction shows that if $v\in H$ is connected to $w$ by a path (of length $\geq 1$), then $w \in H$. We next show that $H$ is saturated. Let $v\in E^0$ be such that $r(e) \in H$ for every $e\in E^1$ emitted from $v$. Then by (\ref{hterw}), $x[vA(i)]=\sum_{\{\alpha \in E^1 \mid s(\alpha)=v\}}[r(\alpha)A(i)]$ for any $i\in \mathbb Z^+$. But $[r(\alpha)A(i)] \in I$ for some, and therefore all, $i \in \mathbb Z$.
So $x[vA(i)] \in I$ and thus $[vA(i)] \in I$. Therefore $v \in H$. Suppose $H$ is a saturated hereditary subset of $E^0$ and $I=\phi(H)$ is the graded ordered ideal generated by $\big \{[vA(i)]\mid v\in H, i\in \mathbb Z\big\}$. We show that $v\in H$ if and only if $[vA(i)]\in I$ for some $i \in \mathbb Z$. It is obvious that if $v\in H$ then $[vA(i)]\in I$ for any $i \in \mathbb Z$. For the converse, let $[vA(i)] \in I$ for some $i\in \mathbb Z$. Then there are $[\gamma]=\big [\sum_{t=1}^l h_tA(i_t) \big ]$, where $h_t \in H$, and $[\delta]=\big [\sum_{s=1}^{l'} u_sA(i_s)\big ]$, where $u_s\in E^0$, such that \begin{equation}\label{khapp} [\gamma]=[v]+[\delta]. \end{equation} There is a natural isomorphism $\mathcal V^{\operatorname{gr}}(A) \rightarrow K^{\operatorname{gr}}_0(A)^+$, $[uA(i)] \mapsto [uA(i)]$. Indeed, if $[P]=[Q]$ in $K^{\operatorname{gr}}_0(A)^+$, then $[P_0]=[Q_0]$ in $K_0(A_0)$. But $A_0$ is an ultramatricial algebra, so it is unit-regular, and by Proposition~15.2 in~\cite{goodearlbook}, $P_0\cong Q_0$ as $A_0$-modules. Thus $P\cong_{\operatorname{gr}} P_0 \otimes_{A_0} A \cong_{\operatorname{gr}} Q_0 \otimes_{A_0} A \cong_{\operatorname{gr}} Q$. (This argument shows that $\mathcal V^{\operatorname{gr}}(A)$ is cancellative.) So the above map is well-defined. Consider the epimorphisms \begin{align} K^{\operatorname{gr}}_0(A)^+ \stackrel{\cong}{\longrightarrow} \mathcal V^{\operatorname{gr}}(A) & \longrightarrow \mathcal V(A) \stackrel{\cong}{\longrightarrow} M_E,\\ [vA(i)] \longmapsto [vA(i)] & \longmapsto [vA] \longmapsto [v] \notag \end{align} Equation~(\ref{khapp}) under these epimorphisms becomes $[\gamma]=[v]+[\delta]$ in $M_E$, where $[\gamma]= \sum_{t=1}^l [ h_t ]$ and $[\delta]= \sum_{s=1}^{l'} [ u_s ]$. The rest of the argument imitates the last part of the proof of~\cite[Proposition~5.2]{amp}. We include it here for completeness.
By~\cite[Lemma~4.3]{amp}, there is $\beta \in F(E)$ such that $\gamma \rightarrow \beta$ and $v+\delta \rightarrow \beta$. Since $H$ is hereditary and $\operatorname{supp}(\gamma)\subseteq H$, we get $\operatorname{supp}(\beta)\subseteq H$. By~\cite[Lemma~4.2]{amp}, we have $\beta=\beta_1+\beta_2$, where $v\rightarrow \beta_1$ and $\delta\rightarrow \beta_2$. Observe that $\operatorname{supp}(\beta_1) \subseteq \operatorname{supp}(\beta) \subseteq H$. Using that $H$ is saturated, it is easy to check that if $\alpha \rightarrow \alpha'$ and $\operatorname{supp}(\alpha') \subseteq H$, then $\operatorname{supp}(\alpha) \subseteq H$. Using this and induction, we obtain that $v \in H$. \end{proof} Recall the construction of the monoid $M_E$ assigned to a graph $E$ (see the proof of Theorem~\ref{wke} and~(\ref{phgqcu})). \begin{corollary}\label{hgfwks6} Let $E$ be a finite graph with no sinks. Then there is a one-to-one order-preserving correspondence between the following sets: \begin{enumerate}[\upshape(1)] \item hereditary and saturated subsets of $E^0$, \item graded ordered ideals of $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}_K(E))$, \item graded two-sided ideals of $\operatorname{\mathcal L}_K(E)$, \item ordered ideals of $M_E$. \end{enumerate} \end{corollary} \begin{proof} This follows from Theorem~\ref{ictp1} and Theorem~5.3 in~\cite{amp}. \end{proof} \begin{example}\label{smallgraphs} In~\cite{aalp}, using a `change the graph' theorem~\cite[Theorem~2.3]{aalp}, it was shown that the following graphs give rise to isomorphic (purely infinite simple) Leavitt path algebras.
\begin{equation*} \xymatrix{ E_1 : & \bullet \ar@(lu,ld)\ar@/^0.9pc/[r] & \bullet \ar@/^0.9pc/[l] && E_2: & \bullet \ar@(lu,ld)\ar@/^0.9pc/[r] & \bullet \ar@(ru,rd)\ar@/^0.9pc/[l] && E_3:& \bullet \ar@(u,l)\ar@(d,l) &\bullet \ar[l] } \end{equation*} As noted in~\cite{aalp}, the $K_0$-groups of the Leavitt path algebras of these graphs are all zero (see Theorem~\ref{wke}). However, this change-the-graph theorem does not respect the grading. We calculate $K^{\operatorname{gr}}_0$ for these graphs and show that the Leavitt path algebras of these graphs are not graded isomorphic. Among these graphs, $E_3$ is the only one which is in the category of polycephaly graphs. {\bf Graph} $\mathbf {E_1}$: The adjacency matrix of this graph is $N=\left(\begin{matrix} 1 & 1\\ 1 & 0 \end{matrix}\right)$. By Theorem~\ref{ultrad}, \begin{equation*} K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E_1)) \cong \varinjlim \mathbb Z\oplus \mathbb Z \end{equation*} of the inductive system $ \mathbb Z\oplus \mathbb Z \stackrel{N^t}{\longrightarrow} \mathbb Z\oplus \mathbb Z \stackrel{N^t}\longrightarrow \mathbb Z\oplus \mathbb Z \stackrel{N^t}\longrightarrow \cdots$. Since $\det(N)\not = 0$, one can easily see that \[\varinjlim \mathbb Z\oplus \mathbb Z \cong \mathbb Z\big [\frac{1}{\det(N)}\big]\oplus \mathbb Z\big [\frac{1}{\det(N)}\big].\] Thus \[ K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E_1)) \cong \mathbb Z\oplus \mathbb Z. \] In fact this gives rise to the so-called Fibonacci algebra (see \cite[Example IV.3.6]{davidson}). {\bf Graph} $\mathbf {E_2}$: The adjacency matrix of this graph is $N=\left(\begin{matrix} 1 & 1\\ 1 & 1 \end{matrix}\right)$.
By Theorem~\ref{ultrad}, \begin{equation*} K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E_2)) \cong \varinjlim \mathbb Z\oplus \mathbb Z \cong \mathbb Z[\frac{1}{2}]. \end{equation*} {\bf Graph} $\mathbf {E_3}$: By Theorem~\ref{re282}, $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E_3)) \cong \mathbb Z[\frac{1}{2}]$. This shows that $\operatorname{\mathcal L}(E_1)$ is not graded isomorphic to either $\operatorname{\mathcal L}(E_2)$ or $\operatorname{\mathcal L}(E_3)$. Finally we show that $\operatorname{\mathcal L}(E_2)\not \cong_{\operatorname{gr}} \operatorname{\mathcal L}(E_3).$ First note that the graph $E_2$ is the out-split of the graph \[\xymatrix{ E_4:& \bullet \ar@(u,l)\ar@(d,l)}\] \noindent (see~\cite[Definition~2.6]{aalp}). Therefore by~\cite[Theorem~2.8]{aalp}, $\operatorname{\mathcal L}(E_2) \cong_{\operatorname{gr}}\operatorname{\mathcal L}(E_4)$. But by Theorem~\ref{re282}, \[\big ( K^{\operatorname{gr}}_0(\operatorname{\mathcal L}(E_4)),[\operatorname{\mathcal L}(E_4)]\big )=\big (\mathbb Z[\frac{1}{2}],1\big ),\] whereas \[\big ( K^{\operatorname{gr}}_0(\operatorname{\mathcal L}(E_3)),[\operatorname{\mathcal L}(E_3)]\big )=\big(\mathbb Z[\frac{1}{2}],\frac{3}{2}\big).\] Now Theorem~\ref{mani543} shows that $\operatorname{\mathcal L}(E_4) \not \cong_{\operatorname{gr}} \operatorname{\mathcal L}(E_3)$. Therefore $\operatorname{\mathcal L}(E_2)\not \cong_{\operatorname{gr}} \operatorname{\mathcal L}(E_3).$ \end{example} \begin{example}\label{smallgraphs2} In this example we look at three graphs whose Leavitt path algebras have the same graded $K_0$-group, with the identity in the same position, but are mutually non-isomorphic. We will then show that there is no $\mathbb Z[x,x^{-1}]$-isomorphism between their graded $K_0$-groups. This shows that in our conjectures, not only do the isomorphisms need to be order-preserving, they should also respect the $\mathbb Z[x,x^{-1}]$-module structures.
Consider the following three graphs. \begin{equation*} \xymatrix{ E_1 : & \bullet \ar@(lu,ld)\ar@/^0.9pc/[r] & \bullet \ar@/^0.9pc/[l] && E_2: & \bullet \ar@(lu,ld)\ar[r] & \bullet \ar@(ru,rd) && E_3: & \bullet \ar@/^0.9pc/[r] & \bullet \ar@/^0.9pc/[l] } \end{equation*} The adjacency matrices of these graphs are $N_{E_1}=\left(\begin{matrix} 1 & 1\\ 1 & 0\end{matrix}\right)$, $N_{E_2}=\left(\begin{matrix} 1 & 1\\ 0 & 1\end{matrix}\right)$ and $N_{E_3}=\left(\begin{matrix} 0 & 1\\ 1 & 0\end{matrix}\right)$. We determine the graded $K_0$-groups and the position of the identity in each of these groups. First observe that the ring $\operatorname{\mathcal L}(E_i)_0$, $1\leq i \leq 3$, is the direct limit of the following direct systems, respectively (see~(\ref{volleyb}) in the proof of Theorem~\ref{ultrad}). \begin{align} \operatorname{\mathcal L}(E_1)_0: & \qquad K \oplus K \stackrel{N_{E_1}^t}{\longrightarrow} \operatorname{\mathbb M}_2(K) \oplus K \stackrel{N_{E_1}^t}\longrightarrow \operatorname{\mathbb M}_3(K)\oplus \operatorname{\mathbb M}_2(K) \stackrel{N_{E_1}^t}\longrightarrow \cdots \notag\\ \operatorname{\mathcal L}(E_2)_0: & \qquad K\oplus K \stackrel{N_{E_2}^t}{\longrightarrow} K\oplus \operatorname{\mathbb M}_2(K) \stackrel{N_{E_2}^t}\longrightarrow K\oplus \operatorname{\mathbb M}_3(K) \stackrel{N_{E_2}^t}\longrightarrow \cdots \label{hgdtrew} \\ \operatorname{\mathcal L}(E_3)_0: & \qquad K\oplus K \stackrel{N_{E_3}^t}{\longrightarrow} K\oplus K \stackrel{N_{E_3}^t}\longrightarrow K\oplus K \stackrel{N_{E_3}^t}\longrightarrow \cdots \notag \end{align} Note that these are ultramatricial algebras with the following different Bratteli diagrams ({\it the Bratteli diagrams associated to Leavitt
path algebras}). \begin{equation*} \xymatrix@=15pt{ E_1: & \bullet \ar@{-}[r] \ar@{-}[dr] & \bullet \ar@{-}[r] \ar@{-}[dr] & \bullet \ar@{-}[r] \ar@{-}[dr] &&& E_2: & \bullet \ar@{-}[r] \ar@{-}[dr] & \bullet \ar@{-}[r] \ar@{-}[dr] & \bullet \ar@{-}[r] \ar@{-}[dr] &&& E_3: & \bullet \ar@{-}[dr] & \bullet \ar@{-}[dr] & \bullet \ar@{-}[dr] &\\ &\bullet \ar@{-}[ur] & \bullet \ar@{-}[ur] & \bullet \ar@{-}[ur]&&& &\bullet \ar@{-}[r] & \bullet \ar@{-}[r] & \bullet \ar@{-}[r]&&& & \bullet \ar@{-}[ur] & \bullet \ar@{-}[ur] & \bullet \ar@{-}[ur]& } \end{equation*} Since the determinants of the adjacency matrices are $\pm 1$, by Theorem~\ref{ultrad} (and~(\ref{veronaair})), for $1\leq i \leq 3$, \begin{multline} \big(K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E_i)),[\operatorname{\mathcal L}(E_i)]\big)\cong \big(K_0(\operatorname{\mathcal L}(E_i)_0),[\operatorname{\mathcal L}(E_i)_0]\big)\cong \big(K_0(\varinjlim_n L^i_{0,n}),[\varinjlim_n L^i_{0,n}]\big)\cong \\ \big (\varinjlim_n K_0( L^i_{0,n}), \varinjlim_n [L^i_{0,n}]\big )\cong \big (\mathbb Z \oplus \mathbb Z, (1,1) \big ), \end{multline} where the $L^i_{0,n}$ are as they appear in the direct systems~(\ref{hgdtrew}). Next we show that there is no $\mathbb Z[x,x^{-1}]$-isomorphism between the graded $K_0$-groups. In $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E_1)) \cong \mathbb Z \oplus \mathbb Z$, one can observe that $[u\operatorname{\mathcal L}(E_1)]=\varinjlim_n [u L^1_{0,n}]= (1,0)$ and similarly $[v\operatorname{\mathcal L}(E_1)]=(0,1)$.
Furthermore, by~(\ref{hterw}), the action of $\mathbb Z[x,x^{-1}]$ on these elements is \begin{align} x[u\operatorname{\mathcal L}(E_1)] & =[u\operatorname{\mathcal L}(E_1)(1)]=\sum_{\{\alpha \in E_1^1 \mid s(\alpha)=u\}}[r(\alpha)\operatorname{\mathcal L}(E_1)]=[u\operatorname{\mathcal L}(E_1)]+[v\operatorname{\mathcal L}(E_1)], \notag \\ x[v\operatorname{\mathcal L}(E_1)] &= [v\operatorname{\mathcal L}(E_1)(1)]=\sum_{\{\alpha \in E_1^1 \mid s(\alpha)=v\}}[r(\alpha)\operatorname{\mathcal L}(E_1)]=[u\operatorname{\mathcal L}(E_1)]. \notag \end{align} In particular, this shows that $x(1,0)=(1,1)$ and $x(0,1)=(1,0)$. The above calculation shows that \begin{equation}\label{hpolor} x (m,n)= N_{E_1}^t (m,n), \end{equation} where $(m,n) \in \mathbb Z\oplus \mathbb Z$. One can carry out similar calculations, observing that the action of $x$ on $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E_2))$ and $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E_3))$ is as in~(\ref{hpolor}), with $N^t_{E_2}$ and $N^t_{E_3}$ in place of $N^t_{E_1}$, respectively. Now if there is an order-preserving isomorphism $\phi: K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E_i)) \rightarrow K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E_j))$, $1\leq i,j \leq 3$, then $\phi$ has to be a permutation matrix (either $\left(\begin{matrix} 1 & 0 \\ 0 & 1\end{matrix}\right)$ or $\left(\begin{matrix} 0 & 1 \\ 1 & 0\end{matrix}\right)$). Furthermore, if $\phi$ preserves the $\mathbb Z[x,x^{-1}]$-module structure, then $\phi(x a)=x\phi (a)$ for any $a \in \mathbb Z\oplus \mathbb Z$. Translated into matrix form, this means \begin{equation}\label{birdpeace} \phi N_{E_i}^t = N_{E_j}^t \phi. \end{equation} However, one can quickly check that if $i\not = j$, then~(\ref{birdpeace}) cannot hold, i.e., there is no $\mathbb Z[x,x^{-1}]$-module isomorphism between $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E_i))$ and $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E_j))$, $1\leq i \not = j \leq 3$.
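Verifying that~(\ref{birdpeace}) fails for $i\neq j$ is a finite check over the two $2\times 2$ permutation matrices. A throwaway script (the helper names `matmul` and `intertwines` are ours) confirms it:

```python
# 2x2 integer matrices represented as nested lists.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Transposed adjacency matrices N_{E_1}^t, N_{E_2}^t, N_{E_3}^t from the text.
Nt = {1: [[1, 1], [1, 0]], 2: [[1, 0], [1, 1]], 3: [[0, 1], [1, 0]]}
perms = [[[1, 0], [0, 1]], [[0, 1], [1, 0]]]  # the two 2x2 permutation matrices

def intertwines(i, j):
    # Does some permutation matrix phi satisfy phi N_{E_i}^t == N_{E_j}^t phi?
    return any(matmul(p, Nt[i]) == matmul(Nt[j], p) for p in perms)

assert all(intertwines(i, i) for i in Nt)  # the identity always works
assert not any(intertwines(i, j) for i in Nt for j in Nt if i != j)
```

This is exactly the "quick check" invoked above: no permutation matrix intertwines two distinct $N^t_{E_i}$, so no order-preserving $\mathbb Z[x,x^{-1}]$-module isomorphism exists.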
By Corollary~\ref{hgfwks6}, $\operatorname{\mathcal L}_K(E_1)$ is a graded simple ring and $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E_1))$ has no nontrivial graded ordered ideals. Notice that although $\mathbb Z\oplus 0$ and $0\oplus \mathbb Z$ are hereditary ordered ideals of $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E_1))$, they are not closed under the action of $\mathbb Z[x,x^{-1}]$. On the other hand, one can observe that $0\oplus \mathbb Z$ is a hereditary ordered ideal of $K_0^{\operatorname{gr}}(\operatorname{\mathcal L}(E_2))$ which is also closed under the action of $\mathbb Z[x,x^{-1}]$. Indeed, one can see that the set containing the right vertex of $E_2$ is a hereditary and saturated subset of $E_2^0$. Using similar arguments, one can show that, for two graphs $E$ and $F$, each with two vertices, such that $\det(N_E)=\det(N_F)=1$, we have $\operatorname{\mathcal L}_K(E) \cong_{\operatorname{gr}} \operatorname{\mathcal L}_K(F)$ if and only if $E=F$ (after re-labeling the vertices if necessary). \end{example} \end{document}
\begin{document} \title{A Comparison of Automatic Differentiation and Continuous Sensitivity Analysis for Derivatives of Differential Equation Solutions} \makeatletter \newcommand{\linebreakand}{ \end{@IEEEauthorhalign} \mbox{}\par \mbox{} \begin{@IEEEauthorhalign} } \makeatother \author{\IEEEauthorblockN{Yingbo Ma} \IEEEauthorblockA{Julia Computing\\ Cambridge, Massachusetts, USA\\ Email: [email protected]} \and \IEEEauthorblockN{Vaibhav Dixit} \IEEEauthorblockA{Julia Computing\\ Pumas-AI\\ Bangalore, India\\ Email: [email protected]} \and \IEEEauthorblockN{Michael J Innes} \IEEEauthorblockA{Edinburgh, UK\\ Email: [email protected]} \linebreakand \IEEEauthorblockN{Xingjian Guo} \IEEEauthorblockA{New York University\\ New York City, New York, USA\\ Email: [email protected]} \and \IEEEauthorblockN{Chris Rackauckas} \IEEEauthorblockA{Massachusetts Institute of Technology\\ Julia Computing\\ Pumas-AI\\ Cambridge, Massachusetts, USA\\ Email: [email protected]}} \maketitle \begin{abstract} Derivatives of differential equation solutions are commonly used for parameter estimation, fitting neural differential equations, and as model diagnostics. However, with a litany of choices and a Cartesian product of potential methods, it can be difficult for practitioners to understand which method is likely to be the most effective on their particular application. In this manuscript we investigate the performance characteristics of Discrete Local Sensitivity Analysis implemented via Automatic Differentiation (DSAAD) against continuous adjoint sensitivity analysis. Non-stiff and stiff biological and pharmacometric models, including a PDE discretization, are used to quantify the performance of sensitivity analysis methods. Our benchmarks show that on small stiff and non-stiff systems of ODEs (approximately $<100$ parameters+ODEs), forward-mode DSAAD is more efficient than both reverse-mode and continuous forward/adjoint sensitivity analysis.
Continuous adjoint methods are shown to scale more efficiently than discrete adjoint and forward methods beyond this size range. These comparative studies demonstrate a trade-off between memory usage and performance in the continuous adjoint methods that should be considered when choosing the technique, while numerically unstable backsolve techniques from the machine learning literature are demonstrated to be unsuitable for most scientific models. The performance of adjoint methods is shown to be heavily tied to the reverse-mode AD method used for the vector-Jacobian product calculations, with tape-based AD methods shown to be 2 orders of magnitude slower on nonlinear partial differential equations than static AD techniques. In addition, these results demonstrate the out-of-the-box applicability of DSAAD to differential-algebraic equations, delay differential equations, and hybrid differential equation systems where the event timing and effects are dependent on model parameters, showcasing an ease-of-implementation advantage for DSAAD approaches. Together, these benchmarks provide a guide to help practitioners quickly identify the best mixture of continuous sensitivities and automatic differentiation for their applications. \end{abstract} \IEEEpeerreviewmaketitle \section{Introduction\label{sec:Introduction}} In the literature of differential equations, local sensitivity analysis is the practice of calculating derivatives of a differential equation's solution with respect to model parameters. For an ordinary differential equation (ODE) of the form \begin{equation} \dot{u}=f(u,p,t),\label{eq:ode} \end{equation} where $f$ is the derivative function and $p$ are the model parameters, the sensitivity of the state vector $u$ with respect to model parameter $p_{i}$ at time $t$ is defined as $\frac{\partial u(p,t)}{\partial p_{i}}$. These sensitivities have many applications.
For example, they can be directly utilized in fields such as biological modeling to identify parameters of interest for tuning and experimentation \cite{sommer_numerical_nodate}. Recent studies have utilized these sensitivities as part of training neural networks associated with the ODEs \cite{chen_neural_2018,grathwohl_ffjord:_2018}. In addition, these sensitivities are indirectly utilized in many disciplines for parameter estimation of dynamical models. Parameter estimation is the problem of finding parameters $p$ such that a cost function $C(p)$ is minimized (usually some fit against data) \cite{peifer_parameter_2007,hamilton_parameter_2011,zhen_parameter_nodate,zimmer_parameter_2013,sommer_numerical_nodate,friberg_model_2002,steiert_experimental_2012}. Gradient-based optimization methods require the computation of gradients of $C(p)$. By the chain rule, $\frac{dC}{dp}$ requires the calculation of $\frac{du(t_i)}{dp}$, which are the model sensitivities. Given the high computational cost of parameter estimation due to the number of repeated numerical solutions which are required, efficient and accurate computation of model sensitivities is an important part of differential equation solver software. The simplest way to calculate model sensitivities is numerical differentiation, given by the formula \begin{equation} \frac{\partial u(t)}{\partial p_{i}}=\frac{u(p+\Delta p_{i},t)-u(p,t)}{\Delta p_{i}}+\mathcal{O}(\Delta p_{i}),\label{eq:numdiff} \end{equation} where $p+\Delta p_{i}$ means adding $\Delta p_{i}$ to only the $i$th component of $p$. However, this method is not efficient (it requires two numerical ODE solutions for each parameter $i$) and it is prone to numerical error. If $\Delta p_{i}$ is chosen too large, then the error term of the approximation is large. In contrast, if $\Delta p_{i}$ is chosen too small, then calculations may exhibit floating point cancellation which increases the error \cite{burden_numerical_2011}.
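Both failure modes of the finite difference formula above can be seen on a toy problem. In the Python sketch below (illustrative only; the scalar model $u'=pu$ with closed-form solution $u_0 e^{pt}$ stands in for a numerical ODE solve, and the constants are made up), the exact sensitivity is $\partial u/\partial p = t\,u_0 e^{pt}$:

```python
import math

# Forward differences on u(p, t) = u0 * exp(p*t), the solution of
# u' = p*u.  Exact sensitivity: du/dp = t * u0 * exp(p*t).

def u(p, t, u0=1.0):
    return u0 * math.exp(p * t)

p, t = 1.5, 2.0
exact = t * u(p, t)

def fd_error(dp):
    approx = (u(p + dp, t) - u(p, t)) / dp
    return abs(approx - exact)

err_large = fd_error(1e-1)    # O(dp) truncation error dominates
err_good  = fd_error(1e-8)    # near the sweet spot
err_tiny  = fd_error(1e-16)   # p + dp rounds back to p: estimate is 0
```

Here `err_large` and `err_tiny` are both on the order of the derivative itself, while `err_good` is roughly six orders of magnitude smaller, illustrating why step-size selection makes numerical differentiation fragile.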
To alleviate these issues, many differential equation solver software packages implement a form of sensitivity calculation called continuous local sensitivity analysis (CSA) \cite{zhang_fatode:_2014,alan_c._hindmarsh_sundials:_2005}. Forward-mode continuous sensitivity analysis calculates the model sensitivities by extending the ODE system to include the equations: \begin{equation} \frac{d}{dt}\left(\frac{\partial u}{\partial p_{i}}\right)=\frac{\partial f}{\partial u}\frac{\partial u}{\partial p_{i}}+\frac{\partial f}{\partial p_{i}}\label{eq:clsa} \end{equation} where $\frac{\partial f}{\partial u}$ is the Jacobian of the derivative function $f$ with respect to the current state, and $\frac{\partial f}{\partial p_{i}}$ is the derivative of the derivative function with respect to the $i$th parameter. We note that $\frac{\partial f}{\partial u} v$ is equivalent to the directional derivative in the direction of $v$, which can thus be calculated Jacobian-free via: \begin{equation} \frac{\partial f}{\partial u} v \approx \frac{f(u+\epsilon v,p,t)-f(u,p,t)}{\epsilon} \end{equation} or by equivalently pre-seeding forward-mode automatic differentiation with $v$ in the dual space. Since the sensitivity equations for each $i$ are dependent on the current state $u$, these ODEs must be solved simultaneously with the ODE system $u'=f$. By solving this expanded system, one can ensure that the sensitivities are computed to the same error tolerance as the original ODE terms, and only a single numerical ODE solver call is required. However, since the number of ODEs in this system scales proportionally with the number of parameters, giving a cost of $\mathcal{O}(np)$ for $n$ ODEs and $p$ parameters, forward-mode CSA is not practical for a large number of parameters. Instead, for these cases continuous adjoint sensitivity analysis (CASA) is utilized.
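The forward CSA construction above can be sketched end-to-end on a scalar example. In the following Python sketch (illustrative only; the model $u'=-pu$, the fixed-step RK4 integrator, and all constants are made up for this demonstration), the augmented system carries $u$ together with $s = \partial u/\partial p$, whose exact value is $-t e^{-pt}$:

```python
import math

# Augmented system for u' = -p*u, u(0) = 1: the sensitivity s = du/dp
# satisfies s' = (df/du)*s + df/dp = -p*s - u, s(0) = 0.

def f_aug(y, p):
    u, s = y
    return (-p * u, -p * s - u)

def rk4(y, p, dt, steps):
    # Classical fixed-step RK4 on the augmented (u, s) system.
    for _ in range(steps):
        k1 = f_aug(y, p)
        k2 = f_aug(tuple(yi + 0.5 * dt * ki for yi, ki in zip(y, k1)), p)
        k3 = f_aug(tuple(yi + 0.5 * dt * ki for yi, ki in zip(y, k2)), p)
        k4 = f_aug(tuple(yi + dt * ki for yi, ki in zip(y, k3)), p)
        y = tuple(yi + dt / 6 * (a + 2 * b + 2 * c + d)
                  for yi, a, b, c, d in zip(y, k1, k2, k3, k4))
    return y

p, T, n = 0.7, 2.0, 200
u_T, s_T = rk4((1.0, 0.0), p, T / n, n)
exact_u = math.exp(-p * T)        # u(T) = e^{-pT}
exact_s = -T * math.exp(-p * T)   # du/dp = -T e^{-pT}
```

A single integrator call produces both the state and its sensitivity to the same accuracy, which is precisely the appeal of solving the expanded system, but the state dimension grows with each additional parameter.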
This methodology is defined to directly compute the gradient of a cost function of the solution with scaling $\mathcal{O}(n+p)$. Given a cost function $c$ on the ODE solution which is evaluated at discrete time points (such as an $L^2$ loss) \begin{equation} C(u(p), p) = \sum_i c(u(p,t_i), p) \end{equation} this is done by solving a backwards ODE known as the adjoint problem \begin{equation} \frac{d\lambda'}{dt} = - \lambda' \frac{\partial f(u(t),p,t)}{\partial u} \end{equation} where at every time point $t_i$, this backwards ODE is perturbed by $\frac{\partial c(u(p,t_i), p)}{\partial u}$. Note $u(t)$ is generated by a forward solution. The gradient of the cost function is then given by the integral: \begin{align} \frac{dC}{dp} = &\lambda'(t_0) \frac{\partial f(u(t_0),p,t_0)}{\partial u} + \\ \nonumber &\sum_i \int_{t_i}^{t_{i+1}} \lambda' \frac{\partial f(u(t),p,t)}{\partial p} + \frac{\partial c(u(p,t_i), p)}{\partial p} dt \end{align} This integral may be solved via quadrature on a continuous solution of the adjoint equation or by appending the quadrature variables to perform the integration as part of the backsolve. The former can better utilize quadrature points to reduce the number of evaluation points required to reach a tolerance, but requires enough memory to store a continuous extension of the backpass. We note that the backsolve technique is numerically unstable, which will be noted in benchmarks where it produces a divergent calculation of the gradient. We note that, similarly to forward-mode, the crucial term $\lambda'(t) \frac{\partial f(u(t),p,t)}{\partial p}$ can be computed without building the full Jacobian. However, in this case the computation $v'J$ cannot be performed matrix-free via numerical or forward-mode differentiation; instead it is the primitive operation of reverse-mode automatic differentiation.
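A worked scalar instance makes the adjoint formulas concrete. For $u' = pu$ with terminal cost $C = u(T)$, the adjoint solves $\lambda' = -\lambda\,\partial f/\partial u$ backwards from $\lambda(T) = \partial C/\partial u(T) = 1$, giving $\lambda(t) = e^{p(T-t)}$, and $dC/dp = \int_0^T \lambda\,(\partial f/\partial p)\,dt$ with $\partial f/\partial p = u$. The Python sketch below (illustrative only; closed-form $u$ and $\lambda$ stand in for the forward and backward solves, and the constants are made up) checks this against the analytic answer $T u_0 e^{pT}$:

```python
import math

# Adjoint gradient for u' = p*u, u(0) = u0, cost C = u(T).
p, T, u0 = 0.3, 1.5, 2.0
u = lambda t: u0 * math.exp(p * t)           # forward solution
lam = lambda t: math.exp(p * (T - t))        # adjoint, lam(T) = 1

# Trapezoidal quadrature of int_0^T lam(t) * u(t) dt.  Note the
# integrand lam*u = u0*e^{pT} is constant for this linear model.
n = 2000
ts = [T * i / n for i in range(n + 1)]
vals = [lam(t) * u(t) for t in ts]
integral = (T / n) * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

exact = T * u0 * math.exp(p * T)             # d/dp [u0*e^{pT}]
```

The quadrature route shown here corresponds to the "quadrature on a continuous solution" option above; the alternative is to append the integrand as extra state during the backwards solve.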
Efficient methods for CASA thus necessarily mix reverse-mode AD into the generated adjoint pass, and so we will test the Cartesian product of the various choices. In contrast to CSA methods, discrete sensitivity analysis calculates model sensitivities by directly differentiating the numerical method's steps \cite{zhang_fatode:_2014}. However, this approach requires specialized implementations of the first-order ODE solvers to propagate said derivatives. Instead, one can achieve the same end by using automatic differentiation (AD) on a solver implemented entirely in a language with pervasive AD, also known as a differentiable programming approach \cite{rackauckas2020generalized}. Section \ref{sec:Discrete-Sensitivity-Analysis} introduces the discrete sensitivity analysis through AD (DSAAD) approach via type-specialization on a generic algorithm. Section \ref{sec:Discrete-and-Continuous} compares the performance of DSAAD against continuous sensitivity analysis and numerical differentiation approaches and shows that DSAAD consistently performs well on the tested models. Section \ref{sec:Generalizes} describes limitations of the continuous sensitivity analysis approach and describes how these cases are automatically handled in the case of DSAAD. Together, this manuscript shows that the ability to utilize AD directly on a numerical integrator can be advantageous relative to existing approaches for the calculation of model sensitivities, while at times it can be disadvantageous in terms of scaling performance. Currently, continuous sensitivity techniques are commonly used throughout software such as SUNDIALS \cite{alan_c._hindmarsh_sundials:_2005}, while some software, like FATODE \cite{zhang_fatode:_2014}, allows for discrete sensitivity analysis. No previous manuscript performs comprehensive benchmarks on the Cartesian product of discrete/continuous forward/adjoint sensitivity analysis with the various automatic differentiation modes (tape vs static).
This study uses DiffEqSensitivity.jl \cite{rackauckas2020universal}, the sensitivity analysis extension to DifferentialEquations.jl and the first comprehensive package which includes all of the mentioned differentiation choices, to do such a full comparison of the space. The results showcase the efficiency gains provided by discrete sensitivity analysis on sufficiently small models, and establish a heuristic range ($\approx$ 30-100 ODEs) at which the scalability of continuous adjoint sensitivity analysis overcomes the low overhead of DSAAD. This study can thus be central to helping all users of ODE solvers choose the method that will be effective on their specific problem. \section{Discrete Sensitivity Analysis via Automatic Differentiation (DSAAD) \label{sec:Discrete-Sensitivity-Analysis}} The core feature of the Julia programming language is multiple dispatch \cite{bezanson_julia:_2017}. It allows a function to compile to different outputs dependent on the types of the inputs, effectively allowing choices of input types to trigger forms of code generation. ForwardDiff.jl provides a Dual number type which performs automatic differentiation on differentiable programs by simultaneously propagating a derivative along with the computed value on atomic (addition, multiplication, etc.) and standard mathematical ($\sin$, $\exp$, etc.) function calls \cite{revels_forward-mode_2016}. By using the chain rule during the propagation, any function which is composed of differentiable calls is also differentiable by the methodology. Such a program is known as a differentiable program. Since this exactly differentiates the atomics, the numerical error associated with this method is similar to the standard evaluation of the function, effectively alleviating the errors seen in numerical differentiation. In addition, this method calculates derivatives simultaneously with the function's evaluation, making it a good candidate for fast discrete sensitivity analysis.
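The dual-number propagation described above can be sketched in a few lines. The following Python class (a deliberately minimal stand-in for ForwardDiff.jl's `Dual` type, not its actual implementation) carries a `(value, derivative)` pair and applies the chain rule at each primitive:

```python
import math

# Minimal forward-mode AD via dual numbers: each Dual carries
# (value, derivative) and every primitive applies the chain rule.

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def sin(x):
    # chain rule for a standard mathematical primitive
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# d/dx [x*sin(x) + 3x] at x = 2 is sin(2) + 2*cos(2) + 3
x = Dual(2.0, 1.0)                 # seed dx/dx = 1
y = x * sin(x) + 3 * x
expected = math.sin(2.0) + 2.0 * math.cos(2.0) + 3.0
```

Because every atomic operation is differentiated exactly, `y.der` matches the analytic derivative to floating-point precision; running a type-generic solver on such a number type is what turns it into discrete sensitivity analysis.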
It can be thought of as the analogue to continuous forward-mode sensitivity analysis on the set of differentiable programs. Similarly, reverse-mode AD from packages like ReverseDiff.jl \cite{revels_reversediff.jl_nodate} and Flux.jl \cite{innes_flux:_2018} uses Tracker numerical types which build a tape of the operations and utilize the reverse of the chain rule to ``backpropagate'' derivatives and directly calculate the gradient of some cost function. This implementation of AD can be thought of as the analogue of CASA on the set of differentiable programs. The DifferentialEquations.jl package provides many integration routines which were developed in native Julia \cite{christopher_rackauckas_differentialequations.jl_2017}. These methods are type-generic, meaning they utilize the numeric and array types that are supplied by the user. Thus these ODE solvers serve as a generic template whose internal operations can be modified by external packages via dispatch. When a generic DifferentialEquations.jl integrator is called with a Dual number type for the initial condition, the combination of these two programs results in a program which performs discrete sensitivity analysis. This combination is what we define as forward-mode DSAAD, and the combination with reverse-mode ReverseDiff.jl AD Tracker types is reverse-mode DSAAD. To test the correctness of the DSAAD and continuous sensitivity methods, we check the outputted sensitivities on four models. For our tests we utilize nonlinear reaction models which are representative of those found in biological and pharmacological applications where these techniques are commonly used \cite{hiroaki_kitano_computational_nodate,danhof_mechanism-based_2008}. The models are: \begin{enumerate} \item The non-stiff Lotka-Volterra equations (LV). \item An $N \times N$ finite difference discretization of the two-dimensional stiff Brusselator reaction-diffusion PDE (BRUSS). \item A stiff pollution model (POLLU).
\item A non-stiff pharmacokinetic/pharmacodynamic system (PK/PD). \end{enumerate} These cover stiff and non-stiff ODEs, large systems and small systems, and include a PDE discretization with a dimension $N$ for testing the scaling of the methodologies. Details of the four models are presented in the Appendix. Figure \ref{fig:ode_sens} shows the output of the first two models' sensitivities as computed by the DSAAD method compared to CSA. The two methods align in their model sensitivity calculations, demonstrating that the application of AD to the generic ODE solver does produce correct output sensitivities. From these tests we note that the differences in the model sensitivities between the two methods had maximum norms of $1.14\times10^{-5}$ and $3.1\times10^{-4}$, which is roughly the chosen tolerance of the numerical integration. These results were again confirmed at a difference of $1\times10^{-12}$ when run with sufficiently low ODE solver tolerances. \begin{figure} \caption{{\bf Model Sensitivities for DSAAD and CSA.}} \label{fig:ode_sens} \end{figure} \section{Benchmark Results\label{sec:Discrete-and-Continuous}} \subsection{Forward-Mode Sensitivity Performance Comparisons\label{subsec:Forward-Mode-Sensitivity}} To test the relative performance of discrete and continuous sensitivity analysis, we utilized packages from the Julia programming language. The method for continuous sensitivity analysis is implemented in the DiffEqSensitivity.jl package by directly extending a user-given ordinary differential equation. This is done by extending the initial condition vector and defining a new derivative function $\tilde{f}$ which performs $f$ on the first $N$ components and adds the sensitivity equations. For performance, construction of the Jacobian can be avoided by utilizing AD for vector-Jacobian and Jacobian-vector products.
By seeding the Dual numbers to have partials $v$ and applying forward applications of $f$, the resulting output is the desired $\frac{\partial f}{\partial u}v$. Similarly, seeding on Tracked reals for reverse-mode autodifferentiation results in $v' \frac{\partial f}{\partial u}$ which is the other desired quantity. As a comparison, full Jacobian implementations were also explored, either via a user-given analytical solution or automatic differentiation. We also include numerical differentiation performed by the FiniteDiff.jl package. The test problems were solved with the various sensitivity analysis methods and the timings are given in Table \ref{tab:Forward-Sensitivity-Analysis-Bench}. In all of these benchmarks DSAAD performs well, being the fastest or nearly the fastest in all cases. While continuous sensitivity analysis does not necessarily require building the Jacobian, inspection of the generated LLVM from the forward mode reveals that the compiler is able to fuse more operations between the Jacobian-vector product and the other parts of the calculation, effectively improving common subexpression elimination (CSE) at the compiler level further than implementations which call functions for the separate parts of the calculation. Thus, given the equivalence of discrete forward sensitivities and forward-mode AD, these results showcase that sufficiently optimized forward-mode AD methods will be preferable in most cases. 
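The distinction between the two seeded products can be made concrete on a tiny map. In the Python sketch below (illustrative only; the map $f$ and the vectors are made up, the directional-difference `jvp` follows the Jacobian-free formula from the introduction, and the `vjp` is checked against a hand-written Jacobian since the example is small enough to do so):

```python
# Jv vs v'J for f(u) = (u1*u2, u1 + u2^2).

def f(u):
    u1, u2 = u
    return (u1 * u2, u1 + u2 ** 2)

def jvp(f, u, v, eps=1e-7):
    # (df/du) v  ~  (f(u + eps*v) - f(u)) / eps  (forward seeding)
    fu = f(u)
    fp = f(tuple(ui + eps * vi for ui, vi in zip(u, v)))
    return tuple((a - b) / eps for a, b in zip(fp, fu))

def jacobian(u):
    u1, u2 = u
    return [[u2, u1],        # row i = gradient of f_i
            [1.0, 2 * u2]]

def vjp(u, v):
    # v' (df/du): the reverse-mode primitive, done explicitly here
    J = jacobian(u)
    return tuple(sum(v[i] * J[i][j] for i in range(2)) for j in range(2))

u, v = (2.0, 3.0), (1.0, -1.0)
# With J = [[3, 2], [1, 6]]:  Jv = (1, -5)  but  v'J = (2, -4).
jv = jvp(f, u, v)
```

The two products are different vectors even for the same $v$: the column-combining $Jv$ is cheap with forward seeding, while the row-combining $v'J$ is exactly what a reverse pass delivers.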
\begin{center} \begin{table} \begin{centering} {\tiny \begin{tabular}{|c|c|c|c|c|} \hline Method/Runtime & LV ($\mu$s) & BRUSS (s) & POLLU (ms) & PKPD (ms)\tabularnewline \hline \hline DSAAD & 174 & 1.94 & 12.2 & 2.66 \tabularnewline \hline CSA User-Jacobian & 429 & 727 & 572 & 17.3 \tabularnewline \hline CSA AD-Jacobian & 998 & 168 & 629 & 13.7 \tabularnewline \hline CSA AD-$Jv$ seeding & 881 & 189 & 508 & 8.46 \tabularnewline \hline Numerical Differentiation & 807 & 1.58 & 24.9 & 17.4 \tabularnewline \hline \end{tabular}} \par\end{centering} \caption{\textbf{Forward Sensitivity Analysis Performance Benchmarks.} The Lotka-Volterra model used by the benchmarks is the same as in Figure \ref{fig:ode_sens}, while the Brusselator benchmarks use a finer $5 \times 5$ grid with the same solution domain, initial values and parameters. The \texttt{Tsit5} integrator is used for the Lotka-Volterra and the PKPD model. The \texttt{Rodas5} integrator is used for the Brusselator and POLLU.} \label{tab:Forward-Sensitivity-Analysis-Bench} \end{table} \par\end{center} \subsection{Adjoint Sensitivity Performance Comparisons\label{subsec:Adjoint-Sensitivity}} For adjoint sensitivity analysis, adjoint DSAAD programs were produced using a combination of the generic DifferentialEquations.jl integrator with the tape-based automatic differentiation implementation of ReverseDiff.jl. DiffEqSensitivity.jl provides an implementation of CASA which saves a continuous solution for the forward pass of the solution and utilizes its interpolant in order to calculate the requisite Jacobian and gradients for the backwards pass. While this method is less memory efficient than checkpointing or re-solving schemes \cite{alan_c._hindmarsh_sundials:_2005}, it only requires a single forward numerical solution and thus was demonstrated as more runtime optimized for sufficiently small models. 
Thus while DifferentialEquations.jl contains a checkpointed adjoint implementation, checkpointing was turned off for the interpolating and backsolve schemes to allow them as much performance as possible. The timing results of these methods on the test problems are given in Table \ref{tab:Adjoint-Sensitivity-Analysis-Bench}. These results show a clear performance advantage for forward-mode DSAAD over the other choices on sufficiently small models. \begin{center} \begin{table} \begin{centering} {\tiny \begin{tabular}{|c|c|c|c|c|} \hline Method/Runtime & LV ($\mu$s) & BRUSS (s) & POLLU (s) & PKPD (ms)\tabularnewline \hline \hline Forward-Mode DSAAD & 279 & 1.80 & 0.010 & 5.81 \tabularnewline \hline Reverse-Mode DSAAD & 5670 & 19.1 & 0.194 & 133 \tabularnewline \hline CASA User-Jacobian (interpolating) & 549 & 25.1 & 9.18 & 6.48 \tabularnewline \hline CASA AD-Jacobian (interpolating) & 636 & 11.8 & 16.1 & 5.23 \tabularnewline \hline CASA AD-$v'J$ seeding (interpolating) & 517 & 1.59 & 2.12 & 2.13 \tabularnewline \hline \hline CASA User-Jacobian (quadrature) & 693 & 0.964 & 1.82 & 4.88 \tabularnewline \hline CASA AD-Jacobian (quadrature) & 825 & 2.17 & 6.19 & 4.97 \tabularnewline \hline CASA AD-$v'J$ seeding (quadrature) & 707 & 0.461 & 1.30 & 2.94 \tabularnewline \hline CASA User-Jacobian (backsolve) & 813 & N/A & N/A & N/A \tabularnewline \hline CASA AD-Jacobian (backsolve) & 941 & N/A & N/A & N/A \tabularnewline \hline CASA AD-$v'J$ seeding (backsolve) & 760 & N/A & N/A & N/A \tabularnewline \hline Numerical Differentiation & 811 & 2.48 & 0.044 & 20.8 \tabularnewline \hline \end{tabular}} \par\end{centering} \caption{\textbf{Adjoint Sensitivity Analysis Performance Benchmarks.} The Lotka-Volterra and Brusselator models used by the benchmarks are the same as in Figure \ref{fig:ode_sens}. 
The integrators used for the benchmarks are: \texttt{Rodas5} for Brusselator and POLLU, and \texttt{Tsit5} for Lotka-Volterra and PKPD.} \label{tab:Adjoint-Sensitivity-Analysis-Bench} \end{table} \par\end{center} \subsection{Adjoint Sensitivity Scaling} \begin{figure} \caption{\textbf{Brusselator Scaling Benchmarks.}} \label{fig:bruss_scaling} \end{figure} The previous tests all showed that on small models forward-mode via AD was advantageous over reverse-mode and adjoint methods. However, the advantage of adjoint methods comes in their ability to scale, with additive scaling with respect to the number of ODEs and parameters instead of multiplicative. Thus we decided to test the scaling of the methods on the Brusselator partial differential equation. For an $N\times N$ discretization in space, this problem has $2N^2$ ODE terms and $4N^2$ parameters. The timing results for the adjoint methods and forward-mode DSAAD are shown in Figure \ref{fig:bruss_scaling}. Figure \ref{fig:bruss_scaling}A demonstrates that as $N$ increases, there is a point at which CASA becomes more efficient than DSAAD. This cutoff point seems to be around 50 ODEs+parameters when $v'J$ seeding is used, around 100 ODEs+parameters without $v'J$ seeding with quadrature adjoints, and around 150 ODEs+parameters without $v'J$ seeding with interpolating adjoints (noted in the discussion section as the method equivalent to SUNDIALS \cite{alan_c._hindmarsh_sundials:_2005}). This identifies a ballpark range of around 100 combined ODEs+parameters at which practitioners should consider changing from forward sensitivity approaches to adjoint methods. The most efficient CASA method is the quadrature-based method. This aligns with the theoretical understanding of the method. Stiff ODE solvers scale like $\mathcal{O}(n^3)$ where $n$ is the number of ODEs. For the adjoint calculation, the quadrature based method has $n=2N^2$, the number of ODEs of the forward pass.
For the interpolating adjoint method, $n=6N^2$, or the number of parameters plus the number of ODEs, which is then amplified by the cubic scaling of the linear solves. We note in passing that the backsolve technique requires $n=8N^2$, or double the number of ODEs plus the number of parameters, but is unstable on stiff equations and thus is not effective on this type of problem (indeed, on these benchmarks it was unstable and thus omitted). We note that the inability of reverse-mode DSAAD to scale comes from the tape generation performance on nonlinear models. The Tracker types in the differential equation solvers have to utilize scalar tracking instead of tracking array primitives, greatly increasing the size of the computational graph and the memory burden. When array primitives are used, mutation is not allowed, which decreases the efficiency of the solver by more than the gained efficiency. This is a general phenomenon of tape-based automatic differentiation methods and has been similarly noted in PyTorch \cite{NEURIPS2019_9015} and TensorFlow Eager \cite{agrawal2019tensorflow}, demonstrating that the effective handling of this approach would require alternative AD architectures. These issues may be addressed in next-generation reverse-mode source-to-source AD packages like Zygote \cite{innes_dont_2018} or Enzyme \cite{enzymeNeurips}, which do not rely on tape generation. Figure \ref{fig:bruss_scaling}B further highlights the importance of the reverse-mode architecture by comparing between different reverse-mode choices used for the $v'J$ calculation internal to the CASA. By default, ReverseDiff.jl is a tape-based AD which has trouble scaling to larger problems. Compiled ReverseDiff.jl does a single trace through the program to generate a static description of the code which it optimizes and compiles (this is thus not compatible with code that is non-static, such as having branches which change due to $u$ or $t$).
Enzyme.jl compiles a static SSA-form LLVM representation of the user's $f$ function and uses compiler passes to generate the $v'J$ calculation. By acting at the lowest level, it can generate code after other code optimizations have been applied, which leads to noticeable performance gains of around an order of magnitude on this nonlinear partial differential equation application. \subsection{Parameter Estimation Performance Comparisons\label{subsec:Application:-Parameter-Estimatio}} To test the effect of sensitivity analysis timings in applications, we benchmarked parameter estimation performed with an $L^2$ loss function on generated data for each of the models. The data was generated by solving the ODE with the parameters defined in the Appendix and sampling at evenly spaced time points (100 points for Lotka-Volterra, 20 for Brusselator, 10 for POLLU, and 41 for PK/PD). Each of the parameter estimations was done using the BFGS local optimizer from Optim.jl \cite{mogensen_optim:_2018} and was run until the optimizer converged to the optimum to a tolerance of $10^{-6}$ (results were checked for proper convergence). Each method started from the same initial condition, which was a perturbation of the true parameters. For Lotka-Volterra, the initial parameter values were $\frac{4}{5}$ the true values, for PK/PD $0.95p_0+0.001$, and for the other models $\frac{9}{10}$ the true values. The timings are shown in Table \ref{tab:Parameter-Estimation-Bench}. While not as pronounced as in the pure sensitivity calculations, these benchmarks show that utilizing a more efficient sensitivity analysis calculation does give a performance advantage in the application. Additional performance disadvantages for numerical differentiation could be attributed to the increased numerical error in the gradient, which notably caused more iterations for the optimizer.
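The estimation loop itself reduces to chaining the loss through the sensitivities. The Python sketch below is a toy version of that loop (illustrative only: the closed-form model $u(p,t)=e^{-pt}$ stands in for an ODE solve, its analytic sensitivity $\partial u/\partial p = -t e^{-pt}$ stands in for a sensitivity method, plain fixed-step gradient descent stands in for BFGS, and `p_true`, the time grid, and the step size are all made up):

```python
import math

# Toy parameter estimation: fit p in u(p, t) = exp(-p*t) to data
# generated at p_true, using an L2 loss whose gradient is assembled
# from the sensitivities via the chain rule.

ts = [0.1 * i for i in range(1, 21)]
p_true = 1.3
data = [math.exp(-p_true * t) for t in ts]

def loss_and_grad(p):
    L, g = 0.0, 0.0
    for t, d in zip(ts, data):
        u = math.exp(-p * t)
        s = -t * u                    # sensitivity du/dp
        L += (u - d) ** 2
        g += 2 * (u - d) * s          # chain rule: dL/dp
    return L, g

p = 0.8 * p_true                      # perturbed initial guess
for _ in range(500):
    L, g = loss_and_grad(p)
    p -= 0.25 * g                     # fixed-step gradient descent
```

Every optimizer iteration requires one forward solve plus one sensitivity computation, which is why the per-gradient timings above translate directly into end-to-end estimation time.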
\begin{center} \begin{table} \begin{centering} {\tiny \begin{tabular}{|c|c|c|c|c|} \hline Method/Runtime & LV ($s$) & BRUSS ($s$) & POLLU ($s$) & PKPD ($s$)\tabularnewline \hline \hline Forward-Mode DSAAD & 3.51 & 0.760 & 0.025 & 9.54 \tabularnewline \hline CSA User-Jacobian & 1.47 & 52.2 & 0.931 & 12.3 \tabularnewline \hline CSA AD-Jacobian & 1.82 & 51.9 & 0.932 & 12.5 \tabularnewline \hline CSA AD-$Jv$ seeding & 1.84 & 51.9 & 0.928 & 12.5 \tabularnewline \hline Reverse-Mode DSAAD & 5.87 & 14.9 & 0.560 & 238 \tabularnewline \hline CASA User-Jacobian (interpolating) & 0.030 & 12.4 & 0.238 & 42.5 \tabularnewline \hline CASA AD-Jacobian (interpolating) & 0.031 & 5.58 & 0.414 & 25.0 \tabularnewline \hline CASA AD-$v'J$ seeding (interpolating) & 0.028 & 2.26 & 0.060 & 13.4 \tabularnewline \hline CASA User-Jacobian (quadrature) & 0.122 & 1.37 & 0.053 & 21.2 \tabularnewline \hline CASA AD-Jacobian (quadrature) & 0.078 & 1.67 & 0.144 & 6.35 \tabularnewline \hline CASA AD-$v'J$ seeding (quadrature) & 0.065 & 1.20 & 0.038 & 18.3 \tabularnewline \hline CASA User-Jacobian (backsolve) & 2.95 & N/A & N/A & N/A \tabularnewline \hline CASA AD-Jacobian (backsolve) & 1.35 & N/A & N/A & N/A \tabularnewline \hline CASA AD-$v'J$ seeding (backsolve) & 1.30 & N/A & N/A & N/A \tabularnewline \hline Numerical Differentiation & 0.105 & 8.17 & 0.110 & 168 \tabularnewline \hline \end{tabular}} \par\end{centering} \caption{\textbf{Parameter Estimation Benchmarks.} The Lotka-Volterra model used by the benchmarks is the same as in Figure \ref{fig:ode_sens}, while the Brusselator benchmarks use a finer $5 \times 5$ grid with the same solution domain, initial values and parameters. 
The \texttt{Tsit5} integrator is used for Lotka-Volterra and PKPD, and the \texttt{Rodas5} integrator is used for Brusselator and pollution.} \label{tab:Parameter-Estimation-Bench} \end{table} \par\end{center} \section{DSAAD Generalizes To Hybrid, Delay, and Differential-Algebraic Equations\label{sec:Generalizes}} We compared the flexibility of the sensitivity analysis approaches in order to understand their relative merits for use in a general-purpose differential equation package. First we analyze the ability of the sensitivity analysis approaches to work with event handling. Event handling is a feature of differential equation packages which allows users to provide a rootfinding function $g(u,p,t)$; a discontinuity (of the user's choice) is applied at every time point where $g(u,p,t)=0$ (such equations are also known as hybrid differential equations). Automatic differentiation approaches to discrete sensitivity analysis directly generalize to this case by propagating Dual numbers through the event handling code. On the other hand, continuous sensitivity analysis approaches can require special handling in order to achieve correctness. There are two ways in which sensitivities will fail to propagate: \begin{enumerate} \item Standard continuous sensitivity analysis does not take into account the sensitivity of the time point of the discontinuity to the parameters. \item Standard continuous sensitivity analysis does not take into account the possibility that the size of the discontinuity is parameter-dependent. \end{enumerate} These points can be illustrated using a single-state linear control problem where $x$ is the signal responsible for the control of $y$. This results in a first-order linear hybrid ordinary differential equation system: \begin{align} \frac{dx}{dt} & = -a,\nonumber\\ \frac{dy}{dt} & = b,\label{eq:event} \end{align} where $a, b > 0$ are parameters, and the rootfinding function is $g(x,y,p,t)=x$.
At zero-crossings the parameter $b$ is set to $0$, effectively turning off the second equation. For the initial condition $(x(0), y(0)) = (1,0)$, the system's analytical solution is \begin{align} x(t) &= 1 - at,\nonumber\\ y(t) &= \begin{cases} bt & (t < t^*)\\ bt^* = \frac{b}{a} & (t \ge t^*), \end{cases} \label{eq:event-analytical} \end{align} where $t^* = 1/a$ is the crossing time, which depends on the parameters. Furthermore, the amount of jump for the discontinuity of $dy/dt$ is also parameter dependent. \begin{table} \begin{centering} {\tiny \begin{tabular}{|c|c|c|c|c|} \hline Method & $\partial x(1)/\partial a$ & $\partial y(1)/\partial a$ & $\partial x(1)/\partial b$ & $\partial y(1)/\partial b$\tabularnewline \hline \hline Analytical Solution & -1.0 & -0.25 & 0 & 0.5 \tabularnewline \hline DSAAD & -1.0 & -0.25 & 5.50e-11 & 0.5 \tabularnewline \hline CSA & -1.0 & 0.0 & 0.0 & 1.0 \tabularnewline \hline \end{tabular}} \par\end{centering} \caption{{\bf Sensitivity Analysis with Events}. Shown are the results of the control problem given by Equation \ref{eq:event} with $a = 2$ and $b = 1$. It was solved on $t\in[0,1]$ with initial condition $(x(0), y(0)) = (1,0)$. The sensitivities of the two state variables are given at time $t=1$ with respect to the two parameters. The analytical solution is derived by taking derivatives directly on Equation \ref{eq:event-analytical}.} \label{tab:event-results} \end{table} The sensitivity analysis results are compared to the true derivative at time $t=1$ utilizing the analytical solution of the system (Equation \ref{eq:event-analytical}) in Table \ref{tab:event-results}. These results show that continuous sensitivity analysis as defined in Equation \ref{eq:clsa} does not properly propagate the sensitivities due to discontinuities and this results in incorrect derivative calculations for hybrid ODE systems. 
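The analytical values in the table can be checked with a small pure-Python sketch (an illustrative stand-in, not the paper's implementation): simulate the hybrid system with a fixed-step integrator plus a zero-crossing event, and differentiate the discrete solution by finite differences. Because the event time $t^*=1/a$ and the resulting jump are part of the simulated trajectory, the discrete derivatives recover the analytical sensitivities.

```python
# Hybrid system: x' = -a, y' = b, with y' switched off at the zero-crossing
# of x. Finite differences of the *discrete* solution capture the
# parameter-dependence of the event time, unlike naive continuous CSA.

def simulate(a, b, T=1.0, dt=1e-5):
    x, y, active = 1.0, 0.0, True
    for _ in range(round(T / dt)):
        x_new = x - dt * a
        if active and x_new <= 0.0:   # zero-crossing of x: switch off dy/dt
            active = False
        y += dt * (b if active else 0.0)
        x = x_new
    return x, y

a, b, h = 2.0, 1.0, 1e-3
dxda = (simulate(a + h, b)[0] - simulate(a - h, b)[0]) / (2 * h)
dyda = (simulate(a + h, b)[1] - simulate(a - h, b)[1]) / (2 * h)
dydb = (simulate(a, b + h)[1] - simulate(a, b - h)[1]) / (2 * h)
# analytical values from the table: dx/da = -1, dy/da = -0.25, dy/db = 0.5
```

The small residual error in `dyda` comes from the event time being resolved only to within one time step of the fixed-step integrator.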
Extensions to continuous sensitivity analysis are required in order to correct these errors \cite{kirches_numerical_2006}. These have been implemented in DiffEqSensitivity.jl, though we remark from experience that such corrections are very non-trivial to implement and thus demonstrate an ease-of-implementation advantage for DSAAD. Additionally, the continuous sensitivity analysis equations defined in Equation \ref{eq:clsa} only apply to ordinary differential equations. It has been shown that a different set of equations is required for delay differential equations (DDEs) \cite{bredies_generalized_2013}: \begin{equation} \frac{d}{dt}\frac{\partial u(t)}{\partial p_i} = \frac{\partial G}{\partial u} \frac{\partial u}{\partial p_i}(t) + \frac{\partial G}{\partial \tilde{u}}\frac{\partial u}{\partial p_i}(t-\tau) + \frac{\partial G}{\partial p_i}(t), \end{equation} where $\frac{du(t)}{dt}=G(u,\tilde{u},p,t)$ is the DDE system with a single fixed time delay $\tau$, and for differential-algebraic equations (DAEs) \cite{alan_c._hindmarsh_sundials:_2005} \begin{equation} \frac{\partial F}{\partial u}\frac{\partial u}{\partial p_i} + \frac{\partial F}{\partial \dot{u}}\frac{\partial \dot{u}}{\partial p_i} + \frac{\partial F}{\partial p_i} = 0, \end{equation} where $F(\dot{u},u,p,t)=0$ is the DAE system. On the other hand, the discrete sensitivity analysis approach implemented via automatic differentiation is not specialized to ordinary differential equations, since it automatically generates the sensitivity propagation at the compiler level utilizing the atomic operations inside the numerical integration scheme. Thus these types of equations, and expanded forms such as DDEs with state-dependent delays or hybrid DAEs, are automatically supported by the connection between DifferentialEquations.jl and Julia-based automatic differentiation packages. Tests on the DDE and DAE solvers confirm this to be the case.
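The "no DDE-specific equations needed" point can be illustrated with a minimal sketch (an assumed toy model, pure Python rather than the Julia implementation): propagating dual numbers through a method-of-steps Euler loop for a delayed equation differentiates the integrator itself, with no knowledge of the DDE sensitivity equations.

```python
# Discrete forward sensitivity of u'(t) = -p*u(t-1), history u(t)=1 on [-1,0],
# obtained by running dual numbers through the time-stepping loop.

class Dual:
    """Scalar dual number: val + eps * d/dp (minimal, illustrative)."""
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps
    def __add__(self, o):
        return Dual(self.val + o.val, self.eps + o.eps)
    def __mul__(self, o):
        return Dual(self.val * o.val, self.val * o.eps + self.eps * o.val)

def solve_dde(p, T=2.0, dt=1e-3):
    steps, lag = round(T / dt), round(1.0 / dt)
    u = [Dual(1.0)] * (lag + 1)       # history u(t) = 1 on [-1, 0]
    for i in range(steps):            # u[lag + i] is u at time i*dt
        u.append(u[-1] + Dual(-dt) * (p * u[i]))   # u[i] = u(t - 1)
    return u[-1]

p0 = 0.7
u = solve_dde(Dual(p0, 1.0))          # eps-component carries du/dp
h = 1e-6
fd = (solve_dde(Dual(p0 + h)).val - solve_dde(Dual(p0 - h)).val) / (2 * h)
# analytically u(2) = 1 - 2p + p^2/2, so du/dp = p - 2 = -1.3
```

The dual-propagated derivative agrees with a finite difference of the same discrete solver to near machine precision.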
\section{Discussion\label{sec:Discussion}} Performant and correct sensitivity analysis is crucial to many applications of differential equation models. Here we analyzed both the performance and generalizability of the full Cartesian product of approaches. Our results show a strong performance advantage for automatic differentiation based discrete sensitivity analysis for forward-mode sensitivity analysis on sufficiently small systems, and an advantage for continuous adjoint sensitivity analysis on sufficiently large systems. Notably, pure tape-based reverse-mode automatic differentiation did not perform or scale well in these benchmarks. The implementations have generally been optimized for use in machine learning models, which make extensive use of large linear algebra operations like matrix multiplications, which decrease the size of the tape with respect to the amount of work performed. Since differential equations tend to be defined by nonlinear functions with scalar operations, the tape-handling-to-work ratio thus decreases and is no longer competitive with other forms of derivative calculation. This technical detail affects common frameworks such as ReverseDiff.jl, PyTorch \cite{NEURIPS2019_9015}, and TensorFlow Eager \cite{agrawal2019tensorflow}, which leads to the recommendation of CASA for most frameworks. In addition, some frameworks, such as Jax \cite{jax2018github}, cannot JIT optimize the non-static computation graphs of a full ODE solver, which further leads to performance advantages for CASA in current reverse-mode implementations. Future work should investigate adjoint DSAAD with approaches that allow for low-overhead compilation of the reverse path, such as that of Zygote \cite{innes_dont_2018}, whose static handling of control flow may be what is needed for this application. One major result to note is that, in many cases of interest, the runtime overhead of the adjoint methods can be larger than their theoretical scaling advantages.
Forward-mode automatic differentiation does not exhibit the best scaling properties, but on small systems of ODEs with small numbers of parameters, both stiff and non-stiff, the forward-mode methods benchmark as advantageous. On problems like PDEs, the scaling of adjoint methods does confer an advantage when the problem is sufficiently large, but also a disadvantage when the problem is small. Additionally, the ability to seed automatic differentiation for the Jacobian-vector multiplications was demonstrated to be very advantageous on the larger problems, showing that integrating the implementation of sensitivity analysis with automatic differentiation tools is necessary for achieving the utmost efficiency. In addition, the choice of automatic differentiation for the $v'J$ calculation was shown to be a major factor in performance, with the static Enzyme-based implementation giving a performance advantage of around two orders of magnitude over the tape-based implementation even when confined to this one calculation. Together, these results show that having the ability to choose between these different methods is essential for software wishing to support these distinct use cases. Future research should look into the comparative performance of DiffEqSensitivity.jl \cite{rackauckas2020universal} with the native adjoint techniques of SUNDIALS \cite{alan_c._hindmarsh_sundials:_2005} and PETSc TS \cite{abhyankar2018petsc} to see how far this result generalizes. The results here would suggest the performance difference may be around three orders of magnitude, since SUNDIALS uses a method similar to the interpolating CASA with (numerical) forward-mode Jacobians. A follow-up study on multiple PDEs comparing between the software suites could be very useful to practitioners. While the runtime performance of these methods is usually of primary interest, it is important to also note the memory scaling of the various adjoint sensitivity analysis implementations.
The quadrature CASA implementation benchmarks as the fastest for stiff ODEs, but uses a continuous solution to the original ODE in order to generate the adjoint Jacobian and gradients on demand. This setup only requires a single forward ODE solve, but trades this for the high memory requirement of saving the full timeseries solution and its interpolating function. For example, with the chosen 9th order explicit Runge-Kutta method due to Verner, the total memory cost is $26NM$, since 26 internal derivative calculations of size $N$ are utilized to construct the interpolant, where $M$ is the number of time points. The DSAAD reverse-mode AD approaches require constructing a tape for the entire initial forward solution and thus have memory scaling similar to the quadrature CASA, though in theory checkpointed AD systems could alleviate these memory issues. In contrast, checkpointing-based adjoint sensitivity analysis implementations \cite{alan_c._hindmarsh_sundials:_2005} re-solve the forward ODE from a saved time point (a checkpoint) in order to get the $u$ value required for the Jacobian and gradient calculation, increasing the runtime cost by at most 2x while decreasing the memory cost to $NC$, where $C$ is the number of checkpoints. These results show that the DiffEqSensitivity.jl interpolating checkpointing CASA approach may have around a 5x performance deficit relative to the more optimal quadrature CASA, which can be a worthwhile price to pay if memory concerns are the largest factor. The backsolve CASA approach, while the most memory efficient, was shown to be too unstable to be useful in most use cases and is thus only recommended if the dynamical system is known in advance to be non-stiff. These results show many advantages for AD-based discrete sensitivity analysis on small systems. However, there are significant engineering challenges to the development of such integration schemes.
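The checkpointing tradeoff described above can be sketched in a few lines (a hypothetical pure-Python toy, with the model and segment length chosen for illustration): the reverse sweep recomputes each segment forward from its checkpoint instead of storing every step, bounding memory at the cost of up to one extra forward solve.

```python
# Checkpointed discrete adjoint of an explicit Euler solve of u' = p*u^2,
# for the loss L = u(T). Only every `seg`-th state is stored.

def step(u, p, dt):                   # one explicit Euler step
    return u + dt * p * u * u

def final(p, u0=1.0, dt=1e-3, steps=500):
    u = u0
    for _ in range(steps):
        u = step(u, p, dt)
    return u

def gradient(p, u0=1.0, dt=1e-3, steps=500, seg=50):
    u, ckpts = u0, {0: u0}            # forward pass: keep only checkpoints
    for k in range(steps):
        u = step(u, p, dt)
        if (k + 1) % seg == 0:
            ckpts[k + 1] = u
    lam, grad = 1.0, 0.0              # lam = dL/du_k, seeded at the end
    for start in range(steps - seg, -1, -seg):
        us, u = [ckpts[start]], ckpts[start]
        for _ in range(seg - 1):      # recompute this segment's states
            u = step(u, p, dt)
            us.append(u)
        for uk in reversed(us):       # adjoint sweep over the segment
            grad += lam * dt * uk * uk        # d(step)/dp = dt * u^2
            lam *= 1.0 + 2.0 * dt * p * uk    # d(step)/du = 1 + 2*dt*p*u
    return grad

g = gradient(0.5)
h = 1e-6
fd = (final(0.5 + h) - final(0.5 - h)) / (2 * h)  # matches the adjoint grad
```

Here memory is $O(\text{steps}/\text{seg} + \text{seg})$ rather than $O(\text{steps})$, mirroring the $NC$ scaling discussed above.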
In order for this methodology to exist, a general-function automatic differentiation tool must exist for the programming language, and the entire ODE solver must be compatible with the AD software. This works for the Julia-based DifferentialEquations.jl software since it contains a large number of high performance native-Julia solver implementations. However, many other solver ecosystems do not have such possibilities. For example, common open source packages for solving ordinary differential equation systems include deSolve in R \cite{soetaert_solving_2010} and SciPy for Python \cite{jones_scipy:_2001}. While general automatic differentiation tools exist for these languages (\cite{pav_madness:_nodate}, \cite{maclaurin_autograd:_nodate}), both of these packages call out to Fortran-based ODE solvers such as LSODA \cite{petzold_automatic_nodate}, and thus AD cannot be directly applied to the solver calls. This means DSAAD techniques may stay limited to specific software due to technical engineering issues. When considering the maintenance of large software ecosystems, the DSAAD approach gives many advantages if planned from the start. For one, almost no additional code had to be written by the differential equation community in order for this implementation to exist, since it works directly via code generation at compile time on the generic functions of DifferentialEquations.jl. An additional advantage is that this same technique applies to the native hybrid, delay, and differential-algebraic integrators present in the library. DifferentialEquations.jl also allows many other actions to occur in the events. For example, the user can change the number of ODEs during an event, and events can change solver internals like the current integration time. Continuous sensitivity analysis requires a separate implementation for each of these cases, which could be costly in developer time.
For DiffEqSensitivity.jl, the performant and correct implementation of corrections for hybrid differential equations and differential-algebraic equations took around 6 months of part-time work split between two developers. This ease-of-implementation aspect may be worth considering for smaller organizations. \section*{Acknowledgments} This work is supported by Center for Translational Medicine, University of Maryland Baltimore School of Pharmacy. CR is partially supported by the NSF grant DMS1763272 and a grant from the Simons Foundation (594598, QN). Additionally, we thank the Google Summer of Code program for helping support the open source packages showcased in this manuscript. The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Number DE-AR0001222 and NSF grants OAC-1835443 and IIP-1938400. \section{Appendix} \scriptsize \subsection{Models} The first test problem is LV, the non-stiff Lotka-Volterra model {\scriptsize \begin{align} \frac{dx}{dt} &= p_{1}x - p_{2}xy,\nonumber \\ \frac{dy}{dt} &= -p_{3}y + xy,\label{eq:Lotka-Volterra} \end{align}} with initial condition $[1.0,1.0]$ and $p = [1.5,1.0,3.0]$ \cite{murray_mathematical_2002}.
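For concreteness, the Lotka-Volterra right-hand side above can be written as an ODE $u' = f(u,p,t)$ in a few lines (a pure-Python sketch with a classic RK4 step, not the paper's Julia/\texttt{Tsit5} implementation):

```python
# Lotka-Volterra RHS from the equations above, plus one RK4 step.
def f(u, p, t):
    x, y = u
    return [p[0] * x - p[1] * x * y,   # dx/dt = p1*x - p2*x*y
            -p[2] * y + x * y]         # dy/dt = -p3*y + x*y

def rk4_step(u, p, t, h):
    k1 = f(u, p, t)
    k2 = f([u[i] + h/2 * k1[i] for i in range(2)], p, t + h/2)
    k3 = f([u[i] + h/2 * k2[i] for i in range(2)], p, t + h/2)
    k4 = f([u[i] + h * k3[i] for i in range(2)], p, t + h)
    return [u[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

p = [1.5, 1.0, 3.0]
# the interior equilibrium (x*, y*) = (p3, p1/p2) = (3.0, 1.5) is a fixed
# point of f, so an RK4 step leaves it unchanged
u = rk4_step([3.0, 1.5], p, 0.0, 0.01)
```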
The second model, BRUSS, is the two dimensional ($N\times N$) Brusselator stiff reaction-diffusion PDE: {\scriptsize \begin{align} \frac{\partial u}{\partial t} &= p_{2} + u^2v - (p_{1}+1)u + p_{3}(\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}) + f(x, y, t),\nonumber \\ \frac{\partial v}{\partial t} &= p_{1}u - u^2v + p_{4}(\frac{\partial^2 v}{\partial x^2}+\frac{\partial^2 v}{\partial y^2}),\label{eq:Brusselator} \end{align}} where {\scriptsize \begin{align} f(x, y, t) = \begin{cases} 5 & \text{ if } (x-0.3)^2+(y-0.6)^2\le 0.1^2 \text{ and } t\ge 1.1 \\ 0 & \text{ else}, \end{cases} \end{align} } with no-flux boundary conditions and $u(0,x,y) = 22(y(1-y))^{3/2}$ with $v(0,x,y) = 27(x(1-x))^{3/2}$ \cite{ernst_hairer_solving_1991}. This PDE is discretized to a set of $N \times N \times 2$ ODEs using the finite difference method. The parameters are spatially-dependent, $p_i=p_i(x,y)$, making each discretized $p_i$ an $N \times N$ set of values at each discretization point, giving a total of $4N^2$ parameters. The initial parameter values were spatially uniform, $p_i(x,y) = [3.4,1.0,10.0,10.0]$. The third model, POLLU, simulates air pollution.
It is a stiff non-linear ODE system which consists of $20$ ODEs: {\allowdisplaybreaks {\scriptsize \begin{align} \frac{du_1}{dt} &= -p_{1} u_{1}-p_{10}u_{11}u_{1}-p_{14}u_{1}u_{6}-p_{23}u_{1}u_{4}- \nonumber\\ & p_{24}u_{19}u_{1}+p_{2} u_{2}u_{4}+p_{3} u_{5}u_{2}+p_{9}u_{11}u_{2}+ \nonumber\\ & p_{11}u_{13}+p_{12}u_{10}u_{2}+p_{22}u_{19}+p_{25}u_{20} \nonumber\\ \frac{du_2}{dt} &= -p_{2} u_{2}u_{4}-p_{3} u_{5}u_{2}-p_{9} u_{11}u_{2}-p_{12}u_{10}u_{2}+p_{1} u_{1}+p_{21}u_{19} \nonumber\\ \frac{du_3}{dt} &= -p_{15}u_{3}+p_{1} u_{1}+p_{17}u_{4}+p_{19}u_{16}+p_{22}u_{19} \nonumber\\ \frac{du_4}{dt} &= -p_{2} u_{2}u_{4}-p_{16}u_{4}-p_{17}u_{4}-p_{23}u_{1}u_{4}+p_{15}u_{3} \nonumber\\ \frac{du_5}{dt} &= -p_{3} u_{5}u_{2}+p_{4} u_{7}+p_{4} u_{7}+p_{6} u_{7}u_{6}+p_{7} u_{9}+p_{13}u_{14}+p_{20}u_{17}u_{6} \nonumber\\ \frac{du_6}{dt} &= -p_{6} u_{7}u_{6}-p_{8} u_{9}u_{6}-p_{14}u_{1}u_{6}-p_{20}u_{17}u_{6}+p_{3} u_{5}u_{2}+p_{18}u_{16}+p_{18}u_{16} \nonumber\\ \frac{du_7}{dt} &= -p_{4} u_{7}-p_{5} u_{7}-p_{6} u_{7}u_{6}+p_{13}u_{14} \nonumber\\ \frac{du_8}{dt} &= p_{4} u_{7}+p_{5} u_{7}+p_{6} u_{7}u_{6}+p_{7} u_{9} \nonumber\\ \frac{du_9}{dt} &= -p_{7} u_{9}-p_{8} u_{9}u_{6} \nonumber\\ \frac{du_{10}}{dt} &= -p_{12}u_{10}u_{2}+p_{7} u_{9}+p_{9} u_{11}u_{2} \nonumber\\ \frac{du_{11}}{dt} &= -p_{9} u_{11}u_{2}-p_{10}u_{11}u_{1}+p_{8} u_{9}u_{6}+p_{11}u_{13} \nonumber\\ \frac{du_{12}}{dt} &= p_{9} u_{11}u_{2} \nonumber\\ \frac{du_{13}}{dt} &= -p_{11}u_{13}+p_{10}u_{11}u_{1} \nonumber\\ \frac{du_{14}}{dt} &= -p_{13}u_{14}+p_{12}u_{10}u_{2} \nonumber\\ \frac{du_{15}}{dt} &= p_{14}u_{1}u_{6} \nonumber\\ \frac{du_{16}}{dt} &= -p_{18}u_{16}-p_{19}u_{16}+p_{16}u_{4} \nonumber\\ \frac{du_{17}}{dt} &= -p_{20}u_{17}u_{6} \nonumber\\ \frac{du_{18}}{dt} &= p_{20}u_{17}u_{6} \nonumber\\ \frac{du_{19}}{dt} &= -p_{21}u_{19}-p_{22}u_{19}-p_{24}u_{19}u_{1}+p_{23}u_{1}u_{4}+p_{25}u_{20} \nonumber\\ \frac{du_{20}}{dt} &= -p_{25}u_{20}+p_{24}u_{19}u_{1} \end{align} }} with the initial condition of
$u_0 = [0, 0.2, 0, 0.04, 0, 0, 0.1, 0.3, 0.01, 0, 0, 0, 0, 0, 0, 0, 0.007, 0, 0, 0]^T$ and parameters $[0.35, 26.6, 1.23\times10^4, 8.6\times10^{-4}, 8.2\times10^{-4}, 1.5\times10^4, 1.3\times10^{-4}, 2.4\times10^4, 1.65\times10^4, 9\times10^3, 0.022, 1.2\times10^4, 1.88, 1.63\times10^4, 4.8\times10^6, 3.5\times10^{-4}, 0.0175, 10^9, 4.44\times10^{11}, 1.24\times10^3, 2.1, 5.78, 0.0474, 1.78\times10^3, 3.12]$ \cite{ernst_hairer_solving_1991}. The fourth model is a non-stiff pharmacokinetic/pharmacodynamic model (PKPD) \cite{upton_basic_2014}, which takes the form {\scriptsize \begin{align} \frac{dDepot}{dt} &= -k_a Depot \nonumber\\ \frac{dCent}{dt} &= k_a Depot- \nonumber\\ & (CL+V_{max}/(K_m+(Cent/V_c))+Q_1)(Cent/V_c) + \nonumber\\ & Q_1(Periph_1/V_{p_1}) - \nonumber\\ & Q_2(Cent/V_c) + Q_2(Periph_2/V_{p_2}) \nonumber\\ \frac{dPeriph_1}{dt} &= Q_1(Cent/V_c) - Q_1(Periph_1/V_{p_1}) \nonumber\\ \frac{dPeriph_2}{dt} &= Q_2(Cent/V_c) - Q_2(Periph_2/V_{p_2}) \nonumber\\ \frac{dResp}{dt} &= k_{in}(1-(I_{max}(Cent/V_c)^\gamma/(IC_{50}^\gamma+(Cent/V_c)^\gamma))) - k_{out} Resp. \end{align}} with the initial condition of $[100.0,0.0,0.0,0.0,5.0]$. $k_a = 1$ is the absorption rate of drug into the central compartment from the dosing compartment, $CL = 1$ is the clearance parameter of drug elimination, $V_c = 20$ is the central volume of distribution, $Q_1 = 2$ is the inter-compartmental clearance between central and first peripheral compartment, $Q_2 = 0.5$ is the inter-compartmental clearance between central and second peripheral compartment, $V_{p_1} = 10$ is the first peripheral compartment distribution volume, $V_{p_2} = 100$ is the second peripheral compartment distribution volume, $V_{max} = 0$ is the maximal rate of saturable elimination of drug, $K_m = 2$ is the Michaelis-Menten constant, $k_{in} = 10$ is the input rate to the response (PD) compartment with a maximal inhibitory effect of $I_{max} = 1$, $IC_{50} = 2$ is a parameter for the concentration at 50\% of the effect and $k_{out} = 2$ is the elimination rate of the response, and $\gamma = 1$ is the model sigmoidicity.
Additional doses of $100.0$ are applied to the $Depot$ variable every $24$ time units. \end{document}
\begin{document} \title{\sl From the Pr\'ekopa-Leindler inequality to modified logarithmic Sobolev inequality} \author{ Ivan Gentil\\ \affil{Ceremade (UMR CNRS no. 7534), Universit\'e Paris IX-Dauphine,}\\ \affil{Place de Lattre de Tassigny, 75775 Paris C\'edex~16, France}\\ \email{[email protected]}\\ \http{http://www.ceremade.dauphine.fr/~gentil/}\\ } \date{\today} \thispagestyle{empty} \begin{abstract} We develop in this paper an improvement of the method given by S. Bobkov and M. Ledoux in~\cite{bobkov-ledoux1}. Using the Pr\'ekopa-Leindler inequality, we prove a modified logarithmic Sobolev inequality adapted to all measures on $\mathbb{R}^n$ with a strictly convex and super-linear potential. This inequality implies the modified logarithmic Sobolev inequality developed in~\cite{ge-gu-mi,ge-gu-mi2} for all uniformly strictly convex potentials, as well as the Euclidean logarithmic Sobolev inequality. \begin{center} {\bf R\'esum\'e} \end{center} In this article we improve the method presented by S. Bobkov and M. Ledoux in~\cite{bobkov-ledoux1}. Using the Pr\'ekopa-Leindler inequality, we prove a modified logarithmic Sobolev inequality adapted to all measures on $\mathbb{R}^n$ with a strictly convex and super-linear potential. This inequality implies in particular a modified logarithmic Sobolev inequality, developed in~\cite{ge-gu-mi,ge-gu-mi2}, for measures with a uniformly strictly convex potential, as well as a logarithmic Sobolev inequality of Euclidean type.
\end{abstract} \section{Introduction} \label{sec-int} The Pr\'ekopa-Leindler inequality is the functional form of the Brunn-Minkowski inequality. Let $a,b$ be positive reals such that $a+b=1$, and let $u$, $v$, $w$ be non-negative measurable functions on $\mathbb{R}^n$. Assume that, for all $x,y \in \mathbb{R}^n$, we have \begin{equation*} u(x)^a v(y)^b\leq w(a x + b y), \end{equation*} then \begin{equation} \label{ul-6.5} \left(\int u(x)\, dx\right)^a\left(\int v(x)\, dx\right)^b\leq \int w(x)\, dx, \end{equation} where $dx$ is the Lebesgue measure on $\mathbb{R}^n$. If we apply inequality~\eqref{ul-6.5} to the characteristic functions of bounded measurable sets $A$ and $B$ in $\mathbb{R}^n$, we get the multiplicative form of the Brunn-Minkowski inequality $$ \mathrm{vol}(A)^a\, \mathrm{vol}(B)^b\leq \mathrm{vol}(aA+bB), $$ where $aA+bB=\left\{ax_A+bx_B,\;x_A\in A,\,x_B\in B\right\}$ and $\mathrm{vol}(A)$ is the Lebesgue measure of the set $A$. One can see, for example, two interesting reviews on this topic: \cite{gupta,maurey}. Bobkov and Ledoux in \cite{bobkov-ledoux1} use the Pr\'ekopa-Leindler inequality to prove functional inequalities such as the Brascamp-Lieb, logarithmic Sobolev and transportation inequalities. More precisely, let $\Phi$ be a $\mathcal C^2$ strictly convex function on $\mathbb{R}^n$ and let \begin{equation} \label{eq-defm} d\mu_\Phi(x)=e^{-\Phi(x)}dx \end{equation} be a probability measure on $\mathbb{R}^n$ ($\int e^{-\Phi(x)}dx=1$). The function $\Phi$ is called {\it the potential} of the measure $\mu_\Phi$.
Bobkov and Ledoux obtained in particular the following two results: \begin{itemize} \item (Proposition~2.1 of \cite{bobkov-ledoux1}) Brascamp-Lieb inequality: assume that $\Phi$ is a $\mathcal C^2$ function on $\mathbb{R}^n$; then for all smooth enough functions $g$, \begin{equation} \label{eq-br} \mathrm{Var}_{\mu_\Phi}(g):=\int \left(g-\int g\,d\mu_\Phi\right)^2d\mu_\Phi\leq\int \nabla g\cdot\text{Hess}(\Phi)^{-1}\nabla g\,d\mu_\Phi, \end{equation} where $\text{Hess}(\Phi)^{-1}$ is the inverse of the Hessian of $\Phi$. \item (Proposition~3.2 of \cite{bobkov-ledoux1}) Assume that for some $c>0$ and $p\geq2$, for all $t,s>0$ with $t+s=1$, and for all $x,y\in\mathbb{R}^n$, $\Phi$ satisfies, as $s$ goes to 0, \begin{equation} \label{e-in} t\Phi(x)+s\Phi(y)-\Phi(tx+sy)\geq\frac{c}{p}(s+o(s))\|x-y\|^p, \end{equation} where $\|\cdot\|$ is the Euclidean norm on $\mathbb{R}^n$. Then for all smooth enough functions~$g$, \begin{equation} \label{e-bl1} \mathrm{Ent}_{\mu_\Phi}(e^g):=\int e^g\log \frac{e^g}{\int e^gd\mu_\Phi}d\mu_\Phi\leq c\int \|\nabla g\|^qe^gd\mu_\Phi, \end{equation} where $1/p+1/q=1$. They also give an example: the function $\Phi(x)=\|x\|^p+Z_\Phi$ (where $Z_\Phi$ is a normalization constant), which satisfies inequality~\eqref{e-in} for some constant $c>0$.
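As a quick sanity check of the convexity condition above (a standard computation added here for illustration, not part of the original text), take $p=2$ and $\Phi(x)=\|x\|^2+Z_\Phi$; since $t+s=1$, the normalization constant cancels and

```latex
t\|x\|^{2}+s\|y\|^{2}-\|tx+sy\|^{2}
  = t(1-t)\|x\|^{2}+s(1-s)\|y\|^{2}-2ts\,x\cdot y
  = ts\,\|x-y\|^{2}
  = (s+o(s))\|x-y\|^{2},
```

which is exactly the required lower bound with $c=p=2$.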
\end{itemize} The main result of this paper is an inequality satisfied by any measure $\mu_\Phi$ whose potential is strictly convex and super-linear (we also assume a technical hypothesis on the potential $\Phi$). More precisely we obtain, for all smooth enough functions $g$ on~$\mathbb{R}^n$, \begin{equation} \label{e-theo0} \mathrm{Ent}_{\mu_\Phi}(e^g)\leq \int \left\{x\cdot\nabla g(x)-\Phi^*(\nabla\Phi(x))+\Phi^*\left(\nabla\Phi(x)-\nabla g(x)\right)\right\}e^{g(x)}d\mu_\Phi(x), \end{equation} where $\Phi^*$ is the Fenchel-Legendre transform of $\Phi$, $\Phi^*(x):=\sup_{z\in\mathbb{R}^n}\left\{x\cdot z-\Phi(z)\right\}$. The main application of this result is to extend the modified logarithmic Sobolev inequalities presented in \cite{ge-gu-mi,ge-gu-mi2} for probability measures on $\mathbb{R}$ satisfying a uniform strict convexity condition. It is well known that if the potential $\Phi$ is $\mathcal C^2$ on $\mathbb{R}$ and such that for all $x\in\mathbb{R}$, $\Phi''(x)\geq\lambda>0$, then the measure $\mu_\Phi$ defined in~\eqref{eq-defm} satisfies the logarithmic Sobolev inequality introduced by Gross in~\cite{gross}: for all smooth enough functions $g$, \begin{equation*} \mathrm{Ent}_{\mu_\Phi}(e^g)\leq \frac{1}{2\lambda}\int g'^2e^gd\mu_\Phi. \end{equation*} This result follows from the $\Gamma_{\!2}$-criterion of D. Bakry and M. \'Emery; see \cite{b-e} or \cite{logsob} for a review.
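For power potentials, the Fenchel-Legendre transform appearing above can be computed explicitly (a standard computation, included here only as an illustration): for $\Phi(x)=\frac{1}{p}\|x\|^p$ with $p>1$, taking $z=r\,y/\|y\|$ with $r\geq0$ gives

```latex
\Phi^{*}(y)=\sup_{z\in\mathbb{R}^{n}}\Big\{y\cdot z-\tfrac{1}{p}\|z\|^{p}\Big\}
          =\sup_{r\geq 0}\Big\{r\|y\|-\tfrac{r^{p}}{p}\Big\}
          =\tfrac{1}{q}\|y\|^{q},
\qquad \tfrac{1}{p}+\tfrac{1}{q}=1,
```

with the supremum attained at $r=\|y\|^{1/(p-1)}$; the dual exponent $q$ is the same one that appears in the modified inequality of Bobkov and Ledoux recalled above.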
We then improve the classical logarithmic Sobolev inequality of Gross in the situation where the potential is even with $\Phi(0)=0$ and satisfies \begin{equation*} \forall x\in\mathbb{R}, \quad \Phi''(x)\geq \lambda >0\quad\text{ and }\quad \lim_{|x|\rightarrow\infty} \Phi''(x)=\infty. \end{equation*} Adding a technical hypothesis (see Section~\ref{sec-mod}), we show that for all smooth functions $g$, \begin{equation*} \mathrm{Ent}_{\mu_\Phi}(e^g)\leq \int H_\Phi(g')e^gd\mu_\Phi, \end{equation*} where \begin{equation*} H_{\Phi}(x)= \left\{ \begin{array}{r} \displaystyle C'\Phi^*\left(\frac{x}{2}\right),\quad \text{if}\quad |x|> C,\\ \displaystyle\frac{1}{2\lambda}x^2,\quad \text{if}\quad |x|\leq C,\\ \end{array} \right. \end{equation*} for some constants $C,C',\lambda>0$ depending on $\Phi$. Remark that we always have $$ \forall x\in\mathbb{R},\quad H_{\Phi}(x)\leq C''x^2, $$ for some other constant $C''$. This inequality implies concentration inequalities which are better adapted to the measure studied, as we will see in Section~\ref{sec-mod}. \bigskip The next section is divided into two subsections. In the first one we state the main theorem of this article, inequality~\eqref{e-theo0}. In the second subsection, we explain how this result improves the results of \cite{bobkov-ledoux1}, in particular inequality~\eqref{e-bl1} and the Brascamp-Lieb inequality~\eqref{eq-br}. Section~\ref{sec-app} deals with some applications. The first one is an improvement of a classical consequence of the $\Gamma_{\!2}$-criterion of Bakry-\'Emery for measures on $\mathbb{R}$.
We then obtain a global view of modified logarithmic Sobolev inequalities for log-concave measures as introduced in joint work with A. Guillin and L. Miclo in \cite{ge-gu-mi,ge-gu-mi2}. Finally, we explain how the main theorem is equivalent to the Euclidean logarithmic Sobolev inequality. As a consequence, a short proof of the generalization given in \cite{del-dol,gentil03,ghoussoub} is obtained. \section{Inequality for log-concave measures} \label{sec-theo} \subsection{The main theorem} \label{s-re} \begin{ethm} \label{the-theo} Let $\Phi$ be a $\mathcal C^2$ strictly convex function on~$\mathbb{R}^n$ such that \begin{equation} \label{eq-propphi} \lim_{\|x\|\rightarrow\infty}\frac{\Phi(x)}{\|x\|}=\infty. \end{equation} Denote by $\mu_\Phi(dx)=e^{-\Phi(x)}dx$ a probability measure on~$\mathbb{R}^n$, where $dx$ is the Lebesgue measure on~$\mathbb{R}^n$ ($\int e^{-\Phi(x)}dx=1$). Assume that $\mu_\Phi$ satisfies, for any $R>0$, \begin{equation} \label{hypo-++} \int \left(\|z\|+\|y_0\|+R\right)^2\left(\|\nabla\Phi(z)\|+\!\!\!\!\! \sup_{y;\,\|y-z+y_0\|\leq R} \!\!\!\!\!\|\text{Hess}(\Phi)(y)\|\right)d\mu_\Phi(z)<+\infty, \end{equation} where $y_0$ satisfies $\|\nabla\Phi(y_0)\|\leq\|\nabla\Phi(z)\|+R$.
If $\Phi^*$ is the Fenchel-Legendre transform of $\Phi$, $\Phi^*(x):=\sup_{z\in\mathbb{R}^n}\left\{x\cdot z-\Phi(z)\right\},$ then for all smooth enough functions $g$ on $\mathbb{R}^n$, one gets
\begin{equation}
\label{eq-theo}
\mathrm{Ent}_{\mu_\Phi}\left(e^g\right)\leq \int \left\{x\cdot\nabla g(x)-\Phi^*(\nabla\Phi(x))+\Phi^*\left(\nabla\Phi(x)-\nabla g(x)\right)\right\}e^{g(x)}d\mu_\Phi(x).
\end{equation}
\end{ethm}

\begin{elem}
\label{lem-1}
Let $\Phi$ satisfy the conditions of Theorem~\ref{the-theo}. Then:
\begin{itemize}
\item $\nabla \Phi$ is a bijection from $\mathbb{R}^n$ to $\mathbb{R}^n$;
\item $\lim_{\|x\|\rightarrow\infty}\frac{x\cdot\nabla\Phi(x)}{\|x\|}=+\infty$.
\end{itemize}
\end{elem}
\begin{eproof}
Condition~\eqref{eq-propphi} implies that for all $x\in\mathbb{R}^n$ the supremum of ${x\cdot z-\Phi(z)}$ over $z\in\mathbb{R}^n$ is attained at some $y\in\mathbb{R}^n$. Such a $y$ satisfies $x=\nabla\Phi(y)$, which proves that $\nabla\Phi$ is a surjection. The strict convexity of $\Phi$ then implies that $\nabla\Phi$ is a bijection. Since $\Phi$ is convex, for all $x\in\mathbb{R}^n$, $x\cdot\nabla\Phi(x) \geq \Phi(x)-\Phi(0)$, and~\eqref{eq-propphi} implies the second property.
\end{eproof}

The proof of the theorem is based on the following lemma:
\begin{elem}
\label{lem-technique}
Let $g$ be a $\mathcal C^\infty$ function with compact support on $\mathbb{R}^n$. Let $s,t\geq 0$ with $t+s=1$ and set
$$
\forall z\in\mathbb{R}^n,\quad g_s(z)=\sup_{z=tx+sy}\left(g(x)-\left(t\Phi(x)+s\Phi(y)-\Phi(tx+sy)\right)\right).
$$
Then there exists $R\geq 0$ such that, as $s$ goes to $0$,
\begin{multline*}
g_s(z)=g(z)+s\left\{z\cdot\nabla g(z)-\Phi^*\left(\nabla\Phi(z)\right)+\Phi^*\left(\nabla\Phi(z)-\nabla g(z)\right)\right\}\\
+ \left((\|z\|+\|y_0\|+R)\|\nabla \Phi(z)\|+(\|z\|+\|y_0\|+R)^2\!\!\!\!\!\sup_{y;\, \|y-z+y_0\|\leq R} \!\!\!\!\! \|\text{Hess}(\Phi)(y)\|\right){O(s^2)},
\end{multline*}
where $y_0$ satisfies $\|\nabla\Phi(y_0)\|\leq\|\nabla\Phi(z)\|+R$ and $O(s^2)$ is uniform in $z\in\mathbb{R}^n$.
\end{elem}
\begin{eproof}
Let $s\in]0,1/2[$ and write $x=z/t-(s/t)y$, hence
$$
g_s(z)=\Phi(z)+\sup_{y\in\mathbb{R}^n}\left(g\left(\frac{z}{t}-\frac{s}{t}y\right)-t\Phi\left(\frac{z}{t}-\frac{s}{t}y\right)-s\Phi(y)\right).
$$
Since $g$ has compact support, property~\eqref{eq-propphi} ensures that there exists $y_s\in\mathbb{R}^n$ such that
$$
\sup_{y\in\mathbb{R}^n}\left(g\left(\frac{z}{t}-\frac{s}{t}y\right)-t\Phi\left(\frac{z}{t}-\frac{s}{t}y\right)-s\Phi(y)\right)= g\left(\frac{z}{t}-\frac{s}{t}y_s\right)-t\Phi\left(\frac{z}{t}-\frac{s}{t}y_s\right)-s\Phi(y_s).
$$
Moreover, $y_s$ satisfies
\begin{equation}
\label{eq-ys}
\nabla g\left(\frac{z}{t}-\frac{s}{t}y_s\right)-t\nabla\Phi\left(\frac{z}{t}-\frac{s}{t}y_s\right)+t\nabla \Phi(y_s)=0.
\end{equation}
Lemma~\ref{lem-1} implies that there exists a unique solution $y_0$ of the equation
\begin{equation}
\label{eq-der}
\nabla\Phi(y_0)={\nabla\Phi(z)-\nabla g(z)},\quad \text{i.e. } y_0=\left(\nabla\Phi\right)^{-1}\left(\nabla\Phi(z)-\nabla g(z)\right).
\end{equation}
We now prove that $\lim_{s\rightarrow 0}y_s=y_0$. First we show that there exists $A\geq0$ such that $\|y_s\|\leq A$ for all $s\in]0,1/2[$. Indeed, if $(y_s)$ is not bounded, one can find $(s_k)_{k\in\mathbb{N}}$ such that $s_k\rightarrow0$ and $\|y_{s_k}\|\rightarrow\infty$. The definition of $y_s$ implies that
$$
g\left(\frac{z}{t}-\frac{s}{t}y_s\right)-t\Phi\left(\frac{z}{t}-\frac{s}{t}y_s\right)-s\Phi(y_s)\geq g\left(\frac{z}{t}\right)-t\Phi\left(\frac{z}{t}\right).
$$
Since $\lim_{\|x\|\rightarrow\infty}\Phi(x)=\infty$ and $g$ is bounded, we obtain $s_ky_{s_k}=O(1)$.
Next, using~\eqref{eq-ys} one gets
$$
\frac{y_s\cdot \nabla g\left(\frac{z}{t}-\frac{s}{t}y_s\right)}{\|y_s\|}-t\frac{y_s\cdot\nabla\Phi\left(\frac{z}{t}-\frac{s}{t}y_s\right)}{\|y_s\|}+t\frac{y_s\cdot\nabla \Phi(y_s)}{\|y_s\|}=0.
$$
The last equality contradicts the second assertion of Lemma~\ref{lem-1}, which proves that $(y_s)_{s\in]0,1/2[}$ is bounded. Let $\hat{y}$ be an accumulation point of $y_s$ as $s$ tends to $0$. Then $\hat{y}$ satisfies equation~\eqref{eq-der}, and by uniqueness of the solution of~\eqref{eq-der} we get $\hat{y}=y_0$. We have therefore proved that $\lim_{s\rightarrow 0}y_s=y_0$.

Taylor's formula gives
$$
\Phi\left(\frac{z}{t}-\frac{s}{t}y_s\right)=\Phi(z)+s\left(\frac{z}{t}-\frac{y_s}{t}\right)\cdot\nabla\Phi(z) + s^2\int_0^1(1-u)\left(\frac{z}{t}-\frac{y_s}{t}\right)\!\cdot\!\text{Hess}(\Phi)\left(\frac{z}{t}-s\frac{y_s}{t}\right)\left(\frac{z}{t}-\frac{y_s}{t}\right)du,
$$
and the same for $g$. Using the continuity of $y_s$ at $s=0$, one gets
\begin{multline*}
\Phi\left(\frac{z}{t}-\frac{s}{t}y_s\right)=\Phi(z)+s(z-y_0)\cdot\nabla\Phi(z)\\
+ \left((z-y_0)\cdot\nabla\Phi(z)+\sup_{s\in[0,1/2]}\left\|\frac{z}{t}-\frac{y_s}{t}\right\|^2\sup_{s\in[0,1/2]}\left\|\text{Hess}(\Phi)\left(\frac{z}{t}-\frac{y_s}{t}\right)\right\|\right){O(s^2)}.
\end{multline*}
and similarly for $g$,
\begin{multline*}
g\left(\frac{z}{t}-\frac{s}{t}y_s\right)=g(z)+s(z-y_0)\cdot\nabla g(z) \\
+\left((z-y_0)\cdot\nabla g(z)+\sup_{s\in[0,1/2]}\left\|\frac{z}{t}-\frac{y_s}{t}\right\|^2\sup_{s\in[0,1/2]}\left\|\text{Hess}(g)\left(\frac{z}{t}-\frac{y_s}{t}\right)\right\|\right){O(s^2)}.
\end{multline*}
As a consequence,
\begin{multline*}
g_s(z)=g(z)+s\left\{\Phi(z)-\Phi(y_0)+(z-y_0)\cdot(\nabla g(z)-\nabla\Phi(z))\right\}\\
+ \left((z-y_0)\cdot(\nabla g(z)-\nabla\Phi(z))+\sup_{s\in[0,1/2]}\left\|\frac{z}{t}-\frac{y_s}{t}\right\|^2\sup_{s\in[0,1/2]}\left\|\text{Hess}(\Phi+g)\left(\frac{z}{t}-\frac{y_s}{t}\right)\right\|\right){O(s^2)}.
\end{multline*}
The function $g$ is $\mathcal C^\infty$ with compact support; using~\eqref{eq-der} and the expression of the Fenchel-Legendre transform of a strictly convex function,
$$
\forall z\in\mathbb{R}^n,\quad\Phi^*(\nabla \Phi (z))=\nabla\Phi(z)\cdot z-\Phi(z),
$$
we get the result.
\end{eproof}

We are now ready to deduce our main result:

\noindent {\emph{\textbf{Proof of Theorem~\ref{the-theo}}}}\\~$\lhd$~
The proof follows that of Theorem~3.2 of \cite{bobkov-ledoux1}. First we prove inequality~\eqref{eq-theo} for all $\mathcal C^\infty$ functions $g$ with compact support on $\mathbb{R}^n$. Let $t,s\geq 0$ with $t+s=1$ and denote for $z\in\mathbb{R}^n$,
$$
g_s(z)=\sup_{z=tx+sy}\left(g(x)-\left(t\Phi(x)+s\Phi(y)-\Phi(tx+sy)\right)\right).
$$
We apply the Pr\'ekopa-Leindler inequality to the functions
$$
u(x)=\exp\left(\frac{g(x)}{t}-\Phi(x)\right),\quad v(y)=\exp\left(-\Phi(y)\right),\quad w(z)=\exp\left(g_s(z)-\Phi(z)\right),
$$
to get
$$
\left(\int \exp(g/t)d\mu_\Phi\right)^t\leq\int \exp(g_s)d\mu_\Phi.
$$
Differentiating the $L^p$ norm gives the entropy: by Taylor's formula,
$$
\left(\int \exp(g/t)d\mu_\Phi\right)^t=\int e^gd\mu_\Phi+s\,\mathrm{Ent}_{\mu_\Phi}\left(e^g\right)+O(s^2).
$$
Then applying Lemma~\ref{lem-technique} and inequality~\eqref{hypo-++} yields
\begin{multline*}
\int \exp(g_s)d\mu_\Phi=\\
\int e^gd\mu_\Phi+ s\int \left\{z\cdot\nabla g(z)-\Phi^*\left(\nabla\Phi(z)\right)+ \Phi^*\left(\nabla\Phi(z)-\nabla g(z)\right)\right\}e^{g(z)} d\mu_\Phi(z)+O(s^2).
\end{multline*}
Letting $s$ go to $0$ gives inequality~\eqref{eq-theo}, which can then be extended to all smooth enough functions $g$.
{~$\rhd$\\}

Note that hypothesis~\eqref{hypo-++} is satisfied by a large class of convex functions. For example, if $\Phi(x)=\|x\|^2/2+(n/2)\log(2\pi)$ we obtain the classical logarithmic Sobolev inequality of Gross for the canonical Gaussian measure on $\mathbb{R}^n$, with the optimal constant.

\subsection{Remarks and examples}
\label{sec-ex}

In the next corollary we recall a classical perturbation result.
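Before turning to the perturbation result, let us verify the Gaussian claim just made (a two-line computation, included here for convenience). With $\Phi(x)=\|x\|^2/2+(n/2)\log(2\pi)$ one has $\nabla\Phi(x)=x$ and $\Phi^*(y)=\|y\|^2/2-(n/2)\log(2\pi)$, so the integrand of~\eqref{eq-theo} reduces to
\begin{equation*}
x\cdot\nabla g(x)-\Phi^*(x)+\Phi^*\left(x-\nabla g(x)\right)=x\cdot\nabla g(x)-\frac{\|x\|^2}{2}+\frac{\|x-\nabla g(x)\|^2}{2}=\frac{\|\nabla g(x)\|^2}{2},
\end{equation*}
and~\eqref{eq-theo} becomes $\mathrm{Ent}_{\mu_\Phi}\left(e^g\right)\leq\frac12\int\|\nabla g\|^2e^gd\mu_\Phi$, which is Gross's inequality with its optimal constant.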
If $\phi$ is a function on $\mathbb{R}^n$ such that $\int e^{-\phi} dx<\infty$, we denote by $\mu_\phi$ the probability measure
\begin{equation}
\label{e-phi}
d\mu_\phi(x)=\frac{e^{-\phi(x)}}{Z_\phi}dx,
\end{equation}
where $Z_\phi=\int {e^{-\phi(x)}}dx$.

\begin{ecor}
\label{co-pe}
Assume that $\Phi$ satisfies the conditions of Theorem~\ref{the-theo}. Let $\phi=\Phi+U$, where $U$ is a bounded function on $\mathbb{R}^n$, and denote by $\mu_\phi$ the measure defined by~\eqref{e-phi}. Then for all smooth enough functions $g$ on $\mathbb{R}^n$, one has
\begin{equation}
\label{eq-core}
\mathrm{Ent}_{\mu_\phi}\left(e^g\right)\leq e^{2\text{osc}(U)}\int \left\{x\cdot\nabla g(x)-\Phi^*(\nabla\Phi(x))+\Phi^*\left(\nabla\Phi(x)-\nabla g(x)\right)\right\}e^{g(x)}d\mu_\phi(x),
\end{equation}
where $\text{osc}(U)=\sup(U)-\inf(U)$.
\end{ecor}
\begin{eproof}
First we observe that
\begin{equation}
\label{eq-ra}
e^{-\text{osc}(U)}\leq\frac{d{\mu}_\phi}{d\mu_\Phi}\leq e^{\text{osc}(U)}.
\end{equation}
Moreover, for every probability measure $\nu$ on $\mathbb{R}^n$,
$$
\mathrm{Ent}_{\nu}\left(e^g\right)=\inf_{a\geq0}\left\{\int\left(e^g\log\frac{e^g}{a}-e^g+a\right)d\nu\right\}.
$$
Using the fact that $x \log\frac{x}{a}-x+a\geq0$ for all $x,a>0$, we get
$$
e^{-\text{osc}(U)}\mathrm{Ent}_{{\mu}_\phi}\left(e^g\right)\leq\mathrm{Ent}_{\mu_\Phi}\left(e^g\right)\leq e^{\text{osc}(U)}\mathrm{Ent}_{{\mu}_\phi}\left(e^g\right).
$$
Then, if $g$ is a smooth enough function on $\mathbb{R}^n$, we have
\begin{equation*}
\begin{array}{rl}
\displaystyle\mathrm{Ent}_{{\mu}_\phi}\left(e^g\right)& \displaystyle\leq e^{\text{osc}(U)}\mathrm{Ent}_{{\mu}_\Phi}\left(e^g\right)\\
&\displaystyle \leq e^{\text{osc}(U)} \int \left\{x\cdot\nabla g(x)-\Phi^*(\nabla\Phi(x))+\Phi^*\left(\nabla\Phi(x)-\nabla g(x)\right)\right\}e^{g(x)}d\mu_\Phi(x).
\end{array}
\end{equation*}
The convexity of $\Phi^*$ on $\mathbb{R}^n$ and the relation $\nabla\Phi^*\left(\nabla\Phi(x)\right)=x$ lead to
$$
\forall x\in\mathbb{R}^n,\quad {x\cdot\nabla g(x)-\Phi^*(\nabla\Phi(x))+\Phi^*\left(\nabla\Phi(x)-\nabla g(x)\right)}\geq0.
$$
Finally, by~\eqref{eq-ra} we get
\begin{equation*}
\mathrm{Ent}_{{\mu}_\phi}\left(e^g\right) \leq e^{2\text{osc}(U)} \int \left\{x\cdot\nabla g(x)-\Phi^*(\nabla\Phi(x))+\Phi^*\left(\nabla\Phi(x)-\nabla g(x)\right)\right\}e^gd\mu_\phi.
\end{equation*}
\end{eproof}

\begin{erem}
It is not necessary to state a tensorization result, as we obtain exactly the same expression when computing directly with a product measure.
\end{erem}

Theorem~\ref{the-theo} also implies the examples given in \cite{bobkov-ledoux1} and \cite{bobkov-zeg}.

\begin{ecor}[\cite{bobkov-ledoux1}]
\label{bl-prop}
Let $p\geq 2$ and let $\phi(x)=\|x\|^p/p$, where $\|\cdot\|$ is the Euclidean norm on $\mathbb{R}^n$.
Then there exists $c>0$ such that for all smooth enough functions $g$,
\begin{equation}
\label{e-prop}
\mathrm{Ent}_{\mu_\phi}\left(e^g\right)\leq c\int \|\nabla g\|^qe^gd\mu_\phi,
\end{equation}
where $1/p+1/q=1$ and $\mu_\phi$ is defined by~\eqref{e-phi}.
\end{ecor}
\begin{eproof}
Using Theorem~\ref{the-theo}, we just have to prove that there exists $c>0$ such that
\begin{equation*}
\forall x,y\in\mathbb{R}^n,\quad {x\cdot y-\phi^*(\nabla\phi(x))+\phi^*\left(\nabla\phi(x)- y\right)}\leq c\|y\|^q.
\end{equation*}
Assume that $y\neq 0$ and define the function $\psi$ by
$$
\psi(x,y)=\frac{x\cdot y-\phi^*(\nabla\phi(x))+\phi^*\left(\nabla\phi(x)-y\right)}{\|y\|^q}.
$$
We prove that $\psi$ is a bounded function. We know that $\phi^*(x)=\|x\|^q/q$. Choosing $z=x\|x\|^{p-2}/\|y\|$ and denoting $e=y/\|y\|$, we obtain
$$
\psi(x,y)=\bar{\psi}(z,e)=z\cdot e\,\|z\|^{q-2}-\frac{1}{q}\|z\|^q+\frac{1}{q}\|{z}-{e}\|^q.
$$
Taylor's formula then yields $\bar{\psi}(z,e)=O\left(\|z\|^{q-2}\right)$ as $\|z\|\rightarrow\infty$. But $p\geq 2$ implies $q\leq2$, so $\bar{\psi}$ is a bounded function. We then get the result with $c=\sup\bar{\psi}=\sup{\psi}$.
\end{eproof}

Optimal transportation is also used by Cordero-Erausquin, Gangbo and Houdr\'e in~\cite{co-ga-ho} to prove a particular case of inequality~\eqref{e-prop}.
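For completeness (a standard computation, not spelled out above), the Legendre transform used in the proof follows in one line: for $\phi(x)=\|x\|^p/p$ the supremum defining $\phi^*(y)$ is attained at $x$ aligned with $y$ with $\|x\|=\|y\|^{1/(p-1)}$, so
\begin{equation*}
\phi^*(y)=\sup_{x\in\mathbb{R}^n}\left\{x\cdot y-\frac{\|x\|^p}{p}\right\}=\|y\|^{\frac{p}{p-1}}-\frac{1}{p}\|y\|^{\frac{p}{p-1}}=\left(1-\frac{1}{p}\right)\|y\|^{q}=\frac{\|y\|^q}{q},
\end{equation*}
with $q=p/(p-1)$.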
In Proposition~2.1 of \cite{bobkov-ledoux1}, Bobkov and Ledoux prove that the Pr\'ekopa-Leindler inequality implies the Brascamp-Lieb inequality. In our setting, Theorem~\ref{the-theo} also implies the Brascamp-Lieb inequality, as the next corollary shows.

\begin{ecor}
Let $\Phi$ satisfy the conditions of Theorem~\ref{the-theo}. Then for all smooth enough functions $g$ we get
\begin{equation*}
\mathrm{Var}_{\mu_\Phi}\left(g\right)\leq \int \nabla g\cdot{\text{Hess}}(\Phi)^{-1}\nabla gd\mu_\Phi,
\end{equation*}
where ${\text{Hess}}(\Phi)^{-1}$ denotes the inverse of the Hessian of $\Phi$.
\end{ecor}
\begin{eproof}
Assume that $g$ is a ${\mathcal C}^\infty$ function with compact support and apply inequality~\eqref{eq-theo} to the function $\epsilon g$, where $\epsilon>0$. Taylor's formula gives
$$
\mathrm{Ent}_{\mu_\Phi}\left(e^{\epsilon g}\right)=\frac{\epsilon^2}{2}\mathrm{Var}_{\mu_\Phi}\left(g\right)+o(\epsilon^2),
$$
and
\begin{multline*}
\int \left\{x\cdot\nabla (\epsilon g)(x)-\Phi^*(\nabla\Phi(x))+\Phi^*\left(\nabla\Phi(x)-\nabla (\epsilon g)(x)\right)\right\}e^{\epsilon g(x)}d\mu_\Phi(x) = \\
\frac{\epsilon^2}{2}\int\nabla g\cdot{\text{Hess}}(\Phi^*)\left(\nabla \Phi\right)\nabla gd\mu_\Phi+o(\epsilon^2).
\end{multline*}
Since $\nabla\Phi^*(\nabla\Phi(x))=x$, one has ${\text{Hess}}(\Phi^*)\left(\nabla\Phi\right)={\text{Hess}}(\Phi)^{-1}$, which finishes the proof.
\end{eproof}

\begin{erem}
Let $\Phi$ satisfy the conditions of Theorem~\ref{the-theo}, and let $L$ be defined by
$$
\forall x,y\in\mathbb{R}^n,\quad L(x,y)=\Phi(y)-\Phi(x)-(y-x)\cdot\nabla\Phi(x).
$$
The convexity of $\Phi$ implies that $L(x,y)\geq0$ for all $x,y\in\mathbb{R}^n$. Let $F$ be a probability density with respect to the measure $\mu_\Phi$; we define the Wasserstein distance with cost function $L$ by
$$
W_L(Fd\mu_\Phi,d\mu_\Phi)=\inf\left\{ \int L(x,y)d\pi(x,y)\right\},
$$
where the infimum is taken over all probability measures $\pi$ on $\mathbb{R}^n\times\mathbb{R}^n$ with marginal distributions $Fd\mu_\Phi$ and $d\mu_\Phi$. Bobkov and Ledoux proved in \cite{bobkov-ledoux1} the following transportation inequality:
\begin{equation}
\label{eq-t}
W_L(Fd\mu_\Phi,d\mu_\Phi)\leq \mathrm{Ent}_{\mu_\Phi}\left(F\right).
\end{equation}
The main result of Otto and Villani in \cite{villani} is the following: the classical logarithmic Sobolev inequality of Gross (when $\Phi(x)=\|x\|^2/2+(n/2)\log (2\pi)$) implies the transportation inequality~\eqref{eq-t} for all probability densities $F$ with respect to $\mu_\Phi$ (see also \cite{bgl} for another proof). The method developed in~\cite{bgl} makes it possible to extend this property to $\Phi(x)=\|x\|^p+Z_\Phi$ ($p\geq2$). In the general case studied in this article, we do not know whether the modified logarithmic Sobolev inequality~\eqref{eq-theo} implies the transportation inequality~\eqref{eq-t}.
\end{erem}

\section{Applications}
\label{sec-app}

\subsection{Application to modified logarithmic Sobolev inequalities}
\label{sec-mod}

In \cite{ge-gu-mi,ge-gu-mi2}, a modified logarithmic Sobolev inequality is given for measures $\mu_\phi$ on $\mathbb{R}$ with a potential between $|x|$ and $x^2$. More precisely, let $\phi$ be an even function on the real line satisfying the following property: there exist $M\geq 0$ and $0<\varepsilon\leq1/2$ such that
\begin{equation*}
\tag{\bf H}
\forall{x\geq M},\,\,\,\,(1+\varepsilon)\phi(x)\leq x\phi'(x)\leq(2-\varepsilon)\phi(x).
\end{equation*}
Then there exist $A,B,D>0$ such that for all smooth functions $g$ we have
\begin{equation}
\label{ggl}
\mathrm{Ent}_{\mu_\phi}\left(e^g\right)\leq A\int H_{\phi}\left(g'\right)e^gd\mu_\phi,
\end{equation}
where
\begin{equation*}
\label{defh}
H_{\phi}(x)=\left\{
\begin{array}{rl}
\phi^*\left(Bx\right) &\text{ if }|x|\geq D,\\
x^2 &\text{ if }|x|\leq D,
\end{array}
\right.
\end{equation*}
and $\mu_\phi$ is defined by~\eqref{e-phi}. The proof of inequality~\eqref{ggl} is rather technical and is divided into two parts: the large and the small entropy. Using Theorem~\ref{the-theo}, one obtains two results in this direction. In the next theorem, we extend~\eqref{ggl} to the case where the potential grows faster than $x^2$.

\begin{ethm}
\label{theo-logg}
Let $\Phi$ be a real function satisfying the conditions of Theorem~\ref{the-theo}.
Assume that $\Phi$ is even, $\Phi(0)=0$, $\Phi''$ is decreasing on $]-\infty,0]$ and increasing on $[0,+\infty[$, and that
\begin{equation}
\label{eq-hypo}
\forall x\in\mathbb{R}, \quad \Phi''(x)\geq \Phi''(0)=\lambda>0 \text{ and } \lim_{|x|\rightarrow\infty} \Phi''(x)=\infty.
\end{equation}
Assume also that there exist $A>1$ and $C>0$ such that for $|x|\geq C$,
\begin{equation}
\label{eq-fa}
A\Phi(x)\leq x\Phi'(x).
\end{equation}
Then for all smooth enough functions $g$,
\begin{equation}
\label{eq-logg}
\mathrm{Ent}_{\mu_\Phi}\left(e^g\right)\leq \int H_\Phi(g')e^gd\mu_\Phi,
\end{equation}
where
\begin{equation}
\label{eq-defhh}
H_{\Phi}(x)= \left\{
\begin{array}{r}
\displaystyle\frac{2A}{A-1}\Phi^*\left(\frac{x}{2}\right),\quad \text{if}\quad |x|> C,\\
\displaystyle\frac{1}{2\lambda}x^2,\quad \text{if}\quad |x|\leq C.
\end{array}
\right.
\end{equation}
\end{ethm}

The proof of this theorem is a straightforward application of the following lemma:
\begin{elem}
\label{lem-gg}
Assume that $\Phi$ satisfies the conditions of Theorem~\ref{theo-logg}. Then we get
\begin{equation}
\label{eq-lemm}
\forall x,y\in\mathbb{R},\quad xy-\Phi^*(\Phi'(x))+\Phi^*(\Phi'(x)-y)\leq H_\Phi(y).
\end{equation}
\end{elem}
\begin{eproof}
We know that for all $x\in\mathbb{R}$, $x=\Phi^{*'}(\Phi'(x))$, and the convexity of $\Phi^*$ yields
\begin{equation}
\label{eq-bas}
xy-\Phi^*(\Phi'(x))+\Phi^*(\Phi'(x)-y)\leq y\left(\Phi^{*'}(\Phi'(x))-\Phi^{*'}(\Phi'(x)-y)\right).
\end{equation}
Let $y\in\mathbb{R}$ be fixed and set $\psi_y(x)=\Phi^{*'}(x+y)-\Phi^{*'}(x)$. Since $\Phi$ is convex, one gets $\Phi^{*''}(\Phi'(x))\Phi''(x)=1$ for all $x\in\mathbb{R}$, and the maximum of $\psi_y$ is attained at some $x_0\in\mathbb{R}$ satisfying ${\Phi^{*''}(x_0)=\Phi^{*''}(x_0+y)}$. Since $\Phi$ is even, with $\Phi''$ decreasing on $]-\infty,0]$ and increasing on $[0,+\infty[$, one gets $x_0+y=-x_0$. Then one obtains
$$
\forall y\in\mathbb{R},\quad\forall x\in\mathbb{R},\quad\Phi^{*'}(x+y)-\Phi^{*'}(x)\leq2\Phi^{*'}\left(\frac{y}{2}\right),
$$
$$
\forall y\in\mathbb{R},\quad\forall x\in\mathbb{R},\quad xy-\Phi^*(\Phi'(x))+\Phi^*(\Phi'(x)-y)\leq 2y\,\Phi^{*'}\left(\frac{y}{2}\right).
$$
By~\eqref{eq-fa} one gets
\begin{equation}
\label{ea-dr}
\forall |y|\geq C,\quad\forall x\in\mathbb{R},\quad xy-\Phi^*(\Phi'(x))+\Phi^*(\Phi'(x)-y)\leq \frac{2A}{A-1}\Phi^{*}\left(\frac{y}{2}\right).
\end{equation}
Taylor's formula leads, for some $\theta\in(0,1)$, to
$$
\forall x,y\in\mathbb{R},\quad xy-\Phi^*(\Phi'(x))+\Phi^*(\Phi'(x)-y)\leq \frac{y^2}{2}\Phi^{*''}(\Phi'(x)-\theta y) \leq \frac{y^2}{2\lambda},
$$
and combining this with~\eqref{ea-dr} gives~\eqref{eq-lemm}.
\end{eproof}

\begin{erem}
\begin{itemize}
\item The last theorem improves the classical consequence of the Bakry-\'Emery criterion for the logarithmic Sobolev inequality. Indeed, when a probability measure is more log-concave than the Gaussian measure, we obtain a modified logarithmic Sobolev inequality sharper than the classical inequality of Gross. Using such an inequality, one obtains a concentration inequality which is better adapted to the probability measure studied.
\item Theorem~\ref{theo-logg} is more precise than Corollary~\ref{bl-prop}, proved by Bobkov, Ledoux and Zegarlinski in \cite{bobkov-ledoux1, bobkov-zeg}. The particularity of the function $H_\Phi$ defined by~\eqref{eq-defhh} is its behaviour around the origin. One easily obtains that if a probability measure satisfies inequality~\eqref{eq-logg}, then it satisfies a Poincar\'e inequality with constant $1/\lambda$.
\item Note also that this method cannot be applied to measures with a concentration between $e^{-|x|}$ and $e^{-x^2}$, described in \cite{ge-gu-mi,ge-gu-mi2}. In particular, Lemma~\ref{lem-gg} is false in this case.
\item Note finally that condition~\eqref{eq-fa} is a technical condition, satisfied by a large class of functions.
\end{itemize}
\end{erem}

A natural application of Theorem~\ref{theo-logg} is a concentration inequality in the spirit of Talagrand, see~\cite{talagrand}.
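For instance (this example is ours and is not taken from the original argument), both the lower growth condition~\eqref{eq-fa} and the matching upper bound required in the next corollary hold for
\begin{equation*}
\Phi(x)=\frac{x^6}{6}+\frac{x^2}{2}:\qquad \Phi''(x)=5x^4+1\geq1,\qquad x\Phi'(x)=x^6+x^2,
\end{equation*}
since $5\Phi(x)\leq x\Phi'(x)$ as soon as $x^4\geq9$, while $x\Phi'(x)\leq6\Phi(x)$ for all $x\in\mathbb{R}$; one may thus take $A=5$ and $B=6$.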
\begin{ecor}
\label{cor-cons}
Assume that $\Phi$ satisfies the conditions of Theorem~\ref{theo-logg} and that there exists $B>1$ such that for $|x|$ large enough,
\begin{equation}
\label{eq-fa2}
x\Phi'(x)\leq B\Phi(x).
\end{equation}
Then there exist constants $C_1,C_2,C_3\geq0$, independent of $n$, such that: if $F$ is a function on $\mathbb{R}^n$ satisfying $\|\partial_iF\|_\infty\leq 1$ for all $i$, then for all $\lambda\geq0$,
\begin{equation}
\label{eq-cons}
\mu^{\otimes n}\left(\left|F -\mu^{\otimes n}(F)\right|\geq \lambda\right)\leq \left\{
\begin{array}{ll}
\displaystyle2\exp\left(-nC_1\Phi\left(C_2\frac{\lambda}{n}\right)\right)&\text{if } \lambda > {nC_3 },\\
\displaystyle2\exp\left(-C_1\frac{\lambda^2}{n}\right)& \text{if }0 \leq\lambda\leq {nC_3 }.
\end{array}
\right.
\end{equation}
\end{ecor}
\begin{eproof}
Using the additional hypothesis~\eqref{eq-fa2}, the proof of~\eqref{eq-cons} is the same as that of Proposition~3.2 of~\cite{ge-gu-mi2}.
\end{eproof}

An $n$-dimensional version of~\eqref{eq-logg} is also available.

\begin{eprop}
\label{thmn}
Let $\phi$ be a $\mathcal C^2$, strictly convex and even function on $\mathbb{R}^n$ satisfying~\eqref{eq-propphi} and~\eqref{hypo-++}.
Assume also that $\phi\geq0$ and $\phi(0)=0$ (which implies that $0$ is the unique minimum of $\phi$), that
\begin{equation}
\label{eqm}
\lim_{\alpha\rightarrow0,\,\alpha\in[0,1]} \sup_{x\in\mathbb{R}^n}\left\{(1-\alpha)\frac{\phi^*\left(\frac{x}{1-\alpha}\right)}{\phi^*(x)}\right\}=1,
\end{equation}
and that there exists $A>0$ such that
\begin{equation}
\label{eqh}
\forall x\in\mathbb{R}^n,\quad x\cdot \nabla \phi(x)\leq (A+1)\phi(x).
\end{equation}
Then there exist $C_1,C_2,C_3\geq0$ such that for all smooth enough functions $g$ with $\int e^gd\mu_\phi=1$ we get
\begin{equation}
\label{eqd}
\mathrm{Ent}_{\mu_\phi}\left(e^g\right)\leq C_1\int\phi^*\left(C_2\nabla g\right)e^gd\mu_\phi+C_3.
\end{equation}
\end{eprop}
\begin{eproof}
Apply Theorem~\ref{the-theo} with $\Phi=\phi+\log Z_\phi$; one has
\begin{equation*}
\mathrm{Ent}_{\mu_\phi}\left(e^g\right)\leq \int \left\{x\cdot\nabla g(x)-\phi^*(\nabla\phi(x))+\phi^*\left(\nabla\phi(x)-\nabla g(x)\right)\right\}e^gd\mu_\phi.
\end{equation*}
The convexity of $\phi^*$ implies, for all $\alpha\in[0,1[$,
\begin{equation}
\label{eqc}
\forall x\in\mathbb{R}^n,\quad\phi^*\left(\nabla\phi(x)-\nabla g(x)\right)\leq(1-\alpha)\phi^*\left(\frac{\nabla\phi(x)}{1-\alpha}\right)+ \alpha\phi^*\left(\frac{-\nabla g(x)}{\alpha}\right).
\end{equation}
Recall that $\phi^*$ is also an even function.
Young's inequality implies that
\begin{equation}
\label{eqy}
\forall x\in\mathbb{R}^n,\quad x\cdot\frac{\nabla g(x)}{\alpha}\leq\phi(x)+\phi^*\left(\frac{\nabla g(x)}{\alpha}\right).
\end{equation}
Using~\eqref{eqc} and~\eqref{eqy} we get
\begin{multline*}
\mathrm{Ent}_{\mu_\phi}\left(e^g\right)\leq 2\alpha\int \phi^*\left(\frac{\nabla g}{\alpha}\right)e^gd\mu_\phi+ \alpha \int \phi\left(x\right)e^gd\mu_\phi+\\
\int \left((1-\alpha)\phi^*\left(\frac{\nabla\phi(x)}{1-\alpha}\right)-\phi^*\left({\nabla\phi(x)}\right)\right)e^gd\mu_\phi.
\end{multline*}
We have $\phi^*(\nabla \phi(x))=x\cdot\nabla \phi(x)-\phi(x)$, so inequality~\eqref{eqh} implies that $\phi^*\left(\nabla \phi(x)\right)\leq A\phi(x)$. Since $\phi(0)=0$ one has $\phi^*\geq 0$, so that
$$
\mathrm{Ent}_{\mu_\phi}\left(e^g\right)\leq 2\alpha\int \phi^*\left(\frac{\nabla g}{\alpha}\right)e^gd\mu_\phi+ (\alpha+A\left|\psi(\alpha)-1\right|)\int \phi\, e^gd\mu_\phi,
$$
where
\begin{equation}
\label{eqp}
\psi(\alpha)=\sup_{x\in\mathbb{R}^n}\left\{(1-\alpha)\frac{\phi^*\left(\frac{x}{1-\alpha}\right)}{\phi^*(x)}\right\}.
\end{equation}
Let $\lambda>0$; since $\int e^g d\mu_\phi=1$, we get
$$
\int \phi\, e^gd\mu_\phi\leq \lambda\left(\mathrm{Ent}_{\mu_\phi}\left(e^g\right)+\log\int e^{\phi /\lambda}d\mu_\phi\right).
$$
One has $\displaystyle\lim_{\lambda\rightarrow\infty}\log\int e^{\Phi /\lambda}d\mu_\Phi=0$, so let us choose $\lambda$ large enough that $\log\int e^{\Phi /\lambda}d\mu_\Phi\leq 1$. Using property~\eqref{eqm} and taking $\alpha$ such that $(\alpha+A\ABS{\psi(\alpha)-1})\lambda\leq1/2$, we obtain
$$
\ent{\mu_\Phi}{e^g}\leq 2\alpha\int \Phi^*\PAR{\frac{\nabla g}{\alpha}}e^gd\mu_\Phi+ \frac{1}{2}\PAR{\ent{\mu_\Phi}{e^g}+1},
$$
which gives
$$
\ent{\mu_\Phi}{e^g}\leq{4\alpha}\int \Phi^*\PAR{\frac{\nabla g}{\alpha}}e^gd\mu_\Phi+1.
$$
\end{eproof}
The main difference between the inequality obtained here and the modified logarithmic Sobolev inequality~\eqref{eq-logg} is that we do not get an equality when $f=1$. Inequality~\eqref{eqd} is therefore called a non-tight inequality, and it is more difficult to obtain.

\subsection{Application to the Euclidean logarithmic Sobolev inequality}
\label{sec-e}

\begin{ethm}
\label{theo2}
Assume that the function $\varphi$ satisfies the conditions of Theorem~\ref{the-theo}. Then for all $\lambda>0$ and all smooth enough functions $g$ on $\ensuremath{\mathbb{R}}^n$,
\begin{equation}
\label{e-theo2}
\ent{dx}{e^g}\leq -n\log\PAR{\lambda e}\int e^gdx+\int\varphi^*\PAR{-\lambda\nabla g}e^gdx.
\end{equation}
This inequality is optimal in the sense that if $g=-\varphi(x-\bar{x})$ with $\bar{x}\in\ensuremath{\mathbb{R}}^n$ and $\lambda=1$ we get an equality.
\end{ethm}

\begin{eproof}
Integrating by parts in the second term of~\eqref{eq-theo} yields, for all smooth enough $g$,
$$
\int {x\cdot\nabla g(x)}e^{g(x)}d\mu_\varphi(x)= \int \PAR{-n+x\cdot\nabla\varphi(x)}e^{g(x)}d\mu_\varphi(x).
$$
Then, using the equality $\varphi^*\PAR{\nabla\varphi}=x\cdot\nabla\varphi(x)-\varphi(x)$, we get for all smooth enough functions $g$,
\begin{equation*}
\ent{\mu_\varphi}{e^g}\leq \int \PAR{-n+\varphi+\varphi^*\PAR{\nabla\varphi-\nabla g}}e^{g}d\mu_\varphi.
\end{equation*}
Let us now take $g=f+\varphi$ to obtain
\begin{equation*}
\ent{dx}{e^f}\leq \int \PAR{-n+\varphi^*\PAR{-\nabla f}}e^{f}dx.
\end{equation*}
Finally, let $\lambda>0$ and take $f(x)=g(\lambda x)$; we then get
\begin{equation*}
\ent{dx}{e^g}\leq -n\log\PAR{\lambda e}\int e^gdx+\int\varphi^*\PAR{-\lambda\nabla g}e^gdx,
\end{equation*}
which proves~\eqref{e-theo2}. If now $g=-\varphi(x-\bar{x})$ with $\bar{x}\in\ensuremath{\mathbb{R}}^n$, an easy computation proves that for $\lambda=1$ the equality holds.
\end{eproof}
For inequality~\eqref{e-theo2} there exists an optimal $\lambda_0>0$, and when $C$ is homogeneous we can improve the last result. We obtain an inequality called the {\it Euclidean logarithmic Sobolev inequality}, stated in the next corollary.
\begin{ecor}
\label{cor1}
Let $C$ be a strictly convex function on $\ensuremath{\mathbb{R}}^n$ satisfying the conditions of Theorem~\ref{the-theo}, and assume that $C$ is $q$-homogeneous for some $q>1$,
$$
\forall \lambda\geq0\quad \text{and}\quad \forall x\in\ensuremath{\mathbb{R}}^n,\quad C(\lambda x)={\lambda}^qC(x).
$$
Then for all smooth enough functions $g$ on $\ensuremath{\mathbb{R}}^n$ we get
\begin{equation}
\label{e-ecl}
\ent{dx}{e^g}\leq \frac{n}{p}\int e^gdx\log\PAR{\frac{p}{n e^{p-1}{\mathcal L}^{p/n}}\frac{\int C^*\PAR{-\nabla g}e^gdx}{\int e^gdx} },
\end{equation}
where ${\mathcal L}=\int e^{-C}dx$ and $1/p+1/q=1$.
\end{ecor}

\begin{eproof}
Let us apply Theorem~\ref{theo2} with $\varphi=C+\log{\mathcal L}$. Then $\varphi$ satisfies the conditions of Theorem~\ref{theo2} and we get
\begin{equation*}
\ent{dx}{e^g}\leq -n\log\PAR{\lambda e {\mathcal L}^{1/n}}\int e^gdx+\int C^*\PAR{-\lambda\nabla g}e^gdx.
\end{equation*}
Since $C$ is $q$-homogeneous, an easy computation proves that $C^*$ is $p$-homogeneous, where $1/p+1/q=1$. An optimization over $\lambda>0$ gives inequality~\eqref{e-ecl}.
\end{eproof}

\begin{erem}
Inequality~\eqref{e-ecl} is useful to prove regularity properties, such as hypercontractivity, for nonlinear diffusions like the $p$-Laplacian, see~\cite{ddg}. The function $C$ is then adapted to the nonlinear diffusion studied.
\end{erem}

Inequality~\eqref{e-ecl} is called the Euclidean logarithmic Sobolev inequality, and the computations of this section generalize the work of Carlen~\cite{carlen}. This inequality, with $p=2$, appears in the work of Weissler~\cite{weissler}. It was discussed and extended to this last version in many articles, see \cite{beckner,del-dol,gentil03,ghoussoub}.
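For the reader's convenience, let us sketch the optimization behind Corollary~\ref{cor1} (this computation is ours, not part of the original text; $m$ and $I$ are merely shorthand notations). In the model case $C(x)=|x|^q/q$ one finds $C^*(x)=|x|^p/p$, which is indeed $p$-homogeneous. For a general $p$-homogeneous $C^*$, writing $m=\int e^gdx$ and $I=\int C^*\PAR{-\nabla g}e^gdx$, Theorem~\ref{theo2} applied with the function $C+\log{\mathcal L}$ gives
$$
\ent{dx}{e^g}\leq -nm\log\lambda-nm-m\log{\mathcal L}+\lambda^pI.
$$
The right-hand side is minimal at $\lambda_0$ given by $\lambda_0^p=nm/(pI)$; since $-nm\log\lambda_0=\frac{n}{p}m\log\frac{pI}{nm}$ and $\lambda_0^pI=nm/p$, substituting $\lambda_0$ yields exactly inequality~\eqref{e-ecl}.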
\begin{erem}
As explained in the introduction, the computation used in Corollary~\ref{cor1} clearly proves that inequality~\eqref{e-ecl} is equivalent to inequality~\eqref{e-theo2}. Agueh, Ghoussoub and Kang, in~\cite{ghoussoub}, used the Monge-Kantorovich theory of mass transport to prove inequalities~\eqref{e-theo2} and~\eqref{e-ecl}. Their approach gives another way to establish Theorem~\ref{the-theo}. Note finally that inequality~\eqref{e-ecl} is optimal: extremal functions are given by $g(x)=-bC(x-\bar{x})$, with $\bar{x}\in\ensuremath{\mathbb{R}}^n$ and $b>0$. Whether they are the only ones is still an open question.
\end{erem}

{\bf Acknowledgments}: I would like to warmly thank the referee for pointing out errors in the first version.

\newcommand{\etalchar}[1]{$^{#1}$}
\begin{thebibliography}{DPDG04}

\bibitem[ABC{\etalchar{+}}00]{logsob}
C.~An{\'e}, S.~Blach{\`e}re, D.~Chafa{\"\i}, P.~Foug{\`e}res, I.~Gentil, F.~Malrieu, C.~Roberto, and G.~Scheffer.
\newblock {\em Sur les in{\'e}galit{\'e}s de {S}obolev logarithmiques}, volume~10 of {\em Panoramas et Synth{\`e}ses}.
\newblock Soci{\'e}t{\'e} {M}ath{\'e}matique de {F}rance, Paris, 2000.

\bibitem[AGK04]{ghoussoub}
M.~Agueh, N.~Ghoussoub, and X.~Kang.
\newblock Geometric inequalities via a general comparison principle for interacting gases.
\newblock {\em Geom. Funct. Anal.}, 14(1):215--244, 2004.

\bibitem[B{\'E}85]{b-e}
D.~Bakry and M.~{\'E}mery.
\newblock Diffusions hypercontractives.
\newblock In {\em S\'eminaire de probabilit\'es, XIX, 1983/84}, volume 1123 of {\em Lecture Notes in Math.}, pages 177--206. Springer, 1985.

\bibitem[Bec99]{beckner}
W.~Beckner.
\newblock Geometric asymptotics and the logarithmic {S}obolev inequality.
\newblock {\em Forum Math.}, 11(1):105--137, 1999.
\bibitem[BGL01]{bgl}
S.~Bobkov, I.~Gentil, and M.~Ledoux.
\newblock Hypercontractivity of {H}amilton-{J}acobi equations.
\newblock {\em J. Math. Pures Appl.}, 80(7):669--696, 2001.

\bibitem[BL00]{bobkov-ledoux1}
S.~G. Bobkov and M.~Ledoux.
\newblock From {B}runn-{M}inkowski to {B}rascamp-{L}ieb and to logarithmic {S}obolev inequalities.
\newblock {\em Geom. Funct. Anal.}, 10(5):1028--1052, 2000.

\bibitem[BZ05]{bobkov-zeg}
S.~G. Bobkov and B.~Zegarlinski.
\newblock Entropy bounds and isoperimetry.
\newblock {\em Mem. Amer. Math. Soc.}, 829:69 p., 2005.

\bibitem[Car91]{carlen}
E.~A. Carlen.
\newblock Superadditivity of {F}isher's information and logarithmic {S}obolev inequalities.
\newblock {\em J. Funct. Anal.}, 101(1):194--211, 1991.

\bibitem[CEGH04]{co-ga-ho}
D.~Cordero-Erausquin, W.~Gangbo, and C.~Houdr{\'e}.
\newblock Inequalities for generalized entropy and optimal transportation.
\newblock In {\em Recent advances in the theory and applications of mass transport}, volume 353 of {\em Contemp. Math.}, pages 73--94. Amer. Math. Soc., Providence, RI, 2004.

\bibitem[DPD03]{del-dol}
M.~Del~Pino and J.~Dolbeault.
\newblock The optimal {E}uclidean {$L^{p}$}-{S}obolev logarithmic inequality.
\newblock {\em J. Funct. Anal.}, 197(1):151--161, 2003.

\bibitem[DPDG04]{ddg}
M.~Del~Pino, J.~Dolbeault, and I.~Gentil.
\newblock Nonlinear diffusions, hypercontractivity and the optimal {$L^p$}-{E}uclidean logarithmic {S}obolev inequality.
\newblock {\em J. Math. Anal. Appl.}, 293(2):375--388, 2004.

\bibitem[Gen03]{gentil03}
I.~Gentil.
\newblock The general optimal {$L\sp p$}-{E}uclidean logarithmic {S}obolev inequality by {H}amilton-{J}acobi equations.
\newblock {\em J. Funct. Anal.}, 202(2):591--599, 2003.

\bibitem[GGM05]{ge-gu-mi}
I.~Gentil, A.~Guillin, and L.~Miclo.
\newblock Modified logarithmic {S}obolev inequalities and transportation inequalities.
\newblock {\em Probab. Theory Related Fields}, 133(3):409--436, 2005.

\bibitem[GGM07]{ge-gu-mi2}
I.~Gentil, A.~Guillin, and L.~Miclo.
\newblock Logarithmic {S}obolev inequalities in null curvature.
\newblock {\em Rev. Mat. Iberoamericana}, 23(1):237--260, 2007.

\bibitem[Gro75]{gross}
L.~Gross.
\newblock Logarithmic {S}obolev inequalities.
\newblock {\em Amer. J. Math.}, 97(4):1061--1083, 1975.

\bibitem[Gup80]{gupta}
S.~D. Gupta.
\newblock {B}runn-{M}inkowski inequality and its aftermath.
\newblock {\em J. Multivariate Anal.}, 10:296--318, 1980.

\bibitem[Mau04]{maurey}
B.~Maurey.
\newblock In{\'e}galit{\'e} de {B}runn-{M}inkowski-{L}usternik, et autres in{\'e}galit{\'e}s g{\'e}om{\'e}triques et fonctionnelles.
\newblock {\em S{\'e}minaire Bourbaki}, 928, 2003/04.

\bibitem[OV00]{villani}
F.~Otto and C.~Villani.
\newblock Generalization of an inequality by {T}alagrand, and links with the logarithmic {S}obolev inequality.
\newblock {\em J. Funct. Anal.}, 173(2):361--400, 2000.

\bibitem[Tal95]{talagrand}
M.~Talagrand.
\newblock Concentration of measure and isoperimetric inequalities in product spaces.
\newblock {\em Inst. Hautes \'Etudes Sci. Publ. Math.}, (81):73--205, 1995.

\bibitem[Wei78]{weissler}
F.~B. Weissler.
\newblock Logarithmic {S}obolev inequalities for the heat-diffusion semigroup.
\newblock {\em Trans. Amer. Math. Soc.}, 237:255--269, 1978.

\end{thebibliography}

\end{document}
\begin{document}
\title[Damped Driven Coupled Oscillators] {Damped driven coupled oscillators: entanglement, decoherence and the classical limit}
\author{R.~D.~Guerrero Mancilla, R.~R.~Rey-Gonz\'alez and K.~M.~Fonseca-Romero}
\address{\centerline{\emph{Grupo de \'Optica e Informaci\'on Cu\'antica, Departamento de F\'{\i}sica,}} \centerline{\emph{Universidad Nacional de Colombia - Bogot\'a }}}
\ead{[email protected]}
\ead{[email protected]}
\ead{[email protected]}
\begin{abstract}
\noindent{\footnotesize The interaction of (two-level) Rydberg atoms with dissipative QED cavity fields can be described classically or quantum mechanically, even for very low temperatures and mean number of photons, provided the damping constant is large enough. We investigate the quantum-classical border, the entanglement and decoherence of an analytically solvable model, analogous to the atom-cavity system, in which the atom (field) is represented by a (driven and damped) harmonic oscillator. The maximum value of entanglement is shown to depend on the initial state and the dissipation-rate to coupling-constant ratio. While in the original model the atomic entropy never grows appreciably (for large dissipation rates), in our model it reaches a maximum before decreasing. Although both models predict small values of entanglement and dissipation, for fixed times of the order of the inverse of the coupling constant and large dissipation rates, these quantities decrease faster, as a function of the ratio of the dissipation rate to the coupling constant, in our model.} {\thanks{} This research was partially funded by DIB and Facultad de Ciencias, Universidad Nacional de Colombia.}
\end{abstract}
\pacs{03.65.Ud, 03.67.Mn, 42.50.Pq, 89.70.Cf }
\maketitle
\section{Introduction}
One expects quantum theory to approach the classical theory, for example, in the singular limit of a vanishing Planck's constant, $\hbar\rightarrow 0$, or for large quantum numbers.
However, dissipative systems can bring forth some surprises: for example, QED (quantum electrodynamics) cavity fields interacting with two-level systems may exhibit classical or quantum behavior, even if the system is kept at very low temperatures and if the mean number of photons in the cavity is of the order of one \cite{Kim1999a,Raimond2001a}, depending on the strength of the damping constant. Classical behavior, in this context, refers to the unitary evolution of one of the subsystems, as if the other subsystem could be replaced by a classical driving. In QED cavities, the atom, which enters the cavity in one of the relevant Rydberg states (almost in resonance with the field sustained in the cavity), conserves its purity and undergoes a unitary rotation inside the cavity -- exactly as if it were controlled by a classical driving field -- without entangling with the electromagnetic field. This unexpected behavior was analyzed in reference \cite{Kim1999a} employing several short-time approximations, and it was found that in the time needed to rotate the atom, its state remains almost pure. Other driven damped systems, composed of two (or more) subsystems, can be readily identified. Indeed, in the last years there has been a fast development of quite different physical systems and interfaces between them, including electrodynamical cavities \cite{cavities1,cavities8}, superconducting circuits \cite{SCCircuits1,SCCircuits4}, confined electrons \cite{CElectrons4,CElectrons5,CElectrons6} and nanoresonators \cite{NMResonators1,NMResonators5,NMResonators6}, in which it is possible to explore genuine quantum effects at the level of a few excitations and/or in individual systems.
For instance, the atom-electromagnetic-field interaction is exploited in experiments with trapped ions \cite{TrappedIons2,TrappedIons3}, cavity electrodynamics and ensembles of atoms interacting with coherent states of light \cite{Polzik}; radiation pressure over reflective materials is exploited in experiments coupling the mechanical motion of nanoresonators to light \cite{NMResonators6}; and the coupling of cavities with different quality factors is exploited in the manufacturing of more reliable Ramsey zones \cite{Haroche07}. In many of these interfaces it is possible to identify a system which couples strongly to the environment and another which couples weakly. For example, in the experiments of S. Haroche the electromagnetic field decays significantly faster \cite{Haroche96} (or significantly slower \cite{Haroche07}) than the atoms, the quality factor $Q$ of the nanoresonators is much smaller than that of the cavity, and the newest Ramsey zones comprise two coupled cavities of quite different $Q$. Several of these systems, therefore, can be modelled as coupled harmonic oscillators, one of which can be considered dissipationless. In this contribution we study an exactly solvable system, composed of two oscillators, which permits the analysis of large times, shedding additional light on the classical-quantum border. Entanglement and entropy, as measured by concurrence and linear entropy, are used to tell ``classical'' from quantum effects.
\section{The model}
The system that we consider in this manuscript comprises two oscillators of natural frequencies $\omega_1$ and $\omega_2$, coupled through an interaction which conserves the (total) number of excitations and whose coupling constant abruptly changes from zero to $g$ at some initial time, and back to zero at some final time.
We take into account that the second oscillator loses excitations at the rate $\gamma$, through a phenomenological Liouvillian of Lindblad form, corresponding to zero temperature, in the dynamical equation of motion \cite{Louisell1973a}. Lindblad superoperators are convenient because they preserve important characteristics of physically realizable states, namely hermiticity, conservation of the trace and semi-positivity \cite{Lindblad1976a}. In order to guarantee the presence of excitations, the second oscillator is driven by a classical resonant field. The interaction can be considered to be turned on (off) in the remote past (remote future) if it is always present (coupled Ramsey zones or nanoresonators coupled to cavity fields), or can really be present for a finite time interval (for example, for atoms travelling through cavities). The initial states of the coupled oscillators also depend on the experimental setup, varying from the ground state of the compound system to a product of the steady state of the coupled damped oscillator with the state of the other oscillator. Since we want to make comparisons with Ramsey zones, the choices in the formulation of this model have been inspired by the analogy with the atom-cavity system, for a cavity --kept at temperatures of less than 1K-- whose lifetime is much shorter than the lifetime of Rydberg states, allowing us to ignore the Lindblad operator characterizing the atomic decay process. The first oscillator is therefore a cartoon of the atom, at least in the limit where only its first two states are significantly occupied, while the second oscillator corresponds to the field.
All the ingredients detailed before can be summarily put into the Liouville-von Neumann equation for the density matrix $\hat{\rho}$ of the total system
\begin{equation}
\label{eq:lvneq}
\fl \frac{d{\hat \rho}}{dt}= -\frac{i}{\hbar}[{\hat H},{\hat \rho}] + \gamma (2{\hat a}{\hat \rho}{\hat a}^{\dagger} -{\hat a}^{\dagger}{\hat a}{\hat \rho} -{\hat \rho}{\hat a}^{\dagger}{\hat a})
\end{equation}
where $\hat{H}$ is the total Hamiltonian of the system and the second term of the rhs of (\ref{eq:lvneq}) is the Lindblad superoperator which accounts for the loss of excitations of the second oscillator. In the absence of coupling with the first oscillator, the inverse of twice the dissipation rate $\gamma$ gives the mean lifetime of the second oscillator. The first two terms of the total Hamiltonian
\begin{equation}
\fl \hat{H} = \hbar\omega_{1}{\hat b}^{\dagger}{\hat b} +\hbar\omega_{2}{\hat a}^{\dagger}{\hat a} + \hbar g \left(\Theta(t)-\Theta(t-T)\right) ({\hat a}^{\dagger}{\hat b}+{\hat a}{\hat b}^{\dagger}) + i \hbar\epsilon (e^{-i\omega_D t}{\hat a}^{\dagger}-e^{i\omega_D t}{\hat a}),
\end{equation}
are the free Hamiltonians of the two harmonic oscillators; the next term, which is modulated by the step function $\Theta(t)$, is the interaction between them, and the last one is the driving. The bosonic annihilation operators $\hat b$ ($\hat a$) and creation operators $\hat b^\dag$ ($\hat a^\dag$) of one excitation of the first (second) oscillator satisfy the usual commutation relations. From here on we focus on the case of resonance, $\omega_1=\omega_2=\omega_D=\omega$. The interaction time $T$ is left unspecified until the end of the manuscript, where we compare our results with those of the atom-cavity system.
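The structural properties quoted above for the Lindblad form -- conservation of the trace and of hermiticity -- can be checked directly on a truncated Fock space. The sketch below is our own illustration, not part of the manuscript; the truncation dimension and all parameter values are arbitrary choices.

```python
import numpy as np

# Build the right-hand side of the master equation (eq:lvneq) on a truncated
# two-oscillator Fock space and check that it preserves trace and hermiticity.
# Truncation N and all parameter values are arbitrary illustrative choices.

N = 6                                    # Fock-space truncation per oscillator
hbar, w, g, eps, gamma, t = 1.0, 1.0, 0.2, 0.1, 0.05, 0.3

a1 = np.diag(np.sqrt(np.arange(1, N)), 1)   # single-mode annihilation operator
idm = np.eye(N)
b = np.kron(a1, idm)                        # first oscillator
a = np.kron(idm, a1)                        # second (damped) oscillator

# Total Hamiltonian at fixed time t, interaction on, at resonance
H = (hbar * w * b.conj().T @ b + hbar * w * a.conj().T @ a
     + hbar * g * (a.conj().T @ b + a @ b.conj().T)
     + 1j * hbar * eps * (np.exp(-1j * w * t) * a.conj().T
                          - np.exp(1j * w * t) * a))

def lindblad_rhs(rho):
    """d(rho)/dt of equation (eq:lvneq) at the fixed time t."""
    ad = a.conj().T
    return (-1j / hbar * (H @ rho - rho @ H)
            + gamma * (2 * a @ rho @ ad - ad @ a @ rho - rho @ ad @ a))

# Random hermitian, unit-trace test state
rng = np.random.default_rng(0)
m = rng.normal(size=(N * N, N * N)) + 1j * rng.normal(size=(N * N, N * N))
rho = m @ m.conj().T
rho /= np.trace(rho).real

drho = lindblad_rhs(rho)
```

By the cyclic property of the trace, the dissipator contributes $2\,\mathrm{tr}(\hat a^\dagger\hat a\hat\rho)-2\,\mathrm{tr}(\hat a^\dagger\hat a\hat\rho)=0$ to $d(\mathrm{tr}\,\hat\rho)/dt$, which the numerical check reproduces to machine precision.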
\section{Dynamical evolution}
The solution of the dynamical equation (\ref{eq:lvneq}) can be written as
\begin{equation}
\label{ansatz}
\fl \hat{\rho}(t) = {\mathcal D}(\beta(t),\alpha(t)) \tilde{\rho}(t) {\mathcal D}^\dag(\beta(t),\alpha(t)),
\end{equation}
where ${\mathcal D}(\beta(t),\alpha(t))$ is the two-mode displacement operator,
\begin{equation*}
\fl {\mathcal D}(\beta(t),\alpha(t)) = {\mathcal D}_1(\beta(t)) {\mathcal D}_2(\alpha(t)) = e^{\beta(t) \hat{b}^\dag - \beta^*(t) \hat{b}} e^{\alpha(t) \hat{a}^\dag - \alpha^*(t) \hat{a}},
\end{equation*}
and $\tilde{\rho}(t)$ is the total density operator in the interaction picture defined by equation (\ref{ansatz}). Substituting (\ref{ansatz}) into (\ref{eq:lvneq}), and employing the operator identities
\begin{equation*}
\fl \frac{d}{dt}{\mathcal D}(\alpha) = \left( -\frac{\alpha^* \dot{\alpha}-\dot{\alpha}^* \alpha}{2} + \dot{\alpha} \hat{a}^\dag - \dot{\alpha}^* \hat{a}\right) {\mathcal D}(\alpha) = {\mathcal D}(\alpha) \left( \frac{\alpha^* \dot{\alpha}-\dot{\alpha}^* \alpha}{2} + \dot{\alpha} \hat{a}^\dag - \dot{\alpha}^* \hat{a}\right),
\end{equation*}
with the dot designating the time derivative as usual, we are able to decouple the dynamics of the displacement operators, obtaining the following dynamical equations for the labels $\alpha$ and $\beta$
\begin{eqnarray}
\label{eq:dDisplacementdt}
\fl \frac{d}{dt} \left( \begin{array}{c} \alpha \\ \beta \end{array} \right ) = \left( \begin{array}{rc} -\gamma-i\omega & -i g \\ -i g & -i\omega \end{array} \right ) \left( \begin{array}{c} \alpha \\ \beta \end{array} \right) +\left( \begin{array}{c} \epsilon e^{-i\omega t} \\ 0 \end{array} \right),
\end{eqnarray}
for times between zero and $T$. On the other hand, the Ansatz (\ref{ansatz}) also provides the equation of motion for $\tilde{\rho}(t)$, which turns out to be very similar to (\ref{eq:lvneq}) but with the Hamiltonian $\tilde{H}= \hat{H}(\epsilon=0)$, that is, without driving.
The separation provided by our Ansatz is also appealing from the point of view of its possible physical interpretation, because the effect of the driving has been singled out, and quantum (entangling and purity) effects are extracted from the displaced density operator $\tilde{\rho}(t)$. The two oscillators interact after the second oscillator reaches its stationary coherent state
\begin{equation}
\label{ec:steadystate1}
\fl\hat{\rho}_2(t) = \textrm{tr}_1 \hat{\rho}(t) = \Ket{\frac{\epsilon}{\gamma} e^{-i\omega t}} \Bra{\frac{\epsilon}{\gamma} e^{-i\omega t}},
\end{equation}
as can be verified by solving (\ref{eq:dDisplacementdt}) with the interaction turned off. If we want a mean number of excitations of the order of one, then the driving amplitude must satisfy $\epsilon\approx \gamma$: the larger the dissipation, the larger the driving must be chosen. At zero time, when the oscillators begin to interact, the state of the total system is separable, with the second oscillator state given by (\ref{ec:steadystate1}). The first oscillator, on the other hand, begins in a pure state which we choose as a linear combination of its ground and first excited states (again inspired by the analogy with the atom-cavity system). Thus, the initial state $\hat{\rho}(0)$ given by
\begin{equation}
\fl \label{eq:InitialState}
{\mathcal D}\left(0,\frac{\epsilon}{\gamma}\right) \underbrace{ \left(\cos(\theta)\ket{0}+\sin(\theta)\ket{1}\right) \left(\cos(\theta)\bra{0}+\sin(\theta)\bra{1}\right) \otimes \ket{0}\bra{0}}_{\tilde{\rho}(0)} {\mathcal D}^\dag\left(0,\frac{\epsilon}{\gamma}\right),
\end{equation}
corresponds to a state of the form described by equation (\ref{ansatz}) with $\beta(0)=0$ and $\alpha(0) ={\epsilon}/{\gamma}$.
At later times, the solution maintains the same structure, but --as can be seen from the solution of (\ref{eq:dDisplacementdt})-- the labels of the displacement operators evolve as follows
\begin{eqnarray}
\fl \label{eq:alpha} \alpha(t) = \epsilon e^{-\frac{1}{2}(\gamma + 2 i\omega )t} \left\{ \frac{1}{\gamma } \cos\left(\tilde{g} t\right) + \frac{\sin\left(\tilde{g} t\right)}{2\tilde{g}} \right\}, \\
\label{eq:beta}\fl \beta(t) = -\frac{i e^{-i\omega t} \epsilon }{g} + \frac{i \epsilon}{g} e^{-\frac{1}{2}(\gamma + 2 i \omega )t} \left\{\cos \left(\tilde{g} t\right) +\left(-2 g^2 +\gamma^2\right) \frac{\sin\left(\tilde{g}t\right)}{2\gamma\tilde{g}}\right\},
\end{eqnarray}
where we have defined the new constant $\tilde{g} =\frac{1}{2} \sqrt{4g^2-\gamma^2 }.$ We employ $\tilde{g}$, which also appears in the solution of the displaced density operator, to define three different regimes: the underdamped ($\tilde{g}^2> 0$), critically damped ($\tilde{g}^2 =0$) and overdamped ($\tilde{g}^2 <0$) regimes. It is important to notice that there is no direct connection with the quality factor of the damped oscillator: it is possible to have physical systems in the overdamped regime defined here even with relatively large quality factors, if the interaction constant $g$ is much smaller than $\omega$, the frequency of the oscillators. The inspection of equations (\ref{eq:alpha}) and (\ref{eq:beta}) allows one to clearly identify the time scale $2/\gamma$, after which the stationary state is reached and the state of the first oscillator just rotates with frequency $\omega$ and has a mean number of excitations equal to $\epsilon^2/g^2$. The doubling of the damping time of the second oscillator, from $1/\gamma$ in the absence of interaction to $2/\gamma$, in the underdamped regime, can be seen as an instance of the shelving effect \cite{Wineland1975a}. The first oscillator, which in the absence of interaction suffers no damping, is now driven and damped.
One can think of the excitations as spending half of the time in each oscillator and decaying with a damping constant $\gamma$, thereby leading to an effective damping constant of $\gamma/2$. An interesting feature of the solution is that the displacement of the second oscillator goes to zero in the stationary state. In the stationary state, the first oscillator evolves as if it were driven by a classical field $-i\hbar\epsilon \exp(-i\omega t)$ and damped with a damping rate $g$, without any interaction with a second oscillator. More generally speaking, we remark that from the point of view of the first oscillator, the evolution of its displacement operator happens as if there were damping but no coupling, and the driving were of the form $\hbar g (\beta-i\alpha)$, or, in terms of the parameters of the problem,
\begin{equation}
\label{eq:effectivedrive}
\fl F(t) = -i\hbar\epsilon e^{-i\omega t} -i\hbar\epsilon e^{-(\gamma/2+i\omega) t} \left( \left(\frac{g}{\gamma}-1\right)\cos(\tilde{g}t) +\frac{2g^2+g\gamma-\gamma^2}{2\gamma\tilde{g}}\sin(\tilde{g}t) \right).
\end{equation}
This behavior is particularly relevant in the following extreme case, whose complete solution depends only on the displacement operators. If the initial state of the first oscillator is the ground state, then $\tilde{\rho}$ does not evolve in time, i.~e.\ it remains in the state $\ket{00}$, and the total pure and separable joint state is
\begin{equation}
\fl \rho(t) = \ket{\beta(t)}\bra{\beta(t)}\otimes\ket{\alpha(t)}\bra{\alpha(t)}.
\end{equation}
Even in the more general case considered here, corresponding to the initial state (\ref{eq:InitialState}), the solution of $\tilde{\rho}(t)$ possesses only a few non-zero elements.
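The closed-form labels (\ref{eq:alpha})--(\ref{eq:beta}) and the effective driving (\ref{eq:effectivedrive}) can be cross-checked numerically. The sketch below is ours, not part of the manuscript; the parameter values are arbitrary choices in the underdamped regime. It integrates (\ref{eq:dDisplacementdt}) with a standard fourth-order Runge-Kutta scheme, compares the result with the closed forms, and verifies the identity $F(t)=\hbar g(\beta(t)-i\alpha(t))$ claimed above.

```python
import numpy as np

# Arbitrary parameters (underdamped regime, hbar set to one)
hbar = 1.0
g, gamma, eps, omega = 1.0, 0.5, 0.5, 3.0
gt = 0.5 * np.sqrt(4 * g**2 - gamma**2)          # \tilde{g}

# Coefficient matrix of equation (eq:dDisplacementdt)
M = np.array([[-gamma - 1j * omega, -1j * g],
              [-1j * g,             -1j * omega]])

def rhs(t, y):
    return M @ y + np.array([eps * np.exp(-1j * omega * t), 0.0])

def integrate(T, n):
    """RK4 from the initial labels (alpha, beta)(0) = (eps/gamma, 0)."""
    y, t, dt = np.array([eps / gamma, 0.0], dtype=complex), 0.0, T / n
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + dt / 2, y + dt / 2 * k1)
        k3 = rhs(t + dt / 2, y + dt / 2 * k2)
        k4 = rhs(t + dt, y + dt * k3)
        y, t = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4), t + dt
    return y

def alpha_cf(t):                                  # equation (eq:alpha)
    return eps * np.exp(-(gamma / 2 + 1j * omega) * t) * (
        np.cos(gt * t) / gamma + np.sin(gt * t) / (2 * gt))

def beta_cf(t):                                   # equation (eq:beta)
    return (-1j * eps / g * np.exp(-1j * omega * t)
            + 1j * eps / g * np.exp(-(gamma / 2 + 1j * omega) * t)
            * (np.cos(gt * t)
               + (gamma**2 - 2 * g**2) * np.sin(gt * t) / (2 * gamma * gt)))

def F_cf(t):                                      # equation (eq:effectivedrive)
    return (-1j * hbar * eps * np.exp(-1j * omega * t)
            - 1j * hbar * eps * np.exp(-(gamma / 2 + 1j * omega) * t)
            * ((g / gamma - 1) * np.cos(gt * t)
               + (2 * g**2 + g * gamma - gamma**2)
               * np.sin(gt * t) / (2 * gamma * gt)))

y_num = integrate(2.0, 4000)
drive_err = max(abs(F_cf(t) - hbar * g * (beta_cf(t) - 1j * alpha_cf(t)))
                for t in np.linspace(0.0, 5.0, 101))
```

For large times $|\beta(t)|\rightarrow\epsilon/g$, consistent with the stationary mean excitation number $\epsilon^2/g^2$ quoted above.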
If we write the total density operator as \begin{equation} \tilde{\rho}(t) = \sum_{i_1 i_2 j_1 j_2} \tilde{\rho}_{i_1 i_2}^{j_1 j_2} \ket{i_1 i_2} \bra{j_1 j_2}, \end{equation} we can arrange the elements corresponding to zero and one excitations in each oscillator, as the two-qubit density matrix \begin{equation} \left( \begin{array}{llll} \tilde{\rho}_{00}^{00}(t) & \tilde{\rho}_{00}^{01}(t) & \tilde{\rho}_{00}^{10}(t) & 0 \\ \tilde{\rho}_{01}^{00}(t) & \tilde{\rho}_{01}^{01}(t) & \tilde{\rho}_{01}^{10}(t) & 0 \\ \tilde{\rho}_{10}^{00}(t) & \tilde{\rho}_{10}^{01}(t) & \tilde{\rho}_{10}^{10}(t) & 0 \\ 0 & 0 & 0 & 0 \end{array} \right). \end{equation} If we measure time in units of $g$ by defining $t'=g t$ we have only two free parameters $\Gamma=\frac{\gamma}{g}$ and $\Omega=\frac{\omega}{g}$. The nonvanishing elements of the density matrix, written in the underdamped case ($|\Gamma|<2$), are given by (hermiticity of the density operator yields the missing non-zero elements) \begin{eqnarray*} \fl { \tilde{\rho}}^{00}_{00}(t')= & 1 -\sin^2\theta e^{-\Gamma t'} \left( \frac{4-\Gamma^2 \cos(\sqrt{4-\Gamma^2}t')}{4-\Gamma^2} - \frac{\Gamma \sin(\sqrt{4-\Gamma^2}t')}{\sqrt{4-\Gamma^2}} \right) \\ \fl { \tilde{\rho}}^{01}_{01}(t') =& 2\sin^2(\theta) e^{-\Gamma t'} \frac{1-\cos\left(\sqrt{4-\Gamma ^2}t'\right)}{4-\Gamma ^2} \\ \fl { \tilde{\rho}}^{10}_{10}(t') =& \sin ^2(\theta) e^{- \Gamma t' } \left( \frac{\left(2-\Gamma ^2\right) \cos \left(\sqrt{4-\Gamma ^2} t'\right)+2}{4-\Gamma ^2} -\frac{\Gamma \sin \left(\sqrt{4-\Gamma ^2} t' \right)} {\sqrt{4-\Gamma ^2}} \right) \\ \fl \tilde{\rho}^{00}_{01}(t') =& i\sin (2 \theta) e^{i\Omega t'-\frac{\Gamma t'}{2}} \frac{\sin\left(\sqrt{4-\Gamma ^2}\frac{t'}{2}\right)} {\sqrt{4-\Gamma ^2}}\\ \fl \tilde{\rho}^{00}_{10}(t') =& \frac{\sin (2 \theta)}{2} e^{i \Omega t' -\frac{ \Gamma t' }{2}} \left( \cos \left(\sqrt{4-\Gamma ^2} \frac{t'}{2}\right) -\frac{\Gamma\sin \left(\sqrt{4-\Gamma ^2} \frac{t'}{2}\right)} {\sqrt{4-\Gamma 
^2}}\right) \\ \fl \tilde{\rho}^{01}_{10}(t') =& 2i\sin ^2(\theta)e^{-\Gamma t'} \frac{\sin\left(\sqrt{4-\Gamma^2}\frac{t'}{2}\right)}{\sqrt{4-\Gamma^2}} \left( \frac{\Gamma\sin\left(\sqrt{4-\Gamma^2}\frac{t'}{2}\right)} {\sqrt{4-\Gamma^2}} -\cos\left(\sqrt{4-\Gamma ^2}\frac{t'}{2}\right) \right) \end{eqnarray*} The expressions of the elements of the density matrix in the critically damped case $\Gamma=2$ and in the overdamped case $\Gamma>2$ can be obtained from those given in the text for the underdamped case $\Gamma<2$. \section{Entanglement} Although quantities like quantum discord \cite{Olivier2001a} have been proposed to extract the quantum content of correlations between two systems, we presently quantify the quantum correlations between both oscillators employing a measure of entanglement. Due to the dynamics of the system, and the initial states chosen, the whole system behaves as a couple of qubits and therefore its entanglement can be measured by Wootters' concurrence \cite{Wootters1998a}. One of the most important characteristics of the form of the solution given by (\ref{ansatz}) is that concurrence, as well as linear entropy, depend \emph{only} on the displaced density operator $\tilde{\rho}(t')$. In our case the concurrence reduces to \begin{eqnarray} \fl C(t') & = \left \vert \sqrt{\tilde{\rho}^{01}_{10}(t'){\tilde{\rho}}^{10}_{01}(t')} +\sqrt{\tilde{\rho}^{01}_{01}(t') {\tilde{\rho}}^{10}_{10}(t')} \right \vert -\left\vert \sqrt{{\tilde{\rho}}^{01}_{10}(t')\tilde{\rho}^{10}_{01}(t')} -\sqrt{{\tilde{\rho}}^{01}_{01}(t') \tilde{\rho}^{10}_{10}(t')} \right\vert \nonumber \\ \fl & = 2\sqrt{\tilde{\rho}^{01}_{10}(t')\tilde{\rho}^{10}_{01}(t')} = 2 \vert \tilde{\rho}^{10}_{01}(t')\vert, \end{eqnarray} where the positivity and hermiticity of the density matrix were used. 
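As a quick sanity check (ours, not in the manuscript; $\theta$ and $\Gamma$ are arbitrary choices), the diagonal elements listed above should sum to one at all times -- the dynamics never populates the doubly excited state -- and should reproduce the pure initial state at $t'=0$:

```python
import numpy as np

# Diagonal matrix elements of the displaced density operator in the
# underdamped case, as listed in the text; theta and Gamma are arbitrary.
theta, Gamma = 0.4, 1.2
w = np.sqrt(4 - Gamma**2)
s2 = np.sin(theta)**2

def rho00_00(tp):
    return 1 - s2 * np.exp(-Gamma * tp) * (
        (4 - Gamma**2 * np.cos(w * tp)) / (4 - Gamma**2)
        - Gamma * np.sin(w * tp) / w)

def rho01_01(tp):
    return 2 * s2 * np.exp(-Gamma * tp) * (1 - np.cos(w * tp)) / (4 - Gamma**2)

def rho10_10(tp):
    return s2 * np.exp(-Gamma * tp) * (
        ((2 - Gamma**2) * np.cos(w * tp) + 2) / (4 - Gamma**2)
        - Gamma * np.sin(w * tp) / w)

# Largest deviation of the trace from one over a grid of rescaled times
trace_dev = max(abs(rho00_00(tp) + rho01_01(tp) + rho10_10(tp) - 1)
                for tp in np.linspace(0.0, 4.0, 81))
```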
The explicit expressions for the concurrence in the underdamped (UD), critically damped (CD) and overdamped (OD) regimes are
\begin{eqnarray}
\fl \nonumber C_{UD}(t') &= 4 \sin^2(\theta) e^{-\Gamma t'} \frac{\sin \left(\frac{\sqrt{4-\Gamma^2}\ t'}{2}\right)} {\sqrt{4-\Gamma^2}} \left\vert \frac{\Gamma \sin \left(\frac{\sqrt{4-\Gamma^2}\ t'}{2}\right)} {\sqrt{4-\Gamma^2}} -\cos \left(\frac{\sqrt{4-\Gamma^2}\ t'}{2}\right) \right\vert, \\
\fl C_{CD}(t')& = 2 \sin ^2(\theta ) e^{-2 t'} t' \big\vert t'-1\big\vert , \\
\fl \nonumber C_{OD}(t') &= 4 \sin^2(\theta) e^{-\Gamma t'} \frac{\sinh \left(\frac{\sqrt{\Gamma^2-4}\ t'}{2}\right)} {\sqrt{\Gamma^2-4}} \left\vert \frac{\Gamma \sinh \left(\frac{\sqrt{\Gamma^2-4}\ t'}{2}\right)} {\sqrt{\Gamma^2-4}} -\cosh \left(\frac{\sqrt{\Gamma^2-4}\ t'}{2}\right) \right\vert .
\end{eqnarray}
All the dependence on the initial state is contained in the squared norm of the coefficient of the state $\ket{1}$ of the displaced density operator. In all regimes the concurrence vanishes at zero time, because the initial state considered is separable. However, while in the underdamped case the concurrence vanishes periodically (see equation (\ref{eq:tau2n}) below), in the other two cases it crosses zero once (for $t>0$) and reaches zero asymptotically as time grows. This shows a markedly different qualitative behavior (see figures \ref{fig:Concurrence} and \ref{fig:MaxConcurrence}). In the underdamped regime the zeroes of the concurrence are found at times
\begin{equation}
\label{eq:tau2n}
\tau_{1n}=\frac{2 n \pi }{\sqrt{4-\Gamma ^2}}, \quad \textrm{and} \quad \tau_{2n}= \frac{2 \pi n+2 \arccos \left(\frac{\Gamma}{2} \right)}{\sqrt{4-\Gamma ^2}},
\end{equation}
where $n$ is a non-negative integer. In this contribution, the inverse sine and cosine functions are chosen to take values in the interval $[0,\pi/2]$. The time $\tau_{10}$ corresponds to the initial state.
The sequence of concurrence zeroes is thereby $0=\tau_{10}<\tau_{20}<\tau_{11}<\tau_{21}\ldots$ As the critical damping is approached, the time $\tau_{11}$ is pushed towards infinity, while $\tau_{20}$ approaches the finite time $2/\Gamma$ (see figure \ref{fig:MaxConcurrence}). For the initial states considered in this manuscript we do not observe the sudden death of the entanglement since the concurrence is zero only for isolated instants of time. If one writes the concurrence in the underdamped regime in the alternative form \begin{equation} C_{UD}(t')= \frac{\sin^2\theta}{1-\Gamma^2/4} e^{-\Gamma t'} \left| \frac{\Gamma}{2} -\sin\left( \arcsin\left(\frac{\Gamma}{2}\right)+2\sqrt{1-\frac{\Gamma^2}{4}}\ t' \right) \right| \end{equation} it is easy to verify that at the times $\tau_{\pm n}$, given by \begin{equation} \fl \tau_{\pm n} = \frac{1}{\sqrt{4-\Gamma^2}} \left( (2n+1)\pi \pm \arccos\left(\frac{\Gamma^2}{4}\right) -2\arcsin\left(\frac{\Gamma}{2}\right) \right) >0, \quad n=0,1,\ldots \end{equation} the concurrence reaches the local maxima \begin{equation*} \fl C_{\pm n} = \sin^2\theta \left( \sqrt{1+\frac{\Gamma^2}{4}} \pm\frac{\Gamma}{2}\right) \exp\left( -\frac{\Gamma\left( (2n+1)\pi\pm \arccos(\frac{\Gamma^2}{4})-2\arcsin(\frac{\Gamma}{2}) \right)} {2\sqrt{1-\frac{\Gamma^2}{4}}} \right) . \end{equation*} We observe these maxima to lie on the curves $\sin^2\theta K_\pm \exp(-\Gamma t)$, where the constants $K_\pm =\sqrt{1+\frac{\Gamma^2}{4}} \pm\frac{\Gamma}{2}$ satisfy the inequalities $\sqrt{2}-1 \leq K_-\leq 1 \leq K_+ \leq \sqrt{2}+1$. The maxima of concurrence depend on both the initial state and the value of the rescaled damping constant, and reach the maximum available value of one only in the non-dissipative case for a particular initial state. In order to have negligible values of concurrence (except for small time intervals around the zeroes of concurrence) it is necessary to have times much larger than $1/\gamma$.
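The statement that the maxima lie on the curves $\sin^2\theta\, K_\pm e^{-\Gamma t}$ can likewise be checked numerically (again a sketch of ours; `c_ud` re-implements the product form of $C_{UD}$, and we take $\theta=\pi/2$):

```python
import numpy as np

def c_ud(t, G, theta=np.pi / 2.0):
    """Underdamped concurrence C_UD(t') for rescaled damping G < 2."""
    s = np.sqrt(4.0 - G**2)
    a = np.sin(s * t / 2.0) / s
    return 4.0 * np.sin(theta)**2 * np.exp(-G * t) * a * np.abs(G * a - np.cos(s * t / 2.0))

for G in (0.5, 1.0, 1.5):
    s = np.sqrt(4.0 - G**2)
    Kp = np.sqrt(1.0 + G**2 / 4.0) + G / 2.0
    Km = np.sqrt(1.0 + G**2 / 4.0) - G / 2.0
    # first pair of local maxima (n = 0)
    tp = (np.pi + np.arccos(G**2 / 4.0) - 2.0 * np.arcsin(G / 2.0)) / s
    tm = (np.pi - np.arccos(G**2 / 4.0) - 2.0 * np.arcsin(G / 2.0)) / s
    # the values at tau_{+0}, tau_{-0} sit exactly on K_{+-} exp(-Gamma t)
    assert abs(c_ud(tp, G) - Kp * np.exp(-G * tp)) < 1e-8
    assert abs(c_ud(tm, G) - Km * np.exp(-G * tm)) < 1e-8
```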
From the point of view of classical-like behavior, the most favorable scenario corresponds to zero or almost zero concurrence, which is obtained for short time intervals around $\tau_{1 n},\ \tau_{2 n}$ and for large values of time. In the overdamped regime, the concurrence presents two maxima, at times $\tau_-$ and $\tau_+>\tau_-$, \begin{equation} \tau_\pm = \frac{2 \textrm{arccosh}(\Gamma/2)\pm \textrm{arccosh}(\Gamma^2/4)} {2 \sqrt{\Gamma^2/4-1}}, \end{equation} both of which go to zero as the rescaled dissipation rate grows, $\tau_+\rightarrow 4 \ln(\Gamma)/\Gamma$ and $\tau_-\rightarrow \ln(2)/\Gamma$ (see figure \ref{fig:MaxConcurrence}). The function arccosh$(x)$ is chosen so as to return nonnegative values for $x\geq 1$. Since the global maximum of concurrence, which corresponds to the later time, scales like $1/(2\Gamma)$ for large values of $\Gamma$, in the highly overdamped regime quantum correlations are not developed at any time. \begin{figure} \caption{Concurrence as a function of time and the rescaled damping constant in the (a) underdamped, (b) critically damped, and (c) overdamped case.} \label{fig:Concurrence} \end{figure} The behavior of concurrence in the different regimes is shown in figure \ref{fig:Concurrence}. It is apparent that small values of concurrence are obtained for very small times and for large times in the underdamped case and for all times for a highly overdamped oscillator. In figure \ref{fig:MaxConcurrence} we depict the times at which concurrence attains a maximum, and the maximum values of concurrence, as a function of the rescaled damping constant. One can see how the first two times of maximum concurrence go to zero, while the other times diverge, as the critically damped regime is reached. The first two maxima of concurrence vanish more slowly than the rest of the maxima, which vanish at $\Gamma=2$.
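A numerical check (ours; `c_od` transcribes $C_{OD}$ with $\theta=\pi/2$) that the times $\tau_\pm$, written with the real-valued denominator $\sqrt{\Gamma^2-4}$ appropriate for $\Gamma>2$, are indeed local maxima of the overdamped concurrence:

```python
import numpy as np

def c_od(t, G, theta=np.pi / 2.0):
    """Overdamped concurrence C_OD(t') for rescaled damping G > 2."""
    r = np.sqrt(G**2 - 4.0)
    a = np.sinh(r * t / 2.0) / r
    return 4.0 * np.sin(theta)**2 * np.exp(-G * t) * a * np.abs(G * a - np.cosh(r * t / 2.0))

eps = 1e-5
for G in (3.0, 5.0):
    r = np.sqrt(G**2 - 4.0)
    for sign in (+1.0, -1.0):
        tau = (2.0 * np.arccosh(G / 2.0) + sign * np.arccosh(G**2 / 4.0)) / r
        # both tau_+ and tau_- are local maxima of the concurrence
        assert c_od(tau, G) > c_od(tau - eps, G)
        assert c_od(tau, G) > c_od(tau + eps, G)
```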
\begin{figure} \caption{(a) {\bf Concurrence local maxima} and (b) the times at which they occur, as functions of the rescaled damping constant.} \label{fig:MaxConcurrence} \end{figure} \section{Entropy} The entropy is analyzed employing the linear entropy of the first oscillator, the system of interest. As remarked before, the first oscillator behaves like a two-level system, where the maximum value of the linear entropy, 0.5, is obtained when the population of each of the two states is one half. The type of ``classical'' behavior which allows the interaction with Ramsey zones to be modelled like a classical driving force occurs when the linear entropy is very small, and hence the state of the first oscillator is (almost) pure and uncorrelated with the state of the second oscillator. The linear entropy for the first oscillator can be computed as \begin{equation} \label{eq:delta1} \delta_{1}(t') = 1- \tr_1 \left( \tr_2 \rho \tr_2 \rho \right) = 1- \tr_1 \left( \tr_2 \tilde{\rho} \tr_2 \tilde{\rho} \right) = 2 \det \left(\tr_2 \tilde{\rho}\right), \end{equation} where the last equality holds for two-level systems. In equation (\ref{eq:delta1}) the density operator of the first oscillator is assumed to be represented by a 2$\times$2 matrix. Employing the expressions we have found for the elements of $\tilde{\rho}$ we obtain \begin{equation} \label{eq:delta} \delta_{1}(t') = 2 \sin^4\theta\ x(t') \left(1 - x(t') \right), \end{equation} where $x$, in the underdamped regime, is given by \begin{equation} x_{UD}(t')= \frac{ e^{-\Gamma t'}\sin^2\left(\sqrt{1-\Gamma^2/4}\ t'-\arccos(\Gamma/2)\right)} {1-\Gamma^2/4}. \end{equation} Surprisingly, as in the case of concurrence, the influence of the initial state factors out of the expression of the linear entropy of the first oscillator, which turns out to be proportional to the square of the population of $\ket{1}$ in the initial displaced operator. As is well known, in the limit of zero dissipation, the linear entropy of the reduced density matrix is equal to one half of the square of the concurrence.
At times $\tau_{2n}$ (see eq.(\ref{eq:tau2n})), when both concurrence and linear entropy vanish, the total state of the system is separable, $\rho(g\tau_{2n}) = \ket{\beta(g\tau_{2n})}\bra{\beta(g\tau_{2n})} \otimes \rho_2(g\tau_{2n})$; that is, from the point of view of the first oscillator the evolution is unitary-like. Since the linear entropy begins at zero, because the initial state is pure and separable, there is a maximum in the interval $(0,\tau_{20})$, which turns out to give a linear entropy of exactly 0.5 (we treat the case $\sin\theta=1$, because ---due to the scaling property discussed before--- a simple multiplication by $\sin^4\theta$ gives the result for other cases). Indeed, as the function $x(t)$ changes continuously from $x(t=0)=1$ to $x(t=\tau_{20})=0$, it crosses 0.5 at some time $\tau_{30}$ in between, giving the maximum possible value of the linear entropy. Although the exact value of $\tau_{30}$ can be obtained only numerically, good analytical approximations can be readily obtained. For example, $\tau_{30}\approx\pi/(4+4g+2g^2)$ gives an error smaller than 0.5\%. For small values of the rescaled damping constant, $\Gamma\lessapprox 0.237$, there are several solutions to the equation $x(t)=0.5$ in the interval $(0,\log(2)/\Gamma)$, which give absolute maxima of the linear entropy, while the times \begin{equation} \label{eq:tau4n} \tau_{4n} = \frac{2\arccos(\Gamma/2)+n\pi}{\sqrt{1-\Gamma^2/4}}, \quad n=0,1,2,\cdots \end{equation} correspond to local minima. In the interval $(\log(2)/\Gamma,\infty)$ the times $ \tau_{4n}$ give local maxima. All of the local maxima and minima given by eq. (\ref{eq:tau4n}) belong to the curve $2 e^{-\Gamma t}(1-e^{-\Gamma t})$. The large time behavior of the local maxima of linear entropy and concurrence is thereby of the same form, constant$\times \exp(-\Gamma t')$. For values of $\Gamma>0.237$ all times $ \tau_{4n}$ give local maxima.
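The claim that the extrema at $\tau_{4n}$ lie on the curve $2e^{-\Gamma t}(1-e^{-\Gamma t})$ can be checked numerically (our own sketch, with $\sin\theta=1$; `delta1` transcribes eq.~(\ref{eq:delta}) with $x_{UD}$):

```python
import numpy as np

def delta1(t, G, theta=np.pi / 2.0):
    """Linear entropy of the first oscillator, underdamped regime (G < 2)."""
    w = np.sqrt(1.0 - G**2 / 4.0)
    x = np.exp(-G * t) * np.sin(w * t - np.arccos(G / 2.0))**2 / w**2
    return 2.0 * np.sin(theta)**4 * x * (1.0 - x)

G = 0.5
w = np.sqrt(1.0 - G**2 / 4.0)
for n in range(4):
    tau4 = (2.0 * np.arccos(G / 2.0) + n * np.pi) / w
    # at tau_{4n} one has x = exp(-Gamma t), so the extremum sits on
    # the envelope 2 exp(-Gamma t)(1 - exp(-Gamma t))
    e = np.exp(-G * tau4)
    assert abs(delta1(tau4, G) - 2.0 * e * (1.0 - e)) < 1e-9
```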
The maxima of concurrence and linear entropy coincide only in the weakly damped case, because concurrence and linear entropy are not independent for pure bipartite states. At times $\tau_{1n}$ (see eq.(\ref{eq:tau2n})), where the total state $\rho(g\tau_{1n}) = \rho_1(g\tau_{1n}) \otimes \ket{\alpha(g\tau_{1n})}\bra{\alpha(g\tau_{1n})}$ is separable, the reduced state of the first oscillator is mixed. The linear entropy is small for short $(\tau_{1n}\ll \log(2)/\Gamma)$ and large $(\tau_{1n}\gg \log(2)/\Gamma)$ times. In the overdamped regime the function $x(t)$, which appears in the expression for the linear entropy (\ref{eq:delta}) and is given by \begin{equation} x_{OD}(t')= \frac{e^{-\Gamma t'} \sinh^2(\sqrt{\Gamma^2/4-1}\ t'-\textrm{arccosh}(\Gamma/2))} {\Gamma^2/4-1}, \end{equation} begins at one for $t'=0$, and goes down to zero for large values of time. The time at which it crosses one half can be calculated to be $\tau_{0.5}\approx 0.16557 < 1/6$ for $\Gamma=2$ and for large values of $\Gamma$ it goes as $\tau_{0.5}\approx \ln 2/(2\Gamma)$. It is easy to find interpolating functions with small error for the time of crossing, \begin{equation*} \tilde{\rho}_{10}^{10} \left(t'=\frac{1}{ 6 +\frac{4}{\ln 2} \sinh\left( \textrm{arccosh}(\frac{\Gamma}{2}) \tanh\left(\frac{\textrm{arccosh}(\frac{\Gamma}{2})}{1.6}\right) \right)} \right)= 0.5(1+\Delta) \end{equation*} where $|\Delta|<2.5$\%. It is interesting to notice that for large values of the damping this time ($\ln(2)/(2\Gamma)$) is half the time needed to reach the first maximum of concurrence, and that, at that later time, the linear entropy is 3/4 of the maximum value of entropy, a relatively large value. The state of the first oscillator always becomes maximally mixed before becoming pure again, no matter how large the value of the damping. We show the behavior of linear entropy in figure \ref{fig:LinearEntropy}.
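The quality of the interpolating formula can be probed numerically (our sketch; `x_od` transcribes $x_{OD}$ above, and the bound on $\Delta$ is the one quoted in the text):

```python
import numpy as np

def x_od(t, G):
    """x(t') appearing in the linear entropy, overdamped regime (G > 2)."""
    q = np.sqrt(G**2 / 4.0 - 1.0)
    return np.exp(-G * t) * np.sinh(q * t - np.arccosh(G / 2.0))**2 / q**2

for G in (3.0, 5.0):
    b = np.arccosh(G / 2.0)
    # interpolated half-crossing time from the displayed formula
    t_star = 1.0 / (6.0 + (4.0 / np.log(2.0)) * np.sinh(b * np.tanh(b / 1.6)))
    # x should equal 0.5 up to the quoted |Delta| < 2.5%
    assert abs(x_od(t_star, G) - 0.5) < 0.5 * 0.025
```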
In the underdamped regime there are infinitely many maxima and minima, while for critical damping and for the overdamped regime there are only two maxima. The first maximum always corresponds to a linear entropy of one half. \begin{figure} \caption{Linear entropy of the first oscillator as a function of time and the rescaled damping constant in the (a) underdamped, (b) critically damped, and (c) overdamped case.} \label{fig:LinearEntropy} \end{figure} \section{Conclusions} In the present contribution we have shown that the classical-quantum border in this model depends mainly on the initial state and on the ratio of the damping constant to the interaction coupling, and that quantum effects, characteristic of the underdamped regime, can be seen in the other regimes for small times. In order to make a connection with Ramsey zones, we recall that in that physical system $\omega \approx 10^{10}$ Hz, $Q \approx 10^4$, $g \approx 10^4$ Hz and $T_R \approx 10^{-5}$ s, which was chosen so as to produce a $\pi/2$ pulse, that is, a pulse that can rotate the state of the two-level system, as represented in a Bloch sphere, by an angle $\pi/2$. These numbers place the system in the highly overdamped regime (because $\Gamma = \omega/(Q g) \approx 10^2 \gg 2$) and give a rescaled evolution time of the order of $g T \approx 10^{-1}$. Here we use the same values of $\omega, \gamma$ and $g$, and an evolution time of order $1/g$. Indeed, the Hamiltonian $ \hbar\omega{\hat b}^{\dagger}{\hat b} + \hbar g \left(\Theta(t)-\Theta(t-T)\right) (\alpha_0 e^{-i\omega t}{\hat b}^{\dagger} +\alpha_0^* e^{i\omega t}{\hat b})$, with $\|\alpha_0\|\approx 1$ --- which would model the interaction of the first oscillator with a classical driving field of an average number of excitations of the order of one --- has a characteristic time $1/g$, corresponding to $T'\approx 1$. The dynamical behavior of the linear entropy obtained here is quite different from that of ref.
\cite{Kim1999a}: there the linear entropy was never large during the relevant time interval; here it grows to the maximum possible for a two-level system, and then goes to zero very quickly. Therefore, in this model dissipation also produces relaxation, and a description that omits the second oscillator still requires a dissipation process. Although, at the evolution time $T$, both models predict a small atomic entropy, in Ramsey zones it decreases as $\delta_1({T_R}'\approx 0.1)\approx 4/\Gamma$, while in the present model it goes to zero as $\delta_1(T'\approx 1)\propto 1/\Gamma^4$. Qualitative and quantitative differences notwithstanding, at the evolution time the linear entropy is very small in both cases, due to the smallness of the ratio $g/\gamma$. As remarked before, the quality factor of the damped oscillator does not appear directly in either case; it is perfectly possible to have a very weakly damped oscillator and a highly overdamped interaction. However, as the first oscillator quality factor is improved, the damping constant will eventually be comparable with the interaction constant, and there will be considerable entanglement between both oscillators. For the same physical system, classical or quantum behavior can thus be obtained if the damping rate can be changed. \section*{References} \end{document}
\begin{document} \pagestyle{empty} \title{On the Equivalence of von Neumann and Thermodynamic Entropy} \author{Carina E. A. Prunkl\\ \textit{Future of Humanity Institute, University of Oxford}} \date{} \maketitle \raggedright \onehalfspacing \section*{Abstract} In 1932, von Neumann argued for the equivalence of the thermodynamic entropy and $-\text{Tr}\rho\ln\rho$, since known as the von Neumann entropy. Hemmo and Shenker (2006) recently challenged this argument by pointing out an alleged discrepancy between the two entropies in the single particle case, concluding that they must be distinct. In this article, their argument is shown to be problematic as it a) allows for a violation of the second law of thermodynamics and b) is based on an incorrect calculation of the von Neumann entropy. \pagestyle{plain} \singlespacing \tableofcontents \section{Introduction} In \textit{Mathematische Grundlagen der Quantenmechanik} von Neumann introduces $-\text{Tr}\rho\ln\rho$ as the quantum mechanical generalisation of the phenomenological thermodynamic entropy, where $\rho$ is the quantum mechanical density operator\footnote{Von Neumann considers two types of processes that describe how the quantum state changes in time. The first, `Prozess 1', is associated with the probabilistic outcome of a measurement, whereas `Prozess 2' refers to the evolution of the system via the Schr\"odinger equation. $-\text{Tr}\rho\ln\rho$ is shown to be non-decreasing for both of them.}. In his argument, he considers the cyclic transformation of a quantum gas confined to a box. By demanding that the overall entropy change of system and heat bath must be zero by the end of the cycle\footnote{An explicit assumption of the validity of the second law.}, von Neumann concludes that the entropy of the quantum gas ought to be given by $-\text{Tr}\rho\ln\rho$. 
Hemmo and Shenker (2006) recently challenged this argument by pointing out an alleged discrepancy between the two entropies in the single particle case, concluding that they must be distinct. In this article I demonstrate that their argument against the equivalence of thermodynamic and von Neumann entropy is problematic as it a) allows for a violation of the second law of thermodynamics and b) is based on an incorrect calculation of the von Neumann entropy. The article is structured as follows: after a summary of von Neumann's original argument, I will quickly revisit the debate that has been led to date by \cite{shenker_is_1999} and \cite{henderson_von_2003} before moving on to an analysis of Hemmo and Shenker's (2006) most recent contribution, which will be shown to be problematic. \section{The Argument} To fully appreciate Hemmo and Shenker's criticism, it is helpful to begin by recapitulating von Neumann's argument as presented in his \textit{Mathematische Grundlagen}. \subsection{The Setup} Von Neumann begins by considering $N$ non-interacting, quantum systems, denoted by $\mathbf{S}_1,...,\mathbf{S}_N$. For the purpose of this paper, we take these systems to be two-state quantum systems and only consider, say, the spin states of a spin-$\frac{1}{2}$ particle.\footnote{A generalisation to more degrees of freedom is straightforward and may be found in \citep[pp.191--201]{von_neumann_mathematische_1996}.} Each of these systems is placed in a box $\mathbf{K}_i$, with $i=1,...,N$, whose walls shield the contained system from its environment and thus prevent any interaction between the systems. The boxes $\mathbf{K}_1,...,\mathbf{K}_N$ are now all placed into another, much larger box $\bar{\mathbf{K}}$ of volume $V$. For simplification, von Neumann assumes that there are no force fields, in particular no gravitational fields, present in $\bar{\mathbf{K}}$.
This in turn means that there is no gravitational interaction between the boxes $\mathbf{K}_1,...,\mathbf{K}_N$, even though they may exchange kinetic energy via collisions. The boxes can thus be taken to behave just like molecules of a gas. Von Neumann calls this `quantum gas' a $[\mathbf{S}_1,...,\mathbf{S}_N]$-gas, where $[\mathbf{S}_1,...,\mathbf{S}_N]$ is the statistical ensemble associated with the systems. A more detailed analysis of von Neumann's understanding of such an ensemble, or \textit{Gesamtheit}, will be presented in Section \ref{sec:response}. The large box $\bar{\mathbf{K}}$ can be brought into thermal contact with a heat bath at temperature $T$ and in that case, after some equilibration time, the $[\mathbf{S}_1,...,\mathbf{S}_N]$-gas itself will be at temperature $T$.\footnote{For a more detailed account of this equilibration process, see \citep[pp.192--193]{von_neumann_mathematische_1996}.} Finally, we add an \textit{empty} box $\bar{\mathbf{K}}^\prime$ of equal volume $V$ to the right of $\bar{\mathbf{K}}$. \begin{figure} \caption{\small An illustration of von Neumann's argument.} \label{shenkerfig1} \end{figure} \subsection{The Process} The $[\mathbf{S}_1,...,\mathbf{S}_N]$-gas now undergoes a series of transitions. The following various stages are also illustrated in Figure \ref{shenkerfig1}.\footnote{The presentation of the argument given here slightly differs from its original. Instead of having the gas undergo a cyclic process (in Figure \ref{shenkerfig1}, the system is in the same state at Stages I and VII), von Neumann separately considers the system's entropy changes for Stages II to VII \citep[pp.200--202]{von_neumann_mathematische_1996} and Stages I to II \citep[pp.202--206]{von_neumann_mathematische_1996}.
The cyclic version was chosen to be consistent with the presentation of von Neumann's argument in \citep{shenker_is_1999,henderson_von_2003} and \citep{hemmo_von_2006}.} \textbf{Stage I}: Each of the two-level quantum systems $\mathbf{S}_i$ initially is in the pure state $\ket{0}=w_1\ket{+}+w_2\ket{-}$, where the states $\ket{+}$ and $\ket{-}$ can be taken to be spin eigenstates of a spin-half particle. Given the lack of interaction between the quantum systems, the density matrix of the overall system factorises as $\rho=\rho_1\otimes...\otimes\rho_N$ with $\rho_i=\ket{0}\!\bra{0}$. \textbf{Stage II}: Each system is now measured in the \{$\ket{+}\!,\!\ket{-}$\}-basis, resulting in a mixture with $w_1^2N$ particles in state $\ket{+}$ and $w_2^2N$ particles in state $\ket{-}$. \textbf{Stage III \& IV}: The $\ket{+}$- and $\ket{-}$-systems are now separated in the following manner. The wall between $\bar{\mathbf{K}}$ and $\bar{\mathbf{K}}^\prime$ is replaced with a \textit{movable} partition and a \textit{fixed} semipermeable membrane\footnote{For such a semipermeable membrane to exist in principle, the two states need to be orthogonal. Mixtures instead of pure states are also conceivable, as long as they are disjoint.}. This first semipermeable membrane is transparent to $\ket{-}$-systems but impermeable to $\ket{+}$-systems. From the very left of $\bar{\mathbf{K}}$, another semipermeable membrane is inserted. It is movable and furthermore transparent to $\ket{+}$-systems, while being impermeable to $\ket{-}$-systems. The $\ket{-}$-systems are now 'pushed' into the right box $\bar{\mathbf{K}}^\prime$ by moving this second semipermeable membrane and the movable partition in the center \textit{simultaneously} and quasi-statically to the right, keeping the enclosed volume constant at all times (see Figure \ref{shenkerfig1}, Stage III, for an illustration of this step). During this process, no work is done on the gas and no heat is exchanged with the heat bath. 
Eventually, all the $\ket{-}$-systems will be in $\bar{\mathbf{K}}^\prime$, while the $\ket{+}$-systems remain in the left box (Figure \ref{shenkerfig1}, Stage IV). \textbf{Stage V}: The two boxes are now isothermally compressed to volumes $w_1^2V$ and $w_2^2 V$. Figure \ref{shenkerfig1} illustrates this step for $w_1^2=w_2^2=\frac{1}{2}$. The particle densities in $\bar{\mathbf{K}}$ and $\bar{\mathbf{K}}^\prime$ change from $w_1^2N/V$ and $w_2^2N/V$ to $N/V$ where, as before, $N$ is the total number of systems. The entropy of the heat reservoir increases by $-Nk_B w_1^2\ln w_1^2$ and $-Nk_B w_2^2\ln w_2^2$ respectively, where $k_B$ is the Boltzmann constant. \textbf{Stage VI}: The $\ket{+}$- and $\ket{-}$-gases are then reversibly transformed back into a $\ket{0}$-gas via unitary operations. \textbf{Stage VII}: Finally, the partition between the two chambers is removed, restoring the original state of the gas. Von Neumann now argues as follows: all transitions between Stages II and VII take place in a reversible fashion, which means that the total entropy change of gas and heat bath between II and VII must be zero. Since a total amount of $\Delta S=-Nk_B\left[ w_1^2\ln w_1^2+w_2^2\ln w_2^2\right]$ has been dumped into the heat bath during the compression stage, and since the (normed) entropy of the final $\ket{0}$-gas is zero by definition, the entropy of the gas must have been $S=S_{+}+S_{-}=-Nk_B\left[ w_1^2\ln w_1^2+w_2^2\ln w_2^2\right]$ before. Later in his book, von Neumann explains that the measurement process ('Prozess 1') is responsible for the entropy increase between Stages I and II \citep[pp.202--206]{von_neumann_mathematische_1996}. The above considerations are easily generalised to more dimensions: for a system described by a density matrix $\rho$ with eigenvectors $\ket{\phi_1},...,\ket{\phi_n}$ and eigenvalues $w_1,...,w_n$, the entropy is then given by $S_\rho=-\text{Tr}\rho\ln\rho=-\sum_{i=1}^n w_i\ln w_i$.
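Von Neumann's bookkeeping can be illustrated with a short numerical sketch (ours, not from the original; we pick the illustrative probabilities $w_1^2=0.25$, $w_2^2=0.75$ and set $k_B = N = 1$): the measurement takes the pure Stage I state to a mixture whose von Neumann entropy exactly matches the entropy later delivered to the bath during the isothermal compressions.

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy -Tr rho ln rho, computed from the eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]          # drop zero eigenvalues (0 ln 0 = 0)
    return float(-np.sum(w * np.log(w)))

w1, w2 = np.sqrt(0.25), np.sqrt(0.75)
psi = np.array([w1, w2])               # |0> = w1|+> + w2|->
rho_pure = np.outer(psi, psi)          # Stage I: pure state, S = 0
rho_mixed = np.diag([w1**2, w2**2])    # Stage II: post-measurement mixture

assert abs(vn_entropy(rho_pure)) < 1e-10
# entropy created by the measurement equals the entropy dumped into the
# bath during the isothermal compression of the two sub-gases
bath_gain = -(w1**2 * np.log(w1**2) + w2**2 * np.log(w2**2))
assert abs(vn_entropy(rho_mixed) - bath_gain) < 1e-10
```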
\section{\citeauthor{shenker_is_1999}'s Criticism and \citeauthor{henderson_von_2003}'s Reply } I will now briefly consider Shenker's (1999) first criticism against von Neumann's argument and Henderson's (2003) reply. According to Shenker, two assumptions were made by von Neumann: \begin{enumerate}[a)] \item the thermodynamic entropy only changes during the compression, stages IV to V, and \item the entropies at stages I and VII are the same. \end{enumerate} As Shenker presents the argument, von Neumann's conclusion was that the entropy must thus have increased during the measurement process (I to II), to balance out the decrease during the compression (IV to V). In order to show that von Neumann entropy and ``classical entropy''\footnote{Shenker does not distinguish between classical statistical mechanical entropy and thermodynamic entropy at this point: ``Classical thermodynamics concludes that the very separation [of two gases] means a reduction of entropy. This is called entropy of mixing [...]'' \citep[39]{shenker_is_1999}. Entropy of mixing however has its origin in statistical and not phenomenological considerations.} are distinct, Shenker points out an alleged discrepancy in behaviour, which supposedly takes place between stages II and IV. Following her argument, let us first consider the change in von Neumann entropy: at stage II, the system is in a mixed state and has positive von Neumann entropy. At stage IV, the system is in a pure state and, so Shenker claims, has zero von Neumann entropy by definition. The von Neumann entropy therefore must have decreased between II and IV, that is, it must have decreased during the separation of the $\ket{+}$- and $\ket{-}$-systems. ``From a thermodynamic point of view'' (p.42), however, the entropy has not changed between II and IV. This is because ``the entropy reduction of the separation is exactly compensated by an entropy increase due to expansion'' \citep[p.45]{shenker_is_1999}.
The thermodynamic entropy instead changes during the compression, IV to V. According to Shenker, thermodynamic entropy and von Neumann entropy therefore differ in their behaviour since the reduction in thermodynamic entropy takes place at a later stage (IV to V) than the reduction of von Neumann entropy (II to IV). \cite{henderson_von_2003} points out some deficiencies in Shenker's argument that explain the alleged discrepancy. She shows that the system at stage IV \textit{cannot} be considered to be in a pure state, as the gas' spatial degrees of freedom ought also to be taken into account in addition to its spin degrees of freedom. The initial state of the system at stage I is then given by $\ket{0}\otimes \rho_{\beta}$, where $\rho_{\beta}$ is the thermal state of the system in contact with a heat bath at inverse temperature $\beta$. The entropy change at stage II is then only due to the entropy change of the spin degrees of freedom. Furthermore, even if we assume collapse, as Shenker implicitly does, the entropy is still high, since ``we lack knowledge of \textit{which} pure state the system is in'' \citep[p.294, original emphasis]{henderson_von_2003}. The separation step between II and IV then only `labels' the states in so far as they are associated with a particular spatial area of the box, but this step does \textit{not} change the entropy. The change in entropy at the compression stage V is then due to a change of the entropy of the \textit{spatial} degrees of freedom. \section{Modern Criticism by \citeauthor{hemmo_von_2006}} In a subsequently published, revised and amended version, \cite{hemmo_von_2006} offer a proposal with a similar but slightly weakened claim. They assert that ``von Neumann's argument does not establish a conceptual link between $-\text{Tr}[\rho\ln\rho]$ and the thermodynamic quantity $(1/T)\int pdV$ (or $dQ/T$) \textit{in the single particle gas} [...]'' \citep[p.158, emphasis added]{hemmo_von_2006}.
They therefore retain their position that the von Neumann entropy cannot be empirically equivalent to the phenomenological entropy, but restrict this inequivalence to the domain of single or sufficiently few particles. Von Neumann and thermodynamic entropy, they argue, are effectively equivalent only in the thermodynamic limit. This section will discuss Shenker's and Hemmo and Shenker's (H\&S) efforts to show dissimilar behaviour between the two entropies and reveal that their argument is problematic to the extent that it allows for \textit{perpetua mobilia} of the second kind. The shortcoming will be identified as the failure to take into account the entropy contribution of the measurement apparatus. I will make use of the famous one-particle engine developed by Szilard (\citeyear{szilard_decrease_1929}) in order to show that measurement based correlations with an external agent cannot be ignored in the single particle limit, as they straightforwardly lead to a violation of the second law. Once the entropy contribution of the measurement apparatus is taken into account, however, the analogous behaviour of thermodynamic entropy and von Neumann entropy for the \textit{joint system} is restored. The following argument, including any conceptual ambiguities, has been taken unamended from \citep{hemmo_von_2006}, with the exception that we represent the position of the particle by the two orthogonal states $\ket{L}$ and $\ket{R}$, where $L$ and $R$ stand for 'Left' and 'Right', as opposed to Hemmo and Shenker's mixed state $\rho(L)$ representation. For the remainder of the article I furthermore assume, just like H\&S, that it is in fact possible to treat a single quantum particle as a genuine thermodynamic system. An illustration of the following steps can be found in Figure \ref{fig:singleparticle}.
\begin{figure} \caption{\small Illustration of the Gedankenexperiment following \cite{hemmo_von_2006}.} \label{fig:singleparticle} \end{figure} \textbf{Step 1 (Preparation I)}: A quantum particle $P$ is prepared in a spin-up eigenstate in the $x$-direction, $\ket{+_x}_P$. Its initial location is given by $\ket{L}_P$, where $L$ and $R$ refer to its position in either the left or the right part of the box. The measuring apparatus $M$ starts out in the state $\ket{\text{Ready}}_M$. The initial state of particle and measuring apparatus is then given by the product state: \begin{equation} \rho^{(1)}=\ket{+_x}\!\bra{+_x}_P\ket{L}\!\bra{L}_P\ket{\text{Ready}}\!\bra{\text{Ready}}_M. \end{equation} \textbf{Step 2 (Preparation II)}: In Step 2, a measurement in the spin $z$ direction is performed, leading to an entanglement of the measurement apparatus' pointer states and the $z$ spin eigenstates. It is important to note that H\&S do not specify the nature of the measurement at this stage, i.e. whether they are working in a collapse or no-collapse model. The state of the entire system is given by \begin{equation} \rho^{(2)}=\frac{1}{2}\left( \ket{+_z}\!\bra{+_z}_P\ket{+}\!\bra{+}_M+\ket{-_z}\!\bra{-_z}_P\ket{-}\!\bra{-}_M\right)\ket{L}\!\bra{L}_P, \end{equation} while the reduced density matrix of the particle becomes: \begin{equation} \rho^{(2,red)}=\frac{1}{2}\left( \ket{+_z}\!\bra{+_z}_P+\ket{-_z}\!\bra{-_z}_P\right)\ket{L}\!\bra{L}_P, \label{eq:step2} \end{equation} which, as H\&S state, ``in some interpretations may be taken to describe our ignorance of the z spin of P'' \citep[p.160]{hemmo_von_2006}. Whereas the von Neumann entropy of the spin component $S_{vN}=-\text{Tr}[\rho\ln\rho]$ was zero before, it now becomes positive. The thermodynamic entropy however, the authors assert, remains the same.
\textbf{Step 3 (Separation)}: Two semi-permeable membranes are inserted and moved through the box in such a way that the particle remains on the left if it is in state $\ket{+_z}$ but is moved to the right if it is in state $\ket{-_z}$. There is no work cost involved in this process and neither von Neumann entropy nor thermodynamic entropy change during this step, during which the spatial degrees of freedom are coupled to the spin degrees of freedom. \textbf{Step 4 (Measurement)}: As we are only considering a \textit{single} molecule in this setup as opposed to von Neumann's original many particle gas, the compression stage needs to be preceded by a location measurement in order to determine which part of the box is empty. H\&S therefore introduce a \textit{further} measurement before compression (not present in von Neumann's original argument), in order to determine in which part of the box the particle is located. H\&S add that for the calculation of the von Neumann entropy, collapse and no-collapse interpretations will now have to use different expressions for the quantum state. In collapse theories, the state as a result of the location measurement collapses into either \begin{equation} \rho^{(4,+)}=\ket{+_z}\bra{+_z}_P\ket{L}\!\bra{L}_P\;\;\;\text{or}\;\;\;\rho^{(4,-)}=\ket{-_z}\bra{-_z}_P\ket{R}\!\bra{R}_P.\label{collapse} \end{equation} For no-collapse interpretations on the other hand, the system's state is given by the reduced density matrix: \begin{equation} \rho^{(4,red)}=\frac{1}{2} \ket{+_z}\bra{+_z}_P\ket{L}\!\bra{L}_P +\frac{1}{2}\ket{-_z}\bra{-_z}_P\ket{R}\!\bra{R}_P.\label{nocollapse} \end{equation} The thermodynamic entropy, $S_{TD}$, as the authors stress, is \textit{not} influenced by the position measurement and does not change during this step, in the sense--presumably--that no heat flows into, or out of, the system as a consequence of this measurement.
By contrast, they urge, whether the von Neumann entropy changes depends on whether we consider collapse or no-collapse interpretations. In the case of collapse interpretations the von Neumann entropy of the system allegedly decreases, whereas in the case of no-collapse interpretations, it remains the same. \textbf{Step 5 (Compression)}: The box is isothermally compressed back to its original volume $V$. The change in \emph{thermodynamic} entropy during this step is normally given by $\Delta S_{TD}=(1/T)\int{pdV}$; however, since there is no work involved in the compression against the vacuum, H\&S argue, the thermodynamic entropy does not change at Step 5. In fact, the thermodynamic entropy does not change throughout the \textit{whole experiment}, the authors claim. \textbf{Step 6 (Return to Initial State)}: The system is brought back to its initial state by unitary transformations with no entropy cost. ``[...] [T]he measuring device need also be returned to its initial ready state. One can do that unitarily.'' \citep[161]{hemmo_von_2006}. H\&S's main criticism thereby focuses on the fact that the thermodynamic entropy remains constant \textit{throughout the experiment}, whereas the von Neumann entropy does not: \begin{quotation} \small \noindent Therefore, whatever changes occur in $\text{Tr}\rho\ln\rho$ during the experiment, they cannot be taken to compensate for $(1/T)\int pdV$ since the latter is null throughout the experiment. \citep[p.162]{hemmo_von_2006} \end{quotation} \section{Discussion of \citeauthor{hemmo_von_2006}'s (\citeyear{hemmo_von_2006}) Argument} This section will discuss the argument presented above and identify two problems. The first concerns an incorrect calculation of the von Neumann entropy during the Step 4 location measurement. The second problem regards the suggested unitary reset of the measurement apparatus.
\subsection{Redundancy of the Step 4 Location Measurement} I will begin the discussion with some general observations in order to provide more clarity. For this, we recall that according to Hemmo and Shenker, the only difference between their and von Neumann's original thought experiment is that a \textit{further} measurement, a location measurement (Step 4), is needed to determine the molecule's location prior to the compression stage. For gases at the thermodynamic limit, this measurement becomes redundant, since the number of molecules on each side of the box becomes proportional to the volume that side occupies. Not so for single molecules, for which, before the empty side of the box can be compressed (with probability one), a location measurement is required in order to determine which side the particle is on. Contrary to H\&S's assertions, however, the location measurement during Step 4 is \textit{not} needed. A spin $z$ measurement already took place at Step 2, and the outcome of this measurement will be fully correlated with the position of the particle after the separation in Step 3. And so instead of introducing yet another auxiliary system that performs a location measurement on the particle, it would have been sufficient to read out the measurement result of the spin $z$ measurement. In the case of collapse, for example, the particle will have already collapsed into a spin eigenstate during the Step 2 measurement. The correlations established during the location measurement will thereby all be classical, and reading out the spin-measurement result is sufficient to predict the particle's location after the separation.
In the case of no-collapse interpretations, system and (spin-)measurement apparatus become entangled during Step 2: \begin{equation} \ket{\Psi}^{(2)}=\frac{1}{\sqrt{2}}\left( \ket{+_z}_P\ket{+}_M+\ket{-_z}_P\ket{-}_M\right)\ket{L}_P. \end{equation} During the separation in Step 3, the particle's spatial degree of freedom becomes entangled with its spin degree of freedom. This means that the state of the overall system is: \begin{equation} \ket{\Psi}^{(3)}=\frac{1}{\sqrt{2}}\left( \ket{+_z}_P\ket{+}_M\ket{L}_P+\ket{-_z}_P\ket{-}_M\ket{R}_P\right). \end{equation} Therefore, for both collapse and no-collapse cases it is in fact sufficient to read out the measurement result of the Step 2 spin measurement in order to determine the location of the particle after the separation process. Having two measurements instead of one would not be much of a problem, were it not the case that for H\&S the two measurements have different consequences for the von Neumann entropy. This is the first inconsistency in their argument: whereas H\&S agree that after the spin $z$ measurement at Step 2 the \textit{post spin measurement} density matrix of the particle is given by \begin{equation} \rho^{(2,red)}=\frac{1}{2}\left( \ket{+_z}\!\bra{+_z}_P+\ket{-_z}\!\bra{-_z}_P\right)\ket{L}\!\bra{L}_P, \end{equation} they do not apply the same reasoning to the \textit{post location measurement} state of the system at Step 4. Instead, they use a `collapsed' density matrix to calculate the von Neumann entropy: \begin{equation} \rho^{(4,+)}=\ket{+_z}\!\bra{+_z}_P\ket{L}\!\bra{L}_P\;\;\;\text{or}\;\;\;\rho^{(4,-)}=\ket{-_z}\!\bra{-_z}_P\ket{R}\!\bra{R}_P. \end{equation} In the first case, the (spin) measurement has therefore increased the entropy, whereas in the second case the (location) measurement has effectively reduced it. What is going on? Let me first try to assemble what the authors themselves could have had in mind.
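The quantitative discrepancy is easy to check directly. The following sketch (my illustration, not part of H\&S's argument; entropies in units of $k_B=1$) computes $-\mathrm{Tr}\,\rho\ln\rho$ for the post-spin-measurement mixture $\rho^{(2,red)}$ and for a collapsed post-location-measurement state $\rho^{(4,+)}$; pure spatial factors such as $\ket{L}\!\bra{L}_P$ are omitted since they contribute no entropy:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho) in units of k_B, with 0*ln(0) := 0."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

# Spin basis states |+_z>, |-_z>
plus = np.array([1.0, 0.0])
minus = np.array([0.0, 1.0])

# Post spin-measurement mixture rho^(2,red) (pure |L><L|_P factor omitted)
rho_2 = 0.5 * np.outer(plus, plus) + 0.5 * np.outer(minus, minus)

# Collapsed post location-measurement state rho^(4,+)
rho_4_plus = np.outer(plus, plus)

print(von_neumann_entropy(rho_2))       # ln 2 ≈ 0.6931
print(von_neumann_entropy(rho_4_plus))  # 0.0
```

The mixture carries entropy $\ln 2$ while the collapsed state carries none, which is precisely the asymmetry between the two measurements that the main text identifies.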
In von Neumann's original argument, the Step 2 spin measurement is \textit{non-selective}\footnote{Or rather should have been, given that von Neumann himself begins his argument with a spin mixture. This however does not matter for conceptual purposes.}, which means that even if the system has \textit{de facto} collapsed into one of its eigenstates, an external agent\footnote{Some words of clarification regarding my use of the term `agent': an agent does of course not need to be a human being but can be anything that is able to measure and react to the measurement outcome accordingly. For this reason I will use the term `agent' interchangeably with the term `measurement apparatus' or even `memory cell', implying that even a simple binary system can serve as an `agent'. Using the term `agent' in this way therefore does not imply subjectivity of any kind.} would not be able to determine into which state the system has collapsed and would therefore describe the system by a density matrix $\rho^{(2,red)}$. The system would be in a so-called proper mixture, meaning that it is possible to understand $\rho$ as representing a probability distribution over pure states\footnote{In the case of no-collapse interpretations and ignoring decoherence, the agent is not yet entangled with the system. She only becomes entangled once she performs the Step 4 location measurement.}. The von Neumann entropy of the system at Step 2 has thereby increased compared to its previous state, in agreement with what von Neumann considers to be the irreversibility of a `Prozess 1'. The Step 4 location measurement on the other hand is \textit{selective} --- it establishes correlations between the agent who performs the measurement\footnote{More precisely the location measurement apparatus, but I will take those two to be synonymous for the time being.} and the system. These correlations then allow the agent to perform further operations on the system, such as the Step 5 compression of the box.
For H\&S, the von Neumann entropy of the system at Step 4 has therefore decreased relative to Step 3. Since H\&S want to include the non-selective spin measurement at Step 2, the Step 4 location measurement is indeed a necessary requirement for the single-particle case, given that the compression is not allowed to be conditional on the outcome of the Step 2 measurement. Without the selective measurement, the work-free compression against vacuum could not take place. The limiting case of infinitely many particles (and in fact von Neumann's original account) does not require this selective measurement since the number of particles within each chamber of the box becomes equal. The problem with including a second measurement on the system is that this second measurement also introduces a second measurement apparatus. It will be shown shortly that H\&S's conclusion is based on an erroneous calculation of the von Neumann entropy when the system is correlated to this second measurement apparatus. Before elaborating on this point, however, I would like to discuss another shortcoming of their argument. \subsection{Violation of the Second Law}\label{sec:violation} H\&S notably claim that the thermodynamic entropy change is zero throughout the whole cycle and in particular, that at the end of the cycle ``[...] the measuring device [can be returned to its initial ready state] unitarily.'' \citep[p.161]{hemmo_von_2006}, and hence without any heat cost. To appreciate the consequences of this claim, let us assume that it is indeed possible to unitarily bring the measurement apparatus back to its original position without a compensating heat transfer into the environment, as H\&S claim. We may then construct a slightly amended version of their proposed cycle. For this amended version, the only thing we change is Step 5, which instead of being a compression we turn into an isothermal expansion.
This means that instead of compressing the empty side against the vacuum, we let the particle push against the partition in a quasi-static, isothermal fashion. Given that the position of the particle is `known' as a result of the location measurement, it is possible to attach a weight to the partition, thereby extracting $kT \ln2$ units of work from the system during the expansion, while the corresponding amount of heat is delivered by the heat reservoir. After the work extraction, the measurement apparatus is brought back to its initial state (which according to H\&S can be done for free). The partition is then re-inserted into the original system (for free), the position of the particle measured again (for free) and the above process is repeated, thereby extracting arbitrarily large amounts of work from this one-particle engine with the sole effect being that heat is extracted from a single reservoir. This constitutes a direct violation of the Kelvin-Planck statement of the second law \citep{planck_treatise_1991}. Von Neumann himself, as the authors acknowledge, states that \begin{quote}\small\singlespacing ``[...] in the sense of phenomenological thermodynamics, each conceivable process constitutes valid evidence, provided that it does not conflict with the two fundamental laws of thermodynamics.'' \citep[p.192]{von_neumann_mathematische_1996}\footnote{[...] im Sinne der ph\"anomenologischen Thermodynamik ist jeder denkbare Prozess beweiskr\"aftig, wenn er die beiden Haupts\"atze nicht verletzt.}, \end{quote} and so at this point, one might already conclude that Hemmo and Shenker's argument fails, as their suggested unitary reset \textit{does} conflict with one of the fundamental laws of thermodynamics.
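In numbers, the ledger of the amended cycle looks as follows (a back-of-the-envelope sketch of mine, assuming an ideal single-particle gas at an arbitrarily chosen reservoir temperature): with a free unitary reset the extracted work grows without bound, whereas a Landauer reset cost of $kT\ln 2$ per cycle exactly cancels the gain.

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed reservoir temperature, K

# Work extracted per cycle: isothermal expansion V -> 2V of a single particle
W_extracted = k_B * T * np.log(2)

# Landauer cost of resetting the one-bit memory once per cycle
W_reset = k_B * T * np.log(2)

n_cycles = 1000
net_free_reset = n_cycles * W_extracted            # free reset: grows without bound
net_landauer = n_cycles * (W_extracted - W_reset)  # Landauer reset: ledger balances

print(net_free_reset > 0)  # True: reliable work from a single reservoir
print(net_landauer)        # 0.0
```

The first line of output is exactly the Kelvin-Planck violation described above; the second shows how the Landauer cost discussed below restores the balance.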
It should be noted, however, that there exists some controversy in the literature about whether or not we ought to expect the second law to hold in the single-particle case and whether or not thermodynamic considerations in these cases are indeed acceptable\footnote{I am grateful to two anonymous referees for prompting me to elaborate on this point.} \citep{maxwell_diffusion_1878,earman_exorcist_1999,norton_all_2013,hemmo_road_2012}. In fact, H\&S have elsewhere expressed skepticism on whether the second law holds in such cases \citep{hemmo_road_2012}. While it seems reasonable to assume that in the article under consideration the second law is required to hold (after all, the article is about comparing the \textit{thermodynamic} entropy with the von Neumann entropy and the authors explicitly state that they ``do not address this issue'' of skepticism here \citep[p.158]{hemmo_von_2006}), it is worth emphasising that the above violation is a \textit{reliable} violation of the second law. This means that even if one is willing to sacrifice the strict second law, which states that entropy should never decrease, and instead adopts either a \textit{statistical} or a \textit{probabilistic} version of the second law\footnote{The statistical version, which Maxwell adopted, takes the second law to be a statistical, as opposed to mathematical, truth that holds for macroscopic systems where fluctuations are rare. Naturally, the statistical second law does not hold anymore for microscopic systems where fluctuations become relevant. The alternative is a probabilistic version, which rules out the \textit{reliable} extraction of work from a single heat reservoir. For this probabilistic law, system size seems less of an issue.
See also \cite{maroney_information_2009} or \cite{myrvold_statistical_2011} for a distinction between these different versions.}, the problem with the above case is that we are confronted with a violation that allows us to extract work from the heat reservoir 100\% of the time. It allows us to extract work reliably and continuously, and so is a particularly serious version of a Maxwell's demon. A common strategy to recover the second law from situations like the one presented above is to invoke Landauer's principle \citep{landauer_irreversibility_1961,bennett_logical_1973}. It states that there is a heat cost involved in the resetting of the measurement apparatus to its initial state and is widely accepted in the physics community.\footnote{Notably, some philosophers have challenged its validity \citep{earman_exorcist_1999,norton2011waiting}, while others have made arguments in its favour \citep{ladyman_connection_2007,maroney_generalizing_2009,ladyman2013landauer,wallace2014thermodynamics}.} For the case presented above, the resetting of the memory cell would then lead to an increase of heat in the environment, thereby offsetting the previous entropy decrease.\footnote{Note that once we take into account this heat cost, H\&S's \textit{original} one-particle cycle ceases to be entropy neutral, as the resetting step will lead to an entropy increase in the environment.} And indeed, this is what is required in the present case, for contrary to H\&S's assertion, a unitary reset of the measurement device is \textit{not} possible. To see this, we consider the end of the cycle. The measurement device is then in one of the two mutually exclusive states $\ket{-}_M$ or $\ket{+}_M$. As can easily be seen, there then exists \textit{no} unitary operator that reliably maps the memory cell back to its initial, ready state $\{\ket{-}_M,\ket{+}_M\} \mapsto \ket{ready}_M$.
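The impossibility claim follows from the fact that unitaries preserve inner products: the orthogonal pair $\ket{+}_M$, $\ket{-}_M$ would have to be mapped to the single state $\ket{ready}_M$, which has unit overlap with itself. A minimal numerical check (my illustration, using a randomly drawn unitary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal memory states |+>_M and |->_M
plus = np.array([1.0, 0.0])
minus = np.array([0.0, 1.0])

# An arbitrary unitary: the Q factor of a random complex matrix
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(A)

# Unitaries preserve inner products, so U|+> and U|-> remain orthogonal
overlap = np.vdot(U @ plus, U @ minus)
print(abs(overlap))  # ~0: the two images can never coincide in one |ready> state
```

Since $\langle -|+\rangle = 0$ is preserved by every unitary $U$, no choice of $U$ can send both memory states to the same ready state.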
The only way to reset the measurement device unitarily is if one recorded beforehand in which one of the two mutually exclusive states the device is in. To do so, however, one requires a measurement on the measurement device itself, performed by a second measurement device. But then one would want to reset this second measurement device unitarily, too, for which a third device would be needed, and so on. Eventually, one would run out of resources and a Landauer-type reset, at a cost, becomes unavoidable. \subsection*{Which Entropy?} Notwithstanding the above criticism, H\&S's main point --- that the von Neumann entropy (as opposed to the thermodynamic entropy) \textit{decreases} during the Step 4 location measurement, and that this gives us reason to reject the conceptual equivalence of the two entropies --- still stands, or seems to. \begin{quotation}\small \noindent As a result of the location measurement, the von Neumann entropy decreases back to its original value. \citep[p.163]{hemmo_von_2006} \end{quotation} In this section I will discuss this claim and in particular I will show that: \begin{itemize} \item[(i)] The physically relevant entity is the joint entropy of system \textit{and} measurement apparatus. It remains the same. \item[(ii)] It is the system's so-called conditional entropy that decreases during the measurement, not the system's marginal von Neumann entropy. \end{itemize} I will now begin with a justification of the two claims. In phenomenological thermodynamics, the joint entropy of two systems is always the sum of their respective entropies. The von Neumann entropy (in the classical case, the Gibbs entropy), on the other hand, is generally subadditive and additive only in the absence of correlations between two systems: \begin{equation}\label{eq:uncorrelated} H(S,M)\leq H(S)+H(M), \end{equation} where $S$ stands for `system' and $M$ stands for `measurement apparatus' or `memory cell'.
In the concrete case of the Step 4 location measurement, we can model the measurement apparatus $M$ as a box containing a single molecule and divided by a partition. It can then be in one of two mutually exclusive states, corresponding to the position of the molecule, left ($l$) or right ($r$). We assume it needs to be in a `ready'-state before the measurement, which we choose to be $l$. In this case, the entropy is zero. If we consider the case of collapse, then at the time of the measurement the system will have already collapsed into a spin eigenstate. The correlations between the location of the system and the measurement apparatus are then all essentially classical and the von Neumann entropy before the measurement can be rewritten as: \begin{equation} H(S)=-k_B \sum_{s=l,r} p(s) \ln p(s), \end{equation} where $p(s)$ is the probability of the system being in the left or right chamber of the box. Before the Step 4 location measurement, system and measurement apparatus are not correlated, and their joint entropy is given by $H_3(S,M)=H_3(S)+H_3(M)$, where $3$ is taken to denote `Stage 3', or, in other words, `before the Step 4 measurement'. During the measurement, the memory cell will align itself with the position of the particle and the two systems become correlated. The joint entropy can now no longer be expressed as the sum of the individual entropies and instead becomes \begin{equation} \label{eq:conditional} H_4(S,M)=H_4(S\vert M) + H_4(M) \leq H_4(S)+H_4(M), \end{equation} with $H(S\vert M)$ being the so-called \textit{conditional entropy}, which quantifies how much $S$ is correlated with $M$ and which is given by \begin{align} H(S\vert M)&=-k_B\sum_{s,m}p(s,m)\ln{p(s\vert m)} \\&=-k_B\sum_{m}p(m)\sum_{s}p(s\vert m)\ln{p(s\vert m)}\\&=\sum_m p(m)H(s).
\end{align} $p(s)$ and $p(m)$ are the probabilities that system and memory cell are found in macrostate $s=l_s,r_s$ or $m=l_m,r_m$ respectively, $p(s,m)$ is their joint probability and $p(s\vert m)$ the conditional probability, with $H(s)=-k_B \sum_s p(s\vert m)\ln p(s\vert m)$. The conditional entropy is non-negative and is maximal when system and measurement apparatus are uncorrelated: $0\leq H(S\vert M) \leq H(S)$, in which case Equation (\ref{eq:conditional}) reduces to Equation (\ref{eq:uncorrelated}). It is often considered to be the entropy relative to an agent (in this case the measurement apparatus). Let us now go back to H\&S's claim that the von Neumann entropy of the system decreases during the location measurement. Does it? The answer is no. What decreases, however, is the \textit{conditional entropy} relative to the measurement apparatus: \begin{equation} H_3(S\vert M)\geq H_4(S\vert M). \end{equation} It reduces to zero, because system and measurement apparatus become perfectly correlated during the measurement. And so when H\&S claim that the system's entropy has decreased, what they \textit{mean} is that the system's conditional entropy has decreased. But the conditional entropy is distinct from the marginal entropy. Rewriting the joint entropy of system and measurement apparatus demonstrates this: \begin{equation} H_4(S,M)=H_4(S\vert M)+H_4(M)=H_4(M\vert S)+H_4(S). \end{equation} As opposed to phenomenological thermodynamics, which treats systems as black boxes, (classical and quantum) statistical mechanics is able to detect correlations between subsystems, allowing us to mathematically handle the concept of `measurement' in the first place.
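These identities are easy to verify numerically. The sketch below (my illustration, in units of $k_B=1$) models the perfectly correlating location measurement as a classical joint distribution over (particle side, memory state) and confirms that $H(S\vert M)$ drops from $\ln 2$ to $0$ while the joint entropy stays at $\ln 2$ throughout:

```python
import numpy as np

def shannon(p):
    """H = -sum p ln p over the nonzero entries (k_B = 1)."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Joint distributions p(s, m); rows: s = l, r; columns: m = l, r.
# Before the measurement: S uniform, memory in its ready state l.
p_before = np.array([[0.5, 0.0],
                     [0.5, 0.0]])

# After the measurement: memory perfectly aligned with the particle.
p_after = np.array([[0.5, 0.0],
                    [0.0, 0.5]])

def joint_and_conditional(p_sm):
    H_joint = shannon(p_sm)
    H_M = shannon(p_sm.sum(axis=0))     # marginal entropy of the memory
    return H_joint, H_joint - H_M       # H(S|M) = H(S,M) - H(M)

H3, H3_cond = joint_and_conditional(p_before)
H4, H4_cond = joint_and_conditional(p_after)
print(H3, H3_cond)  # ln 2, ln 2
print(H4, H4_cond)  # ln 2, 0.0
```

Both claims (i) and (ii) above are visible at once: $H_3(S,M)=H_4(S,M)=\ln 2$, while the conditional entropy alone falls to zero.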
If we associate entropy with the potential to (reliably) extract work from a system, then the conditional entropy certainly quantifies this ability to a certain extent: a memory cell endowed with an automaton would now be able to (reliably) extract work from the system by allowing it to isothermally expand into the other half of the box, thereby raising a weight, in contrast to an external agent who is not correlated with the particle's location. But this is just the ordinary Maxwell's demon scenario\footnote{As mentioned before, Maxwell in 1867 introduced the idea of a ``very observant and neat-fingered being'' (as cited in \citep{maxwell_maxwell_1995}), which was intended to demonstrate that the orthodox second law of thermodynamics could be broken in principle by exploiting fluctuations. In the thought experiment, a box filled with monoatomic gas is divided into two parts by a partition into which a small door is inbuilt. The ``being'', later called Maxwell's demon, checks every atom that approaches the door and either lets the atom pass or not. Since the gas molecules are subject to a velocity distribution, he can decide to only let the fast molecules pass in the one direction and to only let slow molecules pass in the other direction. By doing so the demon creates a temperature gradient, allowing him to violate the second law.} applied to a one-particle setting. What becomes important for thermodynamic treatments in such a setting is the joint entropy of system \textit{and} measurement apparatus, as the joint system (ideally) has no correlations with the outside and can thus be treated as a thermodynamic black box. And it turns out that the behaviour of the thermodynamic entropy of the joint system is exactly mirrored by the behaviour of the von Neumann entropy: the \textit{joint} entropy of system and measurement apparatus does not change during the location measurement: \begin{equation} \label{eq:joint} H_3(S,M)=H_4(S,M).
\end{equation} And so, to summarise the above: all that changes during the location measurement is the conditional entropy; neither the joint entropy of system and measurement apparatus nor the marginal entropy $H(S)$ changes. The joint entropy, just like the thermodynamic entropy of the joint system, remains the same during the location measurement. \subsection{No-Collapse Scenarios} Let us now consider the case of no-collapse scenarios. In no-collapse scenarios, following the measurement in Step 4, the measurement apparatus and the location degree of freedom become entangled. The reduced density matrix obtained after tracing out the decohering environment is therefore an improper mixture, owing to the neglect of the correlations with the environment. After the Step 4 measurement, the density matrix of the combined system and measurement apparatus is given by \begin{equation} \rho^{(4,P+M)}= \frac{1}{2}\left( \ket{+_z}\!\bra{+_z}_P\ket{L}\!\bra{L}_P\ket{L}\!\bra{L}_M+\ket{-_z}\!\bra{-_z}_P\ket{R}\!\bra{R}_P\ket{R}\!\bra{R}_M\right), \end{equation} where now $\ket{L}_M$ and $\ket{R}_M$ represent the states of the measurement apparatus. The correlations between the measurement apparatus and the system are of a classical nature, and so also in the absence of collapse, the von Neumann entropy of system and apparatus has not changed during the Step 4 measurement. \section{Conclusion} This article considers von Neumann's introduction of $-\text{Tr}\rho\ln\rho$ as the quantum mechanical generalisation of thermodynamic entropy. In particular, it shows that an argument raised by \cite{shenker_is_1999} and \cite{hemmo_von_2006} against the equivalence of von Neumann and thermodynamic entropy is problematic because a) their reasoning allows for a violation of the second law of thermodynamics and b) the alleged disparate behaviour of von Neumann and thermodynamic entropy during the Step 4 location measurement is in fact due to a wrong calculation of the von Neumann entropy.
It is in fact the system's conditional entropy that decreases during this step, leading to the seemingly disparate behaviour of the two entropies. Finally, the article shows that the relevant quantum entropy, the joint entropy of system and measurement apparatus, remains unchanged during the location measurement and thus exactly mirrors the thermodynamic entropy. \section*{Appendix: Von Neumann, Entropy and Single Particles}\label{sec:response} In his original setup, von Neumann introduced a `gas' consisting of individual quantum systems, locked up in boxes and placed in a further, giant box\footnote{Such a setup was first proposed by \cite{einstein_beitraege_1914}.}. The `gas' represents an imaginary, statistical but finite \textit{ensemble}. The density operator, which he calls the `statistical operator', can only relate to such a \textit{Gesamtheit}. This means that even in the case of an individual quantum system, von Neumann's argument would remain unchanged: the density operator of this individual quantum system would \textit{still} relate to an ensemble of systems, and a system containing a single particle would therefore \textit{still} be modeled as an $N$-particle ensemble. The statistical representations of a) a system containing a single particle, and b) a system containing many particles, are therefore identical. This, however, does not imply that von Neumann denies the meaningful application of thermodynamics to individual particles, quite the contrary: von Neumann explicitly considers the case of a single particle in a box \citep[p.212]{von_neumann_mathematische_1996}.
He uses the example to demonstrate that the capacity of an agent to extract work from such a single-particle thermodynamic system depends on the agent's state of knowledge about the position of the particle, and therefore adopts what one might call an epistemic interpretation of entropy: \begin{quote}\small\singlespacing The temporal variations of the entropy are due to the fact that the observer does not know everything, or rather that he cannot determine (measure) everything that is in principle measurable. \citep[p.213]{von_neumann_mathematische_1996}\footnote{``Die zeitlichen Variationen der Entropie r\"uhren also daher, dass der Beobachter nicht alles weiss, bzw. dass er nicht alles ermitteln (messen) kann, was prinzipiell messbar ist.''} \end{quote} While one may raise several severe objections to this reading of entropy and the density operator, the above nevertheless shows that von Neumann's argument in its original intention applies not only to large (macroscopic) systems, but also to small (microscopic) systems. \end{document}
\begin{document} \title{Non-holonomic tomography II: Detecting correlations in multiqudit systems} \author{Christopher Jackson and Steven van Enk} \affiliation{ Oregon Center for Optical Molecular and Quantum Sciences\\ Department of Physics\\ University of Oregon, Eugene, OR 97403} \begin{abstract} In the context of quantum tomography, quantities called partial determinants (PDs)\cite{jackson2015detecting} were recently introduced. PDs are explicit functions of the collected data which are sensitive to the presence of state-preparation-and-measurement (SPAM) correlations. In this paper, we demonstrate further applications of the PD and its generalizations. In particular we construct methods for detecting various types of SPAM correlation in multiqudit systems --- e.g.\ measurement-measurement correlations. The relationship between the PDs of each method and the correlations they are sensitive to is topological. We give a complete classification scheme for all such methods but focus on the explicit details of only the most scalable methods, for which the number of settings scales as $\mathcal{O}(d^4)$. This paper is the second of a two-part series, of which the first paper\cite{nonholo1} develops a theoretical perspective on the PD, particularly its interpretation as a holonomy. \end{abstract} \maketitle \section{Introduction} In any quantum tomography experiment, one has the ability to perform various state preparations and measurements. We may abstractly represent these abilities by devices with various settings (Figure \ref{knobs}.) In standard quantum tomographies such as state, detector, or process tomography, it is assumed, respectively, that either the measurement device, the state device, or both are already characterized and may thus provide a resource to determine the parameters associated with the yet uncharacterized devices.
Fundamental to the practice of these tomographies is a much subtler assumption: that the performance of each device is independent of the use and history of every other device. A problem in recent years has been the issue of estimating quantum gates while taking into account that there are small but significant errors in the states prepared and measurements made to probe the gates, so-called SPAM errors \cite{merkel}. Any practice which takes into account SPAM errors will be generically referred to as SPAM tomography. Several works have come out in SPAM tomography particular to the task of making estimates in spite of such conditions \cite{merkel,gst,stark}, all of which speak to the notion of a ``self-consistent tomography.'' However, these works consistently assume by fiat that the SPAM errors are uncorrelated. In \cite{jackson2015detecting} it was demonstrated that one can test for the presence of correlated SPAM errors using so-called partial determinants (PDs), which bypass any need to estimate state or measurement parameters individually. The logic behind the PD is simple: uncorrelated SPAM corresponds to a particular ability to factorize the estimated frequencies into a product of state and measurement parameters. Such a factorization always exists for small enough numbers of settings but does not exist for larger numbers of settings if there are correlations. Thus, the notion of parameter independence can be viewed as either a local or a global property. PDs are then a measure of the contradiction that results from requiring that multiple sets of locally uncorrelated settings be consistent with each other. In other words, SPAM correlations correspond to holonomies (or measures of global contradiction) in overcomplete tomography experiments (hence the title, ``non-holonomic tomography.'') Further details on this perspective may be found in \cite{nonholo1}.
For multiqudit systems, the notions of product state and product measurement introduce further kinds of factorizability in estimated frequencies. Particularly, we will focus on systems where there is a single device associated with the preparation of multiqudit states and a measurement device for each qudit separately (but not necessarily independently) --- i.e.\ systems where we expect outcome probabilities to factor into the form $\mathrm{Tr} \rho (E \otimes\cdots\otimes E)$. Sure enough, PDs can be generalized to measure the degree to which such factorizations do not exist. Thus, these generalized PDs can serve as tests for the presence of various state-state correlations, measurement-measurement correlations, as well as mixed SPAM correlations. Further, such generalized PDs can be much more scalable than the original PD --- $\mathcal{O}(d^4)$ settings versus $\mathcal{O}(d^{4m})$ settings, where $m$ is the number of qudits. The main portion of this work will demonstrate how to classify the various PDs one could consider. \begin{figure} \caption{ On the left is a device which prepares various signals on demand depending on which button, $a\in\{1,\ldots,N\}$, is pressed.} \label{knobs} \end{figure} The most basic aspect of non-holonomic tomography relies on the notion of an effectively uncorrelated system. With this notion, one emphasizes the perspective that, although one is not able to measure individual device parameters, correlation is simply the inability to define parameters that are organized according to a particular model. Similar forms of analysis have come up in the context of matrix product states\cite{perez2006matrix, schon2007sequential, crosswhite2008finite}, a way of representing various kinds of many-body quantum states that is particularly elegant for calculating correlation functions.
Similar analyses can also be found in the more abstract context of (generalized) Bayesian networks\cite{geiger2001stratified, garcia2005algebraic, henson2014theory}, where the presence of hidden variables within a model or causal structure results in a rich set of testable constraints on the probabilities associated with the observed variables. (Bell inequalities are an example of this.) Also from a fundamental perspective, similar analyses may be found in works on general probabilistic theories \cite{hardy2001quantum, hardy2013formalism}, where much attention is spent on the correspondence between operational descriptions of systems and the mathematical calculations that represent them. \section{Tomography: States, Observables, and Data} \subsection{The Born Rule Revisited} For every quantum experiment, quantum events are counted and the frequency of each outcome is understood to estimate the product of a state and a POVM element (Figure \ref{knobs}.) This is the famous Born Rule, usually denoted \begin{equation} {f_a}^i = \mathrm{Tr}\rho_a E^i \end{equation} where $\rho_a$ is the density operator for the state prepared according to $a$, $E^i$ is the POVM element for an outcome of the measurement made according to $i$, and ${f_a}^i$ is the estimated frequency. However, we wish to consider the situation where the state preparations and measurements (SPAM) behind these estimated frequencies actually fluctuate. In such a case, one must modify the Born Rule to read \begin{equation} {f_a}^i = {\langle\mathrm{Tr}\rho E\rangle_a}^i \end{equation} where ${\langle\rangle_a}^i$ denotes the average over the ensemble of trial runs of the devices set to $a$ and $i$ --- that is, $\rho$ and $E$ are now to be considered (positive operator-valued) random variables, distributed according to the setting $(a,i)$.
It is useful to more generally consider estimates of any statistical observable, ${S_a}^i$ such that \begin{equation}\label{datadata} {S_a}^i = {\langle\mathrm{Tr}\rho \Sigma\rangle_a}^i, \end{equation} where $\Sigma$ is a Hermitian (not necessarily positive) operator-valued random variable representing the corresponding quantum observable. The setting $i$ still represents a measurement, but can be more generally associated with a specific linear combination of outcomes which may be useful to consider, e.g. $\Sigma^i = \proj{+_i} - \proj{-_i}$ where $\ket{\pm_i}$ are eigenstates of spin in the $i$-direction. Any such ${S_a}^i$ will be referred to as quantum \emph{data}, calculated as the same linear combinations of measured frequencies as the observables they correspond to; that is, ${S_a}^i = {f_a}^k{c_k}^i$ just as $\Sigma^i = E^k{c_k}^i$ for whatever ${c_k}^i$ are useful. More traditional language would refer to ${S_a}^i$ as a ``quantum expectation value'' of the observable $i$ given state $a$. However, for the purposes of this paper one should refrain from such language as it is crucial to focus instead on states and observables themselves as the random variables, rather than the actual result or ``blinking of the light'' for each trial.\footnote{ The author is even inclined to suggest that states and outcomes should fundamentally be thought of as on an equal footing.} \subsection{The Partial Determinant: A Test for Correlated SPAM errors} In standard state, detector, and process tomographies, an experimentalist can ignore the ensemble average because they are (respectively) able to control either the measurements, the state preparations, or both. However, if one is doing SPAM tomography, where neither the state preparations nor measurements are assumed to be controlled, then the ensemble average suggests the possibility that SPAM errors are correlated, i.e.
\begin{equation}\label{corrcorr} {\langle\mathrm{Tr}\rho \Sigma\rangle_a}^i \neq \mathrm{Tr} \langle\rho\rangle_a\langle\Sigma\rangle^i. \end{equation} From the perspective of doing any of the standard tomographies, this is an awkward statement indeed because one does not have the resources necessary to access quantities such as $\langle\rho\rangle_a$ or $\langle\Sigma\rangle^i$ individually.\footnote{ One could parse correlations into two separate kinds of independence: The first kind being when ${\langle\mathrm{Tr}\rho \Sigma\rangle_a}^i = \mathrm{Tr} {\langle\rho\rangle_a}^i {\langle\Sigma\rangle_a}^i$. The second kind being when ${\langle\rho\rangle_a}^i = {\langle\rho\rangle_a}$ and ${\langle\Sigma\rangle_a}^i = {\langle\Sigma\rangle}^i$} One may thus be tempted to conclude that correlations (or lack thereof) cannot be determined without access to the individual expectation values. However, this is not the case. Correlations such as Equation \pref{corrcorr} can be determined without individual expectation values because equations like ${\langle\mathrm{Tr}\rho \Sigma\rangle_a}^i = \mathrm{Tr} \langle\rho\rangle_a\langle\Sigma\rangle^i$ express a very special factorizability of the data, Equation \pref{datadata}. We thus proceed with the following operational definition: we say that data ${S_a}^i$ is \emph{effectively (SPAM) uncorrelated} when we can express it as a simple matrix equation: \begin{equation}\label{eff} {S_a}^i = {P_a}^\mu{W_\mu}^i. \end{equation} The rows of $P$ and columns of $W$ (when they exist) represent the states and observables, $\rho_a = {P_a}^\mu\sigma_\mu$ and $\Sigma^i = \sigma^\mu {W_\mu}^i$, where $\{\sigma_\mu\}$ is some operator basis and $\{\sigma^\mu\}$ is the corresponding dual basis. Repeated indices are to be summed over. We relax the requirement that the rows of $P$ correspond to positive operators. Three important observations should be made about this definition. 
First, Equation \pref{eff} requires that the sum on $\mu$ be over $\le n^2$ operators for an $n$-dimensional Hilbert space. (Indeed, the notion of effectively uncorrelated is always relative to an assumed dimension, $n$.) Second, one can always write an expression like Equation \pref{eff} (with the sum on $\mu$ being over $\le n^2$) so long as the number of state settings, $N$, and detector settings, $M$, are both $\le n^2$. Third, when $P$ and $W$ exist, they are in general not unique because one could just as well use $PG$ and $G^\text{-1} W$ where $G$ is an $n^2 \times n^2$ real invertible matrix. The components of $G$ are gauge degrees of freedom which have been referred to as \emph{SPAM gauge}\cite{merkel,gst,stark} or \emph{blame gauge}\cite{jackson2015detecting}. The combination of these observations suggests the following generic protocol for quantifying correlations: First, perform SPAM tomography with $N > n^2$ and $M > n^2$. Such SPAM tomography may be referred to as overcomplete because such numbers of settings would correspond to overcomplete standard tomographies, if only the appropriate devices were controlled and well characterized. Second, consider $n^2 \times n^2$ submatrices of the data, which can be thought of as corresponding to minimally complete tomographies. Each such submatrix is effectively uncorrelated and thus may be associated with a ``local'' gauge degree of freedom. Finally, check whether the states and observables for each minimally complete submatrix can be chosen so that such choices among all submatrices are consistent with each other. It turns out that the amount of inconsistency can be quantified rather elegantly by what has been called a partial determinant.\cite{jackson2015detecting} Such protocols can be understood as an organization of the data into a \emph{fiber bundle}. Fiber bundles are mathematical structures which support the notions of \emph{connection} and \emph{holonomy} which are rather ubiquitous concepts.
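The rank bound and the SPAM (blame) gauge freedom can be checked numerically. The sketch below uses random Gaussian entries as stand-ins for device parameters of a hypothetical single-qubit system ($n = 2$); it is an illustration of the linear algebra, not of any particular experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
n2 = 4        # n^2 for n = 2 (hypothetical single-qubit system)
N, M = 8, 8   # overcomplete: more settings than n^2

# Effectively uncorrelated data factors as S = P W.
P = rng.normal(size=(N, n2))   # rows play the role of state parameters
W = rng.normal(size=(n2, M))   # columns play the role of observable parameters
S = P @ W

# The factorization forces rank(S) <= n^2 even though S is 8 x 8 ...
print(np.linalg.matrix_rank(S))

# ... and (P G, G^{-1} W) reproduce exactly the same data for any invertible G,
# so the factors themselves are not recoverable from S alone.
G = rng.normal(size=(n2, n2))
assert np.allclose((P @ G) @ (np.linalg.inv(G) @ W), S)
```

The gauge matrix $G$ here is exactly the $n^2 \times n^2$ real invertible matrix of the text: it rotates blame between states and observables without changing the data.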
In \cite{nonholo1}, it is demonstrated how tomography and partial determinants can be interpreted as connections and holonomies, respectively. However, for the sake of those who are interested exclusively in potential applications, an effort has been made to avoid such language in this current paper. Nevertheless, occasional references will be made and terms will be defined which allude to this perspective. It is from this perspective that the title ``Non-holonomic tomography'' is fully justified as a name for partial determinants. Setting this alternative perspective aside and using standard linear-algebraic considerations instead, one can observe that the definition of effective independence is equivalent to the statement that the data must satisfy $\mathrm{rank}(S) \le n^2$. For example, consider devices whose numbers of settings are $M = N = 2n^2$ and organize the data as \begin{equation}\label{square} S = \left[ \begin{array}{cc} A & B\\ C & D \end{array} \right] \end{equation} where each corner is an $n^2 \times n^2$ matrix. One can define a partial determinant (or PD) for this arrangement of the data, \begin{equation}\label{PD} \Delta(S) = A^\text{-1} B D^\text{-1} C. \end{equation} The PD has the property that it is equal to the identity matrix if and only if the data is effectively SPAM uncorrelated.\cite{jackson2015detecting} The proof of this is simple if one observes that $\mathrm{rank}(S) \le n^2$ if and only if there exist $n^2 \times n^2$ matrices $P_1$, $P_2$, $W_1$, and $W_2$ such that \begin{equation} S = \left[ \begin{array}{c} P_1\\ P_2 \end{array} \right] \left[ \begin{array}{cc} W_1 & W_2 \end{array} \right].
\end{equation} \subsection{Multiqudit Correlations: SPAMs and Non-Localities}\label{recallterms} In this paper we consider extensions of the general notion of a PD to multiqudit systems ($n = d^m$ for $m$ qu$d$its) with a concentration on PD constructions with a number of device settings which scales, to lowest order, as $\mathcal{O}(d^4)$. Specifically, we will focus on systems where the preparation of a multiqudit state can be represented by a single device and the measurement of each qudit can be represented by a separate (but not necessarily independent) device. Uncorrelated measurements between different qudits will be referred to as \emph{local} measurements.\footnote{ This second meaning of the word ``local'' should not be too confusing as it will be clear from context whether we are considering individual qudit observables or small numbers of state and measurement settings.} If all measurements are effectively local and SPAM uncorrelated, then any data collected for $m$ qudits can be factored into the form \begin{widetext} \begin{equation} S_a^{ijk\ldots} = \mathrm{Tr}\big(\rho_a\;\Sigma_1^i\otimes\Sigma_2^j\otimes\Sigma_3^k\cdots\big) = R_a^{\lambda\mu\nu\ldots}{W_{1\lambda}}^i{W_{2\mu}}^j{W_{3\nu}}^k\cdots \end{equation} \end{widetext} where $\rho_a = R_a^{\mu\nu\cdots}\sigma_\mu\otimes\sigma_\nu\otimes\cdots$ and $\Sigma_q^i = W^i_{q\mu}\sigma^\mu$ for each qudit $q \in\{1,\ldots,m\}$, for some operator bases $\{\sigma_\mu\}$ and dual bases $\{\sigma^\mu\}$, and a sum over repeated Greek indices (from $1$ to $d^2$) is always implied.\footnote{ Technically, we should write $(\sigma_q)_\mu$ to emphasize that the qudit measurements do not necessarily share a reference frame, but we will not write this here for the sake of reducing index clutter.} Two kinds of indices can be distinguished in these expressions. There are those indices which correspond to device \emph{settings} ($a$, $i$, $j$, ...)
which are associated with degrees of freedom that can be controlled. Such indices may be referred to as ``external'' because they correspond to degrees of freedom outside of the quantum system being probed. Then there are those indices which correspond to device \emph{parameters} ($\mu$, $\nu$, ...) and represent the model: the Born rule with $d$-dimensional Hilbert spaces. These indices may be referred to as ``internal'' because they are always summed over and thus are accompanied by gauge degrees of freedom. For such multiqudit systems, there are now multiple kinds of correlation one can have. We refer to correlations between states and measurements on qudit $q$ as $\text{SPAM}_q$ correlations. Further, let us refer to correlations between measurements on qudit $q$ and measurements on qudit $p$ as $(q,p)$-nonlocalities. Such correlations are to be understood with the notions of \emph{effectively} $\text{SPAM}_q$ uncorrelated data and \emph{effectively} $(q,p)$-local data. Then one may proceed to categorize the various ways a partial determinant may be constructed to test if a system is $\text{SPAM}_q$ correlated or $(q,p)$-nonlocal. For simplicity, we are only considering 2-point correlations in this paper (see Conclusions and Discussion.) \section{Local Measurements of Two Qudits} \begin{figure} \caption{ A two qudit experiment where there is a single device which prepares qud$^2$it states and two devices which make qudit measurements. We would like to know if the data can be modeled by equation \pref{2qu}.} \label{2qudev} \end{figure} For two qudits, a.k.a. a qud$^2$it, the data is an object with 3 (external) indices, 1 for state preparations and 2 for the measurements on each qudit. If there are no correlations, then we may write \begin{equation}\label{2qu} S_a^{ij} = R_a^{\mu\nu}{V_\mu}^i{W_\nu}^j.
\end{equation} One can consider this as a matrix equation in the most obvious way: \begin{equation}\label{snip1} {S_a}^I = {R_a}^M{X_M}^I \end{equation} where $M = (\mu,\nu)$, $I = (i,j)$, and ${X_M}^I = {V_\mu}^i{W_\nu}^j$, treating the two qudit measurements as one qud$^2$it measurement. This separation of parameters suggests the original protocol \cite{jackson2015detecting} for detecting what will now be called \emph{generic} SPAM correlations, constructing a partial determinant for $n = d^2$. One can also consider equation (\ref{2qu}) as a matrix equation in another way: \begin{equation}\label{snip2} {S_A}^j = {P_A}^\nu {W_\nu}^j \end{equation} where $A = (a,i)$ and ${P_A}^\nu = R_a^{\mu\nu}{V_\mu}^i$. One can interpret this separation as the measurement settings of one qudit being used to effectively prepare states for the other qudit. In this case, we know that if these effective states, set by $A$, are uncorrelated with the other qudit measurements, set by $j$ (so that we may write Equation \pref{snip2}), then a smaller ($n = d$) PD of ${S_A}^j$ must be the identity. We should stress that taking the inverse of a matrix like ${S_A}^j$ is very different from taking the inverse of a matrix like ${S_a}^I$ even if they consist of the same entries, only organized differently. \begin{figure} \caption{Diagrammatic representation of effectively completely uncorrelated data, Equation \pref{2qu}.} \label{pic1} \end{figure} One sees that there are already two distinct ways to be effectively uncorrelated: The first is to be SPAM uncorrelated in the generic sense, such that Equation \pref{snip1} exists. In this case, the rank of ${S_a}^I$ must be $\le d^4$, particularly for $>d^4$ state settings, $a$, and $>d^4$ measurement settings, $I$. The second is to be uncorrelated such that Equation \pref{snip2} exists.
In this case, the rank of ${S_A}^j$ must be $\le d^2$, particularly for $>d^2$ effective state settings, $A$, and $>d^2$ measurement settings, $j$, for the second qudit. \begin{figure} \caption{Diagrammatic representation of Equation \pref{snip1}.} \label{pic2} \end{figure} \begin{figure} \caption{Diagrammatic representation of Equation \pref{snip2}.} \label{pic3} \end{figure} Equations \pref{snip1} and \pref{snip2} represent weaker forms of effective independence than Equation \pref{2qu}. One should think of them as potential factorizations of the data which may or may not exist. It is helpful to represent Equations \pref{2qu} through \pref{snip2} diagrammatically, as in Figures \ref{pic1} through \ref{pic3}. Being able to factorize the data as in Equation \pref{2qu} means that the system can be considered completely uncorrelated. Being able to factorize the data as in Equation \pref{snip1} means that the data is effectively $\text{SPAM}_1$ and $\text{SPAM}_2$ uncorrelated. (Recall the terminology from the end of section \ref{recallterms}.) Being able to factorize the data as in Equation \pref{snip2} means that the data has effective $\text{SPAM}_2$ independence and (1,2)-locality. Similarly, there is another factorization that results from permuting the qudits whose existence would mean the system is effectively $\text{SPAM}_1$ uncorrelated and (1,2)-local. \subsection{Classifying Different PDs}\label{a_whole_bunch} PDs which test for generic SPAM correlations are relatively straightforward as there are only 2 main variations on their construction. In contrast, there are 11 distinct PDs one can consider relative to factoring the data as in Equation \pref{snip2}. These PDs differ in their construction by the number of settings used for $a$ and $j$ (or $i$) and by how these settings are organized. These different constructions are sensitive to different kinds of correlation.
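The two flattenings, Equations \pref{snip1} and \pref{snip2}, and their rank bounds can be made concrete numerically. Below is a sketch for $d = 2$ in which random Gaussian entries stand in for the (hypothetical) parameters of completely uncorrelated two-qubit data.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 2
N, M = 2 * d**4, 2 * d**2   # overcomplete numbers of settings

# Completely uncorrelated two-qubit data: S_a^{ij} = R_a^{mu nu} V_mu^i W_nu^j.
R = rng.normal(size=(N, d**2, d**2))
V = rng.normal(size=(d**2, M))
W = rng.normal(size=(d**2, M))
S = np.einsum('amn,mi,nj->aij', R, V, W)

# Flattening 1: rows a, columns I = (i, j); rank is bounded by d^4 = 16.
S1 = S.reshape(N, M * M)
# Flattening 2: rows A = (a, i), columns j; rank is bounded by d^2 = 4.
S2 = S.reshape(N * M, M)

print(np.linalg.matrix_rank(S1), np.linalg.matrix_rank(S2))
```

The same entries, organized differently, saturate very different rank bounds, which is why inverting ${S_A}^j$ and inverting ${S_a}^I$ are such different operations.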
Specifically, each PD will be equal to the identity when the system is effectively uncorrelated in a corresponding way. In order to organize the description of these various constructions, we must establish a few definitions and some notation. The procedure for constructing a PD can be summarized in two steps. The basic goal is to organize the data so that it is of the same form as Equation \pref{square}. Unlike in the original PD construction, rows and columns may now be products of multiple settings. The first step is then to organize the settings so as to construct a \emph{corner} template. The second step is to ``displace'' four instances of that corner which can then be connected in a \emph{loop}, as in Equation \pref{PD}. This constructed matrix of four corners shall be called a \emph{square}. \subsubsection{Generic SPAM Correlations}\label{gen} For detecting generic SPAM correlations, such that Equation \pref{snip1} does not exist, we denote the various numbers of experimental settings by $[N:M_1,M_2]$, where $N$ is the number of state settings (the range of $a$) and $M_q$ is the number of local measurement settings for $\text{qudit}_q$ (the range of $i$ or $j$.) The colon can be thought of as representing the dotted separation of Figure \ref{pic2}. Settings to the left of the colon are to be organized as a row index while settings to the right are to be columns. To calculate a partial determinant in this case, one needs to consider corners that are $d^4 \times d^4$ which further requires $N = d^4$ and $M_1=M_2=d^2$. (Recall that this is because the data must have rank $\le d^4$ if Equation \pref{snip1} exists.) A square can then be assembled from such a corner in two ways, which we denote simply by multiplying the appropriate setting number by 2: \begin{equation} [2d^4:2d^2,d^2] \hspace{30pt}\text{and}\hspace{30pt} [2d^4:d^2,2d^2]. \end{equation} Thus we have two kinds of generic PD.
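As a numerical sanity check on the square-and-loop construction, the sketch below (random Gaussian stand-ins for device parameters, corner size $r = n^2 = 4$ for a hypothetical qubit) assembles a square from four corners and verifies that $\Delta(S) = A^{-1}BD^{-1}C$ is the identity exactly when the data factorizes.

```python
import numpy as np

rng = np.random.default_rng(3)
r = 4   # corner size n^2 (hypothetical n = 2)

# Uncorrelated data of shape 2r x 2r factors as S = [P1; P2] [W1 W2],
# so every corner factors too: A = P1 W1, B = P1 W2, C = P2 W1, D = P2 W2.
P = rng.normal(size=(2 * r, r))
W = rng.normal(size=(r, 2 * r))
S = P @ W

def partial_determinant(S, r):
    """Delta(S) = A^{-1} B D^{-1} C for a square with r x r corners."""
    A, B = S[:r, :r], S[:r, r:]
    C, D = S[r:, :r], S[r:, r:]
    return np.linalg.inv(A) @ B @ np.linalg.inv(D) @ C

# Factorizable (uncorrelated) data gives the identity ...
assert np.allclose(partial_determinant(S, r), np.eye(r))

# ... while generic full-rank ("correlated") data does not.
S_corr = rng.normal(size=(2 * r, 2 * r))
print(np.allclose(partial_determinant(S_corr, r), np.eye(r)))
```

The identity follows because $(P_1W_1)^{-1}(P_1W_2)(P_2W_2)^{-1}(P_2W_1) = W_1^{-1}W_2W_2^{-1}W_1 = 1$, with the $P$ factors cancelling inside each inverse.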
Importantly, we use a `2' in our bracket notation as if to suggest implementing twice as many settings, as done originally. However, one could just as well make a square from any number of rows and columns, each $> d^4$. In Appendix \ref{plus1}, we demonstrate how to construct an $r \times r$ PD for an $(r+1)\times(r+1)$ matrix. Nevertheless, we will always write `$2$'s in our bracket notation for simplicity. To summarize, factors of $d$ in this square bracket notation represent an organization template for the settings in each corner, while `2's represent which device settings one changes when going from one corner to the next. The nature of these representations should become much clearer in the following, more intricate factorization problem. \subsubsection{Nonlocalities and $\text{SPAM}_q$ Correlations} For detecting correlations such that Equation \pref{snip2} does not exist, we denote the numbers of settings by $[N;L :M]$. We now make a distinction between $L$, the number of observable settings for $\text{qudit}_1$, used to effectively prepare states, and $M$, the number of observable settings for $\text{qudit}_2$, used to measure them. Again, we can interpret the colon as the dotted separation in Figure \ref{pic3}, between effective state preparations and measurements of $\text{qudit}_2$. A semicolon after the first argument simply distinguishes it as the number of (joint) state preparations. Of course, there are actually 2 distinct schemes of type $[N;L :M]$ depending on which qudit we consider part of the effective state preparation. We denote the other by $\pi[N;L :M]$ where $\pi$ means `permute the two qudits.' \begin{table}[h!]
\begin{equation} \begin{array}{c|cc} \text{Corners} & \multicolumn{2}{c}{\text{Squares}}\\ {[N;L :M]} & [2N;L :2M] & [N;2L :2M]\\\hline {[d^2 ;1 :d^2]} & [2d^2;1 :2d^2] & [d^2;2 :2d^2]\\ {[d;d:d^2]} & [2d;d :2d^2] & [d;2d :2d^2]\\ {[1;d^2:d^2]} & [2;d^2 :2d^2] & [1;2d^2 :2d^2] \end{array} \end{equation} \caption{Each row is a way to make a corner while each column is a way to make a square.}\label{6table} \end{table} Corners and squares can now be made in several ways. Corners must be $d^2 \times d^2$ (because the data must have rank $\le d^2$ if Equation \pref{snip2} exists.) There are 3 ways one can do this because we must take $M=d^2$ while there are 3 different ways to make $d^2$ effective states, $[N;L] = [d^2;1]$, $[d;d]$, and $[1;d^2]$, (restricting ourselves to nice multiples.) Then there are 2 ways each to make a square, $[2N;L :2M]$ or $[N;2L :2M]$, (ignoring that one could mix corner types in a single PD.) (See Table \ref{6table}.) Having picked one of the 2 qudits, there are almost $2 \times 6=12$ PDs, except that $\pi[1;2d^2 :2d^2] = [1;2d^2 :2d^2]$ is actually a symmetric construction. So there are $12-1=11$ PDs of the type $[N;L:M]$ in total. To make the construction of these PDs as clear as possible, Figures \ref{corners} and \ref{Wonderful} are given to go over each of them individually. In Figures \ref{Wonderful}, it is important to recall the distinction between device settings and device parameters. Device settings are the external controls that are available in an experiment, while device parameters are the model-dependent numbers used to describe the behavior of the experiment. Changes in settings can be understood as generating changes in parameters, but only in a local sense (from corner to corner) which one might not be able to integrate to a global correspondence (because there could be correlations.)
In other words, a constraint such as ``keeping state parameters fixed'' can still be operationally defined but will in general be a non-holonomic constraint. Similar distinctions are represented mathematically in other physical theories as well, a discussion of which may be found in \cite{nonholo1}. \begin{figure} \caption{At the top is a coordinate system for the entries of the data ${S_a}^{ij}$.} \label{end} \label{b5e} \label{b4e} \label{b3e} \label{b2e} \label{begin} \label{Wonderful} \end{figure} In Figures \ref{corners}, corners have been given qualitative names for how they ``fill'' the space of settings as represented in Figures \ref{Wonderful}. Solid lines represent a range of $d^2$, dashed lines have range $d$, and amputated lines are single valued. A vertex joining one solid line with two dashed lines represents the delta function \begin{equation}\label{fancydelta} \delta_A^{ab} = \begin{cases} 1 & A = ad+b \\ 0 & \text{otherwise} \end{cases} \end{equation} where $A\in\{0,1,\ldots,d^2-1\}$ is the solid line and $a,b\in\{0,1,\ldots,d-1\}$ are the dashed lines. Dotted lines with small circular endpoints represent the settings used to displace the corners of a square, i.e. the `$2$'s in square bracket notation. Squares have been further labelled based on how they are oriented in the setting dimensions as represented by the placement of `$2$'s in bracket notation as well as in Figures \ref{Wonderful}. \begin{figure} \caption{Diagrammatic representations of PD constructions as arranged in Table \ref{6table}.} \label{vert} \label{butt} \label{hori} \label{corners} \end{figure} Further in Figures \ref{corners}, the backbone (Figure \ref{pic1}) of each diagram represents the hypothesis that the data is effectively completely uncorrelated. However, once a corner is assembled, one can then see from the diagram how this hypothesis may be relaxed to weaker types of independence that would still give the PD a trivial value.
Diagrammatically, this corresponds to the property that the minimum number of lines one must cut in order to detach the external solid lines matches exactly the upper bound on the rank. Moreover, the displacing lines or `2's can empirically suggest different models of correlation for nontrivial values in the corresponding PD. For example, a nontrivial value for $[2d^2; 1:2d^2]$ suggests $\text{SPAM}_2$ correlations while $[d^2; 2:2d^2]$ suggests (1,2)-nonlocalities. To summarize, a square bracket notation has been introduced to represent different PDs one can construct for two-qudit (or qud$^2$it) systems. Each PD will have a trivial value, $A^\text{-1} B D^\text{-1} C = 1_{d^2 \times d^2}$, if the system is effectively uncorrelated in that corresponding way. The types of correlation which violate these PDs should be clear from the topology of their effective backbone (see Figures \ref{pic2} and \ref{pic3}.) The first point is that $[N:M_1,M_2]$ PDs are trivial if $\langle\mathrm{Tr}\rho\,(\Sigma\otimes\Sigma)\rangle = \mathrm{Tr}\langle\rho\,\rangle\langle\Sigma\otimes\Sigma\rangle$ and are thus not sensitive to (1,2)-nonlocalities. The second is that $[N;L :M]$ PDs are trivial if $\langle\mathrm{Tr}\rho\,(\Sigma\!\otimes\!\Sigma)\rangle = \mathrm{Tr}\langle\rho\,\Sigma\rangle\!\otimes\!\langle\Sigma\rangle$ so are not sensitive to $\text{SPAM}_1$ correlations. Similarly, $\pi[N;L :M]$ PDs are insensitive to $\text{SPAM}_2$ correlations. \section{More than Two Qudits} Increasing the number of qudits, $m>2$, there are many more variations in the kinds of corners and squares we can construct and so there are many more different types of experiments one can do to detect many more different types of correlation. One fruitful way of classifying PDs (and the corresponding experiments) is by the matrix \emph{rank} that the corresponding square should have in the absence of correlations.
In particular, for qudit measurements on a qud$^m$it there are $m$ types of PDs corresponding to $m$ different ranks, $\mathrm{rank}=d^{2k}$ for $k=1, \ldots, m$. (See Figures \ref{pic2}, \ref{pic3}, and \ref{pics3} and Tables \ref{3qPD2qish} and \ref{chPD3q}.) Remember that the rank also determines how the number of experimental settings scales, namely, as ``pairs of settings'' $= \mathrm{rank}^2=d^{4k}$. Following our previous notation, these classes will be denoted with square brackets by \begin{equation} \Delta_k = [N;L_1, \ldots, L_{m-k}:M_1,\ldots,M_k] \end{equation} The generic PD corresponds to $k=m$ which has only 1 corner type (because all the measurement devices are to the right of the colon) and $m$ square types (because there are $m$ devices to the right of the colon which can be used for displacement) as in section \ref{gen}. Those PDs which demand the least number of experimental settings correspond to $k=1$ for which there are 4 kinds of corner, 10 kinds of square, and $\frac{1}{2}m(7m^2-12m+7)$ permutational variants, as will be explained. All of the main variations in $k=1$ are present for $m=3$, so we will start there. We will also briefly include $k=2$ for $m=3$ qudits to make the construction of the more general PDs clear. \subsection{Three Qudits} For three qudits, the data has $1+3=4$ indices or device settings. If the data is completely uncorrelated, then we may write \begin{equation} {S_a}^{ijk} = R_a^{\lambda\mu\nu}{U_\lambda}^i{V_\mu}^j{W_\nu}^k. \end{equation} Such data can be organized into a matrix in 3 basic ways as represented in Figure \ref{pics3}. These 3 ways further represent separate classes of PD one can construct, each of which is sensitive to different correlations. Generic PDs, $[N:M,M,M]$, are insensitive to all $(p,q)$-nonlocalities.
The ``$k=2$'' PDs, $[N;L:M,M]$, are insensitive to $(2,3)$-nonlocality and $\text{SPAM}_1$ correlation, but are sensitive to $(1,2)$-nonlocality, $(1,3)$-nonlocality, $\text{SPAM}_2$ correlation, and $\text{SPAM}_3$ correlation. The most scalable PDs, $[N;L,L:M]$, are insensitive to $(1,2)$-nonlocalities, $\text{SPAM}_1$, and $\text{SPAM}_2$ correlations, but are sensitive to $(1,3)$-nonlocality, $(2,3)$-nonlocality, and $\text{SPAM}_3$ correlation. \begin{figure} \caption{Each circular vertex also has an implied external index attached to it like in Figures \ref{pic2} and \ref{pic3}.} \label{pics3} \end{figure} Of course, one can permute the qudits to make similar statements. Perhaps the best way to denote each of these is by $\pi[N; \ldots M]$ where now $\pi$ could denote any permutation of 3 elements. Further, we may denote each $\pi$ most succinctly with cycle notation. For example, $(123)[N;L:M,M]$ PDs are insensitive to $(3,1)$-nonlocalities and $\text{SPAM}_2$ correlation. This notation is important for discussing PD symmetries, which brings us to the ways one can construct corners and squares. Generic PDs, $[N:M,M,M]$, have no variability in corner types and only 1 basic kind of square, or 3 when one counts which qudit's measurement dimension is displaced. These can be represented in permutation notation as $\Delta$, $(12)\Delta$, and $(13)\Delta$ where $\Delta=[2d^6:2d^2,d^2,d^2]$. The permutations $\{1, (12), (13)\}$ are coset representatives for the subgroup $\{1,(23)\}$ corresponding to the symmetry $(23)\Delta = \Delta$. \begin{table}[h!]
\begin{equation} \begin{array}{c|cc} \text{Corners} & \multicolumn{2}{c}{\text{Squares}}\\ {[N;L :M_1,M_2]} & {[2N;L: 2M_1,M_2]} & {[N;2L: 2M_1,M_2]} \\\hline {[d^4;1:d^2,d^2]} & {[2d^4;1:2d^2,d^2]} & {[d^4;2:2d^2,d^2]}\\ {[d^3;d :d^2,d^2]} & {[2d^3;d :2d^2,d^2]} & {[d^3;2d :2d^2,d^2]}\\ {[d^2;d^2 :d^2,d^2]} & {[2d^2;d^2 :2d^2,d^2]} & {[d^2;2d^2 :2d^2,d^2]} \end{array} \end{equation} \caption{$[N;L :M,M]$ PDs require $\mathcal O (d^8)$ settings. The number of settings is determined by the expected rank of these matrices, $d^4$, for a completely uncorrelated model. See Figure \ref{pics3}.}\label{3qPD2qish} \end{table} For $k=2$ PDs, $[N;L:M,M]$, we have 3 kinds of corner and 6 kinds of square (see Table \ref{3qPD2qish} and compare to Table \ref{6table}.) We even continue to have the symmetry $(12)\Delta=\Delta$ for $\Delta = [d^2;2d^2 :2d^2,d^2]$. Except now, a square can be displaced in 3 measurement dimensions. This gives a total of $3 \times 11 = 33$ partial determinants, 6 per square except for the one with a symmetry (which only gives $6/2=3$.) For $m$ qudits, this would be $11\binom{m}{2}$ PDs. Qudits given a ``1'' in square bracket notation can be considered a trace over that qudit, i.e. choose the identity observable. Most practical instances will consider qubits, $d=2$, in which case it is important to remember a ``$d$'' and a ``$2$'' are still different in that a $d$ refers to settings used to make a single kind of corner while a $2$ is used to displace different corners in a square. Diagrams could be drawn as before to represent corners and squares where we would go on to interpret what trivial values for such PDs mean. \begin{widetext} \begin{table}[h!]
\begin{equation} \begin{array}{c|ccc} \text{Corners} & \multicolumn{3}{c}{\text{Squares}}\\ {[N;L_1,L_2 :M]} & {[2N;L_1,L_2 :2M]} & {[N;2L_1,L_2 :2M]} & {[N;L_1,2L_2 :2M]}\\\hline {[d^2;1,1 :d^2]} & {[2d^2;1,1 :2d^2]} & {[d^2;2,1 :2d^2]} & \\ {[d;d,1 :d^2]} & {[2d;d,1 :2d^2]} & {[d;2d,1 :2d^2]} & {[d;d,2 :2d^2]} \\ {[1;d^2,1 :d^2]} & {[2;d^2,1 :2d^2]} & {[1;2d^2,1 :2d^2]^*} & {[1;d^2,2 :2d^2]} \\ {[1;d,d :d^2]} & {[2;d,d :2d^2]^{**}} & {[1;2d,d :2d^2]} & \\ \end{array} \end{equation} \caption{$[N;L,L : M]$ PDs require $\mathcal O (d^4)$ settings. The number of settings is determined by the expected rank of these matrices, $d^2$, for a completely uncorrelated model. See Figure \ref{pics3}.}\label{chPD3q} \end{table} \end{widetext} Finally for the most scalable PDs, $[N;L,L:M]$, we have 4 types of corner and 10 types of square (see Table \ref{chPD3q}.) Entries kept blank are equivalent by permutation to the entry on their left in the table. Entries marked with an asterisk have a symmetry. Altogether, these make 51 PDs; we will explain the combinatorics in the next section on general $m$. Illustrating the corners diagrammatically as in Figures \ref{pics4}, we can interpret the meaning of a non-trivial value for each corresponding PD and we will refer to them by their subfigures: \begin{itemize} \item Rank $d^2$ PDs displaced from $\Delta_a$, $\Delta_b$, and $\Delta_d$ have a trivial value\\ if and only if $\langle RUVW \rangle = \langle RUV \rangle\langle W \rangle$. \item On the other hand, rank $d^2$ PDs displaced from $\Delta_c$ will have a trivial value\\ if either $\langle RUVW \rangle = \langle RUV \rangle\langle W \rangle$ \emph{or} $\langle RUVW \rangle = \langle U \rangle\langle RVW \rangle$.
\item Further, if $\langle RUVW \rangle \neq \langle RUV \rangle\langle W \rangle$ but $\langle RUVW \rangle = \langle U \rangle\langle RVW \rangle$,\\ then a rank $d^2$ PD displaced from $\Delta_b$ will have a nontrivial value,\\ but one of rank $d^3$ will be trivial. \item Finally, if $\langle RUVW \rangle \neq \langle RUV \rangle\langle W \rangle$ but a PD from $\Delta_d$ of rank $d^3$ has a trivial value,\\ then either $\langle RUVW \rangle = \langle U \rangle\langle RVW \rangle$ \emph{or} $\langle RUVW \rangle = \langle V \rangle\langle RUW \rangle$. \end{itemize} \begin{figure} \caption{Diagrams for $[N;L_1,L_2 :M]$ corners, the rows of Table \ref{chPD3q}.} \label{pics4} \end{figure} \subsection{The Most Scalable $m$ Qudit PDs} \begin{figure} \caption{There are $\frac{1}{2}m(7m^2-12m+7)$ permutational variants of the most scalable ($k=1$) PDs.} \label{wth} \end{figure} The PDs of $k=1$ for $m>3$ qudits are essentially no different from $m=3$ because after 3 qudits are chosen, the remaining are fixed to 1 observable or just traced out. For completeness, let us write the completely uncorrelated data of $m+1$ indices, \begin{equation} {S_a}^{i\ldots jk} = R_a^{\lambda\ldots\mu\nu}{U_\lambda}^i\cdots{V_\mu}^j{W_\nu}^k, \end{equation} and represent the $k=1$ class of PDs diagrammatically in Figure \ref{wth}. Also, we include Tables \ref{chPDmq} which are just like those of the last section except with ellipses to denote `1's for the remaining qudit measurement settings. We also include the combinatorics for the permuted variations of each PD. Trivial values are also interpreted just the same as for 3 qudits. \begin{widetext} \begin{table}[h!] \begin{equation} \begin{array}{c|ccc} \text{Corners} & \multicolumn{3}{c}{\text{Squares}}\\ {[N;L_1,L_2, ... :M]} & {[2N;L_1,L_2, ... :2M]} & {[N;2L_1,L_2, ... :2M]} & {[N;L_1,2L_2, ... :2M]}\\\hline {[d^2;1,1, ... :d^2]} & {[2d^2;1,1, ... :2d^2]} & {[d^2;2,1, ... :2d^2]} & \\ {[d;d,1, ... :d^2]} & {[2d;d,1, ... :2d^2]} & {[d;2d,1, ... :2d^2]} & {[d;d,2, ... :2d^2]} \\ {[1;d^2,1, ...
:d^2]} & {[2;d^2,1, ... :2d^2]} & {[1;2d^2,1, ... :2d^2]^*} & {[1;d^2,2, ... :2d^2]} \\ {[1;d,d, ... :d^2]} & {[2;d,d, ... :2d^2]^{**}} & {[1;2d,d, ... :2d^2]} & \\ \end{array} \end{equation} \begin{equation} \begin{array}{c|ccc} \text{Corners} & \multicolumn{3}{c}{\text{Squares}}\\ {[N;L_1,L_2, ... :M]} & [2N;L_1,L_2, ... :2M] & [N;2L_1,L_2, ... :2M] & [N;L_1,2L_2, ... :2M]\\\hline {[d^2;1,1, ... :d^2]} & m & m(m-1) & \\ {[d;d,1, ... :d^2]} & m(m-1) & m(m-1) & m(m-1)(m-2) \\ {[1;d^2,1, ... :d^2]} & m(m-1) & m(m-1)/2 & m(m-1)(m-2) \\ {[1;d,d, ... :d^2]} & m(m-1)(m-2)/2 & m(m-1)(m-2) & \\ \end{array} \end{equation} \caption{The most scalable PDs, requiring only $\mathcal O (d^4)$ device settings, are just like those for 3 qudits (Table \ref{chPD3q}) except that there are more of them by qudit permutation. The combinatorics for the distinct permutations of each PD are given in the second table, with a total of $\frac{1}{2}m(7m^2-12m+7)$.}\label{chPDmq} \end{table} \end{widetext} \section{Conclusions and Discussion} In this paper, we considered non-holonomic tomography and its application to multiqudit systems. Non-holonomic tomography is the use of partial determinants (PDs) to analyze quantum data for detecting various kinds of correlation in SPAM tomography, where both state and measurement devices have errors. We demonstrated that there are a multitude of PDs one can consider which are sensitive in different ways to the various correlations that can occur. Further, we were able to describe these sensitivities based on the topology of the factorization associated with corresponding notions of an effectively uncorrelated system. For single qudit measurements on a qud$^m$it state, there are $m$ major classes of PD corresponding to matrix rank, $\mathrm{rank}=d^{2k}$ for $k=1, \ldots, m$. These ranks in turn determine how many device settings are needed ($\mathcal{O}(\mathrm{rank}^2)=\mathcal{O}(d^{4k})$) to experimentally determine the PD. 
Finally, we enumerated the class of PDs which require the least number of experimental settings, $k=1$, for any number of qudits. Figure \ref{summary} is provided as a logical sketch of the technique of non-holonomic tomography. \begin{widetext} \begin{figure} \caption{A summary of the logic in non-holonomic tomography: The data collected from a quantum experiment is a tensor, with an index associated with each (state preparation or measurement) device. This tensor can be organized as a matrix or ``square'' in various ways and partial determinants can be calculated for these matrices. Uncorrelated devices correspond to a specific factorization model of the data. The ``topology'' of this factorization model then sets upper bounds on the rank of any matrix organized from the data. The ranks of these matrices are equal to their upper bounds if and only if the correspondingly sized partial determinant is equal to the identity.} \label{summary} \end{figure} \end{widetext} PDs have been classified by the types of experimental protocols or ``squares'' one can consider. However, for each square there are still more PDs corresponding to the order in which the settings are actually put into a matrix. If one considers data from $2d^2 \times 2d^2$ distinct settings, then for a fixed type of square there are actually $[(2d^2)!]^2$ different PDs by permutation of rows and columns. On the other hand, these PDs are certainly not distinct quantities. Some permutations result in PDs which are obviously equivalent, up to familiar transformations, while others result in more obscure equivalences. Analyzing these various permutationally equivalent PDs may help to determine more precisely which settings have correlation. The first group of permutations which give obviously equivalent PDs correspond to the ways one can traverse the corners of a square --- 4 starting points times 2 directions. 
If the corners are originally $A$, $B$, $C$, and $D$ as in Equation \pref{square}, then the 8 PDs that result are \begin{equation}\label{eight} \begin{array}{cccc} A^\text{-1} B D^\text{-1} C & B D^\text{-1} C A^\text{-1} & D^\text{-1} C A^\text{-1} B & C A^\text{-1} B D^\text{-1} \\ C^\text{-1} D B^\text{-1} A & D B^\text{-1} A C^\text{-1} & B^\text{-1} A C^\text{-1} D & A C^\text{-1} D B^\text{-1}. \\ \end{array} \end{equation} PDs in the same row are related by cyclic permutation while those in the same column are inverses of each other. These PDs are all equivalent to each other up to inverse and conjugation --- e.g. $(B^\text{-1} A C^\text{-1} D) = (B^\text{-1} A)\,(A^\text{-1} B D^\text{-1} C)^\text{-1} (B^\text{-1} A)^\text{-1}$. The second group corresponds to those permutations that keep settings within their respective corners: \begin{widetext} \begin{equation} \left[ \begin{array}{cc} A & B\\ C & D \end{array} \right] \longrightarrow \left[ \begin{array}{cc} \pi_{SP1} & 0\\ 0 & \pi_{SP2}\end{array} \right] \left[ \begin{array}{cc} A & B\\ C & D \end{array} \right] \left[ \begin{array}{cc} \pi_{M1} & 0\\ 0 & \pi_{M2} \end{array} \right]^\text{-1} \end{equation} where all the $\pi$s are $d^2 \times d^2$ permutation matrices. There are $(d^2!)^4$ such elements. 
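The equivalences among the eight traversal-order PDs are easy to check numerically for random invertible corners. A minimal sketch (numpy assumed; a small $n$ stands in for $d^2$, and the corner matrices are arbitrary illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # toy size, standing in for d**2
inv = np.linalg.inv

# four random, well-conditioned corner matrices A, B, C, D
A, B, C, D = (rng.standard_normal((n, n)) + 3 * np.eye(n) for _ in range(4))

pd = inv(A) @ B @ inv(D) @ C  # the PD A^{-1} B D^{-1} C

# a cyclic shift of the traversal is a similarity transform:
#   B D^{-1} C A^{-1} = A (A^{-1} B D^{-1} C) A^{-1}
assert np.allclose(B @ inv(D) @ C @ inv(A), A @ pd @ inv(A))

# reversing the traversal conjugates the inverse, as stated in the text:
#   B^{-1} A C^{-1} D = (B^{-1} A) (A^{-1} B D^{-1} C)^{-1} (B^{-1} A)^{-1}
T = inv(B) @ A
assert np.allclose(inv(B) @ A @ inv(C) @ D, T @ inv(pd) @ inv(T))
```

Because each traversal is conjugate to (the inverse of) any other, similarity invariants such as the spectrum of the PD are independent of where one starts the loop.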
These PDs are equivalent to each other up to conjugation since \begin{equation} \Delta\left( \left[ \begin{array}{cc} \pi_{SP1} & 0\\ 0 & \pi_{SP2}\end{array} \right] \left[ \begin{array}{cc} A & B\\ C & D \end{array} \right] \left[ \begin{array}{cc} \pi_{M1} & 0\\ 0 & \pi_{M2} \end{array} \right]^\mathsf{T} \right) = \pi_{M1}^\text{-1} \Delta\left( \left[ \begin{array}{cc} A & B\\ C & D \end{array} \right] \right) \pi_{M1} \end{equation} \end{widetext} where $\Delta$ is the standard PD defined by Equations \pref{square} and \pref{PD}.\footnote{ If one considers data from $(d^2+1) \times (d^2+1)$ settings (as described in Appendix \ref{plus1}), then there are $\binom{d^2+1}{2}^2$ distinct PDs, having already divided out the aforementioned equivalences. This is because one must choose the $d^2-1$ rows and $d^2-1$ columns of the data that will be common to each corner. The remaining 2 rows and 2 columns are what actually displace the corners.} Permutations beyond these two groups ``delocalize'' settings across corners (experiments) and thus give PDs which are equivalent, but in a much less obvious way. Another important comment is that the links between corners, as considered in this paper, have no immediate sense of distance. This is a consequence of the gauge degrees of freedom. We do not, a priori, have the ability to say how different states in experiment A are from states in experiment B, even if they share the same state settings. However, a notion of distance can be introduced if the devices are also equipped with \emph{continuous} settings. A discussion of this technique may be found in \cite{nonholo1}, Sections III.A, III.C, and IV. \begin{figure} \caption{A PD for a two-qubit system that has the topology of a 2-dimensional surface. Eight copies of Figure \ref{pic1}.} \label{3shells} \end{figure} From a mathematical perspective, it is intriguing that there is this relationship between matrix rank and holonomy. 
These holonomies can in fact be generalized to higher-dimensional quantities (like surfaces, rather than just loops) which can test for more general tensor ranks. Such tests can be interpreted as measures of higher $n$-point correlations between devices. Their construction is relatively simple and requires just one observation: that matrix inverses consist of several contractions with antisymmetric tensors (or Levi-Civita epsilon symbols). The first author expects soon to publish full details on the method for constructing such quantities (see Figure \ref{3shells}). \begin{acknowledgments} S.J.v.E. was supported in part by ARO/LPS under Contract No. W911NF-14-C-0048. \end{acknowledgments} \appendix \section{An $\mathbf{r \times r}$ PD for an $\mathbf{(r+1)\times(r+1)}$ Matrix}\label{plus1} In this section, we show how to use PDs to test whether an $(r+1)\times(r+1)$ matrix has rank $r$. Of course, this test should be equivalent to checking whether a regular determinant is zero. However, this construction also generalizes to check whether any $s \times s$ matrix has rank $r < s$. These tests can be associated with experimental protocols which require fewer device settings than the $2r \times 2r$ protocol. In fact, the $(r+1)\times(r+1)$ protocol has already been applied~\cite{mccormick2017experimental}. Suppose we have an $(r+1)\times(r+1)$ matrix, $S$, which we suspect has rank $\le r$. 
We can calculate an $r\times r$ PD by generating a $2r\times2r$ matrix, $\tilde S$, partitioning $S$ as follows \begin{widetext} \[ S = \left[ \begin{array}{c|ccc|c} * & * & * & * & * \\ \hline * & * & * & * & * \\ * & * & * & * & * \\ * & * & * & * & * \\ \hline * & * & * & * & * \\ \end{array} \right] = \left[ \begin{array}{ccc} a & \vec \beta^\mathsf{T} & b \\ \vec\alpha & \mathbf{M} & \vec\delta \\ c & \vec \gamma^\mathsf{T} & d \\ \end{array} \right] \longrightarrow \tilde S = \left[ \begin{array}{cc|cc} a & \vec \beta^\mathsf{T} & \vec \beta^\mathsf{T}& b \\ \vec\alpha & \mathbf{M} & \mathbf{M} & \vec\delta \\ \hline \vec\alpha & \mathbf{M} & \mathbf{M} & \vec\delta \\ c & \vec \gamma^\mathsf{T} & \vec \gamma^\mathsf{T} & d \\ \end{array} \right] \equiv \left[ \begin{array}{cc} A & B \\ C & D \end{array} \right]. \] \end{widetext} It is useful to define the following matrices: \[ \begin{array}{ccc} \tilde\alpha = \left[ \begin{array}{cc} 1 & \vec 0^\mathsf{T} \\ -\mathbf{M}^\text{-1}\vec \alpha & \mathbf{1} \\ \end{array} \right] &\hspace{20pt}& \tilde\beta = \left[ \begin{array}{cc} 1 & -\vec \beta^\mathsf{T}\mathbf{M}^\text{-1} \\ \vec 0 & \mathbf{1} \\ \end{array} \right] \\ \\ \tilde\gamma = \left[ \begin{array}{cc} \mathbf{1} & \vec 0 \\ -\vec \gamma^\mathsf{T}\mathbf{M}^\text{-1} & 1 \\ \end{array} \right] &\hspace{20pt}& \tilde\delta = \left[ \begin{array}{cc} \mathbf{1} & -\mathbf{M}^\text{-1}\vec\delta \\ \vec 0^\mathsf{T} & 1 \\ \end{array} \right] \end{array} \] (which one may note are representations of $(r-1)$-dimensional translation groups). 
These matrices allow us to partially diagonalize each corner: \[ \begin{array}{ccc} \tilde\beta A \tilde \alpha = \left[ \begin{array}{cc} A/M & \vec 0^\mathsf{T} \\ \vec0 & \mathbf{M} \\ \end{array} \right] &\hspace{20pt}& \tilde\beta B \tilde\delta = \left[ \begin{array}{cc} \vec 0^\mathsf{T}& B/M \\ \mathbf{M} & \vec 0 \\ \end{array} \right] \\ \\ \tilde\gamma C \tilde \alpha = \left[ \begin{array}{cc} \vec0 & \mathbf{M} \\ C/M & \vec 0^\mathsf{T} \\ \end{array} \right] &\hspace{20pt}& \tilde\gamma D \tilde\delta = \left[ \begin{array}{cc} \mathbf{M} & \vec 0 \\ \vec 0^\mathsf{T} & D/M \\ \end{array} \right] \end{array} \] where we denote the Schur complements by \[ \begin{array}{ccc} A/M = a - \vec\beta^\mathsf{T} \mathbf{M}^\text{-1}\vec\alpha &\hspace{10pt}& B/M = b - \vec\beta^\mathsf{T} \mathbf{M}^\text{-1}\vec\delta \\ C/M = c - \vec\gamma^\mathsf{T} \mathbf{M}^\text{-1}\vec\alpha &\hspace{10pt}& D/M = d - \vec\gamma^\mathsf{T} \mathbf{M}^\text{-1}\vec\delta. \end{array} \] All of the various partial determinants can thus be simplified: \begin{widetext} \begin{align*} A^\text{-1} B D^\text{-1} C &= 1+(x-1)\tilde\alpha & C^\text{-1} D B^\text{-1} A &= \frac{1}{x^2}\Big[1+(x-1)\tilde\alpha^\text{-1}\Big]\\ B D^\text{-1} C A^\text{-1} & = 1+(x-1)\tilde\beta^\text{-1} & A C^\text{-1} D B^\text{-1} &= \frac{1}{x^2}\Big[1+(x-1)\tilde\beta\Big]\\ D^\text{-1} C A^\text{-1} B &= 1+(x-1)\tilde\delta & B^\text{-1} A C^\text{-1} D &= \frac{1}{x^2}\Big[1+(x-1)\tilde\delta^\text{-1}\Big]\\ C A^\text{-1} B D^\text{-1} & = 1+(x-1)\tilde\gamma^\text{-1} & D B^\text{-1} A C^\text{-1} &= \frac{1}{x^2}\Big[1+(x-1)\tilde\gamma\Big]\\ \end{align*} \end{widetext} where \[ x = \frac{(B/M)(C/M)}{(A/M)(D/M)} = \frac{\det B \det C}{\det A \det D}. \]\\ One can see that each of these PDs is equal to the identity if and only if $x=1$. This condition on $x$ must be equivalent to $\det S = 0$ (given the existence of $M^\text{-1}$). \end{document}
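The appendix construction above can be checked numerically (a sketch, numpy assumed): draw a random rank-$r$ matrix $S$, read off the corners of $\tilde S$ directly as overlapping $r\times r$ blocks of $S$, and confirm that $x=1$ and that the PD reduces to the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
r = 4
# a random (r+1) x (r+1) matrix of rank r
S = rng.standard_normal((r + 1, r)) @ rng.standard_normal((r, r + 1))

# corners of S~, obtained by duplicating the middle block row/column of S
A, B, C, D = S[:r, :r], S[:r, 1:], S[1:, :r], S[1:, 1:]

# Schur complements with respect to the middle block M = S[1:r, 1:r]
Minv = np.linalg.inv(S[1:r, 1:r])
beta, gamma = S[0, 1:r], S[r, 1:r]
alpha, delta = S[1:r, 0], S[1:r, r]
AM = S[0, 0] - beta @ Minv @ alpha   # A/M
BM = S[0, r] - beta @ Minv @ delta   # B/M
CM = S[r, 0] - gamma @ Minv @ alpha  # C/M
DM = S[r, r] - gamma @ Minv @ delta  # D/M

x = (BM * CM) / (AM * DM)
det = np.linalg.det
assert np.isclose(x, det(B) * det(C) / (det(A) * det(D)))  # x as a det ratio
assert np.isclose(x, 1.0)  # rank(S) <= r  <=>  det S = 0  <=>  x = 1

# and the r x r PD is then the identity
pd = np.linalg.inv(A) @ B @ np.linalg.inv(D) @ C
assert np.allclose(pd, np.eye(r), atol=1e-6)
```

For a generic full-rank $S$ the same computation gives $x \neq 1$ and a nontrivial PD.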
\begin{document} \title{Single-photon-level quantum memory at room temperature} \author{K. F. Reim} \affiliation{Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, UK} \author{P. Michelberger} \affiliation{Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, UK} \author{K.C. Lee} \affiliation{Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, UK} \author{J. Nunn} \affiliation{Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, UK} \author{N. K. Langford} \affiliation{Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, UK} \author{I. A. Walmsley} \email[]{[email protected]} \affiliation{Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, UK} \date{\today} \begin{abstract} Quantum memories capable of storing single photons are essential building blocks for quantum information processing, enabling the storage and transfer of quantum information over long distances~\cite{Duan_2001_N_Long-distance-quantum-communic,Briegel_1998_PRL_Quantum-Repeaters:-The-Role-of}. Devices operating at room temperature can be deployed on a large scale and integrated into existing photonic networks, but so far warm quantum memories have been susceptible to noise at the single-photon level~\cite{Manz_2007_PR_Collisional-decoherence-during}. This problem is circumvented in cold atomic ensembles~\cite{Choi_2008_N_Mapping-photonic-entanglement-,Simon_2007_PRL_Interfacing-Collective-Atomic-,Matsukevich_2006_PRL_Deterministic-Single-Photons-v}, but these are bulky and technically complex. Here we demonstrate controllable, broadband and efficient storage and retrieval of weak coherent light pulses at the single-photon level in warm atomic caesium vapour using the far off-resonant Raman memory scheme~\cite{Reim_2010_P_Towards-high-speed-optical-qua}. 
The unconditional noise floor is found to be low enough to operate the memory in the quantum regime at room temperature. \end{abstract} \maketitle In an age of ever-increasing, global information transfer, there is growing demand for secure communication technology, such as could be provided by photonic quantum communications networks \cite{Gisin_2002_RMP_Quantum-cryptography}. Currently, the biggest challenge for such networks is distance. Over short distances, photons, interacting only weakly with their environment, easily and reliably carry quantum information without much decoherence, but intercontinental quantum communication will require quantum repeaters embedded in potentially isolated locations, because otherwise photon loss rises exponentially with distance~\cite{Duan_2001_N_Long-distance-quantum-communic,Briegel_1998_PRL_Quantum-Repeaters:-The-Role-of}. In general, these repeaters will require some sort of quantum memory, a coherent device where single photons are reversibly coupled into and out of an atomic system, to be stored, possibly processed and then redistributed. To be practically useful, such a memory will need sufficiently large bandwidth, high efficiency and long storage time, with multimode capacity, and a low-enough noise level to enable operation at the quantum level. Furthermore, to be truly practical, a critical feature of such memories will be ease of operation near room temperature for use in isolated, potentially unmanned locations. Warm atomic vapours of alkali atoms such as rubidium and caesium are potentially excellent storage media. At moderate temperatures the vapour pressure is sufficient to render the doublet lines optically thick, providing a strong atom-photon interaction in a centimeter-scale vapour cell. With the addition of a buffer gas to slow atomic diffusion, collective atomic coherences on the order of milliseconds have been observed \cite{Julsgaard:2004lr}. 
There is no need for laser trapping, high vacuum or cryogenic cooling, and atomic vapour cells can be miniaturised \cite{Baluktsian:2010kx}, integrated onto chips using hollow-core waveguide structures \cite{Wu:2010uq} and mass-produced. Cost-effective integration is likely to be crucial for quantum photonics to develop into a mature technology. A number of experiments using electromagnetically induced transparency (EIT) have demonstrated efficient storage and recall of optical signals in warm atomic vapour \cite{Novikova:2007kc}, and conditional storage and recall of heralded single photons \cite{Eisaman_2005_N_Electromagnetically-induced-tr}. In EIT, however, the frequency of the signal matches the atomic resonance, and collision-induced fluorescence at this frequency makes the unconditional (i.e.\ unheralded) noise floor of this protocol too high for quantum applications~\cite{Manz_2007_PR_Collisional-decoherence-during}. These issues are avoided in cold-atom experiments~\cite{Chou_2007_S_Functional-Quantum-Nodes-for-E, Chen_2008_P_Memory-built-in-quantum-telepo, Chen_2006_PRL_Deterministic-and-Storable-Sin}, where atomic collisions are suppressed or eliminated. But the technical complexity of these experiments, which require the trapping and cooling of atomic clouds under high vacuum, makes large-scale deployment of this type of memory rather challenging, especially outside controlled laboratory conditions. Similarly, solid-state memories are promising candidates for efficient and low-noise photon storage \cite{Hedges_2010_N_Efficient-quantum-memory-for-l,Riedmatten:2008ek}, but currently they must be operated at cryogenic temperatures. By contrast, ensemble-type Raman memories which are tuned far from resonance~\cite{Nunn_2007_PRAMOP_Mapping-broadband-single-photo} provide a possible path to fulfilling all of the requirements for quantum-ready operation at room temperature. 
The far off-resonant Raman interaction results in: (i) extremely broadband capability, allowing the memory to be interfaced with conventional parametric downconversion sources \cite{Cohen_2009_PRL_Tailored-Photon-Pair-Generatio}; (ii) the ability to optically switch the memory in and out of the quantum channel, or alternatively set the storage level to 50\%, providing a straightforward noninterferometric path to creating light-atom entanglement; (iii) very weak fluorescence noise which is predominantly non-synchronous with the short signal pulse; and (iv) memory efficiency which is insensitive to inhomogeneous broadening, allowing room-temperature operation and a path towards integrated implementations. Apart from the advantage of simplicity~\cite{Raab_1987_PRL_Trapping-of-Neutral-Sodium-Ato}, operating at room temperature also makes it easier to achieve larger optical depths, and hence higher efficiencies~\cite{Hosseini:2009lq,Nunn_2007_PRAMOP_Mapping-broadband-single-photo}. Recently, we implemented a Raman memory in a hot caesium-vapour atomic ensemble, demonstrating extremely broadband, coherent operation under room-temperature conditions~\cite{Reim_2010_P_Towards-high-speed-optical-qua}. In this letter, we address the remaining requirements for a memory to be able to function usefully in a genuine quantum application. Specifically, we demonstrate total memory efficiencies of $>30\%$, and with only moderate magnetic shielding, we show storage times of up to 4 $\mu$s, around 2,500 times longer than the duration of the pulses themselves. This is already sufficient to improve heralded multiphoton rates from parametric down-conversion sources \cite{Bouwmeester_Experimental_quantum_teleportation}. Finally, we make a detailed analysis of the unconditional noise floor of the memory, which is found to be $<0.25$ photons per pulse --- that is, low enough for quantum applications. 
The combination of these results shows that we have a quantum-ready memory, capable of handling quantum information in a simple room-temperature design. \begin{figure} \begin{center} \includegraphics[scale=0.3]{figure_1.pdf} \caption{(a) $\Lambda$-level scheme for Raman memory. $\Delta$ is the detuning from the atomic resonance. (b) Experimental setup. At $t_1$ the single-photon-level signal is mapped by a strong write pulse into a spin-wave excitation in the caesium vapour cell. At $t_2$ a strong read pulse reconverts the excitation into a photonic mode. After polarization filtering (Pol) and spectral filtering with Fabry-Perot etalons, the retrieved signal is detected with single-photon counting modules (SPCMs). Vertical polarization is indicated as ($\updownarrow$) and horizontal polarization as ($\leftrightarrow$).} \label{fig:experiment} \end{center} \end{figure} The heart of the quantum memory is a caesium-vapour atomic ensemble, prepared in a vapour cell heated to 62.5$^\circ$C, that makes use of the 852-nm $D_2$ line, with the $6^2 S_{1/2}$ hyperfine states serving as the ground $\ket{1}$ and storage $\ket{3}$ states (Fig. \ref{fig:experiment}(a)). At $t_1$, the signal gets mapped by the strong write field into a collective atomic spin-wave excitation, and at $t_2$, a strong read pulse reconverts the atomic coherence back into the photonic mode (Fig.~\ref{fig:experiment}(b)). The signal is separated from the strong write and read fields via spectral and polarisation filtering, which is particularly important when operating at the single-photon level, and then detected using a silicon avalanche photodiode (see Methods for details). \begin{figure} \begin{center} \includegraphics[width=1\columnwidth]{figure_3.pdf} \caption{Memory efficiency. The lifetime of the memory is about 1.5~$\mu$s. The blue dots indicate experimental data measured with a fast APD. Error bars represent the standard error in the mean. 
The solid line is the theoretical dephasing predicted for a constant magnetic field of $0.13\pm 0.05$ Gauss, which can be attributed to the residual of the Earth's magnetic field (Supplementary Information). Note that this dephasing could be compensated using spin-echo techniques, or suppressed with improved magnetic shielding.} \label{fig:storage} \end{center} \end{figure} In the current experiment, we demonstrate total efficiencies around $30\%$ and a memory lifetime around 1.5~$\mu$s (Fig.~\ref{fig:storage}), which is double the efficiency and more than two orders of magnitude improvement in lifetime over the values obtained in our previous experiment~\cite{Reim_2010_P_Towards-high-speed-optical-qua}. These results were obtained with only moderate magnetic shielding, and residual static fields are the main dephasing mechanism (see Supplementary Information). In principle, magnetic dephasing can be eliminated by improved shielding, enabling storage times, then limited only by atomic diffusion, of several hundred $\mu$s~\cite{Camacho_2009_P_Four-wave-mixing-stopped-light}. However, because this memory has such a broad bandwidth, this measured lifetime already corresponds to a time-bandwidth product of $\sim$2500. Indeed, with memory efficiencies of $20\%$ at 1~$\mu$s, this memory could already be used to improve heralded photon-pair rates with typical parametric down-conversion sources with heralded single-photon rates of $\sim$1~MHz. As discussed in Ref.~\cite{Reim_2010_P_Towards-high-speed-optical-qua}, the memory efficiency is restricted mainly by control field power and the less efficient, but experimentally simpler, forward-retrieval configuration \cite{Nunn_2007_PRAMOP_Mapping-broadband-single-photo}. \begin{figure} \begin{center} \includegraphics[scale=0.25]{figure_2.pdf} \caption{Single-photon-level data. (a) Incident ($t_1\,{\sim}\,0$~ns) and retrieved ($t_2\,{\sim}\,780$~ns) pulse. 
The inset is a zoom of the retrieved pulse showing a fluorescence noise tail. (b,c) Zooms around $t_1$ and $t_2$, showing: the incident signal, i.e.\ the transmitted/retrieved signal with no control field (green); the transmitted/retrieved signal with the control field (blue); and the noise, with the control field only (red). Histograms are accumulated over 360,000 runs. (d) Blue shaded area: Optical pumping on the $\ket{3}\leftrightarrow\ket{2}$ (blue) transition. Red shaded area: Optical pumping on the $\ket{1}{\leftrightarrow}\ket{2}$ (red) transition. The red points are data and the black line is a theoretical prediction of noise due to spontaneous Raman scattering. The red solid line shows the predicted noise due only to Stokes scattering. (e) Predicted fractional noise contributions from Stokes (black curve) and anti-Stokes (red curve) light. Theoretical details can be found in the supplementary information.} \label{fig:single_photon} \end{center} \end{figure} Next, we measure the memory output using a single-photon counting module (SPCM) to test its performance at the single-photon level. Figure \ref{fig:single_photon}(a) shows storage at $t=0$~ns and retrieval at $t=780$~ns for an average input signal of 1.6 photons per pulse ($< 1$ retrieved photon per pulse), while Figs~\ref{fig:single_photon}(b,c) show the storage and retrieval processes in detail. The strong reduction in transmitted signal (Fig.~\ref{fig:single_photon}(b)) between the control field being off (green) and on (blue), and the significant amount of retrieved signal (Fig.~\ref{fig:single_photon}(c), blue) demonstrate that the quantum memory operates well at the single-photon level. However, to properly characterise the noise in these signals, it is critical to measure the \emph{unconditional noise floor}: the signal detected when memory retrieval is triggered without any stored input signal (Figs~\ref{fig:single_photon}(b,c), red curves). 
In our case, this is the detected signal when the control field is sent in with no input signal, and is measured to be 0.25 photons per pulse. Already substantially less than a single photon, this enables operation in the quantum regime, even with $30\%$ memory efficiency. We now explore the origin of the noise photons to determine if the observed noise could be further reduced. The inset to Fig.~\ref{fig:single_photon}(a) shows that the retrieved signal peak is followed by an exponential tail, which is fluorescence noise from the excited state, $\ket{2}$: this process is excited by the control field, even far from resonance, in the presence of atomic collisions. The red curve is a least-squares fit to the tail of the experimental data and yields an estimated lifetime of 32$\pm$2~ns (expected value 30.5~ns \cite{Steck_2009_AOH_Cesium-D-Line-Data}; error bars derived from multiple experiments). While such collision-induced fluorescence limits the usefulness of other more narrowband room-temperature memories (as shown in \cite{Manz_2007_PR_Collisional-decoherence-during}), in our extremely broadband memory, the 30 ns time scale of these emissions is much longer than the duration of the readout event (set by the 300 ps pulse width and detector timing jitter). By time gating the detection, fluorescence therefore made a negligible contribution to the measured, instantaneous noise floor (Figs~\ref{fig:single_photon}(b,c)). To understand where the instantaneous noise comes from, we then investigated its dependence on optical pumping. The blue shading in Fig.~\ref{fig:single_photon}(d) indicates optical pumping on the $\ket{3}{\leftrightarrow}\ket{2}$ (``blue'') transition, whereas the red shading represents optical pumping on the $\ket{1}{\leftrightarrow}\ket{2}$ (``red'') transition. 
Increasing the pump power on the blue transition partially suppresses the noise, although it rapidly levels off at an average of 0.25 photons per pulse, while the noise level rises linearly with increasing pump power on the red transition. These observations are well described by a simple noise model (supplementary information) based on \emph{spontaneous Raman scattering} (SRS) \cite{Lukin:1999uq,Phillipsa:2009fk}, in which Stokes and anti-Stokes photons scattered during the control pulse are both transmitted through our etalon-based filters (see Methods section) and detected as noise. Since the model does not include collision-induced fluorescence \cite{Manz_2007_PR_Collisional-decoherence-during} or leakage of the control, this suggests that these contributions are negligible. Furthermore, as shown in Fig.~\ref{fig:single_photon}(d) and (e), around 60\% of the noise affecting the quantum memory is emitted at the anti-Stokes frequency (when operated with maximal blue pumping) and could be removed using further spectral filtering. This would already bring the true unconditional noise floor down to 0.1 photons per pulse, resulting in an unconditional signal-to-noise ratio (SNR) of 10:1 for single-photon retrieval. The signal-frequency (Stokes) noise persists even if the optical pumping and spectral filtering are perfect and has its origin in four-wave mixing seeded by spontaneous anti-Stokes scattering. However, even this noise can be eliminated if the anti-Stokes channel can be suppressed or rendered much weaker than the Stokes channel, for instance by operating the memory closer to resonance, so that the anti-Stokes detuning is relatively much larger. Finally, although our fibre-coupled detection system is optimised for the mode of the signal, the noise is scattered mostly into the control mode \cite{Raymer:1985qy,Sorensen:2009fj}, which suggests that the noise floor can be further reduced by angle tuning the control field \cite{Surmacz:2008vf}. 
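The arithmetic behind the quoted 0.1-photon floor and 10:1 SNR is simple; a sketch using the measured numbers above (the one-photon retrieval level used for the SNR is the stated single-photon operating point):

```python
# measured unconditional noise floor (photons per pulse)
noise_floor = 0.25
# fraction of the noise emitted at the anti-Stokes frequency,
# removable with additional spectral filtering
anti_stokes_frac = 0.60

# residual noise floor after filtering out the anti-Stokes component
residual = noise_floor * (1 - anti_stokes_frac)
assert abs(residual - 0.10) < 1e-9

# unconditional SNR for one retrieved photon per pulse
snr = 1.0 / residual
assert round(snr) == 10
```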
These results show that the Raman memory represents a genuine quantum-ready optical memory, functioning at room temperature. In conclusion, we have demonstrated a coherent, broadband, single-photon-level optical memory at room temperature, with a detailed investigation of the unconditional single-photon noise floor. We have also extended the memory storage time far enough that it could be used to improve heralded multiphoton rates from parametric down-conversion sources --- an important step towards making this type of memory useful for quantum repeaters. This work shows that the far off-resonant Raman memory scheme makes it possible to implement a quantum-ready memory in warm atomic vapour, and, together with technological advances in the microfabrication of vapour cells and hollow-core waveguide structures embedded on chips, opens a path to an integrated, scalable quantum information technology architecture. \section*{Methods} The atomic ensemble is initially prepared in the ground state~$\ket{1}$ by optically pumping with an external cavity diode laser (ECDL). For the storage process, the signal and write pulses, both derived from a Ti:Sapph laser oscillator and an EOM \cite{Reim_2010_P_Towards-high-speed-optical-qua}, are sent temporally and spatially overlapped into the vapour cell (Fig. \ref{fig:experiment}(b)). The pulses have a duration of 300 ps, corresponding to a bandwidth of 1.5~GHz. The bandwidth of the memory is defined dynamically by the strong write field, which is about $10^9$ times brighter than the single-photon-level signal field. The necessary filtering is provided by a Glan-laser polariser and three air-spaced Fabry-Perot etalons with a free spectral range of 18.4~GHz and transmission window of 1.5~GHz, giving a total extinction ratio of $10^{-11}$. Note that the free spectral range is twice the 9.2~GHz Stokes shift, so both Stokes (signal) and anti-Stokes (noise) frequencies pass through the filters. 
As discussed in the main text, anti-Stokes noise could be eliminated using more selective filters. The detection system consists of single-photon counting modules (SPCM) combined with a time-to-amplitude converter (TAC) and a multichannel analyser (MCA), which allows photo-detection events to be time-binned with a resolution of $\sim125$~ps. All experimental single-photon-level data are histograms derived from 360,000 experiments. The memory operates at a repetition rate of 3 kHz and the storage time is set to 800~ns. \section*{Author contribution} KFR and PM carried out the experimental work and took the data. KFR, JN and NKL were responsible for the experimental design and theoretical analysis, and KCL and IAW contributed to the experiment. The manuscript was written by KFR with input from JN, NKL and IAW. \section*{Supplementary information} \subsection*{SRS noise model} The dominant noise process in our Raman memory appears to be spontaneous Stokes and anti-Stokes scattering during the strong control pulse. To model the noise, we consider the Maxwell-Bloch equations describing the collinear propagation of the Stokes and anti-Stokes field amplitudes $A_\mathrm{S}$, $A_\mathrm{AS}$ through a $\Lambda$-type ensemble with resonant optical depth $d$ along the $z$-axis, in the presence of a control pulse, whose shape is described by the time-dependent Rabi frequency $\Omega = \Omega(\tau)$, where $\tau$ is the time in a reference frame moving with the control. 
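From the numbers in the Methods one can also check the control-leakage budget (a sketch; comparing the leakage to the measured noise floor on a per-pulse, single-photon signal scale is our assumption):

```python
# control pulse is ~10^9 times brighter than the single-photon-level signal
control_to_signal = 1e9
# total extinction of the Glan-laser polariser plus three etalons
extinction = 1e-11

# control leakage in photons per pulse, for a ~1-photon signal scale
leakage = control_to_signal * extinction
assert abs(leakage - 0.01) < 1e-12

# far below the measured 0.25-photon unconditional noise floor,
# consistent with control leakage being negligible in the noise model
assert leakage < 0.25
```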
In the adiabatic limit, where the bandwidth and Rabi frequency of the control are much smaller than the detuning of the fields from resonance, the propagation equations take the form \begin{eqnarray} \label{MaxBloch} \left[\partial_z + \frac{d\gamma p_1}{\Gamma_\mathrm{S}}\right]A_\mathrm{S} &=& -\frac{\Omega \sqrt{d\gamma}}{\Gamma_\mathrm{S}}B,\\ \nonumber \left[\partial_z -\frac{d\gamma p_3}{\Gamma_\mathrm{AS}^*}\right]A^\dagger_\mathrm{AS} &=& -\frac{\Omega^*\sqrt{d\gamma}}{\Gamma_\mathrm{AS}^*}B,\\ \nonumber \left[\partial_\tau - |\Omega|^2\left(\frac{1}{\Gamma_\mathrm{S}}-\frac{1}{\Gamma_\mathrm{AS}^*}\right)\right]B&=&-\sqrt{d\gamma}\Omega^*\left(\frac{p_1}{\Gamma_\mathrm{S}} + \frac{p_3}{\Gamma_\mathrm{S}^*}\right)A_\mathrm{S}\\ \nonumber & &-\sqrt{d\gamma}\Omega\left(\frac{p_1}{\Gamma_\mathrm{AS}} + \frac{p_3}{\Gamma_\mathrm{AS}^*}\right)A_\mathrm{AS}^\dagger, \end{eqnarray} where $B$ is the amplitude of the spin wave, $\Gamma_{\mathrm{S},\mathrm{AS}} = \gamma-\mathrm{i} \Delta_{\mathrm{S},\mathrm{AS}}$ is the complex detuning of the Stokes (anti-Stokes) field, $\gamma$ is the homogeneous linewidth of the $\ket{1}{\leftrightarrow}\ket{2}$ transition, $\Delta_\mathrm{AS} = \Delta_\mathrm{S}+\delta$, the Stokes detuning $\Delta_\mathrm{S}$ is equal to the detuning $\Delta$ of the signal field in Fig.~\ref{fig:experiment}, and $\delta = 9.2$~GHz is the Stokes shift. The $z$-coordinate has been normalised so that it runs from $0$ to $1$. We have defined $p_1$ ($p_3$) as the fraction of atoms initially in state $\ket{1}$ ($\ket{3}$). Since the noise process is weak, we assume that these populations, as well as the control pulse, are unaffected by the interaction.
In general, the solution is given by \begin{eqnarray} \nonumber A_{\mathrm{S},\mathrm{out}}(\tau) &=& \int_{-\infty}^\infty K_\mathrm{S}(\tau,\tau')A_{\mathrm{S},\mathrm{in}}(\tau')\,\mathrm{d}\tau' \\ \nonumber && + \int_{-\infty}^\infty G_\mathrm{S}(\tau,\tau')A_{\mathrm{AS},\mathrm{in}}^\dagger(\tau')\,\mathrm{d}\tau' \\ \nonumber && + \int_0^1L_\mathrm{S}(\tau,z)B_\mathrm{in}(z)\,\mathrm{d}z,\\ \nonumber A_{\mathrm{AS},\mathrm{out}}(\tau) &=& \int_{-\infty}^\infty K_\mathrm{AS}(\tau,\tau')A_{\mathrm{AS},\mathrm{in}}(\tau')\,\mathrm{d}\tau' \\ \nonumber && + \int_{-\infty}^\infty G_\mathrm{AS}(\tau,\tau')A_{\mathrm{S},\mathrm{in}}^\dagger(\tau')\,\mathrm{d}\tau' \\ \nonumber && + \int_0^1L_\mathrm{AS}(\tau,z)B_\mathrm{in}^\dagger(z)\,\mathrm{d}z, \end{eqnarray} where the subscripts `$\mathrm{in}$' (`$\mathrm{out}$') describe the amplitudes at the start (end) of the interaction, and where the integral kernels $K_{\mathrm{S},\mathrm{AS}}$, $G_{\mathrm{S},\mathrm{AS}}$ and $L_{\mathrm{S},\mathrm{AS}}$ are Green's functions that propagate the input fields to compute the output fields. Both the Stokes and anti-Stokes frequencies are passed by our spectral filters, so our noise signal is calculated by adding the average number of photons scattered into both Stokes and anti-Stokes modes, \begin{eqnarray} \nonumber S &=& \int_{-\infty}^\infty \langle A_{\mathrm{S},\mathrm{out}}^\dagger(\tau)A_{\mathrm{S},\mathrm{out}}(\tau)\rangle\,\mathrm{d}\tau\\ \label{noise} && + \int_{-\infty}^\infty \langle A_{\mathrm{AS},\mathrm{out}}^\dagger(\tau)A_{\mathrm{AS},\mathrm{out}}(\tau)\rangle \,\mathrm{d}\tau.
\end{eqnarray} It can be shown that the averaged photon numbers do not depend on the shape of the control pulse, but only on its energy, through the quantity $W = \int_{-\infty}^\infty |\Omega(\tau)|^2\,\mathrm{d}\tau$ \cite{Nunn_2007_PRAMOP_Mapping-broadband-single-photo}, as is usual in the transient regime of spontaneous Raman scattering \cite{Raymer:1985qy}. The initial state used to evaluate the expectation values is the vacuum state, with no Stokes/anti-Stokes photons and no spin wave excitations. Using the bosonic commutation relations $[A_{\mathrm{S},\mathrm{in}}(\tau),A_{\mathrm{S},\mathrm{in}}^\dagger(\tau')]=[A_{\mathrm{AS},\mathrm{in}}(\tau),A_{\mathrm{AS},\mathrm{in}}^\dagger(\tau')]=\delta(\tau-\tau')$, and noting that $B\propto \ket{1}\!\bra{3}$, so that $\langle B_\mathrm{in}^\dagger(z) B_\mathrm{in}(z')\rangle = p_3\delta(z-z')$ and $\langle B_\mathrm{in}(z) B_\mathrm{in}^\dagger(z')\rangle = p_1\delta(z-z')$, we obtain \begin{eqnarray} \label{Sexp} S &=& \int_{-\infty}^\infty \int_{-\infty}^\infty \mathrm{d}\tau' \mathrm{d}\tau \left\{|G_\mathrm{S}(\tau,\tau')|^2+|G_\mathrm{AS}(\tau,\tau')|^2\right\}\\ \nonumber && + \int_{-\infty}^\infty \int_0^1 \mathrm{d}z \mathrm{d}\tau \left\{p_3|L_\mathrm{S}(\tau,z)|^2+p_1|L_\mathrm{AS}(\tau,z)|^2\right\}. \end{eqnarray} This expression depends only on the Green's functions, and can be evaluated by solving the system of Eqs.~(\ref{MaxBloch}) numerically, treating the amplitudes $A_\mathrm{S}$, $A_\mathrm{AS}$, and $B$ as classical complex-valued functions. The variation of the populations $p_{1,3}$ with optical pumping power $P$ is modelled by setting \begin{equation} \label{pump} p_3(P) = \frac{1}{2}\left[1+\frac{P/P_\mathrm{s}}{1+|P|/P_\mathrm{s}}\right];\quad p_1(P) = 1-p_3(P), \end{equation} where $P_\mathrm{s}$ is the saturation power, with the convention that negative values of $P$ indicate pumping on the blue $\ket{2}\leftrightarrow\ket{3}$ transition.
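The pump saturation model is straightforward to evaluate; here is a minimal sketch (the function name is ours, and the default saturation power is the fitted value quoted in the text):

```python
def pump_populations(P_mW, P_s_mW=84.0):
    """Populations (p3, p1) versus optical pumping power, following the
    saturation model above; negative powers denote pumping on the blue
    |2>-|3> transition."""
    p3 = 0.5 * (1.0 + (P_mW / P_s_mW) / (1.0 + abs(P_mW) / P_s_mW))
    return p3, 1.0 - p3

assert pump_populations(0.0) == (0.5, 0.5)      # no pumping: balanced populations
assert pump_populations(84.0) == (0.75, 0.25)   # P = P_s: halfway to saturation
assert abs(pump_populations(1e6)[0] - 1.0) < 1e-4   # strong red pumping: p3 -> 1
```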
The observed spontaneously scattered Raman noise is then given by $S_{\rm obs}(P)=\kappa S(P)$, which incorporates a scaling parameter into the simple one-dimensional theory to account for the fact that the Raman scattering process has a broad spatial distribution \cite{Raymer:1985qy,Sorensen:2009fj}, whereas the signal is emitted into a single narrower spatial mode. The parameter $\kappa$ defines the overlap between the scattered mode and the detected mode, which is defined and filtered by a single-mode fibre optimised on the signal mode. For our experiment we have $d=1900$, $\gamma = 16$~MHz, $\Delta=15$~GHz and $W=30$~GHz. To correctly describe our noise measurements, we find $P_\mathrm{s}=84$~mW and $\kappa = 0.12$ using a least-squares fit. \subsection*{Magnetic dephasing} We model the magnetic dephasing by considering the evolution of the Raman coherence excited in the memory under the influence of a static magnetic field. Suppose that an atom is initially prepared in the Zeeman state $\ket{F_\mathrm{i},m_\mathrm{i}}$ with probability $p_{m_\mathrm{i}}$ within the initial hyperfine manifold with $F_\mathrm{i}=4$. Storing a signal pulse creates a spin wave, represented by applying the operator $S^\dagger$ to the initial atomic state, where $S = \alpha I + \beta \Sigma$, with $I$ the identity operator, $\alpha$, $\beta$ two coefficients quantifying the amplitude of the spin wave, whose values are not important, and $\Sigma$ the transition operator given by \begin{equation} \label{Sigma} \Sigma = \sum_{m_\mathrm{i}=-F_\mathrm{i}}^{F_\mathrm{i}} \sum_{m_\mathrm{f}=-F_\mathrm{f}}^{F_\mathrm{f}} C(m_\mathrm{i},m_\mathrm{f})\ket{F_\mathrm{i},m_\mathrm{i}}\!\!\bra{F_\mathrm{f},m_\mathrm{f}}. \end{equation} Here $C(m_\mathrm{i},m_\mathrm{f})$ is the coupling coefficient between the initial state and the final Zeeman state $\ket{F_\mathrm{f},m_\mathrm{f}}$, with $F_\mathrm{f}=3$.
To model the dephasing we do not need the absolute coupling strengths; the relative strengths are computed in a straightforward manner from the Clebsch-Gordan coefficients describing the allowed transitions, once the polarizations of the control and signal fields are fixed. Over a time $t$, the atomic spins undergo Larmor precession due to the magnetic field. If the field has strength $B$ in a direction parameterized by the polar and azimuthal angles $\theta$, $\phi$, the precession is described by the operator $U = R^\dagger ER$, where $R$ rotates the quantization axis to align it with the field, and $E$ accounts for the accumulation of phase by each Zeeman level, \begin{eqnarray} \nonumber R &=& \sum_{m_\mathrm{i}=-F_\mathrm{i}}^{F_\mathrm{i}} e^{\mathrm{i}( Y_\mathrm{i}\sin \phi - X_\mathrm{i} \cos \phi )\theta}\ket{F_\mathrm{i},m_\mathrm{i}}\!\!\bra{F_\mathrm{i},m_\mathrm{i}} \\ \nonumber & &+ \sum_{m_\mathrm{f}=-F_\mathrm{f}}^{F_\mathrm{f}} e^{\mathrm{i}( Y_\mathrm{f}\sin \phi - X_\mathrm{f} \cos \phi )\theta}\ket{F_\mathrm{f},m_\mathrm{f}}\!\!\bra{F_\mathrm{f},m_\mathrm{f}};\\ \nonumber E &=& \sum_{m_\mathrm{i}=-F_\mathrm{i}}^{F_\mathrm{i}} e^{\mathrm{i}m_\mathrm{i}g_\mathrm{i}\mu_\mathrm{B} B t}\ket{F_\mathrm{i},m_\mathrm{i}}\!\!\bra{F_\mathrm{i},m_\mathrm{i}} \\ \nonumber & & + \sum_{m_\mathrm{f}=-F_\mathrm{f}}^{F_\mathrm{f}} e^{\mathrm{i}m_\mathrm{f}g_\mathrm{f}\mu_\mathrm{B} B t}\ket{F_\mathrm{f},m_\mathrm{f}}\!\!\bra{F_\mathrm{f},m_\mathrm{f}}. \end{eqnarray} Here $X_{\mathrm{i},\mathrm{f}}$, $Y_{\mathrm{i},\mathrm{f}}$ are the $x$ and $y$ components of the spin angular momentum operators for the spins $F_{\mathrm{i},\mathrm{f}}$, and $g_{\mathrm{i},\mathrm{f}}$ are the $g$-factors determining the size of the Zeeman splitting within each hyperfine manifold. In our case we have $g_\mathrm{i}=1/4$ and $g_\mathrm{f}=-1/4$. $\mu_\mathrm{B}$ is the Bohr magneton.
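The diagonal factor $E$ restricted to one manifold is easy to construct numerically; below is a sketch of that piece only (our own code; we take $\mu_\mathrm{B}/\hbar$ in angular-frequency units, and the rotation $R$, which additionally needs the spin matrices $X$, $Y$, is omitted):

```python
import numpy as np

MU_B_OVER_HBAR = 8.794e6  # Bohr magneton / hbar, approx., in rad s^-1 gauss^-1

def zeeman_phase_operator(F, g, B_gauss, t_s):
    """Diagonal operator E on one hyperfine manifold: level |F, m> accumulates
    the phase exp(i m g mu_B B t)."""
    m = np.arange(-F, F + 1)
    return np.diag(np.exp(1j * m * g * MU_B_OVER_HBAR * B_gauss * t_s))

# For F_i = 4 there are 2F + 1 = 9 Zeeman levels; no phase accumulates at t = 0,
# and the precession is unitary at all times.
E0 = zeeman_phase_operator(4, 0.25, 0.13, 0.0)
assert E0.shape == (9, 9) and np.allclose(E0, np.eye(9))
E = zeeman_phase_operator(4, 0.25, 0.13, 800e-9)
assert np.allclose(E @ E.conj().T, np.eye(9))
```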
The Raman coherence acts as a source for the retrieved signal field in the presence of the retrieval control pulse, so the retrieval efficiency is proportional to $|\langle \Sigma \rangle |^2$, where the expectation value is evaluated just before retrieval. Incoherently summing the contributions arising from atoms starting in each of the initial Zeeman levels, we compute the retrieval efficiency according to the formula \begin{equation} \label{ret} \eta(t) \propto \sum_{m_\mathrm{i}=-F_\mathrm{i}}^{F_\mathrm{i}} p_{m_\mathrm{i}} \left|\bra{F_\mathrm{i},m_\mathrm{i}} SU^\dagger \Sigma U S^\dagger \ket{F_\mathrm{i},m_\mathrm{i}}\right|^2. \end{equation} The only non-vanishing terms in the above expression are all proportional to $|\alpha \beta^*|^2$, confirming that the absolute values of these coefficients simply determine the scaling of $\eta$, not its shape, so that we obtain $$ \eta(t) \propto \sum_{m_\mathrm{i}=-F_\mathrm{i}}^{F_\mathrm{i}} p_{m_\mathrm{i}} \left|\bra{F_\mathrm{i},m_\mathrm{i}} U^\dagger \Sigma U \Sigma^\dagger \ket{F_\mathrm{i},m_\mathrm{i}}\right|^2. $$ The plot in Fig.~\ref{fig:storage} is produced by plotting $\eta(t)$, multiplied by an appropriate scaling factor, assuming a uniform initial thermal distribution $p_{m_\mathrm{i}}=1/(2F_\mathrm{i}+1)$, where the magnetic field $B\approx 0.13$~Gauss and orientation ($\theta = 30^\circ$ from the vertical, \emph{i.e.} from the direction of the control field polarization, and $\phi = 25^\circ$ from the direction of propagation of the optical beams) are determined from a least-squares fit to the experimental data. \begin{thebibliography}{10} \bibitem{Duan_2001_N_Long-distance-quantum-communic} Duan, L.-M., Lukin, M.~D., Cirac, J.~I. \& Zoller, P. \newblock Long-distance quantum communication with atomic ensembles and linear optics. \newblock {\em Nature}{ \bf 414}, 413--418 (2001). \bibitem{Briegel_1998_PRL_Quantum-Repeaters:-The-Role-of} Briegel, H.
J., D\"{u}r, W., Cirac, J.~I. \& Zoller, P. \newblock {Quantum Repeaters: The Role of Imperfect Local Operations in Quantum Communication}. \newblock {\em Phys. Rev. Lett.}{ \bf 81}, 5932 (1998). \bibitem{Manz_2007_PR_Collisional-decoherence-during} Manz, S., Fernholz, T., Schmiedmayer, J. \& Pan, J. \newblock {Collisional decoherence during writing and reading quantum states}. \newblock {\em Phys. Rev. A}{ \bf 75}, 040101 (2007). \bibitem{Choi_2008_N_Mapping-photonic-entanglement-} Choi, K.~S., Deng, H., Laurat, J. \& Kimble, H.~J. \newblock {Mapping photonic entanglement into and out of a quantum memory}. \newblock {\em Nature}{ \bf 452}, 67--71 (2008). \bibitem{Simon_2007_PRL_Interfacing-Collective-Atomic-} Simon, J., Tanji, H., Thompson, J.~K. \& Vuleti{\'c}, V. \newblock {Interfacing Collective Atomic Excitations and Single Photons}. \newblock {\em Phys. Rev. Lett.}{ \bf 98}, 183601 (2007). \bibitem{Matsukevich_2006_PRL_Deterministic-Single-Photons-v} Matsukevich, D.~N. \emph{et al.} \newblock {Deterministic Single Photons via Conditional Quantum Evolution}. \newblock {\em Phys. Rev. Lett.}{ \bf 97}, 013601 (2006). \bibitem{Reim_2010_P_Towards-high-speed-optical-qua} Reim, K.~F. \emph{et al.} \newblock {Towards high-speed optical quantum memories}. \newblock {\em Nature Photon}{ \bf 4}, 218--221 (2010). \bibitem{Gisin_2002_RMP_Quantum-cryptography} Gisin, N., Ribordy, G., Tittel, W. \& Zbinden, H. \newblock {Quantum cryptography}. \newblock {\em Phys. Rev. Mod.}{ \bf 74}, 145 (2002). \bibitem{Julsgaard:2004lr} Julsgaard, B., Sherson, J., Cirac, J.~I., Fiurasek, J. \& Polzik, E.~S. \newblock Experimental demonstration of quantum memory for light. \newblock {\em Nature}{ \bf 432}, 482--486 (2004). \bibitem{Baluktsian:2010kx} Baluktsian, T. \emph{et al.} \newblock {Fabrication method for microscopic vapor cells for alkali atoms}. \newblock {\em Optics letters}{ \bf 35}, 1950--1952 (2010). \bibitem{Wu:2010uq} Wu, B. 
\emph{et al.} \newblock Slow light on a chip via atomic quantum state control. \newblock {\em Nature Photon}{ doi:10.1038/nphoton.2010.211}. \bibitem{Novikova:2007kc} Novikova, I. \emph{et al.} \newblock Optimal control of light pulse storage and retrieval. \newblock {\em Phys. Rev. Lett.}{ \bf 98}, 243602 (2007). \bibitem{Eisaman_2005_N_Electromagnetically-induced-tr} Eisaman, M.~D. \emph{et al.} \newblock Electromagnetically induced transparency with tunable single-photon pulses. \newblock {\em Nature}{ \bf 438}, 837--841 (2005). \bibitem{Chou_2007_S_Functional-Quantum-Nodes-for-E} Chou, C. \emph{et al.} \newblock {Functional Quantum Nodes for Entanglement Distribution over Scalable Quantum Networks}. \newblock {\em Science}{ \bf 316}, 1316--1320 (2007). \bibitem{Chen_2008_P_Memory-built-in-quantum-telepo} Chen, Y. \emph{et al.} \newblock {Memory-built-in quantum teleportation with photonic and atomic qubits}. \newblock {\em Nature Phys.}{ \bf 4}, 103--107 (2008). \bibitem{Chen_2006_PRL_Deterministic-and-Storable-Sin} Chen, S. \emph{et al.} \newblock {Deterministic and Storable {Single-Photon} Source Based on a Quantum Memory}. \newblock {\em Phys. Rev. Lett.}{ \bf 97}, 173004 (2006). \bibitem{Hedges_2010_N_Efficient-quantum-memory-for-l} Hedges, M.~P., Longdell, J.~J., Li, Y. \& Sellars, M.~J. \newblock {Efficient quantum memory for light}. \newblock {\em Nature}{ \bf 465}, 1052--1056 (2010). \bibitem{Riedmatten:2008ek} de~Riedmatten, H., Afzelius, M., Staudt, M.~U., Simon, C. \& Gisin, N. \newblock A solid-state light-matter interface at the single-photon level. \newblock {\em Nature}{ \bf 456}, 773--777 (2008). \bibitem{Nunn_2007_PRAMOP_Mapping-broadband-single-photo} Nunn, J. \emph{et al.} \newblock {Mapping broadband single-photon wave packets into an atomic memory}. \newblock {\em Phys. Rev. A}{ \bf 75}, 011401 (2007). \bibitem{Cohen_2009_PRL_Tailored-Photon-Pair-Generatio} Cohen, O. \emph{et al.} \newblock {Tailored {Photon-Pair} Generation in Optical Fibers}.
\newblock {\em Phys. Rev. Lett.}{ \bf 102}, 123603 (2009). \bibitem{Raab_1987_PRL_Trapping-of-Neutral-Sodium-Ato} Raab, E.~L., Prentiss, M., Cable, A., Chu, S. \& Pritchard, D.~E. \newblock {Trapping of Neutral Sodium Atoms with Radiation Pressure}. \newblock {\em Phys. Rev. Lett.}{ \bf 59}, 2631 (1987). \bibitem{Hosseini:2009lq} Hosseini, M. \emph{et al.} \newblock Coherent optical pulse sequencer for quantum applications. \newblock {\em Nature}{ \bf 461}, 241--245 (2009). \bibitem{Bouwmeester_Experimental_quantum_teleportation} Bouwmeester, D. \emph{et al.} \newblock {Experimental quantum teleportation}. \newblock {\em Nature}{ \bf 390}, 575 (1997). \bibitem{Camacho_2009_P_Four-wave-mixing-stopped-light} Camacho, R.~M., Vudyasetu, P.~K. \& Howell, J.~C. \newblock {Four-wave-mixing stopped light in hot atomic rubidium vapour}. \newblock {\em Nature Photon}{ \bf 3}, 103--106 (2009). \bibitem{Steck_2009_AOH_Cesium-D-Line-Data} Steck, D.~A. \newblock Caesium D line data. \newblock {\em available online at http://steck.us/alkalidata}{ \bf } (2009). \bibitem{Phillipsa:2009fk} Phillips, N.~B., Gorshkov, A.~V. \& Novikova, I. \newblock {Slow light propagation and amplification via electromagnetically induced transparency and four-wave mixing in an optically dense atomic vapor}. \newblock {\em Journal of Modern Optics}{ \bf 56}, 1916 (2009). \bibitem{Lukin:1999uq} Lukin, M., Matsko, A., Fleischhauer, M. \& Scully, M. \newblock {Quantum noise and correlations in resonantly enhanced wave mixing based on atomic coherence}. \newblock {\em Phys. Rev. Lett.}{ \bf 82}, 1847--1850 (1999). \bibitem{Raymer:1985qy} Raymer, M., Walmsley, I., Mostowski, J. \& Sobolewska, B. \newblock {Quantum theory of spatial and temporal coherence properties of stimulated Raman scattering}. \newblock {\em Phys. Rev. A}{ \bf 32}, 332--344 (1985). \bibitem{Sorensen:2009fj} S{\o}rensen, M. \& S{\o}rensen, A. \newblock {Three-dimensional theory of stimulated Raman scattering}. \newblock {\em Phys. Rev.
A}{ \bf 80}, 33804 (2009). \bibitem{Surmacz:2008vf} Surmacz, K. \emph{et al.} \newblock Efficient spatially resolved multimode quantum memory. \newblock {\em Phys. Rev. A}{ \bf 78}, 033806 (2008). \end{thebibliography} \end{document}
\begin{document} \addtocounter{footnote}{1} \footnotetext{\label{fn-dimap} Atminas and Lozin gratefully acknowledge support from DIMAP -- the Center for Discrete Mathematics and its Applications at the University of Warwick. } \addtocounter{footnote}{1} \footnotetext{\label{fn-epsrc} Brignall, Korpelainen, and Vatter were partially supported by EPSRC Grant EP/J006130/1. } \addtocounter{footnote}{1} \footnotetext{\label{fn-lozinepsrc} Lozin was partially supported by EPSRC Grants EP/I01795X/1 and EP/L020408/1. } \addtocounter{footnote}{1} \footnotetext{ Vatter was partially supported by the National Security Agency under Grant Number H98230-12-1-0207 and the National Science Foundation under Grant Number DMS-1301692. The United States Government is authorized to reproduce and distribute reprints notwithstanding any copyright notation herein. \label{fn-vatter} } \title{Well-Quasi-Order for Permutation Graphs Omitting a Path and a Clique} \begin{abstract} We consider well-quasi-order for classes of permutation graphs which omit both a path and a clique. Our principal result is that the class of permutation graphs omitting $P_5$ and a clique of any size is well-quasi-ordered. This is proved by giving a structural decomposition of the corresponding permutations. We also exhibit three infinite antichains to show that the classes of permutation graphs omitting $\{P_6,K_6\}$, $\{P_7,K_5\}$, and $\{P_8,K_4\}$ are not well-quasi-ordered. \end{abstract} \section{Introduction}\label{sec-intro} While the Minor Theorem of Robertson and Seymour~\cite{robertson:graph-minors-i-xx:} shows that the set of all graphs is well-quasi-ordered under the minor relation, it is well known that this set is not well-quasi-ordered under the induced subgraph order. Consequently, there has been considerable interest in determining which classes of graphs are well-quasi-ordered under this order.
Here we consider finite graphs and permutation graphs which omit both a path $P_k$ and a clique $K_\ell$. Our main result, proved in Section~\ref{sec-p5-kell}, is that permutation graphs which avoid both $P_5$ and a clique $K_\ell$ are well-quasi-ordered under the induced subgraph order for every finite $\ell$. We also prove, in Section~\ref{sec-p7}, that the three classes of permutation graphs defined by forbidding $\{P_6,K_6\}$, $\{P_7,K_5\}$ and $\{P_8,K_4\}$ respectively are not well-quasi-ordered, by exhibiting an infinite antichain in each case. We begin with elementary definitions. We say that a \emph{class} of graphs is a set of graphs closed under isomorphism and taking induced subgraphs. A class of graphs is \emph{well-quasi-ordered (wqo)} if it contains neither an infinite strictly decreasing sequence nor an infinite antichain (a set of pairwise incomparable graphs) under the induced subgraph ordering. Note that as we are interested only in classes of finite graphs, wqo is synonymous with a lack of infinite antichains. Given a permutation $\pi=\pi(1)\cdots\pi(n)$, its corresponding \emph{permutation graph} is the graph $G_\pi$ on the vertices $\{1,\dots,n\}$ in which distinct vertices $i$ and $j$ are adjacent if $i<j$ and $\pi(i)>\pi(j)$. This mapping is many-to-one because, for example, $G_{231}\cong G_{312}\cong P_3$. Given permutations $\sigma=\sigma(1)\cdots\sigma(k)$ and $\pi=\pi(1)\cdots\pi(n)$, we say that $\sigma$ is \emph{contained} in $\pi$, and write $\sigma\le\pi$, if there are indices $1\le i_1<\cdots<i_k\le n$ such that the sequence $\pi(i_1)\cdots\pi(i_k)$ is in the same relative order as $\sigma$. For us, a \emph{class} of permutations is a set of permutations closed downward under this containment order, and a class is wqo if it does not contain an infinite antichain. The mapping $\pi\mapsto G_\pi$ is easily seen to be \emph{order-preserving}, i.e., if $\sigma\le\pi$ then $G_\sigma$ is an induced subgraph of $G_\pi$.
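Both definitions are easy to make concrete with a small brute-force sketch (all function names below are ours, not from any library): the edge set of $G_\pi$ is the set of inversions of $\pi$, and containment is tested by exhaustive search, which is fine for the short permutations used as examples here.

```python
from itertools import combinations

def perm_graph_edges(pi):
    """Edge set of the permutation graph G_pi on vertices 0..n-1:
    distinct i < j are adjacent when pi(i) > pi(j) (an inversion)."""
    return {(i, j) for i, j in combinations(range(len(pi)), 2) if pi[i] > pi[j]}

def pattern(values):
    """The permutation of 1..k giving the relative order of `values`."""
    ranks = sorted(range(len(values)), key=lambda i: values[i])
    out = [0] * len(values)
    for r, i in enumerate(ranks):
        out[i] = r + 1
    return tuple(out)

def contains(sigma, pi):
    """Brute-force test of sigma <= pi in the containment order."""
    return any(pattern([pi[i] for i in idx]) == tuple(sigma)
               for idx in combinations(range(len(pi)), len(sigma)))

# G_231 and G_312 are both the path P_3, illustrating that pi -> G_pi is
# many-to-one.
assert perm_graph_edges((2, 3, 1)) == {(0, 2), (1, 2)}
assert perm_graph_edges((3, 1, 2)) == {(0, 1), (0, 2)}
assert contains((2, 1), (1, 3, 2)) and not contains((1, 2, 3), (3, 2, 1))
```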
Therefore if a class $\mathcal{C}$ of permutations is wqo then the associated class of permutation graphs $\{G_\pi \::\: \pi\in\mathcal{C}\}$ must also be wqo. However, it is possible to exploit the many-to-one nature of this mapping to construct infinite antichains of permutations which do not correspond to antichains of permutation graphs (though currently there are no examples of this in the literature). For this reason, when showing that classes of permutation graphs are wqo we instead prove the stronger result that the associated permutation classes are wqo, but when constructing infinite antichains, we must construct antichains of permutation graphs. \begin{figure} \caption{This figure shows the known wqo results for classes of graphs and permutation graphs avoiding paths and cliques, including the results of this paper. Filled circles indicate that the class of all graphs avoiding the specified path and clique is wqo. Half-filled circles indicate that the corresponding class of permutation graphs is wqo, but that the corresponding class of all graphs is not. Empty circles indicate that neither class is wqo. Note that for the three unknown cases (indicated by question marks), it is known that the corresponding class of graphs contains an infinite antichain.} \label{fig-wqo-results} \end{figure} A summary of our results is shown in Figure~\ref{fig-wqo-results}, with our new contributions in the upper-right highlighted. The rest of this paper is organised as follows: in Section~\ref{sec-non-perm} we briefly summarise the status of the analogous question for non-permutation graphs. Section~\ref{sec-structural-tools} sets up the necessary notions from the study of permutation classes, before the proof of the well-quasi-orderability of $P_5$, $K_\ell$-free permutation graphs in Section~\ref{sec-p5-kell}.
Section~\ref{sec-p7} contains three non-wqo results, Section~\ref{sec-enum} briefly presents some enumerative consequences of our results, and the final section contains a few concluding remarks about the three remaining open cases. \section{Non-Permutation Graphs}\label{sec-non-perm} When the graphs needn't be permutation graphs, well-quasi-ordering is of course harder to attain. On the side of wqo, graphs avoiding $K_2$ are trivially wqo and graphs avoiding $P_4$ (co-graphs) are well-known to be wqo, so there are only two nontrivial results, namely for graphs avoiding $K_3$ together with $P_5$ or $P_6$. Of course, it suffices to show that $K_3, P_6$-free graphs are wqo, and this was recently proved by Atminas and Lozin~\cite{atminas:labelled-induced:}. On the side of non-wqo, two infinite antichains are required: one (from~\cite{korpelainen:two-forbidden-i:}) in the class of graphs with neither $P_5$ (or even $2K_2$) nor $K_4$, and one (from~\cite{korpelainen:bipartite-induc:}) in the class omitting both $P_7$ and $K_3$. For completeness, we outline both constructions here. For graphs which omit $P_5$ and $K_4$, Korpelainen and Lozin~\cite{korpelainen:two-forbidden-i:} construct an infinite antichain by adapting a correspondence between permutations and graphs due to Ding~\cite{ding:subgraphs-and-w:}. The correspondence we require for our antichain can be described as follows, and is accompanied by Figure~\ref{fig-ding-antichain}. For a permutation $\pi$ of length $n$, first note that $\pi$ can be thought of as a structure with $n$ points, equipped with two linear orderings. With this in mind, form a graph $B_\pi$ which consists of three independent sets, $U$, $V$ and $W$, each containing $n$ vertices. Let $U=\{u_1,u_2,\dots,u_n\}$. Between $U$ and $V$, there is a \emph{chain graph}: vertex $u_i$ in $U$ is adjacent to $i$ vertices of $V$, and for $i>1$ the neighbourhood of $u_{i-1}$ in $V$ is contained in the neighbourhood of $u_i$ in $V$.
Note that this containment of neighbourhoods defines a linear ordering on the vertices of $U$: $u_1<u_2<\cdots < u_n$. Next, between $U$ and $W$, we build another chain graph. This time, vertex $u_{\pi(i)}$ has $i$ neighbours in $W$, and for $i>1$ the neighbourhood of $u_{\pi(i)}$ in $W$ contains the neighbourhood of $u_{\pi(i-1)}$ in $W$. This defines a second linear ordering on $U$, namely $u_{\pi(1)}\prec u_{\pi(2)}\prec \cdots\prec u_{\pi(n)}$, and hence $\pi$ has been encoded in $B_\pi$. Finally, to complete the construction, between $V$ and $W$ there is a complete bipartite graph, i.e., every edge is present. \begin{figure}\label{fig-ding-antichain} \end{figure} Now it is routine to verify that $B_\pi$ does not contain $2K_2\le P_5$ or $K_4$ for any $\pi$. Moreover for permutations $\sigma$ and $\pi$, we have $\sigma\leq\pi$ if and only if $B_\sigma\leq B_\pi$ as induced subgraphs. Thus, one may take any infinite antichain of permutations (for example, the ``increasing oscillating'' antichain), and encode each element of the antichain as a graph, yielding an infinite antichain in the class of $2K_2$, $K_4$-free graphs. For graphs which omit $P_7$ and $K_3$, a modification of this construction was used by~\cite{korpelainen:bipartite-induc:} to answer a question of Ding~\cite{ding:subgraphs-and-w:}, who had asked whether $P_7$-free bipartite graphs are wqo. Starting with $B_\pi$, the modification ``splits'' every vertex $u$ of $U$ into two, $u^{(1)}$ and $u^{(2)}$, with an edge between them. The new vertex $u^{(1)}$ takes the neighbourhood of $u$ with $V$, while $u^{(2)}$ takes the neighbourhood of $u$ with $W$. The result is a graph with four independent sets $U^{(1)}$, $U^{(2)}$, $V$ and $W$ each of size $n$, with a perfect matching between $U^{(1)}$ and $U^{(2)}$, a complete bipartite graph between $V$ and $W$, and chain graphs between $U^{(1)}$ and $V$, and between $U^{(2)}$ and $W$. This construction yields a $2P_3$-free bipartite graph. 
Moreover, the permutation $\pi$ is still encoded in such a way as to ensure an infinite antichain of permutations maps to an infinite antichain of graphs. \section{Structural Tools}\label{sec-structural-tools} Instead of working directly with permutation graphs, we establish our wqo results for the corresponding permutation classes (which, by our observations in the introduction, is a stronger result). The permutations $24153$ and $31524$ are the only permutations which correspond to the permutation graph $P_5$, and thus to establish our main result we must determine the structure of the permutation class $$ \operatorname{Av}(24153, 31524, \ell\cdots 21). $$ Considering the wqo problem from the viewpoint of permutations has the added benefit of allowing us to make use of the recently developed tools in this field. In particular, we utilise grid classes and the substitution decomposition. Thus before establishing our main result we must first introduce these concepts. We frequently identify a permutation $\pi$ of length $n$ with its \emph{plot}, the set $\{(i,\pi(i))\::\: 1\le i\le n\}$ of points in the plane. We say that a rectangle in the plane is \emph{axis-parallel} if its top and bottom sides are parallel with the $x$-axis while its left and right sides are parallel with the $y$-axis. Given natural numbers $i$ and $j$ we denote by $[i,j]$ the closed interval $\{i,i+1,\dots,j\}$ and by $[i,j)$ the half-open interval $\{i,i+1,\dots,j-1\}$. Thus the axis-parallel rectangles we are interested in may be described by $[x,x']\times[y,y']$ for natural numbers $x$, $x'$, $y$, and $y'$. Monotone grid classes are a way of partitioning the entries of a permutation (or rather, its plot) into monotone axis-parallel rectangles in a manner specified by a $0/\mathord{\pm} 1$ matrix. In order for these matrices to align with plots of permutations, we index them with Cartesian coordinates. Suppose that $M$ is a $t\times u$ matrix (thus $M$ has $t$ columns and $u$ rows). 
An \emph{$M$-gridding} of the permutation $\pi$ of length $n$ consists of a pair of sequences $1=c_1\le\cdots\le c_{t+1}=n+1$ and $1=r_1\le\cdots\le r_{u+1}=n+1$ such that for all $k$ and $\ell$, the entries of (the plot of) $\pi$ that lie in the axis-parallel rectangle $[c_k,c_{k+1})\times [r_\ell,r_{\ell+1})$ are increasing if $M_{k,\ell}=1$, decreasing if $M_{k,\ell}=-1$, or empty if $M_{k,\ell}=0$. We say that the permutation $\pi$ is \emph{$M$-griddable} if it possesses an $M$-gridding, and the \emph{grid class} of $M$, denoted by $\operatorname{Grid}(M)$, consists of the set of $M$-griddable permutations. We further say that the permutation class $\mathcal{C}$ is $M$-griddable if $\mathcal{C}\subseteq\operatorname{Grid}(M)$, and that this class is \emph{monotone griddable} if there is a finite matrix $M$ for which it is $M$-griddable. Grid classes were first described in this generality (albeit under a different name) by Murphy and Vatter~\cite{murphy:profile-classes:}, who studied their wqo properties. To describe their result we need the notion of the \emph{cell graph} of a matrix $M$. This graph has vertex set $\{(i,j)\::\: M_{i,j}\neq 0\}$ and $(i,j)$ is adjacent to $(k,\ell)$ if they lie in the same row or column and there are no nonzero entries lying between them in this row or column. We typically attribute properties of the cell graph of $M$ to $M$ itself; thus we say that $M$ is a forest if its cell graph is a forest. \begin{theorem}[Murphy and Vatter~\cite{murphy:profile-classes:}]\label{thm-murphy-vatter} The grid class $\operatorname{Grid}(M)$ is wqo if and only if $M$ is a forest. \end{theorem} (This is a slightly different form of the result than is stated in \cite{murphy:profile-classes:}, but the two forms are equivalent as shown by Vatter and Waton~\cite{vatter:on-partial-well:}, who also gave a much simpler proof of Theorem~\ref{thm-murphy-vatter}.) 
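The cell-graph condition in Theorem~\ref{thm-murphy-vatter} is easy to check mechanically. Here is a small sketch (our own code; for simplicity the matrix is stored as a list of rows rather than in the Cartesian indexing used above): it connects consecutive nonzero cells within each row and column, then detects a cycle with union-find.

```python
def is_forest_matrix(M):
    """True iff the cell graph of the 0/+-1 matrix M (list of rows) is a forest."""
    cells = [(r, c) for r, row in enumerate(M) for c, v in enumerate(row) if v]
    edges = []
    for r, row in enumerate(M):                 # consecutive nonzeros in each row
        nz = [c for c, v in enumerate(row) if v]
        edges += [((r, a), (r, b)) for a, b in zip(nz, nz[1:])]
    for c in range(len(M[0])):                  # consecutive nonzeros in each column
        nz = [r for r, row in enumerate(M) if row[c]]
        edges += [((a, c), (b, c)) for a, b in zip(nz, nz[1:])]
    parent = {v: v for v in cells}
    def find(x):                                # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                        # this edge closes a cycle
        parent[ru] = rv
    return True

assert is_forest_matrix([[1, 0], [1, -1]])      # an L-shape of cells is a tree
assert not is_forest_matrix([[1, 1], [1, 1]])   # a full 2x2 block contains a cycle
```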
The monotone griddable classes were characterised by Huczynska and Vatter~\cite{huczynska:grid-classes-an:}. In order to present this result we also need some notation. Given permutations $\sigma$ and $\tau$ of respective lengths $m$ and $n$, their \emph{sum} is the permutation $\sigma\oplus\tau$ whose plot consists of the plot of $\tau$ above and to the right of the plot of $\sigma$. More formally, this permutation is defined by $$ (\sigma\oplus\tau)(i) = \left\{\begin{array}{ll} \sigma(i)&\mbox{for $1\le i\le m$,}\\ \tau(i-m)+m&\mbox{for $m+1\le i\le m+n$.} \end{array}\right. $$ The obvious symmetry of this operation (in which the plot of $\tau$ lies below and to the right of the plot of $\sigma$) is called the \emph{skew sum} of $\sigma$ and $\tau$ and is denoted $\sigma\ominus\tau$. We can now state the characterisation of monotone griddable permutation classes. \begin{theorem}[Huczynska and Vatter~\cite{huczynska:grid-classes-an:}] \label{thm-grid-characterisation} The permutation class $\mathcal{C}$ is monotone griddable if and only if it does not contain arbitrarily long sums of $21$ or skew sums of $12$. \end{theorem} The reader might note that the classes we are interested in are not monotone griddable, let alone $M$-griddable for a forest $M$. However, our proof will show that these classes can be built from a monotone griddable class via the ``substitution decomposition'', which we define shortly. Before this, though, we must introduce a few more concepts concerning monotone grid classes, the first two of which are alternative characterisations of monotone griddable classes. Given a permutation $\pi$, we say that the axis-parallel rectangle $R$ is \emph{monotone} if the entries of $\pi$ which lie in $R$ are monotone increasing or decreasing (otherwise $R$ is \emph{non-monotone}).
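The sum and skew sum defined above are one-liners in code; here is a small sketch (our own helper names), representing a permutation as a tuple of the values $1,\dots,n$:

```python
def direct_sum(sigma, tau):
    """sigma (+) tau: the plot of tau sits above and to the right of sigma."""
    return tuple(sigma) + tuple(t + len(sigma) for t in tau)

def skew_sum(sigma, tau):
    """sigma (-) tau: the plot of tau sits below and to the right of sigma."""
    return tuple(s + len(tau) for s in sigma) + tuple(tau)

assert direct_sum((2, 1), (1, 2)) == (2, 1, 3, 4)   # 21 + 12 = 2134
assert skew_sum((1, 2), (2, 1)) == (3, 4, 2, 1)     # 12 - 21 = 3421
```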
We say that the permutation $\pi$ can be \emph{covered by $s$ monotone rectangles} if there is a collection $\mathfrak{R}$ of $s$ monotone axis-parallel rectangles such that every point in the plot of $\pi$ lies in at least one rectangle in $\mathfrak{R}$. Clearly if $\mathcal{C}\subseteq\operatorname{Grid}(M)$ for a $t\times u$ matrix $M$ then every permutation in $\mathcal{C}$ can be covered by $tu$ monotone rectangles. To see the other direction, note that every permutation which can be covered by $s$ monotone rectangles is $M$-griddable for some matrix $M$ of size at most $(2s-1)\times (2s-1)$. There are only finitely many such matrices, say $M^{(1)}$, $\dots$, $M^{(m)}$, so their \emph{direct sum}, $$ \left(\begin{array}{ccc} &&M^{(m)}\\ &\iddots&\\ M^{(1)}&& \end{array}\right) $$ is a finite matrix whose grid class contains all such permutations.\footnote{\label{fn-matrix-sum} By adapting this argument it follows that if every permutation in the class $\mathcal{C}$ lies in the grid class of a forest of size at most $t\times u$, then $\mathcal{C}$ itself lies in the grid class of a (possibly much larger) forest.} This characterisation of monotone griddability is recorded in Proposition~\ref{prop-mono-rectangles} below, which also includes a third characterisation. We say that the line $L$ \emph{slices} the rectangle $R$ if $L\cap R\neq\emptyset$. If $\mathcal{C}\subseteq\operatorname{Grid}(M)$ for a $t\times u$ matrix $M$, then for every permutation $\pi\in\mathcal{C}$ there is a collection of $t+u$ horizontal and vertical lines (the grid lines) which slice every non-monotone axis-parallel rectangle of $\pi$. Conversely, every such collection of lines defines a gridding of $\pi$, completing the sketch of the proof of the following result.
\begin{proposition}[specialising Vatter~{\cite[Proposition 2.3]{vatter:small-permutati:}}] \label{prop-mono-rectangles} For a permutation class $\mathcal{C}$, the following are equivalent: \begin{enumerate} \item[(1)] $\mathcal{C}$ is monotone griddable, \item[(2)] there is a constant $\ell$ such that for every permutation $\pi\in\mathcal{C}$ the set of non-monotone axis-parallel rectangles of $\pi$ can be sliced by a collection of $\ell$ horizontal and vertical lines, and \item[(3)] there is a constant $s$ such that every permutation in $\mathcal{C}$ can be covered by $s$ monotone rectangles. \end{enumerate} \end{proposition} We now move on to the substitution decomposition, which will allow us to build the classes we are interested in from grid classes of forests. An {\it interval\/} in the permutation $\pi$ is a set of contiguous indices $I=[a,b]$ such that the set of values $\pi(I)=\{\pi(i) : i\in I\}$ is also contiguous. Given a permutation $\sigma$ of length $m$ and nonempty permutations $\alpha_1,\dots,\alpha_m$, the {\it inflation\/} of $\sigma$ by $\alpha_1,\dots,\alpha_m$ --- denoted $\sigma[\alpha_1,\dots,\alpha_m]$ --- is the permutation of length $|\alpha_1|+\cdots+|\alpha_m|$ obtained by replacing each entry $\sigma(i)$ by an interval that is order isomorphic to $\alpha_i$ in such a way that the intervals themselves are order isomorphic to $\sigma$. Thus the sum and skew sum operations are particular cases of inflations: $\sigma\oplus\tau=12[\sigma,\tau]$ and $\sigma\ominus\tau=21[\sigma,\tau]$. Given two classes $\mathcal{C}$ and $\mathcal{U}$, the \emph{inflation} of $\mathcal{C}$ by $\mathcal{U}$ is defined as \[ \mathcal{C}[\mathcal{U}]=\{\sigma[\alpha_1,\dots,\alpha_m]\::\:\mbox{$\sigma\in\mathcal{C}_m$ and $\alpha_1,\dots,\alpha_m\in\mathcal{U}$}\}. \] The class $\mathcal{C}$ is said to be \emph{substitution closed} if $\mathcal{C}[\mathcal{C}]=\mathcal{C}$. 
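For a concrete example of an inflation, $$132[21,1,12]=21534:$$ the entry $1$ of $132$ is replaced by an interval order isomorphic to $21$ occupying the two smallest values, the entry $3$ by the single greatest value $5$, and the entry $2$ by an increasing interval occupying the values $3$ and $4$.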
The \emph{substitution closure}, $\langle\mathcal{C}\rangle$, of a class $\mathcal{C}$ is defined as the smallest substitution closed class containing $\mathcal{C}$. A standard argument shows that $\langle\mathcal{C}\rangle$ exists, and by specialising a result of \cite{albert:inflations-of-g:} we obtain the following. \begin{theorem}[Albert, Ru\v{s}kuc, and Vatter~{\cite[specialisation of Theorem 4.4]{albert:inflations-of-g:}}] \label{thm-geom-simples-pwo-basis} If the matrix $M$ is a forest then the class $\langle\operatorname{Grid}(M)\rangle$ is wqo. \end{theorem} The permutation $\pi$ is said to be \emph{simple} if the only ways to write it as an inflation are trivial --- that is, it can only be written either as the inflation of a permutation by singletons, or as the inflation of a singleton by a permutation. Thus if $\pi$ has length $n$, it is simple if and only if its only intervals have lengths $0$, $1$, and $n$. Every permutation can be expressed as the inflation of a simple permutation, and in most cases this decomposition is unique. A permutation is said to be \emph{sum (resp., skew) decomposable} if it can be expressed as the sum (resp., skew sum) of two shorter permutations. Otherwise it is said to be \emph{sum (resp., skew) indecomposable}. \begin{proposition}[Albert and Atkinson~\cite{albert:simple-permutat:}]\label{simple-decomp-1} Every permutation $\pi$ except $1$ is the inflation of a unique simple permutation $\sigma$. Moreover, if $\pi=\sigma[\alpha_1,\dots,\alpha_m]$ for a simple permutation $\sigma$ of length $m\ge 4$, then each interval $\alpha_i$ is unique. If $\pi$ is an inflation of $12$ (i.e., is sum decomposable), then there is a unique sum indecomposable $\alpha_1$ such that $\pi=\alpha_1\oplus\alpha_2$ for some $\alpha_2$. The same holds, mutatis mutandis, with $12$ replaced by $21$ and sum replaced by skew.
\end{proposition} We close this section by noting how easily this machinery can show that permutation graphs omitting both $P_k$ and $K_3$ are wqo for all $k$, a result originally due to Korpelainen and Lozin~\cite{korpelainen:bipartite-induc:}. The corresponding permutations all avoid $321$ and thus lie in the grid class of an \emph{infinite} matrix, known as the infinite staircase (see Albert and Vatter~\cite{albert:generating-and-:}). Moreover, the sum indecomposable permutations which avoid $321$ and the two permutations corresponding to $P_k$ can be shown to lie in the grid class of a finite staircase, $$ \left(\begin{array}{ccccc} &&&1&1\\ &&\iddots&\iddots\\ &1&1\\ 1&1 \end{array}\right). $$ Finally, it follows by an easy application of Higman's Theorem~\cite{higman:ordering-by-div:} that if the sum indecomposable permutations in a class are wqo, then the class itself is wqo. (In this case, the wqo conclusion also follows by Theorem~\ref{thm-geom-simples-pwo-basis}.) \section{Permutation Graphs Omitting \texorpdfstring{$P_5$ and $K_\ell$}{P\_5 and K\_l}}\label{sec-p5-kell} In this section we prove that the class of permutations corresponding to permutation graphs omitting $P_5$ and $K_\ell$, $$ \operatorname{Av}(24153, 31524, \ell\cdots 21), $$ is wqo. Our proof basically consists of two steps. First, we show that the simple permutations in these classes are monotone griddable, and then we show that these griddings can be refined to forests. The conclusion then follows from Theorem~\ref{thm-geom-simples-pwo-basis}. Given a set of points in the plane, their \emph{rectangular hull} is defined to be the smallest axis-parallel rectangle containing all of them. We begin with a very simple observation about these simple permutations. \begin{proposition} \label{prop-simples-n1} For every simple permutation $\pi\in\operatorname{Av}(24153, 31524)$, either its greatest entry lies to the left of its least entry, or its leftmost entry lies above its rightmost entry. 
\end{proposition} \begin{proof} Suppose, for a contradiction, that $\pi$ is a simple permutation in $\operatorname{Av}(24153, 31524)$ such that its greatest entry lies to the right of its least entry, and its leftmost entry lies below its rightmost entry. Thus, these four extremal entries form the pattern $2143$, and the situation is as depicted in Figure~\ref{fig-extremal-points}(i). Since $\pi$ is simple, regions $A$ and $B$ cannot both be empty, so, without loss of generality, suppose that $A$ is non-empty and label the greatest entry in this region as point $x$. \begin{figure} \caption{The impossible configuration for a simple permutation in Proposition~\ref{prop-simples-n1}} \label{fig-extremal-points} \end{figure} Since $\pi$ is simple, the rectangular hull of the leftmost entry, the least entry, and the point $x$ cannot be an interval in $\pi$. Therefore, there must be a point either in $B$, or in that part of $C$ lying below $x$. Take the rightmost such point, and label it $y$. If $y$ is in region $C$, we immediately encounter the contradiction illustrated in Figure~\ref{fig-extremal-points}(ii): given our choices of $x$ and $y$, and since $24153$ is forbidden, the permutation must be sum decomposable. Therefore $y$ lies in region $B$, and we have the picture depicted in Figure~\ref{fig-extremal-points}(iii). Since $\pi$ is simple, there must now be a point in the region labelled $D$. However, in order for the permutation not to be sum decomposable, there must be another point in the region to the right of region $D$ and below the greatest entry in $D$, but this would force an occurrence of $31524$.\end{proof} The class $\operatorname{Av}(24153, 31524, \ell\cdots 21)$ is closed under group-theoretic inversion (because $24153^{-1}=31524$ and $\ell\cdots 21^{-1}=\ell\cdots 21$), so we may always assume that the latter option in Proposition~\ref{prop-simples-n1} holds. The rest of our proof adapts several ideas from Vatter~\cite{vatter:small-permutati:}.
Two rectangles in the plane are said to be \emph{dependent} if their projections onto either the $x$- or $y$-axis have nontrivial intersection, and otherwise they are said to be \emph{independent}. A set of rectangles is called independent if its members are pairwise independent. Thus an independent set of rectangles may be viewed as a permutation, and it satisfies the Erd\H{o}s-Szekeres Theorem (every permutation of length at least $(a-1)(b-1)+1$ contains either $12\cdots a$ or $b\cdots 21$). We construct independent sets of rectangles in the proofs of both Propositions~\ref{prop-simples-griddable} and \ref{prop-gridding-corners}. In these settings, the rectangles are used to capture ``bad'' areas in the plot of a permutation, and our desired result is obtained by slicing the rectangles with horizontal and vertical lines in the sense of Proposition~\ref{prop-mono-rectangles}. The following result shows that we may slice a collection of rectangles with only a few lines, so long as we can bound its independence number. \begin{theorem}[Gy{\'a}rf{\'a}s and Lehel~\cite{gyarfas:a-helly-type-pr:}] \label{thm-slice} There is a function $f(m)$ such that for any collection $\mathfrak{R}$ of axis-parallel rectangles in the plane which has no independent set of size greater than $m$, there exists a set of $f(m)$ horizontal and vertical lines which slice every rectangle in $\mathfrak{R}$. \end{theorem} Our next two results both rest upon Theorem~\ref{thm-slice}. \begin{proposition} \label{prop-simples-griddable} For every $\ell$, the simple permutations in $\operatorname{Av}(24153, 31524, \ell\cdots 21)$ are contained in $\operatorname{Grid}(M)$ for a finite $0/1$ matrix $M$. 
\end{proposition} \begin{proof} By Proposition~\ref{prop-mono-rectangles} it suffices to show that there is a function $g(\ell)$ such that every simple permutation in $\operatorname{Av}(24153, 31524, \ell\cdots 21)$ can be covered by $g(\ell)$ increasing axis-parallel rectangles (i.e., rectangles which only cover increasing sets of points). We prove this statement by induction on $\ell$. For the base case, we can take $g(2)=1$. Now take a simple permutation $\pi\in\operatorname{Av}(24153, 31524, \ell\cdots 21)$ for $\ell\ge 3$ and suppose that the claim holds for $\ell-1$. \begin{figure} \caption{On the left, the division of $\pi$ used in Proposition~\ref{prop-simples-griddable}} \label{fig-simples-griddable} \end{figure} By Proposition~\ref{prop-simples-n1}, we may assume that the leftmost entry of $\pi$ lies above its rightmost entry. Let $\pi_t$ be the permutation formed by all entries of $\pi$ lying above its rightmost entry and $\pi_b$ the permutation formed by all entries of $\pi$ lying below its leftmost entry (as shown in Figure~\ref{fig-simples-griddable}). Thus every entry of $\pi$ corresponds to an entry in $\pi_t$, to an entry in $\pi_b$, or to entries in both permutations. Moreover, both $\pi_t$ and $\pi_b$ avoid $(\ell-1)\cdots 21$. We would like to apply induction to find monotone rectangle coverings of both $\pi_t$ and $\pi_b$, but of course these permutations needn't be simple. Nevertheless, if $\pi_t$ is the inflation of the simple permutation $\sigma_t$ and $\pi_b$ of $\sigma_b$, then both $\sigma_t$ and $\sigma_b$ can be covered by $g(\ell-1)$ increasing axis-parallel rectangles by induction. Now we stretch these rectangles so that they cover the corresponding regions of $\pi$. By adapting the proof of Proposition~\ref{prop-mono-rectangles}, we may then extend this rectangle covering to a gridding of size at most $4g(\ell-1)\times 4g(\ell-1)$.
While this gridding needn't be monotone, inside each of its cells we see points which correspond either to inflations of increasing sequences of $\sigma_t$ or of $\sigma_b$. Let $\mathfrak{L}$ denote the grid lines of this gridding. We now say that the axis-parallel rectangle $R$ is \emph{bad} if it is fully contained in a cell of the above gridding and the points it covers contain a decreasing interval in either $\pi_t$ or $\pi_b$. Further let $\mathfrak{R}$ denote the collection of all bad rectangles. We aim to show that there is a collection of $f(2\ell(\ell-1))$ lines which slice every bad rectangle, where $f$ is the function defined in Theorem~\ref{thm-slice}. This, together with Proposition~\ref{prop-mono-rectangles} and the comments before it, will complete the proof because these lines together with $\mathfrak{L}$ will give a gridding of $\pi$ of bounded size. Theorem~\ref{thm-slice} will give us the desired lines if we can show that $\mathfrak{R}$ has no independent set of size greater than $2\ell(\ell-1)$. Suppose to the contrary that $\mathfrak{R}$ contains an independent set of size $2\ell(\ell-1)+1$. Then at least $\ell(\ell-1)+1$ of these bad rectangles lie in one of $\pi_t$ or $\pi_b$; suppose first that these $\ell(\ell-1)+1$ bad rectangles lie in $\pi_t$. Because $\pi_t$ avoids $(\ell-1)\cdots 21$, the Erd\H{o}s-Szekeres Theorem implies that at least $\ell+1$ of its bad rectangles occur in increasing order (when read from left to right). Because $\pi$ itself is simple, each such rectangle must be separated by some entry of $\pi$, and this separating point must lie in $\pi_b\setminus\pi_t$ since (by definition) the points inside this bad rectangle form a decreasing interval in $\pi_t$. Appealing once more to Erd\H{o}s-Szekeres we see that two such separating points must themselves lie in increasing order, as shown in the centre of Figure~\ref{fig-simples-griddable}. However, this contradicts our assumption that $\pi$ avoids $31524$ (given by the solid points).
As shown on the rightmost pane of this figure, if the $\ell(\ell-1)+1$ bad rectangles lie in $\pi_b$ we instead find a copy of $24153$. \end{proof} A \emph{submatrix} of a matrix is obtained by deleting any collection of rows and columns from the matrix. Our next result shows that in $\operatorname{Av}(24153, 31524, \ell\cdots 21)$ the simple permutations can be gridded in a matrix which does not contain $\fnmatrix{rr}{1&1\\1&*}$ as a submatrix, i.e., in this matrix, there is no non-zero cell with both a non-zero cell below it in the same column, and a non-zero cell to its right in the same row. (The $*$ indicates an entry that can be either 0 or 1.) The following result is in some sense the technical underpinning of our entire argument. We advise the reader to note during the proof that if the hulls in $\mathfrak{H}$ are assumed to be increasing, then the resulting gridding matrix $M$ will be $0/1$, not $0/\mathord{\pm} 1$. \begin{proposition} \label{prop-chop-hulls} Suppose that $\pi$ is a permutation and $\mathfrak{H}$ is a collection of $m$ monotone rectangular hulls which cover the entries of $\pi$ satisfying \begin{enumerate} \item[(H1)] the hulls in $\mathfrak{H}$ are pairwise nonintersecting, \item[(H2)] no single vertical or horizontal line slices through more than $k$ hulls in $\mathfrak{H}$, and \item[(H3)] no hull in $\mathfrak{H}$ is dependent both with a hull to its right and a hull beneath it. \end{enumerate} Then there exists a function $f(m,k)$ such that $\pi$ is $M$-griddable for a $0/\mathord{\pm} 1$ matrix $M$ of size at most $f(m,k)\times f(m,k)$ which does not contain $\fnmatrix{rr}{\pm1&\pm1\\\pm1&*}$ as a submatrix. \end{proposition} \begin{proof} We define a gridding of $\pi$ using the sides of the hulls in $\mathfrak{H}$. For a given side from a given hull, form a gridline by extending it to the edges of the permutation. By our hypothesis (H2), this line can slice at most $k$ other hulls. 
Whenever a hull is sliced in this way, a second gridline, perpendicular to the first, is induced so that all of the entries within the hull are contained in the bottom left and top right quadrants (for a hull containing increasing entries), or top left and bottom right quadrants (for a hull containing decreasing entries) defined by the two lines. This second gridline may itself slice through the interior of at most $k$ further hulls in $\mathfrak{H}$, and each such slice will induce another gridline, and so on. We call this process the \emph{propagation} of a line. See Figure~\ref{fig-propagation} for an illustration. For a given propagation sequence, the \emph{propagation tree} has gridlines for vertices, is rooted at the original gridline for the side, and has an edge between two gridlines if one induces the other in this propagation. \begin{figure} \caption{Propagating a side of a hull in $\mathfrak{H}$} \label{fig-propagation} \end{figure} Before moving on, we note that it is clear that the propagation tree is connected, but less obvious that it is in fact a tree. This is not strictly required in our argument (we will only need to bound the number of vertices it contains), but if the tree were to contain a cycle it would have to correspond to a cyclic sequence of hulls, and this is impossible without contradicting hypothesis (H3). In order to bound the size of a propagation tree, we first show that it has height at most $2m-1$. In the propagation tree of a side from some hull $H_0\in\mathfrak{H}$, take a sequence $H_1,H_2,\dots,H_p$ of hulls from $\mathfrak{H}$ corresponding to a longest path in the propagation tree, starting from the root. Thus, $H_1$ is sliced by the initial gridline formed from the side of $H_0$, and $H_i$ is sliced by the gridline induced from $H_{i-1}$ for $i=2,\dots,p$. We now define a word $w=w_1w_2\cdots w_p$ from this sequence, based on the position of hull $H_i$ relative to hull $H_{i-1}$.
For $i=1,2,\dots,p$, let \[ w_i = \begin{cases} \mathsf{u} & \text{if }H_i\text{ lies above }H_{i-1}\\ \mathsf{d} & \text{if }H_i\text{ lies below }H_{i-1}\\ \mathsf{l} & \text{if }H_i\text{ lies to the left of }H_{i-1}\\ \mathsf{r} & \text{if }H_i\text{ lies to the right of }H_{i-1}\\ \end{cases} \] Note that by the process of inducing perpendicular gridlines, successive letters in $w$ must alternate between $\{\mathsf{u},\mathsf{d}\}$ and $\{\mathsf{l},\mathsf{r}\}$. Moreover, since no hull interacts with hulls both below it and to its right, $w$ cannot contain a $\mathsf{u}\mathsf{r}$ or $\mathsf{l}\mathsf{d}$ factor. In other words, after the first instance of $\mathsf{u}$ or $\mathsf{l}$, there are no more instances of $\mathsf{r}$ or $\mathsf{d}$. This means that $w$ consists of a (possibly empty) alternating sequence $\mathsf{d}\mathsf{r}\mathsf{d}\mathsf{r}\cdots$ or $\mathsf{r}\mathsf{d}\mathsf{r}\mathsf{d}\cdots$ followed by a (possibly empty) alternating sequence $\mathsf{u}\mathsf{l}\mathsf{u}\mathsf{l}\cdots$ or $\mathsf{l}\mathsf{u}\mathsf{l}\mathsf{u}\cdots$. Any alternating sequence of the form $\mathsf{d}\mathsf{r}\mathsf{d}\mathsf{r}\cdots$ or $\mathsf{r}\mathsf{d}\mathsf{r}\mathsf{d}\cdots$ can have at most $m-1$ letters, as each hull in $\mathfrak{H}$ (other than $H_0$) can be sliced at most once in such a sequence. Similarly, any alternating sequence of the form $\mathsf{u}\mathsf{l}\mathsf{u}\mathsf{l}\cdots$ or $\mathsf{l}\mathsf{u}\mathsf{l}\mathsf{u}\cdots$ can have at most $m-1$ letters. Consequently, we have $p\leq 2m-2$, and thus every propagation sequence has length at most $2m-1$, as required. Since each gridline in the propagation tree has at most $k$ children, this means that the propagation tree for any given side has at most \[1 + k + k^2 + \cdots + k^{2m-1} < k^{2m}\] vertices, yielding a gridding of $\pi$ with fewer than $4mk^{2m}$ gridlines, and we may take this number to be $f(m,k)$.
The gridding matrix $M$ is then naturally formed from the cells of this gridding of $\pi$: each empty cell corresponds to a $0$ in $M$, each cell containing points in decreasing order corresponds to a $-1$ in $M$, and each cell containing points in increasing order corresponds to a $1$ in $M$. Finally, we verify that $M$ satisfies the conditions in the proposition. The process of propagating gridlines ensures that each rectangular hull in $\mathfrak{H}$ is divided into cells no two of which occupy the same row or column of the $M$-gridding. This means that there can be no $\fnmatrix{rr}{\pm1&\pm1\\\pm1&*}$ submatrix of $M$ with two cells originating from the same hull in $\mathfrak{H}$. Thus, the cells of a submatrix of the form $\fnmatrix{rr}{\pm1&\pm1\\\pm1&*}$ must be made up from points in distinct hulls in $\mathfrak{H}$, but this is impossible since no hull in $\mathfrak{H}$ is dependent with hulls both below it and to its right. \end{proof} We now apply this proposition to refine the gridding provided to us by Proposition~\ref{prop-simples-griddable}. \begin{proposition} \label{prop-gridding-corners} For every $\ell$, the simple permutations in $\operatorname{Av}(24153, 31524, \ell\cdots 21)$ are contained in $\operatorname{Grid}(M)$ for a finite $0/1$ matrix $M$ which does not contain $\fnmatrix{rr}{1&1\\1&*}$ as a submatrix. \end{proposition} \begin{proof} Let $\pi$ be an arbitrary simple permutation in $\operatorname{Av}(24153, 31524, \ell\cdots 21)$. As we observed in Footnote~\ref{fn-matrix-sum}, it suffices to show that there are constants $a$ and $b$ such that $\pi\in\operatorname{Grid}(M)$ for an $a\times b$ matrix $M$ satisfying the desired conditions. By Proposition~\ref{prop-simples-griddable}, $\pi$ is contained in $\operatorname{Grid}(N)$ for some $0/1$ matrix $N$, of size (say) $t \times u$.
We say that a \emph{bad} rectangle within any specified cell is an axis-parallel rectangle which contains two entries that are split both by an entry below and by an entry to the right. Since $\pi$ does not contain $24153$, no cell can contain two independent bad rectangles --- see Figure~\ref{fig-bad-rectangles}. Therefore an independent set of bad rectangles can have size at most $tu$, so Theorem~\ref{thm-slice} shows that the bad rectangles can all be sliced by $f(tu)$ lines. \begin{figure} \caption{Two independent bad rectangles. The filled points form a copy of $24153$, irrespective of the relative orders of the splitting points.} \label{fig-bad-rectangles} \end{figure} In any cell of the original gridding, the additional $f(tu)$ slices divide the points of that cell into at most $f(tu)+1$ maximal unsliced pieces. In the entire permutation, therefore, these slices (together with the original gridlines from $N$) divide the points into at most $tu(f(tu)+1)$ maximal unsliced pieces. Let $\mathfrak{H}$ denote the rectangular hulls of these maximal unsliced pieces. We now check that $\mathfrak{H}$ satisfies the hypotheses of Proposition~\ref{prop-chop-hulls}. Condition (H1) follows immediately by construction. Next, note that no two hulls of $\mathfrak{H}$ from the same cell of the $N$-gridding can be dependent, since every cell is monotone, so we may take $k=\max(t,u)$ to satisfy (H2). Finally, no hull can contain a bad rectangle (since all bad rectangles have been sliced), and so no hull can simultaneously be dependent with a hull from a cell below it, and a hull from a cell to its right, as required by (H3). Now, applying Proposition~\ref{prop-chop-hulls} (noting that all the rectangular hulls in $\mathfrak{H}$ contain increasing entries), we have a $0/1$ gridding matrix $M_\pi$ for $\pi$, of dimensions at most $v\times w$ for some $v$ and $w$ depending only on $\ell$, which does not contain $\fnmatrix{rr}{1&1\\1&*}$ as a submatrix.
We are now done by our comments at the beginning of the proof. \end{proof} Having proved Proposition~\ref{prop-gridding-corners}, we merely need to put the pieces together to finish the proof of our main theorem. This proposition shows that there is a finite $0/1$ matrix $M$ with no submatrix of the form $$ \fnmatrix{rr}{1&1\\1&*} $$ such that the simple permutations of $\operatorname{Av}(24153, 31524, \ell\cdots 21)$ are contained in $\operatorname{Grid}(M)$. It follows that $$ \operatorname{Av}(24153, 31524, \ell\cdots 21)\subseteq\langle\operatorname{Grid}(M)\rangle. $$ Moreover, $M$ is a forest because if it were to contain a cycle, it would have to contain a submatrix of the form $\fnmatrix{rr}{1&1\\1&*}$. Therefore the permutation class $\operatorname{Av}(24153, 31524, \ell\cdots 21)$ is wqo by Theorem~\ref{thm-geom-simples-pwo-basis}. \begin{theorem} \label{thm-P5} For every $\ell$, the permutation class $\operatorname{Av}(24153, 31524, \ell\cdots 21)$ is well-quasi-ordered. Therefore the class of permutation graphs omitting $P_5$ and $K_\ell$ is also well-quasi-ordered. \end{theorem} \section{Permutation Graphs Omitting \texorpdfstring{$P_6$, $P_7$ or $P_8$}{P\_6, P\_7 or P\_8}}\label{sec-p7} In this section, we establish the following: \begin{proposition}\label{prop-three-antichains} The following three classes of graphs are not wqo: \begin{enumerate} \item[(1)] the $P_6$, $K_6$-free permutation graphs, \item[(2)] the $P_7$, $K_5$-free permutation graphs, and \item[(3)] the $P_8$, $K_4$-free permutation graphs. \end{enumerate} \end{proposition} In order to prove these three classes are not wqo, it suffices to show that each class contains an infinite antichain. This is done by showing that the related permutation classes contain ``generalised'' grid classes, for which infinite antichains are already known. 
In general, we cannot immediately guarantee that the permutation antichain translates to a graph antichain, but we will show that this is in fact the case for the three we construct here. We must first introduce \emph{generalised} grid classes. Suppose that $\mathcal{M}$ is a $t\times u$ matrix of permutation classes (we use calligraphic font for matrices containing permutation classes). An {\it $\mathcal{M}$-gridding\/} of the permutation $\pi$ of length $n$ in this context is a pair of sequences $1=c_1\le\cdots\le c_{t+1}=n+1$ (the column divisions) and $1=r_1\le\cdots\le r_{u+1}=n+1$ (the row divisions) such that for all $1\le k\le t$ and $1\le\ell\le u$, the entries of $\pi$ with indices from $c_k$ up to but not including $c_{k+1}$ and values from $r_{\ell}$ up to but not including $r_{\ell+1}$ are either empty or order isomorphic to an element of $\mathcal{M}_{k,\ell}$. The {\it grid class of $\mathcal{M}$\/}, written $\operatorname{Grid}(\mathcal{M})$, consists of all permutations which possess an $\mathcal{M}$-gridding. The notion of monotone griddability can be analogously defined, but we do not require this. Here, our generalised grid classes are formed from gridding matrices which contain the monotone class $\operatorname{Av}(21)$, and a non-monotone permutation class denoted $\bigoplus 21$. The latter is formed by taking all finite subpermutations of the infinite permutation $21436587\cdots$; in terms of minimal forbidden elements, we have $\bigoplus 21 = \operatorname{Av}(321,231,312)$. Our proof of Proposition~\ref{prop-three-antichains} requires some further theory to ensure we can convert the permutation antichains we construct into graph antichains. The primary issue is that a permutation graph $G$ can have several different corresponding permutations. With this in mind, let \[\Pi(G) = \{\text{permutations } \pi : G_\pi \cong G\}\] denote the set of permutations each of which corresponds to the permutation graph $G$.
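As an example, $P_4$ is the permutation graph of $2413$: recalling that values $i<j$ are adjacent in $G_\pi$ precisely when $j$ appears before $i$ in $\pi$, the inversions of $2413$ are the pairs $\{1,2\}$, $\{1,4\}$ and $\{3,4\}$, which form the path $2$--$1$--$4$--$3$. Since $2413^{\text{rc}}=2413$ and $2413^{-1}=3142$, the set $\Pi(P_4)$ contains both $2413$ and $3142$; as $P_4$ is prime, Proposition~\ref{prop-gallai} below implies that $\Pi(P_4)=\{2413,3142\}$.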
Denote by $\pi^{-1}$ the (group-theoretic) \emph{inverse} of $\pi$, by $\pi^\text{rc}$ the \emph{reverse-complement} (formed by reversing the order of the entries of $\pi$, then replacing each entry $i$ by $|\pi|-i+1$), and by $(\pi^{-1})^{\text{rc}}$ the \emph{inverse-reverse-complement}, formed by composing the two previous operations (in either order, as the two operations commute). It is then easy to see that if $\pi\in\Pi(G)$, then $\Pi(G)$ must also contain all of $\pi^{-1}$, $\pi^\text{rc}$ and $(\pi^{-1})^{\text{rc}}$. However, it is possible that $\Pi(G)$ may contain other permutations, and this depends on the graph-theoretic analogue of the substitution decomposition, which is called the \emph{modular decomposition}. We also need to introduce the graph analogues of intervals and simplicity, which have different names in that context. A \emph{module} $M$ in a graph $G$ is a set of vertices such that for every $u,v\in M$ and $w\in V(G)\setminus M$, $u$ is adjacent to $w$ if and only if $v$ is adjacent to $w$. A graph $G$ is said to be \emph{prime} if it has no nontrivial modules, that is, any module $M$ of $G$ satisfies $|M| = 0, 1,$ or $|V(G)|$. The following result, arising as a consequence of Gallai's work on transitive orientations, gives us some control over $\Pi(G)$: \begin{proposition}[Gallai~\cite{gallai:transitiv-orien:}]\label{prop-gallai} If $G$ is a prime permutation graph, then, up to the symmetries inverse, reverse-complement, and inverse-reverse-complement, $\Pi(G)$ contains a unique permutation. \end{proposition} We now extend Proposition~\ref{prop-gallai} to suit our purposes. Consider a simple permutation $\sigma$ of length $m\geq 4$, and form the permutation $\pi$ by inflating two\footnote{It is, in fact, possible to inflate more than two entries and establish the same result, but we do not require this here.} of the entries of $\sigma$ each by the permutation $21$. Note that $G_{21} = K_2$, and $\Pi(K_2)=\{21\}$. 
In the correspondence between graphs and permutations, modules map to intervals and vice versa, so prime graphs correspond to simple permutations, and there is an analogous result to Proposition~\ref{simple-decomp-1} for (permutation) graphs. Thus, for any permutation $\rho\in\Pi(G_\pi)$, it follows that $\rho$ must be constructed by inflating two entries of some simple permutation $\tau$ by the permutation $21$. Moreover, $\tau$ must be one of $\sigma$, $\sigma^{-1}$, $\sigma^\text{rc}$ or $(\sigma^{-1})^{\text{rc}}$, and the entries of $\tau$ which are inflated are determined by which entries of $\sigma$ were inflated, and which of the four symmetries of $\sigma$ is equal to $\tau$. In other words, we still have $\Pi(G_\pi)=\{\pi,\pi^{-1},\pi^\text{rc},(\pi^{-1})^{\text{rc}}\}$. We require one further easy observation: \begin{lemma}\label{lem-graph-perm-order} If $G$ and $H$ are permutation graphs such that $H\leq G$, then for any $\pi\in\Pi(G)$ there exists $\sigma\in\Pi(H)$ such that $\sigma\leq\pi$. \end{lemma} \begin{proof} Given any $\pi\in\Pi(G)$, let $\sigma$ denote the subpermutation of $\pi$ formed from the entries of $\pi$ which correspond to the vertices of an embedding of $H$ as an induced subgraph of $G$. Clearly $G_\sigma\cong H$, so $\sigma\in\Pi(H)$ and $\sigma\leq \pi$, as required. \end{proof} We are now in a position to prove Proposition~\ref{prop-three-antichains}. Since the techniques are broadly similar for all three cases, we will give the details for case (2), and only outline the key steps for the other two cases. \begin{proof}[Proof of Proposition~\ref{prop-three-antichains}\,(2)] First, the graph $P_7$ corresponds to two permutations, namely $3152746$ and $2416375$. Thus the class of $P_7$, $K_5$-free permutation graphs corresponds to the permutation class $\operatorname{Av}(3152746,2416375,54321)$.
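To verify the first of these claims, note that the inversions of $3152746$ are the pairs of values $\{1,3\}$, $\{2,3\}$, $\{2,5\}$, $\{4,5\}$, $\{4,7\}$ and $\{6,7\}$, so $G_{3152746}$ is the path $1$--$3$--$2$--$5$--$4$--$7$--$6$, which is isomorphic to $P_7$. Moreover $3152746^{-1}=3152746^{\text{rc}}=2416375$, so, since $P_7$ is prime, Proposition~\ref{prop-gallai} shows that these are indeed the only two such permutations.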
This permutation class contains the grid class \[\operatorname{Grid}\fnmatrix{rr}{ \bigoplus 21 & \bigoplus 21},\] because this grid class avoids the permutations $241635$ (contained in both $3152746$ and $2416375$), and $54321$. We now follow the recipe given by Brignall~\cite{brignall:grid-classes-an:} to construct an infinite antichain which lies in $\operatorname{Grid}\fnmatrix{rr}{ \bigoplus 21 & \bigoplus 21}$. Call the resulting antichain $A$, the first three elements and general term of which are illustrated in Figure~\ref{fig-parallel-antichain}. This antichain is related to the ``parallel'' antichain in Murphy's thesis~\cite{murphy:restricted-perm:}. \begin{figure}\label{fig-parallel-antichain} \end{figure} Now set $G_A = \{ G_\pi : \pi \in A\}$, and note that $G_A$ is contained in the class of permutation graphs omitting $P_7$ and $K_5$ by Lemma~\ref{lem-graph-perm-order}. If $G_A$ is an antichain of graphs, then we are done, so suppose for a contradiction that there exist $G,H\in G_A$ with $H\leq G$. Take the permutation $\pi\in A$ for which $G_\pi= G$. Applying Lemma~\ref{lem-graph-perm-order}, there exists $\sigma \in \Pi(H)$ such that $\sigma\leq \pi$. We cannot have $\sigma\in A$ since $A$ is an antichain, so $\sigma$ must be some other permutation with the same graph. Choose $\tau\in A\cap\Pi(H)$. It is easy to check that the only non-trivial intervals in any permutation in $A$ are the two pairs of points in Figure~\ref{fig-parallel-antichain} which are circled. Thus, $\tau$ is formed by inflating two entries of a simple permutation each by a copy of $21$. By the comments after Proposition~\ref{prop-gallai}, we conclude that $\sigma$ is equal to one of $\tau^{-1}$, $\tau^\text{rc}$ or $(\tau^{-1})^{\text{rc}}$. Note that every permutation in $A$ is invariant under reverse-complement, so $\tau=\tau^\text{rc}$ and thus $\sigma\neq\tau^\text{rc}$.
If $\sigma=\tau^{-1}$, then, since by inspection every element of $A$ contains a copy of $24531$, $\sigma$ contains $(24531)^{-1}=51423$. Since $\sigma\leq \pi$, it follows that $\pi$ must also contain a copy of $51423$. However, this is impossible, because $51423$ is not in $\operatorname{Grid}\fnmatrix{rr}{ \bigoplus 21 & \bigoplus 21}$. Thus, we must have $\sigma= (\tau^{-1})^{\text{rc}}$, in which case $\sigma$ must contain a copy of $\left((24531)^\text{rc}\right)^{-1}=34251$. However, this permutation is also not in $\operatorname{Grid}\fnmatrix{rr}{ \bigoplus 21 & \bigoplus 21}$, so cannot be contained in $\pi$. Thus $\sigma\not\leq\pi$, and from this final contradiction we conclude that $H\not\leq G$, so $G_A$ is an infinite antichain of permutation graphs, as required. \end{proof} \begin{figure}\label{fig-hook-antichain} \end{figure} \begin{proof}[Sketch proof of Proposition~\ref{prop-three-antichains}\,(1) and (3)] For (1), the class of $P_6$, $K_6$-free permutation graphs corresponds to $\operatorname{Av}(241635,315264,654321)$. This class contains \[\operatorname{Grid}\fnmatrix{rr}{\bigoplus 21&\operatorname{Av}(21)\\&\bigoplus 21}\] because this grid class does not contain $23154$ (contained in $241635$), $31254$ (contained in $315264$) and $654321$. By~\cite{brignall:grid-classes-an:}, this grid class contains an infinite antichain whose first three elements are illustrated in Figure~\ref{fig-hook-antichain}. The permutations in this antichain have exactly two proper intervals, indicated by the circled pairs of points in each case, and this means that for any permutation $\pi$ in this antichain, $\Pi(G_\pi) = \{\pi, \pi^{-1}, \pi^\text{rc}, (\pi^{-1})^{\text{rc}}\}$. Following the proof of Proposition~\ref{prop-three-antichains}\,(2), it suffices to show that for any permutations $\sigma$ and $\pi$ in the antichain, none of $\sigma$, $\sigma^{-1}$, $\sigma^\text{rc}$, or $(\sigma^{-1})^{\text{rc}}$ is contained in $\pi$.
This is done by identifying permutations which are not contained in any antichain element $\pi$, but which are contained in one of the symmetries $\sigma^{-1}$, $\sigma^\text{rc}$, or $(\sigma^{-1})^{\text{rc}}$. We omit the details. For (3), the $P_8$, $K_4$-free permutation graphs correspond to the permutation class $\operatorname{Av}(4321,24163857,31527486)$, and this class contains the (monotone) grid class \[\operatorname{Grid}\fnmatrix{rr}{\operatorname{Av}(21)&\operatorname{Av}(21)\\\operatorname{Av}(21)&\operatorname{Av}(21)}.\] In this case, we appeal to Murphy and Vatter~\cite{murphy:profile-classes:} for an infinite antichain, and the same technique used to prove cases~(1) and~(2) can be applied. \end{proof} \section{Enumeration}\label{sec-enum} One significant difference between studies of the induced subgraph order and the subpermutation order lies in the research interests of the two communities of investigators. In the latter area, a great proportion of the work is enumerative in nature. Therefore, having established the structure of permutation graphs avoiding $P_5$ and cliques, we briefly study the enumeration of the corresponding permutation classes in this section. Our goal is to show that these classes are \emph{strongly rational}, i.e., that they and every one of their subclasses have rational generating functions. Here we refer to $$ \sum_{\pi\in\mathcal{C}} x^{|\pi|} $$ as the generating function of the class $\mathcal{C}$, where $|\pi|$ denotes the length of the permutation $\pi$. Note that by a simple counting argument (made explicit in Albert, Atkinson, and Vatter~\cite{albert:subclasses-of-t:}), strongly rational permutation classes must be wqo. Proposition~\ref{simple-decomp-1} allows us to associate with any permutation $\pi$ a unique \emph{substitution decomposition tree}.
This tree is recursively defined by decomposing each node of the tree as \begin{itemize} \item $\sigma[\alpha_1,\dots,\alpha_m]$ where $\sigma$ is a nonmonotone simple permutation, \item $\alpha_1\oplus\cdots\oplus\alpha_m$ where each $\alpha_i$ is sum indecomposable, or \item $\alpha_1\ominus\cdots\ominus\alpha_m$ where each $\alpha_i$ is skew indecomposable. \end{itemize} See Figure~\ref{fig-tree-375896214} for an example. In particular, note that by our indecomposability assumptions, sum nodes (resp., skew sum nodes) cannot occur twice in a row when reading up a branch of the tree. The {\it substitution depth\/} of $\pi$ is then the height of its substitution decomposition tree; for example, the substitution depth of the permutation from Figure~\ref{fig-tree-375896214} is $3$, while the substitution depth of any simple (or monotone) permutation is $1$. As we show in our next result, substitution depth is bounded for the classes we are interested in. This result is a special case of Vatter~\cite[Proposition 4.2]{vatter:small-permutati:}, but we include a short proof for completeness. \begin{figure} \caption{The substitution decomposition tree of $375896214$ (of height $3$), and a pruned version of the same height which alternates between the labels $\oplus$ and $\ominus$.} \label{fig-tree-375896214} \end{figure} \begin{proposition} \label{prop-bdd-subst-depth} The substitution depth of every permutation in $\operatorname{Av}(\ell\dots 21)$ is at most $2\ell-3$ for $\ell\ge 2$. \end{proposition} \begin{proof} We prove the result by induction on $\ell$. Only increasing permutations avoid $21$, and they have substitution depth $1$, so the result holds for $\ell=2$. Now suppose that the result holds for some $\ell\ge 2$ but that, seeking a contradiction, there is a permutation $\pi\in\operatorname{Av}((\ell+1)\cdots 21)$ of substitution depth $2\ell$ (we may always assume that the depth is precisely this value, because of downward closure).
If $\pi$ is a sum decomposable permutation, express it as $\pi=\pi_1\oplus\cdots\oplus\pi_m$ and note that at least one $\pi_i$ must have substitution depth $2\ell-1$. Thus we may now assume that there is a sum indecomposable permutation $\pi\in\operatorname{Av}((\ell+1)\cdots 21)$ of substitution depth $2\ell-1$. Write $\pi=\sigma[\alpha_1,\dots,\alpha_m]$ where $\sigma$ is either a simple permutation or decreasing of length at least $2$ (to cover the case where $\pi$ is skew decomposable). At least one of the $\alpha_i$ must have substitution depth $2\ell-2$, and hence contains $\ell\cdots 21$ by the induction hypothesis. Moreover, there is at least one entry of $\sigma$ which forms an inversion with $\sigma(i)$ (otherwise $\sigma$ would be sum decomposable), and thus $\pi$ itself must contain $(\ell+1)\cdots 21$, as desired. \end{proof} It was shown in Albert, Atkinson, Bouvel, Ru\v{s}kuc and Vatter~\cite[Theorem 3.2]{albert:geometric-grid-:} that if $M$ is a forest, then $\operatorname{Grid}(M)$ is a \emph{geometric grid class}. In the same paper, two strong properties of geometric grid classes were established: they are defined by finitely many minimal forbidden permutations and they are strongly rational. We refer the reader to that paper for a comprehensive introduction to geometric grid classes. Our enumerative result follows from the following theorem, proved in a subsequent paper. \begin{theorem}[Albert, Ru\v{s}kuc, and Vatter~{\cite[Theorem 7.6]{albert:inflations-of-g:}}] \label{thm-geom-inflate-enum} The class $\mathcal{C}[\mathcal{U}]$ is strongly rational for all geometrically griddable classes $\mathcal{C}$ and strongly rational classes $\mathcal{U}$. \end{theorem} Applying induction on the height of substitution decomposition trees (which is bounded in all of our classes by Proposition~\ref{prop-bdd-subst-depth}), we immediately obtain the following result. \begin{theorem} For every $\ell$, the class $\operatorname{Av}(24153, 31524, \ell\cdots 21)$ is strongly rational.
\end{theorem} \section{Concluding Remarks} As shown in Figure~\ref{fig-wqo-results}, there are only three cases remaining: permutation graphs avoiding $\{P_6,K_5\}$, $\{P_6,K_4\}$ and $\{P_7,K_4\}$. Due to the absence of an ``obvious'' infinite antichain in these cases, we conjecture that they are all wqo. However, all three classes contain (for example) the generalised grid class $\operatorname{Grid}\fnmatrix{rr}{\bigoplus 21&\operatorname{Av}(21)}$, so these classes all contain simple permutations which are not monotone griddable. Thus our approach would have to be significantly modified to attack this conjecture. A natural first step might be to show that the corresponding permutation classes are $\mathcal{D}$-griddable (in the sense of Section~\ref{sec-p7}) for a ``nice'' class $\mathcal{D}$. As any proof along these lines would need to use some sort of slicing argument as well (such as we used to prove Proposition~\ref{prop-gridding-corners}), $\mathcal{D}$ would have to be nice enough to allow for such arguments. Vatter~\cite[Lemma 5.3]{vatter:small-permutati:} has shown that such slicing arguments can be made to work when $\mathcal{D}$ has only finitely many simple permutations and finite substitution depth (in the sense of Section~\ref{sec-enum}). As we already have finite substitution depth by Proposition~\ref{prop-bdd-subst-depth}, it would suffice to show that we could take $\mathcal{D}$ to contain only finitely many simple permutations. Again, though, this would only be an encouraging sign and not a proof, because one would then have to develop more sophisticated tools to prove wqo for such classes. Specifically, it is unlikely that the existing machinery of Brignall~\cite{brignall:grid-classes-an:} would suffice. \end{document}
\begin{document} \begin{CJK}{UTF8}{gbsn} \begin{flushright} \end{flushright} \begin{center} {\large {\bf A new class of exactly-solvable potentials by means of the hypergeometric equation}} Wei Yang {\it College of Science, Guilin University of Technology, Guilin, Guangxi 541004, China} \underline{ABSTRACT} \end{center} We obtain a new class of exactly-solvable potentials by means of the hypergeometric equation for the Schr\"{o}dinger equation, which differ from the exactly-solvable potentials introduced by Bose and Natanzon. Using this new class of solvable potentials, we can obtain the corresponding complex PT-invariant potentials. The method can also be applied to other Fuchsian equations. {\footnotesize Emails: [email protected]} \thispagestyle{empty} \pagebreak \section{Introduction} \label{Sec.introduce} Exact solutions to the Schr\"{o}dinger equation play crucial roles in quantum physics. It is well known that several potentials can be exactly solved; as examples one can cite the harmonic oscillator, Coulomb, Morse \cite{Morse}, P\"{o}schl-Teller \cite{Teller} and Eckart \cite{Eckart:1930zza} potentials. The reason why these potentials are exactly solvable is that the Schr\"{o}dinger equation in these potentials can be transformed into either the hypergeometric or the confluent hypergeometric equation. Here we focus on the hypergeometric equation to construct a new class of solvable potentials within the framework of the non-relativistic Schr\"{o}dinger equation. Similar works have been carried out in \cite{Bose,Natanzon,Ishkhanyan,Dong}. Using the new class of solvable potentials that we have constructed, we can easily obtain complex PT-invariant potentials; such potentials are a topic of much recent study \cite{Ahmed:2001gz,Kumari,Hasan:2017fje,Yadav:2015fia}. This work is organized as follows. In Section \ref{Sec.formalism}, we present the basic method of our argument.
In Section \ref{Sec.Hypergeometric}, as an illustration, we take the hypergeometric equation to construct the soluble potentials. In Section \ref{Sec.conclusion} we give some conclusions. \section{Basic methods} \label{Sec.formalism} As is well known, the stationary Schr\"{o}dinger equation for a particle of energy $E$ in a potential $V(r)$ has the form \begin{align} \psi''(r)+(E-V(r))\psi(r)=0\,. \label{eq:0} \end{align} Choosing a transformation of the variable, $z=z(r)$, the equation becomes \begin{align} \psi_{zz}+\fft{\rho_z}{\rho}\psi_z+\fft{E-V}{\rho^2}\psi=0\,, \end{align} where $\rho=\fft{dz}{dr}$. Applying the further transformation $\psi=f(z)u(z)$, we obtain the differential equation \begin{align} u_{zz}+(2\fft{f_z}{f}+\fft{\rho_z}{\rho})u_z+(\fft{f_{zz}}{f}+\fft{f_z}{f}\fft{\rho_z}{\rho}+\fft{E-V}{\rho^2})u=0\,. \end{align} Comparing with our target equation \begin{align} u_{zz}+g(z)u_z+h(z)u=0\,, \label{eq:1} \end{align} we have \begin{align} &g(z)=2\fft{f_z}{f}+\fft{\rho_z}{\rho}\,,\label{eq:2}\\ &h(z)=\fft{f_{zz}}{f}+\fft{f_z}{f}\fft{\rho_z}{\rho}+\fft{E-V}{\rho^2}\,.\label{eq:3} \end{align} Integrating equation (\ref{eq:2}) allows us to obtain \begin{align} &f(z)=\sqrt{c_1}\rho^{-1/2}e^{\int g(z)dz/2}\,. \end{align} Substitution of this expression into (\ref{eq:3}) yields \begin{align} E-V=\rho^{2}(h-\fft{g_z}{2}-\fft{g^2}{4})+\fft{1}{2}\{z,r\}\,, \label{eq:8} \end{align} where the Schwarzian derivative is given by \begin{align} \{z,r\}=\fft{z'''(r)}{z'(r)}-\fft{3}{2}(\fft{z''(r)}{z'(r)})^2=\rho\rho_{zz}-\fft12\rho_z^2\,. \end{align} Therefore, on the basis of the solvable differential equation (\ref{eq:1}), we are able to construct solvable potentials for the original Schr\"{o}dinger equation.
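The two expressions for the Schwarzian derivative are easy to check numerically. The following sketch (our own illustration, not part of the paper) evaluates $\{z,r\}$ by finite differences; for $z=e^r$ one has $\rho=z$, so $\rho\rho_{zz}-\rho_z^2/2=-1/2$, while for $z=\tanh r$ the Schwarzian is identically $-2$:

```python
import math

def schwarzian(z, r, h=1e-3):
    """{z, r} = z'''/z' - (3/2)(z''/z')^2, via central finite differences."""
    d1 = (z(r + h) - z(r - h)) / (2 * h)
    d2 = (z(r + h) - 2 * z(r) + z(r - h)) / h**2
    d3 = (z(r + 2 * h) - 2 * z(r + h) + 2 * z(r - h) - z(r - 2 * h)) / (2 * h**3)
    return d3 / d1 - 1.5 * (d2 / d1) ** 2

print(schwarzian(math.exp, 0.3))   # close to -1/2, matching rho*rho_zz - rho_z^2/2 for rho = z
print(schwarzian(math.tanh, 0.4))  # close to -2, independently of r
```

The step size `h` is an assumed discretization parameter; the agreement to several decimal places is what confirms the identity on these test functions.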
The problem then becomes how to find a new variable $z(r)$ and a function $f(z)$ that satisfy equations (\ref{eq:2}) and (\ref{eq:3}). \section{Hypergeometric equation} \label{Sec.Hypergeometric} In this section, we apply the method to the hypergeometric equation \begin{align} z(1-z)u''(z)+[\gamma-(\alpha+\beta+1)z]u'(z)-\alpha\beta u(z)=0\,. \end{align} Obviously, in this case $g(z)=\fft{\gamma-(\alpha+\beta+1)z}{z(1-z)}$ and $h(z)=-\fft{\alpha\beta}{z(1-z)}$. In order to determine the form of $f$, we follow Ishkhanyan et al.~\cite{Ishkhanyan}, who discussed the reduction of the Schr\"{o}dinger equation to a rather large class of target equations by a transformation in which $\rho(z)$ is a polynomial, all of whose roots necessarily coincide with the singular points of the target equation to which the Schr\"{o}dinger equation is reduced. The coordinate transformation then requires $f$ to have the form $f=z^p(1-z)^q$, so equation (\ref{eq:2}) has the solution \begin{align} \rho=c_1z^{\gamma-2p}(1-z)^{\alpha+\beta+1-\gamma-2q}\,. \label{eq:4} \end{align} Substituting equation (\ref{eq:4}) and $h(z)=-\fft{\alpha\beta}{z(1-z)}$ into (\ref{eq:3}), we can easily see that the Schr\"{o}dinger equation has solvable potentials only if $a=\gamma-2p$ and $b=\alpha+\beta+1-\gamma-2q$ take some of the discrete values $a=0,1/2,1$ and $b=0,1/2,1$; in these cases the functional forms of $z(r)$ and $V(r)$ are simple. \textbf{Case 1:} consider $a=0,b=1$, so $\rho=\fft{dz}{dr}=c_1(1-z)$ and therefore \begin{align} &z=1+c_2e^{-c_1r}\,. \end{align} From equation (\ref{eq:8}), we get \begin{align} &E-V=C-A\fft{1}{(c_2+e^{c_1r})^2}-B\fft{1}{c_2+e^{c_1r}}\,.
\end{align} Here the coefficients are given by \begin{align} &A=\fft{c_1^2c_2^2}{4}\gamma(\gamma-2)\nonumber\\ &B=\fft{c_1^2c_2}{2}(\alpha\gamma+\beta\gamma+\gamma-\gamma^2-2\alpha\beta)\nonumber\\ &C=-\fft{c_1^2}{4}(\alpha+\beta-\gamma)^2 \,. \end{align} \textbf{Case 2:} consider $a=1/2,b=1/2$, so $\rho=c_1z^{1/2}(1-z)^{1/2}$ and therefore \begin{align} &z=1-\sin^2(\fft{1}{2}(c_1r+c_2))\,. \end{align} Equation (\ref{eq:8}) then tells us that \begin{align} &E-V=C-A\csc^2(c_1r+c_2)-B\cot(c_1r+c_2)\csc(c_1r+c_2)\,. \end{align} Here the coefficients are given by \begin{align} &A=\fft{c_1^2}{4}(2(\alpha+\beta)^2+1)-c_1^2(\alpha+\beta-\gamma+1)\gamma\nonumber\\ &B=\fft{c_1^2}{2}(\alpha+\beta-1)(\alpha+\beta+1-2\gamma)\nonumber\\ &C=\fft{c_1^2}{4}(\alpha-\beta)^2 \,. \end{align} \textbf{Case 3:} consider $a=1/2,b=1$, so $\rho=c_1z^{1/2}(1-z)$ and therefore \begin{align} &z=\tanh^2(\fft{c_1r-c_2}{2})\,. \end{align} So we have \begin{align} &E-V=C-A\sech^2[(c_1r-c_2)/2]-B\csch^2[(c_1r-c_2)/2]\,. \end{align} Here the coefficients are given by \begin{align} &A=-\fft{c_1^2}{16}(4(\alpha-\beta)^2-1)\nonumber\\ &B=\fft{c_1^2}{16}(4\gamma^2-8\gamma+3)\nonumber\\ &C=-\fft{c_1^2}{4}(\alpha+\beta-\gamma)^2 \,. \end{align} \textbf{Case 4:} consider $a=1,b=0$, so $\rho=c_1z$ and therefore \begin{align} &z=c_2e^{c_1r}\,. \end{align} We have \begin{align} &E-V=C-A\fft{1}{(c_2e^{c_1r}-1)^2}-B\fft{1}{c_2e^{c_1r}-1}\,. \end{align} Here the coefficients are given by \begin{align} &A=-\fft{c_1^2}{4}((\alpha+\beta-\gamma)^2+1)\nonumber\\ &B=\fft{c_1^2}{2}(\alpha^2+\beta^2+\gamma(1-\alpha-\beta)-1)\nonumber\\ &C=-\fft{c_1^2}{4}(\alpha-\beta)^2 \,.
\end{align} \textbf{Case 5:} consider $a=1,b=1/2$, so $\rho=c_1z(1-z)^{1/2}$ and therefore \begin{align} &z=\sech^2(\fft{c_1r+c_2}{2})\,. \end{align} We have \begin{align} &E-V=C-A\csch^2(c_1r+c_2)-B\coth(c_1r+c_2)\csch(c_1r+c_2)\,. \end{align} Here the coefficients are given by \begin{align} &A=\fft{c_1^2}{4}[4(\alpha^2+\beta^2)-4(\alpha+\beta)+2\gamma-1]\nonumber\\ &B=\fft{c_1^2}{2}(2\alpha-\gamma)(2\beta-\gamma)\nonumber\\ &C=-\fft{c_1^2}{4}(\gamma-1)^2 \,. \end{align} \textbf{Case 6:} consider $a=1,b=1$, so $\rho=c_1z(1-z)$ and therefore \begin{align} &z=1-\fft{c_2}{e^{c_1r}+c_2}\,. \end{align} We have \begin{align} &E-V=C-A\fft{e^{2c_1r}}{4(e^{c_1r}+c_2)^2}-B\fft{e^{c_1r}}{4(e^{c_1r}+c_2)}\,. \end{align} Here the coefficients are given by \begin{align} &A=\fft{c_1^2}{4}[(\alpha-\beta)^2-1]\nonumber\\ &B=\fft{c_1^2}{2}(2\alpha\beta+\gamma-\alpha\gamma-\beta\gamma)\nonumber\\ &C=-\fft{c_1^2}{4}(\gamma-1)^2 \,. \end{align} So far we have discussed six cases, which represent six possible types of potential. The other three possible cases, $a=0,b=0$; $a=0,b=1/2$; and $a=1/2,b=0$, do not yield solvable potentials due to the lack of an energy term. Further, by applying the relationship between the hypergeometric function and the Riemann $P$-function together with its transformation formulas, we can obtain other solutions of different forms. It is worth noting that in Case 2, if we set $c_1=i,c_2=\pi/2$, then \begin{align} &z=(1-i\sinh r)/2\,, \end{align} and \begin{align} &E-V=C-A\sech^2 r+iB\sech r\tanh r\nonumber\\ &A=-\fft{1}{4}(2(\alpha+\beta)^2+1)+(\alpha+\beta-\gamma+1)\gamma\nonumber\\ &B=-\fft{1}{2}(\alpha+\beta-1)(\alpha+\beta+1-2\gamma)\nonumber\\ &C=-\fft{1}{4}(\alpha-\beta)^2 \,.
\end{align} This is nothing but the complex PT-invariant potential discussed by Ahmed et al.~\cite{Ahmed:2001gz,Kumari}. Similarly, we can construct other solvable PT-invariant potentials: setting $c_1$ to be a pure imaginary number and $c_2$ a real number in all six cases, we find that the resulting potentials are all complex PT-invariant potentials. If we set $g(z)=0$ in equation (\ref{eq:2}), then $f(z)=\sqrt{c_1}\rho^{-1/2}$; assuming again that $f(z)$ has the form $f=z^p(1-z)^q$, we arrive at the Bose solvable potentials~\cite{Bose}. These potentials have been studied extensively~\cite{Ishkhanyan,Natanzon,Milson:1997cp,Morales}. In a similar way, we can also construct the corresponding complex PT-invariant potentials. \section{Conclusion} \label{Sec.conclusion} In this work, we have constructed a new class of solvable potentials by means of the hypergeometric equation, rewriting the Schr\"{o}dinger equation as a hypergeometric equation through a similarity transformation. We consider $g(z)\neq 0$ and suppose $f=z^p(1-z)^q$, with $p,q$ depending on $\alpha,\beta,\gamma$; this provides a new perspective for constructing solvable potentials for the Schr\"{o}dinger equation. Previous methods either set $g(z)=0$ or used an invariance identity of the target equation; both require $f(z)=\sqrt{c_1}\rho^{-1/2}$, causing $f(z)$ to be a fixed function, whereas our method leaves this freedom open. Applying our method to other Fuchsian equations, such as the Heun equation, one can likewise construct the corresponding solvable potentials~\cite{Filipuk}. \begin{thebibliography}{99} \bibitem{Morse} P. M. Morse, ``Diatomic Molecules According to the Wave Mechanics. II. Vibrational Levels,'' Phys. Rev. \textbf{34}, 57-64 (1929). \bibitem{Teller} G. P\"{o}schl and E. Teller, Z. Physik \textbf{83}, 143-151 (1933). \bibitem{Eckart:1930zza} C.~Eckart, ``The Penetration of a Potential Barrier by Electrons,'' Phys. Rev.
\textbf{35}, 1303-1309 (1930). \bibitem{Bose} A. K. Bose, ``Solvable Potentials,'' Phys. Lett. \textbf{7}, 245 (1963). \bibitem{Natanzon} G. A. Natanzon, ``General properties of potentials for which the Schr\"{o}dinger equation can be solved by means of hypergeometric functions,'' Theoretical and Mathematical Physics \textbf{38}, 146-153 (1979). \bibitem{Ishkhanyan} A. M. Ishkhanyan and V. P. Krainov, ``Discretization of Natanzon potentials,'' The European Physical Journal Plus \textbf{131}, 1-11 (2016). \bibitem{Dong} S. Dong et al., ``Constructions of the Soluble Potentials for the Nonrelativistic Quantum System by Means of the Heun Functions,'' Advances in High Energy Physics (2018). \bibitem{Ahmed:2001gz} Z.~Ahmed, ``Real and complex discrete eigenvalues in an exactly solvable one-dimensional complex PT invariant potential,'' Phys. Lett. A \textbf{282}, 343-348 (2001). \bibitem{Kumari} N. Kumari et al., ``Scattering amplitudes for the rationally extended PT symmetric complex potentials,'' Annals of Physics \textbf{373}, 163-177 (2016). \bibitem{Hasan:2017fje} M.~Hasan and B.~P.~Mandal, ``New scattering features in non-Hermitian space fractional quantum mechanics,'' Annals Phys. \textbf{396}, 371-385 (2018). \bibitem{Yadav:2015fia} R.~K.~Yadav, A.~Khare, B.~Bagchi, N.~Kumari and B.~P.~Mandal, ``Parametric symmetries in exactly solvable real and $PT$ symmetric complex potentials,'' J. Math. Phys. \textbf{57}, no.6, 062106 (2016). \bibitem{Milson:1997cp} R.~Milson, ``On the Liouville transformation and exactly solvable Schrodinger equations,'' Int. J. Theor. Phys. \textbf{37}, 1735-1752 (1998). \bibitem{Morales} J. Morales, J. Garc\'ia-Mart\'inez, J. Garc\'ia-Ravelo, et al., ``Exactly Solvable Schrodinger Equation with Hypergeometric Wavefunctions,'' Journal of Applied Mathematics and Physics \textbf{3}, 11-18 (2015). \bibitem{Filipuk} G. Filipuk, A. M. Ishkhanyan and J. Derezi\'{n}ski,
``On the Derivatives of the Heun Functions,'' Journal of Contemporary Mathematical Analysis (Armenian Academy of Sciences) \textbf{55}, 200-207 (2019). \end{thebibliography} \end{CJK} \end{document}
\begin{document} \title{Demonstration of quantum advantage in machine learning} \author{Diego~Rist\`e} \author{Marcus~P.~da~Silva} \author{Colm~A.~Ryan} \affiliation{Raytheon BBN Technologies, Cambridge, MA 02138, USA} \author{Andrew~W.~Cross} \author{John~A.~Smolin} \author{Jay~M.~Gambetta} \author{Jerry~M.~Chow} \affiliation{IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA} \author{Blake~R.~Johnson} \affiliation{Raytheon BBN Technologies, Cambridge, MA 02138, USA} \date{\today} \pacs{} \maketitle \textbf{ The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution~\cite{Nielsen00}. One measure of the algorithmic performance is the query complexity~\cite{Cleve01}, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover~\cite{Nielsen00}, have been implemented across diverse physical systems such as nuclear magnetic resonance~\cite{Jones98, Linden98, Chuang98a, Chuang98b}, trapped ions~\cite{Gulde03}, optical systems~\cite{Takeuchi00, Kwiat00}, and superconducting circuits~\cite{DiCarlo09, Yamamoto10, Dewes12}. However, at the small scale, these problems can already be solved classically with a few oracle queries, and the attainable quantum advantage is modest~\cite{Yamamoto10, Dewes12}. Here we solve an oracle-based problem, known as learning parity with noise~\cite{Angluin88, Blum03}, using a five-qubit superconducting processor. Running classical and quantum~\cite{Cross15} algorithms on the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. 
This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a quantum advantage already emerges in existing noisy systems.} The limited size of engineered quantum systems and their extreme susceptibility to noise sources have so far made it hard to establish a clear advantage of quantum over classical computing. A promising avenue to highlight this separation is offered by a new family of algorithms designed for machine learning~\cite{Schuld15, Manzano09, Lloyd13, Wiebe14}. In this class of problems, artificial intelligence methods are employed to discern patterns in large amounts of data, with little or no knowledge of underlying models. A particular learning task, known as binary classification, is to identify an unknown mapping from a set of bits onto $0$ or $1$. An example of binary classification is identifying a hidden parity function~\cite{Angluin88, Blum03}, defined by the unknown bit-string $\mathbf{k}$, which computes $f(\mathbf{D},\mathbf{k}) = \mathbf{D}\cdot \mathbf{k} \mod 2$ on a register of $n$ data bits $\mathbf{D} = \{D_1, D_2, \ldots, D_n\}$ (Fig.~1a). The result, i.e., 0 (1) for even (odd) parity, is mapped onto the state of an additional bit $A$. The learner has access to the output register of an \emph{example oracle} circuit that implements $f$ on random input states, over which the learner has no control. Repeated queries of the oracle allow the learner to reconstruct $\mathbf{k}$. However, any physical implementation suffers from errors, both in the oracle execution itself and in readout of the register. In the presence of errors, the problem becomes hard. Assuming that every bit introduces an equal error probability, the best known algorithms have a number of queries growing as $\mathcal{O}(n)$ and runtime growing almost exponentially with $n$~\cite{Angluin88, Blum03, Lyubashevsky05}.
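For concreteness, the classical example oracle can be mimicked in a few lines of Python (a sketch of our own, not the experimental control software; the per-bit flip probability `eta` is an assumed noise model):

```python
import random

def example_oracle(k, eta=0.0, rng=random):
    """One query: random data bits D, label a = D . k mod 2,
    with every output bit flipped independently with probability eta."""
    d = [rng.randint(0, 1) for _ in k]
    a = sum(di * ki for di, ki in zip(d, k)) % 2
    flip = lambda b: b ^ (rng.random() < eta)
    return [flip(b) for b in d], flip(a)

rng = random.Random(0)
d, a = example_oracle([1, 0, 1], eta=0.0, rng=rng)
print(d, a)  # with eta = 0, a always equals (d[0] + d[2]) % 2
```

With `eta > 0` every returned bit is unreliable, which is exactly what makes minimum-distance recovery of $\mathbf{k}$ costly.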
In view of the classical hardness of learning parity with noise (LPN), parity functions have been suggested as keys for secure and computationally easy authentication~\cite{Hopper01, Pietrzak12}. The picture is different when the algorithm can process quantum superpositions of input states, i.e., when the oracle is implemented by a quantum circuit. In this case, applying a coherent operation on all qubits after an oracle query ideally creates the entangled state \begin{equation} \label{eq:LPNq} (\ket{0_A0_\textbf{D}^n} + \ket{1_A\mathbf{k}_\mathbf{D}})/\sqrt{2}. \end{equation} In particular, when $A$ is measured to be in $\ket{1}$, $\ket{\textbf{D}}$ will be projected onto $\ket{\mathbf{k}}$. With constant error per qubit, learning from a quantum oracle requires a number of queries that scales as $\mathcal{O}(\log n)$, and has a total runtime that scales as $\mathcal{O}(n)$~\cite{Cross15}. This gives the quantum algorithm an exponential advantage in query complexity and a super-polynomial advantage in runtime. In this work, we implement an LPN problem in a superconducting quantum circuit using up to five qubits, realizing the experiment proposed in Ref.~\onlinecite{Cross15}. We construct a parity function with bit-string $\mathbf{k}$ using a series of $\mathrm{CNOT}$ gates between the ancilla and the data qubits (Fig.~1b). We then present two classes of learners for $\mathbf{k}$ and compare their performance. The first class simply measures the output qubits in the computational basis and analyzes the results. The measurement collapses the state into a random $\{\mathbf{D}, f(\mathbf{D}, \mathbf{k})\}$ basis state, reproducing an example oracle of the classical LPN problem. The second class performs some quantum computation (coherent operations), followed by classical analysis, to infer the solution.
We show that the quantum approach outperforms the classical one in the number of queries required to reach a target error threshold, and that it is largely robust to noise added to the output qubit register. \begin{figure} \caption{\textbf{Implementation of a parity function in a superconducting circuit.}} \end{figure} The quantum device used in our experiment consists of five superconducting transmon qubits, $A, D_1, \ldots, D_4$, and seven microwave resonators (Fig.~1c). Five of the resonators are used for individual control and readout of the qubits, to which they are dispersively coupled~\cite{Blais04}. The center qubit $A$ plays the role of the result and is coupled to the data register $\{D_i\}$ via the remaining two resonators. This coupling allows the implementation of cross-resonance ($\mathrm{CR}$) gates~\cite{Rigetti10} between $A$ (used as control qubit) and each $D_i$ (target), constituting the primitive two-qubit operation for the circuit in Fig.~1b (full gate decomposition in Extended Data Fig.~1). Each qubit state is read out by probing the dedicated resonator with a near-resonant microwave pulse. The output signals are then demodulated and integrated at room temperature to produce the homodyne voltages $\{V_{D_1}, \ldots, V_{D_n}, V_{A}\}$ (see Extended Data Fig.~2 for the detailed experimental setup). To implement a uniform random example oracle for a particular $\mathbf{k}$, we first prepare the data qubits in a uniform superposition (Fig.~1b). Preparing such a state ensures that all parity examples are produced with equal probability and is also key in generating a quantum advantage. We then implement the oracle as a series of CNOT gates, each having the same target qubit $A$ and a different control qubit $D_i$ for each $k_i=1$. Finally, the state of all qubits is read out (with the optional insertion of Hadamard gates, see discussion below).
The oracle mapping to the device is limited by imperfections in the two-qubit gates, with average fidelities $88 - 94\%$, characterized by randomized benchmarking~\cite{Magesan12} (see Extended Data Table~1). Readout errors in the register $\eta_{\mathrm{D}_i}$, defined as the average probability of assigning a qubit to the wrong state, are limited to $20-40\%$ by the onset of inter-qubit crosstalk at higher measurement power (Extended Data Fig.~3). A Josephson parametric amplifier~\cite{Hatridge11} in front of the amplification chain of $A$ suppresses its low-power readout error to $\eta_{\mathrm{A}} = 5\%$. Having implemented parity functions with quantum hardware, we now proceed to interrogate an oracle $N$ times and assess our capability to learn the corresponding $\mathbf{k}$. We start with oracles with register size $n=2$, involving $D_1, D_2$, and $A$. We consider two classes of learning strategies, classical (C) and quantum (Q). In C, we perform a projective measurement of all qubits right after execution of the oracle. This operation destroys any coherence in the oracle output state, thus making any analysis of the result classical. The measured homodyne voltages $\{V_{{D}_1}, ... V_{{D}_n}, V_{A}\}$ are converted into binary outcomes, using a calibrated set of thresholds (see Methods). Thus, for every query, we obtain a binary string $\{a, d_1, d_2\}$, where each bit is $0$ ($1$) for the corresponding qubit detected in $\ket{0}$ ($\ket{1}$). Ideally, $a$ is the linear combination of $d_1, d_2$ expressed by the string $\mathbf{k}$ (Fig.~1a). However, both the gates comprising the oracle and qubit readout are prone to errors (see Extended Data Table~1). To find the $\mathbf{k}$ that is most likely to have produced our observations, at each query $m$ we compute the expected $\tilde{a}_{\mathbf{k},m}$ for the measured $d_{1,m}, d_{2,m}$ and the $4$ possible values of $\mathbf{k}$.
We then select the $\mathbf{k}$ which minimizes the distance to the measured results $a_1, ..., a_N$ of $N$ queries, i.e., $\sum^N_{m=1}{\abs{\tilde{a}_{\mathbf{k},m} - a_{m}}}$~\cite{Angluin88}. In the case of a tie, $\mathbf{k}$ is randomly chosen among those producing the minimum distance. As expected, the error probability $p$ of identifying the wrong $\mathbf{k}$ decreases with $N$ (Fig.~2a). Interestingly, the difficulty of the problem depends on $\mathbf{k}$ and increases with the number of $k_i=1$. This can be intuitively understood as needing to establish a higher correlation between data qubits when the weight of $\mathbf{k}$ increases. \begin{figure} \caption{\textbf{Error probability $p$ to identify a 2-bit oracle $\mathbf{k}$.}} \end{figure} In our second approach (Q), while the oracle is left untouched, we apply local operations (Hadamard gates) to all qubits before measuring. Remarkably, this simple operation completely changes the statistics of the measurement results and the learning procedure. We now use the fact that the state of the data qubits is entangled with the result $A$ (see Eq.~\ref{eq:LPNq}). Whenever $A$ is measured to be in $\ket{1}$, the data register will ideally be projected onto the solution, $\ket{D_1, D_2} = \ket{k_1, k_2}$. We therefore digitize and postselect our results on the $\approx50\%$ outcomes where $a = 1$ and perform a bit-wise majority vote on $\{d_1, d_2\}_{1...\tilde{N}}$. Despite every individual query being subject to errors, the majority vote is effective in determining $\mathbf{k}$ (Fig.~2b). We assess the performance of the two solvers by comparing the number of queries $N_{1\%}$ required to reach $p=0.01$ (Fig.~2c). Whereas Q performs comparably or worse than C for $\mathbf{k} = 00, 01$ or $10$, Q requires less than half as many queries as C for the hardest oracle, $\mathbf{k}=11$.
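The distance-minimization step of the C solver can be sketched as a brute-force search over all $2^n$ keys (a minimal sketch in Python; the function name and the tie-breaking via \texttt{random.choice} are ours):

```python
import itertools
import random

def disagreement_minimization(queries, n, rng=random):
    """Brute-force classical learner: pick the key k whose predicted
    parities disagree least with the observed (d, a) pairs, breaking
    ties uniformly at random (Angluin-Laird style)."""
    best, best_dist = [], None
    for k in itertools.product((0, 1), repeat=n):
        # distance = number of queries whose label disagrees with k's parity
        dist = sum(abs(sum(di * ki for di, ki in zip(d, k)) % 2 - a)
                   for d, a in queries)
        if best_dist is None or dist < best_dist:
            best, best_dist = [k], dist
        elif dist == best_dist:
            best.append(k)
    return list(rng.choice(best))
```

The exhaustive loop over keys is what makes this approach scale exponentially with $n$.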
We note that, while these results are specific to the lowest oracle and readout errors we can achieve (see Extended Data Table~1), a systematic advantage of quantum over classical learning will become clear in the following. So far we have adhered to a literal implementation of the classical LPN problem, where each output can only be either 0 or 1. However, the actual measurement results are the continuous homodyne voltages $\{V_{{D}_1}, ... V_{{D}_n}, V_{A}\}$, each having mean and variance determined by the probed qubit state and by the measurement efficiency, respectively~\cite{Blais04}. These additional resources can be exploited to improve the learner's capabilities as follows. A more effective strategy for C uses Bayesian estimation to calculate the probability of any possible $\mathbf{k}$ for the measured output voltages, and selects the most probable (see Methods). This approach is expensive in classical processing time (scaling exponentially with $n$), but drastically reduces the error probability $p_{\mathrm{avg}}$, averaged over all $\mathbf{k}$, at any $N$ (Fig.~3 and Extended Data Fig.~4). To improve Q, we still postselect each oracle query on digital $a=1$, but average all instances of $\{V_{D_i}\}$, and digitize the averages $\{\avg{V_{D_i}}\}$ instead of each observation (see Methods). For each $D_i$, the majority vote between $\approx N/2$ inaccurate observations is then replaced by a single vote with high accuracy. Using the analog results, not only does Q retain an advantage over C (smaller $p$ for given $N$), but it does so without introducing an overhead in classical processing. \begin{figure} \caption{\textbf{Learning error probability $p_{\mathrm{avg}}$ averaged over all the $n$-bit oracles $\mathbf{k}$.}} \end{figure} The superiority of Q over C becomes even more evident when the oracle size $n$ grows from 2 to 3 data qubits (Fig.~3b).
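The analog postselect-and-average procedure for Q can be sketched as follows (a simplified sketch; the function name and example thresholds are ours, and we take the ancilla bits as already digitized):

```python
def analog_quantum_solver(a_bits, voltages, thresholds):
    """Analog Q solver: postselect queries on digitized ancilla a = 1,
    average the data-qubit voltages over the kept queries, and digitize
    each average once against its calibrated threshold.

    a_bits     : digitized ancilla outcome per query (0 or 1)
    voltages   : per query, the list of data-qubit voltages [V_D1, ..., V_Dn]
    thresholds : one digitization threshold per data qubit
    """
    kept = [v for a, v in zip(a_bits, voltages) if a == 1]
    n = len(thresholds)
    means = [sum(v[i] for v in kept) / len(kept) for i in range(n)]
    return [1 if m > t else 0 for m, t in zip(means, thresholds)]
```

Averaging before digitizing is what replaces many noisy single-shot votes with one high-accuracy vote per data qubit.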
Whereas Q solutions are marginally affected, the best C solver demands almost an order of magnitude higher $N$ to achieve a target error. Maximizing the resources available in our quantum hardware, we observe an even larger gap for oracles with $n=4$ (Extended Data Fig.~5), suggesting a continued increase of quantum advantage with the problem size. As predicted, quantum parity learning surpasses classical learning in the presence of noise. To investigate the impact of noise on learning, we introduce additional readout error on either $A$ or on all $D_i$. This can be easily done by tuning the amplitude of the readout pulses, effectively decreasing the signal-to-noise ratio~\cite{Vijay11}. When the ancilla assignment error probability $\eta_{\mathrm{A}}$ grows (Fig.~4a), the number of queries $\overline{N}_{1\%}$ (the average of $N_{1\%}$ over all $\mathbf{k}$) required by the C solver increases by up to 2 orders of magnitude in the measured range (see also Extended Data Fig.~6). Conversely, using Q, $\overline{N}_{1\%}$ only changes by a factor of $\sim3$. Key to this performance gap is the optimization of the digitization threshold for $\{\avg{V_{D_i}}\}$ at each value of $\eta_{\mathrm{A}}$ (see Methods). When $\eta_{\mathrm{A}}$ is increased, an interesting question is whether postselection on $V_{A}$ always remains beneficial. In fact, for $\eta_{\mathrm{A}}>0.25$, it becomes more convenient to ignore $V_{A}$ and use the totality of the queries ($\mathrm{Q}'$ in Fig.~4a). \begin{figure} \caption{\textbf{Robustness of quantum parity learning to noise.}} \end{figure} Similarly, we step the readout error of the data qubits, with average $\eta_{\mathrm{D}}$, while setting $\eta_{\mathrm{A}}$ to the minimum. Not only does Q outperform C at every step, but the gap widens with increasing $\eta_{\mathrm{D}}$.
A numerical model including the measured $\eta_{\mathrm{A}}, \eta_{\mathrm{D}}$, qubit decoherence, and gate errors modeled as depolarization noise (Extended Data Table 1) is in very good agreement with the measured $N_{1\%}$ at all $\eta_{\mathrm{A}}, \eta_{\mathrm{D}}$. This model allows us to extrapolate $N_{1\%}$ to the extreme cases of zero and maximum noise. Obviously, when $\eta_{\mathrm{D}} = 0.5$, readout of the data register contains no information, and $N_{1\%}$ consequently diverges. On the other hand, a random ancilla result ($\eta_{\mathrm{A}} = 0.5$) does not prevent a quantum learner from obtaining $\mathbf{k}$. In this limit, the predicted factor of $\sim2$ in $\overline{N}_{1\%}$ between Q and $\mathrm{Q}'$ can be intuitively understood, as Q indiscriminately discards half of the queries while $\mathrm{Q}'$ uses all of them. (See Supplementary Material for theoretical bounds on the scaling of $\overline{N}_{1\%}$ for different solvers.) In conclusion, we have implemented a learning parity with noise algorithm in a quantum setting. We have demonstrated a superior performance of quantum learning compared to its classical counterpart, where the performance gap increases with added noise in the query outcomes. A quantum learner, with the ability of physically manipulating the output of a quantum oracle, is expected to find the hidden key with a logarithmic number of queries and linear runtime as a function of the problem size, whereas a passive classical observer would require a linear number of queries and nearly exponential runtime. We have shown that the difference in classical and quantum queries required for a target error rate grows with the oracle size in the experimentally accessible range, and that quantum learning is much more robust to noise. We expect that future experiments with increased oracle size will further demarcate a quantum advantage, in support of the predicted asymptotic behavior.
\section{Methods} \textbf{Pulse calibration.} Single- and two-qubit pulses are calibrated by an automated routine, executed periodically during the experiments. For each qubit, first the transition frequency is calibrated with Ramsey experiments. Second, $\pi$ and $\pi/2$ pulse amplitudes are calibrated using a phase estimation protocol~\cite{Kimmel15}. The pulse amplitudes, modulating a carrier through an I/Q mixer (Extended Data Fig.~2), are adjusted at every iteration of the protocol until the desired accuracy or signal-to-noise limit is reached. Pulses have a Gaussian envelope in the main quadrature and derivative-of-Gaussian in the other, with DRAG parameter~\cite{Motzoi09} calibrated beforehand using a sequence amplifying phase errors~\cite{Lucero10}. $\mathrm{CR}$ gates are calibrated in a two-step procedure, determining first the optimum duration and then the optimum phase for a $ZX_{90}$ unitary. \textbf{Experimental setup.} A detailed schematic of the experimental setup is illustrated in Extended Data Fig.~2. For each qubit, signals for readout and control are delivered to the corresponding resonator through an individual line through the dilution refrigerator. For an efficient use of resources, we apply frequency division multiplexing~\cite{Jerger12} to generate the five measurement tones by sideband modulation of three microwave sources. Moreover, the same pair of BBN APS (custom arbitrary waveform generator) channels produces the readout pulses for $\{D_1, D_2\}$, and another pair those for $\{D_3, D_4\}$. Similarly, the output signals are pairwise combined at base temperature, limiting the number of HEMTs and digitizer channels to three. The attenuation on the input lines, distributed at different temperature stages, is a compromise between suppression of thermal noise impinging on the resonators (affecting qubit coherence) and the input power required for $\mathrm{CR}$ gates.
\textbf{Gate sequence.} $\mathrm{CNOT}$ gates can be decomposed in terms of $\mathrm{CR}$ gates using the relation $\mathrm{CNOT}_{12} = (Z_{90}^- \otimes X_{90}^-)\, \mathrm{CR}_{12}$~\cite{Chow14}. Moreover, the roles of control and target qubit are swapped using $\mathrm{CNOT}_{12} = (H_1 \otimes H_2)\, \mathrm{CNOT}_{21} (H_1 \otimes H_2)$. The first of these $H$ gates is absorbed into state preparation for the LPN sequence (Fig.~1a and Extended Data Fig.~1). Similarly, when two $\mathrm{CNOT}$s are executed back to back, two consecutive $H$ gates on $A$ cancel out. In order to keep the oracle identical in C and Q, we do not compile the $H$ gates in the $\mathrm{CNOT}$s with those applied before measurement in Q. \textbf{Data analysis.} For each set of $\{\mathbf{k}, \eta_{\mathrm{A}}, \eta_{\mathrm{D}}\}$, solver type, and register size $n$, we measure the result of $10,000$ oracle queries. Each set is accompanied by $n+2$ calibration points (averaged $10,000$ times), providing the distributions of $V_{A}, V_{{D}_1}, ..., V_{{D}_n}$ for the collective ground state and for single-qubit excitations ($n$ data and 1 ancilla qubit). These distributions are then used to determine the optimum digitization threshold (for digital solvers) or as input to the Bayesian estimate in C. To obtain $p (N)$, we resample the full data set with $1000-4000$ random subsets of each size $N$. Error bars are obtained by first computing the credible intervals for $p$ at each set $\{N,\mathbf{k}, \eta_{\mathrm{A}}, \eta_{\mathrm{D}}\}$. These intervals are computed with Jeffreys beta distribution prior $\mathrm{Beta}(\frac{1}{2},\frac{1}{2})$ for Bernoulli trials, with a credible level of $100\%-(100\%-95\%)/8\approx99.36\%$. This ensures that, under a union bound, the average of estimates for $8$ different keys is inside the credible interval with a probability of at least $95\%$.
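The Hadamard-conjugation identity used to swap control and target can be verified numerically. The sketch below writes the matrices in the $\ket{q_1 q_2}$ basis (00, 01, 10, 11); the helper functions are ours:

```python
from math import isclose, sqrt

def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(A, B):
    """Kronecker product, with the first factor acting on qubit 1."""
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

h = 1 / sqrt(2)
H = [[h, h], [h, -h]]

# CNOT12 has q1 as control; CNOT21 has q2 as control.
CNOT12 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
CNOT21 = [[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]

HH = kron(H, H)
lhs = matmul(matmul(HH, CNOT21), HH)
# (H x H) CNOT21 (H x H) equals CNOT12
assert all(isclose(lhs[i][j], CNOT12[i][j], abs_tol=1e-12)
           for i in range(4) for j in range(4))
```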
We then perform antitonic regression on the upper and lower bounds of the credible intervals to ensure monotonicity as a function of $N$, and find the intercept to $p=0.01$ for each $\mathbf{k}$. The bounds on the value $\overline{N}_{1\%}$ averaged over the keys are computed by interval arithmetic on the credible intervals of $N_{1\%}$ for each $\mathbf{k}$. \textbf{Classical solver with Bayesian estimate.} An improved classical solver for the LPN problem can be constructed when the oracle provides an analog output. Under the assumption of Gaussian distributions for each possible bit value, this improved solver corresponds to a Bayesian estimate of the key after a series of observations of the data and ancilla bits. More formally, taking a uniform prior distribution for all binary strings produced by the oracle, one computes the (unnormalized) posterior distribution $p(D_i)$ for each data bit $D_i$ in the output of the oracle, \begin{align*} p(D_i=b|V_{D_i}) = \frac{1}{2} \exp\left[-\frac{(V_{D_i}-b)^2}{2\sigma_{i}^2}\right]. \end{align*} The (unnormalized) posterior distribution $p_m(\mathbf{k}|\mathbf{V}_\mathrm{D},V_{A})$ for the key $\mathbf{k}$ after the $m$th query, on the other hand, is given by \begin{align*} p_m(\mathbf{k}|\mathbf{V}_\mathrm{D},V_{A}) = \exp\left[-\frac{(V_{A}-\mathbf{D}\cdot\mathbf{k})^2}{2\sigma_A^2}\right] p(\mathbf{D}|\mathbf{V}_\mathrm{D})p_{m-1}(\mathbf{k}), \end{align*} where $p_0(\mathbf{k})$ is the prior distribution for each key. Here and above, $\{V_{{D}_1}, ... V_{{D}_n}, V_{A}\}$ are rescaled to have mean $0$ and $1$ for the corresponding qubit in $\ket{0}$ and $\ket{1}$, respectively. Iterating this procedure (while updating $p(\mathbf{k})$ at each iteration), and then choosing the most probable key $\mathbf{k}_{\text{Bayes}} = \arg \max_{\mathbf{k}} p(\mathbf{k})$, one obtains an estimate for the key.
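A minimal sketch of one such Bayesian update in Python, assuming rescaled voltages, taking the inner product $\mathbf{D}\cdot\mathbf{k}$ modulo 2, and marginalizing explicitly over the $2^n$ data strings (the noise widths \texttt{sigma\_D}, \texttt{sigma\_A} are illustrative values, not the measured ones):

```python
import math
from itertools import product

def bayes_update(p_k, V_D, V_A, sigma_D=0.3, sigma_A=0.3):
    """One Bayesian update of the (unnormalized) key posterior p_k
    from the analog voltages of a single oracle query.

    p_k : dict mapping key tuples k to unnormalized probabilities
    V_D : rescaled data-qubit voltages (mean 0 for |0>, 1 for |1>)
    V_A : rescaled ancilla voltage
    """
    n = len(V_D)
    new = {}
    for k in p_k:
        total = 0.0
        for D in product((0, 1), repeat=n):  # marginalize over data strings
            p_D = math.prod(
                0.5 * math.exp(-(v - b) ** 2 / (2 * sigma_D ** 2))
                for v, b in zip(V_D, D))
            parity = sum(d * ki for d, ki in zip(D, k)) % 2
            total += p_D * math.exp(-(V_A - parity) ** 2 / (2 * sigma_A ** 2))
        new[k] = total * p_k[k]
    return new
```

The double loop over keys and data strings makes the exponential classical-processing cost explicit.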
\textbf{Analog quantum solver with postselection on $A$.} While postselection on $A$ is performed equally on both digital (Fig.~2) and analog (Figs.~3-4) Q solvers, in the analog case all postselected $\{V_{D_i}\}$ are averaged together. Finally, the results $\{\avg{V_{D_i}}\}$ are digitized to determine the most likely $\mathbf{k}$. The choice of digitization threshold for each $D_i$ depends on: a) the readout voltage distributions $\rho_0$ and $\rho_1$ for the two basis states, each characterized by a mean $\mu$ and a variance $\sigma^2$; b) $\eta_{\mathrm{A}}$. Ideally ($\eta_{\mathrm{A}}=0$ and perfect oracle), the distribution of each query output $V_{D_i}$ matches $\rho_0$ ($\rho_1$) for $k_i=0\, (1)$. When $\eta_{\mathrm{A}} >0$, the distribution for $k_i =1$ becomes the mixture $\rho_{k_i=1} = \eta_{\mathrm{A}}\rho_0 + (1-\eta_{\mathrm{A}})\rho_1$. This mixture has mean $(1-\eta_{\mathrm{A}})\mu_1 + \eta_{\mathrm{A}} \mu_0$ and variance $(1-\eta_{\mathrm{A}}) \sigma^2_1+ \eta_{\mathrm{A}} \sigma^2_0 + \eta_{\mathrm{A}}(1-\eta_{\mathrm{A}})(\mu_1-\mu_0)^2$. Instead, $\rho_{k_i=0} = \rho_0$ independently of $\eta_{\mathrm{A}}$. We approximate the expected distribution of the mean $\avg{V_{D_i}}$ with a Gaussian having average and variance obtained from $\rho_{k_i=0}$ ($\rho_{k_i=1}$) for $k_i=0$ ($1$). Finally, we choose the digitization threshold for $V_{D_i}$ which maximally discriminates these two Gaussian distributions. We note that the number of queries scales the variance of both distributions equally and therefore does not affect the optimum threshold. Furthermore, this calibration protocol is independent of the oracle (see Extended Data Fig.~7). \textbf{Analog quantum solver without postselection.} The analysis without ancilla ($\mathrm{Q}'$) closely follows the steps outlined in the last paragraph. For the purpose of extracting the optimum digitization thresholds, we consider $\eta_{\mathrm{A}}=0.5$ in the expressions above.
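A sketch of this threshold calibration in Python. It uses the standard two-component mixture formulas for the mean and variance; as a simplification (ours, not the paper's exact criterion), the threshold is placed at the standard-deviation-weighted midpoint of the two effective Gaussians rather than solving the exact equal-likelihood condition:

```python
from math import sqrt

def digitization_threshold(mu0, mu1, var0, var1, eta_A):
    """Threshold for the averaged voltage <V_Di>, given calibrated
    single-shot means/variances (mu0, var0), (mu1, var1) and the
    ancilla assignment error eta_A.

    For k_i = 1 the single-shot voltage follows the mixture
    eta_A*rho0 + (1 - eta_A)*rho1; for k_i = 0 it is rho0 regardless
    of eta_A."""
    m1 = (1 - eta_A) * mu1 + eta_A * mu0
    v1 = ((1 - eta_A) * var1 + eta_A * var0
          + eta_A * (1 - eta_A) * (mu1 - mu0) ** 2)
    s0, s1 = sqrt(var0), sqrt(v1)
    # std-weighted midpoint between the two effective Gaussian means
    return (s1 * mu0 + s0 * m1) / (s0 + s1)
```

Since averaging over queries rescales both variances equally, the threshold is independent of the number of postselected queries, as noted above.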
This corresponds to an equal mixture of $\rho_0$ and $\rho_1$ when $k_i=1$. \textbf{Bounds on performance of the analog quantum solvers.} Here we demonstrate how the bounds from Ref.~\onlinecite{Cross15} can be easily adapted to the case where the solver uses analog voltage measurements. We consider both the case where experiments are postselected based on the digitized value of the ancilla (referred to below as postselected soft averaging), and the case where the ancilla is ignored altogether. We consider different error rates for the ancilla and the data qubits. \textit{Postselected soft averaging.} In order to generalize the analysis in Ref.~\onlinecite{Cross15} to the postselected soft averaging case, we now need to take two types of data errors into account: depolarizing errors (our crude model for oracle errors), and measurement error (additive Gaussian noise). First, postselection works identically to Ref.~\onlinecite{Cross15}, since we treat the ancilla digitally. We note that, in this analysis, the ancilla error rate combines oracle errors and readout errors. Given $N$ queries, $N'$ are postselected according to the ancilla value $V_{A}$, and $s$ of these postselections are correct. Although $s$ is unknown in an experiment, we condition our results on $s$ being typical (i.e., we only consider the values of $s$ that occur with probability higher than $1-\epsilon$ for some small $\epsilon$). For the correct postselections, we have two possible voltage distributions for each $D_i$, depending on whether the outcome is 0 or 1. The distribution of the outcomes will depend on whether we have one of the correct postselections, and on the value of the $i$th key bit $k_i$.
If $k_i=0$, the conditional voltage distributions, depending on whether we postselected correctly ($\checkmark$) or not (\ding{55}), are \begin{align*} \rho_{i|0}^\checkmark &\sim \mathcal{N}(\eta_{\mathrm{D}} s,s\sigma^2),\\ \rho_{i|0}^\text{\ding{55}} &\sim \mathcal{N}[\eta_{\mathrm{D}}(N'-s),(N'-s)\sigma^2], \end{align*} respectively, with $\mathcal{N}(\mu,\sigma^2)$ the normal distribution with mean $\mu$ and variance $\sigma^2$. Therefore, the overall distribution is \begin{align*} \rho_{i|0} & \sim \mathcal{N}(\eta_{\mathrm{D}} N',N'\sigma^2). \end{align*} If the true bit value is 1, we have \begin{align*} \rho_{i|1}^\checkmark &\sim \mathcal{N}((1-\eta_{\mathrm{D}})s,s\sigma^2),\\ \rho_{i|1}^\text{\ding{55}} &\sim \mathcal{N}[\eta_{\mathrm{D}}(N'-s),(N'-s)\sigma^2], \end{align*} and therefore \begin{align*} \rho_{i|1} &\sim \mathcal{N}[(1-\eta_{\mathrm{D}})s+\eta_{\mathrm{D}}(N'-s),N'\sigma^2]. \end{align*} Now we must compute the optimal voltage threshold which determines the digital decision at each of the data qubits. If we define \begin{align*} \mu_{i|j} &= \mathbb{E}[\rho_{i|j}], \end{align*} the threshold we must choose is \begin{align*} T &= \frac{1}{2}\mu_{i|0}+\frac{1}{2}\mu_{i|1} \\ &= \frac{s(1-2\eta_{\mathrm{D}})+2\eta_{\mathrm{D}} N'}{2}. \end{align*} The complication is that this is conditioned on $s$, but we will deal with that later, as the dependence on $s$ also comes from the distribution of outcomes (not just the threshold). In the following we assume the value of $s$ to be typical (i.e., $s$ is contained in the region around the median excluding the distribution tails that add up to at most some small $\epsilon$). Under this assumption, we require that $\mu_{i|0} \le T \le \mu_{i|1}$. The probability of having the right answer at a particular bit is the probability that the averaged voltage is on the correct side of the threshold (above or below).
If the true value of the bit is 0, i.e., if $k_i=0$, given the threshold, we can compute \begin{align*} \Pr(\rho_i\le T|s,k_i=0) & = \Phi\left(\frac{T-\mu_{i|0}}{\sqrt{N'}\sigma}\right) \\ & = 1-Q\left(\frac{T-\mu_{i|0}}{\sqrt{N'}\sigma}\right), \end{align*} where $\Phi$ is the cumulative distribution function for a normal distribution, and $Q$ is the tail probability for the normal distribution. We can place a lower bound on $\Pr(\rho_i\le T|s,k_i=0)$ with an upper bound on $Q$. Note that, for the range of interest, the argument of $Q$ is always positive, so we can use the bound \begin{align*} Q(x) &< \frac{1}{2}\exp\left(-\frac{x^2}{2}\right), \quad x > 0 \end{align*} and therefore \begin{align*} \Pr(\rho_i\le T|s) &\ge 1 - \frac{1}{2}\exp\left[-\left(\frac{T-\mu_{i|0}}{\sqrt{2 N'}\sigma}\right)^2\right], \end{align*} which is nearly what we want---we must now address the dependence on $s$. One way to restrict the analysis to typical $s$ is to require that, for $\bar{\eta}_{\mathrm{A}}=\max\{\eta_{\mathrm{A}},1-\eta_{\mathrm{A}}\}$, the probability $$ \Pr(|s-\mu_s|<\delta' \mu_s) > 1 - 2 \exp\left( - \frac{\delta'^2\bar{\eta}_{\mathrm{A}} N'}{3}\right) $$ is exponentially close to 1. This choice of $\bar{\eta}_{\mathrm{A}}$ requires knowledge of the error rates in the ancilla so that, for example, one knows to postselect on 0 instead of 1 if $\eta_{\mathrm{A}}>0.5$. In order to pick a lower bound valid for all typical thresholds and means, we choose the smallest $|T-\mu_{i|0}|$ by choosing $T$ and $\mu_{i|0}$ independently from the typical sets.
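The Gaussian tail bound $Q(x) \le \tfrac{1}{2}e^{-x^2/2}$ invoked above can be checked numerically against the exact tail probability (a quick sanity-check sketch, with the bound holding with equality at $x=0$):

```python
import math

def gauss_tail(x):
    """Standard normal tail probability Q(x) = Pr(Z > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Chernoff-type bound Q(x) <= (1/2) exp(-x^2/2) for x >= 0
for x in [0.0, 0.3, 1.0, 2.5, 4.0]:
    assert gauss_tail(x) <= 0.5 * math.exp(-x * x / 2)
```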
This leads to $$ T-\mu_{i|0} > N' (1 - \delta') \left(\frac{1}{2} - \eta_{\mathrm{D}}\right) \bar{\eta}_{\mathrm{A}} $$ and thus, $$ \Pr(\rho_i\le T|s) \ge 1 - \frac{1}{2}\exp\left[-\frac{N'(1 - \delta')^2 \left(\frac{1}{2} - \eta_{\mathrm{D}}\right)^2 \bar{\eta}_{\mathrm{A}}^2}{2\sigma^2}\right] $$ so that, by the union bound, $$ \Pr(\tilde{a}\not=a|s) < \frac{n}{2}\exp\left[-\frac{N'(1 - \delta')^2 \left(\frac{1}{2} - \eta_{\mathrm{D}}\right)^2 \bar{\eta}_{\mathrm{A}}^2}{2\sigma^2}\right] $$ and therefore the lower bound on the number of queries is $$ N' > \frac{2\sigma^2}{(1 - \delta')^2 \left(\frac{1}{2} - \eta_{\mathrm{D}}\right)^2 \bar{\eta}_{\mathrm{A}}^2}\ln\frac{n}{2\delta}. $$ If $k_i=1$, we take a similar approach, but the lower bound on the distance between the threshold and the mean is smaller, leading to $$ N' > \frac{2\sigma^2}{(1 - 3\delta')^2 \left(\frac{1}{2} - \eta_{\mathrm{D}}\right)^2 \bar{\eta}_{\mathrm{A}}^2}\ln\frac{n}{2\delta}, $$ so clearly this is the worst case for $k_i$. If we want to bound $N$ instead of $N'$, we just remember that there is a $50\%$ chance of collapsing into the informative branch of the state, and using the same typicality argument as before, we have $$ N > \frac{4\sigma^2}{(1 - \delta'')^2(1 - 3\delta')^2 \left(\frac{1}{2} - \eta_{\mathrm{D}}\right)^2 \bar{\eta}_{\mathrm{A}}^2}\ln\frac{n}{2\delta}, $$ where $\delta''$ measures how far from the mean $k$ is, with a corresponding Chernoff bound. \emph{Analysis without postselection.} The analysis is equivalent to the postselected case, but with $\eta_{\mathrm{A}}=\frac{1}{2}$ and $N'=N$, since we keep all experiments and have a $50\%$ chance of collapsing the state in the informative branch. All of this leads to $$ N > \frac{8\sigma^2}{(1 - 3\delta')^2(1/2-\eta_{\mathrm{D}})^2}\ln\frac{n}{2\delta}. $$ We now see that, depending on the choices of $\delta'$ and $\delta''$, postselection may or may not lead to better bounds, but the asymptotic scaling is the same.
\textbf{Complexity of digital classical solvers.} Angluin and Laird~\cite{Angluin88} showed that learning with classification noise requires $O(n)$ queries as long as the classification error rate is below $\frac{1}{2}$, and proposed an algorithm (\emph{disagreement minimization}) that corresponds to solving an NP-complete problem. According to the exponential time hypothesis, it is widely believed that NP-complete problems can only be solved in exponential time. Note that, while the classification rate is nominally $\eta_{\mathrm{A}}$ in our experiment, all errors (including $\eta_{\mathrm{D}}$ and gate infidelities) can be combined into an effective, $\mathbf{k}$-dependent, single error rate. Blum, Kalai, and Wasserman~\cite{Blum03} devised a sub-exponential time algorithm for learning with classification errors, as long as the classification error rate is below $\frac{1}{2}-\frac{1}{2^{n^\delta}}$ for $\delta<1$, at the cost of increasing the query complexity to slightly sub-exponential scaling with $n$. Later, Lyubashevsky~\cite{Lyubashevsky05} devised another slightly sub-exponential time algorithm for learning with classification errors, as long as the classification error rate is below $\frac{1}{2}-\frac{1}{2^{(\log n)^\delta}}$ for $\delta<1$, but bringing down the query complexity to $n^{1+\epsilon}$ for $\epsilon>0$. Note that the gains over exponential time scaling for these two algorithms are rather small -- a reduction from $O(2^n)$ to $O(2^{\frac{n}{\log n}})$ and $O(2^{\frac{n}{\log\log n}})$, respectively. For $n=3$, the Blum-Kalai-Wasserman algorithm can only tolerate less than $\frac{3}{8}=0.375$ classification error rate, while the Lyubashevsky algorithm can only tolerate less than $\frac{1}{2}-\frac{1}{2^{\log 3}}\approx0.033$ classification error rate. Lyubashevsky's algorithm does not apply to any of the experiments discussed here because our classification error rates are too high.
The Blum-Kalai-Wasserman algorithm only applies to some of the experiments discussed here, so for the sake of fair comparison across all error rates, we use Angluin and Laird's disagreement minimization. \section{Acknowledgments} \begin{acknowledgments} We thank George A. Keefe and Mary B. Rothwell for device fabrication, T.~Ohki for technical assistance, H.~Krovi for discussions, and I.~Siddiqi for providing the Josephson parametric amplifier. This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Office contract no. W911NF-10-1-0324. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI, or the U.S. Government. \end{acknowledgments} \section{Contributions} C.A.R. and B.R.J. developed the BBN APS and the data acquisition software, D.R. carried out the experiment, D.R., M.P.S. and B.R.J. performed the data analysis, M.P.S. implemented the solvers and developed the theoretical models, D.R. and M.P.S. wrote the manuscript with comments from the other authors, A.W.C. and J.A.S. contributed to the initial design of the experiment, B.R.J., J.M.C. and J.M.G. supervised the project. \section{Author information} \noindent The authors declare no competing financial interests. \\ Correspondence: [email protected] or [email protected]. 
\renewcommand{\figurename}{Extended Data Fig.} \setcounter{figure}{0} \begin{figure*} \caption{\textbf{Circuit gate decomposition for 3-bit oracles.}} \end{figure*} \begin{figure*} \caption{\textbf{Experimental setup.}} \end{figure*} \begin{figure*} \caption{\textbf{Readout voltage distributions.}} \end{figure*} \begin{figure*} \caption{\textbf{Learning error $p$ for the individual 3-bit $\mathbf{k}$.}} \end{figure*} \begin{figure*} \caption{\textbf{Learning error $p$ for 4-bit oracles.}} \end{figure*} \begin{figure*} \caption{\textbf{Average learning error $p_{\mathrm{avg}}$ as a function of readout errors.}} \end{figure*} \begin{figure*} \caption{\textbf{Calibration of the digitization threshold $V_{D_i}$.}} \end{figure*} \renewcommand{\figurename}{Extended Data Table} \setcounter{figure}{0} \begin{figure*} \caption{\textbf{Qubit and resonator parameters.}} \end{figure*} \end{document}
\begin{document} \baselineskip 16pt \title{On $\sigma$-quasinormal subgroups of finite groups \thanks{Research is supported by an NNSF grant of China (Grant No. 11401264) and a TAPP of Jiangsu Higher Education Institutions (PPZY 2015A013)}} \author{ Bin Hu\\\ {\small School of Mathematics and Statistics, Jiangsu Normal University,}\\ {\small Xuzhou, 221116, P.R. China}\\ {\small E-mail: [email protected]}\\ \\ Jianhong Huang \thanks{Corresponding author}\\ {\small School of Mathematics and Statistics, Jiangsu Normal University,}\\ {\small Xuzhou 221116, P. R. China}\\ {\small E-mail: [email protected]}\\ \\ { Alexander N. Skiba }\\ {\small Department of Mathematics and Technologies of Programming, Francisk Skorina Gomel State University,}\\ {\small Gomel 246019, Belarus}\\ {\small E-mail: [email protected]}} \date{} \maketitle \begin{abstract} Let $G$ be a finite group and $\sigma =\{\sigma_{i} | i\in I\}$ some partition of the set of all primes $\Bbb{P}$, that is, $\Bbb{P}=\bigcup_{i\in I} \sigma_{i}$ and $\sigma_{i}\cap \sigma_{j}= \emptyset $ for all $i\ne j$. We say that $G$ is \emph{$\sigma$-primary} if $G$ is a $\sigma _{i}$-group for some $i$. A subgroup $A$ of $G$ is said to be: \emph{${\sigma}$-subnormal} in $G$ if there is a subgroup chain $A=A_{0} \leq A_{1} \leq \cdots \leq A_{n}=G$ such that either $A_{i-1}\trianglelefteq A_{i}$ or $A_{i}/(A_{i-1})_{A_{i}}$ is $\sigma$-primary for all $i=1, \ldots , n$; \emph{modular} in $G$ if the following conditions hold: (i) $\langle X, A \cap Z \rangle=\langle X, A \rangle \cap Z$ for all $X \leq G, Z \leq G$ such that $X \leq Z$, and (ii) $\langle A, Y \cap Z \rangle=\langle A, Y \rangle \cap Z$ for all $Y \leq G, Z \leq G$ such that $A \leq Z$. In this paper, a subgroup $A$ of $G$ is called \emph{$\sigma$-quasinormal in $G$} if $A$ is modular and ${\sigma}$-subnormal in $G$. We study $\sigma$-quasinormal subgroups of $G$.
In particular, we prove that if a subgroup $H$ of $G$ is $\sigma$-quasinormal in $G$, then for every chief factor $H/K$ of $G$ between $H^{G}$ and $H_{G}$ the semidirect product $(H/K)\rtimes (G/C_{G}(H/K))$ is $\sigma$-primary. \end{abstract} \footnotetext{Keywords: finite group, $\sigma$-nilpotent group, ${\sigma}$-subnormal subgroup, modular subgroup, $\sigma$-quasinormal subgroup.} \footnotetext{Mathematics Subject Classification (2010): 20D10, 20D15} \let\thefootnote\thefootnoteorig \section{Introduction} Throughout this paper, all groups are finite and $G$ always denotes a finite group. Moreover, $\mathbb{P}$ is the set of all primes, $\pi= \{p_{1}, \ldots , p_{n}\} \subseteq \Bbb{P}$ and $\pi' = \Bbb{P} \setminus \pi$. If $n$ is an integer, the symbol $\pi (n)$ denotes the set of all primes dividing $n$; as usual, $\pi (G)=\pi (|G|)$, the set of all primes dividing the order of $G$. A subgroup $M$ of $G$ is said to be \emph{modular in $G$} \cite{1-3} if it is a modular element (in the sense of Kurosh \cite[p. 43]{Schm}) of the lattice of all subgroups of $G$, that is, the following conditions hold: (i) $\langle X,M \cap Z \rangle=\langle X, M \rangle \cap Z$ for all $X \leq G, Z \leq G$ such that $X \leq Z$, and (ii) $\langle M, Y \cap Z \rangle=\langle M, Y \rangle \cap Z$ for all $Y \leq G, Z \leq G$ such that $M \leq Z$. In what follows, $\sigma$ is some partition of $\Bbb{P}$, that is, $\sigma =\{\sigma_{i} | i\in I \}$, where $\Bbb{P}=\bigcup_{i\in I} \sigma_{i}$ and $\sigma_{i}\cap \sigma_{j}= \emptyset $ for all $i\ne j$. By analogy with the notation $\pi (n)$, we write $\sigma (n)$ to denote the set $\{\sigma_{i} |\sigma_{i}\cap \pi (n)\ne \emptyset \}$; $\sigma (G)=\sigma (|G|)$.
The group $G$ is said to be: \emph{$\sigma$-primary} \cite{1} if $G$ is a $\sigma_{i}$-group for some $i$; \emph{$\sigma$-decomposable} (Shemetkov \cite{Shem}) or \emph{$\sigma$-nilpotent} (Guo and Skiba \cite{33}) if $G=G_{1}\times \dots \times G_{n}$ for some $\sigma$-primary groups $G_{1}, \ldots, G_{n}$. We use $\mathfrak{N}_{\sigma}$ to denote the class of all $\sigma$-nilpotent groups. We say, following \cite{1}, that the subgroup $A$ of $G$ is \emph{${\sigma}$-subnormal in $G$} if it is \emph{$\mathfrak{N}_{\sigma}$-subnormal in $G$} in the sense of Kegel \cite{KegSubn}, that is, there is a subgroup chain $$A=A_{0} \leq A_{1} \leq \cdots \leq A_{n}=G$$ such that either $A_{i-1} \trianglelefteq A_{i}$ or $A_{i}/(A_{i-1})_{A_{i}}$ is ${\sigma}$-nilpotent for all $i=1, \ldots , n$. A subgroup $A$ of $G$ is said to be \emph{ quasinormal } \cite{5} or \emph{ permutable} \cite{6, prod} in $G$ if $A$ permutes with every subgroup $L$ of $G$, that is, $AL=LA$. The quasinormal subgroups have many interesting properties. For instance, if $A$ is quasinormal in $G$, then: {\sl $A$ is subnormal in $G$} (Ore \cite{5}), {\sl $A/A_{G}$ is nilpotent} (Ito and Szep \cite{It}) and, in general, \emph{$A/A_{G}$ is not necessarily abelian} (Thomson \cite{Th}). Every quasinormal subgroup $A$ of $G$ is modular in $G$ \cite{1-3}. Moreover, the following properties of quasinormal subgroups are well-known. {\bf Theorem A} (See Theorem 5.1.1 in \cite{Schm}). {\sl A subgroup $A$ of $G$ is quasinormal in $G$ if and only if $A$ is modular and subnormal in $G$. } {\bf Theorem B.} {\sl If $A$ is a quasinormal subgroup of $G$, then:} (i) {\sl $A^{G}/A_{G}$ is nilpotent} (This follows from the above-mentioned results in \cite{5, It}), {\sl and } (ii) {\sl Every chief factor $H/K$ of $G$ between $A^{G}$ and $A_{G}$ is central in $G$, that is, $C_{G}(H/K)=G$ } (Maier and Schmid \cite{MaierS}). 
Since every subnormal subgroup of $G$ is $\sigma$-subnormal in $G$, Theorems A and B make it natural to ask: {\sl What can we say about the quotient $A^{G}/A_{G}$ provided the subgroup $A$ of $G$ is $\sigma$-quasinormal in the sense of the following definition?} {\bf Definition 1.1.} Let $A$ be a subgroup of $G$. Then we say that $A$ is \emph{$\sigma$-quasinormal} in $G$ if $A$ is modular and $\sigma$-subnormal in $G$. In this note we give the answer to this question. But before continuing, consider the following {\bf Example 1.2.} Let $p > q, r, t$ be distinct primes, where $t$ divides $r-1$. Let $Q$ be a simple ${\mathbb F}_{q}C_{p}$-module which is faithful for $C_{p}$, let $C_{r}\rtimes C_{t}$ be a non-abelian group of order $rt$, and let $A=C_{t}$. Finally, let $G=(Q\rtimes C_{p})\times (C_{r}\rtimes C_{t})$ and $B$ be a subgroup of order $q$ in $Q$. Then $ B < Q$ since $p > q$ (faithfulness forces $|Q|=q^{n}$, where $n>1$ is the multiplicative order of $q$ modulo $p$). It is not difficult to show that $A$ is modular in $G$ (see \cite[Lemma 5.1.8]{Schm}). On the other hand, $A$ is $\sigma$-subnormal in $G$, where $\sigma =\{\{q, r, t\}, \{q, r, t\}'\}$. Hence $A$ is $\sigma$-quasinormal in $G$. It is clear also that $A$ is not subnormal in $G$, so $A$ is not quasinormal in $G$. Finally, note that $B$ is not modular in $G$ by Lemma 2.2 below. A chief factor $H/K$ of $G$ is said to be \emph{$\sigma$-central} in $G$ \cite{commun} if the semidirect product $(H/K)\rtimes (G/C_{G}(H/K))$ is $\sigma$-primary. Note that $G$ is $\sigma$-nilpotent if and only if every chief factor of $G$ is $\sigma$-central in $G$ \cite{1}. A subgroup $A$ of $G$ is said to be: \emph{$\sigma$-seminormal in $G$} (J.C. Beidleman) if $x\in N_{G}(A)$ for all $x\in G$ such that $\sigma (|x|)\cap \sigma (A)=\emptyset$; \emph{seminormal in $G$} if $x\in N_{G}(A)$ for all $x\in G$ such that $\pi (|x|)\cap \pi (A)=\emptyset$. Our main goal here is to prove the following {\bf Theorem C.} {\sl Let $A$ be a $\sigma$-quasinormal subgroup of $G$.
Then the following statements hold:} (i) {\sl If $G$ possesses a Hall $\sigma_{i}$-subgroup, then $A$ permutes with each Hall $\sigma_{i}$-subgroup of $G$. } (ii) {\sl The quotients $A^{G}/A_{G}$ and $G/C_{G}(A^{G}/A_{G}) $ are $\sigma$-nilpotent, and } (iii) {\sl Every chief factor of $G$ between $A^{G}$ and $A_{G}$ is $\sigma$-central in $G$. } (iv) {\sl For every $i$ such that $\sigma _{i} \in \sigma (G/C_{G}(A^{G}/A_{G}))$ we have $\sigma _{i} \in \sigma (A^{G}/A_{G}).$ } (v) {\sl $A$ is $\sigma$-seminormal in $G$.} The subgroup $A$ of $G$ is subnormal in $G$ if and only if it is $\sigma$-subnormal in $G$, where $\sigma=\sigma ^{1} =\{\{2\}, \{3\}, \ldots \}$ (we use here the terminology in \cite{alg12}). It is clear also that $G$ is nilpotent if and only if $G$ is $ \sigma ^{1}$-nilpotent, and a chief factor $H/K$ of $G$ is central in $G$ if and only if $H/K$ is $ \sigma ^{1}$-central in $G$. Therefore Theorem B is a special case of Theorem C, when $\sigma = \sigma ^{1}$. In the other classical case, when $\sigma =\sigma ^{\pi}=\{\pi, \pi'\}$: $G$ is $\sigma ^{\pi}$-nilpotent if and only if $G$ is \emph{$\pi$-decomposable}, that is, $G=O_{\pi}(G)\times O_{\pi'}(G)$; a subgroup $A$ of $G$ is $\sigma ^{\pi}$-subnormal in $G$ if and only if there is a subgroup chain $$A=A_{0} \leq A_{1} \leq \cdots \leq A_{n}=G$$ such that either $A_{i-1} \trianglelefteq A_{i}$ or $A_{i}/(A_{i-1})_{A_{i}}$ is a $\pi _{0}$-group, where $\pi _{0}\in \{\pi, \pi'\}$, for all $i=1, \ldots , n$. Thus, in this case we get from Theorem C the following {\bf Corollary 1.3.} {\sl Suppose that $A$ is a modular subgroup of $G$ and there is a subgroup chain $$A=A_{0} \leq A_{1} \leq \cdots \leq A_{n}=G$$ such that either $A_{i-1} \trianglelefteq A_{i}$ or $A_{i}/(A_{i-1})_{A_{i}}$ is a $\pi _{0}$-group, where $\pi _{0}\in \{\pi, \pi'\}$, for all $i=1, \ldots , n$.
Then the following statements hold:} (i) {\sl If $G$ possesses a Hall $\pi _{0}$-subgroup, where $\pi _{0}\in \{\pi, \pi'\}$, then $A$ permutes with each Hall $\pi _{0}$-subgroup of $G$. } (ii) {\sl The quotients $A^{G}/A_{G}$ and $G/C_{G}(A^{G}/A_{G}) $ are $\pi$-decomposable, and } (iii) {\sl For every chief factor $H/K$ of $G$ between $A^{G}$ and $A_{G}$ the semidirect product $(H/K)\rtimes (G/C_{G}(H/K))$ is either a $\pi$-group or a $\pi'$-group. } In fact, in the theory of $\pi$-soluble groups ($\pi= \{p_{1}, \ldots , p_{n}\}$) we deal with the partition $\sigma =\sigma ^{1\pi }=\{\{p_{1}\}, \ldots , \{p_{n}\}, \pi'\}$ of $\Bbb{P}$. Note that $G$ is $\sigma ^{1\pi }$-nilpotent if and only if $G$ is \emph{$\pi$-special} \cite{Cun2}, that is, $G=O_{p_{1}}(G)\times \cdots \times O_{p_{n}}(G)\times O_{\pi'}(G)$. A subgroup $A$ of $G$ is $\sigma ^{1\pi }$-subnormal in $G$ if and only if it is $\frak{F}$-subnormal in $G$ in the sense of Kegel \cite{KegSubn}, where $\frak{F}$ is the class of all $\pi'$-groups. Therefore, in this case we get from Theorem C the following {\bf Corollary 1.4.} {\sl Suppose that $A$ is a modular subgroup of $G$ and there is a subgroup chain $$A=A_{0} \leq A_{1} \leq \cdots \leq A_{n}=G$$ such that either $A_{i-1} \trianglelefteq A_{i}$ or $A_{i}/(A_{i-1})_{A_{i}}$ is a $\pi'$-group. Then the following statements hold:} (i) {\sl $A$ permutes with every Sylow $p$-subgroup of $G$ for all $p\in \pi$, and if $G$ possesses a Hall $\pi'$-subgroup, then $A$ permutes with each Hall $\pi'$-subgroup of $G$. } (ii) {\sl The quotients $A^{G}/A_{G}$ and $G/C_{G}(A^{G}/A_{G}) $ are $\pi$-special, and } (iii) {\sl For every non-central chief factor $H/K$ of $G$ between $A^{G}$ and $A_{G}$ the semidirect product $(H/K)\rtimes (G/C_{G}(H/K))$ is a $\pi'$-group. 
} \section{Preliminaries} If $G=A\rtimes \langle t \rangle$ is non-abelian, where $A$ is an elementary abelian $p$-group and $t$ is an element of prime order $q\ne p$ inducing a non-trivial power automorphism on $A$, then we say that $G$ is a \emph{$P$-group of type $(p, q)$} (see \cite[p. 49]{Schm}). {\bf Lemma 2.1} (See Lemma 2.2.2(d) in \cite{Schm}). {\sl If $G=A\rtimes \langle t \rangle$ is a $P$-group of type $(p, q)$, then $\langle t \rangle^{G}=G$.} The following remarkable result of R. Schmidt plays a key role in the proof of Theorem C. {\bf Lemma 2.2} (See Theorem 5.1.14 in \cite{Schm}). {\sl Let $M$ be a modular subgroup of $G$ with $M_{G}=1$. Then $ G=S_{1}\times \cdots \times S_{r} \times K,$ where $0\leq r\in \mathbb{Z}$ and for all $i, j \in \{1, \ldots , r\}$, } (a) {\sl $S_{i}$ is a non-abelian $P$-group,} (b) {\sl $(|S_{i}|, |S_{j}|)=1=(|S_{i}|, |K|)$ for all $i\ne j$, } (c) {\sl $M=Q_{1}\times \cdots \times Q_{r}\times (M\cap K)$ and $Q_{i}$ is a non-normal Sylow subgroup of $S_{i}$, } (d) {\sl $M\cap K$ is quasinormal in $G$. } {\bf Lemma 2.3} (See Lemma 2.6 in \cite{1}). {\sl Let $A$, $B$ and $N$ be subgroups of $G$, where $A$ is $\sigma$-subnormal and $N$ is normal in $G$. Then:} (1) {\sl $A \cap B$ is $\sigma$-subnormal in $B$.} (2) {\sl $AN/N$ is $\sigma$-subnormal in $G/N$.} (3) {\sl $A\cap H$ is a Hall $\sigma _{i}$-subgroup of $A$ for every Hall $\sigma _{i}$-subgroup $H$ of $G$. } The following lemma is a corollary of Lemma 2.3 and general properties of modular subgroups \cite[p. 201]{Schm}. {\bf Lemma 2.4.} {\sl Let $A$, $B$ and $N$ be subgroups of $G$, where $A$ is $\sigma$-quasinormal and $N$ is normal in $G$.} (1) {\sl If $A \leq B$, then $A$ is $\sigma$-quasinormal in $B$.} (2) {\sl $AN/N$ is $\sigma$-quasinormal in $G/N$}. A normal subgroup $E$ of $G$ is said to be \emph{${\sigma}$-hypercentral} (in $G$) if either $E=1$ or every chief factor of $G$ below $E$ is $\sigma$-central.
We use $Z_{\sigma}(G)$ to denote the \emph{${\sigma}$-hypercentre } of $G$ \cite{commun}, that is, the product of all normal ${\sigma}$-hypercentral subgroups of $G$. {\bf Lemma 2.5. } {\sl Every chief factor of $G$ below $Z_{\sigma}(G)$ is $\sigma$-central in $G$. } {\bf Proof. } It is enough to consider the case when $Z=A_{1}A_{2}$, where $A_{1}$ and $A_{2}$ are normal ${\sigma}$-hypercentral subgroups of $G$. Moreover, in view of the Jordan-H\"{o}lder theorem for the chief series, it is enough to show that if $A_{1}\leq K < H \leq A_{1}A_{2}$, then $H/K$ is $\sigma$-central. But in this case we have $H=A_{1}(H\cap A_{2})$, where $H\cap A_{2}\nleq K$ and so from the $G$-isomorphism $(H\cap A_{2})/(K\cap A_{2})\simeq (H\cap A_{2})K/K=H/K$ we get that $C_{G}(H/K)=C_{G}((H\cap A_{2})/(K\cap A_{2}))$ and hence $H/K$ is $\sigma$-central in $G$. The lemma is proved. {\bf Lemma 2.6.} {\sl Let $N$ be a normal $\sigma _{i}$-subgroup of $G$. Then $N\leq Z_{\sigma}(G)$ if and only if $O^{\sigma _{i}}(G)\leq C_{G}(N)$. } {\bf Proof. } If $O^{\sigma _{i}}(G)\leq C_{G}(N)$, then for every chief factor $H/K$ of $G$ below $N$ both groups $H/K$ and $G/C_{G}(H/K)$ are ${\sigma _{i}}$-groups since $G/O^{\sigma _{i}}(G)$ is a ${\sigma _{i}}$-group. Hence $(H/K)\rtimes (G/C_{G}(H/K))$ is $\sigma$-primary. Thus $N\leq Z_{\sigma}(G)$. Now assume that $N\leq Z_{\sigma}(G)$. Let $1= Z_{0} < Z_{1} < \cdots < Z_{t} = N$ be a chief series of $G$ below $N$ and $C_{i}= C_{G}(Z_{i}/Z_{i-1})$. Let $C= C_{1} \cap \cdots \cap C_{t}$. Then $G/C$ is a ${\sigma _{i}}$-group. On the other hand, $C/C_{G}(N)\simeq A\leq \text{Aut}(N)$ stabilizes the series $1= Z_{0} < Z_{1} < \cdots < Z_{t} = N$, so $C/C_{G}(N)$ is a $\pi (N)$-group by \cite[Ch. A, Corollary 12.4(a)]{DH}. Hence $G/C_{G}(N)$ is a ${\sigma _{i}}$-group and so $O^{\sigma _{i}}(G)\leq C_{G}(N)$. The lemma is proved. \section{Proof of Theorem C} Suppose that this theorem is false and let $G$ be a counterexample of minimal order.
Then $1 < A < G$. We can assume without loss of generality that $\sigma (A)=\{\sigma _{1}, \ldots , \sigma _{m}\}$. (1) {\sl Statement (i) holds for $G$.} Suppose that this assertion is false. Then for some Hall $\sigma_{i}$-subgroup $V$ of $G$ we have $AV\ne VA$. It is clear that $V$ is a Hall $\sigma_{i}$-subgroup of $\langle A, V \rangle$. On the other hand, $A$ is $\sigma$-quasinormal in $\langle A, V \rangle$ by Lemma 2.4(1). Therefore in the case when $\langle A, V \rangle < G$, we have $AV=VA$ by the choice of $G$. Thus $\langle A, V \rangle =G$. Since $A$ is $\sigma$-subnormal in $G$, there is a subgroup chain $A=A_{0} \leq A_{1} \leq \cdots \leq A_{n}=G$ such that either $A_{i-1}\trianglelefteq A_{i}$ or $A_{i}/(A_{i-1})_{A_{i}}$ is ${\sigma}$-primary for all $i=1, \ldots , n$. We can assume without loss of generality that $M=A_{n-1} < G$. Then $A$ permutes with every Hall $\sigma_{i}$-subgroup of $M$ by the choice of $G$. Moreover, the modularity of $A$ in $G$ implies that $$M=M\cap \langle A, V \rangle=\langle A, M\cap V\rangle .$$ On the other hand, by Lemma 2.3(3), $M\cap V$ is a Hall $\sigma _{i}$-subgroup of $M$. Hence $M= A(M\cap V)=(M\cap V)A$ by the choice of $G$. If $V\leq M_{G}$, then $A (M\cap V)=A V=VA$, contrary to our assumption; hence $V\nleq M_{G}$. Now note that $VM=MV$. Indeed, if $M$ is normal in $G$, this is clear. Otherwise, $G/M_{G}$ is $\sigma$-primary and so $G=MV=VM$ since $V\nleq M_{G}$ and $V$ is a Hall $\sigma_{i}$-subgroup of $G$. Therefore $$VA=V(M\cap V)A=VM=MV =A(M\cap V)V= AV.$$ This contradiction completes the proof of (1). (2) $A_{G}=1$. Suppose that $A_{G}\ne 1$ and let $R$ be a minimal normal subgroup of $G$ contained in $A_{G}$. Then $A/R$ is $\sigma$-quasinormal in $G/R$ by Lemma 2.4(2), so the hypothesis holds for $(G/R, A/R)$. Therefore the choice of $G$ implies that Statements (ii)--(v) hold for $(G/R, A/R)$.
Hence $$(A/R)^{G/R}/(A/R)_{G/R}=(A^{G}/R)/(A_{G}/R)\simeq A^{G}/A_{G}$$ and $$(G/R)/C_{G/R}((A/R)^{G/R}/(A/R)_{G/R})= (G/R)/(C_{G}(A^{G}/A_{G})/R)\simeq G/C_{G}(A^{G}/A_{G})$$ are $ \sigma $-nilpotent, so Statement (ii) holds for $G$. Now let $T/L$ be any chief factor of $G$ between $A^{G}$ and $ A_{G}$. Then $(T/R)/(L/R)$ is a chief factor of $G/R$ between $(A/R)^{G/R}$ and $ (A/R)_{G/R}$ and so $(T/R)/(L/R)$ is $\sigma$-central in $G/R$, that is, $$((T/R)/(L/R))\rtimes ((G/R)/C_{(G/R)}((T/R)/(L/R)))$$ is $\sigma$-primary. Since the factors $(T/R)/(L/R)$ and $T/L$ are $G$-isomorphic, it follows that $(T/L)\rtimes (G/C_{G}(T/L))$ is $\sigma$-primary too. Hence $T/L$ is $\sigma$-central in $G$. Thus Statement (iii) holds for $G$. If $i$ is such that $$\sigma _{i} \cap \pi (G/C_{G}(A^{G}/A_{G})) =\sigma _{i} \cap \pi ((G/R)/C_{G/R}((A/R)^{G/R}/(A/R)_{G/R}))\ne \emptyset,$$ then $$\sigma _{i} \cap \pi (A^{G}/A_{G}) =\sigma _{i} \cap \pi ((A/R)^{G/R}/(A/R)_{G/R})\ne \emptyset$$ and so Statement (iv) holds for $G$ too. Finally, if $x\in G$ is such that $\sigma (A)\cap \sigma (|x|)=\emptyset$, then $\sigma (A/R)\cap \sigma (|xR|)=\emptyset$, so $xR\in N_{G/R}(A/R)=N_{G}(A)/R$ and hence Statement (v) holds for $G$. Therefore, in view of Claim (1), the conclusion of the theorem holds for $G$, which contradicts the choice of $G$. Hence $A_{G}=1$. (3) {\sl If $A$ is a $\sigma _{i}$-group, then $A\leq O_{\sigma _{i}}(G)$. } It is enough to show that if $A$ is any $\sigma $-subnormal $\sigma _{i}$-subgroup of $G$, then $A\leq O_{\sigma _{i}}(G)$. Assume that this is false and let $G$ be a counterexample of minimal order. Then $1 < A < G$. Let $D=O_{\sigma _{i}}(G)$, $R$ be a minimal normal subgroup of $G$ and $O/R=O_{\sigma_{i}}(G/R)$. Then the choice of $G$ and Lemma 2.3(2) imply that $AR/R\leq O/R$. Therefore $R\nleq D $, so $D=1$ and $A\cap R < R$. It is clear also that $O^{\sigma _{i}}(R)=R$. Suppose that $L= A\cap R\ne 1$.
Lemma 2.3(1) implies that $L$ is $\sigma $-subnormal in $R$. If $R < G$, the choice of $G$ implies that $L\leq O_{\sigma _{i}}(R)\leq D$ since $O_{\sigma _{i}}(R)$ is a characteristic subgroup of $R$. But then $D\ne 1$, a contradiction. Hence $R=G$ is a simple group, which is also impossible since $1 < A < G$. Therefore $R\cap A=1$. If $O < G$, the choice of $G$ implies that $A\leq O_{\sigma _{i}}(O)\leq D=1$. Therefore $G/R=O/R$ is a $\sigma _{i}$-group. Hence $R$ is the unique minimal normal subgroup of $G$. It is clear also that $R\nleq\Phi (G)$, so $C_{G}(R)\leq R$ by \cite[Ch. A, 15.2]{DH}. Now we show that $G=RA$. Indeed, if $RA < G$, then the choice of $G$ and Lemma 2.3(1) imply that $A\leq O_{\sigma _{i}}(RA)$ and so $A= O_{\sigma _{i}}(RA)$ since $O_{\sigma _{i}}(R)=1$, which implies that $RA=R\times A$. But then $A\leq C_{G}(R)\leq R$ and so $A=1$ since $A\cap R=1$. This contradiction shows that $G=RA$. Since $A$ is $\sigma $-subnormal in $G$, there is a subgroup $M$ such that $A\leq M < G$ and either $M \trianglelefteq G$ or $G/M_{G}$ is ${\sigma}$-primary. Since $R$ is the unique minimal normal subgroup of $G$ and $A\leq M < G=RA$, we have $R\nleq M$ and hence $G/M_{G}=G/1$ is a ${\sigma}_{i}$-group. Therefore $A\leq O_{\sigma _{i}}(G)=G$. This contradiction completes the proof of (3). (4) {\sl $A\leq O_{\sigma_{1}}(G) \times \cdots \times O_{\sigma _{m}}(G)$. Hence $A^{G}=O_{\sigma_{{1}}}(A^{G}) \times \cdots \times O_{\sigma _{{m}}}(A^{G})$.} Claim (2) and Lemma 2.2(c)(d) imply that $A=A_{1}\times \cdots \times A_{m}$, where $A_{i}$ is a Hall $\sigma _{i}$-subgroup of $A$ for all $i=1, \ldots , m$. On the other hand, since $A$ is $\sigma $-subnormal in $G$, we have $A_{i}\leq O_{\sigma _{i}}(G)$ by Claim (3). Hence we have (4). (5) {\sl Statement (iii) holds for $G$.} Let $T/L$ be any chief factor of $ G$ below $A^{G}$. Suppose that $T/L$ is not $\sigma$-central in $G$.
Theorem B(ii) then implies that $A$ is not quasinormal in $G$, so in view of Lemma 2.2, we have $G=S_{1}\times \cdots \times S_{r} \times K,$ where for all $i, j \in \{1, \ldots , r\}$ the following hold: (a) {\sl $S_{i}$ is a non-abelian $P$-group,} (b) {\sl $(|S_{i}|, |S_{j}|)=1=(|S_{i}|, |K|)$ for $i\ne j$, } (c) {\sl $A=Q_{1}\times \cdots \times Q_{r}\times (A\cap K)$ and $Q_{i}$ is a non-normal Sylow subgroup of $S_{i}$, and } (d) {\sl $A\cap K$ is quasinormal in $G$. } Hence, in view of Claim (4), $$A^{G}=Q_{1}^{G}\times \cdots \times Q_{r}^{G}\times (A\cap K)^{G}=O_{\sigma_{{1}}}(A^{G}) \times \cdots \times O_{\sigma _{{m}}}(A^{G}), $$ where $(A\cap K)^{G} \leq Z_{\infty}(G)\leq Z_{\sigma}(G)$ by Theorem B(ii) since $(A\cap K)_{G}\leq A_{G}=1$ by Claim (2). Therefore, in view of the Jordan-H\"{o}lder theorem for the chief series, we can assume without loss of generality that $T\leq S_{k}$ for some $k$. Now note that for all $i, j$ we have either $S_{i}\leq O_{\sigma_{j}}(A^{G})$ or $S_{i}\cap O_{\sigma_{j}}(A^{G})=1$. Indeed, assume that $S_{i}\cap O_{\sigma_{j}}(A^{G})\ne 1$. It is clear that for some $t$ we have $Q_{i}\leq O_{\sigma_{t}}(A^{G})$. Then $ Q_{i}^{G}=S_{i} \leq O_{\sigma_{{t}}}(A^{G})$ by Lemma 2.1. Hence $j=t$ since $O_{\sigma_{{j}}}(A^{G})\cap O_{\sigma_{{t}}}(A^{G})=1$ for $j\ne t$. Therefore all $S_{i}$ are $\sigma$-primary. Moreover, if $S_{i}$ is a $\sigma _{l}$-group, then $G/C_{G}(S_{i})$ is a $\sigma _{l}$-group since $G=S_{1}\times \cdots \times S_{r} \times K.$ Therefore $S_{k}\leq Z_{\sigma}(G)$ by Lemma 2.6 and so $T/L$ is $\sigma$-central in $G$ by Lemma 2.5, a contradiction. Hence Statement (iii) holds for $G$.
(6) {\sl Statements (ii) and (iv) hold for $G$.} From Claim (4) we know that $A^{G} =O_{\sigma_{1}}(A^{G}) \times \cdots \times O_{\sigma _{m}}(A^{G}).$ Then $$C_{G}(A^{G})= C_{G}(O_{\sigma_{1}}(A^{G}) ) \cap \cdots \cap C_{G}(O_{\sigma_{m}}(A^{G}) ).$$ From Claims (2), (4) and Lemma 2.6 we know that $G/C_{G}(O_{\sigma_{i}}(A^{G}))$ is a $\sigma _{i}$-group for all $i=1, \dots, m$. Therefore, in view of \cite[Ch. I, 9.6]{hupp}, $$G/C_{G}(A^{G})=G/(C_{G}(O_{\sigma_{1}}(A^{G})) \cap \cdots \cap C_{G}(O_{\sigma_{m}}(A^{G})))$$$$\simeq V\leq (G/C_{G}(O_{\sigma_{1}}(A^{G}))) \times \cdots \times (G/C_{G}(O_{\sigma_{m}}(A^{G}))) $$ is $\sigma$-nilpotent, and for every $i$ such that $\sigma _{i} \in \sigma (G/C_{G}(A^{G}))$ we have $\sigma _{i} \in \sigma (A^{G}).$ Hence Statements (ii) and (iv) hold for $G$. (7) {\sl Statement (v) holds for $G$.} Suppose that $x\in G$ is such that $\sigma (A)\cap \sigma (|x|)=\emptyset$. Then the modularity of $A$ and Claim (4) imply that $A=(O_{\sigma_{1}}(A^{G}) \times \cdots \times O_{\sigma _{m}}(A^{G}))\cap \langle A, \langle x \rangle \rangle $ is normal in $\langle A, \langle x \rangle \rangle $, so $x \in N_{G}(A)$. Hence we have (7). From Claims (1), (5)--(7) it follows that the conclusion of the theorem holds for $G$, which contradicts the choice of $G$. The theorem is proved. \end{document}
\begin{document} \title{Reconfiguration graphs of shortest paths\thanks{This project was initiated as part of the REUF program at AIM, NSF grant DMS 1620073.} \\ } \author{ John Asplund\\ {\small Department of Technology and Mathematics,} \\ {\small Dalton State College,} \\ {\small Dalton, GA 30720, USA} \\ {\small [email protected]}\\ \\ Kossi Edoh\\ {\small Department of Mathematics} \\ {\small North Carolina Agricultural and Technical State University} \\ {\small Greensboro, NC 27411, USA} \\ {\small [email protected]} \\ \\ Ruth Haas \thanks{Partially supported by Simons Foundation Award Number 281291}\\ {\small Department of Mathematics,} \\ {\small University of Hawaii at Manoa,}\\ {\small Honolulu, Hawaii 96822, USA}\\ {\small [email protected]}\\ {\small and Smith College, Northampton MA 01063}\\ \\ Yulia Hristova\\ {\small Department of Mathematics and Statistics,}\\ {\small University of Michigan - Dearborn,}\\ {\small Dearborn, MI 48128, USA}\\ {\small [email protected]}\\ \\ Beth Novick\\ {\small Department of Mathematical Sciences,} \\ {\small Clemson University,} \\ {\small Clemson, SC 29634, USA} \\ {\small [email protected]}\\ \\ Brett Werner\\ {\small Department of Mathematics, Computer Science \& Cooperative Engineering,} \\ {\small University of St. Thomas,} \\ {\small Houston, TX 77006, USA} \\ {\small [email protected]}\\ \\ } \maketitle \begin{abstract} For a graph $G$ and $a,b\in V(G)$, the shortest path reconfiguration graph of $G$ with respect to $a$ and $b$ is denoted by $S(G,a,b)$. The vertex set of $S(G,a,b)$ is the set of all shortest paths between $a$ and $b$ in $G$. Two vertices in $V(S(G,a,b))$ are adjacent, if their corresponding paths in $G$ differ by exactly one vertex. This paper examines the properties of shortest path graphs. 
Results include establishing classes of graphs that appear as shortest path graphs, decompositions and sums involving shortest path graphs, and the complete classification of shortest path graphs with girth $5$ or greater. We also show that the shortest path graph of a grid graph is an induced subgraph of a lattice. \end{abstract} \section{Introduction} The goal of reconfiguration problems is to determine whether it is possible to transform one feasible solution $s$ into a target feasible solution $t$ in a step-by-step manner (a reconfiguration) such that each intermediate solution is also feasible. Such transformations can be studied via the reconfiguration graph, in which the vertices represent the feasible solutions and there is an edge between two vertices when it is possible to get from one feasible solution to another in one application of the reconfiguration rule. Reconfiguration versions of vertex coloring \cite{BJLPP,BC,CJV,CJV2,CJV3}, independent sets \cite{HD,IDHPSUU,KMM}, matchings \cite{IDHPSUU}, list-colorings \cite{IKD}, matroid bases \cite{IDHPSUU}, and subsets of a (multi)set of numbers \cite{EW}, have been studied. This paper concerns the reconfiguration of shortest paths in a graph. \begin{definition} Let $G$ be a graph with distinct vertices $a$ and $b$. The \emph{shortest path graph} of $G$ with respect to $a$ and $b$ is the graph $S(G,a,b)$ in which every vertex $U$ corresponds to a shortest path in $G$ between $a$ and $b$, and two vertices $U,W \in V(S(G,a,b))$ are adjacent if and only if their corresponding paths in $G$ differ in exactly one vertex. \end{definition} While there have been investigations into shortest path reconfiguration in \cite{B,KMM2,KMM}, these papers focused on the complexity of changing one shortest path into another\footnote{The shortest path graph is denoted by SP$(G,a,b)$ in \cite{B}.}. It was found in \cite{B} that this problem is PSPACE-complete.
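For small graphs, the definition above can be realized by brute force. The following sketch (our illustration, not part of the paper; the adjacency-dictionary representation and all identifier names are assumptions made for the example) enumerates all $a,b$-geodesics by a BFS-guided search and joins two of them when they differ in exactly one position.

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    """Distance from `source` to every reachable vertex."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def geodesics(adj, a, b):
    """All shortest a,b-paths, each returned as a tuple of vertices."""
    dist = bfs_distances(adj, a)
    if b not in dist:
        return []
    paths = []
    def extend(path):
        u = path[-1]
        if u == b:
            paths.append(tuple(path))
            return
        for w in adj[u]:
            # Only follow edges that advance along some shortest path.
            if dist.get(w) == dist[u] + 1 and dist[w] <= dist[b]:
                extend(path + [w])
    extend([a])
    return paths

def shortest_path_graph(adj, a, b):
    """Vertices: a,b-geodesics; edges: pairs differing in exactly one vertex."""
    verts = geodesics(adj, a, b)
    edges = [(p, q) for p, q in combinations(verts, 2)
             if sum(x != y for x, y in zip(p, q)) == 1]
    return verts, edges

# K_{2,3} with a, b the two vertices in the part of size 2.
k23 = {"a": [1, 2, 3], "b": [1, 2, 3],
       1: ["a", "b"], 2: ["a", "b"], 3: ["a", "b"]}
V, E = shortest_path_graph(k23, "a", "b")
```

Running this on $K_{2,3}$ yields three geodesics, pairwise adjacent, i.e.\ a complete graph on three vertices, in agreement with the later observation that $S(K_{2,n},a,b)\cong K_n$.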
In contrast, the focus of our work is on the structure of shortest path graphs, rather than algorithms. Our main goal is to understand which graphs occur as shortest path graphs. A similar study on classifying color reconfiguration graphs can be found in \cite{beier2016classifying}. The paper is organized as follows. Some definitions and notations are provided in Section~\ref{notation}. Section~\ref{general} contains some useful properties and examples. In particular, we show that paths and complete graphs are shortest path graphs. In Section~\ref{one-two_sum} we show that the family of shortest path graphs is closed under disjoint union and under Cartesian products. We establish a decomposition result which suggests that, typically, $4$-cycles are prevalent in shortest path graphs. Thus, we would expect the structure of shortest path graphs containing no $4$-cycles to be rather simple. This is substantiated in Section~\ref{girth5}, where we give a remarkably simple characterization of shortest path graphs with girth $5$ or greater. In the process of establishing this characterization, we show that the claw and the odd cycle $C_k$, for $k>3$ are, in a sense, forcing structures. As a consequence, we determine precisely which cycles are shortest path graphs; that the claw, by itself, is not a shortest path graph; and that a tree cannot be a shortest path graph unless it is a path. In contrast, our main theorem in the final section of the paper involves a class of shortest path graphs which contain many $4$-cycles. We establish that the shortest path graph of a grid graph is an induced subgraph of the lattice. One consequence of our construction is that the shortest path graph of the hypercube $Q_n$ with respect to two diametric vertices is a Cayley graph on the symmetric group $S_n$. \setcounter{section}{1} \section{Preliminaries}\label{notation} Let $G$ be a graph with distinct vertices $a$ and $b$. 
A shortest $a,b$-path in $G$ is a path between $a$ and $b$ of length $d_G(a,b)$. When it causes no confusion, we write $d(a,b)$ to mean $d_G(a,b)$. We often refer to a shortest path as a geodesic and to a shortest $a,b$-path as an $a,b$-{\em geodesic}. Note that any subpath of a geodesic is a geodesic. If the paths corresponding to two adjacent vertices $U, W$ in $S(G,a,b)$ are $av_1\cdots v_{i-1}v_i v_{i+1}\cdots v_pb$ and $av_1\cdots v_{i-1}v_i'v_{i+1}\cdots v_pb$, we say that $U$ and $W$ differ in the $i^{{\rm th}}$ index, or that $i$ is the \textit{difference index} of the edge $UW$. We call the graph $G$ the \textit{base graph} of $S(G,a,b)$, and we say that a graph $H$ is a shortest path graph if there exists a graph $G$ with $a,b\in V(G)$ such that $S(G,a,b)\cong H$. Several examples are given in Figure~\ref{moreExamples}. With a slight abuse of notation, a label for a vertex in the shortest path graph will often also represent the corresponding path in its base graph. To avoid confusion between vertices in $G$ and vertices in $S(G,a,b)$, throughout this paper, we will use lower case letters to denote vertices in the base graph, and upper case letters to denote vertices in $S(G, a,b)$. It can easily be seen that several base graphs can have the same shortest path graph. For example, if $e\in E(G)$ is an edge not in any $a,b$-geodesic, then $S(G, a, b) \cong S(G\setminus e, a, b)$. Accordingly, we define the \textit{reduced graph,} $(G,a,b)$, to be the graph obtained from $G$ by deleting any edge or vertex that does not occur in any $a,b$-geodesic, and contracting any edge that occurs in all $a,b$-geodesics. If the reduced graph $(G, a, b)$ is again $G$ then $G$ is called a {\em reduced graph} with respect to $a, b$. We may omit the reference to $a,b$ when it is clear from context. \begin{figure} \caption{Base graph $G$ (left) with several shortest path graphs (right).
} \label{moreExamples} \end{figure} We conclude this section with a review of some basic definitions. If $G_1$ and $G_2$ are graphs then $G_1\cup G_2$ is defined to be the graph whose vertex set is $V(G_1)\cup V(G_2)$ and whose edge set is $E(G_1) \cup E(G_2)$. When $V(G_1)\cap V(G_2) =\varnothing$ we say that $G_1$ and $G_2$ are disjoint, and refer to $G_1\cup G_2$ as the disjoint union of $G_1$ and $G_2$. For two graphs $G_1$ and $G_2$, the Cartesian product $G_1\,\square\, G_2$ is the graph with vertex set $V(G_1)\times V(G_2)$ and edge set $\{(v_1,v_2)(u_1,u_2)\,:\, v_i,u_i\in V(G_i) \text{ for } i\in\{1,2\}, \text{ and either } v_1=u_1 \text{ and } v_2\sim u_2, \text{ or } v_1\sim u_1 \text{ and } v_2=u_2\}.$ If $U_1$ is a $v_0,v_{\ell}$-path and $U_2$ is a $v_{\ell},v_m$-path, where $U_1$ and $U_2$ have only one vertex in common, namely $v_{\ell}$, then the concatenation of $U_1$ and $U_2$ is the $v_0,v_m$-path $U_1\circ U_2=v_0v_1 \ldots v_{\ell} v_{\ell+1} \ldots v_m $. A hypercube of dimension $n$, denoted $Q_n$, is the graph formed by labeling a vertex with each of the $2^n$ binary sequences of length $n$, and joining two vertices with an edge if and only if their sequences differ in exactly one position. \section{General Properties, Examples, and Constructions}\label{general} In this section we answer some natural questions as to which classes of graphs are shortest path graphs. We easily see that the empty graph is a shortest path graph. We show that paths and complete graphs are shortest path graphs as well. \begin{proposition}\label{emptyGraph} Let $G$ be a graph formed by joining $t$ paths of equal length greater than $2$, each having the same end vertices, $a$ and $b$, and with all other vertices between any two paths being distinct. Then $S(G,a,b)=\overline{K_t}$. \end{proposition} Before finding shortest path graphs with edges, we make the following simple observation which will be used implicitly throughout.
\begin{obs}\label{fundObs} Let $H$ be a shortest path graph. If $U_1U_2U_3$ is an induced path in $H$ then $U_1U_2$ and $U_2U_3$ have distinct difference indices. \end{obs} We use this observation to construct a family of graphs whose shortest path graphs are paths. \begin{lemma}\label{firstpath} For any $k\geq 1$, the path $P_k$ is a shortest path graph. \end{lemma} \begin{proof} For $k\ge 1$, define the graph $G_k$ by $V(G_k)= \{a,b,v_0, v_1, \ldots , v_{\lfloor k/2 \rfloor}, v_0',v_1', \ldots v'_{\lceil k/2 \rceil}\} $ and $E(G_k)= \{av_i, v_iv_i'\,:\, 0\le i \le \lfloor k/2 \rfloor\} \cup \{v_{i-1}v_i' \,:\, 1\le i \le \lceil k/2 \rceil\}\cup \{v_i'b\,:\, 0\le i \le \lceil k/2 \rceil \}. $ One checks that $P_k \cong S(G_k,a,b)$. \end{proof} \begin{lemma} For any $n\geq 1$ the complete graph $K_n$ is a shortest path graph. \end{lemma} \begin{proof} Let $a$ and $b$ be the vertices on the side of size two of the bipartition of $K_{2,n}$. Then $S(K_{2,n}, a,b) \cong K_n$. \end{proof} In fact, as we show in the proof of Theorem \ref{completeGraph}, any graph for which the shortest path graph is a complete graph must reduce to $K_{2,n}$. We will see later that in general there can be different reduced graphs that have the same shortest path graph. \begin{theorem}\label{completeGraph} $S(G,a,b)=K_n$ for some $n\in \mathbb{N}$ if and only if each pair of $a,b$-geodesics in $G$ differs at the same index. \end{theorem} \begin{proof} If every pair of $a,b$-geodesics differs only at the $i^{{\rm th}}$ index, for some $i$, then it is clear that $S(G,a,b)=K_n$, where $n$ is the number of $a,b$-geodesics in $G$. Conversely, suppose that $S(G,a,b)=K_n$ and that $U,V$ and $W\in V(S(G,a,b))$ are such that $UV$ and $VW$ have distinct difference indices. Then the paths $U$ and $W$ differ at two vertices, so there can be no edge between them in $S(G,a,b)$, contradicting completeness. Hence every pair of $a,b$-geodesics differs at the same index, and the reduced graph of $G$ is $K_{2,n}$.
\end{proof} It is clear that if two graphs give the same reduced graph with respect to $a, b$ then they have the same shortest path graph. It will be useful to be able to construct different graphs with the same reduced graph. The next result involves, in a sense, an operation which is the reverse of forming a reduced graph. \begin{proposition}\label{manybasegraphs} If $H=S(G,a,b)$ and $d_G(a,b) = k$, then for any $k'\geq k$ there exists a graph $G'$ with vertices $a, b'\in V(G')$ such that $d_{G'}(a, b') = k'$ and $H\cong S(G',a,b')$. \end{proposition} \begin{proof} Suppose $H= S(G, a, b)$. Define $G'$ as follows: $V(G')= V(G) \cup \{x_1, x_2, \dots x_{k'-k-1}, b'\}$, and $E(G') = E(G)\cup \{ bx_1, x_1x_2, \dots x_{k'-k-2} x_{k'-k-1}, x_{k'-k-1}b' \}$. It is clear that $H\cong S(G', a, b')$. \end{proof} \section{Decompositions and Sums}\label{one-two_sum} In the previous section we constructed a few special classes of shortest path graphs. In the present section we establish two methods of obtaining new shortest path graphs from old. In particular, we show that the family of shortest path graphs is closed under disjoint unions and is closed under Cartesian products. \begin{theorem}\label{disconnect} If $H_1$ and $H_2$ are shortest path graphs, then $H_1\cup H_2$ is a shortest path graph. \end{theorem} \begin{proof} By Proposition~\ref{manybasegraphs} we can choose disjoint base graphs $G_i$ for $H_i$, $i\in \{1,2\}$, such that $a_i,b_i\in V(G_i)$, with $d_{G_1}(a_1,b_1) = d_{G_2}(a_2,b_2)$ and with $H_i \cong S(G_i,a_i,b_i)$. Construct a graph $G$ as follows. Let $V(G) = V(G_1) \cup V(G_2) \cup \{a, b\}$ and $E(G) = E(G_1) \cup E(G_2) \cup \{aa_1,aa_2, b_1b, b_2b\}$. It is clear by the construction of $G$ that every $a,b$-geodesic corresponds to an $a_1,b_1$-geodesic through $G_1$ or an $a_2,b_2$-geodesic through $G_2$. In addition, if two shortest paths are adjacent in $S(G_i,a_i,b_i)$, $i \in \{1,2\}$, they are still adjacent in $S(G,a,b)$.
If $U_1$ and $U_2$ are $a,b$-geodesics in $G$ with $V(U_1)\cap V(G_1)\neq\varnothing $ and $V(U_2)\cap V(G_2)\neq\varnothing$, then since $a_1, b_1\in V(U_1)$ and $a_2, b_2\in V(U_2)$ we have $U_1\not\sim U_2$. Thus the result holds. \end{proof} Proposition \ref{concat} concerns the structure of the subgraph of a shortest path graph $H\cong S(G,a,b)$ induced by all $a,b$-geodesics containing a given vertex $v$. \begin{proposition}\label{concat} Let $G$ be a connected graph with $a, b\in V(G)$ and $d=d(a,b) \ge 2$. Let $H=S(G,a,b)$ and let $v$ be a vertex of $G$ which is on at least one $a,b$-geodesic. Let $H'$ be the subgraph of $H$ induced by all vertices corresponding to $a,b$-geodesics which contain $v$. Then \[ H' \cong S(G, a, v)\square S(G, v,b).\] Furthermore, if $G_1$ is any subgraph of $G$ containing all $a,v$-geodesics, and $G_2$ is any subgraph of $G$ containing all $v,b$-geodesics, then $ H'$ is isomorphic to $S(G_1,a,v) \square S(G_2,v,b).$ \end{proposition} \begin{proof} Let $H'$ be the subgraph of $H$ induced by all elements of $V(H)$ corresponding to $a,b$-geodesics in $G$ which contain the vertex $v$. These $a,b$-geodesics are precisely the concatenations $T_{av}\circ T_{vb}$ where $T_{av}$ is an $a,v$-geodesic and $T_{vb}$ is a $v,b$-geodesic. Hence we speak interchangeably about the elements in the vertex set of $H'$ and geodesic paths in $G$ of the form $T_{av}\circ T_{vb}$. By definition, the vertex set of $S(G,a,v) \,\square\, S(G,v,b)$ is the collection of ordered pairs $(T_{av},T_{vb})$ where $T_{av}$ is a vertex of $S(G,a,v)$, and $T_{vb}$ is a vertex of $S(G,v,b)$. It is clear that the mapping $f: V(H') \rightarrow V(S(G,a,v) \, \square \, S(G,v,b))$ given by $f(T_{av}\circ T_{vb}) = (T_{av},T_{vb})$ is a bijection. We claim that this bijection is edge-preserving. Indeed, let $(U_1,R_1)\sim (U_2,R_2)$ in $S(G,a,v) \,\square\, S(G,v,b)$.
Then by definition of Cartesian product, either (i) $U_1 \sim U_2$ in $S(G,a,v)$ and $R_1 = R_2$ or (ii) $U_1 = U_2$ and $R_1 \sim R_2$ in $S(G,v,b)$. In the former case $U_1$ and $U_2$ differ in exactly one index while $R_1 = R_2$, so $U_1\circ R_1 \sim U_2\circ R_2$ in $H'$. An analogous argument holds in case (ii). Now assume that $(U_1,R_1)$ is neither equal to nor adjacent to $(U_2,R_2)$ in $S(G,a,v) \,\square\, S(G,v,b)$. Then one of the following occurs: $U_1=U_2$, in which case $R_1$ and $R_2$ differ in at least two indices; $R_1=R_2$, in which case $U_1$ and $U_2$ differ in at least two indices; or $U_1\not= U_2$ and $R_1\not= R_2$, in which case $U_1$ and $U_2$ differ in at least one index and $R_1$ and $R_2$ differ in at least one index. In each of these cases $U_1\circ R_1$ and $U_2\circ R_2$ differ in at least two indices and hence are not adjacent, as required. To complete the proof, we simply note that $S(G,a,v)\cong S(G_1,a,v)$ and that $S(G,v,b)\cong S(G_2,v,b)$. \end{proof} For two graphs $G_1$ and $G_2$ with vertex sets such that $V(G_1)\cap V(G_2)=\{c\}$, the {\em one-sum of $G_1$ and $G_2$} is defined to be the graph $G$ with vertex set $V(G_1)\cup V(G_2)$ and edge set $E(G_1)\cup E(G_2)$. Theorem \ref{onesum} characterizes the shortest path graph of the one-sum of two graphs. \begin{theorem}\label{onesum} Let $G_1$ and $G_2$ be graphs with vertex sets such that $V(G_1)\cap V(G_2)=\{c\}$. Let $G$ be the one-sum of $G_1$ and $G_2$. Then for any $a\in V(G_1)\setminus\{c\}$ and any $b\in V(G_2)\setminus\{c\}$, $$S(G,a,b)\cong S(G_1,a,c)\,\square\, S(G_2,c,b).$$ \end{theorem} \begin{proof} Because $c$ is a cut-vertex, every $a,b$-geodesic in $G$ must contain $c$. The result now follows immediately from Proposition~\ref{concat}. \end{proof} \begin{corollary}\label{cartesian_closed} Let $H_1$ and $H_2$ be shortest path graphs. Then $H_1 \square H_2$ is also a shortest path graph.
\end{corollary} \begin{proof} Let $H_1 = S(G_1,a_1,b_1)$ and $H_2 = S(G_2,a_2,b_2)$, where $G_1$ and $G_2$ are reduced graphs. Identify $b_1$ with $a_2$ to obtain the one-sum $G$; by Theorem~\ref{onesum}, $S(G,a_1,b_2)\cong H_1 \square H_2$. \end{proof} The construction of Theorem \ref{onesum} leads to a family of graphs whose shortest path graphs are hypercubes. Let $J_k$ be the graph formed by taking one-sums of $k$ copies of $C_4$ as follows. For $i=1,\ldots, k$ let $a_i$ and $b_i$ be antipodal vertices in the $i^{{\rm th}}$ copy of $C_4$. Form $J_k$ by identifying $b_i$ and $a_{i+1}$ for $i=1, \ldots , k-1$. See Figure \ref{hypercubes}. \begin{corollary}\label{Cube} For $J_k$ as defined above, $S(J_k,a_1,b_k)\cong Q_{k}$ where $Q_{k}$ is a hypercube of dimension $k$. \end{corollary} \begin{figure} \caption{Base graph $J_k$ whose shortest path graph is a hypercube.} \label{hypercubes} \end{figure} \begin{proof} For $k=1$, $S(J_1,a_1,b_1)\cong P_1\cong Q_1$. The proof follows by induction on $k$ and from the statement and proof of Corollary \ref{cartesian_closed}. \end{proof} The next result decomposes a shortest path graph into a disjoint union of Cartesian products together with additional edges. Note that the following theorem holds for all $1\le i< d(a,b)$. Hence, there are actually $d(a,b)- 1$ different decompositions of this sort. \begin{theorem}\label{decomp} Let $H$ be a shortest path graph with reduced base graph $(G,a,b)$, where $d(a,b)\ge 2$. Fix an index $i$, with $1\le i < d(a,b)$. Let $\{v_{i_1},v_{i_2}, \ldots , v_{i_k}\}$ be the set of $k$ vertices in $V(G)$ of distance $i$ from $a$, and let $E_i$ be the set of all edges $UW \in E(H)$ having difference index $i$.
Then \begin{itemize} \item[(a)] deleting the edges $E_i$ from $H$ yields a graph with $k$ pairwise disjoint components, each of which is a Cartesian product: \[\displaystyle H\setminus E_i = \bigcup_{j=1}^k D_{i_j}, \] where $D_{i_j}=S(G,a,v_{i_j})\square S(G,v_{i_j}, b)$, and \item[(b)] for any two subgraphs $D_{i_j}$ and $D_{i_\ell}$, the edges in $E_i$ between $V(D_{i_j})$ and $V(D_{i_\ell})$ form a partial matching. \end{itemize} \end{theorem} \begin{proof} For each $j\in\{1, 2, \ldots, k\}$, it follows from Proposition~\ref{concat} that the set of vertices in $H$ corresponding to $a,b$-geodesics containing $v_{i_j}$ induces a subgraph isomorphic to $S(G,a,v_{i_j})\square S(G,v_{i_j},b)$. Since each $a,b$-geodesic in $G$ contains precisely one vertex $v_{i_j} \in \{v_{i_1}, \ldots ,v_{i_k}\}$, these $k$ induced subgraphs of $H$ are vertex disjoint. Furthermore, any pair $U$, $W$ of adjacent vertices in $H$ whose corresponding $a,b$-geodesics contain distinct vertices in $\{v_{i_1}, \ldots ,v_{i_k}\}$ must differ in index $i$. We conclude that $UW$ has difference index $i$ and is in $E_i$. This establishes part (a). Each vertex $U$ in $D_{i_j}$ corresponds to a path with vertex $v_{i_j}$ at the $i^{\rm th}$ index, and each vertex $W$ in $D_{i_\ell}$ corresponds to a path with vertex $v_{i_\ell}$ at the $i^{\rm th}$ index. Thus for each vertex $U$ in $D_{i_j}$ there is at most one vertex in $D_{i_\ell}$ adjacent to $U$. \end{proof} Note that $4$-cycles occur very often in Cartesian products: take any edge $UW$ in $H_1$ and any edge $XY$ in $ H_2$. Then the set of vertices $\{ (U,X), (U,Y), (W,X), (W,Y)\}$ induces a $4$-cycle in $H_1 \square H_2$. From Theorem \ref{decomp}, part (a), and the fact that 4-cycles are ubiquitous in Cartesian products of graphs, we conclude that \begin{obs}\label{prevalent}$4$-cycles are prevalent in shortest path graphs.
\end{obs} In view of the fact that Theorem \ref{decomp} holds for every index $i$, Observation~\ref{prevalent} is especially strong: we expect shortest path graphs having no $4$-cycles to have a relatively simple structure, and we predict the study of shortest path graphs with no such restriction to be more challenging. We conclude this section with another way to combine base graphs. Let $G_1$ and $G_2$ be graphs with edge sets such that $E(G_1)\cap E(G_2) = \{e\}$, where $e=xy$, and $V(G_1)\cap V(G_2) = \{x,y\}$. The {\em two-sum of $G_1$ and $G_2$} is defined to be the graph $G$ with vertex set $V(G_1)\cup V(G_2)$ and edge set $E(G_1)\cup E(G_2)$. Theorem~\ref{twosum} characterizes the shortest path graph of the two-sum of two graphs. \begin{theorem}\label{twosum} Let $G_1$ and $G_2$ be graphs with edge sets such that $E(G_1)\cap E(G_2) = \{e\}$, where $e=xy$, and $V(G_1)\cap V(G_2) = \{x,y\}$. Let $G$ be the two-sum of $G_1$ and $G_2$. Let $a\in V(G_1)$ and $b\in V(G_2)$ where $\{a,b\} \cap \{x,y\} = \varnothing$. Then $S(G, a,b)$ is isomorphic to one of the following: \begin{itemize} \item[$(i)$] the disjoint union $S(G_1,a,x)\,\square\, S(G_2,x,b)\, \bigcup \, S(G_1,a,y)\,\square\, S(G_2,y,b)$ plus additional edges which comprise a matching between the two, in the case that $d(a,x)=d(a,y)$ and $d(x,b)=d(y,b)$; \item[$(ii)$] $S(G_1,a,x)\,\square\, S(G_2,x,b)$, in the case that $d(a,x)\le d(a,y)$ and $d(x,b) < d(y,b)$, \\ or $d(a,x)<d(a,y)$ and $d(x,b) \le d(y,b)$; \item[$(iii)$] $S(G_1,a,y)\,\square\, S(G_2,y,b)$, in the case that $d(a,y)\le d(a,x)$ and $d(y,b) < d(x,b)$, \\ or $d(a,y)<d(a,x)$ and $d(y,b) \le d(x,b)$; \item[$(iv)$] otherwise, $S(G_1,a,x)\,\square\, S(G_2,x,b)\, \bigcup \,S(G_1,a,y)\,\square\, S(G_2,y,b)$, where vertices common to \\$S(G_1,a,x)\,\square\, S(G_2,x,b)$ and $S(G_1,a,y)\,\square\, S(G_2,y,b)$ correspond precisely to $a,b$-geodesics containing the edge $e$.
\end{itemize} \end{theorem} \begin{proof} Note that every $a,b$-geodesic has non-empty intersection with $\{x,y\}$. \noindent \textbf{Case (i)} Suppose $i=d(a,x)=d(a,y)$ and $d(x,b)=d(y,b)$. In this case, the vertices $x$ and $y$ are the only vertices at distance $i$ from $a$ in $G$ to be used in any $a,b$-geodesic. Let $E_i$ be the set of all edges in $S(G,a,b)$ having difference index $i$. If we note that $S(G,a,x)$, $S(G,a,y)$, $S(G,x,b)$ and $S(G,y,b)$ are, respectively, isomorphic to $S(G_1,a,x)$, $S(G_1,a,y)$, $S(G_2,x,b)$ and $S(G_2,y,b)$, then it follows immediately from Theorem \ref{decomp}, part (a), that \[ S(G,a,b)\setminus E_i \cong S(G_1,a,x)\,\square\, S(G_2,x,b)\, \bigcup \, S(G_1,a,y)\,\square\, S(G_2,y,b).\] From part (b) of that same theorem, it follows directly that the edges connecting the vertex disjoint components $S(G_1,a,x)\,\square\, S(G_2,x,b)$ and $S(G_1,a,y)\,\square\, S(G_2,y,b)$ form a matching. This completes the proof for Case (i). \noindent \textbf{Case (ii)} Either $d(a,x)\le d(a,y)$ and $d(x,b) < d(y,b)$, or $d(a,x)<d(a,y)$ and $d(x,b) \le d(y,b)$. Every $a,b$-geodesic in $G$ contains the vertex $x$, and the result follows directly from Theorem \ref{onesum}. \noindent \textbf{Case (iii)} Either $d(a,y)\le d(a,x)$ and $d(y,b) < d(x,b)$, or $d(a,y)<d(a,x)$ and $d(y,b) \le d(x,b)$. Every $a,b$-geodesic in $G$ contains the vertex $y$, and the result follows directly from Theorem \ref{onesum}. \noindent \textbf{Case (iv)} Consider first when $d(a,x) > d(a,y)$ and $d(x,b) < d(y,b)$. Since $d(x,y)=1$, we have that $d(a,x)=d(a,y)+1$ and $d(x,b)+1=d(y,b)$. By Proposition \ref{concat}, the vertices of $S(G,a,b)$ which correspond to paths containing $x$ induce a subgraph isomorphic to $S(G_1,a,x)\square S(G_2, x,b)$, and those which correspond to paths containing $y$ induce a subgraph isomorphic to $S(G_1,a,y)\square S(G_2, y,b)$.
Note that some $a,b$-geodesics contain the edge $e=xy$, and hence the two induced subgraphs described above have non-empty intersection. Now let $U$ be a vertex in $V(S(G,a,b))$ which corresponds to an $a,b$-geodesic containing $x$ and not $y$, and let $W$ be a vertex in $V(S(G,a,b))$ which corresponds to an $a,b$-geodesic containing $y$ but not $x$. Then $U$ and $W$ differ in both index $d(a,y)$ and index $d(a,y)+1$ and hence are non-adjacent. The case $d(a,x)< d(a,y)$ and $d(x,b) > d(y,b)$ is handled analogously. This completes the proof. \end{proof} Note that in the proof of Theorem \ref{twosum}, the edge $e$ is used only in Case (iv). Hence, if Case~(i), (ii), or (iii) holds in the statement of that theorem, then $S(G\setminus e,a,b) \cong S(G,a,b)$. Also note that results similar to Theorem \ref{twosum} can be obtained by joining two graphs at two nonadjacent vertices, or indeed by joining graphs at more than two vertices. \section{Shortest path graphs of girth at least $5$}\label{girth5} In this section, we completely classify all shortest path graphs with girth $5$ or greater. In the process, we characterize precisely which cycles are shortest path graphs and we show that the claw is not a shortest path graph. The following simple observation will be crucial. \begin{proposition}\label{p3toc4} Let $H$ be a shortest path graph. Let $U_1,U_2,U_3$ be distinct vertices in $H$ such that $U_1U_2U_3$ is an induced path. If the difference indices of $U_1U_2$ and $U_2U_3$ are $i$ and $j$, respectively, where $j\not\in\{i-1,i,i+1\}$, then $H$ has an induced $C_4$ containing $U_1U_2U_3$. \end{proposition} \begin{proof} Let $U_1 = av_1 \ldots v_pb$, $U_2 = av_1\ldots v_i'\ldots v_pb$, and $U_3 = av_1 \ldots v_{i-1}v_i'v_{i+1}\ldots v_j'\ldots v_p b$. Since $j\not\in\{i-1,i,i+1\}$, every edge of $U_4=av_1 \ldots v_j'\ldots v_p b$ already appears in $U_1$ or $U_3$, so $U_4$ is a shortest path in $G$, creating the induced $4$-cycle $(U_1,U_2,U_3,U_4)$.
\end{proof} The next result says that any shortest path graph containing an induced odd cycle larger than a $3$-cycle must necessarily contain an induced $C_4$. Theorem~\ref{noClaw} establishes the same result for induced claws. \begin{lemma}\label{c4induced} Let $H$ be a shortest path graph that contains an induced $C_k$ for odd $k > 3$. Then $H$ contains an induced $C_4$. \end{lemma} \begin{proof} Let $(U_1, \ldots, U_k)$ be an induced $C_k$ with odd $k>3$ in $H$, and suppose that $H$ does not contain an induced $C_4$. Let $i$ be the difference index of $U_1U_2$. By Proposition~\ref{p3toc4}, the difference index of $U_2U_3$ is either $i-1$ or $i+1$ (it cannot equal $i$, for then $U_1$ and $U_3$ would differ only at index $i$ and hence be adjacent). In particular, if $i$ is odd then $U_2$ and $U_3$ differ at an even index, and if $i$ is even then $U_2$ and $U_3$ differ at an odd index. The same is true at every step, that is, the parity of the difference index alternates around the cycle. This is impossible if $k$ is odd. \end{proof} In contrast, $C_3$ and every even cycle are shortest path graphs. \begin{theorem}\label{ck} $C_k$ is a shortest path graph if and only if $k$ is even or $k=3$. \end{theorem} \begin{proof} We have already seen that $C_3$ and $C_4$ are shortest path graphs (see Figure~\ref{moreExamples} and Corollary \ref{Cube}). From Lemma~\ref{c4induced}, it follows that $C_k$ is not a shortest path graph for odd $k > 3$. We now construct a graph whose shortest path graph is $C_{2n}$. Define $G$ with $2n+2$ vertices, namely $ V(G) = \{a,b,v_0,v_{1},\ldots,v_{n-1},v_{0}',v_{1}',\ldots,v_{n-1}'\}$ and edge set \[ E(G)=\{av_{i}\,:\,i\in \mathbb{Z}_{n}\}\cup \{bv_{i}'\,:\,i\in\mathbb{Z}_{n}\} \cup \{v_{i}v_{i}': i \in \mathbb{Z}_{n}\} \cup \{v_{i}v_{i+1}'\,:\, i\in \mathbb{Z}_{n}\}, \] where indices are calculated modulo $n$. There are exactly $2n$ $a,b$-geodesics in $G$, namely the set $\{av_iv_i'b,av_iv'_{i+1}b\, :\, i=0,\ldots,n-1\}$. It is easy to check that the shortest path graph $S(G,a,b)\cong C_{2n}$.
\end{proof} \begin{theorem}\label{noClaw} If a shortest path graph $H$ has an induced claw, $K_{1,3}$, then $H$ must have a $4$-cycle containing two edges of the induced claw. In particular, $K_{1,3}$ is not a shortest path graph. \end{theorem} \begin{proof} Let $H$ be a shortest path graph that contains an induced claw with vertices $U_0,U_1,U_2,U_3$, such that $U_0$ is adjacent to $U_1, U_2$, and $U_3$. Let $i_j$ be the difference index of $U_0U_j$ for $j\in\{1, 2, 3\}$. Since the claw is induced, these difference indices must be distinct. Suppose that no three vertices of the claw are part of an induced $4$-cycle. By Proposition \ref{p3toc4}, since $U_0, U_1, U_2$ is not a part of an induced $4$-cycle, it follows that $i_2= i_1 \pm 1$. Without loss of generality let $i_2=i_1+1$. Similarly, because $U_0, U_1, U_3$ is not a part of an induced $4$-cycle, we have $i_3= i_1 \pm 1$. Since the indices $i_j$ are distinct, it must be that $i_3=i_1-1$. By Proposition \ref{p3toc4} it follows that $U_0, U_2, U_3$ is in an induced $4$-cycle in $H$, a contradiction. \end{proof} An immediate consequence of Theorem~\ref{noClaw} is the following observation. \begin{obs} If $H$ is a tree and a shortest path graph, then $H$ is a path. \end{obs} Next we establish a characterization of when $C_k$ can be an induced subgraph of some shortest path graph. \begin{theorem}\label{c5} $C_k$ is an induced subgraph of some shortest path graph if and only if $k \neq 5$. \end{theorem} \begin{proof} Assume, to the contrary, that $S(G,a,b)$ contains an induced $C_5$, say $\widetilde{U}=(U_1,U_2,U_3,U_4,U_5)$. Consider the difference indices along the edges of the cycle. Every difference index that occurs must occur at least twice in order to return to the original shortest path. Thus a 5-cycle can use at most 2 distinct difference indices. Furthermore, if a difference index occurs twice in a row, say for $U_1U_2$ and for $U_2U_3$, then the edge $U_1U_3$ is also in $S(G, a,b)$, since $U_1$ and $U_3$ then differ only at that index.
Since the five edges of the cycle use at most two distinct difference indices, some two consecutive edges must share a difference index, which yields such a chord. Therefore, $C_5$ is not an induced subgraph of a shortest path graph. To finish the proof, we show that for any $k \neq 5$, $C_k$ is an induced subgraph of some shortest path graph. In Theorem~\ref{ck} we saw that $C_k$ is itself a shortest path graph when $k=3$ or when $k$ is even. Thus, we only need to consider odd $k\geq 7$. Suppose that $k=2p+1$ and let $G_{2p+1}$ be a graph with vertex set $V(G_{2p+1})=\{a,b,v_1,v_2,\ldots,v_p,v_1',v_2',\ldots,v_p',v_1''\}$ and edge set \begin{align*} E(G_{2p+1}) &= \{av_1, av_1', bv_p, bv_p',av_1'',v_1''v_2',v_1''v_2\}\cup \{v_iv_{i+1},v_i'v_{i+1}',v_iv_{i+1}',v_i'v_{i+1}\,:\,i\in \{1,2,\ldots,p-1\}\}. \end{align*} Then the following paths of $G_{2p+1}$ induce a $C_{2p+1}$ in $S(G_{2p+1}, a, b)$: \[ \begin{array}{c} av_1v_2v_3\cdots v_pb, \\ av_1'v_2v_3\cdots v_pb, \\ av_1'v_2'v_3\cdots v_pb,\\ \vdots \\ av_1'v_2'v_3'\cdots v_p'b, \\ av_1''v_2'v_3'\cdots v_p'b,\\ av_1''v_2v_3'\cdots v_p'b, \\ av_1''v_2v_3v_4'\cdots v_p'b, \\ \vdots \\ av_1''v_2v_3\cdots v_{p-1} v_p'b, \\ av_1v_2v_3\cdots v_{p-1}v_p'b. \\ \end{array} \] \end{proof} \begin{theorem} \label{girth5class} Let $H$ be a graph with $\mathop{\mathrm{girth}}(H) \geq 5$. Then $H$ is a shortest path graph if and only if each nontrivial component of $H$ is a path or a cycle of even length at least $6$. \end{theorem} \begin{proof} If $\mathop{\mathrm{girth}}(H) \geq 5$, by Theorem~\ref{noClaw} there is no vertex $U \in V(H)$ with degree $d_H(U) \geq 3$. Thus each vertex in $H$ must have degree $0$, $1$, or $2$. From this, it follows that every nontrivial component of $H$ is a path or cycle. By Lemma~\ref{c4induced}, any induced odd cycle forces an induced $C_4$. Therefore all of these cycles must have even length. By Lemma~\ref{firstpath}, paths of every length are shortest path graphs.
In Theorem~\ref{ck}, it was shown how to construct a shortest path graph that is a cycle of even length. Finally, by Theorem~\ref{disconnect}, the disjoint union of any set of shortest path graphs is again a shortest path graph. \end{proof} Now that shortest path graphs of girth $5$ or more have been characterized, a natural next step would be to work towards a characterization of girth $4$ shortest path graphs. The prevalence of Cartesian products in shortest path graphs tends to indicate that $4$-cycles will play a large and challenging role in the study of these graphs. We leave this challenge to a future paper and instead characterize the shortest path graphs of grid graphs, which have particularly nice structure. We study these in the next section. \section{Shortest Paths in Grid Graphs}\label{girth4} An $m$-dimensional grid graph is the Cartesian product of $m$ paths, $P_{n_1} \square \cdots \square P_{n_m}$. We denote the vertices of $P_{n_1} \square \cdots \square P_{n_m}$ with the usual Cartesian coordinates on the $m$-dimensional lattice, so the vertex set is given by $V(P_{n_1} \square \cdots \square P_{n_m}) = \{(x_1,x_2,\ldots, x_m) : x_i \in \mathbb{Z}, 0 \leq x_1 \leq n_1, \ldots, 0 \leq x_m \leq n_m\}$. In what follows, we consider the geodesics between two diametric vertices of a grid graph, i.e., the shortest paths between the origin and the vertex $(n_1,\ldots,n_m)$. Because these two vertices will always be the vertices of consideration for grid graphs, we will denote the shortest path graph of $P_{n_1} \square \cdots \square P_{n_m}$ with respect to them simply by $S(P_{n_1} \square \cdots \square P_{n_m})$. A two-dimensional grid graph and the diametric vertices under consideration are shown in Figure~\ref{Fig:G(n_1,n_2)}.
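As an aside for the computationally inclined reader, the small constructions in this paper are easy to check by brute force. The following Python sketch is our own verification aid, not part of the paper's formal development (all identifiers are ours): it enumerates the $a,b$-geodesics of a graph and builds $S(G,a,b)$, declaring two geodesics adjacent exactly when they differ in a single vertex.

```python
from collections import deque
from itertools import combinations

def geodesics(adj, a, b):
    """All a,b-geodesics in the graph given by the adjacency dict `adj`."""
    # BFS from b records each vertex's distance to b.
    dist = {b: 0}
    queue = deque([b])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    # Grow paths from a, always stepping one unit closer to b.
    paths = []
    def grow(path):
        u = path[-1]
        if u == b:
            paths.append(tuple(path))
            return
        for w in adj[u]:
            if dist.get(w) == dist[u] - 1:
                grow(path + [w])
    grow([a])
    return paths

def shortest_path_graph(adj, a, b):
    """Vertices and edges of S(G,a,b): geodesics, adjacent iff they
    differ in exactly one vertex (i.e., at one index)."""
    P = geodesics(adj, a, b)
    E = [(U, W) for U, W in combinations(P, 2)
         if sum(u != w for u, w in zip(U, W)) == 1]
    return P, E

# K_{2,3} with a, b on the 2-side: all three geodesics a-v_i-b differ
# only at index 1, so S(K_{2,3}, a, b) is the complete graph K_3.
k23 = {'a': ['v1', 'v2', 'v3'], 'b': ['v1', 'v2', 'v3'],
       'v1': ['a', 'b'], 'v2': ['a', 'b'], 'v3': ['a', 'b']}
P, E = shortest_path_graph(k23, 'a', 'b')
print(len(P), len(E))  # 3 geodesics, all pairwise adjacent: 3 edges
```

Running it on $K_{2,n}$ reproduces the complete-graph lemma above, and small grids reproduce the geodesic counts derived in this section.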
\begin{figure} \caption{The grid graph $P_{n_1} \square P_{n_2}$ and the diametric vertices under consideration.} \label{Fig:G(n_1,n_2)} \end{figure} For convenience of notation, we will consider the shortest paths in $P_{n_1} \square \cdots \square P_{n_m}$ as a sequence of moves through the grid in the following way. For $1 \leq i \leq m$, let $\mathbf{e_i}$ be the $i^{{\rm th}}$ standard basis vector in $\mathbb{R}^m$. A move from a vertex $(x_1,\ldots,x_i,\ldots,x_m) \in V(P_{n_1} \square \cdots \square P_{n_m})$ in the $\mathbf{e_i}$ direction means that the next vertex along the path is $(x_1,\ldots,x_i+1,\ldots,x_m)$. Note that a shortest path in $P_{n_1} \square \cdots \square P_{n_m}$ from $(0,\ldots,0)$ to $(n_1,\ldots,n_m)$ consists of exactly $N=\sum_{i=1}^{m} n_i$ moves, $n_i$ of which are in the $\mathbf{e_i}$ direction. Furthermore, observe that any $m$-ary sequence of length $N$ in which the symbol $i$ occurs exactly $n_i$ times corresponds to a geodesic in $P_{n_1} \square \cdots \square P_{n_m}$ and that there are $\frac{\left(\sum_{i=1}^m n_i\right)!}{n_1!\cdots n_m!}$ such shortest paths. Explicitly, a geodesic $U$ will be denoted by the $m$-ary sequence $\widetilde{U}= s_1 \ldots s_N\in \mathbb{Z}_m^N$, where $s_j = i \in \{1,\ldots,m\}$ if the $j^{{\rm th}}$ move in $U$ is in the $\mathbf{e_{i}}$ direction. In this way, the symbol $1$ corresponds to a move in the $\mathbf{e_1}$ direction, $2$ corresponds to a move in the $\mathbf{e_2}$ direction, etc. We will refer to $\widetilde{U}$ as the sequence representation of $U$ and use $B_{n_1,\ldots,n_m}\subset \mathbb{Z}_m^N$ to denote the set of all sequences with each $i$ occurring exactly $n_i$ times.
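The sequence encoding and the multinomial count can be checked directly; the sketch below (our own illustration, with identifiers of our choosing) enumerates $B_{n_1,\ldots,n_m}$ and also anticipates the adjacency rule formalized next, namely a single swap of two distinct consecutive symbols.

```python
from itertools import permutations
from math import factorial

def B(ns):
    """All m-ary sequences in which symbol i occurs exactly ns[i-1] times."""
    symbols = [i + 1 for i, n in enumerate(ns) for _ in range(n)]
    return sorted(set(permutations(symbols)))

def adjacent(u, w):
    """Sequences are adjacent iff they differ by one swap of distinct
    consecutive symbols (one internal vertex of the geodesic moves)."""
    diff = [j for j in range(len(u)) if u[j] != w[j]]
    return (len(diff) == 2 and diff[1] == diff[0] + 1
            and u[diff[0]] == w[diff[1]] and u[diff[1]] == w[diff[0]])

ns = (3, 2)  # geodesics of P_3 square P_2, as 5-letter sequences over {1, 2}
seqs = B(ns)
# multinomial count of geodesics: (sum n_i)! / (n_1! ... n_m!)
total = factorial(sum(ns))
for n in ns:
    total //= factorial(n)
print(len(seqs), total)  # both equal 5!/(3!2!) = 10
```

The same two functions, applied with `ns = (1,) * m`, enumerate the geodesics of the hypercube $Q_m$ discussed at the end of this section.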
\begin{figure} \caption{Adjacent paths in $P_{n_1} \square P_{n_2}$.} \label{Fig:adjacent_paths} \end{figure} Two shortest paths in $S(P_{n_1} \square \cdots \square P_{n_m})$ are adjacent if and only if they differ by a single vertex, i.e., if one path can be obtained from the other by swapping a single pair of consecutive moves in different directions (see Figure~\ref{Fig:adjacent_paths}). Thus, if $U$ and $W$ are two shortest paths, then $U \sim W$ if and only if their sequence representations in $B_{n_1,\ldots, n_m}$, $\widetilde{U}$ and $\widetilde{W}$, respectively, have the forms $\widetilde{U} = s_1 \ldots s_is_{i+1}\ldots s_N$ and $\widetilde{W} = s_1 \ldots s_{i+1}s_i\ldots s_N$ where $s_i \neq s_{i+1}$. It follows that $UW\in E( S(P_{n_1} \square \cdots \square P_{n_m})) $ if and only if the two sequences $\widetilde{U}, \widetilde{W} \in B_{n_1,\ldots,n_m}$ can be obtained from each other by switching two different consecutive symbols. The main result of this section is that $S(P_{n_1} \square \cdots \square P_{n_m})$ is isomorphic to an induced subgraph of the integer lattice graph $\mathbb{Z}^M$, where $M = \sum_{i=2}^{m}(i-1)n_i$. To prove this result we define a mapping from $B_{n_1,\ldots,n_m}\subset \mathbb{Z}_m^N$ to coordinates in $\mathbb{Z}^M$. \begin{theorem}\label{Th:gridgraph} The shortest path graph of $P_{n_1} \square \cdots \square P_{n_m}$ is isomorphic to an induced subgraph of the integer lattice graph $\mathbb{Z}^M$, where $M = \sum_{i=2}^{m}(i-1)n_i$. \end{theorem} \begin{proof} Consider the $m$-dimensional grid graph $P_{n_1} \square \cdots \square P_{n_m}$, with $B_{n_1,\ldots,n_m}\subset \mathbb{Z}_m^N$ the set of all $m$-ary sequences corresponding to its geodesics.
Define a map $\phi: B_{n_1,\ldots,n_m}\rightarrow \mathbb{Z}^M.$ For a sequence $\widetilde{U} \in B_{n_1,\ldots,n_m}$, let \[ \begin{array}{llllll} \phi(\widetilde{U}):= &(a_{121},\ldots, a_{12n_2}, & & &\\ &\hspace{1ex} a_{131},\ldots, a_{13n_3},& a_{231},\ldots, a_{23n_3},&&\\ && \ldots &&\\ &\hspace{1ex} a_{1m1},\ldots, a_{1mn_m},& a_{2m1},\ldots, a_{2mn_m},&\ldots&, a_{(m-1)m1},\ldots, a_{(m-1)mn_m}), \end{array} \] where $a_{ijk}$ is the number of $i$'s following the $k^{{\rm th}}$ $j$ in $\widetilde{U}$. For example, if $\widetilde{U}=32121231 \in B_{3,3,2}$, then $\phi(\widetilde{U}) =(3,2,1,3,1,3,0)$. Also, $\phi$ maps the sequence $1\cdots 1\, 2\cdots 2\, \cdots\, m\cdots m$ to the origin. Thus, $\phi$ maps $B_{n_1,\ldots,n_m}$ into a set of vectors $(a_{ijk}) \in \mathbb{Z}^M$ such that $ 2\leq j \leq m, 1 \leq i < j$, and $ 1 \leq k \leq n_j$, in the order indicated. Note that for all $i,j,k$, $a_{ijk} \leq n_i$ since there are at most $n_i$ $i$'s following a $j$, and $a_{ijk} \geq a_{ij(k+1)}$ since at least as many $i$'s appear after the $k^{{\rm th}}$ $j$ as after the $(k+1)^{{\rm st}}$ $j$. To see that $\phi$ is injective, consider two distinct sequences $\widetilde{U} = s_1\ldots s_N$ and $\widetilde{W} = s_1'\ldots s_N'$ in $B_{n_1,\ldots,n_m}$, and denote their images under $\phi$ by $A$ and $A'$, respectively. Let $r$ be the first index where the entries of $\widetilde{U}$ and $\widetilde{W}$ differ. Without loss of generality, assume that $s_r = j > s_r' = i$ and that $s_r$ is the $k^{{\rm th}}$ $j$ occurring in $\widetilde{U}$. Then, the $k^{{\rm th}}$ $j$ in $\widetilde{W}$ will appear after $s_r' = i$, so the number of $i$'s in $\widetilde{W}$ following the $k^{{\rm th}}$ $j$ will be at least one less than in $\widetilde{U}$.
Therefore, if $a_{ijk}$ and $a_{ijk}'$ are the $ijk$-components of $A$ and $A'$, respectively, then $a_{ijk} > a_{ijk}'$, showing $\phi(\widetilde{U}) \neq \phi(\widetilde{W})$. To finish the proof, we need to show that $\phi$ preserves adjacency. Suppose $\widetilde{U}$ and $\widetilde{W}$ are adjacent sequences in $B_{n_1,\ldots,n_m}$. Then $\widetilde{U}$ and $\widetilde{W}$ have the forms $\widetilde{U}=s_1\ldots s_rs_{r+1} \ldots s_N$ and $\widetilde{W}=s_1\ldots s_{r+1}s_r \ldots s_N$, where $s_r \neq s_{r+1}$. Let $ s_{r+1}=j> s_r = i$. Now, suppose that $s_{r+1}$ is the $k^{{\rm th}}$ $j$ appearing in $\widetilde{U}$. Then the only difference in the vectors $\phi(\widetilde{U})$ and $\phi(\widetilde{W})$ is that the $ijk$-component of $\phi({\widetilde{W}})$ is increased by one unit, so $\phi(\widetilde{U})$ and $\phi({\widetilde{W}})$ are adjacent vertices in $\mathbb{Z}^M$. To see that $\phi^{-1}$ also preserves adjacency, consider two adjacent vertices, $A$ and $A'$, in the image of $\phi$. Let $a_{ijk}$ be the $ijk$-component of $A$, and without loss of generality, assume that ${A'}$ is obtained from $A$ by increasing $a_{ijk}$ to $a_{ijk}+1$. Because ${A'}$ is in the image of $\phi$, it follows that the symbol directly preceding the $k^{{\rm th}}$ $j$ in $\phi^{-1}(A)$ is $i$. To see this, first note that there must be at least one $i$ preceding the $k^{{\rm th}}$ $j$. Otherwise, $a_{ijk} = n_i$ and cannot be increased. If there is any other symbol between the $k^{{\rm th}}$ $j$ and the $i$ preceding it, other components of $A$ must be changed in order to increase $a_{ijk}$. However, $A$ and $A'$ have only one different component. Thus, $\phi^{-1}(A')$ can be obtained from $\phi^{-1}(A)$ by switching the $k^{{\rm th}}$ $j$ and the $i$ directly preceding it. Therefore, $\phi^{-1}(A)$ and $\phi^{-1}(A')$ are adjacent vertices in $B_{n_1,\ldots,n_m}$.
\end{proof} The shortest path graph of a two-dimensional grid graph $P_{n_1} \square P_{n_2}$ is particularly easy to characterize, as demonstrated in the following corollary. First, we need an additional definition. The staircase graph $S_{n_1,n_2}$ is an induced subgraph of the grid graph on the integer lattice $\mathbb{Z}^{n_2}$. $S_{n_1,n_2}$ has vertex set $V(S_{n_1,n_2}) = \{(a_1,\ldots,a_{n_2}) : a_k \in \mathbb{Z}, n_1\geq a_1 \geq a_2 \geq \cdots \geq a_{n_2} \geq 0\}$ (see Figure \ref{Fig:Stairs}). \begin{figure} \caption{The staircase graph $S_{n_1,2}$.} \label{Fig:Stairs} \end{figure} \begin{corollary} The shortest path graph of $P_{n_1} \square P_{n_2}$ is isomorphic to the staircase graph $S_{n_1,n_2}$. \end{corollary} \begin{proof} We have seen that the shortest path graph $S(P_{n_1}\square P_{n_2})$ can be described as a graph on the set of binary strings $B_{n_1,n_2}$. Furthermore, from the proof of Theorem \ref{Th:gridgraph}, the mapping $\phi:B_{n_1,n_2} \rightarrow \mathbb{Z}^{n_2}$ defined by $\phi(\widetilde{U}) := (a_1, a_2, \ldots, a_{n_2})$, where $a_k$ is the number of $1$'s following the $k^{{\rm th}}$ occurrence of $2$ in $\widetilde{U}$, is injective and adjacency preserving. Therefore, this corollary follows if we can show $\phi(B_{n_1,n_2}) = V(S_{n_1,n_2})$. Let $\widetilde{U} \in B_{n_1,n_2}$ and let $\phi(\widetilde{U}) = (a_1, a_2, \ldots, a_{n_2})$. From the definition of $\phi$, it follows that $n_1\geq a_k \geq a_{k+1} \geq 0$ for all $k \in\{ 1,\ldots,n_2-1\}$, since the number of $1$'s following the $k^{{\rm th}}$ $2$ is greater than or equal to the number of $1$'s following the $(k+1)^{{\rm st}}$ $2$. Thus, $\phi(\widetilde{U}) \in V(S_{n_1,n_2})$. Conversely, for any $A=(a_1,\ldots,a_{n_2}) \in V(S_{n_1,n_2})$, let $\widetilde{U}$ be the sequence in $B_{n_1,n_2}$ that has exactly $a_k$ $1$'s following the $k^{{\rm th}}$ $2$ for $k\in\{1,\ldots,n_2\}$.
Then, $\phi(\widetilde{U}) = A$, showing $V(S_{n_1,n_2}) \subseteq \phi(B_{n_1,n_2})$. \end{proof} \begin{remark} Theorem \ref{Th:gridgraph} implies that the dimension of the lattice graph $\mathbb{Z}^M$ of which $S(P_{n_1} \square \cdots \square P_{n_m})$ is an induced subgraph depends on the ordering of $n_1,\ldots, n_m$. Since $M=\sum_{i=2}^{m}(i-1)n_i$, the least value for $M$ will occur when $n_1,\ldots, n_m$ are listed in decreasing order. \end{remark} It is a direct consequence of our discussion on grid graphs that the path of length $k$ is the shortest path graph of $P_k\square P_1$. \begin{corollary}\label{pathThm} For the grid $G=P_k\square P_{1}$, $S(G)\cong P_k$. \end{corollary} Earlier we made the comment that two base graphs may produce the same shortest path graph. In fact, even two reduced graphs can have the same shortest path graph, e.g., the graphs $P_k\square P_1$ and $G_k$ given in Lemma \ref{firstpath} have the same shortest path graph, yet they are reduced and non-isomorphic. Another special grid graph is the $m$-dimensional hypercube, $Q_m=P_1 \square \cdots \square P_1$. We shall observe in Proposition \ref{Th:Cayley} that $S(Q_m)$ is isomorphic to a Cayley graph of the symmetric group $S_m$. We first recall some material from elementary group theory and algebraic graph theory. See \cite{Bacher, Fraleigh, Godsil} for more detail. Let $(\Gamma, \cdot)$ be a group. Let $S$ be a generating set of $\Gamma$ that does not contain the identity element and such that for each $g\in S$, $g^{-1}$ is also in $S$. The {\em Cayley graph} of $\Gamma$ with generating set $S$, denoted by $\mbox{Cay}(\Gamma; S)$, is the graph whose vertices are the elements of $\Gamma$, and which has an edge between two vertices $x$ and $y$ if and only if $x\cdot s = y$ for some $s\in S$. The symmetric group $S_m$ is the group whose elements are the permutations on the set $\{1, 2, \ldots, m\}$.
An element of $S_m$ is a bijection from the set $\{1, 2, \ldots, m\}$ to itself. Denote by $s_1s_2 \ldots s_m$ the permutation $\sigma$ given by $\sigma(i)=s_i$, $1\le i \le m$. The group operation in $S_m$ is the composition of permutations defined by $(\sigma\tau )(j)=\sigma(\tau(j))$, for $\sigma, \tau \in S_m$, $1\le j\le m$. An \emph{adjacent transposition} is a permutation $\tau_i$ such that \[\tau_i(i) = i+1, \tau_i(i+1) = i \text{, and } \tau_i(k) = k \text{ for } k\not\in\{i,i+1\}.\] It is well known that every permutation can be represented as the composition of finitely many adjacent transpositions. Therefore, the set of adjacent transpositions, $T$, generates $S_m$. We also note that each adjacent transposition is its own inverse. Hence, we can define the Cayley graph of $S_m$ with generating set $T$, $\mbox{Cay}(S_m; T )$.\footnote{This Cayley graph was studied by Bacher \cite{Bacher} and is also known, in other contexts, as a Bubble Sort Graph.} To gain some insight into the structure of $\mbox{Cay}(S_m; T)$, consider the effect of the composition of an element $\sigma \in S_m$ with an adjacent transposition. Let $\sigma = s_1s_2 \ldots s_m$ and $1\le i\le m-1$. Then \[\sigma\tau_i(i) = \sigma(i+1),\, \sigma\tau_i(i+1) = \sigma(i), \text{ and } \sigma\tau_i(k) =\sigma( k), \text{ if } k \not\in\{i,i+1\},\] or simply \[\sigma\tau_i = s_1s_2 \ldots s_{i+1}s_i \ldots s_m.\] Thus, the effect of the composition $\sigma\tau_i$ is the switching of the two consecutive elements $s_i$ and $s_{i+1}$ in $\sigma$. We can conclude that the neighborhood of a vertex $\sigma$ in $\mbox{Cay}(S_m; T )$ is the collection of the $m-1$ permutations of $\{1,\ldots,m\}$ obtained from $\sigma$ by interchanging two consecutive elements. Now consider the $m$-dimensional hypercube, $Q_m=P_1 \square \cdots \square P_1$. The vertices of $P_1 \square \cdots \square P_1$ correspond to all binary strings of length $m$.
A shortest path in $Q_m$ is a sequence of exactly one move in each direction. In the discussion preceding Theorem \ref{Th:gridgraph} we introduced a correspondence between the geodesics of $P_{n_1} \square \cdots \square P_{n_m}$ and the set of sequences of moves $B_{n_1,\ldots,n_m}$. For $Q_m$, this is a bijection between the geodesics in $P_1 \square \cdots \square P_1$ and the sequences in $B_{1,\ldots,1}$. Since in each geodesic a vertex coordinate changes exactly once, $B_{1,\ldots,1}$ coincides with the set of permutations on $\{1, 2, \ldots, m\}$. The sequence representation of $U$ is denoted by $\widetilde{U} = s_1s_2 \ldots s_m$, where each element of $\{1, \ldots, m\}$ appears precisely once. Recall that we defined two sequences in $B_{1,\ldots,1}$ to be adjacent if and only if one can be obtained from the other by switching two (different) consecutive symbols. This is equivalent to two permutations being adjacent if and only if one can be obtained from the other using an adjacent transposition. Thus, the set $B_{1,\ldots,1}$ together with the adjacency relation is isomorphic to $\mbox{Cay}(S_m; T )$. Hence we observe: \begin{proposition}\label{Th:Cayley} Let $S_m$ be the symmetric group, let $T$ be the set of adjacent transpositions, and let $a$ and $b$ be diametric vertices on $Q_m$. Then $S(Q_m,a,b) \cong \mbox{Cay}(S_m; T )$. \end{proposition} Although the function $\phi$ introduced in the proof of Theorem \ref{Th:gridgraph} is not needed in Proposition \ref{Th:Cayley}, that function has an interesting interpretation in the case of $Q_m$. Here the domain of $\phi$ is $B_{1,\ldots,1}$, which, as we have discussed, is isomorphic to $S_m$, the set of permutations of $\{1,2, \ldots, m\}$. Referring to the definition of $\phi$ in the proof of Theorem \ref{Th:gridgraph}, and using the notation introduced there, the image of $\phi$ is a sequence whose elements are $a_{ijk}$, where $1\le i <j \le m$ and $1\le k \le n_j$. 
In the case of $Q_m$, $n_j =1$ for each $j$. Hence, for a sequence $\widetilde{U}\in B_{1, \ldots, 1}$, corresponding to a permutation $s_1s_2 \cdots s_m$, it makes sense to simplify our notation to \[ \phi(\widetilde{U}):= (a_{12}, a_{13}, a_{23}, \ldots, a_{1m}, a_{2m}, \ldots , a_{(m-1)m}), \] where $a_{ij}$ is equal to the number of $i$'s following the $1^{\rm st}$ (and only) $j$ in $\widetilde{U}$. From Theorem \ref{Th:gridgraph}, the length of the sequence $\phi(\widetilde{U})$ is $M = \sum_{i=2}^{m}(i-1)n_i= \sum_{i=1}^m(i-1)= {m \choose 2}$. Since every element in $\{1,2, \ldots ,m\}$ occurs precisely once in the permutation $s_1s_2 \ldots s_m$, we have that for every pair $i,j$ with $1\le i<j\le m$, \[ a_{ij} = \begin{cases} 0 & \text{if $i$ occurs before $j$ and}\\ 1 & \text{if $i$ occurs after $j$.} \end{cases} \] In the case of $Q_m$, one may interpret $\phi(\widetilde{U})$ as the edge set of a complete directed graph on $m$ vertices as follows. For each pair of vertices $i,j$ with $1\le i<j \le m$, the edge $ij$ is oriented from $i$ to $j$ if $a_{ij}=0$, and from $j$ to $i$ if $a_{ij}=1$. In this way every edge is oriented from the symbol occurring earlier in $\widetilde{U}$ to the symbol occurring later, so the resulting orientation is transitive. A complete directed graph having this transitive property is called a {\em transitive tournament}. We conclude that for the hypercube $Q_m$, the image of $\phi$ corresponds precisely to the set of $m!$ transitive tournaments. \end{document}
\begin{document} \title[Integral quadratic polynomials]{Representations of integral quadratic polynomials} \author{Wai Kiu Chan} \address{Department of Mathematics and Computer Science, Wesleyan University, Middletown CT, 06459, USA} \email{[email protected]} \author{Byeong-Kweon Oh} \address{Department of Mathematical Sciences and Research Institute of Mathematics, Seoul National University, Seoul 151-747, Korea} \email{[email protected]} \thanks{This work of the second author was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 20110027952)} \subjclass[2010]{Primary 11D09, 11E12, 11E20} \keywords{Integral quadratic polynomials} \begin{abstract} In this paper, we study the representations of integral quadratic polynomials. Particularly, it is shown that there are only finitely many equivalence classes of positive ternary universal integral quadratic polynomials, and that there are only finitely many regular ternary triangular forms. A more general discussion of integral quadratic polynomials over a Dedekind domain inside a global field is also given. \end{abstract} \maketitle \section{Introduction} For a polynomial $f(x_1, \ldots, x_n)$ with rational coefficients and an integer $a$, we say that $f$ represents $a$ if the diophantine equation \begin{equation} \label{1steqn} f(x_1, \ldots, x_n) = a \end{equation} is soluble in the integers. The {\em representation problem} asks for a complete determination of the set of integers represented by a given polynomial. This problem is considered to be intractable in general in view of Matiyasevich's negative answer to Hilbert's tenth problem \cite{ma}. Moreover, Jones \cite{j} has shown that whether a general single diophantine equation of degree four or higher is soluble in the positive integers is already undecidable. However, the linear and the quadratic cases have been studied extensively. 
The linear case is elementary and its solution is a consequence of the Euclidean algorithm. For the quadratic case, the representation problem for homogeneous quadratic polynomials, or quadratic forms in other words, has a long history and it still garners a lot of attention from mathematicians across many areas. For accounts of more recent development of the subject, the readers are referred to the surveys \cite{h, sp} and the references therein. In this paper, we will discuss a couple of questions which are related to the representation problem of quadratic polynomials in general, namely {\em universality} and {\em regularity}, which we will explain below. A quadratic polynomial $f(\mathbf x) = f(x_1, \ldots, x_n)$ can be written as $$f(\mathbf x) = Q(\mathbf x) + L(\mathbf x) + c$$ where $Q(\mathbf x)$ is a quadratic form, $L(\mathbf x)$ is a linear form, and $c$ is a constant. Unless stated otherwise {\em we assume that $Q$ is positive definite}. This in particular implies that there exists a unique vector $\mathbf v \in {\mathbb Q}^n$ such that $L(\mathbf x) = 2B(\mathbf v, \mathbf x)$, where $B$ is the bilinear form such that $B(\mathbf x, \mathbf x) = Q(\mathbf x)$. As a result, $$f(\mathbf x) = Q(\mathbf x + \mathbf v) - Q(\mathbf v) + c \geq -Q(\mathbf v) + c,$$ and so $f(\mathbf x)$ attains an absolute minimum on ${\mathbb Z}^n$. We denote this minimum by $m_f$ and will simply call it the minimum of $f(\mathbf x)$. We call $f(\mathbf x)$ {\em positive} if $m_f \geq 0$. In this paper, we call a quadratic polynomial $f(\mathbf x)$ {\em integral} if it is integer-valued, that is, $f(\mathbf x) \in {\mathbb Z}$ for all $\mathbf x \in {\mathbb Z}^n$. A positive integral quadratic polynomial $f(\mathbf x)$ is called {\em universal} if it represents all nonnegative integers. Positive definite universal integral quadratic forms have been studied for many years by many authors and have become a popular topic in recent years. 
It is known that positive definite universal integral quadratic forms must have at least four variables, and there are only finitely many equivalence classes of such universal quadratic forms in four variables. Moreover, a positive definite integral quadratic form is universal if and only if it represents all positive integers up to 290 \cite{bh}. However, Bosma and Kane \cite{bk} show that this kind of finiteness theorem does not exist for positive integral quadratic polynomials in general. More precisely, given any finite subset $T$ of $\mathbb N$ and a positive integer $n \notin T$, Bosma and Kane construct explicitly a positive integral quadratic polynomial with minimum 0 which represents every integer in $T$ but not $n$. An integral quadratic polynomial is called {\em almost universal} if it represents all but finitely many positive integers. A classical theorem of Tartakovski \cite{t} implies that a positive definite integral quadratic form in five or more variables is almost universal provided it is universal over ${\mathbb Z}_p$ for every prime $p$. An effective procedure for deciding whether a positive definite integral quadratic form in four variables is almost universal is given in \cite{bo}. Unlike positive definite universal or almost universal quadratic forms, positive universal and almost universal integral quadratic polynomials do exist in three variables. One well-known example of a universal quadratic polynomial is the sum of three triangular numbers $$\frac{x_1(x_1+1)}{2} + \frac{x_2(x_2+1)}{2} + \frac{x_3(x_3 + 1)}{2}.$$ Given positive integers $a_1, \ldots, a_n$, we follow the terminology used in \cite{co} and call the polynomial $$\Delta(a_1, \ldots, a_n): = a_1\frac{x_1(x_1 + 1)}{2} + \cdots + a_n\frac{x_n(x_n + 1)}{2}$$ a triangular form. There are only seven universal ternary triangular forms and they were found by Liouville in 1863 \cite{li}. 
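Liouville's list can be recovered by a brute-force search. The sketch below is ours, not part of the paper; it assumes, as a hedge, that coefficients up to $12$ and representation of every integer up to $200$ suffice to isolate the universal ternary triangular forms (a criterion of Bosma and Kane, discussed next, justifies these bounds a posteriori).

```python
def represented(coeffs, limit):
    """Set of integers <= limit represented by the triangular form Delta(coeffs)."""
    tri = [k * (k + 1) // 2 for k in range(30)]  # triangular numbers 0, 1, 3, 6, ...
    reachable = {0}
    for a in coeffs:
        reachable = {s + a * t for s in reachable for t in tri if s + a * t <= limit}
    return reachable

LIMIT = 200
universal = []
for a1 in range(1, 13):
    for a2 in range(a1, 13):
        for a3 in range(a2, 13):
            if represented((a1, a2, a3), LIMIT) == set(range(LIMIT + 1)):
                universal.append((a1, a2, a3))

# Liouville's seven universal ternary triangular forms
assert universal == [(1, 1, 1), (1, 1, 2), (1, 1, 4), (1, 1, 5),
                     (1, 2, 2), (1, 2, 3), (1, 2, 4)]
```

The search produces exactly seven triples, in agreement with Liouville's classification.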
Bosma and Kane \cite{bk} have a simple criterion--the Triangular Theorem of Eight--to determine the universality of a triangular form: a triangular form is universal if and only if it represents the integers 1, 2, 4, 5, and 8. In \cite{co}, the present authors give a complete characterization of triples of positive integers $a_1, a_2, a_3$ for which $\Delta(a_1, a_2, a_3)$ are almost universal. Particularly, it is shown there that there are infinitely many almost universal ternary triangular forms. Almost universal integral quadratic polynomials in three variables that are mixed sums of squares and triangular numbers are determined in \cite{ch} and \cite{ks}. Two quadratic polynomials $f(\mathbf x)$ and $g(\mathbf x)$ are said to be {\em equivalent} if there exists $T \in \text{GL}_n({\mathbb Z})$ and $\mathbf x_0 \in {\mathbb Z}^n$ such that \begin{eqnarray} \label{equiv} g(\mathbf x) = f(\mathbf x T + \mathbf x_0). \end{eqnarray} One can check readily that this defines an equivalence relation on the set of quadratic polynomials, and equivalent quadratic polynomials represent the same set of integers. In Section \ref{universal}, we will prove the following finiteness result on almost universal integral quadratic polynomials in three variables. It, in particular, implies that given a nonnegative integer $k$, there are only finitely many almost universal ternary triangular forms that represent all integers $\geq k$. \begin{thm} \label{thmin3} Let $k$ be a nonnegative integer. There are only finitely many equivalence classes of positive integral quadratic polynomials in three variables that represent all integers $\geq k$. \end{thm} An integral polynomial is called {\em regular} if it represents all the integers that are represented by the polynomial itself over ${\mathbb Z}_p$ for every prime $p$ including $p = \infty$ (here ${\mathbb Z}_\infty = \mathbb R$ by convention). 
In other words, $f(\mathbf x)$ is regular if \begin{equation} \label{hasse} (\ref{1steqn}) \mbox{ is soluble in ${\mathbb Z}_p$ for every $p \leq \infty$ } \Longrightarrow (\ref{1steqn}) \mbox{ is soluble in ${\mathbb Z}$}. \end{equation} Watson \cite{w1, w2} showed that up to equivalence there are only finitely many primitive positive definite regular integral quadratic forms in three variables. A list containing all possible candidates of equivalence classes of these regular quadratic forms is compiled by Jagy, Kaplansky, and Schiemann in \cite{jks}. This list contains 913 candidates and all but twenty two of them are verified to be regular. Recently Oh \cite{o} verifies the regularity of eight of the remaining twenty two forms. As a first step to understand regular quadratic polynomials in three variables, we prove the following in Section \ref{regularpoly}. \begin{thm} \label{regular3} There are only finitely many primitive regular triangular forms in three variables. \end{thm} A quadratic polynomial $f(\mathbf x)$ is called {\em complete} if it takes the form $$f(\mathbf x) = Q(\mathbf x) + 2B(\mathbf v, \mathbf x) + Q(\mathbf v) = Q(\mathbf x + \mathbf v).$$ Every quadratic polynomial is complete after adjusting the constant term suitably. In Section \ref{coset}, we will describe a geometric approach of studying the arithmetic of complete quadratic polynomials. In a nutshell, a complete integral quadratic polynomial $f(\mathbf x)$ is just a coset $M + \mathbf v$ of an integral ${\mathbb Z}$-lattice $M$ on a quadratic ${\mathbb Q}$-space with a quadratic map $Q$, and solving the diophantine equation $f(\mathbf x) = a$ is the same as finding a vector $\mathbf e$ in $M$ such that $Q(\mathbf e + \mathbf v) = a$. 
The definition of the class number of a coset will be introduced, and it will be shown in Section \ref{coset} that this class number is always finite and can be viewed as a measure of obstruction of the local-to-global implication in (\ref{hasse}). In the subsequent sections, especially in Section \ref{coset}, we will complement our discussion with the geometric language of quadratic spaces and lattices. Let $R$ be a PID. If $M$ is an $R$-lattice on some quadratic space over the field of fractions of $R$ and $A$ is a symmetric matrix, we shall write ``$M \cong A$'' if $A$ is the Gram matrix for $M$ with respect to some basis of $M$. The discriminant of $M$ is the determinant of one of its Gram matrices. An $n \times n$ diagonal matrix with $a_1, \ldots, a_n$ as its diagonal entries is written as $\langle a_1, \ldots, a_n \rangle$. Any other unexplained notation and terminology in the language of quadratic spaces and lattices used in this paper can be found in \cite{ca}, \cite{ki}, and \cite{om}. \section{Universal Ternary Quadratic Polynomials} \label{universal} We start this section with a technical lemma which will be used in the proof of Theorem \ref{thmin3}. \begin{lem} \label{2variables} Let $q(\mathbf x)$ be a positive definite binary quadratic form and $b$ be the associated bilinear form. For $i = 1, \ldots, t$, let $f_i(\mathbf x) = q(\mathbf x) + 2b(\mathbf w_i, \mathbf x) + c_i$ be a positive integral quadratic polynomial with quadratic part $q(\mathbf x)$. For any integer $k \geq 0$, there exists a positive integer $N\geq k$, bounded above by a constant depending only on $q(\mathbf x)$, $k$, and $t$, such that $N$ is not represented by $f_i(\mathbf x)$ for every $i = 1, \ldots, t$. \end{lem} \begin{proof} Let $d$ be the discriminant of $q(\mathbf x)$. Choose odd primes $p_1 < \cdots < p_t$ such that $-d$ is a nonresidue mod $p_i$ for all $i$. 
Then for every $i = 1, \ldots, t$, $q(\mathbf x)$ is anisotropic ${\mathbb Z}_{p_i}$-unimodular. In particular, $q(\mathbf x)$, and hence $2b(\mathbf w_i, \mathbf x)$ as well, are in ${\mathbb Z}_{p_i}$ for all $\mathbf x \in {\mathbb Z}_{p_i}^2$. This implies that $\mathbf w_i \in {\mathbb Z}_{p_i}^2$ and so $q(\mathbf w_i) \in {\mathbb Z}_{p_i}$. Let $N$ be the smallest positive integer satisfying $N \geq k$ and $$N \equiv p_i + c_i - q(\mathbf w_i) \mod p_i^2, \quad \mbox{ for } i = 1, \ldots, t.$$ Then for every $i$, $\text{ord}_{p_i}(N - c_i + q(\mathbf w_i)) = 1$ and so $N - c_i + q(\mathbf w_i)$ is not represented by $q(\mathbf x + \mathbf w_i)$ over ${\mathbb Z}_{p_i}$. Thus $N$ is not represented by $f_i(\mathbf x)$. \end{proof} A positive ternary quadratic polynomial $f(\mathbf x) = Q(\mathbf x) + 2B(\mathbf v, \mathbf x) + m$ is called {\em Minkowski reduced}, or simply {\em reduced}, if its quadratic part is Minkowski reduced and it attains its minimum at the zero vector. This means that the quadratic part $Q(\mathbf x)$ is of the form $\mathbf x A \mathbf x^t$, where $A$ is a Minkowski reduced symmetric matrix. So, if $\mathbf e_1, \mathbf e_2, \mathbf e_3$ is the standard basis for ${\mathbb Z}^3$, then $Q(\mathbf e_1) \leq Q(\mathbf e_2) \leq Q(\mathbf e_3)$. Also, $Q(\mathbf x) + 2B(\mathbf v, \mathbf x) \geq 0$ for all $\mathbf x \in {\mathbb Z}^3$, and hence \begin{equation} \label{inequality} 2\vert B(\mathbf v, \mathbf e_i) \vert \leq Q(\mathbf e_i) \mbox{ for } i = 1, 2, 3. \end{equation} \begin{lem} \label{reduced} Every positive ternary quadratic polynomial is equivalent to a reduced ternary quadratic polynomial. \end{lem} \begin{proof} Let $f(\mathbf x)$ be a positive ternary quadratic polynomial. It follows from reduction theory that there exists $T \in \text{GL}_n({\mathbb Z})$ such that the quadratic part of $f(\mathbf x T)$ is Minkowski reduced. 
If $f(\mathbf x T)$ attains its minimum at $\mathbf x_0$, then the polynomial $g(\mathbf x):= f(\mathbf x T + \mathbf x_0)$, which is equivalent to $f(\mathbf x)$, is reduced. \end{proof} \begin{lem} \label{reduction} Let $Q(\mathbf x)$ be a positive definite reduced ternary quadratic form. Then for any $(x_1, x_2, x_3) \in {\mathbb Z}^3$, $$Q(x_1\mathbf e_1 + x_2\mathbf e_2 + x_3\mathbf e_3) \geq \frac{1}{6}(Q(\mathbf e_1)x_1^2 + Q(\mathbf e_2)x_2^2 + Q(\mathbf e_3)x_3^2).$$ \end{lem} \begin{proof} Let $C_{ij} = Q(\mathbf e_i)Q(\mathbf e_j) - B(\mathbf e_i, \mathbf e_j)^2$, which is positive if $i \neq j$ because $Q(\mathbf x)$ is reduced. For any permutation $i, j, k$ of the integers $1, 2, 3$, we have $$Q(\mathbf e_k)C_{ij} \leq Q(\mathbf e_1)Q(\mathbf e_2)Q(\mathbf e_3) \leq 2 D,$$ where $D$ is the discriminant of $Q$. Now, by completing the squares, \begin{eqnarray*} Q(x_1\mathbf e_1 + x_2\mathbf e_2 + x_3\mathbf e_3) & \geq & Q(\mathbf e_i)(x_i + \cdots)^2 + \frac{C_{ij}}{Q(\mathbf e_j)}(x_j + \cdots )^2 + \frac{D}{C_{ij}} x_k^2\\ & \geq & \frac{Q(\mathbf e_k)}{2} x_k^2. \end{eqnarray*} Thus $$3(Q(x_1\mathbf e_1 + x_2\mathbf e_2 + x_3\mathbf e_3)) \geq \frac{1}{2}(Q(\mathbf e_1)x_1^2 + Q(\mathbf e_2)x_2^2 + Q(\mathbf e_3)x_3^2),$$ and the lemma follows immediately. \end{proof} We are now ready to prove Theorem \ref{thmin3}. \begin{proof}[Proof of Theorem \ref{thmin3}] Let $k$ be a fixed nonnegative integer. By virtue of Lemma \ref{reduced}, it suffices to show that there are only finitely many reduced positive ternary integral quadratic polynomials which represent all positive integers $\geq k$. By adjusting the constant terms of these quadratic polynomials, we may assume that their minimum is 0. Let $f(\mathbf x) = Q(\mathbf x) + 2B(\mathbf v, \mathbf x)$ be a reduced positive ternary integral quadratic polynomial with minimum $0$. Let $\mathbf e_1, \mathbf e_2, \mathbf e_3$ be the standard basis for ${\mathbb Z}^3$. 
For simplicity, for each $i = 1, 2, 3$, we denote $Q(\mathbf e_i)$ by $\mu_i$ and $B(\mathbf v, \mathbf e_i)$ by $w_i$. Furthermore, for $i \neq j$, let $a_{ij}$ be $B(\mathbf e_i, \mathbf e_j)$. We assume throughout below that $f(\mathbf x)$ represents all integers $\geq k$. The proof will be complete if we can show that $\mu_3$ is bounded above by a constant depending only on $k$. From now on, $(x_1, x_2, x_3)$ always denotes a vector in ${\mathbb Z}^3$. By (\ref{inequality}) and Lemma \ref{reduction}, \begin{eqnarray*} f(x_1, x_2, x_3) & \geq & \sum_{i=1}^3 \left(\frac{1}{6} \mu_i x_i^2 - 2\vert w_i x_i \vert\right)\\ & \geq & \sum_{i = 1}^3 \mu_i \left(\frac{1}{6} x_i^2 - \vert x_i \vert \right), \end{eqnarray*} and so if $\vert x_3 \vert \geq 9$, we have $$f(x_1, x_2, x_3) \geq -\frac{3}{2}\mu_1 - \frac{3}{2}\mu_2 + \frac{9}{2} \mu_3 \geq \frac{3}{2}\mu_3.$$ Suppose that $\vert x_3 \vert \leq 8$. Since $2\vert a_{12} \vert \leq \mu_1$, one obtains $\frac{\mu_1}{2}x_1^2 + 2a_{12}x_1x_2 + \frac{\mu_2}{2}x_2^2 \geq 0$ for all $(x_1, x_2) \in {\mathbb Z}^2$. So, if $\vert x_2 \vert \geq 22$, then \begin{eqnarray*} f(x_1, x_2, x_3) & \geq & \frac{\mu_1}{2}x_1^2 + 2(a_{13}x_3 + w_1)x_1 + \frac{\mu_2}{2} x_2^2 + 2(a_{23}x_3 + w_2)x_2 + f(0,0,x_3) \\ & \geq & - \frac{81}{2}\mu_1 + 44\mu_2\\ & \geq & \frac{7}{2}\mu_2. \end{eqnarray*} Let us assume further that $\vert x_2 \vert \leq 21$. If, in addition, $\vert x_1 \vert \geq 31$, then \begin{eqnarray*} f(x_1, x_2, x_3) & = & \mu_1 x_1^2 + 2(a_{12}x_2 + a_{13}x_3 + w_1)x_1 + f(0, x_2, x_3)\\ & \geq & \mu_1(x_1^2 - 30 \vert x_1\vert)\\ & \geq & 31 \mu_1. 
\end{eqnarray*} Therefore, we have $$f(x_1, x_2, x_3) \geq \gamma(f): = \min \left \{ \frac{3}{2}\mu_3, \frac{7}{2}\mu_2, 31\mu_1 \right \}$$ unless $$\vert x_1 \vert \leq 30, \quad \vert x_2 \vert \leq 21, \,\, \mbox{ and }\,\, \vert x_3 \vert \leq 8.$$ In particular, this means that there are at most $61\times 43 \times 17$ choices of $(x_1, x_2, x_3)$ for which $f(x_1, x_2, x_3) < \gamma(f)$, and thus there are at most $61\times 43\times 17$ distinct positive integers less than $\gamma(f)$ which may be represented by $f$. So, if $\gamma(f) \geq 61\times 43 \times 17 + 2 + k$, then $f(x_1, x_2, x_3)$ does not represent at least one integer among $k + 1, k + 2, \ldots, k + 61\times 43 \times 17 + 1$. Consequently, $$\frac{3}{2}\mu_1 \leq \gamma(f) \leq k + 61 \times 43 \times 17 + 1.$$ Let $\eta$ be the smallest positive integer satisfying $$43 \times 17 \times [2(15 + \sqrt{225 + k + \eta}) + 1] < \eta.$$ Suppose that $\frac{3}{2}\mu_2 > k + \eta$. Let $s$ be a positive integer $\leq k + \eta$. If $f(x_1, x_2, x_3) = s$, then $\vert x_2 \vert \leq 21$ and $\vert x_3 \vert \leq 8$; thus, as shown before, \begin{eqnarray*} f(x_1, x_2, x_3) & = & \mu_1 x_1^2 + 2(a_{12}x_2 + a_{13}x_3 + w_1)x_1 + f(0, x_2, x_3)\\ & \geq & \mu_1(x_1^2 - 30 \vert x_1\vert)\\ & \geq & x_1^2 - 30 \vert x_1 \vert. \end{eqnarray*} So, if $\vert x_1 \vert > 15 + \sqrt{225 + k + \eta}$, then $f(x_1, x_2, x_3) > k + \eta$. Therefore, the number of vectors $(x_1, x_2, x_3) \in {\mathbb Z}^3$ satisfying $k + 1 \leq f(x_1, x_2, x_3) \leq k + \eta$ is not bigger than $$43 \times 17 \times [2(15 + \sqrt{225 + k + \eta}) + 1],$$ which is strictly less than $\eta$. This is impossible, which means that $$\mu_2 \leq \frac{2(k + \eta)}{3}.$$ Recall that if $\vert x_3 \vert \geq 9$, then $f(x_1, x_2, x_3) \geq \frac{3}{2}\mu_3$. 
It follows from Lemma \ref{2variables} that there exists a positive integer $N \geq k$ which is not represented by $f(x_1, x_2, t)$ for any integer $t \in [-8, 8]$, and this $N$ is bounded above by a constant depending only on $k, \mu_1, \mu_2$, and $a_{12}$ (note that $2\vert a_{12}\vert \leq \mu_1$). This means that whenever $f(x_1, x_2, x_3) = N$, we must have $\vert x_3 \vert \geq 9$ and so $$\mu_3 \leq \frac{2N}{3}.$$ This completes the proof. \end{proof} \section{Regular Ternary Triangular Forms} \label{regularpoly} A triangular form $\Delta(\alpha_1, \ldots, \alpha_n)$ is said to be {\em primitive} if $\gcd(\alpha_1, \ldots, \alpha_n) = 1$. Its discriminant, denoted $d(\Delta)$, is defined to be the product $\alpha_1\cdots \alpha_n$. By completing the squares, it is easy to see that $\Delta(\alpha_1, \ldots, \alpha_n)$ represents an integer $m$ if and only if the equation \begin{equation} \label{3to2} \alpha_1(2x_1 + 1)^2 + \cdots + \alpha_n(2x_n + 1)^2 = 8m + (\alpha_1 + \cdots + \alpha_n) \end{equation} is soluble in ${\mathbb Z}$. Let $M$ be the ${\mathbb Z}$-lattice with quadratic map $Q$ and an orthogonal basis $\{\mathbf e_1, \ldots, \mathbf e_n\}$ such that $M \cong \langle 4\alpha_1, \ldots, 4\alpha_n \rangle$. Then (\ref{3to2}) is soluble in ${\mathbb Z}$ if and only if $8m + (\alpha_1 + \cdots + \alpha_n)$ is represented by the coset $M + \mathbf v$, where $\mathbf v = (\mathbf e_1 + \cdots + \mathbf e_n)/2$, that is, there exists a vector $\mathbf x \in M$ such that $Q(\mathbf x + \mathbf v) = 8m + (\alpha_1 + \cdots + \alpha_n)$. Let $p$ be an odd prime. If $M_p$ is the ${\mathbb Z}_p$-lattice ${\mathbb Z}_p\otimes M$, then $M_p + \mathbf v = M_p$. Therefore, (\ref{3to2}) is soluble in ${\mathbb Z}_p$ if and only if $M_p$ represents $8m + (\alpha_1 + \cdots + \alpha_n)$. In particular, $\Delta(\alpha_1, \ldots, \alpha_n)$ is universal over ${\mathbb Z}_p$ if and only if $M_p$ is universal. 
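The equivalence in (\ref{3to2}) rests on the identity $a(2x+1)^2 = 8a \cdot x(x+1)/2 + a$. The following numerical sketch (ours, with ad hoc search bounds) double-checks, for the sample form $\Delta(1,2,3)$, that representability by the triangular form and solubility of the odd-squares equation agree.

```python
def tri_represents(coeffs, m):
    """Does Delta(coeffs) represent m, i.e. m = sum of a_i * x_i(x_i + 1)/2?"""
    tri = [k * (k + 1) // 2 for k in range(10)]  # triangular numbers, enough for m < 40
    a1, a2, a3 = coeffs
    return any(a1 * t1 + a2 * t2 + a3 * t3 == m
               for t1 in tri for t2 in tri for t3 in tri)

def odd_square_represents(coeffs, m):
    """Is 8m + (a1 + a2 + a3) = a1*o1^2 + a2*o2^2 + a3*o3^2 with each o_i odd?"""
    a1, a2, a3 = coeffs
    target = 8 * m + sum(coeffs)
    odds = [2 * k + 1 for k in range(10)]  # odd numbers 1, 3, ..., 19
    return any(a1 * o1 ** 2 + a2 * o2 ** 2 + a3 * o3 ** 2 == target
               for o1 in odds for o2 in odds for o3 in odds)

# Both conditions agree for Delta(1, 2, 3) on 0 <= m < 40.
assert all(tri_represents((1, 2, 3), m) == odd_square_represents((1, 2, 3), m)
           for m in range(40))
```

Since the identity holds term by term, the agreement is exact for every $m$; the bound $40$ is only to keep the search small.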
\begin{lem}\label{at2} A primitive triangular form is universal over ${\mathbb Z}_2$. \end{lem} \begin{proof} It suffices to prove that for an odd integer $\alpha$, the polynomial $\alpha x(x + 1)/2$ is universal over ${\mathbb Z}_2$. But this is clear by the Local Square Theorem \cite[63:1]{om} or \cite[Lemma 1.6, page 40]{ca}. \end{proof} \begin{lem} \label{atodd} Let $p$ be an odd prime and $\alpha, \beta, \gamma$ be $p$-adic units. Then over ${\mathbb Z}_p$, \begin{enumerate} \item[(1)] $\Delta(\alpha,\beta)$ represents all integers $m$ for which $8m + \alpha + \beta \not\equiv 0$ mod $p$; \item[(2)] $\Delta(\alpha, \beta)$ is universal if $\alpha + \beta \equiv 0 \mod p$; \item[(3)] $\Delta(\alpha, \beta, \gamma)$ is universal. \end{enumerate} \end{lem} \begin{proof} The binary ${\mathbb Z}_p$-lattice $\langle \alpha, \beta \rangle$ represents all $p$-adic units \cite[92:1b]{om}. Therefore, it represents all integers $m$ for which $8m + \alpha + \beta \not\equiv 0 \mod p$. This proves (1). In (2), the condition on $\alpha$ and $\beta$ implies that the ${\mathbb Z}_p$-lattice $\langle \alpha,\beta \rangle$ is isometric to the hyperbolic plane which is universal. For (3), it follows from \cite[92:1b]{om} that any unimodular ${\mathbb Z}_p$-lattice of rank at least three is universal. \end{proof} Recall that a triangular form is regular if it represents all positive integers that are represented by the triangular form itself over ${\mathbb Z}_p$ for all primes $p$. For example, every universal triangular form is regular. The following lemma is a ``descending trick'' which transforms a regular ternary triangular form to another one with smaller discriminant. \begin{lem} \label{watson} Let $q$ be an odd prime and $a, b, c$ be positive integers which are not divisible by $q$. 
Suppose that $\Delta(a, q^rb, q^sc)$ is regular, with $1 \leq r \leq s$. Then $\Delta(q^{2-\delta}a, q^{r - \delta}b, q^{s - \delta}c)$ is also regular, where $\delta = \min\{2, r\}$. \end{lem} \begin{proof} It suffices to show that $\Delta(q^2a, q^rb, q^sc)$ is regular. Suppose that $m$ is a positive integer represented by $\Delta(q^2a, q^rb, q^sc)$ over ${\mathbb Z}_p$ for all primes $p$. Then the equation \begin{equation} \label{1} 8m + (q^2a + q^rb + q^sc) = q^2a (2x_1 + 1)^2 + q^rb (2x_2 + 1)^2 + q^sc (2x_3 + 1)^2 \end{equation} is soluble in ${\mathbb Z}_p$ for every prime $p$. Since $q$ is odd, we can say that \begin{equation}\label{2} 8m + (q^2a + q^rb + q^sc) = a(2x_1 + 1)^2 + q^rb (2x_2 + 1)^2 + q^sc (2x_3 + 1)^2 \end{equation} is also soluble in ${\mathbb Z}_p$ for every prime $p$. Notice that $q^2 \equiv 1$ mod 8, and so $8m + (q^2a + q^rb + q^sc) = 8m' + (a + q^rb + q^sc)$ for some integer $m'$. Thus, the regularity of $\Delta(a, q^rb, q^sc)$ implies that (\ref{2}) is soluble in ${\mathbb Z}$. Let $(x_1, x_2, x_3) \in {\mathbb Z}^3$ be a solution to (\ref{2}). Then $(2x_1 + 1)$ must be divisible by $q$ because $q \mid m$ by (\ref{1}), and we can write $(2x_1 + 1)$ as $q(2y_1 + 1)$ for some $y_1 \in {\mathbb Z}$. So $(y_1, x_2, x_3)$ is an integral solution to (\ref{1}), which means that $m$ is in fact represented by $\Delta(q^2a, q^rb, q^sc)$. \end{proof} The following lemma will be used many times in the subsequent discussion. It is a reformulation of \cite[Lemma 3]{kko}. \begin{lem}\label{kkolemma} Let $T$ be a finite set of primes and $a$ be an integer not divisible by any prime in $T$. For any integer $d$, the number of integers in the set $\{d, a + d, \ldots, (n-1)a + d \}$ that are not divisible by any prime in $T$ is at least $$n\frac{\tilde{p}-1}{\tilde{p} + t - 1} - 2^t + 1,$$ where $t = \vert T \vert$ and $\tilde{p}$ is the smallest prime in $T$. 
\end{lem} For the sake of convenience, we say that a ternary triangular form $\Delta(\alpha, \beta, \gamma)$ {\em behaves well} at a prime $p$ if the unimodular Jordan component of the ${\mathbb Z}_p$-lattice $\langle \alpha, \beta, \gamma \rangle$ has rank at least two, or equivalently, $p$ does not divide at least two of $\alpha, \beta$, and $\gamma$. For a ternary triangular form $\Delta$, we can rearrange the variables so that $\Delta = \Delta(\mu_1, \mu_2, \mu_3)$ with $\mu_1 \leq \mu_2 \leq \mu_3$. Collectively, we call these $\mu_i$ the successive minima of $\Delta$. In what follows, an inequality of the form $A \ll B$ always means that there exists a constant $k > 0$ such that $\vert A \vert \leq k\vert B\vert$. A real-valued function in several variables is said to be bounded if its absolute value is bounded above by a constant independent of the variables. \begin{prop} \label{well} There exists an absolute constant $C$ such that if $\Delta$ is a primitive regular ternary triangular form which behaves well at all odd primes, then $d(\Delta) \leq C$. \end{prop} \begin{proof} Let $\mu_1\leq \mu_2\leq \mu_3$ be the successive minima of $\Delta$, and let $M$ be the ${\mathbb Z}$-lattice $\langle 4\mu_1, 4\mu_2, 4\mu_3 \rangle$. Let $T$ be the set of odd primes $p$ for which $M_p$ is not split by the hyperbolic plane. Then $T$ is a finite set. Let $t$ be the size of $T$, $\tilde{p}$ be the smallest prime in $T$, and $\omega = (\tilde{p} + t - 1)/(\tilde{p} - 1)$. Note that, since $\tilde{p} \geq 2$, we have $\omega \leq t + 1$. Let $\eta = (\mu_1 + \mu_2 + \mu_3)$ and $\mathfrak T$ be the product of primes in $T$. It follows from Lemmas \ref{at2} and \ref{atodd} and the regularity of $\Delta$ that $\Delta$ represents every positive integer $m$ for which $8m + \eta$ is relatively prime to $\mathfrak T$. 
By Lemma \ref{kkolemma}, there exists a positive integer $k_1 < (t+1)2^t$ such that $8k_1 + \eta$ is relatively prime to $\mathfrak T$. Therefore, $k_1$ is represented by $\Delta$ and hence $$\mu_1 \leq (t+1)2^t \ll t2^t.$$ For any positive integer $n$, the number of integers between 1 and $n$ that are represented by the triangular form $\Delta(\mu_1)$ is at most $2\sqrt{n}$. Therefore, by virtue of Lemma \ref{kkolemma}, if $n \geq 4(t+1)^2 + 3(t+1)2^t$, there must be a positive integer $k_2 \leq n$ such that $8k_2 + \eta$ is relatively prime to $\mathfrak T$ and $k_2$ is not represented by $\Delta(\mu_1)$. This implies that $$\mu_2 \leq 4(t+1)^2 + 3(t+1)2^t \ll t2^t.$$ Let $\mathfrak A$ be the product of primes in $T$ that do not divide $\mu_1\mu_2$. Following the argument in \cite[page 862]{e}, we find that there must be an odd prime $q$ outside $T$ such that $-\mu_1\mu_2$ is a nonresidue mod $q$ and $q \ll (\mu_1\mu_2)^{\frac{7}{8}} {\mathfrak A}^{\frac{1}{4}}$. Since $\mathfrak A \leq \mathfrak T$, we have $$q \ll (\mu_1\mu_2)^{\frac{7}{8}} {\mathfrak T}^{\frac{1}{4}} \ll (t2^t)^{\frac{7}{8}}{\mathfrak T}^{\frac{1}{4}}.$$ Fix a positive integer $m \leq q^2$ such that $$8m + \mu_1 + \mu_2 \equiv q \mod q^2.$$ For any integer $\lambda$, $8(m + \lambda q^2) + \mu_1 + \mu_2$ is not represented by the binary lattice $\langle \mu_1, \mu_2 \rangle$, which means that $m + \lambda q^2$ is not represented by $\Delta(\mu_1, \mu_2)$. However, by Lemma \ref{kkolemma}, there must be a positive integer $k_3 \leq (t+1)2^t$ such that $8q^2k_3 + 8m + \eta$ is relatively prime to $\mathfrak T$. Then $m + q^2k_3$ is an integer represented by $\Delta$ but not by $\Delta(\mu_1,\mu_2)$. 
As a result, $$\mu_3 \leq m + q^2k_3 \ll (t 2^t)^{\frac{11}{4}} {\mathfrak T}^{\frac{1}{2}},$$ and hence $$\mathfrak T \leq d(\Delta) = \mu_1\mu_2\mu_3 \ll (t 2^t)^{\frac{19}{4}} {\mathfrak T}^{\frac{1}{2}}.$$ Since $\mathfrak T$, a product of $t$ distinct primes, grows at least as fast as $t!$, the above inequality shows that $t$, and hence $\mathfrak T$ as well, must be bounded. This means that $d(\Delta)$ is also bounded. \end{proof} Starting with a primitive regular ternary triangular form $\Delta$, we may apply Lemma \ref{watson} successively at suitable odd primes and eventually obtain a primitive regular ternary triangular form $\overline{\Delta}$ which behaves well at all odd primes. It is also clear from Lemma \ref{watson} that $d(\overline{\Delta})$ divides $d(\Delta)$. Let $\ell$ be an odd prime divisor of $d(\Delta)$. If $\ell$ divides $d(\overline{\Delta})$, then $\ell$ is bounded by Proposition \ref{well}. So we assume from now on that $\ell$ does not divide $d(\overline{\Delta})$. Our next goal is to bound $\ell$. When we obtain $\overline{\Delta}$ from $\Delta$, we may first apply Lemma \ref{watson} at all primes $p \neq \ell$. So, there is no harm in assuming from the outset that $\Delta$ behaves well at all primes $p \neq \ell$. Then, by Lemma \ref{watson}, $\Delta$ can be transformed to a primitive regular ternary triangular form $\tilde{\Delta} = \tilde{\Delta}(a, \ell^2b, \ell^2c)$, with $\ell \nmid abc$, which behaves well at all primes $p \neq \ell$. Since further application of Lemma \ref{watson} to $\tilde{\Delta}$ results in the triangular form $\overline{\Delta}$, all the prime divisors of $d(\tilde{\Delta})$, except $\ell$, are bounded. Let $\tilde{T}$ be the set of odd prime divisors of $d(\tilde{\Delta})$ that are not $\ell$. 
By Lemmas \ref{at2} and \ref{atodd}, we see that $\tilde{\Delta}$ represents all positive integers $m$ for which $8m + a + \ell^2b + \ell^2c$ is relatively prime to every prime in $\tilde{T}$ and $(8m + a + \ell^2b + \ell^2c)a$ is a quadratic residue modulo $\ell$. In order to find integers represented by $\tilde{\Delta}$, we need a result which is a slight generalization of Proposition 3.2 and Corollary 3.3 in \cite{e}. Let $\chi_1, \ldots, \chi_r$ be Dirichlet characters modulo $k_1, \ldots, k_r$, respectively, let $u_1, \ldots, u_r$ be values taken from the set $\{\pm 1\}$, and let $\Gamma$ be the least common multiple of $k_1, \ldots, k_r$. Given a nonnegative number $s$ and a positive number $H$, let $S_s(H)$ be the number of integers $n$ in the interval $(s, s + H)$ which satisfy the conditions $$\chi_i(n) = u_i \quad \mbox{ for } i = 1, \ldots, r \text{ and } \gcd(n, X) = 1,$$ where $X$ is a positive integer relatively prime to $\Gamma$. \begin{prop} \label{eresult} Suppose that $\chi_1, \ldots, \chi_r$ are independent. Let $h = \min \{H : S_s(H) > 0 \}$ and let $\omega(\Gamma)$ denote the number of distinct prime divisors of $\Gamma$. Then \begin{equation} \label{e3.2} S_s(H) = 2^{-r}\frac{\phi(\Gamma X)}{\Gamma X}H + O\left(H^{\frac{1}{2}}\Gamma^{\frac{3}{16} + \epsilon} X^\epsilon\right), \end{equation} and if $r \leq \omega(\Gamma) + 1$, we have \begin{equation} \label{e3.3} h \ll \Gamma^{\frac{3}{8} + \epsilon}X^\epsilon, \end{equation} where $\phi$ is Euler's phi-function and the implied constants in both \textnormal{(\ref{e3.2})} and \textnormal{(\ref{e3.3})} depend only on $\epsilon$.
\end{prop} \begin{proof} We may proceed as in the proofs of Proposition 3.2 and Corollary 3.3 in \cite{e}, but notice that \cite[Lemma 3.1]{e} remains valid if we replace ``$0 < n < H$" by ``$s < n < s + H$" in the summations, since Burgess's estimate for character sums \cite[Theorem 2]{b} holds for any interval of length $H$. \end{proof} \begin{lem} \label{primel} The prime $\ell$ is bounded. \end{lem} \begin{proof} Let $\tilde{\mu}_1 \leq \tilde{\mu}_2$ be the first two successive minima of $\tilde{\Delta}$. Let $s = a + \ell^2 b + \ell^2 c$ and write $s = 2^\kappa s_0$ with $2 \nmid s_0$. Suppose that $\kappa \geq 3$. We apply Proposition \ref{eresult} to the quadratic residue mod $\ell$ character $\chi_\ell$, taking $\epsilon = 1/8$ and $X$ to be the product of the primes in $\tilde{T}$. So, there is a positive integer $h \ll \ell^{\frac{1}{2}}$ such that $\chi_\ell(h + 2^{\kappa - 3}s_0) = \chi_\ell(2a)$ and $h + 2^{\kappa - 3} s_0$ is not divisible by any prime in $\tilde{T}$. Then $\tilde{\Delta}$ represents $h$ and hence $\tilde{\mu}_1 \ll \ell^{\frac{1}{2}}$. If $\kappa < 3$, then we apply Proposition \ref{eresult} again, but this time to $\chi_\ell$ and possibly the mod 4 character $\left(\frac{-1}{*} \right)$ and the mod 8 character $\left(\frac{2}{*} \right)$. We obtain a positive integer $n > s_0$ such that $\chi_\ell(n) = \chi_\ell(2^{\kappa} a)$, $n$ is not divisible by any prime in $\tilde{T}$, $n \equiv s_0$ mod $2^{3 - \kappa}$, and $n - s_0 \ll \ell^{\frac{1}{2}}$. Then we can write $2^\kappa n = 8m + s$, where $m$ is represented by $\tilde{\Delta}$ and $m \ll \ell^{\frac{1}{2}}$. So, $\tilde{\mu}_1 \ll \ell^{\frac{1}{2}}$ in this case as well. Now, for any $H > 0$, the number of integers in the interval $(s, s + H)$ that are represented by the triangular form $\Delta(\tilde{\mu}_1)$ is $O(\sqrt{H})$.
Thus, by Proposition \ref{eresult} and an argument similar to the one above, we must have $\tilde{\mu}_2 \ll \ell^{\frac{1}{2}}$. Then $\ell^2 \leq \tilde{\mu}_1\tilde{\mu}_2 \ll \ell$, and hence $\ell$ is bounded. \end{proof} We now present the proof of Theorem \ref{regular3}, which asserts that there are only finitely many primitive regular ternary triangular forms. \begin{proof}[Proof of Theorem \ref{regular3}] Let $\Delta$ be a primitive regular ternary triangular form, and let $\mu_1\leq \mu_2\leq \mu_3$ be its successive minima. It suffices to show that these successive minima are bounded. Let $S$ be the set of odd prime divisors of $d(\Delta)$. It follows from Proposition \ref{well} and Lemma \ref{primel} that all the primes in $S$ are bounded. Let $\mathfrak S$ be the product of these primes. It is clear from Lemma \ref{at2} and Lemma \ref{atodd}(3) that $\Delta$ represents $\mathfrak S$ over $\mathbb Z_p$ for all $p \not\in S$. Also, Lemma \ref{atodd}(1) (if $\mu_1 + \mu_2 \not\equiv 0$ mod $p$) or Lemma \ref{atodd}(2) (if $\mu_1 + \mu_2 \equiv 0$ mod $p$) shows that $\Delta$ represents $\mathfrak S$ over $\mathbb Z_p$ for all primes $p \in S$. Consequently, $\Delta$ represents $\mathfrak S$ over $\mathbb Z_p$ for all primes $p$. Since $\Delta$ is regular, it must represent $\mathfrak S$. This shows that $\mu_1$ is bounded. Let $q_1$ be the smallest odd prime not dividing $3\mu_1\mathfrak S$, and let $q_2$ be the smallest odd prime not dividing $q_1\mu_1\mathfrak S$ for which $8q_2 \mathfrak S\mu_1 + \mu_1^2$ is a nonresidue mod $q_1$. Such a $q_2$ exists because there are at least two nonresidues mod $q_1$. Note that $q_2\mathfrak S$ is represented by $\Delta$ but not by $\Delta(\mu_1)$. Therefore, $\mu_2$ is also bounded.
Now, let $q_3$ be the smallest odd prime not dividing $\mathfrak S$ for which $-\mu_1\mu_2$ is a nonresidue mod $q_3$, and let $q_4$ be the smallest odd prime not dividing $\mathfrak S$ which satisfies $$-8q_4 \mathfrak S \equiv \mu_1 + \mu_2 + q_3 \mod q_3^2.$$ Then $q_4\mathfrak S$ is represented by $\Delta$ but not by $\Delta(\mu_1, \mu_2)$, which means that $\mu_3$ is bounded. This completes the proof. \end{proof} \section{Representations of Cosets} \label{coset} In the previous sections we have seen a connection between the diophantine aspect of quadratic polynomials and the geometric theory of quadratic spaces and lattices. In this section we will amplify this connection by describing a geometric approach to a special, yet for most practical purposes sufficiently general, family of quadratic polynomials. Since it presents no additional difficulty, we shall consider quadratic polynomials over global fields and the Dedekind domains inside them. For simplicity, the quadratic map and its associated bilinear form on any quadratic space will be denoted by $Q$ and $B$, respectively. Now, unless stated otherwise, $F$ is assumed to be a global field of characteristic not 2 and $\mathfrak o$ is a Dedekind domain inside $F$ defined by a Dedekind set of places $\Omega$ on $F$ (see, for example, \cite[\S 21]{om}). We call a quadratic polynomial $f(\mathbf x)$ over $F$ in the variables $\mathbf x = (x_1, \ldots, x_n)$ {\em complete} if $f(\mathbf x) = (\mathbf x + \mathbf v)A (\mathbf x + \mathbf v)^t$, where $A$ is an $n\times n$ nonsingular symmetric matrix over $F$ and $\mathbf v \in F^n$. It is called {\em integral} if $f(\mathbf x) \in \mathfrak o$ for all $\mathbf x \in \mathfrak o^n$. Two quadratic polynomials $f(\mathbf x)$ and $g(\mathbf x)$ are said to be {\em equivalent} if there exist $T \in \text{GL}_n(\mathfrak o)$ and $\mathbf x_0 \in \mathfrak o^n$ such that $g(\mathbf x) = f(\mathbf x T + \mathbf x_0)$.
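These definitions can be checked mechanically in a small case. The following sketch (with an $A$ and $\mathbf v$ of our own choosing, over $F = \mathbb Q$ and $\mathfrak o = \mathbb Z$, not taken from the text) verifies that $f(\mathbf x) = (\mathbf x + \mathbf v)A(\mathbf x + \mathbf v)^t$ can be integral even though $A$ has non-integral entries:

```python
from fractions import Fraction as Fr

# Our own illustrative choice: A is the Gram matrix of x^2 + xy + y^2, v = 0.
A = [[Fr(1), Fr(1, 2)],
     [Fr(1, 2), Fr(1)]]
v = [Fr(0), Fr(0)]

def f(x):
    """Evaluate the complete quadratic polynomial (x + v) A (x + v)^t exactly."""
    y = [x[i] + v[i] for i in range(2)]
    return sum(y[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

# Although A has entries in (1/2)Z, f takes values in Z on all of Z^2,
# so f is integral in the sense above (checked here on a small grid).
assert all(f((x, y)).denominator == 1 for x in range(-5, 6) for y in range(-5, 6))
print(f((1, 2)))  # x^2 + xy + y^2 at (1,2) = 1 + 2 + 4 = 7
```

Exact rational arithmetic (`fractions.Fraction`) is used so that integrality is tested honestly, without floating-point rounding.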
On the geometric side, an $\mathfrak o$-{\em coset} is a set $M + \mathbf v$, where $M$ is an $\mathfrak o$-lattice on an $n$-dimensional nondegenerate quadratic space $V$ over $F$ and $\mathbf v$ is a vector in $V$. An $\mathfrak o$-coset $M + \mathbf v$ is called {\em integral} if $Q(M + \mathbf v) \subseteq \mathfrak o$, and is {\em free} if $M$ is a free $\mathfrak o$-lattice. Two $\mathfrak o$-cosets $M + \mathbf v$ and $N + \mathbf w$ on two quadratic spaces $V$ and $W$, respectively, are said to be {\em isometric}, written $M + \mathbf v \cong N + \mathbf w$, if there exists an isometry $\sigma: V \longrightarrow W$ such that $\sigma(M + \mathbf v) = N + \mathbf w$. This is the same as requiring $\sigma(M) = N$ and $\sigma(\mathbf v) \in \mathbf w + N$. For each ${\mathfrak p} \in \Omega$, $\mathfrak o_{\mathfrak p}$-cosets and isometries between $\mathfrak o_{\mathfrak p}$-cosets are defined analogously. As in the case of quadratic forms and lattices, there is a one-to-one correspondence between the set of equivalence classes of complete quadratic polynomials in $n$ variables over $F$ and the set of isometry classes of free cosets on $n$-dimensional quadratic spaces over $F$. Under this correspondence, integral complete quadratic polynomials correspond to integral free cosets. \begin{defn} Let $M + \mathbf v$ be an $\mathfrak o$-coset on a quadratic space $V$. The genus of $M + \mathbf v$ is the set $$\text{gen}(M + \mathbf v) = \{K + \mathbf w \mbox{ on } V : K_{\mathfrak p} + \mathbf w \cong M_{\mathfrak p} + \mathbf v \mbox{ for all } {\mathfrak p} \in \Omega\}.$$ \end{defn} \begin{lem} Let $M + \mathbf v$ be an $\mathfrak o$-coset on a quadratic space $V$ and let $S$ be a finite subset of $\Omega$. Suppose that an $\mathfrak o_{\mathfrak p}$-coset $M({\mathfrak p}) + \mathbf x_{\mathfrak p}$ on $V_{\mathfrak p}$ is given for each ${\mathfrak p} \in S$.
Then there exists an $\mathfrak o$-coset $K + \mathbf z$ on $V$ such that $$K_{\mathfrak p} + \mathbf z = \left \{ \begin{array}{ll} M({\mathfrak p}) + \mathbf x_{\mathfrak p} & \mbox{ if ${\mathfrak p} \in S$};\\ M_{\mathfrak p} + \mathbf v & \mbox{ if ${\mathfrak p} \in \Omega \setminus S$}. \end{array} \right .$$ \label{crt} \end{lem} \begin{proof} Let $T$ be the set of places ${\mathfrak p} \in \Omega \setminus S$ for which $\mathbf v \not\in M_{\mathfrak p}$. Then $T$ is a finite set. For each ${\mathfrak p} \in T$, let $M({\mathfrak p}) = M_{\mathfrak p}$ and $\mathbf x_{\mathfrak p} = \mathbf v$. Let $K$ be an $\mathfrak o$-lattice on $V$ such that $$K_{\mathfrak p} = \left \{ \begin{array}{ll} M({\mathfrak p}) & \mbox{ if ${\mathfrak p} \in S\cup T$};\\ M_{\mathfrak p} & \mbox{ if ${\mathfrak p} \in \Omega\setminus (S\cup T)$}. \end{array} \right .$$ By the strong approximation property of $V$, there exists $\mathbf z \in V$ such that $\mathbf z \equiv \mathbf x_{\mathfrak p}$ mod $M({\mathfrak p})$ for all ${\mathfrak p} \in S\cup T$, and $\mathbf z \in M_{\mathfrak p}$ for all ${\mathfrak p} \in \Omega \setminus (S\cup T)$. Then $K + \mathbf z$ is the desired $\mathfrak o$-coset. \end{proof} Let $O_{\mathbb A}(V)$ be the adelization of the orthogonal group of $V$, and let $\Sigma$ be an element of $O_{\mathbb A}(V)$. The ${\mathfrak p}$-component of $\Sigma$ is denoted by $\Sigma_{\mathfrak p}$. Given an $\mathfrak o$-coset $M + \mathbf v$ on $V$, we have $\Sigma_{\mathfrak p}(M_{\mathfrak p} + \mathbf v) = \Sigma_{\mathfrak p}(M_{\mathfrak p}) = M_{\mathfrak p}$ for almost all finite places ${\mathfrak p}$. By Lemma \ref{crt}, there exists an $\mathfrak o$-coset $K + \mathbf z$ on $V$ such that $K_{\mathfrak p} + \mathbf z = \Sigma_{\mathfrak p}(M_{\mathfrak p} + \mathbf v)$ for all ${\mathfrak p} \in \Omega$.
Therefore, we can define $\Sigma(M + \mathbf v)$ to be $K + \mathbf z$, and so $O_{\mathbb A}(V)$ acts transitively on $\text{gen}(M + \mathbf v)$. As a result, $$\text{gen}(M + \mathbf v) = O_{\mathbb A}(V) \cdot (M + \mathbf v).$$ Let $O_{\mathbb A}(M + \mathbf v)$ be the stabilizer of $M + \mathbf v$ in $O_{\mathbb A}(V)$. Then the (isometry) classes in $\text{gen}(M + \mathbf v)$ can be identified with $$O(V)\setminus O_{\mathbb A}(V) / O_{\mathbb A}(M + \mathbf v).$$ The group $O_{\mathbb A}(M + \mathbf v)$ is clearly a subgroup of $O_{\mathbb A}(M)$. For each ${\mathfrak p} \in \Omega$, we have \begin{eqnarray*} O(M_{\mathfrak p} + \mathbf v) & = & \{ \sigma \in O(V_{\mathfrak p}) : \sigma(M_{\mathfrak p}) = M_{\mathfrak p} \mbox{ and } \sigma(\mathbf v) \equiv \mathbf v \mbox{ mod } M_{\mathfrak p} \}\\ & \subseteq & O(M_{\mathfrak p})\cap O(M_{\mathfrak p} + \mathfrak o_{\mathfrak p} \mathbf v). \end{eqnarray*} \begin{lem} For any ${\mathfrak p} \in \Omega$, the group index $[O(M_{\mathfrak p}) : O(M_{\mathfrak p} + \mathbf v)]$ is finite. \end{lem} \begin{proof} There is a natural map $$O(M_{\mathfrak p})\cap O(M_{\mathfrak p} + \mathfrak o_{\mathfrak p} \mathbf v) \longrightarrow \text{Aut}_{\mathfrak o_{\mathfrak p}}((M_{\mathfrak p} + \mathfrak o_{\mathfrak p} \mathbf v) / M_{\mathfrak p})$$ whose kernel is precisely $O(M_{\mathfrak p} + \mathbf v)$. Since $(M_{\mathfrak p} + \mathfrak o_{\mathfrak p} \mathbf v)/ M_{\mathfrak p}$ is a finite group, the index $[O(M_{\mathfrak p})\cap O(M_{\mathfrak p} + \mathfrak o_{\mathfrak p} \mathbf v) : O(M_{\mathfrak p} + \mathbf v)]$ is finite. But it is known \cite[30.5]{kn} that the index $[O(M_{\mathfrak p}) : O(M_{\mathfrak p})\cap O(M_{\mathfrak p} + \mathfrak o_{\mathfrak p} \mathbf v)]$ is always finite. This proves the lemma.
\end{proof} Since $M_{\mathfrak p} = M_{\mathfrak p} + \mathbf v$ for almost all ${\mathfrak p} \in \Omega$, the index $[O_{\mathbb A}(M) : O_{\mathbb A}(M + \mathbf v)]$ is finite. The set $O(V)\setminus O_{\mathbb A}(V) / O_{\mathbb A}(M)$ is finite (its cardinality is the class number of $M$); hence the set $O(V)\setminus O_{\mathbb A}(V) / O_{\mathbb A}(M + \mathbf v)$ is also finite. Let $h(M + \mathbf v)$ be the number of elements in this set, which is also the number of classes in $\text{gen}(M +\mathbf v)$. We call it the {\em class number} of $M + \mathbf v$. \begin{cor} \label{classnumber} The class number $h(M + \mathbf v)$ is finite, and $h(M + \mathbf v) \geq h(M)$. \end{cor} If we replace the orthogonal groups by the special orthogonal groups in the above discussion, then we obtain the definition of the proper genus $\text{gen}^+(M + \mathbf v)$, which can be identified with $O^+(V)\setminus O^+_{\mathbb A}(V)/O^+_{\mathbb A}(M + \mathbf v)$, and the proper class number $h^+(M + \mathbf v)$, which is also finite. Unlike the case of lattices, it is not true in general that $\text{gen}(M + \mathbf v)$ coincides with $\text{gen}^+(M + \mathbf v)$. The following example illustrates this phenomenon. It also shows that in general $h(M + \mathbf v)$ and $h(M)$ are not equal. \begin{exam} Let $W$ be the hyperbolic plane over $\mathbb Q$, and let $H$ be the $\mathbb Z$-lattice on $W$ spanned by two linearly independent isotropic vectors $\mathbf e$ and $\mathbf f$ such that $B(\mathbf e,\mathbf f) = 1$. Consider the ${\mathbb Z}$-coset $H + \mathbf v$, where $\mathbf v = \frac{1}{p}\mathbf e$ for some odd prime $p$. Suppose that $\sigma_p$ is an improper isometry of $H_p + \mathbf v$. Then $\sigma_p$ must send $\mathbf e$ to $\epsilon \mathbf f$ and $\mathbf f$ to $\epsilon^{-1}\mathbf e$ for some unit $\epsilon$ in $\mathbb Z_p$.
Then $$\mathbf v = \frac{1}{p}\mathbf e \equiv \sigma_p(\mathbf v) \equiv \frac{\epsilon}{p} \mathbf f \mbox{ mod } H_p.$$ This implies that $\frac{1}{p}\mathbf e - \frac{\epsilon}{p}\mathbf f$ is in $H_p$, which is absurd. Therefore, $H_p + \mathbf v$ does not admit any improper isometry, and hence $\text{gen}(H + \mathbf v)$ is not the same as $\text{gen}^+(H + \mathbf v)$. Now, suppose in addition that $p > 3$. Let $q$ be an integer such that $q \not\equiv \pm 1$ mod $p$. Let $\mathbf u$ be the vector $\frac{q}{p}\mathbf e$. Then the coset $H + \mathbf u$ is in $\text{gen}^+(H + \mathbf v)$. To see this, observe that $H_\ell + \mathbf u = H_\ell + \mathbf v$ for all primes $\ell \neq p$. At $p$, the isometry that sends $\mathbf e$ to $q^{-1}\mathbf e$ and $\mathbf f$ to $q\mathbf f$, whose determinant is 1, sends $H_p + \mathbf u$ to $H_p + \mathbf v$. Suppose that there exists $\sigma \in O(W)$ which sends $H + \mathbf u$ to $H + \mathbf v$. Then $\sigma$ necessarily sends $H$ to $H$ itself; hence the matrix of $\sigma$ relative to the basis $\{\mathbf e, \mathbf f\}$ is one of the following: $$\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix},\quad \begin{bmatrix} -1 & 0\\ 0 & -1 \end{bmatrix},\quad \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix},\quad \begin{bmatrix} 0 & -1\\ -1 & 0 \end{bmatrix}.$$ But a simple calculation shows that none of the above sends $H + \mathbf u$ to $H + \mathbf v$. Hence $H + \mathbf u$ is not in the same class as $H + \mathbf v$. As a result, both $h^+(H + \mathbf v)$ and $h(H + \mathbf v)$ are greater than 1, while $h(H)$ and $h^+(H)$ are 1. \end{exam} Of course, there are $\mathfrak o$-cosets with class number 1 which are not themselves $\mathfrak o$-lattices.
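The "simple calculation" in the hyperbolic-plane example is easy to automate. The following sketch instantiates the example with the (hypothetical) choice $p = 5$, $q = 2$, so that $q \not\equiv \pm 1$ mod $p$, and checks that none of the four candidate matrices sends $H + \mathbf u$ to $H + \mathbf v$:

```python
from fractions import Fraction as Fr

# Coordinates are taken with respect to the basis {e, f}; u = (q/p)e, v = (1/p)e.
p, q = 5, 2
u = (Fr(q, p), Fr(0))
v = (Fr(1, p), Fr(0))

# The four matrices of isometries of W that preserve the lattice H = Z^2.
mats = [((1, 0), (0, 1)), ((-1, 0), (0, -1)),
        ((0, 1), (1, 0)), ((0, -1), (-1, 0))]

def maps_coset(m, u, v):
    """Does the isometry with matrix m send H + u to H + v, i.e. is m(u) - v in Z^2?"""
    mu = (m[0][0]*u[0] + m[0][1]*u[1], m[1][0]*u[0] + m[1][1]*u[1])
    return all((mu[i] - v[i]).denominator == 1 for i in range(2))

# None of the four candidates works, confirming that H + u and H + v
# lie in different classes for this choice of p and q.
assert not any(maps_coset(m, u, v) for m in mats)
```

The same check runs unchanged for any odd prime $p > 3$ and any $q \not\equiv \pm 1$ mod $p$.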
Here is an example: \begin{exam} Let $M$ be the $\mathbb Z$-lattice whose Gram matrix is $\langle 4, 4, 4\rangle$ relative to a basis $\{\mathbf e, \mathbf f, \mathbf g\}$, and let $\mathbf v$ be $\frac{\mathbf e + \mathbf f + \mathbf g}{2}$. The class number of $M$ is 1. The lattice $M + \mathbb Z \mathbf v$ is isometric to $$\begin{pmatrix} 3 & 1 & -1\\ 1 & 3 & 1\\ -1 & 1 & 3 \end{pmatrix}$$ whose class number is also 1. Since $h(M) = 1$, any ${\mathbb Z}$-coset in $\text{gen}(M + \mathbf v)$ has an isometric copy of the form $M + \mathbf x$ for some $\mathbf x \in \mathbb Q M$. If $M + \mathbf x \in \text{gen}(M + \mathbf v)$, then the lattice $M + \mathbb Z \mathbf x$ is in $\text{gen}(M + \mathbb Z\mathbf v)$, which has only one class. Therefore, there exists an isometry $\sigma \in O(\mathbb Q M)$ such that $\sigma(\mathbf x) \in M + \mathbb Z\mathbf v$. Thus $\sigma(\mathbf x) = \mathbf z + a\mathbf v$, where $\mathbf z \in M$ and $a \in \mathbb Z$. But $Q(\mathbf x)$ must be odd; therefore $a$ must be odd and hence $\sigma(\mathbf x) \equiv \mathbf v$ mod $M$. This shows that $\sigma(M + \mathbf x) = M + \mathbf v$ and so $h(M + \mathbf v) = 1$. \end{exam} \begin{prop} Let $\mathbf x$ be a vector in $V$. Suppose that for each ${\mathfrak p} \in \Omega$, there exists $\sigma_{\mathfrak p} \in O(V_{\mathfrak p})$ such that $\mathbf x \in \sigma_{\mathfrak p}(M_{\mathfrak p} + \mathbf v)$. Then there exists $K + \mathbf z \in \textnormal{gen}(M + \mathbf v)$ such that $\mathbf x \in K + \mathbf z$. \end{prop} \begin{proof} This follows from Lemma \ref{crt} since $\mathbf x \in M_{\mathfrak p} = M_{\mathfrak p} + \mathbf v$ for almost all ${\mathfrak p}$. \end{proof} Let $a \in F$.
We say that $M + \mathbf v$ represents $a$ if there exists a nonzero vector $\mathbf z \in M + \mathbf v$ such that $Q(\mathbf z) = a$, and that $\text{gen}(M + \mathbf v)$ represents $a$ if $V_\ell$ represents $a$ for all places $\ell \not\in \Omega$ and $M_{\mathfrak p} + \mathbf v$ represents $a$ for all places ${\mathfrak p} \in \Omega$. The following corollary shows that the class number of a coset can be viewed as a measure of the obstruction to the local-to-global implication in (\ref{hasse}). \begin{cor} Let $a \in F^\times$. Suppose that $\textnormal{gen}(M + \mathbf v)$ represents $a$. Then there exists $K + \mathbf z \in \textnormal{gen}(M + \mathbf v)$ which represents $a$. \end{cor} \begin{proof} The hypothesis says that for each ${\mathfrak p} \in \Omega$ there is a vector $\mathbf z_{\mathfrak p} \in M_{\mathfrak p} + \mathbf v$ such that $Q(\mathbf z_{\mathfrak p}) = a$. By virtue of the Hasse Principle, there exists a vector $\mathbf z \in V$ such that $Q(\mathbf z) = a$. At each ${\mathfrak p}\in \Omega$, it follows from Witt's extension theorem that there is an isometry $\sigma_{\mathfrak p} \in O(V_{\mathfrak p})$ such that $\sigma_{\mathfrak p}(\mathbf z_{\mathfrak p}) = \mathbf z$. Then for each ${\mathfrak p} \in \Omega$, $$\mathbf z = \sigma_{\mathfrak p}(\mathbf z_{\mathfrak p}) \in \sigma_{\mathfrak p}(M_{\mathfrak p} + \mathbf v).$$ By the previous proposition, $\mathbf z$ is contained in some coset $K + \mathbf z \in \text{gen}(M + \mathbf v)$. Equivalently, $a$ is represented by $K + \mathbf z$. \end{proof} When $F$ is a number field, the obstruction to the local-to-global principle for representations of cosets may be overcome by applying the results on representations of quadratic lattices with approximation properties.
\begin{thm} \label{rep} Let $M+\mathbf v$ be an $\mathfrak o$-coset on a positive definite quadratic space over a totally real number field $F$. Suppose that $a \in F^\times$ is represented by $\textnormal{gen}(M+\mathbf v)$. \begin{enumerate} \item[(1)] If $\dim(M) \geq 5$, then there exists a constant $C = C(M , \mathbf v)$ such that $a$ is represented by $M + \mathbf v$ provided $\mathbb N_{F/{\mathbb Q}}(a) > C$. \item[(2)] Suppose that $\dim(M) = 4$ and $a$ is primitively represented by $M_{\mathfrak p} + \mathfrak o_{\mathfrak p}\mathbf v$ whenever $M_{\mathfrak p}$ is anisotropic. Then there exists a constant $C^* = C^*(M, \mathbf v)$ such that $a$ is represented by $M + \mathbf v$ provided $\mathbb N_{F/{\mathbb Q}}(a) > C^*$. \end{enumerate} \end{thm} \begin{proof} (1) Let $S$ be the subset of $\Omega$ containing all ${\mathfrak p}$ for which $M_{\mathfrak p} + \mathbf v \neq M_{\mathfrak p}$ or $M_{\mathfrak p}$ is not unimodular. This $S$ is a finite set. For each ${\mathfrak p} \in S$, let $\mathbf x_{\mathfrak p} \in M_{\mathfrak p}$ be such that $Q(\mathbf x_{\mathfrak p} + \mathbf v) = a$. Choose an integer $s$ large enough so that ${\mathfrak p}^s\mathbf v \in M_{\mathfrak p}$ for all ${\mathfrak p} \in S$. Let $C$ be the constant obtained from applying the number field version of the main theorem in \cite{jk} (see \cite[Remark (ii)]{jk}) to $M + \mathfrak o\mathbf v$, $S$, and $s$. If $\mathbb N_{F/{\mathbb Q}}(a) > C$, then there exists $\mathbf w \in M + \mathfrak o \mathbf v$ such that $Q(\mathbf w) = a$ and $\mathbf w \equiv \mathbf x_{\mathfrak p} + \mathbf v \equiv \mathbf v$ mod ${\mathfrak p}^s(M_{\mathfrak p} + \mathfrak o_{\mathfrak p} \mathbf v)$ for every ${\mathfrak p} \in S$. Since ${\mathfrak p}^s(M_{\mathfrak p} + \mathfrak o_{\mathfrak p} \mathbf v) \subseteq M_{\mathfrak p}$, it follows that $\mathbf w$ is in $M + \mathbf v$, which means that $M + \mathbf v$ represents $a$.
Part (2) can be proved in the same manner, except that we need to replace the main theorem in \cite{jk} by \cite[Appendix]{ch}. \end{proof} When $V$ is indefinite, we need to take into account the orthogonal complement of a vector representing $a$. Since $a$ is represented by $\text{gen}(M + \mathbf v)$, $a$ must be represented by $\text{gen}(M + \mathfrak o \mathbf v)$, and it follows from the Hasse Principle that there exists $\mathbf z \in V$ such that $Q(\mathbf z) = a$. Let $W$ be the orthogonal complement of $\mathbf z$ in $V$. The following theorem is an immediate consequence of \cite[Corollary 2.6]{bc}. \begin{thm} \label{rep2} Let $M + \mathbf v$ be an $\mathfrak o$-coset on an indefinite quadratic space over a number field $F$. Suppose that $a \in F^\times$ is represented by $\textnormal{gen}(M + \mathbf v)$. \begin{enumerate} \item[(1)] If $\dim(M) \geq 4$ or $W$ is isotropic, then $a$ is represented by $M + \mathbf v$. \item[(2)] Suppose that $\dim(M) = 3$, $W$ is anisotropic, and $M + \mathfrak o\mathbf v$ represents $a$. Then $M + \mathbf v$ represents $a$ if either $a$ is a spinor exception of $\textnormal{gen}(M + \mathfrak o \mathbf v)$ or there exists ${\mathfrak p} \not\in \Omega$ for which $W_{\mathfrak p}$ is anisotropic and additionally $V_{\mathfrak p}$ is isotropic if ${\mathfrak p}$ is a real place. \end{enumerate} \end{thm} \begin{proof} Let $T$ be the set of places ${\mathfrak p}$ for which $\mathbf v \not\in M_{\mathfrak p}$. By \cite[Corollary 2.6]{bc}, the hypothesis in either (1) or (2) implies that $M + \mathfrak o \mathbf v$ represents $a$ with approximation at $T$. Therefore, there exists $\mathbf w \in M + \mathfrak o \mathbf v$ such that $Q(\mathbf w) = a$ and $\mathbf w \equiv \mathbf v$ mod $M_{\mathfrak p}$ for all ${\mathfrak p} \in T$. Consequently, $M + \mathbf v$ represents $a$.
\end{proof} We conclude this paper by offering a few comments on the additional hypotheses placed in Theorem \ref{rep2}(2). First, there is an effective procedure \cite{sp1} to decide whether $a$ is a spinor exception of $\text{gen}(M + \mathfrak o \mathbf v)$. It depends on the knowledge of the local relative spinor norm groups $\theta(M_{\mathfrak p} + \mathfrak o_{\mathfrak p}\mathbf v, a)$. These groups have been computed in \cite{sp1} when ${\mathfrak p}$ is nondyadic or 2-adic, and in \cite{x} when ${\mathfrak p}$ is general dyadic. When $a$ is a spinor exception of $\text{gen}(M + \mathfrak o \mathbf v)$, it is also possible to determine whether $M + \mathfrak o\mathbf v$ itself represents $a$; see \cite[Theorem 3.6]{cx} for example. \begin{thebibliography}{10} \bibitem {bc} N. Beli and W.K. Chan, {\em Strong approximation of quadrics and representations of quadratic forms}, J. Number Theory {\bf 128} (2008), 2091-2096. \bibitem {bh} M. Bhargava and J. Hanke, {\em Universal quadratic forms and the 290-theorem}, preprint. \bibitem {bo} J. Bochnak and B.-K. Oh, {\em Almost universal quadratic forms: an effective solution of a problem of Ramanujan}, Duke Math. J. {\bf 147} (2009), 131-156. \bibitem {bk} W. Bosma and B. Kane, {\em The triangular theorem of eight and representation by quadratic polynomials}, to appear. \bibitem {b} D.A. Burgess, {\em On character sums and $L$-series, II}, Proc. London Math. Soc. {\bf 13} (1963), 524-536. \bibitem {ca} J.W.S. Cassels, {\em Rational quadratic forms}, Academic Press, London, 1978. \bibitem {ch} W.K. Chan and A. Haensch, {\em Almost universal ternary sums of squares and triangular numbers}, to appear. \bibitem {ch2} W.K. Chan and J.S. Hsia, {\em On almost strong approximation for algebraic groups}, J. Algebra {\bf 254} (2002), 441-461. \bibitem {co} W.K. Chan and B.-K. Oh, {\em Almost universal ternary sums of triangular numbers}, Proc. AMS {\bf 137} (2009), 3553-3562. \bibitem {cx} W.K. Chan and F.
Xu, {\em On representations of spinor genera}, Compositio Math. {\bf 140} (2004), 287-300. \bibitem {e} A. Earnest, {\em The representation of binary quadratic forms by positive definite quaternary quadratic forms}, Trans. AMS {\bf 345} (1994), 853-863. \bibitem {h} J.S. Hsia, {\em Arithmetic of indefinite quadratic forms}, Contemporary Math. AMS {\bf 249} (1999), 1-15. \bibitem {jks} W.C. Jagy, I. Kaplansky and A. Schiemann, {\em There are 913 regular ternary forms}, Mathematika {\bf 44} (1997), 332-341. \bibitem {jk} M. J\"{o}chner and Y. Kitaoka, {\em Representations of positive definite quadratic forms with congruence and primitive conditions}, J. Number Theory {\bf 48} (1994), 88-101. \bibitem {j} J.P. Jones, {\em Undecidable diophantine equations}, Bull. AMS (N.S.) {\bf 3} (1980), 859-862. \bibitem {ks} B. Kane and Z.W. Sun, {\em On almost universal mixed sums of squares and triangular numbers}, Trans. AMS {\bf 362} (2010), 6425-6455. \bibitem {kko} B.M. Kim, M.-H. Kim, and B.-K. Oh, {\em 2-universal positive definite integral quinary quadratic forms}, Contemporary Math. {\bf 249} (1999), 51-62. \bibitem {ki} Y. Kitaoka, {\em Arithmetic of quadratic forms}, Cambridge University Press, 1993. \bibitem {kn} M. Kneser, {\em Quadratische Formen}, Springer Verlag, 2000. \bibitem {li} J. Liouville, {\em Nouveaux th\'{e}or\`{e}mes concernant les nombres triangulaires}, Journal de Math\'{e}matiques pures et appliqu\'{e}es {\bf 8} (1863), 73-84. \bibitem {ma} Yu. V. Matiyasevich, {\em Enumerable sets are Diophantine}, Dokl. Akad. Nauk SSSR {\bf 191} (1970), 272-282. \bibitem {o} B.-K. Oh, {\em Regular positive ternary quadratic forms}, Acta Arith. {\bf 147} (2011), 233-243. \bibitem {om} O.T. O'Meara, {\em Introduction to quadratic forms}, Springer Verlag, New York, 1963. \bibitem {sp1} R. Schulze-Pillot, {\em Darstellung durch Spinorgeschlechter tern\"{a}rer quadratischer Formen}, J. Number Theory {\bf 12} (1980), 529-540. \bibitem {sp} R.
Schulze-Pillot, {\em Representation by integral quadratic forms--a survey}, Contemporary Math. AMS {\bf 344} (2004), 303-321. \bibitem {t} W. Tartakovski, {\em Die Gesamtheit der Zahlen, die durch eine positive quadratische Form $F(x_1, \ldots, x_s)$ $(s \geq 4)$ darstellbar sind}, Izv. Akad. Nauk SSSR {\bf 7} (1929), 111-122, 165-195. \bibitem {w1} G.L. Watson, {\em Some problems in the theory of numbers}, Ph.D. Thesis, University of London, 1953. \bibitem {w2} G.L. Watson, {\em The representation of integers by positive ternary quadratic forms}, Mathematika {\bf 1} (1954), 104-110. \bibitem {x} F. Xu, {\em Representations of indefinite ternary quadratic forms over number fields}, Math. Z. {\bf 234} (2000), 115-144. \end{thebibliography} \end{document}
\begin{document} \title{The Jacobian Conjecture for the space of all the inner functions} \author{Ronen Peretz} \maketitle \begin{abstract} We prove the Jacobian Conjecture for the space of all the inner functions in the unit disc. \end{abstract} \section{Known facts} \begin{definition} Let $B_F$ be the set of all the finite Blaschke products defined on the unit disc $\mathbb{D}=\{z\in\mathbb{C}\,|\,|z|<1\}$. \end{definition} \noindent {\bf Theorem A.} $f(z)\in B_F\Leftrightarrow\,\exists\,n\in\mathbb{Z}^+\,\,{\rm such}\,\,{\rm that}\,\,\forall\, w\in\mathbb{D}$ the equation $f(z)=w$ has exactly $n$ solutions $z_1,\ldots,z_n$ in $\mathbb{D}$, counting multiplicities. \\ \\ This follows from \cite{fm}, bottom of page 1. \\ \\ {\bf Theorem B.} $(B_F,\circ)$ is a semigroup under composition of mappings. \\ \\ This follows from Theorem 1.7 on page 5 of \cite{kr}. \\ \\ {\bf Theorem C.} If $f(z)\in B_F$ and if $f'(z)\ne 0\,\,\forall\,z\in\mathbb{D}$ then $$ f(z)=\lambda\frac{z-\alpha}{1-\overline{\alpha}z} $$ for some $\alpha\in\mathbb{D}$ and some unimodular $\lambda$, $|\lambda|=1$, i.e. $f\in {\rm Aut}(\mathbb{D})$. \\ \\ For this we can look at Remark 1.2(b) on page 2 and Remark 3.2 on page 14 of \cite{kr}. We can also look at Theorem A on page 3 of \cite{kr1}. \section{Introduction} We remark that the last theorem (Theorem C) could be thought of as the (validity of the) Jacobian Conjecture for $B_F$. This result is, perhaps, not surprising in view of the characterization of the members of $B_F$ in Theorem A above (this is, in fact, Theorem B on page 2 of \cite{fm}, a result due to Fatou and to Rad\'{o}). For in the classical Jacobian Conjecture one knows of a parallel result, namely: \\ \\ If $F\in{\rm et}(\mathbb{C}^2)$ and if $d_F(w)=|\{ z\in\mathbb{C}^2\,|\,F(z)=w\}|$ is a constant $N$ (independent of $w\in\mathbb{C}^2$), then $F\in{\rm Aut}(\mathbb{C}^2)$ (because $F$ is a proper mapping).
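The constant-valence property of Theorem A can be observed numerically. The following sketch (with a degree-2 Blaschke product of our own choosing, not taken from the paper) samples a few values $w\in\mathbb{D}$ and confirms that each has exactly $n=2$ preimages in $\mathbb{D}$:

```python
import cmath

# Our own example: the degree-2 Blaschke product B(z) = z (z - a) / (1 - conj(a) z),
# a product of two Blaschke factors, so B is in B_F with valence n = 2.
a = 0.5

def solutions_in_disc(w):
    """Solve B(z) = w, i.e. z^2 + (conj(a) w - a) z - w = 0, keeping roots with |z| < 1."""
    b, c = a.conjugate() * w - a, -w
    d = cmath.sqrt(b * b - 4 * c)
    roots = [(-b + d) / 2, (-b - d) / 2]
    return [z for z in roots if abs(z) < 1]

# Theorem A predicts exactly n = 2 solutions in D for every w in D.
for w in [0.1, -0.3 + 0.2j, 0.5j, -0.7]:
    assert len(solutions_in_disc(w)) == 2
```

The pole of $B$ at $z = 1/\bar a$ lies outside $\overline{\mathbb{D}}$, so the quadratic's roots are precisely the preimages of $w$; the check would fail for a non-Blaschke map such as $z\mapsto z/2$ composed with a branch point, which is the point of the definition of $V_F$ below.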
\\ Thus we are led to the following, \begin{definition} Let $V_F$ be the set of all holomorphic $f\,:\,\mathbb{D}\rightarrow\mathbb{D}$, such that $\exists\,N=N_f\in\mathbb{Z}^+$ (depending on $f$) for which $d_f(w)=|\{z\in\mathbb{D}\,|\,f(z)=w\}|$, $w\in\mathbb{D}$, satisfies $d_f(w)\le N_f$ $\forall\,w\in\mathbb{D}$. \end{definition} \noindent We ask if the following is true: $$ f\in V_F,\,\,f'(z)\ne 0\,\forall\,z\in\mathbb{D}\Rightarrow f\in{\rm Aut}(\mathbb{D}). $$ The answer is negative. For example, we can take $f(z)=z/2$. So we modify the question: $$ f\in V_F,\,\,f'(z)\ne 0\,\forall\,z\in\mathbb{D}\Rightarrow f(z)\,\,{\rm is}\,\,{\rm injective}. $$ This could be written, alternatively, as follows: $$ f\in V_F,\,\,f'(z)\ne 0\,\forall\,z\in\mathbb{D}\Rightarrow \forall\,w\in\mathbb{D},\,\,d_f(w)\le 1. $$ The answer to this question is also negative. For we can take $f(z)=10^{-10}e^{10z}$, which satisfies the condition $f(\mathbb{D})\subset\mathbb{D}$ because of the tiny factor $10^{-10}$, while clearly $f\in V_F$ and $f'(z)\ne 0$ $\forall\,z\in\mathbb{D}$. But $d_f(w)$ can be as large as $$ \left[\frac{2}{2\pi/10}\right]=\left[\frac{10}{\pi}\right]=3. $$ Thus we again need to modify the question (in order to get a more interesting result). It is not clear if the right assumption should include surjectivity or almost surjectivity. Say, $$ f\in V_F,\,\,f'(z)\ne 0\,\forall\,z\in\mathbb{D}, {\rm meas}(\mathbb{D}-f(\mathbb{D}))=0\Rightarrow f\in {\rm Aut}(\mathbb{D}), $$ where ${\rm meas}(A)$ is the Lebesgue measure of the Lebesgue measurable set $A$. Or maybe, $$ f\in V_F,\,\,f'(z)\ne 0\,\forall\,z\in\mathbb{D}, \lim_{r\rightarrow 1^-}|f(re^{i\theta})|=1\,\,{\rm a.e.}\,\,{\rm in}\,\,\theta \Rightarrow f\in {\rm Aut}(\mathbb{D}).
$$ This last question could be rephrased as follows: $$ f\in V_F,\,\,f'(z)\ne 0\,\forall\,z\in\mathbb{D},\,f\,\,{\rm is}\,\,{\rm an}\,\,{\rm inner}\,\,{\rm function} \Rightarrow f\in {\rm Aut}(\mathbb{D}). $$ \section{The main result} We can answer the two last questions that were raised in the previous section. We start by answering affirmatively the last question. \begin{theorem} If $f\in V_F$, $f'(z)\ne 0\,\,\forall\,z\in\mathbb{D}$, $f$ is an inner function, then $f\in {\rm Aut}(\mathbb{D})$. \end{theorem} \noindent {\bf Proof.} \\ We recall the following result, \\ \\ {\bf Theorem.} Every inner function is a uniform limit of Blaschke products. \\ \\ We refer to the theorem on page 175 of \cite{h}. Let $\{ B_n\}_{n=1}^{\infty}$ be a sequence of Blaschke products that converge uniformly to $f$. Since $f\in V_F$ there exists a natural number $N_f$ such that $d_f(w)\le N_f\,\,\forall\,w\in\mathbb{D}$. By the Hurwitz Theorem we have $\lim_{n\rightarrow\infty} d_{B_n}(w)=d_f(w)\,\,\forall\,w\in\mathbb{D}$ and hence $d_{B_n}(w)=d_f(w)$ for $n\ge n_w$. We should note that $n_w$ depends on $w$ but it is constant in a neighborhood of the point $w$. Hence the Blaschke products in the tail subsequence $\{B_n\}_{n\ge n_w}$ all have a finite valence which is bounded from above by $d_f(w)$ at the point $w$. In particular, the valences of these finite Blaschke products are bounded from above by the number $N_f$ of Definition 2.1 (of the set $V_F$). Hence we can extract a subsequence of these Blaschke products that have one and the same number of zeroes. Again by the Hurwitz Theorem it follows that $d_f(w)=N$ is a constant, independent of $w$, and so by Theorem A in Section 1 we conclude that $f(z)$ is a finite Blaschke product with exactly $N$ zeroes. By Theorem C in Section 1 we conclude (using the assumption that $f'(z)\ne 0$, $\forall\,z\in\mathbb{D}$) that $f(z)\in {\rm Aut}(\mathbb{D})$. 
$\qed $ \\ \\ Next, we answer the penultimate question negatively. \begin{theorem} There exist functions $f\in V_F$ that satisfy $f'(z)\ne 0$ $\forall\,z\in \mathbb{D}$ and also ${\rm meas}(\mathbb{D}-f(\mathbb{D}))=0$ such that $f\not\in {\rm Aut}(\mathbb{D})$. In fact, we can construct such functions that are neither surjective nor injective. \end{theorem} \noindent {\bf Proof.} \\ Consider the domain $\Omega=\mathbb{D}-\{x\in\mathbb{R}\,|\,0\le x<1\}$. Then $\Omega$ is the unit disc with a slit along the non-negative $x$-axis. It is a simply connected domain. Let $g:\,\mathbb{D}\rightarrow\Omega$ be a Riemann mapping (i.e., it is holomorphic and conformal). Finally, let $k\ge 2$ be any integer and define $f=g^k$. This gives the desired function. Indeed, $f$ misses the point $0$, so it is not surjective; every $w\in\mathbb{D}-\{0\}$ has at least one (and, for $w\not\in[0,1)$, all $k$) of its $k$-th roots in $\Omega$, so ${\rm meas}(\mathbb{D}-f(\mathbb{D}))=0$ and $f$ is not injective; and $f'=kg^{k-1}g'\ne 0$ because $0\not\in\Omega$ and $g$ is conformal. $\qed $ \\ \\ Can the result in Theorem 3.1 be generalized to higher complex dimensions? We make the obvious definition: \begin{definition} Let $V_F(n)$ ($n\in \mathbb{Z}^+$) be the set of all the holomorphic $f:\,\mathbb{D}^n\rightarrow\mathbb{D}^n$, such that $\exists\,N=N_f$ (depending on $f$) for which $d_f(w)=|\{z\in\mathbb{D}^n\,|\,f(z)=w\}|$, $w\in\mathbb{D}^n$, satisfies $d_f(w)\le N_f$ $\forall\,w\in\mathbb{D}^n$. \end{definition} \noindent We ask if the following assertion holds true: \\ \\ If $f\in V_F(n)$ satisfies $\det J_f(z)\ne 0$ $\forall\,z\in\mathbb{D}^n$ and also $\lim_{r\rightarrow 1^-}|f(re^{i\theta_1},\ldots,re^{i\theta_n})|=1$ a.e. in $(\theta_1,\ldots,\theta_n)$ then $f\in {\rm Aut}(\mathbb{D}^n)$. \noindent {\it Ronen Peretz \\ Department of Mathematics \\ Ben Gurion University of the Negev \\ Beer-Sheva , 84105 \\ Israel \\ E-mail: [email protected]} \\ \end{document}
\begin{document} \title{Few-photon scattering and emission from open quantum systems} \author{Rahul Trivedi} \email{[email protected]} \author{Kevin Fischer} \author{Shanshan Xu} \author{Shanhui Fan} \author{Jelena Vuckovic} \affiliation{ E. L. Ginzton Laboratory, Stanford University, Stanford, CA 94305, USA } \begin{abstract} We show how to use the input-output formalism to compute the propagator for an open quantum system, i.e.~a quantum network with a low-dimensional quantum system coupled to one or more loss channels. The total propagator is expressed entirely in terms of the Green's functions of the low-dimensional quantum system, and it is shown that these Green's functions can be computed entirely from the evolution of the low-dimensional system with an effective non-Hermitian Hamiltonian. Our formalism generalizes previous works, which have focused on time-independent Hamiltonians, to systems with time-dependent Hamiltonians, making it a suitable computational tool for the analysis of a number of experimentally interesting quantum systems. We illustrate our formalism by applying it to analyze photon emission and scattering from driven and undriven two-level systems and a three-level lambda system. \end{abstract} \maketitle \section{\label{sec_intro}Introduction} \noindent {Single photon sources \cite{lodahl2015interfacing,ding2016demand,michler,senellart2017high,zhang2018strongly} and single and two-photon optical gates \cite{kok2007linear,o2009photonic,roy2017colloquium,reiserer2015cavity,duan2010colloquium,sangouard2011quantum,nemoto2014photonic} are the basic building blocks of optical quantum information processing and quantum communication systems. 
Implementing any of these building blocks involves interfacing a low-dimensional quantum system (e.g.~a two level system such as quantum dots, color centers, superconducting qubits, atomic ensembles) with the high-dimensional optical field (which often propagates in optical structures such as waveguides) -- the resulting system is called an open-quantum system \cite{gardiner2004quantum,gardiner2015quantum,wiseman2009quantum} and such systems have been a topic of theoretical interest since the inception of quantum optics. Analyzing open-quantum systems has always been a challenging task due to the huge and continuous Hilbert Space of the optical fields, and the non-linearity induced by the low-dimensional quantum system. Traditional computational methods for analyzing open quantum systems fall under two broad classes -- master equation based methods and the scattering matrix based methods. The master-equation based methods \cite{breuer2002theory,carmichael2009open,isar1994open} exactly compute the dynamics of the low-dimensional system while tracing out the Hilbert space of the optical fields. Within the master-equation framework, it is only possible to computationally extract the correlators in the optical fields. To extract the full state of the optical field, correlators of arbitrary orders are required and computing them becomes exponentially more complex with the order of the correlator. The scattering matrix based methods \cite{xu2015input,xu2017input,fischer2017scattering,pletyukhov2015quantum,pan2016exact,caneva2015quantum,shi2015multiphoton,kiukas2015equivalence,fan2010input, shi2011two, shen2005coherent, xu2013analytic, shen2007strongly} attempt to resolve this problem by treating the low-dimensional system as a scatterer for the optical fields and attempting to relate the incoming and outgoing optical fields. 
However, most of the scattering matrix methods are restricted to time-independent Hamiltonians -- in particular, they are unable to analyze coherently driven systems, which are extremely important from an experimental standpoint. Moreover, the scattering matrix methods often only relate the input state at distant past to the output state at distant future, and do not have the capability to model the dynamics of the system during the interaction of the optical field with the low-dimensional system. In this paper, we present a full calculation of the few-photon elements of the propagator for an arbitrary open quantum system coupled to an optical field. The central result of this paper is a relation between the propagator and time-ordered expectations of system operators over states where the bath is in the vacuum state (labeled as the `Green's functions'). By resorting to a discrete approximation of the bath Hilbert space, we show that the Green's functions can be evaluated by computing the dynamics of the low-dimensional system under an effective non-Hermitian Hamiltonian. It can be noted that our formalism is valid for both time-independent and time-dependent Hamiltonians, allowing it to efficiently model not only few-photon transport through the low-dimensional system, but also photon emission and scattering from coherently driven systems. Our formalism provides a set of computational tools that are complementary to the master-equation framework, wherein the focus is on exactly computing the dynamics of the few-photon states emitted from the low-dimensional system, as opposed to capturing the exact dynamics of the low-dimensional system. Our formalism will thus be of relevance to the design and analysis of quantum information processing systems in which the `information' is encoded in the state of the bath, with the low-dimensional system implementing either a source for the bath state or a unitary operation on the bath state. 
Finally, we show how to extract the scattering matrices of the low-dimensional system from the full propagator. In particular, we show that for the case of a time-independent low-dimensional system Hamiltonian, our method reduces to scattering matrices derived in past works \cite{xu2015input,xu2017input}. Our formalism also allows us to define a scattering matrix for systems with time-dependent Hamiltonians, as long as they are asymptotically time independent. Such a quantity might be of interest while analyzing photon scattering from coherently driven low-dimensional systems. This paper is organized as follows -- Section \ref{sec:out_state} describes the propagator computation starting from the input-output equations, Section \ref{sec:scat_matrix} shows how to extract the scattering matrix for the system from the propagator and Section \ref{sec_examples} uses the formalism developed in sections II and III to analyze scattering and emission from a two-level system and a lambda system.} \section{\label{sec:out_state}Quantum optical systems coupled to waveguides} \noindent {The general quantum networks being considered in this paper include a low dimensional quantum-optical system coupled to one or more waveguides (Fig.~\ref{fig:schematic}). In our calculations, we will be concerned with not only the state of the quantum-optical system, but also with the state of the waveguides. 
Section \ref{sec:prob-setup} introduces the Hamiltonians studied in this paper and the associated input-output equations, Section \ref{sec:inp-out} shows how to use the input-output formalism to relate the propagator matrix elements to Green's functions of the low-dimensional system, Section \ref{sec:mult-wg} extends the result to multiple input and output waveguides and Section \ref{sec:green-func} shows how to efficiently compute the Green's functions required for computing the propagator matrix elements.} \begin{figure} \caption{Schematic of the quantum systems being considered, with a low-dimensional system coupled to (a) a single waveguide (b) multiple waveguides. } \label{fig:schematic} \end{figure} \subsection{\label{sec:prob-setup}Hamiltonian and input-output formalism} \noindent As a starting point, we consider only one waveguide [Fig.~\ref{fig:schematic}(a)] --- the generalization to multiple waveguides is a straightforward extension. The total Hamiltonian we are interested in can be expressed as the sum of the local system Hamiltonian $H_\text{sys}$, the waveguide Hamiltonian $H_\text{wg}$ and an interaction Hamiltonian between the local system and the waveguide, $H_\text{int}$: \begin{equation} H = H_\text{sys}(t)+H_\text{wg}+H_\text{int}. \end{equation} The Hilbert space of the low-dimensional system can often be captured by a finite or countably infinite basis, which we denote by $\{\ket{\sigma_1}, \ket{\sigma_2} \dots \ket{\sigma_S} \}$. It is typically straightforward and computationally tractable to analyze the dynamics of the low-dimensional system in isolation. The dynamics of the waveguide can be modeled by the following Hamiltonian: \begin{equation} H_\text{wg} = \int \omega a_\omega^\dagger a_\omega \textrm{d}\omega \end{equation} where $a_\omega$ is the annihilation operator for an excitation of the waveguide mode at frequency $\omega$. 
These operators satisfy the standard commutation relations $[a_\omega, a_{\omega'}] = 0$ and $[a_\omega, a_{\omega'}^\dagger] = \delta(\omega-\omega')$. A suitable basis for the waveguide's Hilbert space is the Fock state basis: \begin{equation} \ket{\omega_1, \omega_2 \dots \omega_N} = \prod_{n = 1}^N a_{\omega_n}^\dagger \ket{\text{vac}} \quad\forall N\in\mathbb{N}. \end{equation} It is also convenient to define a spatial annihilation operator for the waveguide mode: \begin{equation} a_x = \int a_\omega \exp \bigg(\textrm{i} \frac{\omega x}{v_g}\bigg) \frac{\textrm{d}\omega}{\sqrt{2\pi v_g}} \end{equation} where $v_g$ is the group velocity of the waveguide mode under consideration, which we will take as unity ($v_g=1$) for the rest of this paper (note that this is equivalent to rescaling the position $x$ to have units of time). It follows from the commutators for the operators $a_\omega$ that $[a_x,a_{x'}] = 0$ and $[a_x,a^\dagger_{x'}] = \delta(x-x')$. These operators can be used to construct another basis for the waveguide's Hilbert space, which we refer to as the spatial Fock states: \begin{equation} \ket{x_1, x_2 \dots x_N} = \prod_{n=1}^N a_{x_n}^\dagger \ket{\text{vac}}. \end{equation} The system-waveguide interaction, in the rotating wave approximation, can be described by the following Hamiltonian: \begin{equation} H_\text{int} = \int \big(a_\omega L^\dagger +L a_\omega^\dagger \big)\, \frac{\textrm{d}\omega}{\sqrt{2\pi}} \end{equation} where $L$ is the operator through which the low-dimensional system couples to the waveguide. In writing this Hamiltonian, we make the standard Markov approximation of assuming that the low-dimensional system couples equally to waveguide modes at all frequencies --- this is equivalent to assuming that the physical process inducing an interaction between the waveguide and the low-dimensional system has a bandwidth much larger than any excitation that will be used to probe the low-dimensional system. 
In the Heisenberg picture, this system can be modeled via well known input-output equations \cite{gardiner1985input}: \begin{widetext} \begin{subequations} \label{in_out_eq} \begin{align} &a_\omega(t) = a_\omega(\tau_-)\exp(-\textrm{i}\omega (t-\tau_-)) -\textrm{i} \int_{\tau_-}^t L(t')\exp(-\textrm{i}\omega(t-t')) \,\frac{\textrm{d}t'}{\sqrt{2\pi}} \label{bath_op}\\ &\dot{L}(t) = -\textrm{i}[L(t),H_\text{sys}]- \textrm{i} [L(t),L^\dagger(t)]\bigg(a_\text{in}(t)-\frac{\textrm{i}}{2}L(t) \bigg) \end{align} \end{subequations} \end{widetext} where $a_\text{in}(t) = \int a_\omega(\tau_-)\exp(-\textrm{i}\omega (t-\tau_-)) \,\textrm{d}\omega/\sqrt{2\pi}$ is the input operator which, in the Heisenberg picture, captures the state of the system at time $t = \tau_-$. After the evolution of the system to a time instant $t = \tau_+>\tau_-$ in the future, the state of the system is described by the output operator $a_\text{out}(t) = \int a_\omega(\tau_+)\exp(-\textrm{i}\omega(t-\tau_+))\,\textrm{d}\omega/\sqrt{2\pi}$. Using Eq.~\ref{bath_op}, we can relate the input operator to the output operator via: \begin{align} a_\text{out}(t) = a_\text{in}(t) - \textrm{i} L(t) \mathcal{I}(\tau_-<t<\tau_+)\label{eq:inout} \end{align} where $\mathcal{I}(\cdot)$ is the indicator function, which is 1 if its argument is true or else is 0. Intuitively, this expresses the field in the waveguide after the input has interacted with the system as a sum of the input field and field emitted by the low-dimensional system. A useful set of commutators between the input operator, output operator and the low-dimensional system's Heisenberg operators can be derived using the quantum-causality conditions \cite{gardiner1985input}: \begin{align} [a_\text{out}(t), s(t')] = 0 \text{ if } t<t' \ \text{and} \ [a_\text{in}(t), s(t')] = 0 \text{ if } t>t' \end{align} where $s$ is an operator in the Hilbert space of the low-dimensional system. 
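One immediate consequence of Eq.~\ref{eq:inout} is that, for a waveguide initially in vacuum, the emitted photon flux equals a system-operator expectation, $\langle a_\text{out}^\dagger(t) a_\text{out}(t)\rangle = \langle L^\dagger(t) L(t)\rangle$, since the $a_\text{in}$ cross terms annihilate the vacuum. The following is a minimal numerical sketch of our own (the specific system, $H_\text{sys}=0$ with $L=\sqrt{\gamma}\,\sigma_-$ for an initially excited two-level system, is our choice for illustration and is not prescribed by the text); for a single excitation the flux follows from the no-jump evolution under $H_\text{eff}=H_\text{sys}-\tfrac{\textrm{i}}{2}L^\dagger L$:

```python
import numpy as np
from scipy.linalg import expm

# Illustration only (system chosen by us): initially excited two-level
# system, H_sys = 0, L = sqrt(gamma)*sigma_minus.  With vacuum input,
# <a_out^dag(t) a_out(t)> = <L^dag(t) L(t)>, obtained here from the
# no-jump evolution under H_eff = H_sys - (i/2) L^dag L.
gamma = 1.0
sigma_minus = np.array([[0, 1], [0, 0]], dtype=complex)  # basis ordering (|g>, |e>)
L = np.sqrt(gamma) * sigma_minus
H_eff = -0.5j * (L.conj().T @ L)                         # effective non-Hermitian Hamiltonian

psi0 = np.array([0.0, 1.0], dtype=complex)               # start in |e>
ts = np.linspace(0.0, 10.0, 501)
flux = np.array([np.real(psi.conj() @ (L.conj().T @ L) @ psi)
                 for psi in (expm(-1j * H_eff * t) @ psi0 for t in ts)])

# Analytically the flux is gamma*exp(-gamma*t) and integrates to one photon.
total = np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(ts))  # trapezoid rule
print(total)  # close to 1: the single excitation is emitted into the waveguide
```

For multi-photon inputs the $a_\text{in}$ cross terms no longer vanish; keeping track of them is precisely what the Green's-function expansion below does.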
From these causality conditions, it immediately follows that \cite{xu2015input}: \begin{subequations} \begin{align} \mathcal{T}\bigg[\prod_{l=1}^L o(t_l) \prod_{m=1}^M s_m(t_m') \bigg] = \mathcal{T}\bigg[\prod_{l=1}^L o(t_l)\bigg] \mathcal{T}\bigg[\prod_{m=1}^M s_m(t_m') \bigg]\label{eq:time_order_output}\\ \mathcal{T}\bigg[\prod_{l=1}^L i(t_l) \prod_{m=1}^M s_m(t_m') \bigg] = \mathcal{T}\bigg[\prod_{m=1}^M s_m(t_m') \bigg]\mathcal{T}\bigg[\prod_{l=1}^L i(t_l)\bigg]\label{eq:time_order_input} \end{align} \end{subequations} where $o(t)$ is an output operator evaluated at time $t$ (i.e.~$a_\text{out}(t)$, $a_\text{out}^\dagger(t)$ or their combination), $i(t)$ is an input operator evaluated at time $t$ (i.e.~$a_\text{in}(t)$, $a_\text{in}^\dagger(t)$ or their combination), $s_m(t)$ is a low-dimensional system operator evaluated at time $t$ in the Heisenberg picture and $\mathcal{T}$ is the chronological time-ordering operator. \subsection{\label{sec:inp-out}Calculating the propagator matrix elements\label{sec:calcprop}} \noindent The dynamics of a quantum system with a Hamiltonian $H(t)$ is completely characterized by its propagator -- in particular, if the propagator is known, then the time evolution of the quantum state of the system can be completely derived from the initial state of the system. In this section, we will focus on computing the interaction picture propagator $U_I(\tau_{+},\tau_{-})$ defined by: \begin{align} U_\textrm{I}(\tau_{+},\tau_{-}) = \exp(\textrm{i}H_\text{wg}\tau_+)U(\tau_+,\tau_-)\exp(-\textrm{i}H_\text{wg}\tau_-) \end{align} where $U(\tau_+,\tau_-)$ is the Schr\"{o}dinger picture propagator for the system: \begin{equation} U(\tau_{+},\tau_{-}) = \mathcal{T}\text{exp}\left[-\textrm{i}\int_{\tau_-}^{\tau_+} H(t) \textrm{d}t\right]\label{eq:U} \end{equation} and $\mathcal{T}$ is the chronological operator that time-orders the infinitesimal products of Eq. \ref{eq:U}. 
In particular, we will compute the matrix elements of the interaction picture propagator in the form: \begin{align} U^{\sigma', \sigma}_{\tau_+,\tau_-}(x_1', x_2' \dots x_M'; x_1, x_2 \dots x_N) \equiv \bra{x_1', x_2' \dots x_M';\sigma'} U_\textrm{I}(\tau_+,\tau_-) \ket{x_1, x_2 \dots x_N; \sigma} \end{align} where $\ket{x_1, x_2 \dots x_P; \sigma} = \ket{x_1, x_2 \dots x_P} \otimes \ket{\sigma}$ with $x_i \in \mathbb{R}$ and $\sigma \in \{\sigma_1, \sigma_2 \dots\}$. Since the spatial Fock state basis for the waveguide is complete, and the basis $\{\ket{\sigma_1}, \ket{\sigma_2}\dots \}$ is a complete basis for the system state by construction, these matrix elements are sufficient to characterize the entire propagator. Writing out $U^{\sigma', \sigma}_{\tau_+,\tau_-}(x_1', x_2' \dots x_M'; x_1, x_2 \dots x_N)$ in terms of the spatial annihilation operators: \begin{align} U^{\sigma', \sigma}_{\tau_+,\tau_-}(x_1' \dots x_M'; x_1 \dots x_N) &= \bra{\text{vac}; \sigma'}\bigg[\prod_{i=1}^M a_{x_i'} \bigg] U_\textrm{I}(\tau_+,\tau_-) \bigg[ \prod_{j=1}^N a_{x_j}^\dagger\bigg ]\ket{\text{vac}; \sigma} \nonumber \\ &= \bra{\text{vac}; \sigma'}\bigg[\prod_{i=1}^M a_{x_i'} \bigg] U_\textrm{I}(\tau_+,0)U(0,\tau_-) \bigg[ \prod_{j=1}^N a_{x_j}^\dagger\bigg ]\ket{\text{vac}; \sigma} \label{eq:U(x',x)}. \end{align} Noting that \begin{equation} \exp(-\textrm{i}H_\text{wg}\tau) a_x \exp(\textrm{i}H_\text{wg}\tau) = \int a_\omega \exp(\textrm{i}\omega(x+\tau)) \frac{\textrm{d}\omega}{\sqrt{2\pi}} \end{equation} and since $U(0,\tau) a_\omega U(\tau,0) = a_\omega(\tau)$, \begin{equation}\label{eq:out_spatial} U_\textrm{I}(0,\tau_+) a_x U_\textrm{I}(\tau_+,0) = a_\text{out}(-x) \; \text{ and } \; U_\textrm{I}(0,\tau_-) a_x U_\textrm{I}(\tau_-,0) = a_\text{in}(-x). 
\end{equation} Therefore \begin{subequations} \begin{align} &\bigg[\prod_{i=1}^M a_{x_i'}\bigg] U_\textrm{I}(\tau_+,0) = U_\textrm{I}(\tau_+,0) \bigg[\prod_{i=1}^M a_\text{out}(-x_i')\bigg] \\ &U_\textrm{I}(0,\tau_-)\bigg[\prod_{j=1}^N a_{x_j}^\dagger\bigg] = \bigg[\prod_{j=1}^N a_\text{in}^\dagger(-x_j)\bigg] U_\textrm{I}(0,\tau_-). \end{align} \end{subequations} Eq.~\ref{eq:U(x',x)} can thus be expressed as \begin{align}\label{eq:prop_mod} &U_{\tau_+,\tau_-}^{\sigma',\sigma}(x_1'\dots x_M'; x_1 \dots x_N) = \bra{\text{vac}; \sigma'}U_\textrm{I}(\tau_+,0)\prod_{i=1}^M a_\text{out}(-x_i') \prod_{j=1}^N a_\text{in}^\dagger(-x_j) U_\textrm{I}(0,\tau_-) \ket{\text{vac}; \sigma}. \end{align} Since all the input or output operators commute with each other, we can introduce a time-ordering operator as shown below: \begin{align}\label{eq:simp_1} &\prod_{j=1}^N a_\text{in}^\dagger(-x_j) =\mathcal{T}\bigg[\prod_{j=1}^N a_\text{in}^\dagger(-x_j) \bigg]\nonumber \\ &=\mathcal{T} \bigg[\prod_{j=1}^N \big(a_\text{out}^\dagger(-x_j) -\textrm{i}L^\dagger(-x_j)\mathcal{I}(-\tau_+<x_j<-\tau_-)\big) \bigg] \nonumber\\ &=\sum_{k=0}^N (-\textrm{i})^{N-k} \sum_{B_k^N}\mathcal{T}\bigg[\prod_{i=1}^k a^\dagger_\text{out}(-x_{B_k^N(i)}) \prod_{j=1}^{N-k} L^\dagger(-x_{\bar{B}_k^N(j)})\bigg] \bigg[\prod_{j=1}^{N-k} \mathcal{I}(-\tau_+<x_{\bar{B}_k^N(j)}<-\tau_-) \bigg] \nonumber \\ &=\sum_{k=0}^N (-\textrm{i})^{N-k} \sum_{B_k^N}\bigg[\prod_{i=1}^k a^\dagger_\text{out}(-x_{B_k^N(i)}) \bigg] \mathcal{T}\bigg[\prod_{j=1}^{N-k} L^\dagger(-x_{\bar{B}_k^N(j)})\bigg] \bigg[\prod_{j=1}^{N-k}\mathcal{I}(-\tau_+<x_{\bar{B}_k^N(j)}<-\tau_-) \bigg] \end{align} where $B_k^N$ is a $k-$element unordered subset of $\{1,2 \dots N\}$ and $\bar{B}_k^N$ is its complement. In the last step, we have used Eq.~\ref{eq:time_order_output} and the fact that output operators evaluated at different time instances commute. 
From Eq.~\ref{eq:out_spatial}, it follows that $a_\text{out}(-x)U_\text{I}(0,\tau_+)\ket{\text{vac}; \sigma'} = U_\text{I}(0,\tau_+)a_x\ket{\text{vac};\sigma} = 0$. Together with the commutator $[a_\text{out}(t), a_\text{out}^\dagger(t')] = \delta(t-t')$, this can be used to prove that: \begin{align}\label{eq:simp_2} &\prod_{i=1}^k a_\text{out}(-x_{B_k^N(i)}) \prod_{j=1}^M a_\text{out}^\dagger(-x_j') U_\text{I}(0,\tau_+) |\text{vac};\sigma'\rangle \nonumber \\ &= \mathcal{I}(M\geq k)\sum_{B_k^M}\bigg[\sum_{P_k}\prod_{i=1}^k \delta(x_{P_k B_k^M(i)}'-x_{B_k^N(i)})\bigg]\prod_{j=1}^{M-k} a_\text{out}^\dagger(-x'_{\bar{B}_k^M(j)}) U_\text{I}(0,\tau_+) \ket{\text{vac};\sigma'} \end{align} where $P_k$ denotes a permutation of a set of $k$ elements, e.g. $P_k B_k^M$ is a permutation of $B_k^M$ which itself is an unordered $k$-element subset of $\{1,2 \dots M\}$. Substituting Eqs.~\ref{eq:simp_1} and \ref{eq:simp_2} into Eq.~\ref{eq:prop_mod}, we obtain: \begin{align}\label{eq:U_almost} &U_{\tau_+,\tau_-}^{\sigma',\sigma}(x_1'\dots x_M'; x_1 \dots x_N) \nonumber \\ &=\sum_{k=0}^N (-\textrm{i})^{N-k}\mathcal{I}(M\geq k)\sum_{B_k^N, B_k^M} \bigg[\sum_{P_k} \prod_{i=1}^k \delta(x'_{P_k B_k^M(i)}-x_{B_k^N(i)}) \bigg]\times \nonumber \\ & \quad\bra{\text{vac}; \sigma'}U_\textrm{I}(\tau_+,0) \bigg[\prod_{j=1}^{M-k}a_\text{out}(-x'_{\bar{B}_k^M(j)})\bigg]\mathcal{T}\bigg[\prod_{s=1}^{N-k} L^\dagger(-x_{\bar{B}_k^N(s)})\bigg]U_\textrm{I}(0,\tau_-) \ket{\text{vac}; \sigma}\times \nonumber \\ & \quad\prod_{s=1}^{N-k}\mathcal{I}(-\tau_+<x_{\bar{B}_k^N(s)}<-\tau_-) \nonumber \\ &=\sum_{k=0}^N (-\textrm{i})^{N-k}\mathcal{I}(M\geq k)\sum_{B_k^N, B_k^M} \bigg[\sum_{P_k} \prod_{i=1}^k \delta(x'_{P_k B_k^M(i)}-x_{B_k^N(i)}) \bigg]\times \nonumber \\ & \quad\bra{\text{vac}; \sigma'}U_\textrm{I}(\tau_+,0) \mathcal{T}\bigg[\prod_{j=1}^{M-k}a_\text{out}(-x'_{\bar{B}_k^M(j)}) \prod_{s=1}^{N-k} L^\dagger(-x_{\bar{B}_k^N(s)})\bigg]U_\textrm{I}(0,\tau_-) \ket{\text{vac}; \sigma}\times \nonumber \\ 
&\quad\prod_{s=1}^{N-k}\mathcal{I}(-\tau_+<x_{\bar{B}_k^N(s)}<-\tau_-) \end{align} wherein we have used Eq.~\ref{eq:time_order_output} in the last step to pull the output operators into the time-ordering operator. Using Eq.~\ref{eq:inout}: \begin{align}\label{eq:simp_3} &\mathcal{T}\bigg[\prod_{j=1}^{M-k}a_\text{out}(-x'_{\bar{B}_k^M(j)}) \prod_{s=1}^{N-k} L^\dagger(-x_{\bar{B}_k^N(s)}) \bigg] U_\textrm{I}(0,\tau_-) \ket{\text{vac}; \sigma} \nonumber \\ &= \mathcal{T}\bigg[\prod_{j=1}^{M-k}\big(a_\text{in}(-x'_{\bar{B}_k^M(j)})-\textrm{i}L(-x'_{\bar{B}_k^M(j)})\mathcal{I}(-\tau_+ < x'_{\bar{B}_k^M(j)}< -\tau_-)\big) \prod_{s=1}^{N-k} L^\dagger(-x_{\bar{B}_k^N(s)}) \bigg] U_\textrm{I}(0,\tau_-) \ket{\text{vac}; \sigma} \nonumber \\ &=(-\textrm{i})^{M-k}\mathcal{T}\bigg[\prod_{j=1}^{M-k}L(-x'_{\bar{B}_k^M(j)}) \prod_{s=1}^{N-k}L^\dagger(-x_{\bar{B}_k^N(s)}) \bigg]U_\textrm{I}(0,\tau_-)\ket{\text{vac};\sigma}\prod_{s=1}^{M-k}\mathcal{I}(-\tau_+ < x_{\bar{B}_k^M(s)}'< -\tau_-) \end{align} wherein we have used Eq.~\ref{eq:time_order_input} and the fact that input operators at different time instances commute to move all the input operators in the time-ordered product to the right and Eq.~\ref{eq:out_spatial} to set $a_\text{in}(-x)U_I(0,\tau_-)\ket{\text{vac};\sigma} = U_I(0,\tau_-)a_x\ket{\text{vac}; \sigma} = 0 $. 
Substituting Eq.~\ref{eq:simp_3} into Eq.~\ref{eq:U_almost} \begin{align}\label{eq:prop_green_func} &U_{\tau_+,\tau_-}^{\sigma',\sigma}(x_1'\dots x_M'; x_1 \dots x_N) =\sum_{k=0}^N (-\textrm{i})^{M+N-2k}\mathcal{I}(M\geq k) \times \nonumber \\ &\qquad \sum_{B_k^N, B_k^M} \bigg[\sum_{P_k} \prod_{i=1}^k \delta(x'_{P_k B_k^M(i)}-x_{B_k^N(i)}) \bigg] \bigg[\prod_{i=1}^{M-k} \mathcal{I}(-\tau_+<x'_{\bar{B}^M_k(i)}<-\tau_-)\bigg] \bigg[\prod_{j=1}^{N-k} \mathcal{I}(-\tau_+<x_{\bar{B}^N_k(j)}<-\tau_-)\bigg] \times \nonumber \\ & \qquad\mathcal{G}^{\sigma',\sigma}_{\tau_+,\tau_-}(-x'_{\bar{B}_k^M(1)}, -x'_{\bar{B}_k^M(2)} \dots -x'_{\bar{B}_k^M(M-k)};-x_{\bar{B}_k^N(1)}, -x_{\bar{B}_k^N(2)} \dots -x_{\bar{B}_k^N(N-k)}) \end{align} where $\mathcal{G}_{\tau_+,\tau_-}^{\sigma',\sigma}(t_1'\dots t_M'; t_1 \dots t_N)$ is the system Green's function defined by: \begin{align} \label{eq:green_func} \mathcal{G}_{\tau_+,\tau_-}^{\sigma',\sigma}(t_1'\dots t_M'; t_1 \dots t_N) = \bra{\text{vac};\sigma'}U_I(\tau_+,0)\mathcal{T}\bigg[\prod_{i=1}^M L(t_i') \prod_{j=1}^N L^\dagger(t_j) \bigg]U_I(0,\tau_-)\ket{\text{vac}; \sigma}. \end{align} Note that the Green's function depends entirely on the dynamics of the low-dimensional system under consideration --- we have thus reduced the problem of computing the propagator for the entire system to the problem of computing the dynamics of only the low-dimensional system. Finally, after having computed the propagator in the interaction picture, $U_I(\tau_+, \tau_-)$, it is a simple matter to compute the propagator in the Schr\"{o}dinger picture $U(\tau_+, \tau_-)$. 
To do so, we use the following property of the spatial Fock state: \begin{align} \exp(-\textrm{i}H_\text{wg} \tau) \ket{x_1, x_2 \dots x_N} = \ket{x_1+\tau, x_2+\tau \dots x_N+\tau} \end{align} which intuitively states that an excitation created at a position $x$ in the waveguide propagates along the positive $x$ direction with velocity equal to the group velocity of the waveguide mode (which in this case is taken to be 1). Thus: \begin{align} &\bra{x_1'\dots x_M'; \sigma'}U(\tau_+, \tau_-) \ket{x_1 \dots x_N; \sigma} = U_{\tau_+, \tau_-}^{\sigma',\sigma}(x_1'-\tau_+, x_2'-\tau_+ \dots x_M'-\tau_+; x_1-\tau_-, x_2-\tau_- \dots x_N-\tau_-). \end{align} \subsection{\label{sec:mult-wg}Extension to multiple waveguides} \noindent Local systems coupled to multiple waveguide modes (which can either be physically separate waveguides or orthogonal modes of the same waveguide), diagrammatically shown in Fig.~\ref{fig:schematic}(b), can be described by Hamiltonians of the form: \begin{equation}\label{mult_loss_channels} H = H_\text{sys}(t) + \sum_{\mu =1}^{N_L}\int \omega a_{\omega,\mu}^\dagger a_{\omega,\mu} \textrm{d}\omega +\sum_{\mu=1}^{N_L}\int \big( a_{\omega, \mu} L_\mu^\dagger+L_\mu a_{\omega, \mu}^\dagger \big) \frac{\textrm{d}\omega}{\sqrt{2\pi}} \end{equation} where $N_L$ is the total number of loss channels and $a_{\omega, \mu}$ is the plane wave annihilation operator for the $\mu^\text{th}$ waveguide, which couples to the low-dimensional system through the operator $L_\mu$. A complete basis for the waveguide modes can now be constructed using the creation operators for the different waveguides: \begin{align} |\{x_{1},\mu_1\}, \{x_{2},\mu_2\} \dots \{x_{N}, \mu_N\}\rangle = \prod_{i=1}^N a_{x_i, \mu_i}^\dagger \ket{\text{vac}} \end{align} where $a_{x,\mu} = \int a_{\omega, \mu}\exp(\textrm{i}\omega x) \textrm{d}\omega/\sqrt{2\pi}$ is the spatial annihilation operator for the $\mu^\text{th}$ waveguide mode. 
These annihilation operators have the commutators $[a_{x,\mu}, a_{x',\mu'}] = 0$ and $[a_{x,\mu}, a_{x',\mu'}^\dagger] = \delta(x-x')\delta_{\mu,\mu'}$. The propagator can thus be completely characterized by matrix elements of the form: \begin{align} &U_{\tau_+, \tau_-}^{\sigma', \sigma}(\{x_{1}',\mu_1'\},\{x_{2}',\mu_2'\}\dots \{x_{M}',\mu_M'\}; \{x_{1},\mu_1\},\{x_{2},\mu_2\}\dots \{x_{N},\mu_N\}) \nonumber \\ &= \bra{\text{vac}; \sigma'}a_{x_1',\mu_1'}a_{x_2',\mu_2'}\dots a_{x_M',\mu_M'} U_\textrm{I}(\tau_+, \tau_-)a_{x_1,\mu_1}^\dagger a_{x_2,\mu_2}^\dagger \dots a_{x_N,\mu_N}^\dagger \ket{\text{vac};\sigma}. \end{align} Repeating the procedure described in Section \ref{sec:calcprop}, it can easily be shown that: \begin{align}\label{eq:prop_green_func_multiple} &U_{\tau_+,\tau_-}^{\sigma',\sigma}(\{x_1',\mu_1'\}\dots \{x_M',\mu_M'\}; \{x_1,\mu_1\} \dots \{x_N,\mu_N\}) =\sum_{k=0}^N (-\textrm{i})^{M+N-2k}\mathcal{I}(M\geq k) \times \\ &\qquad\sum_{B_k^N, B_k^M} \bigg[\sum_{P_k} \prod_{i=1}^k \delta(x'_{P_k B_k^M(i)}-x_{B_k^N(i)})\delta_{\mu_{P_k B_k^M(i)}', \mu_{B_k^N(i)}} \bigg] \times \nonumber \\ &\qquad\bigg[\prod_{i=1}^{M-k} \mathcal{I}(-\tau_+<x'_{\bar{B}^M_k(i)}<-\tau_-)\bigg] \bigg[\prod_{j=1}^{N-k} \mathcal{I}(-\tau_+<x_{\bar{B}^N_k(j)}<-\tau_-)\bigg] \times \nonumber \\ & \qquad\mathcal{G}^{\sigma',\sigma}_{\tau_+,\tau_-}(\{-x'_{\bar{B}_k^M(1)}, \mu_{\bar{B}_k^M(1)}'\}, \dots \{-x'_{\bar{B}_k^M(M-k)},\mu'_{\bar{B}_k^M(M-k)}\};\{-x_{\bar{B}_k^N(1)},\mu_{\bar{B}_k^N(1)}\} \dots \{-x_{\bar{B}_k^N(N-k)},\mu_{\bar{B}_k^N(N-k)}\})\nonumber \end{align} where: \begin{align} &\mathcal{G}_{\tau_+,\tau_-}^{\sigma',\sigma}(\{t_1',\mu_1'\} \dots \{t_M',\mu_M'\}; \{t_1,\mu_1\} \dots \{t_N,\mu_N\}) =\bra{\text{vac};\sigma'}U_\textrm{I}(\tau_+,0)\mathcal{T}\bigg[\prod_{i=1}^M L_{\mu_i'}(t_i') \prod_{j=1}^N L_{\mu_j}^\dagger(t_j) \bigg]U_\textrm{I}(0,\tau_-)\ket{\text{vac}; \sigma}. 
\end{align} \subsection{\label{sec:green-func}Efficient computation of the Green's functions} \begin{figure} \caption{Diagrammatic representation of approximating the Hilbert space of the bath modes with a discrete Hilbert space.} \label{fig:fig_apprx} \end{figure} \noindent While Eqs.~\ref{eq:prop_green_func} and \ref{eq:prop_green_func_multiple} express the propagator entirely in terms of the low-dimensional system operators, the time evolution of these operators requires computing the time evolution of the entire system, which remains computationally intractable due to the high dimensionality of the Hilbert space of the waveguide modes. In this section, we show that these Green's functions can equivalently be computed by evolving the low-dimensional system under an effective non-Hermitian Hamiltonian. We consider a general Green's function of the form: \begin{equation}\label{eq:gen_green_func} \mathcal{G}_{\tau_+, \tau_-}^{\sigma',\sigma}(t_1, t_2 \dots t_N) = \bra{\text{vac};\sigma' }U_\textrm{I}(\tau_+,0)\mathcal{T}\big[s_1(t_1) s_2(t_2) \dots s_N(t_N)\big] U_\textrm{I}(0,\tau_-)\ket{\text{vac};\sigma } \end{equation} where $s_i$ are operators defined in the low-dimensional system's Hilbert space and $\tau_-<t_i<\tau_+ \ \forall \ i\in\{1,2\dots N\}$. Let $P$ be a permutation of $\{1,2,3 \dots N\}$ such that $t_{P(1)}\geq t_{P(2)}\geq \dots \geq t_{P(N)}$, then \begin{equation} \mathcal{G}^{\sigma', \sigma}_{\tau_+,\tau_-}(t_1, t_2 \dots t_N) = \langle \text{vac}; \sigma' | U_\textrm{I}(\tau_+,0)\bigg[ \prod_{i=1}^N s_{P(i)}(t_{P(i)})\bigg] U_\textrm{I}(0,\tau_-) |\text{vac}; \sigma \rangle. \end{equation} We note that the system operators $s_i$ commute with the waveguide Hamiltonian $H_\text{wg}$ and thus $s_i(t) = U(0,t) s_i U(t,0) = U(0,t) \exp(-\textrm{i}H_\text{wg}t) s_i\exp(\textrm{i}H_\text{wg}t) U(t,0) = U_\textrm{I}(0,t) s_i U_\textrm{I}(t,0)$. 
The Green's function can thus be expressed as: \begin{align}\label{green_func_schro} \mathcal{G}_{\tau_+, \tau_-}^{\sigma',\sigma}(t_1, t_2 \dots t_N) = \langle \text{vac}; \sigma' | U_\textrm{I}(\tau_+,t_{P(1)}) \bigg[\prod_{i=1}^{N-1} s_{P(i)}U_\textrm{I}(t_{P(i)},t_{P(i+1)}) \bigg] s_{P(N)}U_\textrm{I}(t_{P(N)},\tau_-) |\text{vac}; \sigma \rangle. \end{align} We next show that the vacuum expectation in this equation can be explicitly evaluated --- as a starting point, we express the interaction picture propagator in terms of the interaction picture Hamiltonian: \begin{align} U_\textrm{I}(t_1, t_2) = \mathcal{T}\exp\bigg[-\textrm{i}\int_{t_2}^{t_1} H_\textrm{I}(t') \textrm{d}t' \bigg] \end{align} where \begin{align}\label{eq:int_hamil} H_\textrm{I}(t) = H_\text{sys}(t)+\sum_{\mu=1}^{N_L} \int \big( a_{\omega, \mu} L_\mu^\dagger \exp(-\textrm{i}\omega t)+L_\mu a_{\omega, \mu}^\dagger \exp(\textrm{i}\omega t) \big) \frac{\textrm{d}\omega}{\sqrt{2\pi}}. \end{align} In terms of the spatial annihilation operator, $a_{x,\mu}$, the interaction picture Hamiltonian can be rewritten as: \begin{align} H_\textrm{I}(t) = H_\text{sys}(t) + \sum_{\mu=1}^{N_L}\big(a_{x = -t,\mu} L_\mu^\dagger+L_\mu a_{x = -t,\mu}^\dagger \big). \end{align} To proceed, we approximate the high-dimensional continuum Hilbert space of the waveguides by a discrete Hilbert space with a countably infinite basis (Fig.~\ref{fig:fig_apprx}). This is achieved by introducing a coarse graining parameter $\delta x$, and defining the `coarse grained operators' $A_{\mu}[n]$ via: \begin{align} A_{\mu}[n] = \int\limits_{(n-1)\delta x}^{n\delta x}a_{x,\mu}\frac{\textrm{d}x}{\sqrt{\delta x}}. \end{align} These operators satisfy the commutators $[A_{\mu}[n], A_{\mu'}^\dagger[n']]= \delta_{\mu,\mu'}\delta_{n,n'}, [A_{\mu}[n], A_{\mu'}[n']] = 0$ and in the limit of $\delta x \to 0$, $A_\mu[n]/\sqrt{\delta x}$ would approach the continuum operator $a_{x = n\delta x,\mu}$.
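As a quick numerical illustration of this coarse graining (our own sketch, not part of the derivation; the Gaussian profile and bin widths are arbitrary choices), one can check that the binned amplitudes $c_n = \int_{(n-1)\delta x}^{n\delta x}\psi(x)\,\textrm{d}x/\sqrt{\delta x}$ of a unit-norm single-photon wavefunction recover its norm as $\delta x \to 0$:

```python
import numpy as np

def bin_amplitudes(psi, dx, xmin=-10.0, xmax=10.0, sub=64):
    """c_n = (1/sqrt(dx)) * integral of psi over the n-th bin, via a midpoint rule."""
    c = []
    for a in np.arange(xmin, xmax, dx):
        xs = a + (np.arange(sub) + 0.5) * dx / sub   # midpoint samples inside the bin
        c.append(psi(xs).sum() * (dx / sub) / np.sqrt(dx))
    return np.array(c)

# Unit-norm Gaussian single-photon amplitude (an arbitrary test profile)
psi = lambda x: np.pi ** -0.25 * np.exp(-x ** 2 / 2)

norm_coarse = np.sum(np.abs(bin_amplitudes(psi, dx=0.5)) ** 2)
norm_fine = np.sum(np.abs(bin_amplitudes(psi, dx=0.05)) ** 2)
# Both sums approach int |psi|^2 dx = 1; the deviation shrinks as O(dx^2)
```

The sum $\sum_n |c_n|^2$ is the norm of the state $\sum_n c_n A^\dagger_\mu[n]\ket{\text{vac}}$, so its convergence to 1 reflects the convergence of the coarse-grained basis to the continuum one.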
The discrete Hilbert space approximating the waveguide's Hilbert space would then be the space spanned by the tensor product of the Fock states created by each of the operators $A_\mu^\dagger[n], \ \forall \mu \in \{1, 2\dots N_L\}$ and $n \in \mathbb{Z}$. In particular, the vacuum state $\ket{\text{vac}}$ for the Hilbert space of the waveguide is approximated by \begin{equation} \bigotimes_{\mu = 1}^{N_L} \bigotimes_{n=-\infty}^{\infty} \ket{\text{vac}_{n,\mu}} \end{equation} where $\ket{\text{vac}_{n,\mu}}$ is the vacuum state corresponding to the operator $A_\mu[n]$. The interaction Hamiltonian (Eq.~\ref{eq:int_hamil}) is then approximated by $H_\textrm{I}(t; \delta x)$, which reduces to it in the limit of $\delta x \to 0$ and is given by: \begin{align}\label{eq:disc_hamil} H_\textrm{I}(t; \delta x) = H_\text{sys}(t) + \frac{1}{\sqrt{\delta x}}\bigg\{\sum_{\mu=1}^{N_L} \big(A_\mu\big[\lceil -t/\delta x \rceil\big] L_\mu^\dagger+L_\mu A_\mu^\dagger \big[\lceil -t/\delta x \rceil\big] \big) \bigg \} . \end{align} Using the notation $n_+ = \lceil \tau_+/\delta x\rceil$, $n_- = \lceil \tau_-/\delta x \rceil$, $n_i = \lceil t_i/\delta x \rceil$ and defining \begin{align} U_\textrm{I}[n_1, n_2] = \mathcal{T}\exp\bigg(-\textrm{i}\int \limits_{n_2 \delta x}^{n_1 \delta x}H_\textrm{I}(t; \delta x) \textrm{d}t \bigg) \end{align} the Green's function in Eq.~\ref{green_func_schro} can be expressed as: \begin{align}\label{eq:green_func_disc} \mathcal{G}^{\sigma', \sigma}_{\tau_+,\tau_-}(t_1, t_2 \dots t_N) & = \lim_{\delta x \to 0} \bigg[\bigg\{\bigotimes_{\mu = 1}^{N_L} \bigotimes_{n=-\infty}^{\infty} \bra{\text{vac}_{n,\mu}} \bigg\}\otimes \bra{\sigma'}\bigg]U_\textrm{I}[n_+, n_{P(1)}]\bigg[\prod_{i=1}^{N-1} s_{P(i)}U_\textrm{I}[n_{P(i)}, n_{P(i+1)}]\bigg]\times \nonumber \\ & \qquad s_{P(N)}U_\textrm{I}[n_{P(N)}, n_-] \bigg[\bigg\{\bigotimes_{\mu = 1}^{N_L} \bigotimes_{n=-\infty}^{\infty} \ket{\text{vac}_{n,\mu}}\bigg\}\otimes \ket{\sigma} \bigg].
\end{align} Since, for $n\delta x < t <(n+1) \delta x$, $H_\textrm{I}(t; \delta x)$ depends only on $A_{\mu}[-n]$, the factor $U_\textrm{I}[n+1, n]$ acts nontrivially only on $\ket{\text{vac}_{-n,\mu}} \ \forall \mu$. Using this together with the decomposition $U_\textrm{I}[n_1, n_2] = U_\textrm{I}[n_1,n_1-1]U_\textrm{I}[n_1-1, n_1-2] \dots U_\textrm{I}[n_2+1, n_2]$, we can rewrite Eq.~\ref{eq:green_func_disc} as: \begin{align}\label{eq:gfunc_eff_single_step} \mathcal{G}_{\tau_+, \tau_-}^{\sigma',\sigma}(t_1, t_2 \dots t_N) & = \lim_{\delta x \to 0} \bra{\sigma'} \bigg[ \prod_{j = -n_++1}^{-n_{P(1)}}U_\textrm{eff}[-j+1,-j] \bigg] \bigg[\prod_{i=1}^{N-1}\bigg\{s_{P(i)}\prod_{j=-n_{P(i)}+1}^{-n_{P(i+1)}} U_\text{eff}[-j+1,-j] \bigg\}\bigg]\times \nonumber \\ & \qquad s_{P(N)}\bigg[ \prod_{j=-n_{P(N)}+1}^{-n_{-}}U_\text{eff}[-j+1,-j]\bigg]\ket{\sigma} \end{align} where \begin{align} U_\text{eff}[n+1,n] = \bigg[\bigotimes_{\mu=1}^{N_L}\bra{\text{vac}_{-n,\mu}}\bigg] U_\textrm{I}[n+1,n]\bigg[\bigotimes_{\mu=1}^{N_L}\ket{\text{vac}_{-n,\mu}}\bigg]. \end{align} This expression can be further simplified by expanding $U_\textrm{I}[n+1,n]$ into a Dyson series in terms of the interaction Hamiltonian: \begin{align} U_\textrm{I}[n+1,n] = \textrm{I}+\sum_{m=1}^{\infty}(-\textrm{i})^m \int\limits_{t_1 = n\delta x}^{(n+1)\delta x}\int\limits_{t_2 = t_1}^{(n+1)\delta x} \dots \int\limits_{t_m =t_{m-1}}^{(n+1)\delta x} H_\textrm{I}(t_m; \delta x)H_\textrm{I}(t_{m-1}; \delta x) \dots H_\textrm{I}(t_1; \delta x) \textrm{d}t_1 \dots \textrm{d}t_m .
\end{align} It follows from Eq.~\ref{eq:disc_hamil} that the vacuum expectations corresponding to the first two terms in the summation in the Dyson series evaluate to: \begin{subequations} \begin{align} &\bigg[\bigotimes_{\mu=1}^{N_L}\bra{\text{vac}_{-n,\mu}}\bigg] \int\limits_{n\delta x}^{(n+1)\delta x}H_\textrm{I}(t'; \delta x) \textrm{d}t' \bigg[\bigotimes_{\mu=1}^{N_L}\ket{\text{vac}_{-n,\mu}}\bigg] = \int\limits_{n\delta x}^{(n+1)\delta x}H_\text{sys}(t') \textrm{d}t' \\ &\bigg[\bigotimes_{\mu=1}^{N_L}\bra{\text{vac}_{-n,\mu}}\bigg] \int\limits_{t_1 = n\delta x}^{(n+1)\delta x} \int\limits_{t_2 = t_1} ^{(n+1)\delta x} H_\textrm{I}(t_2; \delta x) H_\textrm{I}(t_1; \delta x) \textrm{d}t_1 \textrm{d}t_2 \bigg[\bigotimes_{\mu=1}^{N_L}\ket{\text{vac}_{-n,\mu}}\bigg] = \frac{1}{2} \sum_{\mu=1}^{N_L} L_\mu^\dagger L_\mu \delta x + \mathcal{O}(\delta x^2). \end{align} \end{subequations} Moreover, it can easily be seen from Eq.~\ref{eq:int_hamil} that the vacuum expectations of higher order terms in the Dyson series do not have any contributions that are first order in $\delta x$. Therefore, \begin{align}\label{eq:eff_single_step} U_\textrm{eff}[n+1, n] = \textrm{I}-\textrm{i}\int\limits_{n\delta x}^{(n+1)\delta x}H_\text{eff}(t') \textrm{d}t' +\mathcal{O}(\delta x^2) = \mathcal{T} \exp \bigg[-\textrm{i}\int \limits_{n\delta x}^{(n+1)\delta x}H_\text{eff}(t) \textrm{d}t \bigg] + \mathcal{O}(\delta x^2) \end{align} where \begin{align} H_\text{eff}(t) = H_\text{sys}(t)-\frac{\textrm{i}}{2}\sum_\mu L_\mu^\dagger L_\mu.
\end{align} Finally, substituting Eq.~\ref{eq:eff_single_step} into Eq.~\ref{eq:gfunc_eff_single_step} and evaluating the limit, we obtain: \begin{align}\label{eq:greens_func_eff} \mathcal{G}_{\tau_+, \tau_-}^{\sigma',\sigma}(t_1, t_2 \dots t_N) = \langle \sigma' | U_\textrm{eff}(\tau_+,t_{P(1)}) \bigg[\prod_{i=1}^{N-1} s_{P(i)}U_\textrm{eff}(t_{P(i)},t_{P(i+1)}) \bigg] s_{P(N)}U_\textrm{eff}(t_{P(N)},\tau_-) | \sigma \rangle \end{align} where \begin{align} U_\text{eff}(t_1, t_2) = \mathcal{T}\exp\bigg[-\textrm{i}\int\limits_{t_2}^{t_1}H_\text{eff}(t) \textrm{d}t \bigg]. \end{align} Eq.~\ref{eq:greens_func_eff} is an expectation evaluated entirely in the Hilbert space of the low-dimensional system, which makes it computationally tractable. For many systems of interest, it is often easier to work with a Heisenberg-like form of the Green's function, which can be obtained by defining $\tilde{s}_i(t) = U_\text{eff}(0,t)s_i U_\text{eff}(t,0)$, which satisfies the Heisenberg-like equations of motion: \begin{align} \dot{\tilde{s}}_i(t) = -\textrm{i}[\tilde{H}_\text{eff}(t), \tilde{s}_i(t)] \end{align} where $\tilde{H}_\text{eff}(t) = U_\text{eff}(0,t) H_\text{eff}(t) U_\text{eff}(t,0)$. In terms of these operators, Eq.~\ref{eq:greens_func_eff} can be reduced to \begin{align} \mathcal{G}_{\tau_+,\tau_-}^{\sigma',\sigma}(t_1, t_2 \dots t_N) &= \bra{\sigma'}U_\text{eff}(\tau_+,0) \bigg[\prod_{i=1}^N \tilde{s}_{P(i)}(t_{P(i)}) \bigg] U_\text{eff}(0,\tau_{-}) \ket{\sigma} \nonumber \\ &= \bra{\sigma'}U_\text{eff}(\tau_+,0) \mathcal{T}\bigg[\prod_{i=1}^N \tilde{s}_i(t_{i}) \bigg] U_\text{eff}(0,\tau_{-}) \ket{\sigma} . \end{align} \section{\label{sec:scat_matrix}Scattering matrices} \noindent The scattering matrix is a useful quantity to characterize the response of the low-dimensional system to wave-packets incident from the waveguide modes coupling to it.
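Before specializing to scattering matrix elements, it is instructive to sketch numerically how the effective-Hamiltonian form of the Green's functions (Eq.~\ref{eq:greens_func_eff}) is evaluated in practice. The toy example below (our own illustration, not the paper's numerics) assumes a single two-level emitter with $H_\text{sys}=\delta_a\sigma^\dagger\sigma$ and $L=\sqrt{\gamma}\sigma$, and checks the two-point function $\bra{g}\tilde\sigma(t')\tilde\sigma^\dagger(t)\ket{g}$ against its closed form $\exp[-(\textrm{i}\delta_a+\gamma/2)(t'-t)]$:

```python
import numpy as np

gamma, delta_a = 1.0, 0.7
sigma = np.array([[0, 1], [0, 0]], dtype=complex)      # |g><e| in the basis (|g>, |e>)
L = np.sqrt(gamma) * sigma                             # waveguide coupling operator
H_eff = delta_a * sigma.conj().T @ sigma - 0.5j * (L.conj().T @ L)

def U_eff(t1, t2):
    # U_eff(t1, t2) = exp(-i H_eff (t1 - t2)); eigendecomposition handles non-Hermitian H_eff
    w, V = np.linalg.eig(H_eff)
    return V @ np.diag(np.exp(-1j * w * (t1 - t2))) @ np.linalg.inv(V)

g = np.array([1.0, 0.0], dtype=complex)
t_emit, t_detect = 0.4, 1.9
# G(t', t) = <g| U_eff(0,t') sigma U_eff(t',t) sigma^dag U_eff(t,0) |g>, with t' > t
G = g.conj() @ U_eff(0, t_detect) @ sigma @ U_eff(t_detect, t_emit) \
    @ sigma.conj().T @ U_eff(t_emit, 0) @ g
G_exact = np.exp(-(1j * delta_a + gamma / 2) * (t_detect - t_emit))
```

The eigendecomposition route is used because $H_\text{eff}$ is non-Hermitian, so a Hermitian-only matrix exponential would not apply.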
The scattering matrix $\Sigma$ can be computed from the interaction picture propagator by taking the limits $\tau_{-} \to -\infty$ and $\tau_{+} \to \infty$: \begin{align} \Sigma = \lim_{\substack{\tau_+ \to \infty \\ \tau_- \to -\infty}} U_\textrm{I}(\tau_+,\tau_-). \end{align} The scattering matrix can be completely characterized by matrix elements of the form: \begin{align} &\Sigma^{\sigma', \sigma}(\{x_1',\mu_1'\}\dots \{x_M',\mu_M'\}; \{x_1,\mu_1\} \dots \{x_N,\mu_N\}) \nonumber \\ &=\bra{\{x_1',\mu_1'\}\dots \{x_M',\mu_M'\};\sigma'}\Sigma \ket{ \{x_1,\mu_1\} \dots \{x_N,\mu_N\};\sigma}. \end{align} In order to compute the scattering matrix from the propagator analyzed in the previous section, we make the definition of the system Hamiltonian $H_\text{sys}(t)$ more explicit --- in particular, we assume that it is `time-dependent' only for $t \in [0,T_P]$. Physically, this might correspond to a system like a two-level atom being driven by a coherent pulse which vanishes outside $[0,T_P]$. Mathematically, this is equivalent to writing $H_\text{sys}(t)$ as: \begin{align} H_\text{sys}(t) = \begin{cases} H_\text{sys}^{0}+H_\text{sys}^P(t) & \text{for} \ 0\leq t\leq T_P \\ H_\text{sys}^{0} & \text{otherwise} \end{cases}.
\end{align} For a low-dimensional system Hamiltonian of this form, using Eq.~\ref{eq:prop_green_func_multiple} the scattering matrix element can be expressed as: \begin{align} &\Sigma^{\sigma',\sigma}(\{x_1',\mu_1'\}\dots \{x_M',\mu_M'\}; \{x_1,\mu_1\} \dots \{x_N,\mu_N\}) =\sum_{k=0}^N (-\textrm{i})^{M+N-2k}\mathcal{I}(M\geq k) \times \\ &\quad\sum_{B_k^N, B_k^M} \bigg[\sum_{P_k} \prod_{i=1}^k \delta(x'_{P_k B_k^M(i)}-x_{B_k^N(i)})\delta_{\mu_{P_k B_k^M(i)}', \mu_{B_k^N(i)}} \bigg] \times \nonumber \\ &\quad \mathcal{G}^{\sigma',\sigma}_{\infty,-\infty}(\{-x'_{\bar{B}_k^M(1)}, \mu_{\bar{B}_k^M(1)}'\}, \dots \{-x'_{\bar{B}_k^M(M-k)},\mu'_{\bar{B}_k^M(M-k)}\};\{-x_{\bar{B}_k^N(1)},\mu_{\bar{B}_k^N(1)}\} \dots \{-x_{\bar{B}_k^N(N-k)},\mu_{\bar{B}_k^N(N-k)}\})\nonumber \end{align} where \begin{align} &\mathcal{G}_{\infty,-\infty}^{\sigma',\sigma}(\{t_1',\mu_1'\} \dots \{t_M',\mu_M'\}; \{t_1,\mu_1\} \dots \{t_N,\mu_N\}) \nonumber \\ &=\lim_{\substack{\tau_+ \to \infty \\ \tau_- \to -\infty}}\bra{\sigma'}U_\textrm{eff}^0(\tau_+,T_P)U_\text{eff}(T_P,0)\mathcal{T}\bigg[\prod_{i=1}^M \tilde{L}_{\mu_i'}(t_i') \prod_{j=1}^N \widetilde{L^\dagger}_{\mu_j}(t_j) \bigg]U_\textrm{eff}^0(0,\tau_-)\ket{ \sigma} \end{align} with \begin{align} U_\text{eff}^0(t_1,t_2) = \exp\bigg[-\bigg(\textrm{i}H_\text{sys}^0+\sum_\mu \frac{1}{2} L_\mu^\dagger L_\mu \bigg)(t_1-t_2)\bigg].
\end{align} While the orthonormal basis $\{\ket{\sigma_1}, \ket{\sigma_2} \dots \ket{\sigma_S}\}$ for the low-dimensional system's Hilbert space used for computing the propagator could be arbitrarily chosen as long as it is complete, for the purpose of computing the scattering matrix, we make this basis more explicit by expressing it as a union of a set of `ground states' $\{\ket{g_1}, \ket{g_2} \dots \ket{g_{S_g}}\}$ and `excited states' $\{\ket{e_1}, \ket{e_2} \dots \ket{e_{S_e}}\}$ which are all eigenstates of the Hamiltonian $H_\text{sys}^0$: \begin{align} H_\text{sys}^0\ket{g_n} = \varepsilon_{g_n}\ket{g_n} \ \text{and} \ H_\text{sys}^0 \ket{e_n} = \varepsilon_{e_n}\ket{e_n}. \end{align} Moreover, the ground states and the excited states also satisfy: \begin{subequations}\label{eq:def_states} \begin{align} &L_\mu \ket{g_n} = 0 \ \forall \ \mu \in \{1,2 \dots N_L\}, n \in \{1, 2 \dots S_g\} \\ &\forall \ n \in \{1,2 \dots S_e \} \ \exists \ \mu \in \{1,2 \dots N_L\} \ \text{such that} \ L_\mu \ket{e_n} \neq 0 \\ & \braket{e_m | g_n} = 0 \ \forall \ m \in \{1,2 \dots S_e \}, n \in \{1,2 \dots S_g \} \end{align} \end{subequations} where $L_\mu$ are the operators through which the low-dimensional system couples to the waveguide modes.
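This classification is easy to carry out numerically: partition the eigenstates of $H_\text{sys}^0$ according to whether every coupling operator annihilates them. The sketch below (our own toy example, with an assumed three-level lambda configuration and illustrative energies) does exactly that:

```python
import numpy as np

# Toy lambda configuration, basis ordering (|g_1>, |g_2>, |e>); values are illustrative
H0 = np.diag([0.3, 0.0, 1.5])                      # H_sys^0 written in its eigenbasis
gamma = 1.0
e = np.eye(3)
L_ops = [np.sqrt(gamma) * np.outer(e[1], e[2])]    # single coupling sqrt(gamma)|g_2><e|

ground, excited = [], []
for n in range(3):
    if all(np.allclose(L @ e[n], 0) for L in L_ops):
        ground.append(n)      # L_mu |psi> = 0 for every mu: a ground state
    else:
        excited.append(n)     # some L_mu |psi> != 0: an excited state
```

Here both $\ket{g_1}$ and $\ket{g_2}$ are classified as ground states even though they are not degenerate, since neither couples to the waveguide through $L_\mu$.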
An immediate consequence of the definition of the ground and excited states (Eq.~\ref{eq:def_states}), together with the positive-definiteness of the operator $\sum_\mu L_\mu^\dagger L_\mu$ (which is also the non-Hermitian part of the effective Hamiltonian) on the subspace spanned by the excited states, is that under evolution with the propagator $U_\text{eff}^0(t_1,t_2)$ an excited state decays to 0 while a ground state remains unchanged up to an accumulated phase: \begin{subequations} \begin{align} &\lim_{\tau_- \to -\infty} U_\text{eff}^0(0,\tau_-)|e_n\rangle = 0 \ \text{and} \ \lim_{\tau_- \to -\infty} U_\text{eff}^0(0,\tau_-)|g_n\rangle = \exp(\textrm{i}\varepsilon_{g_n} \tau_-)\ket{g_n} \\ &\lim_{\tau_+ \to \infty} \bra{e_n}U_\text{eff}^0(\tau_+,T_P) = 0 \ \text{and} \ \lim_{\tau_+ \to \infty} \bra{g_n}U_\text{eff}^0(\tau_+,T_P) = \exp(-\textrm{i}\varepsilon_{g_n} (\tau_+-T_P))\bra{g_n}. \end{align} \end{subequations} Therefore, only the scattering matrix elements corresponding to the low-dimensional system going from one ground state to another ground state are nonzero and are given by: \begin{align}\label{eq:scat_matrix_final} &\Sigma^{g_m,g_n}(\{x_1',\mu_1'\}\dots \{x_M',\mu_M'\}; \{x_1,\mu_1\} \dots \{x_N,\mu_N\}) =\sum_{k=0}^N (-\textrm{i})^{M+N-2k}\mathcal{I}(M\geq k) \times \\ &\quad\sum_{B_k^N, B_k^M} \bigg[\sum_{P_k} \prod_{i=1}^k \delta(x'_{P_k B_k^M(i)}-x_{B_k^N(i)})\delta_{\mu_{P_k B_k^M(i)}', \mu_{B_k^N(i)}} \bigg] \times \nonumber \\ &\quad \mathcal{G}^{g_m,g_n}_{\infty,-\infty}(\{-x'_{\bar{B}_k^M(1)}, \mu_{\bar{B}_k^M(1)}'\}, \dots \{-x'_{\bar{B}_k^M(M-k)},\mu'_{\bar{B}_k^M(M-k)}\};\{-x_{\bar{B}_k^N(1)},\mu_{\bar{B}_k^N(1)}\} \dots \{-x_{\bar{B}_k^N(N-k)},\mu_{\bar{B}_k^N(N-k)}\})\nonumber \end{align} where \begin{align}\label{eq:scat_matrix_gfunc} &\mathcal{G}_{\infty,-\infty}^{g_m ,g_n}(\{t_1',\mu_1'\} \dots \{t_M',\mu_M'\}; \{t_1,\mu_1\} \dots \{t_N,\mu_N\}) =\bra{g_m}U_\text{eff}(T_P,0)\mathcal{T}\bigg[\prod_{i=1}^M
\tilde{L}_{\mu_i'}(t_i') \prod_{j=1}^N \widetilde{L^\dagger}_{\mu_j}(t_j) \bigg]\ket{g_n} \end{align} wherein we have dropped from the scattering matrix element the phase factors, depending on $\tau_+$ and $\tau_-$, that correspond to the phases accumulated by the ground states. Previous calculations of the scattering matrix \cite{xu2015input,xu2017input} for systems with time-independent Hamiltonians arrived at exactly the same form as in Eqs.~\ref{eq:scat_matrix_final} and \ref{eq:scat_matrix_gfunc} with $T_P = 0$. However, as previously emphasized, the formalism introduced here allows us to model systems with time-dependent Hamiltonians, thereby increasing its applicability in modeling experimentally relevant systems. \section{\label{sec_examples} Examples} In this section, we show how to use the formalism developed in the previous sections to analyze some open quantum systems of interest. The examples we choose to analyze include a two-level system and a lambda three-level system. We calculate both emission from these systems when they are coherently driven, and scattering of single-photon pulses from these systems. \subsection{\label{sec:tls}Two-level system} \noindent As our first example, we consider a coherently driven two-level system coupled to a single waveguide with coupling decay rate $\gamma$.
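Within this formalism, the probability that the driven system emits no photon is simply a matrix element of the effective propagator, $P_{0,g}(\tau)=|\langle g|U_\text{eff}(\tau,0)|g\rangle|^2$. The sketch below (our own; the parameters are illustrative and the rectangular pulse of Eq.~\ref{eq:coh_drive} is assumed) evaluates it by piecewise-constant matrix exponentials:

```python
import numpy as np

def expm(A):
    # matrix exponential via eigendecomposition (adequate for generic 2x2 matrices)
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

def P_0g(tau, Omega0, gamma=1.0, delta_a=0.0, T_P=3.0):
    """No-emission probability |<g| U_eff(tau, 0) |g>|^2 for a rectangular pulse."""
    sigma = np.array([[0, 1], [0, 0]], dtype=complex)       # |g><e|
    n_e = sigma.conj().T @ sigma                            # |e><e|
    H_drive = delta_a * n_e + Omega0 * (sigma + sigma.conj().T) - 0.5j * gamma * n_e
    H_free = delta_a * n_e - 0.5j * gamma * n_e
    # H_eff is constant on [0, T_P] and on [T_P, tau], so U_eff factorizes
    U = expm(-1j * H_free * max(tau - T_P, 0.0)) @ expm(-1j * H_drive * min(tau, T_P))
    return abs(U[0, 0]) ** 2
```

With the drive off the ground state is an exact dark state of $H_\text{eff}$, so $P_{0,g}=1$; with the drive on, population is transferred to the excited state and decays, so $P_{0,g}<1$.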
Denoting the ground state of the two-level system with $\ket{g}$ and the excited state with $\ket{e}$, the two-level system can be modeled with the following Hamiltonian: \begin{align}\label{eq:tls_hamiltonian} H_\text{sys} = \delta_a \sigma^\dagger \sigma + \Omega(t)(\sigma+\sigma^\dagger) \end{align} where $\sigma = \ket{g}\bra{e}$ and $\sigma^\dagger = \ket{e}\bra{g}$ are the annihilation and creation operators for the two-level system, $\delta_a$ is the detuning of the resonant frequency of the two-level atom from the frequency of the coherent drive, and $\Omega(t)$ is the amplitude of the coherent drive, which we assume to be of the form: \begin{align}\label{eq:coh_drive} \Omega(t) = \begin{cases} \Omega_0 & 0 \leq t \leq T_P \\ 0 & \text{otherwise} \end{cases}. \end{align} We first analyze emission from this driven two-level system --- we consider the two-level system coupled to a single waveguide, with the two-level system in its ground state and the waveguide in the vacuum state at $t = 0$ [Fig.~\ref{fig:tls_emission}(a)]. This amounts to computing the propagator $U_\textrm{I}(\tau,0)$, from which it is easy to extract the state of the waveguide and the two-level system as a function of $\tau$. Of particular interest are the probabilities of finding 0 and 1 photon in the waveguide, with the two-level system in the excited or ground state, as a function of $\tau$: \begin{subequations}\label{eq:tls_prob} \begin{align} &P_{0,g}(\tau) = |\langle \text{vac}; g| U_\textrm{I}(\tau,0)\ket{\text{vac}; g}|^2, \quad\ P_{0,e}(\tau) = |\langle \text{vac}; e| U_\textrm{I}(\tau,0)\ket{\text{vac}; g}|^2 \\ &P_{1,g}(\tau) = \int |\langle x; g| U_\textrm{I}(\tau,0)\ket{\text{vac};g}|^2 \textrm{d}x, \quad P_{1,e}(\tau) = \int |\langle x; e| U_\textrm{I}(\tau,0)\ket{\text{vac};g}|^2 \textrm{d}x .
\end{align} \end{subequations} Fig.~\ref{fig:tls_emission}(b) shows the steady state probabilities $P_{1,g}(\infty)$ and $P_{0,g}(\infty)$ of a two-level system emitting a single photon or not emitting any photons after being excited by a short pulse as a function of the pulse area. We observe the well-understood Rabi oscillations in these probabilities with the pulse area. Fig.~\ref{fig:tls_emission}(c) shows the time-dependence of the probabilities defined in Eq.~\ref{eq:tls_prob} for a long pulse --- again, we observe oscillation in these probabilities while the two-level system is being driven, followed by a decay of the two-level system to its ground state and emission of photons into the waveguide. The full space-time dependence of the propagator matrix elements corresponding to a single photon in the waveguide is shown in Fig.~\ref{fig:tls_emission}(d) --- we clearly see signatures of Rabi oscillations during the time interval in which the two-level system is being driven by the coherent pulse. During this time interval, the two-level system state and the waveguide state are entangled with each other, and after the coherent pulse ends the two-level system completely decays into the waveguide mode and the resulting excitation propagates along the waveguide. It can also be noted that the matrix elements are always zero outside the light-cone (i.e. for $x>t$), which is intuitively expected since the group velocity is an upper bound on the speed at which photons emitted by the two-level system can propagate in the waveguide. \begin{figure} \caption{Emission from a coherently driven two-level system. (a) Schematic of a two-level system driven with a pulse $\Omega(t)$. (b) Probability $P_{1,g}$ of emitting a single photon and probability $P_{0,g}$ of emitting no photon, as functions of the pulse area.} \label{fig:tls_emission} \end{figure} \begin{figure} \caption{Scattering of a single-photon wave-packet from a coherently driven two-level system. (a) Schematic of a coherently driven two-level system coupled to an input and output waveguide.
(b) Single-photon scattering matrix for the driven and undriven two-level system. (c) Variation of the transmission of the coherently driven two-level system with the central frequency $\delta_0$ of the input single-photon state. (d) Variation of the transmission of a coherently driven two-level system with the length of the coherent drive for a resonant input single-photon state ($\delta_0 = 0$). For (b) and (c), it is assumed that $\gamma T_P = 4$ and $\Omega_0 = 5\gamma$. $\gamma_1 = \gamma_2 = \gamma/2$ and $\Delta x = 2.0/\gamma$ are assumed in all the calculations.} \label{fig:tls_scattering} \end{figure} Next, we analyze scattering of a single-photon pulse from a coherently driven two-level system (Fig.~\ref{fig:tls_scattering}) --- this amounts to computing the scattering matrix for the time dependent Hamiltonian in Eq.~\ref{eq:tls_hamiltonian} using Eq.~\ref{eq:scat_matrix_final}. We consider a two-level system coupled to two waveguides (Fig.~\ref{fig:tls_scattering}), and excite it with an input pulse from the first waveguide (labeled as 1), and compute the single-photon component of the output state in the second waveguide (labeled as 2). Fig.~\ref{fig:tls_scattering}(b) shows the single-photon scattering matrix for a two-level system, with and without the coherent drive. The two scattering matrices differ in the region $-T_P < x_1,x_2<0$, which corresponds to incident pulses that arrive at the two-level system while it is being driven.
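For the undriven two-level system, the single-photon transmission has the standard input-output Lorentzian form $|t(\delta)|^2 = \gamma_1\gamma_2/(\delta^2+\gamma^2/4)$ with $\gamma = \gamma_1+\gamma_2$. The sketch below (our own frequency-domain shortcut, not the full time-dependent scattering-matrix evaluation used in the figures) folds this filter with the spectrum of a Gaussian wave-packet of spatial extent $\Delta x$:

```python
import numpy as np

gamma1 = gamma2 = 0.5              # gamma_1 = gamma_2 = gamma/2, as in the figure
gamma = gamma1 + gamma2
dx_wp = 2.0                        # wave-packet spatial extent Delta x (units of 1/gamma)

def transmission(delta0, span=40.0, n=8001):
    delta = np.linspace(delta0 - span, delta0 + span, n)
    spec = np.exp(-(delta - delta0) ** 2 * dx_wp ** 2)     # |spectrum of psi_in|^2
    spec /= spec.sum()                                     # normalized spectral weights
    t2 = gamma1 * gamma2 / (delta ** 2 + gamma ** 2 / 4)   # Lorentzian |t(delta)|^2
    return float(np.sum(t2 * spec))

T_resonant = transmission(0.0)     # resonant wave-packet: high but below 1
T_detuned = transmission(3.0)      # far-detuned wave-packet: strongly suppressed
```

A monochromatic resonant photon would be perfectly transmitted for $\gamma_1=\gamma_2$; the finite bandwidth of the wave-packet is what pulls the resonant transmission below unity.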
To analyze the transmission spectra of the driven two-level system, we consider exciting the first waveguide with a single-photon state (in the interaction picture) at time $-\infty$ of the form: \begin{align} \ket{\psi(-\infty)} = \int \psi_\text{in}(x_1) a_{x_1,1}^\dagger \ket{\text{vac}} \textrm{d}x_1, \quad \psi_\text{in}(x_1) = \frac{1}{(\pi \Delta x^2)^{1/4}}\exp\bigg(-\frac{x_1^2}{2\Delta x^2}+\textrm{i}\delta_0 x_1 \bigg) \end{align} where $\delta_0$ is the central frequency of the incident wave-packet, and $\Delta x$ is its spatial extent. We compute the single-photon component in the output waveguide by applying the scattering matrix to the input state: \begin{align} \psi_\text{out}(x_2) = \int \Sigma(\{x_2, 2\}, \{x_1, 1\}) \psi_\text{in}(x_1) \textrm{d}x_1 \end{align} and the transmission of the single-photon state via: \begin{align} \text{Transmission} = \int |\psi_\text{out}(x_2)|^2 \textrm{d}x_2. \end{align} Figure \ref{fig:tls_scattering}(c) shows the transmission spectrum of the two-level system with and without the coherent drive --- the driven two-level system clearly shows a suppression in transmission. Intuitively, this is a consequence of the coherent pulse transferring the two-level system into its excited state when the single-photon pulse arrives at the two-level system, thereby resulting in low transmission into the output waveguide. Figure~\ref{fig:tls_scattering}(d) shows the dependence of the transmission of the two-level system for a resonant input single-photon wave-packet ($\delta_0 = 0$) on the length of the coherent drive $T_P$. Again, we see a signature of the Rabi oscillations in the transmission --- the probability of the two-level system being in the excited state when the single-photon pulse arrives oscillates with the drive length, and this oscillation translates to the oscillation in the transmission of the driven two-level system. \subsection{\label{lambda}Lambda system} \noindent As our next example, we consider a coherently driven three-level lambda system.
This system has two ground states, denoted by $\ket{g_1}$ and $\ket{g_2}$, and one excited state, denoted by $\ket{e}$. The Hamiltonian for this system is given by: \begin{align} H_\text{sys} = \delta_e \ket{e}\bra{e}+\delta_{12}\ket{g_1}\bra{g_1}+\Omega(t)(\sigma_1+\sigma_1^\dagger) \end{align} where $\sigma_i = \ket{g_i}\bra{e}$ is the operator that annihilates the excited state $\ket{e}$ to the ground state $\ket{g_i}$, $\delta_e$ is the detuning of the frequency difference between $\ket{e}$ and $\ket{g_2}$ from the frequency of the coherent drive, $\delta_{12}$ is the frequency difference between the states $\ket{g_2}$ and $\ket{g_1}$, and $\Omega(t)$ is the amplitude of the coherent drive which we again assume to be a rectangular pulse as given by Eq.~\ref{eq:coh_drive}. \begin{figure} \caption{Emission from a coherently driven three-level lambda system. (a) Schematic of the system being analyzed. (b) Probability of emission of a single photon $P_{1,g_2}$ as a function of the pulse area.} \label{fig:lambda_1} \end{figure} We first analyze photon emission from this driven lambda system --- the lambda system is assumed to couple to a single waveguide through an operator $\sqrt{\gamma} \sigma_2$, with it being in the ground state $\ket{g_1}$ and the waveguide being in the vacuum state at time 0 [Fig.~\ref{fig:lambda_1}(a)]. The probabilities of interest that we compute include the following: \begin{align}\label{eq:lambda_sys_prob} &P_{0,g_1}(\tau) = |\bra{\text{vac}; g_1}U_\textrm{I}(\tau,0) \ket{\text{vac}; g_1}|^2, \quad P_{0, e}(\tau) = |\bra{\text{vac}; e}U_\textrm{I}(\tau,0)\ket{\text{vac}; g_1}|^2 \nonumber \\ &P_{1,g_2}(\tau) = \int |\bra{x; g_2} U_\textrm{I}(\tau,0) \ket{\text{vac}; g_1}|^2 \textrm{d}x. \end{align} Figure~\ref{fig:lambda_1}(b) shows the probabilities of single-photon emission and of no emission as a function of the pulse area for a short driving pulse --- we again observe the expected Rabi oscillations in these probabilities.
Fig.~\ref{fig:lambda_1}(c) shows the time evolution of the probabilities defined in Eq.~\ref{eq:lambda_sys_prob} for a long driving pulse --- we clearly observe an oscillation in these probabilities while the lambda system is being driven, followed by emission of a single photon into the waveguide. We note that for a lambda system, once a photon emission into the waveguide occurs, the lambda system necessarily transitions to the state $\ket{g_2}$, and is no longer driven by $\Omega(t)$. As a consequence of this structure of the lambda system, only a single photon can be emitted into the waveguide --- this is numerically validated in Fig.~\ref{fig:lambda_1}(c), from which it can be seen that $P_{0,g_1}(\infty)+P_{1, g_2}(\infty) = 1$. Figure~\ref{fig:lambda_1}(d) shows the propagator matrix element $|\bra{x;g_2}U_\textrm{I}(\tau,0)\ket{\text{vac};g_1}|^2$ --- we clearly see a stark difference from the corresponding matrix element for a two-level system (Fig.~\ref{fig:tls_emission}) due to the system not interacting with the coherent pulse following the emission of a photon into the waveguide. \begin{figure} \caption{Scattering of a single-photon wave-packet from a three-level lambda system. (a) Schematic of the system being analyzed. (b) Time dependence of the probabilities $P_e$, $P_{1,g_1}$ and $P_{2,g_2}$.} \label{fig:lambda_scattering} \end{figure} As our final example, we analyze photon subtraction using a lambda system --- the system under consideration is shown in Fig.~\ref{fig:lambda_scattering}(a). A lambda system is coupled to an input waveguide through the operator $\sigma_1$ and to an output waveguide through the operator $\sigma_2$, with the lambda system initially in the state $\ket{g_1}$. A single photon is incident from the input waveguide which drives the system from $\ket{g_1}$ to $\ket{e}$ to $\ket{g_2}$, with the photon finally being emitted into the output waveguide.
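This Raman transfer can be simulated with the standard single-excitation amplitude equation (our own sketch; the sign conventions and forward-Euler discretization are illustrative): the excited-state amplitude $c_e$ is driven by the incident envelope $f(t)$ via $\dot c_e = -(\textrm{i}\delta_e + \gamma/2)c_e + \sqrt{\gamma_1}f(t)$ with $\gamma=\gamma_1+\gamma_2$, and the transfer probability into the output waveguide is $\gamma_2\int |c_e(t)|^2 \,\textrm{d}t$:

```python
import numpy as np

gamma1 = gamma2 = 0.5          # couplings via sigma_1 (input) and sigma_2 (output)
gamma = gamma1 + gamma2
delta_e = 0.0                  # resonant incident photon, delta_12 = 0
T_wp = 10.0                    # temporal width of the incident envelope (units of 1/gamma)

dt = 1e-3
ts = np.arange(-60.0, 60.0, dt)
f = (np.pi * T_wp ** 2) ** -0.25 * np.exp(-ts ** 2 / (2 * T_wp ** 2))  # int |f|^2 dt = 1

c_e = 0.0 + 0.0j               # excited-state amplitude
P_transfer = 0.0               # probability of a photon in the output waveguide
for fi in f:                   # forward-Euler integration of the amplitude equation
    c_e += dt * (-(1j * delta_e + gamma / 2) * c_e + np.sqrt(gamma1) * fi)
    P_transfer += gamma2 * abs(c_e) ** 2 * dt
```

For a spectrally narrow resonant pulse and $\gamma_1=\gamma_2$, the transfer probability approaches $4\gamma_1\gamma_2/\gamma^2 = 1$, consistent with the complete-transfer statement below; the small residual reflects the finite bandwidth.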
For a lambda system with $\delta_{12} = 0$ and the incoming photon resonant with the ground-state-to-excited-state transition, the incoming photon is completely transferred from the input waveguide to the output waveguide. Any photon subsequently incident from the input waveguide does not interact with the lambda system, since the lambda system is now in the state $\ket{g_2}$, which cannot be driven by an excitation from the input waveguide. This allows the lambda system to be used as a photon subtractor \cite{rosenblum2016extraction} --- for a stream of spatially separated single photons incident on the lambda system, the lambda system would remove the first photon from the stream, and transmit the rest. To numerically reproduce this effect, we consider the system to be in the following initial state: \begin{align} \ket{\psi(0)} = \int \psi_\text{in}(x_1) a_{x_1,1}^\dagger \ket{\text{vac}; g_1} \textrm{d}x_1, \quad \psi_\text{in}(x_1) = \frac{1}{(\pi \Delta x^2)^{1/4}}\exp\bigg(-\frac{(x_1+L)^2}{2\Delta x^2} \bigg) \end{align} where $\Delta x$ is the spatial extent of the incoming photon wave-packet and $L$ is the distance of the center of the wave-packet from the lambda system at $t = 0$. The state of the system at $t = \tau$ can then be expressed in terms of the propagator matrix elements: \begin{align} \ket{\psi(\tau)} &= \bigg[\int \bra{\text{vac}; e} U_\textrm{I}(\tau,0) \ket{\{x_1', 1\};g_1}\psi_\text{in}(x_1') \textrm{d}x_1'\bigg]\ket{\text{vac}; e} \nonumber \\ &\quad+ \int \int\bra{\{x_1,1\}; g_1} U_\textrm{I}(\tau,0) \ket{\{x_1', 1\};g_1}\psi_\text{in}(x_1') a_{x_1,1}^\dagger \ket{\text{vac}; g_1}\textrm{d}x_1' \textrm{d}x_1 \nonumber \\ &\quad+\int \int\bra{\{x_2,2\}; g_2} U_\textrm{I}(\tau,0) \ket{\{x_1', 1\};g_1}\psi_\text{in}(x_1') a_{x_2,2}^\dagger \ket{\text{vac}; g_2}\textrm{d}x_1' \textrm{d}x_2.
\end{align} The probabilities of interest, denoted by $P_e, P_{1, g_1}$ and $P_{2, g_2}$, that we simulate for this system are defined by: \begin{align} P_e(\tau) = |\braket{\text{vac}; e | \psi(\tau)}|^2,\quad P_{1,g_1}(\tau) = \int |\braket{\{x_1,1\}; g_1 | \psi(\tau)}|^2 \textrm{d}x_1 \quad \text{and} \quad P_{2,g_2}(\tau) = \int |\braket{\{x_2,2\}; g_2 | \psi(\tau)}|^2 \textrm{d}x_2. \end{align} Figure~\ref{fig:lambda_scattering}(b) shows the time evolution of these probabilities --- clearly, the lambda system transitions from initially being in $\ket{g_1}$ to the excited state $\ket{e}$ and then to $\ket{g_2}$, with the photon transferring from the input to the output waveguide. However, we note that the photon wave-packet is not completely transmitted into the second waveguide, which is a consequence of the wave-packet having a finite spread in frequency, and therefore not being completely resonant with the lambda system. Figure~\ref{fig:lambda_scattering}(c) shows the space-time dependence of the single-photon wave packet in the input and output waveguides --- clearly, the initial wave-packet propagates along the input waveguide until it reaches the lambda system at $x = 0$ and is then transmitted into the second waveguide. \section{Conclusion and outlook} \noindent We have completely described the unitary propagator for a composite quantum network comprising waveguides and a low-dimensional system. Our method relies on the linearity of the waveguide dispersions and the Markovian coupling approximation. This linearity allows us to always express the initial (time $\tau_-$) and final (time $\tau_+$) waveguide states in terms of Heisenberg field operators. As a result, we are able to use the input-output boundary conditions to arrive at an expression between the states in terms of system Heisenberg operators only. The resulting expectations are with respect to vacuum states, otherwise known as Green's functions.
These Green's functions can be expressed entirely in the Hilbert space of system operators, which we showed by coarse-graining the waveguides' spatial dimensions. Hence, our expression proves tractable for many systems of interest and provides insight into the connections between input-output theory, scattering matrices, and propagators for Markovian open-quantum systems. \end{document}
\begin{document} \newcommand{\todo}[1]{\red{{$\bigstar$\sc #1}$\bigstar$}} \newcommand{\ks}[1]{{\textcolor{teal}{[KS: #1]}}} \global\long\def\eqn#1{\begin{align}#1\end{align}} \global\long\def\ket#1{\left|#1\right\rangle } \global\long\def\bra#1{\left\langle #1\right|} \global\long\def\bkt#1{\left(#1\right)} \global\long\def\sbkt#1{\left[#1\right]} \global\long\def\cbkt#1{\left\{#1\right\}} \global\long\def\abs#1{\left\vert#1\right\vert} \global\long\def\der#1#2{\frac{{d}#1}{{d}#2}} \global\long\def\pard#1#2{\frac{{\partial}#1}{{\partial}#2}} \global\long\def\Re{\mathrm{Re}} \global\long\def\Im{\mathrm{Im}} \global\long\def\d{\mathrm{d}} \global\long\def\dd{\mathcal{D}} \global\long\def\avg#1{\left\langle #1 \right\rangle} \global\long\def\mr#1{\mathrm{#1}} \global\long\def\mb#1{{\mathbf #1}} \global\long\def\mc#1{\mathcal{#1}} \global\long\def\Tr{\mathrm{Tr}} \global\long\def\dbar#1{\Bar{\Bar{#1}}} \global\long\def\nth{$n^{\mathrm{th}}$\,} \global\long\def\mth{$m^{\mathrm{th}}$\,} \global\long\def\nn{\nonumber} \newcommand{\teal}[1]{{\color{teal} {#1}}} \newcommand{\orange}[1]{{\color{orange} {#1}}} \newcommand{\cyan}[1]{{\color{cyan} {#1}}} \newcommand{\blue}[1]{{\color{blue} {#1}}} \newcommand{\yellow}[1]{{\color{yellow} {#1}}} \newcommand{\green}[1]{{\color{green} {#1}}} \newcommand{\red}[1]{{\color{red} {#1}}} \global\long\def\todo#1{\yellow{{$\bigstar$ \orange{\bf\sc #1}}$\bigstar$} } \title{Delay-induced spontaneous dark state generation from two distant excited atoms} \author{W. Alvarez-Giron} \email{[email protected]} \affiliation{Instituto de Investigaciones en Matem\'{a}ticas Aplicadas y en Sistemas, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Ciudad Universitaria, 04510, DF, M\'{e}xico.} \author{P.
Solano} \email{[email protected]} \affiliation{Departamento de F\'isica, Facultad de Ciencias F\'isicas y Matem\'aticas, Universidad de Concepci\'on, Concepci\'on, Chile} \affiliation{CIFAR Azrieli Global Scholars program, CIFAR, Toronto, Canada.} \author{K. Sinha} \email{[email protected]} \affiliation{School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85287-5706, USA} \author{P. Barberis-Blostein} \email{[email protected] } \affiliation{Instituto de Investigaciones en Matem\'{a}ticas Aplicadas y en Sistemas, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Ciudad Universitaria, 04510, DF, M\'{e}xico.} \begin{abstract} We investigate the collective non-Markovian dynamics of two fully excited two-level atoms coupled to a one-dimensional waveguide in the presence of delay. We demonstrate that, analogous to the well-known superfluorescence phenomenon, where an inverted atomic ensemble synchronizes to enhance its emission, there is a `subfluorescence' effect that synchronizes the atoms into an entangled dark state depending on the interatomic separation. Our results are pertinent to long-distance quantum networks, presenting a mechanism for spontaneous entanglement generation between distant quantum emitters. \end{abstract} \maketitle {\it Introduction.---} Coupling quantum emitters via waveguides is essential for building large-scale and interconnected quantum systems, which find promising applications in quantum information processing and distributed quantum sensing~\cite{quantum-network-1, quantum-network-2, quantum-network-3}. Platforms based on waveguide QED enable tunable, efficient, and long-ranged interactions between quantum emitters in the optical and microwave regimes \cite{Goban15,Solano2017,PabloReview,Kim2018,Wen19,Mirhosseini19,Sheremet2021,Magnard2022,Tiranov2023}.
For example, state-of-the-art experiments allow one to tune out dispersive dipole-dipole interactions by strategically positioning the atoms along a waveguide \cite{Martin-Cano11, VanLoo13, Pichler15} while strongly suppressing coupling to non-guided modes \cite{Lecamp07,astafiev2010, Arcari14, Zang16, Scarpelli19}. However, long-ranged emitter-emitter coupling in waveguide QED systems necessitates careful consideration of time-delayed interactions. When the time taken by a photon to propagate between two distant emitters becomes comparable to their characteristic lifetimes, the system exhibits surprisingly rich delay-induced non-Markovian dynamics \cite{Zheng13, Tufarelli14, Guimond16, Zhang20,DelAngel22,Solano2023}. Some examples of such dynamics include collective spontaneous emission rates exceeding those of Dicke superradiance and the formation of highly delocalized atom-photon bound states \cite{sinha, Sinha20, Dinc19, Calajo19, Facchi19, Guo19, Hughes09, Hughes2020, Lee23}. In addition, time-delayed feedback can assist in preparing photonic cluster states \cite{Pichler17} and single photon sources with improved coherence and indistinguishability \cite{Crowder23}. Such novel effects invite one to revisit canonical quantum optical phenomena in the context of waveguide-coupled emitters in a quantum network. For example, a striking result in collective quantum optics is the spontaneous emergence of correlations between atoms in a fully inverted ensemble, which leads to the well-known effect of \textit{superfluorescence}~\cite{Bonifacio75, Glauber78, vrehen1980,PhysRevA.68.023809}\footnote{ We use the term superfluorescence~\cite{Bonifacio75} for the particular case of \textit{superradiance} wherein the collective decay rate of an atomic ensemble is enhanced beyond the spontaneous emission rate of a single atom~\cite{PhysRev.93.99,superradianceessay}.}.
Such atom-atom correlations emerge without external driving fields, post-selection, or initial state preparation~\cite{superradianceessay}. Thus, the question arises: can we spontaneously generate and engineer entanglement between distant emitters in the presence of delay? \begin{figure} \caption{Schematic of the system: two excited atoms with transition frequency $\omega_0$ coupled to a waveguide at positions $z_{1,2}=\pm d/2$.} \label{fig: system} \end{figure} In this work, we demonstrate that one can spontaneously generate and stabilize quantum entanglement between two initially excited emitters in the presence of retardation. We investigate the non-Markovian dynamics of the emitters coupled via a waveguide, illustrating that for interatomic distances equal to an integer multiple of half the resonant wavelength, $\lambda/2$, the system ends in a dark state where atom-atom entanglement suddenly emerges. We refer to this phenomenon as delay-induced \textit{subfluorescence}, the subradiant counterpart of the well-known superfluorescence effect. The resulting steady state is a delocalized hybrid atom-photon state, requiring a description beyond the usual Born and Markov approximations. Moreover, the probability of ending in such a dark state increases linearly for small interatomic separations, reaching its maximum when the atoms are separated by a distance $d \approx 0.895 v/\gamma$, with $v$ the speed of light in the waveguide and $\gamma$ the individual atomic spontaneous emission rate. For interatomic separations comparable to the coherence length associated with a spontaneously emitted photon $(v/\gamma)$, memory effects of the electromagnetic environment become prominent, necessitating a non-Markovian treatment of the system dynamics. {\it Theoretical model.---} We consider two two-level atoms, with transition frequency $\omega_0$ between the ground $\ket{g}$ and excited $\ket{e}$ states, coupled through a waveguide.
The free Hamiltonian of the total atoms+field system is given by $H_0=(\hbar\omega_0/2)\sum_{r=1}^2\sg{z}{r}+\sum_{\alpha, \eta}\hbar\omega_\alpha\hat{a}^\dagger_{\alpha, \eta}\hat{a}_{\alpha, \eta}$, where $ \omega_\alpha $ represents the frequency of the field modes, and $ \eta $ corresponds to the direction of propagation. The interaction Hamiltonian in the interaction picture is (having made the electric-dipole and rotating-wave approximations): \begin{eqnarray} \hat{H}_\text{int} = -i \hbar \sum_{r=1}^2 \sum_{\alpha, \eta} g_\alpha \sg{+}{r} \hat{a}_{\alpha, \eta} \ee{-i\Delta_\alpha t} \ee{ik_{\alpha\eta} z_r} + \text{H.c.} \, , \label{ham} \end{eqnarray} where $g_\alpha$ is the coupling between the atoms and the guided mode $\alpha$ with frequency $ \omega_\alpha$, $\Delta_\alpha = \omega_\alpha - \omega_0$, and $k_{\alpha \eta}= \eta\omega_\alpha/v$, with $\eta = \pm 1$ specifying the direction of propagation. The positions of the atoms are $z_{1, \,2} = \pm d/2$, and the raising and lowering operators for the $r$-th atom are defined as $\sg{+}{r} = \bkt{\sg{-}{r}}^\dagger = \ket{e}_r\bra{g}_r$. We consider the initial state with both atoms excited and the field in the vacuum state, $\ket{e_1 e_2,\{0\}}$. As a consequence of the rotating-wave approximation, the total number of atomic and field excitations is conserved, suggesting the following ansatz for the state of the system at time $t$: \begin{widetext} \eqn{\ket{\psi (t)} = \bigg\{ a(t) \sg{+}{1}\sg{+}{2} + \sum_{r=1}^2 \sum_{\alpha, \eta}b_{\alpha \eta}^{(r)}(t) \sg{+}{r} \hat{a}_{\alpha, \eta}^\dag + \mathop{\sum \sum}_{\alpha , \eta\neq \beta, \eta'} \frac{c_{\alpha\eta, \beta \eta'}(t)}{2} \hat{a}_{\alpha, \eta}^\dag \hat{a}_{\beta, \eta'} ^\dag + \sum_{\alpha, \eta} \frac{c_{\alpha \eta}(t)}{\sqrt{2}} \hat{a}_{\alpha, \eta}^\dag \hat{a}_{\alpha, \eta} ^\dag \bigg\} \ket{g_1 g_2,\{0\}} \, , \label{psi} } \end{widetext} where $\ket{g_1g_2,\{ 0\}}$ is the ground state of the system.
The complex coefficients $a(t)$, $b_{\alpha \eta}^{(r)}(t)$, and $c_{\alpha\eta, \beta\eta'}(t)$ correspond to the probability amplitudes of having an excitation in both atoms, in the $r^\mr{th}$ atom and field mode $\cbkt{\alpha, \eta}$, and in field modes $\cbkt{\alpha, \eta}$ and $\cbkt{\beta, \eta'}$, respectively. The coefficient $c_{\alpha\eta}(t)$ represents the probability amplitude of exciting two photons in mode $\cbkt{\alpha, \eta}$. In this work, our emphasis will be on the case of perfectly coupled atoms; under this condition the modes $\alpha$ are the guided modes of the waveguide and $\gamma$ is the decay rate of the emitters into those modes. Defining $B_{\alpha \eta}^{(r)}(t) = b_{\alpha \eta} ^{(r)} (t) \ee{-i\Delta_\alpha t}$ and $C_{\alpha\eta}(t) = c_{\alpha\eta} (t) \ee{-2i\Delta_\alpha t}$, and formally integrating the equation for $c_{\alpha \eta,\beta \eta'}(t)$, the Schr\"odinger equation yields the following system of delay-differential equations for the excitation amplitudes: \eqn{ \label{a-2} \dot{a}(t) =& - \sum_{\alpha,\eta} \sum_{s=1}^2 g_\alpha B_{\alpha\eta}^{(s)}(t) \ee{-ik_{\alpha\eta} z_s}\, , } \eqn{ \dot{B}_{\alpha\eta}^{(r)}(t) =& -\bkt{i\Delta_\alpha + \frac{\gamma}{2}} B_{\alpha \eta}^{(r)}(t) -\sqrt{2} g_\alpha C_{\alpha\eta}(t) \ee{ik_{\alpha\eta} z_r} + \nonumber \\ & g_\alpha^\ast a(t) \ee{ik_{\alpha\eta} z_r} - \frac{\gamma}{2} \ee{i\phi} \ee{-i\Delta_\alpha \tau} B_{\alpha\eta} ^{(s)}(t-\tau) \Theta(t-\tau) \label{b-2}\\ \label{c1-2} \dot{C}_{\alpha\eta} (t) =& - 2i\Delta_\alpha C_{\alpha\eta}(t) + \sqrt{2} g_\alpha ^\ast\sum_{s=1}^2 B_{\alpha \eta}^{(s)}(t) \ee{-ik_{\alpha\eta} z_s} \, , } where $s \neq r$ labels the other atom. We present a detailed derivation in the Appendix. We solve the system of equations via Laplace transforms with the initial conditions $a(0) = 1$ and $B_{\alpha\eta}^{(r)}(0)=C_{\alpha\eta}(0)=0$.
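The delayed coupling in Eq.~\eqref{b-2} can be illustrated with a minimal method-of-steps integration. The sketch below keeps only the mutual delayed feedback between two single-excitation amplitudes, dropping the $a(t)$ and two-photon source terms; $\gamma=1$, the symmetric initial state, and the step size are illustrative choices, not values from the text:

```python
import math

def evolve(phi, tau, b0=(1/math.sqrt(2), 1/math.sqrt(2)),
           gamma=1.0, T=2.0, dt=1e-3):
    """Forward-Euler / method-of-steps for the reduced delay equations
    db_r/dt = -(gamma/2) b_r - (gamma/2) e^{i phi} b_s(t - tau) Theta(t - tau)."""
    n = int(round(T/dt))
    ndel = int(round(tau/dt))               # delay measured in steps
    b1 = [complex(b0[0])]
    b2 = [complex(b0[1])]
    f = (gamma/2)*complex(math.cos(phi), math.sin(phi))
    for k in range(n):
        d1 = b2[k - ndel] if k >= ndel else 0.0   # Theta(t - tau) feedback
        d2 = b1[k - ndel] if k >= ndel else 0.0
        b1.append(b1[k] + dt*(-(gamma/2)*b1[k] - f*d1))
        b2.append(b2[k] + dt*(-(gamma/2)*b2[k] - f*d2))
    return b1[-1], b2[-1]
```

For $\tau=0$ and $\phi=\pi$ the symmetric combination is exactly dark, while for finite delay it decays only until the feedback arrives and then freezes near $b(0)/(1+\gamma\tau/2)$, the suppression factor that also appears in Eq.~\eqref{eq:boundstatecoef}.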
{\it System dynamics.---} The Laplace transform of $a(t)$ is (see Appendix) \begin{widetext} \begin{eqnarray} \label{a12-solapp} \tilde{a}(s) &\approx & \left\{s - \sum_{\alpha, \eta} |g_\alpha |^2 \frac{s + i\Delta_\alpha + \frac{\gamma}{2} - \frac{\gamma}{2}e^{i\phi} e^{-s \tau }e^{ - i \Delta_\alpha \tau} \cos(\omega_\alpha \tau)}{\left[ \frac{\gamma}{2}e^{i\phi} e^{-s \tau }e^{ - i \Delta_\alpha \tau} + s + i\Delta_\alpha + \frac{\gamma}{2} \right] \left[ \frac{\gamma}{2}e^{i\phi} e^{-s \tau }e^{ - i \Delta_\alpha \tau} - s - i\Delta_\alpha - \frac{\gamma}{2} \right] } \right\}^{-1} \, , \end{eqnarray} \end{widetext} where $\phi = \omega_0 \tau$ is the resonance field propagation phase. \begin{figure} \caption{Probability $P^{(1)}(t)$ of having a single atomic excitation, for different values of the delay $\tau$ and the propagation phase $\phi$.} \label{fig:P1_dynamics} \end{figure} We write the sum over modes on the RHS of Eq. \eqref{a12-solapp} as an integral over frequencies by introducing the density of modes $\rho(\omega_\alpha)$. Additionally, we use the Wigner-Weisskopf approximation and set $\rho(\omega_\alpha) \approx \rho(\omega_0)$ and $g_\alpha \approx g_0$, evaluated at the atomic resonance frequency. With these considerations we obtain $\tilde{a}(s) = 1/(s+\gamma)$ for all $\tau$. Taking the inverse Laplace transform we get $a(t)= e^{-\gamma t}$, which gives the time-dependent probability of having two atomic excitations \begin{eqnarray} P^{(2)}(t) = \left\lvert a(t) \right\rvert^2 = e^{-2\gamma t} \, . \end{eqnarray} We remark that the probability of having two atomic excitations is independent of the delay between the atoms. \label{sec-b} The probability of having only one of the atoms excited is \begin{eqnarray} P^{(1)}(t) \approx \rho(\omega_0) \sum_{r=1,2} \sum_{\eta }\int_{0}^{\infty} d\omega_\alpha \, \left \lvert b_{\alpha\eta}^{(r)}(t)\right \rvert ^2 , \label{p1-sol} \end{eqnarray} where the time-dependent solutions $b_{\alpha\eta}^{(1,2)}(t)$ are derived in the Appendix.
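Since the Wigner-Weisskopf reduction yields $\tilde{a}(s)=1/(s+\gamma)$, the inversion $a(t)=e^{-\gamma t}$ can be cross-checked by evaluating the Bromwich integral $a(t)=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}e^{st}\,\tilde{a}(s)\,ds$ by brute-force quadrature; the abscissa $c$, cutoff $Y$, and grid below are arbitrary numerical choices, with $\gamma=1$:

```python
import cmath, math

def bromwich_inverse(t, c=1.0, gamma=1.0, Y=1000.0, n=200000):
    """Trapezoid rule for the Bromwich integral of a~(s) = 1/(s + gamma)
    along the vertical contour s = c + i y, y in [-Y, Y]."""
    h = 2*Y/n
    def f(y):
        s = complex(c, y)
        return cmath.exp(s*t) / (s + gamma)
    total = 0.5*(f(-Y) + f(Y))
    for k in range(1, n):
        total += f(-Y + k*h)
    # ds = i dy, so 1/(2 pi i) * integral = (1/2 pi) * sum over y
    return (total*h/(2*math.pi)).real
```

The truncated contour reproduces $e^{-\gamma t}$ to a few parts in $10^3$, consistent with the exact inversion quoted above.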
In the limits of two coincident atoms with $\tau= 0$, and two infinitely distant atoms with $\tau \to \infty$, we obtain the expected Markovian dynamics \cite{Lehmberg1970, alvarez}: \begin{equation} \label{p1-cases} P^{(1)}(t) = \begin{cases} 2 \gamma t \,\ee{-2\gamma t} &\text{for $\tau = 0$}\\ 2 \ee{-\gamma t} (1 - \ee{-\gamma t}) &\text{for $\tau \to \infty$} \end{cases} \, . \end{equation} Figure~\ref{fig:P1_dynamics} shows $P^{(1)}(t)$ for different values of the delay $\tau$ and the propagation phase $\phi$. For times $t < \tau$, the atoms decay independently. At the time $t = \tau$, the field emitted by one atom reaches the other, modifying their decay dynamics depending on the value of the propagation phase $\phi$. Their instantaneous decay rate, given by the negative slope of $P^{(1)}(t)$, can momentarily exceed that of standard superfluorescence ($\tau=0$). The delay condition $\gamma\tau \approx 0.375$ with $\phi=n\pi$ maximizes the instantaneous decay rate. This phenomenon is a signature of \textit{superduperradiance}, reported in Ref.~\cite{sinha}. Notably, in this case the atom-atom coherence that is necessary to modify the instantaneous decay emerges spontaneously. In the late-time limit, the following steady state appears when $\phi = n\pi$ for any delay $\tau$ between the atoms: \eqn{ \label{eq:boundstate} \ket{\psi (t\to \infty)} =\mathop{\sum \sum}_{\alpha, \eta \neq \beta, \eta'} \frac{c_{\alpha\eta, \beta\eta'}(t \to \infty)}{2} \ket{g_1 g_2,\{1_{\alpha,\eta}1_{\beta,\eta '}\}} &\nonumber\\ +\ket{e_1 g_2}\sum_{\alpha,\eta} b_{\alpha\eta}^{(1)} \ket{1_{\alpha,\eta}} +\ket{g_1 e_2}\sum_{\alpha, \eta} b_{\alpha\eta}^{(2)} \ket{1_{\alpha,\eta}}&\, , } where we have removed the explicit time dependence of the steady-state coefficients.
The steady-state amplitude for mode $\cbkt{\alpha,\eta}$ is \begin{equation} b_{\alpha\eta}^{\bkt{\substack{1\\2}}}= \mp\eta\ee{\pm i \eta n \pi /2}\frac{g_\alpha^{*}}{ 1 + \frac{\gamma \tau}{2} } \frac{ i\sin\left( \Delta_\alpha \tau /2\right)}{\gamma - i \Delta_\alpha} \, , \label{eq:boundstatecoef} \end{equation} and $c_{\alpha\eta, \beta\eta'}$ is given in the Appendix. The last two terms in Eq.~\eqref{eq:boundstate} represent a bound state in the continuum (BIC) that corresponds to having one shared excitation between the atoms and one propagating photon mode in between them \cite{Calajo19}. Using the Born rule and Eq.~(\ref{eq:boundstatecoef}), we find its probability to be $2 P^{(1)}\bkt{t\to\infty}$, where: \eqn{ \label{p1ss} P^{(1)}\bkt{t\to\infty} = \frac{\sinh \left(\frac{\gamma\tau}{2} \right)}{(1+\frac{\gamma\tau}{2})^2 \ee{\frac{\gamma\tau}{2}}} \, . } We note that the probability of ending up in a BIC state is maximum for $\gamma \tau \approx 0.895$ with a probability of $0.282$. The fact that the atoms+field state is non-separable shows that one cannot use the Born approximation to solve for the dynamics of the system. Using Eq.~(\ref{eq:boundstate}) and Eq.~(\ref{eq:boundstatecoef}) we obtain the following reduced density matrix for the atomic subsystem \begin{equation} \label{eq:reducedrho} \hat{\rho}_A^\pm(t\to\infty)=P^{(1)}\ket{\Psi^\pm}\bra{\Psi^\pm}+\bkt{1-P^{(1)}}\ket{g_1 g_2}\bra{g_1 g_2}\, , \end{equation} with $+$ corresponding to the case when $n$ is odd and $-$ when it is even, where $\ket{\Psi^\pm}=(\ket{e_1 g_2}\pm\ket{g_1e_2})/\sqrt{2}$ are single-excitation Bell states. Through delayed interactions, the initially inverted atoms evolve into a superposition of radiative and non-radiative states, and subsequently decay into a superposition of the ground state and an entangled dark steady state.
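The quoted optimum follows from a direct scan of Eq.~\eqref{p1ss}; a pure-Python sketch (the grid resolution is an arbitrary choice):

```python
import math

def p1_ss(gt):
    """Steady-state single-excitation probability, Eq. (p1ss), vs. gamma*tau."""
    return math.sinh(gt/2) / ((1 + gt/2)**2 * math.exp(gt/2))

# scan gamma*tau over (0, 10]; the BIC probability is 2 * P^(1)(t -> infinity)
p_max, gt_max = max((p1_ss(k/10000), k/10000) for k in range(1, 100001))
```

The scan reproduces the stated maximum, $\gamma\tau \approx 0.895$ with BIC probability $2 P^{(1)} \approx 0.282$.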
For a field propagation phase $\phi =2 n\pi$ $(\phi = (2n+1) \pi)$, the dark state that appears in the late-time limit corresponds to the antisymmetric state $\ket{\Psi^- }$ (symmetric state $\ket{\Psi^+}$). \begin{figure} \caption{(a) Dynamics of the concurrence $\mathcal{C}(t)$ for different values of the delay $\tau$ and the propagation phase $\phi$. (b) Emergence of atom-atom correlations.} \label{fig:cmax} \end{figure} Delayed interactions create quantum correlations between the atoms. We quantify these correlations using the concurrence $\mathcal{C}(t)$ of the reduced density matrix of the atoms $\hat{\rho}_A(t) = \mathrm{Tr}_F(\ketbra{\psi(t)}{\psi(t)})$ \cite{Wooters}. For $\tau \to \infty$, the concurrence is zero throughout the evolution since the initially uncorrelated atoms evolve independently. Remarkably, when $\tau = 0$, the concurrence is also zero throughout the evolution, even when the atoms transition to a superradiant behavior. This case exemplifies that entanglement is not necessary for superradiance. For intermediate values of $\tau$, we numerically study the dynamics of the concurrence, shown in Fig.~\ref{fig:cmax}~(a). We begin studying its behavior for $\gamma \tau \approx 0.895$ (corresponding to the maximum value of $P^{(1)}\bkt{t\to\infty}$) and $\gamma \tau \approx 0.375$ (corresponding to the largest instantaneous decay rate). Since the system lacks initial correlations, the concurrence is zero from $t=0$ to a certain time, $t_\text{SBE}$, when there is a sudden birth of entanglement (SBE) \cite{SBE1, SBE2, SBE3, SBE4,SBE5,SBE6,SBE7}. For $\phi = n\pi$, the concurrence increases until reaching a stationary value, whereas for $\phi =(n+\frac{1}{2}) \pi$, it always remains zero. In general, for other values of $\phi$, the concurrence suddenly departs from zero and slowly decays after reaching a maximum value. For $\gamma \tau \approx 0.895$, we obtain the maximum value for the concurrence. Fig.~\ref{fig:cmax}~(b) shows the emergence of atom-atom correlations defined by $\mathrm{Tr}\sbkt{\hat \rho_A (t)\hat{\sigma}_+ ^{(1)} \hat \sigma_-^{(2)} }$.
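Because Eq.~\eqref{eq:reducedrho} is an X-state in the product basis $\{\ket{ee},\ket{eg},\ket{ge},\ket{gg}\}$, its Wootters concurrence reduces to the closed form $\mathcal{C} = 2\max(0,\, |\rho_{23}| - \sqrt{\rho_{11}\rho_{44}},\, |\rho_{14}| - \sqrt{\rho_{22}\rho_{33}})$, which gives $\mathcal{C} = P^{(1)}$ directly for the steady state. A minimal numerical check (the values of $P^{(1)}$ below are illustrative):

```python
import math

def rho_A(P, sign=1):
    """Steady state of Eq. (eq:reducedrho): P |Psi^pm><Psi^pm| + (1-P)|gg><gg|,
    written in the basis |ee>, |eg>, |ge>, |gg>."""
    r = [[0.0]*4 for _ in range(4)]
    r[1][1] = r[2][2] = P/2
    r[1][2] = r[2][1] = sign*P/2       # Bell-state coherence
    r[3][3] = 1 - P
    return r

def concurrence_x(r):
    """Wootters concurrence of an X-shaped two-qubit density matrix."""
    c1 = abs(r[1][2]) - math.sqrt(r[0][0]*r[3][3])
    c2 = abs(r[0][3]) - math.sqrt(r[1][1]*r[2][2])
    return 2*max(0.0, c1, c2)
```

For both signs of the Bell-state coherence the concurrence equals the dark-state weight $P^{(1)}$, so the residual entanglement is set entirely by Eq.~\eqref{p1ss}.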
We note that the atom-atom correlations develop as soon as the atoms `see' each other, while the concurrence takes longer to emerge. Delayed atom-atom interactions couple the fully excited state of the atoms to both the single-excitation symmetric and anti-symmetric states. After the build-up of correlations, the most radiative state collectively decays, while the non-radiative atomic state remains. The system, therefore, spontaneously evolves into an entangled dark state in the presence of retardation. Such an appearance of quantum correlations is a striking example of environment-assisted spontaneous entanglement generation. {\it Summary and Outlook.---} We have analyzed the spontaneous decay of two fully inverted atoms coupled through a waveguide in the presence of retardation effects. A remarkable result is the spontaneous creation of a steady delocalized atom-photon bound state, with a sudden birth of entanglement between the atoms. Such atom-atom entanglement appears as a result of the non-Markovian time-delayed feedback of the spontaneously radiated EM field acting on the emitters. Furthermore, the collective decay of the two atoms can be momentarily enhanced beyond standard superfluorescence and subsequently inhibited, demonstrating the rich non-Markovian dynamics of such a system. Such delay-induced spontaneous steady-state entanglement generation can have implications in the rapidly growing field of waveguide QED, a field with promising applications in quantum circuits \cite{Zheng13-2, Monroe14, Javadi15, Tian21} and quantum networks \cite{Yan15, Coles16, Chang20, Zhong21} that benefit from preparing and manipulating long-lived dark states.
In this context, there have been several proposals to generate a steady entangled state; compared to delay-induced \textit{subfluorescence}, these schemes require extra degrees of control, such as external driving fields \cite{Paulisch16, Zanner22,pumping-1, pumping-2}, initial entanglement \cite{sinha}, or chiral emission in front of a mirror \cite{chiral-1, chiral-2}. Not only the generation \cite{entanglement-generation} but also the stabilization of entangled states is key to more efficient devices. While the interaction of quantum systems with their environment generally leads to decoherence and can degrade the entanglement between the components of a quantum system~\cite{decoherence-1}, the environment can also act as a generator \cite{entanglement-by-dissipation-1, entanglement-by-dissipation-2} and stabilizer of entanglement \cite{entanglement-by-dissipation-3}. Our results highlight a novel way of environment-assisted entanglement generation and stabilization via non-Markovian time-delayed feedback. Adding delay to the simplest collective system of two two-level atoms leads to new phenomenology, breaking the Born and Markov approximations, non-trivially modifying the dynamics, and spontaneously creating steady-state quantum correlations. Our results further the understanding of the significant role delay plays in quantum optics and present the outset of studying more complex phenomena involving many-body interactions \cite{PhysRevA.68.023809,masson2022,Masson2020}. However, studying this scenario is challenging as the complexity of the problem increases with the number of emitters, and known methods based on master equation approaches fail because neither the Markov nor the Born approximations are valid.
Furthermore, although we neglect dispersive dipole-dipole interactions and coupling to non-guided modes to highlight the consequences of delay-induced non-Markovianity, all these effects can coexist, creating richer dynamics whose complexity will increase as the system scales up. Our reductionist approach, under ideal conditions, allows for a clear interpretation of the effects of delay and how it induces novel quantum optics phenomena. Moreover, the results contribute to the almost 70-year-old discussion on superradiance and superfluorescence, phenomena at the core of quantum optics with unceasing interest \cite{timothy2012,raino2018,Masson2020,Ferioli2021,Huang2022,masson2022}. {\it Acknowledgments.---} {We acknowledge helpful discussions with Saikat Guha.} P.S. is a CIFAR Azrieli Global Scholar in the Quantum Information Science Program. This work was supported by DGAPA-UNAM under grant IG120518 from Mexico, as well as CONICYT-PAI grant 77190033, and FONDECYT grant N$^{\circ}$ 11200192 from Chile. K.S. was supported in part by the John Templeton Foundation Award No. 62422.
\section{Appendix} \widetext \section{Equations of motion: Derivation} \label{app1} Using the Schr\"odinger equation with the Hamiltonian and the state ansatz given in the main text, we get the following differential equations for the probability amplitudes \begin{eqnarray} \dot{a}(t) &=& - \sum_{r=1}^2 \sum_{\alpha,\eta} g_\alpha \ee{-i\Delta_\alpha t} b_{\alpha\eta} ^{(r)}(t) \ee{-ik_{\alpha\eta} z_r} \, , \label{a-1} \\ \dot{b}_{\alpha\eta}^{(r)}(t) &=& g_\alpha^* \ee{i\Delta_\alpha t} a(t) \ee{ik_{\alpha\eta} z_r} - \sum_{\beta,\eta '} g_\beta \ee{-i\Delta_\beta t} c_{\alpha\eta, \beta\eta '}(t) \ee{ik_{\beta\eta'} z_r} -\sqrt{2} g_\alpha \ee{-i\Delta_\alpha t} c_{\alpha\eta}(t) \ee{ik_{\alpha\eta} z_r} \, \label{b-1} , \\ \dot{c}_{\alpha\eta, \beta\eta '}(t) &=& 2 g_\alpha ^* \ee{i\Delta_\alpha t} \sum_{r=1}^2 b_{\beta\eta '} ^{(r)}(t) \ee{-ik_{\alpha\eta} z_r} + 2 g_\beta ^* \ee{i\Delta_\beta t} \sum_{r=1}^2 b_{\alpha\eta} ^{(r)}(t) \ee{-ik_{\beta \eta '} z_r} \label{c2-1} \, , \\ \label{c1-1} \dot{c}_{\alpha\eta} (t) &=& \sqrt{2} g_\alpha ^* \ee{i\Delta_\alpha t} \sum_{r=1}^2 b_{\alpha \eta} ^{(r)}(t) \ee{-ik_{\alpha \eta} z_r} \, , \end{eqnarray} where we consider atomic positions such that $z_1 = -z_2$. We integrate \eqref{c2-1} with $c_{\alpha\eta, \beta \eta '} (0) = 0$, and substitute in \eqref{b-1}, obtaining \begin{eqnarray*} \small \dot{b}_{\alpha \eta}^{(r)}(t) &=& g_\alpha^* \ee{i\Delta_\alpha t} a(t) \ee{ik_{\alpha\eta} z_r} - 2 g_\alpha ^* \sum_{s=1}^2 \int_0 ^t dT \ee{i\Delta_\alpha (t-T)} \ee{-ik_{\alpha \eta} z_s} \sum_{\beta, \eta '} g_\beta \ee{-i\Delta_\beta t} b^{(s)}_{\beta \eta '}(t-T) \ee{ik_{\beta \eta '} z_r} \\ && -\sqrt{2} g_\alpha \ee{-i\Delta_\alpha t} c_{\alpha\eta}(t) \ee{ik_{\alpha\eta} z_r} - 2 \sum_{s=1}^2 \int_0 ^t dT \, b_{\alpha \eta}^{(s)}(t-T) \sum_{\beta, \eta '} |g_\beta |^2 \ee{-i\Delta_\beta T} \ee{ik_{\beta \eta'} (z_r-z_s)} \, .
\end{eqnarray*} For $t\neq 0$, the second term on the RHS can be approximated as zero assuming $g_\beta$ constant and $b_{\beta \eta '} (t)$ evolving slowly, which is consistent with the Wigner-Weisskopf approximation. We transform the sums over frequencies into integrals by using the density of modes $\rho(\omega_\alpha)$ \cite{Density-OS}. For guided modes $k_{\alpha \eta} = \eta \frac{\omega_\alpha}{v}$ with $v$ being the phase velocity of the field inside the waveguide, and $\eta = \pm 1$ labels forward or backward propagation direction of the field along the waveguide. Thus $\sum_{\alpha, \eta} \to \sum_{\eta = \pm 1}\int_0^{\infty} d\omega_\alpha \, \rho(\omega_\alpha)$. To study the non-Markovian effects due only to delay and not to a structured reservoir, we assume a flat spectral density of field modes around the resonance of the emitters such that $\omega_\alpha \approx \omega_0$ in the evaluation of the $\rho(\omega_\alpha)$ and $g_\alpha$ functions. Taking into account all the considerations above we obtain \begin{eqnarray*} \sum_{\beta, \eta '} |g_\beta |^2 \ee{-i\omega_\beta T} \ee{ik_{\beta \eta '} (z_r-z_s)} &=& \int_{0}^{\infty} d\omega_\beta\, \rho(\omega_\beta) |g_\beta|^2 \bigg\{ \ee{-i \omega_\beta (T - \tau_{rs})} + \ee{i \omega_\beta (T+\tau_{rs})} \bigg\} \nonumber \\ &\approx& \rho(\omega_0) |g_0|^2 \int_{0}^{\infty} d\omega_\beta \bigg\{ \ee{-i \omega_\beta (T - \tau_{rs})} + \ee{i \omega_\beta (T+\tau_{rs})} \bigg\} \, , \end{eqnarray*} where $\tau_{rs} = \tfrac{z_r - z_s}{v}$. We make use of the Sokhotski-Plemelj theorem \begin{eqnarray} \int_0^\infty d\omega_\beta \, \ee{-i\omega_\beta a} = - i \text{PV}\left(\frac{1}{a}\right) + \pi \delta (a) \, , \end{eqnarray} where $\text{PV}$ refers to the Cauchy principal value.
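The $\pi\delta(a)$ part of the Sokhotski-Plemelj identity can be verified numerically by pairing the truncated frequency integral with a smooth test function, since $\mathrm{Re}\int_0^{\Lambda} d\omega\, e^{-i\omega a} = \sin(\Lambda a)/a$ and $\int da\, g(a)\, \sin(\Lambda a)/a \to \pi g(0)$ as $\Lambda \to \infty$. A sketch with a Gaussian test function (the cutoff $\Lambda = 50$ and the grid are arbitrary choices):

```python
import math

def delta_part(lam=50.0, lo=-10.0, hi=10.0, n=200000):
    """Trapezoid rule for integral of g(a)*sin(lam*a)/a with g(a) = exp(-a^2);
    the limit lam -> infinity gives pi*g(0) = pi."""
    h = (hi - lo)/n
    def f(a):
        # sin(lam*a)/a -> lam as a -> 0
        return math.exp(-a*a)*(lam if a == 0.0 else math.sin(lam*a)/a)
    total = 0.5*(f(lo) + f(hi))
    for k in range(1, n):
        total += f(lo + k*h)
    return total*h
```

Already at $\Lambda = 50$ the result agrees with $\pi g(0)$ to better than one part in $10^3$, which is why only the delta-function part survives once the principal value is absorbed into the Lamb shift below.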
Absorbing the contribution of the principal value (which corresponds to the Lamb shifts) into the atomic transition frequency \cite{superradianceessay} we obtain \begin{eqnarray} \sum_{\beta , \eta '} |g_\beta |^2 \ee{-i\Delta_\beta T} \ee{ik_{\beta \eta'} (z_r-z_s)} &=& \frac{\gamma \ee{i\omega_0 T}}{2} \frac{\delta(T - \tau_{rs}) + \delta(T + \tau_{rs})}{2} \, , \label{gammas} \end{eqnarray} where the single atom decay rate to guided modes is defined as $\gamma \equiv 4\pi \rho(\omega_0) |g_0|^2$. Using \eqref{gammas} we obtain the equation of motion for $ \dot{b}_{\alpha \eta}^{(r)}(t)$ \begin{eqnarray*} \small \dot{b}_{\alpha \eta}^{(r)}(t) &=& g_\alpha^* \ee{i\Delta_\alpha t} a(t) \ee{ik_{\alpha \eta} z_r} -\sqrt{2} g_\alpha \ee{-i\Delta_\alpha t} c_{\alpha \eta}(t) \ee{ik_{\alpha \eta} z_r} -\frac{\gamma}{2} b_{\alpha \eta}^{(r)} (t) - \frac{\gamma}{2} \ee{i\phi} b_{\alpha \eta}^{(s)} (t - \tau) \Theta(t - \tau)\, , \qquad r\neq s \, , \end{eqnarray*} with $\tau = |\tau_{12}| = \frac{|z_1-z_2|}{v}$, $\phi=\omega_0 \tau$ and $\Theta$ the Heaviside step function. \section{Equations of motion: Solution} \label{app2} We use the Laplace transform to solve the delay differential equations of motion (3)-(5) in the main text. As an intermediate step, we define the variables \begin{eqnarray} \tilde{B}_{\alpha\eta } ^{(\pm)}(s) = \tilde{B}_{\alpha\eta}^{(1)}(s)\ee{-ik_{\alpha\eta} z_1} \pm \tilde{B}_{\alpha\eta}^{(2)}(s)\ee{-ik_{\alpha\eta} z_2} \,.
\end{eqnarray} Thus \begin{align} s\, \tilde{a}(s) - 1 &= -\sum_{\alpha, \eta} g_\alpha \tilde{B}_{\alpha \eta}^{(+)}(s) \label{a12-ec} \\% \left[s + i\Delta_\alpha +\frac{\gamma}{2} + \frac{4|g_\alpha |^2}{s+2i\Delta_\alpha} \right] \tilde{B}_{\alpha\eta}^{(+)}(s) &= 2g_\alpha^{*} \tilde{a}(s)-\frac{\gamma}{2} \ee{i\phi}\ee{-i\Delta_\alpha \tau - s\tau}\left\{ \cos(k_{\alpha \eta} d) \tilde{B}_{\alpha\eta}^{(+)}(s) + i \sin(k_{\alpha \eta} d) \tilde{B}_{\alpha \eta}^{(-)}(s) \right\} \label{bp-ec} \\% \left[s + i\Delta_\alpha +\frac{\gamma}{2} \right] \tilde{B}_{\alpha\eta}^{(-)}(s) &= \frac{\gamma}{2} \ee{i\phi}\ee{-i\Delta_\alpha \tau - s\tau} \left\{ \cos(k_{\alpha\eta} d) \tilde{B}_{\alpha\eta}^{(-)}(s) + i \sin(k_{\alpha \eta} d) \tilde{B}_{\alpha \eta}^{(+)}(s) \right\} \, , \label{bm-ec} \end{align} where we use that \begin{eqnarray*} \tilde{C}_{\alpha\eta} (s) = \sqrt{2} g_\alpha^{*} \frac{\tilde{B}_{\alpha\eta}^{(+)}(s)}{s+2i\Delta_\alpha} \,, \end{eqnarray*} with $a(0)=1$, $B_{\alpha\eta}^{(1,2)}(0) = 0$ and $C_{\alpha\eta}(0) = 0$ as initial conditions. 
The solutions of the previous equations are \eqn{ \label{bp-sol} \tilde{B}_{\alpha\eta}^{(+)}(s) & = - 2g_\alpha^{*} \tilde{a}(s) \sbkt{s + i\Delta_\alpha + \frac{\gamma}{2} - \frac{\gamma}{2}e^{i\phi} e^{ -(s+ i \Delta_\alpha) \tau } \cos(k_{\alpha\eta} d)}\sbkt{\cbkt{ \frac{\gamma}{2}e^{i\phi} \ee{ -(s + i \Delta_\alpha) \tau } \sin(k_{\alpha \eta} d) }^2 \right.\nonumber\\ &\left.+\cbkt{ s + i\Delta_\alpha + \frac{\gamma}{2} -\frac{\gamma}{2}e^{i\phi} e^{ - (s+i \Delta_\alpha) \tau } \cos(k_{\alpha\eta} d)} \cbkt{ s + i\Delta_\alpha + \frac{\gamma}{2} +\frac{\gamma}{2}e^{i\phi} e^{ -(s + i \Delta_\alpha ) \tau} \cos(k_{\alpha\eta} d) + \frac{4 \lvert g_\alpha \rvert ^2}{s+2i\Delta_\alpha} } }^{-1} } \eqn{ \label{bm-sol} \tilde{B}_{\alpha\eta}^{(-)}(s) &= - 2g_\alpha^{*} \tilde{a}(s) \sbkt{i \frac{\gamma}{2}e^{i\phi} e^{ -(s+ i \Delta_\alpha) \tau } \sin(k_{\alpha \eta} d)}\sbkt{\cbkt{ \frac{\gamma}{2}e^{i\phi} \ee{ -(s + i \Delta_\alpha) \tau } \sin(k_{\alpha \eta} d)}^2\right. \nonumber\\ &\left.+ \cbkt{ s + i\Delta_\alpha + \frac{\gamma}{2} -\frac{\gamma}{2}e^{i\phi} e^{ - (s+i \Delta_\alpha) \tau } \cos(k_{\alpha\eta} d)} \cbkt{ s + i\Delta_\alpha + \frac{\gamma}{2} +\frac{\gamma}{2}e^{i\phi} e^{ -(s + i \Delta_\alpha ) \tau} \cos(k_{\alpha\eta} d) + \frac{4 \lvert g_\alpha \rvert ^2}{s+2i\Delta_\alpha} } } \,. 
} and \eqn{ \label{a12-sol} \tilde{a}(s) &= \sbkt{ s - 2 \sum_{\alpha, \eta} |g_\alpha |^2 \sbkt{s + i\Delta_\alpha + \frac{\gamma}{2} - \frac{\gamma}{2}e^{i\phi} e^{-(s + i \Delta_\alpha) \tau } \cos(k_{\alpha \eta} d)}\sbkt{\cbkt{\frac{\gamma}{2}e^{i\phi} e^{- (s + i \Delta_\alpha) \tau }\sin(k_{\alpha \eta} d) }^2 \right.\right.\nonumber\\ &\left.\left.+\cbkt{ s + i\Delta_\alpha + \frac{\gamma}{2} -\frac{\gamma}{2}e^{i\phi} e^{-(s + i \Delta_\alpha) \tau }\cos(k_{\alpha \eta} d)} \cbkt{ s + i\Delta_\alpha + \frac{\gamma}{2} + \frac{\gamma}{2}e^{i\phi} e^{-(s + i \Delta_\alpha) \tau } \cos(k_{\alpha \eta} d) + \frac{4 \lvert g_\alpha \rvert ^2}{s+2i\Delta_\alpha} } } }^{-1} \,. } In order to find the inverse Laplace transform of Eqs. \eqref{bp-sol}, \eqref{bm-sol} and \eqref{a12-sol} we use that $ |g_\alpha|^2 / \gamma \sim 10^{-4}$ for EM modes in the visible range \cite{Barnes2020} to neglect the contribution of the term $4|g_\alpha|^2 /(s + 2i \Delta_\alpha)$ in the denominator. This corresponds to neglecting the probability of exciting two equal $\alpha$ modes traveling in the same direction compared to the probability of exciting two different modes. Considering this, we get Eq. (6) in the main text and \eqn{\label{bs} \tilde{B}_{\alpha\eta}^{\bkt{\substack{1\\2}}}(s) \approx& - g_\alpha^{*} \tilde{a}(s) e^{\pm i k_{\alpha \eta} d/2} \frac{s + i\Delta_\alpha + \frac{\gamma}{2} - \frac{\gamma}{2}e^{i\phi} e^{-s \tau }e^{ - i \Delta_\alpha \tau} \ee{\mp ik_{\alpha\eta} d}}{\left[ \frac{\gamma}{2}e^{i\phi} e^{-s \tau }e^{ - i \Delta_\alpha \tau} + s + i\Delta_\alpha + \frac{\gamma}{2} \right] \left[ \frac{\gamma}{2}e^{i\phi} e^{-s \tau }e^{ - i \Delta_\alpha \tau} - s - i\Delta_\alpha - \frac{\gamma}{2} \right] } \, . } \section{Solving the integrals for $\tilde{a}(s)$} \label{app3} Eq.
(6) with the sum approximated as an integral is \begin{eqnarray} \label{a12int} \tilde{a}(s) &\simeq & \left\{ s - \frac{\gamma}{\pi}\tilde{a}(s) \int_{-\omega_0}^{\infty} \frac{s + i\Delta_\alpha + \frac{\gamma}{2} - \frac{\gamma}{2}e^{i\phi} e^{-s \tau }e^{ - i \Delta_\alpha \tau} \cos(\Delta_\alpha \tau + \phi)}{\left[ \frac{\gamma}{2}e^{i\phi} e^{-s \tau }e^{ - i \Delta_\alpha \tau} + s + i\Delta_\alpha + \frac{\gamma}{2} \right] \left[ \frac{\gamma}{2}e^{i\phi} e^{-s \tau }e^{ - i \Delta_\alpha \tau} - s - i\Delta_\alpha - \frac{\gamma}{2} \right] } d\Delta_\alpha \right\}^{-1} \, \nonumber \\ &=& \left\{ s - \frac{\gamma}{\pi}\tilde{a}(s) I(s, \gamma, \tau, \phi) \right\}^{-1} \, , \end{eqnarray} where we extend the lower limit in the integral to $-\infty$ as an approximation. We rewrite the denominator inside the integral using the poles for the $\Delta_\alpha$ variable, determined by the characteristic equation \begin{eqnarray} \frac{\gamma}{2}e^{i\phi} e^{-s \tau }e^{ - i \Delta_\alpha \tau} -\sigma \left( s + i\Delta_\alpha + \frac{\gamma}{2}\right) = 0 \qquad \Longrightarrow \qquad \Delta^{(\sigma)}_k = i \left\{s + \frac{\gamma}{2} - \frac{1}{\tau} W_k(\sigma r) \right\} \, , \end{eqnarray} with $W_k$ denoting the $k$th branch of the Lambert W function \cite{Corless1996} and $\sigma = \pm 1$. For simplicity, we introduce $r=\frac{\gamma \tau}{2}\ee{\gamma \tau /2 + i\phi}$. Using partial fraction decomposition \cite{partial-fd} we obtain \eqn{ \label{pfd} \frac{1}{\left[ \frac{\gamma}{2}e^{i\phi} e^{-s \tau }e^{ - i \Delta_\alpha \tau} + s + i\Delta_\alpha + \frac{\gamma}{2} \right]\left[ \frac{\gamma}{2}e^{i\phi} e^{-s \tau }e^{ - i \Delta_\alpha \tau} - s - i\Delta_\alpha - \frac{\gamma}{2} \right] } = \frac{\tau}{2} \sum_{k=-\infty}^{\infty} \sum_{\sigma =\pm 1} \frac{1}{W_k(\sigma r) + W_k^{2}(\sigma r)}\frac{i}{\Delta_\alpha - \Delta_{k}^{(\sigma)}} \, .
} Using Cauchy's integral formula we get \begin{eqnarray} \int_{-\infty}^{\infty} \frac{s + i\Delta_\alpha + \frac{\gamma}{2}}{\Delta_\alpha - \Delta_k^{(\sigma)}} d \Delta_\alpha = 2\pi i \frac{W_k{(\sigma r)}}{\tau} \, , \\ \int_{-\infty}^{\infty} \frac{e^{-i\Delta_\alpha \tau} \cos(\Delta_\alpha \tau + \phi)}{\Delta_\alpha - \Delta_k^{(\sigma)}} d \Delta_\alpha = 2\pi i \ee{i\phi} \, . \end{eqnarray} For the second integral we use that $\textrm{Im}\left(\Delta_k ^{(\sigma)} \right) > 0$, since $\textrm{Re}(s)$ can be taken as large as needed, obtaining {\small \begin{flalign} \label{int-i} I(s, \gamma, \tau, \phi) = -\pi \sum_{k=-\infty}^{\infty} \sum_{\sigma = \pm 1} \frac{1}{1+ W_k(\sigma r)}\left(1 + \frac{\tau \ee{i\phi}}{2 W_k(\sigma r)} \right) \,. \end{flalign} } Using the following identities of the Lambert W function \cite{Lambert-identities} \begin{eqnarray} \sum_{k=-\infty}^{\infty} \frac{1}{1+W_k(z)} &=& \frac{1}{2} \, , \\ \sum_{k=-\infty}^{\infty} \frac{1}{W_k(z) + W_k^2(z)} &=& \frac{1}{z} \,, \end{eqnarray} we obtain $I(s, \gamma, \tau, \phi) = - \pi$. \section{Inverse Laplace transform of $\tilde{B}_{\alpha\eta}^{(1,2)}$} \label{app4} To obtain $B_{\alpha \eta}^{(1,2)}(t)$ we apply the inverse Laplace transform to Eq.~\eqref{bs}. First, similarly to what is done in Appendix~\ref{app3}, we rewrite the result using the poles of the denominator, now in the $s$ variable.
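As a numerical sanity check, both the Lambert-W form of the poles and the two branch-sum identities above can be verified with `scipy.special.lambertw`. This is an illustrative sketch with arbitrarily chosen (hypothetical) parameter values; the branch sums are truncated symmetrically at $|k| \leq K$, so the conditionally convergent first identity is only approximately reproduced.

```python
import numpy as np
from scipy.special import lambertw

# Hypothetical parameter values, used only for this check.
gamma, tau, phi, s = 1.0, 0.7, 0.3, 2.0
r = (gamma * tau / 2) * np.exp(gamma * tau / 2 + 1j * phi)

def char_eq_residual(k, sigma):
    """Residual of (gamma/2) e^{i phi} e^{-s tau} e^{-i Delta tau} = sigma (s + i Delta + gamma/2)
    evaluated at Delta_k^{(sigma)} = i (s + gamma/2 - W_k(sigma r)/tau)."""
    W = lambertw(sigma * r, k)
    Delta = 1j * (s + gamma / 2 - W / tau)
    lhs = (gamma / 2) * np.exp(1j * phi) * np.exp(-s * tau) * np.exp(-1j * Delta * tau)
    rhs = sigma * (s + 1j * Delta + gamma / 2)
    return abs(lhs - rhs)

max_residual = max(char_eq_residual(k, sigma)
                   for k in range(-3, 4) for sigma in (1, -1))

# Branch-sum identities used to evaluate I(s, gamma, tau, phi), truncated at |k| <= K.
z, K = 1.5 + 0.2j, 4000
S1 = sum(1 / (1 + lambertw(z, k)) for k in range(-K, K + 1))                    # -> 1/2
S2 = sum(1 / (lambertw(z, k) + lambertw(z, k) ** 2) for k in range(-K, K + 1))  # -> 1/z
```

The second sum converges absolutely (terms fall off like $1/(2\pi k)^2$), so the truncation error is already negligible at moderate $K$.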
Defining \begin{eqnarray} \label{poles-s} s_{k,\alpha}^{(\sigma)} = -\frac{\gamma}{2} - i\Delta_\alpha + \frac{1}{\tau} W_k(\sigma r)\, , \end{eqnarray} and using that $k_{\alpha \eta} d = \eta (\Delta_\alpha \tau + \phi)$ and $\tilde{a}(s)=1/(s+\gamma)$, we obtain {\small \begin{eqnarray} B_{\alpha \eta}^{\bkt{\substack{1\\2}}}(t) = -\frac{g_\alpha^{*}\tau}{2} \sum_{\sigma = \pm 1} \sum_{k=-\infty}^{\infty} \frac{\ee{\pm ik_{\alpha \eta} d/2}}{W_k(\sigma r) + W_k^{2}(\sigma r)} \bigg[ \mathcal{L}^{-1} \left( \frac{s + i\Delta_\alpha + \frac{\gamma}{2}}{(s+\gamma)(s-s_{k,\alpha}^{(\sigma)})} \right) - \frac{\gamma}{2} \ee{i\phi \mp i k_{\alpha\eta} d} \mathcal{L}^{-1} \left( \frac{e^{-s\tau}}{(s+\gamma)(s-s_{k,\alpha}^{(\sigma)})} \right) \bigg] \end{eqnarray}} where $\mathcal{L}^{-1}$ denotes the inverse Laplace transform. Using that \begin{eqnarray} \mathcal{L}^{-1} \left( \frac{s + i\Delta_\alpha + \frac{\gamma}{2}}{(s+\gamma)(s-s_{k, \alpha}^{(\sigma)})} \right) &=& \frac{\frac{\gamma}{2} - i \Delta_\alpha }{s_{k,\alpha}^{(\sigma)}+\gamma} \ee{-\gamma t} + \frac{\frac{\gamma}{2} + i \Delta_\alpha +s_{k,\alpha}^{(\sigma)} }{s_{k,\alpha}^{(\sigma)}+\gamma} \ee{s_{k,\alpha}^{(\sigma)} t} \\ \mathcal{L}^{-1} \left( \frac{\ee{-s\tau}}{(s+\gamma)(s-s_{k,\alpha}^{(\sigma)})} \right) &=& \left( \ee{s_{k,\alpha}^{(\sigma)} (t-\tau)} - \ee{-\gamma(t-\tau)} \right) \frac{\Theta(t-\tau) }{s_{k,\alpha}^{(\sigma)}+\gamma} \end{eqnarray} and taking $b_{\alpha\eta}^{(r)}(t) = B_{\alpha\eta} ^{(r)} (t) \ee{i\Delta_\alpha t}$ we get \begin{eqnarray} \label{bt} b_{\alpha \eta}^{\bkt{\substack{1\\2}}}(t) &=& - \frac{g_\alpha^{*} \tau}{2} \ee{i\Delta_\alpha t} e^{\pm i \eta (\Delta_\alpha \tau + \phi)/2} \sum_{\sigma = \pm 1} \sum_{k=-\infty}^{\infty} \frac{1}{W_k(\sigma r) + W_k^2(\sigma r)} \frac{\ee{-\gamma t}}{\frac{\gamma}{2}- i\Delta_\alpha + \frac{1}{\tau}W_k(\sigma r) } \times \\ && \bigg[ \frac{\gamma}{2}-i\Delta_\alpha + \frac{W_k(\sigma r)}{\tau}\ee{\left[\frac{\gamma}{2} - i
\Delta_\alpha + \frac{1}{\tau} W_k(\sigma r) \right] t} + \frac{\gamma}{2} \ee{i(1-\eta)\phi - i(1+\eta) \Delta_\alpha \tau + \gamma\tau} \Theta(t-\tau) \bigg\{1-\ee{\left[\frac{\gamma}{2} - i \Delta_\alpha + \frac{1}{\tau} W_k(\sigma r) \right] (t-\tau)} \bigg\} \bigg] \, , \nonumber \end{eqnarray} where $W_k$ denotes the $k$th branch of the Lambert W function. \section{Stationary values}\label{app5} In the late-time limit $\gamma t \to \infty$ we have \begin{eqnarray} b_{\alpha\eta}^{\bkt{\substack{1\\2}}}(t) \approx -\frac{g_\alpha^{*} \tau}{2} \sum_{\sigma = \pm 1}\sum_{k=-\infty}^{\infty} \frac{\ee{\pm i\eta (\Delta_\alpha \tau + \phi )/2}}{W_k(\sigma r) + W_k^2(\sigma r)} \frac{\ee{-\left[\frac{1}{2} - \frac{1}{\gamma \tau}W_{k}(\sigma r) \right]\gamma t}}{\frac{\gamma}{2} - i \Delta_\alpha + \frac{1}{\tau}W_k(\sigma r)} \bigg[ \frac{W_k(\sigma r)}{\tau} - \frac{\gamma}{2}\ee{i(1-\eta)\phi} \ee{-i\eta \Delta_\alpha \tau} \ee{-\left[\frac{1}{2} - \frac{1}{\gamma \tau}W_{k}(\sigma r) \right]\gamma \tau} \bigg] \, . \nonumber \end{eqnarray} The above expression does not decay to zero if \begin{eqnarray} \Re \left[\frac{1}{2} - \frac{1}{\gamma \tau} W_k\left(\sigma \frac{\gamma \tau}{2} \ee{\gamma \tau /2} \ee{i\phi} \right) \right] = 0 \, . \end{eqnarray} Thus, the only terms of the expression that remain non-zero in the long-time limit are those with $k=0$ and either $\{\sigma=+1, \, \phi=2\pi n\}$ or $\{\sigma=-1, \, \phi=(2n+1)\pi\}$, with $n \in \mathbb{N}_{0}$, because \begin{eqnarray*} W_0\left(\frac{\gamma \tau}{2}\ee{\gamma \tau /2} \right) = \frac{\gamma \tau}{2} \, . \end{eqnarray*} Taking into account the above, we get a stationary value given by \begin{eqnarray} b_{\alpha\eta,\, \text{ss}}^{\bkt{\substack{1\\2}}} = -\frac{g_\alpha^{*}}{2} \frac{\ee{\pm i\eta (\Delta_\alpha \tau + n \pi )/2}}{1 + \frac{\gamma \tau}{2}} \frac{1 - \ee{ \mp i \eta \Delta_\alpha \tau }}{\gamma - i \Delta_\alpha} \, .
\end{eqnarray} Then, we obtain \begin{eqnarray} P^{(1)}\bkt{t\to\infty} &=& \frac{\rho(\omega_0)|g_0|^{2}}{2(1+\frac{\gamma \tau}{2})^{2}} \sum_{\eta=\pm 1} \int_{-\infty}^{\infty} d\Delta_\alpha \left\lvert \frac{1 - \ee{ - i \eta\Delta_\alpha \tau}}{\gamma - i \Delta_\alpha} \right \rvert^{2} \, , \nonumber \end{eqnarray} which leads to Eq.~(12) in the main text. For correlations between atomic dipoles, we find the stationary value \begin{eqnarray} \mathrm{Tr}\sbkt{\hat \rho_A (t\to \infty)\hat{\sigma}_+ ^{(1)} \hat \sigma_-^{(2)} } &=& \frac{\rho(\omega_0)|g_0|^{2} \cos(n\pi)}{4(1+\frac{\gamma \tau}{2})^{2}} \sum_{\eta=\pm 1} \int_{-\infty}^{\infty} d\Delta_\alpha \frac{\left(1 - \ee{ - i \eta\Delta_\alpha \tau} \right)^{2}}{\left\lvert\gamma - i \Delta_\alpha \right \rvert^{2}} = -\frac{\cos(n\pi)}{2} \frac{\sinh \left(\frac{\gamma\tau}{2} \right)}{(1+\frac{\gamma\tau}{2})^2 \ee{\frac{\gamma\tau}{2}}} \, . \end{eqnarray} Therefore $\mathrm{Tr}\sbkt{\hat \rho_A (t \to \infty)\hat{\sigma}_+ ^{(1)} \hat \sigma_-^{(2)} } = P^{(1)}(t\to \infty)/2$. \section{Solution for $c_{\alpha \eta, \beta \eta '} (t)$} Applying the Laplace transform to Eq.~\eqref{c2-1} and defining $C_{\alpha \eta, \beta \eta '}(t) = c_{\alpha \eta, \beta \eta '}(t) \ee{-i(\Delta_\alpha + \Delta_\beta)t}$ we get \begin{eqnarray} \tilde{C}_{\alpha\eta, \beta\eta '}(s) &=& 2 g_\alpha ^* \sum_{r=1}^2 \frac{\tilde{B}_{\beta\eta '} ^{(r)}(s) \ee{-ik_{\alpha \eta } z_r}}{s+i(\Delta_\alpha + \Delta_\beta)} + 2 g_\beta ^* \sum_{r=1}^2 \frac{ \tilde{B}_{\alpha\eta} ^{(r)}(s) \ee{-ik_{\beta \eta '} z_r}}{s+i(\Delta_\alpha + \Delta_\beta)} \, . \end{eqnarray} To find the function in the time domain, we substitute Eq.~\eqref{bs} and use the poles \eqref{poles-s}.
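The Lorentzian integral appearing in $P^{(1)}(t\to\infty)$ above can be checked numerically: since $\lvert 1 - \ee{-i\eta\Delta_\alpha\tau}\rvert^2 = 2 - 2\cos(\Delta_\alpha\tau)$ for either $\eta = \pm 1$, the standard results $\int_{-\infty}^{\infty} d\Delta/(\gamma^2+\Delta^2) = \pi/\gamma$ and $\int_{-\infty}^{\infty} \cos(\Delta\tau)\, d\Delta/(\gamma^2+\Delta^2) = (\pi/\gamma)\ee{-\gamma\tau}$ give $(2\pi/\gamma)(1-\ee{-\gamma\tau})$ per $\eta$ term. A minimal sketch with hypothetical $\gamma$, $\tau$ values:

```python
import numpy as np
from scipy.integrate import quad

gamma, tau = 1.3, 0.8   # hypothetical values, for the check only

# |1 - e^{-i eta Delta tau}|^2 = 2 - 2 cos(Delta tau), so the integral splits into
# a plain Lorentzian piece and a cosine-weighted piece (handled by QUADPACK's QAWF).
plain, _ = quad(lambda d: 1.0 / (gamma**2 + d**2), 0, np.inf)   # = pi/(2 gamma)
cosine, _ = quad(lambda d: 1.0 / (gamma**2 + d**2), 0, np.inf,
                 weight='cos', wvar=tau)                        # = (pi/(2 gamma)) e^{-gamma tau}
numeric = 4.0 * plain - 4.0 * cosine

# Closed form of the per-eta integral quoted in the lead-in.
closed = (2.0 * np.pi / gamma) * (1.0 - np.exp(-gamma * tau))
```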
Then, applying the inverse Laplace transform we arrive at the expression \begin{eqnarray} c_{\alpha \eta, \beta \eta '}(t) &=& 2\tau g_\alpha^{*}g_\beta^{*} \sum_{\sigma = \pm 1} \sum_{k=-\infty}^{\infty} \frac{\ee{i(\Delta_\alpha + \Delta_\beta)t}}{W_k(\sigma r) + W_k^{2}(\sigma r)} \bigg[ \cos \left[(k_{\alpha \eta}- k_{\beta \eta '})d/2 \right] \left\{ M_{\alpha, \beta}^{(k,\sigma)}(t) + M_{\beta, \alpha}^{(k,\sigma)}(t) \right\} \nonumber \\ && - \frac{\gamma}{2} e^{i\phi} \cos \left[(k_{\alpha \eta} + k_{\beta \eta '})d/2 \right] \left\{N_{\alpha, \beta}^{(k,\sigma)}(t) \ee{-i\Delta_\alpha \tau}+ N_{\beta, \alpha}^{(k,\sigma)}(t) \ee{-i\Delta_\beta \tau} \right\} \bigg] \, , \end{eqnarray} where we have defined the functions \begin{eqnarray} M_{\alpha, \beta}^{(k,\sigma)}(t) &=& \mathcal{L}^{-1} \left( \frac{s + i\Delta_\alpha + \frac{\gamma}{2}}{\left[s + i(\Delta_\alpha + \Delta_\beta) \right] \left[s + \gamma \right] \left[ s - s_{k,\alpha} ^{(\sigma)} \right] } \right) \nonumber \\ &=& \frac{\gamma - 2 i \Delta_\alpha}{\left[\Delta_\alpha + i \left( \frac{\gamma}{2} + \frac{1}{\tau}W_k(\sigma r) \right) \right] \left[\Delta_\alpha + \Delta_\beta +i\gamma \right]} \frac{\ee{-\gamma t}}{2} - \frac{\gamma - 2 i \Delta_\beta}{\left[\Delta_\beta + i \left( \frac{\gamma}{2} - \frac{1}{\tau}W_k(\sigma r) \right) \right] \left[\Delta_\alpha + \Delta_\beta +i\gamma \right]} \frac{\ee{-i(\Delta_\alpha + \Delta_\beta) t}}{2} \nonumber \\ && + \frac{W_k(\sigma r)}{\tau} \frac{\ee{-\left[\frac{1}{2} - \frac{1}{\gamma \tau}W_{k}(\sigma r) \right]\gamma t - i \Delta_\alpha t}}{\left[ \Delta_\alpha + i\left(\frac{\gamma}{2} + \frac{1}{\tau} W_k(\sigma r) \right) \right]\left[ \Delta_\beta + i\left(\frac{\gamma}{2} - \frac{1}{\tau} W_k(\sigma r) \right) \right]} \, , \\ N_{\alpha, \beta}^{(k,\sigma)}(t) &=& \mathcal{L}^{-1} \left( \frac{\ee{-s\tau}}{\left[s + i(\Delta_\alpha + \Delta_\beta) \right] \left[s + \gamma \right] \left[ s - s_{k,\alpha} ^{(\sigma)}
\right] } \right) \nonumber \\ &=& - \Bigg( \frac{\ee{-\gamma (t-\tau)}}{\left[\Delta_\alpha + i \left( \frac{\gamma}{2} + \frac{1}{\tau}W_k(\sigma r) \right) \right] \left[\Delta_\alpha + \Delta_\beta +i\gamma \right]} + \frac{\ee{-i(\Delta_\alpha + \Delta_\beta) (t-\tau)}}{\left[\Delta_\beta + i \left( \frac{\gamma}{2} - \frac{1}{\tau}W_k(\sigma r) \right) \right] \left[\Delta_\alpha + \Delta_\beta +i\gamma \right]} \nonumber \\ && - \frac{\ee{-\left[\frac{1}{2} - \frac{1}{\gamma \tau}W_{k}(\sigma r) \right]\gamma (t-\tau) - i \Delta_\alpha (t-\tau)}}{\left[ \Delta_\alpha + i\left(\frac{\gamma}{2} + \frac{1}{\tau} W_k(\sigma r) \right) \right]\left[ \Delta_\beta + i\left(\frac{\gamma}{2} - \frac{1}{\tau} W_k(\sigma r) \right) \right]} \Bigg) \Theta(t-\tau) \,. \end{eqnarray} These functions are not invariant under the interchange of the labels $\alpha, \beta$. In the long-time limit $\gamma t \to \infty$ and $\phi = n \pi$ the solution takes the form {\small \begin{eqnarray} c_{\alpha \eta, \beta \eta '}(t\to \infty) &=& 2 g_\alpha^{*}g_\beta^{*} \bigg[ \frac{\cos \left[(k_{\alpha \eta} - k_{\beta \eta '})d/2 \right] -\ee{in \pi }\cos \left[(k_{\alpha \eta} + k_{\beta \eta '})d/2 \right]}{1 + \frac{\gamma \tau}{2}} \left\{ \frac{\ee{i\Delta_\alpha t}}{\Delta_\alpha (\Delta_\beta + i \gamma)} + \frac{\ee{i\Delta_\beta t}}{\Delta_\beta (\Delta_\alpha + i \gamma)} \right\} \\ && - \frac{\tau}{2 \left[\Delta_\alpha + \Delta_\beta +i\gamma \right]} \sum_{\sigma = \pm 1} \sum_{k=-\infty}^{\infty} \frac{1}{W_k(\sigma r) + W_k^{2}(\sigma r)} \left\{ \frac{L_{\alpha \eta, \beta \eta '}}{\Delta_\alpha + i\left(\frac{\gamma}{2} - \frac{1}{\tau} W_k(\sigma r) \right)} + \frac{L_{\beta \eta ', \alpha \eta }}{\Delta_\beta + i\left(\frac{\gamma}{2} - \frac{1}{\tau} W_k(\sigma r) \right)} \right \} \bigg] \, , \nonumber \end{eqnarray}} with \begin{eqnarray} L_{\alpha \eta, \beta \eta '} = (\gamma -2i\Delta_\alpha)\cos \left[(k_{\alpha \eta} - k_{\beta \eta '})d/2
\right] - \gamma \ee{i n \pi + i \Delta_\alpha \tau}\cos \left[(k_{\alpha \eta} + k_{\beta \eta '})d/2 \right] \, . \end{eqnarray} \end{document}
\begin{document} \begin{center} {\large \bf Realization of irreducible unitary \\ representations of {\normalsize $ \lie{osp}(M/N;\mathbb{R} ) $} \large \bf on Fock spaces. \footnote{ appeared in ``\textit{Representation theory of Lie groups and Lie algebras}'' (Fuji-Kawaguchiko, 1990), 1–21, World Sci. Publ., River Edge, NJ, 1992. } } Dedicated to Professor Nobuhiko Tatsuuma on his 60th birthday \small {\bf By}\\ {\bf Hirotoshi FURUTSU}\\ {\it Department of Mathematics, College of Science and Technology,\\ Nihon University\\ Kanda-Surugadai 1-8, Chiyoda, Tokyo 101, JAPAN.} {\bf Kyo NISHIYAMA}\\ {\it Institute of Mathematics, Yoshida College, Kyoto University\\ Sakyo, Kyoto 606, JAPAN.} \end{center} \normalsize \section*{Introduction.} Recently, Lie superalgebras and their representations have become more important in physics. They appeared first in the area of elementary particle physics, and were then used in other fields, such as nuclear physics, supergravity, superstring theory, etc. (cf.\ \cite{Kostelecky-Campbell}). In physics, unitarizability of the representations is very important, since operators in physics often have to be self-adjoint. It is notable that many physical applications in supersymmetry deal with the Lie superalgebras $ \lie{su}(M/N) $ and $ \lie{osp}(M/N) $. For example in \cite{Bars}, $ \lie{osp}(4/N) $ is used to describe supergravity. The classification of finite dimensional simple Lie superalgebras was given by V.G.~Kac (\cite{Kac1}). He also wrote many works on the theory of Lie superalgebras and their finite dimensional representations (\cite{Kac2}, \cite{Kac4}). Thereafter, many studies on Lie superalgebras were made in mathematics. For example, M.~Scheunert wrote some interesting works already in the 1970's (\cite{Scheunert1}) and F.A.~Berezin also played an important role in this field (\cite{Berezin}). In the 1970's, most of the studies concerned finite dimensional representations (e.g. \cite{Kac4}, \cite{Scheunert.N.R2}).
There are two types of irreducible representations of simple Lie superalgebras, namely typical and atypical ones (see \cite{Kac2} for the definition). For finite dimensional typical representations, properties similar to those of simple Lie algebras hold (\cite{Kac2}, \cite{Kac4}). But atypical representations (even finite dimensional ones) are more difficult to study. Many problems are still left to be solved (cf.\ \cite{Gould}). In recent years, papers dealing with infinite dimensional representations have become more common. Now they are studied everywhere, and papers on super unitary (infinite dimensional) representations are also popular (see \cite{Gould-Zhang2}, \cite{Gunaydin}, \cite{Schmitt.et.al}, etc.). It seems interesting that whether a representation is typical or not has little correlation with its super unitarity. Classifications of irreducible super unitary representations have been obtained for some low rank basic classical Lie superalgebras. For example, for some of the Lie superalgebras of type A or their real forms, classifications were given (\cite{Furutsu2}, \cite{Furutsu-Hirai}, \cite{Gould-Zhang1}, \cite{Jakobsen}, etc.). For the orthosymplectic Lie superalgebra $ \lie{osp}(2/1; \mathbb{R} ) $, a classification was also given in \cite{Furutsu-Hirai}. In \cite{Heidenreich}, irreducible super unitary lowest weight modules of $ \lie{osp}(4/1) $ were classified. The complete list of irreducible super unitary representations of $ \lie{osp}(2/2; \mathbb{R} ) $ is obtained in \cite[Th.~4.5]{Nishiyama2}. For general orthosymplectic algebras $ \lie{osp}(M/N; \mathbb{R} ) \; (M=2m) $, many irreducible super unitary representations were realized explicitly in \cite{Nishiyama2}, using super dual pairs. For simple Lie algebras, a general classification theorem of irreducible unitary highest weight representations was already given in \cite{EHW} and \cite{Jakobsen3} independently.
For Lie superalgebras, however, a general classification theorem had long been awaited. Recently, H.P.~Jakobsen gave a complete classification of irreducible super unitary highest weight modules (\cite{Jakobsen2}). However, his classification only clarifies the parameters of the representations and does not provide realizations, character formulas, etc. In our last paper \cite{Furutsu-Nishiyama}, we classified the irreducible super unitary representations of Lie superalgebras of type $ \lie{su}(p,q/n) $. Our method of classification naturally provides realizations of irreducible super unitary representations on Fock spaces. We think the method is also useful for obtaining character formulas (cf.\ \cite{Nishiyama4}). In this paper, we classify irreducible super unitary representations of an orthosymplectic Lie superalgebra $ \lie{osp}(M/N; \mathbb{R} ) $ in a way similar to that of the previous paper. Our starting point for the classification for $ \lie{su}(p,q/n) $ was based on the notion of super dual pairs in $ \lie{osp}(M/N) $ (cf.\ \cite{Nishiyama2}, \cite{Nishiyama3}). Therefore, logically, this paper should have preceded our last paper \cite{Furutsu-Nishiyama}. In this paper we study the irreducible super unitary representations of a real form of the complex Lie superalgebra $ \lie{osp}(M/N; \mathbb{C} ) \; (N \geq 2) $. Note that $ \lie{osp}(M/N; \mathbb{C} ) $ itself has no irreducible super unitary representations except trivial ones. The real forms of the Lie superalgebra $ \lie{osp}(M/N; \mathbb{C} ) $ are isomorphic to one of the real Lie superalgebras $ \lie{osp}(M/p,q; \mathbb{R} ) \; (N=p+q, [N/2] \leq p \leq N) $. But if $ p $ is not equal to $ N $, $ \lie{osp}(M/p,q; \mathbb{R} ) $ also has no irreducible super unitary representations except trivial ones. So the only real form which has non-trivial super unitary representations is $ \lie{osp}(M/N; \mathbb{R} ) $.
The main result given in this paper is a classification of all the irreducible super unitary representations of $ \lie{osp}(M/N; \mathbb{R} ) \; (N \geq 2) $ that integrate to global representations of $ Sp(M) \times SO(N) $, a Lie group corresponding to the even part (Theorem \ref{thm:cond}). As commented above, the classification itself is not new, having been obtained by H.P.~Jakobsen \cite{Jakobsen2} for general highest weight representations in a quite different style. But the construction of integrable unitary representations is new, and we think our method of classification is simpler and gives more information than the one in \cite{Jakobsen2}. Let us explain the method of classification briefly. First of all, we note that if a representation is integrable, then it is admissible (see \S \ref{sec:osp}). Therefore, an irreducible integrable super unitary representation must be a lowest or highest weight representation (see \cite[Prop.~2.3]{Nishiyama2}). So what we have to do is to find a necessary and sufficient condition for an irreducible lowest (or highest) weight representation to be super unitary. {\tolerance=10000 Since an orthosymplectic algebra has a special super unitary representation called the oscillator representation (see \cite{Nishiyama1}), we utilize this representation to realize the \linebreak irreducible super unitary representations. If we imbed $ \lie{osp}(M/N; \mathbb{R} ) $ into \linebreak $ \lie{osp}(ML/NL; \mathbb{R} ) $ $ (L \geq 1 ) $, then the oscillator representation of $ \lie{osp}(ML/NL; \mathbb{R} ) $ becomes a super unitary representation of $ \lie{osp}(M/N; \mathbb{R} ) $ through the above imbedding. If we decompose it, we can get many irreducible super unitary representations. In this paper, the complete decomposition is not carried out, but we construct many primitive vectors for $ \lie{osp}(M/N; \mathbb{R} ) $ explicitly. The weights of these vectors are lowest weights for some irreducible super unitary representations.
This supplies us with a sufficient condition for super unitarity.} Next we study a necessary condition. From the definition of super unitarity, we get an inequality for weights of super unitary representations (Proposition \ref{prop:ncond1}, cf.\ \cite[Prop.~2.2]{Furutsu2} and \cite[Lemma~2.1]{Furutsu-Nishiyama}). Note that this inequality also implies that an irreducible admissible super unitary representation must be a lowest (or highest) weight representation. It is very important that not only the lowest (or highest) weight but all the weights of the representation must satisfy this condition. Take a non-zero lowest weight vector $ v_{\lambda} $ of an irreducible lowest weight super unitary representation, where $ \lambda $ is the lowest weight. We choose a special series of weight vectors $ v_1, \cdots , v_{m-1} $ in \S \ref{sec:cond.unitary}. If a vector $ v_k $ does not vanish, then its weight must satisfy the above mentioned inequality. We translate this condition of the weight of $ v_k $ into that of $ \lambda $, and determine when $ v_k $ does not vanish. Then we get a necessary condition for super unitarity for the lowest weight $ \lambda $ (Proposition \ref{prop:ncond2}). Finally, using this necessary condition for super unitarity, we prove that the above sufficient condition is also necessary, that is, the representations we get by the above method exhaust all the irreducible integrable super unitary representations up to isomorphism. Our method of classification is quite different from that in \cite{Jakobsen2}. We think our method is simpler in the case of orthosymplectic algebras. Moreover it gives us realizations of the representations at the same time. On the other hand, we only treat integrable representations, but H.P.~Jakobsen classified all the irreducible super unitary highest weight modules which have continuous classification parameters. Let us explain each section briefly. 
We introduce some notations for the orthosymplectic algebras $ \lie{osp}(M/N) $ in \S 1. We also define ``admissibility'' and ``integrability'' for representations of an orthosymplectic Lie superalgebra in \S 1. In \S 2, we review the oscillator representation of $ \lie{osp}(M/N; \mathbb{R} ) $ which is a super unitary lowest weight representation. We imbed $ \lie{osp}(M/N; \mathbb{R} ) $ into $ \lie{osp}(ML/NL; \mathbb{R} ) $ and consider the oscillator representation of $ \lie{osp}(ML/NL; \mathbb{R} ) $ as a super unitary representation of $ \lie{osp}(M/N; \mathbb{R} ) $ through that imbedding. In \S 3, we construct a number of $ \Delta^{-} $-primitive vectors in the above representation space (a Fock space), where $ \Delta^{-} $ is the standard negative root system of $ \lie{osp}(M/N; \mathbb{R} ) $ (Proposition \ref{prop:primvec}). Each of these vectors generates an irreducible super unitary representation of $ \lie{osp}(M/N; \mathbb{R} ) $. We calculate the weights of these vectors; the existence of these weights then gives a sufficient condition for super unitarity (Proposition \ref{prop:weight}). In \S 4, we first introduce a necessary condition for super unitarity which is obtained from the inequality defining super unitarity (Proposition \ref{prop:ncond1}). But this necessary condition is not sufficient. So we must give a stricter necessary condition (Proposition \ref{prop:ncond2}). After giving that, we finally show that this condition is also sufficient. Thus we get a classification of irreducible integrable super unitary representations of $ \lie{osp}(M/N; \mathbb{R} ) \; (N \geq 2) $. This is our main theorem (Theorem \ref{thm:cond}). We get realizations of super unitary representations on Fock spaces simultaneously. The authors wish to express their thanks to Prof.~T.~Hirai for drawing their attention to this field and suggesting that they collaborate.
\section{Orthosymplectic algebras.} \label{sec:osp} Let $ \sspace{V} $ be a super space over $ \mathbb{C} $ with dim $ V_{\bar{0}} = M $ and dim $ V_{\bar{1}} = N $. Throughout this paper, we put $ M=2m \; (m \in \mathbb{Z}_{\geq 0}) $ and $ N=2n $ or $ 2n+1 \; (n \in \mathbb{Z}_{\geq 0}) $ according as $ N $ is even or odd. We call a bilinear form $ b $ on $ V $ super skew symmetric if $ b $ satisfies \[ b(v,w) = -(-)^{\deg v \deg w} b(w,v), \] where $ v, w $ are homogeneous elements in $ V $ and deg $ v $ denotes the degree of $ v $. We denote an orthosymplectic algebra by \[ \lie{osp}(b) = \{ X \in \lie{gl}(V) | b(Xv,w)+(-)^{\deg X \deg v}b(v,Xw)=0 \mbox{ for } v,w \in V \} , \] where deg $ X $ denotes the degree of $ X \in \lie{gl}(V) $. Note that an element $ X $ in the above formula is supposed to be homogeneous. However, we say that $ X $ satisfies some property (or formula) when homogeneous components of $ X $ satisfy that property (or formula), if there is no confusion. So $ \lie{osp}(b) $ consists of linear combinations of homogeneous $ X $ which satisfy the above equation. In this paper, we fix a bilinear form $ b $ which is expressed by a matrix of the form \[ B= \left[ \begin{array}{cc|c} & 1_m & \\ -1_m & & \\ \hline & & 1_{N} \end{array} \right], \] that is, $ b(v,w) = {^t}vBw $ for column vectors $ v,w \in V $. We denote an orthosymplectic algebra $ \lie{osp}(b) $ for the above $ b $ by $ \lie{osp}(M/N; \mathbb{C} ) $ or simply $ \lie{osp}(M/N) $. The elements in $ \lie{osp}(M/N; \mathbb{C} ) $ are matrices of the form \[ \left[ \begin{array}{cc|c} A & B & P \\ C & - \; ^t \! A & Q \\ \hline - \; ^t \! Q & ^t \! P & D \end{array} \right] , \] where $ A \in \lie{gl}(m; \mathbb{C} ) $, $ B $ and $ C $ are symmetric, $ P $ and $ Q $ are $ m \times N $-matrices, and $ D $ belongs to $ \lie{so}(N) $.
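The defining condition $ b(Xv,w)+(-)^{\deg X \deg v}b(v,Xw)=0 $ can be checked directly against the block form above on homogeneous basis vectors. A minimal numerical sketch (our own variable names; $m=2$, $N=3$ and random entries chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, N = 2, 3
dim = 2 * m + N

# Matrix B of the fixed form b: symplectic on the even part, identity on the odd part.
B = np.zeros((dim, dim))
B[:m, m:2*m] = np.eye(m)
B[m:2*m, :m] = -np.eye(m)
B[2*m:, 2*m:] = np.eye(N)

A = rng.standard_normal((m, m))
Bsym = rng.standard_normal((m, m)); Bsym = Bsym + Bsym.T   # symmetric block B
Csym = rng.standard_normal((m, m)); Csym = Csym + Csym.T   # symmetric block C
P = rng.standard_normal((m, N))
Q = rng.standard_normal((m, N))
D = rng.standard_normal((N, N)); D = D - D.T               # D in so(N)

Z_mN, Z_Nm, Z_mm, Z_NN = (np.zeros(s) for s in [(m, N), (N, m), (m, m), (N, N)])
X_even = np.block([[A, Bsym, Z_mN], [Csym, -A.T, Z_mN], [Z_Nm, Z_Nm, D]])
X_odd = np.block([[Z_mm, Z_mm, P], [Z_mm, Z_mm, Q], [-Q.T, P.T, Z_NN]])

deg = lambda i: 0 if i < 2 * m else 1   # degree of the i-th basis vector

def max_violation(X, degX):
    """Largest value of |b(X e_i, e_j) + (-1)^(degX * deg e_i) b(e_i, X e_j)|."""
    E = np.eye(dim)
    return max(abs(X[:, i] @ B @ E[j] + (-1) ** (degX * deg(i)) * E[i] @ B @ X[:, j])
               for i in range(dim) for j in range(dim))

err_even = max_violation(X_even, 0)
err_odd = max_violation(X_odd, 1)
```

By bilinearity, vanishing on all basis pairs is equivalent to the condition for all $v, w$, so both violations should be zero up to rounding.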
It is easy to see that real forms of $ \lie{osp}(M/N; \mathbb{C} ) $ are $ \lie{osp}(b_p; \mathbb{R} ) \; ([\frac{N}{2}] \leq p \leq N) $ up to isomorphism, where $ b_p $ is expressed by a matrix \[ B_p = \left[ \begin{array}{cc|cc} & 1_m & & \\ -1_m & & & \\ \hline & & 1_{p} & \\ & & & -1_{N-p} \end{array} \right] . \] If $ p \neq N $, these real forms have no admissible unitary representation except for trivial ones (\cite[Prop.~2.3]{Nishiyama2}). We denote $ \lie{g} = \lie{osp}(b_N; \mathbb{R} ) $ by $ \lie{osp}(M/N; \mathbb{R} ) $. We will write down the root system of this algebra in order to fix notation. This algebra has a compact Cartan subalgebra $ \lie{h} $: \[ \lie{h}= \left\{ h= \left[ \begin{array}{c|c} \begin{array}{cc} 0 & A \\ -A & 0 \end{array} & 0 \\ \hline 0 & B \end{array} \right] \left| \begin{array}{l} A= {\rm diag}(a_1, a_2, \cdots ,a_m), \\ B= {\rm diag}(b_1 u, b_2 u, \cdots , b_n u, 1), \\ a_i,b_j \in \mathbb{R} \end{array} \right. \right\}, \] where \[ u = \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right) , \] and the entry $ 1 $ in the last place of the sequence in $ B $ appears if and only if $ N $ is odd. We define $ e_i \in (\lie{h}^{\mathbb{C}})^{\ast} \; (1 \leq i \leq m) $ and $ f_j \in (\lie{h}^{\mathbb{C}})^{\ast} \; (1 \leq j \leq n) $ by putting \[ e_i(h)= \iunit a_i, \hspace{2em} f_j(h)= \iunit b_j, \] \noindent for $ h \in \lie{h} $ of the above form.
Then, for the case $ N =2n $, roots are given as \[ \makebox[13.5cm][l]{$ \Delta _c ^+ =\{ e_i-e_j | 1 \leq i < j \leq m \} \cup \{ f_i \pm f_j | 1 \leq i < j \leq n \} $} \] \hspace*{.52\textwidth} :the set of positive compact roots, \begin{eqnarray*} & \makebox[6.8cm][l]{$ \Delta _n ^+ =\{ e_i+e_j | 1 \leq i \leq j \leq m \} $ } & \mbox{:the set of positive non-compact roots,} \\ & \makebox[6.8cm][l]{$ \Delta _{\bar{0}}^+ = \Delta _c ^+ \cup \Delta _n ^+ $} & \mbox{:the set of positive even roots,} \\ & \makebox[6.8cm][l]{$ \Delta _{\bar{1}}^+ =\{ e_i \pm f_j | 1 \leq i \leq m, 1 \leq j \leq n \} $ } & \mbox{:the set of positive odd roots,} \\ & \makebox[6.8cm][l]{$ \Delta^+ = \Delta_{\bar{0}}^+ \cup \Delta_{\bar{1}}^+ $} & \mbox{:the set of positive roots.} \end{eqnarray*} For the case $ N = 2n+1 $, the set of positive roots is similarly given, but the set of positive compact roots contains $ \{ f_j | 1 \leq j \leq n \} $ and the set of positive odd roots contains $ \{ e_i | 1 \leq i \leq m \} $: \[ \Delta _c ^+ =\{ e_i-e_j | 1 \leq i < j \leq m \} \cup \{ f_i \pm f_j | 1 \leq i < j \leq n \} \cup \{ f_j | 1 \leq j \leq n \}, \] \[ \Delta _{\bar{1}}^+ =\{ e_i \pm f_j | 1 \leq i \leq m, 1 \leq j \leq n \} \cup \{ e_i | 1 \leq i \leq m \}. \] We put \[ \lie{g}^{\pm }_{\bar{0} } = \bigoplus_{\pm \alpha \in \Delta^{+}_{\bar{0} } } \lie{g} _{\alpha }, \hspace*{1.5em} \lie{g}^{\pm }_{\bar{1} } = \bigoplus_{\pm \alpha \in \Delta^{+}_{\bar{1} } } \lie{g} _{\alpha }, \hspace*{1.5em} \lie{g}^{\pm} = \lie{g}^{\pm}_{\bar{0}} \oplus \lie{g}^{\pm}_{\bar{1}}, \] where $ \lie{g}_{\alpha } $ is a root space in $ \lie{g}^{\mathbb{C} } $ of a root $ \alpha $.
If $ \lambda \in (\lie{h}^{\mathbb{C}})^{\ast} $ is of the form \[ \lambda = \sum _{1 \leq i \leq m} \lambda_i e_i + \sum _{1 \leq j \leq n} \mu_j f_j, \] \noindent then we write $ \lambda =(\lambda _1, \lambda _2 , \cdots , \lambda _m / \; \mu _1 , \mu _2 , \cdots , \mu _n ) $ and call it a {\it coordinate expression} of $ \lambda $. \begin{edefinition} \label{def:adm} \rm A representation $ ( \pi , U ) $ of $ \lie{g} = \lie{osp}(M/N;\mathbb{R} ) $ is called {\it infinitesimally admissible} if the representation $ ( \pi |_{\lie{k}} , U ) $ of $ \lie{k} $ is a direct sum of irreducible finite dimensional representations of $ \lie{k} $ with finite multiplicities, where $ \lie{k} $ is a maximal compact subalgebra of $ \lie{g}_{\bar{0} } $. \end{edefinition} Note that, to define the notion of admissibility, we usually use a compact Lie group $ K $, which has the Lie algebra $ \lie{k} $. But in the above definition, we use $ \lie{k} $ instead of $ K $. So we add the term ``infinitesimally''. In the following, we assume that $ \lie{k} $ contains the Cartan subalgebra $ \lie{h} $. It is known that an irreducible unitary representation of a semisimple Lie group is admissible (see, for example, \cite[Th.~0.3.6]{Vogan1}). Therefore, it is natural to consider infinitesimally admissible representations even for Lie superalgebras. Moreover, we only consider ``integrable'' representations in this paper.
Conversely, let $ ( \pi , U ) $ be an infinitesimally admissible representation of $ \lie{osp}(M/N; \mathbb{R} ) $. Then, it is necessary for $ \pi $ to be integrable that any representation of $ \lie{k} $ in $ \pi $ can be lifted to a representation of $ K $. Therefore any weight $ \lambda $ of $ \pi $ must satisfy the condition \[ \lambda_i , \mu_j \in \mathbb{Z} \hspace*{1.5em} \mbox{ for } 1 \leq i \leq m, 1 \leq j \leq n, \] that is, all $ \lambda _i $'s and $ \mu _j $'s are integers. We also note that the following proposition holds. \begin{eproposition}[e.g.\ {\cite[Prop.~2.3]{Nishiyama2}}] \label{prop:admuni} For $ \lie{osp}(M/N; \mathbb{R} ) $, an irreducible admissible unitary representation is a highest or lowest weight representation. \end{eproposition} So we only consider lowest (or highest) weight representations in the rest of this paper. \section{Oscillator representation.} \label{sec:osci} In \cite{Nishiyama1}, we defined a special super unitary representation for $ \lie{osp}(M/N; \mathbb{R} ) $ called the oscillator representation. Let us briefly review its construction for the case $ N =2n+1 $. Note that the case $ N =2n $ is essentially contained in this case (ignore the element $ c $ in the following). Let $ C^{\mathbb{C}}(r_{\ell} ,c| 1 \leq \ell \leq n) $ be a Clifford algebra over $ \mathbb{C} $ generated by $ \{ r_{\ell} | 1 \leq \ell \leq n \} $ $ \bigcup \{ c \} $ with relations \[ {r_i}^2 = 1, \hspace{2em} r_i r_j + r_j r_i =0 \; (1 \leq i \neq j \leq n) , \] \[ c^2 = 1, \hspace{2em} c r_i + r_i c =0 \; (1 \leq i \leq n) . \] If we set the degree of each generator in the above to $ \bar{1} \in \mathbb{Z} _2 $, then this algebra becomes a superalgebra.
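For $ n = 2 $ the relations above admit a tiny matrix model: the three Pauli matrices square to the identity and pairwise anticommute, so they can serve as images of $ r_1 $, $ r_2 $ and $ c $. This is only a relation-preserving (and non-faithful) illustration, not the Fock-space realization constructed below.

```python
import numpy as np

# Pauli matrices: each squares to the identity and distinct ones anticommute.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
r1, r2, c = sx, sz, sy   # candidate images of the generators for n = 2
I2 = np.eye(2)

squares_ok = all(np.allclose(g @ g, I2) for g in (r1, r2, c))
anticommute_ok = all(np.allclose(a @ b + b @ a, np.zeros((2, 2)))
                     for a, b in [(r1, r2), (c, r1), (c, r2)])
```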
That is, let us denote by $ C^{\mathbb{C}}_{\bar{0}}(r_{\ell},c| 1 \leq \ell \leq n) $ (resp.\ $ C^{\mathbb{C}}_{\bar{1}}(r_{\ell},c| 1 \leq \ell \leq n) $) a subalgebra (resp.\ subspace) of $ C^{\mathbb{C}}(r_{\ell},c| 1 \leq \ell \leq n) $ generated by even (resp.\ odd) products of $ r_j\mbox{'s} $ and $ c $. Then we have \[ C^{\mathbb{C}}(r_{\ell},c| 1 \leq \ell \leq n) = C^{\mathbb{C}}_{\bar{0}}(r_{\ell},c| 1 \leq \ell \leq n) \oplus C^{\mathbb{C}}_{\bar{1}}(r_{\ell},c| 1 \leq \ell \leq n), \] and this is the decomposition as a $ \mathbb{Z}_2 $-graded algebra. Now we define a representation space $ \sspace{F} $ of the oscillator representation $ \rho $. Put \begin{eqnarray*} \makebox[.6cm][l]{$ F $} & = & F_{\bar{0}} \oplus F_{\bar{1}} , \\ \makebox[.6cm][l]{$ F_{\bar{0}} $} & = & \mathbb{C} [z_k| 1 \leq k \leq m] \otimes C_{\bar{0}}^{\mathbb{C}}(r_{\ell},c | 1 \leq \ell \leq n), \\ \makebox[.6cm][l]{$ F_{\bar{1}} $} & = & \mathbb{C} [z_k| 1 \leq k \leq m] \otimes C_{\bar{1}}^{\mathbb{C}}(r_{\ell},c | 1 \leq \ell \leq n), \end{eqnarray*} where $ \mathbb{C} [z_k| 1 \leq k \leq m] $ means a polynomial algebra generated by $ \{ z_k | 1 \leq k \leq m \} $. To define the operation $ \rho $ of $ \lie{osp}(M/N; \mathbb{R} ) $ on $ F $, we introduce a representation of a Clifford-Weyl algebra $ C^{\mathbb{R}}(V; b) $ (cf.\ \cite{Tilgner}). Let $ \sspace{V} $ be a super space of dimension $ (M/N) $ on which $ \lie{osp}(M/N; \mathbb{R} ) $ naturally acts (see \S 1). And let $ b $ be a super skew symmetric form on $ V $ such that $ \lie{osp}(M/N; \mathbb{R} ) = \lie{osp}(b) $ holds. Let us choose a basis $ \{ p_k , q_k | 1 \leq k \leq m \} $ for $ V_{\bar{0}} $ such that \[ b(p_i, q_j)=-b(q_j, p_i)=\delta _{i,j} , \; b(p_i, p_j)=b(q_i, q_j)=0, \] and an orthogonal basis $ \{ r_{\ell}, s_{\ell}, c | 1 \leq {\ell} \leq n \} $ for $ V_{\bar{1}} $ with respect to $ b $ with length $ \sqrt{2} $. 
Then there exists a superalgebra $ C^{\mathbb{R}}(V; b) $ over $ \mathbb{R} $ which is generated by $ \{ p_k , q_k | 1 \leq k \leq m \} \cup \{ r_{\ell} , s_{\ell}, c | 1 \leq \ell \leq n \} $ with relations \[ p_i q_j - q_j p_i = \delta_{i,j} , \; r_i s_j + s_j r_i=0 , \; r_i r_j + r_j r_i=2\delta_{i,j} , \; s_i s_j + s_j s_i = 2\delta_{i,j} , \] \[ c^2 = 1 , c r_i + r_i c = 0 , c s_i + s_i c =0 , \] and all the other pairs of $ p, q, r, s, c $ commute with each other. $ C^{\mathbb{R}}(V; b) $ can be considered as a Lie superalgebra in the standard way (cf.\ \cite[{\S}1.1]{Kac1}), and $ \lie{osp}(M/N; \mathbb{R} ) $ can be realized as a sub Lie superalgebra in $ C^{\mathbb{R}}(V; b) $. In fact, let $ \lie{L} $ be a subspace generated by the second degree elements of the following form: \[ \{ xy + (-)^{\deg x \deg y}yx | \; x, y \; \mbox{ are one of the generators } p_k , q_k , r_{\ell} , s_{\ell} \mbox{ or } c \} . \] Then $ \lie{L} $ becomes a sub Lie superalgebra. We denote $ m(x,y) = xy + (-)^{\deg x \deg y}yx $. An operator $ {\rm ad}\; m(x,y) $ preserves $ V \subset C^{\mathbb{R}}(V; b) $ and the bilinear form $ b $, and this adjoint representation gives an isomorphism between $ \lie{L} $ and $ \lie{osp}(M/N; \mathbb{R} ) $. From now on, we will identify $ \lie{L} $ and $ \lie{osp}(M/N; \mathbb{R} ) $ with each other. For example, $ {\rm ad}\; m(p_k,r_{\ell}) $ is expressed as a matrix of the form \[ {\rm ad}\; m(p_k,r_{\ell}) \longleftrightarrow 2\sqrt{2} \left[ \begin{array}{cc|c} 0 & & E_{k,2\ell -1} \\ & 0 & 0 \\ \hline 0 & E_{2\ell -1,k} & 0 \end{array} \right], \] where $ E_{i,j} $ is an $ m \times N $ or $ N \times m $-matrix with all the elements $ 0 $ except the $ (i,j) $-element, which is $ 1 $. 
The oscillator representation $ (\rho , F) $ is actually a representation of the superalgebra $ C^{\mathbb{R}}(V; b) $ given by \noindent \renewcommand{\arraystretch}{2.5} \small \normalsize \makebox[1.7cm][l]{} $ \begin{array}{lll} \rho (p_{k}) = {\displaystyle \frac{\iunit \qroot}{\sqrt{2}} \left( z_{k} - \frac{\partial}{\partial z_k} \right) \otimes 1 } & & (1 \leq k \leq m ) , \\ \rho (q_{k}) = {\displaystyle \frac{\qroot}{\sqrt{2}} \left( z_k + \frac{\partial}{\partial z_k} \right) \otimes 1 } & & (1 \leq k \leq m ) , \end{array} $ \renewcommand{\arraystretch}{1.2} \small \normalsize \noindent \makebox[2cm][l]{} $ \rho (r_{\ell})=1 \otimes r_{\ell} , \; \rho (s_{\ell})=1 \otimes \iunit r_{\ell} \alpha _{\ell} \hspace*{0.8em} (1 \leq \ell \leq n ), \hspace*{1.5em} \rho (c)=1 \otimes c, $ \noindent where $ \alpha _{\ell} $ is an automorphism of the Clifford algebra $ C^{\mathbb{C}}(r_{\ell},c| 1 \leq \ell \leq n) $ which sends $ r_k $ to $ (-)^{\delta _{k, \ell}} r_k \; (1 \leq k \leq n) $ and $ c $ to $ c $. If we restrict $ \rho $ to the sub Lie superalgebra $ \lie{osp}(M/N; \mathbb{R} ) \subset C^{\mathbb{R}}(V; b) $, then $ \rho $ gives a {\it super unitary} representation for $ \lie{osp}(M/N; \mathbb{R} ) $. For more information on $ \rho $, see \cite{Nishiyama1} and \cite{Nishiyama2}. Let us consider the following imbedding: \[ \iota : \lie{osp}(M/N; \mathbb{R} ) \hookrightarrow \lie{osp}(ML/NL; \mathbb{R} ) \hspace*{1.5em} (L \geq 1). \] Here $ \iota $ is given as follows. Let $ \sspace{V} $ be the super space with the super skew symmetric bilinear form $ b=b_V $ as above and $ W = W_{\bar{0}} $ be a usual $ L $-dimensional vector space with a positive definite inner product $ b_W $. Then a super space $ V \otimes W = V_{\bar{0}} \otimes W_{\bar{0}} + V_{\bar{1}} \otimes W_{\bar{0}} $ is endowed with a super skew symmetric bilinear form $ b_{V \otimes W} = b_V \otimes b_W $.
If we consider \[ \lie{osp}(b_V) = \lie{osp}(M/N; \mathbb{R} ) \] and \[ \lie{osp}(b_{V \otimes W}) = \lie{osp}(ML/NL; \mathbb{R} ) , \] $ \iota $ is given by $ \iota (A) = A \otimes 1_W $ for $ A \in \lie{osp}(M/N; \mathbb{R} ) $. In the matrix form, this only means \[ \iota (A) = \left[ \begin{array}{ccc} a_{11} 1_L & a_{12} 1_L & \cdots \\ a_{21} 1_L & a_{22} 1_L & \cdots \\ \vdots & \vdots & \ddots \end{array} \right] \; \mbox{ for } A = (a_{i,j}) . \] Let $ (\rho^L , F^L) $ be the oscillator representation of $ \lie{osp}(ML/NL; \mathbb{R} ) $ so that \[ F^L = \mathbb{C} [z_{i,j} | 1 \leq i \leq m, 1 \leq j \leq L ] \otimes C^{\mathbb{C}}(r_{k,\ell}, c_{\ell} | 1 \leq k \leq n, 1 \leq \ell \leq L ), \] where we use similar notations $ p_{i,k},q_{i,k},r_{\ell,k},s_{\ell,k} $ and $ c_k \; (1 \leq i \leq m , 1 \leq \ell \leq n , 1 \leq k \leq L) $ as those for $ \lie{osp}(M/N; \mathbb{R} ) $. We denote $ \tilde{m}(x,y)=xy+(-)^{\deg x \deg y}yx $. The successive application of $ \iota $ then $ \rho^L $ gives a {\it super unitary} representation $ \tilde{\rho} = \rho^L \circ \iota $ of $ \lie{osp}(M/N; \mathbb{R} ) $. In the following subsections, we try to decompose this super unitary representation $ \tilde{\rho} $ of $ \lie{osp}(M/N; \mathbb{R} ) $. From the definition of super unitarity, there exists a constant $ \varepsilon = \pm 1 $ for each super unitary representation, which is called the associated constant (see \cite{Furutsu-Hirai}). Note that, for $ \lie{osp}(M/N; \mathbb{R} ) $, if $ \varepsilon = -1 $ then a super unitary representation must be a lowest weight representation, and if $ \varepsilon = 1 $ then it must be highest (see \cite{Nishiyama2}). In this case, since the associated constant of $ \rho $ is $ \varepsilon = -1 $, an irreducible super unitary representation of $ \lie{osp}(M/N; \mathbb{R} ) $ which appears in $ (\tilde{\rho}, F^L) $ is a lowest weight module. 
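Two of the constructions above can be spot-checked symbolically. The macro $ \qroot $ is not pinned down in this excerpt; in the sketch below we take $ \qroot = \sqrt{\iunit} $, which is the value that makes the Weyl relation $ p_k q_k - q_k p_k = 1 $ come out for the operators $ \rho(p_k), \rho(q_k) $. The helper `iota` is ours, written only to mirror the displayed block-matrix form of $ \iota $.

```python
import sympy as sp

z = sp.symbols('z')
qroot = sp.sqrt(sp.I)  # assumed value of \qroot, chosen so that p q - q p = 1 holds

def rho_p(f):  # rho(p_k) acting on a polynomial in z
    return sp.expand(sp.I*qroot/sp.sqrt(2)*(z*f - sp.diff(f, z)))

def rho_q(f):  # rho(q_k) acting on a polynomial in z
    return sp.expand(qroot/sp.sqrt(2)*(z*f + sp.diff(f, z)))

# Weyl relation p q - q p = 1 (as operators), checked on sample polynomials.
for f in [sp.Integer(1), z, z**4 - 3*z]:
    assert sp.simplify(rho_p(rho_q(f)) - rho_q(rho_p(f)) - f) == 0

# The imbedding iota(A) = A (x) 1_W: each entry a_{ij} of A becomes the
# block a_{ij} 1_L, as in the displayed matrix form (hypothetical helper).
def iota(A, L):
    n = A.shape[0]
    M = sp.zeros(n*L, n*L)
    for i in range(n):
        for j in range(n):
            M[i*L:(i+1)*L, j*L:(j+1)*L] = A[i, j]*sp.eye(L)
    return M

a11, a12, a21, a22 = sp.symbols('a11 a12 a21 a22')
B = iota(sp.Matrix([[a11, a12], [a21, a22]]), 2)
assert B[0:2, 2:4] == a12*sp.eye(2)
```

The Clifford relations $ r_i r_j + r_j r_i = 2\delta_{i,j} $ are built into the representation space itself and are not re-checked here.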
Therefore what we have to do is to find all the $ \Delta^{-} $-primitive vectors for $ \tilde{\rho} $. Now let us calculate operators for root vectors. Let $ X_{\alpha} \; (\alpha \in \Delta ) $ be a non-zero root vector for a root $ \alpha $ of $ \lie{osp}(M/N; \mathbb{R} ) $. Then up to a non-zero constant multiple, the operators $ \tilde{\rho}(X_{\alpha}) $ are given as follows. Root vectors for $ \alpha \in \Delta ^- $ in $ \lie{osp}(M/N) $ : \renewcommand{\arraystretch}{3} \small \normalsize \[ \begin{array}{llc} {\rm (I)} & \alpha = -(e_s \pm f_{t}) \; (1 \leq s \leq m , 1 \leq t \leq n ); & {\displaystyle \sum _{k=1} ^L \frac{\partial}{\partial z_{s,k}} r_{t ,k}(1 \pm \alpha _{t ,k})} , \\ {\rm (II)} & \alpha = -(e_s - e_{t}) \; (1\leq s < t \leq m); & {\displaystyle \sum _{k=1} ^L z_{t, k} \frac{\partial}{\partial z_{s,k}}} , \\ {\rm (III)} & \alpha = -(e_s + e_t) \; (1 \leq s , t \leq m); & {\displaystyle \sum _{k=1} ^L \frac{\partial}{\partial z_{s,k}} \frac{\partial}{\partial z_{t,k}}} , \\ {\rm (IV)} & \alpha = -(f_s \pm f_t) \;(1 \leq s < t \leq n ); & {\displaystyle \sum _{k=1} ^L r_{s ,k}r_{t ,k} (1 + \alpha _{s ,k})(1 \pm \alpha _{t ,k})} , \\ {\rm (V)} & \alpha = - e_s \; (1 \leq s \leq m); & {\displaystyle \sum _{k=1} ^L \frac{\partial}{\partial z_{s,k}} c_k } , \\ {\rm (VI)} & \alpha = -f_s \;(1 \leq s \leq n ); & {\displaystyle \sum _{k=1} ^L r_{s ,k} c_k (1+\alpha _{s ,k})} . \end{array} \] \renewcommand{\arraystretch}{1.2} \small \normalsize All these formulas follow from easy calculations. Here we show only the case (I). A root vector $ X_{\alpha} \in \lie{osp}(M/N; \mathbb{C} ) $ of root $ \alpha = -(e_s \pm f_t) $ is expressed by a matrix of the form \[ \qroot \left[ \begin{array}{cc|c} 0 & & E_{s,2t-1} \mp \iunit E_{s,2t} \\ & 0 & -\iunit E_{s,2t-1} \mp E_{s,2t} \\ \hline \iunit E_{2t-1,s} \pm E_{2t,s} & E_{2t-1,s} \mp \iunit E_{2t,s} & 0 \end{array} \right]. \] Let us identify $ \lie{L}^{\mathbb{C}} $ with $ \lie{osp}(M/N; \mathbb{C} ) $.
Then $ X_{\alpha} $ is identified with an element \[ \frac{\qroot}{2\sqrt{2}} \{ m(p_s,r_t) - \iunit m(q_s,r_t) \mp \iunit m(p_s,s_t) \mp m(q_s,s_t) \}. \] Therefore we get \begin{eqnarray*} \lefteqn{ \rho (X_{\alpha}) = \frac{\qroot}{\sqrt{2}} \{ \rho (p_s) \rho (r_t) - \iunit \rho (q_s) \rho (r_t) \mp \iunit \rho (p_s) \rho (s_t) \mp \rho (q_s) \rho (s_t) \} } \\ & = & \frac{\qroot}{\sqrt{2}} \rho (p_s - \iunit q_s) \rho (r_t \mp \iunit s_t) \\ & = & \frac{\qroot}{\sqrt{2}} \left\{ \frac{\iunit \qroot}{\sqrt{2}} \left( z_s - \frac{\partial}{\partial z_s} \right) -\iunit \frac{\qroot}{\sqrt{2}} \left( z_s + \frac{\partial}{\partial z_s} \right) \right\} \otimes \\ & & \hspace{5cm} \otimes \left\{ \left( r_t \right) \mp \iunit \left( \iunit r_t \alpha _t \right) \right\} \\ & = & \frac{\partial}{\partial z_s} r_t (1 \pm \alpha _t). \end{eqnarray*} Next we consider an operator $ \tilde{\rho}(X_{\alpha}) $ for $ \lie{osp}(M/N; \mathbb{R} ) $. Then similarly as above, $ X_{\alpha} $ is identified with an element \[ \frac{\qroot}{2\sqrt{2}} \sum_{k=1}^{L} \{ \tilde{m}(p_{s,k},r_{t,k}) - \iunit \tilde{m}(q_{s,k},r_{t,k}) \mp \iunit \tilde{m}(p_{s,k},s_{t,k}) \mp \tilde{m}(q_{s,k},s_{t,k}) \} . \] Thus we get \[ \tilde{\rho} (X_{\alpha}) = \frac{\qroot}{2\sqrt{2}} \sum_{k=1}^{L} \{ 2 \tilde{\rho} (p_{s,k}) \tilde{\rho} (r_{t,k}) -2 \iunit \tilde{\rho} (q_{s,k}) \tilde{\rho} (r_{t,k}) \hspace{3cm} \] \[ \hspace{3cm} \mp 2 \iunit \tilde{\rho} (p_{s,k}) \tilde{\rho} (s_{t,k}) \mp 2 \tilde{\rho} (q_{s,k}) \tilde{\rho} (s_{t,k}) \} . \] Similar calculations lead us to \[ \tilde{\rho} (X_{\alpha}) = \sum _{k=1} ^L \frac{\partial}{\partial z_{s,k}} r_{t,k} (1 \pm \alpha _{t,k}). \] This completes the discussion of the operators for root vectors. Finally we show operators for the Cartan subalgebra $ \lie{h} $ of $ \lie{osp}(M/N; \mathbb{R} ) $. 
Let $ A_i $ be an element of $ \lie{h} $ which is expressed by a matrix of the form \[ \left[ \begin{array}{cc|c} 0 & E_{i,i} & \\ -E_{i,i} & 0 & \\ \hline & & 0 \end{array} \right] \hspace*{1.5em} (1 \leq i \leq m). \] Let us identify $ \lie{L} $ with $ \lie{osp}(M/N; \mathbb{R} ) $. Then $ A_i $ is identified with an element \[ \frac{1}{4} \{ m(p_i,p_i) + m(q_i,q_i) \} . \] Therefore we get \begin{eqnarray*} \lefteqn{\rho (A_i) = \frac{1}{4} \{ 2\rho (p_i)\rho (p_i) + 2\rho (q_i)\rho (q_i) \} } \\ & = & - \frac{\iunit }{4} {\left( z_i - \frac{\partial}{\partial z_i} \right)}^2 +\frac{\iunit }{4} {\left( z_i + \frac{\partial}{\partial z_i} \right)}^2 = \iunit \left( z_i \frac{\partial}{\partial z_i}+\half \right) . \end{eqnarray*} Then, by similar arguments as above, we get a formula for the weight operator for the $ \lie{sp} $-part: \[ \tilde{\rho} (A_i) = \iunit \sum _{k=1}^L \left( z_{i,k} \frac{\partial}{\partial z_{i,k}} +\half \right) \hspace*{1.5em} (1 \leq i \leq m). \] Let $ B_j $ be an element of $ \lie{h} $ which is expressed by a matrix of the form: \[ \left[ \begin{array}{cc|c} 0 & & \\ & 0 & \\ \hline & & E_{2j-1,2j} - E_{2j,2j-1} \end{array} \right] \hspace*{1.5em} (1 \leq j \leq n). \] Then $ B_j $ is identified with an element $ m(r_j,s_j)/4 \in \lie{L} $. Therefore we get \[ \rho (B_j) = \frac{1}{2} \rho (r_j) \rho (s_j) = \half r_j \iunit r_j \alpha _j = \frac{\iunit}{2} \alpha _j . \] Thus we get a weight operator $ \tilde{\rho}(B_j) $ for the $ \lie{so} $-part: \[ \tilde{\rho} (B_j) = \frac{\iunit}{2} \sum _{k=1} ^L \alpha_{j,k} \hspace*{1.5em} (1 \leq j \leq n). \] \section{Construction of primitive vectors.} \label{sec:primvec} In this section, we will construct primitive vectors for $ \lie{osp} (M/N; \mathbb{R} ) $ in $ F^L $, where $ F^L $ is the representation space of the oscillator representation $ \rho^L $ of $ \lie{osp} (ML/NL; \mathbb{R} ) $. The case of $ N = 2n $ is already treated in \cite{Nishiyama2}.
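The computation of $ \rho(A_i) $ above reduces to the operator identity $ -\frac{1}{4}(z - \partial_z)^2 + \frac{1}{4}(z + \partial_z)^2 = z\,\partial_z + \frac{1}{2} $ (the overall factor $ \iunit $ omitted), which can be spot-checked with a short sympy sketch on sample polynomials.

```python
import sympy as sp

z = sp.symbols('z')

def a_minus(f):  # (z - d/dz) f
    return sp.expand(z*f - sp.diff(f, z))

def a_plus(f):   # (z + d/dz) f
    return sp.expand(z*f + sp.diff(f, z))

def rho_A(f):    # -(1/4)(z - d/dz)^2 f + (1/4)(z + d/dz)^2 f
    return sp.expand(-sp.Rational(1, 4)*a_minus(a_minus(f))
                     + sp.Rational(1, 4)*a_plus(a_plus(f)))

# Compare with the Euler operator z d/dz + 1/2 on sample polynomials.
for f in [sp.Integer(1), z, z**3 + 2*z, z**5 - z**2 + 7]:
    assert rho_A(f) == sp.expand(z*sp.diff(f, z) + sp.Rational(1, 2)*f)
```

The eigenvalue $ \deg f + \frac12 $ of this operator on monomials is exactly what produces the $ \frac{L}{2} $ shifts in the weight formulas of the next section.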
The case of $ N = 2n+1 $ is similar, but we will write down primitive vectors for the sake of completeness. By definition, $ F^L $ is given by: \[ F^L = \mathbb{C} [ z_{s,k} | 1 \leq s \leq m, 1 \leq k \leq L ] \otimes C^{ \mathbb{C} } ( r_{t,k}, c_k | 1 \leq t \leq n, 1 \leq k \leq L ). \] Put $ L=2l $ or $ 2l+1 $ according as $ L $ is even or odd. For $ 1 \leq a \leq \min (m,l) $, put {\renewcommand{\arraystretch}{.5}\small\normalsize \[ \Lambda _a = \det (z_{s,2k-1}+ \iunit z_{s,2k} ) _{\scriptscriptstyle \begin{array}{l} \scriptscriptstyle m-a <s \leq m \\ \scriptscriptstyle 1 \leq k \leq a \end{array} }, \] } and, for $ 1 \leq b \leq n $ and $ 1 \leq j \leq l $, put \[ R_b (j) = \prod_{k=1}^{j} (r_{b,2k-1}+ \iunit r_{b,2k} ) \prod_{k'=2j+1}^{L} r_{b,k'}, \] \[ C = \prod_{k=1}^{l} (c_{2k-1}+ \iunit c_{2k} ). \] For non-negative integers $ i_1, \cdots , i_m $ and $ j_1, \cdots , j_n $, we define \[ \Lambda = \Lambda (i_1, \cdots , i_m) = \prod_{a=1}^{m} \Lambda _a ^{i_a}, \hspace*{1.5em} R = R(j_1, \cdots , j_n) = \prod_{b=1}^{n} R_b (j_b) \cdot C. \] We define $ R=C $ if $ N = 1 $. Note that the order of the product in the expression of $ R $ only changes $ R $ by a sign $ \pm 1 $. We get the following result. \begin{eproposition} \label{prop:primvec} $ v = \Lambda R $ is a primitive vector for $ \Delta ^{-} $ if $ 0 \leq j_1 \leq j_2 \leq \cdots \leq j_n $ and $ i_a = 0 $ for $ a > j_1 $. \end{eproposition} \proof First we will prove that $ \Lambda R $ is killed by negative root vectors which are expressed as linear differential operators, i.e.\ operators of type (I)--(VI) except type (III) (see \S \ref{sec:osci}). For these linear differential operators, it is easy to see that we can assume $ \Lambda = \Lambda _a $ ($ a \leq j_1 $). We easily check that the above vectors are killed by operator (II), since it replaces the $ s $-th row by the $ t $-th row in the determinant $ \Lambda _a $.
For the other operators, we note that \begin{equation} \label{eq:z.prim} \frac{\partial}{\partial z_{s,2k}} \Lambda _a = \iunit \frac{\partial}{\partial z_{s,2k-1}} \Lambda _a \end{equation} holds for arbitrary $ s $ and $ k $. In fact, both sides represent the same cofactor of $ \Lambda _a $ if $ m-a < s \leq m, 1 \leq k \leq a $. Otherwise both sides vanish at the same time. Also \begin{equation} \label{eq:r.prim} r_{b,2k}(1 \pm \alpha _{b,2k}) R_b (j_b) = \iunit r_{b,2k-1}(1 \pm \alpha _{b,2k-1}) R_b (j_b) \end{equation} for $ k \leq j_b $, and for $ k > 2 j_b $, we have \begin{equation} \label{eq:r2.prim} r_{b,k}(1 + \alpha _{b,k}) R_b (j_b) = 0. \end{equation} Similarly $ c_{2k} C = \iunit c_{2k-1} C $ holds for $ k \leq l $. Using these formulas, we show that operators (I) and (V) kill the vectors in the proposition. For $ k \leq a $, we have \begin{eqnarray*} \lefteqn{ \left( \frac{\partial}{\partial z_{s,2k-1}} r_{t,2k-1}(1 \pm \alpha _{t,2k-1}) + \frac{\partial}{\partial z_{s,2k}} r_{t,2k}(1 \pm \alpha _{t,2k}) \right) \Lambda _a R } \\ & & = \left( \frac{\partial}{\partial z_{s,2k-1}} \Lambda _a \right) r_{t,2k-1}(1 \pm \alpha _{t,2k-1}) R + \iunit \left( \frac{\partial}{\partial z_{s,2k-1}} \Lambda _a \right) r_{t,2k}(1 \pm \alpha _{t,2k}) R \end{eqnarray*} from equation (\ref{eq:z.prim}). Since $ k \leq a \leq j_1 \leq j_t $, we can apply (\ref{eq:r.prim}) to the above formula, and conclude that it vanishes. For $ k > a $, we get $ \partial \Lambda _a / \partial z = 0 $. Therefore operator (I) kills the vectors. For operator (V), we can proceed similarly. Let us consider operators (IV) and (VI).
For $ k \leq j_s $, we have \[ ( r_{s,2k-1}(1 + \alpha _{s,2k-1}) r_{t,2k-1}(1 - \alpha _{t,2k-1}) + r_{s,2k}(1 + \alpha _{s,2k}) r_{t,2k}(1 - \alpha _{t,2k}) ) R_s (j_s) R_t (j_t) \] \[ = (-)^{L-j_s} r_{s,2k-1}(1 + \alpha _{s,2k-1}) R_s (j_s) r_{t,2k-1}(1 - \alpha _{t,2k-1}) R_t (j_t) \hspace{2cm} \] \[ \hspace{2cm} - (-)^{L-j_s} r_{s,2k-1}(1 + \alpha _{s,2k-1}) R_s (j_s) r_{t,2k-1}(1 - \alpha _{t,2k-1}) R_t (j_t) =0, \] from equation (\ref{eq:r.prim}). In the above formula, we have calculated only the essential factor of $ \Lambda _a R $ for operator (IV). For $ k > 2j_s $, we get $ r_{s,k}(1 + \alpha _{s,k}) R_s (j_s) =0 $ from equation (\ref{eq:r2.prim}). Thus operator (IV) kills $ \Lambda _a R $. For operator (VI), we can argue similarly. Now we consider operator (III), which is a second degree differential operator. For $ k \leq 2l $, we have \begin{eqnarray*} \lefteqn{ \left( \frac{\partial}{\partial z_{s,2k-1}} \frac{\partial}{\partial z_{t,2k-1}} + \frac{\partial}{\partial z_{s,2k}} \frac{\partial}{\partial z_{t,2k}}\right) \Lambda _a \Lambda _b \hspace*{2.5em} } \\ & & = \frac{\partial}{\partial z_{s,2k-1}} \Lambda _a \frac{\partial}{\partial z_{t,2k-1}} \Lambda _b + \frac{\partial}{\partial z_{t,2k-1}} \Lambda _a \frac{\partial}{\partial z_{s,2k-1}} \Lambda _b \\ & & \hspace{2cm} - \frac{\partial}{\partial z_{s,2k-1}} \Lambda _a \frac{\partial}{\partial z_{t,2k-1}} \Lambda _b - \frac{\partial}{\partial z_{t,2k-1}} \Lambda _a \frac{\partial}{\partial z_{s,2k-1}} \Lambda _b =0, \end{eqnarray*} from equation (\ref{eq:z.prim}). For $ k = 2l+1 $, the operator certainly gives $ 0 $. So operator (III) kills $ \Lambda _a \Lambda _b $. For a general $ \Lambda $, the proof is similar. \qed Let us give the weight of this primitive vector $ v $.
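The determinant identities used in the proof above, as well as the Euler-operator computation used for the weight below, can be spot-checked symbolically for a small sample case ($ m = 3 $, $ L = 4 $); in this sympy sketch `Z[(s, k)]` stands for the variable $ z_{s,k} $ and `Lam(a)` builds $ \Lambda_a $ as in the text.

```python
import sympy as sp

# Sample case m = 3, L = 4 (so l = 2 and a can be 1 or 2).
m, L = 3, 4
Z = {(s, k): sp.symbols(f'z_{s}_{k}') for s in range(1, m+1) for k in range(1, L+1)}

def Lam(a):
    # Lambda_a = det(z_{s,2k-1} + i z_{s,2k}), rows m-a < s <= m, columns 1 <= k <= a.
    return sp.Matrix(a, a,
                     lambda i, j: Z[(m-a+i+1, 2*j+1)] + sp.I*Z[(m-a+i+1, 2*j+2)]).det()

L2 = Lam(2)

# Equation (z.prim): d/dz_{s,2k} Lambda_a = i d/dz_{s,2k-1} Lambda_a for all s, k.
for s in range(1, m+1):
    for k in range(1, L//2 + 1):
        assert sp.expand(sp.diff(L2, Z[(s, 2*k)]) - sp.I*sp.diff(L2, Z[(s, 2*k-1)])) == 0

# Operator (III): sum_k d/dz_{s,k} d/dz_{t,k} kills Lambda_a Lambda_b.
v = sp.expand(Lam(1)*Lam(2))
for s in range(1, m+1):
    for t in range(1, m+1):
        assert sp.expand(sum(sp.diff(v, Z[(s, k)], Z[(t, k)]) for k in range(1, L+1))) == 0

# Euler operator: sum_k z_{s,k} d/dz_{s,k} Lambda_a = 0 for s <= m-a, = Lambda_a otherwise
# (here a = 2).
for s in range(1, m+1):
    euler = sum(Z[(s, k)]*sp.diff(L2, Z[(s, k)]) for k in range(1, L+1))
    assert sp.expand(euler - (0 if s <= m - 2 else L2)) == 0
```

The Clifford-algebra identities (\ref{eq:r.prim}) and (\ref{eq:r2.prim}) are not covered by this sketch; they would require a model of the odd generators.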
\begin{eproposition} \label{prop:weight} The weight of $ v = \Lambda (i_1, \cdots , i_m) R(j_1, \cdots , j_n) $ is given by $ \lambda = ( \lambda _1, \cdots , \lambda _m / \mu _1, \cdots , \mu _n ) $ with \[ \lambda _s = \sum_{a=m-s+1}^{m} i_a + \frac{L}{2} \hspace*{1.5em} ( 1 \leq s \leq m ), \hspace*{1.5em} \mu _t = j_t - \frac{L}{2} \hspace*{1.5em} ( 1 \leq t \leq n ). \] \end{eproposition} \proof Note that weight operators for the $ \lie{sp} $-part are \[ \iunit \sum _{k=1}^L \left( z_{s,k} \frac{\partial}{\partial z_{s,k}} +\half \right) \hspace*{1.5em} (1 \leq s \leq m). \] Because the degree of $ \Lambda _a $ with respect to $ \{ z_{s,1}, \cdots , z_{s,L} \} $ is $ 0 $ for $ 1 \leq s \leq m-a $ and $ 1 $ for $ m-a < s \leq m $, we obtain \[ \sum _{k=1}^L z_{s,k} \frac{\partial}{\partial z_{s,k}} \Lambda _a = \left\{ \begin{array}{ll} 0 & \mbox{for $ 1 \leq s \leq m-a $,} \\ \Lambda _a & \mbox{for $ m-a < s \leq m $.} \end{array} \right. \] Therefore we have \[ \lambda _s = \sum_{a=m-s+1}^{m} i_a + \frac{L}{2} \hspace*{1.5em} ( 1 \leq s \leq m ). \] Since weight operators for the $ \lie{so} $-part are \[ \frac{\iunit}{2} \sum _{k=1} ^L \alpha_{t,k} \hspace*{1.5em} (1 \leq t \leq n), \] we get \[ \mu _t = j_t - \frac{L}{2} \hspace*{1.5em} ( 1 \leq t \leq n ), \] because $ (\alpha _{t,2k-1} + \alpha _{t,2k})(r_{t,2k-1} + \iunit r_{t,2k}) = 0 $. \qed \section{Conditions for super unitarity.} \label{sec:cond.unitary} In this section, we study necessary and sufficient conditions for super unitarizability of irreducible lowest weight representations of $ \lie{osp}(M/N;\mathbb{R} ) $ for $ N \geq 2 $. \begin{eproposition} \label{prop:ncond1} Let $ ( \pi , V ) $ be an irreducible super unitary lowest weight representation of $ \lie{osp}(M/N;\mathbb{R} ) \; (M = 2m $ and $ N = 2n $ or $ 2n+1 ) $. Then any weight $ \lambda $ of $ ( \pi , V ) $ must satisfy \[ |\mu _t| \leq \lambda _s \hspace*{1.5em} ( 1 \leq s \leq m , 1 \leq t \leq n ).
\] \end{eproposition} For the proof, see the proof of Proposition 2.3 in \cite{Nishiyama2}. From this proposition and the property of $ \lie{g}_{\bar{0}} $-lowest weights, the lowest weight $ \lambda $ of an irreducible super unitary lowest weight representation of $ \lie{osp}(M/N;\mathbb{R} ) $ must satisfy the following condition (I): \[ {\rm (I)} \hspace*{1.5em} \left\{ \begin{array}{l} -\mu _n \leq \cdots \leq -\mu _1 \leq \lambda _1 \leq \cdots \leq \lambda _m,\\ | \mu _n | \leq -\mu _{n-1} \mbox{ for } N = 2n, \hspace*{0.8em} \mu _n \leq 0 \mbox{ for } N = 2n+1. \end{array} \right. \] Note that $ \mu _1, \cdots , \mu _{n-1} $ are all non-positive integers or all non-positive half integers. Now we restrict ourselves to the case that $ N $ is strictly greater than $ 1 $. Let $ ( \pi , V ) $ be an irreducible lowest weight representation of $ \lie{osp}(M/N;\mathbb{R} ) \; ( N \geq 2 ) $ with the lowest weight $ \lambda \in (\lie{h}^{\mathbb{C}})^{\ast} $ and let $ v_0 $ be a non-zero lowest weight vector. We put \begin{equation} \label{eq:vkdef} v_k = X_{m-k+1} X_{m-k+2} \cdots X_m v_0 \hspace*{0.8em} \mbox{ for } k = 1,2, \cdots ,m-1, \end{equation} where $ X_j $ is a non-zero root vector for $ \beta _j = e_j - f_1 $. We find that $ X_{\alpha } v_k = 0 $ for a negative even root vector $ X_{\alpha} \in \lie{g} _{\alpha} $ ($ \alpha \in \Delta _{ \bar{0} }^{-} $). In fact, we have \[ X_{\alpha} v_k = \sum_{i=1}^{k} X_{m-k+1} \cdots X_{m-k+i-1} [ X_{\alpha} , X_{m-k+i} ] X_{m-k+i+1} \cdots X_m v_0 \] \[ \hspace{5cm} +X_{m-k+1} \cdots X_m X_{\alpha} v_0 , \] where the second term of the right hand side vanishes since $ v_0 $ is lowest. Each bracket $ [ X_{\alpha} , X_j ] $ of the first term is in $ \lie{g} _{ \beta _l } ( j<l ) $ or in $ \lie{g} _{-e_s - f_1} $. Therefore each member of the first term vanishes since $ (X_l)^{2} = 0 $ or $ [ X_{-e_s - f_1} , X_t ] = 0 $ respectively. Thus the vector $ v_k $ is primitive for the even part. 
\begin{elemma} \label{lemma:3eq} The following three conditions are mutually equivalent. \noindent $ \begin{array}{lc} {\rm (a)} & v_k \neq 0 , \\ {\rm (b)} & X_{- \beta _m } X_{- \beta _{m-1} } \cdots X_{- \beta _{m-k+1} } v_k \neq 0 , \\ {\rm (c)} & \prod_{l=1}^{k} ( \lambda _{m-l+1} + \mu _1 -l+1 ) \neq 0. \end{array} $ \end{elemma} \proof First we show that conditions (b) and (c) are equivalent. We have \[ X_{- \beta _{m-k+1} } v_k = X_{- \beta _{m-k+1} } X_{ \beta _{m-k+1} } v_{k-1} \hspace{5cm} \] \[ = [ X_{ \beta _{m-k+1} } , X_{- \beta _{m-k+1} } ] v_{k-1} + \sum_{l=1}^{k-1} (-)^{k-l} X_{ \beta _{m-k+1} } \cdots X_{ \beta _{m-l} } [ X_{ \beta _{m-l+1} } , X_{- \beta _{m-k+1} } ] v_{l-1} \] \[ + (-)^{k} X_{ \beta _{m-k+1} } \cdots X_{ \beta _{m} } X_{- \beta _{m-k+1} } v_0 \] \[ = H_{ \beta _{m-k+1} } v_{k-1} + \sum_{l=1}^{k-1} (-)^{k-l} X_{ \beta _{m-k+1} } \cdots X_{ \beta _{m-l} } X_{e_{m-l+1} - e_{m-k+1} } v_{l-1}, \] where $ X_{e_{m-l+1} - e_{m-k+1} } v_{l-1} $ vanishes since $ e_{m-l+1} - e_{m-k+1} $ is in $ \Delta _{\bar{0}}^{-} $ and $ v_{l-1} $ is primitive for the even part. So the above formula becomes \[ ( \lambda + e_{m-k+2} + \cdots + e_m -(k-1) f_1 ) ( H_{ \beta _{m-k+1} }) v_{k-1} = ( \lambda _{m-k+1} + \mu _1 -k+1) v_{k-1}. \] Using induction on $ k $, we get \[ X_{- \beta _m } X_{- \beta _{m-1} } \cdots X_{- \beta _{m-k+1} } v_k = \prod_{l=1}^{k} ( \lambda _{m-l+1} + \mu _1 -l+1 ) v_0 . \] Thus (b) and (c) are equivalent. It is clear that (b) $ \Rightarrow $ (a). We will show (a) $ \Rightarrow $ (b). Suppose $ v_k \neq 0 $. Then \begin{equation} \label{eq:vkuniv} v_0 \in U( \lie{g} ) v_k , \end{equation} since $ V $ is irreducible. Therefore we can write \[ v_0 = \sum Y^{+} Y_{\bar{1}}^{-} Y_{\bar{0}}^{-} v_k, \] where $ Y^{+}, Y_{\bar{1}}^{-} $ and $ Y_{\bar{0}}^{-} $ are monomials of root vectors in $ \lie{g} ^{+}, \lie{g} _{ \bar{1} }^{-} $ and $ \lie{g} _{ \bar{0} }^{-} $ respectively.
If $ Y^{+} $ is not a scalar, the weight of $ Y_{ \bar{1} }^{-} Y_{ \bar{0} }^{-} v_k $ is lower than $ \lambda $. Thus we may assume $ Y^{+}=1 $. On the other hand, since $ v_k $ is primitive for the even part, we can also assume $ Y_{ \bar{0} }^{-}=1 $ and we get \[ v_0 = \sum Y_{ \bar{1} }^{-} v_k. \] The weight of $ Y_{ \bar{1} }^{-} $ is the difference of the weight of $ v_0 $ and that of $ v_k $, i.e. $ k f_1 -(e_{m-k+1} + \cdots + e_m) $. Therefore $ Y_{ \bar{1} }^{-} $ must be of the following form: \[ Y_{ \bar{1} }^{-} = c X_{- \beta _m } X_{- \beta _{m-1} } \cdots X_{- \beta _{m-k+1} } \hspace{1cm} ( c \in \mathbb{C} ), \] and we get \[ v_0 = c X_{- \beta _m } X_{- \beta _{m-1}} \cdots X_{- \beta _{m-k+1}} v_k. \] This shows that $ X_{- \beta _m } X_{- \beta _{m-1}} \cdots X_{- \beta _{m-k+1}} v_k \neq 0 $. \qed \begin{eproposition} \label{prop:ncond2} Let $ ( \pi , V ) $ be an irreducible lowest weight representation of $ \lie{osp}(M/N;\mathbb{R}) \; ( N \geq 2 ) $ with the lowest weight $ \lambda \in (\lie{h}^{\mathbb{C}})^{\ast} $. Assume that $ (\pi , V) $ is admissible. Then the following condition is necessary for $ ( \pi , V ) $ to be super unitary. \noindent The lowest weight $ \lambda $ satisfies \begin{equation} \label{eq:2'} \lambda _1 + \mu _1 \in \{ d,d+1, \cdots , m-1 \} \cup [m-1, \infty), \end{equation} where $ d = \# \{ 1 \leq k \leq m | \lambda _k > \lambda _1 \}. $ \end{eproposition} \proof Let $ ( \pi , V ) $ be super unitary. From condition (I), we have $ \lambda _1 + \mu _1 \geq 0 $. If $ m = 1 $, then condition (\ref{eq:2'}) only means $ \lambda _1 + \mu _1 \geq 0 $, and there is nothing to prove. We consider the case $ m \geq 2 $. If $ \lambda _1 + \mu _1 \geq m-1 $ then the condition trivially holds. So we suppose $ k \leq \lambda _1 + \mu _1 < k+1 $ for $ k = 0,1, \cdots , m-2 $. Note that $ v_{k+1} $ vanishes. In fact, if $ v_{k+1} \neq 0 $, its weight is $ \lambda + ( e_{m-k} + \cdots + e_m ) - (k+1) f_1 $.
Applying Proposition \ref{prop:ncond1}, we get $ \lambda _1 + \mu _1 \geq k+1 $, a contradiction. So, from Lemma \ref{lemma:3eq}, there exists $ 1 \leq l \leq k+1 $ such that \[ \lambda _{m-l+1} + \mu _{1} = l-1. \] For $ l \leq k $, the left hand side of the above equation is at least $ \lambda _1 + \mu _1 \geq k $ and the right hand side is at most $ k-1 $. Thus the above equation must hold for $ l = k+1 $ and $ \lambda _{m-k} + \mu _1 = k $. Since $ k- \mu _1 \leq \lambda _1 \leq \cdots \leq \lambda _{m-k} = k- \mu _1 $, the equations $ \lambda _1 = \cdots = \lambda _{m-k} = k- \mu _1 $ must hold, hence $ d \leq k $. So we get $ \lambda _1 + \mu _1 = k \geq d $. \qed From these propositions and Proposition \ref{prop:weight}, we get the following result. \begin{etheorem} \label{thm:cond} Let $ (\pi , V) $ be an integrable irreducible super unitary representation of $ \lie{osp}(M/N;\mathbb{R}) \; (N \geq 2) $, which is necessarily a lowest or highest weight representation. {\rm (i)} If $ (\pi , V) $ is a lowest weight representation, then its lowest weight $ \lambda= $ \linebreak $ (\lambda _1, \cdots , \lambda _m / $ $ \mu _1, \cdots , \mu _n) $ $ \in (\lie{h}^{\mathbb{C}})^{\ast} $ must satisfy conditions {\rm (I)} and {\rm (II):} \noindent $ \displaystyle {\rm (I)} \hspace*{1.5em} \left\{ \begin{array}{l} -\mu _n \leq \cdots \leq -\mu _1 \leq \lambda _1 \leq \cdots \leq \lambda _m,\\ |\mu_n| \leq -\mu _{n-1} \hspace*{0.8em} \mbox{ for } N = 2n, \hspace*{1.5em} \mu _n \leq 0 \hspace*{0.8em} \mbox{ for } N = 2n+1,\\ \lambda _i , \mu _j \in \mathbb{Z} \hspace*{0.8em} \mbox{ for all } i, j, \end{array} \right. $ \noindent $ \displaystyle {\rm (II)} \hspace*{1.5em} \lambda _1 + \mu _1 \geq d \hspace*{1.5em} \mbox{ where } d = \# \{ 1 \leq k \leq m| \lambda _k > \lambda _1 \}.
$ Conversely, if $ (\pi , V) $ is an irreducible lowest weight representation of \linebreak $ \lie{osp}(M/N;\mathbb{R} ) $ $ (N \geq 2) $ whose lowest weight $ \lambda = ( \lambda _1, \cdots , \lambda _m / $ $ \mu _1, \cdots , \mu _n ) \in (\lie{h}^{\mathbb{C}})^{\ast} $ satisfies the above conditions {\rm (I)} and {\rm (II)}, then $ ( \pi , V ) $ is super unitary. {\rm (ii)} If $ (\pi, V) $ is a highest weight representation, then its highest weight $ \lambda= $ \linebreak $ (\lambda _1, \cdots , \lambda _m / $ $ \mu _1, \cdots , \mu _n) $ $ \in (\lie{h}^{\mathbb{C}})^{\ast} $ must satisfy conditions ${\rm (I')}$ and ${\rm (II')}${\rm :} \noindent $ \displaystyle {\rm (I')} \hspace*{1.5em} \left\{ \begin{array}{l} -\mu _n \geq \cdots \geq -\mu _1 \geq \lambda _1 \geq \cdots \geq \lambda _m,\\ | \mu _n | \leq \mu _{n-1} \hspace*{0.8em} \mbox{ for } N = 2n, \hspace*{1.5em} \mu _n \geq 0 \hspace*{0.8em} \mbox{ for } N = 2n+1,\\ \lambda _i , \mu _j \in \mathbb{Z} \hspace*{0.8em} \mbox{ for all } i, j, \end{array} \right. $ \noindent $ \displaystyle {\rm (II')} \hspace*{1.5em} \lambda _1 + \mu _1 \leq -d \hspace*{1.5em} \mbox{ where } d = \# \{ 1 \leq k \leq m| \lambda _k < \lambda _1 \}. $ Conversely, if $ (\pi , V) $ is an irreducible highest weight representation of \linebreak $ \lie{osp}(M/N;\mathbb{R} ) $ $ (N \geq 2) $ whose highest weight $ \lambda = ( \lambda _1, \cdots , \lambda _m / $ $ \mu _1, \cdots , \mu _n ) \in (\lie{h}^{\mathbb{C}})^{\ast} $ satisfies the above conditions ${\rm (I')}$ and ${\rm (II')}$, then $ ( \pi , V ) $ is super unitary. \end{etheorem} \proof If $ \pi $ is integrable, then it is admissible. Thus it must be a lowest or highest weight representation (Proposition \ref{prop:admuni}). We will prove the statements only for lowest weight representations. The proof for highest weight representations is similar.
Note that integrability forces $ \lambda_i $'s and $ \mu_j $'s to be integers, as we mentioned in \S \ref{sec:osp}. Let $ \pi $ be an integrable lowest weight representation. Then conditions (I) and (II) follow from Propositions \ref{prop:ncond1} and \ref{prop:ncond2}. If $ \lambda _1 = 0 $, then all $ \mu _j $ vanish because of condition (I) and, from condition (II), we have $ d = 0 $. Thus all $ \lambda _i $ also vanish. Therefore $ \lambda $ is the weight of the trivial representation, which is super unitary. If $ \lambda _1 \neq 0 $, then let us put $ L = 2 \lambda _1 $ , $ i_k = \lambda _{m-k+1} - \lambda _{m-k} $ for $ 1 \leq k \leq m-1 $ , $ i_m = 0 $ and $ j_b = \lambda _1 + \mu _b \hspace*{0.8em} ( 1 \leq b \leq n ) $. Then all $ i_k $'s and $ j_b $'s become integers and, from conditions (I) and (II), these $ L,{i_a},{j_b} $ satisfy the conditions in Proposition \ref{prop:primvec}. From Proposition \ref{prop:weight}, the weight of the corresponding primitive vector $ v $ is $ \lambda $ itself. Therefore $ \lambda $ is the lowest weight of a super unitary representation. \qed \end{document}
\begin{document} \synctex=1 \title[Renormalization and thermodynamics] {Renormalization, thermodynamic formalism and quasi-crystals in subshifts.} \author{Henk Bruin and Renaud Leplaideur} \date{Version of \today} \thanks{Part of this research was supported by a Scheme 3 (ref 2905) visitor grant of the London Mathematical Society and a visiting professorship at the University of Brest. } \begin{abstract} We examine thermodynamic formalism for a class of renormalizable dynamical systems which in the symbolic space is generated by the Thue-Morse substitution, and in complex dynamics by the Feigenbaum-Coullet-Tresser map. The basic question answered is whether fixed points $V$ of a renormalization operator ${\mathcal R}$ acting on the space of potentials are such that the pressure function $\gamma \mapsto {\mathcal P}(-\gamma V)$ exhibits phase transitions. This extends the work by Baraviera, Leplaideur and Lopes on the Manneville-Pomeau map, where such phase transitions were indeed detected. In this paper, however, the attractor of renormalization is a Cantor set (rather than a single fixed point), which admits various classes of fixed points of ${\mathcal R}$, some of which do and some of which do not exhibit phase transitions. In particular, we show it is possible to reach, as a ground state, a quasi-crystal before temperature zero by freezing a dynamical system. \end{abstract} \maketitle \section{Introduction}\label{sec:intro} \subsection{Background} Phase transitions are a central theme in statistical mechanics and probability theory. In the physics/probability approach the dynamics is not very relevant and just emerges as a by-product of the invariance by translation. The main difficulty is the geometry of the ${\mathbb Z}^{d}$ lattice.
Considering an interacting particle system such as the Ising model (see {\em e.g.\ } \cite{gallavotti1, georgii}), it is possible to find a measure (called a Gibbs measure) that maximizes the probability of obtaining a configuration with minimal free energy associated to a Hamiltonian. This is done by considering a finite box and fixing the conditions on its boundary. Then, letting the size of the box tend to infinity, the sequence of Gibbs measures has a set of accumulation points. If this set varies non-continuously with respect to the parameters (including the temperature), then the system is said to exhibit a \emph{phase transition}. In contrast, the time evolution of the system is the central theme in dynamical systems. The theory of thermodynamic formalism was imported into hyperbolic dynamics in the 70's, essentially by Sinai, Ruelle and Bowen. Gradually, authors started to extend this theory to the non-uniformly hyperbolic case, sometimes applying \emph{inducing techniques} that are also important in this paper. Initially, phase transitions were less central in dynamical systems, but the development of the theory of ergodic optimization since the 2000's has naturally led mathematicians to introduce (or rather rediscover) the notion of ground states. The question of phase transitions arises naturally in this context. Note that the vocabulary used in statistical mechanics is sometimes quite different from that used in dynamical systems. What in statistical mechanics vocabulary is called a ``freezing'' transition, such as occurs in Fisher-Felderhof models (see {\em e.g.\ } \cite{fisher}), corresponds in the mathematical vocabulary to the Manneville-Pomeau map or the shift with Hofbauer potential (see {\em e.g.\ } \cite{wang} or \cite[Exercise 5.8 on page 98]{ruelle} and also \cite{gallavotti2}). Renormalization is an over-arching theme in physics and dynamics, including thermodynamic formalism; see \cite{CGU} for modern results in this direction.
The system that we study in this paper is related to the period-doubling cascade and to the infinitely renormalizable maps {\it \`a la} Feigenbaum-Coullet-Tresser, which lie on the boundary of chaos (see {\em e.g.\ } \cite{schuster}). In contrast to the freezing transitions, at the phase transition the system has its equilibrium state supported on a Cantor set rather than on a fixed point or a periodic orbit. Stated in physics terminology, we prove that it is possible to reach a quasi-crystal as a ground state before temperature zero by freezing a dynamical system (see Theorems \ref{theo-thermo-a<1} and \ref{theo-super-MP}). This issue is related to a question due to van Enter (see \cite{vEM}). The original question was for ${\mathbb Z}^{2}$-actions, but we hope that the ideas here may be exported to this more complicated case. Returning to the mathematical motivation, the present paper takes the work of \cite{baraviera-leplaideur-lopes} a step further. We investigate the connections between phase transitions in the full $2$-shift, renormalization for potentials, renormalization for maps (in complex dynamics) and substitutions in the full $2$-shift. Here the attractor of renormalization is a Cantor set, rather than a single point, and its thermodynamic properties turn out to be strikingly different. We recall that Bowen's work \cite{Bowen} on thermodynamic formalism showed that every subshift of finite type with a H\"older continuous potential $\varphi$ admits a unique equilibrium state (which is a Gibbs measure). Moreover, the pressure function $\gamma \mapsto {\mathcal P}(-\gamma \varphi)$ is real analytic and there are no phase transitions. This is also known as the Griffiths-Ruelle theorem. Hofbauer \cite{Hnonuni} was the first (in the dynamical systems world) to find continuous non-H\"older potentials for the full two-shift $(\Sigma, \sigma)$ allowing a phase transition at some $t = t_{0}$.
A geometric interpretation of Hofbauer's example leads naturally to the Manneville-Pomeau map $f_{\mbox{\tiny MP}}:[0,1] \to [0,1]$ defined as $$ f_{\mbox{\tiny MP}}(x) = \left\{ \begin{array}{ll} \frac{x}{1-x} & \text{ if } x \in [0, \frac12], \\ 2x-1 & \text{ if } x \in (\frac12, 1], \end{array}\right. $$ with a neutral fixed point at $0$. This map admits a local renormalization $\psi(x) = \frac{x}{2}$ which satisfies \begin{equation} \label{equ-renorm-MP} f_{\mbox{\tiny MP}}^2 \circ \psi(x) = \psi \circ f_{\mbox{\tiny MP}}(x) \qquad \text{ for all } x \in [0, \frac12]. \end{equation} If we differentiate Equation~\eqref{equ-renorm-MP}, take logarithms and subtract $\log \psi'\equiv \log\frac12$ from both sides of the equality, we find \begin{equation} \label{equ2-renom-mp-pot} \log|f_{\mbox{\tiny MP}}'|(x) = \log |f_{\mbox{\tiny MP}}'| \circ f_{\mbox{\tiny MP}} \circ \psi(x) + \log |f_{\mbox{\tiny MP}}'| \circ \psi(x). \end{equation} Passing to the shift-space again (via the itinerary map for the standard partition $\{[0,\frac12], \ (\frac12,1]\}$), we are naturally led to renormalization in the shift. Of prime importance are the solutions of the equation \begin{equation} \label{equ-renorm-shift} \sigma^{2}\circ H=H\circ \sigma, \end{equation} which replaces the renormalization scaling $\psi$ in \eqref{equ-renorm-MP}. Equation~\eqref{equ2-renom-mp-pot} leads to an operator ${\mathcal R}$ defined by $$ {\mathcal R}(V) = V \circ \sigma \circ H + V \circ H. $$ In \cite{baraviera-leplaideur-lopes}, the authors investigated the case of the substitution $$ H_{\mbox{\tiny MP}}:\left\{ \begin{array}{l} 0 \to 00, \\ 1 \to 01, \end{array} \right. $$ which has a unique fixed point $ 0^\infty$, corresponding to the neutral fixed point $0$ of $f_{\mbox{\tiny MP}}$.
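The renormalization identity \eqref{equ-renorm-MP} can also be checked numerically. The following sketch (an illustration only, not part of the argument) samples points of $[0,\frac12]$ and compares $f_{\mbox{\tiny MP}}^2\circ\psi$ with $\psi\circ f_{\mbox{\tiny MP}}$:

```python
# Numerical check of the local renormalization identity
# f_MP^2(psi(x)) = psi(f_MP(x)) on [0, 1/2]; illustration only.

def f_mp(x):
    # Manneville-Pomeau map with neutral fixed point at 0
    return x / (1 - x) if x <= 0.5 else 2 * x - 1

def psi(x):
    # local renormalization scaling
    return x / 2

for k in range(1, 51):           # sample points of [0, 1/2]
    x = 0.5 * k / 50
    lhs = f_mp(f_mp(psi(x)))     # f_MP^2 after psi
    rhs = psi(f_mp(x))           # psi after f_MP
    assert abs(lhs - rhs) < 1e-12, (x, lhs, rhs)
print("f_MP^2 o psi == psi o f_MP on [0, 1/2]")
```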
In \cite{baraviera-leplaideur-lopes}, the map $H_{\mbox{\tiny MP}}$ was not presented as a substitution, but we emphasize here (and it is an improvement because it allows more general studies) that it indeed is; more generally, any constant-length $k$ substitution solves Equation~\eqref{equ-renorm-shift} (with $\sigma^{k}$ instead of $\sigma^{2}$). It is also shown in \cite{baraviera-leplaideur-lopes} that the operator ${\mathcal R}$ fixes the Hofbauer potential $$ V(x):=\log\frac{n+1}{n} \quad \text{ if } x \in [0^{n}1] \setminus [0^{n+1}1], \quad n>0. $$ Moreover, the lift of $\log f'_{\mbox{\tiny MP}}$ belongs to the \emph{stable set} of the Hofbauer potential. This fact is somewhat mysterious because the substitution $H_{\mbox{\tiny MP}}$ \emph{is not} the lift of the scaling function $\psi:x\mapsto x/2$. In this paper we focus on the Thue-Morse substitution; see \eqref{eq:HTM} for the definition. It is one of the simplest substitutions satisfying the renormalization equality \eqref{equ-renorm-shift} and, contrary to $H_{\mbox{\tiny MP}}$, the attractor for the Thue-Morse substitution, say ${\mathbb K}$, is not a periodic orbit but a Cantor set. Yet, similarly to the Manneville-Pomeau fixed point, $\sigma:{\mathbb K} \to {\mathbb K}$ has zero entropy and is uniquely ergodic. This is one way to define a quasi-crystal in ergodic theory. The thermodynamic formalism for the Thue-Morse substitution is much more complicated, and more interesting, than for the Manneville-Pomeau substitution. This is because the Cantor structure of the attractor admits a more intricate recursion behavior of nearby points (although it has zero entropy), characterized by what we call ``accidents'' in Section~\ref{subsec-theo-vtilde}, which are responsible for the lack of phase transitions for the ``good'' fixed point of ${\mathcal R}$. This allows much more chaotic shadowing than when the attractor of the substitution is a periodic orbit.
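The claim that any constant-length substitution solves the renormalization equation can be tested on finite words: for a substitution $H$ of constant length $k$, dropping the first $k$ symbols of $H(w)$ gives $H(\sigma w)$. A small Python sketch (the length-3 substitution is a made-up example, not taken from the paper):

```python
# Any substitution H of constant length k satisfies sigma^k o H = H o sigma:
# on finite words, dropping the first k symbols of H(w) equals H(w[1:]).

def substitute(h, word):
    return "".join(h[c] for c in word)

H_MP = {"0": "00", "1": "01"}    # Manneville-Pomeau substitution
H_TM = {"0": "01", "1": "10"}    # Thue-Morse substitution
H3   = {"0": "010", "1": "100"}  # hypothetical constant-length-3 example

for h in (H_MP, H_TM, H3):
    k = len(h["0"])              # constant length of the substitution
    for w in ("0110100110010110", "10010110", "01"):
        assert substitute(h, w)[k:] == substitute(h, w[1:])
print("sigma^k o H == H o sigma holds for all tested substitutions")
```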
We want to emphasize here that our results are extendible to more general substitutions, but to get the main ideas across, we focus on the Thue-Morse shift in this paper. \subsection{Statements of results} The Thue-Morse substitution \begin{equation}\label{eq:HTM} H := H_{\mbox{\tiny TM}}: \left\{ \begin{array}{l} 0 \to 01 \\ 1 \to 10 \end{array} \right. \end{equation} has two fixed points $$ \rho_1 = 1001\ 0110\ 0110\ 1001\ 01\dots \quad \text{ and } \quad \rho_0 = 0110\ 1001\ 1001\ 0110\ 10\dots $$ Let ${\mathbb K} = \overline{\cup_n \sigma^n(\rho_0)} = \overline{\cup_n \sigma^n(\rho_1)}$ be the corresponding subshift of the full shift $(\Sigma, \sigma)$ on two symbols. The renormalization equation~\eqref{equ-renorm-shift} holds in $\Sigma$: $H \circ \sigma = \sigma^2 \circ H$, and we define the {\em renormalization operator} acting on functions $V:\Sigma \to {\mathbb R}$ as $$ ({\mathcal R} V)(x) = V \circ \sigma \circ H(x) + V \circ H(x). $$ We consider the usual metric on $\Sigma$: $d(x,y) = \frac1{2^n}$ if $n = \min\{ i \geqslant 1 : x_i \neq y_i \}$. This distance is sometimes represented graphically as follows: \begin{figure} \caption{The sequences $x$ and $y$ coincide for digits $0$ up to $n-1$ and then split.} \end{figure} Note that $d(H^nx, H^ny) = d(x,y)^{2^n}$: if $x$ and $y$ coincide for $m$ digits, then $H^{n}(x)$ and $H^{n}(y)$ coincide for $2^nm$ digits. \\[2mm] The first two results deal with the continuous fixed points of the renormalization operator ${\mathcal R}$. The main issue is to determine the fixed points and their \emph{weak stable leaf}, namely the potentials attracted to the given fixed point by iterations of ${\mathcal R}$. The second series of results deals with the thermodynamic formalism; we study whether some classes of potentials, related to the weak stable leaf of the fixed points, exhibit a phase transition.
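Both the fixed-point property of $\rho_0,\rho_1$ and the doubling identity $d(H^nx,H^ny)=d(x,y)^{2^n}$ are easy to observe on finite prefixes; the following Python sketch is purely illustrative:

```python
# The iterates H^n(0) and H^n(1) are nested prefixes, hence converge to the
# fixed points rho_0 and rho_1; moreover H doubles the number of initial
# digits on which two sequences agree. Illustration on finite words only.

H = {"0": "01", "1": "10"}

def apply_H(word):
    return "".join(H[c] for c in word)

w0, w1 = "0", "1"
for _ in range(6):
    # each iterate extends the previous one, so the limits rho_0, rho_1 exist
    assert apply_H(w0).startswith(w0) and apply_H(w1).startswith(w1)
    w0, w1 = apply_H(w0), apply_H(w1)
print("rho_0 starts:", w0[:18])
print("rho_1 starts:", w1[:18])

def agreement(u, v):
    # number of initial digits on which u and v coincide
    m = 0
    while m < min(len(u), len(v)) and u[m] == v[m]:
        m += 1
    return m

x, y = "0110" + "0" * 8, "0110" + "1" * 8    # agree on exactly 4 digits
for _ in range(3):
    # d(Hx, Hy) = d(x, y)^2: the agreement length doubles
    assert agreement(apply_H(x), apply_H(y)) == 2 * agreement(x, y)
    x, y = apply_H(x), apply_H(y)
print("H doubles the agreement length at every step")
```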
In particular, Theorem~\ref{theo-super-MP} is related to a question of van Enter et al.\ (see {\em e.g.\ } \cite{vEM,vEMZ}) asking whether it is possible to reach a quasi-crystal by freezing a system before zero temperature. The last result (Theorem~\ref{theo-thermo-Vu}) returns to the geometrical dynamics and shows the main difference between the Thue-Morse case and the Manneville-Pomeau case. Due to the Cantor structure of the attractor of the substitution, there exist non-continuous but locally constant (on ${\mathbb K}$ up to a finite number of points) fixed points of ${\mathcal R}$. Just as the Hofbauer potential represents the logarithm of the derivative of an affine approximation of the Manneville-Pomeau map, one of these potentials, $V_{u}$, represents the logarithm of the derivative of an affine approximation to the Feigenbaum-Coullet-Tresser map $f_{\mbox{\tiny feig}}:{\mathbb C} \to {\mathbb C}$. The main difference with the Manneville-Pomeau case is that here $V_{u}$ has no phase transition, whereas $-\log |f'_{\mbox{\tiny feig}}|$ has one. \subsubsection{Results on continuous fixed points of ${\mathcal R}$} Define the one-parameter family of potentials \begin{equation}\label{eq:Uc} U_c = \left\{ \begin{array}{rl} c & \text{ on } [01], \\ -c & \text{ on } [10], \\ 0 & \text{ on } [00] \cup [11]. \end{array}\right. \end{equation} It is easy to verify that $U_{c}$ is a fixed point of ${\mathcal R}$. Given a function $V:\Sigma\to{\mathbb R}$, the \emph{variation on $k$-cylinders} ${\mbox{Var}}_{k}(V)$ is defined as $$ {\mbox{Var}}_{k}(V):=\max\{|V(x)-V(y)| \;:\; x_{j}=y_{j}\,\mbox{ for }j=0,\ldots,k-1\}. $$ The condition $\sum_{k=1}^{\infty}{\mbox{Var}}_{k}(W)<\infty$ holds if, {\em e.g.\ }, $W$ is H\"older continuous. \begin{theorem}\label{theo-super-uc} If $W$ is a continuous fixed point of ${\mathcal R}$ on ${\mathbb K}$ such that $$\sum_{k=1}^{\infty}{\mbox{Var}}_{k}(W)<\infty,$$ then $W = U_c$ for $c = W(\rho_0)$.
\end{theorem} As for the Hofbauer case, we produce a non-negative continuous fixed point of ${\mathcal R}$ with a well-defined \emph{weak stable set}\footnote{In \cite{baraviera-leplaideur-lopes} it was proven that ${\mathcal R}^{n}(V)$ converges to the fixed point $\widetilde V$; here we only get convergence in the Ces\`aro sense.}. \begin{theorem}\label{theo-vtilde} There exists a unique function $\widetilde V$ such that $\widetilde V = \lim_{m \to \infty} \frac1m\sum_{k=0}^{m-1}{\mathcal R}^kV$ for every continuous $V$ satisfying $V(x) = \frac1n+o(\frac1n)$ if $d(x,{\mathbb K}) = 2^{-n}$. Moreover, $\widetilde V$ is ${\mathcal R}$-invariant, continuous and positive except on ${\mathbb K}$: $\frac1{2n} \leqslant \widetilde V(x) \leqslant \frac{1}{n-1}$ if $d(x,{\mathbb K}) = 2^{-n}$. \end{theorem} \subsubsection{Results on thermodynamic formalism} We refer to Bowen's book \cite{bowen} for the background on thermodynamic formalism, equilibrium states and Gibbs measures in $\Sigma$. However, in contrast to Bowen's book, our potentials are not H\"older continuous. For a given potential $W:\Sigma\to{\mathbb R}$, the pressure of $W$ is defined by $${\mathcal P}(W):=\sup\{h_{\mu}(\sigma)+\int W\,d\mu\},$$ where $h_{\mu}(\sigma)$ is the Kolmogorov entropy of the invariant probability measure $\mu$. The supremum is a maximum in $\Sigma$ whenever $W$ is continuous. Any measure realizing this maximum is called an equilibrium state. We want to study the regularity of the function $\gamma\mapsto {\mathcal P}(-\gamma W)$. For simplicity, this function will also be denoted by ${\mathcal P}(\gamma)$. If ${\mathcal P}(\gamma)$ fails to be analytic, we speak of a phase transition. We are in particular interested in the special phase transition as $\gamma \to \infty$: easy and classical computations show that ${\mathcal P}(\gamma)$ has an asymptote of the form $-a\gamma+b$ as $\gamma \to \infty$.
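For the potential $U_c$ of \eqref{eq:Uc}, which depends only on the coordinates $(x_0,x_1)$, these notions can be made concrete: fixedness under ${\mathcal R}$ reduces to a check on the four $2$-cylinders, and the pressure can be computed by the classical transfer-matrix formula ${\mathcal P}(\phi)=\log\lambda_{\max}(M)$ with $M_{ab}=e^{\phi(ab)}$, a standard fact that we take for granted in this sketch. One finds ${\mathcal P}(-\gamma U_c)\equiv\log 2$, consistent with $U_c(x)=c\,(x_1-x_0)$ being a coboundary; in particular $U_c$ itself exhibits no phase transition. A Python illustration:

```python
import math

# U_c depends only on (x_0, x_1). Fixedness under R and the pressure of
# -gamma*U_c can therefore be checked on 2-cylinders; the transfer-matrix
# formula P(phi) = log(spectral radius of M), M[ab] = exp(phi(ab)), is a
# classical fact used here as an assumption.

def U(c):
    return {"00": 0.0, "01": c, "10": -c, "11": 0.0}

H = {"0": "01", "1": "10"}       # Thue-Morse substitution

def RU(pot, ab):
    # (R U)(x) = U(sigma H x) + U(H x) for x in the 2-cylinder [ab]
    h = H[ab[0]] + H[ab[1]]      # first four digits of H(x)
    return pot[h[1:3]] + pot[h[0:2]]

def pressure_2cyl(phi, steps=100):
    # log of the spectral radius of M, via power iteration
    M = [[math.exp(phi[a + b]) for b in "01"] for a in "01"]
    v, lam = [1.0, 1.0], 1.0
    for _ in range(steps):
        w = [M[i][0] * v[0] + M[i][1] * v[1] for i in range(2)]
        lam = max(w)
        v = [t / lam for t in w]
    return math.log(lam)

for c in (0.5, 1.0, 2.0):
    Uc = U(c)
    # U_c is fixed by R, checked on all 2-cylinders
    assert all(abs(RU(Uc, ab) - Uc[ab]) < 1e-12 for ab in ("00", "01", "10", "11"))
    # the pressure of -gamma*U_c is identically log 2: no phase transition
    for gamma in (0.0, 1.0, 5.0):
        phi = {w: -gamma * val for w, val in Uc.items()}
        assert abs(pressure_2cyl(phi) - math.log(2)) < 1e-9
print("R(U_c) == U_c and P(-gamma U_c) == log 2 for all tested c, gamma")
```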
By an \emph{ultimate phase transition} we mean that ${\mathcal P}(\gamma)$ reaches its asymptote at some $\gamma'$. In this case, there cannot be another phase transition for larger $\gamma$, hence {\em ultimate}: by a convexity argument, ${\mathcal P}(\gamma)=-a\gamma+b$ for every $\gamma\geqslant \gamma'$. One of the main motivations for studying ultimate phase transitions is that the quantity $a$ satisfies $$ a=\inf\left\{\int W\,d\mu \;:\; \mu \text{ is a shift-invariant probability measure} \right\}. $$ An example of an ultimate phase transition for rational maps can be found in \cite{makarov-smirnov}. The Manneville-Pomeau map is another classical example. \begin{theorem}[No phase transition] \label{theo-thermo-a>1} Let $a > 1$ and let $V:\Sigma\rightarrow {\mathbb R}$ be a continuous function satisfying $V(x) = \frac1{n^{a}}+o(\frac1{n^{a}})$ if $d(x,{\mathbb K}) = 2^{-n}$. Then, for every $\gamma\geqslant 0$, there exists a unique equilibrium state associated to $-\gamma V$ and it gives positive mass to every open set. The pressure function $\gamma\mapsto{\mathcal P}(\gamma)$ is analytic and positive on $[0, \infty)$, although it converges to zero as $\gamma \to \infty$. \end{theorem} \begin{theorem}[Phase transition] \label{theo-thermo-a<1} Let $a \in (0,1)$ and let $V:\Sigma\rightarrow {\mathbb R}$ be a continuous function satisfying $V(x) = \frac1{n^{a}}+o(\frac1{n^{a}})$ if $d(x,{\mathbb K}) = 2^{-n}$.
Then there exists $\gamma_{1}$ such that for every $\gamma>\gamma_{1}$ the unique equilibrium state for $-\gamma V$ is the unique invariant measure $\mu_{{\mathbb K}}$ supported on ${\mathbb K}$. For $\gamma<\gamma_{1}$, there exists a unique equilibrium state associated to $-\gamma V$ and it gives positive mass to every open set in $\Sigma$. The pressure function $\gamma\mapsto{\mathcal P}(\gamma)$ is positive and analytic on $[0,\gamma_{1})$. \end{theorem} These results show that the case $a=1$ ({\em i.e.,\ } the Hofbauer potential) is the border between the regimes with and without phase transition. Whether there is a phase transition in the case $a=1$ ({\em i.e.,\ } for the fixed point $\widetilde V$, the analogue of the Hofbauer potential discussed in \cite{baraviera-leplaideur-lopes}) is much more subtle. We intend to come back to this question in a later paper. The full shift $(\Sigma,\sigma)$ can be interpreted geometrically as a degree $2$ covering of the circle. The Manneville-Pomeau map can be viewed this way; it is expanding except for a single (one-sided) indifferent fixed point. When dealing with the Thue-Morse shift, it is natural to look for a circle covering with an indifferent Cantor set. \begin{theorem} \label{theo-super-MP} There exist $C^1$ maps $f_{a}:[0,1]\circlearrowleft$, semi-conjugate to the full $2$-shift and expanding everywhere except on a Cantor set $\widetilde{\mathbb K}$, such that $\widetilde{\mathbb K}$ is conjugate to ${\mathbb K}$ in $\Sigma$ and, if $a \in (0,1)$, then $-\gamma\log f'_{a}$ has an ultimate phase transition. \end{theorem} Another geometric realization of the Thue-Morse shift, and the prototype of renormalizability in one-dimensional dynamics, is the Feigenbaum map. This quadratic interval map $f_{\mbox{\tiny q-feig}}$ has zero entropy, but when complexified it has entropy $\log 2$.
Moreover, it is conjugate to another analytic degree $2$ covering map on ${\mathbb C}$, which we call $f_{\mbox{\tiny feig}}$, that is fixed by the Feigenbaum renormalization operator $$ {\mathcal R}_{\mbox{\tiny feig}} f = \Psi^{-1} \circ f^2 \circ \Psi, $$ where $\Psi$ is a linear, $f$-dependent holomorphic contraction. Arguments from complex dynamics give that ${\mathcal P}(-\gamma \log |f'_{\mbox{\tiny feig}}|) = 0$ for all $\gamma \geqslant 2$, see Proposition~\ref{prop:feig_PT}. Because $h_{top}(f_{\mbox{\tiny feig}}) = \log 2$ on its Julia set, the potential $-\gamma_1 \log |f'_{\mbox{\tiny feig}}|$ has a phase transition for some $\gamma_1 \in (0,2]$. When lifted to the symbolic space, $-\log|f'_{\mbox{\tiny feig}}|$ produces an unbounded potential $V_{\mbox{\tiny feig}}$ which is fixed by ${\mathcal R}$. We can find a potential $V_u$, constant on $$ (\sigma\circ H)^{k}(\Sigma)\setminus (\sigma\circ H)^{k+1}(\Sigma) $$ for each $k$, such that $\| V_{\mbox{\tiny feig}} - V_u\|_\infty < \infty$, and analyze the thermodynamic properties of $V_u$. Although ${\mathcal P}(-\gamma_1 V_{\mbox{\tiny feig}}) = 0$ for some $\gamma_1 \leqslant 2$, it is surprising to see that the potential $V_u$ exhibits no phase transition. We emphasize here an important difference with the Manneville-Pomeau case, where both the potential $-\gamma \log |f'_{MP}|$ and its countably piecewise constant version, the Hofbauer potential, which is constant on the cylinder sets $(H_{MP})^{k}(\Sigma)\setminus(H_{MP})^{k+1}(\Sigma) = [0^{2k+1}1]$, undergo a phase transition. \begin{theorem}[No phase transition for the unbounded fixed point $V_u$] \label{theo-thermo-Vu} The unbounded potential $V_u$ given by $$ V_u(x) = \alpha(k-1) \quad \text{ for } \quad x \in (\sigma \circ H)^k(\Sigma) \setminus (\sigma \circ H)^{k+1}(\Sigma) $$ is a fixed point of ${\mathcal R}$ for any $\alpha \in {\mathbb R}$.
If $\alpha < 0$, then for every $\gamma\geqslant 0$ there exists a unique equilibrium state for $-\gamma V_{u}$. It gives positive mass to any open set in $\Sigma$. The pressure function $\gamma \mapsto {\mathcal P}(-\gamma V_u)$ is analytic and positive for all $\gamma \in [0, \infty)$. \end{theorem} The exact definition of the equilibrium state for this unbounded potential can be found in Subsection~\ref{subsec-unbounded2}. \subsection{Outline of the paper} In Section~\ref{sec:renorm} we prove Theorems~\ref{theo-super-uc} and \ref{theo-vtilde}. In the first subsection we recall some results on the Thue-Morse substitution and its associated attractor ${\mathbb K}$, and prove others. In Section~\ref{sec:thermo} we study the thermodynamic formalism and prove Theorems~\ref{theo-thermo-a<1}, \ref{theo-super-MP} and \ref{theo-thermo-Vu}. This section extensively uses the theory of local thermodynamic formalism defined in \cite{leplaideur1} and developed in further works of the author. Finally, in the Appendix, we explain the relation between the Thue-Morse shift and the Feigenbaum map, and state and prove Proposition~\ref{prop:feig_PT}. \section{Renormalization in the Thue-Morse shift-space}\label{sec:renorm} \subsection{General results on the Thue-Morse shift-space}\label{subsec-genresult-TM} Let $\sigma:\Sigma \to \Sigma$ be the full shift on $\Sigma = \{ 0, 1 \}^{\mathbb N}$. If $x = x_0x_1x_2x_3\dots \in \Sigma$, let $[x_0\dots x_{n-1}]$ denote the $n$-cylinder containing $x$, and let $\bar x_i = 1-x_i$ be our notation for the opposite symbol. Recall that $\rho_0$ and $\rho_1$ are the fixed points of the Thue--Morse substitution, and that ${\mathbb K} = \overline{\mbox{\rm orb}_\sigma(\rho_0)} = \overline{\mbox{\rm orb}_\sigma(\rho_1)}$ is a uniquely ergodic and zero-entropy subshift. We denote by $\mu_{{\mathbb K}}$ its unique invariant measure. We give here some properties of the Thue-Morse sequence that can be found in \cite{BLRS, B, dekking, dLV}.
\begin{enumerate} \item {\em Left-special} words ({\em i.e.,\ } words $w$ such that both $0w$ and $1w$ appear in ${\mathbb K}$) are prefixes of $H^k(010)$ or of $H^k(101)$ for some $k \geqslant 0$. \item {\em Right-special} words ({\em i.e.,\ } words $w$ such that both $w0$ and $w1$ appear in ${\mathbb K}$) are suffixes of $H^k(010)$ or of $H^k(101)$ for some $k \geqslant 0$. \item {\em Bispecial} words ({\em i.e.,\ } words that are both left- and right-special) are precisely the words $\tau_k := H^k(0)$, $\bar \tau_k := H^k(1)$, $\tau_k\bar \tau_k \tau_k = H^k(010)$ and $\bar \tau_k \tau_k \bar \tau_k = H^k(101)$ for $k \geqslant 0$. There are four ways in which a word $w$ can be extended to $awb$, {\em i.e.,\ } with a symbol both to the right and to the left. It is worth noting that for $w = \tau_k$ or $\bar \tau_k$, all four ways indeed occur in ${\mathbb K}$, while for $w = \tau_k \bar \tau_k \tau_k$ or $\bar \tau_k \tau_k \bar \tau_k$ only two extensions occur. \item The Thue-Morse sequence has low {\em word-complexity}: $$ p(n) = \begin{cases} 3 \cdot 2^m + 4r & \text{ if } 0 \leqslant r < 2^{m-1},\\ 4 \cdot 2^m + 2r & \text{ if } 2^{m-1} \leqslant r < 2^m,\\ \end{cases} $$ where $n = 2^m + r + 1$ and $0 \leqslant r < 2^m$. \item The Thue-Morse shift is almost square-free in the sense that if $w = w_1\dots w_n$ is some word, then $ww$ can appear in ${\mathbb K}$, but not $www_1$. The nature of the Thue-Morse substitution is such that $\rho_0$ and $\rho_1$ are concatenations of the words $\tau_k$ and $\bar \tau_k$. Appearances of $\tau_k$ and $\bar \tau_k$ can overlap, but not for too long compared to their lengths, as made clear in Corollary~\ref{cor-limited-overlap}. \end{enumerate} The next lemma shows that the almost-invertibility of $\sigma$ on ${\mathbb K}$ implies some shadowing close to ${\mathbb K}$.
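Before turning to that lemma, we note that properties (4) and (5) above can be witnessed computationally on a long prefix of $\rho_0$; a finite check only illustrates these facts, it does not prove them:

```python
# Witness, on a long prefix of rho_0, the word-complexity formula of
# property (4) and the cube-freeness in property (5).

H = {"0": "01", "1": "10"}
rho0 = "0"
for _ in range(14):                 # prefix of rho_0 of length 2^14
    rho0 = "".join(H[c] for c in rho0)

def factors(s, n):
    # all distinct subwords of length n occurring in s
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def p_formula(n):
    # write n = 2^m + r + 1 with 0 <= r < 2^m (valid for n >= 3)
    m = (n - 1).bit_length() - 1
    r = n - 1 - 2 ** m
    return 3 * 2 ** m + 4 * r if r < 2 ** (m - 1) else 4 * 2 ** m + 2 * r

for n in range(3, 13):
    assert len(factors(rho0, n)) == p_formula(n)

for n in range(1, 7):
    for w in factors(rho0, n):
        # ww may occur, but the overlap w w w_1 never does
        assert (w + w + w[0]) not in rho0
print("complexity formula and almost square-freeness verified on the prefix")
```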
\begin{lemma}\label{lem-invertible} For $x \in \Sigma$ with $d(x,{\mathbb K}) < 2^{-5}$, let $y, y' \in {\mathbb K}$ be the closest points in ${\mathbb K}$ to $x$ and $\sigma(x)$ respectively. If $y' \neq \sigma(y)$, then $y'$ starts as $\tau_k$, $\bar \tau_k$, $\tau_k\bar \tau_k \tau_k$ or $\bar \tau_k \tau_k \bar \tau_k$ for some $k \geqslant 3$. \end{lemma} \begin{proof} As $y' \neq \sigma(y)$, there is another $z \in {\mathbb K}$ such that $\sigma(z) = y'$ and $z_0 \neq y_0 = x_0$. Let $d$ be maximal such that $y_1\dots y_{d-1} = z_1 \dots z_{d-1}$, so $y_d \neq z_d$. This means that the word $y_1\dots y_{d-1}$ is bispecial and, according to property (3), has to coincide with $\tau_k$, $\bar \tau_k$, $\tau_k\bar \tau_k \tau_k$ or $\bar \tau_k \tau_k \bar \tau_k$ for some $k \geqslant 3$. \end{proof} Due to the Cantor structure of ${\mathbb K}$, the distance of an orbit to ${\mathbb K}$ is not a monotone function of time. This is the main problem we will have to deal with. \begin{definition} \label{def-accident} Let $x \in \Sigma$ be such that $d(\sigma(x),{\mathbb K})<2d(x,{\mathbb K})$. Then we say that we have an \emph{accident} at $\sigma(x)$. By extension, if $d(\sigma^{k+1}(x),{\mathbb K})=2d(\sigma^{k}(x),{\mathbb K})$ for every $k<n-1$, but $d(\sigma^{n}(x),{\mathbb K})<2d(\sigma^{n-1}(x),{\mathbb K})$, then we say that we have an \emph{accident} at time $n$. \end{definition} \begin{proposition}\label{prop-time-accident} Assume that $-\log_{2}d(x,{\mathbb K})=d$ and that $b\leqslant d$ is the time of the first accident for the piece of orbit $x,\ldots, \sigma^{d}(x)$. Then: \begin{itemize} \item $x_{b}x_{b+1}\ldots x_{d-1}$ is a bispecial word for ${\mathbb K}$; \item $d-b = 3^{\varepsilon} \cdot 2^k$ for some $k$ and $\varepsilon\in\{0,1\}$; \item $x_0\dots x_{d-1}$ is neither right-special nor left-special; \item $b \geqslant \ \begin{cases} 2^k & \text{ if } d-b = 2^k, \\ 2^{k+1} & \text{ if } d-b = 3 \cdot 2^k.
\end{cases} $ \end{itemize} \end{proposition} \begin{proof} Let $y$ and $y' \in {\mathbb K}$ be such that $x$ and $y$ coincide for $d$ digits and $\sigma^{b}(x)$ and $y'$ coincide for at least $d-b$ digits. Then $$ x_{b}x_{b+1}\ldots x_{d-1} = y_{b}y_{b+1}\ldots y_{d-1} = y'_{0}y'_{1}\ldots y'_{d-b-1} $$ is right-special, because it can be continued both as in $y'$ and as in $y$. The word $x_{b}x_{b+1}\ldots x_{d-1}$ is also left-special, because otherwise, by Lemma~\ref{lem-invertible}, only one preimage of $y'$ by $\sigma$ would be in ${\mathbb K}$, and this would coincide with the word $y_{b-1}y_{b}\ldots y_{d-b-1}$. Then $b-1$ rather than $b$ would be the first accident. By property (3) above, $d-b = 3^{\varepsilon}2^{k}$. On the other hand, $x_0\dots x_{d-1}$ cannot be right-special, because otherwise there would be a point $\tilde x = x_0\dots x_{d-1}\bar x_d\dots \in {\mathbb K}$ with $d(x, \tilde x) < 2^{-d}$. A similar argument excludes that $x_0\dots x_{d-1}$ is left-special. To finish the proof of the proposition we need to check that the next accident cannot happen too early. Assume that $x_0\dots x_{d-2}$ starts as $\rho_0 = r_0r_1r_2\dots$ (the argument for $\rho_1$ is the same). Let $\pi(n) = \#\{ 0 \leqslant i < n : r_i = 1\} - \#\{ 0 \leqslant i < n : r_i = 0\}$ count the surplus of $1$'s within the first $n$ entries of $\rho_0$. Clearly $\pi(n) = 0$ for even $n$ and $\pi(n) = \pm 1$ otherwise. Assume the word $\tau_k$ starts in $\rho_0$ at some digit $m < 2^k$. If $\pi(m) = 1$, then $\pi(m+3) = 2$, while if $\pi(m) = -1$, then $\pi(m+7) = -2$. A similar argument works if $\bar \tau_k$ starts at digit $m$. This shows that $\tau_k$ and $\bar \tau_k$ can only start in $\rho_0$ at even digits. This means that we can take the inverse $H^{-1}$ and find that $\tau_{k-1}$ (or $\bar \tau_{k-1}$) starts at digit $m/2 < 2^{k-1}$ in $\rho_0$.
Repeating this argument, we arrive at $\tau_3$ or $\bar \tau_3$ starting before digit $8 = 2^{3}$ of $\rho_0$, which is definitely false, as we can see by inspecting $\rho_0 = 0110\ 1001\ 1001\ 0110\ \dots$. Note also that the bound $2^k$ is sharp, because $\bar \tau_k$ starts in $\rho_0$ at entry $2^k$. Finally, we need to answer the same question for the bispecial words $\tau_k \bar \tau_k \tau_k = \tau_{k+1} \tau_k$ and $\bar \tau_k \tau_k \bar \tau_k = \bar \tau_{k+1} \bar \tau_k$. The previous argument shows that neither can start before digit $2^{k+1}$, and this bound is also sharp, because $\bar \tau_k \tau_k \bar \tau_k$ starts in $\rho_0$ at entry $2^{k+1}$. \end{proof} \begin{corollary}\label{cor-limited-overlap} Occurrences of $\tau_k$ and $\bar \tau_k$ cannot overlap for more than $2^{k-1}$ digits. \end{corollary} \begin{proof} We consider the prefix $\tau_k$ of $\rho_0$ only, as the other case is symmetric. If the overlap were more than $2^{k-1}$ digits, then $\tau_{k-1}$ or $\bar \tau_{k-1}$ would appear in $\rho_0$ before digit $2^{k-1}$, which contradicts part (3) of Proposition~\ref{prop-time-accident}. \end{proof} \begin{lemma}\label{lem:disjoint} For each $k \geqslant 1$, the Thue-Morse substitution $H$ satisfies ${\mathbb K} = \bigsqcup_{j=0}^{2^k-1} \sigma^j \circ H^k({\mathbb K})$, where $\sqcup$ indicates disjoint union, so $\sigma^i \circ H^k({\mathbb K}) \cap \sigma^j \circ H^k({\mathbb K}) = \emptyset$ for all $0 \leqslant i < j < 2^k$. \end{lemma} \begin{proof} Take $x \in {\mathbb K}$, so there is a sequence $(n_k)_{k \in {\mathbb N}}$ such that $x = \lim_k \sigma^{n_k}(\rho_0)$. If this sequence contains infinitely many even integers, then $x = \lim_k \sigma^{2m_k}(\rho_0) = \lim_k \sigma^{2m_k} \circ H(\rho_0) = \lim_k H \circ \sigma^{m_k}(\rho_0) \in H({\mathbb K})$.
Otherwise, $(n_k)_{k \in {\mathbb N}}$ contains infinitely many odd integers and $x = \lim_k \sigma^{1+2m_k}(\rho_0) = \lim_k \sigma \circ \sigma^{2m_k} \circ H(\rho_0) = \lim_k \sigma \circ H \circ \sigma^{m_k}(\rho_0) \in \sigma \circ H({\mathbb K})$. Therefore ${\mathbb K} \subset H({\mathbb K}) \cup \sigma \circ H({\mathbb K})$. Now if $x = H(a) = \sigma \circ H(b) \in {\mathbb K}$, then $$ x = a_{0}\bar a_0 a_{1}\bar a_1 a_{2}\bar a_2\ldots = \bar b_0 b_{1}\bar b_1 b_{2}\bar b_2\ldots\ , $$ so $\bar b_0 = a_0 \neq \bar a_0 = b_1 \neq \bar b_1 = a_1 \neq \bar a_1 = b_2 \neq \bar b_2 = a_2$. Therefore $x = 101010\dots$ or $010101\dots$, but neither belongs to ${\mathbb K}$. Now for the induction step, assume ${\mathbb K} = \bigsqcup_{j=0}^{2^k-1} \sigma^j \circ H^k({\mathbb K})$. Then, since $H$ is one-to-one and $H^k \circ \sigma = \sigma^{2^k} \circ H^k$, \begin{eqnarray*} {\mathbb K} &=& \bigsqcup_{j=0}^{2^k-1} \sigma^j \circ H^k\left(H({\mathbb K}) \sqcup \sigma \circ H({\mathbb K})\right) \\ &=& \left(\bigsqcup_{j=0}^{2^k-1} \sigma^j \circ H^{k+1}({\mathbb K}) \right) \bigsqcup \left( \bigsqcup_{j=0}^{2^k-1} \sigma^j \circ H^k \circ \sigma \circ H({\mathbb K}) \right)\\ &=& \left(\bigsqcup_{j=0}^{2^k-1} \sigma^j \circ H^{k+1}({\mathbb K}) \right) \bigsqcup \left( \bigsqcup_{j=0}^{2^k-1} \sigma^{j+2^k} \circ H^{k+1}({\mathbb K}) \right)\\ &=& \bigsqcup_{j=0}^{2^{k+1}-1} \sigma^j \circ H^{k+1}({\mathbb K}). \end{eqnarray*} \end{proof} \begin{lemma} \label{lem-sigmacirch} Let $x$ be in the cylinder $[ab]$ with $a, b \in \{0,1\}$. Then the accumulation points of $((\sigma\circ H)^k(x))_{k}$ are $0\rho_b$ and $1\rho_b$. More precisely, $(\sigma\circ H)^{2k}(x)$ converges to $a\rho_{b}$ and $(\sigma\circ H)^{2k+1}(x)$ converges to $\bar a\rho_{b}$.
\end{lemma} \begin{proof} By the definition of $H$ we get $H(x)= a \bar a H(b)\ldots$ Hence $\sigma\circ H(x)=\bar a H(b)\ldots$ By induction we get $$ (\sigma\circ H)^{2k}(x)=a H^{2k}(b)\ldots \quad \mbox{ and } \quad (\sigma\circ H)^{2k+1}(x)=\bar a H^{2k+1}(b)\ldots $$ Since $H^{n}(b)$ converges to $\rho_{b}$ for $b=0,1$, the lemma follows. \end{proof} \subsection{Continuous fixed points of ${\mathcal R}$ on ${\mathbb K}$: Proof of Theorem~\ref{theo-super-uc}}\label{subsec:fixed_on_K} We recall that ${\mathcal R} (V) = V \circ \sigma \circ H + V \circ H$. Therefore \begin{eqnarray*} {\mathcal R}^2V &=& {\mathcal R}(V \circ \sigma \circ H + V \circ H) \\ &=& V \circ \sigma \circ H \circ \sigma \circ H + V \circ \sigma \circ H^2 + V \circ H \circ \sigma \circ H + V \circ H^2 \\ &=& V \circ \sigma^3 \circ H^2 + V \circ \sigma^2 \circ H^2 + V \circ \sigma \circ H^2 + V \circ H^2, \end{eqnarray*} and in general $$ {\mathcal R}^nV = S_{2^n}V \circ H^n \quad \text{ where } \quad(S_kV)(x) = \sum_{i=0}^{k-1} V \circ \sigma^i(x) $$ is the $k$-th ergodic sum. \begin{lemma}\label{lem-integralV} If $V \in L^1(\mu_{\mathbb K})$ is a fixed point of ${\mathcal R}$, then $\int_{{\mathbb K}} V \ d\mu_{{\mathbb K}} = 0$. \end{lemma} \begin{proof} For any typical (w.r.t.\ Birkhoff's Ergodic Theorem) $y \in {\mathbb K}$ we get $$ V(y)=({\mathcal R}^{n}V)(y)=\sum_{j=0}^{2^{n}-1}V\circ \sigma^j\circ H^{n}(y). $$ Hence $$ \frac1{2^{n}}V(y)=\frac1{2^{n}}\sum_{j=0}^{2^{n}-1}V\circ \sigma^j\circ H^{n}(y). $$ The left hand side tends to $0$ as $n \to \infty$ and the right hand side tends to $\int_{{\mathbb K}}V\,d\mu_{{\mathbb K}}$. \end{proof} \begin{lemma}\label{lem-Vpreimage-rhoi} Let $W$ be any continuous fixed point of ${\mathcal R}$ (on ${\mathbb K}$). Then, for $j=0,1$, $$ W(01\rho_{j})+W(10\rho_{j})=0\quad \mbox{ and }\quad W(1\rho_{j})= W(10\rho_{j})+W(0\rho_{j}).
$$ \end{lemma} \begin{proof} Using the equality $W(x)=({\mathcal R} W)(x)=W\circ H(x)+W\circ\sigma\circ H(x)$ we immediately get $$W\circ (\sigma\circ H)^n(x)=W\circ H\circ(\sigma\circ H)^n(x)+W\circ (\sigma\circ H)^{n+1}(x).$$ Using Lemma~\ref{lem-sigmacirch} on this new equality, we obtain $$ W(i\rho_{j})=W(i\bar{i}\rho_{j})+W(\bar{i}\rho_{j}), $$ for $i, j \in \{0,1\}$. This gives the second equality of the lemma (for $i=1$). The symmetric formula is obtained from the case $i=0$, and adding both formulas yields $W(01\rho_{j})+W(10\rho_{j})=0$. \end{proof} \begin{remark}\label{rem-continuity-weaker} Lemma~\ref{lem-Vpreimage-rhoi} still holds if the potential is only continuous at points of the form $i\rho_{j}$ and $i\bar i\rho_{j}$ with $i,j\in\{0,1\}$. $ \blacksquare$ \end{remark} Recall the one-parameter family of potentials $U_c$ from \eqref{eq:Uc}. They are fixed points of ${\mathcal R}$, not just on ${\mathbb K}$, but globally on $\Sigma$. Let $i:\Sigma \to \Sigma$ be the involution changing digits $0$ to $1$ and vice versa. Clearly $U_c = -U_c \circ i$. We can now prove Theorem~\ref{theo-super-uc}. \begin{proof}[Proof of Theorem~\ref{theo-super-uc}] Let $W$ be a potential on ${\mathbb K}$ that is fixed by ${\mathcal R}$. We assume that the variations are summable: $\sum_{k=1}^{\infty}{\mbox{Var}}_{k}(W)<\infty$. We show that $W$ is constant on $2$-cylinders. Let $x=x_{0}x_{1}\ldots$ and $y=y_{0}y_{1}\ldots$ be in the same $2$-cylinder (namely $x_{0}=y_{0}$ and $x_{1}=y_{1}$). Then, for every $n$, $H^n(x)$ and $H^n(y)$ coincide for (at least) $2^{n+1}$ digits. Therefore \begin{eqnarray*} |W(x)-W(y)|&=& |({\mathcal R}^{n}W)(x)-({\mathcal R}^{n}W)(y)|\\ &=& |(S_{2^{n}}W)(H^n(x))-(S_{2^n}W)(H^n(y))|\\ &\leqslant& \sum_{k=2^n+1}^{2^{n+1}}{\mbox{Var}}_{k}(W). \end{eqnarray*} Convergence of the series $\sum_{k}{\mbox{Var}}_{k}(W)$ implies that $\sum_{k=2^n+1}^{2^{n+1}}{\mbox{Var}}_{k}(W) \to 0$ as $n \to \infty$.
This yields that $W$ is constant on $2$-cylinders. Lemma~\ref{lem-Vpreimage-rhoi} shows that $W|_{[01]}=-W|_{[10]}$. Again, the second equality in that lemma, used for both $\rho_{0}$ and $\rho_{1}$, shows that $W|_{[00]}=W|_{[11]}=0$. Therefore $W = U_c$ with $c = W(\rho_0)$, and the proof is finished. \end{proof} \subsection{Global fixed points for ${\mathcal R}$: Proof of Theorem~\ref{theo-vtilde}}\label{subsec-theo-vtilde} To give an idea why Theorem~\ref{theo-vtilde} holds, observe that the property $V(x) = \frac1n + o(\frac1n)$ if $d(x, {\mathbb K}) = 2^{-n}$ (so $V$ vanishes on ${\mathbb K}$ but is positive elsewhere) is in spirit preserved under iterations of ${\mathcal R}$, provided the shift $\sigma$ doubles the distance from ${\mathbb K}$. Let ${\mathcal H}$ denote the class of potentials satisfying this property. Choose $x$ such that $d(x,{\mathbb K}) = 2^{-m}$. Taking the limit of Riemann sums, and since ${\mathcal R}$ preserves the class of non-negative functions, we obtain \begin{eqnarray*} 0 \leqslant ({\mathcal R}^nV)(x) &=& \sum_{j=0}^{2^n-1} \frac{1}{2^nm-j} + \sum_{j=0}^{2^n-1} o(\frac{1}{2^nm-j}) \\ &\rightarrow_{n\rightarrow\infty}& (1+o(1)) \int_0^1 \frac{1}{m-t} \ dt \\ &=& (1+o(1)) \log \frac{m}{m-1} = \frac1m + o(\frac1m). \end{eqnarray*} However, it may happen that $d(\sigma(y), {\mathbb K}) < 2d(y, {\mathbb K})$ for some $y = \sigma^j \circ H^n(x)$, in which case we speak of an accident (see Definition~\ref{def-accident}). The proof of the proposition includes an argument that accidents happen only infrequently, and far apart from each other. \begin{remark}\label{rem-o1m} We emphasize an important by-product of the previous computation. If $V$ is of the form $V(x)=o(\frac1m)$ when $d(x,{\mathbb K})=2^{-m}$, then ${\mathcal R}^{n}(V)$ converges to 0. See also Proposition~\ref{prop-renorm-power}. $ \blacksquare$ \end{remark} \begin{proof}[Proof of Theorem~\ref{theo-vtilde}] The proof has three steps.
In the first step we prove that the class ${\mathcal H}$ is invariant under ${\mathcal R}$. In the second step we show that ${\mathcal R}^{n}(V_{0})$, with $V_{0}$ defined by $V_0(x) = \frac1m$ if $d(x,{\mathbb K}) = 2^{-m}$, is positive (outside ${\mathbb K}$) and bounded from above. In the last step we deduce from the first two steps that there exists a unique fixed point and that it is continuous and positive. We also briefly explain why this gives the result for any $V \in {\mathcal H}$. \noindent \emph{Step 1.} We recall that ${\mathcal R}$ is defined by $({\mathcal R} V)(x):=V\circ H(x)+V\circ\sigma\circ H(x)$. As $H$ and $\sigma$ are continuous, ${\mathcal R}(V)$ is continuous if $V$ is continuous. Let $x\in \Sigma$; if $x_{K}\in{\mathbb K}$ is such that \begin{equation}\label{eq:double} d(x,{\mathbb K})=d(x,x_{K})=2^{-m}, \text{ then } d(H(x), H(x_K)) = 2^{-2m}. \end{equation} We claim that if $m \geqslant 3$, then $d(H(x),{\mathbb K}) = d(H(x),H(x_{K}))$. Let us assume by contradiction that $y\in {\mathbb K}$ is such that $d(H(x),{\mathbb K})=d(H(x),y)<d(H(x),H(x_{K}))$. By Lemma~\ref{lem:disjoint}, $y$ belongs either to $H({\mathbb K})$ or to $\sigma\circ H({\mathbb K})$. In the first case, say $H(z)=y$, we get $$ d(H(x),H(z)) < d(H(x),H(x_{K})). $$ This would yield $d(x,z)<d(x,x_{K})$, which contradicts the fact that $d(x,{\mathbb K}) = d(x,x_{K})$. In the other case, say $y=\sigma\circ H(z)$, $m\geqslant 3$ yields $H(x) = a_{0}\bar a_0a_{1}\bar a_1a_{2}\bar a_2\ldots$ and $\sigma\circ H(z) = \bar b_0 b_{1}\bar b_1b_{2}\bar b_2\ldots$. As in the proof of Lemma~\ref{lem:disjoint}, this would show that $y$ must start with $010101$ or $101010$. However, both are forbidden in ${\mathbb K}$ and this produces a contradiction. This finishes the proof of the claim. Lemma~\ref{lem-invertible} also shows that $d(\sigma\circ H(x),{\mathbb K})=2^{-(2m-1)}=d(\sigma\circ H(x),\sigma\circ H(x_{K}))$.
Therefore \begin{equation}\label{eq_R} ({\mathcal R} V)(x)=V\circ H(x)+V\circ \sigma\circ H(x)= \frac1{2m}+\frac1{2m-1}+o(\frac1{m})=\frac1{m}+o(\frac1m). \end{equation} \noindent \emph{Step 2.} We establish upper and lower bounds for ${\mathcal R}^n(V_0)$, where $V_{0}$ is defined by $V_0(x) = \frac1m$ if $d(x,{\mathbb K}) = 2^{-m}$. Let $x \in \Sigma$ be such that $d(x,{\mathbb K}) = 2^{-m}$, and pick $x_K \in {\mathbb K}$ such that $x$ and $x_K$ coincide for exactly $m$ initial digits. Due to the definition of ${\mathbb K}$, $m \geqslant 2$ (for any $x$), but we assume in the following that $m \geqslant 3$. By \eqref{eq:double} we have $d(H^nx, {\mathbb K}) = d(H^nx, H^nx_K) = 2^{-2^nm}$. Assume that the first digit of $x$ is $0$. Then $H^{n}(x)$ coincides with $\rho_{0}$ for at least $2^{n}$ digits. Assume now that $H^nx$ has an accident at the $j$-th shift, $1\leqslant j < 2^n$, so there is $y \in {\mathbb K}$ such that $d(\sigma^j \circ H^n(x), y) <2 d(\sigma^j \circ H^n(x), \sigma^j \circ H^n(x_K))$. \begin{figure} \caption{Half of the sum ${\mathcal R}^{n}V_{0}$.} \end{figure} The last point in Proposition~\ref{prop-time-accident} shows $j\geqslant 2^{n-1}$. Therefore, using again that the sum approximates the Riemann integral, \begin{eqnarray*} ({\mathcal R}^nV_0)(x) \geqslant \frac{1}{2^n} \sum_{j=0}^{2^{n-1}-1} \frac{1}{m-j/2^n} \to_{n\rightarrow\infty} \int_0^\frac12 \frac{1}{m-x} \ dx \geqslant \frac1{2m}. \end{eqnarray*} The worst case scenario for the upper bound is when there is no accident, and then \begin{equation} \label{equ-riemansum-upper} ({\mathcal R}^nV_{0})(x) = \sum_{j=0}^{2^n-1} \frac{1}{2^nm-j} \to_{n\rightarrow\infty} \int_0^1 \frac{1}{m-x} \ dx \leqslant \frac1{m-1} \end{equation} as required.
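Outside the formal argument, the two Riemann-sum bounds of Step 2 are easy to check numerically. The following sketch (helper name `renorm_sum` is ours, introduced only for this illustration) evaluates the no-accident sum $\sum_{j=0}^{2^n-1}\frac{1}{2^nm-j}$ and its half-sum truncation:

```python
import math

def renorm_sum(m, n, terms=None):
    """Riemann sum sum_{j=0}^{terms-1} 1/(2^n m - j) from Step 2;
    with terms = 2^n it is the no-accident (upper-bound) case."""
    N = 2 ** n
    if terms is None:
        terms = N
    return sum(1.0 / (N * m - j) for j in range(terms))

m, n = 5, 12
full = renorm_sum(m, n)                  # ~ integral_0^1 dx/(m-x) = log(m/(m-1))
half = renorm_sum(m, n, terms=2 ** (n - 1))  # ~ integral_0^{1/2} dx/(m-x)

# upper bound log(m/(m-1)) <= 1/(m-1); lower bound: the half sum exceeds 1/(2m)
assert abs(full - math.log(m / (m - 1))) < 1e-3
assert full <= 1 / (m - 1)
assert half >= 1 / (2 * m)
```

The assertions match the limits in \eqref{equ-riemansum-upper} and the accident-free half-sum lower bound.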
\begin{remark}\label{rem-pointproche} Note that the largest distance between ${\mathbb K}$ and the points $\sigma^{k}(H^n(x))$ with $k\in [\![ 0,2^n-1]\!]$ is smaller than $2^{-(2^nm-2^{n}+1)}\leqslant 2^{-2^n}$. This largest distance thus tends to $0$ super-exponentially fast as $n \to \infty$. $ \blacksquare$\end{remark} \noindent \emph{Step 3.} We prove here equicontinuity of the family ${\mathcal R}^{n}(V_{0})$. Namely, there exists some positive $\kappa$ such that for every $n$ and for all $x$ and $y$, $$ |{\mathcal R}^{n}(V_{0})(x)-{\mathcal R}^{n}(V_{0})(y)|\leqslant \frac\kappa{|\log_2 d(x,y)|} $$ holds. Assume that $x$ and $y \in \Sigma$ coincide for $m$ digits. We consider two cases. {\bf Case 1:} $d(x,{\mathbb K})=2^{-m'}=:d(x,z)$, with $m'<m$ (and $z\in {\mathbb K}$). If there are no accidents for $\sigma^{j}\circ H^n(x)$ for $j\in [\![ 0, 2^n[\![$, then for every $j$, $$d(\sigma^j(H^n(x)),{\mathbb K})=d(\sigma^j(H^n(y)),{\mathbb K})=d(\sigma^j(H^n(x)),\sigma^j(H^n(z))),$$ and $V_{0}(\sigma^j(H^n(x)))=V_{0}(\sigma^j(H^n(y)))$. This yields $({\mathcal R}^n V_{0})(x)=({\mathcal R}^n V_{0})(y)$. {\bf Case 2:} If there is an accident, say at time $j_{0}$, then two sub-cases can happen. {\bf Sub-case 2-1:} The accident is due to a point $z'$ that separates before $2^nm$, see Figure~\ref{fig:case2-1}. \begin{figure} \caption{Comparing sequences when the accident occurs before separation.} \end{figure} Again, we claim that $V_{0}(\sigma^j(H^n(x)))=V_{0}(\sigma^j(H^n(y)))$ holds for $j\leqslant j_{0}-1$, but also for $j \geqslant j_{0}$ smaller than the (potential) second accident. Going further, we refer to sub-case 2-2 or case 1. {\bf Sub-case 2-2:} The accident is due to a point much closer to $H^{n}(x)$ than to $H^{n}(y)$, see Figure~\ref{fig:case2-2}.
\begin{figure} \caption{Comparing sequences when the accident occurs after separation.} \end{figure} In that case we recall that the first accident cannot happen before $2^{n-1}$, hence $j_{0}\geqslant 2^{n-1}$. Again, for $j\leqslant j_{0}-1$ we get $V_{0}(\sigma^j(H^n(x)))=V_{0}(\sigma^j(H^n(y)))$. By definition of an accident we get $$\max\left\{ V_{0}(\sigma^{j+2^{n-1}}(H^n(x)))\ , \ V_{0}(\sigma^{j+2^{n-1}}(H^n(y)))\right\} \leqslant \frac1{2^nm-2^{n-1}-j} $$ for $j\geqslant j_{0}$. This yields $$ \left| ({\mathcal R}^{n} V_{0})(x)-({\mathcal R}^n V_{0})(y)\right| \leqslant \sum_{k=j_{0}}^{2^{n-1}}\frac2{2^nm-2^{n-1}-k}=\frac1{2^n}\sum_{k=j_{0}}^{2^{n-1}}\frac2{m-\frac12-\frac{k}{2^n}}. $$ This last sum is a Riemann sum and is thus (uniformly in $n$) comparable to the associated integral $\int_{0}^{\frac12}\frac1{m-\frac12-t}dt \leqslant \frac{1}{2(m-1)}.$ \noindent \emph{Step 4.} Following Step 3, the family $(\frac1n\sum_{k=0}^{n-1}{\mathcal R}^k(V_{0}))_n$ is equicontinuous (and bounded), hence there exist accumulation points. Let us prove that $\displaystyle(\frac1n\sum_{k=0}^{n-1}{\mathcal R}^k(V_{0}))_n$ actually converges. Assume that $\widetilde V_{1}$ and $\widetilde V_{2}$ are two accumulation points. Note that both $\widetilde V_{1}$ and $\widetilde V_{2}$ are fixed points for ${\mathcal R}$. They are continuous functions and Steps 1 and 2 show that they satisfy $$\frac1{2m}\leqslant \widetilde V_{i}(x)\leqslant \frac1{m}+o(\frac1m),$$ if $d(x,{\mathbb K})=2^{-m}$. From this we get $$ \widetilde V_{1}(x)-\widetilde V_{2}(x)\leqslant \frac1{2m}+o(\frac1m)=\frac12V_{0}(x)+o(V_{0}(x)), $$ and then for every $n$, $$ \widetilde V_{1}-\widetilde V_{2}=\frac1n\sum_{k=0}^{n-1}{\mathcal R}^{k}(\widetilde V_{1})-{\mathcal R}^{k}(\widetilde V_{2})\leqslant\frac12\frac1n\sum_{k=0}^{n-1}{\mathcal R}^{k}(V_{0})+o({\mathcal R}^{k}(V_{0})).
$$ We recall from Remark~\ref{rem-o1m} that $o({\mathcal R}^{k}(V_{0}))$ goes to 0 as $k\to\infty$. Taking the limit on the right hand side along the subsequence which converges to $\widetilde V_{2}$ we get $$ \widetilde V_{1}-\widetilde V_{2}\leqslant \frac12\widetilde V_{2} + o(V_0), $$ which is equivalent to $\frac23\widetilde V_{1}\leqslant \widetilde V_{2} + o(V_0)$. Exchanging $\widetilde V_{1}$ and $\widetilde V_{2}$ we also get $\frac23\widetilde V_{2}\leqslant \widetilde V_{1} + o(V_0)$. These two inequalities yield $$ \widetilde V_{1}-\widetilde V_{2}\leqslant \frac13 V_{0}+o(V_{0}) \quad \text { and } \quad \widetilde V_{2}-\widetilde V_{1}\leqslant \frac13 V_{0}+o(V_{0}). $$ Again, applying ${\mathcal R}^{k}$ to these inequalities and taking the Cesaro mean, we get $$ \widetilde V_{1}-\widetilde V_{2} \leqslant \frac13\widetilde V_{2} + o(V_0) \quad \text{ and } \quad \widetilde V_{2}-\widetilde V_{1}\leqslant \frac13\widetilde V_{1} + o(V_0). $$ Iterating this process, we get that for every integer $p$, $$\frac{p}{p+1}\widetilde V_{2} + o(V_0) \leqslant \widetilde V_{1}\leqslant \frac{p+1}p\widetilde V_{2} + o(V_0).$$ This proves $\widetilde V_{1} - \widetilde V_{2} = o(V_0)$, {\em i.e.,\ } $(\widetilde V_{1} - \widetilde V_{2})(x) \to 0$ faster than $V_0(x)$ as $x \to {\mathbb K}$ (see again Remark~\ref{rem-o1m}). But $\widetilde V_{1} - \widetilde V_{2}$ is also fixed by ${\mathcal R}$, so we can apply \eqref{equ-riemansum-upper} with a factor $o(V_0)$ in front. This shows that $\widetilde V_{1} = \widetilde V_{2}$, and hence the convergence of the Cesaro mean $(\frac1n\sum_{k=0}^{n-1}{\mathcal R}^{k}(V_{0}))_n$. This finishes the proof of Theorem~\ref{theo-vtilde}. \end{proof} \subsection{More results on fixed points of ${\mathcal R}$} The same proof also yields a more general result: \begin{proposition} \label{prop-renorm-power} Let $a$ be a positive real number.
Take $V(x) = \frac1{n^{a}}+o(\frac1{n^{a}})$ if $d(x,{\mathbb K}) = 2^{-n}$. Then, for $a>1$, $ \lim_{n \to \infty} {\mathcal R}^nV\equiv 0$, and for $a<1$, $\lim_{n \to \infty} {\mathcal R}^nV\equiv\infty$. \end{proposition} \begin{proof} Immediate, since the Riemann sum as in \eqref{equ-riemansum-upper} has a factor $2^{n(1-a)}$ in front of it. \end{proof} Consequently, any $V$ satisfying $V(x) = \frac1{n}+o(\frac1{n})\mbox{ for }d(x,{\mathbb K}) = 2^{-n}$ belongs to the weak stable set ${\mathcal W}^{s}(\widetilde V)$ of the fixed potential $\widetilde V$ from Theorem~\ref{theo-vtilde}. However, ${\mathcal W}^{s}(\widetilde V)$ is in fact much larger: \begin{proposition} If $V(x) = \frac1n g(x)$ for $d(x, {\mathbb K}) = 2^{-n}$ and $g:\Sigma \to {\mathbb R}$ a continuous function, then $\frac1j \sum_{k=0}^{j-1} {\mathcal R}^k(V) \to \widetilde V \cdot \int_{{\mathbb K}} g\ d\mu_{{\mathbb K}}$. \end{proposition} \begin{proof} Take $\varepsilon > 0$ arbitrary, and take $r \in {\mathbb N}$ so large that $\sup |g| 2^{-r} \leqslant \varepsilon$ and if $d(x, {\mathbb K}) = d(x, x_{{\mathbb K}}) \leqslant 2^{-r}$, then $|g(x) - g(x_{{\mathbb K}})| \leqslant \varepsilon$. Next take $k \in {\mathbb N}$ so large that if $k = r+s$, then $$ \left| \frac{1}{2^s}\sum_{i=0}^{2^s-1} g(\sigma^i(y)) - \int g\ d\mu_{{\mathbb K}} \right| \leqslant \varepsilon $$ uniformly over $y \in {\mathbb K}$.
Then we can estimate \begin{eqnarray*} ({\mathcal R}^k V)(x) &=& \sum_{j=0}^{2^k-1} V \circ \sigma^j \circ H^k(x) \\ &\leqslant& \frac{1}{2^k} \sum_{j=0}^{2^k-1} \frac{1}{m-\frac{j}{2^k}} g \circ \sigma^j \circ H^k(x) \\ &=& \frac{1}{2^r} \sum_{t=0}^{2^r-1}\frac{1}{2^s} \sum_{i=0}^{2^s-1} \frac{1}{m-\frac{1}{2^k} (2^st+i)} g \circ \sigma^{2^st+i} \circ H^k(x) \\ &=& \frac{1}{2^r} \sum_{t=0}^{2^r-2}\frac{1}{2^s} \sum_{i=0}^{2^s-1} \left( \frac{1}{m-\frac{t}{2^r}} + O(2^{-r}) \right) \cdot \int_{{\mathbb K}} \left( g\ d\mu_{{\mathbb K}} + O(\varepsilon) \right) \\ && \ + \ \frac1{2^r} \frac{1}{2^s}\sum_{j=0}^{2^s-1} \frac{1}{m-\frac{1}{2^k} (2^k-2^s+j)} \sup|g|\\ &\to& \int_0^1 \frac{1}{m-x} dx \cdot \int_{{\mathbb K}} g\ d\mu_{{\mathbb K}} + O(3 \varepsilon). \end{eqnarray*} Since $\varepsilon$ is arbitrary, we find $\limsup_k ({\mathcal R}^kV)(x) \leqslant \frac1m \cdot \int_{{\mathbb K}} g\ d\mu_{{\mathbb K}} + o(\frac1m)$. Similar to Step 2 in the proof of Theorem~\ref{theo-vtilde}, we find $\liminf_k ({\mathcal R}^kV)(x) \geqslant \frac1{2m} \cdot \int_{{\mathbb K}} g\ d\mu_{{\mathbb K}} + o(\frac1m)$. From this, using the argument of Step 3 in the proof of Theorem~\ref{theo-vtilde}, we conclude that for the Cesaro means, $\lim_n \frac1n \sum_{k=0}^{n-1}({\mathcal R}^kV)(x) = \widetilde V(x) \cdot \int_{{\mathbb K}} g\ d\mu_{{\mathbb K}}$. \end{proof} \subsection{Unbounded fixed points of ${\mathcal R}$}\label{subsec-unbounded1} The application to Feigenbaum maps discussed in the Appendix of this paper suggests the existence of unbounded fixed points $V_u$ of ${\mathcal R}$ as well. They can actually be constructed explicitly using the disjoint decomposition $$ \Sigma \setminus \sigma^{-1}\{ \rho_0, \rho_1 \} = \sqcup_{k \geqslant 0} \left( (\sigma \circ H)^k(\Sigma) \setminus(\sigma \circ H)^{k+1}(\Sigma) \right).
$$ If we set \begin{equation}\label{eq:Vu} V_u|_{H(\Sigma)} = g \quad \text{ and }\quad V_u(x) = V_u(y) - V_u \circ H(y) \quad \text{ for } x = \sigma \circ H(y), \end{equation} then $V_u$ is well-defined and ${\mathcal R} V_u = V_u$ on $\Sigma \setminus \sigma^{-1}\{ \rho_0, \rho_1 \}$. The simplest example is \begin{equation}\label{eq_exampleVu} V_u|_{ (\sigma \circ H)^k(\Sigma) \setminus(\sigma \circ H)^{k+1}(\Sigma) } = (1-k)\alpha, \end{equation} and we will explore this further for phase transitions in Section~\ref{sec:thermo}. For $x \in \Sigma \setminus \sigma \circ H(\Sigma)$ and $x^k = (\sigma \circ H)^k(x)$, we have $$ V_u(x^k) = g(x) - \sum_{j=1}^k g \circ \sigma^{2^j-2} \circ H^j(x). $$ Now for $x \in [1]$ $$ \sigma^{2^j-2} \circ H^j(x) \to \left\{ \begin{array}{ll} \sigma^{-2}(\rho_0) & \text{ along odd $j$'s,}\\ \sigma^{-2}(\rho_1) & \text{ along even $j$'s,} \end{array}\right. $$ and the reverse formula holds for $x \in [0]$. In either case, $V_u(x^k) \sim -\frac{k}{2} [g \circ \sigma^{-2}(\rho_0) + g \circ \sigma^{-2}(\rho_1)]$. Therefore, unless $g \circ \sigma^{-2}(\rho_0) + g \circ \sigma^{-2}(\rho_1) = 0$, the potential $V_u$ is unbounded near $\lim_{k\to\infty} (\sigma \circ H)^k(x) = \{\sigma^{-1}(\rho_0)\ , \ \sigma^{-1}(\rho_1)\}$, cf.\ Lemma~\ref{lem-sigmacirch}. \begin{remark} A variation of this stems from the decomposition $$ \Sigma \setminus \{ \rho_0, \rho_1 \} = \sqcup_{k \geqslant 0} \left(H^k(\Sigma) \setminus H^{k+1}(\Sigma) \right). $$ In this case, if we define $$ V'_u|_{\sigma \circ H(\Sigma)} = g \quad \text{ and }\quad V'_u(x) = V'_u(y) - V'_u \circ \sigma(x) \quad \text{ for } x = H(y), $$ then $V'_u = {\mathcal R} V'_u$ on $\Sigma \setminus \{ \rho_0, \rho_1 \}$.
$ \blacksquare$ \end{remark} \section{Thermodynamic formalism}\label{sec:thermo} In this section we prove Theorems~\ref{theo-thermo-a<1}, \ref{theo-super-MP} and \ref{theo-thermo-Vu}. In the first subsection we define an induced transfer operator as in \cite{leplaideur1} and collect its properties. Then we prove the theorems. \subsection{General results and a key proposition} Let $V:\Sigma \to {\mathbb R}$ be some potential function, and let $J$ be any cylinder on which the distance to ${\mathbb K}$ is constant, say $\delta_{J}$. Consider the first return map $T:J \to J$, with return time $\tau(x) = \min\{ n \geqslant 1 : \sigma^n(x) \in J\}$, so $T(x) = \sigma^{\tau(x)}(x)$. The sequence of successive return times is denoted by $\tau^k(x)$, $k=1,2,\ldots$ The transfer operator is defined as \begin{equation}\label{eq:L} ({\mathcal L}_{z, \gamma} g)(x) = \sum_{T(y) = x} e^{\Phi_{z,\gamma}(y)} g(y), \end{equation} where $\Phi_{z,\gamma}(y) := -\gamma (S_n V)(y) - n z$ if $\tau(y) = n$. For a given test function $g$ and a point $x\in J$, $({\mathcal L}_{z, \gamma} g)(x)$ is thus a power series in $e^{-z}$. These operators extend the usual transfer operator. They were introduced in \cite{leplaideur1} and allow us to define \emph{local equilibrium states}, {\em i.e.,\ } equilibrium states for potentials of the form $\Phi_{z,\gamma}$ and the dynamical system $(J,T)$. These local equilibrium states are later denoted by $\nu_{z,\gamma}$. We emphasize that, by inducing on $J$, these operators ${\mathcal L}_{z, \gamma}$ allow us to construct equilibrium states for potentials which do not necessarily satisfy the Bowen condition (such as {\em e.g.\ } the Hofbauer potential).
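For intuition on the critical value $z_c$ introduced below, it can help to look at a toy case outside the setting of this paper: the full $2$-shift with $V\equiv 0$ and $J$ the cylinder $[11]$. Then $\Phi_{z,\gamma}(y)=-\tau(y)z$ and $({\mathcal L}_{z,\gamma}{\mathbb 1}_J)(x)=\sum_n R(n)e^{-nz}$, where $R(n)$ counts the first-return words of length $n$, so the radius of convergence in $e^{-z}$ is governed by the exponential growth rate of $R(n)$. A brute-force sketch (the helper `first_return_count` is ours, for illustration only):

```python
from itertools import product
import math

def first_return_count(n):
    """Number of first-return words of length n to the cylinder J = [11]
    in the full 2-shift: y in [11], sigma^n(y) in [11], and sigma^k(y)
    not in [11] for 0 < k < n."""
    count = 0
    for w in product("01", repeat=n):
        s = "".join(w) + "11"  # enough of y to test all the conditions
        if s.startswith("11") and all(s[k:k + 2] != "11" for k in range(1, n)):
            count += 1
    return count

# R(n) satisfies a Fibonacci recurrence, so z_c = lim (1/n) log R(n)
# is the log of the golden mean, and sum_n R(n) e^{-nz} converges
# precisely for z > z_c.
R = [first_return_count(n) for n in range(3, 12)]
assert all(R[i + 2] == R[i + 1] + R[i] for i in range(len(R) - 2))
assert math.log(R[-1]) / 11 < math.log((1 + 5 ** 0.5) / 2)
```

This toy case has no set ${\mathbb K}$ and no accidents; it only illustrates how $z_c$ arises as a growth rate of return words, matching the formula for $z_c$ in the proof of Proposition~\ref{prop-equil-presspos}.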
Nevertheless, we need the following {\em local Bowen condition}: there exists $C_V$ (possibly depending on $J$) such that \begin{equation}\label{eq:Bowen} |(S_nV)(x)-(S_nV)(y)|\leqslant C_{V}, \end{equation} whenever $x, y \in J$ coincide for $n:=\tau^{k}(x)=\tau^{k}(y)$ indices. This holds, {\em e.g.\ } if $V(x)$ depends only on the distance between $x$ and ${\mathbb K}$. \begin{lemma}\label{lem-zc} Let $x \in J$ and let $\gamma$ and $z$ be such that $({\mathcal L}_{z,\gamma}{\mathbb 1}_{J})(x) < \infty$. Then $({\mathcal L}_{z,\gamma}g)(y)<\infty$ for every $y \in J$ and for every continuous function $g:J\rightarrow{\mathbb R}$. \end{lemma} \begin{proof} Note that for any $x, y \in J$, $({\mathcal L}_{z,\gamma}{\mathbb 1}_{J})(x)\approx e^{\pm C_{V}}({\mathcal L}_{z,\gamma}{\mathbb 1}_{J})(y)$. Indeed, if $x'$ and $y'$ are two preimages of $x$ and $y$ in $J$, with the same return time $n$ and such that for every $k\in[\![ 0,n]\!]$ the points $\sigma^{k}(x')$ and $\sigma^{k}(y')$ are in the same cylinder, then $$ |(S_nV)(x')-(S_nV)(y')|\leqslant C_{V}. $$ Recall that $J$ is compact, and that every continuous function $g$ on $J$ is bounded. Hence convergence ({\em i.e.,\ } as a power series) of $({\mathcal L}_{z,\gamma}{\mathbb 1}_{J})(x)$ ensures uniform convergence over $y \in J$ for any continuous $g$. This finishes the proof of the lemma. \end{proof} For fixed $\gamma$, there is a critical $z_c$ such that $({\mathcal L}_{z, \gamma}{\mathbb 1}_{J})(x)$ converges for all $z > z_c$, and $z_{c}$ is the smallest real number with this property. Lemma~\ref{lem-zc} shows that $z_{c}$ is independent of $x$. The next result is straightforward. \begin{lemma}\label{lem-decreaz-zlambda} The spectral radius $\lambda_{z,\gamma}$ of ${\mathcal L}_{z,\gamma}$ is decreasing in both $\gamma$ and $z$.
\end{lemma} We are interested in the critical value $z_c$ and the pressure ${\mathcal P}(\gamma)$, both as functions of $\gamma$. Both curves are decreasing (or at least non-increasing). If the curve $\gamma \mapsto z_c(\gamma)$ avoids the horizontal axis, then there is no phase transition: \begin{proposition}\label{prop-equil-presspos} Let $V$ be continuous and satisfy the local Bowen condition \eqref{eq:Bowen} for every cylinder $J$ disjoint from, and at constant distance to, ${\mathbb K}$. Then the following hold: \begin{enumerate} \item[1.] For every $\gamma\geqslant 0$, the critical value satisfies $z_{c}(\gamma)\leqslant {\mathcal P}(\gamma)$. \item[2.] Assume that the pressure satisfies ${\mathcal P}(\gamma) > -\gamma\int V\,d\mu_{{\mathbb K}}$. Then there exists a unique equilibrium state for $-\gamma V$ and it gives positive mass to every open set in $\Sigma$. Moreover $z_{c}(\gamma)<{\mathcal P}(\gamma)$ and ${\mathcal P}(\gamma)$ is analytic on the largest open interval where the assumption holds. \item[3.] If $({\mathcal L}_{z,\gamma}{\mathbb 1}_{J})(\xi)$ diverges for every (or some) $\xi$ and for $z=z_{c}(\gamma)$, then ${\mathcal P}(\gamma)>z_{c}(\gamma)$ and there is a unique equilibrium state for $-\gamma V$. \end{enumerate} \end{proposition} \begin{proof} There necessarily exists an equilibrium state for $-\gamma V$. Indeed, the potential is continuous and the metric entropy is upper semi-continuous. Therefore any accumulation point as $\varepsilon \to 0$ of a family of measures $\nu_{\varepsilon}$ satisfying $$ h_{\nu_{\varepsilon}}(\sigma)-\gamma\int V\, d\nu_{\varepsilon}\geqslant {\mathcal P}(\gamma)-\varepsilon $$ is an equilibrium state. The main assumptions in the study of local equilibrium states as in \cite{leplaideur1} are that $z > z_{c}(\gamma)$ (to make the transfer operator converge) and that $V$ satisfies the local Bowen property \eqref{eq:Bowen}.
This property is used in several places; in particular, it yields for every $x$ and $y$ in $J$ and for every $n$: $$ e^{-\gamma C_{V}}\leqslant\frac{({\mathcal L}^{n}_{z,\gamma}{\mathbb 1}_{J})(x)}{({\mathcal L}^{n}_{z,\gamma}{\mathbb 1}_{J})(y)}\leqslant e^{\gamma C_{V}}. $$ To prove part 1., recall that $$ ({\mathcal L}_{z,\gamma}{\mathbb 1}_{J})(x):=\sum_{n=1}^{\infty}\left(\sum_{x',\, T(x')=x,\,\tau(x')=n}e^{-\gamma (S_nV)(x')}\right)e^{-nz}, $$ which yields that $ z_c = \limsup_{n}\frac1n\log\left(\sum_{x',\, T(x')=x,\,\tau(x')=n}e^{-\gamma (S_nV)(x')}\right)$. To prove the inequality $z_{c}(\gamma)\leqslant {\mathcal P}(\gamma)$, we copy the proof of Proposition 3.10 in \cite{jacoisa}. Define the measure $\widetilde\nu$ as follows\label{preuve-def-nutilde-pgamme}: for $x$ in $J$ and for each $T$-preimage $y$ of $x$ there exists a unique $\tau(y)$-periodic point $\xi(y) \in J$ coinciding with $y$ until $\tau(y)$. Next we define the measure $\widetilde\nu_{n}$ as the probability measure proportional to $$ \sum_{\xi(y),\tau(y)=n}e^{\Phi_{{\mathcal P}(\gamma),\gamma}(\xi(y))}\left(\sum_{j=0}^{n-1}\delta_{\sigma^{j}\xi(y)}\right)=\sum_{\xi(y),\tau(y)=n}e^{-\gamma (S_nV)(\xi(y))-n{\mathcal P}(\gamma)}\left(\sum_{j=0}^{n-1}\delta_{\sigma^{j}\xi(y)}\right). $$ The measure $\widetilde\nu$ is an accumulation point of $(\widetilde\nu_{n})_{n \in {\mathbb N}}$. It follows from the proof of \cite[Lemma 20.2.3, page 264]{katok} that \begin{equation}\label{equ-zc-pgamme-nutilde} z_{c}(\gamma)\leqslant h_{\widetilde\nu}(\sigma)-\gamma\int V\,d\widetilde\nu\leqslant {\mathcal P}(\gamma). \end{equation} \begin{remark}\label{rem-nutilde-J} We emphasize that $\widetilde\nu_{n}(J)=\frac1n$ for each $n$, which shows that $\widetilde\nu(J)=0$. $ \blacksquare$ \end{remark} Now we prove part 2.
Let $\mu_{\gamma}$ be an ergodic equilibrium state for $-\gamma V$. The assumption ${\mathcal P}(\gamma)>-\gamma\int V\,d\mu_{{\mathbb K}}$ means that the unique shift-invariant measure on ${\mathbb K}$ cannot be an equilibrium state (since $\sigma|_{{\mathbb K}}$ has zero entropy). Hence $\mu_{\gamma}$ gives positive mass to some cylinder $J$ in ${\mathbb K}^{c}$. Thus the conditional measure \begin{equation} \label{eq-meas-open-out} \nu_{\gamma}(\cdot):=\mu_{\gamma}(\cdot\cap J)/\mu_{\gamma}(J) \end{equation} is $T$-invariant (using the above notations). We now focus on the convergence (as a power series) of $({\mathcal L}_{z,\gamma}{\mathbb 1}_{J})(x)$ for any $x \in J$ and $z={\mathcal P}(\gamma)$. The inequality $z_{c}(\gamma)\leqslant {\mathcal P}(\gamma)$ does not ensure convergence of $({\mathcal L}_{z,\gamma} {\mathbb 1}_{J})(x)$ for $z={\mathcal P}(\gamma)$. Again, we copy and adapt arguments from \cite[Proposition 3.10]{jacoisa} to get that $({\mathcal L}_{z,\gamma}{\mathbb 1}_{J})(x)$ converges and that the $\Phi_{z,\gamma}$-pressure is non-positive for $z={\mathcal P}(\gamma)$. In the case $z>{\mathcal P}(\gamma)$, so $z > z_{c}(\gamma)$, we can apply the local thermodynamic formalism for $\Phi_{z,\gamma}$. Moreover, $z>z_{c}(\gamma)$ means that $\frac{\partial}{\partial z} ({\mathcal L}_{z,\gamma}{\mathbb 1}_{J})(x)$ converges. This implies by \cite[Proposition 6.8]{leplaideur1} that there exists a unique equilibrium state $\nu_{z,\gamma}$ on $J$ for $T$ and for the potential $\Phi_{z,\gamma}$, and that the expectation $\int_J \tau \ d\nu_{z,\gamma} < \infty$. In other words, there exists a shift-invariant probability measure $\mu_{z,\gamma}$ such that $$ \mu_{z,\gamma}(J)>0 \mbox{ and } \nu_{z,\gamma}(\cdot):=\frac{\mu_{z,\gamma}(\cdot\cap J)}{\mu_{z,\gamma}(J)}.
$$ The equality $ h_{\nu_{z,\gamma}}(T)+\int\Phi_{z,\gamma}\,d\nu_{z,\gamma}=\log\lambda_{z,\gamma}$ (where $\lambda_{z,\gamma}$ is the spectral radius of ${\mathcal L}_{z,\gamma}$) shows that $$ h_{\mu_{z,\gamma}}(\sigma)-\gamma\int V\,d\mu_{z,\gamma}=z+\mu_{z,\gamma}(J)\log\lambda_{z,\gamma}. $$ As $z > {\mathcal P}(\gamma)$ we get $\lambda_{z,\gamma}\leqslant 1$. Now the Bowen property of the potential shows that for every $x \in J$ and for every $n \geqslant1$: $$({\mathcal L}^{n}_{z,\gamma}{\mathbb 1}_{J})(x)\leqslant e^{\gamma C_V}\lambda^{n}_{z,\gamma}.$$ The power series is decreasing in $z$, thus the monotone convergence theorem shows that it converges for $z={\mathcal P}(\gamma)$. For this value of the parameter $z$, the spectral radius satisfies $\lambda_{{\mathcal P}(\gamma),\gamma} \leqslant 1$. Following \cite{leplaideur1}, there exists a unique local equilibrium state $\nu_{{\mathcal P}(\gamma),\gamma}$, with pressure $\log\lambda_{{\mathcal P}(\gamma),\gamma}\leqslant 0$. This proves that the $\Phi_{{\mathcal P}(\gamma),\gamma}$-pressure is non-positive. Now, we prove that the $\Phi_{z,\gamma}$-pressure is non-negative for $z={\mathcal P}(\gamma)$. Indeed, by Abramov's formula \begin{eqnarray*} 0 &=& h_{\mu_{\gamma}}(\sigma)-\gamma\int V\,d\mu_{\gamma}- {\mathcal P}(\gamma) \\ &=& \mu_{\gamma}(J)\left(h_{\nu_{\gamma}}(T)-\gamma\int (S_{\tau(x)}V)(x)\,d\nu_{\gamma}(x)-{\mathcal P}(\gamma)\int \tau\,d\nu_{\gamma}\right), \end{eqnarray*} which yields $$ h_{\nu_{\gamma}}(T)-\int \big(\gamma (S_{\tau(x)}V)(x)+{\mathcal P}(\gamma) \tau(x)\big)\,d\nu_{\gamma}(x)\ =\ 0. $$ Finally, as the $\Phi_{{\mathcal P}(\gamma),\gamma}$-pressure is both non-negative and non-positive, it is equal to $0$. It also has a unique equilibrium state, which is a Gibbs measure (in $J$ and for the first-return map $T$).
As the conditional measure $\nu_{\gamma}$ has zero $\Phi_{{\mathcal P}(\gamma),\gamma}$-pressure, it is the unique local equilibrium state. The local Gibbs property proves that $\nu_{\gamma}$ gives positive mass to every open set in $J$, and the mixing property shows that the global shift-invariant measure $\mu_{\gamma}$ gives positive mass to every open set in $\Sigma$. We can thus copy the argument to show that $\mu_{\gamma}$ is uniquely determined on each cylinder which does not intersect ${\mathbb K}$ (here we use the assumption that the potential satisfies \eqref{eq:Bowen} for each cylinder $J$ with empty intersection with ${\mathbb K}$). It now remains to prove analyticity of the pressure. Inequality~\eqref{equ-zc-pgamme-nutilde} gives $z_{c}(\gamma)\leqslant h_{\widetilde\nu}(\sigma)-\gamma\int V\,d\widetilde\nu$. Remark~\ref{rem-nutilde-J} states that $\widetilde\nu(J)=0$, and uniqueness of the equilibrium state shows that $\widetilde\nu$ cannot be this equilibrium state (otherwise we would have $\widetilde\nu(J)>0$). Hence, $z_{c}(\gamma)$ is strictly less than ${\mathcal P}(\gamma)$. Then, we use \cite{Hennion-Herve} to get analyticity in each variable $z$ and $\gamma$, and the analytic version of the implicit function theorem (see \cite{Range}) shows that ${\mathcal P}(\gamma)$ is analytic. The proof of part 3 is easier. The divergence of ${\mathcal L}_{z,\gamma}({\mathbb 1}_{J})(\xi)$ for some $\xi$ and $z=z_{c}(\gamma)$ ensures the divergence for every $\xi$, and then Lemma~\ref{lem-decreaz-zlambda} and the local Bowen condition show that $\lambda_{z,\gamma}$ goes to $\infty$ as $z$ goes to $z_{c}(\gamma)$. This means that there exists a unique $Z>z_{c}(\gamma)$ such that $\lambda_{Z,\gamma}=1$. Using the work done in the proof of part 2, we let the reader check that necessarily $Z={\mathcal P}(\gamma)$ and that the local equilibrium state produces a global equilibrium state (see also \cite{leplaideur1}). This finishes the proof of the proposition.
\end{proof} Actually, Proposition~\ref{prop-equil-presspos} says a little bit more. If the second assumption is satisfied, namely ${\mathcal P}(\gamma)>-\gamma\int V\,d\mu_{{\mathbb K}}$, then the unique equilibrium state for $-\gamma V$ in $\Sigma$ is the measure obtained (using Equation~\eqref{eq-meas-open-out}) from the unique equilibrium state $\nu_{{\mathcal P}(\gamma),\gamma}$ for the dynamical system $(J,T)$ associated to the potential $\Phi_{{\mathcal P}(\gamma),\gamma}$. Therefore, two special curves of $z$ as a function of $\gamma$ appear, see Figure~\ref{fig-geen-red-curve}. The first is $z_{c}(\gamma)$, and the second is ${\mathcal P}(\gamma)$, defined by the implicit equality $$\log\lambda_{{\mathcal P}(\gamma),\gamma}=0.$$ We claim that these curves are convex. \begin{figure} \caption{Two important values of $z$ as function of $\gamma$.} \end{figure} \subsection{Counting excursions close to ${\mathbb K}$}\label{subsec:counting} Let $x\in \Sigma$ and $n\in{\mathbb N}$ be such that for $k\in[\![ 0,n-1]\!]$, $d(\sigma^{k}(x),{\mathbb K})\leqslant 2^{-5}\delta_{J}$. We divide the piece of orbit $x, \sigma(x), \dots, \sigma^{n-1}(x)$ into pieces between accidents. We take $b_0 = 0$ by default, and let $y^0 \in {\mathbb K}$ be the point closest to $x$. Inductively, set $$ \begin{array}{rcl} b_1 &=& \min\{j \geqslant 1 : d(\sigma^j(x),{\mathbb K}) \leqslant d(\sigma^j(x),\sigma^j(y^0))\}, \\ && \qquad y^1 \in {\mathbb K} \text{ is the point closest to } \sigma^{b_1}(x). \\[2mm] b_2 &=& \min\{j \geqslant 1 : d(\sigma^{j+b_1}(x),{\mathbb K}) \leqslant d(\sigma^{j+b_1}(x),\sigma^{j}(y^1))\}, \\ && \qquad y^2 \in {\mathbb K} \text{ is the point closest to } \sigma^{b_1+b_2}(x).
\\[2mm] b_3 &=& \min\{j \geqslant 1 : d(\sigma^{j+b_1+b_2}(x),{\mathbb K}) \leqslant d(\sigma^{j+b_1+b_2}(x),\sigma^j(y^2))\}, \\ && \qquad y^3 \in {\mathbb K} \text{ is the point closest to } \sigma^{b_1+b_2+b_3}(x). \\[2mm] \vdots & & \qquad\qquad \vdots\qquad\qquad \qquad\qquad \vdots \end{array} $$ and $d_j = -\log_2 d(\sigma^{\sum_{i < j} b_i}(x),{\mathbb K}) = -\log_2 d(\sigma^{\sum_{i < j} b_i}(x),y^{j-1})$ expresses how close the image of $x$ is to ${\mathbb K}$ at the $(j-1)$-st accident. Following Proposition~\ref{prop-time-accident}, $d_{j}-b_{j}$ is of the form $3^{\varepsilon_{j}}2^{k_{j}}$, with $\varepsilon_{j} \in \{0,1\}$, and $d_{j+1}>d_{j}-b_{j}$ by definition of an accident. One problem we will have to deal with is to count the possible accidents during a very long piece of orbit: if we know $d_{j}-b_{j}$, can we determine the possible values of $d_{j}$? As stated in Subsection~\ref{subsec-genresult-TM}, accidents occur at bispecial words which have to be prefixes of $\tau_{n}\bar\tau_{n}\tau_{n}$ or $\bar\tau_{n}\tau_{n}\bar\tau_{n}$, and are words of the form $\tau_{k}$, $\bar\tau_{k}$, $\tau_{k}\bar\tau_{k}\tau_{k}$ or $\bar\tau_{k}\tau_{k}\bar\tau_{k}$. From now on, we pick some non-negative potential $V$ and assume it satisfies the hypotheses of Proposition~\ref{prop-equil-presspos}. Namely, our potentials are of the form $V(x)=\frac1{n^{a}}+o(\frac1{n^{a}})$ if $\log_{2}(d(x,{\mathbb K}))=-n$. Such potentials satisfy the hypotheses of Proposition~\ref{prop-equil-presspos}, and furthermore, their Birkhoff sums are locally constant. Moreover, for $x$ and $y$ in $J$ coinciding until $n=\tau(x)=\tau(y)$, the assumption $d({\mathbb K},J)=\delta_{J}=d(x,{\mathbb K})=d(y,{\mathbb K})$ shows that for every $j\leqslant n$, $$d(\sigma^{j}(x),{\mathbb K})=d(\sigma^{j}(y),{\mathbb K})$$ holds. Hence $\Phi_{.,\gamma}$ satisfies the local Bowen property \eqref{eq:Bowen}. Let $x$ be a point in $J$.
We want to estimate $(\mathcal{L}_{z,\gamma}\mathbf{1}_{J})(x)$. Let $y$ be a preimage of $x$ for $T$. To estimate $\Phi(y)$, we decompose the orbit $y, \sigma(y), \dots, \sigma^{\tau(y)-1}(y)$ into pieces where $\sigma^j(y)$ is reasonably far away from ${\mathbb K}$ (let $c_r \geqslant 0$ be the length of such a piece) and {\em excursions} close to ${\mathbb K}$. \begin{definition} \label{def-excursion} An excursion begins at $\xi := \sigma^s(y)$ when $\xi$ starts as $\rho_0$ or $\rho_1$ for at least $5-\log_{2}\delta_{J}$ digits ({\em i.e.,\ } $d(\xi, \rho_0)\leqslant\delta_J 2^{-5}$ or $d(\xi, \rho_1) \leqslant\delta_J 2^{-5}$) and ends at $\xi' := \sigma^t(y)$ where $t > s$ is the minimal integer such that $d(\xi', {\mathbb K}) > \delta_J 2^{-5}$. \end{definition} If $\sigma^i(y)$ is very close to ${\mathbb K}$, then due to minimality of $({\mathbb K}, \sigma)$ it takes a uniformly bounded (from above) number of iterates for an excursion to begin. Note that each cylinder for the return map $T$ is characterized by a {\em path} $$ c_0, \underbrace{b_{1,1}, b_{1,2}, \dots, b_{1,N_1}}_{\text{\tiny first excursion}}, c_1, \underbrace{b_{2,1}, \dots, b_{2,N_2}}_{\text{\tiny second excursion}}, c_2, \dots , c_{M-1}, \underbrace{b_{M,1}, b_{M,2}, \dots, b_{M,N_M}}_{\text{\tiny $M$-th excursion}}, c_M. $$ Any piece of orbit between two excursions, or before the first excursion, or after the last excursion is called a \emph{free path}. Let $s_r$ and $t_r$ be the times where the $r$-th free path and the $r$-th excursion begin. Since $J$ is disjoint from ${\mathbb K}$, each piece $c_r$ of free path takes at least two iterates, so $c_r \geqslant 2$ for $0 \leqslant r \leqslant M$. Due to the locally constant potential we are considering, $(\mathcal{L}_{z,\gamma}\mathbf{1}_{J})(x)$ is independent of the point $x$ where it is evaluated.
Hence, for the rest of the proofs in this section, unless it is necessary, we shall just write $\mathcal{L}_{z,\gamma}\mathbf{1}_{J}$. Our strategy is to group paths together according to their free paths and the number of accidents during an excursion. This forms clusters, and the contribution of a cluster with $N$ accidents is of the form \begin{equation}\label{eq:excur_sum} E_{z,\gamma}(\mathbf{1}_{J}) := \sum_{N \geqslant 1} \underbrace{\sum_{\stackrel{\text{allowed}}{(b_j)_{j=1}^N,\ (d_j)_{j=1}^N}} \exp\left(-\gamma \sum_{j=1}^N S_jV \right) \exp\left(-\sum_{j=1}^N b_jz \right) D_{N}}_{A_N} , \end{equation} where $S_jV$ is the Birkhoff sum of the potential after the $j^{th}$ accident but before the next one, and the quantity $D_{N}=\displaystyle e^{\varphi_{N}-(d_{N}-b_{N})z}$ is the contribution of the last part of the orbit after the $N^{th}$ accident. By definition of an accident, this contribution is larger than if there were no accident. Therefore, for non-negative $z$, $\frac{e^{-(d_{N}-b_{N})z}}{(d_{N}-b_{N})^{\gamma}} \leqslant D_N \leqslant 1$. The quantity $A_{N}$ is the sum of the contributions of the clusters with $N$ accidents. Thus we have \begin{eqnarray}\label{equ2-clzgam-Ezgam} (\mathcal{L}_{z,\gamma}\mathbf{1}_{J})(x) &= & \left(\sum_{c_0 \geqslant 5} \sum_{\stackrel{\text{\tiny free}}{c_0-\text{\tiny paths}}} e^{-\gamma \sum_{n=0}^{c_0-1} V(\sigma^n(y)) - c_0z} \right) \times \nonumber \\[3mm] && \hspace{-1cm} \left(\sum_{M \geqslant 0} \left[\left(\sum_{ (c_r)_{r=1}^M } \sum_{\stackrel{\text{\tiny free}}{c_r-\text{\tiny paths}}} e^{-\gamma \sum_{n=0}^{c_r-1} V(\sigma^{n+s_r}(y)) - c_rz}\right) \times \left(E_{z,\gamma}(\mathbf{1}_{J})\right)^M\right]\right).
\end{eqnarray} \subsection{The potential \boldmath $n^{-a}$: \unboldmath Proofs of Theorems~\ref{theo-thermo-a>1} and \ref{theo-thermo-a<1} \label{subsec-1/na} } \subsubsection{Proof of Theorem~\ref{theo-thermo-a>1}} Here we deal with the case $a > 1$ and $V(x) = n^{-a}$ if $d(x, {\mathbb K}) = 2^{-n}$. \begin{proof}[Proof of Theorem~\ref{theo-thermo-a>1}] Since $a > 1$, $$ \sum_{n=d-b+1}^{d}\frac1{n^{a}}\asymp \int_{d-b}^{d}\frac1{x^{a}}dx =\frac1{a-1} \left( \frac{1}{(d-b)^{a-1}} - \frac{1}{d^{a-1}} \right) \leqslant \frac{1}{a-1} < \infty, $$ for all values of $b < d$. To find a lower bound for $E_{z,\gamma}(\mathbf{1}_{J})$ in \eqref{eq:excur_sum}, it suffices to take only excursions with a single accident, and sum over all possible $d_1$ with $d_1-b_1 = 2^k$. Then $$ E_{z,\gamma}(\mathbf{1}_{J}) \geqslant \sum_k e^{-\gamma/(a-1)} = \infty, $$ regardless of the value of $\gamma> 0$. By Proposition~\ref{prop-equil-presspos} (part 1) we get $\mathcal{P}(\gamma) > z_c(\gamma) \geqslant 0=-\gamma\int V\, d\mu_{{\mathbb K}}$. Then Proposition~\ref{prop-equil-presspos} (part 3) ensures that there is no phase transition and that $\gamma\mapsto \mathcal{P}(\gamma)$ is positive and analytic on $[0,\infty)$. To finish the proof of Theorem~\ref{theo-thermo-a>1}, we need to compute $\lim_{\gamma\to\infty}\mathcal{P}(\gamma)$. Let $\mu_{\gamma}$ be the unique equilibrium state for $-\gamma V$. Then $$ \frac{\mathcal{P}(\gamma)}{\gamma}=\frac{h_{\mu_{\gamma}}}\gamma-\int V\,d\mu_{\gamma}, $$ which yields $\limsup_{\gamma\to\infty}\frac{\mathcal{P}(\gamma)}{\gamma}\leqslant 0$, hence $\limsup_{\gamma\to\infty}\mathcal{P}(\gamma)\leqslant 0$. On the other hand, $\mathcal{P}(\gamma)\geqslant 0=h_{\mu_{{\mathbb K}}}-\int V\,d\mu_{{\mathbb K}}$, hence $\liminf_{\gamma\to\infty}\mathcal{P}(\gamma)\geqslant 0$.
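As a quick numerical sanity check of the integral comparison used above (a sketch, not part of the proof; the sample values of $a$, $b$, $d$ are arbitrary), one can verify that the tail sum is dominated by the integral and hence by $\frac1{a-1}$ uniformly in $b<d$:

```python
def tail_sum(a, b, d):
    # sum_{n = d-b+1}^{d} n^{-a}
    return sum(n ** -a for n in range(d - b + 1, d + 1))

def integral_bound(a, b, d):
    # (1/(a-1)) * ( (d-b)^{-(a-1)} - d^{-(a-1)} ), the value of the integral
    return ((d - b) ** (1 - a) - d ** (1 - a)) / (a - 1)

# the sum is bounded by the integral (since n^{-a} <= int_{n-1}^{n} x^{-a} dx),
# which in turn is bounded by 1/(a-1), independently of b and d
for a in (1.5, 2.0, 3.0):
    for (b, d) in ((3, 10), (50, 100), (999, 1000)):
        assert tail_sum(a, b, d) <= integral_bound(a, b, d) <= 1 / (a - 1)
```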
\end{proof} \subsubsection{Proof of Theorem~\ref{theo-thermo-a<1} for a special case} Now take $a \in (0,1)$ and $$ V(x) = n^{-a} \text{ if } d(x,{\mathbb K}) = 2^{-n}, $$ so $$ \Phi_{z,\gamma}(x) = -\gamma S_n V(x) - n z = -\gamma \sum_{k=1}^n k^{-a} - nz. $$ The potential is locally constant on sufficiently small cylinder sets. It thus satisfies the local Bowen condition \eqref{eq:Bowen}, and the hypotheses of Proposition~\ref{prop-equil-presspos} hold. Recall that $$ \sum_{n=d-b+1}^{d}\frac1{n^{a}}\asymp \int_{d-b}^{d}\frac1{x^{a}}dx=\frac1{1-a}\left(d^{1-a}-(d-b)^{1-a}\right), $$ and we shall replace the discrete sum by the integral. The error involved in this can be incorporated in the changed coefficient $(1\pm\varepsilon)\gamma$. Our goal is to prove that $z_{c}(\gamma)=0$ (for every $\gamma$) and that $\mathcal{L}_{0,\gamma}(\mathbf{1}_{J})(x) \to 0$ as $\gamma \to \infty$ (for any $x \in J$). This will prove that there is a phase transition. \begin{lemma} \label{lem-diveznegna} The series $(\mathcal{L}_{z,\gamma}\mathbf{1}_{J})(\xi)$ diverges for $z<0$. \end{lemma} \begin{proof} We employ the notation of \eqref{eq:excur_sum} with our new $V$. In the full shift all orbits appear, and we count here only orbits with a single excursion close to ${\mathbb K}$ and no accident. For each $j$, we consider a piece of orbit of length $2^{k+1}(1+2j)$, coinciding with a piece of orbit within ${\mathbb K}$, and then ``going back'' to $J$. The quantity $E_{z,\gamma}(\mathbf{1}_{J})$ is larger than the contribution of these excursions, which is $$ A_{1}^{k}(z)\geqslant \sum_{j=1}^{\infty} e^{-\frac\gamma{1-a}((2^{k+1}(1+2j))^{1-a}-1)-2^{k+1}(1+2j)z}. $$ As $a<1$, for $z<0$ the term $-2jz$ is eventually larger than $(2^{k+1}(1+2j))^{1-a}$, and the series trivially diverges.
Then $E_{z,\gamma}(\mathbf{1}_{J})$ diverges as well, and \eqref{equ2-clzgam-Ezgam} shows that $\mathcal{L}_{z,\gamma}(\mathbf{1}_{J})$ diverges for every initial point $x \in J$. \end{proof} Let us now consider the case $z=0$. As we are now looking for upper bounds, we can consider the $b_{j}$'s and the $d_{j}$'s as independent and sum over all possibilities (and thus forget the condition $d_{j+1}>d_{j}-b_{j}$). Note that we trivially have $D_{N}\leqslant 1$. For a piece of orbit of length $d$ and with an accident at $b$, $d-b=2^{k}$, the possible values of $d$ are among $2^{k}(1+\frac{j}2)$, $j\geqslant 1$, and then $b=2^{k-1}j$. If $d-b=3 \cdot 2^{k}$, then the possible values of $d$ are among $2^{k}(1+\frac{j}2)$ with $j\geqslant 5$, and then $b=2^{k}(\frac{j}2-2)$. Let $$ B(z) := \sum_{k=4}^{\infty} \sum_{j = 1}^{\infty} e^{-\frac\gamma{1-a}\left(\left(2^{k}(1+\frac{j}2)\right)^{1-a}-2^{k(1-a)}\right)-j2^{k-1}z} $$ and $$ C(z):=\sum_{k=4}^{\infty} \sum_{j = 5}^{\infty} e^{-\frac\gamma{1-a}\left(\left(2^{k}(1+\frac{j}2)\right)^{1-a}-3^{1-a}2^{k(1-a)}\right)-2^{k-1}(j-4)z}. $$ The quantity $B(z)$ is an upper bound for the cluster with one excursion of the form $d-b=2^{k}$, and $C(z)$ is an upper bound for the cluster with one excursion of the form $d-b=3\cdot 2^{k}$. Multiplying $N$ copies to bound from above the contribution of an excursion with $N$ accidents, we get $$ E_{z,\gamma}(\mathbf{1}_{J})\leqslant \sum_N (B(z)+C(z))^N.
$$ Hence \begin{align*}\label{eq:Lsum} &(\mathcal{L}_{0,\gamma}\mathbf{1}_{J})(x) \ \leqslant \ \left(\sum_{c_0 \geqslant 5} \sum_{\stackrel{\text{\tiny free}}{c_0-\text{\tiny paths}}} e^{-\gamma \sum_{n=0}^{c_0-1} V(\sigma^n(y)) - c_0z} \right) \times \nonumber \\[3mm] & \quad \left(\sum_{M \geqslant 0} \left[\left(\sum_{ (c_r)_{r=1}^M } \sum_{\stackrel{\text{\tiny free}}{c_r-\text{\tiny paths}}} e^{-\gamma \sum_{n=0}^{c_r-1} V(\sigma^{n+s_r}(y)) - c_rz}\right) \times \left(\sum_{N\geqslant1}(B(z)+C(z))^N\right)^M\right]\right), \end{align*} where the sum over $(c_r)_{r=1}^M$ is $1$ by convention if $M = 0$. The first factor (the sum over $c_0$) corresponds to the first cluster of free paths, and $c_0 \geqslant 5$ by our choice of the distance $\delta_J$. Note that for the free pieces between excursions the orbit is relatively far from ${\mathbb K}$, so there is $\varepsilon > 0$ depending only on $\delta_J$ such that \begin{equation} \label{equ-encadre-freepath } -c_r (\gamma + z) \leqslant \sum_{n=0}^{c_r-1} -\gamma V(\sigma^{n+s_r}(y)) - c_rz \leqslant -c_r (\varepsilon \gamma + z). \end{equation} An upper bound for $\mathcal{L}_{z,\gamma}(\mathbf{1}_{J})$ is obtained by taking upper bounds for $B$ and $C$ and majorizing the sum over the $c_{r}$ free paths by summing over all values of $c$ and using the upper bound in \eqref{equ-encadre-freepath }. \begin{proof}[Proof of Theorem~\ref{theo-thermo-a<1}] Lemma~\ref{lem-diveznegna} shows that for every $\gamma$, $z_{c}(\gamma)\geqslant 0$. Our goal is to prove that $B(0)+C(0)$ can be made as small as desired by choosing $\gamma$ sufficiently large. This will imply that $z_{c}(\gamma)=0$ for sufficiently large $\gamma$ and that the unique equilibrium state is $\mu_{{\mathbb K}}$. We compute $B(0)$, leaving the very similar computation for $C(0)$ to the reader.
Apply the inequality $u \geqslant \log(1+u)$ to the value of $u$ such that $1+u = (1+\frac{j}{2})^{1-a}$, to obtain $(1+\frac{j}{2})^{1-a} - 1 \geqslant \log\left((1+\frac{j}{2})^{1-a}\right)$, whence $e^{(1+\frac{j}{2})^{1-a} - 1} \geqslant (1+\frac{j}{2})^{1-a}$. Raising this to the power $-\frac{\gamma}{1-a} 2^{k(1-a)}$ and summing over $j$, we get \begin{eqnarray*} \sum_{j=1}^\infty e^{-\frac{\gamma}{1-a} 2^{k(1-a)}((1+\frac{j}{2})^{1-a}-1)} &\leqslant& \sum_{j=1}^\infty\left(1 +\frac{j}{2}\right)^{-\gamma 2^{k(1-a)}} \\ &\leqslant& \left(\frac23\right)^{\gamma 2^{k(1-a)}} + \int_1^\infty \left(1 +\frac{x}{2}\right)^{-\gamma 2^{k(1-a)}} dx\\ &=& \left(1 + \frac{3}{\gamma 2^{k(1-a)}-1} \right) \left(\frac23\right)^{\gamma 2^{k(1-a)}}. \end{eqnarray*} Therefore $$ B(0) = \sum_{k=4}^{\infty}\sum_{j=1}^{\infty}e^{\frac\gamma{1-a}2^{k(1-a)}(1-(1+\frac{j}2)^{1-a})}\leqslant \sum_{k=4}^{\infty} \left(1 + \frac{3}{\gamma 2^{k(1-a)}-1} \right) \left(\frac23 \right)^{\gamma 2^{k(1-a)}} $$ is clearly finite and tends to zero as $\gamma \to \infty$. Now to estimate $(\mathcal{L}_{z,\gamma} \mathbf{1}_{J})(x)$, we have to sum over the free periods as well, and we have \begin{eqnarray}\label{eq:bound} (\mathcal{L}_{0,\gamma} \mathbf{1}_{J})(x) &\leqslant & \left( \sum_{c \geqslant5} 2^c e^{-\varepsilon \gamma c} \right) \cdot \sum_{M \geqslant 0} \left( \sum_{c \geqslant 1} 2^c e^{-c \varepsilon \gamma } (E_{0,\gamma}\mathbf{1}_{J})(\xi) \right)^M \nonumber \\ &\leqslant& \frac{32 e^{-5\varepsilon\gamma}}{1-2e^{-\varepsilon\gamma}} \sum_{M \geqslant 0} \left( \sum_{c \geqslant 1} 2^ce^{-c \varepsilon \gamma } \sum_{N \geqslant 1} (B(0)+C(0))^N \right)^M \!\!\! .
\end{eqnarray} The term in the brackets still tends to zero as $\gamma \to \infty$, and hence is less than $1$ for $\gamma > \gamma_0$, for some sufficiently large $\gamma_0$. The double sum converges for such $\gamma$, so the critical value satisfies $z_c(\gamma) \leqslant 0$ for $\gamma \geqslant\gamma_0$. Lemma~\ref{lem-diveznegna} shows that $z_{c}(\gamma)$ is always non-negative. Therefore $z_{c}(\gamma)=0$ for every $\gamma\geqslant \gamma_{0}$. In fact, for $\gamma$ sufficiently large (hence $e^{-5\varepsilon\gamma}$ sufficiently small), the bound \eqref{eq:bound} is less than one: for every $x \in J$, $(\mathcal{L}_{0,\gamma}\mathbf{1}_{J})(x) < 1$. This means that $\log\lambda_{0,\gamma}$, {\em i.e.,\ } the logarithm of the spectral radius of $\mathcal{L}_{0,\gamma}$, becomes zero at some value of $\gamma$, say $\gamma_2$. Lemma~\ref{lem-decreaz-zlambda} says that the spectral radius decreases in $z$. On the other hand, the pressure $\mathcal{P}(\gamma)$ is non-negative because $\int V\,d\mu_{{\mathbb K}}=0$. Moreover, the curve $z=\mathcal{P}(\gamma)$ is given by the implicit equality $\lambda_{\mathcal{P}(\gamma),\gamma}=1$. Therefore, the curve $\gamma \mapsto \mathcal{P}(\gamma)$ is below the curve $\gamma \mapsto \log\lambda_{0,\gamma}$. Thus it must intersect the horizontal axis at some $\gamma_1\in[\gamma_{0},\gamma_{2}]$ (see Figure~\ref{fig-gamma12}). \begin{figure} \caption{Phase transition at $\gamma_1$} \end{figure} For $\gamma>\gamma_{1}$ convexity yields $\mathcal{P}(\gamma)=0$, hence the function is not analytic at $\gamma_1$ and we have an ultimate phase transition (at $\gamma=\gamma_1$). Analyticity for $\gamma<\gamma_{1}$ follows from Proposition~\ref{prop-equil-presspos}. \end{proof} \subsubsection{Proof of Theorem~\ref{theo-thermo-a<1} for the general case} Now we consider $V$ such that $$ V(x) = n^{-a}+o(n^{-a}) \quad \text{ if } d(x,{\mathbb K}) = 2^{-n}.
$$ For every fixed $\varepsilon_{0}$, there exists some $N_{0}$ such that for every $n\geqslant N_{0}$ and for $x$ such that $d(x,{\mathbb K})=2^{-n}$, $$ \left|V(x)-\frac1{n^{a}}\right|\leqslant \frac{\varepsilon_{0}}{n^{a}}. $$ We can incorporate this perturbation in the free paths, declaring that any path of length less than $N_{0}$ is a free path. Then all the above computations remain valid provided we replace $\gamma$ by $\gamma(1\pm\varepsilon_{0})$. This does not affect the results. \subsection{The proof of Theorem~\ref{theo-super-MP}} As a direct application of Theorem~\ref{theo-thermo-a<1}, we can give a version of the Manneville-Pomeau map with a neutral Cantor set instead of a neutral fixed point. \begin{proof}[Proof of Theorem~\ref{theo-super-MP}] Pick $a > 0$, and consider $V$ and $\gamma_{1}$ as in Subsection~\ref{subsec-1/na} (only for $a<1$). For $a>1$ we pick any positive $\gamma_{1}$. Define the canonical projection $\Pi:\Sigma\rightarrow [0,1]$ by the dyadic expansion: $$ \Pi(x_{0},x_{1},x_{2},\ldots)=\sum_{j}\frac{x_{j}}{2^{j+1}}. $$ It maps ${\mathbb K}$ to a Cantor subset of $[0,1]$. Only dyadic points in $[0,1]$ have two preimages under $\Pi$: namely, $x_{1}\ldots x_{n}10^{\infty}$ and $x_{1}\ldots x_{n}01^{\infty}$ have the same image. \begin{lemma} \label{lem-pot-W1na} There exists a potential $W:\Sigma\to{\mathbb R}$ such that $$W(x)=\frac1{n^{a}}+o(\frac1{n^{a}}) \quad \text{ if } d(x,{\mathbb K})=2^{-n},$$ which is continuous at dyadic points: $$W(x_{1}\ldots x_{n}10^{\infty})=W(x_{1}\ldots x_{n}01^{\infty}),$$ and which is positive everywhere except on ${\mathbb K}$, where it is zero. \end{lemma} \begin{proof} Let us consider the multi-valued function $V\circ\Pi^{-1}$ on the interval. It is uniquely defined at each non-dyadic point. For a dyadic point, consider the two preimages $x_{1}\ldots x_{n}10^{\infty}$ and $x_{1}\ldots x_{n}01^{\infty}$ in $\Sigma$.
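Before treating the two cases, the dyadic identification under $\Pi$ can be illustrated numerically (a sketch, not part of the proof; the finite prefix and the truncation depth are assumptions of the illustration): the two symbolic preimages $x_{1}\ldots x_{n}10^{\infty}$ and $x_{1}\ldots x_{n}01^{\infty}$ project to the same point of $[0,1]$.

```python
def Pi(word, depth=60):
    # canonical projection: Pi(x_0 x_1 ...) = sum_j x_j / 2^(j+1),
    # evaluated on the first `depth` symbols of a truncated word
    return sum(b / 2 ** (j + 1) for j, b in enumerate(word[:depth]))

prefix = [1, 0, 1, 1, 0]            # an arbitrary finite word (hypothetical example)
w_plus = prefix + [1] + [0] * 60    # the word  prefix 1 0^infinity, truncated
w_minus = prefix + [0] + [1] * 60   # the word  prefix 0 1^infinity, truncated

# both words project to the same dyadic point, up to the truncation error 2^{-66}
assert abs(Pi(w_plus, 66) - Pi(w_minus, 66)) < 2 ** -55
```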
\paragraph{\it Case 1.} The word $x_{1}\ldots x_{n}$ (which is ${\mathbb K}$-admissible) has a single suffix in ${\mathbb K}$, say $0$. This means that $x_{1}\ldots x_{n}0$ is an admissible word for ${\mathbb K}$ but not $x_{1}\ldots x_{n}1$. Let $\ul{x}^{-}:=x_{1}\ldots x_{n}01^{\infty}$ and $\ul{x}^{+}:=x_{1}\ldots x_{n}10^{\infty}$. Then \begin{equation} \label{equ1-modif1pot} d(\ul{x}^{+},{\mathbb K})=2^{-n}>d(\ul{x}^{-},{\mathbb K})>2^{-n-4}, \end{equation} where the last inequality comes from the fact that $x_{1}\ldots x_{n}0111$ is not admissible for ${\mathbb K}$. We modify the potential $V$ on the right-hand side of the dyadic point $\Pi(x_{1}\ldots x_{n}10^{\infty})$ as indicated on Figure~\ref{fig-modif1}. \begin{figure} \caption{The modification for words with a single suffix} \end{figure} The inequalities of \eqref{equ1-modif1pot} yield $$ V(\ul{x}^{-})= \frac1{(n+k)^{a}}=\frac1{n^{a}}-\frac{ak}{n^{a+1}}+O(\frac1{n^{a+2}}) =V(\ul{x}^{+})+o(V(\ul{x}^{+})), $$ where $k$ is an integer in $[1,4]$. As the modification is done ``convexly'', the new potential $W$ satisfies, for these modified points, $$W(x)=\frac1{n^{a}}+o(\frac1{n^{a}})\quad \text{ if }d(x,{\mathbb K})=2^{-n}.$$ \paragraph{\it Case 2.} The word $x_{1}\ldots x_{n}$ (which is ${\mathbb K}$-admissible) has two suffixes in ${\mathbb K}$. It may be that $\ul{x}^{+}$ and $\ul{x}^{-}$ are at the same distance to ${\mathbb K}$ (see Figure~\ref{fig-modif2}). In that case we do not need to change the potential around this dyadic point. \begin{figure} \caption{No modification with two different suffixes} \end{figure} If $V(\ul{x}^{+})\neq V(\ul{x}^{-})$, then neither $x_{1}\ldots x_{n}0111$ nor $x_{1}\ldots x_{n}1000$ is admissible for ${\mathbb K}$, and we modify the potential linearly in that region of the interval, as in Figure~\ref{fig-modif3}.
\begin{figure} \caption{Modification with two different suffixes} \end{figure} Again we have $$ V(\ul{x}^{+})=\frac1{(n+j)^{a}}=\frac1{n^{a}}+o(\frac1{n^{a}}) \quad \text{ and } \quad V(\ul{x}^{-}) = \frac1{(n+k)^{a}} = \frac1{n^{a}}+o(\frac1{n^{a}}), $$ where $j$ and $k$ are different integers in $\{ 1,2,3,4\}$. Hence, for these points too, $W$ satisfies $$ W(x)=\frac1{n^{a}}+o(\frac1{n^{a}}) \quad \text{ if }d(x,{\mathbb K})=2^{-n}. $$ Positivity of $W$ away from ${\mathbb K}$ follows from the positivity of $V$ and from the way we modify it to obtain $W$. Clearly $W$ vanishes on ${\mathbb K}$. \end{proof} \paragraph{\it The case $a<1$} Continuing the proof of Theorem~\ref{theo-super-MP}, let $\nu_{a}$ be the eigen-measure in $\Sigma$, that is, a fixed point of the adjoint of the transfer operator at $\gamma_{1}$ (where the pressure vanishes) for the potential $W$. As the potential $W$ is continuous and the shift is Markov, such a measure always exists. It is conformal in the sense that \begin{equation} \label{equ1-meas-conformal} \nu_{a}(\sigma(B))=\int_{B} e^{\mathcal{P}(\gamma_{1})+\gamma_{1}W}\,d\nu_{a}=\int_{B} e^{\gamma_{1}W}\,d\nu_{a}, \end{equation} for any Borel set $B$ on which $\sigma$ is one-to-one. Since we have a phase transition at $\gamma_{1}$, $\mathcal{P}(\gamma_1) = 0$. Note also that $W$ is positive everywhere except on ${\mathbb K}$, where it vanishes. Now consider the measure $\Pi_{*}(\nu_{a})$ and its distribution function $$\theta_{a}(x) := \nu_{a}([0^\infty,\Pi(x)))=\nu_{a}([0^\infty,\Pi(x)]), $$ the last equality resulting from the fact that $\nu_{a}$ is non-atomic. We emphasize that $\Pi$ maps the lexicographic order in $\Sigma$ to the usual order on the unit interval $[0,1]$. This enables us to define \emph{intervals} in $\Sigma$, for which we will use the same notation $[x,y]$. Let us now compute the derivative of $f_{a}$, defined by $$f_{a}:=\theta_{a}\circ\Pi\circ\sigma\circ\Pi^{-1}\circ \theta_{a}^{-1},$$ at some point $x \in [0,1]$.
For $h$ very small we define $y$ and $y_{h}$ in $[0,1]$ such that $\Pi_{*}\nu_{a}([0,y])=x$ and $\Pi_{*}\nu_{a}([0,y_{h}])=x+h$. Also define $\ul y$ and $\ul{y_{h}}$ such that $\Pi(\ul y)=y$ and $\Pi(\ul{y_{h}})=y_{h}$. Then we get \begin{eqnarray*} \frac{f_{a}(x+h)-f_{a}(x)}{h}&=&\frac{\nu_{a}([\sigma(\ul y),\sigma(\ul{y_{h}})])}{\nu_{a}([\ul y,\ul{y_{h}}])} \\ &=& \frac{\nu_{a}(\sigma([\ul y,\ul{y_{h}}]))}{\nu_{a}([\ul y,\ul{y_{h}}])}\\ &=&\frac1{\nu_{a}([\ul y,\ul{y_{h}}])}\int_{[\ul y,\ul{y_{h}}]} e^{\gamma_{1}W}\,d\nu_{a}\ \rightarrow_{h\to 0} \ e^{\gamma_{1}W(\ul y)}. \end{eqnarray*} This computation is valid if $\Pi^{-1}(y)$ is uniquely determined (namely if $y$ is not dyadic). If $y_{h}$ is dyadic for some $h$, then we choose for $\ul{y_{h}}$ the preimage closest to $\ul y$. If $y$ is dyadic, then the same can be done provided we change the preimage of $y$ under $\Pi$ depending on whether we compute the left or the right derivative. Nevertheless, the potential $W$ is continuous at dyadic points, hence $f_{a}$ has left and right derivatives at every dyadic point and they are equal. We finally get $f'_{a}(x)=e^{\gamma_{1}W\circ\Pi^{-1}(x)}$ (this makes sense also for dyadic points) and then $\log f'_{a}(x)=\gamma_{1}W\circ\Pi^{-1}(x)$. Therefore $f_{a}$ is $C^1$, and as $W$ is positive away from ${\mathbb K}$ and zero on ${\mathbb K}$, $f_{a}$ is expanding away from $\widetilde{\mathbb K}:=\theta_{a}\circ\Pi({\mathbb K})$ and is indifferent on $\widetilde{\mathbb K}$. For $t\in[0, \infty)$, the lifted potential for $-t\log f'_{a}$ is $-t\gamma_{1}W\circ \Pi^{-1}$, which has an ultimate phase transition for $t=1$ and $a \in (0,1)$. \end{proof} \paragraph{\it The case $a>1$} Computations are similar to the case $a<1$, except that we have to add the pressure at $\gamma_{1}$.
The construction is the same, but the map $f_{a}$ satisfies $$f'_{a}(x)=e^{\gamma_{1}W\circ\Pi^{-1}(x)+\mathcal{P}(\gamma_{1})}.$$ This extra term is just a constant, and the thermodynamic formalism for $-t\log f'_{a}$ is the same as the one for $-t\gamma_{1}W\circ\Pi^{-1}$. \subsection{Unbounded potentials: Proof of Theorem~\ref{theo-thermo-Vu}} \label{subsec-unbounded2} We know from Subsection~\ref{subsec-unbounded1} that $\mathcal{R}$ fixes the potential $V_u$, defined in \eqref{eq:Vu}. In this section we set $g \equiv \alpha$, which gives $V_u = \alpha(k-1)$ on $(\sigma \circ H)^k(\Sigma) \setminus (\sigma \circ H)^{k+1}(\Sigma)$. For the thermodynamic properties of this potential, the interesting case is $\alpha < 0$ (see the Introduction before the statement of Theorem~\ref{theo-thermo-Vu} and the Appendix). \begin{lemma}\label{lem-vu-inte-pos} Let $\alpha < 0$. Then $\int V_u d\mu \geqslant \int_\Sigma V_u d\mu_{\mathbb K} = 0$ for every shift-invariant probability measure $\mu$. \end{lemma} \begin{proof} As in Lemma~\ref{lem-sigmacirch}, the set $(\sigma \circ H)^k(\Sigma) = \sigma^{2^k-1} \circ H^k( [00] \sqcup [10] \sqcup [01] \sqcup [11])$ consists of four $(2^k+1)$-cylinders containing the points $1\rho_0$, $0\rho_0$, $1\rho_1$ and $0\rho_1$ respectively, and they are mapped into the two $2^k$-cylinders containing $\rho_0$ and $\rho_1$. In other words, $(\sigma \circ H)^k(\Sigma) = \sigma^{-1} \circ H^k(\Sigma)$, and by Lemma~\ref{lem:disjoint}, its next $2^k$ shifts are pairwise disjoint. Therefore $\mu_{\mathbb K}((\sigma \circ H)^k(\Sigma)) = 2^{-k}$ and $\mu_{\mathbb K}((\sigma \circ H)^k(\Sigma) \setminus (\sigma \circ H)^{k+1}(\Sigma)) = 2^{-(k+1)}$.
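The vanishing of $\int V_u\,d\mu_{\mathbb K}$ can be checked numerically from the masses just computed: $\mu_{\mathbb K}$ gives mass $2^{-(k+1)}$ to the set where $V_u=\alpha(k-1)$, and the resulting series telescopes to zero (a sketch; the truncation of the series is the only approximation):

```python
def integral_Vu_muK(alpha, kmax=200):
    # integral of V_u against mu_K: alpha * sum_{k>=0} (k-1) * 2^{-(k+1)},
    # truncated at kmax; the tail is of order kmax * 2^{-kmax}
    return alpha * sum((k - 1) * 2 ** -(k + 1) for k in range(kmax + 1))

# the sum telescopes to 0, for any value of alpha
assert abs(integral_Vu_muK(-1.0)) < 1e-12
assert abs(integral_Vu_muK(-3.7)) < 1e-12
```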
Since $V_u = \alpha(k-1)$ on $(\sigma \circ H)^k(\Sigma) \setminus (\sigma \circ H)^{k+1}(\Sigma)$, this gives $$ \int V_u \ d\mu_{\mathbb K} = \alpha \sum_{k \geqslant 0} (k-1) 2^{-(k+1)} = -\frac{\alpha}{2} + \alpha\sum_{k \geqslant 2} (k-1) 2^{-(k+1)} = 0. $$ Again, since $\sigma^j\left((\sigma \circ H)^k(\Sigma)\right)$ is disjoint from $(\sigma \circ H)^k(\Sigma)$ for $0 < j < 2^k$, its $\mu$-mass is at most $2^{-k}$ for any shift-invariant probability measure $\mu$. Since $V_u$ is decreasing in $k$ (for $\alpha < 0$), we can minimize the integral $\int V_u \ d\mu$ by putting as much mass on $(\sigma \circ H)^k(\Sigma)$ as possible, for each $k$. But this means that the $\mu$-mass of $(\sigma \circ H)^k(\Sigma) \setminus (\sigma \circ H)^{k+1}(\Sigma)$ becomes $2^{-(k+1)}$ for each $k$, and hence $\mu = \mu_{\mathbb K}$. \end{proof} \begin{remark} \label{rem-measu-discrecylin} As a by-product of our proof, $\mu\left((\sigma\circ H)^{k}(\Sigma)\right)\leqslant 2^{-k}$ for any invariant probability $\mu$ and $k \geqslant 2$. $ \blacksquare$ \end{remark} For fixed $\alpha<0$, the integral $\int V_{u}\,d\mu$ is non-negative and we define for $\gamma\geqslant 0$ $$ \mathcal{P}(\gamma):=\sup_{\mu\ \sigma\text{-inv}}\left\{h_{\mu}-\gamma\int V_{u}\,d\mu\right\}. $$ \begin{proposition} \label{prop-exit-mesdeq-vu} For any $\gamma\geqslant 0$ there exists an equilibrium state for $-\gamma V_{u}$. \end{proposition} To prove this proposition, we need a result on the accumulation value $\liminf_{\varepsilon \to 0}\int V_{u}d\nu_{\varepsilon}$ when $\{ \nu_{\varepsilon} \}_\varepsilon$ is a family of invariant probability measures. \begin{lemma} \label{lem-liminf} Let $\nu_{\varepsilon}$ be a sequence of invariant probability measures converging to $\nu$ in the weak topology as $\varepsilon \to 0$.
Let us set $\nu:=(1-\beta)\mu+\beta\mu_{{\mathbb K}}$, where $\mu$ is an invariant probability measure satisfying $\mu({\mathbb K})=0$ and $\beta\in [0,1]$. Then, $$\liminf_{\varepsilon\to 0}\int V_{u}\,d\nu_{\varepsilon}\geqslant (1-\beta)\int V_{u}\,d\mu.$$ \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem-liminf}] Let us consider an $\eta$-neighborhood $O_{\eta}$ of ${\mathbb K}$ consisting of a finite union of cylinders. Clearly $\left(\sigma\circ H\right)^{j}(\Sigma) \subset O_{\eta}$ for $j = j(\eta) \geqslant 2$ sufficiently large (and $j(\eta) \to \infty$ as $\eta \to 0$). Let $\nu_{\varepsilon}$ be an invariant probability measure. Following the same argument as in the proof of Lemma~\ref{lem-vu-inte-pos}, and in particular Remark~\ref{rem-measu-discrecylin}, we claim that $$ \int \mathbf{1}_{O_{\eta}}V_{u}\,d\nu_{\varepsilon}\geqslant -\frac\alpha2\nu_{\varepsilon} \left(O_{\eta}\setminus (\sigma\circ H)(\Sigma)\right)+\alpha\sum_{k\geqslant j} k2^{-(k+1)} $$ holds. Then we have \begin{eqnarray}\label{equ-mino-meas-neighK} \int V_{u}\,d\nu_{\varepsilon} &\geqslant& \int \mathbf{1}_{\Sigma\setminus O_{\eta}}V_{u}\,d\nu_{\varepsilon}-\frac\alpha2\nu_{\varepsilon}(O_{\eta}\setminus (\sigma\circ H)(\Sigma))+\alpha\sum_{k\geqslant j}k2^{-(k+1)} \nonumber \\ &\geqslant& \int \mathbf{1}_{\Sigma\setminus O_{\eta}}V_{u}\,d\nu_{\varepsilon}+\alpha\sum_{k\geqslant j}k2^{-(k+1)}. \end{eqnarray} Note that $\mathbf{1}_{\Sigma\setminus O_{\eta}}V_{u}$ is a continuous function. Thus, $ \lim_{\varepsilon\to 0}\int \mathbf{1}_{\Sigma\setminus O_{\eta}}V_{u}\,d\nu_{\varepsilon}$ exists and is equal to $\int \mathbf{1}_{\Sigma\setminus O_{\eta}}V_{u}\,d\nu=(1-\beta)\int \mathbf{1}_{\Sigma\setminus O_{\eta}}V_{u}\,d\mu$.
As $\eta \to 0$, this quantity decreases and converges to $(1-\beta)\int V_{u}\,d\mu$ (here we use $\mu({\mathbb K})=0$). Therefore, passing to the limit in \eqref{equ-mino-meas-neighK}, first in $\varepsilon$ and then in $\eta$, we get $$ \liminf_{\varepsilon\to 0}\int V_{u}\,d\nu_{\varepsilon}\geqslant (1-\beta)\int V_{u}\,d\mu. $$ \end{proof} \begin{proof}[Proof of Proposition~\ref{prop-exit-mesdeq-vu}] We repeat the argument given in the proof of Proposition~\ref{prop-equil-presspos} and adapt it as in \cite{jacoisa}. Let $\nu_{\varepsilon}$ be a probability measure such that \begin{equation} \label{equ1-equil-potinfini} h_{\nu_{\varepsilon}}-\gamma\int V_{u}\,d\nu_{\varepsilon}\geqslant \mathcal{P}(\gamma)-\varepsilon, \end{equation} and let $\nu$ be any accumulation point of $\nu_{\varepsilon}$. As $V_{u}$ is discontinuous, we cannot directly pass to the limit $\varepsilon \to 0$ and claim that the integral with respect to the limit measure is the limit of the integrals. However, we claim that $V_{u}$ is continuous everywhere except at the four points $0\rho_{0}$, $0\rho_{1}$, $1\rho_{0}$ and $1\rho_{1}$ (see and adapt the proof of Lemma~\ref{lem-sigmacirch}). These points are in ${\mathbb K}$ and their orbits are dense in ${\mathbb K}$. We thus have to consider two cases. \begin{itemize} \item $\nu({\mathbb K})=0$. Then a standard argument in measure theory says that we do not see the discontinuity, and passing to the limit as $\varepsilon \to 0$ in \eqref{equ1-equil-potinfini}, $$\mathcal{P}(\gamma)\geqslant h_{\nu}-\gamma\int V_{u}\,d\nu\geqslant \mathcal{P}(\gamma),$$ which means that $\nu$ is an equilibrium state. \item $\nu({\mathbb K})>0$. In this case we can write $\nu=\beta\mu_{{\mathbb K}}+(1-\beta)\mu$, where $\mu$ is a $\sigma$-invariant probability satisfying $\mu({\mathbb K})=0$ and $\beta$ belongs to $(0,1]$.
Therefore $$h_{\nu}=\beta h_{\mu_{{\mathbb K}}}+(1-\beta)h_{\mu}=(1-\beta)h_{\mu}.$$ \end{itemize} Lemma~\ref{lem-liminf} yields \begin{equation} \label{equ2-equil-potinfi} \liminf_{\varepsilon\to 0}\int V_{u}\,d\nu_{\varepsilon}\geqslant (1-\beta)\int V_{u}\,d\mu. \end{equation} Hence, passing to the limit in Inequality \eqref{equ1-equil-potinfini}, Inequality \eqref{equ2-equil-potinfi} shows that $$\mathcal{P}(\gamma)\leqslant (1-\beta)h_{\mu}-\gamma(1-\beta)\int V_{u}\,d\mu.$$ This last inequality is impossible if $\beta<1$, by definition of the pressure. Hence $\beta=1$, that is, $\nu_{\varepsilon}$ converges to $\mu_{{\mathbb K}}$, and $h_{\nu_{\varepsilon}}$ converges to $0$. Then \eqref{equ2-equil-potinfi} shows that $\mathcal{P}(\gamma)\leqslant 0$. On the other hand, $\mathcal{P}(\gamma)\geqslant 0$ because the pressure is larger than the free energy of $\mu_{{\mathbb K}}$, which is zero. Therefore $\mu_{{\mathbb K}}$ is an equilibrium state. \end{proof} In order to use Proposition~\ref{prop-equil-presspos} we need to check that $V_{u}$ satisfies its hypotheses. \begin{lemma} \label{lem-vu-loc-Bowen} For every cylinder $J$ which does not intersect ${\mathbb K}$, the potential $V_{u}$ satisfies the local Bowen property \eqref{eq:Bowen}. \end{lemma} \begin{proof} Actually, $V_{u}$ satisfies a stronger property: if $x=x_{0}x_{1}\ldots$ and $y=y_{0}y_{1}\ldots$ are in $J$ (a fixed cylinder with $J\cap {\mathbb K}=\emptyset$), if $n$ is their first return time to $J$, and if $x_{k}=y_{k}$ for every $0 \leqslant k < n$, then $(S_{n}V_{u})(x)=(S_{n}V_{u})(y)$. Assume that $J$ is a $k$-cylinder, and assume without loss of generality that $n > k$.
The coordinates $x_{j}$ and $y_{j}$ coincide for $0 \leqslant j < n$, but since $J$ is a $k$-cylinder, we actually have $$x_{j}=y_{j}\text{ for }0\leqslant j\leqslant n+k-1.$$ We recall that $V_{u}$ is constant on sets of the form $(\sigma\circ H)^{m}(\Sigma)\setminus (\sigma\circ H)^{m+1}(\Sigma)$. Therefore, to compute $V_{u}(z)$ for $z\in\Sigma$ we have to know which set $(\sigma\circ H)^{m}(\Sigma)\setminus (\sigma\circ H)^{m+1}(\Sigma)$ it belongs to. Lemma~\ref{lem-sigmacirch} shows that $z=z_{0}z_{1}\ldots$ belongs to $(\sigma\circ H)^{m}(\Sigma)\setminus (\sigma\circ H)^{m+1}(\Sigma)$ if and only if $z_{1}\ldots z_{2^{m}}$ coincides with $[\rho_{0}]_{2^{m}}$ or $[\rho_{1}]_{2^{m}}$ and $m$ is the largest integer with this property. Let us now study $V_{u}(\sigma^{j}(x))$ (and $V_{u}(\sigma^{j}(y))$) for $j$ between $0$ and $n-1$. We have to find the largest integer $m$ such that $z_{j+1}\ldots z_{j+1+2^{m}}$ coincides with $[\rho_{0}]_{2^{m}}$ or $[\rho_{1}]_{2^{m}}$. As $J$ does not intersect ${\mathbb K}$, the word $x_{n}\ldots x_{n+k-1}$ (which is also the word $y_{n}\ldots y_{n+k-1}$) is not admissible for ${\mathbb K}$. Therefore, the largest $m$ such that $z_{j+1}\ldots z_{j+1+2^{m}}$ coincides with $[\rho_{0}]_{2^{m}}$ or $[\rho_{1}]_{2^{m}}$ satisfies $$2^{m}\leqslant n-j+k-1.$$ In other words, the integer $m$ only depends on the digits where $\sigma^{j}(x)$ and $\sigma^{j}(y)$ coincide. Therefore $V_{u}(\sigma^{j} (x)) = V_{u}(\sigma^{j} (y))$. \end{proof} \begin{remark} An important consequence of Proposition~\ref{prop-exit-mesdeq-vu} and Lemma~\ref{lem-vu-loc-Bowen} is that the conclusions of Proposition~\ref{prop-equil-presspos} hold. Although the potential $V_u$ is not continuous (and in fact undefined at $\sigma^{-1}(\{ \rho_0, \rho_1\})$), it satisfies the local Bowen condition, so that the discontinuity is ``invisible'' for the first return map to $J$.
Proposition~\ref{prop-exit-mesdeq-vu} then implies the existence of an equilibrium state. Furthermore, the critical parameter satisfies $z_c(\gamma) \leqslant {\mathbb C}P(\gamma)$. By a similar argument as used in \cite[Proposition 3.10]{jacoisa} it can be checked that the conclusion of Lemma~\ref{lem-liminf} holds despite the discontinuity of $V_u$. $ \blacksquare$\end{remark} \begin{lemma} Take $\alpha < 0$ and consider the potential $V_u$ and some cylinder set $J$ disjoint from ${\mathbb K}$. The critical parameter for the convergence of $({\mathbb C}L_{z,\gamma}\mathbf{1}_{J})(x)$ satisfies $z_c(\gamma) \geqslant 2^{-e^{-\gamma \alpha + 2}+1} > 0$ for every $\gamma \in {\mathbb R}$ and $x \in J$. \end{lemma} \begin{proof} We now explore the thermodynamic formalism of the unbounded fixed point $V_u$ of ${\mathbb C}R$ given by Equation~\eqref{eq_exampleVu}. This potential is piecewise constant, and its values on cylinder sets intersecting ${\mathbb K}$ can be pictured schematically (with $\alpha = -1$) as follows: $$ \begin{array}{r|ccccccccccccccccc} \rho_0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 &\cdots \\ \rho_1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 &\cdots \\ \hline V_u & 1 & 0 & 1 & -1 & 1 & 0 & 1 & -2 & 1 & 0 & 1 & -1 & 1 & 0 & 1 & -3 &\cdots \end{array} $$ Here, the third line indicates the value of $V_u$ at $\sigma^n(\rho_j)$ for $n = 0,1,2,3,\dots$ and $j = 0,1$. A single ergodic sum of length $b = 2^{k+1}-2^{k-i}$ (with $\alpha < 0$ arbitrary again) for points $x$ in the same cylinder as $\rho_0$ or $\rho_1$ is $$ (S_bV_u)(x) = \sum_{j=0}^{2^{k+1}-2^{k-i}-1} V_u(\sigma^j(x)) = -\alpha(1+i). $$ Therefore, the contribution of $N$ successive excursions is $$ \Phi_{z,\gamma}(\xi) = \sum_{j=1}^N \left( \gamma \alpha (1+i_j) - b_jz \right), $$ where $i = i_j$ is such that $b_j = 2^{k+1}-2^{k-i}$.
The contribution to $({\mathbb C}L_{z,\gamma}\mathbf{1}_{J})(x)$ of one cluster of excursions then becomes $E_{z,\gamma}(\xi) \geqslant \sum_{N \geqslant 1} A^N$, where (assuming that $z \geqslant 0$) \begin{eqnarray*} A = \sum_{\stackrel{\text{\tiny allowed}}{b \geqslant 1}} e^{\gamma \alpha(1+i) - bz} &=& \sum_{k \geqslant 1} \sum_{i=0}^{k-1} e^{\gamma \alpha(1+i) - (2^{k+1}-2^{k-i})z}\\ &\geqslant& e^{\gamma \alpha} \sum_{k \geqslant 1} \sum_{i=0}^{k-1} e^{i \gamma \alpha - 2^{k+1}z}\\ &=& e^{\gamma \alpha} \sum_{k \geqslant 1} \frac{1-e^{\gamma \alpha k}}{1-e^{\gamma \alpha}} e^{- 2^{k+1}z} \geqslant e^{\gamma \alpha} \sum_{k \geqslant 1} e^{- 2^{k+1}z} \end{eqnarray*} Take an integer $M \geqslant e^{-\gamma \alpha +2}$ and $z = 2^{-(M+1)}$. Then, keeping only the first $M$ terms of the above sum, we get that the entire sum is larger than $$ e^{\gamma \alpha} M e^{-2^{M+1} z} \geqslant e^{\gamma \alpha} e^{-\gamma \alpha + 2} e^{-1} = e > 1. $$ Therefore, $A > 1$ and $\sum A^N$ diverges. Hence, the critical parameter satisfies $z_c(\gamma) \geqslant 2^{-e^{-\gamma \alpha + 2}+1} > 0$ for every $\gamma \in {\mathbb R}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{theo-thermo-Vu}] It is a consequence of Proposition~\ref{prop-equil-presspos} that a phase transition can only occur at zero pressure. This never happens, hence the pressure is analytic on $[0, \infty)$ and there is a unique equilibrium state for $-\gamma V_{u}$. \end{proof} \section*{Appendix: The Thue-Morse subshift and the Feigenbaum map}\label{appendix} The logistic Feigenbaum map $f_{\mbox{\tiny q-feig}}:I \to I$ is conjugate to the unimodal interval map $f_{\mbox{\tiny feig}}$, which solves a renormalization equation \begin{equation}\label{eq_renorm_feig} f_{\mbox{\tiny feig}}^2 \circ \Psi(x) = \Psi \circ f_{\mbox{\tiny feig}}(x), \end{equation} for all $x \in I$, where $\Psi$ is an affine contraction depending on $f_{\mbox{\tiny feig}}$.
Note that $f_{\mbox{\tiny feig}}$ is not a quadratic map, but it has a quadratic critical point $c$. See \cite{Epstein} and \cite[Chapter VI]{MSbook} for an extensive survey. As a result of \eqref{eq_renorm_feig}, $f_{\mbox{\tiny feig}}$ is infinitely renormalizable of Feigenbaum type, {\em i.e.,\ } there is a nested sequence $M_k$ of cycles of $2^k$-periodic intervals such that each component of $M_k$ contains two components of $M_{k+1}$. The intersection ${\mathbb C}A := \cap_{k \geqslant 0} M_k$ is a Cantor attractor on which $f_{\mbox{\tiny feig}}$ acts as a dyadic adding machine. The renormalization scaling is the map $\Psi:M_k \to M_{k+1}^\text{\tiny crit}$, where $M_k^\text{\tiny crit}$ is the component of $M_k$ containing the critical point; on each $M_k^\text{\tiny crit}$ we have $f_{\mbox{\tiny feig}}^{2^{k+1}} \circ \Psi = \Psi \circ f_{\mbox{\tiny feig}}^{2^k}$. Furthermore, ${\mathbb C}A$ coincides with the critical $\omega$-limit set $\omega(c)$ and it attracts every point in $I$ except for countably many (pre-)periodic points of (eventual) period $2^k$ for some $k \geqslant 0$. Hence $f_{\mbox{\tiny feig}}:I \to I$ has zero entropy, and the only probability measures it preserves are Dirac measures on periodic orbits and a unique measure on ${\mathbb C}A$. This means that $f_{\mbox{\tiny feig}}:I \to I$ is not very interesting from a thermodynamic point of view. However, we can extend $f_{\mbox{\tiny feig}}$ to a quadratic-like map on a complex domain, with a chaotic Julia set ${\mathbb C}J$ supporting topological entropy $\log 2$, whose dynamics is a finite-to-one quotient of the full two-shift $(\Sigma, \sigma)$. Equation \eqref{eq_renorm_feig} still holds for the complexification $f_{\mbox{\tiny feig}}:U_0 \to V_0$ (a quadratic-like map, to be precise), where $\Psi$ is a linear holomorphic contraction, and $U_0 \Subset V_0$ are open domains in ${\mathbb C}$ such that $U_0$ contains the unit interval.
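The action of $f_{\mbox{\tiny feig}}$ on the Cantor attractor as a dyadic adding machine can be illustrated on finite truncations of binary itineraries. The sketch below is an illustration of ours (the word length $3$ and the wrap-around convention for carries are arbitrary choices):

```python
def odometer(word):
    """Add 1 with carry to a binary word (least significant digit first).

    This models the dyadic adding machine on finite truncations of
    one-sided binary sequences; a carry past the end wraps to all zeros.
    """
    digits = list(word)
    for i, d in enumerate(digits):
        if d == '0':
            digits[i] = '1'      # absorb the carry here and stop
            return ''.join(digits)
        digits[i] = '0'          # d == '1': flip to 0 and carry on
    return ''.join(digits)       # carried past the end: all zeros

# one full cycle through the 2^3 = 8 words of length 3
w, orbit = '000', []
for _ in range(8):
    orbit.append(w)
    w = odometer(w)
assert w == '000' and len(set(orbit)) == 8  # a single cycle, as for an adding machine
```

The single-cycle property of the orbit mirrors the minimality of the adding machine on ${\mathbb C}A$.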
Renormalization in the complex domain thus means that $M_1^\text{\tiny crit}$ extends to disks $U_1 \Subset V_1$ and $f_{\mbox{\tiny feig}}^2:U_1 \to V_1$ is a two-fold branched cover with branch-point $c$. The {\em little Julia set} $$ {\mathbb C}J_1 = \{ z \in U_1 : f_{\mbox{\tiny feig}}^{2n}(z) \in U_1 \text{ for all } n \geqslant 0\} $$ is a homeomorphic copy under $\Psi$ of the entire Julia set ${\mathbb C}J$, but it should be noted that most points in $U_1$ eventually leave $U_1$ under iteration of $f_{\mbox{\tiny feig}}^2$: $U_1$ is not a periodic disk, only the real trace $M_1^\text{\tiny crit} = U_1 \cap {\mathbb R}$ is $2$-periodic. The same structure is found at all scales: $M_k^\text{\tiny crit} = U_k \cap {\mathbb R}$, $U_k \Subset V_k$ and $f_{\mbox{\tiny feig}}^{2^k}:U_k \to V_k$ is a two-fold covering map with little Julia set $${\mathbb C}J_k := \{ z \in U_k : f_{\mbox{\tiny feig}}^{2^kn}(z) \in U_k \text{ for all } n \geqslant 0\} = \Psi({\mathbb C}J_{k-1}). $$ To explain the connection between $f_{\mbox{\tiny feig}}:{\mathbb C}J \to {\mathbb C}J$ and symbolic dynamics, we first observe that the {\em kneading sequence} $\rho$ ({\em i.e.,\ } the itinerary of the critical value $f_{\mbox{\tiny feig}}(c)$) is the fixed point of the substitution $$ H_{\mbox{\tiny feig}}:\left\{ \begin{array}{l} 0 \to 11, \\ 1 \to 10. \end{array} \right. $$ Let $\Sigma_{\mbox{\tiny feig}} = \overline{\mbox{\rm orb}_\sigma(\rho)}$ be the corresponding shift space. If we quotient over the equivalence relation $x \sim y$ if $x = y$ or $x = w0\rho$ and $y = w1\rho$ (or vice versa) for any finite and possibly empty word $w$, then $\Sigma_{\mbox{\tiny feig}}/\sim$ is homeomorphic to ${\mathbb C}A$, and the itinerary map $i:{\mathbb C}A \to \Sigma_{\mbox{\tiny feig}}/\sim$ conjugates $f_{\mbox{\tiny feig}}$ to the shift $\sigma$.
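The substitution combinatorics can be checked on finite prefixes. The sketch below (an illustration of ours; the iteration depth $5$ is arbitrary) iterates $H_{\mbox{\tiny feig}}$ and the Thue-Morse substitution $H$, and verifies, via the sliding block code $\pi$ introduced in the next paragraph, that $\pi(\rho_0)=\pi(\rho_1)$ agrees with the kneading sequence $\rho$:

```python
def substitute(rule, word, n):
    """Apply the substitution `rule` (a dict on letters) to `word` n times."""
    for _ in range(n):
        word = ''.join(rule[c] for c in word)
    return word

H      = {'0': '01', '1': '10'}  # Thue-Morse substitution
H_feig = {'0': '11', '1': '10'}  # Feigenbaum kneading substitution

rho0 = substitute(H, '1', 5)       # Thue-Morse fixed-point prefix starting with 1
rho1 = substitute(H, '0', 5)       # Thue-Morse fixed-point prefix starting with 0
rho  = substitute(H_feig, '1', 5)  # kneading-sequence prefix

def pi(word):
    """Sliding block code: pi(x)_k = 1 iff x_k != x_{k+1}."""
    return ''.join('1' if a != b else '0' for a, b in zip(word, word[1:]))

# pi identifies the two Thue-Morse fixed points with the kneading sequence
assert pi(rho0) == pi(rho1) == rho[:len(rho0) - 1]
```

The first sixteen digits reproduce the rows of the table displayed in the proof of the lemma above (with $\rho_0 = 1001011001101001\dots$) and the prefix of $\rho$ quoted below.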
To make the connection with the Thue-Morse shift, observe that the sliding block code $\pi: \Sigma \to \Sigma$ defined by $$ \pi(x)_k = \left\{ \begin{array}{ll} 1 & \text{ if } x_k \neq x_{k+1}, \\ 0 & \text{ if } x_k = x_{k+1}, \end{array} \right. $$ is a continuous shift-commuting two-to-one covering map. The fact that it is two-to-one is easily seen because if $x_k = 1-y_k$ for all $k$, then $\pi(x) = \pi(y)$. Surjectivity can also easily be proved; once the first digit of $\pi^{-1}(z)$ is chosen, the following digits are all uniquely determined. It also transforms the Thue-Morse substitution $H$ into $H_{\mbox{\tiny feig}}$ in the sense that $H_{\mbox{\tiny feig}} \circ \pi = \pi \circ H$. For the two Thue-Morse fixed points of $H$ we obtain $$ \pi(\rho_0) = \pi(\rho_1) = \rho = 1011101010111011 1011101010111010\dots $$ Figure~\ref{fig:Thue_Morse_Feigenbaum} summarizes all this in a single commutative diagram. \begin{figure} \caption{Commutative diagram linking the Thue-Morse substitution shift to the Feigenbaum map. The map $\pi$ is continuous, two-to-one and satisfies $\sigma \circ \pi = \pi \circ \sigma$.} \label{fig:Thue_Morse_Feigenbaum} \end{figure} The Cantor set ${\mathbb K}$ factorizes over $\Sigma_{\mbox{\tiny feig}}$ and hence over the Cantor attractor ${\mathbb C}A$. The intermediate space ${\mathbb L}$ factorizes over the real {\em core} $[c_2, c_1]$ in the Julia set ${\mathbb C}J$ and we can characterize its symbolic dynamics by means of a particular order relation. Namely, itineraries $i(z)$ of $z \in [c_2, c_1]$ are exactly those sequences that satisfy $$ \sigma(\rho) \leqslant_{pl} \sigma^n \circ i(z) \leqslant_{pl} \rho \quad \text{ for all } n \geqslant 0.
$$ Here $\leqslant_{pl}$ is the {\em parity-lexicographical order} by which $z <_{pl} z'$ if and only if there is a prefix $w$ such that $$ \left\{ \begin{array}{l} z = w0\dots,\quad z' = w1\dots \text{ and } w \text{ contains an even number of $1$s,}\\[2mm] z = w1\dots,\quad z' = w0\dots \text{ and } w \text{ contains an odd number of $1$s.}\end{array} \right. $$ On the level of itineraries, the substitution $H_{\mbox{\tiny feig}}$ plays the role of the conjugacy $\Psi$: $$ i \circ \Psi(x) = H_{\mbox{\tiny feig}} \circ i(x) \qquad \text{ for all } x \in [c_2, c_1]. $$ Also let $\leqslant_l$ denote the usual lexicographical order. \begin{lemma} Let $[0]$ and $[1]$ denote the one-cylinders of $\Sigma$. The map $\pi : ([0], \leqslant_l) \to (\Sigma, \leqslant_{pl})$ is order preserving and $\pi : ([1], \leqslant_l) \to (\Sigma, \leqslant_{pl})$ is order reversing. \end{lemma} \begin{proof} First we consider $[0]$ and let $w = 0^n$; then $w0\dots <_l w1\dots$ and \begin{equation}\label{eq:0w} \pi(w0\dots) = 0^n\dots \leqslant_{pl} 0^{n-1}1\dots = \pi(w1\dots). \end{equation} Now if we change the $k$-th digit in $w$ (for $k \geqslant 2$), then still $w0\dots <_l w1\dots$ and both the $k$-th and the $(k-1)$-st digit of $\pi(w\dots)$ change. This does not affect the parity of $1$s in $\pi(w)$ and so \eqref{eq:0w} remains valid. Repeating this argument, we obtain that $\pi$ is order-preserving for all words $w$ starting with $0$. Now for the cylinder $[1]$ and $w = 10^{n-1}$, we find $w0\dots <_l w1\dots$ and \begin{equation*}\label{eq:1w} \pi(w0\dots) = 10^{n-1}\dots \geqslant_{pl} 10^{n-2}1\dots = \pi(w1\dots). \end{equation*} The same argument shows that $\pi$ reverses order for all words $w$ starting with $1$.
\end{proof} This lemma shows that $\pi^{-1} \circ i([c_2, c_1])$ consists of the sequences $s$ such that for all $n$, $$ \left\{ \begin{array}{rcccll} \sigma(\rho_1) &\leqslant_{l} & \sigma^n(s) & \leqslant_{l} & \rho_1 & \text{ if } \sigma^n(s) \text{ starts with } 1, \\[2mm] \rho_0 & \leqslant_{l} & \sigma^n(s) & \leqslant_{l} & \sigma(\rho_0) & \text{ if } \sigma^n(s) \text{ starts with } 0. \end{array} \right. $$ However, this class of sequences carries no shift-invariant measures of positive entropy, and the thermodynamic formalism reduces to finding measures that maximize the potential. The measure supported furthest away from ${\mathbb K}$ is the Dirac measure on $\overline{01}$ (with $\pi(\overline{01}) = \overline{1}$). Instead, if we look at the entire Julia set ${\mathbb C}J$, the combination of $\pi$ and the quotient map does not decrease entropy, and the potential $-\log |f'_{\mbox{\tiny feig}}|$ has thermodynamic interest for the complexified Feigenbaum map $f_{\mbox{\tiny feig}}:{\mathbb C}J \to {\mathbb C}J$. Since $\Psi$ is affine, differentiating \eqref{eq_renorm_feig} and taking logarithms, we find that $$ {\mathbb C}R_{\mbox{\tiny feig}}(\log |f'_{\mbox{\tiny feig}}|) := \log |f'_{\mbox{\tiny feig}}| \circ f_{\mbox{\tiny feig}} \circ \Psi + \log |f'_{\mbox{\tiny feig}}| \circ \Psi = \log |f'_{\mbox{\tiny feig}}|, $$ so $V_{\mbox{\tiny feig}} := \log |f'_{\mbox{\tiny feig}}|$ is a fixed point of the renormalization operator ${\mathbb C}R_{\mbox{\tiny feig}}$ mimicking ${\mathbb C}R$. Furthermore, since $U_k = \Psi^{k-1}(U_1)$, its size is exponentially small in $k$ and hence there is some fixed $\alpha < 0$ such that $V_{\mbox{\tiny feig}} \approx \alpha(k-1)$ on $U_k \setminus U_{k+1}$.
Since $U_k \setminus U_{k+1}$ corresponds to the set $(\sigma \circ H)^{k-1}(\Sigma) \setminus (\sigma \circ H)^{k}(\Sigma)$, the potential $V_u$ from Section~\ref{subsec-unbounded1} is comparable to $V_{\mbox{\tiny feig}}$. As shown in Section~\ref{subsec-unbounded2}, $V_u$ exhibits no phase transition. The following proposition for complex analytic maps is stated in general terms, but proves the phase transition of Feigenbaum maps in particular. \begin{proposition}\label{prop:feig_PT} Let $f:{\mathbb C} \to {\mathbb C}$ be an $n$-covering map without parabolic periodic points such that the omega-limit set $\omega(\mbox{\rm Crit})$ of the critical set is nowhere dense in its Julia set ${\mathbb C}J$, and such that there is some $c \in \mbox{\rm Crit}$ such that $f:\omega(c) \to \omega(c)$ has zero entropy and Lyapunov exponent. Then for $\varphi = \log|f'|$ and every $\gamma > 2$, ${\mathbb C}P(-\gamma \varphi) = 0$. \end{proposition} \begin{proof} As $f$ has no parabolic points, $\lambda_0 := \inf\{ |(f^n)'(p)| \ : \ p \in {\mathbb C}J \text{ is an $n$-periodic point} \} > 1$. Obviously, all the invariant measures $\mu$ supported on $\omega(c)$ have $h_{\mu} -\gamma \int \log |f'| d\mu = 0$, so ${\mathbb C}P(-\gamma \varphi) \geqslant 0$. To prove the other inequality, we fix $\gamma > 2$ and for some $f$-invariant measure $\mu$, we choose a neighborhood $U$ intersecting ${\mathbb C}J$ but bounded away from $\mbox{\rm orb}(\mbox{\rm Crit})$ such that $\mu(U) > 0$. We can choose $\mbox{\rm diam} (U)$ so small compared to the distance $d(\mbox{\rm orb}(\mbox{\rm Crit}), U)$ that $K^{\gamma-1} < \lambda_0^{\gamma-2}$ where $K$ is the distortion constant in the Koebe Lemma, see \cite[Theorem 1.3]{Pomm}. Since $K \to 1$ as $\kappa := \mbox{\rm diam} (U)/d(\mbox{\rm orb}(\mbox{\rm Crit}), U) \to 0$, we can satisfy the condition on $K$ by choosing $U$ small enough. Let $F:\cup_i U_i \to U$ be the first return map to $U$.
Each branch $F|_{U_i} = f^{\tau_i}|_{U_i}$, with first return time $\tau_i > 0$, can be extended holomorphically to $f^{\tau_i}:V_i \to f^{\tau_i}(V_i)$, where $f^{\tau_i}(V_i)$ contains a disc around $f^{\tau_i}(U_i)$ of diameter $\geqslant d(\mbox{\rm orb}(\mbox{\rm Crit}), U) \geqslant \mbox{\rm diam} (f^{\tau_i}(U_i))/\kappa$. Hence the Koebe Lemma implies that the distortion of $f^{\tau_i}|_{U_i}$ is bounded by $K = K(\kappa)$. Furthermore, since each $U_i$ contains a $\tau_i$-periodic point of multiplier $\geqslant \lambda_0$, we have $\mbox{\rm diam} (U_i)/\mbox{\rm diam} (U) \leqslant K/\lambda_0$. Therefore, for any $x \in U$, \begin{eqnarray*} {\mathbb C}L_{0,\gamma}(\mathbf{1}_{U})(x) &=& \sum_{i\,:\,\exists x' \in U_i,\ F(x') = x} |F'(x')|^{-\gamma} \\ &\leqslant& \sum_i K \left(\frac{\mbox{\rm diam} (U_i)}{\mbox{\rm diam} (U)}\right)^{\gamma} \\ &\leqslant& \sum_i K \ \frac{\mbox{\rm area} (U_i)}{\mbox{\rm area} (U)} \left( \frac{K}{\lambda_0} \right)^{\gamma-2} \\ &\leqslant& K^{\gamma-1} \lambda_0^{-(\gamma-2)} \sum_i \frac{\mbox{\rm area} (U_i)}{\mbox{\rm area} (U)}. \end{eqnarray*} Since the regions $U_i$ are pairwise disjoint, the sum in the final line is at most $1$, so our choice of $K$ gives that the above quantity is bounded by $1$. Therefore the radius of convergence $\lambda_{0,\gamma} \leqslant 1$. Taking the logarithm and using Abramov's formula, we find that the pressure ${\mathbb C}P(-\gamma \varphi) \leqslant 0$. \end{proof} Department of Mathematics\\ University of Surrey\\ Guildford, Surrey, GU2 7XH\\ United Kingdom\\ \texttt{[email protected]}\\ \texttt{http://personal.maths.surrey.ac.uk/st/H.Bruin/} \\[3mm] D\'epartement de Math\'ematiques\\ Universit\'e de Brest\\ 6, avenue Victor Le Gorgeu\\ C.S. 93837, France \\ \texttt{[email protected]}\\ \texttt{http://www.math.univ-brest.fr/perso/renaud.leplaideur} \end{document}
\begin{document} \title{Stochastic Schr\"{o}dinger equations as limit of discrete filtering} \author{John Gough, Andrei Sobolev \\ Department of Computing \& Mathematics\\ Nottingham-Trent University, Burton Street,\\ Nottingham NG1\ 4BU, United Kingdom.\\ [email protected]} \date{} \maketitle \begin{abstract} We consider an open model possessing a Markovian quantum stochastic limit and derive the limit stochastic Schr\"{o}dinger equations for the wave function conditioned on indirect observations, using only the von Neumann projection postulate. We show that the diffusion (Gaussian) situation is universal as a result of the central limit theorem, with the quantum jump (Poissonian) situation being an exceptional case. It is shown that, starting from the corresponding limiting open systems dynamics, the theory of quantum filtering leads to the same equations, thereby establishing the consistency of the quantum stochastic approach for limiting Markovian models. \end{abstract} \section{Introduction} The problem of describing the evolution of a quantum system undergoing continual measurement has been examined from a variety of different physical and mathematical viewpoints; however, there is a consensus that the generic forms of the stochastic Schr\"{o}dinger equation (SSE) governing the state $\psi _{t}\left( \omega \right) $, conditioned on the recorded output $\omega $, take one of the two forms below: \begin{eqnarray} \left| d\psi _{t}\right\rangle &=&\left( L-\lambda _{t}\right) \left| \psi _{t}\right\rangle \,d\hat{q}_{t}+\left( -iH-\frac{1}{2}\left( L^{\dagger }L-2\lambda _{t}L+\lambda _{t}^{2}\right) \right) \left| \psi _{t}\right\rangle \,dt, \label{Gaussian SSE} \\ \left| d\psi _{t}\right\rangle &=&\left( \frac{L-\sqrt{\nu _{t}}}{\sqrt{\nu _{t}}}\right) \left| \psi _{t}\right\rangle \,d\hat{n}_{t}+\left( -iH-\frac{1}{2}L^{\dagger }L-\frac{1}{2}\nu _{t}+\sqrt{\nu _{t}}L\right) \left| \psi _{t}\right\rangle \,dt.
\label{Poissonian SSE} \end{eqnarray} Here $H$ is the system's Hamiltonian and $L$ a fixed operator of the system which is somehow involved with the coupling of the system to the apparatus. In $\left( \ref{Gaussian SSE}\right) $ we have $\lambda _{t}\left( \omega \right) =\dfrac{1}{2}\left\langle \psi _{t}\left( \omega \right) |\,\left( L^{\dagger }+L\right) \psi _{t}\left( \omega \right) \right\rangle $ and $\left\{ \hat{q}_{t}:t\geq 0\right\} $ is a Gaussian martingale process (in fact a Wiener process). In $\left( \ref{Poissonian SSE}\right) $, $\nu _{t}\left( \omega \right) =\left\langle \psi _{t}\left( \omega \right) |\,L^{\dagger }L\psi _{t}\left( \omega \right) \right\rangle $ and $\left\{ \hat{n}_{t}:t\geq 0\right\} $ is a Poissonian martingale process. The former equation describes quantum diffusions [1-7], the latter quantum jumps [8-10]. (By the term martingale, we mean a stochastic process whose current value agrees with the conditional expectation of any future value based on observations up to the present time. Martingales are used mathematically to model noise and, in both cases above, they are to come from continually de-trending the observed output process.) There is a general impression that the continuous time SSEs require additional postulates beyond the standard formalism of quantum mechanics and the von Neumann projection postulate. We shall argue that this is not so. Our aim is to derive the SSEs above as continuous limits of a straightforward quantum dynamics with discrete time measurements. The model we look at is a generalization of one studied by Kist \textit{et al.} \cite{Kistetal} where the environment consists of two-level atoms which sequentially interact with the system and are subsequently monitored. The generalization occurs in considering the most general form of the coupling of the two-level atoms to the system that will lead to a well defined Markovian limit dynamics.
The procedure for conditioning the quantum state, based on discrete measurements, is given by von Neumann's projection postulate. Recording a value of an observable with corresponding eigenspace-projection $\Pi $ will result in the change of vector state $\psi \mapsto p^{-1/2}\Pi \psi $ where $ p=\left\langle \psi |\Pi \psi \right\rangle $ is assumed non-zero. Let us suppose that at discrete times $t=\tau ,2\tau ,3\tau ,\dots $ the system comes in contact with an apparatus and that an indirect measurement is made. Based on the measurement output, which must be viewed as random, we get a time series $\left( \psi _{j}\right) _{j}$ of system vector states. The question is then whether such a time series might converge in the continuous time limit $\left( \tau \rightarrow 0\right) $ and whether it will lead to the standard stochastic Schr\"{o}dinger equations. We apply a procedure pioneered by Smolianov and Truman \cite{ST} to obtain the limit SSEs for the various choices of monitored variable: a key feature of this approach is that only standard quantum mechanics and the projection postulate are needed! The second point of the analysis is to square our results up with the theory of continuous-time quantum filtering \cite{Belavkin},\cite{Holevo},\cite{BGM},\cite{ALV}. This applies to unitary, Markovian evolutions of quantum open systems (that is, joint system and Markovian environment) described by quantum stochastic methods \cite{HP},\cite{Meyer},\cite{Gardiner}. Our model was specifically chosen to lead to a Markovian limit. Here we show that filtering theory applied to the limit dynamics leads to exactly the same limit SSEs we derive earlier. Needless to say, the standard forms $\left( \ref{Gaussian SSE}\right) $ and $\left( \ref{Poissonian SSE}\right) $ occur as generic forms.
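The Gaussian form $\left( \ref{Gaussian SSE}\right) $ can also be illustrated numerically. The following sketch is an illustration of ours, not part of the model of this paper: it integrates the equation for a single qubit with $L=\sigma_{z}$ (so that $\lambda_{t}=\langle \sigma_{z}\rangle$ and $L^{\dagger}L=1$) and $H=\Omega \sigma_{x}$ by a naive Euler scheme. Since the equation preserves the norm of $\psi_{t}$, the discretized norm stays close to $1$:

```python
import random

random.seed(1)
dt, steps = 1e-4, 2000
omega = 1.0                          # Rabi frequency for H = omega * sigma_x
psi = [1 / 2 ** 0.5 + 0j, 1 / 2 ** 0.5 + 0j]  # amplitudes on |0>, |1>

for _ in range(steps):
    a, b = psi
    lam = abs(a) ** 2 - abs(b) ** 2          # lambda_t = <sigma_z> (L Hermitian)
    dq = random.gauss(0.0, 1.0) * dt ** 0.5  # Wiener increment of the martingale q_t
    Lpsi = [a, -b]                           # sigma_z psi
    Hpsi = [omega * b, omega * a]            # omega * sigma_x psi
    # d psi = (L - lam) psi dq + (-iH - (1/2)(L^dag L - 2 lam L + lam^2)) psi dt,
    # with L^dag L = identity here
    psi = [p + (lp - lam * p) * dq
             + (-1j * hp - 0.5 * (p - 2 * lam * lp + lam ** 2 * p)) * dt
           for p, lp, hp in zip(psi, Lpsi, Hpsi)]

norm = sum(abs(c) ** 2 for c in psi) ** 0.5
assert abs(norm - 1.0) < 0.1  # the SSE preserves the norm; Euler error stays small
```

A smaller step size $dt$ tightens the norm conservation, reflecting the exact norm preservation of the continuous-time equation.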
We show that if the two-level atoms are prepared in their ground states then we obtain the jump equation $\left( \ref{Poissonian SSE}\right) $ whenever we monitor whether the post-interaction atom is still in its ground state. In all other cases we are led to a diffusion equation which we show to universally have the form $\left( \ref{Gaussian SSE}\right) $. \section{Limit of Continuous Measurements} Models of the type we consider here have been treated in the continuous time limit by \cite{JML+Partha},\cite{AttalPautrat}, and \cite{GoughLettMathPhys}. In this section we recall the notations and results of \cite{GoughLettMathPhys} detailing how a discrete-time repeated interaction-measurement can, in the continuous time limit, be described as an open quantum dynamics driven by quantum Wiener and Poisson processes. Let $\mathcal{H}_{S}$ be the state space of our system; at times $t=\tau ,2\tau ,3\tau ,\dots $ it interacts with an apparatus. We denote by $\mathcal{H}_{E,k}$ the state space describing the apparatus used at time $t=k\tau $ - this will be a copy of a fixed Hilbert space $\mathcal{H}_{E}$. We are interested in the Hilbert spaces \begin{equation} \mathcal{F}_{t]}^{\left( \tau \right) }=\bigotimes_{k=1}^{\left\lfloor t/\tau \right\rfloor }\mathcal{H}_{E,k},\qquad \mathcal{F}_{(t}^{\left( \tau \right) }=\bigotimes_{k=\left\lfloor t/\tau \right\rfloor +1}^{\infty }\mathcal{H}_{E,k},\qquad \mathcal{F}^{\left( \tau \right) }=\mathcal{F}_{t]}^{\left( \tau \right) }\otimes \mathcal{F}_{(t}^{\left( \tau \right) } \end{equation} where $\left\lfloor x\right\rfloor $ means the integer part of $x$. (We fix a vector $e_{0}\in \mathcal{H}_{E}$ and use this to stabilize the infinite direct product.) We shall refer to $\mathcal{F}_{t]}^{\left( \tau \right) }$ and $\mathcal{F}_{(t}^{\left( \tau \right) }$ as the \textit{past} and \textit{future environment spaces} respectively.
We are interested only in the evolution between the discrete times $t=\tau ,2\tau ,3\tau ,\dots $ and to this end we require, for each $k>0$, a unitary (Floquet) operator, $V_{k}$, to be applied at time $t=k\tau $: its action will be on the joint space $\mathcal{H}_{S}\otimes \mathcal{F}^{\left( \tau \right) }$ but it will have non-trivial action only on the factors $\mathcal{H}_{S}$ and $\mathcal{H}_{E,k}$. The $V_{k}$'s will be copies of a fixed unitary $V$ acting on the representative space $\mathcal{H}_{S}\otimes \mathcal{H}_{E}$. The unitary operator $U_{t}^{\left( \tau \right) }$ describing the evolution from initial time zero to time $t$ is then \begin{equation} U_{t}^{\left( \tau \right) }=V_{\left\lfloor t/\tau \right\rfloor }\cdots V_{2}V_{1} \end{equation} It acts on $\mathcal{H}_{S}\otimes \mathcal{F}_{t]}^{\left( \tau \right) }$ but, of course, has trivial action on the future environment space. The same is true of the discrete time dynamical evolution of observables $X\in \frak{B}\left( \mathcal{H}_{S}\right) $, the space of bounded operators on $\mathcal{H}_{S}$, given by \begin{equation} J_{t}^{\left( \tau \right) }\left( X\right) =U_{t}^{\left( \tau \right) \dagger }\left( X\otimes 1_{\tau }\right) U_{t}^{\left( \tau \right) }\text{, } \end{equation} where $1_{\tau }$ is the identity on $\mathcal{F}^{\left( \tau \right) }$. The discrete time evolution satisfies the difference equation \begin{equation} \frac{1}{\tau }\left( U_{t+\tau }^{\left( \tau \right) }-U_{t}^{\left( \tau \right) }\right) =\frac{1}{\tau }\left( V_{\left\lfloor t/\tau \right\rfloor +1}-1\right) U_{t}^{\left( \tau \right) }.
\end{equation} The state for the environment is chosen to be the vector $\Psi ^{\left( \tau \right) }$ on $\mathcal{F}^{\left( \tau \right) }$ given by \begin{equation*} \Psi ^{\left( \tau \right) }=e_{0}\otimes e_{0}\otimes e_{0}\otimes e_{0}\cdots \end{equation*} and, since $e_{0}$ will typically be identified as the ground state on $\mathcal{H}_{E}$, we shall call $\Psi ^{\left( \tau \right) }$ the \textit{vacuum vector for the environment}. \subsection{Spin-$\frac{1}{2}$ Apparatus} For simplicity, we take $\mathcal{H}_{E}\equiv \mathbb{C}^{2}$. We may think of the apparatus as consisting of a two-level atom (qubit) with ground state $e_{0}$ and excited state $e_{1}$. We introduce the transition operators \begin{equation*} \sigma ^{+}=\left| e_{1}\right\rangle \left\langle e_{0}\right| ,\quad \sigma ^{-}=\left| e_{0}\right\rangle \left\langle e_{1}\right| . \end{equation*} The copies of these operators for the $k$-th atom will be denoted $\sigma _{k}^{+}$ and $\sigma _{k}^{-}$. The operators $\sigma _{k}^{\pm }$ are Fermionic variables and satisfy the anti-commutation relations \begin{equation} \{\sigma _{k}^{\pm },\sigma _{k}^{\pm }\}=0,\text{\ \ }\{\sigma _{k}^{-},\sigma _{k}^{+}\}=1 \end{equation} while commuting for different atoms. \subsection{Collective Operators} We define the \textit{collective operators} $A^{\pm }\left( t;\tau \right) ,\Lambda \left( t;\tau \right) $ to be \begin{equation} A^{\pm }\left( t;\tau \right) :=\sqrt{\tau }\sum_{k=1}^{\left\lfloor t/\tau \right\rfloor }\sigma _{k}^{\pm };\qquad \Lambda \left( t;\tau \right) :=\sum_{k=1}^{\left\lfloor t/\tau \right\rfloor }\sigma _{k}^{+}\sigma _{k}^{-}.
\end{equation} For times $t,s>0$, we have the commutation relations \begin{eqnarray*} \left[ A^{-}\left( t;\tau \right) ,A^{+}\left( s;\tau \right) \right] &=&\tau \left\lfloor \left( \frac{t\wedge s}{\tau }\right) \right\rfloor -2\tau \Lambda \left( t\wedge s;\tau \right) , \\ \left[ \Lambda \left( t;\tau \right) ,A^{\pm }\left( s;\tau \right) \right] &=&\pm A^{\pm }\left( t\wedge s;\tau \right) , \end{eqnarray*} where $s\wedge t$ denotes the minimum of $s$ and $t$. In the limit where $\tau $ goes to zero while $s$ and $t$ are held fixed, we have the approximation \begin{equation} \left[ A^{-}\left( t;\tau \right) ,A^{+}\left( s;\tau \right) \right] \approx t\wedge s. \end{equation} The collective fields $A^{\pm }\left( t;\tau \right) $ converge to Bosonic quantum Wiener processes $A_{t}^{\pm }$ as $\tau \rightarrow 0$, while $\Lambda \left( t;\tau \right) $ converges to the Bosonic conservation process $\Lambda _{t}$ \cite{HP}. This is an example of a general class of well-known quantum stochastic limits \cite{GvW},\cite{Biane}. Intuitively, we may use the following rule of thumb for $t=j\tau :$ \begin{equation} \begin{array}{cc} \tau \simeq dt, & \sqrt{\tau }\sigma _{j}^{-}\simeq dA_{t}^{-}, \\ \sqrt{\tau }\sigma _{j}^{+}\simeq dA_{t}^{+}, & \sigma _{j}^{+}\sigma _{j}^{-}\simeq d\Lambda _{t}. \end{array} \end{equation} These replacements are usually correct when we go from a finite difference equation involving the discrete spins to a quantum stochastic differential equation involving differential processes. The limit processes are denoted as $A_{t}^{10}=A_{t}^{+}, \,A_{t}^{01}=A_{t}^{-}$ and $A_{t}^{11}=\Lambda _{t}$ respectively and we emphasize that they are not considered to live on the same Hilbert space as the discrete system but on a Bose Fock space $\Gamma _{+}\left( L^{2}\left( \mathbb{R}^{+},dt\right) \right) $. (See the appendix for details.) We also set $A_{t}^{00}=t$.
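The single-site relations underlying these collective commutators can be verified directly with $2\times 2$ matrices (a sketch of ours, using nested lists rather than a linear-algebra library):

```python
def matmul(A, B):
    """Product of 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B, sign=1):
    """Entrywise A + sign*B for 2x2 matrices."""
    return [[A[i][j] + sign * B[i][j] for j in range(2)] for i in range(2)]

# sigma^+ = |e1><e0|, sigma^- = |e0><e1| in the ordered basis (e0, e1)
sp = [[0, 0], [1, 0]]
sm = [[0, 1], [0, 0]]
I2 = [[1, 0], [0, 1]]

# anti-commutation relations: {sigma^-, sigma^+} = 1, (sigma^pm)^2 = 0
assert add(matmul(sm, sp), matmul(sp, sm)) == I2
assert matmul(sp, sp) == [[0, 0], [0, 0]]
assert matmul(sm, sm) == [[0, 0], [0, 0]]

# hence [sigma^-, sigma^+] = 1 - 2 sigma^+ sigma^-, which, summed over the
# first floor(t/tau) sites, gives the collective relation
# [A^-(t;tau), A^+(t;tau)] = tau*floor(t/tau) - 2*tau*Lambda(t;tau)
comm = add(matmul(sm, sp), matmul(sp, sm), sign=-1)
num = [[2 * x for x in row] for row in matmul(sp, sm)]
assert comm == add(I2, num, sign=-1)
```

The commutator identity in the last lines is exactly the per-site content of the first displayed commutation relation above.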
\subsection{The Interaction} The $k$-th Floquet operator is then taken to be \begin{equation} V_{k}=\exp \left\{ -i\tau H_{k}^{\left( \tau \right) }\right\} \end{equation} where \begin{equation} H_{k}^{\left( \tau \right) }:=\frac{1}{\tau }H_{11}\otimes \sigma _{k}^{+}\sigma _{k}^{-}+\frac{1}{\sqrt{\tau }}H_{10}\otimes \sigma _{k}^{+}+\frac{1}{\sqrt{\tau }}H_{01}\otimes \sigma _{k}^{-}+H_{00}. \end{equation} Here we must take $H_{11}$ and $H_{00}$ to be self-adjoint and require that $\left( H_{01}\right) ^{\dagger }=H_{10}$. We may identify $H_{00}$ with the free system Hamiltonian $H_{S}$. We shall assume that the operators $H_{\alpha \beta }$ are bounded on $\mathcal{H}_{S}$ with $H_{11}$ also bounded away from zero. The scaling of the spins $\sigma _{k}^{\pm }$ by $\tau ^{-1/2}$ is necessary if we want to obtain a quantum diffusion associated with the $H_{10}$ and $H_{01}$ terms and a zero-intensity Poisson process associated with $H_{11}$ in the $\tau \rightarrow 0$ limit. We shall also employ the following summation convention: whenever a repeated raised and lowered Greek index appears we sum the index over the values zero and one. With this convention, \begin{equation} H_{k}^{\left( \tau \right) }\equiv H_{\alpha \beta }\otimes \left[ \frac{\sigma _{k}^{+}}{\sqrt{\tau }}\right] ^{\alpha }\left[ \frac{\sigma _{k}^{-}}{\sqrt{\tau }}\right] ^{\beta } \end{equation} where we interpret the raised index as a power: that is, $\left[ x\right] ^{0}=1,\,\left[ x\right] ^{1}=x$.
\subsection{Continuous Time Limit Dynamics} We consider the Floquet unitary on $\mathcal{H}_{S}\otimes \mathcal{H}_{E}$ given by \begin{eqnarray*} V &=&\exp \left\{ -i\tau \,H_{\alpha \beta }\otimes \left[ \frac{\sigma ^{+} }{\sqrt{\tau }}\right] ^{\alpha }\left[ \frac{\sigma ^{-}}{\sqrt{\tau }} \right] ^{\beta }\right\} \\ &\simeq &1+\tau L_{\alpha \beta }\otimes \left[ \frac{\sigma ^{+}}{\sqrt{ \tau }}\right] ^{\alpha }\left[ \frac{\sigma ^{-}}{\sqrt{\tau }}\right] ^{\beta } \end{eqnarray*} where $\simeq $ means that we drop terms that do not contribute in our $\tau \rightarrow 0$ limit. Here the so-called \textit{It\^{o} coefficients} $ L_{\alpha \beta }$ are given by \begin{equation} L_{\alpha \beta }=-iH_{\alpha \beta }+\sum_{n\geq 2}\frac{\left( -i\right) ^{n}}{n!}H_{\alpha 1}\left( H_{11}\right) ^{n-2}H_{1\beta }, \label{ItoHolevo} \end{equation} that is, \begin{equation*} \begin{array}{ll} L_{11}=e^{-iH_{11}}-1; & L_{10}=\dfrac{e^{-iH_{11}}-1}{H_{11}}H_{10}; \\ L_{01}=H_{01}\dfrac{e^{-iH_{11}}-1}{H_{11}}; & L_{00}=-iH_{00}+H_{01}\dfrac{ e^{-iH_{11}}-1+iH_{11}}{\left( H_{11}\right) ^{2}}H_{10}. \end{array} \end{equation*} The relationship between the Hamiltonian coefficients $H_{\alpha \beta }$ and the It\^{o} coefficients was first given in \cite{Holevo1}. We remark that they satisfy the relations \begin{equation} L_{\alpha \beta }+L_{\beta \alpha }^{\dagger }+L_{1\alpha }^{\dagger }L_{1\beta }=0. \label{unitary} \end{equation} guaranteeing unitarity \cite{HP}. Consequently we have \begin{equation} L_{11}=W-1;\quad L_{10}=L;\quad L_{01}=-L^{\dagger }W;\quad L_{00}=-iH-\frac{ 1}{2}L^{\dagger }L \label{unitarity} \end{equation} with $W=\exp \left\{ -iH_{11}\right\} $ unitary, $H=H_{00}-H_{01}\frac{ H_{11}-\sin \left( H_{11}\right) }{\left( H_{11}\right) ^{2}}H_{10}$ self-adjoint and $L$ is bounded but otherwise arbitrary. (Note that $\frac{ x-\sin x}{x^{2}}>0$ for $x>0$.) 
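For a one-dimensional system, where all the coefficients $H_{\alpha \beta }$ are scalars, the closed forms above and the unitarity relation $\left( \ref{unitary}\right) $ can be checked numerically. The following sketch is an illustration of ours, with arbitrarily chosen values for $H_{11}$, $H_{10}$ and $H_{00}$:

```python
import cmath, math

# arbitrary scalar coefficients: H11, H00 real, H01 = conj(H10)
H11, H10, H00 = 0.7, 0.3 + 0.2j, 1.1
H01 = H10.conjugate()

W = cmath.exp(-1j * H11)          # W = exp(-i H11) is unitary (|W| = 1)
L = {
    (1, 1): W - 1,
    (1, 0): (W - 1) / H11 * H10,
    (0, 1): H01 * (W - 1) / H11,
    (0, 0): -1j * H00 + H01 * (W - 1 + 1j * H11) / H11 ** 2 * H10,
}

# unitarity: L_ab + conj(L_ba) + conj(L_1a) * L_1b = 0 for all a, b in {0, 1}
for a in (0, 1):
    for b in (0, 1):
        rel = L[(a, b)] + L[(b, a)].conjugate() + L[(1, a)].conjugate() * L[(1, b)]
        assert abs(rel) < 1e-12

# closed form L00 = -iH - (1/2) L^dag L with H = H00 - H01 (H11 - sin H11)/H11^2 H10
Heff = H00 - (H01 * (H11 - math.sin(H11)) / H11 ** 2 * H10).real
Ldag_L = abs(L[(1, 0)]) ** 2
assert abs(L[(0, 0)] - (-1j * Heff - 0.5 * Ldag_L)) < 1e-12
```

For commuting scalars the operator-ordering subtleties disappear, so this only probes the algebraic consistency of the coefficients, not the full operator identity.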
\subsubsection{Convergence of Unitary Evolution} In the above notations, the discrete time family $\left\{ U_{t}^{\left( \tau \right) }:t\geq 0\right\} $\ converges to the quantum stochastic process $\left\{ U_{t}:t\geq 0\right\} $\ on $\mathcal{H}_{S}\otimes \Gamma _{+}\left( L^{2}\left( \mathbb{R}^{+},dt\right) \right) $\ in the sense that, for all $u,v\in \mathcal{H}_{S}$, integers $n,m$\ and all Riemann integrable $\phi _{j},\psi _{j}\in L^{2}\left( \mathbb{R}^{+},dt\right) $, we have the uniform convergence $\left( \tau \rightarrow 0\right) $ \begin{gather} \left\langle A^{+}\left( \phi _{m},\tau \right) \cdots A^{+}\left( \phi _{1},\tau \right) \,u\otimes \Psi ^{\left( \tau \right) }|\,U_{t}^{\left( \tau \right) }\,A^{+}\left( \psi _{n},\tau \right) \cdots A^{+}\left( \psi _{1},\tau \right) \,v\otimes \Psi ^{\left( \tau \right) }\right\rangle \notag \\ \rightarrow \left\langle A^{+}\left( \phi _{m}\right) \cdots A^{+}\left( \phi _{1}\right) \,u\otimes \Psi |\,U_{t}\,A^{+}\left( \psi _{n}\right) \cdots A^{+}\left( \psi _{1}\right) \,v\otimes \Psi \right\rangle . \label{limit} \end{gather} The process $U_{t}$\ is moreover unitary, adapted and satisfies the quantum stochastic differential equation (QSDE, see appendix) \begin{equation} dU_{t}=L_{\alpha \beta }\otimes dA_{t}^{\alpha \beta }\,U_{t},\quad U_{0}=1. \label{limit qsde} \end{equation} \subsubsection{Convergence of Heisenberg Dynamics} Likewise, for $X$ a bounded observable on $\mathcal{H}_{S}$ the discrete time family $\left\{ J_{t}^{\left( \tau \right) }\left( X\right) \right\} $\ converges to the quantum stochastic process $J_{t}\left( X\right) =U_{t}^{\dagger }\left( X\otimes 1\right) U_{t}$\ on $\mathcal{H}_{S}\otimes \Gamma _{+}\left( L^{2}\left( \mathbb{R}^{+},dt\right) \right) $ in the same weak sense as in $\left( \ref{limit}\right) $. 
The limit Heisenberg equations of motion are \begin{equation} dJ_{t}\left( X\right) =J_{t}\left( \mathcal{L}_{\alpha \beta }X\right) \otimes dA_{t}^{\alpha \beta },\quad J_{0}\left( X\right) =X\otimes 1 \end{equation} where \begin{equation} \mathcal{L}_{\alpha \beta }\left( X\right) :=L_{\beta \alpha }^{\dagger }X+XL_{\alpha \beta }+L_{1\alpha }^{\dagger }XL_{1\beta }. \end{equation} We remark that $\mathcal{L}_{\alpha \beta }\left( 1\right) =0$, by the unitarity conditions $\left( \ref{unitary}\right) $. A completely positive semigroup $\left\{ \Xi _{t}:t\geq 0\right\} $ is then defined by $ \left\langle u|\,\Xi _{t}\left( X\right) \,v\right\rangle :=\left\langle u\otimes \Psi |\,J_{t}\left( X\right) \,v\otimes \Psi \right\rangle $ and we have $\Xi _{t}=\exp \left\{ t\mathcal{L}_{00}\right\} $ where the Lindblad generator is $\mathcal{L}_{00}\left( X\right) =\frac{1}{2}\left[ L^{\dagger },X\right] L+\frac{1}{2}L^{\dagger }\left[ X,L\right] -i\left[ X,H\right] $ with $L=\frac{e^{-iH_{11}}-1}{H_{11}}H_{10}$ and $H=H_{00}-H_{01}\frac{ H_{11}-\sin \left( H_{11}\right) }{\left( H_{11}\right) ^{2}}H_{10}$. We remark that such QSDEs occur naturally in Markovian limits for quantum field reservoirs \cite{GREP}, \cite{G1}, see also \cite{ALV}. \section{Conditioning on Measurements} We now consider how the measurement of an apparatus indirectly leads to a conditioning of the system state. For clarity we investigate the situation of a single apparatus to begin with. Initially the apparatus is prepared in state $e_{0}$ and after a time $\tau $ we have the evolution \begin{equation} \phi \otimes e_{0}\rightarrow V\left( \phi \otimes e_{0}\right) \simeq \left( 1+\tau L_{00}\right) \phi \otimes e_{0}+\sqrt{\tau }L_{10}\phi \otimes e_{1}. \label{approxV} \end{equation} We shall measure the $\sigma ^{x}$-observable. 
This can be written as \begin{equation*} \sigma ^{x}=\sigma ^{+}+\sigma ^{-}=\left| e_{+}\right\rangle \left\langle e_{+}\right| -\left| e_{-}\right\rangle \left\langle e_{-}\right| \end{equation*} where $\left| e_{+}\right\rangle =\frac{1}{\sqrt{2}}\left| e_{1}\right\rangle +\frac{1}{\sqrt{2}}\left| e_{0}\right\rangle $ and $\left| e_{-}\right\rangle =\frac{1}{\sqrt{2}}\left| e_{1}\right\rangle -\frac{1}{\sqrt{2}}\left| e_{0}\right\rangle $. (Actually, the main result of this section will remain true provided we measure an observable with eigenstates $\left| e_{\pm }\right\rangle $ different to $\left| e_{0}\right\rangle $ and $\left| e_{1}\right\rangle $. We will show this universality later.) Taking the initial joint state to be $\phi \otimes e_{0}$, we find that the probabilities to get the eigenvalues $\pm 1$ are \begin{equation*} p_{\pm }=\left\langle \phi \otimes e_{0}|\;V^{\dagger }\left( 1_{S}\otimes \Pi _{\pm }\right) V\;\phi \otimes e_{0}\right\rangle \simeq \frac{1}{2}\left[ 1\pm 2\lambda \sqrt{\tau }\right] +O\left( \tau ^{3/2}\right) \end{equation*} where we introduce the expectation \begin{equation*} \lambda =\dfrac{1}{2}\left\langle \phi |\left( L+L^{\dagger }\right) \,\phi \right\rangle . \end{equation*} A pair of linear maps $\mathcal{V}_{\pm }$ on $\mathcal{H}_{S}$ is defined by \begin{equation} \left( \mathcal{V}_{\pm }\phi \right) \otimes e_{\pm }\equiv \left( 1_{S}\otimes \Pi _{\pm }\right) V\;\left( \phi \otimes e_{0}\right) \end{equation} and to leading order we have \begin{equation*} \mathcal{V}_{\pm }\simeq \frac{1}{\sqrt{2}}\left[ 1\pm L\sqrt{\tau }+\left( -iH-\frac{1}{2}L^{\dagger }L\right) \,\tau \right] . \end{equation*} The wave function $\mathcal{W}_{\pm }\phi $, conditioned on a $\pm $-measurement, is therefore \begin{equation} \mathcal{W}_{\pm }\phi :=\frac{\mathcal{V}_{\pm }\phi }{\sqrt{p_{\pm }}}. \end{equation} The maps $\mathcal{W}_{\pm }$ are non-linear, as the probabilities $p_{\pm }$ clearly depend on the state $\phi $. 
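These leading-order formulas are easy to test numerically for a two-level system. In the sketch below, the matrices $L$ and $H$ and the initial state are arbitrary illustrative choices (assumptions, not from the text); building $\mathcal{V}_{\pm }$ to the order shown, one checks that $p_{+}+p_{-}=1$ and $p_{+}-p_{-}=2\lambda \sqrt{\tau }$ up to $O\left( \tau ^{3/2}\right) $.

```python
import numpy as np

tau = 1e-6

# Illustrative qubit operators (assumed): L bounded, H self-adjoint
L = np.array([[0.3, 0.5 - 0.2j], [0.1j, -0.4]])
H = np.array([[1.0, 0.2 + 0.1j], [0.2 - 0.1j, -0.5]])
phi = np.array([1.0, 1.0j]) / np.sqrt(2)            # unit initial state

K = -1j * H - 0.5 * L.conj().T @ L                  # L_00 = -iH - (1/2) L^dag L
Vp = (np.eye(2) + np.sqrt(tau) * L + tau * K) / np.sqrt(2)
Vm = (np.eye(2) - np.sqrt(tau) * L + tau * K) / np.sqrt(2)

p_plus = np.linalg.norm(Vp @ phi) ** 2              # p_+- = || V_+- phi ||^2
p_minus = np.linalg.norm(Vm @ phi) ** 2
lam = 0.5 * np.real(phi.conj() @ (L + L.conj().T) @ phi)

assert abs(p_plus + p_minus - 1.0) < 1e-8           # probabilities sum to one
assert abs((p_plus - p_minus) - 2 * lam * np.sqrt(tau)) < 1e-8
```

The $O(\tau )$ terms cancel in $p_{+}+p_{-}$ precisely because $2\,\mathrm{Re}\left\langle \phi |L_{00}\phi \right\rangle =-\left\| L\phi \right\| ^{2}$, which is why only $O\left( \tau ^{3/2}\right) $ residuals survive in the checks.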
We then have the development \begin{equation} \mathcal{W}_{\pm }\phi \simeq \phi \pm \sqrt{\tau }\,\left( L-\lambda \right) \phi +\tau \left[ -iH-\frac{1}{2}L^{\dagger }L+\lambda \left( \frac{3}{2}\lambda -L\right) \right] \phi . \end{equation} We now introduce a random variable $\eta $ which takes the two possible values $\pm 1$ with probabilities $p_{\pm }$. We call $\eta $ the \textit{discrete output variable}. Then \begin{eqnarray} \mathbb{E}\left[ \eta \right] &=&p_{+}-p_{-}=2\lambda \,\sqrt{\tau }+O\left( \tau \right) \\ \mathbb{E}\left[ \eta ^{2}\right] &=&p_{+}+p_{-}=1+O\left( \tau \right) . \end{eqnarray} Now let us deal with repeated measurements. We shall record an output sequence of $\pm 1$ and we write $\eta _{j}$ as the random variable describing the $j$th output. Statistically, the $\eta _{j}$ are not independent: each $\eta _{j}$ will depend on $\eta _{1},\cdots ,\eta _{j-1}$. We set, for $n=\left\lfloor t/\tau \right\rfloor $ and fixed $\phi \in \mathcal{H}_{S}$, \begin{equation} \phi _{t}^{\left( \tau \right) }=\mathcal{V}_{\eta _{n}}\cdots \mathcal{V}_{\eta _{1}}\phi ,\quad \psi _{t}^{\left( \tau \right) }=\mathcal{W}_{\eta _{n}}\cdots \mathcal{W}_{\eta _{1}}\phi =\frac{1}{\left\| \phi _{t}^{\left( \tau \right) }\right\| }\phi _{t}^{\left( \tau \right) }. \end{equation} We shall have $\Pr \left\{ \eta _{j+1}=\pm 1\right\} \simeq \frac{1}{2}\left[ 1\pm 2\sqrt{\tau }\lambda _{j}^{\left( \tau \right) }\right] $, where $\lambda _{j}^{\left( \tau \right) }=\frac{1}{2}\left\langle \psi _{j}^{\left( \tau \right) }|\,\left( L+L^{\dagger }\right) \,\psi _{j}^{\left( \tau \right) }\right\rangle $, and \begin{eqnarray} \mathbb{E}_{j}^{\tau }\left[ \eta _{j+1}\right] &=&2\lambda _{j}^{\left( \tau \right) }\,\sqrt{\tau }+O\left( \tau \right) \\ \mathbb{E}_{j}^{\tau }\left[ \left( \eta _{j+1}\right) ^{2}\right] &=&1+O\left( \tau \right) \end{eqnarray} where $\mathbb{E}_{j}^{\tau }$ is conditional expectation wrt. 
the variables $\left( \eta _{1},\cdots ,\eta _{j}\right) $. The state $\psi _{\left( j+1\right) \tau }^{\left( \tau \right) }$ after the $\left( j+1\right) $-st measurement depends on the state $\psi _{j\tau }^{\left( \tau \right) }$ and $\eta _{j+1}$ and we have, to relevant order, the finite difference equation \begin{equation*} \psi _{\left( j+1\right) \tau }^{\left( \tau \right) }-\psi _{j\tau }^{\left( \tau \right) }\simeq \eta _{j+1}\sqrt{\tau }\left( L-\lambda _{j}^{\left( \tau \right) }\right) \psi _{j\tau }^{\left( \tau \right) }+\tau \left[ -iH-\frac{1}{2}L^{\dagger }L+\lambda _{j}^{\left( \tau \right) }\left( \frac{3}{2}\lambda _{j}^{\left( \tau \right) }-L\right) \right] \psi _{j\tau }^{\left( \tau \right) }. \end{equation*} We wish to consider the process \begin{equation*} q^{\left( \tau \right) }\left( t\right) =\sqrt{\tau }\sum_{j=1}^{\left\lfloor t/\tau \right\rfloor }\eta _{j}, \end{equation*} which, however, has a non-zero expectation and so is not suitable as a noise term. Instead, we introduce new random variables $\zeta _{j}:=\eta _{j}-2\lambda _{j-1}^{\left( \tau \right) }\sqrt{\tau }$ and consider $\hat{q}^{\left( \tau \right) }\left( t\right) =\sqrt{\tau }\sum_{j=1}^{\left\lfloor t/\tau \right\rfloor }\zeta _{j}$. We now use a simple trick to show that it is mean-zero to the required order. First of all, observe that $\mathbb{E}_{j-1}^{\tau }\left[ \zeta _{j}\right] =O\left( \tau \right) $ and so $\mathbb{E}\left[ \zeta _{j}\right] =\mathbb{E}\left[ \mathbb{E}_{j-1}^{\tau }\left[ \zeta _{j}\right] \right] =O\left( \tau \right) $. Similarly $\mathbb{E}\left[ \zeta _{j}^{2}\right] =1+O\left( \sqrt{\tau }\right) $. It follows that $\left\{ \hat{q}^{\left( \tau \right) }\left( t\right) :t\geq 0\right\} $ converges in distribution to a mean-zero martingale process, which we denote as $\left\{ \hat{q}_{t}:t\geq 0\right\} $, with correlation $\mathbb{E}\left[ \hat{q}_{t}\hat{q}_{s}\right] =t\wedge s$. 
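The recursion is straightforward to iterate on a computer. The sketch below uses an assumed qubit model (illustrative $L$, $H$ and initial state, not from the text), draws $\eta _{j+1}=\pm 1$ with the bias fixed by the conditional mean $\mathbb{E}_{j}^{\tau }\left[ \eta _{j+1}\right] =2\lambda _{j}^{\left( \tau \right) }\sqrt{\tau }$, and checks that the finite-difference update preserves the norm of the conditioned state to the expected order.

```python
import numpy as np

rng = np.random.default_rng(42)
tau, n_steps = 1e-4, 10_000
I2 = np.eye(2, dtype=complex)

# Illustrative qubit model (assumed, not from the text)
L = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # lowering operator
H = np.array([[0.5, 0.0], [0.0, -0.5]], dtype=complex)
psi = np.array([0.6, 0.8], dtype=complex)               # unit initial state

for _ in range(n_steps):
    lam = 0.5 * np.real(psi.conj() @ (L + L.conj().T) @ psi)
    p_plus = 0.5 * (1.0 + 2.0 * np.sqrt(tau) * lam)     # conditional law of eta
    eta = 1.0 if rng.random() < p_plus else -1.0
    drift = (-1j * H - 0.5 * (L.conj().T @ L) + lam * (1.5 * lam * I2 - L)) @ psi
    psi = psi + eta * np.sqrt(tau) * (L @ psi - lam * psi) + tau * drift

# the scheme keeps the conditioned state normalised up to the neglected orders
assert abs(np.linalg.norm(psi) - 1.0) < 0.05
```

The norm is conserved step by step because the $O(\sqrt{\tau })$ and $O(\tau )$ contributions to $\left\| \psi \right\| ^{2}$ cancel identically when $\eta ^{2}=1$; only higher-order residuals accumulate.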
We may therefore take $\hat{q}_{t}$ to be a Wiener process. Likewise, $\left\{ q^{\left( \tau \right) }\left( t\right) :t\geq 0\right\} $ should converge to a stochastic process $\left\{ q_{t}:t\geq 0\right\} $ which is adapted wrt. $\hat{q}$: in other words, $q_{t}$ should be determined as a function of the $\hat{q}$-process for times $s\leq t$ for each time $t>0$. In particular, \begin{equation*} q_{t}=\hat{q}_{t}+2\int_{0}^{t}\lambda _{s}ds \end{equation*} where $\lambda _{t}=\dfrac{1}{2}\left\langle \psi _{t}|\left( L+L^{\dagger }\right) \,\psi _{t}\right\rangle $. In the limit $\tau \rightarrow 0$ we obtain the limit stochastic differential equation \begin{equation*} \left| d\psi _{t}\right\rangle =\left( L-\lambda _{t}\right) \left| \psi _{t}\right\rangle dq_{t}+\left[ -iH-\frac{1}{2}L^{\dagger }L+\lambda _{t}\left( \frac{3}{2}\lambda _{t}-L\right) \right] \left| \psi _{t}\right\rangle dt \end{equation*} with initial condition $\left| \psi _{0}\right\rangle =\left| \phi \right\rangle $. In terms of the Wiener process $\hat{q}$ we have \begin{equation} \left| d\psi _{t}\right\rangle =\left( L-\lambda _{t}\right) \left| \psi _{t}\right\rangle d\hat{q}_{t}+\left[ -iH-\frac{1}{2}\left( L^{\dagger }L-2\lambda _{t}L+\lambda _{t}^{2}\right) \right] \left| \psi _{t}\right\rangle dt. \label{SSE} \end{equation} This is, of course, the standard form of the diffusive Stochastic Schr\"{o}dinger equation $\left( \ref{Gaussian SSE}\right) $. \subsection{Universality of Gaussian SSE} We consider measurements on an observable of the form \begin{equation} X=x_{+}\left| e_{+}\right\rangle \left\langle e_{+}\right| +x_{-}\left| e_{-}\right\rangle \left\langle e_{-}\right| \label{X} \end{equation} where $x_{+}$ and $x_{-}$ are real eigenvalues, while $e_{+}$ and $e_{-}$ are any orthonormal eigenvectors in $\mathcal{H}_{E}$ not the same as $e_{0}$ and $e_{1}$. 
Generally speaking we will have \begin{equation} e_{+}=\sqrt{q}e_{0}+e^{i\theta }\sqrt{1-q}e_{1},\quad e_{-}=\sqrt{1-q}e_{0}-\sqrt{q}e^{i\theta }e_{1} \label{eigen-vectors} \end{equation} with $0<q<1$. For convenience we set $q_{+}=q$ and $q_{-}=1-q$. The phase $\theta \in \lbrack 0,2\pi )$ will actually play no role in what follows and can always be removed by the ``gauge'' transformation $e_{0}\hookrightarrow e_{0}$, $e^{i\theta }e_{1}\hookrightarrow e_{1}$, which is trivial from our point of view since it leaves the ground state invariant. We therefore set $\theta =0$. The probability that we measure $X$ to be $x_{\pm }$ after the interaction will be \begin{eqnarray} p_{\pm } &=&\left\langle \phi \otimes e_{0}|\;V^{\dagger }\left( 1_{S}\otimes \Pi _{\pm }\right) V\;\phi \otimes e_{0}\right\rangle \label{p-plus/minus} \\ &\simeq &q_{\pm }\left[ 1+2\lambda \eta _{\pm }\sqrt{\tau }-\nu \left( 1-\eta _{\pm }^{2}\right) \tau \right] \end{eqnarray} where we introduce the expectation \begin{equation*} \nu =\left\langle \phi |L^{\dagger }L\,\phi \right\rangle =\left\| L\phi \right\| ^{2} \end{equation*} and the weighting \begin{equation} \eta _{+}=\sqrt{\frac{q_{-}}{q_{+}}},\qquad \eta _{-}=-\sqrt{\frac{q_{+}}{q_{-}}}. \label{eta plus/minus} \end{equation} We may now introduce a random variable $\eta $ taking the values $\eta _{\pm }$ with probabilities $p_{\pm }$. We remark that \begin{eqnarray} \mathbb{E}\left[ \eta \right] &=&p_{+}\eta _{+}+p_{-}\eta _{-}=2\lambda \,\sqrt{\tau }+O\left( \tau \right) , \label{eta mean} \\ \mathbb{E}\left[ \eta ^{2}\right] &=&p_{+}\eta _{+}^{2}+p_{-}\eta _{-}^{2}=1-2\lambda \frac{q_{+}^{2}-q_{-}^{2}}{\sqrt{q_{+}q_{-}}}\sqrt{\tau }+O\left( \tau \right) . 
\label{eta variance} \end{eqnarray} We then have the finite difference equation \begin{gather} \psi _{j+1}^{\left( \tau \right) }\simeq \psi _{j}^{\left( \tau \right) }+\sqrt{\tau }\eta _{j+1}^{\left( \tau \right) }\left( L-\lambda _{j}^{\left( \tau \right) }\right) \psi _{j}^{\left( \tau \right) } \notag \\ +\tau \left[ -iH-\frac{1}{2}L^{\dagger }L+\frac{1}{2}\left( 1-\eta _{\left( j+1\right) }^{\left( \tau \right) 2}\right) \nu _{\left( j\right) }^{\left( \tau \right) }+\eta _{\left( j+1\right) }^{\left( \tau \right) 2}\left( \frac{3}{2}\lambda _{j}^{\left( \tau \right) 2}-\lambda _{j}^{\left( \tau \right) }L\right) \right] \psi _{j}^{\left( \tau \right) } \end{gather} which is the same as before except for the new term involving $\nu _{\left( j\right) }^{\left( \tau \right) }=\left\langle \psi _{j}^{\left( \tau \right) }|\,L^{\dagger }L\,\psi _{j}^{\left( \tau \right) }\right\rangle $. We may replace the $\eta ^{2}$-terms by their averaged value of unity when transferring to the limit $\tau \rightarrow 0$: in particular the term with $\nu _{j}^{\left( \tau \right) }$ is negligible. Defining the processes $q_{t}^{\left( \tau \right) }$ and $\hat{q}_{t}^{\left( \tau \right) }$ as before, we are led to the same SSE as $\left( \ref{SSE}\right) $. \subsection{Poissonian Noise} Now let us consider what happens if we take the input observable to be $\sigma ^{+}\sigma ^{-}$. (This corresponds to the choice $q_{+}=1$, $q_{-}=0$.) We now record a result of either zero or one with probabilities $p_{\varepsilon }=\left\langle \phi \otimes e_{0}|V^{\dagger }\left( 1\otimes \Pi _{\varepsilon }\right) V\phi \otimes e_{0}\right\rangle $ where $\varepsilon =0,1$ and $\Pi _{\varepsilon }=\left| e_{\varepsilon }\right\rangle \left\langle e_{\varepsilon }\right| $. Explicitly we have \begin{equation*} p_{0}\simeq 1-\nu \tau ,\quad p_{1}\simeq \nu \tau . 
\end{equation*} As before, we define a conditional operator $\mathcal{V}_{\varepsilon }$ on $\mathcal{H}_{S}$ by $\left( \mathcal{V}_{\varepsilon }\phi \right) \otimes e_{\varepsilon }=\left( 1\otimes \Pi _{\varepsilon }\right) V\left( \phi \otimes e_{0}\right) $ and a conditional map $\mathcal{W}_{\varepsilon }=\left( p_{\varepsilon }\right) ^{-1/2}\,\mathcal{V}_{\varepsilon }$. Here we will have \begin{equation*} \mathcal{W}_{0}\phi \simeq \left[ 1+\tau \left( -iH-\frac{1}{2}L^{\dagger }L+\frac{1}{2}\nu \right) \right] \phi ,\quad \mathcal{W}_{1}\phi \simeq \frac{1}{\sqrt{\nu }}L\phi . \end{equation*} Iterating this in a repeated measurement strategy, we record an output series $\left( \varepsilon _{1}^{\left( \tau \right) },\varepsilon _{2}^{\left( \tau \right) },\cdots \right) $ of zeroes and ones with difference equation for the conditioned states given by \begin{eqnarray} \psi _{j+1}^{\left( \tau \right) } &\simeq &\psi _{j}^{\left( \tau \right) }+\varepsilon _{j+1}^{\left( \tau \right) }\left( \frac{L-\sqrt{\nu _{j}^{\left( \tau \right) }}}{\sqrt{\nu _{j}^{\left( \tau \right) }}}\right) \psi _{j}^{\left( \tau \right) } \notag \\ &&+\tau \left( 1-\varepsilon _{j+1}^{\left( \tau \right) }\right) \left( -iH-\frac{1}{2}L^{\dagger }L+\frac{1}{2}\nu _{j}^{\left( \tau \right) }\right) \psi _{j}^{\left( \tau \right) } \label{FDPoisson} \end{eqnarray} where $\nu _{j}^{\left( \tau \right) }:=\left\langle \psi _{j}^{\left( \tau \right) }|L^{\dagger }L\psi _{j}^{\left( \tau \right) }\right\rangle $. The $\varepsilon _{j}^{\left( \tau \right) }$'s may be viewed as dependent Bernoulli variables. In particular, let $\mathbb{E}_{j}\left[ \cdot \right] $ denote conditional expectation with respect to the first $j$ of these variables; then $\mathbb{E}_{j}\left[ \exp \left( iu\varepsilon _{j+1}^{\left( \tau \right) }\right) \right] \simeq 1+\tau \nu _{j}^{\left( \tau \right) }\left( e^{iu}-1\right) $. 
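The jump scheme $\left( \ref{FDPoisson}\right) $ can be iterated in the same way. In the sketch below ($L$, $H$ and the parameters are again illustrative assumptions, not from the text), a count $\varepsilon _{j+1}=1$ occurs with probability $\nu _{j}^{\left( \tau \right) }\tau $ and sends the state to $L\psi /\sqrt{\nu }$, which is exactly normalised; between counts the drift preserves the norm to $O(\tau )$.

```python
import numpy as np

rng = np.random.default_rng(7)
tau, n_steps = 1e-4, 20_000
I2 = np.eye(2, dtype=complex)

# Illustrative qubit (assumed): L is a lowering operator, H re-excites the system
L = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
H = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
psi = np.array([0.0, 1.0], dtype=complex)              # start in the upper state
counts = 0

for _ in range(n_steps):
    Lpsi = L @ psi
    nu = float(np.real(Lpsi.conj() @ Lpsi))            # nu_j = <psi| L^dag L psi>
    if rng.random() < nu * tau:                        # detector fires: eps = 1
        psi = Lpsi / np.sqrt(nu)                       # jump; exactly unit norm
        counts += 1
    else:                                              # no count: smooth evolution
        psi = psi + tau * (-1j * H - 0.5 * (L.conj().T @ L) + 0.5 * nu * I2) @ psi

assert abs(np.linalg.norm(psi) - 1.0) < 1e-2
```

Only the $O(\tau ^{2})$ residue of the no-count drift disturbs the normalisation, so the final norm stays close to unity however many counts are recorded.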
We now define a stochastic process $n_{t}^{\left( \tau \right) }$ by \begin{equation*} n_{t}^{\left( \tau \right) }:=\sum_{j=1}^{\left\lfloor t/\tau \right\rfloor }\varepsilon _{j}^{\left( \tau \right) } \end{equation*} and in the limit $\tau \rightarrow 0$ this converges to a non-homogeneous Poisson process $\left\{ n_{t}:t\geq 0\right\} $. Specifically, if $f$ is a smooth test function, then \begin{equation*} \lim_{\tau \rightarrow 0}\mathbb{E}\left[ \exp \left\{ i\sum_{j=1}^{\left\lfloor t/\tau \right\rfloor }f\left( j\tau \right) \varepsilon _{j}^{\left( \tau \right) }\right\} \right] =\mathbb{E}\left[ \exp \left\{ \int_{0}^{t}ds\,\nu _{s}\left( e^{if\left( s\right) }-1\right) \right\} \right] \end{equation*} with mean square limit $\nu _{t}:=\lim_{\tau \rightarrow 0}\nu _{\left\lfloor t/\tau \right\rfloor }^{\left( \tau \right) }$. The It\^{o} table for $n_{t}$ is $\left( dn_{t}\right) ^{2}=dn_{t},$ and we have $\mathbb{E}_{t]}\left[ dn_{t}\right] =\nu _{t}dt$ where $\mathbb{E}_{t]}\left[ \cdot \right] $ is conditional expectation with respect to the $n$-process up to time $t$. The limit stochastic Schr\"{o}dinger equation is then \begin{equation} \left| d\psi _{t}\right\rangle =\frac{\left( L-\sqrt{\nu _{t}}\right) }{\sqrt{\nu _{t}}}\left| \psi _{t}\right\rangle \,dn_{t}+\left( -iH-\frac{1}{2}\left( L^{\dagger }L-\nu _{t}\right) \right) \left| \psi _{t}\right\rangle \,dt \end{equation} and one readily shows that the normalization $\left\| \psi _{t}\right\| =1$ is preserved. Replacing $n_{t}$ by the compensated Poisson process $\hat{n}_{t}=n_{t}-\int_{0}^{t}\nu _{s}ds$, we find \begin{equation} \left| d\psi _{t}\right\rangle =\frac{\left( L-\sqrt{\nu _{t}}\right) }{\sqrt{\nu _{t}}}\left| \psi _{t}\right\rangle \,d\hat{n}_{t}+\left( -iH-\frac{1}{2}\left( L^{\dagger }L+\nu _{t}\right) +\sqrt{\nu _{t}}L\right) \left| \psi _{t}\right\rangle \,dt. \end{equation} This is the standard jump-type SSE $\left( \ref{Poissonian SSE}\right) $. 
\subsection{Discrete Input / Output Processes} In quantum theory the probabilistic notion of events is replaced by orthogonal projections. For the measurements of $\sigma ^{x}$, the relevant projectors are $\Pi _{\pm }^{\left( j\right) }=\frac{1}{2}\left[ 1\pm \sigma _{j}^{x}\right] $. So far we have worked in the Schr\"{o}dinger picture. We note the property that, for $j\neq k$, \begin{equation*} \left[ \Pi _{\pm }^{\left( j\right) },V_{k}\right] =0. \end{equation*} In the Heisenberg picture we are interested alternatively in the projectors \begin{equation} \tilde{\Pi}_{\pm }^{\left( j\right) }:=U_{j\tau }^{\left( \tau \right) \dagger }\,\Pi _{\pm }^{\left( j\right) }\,U_{j\tau }^{\left( \tau \right) }. \end{equation} Note that $\left[ \tilde{\Pi}_{\pm }^{\left( j\right) },\Pi _{\pm }^{\left( k\right) }\right] =0$ for $j<k$. The family $\left\{ \Pi _{\pm }^{\left( j\right) }:j=1,2,\cdots \right\} $ is auto-commuting: a property that is sometimes referred to as leading to a \textit{consistent history} of measurement outputs. The family $\left\{ \tilde{\Pi}_{\pm }^{\left( j\right) }:j=1,2,\cdots \right\} $ is likewise auto-commuting. To see this, we note that for $n>j$, \begin{eqnarray*} U_{n\tau }^{\left( \tau \right) \dagger }\,\Pi _{\pm }^{\left( j\right) }\,U_{n\tau }^{\left( \tau \right) } &=&V_{1}^{\dagger }\cdots V_{n}^{\dagger }\,\Pi _{\pm }^{\left( j\right) }\,V_{n}\cdots V_{1} \\ &=&V_{1}^{\dagger }\cdots V_{j}^{\dagger }\,\Pi _{\pm }^{\left( j\right) }\,V_{j}\cdots V_{1}\equiv \tilde{\Pi}_{\pm }^{\left( j\right) } \end{eqnarray*} and so, for any $j$ and $k$, $\left[ \tilde{\Pi}_{\pm }^{\left( j\right) },\tilde{\Pi}_{\pm }^{\left( k\right) }\right] =U_{n\tau }^{\left( \tau \right) \dagger }\,\left[ \Pi _{\pm }^{\left( j\right) },\Pi _{\pm }^{\left( k\right) }\right] \,U_{n\tau }^{\left( \tau \right) }=0$ where we need only take $n$ greater than both $j$ and $k$. 
For a given random output sequence $\mathbf{\eta }=\left( \eta _{1},\eta _{2},\cdots \right) $, we have for $n=\left\lfloor t/\tau \right\rfloor $ \begin{equation*} \left( \phi _{t}^{\left( \tau \right) }\left( \mathbf{\eta }\right) \right) \otimes e_{\eta _{1}}\otimes \cdots \otimes e_{\eta _{n}}\otimes \Phi _{(t}^{\left( \tau \right) }=\left( \Pi _{\eta _{n}}^{\left( n\right) }V_{n}\right) \cdots \left( \Pi _{\eta _{1}}^{\left( 1\right) }V_{1}\right) \,\phi \otimes \Phi ^{\left( \tau \right) } \end{equation*} and the right hand side can be written alternatively as \begin{equation*} \Pi _{\eta _{n}}^{\left( n\right) }\cdots \Pi _{\eta _{1}}^{\left( 1\right) }\,U_{t}^{\left( \tau \right) }\,\phi \otimes \Phi ^{\left( \tau \right) } \text{ or }U_{t}^{\left( \tau \right) }\,\tilde{\Pi}_{\eta _{n}}^{\left( n\right) }\cdots \tilde{\Pi}_{\eta _{1}}^{\left( 1\right) }\,\phi \otimes \Phi ^{\left( \tau \right) }. \end{equation*} The probability of a given output sequence $\left( \eta _{1},\cdots ,\eta _{n}\right) $ is then \begin{eqnarray*} \left\| \phi _{t}^{\left( \tau \right) }\left( \mathbf{\eta }\right) \right\| ^{2} &=&\left\langle \chi _{t}^{\left( \tau \right) }\right| \Pi _{\eta _{n}}^{\left( n\right) }\cdots \Pi _{\eta _{1}}^{\left( 1\right) }\,\left. \chi _{t}^{\left( \tau \right) }\right\rangle \\ &=&\left\langle \phi \otimes \Phi ^{\left( \tau \right) }\right| \tilde{\Pi} _{\eta _{n}}^{\left( n\right) }\cdots \tilde{\Pi}_{\eta _{1}}^{\left( 1\right) }\,\left. \phi \otimes \Phi ^{\left( \tau \right) }\right\rangle \end{eqnarray*} where $\chi _{t}^{\left( \tau \right) }:=$ $U_{t}^{\left( \tau \right) }\phi \otimes \Phi ^{\left( \tau \right) }$. 
Likewise, we have $\psi _{t}^{\left( \tau \right) }\left( \mathbf{\eta }\right) =\left\| \phi _{t}^{\left( \tau \right) }\left( \mathbf{\eta }\right) \right\| ^{-1}\phi _{t}^{\left( \tau \right) }\left( \mathbf{\eta }\right) $ and for any observable $G$ of the system we have the random expectation \begin{eqnarray*} \left\langle \psi _{t}^{\left( \tau \right) }\right| G\,\left. \psi _{t}^{\left( \tau \right) }\right\rangle &=&\left\| \phi _{t}^{\left( \tau \right) }\right\| ^{-2}\left\langle \chi _{t}^{\left( \tau \right) }\right| \left( G\otimes 1_{E}^{\left( \tau \right) }\right) \,\Pi _{\eta _{n}}^{\left( n\right) }\cdots \Pi _{\eta _{1}}^{\left( 1\right) }\,\left. \chi _{t}^{\left( \tau \right) }\,\right\rangle \\ &=&\left\| \phi _{t}^{\left( \tau \right) }\right\| ^{-2}\left\langle \phi \otimes \Phi ^{\left( \tau \right) }\right| J_{t}^{\left( \tau \right) }\left( G\right) \,\tilde{\Pi}_{\eta _{n}}^{\left( n\right) }\cdots \tilde{\Pi}_{\eta _{1}}^{\left( 1\right) }\,\left. \phi \otimes \Phi ^{\left( \tau \right) }\right\rangle . \end{eqnarray*} Let us introduce new spin variables $\tilde{\sigma}_{j}^{\pm }$ defined by \begin{equation*} \tilde{\sigma}_{j}^{\pm }=U_{j\tau }^{\left( \tau \right) \dagger }\,\left( \sigma _{j}^{\pm }\right) \,U_{j\tau }^{\left( \tau \right) }. \end{equation*} In particular, let $\tilde{\sigma}_{j}^{x}=\tilde{\sigma}_{j}^{+}+\tilde{\sigma}_{j}^{-}$; then $\tilde{\Pi}_{\pm }^{\left( j\right) }=\frac{1}{2}\left[ 1\pm \tilde{\sigma}_{j}^{x}\right] $. We may then construct the following adapted processes \begin{equation*} \tilde{A}^{\pm }\left( t;\tau \right) :=\sqrt{\tau }\sum_{j=1}^{\left\lfloor t/\tau \right\rfloor }\tilde{\sigma}_{j}^{\pm };\qquad \tilde{\Lambda}\left( t;\tau \right) :=\sum_{j=1}^{\left\lfloor t/\tau \right\rfloor }\tilde{\sigma}_{j}^{+}\tilde{\sigma}_{j}^{-}. \end{equation*} The current situation can be described as follows. 
Either we work in the Schr\"{o}dinger picture, in which case we are dealing with the quantum stochastic process $Q\left( t;\tau \right) =A^{+}\left( t;\tau \right) +A^{-}\left( t;\tau \right) $, or in the Heisenberg picture, in which case we are dealing with $\tilde{Q}\left( t;\tau \right) =U_{t}^{\left( \tau \right) \dagger }Q\left( t;\tau \right) U_{t}^{\left( \tau \right) }=\tilde{A}^{+}\left( t;\tau \right) +\tilde{A}^{-}\left( t;\tau \right) $. Adopting the terminology due to Gardiner, the $Q$-process is called the \textit{input process} while $\tilde{Q}$ is called the \textit{output process}. We may likewise refer to the $\Pi _{\pm }^{\left( j\right) }$'s as \textit{input events} and the $\tilde{\Pi}_{\pm }^{\left( j\right) }$'s as \textit{output events}. It is useful to know that, to relevant order, the output spin variables are \begin{eqnarray} \sqrt{\tau }\tilde{\sigma}_{j}^{-} &\simeq &U_{\left( j-1\right) \tau }^{\left( \tau \right) \dagger }\,\left( \sqrt{\tau }W\sigma _{j}^{-}+\tau L\right) \,U_{\left( j-1\right) \tau }^{\left( \tau \right) } \notag \\ &=&J_{\left( j-1\right) \tau }^{\left( \tau \right) }\left( W\right) \,\sqrt{\tau }\sigma _{j}^{-}+J_{\left( j-1\right) \tau }^{\left( \tau \right) }\left( L\right) \,\tau \label{discretecan} \end{eqnarray} We shall study the continuous-time versions of these processes next. \section{The Stochastic Schr\"{o}dinger Equation} We now review the simple theory of quantum filtering. Let $Y=\left\{ Y_{t}:t\geq 0\right\} $ be an adapted, self-adjoint quantum stochastic process having trivial action on the system. In particular we suppose that it arises as a quantum stochastic integral \begin{equation*} Y_{t}=y_{0}+\sum_{\alpha ,\beta }\int_{0}^{t}Y_{\alpha \beta }\left( s\right) \,dA_{s}^{\alpha \beta } \end{equation*} where the $Y_{\alpha \beta }\left( t\right) =\left( Y_{\beta \alpha }\left( t\right) \right) ^{\dagger }$ are again adapted processes and $y_{0}\in \mathbb{C}$. 
We shall assume that the process is self-commuting: \begin{equation*} \left[ Y_{t},Y_{s}\right] =0,\;\text{for all }t,s>0. \end{equation*} This requires the consistency conditions $\left[ Y_{\alpha \beta }\left( t\right) ,Y_{s}\right] =0$ whenever $t>s$. The process $Y$ can then be represented as a classical stochastic process $\left\{ y_{t}:t\geq 0\right\} $ with canonical (that is to say, minimal) probability space $\left( \Omega ,\Sigma ,\mathbb{Q}\right) $ and associated filtration $\left\{ \Sigma _{t]}:t\geq 0\right\} $ of sigma-algebras. For each $t>0$, we define the Dyson-ordered exponential \begin{equation*} \vec{T}\exp \left\{ \int_{0}^{t}f\left( u\right) dY_{u}\right\} :=\sum_{n\geq 0}\int_{\Delta _{n}\left( t\right) }dY_{t_{n}}\cdots dY_{t_{1}}\,f\left( t_{n}\right) \cdots f\left( t_{1}\right) \end{equation*} where $\Delta _{n}\left( t\right) $ is the ordered simplex $\left\{ \left( t_{n},\cdots ,t_{1}\right) :t>t_{n}>\cdots >t_{1}>0\right\} $. The algebra generated by such Dyson-ordered exponentials will be denoted by $\frak{C}_{t]}^{Y}$. Essentially, this is the spectral algebra of the process up to time $t$ and it can be understood (at least when the $Y$'s are bounded) as the von Neumann algebra $\frak{C}_{t]}^{Y}=\left\{ Y_{s}:0\leq s\leq t\right\} ^{\prime \prime }$ where we take the commutants in $\frak{B}\left( \mathcal{H}_{S}\otimes \mathcal{F}_{t]}\right) $. In the following we shall assume that $\frak{C}_{t]}^{Y}$ is a maximal commuting von Neumann sub-algebra of $\frak{B}\left( \mathcal{F}_{t]}\right) $. In other words, there are no effects generated by the environmental noise other than those that can be accounted for by the observed process. 
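For a self-commuting process the ordered exponential should reduce, in the scalar situation, to an ordinary exponential, and this can be checked on a discretisation. The sketch below (an arbitrary smooth path and test function, chosen purely for illustration, and with the ordered-simplex convention, i.e. no $1/n!$ factor) forms the strictly ordered $n$-fold sums and compares their total with $\prod_{j}\left( 1+f_{j}\Delta y_{j}\right) $ and with $\exp \left\{ \int f\,dy\right\} $.

```python
import numpy as np

# Discretised ordered exponential for a commuting scalar process:
# the sum over strictly ordered n-fold sums equals prod_j (1 + f_j dy_j),
# which tends to exp{ int f dy } as the mesh is refined.
n = 2000
t = np.linspace(0.0, 1.0, n + 1)
y = np.sin(3.0 * t)                    # hypothetical smooth path y_t
f = 1.0 + 0.5 * t[:-1]                 # test function (left-point values)
inc = f * np.diff(y)                   # increments f(t_j) * (y_{j+1} - y_j)

dyson = 1.0                            # n = 0 term of the ordered series
p = np.ones(n)                         # running (k-1)-fold ordered sums
for k in range(1, 31):
    s = np.cumsum(inc * p)             # k-fold ordered sums up to each index
    dyson += s[-1]
    p = np.concatenate(([0.0], s[:-1]))  # shift enforces strict ordering

assert abs(dyson - np.prod(1.0 + inc)) < 1e-9   # exact discrete identity
assert abs(dyson - np.exp(inc.sum())) < 1e-2    # ordinary exponential in the limit
```

The discrete identity is just the expansion of $\prod_{j}\left( 1+f_{j}\Delta y_{j}\right) $ into elementary symmetric functions, which is why the commuting case carries no extra combinatorial factor.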
The commutant of $\frak{C}_{t]}^{Y}$ will be denoted as \begin{equation*} \frak{A}_{t]}^{Y}=\left( \frak{C}_{t]}^{Y}\right) ^{\prime }=\left\{ A\in \frak{B}\left( \mathcal{H}_{S}\otimes \mathcal{F}\right) :\left[ Z,A\right] =0,\forall Z\in \frak{C}_{t]}^{Y}\right\} \end{equation*} and this is often referred to as the algebra of observables that are not demolished by the observed process up to time $t$. We note the isotonic property $\frak{C}_{t]}^{Y}\subset \frak{C}_{s]}^{Y}$ whenever $t<s$ and it is natural to introduce the inductive limit algebra $\frak{C}^{Y}:=\lim_{t\rightarrow \infty }\frak{C}_{t]}^{Y}$. From our assumption of maximality, we have that \begin{equation*} \frak{A}_{t]}^{Y}\equiv \frak{B}\left( \mathcal{H}_{S}\right) \otimes \frak{C}_{t]}^{Y}\otimes \frak{B}\left( \mathcal{F}_{(t}\right) . \end{equation*} A less simple theory would allow for effects of unobserved noises and one would have $\frak{C}_{t]}^{Y}$ as the centre of $\frak{A}_{t]}^{Y}$. One is then interested in conditional expectations from $\frak{A}_{t]}^{Y}$ into $\frak{C}_{t]}^{Y}$. Here we are interested in the Hilbert space aspects and so we take advantage of the simple setup that arises when $\frak{C}_{t]}^{Y}$ is assumed maximal. (For the more general case where $\frak{C}_{t]}^{Y}$ is not maximal, see \cite{BGM}.) It is convenient to introduce the Hilbert spaces $\mathcal{H}_{t]}^{Y}$\ and $\mathcal{G}_{t]}^{Y}$ defined through \begin{equation*} \overline{\frak{A}_{t]}^{Y}\left( \mathcal{H}_{S}\otimes \Psi _{t]}\right) }\equiv \mathcal{H}_{t]}^{Y}\otimes \left\{ \mathbb{C}\Psi _{(t}\right\} ,\quad \overline{\frak{C}_{t]}^{Y}\left( \Psi _{t]}\right) }\equiv \mathcal{G}_{t]}^{Y} \end{equation*} where we understand $\mathcal{H}_{t]}^{Y}$ as a subspace of $\mathcal{H}_{S}\otimes \mathcal{F}_{t]}$ and $\mathcal{G}_{t]}^{Y}$ as a subspace of $\mathcal{F}_{t]}$. From our maximality condition we shall have \begin{equation*} \mathcal{H}_{t]}^{Y}=\mathcal{H}_{S}\otimes \mathcal{G}_{t]}^{Y}. 
\end{equation*} (Otherwise $\mathcal{H}_{S}\otimes \mathcal{G}_{t]}^{Y}$ would be only a subset of $\mathcal{H}_{t]}^{Y}$.) A Hilbert space isomorphism $\frak{I}_{t}$ from $\mathcal{H}_{S}\otimes \mathcal{G}_{t]}^{Y}$ to $\mathcal{H}_{S}\otimes L^{2}\left( \Omega ,\Sigma ,\mathbb{Q}\right) $ is then defined by linear extension of the map \begin{equation*} \phi \otimes \vec{T}\exp \left\{ \int_{0}^{t}f\left( u\right) dY_{u}\right\} \mapsto \phi \,\vec{T}\exp \left\{ \int_{0}^{t}f\left( u\right) dy_{u}\right\} \end{equation*} where the Dyson-ordered exponential on the right hand side has the same meaning as for its operator-valued counterpart. We remark that, in particular, we have the following isomorphism between commutative von Neumann algebras: \begin{equation*} \frak{I}_{t}\,\frak{C}_{t]}^{Y}\,\frak{I}_{t}^{-1}=L^{\infty }\left( \Omega ,\Sigma _{t},\mathbb{Q}\right) . \end{equation*} By extension, $\frak{C}^{Y}$ can be understood as being isomorphic to $L^{\infty }\left( \Omega ,\Sigma ,\mathbb{Q}\right) $. Now fix a unit vector $\phi $ in $\mathcal{H}_{S}$ and consider the evolved vector \begin{equation*} \chi _{t}=U_{t}\,\left( \phi \otimes \Psi \right) \end{equation*} which will lie in $\mathcal{H}_{S}\otimes \mathcal{G}_{t]}^{Y}$. In particular, we have a $\Sigma $-measurable function $\phi _{t}\left( \cdot \right) $ corresponding to $\frak{I}_{t}\,\chi _{t}$. Here $\phi _{t}$ is an $\mathcal{H}_{S}$-valued random variable on $\left( \Omega ,\Sigma ,\mathbb{Q}\right) $ which is adapted to the filtration $\left\{ \Sigma _{t]}:t\geq 0\right\} $. We shall have the normalization condition \begin{equation*} \int_{\Omega }\left\| \phi _{t}\left( \omega \right) \right\| _{S}^{2}\,\mathbb{Q}\left[ d\omega \right] =1. 
\end{equation*} In general, $\left\| \phi _{t}\left( \omega \right) \right\| _{S}^{2}$ is not unity; however, as it is positive and normalized, we may introduce a second measure $\mathbb{P}$ on $\left( \Omega ,\Sigma \right) $ defined by \begin{equation*} \mathbb{P}\left[ A\right] :=\int_{A}\left\| \phi _{t}\left( \omega \right) \right\| _{S}^{2}\,\mathbb{Q}\left[ d\omega \right] \end{equation*} whenever $A\in \Sigma _{t}$. We remark that, for $B\in \frak{C}_{t]}^{Y}$, we have \begin{equation*} \left\langle \chi _{t}|\,B\chi _{t}\right\rangle =\int_{\Omega }\frak{I}_{t}\,B\,\frak{I}_{t}^{-1}\,\mathbb{P}\left[ d\omega \right] . \end{equation*} It is convenient to introduce a normalized $\mathcal{H}_{S}$-valued variable $\psi _{t}$ defined almost everywhere by \begin{equation*} \psi _{t}\left( \omega \right) :=\left\| \phi _{t}\left( \omega \right) \right\| _{S}^{-1}\;\phi _{t}\left( \omega \right) . \end{equation*} We now define a conditional expectation $\mathcal{E}_{t]}^{Y}$ from $\frak{A}_{t]}^{Y}$ to the von Neumann sub-algebra $\frak{C}_{t]}^{Y}$ by the following identification almost everywhere \begin{equation*} \frak{I}_{t}\,\mathcal{E}_{t]}^{Y}\left[ A\right] \,\frak{I}_{t}^{-1}:=\left\langle \psi _{t}|\,\frak{I}_{t}\,A\,\frak{I}_{t}^{-1}\,\psi _{t}\right\rangle . \end{equation*} This expectation leaves the state determined by $\chi _{t}$ invariant: \begin{eqnarray*} \left\langle \chi _{t}|\,\mathcal{E}_{t]}^{Y}\left[ A\right] \,\chi _{t}\right\rangle &=&\int_{\Omega }\left\langle \psi _{t}|\,\frak{I}_{t}\,A\,\frak{I}_{t}^{-1}\,\psi _{t}\right\rangle \,\mathbb{P}\left[ d\omega \right] \\ &=&\int_{\Omega }\left\langle \phi _{t}|\,\frak{I}_{t}\,A\,\frak{I}_{t}^{-1}\,\phi _{t}\right\rangle \,\mathbb{Q}\left[ d\omega \right] \\ &=&\left\langle \chi _{t}|\,A\,\chi _{t}\right\rangle . \end{eqnarray*} This property then uniquely fixes the conditional expectation. 
If we consider the action of $\mathcal{E}_{t]}^{Y}$ restricted to $\frak{C}^{Y}$, mapping onto $\frak{C}_{t]}^{Y}$, then it must play the role of a classical conditional expectation $\mathbb{E}_{t]}^{y}$ from $\Sigma $-measurable to $\Sigma _{t}$-measurable functions, again uniquely determined by the fact that it leaves a probability measure, in this case $\mathbb{P}$, invariant. We have the usual property that $\mathbb{E}_{t]}^{y}\circ \mathbb{E}_{s]}^{y}=\mathbb{E}_{t\wedge s]}^{y}$. We shall denote by $\mathbb{E}^{y}=\mathbb{E}_{0]}^{y}$ the expectation wrt. $\mathbb{P}$. Let us first remark that the classical process $\left\{ y_{t}:t\geq 0\right\} $ introduced above is not necessarily a martingale on $\left( \Omega ,\Sigma ,\mathbb{P}\right) $ wrt. the filtration $\left\{ \Sigma _{t]}:t\geq 0\right\} $. Indeed, we have \begin{eqnarray*} \mathbb{E}^{y}\left[ dy_{t}\right] &=&\left\langle \chi _{t}|\,dY_{t}\,\chi _{t}\right\rangle \\ &=&\left\langle \phi \otimes \Psi |\,d\tilde{Y}_{t}\,\phi \otimes \Psi \right\rangle \end{eqnarray*} where we define the output process $\tilde{Y}$ by \begin{equation*} \tilde{Y}_{t}:=U_{t}^{\dagger }\,Y_{t}\,U_{t}. \end{equation*} From the quantum It\^{o} calculus, we obtain \begin{eqnarray*} d\tilde{Y}_{t} &=&U_{t}^{\dagger }\,dY_{t}\,U_{t}+dU_{t}^{\dagger }\,Y_{t}\,U_{t}+U_{t}^{\dagger }\,Y_{t}\,dU_{t}+dU_{t}^{\dagger }\,Y_{t}\,dU_{t} \\ &&+dU_{t}^{\dagger }\,dY_{t}\,U_{t}+U_{t}^{\dagger }\,dY_{t}\,dU_{t}+dU_{t}^{\dagger }\,dY_{t}\,dU_{t} \\ &=&U_{t}^{\dagger }\,Y_{\alpha \beta }\left( t\right) \,U_{t}\,dA_{t}^{\alpha \beta }+U_{t}^{\dagger }\,\mathcal{L}_{\alpha \beta }\left( 1\right) Y_{t}\,U_{t}\,dA_{t}^{\alpha \beta } \\ &&+U_{t}^{\dagger }\,\left( L_{1\alpha }^{\dagger }Y_{1\beta }+Y_{\alpha 1}L_{1\beta }+L_{1\alpha }^{\dagger }Y_{11}L_{1\beta }\right) \,U_{t}\,dA_{t}^{\alpha \beta }.
\end{eqnarray*} Noting that $\mathcal{L}_{\alpha \beta }\left( 1\right) =0$, we see that \begin{equation*} d\tilde{Y}_{t}=U_{t}^{\dagger }\,\left( Y_{\alpha \beta }\left( t\right) +L_{1\alpha }^{\dagger }Y_{1\beta }\left( t\right) +Y_{\alpha 1}\left( t\right) L_{1\beta }+L_{1\alpha }^{\dagger }Y_{11}\left( t\right) L_{1\beta }\right) \,U_{t}\,dA_{t}^{\alpha \beta }. \end{equation*} In particular, we define $\tilde{A}_{t}^{\alpha \beta }:=U_{t}^{\dagger }A_{t}^{\alpha \beta }U_{t}$ and they are explicitly \begin{eqnarray*} d\tilde{\Lambda}_{t} &=&d\tilde{A}_{t}^{11}=d\Lambda _{t}+J_{t}\left( W^{\dagger }L\right) dA_{t}^{+}+J_{t}\left( L^{\dagger }W\right) dA_{t}^{-}+J_{t}\left( L^{\dagger }L\right) dt \\ d\tilde{A}_{t}^{+} &=&d\tilde{A}_{t}^{10}=J_{t}\left( W^{\dagger }\right) dA_{t}^{+}+J_{t}\left( L^{\dagger }\right) dt \\ d\tilde{A}_{t}^{-} &=&d\tilde{A}_{t}^{01}=J_{t}\left( W\right) dA_{t}^{-}+J_{t}\left( L\right) dt \end{eqnarray*} with $\tilde{A}_{t}^{00}=t$. We remark that $\mathbb{E}^{y}\left[ dy_{t}\right] =\bar{y}_{t}dt$ where \begin{eqnarray*} \bar{y}_{t} &=&\left\langle \chi _{t}|\,\left( Y_{00}\left( t\right) +L_{10}^{\dagger }Y_{10}\left( t\right) +Y_{01}\left( t\right) L_{10}+L_{10}^{\dagger }Y_{11}\left( t\right) L_{10}\right) \,\chi _{t}\right\rangle \\ &=&\int_{\Omega }\mathbb{P}\left[ d\omega \right] \;\left\langle \psi _{t}\left( \omega \right) |\,\left[ L^{\dagger }\right] ^{\alpha }\left[ L \right] ^{\beta }\,\psi _{t}\left( \omega \right) \right\rangle \,y_{\alpha \beta }\left( t;\omega \right) \end{eqnarray*} and we use the notations $y_{\alpha \beta }\left( t;\cdot \right) =\frak{I}_{t}Y_{\alpha \beta }\left( t\right) \frak{I}_{t}^{-1}$. Therefore, a martingale on $\left( \Omega ,\Sigma ,\mathbb{P}\right) $ wrt.
the filtration $\left\{ \Sigma _{t]}:t\geq 0\right\} $ is given by the process $\left\{ \hat{y}_{t}:t\geq 0\right\} $ defined as \begin{equation*} d\hat{y}_{t}\left( \omega \right) =dy_{t}\left( \omega \right) -\left\langle \psi _{t}\left( \omega \right) |\,\left[ L^{\dagger }\right] ^{\alpha }\left[ L\right] ^{\beta }\,\psi _{t}\left( \omega \right) \right\rangle \,y_{\alpha \beta }\left( t;\omega \right) \,dt. \end{equation*} \subsection{Filtering based on observations of $Q_{t}=A_{t}^{+}+A_{t}^{-}$} Let us choose, for our monitored observables, the process $ Q_{t}=A_{t}^{+}+A_{t}^{-}$. Here the output process will be $\tilde{Q}_{t}$ with differentials \begin{equation*} d\tilde{Q}_{t}=J_{t}\left( W^{\dagger }\right) dA_{t}^{+}+J_{t}\left( W\right) dA_{t}^{-}+J_{t}\left( L^{\dagger }+L\right) dt. \end{equation*} By the previous arguments, we construct a classical process $y=q$ giving the distribution of $Q$ in the vacuum state: as is well-known, this is a Wiener process and $\left( \Omega ,\Sigma ,\mathbb{Q}\right) $ will be the canonical Wiener space. (In fact, $\frak{I}_{t}$ is then the Wiener-It\^{o}-Segal isomorphism \cite{Meyer}.) The corresponding martingale process will then be $\hat{q}$ defined through \begin{equation*} d\hat{q}_{t}=dq_{t}-2\lambda _{t}dt,\quad \hat{q}_{0}=0 \end{equation*} where \begin{equation*} \lambda _{t}\left( \omega \right) :=\frac{1}{2}\left\langle \psi _{t}\left( \omega \right) |\left( L+L^{\dagger }\right) \psi _{t}\left( \omega \right) \right\rangle . \end{equation*} A differential equation for $\psi _{t}$ can be obtained as follows.
The state $\chi _{t}=U_{t}\,\phi \otimes \Psi $ will satisfy the vector-process QSDE \begin{equation} d\chi _{t}=L_{\alpha \beta }dA_{t}^{\alpha \beta }\,\chi _{t}=L_{\alpha 0}dA_{t}^{\alpha 0}\,\chi _{t} \label{chi equation} \end{equation} since we have $dA_{t}^{\alpha 1}\chi _{t}=U_{t}dA_{t}^{\alpha 1}\phi \otimes \Psi =0$: that is, the It\^{o} differentials $d\Lambda _{t}$ and $dA_{t}^{-}$ commute with $U_{t}$ and annihilate the Fock vacuum. It is convenient to restore the annihilation differential, this time as $L_{10}dA_{t}^{-}\chi _{t}=0$, in which case we obtain the equivalent QSDE \begin{equation*} d\chi _{t}=-\left( iH+\frac{1}{2}L^{\dagger }L\right) \chi _{t}\,dt+L\,dQ_{t}\,\chi _{t}. \end{equation*} It should be immediately obvious that the process $\phi _{t}\left( \cdot \right) $ will satisfy the sde $\left| d\phi _{t}\right\rangle =L\left| \phi _{t}\right\rangle \,dq_{t}-\left( iH+\frac{1}{2}L^{\dagger }L\right) \left| \phi _{t}\right\rangle \,dt$. Here we write $\phi _{t}\left( \cdot \right) $ as $\left| \phi _{t}\left( \cdot \right) \right\rangle $ to emphasize the fact that it is an $\mathcal{H}_{S}$-valued process. From the It\^{o} rule $\left( dq_{t}\right) ^{2}=dt$, we find that \begin{equation*} d\left\| \phi _{t}\right\| ^{2}=\left\langle d\phi _{t}|\phi _{t}\right\rangle +\left\langle \phi _{t}|d\phi _{t}\right\rangle +\left\langle d\phi _{t}|d\phi _{t}\right\rangle =\left\langle \phi _{t}|\left( L^{\dagger }+L\right) \phi _{t}\right\rangle \,dq_{t}.
\end{equation*} The derivative rule is \begin{eqnarray} d\left\| \phi _{t}\right\| ^{-1} &=&\left( \left\| \phi _{t}\right\| ^{2}+d\left\| \phi _{t}\right\| ^{2}\right) ^{-1/2}-\left( \left\| \phi _{t}\right\| ^{2}\right) ^{-1/2} \notag \\ &=&\left\| \phi _{t}\right\| ^{-1}\sum_{k\geq 1}\binom{-1/2}{k}\left\| \phi _{t}\right\| ^{-2k}\left( d\left\| \phi _{t}\right\| ^{2}\right) ^{k} \label{normalization sde} \end{eqnarray} and here we must use $d\left\| \phi _{t}\right\| ^{2}=2\left\| \phi _{t}\right\| ^{2}\lambda _{t}\,dq_{t}$, where $\lambda _{t}:=\frac{1}{2}\left\langle \psi _{t}|\left( L^{\dagger }+L\right) \psi _{t}\right\rangle $, together with the It\^{o} rule $\left( d\left\| \phi _{t}\right\| ^{2}\right) ^{2}=4\left\| \phi _{t}\right\| ^{4}\lambda _{t}^{2}\,dt$ (all higher powers vanish). This leads to \begin{equation*} d\left\| \phi _{t}\right\| ^{-1}=-\left\| \phi _{t}\right\| ^{-1}\lambda _{t}\,dq_{t}+\frac{3}{2}\left\| \phi _{t}\right\| ^{-1}\lambda _{t}^{2}\,dt. \end{equation*} This yields the SDE for $\left| \psi _{t}\right\rangle $: $\left| d\psi _{t}\right\rangle =\left\| \phi _{t}\right\| ^{-1}\left| d\phi _{t}\right\rangle +d\left( \left\| \phi _{t}\right\| ^{-1}\right) \left| \phi _{t}\right\rangle +d\left( \left\| \phi _{t}\right\| ^{-1}\right) \left| d\phi _{t}\right\rangle $ and this is explicitly \begin{equation} \left| d\psi _{t}\right\rangle =\left( L-\lambda _{t}\right) \left| \psi _{t}\right\rangle \,dq_{t}+\left( -iH-\frac{1}{2}L^{\dagger }L-\lambda _{t}L+ \frac{3}{2}\lambda _{t}^{2}\right) \left| \psi _{t}\right\rangle \,dt. \end{equation} Finally, substituting in for the martingale process $\hat{q}$ we obtain \begin{equation} \left| d\psi _{t}\right\rangle =\left( L-\lambda _{t}\right) \left| \psi _{t}\right\rangle \,d\hat{q}_{t}+\left( -iH-\frac{1}{2}\left( L^{\dagger }L-2\lambda _{t}L+\lambda _{t}^{2}\right) \right) \left| \psi _{t}\right\rangle \,dt. \end{equation} \subsection{Filtering based on observations of $\Lambda _{t}$} Let us now choose, for our monitored observables, the gauge process $\Lambda _{t}$.
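Before turning to the gauge case, the diffusive filtering equation for $\left| \psi _{t}\right\rangle $ can be integrated numerically by an Euler-Maruyama scheme. The sketch below is a toy illustration with our own choices of $H$, $L$, time step and initial state (a two-level system), not a computation taken from the text; it checks that the equation is norm-preserving in the It\^{o} sense:

```python
import numpy as np

# Euler-Maruyama sketch of the diffusive stochastic Schrodinger equation
#   d|psi> = (L - l)|psi> dqhat + (-iH - (1/2)(L^+L - 2 l L + l^2))|psi> dt,
# with l_t = (1/2)<psi|(L + L^+)psi>, for a two-level system (toy model).

rng = np.random.default_rng(1)
H = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)       # toy Hamiltonian
L = 0.5 * np.array([[0.0, 0.0], [1.0, 0.0]], dtype=complex)  # toy coupling

dt, n_steps = 1e-3, 200
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

for _ in range(n_steps):
    lam = 0.5 * np.vdot(psi, (L + L.conj().T) @ psi).real
    dqhat = np.sqrt(dt) * rng.normal()                 # martingale increment
    drift = (-1j * H - 0.5 * (L.conj().T @ L - 2 * lam * L
                              + lam ** 2 * np.eye(2))) @ psi
    psi = psi + (L @ psi - lam * psi) * dqhat + drift * dt

# the norm is conserved up to the Euler discretization error
print(np.linalg.norm(psi))
```

Since $d\left\| \psi _{t}\right\| ^{2}=0$ identically for this equation, the residual drift of the numerical norm away from $1$ is purely a discretization effect and shrinks with $dt$.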
Unfortunately, we hit on a snag: the gauge is trivially zero in the vacuum state, that is, it is a Poisson process of zero intensity. A trick to deal with this is to replace the gauge process with a unitarily equivalent process $\Lambda _{t}^{f}$ given by \begin{equation*} \Lambda _{t}^{f}:=e^{A^{-}\left( f\right) -A^{+}\left( f\right) }\,\Lambda _{t}\,e^{A^{+}\left( f\right) -A^{-}\left( f\right) } \end{equation*} for $f\in L^{2}\left( \mathbb{R}^{+},dt\right) $ a real-valued function with $f\left( t\right) >0$ for all times $t>0$. The process is defined alternatively by \begin{equation*} d\Lambda _{t}^{f}=d\Lambda _{t}+f\left( t\right) dA_{t}^{+}+f\left( t\right) dA_{t}^{-}+f\left( t\right) ^{2}dt,\quad \Lambda _{0}^{f}=0. \end{equation*} It satisfies the It\^{o} rule $d\Lambda _{t}^{f}\,d\Lambda _{t}^{f}=d\Lambda _{t}^{f}$ and we have that $\left\langle \Psi |\,d\Lambda _{t}^{f}\,\Psi \right\rangle =f\left( t\right) ^{2}dt$. We see that $\Lambda _{t}^{f}$ corresponds to a classical process $y=n^{f}$ which is a non-homogeneous Poisson process with intensity density $f^{2}$ and we shall denote by $ \left( \Omega ,\Sigma ,\mathbb{Q}\right) $ the canonical probability space. Now, from $\left( \ref{chi equation}\right) $, we find $d\chi _{t}=-\left( iH+\frac{1}{2}L^{\dagger }L+fL\right) \chi _{t}\,dt+f^{-1}L\,dn_{t}^{f}\,\chi _{t}$ and the corresponding $\mathcal{H} _{S}$-valued process satisfies \begin{equation*} \left| d\phi _{t}\right\rangle =-\left( iH+\frac{1}{2}L^{\dagger }L+fL\right) \left| \phi _{t}\right\rangle \,dt+f^{-1}L\,\left| \phi _{t}\right\rangle \,dn_{t}^{f} \end{equation*} from which we find that \begin{equation*} d\left\| \phi _{t}\right\| ^{2}=\frac{1}{f\left( t\right) ^{2}}\left\langle \phi _{t}|\left( L^{\dagger }+f\right) \left( L+f\right) \phi _{t}\right\rangle \;\left( dn_{t}^{f}-f\left( t\right) ^{2}dt\right) . 
\end{equation*} Substituting into $\left( \ref{normalization sde}\right) $ we find after some re-summing \begin{eqnarray*} d\left\| \phi _{t}\right\| ^{-1} &=&\left\| \phi _{t}\right\| ^{-1}\left( \frac{f\left( t\right) }{\sqrt{\nu _{t}+2f\left( t\right) \lambda _{t}+f\left( t\right) ^{2}}}-1\right) dn_{t}^{f} \\ &&+\frac{1}{2}\left\| \phi _{t}\right\| ^{-1}\left( \nu _{t}+2f\left( t\right) \lambda _{t}\right) dt \end{eqnarray*} where $\nu _{t}\left( \omega \right) :=\left\langle \psi _{t}\left( \omega \right) |\,L^{\dagger }L\,\psi _{t}\left( \omega \right) \right\rangle $ and $\lambda _{t}\left( \omega \right) $ is as defined above. Note that $\nu _{t}+2f\left( t\right) \lambda _{t}+f\left( t\right) ^{2}=\left\langle \psi _{t}\left( \omega \right) |\,\left( L^{\dagger }+f\left( t\right) \right) \left( L+f\left( t\right) \right) \,\psi _{t}\left( \omega \right) \right\rangle $. The resulting sde for the normalized state $\psi _{t}$ is then \begin{eqnarray*} \left| d\psi _{t}\right\rangle &=&\left( \frac{L+f\left( t\right) -\sqrt{\nu _{t}+2f\left( t\right) \lambda _{t}+f\left( t\right) ^{2}}}{\sqrt{\nu _{t}+2f\left( t\right) \lambda _{t}+f\left( t\right) ^{2}}}\right) \left| \psi _{t}\right\rangle \,dn_{t}^{f} \\ &&+\left( -iH-\frac{1}{2}L^{\dagger }L-f\left( t\right) L+\frac{1}{2}\left[ \nu _{t}+2f\left( t\right) \lambda _{t}\right] \right) \left| \psi _{t}\right\rangle \,dt. 
\end{eqnarray*} Now $n^{f}$ is decomposed into martingale and deterministic part according to \begin{equation*} dn^{f}=d\hat{n}^{f}+\left( \nu _{t}+2f\left( t\right) \lambda _{t}+f\left( t\right) ^{2}\right) dt \end{equation*} and so we have \begin{eqnarray*} \left| d\psi _{t}\right\rangle &=&\left( \frac{L+f\left( t\right) -\sqrt{\nu _{t}+2f\left( t\right) \lambda _{t}+f\left( t\right) ^{2}}}{\sqrt{\nu _{t}+2f\left( t\right) \lambda _{t}+f\left( t\right) ^{2}}}\right) \left| \psi _{t}\right\rangle \,d\hat{n}_{t}^{f} \\ &&+\left[ -iH-\frac{1}{2}L^{\dagger }L-f\left( t\right) L+\frac{1}{2}\left[ \nu _{t}+2f\left( t\right) \lambda _{t}\right] \right. \\ &&\left. +\left( L+f\left( t\right) -\sqrt{\nu _{t}+2f\left( t\right) \lambda _{t}+f\left( t\right) ^{2}}\right) \sqrt{\left( \nu _{t}+2f\left( t\right) \lambda _{t}+f\left( t\right) ^{2}\right) }\right] \left| \psi _{t}\right\rangle \,dt. \end{eqnarray*} We now take the limit $f\rightarrow 0$ to obtain the result we want and this leaves us with the sde \begin{eqnarray*} \left| d\psi _{t}\right\rangle &=&\left( \frac{L-\sqrt{\nu _{t}}}{\sqrt{\nu _{t}}}\right) \left| \psi _{t}\right\rangle \,d\hat{n}_{t}+\left( -iH-\frac{1 }{2}L^{\dagger }L-\frac{1}{2}\nu _{t}+\sqrt{\nu _{t}}L\right) \left| \psi _{t}\right\rangle \,dt \\ &=&\left( \frac{L-\sqrt{\nu _{t}}}{\sqrt{\nu _{t}}}\right) \left| \psi _{t}\right\rangle \,dn_{t}+\left( -iH-\frac{1}{2}L^{\dagger }L+\frac{1}{2} \nu _{t}\right) \left| \psi _{t}\right\rangle \,dt. \end{eqnarray*} Here $n_{t}$ will be a non-homogeneous Poisson process with intensity $\nu _{t}$. \section{Appendix} \subsection{Bosonic Noise} Let $\mathcal{H}$ be a\ fixed Hilbert space. The $n$-particle Bose states take the basic form $\phi _{1}\hat{\otimes}\cdots \hat{\otimes}\phi _{n}=\sum_{\sigma \in \frak{S}_{n}}\phi _{\sigma \left( 1\right) }\otimes \cdots \otimes \phi _{\sigma \left( n\right) }$ where we sum over the permutation group $\frak{S}_{n}$. 
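The symmetrization $\hat{\otimes}$ just defined can be realized concretely for small $n$ as a dense tensor. The following is our own toy sketch (small dimensions, random complex vectors), verifying the Bose symmetry of the result:

```python
import itertools
from functools import reduce

import numpy as np

# Unnormalized symmetrization:
#   phi_1 (x)^ ... (x)^ phi_n = sum over permutations sigma of
#   phi_sigma(1) (x) ... (x) phi_sigma(n),  built as a dense numpy tensor.

def sym_tensor(vectors):
    """Sum of plain tensor products over all permutations of the factors."""
    out = 0
    for perm in itertools.permutations(vectors):
        out = out + reduce(np.multiply.outer, perm)
    return out

rng = np.random.default_rng(2)
vs = [rng.normal(size=3) + 1j * rng.normal(size=3) for _ in range(3)]
T = sym_tensor(vs)

# Bose symmetry: invariance under exchanging any two tensor factors.
print(np.allclose(T, T.transpose(1, 0, 2)), np.allclose(T, T.transpose(0, 2, 1)))
```

The cost grows as $n!$, so this is only a conceptual check, but it makes the permutation sum explicit.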
The $n$-particle state space is denoted $ \mathcal{H}^{\hat{\otimes}n}$ and the Bose Fock space, with one particle space $\mathcal{H}$, is then $\Gamma _{+}\left( \mathcal{H}\right) :=\bigoplus_{n=0}^{\infty }\mathcal{H}^{\hat{\otimes}n}$ with vacuum space $ \mathcal{H}^{\hat{\otimes}0}$ spanned by a single vector $\Psi $. The Bosonic creator, annihilator and differential second quantization fields are, respectively, the following operators on Fock space \begin{eqnarray*} A^{+}\left( \psi \right) \;\phi _{1}\hat{\otimes}\cdots \hat{\otimes}\phi _{n} &=&\sqrt{n+1}\,\psi \hat{\otimes}\phi _{1}\hat{\otimes}\cdots \hat{ \otimes}\phi _{n} \\ A^{-}\left( \psi \right) \;\phi _{1}\hat{\otimes}\cdots \hat{\otimes}\phi _{n} &=&\frac{1}{\sqrt{n}}\,\sum_{j}\left\langle \psi |\phi _{j}\right\rangle \;\phi _{1}\hat{\otimes}\cdots \hat{\otimes}\widehat{\phi _{j}} \hat{\otimes}\cdots \hat{\otimes}\phi _{n} \\ d\Gamma \left( T\right) \;\phi _{1}\hat{\otimes}\cdots \hat{\otimes}\phi _{n} &=&\,\sum_{j}\phi _{1}\hat{\otimes}\cdots \hat{\otimes}\left( T\phi _{j}\right) \hat{\otimes}\cdots \hat{\otimes}\phi _{n} \end{eqnarray*} where $\psi \in \mathcal{H}$ and $T\in \frak{B}\left( \mathcal{H}\right) $. Now choose $\mathcal{H}=L^{2}\left( \mathbb{R}^{+},dt\right) $ and on the Fock space $\mathcal{F}=\Gamma _{+}\left( L^{2}\left( \mathbb{R} ^{+},dt\right) \right) $ set \begin{equation} A_{t}^{\pm }:=A^{\pm }\left( 1_{\left[ 0,t\right] }\right) ;\quad \Lambda _{t}:=d\Gamma \left( \tilde{1}_{\left[ 0,t\right] }\right) \end{equation} where $1_{\left[ 0,t\right] }$ is the characteristic function for the interval $\left[ 0,t\right] $ and $\tilde{1}_{\left[ 0,t\right] }$ is the operator on $L^{2}\left( \mathbb{R}^{+},dt\right) $ corresponding to multiplication by $1_{\left[ 0,t\right] }$. An integral calculus can be built up around the processes $A_{t}^{\pm },\Lambda _{t}$ and $t$ and is known as (Bosonic) quantum stochastic calculus.
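The $\sqrt{n+1}$ and $1/\sqrt{n}$ matrix elements above are the familiar ladder-operator coefficients. For a single mode on a Fock space truncated at $N$ levels they give finite matrices satisfying the canonical commutation relation below the cutoff; a quick numerical sanity check (our own, not part of the text):

```python
import numpy as np

# Single-mode ladder operators on a Fock space truncated at N levels:
#   a^+ |n> = sqrt(n+1) |n+1>,   a |n> = sqrt(n) |n-1>.
# The commutator [a, a^+] equals the identity except in the last
# (cut-off) level, where the truncation is felt.

N = 10
a_dag = np.diag(np.sqrt(np.arange(1, N)), k=-1)   # creation matrix
a = a_dag.T                                        # annihilation (real entries)

comm = a @ a_dag - a_dag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))  # CCR below the cutoff
```

The deviation is confined to the single entry `comm[-1, -1] = -(N - 1)`, which is the standard truncation artifact.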
This allows us to consider quantum stochastic integrals of the type $\int_{0}^{T}\{F_{10}\left( t\right) \otimes dA_{t}^{+}+F_{01}\left( t\right) \otimes dA_{t}^{-}+F_{11}\left( t\right) \otimes d\Lambda _{t}+F_{00}\left( t\right) \otimes dt\}$ on $\mathcal{H}_{0}\otimes \Gamma _{+}\left( L^{2}\left( \mathbb{R}^{+},dt\right) \right) $ where $\mathcal{H} _{0}$ is some fixed Hilbert space (termed the initial space). We note the natural isomorphism $\Gamma _{+}\left( L^{2}\left( \mathbb{R} ^{+},dt\right) \right) \cong \mathcal{F}_{t]}\otimes \mathcal{F}_{(t}$ where $\mathcal{F}_{t]}=\Gamma _{+}\left( L^{2}\left( \left[ 0,t\right] ,dt\right) \right) $ and $\mathcal{F}_{(t}=\Gamma _{+}\left( L^{2}\left( (t,\infty ),dt\right) \right) $. A family $\left( F_{t}\right) _{t}$ of operators on $ \mathcal{H}_{0}\otimes \Gamma _{+}\left( L^{2}\left( \mathbb{R} ^{+},dt\right) \right) $ is said to be adapted if $F_{t}$ acts trivially on the future space $\mathcal{F}_{(t}$\ for each $t$. The Leibniz rule however breaks down for this theory since products of stochastic integrals must be put to Wick order before they can be re-expressed again as stochastic integrals. The new situation is summarized by the quantum It\^{o} rule $d\left( FG\right) =\left( dF\right) G+F\left( dG\right) +\left( dF\right) \left( dG\right) $ and the quantum It\^{o} table \begin{equation*} \begin{tabular}{l|llll} $\times $ & $dA^{+}$ & $d\Lambda $ & $dA^{-}$ & $dt$ \\ \hline $dA^{+}$ & $0$ & $0$ & $0$ & $0$ \\ $d\Lambda $ & $dA^{+}$ & $d\Lambda $ & $0$ & $0$ \\ $dA^{-}$ & $dt$ & $dA^{-}$ & $0$ & $0$ \\ $dt$ & $0$ & $0$ & $0$ & $0$ \end{tabular} \end{equation*} It is convenient to denote the four basic processes as follows: \begin{equation*} A_{t}^{\alpha \beta }=\left\{ \begin{array}{cc} \Lambda _{t}, & \left( 1,1\right) ; \\ A_{t}^{+}, & \left( 1,0\right) ; \\ A_{t}^{-}, & \left( 0,1\right) ; \\ t, & \left( 0,0\right) . \end{array} \right.
\end{equation*} The It\^{o} table then simplifies to $dA_{t}^{\alpha \beta }dA_{t}^{\mu \nu }=0$ except for the cases \begin{equation} dA_{t}^{\alpha 1}dA_{t}^{1\beta }=dA_{t}^{\alpha \beta }. \label{qito} \end{equation} The fundamental result \cite{HP} is that there exists a unique solution $U_{t}$ to the quantum stochastic differential equation (QSDE) \begin{equation*} dU_{t}=L_{\alpha \beta }\otimes dA_{t}^{\alpha \beta },\qquad U_{0}=1 \end{equation*} whenever the coefficients $L_{\alpha \beta }$ are in $\frak{B}\left( \mathcal{H}_{0}\right) $. The solution is automatically adapted and, moreover, will be unitary provided that the coefficients take the form $ \left( \ref{unitarity}\right) $. Acknowledgments: We would like to thank Professors Aubrey Truman and Oleg Smolianov for stimulating our interest in stochastic Schr\"{o}dinger equations as fundamental limits from quantum mechanics. We also thank Slava Belavkin and Luc Bouten for discussions on their approaches to quantum filtering. A.S. acknowledges with thanks the financial support of EPSRC research grant GR/0174 on quantum filtering, decoherence and control. J.G. is grateful to the Department of Mathematics, University of Wales Swansea, for the warm hospitality extended to him during his visit when a part of this work was done. \end{document}
\begin{document} \begin{abstract} We prove the existence of weak solutions in the space of energy for a class of non-linear Schr\"odinger equations in the presence of an external, rough, time-dependent magnetic potential. Under our assumptions it is not possible to study the problem by means of usual arguments like resolvent techniques or Fourier integral operators, for example. We use a parabolic regularisation and we solve the approximating Cauchy problem. This is achieved by obtaining suitable smoothing estimates for the dissipative evolution. The total mass and energy bounds allow us to extend the solution globally in time. We then infer sufficient compactness properties in order to produce a global-in-time finite energy weak solution to our original problem. \\ \noindent Keywords: non-linear Schr\"odinger equation, magnetic potentials, parabolic regularisation, Strichartz estimates, weak solutions.\\ \noindent MSC: 35D40 - 35H30 - 35Q41 - 35Q55 - 35K08 \end{abstract} \maketitle \section{Introduction and main result} In this work we study the initial value problem associated with the non-linear Schr\"odinger equation with magnetic potential \begin{equation}\label{eq:magneticNLS} i\partial_t u=-(\nabla-\mathrm{i}\,A)^2u+\mathcal N(u) \end{equation} in the unknown $u\equiv u(t,x)$, $t\in{\mathbb R}$, $x\in{\mathbb R}^3$, where \begin{equation}\label{hart_pp_nl} \mathcal N(u)=\lambda_1|u|^{\gamma-1}u+\lambda_2(|\cdot|^{-\alpha}\ast|u|^2)u,\qquad \begin{array}{l} \gamma\in(1,5]\,, \\ \alpha\in (0,3)\,, \\ \lambda_1,\lambda_2\geqslant 0 \end{array} \end{equation} is a defocusing non-linearity, both of local (pure power) and non-local (Hartree) type, and $A:{\mathbb R}\times{\mathbb R}^3\rightarrow{\mathbb R}^3$ is the external time-dependent magnetic potential. (The cases $\alpha=0$ and $\gamma=1$ would make $\mathcal{N}(u)$ a trivial linear term.)
The novelty here will be the choice of $A$ within a considerably larger class of rough potentials than what is customarily considered in the literature so far -- as a consequence, we will be in a position to prove the existence of global-in-time weak solutions, without attacking for the moment the general issue of global well-posedness. Concerning the non-linearity, in the regimes $\gamma\in(1,5)$ and $\alpha\in(0,3)$ we say that $\mathcal{N}(u)$ is \emph{energy sub-critical}, while for $\gamma=5$ it is \emph{energy critical}. Given the defocusing character of the equation, it will not be restrictive henceforth to set $\lambda_1=\lambda_2=1$, and in fact all our discussion applies also to the case when one of such couplings is set to zero. The relevance of equation \eqref{eq:magneticNLS} is hard to overestimate, both for the interest it deserves per se, given the variety of techniques that have been developed for its study, and for the applications in various contexts in physics. Among the latter, \eqref{eq:magneticNLS} is the typical effective evolution equation for the quantum dynamics of an interacting Bose gas subject to an external magnetic field, and as such it can be derived in suitable scaling limits of infinitely many particles \cite{am_GPlim,S-2008,Benedikter-Porta-Schlein-2015,AO-GP_magnetic_lapl-2016volume}: in this context the $|u|^{\gamma-1}u$ term with $\gamma=3$ (resp., $\gamma=5$) arises as the self-interaction term due to a two-body (resp., three-body) inter-particle interaction of short scale, whereas the $(|\cdot|^{-\alpha}\ast|u|^2)u$ term accounts for a two-body interaction of mean-field type, whence its non-local character. On the other hand \eqref{eq:magneticNLS} arises also as an effective equation for the dynamics of quantum plasmas. Indeed, for densely charged plasmas, the pressure term in the degenerate (i.e.
zero-temperature) electron gas is effectively given by a non-linear function of the electron charge density \cite{Haas-plasmas-2011}, which in the wave-function dynamics corresponds to a power-type non-linearity (see for instance \cite{ADM-2017} for more details). In the absence of an external field ($A\equiv 0$), equation \eqref{eq:magneticNLS} has been studied extensively, and global well-posedness and scattering are well understood, both in the critical and in the sub-critical case (\cite{cazenave,CKSTT-2008-energycrit,Miao-Hartree-2007,Dodson-2013,Ginibre-Velo-1984TS,Ginibre-Velo-1998Prov,Fang-Han-Zheng-NLS-2011}). Such results are mainly based upon (variants of the) perturbation theory with respect to the linear dynamics, built on Strichartz estimates for the free Schr\"{o}dinger propagator (\cite{Ginibre-Velo-CMP1992,Keel-Tao-endpoint-1998}). However, when $A\not\equiv 0$, the picture is much less developed. The main mathematical difficulty is to obtain suitable dispersive and smoothing estimates for the \emph{linear} magnetic evolution operator, in order to exploit a standard fixed point argument where the non-linearity is treated as a perturbation. For \emph{smooth} magnetic potentials, local-in-time Strichartz estimates were established under suitable growth assumptions \cite{Yajima-magStri-91,Mizutani-2014-superquadratic}, based on the construction of the fundamental solution for the magnetic Schr\"{o}dinger flow by means of the method of parametrices and time slicing \`a la Fujiwara \cite{Fujiwara-JAM1979}, together with Kato's perturbation theory. If the potential has some Sobolev regularity and is sufficiently small, then Strichartz-type estimates were obtained \cite{Stefanov-MagnStrich-2007} by studying the parametrix associated with the derivative Schr\"odinger equation $\partial_t u-i\Delta u+A\cdot\nabla u=0$, exploiting the methods developed by Doi in \cite{Doi-1994,Doi-1996}.
Global well-posedness of \eqref{eq:magneticNLS} and stability results in the case of suitable smooth potentials are proved in \cite{DeBouard1991,naka-shimo-2005,Michel_remNLS_2008}. As far as \emph{non-smooth} magnetic potentials are concerned, magnetic Strichartz estimates are still available, with a number of restrictions. When $A$ is time-independent, global-in-time magnetic Strichartz estimates were established by various authors under suitable spectral assumptions (absence of zero-energy resonances) on the magnetic Laplacian associated with $A$ \cite{EGSchlag-2008,EGSchlag-2009,DAncona-Fanelli-StriSmoothMagn-2008}, or alternatively under suitable smallness of the so-called non-trapping component of the magnetic field \cite{DAFVV-2010}, up to the critical scaling $|A(x)|\sim |x|^{-1}$. Counterexamples at criticality are also known \cite{Fanelli-Garcia-2011}. In the time-dependent case, magnetic Strichartz estimates are available only under a suitable smallness condition on $A$ \cite{Georgiev-Stefanov-Tarulli-2007,Stefanov-MagnStrich-2007}. Beyond the regime of Strichartz-controllable magnetic fields very little is known, despite the extreme topicality of the problem in applications with potentials $A$ that are rough, have strong singularities locally in space, and have a very mild decay at spatial infinity, virtually an $L^\infty$-behaviour. This generic case can actually be covered, and global well-posedness for \eqref{eq:magneticNLS} was indeed established \cite{M-2015-nonStrichartzHartree}, by means of energy methods, as an alternative to the lack of magnetic Strichartz estimates.
However, such an approach is only applicable to non-local non-linearities with energy sub-critical potential (in the notation of \eqref{eq:magneticNLS}: $\lambda_1=0$ and $\alpha\leqslant 2$), for it crucially relies on the fact that the non-linearity is then locally Lipschitz in the energy space, power-type non-linearities being instead far less regular and hence escaping this method. The same feature indeed allows one to extend globally in time the well-posedness for the Maxwell-Schr\"odinger system in higher regularity spaces \cite{Nakamura-Wada-MaxwSchr_CMP2007}. In this work we are concerned precisely with the generic case where in \eqref{eq:magneticNLS} neither are the external magnetic fields Strichartz-controllable, nor can the non-linearity be handled with energy methods. The key idea is then to work out first the global well-posedness of an initial value problem in which an additional source of smoothing for the solution is introduced, as the one provided by the magnetic Laplacian is not sufficient. In a recent work by the first author and collaborators \cite{ADM-2017}, placed in the closely related setting of non-linear Maxwell-Schr\"{o}dinger systems, the regularisation was provided by Yosida's approximation of the identity. Here, instead, we introduce a parabolic regularisation, in the same spirit as \cite{GuoNakStr-1995} for the Maxwell-Schr\"{o}dinger system. The net result is the addition of a heat kernel effect in the linear propagator, whence the desired smoothing. At the removal of the regularisation by a compactness argument, we obtain one -- not necessarily unique -- global-in-time weak solution with finite energy, which is going to be our main result (Theorem \ref{th:main} below). To be concrete, let us first state the conditions on the magnetic potential.
\begin{hyp}\label{hyp:assum1_alt} The magnetic potential $A$ belongs to one of the two classes $\mathcal{A}_1$ or $\mathcal{A}_2$ defined by \[ \begin{split} \mathcal{A}_1\;&:=\;\widetilde{\mathcal{A}}_1\cap\mathcal{R} \\ \mathcal{A}_2\;&:=\;\widetilde{\mathcal{A}}_2\cap\mathcal{R} \,, \end{split} \] where \[ \widetilde{\mathcal{A}}_1\;:=\; \left\{ A=A(t,x)\left|\! \begin{array}{c} \mathrm{div}_x A=0\;\textrm{ for a.e.}\;t\in\mathbb{R}, \\ A=A_1+A_2\textrm{ such that, for $j\in\{1,2\}$,} \\ A_j\in L^{a_j}_\mathrm{loc}(\mathbb{R},L^{b_j}(\mathbb{R}^3,\mathbb{R}^3)) \\ a_j\in(4, +\infty],\quad b_j\in(3, 6),\quad \frac{2}{\,a_j}+\frac{3}{\,b_j}<1 \end{array}\!\! \right.\right\} \] and \[ \widetilde{\mathcal{A}}_2\;:=\; \left\{ A=A(t,x)\left|\! \begin{array}{c} \mathrm{div}_x A=0\;\textrm{ for a.e.}\;t\in\mathbb{R}, \\ A=A_1+A_2\textrm{ such that, for $j\in\{1,2\}$,} \\ A_j\in L^{a_j}_\mathrm{loc}(\mathbb{R},W^{1, \frac{3b_j}{3+b_j}}(\mathbb{R}^3,\mathbb{R}^3)) \\ a_j\in(2, +\infty],\quad b_j\in(3, +\infty],\quad \frac{2}{\,a_j}+\frac{3}{\,b_j}<1 \end{array}\!\! \right.\right\}\,, \] and where \[ \mathcal{R}\;:=\;\big\{ A\in\widetilde{\mathcal{A}}_1\textrm{ or }A\in\widetilde{\mathcal{A}}_2\:|\: \partial_t A_j\in L^1_{\mathrm{loc}}(\mathbb{R},L^{b_j}(\mathbb{R}^3,\mathbb{R}^3)),\,j=1, 2\big\}\,. \] Associated to such classes, we define \begin{equation*} \begin{aligned} \|A\|_{\mathcal A_1}:=\;&\|A_1\|_{L^{a_1}_tL^{b_1}_x}+\|A_2\|_{L^{a_2}_tL^{b_2}_x}\\ \|A\|_{\mathcal A_2}:=\;&\|A_1\|_{L^{a_1}_tW^{1, \frac{3b_1}{3+b_1}}_x}+\|A_2\|_{L^{a_2}_tW^{1, \frac{3b_2}{3+b_2}}_x}\,. \end{aligned} \end{equation*} \end{hyp} A few observations are in order. First and foremost, both classes $\mathcal{A}_1$ and $\mathcal{A}_2$ include magnetic potentials for which in general the validity of Strichartz estimates for the magnetic Laplacian is not known.
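The exponent constraints defining $\widetilde{\mathcal{A}}_1$ are elementary arithmetic conditions on the pair $(a_j,b_j)$, so they can be encoded directly. The helper below is our own illustration (the function name and interface are not from the paper):

```python
import math

# Admissibility check for the exponent pair (a, b) of a component A_j in the
# class A~_1:  a in (4, +inf],  b in (3, 6),  and  2/a + 3/b < 1.

def in_class_A1(a: float, b: float) -> bool:
    """True if (a, b) is an admissible exponent pair for the class A~_1."""
    if not a > 4:              # a in (4, +inf]; math.inf is allowed
        return False
    if not 3 < b < 6:
        return False
    return 2.0 / a + 3.0 / b < 1.0

print(in_class_A1(math.inf, 4.0))   # 2/inf + 3/4 = 0.75 < 1
print(in_class_A1(5.0, 3.5))        # 2/5 + 6/7 > 1
```

Note that the scaling condition $\frac{2}{a}+\frac{3}{b}<1$ is the binding one near the endpoints: large $a$ forces $b>3$ strictly away from $3$.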
A large part of our intermediate results, including in particular the local theory in the energy space, is established for magnetic potentials in the larger classes $\widetilde{\mathcal{A}}_1$ and $\widetilde{\mathcal{A}}_2$. The mild amount of regularity in time provided by the intersection with the class $\mathcal{R}$ is needed to infer suitable a priori bounds on the solution from the estimates on the total energy. This allows one to extend globally in time the solution to the regularised problem. Regularity in time of the external potential is not needed when equation \eqref{eq:magneticNLS} is studied in the \emph{mass sub-critical} regime, i.e., when $\gamma\in(1,\frac73)$ and $\alpha\in(0,2)$, and when $\max{\{b_1,b_2\}}\in (3,6)$. In this case we are able to work with the more general condition $A\in\widetilde{\mathcal A}_1$. This is a customary fact in the context of Schr\"{o}dinger equations with time-dependent potentials, as well known since \cite{Yajima1987_existence_soll_SE} (compare \cite[Theorem 1.1]{Yajima1987_existence_soll_SE} and \cite[Theorem 1.4]{Yajima1987_existence_soll_SE}: $L^a$-integrability in time of the electric external potential yields an $L^p$-theory in space, whereas additional $L^a$-integrability of the time derivative of the potential yields an $H^2$-theory in space). Our aim here of studying finite energy solutions to \eqref{eq:magneticNLS} thus requires some intermediate assumptions on the magnetic potential, determined by the class $\mathcal R$ above. See also Proposition 1.7 in \cite{Carles} where a similar issue is considered. The additional requirement on $\nabla A$ present in the class $\mathcal{A}_2$ is taken to accommodate slower decay at infinity for $A$, much slower than the behaviour $|A(x)|\sim |x|^{-1}$ (and in fact even an $L^\infty$-behaviour) which, as mentioned before, is critical for the validity of magnetic Strichartz inequalities.
Last, it is worth remarking that the divergence-free condition $\mathrm{div}_x A=0$ is assumed merely for convenience: our entire analysis can be easily extended to the case where $\mathrm{div}_x A$ belongs to suitable Lebesgue spaces, by regarding it as a given (electrostatic) scalar potential. Here is finally our main result. Clearly, there is no fundamental difference in studying solutions forward or backward in time; as customary, we shall henceforth only consider the problem for $t\geqslant 0$, our entire discussion being repeatable for the case $t\leqslant 0$. \begin{theorem}[Existence of global, finite energy weak solutions]\label{th:main}~ \noindent Let the magnetic potential $A$ be such that $A\in\mathcal A_1$ or $A\in\mathcal A_2$, and take $\gamma\in (1,5]$, $\alpha\in(0,3)$. Then, for every initial datum $f\in H^1({\mathbb R}^3)$, the Cauchy problem \begin{equation}\label{eq:CauMNLS} \begin{split} & \begin{cases} \;\;\mathrm{i}\,\partial_t u\;=\; -(\nabla-\mathrm{i}\,A)^2u+|u|^{\gamma-1}u+(|\cdot|^{-\alpha}*|u|^2)u \\ \;\;u(0,\cdot)\;=\;f \end{cases} \\ & \quad t\in[0,+\infty),\;\; x\in{\mathbb R}^3 \end{split} \end{equation} admits a global weak $H^1$-solution \begin{equation*} u\;\in\; L_{\mathrm{loc}}^{\infty}([0,+\infty),H^1({\mathbb R}^3))\cap W_{\mathrm{loc}}^{1,\infty}([0,+\infty),H^{-1}({\mathbb R}^3))\,, \end{equation*} meaning that \eqref{eq:magneticNLS} is satisfied for a.e.~$t\in[0,+\infty)$ as an identity in $H^{-1}$ and $u(0,\cdot)=f$. Moreover, the energy \begin{equation*} \begin{split} \mathcal{E}(u)(t)\;&:=\;\int_{{\mathbb R}^3}\Big({\textstyle{\frac{1}{2}}}|(\nabla-\mathrm{i} A(t))\,u|^2+{\textstyle\frac{1}{\gamma +1}}|u|^{\gamma +1}+{\textstyle{\frac{1}{4}}}(|x|^{-\alpha}*|u|^2)|u|^2\Big)\,\mathrm{d} x \end{split} \end{equation*} is finite and bounded on compact intervals.
\end{theorem} In the remaining part of this Introduction, let us elaborate further on the general ideas behind our proof of Theorem \ref{th:main}. As previously mentioned, we introduce a small dissipation term in the equation \begin{equation}\label{eq:visc_nls} \mathrm{i}\,{\partial}_tu\;=\;-(1-\mathrm{i}\,\varepsilon)(\nabla-\mathrm{i}\,A)^2u+\mathcal N(u) \end{equation} and we study the approximated problem. Similar parabolic regularisation procedures are commonly used in PDEs, see for example the vanishing viscosity approximation in fluid dynamics or in systems of conservation laws; in fact, this was also exploited in a similar context by Guo-Nakamitsu-Strauss to study the existence of finite energy weak solutions to the Maxwell-Schr\"odinger system \cite{GuoNakStr-1995}. By exploiting the parabolic regularisation, we can now regard $\mathrm{i}\,{\partial}_tu+(1-\mathrm{i}\,\varepsilon)\Delta u$ as the main linear part of the equation and treat $(1-\mathrm{i}\,\varepsilon)(2\,\mathrm{i}\,A\cdot\nabla u+|A|^2u)+\mathcal{N}(u)$ as a perturbation. Evidently, this cannot be done in the purely Hamiltonian case $\varepsilon=0$: indeed, the term $A\cdot\nabla u$ is not a Kato perturbation of the free Laplacian, and the whole derivative Schr\"odinger equation must be considered as the principal part \cite{Stefanov-MagnStrich-2007}. We can instead establish the local well-posedness in the energy space for the approximated Cauchy problem \begin{equation}\label{eq:visc_CauMNLS} \begin{split} & \begin{cases} \;\;\mathrm{i}\,\partial_t u\;=\; -(1-\mathrm{i}\,\varepsilon)(\nabla-\mathrm{i}\,A)^2u+|u|^{\gamma-1}u+(|\cdot|^{-\alpha}*|u|^2)u \\ \;\;u(0,\cdot)\;=\;f \end{cases} \\ &\quad t\in[0,T]\,,\;\; x\in{\mathbb R}^3\,. \end{split} \end{equation} We first obtain suitable Strichartz-type and smoothing estimates for the viscous magnetic evolution semi-group.
This is done by exploiting the smoothing effect of the heat-Schr\"odinger semi-group $t\mapsto e^{(\mathrm{i}+\varepsilon)t\Delta}$ and by inferring the same space-time bounds for the viscous magnetic evolution, in a similar fashion as in \cite{Yajima-magStri-91,Naibo-Stefanov-MathAnn2006}, where scalar (electrostatic) potentials are treated as perturbations of the free Schr\"odinger evolution. Next, the a priori bounds on the total mass and the total energy allow us to extend the solution of the regularised problem globally in time. It is worth stressing that such global well-posedness holds in the energy-critical case too: indeed, when $\gamma=5$ the bounds deduced from the energy dissipation provide a uniform-in-time control on some Strichartz-type norms, and the argument is then completed by means of the blow-up alternative for the critical case. The mass/energy a priori bounds turn out to be uniform in the regularising parameter $\varepsilon>0$, which yields the needed compactness for the sequence of approximating solutions. It is then possible to remove the regularisation and to show the existence of a finite energy weak solution to our original problem \eqref{eq:CauMNLS}, at the obvious price of losing uniqueness, as well as continuous dependence on the initial data. The material is organized as follows: in Section \ref{sec:preliminaries} we collect the preliminary notions and results needed in our analysis; in particular, we clarify the notion of weak (and strong) $H^1$-solution and we derive suitable space-time estimates for the heat-Schr\"odinger evolution. In Section \ref{sec:propagator} we study the smoothing property of the magnetic linear Schr\"odinger equation with a parabolic regularisation. In Section \ref{sec:LWP} we prove local existence for the regularised magnetic non-linear Schr\"{o}dinger equation \eqref{eq:visc_nls}.
In Section \ref{eq:mass-energy} we prove mass and energy estimates for \eqref{eq:visc_nls}, together with certain a priori bounds. In Section \ref{eq:eps-GWP} we use the energy estimates and the a priori bounds to extend the solution (forward) globally in time, both in the energy sub-critical and in the critical case. In Section \ref{sec:remov-reg}, using a compactness argument, we remove the regularisation, eventually proving the main theorem. \section{Preliminaries and notation}\label{sec:preliminaries} In this Section we collect the definitions and main tools that we shall use in the rest of the work. We begin with a few remarks on our notation. For two positive quantities $P$ and $Q$, we write $P\lesssim Q$ to mean that $P\leqslant CQ$ for some constant $C$ independent of the variables or of the parameters on which $P$ and $Q$ depend, unless explicitly declared; in the latter case we write, self-explanatorily, $P\lesssim_{\alpha} Q$, and the like. Given $p_1,\ldots, p_n\in [1,+\infty]$, we define $p=p_1*p_2*\ldots *p_n$ by $$ \frac1p=\frac{1}{p_1}+\frac{1}{p_2}+\ldots +\frac{1}{p_n}\,.$$ The same operation can be extended component-wise to vectors in $[1,+\infty]^d$, and we still denote it by $*$. Thus, for example, $(s,p)=(s_1,p_1)*(s_2,p_2)$ will mean $s^{-1}=s_1^{-1}+s_2^{-1}$ and $p^{-1}=p_1^{-1}+p_2^{-1}$. Given $p\in[1,+\infty]$, we denote by $p'$ its H\"{o}lder dual exponent, defined by $p*p'=1$. Henceforth, we use the symbols $\mathrm{div}$, $\nabla$, and $\Delta$ to denote derivations in the spatial variables only. When referring to the vector field $A:\mathbb{R}^3\to\mathbb{R}^3$, conditions like $A\in L^p(\mathbb{R}^3)$ are to be understood as $A\in L^p(\mathbb{R}^3,\mathbb{R}^3)$.
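As a small worked example of this exponent algebra (our own illustration), note that
\[
2*6\;=\;{\textstyle\frac32}\,,\qquad (\infty,2)*(2,6)\;=\;(2,{\textstyle\frac32})\,,\qquad ({\textstyle\frac32})'\;=\;3\,,
\]
since $\frac12+\frac16=\frac23$ and, for the dual exponent, $\frac23+\frac13=1$.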
As customary, in a self-explanatory manner we will frequently make only the dependence on $t$ explicit in symbols such as $A(t)$, $u(t)$, $\mathcal{N}(u(t))$, $(\nabla-\mathrm{i}\,A(t))u$, etc., instead of writing $A(t,x)$, $u(t,x)$, $(\mathcal{N}(u))(t,x)$, $((\nabla-\mathrm{i}\,A)u)(t,x)$, etc. The short-cut `NLS' refers, as usual, to the non-linear Schr\"{o}dinger equation, in the sense that will be specified in the following. For sequences and convergence of sequences, we write $(u_n)_n$ and $u_n\to u$ for $(u_n)_{n\in\mathbb{N}}$ and $u_n\to u$ as $n\to+\infty$. \subsection{Magnetic Laplacian and magnetic Sobolev space} We now clarify the meaning of the symbol $(\nabla-\mathrm{i} A(t))^2$. As mentioned already in the Introduction, formally $$(\nabla-\mathrm{i} A(t))^2=\Delta-2\,\mathrm{i}\,A(t)\cdot\nabla-\mathrm{i}\,(\mathrm{div} A(t))-|A(t)|^2\,.$$ In our setting of divergence-free magnetic potentials, this becomes $$(\nabla-\mathrm{i} A(t))^2=\Delta-2\,\mathrm{i}\,A(t)\cdot\nabla-|A(t)|^2.$$ If $A(t)\in L^2_{\mathrm{loc}}({\mathbb R}^3)$ for almost every $t\in{\mathbb R}$, which will always be our case, then we define the \emph{magnetic Laplacian} $(\nabla-\mathrm{i} A(t))^2$ as a (time-dependent) distributional operator, according to the following straightforward Lemma. \begin{lemma}[Distributional meaning of the magnetic Laplacian] Assume that, for almost every $t\in{\mathbb R}$, $A(t)\in L^2_{\mathrm{loc}}({\mathbb R}^3)$ with $\mathrm{div} A(t)=0$. Then for almost every $t\in{\mathbb R}$, $(\nabla-\mathrm{i} A(t))^2$ is a map from $L^1_{\mathrm{loc}}({\mathbb R}^3)$ to $\mathcal{D}'({\mathbb R}^3)$, which acts on a generic $f\in L^1_{\mathrm{loc}}({\mathbb R}^3)$ as $$(\nabla-\mathrm{i} A(t))^2f\;=\;\Delta f-2\,\mathrm{i}\,A(t)\cdot\nabla f-|A(t)|^2f\,.$$ \end{lemma} In order to qualify such a distribution as an element of a suitable functional space, it is natural to work with the magnetic Sobolev space defined as follows.
(Here, with respect to our general setting, $A$ is meant to be a magnetic vector potential at a fixed time.) \begin{definition}\label{de:magn_space} Let $A\in L_{\mathrm{loc}}^2({\mathbb R}^3)$. We define the \emph{magnetic Sobolev space} $$H_A^{1}({\mathbb R}^3):=\{f\in L^2({\mathbb R}^3)\,|\,(\nabla-\mathrm{i}\,A)f\in L^2({\mathbb R}^3)\}$$ equipped with the norm $$\Vert f\Vert_{H_A^{1}({\mathbb R}^3)}^2\;:=\;\Vert f\Vert_{L^2({\mathbb R}^3)}^2+\Vert(\nabla-\mathrm{i}\,A)f\Vert_{L^2({\mathbb R}^3)}^2\,,$$ which makes $H_A^{1}({\mathbb R}^3)$ a Banach space. \end{definition} We recall \cite[Theorem 7.21]{Lieb-Loss-Analysis} that, when $A\in L_{\mathrm{loc}}^2({\mathbb R}^3)$, any $f\in H^1_A({\mathbb R}^3)$ satisfies the \emph{diamagnetic inequality} \begin{equation}\label{eq:diam} |(\nabla |f|)(x)|\;\leqslant\;|((\nabla-\mathrm{i}\,A)f)(x)|\quad\mbox{for a.e. }x\in{\mathbb R}^3\,. \end{equation} The following two Lemmas provide useful magnetic estimates in our regime for $A$. \begin{lemma} Assume that $A\in\mathcal{A}_1$ or $A\in\mathcal{A}_2$. Then, for almost every $t\in{\mathbb R}$, \begin{equation}\label{H1H-1} \Vert 2\,\mathrm{i}\, A(t)\cdot\nabla f+|A(t)|^2f\Vert_{H^{-1}({\mathbb R}^3)}\;\lesssim\; C_A(t)\Vert f\Vert_{H^1({\mathbb R}^3)}\,, \end{equation} where \begin{equation*} C_A(t)\;:=\;1+\Vert A_1(t)\Vert^2_{L^{b_1}({\mathbb R}^3)}+\Vert A_2(t)\Vert^2_{L^{b_2}({\mathbb R}^3)}\,. \end{equation*} In particular, for almost every $t\in{\mathbb R}$, $(\nabla-\mathrm{i} A(t))^2$ is a continuous map from $H^1({\mathbb R}^3)$ to $H^{-1}({\mathbb R}^3)$. \end{lemma} \begin{proof} The proof is a straightforward application of Sobolev's embedding and H\"older's inequality. \end{proof} \begin{lemma}\label{le:norm_equiv} Let $A\in L^b({\mathbb R}^3)$ with $b\in[3,+\infty]$.
\begin{itemize} \item[(i)] One has \begin{equation}\label{HAem} \|f\|_{L^q({\mathbb R}^3)}\lesssim \|f\|_{H_A^{1}({\mathbb R}^3)}\,,\quad q\in[2,6]\,, \end{equation} with the constant in \eqref{HAem} independent of $A$; hence the embedding $H_A^{1}({\mathbb R}^3)\hookrightarrow L^q({\mathbb R}^3)$ for $q\in[2,6]$. \item [(ii)] One has \begin{equation}\label{eq:norm_equiv} (1+\|A\|_{L^b({\mathbb R}^3)})^{-1}\|f\|_{H^{1}({\mathbb R}^3)}\;\lesssim\; \|f\|_{H_A^{1}({\mathbb R}^3)}\;\lesssim\; (1+\|A\|_{L^b({\mathbb R}^3)})\|f\|_{H^{1}({\mathbb R}^3)}\,, \end{equation} whence $H_A^{1}({\mathbb R}^3)\cong H^{1}({\mathbb R}^3)$ as an isomorphism between Banach spaces. \end{itemize} \end{lemma} \begin{proof} The proof is a straightforward application of Sobolev's embedding, H\"older's inequality, and the diamagnetic inequality. \end{proof} \begin{remark} As an immediate consequence of Lemma \ref{le:norm_equiv}, given a potential $A\in\widetilde{\mathcal{A}}_1$ or $A\in \widetilde{\mathcal{A}}_2$, for almost every $t>0$ the magnetic Sobolev spaces $H_{A(t)}^1({\mathbb R}^3)$ are all equivalent to the ordinary Sobolev space $H^1({\mathbb R}^3)$. \end{remark} \subsection{Notion of solutions} We now give the precise notion of strong and weak solutions for the Cauchy problem \eqref{eq:CauMNLS} and its regularised version \eqref{eq:visc_CauMNLS}. For the sake of a comprehensive discussion, let us consider the general Cauchy problem \begin{equation}\label{eq:ge_cau} \begin{split} &\begin{cases} \;\;\mathrm{i}\,\partial_t u\;=\;c\,(\Delta u-2\,\mathrm{i}\,A(t)\cdot\nabla u-|A(t)|^2u)+\mathcal{N}(u) \\ \;\;u(0,\cdot)\;=\;f \end{cases} \\ &\qquad t\in I:=[0,T)\,,\;\; x\in\mathbb{R}^3\,, \end{split} \end{equation} for some $T>0$ and $c\in\mathbb C$ with $\mathfrak{Im}\,c\geqslant 0$. Here the choices $c=-1$ and $c=-1+\mathrm{i}\,\varepsilon$ correspond, respectively, to \eqref{eq:CauMNLS} and \eqref{eq:visc_CauMNLS}.
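The sign condition $\mathfrak{Im}\,c\geqslant 0$ encodes the forward-in-time dissipative character of \eqref{eq:ge_cau}. As a quick formal check (our own side computation, carried out in the free case $A=0$, $\mathcal{N}=0$, and for smooth, decaying solutions), from $\partial_t u=-\mathrm{i}\,c\,\Delta u$ and an integration by parts one gets
\[
\frac{\mathrm{d}}{\mathrm{d}t}\,\|u(t)\|^2_{L^2({\mathbb R}^3)}\;=\;2\,\mathfrak{Re}\int_{{\mathbb R}^3}\bar u\,\partial_t u\,\mathrm{d}x\;=\;2\,\mathfrak{Re}\Big(\mathrm{i}\,c\int_{{\mathbb R}^3}|\nabla u|^2\,\mathrm{d}x\Big)\;=\;-2\,\mathfrak{Im}(c)\,\|\nabla u(t)\|^2_{L^2({\mathbb R}^3)}\,,
\]
so that the mass is non-increasing precisely when $\mathfrak{Im}\,c\geqslant 0$; for $c=-1+\mathrm{i}\,\varepsilon$ this is the dissipation mechanism exploited throughout.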
\begin{definition}\label{de:ws_solution_alt} Let $I:=[0,T)$ for some $T>0$. Given an initial datum $f\in H^1({\mathbb R}^3)$, we say that \begin{itemize} \item[(i)] a local strong $H^1$-solution $u$ to \eqref{eq:ge_cau} on $I$ is a function \[ u\;\in\;\mathcal{C}(I,H^1({\mathbb R}^3))\cap\mathcal{C}^1(I,H^{-1}({\mathbb R}^3)) \] such that $\mathrm{i}\,\partial_t u=c\,(\Delta u-2\,\mathrm{i}\,A(t)\cdot\nabla u-|A(t)|^2u)+\mathcal{N}(u)$ in $H^{-1}(\mathbb{R}^3)$ for all $t\in I$ and $u(0)=f$; \item[(ii)] a local weak $H^1$-solution $u$ to \eqref{eq:ge_cau} on $I$ is a function \[ u\;\in\;L^\infty(I,H^1({\mathbb R}^3))\cap W^{1,\infty}(I,H^{-1}({\mathbb R}^3)) \] such that $\mathrm{i}\,\partial_t u=c\,(\Delta u-2\,\mathrm{i}\,A(t)\cdot\nabla u-|A(t)|^2u)+\mathcal{N}(u)$ in $H^{-1}(\mathbb{R}^3)$ for a.e.~$t\in I$ and $u(0)=f$. \end{itemize} Moreover, a function $u\in L^\infty_{\mathrm{loc}}([0,+\infty),H^1(\mathbb{R}^3))$ is called \begin{itemize} \item[(iii)] a global strong $H^1$-solution to \eqref{eq:ge_cau} if it is a local strong solution for every interval $I=[0,T)$; \item[(iv)] a global weak $H^1$-solution to \eqref{eq:ge_cau} if it is a local weak solution for every interval $I=[0,T)$. \end{itemize} \end{definition} Next, we recall the notion of local and global well-posedness (\cite[Section 3.1]{cazenave}). \begin{definition}\label{de:WP} We say that the equation \[ \mathrm{i}\,\partial_t u\;=\;c\,(\Delta u-2\,\mathrm{i}\,A(t)\cdot\nabla u-|A(t)|^2u)+\mathcal{N}(u) \] is locally well-posed in $H^1({\mathbb R}^3)$ if the following conditions hold: \begin{itemize} \item[(i)] For any initial datum $f\in H^1({\mathbb R}^3)$, the Cauchy problem \eqref{eq:ge_cau} admits a unique local strong $H^1$-solution, defined on a maximal interval $[0,T_{\mathrm{max}})$, with $T_{\mathrm{max}}=T_{\mathrm{max}}(f)\in(0,+\infty]$.
\item[(ii)] One has continuous dependence on the initial data, i.e., if $f_n\rightarrow f$ in $H^1({\mathbb R}^3)$ and $I\subset[0,T_{\mathrm{max}})$ is a closed interval with $0\in I$, then the maximal strong $H^1$-solution $u_n$ of \eqref{eq:ge_cau} with initial datum $f_n$ is defined on $I$ for $n$ large enough and satisfies $u_n\rightarrow u$ in $\mathcal{C}(I,H^1({\mathbb R}^3))$. \item[(iii)] In the energy sub-critical case one has the blow-up alternative: if $T_{\mathrm{max}}<+\infty$, then $$\lim_{t\uparrow T_{\mathrm{max}}}\Vert u(t,\cdot)\Vert_{H^1({\mathbb R}^3)}\;=\;+\infty\,.$$ \end{itemize} We say that the same equation is globally well-posed in $H^1({\mathbb R}^3)$ if it is locally well-posed and if for any initial datum $f\in H^1({\mathbb R}^3)$ the Cauchy problem \eqref{eq:ge_cau} admits a global strong $H^1$-solution. \end{definition} \subsection{Smoothing estimates for the heat-Schr\"odinger flow}\label{sec:smoothingestHSflow} Let us now analyse the smoothing properties of the heat and the Schr\"odinger flows generated by the free Laplacian. We begin by recalling the well-known dispersive estimates for the Schr\"odinger equation \begin{equation}\label{disps} \|e^{\mathrm{i} t\Delta}f\|_{L^p({\mathbb R}^3)}\;\lesssim\;|t|^{-\frac32\left(\frac{1}{p'}-\frac1p\right)}\|f\|_{L^{p'}({\mathbb R}^3)}\,, \quad p\in[2,+\infty]\,,\;\;t\neq 0\,, \end{equation} and the $L^p$-$L^r$ estimates for the heat flow \begin{eqnarray} \|e^{t\Delta}f\|_{L^r({\mathbb R}^3)}\!\!\!&\lesssim& \!\!t^{-\frac{3}{2}\left(\frac{1}{p}-\frac1r\right)}\|f\|_{L^{p}({\mathbb R}^3)} \label{disph}\\ & & \qquad\qquad\qquad\qquad\qquad 1\leqslant p\leqslant r\leqslant +\infty,\,t>0\,.
\nonumber\\ \|\nabla e^{t\Delta}f\|_{L^r({\mathbb R}^3)}\!\!\!&\lesssim& \!\!t^{-\frac{3}{2}\left(\frac{1}{p}-\frac1r\right)-\frac12}\|f\|_{L^{p}({\mathbb R}^3)}\,. \label{disphg} \end{eqnarray} We also recall the definition of admissible pairs for the Schr\"odinger flow in three dimensions. \begin{definition}\label{destro} A pair $(q,r)$ is called admissible if \begin{equation*} \frac2q+\frac{3}{r}=\frac{3}{2},\qquad r\in[2,6]\,. \end{equation*} The pair $(2,6)$ is called endpoint, while the others are called non-endpoint. A pair $(s,p)$ is called dual-admissible if $(s,p)=(q',r')$ for some admissible pair $(q,r)$, namely \begin{equation*} \frac2s+\frac{3}{p}=\frac{7}{2},\qquad p\in[{\textstyle{\frac{6}{5}}},2]\,. \end{equation*} \end{definition} The dispersive estimate \eqref{disps} yields a whole class of space-time estimates for the Schr\"odinger flow \cite{Ginibre-Velo-1998Prov, Yajima1987_existence_soll_SE, Keel-Tao-endpoint-1998}. \begin{proposition}[Strichartz estimates]\label{freestr}~ \begin{itemize} \item[(i)] For any admissible pair $(q,r)$, the following homogeneous estimate holds: \begin{equation}\label{eq:hofree} \|e^{\mathrm{i} t\Delta}f\|_{L^q({\mathbb R};L^r({\mathbb R}^3))}\;\lesssim\;\|f\|_{L^2({\mathbb R}^3)}\,. \end{equation} \item[(ii)] Let $I$ be an interval of ${\mathbb R}$ (bounded or not), and $\tau,t\in \overline{I}$. For any admissible pair $(q,r)$ and any dual-admissible pair $(s,p)$, the following inhomogeneous estimate holds: \begin{equation}\label{eq:rifree} \Big\|\int_{\tau}^t e^{\mathrm{i}(t-\sigma)\Delta}F(\sigma)\,\mathrm{d}\sigma\Big\|_{L^q(I;L^r({\mathbb R}^3))}\;\lesssim\;\|F\|_{L^s(I;L^p({\mathbb R}^3))}\,.
\end{equation} \end{itemize} \end{proposition} Similarly (see, e.g., \cite[Section 2.2.2]{Wang-Zhaohui-Chengchun_HarmonicAnalysisI_2011}), by means of \eqref{disph}-\eqref{disphg} one infers an analogous class of space-time estimates for the heat propagator. \begin{proposition}[Space-time estimates for $e^{t\Delta}$]~ \begin{itemize} \item[(i)] For any admissible pair $(q,r)$, the following homogeneous estimate holds: \begin{equation}\label{eq:hofree_heat} \|e^{t\Delta}f\|_{L^q([0,+\infty),L^r({\mathbb R}^3))}\;\lesssim\;\|f\|_{L^2({\mathbb R}^3)}\,. \end{equation} \item[(ii)] Let $I\subseteq {\mathbb R}$ be an interval of the form $[\tau,T)$, with $T\in(\tau,+\infty]$. For any admissible pair $(q,r)$ and any dual-admissible pair $(s,p)$, the following inhomogeneous estimate holds: \begin{equation}\label{eq:rifree_heat} \Big\|\int_{\tau}^t e^{(t-\sigma)\Delta}F(\sigma)\,\mathrm{d}\sigma\Big\|_{L^q(I;L^r({\mathbb R}^3))}\;\lesssim\;\|F\|_{L^s(I;L^p({\mathbb R}^3))}\,. \end{equation} \end{itemize} \end{proposition} We can also combine the previous results in order to infer $L^p$-$L^r$ estimates (Proposition \ref{prop:pw_ex_lemma}) and space-time estimates (Proposition \ref{le:stri_hs}) for the heat-Schr\"o\-dinger propagator. \begin{proposition}[Pointwise-in-time estimates for the heat-Schr\"odinger flow]\label{prop:pw_ex_lemma}~ \noindent For any $t>0$, $p\in[1,2]$, and $r\in[2,+\infty]$, \begin{eqnarray} \|e^{(\mathrm{i}+\varepsilon)t\Delta}f\|_{L^r({\mathbb R}^3)}\!\!&\lesssim& \!\!\varepsilon^{-\frac{3}{2}|\frac{1}{p'}-\frac{1}{r}|}t^{-\frac{3}{2}(\frac1p-\frac1r)}\|f\|_{L^p({\mathbb R}^3)} \label{hsdisp} \\ \|\nabla e^{(\mathrm{i}+\varepsilon)t\Delta}f\|_{L^r({\mathbb R}^3)}\!\!&\lesssim&\!\! \varepsilon^{-\frac{3}{2}|\frac{1}{p'}-\frac{1}{r}|-\frac12}t^{-\frac{3}{2}(\frac1p-\frac1r)-\frac12}\|f\|_{L^p({\mathbb R}^3)}\,.
\label{hsdispgr} \end{eqnarray} \end{proposition} \begin{proof} The proof is straightforward and follows by combining the decay estimates of the heat and of the Schr\"odinger propagators, see formulas \eqref{disps}-\eqref{disphg} above. In fact, similar decay estimates also follow by simply ignoring the hyperbolic part given by the Schr\"odinger evolution, however with a worse control in terms of $\varepsilon$. \end{proof} \begin{proposition}[Space-time estimates for the heat-Schr\"odinger flow]\label{le:stri_hs}~ Let $\varepsilon>0$ and let $(q,r)$ be an admissible pair. \begin{itemize} \item[(i)] One has (homogeneous Strichartz estimate) \begin{equation}\label{eq:homstrich} \|e^{(\mathrm{i}+\varepsilon)t\Delta}f\|_{L^q([0,T],L^r({\mathbb R}^3))}\;\lesssim\;\|f\|_{L^2({\mathbb R}^3)}\,. \end{equation} \item[(ii)] Let $T>0$ and let the pair $(s,p)$ satisfy \begin{equation}\label{est_dual} \frac{2}{s}+\frac{3}{p}=\frac{7}{2},\qquad \begin{cases} \frac{1}{2}\leqslant\frac{1}{p}\leqslant 1 & 2\leqslant r<3\\ \frac{1}{2}\leqslant\frac{1}{p}<\frac{1}{r}+\frac{2}{3} & 3\leqslant r\leqslant 6\,. \end{cases} \end{equation} Then (inhomogeneous retarded Strichartz estimate) \begin{equation}\label{eq:st} \Big\|\int_{0}^{t} e^{(\mathrm{i}+\varepsilon)(t-\tau)\Delta}F(\tau)\,\mathrm{d}\tau\Big \|_{L^q([0,T],L^r({\mathbb R}^3))}\;\lesssim_{\,\varepsilon}\;\|F\|_{L^s([0,T],L^p({\mathbb R}^3))}\,. \end{equation} \item[(iii)] Assume in addition that $(q,r)$ is non-endpoint. Let $T>0$ and let the pair $(s,p)$ satisfy \begin{equation}\label{est_grad_dual} \frac{2}{s}+\frac{3}{p}=\frac{5}{2},\qquad\frac{1}{2}\leqslant\frac{1}{p}<\frac{1}{r}+\frac{1}{3}\,.
\end{equation} Then (inhomogeneous retarded Strichartz estimate) \begin{equation}\label{eq:grst} \Big\|\nabla\!\!\int_{0}^{t} e^{(\mathrm{i}+\varepsilon)(t-\tau)\Delta}F(\tau)\,\mathrm{d}\tau\,\Big\|_{L^q([0,T],L^r({\mathbb R}^3))}\;\lesssim_{\,\varepsilon}\;\|F\|_{L^s([0,T],L^p({\mathbb R}^3))}\,. \end{equation} \end{itemize} \end{proposition} \begin{remark}\label{re:comp_epsno} In \eqref{est_dual} the range of admissible pairs $(s,p)$ is \emph{larger} than for the Schr\"odinger equation. In fact, dispersive equations, even if hyperbolic, have the remarkable property of enjoying a class of smoothing estimates. More specifically, for the Schr\"odinger equation it can be proved that the inhomogeneous part of the Duhamel formula enjoys a gain of regularity of one derivative in space, see Theorem 4.4 in \cite{Lineares-Ponce-book2015}. However, it is straightforward to check that estimate \eqref{eq:grst} for the heat-Schr\"odinger semi-group is stronger than estimate (4.26) in \cite{Lineares-Ponce-book2015} and is better suited to our problem. \end{remark} \begin{proof}[Proof of Proposition \ref{le:stri_hs}] We begin with the proof of part (i). Combining the homogeneous Strichartz estimate \eqref{eq:hofree} for the Schr\"odinger flow with the estimate $\|e^{\varepsilon t\Delta}f\|_{L^r({\mathbb R}^3)}\lesssim\|f\|_{L^r({\mathbb R}^3)}$, which follows from \eqref{disph}, we get \begin{align*} \|e^{(\mathrm{i}+\varepsilon)t\Delta}f\|_{L^q([0,+\infty),L^r({\mathbb R}^3))}\;&=\; \|e^{\varepsilon t\Delta}e^{\mathrm{i} t\Delta}f\|_{L^q([0,+\infty),L^r({\mathbb R}^3))}\\ &\lesssim\; \|e^{\mathrm{i} t\Delta}f\|_{L^q([0,+\infty),L^r({\mathbb R}^3))}\;\lesssim\;\|f\|_{L^2({\mathbb R}^3)}\,, \end{align*} which proves \eqref{eq:homstrich}. Next we prove part (ii).
In the special case $(q,r)=(+\infty,2)$ and $(s,p)=(1,2)$, the dispersive estimate \eqref{hsdisp} yields \begin{equation}\label{fi_pi} \begin{aligned} &\Big\|\int_{0}^{+\infty} e^{(\mathrm{i}+\varepsilon)|t-\tau|\Delta}F(\tau)\,\mathrm{d}\tau\,\Big\|_{L^{\infty}([0,T],L^2({\mathbb R}^3))}\\ &\qquad\lesssim \int_{0}^{+\infty}\|F(\tau)\|_{L^2({\mathbb R}^3)}\,\mathrm{d}\tau\;=\;\|F\|_{L^{1}([0,T],L^2({\mathbb R}^3))}\,. \end{aligned} \end{equation} For the generic case, namely $(p,r)\neq(2,2)$, owing to \eqref{hsdisp} one obtains \begin{align*} &\Big\|\int_{0}^{+\infty} e^{(\mathrm{i}+\varepsilon)|t-\tau|\Delta}F(\tau)\,\mathrm{d}\tau\,\Big\|_{L^q([0,T],L^r({\mathbb R}^3))}\\ &\qquad\lesssim_{\,\varepsilon}\;\Big\|\int_{0}^{+\infty}|t-\tau|^{-\gamma}\|F(\tau)\|_{L^p({\mathbb R}^3)}\,\mathrm{d}\tau\,\Big\|_{L^q[0,T]}\,, \end{align*} where $\gamma:=\frac{3}{2}(\frac1p-\frac1r)\in(0, 1)$ by the assumptions on $p,r$. The Hardy-Littlewood-Sobolev inequality in time then yields \begin{equation}\label{se_pi} \Big\|\int_{0}^{+\infty} e^{(\mathrm{i}+\varepsilon)|t-\tau|\Delta}F(\tau)\,\mathrm{d}\tau\,\Big\|_{L^q([0,T],L^r({\mathbb R}^3))}\;\lesssim_{\,\varepsilon}\;\|F\|_{L^s([0,T],L^p({\mathbb R}^3))} \end{equation} with $\frac1s=1+\frac1q-\gamma$, namely $\frac{2}{s}+\frac{3}{p}=\frac{7}{2}$. Now, using estimates \eqref{fi_pi}-\eqref{se_pi} and the Christ-Kiselev lemma \cite{Christ-Kiselev-2001}, we deduce the ``retarded'' estimate \eqref{eq:st}. The proof of part (iii) proceeds similarly to that of part (ii).
Indeed, owing to the dispersive estimate with gradient \eqref{hsdispgr} and to the Hardy-Littlewood-Sobolev inequality in time, \begin{align*} &\Big\|\,\nabla\!\!\int_{0}^{+\infty} e^{(\mathrm{i}+\varepsilon)|t-\tau|\Delta}F(\tau)\,\mathrm{d}\tau\,\Big\|_{L^q([0,T],L^r({\mathbb R}^3))}\\ &\qquad\lesssim_{\,\varepsilon}\;\Big\|\int_{0}^{+\infty}|t-\tau|^{-\gamma}\|F(\tau)\|_{L^p({\mathbb R}^3)}\,\mathrm{d}\tau\,\Big\|_{L^q(0,+\infty)} \;\lesssim_{\,\varepsilon}\;\|F\|_{L^s([0,T],L^p({\mathbb R}^3))}\,, \end{align*} where now $\gamma=\frac{3}{2}\left(\frac1p-\frac1r\right)+\frac12\in(0, 1)$ and the exponent $s$ is given by $\frac{2}{s}+\frac{3}{p}=\frac{5}{2}$; this, together with the Christ-Kiselev lemma again, implies \eqref{eq:grst}. \end{proof} For our analysis it will be necessary to apply the above Strichartz estimates for the heat-Schr\"{o}dinger flow in a regime of indices that also guarantees control of the smallness of the constant in each such inequality in terms of the smallness of $T$. This leads us to introduce the following admissibility condition. \begin{definition}\label{de:ad-grad_ad} Let $(q,r)$ be an admissible pair. \begin{itemize} \item[(i)] A pair $(s,p)$ is called a $(q,r)$-admissible pair if \begin{equation}\label{est_dual_loc} \frac{2}{s}+\frac{3}{p}<\frac{7}{2},\qquad \begin{cases} \frac{1}{2}\leqslant\frac{1}{p}\leqslant 1 & 2\leqslant r<3\\ \frac{1}{2}\leqslant\frac{1}{p}<\frac{1}{r}+\frac{2}{3} & 3\leqslant r\leqslant 6\,. \end{cases} \end{equation} \item[(ii)] A pair $(s,p)$ is called a $(q,r)$-grad-admissible pair if \begin{equation}\label{est_grad_dual_loc} \frac{2}{s}+\frac{3}{p}<\frac{5}{2},\quad\frac{1}{2}\leqslant\frac{1}{p}<\frac{1}{r}+\frac{1}{3}\,.
\end{equation} \end{itemize} \end{definition} \begin{remark}\label{re:rel_adm} If $(s,p)$ is a $(q,r)$-grad-admissible pair, then it is also $(q,r)$-admissible. Moreover, if $(s,p)$ is a $(q,r)$-admissible (resp.~$(q,r)$-grad-admissible) pair, and $(q_1,r_1)$ is another admissible pair with $r_1<r$, then $(s,p)$ is also a $(q_1,r_1)$-admissible (resp.~$(q_1,r_1)$-grad-admissible) pair. \end{remark} We can now state a useful Corollary to Proposition \ref{le:stri_hs}. \begin{corollary}\label{co:stri_lo} Let $\varepsilon>0$ and $T>0$, and let $(q,r)$ be an admissible pair. \begin{itemize} \item[(i)] For any $(q,r)$-admissible pair $(s,p)$, \begin{equation}\label{eq:st_st} \begin{split} \Big\|\int_{0}^{t} e^{(\mathrm{i}+\varepsilon)(t-\tau)\Delta}F(\tau)\,\mathrm{d}\tau\,\Big\|_{L^q([0,T],L^r({\mathbb R}^3))}\;&\lesssim_{\,\varepsilon}\;T^{\theta}\|F\|_{L^s([0,T],L^p({\mathbb R}^3))} \\ & \qquad \textstyle \theta:=\frac74-\frac1s-\frac{3}{2p}\,. \end{split} \end{equation} \item[(ii)] Assume in addition that $(q,r)$ is non-endpoint. For any $(q,r)$-grad-admissible pair $(s,p)$, \begin{equation}\label{eq:grst_st} \begin{split} \Big\|\nabla\!\!\int_{0}^{t} e^{(\mathrm{i}+\varepsilon)(t-\tau)\Delta}F(\tau)\,\mathrm{d}\tau\,\Big\|_{L^q([0,T],L^r({\mathbb R}^3))}\;&\lesssim_{\,\varepsilon}\;T^{\theta}\|F\|_{L^s([0,T],L^p({\mathbb R}^3))} \\ & \qquad \textstyle \theta:=\frac54-\frac1s-\frac{3}{2p}\,. \end{split} \end{equation} \end{itemize} In either case, it follows from the assumptions that $\theta>0$. \end{corollary} \subsection{Further technical Lemmas} We conclude the Section by collecting a few technical Lemmas that will be useful for setting up the fixed point argument (Section \ref{sec:propagator}). Let us first introduce the following.
\begin{definition}\label{relevant} Given $T>0$, we define $$X^{(4,3)}[0,T]\;:=\;L^{\infty}([0,T],H^{1}({\mathbb R}^3))\cap L^{4}([0,T],W^{1,3}({\mathbb R}^3))$$ equipped with the Banach norm $$\|\cdot\|_{X^{(4,3)}[0,T]}:=\|\cdot\|_{L^{\infty}([0,T],H^{1}({\mathbb R}^3))}+\|\cdot\|_{L^{4}([0,T],W^{1,3}({\mathbb R}^3))}\,.$$ \end{definition} \begin{remark} By interpolation we have that, for every admissible pair $(q, r)$ with $r\in[2, 3]$, \begin{equation}\label{int_Str} \|u\|_{L^{q}([0,T],W^{1,r}({\mathbb R}^3))}\;\lesssim\; \|u\|_{X^{(4,3)}[0,T]}\,. \end{equation} Furthermore, the Sobolev embedding also yields \begin{equation}\label{int_Str_emb} \|u\|_{L^{q}([0,T],L^{\frac{3r}{3-r}}({\mathbb R}^3))}\;\lesssim\; \|u\|_{X^{(4,3)}[0,T]} \end{equation} for any admissible pair $(q, r)$ with $r\in[2, 3]$. \end{remark} \begin{lemma}\label{le:abcd}~ \begin{itemize} \item[(i)] Let $A\in\widetilde{\mathcal A}_1$ or $A\in\widetilde{\mathcal A}_2$. There exist $(4,3)$-grad-admissible pairs $(s_1,p_1)$, $(s_2,p_2)$ such that, for any $u\in X^{(4,3)}[0,T]$, \begin{equation*} A_i\cdot\nabla u\;\in\; L^{s_i}([0,T],L^{p_i}({\mathbb R}^3))\,,\qquad i\in\{1,2\}\,, \end{equation*} and \begin{equation}\label{holdgradab} \begin{aligned} \|A_i\cdot\nabla u\|_{L^{s_i}([0,T],L^{p_i}({\mathbb R}^3))}\;\lesssim \;\|A_i\|_{L^{a_i}([0,T],L^{b_i}({\mathbb R}^3))}\|u\|_{X^{(4,3)}[0,T]}\,. \end{aligned} \end{equation} \item[(ii)] Let $A\in\widetilde{\mathcal A}_1$.
There exist four $(4,3)$-grad-ad\-mis\-sible pairs $(s_{ij},p_{ij})$, $i,j\in\{1,2\}$, such that, for any $u\in X^{(4,3)}[0,T]$, \begin{equation*} A_i\cdot A_j\,u\;\in\; L^{s_{ij}}([0,T],L^{p_{ij}}({\mathbb R}^3)) \end{equation*} and \begin{equation}\label{holdgradcd} \begin{aligned} &\|A_i\cdot A_j\,u\|_{L^{s_{ij}}([0,T],L^{p_{ij}}({\mathbb R}^3))}\\ &\quad\lesssim\|A_i\|_{L^{a_i}([0,T],L^{b_i}({\mathbb R}^3))}\,\|A_j\|_{L^{a_j}([0,T],L^{b_j}({\mathbb R}^3))}\,\|u\|_{X^{(4,3)}[0,T]}\,. \end{aligned} \end{equation} \item[(iii)] Let $A\in\widetilde{\mathcal A}_2$. There exist four $(4,3)$-admis\-sible pairs $(s_{ij},p_{ij})$, $i,j\in\{1,2\}$, such that, for any $u\in X^{(4,3)}[0,T]$, \begin{equation*} A_i\cdot A_j\,u\;\in\; L^{s_{ij}}([0,T],W^{1,p_{ij}}({\mathbb R}^3)) \end{equation*} and \begin{equation}\label{holdgradcd_grad} \begin{aligned} &\|A_i\cdot A_j\,u\|_{L^{s_{ij}}([0,T],W^{1,p_{ij}}({\mathbb R}^3))}\\ &\quad \lesssim \big(\|A_i\|_{L^{a_i}([0,T],L^{b_i}({\mathbb R}^3))}+\|\nabla A_i\|_{L^{a_i}([0,T],L^{3b_i/(3+b_i)}({\mathbb R}^3))}\big)\;\times\\ &\qquad\times\big(\|A_j\|_{L^{a_j}([0,T],L^{b_j}({\mathbb R}^3))}+\|\nabla A_j\|_{L^{a_j}([0,T],L^{3b_j/(3+b_j)}({\mathbb R}^3))}\big)\;\times \\ &\qquad\times\|u\|_{X^{(4,3)}[0,T]}\,. \end{aligned} \end{equation} \end{itemize} \end{lemma} \begin{proof} The proof consists in repeatedly applying H\"older's inequality and the Sobolev embedding; we omit the standard details. \end{proof} \begin{lemma}\label{le:Vab_grad} Let $A\in\widetilde{\mathcal A}_1$ or $A\in\widetilde{\mathcal A}_2$, and let $\varepsilon>0$. There exists a constant $\theta_{\!A}>0$ such that, for every $T\in(0,1]$, \begin{equation*} \Big\Vert\int_0^te^{(\mathrm{i}+\varepsilon)(t-\sigma)\Delta}A(\sigma)\cdot\nabla u(\sigma)\,\mathrm{d}\sigma\,\Big\Vert_{X^{(4,3)}[0,T]}\;\lesssim_{\,\varepsilon,A}\; T^{\theta_{\!A}}\Vert u\Vert_{X^{(4,3)}[0,T]}\,.
\end{equation*} \end{lemma} \begin{proof} Because of Lemma \ref{le:abcd}(i), \begin{equation*} A_i\cdot\nabla u\in L^{s_i}([0,T],L^{p_i}({\mathbb R}^3))\,,\qquad i\in \{1,2\}\,, \end{equation*} for some $(4,3)$-grad-admissible pairs $(s_1,p_1)$, $(s_2,p_2)$. Applying Corollary \ref{co:stri_lo}(ii) and Lemma \ref{le:abcd}(i) to $A_i\cdot\nabla u$ and setting \begin{equation*} \theta_{\!A}\;:=\;\min\left\{\frac54-\frac{1}{s_1}-\frac{3}{2p_1},\frac54-\frac{1}{s_2}-\frac{3}{2p_2}\right\} \end{equation*} the claim follows. \end{proof} \begin{lemma}\label{le:Vcd_grad} Let $A\in\widetilde{\mathcal A}_1$ or $A\in\widetilde{\mathcal A}_2$, and let $\varepsilon>0$. There exists a constant $\theta_{\!A}>0$ such that, for every $T\in(0,1]$, \begin{equation*} \left\Vert\int_0^te^{(\mathrm{i}+\varepsilon)(t-\sigma)\Delta}|A(\sigma)|^2u(\sigma)\,\mathrm{d}\sigma\right\Vert_{X^{(4,3)}[0,T]}\;\lesssim_{\,\varepsilon,A}\; T^{\theta_{\!A}}\Vert u\Vert_{X^{(4,3)}[0,T]}\,. \end{equation*} \end{lemma} \begin{proof} The proof is similar to the previous one. For example, in the case $A\in\widetilde{\mathcal A}_1$, by Lemma \ref{le:abcd}(ii) we have \begin{equation*} A_i\cdot A_j u\in L^{s_{ij}}([0,T],L^{p_{ij}}({\mathbb R}^3))\,,\qquad i,j\in\{1,2\}\,. \end{equation*} Then we apply Corollary \ref{co:stri_lo}(i). \end{proof} \section{The regularised magnetic Laplacian}\label{sec:propagator} We now discuss the existence of the linear magnetic viscous propagator and prove that, under our assumptions on the magnetic potential, the propagator enjoys the same Strichartz-type estimates for the heat-Schr\"odinger flow already obtained in Subsection \ref{sec:smoothingestHSflow}. The main result of this Section is the following. \begin{theorem}\label{th:energy_linear} Assume that $A\in\widetilde{\mathcal{A}}_1$ or $A\in\widetilde{\mathcal{A}}_2$.
For given $\tau\in\mathbb{R}$, $\varepsilon>0$, and $f\in H^1(\mathbb{R}^3)$, consider the inhomogeneous Cauchy problem \begin{equation}\label{eq:applin} \left\{\begin{aligned} \mathrm{i}\,\partial_t u\;=\;&-(1-\mathrm{i}\,\varepsilon)(\Delta u-2\,\mathrm{i}\,A\cdot\nabla u-|A|^2u)+F+G \\ u(\tau,\cdot)\;=\;&f \\ \end{aligned}\right. \end{equation} and the associated integral equation \begin{equation}\label{duprima} \begin{split} u&(t,\cdot)\;=\;e^{(\mathrm{i}+\varepsilon)(t-\tau)\Delta}f \\ &\quad -\mathrm{i}\!\int_{\tau}^t e^{(\mathrm{i}+\varepsilon)(t-\sigma)\Delta}\big((1-\mathrm{i}\,\varepsilon)(2\,\mathrm{i}\,A\cdot\nabla u+|A|^2u)(\sigma)+F(\sigma)+G(\sigma)\big)\,\mathrm{d}\sigma\,, \end{split} \end{equation} where \begin{itemize} \item $F\in L^{\widetilde{s}}({\mathbb R},W^{1,\widetilde{p}}({\mathbb R}^3))$ for some pair $(\widetilde{s},\widetilde{p})$ that is $(4,3)$-admissible or satisfies \eqref{est_dual} with $(q,r)=(4,3)$, namely $\frac{2}{\widetilde{s}}+\frac{3}{\widetilde{p}}\leqslant\frac{7}{2}$, $\frac{1}{2}\leqslant\frac{1}{\widetilde{p}}<1$; \item $G\in L^{s}({\mathbb R},L^{p}({\mathbb R}^3))$ for some pair $(s, p)$ that is $(4, 3)$-grad-admissible or satisfies \eqref{est_grad_dual} with $(q,r)=(4,3)$, namely $\frac{2}{s}+\frac{3}{p}\leqslant\frac{5}{2}$, $\frac{1}{2}\leqslant\frac{1}{p}<\frac{2}{3}$. \end{itemize} Then there exists a unique solution $u\in\mathcal{C}([\tau,+\infty),H^1({\mathbb R}^3))$ to \eqref{duprima}. Moreover, for any $T>\tau$ and for any Strichartz pair $(q,r)$ with $r\in[2,3]$, \begin{equation}\label{stri_nono_grad} \Vert u\Vert_{L^q([\tau,T],W^{1,r}({\mathbb R}^3))}\;\lesssim_{\,\varepsilon,A,T}\Vert f\Vert_{H^1({\mathbb R}^3)}+\Vert F\Vert_{L^{\widetilde{s}}({\mathbb R},W^{1,\widetilde{p}}({\mathbb R}^3))} +\Vert G\Vert_{L^{s}({\mathbb R},L^{p}({\mathbb R}^3))}\,.
\end{equation} \end{theorem} Theorem \ref{th:energy_linear} shows the existence of a unique solution $u$ to the integral equation \eqref{duprima}. From the assumptions on the magnetic potential and on the source terms $F, G$, and by using standard arguments in the theory of evolution equations (see for example \cite{CH}), we may also infer that $u$ satisfies \eqref{eq:applin} for almost every $t\in{\mathbb R}$ in the sense of distributions. In the case $F=G=0$, the solution $u$ to \eqref{eq:applin} defines an evolution operator: for any $f\in H^1({\mathbb R}^3)$ the \emph{magnetic viscous evolution} is defined by $\mathcal U_{\varepsilon, A}(t, \tau)f=u(t)$, where $u$ is the solution to \eqref{eq:applin} with $F=G=0$. As a consequence of Theorem \ref{th:energy_linear}, $\mathcal U_{\varepsilon, A}(t, \tau)$ enjoys a class of Strichartz-type estimates. \begin{proposition}\label{prop:prop} The family $\{\mathcal U_{\varepsilon, A}(t, \tau)\}_{t,\tau}$ of operators on $H^1(\mathbb{R}^3)$ satisfies the following properties: \begin{itemize} \item $\mathcal U_{\varepsilon, A}(t, s)\,\mathcal U_{\varepsilon, A}(s, \tau)=\mathcal U_{\varepsilon, A}(t, \tau)$ for any $\tau<s<t$; \item $\mathcal U_{\varepsilon, A}(t, t)=\mathbbm{1}$; \item the map $(t,\tau)\mapsto\mathcal U_{\varepsilon, A}(t, \tau)$ is strongly continuous in $H^1(\mathbb{R}^3)$; \item for any admissible pair $(q, r)$ with $r\in[2, 3]$, and for any $F,G$ satisfying the same assumptions as in Theorem \ref{th:energy_linear}, one has \begin{eqnarray} \Vert\mathcal{U}_{\varepsilon,A}(t,\tau)f\Vert_{L^q([\tau,T],W^{1,r}({\mathbb R}^3))}\!\!\!&\lesssim_{\,\varepsilon,A,T}&\!\!\!\Vert f\Vert_{H^1({\mathbb R}^3)} \label{eq:mahostg} \\ \Big\Vert\int_\tau^t\!\mathcal{U}_{\varepsilon,A}(t,\sigma)\,F(\sigma)\,\mathrm{d}\sigma\,\Big\Vert_{L^q([\tau,T],W^{1,r}({\mathbb R}^3))}\!\!\!&\lesssim_{\,\varepsilon,A,T}&\!\!\!\Vert
F\Vert_{L^{\widetilde{s}}([\tau,T],W^{1,\widetilde{p}}({\mathbb R}^3))} \label{eq:mainhostg} \\ \Big\Vert\int_\tau^t\!\mathcal{U}_{\varepsilon,A}(t,\sigma)\,G(\sigma)\,\mathrm{d}\sigma\,\Big\Vert_{L^q([\tau,T],W^{1,r}({\mathbb R}^3))}\!\!\!&\lesssim_{\,\varepsilon,A,T}&\!\!\!\Vert G\Vert_{L^s([\tau,T],L^{p}({\mathbb R}^3))}\,. \label{eq:mainhostg2} \end{eqnarray} \end{itemize} \end{proposition} Once we have defined the magnetic viscous evolution operator $\mathcal U_{\varepsilon, A}(t, \tau)$, we can write the integral formulation of \eqref{eq:applin} in the following way: \begin{equation}\label{dusecondapp} u(t)\;=\;\mathcal U_{\varepsilon, A}(t, \tau)f-\mathrm{i}\int_\tau^t\mathcal U_{\varepsilon, A}(t, \sigma)\big(F(\sigma)+G(\sigma)\big)\,\mathrm{d} \sigma\,. \end{equation} We will use formula \eqref{dusecondapp} and the Strichartz-type estimates \eqref{eq:mahostg}-\eqref{eq:mainhostg2} in order to set up a fixed point argument and show the existence of solutions to the nonlinear problem \eqref{eq:visc_CauMNLS}. Let us now proceed with proving Theorem \ref{th:energy_linear}. As already mentioned, the proof is based upon a contraction argument in the space introduced in Definition \ref{relevant} and requires the magnetic estimates established in Lemmas \ref{le:abcd}, \ref{le:Vab_grad}, and \ref{le:Vcd_grad}. \begin{proof}[Proof of Theorem \ref{th:energy_linear}] It is clearly not restrictive to set the initial time $\tau=0$.
For given $T\in(0,1]$ and $M>0$, we consider the ball of radius $M$ in $X^{(4, 3)}[0, T]$, i.e., $$\mathcal{X}_{T,M}\;:=\;\{u\in X^{(4,3)}[0,T]\;|\;\Vert u\Vert_{X^{(4,3)}[0,T]}\leqslant M\}.$$ Moreover, we define the solution map $u\mapsto\Phi u$ where, for $t\in[0,T]$, \begin{equation}\label{eq:solution_map} \begin{split} &(\Phi u)(t)\;:=\;e^{(\mathrm{i}+\varepsilon)t\Delta}f \\ &\;-\mathrm{i}\int_{0}^te^{(\mathrm{i}+\varepsilon)(t-\sigma)\Delta}\big((1-\mathrm{i}\,\varepsilon)(2\,\mathrm{i}\,A(\sigma)\cdot\nabla+|A(\sigma)|^2)u(\sigma)+F(\sigma)+G(\sigma)\big)\,\mathrm{d}\sigma\,. \end{split} \end{equation} Thus, finding a solution to the integral equation \eqref{duprima}, with $\tau=0$, is equivalent to finding a fixed point of the map $\Phi$. We shall then prove Theorem \ref{th:energy_linear} by showing that, for suitable $T$ and $M$, the map $\Phi$ is a contraction on $\mathcal{X}_{T,M}$. To this aim, let us consider a generic $u\in \mathcal{X}_{T,M}$: owing to the Strichartz estimates \eqref{eq:homstrich} and \eqref{eq:grst} and to Lemmas \ref{le:Vab_grad} and \ref{le:Vcd_grad}, there exist positive constants $C\equiv C_{\varepsilon,A}$ and $\theta\equiv\theta_{\!A}$ such that, for $T\in (0,1]$, \begin{equation}\label{eq:bou_phi_g} \begin{split} \Vert\Phi u\Vert_{X^{(4,3)}[0,T]}\;&\leqslant\; C\,\Big(\Vert f\Vert_{H^1({\mathbb R}^3)}+\sum_{i=1}^N\|F_i\|_{L^{s_i}([0,T],L^{p_i}({\mathbb R}^3))} \\ &\qquad\qquad+\sum_{i=1}^N\|G_i\|_{L^{\widetilde{s}_i}([0,T],L^{\widetilde{p}_i}({\mathbb R}^3))} +T^{\theta}\Vert u\Vert_{X^{(4,3)}[0,T]}\Big)\,.
\end{split} \end{equation} It is possible to choose $M$ and then restrict $T$ further so that \[ M\;>\;2C\,\Big(\Vert f\Vert_{H^1({\mathbb R}^3)}+\sum_{i=1}^N\|F_i\|_{L^{s_i}([0,T],L^{p_i}({\mathbb R}^3))}+\sum_{i=1}^N\|G_i\|_{L^{\widetilde{s}_i}([0,T],L^{\widetilde{p}_i}({\mathbb R}^3))}\Big) \] and $2CT^{\theta}<1$, in which case \eqref{eq:bou_phi_g} yields $$ \Vert\Phi u\Vert_{X^{(4,3)}[0,T]}\;\leqslant\; M({\textstyle\frac{1}{2}}+CT^{\theta})\;<\;M\,.$$ This proves that $\Phi$ indeed maps $\mathcal{X}_{T,M}$ into itself. Next, for generic $u,v\in \mathcal{X}_{T,M}$, and with the above choice of $M$ and $T$, the difference $\Phi u-\Phi v$ consists only of the magnetic Duhamel terms applied to $u-v$, so \eqref{eq:bou_phi_g} also yields \begin{align*} \Vert \Phi u-\Phi v\Vert_{X^{(4,3)}[0,T]}\;&\leqslant\; CT^{\theta}\Vert u-v\Vert_{X^{(4,3)}[0,T]}\\ &<\;\frac12\Vert u-v\Vert_{X^{(4,3)}[0,T]}\,, \end{align*} which proves that $\Phi$ is indeed a contraction on $\mathcal{X}_{T,M}$. By Banach's fixed point theorem, we conclude that the integral equation $u=\Phi u$ has a unique solution in $\mathcal{X}_{T,M}$. Furthermore, $\Phi u\in \mathcal{C}([0,T],H^1({\mathbb R}^3))$. Hence, we have found a local solution $u\in\mathcal{C}([0,T],H^1({\mathbb R}^3))$ to the integral equation \eqref{duprima}, which satisfies \eqref{stri_nono_grad}. Moreover, since the local existence time $T$ does not depend on the initial data, this solution can be extended globally in time, and \eqref{stri_nono_grad} is satisfied for any $T>0$. \end{proof} As the last result of this Section, we show that the propagator $\mathcal{U}_{\varepsilon,A}(t,\tau)$ is stable under small perturbations of the magnetic potential and of the initial datum.
\begin{proposition}[Stability]\label{pr:stab_td}~ Let $\tau\in{\mathbb R}$, $T>\tau$, and let us assume that $A^{(1)}, A^{(2)}\in\widetilde{\mathcal{A}}_1$ with $\|A^{(1)}-A^{(2)}\|_{\mathcal A_1}<\delta$, or $A^{(1)}, A^{(2)}\in\widetilde{\mathcal{A}}_2$ with $\|A^{(1)}-A^{(2)}\|_{\mathcal A_2}<\delta$, where $\delta>0$ is sufficiently small. Let $u_1, u_2\in\mathcal C([\tau, T);H^1({\mathbb R}^3))$ be the solutions to \begin{equation}\label{eq:stab_td} \left\{\begin{aligned} \mathrm{i}\,\partial_t u_j\;=\;&-(1-\mathrm{i}\,\varepsilon)(\nabla-\mathrm{i} A^{(j)})^2u_j+F_j \\ u_j(\tau,\cdot)\;=\;&\:f_j \\ \end{aligned}\right. \end{equation} for given $f_1, f_2\in H^1({\mathbb R}^3)$ and given $F_1, F_2\in L^s([\tau,T],W^{1,p}({\mathbb R}^3))$, where $(s,p)$ is dual-admissible. Then, for any admissible pair $(q,r)$ with $r\in [2,3]$ we have \begin{equation*} \|u_1-u_2\|_{L^q([\tau, T],W^{1, r}({\mathbb R}^3))}\;\lesssim\;\delta+\|f_1-f_2\|_{H^1}+\|F_1-F_2\|_{L^s([\tau,T],W^{1,p}({\mathbb R}^3))}\,. \end{equation*} \end{proposition} \begin{proof} We prove the Proposition under the assumptions $A^{(1)}, A^{(2)}\in\widetilde{\mathcal A}_1$ and $\|A^{(1)}-A^{(2)}\|_{\mathcal A_1}<\delta$, the other case being completely analogous. From \eqref{eq:stab_td} we infer that the function $\widetilde u:=u_1-u_2$ satisfies \begin{equation*} \left\{\begin{aligned} \mathrm{i}\,\partial_t \widetilde u\;=\;&-(1-\mathrm{i}\,\varepsilon)(\nabla-\mathrm{i} A^{(1)})^2\widetilde u+2\,\mathrm{i}\,\widetilde A\cdot\nabla u_2+\widetilde A\cdot(A^{(1)}+A^{(2)})u_2+\widetilde F \\ \widetilde u(\tau,\cdot)\;=\;&\:\widetilde f \\ \end{aligned}\right.
\end{equation*} or equivalently \begin{equation}\label{eq:diff} \begin{split} \widetilde u(t)\;&=\;\mathcal U_{\varepsilon, A^{(1)}}(t,\tau)\widetilde f \\ &\quad -\mathrm{i}\int_\tau^t\mathcal U_{\varepsilon, A^{(1)}}(t, \sigma)\big(2\,\mathrm{i}\,\widetilde A\cdot\nabla u_2+\widetilde A\cdot(A^{(1)}+A^{(2)})u_2+\widetilde F\big)(\sigma)\,\mathrm{d} \sigma\,, \end{split} \end{equation} where $\widetilde f:=f_1-f_2$, $\widetilde A:=A^{(1)}-A^{(2)}$, and $\widetilde F:=F_1-F_2$. Since $u_1$ and $u_2$ solve \eqref{eq:stab_td} on the time interval $[\tau, T]$, estimate \eqref{stri_nono_grad} yields \begin{equation*} \|u_j\|_{L^q([\tau, T],W^{1, r}({\mathbb R}^3))}\;\leqslant\; C(\|f_j\|_{H^1}, \|A^{(j)}\|_{\mathcal A_1}, \|F_j\|_{L^s([\tau,T],W^{1,p}({\mathbb R}^3))})\,,\quad j\in\{1, 2\}\,, \end{equation*} for any admissible pair $(q, r)$ with $r\in[2, 3]$. By applying the Strichartz-type estimates stated in Proposition \ref{prop:prop} and the estimates of Lemma \ref{le:abcd} to equation \eqref{eq:diff} we obtain \begin{equation*} \begin{aligned} \|\widetilde u&\|_{L^q([\tau, T],W^{1, r}({\mathbb R}^3))}\;\lesssim\;\|\widetilde{f}\|_{H^1}+\|\widetilde{A}\|_{\mathcal A_1}\|u_2\|_{X^{(4, 3)}[\tau, T]}\\ &+\|\widetilde{A}\|_{\mathcal A_1}\left(\|A^{(1)}\|_{\mathcal A_1}+\|A^{(2)}\|_{\mathcal A_1}\right)\|u_2\|_{X^{(4, 3)}[\tau, T]}+\|\widetilde F\|_{L^s([\tau,T],W^{1,p}({\mathbb R}^3))}\,, \end{aligned} \end{equation*} from which the result follows. \end{proof} \section{Local well posedness for the regularised magnetic NLS}\label{sec:LWP} In this Section we turn our attention to the non-linear problem \eqref{eq:visc_CauMNLS}.
Using the existence result and the Strichartz-type estimates established, respectively, in Theorem \ref{th:energy_linear} and Proposition \ref{prop:prop}, we set up a fixed point argument associated with the integral equation \begin{equation}\label{eq:visc_nls_int} u(t)\;=\;\mathcal{U}_{\varepsilon,A}(t,0)f-\mathrm{i}\!\int_0^t\mathcal{U}_{\varepsilon,A}(t,\sigma)\,\mathcal{N}(u)(\sigma)\,\mathrm{d}\sigma\,. \end{equation} We first focus on the case of energy sub-critical non-linearities. \begin{proposition}[Local well-posedness, energy sub-critical case]\label{pr:exsub} Let $\varepsilon>0$. Assume that $A\in\widetilde{\mathcal A}_1$ or $A\in\widetilde{\mathcal A}_2$ and that the exponents in the non-linearity \eqref{hart_pp_nl} are in the regime $\gamma\in(1,5)$ and $\alpha\in(0,3)$. Then for any $f\in H^1(\mathbb{R}^3)$ there exists a unique solution $u\in\mathcal C([0, T_{max}),H^1(\mathbb{R}^3))$ to \eqref{eq:visc_nls_int} on a maximal interval $[0, T_{max})$, and the following blow-up alternative holds: if $T_{max}<+\infty$ then $\lim_{t\uparrow T_{max}}\|u(t)\|_{H^1}=+\infty$.
\end{proposition} \begin{proof} Since the linear propagator $\mathcal U_{\varepsilon, A}(t, \tau)$ satisfies the same Strichartz-type estimates as the heat-Schr\"odinger flow, and since the non-linearities considered here are sub-critical perturbations of the linear flow, a customary contraction argument in the space \begin{equation}\label{eq:cont_space} \mathcal{C}([0,T],H^1({\mathbb R}^3))\cap L^{q(\gamma)}([0,T],W^{1,r(\gamma)}({\mathbb R}^3))\cap L^{q(\alpha)}([0,T],W^{1,r(\alpha)}({\mathbb R}^3))\,, \end{equation} where \begin{equation} (q(\gamma),r(\gamma))\;:=\;\textstyle\Big(\frac{4(\gamma+1)}{\gamma-1},\frac{3(\gamma+1)}{\gamma+2}\Big) \end{equation} (see, e.g., \cite[Theorems 2.1 and 3.1]{Miao-Hartree-2007}) and \begin{equation}\label{qalpha} (q(\alpha),r(\alpha))\;:=\; \begin{cases} \quad\;(+\infty,2)&\;\;\alpha\in(0,2]\\ \Big(\frac{6}{\alpha-2},\frac{18}{13-2\alpha}\Big)&\;\;\alpha\in(2,3)\\ \end{cases} \end{equation} (see, e.g., \cite[Section 5.2]{Lineares-Ponce-book2015}), guarantees the existence of a unique local solution for sufficiently small $T$. We observe, in particular, that with the above choice one has $r(\gamma),r(\alpha)\in[2,3)$. Furthermore, by a customary continuation argument we can extend such a solution over a maximal interval for which the blow-up alternative holds true. We omit the standard details; they are part of the well-established theory of semi-linear equations. \end{proof} In the presence of an energy-critical non-linearity ($\gamma=5$) the above arguments cannot be applied. Indeed, when $\gamma=5$ we cannot apply Corollary \ref{co:stri_lo} to that non-linearity in order to obtain the factor $T^\theta$, $\theta>0$, needed for the standard contraction argument. However, it is possible to exploit an idea similar to that of \cite{Cazenave-Weissler-1990} to infer a local well-posedness result when $\gamma=5$.
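Incidentally, one checks by elementary arithmetic (recorded here for the reader's convenience) that the pairs $(q(\gamma),r(\gamma))$ and \eqref{qalpha} used in the previous proof indeed satisfy the three-dimensional admissibility relation $\frac{2}{q}+\frac{3}{r}=\frac{3}{2}$: \[ \frac{2}{q(\gamma)}+\frac{3}{r(\gamma)}\;=\;\frac{\gamma-1}{2(\gamma+1)}+\frac{\gamma+2}{\gamma+1}\;=\;\frac{3(\gamma+1)}{2(\gamma+1)}\;=\;\frac{3}{2}\,, \] and, for $\alpha\in(2,3)$, \[ \frac{2}{q(\alpha)}+\frac{3}{r(\alpha)}\;=\;\frac{\alpha-2}{3}+\frac{13-2\alpha}{6}\;=\;\frac{9}{6}\;=\;\frac{3}{2}\,, \] while for $\alpha\in(0,2]$ the pair $(+\infty,2)$ is trivially admissible.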
\begin{proposition}[Local existence and uniqueness, energy critical case]\label{pr:crie} Let $A\in\widetilde{\mathcal A}_1$ or $A\in\widetilde{\mathcal A}_2$ and let the exponents in the non-linearity \eqref{hart_pp_nl} be in the regime $\gamma=5$ and $\alpha\in(0,3)$. Let $\varepsilon>0$ and $f\in H^1({\mathbb R}^3)$. There exists $\eta_0>0$ such that, if \begin{equation} \|\nabla e^{\mathrm{i} t\Delta}f\|_{L^6([0,T],L^{\frac{18}{7}}({\mathbb R}^3))}\leqslant \eta \end{equation} for some (small enough) $T>0$ and some $\eta<\eta_0$, then there exists a unique solution $u\in\mathcal C([0, T],H^1(\mathbb{R}^3))$ to \eqref{eq:visc_nls_int}. Moreover, this solution can be extended over a maximal interval $[0, T_{max})$ such that the following blow-up alternative holds true: $T_{max}<\infty$ if and only if $\Vert u\Vert_{L^6([0,T_{max}),L^{18}({\mathbb R}^3))}=\infty$. \end{proposition} \begin{proof} The proof is a direct application of a well-known argument by Cazenave and Weissler \cite{Cazenave-Weissler-1990} (we refer to \cite[Section 3]{Killip-Visan-2013} for a more recent discussion). In particular, having established Strichartz estimates for $\mathcal{U}_{\varepsilon,A}(t,\tau)$ relative to the pair $(q,r)=(6,\frac{18}{7})$, we proceed exactly as in the proof of \cite[Theorem 3.4 and Corollary 3.5]{Killip-Visan-2013}, so as to find a unique solution $u$ to the integral equation \eqref{eq:visc_nls_int} in the space \begin{equation}\label{eq:cont_space_crit} \mathcal{C}([0,T],H^1({\mathbb R}^3))\cap L^{6}([0,T],W^{1,\frac{18}{7}}({\mathbb R}^3))\cap L^{q(\alpha)}([0,T],W^{1,r(\alpha)}({\mathbb R}^3)) \end{equation} with $(q(\alpha),r(\alpha))$ given by \eqref{qalpha}, together with the $L^6_tL^{18}_x$-blow-up alternative.
\end{proof} We conclude this Section by stating the analogue, for the nonlinear problem, of the stability property of Proposition \ref{pr:stab_td}. \begin{proposition}\label{prop:stab_nl} Let $\tau\geqslant0$, $T\in(\tau, \infty)$, and let us assume that $A^{(1)}, A^{(2)}\in \widetilde{\mathcal{A}}_1$ with $\|A^{(1)}-A^{(2)}\|_{\mathcal A_1}<\delta$, or that $A^{(1)},A^{(2)}\in \widetilde{\mathcal{A}}_2$ with $\|A^{(1)}-A^{(2)}\|_{\mathcal A_2}<\delta$, for some $\delta>0$ small enough. Let us consider $u_1, u_2\in\mathcal C([\tau, T];H^1({\mathbb R}^3))$ solutions to \begin{equation*} \left\{\begin{aligned} \mathrm{i}\,\partial_t u_j\;=\;&-(1-\mathrm{i}\,\varepsilon)(\nabla-\mathrm{i} A^{(j)})^2u_j+\mathcal N(u_j) \\ u_j(\tau,\cdot)\;=\;&f_j, \\ \end{aligned}\right. \end{equation*} where $j\in\{1, 2\}$, $f_1, f_2\in H^1({\mathbb R}^3)$, and $\mathcal N(u)$ is given by \eqref{hart_pp_nl} with $\gamma\in(1, 5]$, $\alpha\in(0, 3)$. Then for any admissible pair $(q, r)$ with $r\in[2, 3]$ we have \begin{equation*} \|u_1-u_2\|_{L^q([\tau, T],W^{1, r}({\mathbb R}^3))}\;\lesssim\;\delta+\|f_1-f_2\|_{H^1}. \end{equation*} \end{proposition} \section{Mass and energy estimates}\label{eq:mass-energy} In this Section we establish some a priori estimates which will be needed in order to extend the local approximating solution obtained in Section \ref{sec:LWP} over arbitrary time intervals. In particular we will show that the total mass and energy are uniformly bounded. Furthermore, by exploiting the dissipative regularisation, we will infer some a priori space-time bounds which will allow us to extend the solution globally also in the energy-critical case. The two quantities of interest are defined as follows. \begin{definition} Let $T>0$.
For each $u\in L^{\infty}([0,T),H^1({\mathbb R}^3))$, the mass and the energy of $u$ are defined, at almost every time $t\in[0,T)$, as \[ \begin{split} (\mathcal{M}(u))(t)\;&:=\;\int_{{\mathbb R}^3}|u(t,x)|^2\,\mathrm{d} x \\ (\mathcal{E}(u))(t)\;&:=\;\int_{{\mathbb R}^3}\Big({\textstyle{\frac{1}{2}}}|(\nabla-\mathrm{i} A(t))\,u|^2+{\textstyle\frac{1}{\gamma +1}}|u|^{\gamma +1}+{\textstyle{\frac{1}{4}}}(|x|^{-\alpha}*|u|^2)|u|^2\Big)\,\mathrm{d} x\,. \end{split} \] \end{definition} In what follows, we will consider potentials $A\in\mathcal A_1$ or $A\in\mathcal A_2$, so as to have the time regularity needed in order to study the energy functional. \begin{proposition}\label{pr:5ene} Assume that $A\in\mathcal{A}_1$ or $A\in\mathcal{A}_2$, and that the exponents in the non-linearity \eqref{hart_pp_nl} are in the whole regime $\gamma\in(1,5]$ and $\alpha\in(0,3)$. For fixed $\varepsilon>0$, let $u_{\varepsilon}\in\mathcal{C}([0,T),H^1({\mathbb R}^3))$ be the local solution to the regularised equation \eqref{eq:visc_nls} for some $T>0$. Then the mass, the energy, and the $H^1$-norm of $u_{\varepsilon}$ are bounded in time over $[0,T)$, uniformly in $\varepsilon>0$, that is, \begin{eqnarray} \sup_{t\in[0, T)}(\mathcal{M}(u_{\varepsilon}))(t)\!\!&\lesssim &\!\!1\label{eq:Mbound} \\ \sup_{t\in[0, T)}(\mathcal{E}(u_{\varepsilon}))(t)\!\!&\lesssim_{A,T} &\!\!
1 \label{eq:Ebound}\\ \Vert u_{\varepsilon}\Vert_{L^{\infty}([0,T),H^1({\mathbb R}^3))}\!\!&\lesssim_{A,T}&\!\!1\,, \label{eq:uni_bou} \end{eqnarray} and moreover one has the a priori bounds \begin{equation}\label{dissiesti} \begin{aligned} \int_0^{T}\!\!\int_{{\mathbb R}^3}&\Big(|(\nabla-\mathrm{i} A(t))u_{\varepsilon}|^2\big(|u_{\varepsilon}|^{\gamma-1}+(|x|^{-\alpha}*|u_{\varepsilon}|^2)\big)+(\gamma-1)|u_{\varepsilon}|^{\gamma-1}|\nabla|u_{\varepsilon}||^2\\ &\qquad+(|x|^{-\alpha}*\nabla|u_{\varepsilon}|^2)\nabla|u_{\varepsilon}|^2\Big)\,\mathrm{d} x\,\mathrm{d} t\;\lesssim_{A,T}\;\varepsilon^{-1}\,. \end{aligned} \end{equation} \end{proposition} \begin{remark} \emph{At fixed} $\varepsilon>0$ the finiteness of $\mathcal{M}(u_{\varepsilon})(t)$ and of $\mathcal{E}(u_{\varepsilon})(t)$ for all $t\in[0,T)$ is obvious for the mass, since by assumption $u_{\varepsilon}(t)\in L^2({\mathbb R}^3)$ for every $t\in[0,T)$, and it is also straightforward for the energy, since the property that $((\nabla-\mathrm{i} A)u_\varepsilon)(t)\in L^2({\mathbb R}^3)$ for every $t\in[0,T)$ is also part of the assumption, and moreover it is a standard property (see, e.g., \cite[Section 3.2]{cazenave}) that both $\int_{\mathbb{R}^3}|u_\varepsilon|^{\gamma +1}\,\mathrm{d} x$ and $\int_{\mathbb{R}^3}(|x|^{-\alpha}*|u_\varepsilon|^2)|u_\varepsilon|^2\,\mathrm{d} x$ are finite for every $t\in[0,T)$, in both the energy sub-critical and the critical regime. The virtue of Proposition \ref{pr:5ene} is thus to produce bounds \eqref{eq:Mbound}-\eqref{eq:uni_bou} that are \emph{uniform} in $\varepsilon$. The non-uniformity in $T$ of \eqref{eq:Ebound}-\eqref{eq:uni_bou} is due to the fact that the magnetic potential is only $AC_\mathrm{loc}$ in time: for $AC$-potentials such bounds would be uniform in $T$ as well.
\end{remark} \begin{proof}[Proof of Proposition \ref{pr:5ene}] We recall that $u_{\varepsilon}$ satisfies $$\mathrm{i}\,\partial_t u_{\varepsilon}\;=\;-(1-\mathrm{i}\,\varepsilon)(\nabla-\mathrm{i}\,A)^2u_{\varepsilon}+\mathcal{N}(u_{\varepsilon})$$ as an identity between $H^{-1}$-functions in space, at every $t$. Let us first prove the statement in a regular case, and later work out a density argument for the general case. It is straightforward to see, by means of a customary contraction argument in $L^\infty([0,T],H^s(\mathbb{R}^3))$ for arbitrary $s>0$, that if $f\in\mathcal{S}({\mathbb R}^3)$ and $A\in AC_{\mathrm{loc}}({\mathbb R},\mathcal{S}({\mathbb R}^3))$, then the solution $u_{\varepsilon}$ to the local Cauchy problem \eqref{eq:visc_CauMNLS} is smooth in space, whence in particular $u_{\varepsilon}\in \mathcal{C}^1([0,T),H^1({\mathbb R}^3))$, a fact that justifies the time derivations in the computations that follow. From \[ \begin{split} \frac{\mathrm{d}}{\mathrm{d} t}&(\mathcal{M}(u_{\varepsilon}))(t)\;= \\ &=\;2\,\mathfrak{Re}\int_{{\mathbb R}^3}\overline{u_{\varepsilon}}\,\big((\mathrm{i}+\varepsilon)(\nabla-\mathrm{i}\,A)^2u_{\varepsilon}-\mathrm{i}\,|u_{\varepsilon}|^{\gamma-1}u_{\varepsilon}-\mathrm{i}\,(|\cdot|^{-\alpha}*|u_{\varepsilon}|^2)\,u_{\varepsilon}\big)\,\mathrm{d} x \\ &=\;-2\varepsilon\int_{{\mathbb R}^3}|(\nabla-\mathrm{i}\,A)u_{\varepsilon}|^2\,\mathrm{d} x\;\leqslant\; 0\,, \end{split} \] one deduces $(\mathcal{M}(u_{\varepsilon}))(t)\leqslant (\mathcal{M}(u_{\varepsilon}))(0)$, whence \eqref{eq:Mbound}.
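For clarity, the sign in the last step can be traced as follows: since $A(t)$ is real-valued, an integration by parts gives \[ \int_{{\mathbb R}^3}\overline{u_{\varepsilon}}\,(\nabla-\mathrm{i}\,A)^2u_{\varepsilon}\,\mathrm{d} x\;=\;-\int_{{\mathbb R}^3}|(\nabla-\mathrm{i}\,A)u_{\varepsilon}|^2\,\mathrm{d} x\,, \] a real and non-positive quantity, so upon taking the real part only the $\varepsilon$-component of the prefactor $(\mathrm{i}+\varepsilon)$ survives, while the non-linear contributions, being purely imaginary multiples of $|u_{\varepsilon}|^{\gamma+1}$ and of $(|\cdot|^{-\alpha}*|u_{\varepsilon}|^2)|u_{\varepsilon}|^2$, drop out.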
Next, we compute \begin{equation}\label{derieni} \begin{aligned} &\frac{\mathrm{d}}{\mathrm{d} t}(\mathcal{E}(u_{\varepsilon}))(t)\;=\;\mathfrak{Re}\int_{{\mathbb R}^3}\Big(\big((\nabla-\mathrm{i}\,A)\partial_t u_{\varepsilon}-\mathrm{i}\,(\partial_tA)u_{\varepsilon}\big)\cdot\overline{(\nabla-\mathrm{i}\,A)u_{\varepsilon}}\\ &\qquad\qquad\qquad\qquad\qquad\qquad+\big(|u_{\varepsilon}|^{\gamma -1}+(|x|^{-\alpha}*|u_{\varepsilon}|^2)\big)\,\overline{u_{\varepsilon}}\,\partial_t u_{\varepsilon}\Big)\,\mathrm{d} x\\ &=\;\mathfrak{Re}\int_{{\mathbb R}^3}(\partial_t u_{\varepsilon})\big(-\overline{(\nabla-\mathrm{i}\,A)^2u_{\varepsilon}}+|u_{\varepsilon}|^{\gamma-1}\,\overline{u_{\varepsilon}}+(|x|^{-\alpha}*|u_{\varepsilon}|^2)\,\overline{u_{\varepsilon}}\,\big)\,\mathrm{d} x\\ &\qquad\qquad +\int_{{\mathbb R}^3}\big(A\cdot(\partial_tA)|u_{\varepsilon}|^2+(\partial_tA)\cdot\mathfrak{Im}\,(u_{\varepsilon}\overline{\nabla u_{\varepsilon}}\,)\big)\,\mathrm{d} x\\ &=\;\varepsilon\!\int_{{\mathbb R}^3}\!\big(\!-|(\nabla-\mathrm{i}\,A)^2u_{\varepsilon}|^2+(\,|u_{\varepsilon}|^{\gamma-1}+|x|^{-\alpha}*|u_{\varepsilon}|^2)\,\mathfrak{Re}\,(\overline{u_{\varepsilon}}(\nabla-\mathrm{i}\,A)^2u_{\varepsilon})\big)\,\mathrm{d} x\!\!\!\!\!\!\!\\ &\qquad\qquad +\int_{{\mathbb R}^3}\!\big(A\cdot(\partial_tA)|u_{\varepsilon}|^2+(\partial_tA)\cdot\mathfrak{Im}\,(u_{\varepsilon}\overline{\nabla u_{\varepsilon}}\,)\big)\,\mathrm{d} x \\ &=\;-\varepsilon\!\int_{{\mathbb R}^3}\!\,|(\nabla-\mathrm{i}\,A)^2u_{\varepsilon}|^2\,\mathrm{d} x -\varepsilon\,\mathcal{R}(u_\varepsilon)(t)+\mathcal{S}(u_\varepsilon)(t)\,, \end{aligned} \end{equation} where \begin{equation*} \begin{split} \mathcal{R}(u_\varepsilon)(t)\;&:=\;-\!\int_{{\mathbb R}^3}\!(\,|u_{\varepsilon}|^{\gamma-1}+|x|^{-\alpha}*|u_{\varepsilon}|^2)\,\mathfrak{Re}\,(\overline{u_{\varepsilon}}(\nabla-\mathrm{i}\,A)^2u_{\varepsilon})\,\mathrm{d} x \\ \mathcal{S}(u_\varepsilon)(t)\;&:=\;\int_{{\mathbb
R}^3}\!\big(A\cdot(\partial_tA)|u_{\varepsilon}|^2+(\partial_tA)\cdot\mathfrak{Im}\,(u_{\varepsilon}\overline{\nabla u_{\varepsilon}}\,)\,\big)\,\mathrm{d} x\,. \end{split} \end{equation*} From \begin{equation}\label{dopoderieni} \begin{aligned} &\mathcal{R}(u_\varepsilon)(t)\;=\\ &=\;-\int_{{\mathbb R}^3}(\,|u_{\varepsilon}|^{\gamma-1}+|x|^{-\alpha}*|u_{\varepsilon}|^2)\big(-|(\nabla-\mathrm{i}\,A)u_{\varepsilon}|^2+{\textstyle{\frac{1}{2}}}\Delta|u_{\varepsilon}|^2\big)\,\mathrm{d} x\\ &=\;+\!\int_{{\mathbb R}^3}|u_{\varepsilon}|^{\gamma-1}|(\nabla-\mathrm{i}\,A)u_{\varepsilon}|^2\,\mathrm{d} x+(\gamma-1)\!\int_{{\mathbb R}^3}|u_{\varepsilon}|^{\gamma-1}|\nabla|u_{\varepsilon}||^2\,\mathrm{d} x\\ &\quad\;\, +\!\int_{{\mathbb R}^3}(\,|x|^{-\alpha}*|u_{\varepsilon}|^2)\,|(\nabla-\mathrm{i}\,A)u_{\varepsilon}|^2\,\mathrm{d} x+{\textstyle{\frac{1}{2}}}\!\int_{{\mathbb R}^3}(\,|x|^{-\alpha}*\nabla|u_{\varepsilon}|^2)\,\nabla|u_{\varepsilon}|^2\,\mathrm{d} x\!\!\!\!\!\!\! \end{aligned} \end{equation} we see that \begin{equation}\label{eq:Rpositive} \mathcal{R}(u_\varepsilon)(t)\;\geqslant\;0\,. \end{equation} This is obvious for the first three summands in the r.h.s.~of \eqref{dopoderieni}, whereas for the last one, setting $\phi:=\nabla|u_{\varepsilon}|^2$, Plancherel's formula gives $$\int_{{\mathbb R}^3}(\,|x|^{-\alpha}*\phi)\,\phi\,\mathrm{d} x\;=\;\int_{{\mathbb R}^3}\widehat{(|\cdot|^{-\alpha})}(\xi)\,|\widehat{\phi}(\xi)|^2\,\mathrm{d} \xi\,,$$ and since $\widehat{|\cdot|^{-\alpha}}$ is positive, the fourth summand too is positive. Therefore, \begin{equation}\label{enel1} \frac{\mathrm{d}}{\mathrm{d} t}(\mathcal{E}(u_{\varepsilon}))(t)\;\leqslant\;\mathcal{S}(u_\varepsilon)(t)\,.
\end{equation} In order to estimate $\mathcal{S}(u_\varepsilon)(t)$, one checks by direct inspection that there are $M_1,M_2\in[2,6]$ such that \[ \frac{1}{b_1}+\frac{1}{2}+\frac{1}{M_1}\;=\;\frac{1}{b_2}+\frac{1}{2}+\frac{1}{M_2}\;=\;1\,, \] whence, for every $t\in[0,T)$ and $j\in\{1,2\}$, \[ \begin{split} \|u_\varepsilon(t)\|_{L^{M_j}(\mathbb{R}^3)}\;&\lesssim\; \|u_\varepsilon(t)\|_{H^1(\mathbb{R}^3)} \\ &\lesssim\;\big(1+ \Vert A_1(t)\Vert_{L^{b_1}({\mathbb R}^3)}+ \Vert A_2(t)\Vert_{L^{b_2}({\mathbb R}^3)}\big)\,\Vert u_{\varepsilon}(t)\Vert_{H^1_{A(t)}} \end{split} \] (Sobolev's embedding and the norm equivalence \eqref{eq:norm_equiv}). Thus, by H\"{o}lder's inequality, \begin{equation}\label{enel2} \begin{aligned} \Big|\int_{{\mathbb R}^3}&(\partial_t A(t))\cdot \mathfrak{Im}\,(u_{\varepsilon}(t)\,\overline{\nabla u_{\varepsilon}(t)})\,\mathrm{d} x\,\Big|\\ &\lesssim\; \big(\Vert\partial_t A_1(t)\Vert_{L^{b_1}({\mathbb R}^3)}+\Vert\partial_t A_2(t)\Vert_{L^{b_2}({\mathbb R}^3)}\big)\:\times\\ &\qquad\quad \times\big(1+\Vert A_1(t)\Vert_{L^{b_1}({\mathbb R}^3)}+ \Vert A_2(t)\Vert_{L^{b_2}({\mathbb R}^3)}\big)^2\,\Vert u_{\varepsilon}(t)\Vert_{H_{A(t)}^1({\mathbb R}^3)}^2\\ &\leqslant\;\big(\Vert\partial_t A_1(t)\Vert_{L^{b_1}({\mathbb R}^3)}+\Vert\partial_t A_2(t)\Vert_{L^{b_2}({\mathbb R}^3)}\big)\:\times\\ &\qquad\quad \times\big(1+\Vert A_1(t)\Vert_{L^{b_1}({\mathbb R}^3)}+ \Vert A_2(t)\Vert_{L^{b_2}({\mathbb R}^3)}\big)^2\,\big(1+(\mathcal{E}(u_{\varepsilon}))(t)\big)\,, \end{aligned} \end{equation} the last step following from \begin{equation}\label{pdelc} \Vert u_{\varepsilon}(t)\Vert_{H_{A(t)}^1}^2\;\leqslant\; (\mathcal{M}(u_{\varepsilon}))(t)+(\mathcal{E}(u_{\varepsilon}))(t) \end{equation} and from $(\mathcal{M}(u_{\varepsilon}))(t)\lesssim 1$.
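Concretely, the exponents $M_j$ are determined by estimating the three factors $\partial_t A_j$, $u_{\varepsilon}$, $\nabla u_{\varepsilon}$ in $L^{b_j}$, $L^{M_j}$, $L^2$ respectively, via the H\"{o}lder relation $\frac{1}{b_j}+\frac{1}{M_j}+\frac{1}{2}=1$; an elementary computation, recorded here for convenience, shows that the requirement $M_j\in[2,6]$ amounts to \[ \frac{1}{M_j}\;=\;\frac{1}{2}-\frac{1}{b_j}\;\in\;\Big[\frac{1}{6},\frac{1}{2}\Big]\,,\qquad\mbox{i.e.}\qquad b_j\in[3,+\infty]\,; \] for instance, $b_j=6$ gives $M_j=3$.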
Analogously, now with H\"{o}lder exponents $M_{ij}\in[2,6]$ such that \[ \frac{1}{b_i}+\frac{1}{b_j}+\frac{2}{M_{ij}}\;=\;1\,,\qquad i,j\in\{1,2\}\,, \] we find \begin{equation}\label{enel3} \begin{aligned} \Big|\int_{{\mathbb R}^3}&A\cdot(\partial_tA)\,|u_{\varepsilon}|^2\,\mathrm{d} x\,\Big|\\ &\lesssim\; \big(\Vert\partial_t A_1(t)\Vert_{L^{b_1}({\mathbb R}^3)}+\Vert\partial_t A_2(t)\Vert_{L^{b_2}({\mathbb R}^3)}\big)\:\times\\ &\qquad\quad \times\big(\Vert A_1(t)\Vert_{L^{b_1}({\mathbb R}^3)}+ \Vert A_2(t)\Vert_{L^{b_2}({\mathbb R}^3)}\big)\,\Vert u_{\varepsilon}(t)\Vert_{H_{A(t)}^1({\mathbb R}^3)}^2\\ &\leqslant\;\big(\Vert\partial_t A_1(t)\Vert_{L^{b_1}({\mathbb R}^3)}+\Vert\partial_t A_2(t)\Vert_{L^{b_2}({\mathbb R}^3)}\big)\:\times\\ &\qquad\quad \times\big(1+\Vert A_1(t)\Vert_{L^{b_1}({\mathbb R}^3)}+ \Vert A_2(t)\Vert_{L^{b_2}({\mathbb R}^3)}\big)\,\big(1+(\mathcal{E}(u_{\varepsilon}))(t)\big)\,. \end{aligned} \end{equation} Combining \eqref{enel1}, \eqref{enel2} and \eqref{enel3} together yields \begin{equation}\label{enel4} \begin{split} \frac{\mathrm{d}}{\mathrm{d} t}(\mathcal{E}(u_{\varepsilon}))(t)\;&\lesssim\;|\mathcal{S}(u_\varepsilon)(t)|\;\lesssim\; \Lambda(t)\,\big(1+(\mathcal{E}(u_{\varepsilon}))(t)\big) \\ \Lambda(t)\;&:=\;\big(\Vert\partial_t A_1(t)\Vert_{L^{b_1}({\mathbb R}^3)}+\Vert\partial_t A_2(t)\Vert_{L^{b_2}({\mathbb R}^3)}\big)\:\times \\ &\qquad\quad\times\:\big(1+\Vert A_1(t)\Vert_{L^{b_1}({\mathbb R}^3)}+ \Vert A_2(t)\Vert_{L^{b_2}({\mathbb R}^3)}\big)^2\,.
\end{split} \end{equation} Owing to the assumptions on $A$, $\Lambda\in L_{\mathrm{loc}}^{1}({\mathbb R},\mathrm{d} t)$, therefore Gr\"onwall's lemma is applicable to \eqref{enel4} and we deduce \[ (\mathcal{E}(u_{\varepsilon}))(t)\;\leqslant\;e^{\int_0^t\Lambda(s)\,\mathrm{d} s}\Big((\mathcal{E}(u_{\varepsilon}))(0)+\int_0^t\Lambda(s)\,\mathrm{d} s\Big)\;\lesssim_{A,T}\;1\,, \] which proves \eqref{eq:Ebound}. Based on \eqref{pdelc} and on the norm equivalence \eqref{eq:norm_equiv}, the bounds \eqref{eq:Mbound} and \eqref{eq:Ebound} then imply also \eqref{eq:uni_bou}. Let us now prove the a priori bound \eqref{dissiesti}. Integrating \eqref{derieni} in $t\in[0,T)$ yields \begin{equation*} \begin{split} (\mathcal{E}&(u_{\varepsilon}))(T)-(\mathcal{E}(u_{\varepsilon}))(0)\;= \\ &=\;-\varepsilon\!\int_0^T\!\!\Big(\int_{{\mathbb R}^3}\!|(\nabla-\mathrm{i}\,A)^2u_{\varepsilon}|^2\,\mathrm{d} x +\mathcal{R}(u_\varepsilon)(t)\Big)\,\mathrm{d} t+\int_0^T\!\!\mathcal{S}(u_\varepsilon)(t)\,\mathrm{d} t\,, \end{split} \end{equation*} whence \begin{equation*} \int_0^T\!\!\mathcal{R}(u_\varepsilon)(t)\,\mathrm{d} t\;\leqslant\;\frac{1}{\varepsilon}\Big(\,|(\mathcal{E}(u_{\varepsilon}))(T)-(\mathcal{E}(u_{\varepsilon}))(0)|+\!\int_0^T\!\!|\mathcal{S}(u_\varepsilon)(t)|\,\mathrm{d} t\Big)\,. \end{equation*} The bound \eqref{enel4} for $|\mathcal{S}(u_\varepsilon)(t)|$ and the bound \eqref{eq:Ebound} for $\mathcal{E}(u_\varepsilon)(t)$, together with the fact that $\Lambda\in L_{\mathrm{loc}}^{1}({\mathbb R},\mathrm{d} t)$, then give \begin{equation}\label{eq:boundintR} \int_0^T\!\!\mathcal{R}(u_\varepsilon)(t)\,\mathrm{d} t\;\lesssim_{A,T}\;\varepsilon^{-1}\,. \end{equation} It is clear from \eqref{dopoderieni} that the l.h.s.~of the a priori bound \eqref{dissiesti} is controlled by $\int_0^T\!\mathcal{R}(u_\varepsilon)(t)\,\mathrm{d} t$, therefore \eqref{eq:boundintR} implies \eqref{dissiesti}.
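In the Gr\"onwall step above one can argue explicitly as follows (after absorbing the implicit multiplicative constant of the differential inequality into $\Lambda$): setting $\phi(t):=(\mathcal{E}(u_{\varepsilon}))(t)$ and $I(t):=\int_0^t\Lambda(s)\,\mathrm{d} s$, the inequality $\phi'\leqslant\Lambda\,(1+\phi)$ yields $\frac{\mathrm{d}}{\mathrm{d} t}\log(1+\phi)\leqslant\Lambda$, whence
\[
1+\phi(t)\;\leqslant\;\big(1+\phi(0)\big)\,e^{I(t)}\,,
\]
and since $e^{I}-1\leqslant I\,e^{I}$ for $I\geqslant 0$, this implies the bound $\phi(t)\leqslant e^{I(t)}\big(\phi(0)+I(t)\big)$ used above.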
This completes the proof under the additional assumption that $f\in\mathcal{S}({\mathbb R}^3)$ and $A\in AC_{\mathrm{loc}}({\mathbb R},\mathcal{S}({\mathbb R}^3))$. The proof in the general case of non-smooth potentials and non-smooth initial data follows by a density argument. We consider a sequence of regular potentials $A_n$ and regular initial data $f_n$ such that $f_n\rightarrow f$ in $H^1({\mathbb R}^3)$ and $\|A_{n}-A\|_{\mathcal{A}_1}\to 0$ when $A\in\mathcal{A}_1$, or $\|A_{n}-A\|_{\mathcal{A}_2}\to 0$ when $A\in\mathcal{A}_2$, and we denote by $u_{n,\varepsilon}$ the solution to the local Cauchy problem \eqref{eq:visc_CauMNLS} with initial datum $f_n$ and magnetic potential $A_n$. Having already established Proposition \ref{pr:5ene} for such regular initial data and potentials, the bounds \begin{eqnarray} \Vert u_{n,\varepsilon}\Vert_{L^{\infty}([0,T);L^2({\mathbb R}^3))}\!\!&\lesssim&\!\! 1\,, \label{eq:uni_bou_mass_n} \\ \Vert u_{n,\varepsilon}\Vert_{L^{\infty}([0,T);H^1({\mathbb R}^3))}\!\!&\lesssim_{A,T}&\!\! 1 \label{eq:uni_bou_n} \end{eqnarray} hold for every $n$ uniformly in $\varepsilon>0$. The latter fact, together with the stability property \begin{equation}\label{eq:stabinf2} \|u_{n,\varepsilon}-u_{\varepsilon}\|_{L^{\infty}([0,T),H^1({\mathbb R}^3))}\rightarrow 0\qquad\textrm{uniformly in $\varepsilon$} \end{equation} given by Proposition \ref{prop:stab_nl}, then implies \eqref{eq:uni_bou} also in the general case. Analogously, since for fixed $t$ the mass $\mathcal{M}(u)(t)$ and the energy $\mathcal{E}(u)(t)$ depend continuously on the $H^1$-norm of $u(t)$, \eqref{eq:stabinf2} also implies \eqref{eq:Mbound} and \eqref{eq:Ebound} in the general case. We are left to prove the energy a priori bound \eqref{dissiesti}. We first collect some useful facts, valid for a generic Strichartz pair $(q,r)$, with $r\in[2,3)$.
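Recall, in this connection, that a Strichartz (admissible) pair $(q,r)$ in three space dimensions satisfies the scaling relation
\[
\frac{2}{q}+\frac{3}{r}\;=\;\frac{3}{2}\,,\qquad r\in[2,6]\,,
\]
so that the restriction $r\in[2,3)$ corresponds to $q\in(4,\infty]$.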
The starting point is the stability result proved in Proposition \ref{prop:stab_nl}, which in this case reads \begin{equation}\label{venti} u_{n,\varepsilon}\;\longrightarrow\; u_{\varepsilon}\quad\mbox{in }L^{q}([0,T),W^{1,r}(\mathbb{R}^3))\,. \end{equation} In particular, \begin{eqnarray} & u_{n,\varepsilon}\;\longrightarrow\; u_{\varepsilon}&\,\mbox{in }L^{q}([0,T),L^{\frac{Mr}{M-r}}(\mathbb{R}^3))\,,\quad M\in[3,+\infty]\,,\label{ventuno}\\ & \nabla u_{n,\varepsilon}\;\longrightarrow\; \nabla u_{\varepsilon}&\,\mbox{in }L^{q}([0,T),L^{r}(\mathbb{R}^3))\,.\label{ventidue} \end{eqnarray} Moreover, the following H\"older-exponent identities are trivially satisfied (recall that $b_i>3$): \begin{equation}\label{puma} \frac{1}{\infty}+\frac{1}{q}\;=\;\frac{1}{q}\,,\qquad \frac{1}{b_i}+\frac{b_i-r}{b_ir}\;=\;\frac{1}{r}\,,\quad i\in\{1,2\}\,, \end{equation} corresponding to the splitting of the pair $(q,r)$ into $(+\infty,b_i)$ and $\big(q,\frac{b_ir}{b_i-r}\big)$. Now, \eqref{ventuno} and H\"older's inequality yield \begin{equation}\label{ventitre} A u_{n,\varepsilon}\longrightarrow A u_{\varepsilon}\quad\mbox{in }L^{q}([0,T),L^{r}(\mathbb{R}^3))\,, \end{equation} and \eqref{ventidue} and \eqref{ventitre} yield \begin{equation}\label{ventiquattro} |(\nabla-\mathrm{i} A) u_{n,\varepsilon}|^2\longrightarrow |(\nabla-\mathrm{i} A) u_{\varepsilon}|^{2}\quad\mbox{in }L^{\frac{q}{2}}([0,T),L^{\frac{r}{2}}(\mathbb{R}^3))\,. \end{equation} We show now how to prove estimate \eqref{dissiesti} in the general case.
Having already established Proposition \ref{pr:5ene} for regular initial data and potentials, we have in particular \begin{eqnarray} \big\|u_{n,\varepsilon}^{\gamma-1}|(\nabla-\mathrm{i} A) u_{n,\varepsilon}|^2\big\|_{L^1([0,T),L^1(\mathbb{R}^3))}\!\!&\lesssim_{A,T}&\!\varepsilon^{-1}\,,\label{StriStra}\\ \big\|(|x|^{-\alpha}*|u_{n,\varepsilon}|^2)|(\nabla-\mathrm{i} A) u_{n,\varepsilon}|^2\big\|_{L^1([0,T),L^1(\mathbb{R}^3))}\!\!&\lesssim_{A,T}&\!\varepsilon^{-1}\,,\label{StriStraconv}\\ \big\|(|x|^{-\alpha}*\nabla|u_{n,\varepsilon}|^2)\nabla|u_{n,\varepsilon}|^2\big\|_{L^1([0,T),L^1(\mathbb{R}^3))}\!\!&\lesssim_{A,T}&\!\varepsilon^{-1}\,.\label{StriStraconvdue} \end{eqnarray} For any $\gamma\in(1,5]$ we can find Strichartz pairs $(q_1,r_1)$ and $(q_2,r_2)$, with $r_1,r_2\in[2,3)$, such that $$\frac{\gamma-1}{q_1}+\frac{2}{q_2}\;=\;1\qquad\textrm{and}\qquad \frac{(3-r_1)(\gamma-1)}{3r_1}+\frac{2}{r_2}\;=\;1\,.$$ Then \eqref{ventuno}, \eqref{ventiquattro}, and H\"older's inequality yield \begin{equation}\label{venticinque} u_{n,\varepsilon}^{\gamma-1}|(\nabla-\mathrm{i} A) u_{n,\varepsilon}|^2\!\!\longrightarrow u_{\varepsilon}^{\gamma-1}|(\nabla-\mathrm{i} A) u_{\varepsilon}|^2\quad\mbox{in }L^{1}([0,T),L^{1}(\mathbb{R}^3))\,, \end{equation} which together with the bound \eqref{StriStra} implies \begin{equation}\label{StriStralimit} \big\|u_{\varepsilon}^{\gamma-1}|(\nabla-\mathrm{i} A) u_{\varepsilon}|^2\big\|_{L^1([0,T),L^1(\mathbb{R}^3))}\;\lesssim_{A,T}\;\varepsilon^{-1}\,. \end{equation} In turn, the diamagnetic inequality $|\nabla|g||\leqslant |(\nabla-\mathrm{i} A)g|$ and \eqref{StriStralimit} give also \begin{equation}\label{StriStralimitdue} \big\|u_{\varepsilon}^{\gamma-1}|\nabla |u_{\varepsilon}||^2\big\|_{L^1([0,T),L^1(\mathbb{R}^3))}\;\lesssim_{A,T}\;\varepsilon^{-1}\,.
\end{equation} Concerning the convolution terms, for any $\alpha\in(0,3)$ we can find Strichartz pairs $(\widetilde{q}_1,\widetilde{r}_1)$ and $(\widetilde{q}_2,\widetilde{r}_2)$, with $\widetilde{r}_1,\widetilde{r}_2\in[2,3)$, such that \[ \begin{split} \int_0^T\!\!\int_{\mathbb{R}^3}&(|x|^{-\alpha}*|u_{n,\varepsilon}|^2)|(\nabla-\mathrm{i} A) u_{n,\varepsilon}|^2\,\mathrm{d} x\,\mathrm{d} t\;\lesssim \\ &\;\lesssim\;\big\||u_{n,\varepsilon}|^2\big\|_{L^{\frac{\widetilde{q}_1}{2}}([0,T),L^{\frac{3\widetilde{r}_1}{2(3-\widetilde{r}_1)}}(\mathbb{R}^3))}\,\big\||(\nabla-\mathrm{i} A) u_{n,\varepsilon}|^2\big\|_{L^{\frac{\widetilde{q}_2}{2}}([0,T),L^{\frac{\widetilde{r}_2}{2}}(\mathbb{R}^3))}\,, \end{split} \] which is obtained by means of the Hardy--Littlewood--Sobolev and H\"older inequalities. Therefore, \begin{equation}\label{ventisei} \begin{split} (|x|^{-\alpha}*|u_{n,\varepsilon}|^2)|&(\nabla-\mathrm{i} A) u_{n,\varepsilon}|^2\;\longrightarrow\; (|x|^{-\alpha}*|u_{\varepsilon}|^2)|(\nabla-\mathrm{i} A) u_{\varepsilon}|^2 \\ &\qquad\qquad \mbox{in }L^{1}([0,T),L^{1}(\mathbb{R}^3))\,, \end{split} \end{equation} which together with the bound \eqref{StriStraconv} implies \begin{equation}\label{StriStralimitconv} \big\|(|x|^{-\alpha}*|u_{\varepsilon}|^2)|(\nabla-\mathrm{i} A) u_{\varepsilon}|^2\big\|_{L^1([0,T),L^1(\mathbb{R}^3))}\;\lesssim_{A,T}\;\varepsilon^{-1}\,.
\end{equation} In an analogous manner, using the Hardy--Littlewood--Sobolev and H\"older inequalities, from \eqref{ventuno} and \eqref{ventidue} we get \begin{equation}\label{ventisette} \begin{split} (|x|^{-\alpha}*\nabla|u_{n,\varepsilon}|^2)&\nabla|u_{n,\varepsilon}|^2\;\longrightarrow\; (|x|^{-\alpha}*\nabla|u_{\varepsilon}|^2)\nabla|u_{\varepsilon}|^2 \\ &\quad\quad \mbox{in }L^{1}([0,T),L^{1}(\mathbb{R}^3))\,, \end{split} \end{equation} which together with the bound \eqref{StriStraconvdue} implies \begin{equation}\label{StriStraconvduelimit} \big\|(|x|^{-\alpha}*\nabla|u_{\varepsilon}|^2)\nabla|u_{\varepsilon}|^2\big\|_{L^1([0,T),L^1(\mathbb{R}^3))}\;\lesssim_{A,T}\;\varepsilon^{-1}\,. \end{equation} The a priori bound \eqref{dissiesti} in the general case follows by combining \eqref{StriStralimit}, \eqref{StriStralimitdue}, \eqref{StriStralimitconv}, and \eqref{StriStraconvduelimit}. \end{proof} \begin{remark} The inequality \eqref{pdelc}, namely \begin{equation} \Vert u_{\varepsilon}(t)\Vert_{H_{A(t)}^1}^2\;\leqslant\; (\mathcal{M}(u_{\varepsilon}))(t)+(\mathcal{E}(u_{\varepsilon}))(t)\,,\qquad t\in[0,T)\,, \end{equation} reflects the \emph{defocusing} structure of the regularised magnetic NLS \eqref{eq:visc_nls}. \end{remark} \section{Global existence for the regularised equation}\label{eq:eps-GWP} In this Section we exploit the a priori estimates for mass and energy so as to prove that the local solution to the regularised Cauchy problem \eqref{eq:visc_CauMNLS}, constructed in Section \ref{sec:LWP}, can actually be extended globally in time. We discuss first the result in the energy sub-critical case. \begin{theorem}[Global well-posedness, energy sub-critical case]\label{t:s} Assume that $A\in\mathcal{A}_1$ or $A\in\mathcal{A}_2$, and that the exponents in the non-linearity \eqref{hart_pp_nl} are in the regime $\gamma\in(1,5)$ and $\alpha\in(0,3)$. Let $\varepsilon>0$.
Then the regularised non-linear magnetic Schr\"{o}dinger equation \eqref{eq:visc_nls} is globally well-posed in $H^1({\mathbb R}^3)$, in the sense of Definitions \ref{de:ws_solution_alt} and \ref{de:WP}. Moreover, the solution $u_{\varepsilon}$ to \eqref{eq:visc_nls} with given initial datum $f\in H^1({\mathbb R}^3)$ satisfies the bound \begin{equation}\label{eq:unif_bound} \Vert u_{\varepsilon}\Vert_{L^{\infty}([0,T],H^1({\mathbb R}^3))}\;\lesssim_T\; 1\qquad\forall\, T\in(0,+\infty)\,, \end{equation} uniformly in $\varepsilon>0$. \end{theorem} \begin{proof} The local well-posedness is proved in Proposition \ref{pr:exsub}. Because of \eqref{eq:uni_bou}, the $H^1$-norm of $u_{\varepsilon}$ is bounded on finite intervals of time. Therefore, by the blow-up alternative, the solution is necessarily global and in particular it satisfies the bound \eqref{eq:unif_bound}. \end{proof} We discuss now the analogous result in the energy-critical case. \begin{theorem}[Global existence and uniqueness, energy critical case]\label{t:c} Assume that $A\in\mathcal{A}_1$ or $A\in\mathcal{A}_2$, and that the exponents in the non-linearity \eqref{hart_pp_nl} are in the regime $\gamma=5$ and $\alpha\in(0,3)$. Let $\varepsilon>0$ and $f\in H^1({\mathbb R}^3)$. The Cauchy problem \eqref{eq:visc_CauMNLS} has a unique global strong $H^1$-solution $u_\varepsilon$, in the sense of Definition \ref{de:ws_solution_alt}. Moreover, $u_\varepsilon$ satisfies the bound \begin{equation}\label{eq:unif_bound_cri} \Vert u_{\varepsilon}\Vert_{L^{\infty}([0,T],H^1({\mathbb R}^3))}\;\lesssim_T\; 1\qquad\forall\, T\in(0,+\infty)\,, \end{equation} uniformly in $\varepsilon>0$. \end{theorem} \begin{proof} The existence of a unique local solution $u_\varepsilon$ is proved in Proposition \ref{pr:crie}.
The a priori bound \eqref{dissiesti} implies that $$\int_0^T\!\!\int_{\mathbb{R}^3}\big(\,|u_{\varepsilon}|^2\,\nabla|u_{\varepsilon}|\big)^2\,\mathrm{d} x\,\mathrm{d} t\;\lesssim\;\varepsilon^{-1}\,,$$ which, together with Sobolev's embedding and the chain rule $\nabla|u_{\varepsilon}|^3=3\,|u_{\varepsilon}|^2\,\nabla|u_{\varepsilon}|$, yields \begin{equation}\label{eq:seidiciotto} \begin{aligned} \Vert u_{\varepsilon}\Vert^6_{L^6([0,T],L^{18}({\mathbb R}^3))}\;&=\;\Vert u_{\varepsilon}^3\Vert_{L^2([0,T],L^{6}({\mathbb R}^3))}^2\;\lesssim\int_0^T\!\!\int_{{\mathbb R}^3}|\nabla|u_{\varepsilon}|^3|^2\,\mathrm{d} x\,\mathrm{d} t \\ &\lesssim\int_0^T\!\!\int_{{\mathbb R}^3}|u_{\varepsilon}|^4\,|\nabla|u_{\varepsilon}||^2\,\mathrm{d} x\,\mathrm{d} t\;\lesssim\;\varepsilon^{-1}\;<\;+\infty\,. \end{aligned} \end{equation} Owing to \eqref{eq:seidiciotto} and to the blow-up alternative proved in Proposition \ref{pr:crie}, we conclude that the solution $u_\varepsilon$ can be extended globally and moreover, using again \eqref{eq:uni_bou}, it satisfies the bound \eqref{eq:unif_bound_cri}. \end{proof} \begin{remark}\label{asba} As anticipated in the Introduction, right after stating the assumptions on the magnetic potential, let us comment here on the fact that in the \emph{mass sub-critical} regime ($\gamma\in(1,\frac{7}{3})$ and $\alpha\in(0,2)$) we can work with the larger class $\widetilde{\mathcal{A}}_1$ instead of $\mathcal{A}_1$ and still prove the extension of the local solution globally in time with finite $H^1$-norm on arbitrary finite time intervals. This is due to the fact that, for a potential $A\in\widetilde{\mathcal{A}}_1$ and in the mass sub-critical regime, in order to extend the solution globally we need neither the estimate \eqref{eq:uni_bou}, as in the proof of Theorem \ref{t:s}, nor the estimate \eqref{dissiesti}, as in the proof of Theorem \ref{t:c}.
Indeed, we can first prove local well-posedness in $L^2(\mathbb{R}^3)$ for the regularised magnetic NLS \eqref{eq:visc_nls}, using a fixed point argument based on the space-time estimates for the heat-Schr\"odinger flow, in the very same spirit as the proof of Theorem \ref{th:energy_linear}. Then we can extend such a solution globally in time using only the mass a priori bound \eqref{eq:Mbound}, for proving such a bound does not require any time-regularity assumption on the magnetic potential. Moreover, since the non-linearities are mass sub-critical and since we can prove convenient estimates on the commutator $[\nabla,(\nabla-\mathrm{i}\,A)^2]$ when $\max{\{b_1,b_2\}}\in (3,6)$, we can show that the global $L^2$-solution exhibits \emph{persistence of $H^1$-regularity}, in the sense that it stays in $H^1({\mathbb R}^3)$ for every positive time provided that the initial datum belongs already to $H^1({\mathbb R}^3)$. This way, we obtain existence and uniqueness of one global strong $H^1$-solution. \end{remark} \section{Removing the regularisation}\label{sec:remov-reg} In this Section we prove our main Theorem \ref{th:main}. The proof is based on a compactness argument that we develop in Subsection \ref{sec:LocalWeakSol}, so as to remove the $\varepsilon$-regularisation, and leads to a \emph{local} weak $H^1$-solution to \eqref{eq:CauMNLS}. The reason why by compactness we can only produce local solutions is merely due to the local-in-time regularity of magnetic potentials belonging to the class $\mathcal{A}_1$ or $\mathcal{A}_2$ -- \emph{globally}-in-time regular potentials, say, $AC(\mathbb{R})$-potentials, would instead allow for a direct removal of the regularisation globally in time. In order to circumvent this simple obstruction, in Subsection \ref{sec:proof_of_main_Thm} we work out a straightforward `gluing' argument, eventually proving Theorem \ref{th:main}.
\subsection{Local weak solutions}\label{sec:LocalWeakSol} The main result of this Subsection is the following. \begin{proposition}\label{pr:compa} Assume that $A\in\mathcal{A}_1$ or $A\in\mathcal{A}_2$, and that the exponents in the non-linearity \eqref{hart_pp_nl} are in the whole regime $\gamma\in(1,5]$ and $\alpha\in(0,3)$. Let $T>0$ and $f\in H^1({\mathbb R}^3)$. For any sequence $(\varepsilon_n)_n$ of positive numbers with $\varepsilon_n\downarrow 0$, let $u_n$ be the unique global strong $H^1$-solution to the Cauchy problem \eqref{eq:visc_CauMNLS} with viscosity parameter $\varepsilon=\varepsilon_n$ and with initial datum $f$, as provided by Theorem \ref{t:s} in the energy sub-critical case and by Theorem \ref{t:c} in the energy critical case. Then, up to a subsequence, $u_n$ converges weakly-$*$ in $L^{\infty}([0,T],H^1({\mathbb R}^3))$ to a local weak $H^1$-solution $u$ to the magnetic NLS \eqref{eq:magneticNLS} in the time interval $[0,T]$ and with initial datum $f$. \end{proposition} In order to set up the compactness argument that proves Proposition \ref{pr:compa} we need a few auxiliary results, as follows. \begin{lemma}\label{le:aabb} The sequence $(u_n)_n$ in the assumption of Proposition \ref{pr:compa} is bounded in $L^{\infty}([0,T],H^1({\mathbb R}^3))$, i.e., \begin{equation}\label{uni_bou_n} \Vert u_n\Vert_{L^{\infty}([0,T],H^1({\mathbb R}^3))}\;\lesssim_{A,T}\; 1\,, \end{equation} and hence, up to a subsequence, $(u_n)_n$ admits a weak-$*$ limit $u$ in $L^{\infty}([0,T],H^1({\mathbb R}^3))$. \end{lemma} \begin{proof} An immediate consequence of the uniform-in-$\varepsilon$ bounds \eqref{eq:unif_bound}-\eqref{eq:unif_bound_cri} and the Banach--Alaoglu Theorem.
\end{proof} \begin{lemma}\label{lann} For the sequence $(u_n)_n$ in the assumption of Proposition \ref{pr:compa} there exist indices $p_i,p_{ij}\in[\frac{6}{5},2]$, $i,j\in\{1,2\}$, such that \begin{equation}\label{lima_uno} (A_i\cdot\nabla u_n)_n\mbox{ is a bounded sequence in }L^{\infty}([0,T],L^{p_i}({\mathbb R}^3))\,, \end{equation} \begin{equation}\label{lima_due} (A_i\cdot A_ju_n)_n\mbox{ is a bounded sequence in }L^{\infty}([0,T],L^{p_{ij}}({\mathbb R}^3))\,. \end{equation} \end{lemma} \begin{proof} For $p_i\in[\frac{6}{5},2]$ defined by $\frac{1}{p_i}:=\frac{1}{b_i}+\frac{1}{2}$, $i\in\{1,2\}$, the bound \eqref{uni_bou_n} and H\"older's inequality give $$\Vert A_i\cdot\nabla u_n\Vert_{L^{\infty}([0,T],L^{p_i}({\mathbb R}^3))}\;\lesssim\;\Vert A_i\Vert_{L^{\infty}([0,T],L^{b_i}({\mathbb R}^3))}\Vert \nabla u_n\Vert_{L^{\infty}([0,T],L^2({\mathbb R}^3))}\;\lesssim_{A,T}\; 1\,,$$ which proves \eqref{lima_uno}. Moreover, there exist $M_{ij}\in[2,6]$, $i,j\in\{1,2\}$, such that $p_{ij}$, defined by $\frac{1}{p_{ij}}:=\frac{1}{b_i}+\frac{1}{b_j}+\frac{1}{M_{ij}}$, belongs to $[\frac{6}{5},2]$; therefore the bound \eqref{uni_bou_n}, H\"older's inequality, and Sobolev's embedding give \begin{gather*} \Vert A_i\cdot A_j u_n\Vert_{L^{\infty}([0,T],L^{p_{ij}}({\mathbb R}^3))}\lesssim\\ \Vert A_i\Vert_{L^{\infty}([0,T],L^{b_i}({\mathbb R}^3))}\Vert A_j\Vert_{L^{\infty}([0,T],L^{b_j}({\mathbb R}^3))}\Vert u_n\Vert_{L^{\infty}([0,T],L^{M_{ij}}({\mathbb R}^3))}\lesssim_{A,T} 1\,, \end{gather*} which proves \eqref{lima_due}.
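Explicitly, since $b_i>3$, one has
\[
\frac{1}{p_i}\;=\;\frac{1}{b_i}+\frac{1}{2}\;\in\;\Big[\frac{1}{2},\frac{5}{6}\Big)\,,
\]
that is, $p_i\in(\frac{6}{5},2]$, consistently with the claimed window $[\frac{6}{5},2]$.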
\end{proof} \begin{lemma}\label{linn} For the sequence $(u_n)_n$ in the assumption of Proposition \ref{pr:compa}, and for every $\gamma\in(1,5]$ and $\alpha\in(0,3)$, there exist indices $p(\gamma),\widetilde{p}(\alpha)\in[\frac{6}{5},2]$ such that \begin{equation}\label{lima_tre} \big(|u_n|^{\gamma-1}u_n\big)_n\mbox{ is a bounded sequence in }L^{\infty}([0,T],L^{p(\gamma)}({\mathbb R}^3))\,, \end{equation} \begin{equation}\label{lima_quattro} \big((\,|\cdot|^{-\alpha}*|u_n|^2)u_n\big)_n\mbox{ is a bounded sequence in }L^{\infty}([0,T],L^{\widetilde{p}(\alpha)}({\mathbb R}^3))\,. \end{equation} \end{lemma} \begin{proof} For any $\gamma\in(1,5]$ there exists $M:=M(\gamma)\in[2,6]$ such that $M/\gamma\in[\frac{6}{5},2]$, whence \[ \begin{split} \Vert |u_n|^{\gamma-1}u_n\Vert_{L^{\infty}([0,T],L^{M/\gamma}(\mathbb{R}^3))}\;&\leqslant\; \Vert u_n\Vert_{L^{\infty}([0,T],L^M(\mathbb{R}^3))}^{\gamma} \\ &\lesssim\;\Vert u_n\Vert_{L^{\infty}([0,T],H^1(\mathbb{R}^3))}^{\gamma}\;\lesssim_{A,T}\;1\,, \end{split} \] based on the bound \eqref{uni_bou_n} and Sobolev's embedding, which proves \eqref{lima_tre}, with $p(\gamma):=M/\gamma$.
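A concrete admissible choice is, for instance,
\[
M(\gamma)\;:=\;\begin{cases} 2\gamma & \textrm{for }\gamma\in(1,3]\,,\\[0.2em] 6 & \textrm{for }\gamma\in(3,5]\,, \end{cases}
\]
for which $M(\gamma)\in[2,6]$ and $M(\gamma)/\gamma\in[\frac{6}{5},2]$ in both regimes.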
Next, let us use the Hardy--Littlewood--Sobolev inequality, for $m(\alpha)\in(1,\frac{3}{3-\alpha})$ and $g\in L^{m(\alpha)}(\mathbb{R}^3)$, $$\big\Vert\,|\cdot|^{-\alpha}*g\,\big\Vert_{L^{q(m(\alpha))}(\mathbb{R}^3)}\;\lesssim\; \Vert g\Vert_{L^{m(\alpha)}(\mathbb{R}^3)}\,,\qquad q(m(\alpha)):=\textstyle\frac{3m(\alpha)}{3-(3-\alpha)m(\alpha)}\,.$$ Taking \begin{equation}\label{qm} \begin{array}{ll} m(\alpha)\in(1,{\textstyle\frac{3}{3-\alpha}})\;&\quad\textrm{if }\alpha\in(0,2] \\ m(\alpha)\in(1,3]\;&\quad\textrm{if }\alpha\in(2,3)\,, \end{array} \end{equation} the Hardy--Littlewood--Sobolev inequality above and Sobolev's embedding yield \begin{equation}\label{mangf} \begin{split} \big\Vert\,|\cdot|^{-\alpha}*|u|^2\big\Vert_{L^{\infty}([0,T],L^{q(m(\alpha))}(\mathbb{R}^3))}\;&\lesssim\; \Vert u^2 \Vert_{L^{\infty}([0,T],L^{m(\alpha)}(\mathbb{R}^3))} \\ &\lesssim\; \Vert u\Vert_{L^{\infty}([0,T],H^1(\mathbb{R}^3))}^2\,. \end{split} \end{equation} Since $\frac{3}{4-\alpha}<\frac{3}{3-\alpha}$ for $\alpha\in(0,3)$, we can find $m(\alpha)$ that satisfies \eqref{qm} and is such that $\widetilde{p}(\alpha)$, defined by $\frac{1}{\widetilde{p}(\alpha)}:=\frac{1}{q(m(\alpha))}+\frac{1}{2}$, belongs to $[\frac{6}{5},2]$, namely \begin{equation} m(\alpha)\;\in\;\Big(\frac{3}{4-\alpha},\frac{3}{3-\alpha}\Big)\,. \end{equation} As a consequence, for such $\widetilde{p}(\alpha)\in[\frac{6}{5},2]$ one has \[ \begin{split} \big\Vert(\,|\cdot|^{-\alpha}&*|u_n|^2)u_n\big\Vert_{L^{\infty}([0,T],L^{\widetilde{p}(\alpha)}(\mathbb{R}^3))}\;\lesssim \\ &\lesssim\;\big\Vert\,|\cdot|^{-\alpha}*|u_n|^2\big\Vert_{L^{\infty}([0,T],L^{q(m(\alpha))}(\mathbb{R}^3))}\,\Vert u_n\Vert_{L^{\infty}([0,T],L^{2}(\mathbb{R}^3))} \\ &\lesssim_{A,T}\;\Vert u_n\Vert_{L^{\infty}([0,T],H^{1}(\mathbb{R}^3))}\;\lesssim_{A,T}\; 1\,, \end{split} \] based on H\"{o}lder's inequality (first step), the bound \eqref{mangf} (second step), and Sobolev's embedding (third step), which proves \eqref{lima_quattro}.
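Incidentally, the window $\big(\frac{3}{4-\alpha},\frac{3}{3-\alpha}\big)$ can be recovered by direct computation: since $\frac{1}{q(m(\alpha))}=\frac{1}{m(\alpha)}-\frac{3-\alpha}{3}$ and $\frac{1}{\widetilde{p}(\alpha)}=\frac{1}{q(m(\alpha))}+\frac{1}{2}$, the requirement $\widetilde{p}(\alpha)\in[\frac{6}{5},2]$ amounts to $q(m(\alpha))\geqslant 3$, i.e.,
\[
\frac{1}{m(\alpha)}\;\leqslant\;\frac{1}{3}+\frac{3-\alpha}{3}\;=\;\frac{4-\alpha}{3}\,,
\]
while the finiteness and positivity of $q(m(\alpha))$ require $m(\alpha)<\frac{3}{3-\alpha}$.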
\end{proof} \begin{corollary}\label{cor:cor} For the sequence $(u_n)_n$ in the assumption of Proposition \ref{pr:compa} there exist indices $p_i$, $p_{ij}$, $p(\gamma)$, and $\widetilde{p}(\alpha)$ in $[\frac{6}{5},2]$, and there exist functions $X_i\in L^{\infty}([0,T],L^{p_i}({\mathbb R}^3))$, $Y_{ij}\in L^{\infty}([0,T],L^{p_{ij}}({\mathbb R}^3))$, $N_1\in L^{\infty}([0,T],L^{p(\gamma)}({\mathbb R}^3))$, and $N_2\in L^{\infty}([0,T],L^{\widetilde{p}(\alpha)}({\mathbb R}^3))$ such that \begin{eqnarray} A_i\cdot\nabla u_n\!\!\!&\rightarrow&\!\!\! X_i\qquad\mbox{weakly-$*$}\mbox{ in } L^{\infty}([0,T],L^{p_i}({\mathbb R}^3)) \label{eq:cADu}\\ A_i\cdot A_ju_n\!\!\!&\rightarrow&\!\!\! Y_{ij}\qquad\mbox{weakly-$*$}\mbox{ in } L^{\infty}([0,T],L^{p_{ij}}({\mathbb R}^3)) \label{eq:cAAu} \\ |u_n|^{\gamma-1}u_n\!\!\!&\rightarrow&\!\!\! N_1\qquad\mbox{weakly-$*$}\mbox{ in } L^{\infty}([0,T],L^{p(\gamma)}({\mathbb R}^3)) \label{eq:cuuu}\\ (|\cdot|^{-\alpha}*|u_n|^2)u_n\!\!\!&\rightarrow&\!\!\! N_2\qquad\mbox{weakly-$*$}\mbox{ in }L^{\infty}([0,T],L^{\widetilde{p}(\alpha)}({\mathbb R}^3))\,. \label{eq:calphau} \end{eqnarray} \end{corollary} \begin{proof} An immediate consequence of Lemmas \ref{lann} and \ref{linn}, using the Banach--Alaoglu Theorem. \end{proof} \begin{lemma} For the sequence $(u_n)_n$ in the assumption of Proposition \ref{pr:compa}, for the corresponding weak limit $u$ identified in Lemma \ref{le:aabb}, and for the exponents $p_i$, $i\in\{1,2\}$, identified in Corollary \ref{cor:cor}, one has \begin{equation}\label{eq:cADuF} A_i\cdot\nabla u_n\;\rightarrow\;A_i\cdot\nabla u\qquad\textrm{weakly in } L^{2}([0,T],L^{p_i}({\mathbb R}^3))\,.
\end{equation} \end{lemma} \begin{proof} Because of the bound \eqref{uni_bou_n}, up to a subsequence $$\nabla u_n\rightarrow \nabla u\qquad\mbox{weakly in }L^2([0,T],L^2({\mathbb R}^3))\,.$$ Now, since $\frac{1}{p_i}=\frac{1}{b_i}+\frac{1}{2}$, and hence $\frac{1}{p_i'}+\frac{1}{b_i}=\frac{1}{2}$, and since $A_i\in L^\infty([0,T],L^{b_i}({\mathbb R}^3))$, one has $A_i\eta\in L^2([0,T],L^2({\mathbb R}^3))$ for any $\eta\in L^{2}([0,T],L^{p'_i}(\mathbb{R}^3))$. Then $$\int_0^T\!\!\int_{{\mathbb R}^3}A_i\cdot(\nabla u_n-\nabla u)\overline{\eta}\,\mathrm{d} x\,\mathrm{d} t\;=\;\int_0^T\!\!\int_{{\mathbb R}^3}(\nabla u_n-\nabla u)A_i\overline{\eta}\,\mathrm{d} x\,\mathrm{d} t\;\rightarrow\; 0\,,$$ thus concluding the proof. \end{proof} \begin{lemma} Let $\Omega$ be an open, bounded subset of ${\mathbb R}^3$ and let $M\in[1,+\infty]$. For the sequence $(u_n)_n$ in the assumption of Proposition \ref{pr:compa}, and for the corresponding weak limit $u$ identified in Lemma \ref{le:aabb}, \begin{equation}\label{eq:unuO} u_n|_\Omega\rightarrow u|_\Omega\qquad\mbox{strongly in }L^M([0,T],L^4(\Omega))\,. \end{equation} \end{lemma} \begin{proof} Because of \eqref{uni_bou_n}, $(u_n)_n$ is a bounded sequence in $L^M([0,T],H^1({\mathbb R}^3))$ for any $M\in[1,+\infty]$. Moreover, for every $t\in[0,T]$, $u_n$ satisfies \[ \mathrm{i}\,\partial_t u_n\;=\;-(1-\mathrm{i}\,\varepsilon)(\Delta u_n-2\,\mathrm{i}\,A\cdot\nabla u_n-|A|^2u_n)+\mathcal{N}(u_n) \] as an identity between $H^{-1}$ functions.
Hence, owing to the estimate \eqref{H1H-1} and to the boundedness of the map $\mathcal{N}:H^1(\mathbb{R}^3)\to H^{-1}(\mathbb{R}^3)$, \begin{equation}\label{uni:dersuR3} \begin{aligned} \Vert&\partial_tu_n\Vert_{L^\infty([0,T],H^{-1}(\mathbb{R}^3))}\; \lesssim\;\\ & \lesssim\;\Vert(\nabla-\mathrm{i}\,A)^2u_n\Vert_{L^{\infty}([0,T],H^{-1}(\mathbb{R}^3))}+\Vert \mathcal{N}(u_n)\Vert_{L^{\infty}([0,T],H^{-1}(\mathbb{R}^3))}\\ &\lesssim_A\,\Vert u_n\Vert_{L^{\infty}([0,T],H^1(\mathbb{R}^3))}\;\lesssim_{A,T}\; 1\,. \end{aligned} \end{equation} In particular, \begin{equation}\label{uni:der} \Vert\partial_tu_n|_\Omega\Vert_{L^1([0,T],H^{-1}(\Omega))}\;\lesssim_{A,T}\; 1\,. \end{equation} Therefore \eqref{eq:unuO} follows as an application of the Aubin--Lions compactness lemma (see, e.g., \cite[Section 7.3]{Roubicek_NPDEbook}) to the bound \eqref{uni:der}, with respect to the compact inclusion $H^1(\Omega)\hookrightarrow L^4(\Omega)$ and the continuous inclusion $L^4(\Omega)\hookrightarrow H^{-1}(\Omega)$. \end{proof} \begin{lemma}\label{pr:alme} For the limit function $u$ identified in Lemma \ref{le:aabb} and for the limit functions $X_i$, $Y_{ij}$, and $N_i$ identified in Corollary \ref{cor:cor} one has the pointwise identities, for $t\in[0,T]$ and a.e.~$x\in\mathbb{R}^3$: \begin{eqnarray} A_i\cdot\nabla u\!\!\!&=&\!\!\!X_i \label{mufy1} \\ A_i\cdot A_ju\!\!\!&=&\!\!\!Y_{ij} \label{mufy2} \\ |u|^{\gamma -1}u\!\!\!&=&\!\!\!N_1 \label{mufy3} \\ \big(\,|\cdot|^{-\alpha}*|u|^2\big)u\!\!\!&=&\!\!\!N_2\,.
\label{mufy4} \end{eqnarray} \end{lemma} \begin{proof} By \eqref{eq:cADuF}, for the exponents $p_i$, $i\in\{1,2\}$, identified in Corollary \ref{cor:cor} one has $A_i\cdot\nabla u_n\rightarrow A_i\cdot\nabla u$ weakly in $L^{2}([0,T],L^{p_i}({\mathbb R}^3))$. The limits \eqref{eq:cADu} and \eqref{eq:cADuF} therefore imply \[ \begin{split} \int_0^T\!\!\int_{{\mathbb R}^3}\big(A_i\cdot\nabla u_n-A_i\cdot\nabla u \big)\,\varphi\,\mathrm{d} x\,\mathrm{d} t\;&\rightarrow\;0\\ \int_0^T\!\!\int_{{\mathbb R}^3}\big(A_i\cdot\nabla u_n-X_i \big)\,\varphi\,\mathrm{d} x\,\mathrm{d} t\;&\rightarrow\;0 \end{split} \] for arbitrary $\varphi\in\mathcal{S}({\mathbb R}\times{\mathbb R}^3)$, whence the pointwise identity \eqref{mufy1}. Let now $\Omega$ be an open and bounded subset of $\mathbb{R}^3$, and let $M\in[1,+\infty]$. Since, as seen in \eqref{eq:unuO}, $u_n|_\Omega$ converges to $u|_\Omega$ in $L^M([0,T],L^4(\Omega))$, then up to a subsequence one has also pointwise convergence, whence \begin{eqnarray} A_i\cdot A_ju_n|_\Omega\!\!\!&\rightarrow&\!\!\! A_i\cdot A_ju|_\Omega \label{eq:cAAuF} \\ |u_n|^{\gamma -1}u_n|_\Omega\!\!\!&\rightarrow&\!\!\!
|u|^{\gamma -1}u|_\Omega \label{eq:cuuuF} \\ \big(\,|\cdot|^{-\alpha}*|u_n|^2\big)u_n|_\Omega\!\!\!&\rightarrow&\!\!\! \big(\,|\cdot|^{-\alpha}*|u|^2\big)u|_\Omega \label{eq:calphauF} \end{eqnarray} pointwise for $t\in[0,T]$ and a.e.~$x\in\Omega$. Therefore, \eqref{mufy2}, \eqref{mufy3}, and \eqref{mufy4} follow by the uniqueness of the pointwise limit and the arbitrariness of $\Omega$, combining, respectively, \eqref{eq:cAAu}, \eqref{eq:cuuu}, and \eqref{eq:calphau} with, respectively, \eqref{eq:cAAuF}, \eqref{eq:cuuuF}, and \eqref{eq:calphauF}. \end{proof} With the material collected so far we can complete the argument for the removal of the parabolic regularisation, locally in time. \begin{proof}[Proof of Proposition \ref{pr:compa}] We want to show that the function $u$ identified in Lemma \ref{le:aabb} is actually a local weak $H^1$-solution, in the sense of Definition \ref{de:ws_solution_alt}, to the magnetic NLS \eqref{eq:magneticNLS} with initial datum $f$ in the time interval $[0,T]$. All the exponents $p_i$, $p_{ij}$, $p(\gamma)$, and $\widetilde{p}(\alpha)$ identified in Corollary \ref{cor:cor} belong to the interval $[\frac{6}{5},2]$, hence by Sobolev's embedding the functions $X_i=A_i\cdot\nabla u$, $Y_{ij}=A_i\cdot A_ju$, $N_1=|u|^{\gamma -1}u$, and $N_2=(|\cdot|^{-\alpha}*|u|^2)u$ discussed in Corollary \ref{cor:cor} and Lemma \ref{pr:alme} all belong to $H^{-1}({\mathbb R}^3)$, and so too does $\Delta u$, obviously. Therefore \eqref{eq:magneticNLS} is satisfied by $u$ as an identity between $H^{-1}$-functions. As a consequence, one can repeat the argument used to derive the estimate \eqref{uni:dersuR3}, whence $\partial_t u\in L^{\infty}([0,T],H^{-1}(\mathbb{R}^3))$. Thus, $u\in W^{1,\infty}([0,T],H^{-1}(\mathbb{R}^3))$.
On the other hand $u_n\in C^1([0,T],H^{-1}(\mathbb{R}^3))$, and Lemma \ref{le:aabb} implies \[ \int_0^T\!\!\int_{\mathbb{R}^3}\eta(t,x)\big(u_n(t,x)-u(t,x) \big)\,\mathrm{d} x\,\mathrm{d} t\;\to\;0\qquad\forall\, \eta\in L^1([0,T],H^{-1}(\mathbb{R}^3))\,. \] For $\eta(t,x)=\delta(t-t_0)\,\varphi(x)$, where $t_0$ is arbitrary in $[0,T]$ and $\varphi$ is arbitrary in $L^2(\mathbb{R}^3)$, the limit above reads $u_n(t_0,\cdot)\to u(t_0,\cdot)$ weakly in $L^2(\mathbb{R}^3)$; since $u_n(0,\cdot)=f$ for every $n$, the choice $t_0=0$ gives $u(0,\cdot)=f(\cdot)$. \end{proof} \subsection{Proof of the main Theorem}\label{sec:proof_of_main_Thm} It is already evident at this stage that had we assumed the magnetic potential to be an $AC$-function for all times, then the proof of the existence of a global weak solution with finite energy would be completed with the proof of Proposition \ref{pr:compa} above, in full analogy with the scheme of the work \cite{GuoNakStr-1995} we mentioned in the Introduction. Our potential being in general only $AC_{\mathrm{loc}}$ in time, we cannot appeal to bounds that are uniform in time (indeed, our \eqref{eq:unif_bound} and \eqref{eq:unif_bound_cri} are $T$-dependent), and the following straightforward strategy must be added in order to complete the proof of our main result. \begin{proof}[Proof of Theorem \ref{th:main}] We set $T=1$ and we choose an arbitrary sequence $(\varepsilon_n)_n$ of positive numbers with $\varepsilon_n\downarrow 0$. Let $u_n$ be the unique global strong $H^1$-solution to the regularised magnetic NLS \eqref{eq:visc_nls} with viscosity parameter $\varepsilon=\varepsilon_n$ and with initial datum $f\in H^1({\mathbb R}^3)$. By Proposition \ref{pr:compa}, there exists a subsequence $(\varepsilon_{n'})_{n'}$ of $(\varepsilon_n)_n$ such that $u_{n'}\to u_1$ weakly-$*$ in $L^{\infty}([0,1],H^1({\mathbb R}^3))$, where $u_1$ is a local weak $H^1$-solution to the magnetic NLS \eqref{eq:magneticNLS} with $u_1(0)=f$.
If we take instead $T=2$ and repeat the argument, we find a subsequence $(\varepsilon_{n''})_{n''}$ of $(\varepsilon_{n'})_{n'}$ such that $u_{n''}\to u_2$ weakly-$*$ in $L^{\infty}([0,2],H^1({\mathbb R}^3))$, where $u_2$ is a local weak $H^1$-solution to \eqref{eq:magneticNLS} with $u_2(0)=f$, now in the time interval $[0,2]$. Moreover, having refined the $u_{n'}$'s in order to obtain the $u_{n''}$'s, necessarily $u_2(t)=u_1(t)$ for $t\in[0,1]$. Iterating this process, we construct for any $N\in{\mathbb N}$ a function $u_N$ which is a local weak $H^1$-solution to \eqref{eq:magneticNLS} in the time interval $[0,N]$, with $u_N(0)=f$ and $u_N(t)=u_{N-1}(t)$ for $t\in[0,N-1]$. It remains to define $$u(t,x)\;:=\;u_{N}(t,x)\qquad x\in\mathbb{R}^3\,,\quad t\in[0,+\infty)\,,\quad N=[t]+1\,.$$ Since $u_{N}\in L^{\infty}([0,N],H^1({\mathbb R}^3))\cap W^{1,\infty}([0,N],H^{-1}({\mathbb R}^3))$ for every $N\in{\mathbb N}$, such $u$ turns out to be a global weak $H^1$-solution to \eqref{eq:CauMNLS} with finite energy for a.e.~$t\in[0,+\infty)$, uniformly on compact time intervals. \end{proof} \end{document}
\begin{document} \title[Donoghue-Type $m$-Functions]{Donoghue-Type $m$-Functions for Schr\"odinger Operators with Operator-Valued Potentials} \author[F.\ Gesztesy]{Fritz Gesztesy} \address{Department of Mathematics, University of Missouri, Columbia, MO 65211, USA} \email{\mailto{[email protected]}} \urladdr{\url{http://faculty.missouri.edu/~gesztesyf}} \author[S.\ N.\ Naboko]{Sergey N.\ Naboko} \address{Department of Mathematical Physics, St.~Petersburg State University, Ulianovskaia 1, NIIF, St.~Peterhof, St.~Petersburg, Russian Federation, 198504} \email{\mailto{[email protected]}} \author[R.\ Weikard]{Rudi Weikard} \address{Department of Mathematics, University of Alabama at Birmingham, Birmingham, AL 35294, USA} \email{\mailto{[email protected]}} \urladdr{\url{http://www.math.uab.edu/~rudi/}} \author[M.\ Zinchenko]{Maxim Zinchenko} \address{Department of Mathematics and Statistics, University of New Mexico, Albuquerque, NM 87131, USA} \email{\mailto{[email protected]}} \urladdr{\url{http://www.math.unm.edu/~maxim/}} \date{\today} \thanks{S.N. is supported by grants NCN 2013/09/BST1/04319, RFBR 12-01-00215-a, and Marie Curie grant PIIF-GA-2011-299919; Research of M.Z. is supported in part by a Simons Foundation grant CGM--281971.} \subjclass[2010]{Primary: 34B20, 35P05. Secondary: 34B24, 47A10.} \keywords{Weyl--Titchmarsh theory, spectral theory, operator-valued ODEs.} \begin{abstract} Given a complex, separable Hilbert space ${\mathcal H}$, we consider differential expressions of the type $\tau = - (d^2/dx^2) I_{{\mathcal H}} + V(x)$, with $x \in (x_0,\infty)$ for some $x_0 \in {\mathbb{R}}$, or $x \in {\mathbb{R}}$ (assuming the limit-point property of $\tau$ at $\pm \infty$).
Here $V$ denotes a bounded operator-valued potential $V(\cdot) \in {\mathcal B}({\mathcal H})$ such that $V(\cdot)$ is weakly measurable, the operator norm $\|V(\cdot)\|_{{\mathcal B}({\mathcal H})}$ is locally integrable, and $V(x) = V(x)^*$ a.e.\ on $x \in [x_0,\infty)$ or $x \in {\mathbb{R}}$. We focus on two major cases. First, on $m$-function theory for self-adjoint half-line $L^2$-realizations $H_{+,\alpha}$ in $L^2((x_0,\infty); dx; {\mathcal H})$ (with $x_0$ a regular endpoint for $\tau$, associated with the self-adjoint boundary condition $\sin(\alpha)u'(x_0) + \cos(\alpha)u(x_0)=0$, indexed by the self-adjoint operator $\alpha = \alpha^* \in {\mathcal B}({\mathcal H})$), and second, on $m$-function theory for self-adjoint full-line $L^2$-realizations $H$ of $\tau$ in $L^2({\mathbb{R}}; dx; {\mathcal H})$. In a nutshell, a Donoghue-type $m$-function $M_{A,{\mathcal N}_i}^{Do}(\cdot)$ associated with self-adjoint extensions $A$ of a closed, symmetric operator $\dot A$ in ${\mathcal H}$ with deficiency spaces ${\mathcal N}_z = \ker \big(\dot A^* - z I_{{\mathcal H}}\big)$ and corresponding orthogonal projections $P_{{\mathcal N}_z}$ onto ${\mathcal N}_z$ is given by \begin{align*} M_{A,{\mathcal N}_i}^{Do}(z)&=P_{{\mathcal N}_i} (zA + I_{\mathcal H})(A - z I_{{\mathcal H}})^{-1} P_{{\mathcal N}_i}\big\vert_{{\mathcal N}_i} \\ &=zI_{{\mathcal N}_i} + (z^2+1) P_{{\mathcal N}_i} (A - z I_{{\mathcal H}})^{-1} P_{{\mathcal N}_i}\big\vert_{{\mathcal N}_i} \, , \quad z\in {\mathbb{C}}\backslash {\mathbb{R}}. \end{align*} In the concrete case of half-line and full-line Schr\"odinger operators, the role of $\dot A$ is played by a suitably defined minimal Schr\"odinger operator $H_{+,\min}$ in $L^2((x_0,\infty); dx; {\mathcal H})$ and $H_{\min}$ in $L^2({\mathbb{R}}; dx; {\mathcal H})$, both of which will be proven to be completely non-self-adjoint.
The latter property is used to prove that if $H_{+,\alpha}$ in $L^2((x_0,\infty); dx; {\mathcal H})$, respectively, $H$ in $L^2({\mathbb{R}}; dx; {\mathcal H})$, are self-adjoint extensions of $H_{+,\min}$, respectively, $H_{\min}$, then the corresponding operator-valued measures in the Herglotz--Nevanlinna representations of the Donoghue-type $m$-functions $M_{H_{+,\alpha}, {\mathcal N}_{+,i}}^{Do}(\cdot)$ and $M_{H, {\mathcal N}_i}^{Do}(\cdot)$ encode the entire spectral information of $H_{+,\alpha}$, respectively, $H$. \end{abstract} \maketitle {\scriptsize \tableofcontents} \section{Introduction} \label{s1} The principal topic of this paper centers around basic spectral theory for self-adjoint Schr\"odinger operators with bounded operator-valued potentials on a half-line as well as on the full real line, focusing on Donoghue-type $m$-function theory, eigenfunction expansions, and a version of the spectral theorem. More precisely, given a complex, separable Hilbert space ${\mathcal H}$, we consider differential expressions $\tau$ of the type \begin{equation} \tau = - (d^2/dx^2) I_{{\mathcal H}} + V(x), \label{1.1} \end{equation} with $x \in (x_0,\infty)$ or $x \in {\mathbb{R}}$ ($x_0 \in {\mathbb{R}}$ a reference point), and $V$ a bounded operator-valued potential $V(\cdot) \in {\mathcal B}({\mathcal H})$ such that $V(\cdot)$ is weakly measurable, the operator norm $\|V(\cdot)\|_{{\mathcal B}({\mathcal H})}$ is locally integrable, and $V(x) = V(x)^*$ a.e.\ on $x \in [x_0,\infty)$ or $x \in {\mathbb{R}}$. The self-adjoint operators in question are then half-line $L^2$-realizations of $\tau$ in $L^2((x_0,\infty); dx; {\mathcal H})$, with $x_0$ assumed to be a regular endpoint for $\tau$, and hence with appropriate boundary conditions at $x_0$ (cf.\ \eqref{1.26}) on one hand, and full-line $L^2$-realizations of $\tau$ in $L^2({\mathbb{R}}; dx; {\mathcal H})$ on the other.
The case of Schr\"odinger operators with operator-valued potentials under various continuity or smoothness hypotheses on $V(\cdot)$, and under various self-adjoint boundary conditions on bounded and unbounded open intervals, received considerable attention in the past. In the special case where $\dim({\mathcal H})<\infty$, that is, in the case of Schr\"odinger operators with matrix-valued potentials, the literature is so voluminous that we cannot possibly describe individual references and hence we primarily refer to the monographs \cite{AM63}, \cite{RK05}, and the references cited therein. We note that the finite-dimensional case, $\dim({\mathcal H}) < \infty$, as discussed in \cite{BL00}, is of considerable interest as it represents an important ingredient in some proofs of Lieb--Thirring inequalities (cf.\ \cite{LW00}). For the particular case of Schr\"odinger-type operators corresponding to the differential expression $\tau = - (d^2/dx^2) I_{{\mathcal H}} + A + V(x)$ on a bounded interval $(a,b) \subset {\mathbb{R}}$ with either $A=0$ or $A$ a self-adjoint operator satisfying $A\geq c I_{{\mathcal H}}$ for some $c>0$, we refer to the list of references in \cite{GWZ13b}. For earlier results on various aspects of boundary value problems, spectral theory, and scattering theory in the half-line case $(a,b) =(0,\infty)$, we refer, for instance, to \cite{Al06a}, \cite{AM10}, \cite{De08}, \cite{Go68}--\cite{GG69}, \cite[Chs.~3,4]{GG91}, \cite{GM76}, \cite{HMM13}, \cite{KL67}, \cite{Mo07}, \cite{Mo10}, \cite{Ro60}, \cite{Sa71}, \cite{Tr00} (the case of the real line is discussed in \cite{VG70}). Our treatment of spectral theory for half-line and full-line Schr\"odinger operators in $L^2((x_0,\infty); dx; {\mathcal H})$ and in $L^2({\mathbb{R}}; dx; {\mathcal H})$, respectively, in \cite{GWZ13}, \cite{GWZ13b} represents the most general one to date. 
Next, we briefly turn to Donoghue-type $m$-functions which abstractly can be introduced as follows (cf.\ \cite{GKMT01}, \cite{GMT98}). Given a self-adjoint extension $A$ of a densely defined, closed, symmetric operator $\dot A$ in ${\mathcal K}$ (a complex, separable Hilbert space) and the deficiency subspace ${\mathcal N}_i$ of $\dot A$ in ${\mathcal K}$, with \begin{equation} {\mathcal N}_i = \ker \big(\dot A^* - i I_{{\mathcal K}}\big), \quad \dim \, ({\mathcal N}_i)=k\in {\mathbb{N}} \cup \{\infty\}, \label{1.2} \end{equation} the Donoghue-type $m$-operator $M_{A,{\mathcal N}_i}^{Do} (z) \in{\mathcal B}({\mathcal N}_i)$ associated with the pair $(A,{\mathcal N}_i)$ is given by \begin{align} \begin{split} M_{A,{\mathcal N}_i}^{Do}(z)&=P_{{\mathcal N}_i} (zA + I_{\mathcal K})(A - z I_{{\mathcal K}})^{-1} P_{{\mathcal N}_i}\big\vert_{{\mathcal N}_i} \\ &=zI_{{\mathcal N}_i} + (z^2+1) P_{{\mathcal N}_i} (A - z I_{{\mathcal K}})^{-1} P_{{\mathcal N}_i} \big\vert_{{\mathcal N}_i}\,, \quad z\in {\mathbb{C}}\backslash {\mathbb{R}}, \label{1.3} \end{split} \end{align} with $I_{{\mathcal N}_i}$ the identity operator in ${\mathcal N}_i$, and $P_{{\mathcal N}_i}$ the orthogonal projection in ${\mathcal K}$ onto ${\mathcal N}_i$. Then $M_{A,{\mathcal N}_i}^{Do}(\cdot)$ is a ${\mathcal B}({\mathcal N}_i)$-valued Nevanlinna--Herglotz function that admits the representation \begin{equation} M_{A,{\mathcal N}_i}^{Do}(z) = \int_{\mathbb{R}} d\Omega_{A,{\mathcal N}_i}^{Do}(\lambda) \bigg[\frac{1}{\lambda-z} - \frac{\lambda}{\lambda^2 + 1}\bigg], \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}, \label{1.4} \end{equation} where the ${\mathcal B}({\mathcal N}_i)$-valued measure $\Omega_{A,{\mathcal N}_i}^{Do}(\cdot)$ satisfies \eqref{5.6}--\eqref{5.8}.
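As an elementary illustration of the Nevanlinna--Herglotz property underlying \eqref{1.4} (recorded here merely as a sanity check, with ${\mathcal K}$ replaced by a one-dimensional space on which $A$ acts as multiplication by some $\lambda_0\in{\mathbb{R}}$ and all projections taken trivial), \eqref{1.3} reduces to the scalar function
\[
m^{Do}(z)\,=\,\frac{z\lambda_0+1}{\lambda_0-z}\,, \qquad
\operatorname{Im}\big(m^{Do}(z)\big)\,=\,\frac{(\lambda_0^2+1)\operatorname{Im}(z)}{|\lambda_0-z|^2}\,>\,0 \, \text{ for } \operatorname{Im}(z)>0\,,
\]
consistent with the representation \eqref{1.4} for the point measure $d\Omega^{Do} = (\lambda_0^2+1)\,d\delta_{\lambda_0}$.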
In the concrete case of regular half-line Schr\"odinger operators in $L^2((x_0,\infty); dx)$ with a scalar potential, Donoghue \cite{Do65} introduced the analog of \eqref{1.3} and used it to settle certain inverse spectral problems. As has been shown in detail in \cite{GKMT01}, \cite{GMT98}, \cite{GT00}, Donoghue-type $m$-functions naturally lead to Krein-type resolvent formulas as well as linear fractional transformations relating two different self-adjoint extensions of $\dot A$. However, in this paper we are particularly interested in the question under which conditions on $\dot A$ the spectral information on its self-adjoint extension $A$, contained in its family of spectral projections $\{E_A(\lambda)\}_{\lambda \in {\mathbb{R}}}$, is already encoded in the ${\mathcal B}({\mathcal N}_i)$-valued measure $\Omega_{A,{\mathcal N}_i}^{Do}(\cdot)$. As shown in Corollary \ref{c5.8}, this is the case if and only if $\dot A$ is completely non-self-adjoint in ${\mathcal K}$, and we will apply this to half-line and full-line Schr\"odinger operators with ${\mathcal B}({\mathcal H})$-valued potentials. In the general case of ${\mathcal B}({\mathcal H})$-valued potentials on the right half-line $(x_0,\infty)$, assuming Hypothesis \ref{h6.1}\,$(i)$, we introduce minimal and maximal operators $H_{+,\min}$ and $H_{+,\max}$ in $L^2((x_0,\infty); dx; {\mathcal H})$ associated to $\tau$, as well as self-adjoint extensions $H_{+,\alpha}$ of $H_{+,\min}$ (cf.\ \eqref{3.2}, \eqref{3.4}, \eqref{3.9}), and, given the generating property of the deficiency spaces ${\mathcal N}_{+,z} = \ker(H_{+,\min}^* - z I)$, $z\in {\mathbb{C}} \backslash {\mathbb{R}}$, proven in Theorem \ref{t6.2}, conclude that $H_{+,\min}$ is completely non-self-adjoint (i.e., it has no nontrivial invariant subspace in $L^2((x_0,\infty); dx; {\mathcal H})$ on which it is self-adjoint).
According to \eqref{1.3}, the right half-line Donoghue-type $m$-function corresponding to $H_{+,\alpha}$ and ${\mathcal N}_{+,i}$ is given by \begin{align} \begin{split} M_{H_{+,\alpha}, {\mathcal N}_{+,i}}^{Do} (z,x_0) &= P_{{\mathcal N}_{+,i}} (z H_{+,\alpha} + I) (H_{+,\alpha} - z I)^{-1} P_{{\mathcal N}_{+,i}} \big|_{{\mathcal N}_{+,i}} \label{1.5} \\ &= \int_{\mathbb{R}} d\Omega_{H_{+,\alpha},{\mathcal N}_{+,i}}^{Do}(\lambda,x_0) \bigg[\frac{1}{\lambda-z} - \frac{\lambda}{\lambda^2 + 1}\bigg], \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}, \end{split} \end{align} where $\Omega_{H_{+,\alpha},{\mathcal N}_{+,i}}^{Do}(\, \cdot\, , x_0)$ satisfies the analogs of \eqref{5.6}--\eqref{5.8}. Combining Corollary \ref{c5.8} with the complete non-self-adjointness of $H_{+,\min}$ proves that the entire spectral information for $H_{+,\alpha}$, contained in the corresponding family of spectral projections $\{E_{H_{+,\alpha}}(\lambda)\}_{\lambda \in {\mathbb{R}}}$ in $L^2((x_0,\infty); dx; {\mathcal H})$, is already encoded in the ${\mathcal B}({\mathcal N}_{+,i})$-valued measure $\Omega_{H_{+,\alpha},{\mathcal N}_{+,i}}^{Do}(\, \cdot \,,x_0)$ (including multiplicity properties of the spectrum of $H_{+,\alpha}$).
An explicit computation of $M_{H_{+,\alpha}, {\mathcal N}_{+,i}}^{Do} (z,x_0)$ then yields \begin{align} & M_{H_{+,\alpha}, {\mathcal N}_{+,i}}^{Do} (z,x_0) = \sum_{j,k \in {\mathcal J}} \big(e_j, m_{+,\alpha}^{Do}(z,x_0) e_k\big)_{{\mathcal H}} \nonumber \\ & \quad \times (\psi_{+,\alpha}(i, \, \cdot \, ,x_0) [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} e_k, \, \cdot \, )_{L^2((x_0,\infty); dx; {\mathcal H})} \nonumber \\ & \quad \times \psi_{+,\alpha}(i, \, \cdot \, ,x_0) [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} e_j \big|_{{\mathcal N}_{+,i}}, \quad z \in {\mathbb{C}} \backslash {\mathbb{R}}, \label{1.6} \end{align} where $\{e_j\}_{j \in {\mathcal J}}$ is an orthonormal basis in ${\mathcal H}$ (${\mathcal J} \subseteq {\mathbb{N}}$ an appropriate index set) and the ${\mathcal B}({\mathcal H})$-valued Nevanlinna--Herglotz functions $m_{+,\alpha}^{Do}(\, \cdot \, , x_0)$ are given by \begin{align} m_{+,\alpha}^{Do}(z,x_0) &= [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} [m_{+,\alpha}(z,x_0) - \operatorname{Re}(m_{+,\alpha}(i,x_0))] \nonumber \\ & \quad \times [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} \label{1.7} \\ &= d_{+, \alpha} + \int_{\mathbb{R}} d\omega_{+,\alpha}^{Do}(\lambda,x_0) \bigg[\frac{1}{\lambda-z} - \frac{\lambda}{\lambda^2 + 1}\bigg], \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}. \label{1.8} \end{align} Here $d_{+,\alpha} = \operatorname{Re}(m_{+,\alpha}^{Do}(i,x_0)) \in {\mathcal B}({\mathcal H})$, and \begin{equation} \omega_{+,\alpha}^{Do}(\, \cdot \,,x_0) = [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} \rho_{+,\alpha}(\,\cdot\,,x_0) [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} \label{1.9} \end{equation} satisfies the analogs of \eqref{A.42a}, \eqref{A.42b}.
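We note in passing (a simple observation, included here for orientation) that the normalization in \eqref{1.7} forces
\[
m_{+,\alpha}^{Do}(i,x_0) \,=\, [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2}\, \big(i \operatorname{Im}(m_{+,\alpha}(i,x_0))\big)\, [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} \,=\, i\, I_{{\mathcal H}}\,,
\]
since $\operatorname{Im}(m_{+,\alpha}(i,x_0))$ commutes with its own inverse square roots; that is, $m_{+,\alpha}^{Do}(\, \cdot \,,x_0)$ is normalized at the point $z=i$.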
In addition, $\psi_{+,\alpha}(\, \cdot \,,x,x_0)$ is the right half-line Weyl--Titchmarsh solution \eqref{3.58A}, and $m_{+,\alpha}(\, \cdot \,,x_0)$ represents the standard ${\mathcal B}({\mathcal H})$-valued right half-line Weyl--Titchmarsh $m$-function in \eqref{3.58A} with ${\mathcal B}({\mathcal H})$-valued measure $\rho_{+,\alpha}(\, \cdot \,,x_0)$ in its Nevanlinna--Herglotz representation \eqref{2.25}--\eqref{2.26}. This result shows that the entire spectral information for $H_{+,\alpha}$ is also contained in the ${\mathcal B}({\mathcal H})$-valued measure $\omega_{+,\alpha}^{Do}(\, \cdot \,,x_0)$ (again, including multiplicity properties of the spectrum of $H_{+,\alpha}$). Naturally, the same facts apply to the left half-line $(- \infty,x_0)$. Turning to the full-line case assuming Hypothesis \ref{h2.8}, and denoting by $H$ the self-adjoint realization of $\tau$ in $L^2({\mathbb{R}}; dx; {\mathcal H})$, we now decompose \begin{equation} L^2({\mathbb{R}}; dx; {\mathcal H}) = L^2((-\infty,x_0); dx; {\mathcal H}) \oplus L^2((x_0, \infty); dx; {\mathcal H}), \label{1.10} \end{equation} and introduce the orthogonal projections $P_{\pm,x_0}$ of $L^2({\mathbb{R}}; dx; {\mathcal H})$ onto the left/right subspaces $L^2((x_0,\pm\infty); dx; {\mathcal H})$.
Thus, we introduce the $2 \times 2$ block operator representation, \begin{equation} (H - z I)^{-1} = \begin{pmatrix} P_{-,x_0} (H - zI)^{-1} P_{-,x_0} & P_{-,x_0} (H - z I)^{-1} P_{+,x_0} \\ P_{+,x_0} (H - z I)^{-1} P_{-,x_0} & P_{+,x_0} (H - z I)^{-1} P_{+,x_0} \end{pmatrix}, \label{1.11} \end{equation} and, with respect to the decomposition \eqref{1.10}, define the minimal operator $H_{\min}$ in $L^2({\mathbb{R}}; dx; {\mathcal H})$ via \begin{align} H_{\min} &:= H_{-,\min} \oplus H_{+,\min}, \quad H_{\min}^* = H_{-,\min}^* \oplus H_{+,\min}^*, \label{1.12} \\ {\mathcal N}_z &\, = \ker\big(H_{\min}^* - z I \big) = \ker\big(H_{-,\min}^* - z I \big) \oplus \ker\big(H_{+,\min}^* - z I \big) \nonumber \\ &\, = {\mathcal N}_{-,z} \oplus {\mathcal N}_{+,z}, \quad z \in {\mathbb{C}} \backslash {\mathbb{R}}, \label{1.13} \end{align} (see the additional comments concerning our choice of minimal operator in Section \ref{s6}, following \eqref{6.23a}). According to \eqref{1.3}, the full-line Donoghue-type $m$-function is given by \begin{align} \begin{split} M_{H, {\mathcal N}_i}^{Do} (z) &= P_{{\mathcal N}_i} (z H + I) (H - z I)^{-1} P_{{\mathcal N}_i} \big|_{{\mathcal N}_i}, \label{1.14} \\ &= \int_{\mathbb{R}} d\Omega_{H,{\mathcal N}_i}^{Do}(\lambda) \bigg[\frac{1}{\lambda-z} - \frac{\lambda}{\lambda^2 + 1}\bigg], \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}, \end{split} \end{align} where $\Omega_{H,{\mathcal N}_i}^{Do}(\cdot)$ satisfies the analogs of \eqref{5.6}--\eqref{5.8} (resp., \eqref{A.42a}--\eqref{A.42b}).
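In the scalar special case $\dim({\mathcal H})=1$ the structure of the off-diagonal blocks in \eqref{1.11} is particularly transparent (we recall this standard fact purely for orientation; conventions for normalizing the Wronskian vary): the resolvent of $H$ is the integral operator with kernel
\[
G(z,x,x')\,=\,\frac{\psi_-(z,x_<)\,\psi_+(z,x_>)}{W\big(\psi_+(z,\cdot),\psi_-(z,\cdot)\big)}\,, \qquad x_< = \min(x,x'),\; x_> = \max(x,x')\,,
\]
with $\psi_\pm$ the Weyl solutions decaying near $\pm\infty$, so that for $x<x_0<x'$ the kernel factorizes into a product of a left half-line and a right half-line Weyl solution.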
Combining Corollary \ref{c5.8} with the complete non-self-adjointness of $H_{\min}$ proves that the entire spectral information for $H$, contained in the corresponding family of spectral projections $\{E_{H}(\lambda)\}_{\lambda \in {\mathbb{R}}}$ in $L^2({\mathbb{R}}; dx; {\mathcal H})$, is already encoded in the ${\mathcal B}({\mathcal N}_i)$-valued measure $\Omega_{H,{\mathcal N}_i}^{Do}(\cdot)$ (including multiplicity properties of the spectrum of $H$). With respect to the decomposition \eqref{1.10}, one can represent $M_{H, {\mathcal N}_i}^{Do} (\cdot)$ as the $2 \times 2$ block operator, \begin{align} & M_{H, {\mathcal N}_i}^{Do} (z) = \big(M_{H, {\mathcal N}_i,\ell,\ell'}^{Do} (z)\big)_{0 \leq \ell, \ell' \leq 1} \nonumber \\ & \quad = z \left(\begin{smallmatrix} P_{{\mathcal N}_{-,i}} & 0 \\ 0 & P_{{\mathcal N}_{+,i}} \end{smallmatrix}\right) \label{1.15} \\ & \qquad + (z^2 + 1) \left(\begin{smallmatrix} P_{{\mathcal N}_{-,i}} P_{-,x_0} (H - zI)^{-1} P_{-,x_0} P_{{\mathcal N}_{-,i}} & \;\;\; P_{{\mathcal N}_{-,i}} P_{-,x_0} (H - zI)^{-1} P_{+,x_0} P_{{\mathcal N}_{+,i}} \\ P_{{\mathcal N}_{+,i}} P_{+,x_0} (H - zI)^{-1} P_{-,x_0} P_{{\mathcal N}_{-,i}} & \;\;\; P_{{\mathcal N}_{+,i}} P_{+,x_0} (H - zI)^{-1} P_{+,x_0} P_{{\mathcal N}_{+,i}} \end{smallmatrix}\right), \nonumber \end{align} and utilizing the fact that \begin{align} \begin{split} &\big\{\widehat \Psi_{-,\alpha,j}(z,\, \cdot \, , x_0) = P_{-,x_0} \psi_{-,\alpha}(z, \, \cdot \, ,x_0)\big[- \operatorname{Im}(z)^{-1} \operatorname{Im}(m_{-,\alpha}(z,x_0))\big]^{-1/2} e_j, \\ & \;\;\, \widehat \Psi_{+,\alpha,j}(z,\, \cdot \, , x_0) = P_{+,x_0} \psi_{+,\alpha}(z, \, \cdot \, ,x_0)\big[\operatorname{Im}(z)^{-1} \operatorname{Im}(m_{+,\alpha}(z,x_0))\big]^{-1/2} e_j\big\}_{j \in {\mathcal J}} \label{1.16} \end{split} \end{align} is an orthonormal basis for ${\mathcal N}_{z} = \ker\big(H_{\min}^* - z I
\big)$, $z \in {\mathbb{C}} \backslash {\mathbb{R}}$, with $\{e_j\}_{j \in {\mathcal J}}$ an orthonormal basis for ${\mathcal H}$, one eventually computes explicitly, \begin{align} & M_{H, {\mathcal N}_i,0,0}^{Do} (z) = \sum_{j,k \in {\mathcal J}} (e_j, M_{\alpha,0,0}^{Do}(z,x_0) e_k)_{{\mathcal H}} \nonumber \\ & \hspace*{3.1cm} \times \big(\widehat \Psi_{-,\alpha,k}(i,\, \cdot \,,x_0), \, \cdot \, \big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \widehat \Psi_{-,\alpha,j}(i,\, \cdot \,,x_0), \label{1.17} \\ & M_{H, {\mathcal N}_i,0,1}^{Do} (z) = \sum_{j,k \in {\mathcal J}} (e_j, M_{\alpha,0,1}^{Do}(z,x_0) e_k)_{{\mathcal H}} \nonumber \\ & \hspace*{3.1cm} \times \big(\widehat \Psi_{+,\alpha,k}(i,\, \cdot \,,x_0), \, \cdot \, \big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \widehat \Psi_{-,\alpha,j}(i,\, \cdot \,,x_0), \label{1.18} \\ & M_{H, {\mathcal N}_i,1,0}^{Do} (z) = \sum_{j,k \in {\mathcal J}} (e_j, M_{\alpha,1,0}^{Do}(z,x_0) e_k)_{{\mathcal H}} \nonumber \\ & \hspace*{3.1cm} \times \big(\widehat \Psi_{-,\alpha,k}(i,\, \cdot \,,x_0), \, \cdot \, \big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \widehat \Psi_{+,\alpha,j}(i,\, \cdot \,,x_0), \label{1.19} \\ & M_{H, {\mathcal N}_i,1,1}^{Do} (z) = \sum_{j,k \in {\mathcal J}} (e_j, M_{\alpha,1,1}^{Do}(z,x_0) e_k)_{{\mathcal H}} \nonumber \\ & \hspace*{3.1cm} \times \big(\widehat \Psi_{+,\alpha,k}(i,\, \cdot \,,x_0), \, \cdot \, \big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \widehat \Psi_{+,\alpha,j}(i,\, \cdot \,,x_0), \label{1.20} \\ & \hspace*{8.95cm} z\in{\mathbb{C}}\backslash{\mathbb{R}}, \nonumber \end{align} with $M_{\alpha}^{Do}(\, \cdot \,,x_0)$ given by \begin{align} \begin{split} M_{\alpha}^{Do} (z,x_0) &= T_{\alpha}^* M_{\alpha}(z,x_0) T_{\alpha} + E_{\alpha} \label{1.21} \\ &= D_{\alpha} + \int_{\mathbb{R}} d\Omega_{\alpha}^{Do}(\lambda,x_0)
\bigg[\frac{1}{\lambda-z} - \frac{\lambda}{\lambda^2 + 1}\bigg], \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}. \end{split} \end{align} Here $D_{\alpha} = \operatorname{Re}(M_{\alpha}^{Do}(i,x_0)) \in {\mathcal B}\big({\mathcal H}^2\big)$, and \begin{equation} \Omega_{\alpha}^{Do}(\, \cdot \,,x_0) = T_{\alpha}^* \Omega_{\alpha}(\,\cdot\,,x_0) T_{\alpha} \label{1.23} \end{equation} satisfies the analogs of \eqref{A.42a}, \eqref{A.42b}. In addition, the $2 \times 2$ block operators $T_{\alpha} \in {\mathcal B}\big({\mathcal H}^2\big)$ \big(with $T_{\alpha}^{-1} \in {\mathcal B}\big({\mathcal H}^2\big)$\big) and $E_{\alpha} \in {\mathcal B}\big({\mathcal H}^2\big)$ are defined in \eqref{6.46} and \eqref{6.47}, and $M_{\alpha}(\, \cdot \,,x_0)$ is the standard ${\mathcal B}\big({\mathcal H}^2\big)$-valued $2 \times 2$ block operator Weyl--Titchmarsh function \eqref{2.71}--\eqref{2.71a} with $\Omega_{\alpha}(\, \cdot \,,x_0)$ the ${\mathcal B}\big({\mathcal H}^2\big)$-valued measure in its Nevanlinna--Herglotz representation \eqref{2.71b}--\eqref{2.71d}. This result shows that the entire spectral information for $H$ is also contained in the ${\mathcal B}\big({\mathcal H}^2\big)$-valued measure $\Omega_{\alpha}^{Do}(\, \cdot \,,x_0)$ (again, including multiplicity properties of the spectrum of $H$). \begin{remark} \label{r1.1} As the first equality in \eqref{1.21} shows, $M_{\alpha}^{Do} (z,x_0)$ recovers the traditional Weyl--Titchmarsh operator $M_{\alpha}(z,x_0)$ apart from the boundedly invertible $2 \times 2$ block operators $T_{\alpha}$.
The latter is built from the half-line Weyl--Titchmarsh operators $m_{\pm,\alpha}(z,x_0)$ in a familiar, yet somewhat intriguing, manner (cf.\ \eqref{2.71}--\eqref{2.71a}), \begin{align} \begin{split} & M_{\alpha}(z,x_0) \\ & \quad = \left(\begin{smallmatrix} W(z)^{-1} & \;\;\; 2^{-1} W(z)^{-1} [m_{-,\alpha}(z,x_0) + m_{+,\alpha}(z,x_0)] \\ 2^{-1} [m_{-,\alpha}(z,x_0) + m_{+,\alpha}(z,x_0)] W(z)^{-1} & \;\;\; m_{\pm,\alpha}(z,x_0) W(z)^{-1} m_{\mp,\alpha}(z,x_0) \end{smallmatrix}\right), \label{1.24} \\ & \hspace*{9.35cm} z \in {\mathbb{C}} \backslash \sigma(H), \end{split} \end{align} abbreviating $W(z) = [m_{-,\alpha}(z,x_0) - m_{+,\alpha}(z,x_0)]$, $z \in {\mathbb{C}} \backslash \sigma(H)$. In contrast to this construction, combining the Donoghue $m$-function $M_{H, {\mathcal N}_i}^{Do}(\cdot)$ with the left/right half-line decomposition \eqref{1.10}, via equation \eqref{1.15}, directly leads to \eqref{1.17}--\eqref{1.20}, and hence to \eqref{1.21}, and thus to the ${\mathcal B}\big({\mathcal H}^2\big)$-valued measure $\Omega_{\alpha}^{Do}(\, \cdot \,,x_0)$ in the Nevanlinna--Herglotz representation of $M_{\alpha}^{Do} (\, \cdot \,,x_0)$, encoding the entire spectral information of $H$ contained in its family of spectral projections $E_H(\cdot)$.
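As an elementary illustration of \eqref{1.24} (in the scalar case ${\mathcal H}={\mathbb{C}}$ with $V=0$ and $\alpha=0$, that is, a Dirichlet reference boundary condition at $x_0$; the signs below depend on the branch choice $\operatorname{Im}(\sqrt{z})>0$ and on the sign convention for the left half-line $m$-function), one has $m_{\pm,0}(z,x_0)=\pm i\sqrt{z}$ and hence
\[
W(z)\,=\,m_{-,0}(z,x_0)-m_{+,0}(z,x_0)\,=\,-2i\sqrt{z}\,, \qquad
W(z)^{-1}\,=\,\frac{i}{2\sqrt{z}}\,,
\]
which is precisely the diagonal value $G_0(z,x_0,x_0)$ of the free resolvent kernel $G_0(z,x,x')=\frac{i}{2\sqrt{z}}\,e^{i\sqrt{z}\,|x-x'|}$.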
Of course, $\Omega_{\alpha}^{Do}(\, \cdot \,,x_0)$ is directly related to the ${\mathcal B}\big({\mathcal H}^2\big)$-valued Weyl--Titchmarsh measure $\Omega_{\alpha}(\, \cdot \,,x_0)$ in the Nevanlinna--Herglotz representation of $M_{\alpha}(\, \cdot \,,x_0)$ via relation \eqref{1.23}, but our point is that the simple left/right half-line decomposition \eqref{1.10} combined with the Donoghue-type $m$-function \eqref{1.14} naturally leads to $\Omega_{\alpha}^{Do}(\, \cdot \,,x_0)$, without employing \eqref{1.24}. This offers interesting possibilities in the PDE context where ${\mathbb{R}}^n$, $n \in {\mathbb{N}}$, $n \geq 2$, can now be decomposed in various manners, for instance, into the interior and exterior of a given (bounded or unbounded) domain $D \subset {\mathbb{R}}^n$, a left/right (upper/lower) half-space, etc. In this context we should add that this paper concludes the first part of our program, the treatment of half-line and full-line Schr\"odinger operators with bounded operator-valued potentials. Part two will aim at certain classes of unbounded operator-valued potentials $V$, applicable to multi-dimensional Schr\"odinger operators in $L^2({\mathbb{R}}^n; d^n x)$, $n \in {\mathbb{N}}$, $n \geq 2$, generated by differential expressions of the type $- \Delta + V(\cdot)$. In fact, it was precisely the connection between multi-dimensional Schr\"odinger operators and one-dimensional Schr\"odinger operators with unbounded operator-valued potentials which originally motivated our interest in this program. We will return to this circle of ideas elsewhere. $\diamond$ \end{remark} At this point we turn to the content of each section: Section \ref{s2} recalls our basic results in \cite{GWZ13} on the initial value problem associated with Schr\"odinger operators with bounded operator-valued potentials.
We use this section to introduce some of the basic notation employed subsequently and note that our conditions on $V(\cdot)$ (cf.\ Hypothesis \ref{h2.7}) are the most general to date with respect to the local behavior of the potential $V(\cdot)$. Following our detailed treatment in \cite{GWZ13}, Section \ref{s3} introduces maximal and minimal operators associated with the differential expression $\tau = - (d^2/dx^2) I_{{\mathcal H}} + V(\cdot)$ on the interval $(a,b) \subset {\mathbb{R}}$ (eventually aiming at the case of a half-line $(a,\infty)$), and, assuming that the left endpoint $a$ is regular for $\tau$ and that $\tau$ is in the limit-point case at the endpoint $b$, we discuss the family of self-adjoint extensions $H_{\alpha}$ in $L^2((a,b); dx; {\mathcal H})$ corresponding to boundary conditions of the type \begin{equation} \sin(\alpha)u'(a) + \cos(\alpha)u(a)=0, \label{1.26} \end{equation} indexed by the self-adjoint operator $\alpha = \alpha^* \in {\mathcal B}({\mathcal H})$. In addition, we recall elements of Weyl--Titchmarsh theory, the introduction of the operator-valued Weyl--Titchmarsh function $m_{\alpha}(\cdot) \in {\mathcal B}({\mathcal H})$ and the Green's function $G_{\alpha}(z,\, \cdot \,, \,\cdot \,) \in {\mathcal B}({\mathcal H})$ of $H_{\alpha}$. In particular, we prove bounded invertibility of $\operatorname{Im}(m_{\alpha}(\cdot))$ in ${\mathcal B}({\mathcal H})$ in Theorem \ref{t3.3}.
In Section \ref{s4} we recall the analogous results for full-line Schr\"odinger operators $H$ in $L^2({\mathbb{R}}; dx; {\mathcal H})$, employing a $2 \times 2$ block operator representation of the associated Weyl--Titchmarsh $M_{\alpha} (\, \cdot \,,x_0)$-matrix and its ${\mathcal B}\big({\mathcal H}^2\big)$-valued spectral measure $d\Omega_{\alpha}(\, \cdot \,,x_0)$, decomposing ${\mathbb{R}}$ into a left and right half-line with respect to the reference point $x_0 \in {\mathbb{R}}$, $(-\infty, x_0] \cup [x_0, \infty)$. Various basic facts on deficiency subspaces, abstract Donoghue-type $m$-functions and the bounded invertibility of their imaginary parts, and the notion of completely non-self-adjoint symmetric operators are provided in Section \ref{s5}. This section also discusses the possibility of a reduction of the spectral family $E_A(\cdot)$ of the self-adjoint operator $A$ in ${\mathcal H}$ to the measure $\Sigma_A(\cdot) = P_{{\mathcal N}} E_A(\cdot)P_{{\mathcal N}}\big|_{{\mathcal N}}$ in ${\mathcal N}$ (with $P_{{\mathcal N}}$ the orthogonal projection onto a closed linear subspace ${\mathcal N}$ of ${\mathcal H}$) to the effect that $A$ is unitarily equivalent to the operator of multiplication by the independent variable $\lambda$ in the space $L^2({\mathbb{R}}; d\Sigma_A(\lambda);{\mathcal N})$, yielding a diagonalization of $A$ (see Theorem \ref{t5.6}). Our final and principal Section \ref{s6} establishes complete non-self-adjointness of the minimal operators $H_{\pm,\min}$ in $L^2((x_0, \pm \infty); dx;{\mathcal H})$ (cf.\ Theorem \ref{t6.2}), and analyzes in detail the half-line Donoghue-type $m$-functions $M_{H_{\pm,\alpha}, {\mathcal N}_{\pm,i}}^{Do} (\, \cdot \,,x_0)$ in ${\mathcal N}_{\pm,i}$.
In addition, it introduces the derived quantities $m_{\pm,\alpha}^{Do}(\, \cdot \,,x_0)$ in ${\mathcal H}$ and subsequently turns to the full-line Donoghue-type operators $M_{H, {\mathcal N}_i}^{Do} (\cdot)$ in ${\mathcal N}_i$ and $M_{\alpha}^{Do} (\,\cdot\,,x_0)$ in ${\mathcal H}^2$. It is then proved that the entire spectral information for $H_{\pm,\alpha}$ and $H$ (including multiplicity issues) is encoded in $M_{H_{\pm,\alpha}, {\mathcal N}_{\pm,i}}^{Do} (\,\cdot \,,x_0)$ (equivalently, in $m_{\pm,\alpha}^{Do}(\,\cdot\,,x_0)$) and in $M_{H, {\mathcal N}_i}^{Do} (\cdot)$ (equivalently, in $M_{\alpha}^{Do} (\, \cdot \,,x_0)$), respectively. Appendix \ref{sA} collects basic facts on operator-valued Nevanlinna--Herglotz functions. We introduced the background material in Sections \ref{s2}--\ref{s4} to make this paper reasonably self-contained. Finally, we briefly comment on the notation used in this paper: Throughout, ${\mathcal H}$ denotes a separable, complex Hilbert space with inner product and norm denoted by $(\, \cdot \,, \, \cdot \,)_{{\mathcal H}}$ (linear in the second argument) and $\|\cdot \|_{{\mathcal H}}$, respectively. The identity operator in ${\mathcal H}$ is written as $I_{{\mathcal H}}$. We denote by ${\mathcal B}({\mathcal H})$ (resp., ${\mathcal B}_{\infty}({\mathcal H})$) the Banach space of linear bounded (resp., compact) operators in ${\mathcal H}$. The domain, range, kernel (null space), resolvent set, and spectrum of a linear operator will be denoted by $\operatorname{dom}(\cdot)$, $\operatorname{ran}(\cdot)$, $\ker(\cdot)$, $\rho(\cdot)$, and $\sigma(\cdot)$, respectively. The closure of a closable operator $S$ in ${\mathcal H}$ is denoted by $\overline S$. By $\mathfrak{B}({\mathbb{R}})$ we denote the collection of Borel subsets of ${\mathbb{R}}$.
\section{Basics on Initial Value Problems for Schr\"odinger Operators with Operator-Valued Potentials} \label{s2} In this section we recall the basic results on initial value problems for second-order differential equations of the form $-y''+Qy=f$ on an arbitrary open interval $(a,b) \subseteq {\mathbb{R}}$ with a bounded operator-valued coefficient $Q$, that is, when $Q(x)$ is a bounded operator on a separable, complex Hilbert space ${\mathcal H}$ for a.e.\ $x\in(a,b)$. We are concerned with two types of situations: in the first one, $f(x)$ is an element of the Hilbert space ${\mathcal H}$ for a.e.\ $x\in (a,b)$, and the solution sought takes values in ${\mathcal H}$. In the second situation, $f(x)$ is a bounded operator on ${\mathcal H}$ for a.e.\ $x\in(a,b)$, as is the proposed solution $y$. All results recalled in this section were proved in detail in \cite{GWZ13}. We start with some necessary preliminaries: Let $(a,b) \subseteq {\mathbb{R}}$ be a finite or infinite interval and ${\mathcal X}$ a Banach space. Unless explicitly stated otherwise (such as in the context of operator-valued measures in Nevanlinna--Herglotz representations, cf.\ Appendix \ref{sA}), integration of ${\mathcal X}$-valued functions on $(a,b)$ will always be understood in the sense of Bochner (cf., e.g., \cite[p.\ 6--21]{ABHN01}, \cite[p.\ 44--50]{DU77}, \cite[p.\ 71--86]{HP85}, \cite[Ch.\ III]{Mi78}, \cite[Sect.\ V.5]{Yo80} for details). In particular, if $p\ge 1$, the symbol $L^p((a,b);dx;{\mathcal X})$ denotes the set of equivalence classes of strongly measurable ${\mathcal X}$-valued functions which differ at most on sets of Lebesgue measure zero, such that $\|f(\cdot)\|_{{\mathcal X}}^p \in L^1((a,b);dx)$. The corresponding norm in $L^p((a,b);dx;{\mathcal X})$ is given by $\|f\|_{L^p((a,b);dx;{\mathcal X})} = \big(\int_{(a,b)} dx\, \|f(x)\|_{{\mathcal X}}^p \big)^{1/p}$, rendering $L^p((a,b);dx;{\mathcal X})$ a Banach space.
If ${\mathcal H}$ is a separable Hilbert space, then so is $L^2((a,b);dx;{\mathcal H})$ (see, e.g., \cite[Subsects.\ 4.3.1, 4.3.2]{BW83}, \cite[Sect.\ 7.1]{BS87}). One recalls that by a result of Pettis \cite{Pe38}, if ${\mathcal X}$ is separable, weak measurability of ${\mathcal X}$-valued functions implies their strong measurability. Sobolev spaces $W^{n,p}((a,b); dx; {\mathcal X})$ for $n\in{\mathbb{N}}$ and $p\geq 1$ are defined as follows: $W^{1,p}((a,b);dx;{\mathcal X})$ is the set of all $f\in L^p((a,b);dx;{\mathcal X})$ such that there exist a $g\in L^p((a,b);dx;{\mathcal X})$ and an $x_0\in(a,b)$ such that \begin{equation} f(x)=f(x_0)+\int_{x_0}^x dx' \, g(x') \, \text{ for a.e.\ $x \in (a,b)$.} \end{equation} In this case $g$ is the strong derivative of $f$, $g=f'$. Similarly, $W^{n,p}((a,b);dx;{\mathcal X})$ is the set of all $f\in L^p((a,b);dx;{\mathcal X})$ such that the first $n$ strong derivatives of $f$ are in $L^p((a,b);dx;{\mathcal X})$. For simplicity of notation one also introduces $W^{0,p}((a,b);dx;{\mathcal X})=L^p((a,b);dx;{\mathcal X})$. Finally, $W^{n,p}_{\rm loc}((a,b);dx;{\mathcal X})$ is the set of ${\mathcal X}$-valued functions defined on $(a,b)$ whose restrictions to any compact interval $[\alpha,\beta]\subset(a,b)$ are in $W^{n,p}((\alpha,\beta);dx;{\mathcal X})$. In particular, this applies to the case $n=0$ and thus defines $L^p_{\rm loc}((a,b);dx;{\mathcal X})$. If $a$ is finite we may allow $[\alpha,\beta]$ to be a subset of $[a,b)$ and denote the resulting space by $W^{n,p}_{\rm loc}([a,b);dx;{\mathcal X})$ (and again this applies to the case $n=0$).
Following a frequent practice (cf., e.g., the discussion in \cite[Sect.\ III.1.2]{Am95}), we will call elements of $W^{1,1} ([c,d];dx;{\mathcal X})$, $[c,d] \subset (a,b)$ (resp., $W^{1,1}_{\rm loc}((a,b);dx;{\mathcal X})$), strongly absolutely continuous ${\mathcal X}$-valued functions on $[c,d]$ (resp., strongly locally absolutely continuous ${\mathcal X}$-valued functions on $(a,b)$), but caution the reader that unless ${\mathcal X}$ possesses the Radon--Nikodym (RN) property, this notion differs from the classical definition of ${\mathcal X}$-valued absolutely continuous functions (we refer the interested reader to \cite[Sect.\ VII.6]{DU77} for an extensive list of conditions equivalent to ${\mathcal X}$ having the RN property). Here we just mention that reflexivity of ${\mathcal X}$ implies the RN property. In the special case ${\mathcal X} = {\mathbb{C}}$, we omit ${\mathcal X}$ and just write $L^p_{(\text{\rm{loc}})}((a,b);dx)$, as usual. We emphasize that a strongly continuous operator-valued function $F(x)$, $x \in (a,b)$, always means continuity of $F(\cdot) h$ in ${\mathcal H}$ for all $h \in{\mathcal H}$ (i.e., pointwise continuity of $F(\cdot)$ in ${\mathcal H}$). The same pointwise conventions will apply to the notions of strongly differentiable and strongly measurable operator-valued functions throughout this manuscript. In particular, and unless explicitly stated otherwise, for operator-valued functions $Y$, the symbol $Y'$ will be understood in the strong sense; similarly, $y'$ will denote the strong derivative for vector-valued functions $y$. \begin{definition} \label{d2.2} Let $(a,b)\subseteq{\mathbb{R}}$ be a finite or infinite interval and $Q:(a,b)\to{\mathcal B}({\mathcal H})$ a weakly measurable operator-valued function with $\|Q(\cdot)\|_{{\mathcal B}({\mathcal H})}\in L^1_\text{\rm{loc}}((a,b);dx)$, and suppose that $f\in L^1_{\text{\rm{loc}}}((a,b);dx;{\mathcal H})$.
Then the ${\mathcal H}$-valued function $y: (a,b)\to {\mathcal H}$ is called a (strong) solution of \begin{equation} - y'' + Q y = f \label{2.15A} \end{equation} if $y \in W^{2,1}_\text{\rm{loc}}((a,b);dx;{\mathcal H})$ and \eqref{2.15A} holds a.e.\ on $(a,b)$. \end{definition} One verifies that $Q:(a,b)\to{\mathcal B}({\mathcal H})$ satisfies the conditions in Definition \ref{d2.2} if and only if $Q^*$ does (a fact that will play a role later on, cf.\ the paragraph following \eqref{2.33A}). \begin{theorem} \label{t2.3} Let $(a,b)\subseteq{\mathbb{R}}$ be a finite or infinite interval and $V:(a,b)\to{\mathcal B}({\mathcal H})$ a weakly measurable operator-valued function with $\|V(\cdot)\|_{{\mathcal B}({\mathcal H})}\in L^1_\text{\rm{loc}}((a,b);dx)$. Suppose that $x_0\in(a,b)$, $z\in{\mathbb{C}}$, $h_0,h_1\in{\mathcal H}$, and $f\in L^1_{\text{\rm{loc}}}((a,b);dx;{\mathcal H})$. Then there is a unique ${\mathcal H}$-valued solution $y(z,\, \cdot \,,x_0)\in W^{2,1}_\text{\rm{loc}}((a,b);dx;{\mathcal H})$ of the initial value problem \begin{equation} \begin{cases} - y'' + (V - z) y = f \, \text{ on } \, (a,b)\backslash E, \\ \, y(x_0) = h_0, \; y'(x_0) = h_1, \end{cases} \label{2.1} \end{equation} where the exceptional set $E$ is of Lebesgue measure zero and depends only on the representatives chosen for $V$ and $f$ but is independent of $z$.
Moreover, the following properties hold: \begin{enumerate}[$(i)$] \item For fixed $x_0,x\in(a,b)$ and $z\in{\mathbb{C}}$, $y(z,x,x_0)$ depends jointly continuously on $h_0,h_1\in{\mathcal H}$ and $f\in L^1_{\text{\rm{loc}}}((a,b);dx;{\mathcal H})$ in the sense that \begin{align} \begin{split} & \big\|y\big(z,x,x_0;h_0,h_1,f\big) - y\big(z,x,x_0;\widetilde h_0,\widetilde h_1,\widetilde f\big)\big\|_{{\mathcal H}} \\ & \quad \leq C(z,V) \big[\big\|h_0 - \widetilde h_0\big\|_{{\mathcal H}} + \big\|h_1 - \widetilde h_1\big\|_{{\mathcal H}} + \big\|f - \widetilde f\big\|_{L^1([x_0,x];dx;{\mathcal H})}\big], \label{2.1A} \end{split} \end{align} where $C(z,V)>0$ is a constant, and the dependence of $y$ on the initial data $h_0, h_1$ and the inhomogeneity $f$ is displayed in \eqref{2.1A}. \item For fixed $x_0\in(a,b)$ and $z\in{\mathbb{C}}$, $y(z,x,x_0)$ is strongly continuously differentiable with respect to $x$ on $(a,b)$. \item For fixed $x_0\in(a,b)$ and $z\in{\mathbb{C}}$, $y'(z,x,x_0)$ is strongly differentiable with respect to $x$ on $(a,b)\backslash E$. \item For fixed $x_0,x \in (a,b)$, $y(z,x,x_0)$ and $y'(z,x,x_0)$ are entire with respect to $z$. \end{enumerate} \end{theorem} For classical references on initial value problems we refer, for instance, to \cite[Chs.\ III, VII]{DK74} and \cite[Ch.\ 10]{Di60}, but we emphasize again that our approach minimizes the smoothness hypotheses on $V$ and $f$. \begin{definition} \label{d2.4} Let $(a,b)\subseteq{\mathbb{R}}$ be a finite or infinite interval and assume that $F,\,Q:(a,b)\to{\mathcal B}({\mathcal H})$ are two weakly measurable operator-valued functions such that $\|F(\cdot)\|_{{\mathcal B}({\mathcal H})},\,\|Q(\cdot)\|_{{\mathcal B}({\mathcal H})}\in L^1_\text{\rm{loc}}((a,b);dx)$.
Then the ${\mathcal B}({\mathcal H})$-valued function $Y:(a,b)\to{\mathcal B}({\mathcal H})$ is called a solution of \begin{equation} - Y'' + Q Y = F \label{2.26A} \end{equation} if $Y(\cdot)h\in W^{2,1}_\text{\rm{loc}}((a,b);dx;{\mathcal H})$ for every $h\in{\mathcal H}$ and $-Y''h+QYh=Fh$ holds a.e.\ on $(a,b)$. \end{definition} \begin{corollary} \label{c2.5} Let $(a,b)\subseteq{\mathbb{R}}$ be a finite or infinite interval, $x_0\in(a,b)$, $z\in{\mathbb{C}}$, $Y_0,\,Y_1\in{\mathcal B}({\mathcal H})$, and suppose $F,\,V:(a,b)\to{\mathcal B}({\mathcal H})$ are two weakly measurable operator-valued functions with $\|V(\cdot)\|_{{\mathcal B}({\mathcal H})},\,\|F(\cdot)\|_{{\mathcal B}({\mathcal H})}\in L^1_\text{\rm{loc}}((a,b);dx)$. Then there is a unique ${\mathcal B}({\mathcal H})$-valued solution $Y(z,\, \cdot \,,x_0):(a,b)\to{\mathcal B}({\mathcal H})$ of the initial value problem \begin{equation} \begin{cases} - Y'' + (V - z)Y = F \, \text{ on } \, (a,b)\backslash E, \\ \, Y(x_0) = Y_0, \; Y'(x_0) = Y_1, \end{cases} \label{2.3} \end{equation} where the exceptional set $E$ is of Lebesgue measure zero and depends only on the representatives chosen for $V$ and $F$ but is independent of $z$. Moreover, the following properties hold: \begin{enumerate}[$(i)$] \item For fixed $x_0 \in (a,b)$ and $z \in {\mathbb{C}}$, $Y(z,x,x_0)$ is continuously differentiable with respect to $x$ on $(a,b)$ in the ${\mathcal B}({\mathcal H})$-norm. \item For fixed $x_0 \in (a,b)$ and $z \in {\mathbb{C}}$, $Y'(z,x,x_0)$ is strongly differentiable with respect to $x$ on $(a,b)\backslash E$. \item For fixed $x_0, x \in (a,b)$, $Y(z,x,x_0)$ and $Y'(z,x,x_0)$ are entire in $z$ in the ${\mathcal B}({\mathcal H})$-norm.
\end{enumerate} \end{corollary} Various versions of Theorem \ref{t2.3} and Corollary \ref{c2.5} exist in the literature under varying assumptions on $V$ and $f, F$ (cf.\ the discussion in \cite{GWZ13}, which uses the most general hypotheses to date). \begin{definition} \label{d2.6} Pick $c \in (a,b)$. The endpoint $a$ (resp., $b$) of the interval $(a,b)$ is called {\it regular} for the operator-valued differential expression $- (d^2/dx^2) + Q(\cdot)$ if it is finite and if $Q$ is weakly measurable and $\|Q(\cdot)\|_{{\mathcal B}({\mathcal H})}\in L^1([a,c];dx)$ (resp., $\|Q(\cdot)\|_{{\mathcal B}({\mathcal H})}\in L^1([c,b];dx)$) for some $c\in (a,b)$. Similarly, $- (d^2/dx^2) + Q(\cdot)$ is called {\it regular at $a$} (resp., {\it regular at $b$}) if $a$ (resp., $b$) is a regular endpoint for $- (d^2/dx^2) + Q(\cdot)$. \end{definition} We note that if $a$ (resp., $b$) is regular for $- (d^2/dx^2) + Q(x)$, one may allow $x_0$ to be equal to $a$ (resp., $b$) in the existence and uniqueness Theorem \ref{t2.3}. If $f_1, f_2$ are strongly continuously differentiable ${\mathcal H}$-valued functions, we define the Wronskian of $f_1$ and $f_2$ by \begin{equation} W_{*}(f_1,f_2)(x)=(f_1(x),f'_2(x))_{\mathcal H} - (f'_1(x),f_2(x))_{\mathcal H}, \quad x \in (a,b). \label{2.31A} \end{equation} If $f_2$ is an ${\mathcal H}$-valued solution of $-y''+Qy=0$ and $f_1$ is an ${\mathcal H}$-valued solution of $-y''+Q^*y=0$, their Wronskian $W_{*}(f_1,f_2)(x)$ is $x$-independent, that is, \begin{equation} \frac{d}{dx} W_{*}(f_1,f_2)(x) = 0 \, \text{ for a.e.\ $x \in (a,b)$} \label{2.32A} \end{equation} (in fact, by \eqref{2.52A}, the left-hand side of \eqref{2.32A} actually vanishes for all $x \in (a,b)$). We decided to use the symbol $W_{*}(\, \cdot \,,\, \cdot \,)$ in \eqref{2.31A} to indicate its conjugate linear behavior with respect to its first entry.
Similarly, if $F_1,F_2$ are strongly continuously differentiable ${\mathcal B}({\mathcal H})$-valued functions, their Wronskian is defined by \begin{equation} W(F_1,F_2)(x) = F_1(x) F'_2(x) - F'_1(x) F_2(x), \quad x \in (a,b). \label{2.33A} \end{equation} Again, if $F_2$ is a ${\mathcal B}({\mathcal H})$-valued solution of $-Y''+QY = 0$ and $F_1$ is a ${\mathcal B}({\mathcal H})$-valued solution of $-Y'' + Y Q = 0$ (the latter is equivalent to $- {(Y^{*})}^{\prime\prime} + Q^* Y^* = 0$ and hence can be handled in complete analogy via Theorem \ref{t2.3} and Corollary \ref{c2.5}, replacing $Q$ by $Q^*$), their Wronskian will be $x$-independent, \begin{equation} \frac{d}{dx} W(F_1,F_2)(x) = 0 \, \text{ for a.e.\ $x \in (a,b)$.} \end{equation} Our main interest lies in the case where $V(\cdot)=V(\cdot)^* \in {\mathcal B}({\mathcal H})$ is self-adjoint. Thus, we now introduce the following basic assumption: \begin{hypothesis} \label{h2.7} Let $(a,b)\subseteq{\mathbb{R}}$, suppose that $V:(a,b)\to{\mathcal B}({\mathcal H})$ is a weakly measurable operator-valued function with $\|V(\cdot)\|_{{\mathcal B}({\mathcal H})}\in L^1_\text{\rm{loc}}((a,b);dx)$, and assume that $V(x) = V(x)^*$ for a.e.\ $x \in (a,b)$. \end{hypothesis} Moreover, for the remainder of this paper we assume \begin{equation} \alpha = \alpha^* \in {\mathcal B}({\mathcal H}). \label{2.4A} \end{equation} Assuming Hypothesis \ref{h2.7} and \eqref{2.4A}, we introduce the standard fundamental systems of operator-valued solutions of $\tau y=zy$ as follows: Since $\alpha$ is a bounded self-adjoint operator, one may define the self-adjoint operators $A=\sin(\alpha)$ and $B=\cos(\alpha)$ via the spectral theorem.
Given such an operator $\alpha$ and a point $x_0\in(a,b)$ or a regular endpoint for $\tau$, we now define $\theta_\alpha(z,\, \cdot \,, x_0), \, \phi_\alpha(z,\, \cdot \,,x_0)$ as those ${\mathcal B}({\mathcal H})$-valued solutions of $\tau Y=z Y$ (in the sense of Definition \ref{d2.4}) which satisfy the initial conditions \begin{equation} \theta_\alpha(z,x_0,x_0)=\phi'_\alpha(z,x_0,x_0)=\cos(\alpha), \quad -\phi_\alpha(z,x_0,x_0)=\theta'_\alpha(z,x_0,x_0)=\sin(\alpha). \label{2.5} \end{equation} By Corollary \ref{c2.5}\,$(iii)$, for any fixed $x, x_0\in(a,b)$, the functions $\theta_{\alpha}(z,x,x_0)$, $\phi_{\alpha}(z,x,x_0)$, $\theta_{\alpha}(\overline{z},x,x_0)^*$, and $\phi_{\alpha}(\overline{z},x,x_0)^*$, as well as their strong $x$-derivatives, are entire with respect to $z$ in the ${\mathcal B}({\mathcal H})$-norm. Since $\theta_{\alpha}(\bar z,\, \cdot \,,x_0)^*$ and $\phi_{\alpha}(\bar z,\, \cdot \,,x_0)^*$ satisfy the adjoint equation $-Y''+YV=z Y$ and the same initial conditions as $\theta_\alpha$ and $\phi_\alpha$, respectively, one can show the following identities (cf.\ \cite{GWZ13}): \begin{align} \theta_{\alpha}' (\bar z,x,x_0)^*\theta_{\alpha} (z,x,x_0)- \theta_{\alpha} (\bar z,x,x_0)^*\theta_{\alpha}' (z,x,x_0)&=0, \label{2.7f} \\ \phi_{\alpha}' (\bar z,x,x_0)^*\phi_{\alpha} (z,x,x_0)- \phi_{\alpha} (\bar z,x,x_0)^*\phi_{\alpha}' (z,x,x_0)&=0, \label{2.7g} \\ \phi_{\alpha}' (\bar z,x,x_0)^*\theta_{\alpha} (z,x,x_0)- \phi_{\alpha} (\bar z,x,x_0)^*\theta_{\alpha}' (z,x,x_0)&=I_{{\mathcal H}}, \label{2.7h} \\ \theta_{\alpha} (\bar z,x,x_0)^*\phi_{\alpha}' (z,x,x_0) - \theta_{\alpha}' (\bar z,x,x_0)^*\phi_{\alpha} (z,x,x_0)&=I_{{\mathcal H}}, \label{2.7i} \end{align} as well as \begin{align} \phi_{\alpha} (z,x,x_0)\theta_{\alpha} (\bar z,x,x_0)^*- \theta_{\alpha} (z,x,x_0)\phi_{\alpha} (\bar z,x,x_0)^*&=0, \label{2.7j} \\ \phi_{\alpha}' (z,x,x_0)\theta_{\alpha}' (\bar z,x,x_0)^*- \theta_{\alpha}' (z,x,x_0)\phi_{\alpha}' (\bar z,x,x_0)^*&=0, \label{2.7k} \\ \phi_{\alpha}' (z,x,x_0)\theta_{\alpha} (\bar z,x,x_0)^*- \theta_{\alpha}' (z,x,x_0)\phi_{\alpha} (\bar z,x,x_0)^*&=I_{{\mathcal H}}, \label{2.7l} \\ \theta_{\alpha} (z,x,x_0)\phi_{\alpha}' (\bar z,x,x_0)^*- \phi_{\alpha} (z,x,x_0)\theta_{\alpha}' (\bar z,x,x_0)^*&=I_{{\mathcal H}}. \label{2.7m} \end{align} Finally, we recall two versions of Green's formula (resp., Lagrange's identity). \begin{lemma} \label{l2.9} Let $(a,b)\subseteq{\mathbb{R}}$ be a finite or infinite interval and $[x_1,x_2]\subset(a,b)$. \\ $(i)$ Assume that $f,g\in W^{2,1}_{\rm loc}((a,b);dx;{\mathcal H})$. Then \begin{equation} \int_{x_1}^{x_2} dx \, [((\tau f)(x),g(x))_{\mathcal H}-(f(x),(\tau g)(x))_{\mathcal H}] = W_{*}(f,g)(x_2)-W_{*}(f,g)(x_1). \label{2.52A} \end{equation} $(ii)$ Assume that $F,\,G:(a,b)\to{\mathcal B}({\mathcal H})$ are absolutely continuous operator-valued functions such that $F',\,G'$ are again differentiable and that $F''$, $G''$ are weakly measurable. In addition, suppose that $\|F''\|_{{\mathcal B}({\mathcal H})},\, \|G''\|_{{\mathcal B}({\mathcal H})} \in L^1_\text{\rm{loc}}((a,b);dx)$. Then \begin{equation} \int_{x_1}^{x_2} dx \, [(\tau F^*)(x)^*G(x) - F(x) (\tau G)(x)] = W(F,G)(x_2) - W(F,G)(x_1).
\label{2.53A} \end{equation} \end{lemma} \section{Half-Line Weyl--Titchmarsh and Spectral Theory for Schr\"odinger Operators with Operator-Valued Potentials} \label{s3} In this section we recall the basics of Weyl--Titchmarsh and spectral theory for self-adjoint half-line Schr\"odinger operators $H_{\alpha}$ in $L^2((a,b); dx; {\mathcal H})$ associated with the operator-valued differential expression $\tau =-(d^2/dx^2) I_{{\mathcal H}} + V(\cdot)$, assuming regularity of the left endpoint $a$ and the limit-point case at the right endpoint $b$ (see Definition \ref{d3.6}). These results were proved in \cite{GWZ13} and \cite{GWZ13b}, and we refer to these sources for details and an extensive bibliography on this topic. As before, ${\mathcal H}$ denotes a separable Hilbert space and $(a,b)$ a finite or infinite interval. One recalls that $L^2((a,b);dx;{\mathcal H})$ is separable (since ${\mathcal H}$ is) and that \begin{equation} (f,g)_{L^2((a,b);dx;{\mathcal H})} =\int_a^b dx \, (f(x),g(x))_{\mathcal H}, \quad f,g\in L^2((a,b);dx;{\mathcal H}). \end{equation} Assuming Hypothesis \ref{h2.7} throughout this section, we discuss self-adjoint operators in $L^2((a,b);dx;{\mathcal H})$ associated with the operator-valued differential expression $\tau =-(d^2/dx^2) I_{{\mathcal H}} + V(\cdot)$ as suitable restrictions of the {\it maximal} operator $H_{\max}$ in $L^2((a,b);dx;{\mathcal H})$ defined by \begin{align} & H_{\max} f = \tau f, \nonumber \\ & f\in \operatorname{dom}(H_{\max})=\big\{g\in L^2((a,b);dx;{\mathcal H}) \,\big|\, g\in W^{2,1}_{\rm loc}((a,b);dx;{\mathcal H}); \label{3.2} \\ & \hspace*{6.6cm} \tau g\in L^2((a,b);dx;{\mathcal H})\big\}.
\nonumber \end{align} We also introduce the operator $\dot H_{\min}$ in $L^2((a,b);dx;{\mathcal H})$ with \begin{equation} \operatorname{dom}(\dot H_{\min})=\{g\in\operatorname{dom}(H_{\max})\,|\,\operatorname{supp} (g) \, \text{is compact in} \, (a,b)\}, \end{equation} and the {\it minimal} operator $H_{\min}$ in $L^2((a,b);dx;{\mathcal H})$ associated with $\tau$, \begin{equation} H_{\min} = \overline{\dot H_{\min}}. \label{3.4} \end{equation} One obtains \begin{equation} H_{\max} = (\dot H_{\min})^*, \quad H_{\max}^* = \overline{\dot H_{\min}} = H_{\min}. \label{3.13a} \end{equation} Moreover, Green's formula holds, that is, if $u$ and $v$ are in $\operatorname{dom}(H_{\max})$, then \begin{equation}\label{3.17A} (H_{\max}u,v)_{L^2((a,b);dx;{\mathcal H})}-(u,H_{\max}v)_{L^2((a,b);dx;{\mathcal H})} = W_*(u,v)(b) - W_*(u,v)(a). \end{equation} \begin{definition} \label{d3.6} Assume Hypothesis \ref{h2.7}. Then the endpoint $a$ (resp., $b$) is said to be of {\it limit-point-type for $\tau$} if $W_*(u,v)(a)=0$ (resp., $W_*(u,v)(b)=0$) for all $u,v\in\operatorname{dom}(H_{\max})$. \end{definition} Next, we introduce the subspaces \begin{equation} {\mathcal D}_{z}=\{u\in\operatorname{dom}(H_{\max}) \,|\, H_{\max}u=z u\}, \quad z \in {\mathbb{C}}. \end{equation} For $z\in{\mathbb{C}}\backslash{\mathbb{R}}$, ${\mathcal D}_{z}$ represents the deficiency subspace of $H_{\min}$. Von Neumann's theory of extensions of symmetric operators implies that \begin{equation} \label{3.20A} \operatorname{dom}(H_{\max})=\operatorname{dom}(H_{\min}) \dotplus {\mathcal D}_i \dotplus {\mathcal D}_{-i}, \end{equation} where $\dotplus$ indicates the direct (but not necessarily orthogonal) sum in the underlying Hilbert space $L^2((a,b);dx;{\mathcal H})$.
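For orientation, we mention the elementary special case $V=0$ a.e.\ on the half-line $(a,b)=(x_0,\infty)$ (an illustration only, not used in the general development): fixing the branch of $z^{1/2}$ with $\operatorname{Im}\big(z^{1/2}\big)>0$, every solution of $-y''=zy$ lying in $L^2((x_0,\infty);dx;{\mathcal H})$ is of exponential type, and hence
\begin{equation*}
{\mathcal D}_z=\big\{e^{iz^{1/2}(\,\cdot\,-x_0)}h \,\big|\, h\in{\mathcal H}\big\}, \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}.
\end{equation*}
In particular, both deficiency subspaces ${\mathcal D}_{\pm i}$ of $H_{\min}$ are then isomorphic to ${\mathcal H}$.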
For the remainder of this section we now make the following assumptions: \begin{hypothesis} \label{h3.9} In addition to Hypothesis \ref{h2.7}, suppose that $a$ is a regular endpoint for $\tau$ and $b$ is of limit-point-type for $\tau$. \end{hypothesis} Given Hypothesis \ref{h3.9}, it has been shown in \cite{GWZ13} that all self-adjoint restrictions, $H_{\alpha}$, of $H_{\max}$, equivalently, all self-adjoint extensions of $H_{\min}$, are parametrized by $\alpha = \alpha^* \in {\mathcal B}({\mathcal H})$, with domains given by \begin{equation} \operatorname{dom}(H_{\alpha})=\{u\in\operatorname{dom}(H_{\max}) \,|\, \sin(\alpha)u'(a)+\cos(\alpha)u(a)=0\}. \label{3.9} \end{equation} Next, we recall that (normalized) ${\mathcal B}({\mathcal H})$-valued and square integrable solutions of $\tau Y=zY$, denoted by $\psi_{\alpha}(z,\, \cdot \,,a)$, $z\in{\mathbb{C}}\backslash\sigma(H_{\alpha})$, and traditionally called {\it Weyl--Titchmarsh} solutions of $\tau Y = z Y$, as well as the ${\mathcal B}({\mathcal H})$-valued Weyl--Titchmarsh functions $m_{\alpha}(z,a)$, have been constructed in \cite{GWZ13} to the effect that \begin{equation} \psi_{\alpha}(z,x,a)=\theta_{\alpha}(z,x,a) + \phi_{\alpha}(z,x,a) m_{\alpha}(z,a), \quad z \in {\mathbb{C}}\backslash\sigma(H_{\alpha}), \; x \in [a,b).
\label{3.58A} \end{equation} Then $\psi_{\alpha}(\, \cdot \,,x,a)$ is analytic in $z$ on ${\mathbb{C}}\backslash{\mathbb{R}}$ for fixed $x \in [a,b)$, and \begin{equation} \int_a^b dx \, \|\psi_{\alpha}(z,x,a) h\|_{{\mathcal H}}^2 < \infty, \quad h \in {\mathcal H}, \; z \in {\mathbb{C}}\backslash\sigma(H_{\alpha}), \end{equation} in particular, \begin{equation} \psi_{\alpha}(z, \, \cdot \, ,a) h \in L^2((a,b); dx; {\mathcal H}), \quad h \in {\mathcal H}, \; z \in {\mathbb{C}} \backslash \sigma(H_{\alpha}), \end{equation} and \begin{equation} \ker(H_{\max} - z I_{L^2((a,b); dx; {\mathcal H})}) = \{\psi_{\alpha}(z, \, \cdot \, ,a) h \,|\, h \in {\mathcal H}\}, \quad z \in {\mathbb{C}} \backslash {\mathbb{R}}. \end{equation} In addition, $m_{\alpha}(z,a)$ is a ${\mathcal B}({\mathcal H})$-valued Nevanlinna--Herglotz function (cf.\ Definition \ref{dA.4}), and \begin{equation} m_{\alpha}(z,a)=m_{\alpha}(\overline z, a)^*, \quad z \in {\mathbb{C}}\backslash\sigma(H_{\alpha}). \label{3.59A} \end{equation} Given $u \in {\mathcal D}_{z}$, the operator $m_{0}(z,a)$ assigns Neumann boundary data $u'(a)$ to the Dirichlet boundary data $u(a)$, that is, $m_{0}(z,a)$ is the ($z$-dependent) Dirichlet-to-Neumann map.
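For orientation, we record the elementary example $V=0$ a.e.\ on $(a,\infty)$ with $\alpha=0$ (Dirichlet boundary condition at $a$); this illustration is not used in the sequel. Fixing the branch of $z^{1/2}$ with $\operatorname{Im}\big(z^{1/2}\big)>0$, the initial conditions \eqref{2.5} yield
\begin{equation*}
\theta_0(z,x,a)=\cos\big(z^{1/2}(x-a)\big)I_{{\mathcal H}}, \quad \phi_0(z,x,a)=z^{-1/2}\sin\big(z^{1/2}(x-a)\big)I_{{\mathcal H}},
\end{equation*}
and the requirement $\psi_0(z,\,\cdot\,,a)h\in L^2((a,\infty);dx;{\mathcal H})$, $h \in {\mathcal H}$, in \eqref{3.58A} forces
\begin{equation*}
m_0(z,a)=iz^{1/2}\,I_{{\mathcal H}}, \qquad \psi_0(z,x,a)=e^{iz^{1/2}(x-a)}I_{{\mathcal H}},
\end{equation*}
using $\cos(w)+i\sin(w)=e^{iw}$ with $w=z^{1/2}(x-a)$.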
With the help of Weyl--Titchmarsh solutions one can now describe the resolvent of $H_{\alpha}$ as follows, \begin{align} \begin{split} \big((H_{\alpha} - z I_{L^2((a,b);dx;{\mathcal H})})^{-1} u\big)(x) = \int_a^b dx' \, G_{\alpha}(z,x,x')u(x'),& \\ u \in L^2((a,b);dx;{\mathcal H}), \; z \in \rho(H_{\alpha}), \; x \in [a,b),& \end{split} \end{align} with the ${\mathcal B}({\mathcal H})$-valued Green's function $G_{\alpha}(z,\,\cdot \,, \, \cdot \,)$ given by \begin{equation} \label{3.63A} G_{\alpha}(z,x,x') = \begin{cases} \phi_{\alpha}(z,x,a) \psi_{\alpha}(\overline{z},x',a)^*, & a\leq x \leq x'<b, \\ \psi_{\alpha}(z,x,a) \phi_{\alpha}(\overline{z},x',a)^*, & a\leq x' \leq x<b, \end{cases} \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}. \end{equation} Next, we replace the interval $(a,b)$ by the right half-line $(x_0,\infty)$ and indicate this change with the additional subscript $+$ in $H_{+,\min}$, $H_{+,\max}$, $H_{+,\alpha}$, $\psi_{+,\alpha}(z,\, \cdot \,,x_0)$, $m_{+,\alpha}(\, \cdot \,,x_0)$, $d\rho_{+,\alpha}(\, \cdot \,,x_0)$, $G_{+,\alpha}(z,\, \cdot \, , \, \cdot \,)$, etc., to distinguish these quantities from the analogous objects on the left half-line $(-\infty, x_0)$ (later indicated with the subscript $-$), which are needed in our subsequent full-line Section \ref{s4}.
Our aim is to relate the family of spectral projections, $\{E_{H_{+,\alpha}}(\lambda)\}_{\lambda\in{\mathbb{R}}}$, of the self-adjoint operator $H_{+,\alpha}$ and the ${\mathcal B}({\mathcal H})$-valued spectral function $\rho_{+,\alpha}(\lambda,x_0)$, $\lambda\in{\mathbb{R}}$, which generates the operator-valued measure $d\rho_{+,\alpha}(\, \cdot \, , x_0)$ in the Nevanlinna--Herglotz representation \eqref{2.25} of $m_{+,\alpha}(\, \cdot \, , x_0)$: \begin{equation} m_{+,\alpha}(z,x_0) = c_{+,\alpha} + \int_{{\mathbb{R}}} d\rho_{+,\alpha}(\lambda,x_0) \Big[ \frac{1}{\lambda-z} -\frac{\lambda}{\lambda^2 + 1}\Big], \quad z\in{\mathbb{C}}\backslash \sigma(H_{+,\alpha}), \label{2.25} \end{equation} where \begin{equation} c_{+,\alpha} = \operatorname{Re}(m_{+,\alpha}(i,x_0)) \in {\mathcal B}({\mathcal H}), \label{2.25a} \end{equation} and $d\rho_{+,\alpha}(\, \cdot \, , x_0)$ is a ${\mathcal B}({\mathcal H})$-valued measure satisfying \begin{equation} \int_{{\mathbb{R}}} d(e,\rho_{+,\alpha}(\lambda, x_0)e)_{{\mathcal H}} \, (\lambda^2 + 1)^{-1} < \infty, \quad e\in{\mathcal H}. \label{2.26} \end{equation} In addition, the Stieltjes inversion formula for the nonnegative ${\mathcal B}({\mathcal H})$-valued measure $d\rho_{+,\alpha}(\, \cdot \,,x_0)$ reads \begin{equation} \rho_{+,\alpha}((\lambda_1,\lambda_2],x_0) =\frac1\pi \lim_{\delta\downarrow 0} \lim_{\varepsilon\downarrow 0} \int^{\lambda_2+\delta}_{\lambda_1+\delta} d\lambda \, \operatorname{Im}(m_{+,\alpha}(\lambda +i\varepsilon,x_0)), \quad \lambda_1, \lambda_2 \in{\mathbb{R}}, \; \lambda_1<\lambda_2 \label{2.27} \end{equation} (cf.\ Appendix \ref{sA} for details on Nevanlinna--Herglotz functions).
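As a simple illustration (not part of the general development), consider once more the elementary example $V=0$ a.e.\ on $(x_0,\infty)$ with $\alpha=0$, in which one readily verifies $m_{+,0}(z,x_0)=iz^{1/2}I_{{\mathcal H}}$ with $\operatorname{Im}\big(z^{1/2}\big)>0$. Then $\operatorname{Im}(m_{+,0}(\lambda+i0,x_0)) =\chi_{[0,\infty)}(\lambda)\lambda^{1/2}I_{{\mathcal H}}$, and the Stieltjes inversion formula \eqref{2.27} yields
\begin{equation*}
d\rho_{+,0}(\lambda,x_0)=\pi^{-1}\chi_{[0,\infty)}(\lambda)\,\lambda^{1/2}\,d\lambda\,I_{{\mathcal H}},
\end{equation*}
a purely absolutely continuous measure supported on $\sigma(H_{+,0})=[0,\infty)$, consistent with \eqref{2.26}.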
We also note that $m_{+,\alpha}(\, \cdot\,,x_0)$ and $m_{+,\beta}(\,\cdot\,,x_0)$ are related by the following linear fractional transformation, \begin{equation}\label{3.67A} m_{+,\beta}(\,\cdot\,,x_0) = (C+Dm_{+,\alpha}(\,\cdot\,,x_0))(A+Bm_{+,\alpha}(\,\cdot\,,x_0))^{-1}, \end{equation} where \begin{equation} \begin{pmatrix}A&B\\ C&D\end{pmatrix} = \begin{pmatrix}\cos(\beta) & \sin(\beta) \\ -\sin(\beta) & \cos(\beta) \end{pmatrix} \begin{pmatrix}\cos(\alpha) & -\sin(\alpha) \\ \sin(\alpha) & \cos(\alpha) \end{pmatrix}. \end{equation} An important consequence of \eqref{3.67A} and the fact that the $m$-functions take values in ${\mathcal B}({\mathcal H})$ is the following invertibility result. \begin{theorem}\label{t3.3} Assume Hypothesis \ref{h3.9}. Then $[\operatorname{Im}(m_{+,\alpha}(z,x_0))]^{-1} \in {\mathcal B}({\mathcal H})$ for all $z\in{\mathbb{C}}\backslash{\mathbb{R}}$ and $\alpha=\alpha^*\in{\mathcal B}({\mathcal H})$. \end{theorem} \begin{proof} Let $z\in{\mathbb{C}}\backslash{\mathbb{R}}$ be fixed. We first show that $[\operatorname{Im}(m_{+,0}(z,x_0))]^{-1} \in {\mathcal B}({\mathcal H})$. By \eqref{3.67A}, \begin{align}\label{3.21} m_{+,\beta}(z,x_0) = [\cos(\beta)m_{+,0}(z,x_0)-\sin(\beta)][\sin(\beta)m_{+,0}(z,x_0)+\cos(\beta)]^{-1}, \end{align} hence, using $\sin^2(\beta)+\cos^2(\beta)=I_{\mathcal H}$ and the commutativity of $\sin(\beta)$ and $\cos(\beta)$, one gets \begin{align} \cos(\beta)-\sin(\beta)m_{+,\beta}(z,x_0) = [\sin(\beta)m_{+,0}(z,x_0)+\cos(\beta)]^{-1}.
\end{align} Taking $\beta = \beta(z) = {\rm arccot}(-\operatorname{Re}(m_{+,0}(z,x_0))) \in{\mathcal B}({\mathcal H})$ yields \begin{align} \cos(\beta)-\sin(\beta)m_{+,\beta}(z,x_0) = [\sin(\beta)i\operatorname{Im}(m_{+,0}(z,x_0))]^{-1}, \end{align} and since the left-hand side is in ${\mathcal B}({\mathcal H})$, also $[\operatorname{Im}(m_{+,0}(z,x_0))]^{-1} \in {\mathcal B}({\mathcal H})$. Next, we show that for any $\alpha=\alpha^*\in{\mathcal B}({\mathcal H})$, $[\operatorname{Im}(m_{+,\alpha}(z,x_0))]^{-1} \in {\mathcal B}({\mathcal H})$. Replacing $\beta$ by $\alpha$ in \eqref{3.21} and noting that both $\sin(\alpha)$ and $\cos(\alpha)$ are self-adjoint, one obtains \begin{align} m_{+,\alpha}(z,x_0) &= [\cos(\alpha)m_{+,0}(z,x_0)-\sin(\alpha)][\sin(\alpha)m_{+,0}(z,x_0)+\cos(\alpha)]^{-1}, \nonumber \\ m_{+,\alpha}(z,x_0)^* &= [m_{+,0}(z,x_0)^*\sin(\alpha)+\cos(\alpha)]^{-1}[m_{+,0}(z,x_0)^*\cos(\alpha)-\sin(\alpha)], \end{align} and consequently \begin{align} 2i\operatorname{Im}(m_{+,\alpha}(z,x_0)) &= m_{+,\alpha}(z,x_0) - m_{+,\alpha}(z,x_0)^* \nonumber \\ & = [m_{+,0}(z,x_0)^*\sin(\alpha)+\cos(\alpha)]^{-1} [2i\operatorname{Im}(m_{+,0}(z,x_0))] \nonumber \\ & \quad \times [\sin(\alpha)m_{+,0}(z,x_0) + \cos(\alpha)]^{-1}. \end{align} Since $[\operatorname{Im}(m_{+,0}(z,x_0))]^{-1} \in {\mathcal B}({\mathcal H})$, it follows that $[\operatorname{Im}(m_{+,\alpha}(z,x_0))]^{-1} \in {\mathcal B}({\mathcal H})$. \end{proof} In the following, $C_0^\infty((c,d); {\mathcal H})$, $-\infty \leq c<d\leq \infty$, denotes the usual space of infinitely differentiable ${\mathcal H}$-valued functions of compact support contained in $(c,d)$. \begin{theorem} \label{t2.5} Assume Hypothesis \ref{h3.9} and let $f,g \in C^\infty_0((x_0,\infty); {\mathcal H})$, $F\in C({\mathbb{R}})$, and $\lambda_1, \lambda_2 \in{\mathbb{R}}$, $\lambda_1<\lambda_2$.
Then, \begin{align} \begin{split} & \big(f,F(H_{+,\alpha})E_{H_{+,\alpha}}((\lambda_1,\lambda_2])g \big)_{L^2((x_0,\infty);dx;{\mathcal H})} \\ & \quad = \big(\widehat f_{+,\alpha},M_FM_{\chi_{(\lambda_1,\lambda_2]}} \widehat g_{+,\alpha}\big)_{L^2({\mathbb{R}};d\rho_{+,\alpha}(\, \cdot \,,x_0);{\mathcal H})}, \label{2.28} \end{split} \end{align} where we introduced the notation \begin{equation} \widehat u_{+,\alpha}(\lambda)=\int_{x_0}^\infty dx \, \phi_\alpha(\lambda,x,x_0)^* u(x), \quad \lambda \in{\mathbb{R}}, \; u \in C^\infty_0((x_0,\infty); {\mathcal H}), \label{2.29} \end{equation} and $M_G$ denotes the maximally defined operator of multiplication by the function $G \in C({\mathbb{R}})$ in the Hilbert space $L^2({\mathbb{R}};d\rho_{+,\alpha};{\mathcal H})$, \begin{align} & \big(M_G\widehat u\big)(\lambda)=G(\lambda)\widehat u(\lambda) \, \text{ for $\rho_{+,\alpha}$-a.e.\ $\lambda\in{\mathbb{R}}$}, \label{2.30} \\ & \, \widehat u \in\operatorname{dom}(M_G)=\big\{\widehat v \in L^2({\mathbb{R}};d\rho_{+,\alpha}(\,\cdot\,,x_0);{\mathcal H}) \,\big|\, G\widehat v \in L^2({\mathbb{R}};d\rho_{+,\alpha}(\,\cdot\,,x_0);{\mathcal H})\big\}. \nonumber \end{align} Here $\rho_{+,\alpha}(\, \cdot \, , x_0)$ generates the operator-valued measure in the Nevanlinna--Herglotz representation of the ${\mathcal B}({\mathcal H})$-valued Weyl--Titchmarsh function $m_{+,\alpha}(\, \cdot \, , x_0) \in {\mathcal B}({\mathcal H})$ $($cf.\ \eqref{2.25}$)$. \end{theorem} For a discussion of the model Hilbert space $L^2({\mathbb{R}};d\Sigma;{\mathcal K})$ for operator-valued measures $\Sigma$ we refer to \cite{GKMT01}, \cite{GWZ13a}, and \cite[App.~B]{GWZ13b}.
In the context of operator-valued potential coefficients of half-line Schr\"odinger operators we also refer to M.\ L.\ Gorbachuk \cite{Go68}, Sait{\=o} \cite{Sa71}, and Trooshin \cite{Tr00}. The proof of Theorem \ref{t2.5} in \cite{GWZ13b} relies on a version of Stone's formula in the weak sense (cf., e.g., \cite[p.\ 1203]{DS88}):
\begin{lemma} \label{l2.4a}
Let $T$ be a self-adjoint operator in a complex separable Hilbert space ${\mathcal H}$ $($with scalar product denoted by $(\, \cdot \,,\, \cdot \,)_{\mathcal H}$, linear in the second factor$)$ and denote by $\{E_T(\lambda)\}_{\lambda\in{\mathbb{R}}}$ the family of self-adjoint right-continuous spectral projections associated with $T$, that is, $E_T(\lambda)=\chi_{(-\infty,\lambda]}(T)$, $\lambda\in{\mathbb{R}}$. Moreover, let $f,g \in{\mathcal H}$, $\lambda_1,\lambda_2\in{\mathbb{R}}$, $\lambda_1<\lambda_2$, and $F\in C({\mathbb{R}})$. Then,
\begin{align}
&(f,F(T)E_{T}((\lambda_1,\lambda_2])g)_{{\mathcal H}} \nonumber \\
& \quad = \lim_{\delta\downarrow 0}\lim_{\varepsilon\downarrow 0} \frac{1}{2\pi i} \int_{\lambda_1+\delta}^{\lambda_2+\delta} d\lambda \, F(\lambda) \big[\big(f,(T-(\lambda+i\varepsilon) I_{{\mathcal H}})^{-1}g\big)_{{\mathcal H}} \nonumber \\
& \hspace*{4.9cm} - \big(f,(T-(\lambda-i\varepsilon)I_{{\mathcal H}})^{-1} g\big)_{{\mathcal H}}\big]. \label{2.26a}
\end{align}
\end{lemma}
One can remove the compact support restrictions on $f$ and $g$ in Theorem \ref{t2.5} in the usual way by introducing the map
\begin{equation}
\widetilde U_{+,\alpha} : \begin{cases} C_0^\infty((x_0,\infty); {\mathcal H})\to L^2({\mathbb{R}};d\rho_{+,\alpha}(\,\cdot\,,x_0);{\mathcal H}) \\[1mm]
u \mapsto \widehat u_{+,\alpha}(\cdot)= \int_{x_0}^\infty dx\, \phi_\alpha(\, \cdot \,,x,x_0)^* u(x).
\end{cases} \label{2.39}
\end{equation}
Taking $f=g$, $F=1$, $\lambda_1\downarrow -\infty$, and $\lambda_2\uparrow \infty$ in \eqref{2.28} then shows that $\widetilde U_{+,\alpha}$ is a densely defined isometry in $L^2((x_0,\infty);dx;{\mathcal H})$, which extends by continuity to an isometry on $L^2((x_0,\infty);dx;{\mathcal H})$. The latter is denoted by $U_{+,\alpha}$ and given by
\begin{equation}
U_{+,\alpha} : \begin{cases}L^2((x_0,\infty);dx;{\mathcal H})\to L^2({\mathbb{R}};d\rho_{+,\alpha}(\,\cdot\,,x_0);{\mathcal H}) \\[1mm]
u \mapsto \widehat u_{+,\alpha}(\cdot)= \slim_{b\uparrow\infty}\int_{x_0}^b dx\, \phi_\alpha(\, \cdot \,,x,x_0)^* u(x),
\end{cases} \label{2.40}
\end{equation}
where $\slim$ refers to the $L^2({\mathbb{R}};d\rho_{+,\alpha}(\,\cdot\,,x_0);{\mathcal H})$-limit. In addition, one can show that the map $U_{+,\alpha}$ in \eqref{2.40} is onto and hence that $U_{+,\alpha}$ is unitary (i.e., $U_{+,\alpha}$ and $U_{+,\alpha}^{-1}$ are isometric isomorphisms between the Hilbert spaces $L^2((x_0,\infty);dx;{\mathcal H})$ and $L^2({\mathbb{R}};d\rho_{+,\alpha}(\,\cdot\,,x_0);{\mathcal H})$) with
\begin{equation}
U_{+,\alpha}^{-1} : \begin{cases} L^2({\mathbb{R}};d\rho_{+,\alpha};{\mathcal H}) \to L^2((x_0,\infty);dx;{\mathcal H}) \\[1mm]
\widehat u \mapsto \slim_{\mu_1\downarrow -\infty, \mu_2\uparrow\infty} \int_{\mu_1}^{\mu_2} \phi_\alpha(\lambda,\, \cdot \,,x_0) \, d\rho_{+,\alpha}(\lambda,x_0)\, \widehat u(\lambda).
\end{cases} \label{2.45}
\end{equation}
Here $\slim$ refers to the $L^2((x_0,\infty); dx; {\mathcal H})$-limit.
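The weak Stone formula \eqref{2.26a} underlying these constructions can also be tested numerically in a finite-dimensional toy model (a Hermitian matrix in place of $T$). In the sketch below (all data illustrative), the $\varepsilon$-limit is approximated by a small fixed $\varepsilon$ with a fine quadrature grid, and the $\delta$-limit is omitted since no eigenvalue sits at an endpoint of $(\lambda_1,\lambda_2]$:

```python
import numpy as np

# toy self-adjoint operator T on C^2 and fixed vectors f, g
T = np.array([[2.0, 1.0], [1.0, 0.0]])
w, V = np.linalg.eigh(T)                      # eigenvalues approx -0.414, 2.414
f = np.array([1.0 + 1j, 0.5])
g = np.array([0.3, -1.0 + 2j])
F = lambda lam: lam**2 + 1.0                  # any continuous F
lam1, lam2 = -1.0, 1.0                        # interval containing only w[0]

# spectral coefficients (f, P_k g) for the rank-one eigenprojections P_k
coeff = np.array([np.vdot(f, V[:, k]) * np.vdot(V[:, k], g) for k in range(2)])

# left-hand side: (f, F(T) E_T((lam1, lam2]) g) via the spectral theorem
inside = (w > lam1) & (w <= lam2)
lhs = np.sum(F(w[inside]) * coeff[inside])

# right-hand side: the resolvent-difference integral collapses to a sum of
# Lorentzians (eps/pi) / ((w_k - lam)^2 + eps^2) weighted by coeff_k
eps = 1e-3
lam = np.linspace(lam1, lam2, 20001)
lorentz = (eps / np.pi) / ((w[None, :] - lam[:, None])**2 + eps**2)
integrand = F(lam) * (lorentz * coeff[None, :]).sum(axis=1)
rhs = np.sum((integrand[:-1] + integrand[1:]) / 2) * (lam[1] - lam[0])

assert abs(lhs - rhs) < 1e-2 * (1 + abs(lhs))
```

With $\varepsilon \to 0$ the Lorentzians converge to point masses at the eigenvalues, recovering the spectral projection exactly; the finite-$\varepsilon$ discrepancy above is of order $\varepsilon$.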
We recall that the essential range of $F$ with respect to a scalar measure $\mu$ is defined by
\begin{equation}
\essran_{\mu}(F)=\{z\in{\mathbb{C}}\,|\, \text{for all $\varepsilon>0$,} \, \mu(\{\lambda\in{\mathbb{R}} \,|\, |F(\lambda)-z|<\varepsilon\})>0\}, \label{2.46c}
\end{equation}
and that $\essran_{\rho_{+,\alpha}}(F)$ for $F\in C({\mathbb{R}})$ is then defined to be $\essran_{\nu_{+,\alpha}}(F)$ for any control measure $d\nu_{+,\alpha}$ of the operator-valued measure $d\rho_{+,\alpha}$. Given a complete orthonormal system $\{e_n\}_{n \in {\mathcal I}}$ in ${\mathcal H}$ (${\mathcal I} \subseteq {\mathbb{N}}$ an appropriate index set), a convenient control measure for $d\rho_{+,\alpha}$ is given by
\begin{equation}
\mu_{+,\alpha}(B)=\sum_{n\in{\mathcal I}}2^{-n}(e_n, \rho_{+,\alpha}(B,x_0)e_n)_{\mathcal H}, \quad B\in\mathfrak{B}({\mathbb{R}}). \label{2.46d}
\end{equation}
These considerations lead to a variant of the spectral theorem for $H_{+,\alpha}$:
\begin{theorem} \label{t2.6}
Assume Hypothesis \ref{h3.9} and suppose $F\in C({\mathbb{R}})$. Then,
\begin{equation}
U_{+,\alpha} F(H_{+,\alpha})U_{+,\alpha}^{-1} = M_F I_{{\mathcal H}} \label{2.46}
\end{equation}
in $L^2({\mathbb{R}};d\rho_{+,\alpha}(\,\cdot\,,x_0);{\mathcal H})$ $($cf.\ \eqref{2.30}$)$. Moreover,
\begin{align}
& \sigma(F(H_{+,\alpha}))= \essran_{\rho_{+,\alpha}}(F), \label{2.46a} \\
& \sigma(H_{+,\alpha})=\operatorname{supp}(d\rho_{+,\alpha}(\,\cdot\,,x_0)), \label{2.46b}
\end{align}
and the multiplicity of the spectrum of $H_{+,\alpha}$ is at most equal to $\dim ({\mathcal H})$.
\end{theorem}

\section{Weyl--Titchmarsh and Spectral Theory of Schr\"odinger Operators with Operator-Valued Potentials on the Real Line} \label{s4}

In this section we briefly recall the basic spectral theory for full-line Schr\"odinger operators $H$ in $L^2({\mathbb{R}}; dx; {\mathcal H})$, employing a $2 \times 2$ block operator representation of the associated Weyl--Titchmarsh matrix and its ${\mathcal B}\big({\mathcal H}^2\big)$-valued spectral measure, decomposing ${\mathbb{R}}$ into a left and a right half-line with reference point $x_0 \in {\mathbb{R}}$, ${\mathbb{R}} = (-\infty, x_0] \cup [x_0, \infty)$. We make the following basic assumption throughout this section.
\begin{hypothesis} \label{h2.8}
$(i)$ Assume that
\begin{equation}
V\in L^1_{\text{\rm{loc}}} ({\mathbb{R}};dx;{\mathcal H}), \quad V(x)=V(x)^* \, \text{ for a.e.\ } x\in{\mathbb{R}}. \label{2.51}
\end{equation}
$(ii)$ Introducing the differential expression $\tau$ given by
\begin{equation}
\tau=-\frac{d^2}{dx^2} I_{{\mathcal H}} + V(x), \quad x\in{\mathbb{R}}, \label{2.52}
\end{equation}
we assume $\tau$ to be in the limit-point case at $+\infty$ and at $-\infty$.
\end{hypothesis}
Associated with the differential expression $\tau$ one introduces the self-adjoint Schr\"odinger operator $H$ in $L^2({\mathbb{R}};dx;{\mathcal H})$ by
\begin{align}
&Hf=\tau f, \label{2.53} \\ \nonumber
&f\in \operatorname{dom}(H)= \big\{g\in L^2({\mathbb{R}};dx;{\mathcal H}) \, \big| \, g, g' \in W^{2,1}_{\text{\rm{loc}}}({\mathbb{R}};dx;{\mathcal H}); \, \tau g\in L^2({\mathbb{R}};dx;{\mathcal H})\big\}.
\end{align}
As in the half-line context we introduce the ${\mathcal B}({\mathcal H})$-valued fundamental system of solutions $\phi_\alpha(z,\, \cdot \,,x_0)$ and $\theta_\alpha(z,\, \cdot \,,x_0)$, $z\in{\mathbb{C}}$, of
\begin{equation}
(\tau \psi)(z,x) = z \psi(z,x), \quad x\in {\mathbb{R}}, \label{2.54}
\end{equation}
with respect to a fixed reference point $x_0\in{\mathbb{R}}$, satisfying the initial conditions at the point $x=x_0$,
\begin{align}
\begin{split}
\phi_\alpha(z,x_0,x_0)&=-\theta'_\alpha(z,x_0,x_0)=-\sin(\alpha), \\
\phi'_\alpha(z,x_0,x_0)&=\theta_\alpha(z,x_0,x_0)=\cos(\alpha), \quad \alpha=\alpha^*\in{\mathcal B}({\mathcal H}). \label{2.55}
\end{split}
\end{align}
Again we note that by Corollary \ref{c2.5}\,$(iii)$, for any fixed $x, x_0\in{\mathbb{R}}$, the functions $\theta_{\alpha}(z,x,x_0)$, $\phi_{\alpha}(z,x,x_0)$, $\theta_{\alpha}(\overline{z},x,x_0)^*$, and $\phi_{\alpha}(\overline{z},x,x_0)^*$ as well as their strong $x$-derivatives are entire with respect to $z$ in the ${\mathcal B}({\mathcal H})$-norm. Moreover, by \eqref{2.7i},
\begin{equation}
W(\theta_\alpha(\overline{z},\, \cdot \,,x_0)^*,\phi_\alpha(z,\, \cdot \,,x_0))(x)=I_{\mathcal H}, \quad z\in{\mathbb{C}}. \label{2.56}
\end{equation}
Particularly important solutions of \eqref{2.54} are the {\it Weyl--Titchmarsh solutions} $\psi_{\pm,\alpha}(z,\, \cdot \,,x_0)$, $z\in{\mathbb{C}}\backslash{\mathbb{R}}$, uniquely characterized by
\begin{align}
\begin{split}
&\psi_{\pm,\alpha}(z,\, \cdot \,,x_0)h \in L^2((x_0,\pm\infty);dx;{\mathcal H}), \quad h\in{\mathcal H}, \\
&\sin(\alpha)\psi'_{\pm,\alpha}(z,x_0,x_0) +\cos(\alpha)\psi_{\pm,\alpha}(z,x_0,x_0)=I_{\mathcal H}, \quad z\in{\mathbb{C}}\backslash\sigma(H_{\pm,\alpha}).
\label{2.57}
\end{split}
\end{align}
The crucial condition in \eqref{2.57} is again the $L^2$-property which uniquely determines $\psi_{\pm,\alpha}(z,\, \cdot \,,x_0)$ up to constant multiples by the limit-point hypothesis of $\tau$ at $\pm\infty$. In particular, for $\alpha = \alpha^*, \beta = \beta^* \in {\mathcal B}({\mathcal H})$,
\begin{align}
\psi_{\pm,\alpha}(z,\, \cdot \,,x_0) = \psi_{\pm,\beta}(z,\, \cdot \,,x_0)C_\pm(z,\alpha,\beta,x_0) \label{2.58}
\end{align}
for some coefficients $C_\pm (z,\alpha,\beta,x_0)\in{\mathcal B}({\mathcal H})$. The normalization in \eqref{2.57} shows that $\psi_{\pm,\alpha}(z,\, \cdot \,,x_0)$ are of the type
\begin{equation}
\psi_{\pm,\alpha}(z,x,x_0)=\theta_{\alpha}(z,x,x_0) + \phi_{\alpha}(z,x,x_0) m_{\pm,\alpha}(z,x_0), \quad z\in{\mathbb{C}}\backslash\sigma(H_{\pm,\alpha}), \; x\in{\mathbb{R}}, \label{2.59}
\end{equation}
for some coefficients $m_{\pm,\alpha}(z,x_0)\in{\mathcal B}({\mathcal H})$, the {\it Weyl--Titchmarsh $m$-functions} associated with $\tau$, $\alpha$, and $x_0$.
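For orientation, in the simplest scalar case ($\dim({\mathcal H}) = 1$, $V \equiv 0$, $\alpha = 0$, $x_0 = 0$) one has $\theta_0(z,x,0)=\cos(\sqrt{z}\,x)$, $\phi_0(z,x,0)=\sin(\sqrt{z}\,x)/\sqrt{z}$, and $\psi_{\pm,0}(z,x,0)=e^{\pm i\sqrt{z}\,x}$, so that in the normalization \eqref{2.55}, $m_{\pm,0}(z,0)=\pm i\sqrt{z}$ with $\operatorname{Im}(\sqrt{z})>0$. The short check below (an illustrative sketch, not part of the text) confirms numerically the Herglotz property of $\pm m_{\pm,0}$ and the scalar instance of the forthcoming identity $\operatorname{Im}(m_{+,0}(z,0)) = \operatorname{Im}(z)\int_0^\infty |e^{i\sqrt{z}\,x}|^2\,dx$ at a few sample points:

```python
import cmath

for z in (1 + 1j, -2 + 0.5j, 3j, -0.3 + 2j):
    rt = cmath.sqrt(z)                 # principal branch: Im(rt) > 0 for Im(z) > 0
    m_plus, m_minus = 1j * rt, -1j * rt
    # Herglotz property of +m_+ and -m_-
    assert m_plus.imag > 0 and (-m_minus).imag > 0
    # |e^{i rt x}|^2 = e^{-2 Im(rt) x}, hence the x-integral equals 1/(2 Im(rt))
    integral = 1.0 / (2.0 * rt.imag)
    assert abs(m_plus.imag - z.imag * integral) < 1e-12
```

The exact agreement reflects $\operatorname{Im}(z) = 2\operatorname{Re}(\sqrt{z})\operatorname{Im}(\sqrt{z})$, so both sides equal $\operatorname{Re}(\sqrt{z})$.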
In addition, we note that (with $z, z_1, z_2 \in {\mathbb{C}}\backslash\sigma(H_{\pm,\alpha})$)
\begin{align}
&W(\psi_{\pm,\alpha}(\overline{z_1},x_0,x_0)^*,\psi_{\pm,\alpha}(z_2,x_0,x_0)) = m_{\pm,\alpha}(z_2,x_0)- m_{\pm,\alpha}(z_1,x_0), \label{2.59a} \\
& \frac{d}{dx}W(\psi_{\pm,\alpha}(\overline{z_1},x,x_0)^*,\psi_{\pm,\alpha}(z_2,x,x_0)) = (z_1-z_2)\psi_{\pm,\alpha}(\overline{z_1},x,x_0)^*\psi_{\pm,\alpha}(z_2,x,x_0), \label{2.59b} \\
& (z_2-z_1)\int_{x_0}^{\pm\infty} dx\, \psi_{\pm,\alpha}(\overline{z_{1}},x,x_0)^* \psi_{\pm,\alpha}(z_{2},x,x_0) = m_{\pm,\alpha}(z_2,x_0)- m_{\pm,\alpha}(z_1,x_0), \label{2.60} \\
& m_{\pm,\alpha}(z,x_0) = m_{\pm,\alpha}(\overline z,x_0)^*, \label{2.61} \\
& \operatorname{Im}[m_{\pm,\alpha}(z,x_0)] = \operatorname{Im}(z)\int_{x_0}^{\pm\infty} dx\, \psi_{\pm,\alpha}(z,x,x_0)^* \psi_{\pm,\alpha}(z,x,x_0). \label{2.62}
\end{align}
In particular, $\pm m_{\pm,\alpha}(\, \cdot \,,x_0)$ are operator-valued Nevanlinna--Herglotz functions. In the following we abbreviate the Wronskian of $\psi_{+,\alpha}(\overline{z},x,x_0)^*$ and $\psi_{-,\alpha}(z,x,x_0)$ by $W(z)$ (thus, $W(z) = m_{-,\alpha}(z,x_0) - m_{+,\alpha}(z,x_0)$, $z\in{\mathbb{C}}\backslash\sigma(H)$). The Green's function $G(z,x,x')$ of the Schr\"odinger operator $H$ then reads
\begin{align}
G(z,x,x') = \psi_{\mp,\alpha}(z,x,x_0) W(z)^{-1} \psi_{\pm,\alpha}(\overline{z},x',x_0)^*, \quad x \lesseqgtr x', \; z\in{\mathbb{C}}\backslash\sigma(H).
\label{2.63}
\end{align}
Thus,
\begin{align}
\begin{split}
((H-zI_{L^2({\mathbb{R}}; dx;{\mathcal H})})^{-1}f)(x) =\int_{{\mathbb{R}}} dx' \, G(z,x,x')f(x'), \quad z\in{\mathbb{C}}\backslash\sigma(H),& \\
x\in{\mathbb{R}}, \; f\in L^2({\mathbb{R}};dx;{\mathcal H}).& \label{2.65}
\end{split}
\end{align}
Next, we introduce the $2\times 2$ block operator-valued Weyl--Titchmarsh $m$-function, $M_\alpha(z,x_0)\in{\mathcal B}\big({\mathcal H}^2\big)$,
\begin{align}
M_\alpha(z,x_0)&=\big(M_{\alpha,j,j'}(z,x_0)\big)_{j,j'=0,1}, \quad z\in{\mathbb{C}}\backslash\sigma(H), \label{2.71} \\
M_{\alpha,0,0}(z,x_0) &= W(z)^{-1}, \\
M_{\alpha,0,1}(z,x_0) &= 2^{-1} W(z)^{-1} \big[m_{-,\alpha}(z,x_0)+m_{+,\alpha}(z,x_0)\big], \\
M_{\alpha,1,0}(z,x_0) &= 2^{-1} \big[m_{-,\alpha}(z,x_0)+m_{+,\alpha}(z,x_0)\big] W(z)^{-1}, \\
M_{\alpha,1,1}(z,x_0) &= m_{+,\alpha}(z,x_0) W(z)^{-1} m_{-,\alpha}(z,x_0) \nonumber \\
&= m_{-,\alpha}(z,x_0) W(z)^{-1} m_{+,\alpha}(z,x_0).
\label{2.71a}
\end{align}
$M_\alpha(z,x_0)$ is a ${\mathcal B}\big({\mathcal H}^2\big)$-valued Nevanlinna--Herglotz function with representation
\begin{equation}
M_\alpha(z,x_0)=C_\alpha(x_0)+\int_{{\mathbb{R}}} d\Omega_\alpha (\lambda,x_0)\bigg[\frac{1}{\lambda -z}-\frac{\lambda}{\lambda^2 + 1}\bigg], \quad z\in{\mathbb{C}}\backslash\sigma(H), \label{2.71b}
\end{equation}
where
\begin{equation}
C_\alpha(x_0)=\operatorname{Re}(M_\alpha(i,x_0)) \in {\mathcal B}\big({\mathcal H}^2\big), \label{2.71c}
\end{equation}
and $d\Omega_{\alpha}(\, \cdot \,,x_0)$ is a ${\mathcal B}\big({\mathcal H}^2\big)$-valued measure satisfying
\begin{equation}
\int_{{\mathbb{R}}} \big(e,d\Omega_{\alpha}(\lambda,x_0)e\big)_{{\mathcal H}^2} \, (\lambda^2 + 1)^{-1} < \infty, \quad e\in{\mathcal H}^2. \label{2.71d}
\end{equation}
In addition, the Stieltjes inversion formula for the nonnegative ${\mathcal B}\big({\mathcal H}^2\big)$-valued measure $d\Omega_\alpha(\, \cdot \,,x_0)$ reads
\begin{equation}
\Omega_\alpha((\lambda_1,\lambda_2],x_0) =\frac{1}{\pi} \lim_{\delta\downarrow 0} \lim_{\varepsilon\downarrow 0} \int^{\lambda_2+\delta}_{\lambda_1+\delta} d\lambda \, \operatorname{Im}(M_\alpha(\lambda +i\varepsilon,x_0)), \quad \lambda_1, \lambda_2 \in{\mathbb{R}}, \; \lambda_1<\lambda_2. \label{2.71e}
\end{equation}
In particular, $d\Omega_\alpha(\, \cdot \,,x_0)$ is a $2\times 2$ block operator-valued measure with ${\mathcal B}({\mathcal H})$-valued entries $d\Omega_{\alpha,\ell,\ell'}(\, \cdot \,,x_0)$, $\ell,\ell'=0,1$.
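Continuing the scalar free example ($m_{\pm,0}(z,0)=\pm i\sqrt{z}$, so that $W(z) = m_{-,0}(z,0)-m_{+,0}(z,0) = -2i\sqrt{z}$), one can check numerically that the resulting $2\times 2$ matrix \eqref{2.71}--\eqref{2.71a} has a positive semidefinite imaginary part in the open upper half-plane, illustrating the Nevanlinna--Herglotz property of $M_\alpha(\, \cdot \,,x_0)$. This is only a sketch under these toy assumptions:

```python
import numpy as np

def M_block(z):
    """2x2 Weyl-Titchmarsh matrix for the scalar free case V = 0, alpha = 0, x0 = 0."""
    rt = np.sqrt(complex(z))             # principal branch, Im(rt) > 0 above R
    m_p, m_m = 1j * rt, -1j * rt
    W = m_m - m_p                        # = -2i sqrt(z)
    return np.array([
        [1 / W,                 (m_m + m_p) / (2 * W)],
        [(m_m + m_p) / (2 * W), m_p * (1 / W) * m_m],
    ])

for z in (1 + 1j, -2 + 0.5j, 0.1 + 3j):
    M = M_block(z)
    ImM = (M - M.conj().T) / 2j
    # Nevanlinna-Herglotz property: Im M(z) >= 0 for Im(z) > 0
    assert np.all(np.linalg.eigvalsh(ImM) >= -1e-12)
```

In this example the off-diagonal entries vanish (since $m_{+,0}+m_{-,0}=0$) and the diagonal entries $i/(2\sqrt{z})$ and $i\sqrt{z}/2$ are themselves scalar Herglotz functions.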
Relating the family of spectral projections, $\{E_H(\lambda)\}_{\lambda\in{\mathbb{R}}}$, of the self-adjoint operator $H$ and the $2\times 2$ operator-valued increasing spectral function $\Omega_{\alpha}(\lambda,x_0)$, $\lambda\in{\mathbb{R}}$, which generates the ${\mathcal B}\big({\mathcal H}^2\big)$-valued measure $d\Omega_\alpha(\, \cdot \,,x_0)$ in the Nevanlinna--Herglotz representation \eqref{2.71b} of $M_\alpha(z,x_0)$, one obtains the following result:
\begin{theorem} \label{t2.9}
Let $\alpha = \alpha^* \in {\mathcal B}({\mathcal H})$, $f,g \in C^\infty_0({\mathbb{R}};{\mathcal H})$, $F\in C({\mathbb{R}})$, $x_0\in{\mathbb{R}}$, and $\lambda_1, \lambda_2 \in{\mathbb{R}}$, $\lambda_1<\lambda_2$. Then,
\begin{align} \label{2.73}
&\big(f,F(H)E_H((\lambda_1,\lambda_2])g\big)_{L^2({\mathbb{R}};dx;{\mathcal H})} \nonumber \\
&\quad = \big(\widehat f_{\alpha}(\, \cdot \,,x_0),M_FM_{\chi_{(\lambda_1,\lambda_2]}} \widehat g_{\alpha}(\, \cdot \,,x_0)\big)_{L^2({\mathbb{R}};d\Omega_{\alpha}(\, \cdot \,,x_0);{\mathcal H}^2)},
\end{align}
where we introduced the notation
\begin{align} \label{2.74}
&\widehat u_{\alpha,0}(\lambda,x_0) = \int_{\mathbb{R}} dx \, \theta_\alpha(\lambda,x,x_0)^* u(x), \quad \widehat u_{\alpha,1}(\lambda,x_0) = \int_{\mathbb{R}} dx \, \phi_\alpha(\lambda,x,x_0)^* u(x), \nonumber \\
&\widehat u_{\alpha}(\lambda,x_0) = \big(\,\widehat u_{\alpha,0}(\lambda,x_0), \widehat u_{\alpha,1}(\lambda,x_0)\big)^\top, \quad \lambda \in{\mathbb{R}}, \; u \in C^\infty_0({\mathbb{R}};{\mathcal H}),
\end{align}
and $M_G$ denotes the maximally defined operator of multiplication by the function $G \in C({\mathbb{R}})$ in the Hilbert space $L^2\big({\mathbb{R}};d\Omega_{\alpha}(\, \cdot \,,x_0);{\mathcal H}^2\big)$,
\begin{align}
\begin{split}
&
\big(M_G\widehat u\big)(\lambda)=G(\lambda)\widehat u(\lambda) =\big(G(\lambda) \widehat u_0(\lambda), G(\lambda) \widehat u_1(\lambda)\big)^\top \, \text{ for \ $\Omega_{\alpha}(\, \cdot \,,x_0)$-a.e.\ $\lambda\in{\mathbb{R}}$}, \label{2.75} \\
& \, \widehat u \in \operatorname{dom}(M_G)=\big\{\widehat v \in L^2\big({\mathbb{R}};d\Omega_{\alpha}(\, \cdot \,,x_0);{\mathcal H}^2\big) \,\big|\, G\widehat v \in L^2\big({\mathbb{R}};d\Omega_{\alpha}(\, \cdot \,,x_0);{\mathcal H}^2\big)\big\}.
\end{split}
\end{align}
\end{theorem}
As in the half-line case, one can remove the compact support restrictions on $f$ and $g$ in the usual way by considering the map
\begin{align}
&\widetilde U_{\alpha}(x_0) : \begin{cases} C_0^\infty({\mathbb{R}};{\mathcal H})\to L^2\big({\mathbb{R}};d\Omega_{\alpha}(\, \cdot \,,x_0);{\mathcal H}^2\big) \\[1mm]
u \mapsto \widehat u_{\alpha}(\, \cdot \,,x_0) =\big(\,\widehat u_{\alpha,0}(\lambda,x_0), \widehat u_{\alpha,1}(\lambda,x_0)\big)^\top,
\end{cases} \label{2.95} \\
& \widehat u_{\alpha,0}(\lambda,x_0)=\int_{\mathbb{R}} dx \, \theta_\alpha(\lambda,x,x_0)^* u(x), \quad \widehat u_{\alpha,1}(\lambda,x_0)=\int_{\mathbb{R}} dx \, \phi_\alpha(\lambda,x,x_0)^* u(x). \nonumber
\end{align}
Taking $f=g$, $F=1$, $\lambda_1\downarrow -\infty$, and $\lambda_2\uparrow \infty$ in \eqref{2.73} then shows that $\widetilde U_{\alpha}(x_0)$ is a densely defined isometry in $L^2({\mathbb{R}};dx;{\mathcal H})$, which extends by continuity to an isometry on $L^2({\mathbb{R}};dx;{\mathcal H})$.
The latter is denoted by $U_{\alpha}(x_0)$ and given by
\begin{align}
&U_{\alpha}(x_0) : \begin{cases} L^2 ({\mathbb{R}};dx;{\mathcal H})\to L^2\big({\mathbb{R}};d\Omega_{\alpha}(\, \cdot \,,x_0);{\mathcal H}^2\big) \\[1mm]
u \mapsto \widehat u_{\alpha}(\, \cdot \,,x_0) = \big(\,\widehat u_{\alpha,0}(\, \cdot \,,x_0), \widehat u_{\alpha,1}(\, \cdot \,,x_0)\big)^\top,
\end{cases} \label{2.96} \\
& \widehat u_\alpha(\, \cdot \,,x_0)=\begin{pmatrix} \widehat u_{\alpha,0}(\, \cdot \,,x_0) \\ \widehat u_{\alpha,1}(\, \cdot \,,x_0) \end{pmatrix}= \slim_{a\downarrow -\infty, b \uparrow\infty} \begin{pmatrix} \int_{a}^b dx \, \theta_\alpha(\,\cdot \,,x,x_0)^* u(x) \\ \int_{a}^b dx \, \phi_\alpha(\, \cdot \,,x,x_0)^* u(x) \end{pmatrix}, \nonumber
\end{align}
where $\slim$ refers to the $L^2\big({\mathbb{R}};d\Omega_{\alpha}(\, \cdot \,,x_0);{\mathcal H}^2\big)$-limit. In addition, one can show that the map $U_{\alpha}(x_0)$ in \eqref{2.96} is onto and hence that $U_{\alpha}(x_0)$ is unitary with
\begin{align}
&U_{\alpha}(x_0)^{-1} : \begin{cases} L^2\big({\mathbb{R}};d\Omega_{\alpha}(\, \cdot \,,x_0);{\mathcal H}^2\big) \to L^2({\mathbb{R}};dx;{\mathcal H}) \\[1mm]
\widehat u \mapsto u_\alpha,
\end{cases} \label{2.101} \\
& u_\alpha(\cdot)= \slim_{\mu_1\downarrow -\infty, \mu_2\uparrow \infty} \int_{\mu_1}^{\mu_2}(\theta_\alpha(\lambda,\, \cdot \,,x_0), \phi_\alpha(\lambda,\, \cdot \,,x_0)) \, d\Omega_{\alpha}(\lambda,x_0)\, \widehat u(\lambda). \nonumber
\end{align}
Here $\slim$ refers to the $L^2({\mathbb{R}};dx;{\mathcal H})$-limit. Again, these considerations lead to a variant of the spectral theorem for $H$:
\begin{theorem} \label{t2.10}
Let $F\in C({\mathbb{R}})$ and $x_0\in{\mathbb{R}}$.
Then,
\begin{equation}
U_{\alpha}(x_0) F(H)U_{\alpha}(x_0)^{-1} = M_F \label{2.102}
\end{equation}
in $L^2\big({\mathbb{R}};d\Omega_{\alpha}(\, \cdot \,,x_0);{\mathcal H}^2\big)$ $($cf.\ \eqref{2.75}$)$. Moreover,
\begin{align}
& \sigma(F(H))= \essran_{\Omega_{\alpha}}(F), \label{2.103a} \\
& \sigma(H)=\operatorname{supp} (d\Omega_\alpha(\, \cdot \,,x_0)), \label{2.103b}
\end{align}
and the multiplicity of the spectrum of $H$ is at most equal to $2\dim({\mathcal H})$.
\end{theorem}

\section{Some Facts on Deficiency Subspaces and Abstract \\ Donoghue-type $m$-Functions} \label{s5}

Throughout this preparatory section we make the following assumptions:
\begin{hypothesis} \label{h5.1}
Let ${\mathcal K}$ be a separable, complex Hilbert space, and $\dot A$ a densely defined, closed, symmetric operator in ${\mathcal K}$ with equal deficiency indices $(k,k)$, $k\in{\mathbb{N}}\cup\{\infty\}$.
\end{hypothesis}
Self-adjoint extensions of $\dot A$ in ${\mathcal K}$ will be denoted by $A$ $($or by $A_{\alpha}$, with $\alpha$ an appropriate operator parameter\,$)$. Given Hypothesis \ref{h5.1}, we will study properties of the deficiency spaces of $\dot A$ and introduce operator-valued Donoghue-type $m$-functions corresponding to $A$, closely following the treatment in \cite{GKMT01}. These results will be applied to Schr\"odinger operators in the following section. In the special case $k=1$, detailed investigations of this type were undertaken by Donoghue \cite{Do65}. The case $k\in {\mathbb{N}}$ was discussed in depth in \cite{GT00} (we also refer to \cite{HKS98} for another comprehensive treatment of this subject). Here we treat the general situation $k\in {\mathbb{N}}\cup\{\infty\}$, utilizing results in \cite{GKMT01}, \cite{GMT98}.
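As a concrete finite-dimensional illustration of what follows, take ${\mathcal K} = {\mathbb{C}}^4$, a Hermitian matrix $A$, and a two-dimensional subspace ${\mathcal N}$. The sketch below (all names illustrative, not from the text) evaluates the Donoghue-type $m$-operator $M_{A,{\mathcal N}}^{Do}(z)= zI_{\mathcal N}+(z^2+1)P_{\mathcal N}(A - z I_{\mathcal K})^{-1} P_{\mathcal N}\big\vert_{\mathcal N}$ introduced in \eqref{5.3} below, and verifies numerically the Nevanlinna--Herglotz lower bound \eqref{5.4}:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 4, 2                                   # dim K = 4, dim N = 2
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2                      # self-adjoint A on K
P = np.zeros((n, n)); P[:m, :m] = np.eye(m)   # N = span of first two basis vectors

def M_do(z):
    """Donoghue-type m-operator M(z) = z I_N + (z^2+1) P (A - z)^(-1) P restricted to N."""
    R = np.linalg.inv(A - z * np.eye(n))
    return z * np.eye(m) + (z**2 + 1) * (P @ R @ P)[:m, :m]

for z in (0.3 + 0.7j, -1 + 2j, 2 - 0.5j):
    M = M_do(z)
    ImM = (M - M.conj().T) / 2j
    lower = 2 / ((abs(z)**2 + 1) + np.sqrt((abs(z)**2 - 1)**2 + 4 * z.real**2))
    # bound (5.4): Im(M(z)) / Im(z) >= lower * I_N
    assert np.min(np.linalg.eigvalsh(ImM / z.imag)) >= lower - 1e-10
```

In this finite-dimensional setting the bound follows since $\operatorname{Im}(M(z))/\operatorname{Im}(z)$ is a compression of $f(A)$ with $f(\lambda)=(\lambda^2+1)/((\lambda-\operatorname{Re}(z))^2+(\operatorname{Im}(z))^2)$, whose infimum over ${\mathbb{R}}$ is exactly the stated constant.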
The deficiency subspaces ${\mathcal N}_{z_0}$ of $\dot A$, $z_0 \in {\mathbb{C}} \backslash {\mathbb{R}}$, are given by
\begin{equation}
{\mathcal N}_{z_0} = \ker \big((\dot A)\high{^*} - z_0 I_{{\mathcal K}}\big), \quad \dim \, ({\mathcal N}_{z_0})=k, \label{5.1}
\end{equation}
and for any self-adjoint extension $A$ of $\dot A$ in ${\mathcal K}$, one has (see also \cite[p.~80--81]{Kr71})
\begin{equation}
(A - z_0 I_{{\mathcal K}})(A - z I_{{\mathcal K}})^{-1} {\mathcal N}_{z_0} = {\mathcal N}_{z}, \quad z, z_0 \in {\mathbb{C}}\backslash{\mathbb{R}}. \label{5.2}
\end{equation}
We also note the following result on deficiency spaces.
\begin{lemma} \label{l5.2}
Assume Hypothesis \ref{h5.1}. Suppose $z_0 \in {\mathbb{C}} \backslash {\mathbb{R}}$, $h \in {\mathcal K}$, and that $A$ is a self-adjoint extension of $\dot A$. Assume that
\begin{equation}
\text{for all $z \in {\mathbb{C}} \backslash {\mathbb{R}}$, } \, h \, \bot \, \big\{(A - z I_{{\mathcal K}})^{-1} \ker \big((\dot A)\high{^*} - z_0 I_{{\mathcal K}}\big)\big\}. \label{5.3a}
\end{equation}
Then,
\begin{equation}
\text{for all $z \in {\mathbb{C}} \backslash {\mathbb{R}}$, } \, h \, \bot \, \ker \big((\dot A)\high{^*} - z I_{{\mathcal K}}\big). \label{5.4a}
\end{equation}
\end{lemma}
\begin{proof}
Let $f_{z_0} \in \ker \big((\dot A)\high{^*} - z_0 I_{{\mathcal K}}\big)$; then $\slim_{z \to i \infty} (-z) (A - z I_{{\mathcal K}})^{-1}f_{z_0} = f_{z_0}$ and hence $h \, \bot \, f_{z_0}$, that is, $h \, \bot \, \ker \big((\dot A)\high{^*} - z_0 I_{{\mathcal K}}\big)$. The latter fact together with \eqref{5.3a} implies \eqref{5.4a} due to \eqref{5.2}.
\end{proof}
Next, given a self-adjoint extension $A$ of $\dot A$ in ${\mathcal K}$ and a closed, linear subspace ${\mathcal N}$ of ${\mathcal K}$, ${\mathcal N}\subseteq {\mathcal K}$, the Donoghue-type $m$-operator $M_{A,{\mathcal N}}^{Do} (z) \in{\mathcal B}({\mathcal N})$ associated with the pair $(A,{\mathcal N})$ is defined by
\begin{align}
\begin{split}
M_{A,{\mathcal N}}^{Do}(z)&=P_{\mathcal N} (zA + I_{\mathcal K})(A - z I_{{\mathcal K}})^{-1} P_{\mathcal N}\big\vert_{\mathcal N} \\
&=zI_{\mathcal N}+(z^2+1)P_{\mathcal N}(A - z I_{{\mathcal K}})^{-1} P_{\mathcal N}\big\vert_{\mathcal N}\,, \quad z\in {\mathbb{C}}\backslash {\mathbb{R}}, \label{5.3}
\end{split}
\end{align}
with $I_{\mathcal N}$ the identity operator in ${\mathcal N}$ and $P_{\mathcal N}$ the orthogonal projection in ${\mathcal K}$ onto ${\mathcal N}$. In our principal Section \ref{s6}, we will exclusively focus on the particular case ${\mathcal N} = {\mathcal N}_i = \ker\big((\dot A)\high{^*} - i I_{{\mathcal K}}\big)$. We turn to the Nevanlinna--Herglotz property of $M_{A,{\mathcal N}}^{Do}(\cdot)$ next:
\begin{theorem} \label{t5.3}
Assume Hypothesis \ref{h5.1}. Let $A$ be a self-adjoint extension of $\dot A$ with associated orthogonal family of spectral projections $\{E_A(\lambda)\}_{\lambda\in {\mathbb{R}}}$, and ${\mathcal N}$ a closed subspace of ${\mathcal K}$. Then the Donoghue-type $m$-operator $M_{A,{\mathcal N}}^{Do}(z)$ is analytic for $z\in {\mathbb{C}}\backslash{\mathbb{R}}$ and
\begin{align}
& [\operatorname{Im}(z)]^{-1} \operatorname{Im}\big(M_{A,{\mathcal N}}^{Do} (z)\big) \geq 2 \Big[\big(|z|^2 + 1\big) + \big[\big(|z|^2 -1\big)^2 + 4 (\operatorname{Re}(z))^2\big]^{1/2}\Big]^{-1} I_{{\mathcal N}}, \nonumber \\
& \hspace*{9.5cm} z\in {\mathbb{C}}\backslash {\mathbb{R}}.
\label{5.4}
\end{align}
In particular,
\begin{equation}
\big[\operatorname{Im}\big(M_{A,{\mathcal N}}^{Do} (z)\big)\big]^{-1} \in {\mathcal B}({\mathcal N}), \quad z\in {\mathbb{C}}\backslash {\mathbb{R}}, \label{5.4A}
\end{equation}
and $M_{A,{\mathcal N}}^{Do}(\cdot)$ is a ${\mathcal B}({\mathcal N})$-valued Nevanlinna--Herglotz function that admits the following representation valid in the strong operator topology of ${\mathcal N}$,
\begin{equation}
M_{A,{\mathcal N}}^{Do}(z)= \int_{\mathbb{R}} d\Omega_{A,{\mathcal N}}^{Do}(\lambda) \bigg[\frac{1}{\lambda-z} - \frac{\lambda}{\lambda^2 + 1}\bigg], \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}, \label{5.5}
\end{equation}
where $($see also \eqref{A.42A}--\eqref{A.42b}$)$
\begin{align}
&\Omega_{A,{\mathcal N}}^{Do}(\lambda)=(\lambda^2 + 1) (P_{\mathcal N} E_A(\lambda)P_{\mathcal N}\big\vert_{\mathcal N}), \label{5.6} \\
&\int_{\mathbb{R}} d\Omega_{A,{\mathcal N}}^{Do}(\lambda) \, (1+\lambda^2)^{-1}=I_{\mathcal N}, \label{5.7} \\
&\int_{\mathbb{R}} d(\xi,\Omega_{A,{\mathcal N}}^{Do} (\lambda)\xi)_{\mathcal N}=\infty \, \text{ for all } \, \xi\in {\mathcal N}\backslash\{0\}.
\label{5.8}
\end{align}
\end{theorem}
We just note that inequality \eqref{5.4} follows from
\begin{align}
[\operatorname{Im} (z)]^{-1} \operatorname{Im} (M_{A,{\mathcal N}}^{Do}(z))&= P_{\mathcal N}(I_{\mathcal K}+A^2)^{1/2}\big((A-\operatorname{Re} (z) I_{{\mathcal K}})^2+ (\operatorname{Im} (z))^2 I_{{\mathcal K}}\big)^{-1} \nonumber \\
&\quad \times (I_{\mathcal K}+A^2)^{1/2}P_{\mathcal N}\big\vert_{\mathcal N}, \quad \, z\in {\mathbb{C}}\backslash{\mathbb{R}}, \label{5.8a}
\end{align}
the spectral theorem applied to $(I_{\mathcal K}+A^2)^{1/2}\big((A-\operatorname{Re} (z) I_{{\mathcal K}})^2+ (\operatorname{Im} (z))^2 I_{{\mathcal K}}\big)^{-1} (I_{\mathcal K}+A^2)^{1/2}$, together with
\begin{align}
& \inf_{\lambda \in {\mathbb{R}}} \bigg(\frac{\lambda^2 + 1}{(\lambda-\operatorname{Re}(z))^2 + (\operatorname{Im}(z))^2}\bigg) = \inf_{\lambda \in {\mathbb{R}}} \bigg(\bigg|\frac{\lambda - i}{\lambda - z}\bigg|^2\bigg) \nonumber \\
& \quad = \frac{2}{\big(|z|^2 + 1\big) + \Big[\big(|z|^2 - 1\big)^2 + 4 (\operatorname{Re}(z))^2\Big]^{1/2}}, \quad z \in {\mathbb{C}}\backslash{\mathbb{R}}.
\label{5.8b}
\end{align}
Since
\begin{align}
& \Big[\big(|z|^2 + 1\big) + \big[\big(|z|^2 - 1\big)^2 + 4 (\operatorname{Re}(z))^2\big]^{1/2}\Big] \Big/ 2 \nonumber \\
& \quad \leq \Big[\big(|z|^2 + 1\big) + \big||z|^2 - 1\big| + 2 |\operatorname{Re}(z)|\Big] \Big/ 2 \nonumber \\
& \quad = \max (1,|z|^2)+|\operatorname{Re}(z)|, \quad z \in {\mathbb{C}}\backslash{\mathbb{R}},
\end{align}
the lower bound \eqref{5.4} improves the one for $[\operatorname{Im}(z)]^{-1} \operatorname{Im}\big(M_{A,{\mathcal N}}^{Do} (z)\big)$ recorded in \cite{GKMT01} and \cite{GMT98} if $\operatorname{Re}(z) \neq 0$\footnote{We note that \cite{GKMT01} and \cite{GMT98} contain a typographical error in this context in the sense that $\operatorname{Im}(z)$ must be replaced by $[\operatorname{Im}(z)]^{-1}$ in (4.16) of \cite{GKMT01} and (40) of \cite{GMT98}.}. Operators of the type $M_{A,{\mathcal N}}^{Do}(\cdot)$ and some of its variants have attracted considerable attention in the literature. The interested reader can find a variety of additional results, for instance, in \cite{AB09}, \cite{AP04}, \cite{BL07}, \cite{BM14}--\cite{BR15a}, \cite{BGW09}--\cite{Bu97}, \cite{DHMdS09}--\cite{DMT88}, \cite{GKMT01}--\cite{GT00}, \cite{HMM13}, \cite{KO77}, \cite{KO78}, \cite{LT77}, \cite{Ma92a}, \cite{Ma92b}, \cite{MN11}, \cite{MN12}, \cite{Ma04}, \cite{Mo09}, \cite{Pa13}, \cite{Po04}, \cite{Re98}, \cite{Ry07}, and the references therein. We also add that a model operator approach for the pair $(\dot A, A)$ on the basis of the operator-valued measure $\Omega_{A, {\mathcal N}_{i}}$ has been developed in detail in \cite{GKMT01}. In addition, we mention the following well-known fact (cf., e.g., \cite[Lemma~4.5]{GKMT01}, \cite[p.~80--81]{Kr71}):
\begin{lemma} \label{l5.4}
Assume Hypothesis \ref{h5.1}.
Then ${\mathcal K}$ decomposes into the direct orthogonal sum
\begin{align}
& {\mathcal K}={\mathcal K}_0\oplus {\mathcal K}_0^\bot, \quad \ker \big((\dot A)\high{^*} - z I_{{\mathcal K}}\big) \subset {\mathcal K}_0, \quad z\in {\mathbb{C}}\backslash{\mathbb{R}}, \label{5.9} \\
& {\mathcal K}_0^\bot = \bigcap_{z \in {\mathbb{C}} \backslash {\mathbb{R}}} \ker \big((\dot A)\high{^*} - z I_{{\mathcal K}}\big)^\bot = \bigcap_{z \in {\mathbb{C}} \backslash {\mathbb{R}}} \operatorname{ran} \big(\dot A - z I_{{\mathcal K}}\big), \label{5.9a}
\end{align}
where ${\mathcal K}_0$ and ${\mathcal K}_0^\bot$ are invariant subspaces for all self-adjoint extensions $A$ of $\dot A$ in ${\mathcal K}$, that is,
\begin{equation}
(A - z I_{{\mathcal K}})^{-1}{\mathcal K}_0\subseteq {\mathcal K}_0, \quad (A - z I_{{\mathcal K}})^{-1}{\mathcal K}_0^\bot\subseteq {\mathcal K}_0^\bot, \quad z\in {\mathbb{C}}\backslash{\mathbb{R}}. \label{5.10}
\end{equation}
In addition,
\begin{equation}
{\mathcal K}_0=\overline{\operatorname{lin.span} \{(A - z I_{{\mathcal K}})^{-1}u_+ \, \vert\, u_+\in {\mathcal N}_i, \, z \in {\mathbb{C}}\backslash {\mathbb{R}} \}}. \label{5.10a}
\end{equation}
Moreover, all self-adjoint extensions of $\dot A$ coincide on ${\mathcal K}_0^\bot$, that is, if $A_\alpha$ denotes an arbitrary self-adjoint extension of $\dot A$, then
\begin{equation}
A_\alpha=A_{0,\alpha} \oplus A_0^\bot \, \text{ in } \, {\mathcal K}={\mathcal K}_0\oplus{\mathcal K}_0^\bot, \label{5.11}
\end{equation}
where
\begin{equation}
A_0^\bot \text{ is independent of the chosen } A_{\alpha}, \label{5.12}
\end{equation}
and $A_{0,\alpha}$ $($resp., $A_0^\bot$$)$ is self-adjoint in ${\mathcal K}_0$ $($resp., ${\mathcal K}_0^\bot$$)$.
\end{lemma} In this context we note that a densely defined closed symmetric operator $\dot A$ with deficiency indices $(k,k)$, $k\in {\mathbb{N}}\cup\{\infty\}$, is called {\it completely non-self-adjoint} (equivalently, {\it simple} or {\it prime}) in ${\mathcal K}$ if ${\mathcal K}_0^\bot=\{0\}$ in the decomposition \eqref{5.9} (cf.\ \cite[pp.~80--81]{Kr71}). \begin{remark} \label{r5.5} In addition to Hypothesis \ref{h5.1} assume that $\dot A$ is not completely non-self-adjoint in ${\mathcal K}$. Then in addition to \eqref{5.9}, \eqref{5.11}, and \eqref{5.12} one obtains \begin{equation} \dot A={\dot A}_0\oplus A_0^\bot, \quad {\mathcal N}_{i}={\mathcal N}_{0,i}\oplus \{ 0 \} \label{5.13} \end{equation} with respect to the decomposition ${\mathcal K}={\mathcal K}_0 \oplus {\mathcal K}_0^\bot$. In particular, the part $A_0^\bot$ of $\dot A$ in ${\mathcal K}_0^\bot$ is self-adjoint. Thus, if $A = A_0 \oplus A_0^\bot$ is a self-adjoint extension of $\dot A$ in ${\mathcal K}$, then \begin{equation} M_{A, {\mathcal N}_i}^{Do}(z) = M_{A_0, {\mathcal N}_{0,i}}^{Do}(z), \quad z\in {\mathbb{C}}\backslash {\mathbb{R}}. \label{5.14} \end{equation} This reduces the $A$-dependent spectral properties of the Donoghue-type operator $M_{A, {\mathcal N}_i}^{Do}(\cdot)$ effectively to those of $A_0$. A different manner in which to express this fact would be to note that the subspace ${\mathcal K}_0^\bot$ is ``not detectable'' by $M_{A, {\mathcal N}_i}^{Do}(\cdot)$ (we refer to \cite{BNMW14} for a systematic investigation of this circle of ideas, particularly, in the context of non-self-adjoint operators).
$\diamond$ \end{remark} We are particularly interested in the question under which conditions on $\dot A$ the spectral information for $A$ contained in its family of spectral projections $\{E_A(\lambda)\}_{\lambda \in {\mathbb{R}}}$ is already encoded in the ${\mathcal B}({\mathcal N}_i)$-valued measure $\Omega_{A,{\mathcal N}_i}^{Do}(\cdot)$. In this connection we now mention the following result, denoting by $C_b({\mathbb{R}})$ the space of scalar-valued bounded continuous functions on ${\mathbb{R}}$: \begin{theorem} \label{t5.6} Let $A$ be a self-adjoint operator on a separable Hilbert space ${\mathcal K}$ and $\{E_A(\lambda)\}_{\lambda\in{\mathbb{R}}}$ the family of spectral projections associated with $A$. Suppose that ${\mathcal N}\subset {\mathcal K}$ is a closed linear subspace such that \begin{align} \label{5.22} \overline{\operatorname{lin.span} \, \{g(A)v \,|\, g\in C_b({\mathbb{R}}), \, v\in{\mathcal N}\}} = {\mathcal K}. \end{align} Let $P_{\mathcal N}$ be the orthogonal projection in ${\mathcal K}$ onto ${\mathcal N}$. Then $A$ is unitarily equivalent to the operator of multiplication by the independent variable $\lambda$ in the space $L^2({\mathbb{R}};d\Sigma_A(\lambda);{\mathcal N})$. Here the operator-valued measure $d\Sigma_A(\cdot)$ is given in terms of the Lebesgue--Stieltjes measure defined by the nondecreasing uniformly bounded family $\Sigma_A(\cdot)=P_{\mathcal N} E_A(\cdot)P_{\mathcal N}\big\vert_{\mathcal N}$. \end{theorem} \begin{proof} It suffices to construct a unitary transformation $U:{\mathcal K} \to L^2({\mathbb{R}};d\Sigma_A(\lambda);{\mathcal N})$ that satisfies $U A u = \lambda U u$ for all $u\in\operatorname{dom}(A)$.
First, define $U$ on the set of vectors ${\mathcal S}=\{g(A)v \,|\, g\in C_b({\mathbb{R}}), \; v\in{\mathcal N}\}\subset{\mathcal K}$ by \begin{align} U[g(A)v]=g(\lambda)v, \quad g\in C_b({\mathbb{R}}), \; v\in{\mathcal N}, \end{align} and then extend $U$ by linearity to the span of these vectors, which by assumption is a dense subset of ${\mathcal K}$. Applying the above definition to the function $\lambda g(\lambda)$ yields $U A u = \lambda U u$ for all $u$ in ${\mathcal S}$ and hence by linearity also for all $u$ in the dense subset $\operatorname{lin.span}({\mathcal S})$. In addition, the following simple computation utilizing the spectral theorem for the self-adjoint operator $A$ shows that $U$ is an isometry on ${\mathcal S}$ and hence by linearity also on $\operatorname{lin.span}({\mathcal S})$, \begin{align} \big(f(A)u,g(A)v\big)_{{\mathcal K}} &= \big(u,f(A)^*g(A)v\big)_{{\mathcal K}} = \big(u,P_{\mathcal N} f(A)^*g(A) P_{\mathcal N}\big\vert_{\mathcal N} v\big)_{{\mathcal N}} \nonumber\\ &= \int_{{\mathbb{R}}} \big(u,\overline{f(\lambda)}g(\lambda)\,d\Sigma_A(\lambda)v\big)_{{\mathcal N}} \\ &= \big(f(\cdot)u,g(\cdot)v\big)_{L^2({\mathbb{R}};d\Sigma_A(\lambda);{\mathcal N})}, \quad f,g\in C_b({\mathbb{R}}), \; u,v\in{\mathcal N}. \nonumber \end{align} Thus, $U$ can be extended by continuity to the whole Hilbert space ${\mathcal K}$. Since the range of $U$ contains the set $\{g(\cdot)v \,|\, g\in C_b({\mathbb{R}}), \, v\in{\mathcal N}\}$, which is dense in $L^2({\mathbb{R}};d\Sigma_A(\lambda);{\mathcal N})$ (cf.\ \cite[Appendix~B]{GWZ13b}), it follows that $U$ is a unitary transformation.
\end{proof} \begin{remark} \label{r5.7} Since $\{(\lambda-z)^{-1}\,|\, z\in{\mathbb{C}}\backslash{\mathbb{R}}\}\subset C_b({\mathbb{R}})$, the condition \eqref{5.22} in Theorem \ref{t5.6} can be replaced by the following stronger, and frequently encountered, one, \begin{align} \overline{\operatorname{lin.span} \, \{(A - z I_{{\mathcal K}})^{-1}v \,|\, z\in {\mathbb{C}}\backslash{\mathbb{R}}, \, v\in{\mathcal N}\}} = {\mathcal K}. \label{5.25} \end{align} $\diamond$ \end{remark} Combining Lemma \ref{l5.4}, Remark \ref{r5.5}, Theorem \ref{t5.6}, and Remark \ref{r5.7} then yields the following fact: \begin{corollary} \label{c5.8} Assume Hypothesis \ref{h5.1} and suppose that $A$ is a self-adjoint extension of $\dot A$. Let $M_{A, {\mathcal N}_i}^{Do}(\cdot)$ be the Donoghue-type $m$-operator associated with the pair $(A, {\mathcal N}_i)$, with ${\mathcal N}_i = \ker\big((\dot A)\high{^*} - i I_{{\mathcal K}}\big)$, and denote by $\Omega_{A,{\mathcal N}_i}^{Do}(\cdot)$ the ${\mathcal B}({\mathcal N}_i)$-valued measure in the Nevanlinna--Herglotz representation of $M_{A, {\mathcal N}_i}^{Do}(\cdot)$ $($cf.\ \eqref{5.5}$)$. Then $A$ is unitarily equivalent to the operator of multiplication by the independent variable $\lambda$ in the space $L^2({\mathbb{R}}; (\lambda^2 + 1)^{-1}d\Omega_{A,{\mathcal N}_i}^{Do}(\lambda);{\mathcal N}_i)$, with $\Omega_{A,{\mathcal N}_i}^{Do}(\lambda) = (\lambda^2 + 1) P_{{\mathcal N}_i} E_A(\lambda) P_{{\mathcal N}_i}\big\vert_{{\mathcal N}_i}$, $\lambda \in {\mathbb{R}}$, if and only if $\dot A$ is completely non-self-adjoint in ${\mathcal K}$.
\end{corollary} \begin{proof} If $\dot A$ is completely non-self-adjoint in ${\mathcal K}$, then ${\mathcal K}_0 = {\mathcal K}$, ${\mathcal K}_0^{\bot} = \{0\}$ in \eqref{5.9}, and hence \eqref{5.10a}, combined with \eqref{5.25} for ${\mathcal N} = {\mathcal N}_i$, yields $\Sigma_A (\lambda) = P_{{\mathcal N}_i} E_A(\lambda) P_{{\mathcal N}_i}\big\vert_{{\mathcal N}_i} = (\lambda^2 + 1)^{-1} \Omega_{A,{\mathcal N}_i}^{Do}(\lambda)$, $\lambda \in {\mathbb{R}}$, in Theorem \ref{t5.6}. Conversely, if $\dot A$ is not completely non-self-adjoint in ${\mathcal K}$, then the fact \eqref{5.14} shows that $\Omega_{A,{\mathcal N}_i}^{Do}(\cdot)$ cannot describe the nontrivial self-adjoint operator $A_0^{\bot}$ in ${\mathcal K}_0^{\bot} \supsetneq \{0\}$. \end{proof} In other words, $\dot A$ is completely non-self-adjoint in ${\mathcal K}$ if and only if the entire spectral information on $A$ contained in its family of spectral projections $E_A(\cdot)$ is already encoded in the ${\mathcal B}({\mathcal N}_i)$-valued measure $\Omega_{A,{\mathcal N}_i}^{Do}(\cdot)$ (including multiplicity properties of the spectrum of $A$). \section{Donoghue-type $m$-Functions for Schr\"odinger Operators with Operator-Valued Potentials and Their Connections to Weyl--Titchmarsh $m$-Functions} \label{s6} In this principal section we construct Donoghue-type $m$-functions for half-line and full-line Schr\"odinger operators with operator-valued potentials and establish their precise connection with the Weyl--Titchmarsh $m$-functions discussed in Sections \ref{s3} and \ref{s4}. To avoid overly lengthy expressions involving resolvent operators, we now simplify our notation a bit and use the symbol $I$ to denote the identity operator in $L^2((x_0, \pm \infty); dx;{\mathcal H})$ and $L^2({\mathbb{R}}; dx; {\mathcal H})$.
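For orientation (as an illustrative aside, not needed in the sequel), we note the simplest special case of the Donoghue-type $m$-operator of Section \ref{s5}, namely, deficiency indices $(1,1)$: if ${\mathcal N}_i = \operatorname{lin.span}\{u_+\}$ with $\|u_+\|_{{\mathcal K}} = 1$, then $M_{A,{\mathcal N}_i}^{Do}(\cdot)$ reduces to the scalar Nevanlinna--Herglotz function \begin{align*} M_{A,{\mathcal N}_i}^{Do}(z) &= \big(u_+, (z A + I_{{\mathcal K}})(A - z I_{{\mathcal K}})^{-1} u_+\big)_{{\mathcal K}} = \int_{{\mathbb{R}}} d\big(u_+, E_A(\lambda) u_+\big)_{{\mathcal K}} \, \frac{z \lambda + 1}{\lambda - z} \\ &= \int_{{\mathbb{R}}} d\Omega_{A,{\mathcal N}_i}^{Do}(\lambda) \bigg[\frac{1}{\lambda-z} - \frac{\lambda}{\lambda^2 + 1}\bigg], \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}, \end{align*} with scalar measure $\Omega_{A,{\mathcal N}_i}^{Do}(\lambda) = (\lambda^2 + 1) \big(u_+, E_A(\lambda) u_+\big)_{{\mathcal K}}$, recovering the classical setting of Donoghue.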
The principal hypothesis for this section will be the following: \begin{hypothesis} \label{h6.1} ${}$ \\ $(i)$ For half-line Schr\"odinger operators on $[x_0,\infty)$ we assume Hypothesis \ref{h2.7} with $a=x_0$, $b = \infty$, and assume $\tau = - (d^2/dx^2) I_{{\mathcal H}} + V(x)$ to be in the limit-point case at $\infty$. \\ $(ii)$ For half-line Schr\"odinger operators on $(-\infty,x_0]$ we assume Hypothesis \ref{h2.7} with $a=-\infty$, $b = x_0$, and assume $\tau = - (d^2/dx^2) I_{{\mathcal H}} + V(x)$ to be in the limit-point case at $-\infty$. \\ $(iii)$ For Schr\"odinger operators on ${\mathbb{R}}$ we assume Hypothesis \ref{h2.8}. \end{hypothesis} \subsection{The half-line case:} We start with half-line Schr\"odinger operators $H_{\pm,\min}$ in $L^2((x_0,\pm\infty); dx; {\mathcal H})$ and note that for $\{e_j\}_{j \in {\mathcal J}}$ a given orthonormal basis in ${\mathcal H}$ (${\mathcal J} \subseteq {\mathbb{N}}$ an appropriate index set), and $z \in {\mathbb{C}} \backslash {\mathbb{R}}$, \begin{equation} \{\psi_{\pm,\alpha}(z, \, \cdot \,,x_0) e_j\}_{j \in {\mathcal J}} \label{6.1} \end{equation} is a basis in the deficiency subspace ${\mathcal N}_{\pm,z} = \ker\big(H_{\pm,\min}^* - z I\big)$.
In particular, given $f \in L^2((x_0,\pm\infty); dx; {\mathcal H})$, one has \begin{equation} f \bot \{\psi_{\pm,\alpha}(z, \, \cdot \,,x_0) e_j\}_{j \in {\mathcal J}}, \label{6.2} \end{equation} if and only if \begin{align} \begin{split} 0 &= (\psi_{\pm,\alpha}(z, \, \cdot \,,x_0) e_j,f)_{L^2((x_0,\pm\infty); dx; {\mathcal H})} = \pm \int_{x_0}^{\pm\infty} dx \, (\psi_{\pm,\alpha}(z,x,x_0) e_j,f(x))_{{\mathcal H}} \label{6.3} \\ &= \pm \int_{x_0}^{\pm\infty} dx \, (e_j,\psi_{\pm,\alpha}(z,x,x_0)^* f(x))_{{\mathcal H}}, \quad j \in {\mathcal J}, \end{split} \end{align} and since $j \in {\mathcal J}$ is arbitrary, \begin{align} \begin{split} & f \bot \{\psi_{\pm,\alpha}(z, \, \cdot \,,x_0) e_j\}_{j \in {\mathcal J}} \, \text{ if and only if } \\ & \quad \pm \int_{x_0}^{\pm\infty} dx \, (h,\psi_{\pm,\alpha}(z,x,x_0)^* f(x))_{{\mathcal H}} = 0, \quad h \in {\mathcal H}, \label{6.4} \end{split} \end{align} a fact to be exploited below in \eqref{6.5}. Next, we prove the following generating property of the deficiency spaces of $H_{\pm,\min}$: \begin{theorem} \label{t6.2} Assume Hypothesis \ref{h6.1}\,$(i)$, respectively, $(ii)$, and suppose that $f \in L^2((x_0,\pm\infty); dx; {\mathcal H})$ satisfies $f \bot \ker\big(H_{\pm,\min}^* - z I\big)$ for all $z \in {\mathbb{C}} \backslash {\mathbb{R}}$. Then $f = 0$. Equivalently, $H_{\pm,\min}$ are completely non-self-adjoint in $L^2((x_0,\pm\infty); dx; {\mathcal H})$. \end{theorem} \begin{proof} We focus on the right half-line $[x_0,\infty)$ and recall the ${\mathcal B}({\mathcal H})$-valued Green's function $G_{+,\alpha}(z,\, \cdot \, , \, \cdot \,)$ in \eqref{3.63A} of a self-adjoint extension $H_{+,\alpha}$ of $H_{+,\min}$.
Choosing a test function $\eta \in C_0^{\infty}((x_0,\infty); {\mathcal H})$, $\lambda_j \in {\mathbb{R}}$, $j=1,2$, $\lambda_1 < \lambda_2$, one computes with the help of Stone's formula (cf.\ Lemma \ref{l2.4a}), \begin{align} & (\eta, E_{H_{+,\alpha}}((\lambda_1,\lambda_2]) f)_{L^2((x_0,\infty); dx; {\mathcal H})} \nonumber \\ & \quad = \lim_{\delta \downarrow 0} \lim_{\varepsilon \downarrow 0} \frac{1}{2 \pi i} \int_{\lambda_1 + \delta}^{\lambda_2 + \delta} d\lambda \, \big[(\eta, (H_{+,\alpha} - (\lambda + i \varepsilon)I)^{-1} f)_{L^2((x_0,\infty); dx; {\mathcal H})} \nonumber \\ & \hspace*{4.2cm} - (\eta, (H_{+,\alpha} - (\lambda - i \varepsilon)I)^{-1} f)_{L^2((x_0,\infty); dx; {\mathcal H})}\big] \nonumber \\ & \quad = \lim_{\delta \downarrow 0} \lim_{\varepsilon \downarrow 0} \frac{1}{2 \pi i} \int_{\lambda_1 + \delta}^{\lambda_2 + \delta} d\lambda \int_{x_0}^{\infty} dx \nonumber \\ & \qquad \times \bigg\{\bigg[\bigg(\eta(x), \psi_{+,\alpha}(\lambda + i \varepsilon,x,x_0) \int_{x_0}^x dx' \, \phi_{\alpha}(\lambda - i \varepsilon, x', x_0)^* f(x')\bigg)_{{\mathcal H}} \nonumber \\ & \hspace*{1.5cm} + \underbrace{\int_{x_0}^{\infty} dx' \, (\phi_{\alpha}(\lambda + i \varepsilon,x,x_0)^* \eta(x), \psi_{+,\alpha}(\lambda - i \varepsilon,x',x_0)^* f(x'))_{{\mathcal H}}}_{=0 \text{ by \eqref{6.4}}} \nonumber \\ & \hspace*{1.5cm} - \bigg(\eta(x), \phi_{\alpha}(\lambda + i \varepsilon,x,x_0) \int_{x_0}^x dx' \, \psi_{+,\alpha}(\lambda - i \varepsilon,x',x_0)^* f(x')\bigg)_{{\mathcal H}}\bigg] \nonumber \\ & \hspace*{1.5cm} - \bigg[\bigg(\eta(x), \psi_{+,\alpha}(\lambda - i \varepsilon,x,x_0) \int_{x_0}^x dx' \, \phi_{\alpha}(\lambda + i \varepsilon, x', x_0)^* f(x')\bigg)_{{\mathcal H}} \nonumber \\
& \hspace*{2cm} + \underbrace{\int_{x_0}^{\infty} dx' \, (\phi_{\alpha}(\lambda - i \varepsilon,x,x_0)^* \eta(x), \psi_{+,\alpha}(\lambda + i \varepsilon,x',x_0)^* f(x'))_{{\mathcal H}}}_{= 0 \text{ by \eqref{6.4}}} \nonumber \\ & \hspace*{2cm} - \bigg(\eta(x), \phi_{\alpha}(\lambda - i \varepsilon,x,x_0) \int_{x_0}^x dx' \, \psi_{+,\alpha}(\lambda + i \varepsilon,x',x_0)^* f(x')\bigg)_{{\mathcal H}}\bigg]\bigg\}. \label{6.5} \end{align} Here we twice employed the orthogonality condition \eqref{6.4} in the terms with underbraces. Thus, one finally concludes, \begin{align} & (\eta, E_{H_{+,\alpha}}((\lambda_1,\lambda_2]) f)_{L^2((x_0,\infty); dx; {\mathcal H})} \nonumber \\ & \quad = \lim_{\delta \downarrow 0} \lim_{\varepsilon \downarrow 0} \frac{1}{2 \pi i} \int_{\lambda_1 + \delta}^{\lambda_2 + \delta} d\lambda \int_{x_0}^{\infty} dx \int_{x_0}^x dx' \nonumber \\ & \hspace*{4cm} \times \big[(\eta(x),[\theta_{\alpha}(\lambda + i \varepsilon,x,x_0) \phi_{\alpha}(\lambda - i \varepsilon,x',x_0)^* \nonumber \\ & \hspace*{5cm} - \phi_{\alpha}(\lambda + i \varepsilon,x,x_0) \theta_{\alpha}(\lambda - i \varepsilon,x',x_0)^*]f(x'))_{{\mathcal H}} \nonumber \\ & \hspace*{4.5cm} - (\eta(x),[\theta_{\alpha}(\lambda - i \varepsilon,x,x_0) \phi_{\alpha}(\lambda + i \varepsilon,x',x_0)^* \nonumber \\ & \hspace*{5cm} - \phi_{\alpha}(\lambda - i \varepsilon,x,x_0) \theta_{\alpha}(\lambda + i \varepsilon,x',x_0)^*]f(x'))_{{\mathcal H}} \big] \nonumber \\ & \quad = 0. \label{6.6} \end{align} Here we used the fact that $\eta$ has compact support, so that all $x$-integrals extend only over the bounded set $\operatorname{supp}(\eta)$.
In addition, we employed the property that for fixed $x \in [x_0,\infty)$, $\phi_{\alpha}(z,x,x_0)$ and $\theta_{\alpha}(z,x,x_0)$ are entire with respect to $z \in {\mathbb{C}}$, freely permitting the interchange of the $\varepsilon$ limit with all integrals and implying the vanishing of the limit $\varepsilon \downarrow 0$ in the last step of \eqref{6.6}. Since $\eta \in C_0^{\infty}((x_0,\infty); {\mathcal H})$ and $\lambda_1, \lambda_2 \in {\mathbb{R}}$ were arbitrary, \eqref{6.6} proves $f=0$. The fact that $H_{\pm,\min}$ are completely non-self-adjoint in $L^2((x_0,\pm\infty); dx; {\mathcal H})$ now follows from \eqref{5.9a}. \end{proof} We note that Theorem \ref{t6.2} in the context of regular (and quasi-regular) half-line differential operators with scalar coefficients has been established by Gilbert \cite[Theorem~3]{Gi72}. The corresponding result for $2n \times 2n$ Hamiltonian systems, $n \in {\mathbb{N}}$, was established in \cite[Proposition~7.4]{DLDS88}, and the case of indefinite Sturm--Liouville operators in the associated Krein space has been treated in \cite[Proposition~4.8]{BT07}. While these proofs exhibit certain similarities with that of Theorem \ref{t6.2}, it appears that our approach in the case of a regular half-line Schr\"odinger operator with ${\mathcal B}({\mathcal H})$-valued potential is a canonical one. For future purposes we recall formulas \eqref{2.59a}--\eqref{2.62}, and now add some additional results: \begin{lemma} \label{l6.3} Assume Hypothesis \ref{h6.1}\,$(i)$, respectively, $(ii)$, and let $z \in {\mathbb{C}} \backslash {\mathbb{R}}$.
Then, for all $h \in {\mathcal H}$ and $\rho_{\pm,\alpha}(\, \cdot \, , x_0)$-a.e.~$\lambda \in \sigma(H_{\pm,\alpha})$, \begin{align} & \pm \slim_{R \to \infty} \int_{x_0}^{\pm R} dx \, \phi_{\alpha}(\lambda,x,x_0)^* \psi_{\pm,\alpha}(z,x,x_0) h = \pm (\lambda -z)^{-1} h, \label{6.10} \\ & \pm \slim_{R \to \infty} \int_{x_0}^{\pm R} dx \, \theta_{\alpha}(\lambda,x,x_0)^* \psi_{\pm,\alpha}(z,x,x_0) h = \mp (\lambda -z)^{-1} m_{\pm,\alpha}(z,x_0) h, \label{6.11} \end{align} where $\slim$ refers to the $L^2({\mathbb{R}};d\rho_{\pm,\alpha}(\,\cdot\,,x_0);{\mathcal H})$-limit. \end{lemma} \begin{proof} Without loss of generality, we consider the case of $H_{+,\alpha}$ only. Let $u \in C_0^{\infty}((x_0,\infty); {\mathcal H}) \subset L^2((x_0,\infty); dx; {\mathcal H})$ and $v = (H_{+,\alpha} - z I)^{-1} u$; then by Theorem \ref{t2.5}, \eqref{2.40}, and \eqref{2.45}, \begin{align} u &= (H_{+,\alpha} - z I) v = \slim_{\mu_2 \uparrow \infty, \mu_1 \downarrow - \infty} \int_{\mu_1}^{\mu_2} \phi_{\alpha}(\lambda, \, \cdot \, , x_0) \, d\rho_{+,\alpha}(\lambda,x_0) \, \widehat u_{+,\alpha}(\lambda) \nonumber \\ & = \slim_{\mu_2 \uparrow \infty, \mu_1 \downarrow - \infty} \int_{\mu_1}^{\mu_2} (\lambda - z) \phi_{\alpha}(\lambda, \, \cdot \, , x_0) \, d\rho_{+,\alpha}(\lambda,x_0) \, \widehat v_{+,\alpha}(\lambda), \end{align} that is, \begin{equation} \widehat v_{+,\alpha}(\lambda) = (\lambda - z)^{-1} \widehat u_{+,\alpha}(\lambda) \, \text{ for $\rho_{+,\alpha}(\, \cdot \,,x_0)$-a.e.~$\lambda \in \sigma(H_{+,\alpha})$.} \end{equation} Hence, \begin{align} v &= (H_{+,\alpha} - z I)^{-1} u \nonumber \\ & = \slim_{\mu_2 \uparrow \infty, \mu_1 \downarrow - \infty} \int_{\mu_1}^{\mu_2} \phi_{\alpha}(\lambda, \, \cdot \, , x_0) \, d\rho_{+,\alpha}(\lambda,x_0) \,
\widehat u_{+,\alpha}(\lambda) (\lambda - z)^{-1} \nonumber \\ & = \int_{x_0}^\infty dx' \, G_{+,\alpha}(z,\, \cdot \,,x') u(x'). \end{align} Thus one computes, given the unitarity of $U_{+,\alpha}$ (cf.\ \eqref{2.40}, \eqref{2.45}), \begin{align} & \big(h, \big((H_{+,\alpha} - z I)^{-1} u\big)(x)\big)_{{\mathcal H}} = \int_{x_0}^{\infty} dx' \, (h, G_{+,\alpha}(z,x,x') u(x'))_{{\mathcal H}} \nonumber \\ & \quad = \int_{x_0}^{\infty} dx' \, (G_{+,\alpha}(z,x,x')^* h, u(x'))_{{\mathcal H}} \nonumber \\ & \quad = \slim_{\mu_2 \uparrow \infty, \mu_1 \downarrow - \infty} \int_{\mu_1}^{\mu_2} \big(\widehat{(G_{+,\alpha}(z,x,\, \cdot \,)^* h)} (\lambda), d \rho_{+,\alpha} (\lambda, x_0) \, \widehat u_{+,\alpha}(\lambda)\big)_{{\mathcal H}} \nonumber \\ & \quad = \slim_{\mu_2 \uparrow \infty, \mu_1 \downarrow - \infty} \int_{\mu_1}^{\mu_2} \big(h,\phi_{\alpha}(\lambda, x , x_0) \, d\rho_{+,\alpha}(\lambda,x_0) \, \widehat u_{+,\alpha}(\lambda)\big)_{{\mathcal H}} (\lambda - z)^{-1} \nonumber \\ & \quad = \slim_{\mu_2 \uparrow \infty, \mu_1 \downarrow - \infty} \int_{\mu_1}^{\mu_2} \big((\lambda - \overline{z})^{-1}\phi_{\alpha}(\lambda, x , x_0)^* h, \, d\rho_{+,\alpha}(\lambda,x_0) \, \widehat u_{+,\alpha}(\lambda)\big)_{{\mathcal H}}.
\nonumber \\ \end{align} Since $u \in C_0^{\infty}((x_0,\infty); {\mathcal H})$ was arbitrary, one concludes that \begin{align} \begin{split} \Big(\widehat{G_{+,\alpha}(z,x, \, \cdot \,)^* h}\Big)(\lambda) = (\lambda - \overline{z})^{-1}\phi_{\alpha}(\lambda, x , x_0)^* h, \quad h \in {\mathcal H}, \; z \in {\mathbb{C}} \backslash {\mathbb{R}},& \label{6.11A} \\ \text{for $\rho_{+,\alpha}(\, \cdot \,,x_0)$-a.e.~$\lambda \in \sigma(H_{+,\alpha})$.}& \end{split} \end{align} In precisely the same manner one derives, \begin{align} \begin{split} \Big(\partial_x \widehat{G_{+,\alpha}(z,x, \, \cdot \,)^* h}\Big)(\lambda) = (\lambda - \overline{z})^{-1}\phi_{\alpha}'(\lambda, x , x_0)^* h, \quad h \in {\mathcal H}, \; z \in {\mathbb{C}} \backslash {\mathbb{R}},& \label{6.11B} \\ \text{for $\rho_{+,\alpha}(\, \cdot \,,x_0)$-a.e.~$\lambda \in \sigma(H_{+,\alpha})$.}& \end{split} \end{align} Taking $x \downarrow x_0$ in \eqref{6.11A} and \eqref{6.11B}, observing that \begin{align} \begin{split} & G_{+,\alpha}(z,x_0,x') = \sin(\alpha) \psi_{+,\alpha}({\overline z},x',x_0), \\ & [\partial_x G_{+,\alpha}(z,x,x')]\big|_{x=x_0} = \cos(\alpha) \psi_{+,\alpha}({\overline z},x',x_0), \end{split} \end{align} and choosing $h = \sin(\alpha) g$ in \eqref{6.11A} and $h = \cos(\alpha) g$ in \eqref{6.11B}, $g \in {\mathcal H}$, then yields \begin{align} \widehat{\Big(\psi_{+,\alpha} (\overline z, \, \cdot \,, x_0) [\sin(\alpha)]^2 g\Big)}(\lambda) = (\lambda - {\overline z})^{-1} [\sin(\alpha)]^2 g,& \label{6.11C} \\ \widehat{\Big(\psi_{+,\alpha} (\overline z, \, \cdot \,, x_0) [\cos(\alpha)]^2 g\Big)}(\lambda) = (\lambda - \overline z)^{-1} [\cos(\alpha)]^2 g,& \label{6.11D} \\ g \in {\mathcal H}, \; z \in {\mathbb{C}} \backslash
{\mathbb{R}}, \, \text{ for $\rho_{+,\alpha}(\, \cdot \,,x_0)$-a.e.~$\lambda \in \sigma(H_{+,\alpha})$.}& \nonumber \end{align} Adding equations \eqref{6.11C} and \eqref{6.11D} yields relation \eqref{6.10}. Finally, changing $\alpha$ into $\alpha - (\pi/2)I_{{\mathcal H}}$ and noticing that \begin{align} & \phi_{\alpha - (\pi/2)I_{{\mathcal H}}} (z,\, \cdot \,, x_0) = \theta_{\alpha} (z,\, \cdot \,, x_0), \quad \theta_{\alpha - (\pi/2)I_{{\mathcal H}}} (z,\, \cdot \,, x_0) = - \phi_{\alpha} (z,\, \cdot \,, x_0), \\ & m_{+,\alpha - (\pi/2)I_{{\mathcal H}}} (z,x_0) = - [m_{+,\alpha} (z,x_0)]^{-1}, \\ & \psi_{+,\alpha - (\pi/2) I_{{\mathcal H}}} (z, \, \cdot \,,x_0) = - \psi_{+,\alpha} (z, \, \cdot \,,x_0) [m_{+,\alpha} (z,x_0)]^{-1}, \end{align} yields \begin{equation} \int_{x_0}^{\infty} dx \, \theta_{\alpha}(\lambda,x,x_0)^* \psi_{+,\alpha}(z,x,x_0) \widetilde h = - (\lambda -z)^{-1} m_{+,\alpha}(z,x_0) \widetilde h, \end{equation} with $\widetilde h = - [m_{+,\alpha} (z,x_0)]^{-1} h$, and hence \eqref{6.11} since $h \in {\mathcal H}$ was arbitrary.
\end{proof} By Theorem \ref{t3.3}, \begin{equation} [\operatorname{Im}(m_{\pm,\alpha}(z,x_0))]^{-1} \in {\mathcal B}({\mathcal H}), \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}, \label{6.11b} \end{equation} and therefore \begin{align} & \big(\psi_{\pm,\alpha}(z, \, \cdot \,,x_0) [\pm (\operatorname{Im}(z))^{-1} \operatorname{Im}(m_{\pm,\alpha}(z,x_0))]^{-1/2} e_j, \psi_{\pm,\alpha}(z, \, \cdot \,,x_0) \nonumber \\ & \qquad \times [\pm (\operatorname{Im}(z))^{-1}\operatorname{Im}(m_{\pm,\alpha}(z,x_0))]^{-1/2} e_k\big)_{L^2((x_0,\pm\infty);dx;{\mathcal H})} \nonumber \\ & \quad = \big([\pm \operatorname{Im}(m_{\pm,\alpha}(z,x_0))]^{-1/2} e_j, \pm\operatorname{Im}(m_{\pm,\alpha}(z,x_0)) \nonumber \\ & \qquad \times [\pm \operatorname{Im}(m_{\pm,\alpha}(z,x_0))]^{-1/2} e_k\big)_{{\mathcal H}} \nonumber \\ & \quad = (e_j,e_k)_{{\mathcal H}} = \delta_{j,k}, \quad j,k \in {\mathcal J}, \; z\in{\mathbb{C}}\backslash{\mathbb{R}}. \label{6.11c} \end{align} Thus, one obtains in addition to \eqref{6.1} that \begin{equation} \big\{\Psi_{\pm,\alpha,j}(z,\, \cdot \, , x_0) = \psi_{\pm,\alpha}(z, \, \cdot \,,x_0) [\pm (\operatorname{Im}(z))^{-1} \operatorname{Im}(m_{\pm,\alpha}(z,x_0))]^{-1/2} e_j\big\}_{j \in {\mathcal J}} \label{6.12} \end{equation} is an orthonormal basis for ${\mathcal N}_{\pm,z} = \ker\big(H_{\pm,\min}^* - z I\big)$, $z \in {\mathbb{C}} \backslash {\mathbb{R}}$, and hence (cf.\ the definition of $P_{{\mathcal N}}$ in Section \ref{s5}) \begin{equation} P_{{\mathcal N}_{\pm,i}} = \sum_{j \in {\mathcal J}} \big(\Psi_{\pm,\alpha,j}(i,\, \cdot \, , x_0), \, \cdot \, \big)_{L^2((x_0,\pm\infty); dx; {\mathcal H})} \Psi_{\pm,\alpha,j}(i,\, \cdot \, , x_0).
\label{6.13} \end{equation} Consequently (cf.\ \eqref{5.3}), one obtains for the half-line Donoghue-type $m$-functions, \begin{align} \begin{split} M_{H_{\pm,\alpha}, {\mathcal N}_{\pm,i}}^{Do} (z,x_0) &= \pm P_{{\mathcal N}_{\pm,i}} (z H_{\pm,\alpha} + I) (H_{\pm,\alpha} - z I)^{-1} P_{{\mathcal N}_{\pm,i}} \big|_{{\mathcal N}_{\pm,i}} \label{6.14} \\ &= \int_{\mathbb{R}} d\Omega_{H_{\pm,\alpha},{\mathcal N}_{\pm,i}}^{Do}(\lambda,x_0) \bigg[\frac{1}{\lambda-z} - \frac{\lambda}{\lambda^2 + 1}\bigg], \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}, \end{split} \end{align} where $\Omega_{H_{\pm,\alpha},{\mathcal N}_{\pm,i}}^{Do}(\, \cdot\, , x_0)$ satisfies the analogs of \eqref{5.6}--\eqref{5.8} (resp., \eqref{A.42A}--\eqref{A.42b}). Next, we explicitly compute $M_{H_{\pm,\alpha}, {\mathcal N}_{\pm,i}}^{Do} (\, \cdot \,,x_0)$. \begin{theorem} \label{t6.3} Assume Hypothesis \ref{h6.1}\,$(i)$, respectively, $(ii)$.
Then, \begin{align} \begin{split} & M_{H_{\pm,\alpha}, {\mathcal N}_{\pm,i}}^{Do} (z,x_0) = \pm \sum_{j,k \in {\mathcal J}} \big(e_j, m_{\pm,\alpha}^{Do}(z,x_0) e_k\big)_{{\mathcal H}} \\ & \quad \times (\Psi_{\pm,\alpha,k}(i, \, \cdot \, ,x_0), \, \cdot \, )_{L^2((x_0,\pm \infty); dx; {\mathcal H})} \Psi_{\pm,\alpha,j}(i, \, \cdot \, ,x_0) \big|_{{\mathcal N}_{\pm,i}}, \quad z \in {\mathbb{C}} \backslash {\mathbb{R}}, \label{6.15} \end{split} \end{align} where the ${\mathcal B}({\mathcal H})$-valued Nevanlinna--Herglotz functions $m_{\pm,\alpha}^{Do}(\, \cdot \, , x_0)$ are given by \begin{align} m_{\pm,\alpha}^{Do}(z,x_0) &= \pm [\pm \operatorname{Im}(m_{\pm,\alpha}(i,x_0))]^{-1/2} [m_{\pm,\alpha}(z,x_0) - \operatorname{Re}(m_{\pm,\alpha}(i,x_0))] \nonumber \\ & \quad \times [\pm \operatorname{Im}(m_{\pm,\alpha}(i,x_0))]^{-1/2} \label{6.16} \\ &= d_{\pm, \alpha} \pm \int_{\mathbb{R}} d\omega_{\pm,\alpha}^{Do}(\lambda,x_0) \bigg[\frac{1}{\lambda-z} - \frac{\lambda}{\lambda^2 + 1}\bigg], \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}. \label{6.16a} \end{align} Here $d_{\pm,\alpha} = \operatorname{Re}(m_{\pm,\alpha}^{Do}(i,x_0)) \in {\mathcal B}({\mathcal H})$, and \begin{equation} \omega_{\pm,\alpha}^{Do}(\, \cdot \,,x_0) = [\pm \operatorname{Im}(m_{\pm,\alpha}(i,x_0))]^{-1/2} \rho_{\pm,\alpha}(\,\cdot\,,x_0) [\pm \operatorname{Im}(m_{\pm,\alpha}(i,x_0))]^{-1/2} \end{equation} satisfies the analogs of \eqref{A.42a}, \eqref{A.42b}. \end{theorem} \begin{proof} We will consider the right half-line $[x_0,\infty)$.
To verify \eqref{6.15} it suffices to insert \eqref{6.13} into \eqref{6.14} and then apply \eqref{2.28}, \eqref{2.29} to compute, \begin{align} & \big(\Psi_{+,\alpha,j}(i,\, \cdot \, , x_0), (z H_{+,\alpha} + I) (H_{+,\alpha} - z I)^{-1} \Psi_{+,\alpha,k}(i,\, \cdot \, , x_0)\big)_{L^2((x_0,\infty); dx; {\mathcal H})} \nonumber \\ & \quad = \big(\widehat e_{j,+,\alpha}, (z \cdot + I_{{\mathcal H}}) (\cdot - z I_{{\mathcal H}})^{-1} \widehat e_{k,+,\alpha}\big)_{L^2({\mathbb{R}}; d \rho_{+,\alpha}; {\mathcal H})} \nonumber \\ & \quad = \int_{{\mathbb{R}}} d \big(\widehat e_{j,+,\alpha}, \rho_{+,\alpha} (\lambda,x_0) \widehat e_{k,+,\alpha}\big)_{{\mathcal H}} \, \frac{z \lambda + 1}{\lambda - z}, \quad j, k \in {\mathcal J}, \label{6.17} \end{align} where \begin{align} \widehat e_{j,+,\alpha} (\lambda) &= \int_{x_0}^{\infty} dx \, \phi_{\alpha}(\lambda,x,x_0)^* \psi_{+,\alpha}(i,x,x_0) [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} e_j \nonumber \\ &= (\lambda - i)^{-1} [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} e_j, \quad j \in {\mathcal J}, \label{6.18} \end{align} employing \eqref{6.10} (with $z=i$).
Thus, \begin{align} \eqref{6.17} &= \int_{{\mathbb{R}}} d \big([\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} e_j, \rho_{+,\alpha} (\lambda,x_0) [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} e_k\big)_{{\mathcal H}} \nonumber \\ & \hspace*{8.5mm} \times \frac{z \lambda + 1}{\lambda - z} \frac{1}{\lambda^2 + 1} \nonumber \\ & = \int_{{\mathbb{R}}} d \big([\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} e_j, \rho_{+,\alpha} (\lambda,x_0) [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} e_k\big)_{{\mathcal H}} \nonumber \\ & \hspace*{8.5mm} \times \bigg[\frac{1}{\lambda - z} - \frac{\lambda}{\lambda^2 + 1}\bigg] \nonumber \\ & = \big([\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} e_j, [m_{+,\alpha}(z,x_0) - \operatorname{Re}(m_{+,\alpha}(i,x_0))] \nonumber \\ & \hspace*{5mm} \times [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} e_k\big)_{{\mathcal H}}, \label{6.19} \end{align} using \eqref{2.25}, \eqref{2.25a} in the final step. \end{proof} \begin{remark} \label{r6.4} Combining Corollary \ref{c5.8} and Theorem \ref{t6.2} proves that the entire spectral information for $H_{\pm,\alpha}$, contained in the corresponding family of spectral projections $\{E_{H_{\pm,\alpha}}(\lambda)\}_{\lambda \in {\mathbb{R}}}$ in $L^2((x_0,\pm\infty); dx; {\mathcal H})$, is already encoded in the operator-valued measure $\{\Omega_{H_{\pm,\alpha},{\mathcal N}_{\pm,i}}^{Do}(\lambda,x_0)\}_{\lambda \in {\mathbb{R}}}$ in ${\mathcal N}_{\pm,i}$ (including multiplicity properties of the spectrum of $H_{\pm,\alpha}$). By the same token, invoking Theorem \ref{t6.3} shows that the entire spectral information for $H_{\pm,\alpha}$ is already contained in $\{\omega_{\pm,\alpha}^{Do}(\lambda,x_0)\}_{\lambda \in {\mathbb{R}}}$ in ${\mathcal H}$.
${}$ $\diamond$ \end{remark} \subsection{The full-line case:} In the remainder of this section we turn to Schr\"odinger operators on ${\mathbb{R}}$, assuming Hypothesis \ref{h2.8}. Decomposing \begin{equation} L^2({\mathbb{R}}; dx; {\mathcal H}) = L^2((-\infty,x_0); dx; {\mathcal H}) \oplus L^2((x_0, \infty); dx; {\mathcal H}), \label{6.20} \end{equation} and introducing the orthogonal projections $P_{\pm,x_0}$ of $L^2({\mathbb{R}}; dx; {\mathcal H})$ onto the right/left subspaces $L^2((x_0,\pm\infty); dx; {\mathcal H})$, we now define a particular minimal operator $H_{\min}$ in $L^2({\mathbb{R}}; dx; {\mathcal H})$ via \begin{align} H_{\min} &:= H_{-,\min} \oplus H_{+,\min}, \quad H_{\min}^* = H_{-,\min}^* \oplus H_{+,\min}^*, \label{6.23} \\ {\mathcal N}_z &\, = \ker\big(H_{\min}^* - z I \big) = \ker\big(H_{-,\min}^* - z I \big) \oplus \ker\big(H_{+,\min}^* - z I \big) \nonumber \\ &\, = {\mathcal N}_{-,z} \oplus {\mathcal N}_{+,z}, \quad z \in {\mathbb{C}} \backslash {\mathbb{R}}. \label{6.23a} \end{align} We note that \eqref{6.23} is not the standard minimal operator associated with the differential expression $\tau$ on ${\mathbb{R}}$. Usually, one introduces \begin{align} & \widehat H_{\min} f = \tau f, \nonumber \\ & f\in \operatorname{dom}\big(\widehat H_{\min}\big)=\big\{g\in L^2({\mathbb{R}};dx;{\mathcal H}) \,\big|\, g\in W^{2,1}_{\rm loc}({\mathbb{R}};dx;{\mathcal H}); \, \operatorname{supp}(g) \, \text{compact}; \nonumber \\ & \hspace*{7.6cm} \tau g\in L^2({\mathbb{R}};dx;{\mathcal H})\big\}. \label{6.24} \end{align} However, due to our limit-point assumption at $\pm \infty$, $\widehat H_{\min}$ is essentially self-adjoint and hence (cf.\ \eqref{2.53}), \begin{equation} \overline{\widehat H_{\min}} = H, \end{equation} rendering $\widehat H_{\min}$ unsuitable as a minimal operator with nonzero deficiency indices.
Consequently, $H$ given by \eqref{2.53}, as well as the Dirichlet extension, $H_{\rm D} = H_{-,{\rm D}} \oplus H_{+,{\rm D}}$, where $H_{\pm,{\rm D}} = H_{\pm, 0}$ (i.e., $\alpha = 0$ in \eqref{3.9}, see also our notational conventions following \eqref{3.63A}), are particular self-adjoint extensions of $H_{\min}$ in \eqref{6.23}. Associated with the operator $H$ in $L^2({\mathbb{R}}; dx; {\mathcal H})$ (cf.\ \eqref{2.53}) we now introduce its $2 \times 2$ block operator representation via \begin{equation} (H - z I)^{-1} = \begin{pmatrix} P_{-,x_0} (H - zI)^{-1} P_{-,x_0} & P_{-,x_0} (H - z I)^{-1} P_{+,x_0} \\ P_{+,x_0} (H - z I)^{-1} P_{-,x_0} & P_{+,x_0} (H - z I)^{-1} P_{+,x_0} \end{pmatrix}. \label{6.21} \end{equation} Hence (cf.\ \eqref{6.12}), \begin{align} \begin{split} &\big\{\widehat \Psi_{-,\alpha,j}(z,\, \cdot \, , x_0) = P_{-,x_0} \psi_{-,\alpha}(z, \, \cdot \, ,x_0)[- (\operatorname{Im}(z))^{-1} \operatorname{Im}(m_{-,\alpha}(z,x_0))]^{-1/2} e_j, \\ & \;\;\, \widehat \Psi_{+,\alpha,j}(z,\, \cdot \, , x_0) = P_{+,x_0} \psi_{+,\alpha}(z, \, \cdot \, ,x_0)[(\operatorname{Im}(z))^{-1} \operatorname{Im}(m_{+,\alpha}(z,x_0))]^{-1/2} e_j\big\}_{j \in {\mathcal J}} \label{6.22} \end{split} \end{align} is an orthonormal basis for ${\mathcal N}_{z} = \ker\big(H_{\min}^* - z I \big)$, $z \in {\mathbb{C}} \backslash {\mathbb{R}}$, if $\{e_j\}_{j \in {\mathcal J}}$ is an orthonormal basis for ${\mathcal H}$, and (cf.\ \eqref{6.13}) \begin{align} P_{{\mathcal N}_i} &= P_{{\mathcal N}_{-,i}} \oplus P_{{\mathcal N}_{+,i}} \nonumber \\ & = \sum_{j \in {\mathcal J}} \Big[ \big(\psi_{-,\alpha}(i, \, \cdot \,,x_0) [- \operatorname{Im}(m_{-,\alpha}(i,x_0))]^{-1/2}e_j, \, \cdot \, \big)_{L^2((-\infty,x_0); dx; {\mathcal H})} \nonumber \\ & \hspace*{4cm} \times \psi_{-,\alpha}(i, \, \cdot \,,x_0)
[- \operatorname{Im}(m_{-,\alpha}(i,x_0))]^{-1/2}e_j \label{6.26} \\ & \hspace*{1.1cm} \oplus \big(\psi_{+,\alpha}(i, \, \cdot \,,x_0) [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2}e_j, \, \cdot \, \big)_{L^2((x_0,\infty); dx; {\mathcal H})} \nonumber \\ & \hspace*{4cm} \times \psi_{+,\alpha}(i, \, \cdot \,,x_0) [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2}e_j \Big], \nonumber \\ &= \sum_{j \in {\mathcal J}} \big[\big(\widehat\Psi_{-,\alpha,j}(i,\, \cdot \,, x_0), \, \cdot \,\big)_{L^2((-\infty,x_0); dx; {\mathcal H})} \widehat \Psi_{-,\alpha,j}(i,\, \cdot \,,x_0) \nonumber \\ & \hspace*{1.2cm } \oplus \big(\widehat \Psi_{+,\alpha,j}(i,\, \cdot \,,x_0), \, \cdot \,\big)_{L^2((x_0,\infty); dx; {\mathcal H})} \widehat \Psi_{+,\alpha,j}(i,\, \cdot \,,x_0)\big] \end{align} is the orthogonal projection onto ${\mathcal N}_i$. Consequently (cf.\ \eqref{5.3}), one obtains for the full-line Donoghue-type $m$-function, \begin{align} \begin{split} M_{H, {\mathcal N}_i}^{Do} (z) &= P_{{\mathcal N}_i} (z H + I) (H - z I)^{-1} P_{{\mathcal N}_i} \big|_{{\mathcal N}_i}, \label{6.27} \\ &= \int_{\mathbb{R}} d\Omega_{H,{\mathcal N}_i}^{Do}(\lambda) \bigg[\frac{1}{\lambda-z} - \frac{\lambda}{\lambda^2 + 1}\bigg], \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}, \end{split} \end{align} where $\Omega_{H,{\mathcal N}_i}^{Do}(\cdot)$ satisfies the analogs of \eqref{5.6}--\eqref{5.8} (resp., \eqref{A.42A}--\eqref{A.42b}).
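As an illustrative aside (not part of the formal apparatus of the paper), the elementary kernel identity underlying \eqref{6.27}, and used in the second equality of \eqref{6.19}, namely $\frac{z\lambda+1}{(\lambda-z)(\lambda^2+1)} = \frac{1}{\lambda-z} - \frac{\lambda}{\lambda^2+1}$, can be sanity-checked numerically:

```python
# Numerical check of the Nevanlinna-Herglotz kernel identity
#   (z*lam + 1)/((lam - z)*(lam**2 + 1)) == 1/(lam - z) - lam/(lam**2 + 1),
# which converts the (z*lam + 1)/(lam - z) integrand, taken against the
# measure d Omega(lam)/(lam^2 + 1), into the standard representation.

def lhs(lam, z):
    return (z * lam + 1) / ((lam - z) * (lam ** 2 + 1))

def rhs(lam, z):
    return 1 / (lam - z) - lam / (lam ** 2 + 1)

# Sample real spectral parameters lam and nonreal z:
for lam in (-3.0, -0.5, 0.0, 1.7, 42.0):
    for z in (1j, 2 + 0.5j, -1 - 3j):
        assert abs(lhs(lam, z) - rhs(lam, z)) < 1e-12
```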
With respect to the decomposition \eqref{6.20}, one can represent $M_{H, {\mathcal N}_i}^{Do} (\cdot)$ as the $2 \times 2$ block operator, \begin{align} & M_{H, {\mathcal N}_i}^{Do} (\cdot) = \big(M_{H, {\mathcal N}_i,\ell,\ell'}^{Do} (\cdot)\big)_{0 \leq \ell, \ell' \leq 1} \nonumber \\ & \quad = z \left(\begin{smallmatrix} P_{{\mathcal N}_{-,i}} & 0 \\ 0 & P_{{\mathcal N}_{+,i}} \end{smallmatrix}\right) \label{6.32} \\ & \qquad + (z^2 + 1) \left(\begin{smallmatrix} P_{{\mathcal N}_{-,i}} P_{-,x_0} (H - zI)^{-1} P_{-,x_0} P_{{\mathcal N}_{-,i}} & \;\;\; P_{{\mathcal N}_{-,i}} P_{-,x_0} (H - zI)^{-1} P_{+,x_0} P_{{\mathcal N}_{+,i}} \\ P_{{\mathcal N}_{+,i}} P_{+,x_0} (H - zI)^{-1} P_{-,x_0} P_{{\mathcal N}_{-,i}} & \;\;\; P_{{\mathcal N}_{+,i}} P_{+,x_0} (H - zI)^{-1} P_{+,x_0} P_{{\mathcal N}_{+,i}} \end{smallmatrix}\right), \nonumber \end{align} and hence explicitly obtains, \begin{align} & M_{H, {\mathcal N}_i,0,0}^{Do} (z) \nonumber \\ & \quad = \sum_{j,k \in {\mathcal J}} \big(\widehat \Psi_{-,\alpha,j}(i,\, \cdot \,,x_0), (z H + I)(H - z I)^{-1} \widehat \Psi_{-,\alpha,k}(i,\, \cdot \,,x_0)\big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \nonumber \\ & \hspace*{1.65cm} \times \big(\widehat \Psi_{-,\alpha,k}(i,\, \cdot \,,x_0), \, \cdot \, \big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \widehat \Psi_{-,\alpha,j}(i,\, \cdot \,,x_0), \label{6.33} \\ & M_{H, {\mathcal N}_i,0,1}^{Do} (z) \nonumber \\ & \quad = \sum_{j,k \in {\mathcal J}} \big(\widehat \Psi_{-,\alpha,j}(i,\, \cdot \,,x_0), (z H + I)(H - z I)^{-1} \widehat \Psi_{+,\alpha,k}(i,\, \cdot \,,x_0)\big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \nonumber \\ & \hspace*{1.65cm} \times \big(\widehat \Psi_{+,\alpha,k}(i,\, \cdot \,,x_0), \, \cdot \, \big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \widehat \Psi_{-,\alpha,j}(i,\, \cdot \,,x_0), \label{6.34} \\ & M_{H,
{\mathcal N}_i,1,0}^{Do} (z) \nonumber \\ & \quad = \sum_{j,k \in {\mathcal J}} \big(\widehat \Psi_{+,\alpha,j}(i,\, \cdot \,,x_0), (z H + I)(H - z I)^{-1} \widehat \Psi_{-,\alpha,k}(i,\, \cdot \,,x_0)\big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \nonumber \\ & \hspace*{1.65cm} \times \big(\widehat \Psi_{-,\alpha,k}(i,\, \cdot \,,x_0), \, \cdot \, \big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \widehat \Psi_{+,\alpha,j}(i,\, \cdot \,,x_0), \label{6.35} \\ & M_{H, {\mathcal N}_i,1,1}^{Do} (z) \nonumber \\ & \quad = \sum_{j,k \in {\mathcal J}} \big(\widehat \Psi_{+,\alpha,j}(i,\, \cdot \,,x_0), (z H + I)(H - z I)^{-1} \widehat \Psi_{+,\alpha,k}(i,\, \cdot \,,x_0)\big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \nonumber \\ & \hspace*{1.65cm} \times \big(\widehat \Psi_{+,\alpha,k}(i,\, \cdot \,,x_0), \, \cdot \, \big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \widehat \Psi_{+,\alpha,j}(i,\, \cdot \,,x_0), \label{6.36} \\ & \hspace*{7.55cm} z\in{\mathbb{C}}\backslash{\mathbb{R}}. \nonumber \end{align} Taking a closer look at equations \eqref{6.33}--\eqref{6.36}, we now state the following preliminary result: \begin{lemma} \label{l6.5} Assume Hypothesis \ref{h2.8}.
Then, \begin{align} & \big(\widehat \Psi_{\varepsilon,\alpha,j}(i,\, \cdot \,,x_0), (zH + I)(H - z I)^{-1} \widehat \Psi_{\varepsilon',\alpha,k}(i,\, \cdot \,,x_0)\big)_{L^2({\mathbb{R}};dx;{\mathcal H})} \nonumber \\ & \quad = \int_{{\mathbb{R}}} d\big(\widehat e_{\varepsilon,\alpha,j}(\lambda), \Omega_{\alpha}(\lambda,x_0) \widehat e_{\varepsilon',\alpha,k}(\lambda)\big)_{{\mathcal H}^2} \, \frac{z \lambda + 1}{\lambda - z} \nonumber \\ & \quad = \int_{{\mathbb{R}}} d\big(e_{\varepsilon,\alpha,j}, \Omega_{\alpha}(\lambda,x_0) e_{\varepsilon',\alpha,k}\big)_{{\mathcal H}^2} \, \frac{z \lambda + 1}{(\lambda - z)(\lambda^2 + 1)} \nonumber \\ & \quad = \big(e_{\varepsilon,\alpha,j}, [M_{\alpha}(z,x_0) - \operatorname{Re}(M_{\alpha}(i,x_0))] e_{\varepsilon',\alpha,k}\big)_{{\mathcal H}^2}, \label{6.37} \\ & \hspace*{2.45cm} \varepsilon, \varepsilon' \in \{+,-\}, \; j,k \in {\mathcal J}, \; z \in{\mathbb{C}}\backslash{\mathbb{R}}, \nonumber \end{align} where \begin{align} & \widehat e_{\varepsilon,\alpha,j}(\lambda) = \big(\widehat e_{\varepsilon,\alpha,j,0}(\lambda), \widehat e_{\varepsilon,\alpha,j,1}(\lambda)\big)^\top \nonumber \\ & \quad = \frac{1}{\lambda - i} e_{\varepsilon,\alpha,j} = \frac{1}{\lambda - i} (e_{\varepsilon,\alpha,j,0}, e_{\varepsilon,\alpha,j,1})^\top \nonumber \\ & \quad = \frac{1}{\lambda - i} \big(- \varepsilon m_{\varepsilon,\alpha}(i,x_0) [\varepsilon \operatorname{Im}(m_{\varepsilon,\alpha}(i,x_0))]^{-1/2} e_j, \varepsilon [\varepsilon \operatorname{Im}(m_{\varepsilon,\alpha}(i,x_0))]^{-1/2} e_j\big)^\top, \nonumber \\ & \hspace*{7cm} \varepsilon \in \{+,-\}, \; j \in {\mathcal J}, \; \lambda \in {\mathbb{R}}.
\label{6.38} \end{align} \end{lemma} \begin{proof} The first two equalities in \eqref{6.37} follow from \eqref{2.73}, \eqref{2.74} upon introducing $\widehat e_{\varepsilon,\alpha,j}(\cdot) = \big(\widehat e_{\varepsilon,\alpha,j,0}(\cdot), \widehat e_{\varepsilon,\alpha,j,1}(\cdot)\big)^\top$, where \begin{align} & \widehat e_{\varepsilon,\alpha,j,0}(\lambda) = \varepsilon \int_{x_0}^{\varepsilon \infty} dx \, \theta_{\alpha}(\lambda,x,x_0)^* \psi_{\varepsilon,\alpha}(i,x,x_0) [\varepsilon \operatorname{Im}(m_{\varepsilon,\alpha}(i,x_0))]^{-1/2} e_j \nonumber \\ & \hspace*{1.5cm} = - \varepsilon (\lambda - i)^{-1} m_{\varepsilon,\alpha}(i,x_0) [\varepsilon \operatorname{Im}(m_{\varepsilon,\alpha}(i,x_0))]^{-1/2} e_j, \label{6.39} \\ & \widehat e_{\varepsilon,\alpha,j,1}(\lambda) = \varepsilon \int_{x_0}^{\varepsilon \infty} dx \, \phi_{\alpha}(\lambda,x,x_0)^* \psi_{\varepsilon,\alpha}(i,x,x_0) [\varepsilon \operatorname{Im}(m_{\varepsilon,\alpha}(i,x_0))]^{-1/2} e_j \nonumber \\ & \hspace*{1.5cm} = \varepsilon (\lambda - i)^{-1} [\varepsilon \operatorname{Im}(m_{\varepsilon,\alpha}(i,x_0))]^{-1/2} e_j, \label{6.40} \\ & \hspace*{3.3cm} \varepsilon \in \{+,-\}, \; j \in {\mathcal J}, \; \lambda \in {\mathbb{R}}, \nonumber \end{align} and we employed \eqref{6.11}, \eqref{6.10} (with $z=i$) to arrive at \eqref{6.39}, \eqref{6.40}. The third equality in \eqref{6.37} follows from \eqref{2.71b}, \eqref{2.71c}.
\end{proof} Next, further reducing the computation \eqref{6.37} to scalar products of the type $(e_j, \cdots e_k)_{{\mathcal H}}$, $j,k \in {\mathcal J}$, naturally leads to a $2 \times 2$ block operator \begin{equation} M_{\alpha}^{Do} (\, \cdot \,,x_0) = \big(M_{\alpha,\ell,\ell'}^{Do} (\, \cdot \,,x_0)\big)_{0 \leq \ell, \ell' \leq 1}, \end{equation} where \begin{align} (e_j, M_{\alpha,0,0}^{Do} (z,x_0) e_k)_{{\mathcal H}} &= \big(e_{-,\alpha,j}, [M_{\alpha}(z,x_0) - \operatorname{Re}(M_{\alpha}(i,x_0))] e_{-,\alpha,k}\big)_{{\mathcal H}^2}, \nonumber \\ (e_j, M_{\alpha,0,1}^{Do} (z,x_0) e_k)_{{\mathcal H}} &= \big(e_{-,\alpha,j}, [M_{\alpha}(z,x_0) - \operatorname{Re}(M_{\alpha}(i,x_0))] e_{+,\alpha,k}\big)_{{\mathcal H}^2}, \nonumber \\ (e_j, M_{\alpha,1,0}^{Do} (z,x_0) e_k)_{{\mathcal H}} &= \big(e_{+,\alpha,j}, [M_{\alpha}(z,x_0) - \operatorname{Re}(M_{\alpha}(i,x_0))] e_{-,\alpha,k}\big)_{{\mathcal H}^2}, \label{6.41} \\ (e_j, M_{\alpha,1,1}^{Do} (z,x_0) e_k)_{{\mathcal H}} &= \big(e_{+,\alpha,j}, [M_{\alpha}(z,x_0) - \operatorname{Re}(M_{\alpha}(i,x_0))] e_{+,\alpha,k}\big)_{{\mathcal H}^2}, \nonumber \\ & \hspace*{4.5cm} j,k \in {\mathcal J}, \; z \in{\mathbb{C}}\backslash{\mathbb{R}}.
\nonumber \end{align} \begin{theorem} \label{t6.6} Assume Hypothesis \ref{h2.8}.~Then $M_{\alpha}^{Do}(\, \cdot \, ,x_0)$ is a ${\mathcal B}\big({\mathcal H}^2\big)$-valued \\ Nevanlinna--Herglotz function given by \begin{align} M_{\alpha}^{Do} (z,x_0) &= T_{\alpha}^* M_{\alpha}(z,x_0) T_{\alpha} + E_{\alpha} \label{6.42} \\ &= D_{\alpha} + \int_{\mathbb{R}} d\Omega_{\alpha}^{Do}(\lambda,x_0) \bigg[\frac{1}{\lambda-z} - \frac{\lambda}{\lambda^2 + 1}\bigg], \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}, \label{6.43} \end{align} where the $2 \times 2$ block operators $T_{\alpha} \in {\mathcal B}\big({\mathcal H}^2\big)$ and $E_{\alpha} \in {\mathcal B}\big({\mathcal H}^2\big)$ are defined by \begin{align} & T_{\alpha} = \begin{pmatrix} m_{-,\alpha}(i,x_0) [-\operatorname{Im}(m_{-,\alpha}(i,x_0))]^{-1/2} & - m_{+,\alpha}(i,x_0) [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} \\ - [-\operatorname{Im}(m_{-,\alpha}(i,x_0))]^{-1/2} & [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} \end{pmatrix}, \label{6.46} \\ & E_{\alpha} = \begin{pmatrix} 0 & E_{\alpha,0,1} \\ E_{\alpha,1,0} & 0 \end{pmatrix} = E_{\alpha}^*, \nonumber \\ & E_{\alpha,0,1} = 2^{-1} [-\operatorname{Im}(m_{-,\alpha}(i,x_0))]^{-1/2} [m_{-,\alpha}(-i,x_0) - m_{+,\alpha}(i,x_0)] \nonumber \\ & \hspace*{1.3cm} \times [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2}, \label{6.47} \\ & E_{\alpha,1,0} = 2^{-1} [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{-1/2} [m_{-,\alpha}(i,x_0) - m_{+,\alpha}(-i,x_0)] \nonumber \\ & \hspace*{1.3cm} \times [-\operatorname{Im}(m_{-,\alpha}(i,x_0))]^{-1/2}, \nonumber \end{align} and $T_{\alpha}^{-1} \in {\mathcal B}\big({\mathcal H}^2\big)$, with \begin{align} & \big(T_{\alpha}^{-1}\big)_{0,0} =
[-\operatorname{Im}(m_{-,\alpha}(i,x_0))]^{1/2} [m_{-,\alpha}(i,x_0) - m_{+,\alpha}(i,x_0)]^{-1}, \label{6.47a}\\ & \big(T_{\alpha}^{-1}\big)_{0,1} = [-\operatorname{Im}(m_{-,\alpha}(i,x_0))]^{1/2} [m_{-,\alpha}(i,x_0) - m_{+,\alpha}(i,x_0)]^{-1} m_{+,\alpha}(i,x_0), \label{6.48}\\ & \big(T_{\alpha}^{-1}\big)_{1,0} = [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{1/2} [m_{-,\alpha}(i,x_0) - m_{+,\alpha}(i,x_0)]^{-1}, \label{6.49} \\ & \big(T_{\alpha}^{-1}\big)_{1,1} = [\operatorname{Im}(m_{+,\alpha}(i,x_0))]^{1/2} [m_{-,\alpha}(i,x_0) - m_{+,\alpha}(i,x_0)]^{-1} m_{-,\alpha}(i,x_0). \label{6.50} \end{align} In addition, $D_{\alpha} = \operatorname{Re}\big(M_{\alpha}^{Do} (i,x_0)\big) \in {\mathcal B}\big({\mathcal H}^2\big)$, and $\Omega_{\alpha}^{Do}(\, \cdot \, ,x_0) = T_{\alpha}^* \Omega_{\alpha}(\, \cdot \, ,x_0) T_{\alpha}$ satisfy the analogs of \eqref{A.42a}, \eqref{A.42b}. \end{theorem} \begin{proof} While \eqref{6.43} is clear from \eqref{6.42}, and similarly, \eqref{6.47a}--\eqref{6.50} are clear from \eqref{6.46}, the main burden of proof consists in verifying \eqref{6.42}, given \eqref{6.46}, \eqref{6.47}. This can be achieved by straightforward, yet tedious, computations.
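In the scalar special case ${\mathcal H} = {\mathbb{C}}$ the inverse formulas \eqref{6.47a}--\eqref{6.50} can even be confirmed numerically; the following snippet is an illustrative aside (the sample values for $m_{\pm,\alpha}(i,x_0)$ are ad hoc, chosen with the correct signs of their imaginary parts), not part of the proof.

```python
# Scalar sanity check (H = C) of the inverse formulas (6.47a)-(6.50):
# with m_minus = m_{-,a}(i,x_0) (anti-Herglotz value, Im < 0) and
# m_plus = m_{+,a}(i,x_0) (Herglotz value, Im > 0), build T_a from (6.46)
# and the claimed inverse, and verify that their product is the identity.
import numpy as np

m_minus = -0.7 - 1.3j            # ad hoc sample with Im m_minus < 0
m_plus = 0.4 + 2.1j              # ad hoc sample with Im m_plus > 0
a = (-m_minus.imag) ** (-0.5)    # [-Im m_-]^{-1/2}
b = (m_plus.imag) ** (-0.5)      # [Im m_+]^{-1/2}

# T_a as in (6.46):
T = np.array([[m_minus * a, -m_plus * b],
              [-a, b]])
# Claimed inverse entries (6.47a)-(6.50), with d = [m_- - m_+]^{-1}:
d = 1 / (m_minus - m_plus)
Tinv = np.array([[d / a, d * m_plus / a],
                 [d / b, d * m_minus / b]])

assert np.allclose(T @ Tinv, np.eye(2))
assert np.allclose(Tinv @ T, np.eye(2))
```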
To illustrate the nature of these computations we focus on the $(0,0)$-entry of the $2 \times 2$ block operator \eqref{6.42} and consider the term (cf.\ the first equation in \eqref{6.41}), $(e_{-,\alpha,j}, M_{\alpha}(z,x_0) e_{-,\alpha,k})_{{\mathcal H}^2}$, temporarily suppressing $x_0$ and $\alpha$ for simplicity: \begin{align} & (e_{-,\alpha,j}, M_{\alpha}(z,x_0) e_{-,\alpha,k})_{{\mathcal H}^2} = \bigg(\bigg(\begin{smallmatrix} m_-(i) [- \operatorname{Im}(m_-(i))]^{-1/2} e_j \\ - [- \operatorname{Im}(m_-(i))]^{-1/2} e_j\end{smallmatrix}\bigg), \nonumber \\ & \qquad \times \bigg(\begin{smallmatrix} [m_-(z) - m_+(z)]^{-1} & 2^{-1} [m_-(z) - m_+(z)]^{-1} [m_-(z) + m_+(z)] \\ 2^{-1} [m_-(z) + m_+(z)] [m_-(z) - m_+(z)]^{-1} & m_{\mp}(z) [m_-(z) - m_+(z)]^{-1} m_{\pm}(z) \end{smallmatrix} \bigg) \nonumber \\ & \qquad \times \bigg(\begin{smallmatrix} m_-(i) [- \operatorname{Im}(m_-(i))]^{-1/2} e_k \\ - [- \operatorname{Im}(m_-(i))]^{-1/2} e_k\end{smallmatrix}\bigg)\bigg)_{{\mathcal H}^2} \nonumber \\ & \quad = \big(m_-(i) [- \operatorname{Im}(m_-(i))]^{-1/2} e_j, [m_-(z) - m_+(z)]^{-1} \nonumber \\ & \hspace*{9mm} \times m_-(i) [- \operatorname{Im}(m_-(i))]^{-1/2} e_k\big)_{{\mathcal H}} \nonumber \\ & \qquad - 2^{-1} \big(m_-(i) [- \operatorname{Im}(m_-(i))]^{-1/2} e_j, [m_-(z) - m_+(z)]^{-1} [m_-(z) + m_+(z)] \nonumber \\ & \hspace*{1.7cm} \times [- \operatorname{Im}(m_-(i))]^{-1/2} e_k\big)_{{\mathcal H}} \nonumber \\ & \qquad - 2^{-1} \big([- \operatorname{Im}(m_-(i))]^{-1/2} e_j, [m_-(z) + m_+(z)] [m_-(z) - m_+(z)]^{-1} \nonumber \\ & \hspace*{1.7cm} \times m_-(i) [- \operatorname{Im}(m_-(i))]^{-1/2} e_k\big)_{{\mathcal H}} \nonumber \\ & \qquad + \big([- \operatorname{Im}(m_-(i))]^{-1/2} e_j, m_{\mp}(z) [m_-(z) - m_+(z)]^{-1} m_{\pm}(z) \nonumber \\ & \hspace*{1.7cm} \times [- \operatorname{Im}(m_-(i))]^{-1/2}
e_k\big)_{{\mathcal H}} \nonumber \\ & \quad = \big(e_j, [- \operatorname{Im}(m_-(i))]^{-1/2} m_-(-i) [m_-(z) - m_+(z)]^{-1} \nonumber \\ & \hspace*{9mm} \times m_-(i) [- \operatorname{Im}(m_-(i))]^{-1/2} e_k\big)_{{\mathcal H}} \nonumber \\ & \qquad - 2^{-1} \big(e_j, [- \operatorname{Im}(m_-(i))]^{-1/2} m_-(-i) [m_-(z) - m_+(z)]^{-1} [m_-(z) + m_+(z)] \nonumber \\ & \hspace*{1.7cm} \times [- \operatorname{Im}(m_-(i))]^{-1/2} e_k\big)_{{\mathcal H}} \nonumber \\ & \qquad - 2^{-1} \big(e_j, [- \operatorname{Im}(m_-(i))]^{-1/2} [m_-(z) + m_+(z)] [m_-(z) - m_+(z)]^{-1} \nonumber \\ & \hspace*{1.7cm} \times m_-(i) [- \operatorname{Im}(m_-(i))]^{-1/2} e_k\big)_{{\mathcal H}} \nonumber \\ & \qquad + \big(e_j, [- \operatorname{Im}(m_-(i))]^{-1/2} m_{\mp}(z) [m_-(z) - m_+(z)]^{-1} m_{\pm}(z) \nonumber \\ & \hspace*{1.7cm} \times [- \operatorname{Im}(m_-(i))]^{-1/2} e_k\big)_{{\mathcal H}}, \quad z\in{\mathbb{C}}\backslash{\mathbb{R}}. \label{6.51} \end{align} Explicitly computing $(e_j, [T_{\alpha}^* M_{\alpha}(z,x_0)T_{\alpha}]_{0,0} e_k)_{{\mathcal H}}$, given $T_{\alpha}$ in \eqref{6.46}, yields the same expression as in \eqref{6.51}. Similarly, one verifies that \begin{equation} (e_{-,\alpha,j}, \operatorname{Re}(M_{\alpha}(i,x_0)) e_{-,\alpha,k})_{{\mathcal H}^2} = 0, \end{equation} verifying the $(0,0)$-entry of \eqref{6.42}. The remaining three entries are verified analogously.
\end{proof} Combining Lemma \ref{l6.5} and Theorem \ref{t6.6} then yields the following result: \begin{theorem} \label{t6.7} Assume Hypothesis \ref{h2.8}.~Then $M_{H, {\mathcal N}_i}^{Do} (\cdot) = \big(M_{H, {\mathcal N}_i,\ell,\ell'}^{Do} (\cdot)\big)_{0 \leq \ell, \ell' \leq 1}$, explicitly given by \eqref{6.27}--\eqref{6.36}, is of the form, \begin{align} & M_{H, {\mathcal N}_i,0,0}^{Do} (z) = \sum_{j,k \in {\mathcal J}} (e_j, M_{\alpha,0,0}^{Do}(z,x_0) e_k)_{{\mathcal H}} \nonumber \\ & \hspace*{3.1cm} \times \big(\widehat \Psi_{-,\alpha,k}(i,\, \cdot \,,x_0), \, \cdot \, \big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \widehat \Psi_{-,\alpha,j}(i,\, \cdot \,,x_0), \label{6.53} \\ & M_{H, {\mathcal N}_i,0,1}^{Do} (z) = \sum_{j,k \in {\mathcal J}} (e_j, M_{\alpha,0,1}^{Do}(z,x_0) e_k)_{{\mathcal H}} \nonumber \\ & \hspace*{3.1cm} \times \big(\widehat \Psi_{+,\alpha,k}(i,\, \cdot \,,x_0), \, \cdot \, \big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \widehat \Psi_{-,\alpha,j}(i,\, \cdot \,,x_0), \label{6.54} \\ & M_{H, {\mathcal N}_i,1,0}^{Do} (z) = \sum_{j,k \in {\mathcal J}} (e_j, M_{\alpha,1,0}^{Do}(z,x_0) e_k)_{{\mathcal H}} \nonumber \\ & \hspace*{3.1cm} \times \big(\widehat \Psi_{-,\alpha,k}(i,\, \cdot \,,x_0), \, \cdot \, \big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \widehat \Psi_{+,\alpha,j}(i,\, \cdot \,,x_0), \label{6.55} \\ & M_{H, {\mathcal N}_i,1,1}^{Do} (z) = \sum_{j,k \in {\mathcal J}} (e_j, M_{\alpha,1,1}^{Do}(z,x_0) e_k)_{{\mathcal H}} \nonumber \\ & \hspace*{3.1cm} \times \big(\widehat \Psi_{+,\alpha,k}(i,\, \cdot \,,x_0), \, \cdot \, \big)_{L^2({\mathbb{R}}; dx; {\mathcal H})} \widehat \Psi_{+,\alpha,j}(i,\, \cdot \,,x_0), \label{6.56} \\ & \hspace*{8.95cm} z\in{\mathbb{C}}\backslash{\mathbb{R}}, \nonumber \end{align} with $M_{\alpha}^{Do}(\, \cdot \,,x_0)$
given by \eqref{6.42}--\eqref{6.47}. \end{theorem} \begin{remark} \label{r6.8} Combining Corollary \ref{c5.8} and Theorem \ref{t6.7} proves that the entire spectral information for $H$, contained in the corresponding family of spectral projections $\{E_H(\lambda)\}_{\lambda \in {\mathbb{R}}}$ in $L^2({\mathbb{R}}; dx; {\mathcal H})$, is already encoded in the operator-valued measure $\{\Omega_{H,{\mathcal N}_i}^{Do}(\lambda)\}_{\lambda \in {\mathbb{R}}}$ in ${\mathcal N}_i$ (including multiplicity properties of the spectrum of $H$). In addition, invoking Theorem \ref{t6.6} shows that for any fixed $\alpha = \alpha^* \in {\mathcal B}({\mathcal H})$, $x_0 \in {\mathbb{R}}$, the entire spectral information for $H$ is already contained in $\{\Omega_{\alpha}^{Do}(\lambda,x_0)\}_{\lambda \in {\mathbb{R}}}$ in ${\mathcal H}^2$. $\diamond$ \end{remark} \appendix \section{Basic Facts on Bounded Operator-Valued Nevanlinna--Herglotz Functions} \label{sA} \setcounter{theorem}{0} \setcounter{equation}{0} We review some basic facts on (bounded) operator-valued Nevanlinna--Herglotz functions (also called Nevanlinna, Pick, $R$-functions, etc.), frequently employed in the bulk of this paper. For additional details concerning the material in this appendix we refer to \cite{GWZ13}, \cite{GWZ13b}. Throughout this appendix, ${\mathcal H}$ is a separable, complex Hilbert space with inner product denoted by $(\, \cdot \,,\, \cdot \,)_{{\mathcal H}}$, identity operator abbreviated by $I_{{\mathcal H}}$. We also denote ${\mathbb{C}}_{\pm} = \{z \in {\mathbb{C}} \,|\, \pm \operatorname{Im}(z) > 0\}$.
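As a scalar warm-up to the notions recalled in this appendix (an illustrative aside; the example $m(z) = -1/z$ is chosen ad hoc), the defining property $\operatorname{Im}(m(z)) > 0$ on ${\mathbb{C}}_+$ and the recovery of a point mass via $\varepsilon \operatorname{Im}(m(\lambda + i\varepsilon))$ can be checked numerically:

```python
# Scalar warm-up: m(z) = -1/z is a Nevanlinna-Herglotz function
# (Im m(z) > 0 on the open upper half-plane C_+), corresponding to a unit
# point mass of its representing measure at lambda = 0; the quantity
# eps * Im m(i*eps) recovers that mass as eps -> 0.  Illustrative only.
def m(z):
    return -1 / z

# Herglotz property at a few sample points of C_+:
for z in (1j, 2 + 0.3j, -5 + 1e-3j):
    assert m(z).imag > 0

# Point-mass recovery at lambda = 0: eps * Im m(i*eps) -> 1.
for eps in (1.0, 1e-4, 1e-8):
    assert abs(eps * m(1j * eps).imag - 1.0) < 1e-12
```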
\begin{definition}\label{dA.4} The map $M: {\mathbb{C}}_+ \rightarrow {\mathcal B}({\mathcal H})$ is called a bounded operator-valued Nevanlinna--Herglotz function on ${\mathcal H}$ (in short, a bounded Nevanlinna--Herglotz operator on ${\mathcal H}$) if $M$ is analytic on ${\mathbb{C}}_+$ and $\operatorname{Im} (M(z))\geq 0$ for all $z\in {\mathbb{C}}_+$. \end{definition} Here we follow the standard notation \begin{equation} \label{A.37} \operatorname{Im} (M) = (M-M^*)/(2i),\quad \operatorname{Re} (M) = (M+M^*)/2, \quad M \in {\mathcal B}({\mathcal H}). \end{equation} Note that $M$ is a bounded Nevanlinna--Herglotz operator if and only if the scalar-valued functions $(u,Mu)_{\mathcal H}$ are Nevanlinna--Herglotz for all $u\in{\mathcal H}$. As in the scalar case one usually extends $M$ to ${\mathbb{C}}_-$ by reflection, that is, by defining \begin{equation} M(z)=M(\overline z)^*, \quad z\in {\mathbb{C}}_-. \label{A.36} \end{equation} Hence $M$ is analytic on ${\mathbb{C}}\backslash{\mathbb{R}}$, but $M\big|_{{\mathbb{C}}_-}$ and $M\big|_{{\mathbb{C}}_+}$, in general, are not analytic continuations of each other. In contrast to the scalar case, one cannot generally expect strict inequality in $\operatorname{Im}(M(\cdot))\geq 0$. However, the kernel of $\operatorname{Im}(M(\cdot))$ has the following simple properties recorded in \cite[Lemma\ 5.3]{GT00} (whose proof was kindly communicated to us by Dirk Buschmann) in the matrix-valued context. Below we indicate that the proof extends to the present infinite-dimensional situation (see also \cite[Proposition\ 1.2\,$(ii)$]{DM97} for additional results of this kind): \begin{lemma} \label{lA.5} Let $M(\cdot)$ be a ${\mathcal B}({\mathcal H})$-valued Nevanlinna--Herglotz function. Then the kernel ${\mathcal H}_0 = \ker(\operatorname{Im}(M(z)))$ is independent of $z\in{\mathbb{C}} \backslash {\mathbb{R}}$.
Consequently, upon decomposing ${\mathcal H} = {\mathcal H}_0 \oplus {\mathcal H}_1$, ${\mathcal H}_1 = {\mathcal H}_0^\bot$, $\operatorname{Im}(M(\cdot))$ takes on the form \begin{equation} \operatorname{Im}(M(z))= \begin{pmatrix} 0 & 0 \\ 0 & N_1(z) \end{pmatrix}, \quad z \in {\mathbb{C}}_+, \label{A.38} \end{equation} where $N_1(\cdot) \in {\mathcal B}({\mathcal H}_1)$ satisfies \begin{equation} N_1(z) \geq 0, \quad \ker(N_1(z)) = \{0\}, \quad z\in{\mathbb{C}}_+. \label{A.39} \end{equation} \end{lemma} \begin{proof} Pick $z_0 \in {\mathbb{C}} \backslash {\mathbb{R}}$, and suppose $f_0 \in \ker(\operatorname{Im}(M(z_0)))$. Introducing $m(z) = (f_0,M(z) f_0)_{{\mathcal H}}$, $z \in {\mathbb{C}} \backslash {\mathbb{R}}$, $m(\cdot)$ is a scalar Nevanlinna--Herglotz function and $m(z_0) \in {\mathbb{R}}$. Hence the Nevanlinna--Herglotz function $m(z) - m(z_0)$ has a zero at $z=z_0$, and thus must be a real-valued constant, $m(z) = m(z_0)$, $z \in {\mathbb{C}} \backslash {\mathbb{R}}$. Since $(f_0, M(z)^* f_0)_{{\mathcal H}} = \overline{(f_0, M(z) f_0)_{{\mathcal H}}} = \overline{m(z)} = m(z_0) \in {\mathbb{R}}$, $z \in {\mathbb{C}} \backslash {\mathbb{R}}$, one concludes that $(f_0, \operatorname{Im}(M(z)) f_0)_{{\mathcal H}} = \pm \big\|[\pm \operatorname{Im}(M(z))]^{1/2} f_0\big\|_{{\mathcal H}}^2 = 0$, $z \in {\mathbb{C}}_{\pm}$, that is, \begin{equation} f_0 \in \ker\big([\pm \operatorname{Im}(M(z))]^{1/2}\big) = \ker(\operatorname{Im}(M(z))), \quad z \in {\mathbb{C}}_{\pm}, \end{equation} and hence $\ker(\operatorname{Im}(M(z_0))) \subseteq \ker(\operatorname{Im}(M(z)))$, $z \in {\mathbb{C}} \backslash {\mathbb{R}}$. Interchanging the roles of $z_0$ and $z$ finally yields $\ker(\operatorname{Im}(M(z_0))) = \ker(\operatorname{Im}(M(z)))$, $z \in {\mathbb{C}} \backslash {\mathbb{R}}$.
\end{proof} Next we recall the definition of a bounded operator-valued measure (see also \cite[p.\ 319]{Be68}, \cite{MM04}, \cite{PR67}): \begin{definition} \label{dA.6} Let ${\mathcal H}$ be a separable, complex Hilbert space. A map $\Sigma:\mathfrak{B}({\mathbb{R}}) \to{\mathcal B}({\mathcal H})$, with $\mathfrak{B}({\mathbb{R}})$ the Borel $\sigma$-algebra on ${\mathbb{R}}$, is called a {\it bounded, nonnegative, operator-valued measure} if the following conditions $(i)$ and $(ii)$ hold: \\ $(i)$ $\Sigma (\emptyset) =0$ and $0 \leq \Sigma(B) \in {\mathcal B}({\mathcal H})$ for all $B \in \mathfrak{B}({\mathbb{R}})$. \\ $(ii)$ $\Sigma(\cdot)$ is strongly countably additive (i.e., with respect to the strong operator \hspace*{5mm} topology in ${\mathcal H}$), that is, \begin{align} & \Sigma(B) = \slim_{N\to \infty} \sum_{j=1}^N \Sigma(B_j) \label{A.40} \\ & \quad \text{whenever } \, B=\bigcup_{j\in{\mathbb{N}}} B_j, \, \text{ with } \, B_k\cap B_{\ell} = \emptyset \, \text{ for } \, k \neq \ell, \; B_k \in \mathfrak{B}({\mathbb{R}}), \; k, \ell \in {\mathbb{N}}. \nonumber \end{align} $\Sigma(\cdot)$ is called an {\it $($operator-valued\,$)$ spectral measure} (or an {\it orthogonal operator-valued measure}) if additionally the following condition $(iii)$ holds: \\ $(iii)$ $\Sigma(\cdot)$ is projection-valued (i.e., $\Sigma(B)^2 = \Sigma(B)$, $B \in \mathfrak{B}({\mathbb{R}})$) and $\Sigma({\mathbb{R}}) = I_{{\mathcal H}}$. \\ $(iv)$ Let $f \in {\mathcal H}$ and $B \in \mathfrak{B}({\mathbb{R}})$.
Then the vector-valued measure $\Sigma(\cdot) f$ has {\it finite variation on $B$}, denoted by $V(\Sigma f;B)$, if \begin{equation} V(\Sigma f; B) = \sup\bigg\{\sum_{j=1}^N \|\Sigma(B_j)f\|_{{\mathcal H}} \bigg\} < \infty, \end{equation} where the supremum is taken over all finite sequences $\{B_j\}_{1\leq j \leq N}$ of pairwise disjoint subsets of ${\mathbb{R}}$ with $B_j \subseteq B$, $1 \leq j \leq N$. In particular, $\Sigma(\cdot) f$ has {\it finite total variation} if $V(\Sigma f;{\mathbb{R}}) < \infty$. \end{definition} We recall that due to monotonicity considerations, taking the limit in the strong operator topology in \eqref{A.40} is equivalent to taking the limit with respect to the weak operator topology in ${\mathcal H}$. For relevant material in connection with the following result we refer the reader, for instance, to \cite{AL95}, \cite{AN75}, \cite{AN76}, \cite{ABT11}, \cite[Sect.\ VI.5]{Be68}, \cite[Sect.\ I.4]{Br71}, \cite{Bu97}, \cite{Ca76}, \cite{De62}, \cite{DM91}--\cite{DM97}, \cite{HS98}, \cite{KO77}, \cite{KO78}, \cite{MM02}, \cite{MM04}, \cite{Na74}, \cite{Na77}, \cite{NA75}, \cite{Na87}, \cite{Sh71}, \cite{Ts92}, and the detailed bibliography in \cite{GWZ13b}. \begin{theorem} \rm {(\cite{AN76}, \cite[Sect.\ I.4]{Br71}, \cite{Sh71}.)} \label{tA.7} Let $M$ be a bounded operator-valued Nevanlinna--Herglotz function in ${\mathcal H}$. Then the following assertions hold: \\ $(i)$ For each $f \in {\mathcal H}$, $(f,M(\cdot) f)_{{\mathcal H}}$ is a $($scalar$)$ Nevanlinna--Herglotz function. \\ $(ii)$ Suppose that $\{e_j\}_{j\in{\mathbb{N}}}$ is a complete orthonormal system in ${\mathcal H}$ and that for some subset of ${\mathbb{R}}$ having positive Lebesgue measure, and for all $j\in{\mathbb{N}}$, $(e_j,M(\cdot) e_j)_{{\mathcal H}}$ has zero normal limits. Then $M\equiv 0$.
\\ $(iii)$ There exists a bounded, nonnegative ${\mathcal B}({\mathcal H})$-valued measure $\Omega$ on ${\mathbb{R}}$ such that the Nevanlinna representation \begin{align} & M(z) = C + D z + \int_{{\mathbb{R}}} d\Omega (\lambda ) \bigg[\frac{1}{\lambda-z} - \frac{\lambda}{\lambda ^2 + 1}\bigg], \quad z\in{\mathbb{C}}_+, \label{A.42} \\ & \widetilde \Omega((-\infty, \lambda]) = \slim_{\varepsilon \downarrow 0} \int_{-\infty}^{\lambda + \varepsilon} d \Omega (t) \, (t^2 + 1)^{-1}, \quad \lambda \in {\mathbb{R}}, \label{A.42A} \\ & \widetilde \Omega({\mathbb{R}}) = \operatorname{Im}(M(i)) - D = \int_{{\mathbb{R}}} d\Omega(\lambda) \, (\lambda^2 + 1)^{-1} \in {\mathcal B}({\mathcal H}), \label{A.42a} \\ & C=\operatorname{Re}(M(i)),\quad D=\slim_{\eta\uparrow \infty} \, \frac{1}{i\eta}M(i\eta) \geq 0, \label{A.42b} \end{align} holds in the strong sense in ${\mathcal H}$. Here $\widetilde\Omega (B) = \int_{B} \big(1+\lambda^2\big)^{-1}d\Omega(\lambda)$, $B \in \mathfrak{B}({\mathbb{R}})$. \\ $(iv)$ Let $\lambda _1,\lambda_2\in{\mathbb{R}}$, $\lambda_1<\lambda_2$. Then the Stieltjes inversion formula for $\Omega$ reads \begin{equation}\label{A.43} \Omega ((\lambda_1,\lambda_2]) f =\pi^{-1} \slim_{\delta\downarrow 0} \slim_{\varepsilon\downarrow 0} \int^{\lambda_2 + \delta}_{\lambda_1 + \delta}d\lambda \, \operatorname{Im} (M(\lambda+i\varepsilon)) f, \quad f \in {\mathcal H}. \end{equation} $(v)$ Any isolated poles of $M$ are simple and located on the real axis, the residues at poles being nonpositive bounded operators in ${\mathcal B}({\mathcal H})$.
\\ $(vi)$ For all $\lambda \in {\mathbb{R}}$, \begin{align} & \slim_{\varepsilon \downarrow 0} \, \varepsilon \operatorname{Re}(M(\lambda +i\varepsilon ))=0, \label{A.45} \\ & \, \Omega (\{\lambda \}) = \slim_{\varepsilon \downarrow 0} \, \varepsilon \operatorname{Im} (M(\lambda + i \varepsilon )) = - i \slim_{\varepsilon \downarrow 0} \, \varepsilon M(\lambda +i\varepsilon). \label{A.46} \end{align} $(vii)$ If in addition $M(z) \in {\mathcal B}_{\infty} ({\mathcal H})$, $z \in {\mathbb{C}}_+$, then the measure $\Omega$ in \eqref{A.42} is countably additive with respect to the ${\mathcal B}({\mathcal H})$-norm, and the Nevanlinna representation \eqref{A.42} and the Stieltjes inversion formula \eqref{A.43}, as well as \eqref{A.45}, \eqref{A.46}, hold with the limits taken with respect to the $\|\cdot\|_{{\mathcal B}({\mathcal H})}$-norm. \\ $(viii)$ Let $f \in {\mathcal H}$ and assume in addition that $\Omega(\cdot) f$ is of finite total variation. Then for a.e.\ $\lambda \in {\mathbb{R}}$, the normal limits $M(\lambda + i0) f$ exist in the strong sense and \begin{equation} \slim_{\varepsilon \downarrow 0} M(\lambda +i\varepsilon) f = M(\lambda +i 0) f = H(\Omega(\cdot) f) (\lambda) + i \pi \Omega'(\lambda) f, \end{equation} where $H(\Omega(\cdot) f)$ denotes the ${\mathcal H}$-valued Hilbert transform \begin{equation} H(\Omega(\cdot) f) (\lambda) = \text{p.v.}\int_{- \infty}^{\infty} d \Omega (t) f \, \frac{1}{t - \lambda} = \slim_{\delta \downarrow 0} \int_{|t-\lambda|\geq \delta} d \Omega (t) f \, \frac{1}{t - \lambda}.
\end{equation} \end{theorem} As usual, the normal limits in Theorem \ref{tA.7} can be replaced by nontangential ones. The nature of the boundary values of $M(\cdot + i 0)$ when for some $p>0$, $M(z) \in {\mathcal B}_p({\mathcal H})$, $z \in {\mathbb{C}}_+$, was clarified in detail in \cite{BE67}, \cite{Na89}, \cite{Na90}, \cite{Na94}. We also mention that Shmul'yan \cite{Sh71} discusses the Nevanlinna representation \eqref{A.42}; moreover, certain special classes of Nevanlinna functions, isolated by Kac and Krein \cite{KK74} in the scalar context, are studied by Brodskii \cite[Sect.\ I.4]{Br71} and Shmul'yan \cite{Sh71}. Our final result of this appendix offers an elementary proof of the fact that $\operatorname{Im}(M(z))$ is boundedly invertible for all $z \in {\mathbb{C}}_+$ if and only if it is boundedly invertible for some $z_0 \in {\mathbb{C}}_+$: \begin{lemma} \label{lA.8} Let $M$ be a bounded operator-valued Nevanlinna--Herglotz function in ${\mathcal H}$. Then $[\operatorname{Im}(M(z_0))]^{-1} \in {\mathcal B}({\mathcal H})$ for some $z_0\in{\mathbb{C}}_+$ $($resp., $z_0 \in {\mathbb{C}}_-$$)$ if and only if $[\operatorname{Im}(M(z))]^{-1} \in {\mathcal B}({\mathcal H})$ for all $z\in{\mathbb{C}}_+$ $($resp., $z \in {\mathbb{C}}_-$$)$. \end{lemma} \begin{proof} By relation \eqref{A.36}, it suffices to consider $z_0, z \in {\mathbb{C}}_+$, and because of Theorem\ \ref{tA.7}\,$(iii)$, we can assume that $M(z)$, $z \in {\mathbb{C}}_+$, has the representation \eqref{A.42}. Let $x_0,x\in{\mathbb{R}}$ and $y_0,y>0$. Then there exists a constant $c\geq1$ such that \begin{align} \sup_{\lambda\in{\mathbb{R}}} \bigg(\frac{(\lambda-x)^2+y^2}{(\lambda-x_0)^2+y_0^2}\bigg) \leq c, \end{align} since the function on the left-hand side is continuous in $\lambda$ and tends to $1$ as $\lambda\to\pm\infty$. 
If $[\operatorname{Im}(M(x_0+iy_0))]^{-1} \in {\mathcal B}({\mathcal H})$, there exists $\delta>0$ such that $\operatorname{Im}(M(x_0+iy_0))\geq \delta I_{\mathcal H}$, and hence, using $c\geq1$, $y>0$, and $\Omega\geq0$, one obtains \begin{align} \delta I_{\mathcal H} &\leq \operatorname{Im}(M(x_0+iy_0)) = Dy_0 + \int_{{\mathbb{R}}} \frac{y_0}{(\lambda-x_0)^2+y_0^2}\,d\Omega(\lambda) \nonumber\\ &\leq \frac{y_0}{y}\bigg[Dy + c\int_{{\mathbb{R}}} \frac{y}{(\lambda-x)^2+y^2}\,d\Omega(\lambda)\bigg] \\ &\leq \frac{c\,y_0}{y}\bigg[Dy + \int_{{\mathbb{R}}} \frac{y}{(\lambda-x)^2+y^2}\,d\Omega(\lambda)\bigg] = \frac{c\,y_0}{y}\operatorname{Im}(M(x+iy)). \nonumber \end{align} Thus, $\operatorname{Im}(M(x+iy))\geq (y/(c\,y_0)) \delta I_{\mathcal H}$, and hence $[\operatorname{Im}(M(x+iy))]^{-1} \in {\mathcal B}({\mathcal H})$. \end{proof} For a variety of additional spectral results in connection with operator-valued Nevanlinna--Herglotz functions we refer to \cite{BMN02} and \cite[Proposition~1.2]{DM97}. For a systematic treatment of operator-valued Nevanlinna--Herglotz families we refer to \cite{DHM15}. \noindent {\bf Acknowledgments.} We are indebted to Jussi Behrndt and Mark Malamud for numerous discussions on this topic. S.\,N.~is grateful to the Department of Mathematics of the University of Missouri where part of this work was completed while on a Miller Scholar Fellowship in February--March of 2014. \begin{thebibliography}{99} \bibitem{AL95} V.\ M.\ Adamjan and H.\ Langer, {\it Spectral properties of a class of rational operator valued functions}, J. Operator Th. {\bf 33}, 259--277 (1995). \bibitem{AM63} Z.\ S.\ Agranovich and V.\ A.\ Marchenko, {\it The Inverse Problem of Scattering Theory}, Gordon and Breach, New York, 1963. 
\bibitem{Al06a} A.\ R.\ Aliev, {\it On the generalized solution of the boundary-value problem for the operator-differential equations of the second order with variable coefficients}, J. Math. Phys. Anal. Geom. {\bf 2}, 87--93 (2006). \bibitem{AM10} A.\ R.\ Aliev and S.\ S.\ Mirzoev, {\it On boundary value problem solvability theory for a class of high-order operator-differential equations}, Funct. Anal. Appl. {\bf 44}, 209--211 (2010). \bibitem{AN75} G.\ D.\ Allen and F.\ J.\ Narcowich, {\it On the representation and approximation of a class of operator-valued analytic functions}, Bull. Amer. Math. Soc. {\bf 81}, 410--412 (1975). \bibitem{AN76} G.\ D.\ Allen and F.\ J.\ Narcowich, {\it $R$-operators I. Representation theory and applications}, Indiana Univ. Math. J. {\bf 25}, 945--963 (1976). \bibitem{AB09} D.\ Alpay and J.\ Behrndt, {\it Generalized $Q$-functions and Dirichlet-to-Neumann maps for elliptic differential operators}, J. Funct. Anal. {\bf 257}, 1666--1694 (2009). \bibitem{Am95} H.\ Amann, {\it Linear and Quasilinear Parabolic Problems}, Monographs in Mathematics, Vol.\ 89, Birkh\"auser, Basel, 1995. \bibitem{AP04} W.\ O.\ Amrein and D.\ B.\ Pearson, {\it $M$ operators: a generalization of Weyl--Titchmarsh theory}, J. Comp. Appl. Math. {\bf 171}, 1--26 (2004). \bibitem{ABHN01} W.\ Arendt, C.\ K.\ Batty, M.\ Hieber, and F.\ Neubrander, {\it Vector-Valued Laplace Transforms and Cauchy Problems}, Monographs in Mathematics, Vol.\ 96, Birkh\"auser, Basel, 2001. \bibitem{ABT11} Yu.\ Arlinskii, S.\ Belyi, and E.\ Tsekanovskii, {\it Conservative Realizations of Herglotz--Nevanlinna Functions}, Operator Theory: Advances and Applications, Vol.\ 217, Birkh\"auser, Springer, Basel, 2011. \bibitem{BW83} H.\ Baumg\"artel and M.\ Wollenberg, {\it Mathematical Scattering Theory}, Operator Theory: Advances and Applications, Vol.\ 9, Birkh\"auser, Boston, 1983. 
\bibitem{BL07} J.\ Behrndt and M.\ Langer, {\it Boundary value problems for elliptic partial differential operators on bounded domains}, J. Funct. Anal. {\bf 243}, 536--565 (2007). \bibitem{BM14} J.\ Behrndt and T.\ Micheler, {\it Elliptic differential operators on Lipschitz domains and abstract boundary value problems}, J. Funct. Anal. {\bf 267}, 3657--3709 (2014). \bibitem{BR15} J.\ Behrndt and J.\ Rohleder, {\it Titchmarsh--Weyl theory for Schr\"odinger operators on unbounded domains}, \arxiv{1208.5224}, J. Spectral Theory, to appear. \bibitem{BR15a} J.\ Behrndt and J.\ Rohleder, {\it Spectral analysis of selfadjoint elliptic differential operators, Dirichlet-to-Neumann maps, and abstract Weyl functions}, \arxiv{1404.0922}. \bibitem{BT07} J.\ Behrndt and C.\ Trunk, {\it On the negative squares of indefinite Sturm--Liouville operators}, J. Diff. Eq. {\bf 238}, 491--519 (2007). \bibitem{BL00} R.\ Benguria and M.\ Loss, {\it A simple proof of a theorem of Laptev and Weidl}, Math. Res. Lett. {\bf 7}, 195--203 (2000). \bibitem{Be68} Ju.\ Berezanskii, {\it Expansions in Eigenfunctions of Selfadjoint Operators}, Transl. Math. Monographs, Vol.\ 17, Amer. Math. Soc., Providence, RI, 1968. \bibitem{BE67} M.\ \v S.\ Birman and S.\ B.\ \`Entina, {\it The stationary method in the abstract theory of scattering}, Math. USSR Izv. {\bf 1}, 391--420 (1967). \bibitem{BS87} M.\ S.\ Birman and M.\ Z.\ Solomjak, {\it Spectral Theory of Self-Adjoint Operators in Hilbert Space}, Reidel, Dordrecht, 1987. \bibitem{BMN02} J.\ F.\ Brasche, M.\ Malamud, and H.\ Neidhardt, {\it Weyl function and spectral properties of self-adjoint extensions}, Integr. Eq. Oper. Th. {\bf 43}, 264--289 (2002). \bibitem{Br71} M.\ S.\ Brodskii, {\it Triangular and Jordan Representations of Linear Operators}, Transl. Math. Monographs, Vol.\ 32, Amer. Math. Soc., Providence, RI, 1971. 
\bibitem{BGW09} B.\ M.\ Brown, G.\ Grubb, and I.\ G.\ Wood, {\it $M$-functions for closed extensions of adjoint pairs of operators with applications to elliptic boundary problems}, Math. Nachr. {\bf 282}, 314--347 (2009). \bibitem{BHMNW09} B.\ M.\ Brown, J.\ Hinchcliffe, M.\ Marletta, S.\ Naboko, and I.\ Wood, {\it The abstract Titchmarsh--Weyl $M$-function for adjoint operator pairs and its relation to the spectrum}, Integral Equ.\ Operator Theory {\bf 63}, 297--320 (2009). \bibitem{BMNW08} B.\ M.\ Brown, M.\ Marletta, S.\ Naboko, and I.\ Wood, {\it Boundary triplets and $M$-functions for non-selfadjoint operators, with applications to elliptic PDEs and block operator matrices}, J.\ London Math.\ Soc.\ (2) {\bf 77}, 700--718 (2008). \bibitem{BNMW14} B.\ M.\ Brown, M.\ Marletta, S.\ Naboko, and I.\ Wood, {\it An abstract inverse problem for boundary triples with an application to the Friedrichs model}, \arxiv{1404.6820}. \bibitem{BGP08} J.\ Br\"uning, V.\ Geyler, and K.\ Pankrashkin, {\it Spectra of self-adjoint extensions and applications to solvable Schr\"odinger operators}, Rev. Math. Phys. {\bf 20}, 1--70 (2008). \bibitem{Bu97} D.~Buschmann, {\it Spektraltheorie verallgemeinerter Differentialausdr\"ucke - Ein neuer Zugang}, Ph.D. Thesis, University of Frankfurt, Germany, 1997. \bibitem{Ca76} R.\ W.\ Carey, {\it A unitary invariant for pairs of self-adjoint operators}, J. Reine Angew. Math. {\bf 283}, 294--312 (1976). \bibitem{DK74} Ju.\ L.\ Daleckii and M.\ G.\ Krein, {\it Stability of Solutions of Differential Equations in Banach Space}, Transl. Math. Monographs, Vol.\ 43, Amer. Math. Soc., Providence, RI, 1974. \bibitem{De62} L.\ de Branges, {\it Perturbations of self-adjoint transformations}, Amer. J. Math. {\bf 84}, 543--560 (1962). \bibitem{De08} S.\ A.\ Denisov, {\it Schr\"odinger operators and associated hyperbolic pencils}, J. Funct. Anal. {\bf 254}, 2186--2226 (2008). 
\bibitem{DHM15} V.\ Derkach, S.\ Hassi, and M.\ Malamud, {\it Invariance theorems for Nevanlinna families}, \arxiv{1503.05606}. \bibitem{DHMdS09} V.\ Derkach, S.\ Hassi, M.\ Malamud, and H.\ de Snoo, {\it Boundary relations and generalized resolvents of symmetric operators}, Russian J. Math. Phys. {\bf 16}, 17--60 (2009). \bibitem{DM87} V.\ A.\ Derkach and M.\ M.\ Malamud, {\it On the Weyl function and Hermitian operators with gaps}, Sov. Math. Dokl. {\bf 35}, 393--398 (1987). \bibitem{DM91} V.\ A.\ Derkach and M.\ M.\ Malamud, {\it Generalized resolvents and the boundary value problems for Hermitian operators with gaps}, J. Funct. Anal. {\bf 95}, 1--95 (1991). \bibitem{DM95} V.\ A.\ Derkach and M.\ M.\ Malamud, {\it The extension theory of Hermitian operators and the moment problem}, J. Math. Sci. {\bf 73}, 141--242 (1995). \bibitem{DM97} V.\ A.\ Derkach and M.\ M.\ Malamud, {\it On some classes of holomorphic operator functions with nonnegative imaginary part}, in {\it Operator Algebras and Related Topics}, 16th International Conference on Operator Theory, A.\ Gheondea, R.\ N.\ Gologan, and T.\ Timotin (eds.), The Theta Foundation, Bucharest, 1997, pp.\ 113--147. \bibitem{DM15} V.\ A.\ Derkach and M.\ M.\ Malamud, {\it Weyl function of a Hermitian operator and its connection with characteristic function}, \arxiv{1503.08956}. \bibitem{DMT88} V.\ A.\ Derkach, M.\ M.\ Malamud, and E.\ R.\ Tsekanovskii, {\it Sectorial extensions of a positive operator, and the characteristic function}, Sov. Math. Dokl. {\bf 37}, 106--110 (1988). \bibitem{DLDS88} A.\ Dijksma, H.\ Langer, and H.\ de Snoo, {\it Hamiltonian systems with eigenvalue depending boundary conditions}, in {\it Contributions to Operator Theory and its Applications}, I.\ Gohberg, J.\ W.\ Helton, and L.\ Rodman (eds.), Operator Theory: Advances and Applications, Vol.\ 35, Birkh\"auser, Basel, 1988, pp.\ 37--83. \bibitem{DU77} J.\ Diestel and J.\ J.\ Uhl, {\it Vector Measures}, Mathematical Surveys, Vol.\ 15, Amer. 
Math. Soc., Providence, RI, 1977. \bibitem{Di60} J.\ Dieudonn{\' e}, {\it Foundations of Modern Analysis}, Pure and Appl. Math., Vol.\ 10, Academic Press, New York, 1960. \bibitem{Do65} W.\ F.\ Donoghue, {\it On the perturbation of spectra}, Commun. Pure Appl. Math. {\bf 18}, 559--579 (1965). \bibitem{DS88} N.\ Dunford and J.\ T.\ Schwartz, {\it Linear Operators Part II: Spectral Theory}, Interscience, New York, 1988. \bibitem{GKMT01} F.\ Gesztesy, N.\ J.\ Kalton, K.\ A.\ Makarov, and E.\ Tsekanovskii, {\it Some applications of operator-valued Herglotz functions}, in ``Operator Theory, System Theory and Related Topics,'' Oper.\ Theory Adv. Appl., Vol.\ 123, Birkh\"auser, Basel, 2001, pp.\ 271--321. \bibitem{GMT98} F.\ Gesztesy, K.\ A.\ Makarov, and E.\ Tsekanovskii, {\it An Addendum to Krein's formula}, J. Math. Anal. Appl. {\bf 222}, 594--606 (1998). \bibitem{GT00} F.\ Gesztesy and E.\ Tsekanovskii, {\it On matrix-valued Herglotz functions}, Math. Nachr. {\bf 218}, 61--138 (2000). \bibitem{GWZ13} F.\ Gesztesy, R.\ Weikard, and M.\ Zinchenko, {\it Initial value problems and Weyl--Titchmarsh theory for Schr\"odinger operators with operator-valued potentials}, Operators and Matrices {\bf 7}, 241--283 (2013). \bibitem{GWZ13a} F.\ Gesztesy, R.\ Weikard, and M.\ Zinchenko, {\it On a class of model Hilbert spaces}, Discrete Cont. Dyn. Syst. A {\bf 33}, 5067--5088 (2013). \bibitem{GWZ13b} F.\ Gesztesy, R.\ Weikard, and M.\ Zinchenko, {\it On spectral theory for Schr\"odinger operators with operator-valued potentials}, J. Diff. Eq. {\bf 255}, 1784--1827 (2013). \bibitem{Gi72} R.\ C.\ Gilbert, {\it Simplicity of linear ordinary differential operators}, J. Diff. Eq. {\bf 11}, 672--681 (1972). \bibitem{Go68} M.\ L.\ Gorbachuk, {\it On spectral functions of a second order differential operator with operator coefficients}, Ukrain. Math. J. {\bf 18}, No.\ 2, 3--21 (1966). (Russian.) Engl. transl. in Amer. Math. Soc. Transl. (2), {\bf 72}, 177--202 (1968). 
\bibitem{Go71} M.\ L.\ Gorbachuk, {\it Self-adjoint boundary problems for a second-order differential equation with unbounded operator coefficient}, Funct. Anal. Appl. {\bf 5}, 9--18 (1971). \bibitem{GG69} V.\ I.\ Gorba{\v c}uk and M.\ L.\ Gorba{\v c}uk, {\it Expansion in eigenfunctions of a second-order differential equation with operator coefficients}, Sov. Math. Dokl. {\bf 10}, 158--162 (1969). \bibitem{GG91} V.\ I.\ Gorbachuk and M.\ L.\ Gorbachuk, {\it Boundary Value Problems for Operator Differential Equations}, Kluwer, Dordrecht, 1991. \bibitem{GM76} M.\ L.\ Gorbachuk and V.\ A.\ Mihailec, {\it Semibounded selfadjoint extensions of symmetric operators}, Sov. Math. Dokl. {\bf 17}, 185--187 (1976). \bibitem{HKS98} S.\ Hassi, M.\ Kaltenb\"ack, and H.\ de Snoo, {\it Generalized finite rank perturbations associated with Kac classes of matrix Nevanlinna functions}, in preparation. \bibitem{HMM13} S.\ Hassi, M.\ Malamud, and V.\ Mogilevskii, {\it Unitary equivalence of proper extensions of a symmetric operator and the Weyl function}, Integral Equ. Operator Theory {\bf 77}, 449--487 (2013). \bibitem{HP85} E.\ Hille and R.\ S.\ Phillips, {\it Functional Analysis and Semi-Groups}, Colloquium Publications, Vol.\ 31, rev. ed., Amer. Math. Soc., Providence, RI, 1985. \bibitem{HS98} D.\ Hinton and A.\ Schneider, {\it On the spectral representation for singular selfadjoint boundary eigenvalue problems}, in {\it Contributions to Operator Theory in Spaces with an Indefinite Metric}, A.\ Dijksma, I.\ Gohberg, M.\ A.\ Kaashoek, and R.\ Mennicken (eds.), Operator Theory: Advances and Applications, Vol.\ 106, Birkh\"auser, Basel, 1998, pp.\ 217--251. \bibitem{KK74} I.\ S.\ Kac and M.\ G.\ Krein, {\it $R$-functions--analytic functions mapping the upper halfplane into itself}, Amer. Math. Soc. Transl. (2) {\bf 103}, 1--18 (1974). \bibitem{KL67} A.\ G.\ Kostyuchenko and B.\ M.\ Levitan, {\it Asymptotic behavior of the eigenvalues of the Sturm--Liouville operator problem}, Funct. Anal. 
Appl. {\bf 1}, 75--83 (1967). \bibitem{Kr71} M.\ G.\ Krein, {\it Fundamental aspects of the representation theory of Hermitean operators with deficiency index $(m,m)$}, Ukrain. Mat. Z. {\bf 1}, 3--66 (1949); Engl. transl. in Amer. Math. Soc. Transl., Ser.\ 2, {\bf 97}, 75--143 (1971). \bibitem{KO77} M.\ G.\ Krein and I.\ E.\ Ov\v carenko, {\it $Q$-functions and sc-resolvents of nondensely defined Hermitian contractions}, Sib. Math. J. {\bf 18}, 728--746 (1977). \bibitem{KO78} M.\ G.\ Krein and I.\ E.\ Ov\v carenko, {\it Inverse problems for $Q$-functions and resolvent matrices of positive Hermitian operators}, Sov. Math. Dokl. {\bf 19}, 1131--1134 (1978). \bibitem{LT77} H.\ Langer and B.\ Textorius, {\it On generalized resolvents and $Q$-functions of symmetric linear relations (subspaces) in Hilbert space}, Pacific J. Math. {\bf 72}, 135--165 (1977). \bibitem{LW00} A.\ Laptev and T.\ Weidl, {\it Sharp Lieb--Thirring inequalities in high dimensions}, Acta Math. {\bf 184}, 87--111 (2000). \bibitem{Ma92a} M.\ M.\ Malamud, {\it Certain classes of extensions of a lacunary Hermitian operator}, Ukrain. Math. J. {\bf 44}, 190--204 (1992). \bibitem{Ma92b} M.\ M.\ Malamud, {\it On a formula of the generalized resolvents of a nondensely defined hermitian operator}, Ukrain. Math. J. {\bf 44}, 1522--1547 (1992). \bibitem{MM02} M.\ M.\ Malamud and S.\ M.\ Malamud, {\it On the spectral theory of operator measures}, Funct. Anal. Appl. {\bf 36}, 154--158 (2002). \bibitem{MM04} M.\ M.\ Malamud and S.\ M.\ Malamud, {\it On the spectral theory of operator measures in Hilbert space}, St. Petersburg Math. J. {\bf 15}, 323--373 (2004). \bibitem{MN11} M.\ Malamud and H.\ Neidhardt, {\it On the unitary equivalence of absolutely continuous parts of self-adjoint extensions}, J. Funct. Anal. {\bf 260}, 613--638 (2011). \bibitem{MN12} M.\ Malamud and H.\ Neidhardt, {\it Sturm--Liouville boundary value problems with operator potentials and unitary equivalence}, J. Diff. Eq. 
{\bf 252}, 5875--5922 (2012). \bibitem{Ma04} M.\ Marletta, {\it Eigenvalue problems on exterior domains and Dirichlet to Neumann maps}, J. Comp. Appl. Math. {\bf 171}, 367--391 (2004). \bibitem{Mi78} J.\ Mikusi{\' n}ski, {\it The Bochner Integral}, Academic Press, New York, 1978. \bibitem{Mo07} V.\ I.\ Mogilevskii, {\it Description of spectral functions of differential operators with arbitrary deficiency indices}, Math. Notes {\bf 81}, 553--559 (2007). \bibitem{Mo09} V.\ Mogilevskii, {\it Boundary triplets and Titchmarsh--Weyl functions of differential operators with arbitrary deficiency indices}, Meth. Funct. Anal. Topology {\bf 15}, 280--300 (2009). \bibitem{Mo10} V.\ Mogilevskii, {\it Minimal spectral functions of an ordinary differential operator}, Proc. Edinburgh Math. Soc. {\bf 55}, 731--769 (2012). \bibitem{Na87} S.\ N.\ Naboko, {\it Uniqueness theorems for operator-valued functions with positive imaginary part, and the singular spectrum in the selfadjoint Friedrichs model}, Ark. Mat. {\bf 25}, 115--140 (1987). \bibitem{Na89} S.\ N.\ Naboko, {\it Boundary values of analytic operator functions with a positive imaginary part}, J. Soviet Math. {\bf 44}, 786--795 (1989). \bibitem{Na90} S.\ N.\ Naboko, {\it Nontangential boundary values of operator-valued $R$-functions in a half-plane}, Leningrad Math. J. {\bf 1}, 1255--1278 (1990). \bibitem{Na94} S.\ N.\ Naboko, {\it The boundary behavior of ${\mathfrak{S}}_p$-valued functions analytic in the half-plane with nonnegative imaginary part}, Functional Analysis and Operator Theory, Banach Center Publications, Vol.\ {\bf 30}, Institute of Mathematics, Polish Academy of Sciences, Warsaw, 1994, pp.\ 277--285. \bibitem{Na74} F.\ J.\ Narcowich, {\it Mathematical theory of the $R$ matrix. II. The $R$ matrix and its properties}, J. Math. Phys. {\bf 15}, 1635--1642 (1974). \bibitem{Na77} F.\ J.\ Narcowich, {\it $R$-operators II. 
On the approximation of certain operator-valued analytic functions and the Hermitian moment problem}, Indiana Univ. Math. J. {\bf 26}, 483--513 (1977). \bibitem{NA75} F.\ J.\ Narcowich and G.\ D.\ Allen, {\it Convergence of the diagonal operator-valued Pad\'e approximants to the Dyson expansion}, Commun. Math. Phys. {\bf 45}, 153--157 (1975). \bibitem{Pa13} K.\ Pankrashkin, {\it An example of unitary equivalence between self-adjoint extensions and their parameters}, J. Funct. Anal. {\bf 265}, 2910--2936 (2013). \bibitem{Pe38} B.\ J.\ Pettis, {\it On integration in vector spaces}, Trans. Amer. Math. Soc. {\bf 44}, 277--304 (1938). \bibitem{PR67} A.\ I.\ Plesner and V.\ A.\ Rohlin, {\it Spectral theory of linear operators}, Uspehi Matem. Nauk (N.\ S.) {\bf 1(11)}, No.\ 1, 71--191 (1946). (Russian.) Engl. transl. in Amer. Math. Soc. Transl. (2), {\bf 62}, 29--175 (1967). \bibitem{Po04} A.\ Posilicano, {\it Boundary triples and Weyl functions for singular perturbations of self-adjoint operators}, Meth. Funct. Anal. Topology {\bf 10}, 57--63 (2004). \bibitem{Re98} C.\ Remling, {\it Spectral analysis of higher order differential operators I: General properties of the $M$-function}, J. London Math. Soc., to appear. \bibitem{Ro60} F.\ S.\ Rofe-Beketov, {\it Expansions in eigenfunctions of infinite systems of differential equations in the non-self-adjoint and self-adjoint cases}, Mat. Sb. {\bf 51}, 293--342 (1960). (Russian.) \bibitem{RK05} F.\ S.\ Rofe-Beketov and A.\ M.\ Kholkin, {\it Spectral Analysis of Differential Operators. Interplay Between Spectral and Oscillatory Properties}, Monograph Series in Mathematics, Vol.\ 7, World Scientific, Singapore, 2005. \bibitem{Ry07} V.\ Ryzhov, {\it A general boundary value problem and its Weyl function}, Opuscula Math. {\bf 27}, 305--331 (2007). \bibitem{Sa71} Y.\ Sait{\= o}, {\it Eigenfunction expansions associated with second-order differential equations for Hilbert space-valued functions}, Publ. RIMS, Kyoto Univ. 
{\bf 7}, 1--55 (1971/72). \bibitem{Sh71} Yu.\ L.\ Shmul'yan, {\it On operator $R$-functions}, Siberian Math. J. {\bf 12}, 315--322 (1971). \bibitem{Tr00} I.\ Trooshin, {\it Asymptotics for the spectral and Weyl functions of the operator-valued Sturm--Liouville problem}, in {\it Inverse Problems and Related Topics}, G.\ Nakamura, S.\ Saitoh, J.\ K.\ Seo, and M.\ Yamamoto (eds.), Chapman \& Hall/CRC, Boca Raton, FL, 2000, pp.\ 189--208. \bibitem{Ts92} E.\ R.\ Tsekanovskii, {\it Accretive extensions and problems on the Stieltjes operator-valued functions realizations}, in {\it Operator Theory and Complex Analysis}, T.\ Ando and I.\ Gohberg (eds.), Operator Theory: Advances and Applications, Vol.\ 59, Birkh\"auser, Basel, 1992, pp.\ 328--347. \bibitem{VG70} L.\ I.\ Vainerman and M.\ L.\ Gorbachuk, {\it On self-adjoint semibounded abstract differential operators}, Ukrain. Math. J. {\bf 22}, 694--696 (1970). \bibitem{Yo80} K.\ Yosida, {\it Functional Analysis}, 6th ed., Springer, Berlin, 1980. \end{thebibliography} \end{document}
\begin{document} \title[Decay via viscoelastic boundary damping]{On the decay rate for the wave equation with viscoelastic boundary damping} \author[Reinhard Stahn]{Reinhard Stahn} \begin{abstract} We consider the wave equation with a boundary condition of memory type. Under natural conditions on the acoustic impedance $\hat{k}$ of the boundary one can define a corresponding semigroup of contractions \cite{DFMP2010a}. With the help of Tauberian theorems we establish energy decay rates via resolvent estimates on the generator $-\mathcal{A}$ of the semigroup. We reduce the problem of estimating the resolvent of $-\mathcal{A}$ to the problem of estimating the resolvent of the corresponding stationary problem. Under not too strict additional assumptions on $\hat{k}$ we establish an upper bound on the resolvent. For the wave equation on the interval or the disk or for certain acoustic impedances making $0$ a spectral point of $\mathcal{A}$ we prove our estimates to be sharp. \end{abstract} \maketitle {\let\thefootnote\relax\footnotetext{MSC2010: Primary 35B40, 35L05. Secondary 35P20, 47D06.}} {\let\thefootnote\relax\footnotetext{Keywords and phrases: wave equation, viscoelastic, energy, resolvent estimates, singularity at zero, memory, $C_0$-semigroups.}} \section{Introduction}\label{sec: Introduction} Let $\Omega\subset\mathbb{R}^d$ be a bounded domain with Lipschitz boundary and $k:\mathbb{R}\rightarrow[0,\infty)$ be an integrable function, depending on the time-variable only and vanishing on $(-\infty,0)$. We consider a model for the reflection of sound on a wall \cite{PrussProbst1996}: \begin{equation}\label{eq: wave equation} \begin{cases} U_{tt}(t,x) - \Delta U(t,x) = 0 & (t\in\mathbb{R}, x\in\Omega), \\ \partial_n U(t,x) + k* U_t(t,x) = 0 & (t\in\mathbb{R}, x\in\partial\Omega). \end{cases} \end{equation} The function $U$ is called the \textit{velocity potential}. 
One can derive the acoustic pressure $p(t,x)=U_t(t,x)$ and fluid velocity $v(t,x)=-\nabla U(t,x)$ from $U$. The second formula gives the velocity potential its name. The convolution is defined by the usual formula $k*U_t(t,x)=\int_0^{\infty} k(r)U_t(t-r,x) dr$. Here $n$ is the outward normal vector of $\partial\Omega$, which exists almost everywhere for Lipschitz domains. Furthermore $\partial_n$ denotes the normal derivative on the boundary. We assume that $k\in L^1(0,\infty)$ is a completely monotonic function\footnote{We use the convention to identify functions defined on the interval $[0,\infty)$ with functions defined on $\mathbb{R}$ but zero to the left of $t=0$.}. That is, there exists a positive Radon measure $\nu$ on $[0,\infty)$ such that $k(t)=\int_{[0,\infty)}e^{-\tau t} d\nu(\tau)$. We note here that the integrability assumption on $k$ is easily checked to be equivalent to \begin{equation}\label{eq: k is integrable} \nu(\{0\})=0 \text{ and } \int_0^{\infty}\tau^{-1} d\nu(\tau)<\infty. \end{equation} Let $e_{\tau}(t)=e^{-\tau t} 1_{[0,\infty)}(t)$ and \begin{align*} \psi(t,\tau,x) = e_{\tau} * U_t(t,x) \quad (t\in\mathbb{R}, \tau\geq0, x\in\partial\Omega) . \end{align*} Informally it is not difficult to see that (\ref{eq: wave equation}) for $t>0$ in conjunction with the information that $p=U_t$ and $v=-\nabla U$ at time $t=0$ and $\int_0^{\infty}k(t)U_t(-t)dt$ at the boundary (the \emph{``essential''} data from the past) is equivalent to \begin{equation}\label{eq: physical wave equation} \begin{cases} p_t(t,x) + \Div v(t,x) = 0 & (t>0, x\in\Omega), \\ v_t(t,x) + \nabla p(t,x) = 0 & (t>0, x\in\Omega), \\ [\psi_t+\tau\psi-p](t,\tau,x) = 0 & (t>0, \tau>0, x\in\partial\Omega), \\ \left[-v\cdot n + \int_0^{\infty} \psi(\tau) d\nu(\tau)\right](t,x) = 0 & (t>0, x\in\partial\Omega), \end{cases} \end{equation} and the information of the initial state $\mathbf{x}_0=(p_0, v_0, \psi_0)$ of the system at time $t=0$. 
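Since the passage from (\ref{eq: wave equation}) to (\ref{eq: physical wave equation}) is only sketched above, the following formal verification (our computation, carried out for sufficiently smooth $U$) may be helpful.

```latex
With $p=U_t$ and $v=-\nabla U$ one has
\begin{align*}
p_t + \Div v = U_{tt} - \Delta U = 0 ,
\qquad
v_t + \nabla p = -\nabla U_t + \nabla U_t = 0 ,
\end{align*}
which are the first two lines of (\ref{eq: physical wave equation}). Moreover, writing
$\psi(t,\tau,x)=\int_{-\infty}^{t} e^{-\tau(t-s)}\,U_t(s,x)\,ds$ and differentiating in $t$
yields $\psi_t + \tau\psi = U_t = p$ on the boundary, i.e.\ the third line. Finally, by
Fubini's theorem and $k(r)=\int_{[0,\infty)}e^{-\tau r}\,d\nu(\tau)$,
\begin{align*}
\int_0^{\infty} \psi(t,\tau,x)\, d\nu(\tau)
 = \int_0^{\infty} k(r)\, U_t(t-r,x)\, dr
 = (k * U_t)(t,x) ,
\end{align*}
so that, in view of $\partial_n U = -v\cdot n$, the memory boundary condition in
(\ref{eq: wave equation}) is precisely the fourth line of (\ref{eq: physical wave equation}).
```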
It is important to observe that $p_0$ and $v_0$ cannot fully describe the system's state at $t=0$ since there are memory effects at the boundary. The missing data from the past is stored in the auxiliary function $\psi$. Let us define the energy of the system to be the sum of potential, kinetic and boundary energy: \begin{align*} E(\mathbf{x}_0) = \int_{\Omega} \abs{p_0(x)}^2 + \abs{v_0(x)}^2 dx + \int_0^{\infty}\int_{\partial\Omega} \abs{\psi_0(\tau,x)}^2 dS(x) d\nu(\tau) . \end{align*} Furthermore we introduce the homogeneous first order energy by \begin{align*} E_1^{hom}(\mathbf{x}_0) = \int_{\Omega} \abs{\nabla p_0}^2 + \abs{\Div v_0}^2 dx + \int_0^{\infty}\int_{\partial\Omega} \abs{\tau\psi_0-p_0}^2 dS d\nu(\tau). \end{align*} The first order energy is defined by $E_1=E+E_1^{hom}$. Let us define the (zeroth order) energy space, and the first order energy space by \begin{align} \label{eq: energy space} \mathcal{H} &= \mathcal{H}_0= L^2(\Omega) \times \nabla H^1(\Omega) \times L^2_{\nu}((0,\infty)_{\tau}; L^2(\partial\Omega)) , \\ \label{eq: first order energy space} \mathcal{H}_1 &= \{\mathbf{x}_0\in\mathcal{H}: E_1(\mathbf{x}_0)<\infty \text{ and } \left[-v_0\cdot n|_{\partial\Omega} + \int_0^{\infty} \psi_0(\tau) d\nu(\tau)\right] = 0\} . \end{align} Here $\nabla H^1(\Omega)$ is the space of vector fields $v\in (L^2(\Omega))^d$ for which there exists a function (potential) $U\in H^1(\Omega)$ such that $v=-\nabla U$. We note that the space of gradient fields $\nabla H^1(\Omega)$ is a closed subspace of $(L^2(\Omega))^d$ since $\Omega$ satisfies the Poincar\'e inequality\footnote{Poincar\'e inequality: If $\Omega$ is a bounded Lipschitz domain then there exists a $C>0$ such that for all $p\in H^1(\Omega)$ with $\int_{\Omega}p=0$ we have $\int_{\Omega}\abs{p}^2 \leq C\int_{\Omega}\abs{\nabla p}^2$.}. 
To give the boundary condition appearing in the definition of $\mathcal{H}_1$ a precise meaning we use that the trace operator $\Gamma:H^1(\Omega)\rightarrow H^{1/2}(\partial\Omega), u\mapsto u|_{\partial\Omega}$ is continuous and has a continuous right inverse. Therefore $v\cdot n |_{\partial\Omega}$ is well defined as an element of $H^{-1/2}(\partial\Omega)=(H^{1/2}(\partial\Omega))^*$ for vector fields $v\in (L^2(\Omega))^d$ with $\Div v\in L^2(\Omega)$ by the relation \begin{equation}\label{eq: vn} \dual{v\cdot n}{\Gamma u}_{H^{-\frac{1}{2}}\times H^{\frac{1}{2}}(\partial\Omega)} = \int_{\Omega} \Div v \overline{u} + \int_{\Omega} v\cdot\nabla \overline{u} \end{equation} for all $u\in H^1(\Omega)$. Also note that $E_1(\mathbf{x}_0)<\infty$ implies $\psi_0\in L^1_{\nu}$ since $\psi_0(\tau)=\frac{\psi_0(\tau)}{1+\tau}+\frac{\Gamma p_0}{1+\tau}+\frac{\tau\psi_0(\tau)-\Gamma p_0}{1+\tau}$ and $(\tau\mapsto \frac{1}{1+\tau})\in L^1_{\nu}\cap L^2_{\nu}(0,\infty)$ by (\ref{eq: k is integrable}).\footnote{Here and in the following we abbreviate $L^p_{\nu}((0,\infty)_{\tau}; L^2(\partial\Omega))$ simply by $L^p_{\nu}$ for $p\in\{1,2\}$.} The quadratic forms $E$ and $E_1$ turn $\mathcal{H}$ and $\mathcal{H}_1$ into Hilbert spaces, respectively. An initial state $\mathbf{x}_0$ is called \emph{classical} if its first order energy is finite and the boundary condition is satisfied (i.e.\ $\mathbf{x}_0\in\mathcal{H}_1$). We say that $\mathbf{x}\in C^1([0,\infty); \mathcal{H})\cap C([0,\infty);\mathcal{H}_1)$ is a \emph{(classical) solution} of (\ref{eq: physical wave equation}) if it satisfies the first two lines in the sense of distributions and the last two lines in the trace sense, i.e.\ with $v\cdot n$ defined by (\ref{eq: vn}) and $p$ replaced by $\Gamma p$. 
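The name ``energy'' is justified by the following formal dissipation identity (our computation, carried out for a classical solution, using the equations in (\ref{eq: physical wave equation}) and the pairing (\ref{eq: vn}) with $u=p$):

```latex
\begin{align*}
\frac{d}{dt} E(\mathbf{x}(t))
 &= 2\operatorname{Re}\bigg[\int_{\Omega} p_t\overline{p} + v_t\cdot\overline{v}\, dx
    + \int_0^{\infty}\!\!\int_{\partial\Omega} \psi_t\overline{\psi}\, dS\, d\nu\bigg] \\
 &= -2\operatorname{Re}\bigg[\int_{\Omega} (\Div v)\overline{p} + v\cdot\nabla\overline{p}\, dx\bigg]
    + 2\operatorname{Re}\int_0^{\infty}\!\!\int_{\partial\Omega} (\Gamma p - \tau\psi)\overline{\psi}\, dS\, d\nu \\
 &= -2\operatorname{Re}\,\dual{v\cdot n}{\Gamma p}
    + 2\operatorname{Re}\int_0^{\infty}\!\!\int_{\partial\Omega} \Gamma p\,\overline{\psi}\, dS\, d\nu
    - 2\int_0^{\infty}\!\!\int_{\partial\Omega} \tau\abs{\psi}^2\, dS\, d\nu \\
 &= -2\int_0^{\infty}\!\!\int_{\partial\Omega} \tau\abs{\psi}^2\, dS\, d\nu \;\leq\; 0 ,
\end{align*}
```

where the last step uses the boundary condition $v\cdot n = \int_0^{\infty}\psi\, d\nu$, so that $\operatorname{Re}\dual{v\cdot n}{\Gamma p} = \operatorname{Re}\int_0^{\infty}\int_{\partial\Omega} \Gamma p\,\overline{\psi}\, dS\, d\nu$. In particular the energy is nonincreasing, in accordance with the contractivity asserted in Theorem \ref{thm: waves well posed} below.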
From Theorem \ref{thm: waves well posed} below plus basics from the theory of $C_0$-semigroups it follows that the initial value problem corresponding to (\ref{eq: physical wave equation}) is well-posed in the sense that for all classical initial data $\mathbf{x}_0\in\mathcal{H}_1$ there is a unique solution $\mathbf{x}$ with $\mathbf{x}(0)=\mathbf{x}_0$, and the mapping $\mathcal{H}_j\owns \mathbf{x}_0\mapsto \mathbf{x}\in C([0,\infty);\mathcal{H}_j)$ is continuous for $j\in\{0,1\}$. For a solution $\mathbf{x}$ with $\mathbf{x}_0=\mathbf{x}(0)$ we also write e.g.\ $E(t,\mathbf{x}_0)$ instead of $E(\mathbf{x}(t))$. Note that $E_1^{hom}(\mathbf{x}(t)) = E(\dot{\mathbf{x}}(t))$; this justifies the adjective ``homogeneous'' for the quadratic form $E_1^{hom}$. Our aim is to find the optimal decay rate of the energy, uniformly with respect to classical initial states. This means that we want to find the smallest possible decreasing function $N:[0,\infty)\rightarrow[0,\infty)$ such that \begin{align*} E(t,\mathbf{x}_0) \leq N(t)^2 E_1(\mathbf{x}_0) \end{align*} for all $\mathbf{x}_0\in\mathcal{H}_1$. Because of Theorems \ref{thm: Batty-Duyckaerts}, \ref{thm: Borichev-Tomilov} and \ref{thm: Martinez} this is essentially equivalent to estimating the resolvent of the wave equation's generator $\mathcal{A}$ (defined in Section \ref{sec: Semigroup approach} below) along the imaginary axis near infinity and near zero. Our two main results are Theorems \ref{thm: Preliminary estimates} and \ref{thm: upper resolvent estimate}. Sections \ref{sec: Correspondence between resolvents} and \ref{sec: upper resolvent estimate} are devoted to the proofs. We illustrate the application of our main results to energy decay by several examples in Section \ref{sec: examples}. 
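For orientation we state informally, and without its exact hypotheses, the kind of resolvent-to-decay correspondence these theorems provide on Hilbert spaces (a precise version, in the spirit of Borichev and Tomilov, requires in particular that the imaginary axis belongs to the resolvent set of $-\mathcal{A}$):

```latex
For $\alpha>0$,
\[
\norm{(is+\mathcal{A})^{-1}}_{\mathcal{B}(\mathcal{H})} = O\big(\abs{s}^{\alpha}\big)
\text{ as } \abs{s}\to\infty
\quad\Longleftrightarrow\quad
\norm{e^{-t\mathcal{A}}\mathcal{A}^{-1}}_{\mathcal{B}(\mathcal{H})} = O\big(t^{-1/\alpha}\big)
\text{ as } t\to\infty .
\]
```

A bound on $\norm{e^{-t\mathcal{A}}\mathcal{A}^{-1}}$ then translates into a bound on $N(t)$, since $E(t,\mathbf{x}_0)$ is the squared $\mathcal{H}$-norm of $e^{-t\mathcal{A}}\mathbf{x}_0$ and $E_1$ is the square of the graph norm of $\mathcal{A}$. When $0$ is a spectral point of $\mathcal{A}$, the behavior of the resolvent near zero enters as well, which is why both regimes are studied below.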
Our first main result (Theorem \ref{thm: Preliminary estimates}) implies in particular that the task of estimating the resolvent of the complicated $3\times 3$-matrix operator $\mathcal{A}$ is equivalent to estimating the resolvent of the corresponding (and much simpler) stationary operator. Our second main result (Theorem \ref{thm: upper resolvent estimate}) then establishes an upper resolvent estimate for $\mathcal{A}$ at infinity. Unfortunately we need \emph{additional assumptions} on the acoustic impedance (see (\ref{eq: additional assumptions})). However, in our separate treatment of the $\Omega=(0,1)$-setting in Section \ref{sec: 1D case} we see that there no additional assumptions are required for the conclusion of Theorem \ref{thm: upper resolvent estimate} to hold. Even more is true: the given upper bound on the resolvent is also optimal in the 1D setting. This and observations from the examples lead us to three questions and corresponding conjectures formulated in Section \ref{sec: Conclusion}. In Section \ref{sec: Semigroup approach} we recall the semigroup approach from \cite{DFMP2010a}. For the convenience of the reader we recall some basic and some not so basic facts from the literature concerning the trace operator, fractional Sobolev spaces and Besov spaces in Appendix \ref{apx: Traces}. In Appendix \ref{apx: semiuniform stability} we recall some Batty-Duyckaerts type theorems. To readers interested in the physical background of equation (\ref{eq: wave equation}) we recommend \cite{IngardMorse}. \section{The semigroup approach}\label{sec: Semigroup approach} We reformulate (\ref{eq: physical wave equation}) as an abstract Cauchy problem in a Hilbert space: \begin{equation}\label{CP} \begin{cases} \dot{\mathbf{x}}(t) + \mathcal{A} \mathbf{x}(t) = 0 , \\ \mathbf{x}(0) = \mathbf{x}_0 \in \mathcal{H} . 
\end{cases} \end{equation} Following the approach of \cite{DFMP2010a} we define the energy/state space $\mathcal{H}$ as in (\ref{eq: energy space}) and write $\mathbf{x}=(p,v,\psi)$ for its elements (the states). Again let $\Gamma:H^1(\Omega)\rightarrow H^{1/2}(\partial\Omega), u\mapsto u|_{\partial\Omega}$ be the trace operator on $\Omega$. By abuse of notation let $\tau$ denote the multiplication operator on $L^2_{\nu}(0,\infty)$ mapping $\psi(\tau)$ to $\tau\psi(\tau)$. We define the wave operator by \begin{align*} \mathcal{A} = \left( \begin{array}{ccc} 0 & \Div & 0 \\ \nabla & 0 & 0 \\ -\Gamma & 0 & \tau \end{array}\right) \text{ with } D(\mathcal{A}) = \mathcal{H}_1 . \end{align*} Note that $E_1(\mathbf{x}_0) = \norm{\mathbf{x}_0}_{D(\mathcal{A})}^2 = \norm{\mathbf{x}_0}_{\mathcal{H}}^2 + \norm{\mathcal{A} \mathbf{x}_0}_{\mathcal{H}}^2$ for all $\mathbf{x}_0\in D(\mathcal{A})$. \begin{Theorem}[\cite{DFMP2010a}]\label{thm: waves well posed} The Cauchy problem (\ref{CP}) is well-posed. More precisely, $-\mathcal{A}$ is the generator of a $C_0$-semigroup of contractions in $\mathcal{H}$. \end{Theorem} Taking the formal Laplace transform of the wave equation (\ref{eq: wave equation}) yields \begin{equation}\label{eq: stationary wave equation} \begin{cases} z^2u(x) - \Delta u(x) = f & (x\in\Omega), \\ \partial_n u(x) + z\hat{k}(z)u(x) = g & (x\in\partial\Omega). \end{cases} \end{equation} Here $z$ is a complex number and formally $u=\hat{U}(z)=\int_0^{\infty} e^{-zt}U(t)dt$, $f=zU(0)+U_t(0)$ and $g=\hat{k}(z)U(0)|_{\partial\Omega}$. A way to give (\ref{eq: stationary wave equation}) a precise meaning is via the method of forms. Thus for $z\in\mathbb{C}\backslash(-\infty,0)$ let us define the bounded sesquilinear form $a_z:H^1(\Omega)\times H^1(\Omega)\rightarrow\mathbb{C}$ by \begin{equation}\nonumber a_z(p, u) = z^2\int_{\Omega}p\overline{u} + \int_{\Omega} \nabla p \cdot\nabla\overline{u} + z\hat{k}(z)\int_{\partial\Omega} \Gamma p\Gamma \overline{u} dS.
\end{equation} If we replace the right-hand side $f,g$ by $F\in H^1(\Omega)^*$ (the dual space of $H^1(\Omega)$), given by $\dual{F}{\eta}=\int_{\Omega}f\overline{\eta}+\int_{\partial\Omega} g\Gamma\overline{\eta}dS$, then a functional analytic realization of (\ref{eq: stationary wave equation}) is given by \begin{equation}\label{eq: stationary wave equation FA} \forall \eta\in H^1(\Omega):\, a_z(u,\eta) = \dual{F}{\eta}_{(H^1)^*,H^1(\Omega)} . \end{equation} For all $z\in\mathbb{C}\backslash(-\infty,0)$ for which (\ref{eq: stationary wave equation FA}) has a unique solution $u\in H^1(\Omega)$ for every $F\in H^1(\Omega)^*$ we define the stationary resolvent operator $R(z):H^1(\Omega)^*\rightarrow H^1(\Omega), F\mapsto u$. \begin{Theorem}[\cite{DFMP2010a}]\label{thm: spectrum DFMP} The spectrum of the wave operator satisfies \begin{align*} \sigma(-\mathcal{A})\backslash(-\infty,0] &= \{z\in\mathbb{C}\backslash(-\infty,0]: R(z) \text{ does not exist.}\} \\ &\subseteq \{z\in\mathbb{C}: \operatorname{Re} z < 0\}. \end{align*} Furthermore, all spectral points in $\mathbb{C}\backslash(-\infty,0]$ are eigenvalues. \end{Theorem} Following the proof of the preceding theorem given in \cite{DFMP2010a} one sees that for $s\in\mathbb{C}\backslash i[0,\infty)$ \begin{equation}\label{A} (is+\mathcal{A}) (p, v, \psi) = (q, w, \varphi) \in \mathcal{H} \end{equation} is equivalent to \begin{align} \label{R} \forall u\in H^1(\Omega): a_{is}(p, u) = \dual{F}{u}_{(H^1)^*,H^1(\Omega)} \\ \nonumber \text{ and } v = \frac{w + \nabla p}{is}, \, \psi(\tau) = \frac{\Gamma p + \varphi(\tau)}{is + \tau}, \end{align} where \begin{align}\nonumber \dual{F}{u} &= is\int_{\Omega} q \overline{u} - \int_{\Omega} w\cdot\nabla\overline{u} - is\int_{\partial\Omega} \left[ \int_0^{\infty} \frac{\varphi(\tau)}{is+\tau} d\nu(\tau) \right] \Gamma u\, dS \\ \label{eq: F123} &=: \dual{F_1}{u} + \dual{F_2}{u} + \dual{F_3}{u} .
\end{align} Observe that the adjoint operator of $R(z)$ is given by $R(z)^* = R(\overline{z})$ for all $z\in\mathbb{C}\backslash(-\infty,0)$ for which $R(z)$ is defined. Finally we mention: \begin{Theorem}[\cite{DFMP2010a}]\label{thm: A is injective} The wave operator $\mathcal{A}$ is injective. \end{Theorem} In the next section we characterize all kernels $k$ for which $\mathcal{A}$ is invertible. \section{A correspondence between $(is+\mathcal{A})^{-1}$ and $R(is)$}\label{sec: Correspondence between resolvents} In this section we prove our first main result. \begin{Theorem}\label{thm: Preliminary estimates} The following holds: \begin{itemize} \item[(i)] Let $M:(0,\infty)\rightarrow[1,\infty)$ be an increasing function. Then \begin{align*} &\left[\exists s_1>0 \forall \abs{s}\geq s_1: \norm{(is+\mathcal{A})^{-1}} \leq C M(\abs{s})\right] \\ \Leftrightarrow &\left[\exists s_2>0 \forall \abs{s}\geq s_2: \norm{R(is)}_{L^2\rightarrow L^2} \leq C \abs{s}^{-1}M(\abs{s}) \right]. \end{align*} \item[(ii)] $\exists s_3>0 \forall\, 0<\abs{s}\leq s_3: \norm{(is+\mathcal{A})^{-1}} \leq C \abs{s}^{-1}$. \item[(iii)] $\mathcal{A}$ is invertible iff $(\tau\mapsto\tau^{-1})\in L^{\infty}_{\nu}$, i.e. $\exists \varepsilon>0: \nu|_{(0,\varepsilon)}=0$. \end{itemize} \end{Theorem} If $\mathcal{A}$ is not invertible we deduce from Theorem \ref{thm: A is injective} that $\mathcal{A}$ cannot be surjective in this case. In Section \ref{sec: range of A} we characterize the range of $\mathcal{A}$. \subsection{Singularity at $\infty$} In this subsection we prove Theorem \ref{thm: Preliminary estimates} (i). To this end let us first define the auxiliary spaces $X^{\theta}$ by the real interpolation method: \begin{equation}\nonumber X^{\theta} = \begin{cases} L^2(\Omega) \text{ resp. } H^1(\Omega) & \text{if } \theta = 0 \text{ resp. } 1, \\ (L^2(\Omega), H^1(\Omega))_{\theta, 1} & \text{if } \theta\in(0,1), \\ (X^{-\theta})^* & \text{if } \theta\in[-1, 0).
\end{cases} \end{equation} For $\theta\in(0,1)$ the space $X^{\theta}$ coincides with the Besov space $B^{\theta, 2}_1(\Omega)$. Let us explain why we use the Besov spaces $X^{\theta}$ instead of the Bessel potential spaces $H^{\theta}(\Omega)$. The reason is that while the trace operator $\Gamma: H^{\theta}(\Omega)\rightarrow H^{\theta-1/2}(\partial\Omega)$ is continuous for $\theta\in(1/2,1]$, this is no longer true for $\theta=1/2$ (with the convention $H^0=L^2$). On the other hand $\Gamma: X^{1/2}\rightarrow L^2(\partial\Omega)$ is indeed continuous (see Proposition \ref{thm: Trace} in the appendix). A corollary of this fact is that for some $C>0$ \begin{equation}\label{eq: Trace inequality} \forall u\in H^1(\Omega): \norm{\Gamma u}_{L^2(\partial\Omega)}^2 \leq C \norm{u}_{L^2(\Omega)}\norm{u}_{H^1(\Omega)}. \end{equation} Actually, by Lemma \ref{thm: interpolation lemma}, the preceding trace inequality is equivalent to the continuity of the trace operator $\Gamma:X^{1/2}\rightarrow L^2(\partial\Omega)$. Let us prove the following extrapolation result. \begin{Proposition}\label{thm: interpolation at infinity} Let $M:(1,\infty)\rightarrow[1,\infty)$ be an increasing function. If \begin{equation}\label{eq: interpolation at infinity} \norm{R(is)}_{X^{-a}\rightarrow X^{b}} = O(\abs{s}^{a+b-1}M(\abs{s})) \text{ as } \abs{s}\rightarrow \infty \end{equation} is true for $a=b=0$, then it is also true for all $a,b\in[0,1]$. \end{Proposition} \begin{proof} Throughout the proof we may assume $\abs{s}$ to be sufficiently large. Assume that (\ref{eq: interpolation at infinity}) is true for $a = b = 0$. Let $f\in L^2(\Omega)$ and $p=R(is)f$, i.e. \begin{equation}\nonumber \forall u\in H^1(\Omega):\, a_{is}(p,u) = \int_{\Omega}f\overline{u} . \end{equation} Because of (\ref{eq: Trace inequality}) and the uniform boundedness of $\hat{k}(is)$ there are constants $c,C>0$ such that $\operatorname{Re} a_{is}(p,p)\geq c\norm{p}_{H^1}^2-Cs^2\norm{p}_{L^2}^2$.
This helps us to estimate \begin{align*} c\norm{p}_{H^1}^2 &\leq \operatorname{Re} a_{is}(p,p) + Cs^2\norm{p}_{L^2}^2 \\ &\leq \norm{f}_{L^2}\norm{p}_{L^2} + Cs^2\norm{p}_{L^2}^2 \\ &\leq s^{-2}\norm{f}_{L^2}^2 + Cs^2\norm{p}_{L^2}^2 \\ &\leq C M(\abs{s})^2 \norm{f}_{L^2}^2, \end{align*} where we used Young's inequality in the third step and (\ref{eq: interpolation at infinity}) for $a=b=0$ in the last step. In other words, (\ref{eq: interpolation at infinity}) is true for $a=0, b=1$. By duality (recall $R(z)^* = R(\overline{z})$) it is also true for $a=1, b=0$. Almost the same calculation as above, but now with the help of (\ref{eq: interpolation at infinity}) for the now known case $a=1, b=0$, shows that (\ref{eq: interpolation at infinity}) is also true for $a=1, b=1$. It remains to interpolate. First interpolate between the parameters $(a=0,b=1)$ and $(a=1,b=1)$ to get (\ref{eq: interpolation at infinity}) for $a\in[0,1], b=1$. Then interpolate between the parameters $(a=0,b=0)$ and $(a=1,b=0)$ to get (\ref{eq: interpolation at infinity}) for $a\in[0,1], b=0$. One last interpolation gives us the desired result. \end{proof} Let us proceed with the proof of Theorem \ref{thm: Preliminary estimates} part (i). The implication ``$\Rightarrow$'' follows immediately from the equivalence of (\ref{A}) and (\ref{R}) with $w, \varphi = 0$. For the converse implication we have to show $\norm{\mathbf{x}}_{\mathcal{H}}\leq CM(\abs{s}) \norm{\mathbf{y}}_{\mathcal{H}}$, for all large $\abs{s}$ and for all $\mathbf{x}=(p,v,\psi)\in D(\mathcal{A}), \mathbf{y}=(q,w,\varphi)\in\mathcal{H}$ satisfying (\ref{A}), where $C$ does not depend on $s$ and $\mathbf{y}$. Let $F_j$ for $j\in\{1,2,3\}$ be defined by (\ref{eq: F123}) and let $p_j$ satisfy \begin{equation}\nonumber \forall u\in H^1(\Omega): a_{is}(p_j, u) = \dual{F_j}{u}_{(H^1)^*,H^1(\Omega)}. \end{equation} \emph{Case $j=1$.} It is clear that $\norm{F_1}_{L^2}=\abs{s}\norm{q}_{L^2}$. By Proposition \ref{thm: interpolation at infinity} we have $\norm{p_1}_{X^b}=O(\abs{s}^bM(\abs{s}))\norm{q}_{L^2}$ for all $b\in[0,1]$.
\emph{Case $j=2$.} It is clear that $\norm{F_2}_{X^{-1}}\leq\norm{w}_{L^2}$. By Proposition \ref{thm: interpolation at infinity} we have $\norm{p_2}_{X^b}=O(\abs{s}^bM(\abs{s}))\norm{w}_{L^2}$ for all $b\in[0,1]$. \emph{Case $j=3$.} By the continuity of the trace $\Gamma:X^{1/2}\rightarrow L^2(\partial\Omega)$, H\"older's inequality and (\ref{eq: k is integrable}) we have \begin{align*} \norm{F_3}_{X^{-\frac{1}{2}}} &\leq C \abs{s} \norm{\int_0^{\infty} \frac{\varphi(\tau)}{is+\tau} d\nu(\tau)}_{L^2(\partial\Omega)} \\ &\leq C \abs{s}^{\frac{1}{2}} \norm{\varphi}_{L_{\nu}^2}. \end{align*} Again by Proposition \ref{thm: interpolation at infinity} this yields $\norm{p_3}_{X^b}=O(\abs{s}^bM(\abs{s}))\norm{\varphi}_{L_{\nu}^2}$ for all $b\in[0,1]$. Overall we derived the estimate $\norm{p}_{X^b}=O(\abs{s}^b M(\abs{s})) \norm{\mathbf{y}}_{\mathcal{H}}$ for all $b\in[0,1]$. Finally, this together with (\ref{R}) implies \begin{align*} \norm{v}_{L^2} &\leq C\abs{s}^{-1}(\norm{w}_{L^2} + \norm{p}_{H^1}) \\ &\leq CM(\abs{s})\norm{\mathbf{y}}_{\mathcal{H}} \end{align*} and \begin{align*} \norm{\psi}_{L_{\nu}^2} &\leq \abs{s}^{-1}\norm{\varphi}_{L_{\nu}^2} + \norm{\Gamma p}_{L^2}\left(\int_0^{\infty}\frac{1}{\abs{is+\tau}^2} d\nu(\tau)\right)^{\frac{1}{2}} \\ &\leq \abs{s}^{-1}\norm{\varphi}_{L_{\nu}^2} + C\abs{s}^{-\frac{1}{2}}\norm{p}_{X^{\frac{1}{2}}} \\ &\leq CM(\abs{s})\norm{\mathbf{y}}_{\mathcal{H}}. \end{align*} This concludes the proof of Theorem \ref{thm: Preliminary estimates} part (i). \subsection{Singularity at $0$} Now we prove Theorem \ref{thm: Preliminary estimates} (ii). For $s\neq 0$ we equip the Sobolev space $H^1(\Omega)$ with the equivalent norm $\norm{u}_{H_s^1}^2:=\norm{u}_{L^2}^2+\norm{s^{-1}\nabla u}_{L^2}^2$. In what follows we are interested in the asymptotics $s\rightarrow0$ while $s\neq0$. 
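Both here and in the preceding subsection we make use of the following elementary observation (recall that $\hat{k}(0)=\int_0^{\infty}\tau^{-1}d\nu(\tau)$ is finite): since $\abs{is+\tau}^2 = s^2+\tau^2 \geq \abs{s}\tau$ for all $\tau>0$, we have \begin{equation}\nonumber \int_0^{\infty}\frac{1}{\abs{is+\tau}^2} d\nu(\tau) \leq \frac{1}{\abs{s}}\int_0^{\infty}\frac{1}{\tau} d\nu(\tau) = \frac{\hat{k}(0)}{\abs{s}} , \end{equation} which is the bound behind the factor $\abs{s}^{-\frac{1}{2}}$ in the estimates of $F_3$ and $\psi$.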
As in the preceding subsection we introduce some auxiliary spaces by the real interpolation method \begin{equation}\nonumber X^{\theta}_s = \begin{cases} L^2(\Omega) \text{ resp. } H^1_s(\Omega) & \text{if } \theta = 0 \text{ resp. } 1, \\ (L^2(\Omega), H^1_s(\Omega))_{\theta, 1} & \text{if } \theta\in(0,1), \\ (X^{-\theta}_s)^* & \text{if } \theta\in[-1, 0). \end{cases} \end{equation} We prove an analog of Proposition \ref{thm: interpolation at infinity}, but now without the unknown function $M$. \begin{Proposition}\label{thm: interpolation at 0} Let $a, b\in [0,1]$ and $\theta_+=\max\{a+b-1, 0\}$, then \begin{equation}\label{eq: interpolation at 0} \norm{R(is)}_{X^{-a}_s\rightarrow X^{b}_s} = O(\abs{s}^{-1-\theta_+}) \text{ as } s\rightarrow 0 . \end{equation} \end{Proposition} Before we can prove this proposition we show \begin{Lemma}\label{thm: Maz'ya} There is a constant $C(\Omega)$ depending solely on the dimension and the volume of $\Omega$ such that for all $u\in H^1(\Omega)$ \begin{equation}\nonumber \int_{\Omega} \abs{\nabla u}^2 + \int_{\partial\Omega} \abs{u}^2 dS \geq C(\Omega) \int_{\Omega} \abs{u}^2 . \end{equation} \end{Lemma} \begin{proof} For the dimension $d=1$ this is an easy exercise for the reader. For $d\geq2$ we recall the isoperimetric inequality of Maz'ya \cite[Chapter 5.6]{Maz'ya}, which is valid for all functions $v\in W^{1,1}(\Omega)$: \begin{equation}\nonumber \int_{\Omega} \abs{\nabla v} + \int_{\partial\Omega} \abs{v} dS \geq \frac{d\sqrt{\pi}}{\Gamma(1+\frac{d}{2})^{\frac{1}{d}}} \left(\int_{\Omega} \abs{v}^{\frac{d}{d-1}}\right)^{\frac{d-1}{d}} . \end{equation} The right-hand side can easily be estimated from below by a constant times the $L^1(\Omega)$-norm of $v$ since $\Omega$ is bounded. The conclusion now follows by plugging in $v=\abs{u}^2$ and using $\int_{\Omega}\abs{\nabla \abs{u}^2}\leq 2\norm{u}_{L^2(\Omega)}\norm{\nabla u}_{L^2(\Omega)}$. \end{proof} \begin{proof}[Proof of Proposition \ref{thm: interpolation at 0}.]
Because of (\ref{eq: Trace inequality}) and the continuity of $\mathbb{R}\owns s\mapsto \hat{k}(is)$ at zero we have for all $u\in H^1(\Omega)$ \begin{equation}\nonumber a_{is}(u,u) = \int_{\Omega} \abs{\nabla u}^2 + is\hat{k}(0)\int_{\partial\Omega} \abs{u}^2 dS + o(1)\norm{\nabla u}_{L^2}^2 + O(s^2)\norm{u}^2_{L^2} . \end{equation} Thus for sufficiently small $\abs{s}$ we deduce from Lemma \ref{thm: Maz'ya} and the fact $\hat{k}(0)>0$ that for all solutions $p\in H^1(\Omega)$ of the stationary wave equation (\ref{R}) with $F=f\in L^2(\Omega)$ the following estimate holds: \begin{align*} \abs{s}\norm{p}_{L^2}^2 &\leq C \abs{a_{is}(p,p)} = C \abs{\dual{f}{p}} \\ &\leq C\abs{s}^{-1}\norm{f}^2_{L^2} + \frac{\abs{s}}{2}\norm{p}_{L^2}^2 . \end{align*} This shows (\ref{eq: interpolation at 0}) in the case $a=b=0$. Let us define the semi-linear functional \begin{equation}\nonumber G_s(u) = -s\int_{\Omega} \overline{u} + i\hat{k}(is)\int_{\partial\Omega} \overline{u} dS \end{equation} for $u\in H^1(\Omega)$. Observe that $G_s(1)\rightarrow i\hat{k}(0)\abs{\partial\Omega}\neq0$ as $s$ tends to $0$. It is easy to see from Poincar\'e's inequality (recall that $\Omega$ has Lipschitz boundary) that the expression $\norm{\nabla u}_{L^2} + \abs{G_s(u)}$ defines a norm on $H^1(\Omega)$ which is equivalent to the usual one, uniformly for small $\abs{s}$. In particular $p\mapsto\norm{\nabla p}_{L^2}$ is an equivalent norm on the kernel of $G_s$. Recall that $p$ is the solution of (\ref{R}) for $F=f\in L^2(\Omega)$. We decompose $p=p_0+p_G$, where $p_G$ is the constant function determined by the requirement $G_s(p_0)=G_s(p-p_G)=0$. Then \begin{equation}\nonumber a_{is}(p,p_0) = a_{is}(p_0,p_0) = (1+O(\abs{s}))\int_{\Omega} \abs{\nabla p_0}^2 . \end{equation} This implies \begin{equation}\nonumber \norm{\nabla p_0}_{L^2}^2 \leq C\abs{a_{is}(p,p_0)} \leq C\abs{\dual{f}{p_0}} \leq C\norm{f}_{L^2}\norm{\nabla p_0}_{L^2} .
\end{equation} This in combination with (\ref{eq: interpolation at 0}) for $a=b=0$ implies $\norm{p}_{H^1_s}\leq C\abs{s}^{-1}\norm{f}_{L^2}$ which is (\ref{eq: interpolation at 0}) for the parameters $a=0, b=1$. By duality (recall $R(z)^* = R(\overline{z})$) equation (\ref{eq: interpolation at 0}) is also true for $a=1,b=0$. A similar calculation as above with $f$ replaced by $F\in H^1(\Omega)^*$ and (\ref{eq: interpolation at 0}) for $a=1,b=0$ shows (\ref{eq: interpolation at 0}) for $a=1, b=1$. What remains to do is some interpolation. It is important to interpolate in the right order. First, one has to show \begin{equation}\nonumber \norm{R(is)}_{X^0_s\rightarrow X^{b_1}_s}, \norm{R(is)}_{X^{a_1}_s\rightarrow X^0_s}=O(\abs{s}^{-1}) \end{equation} for $a_1,b_1\in [0,1]$. This can be done via interpolation between $(a=0,b=0)$ and $(a=0, b=1)$ for the first estimate and between $(a=0,b=0)$ and $(a=1, b=0)$ for the second estimate. Choosing $a_1$ and $b_1$ appropriately, the preceding estimates imply (\ref{eq: interpolation at 0}) in the case $a+b\leq 1$. Interpolation between the preceding case and $a=1,b=1$ yields the remaining part of the proposition. \end{proof} Let us proceed with the proof of Theorem \ref{thm: Preliminary estimates} part (ii) in a similar fashion to part (i). We have to show $\norm{\mathbf{x}}_{\mathcal{H}}\leq C\abs{s}^{-1} \norm{\mathbf{y}}_{\mathcal{H}}$ for all small $\abs{s}$ and for all $\mathbf{x}=(p,v,\psi)\in D(\mathcal{A}), \mathbf{y}=(q,w,\varphi)\in\mathcal{H}$ satisfying (\ref{A}), where $C$ does not depend on $s$ and $\mathbf{y}$. Let $F_j$ for $j\in\{1,2,3\}$ be defined by (\ref{eq: F123}) and let $p_j$ satisfy \begin{equation}\nonumber \forall u\in H^1(\Omega): a_{is}(p_j, u) = \dual{F_j}{u}_{(H^1)^*,H^1(\Omega)} . \end{equation} \emph{Case $j=1$.} It is clear that $\norm{F_1}_{L^2}=\abs{s}\norm{q}_{L^2}$. By Proposition \ref{thm: interpolation at 0} we have $\norm{p_1}_{X^b_s}=O(1)\norm{q}_{L^2}$ for all $b\in[0,1]$.
\emph{Case $j=2$.} It is clear that $\norm{F_2}_{X^{-1}_s}\leq \abs{s}\norm{w}_{L^2}$. By Proposition \ref{thm: interpolation at 0} we have $\norm{p_2}_{X^b_s}=O(\abs{s}^{-b})\norm{w}_{L^2}$ for all $b\in[0,1]$. \emph{Case $j=3$.} By the continuity of the trace $\Gamma:X^{1/2}\rightarrow L^2(\partial\Omega)$ and by H\"older's inequality we have for all $\abs{s}\leq 1$ \begin{align*} \norm{F_3}_{X^{-\frac{1}{2}}_s} &\leq \norm{F_3}_{X^{-\frac{1}{2}}} \leq C \abs{s} \norm{\int_0^{\infty} \frac{\varphi(\tau)}{is+\tau} d\nu(\tau)}_{L^2(\partial\Omega)} \\ &\leq C \abs{s}^{\frac{1}{2}} \norm{\varphi}_{L_{\nu}^2}. \end{align*} By Proposition \ref{thm: interpolation at 0} this yields $\norm{p_3}_{X^b_s}=O(\abs{s}^{-\frac{1}{2}-(b-\frac{1}{2})_+})\norm{\varphi}_{L_{\nu}^2}$ for all $b\in[0,1]$. Overall we derived the estimate $\norm{p}_{X^b_s}=O(\abs{s}^{-\frac{1}{2}-(b-\frac{1}{2})_+}) \norm{\mathbf{y}}_{\mathcal{H}}$ for all $b\in[0,1]$. Finally, this together with (\ref{R}) implies \begin{align*} \norm{v}_{L^2} &\leq C\abs{s}^{-1}(\norm{w}_{L^2} + \norm{\nabla p}_{L^2}) \\ &\leq C\abs{s}^{-1}\norm{w}_{L^2} + C\norm{p}_{H^1_s} \\ &\leq C\abs{s}^{-1}\norm{\mathbf{y}}_{\mathcal{H}} \end{align*} and because of $\norm{p}_{X^{\frac{1}{2}}}\leq\norm{p}_{X^{\frac{1}{2}}_s}$ for $\abs{s}\leq 1$ \begin{align*} \norm{\psi}_{L_{\nu}^2} &\leq \abs{s}^{-1}\norm{\varphi}_{L_{\nu}^2} + \norm{\Gamma p}_{L^2}\left(\int_0^{\infty}\frac{1}{\abs{is+\tau}^2} d\nu(\tau)\right)^{\frac{1}{2}} \\ &\leq \abs{s}^{-1}\norm{\varphi}_{L_{\nu}^2} + C\abs{s}^{-\frac{1}{2}}\norm{p}_{X^{\frac{1}{2}}_s} \\ &\leq C\abs{s}^{-1}\norm{\mathbf{y}}_{\mathcal{H}}. \end{align*} This concludes the proof of Theorem \ref{thm: Preliminary estimates} part (ii). \subsection{Spectrum at $0$} Let us prove part (iii) of Theorem \ref{thm: Preliminary estimates}. ``$\Rightarrow$''.
Let us first assume that $\mathbf{y}=(q,w,\varphi)\in\mathcal{H}$ and $\mathbf{x}=(p,v,\psi)\in D(\mathcal{A})$ satisfy (\ref{A}) for $s=0$. There is a function $u\in H^1(\Omega)$ such that $w=\nabla u$. We may assume $\int_{\Omega} u=0$ to make $u$ unique. Then (\ref{A}) for $s=0$ is \begin{equation}\label{eq: A for s=0} \begin{cases} \Div v(x) = q(x) & (x\in\Omega), \\ \nabla p(x) = w(x) = \nabla u(x) & (x\in\Omega), \\ \tau\psi(\tau,x)-p(x) = \varphi(\tau,x) & (\tau>0, x\in\partial\Omega), \\ -v\cdot n(x) + \int_0^{\infty} \psi(\tau,x) d\nu(\tau) = 0 & (x\in\partial\Omega). \end{cases} \end{equation} From the second line we see that necessarily $p=u+\alpha$ for some complex number $\alpha$. We have \begin{equation}\label{eq: psi} \psi = \frac{\varphi + \Gamma u + \alpha}{\tau} \in (L_{\nu}^1\cap L_{\nu}^2)(0,\infty; L^2(\partial\Omega)) . \end{equation} The $L^1_{\nu}$-inclusion follows from the definition of $D(\mathcal{A})$ as explained in the paragraph following (\ref{eq: first order energy space}). Let us now specialize to the situation $q, w = 0$ and $\norm{\varphi}_{L^2_{\nu}}\leq 1$. Then $u=0$. By the existence of $\mathcal{A}^{-1}$ there must be a uniform bound $\abs{\alpha}\leq C$ where the constant does not depend on $\varphi$. Because of this, (\ref{eq: psi}) and $\int_0^{\infty}\tau^{-1} d\nu(\tau)<\infty$ we deduce a bound $\norm{\tau^{-1}\varphi}_{L^1_{\nu}}\leq\norm{\psi}_{L^1_{\nu}} + C\leq C$ where $C$ does not depend on $\varphi$. Since this is true for all such $\varphi\in L^2_{\nu}(0,\infty; L^2(\partial\Omega))$ we deduce that the function $(0,\infty)\owns\tau\mapsto\tau^{-1}$ is in $L^2_{\nu}(0,\infty)$. If we use this in the $L^2_{\nu}$-inclusion in (\ref{eq: psi}) we see that $\norm{\tau^{-1}\varphi}_{L^2_{\nu}}\leq\norm{\psi}_{L^2_{\nu}} + C\leq C$ where $C$ does not depend on $\varphi$. Thus $\tau^{-1}$ is an $L^2_{\nu}$-multiplier and hence it must be bounded with respect to the measure $\nu$. ``$\Leftarrow$''.
Assume now that $\nu|_{(0,\varepsilon)}=0$ for some $\varepsilon>0$. Given $\mathbf{y}=(q,w,\varphi)\in\mathcal{H}$ we show that there is a unique solution $\mathbf{x}=(p,v,\psi)\in D(\mathcal{A})$ of (\ref{eq: A for s=0}). From the second line of (\ref{eq: A for s=0}) we see that necessarily $p=u+\alpha$ for some complex number $\alpha$ and $u$ as in the first part of the proof. The definition of $\mathcal{H}$ forces the ansatz $v=-\nabla U$ for some function $U\in H^1(\Omega)$, where we require $\int_{\Omega} U=0$ for uniqueness. It remains to uniquely determine $\alpha$ and $U$ since then $\psi$ is uniquely given by (\ref{eq: psi}). Let $h=-\int_0^{\infty} \psi d\nu\in L^2(\partial\Omega)$. Then the first and the last line of (\ref{eq: A for s=0}) are equivalent to \begin{equation}\nonumber \begin{cases} -\Delta U(x) = q(x) & (x\in\Omega), \\ \partial_n U(x) = h(x) & (x\in\partial\Omega). \end{cases} \end{equation} By the Poincar\'e inequality this equation has a solution $U$, which is unique under the constraint $\int_{\Omega} U=0$, if and only if \begin{align}\nonumber 0 &= \int_{\Omega} q + \int_{\partial\Omega} h dS \\ \label{eq: condition on alpha} &= \int_{\Omega} q - \int_{\partial\Omega}\left( \hat{k}(0) \Gamma u + \int_{\varepsilon}^{\infty} \frac{\varphi(\tau)}{\tau} d\nu(\tau)\right)dS -\alpha\abs{\partial\Omega}\hat{k}(0). \end{align} In the second equality we also used (\ref{eq: psi}). Since $\hat{k}(0)\neq0$ this determines $\alpha$ and thus also $U$ uniquely. This completes the proof. \subsection{The range of $\mathcal{A}$}\label{sec: range of A} In the case that $\mathcal{A}$ is not invertible (i.e. $(\tau\mapsto\tau^{-1})\notin L^{\infty}_{\nu}$), in view of Theorem \ref{thm: Martinez} it is important to know the image $R(\mathcal{A})$ of $\mathcal{A}$. To characterize the range we have to distinguish two cases: (i) $(\tau\mapsto\tau^{-1})\in L^{2}_{\nu}$ and (ii) $(\tau\mapsto\tau^{-1})\notin L^{2}_{\nu}$.
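Before doing so, let us illustrate the two cases by a simple, purely illustrative family of measures: for $\beta>0$ consider $d\nu(\tau)=\mathbf{1}_{(0,1)}(\tau)\tau^{\beta}d\tau$. Then $\hat{k}(0)=\int_0^1\tau^{\beta-1}d\tau<\infty$, while $(\tau\mapsto\tau^{-1})\notin L^{\infty}_{\nu}$, so $\mathcal{A}$ is not invertible by Theorem \ref{thm: Preliminary estimates}(iii). Moreover \begin{equation}\nonumber \int_0^{\infty}\frac{1}{\tau^2} d\nu(\tau) = \int_0^1 \tau^{\beta-2} d\tau \begin{cases} <\infty & \text{if } \beta>1, \\ =\infty & \text{if } \beta\in(0,1], \end{cases} \end{equation} so that $\beta>1$ leads to case (i) and $\beta\in(0,1]$ to case (ii).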
In case (ii) for a given $\varphi\in L^2_{\nu}(0,\infty;L^2(\partial\Omega))$ there might exist no $p\in H^1(\Omega)$ such that \begin{equation}\nonumber \left( \tau \mapsto \frac{\varphi(\tau)+\Gamma p}{\tau} \right) \in L^2_{\nu}(0,\infty; L^2(\partial\Omega)) . \end{equation} In the case that $p$ exists, its boundary value $\Gamma p$ is uniquely determined and the function $(\tau\mapsto\varphi(\tau)/\tau)$ is integrable with respect to $\nu$. Therefore we can define the complex number \begin{equation}\label{eq: mphi} m_{\varphi,p} = \int_{\partial\Omega} \int_0^{\infty} \frac{\varphi(\tau)+\Gamma p}{\tau} d\nu(\tau) dS . \end{equation} Equipped with this notation we can now formulate: \begin{Theorem}\label{thm: range of A} Assume that $\mathcal{A}$ is not invertible (i.e. $(\tau\mapsto\tau^{-1})\notin L^{\infty}_{\nu}$). (i) If $(\tau\mapsto\tau^{-1})\in L^{2}_{\nu}$, then \begin{equation}\nonumber R(\mathcal{A}) = \left\{ (q,w,\varphi)\in\mathcal{H} ; \int_0^{\infty} \norm{\frac{\varphi(\tau)}{\tau}}^2_{L^2(\partial\Omega)} d\nu(\tau) < \infty \right\} . \end{equation} (ii) If $(\tau\mapsto\tau^{-1})\notin L^{2}_{\nu}$, then \begin{align*} R(\mathcal{A}) = \left\{ (q,w,\varphi)\in\mathcal{H} ; \exists p\in H^1(\Omega) : w=\nabla p, \int_{\Omega} q = m_{\varphi,p} \right. \text{ and } \\ \left. \int_0^{\infty} \norm{\frac{\varphi(\tau)+\Gamma p}{\tau}}^2_{L^2(\partial\Omega)} d\nu(\tau) < \infty \right\} \end{align*} where $m_{\varphi,p}$ is given by (\ref{eq: mphi}). If $(q,w,\varphi)$ is in the image of $\mathcal{A}$ then $p$ is unique. In fact it is the first component of the pre-image of $(q,w,\varphi)$. \end{Theorem} \begin{proof} Let $\mathbf{y}=(q,w,\varphi)\in\mathcal{H}$. Clearly $\mathbf{y}\in R(\mathcal{A})$ if and only if we can find $\mathbf{x}=(p,v,\psi)\in\mathcal{H}_1$ such that $\mathcal{A}\mathbf{x}=\mathbf{y}$. Let $u\in H^1(\Omega)$ be such that $\nabla u = w$ and $\int_{\Omega} u = 0$. 
As in the proof of Theorem \ref{thm: Preliminary estimates}(iii) we see that necessarily $p=u+\alpha$ for some complex number $\alpha$ and \begin{equation}\label{eq: range A definition of psi} \frac{\varphi + \Gamma p}{\tau} = \psi \in L_{\nu}^2(0,\infty; L^2(\partial\Omega)) . \end{equation} Let us assume that case (i) is valid. Then the so-defined $\psi$ is in $L^2_{\nu}$ if and only if $(\tau\mapsto\varphi(\tau)/\tau)$ is square integrable with respect to $\nu$. Now one can proceed as in the ``$\Leftarrow$''-part of the proof of Theorem \ref{thm: Preliminary estimates}(iii) to find the unique $p$ and $v$ such that $\mathcal{A} \mathbf{x} = \mathbf{y}$. Let us now assume that case (ii) is valid. By (\ref{eq: range A definition of psi}) it is clear that the existence of $p$ as in the definition of $R(\mathcal{A})$ is necessary. From the fact that $(\tau\mapsto\tau^{-1})$ is not square integrable we see that $\Gamma p$ is uniquely defined. Now we can again proceed as in the ``$\Leftarrow$''-part of the proof of Theorem \ref{thm: Preliminary estimates}(iii) to find the unique $p$ and $v$ such that $\mathcal{A} \mathbf{x} = \mathbf{y}$. The condition $\int_{\Omega} q = m_{\varphi,p}$ on $\mathbf{y}$ comes from (\ref{eq: condition on alpha}), where we have to replace $\hat{k}(0) \Gamma u + \int_{\varepsilon}^{\infty} \frac{\varphi(\tau)}{\tau} d\nu(\tau) + \alpha\hat{k}(0)$ by $\int_{0}^{\infty} \frac{\varphi(\tau)+\Gamma p}{\tau} d\nu(\tau)$ in our situation. \end{proof} \section{An upper estimate for $\norm{(is+\mathcal{A})^{-1}}$ as $s\rightarrow\infty$}\label{sec: upper resolvent estimate} We seek an increasing function $M:[1,\infty)\rightarrow[1,\infty)$ such that for some constant $C>0$ \begin{align*} \norm{(is+\mathcal{A})^{-1}} \leq C M(\abs{s}) \quad (\abs{s}\geq 1).
\end{align*} In this section we want to show that the function $M(s) = 1/\operatorname{Re} \hat{k}(is)$ is, up to a constant, an upper bound for the norm of $(is+\mathcal{A})^{-1}$ when $\abs{s}$ is large, provided some additional assumptions on the acoustic impedance $\hat{k}$ and on the domain are satisfied. More precisely we assume that the acoustic impedance satisfies \begin{align}\label{eq: additional assumptions} \left[\abs{\hat{k}}\frac{\abs{\hat{k}}^2}{(\operatorname{Re}\hat{k})^2}\right](is) = o\left(\frac{1}{L(s)}\right) \text{ as } s\rightarrow\infty, \\ \nonumber \text{where } L(s) = s^{\alpha}(1+\log(s)) \text{ for } s \geq 1 . \end{align} The real number $\alpha\in[0,1)$ is a domain dependent constant which will be defined below. Note that for $\alpha\geq1$ there cannot be any integrable completely monotonic function which satisfies this condition. Let $(u_j)$ be the sequence of normalized eigenfunctions of the Neumann Laplacian with respect to the corresponding (non-negative) frequencies $(\lambda_j)$. That is \begin{equation}\label{eq: Neumann eigenvalue problem} \begin{cases} \lambda_j^2u_j(x) + \Delta u_j(x) = 0 & (x\in\Omega), \\ \partial_n u_j(x) = 0 & (x\in\partial\Omega), \\ \norm{u_j}_{L^2(\Omega)} = 1 . \end{cases} \end{equation} The eigenfrequencies are counted with multiplicity and we may order them so that $0\leq \lambda_1 \leq \lambda_2 \leq \ldots$. We call a function $p\in L^2(\Omega)$ a spectral cluster of width $\delta>0$ whenever $\sup\{\abs{\lambda_j-\lambda_i}; a_j, a_i \neq 0\}\leq \delta$ where $p=\sum a_j u_j$ is the expansion of $p$ into eigenfunctions. We define the (mean) frequency $\lambda(p)\geq0$ of $p$ by $\lambda(p)^2=\sum_j \abs{a_j/\norm{p}_{L^2}}^2\lambda_j^2$.
We assume that the domain has the property that for sufficiently small $\delta>0$ there are constants $c,C>0$ such that for any spectral cluster $p$ of width $\delta$ the following estimate is true \begin{align}\label{eq: upper and lower estimate for Neumann eigenfunctions} c\norm{p}_{L^2(\Omega)}^2 \leq \int_{\partial\Omega} \abs{\Gamma p}^2 dS \leq C \lambda(p)^{\alpha} \norm{p}_{L^2(\Omega)}^2 . \end{align} We call the left inequality the \emph{lower estimate} and the right inequality the \emph{upper estimate}. Note that the upper estimate is trivially satisfied for $\alpha=1$ by applying the trace inequality from Lemma \ref{thm: trace inequality}. It is indeed reasonable to assume that this estimate holds for some $\alpha$ strictly smaller than $1$. For example if the boundary $\partial\Omega$ is of class $C^{\infty}$ then both estimates hold with $\alpha=2/3$. See \cite{BarnettHassellTacy2016} for this result. For $\Omega$ being an interval one can choose $\alpha=0$ and for a square $\alpha=1/2$ is optimal. This section is devoted to the proof of our second main result: \begin{Theorem}\label{thm: upper resolvent estimate} Assume that (\ref{eq: additional assumptions}) is satisfied, where $\alpha\in[0,1)$ is such that (\ref{eq: upper and lower estimate for Neumann eigenfunctions}) holds for all spectral clusters $p$ of sufficiently small width $\delta>0$. Then there is a constant $C>0$ such that \begin{equation}\nonumber \norm{R(is)}_{L^2\rightarrow L^2} \leq \frac{C}{s\operatorname{Re}\hat{k}(is)} \end{equation} for all $s\geq 1$. \end{Theorem} Compare this result to Theorem \ref{thm: Preliminary estimates} to see that $\norm{(is+\mathcal{A})^{-1}}$ is bounded by $\frac{C}{\operatorname{Re}\hat{k}(is)}$ under the constraints of the preceding theorem.
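The following elementary computation, included for illustration, is consistent with the choice $\alpha=0$ for an interval: for $\Omega=(0,1)$ the Neumann eigenfunctions are, up to normalization, $\cos(j\pi x)$ with frequencies $j\pi$, $j\geq0$, and for $j\geq1$ the normalized eigenfunctions $u(x)=\sqrt{2}\cos(j\pi x)$ satisfy \begin{equation}\nonumber \int_{\partial\Omega} \abs{\Gamma u}^2 dS = \abs{u(0)}^2 + \abs{u(1)}^2 = 4 , \end{equation} independently of the frequency. Since the eigenfrequencies are separated by $\pi$, every spectral cluster of width $\delta<\pi$ is a multiple of a single eigenfunction, so both inequalities in (\ref{eq: upper and lower estimate for Neumann eigenfunctions}) hold with frequency-independent constants.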
\subsection{Some auxiliary definitions}\label{sec: Some auxiliary definitions} We fix a $\delta>0$ such that (\ref{eq: upper and lower estimate for Neumann eigenfunctions}) is true for any spectral cluster of width $3\delta$. For $p,q\in H^1(\Omega)$ we define the Neumann form by \begin{align*} a_z^N(p,q) = z^2\int_{\Omega} p\overline{q} + \int_{\Omega} \nabla p\cdot \nabla \overline{q} . \end{align*} We cover $[0,\infty)$ by disjoint intervals $I_k=[k\lambda, (k+1)\lambda)$ for $k=0,1,2,\ldots$ such that \begin{enumerate} \item [(i)] $\lambda \in [2\delta, 3\delta]$, \item [(ii)] $\exists k_c\in\mathbb{N}: I_{k_c} \supset (s-\delta, s+\delta)$. \end{enumerate} The covering depends on $s\geq1$ but this does not matter for our considerations. With the help of this partition we can uniquely expand every function $p\in L^2(\Omega)$ in terms of spectral clusters in the following way: \begin{align*} p = \sum_{k=0}^{\infty} c_k p_k \text{ where } p_k = \sum_{\lambda_j\in I_k} a_j u_j, \, \norm{p_k}_{L^2(\Omega)}=1 . \end{align*} Let $s_k(p)\in I_k$ be such that \begin{align*} s_k^2(p) = \int_{\Omega} \abs{\nabla p_k}^2. \end{align*} Let $p^0_{+(-)}=\sum_{k>(<)k_c} c_k p_k$ and $p^0=p^0_- + p^0_+$. Let $p_c=c_{k_c}p_{k_c}$. Obviously $p=p^0+p_c$. Define \begin{align*} p_+ = \begin{cases} p_+^0 + p_c & \text{ if } a_{is}^N(p_c)\geq0 , \\ p_+^0 & \text{ else} , \end{cases} \end{align*} and let $p_-$ be given by $p=p_+ + p_-$. Finally let $\tilde{p}=p_+-p_-$. \subsection{Some auxiliary lemmas} For the remaining part of Section \ref{sec: upper resolvent estimate} we use the notation introduced in Subsection \ref{sec: Some auxiliary definitions} and we assume that $\abs{s}\geq 1$. \begin{Lemma}\label{thm: auxiliary lemma 1} For all $p\in H^1(\Omega)$ we have $a_{is}^N(p,\tilde{p})\geq \abs{s}\delta\norm{p^0}_{L^2(\Omega)}^2$. 
\end{Lemma} \begin{proof} \begin{align*} a^N_{is}(p,\tilde{p}) \geq a^N_{is}(p^0,(\tilde{p})^0) = \sum_{k\neq k_c} \abs{s^2 - s_k^2} \abs{c_k}^2 \geq \abs{s}\delta \sum_{k\neq k_c} \abs{c_k}^2 = \abs{s}\delta \norm{p^0}_{L^2(\Omega)}^2 . \end{align*} Here we used that $\abs{s-s_k}\geq\delta$ for $k\neq k_c$ by the construction of the covering. \end{proof} The proof of the next lemma is a little more involved. \begin{Lemma}\label{thm: auxiliary lemma 2} There is a constant $C>0$ (depending on $\delta$ and $\alpha$) such that for all $p\in H^1(\Omega)$ \begin{equation}\nonumber \int_{\partial\Omega} \abs{\Gamma p^0}^2 dS \leq C \abs{s}^{\alpha}(1+\log(\abs{s})) \frac{a^N_{is}(p,\tilde{p})}{\abs{s}} . \end{equation} \end{Lemma} \begin{proof} Since $a^N_{is}(p,\tilde{p})\geq a^N_{is}(p^0,(\tilde{p})^0)$ we may assume that $p_c=0$. Because of \begin{align*} \int_{\partial\Omega} \abs{\Gamma p}^2 dS \leq 2 \int_{\partial\Omega} \abs{\Gamma p_-}^2 + \abs{\Gamma p_+}^2 dS \\ \text{and } a^N_{is}(p,\tilde{p}) = a^N_{is}(p_+) - a^N_{is}(p_-) \end{align*} we may assume without loss of generality that either $p=p_+$ or $p=p_-$. We give the proof in detail for the case $p=p_+$. The case $p=p_-$ is analogous and therefore we omit it. \begin{align*} \int_{\partial\Omega} \abs{\Gamma p_+}^2 dS &= \norm{\sum_{k>k_c} c_k \Gamma p_k}_{L^2(\partial\Omega)}^2 \\ &\leq \left(\sum_{k>k_c} \abs{c_k}\norm{\Gamma p_k}_{L^2(\partial\Omega)}\right)^2 \\ &\leq \left(C\delta^{\frac{\alpha}{2}} \sum_{k>k_c} \abs{c_k} k^{\frac{\alpha}{2}}\right)^2 \\ &\leq C\delta^{\alpha} \underbrace{\left(\sum_{k>k_c}\abs{c_k}^2(s_k^2-s^2)\right)}_{a^N_{is}(p_+)} \underbrace{\left(\sum_{k>k_c}\frac{k^{\alpha}}{s_k^2 - s^2}\right)}_{=:J} \end{align*} In the first line we used the continuity of the trace operator $\Gamma: H^1(\Omega)\rightarrow L^2(\partial\Omega)$. From the second to the third line we used the upper estimate (\ref{eq: upper and lower estimate for Neumann eigenfunctions}) together with $s_k\in I_k = \lambda[k,k+1)$ with $\lambda\in[2\delta,3\delta]$. It remains to estimate $J$.
It is a well-known trick to estimate sums of positive and decreasing summands by corresponding integrals. \begin{align*} J &= \sum_{k>k_c} \frac{k^{\alpha}}{s_k^2 - s^2} \leq \sum_{k>k_c} \frac{k^{\alpha}}{\lambda^2k^2 - s^2} \\ &\leq \frac{(k_c+1)^{\alpha}}{\lambda^2(k_c+1)^2-s^2} + \int_{k_c+1}^{\infty} \frac{x^{\alpha}}{\lambda^2x^2 - s^2} dx \\ &=: J_1 + J_2 . \end{align*} Since $\lambda(k_c+1)\geq s+\delta$ implies $\lambda^2(k_c+1)^2-s^2\geq 2s\delta$ and $(k_c+1)^{\alpha}\leq C(s/\delta)^{\alpha}$, the term $J_1$ can be estimated by a constant times $\delta^{-1-\alpha}s^{\alpha-1}$. For $J_2$ we substitute $y=\lambda x/s$ and use that $\lambda(k_c+1)\geq s+\delta$. This yields \begin{align*} J_2 &\leq C\delta^{-1-\alpha}s^{\alpha-1}\int_{1+\frac{\delta}{s}}^{\infty} \frac{y^{\alpha}}{y^2 - 1} dy \\ &\leq C\delta^{-1-\alpha}s^{\alpha-1}\left(\int_{1+\frac{\delta}{s}}^{2} \frac{1}{y - 1} dy + \int_{2}^{\infty} \frac{1}{y^{2-\alpha}} dy\right) \\ &\leq C \delta^{-1-\alpha}s^{\alpha-1}(\log(\frac{s}{\delta})+1) . \end{align*} This concludes the proof. \end{proof} \subsection{Proof of Theorem \ref{thm: upper resolvent estimate}} Let $p\in H^1(\Omega)$ and $\abs{s}\geq 1$. We have to verify that \begin{equation}\nonumber \sup \{\abs{a_{is}(p,u)}; u\in H^1(\Omega),\, \norm{u}_{L^2(\Omega)}\leq 1\}\geq c\abs{s}\mathbb{R}e\hat{k}(is)\norm{p}_{L^2(\Omega)} \end{equation} is true for some constant $c>0$ independent of $p$ and $s$. In the following we assume that $a^N_{is}(p_c)\geq 0$. This implies that $p_+=(p^0)_+ + p_c$ and $p_-=(p^0)_-$. The case $a^N_{is}(p_c)<0$ can be treated similarly and we therefore omit it.
First we prove an auxiliary estimate with the help of Lemma \ref{thm: auxiliary lemma 2}: \begin{align}\nonumber \int_{\partial\Omega} \abs{\Gamma p_+}^2 + \abs{\Gamma p_-}^2 dS &= \int_{\partial\Omega} \abs{\Gamma p_+^0}^2 + \abs{\Gamma p_-^0}^2 + \abs{\Gamma p_c}^2 + 2\mathbb{R}e (\Gamma p_+^0 \overline{\Gamma p_c}) dS \\ \nonumber &\leq \int_{\partial\Omega} 2\abs{\Gamma p_+^0}^2 + \abs{\Gamma p_-^0}^2 + 2\abs{\Gamma p_c}^2 dS \\ \label{eq: auxiliary estimate in main proof} &\leq CL(s)\frac{a^N_{is}(p,\tilde{p})}{\abs{s}} + 2\int_{\partial\Omega} \abs{\Gamma p_c}^2 dS. \end{align} Let us define \begin{equation}\nonumber L_1(s) = \left(\frac{\abs{\hat{k}(is)}}{\mathbb{R}e\hat{k}(is)}\right)^2 L(s) \geq L(s) . \end{equation} Our assumption (\ref{eq: additional assumptions}) on $k$ is equivalent to $\abs{\hat{k}(is)}=o(1/L_1(s))$ as $\abs{s}\rightarrow\infty$. Now we come to the final part of the proof which consists of distinguishing two cases. Essentially the first case means that $p$ is roughly the same as $p^0$ and the second case means that $p$ is roughly the same as $p_c$. We fix a constant $\varepsilon_1\in(0,1)$ to be chosen later. The choice of $\varepsilon_1$ does not depend on $s$. \emph{Case 1: } $L_1(s)a^N_{is}(p,\tilde{p})\geq \varepsilon_1 \abs{s}\int_{\partial\Omega}\abs{\Gamma p_c}^2 dS$. We first show that in this case the Neumann form dominates the form $a_{is}$ for $\abs{s}$ large enough in the following sense: \begin{align*} \abs{a_{is}(p,\tilde{p})-a^N_{is}(p,\tilde{p})} &= \abs{s\hat{k}(is)\int_{\partial\Omega}(\Gamma p_+ + \Gamma p_-)\overline{(\Gamma p_+ - \Gamma p_-)} dS}\\ &\leq 2\abs{s\hat{k}(is)} \int_{\partial\Omega} \abs{\Gamma p_+}^2 + \abs{\Gamma p_-}^2 dS \\ &\leq C \abs{s\hat{k}(is)} \varepsilon_1^{-1} L_1(s) \frac{a^N_{is}(p,\tilde{p})}{\abs{s}} \\ &\leq \frac{1}{2} a^N_{is}(p,\tilde{p}) . \end{align*} From the second to the third line we used the assumption of case 1 and (\ref{eq: auxiliary estimate in main proof}).
By (\ref{eq: additional assumptions}) the last line is valid for all $s\geq s_0$, where $s_0$ is sufficiently large depending on how small $\varepsilon_1$ is. Therefore we have \begin{align*} \abs{a_{is}(p,\tilde{p})} &\geq (\frac{1}{4} + \frac{1}{4}) a^N_{is}(p,\tilde{p}) \\ &\geq \frac{\abs{s}\delta}{4}\norm{p^0}_{L^2(\Omega)}^2 + \frac{\varepsilon_1\abs{s}}{4L_1(s)}\int_{\partial\Omega}\abs{\Gamma p_c}^2 dS \\ &\geq \frac{c\varepsilon_1 \abs{s}}{L_1(s)} \left(\norm{p^0}_{L^2(\Omega)}^2 + \norm{p_c}_{L^2(\Omega)}^2\right) \\ &\geq c\varepsilon_1 \abs{s}\mathbb{R}e\hat{k}(is)\norm{p}_{L^2(\Omega)}^2 . \end{align*} From the second to the third line we used the lower estimate (\ref{eq: upper and lower estimate for Neumann eigenfunctions}) and in the last step we used our assumptions on the acoustic impedance (\ref{eq: additional assumptions}). The theorem is proved for case 1. \emph{Case 2: } $L_1(s)a^N_{is}(p,\tilde{p}) < \varepsilon_1 \abs{s}\int_{\partial\Omega}\abs{\Gamma p_c}^2 dS$. By Lemma \ref{thm: auxiliary lemma 1} and $\lim_{\abs{s}\rightarrow\infty} L_1(s)=\infty$ this yields \begin{align}\label{eq: case 2 preliminary estimate} \int_{\partial\Omega}\abs{\Gamma p_c}^2 dS \geq \norm{p^0}_{L^2(\Omega)}^2 \end{align} for all $\abs{s}\geq s_1$ with an $s_1>0$ not depending on $\varepsilon_1$. We show now that in case 2 the form $a_{is}$ is dominated by the contribution from the boundary. By Lemma \ref{thm: auxiliary lemma 2} we have \begin{align*} \abs{\int_{\partial\Omega} \Gamma p^0\,\overline{\Gamma p_c}\, dS} &\leq \left(CL(s)\frac{a^N_{is}(p,\tilde{p})}{\abs{s}}\right)^{\frac{1}{2}} \left(\int_{\partial\Omega} \abs{\Gamma p_c}^2 dS \right)^{\frac{1}{2}} \\ &\leq C\sqrt{\varepsilon_1} \left(\frac{L(s)}{L_1(s)}\right)^{\frac{1}{2}} \int_{\partial\Omega} \abs{\Gamma p_c}^2 dS \\ &\leq \frac{\mathbb{R}e\hat{k}(is)}{2\abs{\hat{k}(is)}}\int_{\partial\Omega} \abs{\Gamma p_c}^2 dS . \end{align*} In the last step we choose $\varepsilon_1$ so small that $C\sqrt{\varepsilon_1}\leq1/2$.
Finally from this, (\ref{eq: case 2 preliminary estimate}) and the lower estimate (\ref{eq: upper and lower estimate for Neumann eigenfunctions}) we deduce that \begin{align*} \Im a_{is}(p, p_c) &\geq \frac{1}{2}\abs{s}\mathbb{R}e\hat{k}(is)\int_{\partial\Omega} \abs{\Gamma p_c}^2 dS \\ &\geq c \abs{s}\mathbb{R}e\hat{k}(is) (\norm{p_c}_{L^2(\Omega)}^2 + \norm{p^0}_{L^2(\Omega)}^2) \\ &= c \abs{s}\mathbb{R}e\hat{k}(is) \norm{p}_{L^2(\Omega)}^2 \end{align*} which yields the claimed result. \section{Examples}\label{sec: examples} To illustrate our main results, Theorem \ref{thm: Preliminary estimates} and Theorem \ref{thm: upper resolvent estimate}, we want to consider special \emph{standard kernels} $k=k_{\beta,\varepsilon}$ (with $\varepsilon>0$ and $0<\beta<1$) introduced below. These standard kernels have the property that $\mathbb{R}e\hat{k}(is)\approx |\hat{k}(is)|\approx \abs{s}^{-\beta}$ for large $\abs{s}$. This makes it easy to check whether (\ref{eq: additional assumptions}) is satisfied or not. We take a closer look at the cases in which $\Omega$ is a square or a disk. In the case of the disk we show the optimality of the resolvent estimate, that is, we show that $\norm{(is+\mathcal{A})^{-1}}$ is not only bounded from above by a constant times $1/\mathbb{R}e\hat{k}(is)$ but also from below. The standard kernels are designed in such a way that $\mathcal{A}$ is invertible (i.e. $(\tau\mapsto\tau^{-1})\in L^2_{\nu}$; see Theorem \ref{thm: Preliminary estimates}). We have assumed this for the simplicity of exposition. However, in Subsection \ref{sec: Example singularity at 0} we briefly show that our results yield (optimal) decay rates also in the presence of a singularity at zero. The case $\Omega=(0,1)$ is treated separately in Section \ref{sec: 1D case}. \subsection{Properties of the standard kernels}\label{sec: examples kernel} For $\varepsilon>0$ and $0<\beta<1$ let \begin{equation}\nonumber k_{\beta, \varepsilon}(t) = e^{-\varepsilon t}t^{-(1-\beta)} \text{ for } t>0 .
\end{equation} To keep the notation short we fix $\varepsilon$ and $\beta$ now and write $k$ instead of $k_{\beta, \varepsilon}$ throughout this section. Obviously $k\in L^1(0,\infty)$ and for all $n\in\mathbb{N}_0$ we have $(-1)^n d^nk/dt^n(t) > 0$. The last property is a characterization of completely monotonic functions. Thus the kernel $k$ is admissible in the sense that the semigroup from Section \ref{sec: Semigroup approach} is defined. Let $\Gamma$ denote the Gamma function. Taking the Laplace transform yields for $z>-\varepsilon$ \begin{align*} \hat{k}(z) = \int_0^{\infty} e^{-(\varepsilon+z)t}t^{-(1-\beta)} dt = \frac{1}{(\varepsilon+z)^{\beta}} \int_0^{\infty} u^{-(1-\beta)} e^{-u} du = \frac{\Gamma(\beta)}{(\varepsilon+z)^{\beta}} . \end{align*} By analyticity the equality between the two ends of this chain of equalities extends to $\mathbb{C}\backslash(-\infty,-\varepsilon]$. For $s\in\mathbb{R}$, let $\varphi(s)\in(-\frac{\pi}{2},\frac{\pi}{2})$ be the argument of $\varepsilon-is$. Note that $\varphi(s)\rightarrow\mp\frac{\pi}{2}$ as $s\rightarrow\pm\infty$. Then we have \begin{equation}\nonumber \hat{k}(is) = \Gamma(\beta)\abs{\frac{\varepsilon-is}{\varepsilon^2+s^2}}^{\beta}\left(\cos(\beta\varphi(s)) + i\sin(\beta\varphi(s))\right) . \end{equation} In particular \begin{equation}\nonumber \mathbb{R}e\hat{k}(is) \approx \abs{\Im\hat{k}(is)} \approx \frac{1}{\abs{s}^{\beta}} \text{ for } \abs{s}\geq 1 . \end{equation} Here by $\approx$ we mean that the left-hand side is up to a constant, which does not depend on $s$, an upper bound for the right-hand side and vice versa. The first $\approx$-relation implies that the condition (\ref{eq: additional assumptions}) is equivalent to the simpler estimate $\mathbb{R}e\hat{k}(is)=o(1/L(s))$ as $\abs{s}$ tends to infinity. More precisely we have \begin{equation}\label{eq: standard additional assumptions} (\ref{eq: additional assumptions}) \Leftrightarrow \beta > \alpha .
\end{equation} It is well known that for $z>0$ and $\beta\in(0,1)$ \begin{equation}\nonumber z^{-\beta} = \frac{\sin(\pi \beta)}{\pi} \int_0^{\infty} \frac{1}{\tau+z} \frac{d\tau}{\tau^{\beta}} . \end{equation} Thus \begin{align*} \hat{k}(z) = \frac{\Gamma(\beta)\sin(\beta\pi)}{\pi} \int_{\varepsilon}^{\infty} \frac{1}{\tau+z} \frac{d\tau}{(\tau-\varepsilon)^{\beta}} . \end{align*} In the notation of Section \ref{sec: Introduction} this means \begin{equation}\nonumber d\nu(\tau) = \frac{\Gamma(\beta)\sin(\beta\pi)}{\pi}\cdot \frac{1_{[\varepsilon,\infty)}}{(\tau-\varepsilon)^{\beta}} d\tau . \end{equation} By Theorem \ref{thm: Preliminary estimates} (iii) we see that $\mathcal{A}$ is invertible. \subsection{Smooth domains} Let us suppose that $\Omega$ has a $C^{\infty}$ boundary and let $k=k_{\beta,\varepsilon}$ for some $\varepsilon>0$ and $0<\beta<1$. By \cite{BarnettHassellTacy2016} we know that (\ref{eq: upper and lower estimate for Neumann eigenfunctions}) is satisfied for $\alpha=2/3$. Thus by (\ref{eq: standard additional assumptions}) and Theorem \ref{thm: upper resolvent estimate} we have \begin{equation} \beta > \frac{2}{3} \quad \Longrightarrow \quad \forall s\in\mathbb{R}: \norm{(is+\mathcal{A})^{-1}}\leq C (1+\abs{s})^{\beta}. \end{equation} By Theorem \ref{thm: Borichev-Tomilov} this implies \begin{Proposition}\label{thm: arbitrary smooth domains} Let $\partial\Omega$ be of class $C^{\infty}$ and $k=k_{\beta,\varepsilon}$. If $\beta>2/3$ then, for all $t>0$ and $\mathbf{x}_0\in\mathcal{H}_1$, \begin{equation}\nonumber E(t, \mathbf{x}_0) \leq C t^{-\frac{2}{\beta}} E_1(\mathbf{x}_0). \end{equation} \end{Proposition} \subsection{The disk}\label{sec: examples disk} Let $\Omega=D$ be the unit disk in $\mathbb{R}^2$. The smallest possible choice of $\alpha$ in (\ref{eq: upper and lower estimate for Neumann eigenfunctions}) is indeed $2/3$. The simple proof is based on a \emph{Rellich-type identity}, see for instance \cite[page 5]{BarnettHassellTacy2016}.
So the circle already realizes the ``worst-case scenario'' with respect to the upper bounds for Neumann eigenfunctions. Thus in Proposition \ref{thm: arbitrary smooth domains} we cannot replace the condition $\beta>2/3$ by a weaker one. Instead we show the optimality of the upper bound for the energy decay. To this end we investigate the spectrum of $\mathcal{A}$. \begin{Lemma}\label{thm: spectrum on disk} Let $\Omega=D$ and $k=k_{\beta,\varepsilon}$. Then there exist a sequence $(z_n)$ in the spectrum of $-\mathcal{A}$, with $(\Im z_n)$ positive and increasing, and a constant $C>0$ such that \begin{equation}\nonumber 0 < -\mathbb{R}e z_n \leq \frac{C}{(\Im z_n)^{\beta}} \end{equation} holds for all $n$. \end{Lemma} As a corollary we have \begin{equation}\nonumber \forall s>0: \sup_{\abs{\sigma}\leq s} \norm{(i\sigma+\mathcal{A})^{-1}} \geq C (1+s)^{\beta}. \end{equation} By Theorem \ref{thm: Borichev-Tomilov} and the remark after Theorem \ref{thm: Batty-Duyckaerts} this implies \begin{Proposition} Let $\Omega=D$ and $k=k_{\beta,\varepsilon}$. If $\beta>2/3$ then we have for all $t\geq1$ that \begin{equation}\nonumber c t^{-\frac{2}{\beta}} \leq \sup_{E_1(\mathbf{x}_0)\leq 1} E(t, \mathbf{x}_0) \leq C t^{-\frac{2}{\beta}} . \end{equation} If $\beta$ is arbitrary the left inequality remains valid. \end{Proposition} \begin{proof}[Proof of Lemma \ref{thm: spectrum on disk}] Except for the rate of convergence of $(z_n)$ towards the imaginary axis the content of our lemma is included in \cite[Theorem 5.2]{DFMP2010b}. Therefore we only sketch the existence of a sequence $(z_n)$ with imaginary part tending to infinity and real part tending to zero. First recall that an eigenvalue is a complex number $z_n$ such that (\ref{eq: stationary wave equation FA}) with $F=0$ and $z=z_n$ has a non-zero solution $u$.
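For the reader's convenience we recall the underlying separation of variables (a standard computation): writing a candidate eigenfunction in polar coordinates as $u(r,\theta)=v(r)e^{il\theta}$ with $l\in\mathbb{N}_0$, the Laplacian acts as \begin{equation}\nonumber \Delta u = \left(v''(r) + \frac{1}{r}v'(r) - \frac{l^2}{r^2}v(r)\right)e^{il\theta} , \end{equation} so the equation $z^2u - \Delta u = 0$ on the disk reduces to an ordinary differential equation for the radial part $v$.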
After a transformation to polar coordinates, by a separation of variables argument one can show that the existence of $u$ is equivalent to the existence of a non-zero solution $v$ of \begin{align*} \begin{cases} v''(r) + \frac{1}{r}v'(r) - (\frac{l^2}{r^2}+z^2)v(r) = 0 & (0 < r < 1), \\ v'(1) + z\hat{k}(z) v(1) = 0, & \\ v(0+) \text{ is finite}, \end{cases} \end{align*} for some $l\in\mathbb{N}_0$. The first and the third line force $v(r)$ to be proportional to $J_l(izr)$, where $J_l$ is the $l$-th order Bessel function of the first kind (see e.g. \cite[Chapter 9]{AbramowitzStegun}). Therefore the second line implies \begin{equation}\label{eq: characteristic equation disk} \frac{J'_l(iz)}{J_l(iz)} = i\hat{k}(z) . \end{equation} We have seen that a complex number $z_n\notin (-\infty, 0]$ is an eigenvalue of the wave operator if and only if it is a solution of (\ref{eq: characteristic equation disk}) for some $l$. Let us fix $l$ now. Following the approach of \cite{DFMP2010b} one can prove the existence of a sequence of zeros $(z_n)=(i s_n - \xi_n)$ with $s_n=n\pi + (2l+1)\pi/4$, $\mathbb{R}e\xi_n>0$ and $\xi_n$ tending to zero, by a Rouch\'e argument. It remains to prove that $\xi_n=O((\Im z_n)^{-\beta})$. By \cite[Formula 9.2.1]{AbramowitzStegun} the following asymptotic formula holds if $z$ tends to infinity while $\mathbb{R}e z$ stays bounded (and $l$ is fixed): \begin{equation}\label{eq: asymptotic expansion Bessel function} J_l(iz) = \sqrt{\frac{2}{\pi z}} \cos\left(iz - \frac{(2l+1)}{4}\pi\right) + O(\abs{z}^{-1}) . \end{equation} A naive way to get the corresponding asymptotic formula for $J'$ and $J''$ would be to take derivatives of the cosine. In fact this yields the correct leading term. The error term is again $O(\abs{z}^{-1})$ in both cases. For the first derivative this is \cite[Formula 9.2.11]{AbramowitzStegun}. The formula for the second derivative then follows from the ordinary differential equation satisfied by $J_l$.
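The claim about the second derivative is a one-line computation which we include for convenience: Bessel's equation $w^2J_l''(w) + wJ_l'(w) + (w^2-l^2)J_l(w) = 0$ gives \begin{equation}\nonumber J_l''(w) = -\left(1-\frac{l^2}{w^2}\right)J_l(w) - \frac{1}{w}J_l'(w) , \end{equation} and inserting the asymptotic formulas for $J_l$ and $J_l'$ reproduces, up to an error $O(\abs{w}^{-1})$, the expression obtained by differentiating the cosine in (\ref{eq: asymptotic expansion Bessel function}) twice.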
Thus by a Taylor expansion of (\ref{eq: characteristic equation disk}) we get: \begin{equation}\nonumber 0 + i\xi_n + O(\abs{\xi_n}^2+n^{-1}) = i\hat{k}(is_n) - i\xi_n\hat{k}'(is_n) + O(\abs{\xi_n}^2+n^{-1}) . \end{equation} This implies \begin{align}\label{eq: xi vs acoustic impedance} \xi_n &= (1+o(1))\hat{k}(is_n) \\ \nonumber &= (1+o(1))\frac{\Gamma(\beta)}{s_n^{\beta}}\left(\cos(\beta\varphi(s_n)) + i\sin(\beta\varphi(s_n))\right). \end{align} Here $\varphi(s)$ is the argument of $\varepsilon-is$ (see Section \ref{sec: examples kernel}). \end{proof} Note that in the undamped case $k=0$ we have $z_n^{0}=is_n+O(s_n^{-1})$ by \cite[Formula 9.5.12]{AbramowitzStegun} for the eigenvalues $z_n^0$. Here again $s_n=n\pi + (2l+1)\pi/4$ and $l$ is fixed. Thus (\ref{eq: xi vs acoustic impedance}) implies that $z_n=z_n^0-(1+o(1))\hat{k}(is_n)$. \subsection{The square} Let $\Omega=Q=(0,\pi)^2$ be a square. In terms of upper bounds for boundary values of spectral clusters the square behaves slightly better than the disk. It seems reasonable to believe that this is due to the fact that the square has no \emph{whispering gallery modes}. \begin{Lemma}\label{thm: upper bound for square} Let $\Omega=Q$, $k=k_{\beta,\varepsilon}$ and $\delta>0$. If $\delta$ is sufficiently small then for each $L^2(Q)$-normalized spectral cluster $p$ of width $\delta$ of the Neumann-Laplace operator \begin{equation}\nonumber c \leq \int_{\partial\Omega} |\Gamma p|^2 dS \leq C s(p)^{\frac{1}{2}}. \end{equation} The constants $c,C>0$ do not depend on $p$. Furthermore the exponent $\alpha(Q)=1/2$ is optimal, i.e. one cannot replace it by a smaller one. \end{Lemma} The optimality assertion of Lemma \ref{thm: upper bound for square} may be somewhat surprising. If $p$ were restricted to be a (pure) eigenfunction of the Neumann-Laplace operator the optimal exponent would be $\alpha=0$. This is a direct consequence of the explicit formula available for the eigenfunctions.
However, it will be clear from the proof why spectral clusters behave differently. As in the preceding examples the lemma implies \begin{Proposition}\label{thm: square} Let $\Omega=Q$, $k=k_{\beta,\varepsilon}$. If $\beta>1/2$ then, for all $t>0$ and $\mathbf{x}_0\in\mathcal{H}_1$, \begin{equation}\nonumber E(t, \mathbf{x}_0) \leq C t^{-\frac{2}{\beta}} E_1(\mathbf{x}_0). \end{equation} \end{Proposition} \begin{proof}[Proof of Lemma \ref{thm: upper bound for square}] The explicit form of the normalized Neumann eigenfunctions $u_{m,n}$ and their eigenfrequencies $\lambda_{m,n}\geq0$ is \begin{align*} u_{m,n}(x,y) = 2\cos(mx)\cos(ny), \, \lambda_{m,n}^2 = m^2 + n^2. \end{align*} Let $p=\sum_{m,n} a_{m,n} u_{m,n}$ be a normalized spectral cluster of width $\delta$. We choose $s\geq0$ such that the set of indices $(m,n)$ with $a_{m,n}\neq 0$ is included in $I$ which is given by \begin{align*} I &= \{(m,n)\in\mathbb{N}_0^2; s^2\leq m^2 + n^2 \leq (s+\delta)^2\}, \\ I_1 &= \{(m,n)\in I; m\geq n\}. \end{align*} Without loss of generality we may assume that $\sum_{I_1}\abs{a_{m,n}}^2\geq 1/2$. We first prove the lower bound: \begin{align*} \int_{\partial\Omega} |\Gamma p|^2 dS &= \sum_n \norm{\sum_m a_{m,n} \Gamma u_{m,n}}_{L^2(\partial\Omega)}^2 \\ &\geq 16\pi\sum_{I_1} \abs{a_{m,n}}^2 \\ &\geq 8\pi \norm{p}_{L^2(\Omega)}^2 . \end{align*} In the first line we use the orthogonality relation for the cosine functions with respect to the $y$ variable. In the second line we use $\norm{u_{m,n}}_{L^2(\partial\Omega)}=4\sqrt{\pi}$ and the fact that the partial sum over $m$ in the preceding step includes at most one member if $\delta$ is small and if the index set is restricted to $I_1$. Let $N_n$ be the number of non-zero summands with respect to the inner sum in line one. It is not difficult to see that $N_n\leq C \sqrt{s}$ for a constant independent of $n$ and $s$.
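This counting estimate can be made explicit. The inner sum runs over the integers $m$ with $\max(s^2-n^2,0)\leq m^2\leq (s+\delta)^2-n^2$, so $N_n$ is at most one plus the length of the corresponding interval, and distinguishing the cases $n\geq s$ and $n<s$ one obtains \begin{equation}\nonumber \sqrt{(s+\delta)^2-n^2} - \sqrt{\max(s^2-n^2,0)} \leq \sqrt{(s+\delta)^2-s^2} = \sqrt{2s\delta+\delta^2} \leq C\sqrt{s} . \end{equation}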
Therefore we have \begin{align*} \int_{\partial\Omega} |\Gamma p|^2 dS &= \sum_n \norm{\sum_m a_{m,n} \Gamma u_{m,n}}_{L^2(\partial\Omega)}^2 \\ &= \sum_n N_n^2 \norm{\frac{1}{N_n}\sum_m a_{m,n} \Gamma u_{m,n}}_{L^2(\partial\Omega)}^2 \\ &\leq C \sum_{m,n} N_n \abs{a_{m,n}}^2 \\ &\leq C s^{\frac{1}{2}} \norm{p}_{L^2(\Omega)}^2 . \end{align*} It remains to prove optimality of the exponent $\alpha=1/2$. For $n_1\in\mathbb{N}$ we consider a special spectral cluster $p_1$ of the form \begin{align*} p_1 = 2 \sum_{m=0}^{N-1} a_m \cos(m x)\cos(n_1 y) \\ \text{where } N=N(n_1) = \lceil\varepsilon \sqrt{n_1}\rceil. \end{align*} If $\varepsilon>0$ is sufficiently small and $n_1$ large enough we see that $p_1$ is a spectral cluster of width $\delta$. If we set $a_m=1/\sqrt{N}$ we see that the $L^2(\Omega)$-norm of $p_1$ is $1$ and \begin{align*} \int_{\{0\}\times (0,\pi)} \abs{\Gamma p_1}^2 dS \geq \abs{\sum_{m=0}^{N-1} a_m}^2 = N(n_1) \geq \varepsilon\sqrt{n_1} . \end{align*} This finishes the proof since $s(p_1)\in [n_1, n_1+\delta]$. \end{proof} \subsection{Decay in the presence of a singularity at zero}\label{sec: Example singularity at 0} So far in this section we have excluded the case when $\mathcal{A}$ has a singularity at zero. The purpose of this subsection is to show that getting decay rates in this case is not more difficult than in the case where there is no singularity at zero. As in Subsection \ref{sec: examples kernel} we simplify our presentation by considering a special family $(\hat{k}'_{\alpha,\beta})_{\alpha,\beta}$ of acoustic impedances given by the measures \begin{equation}\nonumber d\nu'_{\alpha,\beta} = \tau^{\alpha} d\tau|_{(0,1)} + (\tau-1)^{-\beta} d\tau|_{(1,\infty)} \quad (\alpha\in(0,\infty),\beta\in(0,1)). \end{equation} Obviously $(\tau\mapsto\tau^{-1})$ is integrable with respect to $\nu'_{\alpha,\beta}$ (thus $k'_{\alpha,\beta}$ is integrable) but it is not bounded with respect to that measure.
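Both claims can be checked by an elementary computation: \begin{equation}\nonumber \int_0^{\infty} \frac{d\nu'_{\alpha,\beta}(\tau)}{\tau} = \int_0^1 \tau^{\alpha-1} d\tau + \int_1^{\infty} \frac{d\tau}{\tau(\tau-1)^{\beta}} = \frac{1}{\alpha} + \int_1^{\infty} \frac{d\tau}{\tau(\tau-1)^{\beta}} < \infty , \end{equation} where the second integral converges at $\tau=1$ since $\beta<1$ and at infinity since $\beta>0$. On the other hand $\tau\mapsto\tau^{-1}$ is unbounded near $0$ on the set $(0,1)$, which has positive $\nu'_{\alpha,\beta}$-measure, so it is not $\nu'_{\alpha,\beta}$-essentially bounded.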
Observe that $\alpha>1$ implies that $(\tau\mapsto\tau^{-1})$ is square integrable with respect to $\nu'$. In the following we assume for simplicity that $\alpha>1$. The reason is that by Theorem \ref{thm: range of A} the range of $\mathcal{A}$ has a simpler representation in this case. \begin{Lemma}\label{lem: visco kalphabeta} Let $\alpha\in(1,\infty),\beta\in(0,1)$. Then $(\tau\mapsto\tau^{-1})$ is integrable, square integrable but unbounded with respect to $\nu'_{\alpha,\beta}$. Moreover \begin{equation}\nonumber \hat{k}'_{\alpha,\beta}(z) = \frac{\pi}{\sin(\beta\pi)} (1+z)^{-\beta} + O(\abs{z}^{-1}) \end{equation} as $z$ tends to infinity avoiding $\mathbb{R}_-$. \end{Lemma} \begin{proof} We only have to prove the last statement. We calculate \begin{align*} \hat{k}'(z) = \int_0^1 \frac{\tau^{\alpha}}{z+\tau} d\tau + \int_1^{\infty} \frac{1}{z+\tau} \frac{d\tau}{(\tau-1)^{\beta}} =: I + II. \end{align*} It is easy to see that the modulus of $I$ is bounded by $(\abs{z}-1)^{-1}$ for all $z$ with $\abs{z} > 1$. With regard to $II$ we see that the well known identity \begin{equation}\nonumber z^{-\beta} = \frac{\sin(\beta \pi)}{\pi} \int_0^{\infty} \frac{1}{z+\tau} \frac{d\tau}{\tau^{\beta}} \end{equation} finishes the proof. \end{proof} \begin{Proposition}\label{prop: visco singularity at infinity} Let $\alpha\in(0,\infty), \beta\in(2/3,1)$ and $k=k'_{\alpha,\beta}$. Let $\partial\Omega$ be a $C^\infty$-manifold. Then \begin{equation}\nonumber \norm{(is-\mathcal{A})^{-1}} = O(\abs{s}^{\beta}) \end{equation} as $\abs{s}$ tends to infinity. \end{Proposition} \begin{proof} This is an immediate consequence of Lemma \ref{lem: visco kalphabeta} together with Theorem \ref{thm: Preliminary estimates}(i) and Theorem \ref{thm: upper resolvent estimate}. \end{proof} We are now in a position to prove an \emph{optimal} decay estimate. \begin{Proposition} Let $\alpha\in(1,\infty), \beta\in(2/3,1)$ and $k=k'_{\alpha,\beta}$. Let $\partial\Omega$ be a $C^\infty$-manifold.
Then \begin{equation}\nonumber E(t,\mathbf{x}_0) \leq \frac{C}{1+t^2} \left[ E_1(\mathbf{x}_0) + \int_0^{\infty} \norm{\psi_0(\tau)}_{L^2(\partial\Omega)}^2 \frac{d\nu(\tau)}{\tau^2} \right] \end{equation} holds for all $t\geq0$ and for all $\mathbf{x}_0=(p_0,v_0,\psi_0)\in\mathcal{H}$ for which the right-hand side is finite. The constant $C>0$ does not depend on $\mathbf{x}_0$ or $t$. Moreover this estimate is sharp in the sense that it would be invalid if one replaced $C/(1+t^2)$ by $o(1/(1+t^2))$ as $t$ tends to infinity. \end{Proposition} \begin{proof} Proposition \ref{prop: visco singularity at infinity}, Theorem \ref{thm: Preliminary estimates}(ii) together with \cite[Theorem 8.4]{BattyChillTomilov2016} yield \begin{equation}\nonumber \norm{e^{t\mathcal{A}}\mathbf{x}_0} \leq \frac{C}{1+t} \norm{\mathbf{x}_0}_{D(\mathcal{A}) \cap R(\mathcal{A})} \text{ for all } \mathbf{x}_0\in D(\mathcal{A}) \cap R(\mathcal{A}). \end{equation} We know that the norm of $D(\mathcal{A})$ is (equivalent to) the square root of the first order energy $E_1$. By Theorem \ref{thm: range of A} the norm on $R(\mathcal{A})$ is given by \begin{equation}\nonumber \norm{\mathbf{x}_0}_{R(\mathcal{A})}^2 = E(\mathbf{x}_0) + \int_0^{\infty} \norm{\psi_0(\tau)}_{L^2(\partial\Omega)}^2 \frac{d\nu(\tau)}{\tau^2}. \end{equation} This gives the desired estimate. The sharpness of this estimate follows from \cite[Theorem 6.9 and the remarks in Chapter 8]{BattyChillTomilov2016}. \end{proof} \section{Optimal decay rates for the 1D case}\label{sec: 1D case} Throughout this section $\Omega=(0,1)$ and $k$ is a completely monotonic, integrable function. We aim to show that in the 1D setting the conclusion of Theorem \ref{thm: upper resolvent estimate} remains true without any further hypothesis on the acoustic impedance such as (\ref{eq: additional assumptions}). Even more is true: we prove that the upper estimate is optimal.
More precisely we prove \begin{Theorem}\label{thm: optimal resolvent estimate 1D} Let $\Omega=(0,1)$. Then there are constants $c,C>0$ such that for all $s>1$ we have \begin{equation}\nonumber \frac{c}{\mathbb{R}e\hat{k}(is)} \leq \sup_{1\leq\abs{\sigma}\leq s} \norm{(i\sigma+\mathcal{A})^{-1}} \leq \frac{C}{\mathbb{R}e\hat{k}(is)}. \end{equation} \end{Theorem} We prove the lower bound by investigating the spectrum of $-\mathcal{A}$ which is close to the imaginary axis (Subsection \ref{sec: the spectrum}). Furthermore we give a more or less concrete formula for the stationary resolvent operator $R(is)$ which allows us to prove the upper bound (Subsection \ref{sec: Upper resolvent estimate}). Subsection \ref{sec: Decay rates} contains implications of Theorem \ref{thm: optimal resolvent estimate 1D} for the decay rates of the energy of the wave equation. \subsection{The spectrum}\label{sec: the spectrum} The spectrum of $-\mathcal{A}$ satisfies a characteristic equation which is implicitly contained in \cite{DFMP2010b}. For the convenience of the reader we give a complete proof. \begin{Proposition} A number $z\in \mathbb{C}\backslash(-\infty,0]$ is in the spectrum of $-\mathcal{A}$, and hence an eigenvalue, if and only if it satisfies \begin{equation}\label{eq: characteristic equation 1D} \left(\hat{k}(z)-i\tan\left(\frac{iz}{2}\right)\right)\cdot \left(\hat{k}(z)+i\cot\left(\frac{iz}{2}\right)\right) = 0 . \end{equation} \end{Proposition} \begin{proof} By Theorem \ref{thm: spectrum DFMP} together with the equivalence between (\ref{A}) and (\ref{R}) we see that $z$ is a spectral point if and only if there is a non-zero function $p$ solving \begin{align*} \begin{cases} z^2p(x) - p''(x) = 0 & (x\in(0,1)), \\ -p'(0) + z\hat{k}(z)p(0) = 0, & \\ p'(1) + z\hat{k}(z)p(1) = 0. & \end{cases} \end{align*} Up to a scalar factor, every solution of the first two lines is given by the following ansatz \begin{equation}\nonumber p(x) = \cos(iz x) - i\hat{k}(z)\sin(iz x) .
\end{equation} Plugging this into the third line yields that $z$ is an eigenvalue if and only if \begin{equation}\label{eq: precharacteristic equation} \left(\hat{k}(z)^2 + 2i\hat{k}(z)\cot(iz) + 1\right) z\sin(iz) = 0. \end{equation} Note that the zeros of the sine function do not lead to an eigenvalue since the cotangent function has a singularity at the same point. Actually we already know from the situation of general domains that an eigenvalue which is neither zero nor a negative number must have negative real part. Thus we may simplify (\ref{eq: precharacteristic equation}) by dividing by $z\sin(iz)$. The claim now follows from the formula $\cot(\zeta)-\tan(\zeta)=2\cot(2\zeta)$ which is valid for all complex numbers $\zeta$ at which both sides are defined. \end{proof} Let $H,R>0$. The reader may consider $H$ and $R$ as large numbers. We are interested in the part of the spectrum of $-\mathcal{A}$ contained in the strip \begin{equation}\nonumber {\mathcal{U}_H^R} = \{z\in\mathbb{C}; \abs{\Im z}>R \text{ and } 0 < -\mathbb{R}e z < H\} . \end{equation} \begin{Proposition}\label{thm: spectrum 1D} Let $H>0$. Then for $R>0$ large enough there exists a natural number $n_0>0$ such that the part of the spectrum of $-\mathcal{A}$ which is contained in ${\mathcal{U}_H^R}$ is given by a doubly infinite sequence $(z_n)_{n=\pm n_0}^{\infty}$ with $z_{-n}=\overline{z_n}$ for all $n$ and \begin{align*} \Im z_n &= \pi n - \left[(2+O(|\hat{k}|))\Im\hat{k}\right](i\pi n), \\ \mathbb{R}e z_n &= -\left[(2+O(|\hat{k}|))\mathbb{R}e\hat{k}\right](i\pi n). \end{align*} \end{Proposition} As a consequence the lower bound in Theorem \ref{thm: optimal resolvent estimate 1D} is proved. Note that the two asymptotic formulas given by the proposition imply $z_n=i\pi n-(2+o(1))\hat{k}(i\pi n)$ for $n$ tending to plus or minus infinity. This formula can be proved by the same Taylor expansion argument as in the proof of Lemma \ref{thm: spectrum on disk}. See also the remark after the proof of the mentioned lemma.
But this is not enough in order to prove the lower bound in Theorem \ref{thm: optimal resolvent estimate 1D} since it might happen that the real part of $\hat{k}(is)$ tends much faster to zero than its imaginary part. This explains the more elaborate Taylor expansion technique in the proof below. \begin{proof}[Proof of Proposition \ref{thm: spectrum 1D}] We are searching for the solutions $z\in{\mathcal{U}_H^R}$ of the characteristic equation (\ref{eq: characteristic equation 1D}). For simplicity we only consider the solutions of \begin{equation}\nonumber z\in{\mathcal{U}_H^R} \text{ and } F(z) := \hat{k}(z) - i\tan\left(\frac{iz}{2}\right) = 0. \end{equation} We apply a Rouch\'e argument to show that the zeros of this equation are close to the zeros $is_{2n}=2n\pi i$ of the tangent-type function on the right-hand side. Let $(\varepsilon_{2n})$ be a null-sequence of positive real numbers, smaller than $H$, to be fixed later. Let $B_{2n}$ be the open ball of radius $\varepsilon_{2n}$ around the center $is_{2n}$. For $r>0$ let \begin{equation} {\mathcal{V}_H^R}(r)=\{z\in\mathbb{C}; R < \Im z < R+r \text{ and } -H < \mathbb{R}e z < H\}. \end{equation} Take $K(r)$ to be the boundary of the set ${\mathcal{V}_H^R}(r)\backslash (\bigcup_n B_{2n})$. Since $\hat{k}(z)$ tends to zero as $z$ tends to infinity with bounded real part we can choose $R$ so large and $(\varepsilon_{2n})$ so slowly decreasing that $|\hat{k}(z)|<\abs{i\tan(iz/2)}$ for $z\in K(r)$. Thus Rouch\'e's theorem for meromorphic functions says that for $F$ and for $(z\mapsto i\tan(iz/2))$ restricted to ${\mathcal{V}_H^R}(r)$ the number of zeros minus the number of poles (counted with multiplicity) is the same for all $r>0$. The poles of $F$ are actually the same as those of the tangent-type function. This proves that for large enough $n_0\in\mathbb{N}$ the zeros of $F$ from ${\mathcal{U}_H^R}$ for $R=(2n_0-1)\pi$ are simple and contained in the balls $B_{2n}$ for $\abs{n}\geq n_0$.
Note that here we used the previously established fact that zeros of the characteristic equation must have negative real part. We have verified that all zeros $z_{2n}$ of $F|_{\mathcal{U}_H^R}$ are given by the following ansatz: \begin{equation}\nonumber z_{2n} = is_{2n} - \xi_{2n} \text{ with } \mathbb{R}e\xi_{2n}>0 \text{ and } \xi_{2n} = o(1). \end{equation} In the remaining part of the proof we simplify the notation by dropping the indices from $z, s$ and $\xi$. We also write $\hat{k}$ instead of $\hat{k}(z)$. It is not difficult to verify that $F(z)=0$ is equivalent to \begin{equation}\nonumber e^z = \frac{1-\hat{k}}{1+\hat{k}} = \frac{(1-i\Im\hat{k}) - \mathbb{R}e\hat{k}}{(1+i\Im\hat{k}) + \mathbb{R}e\hat{k}}. \end{equation} Let $\alpha=\arg(1+i\Im\hat{k})$ be the argument of $1+i\Im\hat{k}$ and $L=(1+(\Im\hat{k})^2)^{1/2}$. Then \begin{align*} &\arg(1\pm\hat{k}) = \pm \alpha (1+O(\mathbb{R}e\hat{k})) = \pm (1+O(|\hat{k}|))\Im\hat{k}, \\ &\text{thus } \Im \xi = 2(1+O(|\hat{k}|))\Im\hat{k}. \end{align*} This yields the first asymptotic formula claimed in the proposition. The second asymptotic formula is a direct consequence of \begin{equation}\nonumber e^{-\mathbb{R}e\xi} = \frac{L-(1+O(\abs{\alpha}^2))\mathbb{R}e\hat{k}}{L+(1+O(\abs{\alpha}^2))\mathbb{R}e\hat{k}} = 1 - \frac{2}{L}(1+O(|\Im\hat{k}|^2))\mathbb{R}e\hat{k} + O((\mathbb{R}e\hat{k})^2). \end{equation} \end{proof} \subsection{Upper resolvent estimate}\label{sec: Upper resolvent estimate} We prove the upper estimate stated in Theorem \ref{thm: optimal resolvent estimate 1D}. By Theorem \ref{thm: Preliminary estimates} part (i) it suffices to show \begin{Proposition} For all $\abs{s}\geq 1$ we have $\norm{R(is)}_{L^2\rightarrow L^2}\leq C(\abs{s}\mathbb{R}e\hat{k}(is))^{-1}$.
\end{Proposition} \begin{proof} For some $f\in L^2(0,1)$ let $p$ be the solution of \begin{align}\label{eq: stationary equation 1D} \begin{cases} -s^2p(x) - p''(x) = f & (x\in(0,1)), \\ -p'(0) + is\hat{k}(is)p(0) = 0, & \\ p'(1) + is\hat{k}(is)p(1) = 0. & \end{cases} \end{align} Let us define two auxiliary functions $p_f$ and $p^0$ by \begin{align*} p_f(x) = -\frac{1}{s}\int_0^x \sin(s(x-y))f(y) dy \text{ and } p^0(x) = \cos(sx) + i\hat{k}(is)\sin(sx). \end{align*} It is easy to see that $p=ap^0+p_f$ with $a\in\mathbb{C}$ is the only possible ansatz which satisfies the first two lines in (\ref{eq: stationary equation 1D}). The parameter $a$ is uniquely determined by the condition from the third line. A short calculation yields that this condition is equivalent to \begin{equation}\nonumber a s\cdot \underbrace{\left( \hat{k}(is) + i\tan\left(\frac{s}{2}\right) \right) \left( \hat{k}(is) - i\cot\left(\frac{s}{2}\right) \right)}_{=:D(s)}\cdot \sin(s) = - p_f'(1) - is\hat{k}(is) p_f(1). \end{equation} Note that the singularities of $D$ cancel the zeros of the sine function. Thus we have an explicit formula for $a$ in terms of $f$. Further note that the absolute values of $sp_f(1)$ and $p_f'(1)$ can be estimated from above by a constant times $\norm{f}_{L^2}$. Thus \begin{equation}\nonumber \abs{a} \leq \frac{C}{\abs{s}} \cdot \frac{1}{\abs{D(s)\sin(s)}}\cdot \norm{f}_{L^2(0,1)}. \end{equation} Due to the presence of the tangent- and cotangent-type functions the factor $D(s)\sin(s)$ can only be small in a neighbourhood of $s=2n\pi$ or $s=(2n+1)\pi$. But in this case the real part of $\hat{k}$ prevents $D$ from getting too small. We thus have an estimate $\abs{D(s)\sin(s)}\geq c \mathbb{R}e\hat{k}(is)$ for $\abs{s}\geq 1$ which in turn gives an upper bound on $\abs{a}$. Since the $L^2$-norm of $p^0$ can be estimated from above by a constant the proof is finished.
\end{proof} \subsection{Decay rates}\label{sec: Decay rates} An immediate consequence of Theorems \ref{thm: optimal resolvent estimate 1D}, \ref{thm: Preliminary estimates}(iii), \ref{thm: Batty-Duyckaerts} and the remark after Theorem \ref{thm: Batty-Duyckaerts} is \begin{Theorem}\label{thm: Decay rate 1D} Assume that $\nu|_{(0,\varepsilon)}=0$ for some $\varepsilon>0$. Then there are constants $c,C>0$ and $t_0>0$ such that for all $t\geq t_0$ \begin{equation}\nonumber c M^{-1}(t/c) \leq \sup_{E_1(\mathbf{x}_0)\leq 1} E(t, \mathbf{x}_0) \leq C M_{log}^{-1}(t/C) \end{equation} where the increasing function $M:(0,\infty)\rightarrow(0,\infty)$ is given by $M(s)=(\mathbb{R}e\hat{k}(is))^{-1}$. \end{Theorem} We made the assumption $\nu|_{(0,\varepsilon)}=0$ (i.e. $\mathcal{A}$ is invertible) only to simplify the formulation of the theorem. A recipe for adapting the formulation in the case of a non-invertible $\mathcal{A}$ is given in Subsection \ref{sec: Example singularity at 0}. \section{Further research}\label{sec: Conclusion} For a complete treatment of resolvent estimates for wave equations like (\ref{eq: wave equation}) it would be desirable to answer at least the following two questions: \textbf{Question 1.} Is the upper bound on $\norm{(is-\mathcal{A})^{-1}}$, given by Theorem \ref{thm: upper resolvent estimate}, optimal? \textbf{Question 2.} Can one discard the additional assumption (\ref{eq: additional assumptions}) on $\hat{k}$ without changing the conclusion of Theorem \ref{thm: upper resolvent estimate}? A strategy to answer question 1 positively is to show that there exists a sequence of eigenvalues of $-\mathcal{A}$ which tends to infinity and approaches the imaginary axis sufficiently fast. We have seen that this strategy works at least for $\Omega=(0,1)$ and $\Omega=D$ (see Section \ref{sec: 1D case} and Subsection \ref{sec: examples disk}). For the disk we restricted to kernels $k=k_{\beta,\varepsilon}$.
However, with the more elaborate Taylor argument which proved Proposition \ref{thm: spectrum 1D} one can discard this restriction from Lemma \ref{thm: spectrum on disk}. We believe that there is a general argument for any bounded Lipschitz domain $\Omega$ yielding the existence of such a sequence of eigenvalues. By our investigations in Section \ref{sec: 1D case} we already have a positive answer to question 2 in the 1D setting. Moreover, if $\Omega=D$ is the disk we already know from the spectrum that an increasing function $M$ with $M(s)=o((\mathbb{R}e\hat{k}(is))^{-1})$ can never be an upper bound for $\norm{(is+\mathcal{A})^{-1}}$ for all large $\abs{s}$. We think that the answer to question 2 is either ``yes'', or if ``no'' then the upper bound solely depends on $\mathbb{R}e\hat{k}$ and the infimum of all $\alpha$ making the upper estimate in (\ref{eq: upper and lower estimate for Neumann eigenfunctions}) true for all spectral clusters $p$. Concerning the application of resolvent estimates to energy decay there is also a third question. Let us assume for a moment that the answers to questions 1 and 2 were positive. Then Theorem \ref{thm: Decay rate 1D} would hold for any $\Omega$. In general it is not possible to replace $M_{log}$ by $M$ in Theorem \ref{thm: Batty-Duyckaerts}. However, does our particular situation allow for a smaller upper bound? In our opinion the most elegant result would be a positive answer to \textbf{Question 3.} Is Theorem \ref{thm: Decay rate 1D} true for all bounded Lipschitz domains $\Omega$ - even with $M_{log}$ replaced by $M$? \begin{appendix} \section{Besov spaces and the trace operator}\label{apx: Traces} In this article we work with fractional Sobolev spaces, Besov spaces and the trace operator acting on them. Note also that we work with the space $H^s(\partial\Omega)$ which is not only a fractional Sobolev space but also a function space on a closed subset of $\mathbb{R}^d$ which has empty interior.
In this appendix we aim to provide some results from the literature about Sobolev/Besov spaces and their relation to interpolation spaces which are necessary to follow the arguments of our article. Of exceptional importance for the proof of Theorem \ref{thm: Preliminary estimates} (i) and (ii) is the validity of the borderline trace theorem - Proposition \ref{thm: Trace}. This borderline case seems to be well-known to the experts - also for Lipschitz domains - but unfortunately we were not able to find it in the literature except in \cite[Theorem 18.6]{Triebel}. But the proof given there is not in our spirit - the Besov spaces are not defined as interpolation spaces there. Therefore we give a simple direct proof via the characterization of Besov spaces as interpolation spaces, which is true if $\Omega$ has the so-called extension property. \subsection{Fractional Sobolev- and Besov spaces}\label{apx: fractional Sobolev and Besov spaces} Let $\Omega\subset\mathbb{R}^d$ be a bounded Lipschitz domain. Here by \emph{Lipschitz} we mean that locally near any boundary point and in an appropriate coordinate system one can describe $\Omega$ as the set of points which lie above the graph of some Lipschitz continuous function from $\mathbb{R}^{d-1}$ into $\mathbb{R}$. Let $1\leq p\leq \infty$. We assume the reader to be familiar with the usual \emph{Sobolev space} $W^{1,p}(\Omega)$ which consists of all functions $u\in L^p(\Omega)$ for which all distributional derivatives $\partial_ju$ are in $L^p(\Omega)$. There are different methods of defining Besov spaces. For our purposes it is most convenient to define the \emph{Besov spaces} for $0<s<1$ and $1\leq q\leq \infty$ as real interpolation spaces: \begin{equation}\nonumber B^{s,p}_q(\Omega) = (L^p(\Omega), W^{1,p}(\Omega))_{s, q} .
\end{equation} Another approach is to define $B^{s,p}_q(\mathbb{R}^d)$ for example via interpolation and then to define the Besov space on $\Omega$ as the space of restrictions to $\Omega$ of Besov functions on $\mathbb{R}^d$. In general these approaches are not equivalent, but if $\Omega$ satisfies the extension property they are equivalent \cite[Chapter 34]{Tartar}. In our setting ($0<s<1$) we say that $\Omega$ satisfies the \emph{extension property} if there is a linear and continuous operator $\Ext:W^{1,p}(\Omega)\rightarrow W^{1,p}(\mathbb{R}^d)$ such that $(\Ext u)|_{\Omega}=u$ for each $u$ from $W^{1,p}(\Omega)$. The extension property is fulfilled if $\Omega$ is bounded and has a Lipschitz boundary. In the following we always assume that this extension property is fulfilled - otherwise some statements from below are not valid. The \emph{Sobolev-Slobodeckij spaces} are defined as special Besov spaces $W^{s,p}(\Omega)=B^{s,p}_p(\Omega)$. It is common to write $H^s$ instead of $W^{s,2}$ in the Hilbert space setting. For $0\leq s \leq 1$ it is also possible to define the scale of \emph{fractional Sobolev spaces} (also known as \emph{Bessel potential spaces}) $H^{s,p}(\Omega)$ via Fourier methods for the special case $\Omega=\mathbb{R}^d$ and via restriction for the general case. These spaces form a scale of complex interpolation spaces. In general the fractional Sobolev spaces differ from the Sobolev-Slobodeckij spaces but coincide in the case $p=2$ (see \cite[Chapter 7.67]{AdamsFournier}). Note that in Adams and Fournier's book the letter $W$ stands for the fractional Sobolev spaces. We also have $H^{1,p}(\Omega)=W^{1,p}(\Omega)$ for $1<p<\infty$ - which is Calder\'on's Theorem (see \cite[page 7]{JonssonWallin}). We mention that for all $0<s_1\leq s<1$ and $q,q_1\in [1,\infty]$ with the restriction $q\leq q_1$ if $s_1=s$: \begin{align*} B^{s, p}_q(\Omega) \hookrightarrow B^{s_1, p}_{q_1}(\Omega) .
\end{align*} This is a direct consequence of a general result about the real interpolation method (see e.g. \cite[Lemma 22.2]{Tartar}). It is possible to define the Besov space $B^{s,p}_q(A)$ on a general class of closed subsets $A$ of $\mathbb{R}^d$ - the so-called \emph{$d$-sets}. For $\Omega$ with Lipschitz boundary, $\partial\Omega$ is such a set, since it is (topologically) a $(d-1)$-dimensional manifold. The required background is included in \cite[Chapter V]{JonssonWallin}. Again we write $H^s(A)=B^{s, 2}_2(A)$ in the Hilbert space setting. \subsection{Traces for functions with $1/p$ or more derivatives} Throughout this subsection $\Omega\subseteq\mathbb{R}^d$ is a bounded domain with Lipschitz boundary and we let $1<p<\infty$. For $1/p<s<1$ the following theorem is a special case of \cite[Chapter VI, Theorem 1-3]{JonssonWallin}. For $s=1$ it is a special case of \cite[Chapter VII, Theorem 1-3]{JonssonWallin}, keeping in mind that by Calder\'on's Theorem the Bessel potential spaces are the ordinary Sobolev spaces for positive integer orders $s$. \begin{Theorem} Let $1/p<s<1$ and $1\leq q\leq\infty$. Then the trace operator $\Gamma: C(\overline{\Omega})\rightarrow C(\partial\Omega), u\mapsto u|_{\partial\Omega}$ extends continuously to an operator \begin{equation}\nonumber \Gamma:B^{s,p}_{q}(\Omega)\rightarrow B^{s-\frac{1}{p}, p}_q(\partial\Omega). \end{equation} Furthermore $\Gamma$ has a continuous right inverse: \begin{equation}\nonumber \Ext:B^{s-\frac{1}{p}, p}_q(\partial\Omega)\rightarrow B^{s,p}_{q}(\Omega),\quad \Gamma\circ\Ext = \id_{B^{s-\frac{1}{p}, p}_q(\partial\Omega)} . \end{equation} The theorem remains valid for $s=1$, $q=p$ if one replaces $B^{s,p}_q(\Omega)$ by $W^{1,p}(\Omega)$. \end{Theorem} Unfortunately this theorem is false for every $1\leq q\leq \infty$ in the borderline case $s=1/p$ if one replaces the target space of $\Gamma$ by $L^p(\partial\Omega)$. But for our purposes it is sufficient that a weakened version remains valid.
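Before turning to this weakened version, we recall for the reader's convenience the $K$-functional underlying the real interpolation spaces used throughout this appendix; this is standard material (see e.g. \cite{Tartar}) and makes explicit the norm used in the definition of $B^{s,p}_q(\Omega)$:

```latex
K(t, u; E_0, E_1) := \inf\left\{ \norm{u_0}_{E_0} + t\norm{u_1}_{E_1} \,;\;
  u = u_0 + u_1,\ u_0\in E_0,\ u_1\in E_1 \right\}, \qquad t>0,
\\[2mm]
\norm{u}_{(E_0, E_1)_{\theta, q}} :=
  \left( \int_0^{\infty} \left( t^{-\theta} K(t, u; E_0, E_1) \right)^{q} \frac{dt}{t} \right)^{1/q}
  \qquad (0<\theta<1,\ 1\leq q<\infty),
```

with the usual supremum modification for $q=\infty$. In particular the norm of $B^{s,p}_q(\Omega)$ is obtained by taking $E_0=L^p(\Omega)$, $E_1=W^{1,p}(\Omega)$ and $\theta=s$.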
\begin{Proposition}\label{thm: Trace} The trace operator $\Gamma: B^{\frac{1}{p},p}_1(\Omega)\rightarrow L^p(\partial\Omega)$ is continuous. \end{Proposition} Actually $\Gamma$ from this proposition is indeed surjective (but we do not need this property in our article) and a more general version is proved in \cite[Section 18.6]{Triebel}. However there is no \emph{linear} extension operator from $L^p(\partial\Omega)$ back to the Besov space (see \cite{Triebel} and references therein). We indicate a simple direct proof of Proposition \ref{thm: Trace}. It is based on two lemmas which have very simple proofs on their own. The first one is \begin{Lemma}\label{thm: trace inequality} For every $C^{\infty}$ function $u$ with compact support in $\mathbb{R}^d$ it is true that \begin{equation}\nonumber \norm{\Gamma u}_{L^p(\partial\Omega)} \leq C\norm{u}_{L^p(\Omega)}^{1-\frac{1}{p}}\norm{u}_{W^{1,p}(\Omega)}^{\frac{1}{p}}. \end{equation} \end{Lemma} The straightforward proof can be found in \cite[Lemma 13.1]{Tartar}. For a different proof in the case $p=2$ we refer to \cite{Monniaux2014}. The second ingredient to the proof of Proposition \ref{thm: Trace} is \cite[Lemma 25.3]{Tartar} which we recall here for the convenience of the reader. \begin{Lemma}\label{thm: interpolation lemma} Let $(E_0,E_1)$ be an interpolation couple, $F$ a Banach space and let $0<\theta<1$. Then a linear mapping $L:E_0\cap E_1\rightarrow F$ extends to a continuous operator $L:(E_0,E_1)_{\theta, 1}\rightarrow F$ if and only if there exists a $C>0$ such that for all $u\in E_0\cap E_1$ we have $\norm{Lu}_{F}\leq C\norm{u}_{E_0}^{1-\theta}\norm{u}_{E_1}^{\theta}$. \end{Lemma} \begin{proof}[Proof of Proposition \ref{thm: Trace}] Apply the if-part of Lemma \ref{thm: interpolation lemma} to $E_0=L^p(\Omega)$, $E_1=W^{1,p}(\Omega)$, $F=L^p(\partial\Omega)$, $L=\Gamma$ and $\theta=1/p$. Use Lemma \ref{thm: trace inequality} to verify the required inequality.
\end{proof} \section{Semiuniform decay of bounded semigroups}\label{apx: semiuniform stability} We briefly recall three important results connecting resolvent estimates of generators to the decay rate of their corresponding semigroups. In addition to the literature mentioned below we recommend the reader to consult \cite{BattyChillTomilov2016} for a general overview and finer results. Let $X$ be a Banach space and $B(X)$ the algebra of bounded operators acting on $X$. Throughout this section we assume that $-\mathcal{A}$ is the generator of a bounded $C_0$-semigroup $T:[0,\infty)\rightarrow B(X)$. By $D(\mathcal{A})$ and $R(\mathcal{A})$ we denote the domain and range of $\mathcal{A}$ and by $\sigma(\mathcal{A})$ its spectrum. \subsection{Singularity at infinity} The phrase \emph{``Singularity at infinity''} refers to the situation when the resolvent $(is+\mathcal{A})^{-1}$ of $\mathcal{A}$ has no poles on the imaginary axis but is allowed to blow up in operator norm as $s$ tends to infinity. The following theorem is due to Batty and Duyckaerts \cite{BattyDuyckaerts2008} but we also refer to \cite{ChillSeifert2016} for a different proof and to \cite{BattyBorichevTomilov2016} for a generalization. \begin{Theorem}[\cite{BattyDuyckaerts2008}]\label{thm: Batty-Duyckaerts} Assume that $\sigma(\mathcal{A})\cap i\mathbb{R}=\emptyset$ and that there exist constants $C, s_0 > 0$ and an increasing function $M:[s_0,\infty)\rightarrow[1,\infty)$ such that \begin{align}\label{eq: resolvent estimate at infinity} \forall \abs{s}\geq s_0: \norm{(is+\mathcal{A})^{-1}}_{X\rightarrow X} \leq C M(C\abs{s}). \end{align} Then there exist constants $C,t_0 > 0$ such that \begin{align*} \forall t\geq t_0, \mathbf{x}_0 \in D(\mathcal{A}): \norm{T(t)\mathbf{x}_0}_{X} \leq \frac{C}{M_{log}^{-1}(\frac{t}{C})}\norm{\mathbf{x}_0}_{D(\mathcal{A})}. \end{align*} Here $M_{log}(s) = M(s)(\log(1+M(s))+\log(1+s))$.
\end{Theorem} It is comparatively easy to see that a \emph{semiuniform} decay rate for $T$ as in the conclusion of the Batty-Duyckaerts theorem implies that $\mathcal{A}$ has no spectrum on the imaginary axis. Furthermore, if the semigroup decays at least like the decreasing function $m:[0,\infty)\rightarrow(0,\infty)$ then the resolvent cannot grow faster than the function $M_1$ at infinity, where $M_1(s)=1+m_r^{-1}(1/(2\abs{s}+1))$ (see \cite[Proposition 1.3]{BattyDuyckaerts2008}). Here $m_r^{-1}$ denotes the right inverse of a decreasing function. Note that for $M(s)=\abs{s}^{\gamma}$ with $\gamma>0$, this theorem tells us that the decay rate is estimated from above by $(\log(t)/t)^{1/\gamma}$. One may wonder if the logarithmic term is necessary. In general it is, as was shown in \cite{BorichevTomilov2010}, but in the same article one can find the following nice characterization of polynomial decay rates in the Hilbert space setting: \begin{Theorem}[\cite{BorichevTomilov2010}]\label{thm: Borichev-Tomilov} Let $X$ be a Hilbert space and $\gamma>0$. Assume that $\sigma(\mathcal{A})\cap i\mathbb{R}=\emptyset$ and that there exist constants $C, s_0 > 0$ such that \begin{align*} \forall \abs{s}\geq s_0: \norm{(is+\mathcal{A})^{-1}}_{X\rightarrow X} \leq C \abs{s}^{\gamma}. \end{align*} Then there exist constants $C,t_0 > 0$ such that \begin{align*} \forall t\geq t_0, \mathbf{x}_0\in D(\mathcal{A}): \norm{T(t)\mathbf{x}_0}_{X} \leq C t^{-\frac{1}{\gamma}}\norm{\mathbf{x}_0}_{D(\mathcal{A})}. \end{align*} \end{Theorem} \subsection{Singularity at zero and infinity} If $\mathcal{A}$ is our wave operator then, by Theorem \ref{thm: Preliminary estimates} (iii), it may happen that $0$ is a spectral point. Therefore it is convenient to have the following generalization of Theorem \ref{thm: Batty-Duyckaerts} at hand: \begin{Theorem}[\cite{Martinez2011}]\label{thm: Martinez} Assume that $\sigma(\mathcal{A})\cap i\mathbb{R}=\{0\}$.
Assume that in addition to the condition (\ref{eq: resolvent estimate at infinity}) there exist $C > 0$, $0<s_1<1$ and a decreasing function $m:(0,s_1)\rightarrow[1,\infty)$ such that \begin{align*} \forall \abs{s}\leq s_1: \norm{(is+\mathcal{A})^{-1}}_{X\rightarrow X} \leq C m(C\abs{s}). \end{align*} Then there exist constants $C,t_0 > 0$ such that for all $t\geq t_0$ and $\mathbf{x}_0\in D(\mathcal{A})\cap R(\mathcal{A})$ we have \begin{align*} \norm{T(t)\mathbf{x}_0}_{X} \leq C\left[\frac{1}{M_{log}^{-1}(\frac{t}{C})} + m_{log}^{-1}(\frac{t}{C}) + \frac{1}{t}\right]\norm{\mathbf{x}_0}_{D(\mathcal{A})\cap R(\mathcal{A})}. \end{align*} Here $M_{log}$ is defined as in Theorem \ref{thm: Batty-Duyckaerts} and $m_{log}(s) = m(s)(\log(1+m(s))-\log(s))$. \end{Theorem} Concerning the relevance of the $1/t$-term we refer the reader to \cite[Section 8]{BattyChillTomilov2016}. In \cite[Theorem 8.4]{BattyChillTomilov2016} it was shown that in the case of polynomial resolvent bounds on a Hilbert space one can get rid of the logarithmic loss. \end{appendix} \subsection*{Acknowledgments} I am most grateful to Ralph Chill and Eva Fa\v{s}angov\'a for helpful discussions during my work on the topic of the article and for reading and correcting the first version of this paper. I am also grateful to Otared Kavian for uncovering a gap in the proof of Theorem \ref{thm: Preliminary estimates} related to my ignorance about the borderline case of the trace theorem. Finally I want to thank Lars Perlich for helpful comments. Technische Universit\"{a}t Dresden, Fachrichtung Mathematik, Institut f\"{u}r Analysis, 01062, Dresden, Germany. Email: \textit{[email protected]} \end{document}
\begin{document} \title{Universal distortion-free entanglement concentration} \author{ Keiji Matsumoto} \affiliation{Quantum Computation Group, National Institute of Informatics, Tokyo, Japan} \affiliation{Quantum Computation and Information Project, ERATO, JST, 5-28-3 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan.} \author{Masahito Hayashi} \affiliation{Quantum Computation and Information Project, ERATO, JST, 5-28-3 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan.} \pacs{03.67.-a,03.67.Hk} \begin{abstract} We propose a new protocol of \textit{universal} entanglement concentration, which converts many copies of an \textit{unknown} pure state to an \textit{exact} maximally entangled state. The yield of the protocol, which is output as classical information, is probabilistic, and achieves the entropy rate with high probability, just as non-universal entanglement concentration protocols do. \end{abstract} \maketitle \section{Introduction} \label{s1} Conversion of a given partially entangled state to a maximally entangled state by local operations and classical communication (LOCC) is an important task in quantum information processing, both in application and in theory. If the given state is a pure state, such protocols are called entanglement concentration, while mixed state versions are called entanglement distillation. In this paper, we study \textit{universal} entanglement concentration, i.e., entanglement concentration protocols which take unknown states as input, and discuss the optimal yield in higher order asymptotic theory, or in non-asymptotic theory, depending on the setting. The reason why we study universal entanglement concentration rather than universal entanglement distillation is to study the optimal yield in detail, in comparison with non-universal protocols, which have been studied in detail~\cite{Popescu,LP,Hardy,VidalJonathanNielsen,Morikoshi}. Note that the study of optimal entanglement distillation is under development even in non-universal settings.
For example, the known formula for the optimal first order rate still includes an optimization over LOCC in one-copy space. This is in sharp contrast with the study of entanglement concentration, for which non-asymptotic and higher order asymptotic formulas in various forms have been obtained. As demonstrated by Bennett et al.~\cite{Ben}, if many copies of a known pure state are given, the optimal asymptotic yield equals the entropy of entanglement of the input state \cite{Popescu}. To achieve the optimum, both parties apply projections onto the typical subspaces of the reduced density matrix of the given partially entangled pairs (\textit{BBPS protocol}, hereafter). Obviously, the protocol is not applicable to the case where the Schmidt basis is unknown. Of course, one can estimate the necessary information by measuring some of the given copies. In such a protocol, however, the final state is not quite a maximally entangled state, because errors in the estimation of the Schmidt basis will cause distortions. This paper proposes a protocol, denoted by $\left\{ C_{\ast }^{n}\right\} $, of \textit{universal distortion-free entanglement concentration}, in which \textit{exact} (not approximate) maximally entangled states are produced out of identically prepared copies of an unknown pure state. Its yield is probabilistic (so users cannot predict the yield beforehand), but the protocol outputs the amount of the yield as classical information (so that users know what they have obtained), and the rate of the yield asymptotically achieves the entropy of entanglement with probability close to unity. A key to the construction of our protocol is symmetry; an ensemble of identically prepared copies of a state is left unchanged by simultaneous reordering of the copies at each site. This symmetry gives rise to entanglement which is accessible without any information about the Schmidt basis.
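This permutation symmetry is elementary but worth making concrete. A minimal numerical sketch (illustration only, not part of the protocol) checks that $|\phi\rangle^{\otimes 2}$ is invariant under simultaneously swapping the two copies at both sites:

```python
import numpy as np

d = 2
rng = np.random.default_rng(1)
phi = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
phi /= np.linalg.norm(phi)          # amplitudes phi[a, b] of a random pure |phi> in H_A (x) H_B

# |phi> tensor |phi>, with index order (a1, b1, a2, b2)
state = np.einsum('ab,cd->abcd', phi, phi)

# simultaneous reordering of the two copies at both sites:
# (a1, b1, a2, b2) -> (a2, b2, a1, b1)
swapped = state.transpose(2, 3, 0, 1)
print(np.allclose(state, swapped))  # True: the ensemble is permutation invariant
```

The same identity holds for any number of copies and any permutation, which is precisely the symmetry exploited below.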
In some applications, a small distortion in the output might be acceptable, and estimation-based protocols might suffice, because the entropy rate is achieved anyway. In terms of higher order asymptotics and non-asymptotic evaluations, however, we will prove that our protocol is better than any other protocol which may allow a small distortion. In the proof, the following observation simplifies the problem to a large extent. Let us concentrate on the optimization of the worst-case quantity of performance measures over all unknown Schmidt bases, because the uncertainty about the Schmidt basis is the main difficulty of universal entanglement concentration. We also assume that the performance measures are not increased by postprocessing which decreases the Schmidt rank of the produced maximally entangled state. With such reasonable restrictions, an optimal protocol is always found in a class of protocols which are the same as $\left\{ C_{\ast }^{n}\right\} $ in their output quantum states, but may differ in their classical output. Therefore, any attempted improvement of $\left\{ C_{\ast }^{n}\right\} $ cannot change the real yield. What can be 'improved' is the information about how much yield was produced. Since $\left\{ C_{\ast }^{n}\right\} $ outputs the information about the yield correctly, there is no room for 'improvement' in this part either. This observation assures us that $\left\{ C_{\ast }^{n}\right\} $ is optimal if the criterion is fair. In addition, the optimization is now straightforward, for we have to optimize only the classical part of the protocol. Based on this observation, we prove the optimality in terms of a natural class of measures: monotone increasing measures which are bounded over the range and continuously differentiable except at finitely many points. In terms of such measures, the distortion-free condition trivially implies the non-asymptotic optimality of our protocol, while the constraint on the distortion implies that our protocol is optimal up to higher orders.
Also, (a kind of) non-asymptotic optimality is proved for some performance measures which vary with both yield and distortion. These results assure us that our protocol $\{C_{\ast }^{n}\}$ is the best universal entanglement concentration protocol. Here, we stress that most of these results generalize to the case where the Schmidt coefficients of the input are known and its Schmidt basis is unknown. Finally, we prove that the classical output of our protocol gives an asymptotically optimal estimate of the entropy of entanglement. Surprisingly, this estimate is no less accurate than any other estimate based on a (potentially global) measurement whose construction depends on the Schmidt basis of the unknown state. Considering its optimal performance in these very strong senses, it is surprising that our protocol does not use any classical communication at all. The paper is organized as follows. After introducing symbols and terms in Section~\ref{sec:definition}, we describe the implications of the permutation symmetry and construct the protocol $\{C_{\ast }^{n}\}$ (Subsection~\ref{subsec:symmetry}). Its asymptotic performance is analyzed using known results from group representation theory in Subsection~\ref{sec:performance}, followed by a comparison with an estimation-based protocol (Subsection~\ref{subsec:comparison-est-based}). Optimality of the protocol is discussed in Section~\ref{sec:optimality}. Subsection~\ref{subsec:whatisproved} gives the definitions of the performance measures and a short description of the assertions proved. The key lemma, which restricts the class of protocols of interest to a large extent, is proved in Subsection~\ref{subsec:keylemmas}. Subsections~\ref{subsec:dist-free}-\ref{subsec:totalfidelity} treat the proofs of optimality in each setting. The estimation theoretic application of the protocol is discussed in Section~\ref{sec:estimate}.
Interestingly, a part of the arguments in this section gives another proof of the optimality of $\{C_{\ast }^{n}\}$ in terms of the error exponent. In the appendices, we prove several technical lemmas and formulas. Among them, an asymptotic formula for the average yield of non-universal entanglement concentration protocols has, so far as we know, not been shown before, and might be useful for other applications. \section{Definitions} \label{sec:definition} Given an entangled pure state $|\phi \rangle \in \mathcal{H}_{A}\otimes \mathcal{H}_{B}$ ($\dim \mathcal{H}_{A}=\dim \mathcal{H}_{B}=d$), we denote its \textit{Schmidt coefficients} by $\mathbf{p}_{\phi }=(p_{1,\phi },\ldots ,p_{d,\phi })$ ($p_{1,\phi }\geq p_{2,\phi }\geq \ldots \geq p_{d,\phi }\geq 0)$ and its \textit{Schmidt basis} by $\{|e_{i,x}^{\phi }\rangle \}$, respectively. The \textit{entropy of entanglement} of $|\phi \rangle $ equals the Shannon entropy $\mathrm{H}$ of the probability distribution $\mathbf{p}_{\phi }$, where the Shannon entropy $\mathrm{H}$ is defined by $\mathrm{H}(\mathbf{p}):=\sum_{i}-p_{i}\log p_{i}$. (Throughout the paper, the base of log is 2.) In this paper, our main concern is the concentration of maximal entanglement from $|\phi \rangle ^{\otimes n}$ by LOCC. We denote a maximally entangled state with Schmidt rank $L$ by \begin{equation*} \Vert L\rangle :=\frac{1}{\sqrt{L}}\sum_{i=1}^{L}|f_{i,A}^{n}\rangle |f_{i,B}^{n}\rangle , \end{equation*} where $\{|f_{i,x}^{n}\rangle \}$ is an orthonormal basis in $\mathcal{H}_{x}^{\otimes n}(x=A,B)$. Note that $\{|f_{i,x}^{n}\rangle \}$ need not be explicitly defined, for the difference between $\frac{1}{\sqrt{L}}\sum_{i=1}^{L}|f_{i,A}^{n}\rangle |f_{i,B}^{n}\rangle $ and $\frac{1}{\sqrt{L}}\sum_{i=1}^{L}|\widetilde{f_{i,A}^{n}}\rangle |\widetilde{f_{i,B}^{n}}\rangle $ is compensated by a local unitary.
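For concreteness, the Schmidt coefficients $\mathbf{p}_{\phi}$ and the entropy of entanglement can be computed from the amplitude matrix of $|\phi\rangle$ by a singular value decomposition; a minimal numerical sketch (illustration only, not part of the protocol):

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)
amp = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
amp /= np.linalg.norm(amp)                 # amplitude matrix of a normalized |phi>

# Schmidt decomposition of |phi> = SVD of its amplitude matrix;
# singular values are returned in decreasing order, matching p_1 >= ... >= p_d
sv = np.linalg.svd(amp, compute_uv=False)
p = sv**2                                  # Schmidt coefficients p_phi
entropy = -np.sum(p * np.log2(p))          # entropy of entanglement H(p), base-2 log
print(p.sum(), entropy)                    # p sums to 1; 0 <= entropy <= log2(d)
```

A maximally entangled state corresponds to the uniform distribution $p_i=1/d$, for which the entropy attains its maximum $\log d$.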
One can optimally produce $\Vert 2^{n\mathrm{H}(\mathbf{p}_{\phi })}\rangle $ from $|\phi \rangle ^{\otimes n}$ by LOCC with high probability and high fidelity, if $n$ is very large \cite{Ben,Popescu}. In this paper, an entanglement concentration $\{C^{n}\}$ is a sequence of LOCC measurements, in which $C^{n}$ takes $n$ copies $|\phi \rangle ^{\otimes n}$ of an unknown state as its input. With probability $Q_{C^{n}}^{\phi }(x)$, $C^{n}$ outputs $\rho _{C^{n}}^{\phi }(x)$, which is meant to be an approximation to $\Vert 2^{nx}\rangle $, together with $x$ as classical information. The worst-case distortion $\epsilon _{C^{n}}^{\phi }$ is the maximum of the square of the Bures distance between the output $\rho _{C^{n}}^{\phi }(x)$ and the target $\Vert 2^{nx}\rangle $, \begin{equation*} \epsilon _{C^{n}}^{\phi }:=1-\min_{x}\langle 2^{nx}\Vert \rho _{C^{n}}^{\phi }(x)\Vert 2^{nx}\rangle , \end{equation*} while $\bar{\epsilon}_{C^{n}}^{\phi }$ denotes the average distortion, \begin{eqnarray*} \overline{\epsilon }_{C^{n}}^{\phi } &:&=1-\sum\limits_{x}Q_{C^{n}}^{\phi }(x)\langle 2^{nx}\Vert \rho _{C^{n}}^{\phi }(x)\Vert 2^{nx}\rangle \\ &=&1-\mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}\langle 2^{nX}\Vert \rho _{C^{n}}^{\phi }(X)\Vert 2^{nX}\rangle , \end{eqnarray*} where $\mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}$ denotes the average with respect to $Q_{C^{n}}^{\phi }$, \begin{equation*} \mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}f(X):=\sum\limits_{x}Q_{C^{n}}^{\phi }(x)f(x). \end{equation*} A protocol is said to be \textit{distortion-free} if $\epsilon _{C^{n}}^{\phi }=\overline{\epsilon }_{C^{n}}^{\phi }=0$ holds for all $\phi $. \section{Construction of the protocol $\{C_{\ast }^{n}\}$} \label{sec:construction} \subsection{Symmetry and the protocol $\{C_{\ast }^{n}\}$} \label{subsec:symmetry} In the construction of $\{C_{\ast }^{n}\}$, we exploit two kinds of symmetries.
First, our input, $|\phi \rangle ^{\otimes n}$, is invariant under reordering of the copies, i.e., under the action of a permutation $\sigma $ of the set $\{1,\ldots ,n\}$ given by \begin{equation*} \bigotimes_{i=1}^{n}|h_{i,A}\rangle |h_{i,B}\rangle \mapsto \bigotimes_{i=1}^{n}|h_{\sigma ^{-1}(i),A}\rangle |h_{\sigma ^{-1}(i),B}\rangle , \end{equation*} \label{symmetric}where $|h_{i,x}\rangle \in \mathcal{H}_{x}\;(x=A,B)$. (Hereafter, the totality of permutations of the set $\{1,...,n\}$ is denoted by $S_{n}$.) Second, the action of a local unitary transform $U^{\otimes n}{\otimes }V^{\otimes n}$ $(U,V\in \mathrm{SU}(d))$ corresponds to a change of the Schmidt basis. The action of these groups induces a decomposition of the tensored space $\mathcal{H}_{x}^{\otimes n}\;(x=A,B)$ ~\cite{Weyl} into \begin{equation} \mathcal{H}_{x}^{\otimes n}=\bigoplus_{\mathbf{n}}\mathcal{W}_{\mathbf{n},x},\;\mathcal{W}_{\mathbf{n},x}:=\mathcal{U}_{\mathbf{n},x}\otimes \mathcal{V}_{\mathbf{n},x}\;(x=A,B), \label{sym} \end{equation} where $\mathcal{U}_{\mathbf{n},x}$ and $\mathcal{V}_{\mathbf{n},x}$ are irreducible spaces of the tensor representation of $\mathrm{SU}(d)$ and of the representation of the group of permutations, respectively, and \begin{equation} \mathbf{n}=(n_{1},\ldots ,n_{d}),\quad \sum_{i=1}^{d}n_{i}=n,\;n_{i}\geq n_{i+1}\geq 0, \label{yungindex} \end{equation} is called the \textit{Young index}, to which $\mathcal{U}_{\mathbf{n},x}$ and $\mathcal{V}_{\mathbf{n},x}$ uniquely correspond. In the case of the spin-$\frac{1}{2}$ system, $\mathcal{W}_{\mathbf{n},x}$ is an eigenspace of the total spin operator. Due to the invariance under the permutation action (\ref{symmetric}), any $n$-tensored state $|\phi \rangle ^{\otimes n}$ is decomposed in the following form.
\begin{lem} \begin{equation*} |\phi \rangle ^{\otimes n}=\sum_{\mathbf{n}}\sqrt{a_{\mathbf{n}}^{\phi }}|\phi _{\mathbf{n}}\rangle \otimes |\mathcal{V}_{\mathbf{n}}\rangle , \end{equation*} where $|\phi _{\mathbf{n}}\rangle $ is a state vector in $\mathcal{U}_{\mathbf{n},A}\otimes \mathcal{U}_{\mathbf{n},B}$, $a_{\mathbf{n}}^{\phi }$ is a complex number, and $|\mathcal{V}_{\mathbf{n}}\rangle $ is a maximally entangled state in $\mathcal{V}_{\mathbf{n},A}\otimes \mathcal{V}_{\mathbf{n},B}$ with Schmidt rank $\dim \mathcal{V}_{\mathbf{n},A}.$ While $|\phi _{\mathbf{n}}\rangle $ and $a_{\mathbf{n}}^{\phi }$ depend on the input $|\phi \rangle $, $|\mathcal{V}_{\mathbf{n}}\rangle $ does not depend on the input. \end{lem} \begin{pf} Write \begin{equation*} |\phi \rangle ^{\otimes n}=\sum_{\mathbf{n},\mathbf{n}^{\prime }}\sum_{i,j,k,l}b_{\mathbf{n},i,j,\mathbf{n}^{\prime },k,l}\left\vert e_{i}^{\mathcal{U}_{\mathbf{n},A}}\right\rangle \left\vert e_{j}^{\mathcal{V}_{\mathbf{n},A}}\right\rangle \left\vert e_{k}^{\mathcal{U}_{\mathbf{n}^{\prime },B}}\right\rangle \left\vert e_{l}^{\mathcal{V}_{\mathbf{n}^{\prime },B}}\right\rangle , \end{equation*} where $\left\{ \left\vert e_{i}^{\mathcal{U}_{\mathbf{n},A}}\right\rangle \right\} $, $\left\{ \left\vert e_{j}^{\mathcal{V}_{\mathbf{n},A}}\right\rangle \right\} $, $\left\{ \left\vert e_{k}^{\mathcal{U}_{\mathbf{n}^{\prime },B}}\right\rangle \right\} $, $\left\{ \left\vert e_{l}^{\mathcal{V}_{\mathbf{n}^{\prime },B}}\right\rangle \right\} $ are complete orthonormal bases of $\mathcal{U}_{\mathbf{n},A}$, $\mathcal{V}_{\mathbf{n},A}$, $\mathcal{U}_{\mathbf{n}^{\prime },B}$, $\mathcal{V}_{\mathbf{n}^{\prime },B}$, respectively.
Establish a correspondence between the vector $|\phi \rangle ^{\otimes n}$ in the bipartite system and the operator \begin{equation*} \Phi ^{n}:=\sum_{\mathbf{n},\mathbf{n}^{\prime }}\sum_{i,j,k,l}b_{\mathbf{n},i,j,\mathbf{n}^{\prime },k,l}\left\vert e_{i}^{\mathcal{U}_{\mathbf{n}}}\right\rangle \left\langle e_{k}^{\mathcal{U}_{\mathbf{n}^{\prime }}}\right\vert \otimes \left\vert e_{j}^{\mathcal{V}_{\mathbf{n}}}\right\rangle \left\langle e_{l}^{\mathcal{V}_{\mathbf{n}^{\prime }}}\right\vert , \end{equation*} using the 'partial transpose', i.e., the linear map which maps $\left\vert e_{i}^{\mathcal{U}_{\mathbf{n},A}}\right\rangle \left\vert e_{j}^{\mathcal{V}_{\mathbf{n},A}}\right\rangle \left\vert e_{k}^{\mathcal{U}_{\mathbf{n}^{\prime },B}}\right\rangle \left\vert e_{l}^{\mathcal{V}_{\mathbf{n}^{\prime },B}}\right\rangle $ to $\left\vert e_{i}^{\mathcal{U}_{\mathbf{n}}}\right\rangle \left\langle e_{k}^{\mathcal{U}_{\mathbf{n}^{\prime }}}\right\vert \otimes \left\vert e_{j}^{\mathcal{V}_{\mathbf{n}}}\right\rangle \left\langle e_{l}^{\mathcal{V}_{\mathbf{n}^{\prime }}}\right\vert $. Since this map is one to one, we study $\Phi ^{n}$ instead of $|\phi \rangle ^{\otimes n}$. Observe that $\Phi ^{n}$ is invariant under the action of any permutation $\sigma $, \begin{equation*} \sigma \Phi ^{n}\sigma ^{\dagger }=\Phi ^{n}, \end{equation*} where the action of $\sigma $ is defined by (\ref{symmetric}). Due to Lemma \ref{lem:decohere}, $b_{\mathbf{n},i,j,\mathbf{n}^{\prime },k,l}=0$ unless $\mathbf{n}=\mathbf{n}^{\prime }$, and \begin{equation*} \Phi ^{n}=\bigoplus_{\mathbf{n}}\sum_{i,j,k,l}b_{\mathbf{n},i,j,\mathbf{n},k,l}\left\vert e_{i}^{\mathcal{U}_{\mathbf{n}}}\right\rangle \left\langle e_{k}^{\mathcal{U}_{\mathbf{n}}}\right\vert \otimes \left\vert e_{j}^{\mathcal{V}_{\mathbf{n}}}\right\rangle \left\langle e_{l}^{\mathcal{V}_{\mathbf{n}}}\right\vert .
\end{equation*} Then, we apply Lemma \ref{lem:shur2} to \begin{equation*} \sum_{i,j,k,l}b_{\mathbf{n},i,j,\mathbf{n},k,l}\left\vert e_{i}^{\mathcal{U}_{\mathbf{n}}}\right\rangle \left\langle e_{k}^{\mathcal{U}_{\mathbf{n}}}\right\vert \otimes \left\vert e_{j}^{\mathcal{V}_{\mathbf{n}}}\right\rangle \left\langle e_{l}^{\mathcal{V}_{\mathbf{n}}}\right\vert , \end{equation*} proving that $\Phi ^{n}$ is of the form \begin{eqnarray*} \Phi ^{n} &=&\bigoplus_{\mathbf{n}}\sum_{i,k}b_{\mathbf{n},i,k}^{\prime }\left\vert e_{i}^{\mathcal{U}_{\mathbf{n}}}\right\rangle \left\langle e_{k}^{\mathcal{U}_{\mathbf{n}}}\right\vert \otimes \mathrm{Id}_{\mathcal{V}_{\mathbf{n}}} \\ &=&\bigoplus_{\mathbf{n}}\sqrt{a_{\mathbf{n}}^{\phi }}\Phi _{\mathbf{n}}\otimes \sqrt{\frac{1}{\dim \mathcal{V}_{\mathbf{n}}}}\sum_{j=1}^{\dim \mathcal{V}_{\mathbf{n}}}\left\vert e_{j}^{\mathcal{V}_{\mathbf{n}}}\right\rangle \left\langle e_{j}^{\mathcal{V}_{\mathbf{n}}}\right\vert , \end{eqnarray*} where $\Phi _{\mathbf{n}}$ is a linear map in $\mathcal{U}_{\mathbf{n}}$. To obtain the lemma, we simply take the 'partial transpose' of this again: apply the linear map which maps $\left\vert e_{i}^{\mathcal{U}_{\mathbf{n}}}\right\rangle \left\langle e_{k}^{\mathcal{U}_{\mathbf{n}^{\prime }}}\right\vert \otimes \left\vert e_{j}^{\mathcal{V}_{\mathbf{n}}}\right\rangle \left\langle e_{l}^{\mathcal{V}_{\mathbf{n}^{\prime }}}\right\vert $ to $\left\vert e_{i}^{\mathcal{U}_{\mathbf{n},A}}\right\rangle \left\vert e_{j}^{\mathcal{V}_{\mathbf{n},A}}\right\rangle \left\vert e_{k}^{\mathcal{U}_{\mathbf{n}^{\prime },B}}\right\rangle \left\vert e_{l}^{\mathcal{V}_{\mathbf{n}^{\prime },B}}\right\rangle $. Since this map is one to one, $\Phi ^{n}$ is mapped to $|\phi \rangle ^{\otimes n}$.
By this map, $\sqrt{\frac{1}{\dim \mathcal{V}_{\mathbf{n}}}}\sum_{j=1}^{\dim \mathcal{V}_{\mathbf{n}}}\left\vert e_{j}^{\mathcal{V}_{\mathbf{n}}}\right\rangle \left\langle e_{j}^{\mathcal{V}_{\mathbf{n}}}\right\vert $ is mapped to $|\mathcal{V}_{\mathbf{n}}\rangle $, and $\Phi _{\mathbf{n}}$ is mapped to $|\phi _{\mathbf{n}}\rangle \in \mathcal{U}_{\mathbf{n},A}\otimes \mathcal{U}_{\mathbf{n},B}$. \end{pf} This lemma implies that there are maximally entangled states, $|\mathcal{V}_{\mathbf{n}}\rangle $, which are accessible without using any knowledge of the input state. The average amount of accessible entanglement is determined by the coefficients $a_{\mathbf{n}}^{\phi }$, which vary with the Schmidt coefficients of the input $|\phi \rangle $. Now we are in a position to present our universal distortion-free entanglement concentration protocol $\left\{ C_{\ast }^{n}\right\} $ (hereafter, the projection onto a Hilbert space $\mathcal{X}$ is also denoted by $\mathcal{X}$): First, the parties apply the projection measurements $\{\mathcal{W}_{\mathbf{n}_{A},A}\}_{\mathbf{n}_{A}}$ and $\{\mathcal{W}_{\mathbf{n}_{B},B}\}_{\mathbf{n}_{B}}$ at their respective sites independently. This yields the same measurement result $\mathbf{n}_{A}=\mathbf{n}_{B}=\mathbf{n}$ at both sites, and the state is changed to $|\phi _{\mathbf{n}}\rangle \otimes |\mathcal{V}_{\mathbf{n}}\rangle $. Taking the partial trace over $\mathcal{U}_{\mathbf{n},A}$ and $\mathcal{U}_{\mathbf{n},B}$ at each site, we obtain $|\mathcal{V}_{\mathbf{n}}\rangle $. If $\mathcal{H}_{A}$ and $\mathcal{H}_{B}$ are qubit systems, $\{\mathcal{W}_{\mathbf{n}_{A},A}\}_{\mathbf{n}_{A}}$ is nothing but the measurement of the total angular momentum. For the sake of the formalism, $|\mathcal{V}_{\mathbf{n}}\rangle $ is mapped to $\Vert \dim \mathcal{V}_{\mathbf{n}}\rangle $.
With this modification, $\rho _{C_{\ast }^{n}}^{\phi }(x)=\Vert 2^{nx}\rangle \langle 2^{nx}\Vert $ and $Q_{C_{\ast }^{n}}^{\phi }(x)=a_{\mathbf{n}}^{\phi }$ if $2^{nx}=\dim \mathcal{V}_{\mathbf{n}}$ (if such $\mathbf{n}$ does not exist, $Q_{C_{\ast }^{n}}^{\phi }(x)=0$). Due to the identity $Q_{C_{\ast }^{n}}^{\phi }\left( \frac{\log \dim \mathcal{V}_{\mathbf{n}}}{n}\right) =a_{\mathbf{n}}^{\phi }=\mathrm{Tr}\left\{ \mathcal{W}_{\mathbf{n},A}\left( \mathrm{Tr}_{B}|\phi \rangle \langle \phi |\right) ^{\otimes n}\right\} $ and the formulas in the appendix of \cite{Ha}, we can evaluate the asymptotic behavior of $Q_{C_{\ast }^{n}}^{\phi }(x)$ as follows: \begin{equation} \begin{split} \left\vert \frac{\log \dim \mathcal{V}_{\mathbf{n}}}{n}-\mathrm{H}\left( \frac{\mathbf{n}}{n}\right) \right\vert & \leq \frac{d^{2}+2d}{2n}\log (n+d), \\ \lim_{n\rightarrow \infty }\frac{-1}{n}\log Q_{C_{\ast }^{n}}^{\phi }\left( \frac{\log \dim \mathcal{V}_{\mathbf{n}}}{n}\right) & =\mathrm{D}(\frac{\mathbf{n}}{n}\Vert \mathbf{p}_{\phi }), \\ \lim_{n\rightarrow \infty }\frac{-1}{n}\log \sum_{\frac{\mathbf{n}}{n}\in \mathcal{R}}Q_{C_{\ast }^{n}}^{\phi }\left( \frac{\log \dim \mathcal{V}_{\mathbf{n}}}{n}\right) & =\min_{\mathbf{q}\in \mathcal{R}}\mathrm{D}(\mathbf{q}\Vert \mathbf{p}_{\phi }), \end{split} \label{grep-type} \end{equation} where $\mathcal{R}$ is an arbitrary closed subset of $\{\mathbf{q}\,|\,q_{1}\geq q_{2}\geq \ldots \geq q_{d}\geq 0,\sum_{i=1}^{d}q_{i}=1\}$. These formulas mean that the probability of $\frac{1}{n}\log \dim \mathcal{V}_{\mathbf{n}}\sim \mathrm{H}\left( \mathbf{p}_{\phi }\right) $ is exponentially close to unity, as is demonstrated in Subsection \ref{sec:performance}. \subsection{Asymptotic Performance of $\{C_{\ast }^{n}\}$} \label{sec:performance} In this subsection, we analyze the asymptotic performance of $\{C_{\ast }^{n}\}$ in terms of success (failure) probability, total fidelity, and the average of the log of the Schmidt rank of the output maximally entangled states.
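The combinatorial objects behind Eq.~(\ref{grep-type}) can be made concrete for small $n$ and $d$: the Young indices of Eq.~(\ref{yungindex}) play the role of classical types, and the relative entropy $\mathrm{D}(\mathbf{q}\Vert \mathbf{p})$ is the exponent governing each type's probability. The following sketch, under our own naming conventions, enumerates Young indices and computes the entropy and relative-entropy quantities appearing above:

```python
from math import log2

def young_indices(n, d):
    """All Young indices (n_1,...,n_d) with n_1 >= ... >= n_d >= 0 summing to n."""
    def rec(remaining, cap, parts):
        if len(parts) == d - 1:
            # The last component is forced; keep it non-increasing.
            if remaining <= cap:
                yield tuple(parts) + (remaining,)
            return
        for k in range(min(cap, remaining), -1, -1):
            yield from rec(remaining - k, k, parts + [k])
    yield from rec(n, n, [])

def shannon(q):
    """Shannon entropy H(q) in bits."""
    return -sum(x * log2(x) for x in q if x > 0)

def kl(q, p):
    """Relative entropy D(q||p), the exponent in Eq. (grep-type)."""
    return sum(x * log2(x / y) for x, y in zip(q, p) if x > 0)

# D(n/n || p_phi) vanishes only at n/n = p_phi, so the distribution of the
# normalized Young index concentrates around p_phi: yields near H(p_phi)
# dominate, as the text asserts.
```

For example, for $n=2$, $d=2$ the only Young indices are $(2,0)$ (the symmetric subspace) and $(1,1)$ (the singlet), in agreement with the spin-$\frac{1}{2}$ picture.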
(The proof of the optimality of $\{C_{\ast }^{n}\}$ is given for a more general class of measures.) Since the main difficulty of universal concentration is attributed to the uncertainty about the Schmidt basis, we consider the value in the worst-case Schmidt basis. The worst-case value of the failure probability, or the probability that the yield is not more than $y$, equals \begin{equation} \max_{U,V}\sum\limits_{x:x\leq y}Q_{C^{n}}^{U\otimes V\phi }(x), \label{error-prob} \end{equation} where $U$ and $V$ run over all unitary matrices. Since the yield of our protocol $\{C_{\ast }^{n}\}$ is invariant under local unitary operations, the maximum over $U$ and $V$ can be removed. Due to the first and the third formulas in (\ref{grep-type}), we have \begin{equation} \lim_{n\rightarrow \infty }\frac{-1}{n}\log \sum\limits_{x:x\leq R}Q_{C_{\ast }^{n}}^{\phi }(x)=\mathrm{D}(R\Vert \mathbf{p}_{\phi }), \label{fexp1-1} \end{equation} and \begin{equation} \lim_{n\rightarrow \infty }\frac{-1}{n}\log \sum\limits_{x:x\geq R}Q_{C_{\ast }^{n}}^{\phi }(x)=\mathrm{D}(R\Vert \mathbf{p}_{\phi }), \label{fexp-2} \end{equation} where \begin{equation*} \mathrm{D}(R\Vert \mathbf{p})=\left\{ \begin{array}{cc} \min_{\mathbf{q}:\,H(\mathbf{q})\geq R}\mathrm{D}(\mathbf{q}\Vert \mathbf{p}) & (\mathrm{H}(\mathbf{p})\leq R), \\ \min_{\mathbf{q}:\,H(\mathbf{q})\leq R}\mathrm{D}(\mathbf{q}\Vert \mathbf{p}) & (\mathrm{H}(\mathbf{p})>R). \end{array} \right. \end{equation*} Eq.~(\ref{fexp1-1}) implies that our protocol achieves the entropy rate: if $R$ is strictly smaller than $\mathrm{H}(\mathbf{p}_{\phi })$, the RHS of Eq.~(\ref{fexp1-1}) is positive, which means that the failure probability is exponentially small. On the other hand, Eq.~(\ref{fexp-2}) means that the probability of a yield more than the optimal rate (the \textit{strong converse probability}) tends to vanish, and its convergence is exponentially fast.
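For $d=2$ the rate function $\mathrm{D}(R\Vert \mathbf{p})$ defined above can be evaluated by a brute-force grid search over binary distributions. This is an illustrative numerical sketch of the definition (not the derivation used in the text), with `p1` the larger Schmidt coefficient:

```python
from math import log2

def h2(q1):
    """Binary entropy of (q1, 1-q1) in bits."""
    return -sum(x * log2(x) for x in (q1, 1.0 - q1) if x > 0)

def d2(q1, p1):
    """Binary relative entropy D((q1,1-q1) || (p1,1-p1))."""
    return sum(x * log2(x / y)
               for x, y in zip((q1, 1.0 - q1), (p1, 1.0 - p1)) if x > 0)

def rate(R, p1, steps=200000):
    """D(R||p) for d=2: minimize D(q||p) over q with H(q) >= R when
    H(p) <= R, and over q with H(q) <= R otherwise (grid search)."""
    over = h2(p1) <= R
    best = float("inf")
    for i in range(steps + 1):
        q1 = i / steps
        feasible = (h2(q1) >= R) if over else (h2(q1) <= R)
        if feasible:
            best = min(best, d2(q1, p1))
    return best
```

As expected from the definition, `rate(R, p1)` vanishes at $R=\mathrm{H}(\mathbf{p})$ and grows as $R$ moves away from the entropy, which is exactly the statement that failure (and strong converse) probabilities decay exponentially away from the entropy rate.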
Next, we evaluate the exponent of the \textit{total fidelity} $F_{C^{n}}^{\phi }\left( R\right) $, or the average fidelity to the maximally entangled states whose Schmidt rank is not smaller than $2^{nR}$: \begin{equation} F_{C^{n}}^{\phi }\left( R\right) :=\mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}\max_{y:y\geq R}\langle 2^{ny}\Vert \rho _{C^{n}}^{\phi }(X)\Vert 2^{ny}\rangle . \label{def:totalfidelity} \end{equation} (The optimization is considered in the worst-case Schmidt basis.) This function describes the trade-off between yield and distortion. Obviously, $F_{C^{n}}^{\phi }\left( R\right) $ is non-increasing in $R$, and takes a larger value if the protocol is better. We evaluate this quantity for $\{C_{\ast }^{n}\}$ as follows. \begin{eqnarray*} &&1-F_{C_{\ast }^{n}}^{\phi }\left( R\right) =1-\sum_{x}\min \left\{ 1,2^{-n(R-x)}\right\} Q_{C_{\ast }^{n}}^{\phi }(x) \\ &=&\sum_{x:x<R}\left( 1-2^{-n(R-x)}\right) Q_{C_{\ast }^{n}}^{\phi }(x). \end{eqnarray*} The RHS is upper-bounded by $\sum_{x:x<R}Q_{C_{\ast }^{n}}^{\phi }(x)$ and lower-bounded by $\left( 1-2^{-n(R-x)}\right) Q_{C_{\ast }^{n}}^{\phi }(x)$, where $x$ can be any value strictly smaller than $R$. Hence, if $R<H(\mathbf{p}_{\phi })$, letting $x=R-\frac{c}{n}$ such that $Q_{C_{\ast }^{n}}^{\phi }(x)\neq 0$ and using the second equation of (\ref{grep-type}), we have \begin{equation*} \lim_{n\rightarrow \infty }\frac{-1}{n}\log \left( 1-F_{C_{\ast }^{n}}^{\phi }\left( R\right) \right) =\mathrm{D}(R\Vert \mathbf{p}_{\phi }). \end{equation*} The exponents of the failure probability, the strong converse probability, and the total fidelity for the optimal non-universal protocol are found in \cite{Morikoshi}, and we can observe a non-zero gap between the exponents of $\left\{ C_{\ast }^{n}\right\} $ and the optimal non-universal protocol. By contrast, these quantities for the BBPS protocol coincide with those for $\left\{ C_{\ast }^{n}\right\} $. (The proof is straightforward using the classical method of types.)
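For a protocol whose outputs are exact maximally entangled states, as is the case for $C_{\ast }^{n}$, the total fidelity reduces to the classical average $\sum_{x}\min \{1,2^{-n(R-x)}\}Q(x)$ computed above (the fidelity between $\Vert 2^{nx}\rangle $ and $\Vert 2^{ny}\rangle $ with $y\geq x$ being $2^{-n(y-x)}$). A hypothetical sketch with an illustrative yield distribution:

```python
def total_fidelity(Q, n, R):
    """F(R) = E_Q min{1, 2^{-n(R-x)}} for exact maximally entangled
    outputs: for each x the best target rank R' >= R is R' = max(x, R),
    giving fidelity 1 when x >= R and 2^{-n(R-x)} when x < R."""
    return sum(q * min(1.0, 2.0 ** (-n * (R - x))) for x, q in Q.items())

# Toy distribution of yields x, with n = 10 copies.
Q = {0.4: 0.3, 0.5: 0.5, 0.6: 0.2}
```

The monotone decrease of `total_fidelity` in `R` mirrors the trade-off described in the text: demanding a larger Schmidt rank from the same outputs can only cost fidelity.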
This fact may imply that the protocol $\left\{ C_{\ast }^{n}\right\} $ is so well-designed that its performance is comparable with one which uses some information about the input state. However, it might be the case that these quantities are not sensitive to differences in performance. Hence, we also discuss another quantity, the average yield (evaluated in the worst-case Schmidt basis), \begin{equation} \min_{U,V}\sum_{x}xQ_{C^{n}}^{U\otimes V\phi }(x)=\min_{U,V}\mathrm{E}_{Q_{C^{n}}^{U\otimes V\phi }}^{X}X. \label{average-yield} \end{equation} The average yield of the BBPS protocol is of the form \begin{equation*} \mathrm{H}(\mathbf{p}_{\phi })+A\frac{\log n}{n}+\frac{B}{n}+o\left( \frac{1}{n}\right) , \end{equation*} where the coefficients $A$, $B$ and their derivation are described in Appendix \ref{sec:cal-ave-ben}. The average yield of the protocol $\left\{ C_{\ast }^{n}\right\} $ is less than that of the BBPS protocol by $\frac{C}{n}$, where $C$ is calculated in Appendix~\ref{sec:diff-ave-yield}. Hence, this measure is sensitive to differences in performance which are not revealed in the exponent of the failure probability etc. \subsection{Comparison with estimation-based protocols} \label{subsec:comparison-est-based}Most straightforwardly, a universal entanglement concentration is constructed based on state estimation: first, $c_{n}$ copies of $\left\vert \phi \right\rangle $ are used to estimate the Schmidt basis, and second, the BBPS protocol is applied to the remaining $n-c_{n}$ copies of $\left\vert \phi \right\rangle $. The average yield of such a protocol cannot be better than \begin{equation*} \frac{n-c_{n}}{n}\mathrm{H}\left( \mathbf{p}\right) +A\frac{\log (n-c_{n})}{n-c_{n}}+O\left( \frac{1}{n}\right) , \end{equation*} where $c_{n}$ slowly grows as $n$ increases.
Therefore, this estimation-based protocol cannot be better than $\left\{ C_{\ast }^{n}\right\} $, because the average yields of $\left\{ C_{\ast }^{n}\right\} $ and the BBPS protocol are the same except for $O\left( \frac{1}{n}\right) $-terms. One might improve the estimation-based protocol by replacing the BBPS protocol with the non-asymptotically optimal entanglement concentration protocol. However, this improvement is not likely to be effective, because in the qubit case, the yields of these protocols are the same up to the order of $O\left( \frac{\log n}{n}\right) $ (Appendix \ref{sec:cal-ave-opt}, where the $O\left( \frac{1}{n}\right) $-term is also given). Another alternative is to use precise measurements which cause only negligible distortion, so that we can use all the given copies of an unknown state for entanglement concentration. This protocol can be very good, and there might be many other good protocols. As is proven in the next section, however, none of these protocols is better than $\left\{ C_{\ast }^{n}\right\} $, i.e., $\left\{ C_{\ast }^{n}\right\} $ is optimal among all protocols whose outputs are slightly distorted. \section{Optimality of $\{C_{\ast }^{n}\}$} \label{sec:optimality} \subsection{Measures, settings, and summary of results} \label{subsec:whatisproved}The performance of an entanglement concentration has two parts. One is the amount of the yield, and the other is the distortion of the output. The measures of the latter are, as explained in Section\thinspace \ref{sec:definition}, $\epsilon _{C^{n}}^{\phi }$ and $\overline{\epsilon }_{C^{n}}^{\phi }$. Hereafter, the maxima of these quantities over all Schmidt bases ($\max_{U,V}\epsilon _{C^{n}}^{U\otimes V\phi }$ and $\max_{U,V}\overline{\epsilon }_{C^{n}}^{U\otimes V\phi }$) are discussed.
The measures of the yield (\ref{error-prob}), (\ref{average-yield}) discussed in the previous section are essentially of the form \begin{equation} \min_{U,V}\sum_{x}f\left( x\right) Q_{C^{n}}^{U\otimes V\phi }(x)=\min_{U,V}\mathrm{E}_{Q_{C^{n}}^{U\otimes V\phi }}^{X}f\left( X\right) . \label{g-yield} \end{equation} So far, we have considered minimization of the error probability and maximization of the average yield. Hereafter, we use the success probability \begin{equation*} \min_{U,V}\mathrm{E}_{Q_{C^{n}}^{U\otimes V\phi }}^{X}\Theta \left( X-R\right) , \end{equation*} with $\Theta (x)$ denoting the step function, instead of the error probability (\ref{error-prob}). From here to the end, optimization of a yield measure (\ref{g-yield}) means maximization of (\ref{g-yield}). Namely, minimization of (\ref{error-prob}) corresponds to maximization of (\ref{g-yield}) with $f(x)=\Theta (x-R)$. Also, maximization of (\ref{average-yield}) is equivalent to maximization of (\ref{g-yield}) with $f(x)=\frac{x}{\log d}$. These examples are monotone and bounded, i.e., \begin{eqnarray} &&f(x)\geq f(x^{\prime })\geq f(0)=0,\quad (x\geq x^{\prime }\geq 0), \label{cond-f} \\ &&f(\log d)=1, \label{cond-f-2} \end{eqnarray} and \begin{equation} \mbox{continuously differentiable except at finitely many points.} \label{cond-f-3} \end{equation} The conditions (\ref{cond-f}) and (\ref{cond-f-2}) are assumed throughout the paper unless otherwise mentioned. In the following subsections, measures of the form (\ref{g-yield}) are optimized under the restriction on the worst-case distortion $\max_{U,V}\epsilon _{C^{n}}^{U\otimes V\phi }$ or the average distortion $\max_{U,V}\overline{\epsilon }_{C^{n}}^{U\otimes V\phi }$. Also, we consider the optimization (maximization) of measures which vary with both yield and distortion.
Namely, the weighted sum of a yield measure and the average distortion $\overline{\epsilon }_{C^{n}}^{\phi }$, i.e., \begin{equation} \min_{U,V}\mathrm{E}_{Q_{C^{n}}^{U\otimes V\phi }}^{X}f(X)-\lambda \max_{U,V}\overline{\epsilon }_{C^{n}}^{U\otimes V\phi }, \label{g-y+d} \end{equation} and the total fidelity $\max_{U,V}F_{C^{n}}^{U\otimes V\phi }\left( R\right) $ are considered. We prove that the protocol is optimal in the following senses. \begin{enumerate} \item The entropy rate is achieved (Subsection \ref{sec:performance}, Eq.~(\ref{fexp1-1})). \item The non-asymptotic behavior is best among all distortion-free protocols (Subsection \ref{subsec:dist-free}). \item The higher-order asymptotic behavior is best among all protocols which allow small distortions (Subsections \ref{subsec:worst-distortion}, \ref{subsec:average-distortion}). \item In terms of the weighted sum measures (\ref{g-y+d}) and the total fidelity, the non-asymptotic optimality holds (Subsections \ref{subsec:weighted}-\ref{subsec:totalfidelity}). \end{enumerate} The key to the proof of these assertions is Lemma~\ref{lem:k1}, which will be proved in the next section. Due to this lemma, we can focus on the protocols which are modifications of $\{C_{\ast }^{n}\}$ in their classical outputs only. This fact not only simplifies the argument but also assures us that $\{C_{\ast }^{n}\}$ is a very natural protocol. Here, we note that many of our results in this section generalize to the case where the Schmidt basis is unknown and the Schmidt coefficients are known. Such a generalization is possible if the optimization problem can be recast only in terms of a family of quantum states $\left\{ U\otimes V\phi \right\} _{UV}$, where $\left\vert \phi \right\rangle $ is a given input state and $U$ and $V$ run over all of $\mathrm{SU}(d)$. This is trivially the case when we optimize a function of distortion and yield.
This is also the case if conditions on the distortion need to be imposed only on the given input state, and not on all the possible input states. \subsection{The Key Lemma} \label{subsec:keylemmas}\vspace*{-2pt}In this subsection, we prove Lemma \ref{lem:k1}, which is the key to the arguments in the rest of the paper. To make the analysis easier, before the protocol starts, the parties apply $U^{\otimes n}$, $V^{\otimes n}$ at their respective sites, where $U$ and $V$ are chosen randomly according to the Haar measure on $\mathrm{SU}(d)$, and erase the memory of $U,V$. This operation is denoted by $O1$ hereafter. From here to the end of the paper, $C_{\ast }^{n}$ means the composition of $O1$ followed by $C_{\ast }^{n}$. The optimality of the newly defined $\left\{ C_{\ast }^{n}\right\} $ trivially implies the optimality of $\left\{ C_{\ast }^{n}\right\} $ as defined previously, because $O1$ simply randomizes the output and cannot improve the performance; \begin{equation*} O1:\rho \rightarrow \mathrm{E}_{U,V}(U\otimes V)^{\otimes n}\rho (U^{\dagger }\otimes V^{\dagger })^{\otimes n}, \end{equation*} where $\mathrm{E}_{U,V}$ denotes the expectation with respect to the Haar measure on $\mathrm{SU}(d)$. (More explicitly, \begin{equation*} \mathrm{E}_{U,V}f(U,V)=\int f(U,V)\mu (\mathrm{d}U)\mu (\mathrm{d}V), \end{equation*} where $\mu $ is the Haar measure with the convention $\int \mu (\mathrm{d}U)=1$.) Lemma~\ref{l5} implies that $\mathcal{U}_{\mathbf{n},A}\otimes \mathcal{U}_{\mathbf{n},B}$ is an irreducible space of the tensored representation $U^{\otimes n}\otimes V^{\otimes n}$ of $\mathrm{SU}(d)\times \mathrm{SU}(d)$.
Hence, by virtue of Lemmas~\ref{lem:decohere}-\ref{lem:shur}, the average state can be written as \begin{equation} \mathrm{E}_{U,V}(U\otimes V|\phi \rangle \langle \phi |U^{\ast }\otimes V^{\ast })^{\otimes n}=\bigoplus_{\mathbf{n}}a_{\mathbf{n}}^{\phi }\sigma _{\mathbf{n}}^{\phi }, \label{5-5-1} \end{equation} with \begin{equation*} \sigma _{\mathbf{n}}^{\phi }:=\frac{\mathcal{U}_{\mathbf{n},A}\otimes \mathcal{U}_{\mathbf{n},B}\otimes |\mathcal{V}_{\mathbf{n}}\rangle \langle \mathcal{V}_{\mathbf{n}}|}{\dim \left\{ \mathcal{U}_{\mathbf{n},A}\otimes \mathcal{U}_{\mathbf{n},B}\right\} }. \end{equation*} We denote by $O2$ the projection measurement $\{\mathcal{W}_{\mathbf{n}_{A},A}\otimes \mathcal{W}_{\mathbf{n}_{B},B}\}_{\mathbf{n}_{A},\mathbf{n}_{B}}$, which maps the state $\rho $ to the pair \begin{equation*} \left( \mathbf{n}_{A},\,\mathbf{n}_{B},\,\frac{\mathcal{W}_{\mathbf{n}_{A},A}\otimes \mathcal{W}_{\mathbf{n}_{B},B}\rho \mathcal{W}_{\mathbf{n}_{A},A}\otimes \mathcal{W}_{\mathbf{n}_{B},B}}{\mathrm{tr}\mathcal{W}_{\mathbf{n}_{A},A}\otimes \mathcal{W}_{\mathbf{n}_{B},B}\rho }\right) \end{equation*} with probability \begin{equation*} \mathrm{tr}\mathcal{W}_{\mathbf{n}_{A},A}\otimes \mathcal{W}_{\mathbf{n}_{B},B}\rho . \end{equation*} Note that, due to the form of $\sigma _{\mathbf{n}}^{\phi }$, $\mathbf{n}_{A}=\mathbf{n}_{B}:=\mathbf{n}$, so long as the input is many copies of a pure state. Given a pair $\left( \mathbf{n},\sigma _{\mathbf{n}}^{\phi }\right) $ of classical information and a state supported on $\mathcal{W}_{\mathbf{n},A}\otimes \mathcal{W}_{\mathbf{n},B}$, the operation $O3$ outputs $\left( \mathbf{n},\mathrm{tr}_{\mathcal{U}_{\mathbf{n},A}\otimes \mathcal{U}_{\mathbf{n},B}}\sigma _{\mathbf{n}}^{\phi }\right) $. Denoting the composition of an operation $A$ followed by an operation $B$ by $B\circ A$, $C_{\ast }^{n}$ can be written as $O3\circ O2\circ O1$, essentially.
(The mapping from $|\mathcal{V}_{\mathbf{n}}\rangle $ to $\Vert \dim \mathcal{V}_{\mathbf{n}}\rangle $ is needed only for the sake of formality.) Here, in defining $B\circ A$, if $A$'s output is a pair $(\mathbf{n},\rho _{\mathbf{n}})$ of classical information and a quantum state, we always consider the correspondence \begin{equation} (\mathbf{n},\rho _{\mathbf{n}})\leftrightarrow \left\vert \mathbf{n}\right\rangle \left\langle \mathbf{n}\right\vert \otimes U_{\mathbf{n}}\rho _{\mathbf{n}}U_{\mathbf{n}}^{\dagger }, \label{correspondence} \end{equation} where $\{\left\vert \mathbf{n}\right\rangle \}$ is an orthonormal basis, and $U_{\mathbf{n}}$ is a local isometry into an appropriately defined Hilbert space. Here, 'local' is in terms of the A-B partition. In terms of this convention, the definition of $O3$ can be rewritten as \begin{equation} O3:\left\vert \mathbf{n}\right\rangle \left\langle \mathbf{n}\right\vert \otimes U_{\mathbf{n}}\sigma _{\mathbf{n}}^{\phi }U_{\mathbf{n}}^{\dagger }\rightarrow \left\vert \mathbf{n}\right\rangle \left\langle \mathbf{n}\right\vert \otimes U_{\mathbf{n}}^{\prime }\mathrm{tr}_{\mathcal{U}_{\mathbf{n},A}\otimes \mathcal{U}_{\mathbf{n},B}}\sigma _{\mathbf{n}}^{\phi }U_{\mathbf{n}}^{\prime \dagger }, \label{defUn} \end{equation} using local isometries $U_{\mathbf{n}}$ and $U_{\mathbf{n}}^{\prime }$. (The domains of $U_{\mathbf{n}}$ and $U_{\mathbf{n}}^{\prime }$ are $\mathcal{W}_{\mathbf{n},A}\otimes \mathcal{W}_{\mathbf{n},B}$ and $\mathcal{V}_{\mathbf{n},A}\otimes \mathcal{V}_{\mathbf{n},B}$, respectively.)
Recall that all the measures listed in the previous section are invariant under local unitary operations on the input, i.e., the measure $f_{n}\left( \rho ,\{C^{n}\}\right) $ satisfies \begin{eqnarray} f_{n}\left( \rho ,\{C^{n}\}\right) &=&f_{n}\left( U\otimes V\rho U^{\dagger }\otimes V^{\dagger },\{C^{n}\}\right) , \label{inv-measure} \\ \forall U,\forall V &\in &\mathrm{SU}(d), \notag \end{eqnarray} and is affine with respect to $\rho $, \begin{eqnarray} &&f_{n}\left( p\rho +(1-p)\sigma ,\{C^{n}\}\right) \label{affine-measure} \\ &=&pf_{n}\left( \rho ,\{C^{n}\}\right) +(1-p)f_{n}\left( \sigma ,\{C^{n}\}\right) . \notag \end{eqnarray} Recall also that the worst-case/average distortions are affine. Hereafter, the worst-case/average distortion is always evaluated in the worst-case Schmidt basis, so that these measures satisfy (\ref{inv-measure}). \begin{lem} For any given protocol $\{C^{n}\}$, we can find a protocol such that: (i) the protocol is of the form $\{B^{n}\circ C_{\ast }^{n}\}$, where $B^{n}$ is an LOCC operation; (ii) any performance measure satisfying (\ref{inv-measure}) and (\ref{affine-measure}) takes the same value as for the protocol $\{C^{n}\}$, \begin{equation*} f(\rho ,\left\{ B^{n}\circ C_{\ast }^{n}\right\} )=f(\rho ,\left\{ C^{n}\right\} ). \end{equation*} \label{optimal} \end{lem} \begin{pf} Due to (\ref{inv-measure}) and (\ref{affine-measure}), the operation $O1$ does not decrease the measure of the performance, because \begin{eqnarray*} &&f_{n}\left( \mathrm{E}_{U,V}U\otimes V\rho U^{\dagger }\otimes V^{\dagger },\{C^{n}\}\right) \\ &=&\mathrm{E}_{U,V}\,f_{n}\left( U\otimes V\rho U^{\dagger }\otimes V^{\dagger },\{C^{n}\}\right) \\ &=&\mathrm{E}_{U,V}\,f_{n}\left( \rho ,\{C^{n}\}\right) =\,f_{n}\left( \rho ,\{C^{n}\}\right) . \end{eqnarray*} Hence, $\{C^{n}\circ O1\}$ is the same as $\{C^{n}\}$ in performance. After the operation $O1$, the state is block diagonal with respect to the subspaces $\{\mathcal{W}_{\mathbf{n},A}\otimes \mathcal{W}_{\mathbf{n},B}\}$.
Therefore, if we use the correspondence (\ref{correspondence}), the state is unchanged by $O2$ (up to local isometry). More explicitly, let $U_{\mathbf{n}}$ be the local isometry in (\ref{defUn}), and $C^{\prime n}=C^{n}\circ U_{\mathbf{n}}^{\dagger }$. Then, we have \begin{equation*} f(\rho ,\left\{ C^{\prime n}\circ O2\circ O1\right\} )=f(\rho ,\left\{ C^{n}\circ O1\right\} ). \end{equation*} Observe also that, after the operation $O1$, the parts of the state which are supported on $\mathcal{U}_{\mathbf{n},A}\otimes \mathcal{U}_{\mathbf{n},B}$ are tensor product states. Hence, there is an operation $B^{n}$ such that \begin{equation*} f(\rho ,\left\{ B^{n}\circ O3\circ O2\circ O1\right\} )=f(\rho ,\left\{ C^{\prime n}\circ O2\circ O1\right\} ), \end{equation*} because tensor product states can be reproduced locally whenever they are needed. More explicitly, $B^{n}=C^{\prime n}\circ B^{\prime n}$, where $B^{\prime n}$ is \begin{equation*} B^{\prime n}:\left\vert \mathbf{n}\right\rangle \left\langle \mathbf{n}\right\vert \otimes U_{\mathbf{n}}^{\prime }\rho U_{\mathbf{n}}^{\prime \dagger }\rightarrow \left\vert \mathbf{n}\right\rangle \left\langle \mathbf{n}\right\vert \otimes U_{\mathbf{n}}\left( \mathcal{U}_{\mathbf{n},A}\otimes \mathcal{U}_{\mathbf{n},B}\otimes \rho \right) U_{\mathbf{n}}^{\dagger }, \end{equation*} where $U_{\mathbf{n}}$ and $U_{\mathbf{n}}^{\prime }$ are the local isometries in (\ref{defUn}). After all, $f(\rho ,\left\{ B^{n}\circ C_{\ast }^{n}\right\} )=f(\rho ,\left\{ C^{n}\right\} )$, and the lemma is proved. \end{pf} In the postprocessing $B^{n}$, a classical output $x$ will be changed to $x+\Delta $ with probability $Q^{n}\left( x+\Delta |x\right) $, accompanied by some SLOCC operations on the quantum output.
In the following lemma, for a given $Q^{n}(y|x)$, $\widetilde{Q^{n}}\left( y|x\right) $ is the transition matrix such that $\widetilde{Q^{n}}\left( y|x\right) =Q^{n}(y|x)$ for $y>x$, $\widetilde{Q^{n}}\left( y|x\right) =0$ for $y<x$, and $\widetilde{Q^{n}}\left( x|x\right) :=Q^{n}(x|x)+\sum_{y<x}Q^{n}(y|x)$. \begin{lem} \label{lem:k1} In optimizing (maximizing) (i)-(iii), we can restrict ourselves to protocols satisfying (a)-(c). \begin{itemize} \item[(i)] (\ref{g-yield}) under the constraint on the worst-case/average distortion \item[(ii)] the weighted sum (\ref{g-y+d}) \item[(iii)] the total fidelity (\ref{def:totalfidelity}). \end{itemize} \begin{itemize} \item[(a)] The protocol is of the form $\left\{ B^{n}\circ C_{\ast }^{n}\right\} $. \item[(b)] In $B^{n}$, the corresponding $Q^{n}(y|x)$ satisfies $Q^{n}\left( y|x\right) =0$ for $y<x$. \item[(c)] $B^{n}$ does not change the quantum output of $C_{\ast }^{n}$. \end{itemize} \end{lem} \begin{pf} The condition (a) follows from Lemma \ref{optimal} for the worst-case/average distortion, (\ref{g-yield}), and the total fidelity (\ref{def:totalfidelity}), because they satisfy (\ref{inv-measure}) and (\ref{affine-measure}). Since $f(x)$ is monotone increasing, \begin{equation*} \sum_{x,y}f(y)Q^{n}(y|x)Q_{C_{\ast }^{n}}^{\phi }(x)\leq \sum_{x,y}f(y)\widetilde{Q^{n}}(y|x)Q_{C_{\ast }^{n}}^{\phi }(x), \end{equation*} where $\widetilde{Q^{n}}\left( y|x\right) $ is the transition matrix defined just before the lemma. Hence, $\widetilde{Q^{n}}\left( y|x\right) $ improves on $Q^{n}(y|x)$ in the average yield (\ref{g-yield}), while the worst-case/average distortion is unchanged, as is proved later. Therefore, (b) applies to (i) and (ii). To go further, we have to find the optimal state transition made by the postprocessing.
When the postprocessing $B^{n}$ changes the classical output $x$ to $y$, the corresponding quantum output $\rho _{B^{n}}^{\phi }(y|x)$ which minimizes the distortion, i.e., maximizes the fidelity to $\Vert 2^{ny}\rangle $, is
\begin{equation}
\rho _{B^{n}}^{\phi }(y|x)=\left\{
\begin{array}{cc}
\Vert 2^{nx}\rangle \langle 2^{nx}|| & (y>x), \\
\Vert 2^{ny}\rangle \langle 2^{ny}|| & (y\leq x),
\end{array}
\right.  \label{state-opt}
\end{equation}
for the reasons stated shortly. In the case $y\leq x$, LOCC can change the output of $C_{\ast }^{n}$, $\Vert 2^{nx}\rangle $, to $\Vert 2^{ny}\rangle $ perfectly and deterministically. On the other hand, in the case $y>x$, the monotonicity of the Schmidt rank under SLOCC implies that $\Vert 2^{nx}\rangle $ is the best approximation to $\Vert 2^{ny}\rangle $ among all the states which can be reached from $\Vert 2^{nx}\rangle $ with non-zero probability. This transition causes a distortion of $1-2^{-n(y-x)}$. From (\ref{state-opt}), it is easily seen that the worst-case/average distortion of $\widetilde{Q^{n}}(y|x)$ equals that of $Q^{n}(y|x)$, and that the condition (c) applies to (i) and (ii). It remains to prove (b) and (c) for (iii). Observe that the total fidelity (\ref{def:totalfidelity}) does not depend on the classical output of the protocol. Therefore, condition (b) is not a restriction in the optimization, and we only need to prove (c). By definition,
\begin{equation*}
F_{C^{n}}^{\phi }\left( R\right) =\sum_{x,y}\sum_{z:z\geq R}Q^{n}(y|x)Q_{C_{\ast }^{n}}^{\phi }(x)\langle 2^{nz}||\rho _{B^{n}}^{\phi }(y|x)\Vert 2^{nz}\rangle .
\end{equation*}
In the case $x\geq R$, $\rho _{B^{n}}^{\phi }(y|x)=\Vert 2^{nx}\rangle \langle 2^{nx}||$ achieves
\begin{equation*}
\sum_{y}\sum_{z:z\geq R}Q^{n}(y|x)\langle 2^{nz}||\rho _{B^{n}}^{\phi }(y|x)\Vert 2^{nz}\rangle =1,
\end{equation*}
which is maximal.
In the case $x<R$, for any $z\geq R\,(>x)$, the maximum of $\langle 2^{nz}||\rho _{B^{n}}^{\phi }(y|x)\Vert 2^{nz}\rangle $ is achieved by $\rho _{B^{n}}^{\phi }(y|x)=\Vert 2^{nx}\rangle \langle 2^{nx}||$, because the monotonicity of the Schmidt rank under SLOCC implies that $\Vert 2^{nx}\rangle $ is the best approximation to $\Vert 2^{nz}\rangle $ among all the states which can be reached from $\Vert 2^{nx}\rangle $ with non-zero probability. Therefore, the optimal output state should be as described in (c).
\end{pf}
Now, the protocol of interest is very much restricted. We modify the classical output of $C_{\ast }^{n}$ according to the transition probability $Q^{n}(y|x)$, while its quantum output is untouched. Note that $Q^{n}(y|x)$ is non-zero only if $y\geq x$. In particular, a transition to $y$ strictly larger than $x$ means that the protocol claims the yield $y$ while in fact its yield is $x<y$; in other words, an excessive claim on its yield. The main part of our effort in the following is to suppress such excessive claims, i.e., $Q^{n}(y|x)$ for $y>x$, by setting an appropriate measure or constraint. Note that the mathematical treatment is much simplified now, since we only have to optimize the transition probability $Q^{n}(y|x)$, with the condition that the distortion $1-2^{-n(y-x)}$ occurs only if $y\geq x$. Observe that in the proof of these lemmas, we have used the uncertainty about the Schmidt basis. This assumption is needed to justify the condition (\ref{inv-measure}). However, the uncertainty about the Schmidt coefficients has played no role. Therefore, Lemmas~\ref{optimal}-\ref{lem:k1} hold true even in the case where the Schmidt coefficients are known. Hereafter, maximization/minimization over local unitaries will often be omitted, because the protocols of our interest are invariant under local unitaries.
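The folding $Q^{n}\mapsto \widetilde{Q^{n}}$ of Lemma~\ref{lem:k1} and its effect on the average yield and the worst-case distortion can be checked on a toy example. The following sketch is purely illustrative (the three-valued classical output and all numbers are hypothetical); it uses the distortion formula $1-2^{-n(y-x)}$ for upward jumps derived above:

```python
import math

def fold(Q):
    """Build Qt with Qt[x][y] = Q[x][y] for y > x, 0 for y < x, and
    Qt[x][x] = Q[x][x] + sum_{y<x} Q[x][y]: downward transitions are
    folded onto the diagonal, as in the definition of Q-tilde."""
    m = len(Q)
    Qt = [[0.0] * m for _ in range(m)]
    for x in range(m):
        for y in range(x + 1, m):
            Qt[x][y] = Q[x][y]
        Qt[x][x] = Q[x][x] + sum(Q[x][y] for y in range(x))
    return Qt

def avg_yield(p, Q, f):
    # E f(Y) when X ~ p and Y ~ Q(.|X); rows of Q are indexed by X
    m = len(p)
    return sum(p[x] * Q[x][y] * f(y) for x in range(m) for y in range(m))

def worst_distortion(Q, n):
    # only upward transitions y > x cause a distortion 1 - 2^{-n(y-x)}
    m = len(Q)
    ds = [1.0 - 2.0 ** (-n * (y - x))
          for x in range(m) for y in range(m) if y > x and Q[x][y] > 0]
    return max(ds, default=0.0)

# toy transition matrix Q[x][y] = Q(y|x) and output distribution p of C_*^n
Q = [[0.5, 0.5, 0.0],
     [0.3, 0.4, 0.3],
     [0.2, 0.1, 0.7]]
p = [0.2, 0.5, 0.3]
Qt = fold(Q)
```

Folding keeps each row stochastic, never decreases the average yield for a monotone increasing $f$, and leaves the worst-case distortion unchanged, since it touches only the distortion-free downward transitions.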
\subsection{Distortion-free protocols}
\label{subsec:dist-free}
\begin{thm}
$\{C_{\ast }^{n}\}$ achieves the optimal (maximal) value of (\ref{g-yield}) over all universal distortion-free concentrations, for all finite $n$, any input state $\left\vert \phi \right\rangle $, and any threshold $R$. Here, $f$ need only be monotone increasing; it need not be bounded or continuous. \label{opt1}
\end{thm}
\begin{pf}
Lemma~\ref{lem:k1} applies to this case, since the distortion-free condition reads, using the invariant measure of distortion, $\max_{U,V}\epsilon _{C^{n}}^{U\otimes V\phi }=0$. To increase the value of (\ref{g-yield}), $Q^{n}\left( x+\Delta |x\right) $ would have to be non-zero for some $x$, $\Delta $ with $\Delta >0$, which causes non-zero distortion. Hence, it is impossible to improve the yield measure (\ref{g-yield}) by postprocessing.
\end{pf}
Observe that the proof also applies to the case where the Schmidt coefficients are known, since the condition $\max_{U,V}\epsilon _{C^{n}}^{U\otimes V\phi }=0$ needs to be imposed only on the input state.
\subsection{Constraints on the worst-case distortion}
\label{subsec:worst-distortion}In this subsection, we discuss the higher-order asymptotic optimality of $\{C_{\ast }^{n}\}$ in terms of the average yield (\ref{g-yield}) under the constraint on the worst-case distortion,
\begin{eqnarray*}
&&\max_{U,V}\epsilon _{C^{n}}^{U\otimes V\phi } \\
&=&\max_{\Delta :\exists x,\,Q^{n}(x+\Delta |x)\neq 0}\left( 1-2^{-n\Delta }\right) \\
&\leq &r_{n}<1,
\end{eqnarray*}
which implies
\begin{equation}
Q^{n}(x+\Delta |x)=0,\quad \Delta \geq \frac{-\log (1-r_{n})}{n}.
\label{improve-upper-2}
\end{equation}
This means that the magnitude of the improvement in the yield is uniformly upper-bounded by $\frac{-\log (1-r_{n})}{n}$. Ineq.~(\ref{improve-upper-2}) is the key to the rest of the argument in this subsection.
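The equivalence behind (\ref{improve-upper-2}) is elementary: $1-2^{-n\Delta }\leq r_{n}$ exactly when $\Delta \leq \frac{-\log (1-r_{n})}{n}$, with base-$2$ logarithms as throughout. A minimal numerical sanity check (the values of $n$ and $r_{n}$ below are arbitrary illustrations):

```python
import math

def distortion(n, delta):
    # distortion caused by an upward jump of size delta: 1 - 2^{-n*delta}
    return 1.0 - 2.0 ** (-n * delta)

def max_allowed_jump(n, r):
    # jumps strictly above this size violate the worst-case bound r
    return -math.log2(1.0 - r) / n

n, r = 50, 0.3
t = max_allowed_jump(n, r)
# at the threshold the distortion equals r exactly; just below it stays
# within the bound, just above it exceeds the bound
```

So the worst-case constraint translates, with no loss, into the support restriction (\ref{improve-upper-2}) on $Q^{n}$.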
In discussing the average yield (\ref{g-yield}), we assume $f(x)$ is continuously differentiable around $x=\mathrm{H}\left( \mathbf{p}_{\phi }\right) $. In addition, we first assume $f^{\prime }\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) >0$. After that, we study the case where $f^{\prime }(x)=0$ in a neighborhood of $x=\mathrm{H}\left( \mathbf{p}_{\phi }\right) $. Typical examples of the former and the latter are $f(x)=\frac{x}{\log d}$ and $f(x)=\Theta (x-R)$, respectively. Note that the argument in this section holds true also for the cases where the Schmidt coefficients are known. This is because the constraint $\max_{U,V}\epsilon _{C^{n}}^{U\otimes V\phi }\leq r_{n}$ needs to be imposed only on the given input state, and not on all states.
\begin{thm}
\label{th:opt-g-yield-worst}Suppose that $f$ is continuously differentiable in a region $(R_{1},R_{2})$ with $R_{1}<\mathrm{H}\left( \mathbf{p}_{\phi }\right) <R_{2}$, that $r_{n}$ is smaller than $1-\delta $ with $\delta $ a positive constant, and that $r_{n}$ is not exponentially small. Then,
\begin{itemize}
\item[(i)] $\left\{ C_{\ast }^{n}\right\} $ is optimal up to the order slightly larger than $O\left( \frac{r_{n}}{n}\right) $; that is, for any protocol $\{C^{n}\}$,
\begin{equation*}
\mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}f(X)\leq \mathrm{E}_{Q_{C_{\ast }^{n}}^{\phi }}^{X}f(X)+O\left( \frac{r_{n}}{n}\right) .
\end{equation*}
\item[(ii)] if $f^{\prime }\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) >0$, there is a protocol $\{C^{n}\}$ which is better than $\left\{ C_{\ast }^{n}\right\} $ by a magnitude of $O\left( \frac{r_{n}}{n}\right) $ for an input $\left\vert \psi \right\rangle $; that is,
\begin{equation*}
\mathrm{E}_{Q_{C^{n}}^{\psi }}^{X}f(X)\geq \mathrm{E}_{Q_{C_{\ast }^{n}}^{\psi }}^{X}f(X)+O\left( \frac{r_{n}}{n}\right) .
\end{equation*}
\end{itemize}
\end{thm}
Applied to $f(x)=\frac{x}{\log d}$, (i) and (ii) imply that with the constraint $\max_{U,V}\epsilon _{C^{n}}^{U\otimes V\phi }\rightarrow 0$, $\left\{ C_{\ast }^{n}\right\} $ is optimal up to $O\left( \frac{1}{n}\right) $-terms, and not optimal in the order smaller than that. Hence, the coefficients computed in Appendix\thinspace \ref{sec:cal-ave-opt} are optimal.
\begin{pf}
(i) Obviously, the optimal protocol $\{C^{n}\}$ is given by
\begin{equation}
Q^{n}(x+\Delta ^{\prime }|x)=1,\quad \Delta ^{\prime }=\left\lfloor \frac{-\log (1-r_{n})}{n}\right\rfloor .  \label{opt-worst-dist}
\end{equation}
In the region $(R_{1},R_{2})$, with $c:=\max_{x:R_{1}\leq x\leq R_{2}}f^{^{\prime }}\left( x\right) $,
\begin{eqnarray*}
f\left( x+\Delta ^{\prime }\right) &\leq &f\left( x\right) +c\Delta ^{\prime } \\
&\leq &f\left( x\right) +c\frac{-\log (1-r_{n})}{n}
\end{eqnarray*}
holds. Since the function $-\log (1-x)$ is monotone increasing and convex, and vanishes at $x=0$, if $r_{n}<1-\delta $ the chord bound $-\log (1-r_{n})\leq \frac{-\log \delta }{1-\delta }r_{n}$ gives
\begin{equation*}
f\left( x+\Delta ^{\prime }\right) \leq f\left( x\right) +c\frac{-\log \delta }{1-\delta }\frac{r_{n}}{n}.
\end{equation*}
Averaging both sides over $x$ yields
\begin{eqnarray*}
\mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}f(X)\leq \mathrm{E}_{Q_{C_{\ast }^{n}}^{\phi }}^{X}f(X)+c\frac{-\log \delta }{1-\delta }\frac{r_{n}}{n}+O(2^{-nD}) &&, \\
\quad \exists D>0 &&,
\end{eqnarray*}
since the sum over the complement of $(R_{1},R_{2})$ is exponentially small due to the third equation of (\ref{grep-type}). This implies the optimality of our protocol.
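The chord bound used in part (i) can be checked numerically: $-\log _{2}(1-x)$ is convex and vanishes at $x=0$, so on $[0,1-\delta ]$ it lies below the chord of slope $\frac{-\log _{2}\delta }{1-\delta }$. A quick sketch with the illustrative choice $\delta =0.1$:

```python
import math

delta = 0.1
# chord slope over [0, 1 - delta]: (g(1-delta) - g(0)) / (1 - delta)
slope = -math.log2(delta) / (1.0 - delta)

def g(r):
    return -math.log2(1.0 - r)

# convexity: g(r) <= slope * r on the whole interval [0, 1 - delta],
# with equality at both endpoints
grid = [i * (1.0 - delta) / 1000 for i in range(1001)]
violations = [r for r in grid if g(r) > slope * r + 1e-12]
```

The list of violations is empty, and at $r=1-\delta $ the bound is tight.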
(ii) By the mean value theorem,
\begin{eqnarray*}
f\left( x+\frac{-\log (1-r_{n})}{n}\right) &>&f\left( x\right) +c^{\prime }\frac{-\log (1-r_{n})}{n},\quad \exists c^{\prime }>0, \\
&>&f\left( x\right) +(-c^{\prime }\log \delta )\frac{r_{n}}{n},
\end{eqnarray*}
holds in a neighborhood of $x=\mathrm{H}\left( \mathbf{p}_{\phi }\right) $. Hence, letting $\{C^{n}\}$ be the protocol corresponding to (\ref{opt-worst-dist}), we have
\begin{eqnarray*}
\mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}f(X)\geq \mathrm{E}_{Q_{C_{\ast }^{n}}^{\phi }}^{X}f(X)+(-c^{\prime }\log \delta )\frac{r_{n}}{n}-O(2^{-nD}), && \\
\quad \exists D>0, &&
\end{eqnarray*}
proving the achievability.
\end{pf}
In the case of $f(x)=\Theta (x-R)$, which is flat around $x=\mathrm{H}\left( \mathbf{p}_{\phi }\right) $, (ii) of this theorem does not apply, and as is shown below, the upper bound to the average yield suggested by (i) is not tight at all.
\begin{thm}
\label{th:opt-g-yield-worst-2} Suppose $f^{^{\prime }}\left( x\right) =0\,(R_{1}<x<R_{2})$, $f(R_{1}-0)\neq f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) $, $f(R_{2}+0)\neq f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) $, and $\varlimsup_{n\rightarrow \infty }r_{n}<1$. If $\mathrm{H}\left( \mathbf{p}_{\phi }\right) >R_{1}$,
\begin{eqnarray*}
&&\varlimsup_{n\rightarrow \infty }\frac{-1}{n}\log \sum_{x:x\leq R_{1}}\left\{ f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) -f(x)\right\} Q_{C^{n}}^{\phi }\left( x\right) \\
&\leq &\mathrm{D}\left( R_{1}||\mathbf{p}_{\phi }\right) ,
\end{eqnarray*}
holds. If $\mathrm{H}\left( \mathbf{p}_{\phi }\right) <R_{2}$,
\begin{eqnarray*}
&&\varliminf_{n\rightarrow \infty }\frac{-1}{n}\log \sum_{x:x\geq R_{2}}\left\{ f(x)-f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) \right\} Q_{C^{n}}^{\phi }\left( x\right) \\
&\geq &\mathrm{D}\left( R_{2}||\mathbf{p}_{\phi }\right) ,
\end{eqnarray*}
and the equality is achieved by $\left\{ C_{\ast }^{n}\right\} $.
\end{thm}
This theorem intuitively means that, if $f(x)$ is flat in a neighborhood of $x=\mathrm{H}\left( \mathbf{p}_{\phi }\right) $, then for the optimal protocol the quantity (\ref{g-yield}) is approximately of the form
\begin{equation*}
f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) -A2^{-n\mathrm{D}\left( R_{1}||\mathbf{p}_{\phi }\right) }+B2^{-n\mathrm{D}\left( R_{2}||\mathbf{p}_{\phi }\right) }.
\end{equation*}
Applied to $f(x)=\Theta (x-R)$, the theorem implies the optimality of (\ref{fexp1-1}) and (\ref{fexp-2}) under the constraint $\varlimsup_{n\rightarrow \infty }\max_{U,V}\epsilon _{C^{n}}^{U\otimes V\phi }<1$.
\begin{pf}
Suppose $\mathrm{H}\left( \mathbf{p}_{\phi }\right) >R_{1}$. For any $R<R_{1}$,
\begin{eqnarray*}
&&\sum_{x:x\leq R_{1}}\left\{ f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) -f(x)\right\} Q_{C^{n}}^{\phi }\left( x\right) \\
&\geq &\sum_{x:x\leq R}\left\{ f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) -f(x)\right\} Q_{C^{n}}^{\phi }\left( x\right) \\
&\geq &\left\{ f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) -f(R)\right\} \sum_{x:x\leq R}Q_{C^{n}}^{\phi }\left( x\right) ,
\end{eqnarray*}
where the second inequality is due to the monotonicity of $f$. On the other hand, (\ref{improve-upper-2}) implies
\begin{equation*}
\sum_{x:x\leq R}Q_{C^{n}}^{\phi }\left( x\right) \geq \sum_{x:x\leq R-\frac{-\log (1-r_{n})}{n}}Q_{C_{\ast }^{n}}^{\phi }\left( x\right) .
\end{equation*}
Combining these inequalities with (\ref{fexp1-1}) leads to
\begin{eqnarray*}
&&\varlimsup_{n\rightarrow \infty }\frac{-1}{n}\log \sum_{x:x\leq R_{1}}\left\{ f\left( \mathrm{H}(\mathbf{p}_{\phi })\right) -f(x)\right\} Q_{C^{n}}^{\phi }(x) \\
&\leq &\varlimsup_{n\rightarrow \infty }\frac{-1}{n}\left\{
\begin{array}{c}
\log \sum_{x:x\leq R-\frac{-\log (1-r_{n})}{n}}Q_{C_{\ast }^{n}}^{\phi }\left( x\right) \\
+\log \left\{ f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) -f(R)\right\}
\end{array}
\right\} \\
&\leq &\mathrm{D}\left( R||\mathbf{p}_{\phi }\right) ,
\end{eqnarray*}
which, letting $R\rightarrow R_{1}$, leads to the first inequality. On the other hand, in the case $\mathrm{H}\left( \mathbf{p}_{\phi }\right) <R_{2}$, the monotonicity of $f$ and (\ref{improve-upper-2}) imply
\begin{eqnarray*}
&&\sum_{x:x\geq R_{2}}\left\{ f(x)-f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) \right\} Q_{C^{n}}^{\phi }\left( x\right) \\
&\leq &\left\{ f(\log d)-f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) \right\} \sum_{x:x\geq R_{2}}Q_{C^{n}}^{\phi }\left( x\right) \\
&\leq &\left\{ f(\log d)-f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) \right\} \sum_{x:x\geq R_{2}-\frac{-\log (1-r_{n})}{n}}Q_{C_{\ast }^{n}}^{\phi }\left( x\right) .
\end{eqnarray*}
Combining this with (\ref{fexp-2}) leads to the second inequality. The achievability is proved as follows. Suppose $\mathrm{H}\left( \mathbf{p}_{\phi }\right) >R_{1}$. For $x$ smaller than $R_{1}$,
\begin{eqnarray*}
f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) -f(x) &\leq &f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) \\
&=&f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) (1-\Theta (x-R_{1})).
\end{eqnarray*}
Hence, the exponent is lower-bounded by
\begin{eqnarray*}
&&\varlimsup_{n\rightarrow \infty }\frac{-1}{n}\log \mathrm{E}_{Q_{C_{\ast }^{n}}^{\phi }}^{X}\left\{ 1-\Theta (X-R_{1})\right\} \\
&&+\varlimsup_{n\rightarrow \infty }\frac{-1}{n}\log f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) \\
&=&\mathrm{D}\left( R_{1}||\mathbf{p}_{\phi }\right) ,
\end{eqnarray*}
which means the first inequality is achieved. Suppose $\mathrm{H}\left( \mathbf{p}_{\phi }\right) <R_{2}$. For $x$ larger than $R_{2}$, we have
\begin{eqnarray*}
&&f(x)-f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) \\
&\geq &\left\{ f(R_{0})-f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) \right\} \Theta (x-R_{0}),
\end{eqnarray*}
where $R_{0}$ is an arbitrary constant with $R_{0}>R_{2}$. Hence, the exponent is upper-bounded by
\begin{eqnarray*}
&&\varlimsup_{n\rightarrow \infty }\frac{-1}{n}\log \mathrm{E}_{Q_{C_{\ast }^{n}}^{\phi }}^{X}\Theta (X-R_{0}) \\
&&+\varlimsup_{n\rightarrow \infty }\frac{-1}{n}\log \left\{ f(R_{0})-f\left( \mathrm{H}\left( \mathbf{p}_{\phi }\right) \right) \right\} \\
&=&\mathrm{D}\left( R_{0}||\mathbf{p}_{\phi }\right) .
\end{eqnarray*}
Letting $R_{0}\rightarrow R_{2}$, we have the achievability of the second inequality.
\end{pf}
\subsection{Constraints on the average distortion}
\label{subsec:average-distortion}
In this subsection, we discuss the higher-order asymptotic optimality of $\{C_{\ast }^{n}\}$ in terms of the generalized average yield (\ref{g-yield}) under the constraint on the average distortion,
\begin{eqnarray*}
&&\max_{U,V}\overline{\epsilon }_{C^{n}}^{U\otimes V\phi } \\
&=&\sum_{\Delta }\left( 1-2^{-n\Delta }\right) \mathrm{E}_{Q_{C_{\ast }^{n}}^{\phi }}^{X}Q^{n}(X+\Delta |X) \\
&\leq &r_{n}<1.
\end{eqnarray*}
Denote the probability that an improvement by the amount $\Delta $ occurs by
\begin{equation*}
\Pr_{\phi }\left( \Delta \right) :=\mathrm{E}_{Q_{C_{\ast }^{n}}^{\phi }}^{X}Q^{n}(X+\Delta |X).
\end{equation*}
Observe that
\begin{eqnarray*}
\overline{\epsilon }_{C^{n}}^{\phi } &=&\sum_{\Delta }\left( 1-2^{-n\Delta }\right) \Pr_{\phi }\left( \Delta \right) \\
&\geq &\left( 1-2^{-c}\right) \Pr_{\phi }\left\{ \Delta \geq \frac{c}{n}\right\} ,
\end{eqnarray*}
which implies
\begin{equation}
\Pr_{\phi }\left\{ \Delta \geq \frac{c}{n}\right\} \leq \frac{r_{n}}{1-2^{-c}}.  \label{improve-upper}
\end{equation}
Hence, the magnitude of the improvement is upper-bounded only in an average sense, in contrast with (\ref{improve-upper-2}), which implies an upper bound uniform in $x$. Suppose that $f$ is continuously differentiable over the whole region $(0,\log d)$. Then, the improvement $x\rightarrow x+\Delta $ causes a distortion of
\begin{eqnarray*}
1-2^{-n\Delta } &\geq &\frac{1-d^{-n}}{\log d}\Delta \\
&\geq &\frac{1-d^{-n}}{\log d}\frac{1}{c}\left( f\left( x\right) -f\left( x-\Delta \right) \right) ,
\end{eqnarray*}
where $c=\max_{x:0\leq x\leq \log d}f^{\prime }\left( x\right) $. Taking the average of both sides,
\begin{equation*}
\overline{\epsilon }_{C^{n}}^{\phi }\geq \frac{1-d^{-n}}{\log d}\frac{1}{c}\left( \mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}f\left( X\right) -\mathrm{E}_{Q_{C_{\ast }^{n}}^{\phi }}^{X}f\left( X\right) \right) ,
\end{equation*}
or,
\begin{equation*}
\mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}f\left( X\right) \leq \mathrm{E}_{Q_{C_{\ast }^{n}}^{\phi }}^{X}f\left( X\right) +\frac{c\log d}{1-d^{-n}}\overline{\epsilon }_{C^{n}}^{\phi }.
\end{equation*}
On the other hand, let the protocol $\{C^{n}\}$ be the one corresponding to
\begin{equation*}
Q^{n}\left( \log d|x\right) =r_{n},\quad \forall x.
\end{equation*}
Then, we have
\begin{eqnarray*}
\mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}f\left( X\right) &=&\mathrm{E}_{Q_{C_{\ast }^{n}}^{\phi }}^{X}f\left( X\right) +r_{n}\mathrm{E}_{Q_{C_{\ast }^{n}}^{\phi }}^{X}\left\{ 1-f(X)\right\} \\
&\geq &\mathrm{E}_{Q_{C_{\ast }^{n}}^{\phi }}^{X}f\left( X\right) \\
&&+r_{n}\left\{ 1-f(\mathrm{H}(\mathbf{p}_{\phi })+c)\right\} \left( 1-2^{-n\mathrm{D}(\mathrm{H}(\mathbf{p}_{\phi })+c||\mathbf{p}_{\phi })}\right) ,\quad \forall c>0,
\end{eqnarray*}
while the average distortion of $\{C^{n}\}$ is at most $r_{n}$. Now, we extend these arguments to the case where finitely many points of discontinuity exist. First, in the proof of the upper bound, it is sufficient for $f$ to be continuously differentiable in a neighborhood of $x=\mathrm{H}\left( \mathbf{p}_{\phi }\right) $, if the exponentially small terms are neglected. Second, the evaluation of the performance of the protocol constructed above does not rely on the differentiability of $f$. Therefore, we have the following theorem.
\begin{thm}
\label{opt-g-yield}
\begin{itemize}
\item[(i)] Suppose that $f$ is continuously differentiable in a neighborhood of $x=\mathrm{H}\left( \mathbf{p}_{\phi }\right) $, and that $r_{n}$ is not exponentially small. Then, if $\overline{\epsilon }_{C^{n}}^{\phi }\leq r_{n}$, $\{C_{\ast }^{n}\}$ is optimal in terms of (\ref{g-yield}), up to the order slightly larger than $O(r_{n})$; that is, for any protocol $\{C^{n}\}$,
\begin{equation*}
\mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}f\left( X\right) \leq \mathrm{E}_{Q_{C_{\ast }^{n}}^{\phi }}^{X}f\left( X\right) +O(r_{n}).
\end{equation*}
\item[(ii)] Suppose that $f(\mathrm{H}(\mathbf{p}_{\phi })+c)<1$ for some $c>0$, and $\overline{\epsilon }_{C^{n}}^{\phi }\leq r_{n}$.
Then, there is a protocol $\{C^{n}\}$ which improves on $\{C_{\ast }^{n}\}$ by the order of $O(r_{n})$;
\begin{equation*}
\mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}f\left( X\right) \geq \mathrm{E}_{Q_{C_{\ast }^{n}}^{\phi }}^{X}f\left( X\right) +O(r_{n}).
\end{equation*}
\end{itemize}
\end{thm}
Let us compare this theorem with Theorem~\ref{th:opt-g-yield-worst}, which states optimality results under the constraint on the worst-case distortion. First, the optimality guarantee here is weaker: $O(r_{n})$ rather than $O\left( \frac{r_{n}}{n}\right) $. In particular, if $f(x)=\frac{x}{\log d}$, $r_{n}$ needs to be $o(\frac{\log n}{n})$ for optimality up to the higher-order terms to be guaranteed. By contrast, under the constraint on the worst-case distortion, $r_{n}=o(1)$ is enough to certify optimality up to the third leading term. Second, applied to the case of $f(x)=\Theta (x-R)$ with $\mathrm{H}(\mathbf{p}_{\phi })<R$, Theorem~\ref{opt-g-yield} implies the following. With $r_{n}=o(1)$, the success probability $\sum_{x:x\geq R}Q_{C^{n}}^{\phi }\left( x\right) $ vanishes (the strong converse holds), but the speed of convergence is at most as fast as $r_{n}$, which is not exponentially fast in general. Therefore, (\ref{fexp-2}) is far from optimal unless $r_{n}$ decreases exponentially fast. By contrast, under the constraint on the worst-case distortion, a constant upper bound is enough to guarantee the optimality of the exponent (\ref{fexp-2}). Let us study the counterpart of Theorem~\ref{th:opt-g-yield-worst-2}, because Theorem~\ref{opt-g-yield}, (ii) cannot be applied to the discussion of optimality of the exponent~(\ref{fexp1-1}), in which the rate $R$ is typically less than $\mathrm{H}(\mathbf{p}_{\phi })$.
\begin{lem}
\label{lem:improve-upper}Suppose that
\begin{equation*}
r\geq \varlimsup_{n\rightarrow \infty }\max_{U,V}\overline{\epsilon }_{C^{n}}^{U\otimes V\phi }
\end{equation*}
holds for all $\left\vert \phi \right\rangle $.
Then, for all $\left\vert \phi \right\rangle $, all $c>0$, all $\delta >0$, and all $R^{\prime }$, $R^{\prime \prime }$ with $R^{\prime }>R^{\prime \prime }$, there is a sequence $\{x_{n}\}$ such that $R^{\prime \prime }\leq x_{n}\leq R^{\prime }$ and
\begin{equation*}
\varlimsup_{n\rightarrow \infty }\sum_{\Delta :\Delta \geq \frac{c}{n}}Q^{n}(x_{n}+\Delta |x_{n})\leq \frac{r+\delta }{1-2^{-c}}
\end{equation*}
hold.
\end{lem}
\begin{pf}
Assume the lemma is false, i.e., that there is a sequence $\{n_{k}\}$ such that for all $x$ in the interval $(R^{\prime \prime },R^{\prime })$,
\begin{equation*}
\sum_{\Delta :\Delta \geq \frac{c}{n_{k}}}Q^{n_{k}}(x+\Delta |x)>\frac{r+\delta }{1-2^{-c}}
\end{equation*}
holds. Choosing $\left\vert \phi \right\rangle $ with $R^{\prime }>\mathrm{H}\left( \mathbf{p}_{\phi }\right) >R^{\prime \prime }$, we have
\begin{eqnarray*}
&&\Pr_{\phi }\left\{ \Delta \geq \frac{c}{n_{k}}\right\} \\
&\geq &\sum\limits_{x:R^{\prime \prime }\leq x\leq R^{\prime }}\sum_{\Delta :\Delta \geq \frac{c}{n_{k}}}Q^{n_{k}}(x+\Delta |x)Q_{C_{\ast }^{n_{k}}}^{\phi }(x) \\
&\geq &\frac{r+\delta }{1-2^{-c}}\left( 1-2^{-n_{k}\left( \min \left\{ \mathrm{D}(R^{\prime }||\mathbf{p}_{\phi }),\mathrm{D}(R^{\prime \prime }||\mathbf{p}_{\phi })\right\} -\delta ^{\prime }\right) }\right) ,\quad \forall \delta ^{\prime }>0,\,\exists k_{1},\,\,\forall k\geq k_{1},
\end{eqnarray*}
which, combined with (\ref{improve-upper}), implies
\begin{equation*}
\frac{r+\delta }{1-2^{-c}}\left( 1-2^{-n_{k}\left( \min \left\{ \mathrm{D}(R^{\prime }||\mathbf{p}_{\phi }),\mathrm{D}(R^{\prime \prime }||\mathbf{p}_{\phi })\right\} -\delta ^{\prime }\right) }\right) \leq \frac{r}{1-2^{-c}}.
\end{equation*}
This cannot hold when $n_{k}$ is large enough. Therefore, the lemma must be true.
\end{pf}
\begin{thm}
\label{th:opt-exponent}Suppose that $f(x)=1$ for $x\geq R$, and $\mathrm{H}(\mathbf{p}_{\phi })>R$ holds.
Then, if $\varlimsup_{n\rightarrow \infty }\max_{U,V}\overline{\epsilon }_{C^{n}}^{U\otimes V\phi }<1$ holds for all $\left\vert \phi \right\rangle $,
\begin{equation}
\varlimsup_{n\rightarrow \infty }\frac{-1}{n}\log \mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}\left\{ 1-f\left( X\right) \right\} \leq D(R||\mathbf{p}_{\phi }),  \label{g-fexp}
\end{equation}
and the equality is achieved by $\left\{ C_{\ast }^{n}\right\} $.
\end{thm}
Note that the premise of the theorem is the negation of the premise of Theorem~\ref{opt-g-yield}, (ii). Note also that the constraint on the average distortion is very moderate, allowing constant distortion.
\begin{pf}
First, we prove (\ref{g-fexp}) for $f(x)=\Theta \left( x-R\right) $. Without loss of generality, we can assume that $Q^{n}\left( y|x\right) $ is non-zero only if $y=R$ or $y=x$, and that $Q^{n}\left( R|x\right) =1$ for $x\geq R$. Therefore,
\begin{eqnarray*}
&&1-\mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}\left\{ f\left( X\right) \right\} \\
&=&1-\sum\limits_{x}Q^{n}\left( R|x\right) Q_{C_{\ast }^{n}}^{\phi }\left( x\right) \\
&=&\sum\limits_{x}\left( 1-Q^{n}\left( R|x\right) \right) Q_{C_{\ast }^{n}}^{\phi }\left( x\right) .
\end{eqnarray*}
Let $R^{\prime }$, $R^{\prime \prime }$ be real numbers with $R^{\prime }<R^{\prime \prime }<R$, and $\left\{ x_{n}\right\} $ be a sequence given by Lemma~\ref{lem:improve-upper}.
Then we have, due to Lemma~\ref{lem:improve-upper},
\begin{eqnarray*}
&&1-\mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}\left\{ f\left( X\right) \right\} \\
&\geq &\left\{ 1-Q^{n}\left( R|x_{n}\right) \right\} Q_{C_{\ast }^{n}}^{\phi }\left( x_{n}\right) \\
&\geq &\left( 1-\frac{r+\delta }{1-2^{-n(R-R^{\prime })}}\right) Q_{C_{\ast }^{n}}^{\phi }\left( x_{n}\right) ,\quad \exists n_{0}\,\forall n>n_{0},
\end{eqnarray*}
which implies
\begin{eqnarray*}
&&\varlimsup_{n\rightarrow \infty }\frac{-1}{n}\log \sum_{x}\left\{ 1-\Theta \left( x-R\right) \right\} Q_{C^{n}}^{\phi }(x) \\
&\leq &\lim_{n\rightarrow \infty }\frac{-1}{n}\left\{
\begin{array}{c}
\log Q_{C_{\ast }^{n}}^{\phi }\left( R^{\prime \prime }\right) \\
+\log \left( 1-\frac{r+\delta }{1-2^{-n(R-R^{\prime })}}\right)
\end{array}
\right\} \\
&=&\mathrm{D}\left( R^{\prime \prime }||\mathbf{p}_{\phi }\right) .
\end{eqnarray*}
Since this holds true for all $R^{\prime }<R^{\prime \prime }<R$, the limit $R^{\prime \prime }\rightarrow R$ leads to the inequality (\ref{g-fexp}) for $f(x)=\Theta \left( x-R\right) $. For a general $f$ satisfying the premise of the theorem, we lower-bound $1-f(x)$ by $(1-f(R_{0}))(1-\Theta (x-R_{0}))$, with $R_{0}<R$. Then, the exponent is
\begin{eqnarray*}
&&\varlimsup_{n\rightarrow \infty }\frac{-1}{n}\log \mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}\left\{ 1-f\left( X\right) \right\} \\
&\leq &\varlimsup_{n\rightarrow \infty }\frac{-1}{n}\log \mathrm{E}_{Q_{C^{n}}^{\phi }}^{X}(1-\Theta (X-R_{0})) \\
&&+\varlimsup_{n\rightarrow \infty }\frac{-1}{n}\log (1-f(R_{0})) \\
&=&\mathrm{D}\left( R_{0}||\mathbf{p}_{\phi }\right) .
\end{eqnarray*}
The limit $R_{0}\rightarrow R$ leads to the inequality of the theorem. The achievability is proven in the same way as in the proof of Theorem~\ref{th:opt-g-yield-worst}.
\end{pf}
Note that the arguments in the proof of Theorem~\ref{opt-g-yield} apply also to the case where the Schmidt coefficients are known.
This is because the constraint $\max_{U,V}\overline{\epsilon }_{C^{n}}^{U\otimes V\phi }\leq r_{n}$ is required only for the given input state, and not for all states. By contrast, the proof of Theorem~\ref{th:opt-exponent} is valid only if the constraint $\max_{U,V}\overline{\epsilon }_{C^{n}}^{U\otimes V\phi }\leq r_{n}$ is assumed for all $\left\vert \phi \right\rangle $ (otherwise, Lemma~\ref{lem:improve-upper} cannot be proved), meaning that the generalization to the case where the Schmidt coefficients are known is impossible.
\subsection{Weighted sum of the distortion and the yield}
\label{subsec:weighted}In this subsection, we discuss the weighted sum (\ref{g-y+d}) of the distortion and the yield. First, we study the case where $f$ is continuously differentiable over the domain, and then the case where $f$ is an arbitrary monotone non-decreasing, bounded function. In this subsection, we prove (a sort of) non-asymptotic optimality. Finally, we apply our result to give another proof of Theorem~\ref{opt-g-yield}. The argument in this subsection generalizes to the case where the Schmidt coefficients are known, as is explained toward the end of Subsection~\ref{subsec:whatisproved}. To have a reasonable result, the weight $\lambda $ can be neither too small nor too large. In this subsection, we assume
\begin{equation}
\lambda >1,  \label{weight}
\end{equation}
because otherwise the yield $f\left( x\right) $ can take a value larger than the maximum value of the distortion, which equals unity. No explicit upper bound on $\lambda $ is assumed, but $\lambda $ is regarded as a constant only slightly larger than $1$.
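Since $\lambda $ is taken only slightly larger than $1$, it is worth noting how the factor $-\log (1-\frac{1}{\lambda })$, which controls the optimality threshold derived below, behaves numerically. A quick check (base-$2$ logarithms, as elsewhere in the paper; the sample values of $\lambda $ are illustrative):

```python
import math

def factor(lam):
    # the factor -log2(1 - 1/lam) entering the optimality threshold
    return -math.log2(1.0 - 1.0 / lam)

f_small = factor(1.001)     # = log2(1001), just below 10
f_tiny = factor(1.000001)   # grows only logarithmically as lam -> 1
```

Even for $\lambda $ extremely close to $1$ the factor stays moderate, which is why the resulting threshold on $n$ is small.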
Due to Lemma~\ref{lem:k1}, the difference between the value of the measure (\ref{g-y+d}) for the protocol characterized by $Q^{n}\left( x|y\right) $ and that for $\left\{ C_{\ast }^{n}\right\} $ is
\begin{eqnarray}
&&\sum\limits_{\substack{ x,y: \\ x\geq y}}\left\{ f\left( x\right) -f\left( y\right) -\lambda \left( 1-2^{-n\left( x-y\right) }\right) \right\}  \notag \\
&&\times Q^{n}\left( x|y\right) Q_{C_{\ast }^{n}}^{\phi }(y).  \label{diff}
\end{eqnarray}
For $\left\{ C_{\ast }^{n}\right\} $, i.e., the protocol with $Q^{n}\left( x|y\right) =0$ for $x\neq y$, to be optimal, the coefficient of $Q^{n}\left( x|y\right) $ has to be non-positive, i.e.,
\begin{equation*}
f\left( x\right) -f\left( y\right) \leq \lambda \left( 1-2^{-n\left( x-y\right) }\right) .
\end{equation*}
When $n$ is large enough, the RHS of this approximately equals $\lambda \Theta \left( x-y\right) $. Hence, this inequality holds, if $n$ is larger than some threshold $n_{0}$, for a wide variety of $f$'s. More rigorously, the condition for the optimality of $\left\{ C_{\ast }^{n}\right\} $ reads
\begin{eqnarray}
n\geq \frac{-1}{x-y}\log \left( 1-\frac{f\left( x\right) -f(y)}{\lambda }\right) , &&  \label{cond-n-delta} \\
\forall x,y,\quad 0\leq y<x\leq \log d. &&  \notag
\end{eqnarray}
Observe that $-\log (1-x)$ is convex and monotone increasing, and $-\log (1-0)=0$. Hence, the RHS of (\ref{cond-n-delta}) is upper-bounded by
\begin{eqnarray*}
&&\left\{ -\log \left( 1-\frac{f\left( \log d\right) -f(0)}{\lambda }\right) +\log \left( 1-\frac{0}{\lambda }\right) \right\} \\
&&\times \frac{1}{f\left( \log d\right) -f(0)}\frac{f(x)-f(y)}{x-y} \\
&=&-\log \left( 1-\frac{1}{\lambda }\right) \frac{f(x)-f(y)}{x-y},
\end{eqnarray*}
where the equality uses the normalization $f\left( \log d\right) -f(0)=1$. If the function $f$ is continuously differentiable, the last expression is upper-bounded by
\begin{equation*}
-\log \left( 1-\frac{1}{\lambda }\right) \max_{x:0\leq x\leq \log d}f^{\prime }(x).
\end{equation*}
Altogether, we have the following theorem.
\begin{thm}
\label{weitht-semi-asymptotic} If $f$ satisfies (\ref{cond-f})-(\ref{cond-f-3}), $\left\{ C_{\ast }^{n}\right\} $ is optimal, i.e., achieves the maximum of (\ref{g-y+d}) with the weight (\ref{weight}), for any input state and any $n$ larger than the threshold $n_{0}$, where
\begin{equation*}
n_{0}=-\log \left( 1-\frac{1}{\lambda }\right) \max_{x:0\leq x\leq \log d}f^{\prime }(x).
\end{equation*}
\end{thm}
Some comments on the theorem are in order. First, this assertion is different from so-called asymptotic optimality, in which the higher-order terms are neglected. Rather, our assertion is of a non-asymptotic nature, for we have proved that $\left\{ C_{\ast }^{n}\right\} $ is optimal up to arbitrary order if $n$ is larger than some finite threshold. Second, the factor $-\log \left( 1-\frac{1}{\lambda }\right) $ is relatively small even if $\lambda $ is very close to $1$. For example, for $\lambda =1.001$, $-\log \left( 1-\frac{1}{\lambda }\right) =9.96\cdots $. Hence, the threshold value is not so large. For example, if $f(x)=\frac{x}{\log d}$, $n_{0}=\frac{-1}{\log d}\log \left( 1-\frac{1}{\lambda }\right) =\frac{9.96\cdots }{\log d}\leq 10$. So far, $\lambda $ has been a constant; now let $\lambda =\frac{1}{1-d^{-n}}$, so that the range of $f$ and that of the weighted distortion coincide. (Note that $\frac{1}{1-d^{-n}}$ is only slightly larger than $1$.) Then, if $f(x)=\frac{x}{\log d}$, the condition for the optimality of $\left\{ C_{\ast }^{n}\right\} $ reads
\begin{equation*}
n\geq \frac{-\log \left( 1-\left( 1-d^{-n}\right) \right) }{\log d}=\frac{-\log d^{-n}}{\log d}=n,
\end{equation*}
which holds for all $n\geq 1$, implying non-asymptotic optimality. Now, let us study the case where $f$ is not differentiable, such as $f(x)=\Theta (x-R)$. In such a case, we view the condition (\ref{cond-n-delta}) as a restriction on the $\Delta :=x-y$ for which $Q^{n}\left( y+\Delta |y\right) $ can take a non-zero value in the optimal protocol.
Observe that $f(x)-f(y)\leq 1$ holds for all functions satisfying (\ref{cond-f-2}). Therefore, for \begin{equation*} \Delta \geq \frac{-\log \left( 1-\frac{1}{\lambda }\right) }{n}, \end{equation*} the $Q^{n}\left( y+\Delta |y\right) $'s for the optimal protocol vanish for all $y$. Hence, an improvement over $\left\{ C_{\ast }^{n}\right\} $ is possible only in a very small range of $\Delta $ when $n$ is very large. This analysis of the weighted-sum measures can be applied to the proof of Theorem~\ref{opt-g-yield}, (i). For this purpose, we use the Lagrangian \begin{equation*} \mathfrak{L}\mathfrak{:}=\min_{U,V}\mathrm{E}_{Q_{C^{n}}^{U\otimes V\phi }}^{X}f(X)+\lambda \max_{U,V}\left( r_{n}-\overline{\epsilon } _{C^{n}}^{U\otimes V\phi }\right) , \end{equation*} with $\lambda >1$, $r_{n}>0$. Observe that (\ref{g-yield}) cannot be larger than $\mathfrak{L}$ under the condition $\max_{U,V}\overline{\epsilon } _{C^{n}}^{U\otimes V\phi }\leq r_{n}$. Indeed, under this condition, the second term is non-negative, and the first term of $\mathfrak{L}$ is nothing but (\ref{g-yield}). In case $f$ is differentiable, Theorem~\ref{weitht-semi-asymptotic} implies that the maximum of $\mathfrak{L}$ ($\lambda >1$) is achieved by $\{C_{\ast }^{n}\}$. Therefore, since $\overline{\epsilon }_{C_{\ast }^{n}}^{\phi }=0$, we obtain the inequality \begin{equation} \max \mbox{(\ref{g-yield})}\leq \mbox{(\ref{g-yield}) for ${C_*^n}$}+\lambda r_{n}. \label{upper-g-yield} \end{equation} A similar argument applies to the case where $f$ is continuously differentiable only in a neighborhood of $x=\mathrm{H}(\mathbf{p}_{\phi })$, up to exponentially small terms. \subsection{Total fidelity $F_{C^{n}}^{\protect\phi }\left( R\right) $} \label{subsec:totalfidelity} This measure equals the fidelity between the output and the target, and the relation between $R$ and this quantity reflects the trade-off between yield and distortion.
Note that the argument in this subsection also generalizes to the case where the Schmidt coefficients are known, as is explained toward the end of Subsection~\ref{subsec:whatisproved}. \begin{thm} $\{C_{\ast }^{n}\}$ achieves the optimal (maximum) value of the total fidelity (\ref{def:totalfidelity}) among all protocols, for any $n$, any input state $\left\vert \phi \right\rangle $, and any threshold $R$. \end{thm} \begin{pf} Due to Lemma \ref{lem:k1}, the protocol of interest is a postprocessing of $\{C_{\ast }^{n}\}$ which does not touch its quantum output. The total fidelity of such a protocol equals that of $\{C_{\ast }^{n}\}$, since the total fidelity (\ref{def:totalfidelity}) does not depend on the classical output of the protocol. \end{pf} \section{Universal concentration as an estimate of entanglement} \label{sec:estimate} In this section, universal concentration is related to statistical estimation of the entanglement measure $\mathrm{H}(\mathbf{p}_{\phi })$. Observe that \begin{equation*} \mathrm{\hat{H}}_{C^{n}}:=\frac{1}{n}\log (\mbox{dim. of max. ent.}) \end{equation*} is a natural estimate of $\mathrm{H}(\mathbf{p}_{\phi })$ when $n\gg 1$. Letting $f(x)=\Theta (x-R)$, Theorem~\ref{opt-g-yield}, (i) implies that the probability of $\mathrm{\hat{H}}_{C^{n}}>\mathrm{H}(\mathbf{p}_{\phi })$ tends to vanish if $\varlimsup_{n\rightarrow \infty }\overline{\epsilon }_{C^{n}}^{\phi }<1$, as is demonstrated right after the statement of the theorem. Therefore, if $\{C^{n}\}$ achieves the entropy rate, the estimate $\mathrm{\hat{H}}_{C^{n}}$ converges to $\mathrm{H}(\mathbf{p}_{\phi })$ in probability as $n\rightarrow \infty $ (a \textit{consistent estimate}).
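Consistency is easy to see numerically. The sketch below is an illustration only: it uses the entropy of the empirical type, which, by (\ref{grep-type}), differs from $\mathrm{\hat{H}}_{C^{n}}$ by at most $O((\log n)/n)$; the spectrum $\mathbf{p}$ is a hypothetical example.

```python
import math
import random
from collections import Counter

# Illustration (not the protocol itself): up to O((log n)/n), the estimate
# \hat{H} is the entropy of the empirical type of the Schmidt spectrum.
# Sample the type and watch the estimate approach H(p).
p = [0.5, 0.3, 0.2]                      # hypothetical Schmidt-squared coefficients
H = -sum(x * math.log2(x) for x in p)    # entropy of entanglement, ~1.485 bits

random.seed(0)
n = 100_000
counts = Counter(random.choices(range(len(p)), weights=p, k=n))
H_hat = -sum((c / n) * math.log2(c / n) for c in counts.values())

print(round(H_hat, 3))  # close to H(p) for large n (consistency)
```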
In particular, for the estimate $\widehat{\mathrm{H}}_{C_{\ast }^{n}}$ based on $\left\{ C_{\ast }^{n}\right\} $, the error exponent is given, using (\ref{fexp1-1}) and (\ref{fexp-2}), as \begin{eqnarray*} &&\varlimsup_{n\rightarrow \infty }\frac{-1}{n}\log \max_{U,V}\mathrm{ \Pr_{U\otimes V\phi }}\left\{ \left\vert \widehat{\mathrm{H}}_{C_{\ast }^{n}}-\mathrm{H}(\mathbf{p}_{\phi })\right\vert \geq \delta \right\} \\ &=&\min_{\left\vert \mathrm{H}(\mathbf{q})-\mathrm{H}(\mathbf{p}_{\phi })\right\vert \geq \delta }\mathrm{D}(\mathbf{q}\Vert \mathbf{p}_{\phi }). \end{eqnarray*} Now, we prove that this exponent is better than that of any other consistent estimate, which may even use global measurements, if the Schmidt basis is unknown. \begin{thm} \begin{eqnarray} &&\varlimsup_{n\rightarrow \infty }\frac{-1}{n}\log \max_{U,V}\mathrm{ \Pr_{U\otimes V\phi }}\left\{ \widehat{\mathrm{H}}_{n}<\mathrm{H}(\mathbf{p} _{\phi })-\delta \right\} \notag \\ &\leq &\min_{\mathrm{H}(\mathbf{q})\leq \mathrm{H}(\mathbf{p}_{\phi })-\delta }\mathrm{D}(\mathbf{q}\Vert \mathbf{p}_{\phi }) \label{est-exp} \end{eqnarray} \begin{eqnarray} &&\varlimsup_{n\rightarrow \infty }\frac{-1}{n}\log \max_{U,V}\mathrm{ \Pr_{U\otimes V\phi }}\left\{ \widehat{\mathrm{H}}_{n}>\mathrm{H}(\mathbf{p} _{\phi })+\delta \right\} \notag \\ &\leq &\min_{\mathrm{H}(\mathbf{q})\geq \mathrm{H}(\mathbf{p}_{\phi })+\delta }\mathrm{D}(\mathbf{q}\Vert \mathbf{p}_{\phi }) \label{est-exp2} \end{eqnarray} holds for any consistent estimate $\mathrm{\hat{H}}_{n}$ of $\mathrm{H}(\mathbf{p}_{\phi })$ by global measurement, if the Schmidt basis is unknown. \label{theorem:bahadur} \end{thm} \begin{pf} An argument almost parallel to the one in Subsection~\ref{subsec:keylemmas} implies that we can restrict ourselves to estimates computed from the classical output of $C_{\ast }^{n}$. From here, we use an argument almost parallel to the one in~\cite{Nagaoka}.
For $|\phi \rangle $, $|\psi \rangle $ with $\mathrm{H}(\mathbf{p}_{\psi })<\mathrm{H}(\mathbf{p}_{\phi })-\delta $, consistency of $\widehat{\mathrm{H}}_{n}$ implies \begin{align} \mathrm{Pr}_{\phi }\left\{ \widehat{\mathrm{H}}_{n}<\mathrm{H}(\mathbf{p} _{\phi })-\delta \right\} (& :=p_{n})\rightarrow 0, \notag \\ \mathrm{Pr}_{\psi }\left\{ \widehat{\mathrm{H}}_{n}<\mathrm{H}(\mathbf{p} _{\phi })-\delta \right\} (& :=q_{n})\rightarrow 1. \label{converse2} \end{align} On the other hand, monotonicity of relative entropy implies \begin{align*} & \mathrm{D}(Q_{C_{\ast }^{n}}^{\psi }\Vert Q_{C_{\ast }^{n}}^{\phi })\geq \mathrm{D}(\mathrm{Pr}_{\psi }\{\mathrm{\hat{H}}_{n}\}\Vert \mathrm{Pr} _{\phi }\{\mathrm{\hat{H}}_{n}\}) \\ & \geq q_{n}\log \frac{q_{n}}{p_{n}}+(1-q_{n})\log \frac{1-q_{n}}{1-p_{n}}, \end{align*} or, equivalently, \begin{align*} & \frac{-1}{n}\log {p_{n}} \\ & \leq \frac{1}{nq_{n}}\left( \mathrm{D}(Q_{C_{\ast }^{n}}^{\psi }\Vert Q_{C_{\ast }^{n}}^{\phi })+\mathrm{h}(q_{n})+(1-q_{n})\log (1-p_{n})\right) , \end{align*} with $\mathrm{h}(x):=-x\log x-(1-x)\log (1-x)$. With the help of Eqs.~(\ref{converse2}), letting $n\rightarrow \infty $ on both sides of this inequality, we obtain a Bahadur-type inequality~\cite{Bahadur}, \begin{equation} \mbox{ LHS of ~(\ref{est-exp})}\leq \varlimsup_{n\rightarrow \infty }\frac{1 }{n}\mathrm{D}(Q_{C_{\ast }^{n}}^{\psi }\Vert Q_{C_{\ast }^{n}}^{\phi }), \label{bahadur2} \end{equation} whose RHS equals $\mathrm{D}(\mathbf{p}_{\psi }\Vert \mathbf{p}_{\phi })$, as shown in Appendix~\ref{sec:cal-divergence}. Therefore, choosing $|\psi \rangle $ such that $\mathrm{D}(\mathbf{p}_{\psi }\Vert \mathbf{p}_{\phi })$ is arbitrarily close to the minimum in (\ref{est-exp}), (\ref{est-exp}) is proved. (\ref{est-exp2}) is proved almost in the same way.
\end{pf} \begin{pf} \textbf{of (\ref{g-fexp}) with }$f(x)=\Theta \left( x-R\right) $, $R>\mathrm{H}(\mathbf{p}_{\phi })$. If a protocol $\{C^{n}\}$ satisfies $\varlimsup_{n\rightarrow \infty }\overline{\epsilon }_{C^{n}}^{\phi }<1$ and achieves the rate of the entropy of entanglement, as is mentioned at the beginning of this section, the corresponding estimate $\mathrm{\hat{H}}_{C^{n}}$ is consistent, and satisfies Ineq.~(\ref{est-exp}). This is equivalent to the optimality (\ref{g-fexp}) with $f(x)=\Theta \left( x-R\right) $, $R>\mathrm{H}(\mathbf{p}_{\phi })$, since the error probability of $\mathrm{\hat{H}}_{C^{n}}$ equals that of $C^{n}$. \end{pf} Suppose now that the Schmidt basis is known, and let us discuss the first main term of the mean square error, \begin{equation*} \mathrm{E}_{\phi }(\mathrm{\hat{H}}_{n}-\mathrm{H}(\mathbf{p}_{\phi }))^{2}= \frac{1}{n}\tilde{\mathrm{V}}_{\phi }+o\left( \frac{1}{n}\right) , \end{equation*} that is, \begin{equation*} \tilde{\mathrm{V}}_{\phi }:=\varliminf_{n\rightarrow \infty }n\mathrm{E} _{\phi }(\mathrm{\hat{H}}_{n}-\mathrm{H}(\mathbf{p}_{\phi }))^{2}. \end{equation*} \begin{thm} Suppose the Schmidt basis of $|\phi \rangle $ is known and its Schmidt coefficients are unknown. Then, any estimate based on global measurements satisfies \begin{equation} \tilde{\mathrm{V}}_{\phi }\geq \sum_{i=1}^{d}p_{\phi ,i}(\log p_{\phi ,i}+ \mathrm{H}(\mathbf{p}_{\phi }))^{2}, \label{bound} \end{equation} if $\mathrm{E}_{\phi }(\mathrm{\hat{H}}_{n})\rightarrow \mathrm{H}(\mathbf{p}_{\phi })$ ($n\rightarrow \infty $) for all $\left\vert \phi \right\rangle $, and the estimate $\mathrm{\hat{H}}_{C_{\ast }^{n}}$ based on $\{C_{\ast }^{n}\}$ achieves the equality.
\label{theorem:meansquare} \end{thm} \begin{pf} Consider a family of state vectors $\left\{ \sum\limits_{j}\sqrt{p_{\phi ,j}} |e_{j,A}\rangle |e_{j,B}\rangle \right\} $, where $\{|e_{j,A}\rangle |e_{j,B}\rangle \}$ is fixed and $\mathbf{p}_{\phi }$ runs over all probability distributions supported on $\{1,\ldots ,d\}$. Due to Theorem~5 in \cite{Matsumoto1}, the asymptotically optimal estimate of $\mathrm{H}(\mathbf{p}_{\phi })$ is a function of the data resulting from the projection measurement $\{|e_{j,A}\rangle |e_{j,B}\rangle \}$ on each copy. Therefore, the problem reduces to the optimal estimation of $\mathrm{H}(\mathbf{p}_{\phi })$ from data generated by the probability distribution $\mathbf{p}_{\phi }$. Due to the asymptotic Cram{\'{e}}r--Rao inequality of classical statistics, the asymptotic mean square error of such an estimate is lower-bounded by \begin{equation*} \frac{1}{n}\sum_{1\leq i,j\leq d-1}\left( J^{-1}\right) ^{i,j}\frac{\partial \mathrm{H}}{\partial p_{i}}\frac{\partial \mathrm{H}}{\partial p_{j}} +o\left( \frac{1}{n}\right) , \end{equation*} where $J$ is the Fisher information matrix of the totality of probability distributions supported on $\{1,\cdots ,d\}$. Since $\left( J^{-1}\right) ^{i,j}=p_{i,\phi }\delta _{i,j}-p_{i,\phi }p_{j,\phi }$, we obtain the lower bound (\ref{bound}). To prove the achievability, observe that the Schmidt coefficients are exactly the spectrum of the reduced density matrix. As is discussed in \cite{KW,Matsumoto3}, the optimal measurement is the projectors $\left\{ \mathcal{W}_{\mathbf{n},A}\otimes \mathcal{W}_{\mathbf{n},B}\right\} $, which is used in the protocol $\{C_{\ast }^{n}\}$, and the estimate of the spectrum is $\frac{\mathbf{n}}{n}$.
It has been shown that the asymptotic mean square error matrix of this estimate equals $\frac{1}{n}J^{-1}+o\left( \frac{1}{n}\right) $. Hence, if we estimate $\mathrm{H}(\mathbf{p}_{\phi })$ by $\mathrm{H}(\frac{\mathbf{n}}{n})$, we can achieve the lower bound, as is easily checked using Taylor's expansion. Now, due to (\ref{grep-type}), our estimate $\frac{\log \dim \mathcal{V}_{\mathbf{n}}}{n}$ differs from $\mathrm{H}(\frac{\mathbf{n}}{n})$ at most by $O\left( \frac{\log n}{n}\right) $. Therefore, their mean square errors differ at most by $O\left( \frac{\log n}{n}\right) ^{2}=o\left( \frac{1}{n}\right) $. As a result, the estimate based on $\{C_{\ast }^{n}\}$ achieves the lower bound (\ref{bound}). \end{pf} \section{Conclusions and discussions} \label{sec:conclusion}We have proposed a new protocol of entanglement concentration $\{C_{\ast }^{n}\}$, which has the following properties. \begin{enumerate} \item The input states are many copies of an unknown pure state. \item The output is an exact maximally entangled state, together with its Schmidt rank. \item The protocol is probabilistic, and the entropy rate is asymptotically achieved. \item No protocol is better than a protocol obtained from $\{C_{\ast }^{n}\}$ by modifying its classical output only. \item The protocol is optimal up to higher orders or non-asymptotically, depending on the measure. \item No classical communication is needed. \item The classical output gives an estimate of the entropy of entanglement with minimum asymptotic error, where the minimum is taken over all global measurements. \end{enumerate} The key to the optimality arguments is Lemma~\ref{lem:k1}, which implies item 4 above and drastically simplifies the arguments. As is pointed out throughout the paper, almost all the optimality statements, except for Theorem~\ref{th:opt-exponent}, generalize to the case where the Schmidt coefficients are known.
As a measure of the distortion, we considered the worst-case distortion and the average distortion. Trivially, the former constraint is stronger, and thus the proof of optimality was technically much simpler and the results are stronger. A problem is which one is more natural. This is a very subtle problem, but we think that the constraint on the average distortion is too generous. The reason is as follows. Under this constraint, the strong converse probability decreases very slowly (Theorem~\ref{opt-g-yield}, (ii)). It is easy to generalize this statement to non-universal entanglement concentration. This is in sharp contrast with the fact that the strong converse probability converges exponentially fast in many other information-theoretic problems. Toward the end of Subsection~\ref{subsec:weighted}, using the linear programming approach, we gave another proof of Theorem~\ref{opt-g-yield}. A similar proof of Theorem~\ref{th:opt-exponent} is possible using the Lagrangian \begin{equation*} \mathfrak{L}^{\prime }\mathfrak{:}=\min_{U,V}\mathrm{E}_{Q_{C^{n}}^{U\otimes V\phi }}^{X}f(X)-\lambda \max_{U,V}\left( \overline{\epsilon } _{C^{n}}^{U\otimes V\psi }-r_{n}\right) , \end{equation*} with $\mathrm{H}\left( \mathbf{p}_{\psi }\right) $ slightly smaller than $R$. In addition, using the Lagrangian \begin{eqnarray*} &&\mathfrak{L}^{\prime \prime }\mathfrak{:}= \\ &&\min_{U,V}\mathrm{E}_{Q_{C^{n}}^{U\otimes V\phi }}^{X}\left\{ \begin{array}{c} f(X) \\ +\lambda _{X}^{n}\left( r_{n}-\langle 2^{nX}\Vert \rho _{C^{n}}^{U\otimes V\phi }(X)\Vert 2^{nX}\rangle \right) \end{array} \right\} , \end{eqnarray*} one can give another proof of the optimality results with the constraint on the worst-case distortion. It has been pointed out by many authors that the theory of linear programming, especially the duality theorem, supplies a strong mathematical tool for obtaining upper/lower bounds. Our case is one such example.
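The duality remark can be illustrated on a toy linear program (the numbers below are hypothetical and unrelated to the Lagrangians above): any dual-feasible point certifies an upper bound on every primal-feasible value, which is exactly how Lagrangians of this kind are used to bound the optimum.

```python
# Toy weak-duality illustration (hypothetical numbers): for the primal LP
#   maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0,
# any dual-feasible (u, v) >= 0 with u + v >= 3 and u + 3v >= 2 certifies
# 3x + 2y <= 4u + 6v for every primal-feasible (x, y).
u, v = 3.0, 0.0                       # dual feasible: u+v=3>=3, u+3v=3>=2
dual_bound = 4 * u + 6 * v            # = 12

best = 0.0
steps = 400
for i in range(steps + 1):
    for j in range(steps + 1):
        x, y = 4 * i / steps, 4 * j / steps
        if x + y <= 4 and x + 3 * y <= 6:
            val = 3 * x + 2 * y
            assert val <= dual_bound + 1e-9   # weak duality holds pointwise
            best = max(best, val)

print(best)  # 12.0; the optimum is attained at (4, 0) and matches the bound
```

Here the dual bound is tight, which is the duality theorem; weak duality alone already gives the one-sided bounds used in the text.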
Almost in parallel with the arguments in this paper, we can prove the optimality of the BBPS protocol among all protocols which do not use information about the phases of the Schmidt basis and the Schmidt coefficients. For that, we just have to replace the average over all local unitaries in our arguments with the one over the phases. This average kills all the coherence between typical subspaces, changing the state into a direct sum of maximally entangled states, and we obtain an analogue of our key lemma. The rest of the argument is also parallel, since an analogue of (\ref{grep-type}) holds due to type-theoretic arguments. In the paper, we discussed universal entanglement concentration only, but the importance of universal entanglement distillation is obvious. This topic is already studied by some authors \cite{Brun, Pan}, but the optimality of their protocols, etc., is left for future study. Another possible future direction is to explore new applications of the measurement used in our protocol. This measurement has already been applied to the estimation of the spectrum~\cite{KW, Matsumoto3} and to universal data compression~\cite{Ha, JHHH}. In addition, after the appearance of the first draft~\cite{HM} of this paper, a polynomial-size circuit for this measurement was proposed~\cite{Aram}, meaning that this measurement can be realized efficiently by forthcoming quantum computers. \section*{Acknowledgment} The main part of this research was conducted as a part of the QCI project, JST, led by Professor Imai. We are grateful to him for his support. At that time, MH was a member of a research group in RIKEN led by Professor S. Amari, and MH is grateful for his generous support of this work. We are thankful to Professor H. Nagaoka, Dr. T. Ogawa, and Professor M. Hamada for discussions. K. M. is especially thankful to Prof.
Nagaoka for pointing out that the average state over the unknown unitary may simplify the optimality proof, and for his suggestion that the average yield may be another natural measure. \appendix \section{Group representation theory} \label{appendixA} \begin{lem} \label{lem:decohere} Let $U_{g}$ and $U_{g}^{\prime }$ be irreducible representations of $G$ on the finite-dimensional spaces $\mathcal{H}$ and $\mathcal{H}^{\prime }$, respectively. We further assume that $U_{g}$ and $U_{g}^{\prime }$ are not equivalent. If a linear operator $A$ in $\mathcal{H}\oplus \mathcal{H}^{\prime }$ is invariant by the transform $A\rightarrow \left( U_{g}\oplus U_{g}^{\prime }\right) A\left( U_{g}^{\ast }\oplus U_{g}^{\prime \ast }\right) $ for any $g$, then $\mathcal{H}A\mathcal{H^{\prime }}=0$~\cite{GW}. \end{lem} \begin{lem} \label{lem:shur} (Schur's lemma~\cite{GW}) Let $U_{g}$ be as defined in Lemma~\ref{lem:decohere}. If a linear map $A$ in $\mathcal{H}$ is invariant by the transform $A\rightarrow U_{g}AU_{g}^{\ast }$ for any $g$, then $A=c\mathrm{Id}_{\mathcal{H}}$. \end{lem} \begin{lem} \label{lem:shur2}Let $U_{g}$ be an irreducible representation of $G$ on the finite-dimensional space $\mathcal{H}$, and let $A$ be a linear map in $\mathcal{K}\otimes \mathcal{H}$. If $A$ is invariant by the transform $A\rightarrow \left( I\otimes U_{g}\right) A\left( I\otimes U_{g}^{\ast }\right) $ for any $g$, $A$ is of the form $A^{\prime }\otimes \mathrm{Id}_{\mathcal{H}}$, with $A^{\prime }$ a linear map in $\mathcal{K}$. \begin{pf} Write $A=\sum_{i}A_{i}\otimes B_{i}$ with linearly independent $A_{i}$. Due to Schur's lemma, $B_{i}=c_{i}\mathrm{Id}_{\mathcal{H}}$. Therefore, \begin{equation*} A=\sum_{i}A_{i}\otimes c_{i}\mathrm{Id}_{\mathcal{H}}=\left( \sum_{i}c_{i}A_{i}\right) \otimes \mathrm{Id}_{\mathcal{H}}, \end{equation*} and we have the lemma.
\end{pf} \end{lem} \begin{lem} \label{l5} If the representation $U_{g}($ $U_{h}^{\prime }$, resp.$)$ of $G(H $, resp.$)$ on the finite-dimensional space $\mathcal{H}(\mathcal{H}^{\prime }$, resp.$)$ is irreducible, the representation $U_{g}\times U_{h}^{\prime }$ of the group $G\times H$ in the space $\mathcal{H}\otimes \mathcal{H}^{\prime }$ is also irreducible. \end{lem} \begin{pf} Assume that the representation $U_{g}\times U_{h}^{\prime }$ is reducible, \textit{i.e.}, $\mathcal{H}\otimes \mathcal{H}^{\prime }$ has a proper invariant subspace $\mathcal{K}$, and choose $|\phi \rangle $ from $\mathcal{K}$. Denoting the Haar measures on $G$ and $H$ by $\mu (\mathrm{d}g)$ and $\nu (\mathrm{d}h)$ respectively, Schur's lemma yields \begin{equation*} \int U_{g}\otimes U_{h}^{\prime }|\phi \rangle \langle \phi |U_{g}^{\ast }\otimes U_{h}^{^{\prime }\ast }\mu (\mathrm{d}g)\nu (\mathrm{d}h)=c\mathrm{ Id}_{\mathcal{H}\otimes \mathcal{H}^{\prime }}, \end{equation*} since the LHS is invariant under both $U_{g}\,\cdot \,U_{g}^{\ast }$ and $U_{h}^{\prime }\,\cdot \,U_{h}^{^{\prime }\ast }$. This equation leads to $\int |\langle \psi |U_{g}\otimes U_{h}^{\prime }|\phi \rangle |^{2}\mu (\mathrm{d}g)\nu (\mathrm{d}h)=c$ for any unit vector $|\psi \rangle $, and $c\dim \mathcal{H}\dim \mathcal{H}^{\prime }=\mu (G)\nu (H)\langle \phi |\phi \rangle $. Choosing $|\psi \rangle $ from $\mathcal{K}^{\bot }$, the former equation gives $c=0$, which contradicts the latter. \end{pf} \section{\protect Asymptotic yield of BBPS protocol} \label{sec:cal-ave-ben} From here to the end of the paper, we use the following notation. \begin{eqnarray*} \mathbf{n}!
&:&=\prod_{i}n_{i}!,\,\, \\ p_{i} &:&=p_{i}^{\phi },\,\mathbf{p}=(p_{1},\cdots ,p_{d}). \end{eqnarray*} In this section, we compute the yield of the BBPS protocol, \begin{equation*} \mathrm{E}_{\mathbf{p}}\left[ \frac{1}{n}\log \frac{n!}{\mathbf{n}!}\right] =\sum_{\substack{ \mathbf{n:} \\ \sum_{i}n_{i}=n,n_{i}\geq 0}} \prod\limits_{i}p_{i}^{n_{i}}\frac{n!}{\mathbf{n}!}\frac{1}{n}\log \left( \frac{n!}{\mathbf{n}!}\right) , \end{equation*} \label{bbps-yield}up to $O\left( \frac{1}{n}\right) $. Here, $\mathrm{E}_{\mathbf{p}}$ denotes the average with respect to the multinomial distribution \begin{equation*} \Pr \left\{ \mathbf{n}=(n_{1},\cdots ,n_{d})\right\} =\frac{n!}{\mathbf{n}!}\prod\limits_{i=1}^{d}p_{i}^{n_{i}}\,. \end{equation*} Below, we assume $p_{i}\neq 0$. Due to Stirling's formula $n!=\sqrt{2\pi n} n^{n}e^{-n}\left( 1+O\left( \frac{1}{n}\right) \right) $, \begin{eqnarray*} &&\frac{1}{n}\log \left( \frac{n!}{\mathbf{n}!}\right) \\ &=&\mathrm{H}\left( \frac{\mathbf{n}}{n}\right) -\frac{d-1}{2}\frac{\log n}{n } \\ &&-\frac{1}{n}\left( \frac{d-1}{2}\log 2\pi +\frac{1}{2}\sum \limits_{i=1}^{d}\log \frac{n_{i}}{n}\right) +R_{1}\left( \mathbf{n}\right) , \end{eqnarray*} where $R_{1}\left( \mathbf{n}\right) =\frac{1}{n}O\left( \max \left\{ \frac{1 }{n},\frac{1}{n_{1}},\cdots ,\frac{1}{n_{d}}\right\} \right) $. Consider a Taylor expansion, \begin{eqnarray*} &&\mathrm{H}\left( \frac{\mathbf{n}}{n}\right) \\ &=&\mathrm{H}\left( \mathbf{p}\right) +\sum_{i=1}^{d}\frac{\partial \mathrm{H }\left( \mathbf{p}\right) }{\partial p_{i}}\left( \frac{n_{i}}{n} -p_{i}\right) \\ &&+\frac{\log e}{2}\left( -\sum_{j=1}^{d}\frac{n_{j}^{2}}{p_{j}n^{2}} +1\right) +R_{2}\left( \frac{\mathbf{n}}{n},\mathbf{p}\right) , \end{eqnarray*} and let $R\left( \frac{\mathbf{n}}{n},\mathbf{p}\right) :=R_{1}(\mathbf{n} )+R_{2}(\frac{\mathbf{n}}{n},\mathbf{p})$.
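The Stirling expansion above admits a direct numerical sanity check; the following sketch (base-2 logarithms; the sample type $(n_{1},n_{2})=(300,700)$ is chosen purely for illustration) compares the exact value of $\frac{1}{n}\log\frac{n!}{\mathbf{n}!}$ with the expansion, dropping the remainder $R$:

```python
import math

log2 = lambda z: math.log(z) / math.log(2)

# Sample type (n_1, n_2) = (300, 700); all logarithms are base 2.
nvec = (300, 700)
n = sum(nvec)
d = len(nvec)

# Exact value of (1/n) log(n! / (n_1! ... n_d!)) via log-Gamma.
exact = (math.lgamma(n + 1) - sum(math.lgamma(k + 1) for k in nvec)) / (n * math.log(2))

# Expansion: H(n/n) - (d-1)/2 * (log n)/n
#            - (1/n) * ((d-1)/2 * log(2 pi) + (1/2) * sum_i log(n_i/n)).
H = -sum((k / n) * log2(k / n) for k in nvec)
approx = (H - (d - 1) / 2 * log2(n) / n
          - ((d - 1) / 2 * log2(2 * math.pi) + 0.5 * sum(log2(k / n) for k in nvec)) / n)

print(abs(exact - approx) < 1e-5)  # remainder R is far below 1/n here
```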
Since $\mathrm{E}_{\mathbf{p}}f\left( \frac{n_{i}}{n}\right) =f(p_{i})+o(1)$ and $\mathrm{E}_{\mathbf{p}}n_{j}^{2}=n(n-1)p_{j}^{2}+np_{j}$, we have \begin{eqnarray*} &&\mathrm{E}_{\mathbf{p}}\left[ \frac{1}{n}\log \frac{n!}{\mathbf{n}!}\right] \\ &=&\mathrm{H}\left( \mathbf{p}\right) -\frac{d-1}{2}\frac{\log n}{n} \\ &&-\frac{1}{n}\left\{ \frac{(d-1)}{2}\log 2\pi e+\frac{1}{2} \sum\limits_{i=1}^{d}\log p_{i}\right\} \\ &&+\mathrm{E}_{\mathbf{p}}R\left( \frac{\mathbf{n}}{n},\mathbf{p}\right) +o\left( n^{-1}\right) . \end{eqnarray*} Since \begin{equation} \mathrm{E}_{\mathbf{p}}R(\frac{\mathbf{n}}{n},\mathbf{p})=o\left( n^{-1}\right) \label{remainder} \end{equation} holds, as is proved below, our calculation is complete. Since $\frac{1}{n}\log \frac{n!}{\mathbf{n}!}$ is bounded by a constant, $R\left( \frac{\mathbf{n}}{n},\mathbf{p}\right) $ is bounded by a polynomial function of $n$. Hence, due to the type theory, \begin{eqnarray*} &&\mathrm{E}_{\mathbf{p}}R\left( \frac{\mathbf{n}}{n},\mathbf{p}\right) \\ &\leq &\mathrm{E}_{\mathbf{p}}\left[ \left. R\left( \frac{\mathbf{n}}{n}, \mathbf{p}\right) \right\vert \left\Vert \frac{\mathbf{n}}{n}-\mathbf{p} \right\Vert <\delta \right] \\ &&+\mathrm{poly}\left( n\right) 2^{-nD\left( \delta \right) }, \end{eqnarray*} where $D\left( \delta \right) :=\min_{\mathbf{q}:\left\Vert \mathbf{q}-\mathbf{p}\right\Vert \geq \delta }\mathrm{D}\left( \mathbf{q||p}\right) $.
In the region $\left\{ \mathbf{n}:\left\Vert \frac{\mathbf{n}}{n}-\mathbf{p}\right\Vert <\delta \right\} $, \begin{eqnarray*} \left\vert R_{1}(\mathbf{n)}\right\vert &=&O\left( \frac{1}{n^{2}}\right) , \\ \left\vert R_{2}\left( \frac{\mathbf{n}}{n},\mathbf{p}\right) \right\vert &\leq &\sum_{i,j,k}\max_{\mathbf{p}_{0}:\left\Vert \mathbf{p}_{0}-\mathbf{p} \right\Vert <\delta }\left\vert \frac{\partial ^{3}H\left( \mathbf{p} _{0}\right) }{\partial p_{i}\partial p_{j}\partial p_{k}}\right\vert \delta ^{3}. \end{eqnarray*} Observe also $D\left( \delta \right) =\Theta \left( \delta ^{2}\right) $. Hence, if $\delta =n^{-\frac{3}{8}}$, $\left\vert R_{2}\left( \frac{\mathbf{n}}{n}, \mathbf{p}\right) \right\vert =o\left( n^{-1}\right) $ in the region $\left\{ \mathbf{n}:\left\Vert \frac{\mathbf{n}}{n}-\mathbf{p}\right\Vert <\delta \right\} $, and $2^{-nD\left( \delta \right) }=2^{-\Theta \left( n^{1/4}\right) }$, implying (\ref{remainder}). \section{Difference between the average yield of BBPS protocol and $\left\{ C_{\ast }^{n}\right\} $} \label{sec:diff-ave-yield} $\dim \mathcal{V}_{\mathbf{n}}$ and $a_{\mathbf{n}}^{\phi }$ are explicitly given as follows: \begin{eqnarray} \dim \mathcal{V}_{\mathbf{n}} &=&\sum_{\pi \in S_{n}}\mathrm{sgn}(\pi )\frac{ n!}{(\mathbf{n}+\mathbf{\delta }-\pi (\mathbf{\delta }))!} \label{dimV-1} \\ &=&\frac{n!}{(\mathbf{n}+\mathbf{\delta })!} \prod\limits_{i,j:i<j}(n_{i}-n_{j}+j-i), \label{dimV-2} \\ a_{\mathbf{n}}^{\phi } &=&\frac{\dim \mathcal{V}_{\mathbf{n}}}{ \prod\limits_{i,j:i>j}(p_{i}-p_{j})}\sum_{\pi \in S_{n}}\mathrm{sgn}(\pi )\prod\limits_{i}p_{i}^{n_{\pi (i)}+\delta _{\pi (i)}}, \notag \end{eqnarray} where \begin{eqnarray*} \mathbf{\delta } &:&=(d-1,d-2,\cdots ,0), \\ \pi (\mathbf{\delta }) &:&=\left( \delta _{\pi (1)},\delta _{\pi (2)},\cdots ,\delta _{\pi (d)}\right) . \end{eqnarray*} Below, $p_{i}^{\phi }\neq p_{j}^{\phi }$ ($i\neq j$) and $p_{i}^{\phi }\neq 0$ are assumed for simplicity.
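Formula (\ref{dimV-1}) is straightforward to evaluate for small Young diagrams. In the sketch below we read the alternating sum as running over permutations $\pi$ of the $d$ indices, as the notation $\pi(\mathbf{\delta})$ indicates, with the convention that a term vanishes whenever a component of $\mathbf{n}+\mathbf{\delta}-\pi(\mathbf{\delta})$ is negative; the resulting values agree with the well-known numbers of standard Young tableaux:

```python
from itertools import permutations
from math import factorial

# Sketch of formula (dimV-1); the alternating sum runs over permutations
# of the d indices (a term with a negative factorial argument is zero).
def dim_V(nvec):
    d = len(nvec)
    n = sum(nvec)
    delta = [d - 1 - i for i in range(d)]
    total = 0
    for pi in permutations(range(d)):
        # sign of pi via inversion count
        inv = sum(1 for a in range(d) for b in range(a + 1, d) if pi[a] > pi[b])
        ks = [nvec[i] + delta[i] - delta[pi[i]] for i in range(d)]
        if any(k < 0 for k in ks):
            continue  # 1/(negative integer)! is taken as 0
        term = factorial(n)
        for k in ks:
            term //= factorial(k)
        total += (-1) ** inv * term
    return total

# Known standard-tableaux counts for small diagrams:
print(dim_V((2, 1)), dim_V((3, 1)), dim_V((2, 2)), dim_V((1, 1, 1)))  # prints: 2 3 2 1
```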
The average yield equals \begin{eqnarray*} &&\frac{1}{n}\sum_{\mathbf{n}}a_{\mathbf{n}}^{\phi }\log (\dim \mathcal{V}_{ \mathbf{n}}) \\ &=&\frac{1}{n}\frac{1}{\prod\limits_{i,j:i>j}(p_{i}-p_{j})}\sum_{\pi _{0}\in S_{n}}\mathrm{sgn}(\pi _{0}) \\ &&\times \sum_{\mathbf{n}}\prod\limits_{i}p_{i}^{n_{\pi _{0}(i)}+\delta _{\pi _{0}(i)}}(\dim \mathcal{V}_{\mathbf{n}})\log (\dim \mathcal{V}_{ \mathbf{n}}), \end{eqnarray*} where $\mathbf{n}$ is summed over the region satisfying (\ref{yungindex}). In the sum over $\pi _{0}$, we first compute the term for $\pi _{0}=\mathrm{ id}$ (other terms will turn out to be exponentially small): \begin{eqnarray} &&\sum_{\mathbf{n}}\prod\limits_{i}p_{i}^{n_{i}+\delta _{i}}(\dim \mathcal{V} _{\mathbf{n}})\log (\dim \mathcal{V}_{\mathbf{n}}) \notag \\ &=&\sum_{\pi \in S_{n}}\mathrm{sgn}(\pi )\prod\limits_{i}p_{i}^{\delta _{\pi (i)}}\sum_{\mathbf{n}}\prod\limits_{i}p_{i}^{n_{i}+\mathbf{\delta } _{i}-\delta _{\pi (i)}} \notag \\ &&\times \frac{n!}{(\mathbf{n}+\mathbf{\delta }-\pi (\mathbf{\delta }))!} \log \left\{ \sum_{\pi ^{\prime }\in S_{n}}\mathrm{sgn}(\pi ^{\prime })\frac{ n!}{(\mathbf{n}+\mathbf{\delta }-\pi ^{\prime }(\mathbf{\delta }))!}\right\} \notag \\ &=&\sum_{\pi \in S_{n}}\mathrm{sgn}(\pi )\prod\limits_{i}p_{i}^{\delta _{\pi (i)}}\sum_{\mathbf{n}^{\pi }}\prod\limits_{i}p_{i}^{n_{i}^{\pi }}\frac{n!}{ \mathbf{n}^{\pi }!} \notag \\ &&\times \log \left\{ \sum_{\pi ^{\prime }\in S_{n}}\mathrm{sgn}(\pi ^{\prime })\frac{n!}{(\mathbf{n}^{\pi }+\pi (\mathbf{\delta )}-\pi ^{\prime }(\mathbf{\delta }))!}\right\} , \label{ave-y-cal} \end{eqnarray} where $\mathbf{n}^{\pi }$ is defined by $\mathbf{n}^{\pi }:=\mathbf{n}+ \mathbf{\delta }-\pi (\mathbf{\delta })$. 
Since the probability sharply concentrates in the neighborhood of $\frac{\mathbf{n}^{\pi }}{n}=\mathbf{p}$, we have \begin{equation} \sum_{\mathbf{n}^{\pi }}\prod\limits_{i}p_{i}^{n_{i}^{\pi }}\frac{n!}{ \mathbf{n}^{\pi }!}f\left( \frac{\mathbf{n}^{\pi }}{n}\right) =f(\mathbf{p} )+O(2^{-cn}). \label{lawLN} \end{equation} The main part of (\ref{ave-y-cal}) rewrites as \begin{eqnarray*} &&\sum_{\mathbf{n}^{\pi }}\prod\limits_{i}p_{i}^{n_{i}^{\pi }}\frac{n!}{ \mathbf{n}^{\pi }!}\log \left\{ \sum_{\pi ^{\prime }\in S_{n}}\mathrm{sgn} (\pi ^{\prime })\frac{n!}{(\mathbf{n}^{\pi }+\pi (\mathbf{\delta })-\pi ^{\prime }(\mathbf{\delta }))!}\right\} \\ &=&\sum_{\mathbf{n}^{\pi }}\prod\limits_{i}p_{i}^{n_{i}^{\pi }}\frac{n!}{ \mathbf{n}^{\pi }!}\left\{ \begin{array}{c} \log \frac{n!}{\mathbf{n}^{\pi }!} \\ +\log \sum\limits_{\pi ^{\prime }\in S_{n}}\mathrm{sgn}(\pi ^{\prime })\frac{ \mathbf{n}^{\pi }!}{(\mathbf{n}^{\pi }+\pi (\mathbf{\delta })-\pi ^{\prime }(\mathbf{\delta }))!} \end{array} \right\} . \end{eqnarray*} The first term is exponentially close to $n$ times the average yield of the BBPS protocol.
The second term is \begin{eqnarray*} &&\sum_{\mathbf{n}^{\pi }}\prod\limits_{i}p_{i}^{n_{i}^{\pi }}\frac{n!}{ \mathbf{n}^{\pi }!}\log \sum_{\pi ^{\prime }\in S_{n}}\mathrm{sgn}(\pi ^{\prime })\frac{\mathbf{n}^{\pi }!}{(\mathbf{n}^{\pi }+\pi (\mathbf{\delta })-\pi ^{\prime }(\mathbf{\delta }))!} \\ &=&\sum_{\mathbf{n}^{\pi }}\prod\limits_{i}p_{i}^{n_{i}^{\pi }}\frac{n!}{ \mathbf{n}^{\pi }!}\times \\ &&\log \sum_{\pi ^{\prime }\in S_{n}}\mathrm{sgn}(\pi ^{\prime })\frac{ \prod\limits_{i:\delta _{\pi (i)}-\delta _{\pi ^{\prime }(i)}<0}\prod\limits_{j=1}^{\delta _{\pi ^{\prime }(i)}-\delta _{\pi (i)}}(n_{i}^{\pi }-j+1)}{\prod\limits_{i:\delta _{\pi (i)}-\delta _{\pi ^{\prime }(i)}>0}\prod\limits_{j=1}^{\delta _{\pi (i)}-\delta _{\pi ^{\prime }(i)}}(n_{i}^{\pi }+j)} \\ &=&\log \sum_{\pi ^{\prime }\in S_{n}}\mathrm{sgn}(\pi ^{\prime })\frac{ \prod\limits_{i:\delta _{\pi (i)}-\delta _{\pi ^{\prime }(i)}<0}p_{i}^{\delta _{\pi ^{\prime }(i)}-\delta _{\pi (i)}}}{ \prod\limits_{i:\delta _{\pi (i)}-\delta _{\pi ^{\prime }(i)}>0}p_{i}^{\delta _{\pi (i)}-\delta _{\pi ^{\prime }(i)}}}+o(1) \\ &=&\log \sum_{\pi ^{\prime }\in S_{n}}\mathrm{sgn}(\pi ^{\prime })\prod_{i}p_{i}^{\delta _{\pi ^{\prime }(i)}-\delta _{\pi (i)}}+o(1) \\ &=&\log \prod\limits_{i,j:i>j}(p_{i}-p_{j})\prod_{k}p_{k}^{-\delta _{\pi (k)}}+o(1), \end{eqnarray*} where the second equality is due to (\ref{lawLN}).
To sum up, the term for $\pi _{0}=\mathrm{id}$ is \begin{eqnarray*} &&\frac{1}{\prod\limits_{i,j:i>j}(p_{i}-p_{j})}\sum_{\pi \in S_{n}}\mathrm{ sgn}(\pi )\prod\limits_{i}p_{i}^{\delta _{\pi (i)}} \\ &&\times \left\{ \begin{array}{c} \frac{1}{n}\left\{ \begin{array}{c} \log \left\{ \prod\limits_{i,j:i>j}(p_{i}-p_{j})\prod_{k}p_{k}^{-\delta _{\pi (k)}}\right\} \\ +o\left( 1\right) \end{array} \right\} \\ +\mbox{average yield of BBPS} \end{array} \right\} \\ &=&\frac{1}{n\prod\limits_{i,j:i>j}(p_{i}-p_{j})}\sum_{\pi \in S_{n}}\mathrm{ sgn}(\pi )\prod\limits_{i}p_{i}^{\delta _{\pi (i)}} \\ &&\times \log \left\{ \prod\limits_{i,j:i>j}(p_{i}-p_{j})\prod_{k}p_{k}^{-\delta _{\pi (k)}}\right\} \\ &&+\mbox{average yield of BBPS}+o\left( \frac{1}{n}\right) . \end{eqnarray*} The terms for $\pi _{0}\neq \mathrm{id}$ are of the form \begin{eqnarray*} &&\sum_{\mathbf{n}^{\pi }}\prod\limits_{i}p_{i}^{n_{\pi _{0}(i)}^{\pi }} \frac{n!}{\mathbf{n}^{\pi }!}f\left( \frac{\mathbf{n}^{\pi }}{n}\right) \\ &=&\sum_{\mathbf{n}^{\pi }}\prod\limits_{i}p_{\pi _{0}^{-1}(i)}^{n_{i}^{\pi }}\frac{n!}{\mathbf{n}^{\pi }!}f\left( \frac{\mathbf{n}^{\pi }}{n}\right) . \end{eqnarray*} Observe that the probability distribution $\prod\limits_{i}p_{\pi _{0}^{-1}(i)}^{n_{i}^{\pi }}\frac{n!}{\mathbf{n}^{\pi }!}$ is concentrated around $\mathbf{n}^{\pi }=n\pi _{0}^{-1}(\mathbf{p})$, which lies outside the region over which $\mathbf{n}^{\pi }$ ranges. Hence, since $f(x)$ is bounded by a constant, due to the large deviation principle, this sum is exponentially small. \section{Asymptotic average yield of the optimal non-universal concentration} \label{sec:cal-ave-opt} In this section, we discuss the asymptotic performance of the optimal non-universal entanglement concentration, i.e., the optimal entanglement concentration for a known input state. In terms of error probability, intensive research has been done in \cite{Morikoshi}. Here, our concern is the average yield of the optimal protocol.
For that, we use Hardy's formula\thinspace \cite{Hardy}: the average number of Bell pairs concentrated from a known pure state is \begin{equation} \sum\limits_{i=0}^{m}\left( \alpha _{i}-\alpha _{i+1}\right) T_{i}\log T_{i}, \label{hardy} \end{equation} where $\alpha _{i}$ is a Schmidt coefficient (in decreasing order, with the convention $\alpha _{m+1}=0$), and $T_{i}$ is the number of Schmidt basis vectors whose corresponding Schmidt coefficients are larger than or equal to $\alpha _{i}$. In this appendix, we evaluate (\ref{hardy}) asymptotically for qubit systems, assuming that the input state is $\left\vert \phi \right\rangle =\sqrt{q}\left\vert 0\right\rangle +\sqrt{p}\left\vert 1\right\rangle $ with $p<q$. Below, we always neglect $o(1)$ terms, unless otherwise mentioned, because the quantity evaluated is the yield multiplied by $n$. \begin{align*} & \sum_{i=0}^{n-1}\left( p^{i}q^{n-i}-p^{i+1}q^{n-i-1}\right) \sum\limits_{j=0}^{i}\binom{n}{j}\log \sum\limits_{k=0}^{i}\binom{n}{k} \\ & =\left( 1-\frac{p}{q}\right) \sum_{i=0}^{n-1}\sum\limits_{j=0}^{i}p^{i}q^{n-i}\binom{n}{j}\log \sum\limits_{k=0}^{i}\binom{n}{k} \\ & =\left( 1-\frac{p}{q}\right) \sum_{j=0}^{n-1}\sum\limits_{i=j}^{n}p^{i}q^{n-i}\binom{n}{j}\log \sum\limits_{k=0}^{i}\binom{n}{k} \\ & =\left( 1-\frac{p}{q}\right) \sum_{j=0}^{n-1}p^{j}q^{n-j}\binom{n}{j} \sum\limits_{l=0}^{n-j}\left( \frac{p}{q}\right) ^{l}\log \sum\limits_{k=0}^{j+l}\binom{n}{k} \end{align*} Since $\sum\limits_{l=0}^{n-j}\left( \frac{p}{q}\right) ^{l}\log \sum\limits_{k=0}^{j+l}\binom{n}{k}$ is at most of polynomial order, due to the large deviation principle, the range of $j$ can be replaced by $[n(p-\delta ),\,n(p+\delta )]$. In this region, $j+l$ is of order $n$. Also, the range of $l$ can be replaced by $[0,\,n^{1/2}]$, for \begin{align*} & \sum\limits_{l=n^{1/2}}^{n-j}\left( \frac{p}{q}\right) ^{l}\log \sum\limits_{k=0}^{j+l}\binom{n}{k} \\ & \leq \left( \frac{p}{q}\right) ^{n^{1/2}}\times n\times \mathrm{poly} (n)=o(1).
\end{align*} Hence, we evaluate $\frac{\sum\limits_{k=0}^{np^{\prime }}\binom{n}{k}}{ \binom{n}{np^{\prime }}}$ with $0<p^{\prime }<\frac{1}{2}$. First, we upper-bound $\frac{\sum\limits_{k=0}^{np^{\prime }-n^{1/3}}\binom{n}{k}}{ \binom{n}{np^{\prime }}}$ by \begin{equation*} \frac{\sum\limits_{k=0}^{np^{\prime }-n^{1/3}}2^{nh(\frac{k}{n})}}{\frac{1}{ n+1}2^{nh(p^{\prime })}}\leq \frac{(n+1)2^{nh(\frac{np^{\prime }-n^{1/3}}{n} )}}{\frac{1}{n+1}2^{nh(p^{\prime })}}=(n+1)^{2}2^{-O\left( n^{1/3}\right) }, \end{equation*} meaning that this part is negligible. Hence, we have \begin{eqnarray*} &&\frac{\sum\limits_{k=0}^{np^{\prime }}\binom{n}{k}}{\binom{n}{np^{\prime }} }=\frac{\sum\limits_{k=0}^{n^{1/3}}\binom{n}{np^{\prime }-k}}{\binom{n}{ np^{\prime }}} \\ &=&\sum\limits_{k=0}^{n^{1/3}}\frac{\left( np^{\prime }\right) !\left( nq^{\prime }\right) !}{\left( np^{\prime }-k\right) !\left( nq^{\prime }+k\right) !} \\ &=&1+\sum\limits_{k=1}^{n^{1/3}}\prod_{i=1}^{k}\frac{np^{\prime }-k+i}{ nq^{\prime }+i} \\ &=&\sum\limits_{k=0}^{n^{1/3}}\left( \frac{p^{\prime }}{q^{\prime }}\right) ^{k}\left( 1+O\left(\frac{n^{1/3}}{n}\right)\right) ^{n^{1/3}}=\frac{q^{\prime }}{ q^{\prime }-p^{\prime }}\;\left( 1+o(1)\right) . \end{eqnarray*} Hence, the average yield is \begin{equation} \left( 1-\frac{p}{q}\right) \sum\limits_{l=0}^{n^{1/2}}\left( \frac{p}{q} \right) ^{l}\sum_{j}p^{j}q^{n-j}\binom{n}{j}\left\{ \begin{array}{c} \log \binom{n}{j+l} \\ +\log \frac{1-\left( j+l\right) /n}{1-2\left( j+l\right) /n} \end{array} \right\} . \label{ave-opt-2} \end{equation} The second term of this is evaluated by using the following identity: \begin{equation} \left( 1-\frac{p}{q}\right) \sum\limits_{l=0}^{n^{1/2}}\left( \frac{p}{q} \right) ^{l}f\left( x+l/n\right) =f(x)+o(1), \label{sum-p/q} \end{equation} where $f$ is continuous and bounded by a polynomial function.
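Numerically, the identity (\ref{sum-p/q}) is easy to check directly; the following minimal sketch uses illustrative values of $p$, $q$, $x$ and an arbitrary smooth test function, none of which are taken from the text:

```python
import math

# Numerical check of the identity
#   (1 - p/q) * sum_{l=0}^{sqrt(n)} (p/q)^l * f(x + l/n) = f(x) + o(1)
# for p < q, where f is continuous and polynomially bounded.
p, q = 0.3, 0.7
r = p / q
n = 10**6
x = 0.25
f = lambda t: t**2 + 3 * t + 1        # arbitrary smooth test function

L = int(math.isqrt(n))                # upper summation limit n^{1/2}
lhs = (1 - r) * sum(r**l * f(x + l / n) for l in range(L + 1))

# The two o(1) sources are the truncated geometric tail (p/q)^{sqrt(n)+1}
# and the drift l/n <= n^{-1/2} inside the argument of f.
print(abs(lhs - f(x)))                # tiny for n = 10^6
```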
This identity holds because of the upper bound on the left-hand side, \begin{align*} & \left( 1-\frac{p}{q}\right) \sum\limits_{l=0}^{n^{1/2}}\left( \frac{p}{q} \right) ^{l}\max_{y:y\in \lbrack 0,n^{-1/2}]}f\left( x+y\right) \\ & \leq \max_{y:y\in \lbrack 0,n^{-1/2}]}f\left( x+y\right) , \end{align*} and the lower bound, \begin{align*} & \left( 1-\frac{p}{q}\right) \sum\limits_{l=0}^{n^{1/2}}\left( \frac{p}{q} \right) ^{l}\min_{y:y\in \lbrack 0,n^{-1/2}]}f\left( x+y\right) \\ & =\left( 1-\left( \frac{p}{q}\right) ^{n^{1/2}+1}\right) \min_{y:y\in \lbrack 0,n^{-1/2}]}f\left( x+y\right) . \end{align*} Hence, the second term of (\ref{ave-opt-2}), namely \begin{equation*} \left( 1-\frac{p}{q}\right) \sum\limits_{l=0}^{n^{1/2}}\left( \frac{p}{q} \right) ^{l}\log \frac{q-l/n}{1-2(p+l/n)}, \end{equation*} equals, due to Eq. (\ref{sum-p/q}), $\log \frac{q}{q-p}+o(1).$ The first term of (\ref{ave-opt-2}) is evaluated as follows. Due to Stirling's formula and Taylor's expansion, \begin{align*} & \log \binom{n}{j+l} \\ & =n\mathrm{h}(p)-\left( \log p+\log e\right) \left( j+l-np\right) \\ & -\left( \log q+\log e\right) \left( n-j-l-nq\right) \\ & +\frac{\log e}{2}\left( -\frac{(j+l)^{2}}{pn}-\frac{(n-j-l)^{2}}{qn} +n\right) +nR_{2}(\frac{\mathbf{n}}{n},\mathbf{p}) \\ & -\frac{\log n}{2} \\ & -\frac{1}{2}\left( \log 2\pi +\log \frac{j+l}{n}+\log \left( 1-\frac{j+l}{n }\right) \right) +nR_{1}\left( \mathbf{n}\right) , \end{align*} whose average over the binomial distribution $p^{j}q^{n-j}\binom{n}{j}$ is \begin{align*} & n\mathrm{h}(p)-(\log p-\log q)l \\ & +\frac{\log e}{2}\left( -\frac{1}{n}\frac{l^{2}}{pq}\right) -\frac{\log n}{ 2} \\ & -\frac{1}{2}\left( \log 2\pi e+\log \left( p+\frac{l}{n}\right) +\log \left( q-\frac{l}{n}\right) \right) \\ & +nR_{1}(\mathbf{n})+nR_{2}(\frac{\mathbf{n}}{n},\mathbf{p}). \end{align*} Due to Eq.
(\ref{sum-p/q}), multiplied by $(p/q)^{l}$ and summed over $l$, the first term is obtained as: \begin{align*} & n\mathrm{h}(p)-\frac{\log n}{2} \\ & -\frac{1}{2}\left( \log 2\pi e+\log p+\log q\right) \\ & -(\log p-\log q)\frac{p/q}{1-p/q} \\ & +nR_{1}(\mathbf{n})+nR_{2}\left( \frac{\mathbf{n}}{n},\mathbf{p}\right) \end{align*} We can prove that $nR_{1}(\mathbf{n})+nR_{2}\left( \frac{\mathbf{n}}{n},\mathbf{p} \right) $ is negligible in almost the same way as in Appendix \ref {sec:cal-ave-ben}. Altogether, the average yield is \begin{align*} & \mathrm{h}(p)-\frac{\log n}{2n}+ \\ & \frac{1}{n}\left\{ \begin{array}{c} -\frac{1}{2}\log 2\pi eqp \\ +\frac{p/q}{1-p/q}\log \frac{q}{p}+\log \frac{1}{1-p/q} \end{array} \right\} +o\left( \frac{1}{n}\right) . \end{align*} \section{$\varlimsup_{n\rightarrow \infty }\frac{1}{n}\mathrm{D}(Q_{C_{\ast }^{n}}^{\protect\psi }\Vert Q_{C_{\ast }^{n}}^{\protect\phi })$} \label{sec:cal-divergence}Let us define \begin{eqnarray*} \mathbf{p} &:=&\mathbf{p}^{\phi },\,\mathbf{q}:=\mathbf{p}^{\psi },\,\mathbf{l} :=\mathbf{n}+\mathbf{\delta }, \\ f\left( \mathbf{l}\right) &:=&\log \frac{\prod\limits_{i,j:i>j}(p_{i}-p_{j}) \sum\limits_{\pi \in S_{n}}\mathrm{sgn}(\pi )\prod\limits_{i}q_{i}^{l_{\pi (i)}}}{\prod\limits_{i,j:i>j}(q_{i}-q_{j})\sum\limits_{\pi \in S_{n}}\mathrm{sgn}(\pi )\prod\limits_{i}p_{i}^{l_{\pi (i)}}}, \end{eqnarray*} and our task is to compute \begin{eqnarray*} &&\mathrm{D}(Q_{C_{\ast }^{n}}^{\psi }\Vert Q_{C_{\ast }^{n}}^{\phi }) \\ &=&\sum_{\mathbf{n}}\frac{\dim \mathcal{V}_{\mathbf{n}}}{\prod \limits_{i,j:i>j}(q_{i}-q_{j})}\sum_{\pi \in S_{n}}\mathrm{sgn}(\pi )\prod\limits_{i}q_{i}^{l_{\pi (i)}}f\left( \mathbf{l}\right) . \end{eqnarray*} Due to the argument stated at the end of Appendix \ref{sec:diff-ave-yield}, in the first sum over $\pi \in S_{n}$ we can concentrate on the term $\pi = \mathrm{id}$, which is evaluated as follows.
\begin{eqnarray*} &&\sum_{\mathbf{n}}\frac{\dim \mathcal{V}_{\mathbf{n}}}{\prod \limits_{i,j:i>j}(q_{i}-q_{j})}\prod\limits_{i}q_{i}^{l_{i}}f\left( \mathbf{l }\right) \\ &=&\sum_{\mathbf{l}}\frac{1}{\prod\limits_{i,j:i>j}(q_{i}-q_{j})}\frac{ \left( n+\frac{d(d-1)}{2}\right) !}{\mathbf{l}!} \\ &&\times \frac{\prod\limits_{i,j:i>j}(l_{i}-l_{j})}{\prod\limits_{i=0}^{ \frac{d(d-1)}{2}}\left( n+\frac{d(d-1)}{2}-i\right) }\prod \limits_{i}q_{i}^{l_{i}}f\left( \mathbf{l}\right) \\ &=&\frac{\prod\limits_{i,j:i>j}(q_{i}-q_{j})}{\prod \limits_{i,j:i>j}(q_{i}-q_{j})}f\left( \left( n+\frac{d\left( d-1\right) }{2} \right) \mathbf{q}\right) +o\left( f\left( n\right) \right) \\ &=&f\left( \left( n+\frac{d\left( d-1\right) }{2}\right) \mathbf{q}\right) +o\left( f\left( n\right) \right) \end{eqnarray*} Here, the second equality was derived as follows. We first extended the region of $\mathbf{l}$ to $\left\{ \mathbf{l};\sum_{i}l_{i}=n+\frac{d\left( d-1\right) }{2}\right\} $, because this causes only an exponentially small difference due to the large deviation principle. Then, we applied the law of large numbers. Observe that \begin{eqnarray*} f\left( \mathbf{l}\right) &=&\log \frac{\prod\limits_{i,j:i>j}(p_{i}-p_{j})}{ \prod\limits_{i,j:i>j}(q_{i}-q_{j})}+\log \frac{\prod\limits_{i}q_{i}^{l_{i}} }{\prod\limits_{i}p_{i}^{l_{i}}} \\ &&+\log \frac{1+\sum\limits_{\substack{ \pi \in S_{n} \\ \pi \neq \mathrm{id } }}\mathrm{sgn}(\pi )\prod\limits_{i}q_{i}^{l_{\pi (i)}-l_{i}}}{ 1+\sum\limits _{\substack{ \pi \in S_{n} \\ \pi \neq \mathrm{id}}}\mathrm{ sgn}(\pi )\prod\limits_{i}p_{i}^{l_{\pi (i)}-l_{i}}} \\ &=&\log \frac{\prod\limits_{i}q_{i}^{l_{i}}}{\prod\limits_{i}p_{i}^{l_{i}}} +O(1), \end{eqnarray*} where the second equality holds due to the inequality \begin{equation*} 1\leq 1+\sum\limits_{\substack{ \pi \in S_{n}, \\ \pi \neq \mathrm{id}}} \mathrm{sgn}(\pi )\prod\limits_{i}q_{i}^{l_{\pi (i)}-l_{i}}\leq 1+d!.
\end{equation*} Altogether, we obtain \begin{eqnarray*} &&\mathrm{D}(Q_{C_{\ast }^{n}}^{\psi }\Vert Q_{C_{\ast }^{n}}^{\phi }) \\ &=&\left( n+\frac{d(d-1)}{2}\right) \log \frac{\prod\limits_{i}q_{i}^{q_{i}} }{\prod\limits_{i}p_{i}^{p_{i}}}+O(1) \\ &=&n\mathrm{D}(\mathbf{q}\Vert \mathbf{p})+O(1). \end{eqnarray*} \end{document}
\begin{document} \title{ Tomography of one and two qubit states and factorisation of the Wigner distribution in prime power dimensions.} {\it We study different techniques that allow us to gain complete knowledge about an unknown quantum state, that is, to perform full tomography of this state. We focus on two apparently simple cases, full tomography of one and two qubit systems. We analyze and compare those techniques according to two figures of merit. Our first criterion is the minimisation of the redundancy of the data acquired during the tomographic process. In the case of two-qubit tomography, we also analyze this process from the point of view of factorisability, that is, we analyze the possibility of realising the tomographic process through local operations and classical communication between local observers. This brings us naturally to study the possibility of factorising the (discrete) Wigner distribution of a composite system into the product of local Wigner distributions. The discrete Heisenberg-Weyl group is an essential ingredient of our approach. Possible extensions of our results to higher dimensions are discussed in the last section and in the conclusions. } \section{Introduction} The estimation of an unknown state is one of the important problems in quantum information and quantum computation \cite{Na1,N1}. Traditionally, the estimation of the $d^2-1$ parameters that characterize the density matrix of a single qu$d$it consists of realising $d+1$ independent von Neumann measurements on the system.
For instance, when the system is a spin 1/2 particle, three successive Stern-Gerlach measurements performed along orthogonal directions make it possible to infer the values of the 3 Bloch parameters $p_{x},p_{y},$ and $p_{z}$ defined by \begin{equation} \left\{ \begin{array}{l} \left\langle \sigma _{x}\right\rangle =p_{x}=\gamma \sin \theta \cos \varphi \\ \left\langle \sigma _{y}\right\rangle =p_{y}=\gamma \sin \theta \sin \varphi \\ \left\langle \sigma _{z}\right\rangle =p_{z}=\gamma \cos \theta \end{array} \right. \end{equation} Once we know the value of these parameters, we are able to determine unambiguously the density matrix, making use of the identity \begin{equation} \rho (\gamma ,\theta ,\varphi )=\frac{1}{2}(I+p_{x}\sigma _{x}+p_{y}\sigma _{y}+p_{z}\sigma _{z})=\frac{1}{2}(I+ \overrightarrow{\gamma }.\overrightarrow{\sigma }) \end{equation} When the qubit system is not a spin 1/2 particle but consists of the polarisation of a photon, a similar result can be achieved by measuring its degree of polarisation in three independent polarisation bases, for instance with polarising beamsplitters, which leads to the Stokes representation of the state of polarisation of the (identically prepared) photons. Tomography through von Neumann measurements presents an inherent drawback: in order to estimate the $d^2-1$ independent parameters of the density matrix, $d+1$ measurements must be realised, which means that $d^2+d$ histograms of the counting rate are established, one of them being sacrificed after each of the $d+1$ measurements in order to normalize the corresponding probability distribution. From this point of view, the number of counting rates is higher than the number of parameters that characterize the density matrix, which is a form of redundancy inherent to tomography through von Neumann measurements. Besides, it is known that a more general class of measurements than the von Neumann measurements exists.
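Before turning to this more general class, the Bloch reconstruction above can be illustrated in a few lines of code; the Bloch-sphere coordinates below are arbitrary sample values, not taken from the text:

```python
import numpy as np

# Reconstruct a qubit density matrix from its three Bloch parameters,
#   rho = (1/2)(I + p_x sigma_x + p_y sigma_y + p_z sigma_z),
# and check that the three Pauli averages Tr(rho sigma_i) return them.
I  = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

gamma, theta, phi = 0.8, 1.1, 0.3     # sample Bloch-sphere coordinates
px = gamma * np.sin(theta) * np.cos(phi)
py = gamma * np.sin(theta) * np.sin(phi)
pz = gamma * np.cos(theta)

rho = 0.5 * (I + px * sx + py * sy + pz * sz)

# The three "Stern-Gerlach" expectation values recover the Bloch vector.
for op, val in [(sx, px), (sy, py), (sz, pz)]:
    assert abs(np.trace(rho @ op).real - val) < 1e-12
```

Since $\gamma\leq 1$, the reconstructed matrix is automatically a valid state (unit trace, non-negative eigenvalues).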
This class is represented by the Positive-Operator-Valued Measure (POVM) measurements \cite{QCQ}, of which a restricted subset, the Projection-Valued Measure (PVM) measurements, corresponds to the von Neumann measurements. The most general POVM can be achieved by coupling the system A to an ancilla or assistant B and performing a von Neumann measurement on the full system. When both the system and its assistant are qu$d$it systems, the full system belongs to a $d^2$ dimensional Hilbert space, which makes it possible to measure $d^2$ probabilities during a von Neumann measurement performed on the full system. As always, one of the counting rates must be sacrificed in order to normalise the probability distribution, so that we are left with $d^2-1$ parameters. When the coupling to the assistant and the von Neumann measurement are well-chosen, we are able in principle to infer the value of the density matrix of the initial qu$d$it system from the knowledge of those $d^2-1$ parameters, in which case the POVM is said to be Informationally Complete (IC). Obviously, this approach is optimal in the sense that it minimizes the number of counting rates (thus of independent detection processes) that must be realised during the tomographic process. As was shown in \cite{WoottersF}, the PVM approach to tomography can be further optimised regarding redundancy. Optimality according to this particular figure of merit is achieved when the $d+1$ bases in which the PVM measurements are performed are ``maximally independent'' or ``minimally overlapping'', that is, when they are mutually unbiased (two orthonormal bases of a $d$ dimensional Hilbert space are said to be mutually unbiased bases (MUB's) if, whenever we choose one state in the first basis and a second state in the second basis, the modulus squared of their in-product is equal to $1/d$).
It is well-known that, when the dimension of the Hilbert space is a prime power, there exists a set of $d+1$ mutually unbiased bases \cite{WoottersF,ivanovic,india}. This is the case for instance with the bases that diagonalize the generalised Pauli operators \cite{india,TD05}. Those unitary operators form a group which is a discrete counterpart of the Heisenberg-Weyl group, the group of displacement operators \cite{milburn}, which presents numerous applications in quantum optics and in signal theory \cite{vourdas}. A discrete version of the Heisenberg-Weyl group \cite{weyl} also plays an essential role \cite{caves} in the derivation of so-called covariant symmetric-informationally-complete (SIC) POVM's. Such POVM's are intimately associated to a set of $d^2$ minimally overlapping projectors onto pure qu$d$it states (the modulus of their in-product is now equal to $1/\sqrt{d+1}$). We shall compare the respective merits of PVM and POVM tomographic processes in the cases of one and two qubit systems and focus on the factorisability of the latter. This question leads us to study the factorisability of discrete Wigner quasidistributions, which appear to be a very natural tool in the context of one qubit and (factorisable) two qubit tomography. Although in the continuous case the factorisation property is de facto fulfilled, in discrete dimensions this is not a trivial question at all. This is why this question recently raised a lot of interest (Refs.~\cite{Wootters2} to \cite{Vourdas2,Wootters87,Rubin1}). Generalisations to higher dimensions are discussed in the last section and in the conclusions.
\section{\label{section1}Tomography of a (single) qubit system.} \subsection{\label{section1.1}Optimal PVM approach.} The aforementioned traditional approach to tomography for a spin 1/2 particle, which consists of three successive Stern-Gerlach measurements performed along orthogonal directions $X$, $Y$ and $Z$, is optimal among PVM tomographic processes because the 3 corresponding bases are MUB's \cite{ivanovic,WoottersF,JSh60}. In this sense, the traditional approach to spin/polarisation tomography is optimal. Actually, the $\sigma$ operators plus the identity constitute a discrete counterpart of the displacement operators. Formally, they can be defined as follows: $\sigma_{i,j}=\sqrt{(-)^{i.j}}\sum_{k=0}^1(-)^{k.j}\ket{k+i\ (\mathrm{mod}\ 2)}\bra{k}$, where the labels $i,j,k$ can take the values 0 or 1. It is easy to check that, up to a global sign that we presently keep undetermined, $\sigma_{0,0}=Id., \sigma_{1,0}=\sigma_X, \sigma_{1,1}=\sigma_Y$ and $\sigma_{0,1}=\sigma_Z.$ This set is orthonormal with respect to the trace norm: ${1 \over d}Tr.\sigma^{\dagger}_{i}\sigma_{j}=\delta_{i,j}$ (here $d=2$) and, like the displacement operators in the continuous case, its elements constitute a complete basis of the set of linear operators of the Hilbert space on which they act. The Bloch parameters are seen to be in one-to-one correspondence with the qubit Weyl distribution (defined by $w_{i,j}=(1/2)Tr.(\rho.\sigma_{i,j})$), which consists, in analogy with its continuous counterpart \cite{weyl}, of the amplitudes of the development of the density matrix in terms of the (qubit) displacement operators \cite{vourdas}: $w_{0,0}=1/2$, $w_{1,0}=p_{x}/2$, $w_{1,1}=p_{y}/2$, $w_{0,1}=p_{z}/2$. These properties can be generalised to higher dimensions \cite{TD05}, and, when the dimension is a prime power, each PVM measurement in one of the $d+1$ MUB's leads to the estimation of a set of $d-1$ parameters of the Weyl distribution.
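The qubit relations above are easy to verify numerically. The sketch below fixes the global sign left undetermined in the text by choosing the branch $\sqrt{-1}=+i$; the sample state $\rho=|0\rangle\langle 0|$ is an illustrative choice:

```python
import numpy as np

# Qubit displacement operators
#   sigma_{i,j} = sqrt((-1)^{ij}) sum_k (-1)^{kj} |k+i mod 2><k|,
# with the branch sqrt(-1) = +i for the global sign.
def sigma(i, j):
    s = np.zeros((2, 2), dtype=complex)
    for k in range(2):
        s[(k + i) % 2, k] = (-1) ** (k * j)
    return (1j if i * j else 1) * s

pauli = {(0, 0): np.eye(2),
         (1, 0): np.array([[0, 1], [1, 0]]),             # sigma_X
         (1, 1): np.array([[0, -1j], [1j, 0]]),          # sigma_Y
         (0, 1): np.array([[1, 0], [0, -1]])}            # sigma_Z

for (i, j), P in pauli.items():
    assert np.allclose(sigma(i, j), P)                   # recovers the Paulis

# Trace orthonormality: (1/d) Tr sigma^dag_{i,j} sigma_{k,l} = delta
d = 2
for a in pauli:
    for b in pauli:
        ip = np.trace(sigma(*a).conj().T @ sigma(*b)) / d
        assert abs(ip - (1 if a == b else 0)) < 1e-12

# Weyl distribution w_{i,j} = (1/2) Tr(rho sigma_{i,j}) for rho = |0><0|:
rho = np.diag([1.0, 0.0]).astype(complex)
w = {(i, j): np.trace(rho @ sigma(i, j)) / 2 for (i, j) in pauli}
# here w_{0,0} = 1/2, w_{0,1} = p_z/2 = 1/2, w_{1,0} = w_{1,1} = 0
```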
The measurements performed in different MUB's are independent, and the $d+1$ subsets of $d-1$ amplitudes obtained in this way form a partition of (the set of $d^2-1$ independent parameters of) the Weyl distribution and provide complete tomographic information about the unknown qu$d$it state of the system. \subsection{\label{section1.2}Optimal POVM approach.} It has been shown in the past, on the basis of different theoretical arguments \cite{caves,JRe04,Alla04}, that the optimal IC POVM is symmetric in the sense that it is in one-to-one correspondence with a tetrahedron on the Bloch sphere. Intuitively, such tetrahedrons homogenize and minimize the informational overlap or redundancy between the four histograms collected during the POVM measurement. Some such tetrahedrons can be shown to be invariant under the action of the Heisenberg-Weyl group, which corresponds to so-called Covariant Symmetric Informationally Complete (SIC) POVM's \cite{caves}. Let us now briefly describe how such a POVM measurement could be realised experimentally (we were recently able \cite{durtdu} to implement this POVM measurement on a two qubit NMR quantum computer \cite{ekert}). Let us suppose that we wish to estimate the three parameters $\gamma ,\theta ,$ and $\varphi $ necessary in order to describe the unknown state of the qubit $a.$ An ancilla, qubit $b$, is added to this device to form an extended system. This device is initially prepared in the state: $\rho _{in}=\rho _{a}\otimes \left\vert 0\right\rangle \left\langle 0\right\vert _{b} $.
This state differs according to the different input qubits $a.$ In virtue of the \textit{Stinespring-Kraus} theorem \cite{MZ04}, quantum operations are related to unitary transformations, a property that we shall now exploit by letting the entire system evolve under the unitary evolution $U$ \begin{equation} U=\frac{1}{2}\left( \begin{array}{cccc} e^{i\pi /4}\alpha & \alpha & \beta & -e^{i\pi /4}\beta \\ \alpha & -e^{-i\pi /4}\alpha & -e^{-i\pi /4}\beta & -\beta \\ \beta & -e^{i\pi /4}\beta & e^{i\pi /4}\alpha & \alpha \\ -e^{-i\pi /4}\beta & -\beta & \alpha & -e^{-i\pi /4}\alpha \end{array} \right) \label{U} \end{equation} where $\alpha=\sqrt{1+1/\sqrt{3}},\beta=\sqrt{1-1/\sqrt{3}}$. By measuring the full system in a basis that consists of the product of the $a$ and $b$ qubit computational bases, we are able in principle to estimate the four parameters listed on the diagonal of the following matrix: \begin{equation} \label{Th} \rho _{out}^{Th}=\left( \begin{array}{llll} P_{00} & & & \\ & P_{01} & & \\ & & P_{10} & \\ & & & P_{11} \end{array} \right) \end{equation} Such a POVM measurement is informationally complete due to the fact that $P_{00},P_{01},P_{10},P_{11}$ are in one-to-one correspondence with the Bloch parameters $ p_{x},p_{y},$ and $p_{z}$, as shown by the identity \begin{equation} \left\{ \begin{array}{l} P_{00}=\frac{1}{4}\left[ 1+\frac{1}{\sqrt{3}}(p_{x}+p_{y}+p_{z})\right] \\ P_{01}=\frac{1}{4}\left[ 1+\frac{1}{\sqrt{3}}(-p_{x}-p_{y}+p_{z})\right] \\ P_{10}=\frac{1}{4}\left[ 1+\frac{1}{\sqrt{3}}(p_{x}-p_{y}-p_{z})\right] \\ P_{11}=\frac{1}{4}\left[ 1+\frac{1}{\sqrt{3}}(-p_{x}+p_{y}-p_{z})\right] \end{array} \right.
\end{equation} Actually, $P_{00}$ is the average value of the operator $({1\over 2})(\sigma_{0,0}+ ({1\over \sqrt 3})(\sigma_{1,0}+\sigma_{0,1}+\sigma_{1,1}))$, which is the projector onto the pure state $\ket{\phi} \bra{\phi}$ with $\ket{\phi}=\alpha\ket{0}+\beta^*\ket{1}$ and $\alpha=\sqrt{1+{1\over \sqrt 3}} $, $\beta^*=e^{{i \pi\over 4}}\sqrt{1-{1\over \sqrt 3}} $. Under the action of the Pauli group it transforms into a projector onto one of the four pure states $\sigma_{i,j}\ket{\phi}$; $i,j:0,1$: $\sigma_{i,j}\ket{\phi}\bra{\phi} \sigma_{i,j}=({1\over 2})((1-{1\over \sqrt 3})\sigma_{0,0}+ ({1\over \sqrt 3})(\sum_{k,l=0}^1(-)^{i.l-j.k}\sigma_{k,l}))$. The signs $(-)^{i.l-j.k}$ reflect the (anti)commutation properties of the Pauli group. So, the four parameters $P_{ij}$ are the average values of projectors onto four pure states that are ``Pauli displaced'' relative to each other. The in-product between them is equal, in modulus, to $1/\sqrt 3=1/\sqrt{d+1}$, with $d=2$. This shows that this POVM is symmetric in the sense that it is in one-to-one correspondence with a tetrahedron on the Bloch sphere; as this tetrahedron is invariant under the action of the Heisenberg-Weyl group, it is a Covariant Symmetric Informationally Complete (SIC) POVM \cite{caves}. One can show \cite{caves,JRe04,Alla04} that such tetrahedrons minimize the informational redundancy between the four collected histograms due to the fact that their angular opening is maximal. It is worth noting that this POVM possesses another very appealing property, which is also true in the qutrit case but not in dimensions strictly higher than 3 \cite{gross}: the qubit Covariant SIC POVM is a direct realisation (up to an additive constant and a global normalisation constant) of the qubit Wigner distribution of the unknown qubit $a$.
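The one-to-one correspondence between the four counting rates and the Bloch vector, i.e., the informational completeness of this POVM, can be checked numerically; the Bloch vector below is an arbitrary sample:

```python
import numpy as np

# The four tetrahedral probabilities as a linear function of the Bloch
# vector: P = (1/4)(1 + M p / sqrt(3)), with the sign pattern of the text.
M = np.array([[ 1,  1,  1],
              [-1, -1,  1],
              [ 1, -1, -1],
              [-1,  1, -1]], dtype=float)

p = np.array([0.2, -0.4, 0.5])        # sample Bloch vector, |p| <= 1
P = 0.25 * (1 + M @ p / np.sqrt(3))

assert abs(P.sum() - 1) < 1e-12       # a genuine probability distribution
assert (P >= 0).all()

# Informational completeness: M has full column rank, so the Bloch
# vector is recovered from the four counting rates.
p_rec = np.sqrt(3) * np.linalg.lstsq(M, 4 * P - 1, rcond=None)[0]
assert np.allclose(p_rec, p)
```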
Indeed, this distribution $W$ is the symplectic Fourier transform of the Weyl distribution $w$ (already defined by the relation $w_{i,j}=(1/2)Tr.(\rho.\sigma_{i,j})$), which is, in the qubit case, equivalent (up to a relabelling of the indices) to its double qubit-Hadamard or double qubit-Fourier transform: \begin{equation}{W_{k,l}=(1/2)\sum_{i,j=0}^1(-)^{i.l-j.k}w_{i,j}=((1/\sqrt 2) \sum_{i=0}^1(-)^{i.l})((1/\sqrt 2)\sum_{j=0}^1(-)^{-j.k})w_{i,j}.}\label{wign}\end{equation} This expression, originally derived by Wootters in a somewhat different form \cite{Wootters87}, is a special case of an expression for a Wigner distribution derived by us in prime power dimensions \cite{durtwig}. One can check that $P_{k,l}=(1/\sqrt 3)W_{k,l}+(1-1/\sqrt 3)/4.$ The symplectic Fourier transform is invertible, so that once we know the Wigner distribution, we can directly infer the Weyl distribution or, equivalently, the Bloch vector of any a priori unknown quantum state. It is worth noting that in order to measure the coefficients $P_{k,l}$ ($W_{k,l}$) we must carry out a measurement on the full system (the unknown qubit plus the ancilla), which is the essence and novelty of entanglement-assisted quantum tomography \cite{durtdu,qlah}. It has been shown that the discrete qubit Wigner distribution directly generalises its continuous counterpart \cite{wigner} in the sense that it provides information about the localisation of the qubit system in a discrete $2\times 2$ phase space \cite{Wootters87}. For instance, the Wigner distribution of the first state of the computational basis (spin up along $Z$) is equal to $W_{k,l}(\ket{0})=(1/2)\delta_{k,0}$, which corresponds to a state located at the ``position'' spin up (along $Z$), and homogeneously spread in ``impulsion'' (in spin along $X$), in accordance with uncertainty relations (see Ref. \cite{Rubin1} for an enlightening discussion of discrete uncertainty relations in connection with the Wigner distribution).
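The transform (\ref{wign}) and the relation between $W_{k,l}$ and the counting rates $P_{k,l}$ can be verified in a few lines for the state $\ket{0}$ (a sketch; the Weyl amplitudes of $\ket{0}$ are $w_{0,0}=w_{0,1}=1/2$ and $w_{1,0}=w_{1,1}=0$, as computed above):

```python
import numpy as np

# Discrete qubit Wigner distribution as the symplectic Fourier transform
#   W_{k,l} = (1/2) sum_{i,j} (-1)^{i*l - j*k} w_{i,j}
# of the Weyl distribution w_{i,j} = (1/2) Tr(rho sigma_{i,j}).
def wigner(w):
    W = np.zeros((2, 2))
    for k in range(2):
        for l in range(2):
            W[k, l] = 0.5 * sum((-1) ** (i * l - j * k) * w[i, j]
                                for i in range(2) for j in range(2))
    return W

# Weyl distribution of rho = |0><0|: rows are i, columns are j.
w0 = np.array([[0.5, 0.5],
               [0.0, 0.0]])
W0 = wigner(w0)
assert np.allclose(W0, [[0.5, 0.5], [0.0, 0.0]])   # W_{k,l} = (1/2) delta_{k,0}

# The SIC counting rates P_{k,l} = (1/sqrt 3) W_{k,l} + (1 - 1/sqrt 3)/4
# again sum to one.
P = W0 / np.sqrt(3) + (1 - 1 / np.sqrt(3)) / 4
assert abs(P.sum() - 1) < 1e-12
```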
Similarly, the Wigner distribution of the first state of the complementary basis (spin up along $X$) is equal to $W_{k,l}((1/\sqrt 2)(\ket{0}+\ket{1}))=(1/2)\delta_{l,0}$. We arrived at our formulation of the Wigner distribution \cite{durtwig} by deriving a solution of the Mean King's problem \cite{LVaid87,YA01,aravind}, which is not astonishing. Indeed, this problem consists of ascertaining the value of the spin of a spinor prepared at random in the $X$, $Y$ and $Z$ bases. The connection between the Wigner distribution and the Mean King problem is the following. Its solution consists of entangling the qubit with another qubit and measuring the full system in a well-chosen quartit basis in such a way that each detector fires with a probability equal to the Wigner distribution of the first qubit. Therefore, knowing to which basis the initial states belong and knowing which of the four ``Wigner'' detectors fires, we are able to infer the value of their spin component \cite{LVaid87,durtwig}. For instance, when the spin is prepared in the $Z$ basis and the detector corresponding to $W_{1,i}$ ($i=0,1$) fires, we can infer that the initial spin was in the state $\ket{1}$ (spin down along $Z$). \section{\label{section2}Tomography of a two qubit system.} \subsection{\label{sect}Optimal PVM approach.} The results relative to the qubit case can be generalised to multi-qu$d$it systems whenever $d$ is a prime power \cite{WoottersF}, that is, to the case $d=p^m$ with $p$ prime and $m$ a positive integer.
In particular, in the present case ($p=m=2$), we can define generalised displacement operators according to the definition given in \cite{TD05} (see also Ref.\cite{DurtNagler}): \begin{equation} V^j_i= \sum_{k=0}^{d-1} \gamma_{G}^{(( k\oplus_{G} i)\odot_{G} j)}\ket{ k\oplus_{G} i}\bra{ k},\quad i,j:0,\ldots,d-1, \label{defV0} \end{equation} where $\gamma_{G}$ is the $p$th root of unity $\gamma_{G} =e^{ i .2\pi/p}$, and the Galois addition $\oplus_{G}$ and the Galois multiplication $\odot_{G}$ are defined by the tables given in the appendix. The Galois addition is by definition equivalent to addition modulo $p$ componentwise. Concretely, this means that if we write the labels (0,1,2,3) in binary form (0=(0,0), 1=(0,1), 2=(1,0), 3=(1,1)), the Galois addition is defined as follows: if $i=(i_{a},i_{b})$ and $j=(j_{a},j_{b})$, then $i\oplus_{G} j=(i_{a}\oplus_{mod 2}j_{a},i_{b}\oplus_{mod 2}j_{b})$. The Galois multiplication is distributive with respect to the Galois addition; moreover, it is commutative, and there is no divisor of 0 (the neutral element for the addition) except 0 itself. The algebraic structure defined by these requirements is a field; in the present case, the multiplication table is uniquely defined by these requirements and by the definition of the addition: $0\odot_{G}x=x\odot_{G}0=0$; $1\odot_{G}x=x\odot_{G}1=x$; $2\odot_{G}3=3\odot_{G}2=1$ and $2\odot_{G}2=3$. We can write the sixteen $4\times 4$ displacement operators defined by Eqn.(\ref{defV0}) as the identity plus 5 sets of 3 operators that are defined as follows. The first set consists of the operators $V^j_{i}$ with $i=0$ and $j=1,2,3$, while the other sets consist of the operators $V^{(l-1)\odot_{G}i}_{i}$ with $i=1,2,3,$ and correspond to the respective choices $l=1,2,3,4$.
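The $GF(4)$ operations used in Eqn.(\ref{defV0}) can be generated and checked programmatically. In the sketch below, elements 0..3 are identified with their binary expansions as in the text, and multiplication is implemented as polynomial multiplication over $GF(2)$ reduced modulo $x^2+x+1$ (the standard irreducible polynomial for $GF(4)$, an assumption consistent with the table entries quoted above):

```python
# GF(4) arithmetic with elements 0..3 written in binary.
def gadd(a, b):
    """Galois addition: componentwise addition mod 2, i.e. bitwise XOR."""
    return a ^ b

def gmul(a, b):
    """Carry-less polynomial product reduced modulo x^2 + x + 1."""
    r = 0
    for bit in range(2):              # multiply: shift-and-XOR
        if (b >> bit) & 1:
            r ^= a << bit
    for bit in (3, 2):                # reduce: x^2 = x + 1
        if (r >> bit) & 1:
            r ^= 0b111 << (bit - 2)   # subtract (x^2+x+1) * x^{bit-2}
    return r

# The table entries quoted in the text:
assert gmul(2, 3) == 1 and gmul(2, 2) == 3
assert all(gmul(1, x) == x and gmul(0, x) == 0 for x in range(4))
# No zero divisors, and distributivity over the Galois addition:
assert all(gmul(a, b) != 0 for a in range(1, 4) for b in range(1, 4))
assert all(gmul(a, gadd(b, c)) == gadd(gmul(a, b), gmul(a, c))
           for a in range(4) for b in range(4) for c in range(4))
```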
By direct computation, one can check that we obtain in this way the five sets $\{ \sigma^a_{z} ,\sigma^b_{z} ,\sigma^a_{z} .\sigma^b_{z}\}$, $\{ \sigma^a_{x} ,\sigma^b_{x} ,\sigma^a_{x} .\sigma^b_{x}\}$, $\{ \sigma^a_{y} ,\sigma^b_{y} ,\sigma^a_{y} .\sigma^b_{y}\}$, $\{ \sigma^a_{x}.\sigma^b_{y} ,\sigma^a_{y}.\sigma^b_{z} ,\sigma^a_{z} .\sigma^b_{x}\}$ and $\{ \sigma^a_{x}.\sigma^b_{z} ,\sigma^a_{y}.\sigma^b_{x} ,\sigma^a_{z} . \sigma^b_{y}\}$ (up to irrelevant global phases). The operators that belong to each of these sets commute with each other; moreover, it is possible to multiply each of them by a well-chosen phase in such a way that the 3 operators of each set form (together with the identity operator) a commutative group. The bases that simultaneously diagonalize all the operators of such sets are unambiguously defined and are mutually unbiased \cite{india,TD05} relative to each other. All the properties that we described in the qubit case are still valid in the present case: by performing von Neumann measurements in those 5 MUB's it is possible to estimate 15 parameters (5 times (4-1) probabilities) that are in one-to-one correspondence with the Weyl distribution, or equivalently with the coefficients of the density matrix of the system. The problem is that only the first three bases are factorisable (the common eigenbases of the two last sets are actually maximally entangled \cite{india,TD05,sanchez}). This is not astonishing, because by taking products of mutually unbiased qubit bases we can construct at most 3 MUB's. Of course, it is still possible to obtain full tomographic information about the system by measuring each qubit component in the three MUB's (along $X$, $Y$ and $Z$), due to the fact that the displacement operators are factorisable into a product of local displacement operators (a very general property, also valid in dimension $p^m$ with $p$ an arbitrary prime and $m$ an arbitrary positive integer \cite{TD05}).
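The statement that products of single-qubit MUB's are again mutually unbiased, with overlap $1/d=1/4$, is easy to confirm numerically (a sketch using the eigenbases of $\sigma_Z$, $\sigma_X$, $\sigma_Y$ as the three local MUB's):

```python
import numpy as np

# The three single-qubit MUB's, stored as matrices whose columns are the
# basis states: eigenbases of sigma_Z, sigma_X and sigma_Y.
s = 1 / np.sqrt(2)
mubs = [np.eye(2, dtype=complex),                               # Z basis
        np.array([[s, s], [s, -s]], dtype=complex),             # X basis
        np.array([[s, s], [1j * s, -1j * s]], dtype=complex)]   # Y basis

# Any two distinct local bases are mutually unbiased: |<a|b>|^2 = 1/2.
for b1 in range(3):
    for b2 in range(3):
        if b1 != b2:
            G = np.abs(mubs[b1].conj().T @ mubs[b2]) ** 2
            assert np.allclose(G, 0.5)

# Tensor products of distinct local MUB's give two-qubit bases with
# overlap 1/d = 1/4 -- but only 3 such factorisable MUB's exist.
A = np.kron(mubs[0], mubs[0])
B = np.kron(mubs[1], mubs[1])
G = np.abs(A.conj().T @ B) ** 2
assert np.allclose(G, 0.25)
```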
The problem is that this requires establishing $3^2.2^2=36$ count rates in order to estimate 15(+1) parameters, a situation which is far from optimal. To conclude, it is clearly impossible to reconcile optimality and factorisability in the two qubit case (as well as in the two qu$d$it case, when $d=(p^m)^2$), because factorisable MUB's are necessarily products of local MUB's ($\sqrt{d^2}=\sqrt{d}.\sqrt{d}$) and, by taking products of mutually unbiased bases of the $p^m$ dimensional Hilbert spaces associated to the two components of a bipartite system of dimension $d=(p^m)^2$, we can construct at most $p^m+1$ (factorisable) MUB's, which, for all prime power dimensions, is strictly smaller than $d+1=(p^m)^2+1$. \subsection{\label{section2.2}Optimal POVM approach.} As in the PVM case, it is easy to show on the basis of a simple argument that it is impossible to reconcile optimality and factorisability of the POVM or entanglement-assisted tomography in the two qubit case (as well as in the two qu$d$it case, when $d=(p^m)^2$). The reason is that when $d=d_{1}.d_{2}$ and a POVM is split into a product of two SIC POVM's, which means that we couple the $d_{1}$ ($d_{2}$) dimensional subsystem to a $d_{1}$ ($d_{2}$) dimensional assistant, we find $d^2=d_{1}^2.d_{2}^2$ projectors onto factorisable pure states of which the in-products are most often equal in modulus to $(1/\sqrt{d_{1}+1}).(1/\sqrt{d_{2}+1})$, but not always; sometimes this in-product is equal to $(1/\sqrt{d_{1}+1})$ or $(1/\sqrt{d_{2}+1})$, so that what we get is not a SIC POVM, but only an IC POVM which may not be optimal.
Nevertheless, the number of counting rates necessary in order to realize a tomographic process by a factorisable POVM measurement is optimal and equal to $d^2=d_{1}^2.d_{2}^2$, so that this technique can be considered as a good compromise: not fully optimal from the point of view of redundancy, but at least factorisable, which is very appealing regarding the experimental realisability of the tomographic process in the case of separated subsystems (for instance in the case of two separated photons entangled in polarisation, a common situation in the laboratory). In the case of two qubits, a factorisable SIC POVM is nearly equivalent (actually, it is equivalent up to a simple renormalisation of the counting rates) to a direct measurement of the average values of the 16 products of the local (qubit) Wigner operators defined in a previous section (Eqn.(\ref{wign})). As we are interested in the question of the factorisability of the tomographic process, it is very natural in the present context to ask the following question: is the product of the two Wigner distributions of the subsystems of a bipartite system equivalent to the Wigner distribution of the full system? We shall provide certain answers to this question in the following section. \section{\label{section3}Factorisability of the discrete Wigner distribution.} \subsection{\label{sectionwig}Candidates for the Wigner distribution.} Before we discuss their factorisability, it is necessary to provide an operational definition of the discrete Wigner distribution for a finite-state system. We derived such a definition in Ref.\cite{durtwig}, where we showed that the recipe for constructing a Wigner distribution associated to a qu$d$it system, with $d=p^m$, is the following: i) Let us split the set of $d^2$ $d\times d$ displacement operators defined by Eqn.(\ref{defV0}) into the identity plus $d+1$ sets of $d-1$ operators that are defined as follows.
The first set consists of the operators $V^j_{i}$ with $i=0$ and $j=1,...,d-1$, while the other sets consist of the operators $V^{(i-1)\odot_{G}l}_{l}$ with $l=1,...,d-1$, one set for each choice $i=1,...,d$. In the qubit case, each set corresponds to one of the Pauli operators. In the two qubit case, the list of these sets was explicitly given in Section \ref{sect}. It can be shown \cite{Durtmutu} that all the operators from a same set commute with each other. ii) Let us multiply each operator of a set by a well-chosen phase in such a way that the set of ``renormalised'' operators together with the identity forms a finite group (with $d$ elements). It is shown in Ref.\cite{TD05} that there are, for each of the $d+1$ sets, $d$ possible choices of phases that satisfy this constraint. Moreover, these choices are equivalent to simple relabellings (in fact translations, or Galois-additive shifts) of the indices of the states of the MUB in which the operators of the group are simultaneously diagonal. In the same reference two possible choices are explicitly given, corresponding to the odd and even dimensional cases: In the odd case, we showed that one among the possible phase choices led to the following definition of the renormalised displacement operators associated to the $i$th group (denoted $U^i_{l}$, with $i=1,...,d$, $l=0,...,d-1$): \begin{equation} \label{elegant} U^i_l= (\gamma_{G}^{\ominus_{G}((i-1)\odot_{G} l\odot_{G} l)/_{G}2})V_{l}^{(i-1)\odot_{G} l},\end{equation} where $/_{G}$ represents the inverse of the field (Galois) multiplication and $2=1\oplus_{G}1$. Actually, this choice is particularly attractive and elegant for several reasons, and is uniquely defined once we know the operation tables of the field with $p^m$ elements. We shall show in a subsequent section that it helps us to answer the question of the factorisation of the Wigner function for bipartite systems in odd prime power dimensions in the affirmative.
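For $d=3$ (where the Galois operations reduce to arithmetic modulo 3 and $\gamma_{G}=e^{2i\pi/3}$), the group property of the renormalised displacements of Eqn.~(\ref{elegant}) can be checked numerically. The sketch below assumes the standard form $V^j_i=\sum_k\gamma_{G}^{(k\oplus_{G}i)\odot_{G}j}\ket{k\oplus_{G}i}\bra{k}$ implied by the trace computation of Appendix 1:

```python
import numpy as np

d = 3                                # prime d: Galois operations = arithmetic mod d
g = np.exp(2j * np.pi / d)           # gamma_G
inv2 = 2                             # inverse of 2 in GF(3)

def V(j, i):
    # V^j_i = sum_k gamma^{(k+i) j} |k+i><k|  (indices mod d)
    M = np.zeros((d, d), dtype=complex)
    for k in range(d):
        M[(k + i) % d, k] = g ** (((k + i) * j) % d)
    return M

def U(i, l):
    # renormalised displacement of Eqn (elegant): gamma^{-((i-1) l l)/2} V^{(i-1)l}_l
    return g ** ((-(i - 1) * l * l * inv2) % d) * V(((i - 1) * l) % d, l)

for i in range(1, d + 1):
    for l in range(d):
        # adjoint relation: (U^i_l)^dagger = U^i_{-l}
        assert np.allclose(U(i, l).conj().T, U(i, (-l) % d))
        for lp in range(d):
            # closure: each set {U^i_l, l = 0,...,d-1} forms a group of d elements
            assert np.allclose(U(i, l) @ U(i, lp), U(i, (l + lp) % d))
```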
In the even case ($d=2^m$) the explicit expressions for the phases are less easy to manipulate, essentially due to the fact that $1\oplus_{G}1=0$ and that we may not divide by 0. Once again, there are $p^m$ ($2^m$ in this case) possible ways to determine the phases $U^j_{l}/V^{(j-1)\odot_{G} l}_{l}$, which are equivalent up to a relabelling of the states of the corresponding MUB. A possible choice for the phases was shown to be \begin{equation} U^j_{l}= (\Pi^{m-1}_{n=0, l_{n}\not= 0}i^{(j-1)\odot_{G}2^n\odot_{G}2^n}\gamma_{G}^{(j-1)\odot_{G}2^n\odot_{G}2^{n'}}) V^{((j-1)\odot_{G} l)}_{l}, \label{UsurVeven}\end{equation} where the coefficients $l_{n}$ are unambiguously defined by the $p$-ary (here binary) expansion of $l$, $l=\sum_{n=0}^{m-1}l_{n}2^n$, while $n'$ is the smallest integer strictly larger than $n$ such that $l_{n'}\not= 0$ if it exists, and 0 otherwise. For $i=0$, a possible choice of phases corresponds to the relation $U^{0}_{l}=V^{l}_{0}$, in the even and odd cases. It is worth noting that in both cases the phases are square roots of integer powers of gamma, $(U^j_{l}/V^{(j-1)\odot_{G} l}_{l})^2=\gamma_{G}^{\ominus_{G}( (j-1)\odot_{G} l\odot_{G} l)}$. As a consequence, it is easy to show (see appendix) that $(U^j_{l})^{-1}=(U^j_{l})^{\dagger}= U^j_{\ominus_G l}$. iii) In Ref.\cite{durtwig}, the $d^2$ Wigner operators are defined as follows: \begin{equation} \label{wigno}W_{(i_{1},i_{2})}={1\over d}\sum_{m,n=0}^{d-1} \gamma_{G}^{\ominus_{G}i_{1}\odot_{G}n \oplus_{G}i_{2}\odot_{G}m} (\gamma_{G}^{( m\odot_{G} n)})^{1\over 2} V^{n}_{m}. \end{equation} Introducing the more convenient notation $U_{m,n}=(\gamma_{G}^{( m\odot_{G} n)})^{1\over 2} V^{n}_{m}$ (the $U_{m,n}$ operators are equivalent to the $U^i_{j}$ operators previously defined, up to a mere relabelling), we get \begin{equation} W_{(i_{1},i_{2})}={1\over d}\sum_{m,n=0}^{d-1} \gamma_{G}^{\ominus_{G}i_{1}\odot_{G}n \oplus_{G}i_{2}\odot_{G}m} U_{m,n}.
\end{equation} It is worth noting that the Wigner ($W$) operators defined by Eqn.(\ref{wigno}) are Hermitian, due to the identity $U_{m,n}^{\dagger}=U_{m,n}^{-1}=U_{\ominus_{G}m,\ominus_{G}n}$. In the appendix, we prove that they are ``acceptable'' candidates for discrete Wigner distributions according to the criteria introduced by Wootters in his seminal paper of '87 \cite{Wootters87}, which means that (a) their Trace is equal to 1, (b) they are orthonormalised (to $d$) operators (under the Trace norm), so that they form a basis of the set of linear operators, and (c) if we consider any set of parallel lines in the phase space, the average of the Wigner operators along one of those lines is equal to a projector onto a pure state, and the averages taken along different parallel lines are projectors onto mutually orthogonal states. In the appendix we prove that the last relation is valid in odd and even prime power dimensions as well, and we also show that the sets of $d$ orthogonal states that are associated to different directions (there are $d+1$ non-parallel directions in the $d\times d$ phase space) form MUB's (in accordance with the prediction made in Ref.\cite{discretewigner}). The Wigner distribution is then nothing but the set of the $d^2$ amplitudes that we obtain when we develop the density matrix of a qu$d$it (with $d=p^m$) in the basis provided by the Wigner operators: $w_{(i_{1},i_{2})}=Tr.(\rho.W_{(i_{1},i_{2})})$. As a consequence of property (c), the marginals of this distribution along any axis of the phase space are equal to transition probabilities to states of the corresponding MUB. \subsection{\label{section3.2}Factorisability of the two and three qubit Wigner distribution.} In Ref.\cite{durtwig}, we showed that $W_{(i_{1},i_{2})}=(V_{i_{1}}^{i_{2}})^{-1}W_{(0,0)}V_{i_{1}}^{i_{2}}$, and we also showed in Ref.\cite{TD05} that the displacement operators always factorise into products of local displacement operators.
Therefore, in order to prove the factorisability of the Wigner operators, it is sufficient to prove that $W_{(0,0)}$ factorises. As we mentioned in the previous section, the Wigner operators are not uniquely defined according to our definition. For instance, in the qubit case we are free to change the sign of the operators $\sigma_{x}$, $\sigma_{y}$, and $\sigma_{z}$ according to our convenience in the definition (\ref{wign}). This change of sign is equivalent to a relabelling of the states of the associated MUB's, which has no important physical consequence. Therefore $2^3$ acceptable Wigner distributions exist in the qubit case. In the two qubit case, we are free to choose arbitrarily the sign of two operators in each of the 5 families of 3 operators that we defined in Section \ref{sect}, which corresponds to $4^5$ acceptable Wigner distributions. There are thus $(2^3)^2$ possible products of two one-qubit Wigner distributions and $4^5$ possible two-qubit (quartit) distributions. Once we have chosen the signs of the three sigma operators ($x,y,z$) of the first qubit, the corresponding signs at the level of the second qubit must be the same an even number of times (0 or 2 times) in order for the product of the Wigner distributions to be an acceptable two qubit distribution. For instance, if we choose the phase $+$ for the $a$ sigma operators ($x,y,z$) and the phase $-$ for the $b$ sigma operators ($x,y,z$), we obtain the products $\{ +\sigma^a_{z} , -\sigma^b_{z} ,-\sigma^a_{z} .\sigma^b_{z}\}$, $\{ +\sigma^a_{x} ,-\sigma^b_{x} ,-\sigma^a_{x} .\sigma^b_{x}\}$, $\{ +\sigma^a_{y} ,-\sigma^b_{y} ,-\sigma^a_{y} .\sigma^b_{y}\}$, $\{ -\sigma^a_{x}.\sigma^b_{y} ,-\sigma^a_{y}.\sigma^b_{z} ,-\sigma^a_{z} .\sigma^b_{x}\}$ and $\{ -\sigma^a_{x}.\sigma^b_{z} ,-\sigma^a_{y}.\sigma^b_{x} ,-\sigma^a_{z}.\sigma^b_{y}\}$.
Each of these families (together with the identity) forms a commuting group, so that the corresponding operator $W_{(0,0)}$, which is the normalised sum of those 16 operators, factorises: \begin{eqnarray} W_{(0,0)}&=&{1\over 4}(+Id.+\sigma^a_{z} -\sigma^b_{z} -\sigma^a_{z} .\sigma^b_{z}+\sigma^a_{x} -\sigma^b_{x} -\sigma^a_{x} .\sigma^b_{x}+ \sigma^a_{y} -\sigma^b_{y} -\sigma^a_{y} .\sigma^b_{y} \nonumber \\ &&-\sigma^a_{x}.\sigma^b_{y} -\sigma^a_{y}.\sigma^b_{z} -\sigma^a_{z}.\sigma^b_{x}-\sigma^a_{x}.\sigma^b_{z} -\sigma^a_{y}.\sigma^b_{x} -\sigma^a_{z}.\sigma^b_{y} ) \nonumber \\&=& {1\over 2}(+Id.^a+\sigma^a_{x} +\sigma^a_{y} +\sigma^a_{z}).{1\over 2}(+Id.^b-\sigma^b_{x} -\sigma^b_{y} -\sigma^b_{z}).\end{eqnarray} As a consequence, all Wigner operators factorise too. There are obviously $2^3.4=2^5$ similar ways to derive factorisable two-qubit Wigner distributions, so that fifty percent of the $(2^3)^2$ products of one-qubit Wigner distributions provide an acceptable quartit Wigner distribution (among the $4^5$ acceptable Wigner distributions). It is now easy to show that a three qubit Wigner distribution never factorises into a product of three one-qubit Wigner distributions. Essentially, this is due to the fact that it is impossible to find three ordered triplets of plus or minus signs that would pairwise be the same an even number of times (0 or 2 times). It is important to note that these results are still valid in the approach followed in Ref.\cite{discretewigner}. This (more axiomatic and geometrical) approach is more general, because it allows more flexibility in the way to attribute one element of the Galois field to one label of the basis states of the computational and dual bases.
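The factorisation of $W_{(0,0)}$ is straightforward to confirm numerically; the sketch below builds the sixteen signed products from the five commuting families (with the sign choice $+$ for the $a$ operators and $-$ for the $b$ operators discussed above) and compares their normalised sum with the tensor product of the two local operators:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# the five commuting families of signed two-qubit Pauli products
families = [
    [np.kron(Z, I2), -np.kron(I2, Z), -np.kron(Z, Z)],
    [np.kron(X, I2), -np.kron(I2, X), -np.kron(X, X)],
    [np.kron(Y, I2), -np.kron(I2, Y), -np.kron(Y, Y)],
    [-np.kron(X, Y), -np.kron(Y, Z), -np.kron(Z, X)],
    [-np.kron(X, Z), -np.kron(Y, X), -np.kron(Z, Y)],
]
W00 = np.eye(4, dtype=complex)               # the identity term
for family in families:
    for op in family:
        W00 += op
W00 /= 4

Wa = (I2 + X + Y + Z) / 2                    # local qubit Wigner operators
Wb = (I2 - X - Y - Z) / 2
assert np.allclose(W00, np.kron(Wa, Wb))     # W_{(0,0)} factorises
assert np.isclose(np.trace(W00), 1)          # unit trace, as required
```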
Implicitly, in our approach, the $m$-tuple of integers comprised between $0$ and $p-1$ assigned to each computational state expresses the development of the corresponding element of the Galois field in a field basis (field bases are defined in Ref.\cite{discretewigner}) that contains 1 (the neutral element for the multiplication) as first element. Similarly, our choice of the dual basis is such that the MUB associated to diagonal straight lines factorises into local MUB's associated to local diagonal straight lines \cite{TD05}. Anyhow, the MUB's are exactly the same in both approaches, independently of the choices of field bases, because the splitting of the Heisenberg-Weyl group into commuting subgroups is unambiguously defined up to relabellings once we associate $\sigma_{X}$ operators to horizontal translations (shifts of the labels of the first MUB) and $\sigma_{Z}$ operators to vertical translations (shifts of the labels of the second MUB). In our approach, different choices of phase conventions for the subgroups of the Heisenberg-Weyl group lead to different labellings of the corresponding MUB, while in the approach of Ref.\cite{discretewigner} this labelling is arbitrarily imposed. Anyhow, there are in both cases $d^{d+1}$ possible phase conventions in dimension $d$ once the field bases are chosen. There are thus in both approaches $(d^{2})^{d^2+1}$ possible Wigner operators in dimension $d^2$ and $d^{2(d+1)}$ possible factorisable Wigner operators, and those operators are the same, because they can always be written (up to additive and normalisation constants) as sums of projectors onto $d+1$ states from different MUB's \cite{discretewigner}, and the MUB's are the same in both approaches.
One can check for instance that one of the two factorisable Wigner distributions derived in Ref.\cite{discretewigner} in the case $d=4$ corresponds to choosing the phases $(+,+,+)$ for the $a$ sigma operators ($x,y,z$) and the phases $(+,-,+)$ for the $b$ sigma operators ($x,y,z$). The second one is obtained similarly, but with a permutation of the roles of $a$ and $b$. The 32 operators that we derived can be obtained from those two operators by performing at most one local rotation of 180 degrees around the $X$, $Y$ or $Z$ axis. \subsection{\label{odd}Factorisability of the two qu$d$it Wigner distribution with $d=p^m$ and $p$ odd.} We shall now prove the following result: provided we apply in odd prime power dimension the particular phase convention (\ref{elegant}) and define the field with $d^2=(p^m)^2$ elements as the quadratic extension of the field with $d=p^m$ elements (this technique was successfully applied by us in the past in order to solve the Mean King's problem in prime power dimensions \cite{durtwig}), the Wigner distribution of a bipartite $p^{2m}$-dimensional system naturally factorises into a product of local Wigner distributions for the two $p^{m}$-dimensional subsystems. Before we do so, we must define the quadratic extension of a field. Let us denote by $i_{a}$ ($i_{b}$) the elements of the field with $p^m$ elements associated to the $a$ ($b$) subsystems. Their quadratic extension is a field associated to the bipartite system $a-b$, the elements of which we denote $(i_{a},i_{b})$.
As always, its addition (denoted $\oplus\oplus_{G}$) is the addition componentwise: $(i_{a},i_{b}) \oplus\oplus_{G}(j_{a},j_{b})=(i_{a}\oplus_{G}j_{a},i_{b}\oplus_{G}j_{b})$. In particular, $2=(1_{a},0_{b}) \oplus\oplus_{G}(1_{a},0_{b})=(1_{a} \oplus_{G}1_{a},0_{b})=(2_{a},0_{b})$. All we need to know about the extended multiplication rule (denoted $\odot\odot_{G}$) is that it is commutative and distributive relative to the extended addition, that $(i_{a},0) \odot\odot_{G}(j_{a},0)=(i_{a}\odot_{G}j_{a},0)$, $(i_{a},0) \odot\odot_{G}(0,j_{b})=(0,i_{a}\odot_{G}j_{b})$, and finally that $(0,i_{b})\odot\odot_{G}(0,j_{b})=(i_{b}\odot_{G}j_{b}\odot_{G} R, i_{b}\odot_{G}j_{b}\odot_{G} Q)$, with $R$ and $Q$ elements of the field with $p^m$ elements, and $R$ different from 0 (otherwise this extension would not form a field). Those properties are very similar to those met in the case of the complex field, which is the quadratic extension of the real (infinite) field.
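A concrete instance of these rules is easy to write down. The sketch below builds the quadratic extension of GF(3); the particular values $R=2$, $Q=0$ are an assumption made for the illustration (they are valid because $x^2-2$ is irreducible over GF(3), 2 being a quadratic non-residue mod 3):

```python
# Quadratic extension of GF(3): pairs (a, b) ~ a + b x with x^2 = R + Q x.
# Assumption (not fixed by the text): R = 2, Q = 0.
p, R, Q = 3, 2, 0

def add(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

def mul(u, v):
    (ia, ib), (ja, jb) = u, v
    # (ia + ib x)(ja + jb x) = ia ja + ib jb x^2 + (ia jb + ib ja) x
    return ((ia * ja + ib * jb * R) % p, (ia * jb + ib * ja + ib * jb * Q) % p)

elems = [(a, b) for a in range(p) for b in range(p)]
# the rules quoted in the text
assert mul((2, 0), (2, 0)) == (1, 0)            # (i_a,0)(j_a,0) = (i_a j_a, 0)
assert mul((2, 0), (0, 2)) == (0, 1)            # (i_a,0)(0,j_b) = (0, i_a j_b)
assert mul((0, 1), (0, 1)) == (R, Q)            # (0,i_b)(0,j_b) = (i_b j_b R, i_b j_b Q)
# every nonzero element is invertible, so the extension is indeed a field
for u in elems:
    if u != (0, 0):
        assert any(mul(u, v) == (1, 0) for v in elems)
```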
It is instructive to note that, as $2=(2_{a},0_{b})$, its inverse (relative to the extended multiplication) is equal to $(2^{-1}_{a},0_{b})$, where $2^{-1}$ represents the inverse of 2 in the non-extended field with $p^m$ elements. The Wigner operators of the composite system can now be written according to Eqn.(\ref{wigno}), and their factorisability is easily established: \begin{eqnarray} &&W_{((i^1_{a},i^1_{b}),(i^2_{a},i^2_{b}))}={1\over d^2}\sum_{(m_{a},m_{b}),(n_{a},n_{b})=0}^{d-1} \gamma_{G}^{\ominus\ominus_{G}(i^1_{a},i^1_{b})\odot\odot_{G}(n_{a},n_{b})\oplus\oplus_{G}(i^2_{a},i^2_{b}) \odot\odot_{G}(m_{a},m_{b})} (\gamma_{G}^{((m_{a},m_{b})\odot\odot_{G}(n_{a},n_{b}))})^{1\over 2} V^{(n_{a},n_{b})}_{(m_{a},m_{b})}\qquad\nonumber\\ && ={1\over d^2}\sum_{(m_{a},m_{b}),(n_{a},n_{b})=0}^{d-1} \gamma_{G}^{\ominus\ominus_{G}(i^1_{a},i^1_{b})\odot\odot_{G}(n_{a},n_{b})\oplus\oplus_{G}(i^2_{a},i^2_{b}) \odot\odot_{G}(m_{a},m_{b})} \gamma_{G}^{((m_{a},m_{b})\odot\odot_{G}(n_{a},n_{b})//_{G}2)}. \qquad\nonumber\\ && \sum_{k_{a},k_{b}=0}^{d-1} \gamma_{G}^{((k_{a},k_{b})\odot\odot_{G}(n_{a},n_{b}))} \ket{(k_{a},k_{b})\oplus\oplus_{G}(m_{a},m_{b})}\bra{(k_{a},k_{b})} \nonumber\\ &&={1\over d^2}\sum_{(m_{a},m_{b}),(n_{a},n_{b})=0}^{d-1} \gamma_{G}^{\ominus_{G}(i^1_{a}\odot_{G} n_{a}\oplus_{G}i^1_{b}\odot_{G}R\odot_{G}n_{b})\oplus_{G} (i^2_{a}\odot_{G} m_{a}\oplus_{G}i^2_{b}\odot_{G}R\odot_{G}m_{b})} \gamma_{G}^{((m_{a}\odot_{G} n_{a}\oplus_{G} m_{b}\odot_{G} R\odot_{G} n_{b})\odot_{G} 2^{-1})}. \qquad\nonumber\\ &&\sum_{k_{a},k_{b}=0}^{d-1} \gamma_{G}^{((k_{a} \odot_{G} n_{a})\oplus_{G} (k_{b} \odot_{G}R\odot_{G} n_{b}))} \ket{ k_{a} \oplus_{G} m_{a}}\bra{ k_{a} }\ket{ k_{b}\oplus_{G} m_{b}}\bra{ k_{b}}\qquad\nonumber\\ &&=W^a_{(i^1_{a},i^2_{a})}.W^b_{(i^1_{b},R\odot_{G}i^2_{b})} \end{eqnarray} \section{Conclusions.} At first sight, the tomography of single and two qubit systems seems to be a trivial question.
From the previous treatment we see that if we analyse the problem in the light of two criteria (optimality, in the sense of minimal redundancy, and factorisability), the problem is surprisingly rich. In particular, it motivates the study of the possibility to factorise the Wigner distribution of a discrete composite system, a question that has recently attracted increasing interest (Refs.\cite{Wootters2} to \cite{Vourdas2}). Besides the question of tomography of composite systems, the main reason is that the phase space structure \cite{Wootters2,Planat} of composite systems is not necessarily factorisable (for instance the two qubit straight line of slope 2 of the 16-point phase space contains the 4 couples $(0,0)_{a,b},(1,2)_{a,b},(2,3)_{a,b}$ and $(3,1)_{a,b}$; it is obviously not the Cartesian product of two one-qubit straight lines because $(0_{a},0_{a})=(0,0)_{a},(1_{a},2_{a})=(0,1)_{a},(2_{a},3_{a})=(1,1)_{a}$ and $(3_{a},1_{a})=(1,0)_{a}$). Actually, the existence of non-factorisable lines is directly related to the existence of entangled, non-factorisable MUB's and is an unavoidable feature of composite systems of prime power dimensions, for dimensional reasons similar to those that we explained at the end of Section \ref{sect}. This motivates the quest for global (not necessarily factorisable) phase space approaches, an example of which is provided by our displacement operators and our Wigner distribution. Although the phase space of the composite system is not factorisable (it is not the Cartesian product of the phase spaces of the subsystems), it could be that the Weyl or Wigner operators nevertheless factorise, which is the object of the present paper.
Of course, the product of local Wigner functions can always provide a full tomographic representation of the state of a composite system, but this picture is naturally linked to a Cartesian splitting of the full phase space, which is not as rich as a global description of the full phase space. The affine structure of the full phase space (which is intimately related to the underlying field with $p^m$ elements) is lost whenever we replace it by the Cartesian product of the local phase spaces (as is done in Ref.\cite{Wootters87}), as our example at the beginning of this section showed. There exist several approaches to the problem (Refs.\cite{Wootters2} to \cite{Vourdas2}, and \cite{Wootters87,Rubin2,Rubin1}), and most often those approaches have a pronounced geometrical flavour, in the sense that they aim at deriving potential candidates for the Wigner distribution from general considerations about the structure of the $d$ times $d$ phase space (an excellent survey of the phase space approach is given in the introduction of Ref.\cite{intro}). Our approach is different because we postulate from the beginning (and this is an educated guess) the ``algebraic'' expression of the Weyl and Wigner operators (or phase-point operators, following the terminology introduced by Wootters in \cite{Wootters87}). It seems nevertheless that our approach captures the essential features of the more general, geometric, approach. For instance, it is also true in our approach that straight lines of the phase space correspond to states, and that the states associated to parallel lines form orthogonal bases, while different orientations correspond to MUB's \cite{Wootters87,discretewigner}.
Our results about the factorisability of Wigner distributions are still partial, and they directly raise a question whose answer is beyond the scope of the present paper: Is it possible to factorise the Wigner distribution of a $(2^m)^2$ dimensional system into a product of two (local) Wigner distributions of $(2^m)$ dimensional subsystems when $m$ is strictly larger than 1? Finally, it would also be interesting to investigate the factorisability of Wigner operators in odd dimensions (in which case it has been shown that many results valid in odd prime power dimensions can be transferred nearly integrally \cite{gross,Appleby,davidsThesis,cohendet}). For instance, the definitions of the Weyl (\ref{defV0}) and Wigner operators (\ref{wigno}) are still operational when we replace the Galois operations by the modulo $d$ operations and the $p$th root of unity $\gamma_{G}$ by the $d$th root of unity. Factorisation is still possible in this case. For instance when $d=15=3.5$, we can write $m_{a,b}=5.m_{a}+3.m_{b}\ ({\rm modulo}\ 15)$, where $0\leq m_{a,b}\leq 14 $, $0\leq m_{a}\leq 2 $ and $0\leq m_{b}\leq 4 $. Then factorisation is ensured by the identities $m_{a,b}+n_{a,b}\ ({\rm modulo}\ 15)=5.(m_{a}+n_{a}\ ({\rm modulo}\ 3))+3.(m_{b}+n_{b}\ ({\rm modulo}\ 5))\ ({\rm modulo}\ 15)$ and $e^{{i.2\pi\, 2. m_{a,b}.n_{a,b}\over 15}\cdot{15\over 15}}=e^{{i.2\pi m_{a,b}.n_{a,b}\over 15}}=e^{{i.2\pi\, 2.m_{a}.n_{a}\over 3}}.e^{{i.2\pi\, 3.m_{b}.n_{b}\over 5}}$ (the invertible factors 2 modulo 3 and 3 modulo 5 amount to a mere relabelling of the local indices). Prime power dimensions remain exceptional anyhow, because Galois fields with $d$ elements are known to exist only when $d$ is a prime power. Our guess is that a set of $d+1$ MUB's only exists in prime power dimensions \cite{Archer}, due to the fact that they seem to be closely related to the existence of finite fields and finite projective spaces \cite{Wootters2,Planat,grassl} (finite affine planes with $d^2$ points do not exist when $d=6$ \cite{tarry} or $d=10$ \cite{lam}, and it is conjectured that they exist only when $d$ is a prime power), but this question is still open.
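The additivity of this Chinese-remainder-type labelling for $d=15$ can be checked exhaustively in a few lines:

```python
# CRT-style labelling for d = 15 = 3*5: m_{a,b} = 5*m_a + 3*m_b (mod 15)
labels = {(ma, mb): (5 * ma + 3 * mb) % 15 for ma in range(3) for mb in range(5)}
assert sorted(labels.values()) == list(range(15))      # the labelling is bijective
# additivity: the label of the componentwise sum equals the sum of labels mod 15
for (ma, mb), m in labels.items():
    for (na, nb), n in labels.items():
        assert labels[((ma + na) % 3, (mb + nb) % 5)] == (m + n) % 15
```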
Last but not least, it is worth mentioning the approach to the problem that was developed by Rubin and Pittenger \cite{Rubin2,Rubin1}. This approach is also algebraic, but the authors make use of sophisticated techniques such as field extensions of arbitrary order $m$ (in the treatment of odd dimensions we limited ourselves to quadratic extensions), which enabled them to prove that, when the dimension is an odd prime power ($d=p^m$ with $p$ prime and odd), it is possible to find Wigner distributions that are the product of $m$ local Wigner distributions. Our result of Section \ref{odd} allows us to answer positively whenever $m=2^n$, $n=1,2,3,...$. The authors also established in Ref.\cite{Rubin1} the factorisability of the two qubit Wigner distribution, but their method did not allow them to treat systems composed of more than two qubits. As we showed in this paper, in the even case ($p=2$) factorisability is not possible when $m=3$, which establishes a clear distinction between the even and odd (prime power) dimensions. Another advantage of their approach is that they are able to estimate the degree of separability of MUB states \cite{Rubin2} in arbitrary prime power dimensions (this reference was kindly drawn to my attention by the authors). It seems that our approaches are closely related (for instance, in order to establish the separability of Wigner distributions \cite{Rubin1}, the authors also made use of the relative freedom in the assignment of phases to commuting operators of a same subgroup). In the introduction of Ref.\cite{Rubin2}, Rubin and Pittenger wrote, relative to the approach of Wootters {\it et al.}, that {\it ``Although the motivations of the two approaches appear to be quite different, they require the same mathematical tools and appear to lead to the same results. An interesting question is the interrelationship between the two approaches.''} This is certainly true concerning our approach and theirs.
\leftline{\large \bf Acknowledgment} T.D. acknowledges a Postdoctoral Fellowship of the Fonds voor Wetenschappelijk Onderzoek, Vlaanderen, and also support from the IUAP programme of the Belgian government, the grant V-18, and the Solvay Institutes for Physics and Chemistry. Support from the Quantum Lah at N.U.S. (among others, comments of B.-G. Englert) is acknowledged. \leftline{\large \bf Appendix 1: ``Acceptability'' of the Wigner operators.} Let us consider the Wigner operators defined by Eqn.(\ref{wigno}). We shall now show that (a) their Trace is equal to 1, (b) they are orthogonal to each other and normalised to $d$ (under the Trace norm), and (c) if we consider any set of parallel lines in the phase space, the average of the Wigner operators along one of those lines is equal to a projector onto a pure state, and the averages taken along different parallel lines are projectors onto mutually orthogonal states. Three identities, that were derived in Ref.\cite{TD05}, will be helpful in establishing the proofs: \begin{equation}\label{identi1}\sum_{j=0}^{d-1} \gamma_{G}^{ (j\odot_{G} i)}=d\delta_{i,0}\end{equation} \begin{equation}\label{identi2}\gamma_G^{i}\cdot\gamma_G^{j}=\gamma _{G} ^{(i\oplus_G j)}\end{equation} \begin{equation} V^j_i.V^k_l=\gamma^{\ominus_{G}(i\odot_{G} k)} V^{j\oplus_G k}_{i\oplus_{G} l}. \label{compo} \end{equation} With the help of the identity (\ref{identi1}) and on the basis of the definition (\ref{defV0}), we get that $tr.V^j_i=\sum_{k,k'=0}^{d-1} \gamma_{G}^{(( k\oplus_{G} i)\odot_{G} j)}\delta_{k\oplus_{G} i,k'}\delta_{k,k'}=d.\delta_{i,0}.\delta_{j,0}$. It is easy to show that the $V$ operators defined in (\ref{defV0}) are unitary, with $(V^j_i)^+=(V^j_i)^{-1}=\gamma^{\ominus_{G}(i\odot_{G} j)} V^{\ominus_G j}_{\ominus_G i}$. Making use of the composition law (\ref{compo}), it is easy to show that the $V$ operators are orthogonal relative to the trace norm: $Tr.((V^j_i)^{\dagger}.V^k_l)=d.\delta_{i,l}.\delta_{j,k}$.
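Before proceeding, these identities can be verified numerically for the simplest odd prime, $d=3$ (where $\gamma_{G}=e^{2i\pi/3}$ and the Galois operations reduce to arithmetic mod 3); the sketch assumes the form $V^j_i=\sum_k\gamma_{G}^{(k\oplus_{G}i)\odot_{G}j}\ket{k\oplus_{G}i}\bra{k}$ implied by the trace computation above:

```python
import numpy as np

d = 3                               # prime dimension: Galois ops = mod-d arithmetic
g = np.exp(2j * np.pi / d)

def V(j, i):
    # V^j_i = sum_k gamma^{(k+i) j} |k+i><k|  (indices mod d)
    M = np.zeros((d, d), dtype=complex)
    for k in range(d):
        M[(k + i) % d, k] = g ** (((k + i) * j) % d)
    return M

for i in range(d):
    for j in range(d):
        # trace identity: tr V^j_i = d delta_{i,0} delta_{j,0}
        assert np.isclose(np.trace(V(j, i)), d if (i, j) == (0, 0) else 0)
        # adjoint: (V^j_i)^dagger = gamma^{-(i j)} V^{-j}_{-i}
        assert np.allclose(V(j, i).conj().T,
                           g ** (-(i * j)) * V((-j) % d, (-i) % d))
        for k in range(d):
            for l in range(d):
                # composition law: V^j_i V^k_l = gamma^{-(i k)} V^{j+k}_{i+l}
                assert np.allclose(V(j, i) @ V(k, l),
                                   g ** (-(i * k)) * V((j + k) % d, (i + l) % d))
```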
Hence, we can derive property (a): $Tr.(W_{(i_{1},i_{2})})=Tr.({1\over d}\sum_{m,n=0}^{d-1} \gamma_{G}^{\ominus_{G}i_{1}\odot_{G}n \oplus_{G}i_{2}\odot_{G}m} (\gamma_{G}^{( m\odot_{G} n)})^{1\over 2} V^{n}_{m})$ =${1\over d}\sum_{m,n=0}^{d-1} \gamma_{G}^{\ominus_{G}i_{1}\odot_{G}n \oplus_{G}i_{2}\odot_{G}m} (\gamma_{G}^{( m\odot_{G} n)})^{1\over 2} Tr.(V^{n}_{m})$ =${1\over d}\sum_{m,n=0}^{d-1} \gamma_{G}^{\ominus_{G}i_{1}\odot_{G}n \oplus_{G}i_{2}\odot_{G}m} (\gamma_{G}^{( m\odot_{G} n)})^{1\over 2}.d.\delta_{m,0}.\delta_{n,0}$ =${1\over d}.d=1.$ In order to prove property (b), we should first note that $(U^j_{l})^{-1}=(U^j_{l})^{\dagger}= U^j_{\ominus_G l}$, a direct consequence of Eqn.(\ref{compo}) and of the identity $(U^j_{l}/V^{(j-1)\odot_{G} l}_{l})^2=\gamma_{G}^{\ominus_{G}( (j-1)\odot_{G} l\odot_{G} l)}$. Besides, the $U$ operators, like the $V$ operators, are orthogonal relative to the trace norm, so that $Tr.(U^{\dagger}_{m,n}.U_{m',n'})=d.\delta_{m,m'}.\delta_{n,n'}$. Hence, $ Tr.(W^{\dagger}_{(i_{1},i_{2})}.W_{(i'_{1},i'_{2})})={1\over d^2}\sum_{m,n=0}^{d-1} \gamma_{G}^{\oplus_{G}i_{1}\odot_{G}n \ominus_{G}i_{2}\odot_{G}m} \gamma_{G}^{\ominus_{G}i'_{1}\odot_{G}n \oplus_{G}i'_{2}\odot_{G}m}.d$ =$ d.\delta_{(i_{1},i'_{1})}.\delta_{(i_{2},i'_{2})}$, where we applied twice the identity (\ref{identi1}).
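Properties (a) and (b), together with Hermiticity, can be checked numerically for $d=3$. In the sketch below the square root $(\gamma_{G}^{m\odot_{G}n})^{1\over 2}$ is taken as $\gamma_{G}^{\ominus_{G}(m\odot_{G}n)\odot_{G}2^{-1}}$, the branch consistent with the phase convention of Eqn.~(\ref{elegant}):

```python
import numpy as np

d = 3
g = np.exp(2j * np.pi / d)
inv2 = 2                            # 2^{-1} in GF(3)

def V(j, i):
    M = np.zeros((d, d), dtype=complex)
    for k in range(d):
        M[(k + i) % d, k] = g ** (((k + i) * j) % d)
    return M

def W(i1, i2):
    # Wigner operator of Eqn (wigno), with sqrt(gamma^{mn}) = gamma^{-(mn)/2}
    return sum(g ** ((-i1 * n + i2 * m) % d) * g ** ((-m * n * inv2) % d) * V(n, m)
               for m in range(d) for n in range(d)) / d

for i1 in range(d):
    for i2 in range(d):
        Wi = W(i1, i2)
        assert np.isclose(np.trace(Wi), 1)                   # property (a)
        assert np.allclose(Wi, Wi.conj().T)                  # Hermiticity
        for j1 in range(d):
            for j2 in range(d):
                ip = np.trace(Wi.conj().T @ W(j1, j2))
                # property (b): orthonormalised to d under the trace norm
                assert np.isclose(ip, d if (i1, i2) == (j1, j2) else 0)
```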
In order to prove property (c), it is useful to recall the transformation law of the $U$ operators that was established in Ref.\cite{durtwig}: $U_{m,n}(0)= {(\gamma_{G}^{ (\ominus_{G} ((i-1)\odot_{G} m\ominus_{G}n)\odot_{G} m )})^{{1\over 2}}\over (\gamma_{G}^{ \ominus_{G} ((i-1)\odot_{G} m\odot_{G} m )})^{{1\over 2}}(\gamma_{G}^{ \oplus_{G} ( m\odot_{G} n )})^{{1\over 2}}}.U_{\ominus_{G}n\oplus_{G}(i-1)\odot_{G}m,m}(i)$, where $U_{m,n}(0)=(\gamma_{G}^{ \oplus_{G} ( m\odot_{G} n )})^{{1\over 2}}\sum_{k=0}^{d-1} \gamma_{G}^{(( k\oplus_{G} m)\odot_{G} n)} \ket{ e_{k\oplus_{G} m}^0}\bra{ e_{k}^0}$ and $U_{m,n}(i)=(\gamma_{G}^{ \oplus_{G} ( m\odot_{G} n )})^{{1\over 2}} \sum_{k=0}^{d-1} \gamma_{G}^{(( k\oplus_{G} m)\odot_{G} n)}\ket{ e_{k\oplus_{G} m}^i} \bra{ e_{k}^i}$, $i=1,...,d$. Here, the symbol $\ket{ e_{k}^i}$ represents the $k$th state of the $i$th MUB ($i=0,...,d$). It is worth noting that in odd prime power dimensions the phase factor ${(\gamma_{G}^{ (\ominus_{G} ((i-1)\odot_{G} m\ominus_{G}n)\odot_{G} m )})^{{1\over 2}}\over (\gamma_{G}^{ \ominus_{G} ((i-1)\odot_{G} m\odot_{G} m )})^{{1\over 2}}(\gamma_{G}^{ \oplus_{G} ( m\odot_{G} n )})^{{1\over 2}}}$ is always equal to 1, so that the Wigner operators take the same form in all MUB's (up to a relabelling) \cite{durtwig}. The present treatment is also valid in even prime power dimensions.
Let us now evaluate the averaged sum of the Wigner operators (expressed in the computational basis) along a vertical straight line of the phase space: $d^{-1}\sum_{i_{2}=0}^{d-1}W^{0}_{(i_{1},i_{2})}= d^{-2}\sum_{i_{2}=0}^{d-1} \sum_{m,n=0}^{d-1} \gamma_{G}^{\ominus_{G}(i_{1}\odot_{G} n\oplus_{G}m\odot_{G} i_{2})} U_{m,n}(0) = d^{-2}\sum_{m,n=0}^{d-1} d.\delta_{m,0}.\gamma_{G}^{\ominus_{G} (i_{1}\odot_{G} n)} U_{m,n}(0) = d^{-1}\sum_{n=0}^{d-1} \gamma_{G}^{\ominus_{G} (i_{1}\odot_{G} n)} U_{0,n}(0) = d^{-1}\sum_{n,k=0}^{d-1} \gamma_{G}^{\ominus_{G} (i_{1}\odot_{G} n)}\gamma_{G}^{\oplus_{G} (k\odot_{G} n)} \ket{ e^{0}_{k}}\bra{ e^{0}_{k}} = d^{-1}\sum_{k=0}^{d-1} d.\delta_{i_{1},k} \ket{ e^{0}_{k}}\bra{ e^{0}_{k}} = \ket{ e^{0}_{i_{1}}}\bra{ e^{0}_{i_{1}}}.$ Let us finally evaluate the averaged sum of the Wigner operators along a non-vertical straight line of the phase space (of slope $k-1$, $k=1,...,d$); this sum is equal to $ d^{-2}.\sum_{\alpha,m,n=0...d-1 } \gamma_{G}^{\ominus_{G}i_{1}\odot_{G}n \oplus_{G}i_{2}\odot_{G}m}U_{m,n}(0)$, where $i_{1}=\alpha_{0}\oplus_{G}\alpha$ and $i_{2}=(k-1)\odot_{G}\alpha$; we can rewrite it in the form $d^{-2}.\sum_{\beta,m,n=0...d-1 } \gamma_{G}^{\ominus_{G}i'_{1}\odot_{G}n' \oplus_{G}\beta\odot_{G}m'}. {(\gamma_{G}^{ (\ominus_{G} ((k-1)\odot_{G} m\ominus_{G}n)\odot_{G} m )})^{{1\over 2}}\over (\gamma_{G}^{ \ominus_{G} ((k-1)\odot_{G} m\odot_{G} m )})^{{1\over 2}}(\gamma_{G}^{ \oplus_{G} ( m\odot_{G} n )})^{{1\over 2}}}.U_{m',n'}(k)$, where $i'_{1}=(k-1)\odot_{G}i_{1}\ominus_{G}i_{2}=(k-1)\odot_{G}\alpha_{0}$, $i'_{2}=i_{1}=\beta$, $m'=(k-1)\odot_{G} m\ominus_{G}n$ and $n'=m$.
Summing first over $\beta$ and making use of the identity (\ref{identi1}), we obtain a factor $d.\delta_{m',0}$; now, when $m'=0$, then ${(\gamma_{G}^{ (\ominus_{G} ((k-1)\odot_{G} m\ominus_{G}n)\odot_{G} m )})^{{1\over 2}}\over (\gamma_{G}^{ \ominus_{G} ((k-1)\odot_{G} m\odot_{G} m )})^{{1\over 2}}(\gamma_{G}^{ \oplus_{G} ( m\odot_{G} n )})^{{1\over 2}}}=1$; the sum thus reduces to $d^{-1}.\sum_{n'=0...d-1 }\gamma_{G}^{\ominus_{G}i'_{1}\odot_{G}n' }.U_{0,n'}(k)$; this is the sum of the Wigner operators along a vertical line, with the operators rewritten in the $k$th MUB; by virtue of the previous result, it is equal to $\ket{ e^{k}_{i'_{1}}}\bra{ e^{k}_{i'_{1}}}$, the projector onto the $i'_{1}$th state of the $k$th MUB. \leftline{\large \bf Appendix 2: Galois operations in dimension 4} \begin{table} \begin{tabular}{c||c|c|c|c} \hline $\odot_{G}$ & $0$ & $1$ & $2$ & $3$\\ \hline \hline 0 & 0 & $0$ & $0$ & $0$ \\ 1 & $0$ & $1$ & $2$ & $3$\\ 2 & $0$ & $2$ & $3$ & $1$\\ 3 & $0$ & $3$ & $1$ & $2$ \\ \hline \end{tabular} \caption{The field (Galois) multiplication in dimension 4. }\label{tab1} \begin{tabular}{c||c|c|c|c} \hline $\oplus_{G}$ & $0$ & $1$ & $2$ & $3$\\ \hline \hline 0 & 0 & $1$ & $2$ & $3$ \\ 1 & $1$ & $0$ & $3$ & $2$\\ 2 & $2$ & $3$ & $0$ & $1$\\ 3 & $3$ & $2$ & $1$ & $0$ \\ \hline \end{tabular} \caption{The field (Galois) addition in dimension 4. }\label{tab2} \end{table} \end{document}
\begin{document} \begin{frontmatter} \title{On off-line and on-line Bayesian filtering for uncertainty quantification of structural deterioration} \author[mymainaddress,mytertiaryaddress]{Antonios Kamariotis \corref{mycorrespondingauthor}} \cortext[mycorrespondingauthor]{Corresponding author} \ead{[email protected]} \author[mymainaddress]{Luca Sardi} \ead{[email protected]} \author[mymainaddress]{Iason Papaioannou} \ead{[email protected]} \author[mysecondaryaddress,mytertiaryaddress]{Eleni Chatzi} \ead{[email protected]} \author[mymainaddress]{Daniel Straub} \ead{[email protected]} \address[mymainaddress]{Engineering Risk Analysis Group, Technical University of Munich, Theresienstrasse 90, 80333 Munich, Germany} \address[mysecondaryaddress]{Institute of Structural Engineering, ETH Zurich, Stefano-Franscini-Platz 5, 8093 Zurich, Switzerland} \address[mytertiaryaddress]{Institute for Advanced Study, Technical University of Munich, Lichtenbergstrasse 2a, 85748 Garching, Germany} \begin{abstract} Data-informed predictive maintenance planning largely relies on stochastic deterioration models. Monitoring information can be utilized to update sequentially the knowledge on time-invariant deterioration model parameters either within an off-line (batch) or an on-line (recursive) Bayesian framework. With a focus on the quantification of the full parameter uncertainty, we review, adapt and investigate selected Bayesian filters for parameter estimation: an on-line particle filter, an on-line iterated batch importance sampling filter, which performs Markov chain Monte Carlo (MCMC) move steps, and an off-line MCMC-based sequential Monte Carlo filter. A Gaussian mixture model is used to approximate the posterior distribution within the resampling process in all three filters. Two numerical examples serve as the basis for a comparative assessment of off-line and on-line Bayesian estimation of time-invariant deterioration model parameters. 
The first case study considers a low-dimensional, nonlinear, non-Gaussian probabilistic fatigue crack growth model that is updated with sequential crack monitoring measurements. The second high-dimensional, linear, Gaussian case study employs a random field to model corrosion deterioration across a beam, which is updated with sequential measurements from sensors. The numerical investigations provide insights into the performance of off-line and on-line filters in terms of the accuracy of the posterior estimates and the computational cost when applied to problems of a different nature, of increasing dimensionality, and with varying amounts of sensor information. Importantly, they show that a tailored implementation of the on-line particle filter proves competitive with the computationally demanding MCMC-based filters. Suggestions on the choice of the appropriate method as a function of the problem characteristics are provided. \end{abstract} \begin{keyword} Bayesian filtering, particle filter, Markov Chain Monte Carlo, uncertainty quantification, Gaussian mixture, structural deterioration \end{keyword} \end{frontmatter} \section*{Impact Statement} Stochastic models describing time-evolving processes are widespread in science and engineering. In the modern data-rich engineering landscape, Bayesian methods can exploit monitoring data to sequentially update knowledge on underlying model parameters. The quantification of the full posterior uncertainty of these parameters is indispensable for several real-world tasks, where decisions need to be taken in view of the evaluated margins of risk and uncertainty. This work contributes to these tasks by rigorously reviewing the off-line and on-line Bayesian framework for the purpose of parameter estimation. On-line and off-line Bayesian filters are adapted and compared on a set of numerical examples of varying complexity related to structural deterioration. 
This results in suggestions regarding the suitability of each algorithm for specific applications. \section{Introduction} \label{sec:introduction} Structural deterioration of various forms is present in most mechanical and civil structures and infrastructure systems. Accurate tracking of structural deterioration processes can help manage them effectively and minimize total life-cycle costs \citep{Cadini_2009_b, Frangopol_2011, Kim_2017, Kamariotis_2022a}. The deployment of sensors on structural components/systems can enable long-term monitoring of such processes. Monitoring data obtained sequentially at different points in time must be utilized in an efficient manner within a Bayesian framework to enable data-informed estimation and prediction of the deterioration process evolution. Monitored structural deterioration processes are commonly modeled using Markovian state-space representations \citep{Myotyri_2006, Cadini_2009, Baraldi_2013}, whereby the deterioration state evolution is represented by a recursive Markov process equation and is subject to stochastic process noise \citep{Corbetta_2018}. Monitoring information is incorporated by means of the measurement equation. The deterioration models further contain time-invariant uncertain parameters. The state-space can be augmented to include these parameters, if one wishes to obtain updated estimates thereof conditional on the monitoring information \citep{Straub_2009, Bhaskar_2009, Sun_2014, Corbetta_2018, Sang-ri_2018, Cristiani_2021, Kamariotis_2022b}; this is referred to as joint state-parameter estimation \citep{Sarkka_2013, Kantas_2015}. The formulation of a Markovian state-space representation of the deterioration process is not strictly required. 
The uncertain structural deterioration state is often defined solely as a function of uncertain time-invariant model parameters \citep{Ditlevsen_Madsen_1996, Vu_2000, Elingwood_2005, Stewart_2007}, which can be updated in view of the monitoring data. This updating, referred to herein as Bayesian parameter estimation, is often the primary task of interest. In this case, the deterioration state variables are obtained as outputs of the calibrated deterioration model with posterior parameter estimates \citep{Kennedy_2001, Ramancha_2022}. In spite of this, the parameter estimation problem can still be cast into a Markovian state-space representation. Quantifying the full posterior uncertainty of the time-invariant model parameters is essential for performing monitoring-informed predictions of the deterioration process evolution, for the subsequent monitoring-informed estimation of the time-variant structural reliability \citep{Straub_2020, Melchers_2017} or the remaining useful life \citep{Sun_2014, Kim_2017}, and eventually for predictive maintenance planning. Bayesian parameter estimation is the main focus of this paper. In long-term deterioration monitoring settings, where data is obtained sequentially at different points in time, Bayesian inference can be performed either in an on-line or an off-line framework \citep{Storvik_2002, Kantas_2015, Azam_2017}. In the literature, these are also referred to as recursive (on-line) and batch (off-line) estimation \citep{Sarkka_2013}. Parameter estimation is cast into a state-space setup to render it suitable for application with on-line Bayesian filtering algorithms \citep{Kantas_2015}, such as the Kalman filter \citep{Kalman_1960} and its nonlinear variants \citep{Jazwinski_1970, Julier_1997, Daum_2005, SONG_2020}, the ensemble Kalman filter \citep{Evensen_2006}, and particle filters \citep{Doucet_2001,Doucet_2008,Sarkka_2013, Tatsis_2022}. 
We employ on-line particle filter methods for pure recursive estimation of time-invariant deterioration model parameters, which is not the typical use case for such methods and can lead to degenerate and impoverished posterior estimates \citep{Del_Moral_2006,Sarkka_2013}. Taking that into account, we provide a formal investigation and discussion on the use of such methods for quantifying the full posterior uncertainty of time-invariant model parameters. In its most typical setting within engineering applications, Bayesian parameter estimation is commonly performed with the use of off-line Markov chain Monte Carlo (MCMC) methods, which have been used extensively in statistics and engineering to sample from complex posterior distributions of model parameters \citep{Hastings_1970, Gilks_1995, Beck_2002, Haario_2006, Ching_2007, Papaioannou_2015, Wu_2017, Lye_2021}. However, use of off-line methods for on-line estimation tasks is computationally prohibitive \citep{Del_Moral_2006, Kantas_2015}. Additionally, even for off-line inference in settings where measurements are obtained sequentially at different points in time, off-line MCMC methods tend to induce a larger computational cost than on-line particle filter methods, which can be important, e.g., when optimizing inspection and monitoring \citep{PAPAKONSTANTINOU_2014, Luque_2019, Kamariotis_2022b}. Questions that we investigate in this context include: Can one accurately quantify the uncertainty in the posterior parameter estimates when employing on-line particle filter methods for parameter estimation only purposes? How does this estimation compare against the posterior estimates obtained with off-line MCMC methods? How does the estimation accuracy depend on the nature of the problem, i.e., dimensionality, nonlinearity, or non-Gaussianity? What is the computational cost induced by the different methods? 
Ideally, one would opt for the method that provides sufficiently accurate posterior results at the least computational cost. To address these questions, this paper reviews, adapts and selects algorithms for parameter estimation, and performs a comparative assessment of off-line and on-line filters specifically tailored to Bayesian parameter estimation. The comparative assessment results in a set of suggestions on the choice of the appropriate algorithm as a function of the problem characteristics. The paper is structured as follows. Section \ref{sec:algorithms} provides a detailed description of on-line and off-line Bayesian inference in the context of parameter estimation only. Three different selected and adapted methods are presented in full algorithmic detail, namely an on-line particle filter with Gaussian mixture-based resampling (PFGM) \citep{Merwe_2003, McLachlan_2007}, the on-line iterated batch importance sampling filter (IBIS) \citep{Chopin_2002}, which performs off-line MCMC steps with a Gaussian mixture as a proposal distribution, and an off-line MCMC-based sequential Monte Carlo (SMC) filter \citep{Del_Moral_2006}, which enforces tempering of the likelihood function (known as simulated annealing) to arrive sequentially at the single final posterior density of interest \citep{Neal_2001, Jasra_2011}. The tPFGM and tIBIS variants, which adapt the PFGM and IBIS filters by employing tempering of the likelihood function of each new measurement, are further presented and proposed for problems with a high amount of sensor information. Section \ref{sec:Numerical_investigations} describes the two case studies that serve as the basis for numerical investigations, one nonlinear, non-Gaussian and low-dimensional and one linear, Gaussian and high-dimensional. 
MATLAB codes implementing the different algorithms and applying them to the two case studies introduced in this paper are made publicly available via a GitHub repository\footnote{\url{https://github.com/antoniskam/Offline_online_Bayes}}. Section 4 summarizes the findings of this comparative assessment, provides suggestions on the choice of the appropriate method according to the nature of the problem, discusses cases that are not treated in our investigations, and concludes this work. \section{On-line and off-line Bayesian filtering for time-invariant parameter estimation} \label{sec:algorithms} This work assumes the availability of a stochastic deterioration model $D$, parametrized by a vector $\boldsymbol{\theta}\in{\rm I\!R}^{d}$ that collects the $d$ uncertain time-invariant parameters influencing the deterioration process. In the Bayesian framework, $\boldsymbol{\theta}$ is modeled as a vector of random variables with a prior distribution $\pi_{\text{pr}}(\boldsymbol{\theta})$. We assume that the deterioration process is monitored via a permanently installed monitoring system. Long-term monitoring of a deterioration process leads to sets of noisy measurements $\{y_1,\dots, y_n\}$ obtained sequentially at different points in time $\{t_1,\dots, t_n\}$ throughout the lifetime of a structural component/system. Such measurements can be used to update the distribution of $\boldsymbol{\theta}$; this task is referred to as Bayesian parameter estimation. Within a deterioration monitoring setting, Bayesian parameter estimation can be performed either in an on-line or an off-line framework \citep{Kantas_2015}, depending on the task of interest. In an on-line framework, one is interested in updating the distribution of $\boldsymbol{\theta}$ in a sequential manner, i.e., at every time step $t_n$ when a new measurement $y_n$ becomes available, conditional on all measurements available up to $t_n$. 
Thus, in an on-line framework, inference of the sequence of posterior densities $\{\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n})\}_{n\geq 1}$ is the goal, where $\mathbf{y}_{1:n}$ denotes the components $\{y_1,\dots, y_n\}$. We point out that in this paper the term on-line does not relate to ``real-time" estimation, although on-line algorithms are also used in real-time estimation \citep{Chatzi_2009, Russel_2021}. In contrast, in an off-line framework, inference of $\boldsymbol{\theta}$ is performed at a fixed time step $t_N$ using a fixed set of measurements $\{y_1,\dots,y_N\}$, and the single posterior density $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:N})$ is sought, which can be estimated via Bayes' rule as \begin{equation} \pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:N}) \propto L(\mathbf{y}_{1:N}|\boldsymbol{\theta}) \pi_{\text{pr}}(\boldsymbol{\theta}), \label{eq:Bayes} \end{equation} where $L(\mathbf{y}_{1:N}|\boldsymbol{\theta})$ denotes the likelihood function of the whole measurement set $\mathbf{y}_{1:N}$ given the parameters $\boldsymbol{\theta}$. With the assumption that measurements are independent given the parameter state, $L(\mathbf{y}_{1:N}|\boldsymbol{\theta})$ can be expressed as a product of the likelihoods $L(y_n|\boldsymbol{\theta})$ as \begin{equation} L(\mathbf{y}_{1:N}|\boldsymbol{\theta})=\prod_{n=1}^N L(y_n|\boldsymbol{\theta}). \label{eq:likelihood_product} \end{equation} MCMC methods sample from $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:N})$ via simulation of a Markov chain with $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:N})$ as its stationary distribution, e.g., by performing Metropolis Hastings (MH) steps \citep{Hastings_1970}. MCMC methods do not require estimation of the normalization constant in Equation \eqref{eq:Bayes}. 
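As a minimal numerical illustration of Equations \eqref{eq:Bayes} and \eqref{eq:likelihood_product}, the unnormalized log-posterior is simply the log-prior plus a sum of per-measurement log-likelihoods. The following Python sketch uses hypothetical helper names (the paper's reference implementation is in MATLAB):

```python
def log_posterior(theta, measurements, log_prior, log_lik_one):
    """Unnormalized log-posterior of Equation (1), using the factorized
    likelihood of Equation (2): measurements independent given theta.
    log_prior and log_lik_one are hypothetical user-supplied callables."""
    return log_prior(theta) + sum(log_lik_one(y_n, theta) for y_n in measurements)
```

Working in log scale avoids the numerical underflow that the direct product in Equation \eqref{eq:likelihood_product} would cause for large $N$.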
However, in the on-line framework, MCMC methods are impractical, since they require simulating anew a different Markov chain for each new posterior $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n})$, and the previously generated Markov chain for the posterior estimation of $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n-1})$ is not accounted for, except when choosing the seed for initializing the new Markov chain. This implies that MCMC methods quickly become computationally prohibitive in the on-line framework, even for small $n$. An additional computational burden stems from the fact that each step within the MCMC sampling process requires evaluation of the full likelihood function $L(\mathbf{y}_{1:n}|\boldsymbol{\theta})$, i.e., the whole set of measurements $\mathbf{y}_{1:n}$ needs to be processed. This leads to increasing computational complexity for increasing $n$, and can render use of MCMC methods computationally inefficient even for off-line inference, especially when $N$ is large. On-line particle filters \citep{Sarkka_2013, Kantas_2015} operate in a sequential fashion by making use of the Markovian property of the employed state-space representation, i.e., they compute $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n})$ solely based on $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n-1})$ and the new measurement $y_n$. The typical use of particle filters targets the tracking of a system's response (dynamic state) by means of a state-space representation \citep{Gordon_1993, Sarkka_2013}, while they are often also used for joint state-parameter estimation tasks, wherein the state-space is augmented to include the model parameters to be estimated \citep{Sarkka_2013, Kantas_2015}. 
In addition, particle filters can also be applied for pure recursive estimation of time-invariant parameters, for which the noise in the dynamic model is formally zero \citep{Del_Moral_2006, Sarkka_2013}, although this is not the typical setting for application of particle filters. The Markovian discrete-time state-space representation for the case of time-invariant parameter estimation only is given in Equations \eqref{eq:state_space_a} and \eqref{eq:state_space_b}: \begin{subequations} \begin{align} \boldsymbol{\theta}_n &= \boldsymbol{\theta}_{n-1} \label{eq:state_space_a}\\ y_n &= D_n\left(\boldsymbol{\theta}_n\right)\exp \left( \epsilon_n \right)\label{eq:state_space_b} \end{align} \label{eq:state_space} \end{subequations} where $\epsilon_n$ models the error/noise of the measurement at time $t_n$, and $\boldsymbol{\theta}_n$ denotes the time-invariant parameter vector at time step $n$. The dynamic equation for the time-invariant parameters \eqref{eq:state_space_a} is introduced for the sole purpose of casting the problem into a state-space representation. Since the measurements are assumed independent given the parameter state, the errors $\epsilon_n$ in Equation \eqref{eq:state_space_b} are independent. It should be noted that the measurement error, which is introduced in multiplicative form in Equation \eqref{eq:state_space_b}, is commonly expressed in an additive form \citep{Corbetta_2018}. Indeed, Equation \eqref{eq:state_space_b} can be reformulated in the logarithmic scale, whereby the measurement error is expressed in an additive form. All target distributions of interest in the sequence $\pi_{\text{pos}}(\boldsymbol{\theta}_n|\mathbf{y}_{1:n})$ are defined on the same space of $\boldsymbol{\theta}\in{\rm I\!R}^{d}$. In the remainder of this paper, the subscript $n$ will therefore be dropped from $\boldsymbol{\theta}_n$. As previously discussed, particle filters are mainly used for on-line inference. 
However, these can also be used in exactly the same way for off-line inference, where only a single posterior density $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:N})$ is of interest. In this case, particle filters process the measurements successively and arrive sequentially at the final posterior density of interest by estimating all the intermediate distributions. \subsection{On-line Particle Filter} \label{subsec:PF} Particle filter (PF) methods, also referred to as sequential Monte Carlo (SMC) methods, are importance sampling-based techniques that use a set of weighted samples $\{(\boldsymbol{\theta}_n^{(i)}, w_n^{(i)}):\ i=1,\dots, N_{\text{par}}\}$, called particles, to represent the posterior distribution of interest at estimation time step $n$, $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n})$. PFs form the following approximation to the posterior distribution of interest: \begin{equation} \pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n}) \approx \sum_{i=1}^{N_{\text{par}}} w_n^{(i)} \delta (\boldsymbol{\theta}-\boldsymbol{\theta}_n^{(i)}) \end{equation} where $\delta$ denotes the Dirac delta function. When a new measurement $y_n$ becomes available, PFs shift from $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n-1})$ to $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n})$ by importance sampling using an appropriately chosen importance distribution, which results in a reweighting procedure (updating of the weights). An important issue that arises from this weight updating procedure is the sample degeneracy problem \citep{Sarkka_2013}. This relates to the fact that the importance weights $w_n^{(i)}$ become more unevenly distributed with each updating step. In most cases, after a certain number of updating steps, the weights of almost all the particles assume values close to zero (see Figure \ref{fig:degen_impov}). 
This problem is alleviated by the use of adaptive resampling procedures based on the effective sample size $N_{\text{eff}}=1/\sum_{i=1}^{N_{\text{par}}} \left(w_n^{(i)}\right)^2$ \citep{Liu_1998}. Most commonly, resampling is performed with replacement according to the particle weights whenever $N_{\text{eff}}$ drops below a user-defined threshold $N_{\text{T}} = cN_{\text{par}}, \ c \in[0,1]$. Resampling introduces additional variance to the parameter estimates \citep{Sarkka_2013}. In the version of the PF algorithm presented in Algorithm \ref{alg:pf}, the dynamic model of Equation \eqref{eq:state_space} is used as the importance distribution, as originally proposed in the bootstrap filter by \cite{Gordon_1993}. \begin{algorithm} \caption{Particle Filter (PF)}\label{alg:pf} \begin{algorithmic}[1] \State generate $N_{\text{par}}$ initial particles $\boldsymbol{\theta}^{(i)}$ from $\pi_{\text{pr}}(\boldsymbol{\theta})$, \hspace{5mm} $i=1,\dots,N_{\text{par}}$ \State assign initial weights $w_0^{(i)}=1/N_{\text{par}}$, \hspace{5mm} $i=1,\dots,N_{\text{par}}$ \For{$n=1,\dots,N$} \State evaluate likelihood of the particles based on new measurement $y_n$, $L_n^{(i)} = L\left(y_n \mid \boldsymbol{\theta}^{(i)}\right) $ \State update particle weights $w_n^{(i)} \propto L_n^{(i)} \cdot w_{n-1}^{(i)}$ and normalize $\mathrm{s.t.} \; \sum_{i=1}^{N_{\text{par}}} w_n^{(i)} = 1$ \State evaluate $N_{\text{eff}}=\frac{1}{\sum_{i=1}^{N_{\text{par}}} \left(w_n^{(i)}\right)^2}$ \If{$N_{\text{eff}}<N_{\text{T}}$} \State resample particles $\boldsymbol{\theta}^{(i)}$ with replacement according to $w_n^{(i)}$ \State reset particle weights to $w_n^{(i)}=1/N_{\text{par}}$ \EndIf \EndFor \end{algorithmic} \end{algorithm} \begin{figure} \caption{Sample degeneracy and impoverishment} \label{fig:degen_impov} \end{figure} When using PFs to estimate time-invariant parameters, for which the process noise in the dynamic equation is zero, one runs into the issue of sample impoverishment 
\citep{Sarkka_2013}. The origin of this issue is the resampling process. More specifically, after a few resampling steps, most (or in extreme cases all) of the particles in the sample set end up assuming exactly the same value, i.e., the particle set consists of only a few (or one) distinct particles (see Figure \ref{fig:degen_impov}). The sample impoverishment issue poses the greatest obstacle for time-invariant parameter estimation with PFs. A multitude of techniques have been suggested in the literature to alleviate the sample impoverishment issue in joint state-parameter estimation setups \citep[see, e.g.,][]{Gilks_2001, Liu_2001, Musso_2001, Storvik_2002, Andrieu_2004, Andrieu_2010, Chopin_2013, Chatzi_2013}. Fewer works have proposed solutions for resolving this issue in parameter estimation only setups \citep[see, e.g.,][]{Chopin_2002, Del_Moral_2006}. One of the simplest and most commonly used approaches consists of introducing artificial dynamics in the dynamic model of the parameter vector, i.e., the dynamic model $\boldsymbol{\theta}_n=\boldsymbol{\theta}_{n-1}+\boldsymbol{\epsilon}_{n-1}$ is employed, where $\boldsymbol{\epsilon}_{n-1}$ is a small artificial process noise \citep{Kitagawa_1998}. In this way, the time-invariant parameter vector is transformed into a time-variant one; the parameter estimation problem therefore deviates from the original one \citep{Sarkka_2013, Kantas_2015}. This approach can introduce a bias and an artificial variance inflation in the estimates \citep{Kantas_2015}. For these reasons, this approach is not considered in this paper. To resolve the sample impoverishment issue encountered when using the PF Algorithm \ref{alg:pf} for parameter estimation only, this work employs the particle filter with Gaussian mixture resampling (PFGM), described in Algorithm \ref{alg:PF_GM}. 
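For concreteness, the reweighting and standard resampling steps of the plain PF (Algorithm \ref{alg:pf}) — the resampling that PFGM replaces — can be sketched in Python as follows (an illustrative sketch with hypothetical names, not the paper's MATLAB implementation):

```python
import numpy as np

def pf_update(theta, w, lik_n, c=0.5, rng=None):
    """One reweighting step of the plain PF (Algorithm 1) for a static
    parameter: multiply the weights by the likelihood of the new
    measurement, normalize, and resample with replacement when the
    effective sample size drops below the threshold N_T = c * N_par."""
    rng = np.random.default_rng() if rng is None else rng
    w = w * lik_n(theta)           # w_n proportional to L_n * w_{n-1}
    w = w / w.sum()                # normalize s.t. weights sum to one
    n_eff = 1.0 / np.sum(w**2)     # effective sample size
    if n_eff < c * len(w):
        # standard resampling: duplicates high-weight particles, which is
        # precisely what causes impoverishment for static parameters
        idx = rng.choice(len(w), size=len(w), p=w)
        theta = theta[idx]
        w = np.full(len(w), 1.0 / len(w))
    return theta, w
```

After a concentrated likelihood update, the resampled set contains many duplicates, illustrating why repeated resampling collapses a static-parameter particle set onto few distinct values.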
The PFGM algorithm relates to pre-existing concepts \citep{Merwe_2003,Veettil_2016}, and is here specifically suggested for the parameter estimation only task, with its main goal being, in contrast to previous works, the quantification of the full posterior parameter uncertainty. A comparison between Algorithms \ref{alg:pf} and \ref{alg:PF_GM} shows that the only difference lies in the way that the resampling step is performed. PFGM replaces the standard resampling process of PF by first approximating the posterior distribution at estimation step $n$ by a Gaussian mixture model (GMM), which is fitted via the Expectation-Maximization (EM) algorithm \citep{McLachlan_2007, Chen_2010} on the weighted particle set. The degenerating particle set is then rejuvenated by sampling $N_{\text{par}}$ new particles from the GMM of Equation \eqref{eq:GMM}, \begin{equation} p\left( \boldsymbol{\theta} \mid \mathbf{y}_{1:n}\right) \approx \sum_{i=1}^{N_{\text{GM}}} \phi_{i} \mathcal{N} \left( \boldsymbol{\theta}; \boldsymbol{\mu}_{\mathbf{i}}, \mathbf{\Sigma_{i}} \right) \label{eq:GMM} \end{equation} where $\phi_{i}$ represents the weight of the Gaussian component $i$, while $\boldsymbol{\mu}_{\mathbf{i}}$ and $\mathbf{\Sigma_{i}}$ are the respective mean vector and covariance matrix. The number of Gaussians in the mixture, $N_{\text{GM}}$, has to be chosen in advance or can be estimated by use of appropriate algorithms \citep{Schubert_2017, Celeux_2019, Geyer_2019}. In the numerical investigations of Section \ref{sec:Numerical_investigations}, we set $N_{\text{GM}}=8$. We point out that the efficacy of PFGM strongly depends on the quality of the GMM posterior approximation. The reason for applying a GMM (and not a single Gaussian) is that the posterior distribution can deviate from the normal distribution, and can even be multimodal or heavy-tailed. 
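A weighted EM fit of the mixture in Equation \eqref{eq:GMM} to the particle set, followed by sampling new particles from it, can be sketched as follows. This is an illustrative Python sketch only: the component count, iteration budget and initialization are assumptions, not the paper's settings (which use $N_{\text{GM}}=8$ and a MATLAB implementation):

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_gmm(theta, w, n_gm=2, n_iter=100, seed=1):
    """Weighted EM fit of the Gaussian mixture of Equation (5) to a
    weighted particle set {theta^(i), w^(i)} (minimal sketch)."""
    n, d = theta.shape
    rng = np.random.default_rng(seed)
    mu = theta[rng.choice(n, n_gm, replace=False, p=w)]   # init means from particles
    cov = np.tile(np.cov(theta.T, aweights=w).reshape(d, d), (n_gm, 1, 1))
    phi = np.full(n_gm, 1.0 / n_gm)
    for _ in range(n_iter):
        # E-step: responsibilities, scaled by the particle weights
        dens = np.column_stack([phi[k] * multivariate_normal.pdf(theta, mu[k], cov[k])
                                for k in range(n_gm)])
        r = w[:, None] * dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: component weights phi_k, means mu_k, covariances Sigma_k
        nk = r.sum(axis=0)
        phi = nk / nk.sum()
        mu = (r.T @ theta) / nk[:, None]
        for k in range(n_gm):
            diff = theta - mu[k]
            cov[k] = (r[:, k, None] * diff).T @ diff / nk[k] + 1e-9 * np.eye(d)
    return phi, mu, cov

def sample_gmm(phi, mu, cov, n_par, rng):
    """Rejuvenate the particle set: draw n_par new particles from the GMM."""
    ks = rng.choice(len(phi), size=n_par, p=phi)
    return np.stack([rng.multivariate_normal(mu[k], cov[k]) for k in ks])
```

By construction of the M-step, the mixture mean $\sum_k \phi_k \boldsymbol{\mu}_k$ matches the weighted particle mean, so the rejuvenated set preserves the posterior mean estimate.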
\begin{algorithm} \caption{Particle Filter with Gaussian mixture resampling (PFGM)}\label{pfgm} \begin{algorithmic}[1] \State generate $N_{\text{par}}$ initial particles $\boldsymbol{\theta}^{(i)}$ from $\pi_{\text{pr}}(\boldsymbol{\theta})$, \hspace{5mm} $i=1,\dots,N_{\text{par}}$ \State assign initial weights $w_0^{(i)}=1/N_{\text{par}}$, \hspace{5mm} $i=1,\dots,N_{\text{par}}$ \For{$n=1,\dots,N$} \State evaluate likelihood of the particles based on new measurement $y_n$, $L_n^{(i)} = L\left( y_n \mid \boldsymbol{\theta}^{(i)}\right) $ \State update particle weights $w_n^{(i)} \propto L_n^{(i)} \cdot w_{n-1}^{(i)}$ and normalize $\mathrm{s.t.} \; \sum_{i=1}^{N_{\text{par}}} w_n^{(i)} = 1$ \State evaluate $N_{\text{eff}}=\frac{1}{\sum_{i=1}^{N_{\text{par}}} \left(w_n^{(i)}\right)^2}$ \If{$N_{\text{eff}}<N_{\text{T}}$} \State EM: fit a Gaussian mixture proposal distribution $g_{\text{GM}}(\boldsymbol{\theta})$ according to $\{\boldsymbol{\theta}^{(i)},w_n^{(i)}\}$ \State sample $N_{\text{par}}$ new particles $\boldsymbol{\theta}^{(i)}$ from $g_{\text{GM}}(\boldsymbol{\theta})$ \State reset particle weights to $w_n^{(i)}=1/N_{\text{par}}$ \EndIf \EndFor \end{algorithmic} \label{alg:PF_GM} \end{algorithm} The simple reweighting procedure used in the on-line PFs is based on the premise that $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n-1})$ and $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n})$ are likely to be similar, i.e., that the new measurement $y_n$ will not cause a very large change in the posterior. However, when that is not the case, this simple reweighting procedure is bound to perform poorly, leading to very fast degeneration of the particle set. In cases where the first measurement set $y_1$ is already strongly informative relative to the prior, the PF is bound to degenerate strongly in the very first weight updating step (e.g., we observe this in the second case study of Section \ref{subsec:RF} in the case of 10 sensors). 
To counteract this issue, in this paper we incorporate the idea of simulated annealing (enforcing tempering of the likelihood function) \citep{Neal_2001} when needed within the on-line PFGM algorithm, which we term tPFGM (Algorithm \ref{alg:pf_gm_temp}). The tPFGM algorithm draws inspiration from previous works \citep{Gall_2007, Deutscher_2000}, but is here suggested for the parameter estimation only task, targeting the quantification of the full posterior parameter uncertainty. The algorithm operates as follows: at estimation time step $n$, before performing the reweighting operation, the algorithm first checks the updated effective sample size for indication of sample degeneracy. If no degeneracy is detected, tPFGM operates exactly like PFGM. When sample degeneracy occurs, tPFGM employs adaptive tempering of the likelihood $L\left(y_n \mid \boldsymbol{\theta}\right)$ of the new measurement $y_n$ in order to move ``sequentially" from $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n-1})$ to $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n})$ by visiting a sequence of artificial intermediate posteriors, as defined by the tempered likelihood function $L^q\left(y_n \mid \boldsymbol{\theta}\right)$. The tempering factor $q$ takes values between 0 and 1. When $q=0$, the new measurement $y_n$ is neglected, while $q=1$ entails considering the whole likelihood function of $y_n$, thus reaching $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n})$. The intermediate values of $q$ are adaptively selected by solving the equation in line 11 of Algorithm \ref{alg:pf_gm_temp}, which ensures that the effective sample size does not drop below the threshold $N_{\text{T}}$ for the chosen $q$ value. Naturally, use of tPFGM can trigger more resampling events than PFGM, as resampling can occur more than once within a time step $n$. 
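The adaptive selection of the tempering increment can be sketched in Python with a root-finding step, under the assumption (typically satisfied) that the effective sample size decreases monotonically in the increment $dq$; function and variable names here are illustrative, not the paper's MATLAB code:

```python
import numpy as np
from scipy.optimize import brentq

def next_q(w, lik, q, n_t):
    """Adaptive tempering step of the tPFGM filter: jump directly to
    q = 1 if the effective sample size stays above the threshold n_t,
    otherwise solve ESS(dq) = n_t for the largest admissible increment."""
    def ess(dq):
        u = w * lik**dq                    # auxiliary weights w_a * L^dq
        return u.sum()**2 / (u**2).sum()   # effective sample size
    if ess(1.0 - q) >= n_t:
        return 1.0                         # remaining likelihood fits in one step
    dq = brentq(lambda x: ess(x) - n_t, 1e-12, 1.0 - q)
    return min(q + dq, 1.0)
```

At the returned value of $q$, the effective sample size sits exactly at the threshold $N_{\text{T}}$, so a resampling/rejuvenation step follows before the next increment is taken.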
\begin{algorithm}[!ht] \caption{Particle Filter with Gaussian mixture resampling and likelihood tempering (tPFGM)}\label{alg:pf_gm_temp} \begin{algorithmic}[1] \State generate $N_{\text{par}}$ initial particles $\boldsymbol{\theta}^{(i)}$ from $\pi_{\text{pr}}(\boldsymbol{\theta})$, \hspace{5mm} $i=1,\dots,N_{\text{par}}$ \State assign initial weights $w_0^{(i)}=1/N_{\text{par}}$, \hspace{5mm} $i=1,\dots,N_{\text{par}}$ \For{$n=1,\dots,N$} \State evaluate likelihood of the particles based on new measurement $y_n$, $L_n^{(i)} = L\left(y_n \mid \boldsymbol{\theta}^{(i)}\right) $ \State set $q=0$ and create auxiliary particle weights $w_a^{(i)}=w_{n-1}^{(i)}$ \While{$q\neq1$} \If {$N_{\text{eff}} = \left(\sum_{i=1}^{N_{\text{par}}} w_a^{(i)} \cdot {L_n^{(i)}}^{1-q} \right)^2 / \sum_{i=1}^{N_{\text{par}}} \left(w_a^{(i)} \cdot {L_n^{(i)}}^{1-q}\right)^2 >N_{\text{T}}$} \State update auxiliary particle weights $w_a^{(i)} \propto w_a^{(i)} \cdot {L_n^{(i)}}^{1-q}$ and normalize $\mathrm{s.t.} \sum_{i=1}^{N_{\text{par}}} w_a^{(i)} = 1$ \State set $q=1$ \Else \State solve $\left(\sum_{i=1}^{N_{\text{par}}} w_a^{(i)} \cdot {L_n^{(i)}}^{dq} \right)^2 / \sum_{i=1}^{N_{\text{par}}} \left(w_a^{(i)} \cdot {L_n^{(i)}}^{dq}\right)^2 - N_{\text{T}} = 0$ for $dq$ \State set $q_{\text{new}} = \min \left[q+dq,1\right]$ \State set $dq=q_{\text{new}}-q$ and $q=q_{\text{new}}$ \State update auxiliary particle weights $w_a^{(i)} \propto w_a^{(i)} \cdot {L_n^{(i)}}^{dq}$ and normalize $\mathrm{s.t.} \sum_{i=1}^{N_{\text{par}}} w_a^{(i)} = 1$ \State EM: fit a Gaussian mixture proposal distribution $g_{\text{GM}}(\boldsymbol{\theta})$ according to $\{\boldsymbol{\theta}^{(i)},w_a^{(i)}\}$ \State sample $N_{\text{par}}$ new particles $\boldsymbol{\theta}^{(i)}$ from $g_{\text{GM}}(\boldsymbol{\theta})$ \State reset auxiliary particle weights to $w_a^{(i)}=1/N_{\text{par}}$ \EndIf \EndWhile \State set $w_n^{(i)}=w_a^{(i)}$ \EndFor \end{algorithmic} \end{algorithm} The PFGM and tPFGM filters 
rely entirely on the posterior approximation via a GMM for sampling $N_{\text{par}}$ new particles during the resampling process. However, there is no guarantee that these new particles follow the true posterior distribution of interest. The IBIS filter of the following Section \ref{subsec:IBIS} aims at addressing this issue. \subsection{Iterated Batch Importance Sampling} \label{subsec:IBIS} Implementing MCMC steps within PF methods to move the particles after a resampling step was originally proposed by \cite{Gilks_2001}, in the so-called resample-move algorithm. \cite{Chopin_2002} introduced a special case of the resample-move algorithm, specifically tailored for application to static parameter estimation only purposes, namely the iterated batch importance sampling (IBIS) filter. IBIS was originally introduced as an iterative method for solving off-line estimation tasks by incorporating the sequence of measurements one at a time. In doing so, the algorithm visits the sequence of intermediate posteriors within its process, and can therefore also be used to perform on-line estimation tasks. An on-line version of the IBIS filter is presented in Algorithm \ref{alg:ibis}, used in conjunction with the MCMC routine of Algorithm \ref{alg:mh_gm}. The core idea of the IBIS filter is the following: at estimation step $n$, if sample degeneracy is identified, first the particles are resampled with replacement, and subsequently the resampled particles are moved with a Markov chain transition kernel whose stationary distribution is $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n})$. More specifically, each of the $N_{\text{par}}$ resampled particles is used as the seed to perform a single MCMC step. This approach is inherently different from standard applications of MCMC, where a transition kernel is applied multiple times on one particle. A question that arises is how to choose the Markov chain transition kernel. 
\cite{Chopin_2002} argues for choosing a transition kernel that ensures that the proposed particle only weakly depends on the seed particle value. It is therefore recommended to use an independent Metropolis-Hastings (IMH) kernel, wherein the proposed particle is sampled from a proposal distribution $g$, which has to be as close as possible to the target distribution $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n})$. In obtaining such a proposal distribution, along the lines of what is described in Section \ref{subsec:PF}, in this work we employ a GMM approximation (see Equation \eqref{eq:GMM}) of the target distribution as the proposal density $g_{\text{GM}}(\boldsymbol{\theta})$ within the IMH kernel \citep{Papaioannou_2016, South_2019}. The IMH kernel with a GMM proposal distribution is denoted IMH-GM herein. The acceptance probability (line 6 of Algorithm \ref{alg:mh_gm}) of the IMH-GM kernel is a function of both the initial seed particle and the GMM proposed particle. The acceptance rate can indicate how efficient the IMH-GM kernel is in performing the MCMC move step within the IBIS algorithm. It is important to note that when computing the acceptance probability, a call of the full likelihood function is invoked, which requires the whole set of measurements $y_{1:n}$ to be processed; this leads to a significant additional computational demand, which pure on-line methods are not supposed to accommodate \citep{Doucet_2001}. The performance of the IBIS sampler depends highly on the mixing properties of the IMH-GM kernel. If the kernel leads to slowly decreasing chain auto-correlation, the moved particles are bound to remain in regions close to the particles obtained by the resampling step. This can lead to an underrepresentation of the parameter space of the intermediate posterior distribution. It might therefore be beneficial to add a burn-in period within the IMH-GM kernel \citep{Del_Moral_2006}. 
Implementing that is straightforward and is shown in Algorithm \ref{alg:mh_gm}, where $n_\text{B}$ is the user-defined number of burn-in steps. Naturally, the computational cost of the IMH-GM routine increases linearly with the number of burn-in steps. \begin{algorithm} \caption{Independent Metropolis-Hastings with GM proposal (IMH-GM)}\label{alg:mh_gm} \begin{algorithmic}[1] \State \textbf{IMH-GM Input}: $\{\boldsymbol{\theta}^{(i)},L^{(i)}\cdot \pi_{\text{pr}}(\boldsymbol{\theta}^{(i)})\}$, $\pi_{\text{pr}}(\boldsymbol{\theta})$, $L(\mathbf{y}_{1:n} | \boldsymbol{\theta})$ and $g_{\text{GM}}(\boldsymbol{\theta})$ \For{$i=1,\dots,N_{\text{par}}$} \For{$j=1,\dots,n_\text{B}+1$} \State sample candidate particle $\boldsymbol{\theta}^{(i)}_{c,j}$ from $g_{\text{GM}}(\boldsymbol{\theta})$ \State evaluate $L^{(i)}_{c,j}= L(\mathbf{y}_{1:n} | \boldsymbol{\theta}^{(i)}_{c,j}) $ for candidate particle \State evaluate acceptance ratio $\alpha = \min\left[1,\frac{L^{(i)}_{c,j}\cdot \pi_{\text{pr}}(\boldsymbol{\theta}^{(i)}_{c,j}) \cdot g_{\text{GM}}(\boldsymbol{\theta}^{(i)})}{L^{(i)}\cdot \pi_{\text{pr}}(\boldsymbol{\theta}^{(i)}) \cdot g_{\text{GM}}(\boldsymbol{\theta}^{(i)}_{c,j})}\right]$ \State generate uniform random number $u \in [0,1]$ \If{$u<\alpha$} \State replace $\{\boldsymbol{\theta}^{(i)},L^{(i)}\cdot \pi_{\text{pr}}(\boldsymbol{\theta}^{(i)})\}$ with $\{\boldsymbol{\theta}^{(i)}_{c,j},L^{(i)}_{c,j}\cdot \pi_{\text{pr}}(\boldsymbol{\theta}^{(i)}_{c,j})\}$ \EndIf \EndFor \EndFor \State \textbf{IMH-GM Output}: $\{\boldsymbol{\theta}^{(i)},L^{(i)}\cdot \pi_{\text{pr}}(\boldsymbol{\theta}^{(i)})\}$ \end{algorithmic} \end{algorithm} Algorithm \ref{alg:ibis} details the workings of the IMH-GM-based IBIS filter used in this work. In line 11 of this algorithm, the IMH-GM routine of Algorithm \ref{alg:mh_gm} is called, which implements the IMH-GM kernel for the MCMC move step.
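For concreteness, a single step of the IMH-GM kernel (lines 4--9 of Algorithm \ref{alg:mh_gm}) can be sketched as follows in Python/NumPy. The function and argument names are illustrative, not part of the algorithm above, and all densities are handled in logarithmic scale for numerical stability:

```python
import numpy as np

def imh_step(theta, log_post_theta, propose, log_post, log_g, rng):
    """One independent Metropolis-Hastings step (log scale).

    theta          : current particle (the seed)
    log_post_theta : cached log unnormalized posterior of theta
    propose        : function drawing a candidate from the proposal g
    log_post       : log unnormalized posterior (log-likelihood + log-prior)
    log_g          : log density of the proposal g (normalization may be dropped)
    """
    cand = propose(rng)
    log_post_cand = log_post(cand)
    # log acceptance ratio of the IMH kernel:
    # [pi(cand) * g(theta)] / [pi(theta) * g(cand)]
    log_alpha = (log_post_cand + log_g(theta)) - (log_post_theta + log_g(cand))
    if np.log(rng.uniform()) < log_alpha:
        return cand, log_post_cand      # accept the candidate
    return theta, log_post_theta        # keep the seed
```

Caching the log posterior of the seed avoids re-evaluating the (expensive) full likelihood when the candidate is rejected.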
Comparing Algorithms \ref{alg:PF_GM} and \ref{alg:ibis}, it is clear that both filters can be used for on-line inference within a single run, but the IBIS filter has significantly larger computational cost, as will also be demonstrated in the numerical investigations of Section \ref{sec:Numerical_investigations}. In the same spirit as the proposed tPFGM filter of Algorithm \ref{alg:pf_gm_temp}, which applies simulated annealing (tempering of the likelihood function) in cases when $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n-1})$ and $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:n})$ are likely to be quite different, the same idea can be implemented also within the IBIS algorithm. That leads to what we refer to as the tIBIS algorithm in this paper. \begin{algorithm} \caption{IMH-GM-based Iterated Batch Importance Sampling (IBIS)}\label{alg:ibis} \begin{algorithmic}[1] \State generate $N_{\text{par}}$ initial particles $\boldsymbol{\theta}^{(i)}$ from $\pi_{\text{pr}}(\boldsymbol{\theta})$, \hspace{5mm} $i=1,\dots,N_{\text{par}}$ \State assign initial weights $w_0^{(i)}=1/N_{\text{par}}$, \hspace{5mm} $i=1,\dots,N_{\text{par}}$ \For{$n=1,\dots,N$} \State evaluate likelihood of the particles based on new measurement $y_n$, $L_n^{(i)} = L\left( y_n \mid \boldsymbol{\theta}^{(i)}\right) $ \State evaluate the new target distribution, $L(\mathbf{y}_{1:n} | \boldsymbol{\theta}^{(i)}) \cdot \pi_{\text{pr}}\left(\boldsymbol{\theta}^{(i)}\right) = L_n^{(i)} \cdot L(\mathbf{y}_{1:n-1} | \boldsymbol{\theta}^{(i)}) \cdot \pi_{\text{pr}}\left(\boldsymbol{\theta}^{(i)}\right)$ \State update particle weights $w_n^{(i)} \propto L_n^{(i)} \cdot w_{n-1}^{(i)}$ and normalize $\mathrm{s.t.} \; \sum_{i=1}^{N_{\text{par}}} w_n^{(i)} = 1$ \State evaluate $N_{\text{eff}}=\frac{1}{\sum_{i=1}^{N_{\text{par}}} \left(w_n^{(i)}\right)^2}$ \If{$N_{\text{eff}}<N_{\text{T}}$} \State EM: fit a Gaussian mixture proposal distribution $g_{\text{GM}}(\boldsymbol{\theta})$ according to
$\{\boldsymbol{\theta}^{(i)},w_n^{(i)}\}$ \State resample $N_{\text{par}}$ new particles $\{\boldsymbol{\theta}^{(i)},L(\mathbf{y}_{1:n} | \boldsymbol{\theta}^{(i)}) \cdot \pi_{\text{pr}}(\boldsymbol{\theta}^{(i)})\}$ with replacement according to $w_n^{(i)}$ \State IMH-GM step with inputs $\{\boldsymbol{\theta}^{(i)},L(\mathbf{y}_{1:n} | \boldsymbol{\theta}^{(i)}) \cdot \pi_{\text{pr}}(\boldsymbol{\theta}^{(i)})\}$, $\pi_{\text{pr}}(\boldsymbol{\theta})$, $L(\mathbf{y}_{1:n} | \boldsymbol{\theta})$ and $g_{\text{GM}}(\boldsymbol{\theta})$ \State reset particle weights to $w_n^{(i)}=1/N_{\text{par}}$ \EndIf \EndFor \end{algorithmic} \end{algorithm} \subsection{Off-line Sequential Monte Carlo sampler} \label{subsec:SMC} In Section 4 of \cite{Del_Moral_2006}, the authors presented a generic approach to convert an off-line MCMC sampler into a sequential Monte Carlo (SMC) sampler tailored for performing off-line estimation tasks, i.e., for estimating the single posterior density of interest $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:N})$. The off-line SMC sampler used in this work is presented in Algorithm \ref{smc} based on \cite{Del_Moral_2006} and \cite{Jasra_2011}. The key idea of this sampler is to adaptively construct the following artificial sequence of densities, \begin{equation} \pi_{j}(\boldsymbol{\theta}|\mathbf{y}_{1:N}) \propto L^{q_j}(\mathbf{y}_{1:N}|\boldsymbol{\theta}) \pi_{\text{pr}}(\boldsymbol{\theta}) \label{eq:Bayes_temp} \end{equation} where $q_j$ is a tempering parameter which takes values between 0 and 1, in order to ``sequentially'' sample in a smooth manner from the prior to the final single posterior density of interest. Once $q_j = 1$, $\pi_{\text{pos}}(\boldsymbol{\theta}|\mathbf{y}_{1:N})$ is reached. Similar to what was described for tPFGM, the intermediate values of $q_j$ are adaptively found via solution of the optimization problem in line 5 of Algorithm \ref{smc}.
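The adaptive choice of the tempering increment in line 5 of Algorithm \ref{smc} (and likewise in tPFGM) is a one-dimensional root-finding problem: find the increment $dq$ at which the effective sample size of the reweighted particles equals the threshold $N_{\text{T}}$. A minimal Python/NumPy sketch follows; the names are illustrative, the effective sample size is computed in logarithmic scale for stability, and bisection is used here in place of a generic root solver, exploiting the fact that the effective sample size typically decreases monotonically in $dq$:

```python
import numpy as np

def effective_sample_size(log_L, w, dq):
    """ESS of the reweighted particles w * L^dq, computed in log scale."""
    log_v = np.log(w) + dq * log_L
    log_v -= log_v.max()                 # shift to avoid overflow
    v = np.exp(log_v)
    return v.sum() ** 2 / (v ** 2).sum()

def solve_tempering_step(log_L, w, N_T, tol=1e-8):
    """Find dq in (0, 1] such that ESS(dq) ~= N_T, by bisection.

    If even the full step dq = 1 keeps the ESS above the threshold,
    the full step is taken (q reaches 1 and tempering terminates).
    """
    if effective_sample_size(log_L, w, 1.0) >= N_T:
        return 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if effective_sample_size(log_L, w, mid) > N_T:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```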
The GMM approximation of the intermediate posteriors and the IMH-GM kernel of Algorithm \ref{alg:mh_gm}, used to move the particles after resampling, are also key ingredients of this SMC sampler. Unlike PFGM and IBIS, this SMC algorithm cannot provide the on-line solution within a single run, and has to be rerun from scratch for every new target posterior of interest. In this regard, use of Algorithm \ref{smc} for on-line inference is impractical. \begin{algorithm}[!ht] \caption{IMH-GM-based Sequential Monte Carlo (SMC)}\label{smc} \begin{algorithmic}[1] \State generate $N_{\text{par}}$ initial particles $\boldsymbol{\theta}^{(i)}$ from $\pi_{\text{pr}}(\boldsymbol{\theta})$, \hspace{5mm} $i=1,\dots,N_{\text{par}}$ \State evaluate for every particle the full likelihood $L^{(i)} = L(\mathbf{y}_{1:N}\mid\boldsymbol{\theta}^{(i)})$ and the prior $\pi_{\text{pr}}(\boldsymbol{\theta}^{(i)})$ \State set $q=0$ \While{$q\neq1$} \State solve $\left(\sum_{i=1}^{N_{\text{par}}} {L^{(i)}}^{dq}\right)^2 / \sum_{i=1}^{N_{\text{par}}} {L^{(i)}}^{2\cdot dq} - N_{\text{T}} = 0$ for $dq$ \State set $q_{\text{new}} = \min \left[q+dq,1\right]$ \State set $dq=q_{\text{new}}-q$ and $q=q_{\text{new}}$ \State evaluate particle weights $w^{(i)} \propto {L^{(i)}}^{dq}$ and normalize $\mathrm{s.t.} \; \sum_{i=1}^{N_{\text{par}}} w^{(i)} = 1$ \State EM: fit a Gaussian mixture proposal distribution $g_{\text{GM}}(\boldsymbol{\theta})$ according to $\{\boldsymbol{\theta}^{(i)},w^{(i)}\}$ \State resample $N_{\text{par}}$ new particles $\{\boldsymbol{\theta}^{(i)},{L^{(i)}}^{q}\cdot \pi_{\text{pr}}(\boldsymbol{\theta}^{(i)})\}$ with replacement according to $w^{(i)}$ \State IMH-GM step with inputs $\{\boldsymbol{\theta}^{(i)}, {L^{(i)}}^{q} \cdot \pi_{\text{pr}}(\boldsymbol{\theta}^{(i)})\}$ , $\pi_{\text{pr}}(\boldsymbol{\theta})$, $L^q(\mathbf{y}_{1:N} | \boldsymbol{\theta})$ and $g_{\text{GM}}(\boldsymbol{\theta})$ \State reset particle weights to $w^{(i)}=1/N_{\text{par}}$ \EndWhile
\end{algorithmic} \end{algorithm} \subsection{Computational remarks} The algebraic operations in all presented algorithms are implemented in logarithmic scale, i.e., they operate on the logarithm of the likelihood function, which ensures computational stability. Furthermore, the EM step for fitting the GMM is performed after initially transforming the prior joint probability density function of $\boldsymbol{\theta}$ to an underlying vector $\boldsymbol{u}$ of independent standard normal random variables \citep{Der_Kiureghian_1986}. In standard normal space, the parameters are decorrelated, which enhances the performance of the EM algorithm. \section{Numerical investigations} \label{sec:Numerical_investigations} \subsection{Low-dimensional case study: Paris-Erdogan fatigue crack growth model} \label{subsec:CGM} A fracture mechanics-based model serves as the first case study. It describes the fatigue crack growth evolution under increasing stress cycles \citep{Paris_1963, Ditlevsen_Madsen_1996}. The crack growth is governed by the first-order differential Equation \eqref{eq:ODE}, known as the Paris-Erdogan law, \begin{equation} \frac{da\left( n\right) }{dn} = \exp \left( C_{\ln} \right) \left[\Delta S \sqrt{\pi a \left( n \right)} \right]^m \label{eq:ODE} \end{equation} where $a \left[\text{mm}\right]$ is the crack length, $n \left[-\right]$ is the number of stress cycles, $\Delta S \left[\text{Nmm}^{-2}\right]$ is the stress range per cycle when assuming constant stress amplitudes, and $C$ and $m$ are empirically determined model parameters; $C_{\ln}$ denotes the natural logarithm of $C$.
The solution to this differential equation, with boundary condition $a\left(n=0\right)=a_0$, can be written as a function of the number of stress cycles $n$ and the vector $\boldsymbol{\theta}=\left[a_0, \Delta S, C_{\ln}, m\right]$ containing the uncertain time-invariant model parameters as \begin{equation} a\left( n, \boldsymbol{\theta} \right) = \left[ \left( 1- \frac{m}{2} \right) \exp \left( C_{\ln} \right) \Delta S^m \pi^{{m}\slash{2}} n +a_0^{\left(1-{m}\slash{2}\right)} \right]^{{\left(1-{m}\slash{2}\right)}^{-1}} \label{eq:ODE_solution} \end{equation} We assume that noisy measurements $y_n$ of the crack are obtained sequentially at different values of $n$. The measurement Equation \eqref{eq:meas_eq} assumes a multiplicative lognormal measurement error, $\exp \left( \epsilon_n \right)$. \begin{equation} y_{n} = a_n(\boldsymbol{\theta}) \exp \left( \epsilon_n \right) \label{eq:meas_eq} \end{equation} Under this assumption, the likelihood function for a measurement at a given $n$ is shown in Equation \eqref{eq:likelihood}. \begin{equation} L \big( y_{n} ; a_n\left(\boldsymbol{\theta}\right) \big) = \frac{1}{\sigma_{\epsilon_n} \sqrt{2 \pi}} \exp \left[ - \frac{1}{2} \left(\frac{\ln \left( y_n \right) - \mu_{\epsilon_n} - \ln \big( a_n\left(\boldsymbol{\theta}\right) \big)}{\sigma_{\epsilon_n}} \right)^2 \right] \label{eq:likelihood} \end{equation} Table \ref{tab:par_cgm} shows the prior probability distribution model for each random variable in the vector $\boldsymbol{\theta}$ \citep{Ditlevsen_Madsen_1996, Straub_2009}, as well as the assumed probabilistic model of the measurement error. In this case study we are dealing with a non-linear model and a parameter vector with non-Gaussian prior distribution.
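For illustration, the forward model of Equation \eqref{eq:ODE_solution} and the log-likelihood of Equation \eqref{eq:likelihood} can be coded in a few lines. The following Python/NumPy sketch uses illustrative default values $\mu_{\epsilon_n}=0$ and $\sigma_{\epsilon_n}=0.15$ for the error parameters, which are assumptions for this sketch rather than the exact values implied by Table \ref{tab:par_cgm}:

```python
import numpy as np

def crack_length(n, a0, dS, C_ln, m):
    """Closed-form solution of the Paris-Erdogan law, Eq. (ODE_solution).

    Note: for m > 2 the bracketed term decreases with n; the crack
    length diverges when it approaches zero (fatigue failure).
    """
    e = 1.0 - m / 2.0
    return (e * np.exp(C_ln) * dS ** m * np.pi ** (m / 2.0) * n
            + a0 ** e) ** (1.0 / e)

def log_likelihood(y_n, n, a0, dS, C_ln, m, mu_eps=0.0, sig_eps=0.15):
    """Log-likelihood of one noisy crack measurement y_n with a
    multiplicative lognormal error exp(eps_n), cf. Eq. (likelihood)."""
    a = crack_length(n, a0, dS, C_ln, m)
    z = (np.log(y_n) - mu_eps - np.log(a)) / sig_eps
    return -np.log(sig_eps * np.sqrt(2.0 * np.pi)) - 0.5 * z ** 2
```

Within the filters, such a function is evaluated once per particle and per new measurement.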
\begin{table}[!ht] \caption{Prior distribution model for the fatigue crack growth model parameters and the measurement error} \small \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} c c c c c} \hline \textbf{Parameter} & \textbf{Distribution} & \textbf{Mean} & \textbf{Standard Deviation} & \textbf{Correlation}\\ \hline $a_0$ & Exponential & $1$ & $1$ & $-$\\ $\Delta S$ & Normal & $60$ & $10$ & $-$\\ $C_{\ln}, \, m$ & Bi-Normal & $\left( -33; \, 3.5 \right)$ & $\left( 0.47; \, 0.3 \right)$ & $\rho_{C_{\ln},m} = -0.9$\\ $\exp \left( \epsilon_n \right)$ & Log-normal & $1.0$ & $0.1508$& $-$\\ \hline \end{tabular*} \label{tab:par_cgm} \end{table} \subsubsection{Markovian state-space representation for application of on-line filters} \label{subsubsec:state_space_CGM} A Markovian state-space representation of the deterioration process is required for application of on-line filters. The dynamic and measurement equations of the discrete-time state-space representation of the fatigue crack growth model with unknown time-invariant parameters $\boldsymbol{\theta}=\left[a_0, \Delta S, C_{\ln}, m\right]$ are shown below. \begin{equation} \begin{split} \boldsymbol{\theta}_k &= \boldsymbol{\theta}_{k-1} \\ y_k &= a_k\left(\boldsymbol{\theta}_k\right)\exp \left( \epsilon_k \right) = \left[ \left( 1- \frac{m_k}{2} \right) \exp \left( C_{\ln_k} \right) \Delta S_k^{m_k} \pi^{{m_k}\slash{2}} n +a_{0_k}^{\left(1-{m_k}\slash{2}\right)} \right]^{{\left(1-{m_k}\slash{2}\right)}^{-1}} \exp \left( \epsilon_k \right) \label{eq:state_space_CGM} \end{split} \end{equation} The subscript $k$ denotes the estimation time step. More specifically, the number of stress cycles is discretized as $n=k\Delta n$, with $k=1,\ldots,100$ and $\Delta n = 1\times10^5$. The state-space model of Equation \eqref{eq:state_space_CGM} is nonlinear and the prior is non-Gaussian.
For reasons explained in Section \ref{sec:algorithms}, the subscript $k$ in $\boldsymbol{\theta}_k$ is dropped in the remainder of this section. \subsubsection{Reference posterior solution} \label{subsubsec:reference_post_CGM} For the purpose of performing a comparative assessment of the different filters, an underlying ``true'' realization of the fatigue crack growth process $a^\ast(n)$ is generated for $n=k\Delta n$, with $k=1,\ldots,100$ and $\Delta n = 1\times10^5$. This realization corresponds to the randomly generated ``true'' vector of time-invariant parameters $\boldsymbol{\theta}^\ast = [a_0^\ast=2.0, \Delta S^\ast=50.0, C_{\ln}^\ast=-33.5, m^\ast=3.7]$. Sequential synthetic crack monitoring measurements $y_k$ are sampled from the measurement Equation \eqref{eq:meas_eq} for $a_k(\boldsymbol{\theta}^\ast)$, and for randomly generated measurement noise samples $\exp \left( \epsilon_k \right)$. These measurements are scattered in green in Figure \ref{fig:cgm_ref_state_filt}. Based on the generated measurements, the sequence of reference posterior distributions $\pi_{\text{pos}}\left(\boldsymbol{\theta}|\mathbf{y}_{1:k}\right)$ is obtained using the prior distribution as an envelope distribution for rejection sampling \citep{Smith_1992, Rubinstein_2016}. More specifically, for each of the 100 posterior distributions of interest $\pi_{\text{pos}}\left(\boldsymbol{\theta}|\mathbf{y}_{1:k}\right)$, $10^5$ independent samples are generated. The results of this reference posterior estimation of the four time-invariant model parameters are plotted in Figure \ref{fig:cgm_ref_param_filt}. With these posterior samples, the reference filtered estimate of the crack length $a_n$ at each estimation step is also obtained via the model of Equation \eqref{eq:ODE_solution} and plotted in Figure \ref{fig:cgm_ref_state_filt}. In the left panel of this figure, the filtered state is plotted in logarithmic scale. In an off-line estimation, a single posterior density is of interest.
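The reference solution by rejection sampling with the prior as envelope distribution can be sketched as follows (Python/NumPy; names are illustrative). A prior draw $\boldsymbol{\theta}$ is accepted with probability $L(\mathbf{y}_{1:k}|\boldsymbol{\theta})/L_{\max}$, where $L_{\max}$ upper-bounds the likelihood; accepted draws are exact posterior samples:

```python
import numpy as np

def rejection_sample_posterior(sample_prior, log_L, log_L_max, n_samples, rng):
    """Rejection sampling with the prior as envelope distribution.

    sample_prior : draws one parameter value from the prior
    log_L        : log-likelihood of the data given a parameter value
    log_L_max    : upper bound on log_L (the log envelope constant)
    """
    out = []
    while len(out) < n_samples:
        theta = sample_prior(rng)
        # accept with probability L(theta) / L_max, in log scale
        if np.log(rng.uniform()) < log_L(theta) - log_L_max:
            out.append(theta)
    return np.array(out)
```

The acceptance rate deteriorates as more measurements are incorporated (the posterior concentrates relative to the prior), which is why such a brute-force reference is only practical for validation purposes.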
One such reference posterior estimation result for the last estimation step, $\pi_{\text{pos}}\left(\boldsymbol{\theta}|\mathbf{y}_{1:100}\right)$, is plotted for illustration in Figure \ref{fig:cgm_ref_param_dist_final}. \begin{figure} \caption{Reference posterior solution: mean and credible intervals for the sequence of posterior distributions $\pi_{\text{pos}}\left(\boldsymbol{\theta}|\mathbf{y}_{1:k}\right)$} \label{fig:cgm_ref_param_filt} \end{figure} \begin{figure} \caption{Reference mean and credible intervals for the filtered crack growth state $a_n$} \label{fig:cgm_ref_state_filt} \end{figure} \begin{figure} \caption{Reference final posterior: prior and single posterior distribution of interest $\pi_{\text{pos}}\left(\boldsymbol{\theta}|\mathbf{y}_{1:100}\right)$} \label{fig:cgm_ref_param_dist_final} \end{figure} \subsubsection{Comparative assessment of the investigated on-line and off-line filters} We apply the PFGM filter with 5000 and 50000 particles, the IBIS filter with 5000 particles, and the SMC filter with 5000 particles for performing on-line and off-line time-invariant parameter estimation tasks. We evaluate the performance of each filter by taking the relative error of the estimated mean and standard deviation of each of the four parameters with respect to the reference posterior solution. For example, the relative error in the estimation of the mean of parameter $a_0$ at a certain estimation step $k$ is computed as $\abs{\frac{\mu_{a_0,k}-\hat{\mu}_{a_0,k}}{\mu_{a_0,k}}}$, where $\mu_{a_0,k}$ is the reference posterior mean from rejection sampling (Section \ref{subsubsec:reference_post_CGM}), and $\hat{\mu}_{a_0,k}$ is the posterior mean estimated with each filter. Each filter is run 50 times, and the mean relative error of the mean and the standard deviation of each parameter, together with the 90\% credible intervals (CI), are obtained. These are plotted in Figure \ref{fig:cgm_comp_filters_sep_err}. \begin{figure} \caption{Comparison of the relative error of the mean and standard deviation of the parameters evaluated for each filter.
The solid lines show the mean and the shaded areas the 90\% credible intervals inferred from 50 repeated runs of each filter. On the horizontal axis, $n$ is the number of stress cycles} \label{fig:cgm_comp_filters_sep_err} \end{figure} Figure \ref{fig:cgm_comp_filters_total_err} plots the $L^2$ relative error norm of the mean and the standard deviation of all four parameters, i.e., the quantity of Equation \eqref{eq:L2_error} (here formulated for the mean at estimation step $k$) \begin{equation} \sqrt{\frac{\sum_{i=1}^{d}\left(\mu_{i,k}-\hat{\mu}_{i,k}\right)^2}{\sum_{i=1}^{d}\left(\mu_{i,k}\right)^2}} \label{eq:L2_error} \end{equation} where $d$ is the dimensionality of the time-invariant parameter vector $\boldsymbol{\theta}$ (in this example $d=4$). More specifically, Figure \ref{fig:cgm_comp_filters_total_err} plots the mean and credible intervals of the $L^2$ relative error norm of the estimated mean and standard deviation, as obtained from 50 runs of each filter. \begin{figure} \caption{Comparison of the $L^2$ relative error norm of the mean and the standard deviation of the parameters evaluated for each filter. The solid lines show the mean and the shaded areas the 90\% credible intervals inferred from 50 repeated runs of each filter. On the horizontal axis, $n$ is the number of stress cycles} \label{fig:cgm_comp_filters_total_err} \end{figure} Figures \ref{fig:cgm_comp_filters_sep_err} and \ref{fig:cgm_comp_filters_total_err} reveal that, when all three filters are run with the same number of particles, the IBIS and SMC filters yield superior performance over PFGM. When the number of particles in the PFGM filter is increased to 50000, the PFGM filter performance is comparable to that of the IBIS and SMC filters.
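The error measure of Equation \eqref{eq:L2_error} is straightforward to compute; a minimal Python/NumPy sketch (function name illustrative):

```python
import numpy as np

def l2_relative_error(ref, est):
    """L2 relative error norm of Eq. (L2_error): ||ref - est||_2 / ||ref||_2."""
    ref = np.asarray(ref, dtype=float)
    est = np.asarray(est, dtype=float)
    return np.sqrt(((ref - est) ** 2).sum() / (ref ** 2).sum())
```

The same function applies to the vector of estimated means, standard deviations, or correlation coefficients at any estimation step.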
In estimating the mean, the mean $L^2$ relative error norm obtained from the PFGM filter with 50000 particles is slightly larger than the corresponding error obtained from IBIS and SMC with 5000 particles, while the 90\% credible intervals of the PFGM filter estimation are still wider. In estimating the standard deviation, the PFGM filter with 50000 particles proves competitive. Figures \ref{fig:cgm_comp_filters_sep_err} and \ref{fig:cgm_comp_filters_total_err} show the estimation accuracy of each filter when used for on-line inference, i.e., for estimating the whole sequence of 100 posterior distributions $\pi_{\text{pos}}\left(\boldsymbol{\theta}|\mathbf{y}_{1:k}\right)$, $k=1,\ldots,100$. The PFGM and IBIS filters, being intrinsically on-line filters, provide the whole posterior sequence with one run. On the other hand, the off-line SMC filter is run anew for each of the 100 required posterior estimations. Hence, Figures \ref{fig:cgm_comp_filters_sep_err} and \ref{fig:cgm_comp_filters_total_err} contain the results of both the on-line and the off-line inference. If one is interested in the off-line estimation accuracy at a specific stress cycle $n$, one can simply consider a vertical ``cut'' at $n$. \begin{table} \caption{Average number of model evaluations for the fatigue crack growth model parameter estimation} \small \centering \begin{tabular}{|c|c|c|c|c|c|}\hline method &PFGM 5000&PFGM 50000&IBIS&SMC (final posterior)&SMC (all posteriors)\\\hline model evaluations&$5\times10^5$&$5\times10^6$&$3.4\times10^6$&$4.5\times10^6$&$1.9\times10^8$\\\hline \end{tabular} \label{table:Comp_cost_CGM} \end{table} Table \ref{table:Comp_cost_CGM} documents the computational cost associated with each filter, expressed in the form of required model evaluations induced by calls of the likelihood function. By model we here refer to the model of Equation \eqref{eq:ODE_solution}, which is an analytical expression with negligible associated runtime.
However, unlike the simple measurement equation that we have assumed in this example, in many realistic deterioration monitoring settings, the deterioration state cannot be measured directly (e.g., in vibration-based structural health monitoring \citep{Kamariotis_2022a}). In such cases, each deterioration model evaluation often entails evaluation of a finite element (FE) model, which has substantial runtime. It therefore appears appropriate to evaluate the filters' computational cost in terms of required model evaluations. The on-line PFGM filter with 5000 particles requires $5\times10^5$ model evaluations, and yields by far the smallest computational cost, while at the same time providing the solution to both on-line and off-line estimation tasks. However, it also yields the worst performance in terms of accuracy of the posterior estimates. Running the IBIS filter with 5000 particles, which performs MCMC move steps, leads to $3.4\times10^6$ model evaluations. Comparing this value against the $5\times10^5$ model evaluations required by the PFGM filter with 5000 particles for performing the same task distinctly shows the computational burden associated with MCMC move steps, which require a complete pass over the whole measurement data set when estimating the acceptance probability. However, the IBIS filter also leads to enhanced estimation accuracy, which might prove significant when the subsequent tasks entail prognosis of the deterioration evolution, the structural reliability or the remaining useful lifetime, and eventually the predictive maintenance planning. Using 50000 particles, the PFGM filter performance improves significantly with a computational cost that is comparable to the IBIS filter with 5000 particles. For the off-line SMC algorithm, $4.5\times10^6$ model evaluations are required only for the task of estimating the final posterior density.
The $1.9\times10^8$ model evaluations required by the SMC for obtaining the whole sequence of posteriors $\pi_{\text{pos}}\left(\boldsymbol{\theta}|\mathbf{y}_{1:k}\right)$, $k=1,\ldots,100$, clearly demonstrate that off-line MCMC techniques are unsuited to on-line estimation tasks. \subsection{High-dimensional case study: Corrosion deterioration spatially distributed across beam} \label{subsec:RF} \begin{figure} \caption{Structural beam subjected to spatially and temporally varying corrosion deterioration. The deterioration process is monitored by sensors deployed at specific sensor locations (in green)} \label{fig:beam_corrosion} \end{figure} As a second case study, we employ the deterioration model of Equation \eqref{eq:RF_deterioration}, which describes the spatially and temporally varying corrosion deterioration across the structural beam shown in Figure \ref{fig:beam_corrosion}. \begin{equation} D(t)=At^B, \qquad t=0, \dots,50 \label{eq:RF_deterioration} \end{equation} $A$ is a random field modeling the deterioration rate, while $B$ is a random field governing the nonlinearity of the deterioration process in time, in the form of a power law. The corrosion deterioration $D(t)$ is therefore also a spatial random field. A random field, by definition, contains an infinite number of random variables, and must therefore be discretized \citep{Vanmarcke_2010}. One of the most common methods for discretization of random fields is the midpoint method \citep{DERKIUREGHIAN_1988}, whereby the domain is discretized into $m$ elements, and the two random fields can be approximated by using the random variables that correspond to the values of the random fields at the discrete points in the domain (the midpoints of each element).
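The midpoint discretization can be sketched as follows in Python/NumPy. The function names are illustrative; the sketch assumes the exponential correlation model and the marginal distributions of this case study, with the standard moment-matching relations used to obtain the parameters of $\ln(A)$ from the lognormal mean and standard deviation:

```python
import numpy as np

def exponential_corr(midpoints, corr_length):
    """Exponential correlation model: rho(d) = exp(-d / l_c)."""
    d = np.abs(midpoints[:, None] - midpoints[None, :])
    return np.exp(-d / corr_length)

def sample_midpoint_fields(beam_length, m, rng, mean_A=0.8, std_A=0.24,
                           mu_B=0.8, sig_B=0.12, corr_length=2.0):
    """Midpoint discretization: one random variable per element per field,
    located at the element midpoint. A is lognormal, B is normal; both
    share the exponential correlation model."""
    mid = (np.arange(m) + 0.5) * beam_length / m      # element midpoints
    L = np.linalg.cholesky(exponential_corr(mid, corr_length)
                           + 1e-10 * np.eye(m))       # jitter for stability
    # moment matching for ln(A): lognormal mean/std -> normal mu/sigma
    sig2_lnA = np.log(1.0 + (std_A / mean_A) ** 2)
    mu_lnA = np.log(mean_A) - 0.5 * sig2_lnA
    A = np.exp(mu_lnA + np.sqrt(sig2_lnA) * (L @ rng.standard_normal(m)))
    B = mu_B + sig_B * (L @ rng.standard_normal(m))
    return mid, A, B
```

Each call returns one realization of the discretized fields; in the filters, such realizations play the role of the particles $\boldsymbol{\theta} = [A_1,\dots,A_m,B_1,\dots,B_m]$.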
In that case, the uncertain time-invariant deterioration model parameter vector is $\boldsymbol{\theta} = \left[A_1,\dots, A_m, B_1, \dots, B_m\right]$, where $A_i, B_i, i = 1, \ldots, m$ are the random variables corresponding to the midpoint of the $i$-th element. We assume that noisy measurements of the corrosion deterioration state $D_{t,l}$ at time $t$ and at certain locations $l$ of the beam are obtained sequentially (summarized in one measurement per year) from $n_{l}$ sensors deployed at these locations ($n_{l} = 10$ sensor locations are shown in Figure \ref{fig:beam_corrosion}). The measurement Equation \eqref{eq:RF_measurement}, describing the corrosion measurement at time $t$ and sensor location $l$, assumes a multiplicative measurement error, $\exp \left(\epsilon_{t,l}\right)$. \begin{equation} y_{t,l} = D_{t,l}\left(\boldsymbol{\theta}\right) \exp \left(\epsilon_{t,l}\right) = A_{i_l} t^{B_{i_l}} \exp \left(\epsilon_{t,l}\right), \label{eq:RF_measurement} \end{equation} where $i_l$ denotes the element of the midpoint discretization within which the measurement location $l$ lies. Table \ref{tab:par_RF} shows the prior distribution model for the two random fields of the deterioration model of Equation \eqref{eq:RF_deterioration} and the assumed probabilistic model of the multiplicative measurement error. Since $A$ is a lognormal random field, $\ln(A)$ follows the normal distribution. For both random fields $\ln(A)$ and $B$, the exponential correlation model with correlation length of $2\,\mathrm{m}$ is applied \citep{Sudret_2000}. \begin{table}[ht!] \caption{Prior distribution model for the corrosion deterioration model parameters and the measurement error} \small \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} c c c c c} \hline \textbf{Parameter} & \textbf{Distribution} & \textbf{Mean} & \textbf{Standard Deviation} & \textbf{Corr.
length (m) }\\ \hline $A$ & Lognormal & $0.8$ & $0.24$ & $2$\\ $B$ & Normal & $0.8$ & $0.12$ & $2$\\ $\exp \left( \epsilon_{t,l} \right)$ & Lognormal & $1.0$ & $0.101$& -\\ \hline \end{tabular*} \label{tab:par_RF} \end{table} The goal is to update the time-invariant deterioration model parameters $\boldsymbol{\theta} = \left[A_{1},\dots, A_{m}, B_1, \dots, B_m\right]$ given sequential noisy corrosion measurements $y_{t,l}$ from $n_l$ deployed sensors. The dimensionality of the problem is $d = 2\times m$. Hence, the more elements in the midpoint discretization, the higher the dimensionality of the parameter vector. The main goal of this second case study is to investigate the effect of the problem dimensionality and the amount of sensor information on the posterior results obtained with each filter. We choose the following three midpoint discretization schemes: \begin{enumerate}[noitemsep,topsep=0pt] \item $m=25$ elements: $d=50$ time-invariant parameters to estimate. \item $m=50$ elements: $d=100$ time-invariant parameters to estimate. \item $m=100$ elements: $d=200$ time-invariant parameters to estimate. \end{enumerate} Furthermore, we choose the following three potential sensor arrangements: \begin{enumerate}[noitemsep,topsep=0pt] \item $n_l=2$ sensors (the $4^{th}$ and $7^{th}$ sensors of Figure \ref{fig:beam_corrosion}). \item $n_l=4$ sensors (the $1^{st}, 4^{th}, 7^{th}$ and $10^{th}$ sensors of Figure \ref{fig:beam_corrosion}). \item $n_l=10$ sensors of Figure \ref{fig:beam_corrosion}. \end{enumerate} We therefore study nine different cases of varying problem dimensionality and number of sensors. \subsubsection{Markovian state-space representation for application of on-line filters} \label{subsubsec:state_space_RF} A Markovian state-space representation of the deterioration process is required for application of on-line filters. The dynamic and measurement equations are shown in Equation \eqref{eq:state_space_RF}. 
The measurement equation is written in the logarithmic scale. Time $t$ is discretized into yearly estimation time steps $k$, i.e., $k=1,\dots,50$, and the subscript $l=1,\dots,n_l$ corresponds to the sensor location. \begin{equation} \begin{split} \boldsymbol{\theta}_k &= \boldsymbol{\theta}_{k-1} \\ \ln \left(y_{k,l}\right) &= \ln \left(D_{k,l}(\boldsymbol{\theta}_k)\right) + \epsilon_{k,l} \Rightarrow \ln \left(y_{k,l}\right) = \ln \left(A_{k,i_l}\right) + B_{k,i_l} \ln \left(t_k\right) +\epsilon_{k,l} \label{eq:state_space_RF} \end{split} \end{equation} In the logarithmic scale, both the dynamic and measurement equations are linear functions of Gaussian random variables. For reasons explained in Section \ref{sec:algorithms}, the subscript $k$ in $\boldsymbol{\theta}_k$ is dropped in the following. \subsubsection{Underlying ``true'' realization} \label{subsubsec:under_true_RF} To generate a high-resolution underlying ``true'' realization of the two random fields $A$ and $B$, and the corresponding synthetic monitoring data set, we employ the Karhunen-Lo\`{e}ve (KL) expansion \citep{Sudret_2000} using the first 400 KL modes. These realizations are shown in the left panel of Figure \ref{fig:KL}. Given these $A$ and $B$ realizations, the underlying ``true'' realizations of the deterioration process at ten specific beam locations are generated, which correspond to the ten potential sensor placement locations shown in Figure \ref{fig:beam_corrosion}. Subsequently, a synthetic corrosion sensor measurement data set (one measurement per year) at these 10 locations is generated from the measurement Equation \eqref{eq:RF_measurement}. These are shown in the right panel of Figure \ref{fig:KL}. The KL expansion is used for the sole purpose of generating the underlying ``truth''. \begin{figure} \caption{Left: the blue solid line plots the underlying ``true'' realization of $\ln(A)$ and $B$ created using the KL expansion.
Right: the blue solid line plots the underlying ``true" realization of $\ln\left(D(t)\right)$ at 10 specific sensor locations and the corresponding synthetic sensor monitoring data are scattered in black. In both figures, the black dashed lines plot the prior mean and the black solid lines the prior 90\% credible intervals} \label{fig:KL} \end{figure} \subsubsection{Reference posterior solution} \label{subsubsec:ref_post_RF} For the investigated linear Gaussian state-space representation of Equation \eqref{eq:state_space_RF}, we create reference on-line posterior solutions for each of the nine considered cases by applying the Kalman filter (KF) \citep{Kalman_1960}, which is the closed-form solution to the Bayesian filtering equations. The process noise covariance matrix in the KF equations is set equal to zero. The linear Gaussian nature of the chosen problem ensures existence of an analytical reference posterior solution obtained with the KF. One such reference on-line posterior solution for the case described by $m=25$ elements ($d=50$) and $n_l=4$ sensors is shown in Figure \ref{fig:KF_ref}. \begin{figure} \caption{Case with $m=25, n_l=4$: reference on-line posterior solution at 10 locations across the beam obtained with the Kalman filter. The solid blue horizontal line represents the underlying ``true" values of $\ln(A)$ and $B$ at these locations. The black dashed lines plot the posterior mean and the black solid lines the posterior 90\% credible intervals. Locations 1,4,7,10 correspond to the four assumed sensor placement locations} \label{fig:KF_ref} \end{figure} \subsubsection{Comparative assessment of the investigated on-line and off-line filters} We apply the tPFGM filter, the tIBIS filter, and the SMC filter, all with $N_{\text{par}}=2000$ particles, for estimating the time-invariant parameter vector $\boldsymbol{\theta}$.
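For intuition, the KF reference solution with zero process noise reduces to sequential Bayesian linear regression on the design row $[1, \ln t_k]$. The following minimal sketch runs this recursion for a single hypothetical sensor location; the prior moments and the measurement noise level are illustrative assumptions loosely inspired by the prior table above, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-location version of the setup: theta = [ln A, B],
# measurement ln y_t = ln A + B ln t + eps (all values below are assumptions).
lnA_true, B_true, sig_eps = np.log(0.8), 0.8, 0.101

m = np.array([np.log(0.6), 0.7])   # prior mean of [ln A, B] (deliberately offset)
P = np.diag([0.3**2, 0.12**2])     # prior covariance
R = sig_eps**2                     # measurement noise variance

for t in range(1, 51):             # one noisy measurement per year
    y = lnA_true + B_true * np.log(t) + rng.normal(0.0, sig_eps)
    H = np.array([1.0, np.log(t)])        # measurement row for this step
    S = H @ P @ H + R                     # innovation variance (scalar)
    K = P @ H / S                         # Kalman gain
    m = m + K * (y - H @ m)               # posterior mean update
    P = P - np.outer(K, H @ P)            # posterior covariance update
```

Because the parameters are time-invariant, no prediction step is needed: the prior for step $k$ is simply the posterior of step $k-1$.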
For each of the nine cases of varying problem dimensionality and number of sensors described above, we compute the $L^2$ relative error norm of the estimated means, correlation coefficients, and standard deviations of the parameters with respect to the corresponding KF reference posterior solution, i.e., we estimate a quantity as in Equation \eqref{eq:L2_error} for all estimation steps $k=1,\dots,50$. In Figures \ref{fig:RF_comp_filters_mean}, \ref{fig:RF_comp_filters_corr}, \ref{fig:RF_comp_filters_std} we plot the mean and credible intervals of these relative errors as obtained from 50 different runs. The off-line SMC filter, which does not provide the on-line solution within a single run, is run anew for estimating the single posterior density of interest at years 10, 20, 30, 40, 50, and in between, the relative error is linearly interpolated. Although each of the nine panels in the figures corresponds to a different case with a different underlying KF reference solution, their y axes have the same scaling. Table \ref{tab:hd_model_eval} documents the computational cost of each filter in each considered case, measured by average number of evaluations of the model of Equation \eqref{eq:RF_deterioration}. Figures \ref{fig:RF_comp_filters_mean} and \ref{fig:RF_comp_filters_corr} show that the off-line IMH-GM-based SMC filter yields the best performance in estimating the KF reference posterior mean and correlation, for all nine considered cases, while at the same time producing the narrowest credible intervals. Comparison of the relative errors obtained with the SMC and tIBIS filters reveals that, although they are both reliant on the IMH-GM MCMC move step, the on-line tIBIS filter leads to larger estimation errors. 
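The error metric used above can be sketched as follows; the exact form of the referenced equation is defined earlier in the paper, and here we assume the standard definition $\|\hat{q}-q_{\mathrm{ref}}\|_2/\|q_{\mathrm{ref}}\|_2$, with the reference values and repeated runs below being synthetic stand-ins.

```python
import numpy as np

def l2_relative_error(estimate, reference):
    """L2 relative error norm of an estimated posterior summary (means,
    standard deviations, or correlations) w.r.t. the reference solution.
    The standard form ||est - ref|| / ||ref|| is assumed here."""
    estimate = np.asarray(estimate, dtype=float).ravel()
    reference = np.asarray(reference, dtype=float).ravel()
    return np.linalg.norm(estimate - reference) / np.linalg.norm(reference)

rng = np.random.default_rng(0)
reference = np.ones(50)                                  # stand-in reference means
runs = reference + 0.05 * rng.standard_normal((50, 50))  # 50 noisy "filter runs"

errs = np.array([l2_relative_error(run, reference) for run in runs])
mean_err = errs.mean()                    # mean relative error over runs
ci_90 = np.percentile(errs, [5, 95])      # 90% interval over runs
```

The mean and the 5th/95th percentiles over repeated runs correspond to the solid lines and shaded bands plotted in the comparison figures.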
The on-line tPFGM and tIBIS filters generate quite similar results in estimating the reference posterior mean and correlation, thus rendering the benefit of the MCMC move step in tIBIS unclear, except in cases with more sensors and lower parameter dimension. Figures \ref{fig:RF_comp_filters_mean} and \ref{fig:RF_comp_filters_corr} reveal a slight trend, indicating that for fixed dimensionality, availability of more sensors (i.e., stronger information content in the likelihood function) leads to a slight decrease in the relative errors when using the SMC and tIBIS filters, whereas the opposite trend can be identified for the tPFGM filter. Increasing problem dimensionality (for fixed number of sensors) does not appear to have a strong influence on the posterior results in any of the columns of Figures \ref{fig:RF_comp_filters_mean}, \ref{fig:RF_comp_filters_corr} and \ref{fig:RF_comp_filters_std}, a result that initially appears puzzling. \begin{figure} \caption{Comparison of the $L^2$ relative error norm of the means of the parameters evaluated for each filter. The solid lines show the mean and the shaded areas the 90\% credible intervals inferred from 50 repeated runs of each filter} \label{fig:RF_comp_filters_mean} \end{figure} \begin{figure} \caption{Comparison of the $L^2$ relative error norm of the correlation coefficients of the parameters evaluated for each filter. The solid lines show the mean and the shaded areas the 90\% credible intervals inferred from 50 repeated runs of each filter} \label{fig:RF_comp_filters_corr} \end{figure} \begin{figure} \caption{Comparison of the $L^2$ relative error norm of the standard deviations of the parameters evaluated for each filter. The solid lines show the mean and the shaded areas the 90\% credible intervals inferred from 50 repeated runs of each filter} \label{fig:RF_comp_filters_std} \end{figure} \begin{table}[ht!]
\fontsize{7pt}{12pt}\selectfont \caption{Average number of model evaluations for the high-dimensional case study. For the SMC, the required model evaluations for obtaining the single final posterior density are reported.} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} |c|c|c|c|c|c|c|c|c|c| } \hline elements & \multicolumn{3}{c|}{$25$} & \multicolumn{3}{c|}{$50$} & \multicolumn{3}{c|}{$100$}\\ \hline sensors & $2$ & $4$ & $10$ & $2$ & $4$ & $10$ & $2$ & $4$ & $10$\\ \hline tPFGM & 129,480 & 154,000 & 194,440 & 129,440 & 155,560 & 195,760 & 130,480 & 157,120 & 199,040\\ \hline tIBIS & 602,400 & 1,038,440 & 1,878,000 & 603,240 & 1,049,400 & 1,909,880 & 567,720 & 1,017,280 & 1,876,240\\ \hline SMC & 1,130,000 & 1,596,000 & 2,298,000 & 1,108,000 & 1,582,000 & 2,250,000 & 1,100,000 & 1,504,000 & 2,150,000\\ \hline \end{tabular*} \label{tab:hd_model_eval} \end{table} Figure \ref{fig:RF_comp_filters_std} conveys that the tPFGM filter, which entirely depends on the GMM posterior approximation, induces the smallest relative errors for the estimation of the standard deviation of the parameters in all considered cases. This result reveals a potential inadequacy of the single application of the IMH-GM kernel for the move step within the tIBIS and SMC filters in properly exploring the space of $\boldsymbol{\theta}$. In all 50 runs of the tIBIS and SMC filters, the standard deviation of the parameters is consistently underestimated compared to the reference, unlike when applying the tPFGM filter. Based on the discussion of Section \ref{subsec:IBIS}, we introduce a burn-in period of $n_\text{B}=5$ in the IMH-GM kernel of Algorithm \ref{alg:mh_gm} and perform 50 new runs of the tIBIS and SMC filters. One can expect that inclusion of a burn-in is more likely to ensure sufficient exploration of the intermediate posterior distributions.
However, at the same time the computational cost of tIBIS and SMC increases significantly, with a much larger number of required model evaluations than in Table \ref{tab:hd_model_eval}. In Figures \ref{fig:RF_comp_filters_mean_burn_in}, \ref{fig:RF_comp_filters_std_burn_in} we plot the mean and credible intervals for the relative errors in the estimation of the mean and standard deviation of the parameters. Comparing Figures \ref{fig:RF_comp_filters_mean} and \ref{fig:RF_comp_filters_mean_burn_in}, inclusion of burn-in is shown to lead to an improved performance of tIBIS and SMC in estimating the mean of the parameters in all cases. This improvement is more evident in the lower-dimensional case with 25 elements, and lessens as the problem dimension increases. Hence, with burn-in one observes a deterioration of the tIBIS and SMC filters' performance with increasing dimensionality. This point becomes more evident when looking at the relative errors of the estimated standard deviation in Figure \ref{fig:RF_comp_filters_std_burn_in}. With burn-in, the tIBIS and SMC filters provide better results than the tPFGM filter in estimating the standard deviation in the case of 25 elements, but perform progressively worse as the dimensionality increases, where they underestimate the KF reference standard deviation. This underestimation is clearly illustrated in Figure \ref{fig:RF_updating}. The reason for this behavior is the poor performance of the IMH-GM algorithm in high dimensions, which is numerically demonstrated in \cite{Papaioannou_2016}. We suspect that this behavior is related to the degeneracy of the acceptance probability of MH samplers in high dimensions, which has been extensively discussed in the literature for random walk samplers, e.g., in \cite{Gelman_1997, Au_2001, Katafygiotis_2008, Beskos_2009, Cotter_2013, Papaioannou_2015}. Single application of the IMH-GM kernel without burn-in yielded acceptance rates of around 50\% for all cases.
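To make the acceptance mechanism under discussion concrete, the following sketch runs an independent Metropolis-Hastings move on a particle set. A single moment-matched Gaussian stands in for the IMH-GM Gaussian-mixture proposal, and the target density, dimension, and particle count are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Stand-in for the (unnormalized) intermediate posterior:
    # a standard Gaussian in d dimensions. Purely illustrative.
    return -0.5 * np.sum(x**2, axis=-1)

def imh_move(particles, n_steps=1):
    """Independent MH move on a particle set. A moment-matched Gaussian
    replaces the Gaussian-mixture proposal of IMH-GM for brevity;
    n_steps > 1 corresponds to including burn-in steps."""
    d = particles.shape[1]
    mu = particles.mean(axis=0)
    cov = np.cov(particles.T) + 1e-9 * np.eye(d)  # proposal fitted to particles
    L = np.linalg.cholesky(cov)
    cov_inv = np.linalg.inv(cov)

    def log_q(x):  # proposal log-density, up to an additive constant
        diff = x - mu
        return -0.5 * np.einsum('...i,ij,...j', diff, cov_inv, diff)

    x = particles.copy()
    n_accepted = 0
    for _ in range(n_steps):
        proposal = mu + rng.standard_normal(x.shape) @ L.T
        # Independent-proposal ratio: pi(x*) q(x) / (pi(x) q(x*))
        log_alpha = (log_target(proposal) - log_target(x)
                     + log_q(x) - log_q(proposal))
        accept = np.log(rng.uniform(size=len(x))) < log_alpha
        x[accept] = proposal[accept]
        n_accepted += accept.sum()
    return x, n_accepted / (n_steps * len(x))

particles = rng.standard_normal((500, 10))  # particles roughly from the target
moved, acc_rate = imh_move(particles, n_steps=3)
```

When the fitted proposal matches the target poorly, as happens in high dimensions, `log_alpha` becomes strongly negative for most particles and the acceptance rate collapses, which is the degeneracy described above.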
With inclusion of burn-in, in higher dimensions, the acceptance rate in IMH-GM drops significantly in the later burn-in steps, leading to rejection of most proposed particles. To alleviate this issue, one could consider using the preconditioned Crank Nicolson (pCN) sampler to perform the move step within the IBIS and SMC filters, whose performance is shown to be independent of the dimension of the parameter space when the prior is Gaussian \citep{Cotter_2013}. Increase of dimensionality does not seem to have any influence on the results obtained with the tPFGM filter. The illustrated efficacy of the tPFGM filter in estimating the time-invariant parameters in all considered cases of increasing dimensionality is related to the nature of the studied problem. The tPFGM filter relies entirely on the GMM approximation of the posterior distribution within its resampling process, in that it simply ``accepts" all the $N_{\text{par}}$ GMM-proposed particles, unlike the tIBIS and SMC filters, which contain the degenerating acceptance-rejection step within the IMH-GM move step. Clearly, the worse the GMM fit, the worse the expected performance of the tPFGM filter. The particular case investigated here has a Gaussian reference posterior solution, hence the GMM fitted by EM proves effective in approximating the posterior with a relatively small number of particles, even when going up to $d$=200 dimensions, thus leading to a good proposal distribution for sampling $N_{\text{par}}$ new particles in tPFGM. As reported in Table \ref{tab:hd_model_eval}, the tPFGM filter is associated with a significantly lower computational cost than its MCMC-based counterparts. \begin{figure} \caption{Comparison of the $L^2$ relative error norm of the mean of the parameters evaluated for each filter. The solid lines show the mean and the shaded areas the 90\% credible intervals inferred from 50 repeated runs of each filter. 
Burn-in $n_\text{B}=5$ is applied in the IMH-GM kernel} \label{fig:RF_comp_filters_mean_burn_in} \end{figure} \begin{figure} \caption{Comparison of the $L^2$ relative error norm of the standard deviation of the parameters evaluated for each filter. The solid lines show the mean and the shaded areas the 90\% credible intervals inferred from 50 repeated runs of each filter. Burn-in $n_\text{B}=5$ is applied in the IMH-GM kernel} \label{fig:RF_comp_filters_std_burn_in} \end{figure} \begin{figure} \caption{Updating of the random field $\ln \left(D(t=50)\right)$ in three different cases of varying problem dimensionality. The solid lines show the mean and the shaded areas the 90\% credible intervals inferred from 10 repeated runs of each filter. The black dashed line represents the posterior mean obtained via the KF, and the black solid lines the KF 90\% credible intervals} \label{fig:RF_updating} \end{figure} \section{Concluding remarks} \label{sec:Conclusions} In this article, we present in full algorithmic detail three different on-line and off-line Bayesian filters, specifically tailored for the task of estimating only the time-invariant deterioration model parameters in long-term monitoring settings. More specifically, these are an on-line particle filter with Gaussian mixture resampling (PFGM), an on-line iterated batch importance sampling (IBIS) filter, and an off-line sequential Monte Carlo (SMC) filter, which applies simulated annealing to sequentially arrive at a single posterior density of interest. The IBIS and SMC filters perform Markov Chain Monte Carlo (MCMC) move steps via application of an independent Metropolis Hastings kernel with a Gaussian mixture proposal distribution (IMH-GM) whenever degeneracy is identified. A simulated annealing process (tempering of the likelihood function) is further incorporated within the update step of the on-line PFGM and IBIS filters for cases when each new measurement is expected to have a strong information content; this leads to the presented tPFGM and tIBIS filters.
The SMC filter can be employed only for off-line inference, while the PFGM, tPFGM, IBIS and tIBIS filters can perform both on-line and off-line inference tasks. With the aid of two numerical examples, a rigorous comparative assessment of these algorithms for off-line and on-line Bayesian filtering of time-invariant deterioration model parameters is performed. In contrast to other works, the main focus here lies on the efficacy of the investigated Bayesian filters in quantifying the full posterior uncertainty of deterioration model parameters, as well as on the induced computational cost. For the first non-linear, non-Gaussian and low-dimensional case study, the IBIS and SMC filters, which both contain IMH-GM-based MCMC move steps, are shown to consistently outperform the purely on-line PFGM filter in estimating the parameters' reference posterior distributions. However, they induce a computational cost of at least an order of magnitude larger than the PFGM filter, when the same initial number of particles is used in all three filters. With similar computational cost, i.e., when increasing the number of particles in PFGM, it achieves enhanced posterior accuracy, comparable to the IBIS and SMC filters. For the second case study, involving a linear, Gaussian and high-dimensional model, the results vary with increasing problem dimensionality and number of sensors. The on-line tPFGM filter achieves a consistently satisfactory quality with increasing dimensionality, a behavior explained by the linear Gaussian nature of the problem, while a slight drop in the posterior quality is observed for increasing amount of sensor information. The tIBIS and SMC filters are shown to consistently outperform the tPFGM filter in lower dimensions, they however perform progressively worse in higher dimensions, a behavior likely explained by the degeneracy of the acceptance probability of MH samplers in high dimensions. 
The computational cost of the tIBIS and SMC filters is an order of magnitude larger than that of the tPFGM filter. Some general conclusions drawn from the comparative assessment delivered above are listed below. \begin{itemize}[noitemsep,topsep=0pt] \itemsep0em \item The IBIS (and its tIBIS variant) and SMC filters, which contain MCMC move steps, offer better approximations of the posterior mean of the model parameters than the purely on-line PFGM (and its tPFGM variant) filter with the same number of samples, as shown in both studied examples. \item The independent Metropolis Hastings (IMH)-based MCMC move step performed within the IBIS, tIBIS and SMC filters proves inadequate in properly exploring the posterior parameter space in high-dimensional problems. \item The purely on-line PFGM (and its tPFGM variant) filter is competitive with MCMC-based filters, especially for higher-dimensional well-behaved problems. \end{itemize} Finally, to support the reader with the selection of the appropriate algorithm for a designated scenario, we provide Table \ref{table:guidelines}, which contains an assessment of the methods presented in this paper as a function of problem characteristics.
\begin{table}[!ht] \caption{Set of suggestions on the choice of the appropriate method as a function of problem characteristics.} \small \centering
\begin{tabular}{ccccc}
\hline
\multicolumn{2}{c}{\textbf{Criterion}} & \textbf{\begin{tabular}[c]{@{}c@{}}PFGM \\ (tPFGM)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}IMH-GM-based \\ IBIS (tIBIS)\end{tabular}} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}IMH-GM-based \\ SMC\end{tabular}}} \\ \hline
\multicolumn{2}{c}{On-line inference} & \checkmark & $\circ$ & \multicolumn{1}{c}{$\times$} \\ \hline
\multicolumn{2}{c}{Computational cost} & $C_1$ & $C_2$ & \multicolumn{1}{c}{$C_3$} \\ \hline
\multicolumn{1}{c}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Applicability to different problems}}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Mean estimation \\ in low-dimensional, nonlinear,\\ non-Gaussian problems\end{tabular}} & $Q_3$ & $Q_4$ & \multicolumn{1}{c}{$Q_4$ } \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Uncertainty quantification\\ in low-dimensional, nonlinear, \\ non-Gaussian problems\end{tabular}} & \multicolumn{1}{c}{$Q_3$} & \multicolumn{1}{c}{$Q_4$} & \multicolumn{1}{c}{$Q_4$ } \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Mean estimation\\ in high-dimensional, \\ well-behaved problems\end{tabular}} & \multicolumn{1}{c}{$Q_2$} & \multicolumn{1}{c}{$Q_3$} & \multicolumn{1}{c}{$Q_4$ } \\
\multicolumn{1}{c}{} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Uncertainty quantification \\ in high-dimensional, \\ well-behaved problems\end{tabular}} & $Q_3$ & $Q_1$ & \multicolumn{1}{c}{$Q_1$} \\ \hline
\multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Increasing sensor \\ information amount\end{tabular}} & $\times$ (\checkmark) & $\times$ (\checkmark) & \multicolumn{1}{c}{\checkmark} \\ \hline
\multicolumn{1}{l}{} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}$Q_1$: low quality\\ $Q_2$: moderate quality\\ $Q_3$: moderate to high quality\\ $Q_4$: high
quality\end{tabular}} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}\checkmark: applicable\\ $\circ$: partly applicable\\ $\times$: not applicable\end{tabular}} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}$C_1$: moderate\\ $C_2$: moderate to high\\ $C_3$: high\end{tabular}} & \multicolumn{1}{l}{} \end{tabular} \label{table:guidelines} \end{table} This paper does not investigate the performance of these filters when applied to high-dimensional and highly non-Gaussian problems. Such problems are bottlenecks for most existing filters and we expect the investigated filters to be confronted with difficulties in approximating the posterior distributions. \section*{Funding Statement} The work of A. Kamariotis and E. Chatzi has been carried out with the support of the Technical University of Munich - Institute for Advanced Study, Germany, funded by the German Excellence Initiative and the T{\"U}V S{\"U}D Foundation. \section*{Competing Interests} The authors declare none. \section*{Data Availability Statement} The data and code that support the findings of this study are openly available in \url{https://github.com/antoniskam/Offline_online_Bayes}. \end{document}
\begin{document} \title{BSVIEs with stochastic Lipschitz coefficients and applications in finance\thanks{This work is supported by National Natural Science Foundation of China Grant 10771122, Natural Science Foundation of Shandong Province of China Grant Y2006A08 and National Basic Research Program of China (973 Program, No. 2007CB814900).}} \maketitle \begin{abstract} This paper is concerned with the existence and uniqueness of M-solutions of backward stochastic Volterra integral equations (BSVIEs for short) whose Lipschitz coefficients are allowed to be random, which generalizes the results in \cite{Y3}. A class of continuous-time dynamic coherent risk measures is then derived, allowing the riskless interest rate to be random, which differs from the case in \cite{Y3}. \par \textit{Keywords:} Backward stochastic Volterra integral equations, Adapted M-solutions, Dynamic coherent risk measure, Stochastic Lipschitz coefficients \end{abstract} \section{Introduction}\label{sec:intro} The literature on both static and dynamic risk measures has been well developed since Artzner et al. \cite{ADEH} first introduced the concept of coherent risk measures; see \cite{D}, \cite{FS} for other detailed accounts. Recently, a class of static and dynamic risk measures was induced via $g$-expectation and conditional $g$-expectation, respectively, in \cite{R1}. The $g$-expectation was introduced by Peng \cite{P1} as a particular nonlinear expectation based on backward stochastic differential equations (BSDEs for short), which were first studied by Pardoux and Peng \cite{PP1}. One natural characteristic of the above risk measures is time-consistency (or the semi-group property); however, time-inconsistent preferences usually exist in the real world, see \cite{EL}, \cite{La}, \cite{S}.
For this case, Yong \cite{Y3} first obtained a class of continuous-time dynamic risk measures, allowing possibly time-inconsistent preferences, by means of backward stochastic Volterra integral equations (BSVIEs for short). One-dimensional BSVIEs are equations of the following type, defined on $[0,T]$: \begin{equation} Y(t)=\psi(t)+\int_t^Tg(t,s,Y(s),Z(t,s),Z(s,t))ds-\int_t^TZ(t,s)dW(s), \end{equation} where $(W_t)_{t\in [0,T]}$ is a $d$-dimensional Wiener process defined on a probability space $(\Omega ,\mathcal{F},P)$ with $(\mathcal{F}_t)_{t\in [0,T]}$ the natural filtration of $(W_t)$, such that $\mathcal{F}_0$ contains all $P$-null sets of $\mathcal{F}$. The function $g:\Omega\times\Delta^c\times R\times R^d\times R^d\rightarrow R$ is generally called the generator of (1); here $T$ is the terminal time, and the $R$-valued $\mathcal{F}_{T}$-measurable process $\psi(\cdot)$ is the terminal condition; $(g,T,\psi)$ are the parameters of (1). A solution is a pair of processes $(Y(\cdot),Z(\cdot,\cdot))$ with certain integrability properties, depending on the framework imposed by the type of assumptions on $g$. Readers interested in an in-depth analysis of BSVIEs may consult \cite{Y1}, \cite{Y2}, \cite{Y3}, \cite{L}, \cite{WZ}, \cite{A} and \cite{R2}, among others. One of the assumptions in Yong \cite{Y3} is that $r(\cdot)$ (the interest rate) is deterministic; otherwise it would contradict the definition of translation invariance. As is well known, in some circumstances it is necessary that the interest rate be random. Hence, in the current paper, we are dedicated to studying the random case by giving a more general version of the aforementioned definition. After that we will present a class of dynamic coherent risk measures by means of BSVIEs, allowing the interest rate to be random.
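To see why a deterministic $r(\cdot)$ is tied to translation invariance, consider the simplest linear generator $g=r(s)Y(s)$ with $r(\cdot)$ deterministic, and define the risk of a position $\psi$ as $\rho(t;\psi(\cdot))=Y(t)$, as is done in Section 4 below. The following computation is an illustration of ours, not taken from \cite{Y3}.

```latex
% If Y(t) = -\psi(t) + \int_t^T r(s)Y(s)\,ds - \int_t^T Z(t,s)\,dW(s)
% and \widetilde{Y}(t) := Y(t) - c\,e^{\int_t^T r(u)\,du}, then, using
% \int_t^T r(s)\,e^{\int_s^T r(u)\,du}\,ds = e^{\int_t^T r(u)\,du} - 1,
% a direct substitution shows that (\widetilde{Y},Z) solves the same
% equation with terminal condition \psi + c. Hence
\[
  \rho\bigl(t;\psi(\cdot)+c\bigr) \;=\; \rho\bigl(t;\psi(\cdot)\bigr)
  - c\,e^{\int_t^T r(s)\,ds},
\]
% which is exactly the translation-invariance axiom with a deterministic
% discount factor. If r(\cdot) were random, e^{\int_t^T r(s)ds} would not
% be \mathcal{F}_t-measurable, and the axiom must be generalized.
```

This is the computation behind the more general axiom with a random shift process introduced in Section 4.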
Before doing this, we should prove the unique solvability of the M-solution (a notion introduced by Yong \cite{Y2}) of BSVIEs under a stochastic Lipschitz condition, \begin{equation} Y(t)=\psi(t)+\int_t^Tg(t,s,Y(s),Z(s,t))ds-\int_t^TZ(t,s)dW(s), \end{equation} which generalizes the result in \cite{Y3}. In addition, we claim that this proof is much briefer than the one in \cite{Y3}. The paper is organized as follows. Section 2 is devoted to notation. In Section 3, we state a result of existence and uniqueness for BSVIEs with generators satisfying a stochastic Lipschitz condition. In Section 4, we apply the previous result to study some problems in dynamic risk measures. \section{Notation} In this section, we define several classes of stochastic processes which we use in the sequel. We denote $\Delta^c=\{(t,s):\ 0\leq t\leq s\leq T\}$ and $\Delta=\{(t,s):\ 0\leq s<t\leq T\}$. Let $L_{\mathcal{F}_T}^2[0,T]$ be the set of $\mathcal {B}([0,T])\otimes\mathcal{F}_T $-measurable processes $\psi:[0,T]\times\Omega\rightarrow R$ such that $E\int_0^T|\psi(t)|^2dt<\infty .$ We also denote \begin{eqnarray*} \mathcal{H}^2[0,T]=L_{\mathbb{F}}^2[0,T]\times L^2(0,T;L_{\mathbb{F}}^2[0,T]), \end{eqnarray*} where $L_{\mathbb{F}}^2[0,T]$ is the set of all adapted processes $ Y:[0,T]\times \Omega\rightarrow R$ such that $E\int_0^T|Y(t)|^2dt<\infty ,$ and $L^2(0,T;L_{\mathbb{F}}^2[0,T])$ is the set of all processes $ Z:[0,T]^2\times \Omega\rightarrow R$ such that, for almost all $ t\in [0,T]$, $Z(t,\cdot )\in L_{\mathbb{F}}^2[0,T]$ and \begin{eqnarray*} E\int_0^T\int_0^T|Z(t,s)|^2dsdt<\infty. \end{eqnarray*} Now we cite some definitions introduced in \cite{Y3} and \cite{Y2}. \begin{definition} A mapping $\rho :L_{\mathcal{F}_T}^2[0,T]\rightarrow L_{\mathbb{F}}^2[0,T]$ is called a dynamic risk measure if the following hold: 1) (Past independence) For any $\psi (\cdot ),$ $\overline{\psi }(\cdot )\in L_{\mathcal{F} _T}^2[0,T],$ if $\psi (s)=\overline{\psi }(s),$ a.s.
$\omega \in \Omega ,$ $ s\in [t,T],$ for some $t\in [0,T),$ then $\rho (t;\psi (\cdot ))=\rho (t; \overline{\psi }(\cdot )),$ a.s. $\omega \in \Omega .$ 2) (Monotonicity) For any $\psi (\cdot ),$ $\overline{\psi }(\cdot )\in L_{\mathcal{F} _T}^2[0,T],$ if $\psi (s)\leq \overline{\psi }(s),$ a.s. $\omega \in \Omega , $ $s\in [t,T],$ for some $t\in [0,T),$ then $\rho (s;\psi (\cdot ))\geq \rho (s;\overline{\psi }(\cdot )),$ a.s. $\quad \omega \in \Omega, s\in[t,T].$ \end{definition} \begin{definition} A dynamic risk measure $\rho :L_{\mathcal{F}_T}^2[0,T]\rightarrow L_{\mathbb{ F}}^2[0,T]$ is called a coherent risk measure if the following hold: 1) (Translation invariance) There exists a deterministic integrable function $r(\cdot )$ such that for any $\psi (\cdot )\in L_{\mathcal{F}_T}^2[0,T],$ \begin{eqnarray*} \rho (t;\psi (\cdot )+c)=\rho (t;\psi (\cdot ))-ce^{\int_t^Tr(s)ds},\quad \omega \in \Omega ,t\in [0,T]. \end{eqnarray*} 2) (Positive homogeneity) For $\psi (\cdot )\in L_{\mathcal{F}_T}^2[0,T]$ and $\lambda >0,$ \begin{eqnarray*} \rho (t;\lambda \psi (\cdot ))=\lambda \rho (t;\psi (\cdot )),\quad a.s.\quad \omega \in \Omega ,\quad t\in [0,T]. \end{eqnarray*} 3) (Subadditivity) For any $\psi (\cdot ),$ $\overline{\psi }(\cdot )\in L_{\mathcal{F} _T}^2[0,T],$ \begin{eqnarray*} \rho (t;\psi (\cdot )+\overline{\psi }(\cdot ))\leq \rho (t;\psi (\cdot ))+\rho (t;\overline{\psi }(\cdot )),\quad \omega \in \Omega , t\in [0,T]. \end{eqnarray*} \end{definition} \begin{definition} Let $S\in [0,T]$. A pair of $(Y(\cdot ),Z(\cdot ,\cdot ))\in \mathcal{H} ^2[S,T]$ is called an adapted M-solution of BSVIE (1) on $[S,T]$ if (1) holds in the usual It\^o's sense for almost all $t\in [S,T]$ and, in addition, the following holds: \begin{eqnarray*} Y(t)=E^{\mathcal{F}_S}Y(t)+\int_S^tZ(t,s)dW(s). 
\end{eqnarray*} \end{definition} \section{The existence and uniqueness with stochastic Lipschitz coefficients} A class of dynamic risk measures, allowing time-inconsistent preferences, was induced via BSVIEs of the form, $t\in [0,T]$, \begin{equation} Y(t)=\psi (t)+\int_t^Tg(t,s,Y(s),Z(s,t))ds-\int_t^TZ(t,s)dW(s). \end{equation} In this section we study the unique solvability of M-solutions of (3) under weaker assumptions, i.e., allowing the coefficients to be stochastic. In addition, the proof also seems briefer than the one in \cite{Y3}. We introduce the standing assumption as follows. (H1) Let $g:\Delta ^c\times R\times R^{d}\times \Omega \rightarrow R$ be $\mathcal{B}(\Delta ^c\times R\times R^{d})\otimes \mathcal{F}_T$-measurable such that $s\rightarrow g(t,s,y,z)$ is $\mathbb{F}$-progressively measurable for all $(t,y,z)\in [0,T]\times R\times R^{d}$ and \begin{equation} E\int_0^T\left( \int_t^T|g_0(t,s)|ds\right)^2dt<\infty , \end{equation} where we denote $g_0(t,s)\equiv g(t,s,0,0).$ Moreover, \begin{eqnarray} |g(t,s,y,z)-g(t,s,\overline{y},\overline{z})| &\leq &L_1(t,s)|y-\overline{y} |+L_2(t,s)|z-\overline{z}|, \\ \forall y,\overline{y} &\in &R,\quad z, \overline{z}\in R^{d}, \nonumber \end{eqnarray} where $L_1(t,s)$ and $L_2(t,s)$ are two non-negative $\mathcal{B}(\Delta^c)\otimes \mathcal{F}_{T}$-measurable processes such that, for any $t\in[0,T]$, \begin{eqnarray} \int_t^TL_1^2(t,s)ds<M, \quad \left( \int_t^TL_2^q(t,s)ds\right) ^{\frac 2q} <M , \nonumber \end{eqnarray} for some constant $M$ and $\frac 1p+\frac 1q=1,$ $1<p<2$. We then obtain the following theorem. \begin{theorem} Let (H1) hold and $\psi (\cdot )\in L_{\mathcal{F}_{T}}^2[0,T]$. Then (3) admits a unique M-solution in $\mathcal{H}^2[0,T]$. \end{theorem} \begin{proof} Let $\mathcal{M}^2[0,T]$ be the set of elements in $\mathcal{H}^2[0,T]$ satisfying: $\forall S\in [0,T]$, \[ Y(t)=E^{\mathcal{F}_{S}}Y(t)+\int_S^tZ(t,s)dW(s).
\] Obviously it is a closed subset of $\mathcal{H}^2[0,T]$ (see \cite{Y2}). Due to the following inequality, \begin{eqnarray*} E\int_0^Te^{\beta t}dt\int_0^t|z(t,s)|^2ds\leq E\int_0^Te^{\beta t}|y(t)|^2dt, \end{eqnarray*} where $\beta$ is a constant and $(y(\cdot),z(\cdot,\cdot))\in\mathcal{M}^2[0,T],$ we can introduce a new equivalent norm on $\mathcal{M}^2[0,T]$ as follows: \[ \Vert (y(\cdot ),z(\cdot ,\cdot ))\Vert _{\mathcal {M}^2[0,T]}\equiv \left\{ E\int_0^Te^{\beta t}|y(t)|^2dt+E\int_0^Te^{\beta t}\int_t^T|z(t,s)|^2dsdt\right\} ^{\frac 12}. \] Let us consider \begin{equation} Y(t)=\psi (t)+\int_t^Tg(t,s,y(s),z(s,t))ds-\int_t^TZ(t,s)dW(s),\quad t\in [0,T], \end{equation} for any $\psi (\cdot )\in L_{\mathcal{F}_{T}}^2[0,T]$ and $(y(\cdot ),z(\cdot ,\cdot ))\in \mathcal{M}^2[0,T].$ BSVIE (6) admits a unique M-solution $(Y(\cdot ),Z(\cdot ,\cdot ))\in \mathcal{M} ^2[0,T]$ (see \cite{Y2}), and we can define a map $\Theta : \mathcal{M}^2[0,T]\rightarrow \mathcal{M}^2[0,T]$ by \[ \Theta (y(\cdot ),z(\cdot ,\cdot ))=(Y(\cdot ),Z(\cdot ,\cdot )),\quad \forall (y(\cdot ),z(\cdot ,\cdot ))\in \mathcal{M}^2[0,T].
\] Let $(\overline{y}(\cdot ),\overline{z}(\cdot ,\cdot ))\in \mathcal{M} ^2[0,T] $ and $\Theta (\overline{y}(\cdot ),\overline{z}(\cdot ,\cdot ))=( \overline{Y}(\cdot ),\overline{Z}(\cdot ,\cdot )).$ Obviously we obtain that \begin{eqnarray*} &&E\int_0^Te^{\beta t}|Y(t)-\overline{Y}(t)|^2dt+E\int_0^Te^{\beta t}dt\int_t^T|Z(t,s)-\overline{Z}(t,s)|^2ds \\ &\leq &CE\int_0^Te^{\beta t}\left\{ \int_t^T|g(t,s,y(s),z(s,t))-g(t,s, \overline{y}(s),\overline{z}(s,t))|ds\right\} ^2dt \\ &\leq &CE\int_0^Te^{\beta t}\left\{ \int_t^TL_1(t,s)|y(s)-\overline{y} (s)|ds\right\} ^2dt \\ &&+CE\int_0^Te^{\beta t}\left\{ \int_t^TL_2(t,s)|z(s,t)-\overline{z} (s,t)|ds\right\} ^2dt \\ &\leq &CE\int_0^Te^{\beta t}\left(\int_t^TL_1^2(t,s)ds\right) \int_t^T|y(s)-\overline{y}(s)|^2dsdt \\ &&+CE\int_0^Te^{\beta t}\left(\int_t^TL_2^{q'}(t,s)ds\right)^{\frac 2 {q'}} \left(\int_t^T|z(s,t)-\overline{z}(s,t)|^{p'}ds\right)^{\frac 2 {p'}}dt \\ &\leq &CE\int_0^T|y(s)-\overline{y}(s)|^2ds\int_0^se^{\beta t}dt+C\left[\frac1 \beta\right]^{\frac {2-p'} {p'}} E\int_0^Tds\int_t^Te^{\beta s}|z(s,t)-\overline{z}(s,t)|^2dt \\ &\leq &\frac C\beta E\int_0^Te^{\beta s}|y(s)-\overline{y}(s)|^2ds +C\left[\frac 1 \beta\right]^{\frac {2-p'}{p'}} E\int_0^Te^{\beta t}dt\int_0^t|z(t,s)-\overline{z}(t,s)|^2ds \\ &\leq &\left(\frac C\beta +C\left[\frac 1 \beta\right]^{\frac {2-p'}{p'}}\right)E\int_0^Te^{\beta s}|y(s)-\overline{y}(s)|^2ds, \end{eqnarray*} where $1<p'<2,$ $\frac {1}{p'}+\frac {1 }{q'}=1.$ Note that in above we use the following relation, for any $1<p'<2,$ and $r>0,$ \begin{eqnarray} &&\left[ \int_t^T|z(s,t)-\overline{z}(s,t)|^{p'}ds\right] ^{\frac 2 {p'}} \nonumber \\ &\leq& \left[ \int_t^Te^{-rs\frac 2{2-p'}}ds\right] ^{\frac{2-p'} {p'}}\int_t^Te^{rs\frac 2 {p'}}|z(s,t)-\overline{z}(s,t)|^2ds \nonumber \\ &\leq &\left[ \frac 1 r\right] ^{\frac{2-p'}{p'}}\left[ \frac{2-p'}p\right] ^{ \frac{2-p'}{p'}}e^{-rt\frac 2 {p'}}\int_t^Te^{rs\frac 2 {p'}}|z(s,t)-\overline{z}(s,t)|^2ds, \end{eqnarray} Then we can 
choose $\beta$ large enough that the map $\Theta $ is a contraction, and hence (6) admits a unique M-solution. \end{proof} \section{Applications in finance} In what follows, we define \begin{equation} \rho(t,\psi(\cdot))=Y(t),\quad \forall t\in[0,T], \end{equation} where $(Y(\cdot),Z(\cdot,\cdot))$ is the unique adapted M-solution of (3). Now we cite a proposition from \cite{Y3}. \begin{proposition} Let us consider the following form of BSVIE, $t\in [0,T]$: \begin{equation} Y(t)=-\psi (t)+\int_t^T(r_1(s)Y(s)+g(t,s,Z(s,t)))ds-\int_t^TZ(t,s)dW(s). \end{equation} If $r_{1}(s)$ is a bounded deterministic function, then $\rho(\cdot)$ defined by (8) is a dynamic coherent risk measure provided $z\mapsto g(t,s,z)$ is positively homogeneous and sub-additive. \end{proposition} In Proposition 4.1, because of the way translation invariance is defined in Definition 2.2, $r_1(\cdot)$ must be a deterministic function. In fact, we require $t\mapsto \rho(t;\psi(\cdot)+c)$ to be $\Bbb{F}$-adapted, and if we allow $r_1(\cdot)$ to be an $\Bbb{F}$-adapted process, this axiom becomes problematic. Hence, if we would like to treat the random case, we should replace the axiom with the following more general form: 1') There exists an $\Bbb{F}$-adapted process $Y_{0}(t)$ such that for any $\psi (\cdot )\in L_{\mathcal{F}_T}^2[0,T],$ \begin{eqnarray*} \rho (t;\psi (\cdot )+c)=\rho (t;\psi (\cdot ))-Y_{0}(t),\quad \omega \in \Omega ,t\in [0,T]. \end{eqnarray*} Now we give a class of dynamic coherent risk measures via certain BSVIEs. We have the following. \begin{theorem} Let us consider \begin{equation} Y(t)=-\psi(t)+\int_t^T(l_2(t,s)Z(s,t)+l_1(t,s)Y(s))ds-\int_t^TZ(t,s)dW(s), \end{equation} where $l_{i}$ $(i=1,2)$ are two bounded processes such that $s\mapsto l_{i}(t,s)$ $(i=1,2)$ are $\Bbb{F}$-adapted for almost every $t\in[0,T]$. Then $\rho(\cdot)$ is a dynamic coherent risk measure. \end{theorem} \begin{proof} The result is straightforward, so we omit the details. 
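The omitted verification can nevertheless be sketched in a few lines; the outline below is ours and relies only on the linearity of (10) and the uniqueness of M-solutions.

```latex
% Sketch of the verification (our own outline, not from the paper).
% Positive homogeneity: if (Y,Z) is the M-solution of (10) for \psi, then
% (\lambda Y,\lambda Z) is the M-solution for \lambda\psi, \lambda>0, so
\rho(t;\lambda\psi)=\lambda\,\rho(t;\psi),\quad \lambda>0.
% Sub-additivity, here in fact with equality: by linearity and uniqueness,
% the sum of the M-solutions for \psi_1 and \psi_2 is the M-solution
% for \psi_1+\psi_2, whence
\rho(t;\psi_1+\psi_2)=\rho(t;\psi_1)+\rho(t;\psi_2).
% Translation in the generalized sense 1'): Y_0:=Y^{\psi}-Y^{\psi+c}
% solves the BSVIE obtained by subtracting the equations for \psi and
% \psi+c; its free term is the constant c and its generator is the same
% linear one, so Y_0 is \Bbb{F}-adapted and does not depend on \psi.
% Monotonicity is verified as in Proposition 4.1.
```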
\end{proof} \begin{remark} In Theorem 4.1, the coefficient of $Y(s)$ is random, which generalizes the situation in Proposition 4.1. However, as we will show below, in this case $g$ usually cannot be of the general form appearing in BSVIE (9). \end{remark} Let us consider the following two equations, \begin{equation} \left\{ \begin{array}{lc} Y^\psi(t)=-\psi (t)+\displaystyle\int_t^Tl'(t,s)Y^\psi (s)ds \nonumber \\ \quad\quad \quad \quad +\displaystyle\int_t^Tg(t,s,Z^\psi (s,t))ds-\displaystyle\int_t^TZ^\psi (t,s)dW(s),\\ Y^{\psi+c}(t)=-\psi (t)-c+\displaystyle\int_t^Tl'(t,s)Y^{\psi+c} (s)ds\nonumber \\ \quad\quad \quad \quad+\displaystyle\int_t^Tg(t,s,Z^{\psi+c} (s,t))ds -\displaystyle\int_t^TZ^{\psi+c} (t,s)dW(s), \end{array} \right. \end{equation} where $l'$ is a process such that $s\mapsto l'(t,s)$ is $\Bbb{F}$-adapted for almost every $t\in[0,T]$, and $(Y^{\psi}(\cdot),Z^{\psi}(\cdot,\cdot))$ and $(Y^{\psi+c}(\cdot),Z^{\psi+c}(\cdot,\cdot))$ are the unique M-solutions of the above two BSVIEs. We denote $$Y'(t)=Y^{\psi+c}(t)-Y^{\psi}(t),\quad Z'(t,s)=Z^{\psi+c}(t,s)-Z^{\psi}(t,s),\quad t,s\in[0,T].$$ Subtracting the two equations, we deduce that \begin{eqnarray} Y'(t)&=&-c+\int_t^T(g(t,s,Z^{\psi+c} (s,t))-g(t,s,Z^\psi(s,t)))ds \nonumber\\ &&+\int_t^Tl'(t,s)Y'(s)ds-\int_t^TZ'(t,s)dW(s). \end{eqnarray} The definition of M-solution implies \begin{equation} \left\{ \begin{array}{lc} Y^{\psi}(t)=EY^{\psi}(t)+\displaystyle\int_0^tZ^{\psi}(t,s)dW(s),\nonumber \\ Y^{\psi+c}(t)=EY^{\psi+c}(t)+\displaystyle\int_0^tZ^{\psi+c}(t,s)dW(s),\nonumber \end{array} \right. 
\end{equation} so we have $$Y'(t)=EY'(t)+\int_0^tZ'(t,s)dW(s),\quad t\in[0,T],$$ thus we can rewrite (11) as \begin{eqnarray} Y'(t)&=&-c+\int_t^T(g(t,s,Z^{\psi}(s,t)+Z'(s,t))-g(t,s,Z^\psi(s,t)))ds \nonumber \\ &&+\int_t^Tl'(t,s)Y'(s)ds-\int_t^TZ'(t,s)dW(s). \end{eqnarray} Assume $(Y^{\psi},Z^{\psi})$ is known; then under (H1), (12) admits a unique M-solution $(Y',Z')$, which may depend on $(Y^{\psi},Z^{\psi}).$ Obviously $Y'=Y^{\psi+c}-Y^{\psi},$ which means that when the total wealth $\psi$ is known to be increased by an amount $c>0$ (if $c<0,$ it means a decrease), then the dynamic risk is decreased (or increased) by $Y'$, which may depend on the total wealth $\psi$. Next we give two special cases. (1) When $l'(t,s)$ is independent of $\omega$, let us consider the following backward Volterra integral equation (BVIE for short), \begin{equation} Y^{*}(t)=-c+\int_t^Tl'(t,s)Y^{*}(s)ds,\quad t\in [0,T]. \end{equation} By the fixed point argument above, (13) admits a unique deterministic solution when $l'(t,s)$ satisfies the assumption in (H1). It is easy to check that, when $l'$ is a deterministic function, $(Y^{*},0)$ is the unique M-solution of (12). In fact, if $Y^{*}$ is deterministic, then $Z^{*}(t,s)=0,$ $(t,s)\in\Delta.$ After substituting $(Y^{*},Z^{*})$ into BSVIE (12), we obtain $$Z^{\psi+c}(t,s)=Z^{\psi}(t,s),\quad g(t,s,Z^{\psi+c}(s,t))=g(t,s,Z^\psi(s,t)),\quad (t,s)\in\Delta,$$ thus $(Y^{*},0)$ is an adapted solution of (12); moreover, it is the unique M-solution. So in this case, $g$ can be of the general form $g(t,s,z).$ When $l'$ is moreover independent of $t$, we have $Y^{*}(t)=-ce^{\int^T_tl'(u)du},$ and we recover the result in \cite{Y3}. (2) When $l'$ depends on $\omega$, if the value of $(Y',Z')$ is to be independent of $(Y^{\psi},Z^{\psi}),$ then $g$ usually cannot be of the general form $g(t,s,z)$ as above. On the one hand, the analogue of case (1) above no longer holds. 
In fact, in this case, $Y'$ usually depends on $\omega,$ i.e., there exists a set $A\subseteq [0,T]$ with $\lambda(A)>0$ such that for $t\in A,$ $P\{Y'(t)\neq EY'(t)\}>0,$ where $\lambda$ is the Lebesgue measure. Then $$E\int^T_0\int_0^t|Z'(t,s)|^2dsdt=E\int^T_0|Y'(t)-EY'(t)|^2dt>0,$$ which means that there must exist a set $B\subseteq \Delta$ with $\lambda(B)>0$ such that $$P(\{Z'(t,s)\neq 0\})>0,\quad \forall (t,s)\in B.$$ Then for a general form of $g(t,s,z),$ $$g(t,s,Z'(s,t)+Z^{\psi}(s,t))\neq g(t,s,Z^{\psi}(s,t)),$$ which means that $Y'$ may depend on $\psi.$ For example, let $l'(s)=\sin W(s),$ $c\neq0,$ and consider the equation \begin{equation} Y^c(t)=-c+\int_t^T\sin W(s)Y^c(s)ds-\int_t^TZ^c(t,s)dW(s). \end{equation} By Theorem 3.1 it admits a unique M-solution. If $Y^c=EY^c,$ a.e., a.s., then \begin{eqnarray} Y^c(t) &=&-c+\int_t^TE[\sin W(s)Y^c(s)]ds. \end{eqnarray} Since $E\sin W(s)=0,$ this gives $Y^c(t)=-c,$ and then $Z^c(t,s)=0,$ $(t,s)\in [0,T]^2.$ Substituting back into (14), we obtain, for almost every $t\in[0,T],$ $\int_t^T\sin W(s)ds=0,$ which means that for almost every $t\in[0,T],$ $\sin W(t)=0,$ obviously a contradiction. On the other hand, if $g(t,s,Z'(s,t)+Z^{\psi}(s,t))-g(t,s,Z^{\psi}(s,t))$ is to be independent of $Z^{\psi}(s,t),$ then, roughly speaking, the only good case is that $g$ is a linear function of $z$; otherwise the result does not hold. For example, if we let $g(t,s,z)=l(t,s)z^2,$ which is convex in $z$, then $$g(t,s,Z'(s,t)+Z^{\psi}(s,t))-g(t,s,Z^{\psi}(s,t))=l(t,s)\left(2Z'(s,t)Z^{\psi}(s,t)+Z'^{2}(s,t)\right).$$ Obviously the value of $Y_0$ depends on $\psi$. \end{document}
\begin{document} \title[On a property of random-oriented percolation in a quadrant] {On a property of random-oriented percolation in a quadrant} \author{Dmitry Zhelezov} \thanks{Department of Mathematical Sciences, Chalmers University Of Technology and University of Gothenburg} \address{Department of Mathematical Sciences, Chalmers University Of Technology and University of Gothenburg, 41296 Gothenburg, Sweden} \email{[email protected]} \subjclass[2000]{60K35 (primary).} \keywords{percolation, random orientations, phase transition} \date{\today} \begin{abstract} Grimmett's random-orientation percolation is formulated as follows. The square lattice is used to generate an oriented graph such that each edge is oriented rightwards (resp. upwards) with probability $p$ and leftwards (resp. downwards) otherwise. We consider a variation of Grimmett's model proposed by Hegarty, in which edges are oriented away from the origin with probability $p$, and towards it with probability $1-p$, which implies rotational instead of translational symmetry. We show that both models can be viewed as special cases of random-oriented percolation in the NE-quadrant, provided that the critical value for the latter is $\frac{1}{2}$. As a corollary, we unconditionally obtain a non-trivial lower bound for the critical value of Hegarty's random-orientation model. The second part of the paper is devoted to higher dimensions and we show that the Grimmett model percolates in any slab of height at least $3$ in $\mathbb{Z}^3$. \end{abstract} \maketitle \section{Introduction} Random-oriented percolation was first introduced by G. Grimmett \cite{Gr1} and is defined as follows. Consider the square lattice $\mathbb{Z}^2$ and let each vertical edge be directed upwards with probability $p \in [0, 1]$ and downwards otherwise. Analogously, each horizontal edge is directed rightwards with probability $p$ and leftwards otherwise. 
Let $\theta_G(p)$ be the probability that there is a directed path from the origin to infinity. By coupling with classical bond percolation, it is not hard to show that $\theta_G(\frac{1}{2}) = 0$ using, for example, Lemma~2.1 in \cite{Lin1}. There is also the obvious symmetry $\theta_G(p) = \theta_G(1-p)$, so it is natural to ask whether $\theta_G(p) > 0$ for $p \neq 1/2$. This conjecture was first raised by Grimmett \cite{Gr1}. The most significant advance so far was also made by Grimmett in \cite{Gr2}, where he showed that percolation does occur if one adds a positive density of randomly directed arcs, so that the total probability of a directed arc being present is greater than one. Also, W. Xianyuan \cite{Xia1} proved the uniqueness of the infinite cluster in the supercritical phase. He also conjectured that $\theta_G(p)$ is strictly monotone on $[1/2, 1]$, an obviously much stronger version of Grimmett's conjecture. Both conjectures seem to be far from resolution. In this note we consider a slightly different model, proposed by P. Hegarty on MathOverflow \cite{Heg1}, which we will refer to as the $H$-model hereafter. It is defined as follows: for each edge $e$ of the integer lattice $\mathbb{Z}^2$ assign a direction away from the origin with probability $p$, and towards the origin otherwise. We say that a directed edge from $x$ to $y$ is oriented {\em inwards} if $\|x\| > \|y\|$, and {\em outwards} otherwise, with the usual Euclidean norm. We denote by $\theta_H(p)$ the corresponding probability that there exists an infinite directed path from the origin. Note that this model coincides with the one proposed by Grimmett if we consider percolation only in the North-East quadrant. So, we will use $\theta_{NE}(p)$ for the percolation probability of the latter without abuse of notation. We conjecture that the NE quadrant is big enough for random-oriented percolation to occur. 
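The NE-quadrant model is easy to experiment with numerically. The sketch below is our own illustration (not part of the paper; all function names are ours): it samples independent orientations in a finite $n \times n$ box, each edge pointing rightwards or upwards with probability $p$, and uses breadth-first search to test for a directed path from the origin to the boundary, a finite-volume proxy for the event defining $\theta_{NE}(p)$.

```python
import random
from collections import deque

def reaches_boundary(n, p, rng):
    """Sample orientations in the n-by-n NE box and check whether a
    directed path leads from the origin to the boundary."""
    adj = {(x, y): [] for x in range(n + 1) for y in range(n + 1)}
    for x in range(n + 1):
        for y in range(n + 1):
            if x < n:  # horizontal edge between (x, y) and (x+1, y)
                if rng.random() < p:
                    adj[(x, y)].append((x + 1, y))   # rightwards
                else:
                    adj[(x + 1, y)].append((x, y))   # leftwards
            if y < n:  # vertical edge between (x, y) and (x, y+1)
                if rng.random() < p:
                    adj[(x, y)].append((x, y + 1))   # upwards
                else:
                    adj[(x, y + 1)].append((x, y))   # downwards
    # Breadth-first search along directed edges from the origin.
    seen, queue = {(0, 0)}, deque([(0, 0)])
    while queue:
        v = queue.popleft()
        if v[0] == n or v[1] == n:
            return True
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

def estimate_theta(n, p, trials=200, seed=0):
    """Monte Carlo estimate of the finite-box escape probability."""
    rng = random.Random(seed)
    return sum(reaches_boundary(n, p, rng) for _ in range(trials)) / trials
```

At $p = 1$ every edge points away from the origin and the boundary is always reached, while at $p = 0$ the origin has no outgoing edges; such runs are a sanity check only and say nothing rigorous about the conjectured critical value $\frac{1}{2}$.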
\begin{con} \label{con:NE} For random-oriented percolation in the North-East quadrant, $\theta_{NE}(p) > 0$ for all $p > \frac{1}{2}$. \end{con} It might be possible to prove the weaker result that $\theta_G(p) > 0$ implies $\theta_{NE}(p) > 0$, but we were unable to do so. The analogous result is well known for ordinary bond percolation in two dimensions and may be proven, for example, using RSW theory. It is not hard to show, using standard circuit-counting arguments, that $\theta_H(p) = 0$ for $p < \frac{1}{\mu^2}$, where $\mu$ is the connective constant of the square lattice. On the other hand, $\theta_H(p) > 0$ for $p > \vec{p}_c$ by coupling with oriented percolation, whose critical probability is $\vec{p}_c$. It is proved in \cite{BBS} that $\vec{p}_c < 0.6735$, and believed that $\vec{p}_c \approx 0.6447$. The main question that arises is whether we observe critical phenomena for the $H$-model and, if so, what the critical probability is. Unfortunately, such a property appears to be very hard to establish. \begin{con} \label{con:monotonicity} The probability function $\theta_H(p)$ is strictly monotone in $[0, 1]$. \end{con} In this note we prove \begin{thm} \label{prop:main} Suppose $\theta_{NE}(1-p) > 0$. Then $\theta_H(p) = 0$. \end{thm} Together with Conjecture~\ref{con:NE}, this would imply that for the $H$-model the critical probability does exist and is equal to $\frac{1}{2}$. Also, we get the following result unconditionally. \begin{corr} $\theta_H(p) = 0$ for $0 < p < 1 - \vec{p}_c$. \end{corr} Inserting the upper bound for oriented percolation, we get that $\theta_H(p)=0$ for $p < 0.3265$, which is considerably better than $\frac{1}{\mu^2} \approx 0.15$. It is worth noting that the crucial property of the $H$-model is its 90-degree rotational symmetry, which is absent in the Grimmett model. 
At the end of the paper we consider the Grimmett model in higher dimensions and prove that it is always supercritical, even if confined to a thin slab. \begin{thm} \label{prop:dGrimmett} The $3$-dimensional Grimmett model confined to the slab $\mathbb{Z}^2 \times \{-1, 0, 1\}$ percolates for any $p \in [0, 1]$. \end{thm} The proof of Theorem \ref{prop:dGrimmett} exploits the criticality of the two-dimensional Grimmett model, which was already shown in \cite{Gr2}. Though the result supports Grimmett's original conjecture, it seems that the crucial phenomena occur in the case of random-oriented percolation confined to a quadrant, which probably exhibits a phase transition in all dimensions. At least we can say that for any fixed $d \geq 2$ the $H$-model as well as the NE-quadrant model do not percolate for sufficiently small $p > 0$, due to the standard path-counting argument, but of course they do percolate for $p > \vec{p}_c$. \section{Percolation in the NE quadrant} This section is devoted to the proof of Theorem~\ref{prop:main}. The dual lattice is a copy of $\mathbb{Z}^2$ translated by the vector $(1/2, 1/2)$, but orientation rules can be defined in two different ways: turning orientations in the original lattice clockwise or counterclockwise. We denote such dual lattices $\mathbb{Z}^{2u}_d$ and $\mathbb{Z}^{2d}_d$ and define them as follows. If the edge $e$ fails to have an orientation in direction $\alpha$, the dual edge $e_d$ has orientation $\alpha + \pi/2$ in $\mathbb{Z}^{2u}_d$ and $\alpha - \pi/2$ in $\mathbb{Z}^{2d}_d$. The corresponding dual NE quadrants are denoted by $\mathbb{Q}^u_d$ and $\mathbb{Q}^d_d$. For example, to prevent percolation in the NE quadrant, there must be a directed path from $(x, -1/2)$ to $(-1/2, y)$ in $\mathbb{Q}^u_d$ and, equivalently, a directed path from $(-1/2, y)$ to $(x, -1/2)$ in $\mathbb{Q}^d_d$ for some $x, y > 0$. That partly explains the superscripts $u$ (up) and $d$ (down). 
Also, we denote by $\Lambda_n$ the $2n\times2n$ square box with center at the origin, and let $B^{+}_{m,n}(x)$ be the event that there exists a path from $(x, 1/2)$ to $(1/2, y)$ in $\mathbb{Q}^{u}_d$ for some $y > 0$ which lies entirely within $\Lambda_m\setminus\Lambda_n$. For the existence of a path which avoids $\Lambda_n$ we will write $B^{+}_{n}(x)$, and $B^{+}(x)$ for the unconstrained event. In other words, $B^{+}_{n}(x) = \cup_{m > n}B^{+}_{m,n}(x)$ and $B^{+}(x) = \cup_{n > 0}B^{+}_{n}(x)$. As usual, we will denote by $\partial \Lambda_n$ the vertex boundary of the box $\Lambda_n$, i.e., the set of vertices that have neighbors both inside and outside $\Lambda_n$. Hereafter we will assume that all paths are in $\mathbb{Z}^{2u}_d$ unless explicitly stated otherwise. We start with a few auxiliary lemmas that explicitly exploit duality. \begin{lem} \label{lem:nodual} If $\theta_{NE}(1-p) > 0$, then $\theta_{NE}(p) = 0$. \end{lem} \begin{proof} This lemma can be proven in exactly the same way Harris showed there is no bond percolation in the quadrant at $\frac{1}{2}$ in his seminal paper \cite{Harr}. The only observation we need is that $\mathbb{Z}^{2u}_d$ has percolation parameter $1-p$ if we fix its origin at some point $(x, -1/2)$. Then, according to Lemma 5.2 of \cite{Harr}, since $\theta_{NE}(1-p) > 0$, with probability one there is an oriented path in $\mathbb{Q}^{u}_d$ from $(x, -1/2)$ to $(-1/2, y)$ for some $x, y > 0$, because any dual path started at the $x$-axis crosses the $y$-axis a.s. But this means that the NE-cluster in the original lattice is finite a.s. \end{proof} \begin{lem} \label{lem:limit} Let $n > 0$ and $\theta = \theta_{NE}(1-p) > 0$. Recall that $B^{+}_n(x)$ denotes the event that there is a path in $\mathbb{Q}^{u}_d$ from $(x, 1/2)$ to $(1/2, y)$ outside the box $\Lambda_n$ for some $y > 0$. Then $$ \liminf_{x \to \infty} \mathbb{P}\{B^{+}_n(x)\} \geq \theta. 
$$ \end{lem} \begin{proof} For each dual configuration $\omega$, let $\omega'$ be the modification of $\omega$ such that all edges inside $\Lambda_n$ are directed outwards from the point $(1/2, 1/2)$. Let $N(\omega)$ be the number of points $(x, 1/2)$ such that $\omega \in B^{+}(x)$ but $\omega' \notin B^{+}(x)$. Finally, let $A$ be the set of all configurations $\omega$ such that $N(\omega) = \infty$. We claim that $\mathbb{P}(A) = 0$. Assume $\mathbb{P}(A) > 0$ for the sake of contradiction. For $\omega \in A$ and $x > 0$, the conditions $\omega \in B^{+}(x)$ and $\omega' \notin B^{+}(x)$ imply the existence of a path $\partial\Lambda_n \to (x-1/2, y)$ in the original NE-quadrant of $\mathbb{Z}^2$ for some $y > 0$. Indeed, since $\omega' \notin B^{+}(x)$, there must be a NE-path in the original lattice that blocks $(x, 1/2)$ from the $y$-axis in the dual (all other configurations would have probability zero). On the other hand, it can emanate only at the boundary of $\Lambda_n$, because otherwise the outwards orientation in $\Lambda_n$ would have no effect on $B^{+}(x)$. But since $N(\omega) = \infty$, we conclude that there must be arbitrarily long NE-paths in the original lattice, hence there exists an infinitely long one, implying $\theta_{NE}(p) \geq \mathbb{P}(A) > 0$ and contradicting Lemma~\ref{lem:nodual}. Now, defining $N_m(\omega)$ as above but counting only points $(x, 1/2)$ with $x > m$, we have $$\lim_{m \to \infty} \mathbb{P}(\{\omega \mid N_m(\omega) > 0\}) = 0$$ and thus $$ \liminf_{x \to \infty} \mathbb{P}\{ B^{+}_n(x) \} = \liminf_{x \to \infty} \mathbb{P}\{ B^{+}(x) \} \geq \theta. $$ \end{proof} \begin{corr} \label{corr:q} Consider the $H$-model with edge probability $p$. Suppose $\theta = \theta_{NE}(1-p) > 0$. Then, for any $n, d > 0$ there exist $0 < N < M_0 < M$ such that $M_0 - N > d$ and $$\mathbb{P}(B^{+}_{M, n}(x)) > \frac{\theta}{2}$$ for each $x \in [N, M_0]$. 
\end{corr} \begin{proof} According to Lemma~\ref{lem:limit} we can pick $N$ such that $\mathbb{P}\{B^{+}(x) \mid \Lambda_n \, \mathrm{is\,\,blocked} \,\} > 2\theta/3$ whenever $x > N$. As $B^{+}_{n}(x) = \cup_{m > n}B^{+}_{m,n}(x)$ by definition, it remains to take $M_0 = N + d + 1$ and $M$ large enough to fulfill the desired conditions. \end{proof} Now we iterate Corollary~\ref{corr:q} to extend the directed path in the following way. Consider the event $B^{O}(A)$ that there exists a directed path \begin{equation} \label{eq:path} (A, 1/2) \to (1/2, B) \to (-C, -1/2) \to (1/2, -D) \to (E, 1/2) \end{equation} for some $B, C, D, E > 0$, where each part of the path, apart from the axis crossings, lies inside a single dual quadrant. See Figure~\ref{fig:circuits}. \begin{lem} \label{lem:extend} For each $N > 0$ there exists $M > N$ such that $$\mathbb{P}\{B^{O}(A) \,\,\, \mathrm{in} \,\,\, \Lambda_M \setminus \Lambda_N\} > \left(\frac{\theta(1-p)}{2}\right)^4$$ for some $A \in [N, M]$. \end{lem} \begin{proof} We may choose $M_1 > M_0 > N_1 > N > 0$ such that $\mathbb{P}\{B^{+}_{M_1, N}(x)\} > \frac{\theta}{2}$ for all $x \in (N_1, M_0)$. Thanks to Corollary~\ref{corr:q}, we then pick $M'_0, N_2$ (enlarging $M_0$ and, subsequently, $M_1$ if necessary) with $M_0 > M'_0 > N_2 > N_1 > 0$ such that $\mathbb{P}\{B^{+}_{M_0, N_1}(x)\} > \frac{\theta}{2}$ whenever $x \in (N_2, M'_0)$. This guarantees that the probability of a directed path $(x, 1/2) \to (1/2, B) \to (-C, 1/2)$ is greater than $\theta^2(1-p)/4$. 
Indeed, consider three events for any $x \in (N_2, M'_0)$: \begin{enumerate} \item There exists a directed path $(x, 1/2) \to (1/2, B)$ in $\Lambda_{M_0} \setminus \Lambda_{N_1}$ \item The axis-crossing edge has direction $(1/2, B) \to (-1/2, B)$ \item There exists a directed path $(-1/2, B) \to (-C, 1/2)$ in $\Lambda_{M_1} \setminus \Lambda_{N}$ \end{enumerate} By construction, $M_0 > B > N_1$, and these three events are independent since they depend on disjoint edge sets. Thus, the probability that they occur simultaneously is greater than $\theta^2(1-p)/4$. Repeating this consideration twice, we get the claim of the lemma. Note that here the rotational symmetry of the $H$-model comes into play: turning by $\frac{\pi}{2}$ each time, we are able to construct the almost closed path with probability bounded away from zero. \end{proof} The following lemma, which asserts that the probability of a closed dual path is also bounded away from zero, is crucial. For $A > 0$, denote by $B^{O}(A, A)$ the event that there exists a closed directed circuit in $\mathbb{Z}^{2u}_d$ of the form (\ref{eq:path}) starting and finishing at $(A, 1/2)$. We now claim \begin{lem} \label{lm:main} For each $N > 0$ there exists $M > N$ such that $$\mathbb{P}\{B^{O}(A, A) \,\,\, \mathrm{in} \,\,\, \Lambda_M \setminus \Lambda_N\} > \frac{1}{9}\left(\frac{\theta(1-p)}{2}\right)^8$$ for some $A \in [N, M]$. \end{lem} \begin{proof} Let us pick $M, A > 0$ given by Lemma~\ref{lem:extend}, such that $$ \mathbb{P}\{ B^{O}(A)\,\,\, \mathrm{in} \,\,\, \Lambda_M \setminus \Lambda_N \} > \left(\frac{\theta(1-p)}{2}\right)^4, $$ and fix the point $A$. Note that if there is at least one $A \to B \to C \to D \to E$ path, the inner-most and outer-most paths are unique and well-defined. 
Among all such paths $(A, 1/2) \to (1/2, B) \to (-C, -1/2) \to (1/2, -D) \to (E, 1/2)$ we choose the inner-most path $P_{\mathrm{in}}$ and the outer-most path $P_{\mathrm{out}}$, denoting their endpoints by $E_{\mathrm{in}}$ and $E_{\mathrm{out}}$ respectively, see Figure~\ref{fig:circuits}. Since the inner-most path always lies inside the outer-most one (though they may touch each other or even coincide), conditioning on the existence of at least one $A \to B \to C \to D \to E$ path, at least one of three events must occur: $E_{\mathrm{in}} < A$, $E_{\mathrm{out}} > A$ or $B^{O}(A, A)$, whence $$ \mathbb{P}\{E_{\mathrm{in}} < A\} + \mathbb{P}\{E_{\mathrm{out}} > A\} + \mathbb{P}(B^{O}(A, A)) \geq \left(\frac{\theta(1-p)}{2}\right)^4. $$ If $$ \mathbb{P}(B^{O}(A, A)) \geq \frac{1}{3}\left(\frac{\theta(1-p)}{2}\right)^4, $$ we are done, so we suppose $$ \mathbb{P}\{E_{\mathrm{in}} < A\} \geq \frac{1}{3}\left(\frac{\theta(1-p)}{2}\right)^4 $$ without loss of generality. Conditioning on the states of the enclosed edges (the dashed region on the left part of Figure~\ref{fig:circuits}), we ensure that the enclosing path is indeed inner-most. Now, recall the other dual lattice $\mathbb{Z}^{2d}_d$, directed opposite to $\mathbb{Z}^{2u}_d$. By symmetry, the probability of a clockwise oriented path $(A, -1/2) \to (-1/2, -B') \to (-C', -1/2) \to (-1/2, D') \to (E', 1/2)$ such that $E' < A$ is equal to $\mathbb{P}\{E_{\mathrm{in}} < A\}$, and any such path must cross $(A, 1/2) \to (1/2, B_{\mathrm{in}}) \to (-C_{\mathrm{in}}, -1/2) \to (1/2, -D_{\mathrm{in}}) \to (E_{\mathrm{in}}, 1/2)$ whenever $E_{\mathrm{in}} < A$. The states of the edges enclosed by the counter-clockwise path are independent of the existence of the clockwise path up to the first crossing point with the latter, and such a crossing produces a closed circuit in $\mathbb{Z}^{2u}_d$ (and, equivalently, in $\mathbb{Z}^{2d}_d$). 
Thus, $$ \mathbb{P}(B^{O}(A, A)) \geq \mathbb{P}\{\mathrm{two\,paths\,exist}, E_{\mathrm{in}} < A, E'_{\mathrm{in}} < A\} \geq \frac{1}{9}\left(\frac{\theta(1-p)}{2}\right)^8. $$ The case $$ \mathbb{P}\{E_{\mathrm{out}} > A\} \geq \frac{1}{3}\left(\frac{\theta(1-p)}{2}\right)^4 $$ is completely analogous, but we condition on the states of the outer edges (see the right-hand side of Figure \ref{fig:circuits}). \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{inout.pdf} \caption{At least one of three events must occur: $E_{\mathrm{in}} < A$ (on the left), $E_{\mathrm{out}} > A$ (on the right) or $B^{O}(A, A)$ (closed circuit). In the first and second cases we condition on the states of edges in the dashed region to apply a symmetry argument.} \label{fig:circuits} \end{figure*} To finish the proof of Theorem~\ref{prop:main}, it simply remains to set up an infinite collection of frames provided by Lemma~\ref{lm:main} and apply the Borel-Cantelli lemma. \end{proof} \section{Higher dimensions} In light of Theorem~\ref{prop:main} it is reasonable to guess that both the $H$- and Grimmett models are equivalent to percolation in the NE-quadrant. Indeed, suppose, say, $\theta_G(p) > 0$ and $p > 0$. According to Lemma~\ref{lem:nodual}, either $\theta_{NE}(p) = 0$ or $\theta_{NE}(1-p) = 0$ (or maybe both). Analogously, due to the self-duality of the NW and SE quadrants of the Grimmett model for all $p \geq 0$, it is easy to see that the percolation cluster restricted to either of these quadrants is almost surely finite. So, it is easy to believe that the infinite part of the percolation cluster (provided it is infinite) stays in the NE quadrant for $p > 1/2$, but we were unable to prove this. The $d$-dimensional $H$- and Grimmett models differ significantly for $d \geq 3$: whereas the $H$-model probably remains equivalent to percolation in the quadrant, it is not difficult to show that $\theta^d_G(p) > 0$ for all $p$ when $d \geq 3$. 
In this section we will prove an even stronger result, namely that the Grimmett model percolates in any 3-dimensional slab of height at least three, as already announced in Theorem \ref{prop:dGrimmett}. On the other hand, the standard path-counting argument implies that $\theta^d_H(p) = 0$ in all dimensions for sufficiently small $p > 0$ (which, of course, depends on $d$). The idea of the proof of Theorem \ref{prop:dGrimmett} is that one can consider oriented spatial paths of the form $$ (x, y, 0) \to (x, y, 1) \to (x+1, y, 1) \to (x+1, y, 0) $$ as an additional arc $(x, y, 0) \to (x+1, y, 0)$ in the lattice $\mathbb{Z}^2 \times \{0\}$ and then apply the following theorem by Grimmett \cite{Gr2}. \begin{thm} {\rm (\textbf{Grimmett}, \cite{Gr2})} \label{thm:Gr2} Consider the following independent process on $\mathbb{Z}^2$ with parameters $a$ and $b$: rightward and leftward (respectively, upward and downward) arcs are placed independently between each pair of horizontal (respectively, vertical) neighbors. The probability of each upward or rightward arc being placed is $a$, and the probability of each downward or leftward arc being placed is $b$. If $a + b > 1$, then the independent process with parameters $a, b$ contains an infinite oriented self-avoiding path from $0$ with strictly positive probability. \end{thm} In the same paper \cite{Gr2} it is shown that the Grimmett model is equivalent to the independent process with parameters $p$ and $1-p$, which implies that, had the additional arcs introduced above been placed independently, the process would be supercritical. Thus, the main technical difficulty to overcome is the dependence between such paths for neighboring vertices in $\mathbb{Z}^2 \times \{0\}$. To handle it, we will use Lemma 1.1 from \cite{LSS} to ``bound'' the dependent measure by a product measure from below. 
\begin{lem} \label{lem:LSS} Suppose that $(X_s)_{s \in S}$ is a family of $\{0, 1\}$-valued random variables, indexed by a countable set $S$, with joint law $\nu$. Suppose $S$ is totally ordered and that, for any finite subset $s_1 < s_2 < \ldots < s_j < s_{j+1}$ of $S$ and any choice of $\epsilon_1, \epsilon_2, \ldots, \epsilon_j \in \{0, 1\}$ with $\mathbb{P}(X_{s_1} = \epsilon_1, \ldots, X_{s_j} = \epsilon_j) > 0$, \begin{equation} \label{ineq:marginal} \mathbb{P}(X_{s_{j+1}} = 1 \mid X_{s_1} = \epsilon_1, \ldots, X_{s_j} = \epsilon_j) \geq \rho. \end{equation} Then $\nu$ stochastically dominates $\pi^{S}_{\rho}$, the product measure with parameter $\rho$. \end{lem} \begin{proof} \textbf{(of Theorem \ref{prop:dGrimmett})} We may assume that the original Bernoulli percolation has parameter $p \in [1/2, \vec{p}_c]$, due to symmetry and the obvious coupling with 2-dimensional directed percolation with critical value $\vec{p}_c$. Color the vertices of $\mathbb{Z}^2 \times \{0\}$ in a chessboard fashion, so that a vertex $(x, y, 0)$ is black if and only if $x + y$ is even. For a given black vertex $(x, y, 0)$ we consider oriented paths \begin{equation} \label{eq:path1} (x, y, 0) \to (x, y, 1) \to (x \pm 1, y, 1) \to (x \pm 1, y, 0) \end{equation} and \begin{equation} \label{eq:path2} (x, y, 0) \to (x, y, 1) \to (x, y \pm 1 , 1) \to (x, y \pm 1, 0). \end{equation} For white vertices the construction is completely similar, but all oriented paths go through the plane $\mathbb{Z}^2 \times \{-1\}$. To make the distributions for white and black vertices identical and homogeneous, we will consider a slightly different percolation model in which each edge that is not in $\mathbb{Z}^2 \times \{0\}$ has the same probability $1-p \leq p$ of being oriented in any given direction. 
This can be achieved by coupling in the following way: sample independent random variables uniformly distributed on $[0,1]$, one for each edge in $\mathbb{Z}^2 \times \{-1, 0, 1\}$ outside the plane $\mathbb{Z}^2 \times \{0\}$. To make the orientations distributed according to the original law, we assign the rightwards (resp. upwards) orientation to the $i$th edge if $Y_i > 1-p$ and the leftwards (resp. downwards) one otherwise. If we instead assign rightwards (resp. upwards) only when $Y_i \in [1-p, 2-2p]$, we end up with the desired model, in which all orientations have the same probability and which is dominated by the original one. From now on we can claim that each auxiliary path is present with probability $(1-p)^3$. Let us fix the point $(x, y, 0)$ for a moment and write $A_i$ for the event that the $i$-th oriented path is present, where the paths of the form (\ref{eq:path1}) and (\ref{eq:path2}) emanating from $(x, y, 0)$ have been ordered in some way. We observe that \begin{equation} \mathbb{P}\left(\bigcap_{i \in I} A_i\right) \geq \prod_{i \in I} \mathbb{P}(A_i) \label{eq:positiveness} \end{equation} for any (finite) index set $I$. On the other hand, each event $A_i$ may be seen as an additional arc in $\mathbb{Z}^2 \times \{0\}$ oriented outwards from $(x, y, 0)$, and we end up with the probability measure $\nu$ on $\mathbb{Z}^2 \times \{0\}$ that corresponds to the set of arcs on $\mathbb{Z}^2 \times \{0\}$ enriched in this way. Let us order the set of all possible additional arcs in $S = \mathbb{Z}^2 \times \{0\}$ in some way, say, alphabetically. Note that, for each unoriented edge, there are two arcs directed in opposite ways, and we consider them separately. Given the ordered countable set of arcs, we assign a random variable $X_{s_i}$ with $X_{s_i} = 1$ if the arc $s_i$ has been added during the enrichment process described above and zero otherwise. 
Obviously, the random variables $X_{s_i}$ are not independent, but we are almost in the setting of Lemma \ref{lem:LSS}, and it remains to show that inequality (\ref{ineq:marginal}) holds for some $\rho$. Let us fix some $X_{s_i}$. First, we note that $X_{s_i}$ depends only on arcs that either emanate from or end at the same vertex as $s_i$. Thus there are only six other variables $X_{s_1}, \ldots, X_{s_6}$ upon which it depends. Moreover, due to the positive association (\ref{eq:positiveness}), we have $$ \mathbb{P}(X_{s_i} = 1 \mid X_{s_1} = \epsilon_1, \ldots, X_{s_j} = \epsilon_j) \geq \mathbb{P}(X_{s_i}=1 \mid X_{s_1} = 0, \ldots, X_{s_6} = 0), $$ and it remains to bound the last probability from below. Without loss of generality we may assume that $s_i$ is the arc $(x, y, 0) \to (x+1, y, 0)$. Conditionally on $X_{s_k} = 0$, $k = 1,\ldots,6$, each of the probabilities $p_1, p_2$ that the arcs $(x, y, 0) \to (x, y, 1)$ and $(x+1, y, 1) \to (x+1, y, 0)$ are present admits the lower bound $(1-p)p^3$, because the horizontal arcs lying in the planes $\mathbb{Z}^2 \times \{\pm1\}$ are independent. Hence we can take $\rho = (1-p)^3p^6$, which is bounded away from zero since $p \in [1/2, \vec{p}_c]$. Thus, Lemma \ref{lem:LSS} applies and $\nu$ dominates the product measure $\pi_\rho$. But the process with the product measure $\pi_\rho$ corresponds to adding a positive density of additional arcs to $\mathbb{Z}^2 \times \{0\}$ independently at random, and we immediately arrive at the setting of Theorem \ref{thm:Gr2}. Thus, the original percolation process on $\mathbb{Z}^2 \times \{-1, 0, 1\}$ is supercritical. \end{proof} \section{Conclusion} Despite their seemingly simple formulations, both Grimmett's and the $H$-model appear to be difficult to analyze, mainly because of the absence of Harris-FKG-type correlation inequalities. 
The main difficulty to overcome is that any reasonable event, such as connectivity, defined in the random-orientation model is not increasing, in contrast with usual percolation models. Note that Reimer's inequality still holds, but it is usually less fruitful and more difficult to apply. It looks probable that further progress on the models in question requires substantially new ideas, or at least considerable refinements of results in classical percolation. In the proof of Theorem \ref{prop:main} we have shown how purely geometrical considerations based on rotational symmetry together with self-duality can be used even without joining paths by any kind of correlation inequalities. However, the conjecture that, for example, percolation in Grimmett's model implies percolation in the NE quadrant subsumes Harris's theorem (that there is no bond percolation at $\frac{1}{2}$ in $\mathbb{Z}^2$), and thus it is probably very non-trivial to prove. The crux of Theorem \ref{prop:dGrimmett} consists of a few observations. First, as shown by Grimmett, the random-orientation process is equivalent to the independent process in which oriented arcs are placed independently. On the other hand, the process in which leftwards (resp. downwards) and rightwards (resp. upwards) orientations are independent and present with probabilities $p$ and $q$ respectively dominates the same process with parameters $p' \leq p$ and $q' \leq q$. Thus, it is monotone with respect to $p$ and $q$, and most of the classical results apply, for example, Menshikov's exponential decay theorem. Second, in the same paper \cite{Gr2}, Grimmett shows how self-duality (again without any correlation inequalities) together with exponential decay implies criticality (or, maybe, supercriticality) of the random-orientation model.
Finally, in the proof of Theorem \ref{prop:dGrimmett} we use the general domination result to show that additional paths introduced by two copies of $\mathbb{Z}^2$, namely $\mathbb{Z}^2 \times \{-1\}$ and $\mathbb{Z}^2 \times \{1\}$, have positive density, and thus the resulting process is supercritical. Again, what we actually prove is that Grimmett's model in $\mathbb{Z}^2 \times \{-1, 0, 1\}$ dominates the independent process in $\mathbb{Z}^2$ with some parameters $p'$ and $q'$ such that $p' + q' > 1$. \section{Acknowledgments} The author is very grateful to his supervisor Professor Peter Hegarty for proposing the question, for helpful discussions, and for reviewing drafts over and over again. The author also deeply thanks Professor Jeff Steif for very careful proof-reading and for pointing out a mistake in an earlier version. \begin{thebibliography}{99} \addcontentsline{toc}{chapter}{Bibliography} \bibitem[BBS]{BBS} P.\ Balister, B.\ Bollob\'as, A.\ Stacey, Improved upper bounds for the critical probability of oriented percolation, Random Structures Algorithms 5 (1994), 573-589 \bibitem[Gr1]{Gr1} G.\ Grimmett, Percolation, Springer-Verlag, Berlin, 1989, 1st ed. \bibitem[Gr2]{Gr2} G.\ Grimmett, Infinite paths in randomly oriented lattices, Random Structures Algorithms 18 (2001), 257-266 \bibitem[Harr]{Harr} T.\ E.\ Harris, A lower bound for the critical probability in a certain percolation process, Math. Proc. Cambridge Philos. Soc. 56 (1960), 13-20 \bibitem[Heg1]{Heg1} \url{http://mathoverflow.net/questions/82369/a-percolation-problem/82718} \bibitem[Hugh]{Hugh} B.\ D.\ Hughes, Random walks and random environments, Volume 2: Random Environments, Oxford University Press, Oxford, England, 1995. \bibitem[Lin1]{Lin1} S.\ Linusson, A note on correlations in randomly oriented graphs, arXiv:0905.2881v2 \bibitem[LSS]{LSS} T.\ M.\ Liggett, R.\ H.\ Schonmann, A.\ M.\ Stacey, Domination by product measures, Ann. Probab.
25 (1997), 71-95 \bibitem[Xia1]{Xia1} W.\ Xianyuan, On the random-oriented percolation, Acta Math. Sci. Ser. B Engl. Ed. 21 (2001), 265-274 \end{thebibliography} \end{document}
\begin{document} \title{On universal subspaces for Lie groups} \author{Saurav Bhaumik and Arunava Mandal} \maketitle \begin{abstract} Let $U$ be a finite dimensional vector space over $\mathbb R$ or $\mathbb C$, and let $\rho:G\to GL(U)$ be a representation of a connected Lie group $G$. A linear subspace $V\subset U$ is called universal if every orbit of $G$ meets $V$. We study universal subspaces for Lie groups, especially compact Lie groups. Jinpeng and Dokovi\'{c} approached universality for compact groups through a certain topological obstruction. They showed that the non-vanishing of the obstruction class is sufficient for the universality of $V$, and asked whether it is also necessary under certain conditions. In this article, we show that the answer to the question is negative in general, but we discuss some important situations where the answer is positive. We show that if $G$ is solvable and $\rho:G\to GL(U)$ is a complex representation, then the only universal complex subspace is $U$ itself. We also investigate the question of universality for a Levi subgroup. \end{abstract} {\it Keywords}: {Compact Lie groups, maximal rank Lie subalgebras, Chern class, Euler class, Euler characteristic.} {\it Subjclass} [2010] {Primary: 22E46, 57R20, Secondary: 22C05, 57T15} \section{Introduction} Let $G$ be a connected Lie group and let $\rho:G\to GL(U)$ be a finite dimensional (real or complex) representation of $G$. A linear subspace $V\subset U$ is called \emph{universal in $U$} (Jinpeng and Dokovi\'{c} \cite{A-D1}) if for every $u\in U$, the $G$-orbit through $u$ intersects $V$, i.e., $\rho(g)(u)\in V$ for some $g\in G$. A classical example is Schur's triangularization theorem, which says that the complex vector space of upper triangular matrices is universal for the complexified adjoint representation of $U(n)$.
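Schur's triangularization can be illustrated numerically: the $U(2)$-conjugation orbit of any complex $2\times 2$ matrix meets the subspace of upper triangular matrices. The following sketch (the helper `triangularize_2x2` is ours, not from the paper) builds a unitary conjugator directly from a unit eigenvector.

```python
# Illustration of Schur's triangularization in the 2x2 case: find a
# point of the U(2)-orbit of A inside the upper-triangular subspace.
import numpy as np

def triangularize_2x2(A):
    """Return a unitary U with U^H A U upper triangular (2x2 only)."""
    _, vecs = np.linalg.eig(A)
    v = vecs[:, 0] / np.linalg.norm(vecs[:, 0])   # unit eigenvector
    # Explicit orthonormal complement of v in C^2.
    u = np.array([-np.conj(v[1]), np.conj(v[0])])
    return np.column_stack([v, u])                # unitary by construction

A = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=complex)
U = triangularize_2x2(A)
T = U.conj().T @ A @ U
print(abs(T[1, 0]) < 1e-10)   # below-diagonal entry vanishes: True
```

Since the first column of $U$ is an eigenvector of $A$, the first column of $U^{H} A U$ is a multiple of $e_1$, which forces the triangular shape.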
For the complexified adjoint representation of a semisimple compact connected Lie group $G$ on its complexified Lie algebra $\mathfrak g\otimes_{\mathbb R} \mathbb C$ (where $\mathfrak g$ is the Lie algebra of $G$), the sum of the root spaces is universal. An important class of examples of universal subspaces arises from the work of Jorge Galindo, Pierre de la Harpe, and Thierry Vust \cite{G-H-T}, who investigated how the irreducibility of a representation of a connected complex Lie group forces certain other geometrical properties of the orbits. They showed that any complex hyperplane in a finite-dimensional complex irreducible representation of a reductive connected complex Lie group is universal. Gichev \cite{G} showed that the same is true for compact connected Lie groups. For more examples of universal subspaces, we refer to \cite{A-D1}. The availability of a large number of examples of universal subspaces, and their applications in matrix theory (see \cite{A-D1}, \cite{A-D2}, \cite{D-T}, and the references cited therein), is one of the primary motivations to study this topic. One of the main themes in \cite{A-D1} is to find necessary and sufficient conditions for a linear subspace to be universal, in terms of certain topological obstructions. They proved that the nonvanishing of that obstruction class is sufficient for the universality of the subspace in general. They posed a question (on p. 19 of \cite{A-D1}) whether the nonvanishing of the obstruction class in a certain specific situation is necessary for the subspace to be universal. In our note we provide an example (rather, a class of examples) to show that the answer is negative in general. However, we discuss some important situations where the answer is indeed positive.
A key observation is that if $G$ is a connected Lie group and if $\mathfrak h\subset \mathfrak g$ is a Lie subalgebra that is universal for the adjoint representation, then $\mathfrak h$ is the Lie algebra of a \emph{closed} subgroup $H\subset G$ (Proposition \ref{universal-closed}). Apart from this, we investigate universality for possibly noncompact solvable Lie groups and for Levi subgroups of connected Lie groups. We recall from \cite{A-D1} the construction of the topological obstruction. Given a representation $\rho:G\to GL(U)$ and a linear subspace $V\subset U$, let $G_V$ be the closed subgroup consisting of those $g\in G$ such that $\rho(g) (V)\subset V$. Consider $W=U/V$. Then $U$, $V$ and $W$ are representations of $G_V$. Since $G\to G/G_V=:M$ is a principal $G_V$-bundle, there are three $G$-equivariant vector bundles $E_U,E_V$ and $E_W$ associated to $U,V$ and $W$, respectively. There is a $G$-equivariant isomorphism of vector bundles $E_U\cong M\times U: [(g,u)]\mapsto ([g],gu)$. Every vector $u\in U$, viewed as a constant section of $E_U$, induces a section $s_u$ of $E_W=E_U/E_V$. Then $V$ is universal if and only if for every $u\in U$, the section $s_u$ has a zero (cf. \cite{A-D1}). If $E_W$ is orientable, then it has an Euler class $e(E_W)$. If this Euler class is nonzero, there is no nowhere vanishing section of $E_W$, so $V$ is universal. In case $U$ is a complex representation and $V$ is a complex linear subspace, $E_W$ is a complex vector bundle, and its top Chern class is the same as the Euler class of the underlying real vector bundle. Let $T$ be a maximal torus of $G$. In case $V$ is $T$-invariant, or equivalently $T\subset G_V$, consider the vector bundle $E_W$ over $G/T$. If $V$ is universal, then ${\rm dim}_\mathbb R V\geq {\rm dim}_\mathbb R U - {\rm dim}_\mathbb R G/T$. A $T$-invariant subspace $V$ is said to have \emph{optimal dimension} if ${\rm dim}_\mathbb R V= {\rm dim}_\mathbb R U - {\rm dim}_\mathbb R G/T$.
In this case, the Euler class of $E_W$ over $G/T$ lives in the top-dimensional cohomology, hence gives rise to a number $C_V=\langle e(E_W),[G/T]\rangle$ via Poincar\'e duality, where $[G/T]$ is the fundamental class of $G/T$. If $C_V$ is nonzero then $V$ is universal in $U$ (Theorem 4.2 of \cite{A-D1}). Since there is a description of the cohomology $H^*(G/T,\mathbb Z)$, in many cases $C_V$ can be calculated and shown to be nonzero, hence in those cases $V$ is universal (\S 5 of \cite{A-D1}). The converse holds only under an additional condition (Theorem 4.4 of \cite{A-D1}) and fails in general (\S 6 of \cite{A-D1}). If $V$ is universal, then ${\rm dim}_\mathbb R V\geq {\rm dim}_\mathbb R U - {\rm dim}_\mathbb R G/G_V$, where only real dimensions are taken (Lemma 4.1 of \cite{A-D1}). Now suppose \begin{equation}\label{dimension}{\rm rank}_\mathbb R (W)={\rm dim}_\mathbb R(G/G_V)=2r.\end{equation} Suppose $E_W$ is orientable. Then $E_W$ has a nowhere vanishing section if and only if the Euler class $e(E_W)\in H^{2r}(G/G_V,\mathbb Z)$ is zero (Corollary 14.4 of \cite{B}). Jinpeng and Dokovi\'{c} considered an example where $V$ is universal, and the Euler class of $E_W$ vanishes over $G/T$ but not over $G/G_V$. They asked the following question in \cite{A-D1}. \begin{question}\label{question} Suppose ${\rm rank}_\mathbb R(W)={\rm dim}(M)$ and $T\subset G_V$. Does the universality of $V$ imply that the obstruction class of $E_W$ over $G/G_V$, for the existence of nowhere vanishing sections, is nonzero? \end{question} The two conditions on the rank and the $T$-invariance of $V$ are necessary: they give examples where, when either condition is violated, $V$ is universal and yet the relevant obstruction class of $E_W$ vanishes (cf. \S 6 of \cite{A-D1}). In this context, we investigate the topic further and explore the relationship between certain topological obstructions and universality.
We show that, in general, the answer to the above question is negative (see \S 3). When the rank of the vector bundle is equal to the dimension of the base, the Euler class is the final obstruction to the existence of a nowhere vanishing section. We give an example where $V$ is universal and the two conditions in the question hold, yet the Euler class of $E_W$ over $G/G_V$ vanishes. However, we show that for the following three important classes of examples the universality of $V$ is equivalent to the nonvanishing of the Euler class. If $G$ is a complex connected Lie group, and $\rho:G\to GL(U)$ is a holomorphic representation of $G$ such that $G/G_V$ is compact, then under the assumption (\ref{dimension}), $V$ is universal if and only if the top Chern class of $E_W$ is nonzero (Theorem \ref{holomorphic-case-1}). Let $H$ be a closed Lie subgroup of a compact connected Lie group $G$, and denote the Lie algebra of $H$ by $\mathfrak h$. Consider $U=\mathfrak g$, $V=\mathfrak h$, and $W=\mathfrak g/\mathfrak h$. Then the bundle $E_W$ is the tangent bundle $T(G/H)$. We prove, using a result of Hopf and Samelson (cf. \cite{H-S}), that the universality of $\mathfrak h$ is equivalent to the nonvanishing of the obstruction class of $E_W$ (Theorem \ref{tangent-bundle-1}). An analogous result holds for the complexified adjoint representation. We prove that if $\mathfrak h$ is a complex Lie subalgebra of the complexified Lie algebra $\mathfrak g_\mathbb C$, then the following three statements are equivalent (Theorem \ref{complexified-1}): (i) the top Chern class of the associated bundle $E_W$ is nonzero, (ii) $\mathfrak h$ contains a Borel subalgebra, and (iii) $\mathfrak h$ is universal in $\mathfrak g_\mathbb C$ under the complexified adjoint representation. This is in some sense a partial converse to Schur's triangularization.
Then we study all the irreducible representations of $SU(2)$, and consider all the $T$-invariant subspaces $V$ which satisfy the dimension requirement (\ref{dimension}). We give a complete description of these subspaces, show that they are all universal, and give a necessary and sufficient condition for the Euler class of $E_W$ to vanish (Proposition \ref{SU(2)}). A byproduct of this analysis is that we get an infinite family of counterexamples to Question \ref{question}. Let $G$ be a connected solvable Lie group. We prove that if $\rho:G\to GL(U)$ is a complex linear representation, then the only complex linear subspace $V\subset U$ that is universal in $U$ is $U$ itself (Proposition \ref{solvable}). Let $G$ be a compact connected Lie group, $\rho:G\to GL(U)$ a complex linear representation, and $V\subset U$ a complex linear universal subspace. We show that if $S\subset G$ is a Levi subgroup, then $V$ is universal for $S$ also (Proposition \ref{compact-levi}). But the same is no longer true if we take real representations of $G$ or if $G$ is not compact (Remark \ref{solv-remark}). \section{Some important cases of universal subspaces} In this section, we discuss three important classes of examples where universality is equivalent to the nonvanishing of the obstruction class of a certain vector bundle. Let $p:E\to M$ be a smooth real oriented vector bundle of rank $r$ over a base space $M$ which is a CW-complex. Then $E$ has an Euler class $e(E)\in H^r(M,\mathbb Z)$. If the rank $r$ is equal to the dimension of the base $M$, then $E$ admits a nowhere vanishing section if and only if the Euler class vanishes (VII Corollary 14.4 on p. 514 of Bredon \cite{B}). If $E$ is a complex vector bundle of rank $n$, then the top Chern class $c_n(E)\in H^{2n}(M,\mathbb Z)$ is the same as the Euler class of the underlying real vector bundle. \subsection{Holomorphic case} Suppose $G$ is a complex Lie group (not necessarily compact).
Let $U$ be a complex vector space and let $\rho:G\to GL(U)$ be a holomorphic representation. Let $V$ be a complex linear subspace of $U$, and let $G_V=\{g\in G~|~\rho(g)(v)\in V,~\forall v\in V\}$. Then $G_V$ is a closed complex Lie subgroup and the quotient $G/G_V$ is a complex manifold. We assume now that $G/G_V$ is compact and that the dimension requirement (\ref{dimension}) holds. The Lie subalgebra $\mathfrak g_V=\{X\in \mathfrak g:d\rho(X)(V)\subset V\}$ is the Lie algebra of the connected component of $G_V$. \begin{theorem} \label{holomorphic-case-1} Let the notation be as above. Then the following are equivalent. \begin{enumerate} \item If $e(E_W)\in H^{2r}(G/G_V,\mathbb Z)$ is the Euler class, and if $[G/G_V]\in H_{2r}(G/G_V,\mathbb Z)$ is the fundamental class, then $C_V=\langle e(E_W),[G/G_V]\rangle \ne 0.$ \item $V$ is universal. \end{enumerate} \end{theorem} \begin{proof} Our argument follows the proof of Theorem 4.4 of \cite{A-D1}. We briefly recall some of the ingredients of their proof in our context. The Lie algebra map $d\rho:\mathfrak g\to \mathfrak{gl}(U)$ is complex linear. Let $W=U/V$, and consider the projection map $\pi_W:U\to W$. Given $v\in V$, the map \[\mathfrak g\to W: X(\in \mathfrak g)\mapsto -\pi_W(d\rho(X)v)\] is zero on the subspace $\mathfrak g_V$, so it induces a complex linear map $\psi_v$ as in (4.1) of \cite{A-D1}: \[\psi_v:\mathfrak g/\mathfrak g_V\to W.\] However, in our case, if $\psi_v|_\mathbb R$ is the corresponding real linear map of the underlying real vector spaces, then its determinant is always nonnegative. Indeed, $\det(\psi_v|_\mathbb R)=|\det(\psi_v)|^2\ge 0$. Therefore, for every $u\in U$, if $x\in G/G_V$ is a zero of the section $s_u$, then ${\rm ind}_x(s_u)={\rm sgn}\det(\psi_v|_\mathbb R)\ge 0$.
We observe that the rest of the proof of Theorem 4.4 of \cite{A-D1} relies on three main assumptions: the orientability of $E_W$, the orientability and compactness of the base manifold, and the dimension requirement (\ref{dimension}). The same proof carries over to the situation where $E_W$ is complex, the base manifold $G/G_V$ is a compact complex manifold, and (\ref{dimension}) holds. \end{proof} \subsection{Universal Lie subalgebras for the adjoint representation} In this section, we examine Question \ref{question} for Lie subalgebras in the adjoint representation of a compact connected Lie group. To be able to state our result regarding the obstruction class of a bundle on the corresponding homogeneous space, we first need to observe that maximal rank subalgebras are always the Lie algebras of \emph{closed} Lie subgroups. This is an old result of Borel and de Siebenthal (cf. Theorem in \S 8 of \cite{B-S}). We give a different proof of this result based on our key observation (Proposition \ref{universal-closed}). Let $G$ be a connected Lie group (not necessarily compact) and let $\mathfrak g$ be its Lie algebra. Recall that if $\mathfrak h$ is a Lie subalgebra of $\mathfrak g$, then there is a connected Lie group $H$ and an injective smooth group homomorphism $i:H\to G$ such that $di$ is injective and $di({\rm Lie} (H))=\mathfrak h$. This is sometimes referred to as a ``virtual Lie subgroup'', because the image $i(H)$ is not necessarily closed. For example, take the torus $G=U(1)\times U(1)$, and take $\mathfrak h$ to be the real line through the point $(1,\alpha)\in \mathbb R^2={\rm Lie}(G)$, where $\alpha$ is irrational. It is well known that the image of the corresponding virtual Lie subgroup is dense in $G$. However, in Proposition \ref{universal-closed} we show that if the Lie subalgebra $\mathfrak h$ is \emph{universal} for the adjoint representation of $G$, then the Lie subgroup $H$ is necessarily closed.
We denote by $N_G(H)$ the normalizer of $H$ in $G$, and for a Lie subalgebra $\mathfrak h$ of $\mathfrak g={\rm Lie}(G)$, the normalizer of $\mathfrak h$ in $G$ is defined by $N_G(\mathfrak h):=\{g\in G~|~ {\rm Ad}_g(\mathfrak h)\subset \mathfrak h\}$. \begin{proposition}\label{universal-closed} Let $G$ be a connected Lie group and let $\mathfrak h\subset \mathfrak g$ be a universal subspace for the adjoint action of $G$. If $\mathfrak h$ is a Lie subalgebra of $\mathfrak g$, then $\mathfrak h$ is the Lie algebra of a \emph{closed} connected subgroup $H$ of $G$. Moreover, $H=N_G(\mathfrak h)^\circ=N_G(H)^\circ$. \end{proposition} \begin{proof} Note that we have a surjection $G\times \mathfrak h\to \mathfrak g:(g,X)\mapsto {\rm Ad}_g(X)$. Let $G_\mathfrak h^\circ$ be the connected component of the normalizer $N_G(\mathfrak h)$. Then the above surjection factors through the surjection $E(\mathfrak h)\to \mathfrak g$, where $E(\mathfrak h)=(G\times \mathfrak h)/G_\mathfrak h^\circ$ is the associated $G$-equivariant vector bundle on $G/G_\mathfrak h^\circ$. Since this is a surjection, by Sard's theorem (cf. Theorem 6.2 of \cite{B}), we can say that (all dimensions being real) \[\dim(G)-\dim(G_\mathfrak h^\circ)+\dim(\mathfrak h)\ge \dim(\mathfrak g)=\dim(G),\] \[{\rm or},\;\dim(\mathfrak h)\ge \dim(G_\mathfrak h^\circ).\] Since $\mathfrak h$ is a Lie subalgebra, there is a connected virtual Lie subgroup $i:H\to G$ such that $di({\rm Lie}(H))=\mathfrak h$. But $i(H)\subset G_\mathfrak h^\circ$, which means $$\dim(G_\mathfrak h^\circ)\ge \dim(H)=\dim(\mathfrak h).$$ Therefore, $\dim(H)=\dim(G_\mathfrak h^\circ)$, which means $i:H\to G_\mathfrak h^\circ$ must be surjective, as $G_\mathfrak h^\circ$ is connected. We thus have a bijective map $i$ between two connected manifolds which induces isomorphisms on all tangent spaces. Therefore $i$ is a diffeomorphism by the inverse function theorem.
Since $G_\mathfrak h$ is closed in $G$ by definition, so is its connected component $G_\mathfrak h^\circ$. \end{proof} As a corollary, we get a different proof of the following result of A. Borel and J. De Siebenthal (cf. Theorem in \S 8 of \cite{B-S}). Recall that a Lie subalgebra $\mathfrak h$ of $\mathfrak g$ is said to be of {\it maximal rank} if $\mathfrak h$ contains a maximal toral subalgebra of $\mathfrak g$. \begin{corollary}\label{max-rank-1} Let $G$ be a compact Lie group and let $\mathfrak k\subset \mathfrak g$ be a Lie subalgebra of \emph{maximal rank}. Then there is a connected \emph{closed} Lie subgroup $K\subset G$ such that $\mathfrak k$ is the Lie algebra of $K$.\end{corollary} \begin{proof} If $G$ is compact, then every maximal rank subalgebra $\mathfrak k$ of $\mathfrak g$ contains a maximal toral subalgebra. Since maximal toral subalgebras are universal in $\mathfrak g$, so is $\mathfrak k$. Therefore, by Proposition \ref{universal-closed}, $\mathfrak k$ must be the Lie algebra of a \emph{closed} subgroup $K$. \end{proof} \begin{theorem}\label{tangent-bundle-1} Let $G$ be a compact connected Lie group and $\mathfrak h\subset \mathfrak g$ any Lie subalgebra. Then the following are equivalent. \begin{enumerate} \item $\mathfrak h$ is of maximal rank. \item In the adjoint representation, $\mathfrak h$ is universal in $\mathfrak g$. \item $\mathfrak h$ is the Lie algebra of a closed connected subgroup $H$, and the Euler characteristic of $G/H$ is positive. \item $\mathfrak h$ is the Lie algebra of the connected component of $G_\mathfrak h=N_G(\mathfrak h)$. If $W=\mathfrak g/\mathfrak h$ and $E_W$ is the $G$-equivariant bundle on $G/G_\mathfrak h$ associated to $W$, then the obstruction class to having a nowhere vanishing section of $E_W$ on $G/G_\mathfrak h$ is nonzero. \end{enumerate} \end{theorem} \proof We will prove (1) $\Rightarrow$ (2) $\Rightarrow$ (1) $\Rightarrow$ (3) $\Rightarrow$ (4) $\Rightarrow$ (2).
$(1) \Rightarrow (2):$ Since $\mathfrak h$ is of maximal rank, it contains a maximal toral subalgebra. Any maximal toral subalgebra is universal for the adjoint representation. $(2) \Rightarrow (1):$ Since $\mathfrak h$ is universal, it is the Lie algebra of a closed connected Lie subgroup $H$ by Proposition \ref{universal-closed}. Let $T$ be a maximal torus of $G$ and let $a$ be a generating element for $T$. For a Lie group $G$ with Lie algebra $\mathfrak g$, let $\exp:\mathfrak g \to G$ be the exponential map. Let $a=\exp(A)$ for some $A\in \mathfrak t$. Since $\mathfrak h$ is universal, there exists $x\in G$ such that ${\rm Ad}_x A\in \mathfrak h$. Then $xax^{-1}=\exp({\rm Ad}_x A)\in \exp(\mathfrak h).$ Note that $H$ is a compact connected Lie group, hence $\exp(\mathfrak h)=H$. This implies that $x T x^{-1}\subset H$; in other words, $\mathfrak h$ is of maximal rank. $(1) \Rightarrow (3):$ By Corollary \ref{max-rank-1}, $\mathfrak h$ is the Lie algebra of a closed connected Lie subgroup $H$ of maximal rank. This is equivalent to saying that $H$ contains a maximal torus of $G$. In that case, a theorem of Hopf and Samelson \cite{H-S} says that the Euler characteristic of $G/H$ is positive. $(3) \Rightarrow (4):$ If $G/H$ is \emph{orientable} (which is always the case if $H$ is connected, by Proposition 15.10 on p. 92 of \cite{Bu}), then by Corollary 11.12 of \cite{M-S}, the Euler characteristic \[\chi(G/H)=\langle e(T(G/H)),[G/H]\rangle,\] which means the Euler class of the tangent bundle of $G/H$ is nonzero. Recall that $G_\mathfrak h=\{g\in G~|~{\rm Ad}_g v\in \mathfrak h,~\forall v\in \mathfrak h\}=N_G(\mathfrak h)$. Since $H$ is connected, $N_G(H)= N_G(\mathfrak h)$. We now prove an intermediate lemma. \begin{lemma} Let $G$ be a compact connected Lie group and let $T\subset G$ be a maximal torus. Let $H\subset G$ be a closed subgroup containing $T$. Then $N_G(H)/H$ is finite.\end{lemma} \proof If $x\in N_G(H)$, then $x^{-1} Tx\subset x^{-1} Hx=H$.
Since $T$ and $x^{-1} Tx$ are both maximal tori of $H$, there is $h\in H$ such that $x^{-1} Tx=hTh^{-1}$, which implies $xh\in N_G(T)\cap xH$. Since $T\subset H$, the entire coset $xhT$ is contained in $xH$. Since $N_G(T)/T$ is finite, there are only finitely many cosets $yT$ with $y\in N_G(T)$. Since the cosets $xH$ are pairwise disjoint or equal, and each contains such a coset $yT$, there are only finitely many cosets $xH$. $\Box$\\ We resume the proof of Theorem \ref{tangent-bundle-1}. Writing $G_\mathfrak h=N_G(\mathfrak h)$, the quotient $G_\mathfrak h/H=N_G(H)/H$ is finite. Therefore the projection $p:G/H\to G/G_\mathfrak h$ is a covering map, so the pullback of the tangent bundle below is the tangent bundle above. If the tangent bundle below had a nowhere vanishing section $\sigma$, the tangent bundle above would admit the nowhere vanishing section $p^*\sigma$, which is impossible because the Euler class of $T(G/H)$ is nonzero by the above. Consider the representation of $G_\mathfrak h$ on $W=\mathfrak g/\mathfrak h$ induced from the adjoint representation of $G$ on $\mathfrak g$. The tangent bundle $T(G/G_\mathfrak h)$ is isomorphic to the associated bundle $E_W$ on $G/G_\mathfrak h$. Therefore we have proved that if $\chi(G/H)>0$, then the obstruction class of $E_W=T(G/G_\mathfrak h)$ for the existence of a nowhere vanishing section is nonzero. $(4) \Rightarrow (2):$ As in \cite{A-D1}, for every $u\in \mathfrak g$, the constant section of $E_\mathfrak g\cong (G/G_\mathfrak h)\times \mathfrak g$ defines a section $s_u$ of the quotient bundle $E_\mathfrak g/E_\mathfrak h=E_W$. Since $s_u$ has a zero for every $u\in \mathfrak g$, $\mathfrak h$ is universal. $\Box$ \subsection{Universal Lie subalgebras for the complexified adjoint representation} In this section, we prove an analogue of Theorem \ref{tangent-bundle-1} for the complexified adjoint representation of a compact connected Lie group.
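The Hopf-Samelson positivity used in $(1)\Rightarrow(3)$ is very concrete for the full flag manifold $G/T$: its Euler characteristic equals the order of the Weyl group. A minimal sketch for $G=SU(n)$, assuming the standard Bruhat cell decomposition, so that the Poincar\'e polynomial of $SU(n)/T$ is $\prod_{k=1}^{n-1}(1+t^2+\cdots+t^{2k})$:

```python
# chi(SU(n)/T) computed from the Poincare polynomial
# prod_{k=1}^{n-1} (1 + t^2 + ... + t^(2k)), evaluated at t = 1:
# each factor contributes k + 1, so chi = n! = |Weyl group| > 0.
import math

def euler_char_full_flag(n):
    chi = 1
    for k in range(1, n):
        chi *= k + 1   # value at t = 1 of 1 + t^2 + ... + t^(2k)
    return chi

for n in range(2, 6):
    print(n, euler_char_full_flag(n), math.factorial(n))
```

In particular $\chi(G/T)>0$, which is the special case $H=T$ of the positivity invoked above.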
The complexification of a real Lie algebra $\mathfrak g$ is the complex Lie algebra $\mathfrak g_\mathbb C := \mathfrak g\otimes_{\mathbb R} \mathbb C$, where the complex Lie bracket is given by $$[X+iY, S+iT]=[X,S]-[Y,T]+i([Y,S]+[X,T])~ {\rm for}~X,~Y,~S,~T \in \mathfrak g.$$ If $\mathfrak g\subset \mathfrak{gl}(n, \mathbb R)$, then the complexification $\mathfrak g_\mathbb C$ can be identified with the Lie algebra $\mathfrak g + i\mathfrak g\subset \mathfrak{gl}(n, \mathbb C).$ For every compact connected Lie group $G$, there is a unique reductive algebraic group $G_\mathbb C$, called the \emph{complexification of $G$}, with the following properties. (i) $G$ is a maximal compact Lie subgroup of $G_\mathbb C$. (ii) The natural map ${\rm Lie}(G)\otimes_\mathbb R \mathbb C\to {\rm Lie}(G_\mathbb C)$ is an isomorphism; in other words, ${\rm Lie}(G_\mathbb C)$ is the complexification of ${\rm Lie}(G)$. (iii) Consider the category $\mathscr C$ of finite dimensional unitary representations of $G$, and the category $\mathscr C'$ of rational algebraic complex representations of $G_\mathbb C$. Then restriction to the subgroup $G$ induces an equivalence of categories $\mathscr C'\to \mathscr C$. In other words, every unitary representation $\rho:G\to GL(U)$ is the restriction of a unique rational representation $\rho_C:G_\mathbb C\to GL(U)$. (iv) If $\rho:G\to GL(U)$ is a faithful unitary representation, then $\rho_C$ is an isomorphism of $G_\mathbb C$ with the Zariski closure of $\rho(G)$ in $GL(U)$. For details, see Procesi \cite{P}. Let us now recall that a Borel subgroup of a complex reductive algebraic group is a maximal Zariski closed and connected solvable algebraic subgroup. Such a subgroup always exists for dimension reasons. Let $G$ be a compact connected Lie group and consider the complexified adjoint representation on $\mathfrak g_\mathbb C$.
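The complexified bracket above can be sanity-checked numerically for matrix Lie algebras, where $[A,B]=AB-BA$. This is only an illustration, with random real matrices standing in for $X,Y,S,T$:

```python
# Numerical check of [X+iY, S+iT] = [X,S] - [Y,T] + i([Y,S] + [X,T])
# for matrix Lie algebras, where [A, B] = AB - BA.
import numpy as np

rng = np.random.default_rng(0)
X, Y, S, T = (rng.standard_normal((3, 3)) for _ in range(4))

def br(A, B):
    return A @ B - B @ A

lhs = br(X + 1j * Y, S + 1j * T)
rhs = br(X, S) - br(Y, T) + 1j * (br(Y, S) + br(X, T))
print(np.allclose(lhs, rhs))  # True
```

Expanding $(X+iY)(S+iT)-(S+iT)(X+iY)$ and collecting real and imaginary parts gives exactly the two terms on the right-hand side.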
Let $G_\mathbb C$ be the complexification of $G$. If $B$ is a Borel subgroup of $G_\mathbb C$, then there is a maximal torus $T$ of $G$ such that $T=G\cap B$. Let $\mathfrak b={\rm Lie}(B)$. We have the following natural identifications, where the first one is a $G$-equivariant diffeomorphism and the second one is a $T$-equivariant isomorphism of real vector spaces (cf. Theorem 1, \S 7.3, p. 382 of \cite{P}). \begin{equation}\label{gmodt}G/T\cong G_\mathbb C/B, \; \;\;\mathfrak g/\mathfrak t\cong \mathfrak g_\mathbb C/\mathfrak b\end{equation} As a corollary, we recover the following well known result, which can also be seen as a special case of Theorem 5.7 of \cite{A-D1}. \begin{corollary}\label{schur}\emph{(Schur's triangularization)} Let $G$ be a compact connected Lie group with Lie algebra $\mathfrak g$, and let $\mathfrak b\subset \mathfrak g_\mathbb C$ be a Borel subalgebra. Then $\mathfrak b$ is universal in $\mathfrak g_\mathbb C$ for $G$.\end{corollary} \proof Since $\mathfrak t$ is universal in $\mathfrak g$ for the adjoint representation of $G$, the Euler class of the quotient bundle $E_W=E_\mathfrak g/E_{\mathfrak t}$ on $G/T$ is nonzero, where $W=\mathfrak g/\mathfrak t$ (Theorem \ref{tangent-bundle-1}). On the other hand, for the complexified adjoint representation of $G$ on $\mathfrak g_\mathbb C$ and the subspace $\mathfrak b$, with $W'=\mathfrak g_\mathbb C/\mathfrak b$, the quotient bundle $E_{W'}=E_{\mathfrak g_\mathbb C}/E_\mathfrak b$ is isomorphic to $E_W$ by (\ref{gmodt}). Thus the Euler class of $E_{W'}$ is also nonzero, so $\mathfrak b$ is universal in $\mathfrak g_\mathbb C$. $\Box$ \vskip 1em The following is immediate from Proposition \ref{universal-closed}.
\begin{corollary}\label{universal-closed-complex} Let $G$ be a compact connected Lie group and let $\mathfrak h\subset \mathfrak g_\mathbb C$ be a universal complex subspace for the complexified adjoint action of $G$. If $\mathfrak h$ is a complex Lie subalgebra of $\mathfrak g_\mathbb C$, then $\mathfrak h$ is the Lie algebra of a \emph{closed} connected complex Lie subgroup $H\subset G_\mathbb C$. Moreover, $H=N_{G_\mathbb C}(\mathfrak h)^\circ=N_{G_\mathbb C}(H)^\circ$. \end{corollary} Recall that a parabolic subgroup $P$ of the complex reductive algebraic group $G_\mathbb C$ is a connected closed subgroup in the Zariski topology for which the quotient space $G_\mathbb C/P$ is a projective complex algebraic variety. A subgroup $P$ is parabolic if and only if it contains some Borel subgroup of $G_\mathbb C$ (cf. Corollary in \S 11.2 of \cite{Bo}). \begin{corollary}\label{tangent-bundle-2} Let $G$ be a compact connected Lie group and $\mathfrak h$ any complex Lie subalgebra of $\mathfrak g_\mathbb C$ containing a Borel subalgebra. Let $W=\mathfrak g_\mathbb C/\mathfrak h$ and $G_\mathfrak h=N_G(\mathfrak h)$. Then \begin{enumerate} \item ${\rm dim}(G/G_\mathfrak h)={\rm rank}_\mathbb R(W|_\mathbb R)$. \item The top Chern class of the associated bundle $E_W$ on $G/G_\mathfrak h$ is nonzero. \end{enumerate} \end{corollary} \proof Suppose $\mathfrak h$ contains a Borel subalgebra $\mathfrak b$. By Schur's triangularization above, $\mathfrak h$ is universal for the complexified adjoint representation of $G$, hence also for the adjoint representation of $G_\mathbb C$. By Corollary \ref{universal-closed-complex}, $\mathfrak h$ is the Lie algebra of a \emph{closed} connected complex analytic subgroup $H\subset G_\mathbb C$, with $H=(G_\mathbb C)_\mathfrak h^\circ$.
Putting $\mathfrak h_\mathbb R=\mathfrak g\cap \mathfrak h$, $H_\mathbb R=G\cap H$, we have $\mathfrak h_\mathbb R={\rm Lie}(H_\mathbb R^\circ)$. Since the adjoint representation $G_\mathbb C\to \mathfrak{gl}(\mathfrak g_\mathbb C)$ is algebraic, the stabilizer $(G_\mathbb C)_\mathfrak h$ is an \emph{algebraic subgroup}. But then the identity component in the Zariski topology will be connected in the Euclidean topology (\cite{S} VII. 2.2 Theorem 1), so $H=(G_\mathbb C)_\mathfrak h^\circ$ must be an algebraic subgroup of $G_\mathbb C$. Therefore $H$ is a parabolic subgroup of $G_\mathbb C$, hence $H=N_{G_\mathbb C}(H)$ (Theorem 11.16 of Borel \cite{Bo}). But $H$ is connected, so $(G_\mathbb C)_\mathfrak h=N_{G_\mathbb C}(H)=H$. This means $H_\mathbb R=G\cap H=G\cap (G_\mathbb C)_\mathfrak h=G_\mathfrak h$. Now since $\mathfrak g\to \mathfrak g_\mathbb C/\mathfrak b$ is surjective, $\mathfrak g\to \mathfrak g_\mathbb C/\mathfrak h$ is also surjective, and hence we get the following isomorphism of real vector spaces, equivariant under $H_\mathbb R$.\begin{equation}\label{gmodh-1}\mathfrak g/\mathfrak h_\mathbb R\cong \mathfrak g_\mathbb C/\mathfrak h.\end{equation} We have already shown $H_\mathbb R=G_\mathfrak h$, so this proves $(1)$. Also, since $G$ acts transitively on $G_\mathbb C/B$, $G$ must act transitively on the further quotient $G_\mathbb C/H$, the stabilizer of $[H]\in G_\mathbb C/H$ being $H_\mathbb R$. Thus, we have a $G$-equivariant isomorphism \begin{equation}\label{gmodh}G/H_\mathbb R\cong G_\mathbb C/H.\end{equation} Now since $H$ contains $B$, $H_\mathbb R$ contains $T$. If $W=\mathfrak g/\mathfrak h_\mathbb R$ with the induced action of $H_\mathbb R$, then $E_W\cong T(G/H_\mathbb R)$. 
If $W'=\mathfrak g_\mathbb C/\mathfrak h$ with the induced action of $H$, then $E_{W'}\cong T(G_\mathbb C/H)$. Under (\ref{gmodh}) and (\ref{gmodh-1}), the bundle $E_W$ is isomorphic to $E_{W'}$. The Euler class of this bundle is nonzero by Theorem \ref{tangent-bundle-1} or Theorem \ref{holomorphic-case-1}. $\Box$ \vskip 1em \begin{theorem}\label{complexified-1} Let $G$ be a compact connected Lie group and let $G_\mathbb C$ be its complexification. For a complex Lie subalgebra $\mathfrak h\subset \mathfrak g_\mathbb C$, the following are equivalent. \begin{enumerate} \item $\mathfrak h$ is universal in $\mathfrak g_\mathbb C$ for the complexified adjoint action of $G$. \item $\mathfrak h$ contains a Borel subalgebra of $\mathfrak g_\mathbb C$. \item $\mathfrak h$ is the Lie algebra of a parabolic subgroup $H\subset G_\mathbb C$ and the natural map $G/(G\cap H)\to G_\mathbb C/H$ is a diffeomorphism. \item The associated vector bundle $E_W$ on $G/G_\mathfrak h$, where $W=\mathfrak g_\mathbb C/\mathfrak h$, has nonvanishing top Chern class. \end{enumerate} \end{theorem} Before we prove Theorem \ref{complexified-1} we mention the following lemma, which is obvious but useful. \begin{lemma}\label{immersion} Let $M$ be a smooth connected manifold of dimension $n$ and let $i:K\to M$ be an injective immersion, where $K$ is a compact connected manifold of dimension $n$. Then $i$ is a diffeomorphism.\end{lemma} \vskip 2em \noindent\emph{Proof of Theorem \ref{complexified-1}}: $(4)\Rightarrow (1)$ is obvious. $(3)\Rightarrow (4):$ In this case $\mathfrak h$ is parabolic. Then the implication follows from Corollary \ref{tangent-bundle-2}. $(1)\Rightarrow (3):$ Since $\mathfrak h$ is universal for $G$, the map $G\times \mathfrak h\to \mathfrak g_\mathbb C$ is surjective, which means the map $E_\mathfrak h\to \mathfrak g_\mathbb C$ is surjective, where $E_\mathfrak h=(G\times\mathfrak h)/G^\circ_\mathfrak h$. 
Now, since $\mathfrak h$ must also be universal for $G_\mathbb C$, by Corollary \ref{universal-closed-complex} (see also Proposition \ref{universal-closed}), $\mathfrak h$ is the Lie algebra of a closed complex Lie subgroup $H$ such that $H=N_{G_\mathbb C}(H)^\circ$. Since the adjoint representation $G_\mathbb C\to \mathfrak{gl}(\mathfrak g_\mathbb C)$ is algebraic, the stabilizer $(G_\mathbb C)_\mathfrak h$ is an \emph{algebraic subgroup}. But then the identity component in the Zariski topology will be connected in the Euclidean topology (\cite{S} VII. 2.2 Theorem 1), so $H=(G_\mathbb C)_\mathfrak h^\circ$ must be an algebraic subgroup of $G_\mathbb C$. Again, $G_\mathfrak h=G\cap (G_\mathbb C)_\mathfrak h=G\cap N_{G_\mathbb C}(H)$, so we have $$G_\mathfrak h^\circ= (G\cap N_{G_\mathbb C}(H)^\circ)^\circ=(G\cap H)^\circ.$$ But by surjectivity of $E_\mathfrak h\to \mathfrak g_\mathbb C$, we know that \[\dim_\mathbb R(G)-\dim_\mathbb R(G\cap H)^\circ+\dim_\mathbb R(\mathfrak h)=\dim_\mathbb R(E_\mathfrak h)\ge \dim_\mathbb R(G_\mathbb C),\] \[{\rm or},\;\dim_\mathbb R(G/G\cap H)\ge \dim_\mathbb R(G_\mathbb C/H).\] Since $G/G\cap H$ is compact and the map $G/G\cap H\to G_\mathbb C/H$ is an injective immersion, they must have the same dimension, so by Lemma \ref{immersion}, $G/G\cap H\to G_\mathbb C/H$ is a diffeomorphism, proving that $G_\mathbb C/H$ is compact. Since $H$ is an algebraic subgroup of $G_\mathbb C$, the quotient space $G_\mathbb C/H$ is quasi-projective, or in other words, it is open in an irreducible projective variety $M$. By \cite{S} VII. 2.2 Theorem 1, $M$ is connected in the Euclidean topology. Since $G_\mathbb C/H$ is compact, it is equal to $M$. This shows that $H$ is parabolic. $(3)\Rightarrow(2):$ Follows from the definition of a parabolic subgroup. 
$(2)\Rightarrow(1):$ This follows from the general Schur's triangularization (Corollary \ref{schur}). $\Box$ \section{Universality does not imply nonzero Euler class}\label{counter} First we make some preparatory observations. Consider the complexified adjoint representation of $G=SU(2)$ on $U=\mathfrak g_\mathbb C=\mathfrak{su}(2)\otimes_\mathbb R \mathbb C\cong \mathfrak{sl}(2,\mathbb C)$. Let $V=\mathfrak b$ be the standard Borel subalgebra of $\mathfrak g_\mathbb C$, that is, $\mathfrak b$ consists of the upper triangular traceless matrices in $\mathfrak{sl}(2,\mathbb C)$. Let $W=U/V$. Then $G_V=T$, the standard diagonal (maximal) torus in $SU(2)$, $G/G_V\cong S^2$, and $E_W\cong T(S^2)$. Indeed, for the adjoint representation of $G_\mathbb C=SL(2,\mathbb C)$ on its Lie algebra $U$, $(G_{\mathbb C})_V=B$, and $G/G_V\to G_\mathbb C/(G_{\mathbb C})_V$ is a $G$-equivariant isomorphism, while $E_W$ as a bundle on $G_\mathbb C/(G_{\mathbb C})_V$ is isomorphic to its tangent bundle. By Corollary 14.4 of \cite{B}, if $E$ is an orientable vector bundle on a compact manifold $M$ with ${\rm rank}(E)=\dim(M)$, then the Euler class $e(E)$ is the final obstruction class to having a nowhere vanishing section of $E$. In case $E$ is the underlying real vector bundle of a complex vector bundle $E'$, the Euler class $e(E)$ is equal to the top Chern class $c_{\rm top}(E')$. In what follows, we construct an example of a compact Lie group $G$, a complex representation $U$ and a universal complex linear subspace $V$, such that the dimension condition (\ref{dimension}) holds, but the Euler class of $E_W$ over $G/G_V$ is zero. Let $G_1=G_2=SU(2)$ and $G=G_1\times G_2$. Let $U_1=U_2=\mathfrak{sl}(2,\mathbb C)$, and consider $U=U_1\oplus U_2={\rm Lie}(G)_\mathbb C$ with the complexified adjoint action of $G$. 
Let $V_2=\left\{\begin{pmatrix} 0 & *\\ * & 0\end{pmatrix}\right\}$ be the set of zero-diagonal matrices in $U_2$, and let $V_1=\mathfrak b$ be the standard Borel subalgebra in $U_1$. Set $V=V_1\oplus V_2\subset U$, $W_i=U_i/V_i$ for $i=1,2$, and $W=U/V=W_1\oplus W_2$. Then $G_V=T\times N_G(T)$, and $V$ is universal in $U$. Now, $G/G_V\cong S^2\times \mathbb R P^2$. We will denote the two projections by $p_1,p_2$ respectively. Consider the vector bundles $E_1=E_{W_1}$ on $S^2$ and $E_2=E_{W_2}$ on $\mathbb R P^2$. Then the mod $2$ reduced Chern class of $E_2$ is the generator of $H^2(\mathbb R P^2,\mathbb Z/2)=\mathbb Z/2=H^2(\mathbb R P^2,\mathbb Z)$, as observed in the counterexample of Jinpeng and Dokovi\'{c} (\S 6, \cite{A-D1}). On the other hand, since $E_1$ is the tangent bundle of $S^2$, its Chern class is twice the generator of $H^2(S^2,\mathbb Z)$. We will show below that the top Chern class of the bundle $E_W=p_1^*E_1\oplus p_2^*E_2$ is zero. By the Whitney sum formula, $c_2(E_W)=p_1^*c_1(E_1)\cup p_2^*c_1(E_2)$, which is the image of $c_1(E_1)\otimes c_1(E_2)$ under the cross product map $$H^2(S^2,\mathbb Z)\otimes H^2(\mathbb R P^2,\mathbb Z)\to H^4(G/G_V,\mathbb Z): a\otimes b\mapsto p_1^*a\cup p_2^*b.$$ But $H^2(S^2,\mathbb Z)\otimes H^2(\mathbb R P^2,\mathbb Z)\cong\mathbb Z\otimes_\mathbb Z \mathbb Z/2\mathbb Z\cong \mathbb Z/2\mathbb Z$. Since $c_1(E_1)$ is twice the generator of $H^2(S^2,\mathbb Z)$, $c_2(E_W)=0$. \section{Universality of irreducible representations of $SU(2)$} Let $T$ be the standard torus in $SU(2)$. For each irreducible representation of $SU(2)$, we consider all the $T$-invariant complex linear subspaces $V$ which satisfy the dimension requirement (\ref{dimension}), and show that they are universal. We give a concrete description of these subspaces and give a necessary and sufficient condition for the Euler class of $E_W$ to vanish. 
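The Künneth argument above reduces to an arithmetic check in $\mathbb Z\otimes_\mathbb Z \mathbb Z/2\cong \mathbb Z/2$: only the coefficient of $c_1(E_1)$ modulo $2$ survives. The following minimal Python sketch (the function name is ours, purely illustrative) encodes just this computation:

```python
# Illustrative sketch: H^2(S^2; Z) = Z<g1>, H^2(RP^2; Z) = Z/2<g2>.
# The cross product lands in Z (tensor) Z/2 = Z/2, so the image of
# c1(E1) (x) c1(E2) depends only on the coefficient of c1(E1) mod 2.

def top_chern_mod2(c1_E1_coeff: int, c1_E2_coeff: int) -> int:
    """Coefficient of c2(E_W) = c1(E1) x c1(E2) in Z/2."""
    return (c1_E1_coeff * c1_E2_coeff) % 2

# c1(E1) = 2*g1 (tangent bundle of S^2), c1(E2) = g2 (nonzero mod 2):
assert top_chern_mod2(2, 1) == 0   # the top Chern class vanishes
# an odd multiple of g1 would instead survive:
assert top_chern_mod2(1, 1) == 1
```

The sketch makes visible why the factor of $2$ in $c_1(T(S^2))$ is the whole point of the counterexample.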
\begin{proposition} \label{SU(2)} Let $G=SU(2)$, $T\subset G$ a fixed maximal torus, and let $\rho_n:G\to GL(U_n)$ be an irreducible unitary finite dimensional complex representation of $G$ with ${\rm rank}_\mathbb C(U_n)=n+1$. \begin{enumerate} \item The only $T$-invariant complex subspaces $V\subset U_n$ such that the dimension requirement (\ref{dimension}) holds are hyperplanes, and they are universal. \item There are exactly $n+1$ choices for $V$ (and their complements $W$), and we can describe them concretely. \item The Chern class of $E_W$ over $G/G_V$ vanishes if and only if $n=4m$ and $T$ acts trivially on $W$. \end{enumerate} \end{proposition} \begin{proof} We know that $U_n$ is equivalent to the standard representation on the space of complex homogeneous polynomials in $x,y$ of degree $n$. If $V$ is a subspace such that $G_V$ contains the maximal torus, then $G_V$ is of dimension either $1$ or $3$, because the only possible dimensions of closed subgroups of $SU(2)$ are $0,1,3$. The only nontrivial case is when $G_V$ is of dimension $1$, which means $T$ is the connected component of $G_V$, i.e. $G_V=T$ or $N_G(T)$ (the connected component has to be a normal subgroup). Fix the Weyl element $w:x\mapsto y\mapsto -x$. Now since the complex subspace $W=V^\perp$ has real rank equal to the dimension of $G/G_V$, which is $2$, $W$ has to be a complex line. This implies $V$ is a hyperplane. Therefore $V$ is universal in $U_n$ by Gichev \cite{G} Corollary 1. This proves $(1)$. The only possible $T$-invariant lines are the eigenspaces of $T$ in $U_n$ for the various characters. They are precisely the complex lines containing the monomials $x^iy^{n-i}$, hence there are $n+1$ of them. Let $W$ be the complex line through the monomial $x^iy^{n-i}$. Thus we can concretely describe $V$ as the space of polynomials $f\in U_n$ such that the coefficient of $x^iy^{n-i}$ in $f$ is zero. This proves $(2)$. 
The maximal torus $T$ acts on the line through the monomial $x^iy^{n-i}$ by the character $t\mapsto t^{2i-n}$. The stabilizer is the set of $g=\begin{pmatrix}\alpha &\beta \\ -\overline{\beta}&\overline{\alpha}\end{pmatrix}\in SU(2)$ such that $gx^iy^{n-i}$ lies in $W$. But this stabilizer is either $T$ or $N_G(T)$, as seen above. The Weyl element $w$ acts as $w\cdot x^iy^{n-i}=(-1)^{n-i}x^{n-i}y^i$, so it does not stabilize $W$ unless $i=n-i$. In case $2i\ne n$, the stabilizer is $G_V=T$, $G/G_V\cong S^2$, and it is easy to show that the Chern class of the complex line bundle associated to the character $t\mapsto t^{2i-n}$ is $(2i-n)$ times the generator of $H^2(S^2,\mathbb Z)$, which is nonzero. Now consider the case $n=2i$, where $T$ acts trivially on $W$. Here the Weyl element $w$ acts on $W$ trivially or nontrivially according as $i$ is even or odd. In case $w$ acts nontrivially, $G_V=N_G(T)$ and the associated complex line bundle $E_W$ on $G/G_V\cong \mathbb R\mathbb P^2$ is the complexification of the tautological real line bundle on $\mathbb R\mathbb P^2$. Since $H^*(\mathbb R\mathbb P^2,\mathbb F_2)=\mathbb F_2[w_1]/w_1^3$, where $w_1$ is the first Stiefel-Whitney class of the tautological bundle, the mod 2 reduced Chern class of the complexification is $w_1^2\ne 0$. So the Chern class of $E_W$ itself is nonzero. When $i=2m$ is even, the Weyl element acts trivially on $W$, so $E_W$ descends to the trivial complex line bundle on $G/N_G(T)=\mathbb R P^2$. Therefore the Chern class of $E_W$ is zero. This completes the proof of $(3)$. \end{proof} \begin{remark} By $(3)$ of Proposition \ref{SU(2)}, we get an infinite family of irreducible representations $U_{4m}$ of $G=SU(2)$ and $T$-invariant universal subspaces $V_{4m}$ satisfying the dimension requirement (\ref{dimension}), such that the Chern class of $E_{W_{4m}}$ is zero. 
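The case analysis in the proof of $(3)$ can be packaged as a small decision procedure. The Python sketch below is ours (hypothetical helper name) and encodes only the conclusions just derived, with $W$ the line through $x^iy^{n-i}$:

```python
# Sketch of the case analysis above (names ours, purely illustrative).
# W is the T-invariant line through x^i y^(n-i) inside U_n.

def chern_class_vanishes(n: int, i: int) -> bool:
    """True iff the Chern class of E_W on G/G_V is zero."""
    if 2 * i != n:
        # G_V = T, G/G_V = S^2; c1 = (2i - n) * generator, nonzero.
        return False
    # n = 2i: T acts trivially; the Weyl element acts by (-1)^i on W,
    # so the class is nonzero on RP^2 when i is odd, zero when i is even.
    return i % 2 == 0

# The vanishing cases are exactly n = 4m, W the line through x^{2m} y^{2m}:
assert [n for n in range(0, 13, 2) if chern_class_vanishes(n, n // 2)] == [0, 4, 8, 12]
```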
Here $W_{4m}$ can be concretely identified with the complex line through the monomial $x^{2m}y^{2m}$. This gives an infinite family of counterexamples to the original question posed in \cite{A-D1}. In contrast, the counterexample in \S\ref{counter} was worked out from scratch and did not depend on Gichev's result. \end{remark} \section{Levi subgroups and solvable groups} \begin{proposition}\label{solvable} Let $G$ be a connected solvable Lie group, $U=\mathbb C^n$, and let $\rho:G\to GL(n,\mathbb C)$ be a representation of $G$ on $U$. Then any complex linear universal subspace $V$ of $U$ has to be equal to $U$.\end{proposition} \begin{proof} We prove this in two steps. In Case 1 we prove the result for simply connected solvable complex Lie groups, and in Case 2 we treat the general case. {\bf Case 1.} $G$ is a complex simply connected solvable Lie group. We induct on the complex rank of $U$. If the rank is $0$ there is nothing to prove. Now assume the rank is positive. Consider the action of the Lie algebra $d\rho:\mathfrak g\to\mathfrak{gl}(U)=\mathfrak{gl}(n,\mathbb C)$. By Lie's theorem, $U$ admits a filtration $$U=U_n\supset U_{n-1}\supset \cdots \supset U_1\supset U_0=0$$ of complex subspaces invariant under $\mathfrak g$ (hence under $G$, because the connected group $G$ is generated by a neighbourhood of the identity, and such a neighbourhood lies in the image of $\exp$) such that $\mathfrak g$ acts on each $U_i/U_{i-1}$ by a character. These characters integrate to characters of the simply connected group $G$. Since the rank of $U$ is positive, $U_1\ne 0$. Since $V$ is universal and $G$ acts on $U_1$ by a character, $U_1\subset V$. Then $V/U_1$ is universal in $U/U_1$, and the induction hypothesis applies, so $V/U_1=U/U_1$, hence $V=U$. {\bf Case 2.} The general case. First we claim that if $G$ is any connected solvable Lie group (real or complex), then there is a simply connected solvable complex Lie group $G_{sc}$ such that ${\rm Lie}(G_{sc})=\mathfrak g_\mathbb C$. 
By Ado's theorem, there is an embedding of complex Lie algebras $\mathfrak g_\mathbb C\subset \mathfrak{gl}(N,\mathbb C)$, so by the subgroup-subalgebra correspondence there is some connected complex Lie group $G'$ with $\mathfrak g_\mathbb C$ as its Lie algebra. This $G'$ has to be solvable if $\mathfrak g$ is. Now let $G_{sc}$ be the simply connected cover of $G'$. Since the central quotient $G'$ of $G_{sc}$ is solvable, $G_{sc}$ is solvable too. This proves the claim. Given the representation $\rho$, consider $d\rho:\mathfrak g\to \mathfrak{gl}(U)$, and then $d\rho_\mathbb C=d\rho\otimes_\mathbb R 1_\mathbb C:\mathfrak g_\mathbb C\to \mathfrak{gl}(U)$. Then $d\rho_\mathbb C$ integrates to a representation $\rho_{sc}:G_{sc}\to GL(U)=GL(n,\mathbb C)$. Now there is a group homomorphism $i:\tilde G\to G_{sc}$, where $\tilde G$ is the simply connected cover of $G$, such that $di$ is the inclusion of $\mathfrak g$ into $\mathfrak g_\mathbb C$. The two representations of $\tilde G$ on $U$ given by the following two compositions are equivalent. $$\tilde G\to G\stackrel\rho\to GL(n,\mathbb C)~~{\rm and}~~\tilde G\stackrel i\to G_{sc}\stackrel{\rho_{sc}}\to GL(n,\mathbb C)$$ Since $V$ is universal for $G$, it is universal for $\tilde G$ and therefore for $G_{sc}$ as well. Now Case 1 applies. \end{proof} Recall that a Levi decomposition of a connected Lie group $G$ is given by a virtual connected semisimple Lie subgroup $S\subset G$ such that $G=R\cdot S$ and $R\cap S$ is discrete, where $R$ is the solvable radical of $G$. \begin{proposition}\label{compact-levi} Let $G$ be a compact connected Lie group, and let $G=R\cdot S$ be a Levi decomposition, where $R$ is the radical and $S$ is semisimple. Suppose $\rho:G\to GL(U)$ is a complex representation of $G$ and $V$ is a complex subspace that is universal for $G$. 
Then $V$ is universal for $S$ also.\end{proposition} \begin{proof} Since $G$ is compact, $R$ is central in $G$. Let $\Phi$ be the set of weights in the weight space decomposition of $U$ for the torus $R$. Then we have $$U=\oplus_{\alpha\in \Phi}U_\alpha,~{\rm where}~U_\alpha:=\{u\in U|~ \rho(g) u=\alpha(g)u,~\forall~g\in R\}.$$ Since $R$ is central in $G$, each $U_\alpha$ is $G$-invariant. Therefore, for each $\alpha\in \Phi$, $V\cap U_\alpha$ is universal in $U_\alpha$ for $G$. Now our proposition will follow if we can show that each $V\cap U_\alpha$ is universal in $U_\alpha$ for $S$. Let $v\in U_\alpha$. Then there is some $g\in G$ such that $\rho(g)v\in V\cap U_\alpha$. Write $g=sr$ with $r\in R$, $s\in S$. Then $V\cap U_\alpha\ni \rho(g)v= \rho(s)\rho(r)v=\alpha(r)\rho(s)v$, which means $\rho(s)v\in V\cap U_\alpha$.\end{proof} \begin{remark} \label{solv-remark} $(a)$ Propositions \ref{solvable} and \ref{compact-levi} are not always true for real representations. For example, take $G=U(1)$ acting on $U=\mathbb R^2$ by rotation. Then any line through the origin is universal for $G$. $(b)$ Proposition \ref{compact-levi} is not valid if $G$ is not compact. We give an example to demonstrate this. Take \[G=\left\{\begin{pmatrix} A & B\\ 0 & A\end{pmatrix}\in SL(4,\mathbb C): A\in SU(2), B\in \mathfrak{gl}(2,\mathbb C)\right\}.\] There is an isomorphism \[f:SU(2)\ltimes \mathfrak{gl}(2,\mathbb C)\cong G: (A,B)\mapsto \begin{pmatrix} A & AB\\ 0 & A\end{pmatrix}.\] Here the semidirect product is given by $(A',B')\cdot (A,B)=(A'A, A{}^{-1} B'A+B)$. The subgroup $SU(2)$ sitting diagonally is a Levi component $S$, while the solvable radical is $R=\left\{\begin{pmatrix}I_2 & *\\ 0 & I_2\end{pmatrix}\right\}\cong \mathfrak{gl}(2,\mathbb C)$, which is abelian. Let $U=\mathbb C^4$ and consider the restriction to $G$ of the defining representation of $SL(4,\mathbb C)$ on $U$. 
Let $V$ be the set of vectors $\{(v,w)\in \mathbb C^4: v\in \mathbb C e_1,w\in \mathbb C e_1\}$, where $e_1=(1,0)\in \mathbb C^2$. Then $V$ is universal for $G$. Indeed, if we take any $(v,w)\in \mathbb C^4$ with $w\ne 0$, then we have some $A\in SU(2)$, $z\in \mathbb C$ such that $Aw=ze_1$. Fix such an $A$. We claim that there is some $B\in\mathfrak{gl}(2,\mathbb C)$, $z'\in\mathbb C$ such that $Av+Bw=z'e_1$. If $Av=\alpha e_1$ for some $\alpha\in \mathbb C$, we can find $B\in SU(2)\subset \mathfrak{gl}(2,\mathbb C)$, $\beta\in \mathbb C$ such that $Bw=\beta e_1$, so that $Av+Bw=(\alpha+\beta)e_1$. If $Av\not\in \mathbb C e_1$, then $\{Av,e_1\}$ is a basis of $\mathbb C^2$. Since $w\ne 0$, there is some $B_1\in GL(2,\mathbb C)\subset \mathfrak{gl}(2,\mathbb C)$ such that $B_1w=-\gamma Av+\delta e_1$, where $\gamma,\delta\in \mathbb C$, $\gamma\ne 0$. If $B=\gamma{}^{-1} B_1$, then $Bw+Av=\delta\gamma{}^{-1} e_1$, which proves the claim. So, \[\begin{pmatrix} A & B\\ 0 & A\end{pmatrix}\begin{pmatrix} v\\ w\end{pmatrix}=\begin{pmatrix} z'e_1\\ ze_1\end{pmatrix}.\] If $w=0$ and $v\ne 0$, then there are $A\in SU(2)$, $z\in \mathbb C$ such that $Av=ze_1$. Thus \[\begin{pmatrix} A & 0\\ 0 & A\end{pmatrix}\begin{pmatrix} v\\ 0\end{pmatrix}=\begin{pmatrix} ze_1\\ 0\end{pmatrix}.\] However, $V$ is not universal for the Levi part $S$: if $v=e_1$ and $w=e_2=(0,1)$, then there is no $A\in SU(2)$ such that $Av,Aw\in \mathbb C e_1$. \end{remark} \vskip 2em \parindent=0pt {\small Saurav Bhaumik Department of Mathematics, Indian Institute of Technology Bombay, Powai, Mumbai 400076, India \texttt{[email protected]} \vskip 1em Arunava Mandal Department of Mathematical Sciences, Indian Institute of Science Education and Research Mohali, Punjab 140306, India \texttt{[email protected]} \end{document}
\begin{document} \title{Arbitrary Arrow Update Logic with Common Knowledge is neither RE nor co-RE} \begin{abstract} Arbitrary Arrow Update Logic with Common Knowledge (AAULC) is a dynamic epistemic logic with (i) an arrow update operator, which represents a particular type of information change and (ii) an arbitrary arrow update operator, which quantifies over arrow updates. By encoding the execution of a Turing machine in AAULC, we show that neither the valid formulas nor the satisfiable formulas of AAULC are recursively enumerable. In particular, it follows that AAULC does not have a recursive axiomatization. \end{abstract} \section{Introduction} One of the active areas of study in the field of Dynamic Epistemic Logic is that of \emph{quantified update logics}. Examples of these quantified logics include Arbitrary Public Announcement Logic (APAL) \cite{APAL}, Group Announcement Logic (GAL) \cite{Agotnes2010}, Coalition Announcement Logic (CAL) \cite{CAL}, Arbitrary Arrow Update Logic (AAUL) \cite{AAULAIJ}, Refinement Modal Logic (RML) \cite{refinement} and Arbitrary Action Model Logic (AAML) \cite{hales13}. All these logics have an operator that quantifies over all updates of a particular type. For example, in APAL the formula $[!]\varphi$ means ``$[\psi]\varphi$ holds for every public announcement $\psi$'' and in AAUL the formula $[\updownarrow]\varphi$ means ``$[U]\varphi$ holds for every arrow update $U$.'' One important question about these logics is the decidability of their satisfiability problems. The satisfiability problems of RML and AAML are known to be decidable \cite{hales13,refinement}, whereas for the other logics the satisfiability problem is known to be undecidable \cite{french08,agotnes2014,agotnesetal:2016,undecidable}. More precisely, the satisfiability problem for each of these undecidable logics was shown, by a reduction from the tiling problem, to be co-RE hard. But, so far, it has remained an open question whether they are co-RE. 
In other words, while we know that we cannot generate a list of all satisfiable formulas of APAL, GAL, CAL and AAUL, we do not know whether it is possible to generate a list of all valid formulas of these logics. The question of whether the valid formulas are RE is of particular interest, since a negative result would imply the non-existence of a recursive axiomatization of the logic in question.\footnote{Finitary axiomatizations for APAL and GAL were proposed in \cite{APAL} and \cite{Agotnes2010}, respectively, but these axiomatizations contain a flaw that renders them unsound.\footnotemark See \url{http://personal.us.es/hvd/errors.html} for details and a proof of the unsoundness.}\footnotetext{We should stress that it is only the finitary axiomatizations that are unsound; the infinitary axiomatization presented in \cite{APAL} is sound and complete (although the completeness proof contains an error; again, see \url{http://personal.us.es/hvd/errors.html} for details).} After all, a recursive axiomatization would allow us to list all valid formulas. Here, we study a variant AAULC of AAUL, which in addition to all the operators from AAUL also contains a common knowledge operator. We show that the valid formulas of AAULC are not recursively enumerable, by a reduction from the non-halting problem. The proof from \cite{undecidable}, which shows that the validities of AAUL are not co-RE, also applies to AAULC. Still, the non-RE proof in this paper can be extended to a non-co-RE proof with little effort, so we prove that the validity problem of AAULC is neither RE nor co-RE. We consider this result to be interesting in its own right. Additionally, and perhaps even more importantly, we also hope that the proof presented here can provide inspiration for proofs about the (non)existence of recursive axiomatizations for APAL, GAL, CAL and AAUL. The structure of this paper is as follows. First, in Section~\ref{sec:aaul}, we define AAULC. 
Then, in Section~\ref{sec:machines} we discuss the notation that we will use to describe Turing machines. Finally, in Section~\ref{sec:reduction}, we show that both the halting problem and the non-halting problem can be reduced to the validity problem of AAULC. \section{AAULC} \label{sec:aaul} Here, we provide the definitions of Arbitrary Arrow Update Logic with Common Knowledge (AAULC). The logics AUL, AAUL and AAULC were designed to reason about information change, but they can also be applied to other domains, most notably that of Normative Systems. A brief overview of the epistemic interpretation of arrow updates is given after the formal definitions. See \cite{AUL} and \cite{AAULAIJ} for a more in-depth discussion of the applications of AUL and its variants. Let $\mathcal{P}$ be a countable set of propositional atoms, and let $\mathcal{A}$ be a finite set of agents. We use five agents in our proof, so we assume that $|\mathcal{A}|\geq 5$. The proof can be modified to use only one agent, but such a modification requires a lot of complicated notation, so we do not do so here. \begin{definition} The language $\mathcal{L}_\mathit{AAULC}$ of AAULC is given by the following grammar. \begin{align*} \varphi ::= {} & p \mid \neg \varphi \mid \varphi \vee \varphi \mid \square_a\varphi \mid C\varphi \mid [U]\varphi \mid [\updownarrow]\varphi\\ u ::= {} & (\varphi,a,\varphi)\\ U ::= {} & \{u_1,\cdots,u_n\} \end{align*} where $p\in \mathcal{P}$ and $a\in \mathcal{A}$. The language $\mathcal{L}_\mathit{AULC}$ of \emph{Arrow Update Logic with Common Knowledge} is the fragment of $\mathcal{L}_\mathit{AAULC}$ that does not contain the $[\updownarrow]$ operator. \end{definition} We use $\wedge,\lozenge$ and $\langle \updownarrow\rangle$ in the usual way as abbreviations. The formulas of $\mathcal{L}_\mathit{AAULC}$ are evaluated on standard multi-agent Kripke models. 
\begin{definition} A \emph{model} is a triple $\mathcal{M}=(W,R,V)$, where $W$ is a set of worlds, $R:\mathcal{A}\rightarrow 2^{W\times W}$ assigns to each agent an accessibility relation and $V:\mathcal{P}\rightarrow 2^W$ is a valuation. \end{definition} Note that we use the class $K$ of all Kripke models. Our reason for using $K$, as opposed to a smaller class such as $S5$, is that Arrow Update Logic is traditionally evaluated on $K$, see also \cite{AUL} and \cite{AAULAIJ}. For the results presented in this paper the choice of models is not very important; the proof that we use would, with some small modifications, also work on $S5$. We also write $R_a(w)$ for $\{w'\mid (w,w')\in R(a)\}$. The semantics for most operators are as usual, so we omit their definitions. We do provide definitions for $[U]$ and $[\updownarrow]$, since these operators are not as well known as the others. \begin{definition} Let $\mathcal{M}=(W,R,V)$ be a model, and let $w\in W$. Then $$\begin{array}{llr} \mathcal{M},w\models [U]\varphi & \Leftrightarrow& \mathcal{M}*U,w\models \varphi\\ \mathcal{M},w\models [\updownarrow]\varphi & \Leftrightarrow& \forall U\in \mathcal{L}_\mathit{AULC}: \mathcal{M},w\models [U]\varphi \end{array}$$ where $\mathcal{M}*U$ is given by \begin{align*} \mathcal{M}* U := {} & (W,R*U,V),\\ R*U(a) := {} & \{(w_1,w_2)\in R(a)\mid \\ & \hspace{10pt}\exists (\varphi_1,a,\varphi_2)\in U: \mathcal{M},w_1\models \varphi_1 \text{ and } \mathcal{M},w_2\models \varphi_2\} \end{align*} \end{definition} \begin{definition} Let $\mathcal{M}_1=(W_1,R_1,V_1)$ and $\mathcal{M}_2= (W_2,R_2,V_2)$ be models and let $w_1\in W_1$, $w_2\in W_2$. We say that $w_1$ and $w_2$ are $\mathit{AULC}$-indistinguishable if for every $\varphi\in \mathcal{L}_\mathit{AULC}$, we have $\mathcal{M}_1,w_1\models \varphi \Leftrightarrow \mathcal{M}_2,w_2\models \varphi$. 
\end{definition} An arrow update $U$ represents an information-changing event, of a kind that is sometimes referred to as a \emph{semi-private announcement}. Unlike with a public announcement, the information gained through a semi-private announcement is not common knowledge. It is, however, common knowledge what information is gained under which conditions. A typical example is the following. Suppose that $a$ and $b$ are playing a game of cards, where each player holds one card. The cards have been dealt to them, face down. Now, $a$ picks up her card and looks at it. By doing this, agent $a$ learns what card she holds. This new information is not common knowledge, since $b$ doesn't learn $a$'s card, so this event cannot be represented as a public announcement. It is common knowledge, however, under which conditions $a$ gains which information: if $a$ has the Ace of Spades, then she learns that she has the Ace of Spades, and so on. The event of $a$ picking up her card can therefore be considered a semi-private announcement, which can be represented as an arrow update. A clause $(\varphi_1,a,\varphi_2)\in U$ says that in every world that satisfies $\varphi_1$, the new information gained by agent $a$ is consistent with $\varphi_2$. If there are multiple clauses that apply to a single world, we consider them to apply disjunctively, i.e., the new information is consistent with both postconditions: if $(\varphi_1,a,\varphi_2), (\psi_1,a,\psi_2)\in U$ and a world satisfies both $\varphi_1$ and $\psi_1$, then $a$'s new information is consistent with every world that satisfies either $\varphi_2$ or $\psi_2$. We assume that $U$ provides a full description of the new information, so any world consistent with the new information satisfies the postcondition of at least one applicable clause. 
Semantically, this means that a transition $(w_1,w_2)\in R(a)$ is retained by the update $[U]$ if and only if there is at least one clause $(\varphi_1,a,\varphi_2)\in U$ such that $w_1$ satisfies $\varphi_1$ and $w_2$ satisfies $\varphi_2$. Every other transition is removed from the model. The example discussed above, where $a$ looks at her card, is represented by the arrow update \begin{equation*}U_\mathit{cards} := \{(\top,b,\top)\}\cup \{(\mathit{card},a,\mathit{card})\mid \mathit{card}\in\mathit{deck}\},\end{equation*} where $\mathit{deck}$ is the deck from which the cards were dealt. The clause $(\top,b,\top)$ states that $b$ doesn't directly learn anything new: every distribution of cards is consistent with $b$'s new information. The clause $(\mathit{card},a,\mathit{card})$, for $\mathit{card}\in \mathit{deck}$, states that if $a$ holds $\mathit{card}$, then by looking at her card she learns that she holds $\mathit{card}$. The arbitrary arrow update operator $[\updownarrow]$ quantifies over all arrow updates that do not themselves contain the $[\updownarrow]$ operator. So $\mathcal{M},w\models [\updownarrow]\varphi$ if and only if $\mathcal{M},w\models [U]\varphi$ for every $U\in \mathcal{L}_{AULC}$. This restriction to $[\updownarrow]$-free updates keeps the semantics from becoming circular.\footnote{Similar restrictions exist in the other quantified update logics. In APAL, for example, the arbitrary arrow update operator $[!]$ quantifies over all public announcements $[\psi]$ where $\psi\in \mathcal{L}_\mathit{PAL}$.} The operator $[\updownarrow]$ allows us to ask, inside the object language, whether there is a semi-private announcement that makes a formula true. So, for example, $\mathcal{M},w\models [\updownarrow](\square_ap \wedge \neg \square_bp)$ asks whether, in the situation represented by the model $\mathcal{M},w$, there is a semi-private announcement that informs $a$ of the truth of $p$ without letting $b$ know that $p$ is true. 
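To make the clause semantics concrete, the following small Python sketch (all names are ours, purely illustrative) computes the updated accessibility map $R*U$ on a finite model and applies it to a two-world version of the card example:

```python
# Illustrative sketch of the arrow update M * U on a finite Kripke model.
# Worlds are labelled by the set of atoms true at them; clause pre- and
# postconditions are predicates on a world's valuation.

def update(R, V, U):
    """Return R * U: keep (w1, w2) in R(a) iff some clause
    (pre, a, post) in U has pre true at w1 and post true at w2."""
    return {
        a: {(w1, w2) for (w1, w2) in arrows
            if any(ag == a and pre(V[w1]) and post(V[w2])
                   for (pre, ag, post) in U)}
        for a, arrows in R.items()
    }

# Card example: b keeps every arrow; a keeps only arrows between worlds
# where she holds the same card.
R = {"a": {("w1", "w1"), ("w1", "w2")}, "b": {("w1", "w1"), ("w1", "w2")}}
V = {"w1": {"ace"}, "w2": {"king"}}
U_cards = [(lambda v: True, "b", lambda v: True),
           (lambda v: "ace" in v, "a", lambda v: "ace" in v),
           (lambda v: "king" in v, "a", lambda v: "king" in v)]
R2 = update(R, V, U_cards)
assert R2["b"] == {("w1", "w1"), ("w1", "w2")}
assert R2["a"] == {("w1", "w1")}   # a no longer considers w2 possible
```

Note that arrows are only ever removed, never added; this is the semantic counterpart of an announcement refining, but never revising, the agents' information.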
Recall that an event is a semi-private announcement if it is common knowledge under what conditions which information is gained. So $[\updownarrow](\square_ap\wedge \neg \square_bp)$ is true if and only if there is a method to inform $a$ of the truth of $p$ without informing $b$, under the assumption that the method itself is common knowledge. Or, in other (and slightly trendier) words, $[\updownarrow](\square_ap \wedge \neg \square_bp)$ is true if and only if it is possible to inform $a$ but not $b$ of the truth of $p$, without relying on \emph{security through obscurity}. For more examples of the applications of arrow updates and arbitrary arrow updates, see \cite{AUL} and \cite{AAULAIJ}. \begin{remark} In the semantics of AAULC, we let $[\updownarrow]$ quantify over arrow updates in $\mathcal{L}_\mathit{AULC}$, so these updates may contain the common knowledge operator $C$. This means that our $[\updownarrow]$ operator is slightly different from the one in AAUL \cite{AAULAIJ}, since the quantification in AAUL is over updates that do not contain $C$. This difference is not important for the current paper. All the results presented here still hold if we let $[\updownarrow]$ quantify only over the updates that contain neither $[\updownarrow]$ nor $C$. \end{remark} \section{Turing Machines} \label{sec:machines} A full discussion of Turing machines is outside the scope of this paper. We assume that the reader is familiar with the basic ideas of a Turing machine; here we only concern ourselves with the notation that we use to represent Turing machines. \begin{definition} A \emph{Turing machine} $T$ is a tuple $T=(\Lambda,S,\Delta)$, where $\Lambda$ is a finite alphabet such that $\alpha_0\in \Lambda$, $S$ is a finite set of states such that $s_0,s_\mathit{end}\in S$ and $\Delta:\Lambda\times S\rightarrow \Lambda\times S\times \{\mathit{left},\mathit{remain},\mathit{right}\}$ is a transition function.
\end{definition} We write $\Delta_1,\Delta_2$ and $\Delta_3$ for the projections of $\Delta$ to its first, second and third components. So if $\alpha$ is the symbol currently under the read/write head and $s$ is the current state, then the machine will write the symbol $\Delta_1(\alpha,s)$, go to the state $\Delta_2(\alpha,s)$ and move the read/write head in direction $\Delta_3(\alpha,s)$. We assume, without loss of generality, that the state $s_0$ doesn't re-occur. Furthermore, note that we defined $\Delta$ to be a function with $\Lambda\times S$ as domain. So the machine $T$ continues after reaching $s_\mathit{end}$. This is notationally more convenient than letting $T$ terminate once it reaches $s_\mathit{end}$. We don't care about what happens after reaching $s_\mathit{end}$, though. \begin{definition} A Turing machine $T$ \emph{halts} if, when starting in state $s_0$ with a tape that contains only the symbol $\alpha_0$, the system reaches the state $s_\mathit{end}$. \end{definition} It is well known that the halting Turing machines are recursively enumerable, but the non-halting ones are not \cite{turing1937}. The Turing machines that we consider are deterministic, so the execution of a machine $T$ on a tape that only contains $\alpha_0$ happens in exactly one way. We call this the run of $T$. One straightforward way to represent this run of $T$ is to consider it as a function $\mathit{run}^T : \mathbb{Z}\times \mathbb{N}\rightarrow \Lambda\times S \times \{0,1\}$, where $\mathit{run}^T(n,m)=(\alpha,s,x)$ means that at time $m$, the symbol in position $n$ on the tape is $\alpha$, the machine is in state $s$ and the read/write head is at position $n$ if and only if $x=1$. For notational reasons, it is convenient to extend this function to $\mathit{run}^T : \mathbb{Z}\times \mathbb{Z}\rightarrow \Lambda\times S \times \{0,1\}$, where $\mathit{run}^T(n,m)=(\alpha_0,s_\mathit{void},0)$ for all $m<0$. 
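The function $\mathit{run}^T$ can be computed by direct simulation, one snapshot per time step. The sketch below is illustrative and not from the paper: names are our own, moves are encoded as $-1,0,+1$ rather than $\{\mathit{left},\mathit{remain},\mathit{right}\}$, the extension to negative times $m<0$ is omitted since it is constant, and the two-state machine at the end is a hypothetical example.

```python
from collections import defaultdict

def run_T(machine, steps):
    """Simulate a deterministic Turing machine started in s0 on an
    all-alpha0 tape. Returns run[(n, m)] = (symbol, state, head_here)
    for |n| <= steps and 0 <= m <= steps.
    machine = (alpha0, s0, delta), where
    delta[(symbol, state)] = (write, next_state, move in {-1, 0, +1})."""
    alpha0, s0, delta = machine
    tape = defaultdict(lambda: alpha0)  # blank tape, indexed by Z
    state, head = s0, 0
    run = {}
    for m in range(steps + 1):
        # Record the snapshot at time m before executing the step.
        for n in range(-steps, steps + 1):
            run[(n, m)] = (tape[n], state, 1 if n == head else 0)
        write, state, move = delta[(tape[head], state)]
        tape[head] = write
        head += move
    return run

# Hypothetical machine: from s0 on a blank cell, write '1', move right,
# and enter s_end; in s_end, stay put forever.
delta = {("0", "s0"): ("1", "s_end", +1),
         ("0", "s_end"): ("0", "s_end", 0),
         ("1", "s_end"): ("1", "s_end", 0)}
run = run_T(("0", "s0", delta), steps=2)
assert run[(0, 0)] == ("0", "s0", 1)     # time 0: blank cell, head at 0
assert run[(0, 1)] == ("1", "s_end", 0)  # time 1: '1' written, head gone
assert run[(1, 1)] == ("0", "s_end", 1)  # time 1: head moved to cell 1
```

This machine halts (it reaches $s_\mathit{end}$ at time $1$), and the recorded snapshots are exactly the values $\mathit{run}^T(n,m)$ for the simulated window.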
Extending $\mathit{run}^T$ in this way allows us to avoid a number of special cases that we would otherwise have to consider for $m=0$. Like with $\Delta$, we use $\mathit{run}^T_1$, $\mathit{run}^T_2$ and $\mathit{run}^T_3$ to refer to the projections to the first, second and third coordinates. \section{The Reduction} \label{sec:reduction} For every Turing machine $T$, we want to represent the unique run $\mathit{run}^T$ in AAULC. In order to do this, we start by encoding certain facts as propositional atoms. For every state $s\in S\cup \{s_\mathit{void}\}$ and every element $\alpha\in \Lambda$ of the alphabet, we assume that $s,\alpha\in \mathcal{P}$. We are free to do this, since $\mathcal{P}$ is countably infinite, while $S$ and $\Lambda$ are finite. As one might expect, we use the propositional atom $s$ to represent that the state of the Turing machine at a particular point in time is $s$, and we use the atom $\alpha$ to represent that a particular position of the tape contains the symbol $\alpha$ at a particular point in time. Additionally, we assume that $\mathit{pos},\mathit{lpos},\mathit{rpos}\in \mathcal{P}$. These three atoms are used to indicate that a particular point on the tape is the current position of the read/write head, to the left of the current position of the read/write head and to the right of the current position of the read/write head, respectively. We also assume that there are five agents named $a,\mathit{right},\mathit{left},\mathit{up}$ and $\mathit{down}$ in $\mathcal{A}$. Note that we can do this because we assumed that $|\mathcal{A}|\geq 5$. With these preliminaries out of the way, we can define the formula $\varphi_T$ that represents the Turing machine $T$ in AAULC. \begin{definition} Let $T=(\Lambda,S,\Delta)$ be a Turing machine.
The formula $\varphi_T$ is given by \begin{equation*}\varphi_T := C\psi_\mathit{grid}\wedge C\psi_\mathit{sane}\wedge C\psi_T\wedge s_0\wedge \mathit{pos},\end{equation*} where $\psi_\mathit{grid}$, $\psi_\mathit{sane}$ and $\psi_T$ are as shown in Tables~\ref{table:psi_grid}--\ref{table:psi_T}. \end{definition} \begin{table*}[h] \caption{The Formula $\psi_\mathit{grid}$.} \label{table:psi_grid} $$\begin{array}{rl} \psi_\mathit{grid} := {} & \mathit{ref_a} \wedge\mathit{no\_other}\wedge\mathit{direction}\wedge \mathit{inverse}\wedge\mathit{commute} \\ D := {} & \{\mathit{left},\mathit{right},\mathit{up},\mathit{down}\}\\ \mathit{INV} := {} & \{(\mathit{left},\mathit{right}),(\mathit{right},\mathit{left}),(\mathit{up},\mathit{down}),(\mathit{down},\mathit{up})\}\\ \mathit{COMM} := {} & (\{\mathit{up},\mathit{down}\}\times \{\mathit{left},\mathit{right}\})\cup (\{\mathit{left},\mathit{right}\}\times \{\mathit{up},\mathit{down}\})\\ \mathit{ref_a} := {} & \lozenge_a\top\wedge [\updownarrow]\square_a\lozenge_a\top\\ \mathit{no\_other} := {} & \bigwedge_{x\in \mathcal{A}\setminus (D \cup \{a\})}\square_x\bot\\ \mathit{direction} := {} &\bigwedge_{x\in D}(\lozenge_x\top \wedge [\updownarrow](\lozenge_x\lozenge_a\top\rightarrow \square_x\lozenge_a\top))\\ \mathit{inverse} := {} & [\updownarrow](\lozenge_a\top\rightarrow \bigwedge_{(x,y)\in \mathit{INV}}\square_x\square_y\lozenge_a\top)\\ \mathit{commute} := {} & [\updownarrow]\bigwedge_{(x,y)\in \mathit{COMM}}(\lozenge_x\lozenge_y\lozenge_a\top\rightarrow \square_y\square_x\lozenge_a\top)\\ \end{array}$$ \end{table*} \begin{table*}[h] \caption{The Formula $\psi_\mathit{sane}$.} \label{table:psi_sane} $$\begin{array}{rl} \psi_\mathit{sane} := & \mathit{position}_1\wedge \mathit{position}_2\wedge\mathit{one\_state}\wedge \mathit{same\_state} \wedge \\ &\mathit{one\_symbol}\wedge \mathit{void\_state}\wedge \mathit{initial\_symbol} \wedge \mathit{unchanged}\\ \mathit{position}_1 := & \neg (\mathit{pos}\wedge \mathit{lpos}) \wedge \neg
(\mathit{pos}\wedge \mathit{rpos})\wedge \neg (\mathit{rpos}\wedge \mathit{lpos})\\ \mathit{position}_2 := & ((\mathit{pos}\vee \mathit{rpos})\rightarrow \square_\mathit{right}\mathit{rpos}) \wedge ((\mathit{pos}\vee \mathit{lpos})\rightarrow \square_\mathit{left}\mathit{lpos})\\ \mathit{one\_state} := & \bigvee_{s\in \mathit{states}}(s\wedge \bigwedge_{s'\in \mathit{states}\setminus \{s\}}\neg s')\\ \mathit{same\_state} := & \bigwedge_{s\in \mathit{states}}(s\rightarrow (\square_\mathit{left}s\wedge \square_\mathit{right}s))\\ \mathit{one\_symbol} := & \bigvee_{\alpha\in\mathit{symbols}}(\alpha \wedge \bigwedge_{\beta\in \mathit{symbols}\setminus \{\alpha\}}\neg \beta)\\ \mathit{void\_state} := & (s_0\vee s_\mathit{void})\rightarrow \square_\mathit{down}s_\mathit{void}\\ \mathit{initial\_symbol} := {} & s_0\rightarrow \alpha_0 \\ \mathit{unchanged} := {} & \bigwedge_{\alpha \in \mathit{symbols}}((\neg \mathit{pos}\wedge \alpha)\rightarrow\square_\mathit{up}\alpha) \\ \end{array}$$ \end{table*} \begin{table*}[h] \caption{The Formula $\psi_T$.} \label{table:psi_T} $$\begin{array}{rl} \psi_T := {} & \mathit{position\_change}_T\wedge \mathit{state\_change}_T\wedge \mathit{symbol\_change}_T\\ \mathit{position\_change}_T := {} & \bigwedge_{\{(\alpha,s)\mid \Delta_3(\alpha,s)=\mathit{left}\}}((\mathit{pos}\wedge s\wedge \alpha) \rightarrow \square_\mathit{up}\square_\mathit{left}\mathit{pos})\wedge \\ & \bigwedge_{\{(\alpha,s)\mid \Delta_3(\alpha,s)=\mathit{right}\}}((\mathit{pos}\wedge s\wedge \alpha) \rightarrow \square_\mathit{up}\square_\mathit{right}\mathit{pos})\wedge\\ & \bigwedge_{\{(\alpha,s)\mid \Delta_3(\alpha,s)=\mathit{remain}\}}((\mathit{pos}\wedge s\wedge \alpha) \rightarrow \square_\mathit{up}\mathit{pos})\\ \mathit{state\_change}_T :={} & \bigwedge_{s'\in\mathit{states}}\bigwedge_{\{(\alpha,s)\mid \Delta_2(\alpha,s)=s'\}}((\mathit{pos}\wedge s\wedge \alpha)\rightarrow \square_\mathit{up}s')\\ \mathit{symbol\_change}_T := {} & \bigwedge_{\beta\in
\mathit{symbols}}\bigwedge_{\{(\alpha,s)\mid \Delta_1(\alpha,s)=\beta\}}((\mathit{pos}\wedge s\wedge \alpha)\rightarrow \square_\mathit{up}\beta)\\ \end{array}$$ \end{table*} This formula may look somewhat intimidating, but apart from $\psi_\mathit{grid}$ all named formulas are very simple encodings of aspects of a Turing machine. The formula $\psi_\mathit{grid}$, as the name might suggest, encodes a $\mathbb{Z}\times\mathbb{Z}$ grid. We first show that $\varphi_T$ is satisfiable. After that, we show that any model that satisfies $\varphi_T$ contains a representation of $\mathit{run}^T$. \begin{lemma} \label{lemma:sat} For every Turing machine $T$, the formula $\varphi_T$ is satisfiable. \end{lemma} \begin{proof} Let $\mathcal{M}=(W,R,V)$ be given as follows. Take $W=\mathbb{Z}\times \mathbb{Z}$. For every direction $x\in D$ let $(w,w')\in R(x)$ if and only if $w'$ is immediately to the $x$ of $w$, and let $R(a)=\{(w,w)\mid w\in W\}$. For $x\not \in D\cup \{a\}$, let $R(x)=\emptyset$. Now, for any $\alpha\in \Lambda$ and $s\in S\cup\{s_\mathit{void}\}$, let $V(\alpha)=\{(n,m)\mid \mathit{run}^T_1(n,m)=\alpha\}$ and $V(s)=\{(n,m)\mid \mathit{run}^T_2(n,m)=s\}$. Furthermore, let $V(\mathit{pos}) =\{(n,m)\mid \mathit{run}^T_3(n,m)=1\}$, $V(\mathit{lpos})=\{(n,m)\mid \exists n'>n : \mathit{run}^T_3(n',m)=1\}$ and $V(\mathit{rpos})=\{(n,m)\mid \exists n'<n : \mathit{run}^T_3(n',m)=1\}$. (In other words, $\mathit{lpos}$ holds if you are to the left of $\mathit{pos}$ and $\mathit{rpos}$ holds if you are to the right of $\mathit{pos}$.) We claim that $\mathcal{M},(0,0)\models \varphi_T$. Since $\mathit{run}^T(0,0)=(\alpha_0,s_0,1)$, we have $\mathcal{M},(0,0)\models \mathit{pos}\wedge s_0$. This leaves the conjuncts $C\psi_\mathit{grid}, C\psi_\mathit{sane}$ and $C\psi_T$. We start by looking at $C\psi_\mathit{sane}$. The conjuncts of $\psi_\mathit{sane}$ hold under the following conditions.
\begin{itemize} \item $\mathit{position}_1$ holds if being the position of the head, being to the right of the head and being to the left of the head are mutually exclusive. \item $\mathit{position}_2$ holds if all worlds to the left of the head satisfy $\mathit{lpos}$ and all worlds to the right of the head satisfy $\mathit{rpos}$. \item $\mathit{initial\_symbol}$ holds if, at the initial state $s_0$, the entire tape contains the symbol $\alpha_0$. \item $\mathit{one\_state}$ holds if at every $(n,m)$, the system is in exactly one state. \item $\mathit{same\_state}$ holds if for every $n,m,k\in \mathbb{Z}$, the worlds $(n,m)$ and $(k,m)$ are in the same state. (So the state depends only on time, not on the tape position.) \item $\mathit{one\_symbol}$ holds if at every time $m$, every position $n$ contains exactly one symbol. \item $\mathit{void\_state}$ holds if at every time before $s_0$, the system was in the dummy state $s_\mathit{void}$. \item $\mathit{unchanged}$ holds if every symbol that is not under the read/write head remains unchanged. \end{itemize} All of these conditions are satisfied, because the valuation of $\mathcal{M}$ was derived from the run of a Turing machine. So $\mathcal{M},(0,0)\models C\psi_\mathit{sane}$. Now, consider the conjuncts of $\psi_T$. \begin{itemize} \item $\mathit{position\_change}_T$ holds if the read/write head moves in the appropriate direction, as specified by $\Delta$. \item $\mathit{state\_change}_T$ holds if the state changes as specified by $\Delta$. \item $\mathit{symbol\_change}_T$ holds if the symbol under the read/write head is written as specified by $\Delta$. \end{itemize} These conditions are also satisfied, because the valuation of $\mathcal{M}$ was derived from $\mathit{run}^T$. So $\mathcal{M},(0,0)\models C\psi_T$. It remains to show that $\mathcal{M},(0,0)\models C\psi_\mathit{grid}$; since the truth of $\psi_\mathit{grid}$ at every world suffices for this, take any $w\in W$. The world $w$ itself is the only $a$-successor of $w$. So we have $\mathcal{M},w\models \lozenge_a\top$.
Furthermore, it is impossible for any arrow update to retain the $a$-arrow from $w$ while removing the $a$-arrows from its successor, since they are the same $a$-arrow. It follows that $\mathcal{M},w\models [\updownarrow]\square_a\lozenge_a\top$. So we have shown that $\mathcal{M},w\models \mathit{ref_a}$. We have defined $R(x)=\emptyset$ for all $x\not \in D\cup \{a\}$, so we also have $\mathcal{M},w\models \mathit{no\_other}$. Now, consider $\mathit{direction}$. Take any $x\in D$. There is an $x$-arrow from $w$ to the world $w'$ to its $x$, so $\mathcal{M},w\models \lozenge_x\top$. Furthermore, since this $w'$ is the only $x$-successor of $w$, it follows that it is impossible to retain an $a$-arrow on one $x$-successor of $w$ while removing all $a$-arrows from another. So $\mathcal{M},w\models [\updownarrow](\lozenge_x\lozenge_a\top\rightarrow\square_x\lozenge_a\top)$. This holds for every $x\in D$, so $\mathcal{M},w\models\mathit{direction}$. For opposite directions $x$ and $y$, there is exactly one $x$-$y$-successor of $w$, namely $w$ itself. It follows that it is impossible for any arrow update to retain the $a$-arrow on $w$ while removing all $a$-arrows from its $x$-$y$-successor, so $\mathcal{M},w\models\mathit{inverse}$. Finally, for perpendicular directions $x$ and $y$ there is exactly one $x$-$y$-successor $w'$ of $w$, and this $w'$ is also the unique $y$-$x$-successor of $w$. So it is impossible for an arrow update to retain the $a$-arrow from the $x$-$y$-successor while removing it from some $y$-$x$-successor. So $\mathcal{M},w\models \mathit{commute}$. This completes the proof that $\mathcal{M},w\models \psi_\mathit{grid}$ for every $w\in W$, and therefore the proof that $\mathcal{M},(0,0)\models \varphi_T$. So $\varphi_T$ is satisfiable. \end{proof} \begin{lemma} \label{lemma:reduction} If $\mathcal{M},w_0\models \varphi_T$, then $\mathcal{M},w_0\models C\neg s_\mathit{end}$ if and only if $T$ is non-halting. \end{lemma} \begin{proof} Suppose $\mathcal{M},w_0\models \varphi_T$.
Then, by definition, $\mathcal{M},w_0\models C\psi_\mathit{grid}\wedge C\psi_\mathit{sane}\wedge C\psi_T\wedge \mathit{pos}\wedge s_0$. We will first show that $\mathcal{M},w_0\models C\psi_\mathit{grid}$ implies that the model $\mathcal{M}$ is grid-like. Then, we will show that the remaining subformulas imply that the model $\mathcal{M}$ represents $\mathit{run}^T$. By the $\lozenge_a\top$ conjunct of $\mathit{ref_a}$, every reachable world $w$ has at least one $a$-successor. If any $a$-successor of $w$ were $\mathit{AULC}$-distinguishable from $w$, then it would be possible for an arrow update to remove the $a$-arrow from this successor while retaining the $a$-arrow from $w$. This would contradict the $[\updownarrow]\square_a\lozenge_a\top$ conjunct of $\mathit{ref_a}$. Now, for any $x\in D$, consider the $x$-successors of $w$, of which there is at least one by the $\lozenge_x\top$ part of $\mathit{direction}$. If any of these successors were $\mathit{AULC}$-distinguishable, it would be possible to remove the $a$-arrow from one of them but not from the other. This would contradict the $[\updownarrow](\lozenge_x\lozenge_a\top\rightarrow \square_x\lozenge_a\top)$ part of $\mathit{direction}$. For any opposite directions $x$ and $y$, consider any $x$-$y$-successor $w'$ of $w$. If any $\mathit{AULC}$ formula could distinguish between $w$ and $w'$, it would be possible for an arrow update to retain the $a$-arrow from $w$ while removing the $a$-arrow from $w'$, which would contradict $\mathit{inverse}$. Finally, for any perpendicular directions $x$ and $y$, consider any $x$-$y$- and $y$-$x$-successors of $w$. If these successors were $\mathit{AULC}$-distinguishable, it would be possible for an arrow update to remove one while retaining the other, contradicting $\mathit{commute}$.
Taken together, the above facts imply that $\mathcal{M}$ contains a representation of a grid $\mathbb{Z}\times\mathbb{Z}$ where, for every $x\in D$, we have $(w,w')\in R(x)$ if and only if $w'$ is to the $x$ of $w$. Furthermore, if $w$ represents $(n,m)$ then so does every $a$-successor of $w$. Every grid point $(n,m)$ may be represented by multiple worlds in $\mathcal{M}$, but all the worlds that represent a single grid point are $\mathit{AULC}$-indistinguishable from one another. Furthermore, the formula $\mathit{no\_other}$ implies that we cannot escape this grid: every reachable world represents some grid point $(n,m)$. The remaining subformulas of $\varphi_T$ guarantee that this grid encodes $\mathit{run}^T$. First, consider $\psi_\mathit{sane}$. This formula enforces a number of general sanity constraints. \begin{itemize} \item The formula $\mathit{position}_1$ says that being the current position of the read/write head, being to the right of the position of the head and being to the left of the position of the head are mutually exclusive. \item The formula $\mathit{position}_2$ says that if you are either at the current position of the read/write head or to the right of it, then one step further to the right you are to the right of the current position. Similarly, it says that if you are either at the current position of the head or to the left of it, then one step further to the left you are to the left of the head. Together with $\mathit{position}_1$, this guarantees that the read/write head is in at most one position at any time step. (Ensuring that the head is in at least one position at every time is done later.) \item The formula $\mathit{one\_state}$ says that every world is in exactly one state. \item The formula $\mathit{same\_state}$ says that if a world is in state $s$, then the worlds to the left and right are also in state $s$. So all the worlds that represent a single time step satisfy the same state.
Together with $\mathit{one\_state}$, this implies that every time step is associated with exactly one state. \item The formula $\mathit{one\_symbol}$ says that every world satisfies exactly one symbol. \item The formula $\mathit{void\_state}$ says that at every time before the initial state $s_0$, the system is in the dummy state $s_\mathit{void}$. So the worlds satisfying $s_0$ are where the computation starts. \item The formula $\mathit{initial\_symbol}$ says that the $s_0$ worlds satisfy $\alpha_0$, so if the system is in the initial state $s_0$, then the tape is empty. \item Finally, the formula $\mathit{unchanged}$ guarantees that the symbol remains unchanged everywhere other than under the read/write head. \end{itemize} Now, consider $\psi_T$, which forces the transitions to satisfy $\Delta$. \begin{itemize} \item The formula $\mathit{position\_change}_T$ guarantees that the read/write head moves in the correct direction, depending on the current symbol under the head and the current state. \item The formula $\mathit{state\_change}_T$ guarantees that the next state is as specified by $T$. \item The formula $\mathit{symbol\_change}_T$ guarantees that the correct symbol is written to the tape, as specified by $T$. \end{itemize} The last two conjuncts of $\varphi_T$ do not contain a common knowledge operator. They state that the world $w_0$ satisfies $\mathit{pos}$ and $s_0$. So $w_0$ represents the point $(0,0)$. Because the rules of $T$ always require the read/write head to stay in the same position or to move one step to the left or right, this also implies that at every time after $w_0$ the head is in at least one position. Taken together, the above shows that the valuation on the grid represents $\mathit{run}^T$. So the grid contains an $s_\mathit{end}$ state if and only if $T$ is halting. Since every reachable state is part of the grid, it follows that $\mathcal{M},w_0\models C\neg s_\mathit{end}$ if and only if $T$ is non-halting.
\end{proof} \begin{theorem} The formula $\varphi_T\rightarrow C\neg s_\mathit{end}$ is valid if and only if $T$ is non-halting. Furthermore, $\varphi_T\rightarrow \neg C\neg s_\mathit{end}$ is valid if and only if $T$ is halting. \end{theorem} \begin{proof} Suppose that $T$ is non-halting. Then, by Lemma~\ref{lemma:reduction}, we have $\models \varphi_T\rightarrow C\neg s_\mathit{end}$. Furthermore, since $\varphi_T$ is satisfiable, this implies that $\not\models \varphi_T\rightarrow \neg C\neg s_\mathit{end}$. Suppose, on the other hand, that $T$ is halting. Then, by Lemma~\ref{lemma:reduction}, we have $\models \varphi_T\rightarrow \neg C\neg s_\mathit{end}$. Furthermore, since $\varphi_T$ is satisfiable, this implies that $\not \models \varphi_T\rightarrow C\neg s_\mathit{end}$. \end{proof} \begin{corollary} The set of valid formulas of AAULC is neither RE nor co-RE. \end{corollary} \begin{corollary} AAULC does not have a finitary axiomatization. \end{corollary} \section{Conclusion} The validity problems for the quantified update logics APAL, GAL, CAL and AAUL are known not to be co-RE. It is not currently known whether these problems are RE. This question is particularly relevant because if the validity problem of a logic is not RE, then that logic cannot have a recursive axiomatization. The logic AAULC adds a common knowledge operator to AAUL. Here, we showed that the validity problem of AAULC is not RE, using a reduction from the non-halting problem of Turing machines. This reduction uses the common knowledge operator $C$, so it does not immediately follow that the validity problem of AAUL is not RE. Still, we believe that the proof presented here can be adapted for AAUL. It is less clear whether our reduction could be adapted for APAL, GAL and CAL. Still, it seems worthwhile to attempt to modify this reduction for APAL, GAL and CAL. If such an attempt succeeds, it would show that these logics are not recursively axiomatizable.
Or if the attempt fails, then the way in which it fails might provide a hint about how to prove that the validity problems of these logics are RE. \end{document}
\begin{document} \title{The symmetrization problem for multiple orthogonal polynomials} \author{Am\'ilcar Branquinho$^{1}$ and Edmundo J. Huertas$^{1}$\thanks{ The work of the first author was partially supported by Centro de Matem\'atica da Universidade de Coimbra (CMUC), funded by the European Regional Development Fund through the program COMPETE and by the Portuguese Government through the FCT - Funda\c c\~ao para a Ci\^encia e a Tecnologia under the project PEst-C/MAT/UI0324/2011. The work of the second author was supported by Funda\c c\~ao para a Ci\^encia e a Tecnologia (FCT) of Portugal, ref. SFRH/BPD/91841/2012, and partially supported by Direcci\'on General de Investigaci\'on Cient\'ifica y T\'ecnica, Ministerio de Econom\'ia y Competitividad of Spain, grant MTM2012-36732-C03-01.} \\ $^{1}$Universidade de Coimbra, FCTUC, CMUC and Departamento de Matem\'atica\\ Apartado 3008, 3000 Coimbra, Portugal\\ [email protected], [email protected]} \date{\emph{(\today)}} \maketitle \begin{abstract} We analyze the effect of symmetrization in the theory of multiple orthogonal polynomials. For a symmetric sequence of type~II multiple orthogonal polynomials satisfying a high--term recurrence relation, we fully characterize the Weyl function associated to the corresponding block Jacobi matrix as well as the Stieltjes matrix function. Next, from an arbitrary sequence of type II multiple orthogonal polynomials with respect to a set of $d$ linear functionals, we obtain a total of $d+1$ sequences of type II multiple orthogonal polynomials, which can be used to construct a new sequence of symmetric type~II multiple orthogonal polynomials. Finally, we prove a Favard-type result for certain sequences of matrix multiple orthogonal polynomials satisfying a matrix four--term recurrence relation with matrix coefficients. \end{abstract} AMS SUBJECT CLASSIFICATION (2000): Primary 33C45; Secondary 39B42. 
\section{Introduction} \label{[Section-1]-Intro} In recent years increasing attention has been paid to the notion of multiple orthogonality. Multiple orthogonal polynomials are a generalization of orthogonal polynomials \cite{Chi78}, satisfying orthogonality conditions with respect to a number of measures, instead of just one measure. There exists a vast literature on this subject, e.g. the classical works \cite{A-JCAM-98}, \cite{ABV-TAM-03}, \cite[Ch.\thinspace 23]{I-EMA-05} and \cite{NS-TrnAMS-91} among others. A characterization through a vectorial functional equation, where the authors call them $d$\textit{--orthogonal polynomials} instead of multiple orthogonal polynomials, was given in \cite{DM-JAT-82}. Their asymptotic behavior has been studied in \cite{AKS-CA-09}, continued in \cite{DL-CA-2012}, and properties of their zeros have been analyzed in \cite{HV-JMAA-12}. Bäcklund transformations result from the symmetrization process in the usual (standard) orthogonality, and allow one to jump from one hierarchy to another in the whole Toda lattices hierarchy (see \cite{GGKM-PRL67}). That is, they allow reinterpretations inside the hierarchy. In \cite{BBF-JMAA-13}, the authors have found certain Bäcklund--type transformations (also known as Miura--type transformations) which allow one to reduce problems in a given full Kostant--Toda hierarchy to another. Also, in \cite{BB-JDEA-09}, where Peherstorfer's work \cite{P-JCAM-01} is extended to the whole Toda hierarchy, it is shown how this system can be described with the evolution of only one parameter instead of two, using exactly this kind of transformations. Other applications to Toda systems appear in \cite{AKI-CA-00}, \cite{BBF-JMAA-10}, and \cite{BBF-JMAA-11}, where the authors studied Bogoyavlenskii systems which were modeled by certain symmetric multiple orthogonal polynomials.
In this paper, we are interested in analyzing the effect of symmetrization in systems of multiple orthogonality measures. Our viewpoint sheds some new light on the subject, and we prove that the symmetrization process in multiple orthogonality is a model to define the aforementioned Bäcklund--type transformations, as happens in the scalar case with the Bäcklund transformations (see \cite{Chi78}, \cite{MS-NA-92}, \cite{P-JCAM-01}). Furthermore, we solve the so called \textit{symmetrization problem} in the theory of multiple orthogonal polynomials. We apply certain \textit{Darboux transformations}, already described in \cite{BBF-JMAA-13}, to a $(d+2)$--banded matrix, associated to a $(d+2)$--term recurrence relation satisfied by an arbitrary sequence of type~II multiple orthogonal polynomials, to obtain a total of $d+1$ sequences of not necessarily symmetric multiple orthogonal polynomials, which we use to construct a new sequence of symmetric multiple orthogonal polynomials. On the other hand, following the ideas in \cite{MS-NA-92} (and the references therein) for standard sequences of orthogonal polynomials, in \cite{MMR-JDEA-11} (see also \cite{DM-A-92}) the authors provide a cubic decomposition for sequences of polynomials, multiple orthogonal with respect to two different linear functionals. Concerning the symmetric case, in \cite{MM-MJM13} this cubic decomposition is analyzed for a 2-symmetric sequence of polynomials, which is called a \textit{diagonal cubic decomposition (CD)} by the authors. Here, we also extend this notion of diagonal decomposition to a more general case, considering symmetric sequences of polynomials multiple orthogonal with respect to $d>3$ linear functionals. The structure of the manuscript is as follows.
In Section \ref{[Section-2]-Defs} we summarize without proofs the relevant material about multiple orthogonal polynomials, and a basic background about the matrix interpretation of the type~II multi-orthogonality conditions with respect to a regular system of $d$ linear functionals $\{u^{1},\ldots ,u^{d}\}$ and diagonal multi--indices. In Section \ref{[Section-3]-FyR-Functions} we fully characterize the Weyl function $\mathcal{R}_{J}$ and the Stieltjes matrix function $\mathcal{F}$ associated to the block Jacobi matrix $J$ corresponding to a $(d+2)$--term recurrence relation satisfied by a symmetric sequence of type~II multiple orthogonal polynomials. In Section \ref{[Section-4]-SymProb}, starting from an arbitrary sequence of type~II multiple orthogonal polynomials satisfying a $(d+2)$--term recurrence relation, we state the conditions to find a total of $d+1$ sequences of type~II multiple orthogonal polynomials, in general non--symmetric, which can be used to construct a new sequence of \textit{symmetric} type~II multiple orthogonal polynomials. Moreover, we also deal with the converse problem, i.e., we propose a decomposition of a given \textit{symmetric} type~II multiple orthogonal polynomial sequence, which allows us to find a set of $d+1$ other (in general non--symmetric) sequences of type~II multiple orthogonal polynomials, satisfying in turn $(d+2)$--term recurrence relations. Finally, in Section \ref{[Section-5]-FavardTh}, we present a Favard-type result, showing that a certain $3\times 3$ matrix decomposition of a sequence of type~II multiple $2$--orthogonal polynomials satisfies a \textit{matrix four--term recurrence relation}, and is therefore type~II multiple $2$--orthogonal (in a matrix sense) with respect to a certain system of matrix measures.
\section{Definitions and matrix interpretation of multiple orthogonality} \label{[Section-2]-Defs} Let $\boldsymbol{n}=(n_{1},...,n_{d})\in \mathbb{N}^{d}$ be a multi--index with length $|\mathbf{n}|:=n_{1}+\cdots +n_{d}$ and let $\{u^{j}\}_{j=1}^{d}$ be a set of linear functionals, i.e. $u^{j}:\mathbb{P}\rightarrow \mathbb{C}$. Let $\{P_{\mathbf{n}}\}$ be a sequence of polynomials, where $\deg P_{\mathbf{n}}$ is at most $|\mathbf{n}|$. $\{P_{\mathbf{n}}\}$ is said to be \textit{type~II multiple orthogonal} with respect to the set of linear functionals $\{u^{j}\}_{j=1}^{d}$ and multi--index $\boldsymbol{n}$ if \begin{equation} u^{j}(x^{k}P_{\mathbf{n}})=0\,,\ \ k=0,1,\ldots ,n_{j}-1\,,\ \ j=1,\ldots ,d\,. \label{[Sec2]-OrtConduj} \end{equation} A multi--index $\boldsymbol{n}$ is said to be \textit{normal} for the set of linear functionals $\{u^{j}\}_{j=1}^{d}$, if the degree of $P_{\mathbf{n}}$ is exactly $|\mathbf{n}|=n$. When all the multi--indices of a given family are normal, we say that the set of linear functionals $\{u^{j}\}_{j=1}^{d}$ is \textit{regular}. In the present work, we will restrict ourselves to the so called \textit{diagonal multi--indices} $\boldsymbol{n}=(n_{1},...,n_{d})\in \mathcal{I}$, where \begin{equation*} \mathcal{I}=\{(0,0,\ldots ,0),(1,0,\ldots ,0),\ldots ,(1,1,\ldots ,1),(2,1,\ldots ,1),\ldots ,(2,2,\ldots ,2),\ldots \}. \end{equation*} Notice that there exists a one-to-one correspondence, $\mathbf{i}$, between the above set of diagonal multi--indices $\mathcal{I}\subset \mathbb{N}^{d}$ and $\mathbb{N}$, given by $\mathbf{i}(\boldsymbol{n})=|\boldsymbol{n}|=n$. Therefore, to simplify the notation, we write in the sequel $P_{\mathbf{n}}\equiv P_{|\mathbf{n}|}=P_{n}$.
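To fix ideas, the following illustration (ours, not part of the original text) lists the first few diagonal multi--indices for $d=2$ together with their images under $\mathbf{i}$:

```latex
% Illustration: diagonal multi-indices and the correspondence i for d = 2.
\begin{equation*}
\mathcal{I}=\{(0,0),\,(1,0),\,(1,1),\,(2,1),\,(2,2),\ldots\},\qquad
\mathbf{i}(1,0)=1,\quad \mathbf{i}(1,1)=2,\quad
\mathbf{i}(2,1)=3,\quad \mathbf{i}(2,2)=4.
\end{equation*}
```

For instance, $P_{(2,1)}\equiv P_{3}$ must satisfy $u^{1}(x^{k}P_{3})=0$ for $k=0,1$ and $u^{2}(P_{3})=0$, which gives three linear conditions on a polynomial of degree $3$.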
The left--multiplication of a linear functional $u:\mathbb{P}\rightarrow \mathbb{C}$ by a polynomial $p\in \mathbb{P}$ is given by the new linear functional $p\,u:\mathbb{P}\rightarrow \mathbb{C}$ such that \begin{equation*} p\,u(x^{k})=u(p(x)x^{k})\,,\ \ k\in \mathbb{N}\,. \end{equation*} Next, we briefly review a matrix interpretation of type~II multiple orthogonal polynomials with respect to a system of $d$ regular linear functionals and a family of diagonal multi--indices. Throughout this work, we will use this matrix interpretation as a useful tool to obtain some of the main results of the manuscript. For a recent and deeper account of the theory (in a more general framework, considering quasi--diagonal multi--indices) we refer the reader to \cite{BCF-NA-10}. Let us consider the family of vector polynomials \begin{equation*} \mathbb{P}^{d}=\{ \begin{bmatrix} P_{1} & \cdots & P_{d} \end{bmatrix}^{T},\ \ d\in \mathbb{N},\,P_{j}\in \mathbb{P}\}, \end{equation*} and $\mathcal{M}_{d\times d}$ the set of $d\times d$ matrices with entries in $\mathbb{C}$. Let $\{\mathcal{X}_{j}\}$ be the family of vector polynomials $\mathcal{X}_{j}\in \mathbb{P}^{d}$ defined by \begin{equation} \mathcal{X}_{j}= \begin{bmatrix} x^{jd} & \cdots & x^{(j+1)d-1} \end{bmatrix}^{T},\ \ j\in \mathbb{N}, \label{[Sec2]-vecXj} \end{equation} where $\mathcal{X}_{0}= \begin{bmatrix} 1 & \cdots & x^{d-1} \end{bmatrix}^{T}$. By means of the shift $n\rightarrow nd$, associated with $\{P_{n}\}$, we define the sequence of vector polynomials $\{\mathcal{P}_{n}\}$, with \begin{equation} \mathcal{P}_{n}= \begin{bmatrix} P_{nd}(x) & \cdots & P_{(n+1)d-1}(x) \end{bmatrix}^{T},\ \ n\in \mathbb{N},\,\mathcal{P}_{n}\in \mathbb{P}^{d}. \label{[Sec2]-vecPn} \end{equation} Let $u^{j}:\mathbb{P}\rightarrow \mathbb{C}$ with $j=1,\ldots ,d$ be a system of linear functionals as in (\ref{[Sec2]-OrtConduj}).
From now on, we define the \textit{vector of functionals} $\mathcal{U}= \begin{bmatrix} u^{1} & \cdots & u^{d} \end{bmatrix}^{T}$, acting from $\mathbb{P}^{d}$ into $\mathcal{M}_{d\times d}$, by \begin{equation*} \mathcal{U}(\mathcal{P})=\left( \mathcal{U}\text{\textperiodcentered }\mathcal{P}^{T}\right) ^{T}= \begin{bmatrix} u^{1}(P_{1}) & \cdots & u^{d}(P_{1}) \\ \vdots & \ddots & \vdots \\ u^{1}(P_{d}) & \cdots & u^{d}(P_{d}) \end{bmatrix}. \end{equation*} Let \begin{equation*} A_{\ell }(x)=\sum_{k=0}^{\ell }A_{k}^{\ell }\,x^{k}, \end{equation*} be a matrix polynomial of degree $\ell $, where $A_{k}^{\ell }\in \mathcal{M}_{d\times d}$, and $\mathcal{U}$ a vector of functionals. We define the new vector of functionals called \textit{left multiplication of }$\mathcal{U}$\textit{\ by a matrix polynomial }$A_{\ell }$, and we denote it by $A_{\ell }\mathcal{U}$, as the map of~$\mathbb{P}^{d}$ into $\mathcal{M}_{d\times d}$ described by \begin{equation} \left( A_{\ell }\mathcal{U}\right) (\mathcal{P})=\sum_{k=0}^{\ell }\left( x^{k}\mathcal{U}\right) \left( \mathcal{P}\right) (A_{k}^{\ell })^{T}. \label{[Sec2]-Def-LeftMultp} \end{equation} From (\ref{[Sec2]-Def-LeftMultp}) we introduce the notion of \textit{moments of order }$j\in \mathbb{N}$, associated with the vector of functionals $x^{k}\mathcal{U}$, which are in general the following $d\times d$ matrices \begin{equation*} \mathcal{U}_{j}^{k}=\left( x^{k}\mathcal{U}\right) (\mathcal{X}_{j})= \begin{bmatrix} u^{1}(x^{jd+k}) & \cdots & u^{d}(x^{jd+k}) \\ \vdots & \ddots & \vdots \\ u^{1}(x^{(j+1)d-1+k}) & \cdots & u^{d}(x^{(j+1)d-1+k}) \end{bmatrix}, \end{equation*} with $j,k\in \mathbb{N}$, and from these moments, we construct the \textit{block Hankel matrix of moments} \begin{equation*} \mathcal{H}_{n}= \begin{bmatrix} \mathcal{U}_{0}^{0} & \cdots & \mathcal{U}_{0}^{n} \\ \vdots & \ddots & \vdots \\ \mathcal{U}_{n}^{0} & \cdots & \mathcal{U}_{n}^{n} \end{bmatrix}\,,\ \ n\in \mathbb{N}.
\end{equation*} We say that the vector of functionals $\mathcal{U}$ is \textit{regular} if the principal minors of the above matrix are non-zero for every $n\in \mathbb{N}$. Having in mind (\ref{[Sec2]-vecXj}) it is obvious that $\mathcal{X}_{j}=(x^{d})^{j}\mathcal{X}_{0}$, $j\in \mathbb{N}$. Thus, from (\ref{[Sec2]-vecPn}) we can express $\mathcal{P}_{n}(x)$ in the alternative way \begin{equation} \mathcal{P}_{n}(x)=\sum_{j=0}^{n}P_{j}^{n}\mathcal{X}_{j}\,,\ \ P_{j}^{n}\in \mathcal{M}_{d\times d}\,, \label{[Sec2]-MatrForm-I} \end{equation} where the matrix coefficients $P_{j}^{n},\ j=0,1,\ldots ,n$ are uniquely determined. Thus, we also have \begin{equation} \mathcal{P}_{n}(x)=W_{n}(x^{d})\mathcal{X}_{0}\,, \label{[Sec2]-MatForm-II} \end{equation} where $W_{n}$ is a matrix polynomial (i.e., $W_{n}$ is a $d\times d$ matrix whose entries are polynomials) of degree $n$ and dimension $d$, given by \begin{equation} W_{n}(x)=\sum_{j=0}^{n}P_{j}^{n}x^{j}\,,\ \ P_{j}^{n}\in \mathcal{M}_{d\times d}. \label{[Sec2]-MatForm-II-Vn} \end{equation} Notice that the matrices $P_{j}^{n}\in \mathcal{M}_{d\times d}$ in (\ref{[Sec2]-MatForm-II-Vn}) are the same as in (\ref{[Sec2]-MatrForm-I}). Within this context, we can now describe the matrix interpretation of multiple orthogonality for diagonal multi--indices. Let $\{\mathcal{P}_{n}\}$ be a sequence of vector polynomials with polynomial entries as in (\ref{[Sec2]-vecPn}), and $\mathcal{U}$ a vector of functionals as described above. $\{\mathcal{P}_{n}\}$ is said to be a \textit{type~II vector multiple orthogonal polynomial sequence} with respect to the vector of functionals $\mathcal{U}$, and a set of diagonal multi--indices, if \begin{equation} \left.
\begin{array}{rll} i) & (x^{k}\mathcal{U})(\mathcal{P}_{n})=0_{d\times d}\,, & k=0,1,\ldots ,n-1\,, \\ ii) & (x^{n}\mathcal{U})(\mathcal{P}_{n})=\Delta _{n}\,, & \end{array} \right\} \label{[Sec2]-OrthCond-U} \end{equation} where $\Delta _{n}$ is a regular upper triangular $d\times d$ matrix (see \cite[Th. 3]{BCF-NA-10} considering diagonal multi--indices). Next, we introduce a few aspects of the duality theory, which will be useful in the sequel. We denote by $\mathbb{P}^{\ast }$ the dual space of $\mathbb{P}$, i.e. the linear space of linear functionals defined on $\mathbb{P}$ over $\mathbb{C}$. Let $\{P_{n}\}$ be a sequence of monic polynomials. We call $\{\ell _{n}\}$, $\ell _{n}\in \mathbb{P}^{\ast }$, the \textit{dual sequence} of $\{P_{n}\}$ if $\ell _{i}(P_{j})=\delta _{i,j}$, $i,j\in \mathbb{N}$ holds. Given a sequence of linear functionals $\{\ell _{n}\}$, $\ell _{n}\in \mathbb{P}^{\ast }$, by means of the shift $n\rightarrow nd$, the vector sequence of linear functionals $\{\mathcal{L}_{n}\}$, with \begin{equation} \mathcal{L}_{n}= \begin{bmatrix} \ell _{nd} & \cdots & \ell _{(n+1)d-1} \end{bmatrix}^{T},\ \ n\in \mathbb{N}, \label{[Sec2]-vecLinFrm} \end{equation} is said to be the \textit{vector sequence of linear functionals} associated with $\{\ell _{n}\}$.
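Returning to the block Hankel matrix of moments $\mathcal{H}_{n}$ and the regularity condition above, both can be tested exactly on a concrete example. The following sketch is illustrative only: it assumes the pair $u^{1},u^{2}$ given by Lebesgue measure on $[0,1]$ and on $[-1,0]$ (an Angelesco-type choice), with moments $1/(k+1)$ and $(-1)^{k}/(k+1)$, and checks that $\mathcal{H}_{0}$ and $\mathcal{H}_{1}$ are non-singular for $d=2$:

```python
from fractions import Fraction

# Illustrative pair: u^1 = Lebesgue on [0,1], u^2 = Lebesgue on [-1,0]
m1 = lambda k: Fraction(1, k + 1)
m2 = lambda k: Fraction((-1) ** k, k + 1)
mom = [m1, m2]
d = 2

def hankel(n):
    """Block Hankel matrix H_n for d = 2: writing row = j*d + i and
    col = k*d + l, the entry is u^{l+1}(x^{j*d + i + k}) = mom[l](row + k)."""
    N = (n + 1) * d
    return [[mom[col % d](row + col // d) for col in range(N)] for row in range(N)]

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    n, sign, prod = len(M), 1, Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            sign = -sign
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
        prod *= M[i][i]
    return sign * prod

# both block Hankel determinants are non-zero for this pair
assert det(hankel(0)) == Fraction(-1)
assert det(hankel(1)) == Fraction(-1, 180)
```

Exact rational arithmetic matters here: these determinants are tiny, and floating point would make the non-vanishing test unreliable for larger $n$.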
It is very well known (see \cite{DM-JAT-82}) that a given sequence of type~II polynomials $\{P_{n}\}$, simultaneously orthogonal with respect to $d$ linear functionals, or simply $d$\textit{--orthogonal} polynomials, satisfies the following $(d+2)$--term recurrence relation \begin{equation} xP_{n+d}(x)=P_{n+d+1}(x)+\beta _{n+d}P_{n+d}(x)+\sum_{\nu =0}^{d-1}\gamma _{n+d-\nu }^{d-1-\nu }P_{n+d-1-\nu }(x)\,, \label{[Sec2]-HTRR-MultOP} \end{equation} $\gamma _{n+1}^{0}\neq 0$ for $n\geq 0$, with the initial conditions $P_{0}(x)=1$, $P_{1}(x)=x-\beta _{0}$, and \begin{equation*} P_{n}(x)=(x-\beta _{n-1})P_{n-1}(x)-\sum_{\nu =0}^{n-2}\gamma _{n-1-\nu }^{d-1-\nu }P_{n-2-\nu }(x)\,,\ \ 2\leq n\leq d\,. \end{equation*} E.g., if $d=2$, the sequence of monic type~II multiple orthogonal polynomials $\{P_{n}\}$ with respect to the regular system of functionals $\{u^{1},u^{2}\}$ and normal multi--indices satisfies, for every $n\geq 0$, the following four term recurrence relation (see \cite[Lemma 1-a]{BCF-NA-10}, \cite{K-JCAM-95}) \begin{equation} xP_{n+2}(x)=P_{n+3}(x)+\beta _{n+2}P_{n+2}(x)+\gamma _{n+2}^{1}P_{n+1}(x)+\gamma _{n+1}^{0}P_{n}(x)\,, \label{[Sec2]-4TRR-P} \end{equation} where $\beta _{n+2},\gamma _{n+2}^{1},\gamma _{n+1}^{0}\in \mathbb{C}$, $\gamma _{n+1}^{0}\neq 0$, $P_{0}(x)=1$, $P_{1}(x)=x-\beta _{0}$ and $P_{2}(x)=(x-\beta _{1})P_{1}(x)-\gamma _{1}^{1}P_{0}(x)$. Following \cite[Def. 4.1]{DM-JAT-82}, a monic system of polynomials $\{S_{n}\}$ is said to be $d$\textit{--symmetric} when it verifies \begin{equation} S_{n}(\xi _{k}x)=\xi _{k}^{n}S_{n}(x)\,,\ \ n\geq 0\,, \label{[Sec2]-d-symmetric-Sn} \end{equation} where $\xi _{k}=\exp \left( {2k\pi i}/({d+1})\right) $, $k=1,\ldots ,d$, and $\xi _{k}^{d+1}=1$. Notice that, if $d=1$, then $\xi _{k}=-1$ and therefore $S_{n}(-x)=(-1)^{n}S_{n}(x)$ (see~\cite{Chi78}). We also assume (see \cite[ Def.
4.2.]{DM-JAT-82}) that the vector of linear functionals $\mathcal{L}_{0}= \begin{bmatrix} \ell _{0} & \cdots & \ell _{d-1} \end{bmatrix}^{T}$ is said to be $d$\textit{--symmetric} when the moments of its entries satisfy, for every $n\geq 0$, \begin{equation} \ell _{\nu }(x^{(d+1)n+\mu })=0\,,\ \ \nu =0,1,\ldots ,d-1\,,\ \ \mu =0,1,\ldots ,d\,,\ \ \nu \neq \mu \,. \label{[Sec2]-momL1} \end{equation} Observe that if $d=1$, this condition leads to the well known fact $\ell _{0}(x^{2n+1})=0$, i.e., all the odd moments of a symmetric moment functional are zero (see~\cite[Def. 4.1, p.20]{Chi78}). Under the above assumptions, we have the following \begin{teo}[{cf. \protect\cite[Th. 4.1]{DM-JAT-82}}] \label{[SEC2]-TH41-DMJAT82} For every sequence of monic polynomials $\{S_{n}\}$, $d$--orthogonal with respect to the \textit{vector of linear functionals }$\mathcal{L}_{0}= \begin{bmatrix} \ell _{0} & \cdots & \ell _{d-1} \end{bmatrix}^{T}$, the following statements are equivalent: \begin{itemize} \item[$(a)$] The vector of linear functionals $\mathcal{L}_{0}$ is $d$--symmetric. \item[$(b)$] The sequence $\{S_{n}\}$ is $d$--symmetric. \item[$(c)$] The sequence $\{S_{n}\}$ satisfies \begin{equation} xS_{n+d}(x)=S_{n+d+1}(x)+\gamma _{n+1}S_{n}(x)\,,\ \ n\geq 0, \label{[Sec2]-HTRR-Symm} \end{equation} with $S_{n}(x)=x^{n}$ for $0\leq n\leq d$. \end{itemize} \end{teo} Notice that (\ref{[Sec2]-HTRR-Symm}) is a particular case of the $(d+2)$--term recurrence relation (\ref{[Sec2]-HTRR-MultOP}). Continuing the example above for $d=2$, this directly implies that the sequence of polynomials $\{S_{n}\}$ satisfies the particular case of~(\ref{[Sec2]-HTRR-MultOP}) with $S_{0}(x)=1$, $S_{1}(x)=x$, $S_{2}(x)=x^{2}$ and \begin{equation*} xS_{n+2}(x)=S_{n+3}(x)+\gamma _{n+1}S_{n}(x)\,,\ \ n\geq 0.
\end{equation*} Notice that the coefficients $\beta _{n+2}$ and $\gamma _{n+2}^{1}$ of the polynomials $S_{n+2}$ and $S_{n+1}$, respectively, on the right hand side of (\ref{[Sec2]-4TRR-P}) are zero. On the other hand, the $(d+2)$--term recurrence relation (\ref{[Sec2]-HTRR-Symm}) can be rewritten in terms of vector polynomials (\ref{[Sec2]-vecPn}), and then we obtain what will be referred to as the \textit{symmetric type~II vector multiple orthogonal polynomial sequence} $\mathcal{S}_{n}= \begin{bmatrix} S_{nd} & \cdots & S_{(n+1)d-1} \end{bmatrix}^{T}$. For $n\rightarrow dn+j$, $j=0,1,\ldots ,d-1$ and $n\in \mathbb{N}$, we have the following matrix three term recurrence relation \begin{equation} x\mathcal{S}_{n}=A\mathcal{S}_{n+1}+B\mathcal{S}_{n}+C_{n}\mathcal{S}_{n-1}\,,\ \ n=0,1,\ldots \label{[Sec2]-3TRR-S} \end{equation} with $\mathcal{S}_{-1}= \begin{bmatrix} 0 & \cdots & 0 \end{bmatrix}^{T}$, $\mathcal{S}_{0}= \begin{bmatrix} S_{0} & \cdots & S_{d-1} \end{bmatrix}^{T}$, and matrix coefficients $A$, $B$, $C_{n}\in \mathcal{M}_{d\times d}$ given by \begin{eqnarray} A &=& \begin{bmatrix} 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \\ 1 & 0 & \cdots & 0 \end{bmatrix}\,,\ B= \begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ & & & 0 \end{bmatrix},\ \ \mbox{ and } \label{[Sec2]-3TTR-MCoefs} \\ C_{n} &=&\operatorname{diag}\,[\gamma _{(n-1)d+1},\,\gamma _{(n-1)d+2},\ldots ,\gamma _{nd}]. \notag \end{eqnarray} Note that, in this case, one has \begin{equation*} \mathcal{S}_{0}=\mathcal{X}_{0}= \begin{bmatrix} 1 & x & \cdots & x^{d-1} \end{bmatrix}^{T}. \end{equation*} Since $\{S_{n}\}$ satisfies (\ref{[Sec2]-HTRR-Symm}), it is clear that this $d$--symmetric type~II multiple polynomial sequence will be orthogonal with respect to a certain system of $d$ linear functionals, say $\{v^{1},\ldots ,v^{d}\}$.
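The block relation (\ref{[Sec2]-3TRR-S}) can be checked on a small example. The following sketch (pure Python, polynomials as coefficient lists, with the illustrative choice $\gamma_{k}=k$) generates a $2$--symmetric sequence from (\ref{[Sec2]-HTRR-Symm}) and verifies, entrywise, both the $d$--symmetry of $\{S_{n}\}$ and the block three term recurrence, whose diagonal blocks carry $\gamma_{2n-1},\gamma_{2n}$:

```python
d = 2
gamma = {k: k for k in range(1, 30)}           # illustrative choice gamma_k = k

# polynomials as coefficient lists: p[k] is the coefficient of x^k
def shift(p):                                  # multiplication by x
    return [0] + p

def add_scaled(p, c, q):                       # p + c*q, padded to equal length
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + c * b for a, b in zip(p, q)]

# S_n = x^n for 0 <= n <= d, then S_{n+d+1} = x S_{n+d} - gamma_{n+1} S_n
S = [[0] * n + [1] for n in range(d + 1)]
for n in range(12):
    S.append(add_scaled(shift(S[n + d]), -gamma[n + 1], S[n]))

# d-symmetry: S_n only involves powers x^k with k = n (mod d+1)
for n, p in enumerate(S):
    assert all(c == 0 for k, c in enumerate(p) if k % (d + 1) != n % (d + 1))

# block three-term recurrence, entrywise:
#   x S_{2n}   = S_{2n+1} + gamma_{2n-1} S_{2n-2}
#   x S_{2n+1} = S_{2n+2} + gamma_{2n}   S_{2n-1}
for n in range(1, 5):
    assert shift(S[2 * n]) == add_scaled(S[2 * n + 1], gamma[2 * n - 1], S[2 * n - 2])
    assert shift(S[2 * n + 1]) == add_scaled(S[2 * n + 2], gamma[2 * n], S[2 * n - 1])
```

For instance, with these $\gamma$'s one obtains $S_{3}(x)=x^{3}-1$ and $S_{4}(x)=x^{4}-3x$, whose exponents indeed reduce to $n$ modulo $3$.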
Hence, according to the matrix interpretation of multiple orthogonality, the corresponding type II vector multiple polynomial sequence $\{\mathcal{S}_{n}\}$ will be orthogonal with respect to a \textit{symmetric vector of functionals }$\mathcal{V}= \begin{bmatrix} v^{1} & \cdots & v^{d} \end{bmatrix}^{T}$. The corresponding matrix orthogonality conditions for $\{\mathcal{S}_{n}\}$ and $\mathcal{V}$ are described in (\ref{[Sec2]-OrthCond-U}). One of the main goals of this manuscript is to analyze symmetric sequences of type~II vector multiple polynomials, orthogonal with respect to a symmetric vector of functionals. The remainder of this section will be devoted to the proof of one of our main results concerning the moments of the $d$ functional entries of such a symmetric vector of functionals $\mathcal{V}$. The following lemma states that, under certain conditions, the moments of each functional entry in $\mathcal{V}$ can be given in terms of the moments of another functional entry in the same $\mathcal{V}$. \begin{lema} \label{[SEC2]-TH2-MOM-v1v2} If $\mathcal{V}= \begin{bmatrix} v^{1} & \cdots & v^{d} \end{bmatrix}^{T}$ is a symmetric vector of functionals, the moments of each functional entry $v^{j}$, $j=1,2,\ldots ,d$ in $\mathcal{V}$, can be expressed for all $n\geq 0$ as \begin{itemize} \item[$(i)$] If $\mu =0,1,\ldots ,j-2$ \begin{equation} v^{j}(x^{(d+1)n+\mu })=\frac{v_{j,\mu }}{v_{\mu +1,\mu }}v^{\mu +1}(x^{(d+1)n+\mu }), \label{[Sec2]-momVsym} \end{equation} where $v_{k,l}=v^{k}(S_{l})$. \item[$(ii)$] If $\mu =j-1$, the value $v^{j}(x^{(d+1)n+\mu })$ depends on $v^{j}$, and it is different from zero. \item[$(iii)$] If $\mu =j,j+1,\ldots ,d$ \begin{equation*} v^{j}(x^{(d+1)n+\mu })=0.
\end{equation*} \end{itemize} \end{lema} \begin{proof} In the matrix framework of multiple orthogonality, the type~II vector polynomials $\mathcal{S}_{n}$ are multiple orthogonal with respect to the symmetric vector of functionals $\mathcal{V}:\mathbb{P}^{d}\rightarrow \mathcal{M}_{d\times d}$, with $\mathcal{V}= \begin{bmatrix} v^{1} & \cdots & v^{d} \end{bmatrix}^{T}$. Multiplying (\ref{[Sec2]-3TRR-S}) by~$x^{n-1}$ and using the linearity of $\mathcal{V}$, we get \begin{equation*} \mathcal{V}\left( x^{n}\mathcal{S}_{n}\right) =A\mathcal{V}\left( x^{n-1}\mathcal{S}_{n+1}\right) +B\mathcal{V}\left( x^{n-1}\mathcal{S}_{n}\right) +C_{n}\mathcal{V}\left( x^{n-1}\mathcal{S}_{n-1}\right) \,,\ \ n=0,1,\ldots . \end{equation*} By the orthogonality conditions (\ref{[Sec2]-OrthCond-U}) for $\mathcal{V}$, we have \begin{equation*} A\mathcal{V}\left( x^{n-1}\mathcal{S}_{n+1}\right) =B\mathcal{V}\left( x^{n-1}\mathcal{S}_{n}\right) =0_{d\times d}\,,\ \ n=0,1,\ldots \,, \end{equation*} and iterating the remaining expression \begin{equation*} \mathcal{V}\left( x^{n}\mathcal{S}_{n}\right) =C_{n}\mathcal{V}\left( x^{n-1}\mathcal{S}_{n-1}\right) \,,\ \ n=0,1,\ldots \end{equation*} we obtain \begin{equation*} \mathcal{V}\left( x^{n}\mathcal{S}_{n}\right) =C_{n}C_{n-1}\cdots C_{1}\mathcal{V}\left( \mathcal{S}_{0}\right) \,,\ \ n=0,1,\ldots . \end{equation*} The above matrix $\mathcal{V}\left( \mathcal{S}_{0}\right) $ is given by \begin{equation*} \mathcal{V}\left( \mathcal{S}_{0}\right) =\mathcal{V}_{0}^{0}= \begin{bmatrix} v^{1}(S_{0}) & \cdots & v^{d}(S_{0}) \\ \vdots & \ddots & \vdots \\ v^{1}(S_{d-1}) & \cdots & v^{d}(S_{d-1}) \end{bmatrix}. \end{equation*} To simplify the notation, in the sequel $v_{i,j-1}$ denotes $v^{i}(S_{j-1})$.
Notice that (\ref{[Sec2]-OrthCond-U}) leads to the fact that the above matrix is an upper triangular matrix, which in turn means that $v_{i,j-1}=0$ for $i<j$, $i,j=1,\ldots ,d$, that is \begin{equation} \mathcal{V}_{0}^{0}= \begin{bmatrix} v_{1,0} & v_{2,0} & \cdots & v_{d,0} \\ & v_{2,1} & \cdots & v_{d,1} \\ & & \ddots & \vdots \\ & & & v_{d,d-1} \end{bmatrix}. \label{[Sec2]-V00} \end{equation} Let $\mathcal{L}_{0}$ be a $d$--symmetric vector of linear functionals as in Theorem \ref{[SEC2]-TH41-DMJAT82}. We can express $\mathcal{V}$ in terms of $\mathcal{L}_{0}$ as $\mathcal{V}=G_{0}\mathcal{L}_{0}$. Thus, we have \begin{equation*} (G_{0}^{-1}\,\mathcal{V})(\mathcal{S}_{0})=\mathcal{L}_{0}(\mathcal{S}_{0})=I_{d}\,. \end{equation*} From (\ref{[Sec2]-Def-LeftMultp}) we have \begin{equation*} \left( G_{0}^{-1}\,\mathcal{V}\right) (\mathcal{S}_{0})=\mathcal{V}\left( \mathcal{S}_{0}\right) (G_{0}^{-1})^{T}=\mathcal{V}_{0}^{0}(G_{0}^{-1})^{T}=I_{d}\,. \end{equation*} Therefore, taking into account (\ref{[Sec2]-V00}), we conclude \begin{equation} G_{0}=(\mathcal{V}_{0}^{0})^{T}= \begin{bmatrix} v_{1,0} & & & \\ v_{2,0} & v_{2,1} & & \\ \vdots & \vdots & \ddots & \\ v_{d,0} & v_{d,1} & \cdots & v_{d,d-1} \end{bmatrix}. \label{[Sec2]-G0} \end{equation} Observe that the matrix $(\mathcal{V}_{0}^{0})^{T}$ is lower triangular, and every entry on its main diagonal is different from zero, so $G_{0}^{-1}$ always exists and is a lower triangular matrix. Since $\mathcal{V}=G_{0}\mathcal{L}_{0}$, we finally obtain the expressions \begin{equation} \begin{array}{l} v^{1}=v_{1,0}\ell _{0}, \\ v^{2}=v_{2,0}\ell _{0}+v_{2,1}\ell _{1}, \\ v^{3}=v_{3,0}\ell _{0}+v_{3,1}\ell _{1}+v_{3,2}\ell _{2}, \\ \cdots \\ v^{d}=v_{d,0}\ell _{0}+v_{d,1}\ell _{1}+\cdots +v_{d,d-1}\ell _{d-1}, \end{array} \label{[Sec2]-RelEntreFors} \end{equation} between the entries of $\mathcal{L}_{0}$ and $\mathcal{V}$.
Next, from (\ref{[Sec2]-RelEntreFors}) and (\ref{[Sec2]-momL1}), together with the crucial fact that every value in the main diagonal of $\mathcal{V}_{0}^{0}$ is different from zero, it is a simple matter to check that the three statements of the lemma follow. We give the details only for the functionals $v^{1}$ and $v^{2}$. The other cases can be deduced in a similar way. From (\ref{[Sec2]-RelEntreFors}) we get $v^{1}(x^{(d+1)n+\mu })=v_{1,0}\cdot \ell _{0}(x^{(d+1)n+\mu })$. Then, from (\ref{[Sec2]-momL1}) we see that for every $\mu \neq 0$ we have $v^{1}(x^{(d+1)n+\mu })=0$ (statement $(iii)$). If $\mu =0$, we have $v^{1}(x^{(d+1)n})=v_{1,0}\cdot \ell _{0}(x^{(d+1)n})\neq 0$ (statement $(ii)$). Next, from (\ref{[Sec2]-RelEntreFors}) we get $v^{2}(x^{(d+1)n+\mu })=v_{2,0}\cdot \ell _{0}(x^{(d+1)n+\mu })+v_{2,1}\cdot \ell _{1}(x^{(d+1)n+\mu })$. Then, from (\ref{[Sec2]-momL1}) we see that for every $\mu \neq 0,1$, we have $v^{2}(x^{(d+1)n+\mu })=0$ (statement $(iii)$). If $\mu =0$ we have \begin{equation*} v^{2}(x^{(d+1)n})=\frac{v_{2,0}}{v_{1,0}}v^{1}(x^{(d+1)n})\,\,\,\text{(statement }(i)\text{).} \end{equation*} If $\mu =1$ then $v^{2}(x^{(d+1)n+1})=v_{2,1}\cdot \ell _{1}(x^{(d+1)n+1})\neq 0$ (statement $(ii)$). Thus, the lemma follows. \end{proof} \section{Representation of the Stieltjes and Weyl functions} \label{[Section-3]-FyR-Functions} Let $\mathcal{U}= \begin{bmatrix} u^{1} & \cdots & u^{d} \end{bmatrix}^{T}$ be a vector of functionals. We define the \textit{Stieltjes matrix function} $\mathcal{F}$ associated to $\mathcal{U}$ (or \textit{matrix generating function} associated to $\mathcal{U}$) by (see \cite{BCF-NA-10}) \begin{equation*} \mathcal{F}(z)=\sum_{n=0}^{\infty }\frac{\left( x^{n}\mathcal{U}\right) (\mathcal{X}_{0}(x))}{z^{n+1}}.
\end{equation*} In this section we find the relation between the Stieltjes matrix function $\mathcal{F}$, associated to a certain $d$--symmetric vector of functionals $\mathcal{V}$, and a certain function associated to the corresponding block Jacobi matrix $J$. Here we deal with $d$--symmetric sequences of type~II multiple orthogonal polynomials $\{S_{n}\}$, and hence $J$ is a $(d+2)$--banded matrix with only two extreme non-zero diagonals, which is the block--matrix representation of the three--term recurrence relation with $d\times d$ matrix coefficients, satisfied by the vector sequence of polynomials $\{\mathcal{S}_{n}\}$ (associated to $\{S_{n}\}$), orthogonal with respect to $\mathcal{V}$. Thus, the shape of the Jacobi matrix $J$, associated with the $(d+2)$--term recurrence relation (\ref{[Sec2]-HTRR-Symm}) satisfied by a $d$--symmetric sequence of type~II multiple orthogonal polynomials $\{S_{n}\}$, is \begin{equation} J= \begin{bmatrix} 0 & 1 & & & & & \\ 0 & 0 & 1 & & & & \\ \vdots & & \ddots & \ddots & & & \\ 0 & & & 0 & 1 & & \\ \gamma _{1} & 0 & & & 0 & 1 & \\ & \gamma _{2} & 0 & & & 0 & \ddots \\ & & \ddots & \ddots & & & \ddots \end{bmatrix}. \label{[Sec3]-JacMatxJ} \end{equation} We can rewrite $J$ as the block tridiagonal matrix \begin{equation*} J= \begin{bmatrix} B & A & & & \\ C_{1} & B & A & & \\ & C_{2} & B & A & \\ & & \ddots & \ddots & \ddots \end{bmatrix} \end{equation*} associated to a three term recurrence relation with matrix coefficients, satisfied by the sequence of type~II vector multiple orthogonal polynomials $\{\mathcal{S}_{n}\}$ associated to $\{S_{n}\}$. Here, every block matrix $A$, $B$ and $C_{n}$ has size $d\times d$, and they are given in (\ref{[Sec2]-3TTR-MCoefs}).
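As a sanity check, the banded matrix (\ref{[Sec3]-JacMatxJ}) and its block tridiagonal form can be compared directly on a finite truncation. The sketch below is illustrative (values $\gamma_{k}=k$, $d=3$, four block rows) and verifies that the $d\times d$ blocks of $J$ are exactly $A$, $B$ and $C_{n}$:

```python
d, Nb = 3, 4                                 # d functionals, Nb x Nb blocks
gamma = list(range(1, d * Nb + 1))           # illustrative gamma_k = k (0-based list)

n = d * Nb
J = [[0] * n for _ in range(n)]
for i in range(n - 1):
    J[i][i + 1] = 1                          # superdiagonal of ones
for i in range(n - d):
    J[i + d][i] = gamma[i]                   # gamma_{i+1} on the d-th subdiagonal

def block(M, i, j):                          # d x d block in block position (i, j)
    return [row[j * d:(j + 1) * d] for row in M[i * d:(i + 1) * d]]

A = [[1 if (r, c) == (d - 1, 0) else 0 for c in range(d)] for r in range(d)]
B = [[1 if c == r + 1 else 0 for c in range(d)] for r in range(d)]
Z = [[0] * d for _ in range(d)]

for i in range(Nb):
    for j in range(Nb):
        blk = block(J, i, j)
        if j == i + 1:
            assert blk == A                  # upper block diagonal
        elif j == i:
            assert blk == B                  # main block diagonal
        elif j == i - 1:                     # diagonal block of gamma's
            assert blk == [[gamma[(i - 1) * d + r] if r == c else 0
                            for c in range(d)] for r in range(d)]
        else:
            assert blk == Z
```

The check makes visible how the single $\gamma$-diagonal of $J$ splits into the diagonal blocks $C_{1},C_{2},\ldots$ of the block tridiagonal form.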
When $J$ is a bounded operator, it is possible to define the \textit{resolvent operator} by \begin{equation*} (zI-J)^{-1}=\sum_{n=0}^{\infty }\frac{J^{n}}{z^{n+1}},\,\,\,|z|>||J||, \end{equation*} (see \cite{B-JAT-99}) and we can put in correspondence with it the following analytic matrix function, known as the \textit{Weyl function associated with }$J$ \begin{equation} \mathcal{R}_{J}(z)=\sum_{n=0}^{\infty }\frac{\mathbf{e}_{0}^{T}\,J^{n}\,\mathbf{e}_{0}}{z^{n+1}},\,\,\,|z|>||J||, \label{[Sec3]-R-Weyl} \end{equation} where $\mathbf{e}_{0}= \begin{bmatrix} I_{d} & 0_{d\times d} & \cdots \end{bmatrix}^{T}$. If we denote by $M_{ij}$ the $d\times d$ block matrices of a semi-infinite matrix $M$, formed by the entries of rows $d(i-1)+1,d(i-1)+2,\ldots ,di$, and columns $d(j-1)+1,d(j-1)+2,\ldots ,dj$, the matrix $J^{n}$ can be written as the semi-infinite block matrix \begin{equation*} J^{n}= \begin{bmatrix} J_{11}^{n} & J_{12}^{n} & \cdots \\ J_{21}^{n} & J_{22}^{n} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}. \end{equation*} We can now formulate our first important result in this section. For more details we refer the reader to \cite[Sec. 1.2]{BBF-JMAA-10} and \cite{AKI-CA-00}. Let $\{\mathcal{S}_{n}\}$ be a symmetric type~II vector multiple polynomial sequence orthogonal with respect to the $d$--symmetric vector of functionals $\mathcal{V}$. Following \cite[Th. 7]{BCF-NA-10}, the matrix generating function associated to $\mathcal{V}$, and the Weyl function associated with $J$, the block Jacobi matrix corresponding to $\{\mathcal{S}_{n}\}$, can be put in correspondence by means of the matrix expression \begin{equation} \mathcal{F}(z)=\mathcal{R}_{J}(z)\mathcal{V}(\mathcal{X}_{0})\,, \label{[Sec3]-FyRV00} \end{equation} where, for the $d$--symmetric case, $\mathcal{V}(\mathcal{X}_{0})=\mathcal{V}(\mathcal{S}_{0})=\mathcal{V}_{0}^{0}$, which is explicitly given in (\ref{[Sec2]-V00}).
First we study the case $d=2$, and next we consider more general situations with $d>2$ functional entries in $\mathcal{V}$. From Lemma \ref{[SEC2]-TH2-MOM-v1v2}, we obtain the entries for the representation of the Stieltjes matrix function $\mathcal{F}(z)$, associated with~$\mathcal{V}= \begin{bmatrix} v^{1} & v^{2} \end{bmatrix}^{T}$, as \begin{multline} \mathcal{F}(z)=\sum_{n=0}^{\infty }{ \begin{bmatrix} v^{1}(x^{3n}) & \frac{v_{2,0}\cdot v^{1}(x^{3n})}{v_{1,0}} \\ 0 & {v^{2}(x^{3n+1})} \end{bmatrix} }/{z^{3n+1}}+\sum_{n=0}^{\infty }{ \begin{bmatrix} {0} & {v^{2}(x^{3n+1})} \\ 0 & {0} \end{bmatrix} }/{z^{3n+2}} \label{[Sec2]-DescFen3} \\ +\sum_{n=0}^{\infty }{ \begin{bmatrix} {0} & {0} \\ v^{1}(x^{3n+3}) & \frac{v_{2,0}\cdot v^{1}(x^{3n+3})}{v_{1,0}} \end{bmatrix} }/{z^{3n+3}}\,. \end{multline} Notice that we have $\mathcal{F}(z)=\mathcal{F}_{1}(z)+\mathcal{F}_{2}(z)+\mathcal{F}_{3}(z)$. The following theorem gives the corresponding explicit expression of the Weyl function in this particular case. \begin{teo} \label{[SEC2]-TEO-Fv2}Let $\mathcal{V}= \begin{bmatrix} v^{1} & v^{2} \end{bmatrix}^{T}$ be a symmetric vector of functionals, with $d=2$. Then the Weyl function is given by \begin{equation*} \mathcal{R}_{J}(z)=\sum_{n=0}^{\infty }\frac{ \begin{bmatrix} \frac{v^{1}(x^{3n})}{v_{1,0}} & 0 \\ 0 & \frac{v^{2}(x^{3n+1})}{v_{2,1}} \end{bmatrix} }{z^{3n+1}}+\sum_{n=0}^{\infty }\frac{ \begin{bmatrix} {0} & \frac{v^{2}(x^{3n+1})}{v_{2,1}} \\ 0 & {0} \end{bmatrix} }{z^{3n+2}}+\sum_{n=0}^{\infty }\frac{ \begin{bmatrix} {0} & {0} \\ \frac{v^{1}(x^{3n+3})}{v_{1,0}} & 0 \end{bmatrix} }{z^{3n+3}}\,. \end{equation*} \end{teo} \begin{proof} It is enough to multiply (\ref{[Sec2]-DescFen3}) by $(\mathcal{V}_{0}^{0})^{-1}$. The explicit expression for $\mathcal{V}_{0}^{0}$ is given in (\ref{[Sec2]-V00}). \end{proof} Computations considering $d>2$ functionals can be cumbersome but doable.
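The sparsity patterns appearing in Theorem \ref{[SEC2]-TEO-Fv2} can also be read off from the resolvent series (\ref{[Sec3]-R-Weyl}): the coefficient of $z^{-(k+1)}$ in $\mathcal{R}_{J}(z)$ is the block $J_{11}^{k}$, and for $d=2$ its pattern only depends on $k \bmod 3$. A finite--truncation sketch (illustrative $\gamma$ values):

```python
d, N = 2, 24
gamma = [k + 1 for k in range(N)]            # illustrative gamma values

J = [[0] * N for _ in range(N)]
for i in range(N - 1):
    J[i][i + 1] = 1
for i in range(N - d):
    J[i + d][i] = gamma[i]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

blocks, P = [], [[int(i == j) for j in range(N)] for i in range(N)]
for k in range(9):                           # truncation is exact for k << N
    blocks.append([row[:d] for row in P[:d]])  # top-left 2x2 block of J^k
    P = matmul(P, J)

for k, blk in enumerate(blocks):
    if k % 3 == 0:       # coefficient of z^{-(3n+1)}: diagonal pattern
        assert blk[0][1] == blk[1][0] == 0
    elif k % 3 == 1:     # coefficient of z^{-(3n+2)}: strictly upper pattern
        assert blk[0][0] == blk[1][0] == blk[1][1] == 0
    else:                # coefficient of z^{-(3n+3)}: strictly lower pattern
        assert blk[0][0] == blk[0][1] == blk[1][1] == 0
```

The pattern is forced by the band structure: each step of $J$ changes the column index by $+1$ or $-d$, both congruent to $1$ modulo $d+1$, so $(J^{k})_{ij}\neq 0$ requires $j-i\equiv k \pmod{3}$.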
The matrix generating function $\mathcal{F}$ (as well as the Weyl function) will be the sum of $d+1$ matrix terms, i.e. $\mathcal{F}(z)=\mathcal{F}_{1}(z)+\cdots +\mathcal{F}_{d+1}(z)$, each of them of size $d\times d$. Let us now outline the structure of $\mathcal{R}_{J}(z)$ for the general case of $d$ functionals. We shall describe the structure of $\mathcal{R}_{J}(z)$ for $d=3$, comparing the situation with the general case. Let $\ast $ denote every non-zero entry in a given matrix. Thus, there will be four patterns for the $J_{11}$ matrices of size $3\times 3$. Here, and in the general case, the first matrix $[J_{11}^{(d+1)n}]_{d\times d}$ will always be diagonal, as follows \begin{equation*} \lbrack J_{11}^{4n}]_{3\times 3}= \begin{bmatrix} \ast & 0 & 0 \\ 0 & \ast & 0 \\ 0 & 0 & \ast \end{bmatrix}. \end{equation*} Indeed, observe that for $n=0$, $[J_{11}^{(d+1)n}]_{d\times d}=I_{d}$. Next, we have \begin{equation*} \lbrack J_{11}^{4n+1}]_{3\times 3}= \begin{bmatrix} 0 & \ast & 0 \\ 0 & 0 & \ast \\ 0 & 0 & 0 \end{bmatrix}. \end{equation*} From (\ref{[Sec2]-HTRR-Symm}) we know that the \textquotedblleft distance\textquotedblright\ between the two extreme non-zero diagonals of $J$ will always consist of $d$ zero diagonals. It directly implies that, also in the general case, every entry in the last row of $[J_{11}^{(d+1)n+1}]_{d\times d}$ will always be zero, and therefore the unique non-zero entries in $[J_{11}^{(d+1)n+1}]_{d\times d}$ will be one step over the main diagonal. Next we have \begin{equation*} \lbrack J_{11}^{4n+2}]_{3\times 3}= \begin{bmatrix} 0 & 0 & \ast \\ 0 & 0 & 0 \\ \ast & 0 & 0 \end{bmatrix}. \end{equation*} Notice that for every step, the main diagonal in $[J_{11}^{(d+1)n}]_{d\times d}$ goes \textquotedblleft one diagonal\textquotedblright\ up, but the other extreme diagonal of $J$ is also moving upwards, with exactly $d$ zero diagonals between them.
It directly implies that, no matter the number of functionals, the lower--left element of $[J_{11}^{(d+1)n+2}]_{d\times d}$ will always be different from zero. Finally, we have \begin{equation*} \lbrack J_{11}^{4n+3}]_{3\times 3}= \begin{bmatrix} 0 & 0 & 0 \\ \ast & 0 & 0 \\ 0 & \ast & 0 \end{bmatrix}. \end{equation*} Here, the last non-zero entry of the main diagonal in $[J_{11}^{4n}]_{3\times 3}$ vanishes. In the general case, it will occur exactly at step $[J_{11}^{(d+1)n+(d-1)}]_{d\times d}$, in which the upper--right entry is different from zero. Meanwhile, the matrices $[J_{11}^{(d+1)n+3}]_{d\times d}$ up to $[J_{11}^{(d+1)n+(d-2)}]_{d\times d}$ will have non-zero entries on the two extreme diagonals. In this last situation, the non-zero entries of $[J_{11}^{(d+1)n+d}]_{d\times d}$ will always be exactly one step under the main diagonal. \section{The symmetrization problem for multiple OP} \label{[Section-4]-SymProb} Throughout this section, let $\{A_{n}^{1}\}$ be an arbitrary and not necessarily symmetric sequence of type~II multiple orthogonal polynomials, satisfying a $(d+2)$--term recurrence relation with known recurrence coefficients, and let $J_{1}$ be the corresponding $(d+2)$--banded matrix. Let $J_{1}$ be such that the following $LU$ factorization \begin{equation} J_{1}=LU=L_{1}L_{2}\cdots L_{d}U \label{[Sec4]-xdFactorization} \end{equation} is unique, where $U$ is an upper two--banded, semi--infinite, and invertible matrix, $L$ is a lower triangular, $(d+1)$--banded, semi--infinite matrix with ones in the main diagonal, and every $L_{i}$, $i=1,\ldots ,d$, is a lower two--banded, semi--infinite, and invertible matrix with ones in the main diagonal. We follow \cite[Def. 3]{BBF-JMAA-13}, where the authors generalize the concept of Darboux transformation to general Hessenberg banded matrices, in assuming that any circular permutation of $L_{1}L_{2}\cdots L_{d}U$ is a Darboux transformation of $J_{1}$.
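The circular permutations just described can be illustrated numerically: starting from bidiagonal factors $L_{1},L_{2},U$ (here with random entries and $d=2$, purely for illustration), each cyclic product is again a $(d+2)$--banded Hessenberg matrix with ones on its superdiagonal. A minimal sketch on a finite truncation:

```python
import random
random.seed(7)
N, d = 12, 2                         # truncation size; d = 2 for brevity

def unit_lower_bidiag(sub):          # ones on the diagonal, `sub` below it
    M = [[0.0] * N for _ in range(N)]
    for i in range(N):
        M[i][i] = 1.0
    for i in range(N - 1):
        M[i + 1][i] = sub[i]
    return M

def upper_bidiag(diag):              # `diag` on the diagonal, ones above it
    M = [[0.0] * N for _ in range(N)]
    for i in range(N):
        M[i][i] = diag[i]
    for i in range(N - 1):
        M[i][i + 1] = 1.0
    return M

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

L1 = unit_lower_bidiag([random.uniform(0.5, 2) for _ in range(N - 1)])
L2 = unit_lower_bidiag([random.uniform(0.5, 2) for _ in range(N - 1)])
U  = upper_bidiag([random.uniform(0.5, 2) for _ in range(N)])

J1 = matmul(L1, matmul(L2, U))       # J_1 = L_1 L_2 U
J2 = matmul(L2, matmul(U, L1))       # Darboux transformation J_2 = L_2 U L_1
J3 = matmul(U, matmul(L1, L2))       # Darboux transformation J_3 = U L_1 L_2

# every circular permutation is a (d+2)-banded Hessenberg matrix
# with ones on its superdiagonal
for J in (J1, J2, J3):
    for i in range(N):
        for j in range(N):
            if j > i + 1 or i > j + d:
                assert J[i][j] == 0.0
            elif j == i + 1:
                assert abs(J[i][j] - 1.0) < 1e-12
```

This is the banded analogue of the classical $LU \mapsto UL$ Darboux step for tridiagonal Jacobi matrices.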
Thus we have $d$ possible Darboux transformations of $J_{1}$, say $J_{j}$, $j=2,\ldots ,d+1$, with $J_{2}=L_{2}\cdots L_{d}UL_{1}$, $J_{3}=L_{3}\cdots L_{d}UL_{1}L_{2}$, \ldots, $J_{d+1}=UL_{1}L_{2}\cdots L_{d}$. Next, we solve the so called \textit{symmetrization problem} in the theory of multiple orthogonal polynomials, i.e., starting with $\{A_{n}^{1}\}$, we find a total of $d+1$ type~II multiple orthogonal polynomial sequences $\{A_{n}^{j}\}$, $j=1,\ldots ,d+1$, satisfying $(d+2)$--term recurrence relations with known recurrence coefficients, which can be used to construct a new $d$--symmetric type~II multiple orthogonal polynomial sequence $\{S_{n}\}$. It is worth pointing out that all the aforesaid sequences $\{A_{n}^{j}\}$, $j=1,\ldots ,d+1$, are of the same kind, with the same number of elements in their respective $(d+2)$--term recurrence relations, and multiple orthogonal with respect to the same number $d$ of functionals. \begin{teo} \label{Th-Symm-Directo}Let $\{A_{n}^{1}\}$ be an arbitrary and not necessarily symmetric sequence of type~II multiple orthogonal polynomials as stated above. Let $J_{j}$, $j=2,\ldots ,d+1$, be the Darboux transformations of $J_{1}$ given by the $d$ cyclic permutations of the matrices in the right hand side of (\ref{[Sec4]-xdFactorization}). Let $\{A_{n}^{j}\}$, $j=2,\ldots ,d+1$, be $d$ new families of type~II multiple orthogonal polynomials satisfying the $(d+2)$--term recurrence relations given by the matrices $J_{j}$, $j=2,\ldots ,d+1$. Then, the sequence $\{S_{n}\}$ defined by \begin{equation} \left\{ \begin{array}{l} S_{(d+1)n}(x)=A_{n}^{1}(x^{d+1}), \\ S_{(d+1)n+1}(x)=xA_{n}^{2}(x^{d+1}), \\ \cdots \\ S_{(d+1)n+d}(x)=x^{d}A_{n}^{d+1}(x^{d+1}), \end{array} \right. \label{[Sec4]-xdDecomp} \end{equation} is a $d$--symmetric sequence of type~II multiple orthogonal polynomials.
\end{teo} \begin{proof} Let $\{A_{n}^{1}\}$ satisfy the $(d+2)$--term recurrence relation given by (\ref{[Sec2]-HTRR-MultOP}) \begin{equation*} xA_{n+d}^{1}(x)=A_{n+d+1}^{1}(x)+b_{n+d}^{[1]}A_{n+d}^{1}(x)+\sum_{\nu =0}^{d-1}c_{n+d-\nu }^{d-1-\nu ,[1]}A_{n+d-1-\nu }^{1}(x)\,, \end{equation*} $c_{n+1}^{0,[1]}\neq 0$ for $n\geq 0$, with the initial conditions $A_{0}^{1}(x)=1$, $A_{1}^{1}(x)=x-b_{0}^{[1]}$,~and \begin{equation*} A_{n}^{1}(x)=(x-b_{n-1}^{[1]})A_{n-1}^{1}(x)-\sum_{\nu =0}^{n-2}c_{n-1-\nu }^{d-1-\nu ,[1]}A_{n-2-\nu }^{1}(x)\,,\ \ 2\leq n\leq d\,, \end{equation*} with known recurrence coefficients. Hence, in matrix notation, we have \begin{equation} x\mathbf{\underline{A}}^{1}=J_{1}\,\mathbf{\underline{A}}^{1}, \label{[Sec4]-J1A1} \end{equation} where \begin{equation*} J_{1}= \begin{bmatrix} b_{d}^{[1]} & 1 & & & & & \\ c_{d+1}^{d-1,[1]} & b_{d+1}^{[1]} & 1 & & & & \\ c_{d+1}^{d-2,[1]} & c_{d+2}^{d-1,[1]} & b_{d+2}^{[1]} & 1 & & & \\ \vdots & c_{d+2}^{d-2,[1]} & c_{d+3}^{d-1,[1]} & b_{d+3}^{[1]} & 1 & & \\ c_{d+1}^{0,[1]} & \cdots & c_{d+3}^{d-2,[1]} & c_{d+4}^{d-1,[1]} & b_{d+4}^{[1]} & 1 & \\ & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \end{bmatrix}. \end{equation*} Following \cite{BBF-JMAA-13} (see also \cite{Keller-94}), the unique $LU$ factorization for the square $(d+2)$--banded semi-infinite Hessenberg matrix $J_{1}$ is such that \begin{equation*} U= \begin{bmatrix} \gamma _{1} & 1 & & & \\ & \gamma _{d+2} & 1 & & \\ & & \gamma _{2d+3} & 1 & \\ & & & \gamma _{3d+4} & \ddots \\ & & & & \ddots \end{bmatrix} \end{equation*} is an upper two-banded, semi-infinite, and invertible matrix, and $L$ is a lower triangular, $(d+1)$--banded, semi--infinite, and invertible matrix with ones in the main diagonal. It is clear that the entries in $L$ and $U$ depend entirely on the known entries of $J_{1}$. Thus, we rewrite (\ref{[Sec4]-J1A1}) as \begin{equation} x\mathbf{\underline{A}}^{1}=LU\,\mathbf{\underline{A}}^{1}.
\label{[Sec4]-LUA1} \end{equation} Next, we define a new sequence of polynomials $\{A_{n}^{d+1}\}$ by $x\mathbf{\underline{A}}^{d+1}=U\mathbf{\underline{A}}^{1}$. Multiplying both sides of (\ref{[Sec4]-LUA1}) by the matrix $U$, we have \begin{equation*} x\left( U\,\mathbf{\underline{A}}^{1}\right) =UL\left( U\,\mathbf{\underline{A}}^{1}\right) =x(x\mathbf{\underline{A}}^{d+1})=UL(x\mathbf{\underline{A}}^{d+1}), \end{equation*} and pulling out $x$ we get \begin{equation} x\mathbf{\underline{A}}^{d+1}=UL\,\mathbf{\underline{A}}^{d+1}, \label{[Sec4]-UAd1} \end{equation} which is the matrix form of the $(d+2)$--term recurrence relation satisfied by the new type~II multiple polynomial sequence $\{A_{n}^{d+1}\}$. Since $L$ is given by (see \cite{BBF-JMAA-13}) \begin{equation*} L=L_{1}L_{2}\cdots L_{d}, \end{equation*} where every $L_{j}$ is the lower two-banded, semi-infinite, and invertible matrix \begin{equation*} L_{j}= \begin{bmatrix} 1 & & & & \\ \gamma _{d-j} & 1 & & & \\ & \gamma _{2d+1-j} & 1 & & \\ & & \gamma _{3d+2-j} & 1 & \\ & & & \ddots & \ddots \end{bmatrix} \end{equation*} with ones in the main diagonal, it is also clear that the entries in $L_{j}$ will depend on the known entries in $J_{1}$. Under the same hypotheses, we can define $d-1$ new polynomial sequences starting with $\mathbf{\underline{A}}^{1}$ as follows: $\mathbf{\underline{A}}^{2}=L_{1}^{-1}\mathbf{\underline{A}}^{1}$, $\mathbf{\underline{A}}^{3}=L_{2}^{-1}\mathbf{\underline{A}}^{2}$, \ldots , $\mathbf{\underline{A}}^{d}=L_{d-1}^{-1}\mathbf{\underline{A}}^{d-1}$ up to $\mathbf{\underline{A}}^{d+1}=L_{d}^{-1}\mathbf{\underline{A}}^{d}$. That is, $\mathbf{\underline{A}}^{j+1}=L_{j}^{-1}\mathbf{\underline{A}}^{j}$, $j=1,\ldots ,d$.
Combining this fact with (\ref{[Sec4]-UAd1}) we deduce \begin{eqnarray*} x\mathbf{\underline{A}}^{d} &=&L_{d}UL_{1}L_{2}\cdots L_{d-1}\mathbf{\underline{A}}^{d}, \\ x\mathbf{\underline{A}}^{d-1} &=&L_{d-1}L_{d}UL_{1}L_{2}\cdots L_{d-2}\mathbf{\underline{A}}^{d-1}, \end{eqnarray*} up to the known expression (\ref{[Sec4]-J1A1}) \begin{equation*} x\mathbf{\underline{A}}^{1}=L_{1}L_{2}\cdots L_{d}U\mathbf{\underline{A}}^{1}. \end{equation*} The above expressions mean that all these $d+1$ sequences $\{A_{n}^{j}\}$, $j=1,\ldots ,d+1$, are in turn type~II multiple orthogonal polynomial sequences. Finally, from these $d+1$ sequences, we construct the type~II polynomials in the sequence $\{S_{n}\}$ as in (\ref{[Sec4]-xdDecomp}). Note that, according to (\ref{[Sec2]-d-symmetric-Sn}), it directly follows that $\{S_{n}\}$ is a $d$--symmetric type~II multiple orthogonal polynomial sequence, which proves our assertion. \end{proof} Next, we state the converse of the above theorem. That is, given a sequence of type~II $d$--symmetric multiple orthogonal polynomials $\{S_{n}\}$ satisfying the high--term recurrence relation~(\ref{[Sec2]-HTRR-Symm}), we find a set of $d+1$ polynomial families $\{A_{n}^{j}\}$, $j=1,\ldots ,d+1$, of not necessarily symmetric type~II multiple orthogonal polynomials, satisfying in turn $(d+2)$--term recurrence relations, so they are themselves sequences of type~II multiple orthogonal polynomials. When $d=2$, this construction goes back to the work of Douak and Maroni (see \cite{DM-A-92}). \begin{teo} \label{Th-Symm-Inverso} Let $\{S_{n}\}$ be a $d$--symmetric sequence of type~II multiple orthogonal polynomials satisfying the corresponding high--order recurrence relation (\ref{[Sec2]-HTRR-Symm}), and $\{A_{n}^{j}\}$, $j=1,\ldots ,d+1$, the sequences of polynomials given by (\ref{[Sec4]-xdDecomp}).
Then, each sequence $\{A_{n}^{j}\}$, $j=1,\ldots ,d+1$, satisfies the $(d+2)$--term recurrence relation \begin{equation*} xA_{n+d}^{j}(x)=A_{n+d+1}^{j}(x)+b_{n+d}^{[j]}A_{n+d}^{j}(x)+\sum_{\nu =0}^{d-1}c_{n+d-\nu }^{d-1-\nu ,[j]}A_{n+d-1-\nu }^{j}(x)\,, \end{equation*} $c_{n+1}^{0,[j]}\neq 0$ for $n\geq 0$, with initial conditions $A_{0}^{j}(x)=1$, $A_{1}^{j}(x)=x-b_{0}^{[j]}$, \begin{equation*} A_{n}^{j}(x)=(x-b_{n-1}^{[j]})A_{n-1}^{j}(x)-\sum_{\nu =0}^{n-2}c_{n-1-\nu }^{d-1-\nu ,[j]}A_{n-2-\nu }^{j}(x)\,,\ \ 2\leq n\leq d\,, \end{equation*} and therefore they are type~II multiple orthogonal polynomial sequences. \end{teo} \begin{proof} Since $\{S_{n}\}$ is a $d$--symmetric multiple orthogonal sequence, it satisfies (\ref{[Sec2]-HTRR-Symm}) with $S_{n}(x)=x^{n}$ for $0\leq n\leq d$. Shifting, for convenience, the multi--index in (\ref{[Sec2]-HTRR-Symm}) as $n\rightarrow (d+1)n-d+j$, $j=0,1,\ldots ,d$, we obtain the equivalent system of $(d+1)$ equations \begin{equation*} \left\{ \begin{array}{cc} xS_{(d+1)n}(x)=S_{(d+1)n+1}(x)+\gamma _{(d+1)n-d+1}S_{(d+1)n-d}(x)\,, & j=0, \\ xS_{(d+1)n+1}(x)=S_{(d+1)n+2}(x)+\gamma _{(d+1)n-d+2}S_{(d+1)n-d+1}(x)\,, & j=1, \\ \vdots & \vdots \\ xS_{(d+1)n+d-1}(x)=S_{(d+1)n+d}(x)+\gamma _{(d+1)n}S_{(d+1)n-1}(x)\,, & j=d-1, \\ xS_{(d+1)n+d}(x)=S_{(d+1)n+(d+1)}(x)+\gamma _{(d+1)n+1}S_{(d+1)n}(x)\,, & j=d. \end{array} \right. \end{equation*} Substituting (\ref{[Sec4]-xdDecomp}) into the above expressions, and replacing $x^{d+1}\rightarrow x$, we get the following system of $(d+1)$ equations \begin{equation} \left\{ \begin{array}{cc} 1) & A_{n}^{1}(x)=A_{n}^{2}(x)+\gamma _{(d+1)n-(d-1)}A_{n-1}^{2}(x)\,, \\ 2) & A_{n}^{2}(x)=A_{n}^{3}(x)+\gamma _{(d+1)n-(d-2)}A_{n-1}^{3}(x)\,, \\ \vdots & \vdots \\ d) & A_{n}^{d}(x)=A_{n}^{d+1}(x)+\gamma _{(d+1)n}A_{n-1}^{d+1}(x), \\ d+1) & xA_{n}^{d+1}(x)=A_{n+1}^{1}(x)+\gamma _{(d+1)n+1}A_{n}^{1}(x)\,. \end{array} \right.
\label{[Sec4]-9-Relaciones} \end{equation} Notice that, setting $x=0$ in the last equation, the $\gamma $'s are given by \begin{equation*} \gamma _{(d+1)n+1}=\frac{-A_{n+1}^{1}(0)}{A_{n}^{1}(0)}. \end{equation*} In the remainder of this section we deal with the matrix representation of each equation in (\ref{[Sec4]-9-Relaciones}). Notice that the first $d$ equations of (\ref{[Sec4]-9-Relaciones}), namely $A_{n}^{j}=A_{n}^{j+1}+\gamma _{(d+1)n+(j-d)}A_{n-1}^{j+1}$, \ $j=1,\ldots ,d$, can be written in matrix form as \begin{equation} \mathbf{\underline{A}}^{j}=L_{j}\,\mathbf{\underline{A}}^{j+1}, \label{[Sec4]-LowerLj} \end{equation} where $L_{j}$ is the lower two-banded, semi-infinite, and invertible matrix \begin{equation*} L_{j}= \begin{bmatrix} 1 & & & & \\ \gamma _{d-j} & 1 & & & \\ & \gamma _{2d+1-j} & 1 & & \\ & & \gamma _{3d+2-j} & 1 & \\ & & & \ddots & \ddots \end{bmatrix} , \end{equation*} and \begin{equation*} \mathbf{\underline{A}}^{j}= \begin{bmatrix} A_{0}^{j}(x) & A_{1}^{j}(x) & A_{2}^{j}(x) & \cdots \end{bmatrix} ^{T}. \end{equation*} Similarly, the $(d+1)$--th equation in (\ref{[Sec4]-9-Relaciones}) can be expressed as \begin{equation} x\mathbf{\underline{A}}^{d+1}=U\,\mathbf{\underline{A}}^{1}, \label{[Sec4]-UpperU} \end{equation} where $U$ is the upper two-banded, semi-infinite, and invertible matrix \begin{equation*} U= \begin{bmatrix} \gamma _{1} & 1 & & & \\ & \gamma _{d+2} & 1 & & \\ & & \gamma _{2d+3} & 1 & \\ & & & \gamma _{3d+4} & \ddots \\ & & & & \ddots \end{bmatrix} . \end{equation*} It is clear that the entries in the above matrices $L_{j}$ and $U$ are given in terms of the recurrence coefficients $\gamma _{n+1}$ for $\{S_{n}\}$ given in (\ref{[Sec2]-HTRR-Symm}). From (\ref{[Sec4]-LowerLj}) we have $\mathbf{\underline{A}}^{1}$ defined in terms of $\mathbf{\underline{A}}^{2}$, as $\mathbf{\underline{A}}^{1}=L_{1}\mathbf{\underline{A}}^{2}$.
Likewise $\mathbf{\underline{A}}^{2}$ in terms of $\mathbf{\underline{A}}^{3}$ as $\mathbf{\underline{A}}^{2}=L_{2}\mathbf{\underline{A}}^{3}$, and so on up to $j=d$. Thus, it is easy to see that \begin{equation*} \mathbf{\underline{A}}^{1}=L_{1}L_{2}\cdots L_{d}\mathbf{\underline{A}}^{d+1}. \end{equation*} Next, we multiply by $x$ both sides of the above expression, and we apply (\ref{[Sec4]-UpperU}) to obtain \begin{equation*} x\mathbf{\underline{A}}^{1}=L_{1}L_{2}\cdots L_{d}U\,\mathbf{\underline{A}}^{1}. \end{equation*} Since each $L_{j}$ and $U$ are lower and upper two-banded semi-infinite matrices, it follows easily that $L_{1}L_{2}\cdots L_{d}$ is a lower triangular $(d+1)$--banded matrix with ones in the main diagonal, so the above decomposition is indeed an $LU$ decomposition of a certain $(d+2)$--banded Hessenberg matrix $J_{1}=L_{1}L_{2}\cdots L_{d}U$\ (see for instance \cite{BBF-JMAA-13} and \cite[Sec. 3.2 and 3.3]{Keller-94}). The values of the entries of $J_{1}$ come from the usual definition for matrix multiplication, matching every entry in $J_{1}=L_{1}L_{2}\cdots L_{d}U$, with $L_{j}$ and $U$ given in (\ref{[Sec4]-LowerLj}) and (\ref{[Sec4]-UpperU}) respectively. On the other hand, starting with $\mathbf{\underline{A}}^{2}=L_{2}\mathbf{\underline{A}}^{3}$ instead of $\mathbf{\underline{A}}^{1}$, and proceeding in the same fashion as above, we arrive at \begin{eqnarray*} x\mathbf{\underline{A}}^{2} &=&L_{2}\cdots L_{d}UL_{1}\,\mathbf{\underline{A}}^{2}, \\ x\mathbf{\underline{A}}^{3} &=&L_{3}\cdots L_{d}UL_{1}L_{2}\,\mathbf{\underline{A}}^{3}, \\ &&\vdots \\ x\mathbf{\underline{A}}^{d+1} &=&UL_{1}L_{2}\cdots L_{d}\,\mathbf{\underline{A}}^{d+1}. \end{eqnarray*} Observe that $J_{j}$ denotes a particular cyclic permutation of the matrix product $L_{1}L_{2}\cdots L_{d}U$. Thus, we have $J_{1}=L_{1}L_{2}\cdots L_{d}U\,$, $J_{2}=L_{2}\cdots L_{d}UL_{1}\,$, \ldots , $J_{d+1}=UL_{1}L_{2}\cdots L_{d}\,$.
Using this notation, $J_{j}$ is the matrix representation of the operator of multiplication by $x$ in \begin{equation} x\mathbf{\underline{A}}^{j}=J_{j}\mathbf{\underline{A}}^{j}, \label{[Sec4]-MatrixJj} \end{equation} which from (\ref{[Sec2]-HTRR-MultOP}) directly implies that each polynomial sequence $\{A_{n}^{j}\}$, $j=1,\ldots ,d+1$, satisfies a $(d+2)$--term recurrence relation as in the statement of the theorem, with coefficients given in terms of the recurrence coefficients $\gamma _{n+1}$ from the high--term recurrence relation (\ref{[Sec2]-HTRR-Symm}) satisfied by the symmetric sequence $\{S_{n}\}$. This completes the proof. \end{proof} \section{Matrix multiple orthogonality} \label{[Section-5]-FavardTh} For an arbitrary system of type~II vector multiple polynomials $\{P_{n}\}$ orthogonal with respect to a certain vector of functionals $\mathcal{U}= \begin{bmatrix} u^{1} & u^{2} \end{bmatrix} ^{T}$, with \begin{equation} \mathcal{P}_{n}= \begin{bmatrix} P_{3n}(x) & P_{3n+1}(x) & P_{3n+2}(x) \end{bmatrix} ^{T}, \label{[Sec5]-P3n} \end{equation} there exists a matrix decomposition \begin{equation} \mathcal{P}_{n}=W_{n}(x^{3})\mathcal{X}_{n}\rightarrow \begin{bmatrix} P_{3n}(x) \\ P_{3n+1}(x) \\ P_{3n+2}(x) \end{bmatrix} =W_{n}(x^{3}) \begin{bmatrix} 1 \\ x \\ x^{2} \end{bmatrix} , \label{[Sec5]-PnWnX} \end{equation} with $W_{n}$ being the matrix polynomial (see \cite{MM-MJM13}) \begin{equation} W_{n}(x)= \begin{bmatrix} A_{n}^{1}(x) & A_{n-1}^{2}(x) & A_{n-1}^{3}(x) \\ B_{n}^{1}(x) & B_{n}^{2}(x) & B_{n-1}^{3}(x) \\ C_{n}^{1}(x) & C_{n}^{2}(x) & C_{n}^{3}(x) \end{bmatrix} . \label{[Sec5]-MatrixWn} \end{equation} Throughout this section, for simplicity of computations, we assume $d=2$ for the vector of functionals $\mathcal{U}$, but the same results can be easily extended to an arbitrary number of functionals.
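The block structure that drives the computations of this section can be checked symbolically. The following sketch is ours and not part of the paper's argument: only the sparsity patterns of the $3\times 3$ blocks $A$, $B_{n}$, $C_{n}$ are taken from the four--term recurrence, while all symbol and helper names (`Bblock`, `Cblock`, the generic entries) are placeholders. It verifies that $A$ is nilpotent and that the induced coefficients of $x^{3}\mathcal{P}_{n}$ are, respectively, lower triangular with unit diagonal and upper triangular.

```python
import sympy as sp

# Sketch (ours): generic blocks with the sparsity patterns used in Section 5.
A = sp.Matrix([[0, 0, 0], [0, 0, 0], [1, 0, 0]])  # nilpotent shift block

def Bblock(t):
    # B_n-type block: generic lower part, unit superdiagonal (assumption: generic entries)
    b0, b1, b2, c1, c2, d2 = sp.symbols(f'b0{t} b1{t} b2{t} c1{t} c2{t} d2{t}')
    return sp.Matrix([[b0, 1, 0], [c1, b1, 1], [d2, c2, b2]])

def Cblock(t):
    # C_n-type block: strictly upper triangular
    d0, d1, c0 = sp.symbols(f'd0{t} d1{t} c0{t}')
    return sp.Matrix([[0, d0, c0], [0, 0, d1], [0, 0, 0]])

# stand-ins for B_n, B_{n+1}, B_{n-1}, B_{n-2} and C_n, C_{n+1}, C_{n-1}
Bn, Bp, Bm, Bq = Bblock('n'), Bblock('p'), Bblock('m'), Bblock('q')
Cn, Cp, Cm = Cblock('n'), Cblock('p'), Cblock('m')

# coefficients of x^2 P_n and x^3 P_n, formed as in the proof below
A1 = A*Bp + Bn*A
B1 = A*Cp + Bn*Bn + Cn*A
C1 = Bn*Cn + Cn*Bm
D1 = Cn*Cm
A2 = sp.expand(A1*Bp + B1*A)
D2 = sp.expand(C1*Cm + D1*Bq)

assert A*A == sp.zeros(3, 3)      # A is nilpotent of order two
assert A1*A == sp.zeros(3, 3)     # the x^3 step drops the P_{n+2} term
assert all(A2[i, j] == 0 for i in range(3) for j in range(i + 1, 3))
assert all(A2[i, i] == 1 for i in range(3))    # lower triangular, unit diagonal
assert all(D2[i, j] == 0 for i in range(3) for j in range(i))  # upper triangular
print("block structure verified")
```

The check confirms the structural claims without fixing any particular family of $2$--orthogonal polynomials, since only the zero patterns of the blocks matter.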
We first show that, if a sequence of type~II multiple $2$--orthogonal polynomials $\{P_{n}\}$ satisfies a recurrence relation like (\ref{[Sec2]-4TRR-P}), then there exists a sequence of matrix polynomials $\{W_{n}\}$, $W_{n}(x)\in \mathbb{P}^{3\times 3}$, associated to $\{P_{n}\}$ by (\ref{[Sec5]-P3n}) and (\ref{[Sec5]-PnWnX}), satisfying a matrix four term recurrence relation with matrix coefficients. \begin{teo} \label{[SEC5]-TH1}Let $\{P_{n}\}$ be a sequence of type~II multiple polynomials, $2$--orthogonal with respect to the system of functionals $\{u^{1},u^{2}\}$, i.e., satisfying the four--term type recurrence relation (\ref{[Sec2]-4TRR-P}). Let $\{W_{n}\}$, $W_{n}(x)\in \mathbb{P}^{3\times 3}$, be associated to $\{P_{n}\}$ by (\ref{[Sec5]-P3n}) and (\ref{[Sec5]-PnWnX}). Then, the matrix polynomials $W_{n}$ satisfy a matrix four term recurrence relation with matrix coefficients. \end{teo} \begin{proof} We first prove that the sequence of matrix polynomials $\{W_{n}\}$ satisfies a four term recurrence relation with matrix coefficients, starting from the fact that $\{P_{n}\}$ satisfies (\ref{[Sec2]-4TRR-P}). In order to get this result, we use the matrix interpretation of multiple orthogonality described in Section \ref{[Section-2]-Defs}. We know that the sequence of type~II multiple $2$--orthogonal polynomials $\{P_{n}\}$ satisfies the four term recurrence relation (\ref{[Sec2]-4TRR-P}).
From (\ref{[Sec5]-P3n}), using the matrix interpretation for multiple orthogonality, the above expression can be seen as the matrix three term recurrence relation \begin{equation*} x\mathcal{P}_{n}=A\mathcal{P}_{n+1}+B_{n}\mathcal{P}_{n}+C_{n}\mathcal{P} _{n-1} \end{equation*} or, equivalently \begin{equation*} x \begin{bmatrix} P_{3n} \\ P_{3n+1} \\ P_{3n+2} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} P_{3n+3} \\ P_{3n+4} \\ P_{3n+5} \end{bmatrix} \end{equation*} \begin{equation*} + \begin{bmatrix} b_{3n} & 1 & 0 \\ c_{3n+1} & b_{3n+1} & 1 \\ d_{3n+2} & c_{3n+2} & b_{3n+2} \end{bmatrix} \begin{bmatrix} P_{3n} \\ P_{3n+1} \\ P_{3n+2} \end{bmatrix} + \begin{bmatrix} 0 & d_{3n} & c_{3n} \\ 0 & 0 & d_{3n+1} \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} P_{3n-3} \\ P_{3n-2} \\ P_{3n-1} \end{bmatrix} \end{equation*} Multiplying the above expression by $x$ we get \begin{eqnarray} x^{2}\mathcal{P}_{n} &=&Ax\mathcal{P}_{n+1}+B_{n}x\mathcal{P}_{n}+C_{n}x \mathcal{P}_{n-1} \notag \\ &=&A\left[ x\mathcal{P}_{n+1}\right] +B_{n}\left[ x\mathcal{P}_{n}\right] +C_{n}\left[ x\mathcal{P}_{n-1}\right] \notag \\ &=&AA\mathcal{P}_{n+2}+\left[ AB_{n+1}+B_{n}A\right] \mathcal{P}_{n+1} \label{[Sec5]-PrimABC} \\ &&+\left[ AC_{n+1}+B_{n}B_{n}+C_{n}A\right] \mathcal{P}_{n} \notag \\ &&+\left[ B_{n}C_{n}+C_{n}B_{n-1}\right] \mathcal{P}_{n-1}+C_{n}C_{n-1} \mathcal{P}_{n-2} \notag \end{eqnarray} The matrix $A$ is nilpotent, so $AA$ is the zero matrix of size $3\times 3$. 
Setting \begin{equation*} \begin{array}{ll} A_{n}^{\langle 1\rangle }=AB_{n+1}+B_{n}A, & B_{n}^{\langle 1\rangle }=AC_{n+1}+B_{n}B_{n}+C_{n}A, \\ C_{n}^{\langle 1\rangle }=B_{n}C_{n}+C_{n}B_{n-1}, & D_{n}^{\langle 1\rangle }=C_{n}C_{n-1}, \end{array} \end{equation*} where the entries of $A_{n}^{\langle 1\rangle }$, $B_{n}^{\langle 1\rangle }$, $C_{n}^{\langle 1\rangle }$, and $D_{n}^{\langle 1\rangle }$ can be easily obtained, using computational software such as Mathematica$^{\circledR }$ or Maple$^{\circledR }$, from the entries of $A$, $B_{n}$, and $C_{n}$, we can rewrite (\ref{[Sec5]-PrimABC}) as \begin{equation*} x^{2}\mathcal{P}_{n}=A_{n}^{\langle 1\rangle }\mathcal{P}_{n+1}+B_{n}^{\langle 1\rangle }\mathcal{P}_{n}+C_{n}^{\langle 1\rangle }\mathcal{P}_{n-1}+D_{n}^{\langle 1\rangle }\mathcal{P}_{n-2}. \end{equation*} We now continue in this fashion, multiplying again by $x$: \begin{eqnarray*} x^{3}\mathcal{P}_{n} &=&A_{n}^{\langle 1\rangle }x\mathcal{P}_{n+1}+B_{n}^{\langle 1\rangle }x\mathcal{P}_{n}+C_{n}^{\langle 1\rangle }x\mathcal{P}_{n-1}+D_{n}^{\langle 1\rangle }x\mathcal{P}_{n-2} \\ &=&A_{n}^{\langle 1\rangle }A\mathcal{P}_{n+2}+\left[ A_{n}^{\langle 1\rangle }B_{n+1}+B_{n}^{\langle 1\rangle }A\right] \mathcal{P}_{n+1} \\ &&+\left[ A_{n}^{\langle 1\rangle }C_{n+1}+B_{n}^{\langle 1\rangle }B_{n}+C_{n}^{\langle 1\rangle }A\right] \mathcal{P}_{n} \\ &&+\left[ B_{n}^{\langle 1\rangle }C_{n}+C_{n}^{\langle 1\rangle }B_{n-1}+D_{n}^{\langle 1\rangle }A\right] \mathcal{P}_{n-1} \\ &&+\left[ C_{n}^{\langle 1\rangle }C_{n-1}+D_{n}^{\langle 1\rangle }B_{n-2}\right] \mathcal{P}_{n-2}+D_{n}^{\langle 1\rangle }C_{n-2}\mathcal{P}_{n-3}. \end{eqnarray*} The matrix products $A_{n}^{\langle 1\rangle }A$ and $D_{n}^{\langle 1\rangle }C_{n-2}$ both give the zero matrix of size $3\times 3$, and the remaining matrix coefficients are \begin{equation} \begin{array}{ll} A_{n}^{\langle 2\rangle }=A_{n}^{\langle 1\rangle }B_{n+1}+B_{n}^{\langle 1\rangle }A, & B_{n}^{\langle 2\rangle }=A_{n}^{\langle 1\rangle }C_{n+1}+B_{n}^{\langle 1\rangle }B_{n}+C_{n}^{\langle 1\rangle }A, \\ C_{n}^{\langle 2\rangle }=B_{n}^{\langle 1\rangle }C_{n}+C_{n}^{\langle 1\rangle }B_{n-1}+D_{n}^{\langle 1\rangle }A, & D_{n}^{\langle 2\rangle }=C_{n}^{\langle 1\rangle }C_{n-1}+D_{n}^{\langle 1\rangle }B_{n-2}. \end{array} \label{[Sec5]-Matrix4TRR-coef} \end{equation} Using the expressions stated above, the matrix coefficients (\ref{[Sec5]-Matrix4TRR-coef}) can be easily obtained as well. Beyond the explicit expression of their respective entries, the key point is that they are structured matrices, namely $A_{n}^{\langle 2\rangle }$ is lower-triangular with ones in the main diagonal, $D_{n}^{\langle 2\rangle }$ is upper-triangular, and $B_{n}^{\langle 2\rangle }$, $C_{n}^{\langle 2\rangle }$ are full matrices. Therefore, the sequence of type~II vector multiple orthogonal polynomials satisfies the following matrix four term recurrence relation \begin{equation} x^{3}\mathcal{P}_{n}=A_{n}^{\langle 2\rangle }\mathcal{P}_{n+1}+B_{n}^{\langle 2\rangle }\mathcal{P}_{n}+C_{n}^{\langle 2\rangle }\mathcal{P}_{n-1}+D_{n}^{\langle 2\rangle }\mathcal{P}_{n-2},\ \ n=2,3,\ldots \,. \label{[Sec5]-Matrix4TRR-3} \end{equation} Next, combining (\ref{[Sec5]-PnWnX}) with (\ref{[Sec5]-Matrix4TRR-3}), we can assert that \begin{multline*} x^{3}W_{n}(x^{3}) \begin{bmatrix} 1 \\ x \\ x^{2} \end{bmatrix} =A_{n}^{\langle 2\rangle }W_{n+1}(x^{3}) \begin{bmatrix} 1 \\ x \\ x^{2} \end{bmatrix} +B_{n}^{\langle 2\rangle }W_{n}(x^{3}) \begin{bmatrix} 1 \\ x \\ x^{2} \end{bmatrix} \\ +C_{n}^{\langle 2\rangle }W_{n-1}(x^{3}) \begin{bmatrix} 1 \\ x \\ x^{2} \end{bmatrix} +D_{n}^{\langle 2\rangle }W_{n-2}(x^{3}) \begin{bmatrix} 1 \\ x \\ x^{2} \end{bmatrix} ,\ \ n=2,3,\ldots .
\end{multline*} The vector $\begin{bmatrix} 1 & x & x^{2} \end{bmatrix}^{T}$ can be removed, and after the shift $x^{3}\rightarrow x$ the above expression may be simplified as \begin{equation} xW_{n}(x)=A_{n}^{\langle 2\rangle }W_{n+1}(x)+B_{n}^{\langle 2\rangle }W_{n}(x)+C_{n}^{\langle 2\rangle }W_{n-1}(x)+D_{n}^{\langle 2\rangle }W_{n-2}(x), \label{[Sec5]-Matrix4TRR-2} \end{equation} where $W_{-1}=0_{3\times 3}$ and $W_{0}$ is a certain constant matrix, for every $n=1,2,\ldots $, which is the desired matrix four term recurrence relation for $W_{n}(x)$. \end{proof} This kind of matrix high--term recurrence relation completely characterizes a certain type of orthogonality. Hence, we are going to prove a Favard type theorem which states that, under the assumptions of Theorem \ref{[SEC5]-TH1}, the matrix polynomials $\{W_{n}\}$ are \textit{type~II matrix multiple orthogonal} with respect to a system of two matrices of measures $\{d\mathbf{M}^{1},d\mathbf{M}^{2}\}$. Next, we briefly review some of the standard facts on the theory of \textit{matrix orthogonality}, or \textit{orthogonality with respect to a matrix of measures} (see \cite{D-CJM-95}, \cite{DG-JCAM-05} and the references therein). Let $W,V\in \mathbb{P}^{3\times 3}$ be two matrix polynomials, and let \begin{equation*} \mathbf{M}(x)= \begin{bmatrix} \mu _{11}(x) & \mu _{12}(x) & \mu _{13}(x) \\ \mu _{21}(x) & \mu _{22}(x) & \mu _{23}(x) \\ \mu _{31}(x) & \mu _{32}(x) & \mu _{33}(x) \end{bmatrix} , \end{equation*} be a matrix with positive Borel measures $\mu _{i,j}(x)$.
Let $\mathbf{M}(E)$ be positive definite for any Borel set $E\subset \mathbb{R}$, having finite moments \begin{equation*} \bar{\mu}_{k}=\int_{E}d\mathbf{M}(x)x^{k},\,\,\,k=0,1,2,\ldots \end{equation*} of every order, and satisfying that \begin{equation*} \int_{E}V(x)d\mathbf{M}(x)V^{\ast }(x), \end{equation*} where $V^{\ast }\in \mathbb{P}^{3\times 3}$ is the adjoint matrix of $V\in \mathbb{P}^{3\times 3}$, is non-singular whenever the matrix leading coefficient of the matrix polynomial $V$ is non-singular. Under these conditions, it is possible to associate to a weight matrix $\mathbf{M}$ the Hermitian sesquilinear form \begin{equation*} \langle W,V\rangle =\int_{E}W(x)d\mathbf{M}(x)V^{\ast }(x). \end{equation*} We then say that a sequence of matrix polynomials $\{W_{n}\}$, $W_{n}\in \mathbb{P}^{3\times 3}$ with degree $n$ and nonsingular leading coefficient, is orthogonal with respect to $\mathbf{M}$\ if \begin{equation} \langle W_{m},W_{n}\rangle =\Delta _{n}\delta _{m,n}\,, \label{[Sec5]-Matrix-Orthog} \end{equation} where $\Delta _{n}\in \mathcal{M}_{3\times 3}$ is a positive definite upper triangular matrix, for $n\geq 0$. We can define the \textit{matrix moment functional} $\mathrm{M}$, acting from $\mathbb{P}^{3\times 3}$ to $\mathcal{M}_{3\times 3}$, in terms of the above matrix inner product, by $\mathrm{M}(WV)=\langle W,V\rangle $. This construction is due to J\'odar \textit{et al.} (see \cite{JD-JD-97}, \cite{JDP-JAT-96}), where the authors extend to the matrix framework the linear moment functional approach developed by Chihara in \cite{Chi78}. Hence, the moments of $\mathbf{M}(x)$ and (\ref{[Sec5]-Matrix-Orthog}) can be written as \begin{equation*} \begin{array}{l} \mathrm{M}(x^{k})=\bar{\mu}_{k}\,,\,\,\,k=0,1,2,\ldots , \\ \mathrm{M}(W_{m}V_{n})=\Delta _{n}\delta _{m,n}\,,\,\,\,m,n=0,1,2,\ldots .
\end{array} \end{equation*} Let $\boldsymbol{m}=(m_{1},m_{2})\in \mathbb{N}^{2}$ be a multi--index with length $|\mathbf{m}|:=m_{1}+m_{2}$ and let $\{\mathrm{M}^{1},\mathrm{M}^{2}\}$ be a set of matrix moment functionals as defined above. Let $\{W_{\mathbf{m}}\}$ be a sequence of matrix polynomials, with $\deg W_{\mathbf{m}}$ at most $|\mathbf{m}|$. $\{W_{\mathbf{m}}\}$ is said to be \textit{type~II} multiple orthogonal with respect to the set of linear functionals $\{\mathrm{M}^{1},\mathrm{M}^{2}\}$ and the multi--index $\boldsymbol{m}$ if it satisfies the following orthogonality conditions \begin{equation} \mathrm{M}^{j}(x^{k}W_{\mathbf{m}})=0_{3\times 3}\,,\ \ k=0,1,\ldots ,m_{j}-1\,,\ \ j=1,2. \label{[Sec5]-OrtConduj} \end{equation} A multi--index $\boldsymbol{m}$ is said to be \textit{normal} for the set of matrix moment functionals $\{\mathrm{M}^{1},\mathrm{M}^{2}\}$ if the degree of $W_{\mathbf{m}}$ is exactly $|\mathbf{m}|=m$. Thus, in what follows we will write $W_{\mathbf{m}}\equiv W_{|\mathbf{m}|}=W_{m}$. In this framework, let us consider the sequence of \textit{vectors of matrix polynomials} $\{\mathcal{B}_{n}\}$ where \begin{equation} \mathcal{B}_{n}= \begin{bmatrix} W_{2n} \\ W_{2n+1} \end{bmatrix} . \label{[Sec5]-BnofWn} \end{equation} We define the vector of matrix--functionals $\mathfrak{M}= \begin{bmatrix} \mathrm{M}^{1} & \mathrm{M}^{2} \end{bmatrix}^{T}$, with $\mathfrak{M}:\mathbb{P}^{6\times 3}\rightarrow \mathcal{M}_{6\times 6}$, by means of the action of $\mathfrak{M}$ on $\mathcal{B}_{n}$, as follows \begin{equation} \mathfrak{M}\left( \mathcal{B}_{n}\right) = \begin{bmatrix} \mathrm{M}^{1}(W_{2n}) & \mathrm{M}^{2}(W_{2n}) \\ \mathrm{M}^{1}(W_{2n+1}) & \mathrm{M}^{2}(W_{2n+1}) \end{bmatrix} \in \mathcal{M}_{6\times 6}\,.
\label{[Sec5]-UBn} \end{equation} where \begin{equation} \mathrm{M}^{i}(W_{j})=\int W_{j}(x)d\mathbf{M}^{i}=\Delta _{j}^{i},\quad i=1,2,\text{ and }j=0,1, \label{[Sec5]-tauiWj} \end{equation} with $\{d\mathbf{M}^{1},d\mathbf{M}^{2}\}$ being a system of two matrices of measures as described above. Thus, we say that a sequence of vectors of matrix polynomials $\{\mathcal{B}_{n}\}$ is orthogonal with respect to a vector of matrix functionals $\mathfrak{M}$ if the conditions \begin{equation} \left. \begin{array}{rll} i) & \mathfrak{M}(x^{k}\mathcal{B}_{n})=0_{6\times 6}\,, & k=0,1,\ldots ,n-1\,, \\ ii) & \mathfrak{M}(x^{n}\mathcal{B}_{n})=\Omega _{n}\,, & \end{array} \right\} \label{[Sec5]-MatrixOrthoU} \end{equation} hold, where $\Omega _{n}$ is a regular block upper triangular $6\times 6$ matrix. Now we are in a position to prove the following \begin{teo}[Favard type] \label{[SEC5]-TH2}Let $\{\mathcal{B}_{n}\}$ be a sequence of vectors of matrix polynomials of size $6\times 3$, defined in (\ref{[Sec5]-BnofWn}), with $W_{n}$ matrix polynomials satisfying the four term recurrence relation (\ref{[Sec5]-Matrix4TRR-2}). The following statements are equivalent: \begin{itemize} \item[$(a)$] The sequence $\{\mathcal{B}_{n}\}_{n\geq 0}$ is orthogonal with respect to a certain vector of two matrix--functionals.
\item[$(b)$] There are sequences of scalar $6\times 6$ block matrices $\{A_{n}^{\langle 3\rangle }\}_{n\geq 0}$, $\{B_{n}^{\langle 3\rangle }\}_{n\geq 0}$, and $\{C_{n}^{\langle 3\rangle }\}_{n\geq 0}$, with $C_{n}^{\langle 3\rangle }$ a block upper triangular non-singular matrix for $n\in \mathbb{N}$, such that the sequence $\{\mathcal{B}_{n}\}$ satisfies the matrix three term recurrence relation \begin{equation} x\mathcal{B}_{n}=A_{n}^{\langle 3\rangle }\mathcal{B}_{n+1}+B_{n}^{\langle 3\rangle }\mathcal{B}_{n}+C_{n}^{\langle 3\rangle }\mathcal{B}_{n-1}, \label{[Sec5]-MatrixFavard-1} \end{equation} with $\mathcal{B}_{-1}= \begin{bmatrix} 0_{3\times 3} & 0_{3\times 3} \end{bmatrix}^{T}$ and $\mathcal{B}_{0}$ given. \end{itemize} \end{teo} \begin{proof} First we prove that $(a)$ implies $(b)$. Since the sequence of vectors of matrix polynomials $\{\mathcal{B}_{n}\}$ is a basis in the linear space $\mathbb{P}^{6\times 3}$, we can write \begin{equation*} x\mathcal{B}_{n}=\sum_{k=0}^{n+1}\widetilde{A}_{k}^{n}\mathcal{B}_{k},\,\,\,\,\widetilde{A}_{k}^{n}\in \mathcal{M}_{6\times 6}\,. \end{equation*} Then, testing against $x^{k}$ and using the orthogonality conditions (\ref{[Sec5]-MatrixOrthoU}), we get \begin{equation*} \widetilde{A}_{k}^{n}=0_{6\times 6},\quad k=0,1,\ldots ,\,n-2. \end{equation*} Thus, \begin{equation*} x\mathcal{B}_{n}=\widetilde{A}_{n+1}^{n}\mathcal{B}_{n+1}+\widetilde{A}_{n}^{n}\mathcal{B}_{n}+\widetilde{A}_{n-1}^{n}\mathcal{B}_{n-1}\,. \end{equation*} Setting $\widetilde{A}_{n+1}^{n}=A_{n}^{\langle 3\rangle }$, $\widetilde{A}_{n}^{n}=B_{n}^{\langle 3\rangle }$, and $\widetilde{A}_{n-1}^{n}=C_{n}^{\langle 3\rangle }$, the result follows. To prove that $(b)$ implies $(a)$, we know from Theorem \ref{[SEC5]-TH1} that the sequence of matrix polynomials $\{W_{n}\}$ satisfies a four term recurrence relation with matrix coefficients.
We can associate this matrix four term recurrence relation (\ref{[Sec5]-Matrix4TRR-2}) with the block matrix three term recurrence relation (\ref{[Sec5]-MatrixFavard-1}). Then, it is sufficient to show that $\mathfrak{M}$ is uniquely determined by its orthogonality conditions (\ref{[Sec5]-MatrixOrthoU}), in terms of the sequence $\{C_{n}^{\langle 3\rangle }\}_{n\geq 0}$ in (\ref{[Sec5]-MatrixFavard-1}). Next, from (\ref{[Sec5]-BnofWn}) we can rewrite the matrix four term recurrence relation (\ref{[Sec5]-Matrix4TRR-2}) as the matrix three term recurrence relation \begin{equation*} x\mathcal{B}_{n}= \begin{bmatrix} 0_{3\times 3} & 0_{3\times 3} \\ A_{2n+1}^{\langle 2\rangle } & 0_{3\times 3} \end{bmatrix} \mathcal{B}_{n+1}+ \begin{bmatrix} B_{2n}^{\langle 2\rangle } & A_{2n}^{\langle 2\rangle } \\ C_{2n+1}^{\langle 2\rangle } & B_{2n+1}^{\langle 2\rangle } \end{bmatrix} \mathcal{B}_{n}+ \begin{bmatrix} D_{2n}^{\langle 2\rangle } & C_{2n}^{\langle 2\rangle } \\ 0_{3\times 3} & D_{2n+1}^{\langle 2\rangle } \end{bmatrix} \mathcal{B}_{n-1}, \end{equation*} where the size of $\mathcal{B}_{n}$ is $6\times 3$. We give $\mathfrak{M}$ in terms of its block matrix moments, which in turn are given by the matrix coefficients in (\ref{[Sec5]-MatrixFavard-1}). There is a unique vector moment functional $\mathfrak{M}$, and hence two matrix measures $d\mathbf{M}^{1}$ and $d\mathbf{M}^{2}$, such that \begin{equation*} \mathfrak{M}\left( \mathcal{B}_{0}\right) = \begin{bmatrix} \mathrm{M}^{1}(W_{0}) & \mathrm{M}^{2}(W_{0}) \\ \mathrm{M}^{1}(W_{1}) & \mathrm{M}^{2}(W_{1}) \end{bmatrix} =C_{0}^{\langle 3\rangle }\in \mathcal{M}_{6\times 6}, \end{equation*} where $\mathrm{M}^{i}(W_{j})$ was defined in (\ref{[Sec5]-tauiWj}). For the first moment $\mathfrak{M}_{0}$ of $\mathfrak{M}$ we get \begin{equation*} \mathfrak{M}_{0}=\mathfrak{M}\left( \mathcal{B}_{0}\right) = \begin{bmatrix} \Delta _{0}^{1} & \Delta _{0}^{2} \\ \Delta _{1}^{1} & \Delta _{1}^{2} \end{bmatrix} =C_{0}^{\langle 3\rangle }.
\end{equation*} Hence, we have $\mathfrak{M}\left( \mathcal{B}_{1}\right) =0_{6\times 6}$, which from (\ref{[Sec5]-MatrixFavard-1}) also implies $0_{6\times 6}=x\mathfrak{M}\left( \mathcal{B}_{0}\right) -B_{0}^{\langle 3\rangle }\mathfrak{M}\left( \mathcal{B}_{0}\right) =\mathfrak{M}_{1}-B_{0}^{\langle 3\rangle }\mathfrak{M}_{0}$. Therefore \begin{equation*} \mathfrak{M}_{1}=B_{0}^{\langle 3\rangle }C_{0}^{\langle 3\rangle }. \end{equation*} By a similar argument, we have \begin{eqnarray*} 0_{6\times 6} &=&\mathfrak{M}\left( (A_{0}^{\langle 3\rangle })^{-1}x^{2}\mathcal{B}_{0}-(A_{0}^{\langle 3\rangle })^{-1}B_{0}^{\langle 3\rangle }x\mathcal{B}_{0}-B_{1}^{\langle 3\rangle }(A_{0}^{\langle 3\rangle })^{-1}x\mathcal{B}_{0}+B_{1}^{\langle 3\rangle }(A_{0}^{\langle 3\rangle })^{-1}B_{0}^{\langle 3\rangle }\mathcal{B}_{0}-C_{1}^{\langle 3\rangle }\mathcal{B}_{0}\right) \\ &=&(A_{0}^{\langle 3\rangle })^{-1}\mathfrak{M}_{2}-\left( (A_{0}^{\langle 3\rangle })^{-1}B_{0}^{\langle 3\rangle }+B_{1}^{\langle 3\rangle }(A_{0}^{\langle 3\rangle })^{-1}\right) \mathfrak{M}_{1}+\left( B_{1}^{\langle 3\rangle }(A_{0}^{\langle 3\rangle })^{-1}B_{0}^{\langle 3\rangle }-C_{1}^{\langle 3\rangle }\right) \mathfrak{M}_{0}\in \mathcal{M}_{6\times 6}, \end{eqnarray*} which in turn yields the second moment of $\mathfrak{M}$ \begin{equation*} \mathfrak{M}_{2}=\left( B_{0}^{\langle 3\rangle }+A_{0}^{\langle 3\rangle }B_{1}^{\langle 3\rangle }(A_{0}^{\langle 3\rangle })^{-1}\right) B_{0}^{\langle 3\rangle }C_{0}^{\langle 3\rangle }-A_{0}^{\langle 3\rangle }\left( B_{1}^{\langle 3\rangle }(A_{0}^{\langle 3\rangle })^{-1}B_{0}^{\langle 3\rangle }-C_{1}^{\langle 3\rangle }\right) C_{0}^{\langle 3\rangle }.
\end{equation*} Repeated application of this inductive process enables us to determine $\mathfrak{M}$ in a unique way through its moments, only in terms of the sequences of matrix coefficients $\{A_{n}^{\langle 3\rangle }\}_{n\geq 0}$, $\{B_{n}^{\langle 3\rangle }\}_{n\geq 0}$ and $\{C_{n}^{\langle 3\rangle }\}_{n\geq 0}$. On the other hand, because of (\ref{[Sec5]-UBn}) and (\ref{[Sec5]-MatrixFavard-1}) we have \begin{equation*} \mathfrak{M}\left( x\mathcal{B}_{n}\right) =0_{6\times 6},\quad n\geq 2. \end{equation*} Multiplying by $x$ both sides of (\ref{[Sec5]-MatrixFavard-1}), from the above result we get \begin{equation*} \mathfrak{M}\left( x^{2}\mathcal{B}_{n}\right) =0_{6\times 6},\quad n\geq 3. \end{equation*} The same conclusion can be drawn for $0<k<n$ \begin{equation} \mathfrak{M}\left( x^{k}\mathcal{B}_{n}\right) =0_{6\times 6},\quad 0<k<n, \label{[Sec5]-Concl1} \end{equation} and finally \begin{equation*} \mathfrak{M}\left( x^{n}\mathcal{B}_{n}\right) =C_{n}^{\langle 3\rangle }\mathfrak{M}\left( x^{n-1}\mathcal{B}_{n-1}\right) . \end{equation*} Notice that the repeated application of the above argument leads to \begin{equation} \mathfrak{M}\left( x^{n}\mathcal{B}_{n}\right) =C_{n}^{\langle 3\rangle }C_{n-1}^{\langle 3\rangle }C_{n-2}^{\langle 3\rangle }\cdots C_{1}^{\langle 3\rangle }C_{0}^{\langle 3\rangle }. \label{[Sec5]-Concl2} \end{equation} From (\ref{[Sec5]-Concl1}) and (\ref{[Sec5]-Concl2}) we conclude \begin{eqnarray*} \mathfrak{M}\left( x^{k}\mathcal{B}_{n}\right) &=&C_{n}^{\langle 3\rangle }C_{n-1}^{\langle 3\rangle }C_{n-2}^{\langle 3\rangle }\cdots C_{1}^{\langle 3\rangle }C_{0}^{\langle 3\rangle }\delta _{n,k} \\ &=&\Omega _{n}\delta _{n,k},\quad n,k=0,1,\ldots ,\,k\leq n, \end{eqnarray*} which are exactly the desired orthogonality conditions (\ref{[Sec5]-MatrixOrthoU}) for $\mathfrak{M}$ stated in the theorem. \end{proof} \end{document}
\begin{document} \date{} \title{Isolated Singularities of Polyharmonic Inequalities} \thispagestyle{empty} \begin{abstract} We study nonnegative classical solutions $u$ of the polyharmonic inequality \[ -\Delta^mu \ge 0 \quad \text{in}\quad B_1(0) - \{0\} \subset{\bb R}^n. \] We give necessary and sufficient conditions on integers $n\ge 2$ and $m\ge 1$ such that these solutions $u$ satisfy a pointwise a priori bound as $x\to 0$. In this case we show that the optimal bound for $u$ is \[ u(x) = O(\Gamma(x))\quad \text{as}\quad x\to 0 \] where $\Gamma$ is the fundamental solution of $-\Delta$ in ${\bb R}^n$. \noindent {\it Keywords:} Polyharmonic inequality, isolated singularity. \noindent 2010 Mathematics Subject Classification Codes: 35B09, 35B40, 35B45, 35C15, 35G05, 35R45. \end{abstract} \section{Introduction}\label{sec1} \indent It is easy to show that there does not exist a pointwise a priori bound as $x\to 0$ for $C^2$ nonnegative solutions $u(x)$ of \begin{equation}\label{eq1.1} -\Delta u\ge 0 \quad \text{in}\quad B_1(0)-\{0\} \subset {\bb R}^n, \quad n\ge 2. \end{equation} That is, given any continuous function $\psi\colon (0,1)\to (0,\infty)$ there exists a $C^2$ nonnegative solution $u(x)$ of \eqref{eq1.1} such that \[ u(x)\ne O(\psi(|x|)) \quad \text{as}\quad x\to 0. \] The same is true if the inequality in \eqref{eq1.1} is reversed. In this paper we study $C^{2m}$ nonnegative solutions of the polyharmonic inequality \begin{equation}\label{eq1.2} -\Delta^m u\ge 0\quad \text{in}\quad B_1(0) - \{0\}\subset {\bb R}^n \end{equation} where $n\ge 2$ and $m\ge 1$ are integers. We obtain the following result.
\begin{thm}\label{thm1.1} A necessary and sufficient condition on integers $n\ge 2$ and $m\ge 1$ such that $C^{2m}$ nonnegative solutions $u(x)$ of \eqref{eq1.2} satisfy a pointwise a priori bound as $x\to 0$ is that \begin{equation}\label{eq1.3} \text{either $m$ is even or $n<2m$.} \end{equation} In this case, the optimal bound for $u$ is \begin{equation}\label{eq1.5} u(x) = O(\Gamma_0(x))\quad \text{as}\quad x\to 0, \end{equation} where \begin{equation}\label{eq1.6} \Gamma_0(x)= \begin{cases} |x|^{2-n}&\text{if $n\ge 3$}\\ \noalign{ } \log \dfrac5{|x|}&\text{if $n=2$.} \end{cases} \end{equation} \end{thm} The $m$-Kelvin transform of a function $u(x)$, $x\in\Omega\subset {\bb R}^n-\{0\}$, is defined by \begin{equation}\label{eq1.5.1} v(y)=|x|^{n-2m}u(x) \qquad \text{where} \quad x=y/|y|^2. \end{equation} By direct computation, $v(y)$ satisfies \begin{equation}\label{eq1.5.2} \Delta^mv(y)=|x|^{n+2m}\Delta^mu(x). \end{equation} See \cite[p.~221]{WX} or \cite[p.~660]{X}. This fact and Theorem \ref{thm1.1} immediately imply the following result. \begin{thm}\label{thm1.2} A necessary and sufficient condition on integers $n\ge 2$ and $m\ge 1$ such that $C^{2m}$ nonnegative solutions $v(y)$ of \[ -\Delta^m v\ge 0\quad \text{in}\quad {\bb R}^n-B_1(0) \] satisfy a pointwise a priori bound as $|y|\to \infty$ is that \eqref{eq1.3} holds. In this case, the optimal bound for $v$ is \begin{equation}\label{eq1.7} v(y) = O(\Gamma_\infty(y))\quad \text{as}\quad |y|\to\infty \end{equation} where \begin{equation}\label{eq1.71} \Gamma_\infty(y) = \begin{cases} |y|^{2m-2}&\text{if $n\ge 3$}\\ |y|^{2m-2}\log(5|y|)&\text{if $n=2$.} \end{cases} \end{equation} \end{thm} The estimates \eqref{eq1.5} and \eqref{eq1.7} are optimal because $\Delta^m\Gamma_0=0=\Delta^m\Gamma_\infty$ in ${\bb R}^n-\{0\}$.
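The identity \eqref{eq1.5.2} can be checked symbolically in its simplest instance. The sketch below (an illustration, not part of the paper; it assumes the SymPy library) verifies the case $m=1$, $n=2$, where $|x|^{n-2m}=1$, so the $m$-Kelvin transform reduces to $v(y)=u(y/|y|^2)$ and the identity reads $\Delta v(y)=|y|^{-4}(\Delta u)(y/|y|^2)$; the test function $u(x_1,x_2)=x_1^2x_2$ is an arbitrary choice.

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2', real=True)
r2 = y1**2 + y2**2                 # |y|^2, so |x| = 1/|y|
x1, x2 = y1/r2, y2/r2              # inversion x = y/|y|^2

def u(a, b):
    # sample (non-harmonic) test function u(x) = x1^2 * x2
    return a**2 * b

def lap(f, a, b):
    # Laplacian in two variables
    return sp.diff(f, a, 2) + sp.diff(f, b, 2)

# m-Kelvin transform with m = 1, n = 2: |x|^(n-2m) = 1, so v(y) = u(y/|y|^2)
v = u(x1, x2)

# identity (1.5.2): Delta v(y) = |x|^(n+2m) (Delta u)(x) = |y|^(-4) (Delta u)(x)
a, b = sp.symbols('a b', real=True)
lhs = lap(v, y1, y2)
rhs = lap(u(a, b), a, b).subs({a: x1, b: x2}) / r2**2

assert sp.simplify(lhs - rhs) == 0
```

Any smooth $u$ works here; the factor $|y|^{-4}=|x|^{n+2m}$ is exactly the $n=2$, $m=1$ instance of \eqref{eq1.5.2}.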
The sufficiency of condition \eqref{eq1.3} in Theorem \ref{thm1.1} and the estimate \eqref{eq1.5} are an immediate consequence of the following theorem, which gives for $C^{2m}$ nonnegative solutions $u$ of \eqref{eq1.2} one sided estimates for $\Delta^\sigma u$, $\sigma=0,1,2,\dots, m$, and estimates for $|D^\beta u|$ for certain multi-indices $\beta$. \begin{thm}\label{thm1.3} Let $u(x)$ be a $C^{2m}$ nonnegative solution of \begin{equation}\label{eq4.1} -\Delta^mu\ge 0\quad \text{in}\quad B_2(0) -\{0\}\subset {\bb R}^n, \end{equation} where $n\ge 2$ and $m\ge 1$ are integers. Then for each nonnegative integer $\sigma\le m$ we have \begin{equation}\label{eq4.2} (-1)^{m+\sigma} \Delta^\sigma u(x) \le C\left|\frac{d^{2\sigma}}{d|x|^{2\sigma}} \Gamma_0(|x|)\right| \quad \text{for}\quad 0<|x|<1 \end{equation} where $\Gamma_0$ is given by \eqref{eq1.6} and $C$ is a positive constant independent of $x$. Moreover, if $n<2m$ and $\beta$ is a multi-index then \begin{equation}\label{eq4.3} |D^\beta u(x)| = O\left(\left|\frac{d^{|\beta|}}{d|x|^{|\beta|}} \Gamma_0(|x|)\right|\right) \quad \text{as}\quad x\to 0 \end{equation} for \begin{equation}\label{neq4.4} |\beta|\le \begin{cases} 2m-n&\text{if $n$ is odd}\\ 2m-n-1&\text{if $n$ is even.} \end{cases} \end{equation} \end{thm} There is a similar result when the singularity is at infinity. \begin{thm}\label{thm1.4} Let $v(y)$ be a $C^{2m}$ nonnegative solution of \begin{equation}\label{neq4.1} -\Delta^mv\ge 0\quad \text{in}\quad {\bb R}^n - B_{1/2}(0), \end{equation} where $n\ge 2$ and $m\ge 1$ are integers. Then for each nonnegative integer $\sigma\le m$ we have \begin{equation}\label{neq4.2} (-1)^{m+\sigma} \Delta^\sigma(|y|^{2\sigma-2m}v(y)) \le C \begin{cases} |y|^{-2}\log 5|y| &\text{if $\sigma=0$ and $n=2$}\\ |y|^{-2} &\text{if $\sigma\ge 1$ or $n\ge 3$} \end{cases} \quad \text{for} \quad |y|>1 \end{equation} where $C$ is a positive constant independent of $y$. 
Moreover, if $n<2m$ and $\beta$ is a multi-index satisfying \eqref{neq4.4} then \begin{equation}\label{neq4.3} |D^\beta v(y)| = O\left(\left|\frac{d^{|\beta|}}{d|y|^{|\beta|}} \Gamma_\infty(|y|)\right|\right) \quad \text{as}\quad |y|\to \infty \end{equation} where $\Gamma_\infty$ is given by \eqref{eq1.71}. \end{thm} Note that in Theorems \ref{thm1.3} and \ref{thm1.4} we do not require that $m$ and $n$ satisfy \eqref{eq1.3}. Inequality \eqref{neq4.2} gives one sided estimates for $\Delta^\sigma(|y|^{2\sigma-2m}v(y))$. Sometimes one sided estimates for $\Delta^\sigma v$ also hold. For example, in the important case $m=2$, $n=2$ or 3, and the singularity is at infinity, we have the following corollary of Theorem \ref{thm1.4}. \begin{cor}\label{cor1.1} Let $v(y)$ be a $C^4$ nonnegative solution of \[ -\Delta^2 v \ge 0 \quad \text{in} \quad {\bb R}^n - B_{1/2}(0) \] where $n=2$ or $3$. Then \begin{equation}\label{eq1.14.1} v(y)=O\left(\Gamma_\infty(|y|)\right) \quad\text{and} \quad |\nabla v(y)|=O\left(\left|\frac{d}{d|y|} \Gamma_\infty(|y|)\right|\right) \quad \text{as} \quad |y|\to\infty \end{equation} and \begin{equation}\label{eq1.14.2} -\Delta v(y)<C\left|\frac{d^2}{d|y|^2} \Gamma_\infty(|y|)\right|\quad \text{for} \quad |y|>1 \end{equation} where $\Gamma_\infty$ is given by \eqref{eq1.71} and $C$ is a positive constant independent of $y$. \end{cor} The proof of Theorem~\ref{thm1.3} relies heavily on a representation formula for $C^{2m}$ nonnegative solutions $u$ of \eqref{eq1.2}, which we state and prove in Section~\ref{sec3}.
This formula, which is valid for all integers $n\ge 2$ and $m\ge 1$ and which when $m=1$ is essentially a result of Brezis and Lions \cite{BL}, may also be useful for studying nonnegative solutions in a punctured neighborhood of the origin---or near $x=\infty$ via the $m$-Kelvin transform---of problems of the form \begin{equation}\label{eq1.8} -\Delta^m u = f(x,u)\quad\text{or}\quad 0\le - \Delta^mu \le f(x,u) \end{equation} when $f$ is a nonnegative function and $m$ and $n$ may or may not satisfy \eqref{eq1.3}. Examples of such problems can be found in \cite{CMS,CX,GL,H,MR,WX,X} and elsewhere. Pointwise estimates at $x=\infty$ of solutions $u$ of problems \eqref{eq1.8} can be crucial for proving existence results for entire solutions of \eqref{eq1.8} which in turn can be used to obtain, via scaling methods, existence and estimates of solutions of boundary value problems associated with \eqref{eq1.8}, see e.g. \cite{RW1,RW2}. An excellent reference for polyharmonic boundary value problems is \cite{GGS}. Lastly, weak solutions of $\Delta^mu = \mu$, where $\mu$ is a measure on a subset of ${\bb R}^n$, have been studied in \cite{CDM} and \cite{FKM}, and removable isolated singularities of $\Delta^mu=0$ have been studied in \cite{H}. \section{Preliminary results}\label{sec2} \indent In this section we state and prove four lemmas. Lemmas~\ref{lem2.1}, \ref{lem2.2}, and \ref{lem2.3} will only be used to prove Lemma~\ref{lem2.4}, which in turn will be used in Section~\ref{sec3} to prove Theorem~\ref{thm3.1}. Lemmas \ref{lem2.1} and \ref{lem2.2} are well-known. We include their very short proofs for the convenience of the reader. \begin{lem}\label{lem2.1} Let $f\colon (0,r_2]\to [0,\infty)$ be a continuous function where $r_2$ is a finite positive constant. Suppose $n\ge 2$ is an integer and the equation \begin{equation}\label{eq2.1} v'' + \frac{n-1}r v' = -f(r)\qquad 0<r<r_2 \end{equation} has a nonnegative solution $v(r)$. 
Then \begin{equation}\label{eq2.2} \int^{r_2}_0 r^{n-1} f(r)\,dr < \infty. \end{equation} \end{lem} \begin{proof} Let $r_1=r_2/2$. Integrating \eqref{eq2.1} we obtain \begin{equation}\label{eq2.3} r^{n-1}v'(r) = r^{n-1}_1 v'(r_1) + \int^{r_1}_r \rho^{n-1}f(\rho)\,d\rho \quad \text{for}\quad 0<r<r_1. \end{equation} Suppose for contradiction that \[ r^{n-1}_1v'(r_1) + \int^{r_1}_{r_0} \rho^{n-1}f(\rho)\,d\rho \ge 1 \quad \text{for some} \quad r_0\in (0,r_1). \] Then for $0<r<r_0$ we have by \eqref{eq2.3} that \[ v(r_0) - v(r) \ge \int^{r_0}_r \rho^{1-n}\,d\rho \to \infty \quad \text{as}\quad r\to 0^+ \] which contradicts the nonnegativity of $v(r)$. \end{proof} \begin{lem}\label{lem2.2} Suppose $f\colon (0,R]\to {\bb R}$ is a continuous function, $n\ge 2$ is an integer, and \begin{equation}\label{eq2.4} \int^R_0 \rho^{n-1}|f(\rho)|\,d\rho < \infty. \end{equation} Define $u_0\colon (0,R]\to {\bb R}$ by \[ u_0(r)= \begin{cases} \dfrac1{n-2} \left[\dfrac1{r^{n-2}} {\displaystyle\int^r_0} \rho^{n-1}f(\rho)\,d\rho + {\displaystyle\int^R_r} \rho f(\rho)\,d\rho\right]&\text{if $n\ge 3$}\\ \noalign{ } \left(\log \dfrac{2R}r\right) {\displaystyle\int^r_0} \rho f(\rho)\,d\rho + {\displaystyle\int^R_r} \rho\left(\log \dfrac{2R}\rho\right) f(\rho)\,d\rho &\text{if $n=2$}. \end{cases} \] Then $u = u_0(r)$ is a $C^2$ solution of \begin{equation}\label{eq2.5} -(\Delta u)(r) := -\left(u''(r) + \frac{n-1}r u'(r)\right) = f(r)\quad \text{for}\quad 0<r\le R.
\end{equation} Moreover, all solutions $u(r)$ of \eqref{eq2.5} are such that \begin{equation}\label{eq2.6} \int^r_0 \rho^{n-1}|u(\rho)|\,d\rho = \begin{cases} O(r^2)&\text{as $r\to 0^+$ if $n\ge 3$}\\ \noalign{ } O\left(r^2 \log \dfrac1r\right)&\text{as $r\to 0^+$ if $n=2$.} \end{cases} \end{equation} \end{lem} \begin{proof} By \eqref{eq2.4} the formula for $u_0(r)$ makes sense and it is easy to check that $u=u_0(r)$ is a solution of \eqref{eq2.5} and, as $r\to 0^+$, \[ u_0(r) = \begin{cases} O(r^{2-n})&\text{if $n\ge 3$}\\ \noalign{ } O\left(\log \dfrac1r\right)&\text{if $n=2$.} \end{cases} \] Thus, since all solutions of \eqref{eq2.5} are given by \[ u= u_0(r) + C_1+C_2 \begin{cases} r^{2-n}&\text{if $n\ge 3$}\\ \noalign{ } \log \dfrac1r&\text{if $n=2$} \end{cases} \] where $C_1$ and $C_2$ are arbitrary constants, we see that all solutions of \eqref{eq2.5} satisfy \eqref{eq2.6}. \end{proof} \begin{lem}\label{lem2.3} Suppose $f\colon (0,R]\to {\bb R}$ is a continuous function, $n\ge 2$ is an integer, and \begin{equation}\label{eq2.7} \int\limits_{x\in B_R(0)\subset{\bb R}^n} |f(|x|)| \,dx < \infty. \end{equation} If $u = u(|x|)$ is a radial solution of \begin{equation}\label{eq2.8} -\Delta^m u = f\quad \text{for}\quad 0<|x|\le R,\quad m\ge 1 \end{equation} then \begin{equation}\label{eq2.9} \int\limits_{|x|<r} |u(x)|\,dx = \begin{cases} O(r^2)&\text{as $r\to 0^+$ if $n\ge 3$}\\ \noalign{ } O\left(r^2 \log \dfrac1r\right)&\text{as $r\to 0^+$ if $n=2$.} \end{cases} \end{equation} \end{lem} \begin{proof} The lemma is true for $m=1$ by Lemma~\ref{lem2.2}. Assume, inductively, that the lemma is true for $m-1$ where $m\ge 2$. Let $u$ be a radial solution of \eqref{eq2.8}. Then \[ -\Delta(\Delta^{m-1}u) = -\Delta^mu = f\quad \text{for}\quad 0<|x|\le R. \] Hence by \eqref{eq2.7} and Lemma \ref{lem2.2}, \[ g := -\Delta^{m-1}u \in L^1(B_R(0)). \] So by the inductive assumption, \eqref{eq2.9} holds.
\end{proof} \begin{lem}\label{lem2.4} Suppose $f\colon \overline{B_R(0)} - \{0\}\to {\bb R}$ is a nonnegative continuous function and $u$ is a $C^{2m}$ solution of \begin{equation}\label{eq2.10} \left.\begin{array}{r} -\Delta^mu=f\\ u\ge 0\end{array}\right\} \quad \text{in} \quad \overline{B_R(0)} - \{0\} \subset {\bb R}^n, \quad n\ge 2,\quad m\ge 1. \end{equation} Then \begin{align}\label{eq2.11} &\int\limits_{|x|<r} u(x)\,dx = \begin{cases} O(r^2)&\text{as $r\to 0^+$ if $n\ge 3$}\\ \noalign{ } O\left(r^2\log \dfrac1r\right)&\text{as $r\to 0^+$ if $n=2$}\end{cases}\\ \intertext{and} \label{eq2.12} &\int\limits_{|x|<R} |x|^{2m-2} f(x)\,dx < \infty. \end{align} \end{lem} \begin{proof} By averaging \eqref{eq2.10} we can assume $f=f(|x|)$ and $u=u(|x|)$ are radial functions. The lemma is true for $m=1$ by Lemmas~\ref{lem2.1} and \ref{lem2.2}. Assume inductively that the lemma is true for $m-1$, where $m\ge 2$. Let $u = u(|x|)$ be a radial solution of \eqref{eq2.10}. Let $v=\Delta^{m-1}u$. Then $-\Delta v = -\Delta^mu=f$ and integrating this equation we obtain as in the proof of Lemma~\ref{lem2.1} that \begin{equation}\label{eq2.13} r^{n-1}v'(r) = r^{n-1}_2v'(r_2) + \int^{r_2}_r \rho^{n-1}f(\rho)\,d\rho \quad \text{for all}\quad 0<r<r_2\le R. \end{equation} We can assume \begin{equation}\label{eq2.14} \int^R_0 \rho^{n-1}f(\rho)\,d\rho = \infty \end{equation} for otherwise $\int\limits_{|x|<R} f(x)\,dx < \infty$ and hence \eqref{eq2.12} obviously holds and \eqref{eq2.11} holds by Lemma \ref{lem2.3}. By \eqref{eq2.13} and \eqref{eq2.14} we have for some $r_1\in (0,R)$ that \begin{equation}\label{eq2.15} v'(r_1)\ge 1.
\end{equation} Replacing $r_2$ with $r_1$ in \eqref{eq2.13} we get \[ v'(\rho) = \frac{r^{n-1}_1 v'(r_1)}{\rho^{n-1}} + \frac1{\rho^{n-1}} \int^{r_1}_\rho s^{n-1}f(s)\,ds \quad \text{for}\quad 0<\rho\le r_1 \] and integrating this equation from $r$ to $r_1$ we obtain for $0<r\le r_1$ that \[ -v(r) = -v(r_1) + r^{n-1}_1v'(r_1) \int^{r_1}_r \frac1{\rho^{n-1}} \,d\rho + \int^{r_1}_r \frac1{\rho^{n-1}} \int^{r_1}_\rho s^{n-1} f(s)\,ds\,d\rho \] and hence by \eqref{eq2.15} for some $r_0\in (0,r_1)$ we have \[ -\Delta^{m-1}u(r) = -v(r) > \int^{r_0}_r \frac1{\rho^{n-1}} \int^{r_0}_\rho s^{n-1}f(s)\,ds \,d\rho \ge 0\quad \text{for}\quad 0<r\le r_0. \] So by the inductive assumption, $u$ satisfies \eqref{eq2.11} and \begin{align*} \infty &> \frac1{n\omega_n} \int\limits_{|x|<r_0} |x|^{2m-4}(-v(|x|))\,dx\\ &= \int^{r_0}_0 r^{2m+n-5} (-v(r))\,dr\\ &\ge \int^{r_0}_0 r^{2m+n-5} \left(\int^{r_0}_r \frac1{\rho^{n-1}} \int^{r_0}_\rho s^{n-1} f(s)\,ds\,d\rho\right)dr\\ &= C\int^{r_0}_0 s^{2m-2} f(s)s^{n-1}ds\\ &= C\int\limits_{|x|<r_0} |x|^{2m-2} f(x)\,dx \end{align*} where in the above calculation we have interchanged the order of integration and $C$ is a positive constant which depends only on $m$ and $n$. This completes the inductive proof. \end{proof} \section{Representation formula}\label{sec3} \indent A fundamental solution of $\Delta^m$ in ${\bb R}^n$, where $n\ge 2$ and $m\ge 1$ are integers, is given by \begin{numcases}{\Phi(x):=a} (-1)^m |x|^{2m-n}, & if $2 \le 2m < n$ \label{eq3.1}\\ (-1)^{\frac{n-1}2}|x|^{2m-n}, & if $3\le n < 2m$ and $n$ is odd \label{eq3.2}\\ (-1)^{\frac{n}2} |x|^{2m-n} \log \frac5{|x|}, & if $2\le n \le 2m$ and $n$ is even \label{eq3.3} \end{numcases} where $a = a(m,n)$ is a \emph{positive} constant. In the sense of distributions, $\Delta^m\Phi=\delta$, where $\delta$ is the Dirac mass at the origin in ${\bb R}^n$. 
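That each branch of the fundamental solution is annihilated by $\Delta^m$ away from the origin can be confirmed by iterating the radial Laplacian $f\mapsto f''+\frac{n-1}{r}f'$. The following SymPy sketch (an illustration, not from the paper) checks one sample pair $(m,n)$ from each of \eqref{eq3.1}--\eqref{eq3.3}, dropping the irrelevant constant $a$ and sign:

```python
import sympy as sp

r = sp.symbols('r', positive=True)

def radial_lap(f, n):
    """Laplacian of a radial function f(r) in R^n: f'' + (n-1)/r * f'."""
    return sp.diff(f, r, 2) + (n - 1)/r * sp.diff(f, r)

def lap_power(f, n, m):
    """Apply the radial Laplacian m times, simplifying at each step."""
    for _ in range(m):
        f = sp.simplify(radial_lap(f, n))
    return f

# one sample from each branch (3.1)-(3.3)
cases = [
    (r**(2*2 - 5), 5, 2),                # (3.1): 2 <= 2m < n        (m=2, n=5)
    (r**(2*3 - 3), 3, 3),                # (3.2): 3 <= n < 2m, n odd (m=3, n=3)
    (r**(2*2 - 2)*sp.log(5/r), 2, 2),    # (3.3): n <= 2m, n even    (m=2, n=2)
]
for phi, n, m in cases:
    assert lap_power(phi, n, m) == 0     # Delta^m Phi = 0 for |x| > 0
```

The Dirac mass in $\Delta^m\Phi=\delta$ therefore lives entirely at the origin, which the radial computation cannot see.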
For $x\ne 0$ and $y\ne x$, let \begin{equation}\label{eq3.4} \Psi(x,y) = \Phi(x-y) - \sum_{|\alpha|\le 2m-3} \frac{(-y)^\alpha}{\alpha!} D^\alpha\Phi(x) \end{equation} be the error in approximating $\Phi(x-y)$ with the partial sum of degree $2m-3$ of the Taylor series of $\Phi$ at $x$. The following theorem gives the representation formula \eqref{eq3.6} for nonnegative solutions of inequality \eqref{eq3.5}. \begin{thm}\label{thm3.1} Let $u(x)$ be a $C^{2m}$ nonnegative solution of \begin{equation}\label{eq3.5} -\Delta^mu \ge 0\quad \text{in}\quad B_2(0)-\{0\}\subset {\bb R}^n, \end{equation} where $n\ge 2$ and $m\ge 1$ are integers. Then \begin{equation}\label{eq3.6} u = N+h+ \sum_{|\alpha|\le 2m-2} a_\alpha D^\alpha\Phi\quad \text{ in}\quad B_1(0)-\{0\} \end{equation} where $a_\alpha, |\alpha|\le 2m-2$, are constants, $h\in C^\infty(B_1(0))$ is a solution of \[ \Delta^mh = 0\quad \text{in}\quad B_1(0), \] and \begin{equation}\label{eq3.7} N(x) = \int\limits_{|y|\le 1} \Psi(x,y) \Delta^mu(y)\,dy\quad \text{for}\quad x\ne 0. \end{equation} \end{thm} When $m=1$, equation \eqref{eq3.6} becomes \[ u = N+h+a_0\Phi_1 \quad \text{in} \quad B_1(0)-\{0\}, \] where \[ N(x)= \int\limits_{|y|<1} \Phi_1(x-y) \Delta u(y)\,dy \] and $\Phi_1$ is the fundamental solution of the Laplacian in ${\bb R}^n$. Thus, when $m=1$, Theorem~\ref{thm3.1} is essentially a result of Brezis and Lions \cite{BL}. Futamura, Kishi, and Mizuta \cite[Theorem 1]{FKM} and \cite[Corollary 5.1]{FM} obtained a result very similar to our Theorem~\ref{thm3.1}, but using their result we would have to let the index of summation $\alpha$ in \eqref{eq3.4} range over the larger set $|\alpha|\le 2m-2$. This would not suffice for our proof of Theorem~\ref{thm1.1}. We have, however, used their idea of using the remainder term $\Psi(x,y)$ instead of $\Phi(x-y)$ in \eqref{eq3.7}. This is done so that the integral in \eqref{eq3.7} is finite. See also the book \cite[p. 137]{HK}.
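The point of subtracting the Taylor polynomial in \eqref{eq3.4} is that, by Taylor's theorem, the remainder is of order $|y|^{2m-2}$, which is what makes the integral in \eqref{eq3.7} finite. A one-variable SymPy analogue (an illustration under simplifying assumptions: $\varphi(t)=1/t$ stands in for $\Phi$ and $m=2$) exhibits this order:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
m = 2                                  # partial sum has degree 2m - 3 = 1
phi = 1/x                              # 1-D stand-in for Phi, smooth away from 0

# psi(x, y) = phi(x - y) - Taylor polynomial of phi at x, degree 2m - 3
taylor = sum((-y)**k / sp.factorial(k) * sp.diff(phi, x, k)
             for k in range(2*m - 2))
psi = phi.subs(x, x - y) - taylor

# the remainder has order exactly 2m - 2 = 2 in y
assert sp.limit(psi / y, y, 0) == 0                       # lower orders cancel
lead = sp.limit(psi / y**(2*m - 2), y, 0)
assert sp.simplify(lead - sp.diff(phi, x, 2)/2) == 0      # = phi''(x)/2
```

Here $\psi=\frac1{x-y}-\frac1x-\frac{y}{x^2}=\frac{y^2}{x^3}+O(y^3)$, matching the $|y|^{2m-2}$ rate claimed for $\Psi$.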
\begin{proof}[Proof of Theorem \ref{thm3.1}] By \eqref{eq3.5}, \begin{equation}\label{eq3.8} f := -\Delta^mu \ge 0\quad \text{in}\quad B_2(0)-\{0\}. \end{equation} Thus by Lemma \ref{lem2.4}, \begin{equation}\label{eq3.9} \int\limits_{|x|<1} |x|^{2m-2} f(x)\,dx < \infty \end{equation} and \begin{equation}\label{eq3.10} \int\limits_{|x|<r} u(x)\,dx = O\left(r^2\log \frac1r\right) \quad \text{as}\quad r\to 0^+. \end{equation} If $|\alpha| = 2m-2$ we claim \begin{equation}\label{eq3.11} D^\alpha \Phi(x) = O(\Gamma_0(x))\quad \text{as}\quad x\to 0 \end{equation} where $\Gamma_0(x)$ is given by \eqref{eq1.6}. This is clearly true if $\Phi$ is given by \eqref{eq3.1} or \eqref{eq3.2} because then $n\ge 3$ and $\Gamma_0(x)=|x|^{2-n}$. The estimate \eqref{eq3.11} is also true when $\Phi$ is given by \eqref{eq3.3} because then $|x|^{2m-n}$ is a \emph{polynomial} of degree $2m-n\le 2m-2=|\alpha|$ with equality if and only if $n=2$, and hence $D^\alpha\Phi$ has a term with $\log \frac5{|x|}$ as a factor if and only if $n=2$. This proves \eqref{eq3.11}. By Taylor's theorem and \eqref{eq3.11} we have \begin{align}\label{eq3.12} |\Psi(x,y)| &\le C|y|^{2m-2} \Gamma_0(x)\\ &\le C|y|^{2m-2} |x|^{2-n} \log \frac5{|x|} \quad \text{for}\quad |y| < \frac{|x|}2<1.\notag \end{align} Differentiating \eqref{eq3.4} with respect to $x$ we get \begin{equation}\label{eq3.13} D^\beta_x(\Psi(x,y)) = (D^\beta\Phi)(x-y) - \sum_{|\alpha|\le 2m-3} \frac{(-y)^\alpha}{\alpha!} (D^{\alpha+\beta}\Phi)(x)\quad \text{for}\quad x\ne 0 \quad \text{and}\quad y\ne x \end{equation} and so by Taylor's theorem applied to $D^\beta\Phi$ we have \begin{equation}\label{eq3.14} |D^\beta_x\Psi(x,y)| \le C|y|^{2m-2} |x|^{2-n-|\beta|} \log \frac5{|x|}\quad \text{for}\quad |y| < \frac{|x|}2 <1.
\end{equation} Also, \begin{equation}\label{eq3.15} \Delta^m_x\Psi(x,y) = 0 = \Delta^m_y\Psi(x,y)\quad \text{for} \quad x\ne 0\quad \text{and}\quad y\ne x \end{equation} (see also \cite[Lemma 4.1, p. 137]{HK}) and \begin{align} \int\limits_{|x|<r} |\Phi(x-y)|\,dx &\le Cr^{2m}\log\frac{5}{r}\notag\\ &\le C|y|^{2m-2} r^2\log \frac5r \quad \text{for} \quad 0 < r \le 2|y| < 2. \label{eq3.16} \end{align} Before continuing with the proof of Theorem \ref{thm3.1}, we state and prove the following lemma. \begin{lem}\label{lem3.1} For $|y|<1$ and $0<r<1$ we have \begin{equation}\label{eq3.17} \int\limits_{|x|<r} |\Psi(x,y)| \,dx \le C|y|^{2m-2} r^2\log \frac5r. \end{equation} \end{lem} \begin{proof} Since $\Psi(x,0)\equiv 0$ for $x\ne 0$, we can assume $y\ne 0$. \noindent {\bf Case I.} Suppose $0<r\le |y| <1$. Then by \eqref{eq3.16} \begin{align*} \int\limits_{0<|x|<r} |\Psi(x,y)|\,dx &\le \int\limits_{0<|x|<r} |\Phi(x-y)|\,dx + \sum_{|\alpha|\le 2m-3} |y|^{|\alpha|} \int\limits_{0<|x|<r} |D^\alpha\Phi(x)|\,dx\\ &\le C\left[|y|^{2m-2} r^2 \log \frac5r + \sum_{|\alpha|\le 2m-3} |y|^{|\alpha|} r^{2m-|\alpha|} \log \frac5r\right]\\ &\le C |y|^{2m-2} r^2 \log \frac5r. \end{align*} \noindent {\bf Case II.} Suppose $0<|y|<r<1$. Then by \eqref{eq3.16}, with $r=2|y|$, and \eqref{eq3.12} we have \begin{align*} \int\limits_{|x|<2r} |\Psi(x,y)|\,dx &= \int\limits_{2|y|<|x|<2r} |\Psi(x,y)|\,dx + \int\limits_{|x|<2|y|} |\Psi(x,y)|\,dx\\ &\le C\left[\, \int\limits_{2|y|<|x|<2r} |y|^{2m-2} |x|^{2-n} \log \frac5{|x|} \,dx + |y|^{2m} \log \frac5{|y|}\right.\\ &\quad \left. + \sum_{|\alpha|\le 2m-3} |y|^{|\alpha|} \int\limits_{|x|<2|y|} |D^\alpha\Phi(x)|\,dx\right]\\ &\le C\left[|y|^{2m-2} r^2 \log \frac5r + |y|^{2m-2} |y|^2 \log \frac5{|y|}\right]\\ &\le C |y|^{2m-2} r^2 \log \frac5r \end{align*} which proves the lemma. \end{proof} Continuing with the proof of Theorem \ref{thm3.1}, let $N$ be defined by \eqref{eq3.7} and let $2r\in (0,1)$ be fixed.
Then for $2r<|x|<1$ we have \begin{align*} N(x) &= \int\limits_{r<|y|<1} \left[\Phi (y-x) - \sum_{|\alpha|\le 2m-3} \frac{(-y)^\alpha}{\alpha!} D^\alpha\Phi(x)\right] \Delta^m u(y)\,dy\\ &\quad -\int\limits_{0<|y|<r} \Psi(x,y) f(y)\,dy. \end{align*} By \eqref{eq3.9} and \eqref{eq3.14}, we can move differentiation of the second integral with respect to $x$ under the integral. Hence by \eqref{eq3.15}, \begin{equation}\label{eq3.18} \Delta^m N = \Delta^m u \end{equation} for $2r< |x|<1$ and since $2r\in (0,1)$ was arbitrary, \eqref{eq3.18} holds for $0<|x|<1$. By \eqref{eq3.7}, \eqref{eq3.8}, and Lemma \ref{lem3.1}, for $0<r<1$ we have \begin{align*} \int\limits_{|x|<r} |N(x)|\,dx &\le \int\limits_{|y|<1} \left(\, \int\limits_{|x|<r} |\Psi(x,y)|\,dx\right) f(y)\,dy\\ &\le Cr^2 \log \frac5r \int\limits_{|y|<1} |y|^{2m-2} f(y)\,dy\\ &= O\left(r^2 \log \frac1r\right) \quad \text{as}\quad r\to 0^+ \end{align*} by \eqref{eq3.9}. Thus by \eqref{eq3.10} \begin{equation}\label{eq3.19} v := u-N \in L^1_{\text{loc}}(B_1(0)) \subset {\cl D}'(B_1(0)) \end{equation} and \begin{equation}\label{eq3.20} \int\limits_{|x|<r} |v(x)|\,dx = O\left(r^2\log \frac1r\right) \quad \text{as}\quad r\to 0^+. \end{equation} By \eqref{eq3.18}, \[ \Delta^mv(x) = 0\quad \text{for}\quad 0<|x|<1. \] Thus $\Delta^mv$ is a distribution in ${\cl D}'(B_1(0))$ whose support is a subset of $\{0\}$. Hence \[ \Delta^mv = \sum_{|\alpha|\le k} a_\alpha D^\alpha\delta \] is a finite linear combination of the delta function and its derivatives. We now use a method of Brezis and Lions \cite{BL} to show $a_\alpha=0$ for $|\alpha|\ge 2m-1$. Choose $\varphi\in C^\infty_0(B_1(0))$ such that \[ (-1)^{|\alpha|}(D^\alpha\varphi)(0)=a_\alpha\quad \text{for}\quad |\alpha|\le k. \] Let $\varphi_\varepsilon(x) = \varphi\big(\frac{x}\varepsilon\big)$. 
Then, for $0<\varepsilon<1$, $\varphi_\varepsilon\in C^\infty_0(B_1(0))$ and \begin{align*} \int v\Delta^m\varphi_\varepsilon &= (\Delta^mv)(\varphi_\varepsilon) = \sum_{|\alpha|\le k} a_\alpha(D^\alpha\delta)\varphi_\varepsilon\\ &= \sum_{|\alpha|\le k} a_\alpha(-1)^{|\alpha|} \delta(D^\alpha\varphi_\varepsilon) = \sum_{|\alpha|\le k} a_\alpha(-1)^{|\alpha|} (D^\alpha\varphi_\varepsilon)(0)\\ &= \sum_{|\alpha|\le k} a_\alpha(-1)^{|\alpha|} \frac1{\varepsilon^{|\alpha|}} (D^\alpha\varphi)(0) = \sum_{|\alpha|\le k} a^2_\alpha \frac1{\varepsilon^{|\alpha|}}. \end{align*} On the other hand, \begin{align*} \int v\Delta^m \varphi_\varepsilon &= \int v(x) \frac1{\varepsilon^{2m}} (\Delta^m\varphi) \left(\frac{x}\varepsilon\right) \,dx\\ &\le \frac{C}{\varepsilon^{2m}} \int\limits_{|x|<\varepsilon} |v(x)|\,dx = O\left(\frac1{\varepsilon^{2m-2}} \log \frac1\varepsilon\right) \quad \text{as}\quad \varepsilon\to 0^+ \end{align*} by \eqref{eq3.20}. Hence $a_\alpha = 0$ for $|\alpha|\ge 2m-1$ and consequently \[ \Delta^mv = \sum_{|\alpha|\le 2m-2} a_\alpha D^\alpha\delta = \sum_{|\alpha|\le 2m-2} a_\alpha D^\alpha \Delta^m\Phi. \] That is \[ \Delta^m \left(v-\sum_{|\alpha|\le 2m-2} a_\alpha D^\alpha\Phi\right) = 0 \quad \text{in}\quad {\cl D}'(B_1(0)). \] Thus for some $C^\infty$ solution of $\Delta^mh = 0$ in $B_1(0)$ we have \[ v = \sum_{|\alpha|\le 2m-2} a_\alpha D^\alpha \Phi + h\quad \text{in}\quad B_1(0) - \{0\}. \] Hence Theorem \ref{thm3.1} follows from \eqref{eq3.19}. \end{proof} \section{Proofs of Theorems \ref{thm1.3} and \ref{thm1.4} and Corollary \ref{cor1.1}} \label{sec4} \indent In this section we prove Theorems \ref{thm1.3} and \ref{thm1.4} and Corollary \ref{cor1.1}. \begin{proof}[Proof of Theorem \ref{thm1.3}] This proof is a continuation of the proof of Theorem~\ref{thm3.1}. If $m=1$ then Theorem~\ref{thm1.3} is trivially true. Hence we can assume $m\ge 2$. Also, if $\sigma=m$ then \eqref{eq4.2} follows trivially from \eqref{eq4.1}. 
Hence we can assume $\sigma\le m-1$ in \eqref{eq4.2}. If $\alpha$ and $\beta$ are multi-indices and $|\alpha|= 2m-2$ then it follows from \eqref{eq3.1}--\eqref{eq3.3} that \begin{equation}\label{eq4.4} D^{\alpha+\beta}\Phi(x) = O\left(\left|\frac{d^{|\beta|}}{d|x|^{|\beta|}} \Gamma_0(|x|)\right|\right) \quad \text{as}\quad x\to 0. \end{equation} (This is clearly true if $n=2$. If $n\ge 3$ then $|\alpha+\beta| = 2m-2+|\beta| > 2m-n$ and thus \[ D^{\alpha+\beta} \Phi(x) = O(|x|^{2m-n-(2m-2+|\beta|)}) = O\left(\left|\frac{d^{|\beta|}}{d|x|^{|\beta|}} \Gamma_0(|x|)\right|\right).) \] Let $L^b$ be any linear partial differential operator of the form $\sum\limits_{|\beta|=b} c_\beta D^\beta$, where $b$ is a nonnegative integer and $c_\beta\in {\bb R}$. Then applying Taylor's theorem to \eqref{eq3.13} and using \eqref{eq4.4} we obtain \begin{equation}\label{eq4.5} |L^b_x\Psi(x,y)|\le C|y|^{2m-2} \left|\frac{d^b}{d|x|^b} \Gamma_0(|x|)\right| \quad \text{for}\quad |y|< \frac{|x|}2<1. \end{equation} Here and later $C$ is a positive constant, independent of $x$ and $y$, whose value may change from line to line. For $0\le b\le 2m-1$ we have \[ L^bN(x)=\int\limits_{|y|<1} -L^b_x \Psi(x,y) f(y)\,dy\quad \text{for}\quad 0<|x|<1. \] Hence by \eqref{eq4.4}, \eqref{eq4.5}, \eqref{eq3.6} and \eqref{eq3.9} we have \begin{equation}\label{eq4.6} L^bu(x) \le C\left|\frac{d^b}{d|x|^b} \Gamma_0(|x|)\right|\quad \text{for}\quad 0<|x|<1 \end{equation} provided $0\le b\le 2m-1$ and \begin{equation}\label{eq4.7} -L^b_x\Psi(x,y) \le C|y|^{2m-2} \left|\frac{d^b}{d|x|^b} \Gamma_0(|x|)\right|\quad \text{for}\quad 0<\frac{|x|}2 < |y|<1. \end{equation} We will complete the proof of Theorem~\ref{thm1.3} by proving \eqref{eq4.7} for various choices for $L^b$. For the rest of the proof of Theorem~\ref{thm1.3} we will always assume \begin{equation}\label{eq4.8} 0 < \frac{|x|}2 < |y|<1 \end{equation} which implies \begin{equation}\label{eq4.9} |x-y| \le |x|+|y| \le 3|y|. 
\end{equation} \noindent {\bf Case I.} Suppose $\Phi$ is given by \eqref{eq3.1} or \eqref{eq3.2}. It follows from \eqref{eq3.13} and \eqref{eq4.8} that \begin{align*} |D^\beta_x\Psi(x,y) - D^\beta_x\Phi(x-y)| &\le C\sum_{|\alpha|\le 2m-3} |y|^{|\alpha|} |x|^{2m-n-|\alpha|-|\beta|}\\ &\le C|y|^{2m-2} |x|^{2-n-|\beta|}. \end{align*} Thus \eqref{eq4.7}, and hence \eqref{eq4.6}, holds provided $0\le b\le 2m-1$ and \begin{equation}\label{eq4.10} -(L^b\Phi)(x-y) \le C|y|^{2m-2} |x|^{2-n-b}. \end{equation} \noindent {\bf Case I(a).} Suppose $\Phi$ is given by \eqref{eq3.1}. Let $\sigma\in [0,m-1]$ be an integer, $b=2\sigma$, and $L^b=(-1)^{m+\sigma}\Delta^\sigma$. Then $0 \le b\le 2m-2$ and \[ \text{sgn}(-L^b\Phi) = (-1)^{1+m+\sigma} \text{ sgn } \Delta^\sigma\Phi = (-1)^{1+2m+\sigma} \text{ sgn } \Delta^\sigma|x|^{2m-n} = (-1)^{1+2m+2\sigma}=-1. \] Thus \eqref{eq4.10}, and hence \eqref{eq4.6} holds with $L^b=(-1)^{m+\sigma}\Delta^\sigma$ and $0\le \sigma \le m-1$. This completes the proof of Theorem~\ref{thm1.3} when $\Phi$ is given by \eqref{eq3.1}. \noindent {\bf Case I(b).} Suppose $\Phi$ is given by \eqref{eq3.2}. Then $n$ is odd. It follows from \eqref{eq4.8} and \eqref{eq4.9} that for $0\le |\beta|\le 2m-n$ we have \[ |(D^\beta\Phi)(x-y)| \le C|x-y|^{2m-n-|\beta|} \le C|y|^{2m-n-|\beta|} \le C|y|^{2m-2} |x|^{2-n-|\beta|}. \] So \eqref{eq4.10} holds with $L^b = \pm D^\beta$ and $|\beta|=b$. Hence \[ |D^\beta u(x)| \le C|x|^{2-n-|\beta|} \quad \text{for}\quad 0 \le |\beta|\le 2m-n\quad \text{and} \quad 0<|x|<1. \] In particular \[ |\Delta^\sigma u(x)|\le C|x|^{2-n-2\sigma} \quad \text{for} \quad 2\sigma \le 2m-n\quad \text{and}\quad 0<|x|<1. 
\] Also, if $2m-n+1 \le 2\sigma \le 2m-2$, $b=2\sigma$, and $L^b = (-1)^{m+\sigma}\Delta^\sigma$, then $0 \le \sigma\le m-1$ and \begin{align*} \text{sgn} (-L^b\Phi) &= (-1)^{m+\sigma+1} \text{ sgn } \Delta^\sigma\Phi = (-1)^{m+\sigma+1+\frac{n-1}2} \text{ sgn } \Delta^\sigma |x|^{2m-n}\\ &= (-1)^{m+\sigma+1+\frac{n-1}2} \text{ sgn} (\Delta^{\frac{b-(2m-n+1)}2} \Delta^{\frac{2m-n+1}2} |x|^{2m-n})\\ &= (-1)^{m+\sigma+1+\frac{n-1}2 +\sigma-m+\frac{n-1}2} = -1 \end{align*} because $\Delta^{\frac{2m-n+1}2} |x|^{2m-n} = C|x|^{-1}$ where $C>0$. So \eqref{eq4.10} holds with $L^b = (-1)^{m+\sigma}\Delta^\sigma$. Hence $(-1)^{m+\sigma} \Delta^\sigma u(x) \le C|x|^{2-n-2\sigma}$ for $0 \le \sigma\le m-1$ and $0<|x|<1$. This completes the proof of Theorem~\ref{thm1.3} when $\Phi$ is given by \eqref{eq3.2}. \noindent {\bf Case II.} Suppose $\Phi$ is given by \eqref{eq3.3}. Then $2\le n \le 2m$ and $n$ is even. To prove Theorem~\ref{thm1.3} in Case~II, it suffices to prove the following three statements. \begin{itemize} \item[(i)] Estimate \eqref{eq4.3} holds when $n=2$, $\beta=0$, and $m\ge 2$. \item[(ii)] Estimate \eqref{eq4.3} holds when $|\beta|\le 2m-n-1$ and either $n\ge 3$ or $|\beta|\ge 1$. \item[(iii)] Estimate \eqref{eq4.2} holds for $2m-n\le 2\sigma\le 2m-2$. \end{itemize} \noindent {\it Proof of (i).} Suppose $n=2$, $\beta=0$, and $m \ge 2$. Then, since $u$ is nonnegative, to prove (i) it suffices to prove \[ u(x) \le C \log \frac5{|x|}\quad \text{for}\quad 0<|x|<1 \] which holds if \eqref{eq4.7} holds with $b=0$ and $L^b=D^0 =$ id.
That is, if \begin{equation}\label{eq4.11} -\Psi(x,y) \le C|y|^{2m-2} \log \frac5{|x|}. \end{equation} By \eqref{eq3.4}, \eqref{eq4.8}, and \eqref{eq4.9} we have \begin{align*} |\Psi(x,y) - \Phi(x-y)| &\le \sum_{|\alpha|\le 2m-3} |y|^{|\alpha|} |D^\alpha\Phi(x)|\\ &\le C \sum_{|\alpha|\le 2m-3} |y|^{|\alpha|} |x|^{2m-2-|\alpha|} \log \frac5{|x|} \le C |y|^{2m-2} \log \frac5{|x|} \end{align*} and \begin{align*} |\Phi(x-y)| &= a|x-y|^{2m-2} \log \frac5{|x-y|}\\ &\le C|y|^{2m-2} \log \frac5{|y|} \le C|y|^{2m-2} \log \frac5{|x|} \end{align*} which imply \eqref{eq4.11}. This completes the proof of (i). \noindent {\it Proof of (ii).} Suppose $|\beta|\le 2m-n-1$ and either $n\ge 3$ or $|\beta|\ge 1$. Then $n+|\beta|\ge 3$ and in order to prove (ii) it suffices to prove \begin{equation}\label{eq4.12} |D^\beta_x\Psi(x,y)|\le C|y|^{2m-2} \left|\frac{d^{|\beta|}}{d|x|^{|\beta|}} \Gamma_0(|x|)\right| \end{equation} because then \eqref{eq4.7}, and hence \eqref{eq4.6}, holds with $L^b=\pm D^\beta$. Since $\Phi$ is given by \eqref{eq3.3}, $n\ge 2$ is even and \[ \Phi(x) = P(x) \log \frac5{|x|} \] where $P(x) = a(-1)^{\frac{n}2} |x|^{2m-n}$ is a \emph{polynomial} of degree $2m-n$. Since $D^\beta P$ is a polynomial of degree $2m-n-|\beta|\le 2m-3$ we have \begin{equation}\label{eq4.13} D^\beta_xP(x-y) = \sum_{|\alpha|\le 2m-3} \frac{(-y)^\alpha}{\alpha!} D^{\alpha+\beta}P(x). \end{equation} Since $D^\beta_x\Psi(x,y) = A_1+A_2+A_3$, where \begin{align*} A_1 &= D^\beta_x\Psi(x,y) - D^\beta_x\Phi(x-y) + (D^\beta_xP(x-y)) \log \frac5{|x|}\\ A_2 &= D^\beta_x\Phi(x-y) - (D^\beta_xP(x-y)) \log \frac5{|x-y|}\\ A_3 &= (D^\beta_xP(x-y)) \log \frac{|x|}{|x-y|}, \end{align*} to prove \eqref{eq4.12} it suffices to prove for $j=1,2,3$ that \begin{equation}\label{eq4.14} |A_j| \le C|y|^{2m-2} \left|\frac{d^{|\beta|}}{d|x|^{|\beta|}} \Gamma_0(|x|)\right|.
\end{equation} Since \begin{align*} \left|D^{\alpha+\beta}\Phi(x) - (D^{\alpha+\beta}P(x)) \log \frac5{|x|}\right| &= \left| \sum_{\underset{\scriptstyle |\alpha+\beta-\gamma|\ge 1}{\gamma\le \alpha+\beta}} \binom{\alpha+\beta}\gamma (D^\gamma P(x)) \left(D^{\alpha+\beta-\gamma} \log \frac5{|x|}\right)\right|\\ &\le C |x|^{2m-n-|\alpha|-|\beta|} \end{align*} it follows from \eqref{eq3.13}, \eqref{eq4.13}, and \eqref{eq4.8} that \begin{align*} |A_1| = |-A_1| &= \left|\sum_{|\alpha|\le 2m-3} \frac{(-y)^\alpha}{\alpha!} D^{\alpha+\beta} \Phi(x) - \sum_{|\alpha|\le 2m-3} \frac{(-y)^\alpha}{\alpha!} (D^{\alpha+\beta} P(x)) \log \frac5{|x|}\right|\\ &\le C \sum_{|\alpha|\le 2m-3} |y|^{|\alpha|} |x|^{2m-n-|\alpha|-|\beta|} \le C |y|^{2m-2} |x|^{2-n-|\beta|}\\ &= C|y|^{2m-2} \left|\frac{d^{|\beta|}}{d|x|^{|\beta|}} \Gamma_0(|x|)\right|. \end{align*} Thus \eqref{eq4.14} holds when $j=1$. Since $A_2=0$ when $\beta=0$, we can assume for the proof of \eqref{eq4.14} when $j=2$ that $|\beta|\ge 1$. Then by \eqref{eq4.9} and \eqref{eq4.8}, \begin{align*} |A_2| &= \left|\sum_{\underset{\scriptstyle |\beta-\alpha|\ge 1}{\alpha\le \beta}} \binom\beta\alpha (D^\alpha_xP(x-y)) \left(D^{\beta-\alpha}_x \log \frac5{|x-y|}\right)\right|\\ &\le C|x-y|^{2m-n-|\beta|} \le C|y|^{2m-n-|\beta|}\\ &\le C |y|^{2m-2} |x|^{2-n-|\beta|}\\ &= C|y|^{2m-2} \left|\frac{d^{|\beta|}}{d|x|^{|\beta|}} \Gamma_0(|x|)\right|. \end{align*} Thus \eqref{eq4.14} holds when $j=2$. Finally we prove \eqref{eq4.14} when $j=3$. Let $d=2m-n-|\beta|$.
Then $1\le d\le 2m-3$, \[ |A_3| \le C|x-y|^d \left|\log \frac{|x|}{|x-y|}\right| \] and by \eqref{eq4.8} and \eqref{eq4.9} we have \begin{align*} |x-y|^d \left|\log \frac{|x|}{|x-y|}\right| &\le \begin{cases} |x-y|^d \left(\dfrac{|x|}{|x-y|}\right)^d = |x|^d \le C|y|^{2m-2} |x|^{2-n-|\beta|} &\text{if $|x-y|\le |x|$}\\ |x-y|^d \left(\dfrac{|x-y|}{|x|}\right)^{2m-2-d} = |x-y|^{2m-2} |x|^{2-n-|\beta|} &\text{if $|x|\le |x-y|$} \end{cases}\\ &\le C|y|^{2m-2} |x|^{2-n-|\beta|} = C|y|^{2m-2} \left|\frac{d^{|\beta|}}{d|x|^{|\beta|}} \Gamma_0(|x|)\right|. \end{align*} Thus \eqref{eq4.14} holds when $j=3$. This completes the proof of \eqref{eq4.12} and hence of (ii). \noindent {\it Proof of (iii).} Suppose $2m-n\le 2\sigma \le 2m-2$. In order to prove (iii) it suffices to prove \begin{equation}\label{eq4.15} (-1)^{m+\sigma+1} \Delta^\sigma_x\Psi(x,y) \le C|y|^{2m-2} \left|\frac{d^{2\sigma}}{d|x|^{2\sigma}} \Gamma_0(|x|)\right| \end{equation} because then \eqref{eq4.7}, and hence \eqref{eq4.6}, holds with $L^b=(-1)^{m+\sigma}\Delta^\sigma$ and $b=2\sigma$. If $|\beta|=2\sigma$ then \eqref{eq4.8} implies \begin{align*} \left|\sum_{1\le |\alpha| \le 2m-3} \frac{(-y)^\alpha}{\alpha!} D^{\alpha+\beta} \Phi(x)\right| &\le C \sum_{1\le |\alpha|\le 2m-3} |y|^{|\alpha|} |x|^{2m-n-|\alpha|-|\beta|}\\ &\le C|y|^{2m-2} |x|^{2-n-|\beta|}. \end{align*} Thus it follows from \eqref{eq3.13} that \[ |\Delta^\sigma_x\Psi(x,y) - \Delta^\sigma_x\Phi(x-y) + \Delta^\sigma\Phi(x)|\le C|y|^{2m-2} |x|^{2-n-2\sigma}. \] Hence to prove \eqref{eq4.15} it suffices to prove \begin{equation}\label{eq4.16} (-1)^{m+\sigma+1} (\Delta^\sigma_x \Phi(x-y) - \Delta^\sigma\Phi(x)) \le C|y|^{2m-2} |x|^{2-n-2\sigma}. \end{equation} We divide the proof of \eqref{eq4.16} into cases. \noindent {\bf Case 1.} Suppose $2\le 2m-n+2\le 2\sigma\le 2m-2$.
Then by \eqref{eq4.8} \[ |\Delta^\sigma\Phi(x)| \le C|x|^{2m-n-2\sigma} \le C|y|^{2m-2} |x|^{2-n-2\sigma} \] and since \begin{equation}\label{eq4.17} \Delta^{\frac{2m-n}2} \left(|x|^{2m-n} \log \frac5{|x|}\right) = A \log \frac5{|x|}-B \end{equation} where $A>0$ and $B\ge 0$ are constants, we have \[ \text{sgn}((-1)^{m+\sigma+1} \Delta^\sigma\Phi(z)) = (-1)^{m+\sigma+\frac{n}2+1} (-1)^{\sigma - \frac{2m-n}2} = -1 \quad \text{for}\quad |z|>0. \] This proves \eqref{eq4.16} and hence (iii) in Case 1. \noindent {\bf Case 2.} Suppose $2\sigma =2m-n$. Then by \eqref{eq4.17} and \eqref{eq4.9} we have \begin{align*} (-1)^{m+\sigma+1} (\Delta^\sigma_x\Phi(x-y) - \Delta^\sigma\Phi(x)) &= (-1)^{\frac{n}2 +m+\sigma+1} A \log \frac{|x|}{|x-y|}\\ &= A\log \frac{|x-y|}{|x|} \le A \log \frac{3|y|}{|x|} \le A \left(\frac{3|y|}{|x|}\right)^{2m-2}\\ &= A 3^{2m-2} |y|^{2m-2} |x|^{2-n-2\sigma}. \end{align*} This proves \eqref{eq4.16} and hence (iii) in Case 2, and thereby completes the proof of Theorem~\ref{thm1.3}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm1.4}] Let $u(x)$ be defined in terms of $v(y)$ by \eqref{eq1.5.1}. Then by \eqref{eq1.5.2} and \eqref{neq4.1}, $u(x)$ is a $C^{2m}$ nonnegative solution of \eqref{eq4.1}, and hence $u(x)$ satisfies the conclusion of Theorem \ref{thm1.3}. It is a straightforward exercise to show that \eqref{neq4.3} follows from \eqref{eq4.3} when $n<2m$ and $\beta$ satisfies \eqref{neq4.4}. So to complete the proof of Theorem \ref{thm1.4} we will now prove \eqref{neq4.2}. Suppose $\sigma\le m$ is a nonnegative integer. Let $v_\sigma(y)$ be the $\sigma$-Kelvin transform of $u(x)$.
Then $v_\sigma(y)=|y|^{2\sigma-2m}v(y)$ and thus by \eqref{eq4.2}, we have for $|y|>1$ that \begin{align*} (-1)^{m+\sigma}\Delta^\sigma(|y|^{2\sigma-2m}v(y)) &=(-1)^{m+\sigma}\Delta^\sigma v_\sigma(y)\\ &=(-1)^{m+\sigma}|x|^{n+2\sigma}\Delta^\sigma u(x)\\ &\le C|x|^{n+2\sigma} \left|\frac{d^{2\sigma}}{d|x|^{2\sigma}} \Gamma_0(|x|)\right|\\ &\le C\begin{cases} |x|^2\log\frac{5}{|x|} &\text{if $\sigma=0$ and $n=2$}\\ |x|^2 &\text{if $\sigma\ge 1$ or $n\ge 3$} \end{cases} \end{align*} which implies \eqref{neq4.2} after replacing $|x|$ with $1/|y|$. \end{proof} \begin{proof}[Proof of Corollary \ref{cor1.1}] Theorem \ref{thm1.4} implies \eqref{eq1.14.1} and \[ -\Delta(|y|^{-2}v(y))\le C|y|^{-2} \qquad \text{for} \quad |y|>1 \] and thus for $|y|>1$ we have \begin{align*} -|y|^{-2}\Delta v(y)&=-\Delta(|y|^{-2} v(y))+(\Delta|y|^{-2})v(y) + 2\nabla|y|^{-2}\cdot\nabla v(y)\\ &\le -\Delta(|y|^{-2} v(y))+C\left(|y|^{-4}\Gamma_\infty(|y|)+|y|^{-3} \frac{d}{d|y|} \Gamma_\infty(|y|)\right)\\ &\le C\begin{cases} |y|^{-2} & \text{if $n=3$}\\ |y|^{-2}\log 5|y| & \text{if $n=2$} \end{cases}\\ &\le C|y|^{-2}\left|\frac{d^2}{d|y|^2} \Gamma_\infty(|y|)\right| \end{align*} which implies \eqref{eq1.14.2}. \end{proof} \section{Proof of Theorem \ref{thm1.1}} \label{sec5} \indent As noted in the introduction, the sufficiency of condition \eqref{eq1.3} in Theorem \ref{thm1.1} and the estimate \eqref{eq1.5} follow from Theorem \ref{thm1.3}, which we proved in the last section. Consequently, we can complete the proof of Theorem \ref{thm1.1} by proving the following proposition. \begin{pro}\label{pro5.1} Suppose $n\ge 2$ and $m\ge 1$ are integers such that \eqref{eq1.3} does not hold. Let $\psi\colon (0,1)\to (0,\infty)$ be a continuous function.
Then there exists a $C^\infty$ positive solution of \begin{equation}\label{eq4.3.1} -\Delta^m u \ge 0\quad \text{in}\quad B_1(0)-\{0\}\subset {\bb R}^n \end{equation} such that \begin{equation}\label{eq4.3.2} u(x)\ne O(\psi(|x|)) \quad \text{as}\quad x\to 0. \end{equation} \end{pro} \begin{proof} Let $\{x_j\}^\infty_{j=1} \subset {\bb R}^n-\{0\}$ be a sequence such that $4|x_{j+1}| < |x_j| < 1$. Choose $\alpha_j>0$ such that \begin{equation}\label{eq4.18} \frac{\alpha_j}{\psi(|x_j|)}\to \infty \quad \text{as}\quad j\to \infty. \end{equation} Since \eqref{eq1.3} does not hold, it follows from \eqref{eq3.1}--\eqref{eq3.3} that $\lim\limits_{x\to 0} - \Phi(x) = \infty$ and $-\Phi(x) > 0$ for $0<|x|<5$. Hence we can choose $R_j\in (0, |x_j|/4)$ such that \begin{equation}\label{eq4.19} \int\limits_{|z|<R_j} - \Phi(z)\,dz > R^n_j 2^j\alpha_j,\quad \text{for} \quad j=1,2,\ldots~. \end{equation} Let $\varphi\colon {\bb R}\to [0,1]$ be a $C^\infty$ function such that $\varphi(t) = 1$ for $t\le 1$ and $\varphi(t) = 0$ for $t\ge 2$. Define $f_j\in C^\infty_0(B_{\frac{|x_j|}2}(x_j))$ by \[ f_j(x) = \frac1{2^jR^n_j} \varphi\left(\frac{|x-x_j|}{R_j}\right). \] Then the functions $f_j$ have disjoint supports and \[ \int\limits_{{\bb R}^n} f_j(x)\,dx = \int\limits_{|x-x_j|<2R_j} f_j(x)\,dx \le \frac{C(n)}{2^j}. \] Thus $f := \sum\limits^\infty_{j=1} f_j\in L^1({\bb R}^n) \cap C^\infty({\bb R}^n - \{0\})$ and hence the function $u\colon B_1(0) - \{0\}\to {\bb R}$ defined by \[ u(x) := \int\limits_{|y|<1} - \Phi(x-y) f(y)\,dy \] is a $C^\infty$ positive solution of \eqref{eq4.3.1}. Also \begin{align*} u(x_j) &\ge \int\limits_{|y|<1} - \Phi(x_j-y) f_j(y)\,dy\\ &\ge \frac1{2^jR^n_j} \int\limits_{|y-x_j|<R_j} - \Phi(x_j-y)\,dy\\ &= \frac1{2^jR^n_j} \int\limits_{|z|<R_j} - \Phi(z)\,dz > \alpha_j \end{align*} by \eqref{eq4.19}. Hence \eqref{eq4.18} implies that $u$ satisfies \eqref{eq4.3.2}. \end{proof} \end{document}
\begin{document} \maketitle \section*{Introduction} Key polynomials over a valued field $(K,v)$ were introduced by S. MacLane as a tool to construct augmentations of discrete rank-one valuations on the polynomial ring $K[x]$ \cite{mcla}. As an application, MacLane designed an algorithm to compute all extensions of the given valuation $v$ on $K$ to a finite field extension $L/K$ \cite{mclb}. This work was generalized to arbitrary valuations by M. Vaqui\'e \cite{Vaq} and, independently, by F.J. Herrera, M.A. Olalla and M. Spivakovsky \cite{hos}. In the non-discrete case, \emph{limit augmented valuations} arise. The structure of their graded algebra, and the description of their sets of key polynomials, are crucial questions, linked with the study of the defect of a valuation in a finite extension, and the local uniformization problem \cite{hmos,mahboub,SS,Vaq2}. In this paper, we fix an arbitrary valuation $\mu$ on $K[x]$, and we determine the structure of its graded algebra $\mathcal{G}_\mu$, and describe its set of key polynomials $\operatorname{KP}(\mu)$, in terms of a key polynomial of minimal degree. We also characterize valuations not admitting key polynomials (Theorem \ref{kpempty}). Some of the results of the paper can be found in \cite{Vaq}, but only for augmented valuations. Also, in \cite{PP} some partial results are obtained for residually transcendental valuations, by using the fact that these valuations are determined by a minimal pair. In our approach, we do not make any assumption on $\mu$, and we derive our results in a purely abstract form, from the mere existence of key polynomials. In section 2 we study general properties of key polynomials, while in section 3 we study specific properties of key polynomials of minimal degree.
In section 4, we determine the structure of the graded algebra. Section 5 is devoted to the introduction of residual polynomial operators, based on old ideas of Ore and MacLane \cite{ore,mcla}. These operators yield a malleable and elegant tool, able to replace the onerous ``lifting'' techniques in the context of valuations constructed from minimal pairs. In section 6 we describe the set of key polynomials and we prove that a certain \emph{residual ideal operator} establishes a bijection $$ \operatorname{KP}(\mu)/\!\sim_\mu\ \longrightarrow\ \operatorname{Max}(\Delta) $$ between the set of $\mu$-equivalence classes of key polynomials and the maximal spectrum of the subring $\Delta\subset \mathcal{G}_\mu$, the piece of degree zero in the graded algebra. This result is inspired by \cite{ResidualIdeals}, where it was proved for discrete rank-one valuations. Finally, in section 7, we single out a key polynomial of minimal degree for augmented and limit augmented valuations. In this way, the structure of the graded algebra and the set of key polynomials for these valuations can be obtained from the results of this paper. \section{Graded algebra of a valuation on a polynomial ring} \subsection{Graded algebra of a valuation} Let $\Gamma$ be an ordered abelian group. Consider a valuation $$ w\colon L\longrightarrow \Gamma\cup\{\infty\} $$ on a field $L$, and denote\medskip \begin{itemize} \item ${\mathfrak m}_w\subset \mathcal{O}_w\subset L$, the maximal ideal and valuation ring of $w$. \medskip \item $k_w=\mathcal{O}_w/{\mathfrak m}_w$, the residue class field of $w$.\medskip \item $\Gamma_w=w(L^*)$, the group of values of $w$. \end{itemize}\medskip To any subring $A\subset L$ we may associate a graded algebra as follows.
For every $\alpha \in \Gamma_w$, consider the $\mathcal{O}_w$-submodules: $$ \mathcal{P}_\alpha=\{a\in A\mid w(a)\ge \alpha\}\supset \mathcal{P}_\alpha^+=\{a\in A\mid w(a)> \alpha\}, $$ leading to the graded algebra $$ \operatorname{gr}_w(A)=\bigoplus\nolimits_{\alpha\in\Gamma_w}\mathcal{P}_\alpha/\mathcal{P}_\alpha^+. $$ The product of homogeneous elements is defined in an obvious way: $$ \left(a+\mathcal{P}_\alpha^+\right)\left(b+\mathcal{P}_{\beta}^+\right)=ab+\mathcal{P}_{\alpha+\beta}^+. $$ If the classes $a+\mathcal{P}_\alpha^+$, $b+\mathcal{P}_{\beta}^+$ are different from zero, then $w(a)=\alpha$, $w(b)=\beta$. Hence, $w(ab)=\alpha+\beta$, so that $ab+\mathcal{P}_{\alpha+\beta}^+$ is different from zero too. Thus, $\operatorname{gr}_w(A)$ is an integral domain.\medskip Consider the ``initial term'' mapping \;$H_w\colon A\to \operatorname{gr}_w(A)$,\; given by $$ H_w(0)=0,\qquad H_w(a)=a+\mathcal{P}^+_{w(a)},\ \mbox{ for }a\in A,\ a\ne0. $$ Note that $H_w(a)\ne0$ if $a\ne0$. For all $a,b\in A$ we have: \begin{equation}\label{Hmu} \begin{array}{l} H_w(ab)=H_w(a)H_w(b), \\ H_w(a+b)=H_w(a)+H_w(b), \ \mbox{ if }w(a)=w(b)=w(a+b). \end{array} \end{equation} \begin{definition} Two elements $a,b\in A$ are said to be \emph{$w$-equivalent} if $H_w(a)=H_w(b)$.
In this case, we write $a\sim_w b$. This is equivalent to $w(a-b)>w(b)$. \medskip We say that $a$ is \emph{$w$-divisible} by $b$ if $H_w(a)$ is divisible by $H_w(b)$ in $\operatorname{gr}_w(A)$. In this case, we write $b\mid_w a$. This is equivalent to $a\sim_w bc$, \ for some $c\in A$.\medskip \end{definition} \subsection{Valuations on polynomial rings. General setting} Throughout the paper, we fix a field $K$ and a valuation $$\mu\colon K(x) \longrightarrow \Gamma_\mu \cup \{\infty\},$$ on the field $K(x)$ of rational functions in one indeterminate $x$. We do not make any assumption on the rank of $\mu$.\medskip We denote by $v=\mu_{|_K}$ the valuation on $K$ obtained by the restriction of $\mu$. The group of values of $v$ is a subgroup of $\Gamma_\mu$: $$ \Gamma_v=v(K^*)=\mu(K^*)\subset \Gamma_\mu. $$ For each one of the two valuations $v$, $\mu$, we consider a different graded algebra: $$ \mathcal{G}_v:=\operatorname{gr}_v(K),\qquad \mathcal{G}_\mu:=\operatorname{gr}_{\mu}(K[x]). $$ In the algebra $\mathcal{G}_v$, every non-zero homogeneous element is a unit: $$H_v(a)^{-1}=H_v(a^{-1}),\qquad \forall a\in K^*.$$ The subring of homogeneous elements of degree zero of $\mathcal{G}_v$ is $k_v$, so that $\mathcal{G}_v$ has a natural structure of $k_v$-algebra.\medskip We have a natural embedding of graded algebras, $$ \mathcal{G}_v\hookrightarrow \mathcal{G}_\mu,\qquad a+\mathcal{P}^+_{\alpha}(v)\,\mapsto\, a+\mathcal{P}^+_{\alpha}(\mu),\quad \forall\alpha\in\Gamma_v,\ \forall a\in \mathcal{P}_{\alpha}(v).
$$ The subring of $\mathcal{G}_\mu$ determined by the piece of degree zero is denoted $$ \Delta=\Delta_\mu=\mathcal{P}_0(\mu)/\mathcal{P}_0^+(\mu). $$ Since $\mathcal{O}_v\subset\mathcal{P}_0=K[x]\cap \mathcal{O}_{\mu}$, and ${\mathfrak m}_v=\mathcal{P}_0^+\cap \mathcal{O}_v\subset \mathcal{P}_0^+=K[x]\cap {\mathfrak m}_{\mu}$, there are canonical injective ring homomorphisms: $$k_v\hookrightarrow\Delta\hookrightarrow k_\mu.$$ In particular, $\Delta$ and $\mathcal{G}_\mu$ are equipped with a canonical structure of $k_v$-algebra. \medskip The aim of the paper is to analyze the structure of the graded algebra $\mathcal{G}_\mu$ and show that most of the properties of the extension $\mu/v$ are reflected in algebraic properties of the extension $\mathcal{G}_\mu/\mathcal{G}_v$. For instance, an essential role is played by the \emph{residual ideal operator} \begin{equation}\label{resIdeal} \mathcal{R}=\mathcal{R}_\mu\colon K[x]\longrightarrow I(\Delta),\qquad g\mapsto \left(H_\mu(g)\mathcal{G}_\mu\right)\cap \Delta, \end{equation} where $I(\Delta)$ is the set of ideals in $\Delta$. In sections \ref{secR} and \ref{secKPuf}, we shall study this operator $\mathcal{R}$ in more detail; it translates questions about the action of $\mu$ on $K[x]$ into ideal-theoretic problems in the ring $\Delta$. \subsection*{Commensurability} The \emph{divisible hull} of an ordered abelian group $\Gamma$ is $$ \Gamma_{\mathbb{Q}}:=\Gamma\otimes_{\mathbb Z}{\mathbb Q}.
$$ This ${\mathbb Q}$-vector space inherits a natural structure of ordered abelian group, with the same rank as $\Gamma$.\medskip The \emph{rational rank} of $\Gamma$ is defined as $\operatorname{rr}(\Gamma)=\dim_{\mathbb Q}(\Gamma_{\mathbb{Q}})$.\medskip Since $\Gamma$ has no torsion, it admits an order-preserving embedding $\Gamma\hookrightarrow\Gamma_{\mathbb{Q}}$ into its divisible hull. For every $\gamma\in\Gamma_{\mathbb{Q}}$ there exists a minimal positive integer $e$ such that $e\gamma\in\Gamma$.\medskip We say that our extension $\mu/v$ is \emph{commensurable} if $(\Gamma_v)_{\mathbb{Q}}=(\Gamma_\mu)_{\mathbb{Q}}$, or in other words, if $\operatorname{rr}(\Gamma_\mu/\Gamma_v)=0$. This is equivalent to $\Gamma_\mu/\Gamma_v$ being a torsion group.\medskip Actually, $\operatorname{rr}(\Gamma_\mu/\Gamma_v)$ takes only the values $0$ or $1$, as the following well-known inequality shows \cite[Thm. 3.4.3]{valuedfield}: \begin{equation}\label{ftalineq} \operatorname{tr.deg}(k_\mu/k_v)+\operatorname{rr}(\Gamma_\mu/\Gamma_v)\le \operatorname{tr.deg}(K(x)/K)=1. \end{equation} Finally, we fix some notation to be used throughout the paper.\medskip \noindent{\bf Notation. }For any positive integer $m$ we denote $$ K[x]_m=\{a\in K[x]\mid \deg(a)<m\}. $$ For any polynomials $f,\chi\in K[x]$, with $\deg(\chi)>0$, we denote the canonical $\chi$-expansion of $f$ by: $$ f=\sum\nolimits_{0\le s}f_s\chi^s, $$ where it is implicitly assumed that the coefficients $f_s\in K[x]$ have $\deg(f_s)<\deg(\chi)$. \section{Key polynomials.
Generic properties}\label{secKP} In this section, we introduce the concept of key polynomial for $\mu$, and we study some generic properties of key polynomials. In section \ref{secKPmindeg}, we shall see that, if $\mu$ admits key polynomials at all, then the structure of $\mathcal{G}_\mu$ is determined by any key polynomial of minimal degree. \begin{definition}\label{mu}Let $\chi\in K[x]$. We say that $\chi$ is $\mu$-irreducible if $H_{\mu}(\chi)\mathcal{G}_\mu$ is a non-zero prime ideal. We say that $\chi$ is $\mu$-minimal if $\chi\nmid_{\mu} f$ for any non-zero $f\in K[x]$ with $\deg f<\deg \chi$. \end{definition} The property of $\mu$-minimality admits a relevant characterization, given in Proposition \ref{minimal0} below. \begin{lemma}\label{minimal1} Let $f,\chi\in K[x]$. Consider a $\chi$-expansion of $f$ as follows: $$f=\sum\nolimits_{0\le s}a_s\chi^s, \qquad a_s\in K[x],\quad \chi\nmid_{\mu} a_s,\ \forall s.$$ Then, $\mu(f)=\min\{\mu(a_s\chi^s)\mid 0\le s\}$. \end{lemma} \begin{proof} Write $f=a_0+\chi q$ with $q\in K[x]$. Then, $\mu(f)\geq\min\{\mu(a_0),\mu(\chi q)\}$. A strict inequality would imply $a_0\sim_\mu-\chi q$, against our assumption. Hence, equality holds, and the result follows by iterating the argument.
\end{proof}\bigskip \begin{proposition}\label{minimal0} Let $\chi\in K[x]$ be a non-constant polynomial. The following conditions are equivalent. \begin{enumerate} \item $\chi$ is $\mu$-minimal. \item For any $f\in K[x]$, with $\chi$-expansion $f=\sum\nolimits_{0\le s}f_s\chi^s$, we have $$\mu(f)=\min\{\mu\left(f_s\chi^s\right)\mid 0\le s\}.$$ \item For any non-zero $f\in K[x]$, with $\chi$-expansion $f=\sum\nolimits_{0\le s}f_s\chi^s$, we have $$\chi\nmid_{\mu} f\,\Longleftrightarrow\,\mu(f)=\mu(f_0).$$ \end{enumerate} \end{proposition} \begin{proof} The implication (1)$\,\Longrightarrow\,$(2) follows from Lemma \ref{minimal1}. In fact, if $\chi$ is $\mu$-minimal, then $\chi\nmid_{\mu} f_s$ for all $s$, because $\deg(f_s)<\deg(\chi)$.\medskip Let us deduce (3) from (2). Take a non-zero $f\in K[x]$ and write $f=f_0+\chi q$ with $q\in K[x]$. By item (2), we have $\mu(f) \le \mu(f_0)$. If $\mu(f)<\mu(f_0)$, then $f\sim_\mu \chi q$, so $\chi\mid_\mu f$. Conversely, if $f \sim_\mu \chi g$ for some $g\in K[x]$, then $\mu(f-\chi g) > \mu(f)$.
Since the $\chi$-expansion of $f-\chi g$ has the same $0$-th coefficient $f_0$, condition $(2)$ shows that $\mu(f) <\mu(f-\chi g)\leq\mu(f_0)$. \medskip Finally, (3) implies (1). If $\deg(f)<\deg(\chi)$, then the $\chi$-expansion of $f$ is $f=f_0$. By item $(3)$, $\chi \nmid_{\mu}f$. \end{proof}\bigskip The property of $\mu$-minimality is not stable under $\mu$-equivalence. For instance, if $\chi$ is $\mu$-minimal and $\mu(\chi)>0$, then $\chi+\chi^2\sim_\mu \chi$ and $\chi+\chi^2$ is not $\mu$-minimal. However, for $\mu$-equivalent polynomials of the same degree, $\mu$-minimality is clearly preserved. \begin{definition} A \emph{key polynomial} for $\mu$ is a monic polynomial in $K[x]$ which is $\mu$-minimal and $\mu$-irreducible. The set of key polynomials for $\mu$ will be denoted by $\operatorname{KP}(\mu)$. \end{definition} \begin{lemma}\label{mid=sim} Let $\chi\in\operatorname{KP}(\mu)$, and let $f\in K[x]$ be a monic polynomial such that $\chi\mid_\mu f$ and $\deg(f)=\deg(\chi)$. Then, $\chi\sim_\mu f$ and $f$ is a key polynomial for $\mu$ too. \end{lemma} \begin{proof} The $\chi$-expansion of $f$ is of the form $f=f_0+\chi$, with $\deg(f_0)<\deg(\chi)$.
Items (2) and (3) of Proposition \ref{minimal0} show that $\mu(f)<\mu(f_0)$. Hence, $H_{\mu}(f)=H_{\mu}(\chi)$, and $f$ is $\mu$-irreducible. Since $\deg(f)=\deg(\chi)$, it is $\mu$-minimal too. \end{proof}\medskip \begin{lemma}\label{ab} Let $\chi\in\operatorname{KP}(\mu)$. \begin{enumerate} \item For $a,b\in K[x]_{\deg\chi}$, let $ab=c+d\chi$ be the $\chi$-expansion of $ab$. Then, $$\mu(ab)=\mu(c)\le\mu(d\chi).$$ \item $\chi$ is irreducible in $K[x]$. \end{enumerate} \end{lemma} \begin{proof} For any $a,b\in K[x]_{\deg\chi}$, we have $\chi\nmid_{\mu} a$, $\chi\nmid_{\mu} b$ by the $\mu$-minimality of $\chi$. Hence, $\chi\nmid_{\mu} ab$ by the $\mu$-irreducibility of $\chi$. Thus, (1) follows from Proposition \ref{minimal0}. In particular, the equality $\chi=ab$ is impossible, so that $\chi$ is irreducible. \end{proof} \subsection*{Minimal expression of $H_{\mu}(f)$ in terms of $\chi$-expansions} \begin{definition}\label{order} For $\chi\in\operatorname{KP}(\mu)$ and a non-zero $f\in K[x]$, we let $s_{\chi}(f)$ be the largest integer $s$ such that $\chi^s\mid_{\mu} f$.
Namely, $s_{\chi}(f)$ is the order with which the prime $H_{\mu}(\chi)$ divides $H_{\mu}(f)$ in $\mathcal{G}_\mu$. Accordingly, by setting \,$s_{\chi}(0):=\infty$, we get \begin{equation}\label{multiplicative} s_{\chi}(fg)=s_{\chi}(f)+s_{\chi}(g),\ \mbox{ for all }f,g\in K[x]. \end{equation} \end{definition} \begin{lemma}\label{sphi} Let $f\in K[x]$ with $\chi$-expansion $f=\sum_{0\le s}f_s\chi^s$. Denote $$ I_{\chi}(f)=\left\{s\in{\mathbb Z}_{\ge0}\mid \mu(f_s\chi^s)=\mu(f)\right\}. $$ Then, $f\sim_\mu \sum\nolimits_{s\in I_{\chi}(f)}f_s\chi^s$, and\, $s_{\chi}(f)=\min(I_{\chi}(f))$. \end{lemma} \begin{proof} Let $g=\sum\nolimits_{s\in I_{\chi}(f)}f_s\chi^s$. By construction, $f-g=\sum_{s\not\in I_{\chi}(f)}f_s\chi^s$ has $\mu$-value $\mu(f-g)>\mu(f)$. This proves $f\sim_\mu g$. In particular, $s_{\chi}(f)=s_{\chi}(g)$. If $s_0=\min(I_{\chi}(f))$, we may write, $$ g=\chi^{s_0}\left(f_{s_0}+\chi h\right), $$ for some $h\in K[x]$. By construction, $\mu(f_{s_0})=\mu(f_{s_0}+\chi h)=\mu(g/\chi^{s_0})$.
By item (3) of Proposition \ref{minimal0}, $\chi\nmid_{\mu}\left(f_{s_0}+\chi h\right)$. Therefore, $s_{\chi}(g)=s_0$. \end{proof}\medskip \begin{definition} For any $f\in K[x]$ we denote $s'_{\chi}(f)=\max(I_{\chi}(f))$. Denote for simplicity $s=s_{\chi}(f)$, $s'=s'_{\chi}(f)$. The homogeneous elements $$\operatorname{irc}(f):=H_{\mu}(f_s),\qquad \operatorname{lrc}(f):=H_{\mu}(f_{s'}) $$ are the \emph{initial residual coefficient} and \emph{leading residual coefficient} of $f$, respectively. \end{definition}\medskip The next lemma shows that $s'_{\chi}(f)$ is an invariant of the $\mu$-equivalence class of $f$. \begin{lemma}\label{sprime} If $f,g\in K[x]$ satisfy $f\sim_\mu g$, then $I_{\chi}(f)=I_{\chi}(g)$, and $f_s\sim_\mu g_s$ for all $s\in I_{\chi}(f)$. In particular, $\operatorname{irc}(f)=\operatorname{irc}(g)$ and $\operatorname{lrc}(f)=\operatorname{lrc}(g)$. \end{lemma} \begin{proof} Consider the $\chi$-expansions $f=\sum_{0\le s}f_s\chi^s$, $g=\sum_{0\le s}g_s\chi^s$. If $f\sim_\mu g$, then for any $s\ge0$ we have \begin{equation}\label{smus} \mu(f)<\mu(f-g)\le \mu\left((f_s-g_s)\chi^s\right).
\end{equation} The condition $s\in I_{\chi}(f)$, $s\not\in I_{\chi}(g)$ (or vice versa) contradicts (\ref{smus}). In fact, in that case $$\mu\left((f_s-g_s)\chi^s\right)=\mu(f),$$ because $\mu\left(f_s\chi^s\right)=\mu(f)$ and $\mu\left(g_s\chi^s\right)>\mu(g)=\mu(f)$. Also, for all $s\in I_{\chi}(f)$, we have $\mu\left(f_s\chi^s\right)=\mu(f)$ and (\ref{smus}) shows that $f_s\chi^s\sim_\mu g_s\chi^s$. Thus, $f_s\sim_\mu g_s$. \end{proof}\bigskip We shall see in section \ref{secKPmindeg} that the equality $$ s'_{\chi}(fg)= s'_{\chi}(f)+s'_{\chi}(g),\ \mbox{ for all }f,g\in K[x] $$ holds if $\chi$ is a key polynomial of minimal degree. \subsection*{Semivaluation attached to a key polynomial} \begin{lemma}\label{subgroup} Let $\chi\in\operatorname{KP}(\mu)$. Consider the subset $\Gamma_\chi\subset\Gamma_\mu$ defined as $$ \Gamma_\chi=\left\{\mu(a)\mid a\in K[x]_{\deg\chi},\ a\ne0\right\}.$$ Then, $\Gamma_\chi$ is a subgroup of $\Gamma_\mu$ and $\langle\Gamma_\chi,\mu(\chi)\rangle=\Gamma_\mu$. \end{lemma} \begin{proof} Since $\chi$ is $\mu$-minimal, Proposition \ref{minimal0} shows that $\langle\Gamma_\chi,\mu(\chi)\rangle=\Gamma_\mu$. By Lemma \ref{ab}, $\Gamma_\chi$ is closed under addition.
Take $\mu(a)\in\Gamma_\chi$ for some non-zero $a\in K[x]_{\deg\chi}$. The polynomials $a$ and $\chi$ are coprime, because $\chi$ is irreducible. Hence, they satisfy a B\'ezout identity
\begin{equation}\label{bezoutab}
ab+\chi\,d=1,\qquad \deg(b)<\deg(\chi),\quad \deg(d)<\deg(a)<\deg(\chi).
\end{equation}
Since $ab=1-d\chi$ is the $\chi$-expansion of $ab$, Lemma \ref{ab} shows that $\mu(ab)=\mu(1)=0$. Hence $-\mu(a)=\mu(b)\in\Gamma_\chi$. This shows that $\Gamma_\chi$ is a subgroup of $\Gamma_\mu$.
\end{proof}\bigskip

Let $\chi\in\operatorname{KP}(\mu)$. Consider the prime ideal $\mathfrak{p}=\chi\,K[x]$ and the field $K_\chi=K[x]/\mathfrak{p}$. By the definition of $\Gamma_\chi$, we get a well-defined onto mapping
$$v_\chi\colon K_\chi^*\twoheadrightarrow\Gamma_\chi,\qquad v_\chi(f+\mathfrak{p})=\mu(f_0),\quad \forall f\in K[x]\setminus\mathfrak{p},$$
where $f_0\in K[x]$ is the common $0$-th coefficient of the $\chi$-expansions of all polynomials in the class $f+\mathfrak{p}$.

\begin{proposition}\label{vphi}
The mapping $v_\chi$ is a valuation on $K_\chi$ extending $v$, with group of values $\Gamma_\chi$.
\end{proposition}

\begin{proof}
The mapping $v_\chi$ is a group homomorphism by Lemma \ref{ab}.
Finally,
$$v_\chi((f+g)+\mathfrak{p})=\mu(f_0+g_0)\ge \min\{\mu(f_0),\mu(g_0)\}=\min\{v_\chi(f+\mathfrak{p}),v_\chi(g+\mathfrak{p})\},$$
because $(f+g)_0=f_0+g_0$. Hence, $v_\chi$ is a valuation on $K_\chi$.
\end{proof}\bigskip

Denote the maximal ideal, the valuation ring and the residue class field of $v_\chi$ by:
$$\mathfrak{m}_\chi\subset\mathcal{O}_\chi\subset K_\chi,\qquad k_\chi=\mathcal{O}_\chi/\mathfrak{m}_\chi.$$
Let $\theta\in K_\chi=K[x]/(\chi)$ be the root of $\chi$ determined by the class of $x$. With this notation, we have $K_\chi=K(\theta)$, and
$$v_\chi(f(\theta))=\mu(f_0)=v_\chi(f_0(\theta)),\qquad \forall f\in K[x].$$
By abuse of language, we still denote by $v_\chi$ the corresponding semivaluation
$$K[x]\twoheadrightarrow K_\chi\stackrel{v_\chi}{\longrightarrow}\Gamma_\chi\cup\{\infty\}$$
with support $\chi K[x]=v_\chi^{-1}(\infty)$.\medskip

According to the definition given in (\ref{resIdeal}), the residual ideal $\mathcal{R}(\chi)$ of a key polynomial $\chi$ is a prime ideal in $\Delta$. Let us show that it is actually a maximal ideal in $\Delta$.
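Before doing so, it may help to see the semivaluation $v_\chi$ in the simplest possible situation. The following data are chosen purely for illustration and are not fixed anywhere in this paper: take $K=\mathbb{Q}$, let $v$ be the $p$-adic valuation, and let $\mu$ be the Gauss valuation
$$\mu\Big(\sum\nolimits_{0\le s}a_sx^s\Big)=\min\nolimits_{0\le s}v(a_s).$$
Then $\chi=x$ is a key polynomial for $\mu$, the prime ideal is $\mathfrak{p}=xK[x]$, and $K_\chi=K[x]/(x)\simeq\mathbb{Q}$, with $\theta=0$. The semivaluation attached to $\chi$ is simply $v_\chi(f)=\mu(f_0)=v(f(0))$, with support $xK[x]$, value group $\Gamma_\chi=\Gamma_v=\mathbb{Z}$, and residue field $k_\chi\simeq\mathbb{F}_p$.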
\begin{proposition}\label{maxideal}
If $\chi$ is a key polynomial for $\mu$, then $\mathcal{R}(\chi)$ is the kernel of the onto homomorphism
$$\Delta\twoheadrightarrow k_\chi,\qquad g+\mathcal{P}^+_0\ \mapsto\ g(\theta)+\mathfrak{m}_\chi.$$
In particular, $\mathcal{R}(\chi)$ is a maximal ideal in $\Delta$.
\end{proposition}

\begin{proof}
By Proposition \ref{minimal0}, if $g\in\mathcal{P}_0$, we have $v_\chi(g(\theta))=\mu(g_0)\ge\mu(g)\ge0$, so that $g(\theta)\in\mathcal{O}_\chi$. Thus, we get a well-defined ring homomorphism $\mathcal{P}_0\to k_\chi$. This mapping is onto, because every element in $k_\chi$ can be represented as $h(\theta)+\mathfrak{m}_\chi$ for some $h\in K[x]_{\deg\chi}$ with $v_\chi(h(\theta))\ge0$. Since $\mu(h)=v_\chi(h(\theta))\ge0$, we see that $h$ belongs to $\mathcal{P}_0$. Finally, if $g\in\mathcal{P}_0^+$, then $v_\chi(g(\theta))\ge\mu(g)>0$; thus, the above homomorphism vanishes on $\mathcal{P}_0^+$ and induces an onto mapping $\Delta\twoheadrightarrow k_\chi$. The kernel of this mapping is the set of all elements $H_\mu(f)$ for $f\in K[x]$ satisfying $\mu(f_0)>\mu(f)=0$. By Proposition \ref{minimal0}, this is equivalent to $\mu(f)=0$ and $\chi\mid_\mu f$. In other words, the kernel is $\mathcal{R}(\chi)$.
\end{proof}

\section{Key polynomials of minimal degree}\label{secKPmindeg}

In this section, we study special properties of key polynomials of minimal degree. These objects are crucial for the resolution of the two main aims of the paper:
\begin{itemize}
\item Determine the structure of $\mathcal{G}_\mu$ as a $k_v$-algebra.\medskip
\item Determine the structure of the quotient set $\operatorname{KP}(\mu)/\!\sim_\mu$.
\end{itemize}
These tasks will be carried out in sections \ref{secDelta} and \ref{secKPuf}, respectively.\medskip

Recall the embedding of graded $k_v$-algebras
$$\mathcal{G}_v\hookrightarrow\mathcal{G}_\mu.$$
Let $\xi\in\mathcal{G}_\mu$ be a non-zero homogeneous element which is algebraic over $\mathcal{G}_v$. Then, $\xi$ satisfies a homogeneous equation
$$\epsilon_0+\epsilon_1\xi+\cdots+\epsilon_m\xi^m=0,$$
with $\epsilon_0,\dots,\epsilon_m$ homogeneous elements in $\mathcal{G}_v$ such that $\deg(\epsilon_i\xi^i)$ is constant for all indices $0\le i\le m$ for which $\epsilon_i\ne0$. Since all non-zero homogeneous elements in $\mathcal{G}_v$ are units, we have:
\begin{equation}\label{xi}
\xi\mbox{ algebraic over }\mathcal{G}_v\,\Longrightarrow\,\xi\mbox{ is a unit in }\mathcal{G}_\mu,\mbox{ and $\xi$ is integral over }\mathcal{G}_v.
\end{equation}

\begin{lemma}\label{alg}
Let $\mathcal{G}_v\subset\mathcal{G}_{\mathrm{alg}}\subset\mathcal{G}_\mu$ be the subalgebra generated by all homogeneous elements in $\mathcal{G}_\mu$ which are algebraic over $\mathcal{G}_v$. If a homogeneous element $\xi\in\mathcal{G}_\mu$ is algebraic over $\mathcal{G}_{\mathrm{alg}}$, then it belongs to $\mathcal{G}_{\mathrm{alg}}$.
\end{lemma}

\begin{proof}
Since all non-zero homogeneous elements in $\mathcal{G}_{\mathrm{alg}}$ are units, the element $\xi$ is integral over $\mathcal{G}_{\mathrm{alg}}$. Hence, it is integral over $\mathcal{G}_v$, so that it belongs to $\mathcal{G}_{\mathrm{alg}}$.
\end{proof}\medskip

\begin{theorem}\label{phimindeg}
Let $\phi\in K[x]$ be a monic polynomial of minimal degree $n$ such that $H_\mu(\phi)$ is transcendental over $\mathcal{G}_v$. Then, $\phi$ is a key polynomial for $\mu$. Moreover, for $a,b\in K[x]_n$, let the $\phi$-expansion of $ab$ be
\begin{equation}\label{abc}
ab=c+d\phi,\qquad c,d\in K[x]_n.
\end{equation}
Then, $ab\sim_\mu c$.
\end{theorem}

\begin{proof}
Let us first show that $\phi$ is $\mu$-minimal. According to Proposition \ref{minimal0}, the $\mu$-minimality of $\phi$ is equivalent to
$$\mu(f)=\min\{\mu\left(f_s\phi^s\right)\mid 0\le s\},$$
for all $f\in K[x]$, where $f=\sum_{0\le s}f_s\phi^s$ is its canonical $\phi$-expansion. For any given $f\in K[x]$, let $\delta=\min\{\mu\left(f_s\phi^s\right)\mid 0\le s\}$, and consider
$$I=\{0\le s\mid \mu\left(f_s\phi^s\right)=\delta\},\qquad f_I=\sum_{s\in I}f_s\phi^s.$$
We have $\mu(f)\ge\delta$, and the desired equality $\mu(f)=\delta$ is equivalent to $\mu(f_I)=\delta$. If $\#I=1$, this is obvious.
In the case $\#I>1$, the equality $\mu(f_I)=\delta$ follows from the transcendence of $H_\mu(\phi)$ over $\mathcal{G}_v$. In fact,
$$\mu(f_I)>\delta\,\Longleftrightarrow\,\sum_{s\in I}H_\mu(f_s)H_\mu(\phi)^s=0.$$
By the minimality of $n=\deg(\phi)$, all $H_\mu(f_s)$ are algebraic over $\mathcal{G}_v$. Hence, $\mu(f_I)>\delta$ would imply that $H_\mu(\phi)$ is algebraic over $\mathcal{G}_{\mathrm{alg}}$, and hence algebraic over $\mathcal{G}_v$ by Lemma \ref{alg}. This ends the proof that $\phi$ is $\mu$-minimal.\medskip

Let us now prove the last statement of the theorem. For $a,b\in K[x]_n$ with $\phi$-expansion given by (\ref{abc}), Proposition \ref{minimal0} shows that
$$\mu(ab)=\min\{\mu(c),\mu(d\phi)\},$$
because $\phi$ is $\mu$-minimal. By equation (\ref{Hmu}), the inequality $\mu(c)\ge\mu(d\phi)$ implies
$$H_\mu(ab)=\begin{cases}H_\mu(d)H_\mu(\phi),&\mbox{ if }\mu(c)>\mu(d\phi),\\ H_\mu(c)+H_\mu(d)H_\mu(\phi),&\mbox{ if }\mu(c)=\mu(d\phi).\end{cases}$$
By the minimality of $n$, the four elements $H_\mu(a)$, $H_\mu(b)$, $H_\mu(c)$, $H_\mu(d)$ are algebraic over $\mathcal{G}_v$. Hence, $H_\mu(\phi)$ would be algebraic over $\mathcal{G}_{\mathrm{alg}}$, and hence algebraic over $\mathcal{G}_v$ by Lemma \ref{alg}. This contradicts our assumption on $H_\mu(\phi)$.
Therefore, we must have $\mu(c)<\mu(d\phi)$, leading to $ab\sim_\mu c$.\medskip

Finally, let us prove that $\phi$ is $\mu$-irreducible. Let $f,g\in K[x]$ be polynomials such that $\phi\nmid_\mu f$, $\phi\nmid_\mu g$. By Proposition \ref{minimal0},
$$\mu(f)=\mu(f_0),\qquad \mu(g)=\mu(g_0),$$
where $f_0$, $g_0$ are the $0$-th degree coefficients of the $\phi$-expansions of $f$, $g$, respectively. Let $f_0g_0=c+d\phi$ be the $\phi$-expansion of $f_0g_0$. As shown above, $f_0g_0\sim_\mu c$, so that
$$\mu(fg)=\mu(f_0g_0)=\mu(c).$$
Since $c$ is the $0$-th coefficient of the $\phi$-expansion of $fg$, the equality $\mu(fg)=\mu(c)$ shows that $\phi\nmid_\mu fg$, by Proposition \ref{minimal0}. This ends the proof that $\phi$ is $\mu$-irreducible.
\end{proof}\medskip

\begin{corollary}\label{lmn}
Consider the following three natural numbers:\medskip

$\ell=$ minimal degree of $f\in K[x]$ such that $H_\mu(f)$ is not a unit in $\mathcal{G}_\mu$.

$m=$ minimal degree of a key polynomial for $\mu$.

$n=$ minimal degree of $f\in K[x]$ such that $H_\mu(f)$ is transcendental over $\mathcal{G}_v$.\medskip

If one of these numbers exists, then all exist and \ $\ell=m=n$.
\end{corollary}

\begin{proof}
For any $f\in K[x]$ we have
$$f\in\operatorname{KP}(\mu)\,\Longrightarrow\,H_\mu(f)\not\in\mathcal{G}_\mu^*\,\Longrightarrow\,H_\mu(f)\not\in\mathcal{G}_{\mathrm{alg}}\,\Longrightarrow\,\operatorname{KP}(\mu)\ne\emptyset,$$
the last implication by Theorem \ref{phimindeg}. Hence, the conditions for the existence of these numbers are all equivalent:
\begin{align*}
\exists\,H_\mu(f)\not\in\mathcal{G}_\mu^*&\,\Longleftrightarrow\,\operatorname{KP}(\mu)\ne\emptyset\,\Longleftrightarrow\,\mathcal{G}_{\mathrm{alg}}\subsetneq\mathcal{G}_\mu.
\end{align*}
Suppose these conditions are satisfied. By Theorem \ref{phimindeg}, there exists a key polynomial $\phi$ of degree $n$. Also, for any $a\in K[x]_n$, the homogeneous element $H_\mu(a)\in\mathcal{G}_\mu$ is algebraic over $\mathcal{G}_v$, hence a unit in $\mathcal{G}_\mu$ by (\ref{xi}). Since $H_\mu(\phi)$ is not a unit, this proves that $n=\ell$. Since there are no key polynomials in $K[x]_n$, this proves $n=m$ too.
\end{proof}\medskip

\begin{corollary}\label{sprime+}
If $\phi$ is a key polynomial of minimal degree, then for all $f,g\in K[x]$ we have
$$\operatorname{lrc}(fg)=\operatorname{lrc}(f)\operatorname{lrc}(g),\qquad s'_\phi(fg)=s'_\phi(f)+s'_\phi(g).$$
\end{corollary}

\begin{proof}
Consider the $\phi$-expansions $f=\sum_{0\le s}f_s\phi^s$, $g=\sum_{0\le t}g_t\phi^t$. We may write
$$fg=\sum_{0\le j}b_j\phi^j,\qquad b_j=\sum_{s+t=j}f_sg_t.
$$
For each index $j$, there exist $s_0,t_0$ with $s_0+t_0=j$ such that
$$\mu(b_j\phi^j)\ge\mu(f_{s_0}\phi^{s_0}g_{t_0}\phi^{t_0})\ge\mu(fg),$$
the last inequality because $\mu(f_{s_0}\phi^{s_0})\ge\mu(f)$ and $\mu(g_{t_0}\phi^{t_0})\ge\mu(g)$. Hence,
$$fg\sim_\mu\sum_{j\in J}b_j\phi^j,\qquad J=\left\{0\le j\mid \mu\left(b_j\phi^j\right)=\mu(fg)\right\}.$$
For every $j\in J$, consider the set
$$I_j=\{(s,t)\mid s+t=j,\ s\in I_\phi(f),\ t\in I_\phi(g)\}.$$
Then, (\ref{Hmu}) and Theorem \ref{phimindeg} show the existence of $c_{s,t}\in K[x]_n$ such that
$$H_\mu(b_j)=\sum_{(s,t)\in I_j}H_\mu(f_sg_t)=\sum_{(s,t)\in I_j}H_\mu(c_{s,t})=H_\mu(c_j),$$
where $c_j=\sum_{(s,t)\in I_j}c_{s,t}$. Therefore, again by (\ref{Hmu}), we deduce that
$$fg\sim_\mu h,\qquad h=\sum_{j\in J}c_j\phi^j.$$
Note that $J=I_\phi(h)$ by construction. By Lemma \ref{sprime}, $I_\phi(fg)=I_\phi(h)=J$ and $\operatorname{lrc}(fg)=\operatorname{lrc}(h)$. Thus, if $\ell=s'_\phi(f)$, $m=s'_\phi(g)$, we need to show that $\max(J)=\ell+m$ and $\operatorname{lrc}(h)=H_\mu(f_\ell)H_\mu(g_m)$. If $j>\ell+m$, then for all pairs $(s,t)$ with $s+t=j$ we have either $s>\ell$ or $t>m$. Thus, $\mu(f_sg_t\phi^j)=\mu(f_s\phi^sg_t\phi^t)>\mu(fg)$, so that $j\not\in J$.
For $j=\ell+m$, the same argument applies to all pairs $(s,t)$ with $s+t=j$, except for the pair $(\ell,m)$, for which $\mu(f_\ell g_m\phi^j)=\mu(fg)$. Therefore, $j=\ell+m$ is the maximal index in $J$ and $\operatorname{lrc}(h)=H_\mu(c_{\ell+m})=H_\mu(b_{\ell+m})=H_\mu(f_\ell g_m)$.
\end{proof}

\subsection*{Units and maximal subfield of $\Delta$}

\begin{proposition}\label{smallunits}
Let $\phi$ be a key polynomial of minimal degree $n$. For any non-zero $g\in K[x]$, with $\phi$-expansion $g=\sum_{0\le s}g_s\phi^s$, the following conditions are equivalent.
\begin{enumerate}
\item $g\sim_\mu a$, for some $a\in K[x]_n$.
\item $H_\mu(g)$ is algebraic over $\mathcal{G}_v$.
\item $H_\mu(g)$ is a unit in $\mathcal{G}_\mu$.
\item $s_\phi(g)=s'_\phi(g)=0$.
\item $g\sim_\mu g_0$.
\end{enumerate}
\end{proposition}

\begin{proof}
If $g\sim_\mu a$ for some $a\in K[x]_n$, then $H_\mu(g)=H_\mu(a)$ is algebraic over $\mathcal{G}_v$ by Corollary \ref{lmn}. If $H_\mu(g)$ is algebraic over $\mathcal{G}_v$, then $H_\mu(g)$ is a unit by (\ref{xi}). If $H_\mu(g)$ is a unit, there exists $f\in K[x]$ such that $fg\sim_\mu 1$. By Lemma \ref{sprime}, $I_\phi(fg)=I_\phi(1)=\{0\}$, so that $s_\phi(fg)=s'_\phi(fg)=0$. By equation (\ref{multiplicative}) and Corollary \ref{sprime+}, we deduce that $s_\phi(g)=s'_\phi(g)=0$.
The condition $s_\phi(g)=s'_\phi(g)=0$ is equivalent to $I_\phi(g)=\{0\}$, which implies $g\sim_\mu g_0$ by Lemma \ref{sphi}. This proves (1) $\Longrightarrow$ (2) $\Longrightarrow$ (3) $\Longrightarrow$ (4) $\Longrightarrow$ (5). Finally, (5) $\Longrightarrow$ (1) is obvious.
\end{proof}\bigskip

As a consequence of this characterization of homogeneous algebraic elements, the subfield $\kappa\subset\Delta$ of all elements in $\Delta$ which are algebraic over $\mathcal{G}_v$ can be expressed as:
\begin{equation}\label{defkal}
\kappa=\Delta^*\cup\{0\}=\left\{H_\mu(a)\mid a\in K[x]_n,\ \mu(a)=0\right\}\cup\{0\}.
\end{equation}
Since $\kappa$ contains all units of $\Delta$, it is the maximal subfield contained in $\Delta$. Since every $\xi\in\kappa$ is homogeneous of degree zero, any monic homogeneous algebraic equation of $\xi$ over $\mathcal{G}_v$ has coefficients in the residue field $k_v$. Thus, $\kappa$ coincides with the algebraic closure of $k_v$ in $\Delta$.

\begin{proposition}\label{maxsubfield}
Let $\kappa\subset\Delta$ be the algebraic closure of $k_v$ in $\Delta$. For any key polynomial $\phi$ of minimal degree $n$, the mapping $\kappa\hookrightarrow\Delta\twoheadrightarrow k_\phi$ is an isomorphism.
\end{proposition}

\begin{proof}
The restriction to $\kappa$ of the onto mapping $\Delta\to k_\phi$ described in Proposition \ref{maxideal} maps
$$H_\mu(a)\ \longmapsto\ a(\theta)+\mathfrak{m}_\phi,\qquad \forall a\in K[x]_n.$$
Since these images cover all of $k_\phi$, this mapping is an isomorphism between $\kappa$ and $k_\phi$.
\end{proof}

\subsection*{Upper bound for weighted values}

Let us characterize $\mu$-minimality of a polynomial $f\in K[x]$ in terms of its $\phi$-expansion.

\begin{proposition}\label{minimal}
Let $\phi$ be a key polynomial of minimal degree $n$. For any $f\in K[x]$ with $\phi$-expansion \,$f=\sum_{s=0}^{\ell}f_s\phi^s$, $f_\ell\ne 0$,\, the following conditions are equivalent:
\begin{enumerate}
\item $f$ is $\mu$-minimal.
\item $\deg(f)=s'_\phi(f)\,n$.
\item $\deg(f_\ell)=0$ and $\mu(f)=\mu\left(f_\ell\phi^\ell\right)$.
\end{enumerate}
\end{proposition}

\begin{proof}
Since $\deg(f)=\deg(f_\ell)+\ell n$ and $s'_\phi(f)\le\ell$, item (2) is equivalent to $\deg(f_\ell)=0$ and $s'_\phi(f)=\ell$. Thus, (2) and (3) are equivalent.\medskip

Let us deduce (3) from (1). Since $\deg(f-f_\ell\phi^\ell)<\deg(f)$, the $\mu$-minimality of $f$ implies that $f-f_\ell\phi^\ell$ cannot be $\mu$-equivalent to $f$.
Hence, $\mu(f_\ell\phi^\ell)=\mu(f)$. In particular, $\ell=\max\left(I_\phi(f)\right)$. By Proposition \ref{smallunits}, $H_\mu(f_s)$ is a unit for all $f_s\ne0$. Take $b\in K[x]$ and $c_s\in K[x]_n$ such that $b\,f_\ell\sim_\mu 1$ and $bf_s\sim_\mu c_s$, for all $0\le s<\ell$. If we denote $c_\ell=1$, Lemma \ref{sphi} and equation (\ref{Hmu}) show that
$$bf\sim_\mu b\sum_{s\in I_\phi(f)}f_s\phi^s\,\Longrightarrow\,H_\mu(bf)=\sum_{s\in I_\phi(f)}H_\mu(bf_s\phi^s)=\sum_{s\in I_\phi(f)}H_\mu(c_s\phi^s).$$
Hence, $bf\sim_\mu g:=\sum_{s\in I_\phi(f)}c_s\phi^s$. Since $f\mid_\mu g$ and $f$ is $\mu$-minimal,
$$\deg(f_\ell)+\ell n=\deg(f)\le\deg(g)=\ell n,$$
which implies $\deg(f_\ell)=0$.\medskip

Finally, let us deduce (1) from (2). Take non-zero $g,h\in K[x]$ such that $g\sim_\mu fh$. By Lemma \ref{sprime} and Corollary \ref{sprime+}, $s'_\phi(g)=s'_\phi(fh)=s'_\phi(f)+s'_\phi(h)$, so that
$$\deg(f)=s'_\phi(f)\deg(\phi)\le s'_\phi(g)\deg(\phi)\le\deg(g).$$
Thus, $f$ is $\mu$-minimal.
\end{proof}\medskip

\begin{corollary}\label{minimalpower}
Take $f\in K[x]$ and $m$ a positive integer. Then, $f$ is $\mu$-minimal if and only if $f^m$ is $\mu$-minimal.
\end{corollary}

\begin{proof}
Let $\phi$ be a key polynomial of minimal degree. By Corollary \ref{sprime+}, $s'_\phi(f^m)=m\,s'_\phi(f)$. Hence, condition (2) of Proposition \ref{minimal} holds for $f$ if and only if it holds for $f^m$.
\end{proof}\bigskip

As another consequence of the criterion for $\mu$-minimality, we may introduce an important numerical invariant of a valuation on $K[x]$ admitting key polynomials.

\begin{theorem}\label{bound}
Let $\phi$ be a key polynomial of minimal degree for a valuation $\mu$ on $K[x]$. Then, for any monic non-constant $f\in K[x]$ we have
$$\mu(f)/\deg(f)\le C(\mu):=\mu(\phi)/\deg(\phi),$$
and equality holds if and only if $f$ is $\mu$-minimal.
\end{theorem}

\begin{proof}
Since $\phi$ and $f$ are monic, we may write
$$f^{\deg(\phi)}=\phi^{\deg(f)}+h,\qquad \deg(h)<\deg(\phi)\deg(f).$$
By Proposition \ref{minimal0}, $\mu\left(f^{\deg(\phi)}\right)\le\mu\left(\phi^{\deg(f)}\right)$, or equivalently, $\mu(f)/\deg(f)\le C(\mu)$. By Proposition \ref{minimal}, equality holds if and only if $f^{\deg(\phi)}$ is $\mu$-minimal, and this is equivalent to $f$ being $\mu$-minimal, by Corollary \ref{minimalpower}.
\end{proof}

\section{Structure of $\Delta$ as a $k_v$-algebra}\label{secDelta}

In this section we determine the structure of $\Delta$ as a $k_v$-algebra, and we derive some specific information about the extension $k_\mu/k_v$.
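Before entering the case-by-case analysis, the invariant $C(\mu)$ just introduced can be computed in the simplest example; the data below are chosen only for illustration and are not fixed elsewhere in the paper. Take $K=\mathbb{Q}$, let $v$ be the $p$-adic valuation, and let $\mu$ be the Gauss valuation $\mu\left(\sum_{0\le s}a_sx^s\right)=\min_{0\le s}v(a_s)$. Here $\phi=x$ is a key polynomial of minimal degree, so that
$$C(\mu)=\mu(x)/\deg(x)=0,$$
and the upper bound for weighted values states that $\mu(f)\le0$ for every monic non-constant $f$. Indeed, $\mu(f)=\min_s v(a_s)\le v(1)=0$, because the leading coefficient of $f$ is $1$; equality, that is, $\mu$-minimality of $f$, amounts to all coefficients of $f$ being $p$-integral.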
In section \ref{subsecIncomm}, we deal with the case $\mu/v$ incommensurable. We show that $\Delta=k_\mu$. In this case, all key polynomials have the same degree and they are all $\mu$-equivalent.

In section \ref{subsecKPempty}, we assume that $\mu$ admits no key polynomials. We have again $\Delta=k_\mu$. Also, we find several characterizations of the condition $\operatorname{KP}(\mu)=\emptyset$.

In section \ref{subsecComm}, we deal with the case $\mu/v$ commensurable and $\operatorname{KP}(\mu)\ne\emptyset$, which corresponds to the classical situation in which $\mu$ is \emph{residually transcendental}. In this case, $\Delta$ is isomorphic to a polynomial ring in one indeterminate with coefficients in $\kappa$, the algebraic closure of $k_v$ in $k_\mu$.

\subsection{Case $\mu/v$ incommensurable}\label{subsecIncomm}

We recall that $\mu/v$ incommensurable means $\mathbb{Q}\Gamma_v\subsetneq\mathbb{Q}\Gamma_\mu$, or equivalently, $\operatorname{rr}(\Gamma_\mu/\Gamma_v)>0$.

\begin{lemma}\label{s}
Suppose $\mu/v$ is incommensurable. Let $\phi\in K[x]$ be a monic polynomial of minimal degree $n$ satisfying $\mu(\phi)\not\in\mathbb{Q}\Gamma_v$. Then, for all $f,g\in K[x]$ we have:
\begin{enumerate}
\item $f\sim_\mu a\phi^{s(f)}$, for some $a\in K[x]_n$ and some unique integer $s(f)\ge0$.
\item $\phi$ is a key polynomial and $s(f)=s_\phi(f)=s'_\phi(f)$ for all $f\in K[x]$.
\item $f$ is a $\mu$-unit if and only if $s(f)=0$.
\item $f\mid_\mu g$ if and only if $s(f)\le s(g)$.
\item $f$ is $\mu$-irreducible if and only if $s(f)=1$.
\item $f$ is $\mu$-minimal if and only if $\deg(f)=s(f)\,n$.
\end{enumerate}
\end{lemma}

\begin{proof}
Consider the $\phi$-expansion $f=\sum_{0\le s}f_s\phi^s$, with $f_s\in K[x]_n$ for all $s$. All monomials have different $\mu$-values. In fact, an equality
$$\mu\left(f_s\phi^s\right)=\mu\left(f_t\phi^t\right)\,\Longrightarrow\,\mu(f_s)-\mu(f_t)=(t-s)\mu(\phi)$$
is possible only for $s=t$, because $\mu(\phi)\not\in\mathbb{Q}\Gamma_v$, while $\mu(f_s),\mu(f_t)$ belong to $\mathbb{Q}\Gamma_v$ by the minimality of $n$. Hence, $f\sim_\mu a\phi^s$ for the monomial of least $\mu$-value. The uniqueness of $s$ follows from the same argument as above. This proves (1).\medskip

By Proposition \ref{minimal0}, $\phi$ is $\mu$-minimal. Let us show that $\phi$ is $\mu$-irreducible. Consider $f,g\in K[x]$ such that $\phi\nmid_\mu f$, $\phi\nmid_\mu g$. By item (1), $f\sim_\mu a$, $g\sim_\mu b$, for some $a,b\in K[x]_n$; in particular,
$$\mu(fg)=\mu(ab)=\mu(a)+\mu(b)\in\mathbb{Q}\Gamma_v,$$
by the minimality of $n$. By item (1), $fg\sim_\mu c\phi^s$ for some $c\in K[x]_n$ and an integer $s\ge0$. Since $\mu(\phi)\not\in\mathbb{Q}\Gamma_v$, the condition $s\mu(\phi)=\mu(fg)-\mu(c)\in\mathbb{Q}\Gamma_v$ leads to $s=0$, so that $fg\sim_\mu c$. Since $\phi$ is $\mu$-minimal, we have $\phi\nmid_\mu fg$. Hence, $\phi$ is $\mu$-irreducible.

Once we know that $\phi$ is a key polynomial, item (1) implies $I_\phi(f)=\{s(f)\}$ for all $f\in K[x]$.
Thus, $s(f)=s_\phi(f)=s'_\phi(f)$. This proves (2).\medskip

If $f$ is a $\mu$-unit, then $\phi\nmid_\mu f$, so that $s(f)=0$. Conversely, if $s(f)=0$, then $H_\mu(f)=H_\mu(a)$ for some $a\in K[x]_n$. Since the polynomials $a$ and $\phi$ are coprime, they satisfy a B\'ezout identity
$$ab+\phi\,d=1,\qquad \deg(b)<n,\quad \deg(d)<\deg(a)<n.$$
Clearly, $ab=1-d\phi$ is the canonical $\phi$-expansion of $ab$. Since we cannot have $ab\sim_\mu d\phi$, because $\mu(\phi)\not\in\mathbb{Q}\Gamma_v$, we must have $ab\sim_\mu 1$. This proves (3).\medskip

The remaining statements follow easily from (1), (2) and (3).
\end{proof}\bigskip

In particular, all results of the last section apply to our key polynomial $\phi$, because it is a key polynomial of minimal degree.

\begin{theorem}\label{mainincomm}
Suppose $\mu/v$ is incommensurable. Let $\phi\in K[x]$ be a monic polynomial of minimal degree $n$ satisfying $\mu(\phi)\not\in\mathbb{Q}\Gamma_v$. Then,
\begin{enumerate}
\item $\phi$ is a key polynomial for $\mu$.
\item All key polynomials have degree $n$ and are $\mu$-equivalent to $\phi$. More precisely,
$$\operatorname{KP}(\mu)=\left\{\phi+a\mid a\in K[x]_n,\ \mu(a)>\mu(\phi)\right\}.$$
In particular, $\mathcal{G}_\mu$ has a unique homogeneous prime ideal $H_\mu(\phi)\mathcal{G}_\mu$.
\item The natural inclusions determine equalities \ $\kappa=\Delta=k_\mu$.
\item $\mathcal{R}(\phi)=0$, and $k_\mu$ is a finite extension of $k_v$, isomorphic to $k_\phi$.
\end{enumerate}
\end{theorem}

\begin{proof}
By items (2), (5) and (6) of Lemma \ref{s}, $\phi$ is a key polynomial for $\mu$, all key polynomials have degree $n$, and they are all $\mu$-divisible by $\phi$. By Lemma \ref{mid=sim}, they are $\mu$-equivalent to $\phi$. This proves (1) and (2).\medskip

Since $\operatorname{rr}(\Gamma_\mu/\Gamma_v)>0$, the inequality in equation (\ref{ftalineq}) shows that $k_\mu/k$ is an algebraic extension. Since $k\subset \Delta\subset k_\mu$, the ring $\Delta$ must be a field. In particular, $\kappa=\Delta$ by the remarks preceding Proposition \ref{maxsubfield}. Let us show that $\Delta=k_\mu$. An element in $k_\mu^*$ is of the form
$$(g/h)+\mathfrak{m}_\mu\in k_\mu^*,$$
with $g,\,h\in K[x]$ such that $\mu(g/h)=0$. By item (1) of Lemma \ref{s}, from $\mu(g)=\mu(h)$ we deduce that
$$
g\sim_\mu a\phi^s,\qquad h\sim_\mu b\phi^s,
$$
for some integer $s\ge0$ and polynomials $a,b\in K[x]_n$ such that $\mu(a)=\mu(b)$. Thus,
$$
\dfrac gh+\mathfrak{m}_\mu=\dfrac{a\phi^s}{b\phi^s}+\mathfrak{m}_\mu=\dfrac{a}{b}+\mathfrak{m}_\mu.
$$
By Lemma \ref{s}, $H_\mu(a)$, $H_\mu(b)$ are units in $\mathcal G_\mu$, so that $H_\mu(a)/H_\mu(b)\in\Delta^*$ is mapped to $(g/h)+\mathfrak{m}_\mu$ under the embedding $\Delta\hookrightarrow k_\mu$. This proves (3).\medskip

By Proposition \ref{maxideal}, $\mathcal{R}(\phi)$ is the kernel of the onto mapping $\Delta\twoheadrightarrow k_\phi$. Since $\Delta$ is a field, $\mathcal{R}(\phi)=0$ and this mapping is an isomorphism.
This ends the proof of (4), because $k_\phi/k$ is a finite extension.
\end{proof}

\subsection{Valuations not admitting key polynomials}\label{subsecKPempty}

\begin{theorem}\label{deltaKPempty}
If $\operatorname{KP}(\mu)=\emptyset$, the canonical embedding $\Delta\hookrightarrow k_\mu$ is an isomorphism.
\end{theorem}

\begin{proof}
An element in $k_\mu^*$ is of the form
$$(f/g)+\mathfrak{m}_\mu\in k_\mu^*,\qquad f,g\in K[x],\quad \mu(f/g)=0.$$
If $\operatorname{KP}(\mu)=\emptyset$, then Corollary \ref{lmn} shows that $H_\mu(f)$ and $H_\mu(g)$ are units in $\mathcal G_\mu$. Hence, $H_\mu(f)H_\mu(g)^{-1}$ is an element in $\Delta$ whose image in $k_\mu$ is $(f/g)+\mathfrak{m}_\mu$.
\end{proof}\medskip

\begin{theorem}\label{kpempty}
Let $\mu$ be a valuation on $K[x]$ extending $v$. The following conditions are equivalent.
\begin{enumerate}
\item $\operatorname{KP}(\mu)=\emptyset$.
\item $\mathcal G_\mu$ is algebraic over $\mathcal G_v$.
\item Every non-zero homogeneous element in $\mathcal G_\mu$ is a unit.
\item $\mu/v$ is commensurable and $k_\mu/k$ is algebraic.
\item The set of weighted values
$$W=\left\{\mu(f)/\deg(f)\mid f\in K[x]\setminus K\ \mbox{ monic}\right\} $$
does not contain a maximal element.
\end{enumerate}
\end{theorem}

\begin{proof}
By Corollary \ref{lmn}, conditions (1), (2) and (3) are equivalent.\medskip

Let us show that (1) implies (4).
If $\operatorname{KP}(\mu)=\emptyset$, then $\mu/v$ is commensurable by Theorem \ref{mainincomm}, and $k_\mu/k$ is algebraic by Theorems \ref{phimindeg} and \ref{deltaKPempty}.\medskip

Let us now deduce (5) from (4). Let $\phi$ be an arbitrary monic polynomial in $K[x]\setminus K$. Let us show that $\mu(\phi)/\deg(\phi)\in W$ is not an upper bound for this set. Since $\mathbb Q\Gamma_v=\mathbb Q\Gamma_\mu$, there exists a positive integer $e$ such that $e\mu(\phi)\in\Gamma_v$. Thus, there exists $a\in K^*$ such that $\mu(a\phi^e)=0$, so that $H_\mu(a\phi^e)\in k_\mu^*$. By hypothesis, this element is algebraic over $k$. Hence, $H_\mu(\phi^e)$ is algebraic over $\mathcal G_v$, and Lemma \ref{alg} shows that $H_\mu(\phi)$ is algebraic over $\mathcal G_v$. As mentioned in (\ref{xi}), $H_\mu(\phi)$ is integral over $\mathcal G_v$. Consider a homogeneous equation
\begin{equation}\label{eqn}
\epsilon_0+\epsilon_1H_\mu(\phi)+\cdots+\epsilon_{m-1}H_\mu(\phi)^{m-1}+H_\mu(\phi)^m=0,
\end{equation}
with $\epsilon_0,\dots,\epsilon_{m-1}$ homogeneous elements in $\mathcal G_v$ such that $\deg(\epsilon_iH_\mu(\phi)^i)=m\mu(\phi)$ for all indices $0\le i< m$ for which $\epsilon_i\ne0$.
By choosing $a_i\in K$ with $H_\mu(a_i)=\epsilon_i$ for all $i$, equation (\ref{eqn}) is equivalent to
$$
\mu\left(a_0+a_1\phi+\cdots+a_{m-1}\phi^{m-1}+\phi^m\right)>\mu(\phi^m)=m\mu(\phi).
$$
Hence, the monic polynomial $f=a_0+a_1\phi+\cdots+a_{m-1}\phi^{m-1}+\phi^m$ has a larger weighted value
$$
\mu(f)/\deg(f)>\mu(\phi^m)/\deg(f)=\mu(\phi)/\deg(\phi).
$$
Hence, the set $W$ contains no maximal element.\medskip

Finally, the implication (5)$\,\Longrightarrow\,$(1) follows from Theorem \ref{bound}.
\end{proof}

\subsection{Case $\mu/v$ commensurable and $\operatorname{KP}(\mu)\ne\emptyset$}\label{subsecComm}

\begin{theorem}\label{resfield}
Suppose $\mu/v$ commensurable and $\operatorname{KP}(\mu)\ne\emptyset$. The canonical embedding $\Delta\hookrightarrow k_\mu$ induces an isomorphism between the field of fractions of $\Delta$ and $k_\mu$.
\end{theorem}

\begin{proof}
Let $\chi\in K[x]$ be an arbitrary key polynomial for $\mu$. We must show that the induced morphism $\operatorname{Frac}(\Delta)\to k_\mu$ is onto. An element in $k_\mu^*$ is of the form
$$(f/g)+\mathfrak{m}_\mu\in k_\mu^*,\qquad f,g\in K[x],\quad \mu(f/g)=0.$$
Set $\alpha=\mu(f)=\mu(g)\in\Gamma_\mu$.
By Lemma \ref{subgroup}, $\Gamma_\mu=\langle\Gamma_\chi,\mu(\chi)\rangle$; hence, we may write
$$
-\alpha=\beta+s\mu(\chi),\qquad \beta\in\Gamma_\chi,\qquad s\in\mathbb Z.
$$
Since $\mu(\chi)\in\mathbb Q\Gamma_\chi$, there is a positive integer $e$ with $e\mu(\chi)\in\Gamma_\chi$, and we may assume that $0\le s<e$. Take $a\in K[x]_{\deg\chi}$ such that $\mu(a)=\beta$. Then, the polynomial $h=a\chi^s$ satisfies $\mu(h)=-\alpha$. Thus, $H_\mu(hf),H_\mu(hg)$ belong to $\Delta$ and the fraction $H_\mu(hf)/H_\mu(hg)$ is mapped to $(f/g)+\mathfrak{m}_\mu$ by the morphism $\operatorname{Frac}(\Delta)\to k_\mu$.
\end{proof}\bigskip

\begin{theorem}\label{Dstructure}
Suppose $\mu/v$ commensurable. Let $\phi$ be a key polynomial of minimal degree $n$, and let $e$ be the minimal positive integer such that $e\mu(\phi)\in\Gamma_\phi$. Take $u\in K[x]_n$ such that $\mu(u\phi^e)=0$. Then, $\xi=H_\mu(u\phi^e)\in\Delta$ is transcendental over $k$ and $\Delta=\kappa[\xi]$.
\end{theorem}

\begin{proof}
The element $\xi$ is not a unit, because it is divisible by the prime element $H_\mu(\phi)$. By (\ref{xi}), $\xi$ is transcendental over $k$.\medskip

Consider $H_\mu(g)\in\Delta$, for some $g\in K[x]$ with $\mu(g)=0$. Let $g=\sum_{0\le s}g_s\phi^s$ be the $\phi$-expansion of $g$.
Let $I=I_\phi(g)$ be the set of indices $s$ such that $\mu(g_s\phi^s)=0$. For each $s\in I$, the equality $s\mu(\phi)=-\mu(g_s)\in\Gamma_\phi$ implies that $s=ej_s$ for some integer $j_s\ge0$. By Proposition \ref{minimal0} and equation (\ref{Hmu}),
\begin{equation}\label{usual}
g\sim_\mu \sum\nolimits_{s\in I}g_s\phi^s,\qquad H_\mu(g)=\sum\nolimits_{s\in I}H_\mu(g_s\phi^s).
\end{equation}
For each $s\in I$, Proposition \ref{smallunits} shows that $H_\mu(u)$ is a unit, and there exists $c_s\in K[x]_n$ such that $H_\mu(c_s)=H_\mu(g_s)H_\mu(u)^{-j_s}$. Hence,
$$
H_\mu(g_s\phi^s)=H_\mu(g_s)H_\mu(u)^{-j_s}H_\mu(u)^{j_s}H_\mu(\phi^s)=H_\mu(c_s)\xi^{j_s}\in \kappa[\xi].
$$
Hence, $H_\mu(g)\in \kappa[\xi]$. This proves that $\Delta=\kappa[\xi]$.
\end{proof}\bigskip

As a consequence of Theorems \ref{mainincomm}, \ref{deltaKPempty}, \ref{kpempty}, \ref{resfield} and \ref{Dstructure}, we obtain the following computation of the residue class field $k_\mu$.

\begin{corollary}\label{kstructure}
If $\operatorname{KP}(\mu)=\emptyset$, then $\kappa=\Delta=k_\mu$ is an algebraic extension of $k$. If $\mu/v$ is incommensurable, then $\kappa=\Delta=k_\mu$ is a finite extension of $k$. If $\mu/v$ is commensurable and $\operatorname{KP}(\mu)\ne\emptyset$, then $k_\mu\simeq\kappa(y)$, where $y$ is an indeterminate.
\end{corollary}

\section{Residual polynomial operator}\label{secR}

Suppose $\mu/v$ commensurable and $\operatorname{KP}(\mu)\ne\emptyset$. Let us fix a key polynomial $\phi\in\operatorname{KP}(\mu)$ of minimal degree $n$. Let $q=H_\mu(\phi)$ be the corresponding prime element of $\mathcal G_\mu$.\medskip

For the description of the set $\operatorname{KP}(\mu)$ in section \ref{secKPuf}, we need a ``residual polynomial'' operator $R\colon K[x]\to\kappa[y]$, yielding a decomposition of any homogeneous element $H_\mu(f)\in\mathcal G_\mu$ into a product of a unit, a power of $q$, and the degree-zero element $R(f)(\xi)\in\Delta=\kappa[\xi]$ (Theorem \ref{Hmug}). As a consequence, the operator $R$ provides a computation of the residual ideal operator (Theorem \ref{RR}).\medskip

In Lemma \ref{subgroup} we proved that $\Gamma_\mu=\langle\Gamma_n,\mu(\phi)\rangle$, where $\Gamma_n$ is the subgroup
$$
\Gamma_n=\{\mu(a)\mid a\in K[x]_n,\ a\ne0\}\subset\Gamma_\mu.
$$
Let $e$ be the minimal positive integer with $e\mu(\phi)\in\Gamma_n$. By Theorem \ref{bound}, all key polynomials $\chi$ of degree $n$ have the same $\mu$-value $\mu(\chi)=\mu(\phi)$. Thus, this positive integer $e$ does not depend on the choice of $\phi$.
It will be called the \emph{relative ramification index of $\mu$}.\medskip

We fix a polynomial $u\in K[x]_n$ such that $\mu(u\phi^e)=0$, and consider
$$\xi=H_\mu(u\phi^e)=H_\mu(u)q^e\in\Delta.$$
By Theorem \ref{Dstructure}, $\xi$ is transcendental over $k$ and $\Delta=\kappa[\xi]$.\medskip

Throughout this section, for any polynomial $f\in K[x]$ we denote
$$
s(f):=s_\phi(f),\qquad s'(f):=s'_\phi(f),\qquad I(f):=I_\phi(f).
$$
For $s\in I(f)$, the condition $\mu(f_s\phi^s)=\mu(f)$ implies that $s$ belongs to a fixed class modulo $e$. In fact, for any pair $s,t\in I(f)$,
$$
\mu(f_s\phi^s)=\mu(f_t\phi^t)\,\Longrightarrow\, (t-s)\mu(\phi)=\mu(f_s)-\mu(f_t)\in\Gamma_n \,\Longrightarrow\, t\equiv s\pmod{e}.
$$
Hence, $I(f)\subset\left\{s_0,s_1,\dots,s_d\right\}$, where
$$
s_0=s(f)=\min(I(f)),\quad s_j=s_0+je,\ 0\le j\le d,\quad s_d=s'(f)=\max(I(f)).
$$
By Lemma \ref{sphi}, we may write
\begin{equation}\label{rdetre}
f\sim_\mu\sum_{s\in I(f)}f_s\phi^s\sim_\mu \phi^{s_0}\left(f_{s_0}+\cdots +f_{s_j}\phi^{je}+\cdots +f_{s_d}\phi^{de}\right),
\end{equation}
taking into account only the monomials for which $s_j\in I(f)$.
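By way of illustration, the following standard example (a monomial valuation, introduced here only for illustrative purposes) exhibits a relative ramification index $e>1$.\medskip

\noindent{\bf Example. }Let $v$ be the $p$-adic valuation on $K=\mathbb Q$, with $\Gamma_v=\mathbb Z$, and let $\mu$ be the monomial valuation on $\mathbb Q[x]$ determined by $\mu\left(\sum_i a_ix^i\right)=\min_i\left\{v(a_i)+i/2\right\}$. One checks easily that $\phi=x$ is a key polynomial of minimal degree $n=1$. Since $\Gamma_1=\Gamma_v=\mathbb Z$ and $\mu(\phi)=1/2$, the relative ramification index is $e=2$.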
\begin{definition}
Consider the residual polynomial operator
$$
R:=R_\phi\colon K[x] \longrightarrow \kappa[y],\qquad R(f)=\zeta_0+\zeta_1y+\cdots+\zeta_{d-1}y^{d-1}+y^d,
$$
for $f \ne0$, where the coefficients $\zeta_j\in\kappa$ are defined by:
\begin{equation}\label{defzetaj}
\zeta_j=\begin{cases}
H_\mu(f_{s_d})^{-1}H_\mu(u)^{d-j}H_\mu(f_{s_j}),&\mbox{ if }s_j\in I(f),\\
0,&\mbox{ if }s_j\not\in I(f).
\end{cases}
\end{equation}
Also, we define $R(0)=0$.
\end{definition}

For $s_j\in I(f)$, we have
$$\mu(f_{s_j}\phi^{je})=\mu(f_{s_d}\phi^{de})=\mu(f/\phi^{s_0}),$$
so that $\mu(f_{s_j})=\mu(f_{s_d})+(d-j)e\mu(\phi)=\mu(f_{s_d})-(d-j)\mu(u)$. Since the three homogeneous elements $H_\mu(f_{s_j})$, $H_\mu(f_{s_d})$ and $H_\mu(u)$ are units in $\mathcal G_\mu$, we deduce that $\zeta_j\in\Delta^*=\kappa^*$ for $s_j\in I(f)$. Thus, the monic residual polynomial $R(f)$ is well defined, and it has degree
\begin{equation}\label{degR}
d(f):=\deg(R(f))=d=\left(s'(f)-s(f)\right)/e.
\end{equation}
Note that $\zeta_0\ne0$, because $s_0\in I(f)$. Thus, $R(f)(0)\ne0$.\bigskip

\noindent{\bf Example. }For any monomial $f=a\phi^s$ with $a\in K[x]_n$, we have $R(f)=1$.
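In the same vein, the following computation in a standard monomial-valuation setting (used only for illustration) shows the residual coefficients at work.\medskip

\noindent{\bf Example. }Let $v$ be the $p$-adic valuation on $\mathbb Q$, and let $\mu$ be the monomial valuation on $\mathbb Q[x]$ with $\mu(x)=1/2$, so that $\phi=x$, $n=1$ and $e=2$; we may take $u=1/p$, so that $\xi=H_\mu(x^2/p)$. For $f=x^4+px^2+p^2$ we have $f_0=p^2$, $f_2=p$, $f_4=1$, with
$$\mu(p^2)=\mu(px^2)=\mu(x^4)=2,$$
so that $I(f)=\{0,2,4\}$, $s(f)=0$, $s'(f)=4$ and $d=2$. The defining formula for the $\zeta_j$ yields
$$
\zeta_0=H_\mu(1)^{-1}H_\mu(p^{-1})^2H_\mu(p^2)=1,\qquad \zeta_1=H_\mu(p^{-1})H_\mu(p)=1,
$$
so that $R(f)=y^2+y+1$, with coefficients in the prime field.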
\begin{definition}\label{defnlc}
With the above notation, the \emph{normalized leading residual coefficient}
$$
\operatorname{nlc}(f)=H_\mu(f_{s_d})H_\mu(u)^{-d}=\operatorname{lrc}(f)H_\mu(u)^{-d}\in\mathcal G_\mu^*,
$$
is a homogeneous element in $\mathcal G_\mu$ of degree $\mu(f)-s(f)\mu(\phi)$.
\end{definition}

From (\ref{multiplicative}) and Corollary \ref{sprime+}, we deduce that
$$
d(fg)=d(f)+d(g),\qquad \operatorname{nlc}(fg)=\operatorname{nlc}(f)\operatorname{nlc}(g),\quad \forall f,g\in K[x].
$$
By definition, for any $s_j\in I(f)$ we have
$$\operatorname{nlc}(f)\,\zeta_j\,\xi^j=H_\mu(f_{s_j})H_\mu(\phi^{je}).
$$
Thus, (\ref{rdetre}) leads to the following identity, which is the ``raison d'\^etre'' of $R(f)$.

\begin{theorem}\label{Hmug}
For any $f\in K[x]$, we have $H_\mu(f)=\operatorname{nlc}(f)\,q^{s(f)}R(f)(\xi)$. {$\Box$}
\end{theorem}

Note that $\operatorname{nlc}(f)$ is a unit, $q^{s(f)}$ the power of a prime element, and $R(f)(\xi)\in\Delta$. Let us derive from Theorem \ref{Hmug} some basic properties of the residual polynomials.

\begin{corollary}\label{Rmult}
For all $f,g\in K[x]$, we have $R(fg)=R(f)R(g)$.
\end{corollary}

\begin{proof}
Since the functions $H_\mu$ and $\operatorname{nlc}$ are multiplicative, Theorem \ref{Hmug} shows that
$$R(fg)(\xi)=R(f)(\xi)R(g)(\xi).$$
By Theorem \ref{Dstructure}, we deduce that $R(fg)=R(f)R(g)$.
\end{proof}\medskip

\begin{corollary}\label{equivR}
For all $f,g\in K[x]$,
$$
\begin{array}{ccl}
f \sim_\mu g&\iff&I(f)=I(g), \ \operatorname{nlc}(f)=\operatorname{nlc}(g)\ \mbox{ and }\ R(f)=R(g).\\
f \mid_\mu g&\iff&s(f)\le s(g) \ \mbox{ and }\ R(f)\mid R(g) \ \mbox{ in }\ \kappa[y].
\end{array}
$$
\end{corollary}

\begin{proof}
If $f \sim_\mu g$, then $I(f)=I(g)$ and $\operatorname{nlc}(f)=\operatorname{nlc}(g)$ by Lemma \ref{sprime}. Thus, $R(f)(\xi)=R(g)(\xi)$ by Theorem \ref{Hmug}, leading to $R(f)=R(g)$ by Theorem \ref{Dstructure}. Conversely, $I(f)=I(g)$ implies $s(f)=\min(I(f))=\min(I(g))=s(g)$. Thus, $H_\mu(f)=H_\mu(g)$ follows from Theorem \ref{Hmug}.\medskip

If $f \mid_\mu g$, then $fh\sim_\mu g$ for some $h\in K[x]$. By the first item and Corollary \ref{Rmult}, we get $R(g)=R(fh)=R(f)R(h)$, so that $R(f)\mid R(g)$. Also, since $s(g)=s(f)+s(h)$, we deduce that $s(f)\le s(g)$. Conversely, $s(f)\le s(g)$ and $R(f)\mid R(g)$ imply $H_\mu(f)\mid H_\mu(g)$ by Theorem \ref{Hmug}, having in mind that $\operatorname{nlc}(f)$, $\operatorname{nlc}(g)$ are units in $\mathcal G_\mu$.
\end{proof}\medskip

\begin{corollary}\label{Rconstruct}
Let $s\in\mathbb Z_{\ge0}$, $\zeta\in \kappa^*$, and $\psi\in\kappa[y]$ a monic polynomial with $\psi(0)\ne0$. Then, there exists a polynomial $f\in K[x]$ such that
$$
s(f)=s,\qquad \operatorname{nlc}(f)=\zeta,\qquad R(f)=\psi.
$$
\end{corollary}

\begin{proof}
Let $\psi=\zeta_0+\zeta_1y+\cdots+\zeta_{d-1}y^{d-1}+\zeta_dy^d$, with $\zeta_0,\dots,\zeta_{d-1}\in\kappa$ and $\zeta_d=1$. Let $I$ be the set of indices $0\le j\le d$ with $\zeta_j\ne0$. By (\ref{defkal}), for each $j\in I$ we may take $f_j\in K[x]_n$ such that $H_\mu(f_j)=\zeta H_\mu(u)^j\zeta_j$. Then, $f=\phi^s\left(f_0+\cdots+f_j\phi^{je}+\cdots +f_d\phi^{de}\right)$ satisfies all our requirements.
\end{proof}\medskip

\begin{theorem}\label{RR}
For any non-zero $f\in K[x]$,
$$\mathcal{R}(f)=\xi^{\lceil s(f)/e\rceil}R(f)(\xi)\Delta.
$$
\end{theorem}

\begin{proof}
By definition, an element in the ideal $\mathcal{R}(f)$ is of the form $H_\mu(h)$ for some $h\in K[x]$ such that $f\mid_\mu h$ and $\mu(h)=0$. The condition $\mu(h)=0$ implies $e\mid s(h)$. By Theorem \ref{Hmug},
$$H_\mu(h)=\xi^{s(h)/e}H_\mu(u)^{-s(h)/e}\operatorname{nlc}(h)\,R(h)(\xi).$$
On the other hand, Corollary \ref{equivR} shows that $s(f)\le s(h)$ and $R(f)\mid R(h)$. Therefore, $H_\mu(h)$ belongs to the ideal $\xi^{\lceil s(f)/e\rceil}R(f)(\xi)\Delta$. Conversely, if $m=\lceil s(f)/e\rceil$, then Theorem \ref{Hmug} shows that
$$\xi^mR(f)(\xi)=q^{me}H_\mu(u)^mR(f)(\xi)=H_\mu(f)q^{me-s(f)}\operatorname{nlc}(f)^{-1}H_\mu(u)^m\in\mathcal{R}(f),$$
because $me\ge s(f)$ and $\operatorname{nlc}(f)$, $H_\mu(u)$ are units.
\end{proof}

\subsection{Dependence of $R$ on the choice of $u$}

Let $u^*\in K[x]_n$ be another polynomial such that $\mu(u^*\phi^e)=0$, and denote
$$
\xi^*=H_\mu(u^*\phi^e)=H_\mu(u^*)q^e\in\Delta.
$$
Since $\mu(u)=\mu(u^*)$, and $H_\mu(u)$, $H_\mu(u^*)$ are units in $\mathcal G_\mu$, we have
$$\xi^*=\sigma^{-1}\xi,\quad \mbox{ where }\sigma=H_\mu(u)H_\mu(u^*)^{-1}\in\Delta^*=\kappa^*.$$
Let $R^*$ be the residual polynomial operator associated with this choice of $u^*$. For any $f\in K[x]$, suppose that $R^*(f)=\zeta^*_0+\zeta^*_1y+\cdots + \zeta^*_{d-1}y^{d-1}+y^d$. By the very definition (\ref{defzetaj}) of the residual coefficients,
$$
\zeta^*_j=\sigma^{j-d}\zeta_j,\qquad 0\le j\le d.
$$
We deduce the following relationship between $R$ and $R^*$:
$$
R^*(f)(y)=\sigma^{-d}R(f)(\sigma y),\qquad \forall f\in K[x].
$$

\subsection{Dependence of $R$ on the choice of $\phi$}

Let $\phi_*$ be another key polynomial with minimal degree $n$, and denote $q_*=H_\mu(\phi_*)$. By Theorem \ref{bound}, $\mu(\phi_*)=\mu(\phi)$, so that
$$
\phi_*=\phi+a,\qquad a\in K[x]_n,\quad \mu(a)\ge\mu(\phi).
$$
In particular, $\mu(u\phi_*^e)=0$, and we may consider
$$
\xi_*=H_\mu(u\phi_*^e)=H_\mu(u)q_*^e\in\Delta
$$
as a transcendental generator of $\Delta$ as a $\kappa$-algebra. Let $R_*$ be the residual polynomial operator associated with this choice of $\phi_*$.
\begin{proposition}\label{lastlevel}
Let $\phi_*$ be another key polynomial with minimal degree, and denote with a subscript $(\ )_*$ all objects depending on $\phi_*$.
\begin{enumerate}
\item If $\phi_*\sim_\mu\phi$, then \ $q_*=q$, \ $\xi_*=\xi$ \,and\, $R_*=R$.
\item If $\phi_*\not\sim_\mu\phi$, then \ $e=1$, \ $q_*=q+H_\mu(a)$ \,and\, $\xi_*=\xi+\tau$,

\noindent where \,$\tau=H_\mu(ua)\in\kappa^*$. In this case, for any $f\in K[x]$ we have
\begin{equation}\label{RR*}
y^{s(f)}R(f)(y)=(y+\tau)^{s_*(f)}R_*(f)(y+\tau).
\end{equation}
In particular, $s_*(f)=\operatorname{ord}_{y+\tau}\left(R(f)\right)$ and $s(f)+d(f)=s_*(f)+d_*(f)$.
\end{enumerate}
\end{proposition}

\begin{proof}
Suppose $\phi_*\sim_\mu\phi$. By definition,
$$
q_*=H_\mu(\phi_*)=H_\mu(\phi)=q,\qquad \xi_*=H_\mu(u\phi_*^e)=H_\mu(u\phi^e)=\xi.
$$
Let $f\in K[x]$. Since $\phi_*\sim_\mu\phi$, we may replace $\phi$ with $\phi_*$ in equation (\ref{rdetre}) to obtain
$$
f\sim_\mu\sum\nolimits_{s\in I(f)}f_s\phi_*^s\sim_\mu \phi_*^{s_0}\left(f_{s_0}+\cdots +f_{s_j}\phi_*^{je}+\cdots +f_{s_d}\phi_*^{de}\right).
$$
Hence, (\ref{defzetaj}) leads to the same residual coefficients, so that $R(f)=R_*(f)$.\medskip

Suppose $\phi_*\not\sim_\mu\phi$; that is, $\mu(a)=\mu(\phi)$.
Then, $e=1$, $H_\mu(\phi_*)=H_\mu(\phi)+H_\mu(a)$, and
$$\xi_*=H_\mu(u\phi_*)=H_\mu(u)H_\mu(\phi)+H_\mu(u)H_\mu(a)=\xi+\tau.
$$
Finally, let $f\in K[x]$, and denote $s=s(f)$, $s_*=s_*(f)$. By Theorem \ref{Hmug},
$$
q^s\,\operatorname{nlc}(f)R(f)(\xi)=H_\mu(f)=q_*^{s_*}\,\operatorname{nlc}_*(f)R_*(f)(\xi_*).
$$
Since $q=H_\mu(u)^{-1}\xi$ and $q_*=H_\mu(u)^{-1}\xi_*=H_\mu(u)^{-1}(\xi+\tau)$, we deduce
$$
\xi^sR(f)(\xi)=\sigma(\xi+\tau)^{s_*}R_*(f)(\xi+\tau),
$$
where $\sigma=H_\mu(u)^{s-s_*}\operatorname{nlc}_*(f)\operatorname{nlc}(f)^{-1}\in\kappa^*$, because
$$\deg\left(\operatorname{nlc}(f)H_\mu(u)^{-s}\right)=\mu(f)=\deg\left(\operatorname{nlc}_*(f)H_\mu(u)^{-s_*}\right).$$
By Theorem \ref{Dstructure}, this implies $y^sR(f)(y)=\sigma(y+\tau)^{s_*}R_*(f)(y+\tau)$. Since $R(f)$ and $R_*(f)$ are monic polynomials, we have necessarily $\sigma=1$. This proves (\ref{RR*}).
\end{proof}

\section{Key polynomials and unique factorization in $\mathcal G_\mu$}\label{secKPuf}

We keep assuming $\mu/v$ commensurable and $\operatorname{KP}(\mu)\ne\emptyset$. Also, we keep dealing with a fixed key polynomial $\phi$ of minimal degree $n$, and we denote
$$
s(f):=s_\phi(f),\qquad s'(f):=s'_\phi(f),\qquad R(f):=R_\phi(f), \qquad \forall f\in K[x].
$$

\subsection{Homogeneous prime elements}\label{subsecHomogP}

By Theorem \ref{Dstructure}, the prime elements in $\Delta$ are those of the form $\psi(\xi)$ for $\psi\in\kappa[y]$ an irreducible polynomial. An element in $\Delta$ which is a prime in $\mathcal G_\mu$ is a prime in $\Delta$, but the converse is not true.
Let us now discuss which primes in $\Delta$ remain prime in $\mathcal G_\mu$.

\begin{lemma}\label{prime0}
Let $\psi\in \kappa[y]$ be a monic irreducible polynomial.
\begin{enumerate}
\item If $\psi\ne y$, then $\psi(\xi)$ is a prime element in $\mathcal G_\mu$.
\item If $\psi=y$, then $\xi$ is a prime element in $\mathcal G_\mu$ if and only if $e=1$.
\end{enumerate}
\end{lemma}

\begin{proof}
Suppose $\psi\ne y$. Since $\psi$ is irreducible, we have $\psi(0)\ne0$. By Corollary \ref{Rconstruct}, there exists $f\in K[x]$ such that $s(f)=0$, $\operatorname{nlc}(f)=1$ and $R(f)=\psi$. By Theorem \ref{Hmug}, $H_\mu(f)=\psi(\xi)$. Suppose $\psi(\xi)=H_\mu(f)$ divides the product of two homogeneous elements in $\mathcal G_\mu$. Say $f\mid_\mu gh$ for some $g,h\in K[x]$. By Corollaries \ref{equivR} and \ref{Rmult}, $\psi=R(f)$ divides $R(gh)=R(g)R(h)$. Since $\psi$ is irreducible, it divides either $R(g)$ or $R(h)$, and this leads to $\psi(\xi)$ dividing either $H_\mu(g)$ or $H_\mu(h)$ in $\mathcal G_\mu$, by Theorem \ref{Hmug}.\medskip

The element $\xi$ is associate to $q^{e}$ in $\mathcal G_\mu$. Since $q$ is a prime element, its $e$-th power is a prime if and only if $e=1$.
\end{proof}\bigskip

Besides these prime elements belonging to $\Delta$, we know that $q$ is another prime element in $\mathcal G_\mu$, of degree $\mu(\phi)$. The next result shows that there are no other homogeneous prime elements in $\mathcal G_\mu$, up to multiplication by units.
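The dichotomy of Lemma \ref{prime0} can be observed in a standard monomial-valuation example, used here only for illustration.\medskip

\noindent{\bf Example. }Let $v$ be the $p$-adic valuation on $\mathbb Q$ and $\mu$ the monomial valuation on $\mathbb Q[x]$ with $\mu(x)=1/2$, so that $\phi=x$, $e=2$, $q=H_\mu(x)$ and $\xi=H_\mu(x^2/p)$. The element $\xi$ is associate to $q^2$; hence it is a prime in $\Delta=\kappa[\xi]$, but not in $\mathcal G_\mu$. On the other hand, $\xi+1=H_\mu\left((x^2+p)/p\right)$ remains prime in $\mathcal G_\mu$, since $\psi=y+1$ is monic, irreducible and different from $y$.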
\begin{proposition}\label{mu-irr}
A polynomial $f\in K[x]$ is $\mu$-irreducible if and only if one of the two following conditions is satisfied:
\begin{enumerate}
\item[(a)] $s(f)=s'(f)=1$.
\item[(b)] $s(f)=0$ and $R(f)$ is irreducible in $\kappa[y]$.
\end{enumerate}
In the first case, $H_\mu(f)$ is associate to $q$. In the second case, to $R(f)(\xi)$.
\end{proposition}

\begin{proof}
By Theorem \ref{Hmug}, $H_\mu(f)=q^{s(f)}\,\operatorname{nlc}(f)\,R(f)(\xi)$. Since $\operatorname{nlc}(f)$ is a unit and $q$ is a prime, $H_\mu(f)$ is a prime if and only if one of the two following conditions is satisfied:
\begin{enumerate}
\item[(i)] $s(f)=1$ and $R(f)(\xi)$ is a unit.
\item[(ii)] $s(f)=0$ and $R(f)(\xi)$ is a prime in $\mathcal G_\mu$.
\end{enumerate}
The homogeneous element of degree zero $R(f)(\xi)$ is a unit in $\mathcal G_\mu$ if and only if it is a unit in $\Delta$. By Theorem \ref{Dstructure}, this is equivalent to $\deg(R(f))=0$, which in turn is equivalent to $s(f)=s'(f)$ by (\ref{degR}). Thus, (i) is equivalent to (a), and $H_\mu(f)$ is associate to $q$ in this case. Since $R(f)\ne y$, (ii) is equivalent to (b) by Lemma \ref{prime0}. Clearly, $H_\mu(f)$ is associate to $R(f)(\xi)$ in this case.
\end{proof}\bigskip

Putting together this characterization of $\mu$-irreducibility with the characterization of $\mu$-minimality from Proposition \ref{minimal}, we get the following characterization of key polynomials.

\begin{proposition}\label{charKP}
Let $\phi$ be a key polynomial for $\mu$, of minimal degree $n$.
A monic $\chi\in K[x]$ is a key polynomial for $\mu$ if and only if one of the two following conditions is satisfied:
\begin{enumerate}
\item[(a)] $\deg(\chi)=\deg(\phi)$ \,and\; $\chi\sim_\mu\phi$.
\item[(b)] $s(\chi)=0$, $\deg(\chi)=e\,n\deg(R(\chi))$\, and\; $R(\chi)$ is irreducible in $\kappa[y]$.
\end{enumerate}
In the first case, $\mathcal{R}(\chi)=\xi\Delta$. In the second case, $\mathcal{R}(\chi)=R(\chi)(\xi)\Delta$.
\end{proposition}

\begin{proof}
If $\chi$ satisfies (a), then $\chi$ is a key polynomial by Lemma \ref{mid=sim}. Also, $\mathcal{R}(\chi)=\mathcal{R}(\phi)=\xi\Delta$ by Theorem \ref{RR}, since $s(\phi)=1$ and $R(\phi)=1$.\medskip

If $\chi$ satisfies (b), then $\deg(R(\chi))=s'(\chi)/e$ by (\ref{degR}), so that $\deg(\chi)=s'(\chi)n$, and $\chi$ is $\mu$-minimal by Proposition \ref{minimal}. Also, $\chi$ is $\mu$-irreducible by Proposition \ref{mu-irr}. Thus, $\chi$ is a key polynomial, and $\mathcal{R}(\chi)=R(\chi)(\xi)\Delta$ by Theorem \ref{RR}.\medskip

Conversely, suppose $\chi$ is a key polynomial for $\mu$.
Since $\chi$ is $\mu$-minimal, it satisfies
$$
\deg(\chi)=s'(\chi)n,\qquad \mu(\chi)=\mu(\phi),
$$
by Proposition \ref{minimal}. Since $\chi$ is $\mu$-irreducible, it satisfies one of the conditions of Proposition \ref{mu-irr}.\medskip

If $s(\chi)=s'(\chi)=1$, we get $\deg(\chi)=n$ and $\phi\mid_\mu\chi$. Thus, $\chi\sim_\mu\phi$ by Lemma \ref{mid=sim}, and $\chi$ satisfies (a).\medskip

If $s(\chi)=0$ and $R(\chi)$ is irreducible in $\kappa[y]$, then $\deg(R(\chi))=s'(\chi)/e$ by (\ref{degR}). Thus, $\deg(\chi)=s'(\chi)n=en\deg(R(\chi))$, and $\chi$ satisfies (b).
\end{proof}

\begin{corollary}\label{gchi}
Let $\phi$ be a key polynomial for $\mu$, of minimal degree $n$. Let $\chi\in K[x]$ be a key polynomial such that $\chi\not\sim_\mu\phi$. Then, $\Gamma_\chi=\Gamma_\mu$.
\end{corollary}

\begin{proof}
By Lemma \ref{subgroup}, $\Gamma_\mu=\langle\Gamma_\chi,\mu(\chi)\rangle$. Since $\deg(\chi)\ge n$, we clearly have $\Gamma_n\subset \Gamma_\chi$. By Theorem \ref{bound} and Proposition \ref{charKP},
$$
\mu(\chi)=\dfrac{\deg(\chi)}{\deg(\phi)}\,\mu(\phi)=\deg(R(\chi))e\mu(\phi)\in\Gamma_n\subset \Gamma_\chi.
$$
Hence, $\Gamma_\mu=\Gamma_\chi$.
\end{proof}

\begin{corollary}\label{e>1}
The two following conditions are equivalent.
\begin{enumerate}
\item $e>1$.
\item All key polynomials of minimal degree are $\mu$-equivalent.
\end{enumerate}
\end{corollary}

\begin{proof}
Let $\phi$ be a key polynomial of minimal degree $n$.\medskip

If $e>1$ and $\chi$ is a key polynomial not $\mu$-equivalent to $\phi$, then Proposition \ref{charKP} shows that $\deg(\chi)=e\,n\deg(R(\chi))>n$. Hence, all key polynomials of degree $n$ are $\mu$-equivalent to $\phi$.\medskip

If $e=1$, then $\mu(\phi)\in \Gamma_n$, so that there exists $a\in K[x]_n$ with $\mu(a)=\mu(\phi)$. The monic polynomial $\chi=\phi+a$ has degree $n$ and it is not $\mu$-equivalent to $\phi$. Also, $\deg(R(\chi))=1$, so that $R(\chi)$ is irreducible. Therefore, $\chi$ is a key polynomial for $\mu$, because it satisfies condition (b) of Proposition \ref{charKP}.
\end{proof}

\subsection{Unique factorization in $\mathcal{G}_\mu$}\label{subsecKPUF}

If $\chi$ is a key polynomial for $\mu$, then $\mathcal{R}(\chi)$ is a maximal ideal of $\Delta$, by Proposition \ref{maxideal}. Let us study the fibers of the mapping $\mathcal{R}\colon \operatorname{KP}(\mu)\to\operatorname{Max}(\Delta)$.

\begin{proposition}\label{samefiber}
Let $\phi$ be a key polynomial of minimal degree, and let $R=R_\phi$.
For any $\chi,\chi'\in\operatorname{KP}(\mu)$, the following conditions are equivalent:
\begin{enumerate}
\item $\chi\sim_\mu\chi'$.
\item $H_\mu(\chi)$ and $H_\mu(\chi')$ are associate in $\mathcal{G}_\mu$.
\item $\chi\mid_\mu\chi'$.
\item $\mathcal{R}(\chi)=\mathcal{R}(\chi')$.
\item $R(\chi)=R(\chi')$.
\end{enumerate}
Moreover, these conditions imply \,$\deg(\chi)=\deg(\chi')$.
\end{proposition}

\begin{proof}
The implications (1) $\,\Longrightarrow\,$ (2) $\,\Longrightarrow\,$ (3) are obvious. Also, (3) $\,\Longrightarrow\,$ $\mathcal{R}(\chi')\subset \mathcal{R}(\chi)$ $\,\Longrightarrow\,$ (4), because $\mathcal{R}(\chi')$ is a maximal ideal.\medskip

Let us show that (4) implies (5). By Proposition \ref{charKP}, condition (4) implies that we have two possibilities for the pair $\chi$, $\chi'$:\medskip

(i) \ $\chi\sim_\mu\phi\sim_\mu\chi'$, or\medskip

(ii) \ $s(\chi)=s(\chi')=0$.\medskip

In the first case, we deduce (5) from Corollary \ref{equivR}. In the second case, condition (5) follows from Theorems \ref{RR}, \ref{Dstructure}, and the fact that $R(\chi)$, $R(\chi')$ are monic polynomials.\medskip

Let us show that (5) implies (1).
If $R(\chi)=R(\chi')=1$, then Proposition \ref{charKP} shows that $\chi\sim_\mu\phi\sim_\mu\chi'$. If $R(\chi)=R(\chi')\ne1$, then Proposition \ref{charKP} shows that $s(\chi)=s(\chi')=0$ and
$$
\deg(\chi)=en\deg(R(\chi))=en\deg(R(\chi'))=\deg(\chi').
$$
Also, $\chi\mid_\mu\chi'$ by item (2) of Corollary \ref{equivR}. Hence, $\chi\sim_\mu\chi'$ by Lemma \ref{mid=sim}. This ends the proof of the equivalence of all conditions.\medskip

Finally, (1) implies $\deg(\chi)=\deg(\chi')$ by the $\mu$-minimality of both polynomials.
\end{proof}\medskip

\begin{theorem}\label{Max}
Suppose $\operatorname{KP}(\mu)\ne\emptyset$. The residual ideal mapping
$$\mathcal{R}\colon \operatorname{KP}(\mu)\,\longrightarrow\,\operatorname{Max}(\Delta)$$
induces a bijection between $\operatorname{KP}(\mu)/\!\!\sim_\mu$ and\, $\operatorname{Max}(\Delta)$.
\end{theorem}

\begin{proof}
If $\mu/v$ is incommensurable, the statement follows from Theorem \ref{mainincomm}.\medskip

Suppose $\mu/v$ is commensurable, and let $\phi$ be a key polynomial of minimal degree $n$. By Proposition \ref{samefiber}, $\mathcal{R}$ induces a 1-1 mapping between $\operatorname{KP}(\mu)/\!\!\sim_\mu$ and $\operatorname{Max}(\Delta)$. Let us show that $\mathcal{R}$ is onto. By Theorem \ref{Dstructure}, a maximal ideal in $\Delta$ is given by $\psi(\xi)\Delta$ for some monic irreducible polynomial $\psi\in\kappa[y]$.
If $\psi=y$, then $\psi(\xi)\Delta=\mathcal{R}(\phi)$, by Theorem \ref{RR}. If $\psi\ne y$, then it suffices to show the existence of a key polynomial $\chi$ such that $R(\chi)=\psi$, by Proposition \ref{charKP}. Let $d=\deg(\psi)$. By Lemma \ref{Rconstruct}, there exists $\chi\in K[x]$ such that $s(\chi)=0$, $\operatorname{nlc}(\chi)=H_\mu(u)^{-d}$ and $R(\chi)=\psi$. Along the proof of that lemma, we saw that $\chi$ may be chosen to have $\phi$-expansion:
$$
\chi=a_0+a_1\phi^e+\cdots+a_d\phi^{de}, \qquad \deg(a_j)<n.
$$
Also, the condition on $a_d$ is $H_\mu(a_d)=\operatorname{nlc}(\chi)H_\mu(u)^d=1_{\mathcal{G}_\mu}$. Thus, we may choose $a_d=1$. Then $\deg(\chi)=den$, so that $\chi$ is a key polynomial because it satisfies condition (b) of Proposition \ref{charKP}.
\end{proof}\medskip

\begin{theorem}\label{homogeneousprimes}
Let $\mathcal{P}\subset\operatorname{KP}(\mu)$ be a set of representatives of key polynomials under $\mu$-equivalence. Then, the set $H\mathcal{P}=\left\{H_\mu(\chi)\mid \chi\in \mathcal{P}\right\}$ is a system of representatives of homogeneous prime elements of $\mathcal{G}_\mu$ up to associates.
Also, up to units in $\mathcal{G}_\mu$, for any non-zero $f\in K[x]$ there is a unique factorization:
\begin{equation}\label{factorization}
f\sim_\mu \prod\nolimits_{\chi\in \mathcal{P}}\chi^{a_\chi},\quad a_\chi=s_\chi(f).
\end{equation}
\end{theorem}

\begin{proof}
All elements in $H\mathcal{P}$ are prime elements by the definition of $\mu$-irreducibility. Also, they are pairwise non-associate by Proposition \ref{samefiber}. Let $\phi$ be a key polynomial of minimal degree $n$. By Proposition \ref{mu-irr}, every homogeneous prime element is associate either to $H_\mu(\phi)$ (which belongs to $H\mathcal{P}$), or to $\psi(\xi)$ for some irreducible polynomial $\psi\in\kappa[y]$, $\psi\ne y$. In the latter case, along the proof of Theorem \ref{Max} we saw the existence of $\chi\in\operatorname{KP}(\mu)$ such that $s(\chi)=0$ and $R(\chi)=\psi$. Therefore, $\psi(\xi)=R(\chi)(\xi)$ is associate to $H_\mu(\chi)\in H\mathcal{P}$, by Theorem \ref{Hmug}. Finally, every homogeneous element in $\mathcal{G}_\mu$ is associate to a product of homogeneous prime elements, by Theorem \ref{Hmug} and Lemma \ref{prime0}.
\end{proof}

\section{Augmentation of valuations}\label{secAugm}

We keep dealing with a valuation $\mu$ on $K[x]$ with $\operatorname{KP}(\mu)\ne\emptyset$.
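\medskip

\noindent{\bf Example. }Before turning to augmentations, the unique factorization above may be illustrated in the simplest case. (This is a standard illustration added for concreteness; the particular choices of $v$ and $\mu$ are assumptions made only for this example.) Let $v$ be the $2$-adic valuation on $\mathbb{Q}$ and let $\mu$ be its Gauss extension to $\mathbb{Q}[x]$:
$$
\mu\left(\sum\nolimits_{0\le s}a_sx^s\right)=\min\{v(a_s)\mid 0\le s\}.
$$
For this $\mu$, any monic polynomial with $2$-integral coefficients whose reduction modulo $2$ is irreducible is a key polynomial, and two such polynomials of the same degree are $\mu$-equivalent if and only if they are congruent modulo $2$. For instance, $f=x^2+3x+2=(x+1)(x+2)$ satisfies $x+2\sim_\mu x$, because $\mu(2)>0=\mu(x)$; hence, choosing the representatives $\chi_1=x$ and $\chi_2=x+1$, we get the factorization
$$
f\sim_\mu \chi_1\chi_2,\qquad s_{\chi_1}(f)=s_{\chi_2}(f)=1.
$$
\medskip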
There are different procedures to \emph{augment} this valuation, in order to obtain valuations $\mu'$ on $K[x]$ such that
$$\mu(f)\le\mu'(f),\qquad \forall\, f\in K[x],$$
after embedding the value groups of $\mu$ and $\mu'$ in a common ordered group. In this section, we show how to single out a key polynomial of $\mu'$ of minimal degree. As an application, all results of this paper apply to determine the structure of $\mathcal{G}_{\mu'}$ and the set $\operatorname{KP}(\mu')/\!\!\sim_{\mu'}$ in terms of this key polynomial.

\subsection{Ordinary augmentations}

\begin{definition}\label{muprima}
Take $\chi\in \operatorname{KP}(\mu)$. Let $\Gamma_\mu\hookrightarrow\Gamma'$ be an order-preserving embedding of $\Gamma_\mu$ into another ordered group, and choose $\gamma\in\Gamma'$ such that $\mu(\chi)<\gamma$. The \emph{augmented valuation} of $\mu$ with respect to these data is the mapping
$$\mu'\colon K[x]\rightarrow \Gamma' \cup \left\{\infty\right\}$$
assigning to any $g\in K[x]$, with canonical $\chi$-expansion $g=\sum_{0\le s}g_s\chi^s$, the value
$$
\mu'(g)=\min\{\mu(g_s)+s\gamma\mid 0\le s\}.
$$
We use the notation $\mu'=[\mu;\chi,\gamma]$. Note that $\mu'(\chi)=\gamma$.
\end{definition}

The following proposition collects several results of \cite[sec. 1.1]{Vaq}.
\begin{proposition}\label{extension}\mbox{\null}
\begin{enumerate}
\item The mapping $\mu'=[\mu;\chi,\gamma]$ is a valuation on $K[x]$ extending $v$, with value group $\Gamma_{\mu'}=\langle\Gamma_{\deg(\chi)},\gamma\rangle$.
\item For all $f\in K[x]$, we have $\mu(f)\le\mu'(f)$. Equality holds if and only if $\chi\nmid_{\mu}f$ or $f=0$.
\item If $\chi\nmid_{\mu}f$, then $H_{\mu'}(f)$ is a unit in $\mathcal{G}_{\mu'}$.
\item The polynomial $\chi$ is a key polynomial for $\mu'$.
\item The kernel of the canonical homomorphism
$$\mathcal{G}_\mu\longrightarrow\mathcal{G}_{\mu'},\qquad a+\mathcal{P}_\alpha(\mu)\,\longmapsto\, a+\mathcal{P}_\alpha(\mu'),\ \forall\,\alpha\in\Gamma_\mu,$$
is the principal ideal of $\mathcal{G}_\mu$ generated by $H_\mu(\chi)$.
\end{enumerate}
\end{proposition}

\begin{corollary}
The polynomial $\chi$ is a key polynomial for $\mu'$, of minimal degree.
\end{corollary}

\begin{proof}
For any polynomial $f\in K[x]$ with $\deg(f)<\deg(\chi)$, we have $\chi\nmid_{\mu}f$ because $\chi$ is $\mu$-minimal. By item (3) of Proposition \ref{extension}, $H_{\mu'}(f)$ is a unit in $\mathcal{G}_{\mu'}$, so that $f$ cannot be a key polynomial for $\mu'$.
\end{proof}

\subsection{Limit augmentations}

Consider a totally ordered set $A$, not containing a maximal element. A \emph{continuous MacLane chain} based on $\mu$, and parameterized by $A$, is a family $\left(\mu_\alpha\right)_{\alpha\in A}$ of augmented valuations
$$
\mu_\alpha=[\mu;\phi_\alpha,\gamma_\alpha],\qquad \phi_\alpha\in\operatorname{KP}(\mu),\quad \mu(\phi_\alpha)<\gamma_\alpha\in\Gamma_\mu,
$$
for all $\alpha\in A$, satisfying the following conditions:
\begin{enumerate}
\item $\deg(\phi_\alpha)=d$ is independent of $\alpha\in A$.
\item The mapping $A\to \Gamma_\mu$, $\alpha\mapsto \gamma_\alpha$ is an order-preserving embedding of $A$ in $\Gamma_\mu$.
\item For all $\alpha<\beta$ in $A$, we have
$$
\phi_\beta\in \operatorname{KP}(\mu_\alpha),\qquad \phi_\alpha\not\sim_{\mu_\alpha}\phi_\beta,\qquad \mu_\beta=[\mu_\alpha;\phi_\beta,\gamma_\beta].
$$
\end{enumerate}
In Vaqui\'e's terminology, $\left(\mu_\alpha\right)_{\alpha\in A}$ is a ``famille continue de valuations augment\'ees it\'er\'ees" \cite{Vaq}.
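\medskip

\noindent{\bf Example. }A minimal instance of such a chain (a standard illustration added for concreteness; the particular choices below are assumptions made only for this example): let $v$ be the $2$-adic valuation on $\mathbb{Q}$, let $\mu$ be its Gauss extension to $\mathbb{Q}[x]$, and take $A=\mathbb{N}$ with $\gamma_\alpha=\alpha\in\mathbb{Z}=\Gamma_\mu$. Consider
$$
\phi_\alpha=x-\left(2+4+\cdots+2^{\alpha-1}\right)=x+2-2^\alpha,\qquad \mu_\alpha=[\mu;\phi_\alpha,\alpha],\qquad \alpha\in A.
$$
All the $\phi_\alpha$ have degree $d=1$, and for $\alpha<\beta$ one checks $\mu_\alpha(\phi_\beta-\phi_\alpha)=v(2^\alpha-2^\beta)=\alpha=\mu_\alpha(\phi_\alpha)$, so that $\phi_\beta\in\operatorname{KP}(\mu_\alpha)$, $\phi_\alpha\not\sim_{\mu_\alpha}\phi_\beta$ and $\mu_\beta=[\mu_\alpha;\phi_\beta,\gamma_\beta]$. Thus, $\left(\mu_\alpha\right)_{\alpha\in A}$ is a continuous MacLane chain. The polynomial $x+2=\phi_\alpha+2^\alpha$ is not $A$-stable, since $\mu_\alpha(x+2)=\alpha$ grows strictly with $\alpha$; this reflects the $2$-adic convergence $2+4+8+\cdots=-2$.\medskip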
\begin{definition}
A polynomial $f\in K[x]$ is \emph{$A$-stable} if there exists $\alpha_0\in A$ such that
$$
\mu_{\alpha_0}(f)=\mu_\alpha(f),\qquad \forall\,\alpha\ge \alpha_0.
$$
In this case, we denote by $\mu_A(f)$ this stable value.
\end{definition}

\begin{lemma}\label{nonstable}
If $f\in K[x]$ is not $A$-stable, then $\mu_\alpha(f)<\mu_\beta(f)$ for all $\alpha<\beta$ in $A$.
\end{lemma}

\begin{proof}
Let us show that the equality $\mu_\alpha(f)=\mu_{\beta}(f)$ for some $\alpha<\beta$ in $A$ implies that $f$ is $A$-stable. Since $\mu_{\beta}=[\mu_\alpha;\phi_\beta,\gamma_\beta]$, Proposition \ref{extension} shows that $\phi_\beta\nmid_{\mu_\alpha}f$ and $H_{\mu_{\beta}}(f)$ is a unit in $\mathcal{G}_{\mu_{\beta}}$. Hence, for all $\delta\ge\beta$, the image of $H_{\mu_{\beta}}(f)$ in $\mathcal{G}_{\mu_\delta}$ is a unit. Since $\mu_\delta=[\mu_{\beta};\phi_\delta,\gamma_\delta]$, item (5) of Proposition \ref{extension} shows that $\phi_\delta\nmid_{\mu_{\beta}}f$. Hence, $\mu_{\beta}(f)=\mu_\delta(f)$, again by Proposition \ref{extension}. Thus, $f$ is $A$-stable.
\end{proof}\bigskip

If all polynomials in $K[x]$ are $A$-stable, then $\mu_A=\lim_{\alpha\in A}\mu_\alpha$ has an obvious meaning, and $\mu_A$ is a valuation on $K[x]$. If there are polynomials which are not $A$-stable, there are still some particular situations in which $\left(\mu_\alpha\right)_{\alpha\in A}$ converges to a valuation (or semivaluation) on $K[x]$. However, regardless of whether $\left(\mu_\alpha\right)_{\alpha\in A}$ converges or not, non-stable polynomials may be used to define \emph{limit augmented} valuations of this continuous MacLane chain.

Let us assume that not all polynomials in $K[x]$ are $A$-stable. We take a monic $\phi\in K[x]$ which is not $A$-stable and has minimal degree among all polynomials having this property. Since the product of $A$-stable polynomials is $A$-stable, $\phi$ is irreducible in $K[x]$.

\begin{lemma}\label{minimalA}
Let $f\in K[x]$ be a non-zero polynomial, with canonical $\phi$-expansion $f=\sum_{0\le s}f_s\phi^s$. Then, there exist an index $s_0$ and an element $\alpha_0\in A$ such that
$$
\mu_\alpha(f)=\mu_\alpha(f_{s_0}\phi^{s_0})<\mu_\alpha(f_s\phi^s),\qquad \forall\,s\ne s_0,\ \forall\,\alpha\ge\alpha_0.
$$
\end{lemma}

\begin{proof}
Since all coefficients $f_s$ have degree less than $\deg(\phi)$, they are all $A$-stable.
Let us take $\alpha_1$ sufficiently large so that $\mu_\alpha(f_s)=\mu_A(f_s)$ for all $s\ge0$ and all $\alpha\ge\alpha_1$. For every $\alpha\in A$, $\alpha\ge\alpha_1$, let
$$
\delta_\alpha=\min\{\mu_\alpha(f_s\phi^s)\mid 0\le s\},\qquad I_\alpha=\left\{s\mid \mu_\alpha(f_s\phi^s)=\delta_\alpha\right\},\qquad s_\alpha=\min(I_\alpha).
$$
For any index $s$ we have
\begin{equation}\label{s0}
\mu_\alpha(f_{s_\alpha}\phi^{s_\alpha})=\mu_A(f_{s_\alpha})+s_\alpha\mu_\alpha(\phi)\le \mu_\alpha(f_s\phi^s)=\mu_A(f_s)+s\mu_\alpha(\phi).
\end{equation}
Since $\phi$ is not $A$-stable, Lemma \ref{nonstable} shows that $\mu_\alpha(\phi)<\mu_\beta(\phi)$ for all $\beta>\alpha$, $\beta\in A$. Thus, if we replace $\mu_\alpha$ with $\mu_\beta$ in (\ref{s0}), we get a strict inequality for all $s>s_\alpha$, because the left-hand side of (\ref{s0}) increases by $s_\alpha(\mu_\beta(\phi)-\mu_\alpha(\phi))$, while the right-hand side increases by $s(\mu_\beta(\phi)-\mu_\alpha(\phi))$.
Therefore, either $I_\beta=\{s_\alpha\}$, or $s_\beta=\min(I_\beta)<s_\alpha$. Since $A$ contains no maximal element, we may consider a strictly increasing infinite sequence of values of $\beta\in A$. There must be an $\alpha_0\in A$ such that
$$I_\alpha=\{s_0\},\qquad \forall\,\alpha\ge\alpha_0,$$
because the set of indices $s$ is finite.
\end{proof}

\begin{definition}
For any $f\in K[x]$, we say that $f$ is \emph{$A$-divisible} by $\phi$, and we write $\phi\mid_A f$, if there exists $\alpha_0\in A$ such that $\phi\mid_{\mu_\alpha}f$ for all $\alpha\ge\alpha_0$.
\end{definition}

\begin{lemma}\label{stableChar}
For any $f\in K[x]$ with canonical $\phi$-expansion $f=\sum_{0\le s}f_s\phi^s$, the following conditions are equivalent:
\begin{enumerate}
\item $f$ is $A$-stable.
\item There exists $\alpha_0\in A$ such that
$$
\mu_\alpha(f)=\mu_\alpha(f_0)<\mu_\alpha(f_s\phi^s),\quad \forall\, s>0, \ \forall\,\alpha\ge\alpha_0.
$$
\item $\phi\nmid_Af$.
\end{enumerate}
\end{lemma}

\begin{proof}
By Lemma \ref{minimalA}, there exist an index $s_0$ and an element $\alpha_0\in A$ such that $\mu_\alpha(f_s)=\mu_A(f_s)$ for all $s$, and
$$\mu_\alpha(f)=\mu_\alpha\left(f_{s_0}\phi^{s_0}\right)=\mu_A(f_{s_0})+s_0\mu_\alpha(\phi)<\mu_\alpha\left(f_s\phi^s\right),\qquad \forall\,s\ne s_0.$$
By Lemma \ref{nonstable}, $\mu_\alpha(\phi)$ grows strictly with $\alpha$. Hence, condition (1) is equivalent to $s_0=0$, which is in turn equivalent to condition (2). On the other hand, Proposition \ref{minimal0} shows that $\phi$ is $\mu_\alpha$-minimal for all $\alpha\ge\alpha_0$. Hence, again by Proposition \ref{minimal0}, the three following conditions are equivalent:
\begin{itemize}
\item $\phi\nmid_{\mu_\alpha}f$ for all $\alpha\ge\alpha_0$.
\item $\phi\nmid_{\mu_\alpha}f$ for some $\alpha\ge\alpha_0$.
\item $s_0=0$.
\end{itemize}
Hence, conditions (2) and (3) are equivalent too.
\end{proof}

\begin{definition}\label{muprimalim}
Take a monic polynomial $\phi\in K[x]$ of minimal degree among all polynomials which are not $A$-stable. Let $\Gamma_\mu\hookrightarrow\Gamma'$ be an order-preserving embedding of $\Gamma_\mu$ into another ordered group, and choose $\gamma\in\Gamma'$ such that $\mu_\alpha(\phi)<\gamma$ for all $\alpha\in A$.
The \emph{limit augmented valuation} of the continuous MacLane chain $\left(\mu_\alpha\right)_{\alpha\in A}$ with respect to these data is the mapping
$$\mu'\colon K[x]\rightarrow \Gamma' \cup \left\{\infty\right\}$$
assigning to any $g\in K[x]$, with $\phi$-expansion $g=\sum_{0\le s}g_s\phi^s$, the value
$$
\mu'(g)=\min\{\mu_A(g_s)+s\gamma\mid 0\le s\}.
$$
We use the notation $\mu'=[\mu_A;\phi,\gamma]$. Note that $\mu'(\phi)=\gamma$.
\end{definition}

The following proposition collects Propositions 1.22 and 1.23 of \cite{Vaq}.

\begin{proposition}\label{extensionlim}\mbox{\null}
\begin{enumerate}
\item The mapping $\mu'=[\mu_A;\phi,\gamma]$ is a valuation on $K[x]$ extending $v$.
\item For all $f\in K[x]$, we have $\mu_\alpha(f)\le\mu'(f)$ for all $\alpha\in A$. The condition $\mu_\alpha(f)<\mu'(f)$, for all $\alpha\in A$, is equivalent to $\phi\mid_Af$ and $f\ne0$.
\end{enumerate}
\end{proposition}

\begin{lemma}\label{abcd}
For $a,b\in K[x]_{\deg(\phi)}$, let the $\phi$-expansion of $ab$ be
$$ab=c+d\phi,\qquad c,d\in K[x]_{\deg(\phi)}.$$
Then, $ab\sim_{\mu'} c$.
\end{lemma}

\begin{proof}
Since $\deg(a),\deg(b)<\deg(\phi)$, the polynomials $a$ and $b$ are $A$-stable. In particular, $ab$ is $A$-stable.
By Lemma \ref{stableChar}, there exists $\alpha_0\in A$ such that
$$
\mu_A(c)=\mu_\alpha(c),\quad \mbox{ and }\quad \mu_\alpha(ab)=\mu_\alpha(c)<\mu_\alpha(d\phi), \qquad \forall\,\alpha\ge\alpha_0\ \mbox{ in }A.
$$
Hence, $\mu_A(ab)=\mu_A(c)<\mu_A(d)+\mu_\alpha(\phi)<\mu_A(d)+\gamma$. By the definition of $\mu'$, we conclude that $\mu'(ab)=\mu'(c)<\mu'(d\phi)$.
\end{proof}

\begin{corollary}\label{smallunitslim}
For all $a\in K[x]_{\deg(\phi)}$, $H_{\mu'}(a)$ is a unit in $\mathcal{G}_{\mu'}$.
\end{corollary}

\begin{proof}
Since $\phi$ is irreducible in $K[x]$ and $\deg(a)<\deg(\phi)$, these two polynomials are coprime. Hence, there is a B\'ezout identity:
$$
ac+d\phi=1,\qquad \deg(c)<\deg(\phi),\ \deg(d)<\deg(a)<\deg(\phi).
$$
By Lemma \ref{abcd}, $ac\sim_{\mu'}1$.
\end{proof}

\begin{corollary}
The polynomial $\phi$ is a key polynomial for $\mu'$, of minimal degree.
\end{corollary}

\begin{proof}
By the very definition of $\mu'$ we have $\mu'(f)=\min\{\mu'\left(f_s\phi^s\right)\mid 0\le s\}$ for any $f=\sum_{0\le s}f_s\phi^s\in K[x]$. By Proposition \ref{minimal0}, $\phi$ is $\mu'$-minimal.\medskip

Let us show that $\phi$ is $\mu'$-irreducible.
Suppose $\phi\nmid_{\mu'}f$ and $\phi\nmid_{\mu'}g$ for $f,g\in K[x]$. Let $f_0,g_0$ be the $0$-th coefficients of the $\phi$-expansions of $f$ and $g$, respectively. Since $\phi$ is $\mu'$-minimal, Proposition \ref{minimal0} shows that $\mu'(f)=\mu'(f_0)$ and $\mu'(g)=\mu'(g_0)$. Consider the $\phi$-expansion
$$
f_0g_0=c+d\phi,\qquad \deg(c),\deg(d)<\deg(\phi).
$$
By Lemma \ref{abcd}, $\mu'(f_0g_0)=\mu'(c)$, so that
$$
\mu'(fg)=\mu'(f_0g_0)=\mu'(c).
$$
Since the polynomial $c$ is the $0$-th coefficient of the $\phi$-expansion of $fg$, Proposition \ref{minimal0} shows that $\phi\nmid_{\mu'}fg$. Hence, $\phi$ is $\mu'$-irreducible. This shows that $\phi$ is a key polynomial for $\mu'$.\medskip

Finally, by Corollary \ref{smallunitslim}, any polynomial in $K[x]$ of degree smaller than $\deg(\phi)$ cannot be a key polynomial for $\mu'$, because it is a $\mu'$-unit.
\end{proof}

\begin{thebibliography}{}

\bibitem{valuedfield} A. J. Engler and A. Prestel, \emph{Valued fields}, Springer, Berlin, 2005.

\bibitem{ResidualIdeals} J. Fern\'{a}ndez, J. Gu\`{a}rdia, J. Montes, E. Nart, \emph{Residual ideals of MacLane valuations}, Journal of Algebra {\bf 427} (2015), 30--75.

\bibitem{hos} F.J. Herrera Govantes, M.A. Olalla Acosta, M. Spivakovsky, \emph{Valuations in algebraic field extensions}, Journal of Algebra {\bf 312} (2007), no. 2, 1033--1074.

\bibitem{hmos} F.J. Herrera Govantes, W. Mahboub, M.A.
Olalla Acosta, M. Spivakovsky, \emph{Key polynomials for simple extensions of valued fields}, preprint, arXiv:1406.0657 [math.AG], 2014.

\bibitem{mahboub} W. Mahboub, \emph{Key polynomials}, Journal of Pure and Applied Algebra {\bf 217} (2013), no. 6, 989--1006.

\bibitem{mcla} S. MacLane, \emph{A construction for absolute values in polynomial rings}, Transactions of the American Mathematical Society {\bf 40} (1936), 363--395.

\bibitem{mclb} S. MacLane, \emph{A construction for prime ideals as absolute values of an algebraic field}, Duke Mathematical Journal {\bf 2} (1936), 492--510.

\bibitem{ore} \O{}. Ore, \emph{Zur Theorie der algebraischen K\"orper}, Acta Mathematica {\bf 44} (1923), 219--314.

\bibitem{PP} L. Popescu, N. Popescu, \emph{On the residual transcendental extensions of a valuation. Key polynomials and augmented valuations}, Tsukuba Journal of Mathematics {\bf 15} (1991), 57--78.

\bibitem{SS} J.-C. San Saturnino, \emph{Defect of an extension, key polynomials and local uniformization}, Journal of Algebra {\bf 481} (2017), 91--119.

\bibitem{Vaq} M. Vaqui\'e, \emph{Extension d'une valuation}, Transactions of the American Mathematical Society {\bf 359} (2007), no. 7, 3439--3481.

\bibitem{Vaq2} M. Vaqui\'e, \emph{Famille admissible de valuations et d\'efaut d'une extension}, Journal of Algebra {\bf 311} (2007), no. 2, 859--876.

\end{thebibliography}

\end{document}
\begin{document} \title{No-signaling in Nonlinear Extensions of Quantum Mechanics} \author{Rohit Kishan Ray} \email{[email protected]} \affiliation{Department of Physics, Indian Institute of Technology Kharagpur, Kharagpur, West Bengal - 721302, India} \author{Gian Paolo Beretta} \email{[email protected]} \affiliation{Department of Mechanical and Industrial Engineering, University of Brescia, 25123 Brescia, Italy} \date{\today} \begin{abstract} Devising a nonlinear extension of quantum mechanics is nontrivial because unphysical features such as supraluminal communication (signaling) must be excluded. In this Letter, we show that the steepest entropy ascent formalism is a viable no-signaling extension belonging to a broader class of no-signaling nonlinear evolution equations for which the local evolution of a subsystem is not necessarily bound to depend only on its reduced state. \end{abstract} \pacs{} \maketitle Quantum mechanics (QM) in its most common form (Schr\"{o}dinger--von Neumann formalism) is linear in state space, where linear operators act upon state vectors, and the time evolution is linear. In 1989, when the late Prof. Weinberg proposed testing the linearity of quantum mechanics as we know it \cite{weinberg_1989_testing}, a new quest began for the physicists working in the field. From then onward, distinguished researchers such as \citet{gisin_1990_weinberg} and \citet{polchinski_1991_weinberg} showed that introducing nonlinearity via operators produces signaling (faster-than-light communication), leading to the violation of causality. In this context, we must also take note of the fact that no-cloning, a consequence of the linearity of QM first introduced by \citet{park_1970_concept} and later explicitly shown by Wootters, Zurek, and Dieks \cite{wootters_1982_single,*dieks_1982_communication}, is sufficient for no-signaling.
Nonlinearity, when introduced via stochastic QM through the Lindblad operator formalism \cite{gisin_1995_relevant} for open quantum systems, has been shown to respect no-signaling. Ensuing work by \citet{ferrero_2004_nonlinear} has shown that nonlinearity in QM can be accommodated without compromising no-signaling if we consider the time evolution to be nonlinear albeit maintaining linearity in the state space and operators. In more recent work \cite{rembielinski_2020_nonlinear,*rembielinski_2021_nonlinear}, it has been shown that convex quasilinear maps can contribute to the nonlinear dynamics of QM without invoking signaling, and this formalism retains much of QM as it is. Thus, \citet{rembielinski_2020_nonlinear} have found a minimal permissible deviation from the linear structure of QM dynamics.\par For about four decades, a nonlinear extension of quantum mechanics initiated by Hatsopoulos and Gyftopoulos \cite{hatsopoulos_1976_unified,*hatsopoulos_1976_unifieda,*hatsopoulos_1976_unifiedb,*hatsopoulos_1976_unifiedc}, and developed by Beretta \cite{beretta_1984_quantuma,*beretta_1985_quantuma,*maddox1985uniting,beretta_2005_nonlinear, *beretta_2009_nonlinear,*beretta_2010_maximum,*beretta_2019_time}, has existed. This formalism originally sought to establish the second law of thermodynamics as a foundational principle on par with the other conservation laws, embedded in a nonlinear law of evolution for mixed states, thus conceptualizing the stability of canonical states and spontaneous decoherence. Later, the steepest entropy ascent (SEA) formalism was developed as a powerful modeling tool, applied to various quantum systems \cite{cano-andrade_2015_steepestentropyascent,*vonspakovsky_2014_trends,ray_2022_steepestb}, and even elevated to the status of a fourth law of thermodynamics \cite{beretta_2014_steepest,beretta_2020_fourth} because of its equivalence with most of the recent far-nonequilibrium modeling formalisms.
Nonetheless, while discussing possible nonlinear theories of QM in the context of no-signaling, the SEA formalism is hardly ever mentioned in the literature cited in the preceding paragraph. The authors in Refs. \cite{beretta_2005_nonlinear,beretta_2009_nonlinear,beretta_2010_maximum,vonspakovsky_2014_trends} claim that the SEA formalism abides by the no-signaling criterion as discussed by Gisin and others. Yet, an explicit focus on no-signaling is lacking in their works. As a result, a lacuna remains to be filled to answer Weinberg's original question. This Letter provides definitive proof that the SEA nonlinear extension of QM respects the no-signaling principle. Philosophically, SEA evolution was designed as part of a theory whereby spontaneous decoherence could be conceived as a fundamental dynamical feature, in contrast to the coarse-graining approach. Entropy, the second law, and irreversibility could acquire a more fundamental stature by emerging as deterministic consequences of the law of evolution, without contradicting standard QM \cite{Beretta1986_intrinsic}. Nonlinearity was known to be essential for this purpose \cite{SimmonsPark1981,*ParkSimmons1983_knots}. While the prevalent notion is that the second law is statistical, the pioneers of SEA believed that the many knots of thermodynamics \cite{ParkSimmons1983_knots,HatsopoulosBeretta2008} could be untied by elevating it to a more fundamental stature \cite{hatsopoulos_1976_unified,*hatsopoulos_1976_unifieda,*hatsopoulos_1976_unifiedb,*hatsopoulos_1976_unifiedc}.
But later, the strength and generality of its mathematical formalism made SEA a suitable tool for thermodynamically consistent nonequilibrium modeling even beyond and outside of its original context, including the current quest for fundamental or phenomenological nonlinear extensions of QM for quantum computing applications, and attempts to assign ontological status to mixed density operators, albeit motivated differently from Refs.\ \cite{hatsopoulos_1976_unified,*hatsopoulos_1976_unifieda,*hatsopoulos_1976_unifiedb,*hatsopoulos_1976_unifiedc,SimmonsPark1981,Beretta1986_intrinsic,weinberg_1989_testing,beretta2006_paradox,HatsopoulosBeretta2008}. The ontological hypothesis amounts to assuming, accepting, and interpreting the elevation of the mixed density operators to the same status of ontic physical states that traditional von Neumann QM reserves to pure states. To see that it is a conceptual prerequisite to adopt a nonlinear law of evolution for mixed states, let $\mathcal{W}_t$ with $\rho' = \mathcal{W}_t(\rho)$ denote a nonlinear map representing the evolution $\rho \rightarrow \rho'$ after some time $t$. Consider the evolution of three states $\rho_1$, $\rho_2$, $\rho_3$ such that $\rho_1 \rightarrow \rho_1'$, $\rho_2 \rightarrow \rho_2'$, $\rho_3 \rightarrow \rho_3'$, respectively. Further, assume that $\rho_3 = w\rho_1+(1{-}w)\rho_2$ with $0\le w\le 1$. Due to the nonlinearity of $\mathcal{W}_t$, in general $\rho_3' \ne w\rho_1'+(1{-}w)\rho_2'$. This is no issue only if the ontological hypothesis is accepted, since $w$ and $1{-}w$ do not represent epistemic ignorance anymore. A sufficient condition \cite{rembielinski_2020_nonlinear} for the nonlinear map to be no-signaling is that it be convex quasilinear, i.e., it always admits a $w'$, $0\le w'\le 1$, such that $\rho_3' = w'\rho_1'+(1{-}w')\rho_2'$. 
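To see concretely what convex quasilinearity rules out, the following numerical sketch (our own illustration, not from the Letter) uses the hypothetical nonlinear map $\mathcal{W}(\rho)=\rho^2/{\rm Tr}(\rho^2)$ on a qubit: the evolved mixture $\rho_3'$ generally lies off the segment joining $\rho_1'$ and $\rho_2'$, so no admissible $w'$ exists and the map is not convex quasilinear.

```python
import numpy as np

# Hypothetical nonlinear map (illustration only, not the SEA map):
# W(rho) = rho^2 / Tr(rho^2).
def W(rho):
    r2 = rho @ rho
    return r2 / np.trace(r2).real

sx = np.array([[0., 1.], [1., 0.]])
sz = np.diag([1., -1.])
rho1 = (np.eye(2) + 0.8 * sz) / 2          # Bloch vector (0, 0, 0.8)
rho2 = (np.eye(2) + 0.8 * sx) / 2          # Bloch vector (0.8, 0, 0)
w = 0.5
rho3 = w * rho1 + (1 - w) * rho2

r1p, r2p, r3p = W(rho1), W(rho2), W(rho3)

# Best w' in least squares: minimize ||rho3' - (w' rho1' + (1-w') rho2')||_F
A = (r1p - r2p).ravel()
b = (r3p - r2p).ravel()
w_best = (A @ b).real / (A @ A).real
residual = np.linalg.norm(b - w_best * A)
assert residual > 0.05   # no w' reproduces rho3': not convex quasilinear
```

Since this map is not convex quasilinear, the sufficient condition of Ref.\ \cite{rembielinski_2020_nonlinear} cannot be used to certify it, which is precisely why a broader no-signaling criterion is needed.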
In this Letter, we introduce a much broader class of non-convex-quasilinear, nonlinear maps that are no-signaling. The SEA map, despite its logarithmic nonlinearity and structured construction, belongs to this class and has general thermodynamic compatibility features. The no-signaling condition, as noted in \cite{ferrero_2004_nonlinear}, is usually imposed by asking that in the absence of mutual interactions between subsystems A and B, the evolution of the local observables of A should only depend on its own reduced state. The SEA formalism, however, demonstrates that we can take a less restrictive view \cite{beretta_2005_nonlinear}: we only require that, if A and B are non-interacting, the law of evolution must not allow that a local unitary operation within B could affect the time evolution of the local (reduced, marginal) state of A. Thus, the condition $\rho_A=\rho_A'$, such as for the two different states $\rho\ne\rho_A\otimes\rho_B$ and $\rho'=\rho_A\otimes\rho_B$, does not require that ${\rm d}\rho_A/{\rm d}t={\rm d}\rho_A'/{\rm d}t$, because local memory of past interactions, i.e., existing entanglement and/or correlations, may well influence the local evolutions without violating no-signaling. This incorporates the idea that (1) by studying the local evolutions we can disclose the existence of correlations, but only of the type that can be classically communicated between the subsystems, and (2) in the absence of interactions the nonlinear dynamics may produce the fading away of correlations (spontaneous decoherence) but cannot create new correlations.
In linear QM, the system's composition is specified by declaring: (1) the Hilbert space structure as direct product ${\mathcal H} = \bigotimes_{{J\vphantom{\overline{J}}}=1}^M{\mathcal H}_{J\vphantom{\overline{J}}}$ of the subspaces of the $M$ component subsystems, and (2) the overall Hamiltonian operator $H = \sum_{J=1}^M H_{J\vphantom{\overline{J}}}\otimes I_{\overline{J}} + V$, where $H_{J\vphantom{\overline{J}}}$ (on ${\mathcal H}_{J\vphantom{\overline{J}}}$) is the local Hamiltonian of the $J$-th subsystem, $I_{\overline{J}}$ the identity on the direct product ${\mathcal H}_{\overline{J}}= \bigotimes_{K\ne J}{\mathcal H}_K$ of all the other subspaces, and $V$ (on ${\mathcal H}$) is the interaction Hamiltonian. The linear law of evolution, $\dot\rho= -\mathrm{i}[H,\rho]/\hbar$, has a universal structure and entails the local evolutions through partial tracing, $\dot\rho_{J\vphantom{\overline{J}}}= -\mathrm{i}[H_{J\vphantom{\overline{J}}},\rho_{J\vphantom{\overline{J}}}]/\hbar -\mathrm{i}\Tr_{\overline{J}}([V,\rho])/\hbar $. Thus, we recover the universal law $\dot\rho_{J\vphantom{\overline{J}}}= -\mathrm{i}[H_{J\vphantom{\overline{J}}},\rho_{J\vphantom{\overline{J}}}]/\hbar$ for the local density operator $\rho_J=\Tr_{\overline{J}}(\rho)$ if subsystem ${J\vphantom{\overline{J}}}$ does not interact with the others (i.e., if $V=I_{J\vphantom{\overline{J}}}\otimes V_{\overline{J}}$). Instead, a fully nonlinear QM cannot have a universal structure, because the subdivision into subsystems must explicitly concur to the structure of the dynamical law (see \cite{beretta_2010_maximum} for more on this). A different subdivision requires a different equation of motion. This high price for abandoning linearity is clearly reflected in the nontrivial structure of the SEA law of evolution.
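The universal local law for the noninteracting case can be checked directly. The following sketch (ours, with $\hbar=1$ and randomly generated, hypothetical local Hamiltonians) verifies that partial tracing $\dot\rho=-\mathrm{i}[H,\rho]$ with $V=0$ yields $\dot\rho_A=-\mathrm{i}[H_A,\rho_A]$ even when the overall state is correlated:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n):
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (G + G.conj().T) / 2

def rand_rho(n):
    # random full-rank density matrix
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    R = G @ G.conj().T
    return R / np.trace(R)

def ptrace_B(Z):  # trace out subsystem B of an operator on C^2 (x) C^2
    return np.einsum('abcb->ac', Z.reshape(2, 2, 2, 2))

HA, HB = rand_herm(2), rand_herm(2)
H = np.kron(HA, np.eye(2)) + np.kron(np.eye(2), HB)   # V = 0
rho = rand_rho(4)                                      # generally correlated state

rho_dot = -1j * (H @ rho - rho @ H)                    # hbar = 1
rhoA = ptrace_B(rho)
rhoA_dot_local = -1j * (HA @ rhoA - rhoA @ HA)
assert np.allclose(ptrace_B(rho_dot), rhoA_dot_local)
```

The term $\Tr_{B}([I_A\otimes H_B,\rho])$ vanishes by cyclicity of the partial trace, which is what the final assertion confirms numerically.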
But this renders it compatible with the compelling constraint that correlations should not build up and signaling between subsystems should not occur other than per effect of the interaction Hamiltonian $V$ through the standard Schr\"{o}dinger term $-\mathrm{i}[H,\rho]/\hbar$ in the evolution law. Seldom used in composite quantum dynamics analysis, but crucial in our opinion, are the physical observables first introduced in \cite{beretta_1985_quantuma} and called `local perception' operators (on ${\mathcal H}_{J\vphantom{\overline{J}}}$). Together with their `deviation from the local mean value' operators and covariance functionals, they are defined as follows \begin{align} (X)^J_\rho & = \Tr_{\overline{J}}[(I_{J\vphantom{\overline{J}}}\otimes\rho_{\overline{J}})X] \,,\label{eq:local_perception_operators} \\ \Delta(X)^{J\vphantom{\overline{J}}}_\rho & =(X)^{J\vphantom{\overline{J}}}_\rho-I_{J\vphantom{\overline{J}}}\Tr[\rho_{J\vphantom{\overline{J}}}(X)^{J\vphantom{\overline{J}}}_\rho]\,,\label{eq:deviation_X} \\ (X,Y)^{J\vphantom{\overline{J}}}_\rho & = \tfrac{1}{2}\Tr[\rho_{J\vphantom{\overline{J}}}\acomm{\Delta(X)^{J\vphantom{\overline{J}}}_\rho}{\Delta(Y)^{J\vphantom{\overline{J}}}_\rho}]\,,\label{eq:covariance_XY} \end{align} where $\rho_{\overline{J}}=\Tr_{J\vphantom{\overline{J}}}(\rho)$. For a bipartite system AB, the local perception operators $(X)^A_\rho$ (on ${\mathcal H}_A$) and $(X)^B_\rho$ (on ${\mathcal H}_B$) are the unique operators that for a given $X$ on ${\mathcal H}_{AB}$ satisfy for all states $\rho$ the identity \begin{equation}\label{eq:local_perception_operators_id} \Tr[\rho_A(X)^A_\rho] =\Tr[(\rho_A\otimes\rho_B)X] =\Tr[\rho_B(X)^B_\rho]\,, \end{equation} which shows that they represent all that A and B can say about the overall observable $X$ by classically sharing their local states.
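As a sanity check (our own numerical sketch, not from the Letter), the definition in Eq. (\ref{eq:local_perception_operators}) and the identity in Eq. (\ref{eq:local_perception_operators_id}) can be verified for a random two-qubit state:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_rho(n):
    # random full-rank density matrix
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    R = G @ G.conj().T
    return R / np.trace(R)

def ptrace_B(Z):  # partial trace over B for operators on C^2 (x) C^2
    return np.einsum('abcb->ac', Z.reshape(2, 2, 2, 2))

def ptrace_A(Z):
    return np.einsum('abad->bd', Z.reshape(2, 2, 2, 2))

rho = rand_rho(4)
rho_A, rho_B = ptrace_B(rho), ptrace_A(rho)

G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
X = G + G.conj().T                                    # random overall observable

XA = ptrace_B(np.kron(np.eye(2), rho_B) @ X)          # (X)^A_rho
XB = ptrace_A(np.kron(rho_A, np.eye(2)) @ X)          # (X)^B_rho
mean = np.trace(np.kron(rho_A, rho_B) @ X)
assert np.allclose(np.trace(rho_A @ XA), mean)
assert np.allclose(np.trace(rho_B @ XB), mean)
```

Both local mean values reproduce $\Tr[(\rho_A\otimes\rho_B)X]$, as the identity requires, even though $\rho$ itself is correlated.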
Operator $(X)^A_\rho$ can be viewed as the projection onto ${\mathcal H}_A$ of the operator $X$ weighted by the local state $\rho_B$ of subsystem $B$. It is a local observable for subsystem A which, however, depends on the overall state $\rho$ and overall observable $X$. Its local mean value $\Tr_A[\rho_A(X)^A_\rho]$ differs from the mean value $\Tr(\rho X)$ for the overall system AB, except when A and B are uncorrelated ($\rho=\rho_A\otimes\rho_B$). It was dubbed `local perception' because even if B performs a local tomography and sends the measured $\rho_B$ to A by classical communication, the most that A can measure locally about the overall observable $X$ is $(X)^A_\rho$. The overall energy and entropy of the composite system are locally perceived within subsystem $J$ through the operators $(H)^{J\vphantom{\overline{J}}}_\rho$ and $(S(\rho))^{J\vphantom{\overline{J}}}_\rho$ defined on ${\mathcal H}_{J\vphantom{\overline{J}}}$ by Eq. (\ref{eq:local_perception_operators}), respectively with $X=H$, the overall Hamiltonian, and $X=S(\rho)=- k_{\rm\scriptscriptstyle B}{\rm Bln}(\rho)$, that we call the overall entropy operator, where ${\rm Bln}(x)$ denotes the discontinuous function ${\rm Bln}(x)=\ln(x)$ for $0<x\le 1$ and ${\rm Bln}(0)=0$. Note that the `locally perceived overall entropy' operator $(S(\rho))^{J\vphantom{\overline{J}}}_\rho$ is different from the `local entropy' operator $S(\rho_{J\vphantom{\overline{J}}})=- k_{\rm\scriptscriptstyle B}{\rm Bln}(\rho_{J\vphantom{\overline{J}}})$. Their mean values $\Tr[\rho_{J\vphantom{\overline{J}}}(S(\rho))^{J\vphantom{\overline{J}}}_\rho]=- k_{\rm\scriptscriptstyle B}\Tr[(\rho_{J\vphantom{\overline{J}}}\otimes\rho_{\overline{J}}){\rm Bln}(\rho)]$ and $\Tr[\rho_{J\vphantom{\overline{J}}} S(\rho_{J\vphantom{\overline{J}}})]= - k_{\rm\scriptscriptstyle B}\Tr[\rho_{J\vphantom{\overline{J}}}\ln (\rho_{J\vphantom{\overline{J}}})]$ are different.
Only when $\rho=\rho_{J\vphantom{\overline{J}}}\otimes\rho_{\overline{J}}$ are they related by $\Tr[\rho_{J\vphantom{\overline{J}}}(S(\rho))^{J\vphantom{\overline{J}}}_\rho]=\Tr[\rho_{J\vphantom{\overline{J}}} S(\rho_{J\vphantom{\overline{J}}})]+\Tr[\rho_{\overline{J}} S(\rho_{\overline{J}})]= - k_{\rm\scriptscriptstyle B}\Tr[\rho\ln (\rho)]$. Likewise, the `locally perceived overall Hamiltonian' operator $(H)^{J\vphantom{\overline{J}}}_\rho$ is different from the `local Hamiltonian' operator $H_{J\vphantom{\overline{J}}}$. Their mean values $\Tr[\rho_{J\vphantom{\overline{J}}}(H)^{J\vphantom{\overline{J}}}_\rho]=\Tr[(\rho_{J\vphantom{\overline{J}}}\otimes\rho_{\overline{J}})H]$ and $\Tr(\rho_{J\vphantom{\overline{J}}} H_{J\vphantom{\overline{J}}})$ are different, and only when $V=I_{J\vphantom{\overline{J}}}\otimes V_{\overline{J}}$ are they related by $\Tr[\rho_{J\vphantom{\overline{J}}}(H)^{J\vphantom{\overline{J}}}_\rho]=\Tr(\rho_{J\vphantom{\overline{J}}} H_{J\vphantom{\overline{J}}})+\Tr(\rho_{\overline{J}} H_{\overline{J}})= \Tr(\rho H)$.
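These relations are easy to probe numerically. The following sketch (ours, two qubits, $k_{\rm B}=1$) checks the product-state identity for the mean locally perceived overall entropy and shows that for a generic correlated state it differs from the mean local entropy:

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_rho(n):
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    R = G @ G.conj().T
    return R / np.trace(R)

def mlog(R):  # matrix log of a full-rank density matrix (= Bln here)
    w, V = np.linalg.eigh(R)
    return (V * np.log(w)) @ V.conj().T

def ptrace_B(Z): return np.einsum('abcb->ac', Z.reshape(2, 2, 2, 2))
def ptrace_A(Z): return np.einsum('abad->bd', Z.reshape(2, 2, 2, 2))

def perceived_entropy_A(rho):  # Tr[rho_A (S(rho))^A_rho], with k_B = 1
    S = -mlog(rho)
    rho_B = ptrace_A(rho)
    SA = ptrace_B(np.kron(np.eye(2), rho_B) @ S)
    return np.trace(ptrace_B(rho) @ SA).real

def vn_entropy(R):  # von Neumann entropy -Tr[R ln R]
    w = np.linalg.eigvalsh(R)
    return -(w * np.log(w)).sum()

rhoA, rhoB = rand_rho(2), rand_rho(2)
prod = np.kron(rhoA, rhoB)
assert np.isclose(perceived_entropy_A(prod), vn_entropy(rhoA) + vn_entropy(rhoB))

corr = rand_rho(4)  # generic correlated state: the two notions now differ
assert not np.isclose(perceived_entropy_A(corr), vn_entropy(ptrace_B(corr)))
```

For the product state, the mean locally perceived overall entropy equals the total entropy $-\Tr[\rho\ln\rho]$, exactly as stated above.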
However, it is noteworthy that when the overall observable $X$ is `separable for subsystem J', in the sense that $X=X_{J\vphantom{\overline{J}}}\otimes I_{\overline{J}}+I_{J\vphantom{\overline{J}}}\otimes X_{\overline{J}}$, then, even if $\rho\ne\rho_{J\vphantom{\overline{J}}}\otimes\rho_{\overline{J}}$, the deviations and covariances reduce to their local versions, \begin{align} \Delta(X)^{J\vphantom{\overline{J}}}_\rho & =\Delta X_{J\vphantom{\overline{J}}} = X_{J\vphantom{\overline{J}}}-I_{J\vphantom{\overline{J}}}\Tr[\rho_{J\vphantom{\overline{J}}} X_{J\vphantom{\overline{J}}}]\,,\label{eq:deviation_separableJ} \\ (X,Y)^{J\vphantom{\overline{J}}}_\rho & =\Tr[\rho_{J\vphantom{\overline{J}}}\acomm{\Delta X_{J\vphantom{\overline{J}}}}{\Delta Y_{J\vphantom{\overline{J}}}}]/2\,.\label{eq:covariance_separable} \end{align} Now, to formalize the no-signaling definition following \cite{beretta_2005_nonlinear} as discussed above, we impose that if A and B are non-interacting, a local unitary operation on B should not affect the evolution of A. So, assume that with AB in the state $\rho$, a local operation on B changes the state to \begin{equation}\label{eq:partial_unitary_op_on_rho} \rho'=(I_A\otimes U_B)\,\rho\, (I_A\otimes U_B^\dagger)\,, \end{equation} where $U_B$ is an arbitrary unitary operator ($U_B^\dagger U_B=I_B$).
Using the properties of the partial trace, in particular, \begin{align} & \Tr_B[(I_A\otimes X_B)Z_{AB}]=\Tr_B[Z_{AB}(I_A\otimes X_B)]\,,\notag \\ & \Tr_A[(I_A\otimes X_B)Z_{AB}(I_A\otimes Y_B)]= X_B\Tr_A(Z_{AB})Y_B \,,\notag \end{align} we obtain the identities \begin{align} \rho_B & =\Tr_A[(I_A\otimes U_B^\dagger)\,\rho'\, (I_A\otimes U_B)]=U_B^\dagger\rho'_BU_B, \label{eq:no_signaling_unit_rhoB} \\ \rho'_A & =\Tr_B[(I_A\otimes U_B)\,\rho\, (I_A\otimes U_B^\dagger)]=\Tr_B[(I_A\otimes U_B^\dagger U_B)\,\rho]\notag \\&=\Tr_B[(I_A\otimes I_B)\,\rho]=\rho_A, \label{eq:rhoAprime} \end{align} which confirm that a local operation on B does not affect the local state $\rho_A$ of A, hence the usual idea \cite{ferrero_2004_nonlinear} that for no-signaling it is sufficient that the dynamical model implies evolutions of local observables that depend only on $\rho_A$. But it is seldom noted that this is not a necessary condition. In fact, we prove next that not only the local reduced state $\rho_A$ but also the local perception operators $(F(\rho))^A$ of any well-defined nonlinear function $F(\rho)$ of the overall state (such as the function $S(\rho)$ defined above for entropy) are not affected by local operations on B according to Eq.\ (\ref{eq:partial_unitary_op_on_rho}). And since the SEA formalism is based on such local perception operators, this is an important lemma in the proof that SEA is no-signaling. So, let us apply Eq. (\ref{eq:partial_unitary_op_on_rho}) to a function $F(\rho)$ of the overall state, as locally perceived by A, represented, according to the definition in Eq. (\ref{eq:local_perception_operators}), by its partial trace weighted with respect to $\rho_B$, \begin{equation}\label{eq:no_signaling_on_f5} (F(\rho))^A = \Tr_B[(I_A\otimes\rho_B)F(\rho)].
\end{equation} A function of $\rho$ is defined from its eigenvalue decomposition by $ F(\rho) = VF(D)V^\dagger=\sum_jF(\lambda_j)\dyad{\lambda_j}$, where $\rho=VDV^\dagger$, $D=\sum_j\lambda_j\dyad{j}$, and $V=\sum_j\dyad{\lambda_j}{j}$. Since unitary transformations do not alter the eigenvalues, \begin{equation}\label{eq:no_signaling_on_f7} F(\rho') = V'F(D)V'^\dagger \mbox{ where } V'=(I_A\otimes U_B)V \,, \end{equation} and therefore, using Eq. (\ref{eq:no_signaling_unit_rhoB}) in the last step, we obtain \begin{align}\label{eq:no_signaling_on_f8} & (F(\rho'))^A = \Tr_B[(I_A\otimes\rho'_B)F(\rho')]\notag \\ &=\Tr_B[(I_A\otimes \rho'_B)\,(I_A\otimes U_B)VF(D)V^\dagger(I_A\otimes U_B^\dagger)]\notag\\ &=\Tr_B[(I_A\otimes U_B^\dagger\rho'_B U_B)\,VF(D)V^\dagger]\notag\\ & =\Tr_B[(I_A\otimes \rho_B)\,F(\rho)]=(F(\rho))^A\,. \end{align} This confirms that local operations on B do not affect the local perception operators of A and, therefore, their proper use in nonlinear QM does not cause signaling issues. We are now ready to introduce the last but not least essential ingredient of a general composite-system nonlinear QM, namely, the system's structure-dependent expressions of the separate contribution of each subsystem to the dissipative term of the equation of motion for the overall state $\rho$. As discussed above (and clearly recognized in the early SEA literature \cite{beretta_1985_quantuma,beretta_2005_nonlinear,beretta_2010_maximum}), the composite-system nonlinear evolution should reflect explicitly the internal structure of the system, essentially by declaring which subsystems are to be prevented from nonphysical effects such as signaling, exchange of energy, or build-up of correlations between non-interacting subsystems. 
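The unitary invariance just established, Eq.\ (\ref{eq:no_signaling_on_f8}), is easy to verify numerically. The following sketch (ours, two qubits, $F={\rm Bln}$ on a full-rank state) checks both $\rho'_A=\rho_A$ and the invariance of the locally perceived $F(\rho)$ under an arbitrary local unitary on B:

```python
import numpy as np

rng = np.random.default_rng(5)

def rand_rho(n):
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    R = G @ G.conj().T
    return R / np.trace(R)

def mlog(R):  # F(rho) = Bln(rho) = ln(rho) for full-rank rho
    w, V = np.linalg.eigh(R)
    return (V * np.log(w)) @ V.conj().T

def ptrace_B(Z): return np.einsum('abcb->ac', Z.reshape(2, 2, 2, 2))
def ptrace_A(Z): return np.einsum('abad->bd', Z.reshape(2, 2, 2, 2))

def perceived_A(X, rho):  # (X)^A_rho = Tr_B[(I_A (x) rho_B) X]
    return ptrace_B(np.kron(np.eye(2), ptrace_A(rho)) @ X)

rho = rand_rho(4)
G = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U_B, _ = np.linalg.qr(G)                    # random local unitary on B
W = np.kron(np.eye(2), U_B)
rho_p = W @ rho @ W.conj().T

assert np.allclose(ptrace_B(rho_p), ptrace_B(rho))        # rho_A' = rho_A
assert np.allclose(perceived_A(mlog(rho_p), rho_p),
                   perceived_A(mlog(rho), rho))           # (F(rho))^A unchanged
```

Both assertions hold for any $U_B$, mirroring the algebraic steps of Eq.\ (\ref{eq:no_signaling_on_f8}).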
In terms of the notation introduced above, the structure proposed in \cite{beretta_1985_quantuma,beretta_2010_maximum} for the dissipative term of the dynamics to be added to the usual Hamiltonian term is as follows \begin{equation}\label{eq:SEA_Composite_simp} \dv{\rho}{t} = -\frac{\mathrm{i}}{\hbar}\comm{H}{\rho}-\sum_{J=1}^M\acomm*{\mathcal{D}^{J\vphantom{\overline{J}}}_\rho}{\rho_{J\vphantom{\overline{J}}}}\otimes\rho_{\overline{J}}\,, \end{equation} where the `local dissipation operators' $\mathcal{D}^{J\vphantom{\overline{J}}}_\rho$ (on ${\mathcal H}_{J\vphantom{\overline{J}}}$) may be nonlinear functions of the local observables of J, the reduced state $\rho_{J\vphantom{\overline{J}}}$, and the local perception operators of overall observables. For the dissipative term to preserve $\Tr(\rho)$, operators $\acomm{\mathcal{D}^{J\vphantom{\overline{J}}}_\rho}{\rho_{J\vphantom{\overline{J}}}}$ must be traceless. To preserve $\Tr(\rho H)$ [and possibly other conserved properties $\Tr(\rho C_k)$], operators $\acomm{\mathcal{D}^{J\vphantom{\overline{J}}}_\rho}{\rho_{J\vphantom{\overline{J}}}}(H)^{J\vphantom{\overline{J}}}_\rho$ [and $\acomm{\mathcal{D}^{J\vphantom{\overline{J}}}_\rho}{\rho_{J\vphantom{\overline{J}}}}(C_k)^{J\vphantom{\overline{J}}}_\rho$] must also be traceless.
The rate of change of the overall system entropy $s(\rho)=- k_{\rm\scriptscriptstyle B}\Tr[\rho\ln(\rho)]$ is \begin{equation}\label{eq:SEA_entropy_prod} \dv{s(\rho)}{t}= -\sum_{J=1}^M\Tr[\acomm*{\mathcal{D}^{J\vphantom{\overline{J}}}_\rho}{\rho_{J\vphantom{\overline{J}}}}(S(\rho))^{J\vphantom{\overline{J}}}_\rho]\,, \end{equation} and the local nonlinear evolution of subsystem J is obtained by partial tracing over ${\mathcal H}_{\overline{J}}$, in general, \begin{equation}\label{eq:SEA_local} \dv{\rho_{J\vphantom{\overline{J}}}}{t} = -\frac{\mathrm{i}}{\hbar}\comm*{H_{J\vphantom{\overline{J}}}}{\rho_{J\vphantom{\overline{J}}}} -\frac{\mathrm{i}}{\hbar}\Tr_{\overline{J}}(\comm*{V}{\rho}) -\acomm*{\mathcal{D}^{J\vphantom{\overline{J}}}_\rho}{\rho_{J\vphantom{\overline{J}}}}\,, \end{equation} where we recall that the second term on the rhs can be put, for weak interactions and under well-known assumptions, in Kossakowski-Lindblad form. Before introducing the SEA assumption, we emphasize that the construction obtained so far, Eq.\ (\ref{eq:SEA_Composite_simp}), paves the way for a class of no-signaling nonlinear evolution equations that is much broader, through all the possible compatible choices of the operators $\mathcal{D}^{J\vphantom{\overline{J}}}_\rho$, than the nonlinear laws restricted by the sufficient but not necessary condition that ${\rm d}\rho_{J\vphantom{\overline{J}}}/{\rm d}t$ be a function of $\rho_{J\vphantom{\overline{J}}}$ only. We can state this no-signaling condition formally as \begin{equation}\label{eq:no_signaling_condition} \dv{\rho_J}{t} = f_J\big(\rho_J,(C_k)^J_\rho\big). \end{equation} Finally, to introduce the SEA assumption in the spirit of the fourth law of thermodynamics \cite{beretta_2020_fourth,beretta_2014_steepest}, one way is to employ a variational principle. We first observe from Eq.
(\ref{eq:SEA_entropy_prod}) that the rate of entropy change contributed by subsystem J is directly proportional to the norm of the operator $\mathcal{D}^{J\vphantom{\overline{J}}}_\rho$, so there is no maximum entropy production rate: we can increase it indefinitely simply by multiplying $\mathcal{D}^{J\vphantom{\overline{J}}}_\rho$ by a positive scalar. But we can fix that norm and maximize over the direction in operator space, to identify, for each given state $\rho$, the operators $\mathcal{D}^{J\vphantom{\overline{J}}}_\rho$ that point in the direction of steepest entropy ascent. To this end, to recover the original SEA formulation \cite{beretta_1985_quantuma}, let us maximize Eq.\ (\ref{eq:SEA_entropy_prod}) subject to the conservation constraints $\Tr[\acomm*{\mathcal{D}^{J\vphantom{\overline{J}}}_\rho}{\rho_{J\vphantom{\overline{J}}}}(C_k)^{J\vphantom{\overline{J}}}_\rho]=0$, where $C_1=I$, $C_2=H$, and $C_k$ are other conserved properties (if any), together with the fixed weighted norm constraints $\Tr [\rho_{J\vphantom{\overline{J}}}(\mathcal{D}^{J\vphantom{\overline{J}}}_\rho)^2]={\rm const}$ (for more general SEA formulations in terms of a different metric, as necessary to incorporate Onsager reciprocity, see \cite{beretta_2014_steepest,beretta_2020_fourth}). Introducing Lagrange multipliers $\beta^{J\vphantom{\overline{J}}}_k$ and $\tau_{J\vphantom{\overline{J}}}$ for the conservation and norm constraints, respectively, and imposing vanishing variational derivatives with respect to the operators $\mathcal{D}^{J\vphantom{\overline{J}}}_\rho$ at fixed $\rho$ and $\rho_{J\vphantom{\overline{J}}}$'s (derivation details in \cite{beretta_2010_maximum,beretta_2014_steepest}) yields \begin{equation}\label{eq:DJ_Def} 2\tau_{J\vphantom{\overline{J}}}\mathcal{D}^{J\vphantom{\overline{J}}}_\rho =({\rm Bln}(\rho))^{J\vphantom{\overline{J}}}_\rho+ {\textstyle \sum_\ell}\beta^{J\vphantom{\overline{J}}}_\ell(C_\ell)^{J\vphantom{\overline{J}}}_\rho\,,
\end{equation} where the multipliers $\beta^{J\vphantom{\overline{J}}}_\ell$ must solve the system of equations obtained by substituting these maximizing expressions of the $\mathcal{D}^{J\vphantom{\overline{J}}}_\rho$'s into the conservation constraints, \begin{align}\label{eq:betakJ_system} & \sum_\ell\beta^{J\vphantom{\overline{J}}}_\ell\Tr[\rho_{J\vphantom{\overline{J}}}\acomm{(C_\ell)^{J\vphantom{\overline{J}}}_\rho}{(C_k)^{J\vphantom{\overline{J}}}_\rho}]\notag \\ & =-\Tr[\rho_{J\vphantom{\overline{J}}}\acomm{({\rm Bln}(\rho))^{J\vphantom{\overline{J}}}_\rho}{(C_k)^{J\vphantom{\overline{J}}}_\rho}]\, . \end{align} When $C_1=I$ and $C_2=H$ determine the conserved properties and Eqs. (\ref{eq:betakJ_system}) are linearly independent, using Cramer's rule, properties of determinants, and definitions (\ref{eq:deviation_X}) and (\ref{eq:covariance_XY}), the SEA dissipators can be cast as \begin{equation}\label{eq:DJ_compact} \mathcal{D}^{J\vphantom{\overline{J}}}_\rho = \dfrac{1}{4\tau_{J\vphantom{\overline{J}}}}\tfrac{\vmqty{\Delta({\rm Bln}(\rho))^{J\vphantom{\overline{J}}}_\rho & \Delta (H)^{J\vphantom{\overline{J}}}_\rho \\ (H,{\rm Bln}(\rho))^{J\vphantom{\overline{J}}}_\rho & (H,H)^{J\vphantom{\overline{J}}}_\rho }}{\displaystyle (H,H)^{J\vphantom{\overline{J}}}_\rho }\,. \end{equation} The rate of entropy production may be expressed as \begin{equation}\label{eq:S_production} \dv{s(\rho)}{t}=\sum_{J=1}^M \dfrac{1}{2\tau_{J\vphantom{\overline{J}}}}\tfrac{\vmqty{({\rm Bln}(\rho),{\rm Bln}(\rho))^{J\vphantom{\overline{J}}}_\rho & (H,{\rm Bln}(\rho))^{J\vphantom{\overline{J}}}_\rho \\ (H,{\rm Bln}(\rho))^{J\vphantom{\overline{J}}}_\rho & (H,H)^{J\vphantom{\overline{J}}}_\rho }}{\displaystyle (H,H)^{J\vphantom{\overline{J}}}_\rho } \,, \end{equation} showing clearly that it is nonnegative since the numerators in the summation are Gram determinants.
Regarding no-signaling, we note that: (1) if subsystem J is noninteracting, $H=H_{J\vphantom{\overline{J}}}\otimes I_{\overline{J}}+I_{J\vphantom{\overline{J}}}\otimes H_{\overline{J}}$, then $\Delta (H)^{J\vphantom{\overline{J}}}_\rho= H_{J\vphantom{\overline{J}}}-I_{J\vphantom{\overline{J}}}\Tr(\rho_{J\vphantom{\overline{J}}} H_{J\vphantom{\overline{J}}})$ and $(H,H)^{J\vphantom{\overline{J}}}_\rho=\Tr[\rho_J(\Delta H_{J\vphantom{\overline{J}}})^2]$ depend only on the local $H_{J\vphantom{\overline{J}}}$ and $\rho_{J\vphantom{\overline{J}}}$; and (2) if J is uncorrelated, ${\rm Bln}(\rho)={\rm Bln}(\rho_{J\vphantom{\overline{J}}})\otimes I_{\overline{J}}+I_{J\vphantom{\overline{J}}}\otimes {\rm Bln}(\rho_{\overline{J}})$, then $\Delta ({\rm Bln}(\rho))^{J\vphantom{\overline{J}}}_\rho= {\rm Bln}(\rho_{J\vphantom{\overline{J}}})-I_{J\vphantom{\overline{J}}}\Tr(\rho_{J\vphantom{\overline{J}}} \ln(\rho_{J\vphantom{\overline{J}}}))$ and $({\rm Bln}(\rho),{\rm Bln}(\rho))^{J\vphantom{\overline{J}}}_\rho=\Tr[\rho_J(\Delta{\rm Bln}(\rho_{J\vphantom{\overline{J}}}))^2]$ depend only on the local $\rho_{J\vphantom{\overline{J}}}$. Therefore, it is only when J is both noninteracting and uncorrelated that its local dissipation operator $\mathcal{D}^{J\vphantom{\overline{J}}}_\rho$ depends only on the local $H_{J\vphantom{\overline{J}}}$ and $\rho_{J\vphantom{\overline{J}}}$, and the local equation of motion Eq. (\ref{eq:SEA_local}) reduces exactly to the non-composite system version of SEA evolution \cite{beretta_1984_quantuma}.
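For this noninteracting, uncorrelated case, the reduced form of Eq. (\ref{eq:DJ_compact}) can be checked numerically. The following sketch (ours, a single qubit, with hypothetical values $k_{\rm B}=\tau_J=1$ and $H_J=\sigma_z$) verifies that probability and energy are conserved and that the entropy production equals the nonnegative Gram-determinant ratio of Eq. (\ref{eq:S_production}):

```python
import numpy as np

rng = np.random.default_rng(2)
tau = 1.0                       # relaxation time tau_J (hypothetical value)
sz = np.diag([1.0, -1.0])       # local Hamiltonian H_J = sigma_z

G = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = G @ G.conj().T
rho /= np.trace(rho)            # random full-rank qubit state

w, V = np.linalg.eigh(rho)
Bln = (V * np.log(w)) @ V.conj().T        # Bln(rho) = ln(rho), full rank

def dev(X):   # deviation Delta X = X - I Tr(rho X)
    return X - np.eye(2) * np.trace(rho @ X).real

def cov(X, Y):  # covariance (X,Y)_rho = Tr[rho {Delta X, Delta Y}]/2
    dX, dY = dev(X), dev(Y)
    return np.trace(rho @ (dX @ dY + dY @ dX)).real / 2

HH, HB, BB = cov(sz, sz), cov(sz, Bln), cov(Bln, Bln)
D = (dev(Bln) * HH - dev(sz) * HB) / (4 * tau * HH)   # reduced SEA dissipator

anti = D @ rho + rho @ D                              # {D, rho_J}
assert abs(np.trace(anti)) < 1e-9                     # Tr(rho) conserved
assert abs(np.trace(anti @ sz)) < 1e-9                # energy conserved
sdot = np.trace(anti @ Bln).real                      # entropy production, k_B = 1
assert np.isclose(sdot, (BB * HH - HB**2) / (2 * tau * HH)) and sdot >= 0
```

The Cauchy-Schwarz inequality for the covariance functional guarantees $(\,{\rm Bln},{\rm Bln})(H,H)\ge (H,{\rm Bln})^2$, which is what makes the asserted entropy production nonnegative.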
Instead, if J is either interacting or correlated, $\mathcal{D}^{J\vphantom{\overline{J}}}_\rho$ and, therefore, the local nonlinear SEA evolution according to Eq.\ (\ref{eq:SEA_local}), is determined not only by the local $H_{J\vphantom{\overline{J}}}$ and $\rho_{J\vphantom{\overline{J}}}$, but also by the local perceptions of the overall interaction Hamiltonian and/or the overall entropy operator ${\rm Bln}(\rho)$, nonetheless without violating the no-signaling condition. In extremal cases, it is known \cite{beretta_1984_quantuma,beretta2006PRE,beretta_2009_nonlinear} that even if the subsystems are entangled and therefore the local states $\rho_{J\vphantom{\overline{J}}}$ are mixed, operators $\mathcal{D}^{J\vphantom{\overline{J}}}_\rho$ vanish and Eqs.\ (\ref{eq:SEA_Composite_simp}) and (\ref{eq:SEA_local}) reduce to the standard Schr\"{o}dinger equation. E.g., if the overall system is in a pure state, ${\rm Bln}(\rho)=0$, standard unitary evolutions of pure states emerge as limit cycles of the nonlinear SEA dynamics. Consider the example of a two-qubit composite AB. The mixed and correlated states \begin{equation}\label{eq:Bell_diagonal_states} \rho =\dfrac{1}{4}\Big[\mathrm{I}_4+\!\!\!\!\sum_{j=\{x,y,z\}}\!\!\!(a_j\,\sigma_j\otimes\mathrm{I}_2 +b_j\,\mathrm{I}_2\otimes\sigma_j+c_j\,\sigma_j\otimes\sigma_j)\Big] , \end{equation} are Bell diagonal states if $a_j=b_j=0$ for all $j$'s (and Werner states if in addition $c_j=4w/3-1$ for all $j$'s) with eigenvalues $4\lambda_1 = 1-c_x-c_y-c_z$, $4 \lambda_2 = 1-c_x+c_y+c_z$, $4 \lambda_3 = 1+c_x-c_y+c_z$, $4 \lambda_4 = 1+c_x+c_y-c_z$. Somewhat surprisingly, Bell diagonal states are nondissipative limit cycles within nonlinear SEA dynamics, under any Hamiltonian. 
Indeed, we find $({\rm Bln}(\rho^{\mbox{\tiny Bell}}))^{J\vphantom{\overline{J}}}_\rho =\mathrm{I}_2\sum_k{\rm Bln}(\lambda_k)/4$, so that $\Delta({\rm Bln}(\rho))^{J\vphantom{\overline{J}}}_\rho=0$ and $\mathcal{D}^{J\vphantom{\overline{J}}}_\rho=0$, for both $J=A,B$. But most neighboring and other states in this class are dissipative. For a simple example of correlated but separable mixed states, assume $a_x=a$, $b_x=b$, and $a_y=a_z=b_y=b_z=c_x=c_y=c_z=0$, so that $\rho_A\otimes\rho_B-\rho=(ab/4)\sigma_x\otimes\sigma_x$ and the eigenvalues are $4\lambda_1 = 1-a-b$, $4 \lambda_2 = 1-a+b$, $4 \lambda_3 = 1+a-b$, $4 \lambda_4 = 1+a+b$. If the two noninteracting qubits A and B have local Hamiltonians $H_A=H_B=\sigma_z$, we find \begin{align}\label{eq:DA_DB_separable} \acomm*{\mathcal{D}^A_\rho}{\rho_A} & = \dfrac{(1-a^2)}{16}(bf_{a,b}-g_{a,b})\,\sigma_x, \\ \acomm*{\mathcal{D}^B_\rho}{\rho_B} & = \dfrac{(1-b^2)}{16}(af_{a,b}-h_{a,b})\,\sigma_x, \end{align} where $f_{a,b}={\rm Bln}(\lambda_1) - {\rm Bln}(\lambda_2) -{\rm Bln}(\lambda_3) + {\rm Bln}(\lambda_4)$, $g_{a,b}={\rm Bln}(\lambda_1) + {\rm Bln}(\lambda_2) - {\rm Bln}(\lambda_3) - {\rm Bln}(\lambda_4)$, and $h_{a,b}={\rm Bln}(\lambda_1) - {\rm Bln}(\lambda_2) + {\rm Bln}(\lambda_3) - {\rm Bln}(\lambda_4)$, so that the nonlinear evolution is clearly nontrivial. But it preserves the zero mean energies of both qubits, and while the overall entropy increases and mutual information partially fades away, it drives the overall state towards a nondissipative correlated state with maximally mixed marginals. We proved above that signaling is impossible, even though $\mathcal{D}^A_\rho$ depends not only on $a$ but also on $b$, and $\mathcal{D}^B_\rho$ not only on $b$ but also on $a$, in agreement with our no-signaling condition in Eq. (\ref{eq:no_signaling_condition}).
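The nondissipative character of the Bell diagonal states is easy to confirm numerically. The following sketch (ours, with hypothetical coefficients $c_j$ chosen so that $\rho>0$) checks that the locally perceived ${\rm Bln}(\rho)$ is proportional to the identity, so that its deviation, and hence $\mathcal{D}^{J}_\rho$, vanishes:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
I2, I4 = np.eye(2), np.eye(4)

c = [0.3, -0.2, 0.4]   # hypothetical c_j giving a full-rank Bell diagonal state
rho = 0.25 * (I4 + sum(cj * np.kron(s, s) for cj, s in zip(c, (sx, sy, sz))))

w, V = np.linalg.eigh(rho)
assert (w > 0).all()
Bln = (V * np.log(w)) @ V.conj().T

def ptrace_B(Z): return np.einsum('abcb->ac', Z.reshape(2, 2, 2, 2))
def ptrace_A(Z): return np.einsum('abad->bd', Z.reshape(2, 2, 2, 2))

rho_A, rho_B = ptrace_B(rho), ptrace_A(rho)
perc = ptrace_B(np.kron(I2, rho_B) @ Bln)        # (Bln(rho))^A_rho
devi = perc - I2 * np.trace(rho_A @ perc)        # Delta(Bln)^A_rho
assert np.allclose(devi, 0)                      # => D^A_rho = 0: nondissipative
assert np.allclose(rho_A, I2 / 2)                # maximally mixed marginal
```

Since every Bell projector has a maximally mixed partial trace, $(\,{\rm Bln}(\rho))^A_\rho$ is a multiple of $\mathrm{I}_2$ for any eigenvalues $\lambda_k$, so the vanishing of the deviation does not depend on the particular $c_j$ chosen.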
For a slightly more elaborate example that includes entangled mixed states, assume $a_x=a_z=a/\sqrt{2}$, $b_x=b_z=b/\sqrt{2}$, and $c_x=c_y=c_z=2(a-b)/3$, so that the eigenvalues are $4\lambda_1 = 1+a-b$, $12\lambda_2 = 3-a-5b$, $12\lambda_3 = 3+5a+b$, $12\lambda_4 = 3-7a+7b$, and those of the partial transpose are $12\lambda^{PT}_1 = 3+a-b$, $12\lambda^{PT}_2 = 3-5a+5b$, $12 \lambda^{PT}_3 = 3+2a-2b+\sqrt{d}$, $12 \lambda^{PT}_4 = 3+2a-2b-\sqrt{d}$ with $d=25a^2 - 14ab + 25b^2$. For $a=-b$ these states are separable for $-3/14\le b\le 1/4$ and entangled for $1/4<b\le 1/2$. If the two noninteracting qubits A and B have local Hamiltonians $H_A=H_B=\sigma_z$, we find \begin{align}\label{eq:entangled_example} \acomm*{\mathcal{D}^A_\rho}{\rho_A} & = \dfrac{\sqrt{2}(1-a^2)}{80(2-a^2)}(f_{a,b}-5bh_{a,b})\,\sigma_x, \\ \acomm*{\mathcal{D}^B_\rho}{\rho_B} & = -\dfrac{\sqrt{2}(1-b^2)}{80(2-b^2)}(g_{a,b}+5ah_{a,b})\,\sigma_x, \end{align} where now $f_{a,b}=3{\rm Bln}(\lambda_1) - 5{\rm Bln}(\lambda_2) +5 {\rm Bln}(\lambda_3) -3{\rm Bln}(\lambda_4)$, $g_{a,b}=3{\rm Bln}(\lambda_1) +5{\rm Bln}(\lambda_2) - 5{\rm Bln}(\lambda_3) - 3{\rm Bln}(\lambda_4)$, and $h_{a,b}={\rm Bln}(\lambda_1) - {\rm Bln}(\lambda_2) - {\rm Bln}(\lambda_3) + {\rm Bln}(\lambda_4)$, so that again the nonlinear evolution is clearly nontrivial in the sense that the local nonlinear evolution of A (B) does not depend only on $\rho_A$ ($\rho_B$), despite being no-signaling. To summarize, in this Letter we prove that the SEA formalism provides a valid nonlinear extension of QM. To show this, we explore the definition of no-signaling for composite systems and provide generalized necessary criteria in terms of locally perceived operators, less restrictive than the traditional criterion in terms of local density operators. Furthermore, we build on that definition and show how, by construction, SEA is no-signaling.
For non-interacting subsystems, the traditional criterion is met for uncorrelated states, but we provide nontrivial examples of correlated states for which it is not met.

RKR is grateful to the INSPIRE Fellowship program by the Department of Science and Technology, Govt. of India for funding his Ph.D., to Prof. Alok Pan of the Indian Institute of Technology Hyderabad for many useful discussions, and to the Wolfram Publication Watch Team for providing full access to online Mathematica \cite{wolframresearchinc._2022_mathematica}. \end{document}
\begin{document} \title[On the Harnack inequality for antisymmetric \(s\)-harmonic functions]{On the Harnack inequality \\ for antisymmetric \(s\)-harmonic functions} \author[S. Dipierro]{Serena Dipierro} \address{Serena Dipierro: Department of Mathematics and Statistics, The University of Western Australia, 35 Stirling Highway, Crawley, Perth, WA 6009, Australia} \email{[email protected]} \author[J. Thompson]{Jack Thompson} \address{Jack Thompson: Department of Mathematics and Statistics, The University of Western Australia, 35 Stirling Highway, Crawley, Perth, WA 6009, Australia} \email{[email protected]} \author[E. Valdinoci]{Enrico Valdinoci} \address{Enrico Valdinoci: Department of Mathematics and Statistics, The University of Western Australia, 35 Stirling Highway, Crawley, Perth, WA 6009, Australia} \email{[email protected]} \subjclass[2010]{Primary 35R11; 47G20; 35B50} \date{\today} \dedicatory{} \maketitle \begin{abstract} We prove the Harnack inequality for antisymmetric \(s\)-harmonic functions, and more generally for solutions of fractional equations with zero-th order terms, in a general domain. This may be used in conjunction with the method of moving planes to obtain quantitative stability results for symmetry and overdetermined problems for semilinear equations driven by the fractional Laplacian. The proof is split into two parts: an interior Harnack inequality away from the plane of symmetry, and a boundary Harnack inequality close to the plane of symmetry. We prove these results by first establishing the weak Harnack inequality for super-solutions and local boundedness for sub-solutions in both the interior and boundary case. En passant, we also obtain a new mean value formula for antisymmetric $s$-harmonic functions. \end{abstract} \section{Introduction and main results} Since the groundbreaking work \cite{harnack1887basics}, Harnack-type inequalities have become an essential tool in the analysis of partial differential equations (\textsc{PDE}). 
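To fix ideas, we recall the classical statement in its simplest form (standard material, included here only for the reader's convenience): if \(u\geqslant 0\) is harmonic in the unit ball \(B_1\subset\mathbb{R}^n\), then
\begin{align*}
\sup_{B_{1/2}} u \leqslant C \inf_{B_{1/2}} u ,
\end{align*}
with a constant \(C>0\) depending only on the dimension \(n\). The results of this paper may be viewed as counterparts of this estimate for antisymmetric solutions of fractional equations.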
In \cite{MR1729395,MR3522349,MR3481178}, Harnack inequalities have been applied to antisymmetric functions---functions that are odd with respect to reflections across a given plane, for more detail see Section \ref{LenbLVkk}---in conjunction with the method of moving planes to obtain stability results for local overdetermined problems including \emph{Serrin's overdetermined problem} and the \emph{parallel surface problem}; see also \cite{MR3932952}. In recent years, there has been consolidated interest in overdetermined problems driven by nonlocal operators, particularly the fractional Laplacian; however, standard Harnack inequalities for such operators are incompatible with antisymmetric functions since they require functions to be non-negative in all of \(\mathbb{R}^n\). In this paper, we will address this problem by proving a Harnack inequality for antisymmetric \(s\)-harmonic functions with zero-th order terms, which only requires non-negativity in a halfspace. By allowing zero-th order terms, this result is directly applicable to symmetry and overdetermined problems for semilinear equations driven by the fractional Laplacian. \subsection{Background} Fundamentally, the original Harnack inequality for elliptic \textsc{PDE} is a quantitative formulation of the strong maximum principle and directly gives, among other things, Liouville's theorem, the removable singularity theorem, compactness results, and Hölder estimates for weak and viscosity solutions, see~\cite{MR1814364,MR2597943}. This has led it to be extended to a wide variety of other settings. For local \textsc{PDE} these include: linear parabolic \textsc{PDE}~\cite{MR2597943,MR1465184}; quasilinear and fully nonlinear elliptic \textsc{PDE}~\cite{MR170096,MR226198,MR1351007}; quasilinear and fully nonlinear parabolic \textsc{PDE}~\cite{MR244638,MR226168}; and in connection with curvature and geometric flows such as Ricci flow on Riemannian manifolds~\cite{MR431040,MR834612,MR2251315}.
An extensive survey on the Harnack inequality for local \textsc{PDE} is~\cite{MR2291922}. For equations arising from jump processes, colloquially known as nonlocal \textsc{PDE}, the first Harnack inequality is due to Bogdan~\cite{MR1438304}, who proved the boundary Harnack inequality for the fractional Laplacian. The fractional Laplacian is the prototypical example of a nonlocal operator and is defined by \begin{align} (-\Delta)^s u(x) &= c_{n,s} \mathrm{P.V.} \int_{\mathbb{R}^n} \frac{u(x)-u(y)}{\vert x - y \vert^{n+2s}} \mathop{}\!d y \label{ZdAlT} \end{align} where~$s\in(0,1)$, \(c_{n,s}\) is a positive normalisation constant (see~\cite{MR3916700} for more details, particularly Proposition~5.6 there) and \(\mathrm{P.V.}\) refers to the Cauchy principal value. The result in~\cite{MR1438304} was only valid for Lipschitz domains, but this was later extended to any bounded domain in~\cite{MR1719233}. Over the following decade there were several papers proving Harnack inequalities for more general jump processes including~\cite{MR1918242,MR3271268}, see also~\cite{MR2365478}. For fully nonlinear nonlocal \textsc{PDE}, a Harnack inequality was established in~\cite{MR2494809}, see also~\cite{MR2831115}. More recently, in~\cite{MR4023466} the boundary Harnack inequality was proved for nonlocal \textsc{PDE} in non-divergence form. As far as we are aware, the only nonlocal Harnack inequality for antisymmetric functions in the literature is in~\cite{ciraolo2021symmetry} where a boundary Harnack inequality was established for antisymmetric \(s\)-harmonic functions in a small ball centred at the origin. Our results generalise this to arbitrary symmetric domains and to equations with zero-th order terms. \subsection{Main results} \label{K4mMv} Let us now describe in detail our main results. First, we will introduce some useful notation.
Given a point \(x=(x_1,\dots , x_n) \in \mathbb{R}^n\), we will write \(x=(x_1,x')\) with \(x'=(x_2,\dots , x_n)\) and we denote by \(x_\ast\) the reflection of \(x\) across the plane \(\{x_1=0\}\), that is, \(x_\ast = (-x_1,x')\). Then we call a function \(u:\mathbb{R}^n \to \mathbb{R}\) \emph{antisymmetric} if \begin{align*} u(x_\ast) = -u(x) \qquad \text{for all } x\in \mathbb{R}^n. \end{align*} We will also denote by \(\mathbb{R}^n_+\) the halfspace \(\{x\in \mathbb{R}^n \text{ s.t. } x_1>0\}\) and, given \(A\subset \mathbb{R}^n\), we let \(A^+ := A \cap \mathbb{R}^n_+\). Moreover, we will frequently make use of the functional space \(\mathscr A_s(\mathbb{R}^n)\) which we define to be the set of all antisymmetric functions \(u\in L^1_{\mathrm {loc}}(\mathbb{R}^n)\) such that \begin{align} \Anorm{u}:= \int_{\mathbb{R}^n_+} \frac{x_1 \vert u(x) \vert}{1+ \vert x \vert^{n+2s+2}}\mathop{}\!d x \label{QhKBn} \end{align} is finite. The role that the new space \(\mathscr A_s(\mathbb{R}^n)\) plays will be explained in more detail later in this section. Our main result establishes the Harnack inequality\footnote{The arguments presented here are quite general and can be suitably extended to other integro-differential operators. For the specific case of the fractional Laplacian, the extension problem can provide alternative ways to prove some of the results presented here. We plan to treat this case extensively in a forthcoming paper, but in Appendix~\ref{APPEEXT:1} here we present a proof of~\eqref{LA:PAKSM} specific for the fractional Laplacian and~$c:=0$ that relies on extension methods.} in a general symmetric domain \(\Omega\). \begin{thm} \thlabel{Hvmln} Let \(\Omega \subset \mathbb{R}^n\) and \(\tilde \Omega \Subset \Omega\) be bounded domains that are symmetric with respect to \(\{x_1=0\}\), and let \(c\in L^\infty(\Omega^+)\). 
Suppose that \(u \in C^{2s+\alpha}(\Omega)\cap \mathscr{A}_s(\mathbb{R}^n) \) for some \(\alpha >0\) with \(2s+\alpha\) not an integer, \(u\) is non-negative in \(\mathbb{R}^n_+\), and satisfies \begin{align*} (-\Delta)^s u +cu&=0 \qquad \text{in }\Omega^+. \end{align*} Then there exists \(C>0\) depending only on \(\Omega\), \(\tilde \Omega\), \(\| c\|_{L^\infty(\Omega^+)}\), \(n\), and \(s\) such that \begin{equation}\label{LA:PAKSM} \sup_{\tilde \Omega^+} \frac{u(x)}{x_1} \leqslant C \inf_{ \tilde \Omega^+} \frac{u(x)}{x_1} . \end{equation} Moreover, \(\inf_{\tilde \Omega^+} \frac{u(x)}{x_1}\) and \(\sup_{\tilde \Omega^+} \frac{u(x)}{x_1}\) are comparable to \(\Anorm{u}\). \end{thm} Here and throughout this document we use the convention that if \(\beta>0\) and \(\beta\) is not an integer then \(C^\beta (\Omega)\) denotes the Hölder space \(C^{k,\beta'}(\Omega)\) where \(k\) is the integer part of \(\beta\) and \(\beta'=\beta-k \in (0,1)\). One can compare the Harnack inequality in Theorem~\ref{Hvmln} here with previous results in the literature, such as Proposition~6.1 in~\cite{MR4308250}, which can be seen as a boundary Harnack inequality for antisymmetric functions, in a rather general fractional elliptic setting. We point out that Theorem~\ref{Hvmln} here holds true without any sign assumption on~$c$ (differently from Proposition~6.1 in~\cite{MR4308250} in which a local sign assumption\footnote{The sign assumption on \(c\) in Proposition~6.1 of~\cite{MR4308250} is required since the result was for unbounded domains. On the other hand, our result is for bounded domains which is why this assumption is no longer necessary. } on~$c$ was taken), for all~$s\in(0,1)$ (while the analysis of Proposition~6.1 in~\cite{MR4308250} was focused on the case~$s\in\left[\frac12,1\right)$), and in every dimension~$n$ (while Proposition~6.1 in~\cite{MR4308250} dealt with the case~$n=1$).
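To exemplify the generality afforded by the space \(\mathscr A_s(\mathbb{R}^n)\) introduced in~\eqref{QhKBn}, we observe (this explicit example is ours) that the antisymmetric function \(u(x):=x_1\) satisfies
\begin{align*}
\Anorm{u}=\int_{\mathbb{R}^n_+}\frac{x_1^2}{1+\vert x \vert^{n+2s+2}}\mathop{}\!d x
\leqslant C\int_0^{+\infty}\frac{r^{n+1}}{1+r^{n+2s+2}}\mathop{}\!d r<+\infty
\end{align*}
for every \(s\in(0,1)\), while the tail integral \(\int_{\mathbb{R}^n}\frac{\vert u(x)\vert}{1+\vert x\vert^{n+2s}}\mathop{}\!d x\), which is the quantity usually required to be finite in the nonlocal Harnack theory, diverges whenever \(s\in\left(0,\frac12\right]\), since in polar coordinates the corresponding integrand behaves like \(r^{-2s}\) at infinity.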
We would like to emphasise that, in contrast to the standard Harnack inequality for the fractional Laplacian, \thref{Hvmln} only assumes \(u\) to be non-negative in the halfspace \(\mathbb{R}^n_+\), which is a more natural assumption for antisymmetric functions. Also, note that the assumption that \(\tilde \Omega \Subset \Omega\) allows~\(\tilde \Omega^+\) to touch \(\{x_1=0\}\), but prevents \(\tilde \Omega^+\) from touching the portion of \(\partial\Omega\) that is not on \(\{x_1=0\}\); it is in this sense that we will sometimes refer to \thref{Hvmln} (and later \thref{C35ZH}) as a \emph{boundary} Harnack inequality for antisymmetric functions. Interestingly, the quantity \(\Anorm{u}\) arises very naturally in the context of antisymmetric functions and plays the same role that \begin{align} \| u\|_{\mathscr L_s (\mathbb{R}^n)} := \int_{\mathbb{R}^n} \frac{\vert u (x) \vert }{1+\vert x \vert^{n+2s}} \mathop{}\!d x \label{ACNAzPn8} \end{align} plays in the non-antisymmetric nonlocal Harnack inequality, see for example \cite{MR4023466}. To our knowledge, \(\Anorm{\cdot}\) is new in the literature. A technical aspect of \thref{Hvmln}, however, is that the fractional Laplacian is in general not defined if we only assume that \(u\) is \(C^{2s+\alpha}\) and that \(\Anorm{u} < +\infty\). This leads us to introduce the following new definition of the fractional Laplacian for functions in \(\mathscr A_s(\mathbb{R}^n)\), which we will use throughout the remainder of this paper. \begin{defn} \thlabel{mkG4iRYH} Let \(x\in \mathbb{R}^n_+\) and suppose that \(u \in \mathscr A_s(\mathbb{R}^n)\) is \( C^{2s+\alpha}\) in a neighbourhood of \(x\) for some \(\alpha>0\) with \(2s+\alpha\) not an integer.
The fractional Laplacian of \(u\) at \(x\), which we denote as usual by \((-\Delta)^su(x)\), is defined by \begin{align} (-\Delta)^su(x) = c_{n,s} \lim_{\varepsilon\to 0^+} \int_{\mathbb{R}^n_+ \setminus B_\varepsilon(x)} \left( \frac 1 {\vert x - y \vert^{n+2s}} - \frac 1 {\vert x_\ast- y \vert^{n+2s}}\right) \big(u(x)-u(y)\big)\mathop{}\!d y + \frac {c_{1,s}} s u(x)x_1^{-2s} \label{OopaQ0tu} \end{align} where \(c_{n,s}\) is the constant from~\eqref{ZdAlT}. \end{defn} We will motivate \thref{mkG4iRYH} in Section \ref{LenbLVkk} as well as verify it is well-defined. We also prove in Section \ref{LenbLVkk} that if \(\| u\|_{\mathscr L_s(\mathbb{R}^n)}\) is finite, $u$ is \( C^{2s+\alpha}\) in a neighbourhood of \(x\), and antisymmetric then \thref{mkG4iRYH} agrees with~\eqref{ZdAlT}. See also~\cite{MR3453602, MR4108219, MR4030266, MR4308250} where maximum principles in the setting of antisymmetric solutions of integro-differential equations have been established by exploiting the antisymmetry of the functions as in~\eqref{OopaQ0tu}. It is also worth mentioning that the requirement that \(u\) is antisymmetric in \thref{Hvmln} cannot be entirely removed. In Appendix \ref{j4njb}, we construct a sequence of functions that explicitly demonstrates this. Moreover, \thref{Hvmln} is also false if one only assumes \(u\geqslant0\) in \(\Omega^+\) as proven in \cite[Corollary 1.3]{RoleAntisym2022}. To obtain the full statement of \thref{Hvmln} we divide the proof into two parts: an interior Harnack inequality and a boundary Harnack inequality close to \(\{x_1=0\}\). The interior Harnack inequality is given as follows. \begin{thm} \thlabel{DYcYH} Let \(\rho\in(0,1)\) and \(c\in L^\infty(B_\rho(e_1))\). Suppose that \(u \in C^{2s+\alpha}(B_\rho(e_1))\cap \mathscr{A}_s(\mathbb{R}^n) \) for some \(\alpha >0\) with \(2s+\alpha\) not an integer, \(u\) is non-negative in \(\mathbb{R}^n_+\), and satisfies \begin{align*} (-\Delta)^s u +cu =0 \qquad\text{in } B_{\rho}(e_1).
\end{align*} Then there exists \(C_\rho>0\) depending only on \(n\), \(s\), \(\| c \|_{L^\infty(B_\rho(e_1))}\), and \(\rho\) such that \begin{align*} \sup_{B_{\rho/2}(e_1)} u \leqslant C_\rho \inf_{B_{\rho/2}(e_1)} u. \end{align*} Moreover, both the quantities \(\sup_{B_{\rho/2}(e_1)} u \) and \(\inf_{B_{\rho/2}(e_1)} u \) are comparable to \(\Anorm{u}\). \end{thm} For all \(r>0\), we write \(B_r^+ := B_r \cap \mathbb{R}^n_+\). Then the following result is the antisymmetric boundary Harnack inequality. \begin{thm} \thlabel{C35ZH} Let \(\rho>0\) and \(c\in L^\infty(B_\rho^+)\). Suppose that \(u \in C^{2s+\alpha}(B_\rho)\cap \mathscr{A}_s(\mathbb{R}^n) \) for some \(\alpha >0\) with \(2s+\alpha\) not an integer, \(u\) is non-negative in \(\mathbb{R}^n_+\), and satisfies \begin{align*} (-\Delta)^s u +cu&=0 \qquad \text{in } B_\rho^+. \end{align*} Then there exists \(C_\rho>0\) depending only on \(n\), \(s\), \(\| c \|_{L^\infty(B_\rho^+)}\), and \(\rho\) such that \begin{align*} \sup_{x\in B_{\rho/2}^+} \frac{u(x)}{x_1} \leqslant C_\rho \inf_{x\in B_{\rho/2}^+} \frac{u(x)}{x_1}. \end{align*} Moreover, both the quantities \(\sup_{x\in B_{\rho/2}^+} \frac{u(x)}{x_1} \) and \(\inf_{x\in B_{\rho/2}^+} \frac{u(x)}{x_1} \) are comparable to \(\Anorm{u}\). \end{thm} To our knowledge, Theorems~\ref{DYcYH} and~\ref{C35ZH} are new in the literature. In the particular case \(c\equiv0\), \thref{C35ZH} was proven in \cite{ciraolo2021symmetry}, but the proof relied on the Poisson representation formula for the Dirichlet problem in a ball which is otherwise unavailable in more general settings. The proofs of Theorems~\ref{DYcYH} and~\ref{C35ZH} are each split into two halves: for \thref{DYcYH} we prove an interior weak Harnack inequality for super-solutions (\thref{MT9uf}) and interior local boundedness of sub-solutions (\thref{guDQ7}).
Analogously, for \thref{C35ZH} we prove a boundary weak Harnack inequality for super-solutions (\thref{SwDzJu9i}) and boundary local boundedness of sub-solutions (\thref{EP5Elxbz}). The proofs of Propositions~\ref{MT9uf}, \ref{guDQ7}, \ref{SwDzJu9i}, and~\ref{EP5Elxbz} make use of general barrier methods which take inspiration from \cite{MR4023466,MR2494809,MR2831115}; however, these methods require adjustments which take into account the antisymmetry assumption. Once Theorems~\ref{DYcYH} and \ref{C35ZH} have been established, \thref{Hvmln} follows easily from a standard covering argument. For completeness, we have included this in Section \ref{lXIUl}. Finally, in Appendix~\ref{4CEly} we provide an alternate elementary proof of \thref{C35ZH} in the particular case \(c\equiv0\). As in \cite{ciraolo2021symmetry}, our proof relies critically on the Poisson representation formula in a ball for the fractional Laplacian, but the overall strategy of our proof is entirely different. This was necessary to show that \(\sup_{x\in B_{\rho/2}^+} \frac{u(x)}{x_1} \) and \(\inf_{x\in B_{\rho/2}^+} \frac{u(x)}{x_1} \) are comparable to \(\Anorm{u}\) which was not proven in \cite{ciraolo2021symmetry}. Our proof makes use of a new mean-value formula for antisymmetric \(s\)-harmonic functions, which we believe to be interesting in and of itself. The usual mean value formula for \(s\)-harmonic functions says that if \(u\) is \(s\)-harmonic in \(B_1\) then we have that \begin{align} u(0) = \gamma_{n,s} \int_{\mathbb{R}^n\setminus B_r} \frac{r^{2s} u(y)}{(\vert y \vert^2-r^2)^s\vert y \vert^n}\mathop{}\!d y \qquad \text{for all }r\in(0, 1] \label{K8kw5rpi} \end{align}where \begin{equation}\label{sry6yagamma098765} \gamma_{n,s}:= \frac{\sin(\pi s)\Gamma (n/2)}{\pi^{\frac n 2 +1 }}. \end{equation} {F}rom antisymmetry, however, both the left-hand side and the right-hand side of~\eqref{K8kw5rpi} are zero irrespective of the fact \(u\) is \(s\)-harmonic. 
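Let us verify this assertion explicitly (an elementary computation which we include for completeness). On the one hand, antisymmetry gives \(u(0)=u(0_\ast)=-u(0)\), whence \(u(0)=0\). On the other hand, the kernel in~\eqref{K8kw5rpi} depends only on \(\vert y \vert\) and \(\vert y_\ast \vert = \vert y \vert\), so the change of variables \(y\mapsto y_\ast\) yields
\begin{align*}
\int_{\mathbb{R}^n\setminus B_r} \frac{r^{2s} u(y)}{(\vert y \vert^2-r^2)^s\vert y \vert^n}\mathop{}\!d y
=\int_{\mathbb{R}^n\setminus B_r} \frac{r^{2s} u(y_\ast)}{(\vert y \vert^2-r^2)^s\vert y \vert^n}\mathop{}\!d y
=-\int_{\mathbb{R}^n\setminus B_r} \frac{r^{2s} u(y)}{(\vert y \vert^2-r^2)^s\vert y \vert^n}\mathop{}\!d y ,
\end{align*}
and therefore the right-hand side of~\eqref{K8kw5rpi} vanishes as well.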
It is precisely this observation that leads to the consideration of \(\partial_1u(0)\) instead of \(u(0)\) which is more appropriate when \(u\) is antisymmetric. \begin{prop} \thlabel{cqGgE} Let \(u\in C^{2s+\alpha}(B_1) \cap \mathscr L_s(\mathbb{R}^n)\) with \(\alpha>0\) and \(2s+\alpha\) not an integer. Suppose that~\(u\) is antisymmetric and \(r\in(0, 1]\). If \((-\Delta)^s u = 0 \) in \(B_1\) then \begin{align*} \frac{\partial u}{\partial x_1} (0) &= 2n \gamma_{n,s} \int_{\mathbb{R}^n_+ \setminus B_r^+} \frac{r^{2s}y_1 u(y)}{(\vert y \vert^2 - r^2)^s\vert y \vert^{n+2}} \mathop{}\!d y , \end{align*} with~$\gamma_{n,s}$ given by~\eqref{sry6yagamma098765}. \end{prop} \subsection{Organisation of paper} The paper is organised as follows. In Section \ref{LenbLVkk}, we motivate \thref{mkG4iRYH} and establish some technical lemmata regarding this definition. In Section~\ref{q7hKF}, we prove \thref{DYcYH} and, in Section~\ref{TZei6Wd4}, we prove \thref{C35ZH}. In Section~\ref{lXIUl}, we prove the main result \thref{Hvmln}. In Appendix~\ref{j4njb}, we construct a sequence that demonstrates the antisymmetric assumption in \thref{Hvmln} cannot be removed. Finally, in Appendix~\ref{4CEly}, we prove Proposition~\ref{cqGgE}, and with this, we provide an elementary alternate proof of \thref{C35ZH} for the particular case~\(c\equiv0\). \section{The antisymmetric fractional Laplacian} \label{LenbLVkk} Let \(n\) be a positive integer and \(s\in (0,1)\). The purpose of this section is to motivate \thref{mkG4iRYH} as well as prove some technical aspects of the definition such as that it is well-defined and that it coincides with~\eqref{ZdAlT} when \(u\) is sufficiently regular. Recall that, given a point \(x=(x_1,\dots , x_n) \in \mathbb{R}^n\), we will write \(x=(x_1,x')\) with \(x'=(x_2,\dots , x_n)\) and we denote by \(x_\ast\) the reflection of \(x\) across the plane \(\{x_1=0\}\), that is, \(x_\ast = (-x_1,x')\). 
Then we call a function \(u:\mathbb{R}^n \to \mathbb{R}\) \emph{antisymmetric} if \begin{align*} u(x_\ast) = -u(x) \qquad \text{for all } x\in \mathbb{R}^n. \end{align*} It is common in the literature, particularly when dealing with the method of moving planes, to define antisymmetry with respect to a given hyperplane \(T\); however, for our purposes, it is sufficient to take~\(T=\{x_1=0\}\). For the fractional Laplacian of \(u\), given by~\eqref{ZdAlT}, to be well-defined in a pointwise sense, we need two ingredients: \(u\) needs enough regularity in a neighbourhood of \(x\) to overcome the singularity at \(x\) in the kernel \(\vert x-y\vert^{-n-2s}\), and \(u\) also needs an integrability condition at infinity to account for the integral in~\eqref{ZdAlT} being over \(\mathbb{R}^n\). For example, if \(u\) is \(C^{2s+\alpha}\) in a neighbourhood of \(x\) for some~\(\alpha >0\) with \(2s+\alpha\) not an integer and \(u\in\mathscr L_s(\mathbb{R}^n)\) where \(\mathscr L_s (\mathbb{R}^n)\) is the set of functions \(v\in L^1_{\mathrm{loc}}(\mathbb{R}^n)\) such that \(\| v\|_{\mathscr L_s(\mathbb{R}^n)} <+\infty\) (recall \(\| \cdot \|_{\mathscr L_s(\mathbb{R}^n)}\) is given by~\eqref{ACNAzPn8}) then \((-\Delta)^su\) is well-defined at \(x\) and is in fact continuous there, \cite[Proposition 2.4]{MR2270163}. In the following proposition, we show that if \(u\in \mathscr L_s(\mathbb{R}^n)\) is antisymmetric and \(u\) is \(C^{2s+\alpha}\) in a neighbourhood of \(x\) for some \(\alpha >0\) with \(2s+\alpha\) not an integer then \(u\) satisfies~\eqref{OopaQ0tu}. This simultaneously motivates \thref{mkG4iRYH} and demonstrates this definition does generalise the definition of the fractional Laplacian when the given function is antisymmetric. \begin{prop} \thlabel{eeBLRjcZ} Let \(x\in \mathbb{R}^n_+\) and \(u \in \mathscr L_s(\mathbb{R}^n)\) be an antisymmetric function that is \(C^{2s+\alpha}\) in a neighbourhood of \(x\) for some \(\alpha>0\) with \(2s+\alpha\) not an integer. 
Then~\eqref{ZdAlT} and \thref{mkG4iRYH} coincide. \end{prop} \begin{proof} Let~$x=(x_1,x')\in \mathbb{R}^n_+$ and take~$\delta\in\left(0,\frac{x_1}2\right)$ such that~$u$ is~$C^{2s+\alpha}$ in~$B_\delta(x)\subset\mathbb{R}^n_+$. Furthermore, let \begin{align} (-\Delta)^s_\delta u(x):= c_{n,s}\int_{\mathbb{R}^n \setminus B_\delta(x)} \frac{u(x)-u(y)}{\vert x - y\vert^{n+2s}} \mathop{}\!d y .\label{WUuXb0Hk} \end{align} By the regularity assumptions on \(u\), the integral in~\eqref{ZdAlT} is well-defined and \begin{align*} \lim_{\delta\to0^+}(-\Delta)^s_\delta u(x) = c_{n,s} \mathrm{P.V.} \int_{\mathbb{R}^n}& \frac{u(x)-u(y)}{\vert x - y \vert^{n+2s}} \mathop{}\!d y =(-\Delta)^s u(x). \end{align*} Splitting the integral in~\eqref{WUuXb0Hk} into two integrals over \(\mathbb{R}^n_+ \setminus B_\delta(x)\) and \(\mathbb{R}^n_-\) respectively and then using that \(u\) is antisymmetric in the integral over \(\mathbb{R}^n_-\), we obtain \begin{align} (-\Delta)^s_\delta u(x) &= c_{n,s}\int_{\mathbb{R}^n_+ \setminus B_\delta(x)} \frac{u(x)-u(y)}{\vert x - y\vert^{n+2s}} \mathop{}\!d y +c_{n,s} \int_{\mathbb{R}^n_+} \frac{u(x)+u(y)}{\vert x_\ast - y\vert^{n+2s}} \mathop{}\!d y \nonumber \\ &= c_{n,s}\int_{\mathbb{R}^n_+ \setminus B_\delta(x)} \left( \frac 1 {\vert x - y \vert^{n+2s}} - \frac 1 {\vert x_\ast- y \vert^{n+2s}} \right)\big(u(x)-u(y)\big) \mathop{}\!d y \nonumber \\ &\qquad +2c_{n,s}u(x) \int_{\mathbb{R}^n_+ \setminus B_\delta(x)} \frac{\mathop{}\!d y }{\vert x_\ast - y \vert^{n+2s}} + c_{n,s} \int_{B_\delta(x)} \frac{u(x)+u(y) }{\vert x_\ast - y \vert^{n+2s}}\mathop{}\!d y . 
\label{cGXQjDIV} \end{align} We point out that if~$y\in\mathbb{R}^n_+$ then~$|x_\ast-y|\ge x_1$, and therefore \begin{eqnarray*} && \left \vert \int_{B_\delta(x)} \frac{u(x)+u(y) }{\vert x_\ast - y \vert^{n+2s}}\mathop{}\!d y \right \vert\le \int_{B_\delta(x)} \frac{2|u(x)|+|u(y)-u(x)| }{\vert x_\ast - y \vert^{n+2s}}\mathop{}\!d y\\ &&\qquad\qquad \le \int_{B_\delta(x)} \frac{2|u(x)|+C|x-y|^{\min\{1,2s+\alpha\}} }{\vert x_\ast - y \vert^{n+2s}}\mathop{}\!d y\le C\big ( \vert u(x) \vert + \delta^{\min\{1,2s+\alpha\}} \big ) \delta^n, \end{eqnarray*} for some~$C>0$ depending on~$n$, $s$, $x$, and the norm of~$u$ in a neighbourhood of~$x$. As a consequence, sending \(\delta \to 0^+\) in~\eqref{cGXQjDIV} and using the Dominated Convergence Theorem, we obtain \begin{align*} (-\Delta)^s u(x) &= c_{n,s}\lim_{\delta \to 0^+}\int_{\mathbb{R}^n_+ \setminus B_\delta(x)} \left( \frac 1 {\vert x - y \vert^{n+2s}} - \frac 1 {\vert x_\ast- y \vert^{n+2s}} \right)\big(u(x)-u(y)\big) \mathop{}\!d y \\ &\qquad +2c_{n,s}u(x) \int_{\mathbb{R}^n_+} \frac{\mathop{}\!d y }{\vert x_\ast - y \vert^{n+2s}}. \end{align*} Also, making the change of variables \(z=(y_1/x_1 , (y'-x')/x_1)\) if \(n>1\) and \(z_1=y_1/x_1\) if \(n=1\), we see that \begin{align*} \int_{\mathbb{R}^n_+} \frac{\mathop{}\!d y }{\vert x_\ast - y \vert^{n+2s}} &= x_1^{-2s} \int_{\mathbb{R}^n_+} \frac{\mathop{}\!d z}{\vert e_1+ z \vert^{n+2s}}. \end{align*} Moreover, via a direct computation, \begin{equation}\label{sd0w3dewt95b76 498543qdetr57uy54u} c_{n,s} \int_{\mathbb{R}^n_+} \frac{\mathop{}\!d z}{\vert e_1+ z \vert^{n+2s}} = \frac {c_{1,s}}{2s} \end{equation} see \thref{tuMIrhco} below for details. Putting together these considerations, we obtain the desired result. \end{proof} \begin{remark} \thlabel{tuMIrhco} Let \begin{align} \tilde c_{n,s} := c_{n,s} \int_{\mathbb{R}^n_+} \frac{\mathop{}\!d z}{\vert e_1+ z \vert^{n+2s}}.
\label{CA7GPGvx} \end{align}The value of \(\tilde c_{n,s}\) is of no consequence to \thref{mkG4iRYH} and so, for this paper, is irrelevant; however, it is interesting to note that \(\tilde c_{n,s}\) can be explicitly evaluated and is, in fact, equal to \((2s)^{-1}c_{1,s}\) (in particular \(\tilde c_{n,s}\) is independent of \(n\) which is not obvious from~\eqref{CA7GPGvx}). Indeed, if \(n>1\) then \begin{align*} \int_{\mathbb{R}^n_+} \frac{\mathop{}\!d z}{\vert e_1+ z \vert^{n+2s}} &= \int_0^\infty \int_{\mathbb{R}^{n-1}} \frac{\mathop{}\!d z_1\mathop{}\!d z'}{\big ( (z_1+1)^2 + \vert z' \vert^2\big )^{\frac{n+2s}2}} \end{align*} so, making the change of variables \(\tilde z'= (z_1+1)^{-1} z'\) in the inner integral, we have that \begin{align*} \int_{\mathbb{R}^n_+} \frac{\mathop{}\!d z}{\vert e_1+ z \vert^{n+2s}} &= \int_0^\infty \frac 1 {(z_1+1)^{1+2s}} \left( \int_{\mathbb{R}^{n-1}} \frac{\mathop{}\!d \tilde z'}{\big ( 1 + \vert \tilde z' \vert^2\big )^{\frac{n+2s}2}} \right) \mathop{}\!d z_1. \end{align*} By \cite[Proposition 4.1]{MR3916700}, \begin{align*} \int_{\mathbb{R}^{n-1}} \frac{\mathop{}\!d \tilde z'}{\big ( 1 + \vert \tilde z' \vert^2\big )^{\frac{n+2s}2}} &= \frac{\displaystyle \pi^{\frac{n-1}2} \Gamma \big ( \frac{1+2s}2 \big )}{\displaystyle \Gamma \big ( \frac{n+2s}2 \big )}. \end{align*} Hence, \begin{align*} \int_{\mathbb{R}^n_+} \frac{\mathop{}\!d z}{\vert e_1+ z \vert^{n+2s}} &= \frac{\pi^{\frac{n-1}2} \Gamma \big ( \frac{1+2s}2 \big )}{\Gamma \big ( \frac{n+2s}2 \big )} \int_0^\infty \frac {\mathop{}\!d z_1} {(z_1+1)^{1+2s}} = \frac{\pi^{\frac{n-1}2} \Gamma \big ( \frac{1+2s}2 \big )}{2s \Gamma \big ( \frac{n+2s}2 \big )}. \end{align*} This formula remains valid if \(n=1\). Since \(c_{n,s} = s \pi^{-n/2}4^s \Gamma \big ( \frac{n+2s}2 \big )/ \Gamma (1-s)\), see \cite[Proposition~5.6]{MR3916700}, it follows that \begin{align*} \tilde c_{n,s} &= \frac{2^{2s-1} \Gamma \big ( \frac{1+2s}2 \big )}{\pi^{1/2} \Gamma(1-s) } = \frac { c_{1,s}}{2s}. 
\end{align*} In particular, this proves formula~\eqref{sd0w3dewt95b76 498543qdetr57uy54u}, as desired. \end{remark} The key observation from \thref{eeBLRjcZ} is that the kernel \( {\vert x - y \vert^{-n-2s}} - {\vert x_\ast- y \vert^{-n-2s}}\) decays quicker at infinity than the kernel \(\vert x-y \vert^{-n-2s}\) which is what allows us to relax the assumption~\(u\in \mathscr L_s (\mathbb{R}^n)\). Indeed, if~$x$, $y\in \mathbb{R}^n_+ $ then \(\vert x_\ast - y \vert \geqslant \vert x-y \vert\) and \begin{align} \frac 1 {\vert x - y \vert^{n+2s}} - \frac 1 {\vert x_\ast- y \vert^{n+2s}} &= \frac{n+2s} 2 \int_{\vert x - y \vert^2}^{\vert x_\ast -y \vert^2} t^{-\frac{n+2s+2}2} \mathop{}\!d t \nonumber \\ &\leqslant \frac{n+2s} 2 \big ( \vert x_\ast -y \vert^2-\vert x -y \vert^2 \big ) \frac 1 {\vert x - y \vert^{n+2s+2}} \nonumber \\ &= 2(n+2s) \frac{x_1y_1}{\vert x - y \vert^{n+2s+2}}. \label{LxZU6} \end{align} Similarly, for all~$x$, $y\in \mathbb{R}^n_+$, \begin{align} \frac 1 {\vert x - y \vert^{n+2s}} - \frac 1 {\vert x_\ast- y \vert^{n+2s}} &\geqslant 2(n+2s) \frac{x_1y_1}{\vert x_\ast - y \vert^{n+2s+2}}. \label{buKHzlE6} \end{align} Hence, if \(\vert x\vert <C_0\) and if \(|x-y|>C_1\) for some \(C_0,C_1>0\) independent of \(x,y\), we find that \begin{align} \frac{C^{-1} x_1y_1}{1+\vert y \vert^{n+2s+2}} \leqslant \frac 1 {\vert x - y \vert^{n+2s}} - \frac 1 {\vert x_\ast- y \vert^{n+2s}} \leqslant \frac{Cx_1y_1}{1+\vert y \vert^{n+2s+2}} \label{Zpwlcwex} \end{align} which motivates the consideration of \(\mathscr A_s(\mathbb{R}^n)\). Our final lemma in this section proves that \thref{mkG4iRYH} is well-defined, that is, the assumptions of \thref{mkG4iRYH} are enough to guarantee \eqref{OopaQ0tu} converges. \begin{lem} \thlabel{zNNKjlJJ} Let \(x \in \mathbb{R}^n_+\) and \(r\in (0,x_1/2)\) be sufficiently small so that \(B:=B_r(x) \Subset\mathbb{R}^n_+\).
If \(u \in C^{2s+\alpha}(B) \cap \mathscr A_s(\mathbb{R}^n)\) for some \(\alpha>0\) with \(2s+\alpha\) not an integer then \((-\Delta)^su(x)\) given by \eqref{OopaQ0tu} is well-defined and there exists \(C>0\) depending on \(n\), \(s\), \(\alpha\), \(r\), and \(x_1\) such that \begin{align*} \big \vert (-\Delta)^s u(x) \big \vert &\leqslant C \big ( \| u\|_{C^{\beta}(B)} + \Anorm{u} \big ) \end{align*} where \(\beta = \min \{ 2s+\alpha ,2\}\). \end{lem} We remark that the constant \(C\) in \thref{zNNKjlJJ} blows up as \(x_1\to 0^+\). \begin{proof}[Proof of Lemma~\ref{zNNKjlJJ}] Write \( (-\Delta)^s u(x) = I_1 +I_2 + s^{-1} c_{1,s} u(x) x_1^{-2s}\) where \begin{align*} I_1 &:= c_{n,s}\lim_{\varepsilon \to 0^+} \int_{B\setminus B_\varepsilon(x)} \left( \frac 1 {\vert x - y \vert^{n+2s}} - \frac 1 {\vert x_\ast- y \vert^{n+2s}}\right) \big(u(x)-u(y)\big)\mathop{}\!d y\\ \text{ and } \qquad I_2 &:= c_{n,s}\int_{\mathbb{R}^n_+\setminus B} \left(\frac 1 {\vert x - y \vert^{n+2s}} - \frac 1 {\vert x_\ast- y \vert^{n+2s}}\right) \left(u(x)-u(y)\right)\mathop{}\!d y. \end{align*} \emph{For \(I_1\):} If \(y\in \mathbb{R}^n_+\) then \({\vert x - y \vert^{-n-2s}} - {\vert x_\ast- y \vert^{-n-2s}} \) is only singular at \(x\), so we may write \begin{align*} I_1 &= c_{n,s}\lim_{\varepsilon \to 0^+} \int_{B\setminus B_\varepsilon(x)}\frac {u(x)-u(y)} {\vert x - y \vert^{n+2s}}\mathop{}\!d y - \int_B \frac {u(x)-u(y)} {\vert x_\ast- y \vert^{n+2s}}\mathop{}\!d y.
\end{align*}If \(\chi_B\) denotes the characteristic function of \(B\) then \(\bar u := u\chi_B\in C^{2s+\alpha}(B)\cap L^\infty(\mathbb{R}^n)\), so \((-\Delta)^s\bar u(x)\) is defined and \begin{align*} (-\Delta)^s\bar u(x) &= c_{n,s}\mathrm{P.V.} \int_{\mathbb{R}^n} \frac{u(x) - u(y)\chi_B(y)}{\vert x-y\vert^{n+2s}} \mathop{}\!d y \\ &= c_{n,s} \mathrm{P.V.} \int_B \frac{u(x) - u(y)}{\vert x-y\vert^{n+2s}} \mathop{}\!d y + c_{n,s} u(x)\int_{\mathbb{R}^n\setminus B} \frac{\mathop{}\!d y }{\vert x-y\vert^{n+2s}}\\ &= c_{n,s}\mathrm{P.V.} \int_B \frac{u(x) - u(y)}{\vert x-y\vert^{n+2s}} \mathop{}\!d y + \frac 1{2s}c_{n,s} n \vert B_1 \vert r^{-2s} u(x) . \end{align*}Hence, \begin{align*} I_1&= (-\Delta)^s\bar u(x) -\frac 1{2s}c_{n,s} n \vert B_1 \vert r^{-2s} u(x) -c_{n,s} \int_B \frac {u(x)-u(y)} {\vert x_\ast- y \vert^{n+2s}}\mathop{}\!d y \end{align*} which gives that \begin{align*} \vert I_1 \vert &\leqslantslant \big \vert (-\Delta)^s\bar u(x) \big \vert +C \| u\|_{L^\infty(B)} +2\| u\|_{L^\infty(B)} \int_B \frac 1 {\vert x_\ast- y \vert^{n+2s}}\mathop{}\!d y \\ &\leqslantslant \big \vert (-\Delta)^s\bar u(x) \big \vert +C\| u\|_{L^\infty(B)}. \end{align*} Furthermore, if \(\alpha <2(1-s)\) then by a Taylor series expansion \begin{align*} \big \vert (-\Delta)^s\bar u (x) \big \vert &\leqslantslant C \int_{\mathbb{R}^n} \frac{\vert 2\bar u(x)-\bar u(x+y) - \bar u(x-y) \vert }{\vert y \vert^{n+2s}} \mathop{}\!d y \\ &\leqslantslant C [\bar u]_{C^{2s+\alpha}(B)} \int_{B_1} \frac{\mathop{}\!d y }{\vert y \vert^{n-\alpha}} + C \| \bar u \|_{L^\infty(\mathbb{R}^n)} \int_{\mathbb{R}^n\setminus B_1} \frac{\mathop{}\!d y }{\vert y \vert^{n+2s}} \\ &\leqslantslant C \big ( \| \bar u\|_{C^{2s+\alpha}(B)} + \| \bar u \|_{L^\infty(\mathbb{R}^n)} \big ) \\ &=C \big ( \| u\|_{C^{2s+\alpha}(B)} + \| u \|_{L^\infty(B)} \big ) . \end{align*} If \(\alpha \geqslantslant 2(1-s)\) then this computation holds with \([\bar u]_{C^{2s+\alpha}(B)} \) replaced with \(\|u\|_{C^2(B)}\). 
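For the reader's convenience, we record the routine polar-coordinates computation behind the constant \(\frac 1{2s} c_{n,s} n \vert B_1 \vert r^{-2s}\) used above (recall that \(B=B_r(x)\) and that the surface area of \(\partial B_1\) equals \(n\vert B_1\vert\)):
\begin{align*}
\int_{\mathbb{R}^n\setminus B} \frac{\mathop{}\!d y }{\vert x-y\vert^{n+2s}}
= \int_{\mathbb{R}^n\setminus B_r} \frac{\mathop{}\!d z }{\vert z\vert^{n+2s}}
= n \vert B_1 \vert \int_r^{\infty} t^{n-1}\, t^{-n-2s} \mathop{}\!d t
= \frac{n \vert B_1 \vert}{2s}\, r^{-2s}.
\end{align*}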
\emph{For \(I_2\):} By~\eqref{LxZU6},
\begin{align*}
\vert I_2 \vert &\leqslant Cx_1\int_{\mathbb{R}^n_+\setminus B} \frac {y_1\big (\vert u(x) \vert + \vert u(y) \vert \big ) } {\vert x - y\vert^{n+2s+2}} \mathop{}\!d y \\
&\leqslant C \| u \|_{L^\infty(B)}\int_{\mathbb{R}^n_+\setminus B} \frac {y_1} {\vert x - y\vert^{n+2s+2}} \mathop{}\!d y + C \Anorm{u} \\
&\leqslant C \big (\| u \|_{L^\infty(B)}+ \Anorm{u} \big ) .
\end{align*}
Combining the estimates for \(I_1\) and \(I_2\) immediately gives the result. \end{proof}
\section{Interior Harnack inequality and proof of Theorem~\ref{DYcYH}}\label{q7hKF}
The purpose of this section is to prove \thref{DYcYH}. Its proof is split into two parts: the interior weak Harnack inequality for super-solutions (\thref{MT9uf}) and interior local boundedness for sub-solutions (\thref{guDQ7}).
\subsection{The interior weak Harnack inequality}
The interior weak Harnack inequality for super-solutions is given in \thref{MT9uf} below.
\begin{prop} \thlabel{MT9uf} Let \(M\in \mathbb{R} \), \(\rho \in(0,1)\), and \(c\in L^\infty(B_\rho(e_1))\). Suppose that \(u \in C^{2s+\alpha}(B_\rho(e_1))\cap \mathscr{A}_s(\mathbb{R}^n)\) for some \(\alpha>0\) with \(2s+\alpha\) not an integer, \(u\) is non-negative in \(\mathbb{R}^n_+\), and satisfies
\begin{align*}
(-\Delta)^su +cu &\geqslant -M \qquad \text{in } B_\rho(e_1).
\end{align*}
Then there exists \(C_\rho>0\) depending only on \(n\), \(s\), \(\| c \|_{L^\infty(B_\rho(e_1))}\), and \(\rho\) such that
\begin{align*}
\Anorm{u} &\leqslant C_\rho \left( \inf_{B_{\rho/2}(e_1)} u + M \right) .
\end{align*}
\end{prop}
The proof of \thref{MT9uf} is a simple barrier argument that takes inspiration from \cite{MR4023466}. We will begin by proving the following proposition, \thref{YLj1r}, which may be viewed as a rescaled version of \thref{MT9uf}.
We will require \thref{YLj1r} in the second part of the section when we prove the interior local boundedness of sub-solutions (\thref{guDQ7}).
\begin{prop} \thlabel{YLj1r} Let \(M\in \mathbb{R} \), \(\rho \in(0,1)\), and \(c\in L^\infty(B_\rho(e_1))\). Suppose that \(u \in C^{2s+\alpha}(B_\rho(e_1))\cap \mathscr{A}_s(\mathbb{R}^n)\) for some \(\alpha>0\) with \(2s+\alpha\) not an integer, \(u\) is non-negative in \(\mathbb{R}^n_+\), and satisfies
\begin{align*}
(-\Delta)^su +cu &\geqslant -M \qquad \text{in } B_\rho(e_1).
\end{align*}
Then there exists \(C>0\) depending only on \(n\), \(s\), and~\(\rho^{2s}\| c \|_{L^\infty(B_\rho(e_1))}\), such that
\begin{align*}
\frac 1 {\rho^n} \int_{B_{\rho/2}(e_1)} u(x) \mathop{}\!d x &\leqslant C \left( \inf_{B_{\rho/2}(e_1)} u + M \rho^{2s}\right) .
\end{align*}
Moreover, the constant \(C\) is of the form
\begin{align}
C = C' \left(1+ \rho^{2s} \| c \|_{L^\infty(B_\rho(e_1))}\right) \label{CyJJQHrF}
\end{align}
with \(C'>0\) depending only on \(n\) and \(s\). \end{prop}
Before we give the proof of \thref{YLj1r}, we introduce some notation. Given~$ \rho \in(0,1)$, we define
$$ T_\rho:= \left\{x_1=-\frac1{\rho}\right\}\qquad{\mbox{and}}\qquad H_\rho:=\left\{x_1>-\frac1{\rho}\right\}.$$
We also let~$Q_\rho$ be the reflection across the hyperplane~$T_\rho$, namely
$$Q_\rho(x):=x-2(x_1+1/\rho)e_1\qquad{\mbox{for all }} x\in \mathbb{R}^n. $$
With this, we establish the following lemma.
\begin{lem} \thlabel{rvZJdN88} Let \(\rho \in(0,1)\) and \(\zeta\) be a smooth cut-off function such that
\begin{align*}
\zeta \equiv 1 \text{ in } B_{1/2}, \quad \zeta \equiv 0 \text{ in } \mathbb{R}^n \setminus B_{3/4}, \quad{\mbox{ and }} \quad 0\leqslant \zeta \leqslant 1.
\end{align*}
Define \(\varphi^{(1)} \in C^\infty_0(\mathbb{R}^n)\) by
\begin{align*}
\varphi^{(1)}(x) := \zeta (x) - \zeta \big ( Q_\rho(x) \big ) \qquad \text{for all } x\in \mathbb{R}^n.
\end{align*}
Then \(\varphi^{(1)}\) is antisymmetric with respect to \(T_\rho:= \{x_1=-1/\rho\}\) and there exists \(C>0\) depending only on \(n\) and \(s\) (but not on \(\rho\)) such that \(\| (-\Delta)^s \varphi^{(1)} \|_{L^\infty(B_{3/4})} \leqslant C\). \end{lem}
\begin{proof}
Since \(Q_\rho(x)\) is the reflection of \(x\in \mathbb{R}^n\) across \(T_\rho\), we immediately obtain that \(\varphi^{(1)} \) is antisymmetric with respect to the plane \(T_\rho\). As \(0\leqslant \zeta \circ Q_\rho \leqslant 1\) in \(\mathbb{R}^n\) and \(\zeta \circ Q_\rho=0\) in \(B_{3/4}\), from~\eqref{ZdAlT} we have that
\begin{align*}
\vert (-\Delta)^s (\zeta \circ Q_\rho)(x) \vert &=C \int_{B_{3/4}(-2e_1/\rho)} \frac{(\zeta \circ Q_\rho)(y)}{\vert x - y\vert^{n+2s}} \mathop{}\!d y \leqslant C \qquad \text{for all }x \in B_{3/4}
\end{align*}
using also that \(\vert x- y\vert \geqslant 2(1/\rho -3/4) \geqslant 1/2\). Moreover,
\begin{align*}
\| (-\Delta)^s \zeta \|_{L^\infty(B_{3/4})} &\leqslant C ( \| D^2\zeta \|_{L^\infty(B_{3/4})} + \| \zeta \|_{L^\infty(\mathbb{R}^n)} ) \leqslant C,
\end{align*}
for example, see the computation on p.~9 of \cite{MR3469920}. Thus,
$$\| (-\Delta)^s \varphi^{(1)} \|_{L^\infty(B_{3/4})} \leqslant \| (-\Delta)^s \zeta\|_{L^\infty(B_{3/4})} + \| (-\Delta)^s (\zeta \circ Q_\rho)\|_{L^\infty(B_{3/4})} \leqslant C,$$
which completes the proof. \end{proof}
Now we give the proof of \thref{YLj1r}.
\begin{proof}[Proof of \thref{YLj1r}]
Let \(\tilde u(x):=u (\rho x+e_1)\) and \(\tilde c(x) := \rho^{2s}c(\rho x+e_1)\). Observe that \((-\Delta)^s\tilde u+\tilde c \tilde u \geqslant - M\rho^{2s}\) in \(B_1\) and that \(\tilde u \) is antisymmetric with respect to \(T_\rho\). Let \(\varphi^{(1)}\) be defined as in \thref{rvZJdN88} and suppose that\footnote{Note that such a \(\tau\) exists.
Indeed, let \(U \subset \mathbb{R}^n\), \(u:U \to \mathbb{R}\) be a nonnegative function and let \(\varphi : U \to \mathbb{R}\) be such that there exists \(x_0\in U\) for which \(\varphi(x_0)>0\). Then define
\begin{align*}
I := \{ \tau \geqslant 0 \text{ s.t. } u(x) \geqslant \tau \varphi(x) \text{ for all } x\in U \} .
\end{align*}
It follows that \(I\) is non-empty since \(0\in I\), so \(\sup I\) exists, but may be \(+\infty\). Moreover, \(\sup I \leqslant u(x_0)/\varphi(x_0) < +\infty\), so \(\sup I\) is also finite.} \(\tau\geqslant 0\) is the largest possible value such that~\(\tilde u \geqslant \tau \varphi^{(1)} \) in the half space~\(H_\rho\). Since \(\varphi^{(1)} =1\) in \(B_{1/2}\), we immediately obtain that
\begin{align}
\tau \leqslant \inf_{B_{1/2}} \tilde u = \inf_{B_{\rho/2}(e_1)} u. \label{mzMEg}
\end{align}
Moreover, by continuity, there exists \(a \in \overline{B_{3/4}}\) such that \(\tilde u(a)=\tau\varphi^{(1)} (a)\). On one hand, using \thref{rvZJdN88}, we have that
\begin{align}
(-\Delta)^s(\tilde u-\tau \varphi^{(1)})(a)+\tilde c (a) (\tilde u-\tau \varphi^{(1)})(a) &\geqslant -M\rho^{2s} - \tau \big( C+ \|\tilde c\|_{L^\infty(B_1)}\big) \nonumber \\
&\geqslant -M\rho^{2s} - C\tau \big( 1+ \rho^{2s}\|c\|_{L^\infty(B_\rho(e_1))}\big) .\label{aDLwG3pX}
\end{align}
On the other hand, since \(\tilde u-\tau\varphi^{(1)}\) is antisymmetric with respect to \(T_\rho\), \(\tilde u - \tau \varphi^{(1)} \geqslant 0\) in \(H_\rho\), and~\((\tilde u-\tau \varphi^{(1)})(a)=0\), it follows that
\begin{align}
&(-\Delta)^s (\tilde u-\tau \varphi^{(1)})(a)+\tilde c(a)(\tilde u-\tau \varphi^{(1)})(a) \nonumber \\
&\leqslant - C \int_{B_{1/2}} \left( \frac 1 {\vert a - y \vert^{n+2s}} - \frac 1 {\vert Q_\rho(a) - y \vert^{n+2s}} \right) \big( \tilde u(y)-\tau \varphi^{(1)}(y)\big) \mathop{}\!d y. \label{wPcy8znV}
\end{align}
For all \(y \in B_{1/2}\), we have that \(\vert a - y \vert \leqslant C\) and \(\vert Q_\rho(a) - y \vert \geqslant C\) (the assumption \(\rho <1\) allows us to choose this \(C\) independent of \(\rho\)), so
\begin{align}
(-\Delta)^s (\tilde u-\tau \varphi^{(1)})(a)+\tilde c(a)(\tilde u-\tau \varphi^{(1)})(a) &\leqslant - C \int_{B_{1/2}} \big(\tilde u(y)-\tau \varphi^{(1)}(y)\big) \mathop{}\!d y \nonumber \\
&\leqslant -C \left( \frac 1 {\rho^n} \int_{B_{\rho/2}(e_1)} u(y) \mathop{}\!d y - \tau \right). \label{ETn5BCO5}
\end{align}
Rearranging \eqref{aDLwG3pX} and \eqref{ETn5BCO5} then using \eqref{mzMEg}, we obtain
\begin{align*}
\frac 1 {\rho^n} \int_{B_{\rho/2}(e_1)} u(y) \mathop{}\!d y &\leqslant C \Big( \tau \big( 1+ \rho^{2s}\|c\|_{L^\infty(B_\rho(e_1))}\big) + M\rho^{2s} \Big) \\
&\leqslant C \big( 1+ \rho^{2s}\|c\|_{L^\infty(B_\rho(e_1))}\big) \left( \inf_{B_{\rho/2}(e_1)} u + M \rho^{2s}\right)
\end{align*}
as required. \end{proof}
A simple adaptation of the proof of \thref{YLj1r} leads to the proof of \thref{MT9uf} which we now give.
\begin{proof}[Proof of \thref{MT9uf}]
Follow the proof of \thref{YLj1r} but instead of \eqref{wPcy8znV}, we write
\begin{align*}
&(-\Delta)^s (\tilde u-\tau \varphi^{(1)})(a)+\tilde c(a)(\tilde u-\tau \varphi^{(1)})(a)\\
&\qquad= - C \int_{H_\rho} \left( \frac 1 {\vert a - y \vert^{n+2s}} - \frac 1 {\vert Q_\rho(a) - y \vert^{n+2s}} \right) \big( \tilde u(y)-\tau \varphi^{(1)}(y)\big) \mathop{}\!d y.
\end{align*}
Then, for all \(x,y\in H_\rho\),
\begin{align}
\frac 1 {\vert x - y \vert^{n+2s}} - \frac 1 {\vert Q_\rho(x) - y \vert^{n+2s}} &= \frac{n+2s} 2 \int_{\vert x-y\vert^2}^{\vert Q_\rho(x)-y\vert^2} t^{- \frac{n+2s+2}2} \mathop{}\!d t \label{gvKkSL1N}\\
&\geqslant C \Big (\vert Q_\rho(x)-y\vert^2-\vert x-y\vert^2 \Big ) \vert Q_\rho(x) - y \vert^{- (n+2s+2)} \nonumber\\
&= C \frac{(x_1+1/\rho)(y_1+1/\rho)}{\vert Q_\rho(x) - y \vert^{n+2s+2}}, \nonumber
\end{align}
so using that \(\tilde u - \tau \varphi^{(1)} \geqslant 0\) in \(H_\rho\), we see that
\begin{align*}
(-\Delta)^s (\tilde u-\tau \varphi^{(1)})(a)+\tilde c(a)(\tilde u-\tau \varphi^{(1)})(a) &\leqslant - C \int_{H_\rho} \frac{(y_1+1/\rho)(\tilde u(y)-\tau \varphi^{(1)}(y)) }{\vert Q_\rho(a) - y \vert^{n+2s+2}}\mathop{}\!d y \\
&\leqslant - C_\rho \left( \int_{H_\rho} \frac{(y_1+1/\rho)\tilde u(y) }{\vert Q_\rho(a) - y \vert^{n+2s+2}}\mathop{}\!d y + \tau \right) .
\end{align*}
Making the change of variables \(z=\rho y +e_1\), we have that
\begin{align*}
\int_{H_\rho} \frac{(y_1+1/\rho)\tilde u(y) }{\vert Q_\rho(a) - y \vert^{n+2s+2}}\mathop{}\!d y &= \rho^{-n-1} \int_{\mathbb{R}^n_+} \frac{z_1 u(z) }{\vert Q_\rho(a) - z/\rho +e_1/\rho \vert^{n+2s+2}}\mathop{}\!d z.
\end{align*}
Thus, since \( \vert Q_\rho(a) - z/\rho +e_1/\rho \vert^{n+2s+2} \leqslant C_\rho (1+ \vert z \vert^{n+2s+2} ) \), we conclude that
\begin{align*}
\int_{H_\rho} \frac{(y_1+1/\rho)\tilde u(y) }{\vert Q_\rho(a) - y \vert^{n+2s+2}}\mathop{}\!d y &\geqslant C_\rho \int_{\mathbb{R}^n_+} \frac{z_1 u(z) }{1+ \vert z \vert^{n+2s+2}} \mathop{}\!d z=C_\rho \Anorm{u}.
\end{align*}
As a consequence,
\begin{align}
(-\Delta)^s (\tilde u-\tau \varphi^{(1)})(a)+\tilde c(a)(\tilde u-\tau \varphi^{(1)})(a) &\leqslant - C_\rho \Anorm{u}+C_\rho \tau .
\label{Pzf2a}
\end{align}
Rearranging~\eqref{aDLwG3pX} and~\eqref{Pzf2a} then using~\eqref{mzMEg} gives
\begin{align*}
\Anorm{u} &\leqslant C_\rho ( \tau + M ) \leqslant C_\rho \left( \inf_{B_{\rho/2}(e_1)} u + M \right),
\end{align*}
as desired. \end{proof}
\subsection{Interior local boundedness}
The second part of the proof of \thref{DYcYH} is the interior local boundedness of sub-solutions given in \thref{guDQ7} below.
\begin{prop} \thlabel{guDQ7} Let \(M \geqslant 0\), \(\rho\in(0,1/2)\), and \(c\in L^\infty(B_\rho(e_1))\). Suppose that \(u \in C^{2s+\alpha}(B_\rho(e_1))\cap \mathscr A_s(\mathbb{R}^n)\) for some \(\alpha>0\) with \(2s+\alpha\) not an integer, and \(u\) satisfies
\begin{equation}\label{sow85bv984dert57nb5}
(-\Delta)^su +cu \leqslant M \qquad \text{in } B_\rho(e_1).
\end{equation}
Then there exists \(C_\rho>0\) depending only on \(n\), \(s\), \(\| c \|_{L^\infty(B_\rho(e_1))}\), and \(\rho\) such that
\begin{align*}
\sup_{B_{\rho/2}(e_1)} u &\leqslant C_\rho ( \Anorm{u} +M ) .
\end{align*}
\end{prop}
The proof of \thref{guDQ7} uses similar ideas to \cite[Theorem~11.1]{MR2494809} and \cite[Theorem~5.1]{MR2831115}. Before we prove \thref{guDQ7}, we need the following lemma.
\begin{lem} \thlabel{ltKO2} Let \(R\in(0,1)\) and \(a\in \overline{ B_2^+}\). Then there exists \(C>0\) depending only on \(n\) and \(s\) such that, for all \(x\in B_{R/2}^+(a)\) and \(y \in \mathbb{R}^n_+ \setminus B_R^+(a)\),
\begin{align*}
\frac 1 {\vert x - y \vert^{n+2s}} - \frac 1 {\vert x_\ast - y \vert^{n+2s}} &\leqslant C R^{-n-2s-2} \frac{x_1y_1}{1+\vert y \vert^{n+2s+2}}.
\end{align*}
\end{lem}
\begin{proof}
Let \(\tilde x := (x-a)/R\in B_{1/2}\) and \(\tilde y := (y-a)/ R\in \mathbb{R}^n\setminus B_1\). Clearly, we have that~\(\vert \tilde y - \tilde x \vert \geqslant 1/2\).
Moreover, since \(\vert \tilde x \vert <1/2 \leqslant \vert \tilde y \vert /2\), we have that \(\vert \tilde y - \tilde x \vert \geqslant \vert \tilde y \vert -\vert \tilde x \vert \geqslant (1/2) \vert \tilde y \vert\). Hence,
\begin{align*}
\vert \tilde y - \tilde x \vert^{n+2s+2} \geqslant \frac 1 {2^{n+2s+2}} \max \big \{ 1 , \vert \tilde y \vert^{n+2s+2} \big \} \geqslant C \big ( 1+ \vert \tilde y \vert^{n+2s+2} \big )
\end{align*}
for some \(C\) depending only on \(n\) and \(s\). It follows that
\begin{align}
\vert x -y \vert^{n+2s+2} = R^{n+2s+2} \vert \tilde x - \tilde y \vert^{n+2s+2} \geqslant C R^{n+2s+2} \big ( 1 + R^{-n-2s-2} \vert y-a \vert^{n+2s+2} \big ). \label{TO9DRBuX}
\end{align}
Finally, we claim that there exists \(C\) independent of \(R\) such that
\begin{align}
\vert y - a\vert \geqslant CR \vert y \vert \qquad \text{for all } y\in \mathbb{R}^n \setminus B_R(a) . \label{tdNyAxBN}
\end{align}
Indeed, if \(y \in \mathbb{R}^n \setminus B_4\) then
\begin{align*}
\vert y - a\vert \geqslant \vert y \vert -2 \geqslant \frac12 \vert y \vert>\frac R 2 \vert y \vert,
\end{align*}
and if \(y \in (\mathbb{R}^n \setminus B_R(a) ) \cap B_4\) then
\begin{align*}
\vert y -a\vert \geqslant R > \frac R 4 \vert y \vert
\end{align*}
which proves \eqref{tdNyAxBN}. Thus, \eqref{TO9DRBuX} and \eqref{tdNyAxBN} give that, for all \( x \in B_{R/2}(a)\) and \( y \in \mathbb{R}^n_+ \setminus B_R(a)\),
\begin{align*}
\vert x -y \vert^{n+2s+2} &\geqslant C R^{n+2s+2} \big ( 1 + \vert y\vert^{n+2s+2} \big ) .
\end{align*}
Then the result follows directly from~\eqref{LxZU6}.
\end{proof}
With this preliminary work, we now focus on the proof of \thref{guDQ7}.
\begin{proof}[Proof of \thref{guDQ7}]
We first observe that, dividing through by~\( \Anorm{u} +M\), we may assume that \((-\Delta)^su +cu\leqslant 1\) in \(B_\rho(e_1)\) and~\(\Anorm{u} \leqslant 1\).
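To make this standard normalization explicit (a brief sketch; the notation \(u_\sharp\) is introduced here only for illustration), note that if \(\Anorm{u}+M>0\) then \(u_\sharp := u/(\Anorm{u}+M)\) satisfies
\begin{align*}
(-\Delta)^s u_\sharp + c\, u_\sharp \leqslant \frac{M}{\Anorm{u}+M} \leqslant 1 \quad \text{in } B_\rho(e_1) \qquad \text{and} \qquad \Anorm{u_\sharp} \leqslant 1,
\end{align*}
by the linearity of \((-\Delta)^s\) and the homogeneity of \(\Anorm{\cdot}\), and the bound \(\sup_{B_{\rho/2}(e_1)} u_\sharp \leqslant C_\rho\) scales back to the claimed estimate; if instead \(\Anorm{u}+M=0\) then \(u\equiv 0\) in \(\mathbb{R}^n_+\) and there is nothing to prove.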
We also point out that if~$u\le0$ in~$B_\rho(e_1)$, then the claim in \thref{guDQ7} is obviously true. Therefore, we can suppose that
\begin{equation}\label{upos44567890}
\{u>0\}\cap B_\rho(e_1)\ne \varnothing.\end{equation}
Thus, we let~\(\tau\geqslant 0 \) be the smallest possible value such that
\begin{align*}
u(x) &\leqslant \tau (\rho-\vert x-e_1 \vert )^{-n-2} \qquad \text{for all } x\in B_\rho(e_1).
\end{align*}
Such a \(\tau\) exists in light of~\eqref{upos44567890} by following a similar argument to the one in the footnote at the bottom of p. 11. To complete the proof, we will show that
\begin{align}
\tau &\leqslant C_\rho \label{svLyO}
\end{align}
with \(C_\rho\) depending only on \(n\), \(s\), \(\| c\|_{L^\infty(B_\rho(e_1))}\), and \(\rho\) (but independent of \(u\)). Since \(u\) is uniformly continuous in~$B_\rho(e_1)$, there exists \(a\in B_\rho(e_1)\) such that \(u(a) = \tau (\rho-\vert a-e_1\vert)^{-n-2}\). Notice that~$u(a)>0$ and~\(\tau >0\). Let also~\(d:=\rho-\vert a-e_1\vert\) so that
\begin{align}
u(a) = \tau d^{-n-2} ,\label{CEnkR}
\end{align}
and let
\begin{align*}
U:= \bigg \{ y \in B_\rho (e_1) \text{ s.t. } u(y) > \frac{u(a)} 2 \bigg \} .
\end{align*}
Since \(\Anorm{u}\leqslant 1\), if \(r\in(0,d)\) then
\begin{align*}
C_\rho &\geqslant \int_{B_\rho (e_1)} |u(x)| \mathop{}\!d x \ge \int_{ U\cap B_r (a)} u(x) \mathop{}\!d x \geqslant \frac{u(a)}2 \cdot \vert U \cap B_r (a) \vert .
\end{align*}
Thus, from~\eqref{CEnkR}, it follows that
\begin{align}
\vert U \cap B_r (a) \vert \leqslant \frac{C_\rho d^{n+2}}\tau \qquad \text{ for all } r\in(0,d). \label{W5Ar0}
\end{align}
Next, we make the following claim.
\begin{claim} There exists~$\theta_0\in(0,1)$ depending only on \(n\), \(s\), \(\| c\|_{L^\infty(B_\rho(e_1))}\), and \(\rho\) such that if~\(\theta\in(0,\theta_0]\) there exists~\(C>0\) depending only on \(n\), \(s\), \(\| c\|_{L^\infty(B_\rho(e_1))}\), \(\rho\), and~$\theta$ such that
\begin{align*}
\big \vert B_{\theta d /8} (a) \setminus U \big \vert \leqslant \frac 1 4 \vert B_{\theta d /8} \vert +C \frac{d^n }{\tau} .
\end{align*}
In particular, neither \(\theta\) nor \(C\) depend on \(\tau\), \(u\), or \(a\). \end{claim}
We will withhold the proof of the claim until the end. Assuming the claim is true, we complete the proof of \thref{guDQ7} as follows. By~\eqref{W5Ar0} (used here with~$r:={\theta_0 d}/8$) and the claim, we have that
\begin{align*}
\frac{C_\rho d^{n+2}}\tau \geqslant \big \vert B_{\theta_0 d /8} \big \vert -\big \vert B_{\theta_0 d /8} (a) \setminus U \big \vert \geqslant \frac 3 4 \vert B_{\theta_0 d/8} \vert -C \frac{d^n }{\tau} .
\end{align*}
Rearranging gives that \( \tau \leqslant C( d^2 +1 ) \leqslant C\), which proves \eqref{svLyO}. Accordingly, to complete the proof of \thref{guDQ7}, it remains to establish the Claim. For this, let \(\theta\in(0,1)\) be a small constant to be chosen later. We will prove the claim by applying \thref{YLj1r} to an appropriate auxiliary function. For \(x\in B_{\theta d/2} (a)\), we have that \(\vert x -e_1 \vert \leqslant \vert a -e_1 \vert + \theta d/2=\rho-(1-\theta/2)d \), so, using~\eqref{CEnkR},
\begin{align}
u(x) \leqslant \tau \bigg (1-\frac \theta2\bigg )^{-n-2} d^{-n-2} = \bigg (1-\frac\theta 2\bigg )^{-n-2} u(a) \qquad \text{for all } x\in B_{\theta d/2}(a).
\label{sS0kO}
\end{align}
Let \(\zeta \) be a smooth, antisymmetric function such that~$\zeta \equiv 1$ in~$\{x_1 > 1/2 \}$ and~$0\leqslant \zeta \leqslant 1$ in~$\mathbb{R}^n_+$, and consider the antisymmetric function
\begin{align*}
v(x) &:= \left(1-\frac\theta 2\right)^{-n-2} u(a) \zeta(x) -u (x) \qquad \text{for all } x\in \mathbb{R}^n.
\end{align*}
Since \(\zeta \equiv 1\) in \(\{x_1 > 1/2 \} \supset B_{\theta d/2}(a)\) and \(0\leqslant \zeta \leqslant 1\) in \(\mathbb{R}^n_+\), it follows easily from~\eqref{OopaQ0tu} that \((-\Delta)^s \zeta \geqslant 0\) in \(B_{\theta d/2}(a)\). Hence, in \(B_{\theta d/2}(a)\),
\begin{align*}
(-\Delta)^s v +cv &\geqslant -(-\Delta)^s u-cu+ c\bigg (1-\frac\theta 2\bigg )^{-n-2} u(a) \\
&\geqslant -1 -C\| c^-\|_{L^\infty(B_\rho(e_1))}\bigg (1-\frac\theta 2\bigg )^{-n-2} u(a) .
\end{align*}
Taking \(\theta\) sufficiently small, we obtain
\begin{align}
(-\Delta)^s v +cv &\geqslant -C (1 + u(a) ) \qquad \text{in } B_{\theta d/2}(a). \label{7fUyX}
\end{align}
The function \(v\) is almost the auxiliary function to which we would like to apply \thref{YLj1r}; however, \thref{YLj1r} requires \(v \geqslant 0\) in \(\mathbb{R}^n_+\) but we only have \(v \geqslant 0\) in \(B_{\theta d/2}(a)\) due to \eqref{sS0kO}. To resolve this issue, let us instead consider the function \(w\) such that \(w (x) = v^+(x)\) for all \(x\in \mathbb{R}^n_+\) and~\(w(x) = -w(x_\ast)\) for all \(x\in \overline{\mathbb{R}^n_-}\). We point out that~$w$ coincides with~$v$ in~$B_{\theta d/2}(a)$, thanks to~\eqref{sS0kO}, and therefore it is as regular as~$v$ in~$B_{\theta d/2}(a)$, which allows us to write the fractional Laplacian of~$w$ in~$B_{\theta d/2}(a)$ in a pointwise sense. Also, we observe that we have~\(w\geqslant0\) in \(\mathbb{R}^n_+\) but we no longer have a lower bound for \((-\Delta)^sw+cw\).
To obtain this, observe that for all \(x\in \mathbb{R}^n_+\),
\begin{align*}
(w-v)(x) &= \begin{cases} 0, &\text{for all } x\in \{v>0\} \cap \mathbb{R}^n_+ \\ u(x)- (1-\theta /2 )^{-n-2} u(a) \zeta(x), &\text{for all } x\in \{v \leqslant 0\}\cap \mathbb{R}^n_+. \end{cases}
\end{align*}
In particular, \(w-v\leqslant |u|\) in \(\mathbb{R}^n_+\). It follows that for all \(x\in B_{\theta d/2}(a)\),
\begin{align*}
(-\Delta)^s (w-v)(x) &\geqslant -C \int_{\mathbb{R}^n_+ \setminus B_{\theta d/2}(a) } \left( \frac 1 {\vert x-y\vert^{n+2s}} - \frac 1 {\vert x_\ast - y \vert^{n+2s}} \right)| u(y)|\mathop{}\!d y.
\end{align*}
Moreover, by \thref{ltKO2}, for all~$x\in B_{\theta d/4}(a)$,
\begin{align}
(-\Delta)^s (w-v)(x) &\geqslant -C (\theta d)^{-n-2s-2} \Anorm{u} \geqslant -C (\theta d)^{-n-2s-2} . \label{Fy8oddTC}
\end{align}
Thus, by~\eqref{7fUyX} and~\eqref{Fy8oddTC}, for all \(x\in B_{\theta d/4}(a)\), we have that
\begin{align}
(-\Delta)^sw(x)+c(x)w(x) &= (-\Delta )^s v (x)+c(x) v(x) + (-\Delta)^s (w-v)(x) \nonumber \\
&\geqslant -C \big (1 + u(a) +(\theta d)^{-n-2s-2} \big ) \nonumber \\
&\geqslant -C \big ( (\theta d)^{-n-2s-2}+u(a) \big ) \label{c1kcbe0C}
\end{align}
using that \(\theta d<1\). Next let us consider the rescaled and translated functions \(\tilde w(x) := w(a_1 x+(0,a'))\) and \(\tilde c (x) := a_1^{2s} c(a_1 x+(0,a'))\) (recall that \(a' = (a_2,\dots, a_n) \in \mathbb{R}^{n-1}\)). By~\eqref{c1kcbe0C} we have that
\begin{align*}
(-\Delta)^s \tilde w+\tilde c\, \tilde w \geqslant -Ca_1^{2s} \big ((\theta d)^{-n-2s-2} + u(a) \big ) \qquad \text{in } B_{\theta d /(4a_1)}(e_1 ).
\end{align*}
On one hand, by \thref{YLj1r}, we obtain
\begin{align*}
\left(\frac{\theta d } 8 \right)^{-n} \int_{B_{\theta d /8}(a )} w( x) \mathop{}\!d x &= \left(\frac{\theta d }{8a_1} \right)^{-n} \int_{B_{\theta d/(8a_1)}(e_1 )} \tilde w ( x) \mathop{}\!d x \\
&\leqslant C \Big( \tilde w(e_1) + (\theta d)^{-n-2} +u(a) (\theta d )^{2s} \Big)\\
&= C \left( \left(1-\frac\theta 2\right)^{-n-2} -1 \right)u(a) +C (\theta d)^{-n-2} +u(a) (\theta d )^{2s} .
\end{align*}
We note explicitly that, by~\eqref{CyJJQHrF}, the constant in the above line is given by
\begin{align*}
C= C'\big ( 1 +(\theta d)^{2s} \| c\|_{L^\infty(B_{\rho}(e_1))} \big ),
\end{align*}
so using that \(\theta d<1\) it may be chosen to depend only on \(n\), \(s\), and \( \| c\|_{L^\infty(B_{\rho}(e_1))}\). On the other hand,
\begin{align*}
B_{\theta d /8} (a) \setminus U &\subseteq\left\{ w \geqslant \left( \left( 1- \frac \theta 2\right)^{-n-2} -\frac12 \right) u(a) \right\} \cap B_{\theta d /8} (a) ,
\end{align*}
so we have that
\begin{align*}
(\theta d )^{-n} \int_{B_{\theta d/8}(a)} w(x) \mathop{}\!d x &\geqslant (\theta d)^{-n} \bigg ( \bigg ( 1- \frac \theta 2\bigg )^{-n-2} -\frac12 \bigg ) u(a) \cdot \big \vert B_{\theta d /8} (a) \setminus U\big \vert \\
&\geqslant C (\theta d)^{-n} u(a) \cdot \big \vert B_{\theta d /8} (a) \setminus U\big \vert
\end{align*}
for \(\theta\) sufficiently small. Thus,
\begin{align*}
\big \vert B_{\theta d /8} (a) \setminus U \big \vert &\leqslant C (\theta d)^n \bigg ( \bigg (1-\frac\theta 2\bigg )^{-n-2} -1 \bigg ) +C(u(a))^{-1} (\theta d)^{-2} + (\theta d )^{n+2s} \\
&\leqslant C (\theta d)^n \bigg ( \bigg (1-\frac\theta 2\bigg )^{-n-2} -1 +\theta^{2s}\bigg ) +C \frac{\theta^{-2} d^n }{\tau}
\end{align*}
using~\eqref{CEnkR} and that \(d^{n+2s}<d^n\) since \(d<1\).
At this point we may choose \(\theta\) sufficiently small such that
\begin{align*}
(\theta d)^n \bigg ( \bigg (1-\frac\theta 2\bigg )^{-n-2} -1 +\theta^{2s}\bigg ) \leqslant \frac 1 4 \vert B_{\theta d/8} \vert .
\end{align*}
This proves the claim, and thus completes the proof of Proposition~\ref{guDQ7}. \end{proof}
\section{Boundary Harnack inequality and proof of \thref{C35ZH}} \label{TZei6Wd4}
In this section, we give the proof of \thref{C35ZH}. Analogous to the proof of \thref{DYcYH}, the proof of \thref{C35ZH} is divided into the boundary Harnack inequality for super-solutions (\thref{SwDzJu9i}) and the boundary local boundedness for sub-solutions (\thref{EP5Elxbz}). Together these two results immediately give \thref{C35ZH}.
\subsection{The boundary weak Harnack inequality}
Our next result is the antisymmetric boundary weak Harnack inequality.
\begin{prop} \thlabel{SwDzJu9i} Let \(M\in \mathbb{R}\), \(\rho>0\), and \(c\in L^\infty(B_\rho^+)\). Suppose that \(u\in C^{2s+\alpha}(B_\rho)\cap \mathscr{A}_s(\mathbb{R}^n)\) for some \(\alpha > 0\) with \(2s+\alpha\) not an integer, \(u\) is non-negative in \(\mathbb{R}^n_+\) and satisfies
\begin{align*}
(-\Delta)^s u +cu \geqslant -Mx_1 \qquad \text{in } B_\rho^+.
\end{align*}
Then there exists \(C_\rho>0\) depending only on \(n\), \(s\), \(\| c \|_{L^\infty(B_\rho^+)}\), and \(\rho\) such that
\begin{align*}
\Anorm{u} &\leqslant C_\rho \left( \inf_{B_{\rho/2}^+} \frac{u(x)}{x_1} +M \right).
\end{align*}
\end{prop}
As with the interior counterpart of \thref{SwDzJu9i}, that is \thref{MT9uf}, we will prove the following rescaled version of \thref{SwDzJu9i}, namely \thref{g9foAd2c}. This version is essential to the proof of the boundary local boundedness for sub-solutions. Once \thref{g9foAd2c} has been proven, \thref{SwDzJu9i} follows easily with some minor adjustments.
\begin{prop} \thlabel{g9foAd2c} Let \(M\in \mathbb{R}\), \(\rho >0\), and \(c\in L^\infty(B_\rho^+)\).
Suppose that \(u\in C^{2s+\alpha}(B_\rho)\cap \mathscr{A}_s(\mathbb{R}^n)\) for some \(\alpha >0\) with~\(2s+\alpha\) not an integer, \(u\) is non-negative in \(\mathbb{R}^n_+\) and satisfies
\begin{align*}
(-\Delta)^s u +cu \geqslant -Mx_1\qquad \text{in } B_\rho^+.
\end{align*}
Then there exists \(C>0\) depending only on \(n\), \(s\), and \(\rho^{2s}\| c \|_{L^\infty(B_\rho^+)}\) such that
\begin{align*}
\frac 1 {\rho^{n+2}} \int_{B_{\rho/2}^+} y_1 u(y) \mathop{}\!d y &\leqslant C \left( \inf_{B_{\rho/2}^+} \frac{u(x)}{x_1} + M \rho^{2s}\right).
\end{align*}
Moreover, the constant \(C\) is of the form
\begin{align*}
C=C' \big(1+\rho^{2s} \| c \|_{L^\infty(B_\rho^+)} \big)
\end{align*}
with \(C'\) depending only on \(n\) and \(s\). \end{prop}
Before we prove \thref{g9foAd2c}, we require some lemmata.
\begin{lem} \thlabel{tZVUcYJl} Let \(M\geqslant0\), \(k \geqslant 0\), and suppose that \(u\in C^{2s+\alpha}(B_1)\cap \mathscr{A}_s(\mathbb{R}^n)\) for some~\(\alpha>0\) with~\(2s+\alpha\) not an integer.
\begin{enumerate}[(i)]
\item If \(u\) satisfies
\begin{align*}
(-\Delta)^s u +k u \geqslant -Mx_1\qquad \text{in } B_1^+
\end{align*}
then for all \(\varepsilon>0\) sufficiently small there exists \(u_\varepsilon \in C^\infty_0(\mathbb{R}^n)\) antisymmetric and such that
\begin{equation}
(-\Delta)^s u_\varepsilon +k u_\varepsilon \geqslant -(M+\varepsilon) x_1\qquad \text{in } B_{7/8}^+. \label{EuoJ3En6}
\end{equation}
\item If \(u\) satisfies
\begin{align*}
(-\Delta)^s u +k u \leqslant Mx_1\qquad \text{in } B_1^+
\end{align*}
then for all \(\varepsilon>0\) sufficiently small there exists \(u_\varepsilon \in C^\infty_0(\mathbb{R}^n)\) antisymmetric and such that
\begin{align*}
(-\Delta)^s u_\varepsilon +k u_\varepsilon \leqslant (M+\varepsilon) x_1\qquad \text{in } B_{7/8}^+.
\end{align*}
\end{enumerate}
In both cases the sequence \(\{ u_\varepsilon\} \) converges to \(u\) uniformly in \(B_{7/8}\).
Additionally, if \(u \) is non-negative in \(\mathbb{R}^n_+\) then \(u_\varepsilon\) is also non-negative in \(\mathbb{R}^n_+\). \end{lem}
For the usual fractional Laplacian, \thref{tZVUcYJl} follows immediately by taking a mollification of \(u\) and, in principle, this is also the idea here. However, there are a couple of technicalities that need to be addressed. The first is that here the fractional Laplacian is defined according to \thref{mkG4iRYH} and it remains to be verified that this fractional Laplacian commutes with the convolution operation as the usual one does. As a matter of fact, \thref{mkG4iRYH} does not lend itself well to the Fourier transform, which makes it difficult to prove such a property. We overcome this issue by first multiplying \(u\) by an appropriate cut-off function, which allows us to reduce to the case of \((-\Delta)^s\) as given by the usual definition. The second issue is that, directly using the properties of mollifiers, we can only expect to control~\(u_\varepsilon\) in some~\(U \Subset B_1^+\) and not up to~\(\{x_1=0\}\). We are able to remove this restriction thanks to the antisymmetry of \(u\).
\begin{proof}[Proof of Lemma~\ref{tZVUcYJl}]
Fix \(\varepsilon>0\). Let \(R>1\) and let~\(\zeta\) be a smooth radial cut-off function such that
$$ \zeta \equiv 1 \text{ in } B_R, \quad \zeta \equiv 0 \text{ in } \mathbb{R}^n \setminus B_{2R}, \quad{\mbox{and}}\quad 0\leqslant \zeta \leqslant 1 . $$
Let also \(\bar u := u \zeta\). Now let us define a function \(f:B_1\to\mathbb{R}\) as follows: let \(f(x)=(-\Delta)^s u(x) + k u(x)\) for all \(x\in B_1^+\), \(f(x) = 0\) for all \(x\in B_1\cap \{x_1=0\}\), and \(f(x)=-f(x_\ast)\) otherwise. We also define \(\bar f:B_1\to \mathbb{R}\) analogously with \(u\) replaced with \(\bar u\). By definition, both \(f\) and \(\bar f\) are antisymmetric\footnote{Note that the definition of antisymmetric requires the domain of \(f\) and \(\bar f\) to be \(\mathbb{R}^n\).
For simplicity, we will still refer to \(f\) and \(\bar f\) as antisymmetric in this context since this technicality does not affect the proof.}, but note carefully that, \emph{a priori}, there is no reason to expect any regularity of \(f\) and \(\bar f\) across \(\{x_1=0\}\) (we will in fact find that \(\bar f \in C^\alpha(B_1)\)). We claim that for \(R\) large enough (depending on \(\varepsilon\)),
\begin{align}
\vert \bar f(x) - f(x) \vert \leqslant \varepsilon x_1\qquad \text{for all } x\in B_1^+. \label{8LncZIti}
\end{align}
Indeed, if \(x\in B_1^+\) then
\begin{align*}
\bar f(x) - f(x) =(-\Delta)^s(\bar u - u ) (x) &= C \int_{\mathbb{R}^n_+\setminus B_R} \bigg ( \frac 1 {\vert x - y \vert^{n+2s}} - \frac 1 {\vert x_\ast- y \vert^{n+2s}} \bigg ) ( u -\bar u ) (y) \mathop{}\!d y .
\end{align*}
{F}rom~\eqref{LxZU6}, it follows that
\begin{align*}
\vert \bar f(x) - f(x) \vert \leqslant Cx_1 \int_{\mathbb{R}^n_+\setminus B_R} \frac{y_1\vert u(y) -\bar u(y)\vert}{\vert x - y \vert^{n+2s+2}} \mathop{}\!d y \leqslant Cx_1 \int_{\mathbb{R}^n_+\setminus B_R} \frac{y_1\vert u(y) \vert }{1+ \vert y \vert^{n+2s+2}} \mathop{}\!d y .
\end{align*}
Since \(u\in \mathscr A_s(\mathbb{R}^n)\), taking \(R\) large we obtain~\eqref{8LncZIti}. Next, consider the standard mollifier \(\eta(x) := C_0 \chi_{B_1}(x) e^{-\frac 1 {1-\vert x\vert ^2}}\) with \(C_0>0\) such that \(\int_{\mathbb{R}^n} \eta(x) \mathop{}\!d x =1\) and let~\(\eta_\varepsilon(x) := \varepsilon^{-n} \eta (x/\varepsilon)\). Also, let \(u_\varepsilon := \bar u \ast \eta_\varepsilon\) and \(f_\varepsilon := \bar f \ast \eta_\varepsilon\). Notice that~$u_\varepsilon\in C^\infty_0(\mathbb{R}^n)$ and it is antisymmetric. Additionally, we show that~\eqref{EuoJ3En6} holds true in case~$(i)$ (case~$(ii)$ being analogous).
To this end, we observe that, since \(\bar u\) has compact support, we have that~\(\bar u \in \mathscr L_s(\mathbb{R}^n)\), so by \thref{eeBLRjcZ}, \((-\Delta)^s \bar u\) can be understood in the usual sense in \(B_1\), that is, by~\eqref{ZdAlT}. Moreover, by~\cite[Propositions~2.4--2.6]{MR2270163}, we have that~\((-\Delta)^s\bar u\in C^\alpha(B_1)\) which gives that \(\bar f \in C^\alpha(B_1)\) and \begin{align*} (-\Delta)^s\bar u +k \bar u &= \bar f \qquad \text{in }B_1. \end{align*} In particular, we may use standard properties of mollifiers to immediately obtain \begin{align*} (-\Delta)^s u_\varepsilon +k u_\varepsilon &= f_\varepsilon \qquad \text{in }B_{7/8}. \end{align*} Also, since \(\bar f\) is antisymmetric, it follows that \begin{align*} f_\varepsilon(x) &= \int_{\mathbb{R}^n } \bar f (y) \eta_\varepsilon (x-y) \mathop{}\!d y = \int_{\mathbb{R}^n_+ } \bar f (y)\big ( \eta_\varepsilon (x-y) - \eta_\varepsilon (x_\ast-y) \big )\mathop{}\!d y. \end{align*} Observe that, since \(\eta \) is monotone decreasing in the radial direction and \(\vert x- y \vert \leqslant \vert x_\ast - y \vert\) for all~\(x,y\in \mathbb{R}^n_+\), \begin{align} \eta_\varepsilon (x-y) - \eta_\varepsilon (x_\ast-y) \geqslant 0 \qquad \text{for all } x,y\in \mathbb{R}^n_+. \label{DPmhap5t} \end{align} Moreover, by~\eqref{8LncZIti}, we see that~\(\bar f (x) \geqslant -(M+\varepsilon)x_1\) for all \(x\in B_1^+\), so if \(x\in B_{7/8}^+\) and \(\varepsilon>0\) is sufficiently small (independent of \(x\)) then it follows that \begin{equation}\label{fjrehgeruig009887} f_\varepsilon(x) =\int_{B_\varepsilon^+(x)} \bar f (y)\big ( \eta_\varepsilon (x-y) - \eta_\varepsilon (x_\ast-y) \big )\mathop{}\!d y \geqslant -(M+\varepsilon)\int_{B_\varepsilon^+(x)} y_1 \big ( \eta_\varepsilon (x-y) - \eta_\varepsilon (x_\ast-y) \big )\mathop{}\!d y.
\end{equation} Next, we claim that \begin{align} \int_{B_\varepsilon^+(x)} y_1 \big ( \eta_\varepsilon (x-y) - \eta_\varepsilon (x_\ast-y) \big )\mathop{}\!d y &\leqslant x_1. \label{Hp0lBCzB} \end{align} Indeed, \begin{align*} \int_{B_\varepsilon^+(x)} y_1 \big ( \eta_\varepsilon (x-y) - \eta_\varepsilon (x_\ast-y) \big )\mathop{}\!d y &= \int_{B_\varepsilon(x)} y_1 \eta_\varepsilon (x-y) \mathop{}\!d y \\&-\int_{B_\varepsilon^-(x)} y_1 \eta_\varepsilon (x-y) \mathop{}\!d y - \int_{B_\varepsilon^+(x)} y_1 \eta_\varepsilon (x_\ast-y) \mathop{}\!d y \\ &= \int_{B_\varepsilon(x)} y_1 \eta_\varepsilon (x-y) \mathop{}\!d y - \int_{B_\varepsilon^+(x)\setminus B_\varepsilon^+(x_\ast)} y_1 \eta_\varepsilon (x_\ast-y) \mathop{}\!d y \\ &\leqslant \int_{B_\varepsilon(x)} y_1 \eta_\varepsilon (x-y) \mathop{}\!d y . \end{align*} Moreover, using that \(z \mapsto z_1 \eta(z)\) is antisymmetric and \(\int_{B_\varepsilon} \eta(z)\mathop{}\!d z =1\), we obtain that \begin{align*} \int_{B_\varepsilon(x)} y_1 \eta_\varepsilon (x-y) \mathop{}\!d y &= \int_{B_\varepsilon(x)} (y_1-x_1) \eta_\varepsilon (x-y) \mathop{}\!d y +x_1 \int_{B_\varepsilon(x)} \eta_\varepsilon (x-y) \mathop{}\!d y =x_1 \end{align*} which gives \eqref{Hp0lBCzB}. {F}rom~\eqref{fjrehgeruig009887} and~\eqref{Hp0lBCzB}, we obtain that $$ f_\varepsilon(x) \ge -(M+\varepsilon)x_1$$ for all \(x\in B_{7/8}^+\), as soon as~$\varepsilon$ is taken sufficiently small. This is the desired result in~\eqref{EuoJ3En6}. Finally, it follows immediately from the properties of mollifiers that \( u_\varepsilon \to u \) uniformly in \(B_{7/8}\) as~\(\varepsilon\to 0^+\). Moreover, if \(u \geqslant 0\) in \(\mathbb{R}^n_+\) then from antisymmetry, \begin{align*} u_\varepsilon (x) = \int_{\mathbb{R}^n_+ } \bar u(y)\big ( \eta_\varepsilon (x-y) - \eta_\varepsilon (x_\ast-y) \big )\mathop{}\!d y \geqslant 0 \qquad \text{for all } x\in \mathbb{R}^n_+ \end{align*} using~\eqref{DPmhap5t}.
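The two elementary facts about the mollifier used above, namely the monotonicity~\eqref{DPmhap5t} (a radially decreasing kernel is larger at the closer point) and the first-moment identity \(\int y_1\,\eta_\varepsilon(x-y)\,dy=x_1\) behind~\eqref{Hp0lBCzB}, can be checked numerically. The following is a minimal two-dimensional sketch (not part of the proof; the point \(x\) and the value of \(\varepsilon\) are arbitrary illustrative choices):

```python
import numpy as np

def eta(z):
    """Standard (unnormalized) mollifier supported on the unit ball."""
    r2 = np.sum(z**2, axis=-1)
    out = np.zeros_like(r2)
    inside = r2 < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - r2[inside]))
    return out

eps = 0.3
h = 2e-3  # quadrature step on [-1, 1]^2 (midpoint rule)
g = np.arange(-1.0, 1.0, h) + h / 2
Y1, Y2 = np.meshgrid(g, g, indexing="ij")
Y = np.stack([Y1, Y2], axis=-1)

def eta_eps(z):
    return eta(z / eps) / eps**2  # n = 2 scaling

# normalize numerically so that the kernel integrates to 1
mass = np.sum(eta_eps(Y)) * h**2

x = np.array([0.4, 0.1])          # a point in the half-space {x_1 > 0}
x_star = np.array([-x[0], x[1]])  # its reflection across {x_1 = 0}

vals = eta_eps(x - Y) / mass
vals_star = eta_eps(x_star - Y) / mass

# monotonicity as in (DPmhap5t): eta_eps(x-y) >= eta_eps(x_*-y) for y_1 > 0
assert np.all((vals - vals_star)[Y1 > 0] >= -1e-12)

# first-moment identity: int y_1 eta_eps(x-y) dy = x_1
first_moment = np.sum(Y1 * vals) * h**2
assert abs(first_moment - x[0]) < 1e-3
```

The first-moment identity holds because the odd part \((y_1-x_1)\eta_\varepsilon(x-y)\) integrates to zero, exactly as in the display above.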
\end{proof} Our second lemma is as follows. \begin{lem} \thlabel{6fD34E6w} Suppose that \( v\in C^\infty_0(\mathbb{R}^n)\) is an antisymmetric function satisfying \(\partial_1 v (0)=0\). Then \begin{align*} \lim_{h\to 0 }\frac{(-\Delta)^sv(he_1)} h = -2c_{n,s}(n+2s) \int_{\mathbb{R}^n_+} \frac{y_1 v(y)}{\vert y \vert^{n+2s+2}} \mathop{}\!d y . \end{align*} \end{lem} Note that since \(v\in C^\infty_0(\mathbb{R}^n)\), the fractional Laplacian is given by the usual definition as per \thref{eeBLRjcZ}. \begin{proof} [Proof of Lemma~\ref{6fD34E6w}] We will begin by proving that \begin{align} \lim_{h\to 0 }\frac{(-\Delta)^sv(he_1)} h = (-\Delta)^s \partial_1 v(0). \label{JMT019eU} \end{align} For this, consider the difference quotient \begin{align*} \partial^h_1v(x) := \frac{v(x+he_1)-v(x)}h \end{align*} for all \(x\in \mathbb{R}^n\) and~\(\vert h \vert>0\) small. Since \(v\) is antisymmetric, \(v(0)=0\), so \begin{align*} \frac{2v(h e_1)-v(he_1+y)-v(he_1-y)}h&= 2\partial^h_1v(0)-\partial^h_1v(y)-\partial^h_1v(-y) - \frac{v(y)+v(-y)}h \end{align*} for all \(y\in \mathbb{R}^n\). Moreover, the function~\(y \mapsto v(y)+v(-y)\) is odd with respect to~$y'$, and so \begin{align*} \int_{\mathbb{R}^n} \frac{v(y)+v(-y)}{\vert y \vert^{n+2s}}\mathop{}\!d y&=0. \end{align*} It follows that \begin{align*} \frac{(-\Delta)^sv(he_1)} h &= \frac{c_{n,s}} 2 \int_{\mathbb{R}^n } \bigg ( 2\partial^h_1v(0)-\partial^h_1v(y)-\partial^h_1v(-y) - \frac{v(y)+v(-y)}h \bigg ) \frac{\mathop{}\!d y }{\vert y \vert^{n+2s}}\\ &= (-\Delta)^s \partial^h_1v(0) . \end{align*} {F}rom these considerations and the computation at the top of p. 9 in \cite{MR3469920}, we have that \begin{align*} \bigg \vert \frac{(-\Delta)^sv(he_1)} h -(-\Delta)^s \partial_1 v(0) \bigg \vert &= \big \vert (-\Delta)^s ( \partial^h_1v-\partial_1v)(0) \big \vert \\ &\leqslant C \Big( \|\partial^h_1v-\partial_1v\|_{L^\infty(\mathbb{R}^n)} + \|D^2\partial^h_1v-D^2\partial_1v\|_{L^\infty(\mathbb{R}^n)} \Big) .
\end{align*} Then we obtain~\eqref{JMT019eU} by sending \(h\to 0\), using that \(\partial^h_1v \to \partial_1v\) in \(C^\infty_{\mathrm{loc}}(\mathbb{R}^n)\) as \(h\to 0\). To complete the proof, we use that \(\partial_1v(0)=0\) and integration by parts to obtain \begin{align*} (-\Delta)^s \partial_1 v(0) &= -c_{n,s} \int_{\mathbb{R}^n} \frac{\partial_1v(y)}{\vert y \vert^{n+2s}} \mathop{}\!d y \\ &= c_{n,s} \int_{\mathbb{R}^n} v(y) \partial_1\vert y \vert^{-n-2s} \mathop{}\!d y \\ &= -c_{n,s} (n+2s) \int_{\mathbb{R}^n} \frac{y_1v(y)}{\vert y \vert^{n+2s+2}} \mathop{}\!d y \\ &= -2c_{n,s} (n+2s) \int_{\mathbb{R}^n_+} \frac{y_1v(y)}{\vert y \vert^{n+2s+2}} \mathop{}\!d y \end{align*} where the last equality follows from antisymmetry of \(v\). \end{proof} We are now able to give the proof of \thref{g9foAd2c}. \begin{proof}[Proof of \thref{g9foAd2c}] Since \(u\) is non-negative in \(\mathbb{R}^n_+\), we have that~\( (-\Delta)^s u + \| c\|_{L^\infty(B_\rho^+)} u \geqslant -Mx_1\) in \(B_\rho^+\). Define~\(\tilde u(x):= u(\rho x)\) and note that \begin{align*} (-\Delta)^s \tilde u +\rho^{2s} \| c\|_{L^\infty(B_\rho^+)} \tilde u \geqslant -M\rho^{2s+1} x_1 \qquad \text{in } B_1^+. \end{align*} By way of \thref{tZVUcYJl} (i), we may take a \(C^\infty_0(\mathbb{R}^n)\) sequence of functions approximating \(\tilde u\) which satisfy the assumptions of \thref{g9foAd2c} with \(M\) replaced with \(M+\varepsilon\), obtain the estimate, then pass to the limit. In this way we may assume \(\tilde u \in C^\infty_0(\mathbb{R}^n)\). Let \(\zeta\) be a smooth radially symmetric cut-off function such that\begin{align*} \zeta \equiv 1 \text{ in } B_{1/2}, \quad \zeta \equiv 0 \text{ in } \mathbb{R}^n \setminus B_{3/4},\quad {\mbox{and}} \quad 0\leqslant \zeta \leqslant 1, \end{align*} and define \(\varphi^{(2)} \in C^\infty_0(\mathbb{R}^n)\) by \(\varphi^{(2)}(x):= x_1 \zeta (x)\) for all \(x\in \mathbb{R}^n\).
Suppose that \(\tau \geqslant 0\) is the largest possible value such that \(\tilde u \geqslant \tau \varphi^{(2)}\) in \(\mathbb{R}^n_+\). For more detail on the existence of such a \(\tau\), see the footnote at the bottom of p. 11. Since \(\varphi^{(2)}(x)=x_1\) in \(B_{1/2}\), we have that \(x_1 \tau \leqslant \tilde u(x)\) for all~\(x\in B_{1/2}\), so \begin{align} \tau \leqslant \inf_{B_{1/2}^+} \frac{\tilde u(x)}{x_1} = \rho \inf_{B_{\rho/2}^+} \frac{u(x)}{x_1}. \label{pUJA2JZI} \end{align} Since \(\tilde u\) is \(C^1\) in \(B_1\), there are two possibilities that can occur: either there exists \(a\in B_{3/4}^+\) such that~\( \tilde u(a) = \tau \varphi^{(2)} (a)\); or there exists \(a \in B_{3/4} \cap \{ x_1=0\}\) such that \(\partial_1 \tilde u(a) = \tau \partial_1 \varphi^{(2)} (a)\). First suppose that there exists \(a\in B_{3/4}^+\) such that \( \tilde u(a) = \tau \varphi^{(2)} (a)\). Since \(\varphi^{(2)}\in C^\infty_0(\mathbb{R}^n)\) and is antisymmetric, \((-\Delta)^s\varphi^{(2)}\) is antisymmetric and~$\partial_1 \varphi^{(2)}=0$ in~$\{x_1=0\}$, we can exploit Lemma~\ref{6fD34E6w} to say that~\((-\Delta)^s\varphi^{(2)}(x)/x_1\) is bounded in \(\mathbb{R}^n\). On one hand, using that \((\tilde u-\tau \varphi^{(2)})(a)=0\), we have that \begin{align} (-\Delta)^s(\tilde u-\tau \varphi^{(2)})(a) &= (-\Delta)^s(\tilde u-\tau \varphi^{(2)})(a) +\rho^{2s} \| c\|_{L^\infty(B_\rho^+)} (\tilde u-\tau \varphi^{(2)})(a) \nonumber \\ &\geqslant -M\rho^{2s+1}a_1 -\tau \big (C + \rho^{2s}\| c \|_{L^\infty(B_\rho^+)} \big ) a_1 \nonumber \\ &\geqslant -M\rho^{2s+1}a_1 -C \tau \big (1 + \rho^{2s} \| c \|_{L^\infty(B_\rho^+)} \big ) a_1 .
\label{doW9AF3Y} \end{align} On the other hand, since \(\tilde u-\tau \varphi^{(2)}\) is antisymmetric, non-negative in \(\mathbb{R}^n_+\), and \((\tilde u-\tau \varphi^{(2)})(a) = 0\), we have by \thref{mkG4iRYH} that \begin{align*} (-\Delta)^s(\tilde u-\tau \varphi^{(2)})(a) = -C \int_{\mathbb{R}^n_+} \bigg (\frac 1 {\vert a - y \vert^{n+2s}} - \frac 1 {\vert a_\ast- y \vert^{n+2s}}\bigg )(\tilde u-\tau \varphi^{(2)})(y) \mathop{}\!d y . \end{align*} It follows from~\eqref{buKHzlE6} that \begin{align} (-\Delta)^s(\tilde u-\tau \varphi^{(2)})(a) &\leqslant -C a_1 \int_{B_{1/2}^+} \frac{y_1 (\tilde u-\tau \varphi^{(2)})(y)} {\vert a_\ast- y \vert^{n+2s+2}}\mathop{}\!d y \nonumber \\ &\leqslant -C a_1 \bigg ( \int_{B_{1/2}^+} y_1 \tilde u(y) \mathop{}\!d y -\tau \bigg ) \nonumber \\ &=-C a_1 \bigg ( \frac 1 {\rho^{n+1}} \int_{B_{\rho/2}^+} y_1 u(y) \mathop{}\!d y -\tau \bigg ). \label{retlxLlR} \end{align} Rearranging~\eqref{doW9AF3Y} and~\eqref{retlxLlR}, and recalling~\eqref{pUJA2JZI} gives \begin{align} \frac 1 {\rho^{n+1}} \int_{B_{\rho/2}^+} y_1 u(y) \mathop{}\!d y &\leqslant C \Big( \tau (1 + \rho^{2s} \| c \|_{L^\infty(B_\rho^+)} ) + \rho^{2s+1}M \Big) \nonumber \\ &\leqslant C \rho (1 + \rho^{2s} \| c \|_{L^\infty(B_\rho^+)} ) \left( \inf_{B_{\rho/2}^+} \frac{u(x)}{x_1} + M \rho^{2s}\right), \label{7O4TL1vF} \end{align} which gives the desired result in this case. Now suppose that there exists \(a \in B_{3/4} \cap \{ x_1=0\}\) such that \(\partial_1 \tilde u(a) =\tau \partial_1 \varphi^{(2)} (a)\). Let \(h>0\) be small and set \(a^{(h)}:= a+ he_1\). On one hand, as in~\eqref{doW9AF3Y}, \begin{align*} (-\Delta)^s(\tilde u-\tau \varphi^{(2)})(a^{(h)}) &\geqslant -M\rho^{2s+1}h -C \tau \big ( 1+ \rho^{2s}\|c\|_{L^\infty(B_\rho^+)} \big )h .
\end{align*} Dividing both sides by \(h\) and sending \(h\to 0^+\), it follows from \thref{6fD34E6w} (after a translation) that \begin{align*} -M\rho^{2s+1} -C \tau \big ( 1+ \rho^{2s}\|c\|_{L^\infty(B_\rho^+)} \big ) &\leqslant -C \int_{\mathbb{R}^n_+} \frac{y_1 (\tilde u-\tau \varphi^{(2)})(y)}{\vert y-a\vert^{n+2s+2}} \mathop{}\!d y \\ &\leqslant-C \bigg ( \frac 1{\rho^{n+1}} \int_{B_{\rho/2}^+} y_1 u(y) \mathop{}\!d y - \tau \bigg ). \end{align*} Rearranging as before gives the desired result. \end{proof} Next, we give the proof of~\thref{SwDzJu9i}. \begin{proof}[Proof of~\thref{SwDzJu9i}] Let \(\varphi^{(2)}\), \(\tau\), and \(a\) be the same as in the proof of \thref{g9foAd2c}. The proof of \thref{SwDzJu9i} is identical to the proof of \thref{g9foAd2c} except for the following changes. In the case \(a\in B_{3/4}^+\) and \(\tilde u(a) = \tau \varphi^{(2)}(a)\), we use~\eqref{buKHzlE6} to obtain \begin{align*} (-\Delta)^s (\tilde u - \tau \varphi^{(2)})(a) \leqslant-C a_1 \int_{\mathbb{R}^n_+} \frac{y_1(\tilde u-\tau \varphi^{(2)})(y)}{\vert a_\ast - y \vert^{n+2s+2}} \mathop{}\!d y \leqslant - C_\rho a_1 \big ( \Anorm{u} -\tau \big ) \end{align*} where we have also used that \begin{align} \vert a_\ast - y \vert^{n+2s+2} \leqslant C \big ( 1+\vert y \vert^{n+2s+2} \big ) \qquad \text{for all } y\in \mathbb{R}^n_+. \label{BWqeWX33} \end{align} Moreover, in the case \(a\in B_{3/4}\cap \{x_1=0\}\) and \(\partial_1\tilde u(a) = \tau \partial_1\varphi^{(2)}(a)\), we have that~\(a=a_\ast\) so~\eqref{BWqeWX33} also gives \begin{align*} \int_{\mathbb{R}^n_+} \frac{y_1 (\tilde u-\tau \varphi^{(2)})(y)}{\vert y-a\vert^{n+2s+2}} \mathop{}\!d y &\geqslant C_\rho \big (\Anorm{u} -\tau \big ) . \qedhere \end{align*} \end{proof} \subsection{Boundary local boundedness} We now prove the boundary local boundedness for sub-solutions. \begin{prop} \thlabel{EP5Elxbz} Let \(M \geqslant 0\), \(\rho\in(0,1)\), and \(c\in L^\infty(B_\rho^+)\).
Suppose that \(u \in C^{2s+\alpha}(B_\rho)\cap \mathscr A_s (\mathbb{R}^n)\) for some \(\alpha>0\) with \(2s+\alpha\) not an integer, and \(u\) satisfies \begin{align*} (-\Delta)^su +cu &\leqslant M x_1 \qquad \text{in } B_\rho^+. \end{align*} Then there exists \(C_\rho>0\) depending only on \(n\), \(s\), \(\| c \|_{L^\infty(B_\rho^+)}\), and \(\rho\) such that \begin{align*} \sup_{x\in B_{\rho/2}^+} \frac{ u(x)}{x_1} &\leqslant C_\rho ( \Anorm{u} +M ) . \end{align*} \end{prop} Before we prove \thref{EP5Elxbz}, we prove the following lemma. \begin{lem} \thlabel{KaMmndyO} Let \(\varphi \in C^\infty(\mathbb{R})\) be an odd function such that \(\varphi(t)=1\) if \(t>2\) and \(0\leqslant \varphi(t) \leqslant 1\) for all \(t\geqslant0\). Suppose that \(\varphi^{(3)} \in C^s(\mathbb{R}^n) \cap L^\infty(\mathbb{R}^n)\) is the solution to \begin{align} \begin{PDE} (-\Delta)^s \varphi^{(3)} &= 0 &\text{in } B_1, \\ \varphi^{(3)}(x) &= \varphi(x_1) &\text{in } \mathbb{R}^n \setminus B_1. \end{PDE} \label{pa7rDaw7} \end{align} Then \(\varphi^{(3)}\) is antisymmetric and there exists \(C>1\) depending only on \(n\) and \(s\) such that \begin{align*} C^{-1}x_1 \leqslant \varphi^{(3)}(x) \leqslant Cx_1 \end{align*} for all \(x\in B_{1/2}^+\). \end{lem} \begin{proof} Via the Poisson kernel representation, see \cite[Section 15]{MR3916700}, and using that \(\varphi\) is an odd function, we may write \begin{align*} \varphi^{(3)}(x) &= C \int_{\mathbb{R}^n \setminus B_1} \bigg ( \frac{1-\vert x \vert^2}{\vert y \vert^2-1}\bigg )^s \frac{\varphi(y_1)}{\vert x -y \vert^n} \mathop{}\!d y \\ &= C \int_{\mathbb{R}^n_+ \setminus B_1^+} \bigg ( \frac{1-\vert x \vert^2}{\vert y \vert^2-1}\bigg )^s \bigg ( \frac 1 {\vert x -y \vert^n} - \frac 1 {\vert x_\ast -y \vert^n}\bigg ) \varphi(y_1)\mathop{}\!d y.
\end{align*} {F}rom this formula, we immediately obtain that \(\varphi^{(3)}\) is antisymmetric (this can also be argued by the uniqueness of solutions to~\eqref{pa7rDaw7}). Then, by an analogous computation to~\eqref{LxZU6} (just replacing~$n+2s$ with~$n$), \begin{align*} \varphi^{(3)}(x) &\leqslant C x_1 \int_{\mathbb{R}^n_+ \setminus B_1^+} \frac{ y_1\varphi(y_1)}{(\vert y \vert^2-1)^s\vert x-y \vert^{n+2}} \mathop{}\!d y \\ &\leqslant C x_1 \left[\int_{B_2^+ \setminus B_1^+} \frac{ y_1}{(\vert y \vert^2-1)^s } \mathop{}\!d y +\int_{\mathbb{R}^n_+ \setminus B_2^+} \frac{ y_1}{(\vert y \vert^2-1)^s(\vert y \vert - 1)^{n+2}} \mathop{}\!d y\right] \\ &\leqslant Cx_1 \end{align*} for all \(x\in B_{1/2}^+\). Similarly, using now~\eqref{buKHzlE6} (replacing~$n+2s$ with~$n$), we have that \begin{align*} \varphi^{(3)}(x) &\geqslant C x_1 \int_{\mathbb{R}^n_+ \setminus B_1^+} \frac{ y_1\varphi(y_1)}{(\vert y \vert^2-1)^s\vert x_\ast -y \vert^{n+2}} \mathop{}\!d y \\ &\geqslant C x_1 \int_{\{y_1>2\}} \frac{ y_1}{(\vert y \vert^2-1)^s(\vert y \vert +1)^{n+2}} \mathop{}\!d y \\ &\geqslant C x_1 \end{align*} for all \(x\in B_{1/2}^+\). \end{proof} Now we can give the proof of \thref{EP5Elxbz}. \begin{proof}[Proof of \thref{EP5Elxbz}] Dividing through by \(\Anorm{u}+M\), we can also assume that \((-\Delta)^s u + cu \leqslant x_1\) in \(B_\rho^+\) and \(\Anorm{u}\leqslant 1\). Moreover, as explained at the start of the proof of \thref{g9foAd2c}, via \thref{tZVUcYJl} (ii) (after rescaling), it is not restrictive to assume that \(u \in C^\infty_0(\mathbb{R}^n)\) and that it is antisymmetric. Furthermore, we point out that the claim in \thref{EP5Elxbz} is obviously true if~$u\le0$ in~$B^+_\rho$, hence we suppose that~$\{u>0\}\cap B^+_\rho\ne\varnothing$. Let \(\varphi^{(3)}\) be as in \thref{KaMmndyO} and let \(\zeta(x) := \varphi^{(3)}(x/(2\rho)) \).
Suppose that \(\tau \geqslant 0\) is the smallest value such that \begin{align*} u(x) &\leqslant \tau \zeta (x) (\rho - \vert x \vert )^{-n-2} \qquad \text{in } B_\rho^+. \end{align*} The existence of such a \(\tau\) follows from a similar argument to the one presented in the footnote at the bottom of p. 11. Notice that~$\tau>0$. To complete the proof we will show that \(\tau \leqslant C_\rho\) with \(C_\rho\) independent of \(u\). Since \(u\) is continuously differentiable, two possibilities can occur: \begin{itemize} \item[Case 1:] There exists \(a \in B_\rho^+\) such that \begin{align*} u(a) &= \tau \zeta (a) (\rho - \vert a \vert )^{-n-2}. \end{align*} \item[Case 2:] There exists \(a\in B_\rho \cap \{x_1=0\}\) such that \begin{align*} \partial_1 u(a) = \tau \partial_1 \big \vert_{x=a} \big ( \zeta (x) (\rho - \vert x \vert )^{-n-2} \big )= \tau (\partial_1 \zeta (a)) (\rho - \vert a \vert )^{-n-2} . \end{align*} \end{itemize} Let \(d:= \rho - \vert a\vert \) and define \(U\subset B_\rho^+\) as follows: if Case 1 occurs let \begin{align*} U := \bigg \{ x \in B_\rho^+ \text{ s.t. } \frac{u(x)}{\zeta(x)} > \frac{u(a)}{2\zeta (a)} \bigg \}; \end{align*} otherwise if Case 2 occurs then let \begin{align*} U:=\bigg \{ x \in B_\rho^+ \text{ s.t. } \frac{u(x)}{\zeta(x)} > \frac{\partial_1u(a)}{2 \partial_1\zeta (a)} \bigg \}. \end{align*} Since \(u(a) = \tau \zeta(a) d^{-n-2}\) in Case 1 and \(\partial_1u(a) = \tau \partial_1\zeta(a) d^{-n-2}\) in Case 2, we may write \begin{equation}\label{ei395v7b865998cn754mx984zUUU} U = \bigg \{ x \in B_\rho^+ \text{ s.t. } u(x)> \frac 1 2 \tau d^{-n-2} \zeta(x) \bigg \} \end{equation}which is valid in both cases. Then, we have that, for all \(r\in(0,d)\), \begin{align*} C_\rho &\geqslant \int_{B_\rho^+} y_1| u(y)| \mathop{}\!d y \geqslant \frac 1 2 \tau d^{-n-2} \int_{U \cap B_r(a)} y_1 \zeta(y) \mathop{}\!d y .
\end{align*} As a consequence, by \thref{KaMmndyO}, we have that \begin{align} \int_{U \cap B_r(a)} y_1^2 \mathop{}\!d y \le C_\rho\int_{U \cap B_r(a)} y_1 \zeta(y) \mathop{}\!d y \leqslant \frac{ C_\rho d^{n+2} } \tau \qquad \text{for all } r\in(0,d). \label{TTWpDkie} \end{align} Next, we make the following claim. \begin{claim} There exists~$\theta_0\in(0,1)$ depending only on \(n\), \(s\), \(\| c\|_{L^\infty(B_\rho(e_1))}\), and \(\rho\) such that if~$\theta\in(0,\theta_0]$ there exists~\(C>0\) depending only on \(n\), \(s\), \(\| c\|_{L^\infty(B_\rho(e_1))}\), \(\rho\), and~$\theta$ such that \begin{itemize} \item In Case 1: \begin{enumerate}[(i)] \item If \(a_1 \geqslant \theta d/16 \) then \begin{align*} \big \vert B_{(\theta d)/64}(a) \setminus U \big \vert &\leqslant \frac 1 4 \big \vert B_{(\theta d)/64} \big \vert +\frac{C d^n} \tau . \end{align*} \item If \(a_1 < \theta d/16 \) then \begin{align*} \int_{B_{(\theta d)/64}^+(a) \setminus U } x_1^2 \mathop{}\!d x &\leqslant \frac 1 4 \int_{B_{(\theta d)/64}^+(a) } x_1^2 \mathop{}\!d x + \frac{C d^{n+2}}\tau . \end{align*} \end{enumerate} \item In Case 2: \begin{align*} \int_{B_{(\theta d)/64}^+(a)\setminus U } x_1^2\mathop{}\!d x &\leqslant\frac 1 4 \int_{B_{(\theta d)/64}^+ (a)} x_1^2 \mathop{}\!d x + \frac{ C d^{n+2}}\tau. \end{align*} \end{itemize} In particular, neither~\(\theta\) nor~\(C\) depend on \(\tau\), \(u\), or \(a\). \end{claim} We withhold the proof of the claim until the end. Assuming that the claim is true, we complete the proof of \thref{EP5Elxbz} as follows. If Case 1(i) occurs then for all \(y\in B_{(\theta_0 d)/64} (a)\) we have that~\(y_1>a_1-( \theta_0 d)/64 \geqslant C d\), and so \begin{align*} \int_{U \cap B_{(\theta_0 d)/64}(a)} y_1^2 \mathop{}\!d y \geqslant C d^2 \cdot \big \vert U \cap B_{(\theta_0 d)/64}(a) \big \vert .
\end{align*} Hence, from~\eqref{TTWpDkie} (used here with~$r:=(\theta_0d)/64$), we have that \begin{align*} \big \vert U \cap B_{( \theta_0 d)/64}(a) \big \vert \leqslant \frac{C_\rho d^n}{\tau}. \end{align*} Then using the claim, we find that \begin{align*} \frac{C_\rho d^n}{\tau} \geqslant \big \vert B_{(\theta_0 d)/64} \big \vert - \big \vert B_{(\theta_0 d)/64}(a) \setminus U \big \vert \geqslant \frac 3 4 \big \vert B_{(\theta_0 d)/64} \big \vert -\frac{C d^n} \tau \end{align*}which gives that \(\tau \leqslant C_\rho\) in this case. If Case 1(ii) or Case 2 occurs then from~\eqref{TTWpDkie} (used here with~$r:=(\theta_0d)/64$) and the claim, we have that \begin{equation}\label{dewiotbv5748976w4598ty} \frac{C_\rho d^{n+2}}{\tau} \geqslant \int_{B_{(\theta_0 d)/64}^+(a) } x_1^2 \mathop{}\!d x - \int_{B_{( \theta_0 d)/64}^+(a) \setminus U } x_1^2 \mathop{}\!d x \geqslant \frac 3 4 \int_{B_{( \theta_0 d)/64}^+(a) } x_1^2 \mathop{}\!d x - \frac{C d^{n+2}}\tau . \end{equation} We now observe that, given~$r\in(0,d)$, if~$x\in B_{r/4}\left(a+\frac34 re_1\right)\subset B^+_{r}(a)$ then~$x_1\ge a_1+\frac34 r-\frac{r}4\ge\frac{r}2$, and thus \begin{equation}\label{sdwet68980poiuytrkjhgfdmnbvcx23456789} \int_{B_{r}^+(a) } x_1^2 \mathop{}\!d x\ge \int_{B_{r/4}(a+(3r)/4 e_1) } x_1^2 \mathop{}\!d x\ge \frac{r^2}4\,|B_{r/4}(a+(3r)/4 e_1) |=C r^{n+2},\end{equation} for some~$C>0$ depending on~$n$. Exploiting this formula with~$r:=(\theta_0 d)/64$ into~\eqref{dewiotbv5748976w4598ty}, we obtain that $$ \frac{C_\rho d^{n+2}}{\tau} \geqslant C \left(\frac{ \theta_0 d}{64}\right)^{n+2 }- \frac{C d^{n+2}}\tau,$$ which gives that \(\tau \leqslant C_\rho\) as required. Hence, we now focus on the proof of the claim. Let \(\theta\in(0,1)\) be a small constant to be chosen later. By translating, we may also assume without loss of generality that \(a'=0\) (recall that~\(a'=(a_2,\dots,a_n)\in \mathbb{R}^{n-1}\)).
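The geometric lower bound \(\int_{B_r^+(a)} x_1^2 \,dx \geqslant C r^{n+2}\) obtained above rests on the inclusion \(B_{r/4}(a+\tfrac34 r e_1)\subset B_r^+(a)\), on which \(x_1\geqslant r/2\). A minimal Monte Carlo sketch in dimension \(n=2\) (not part of the proof; the values of \(r\) and \(a\) are arbitrary illustrative choices with \(a_1\geqslant0\)):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
r = 0.5
a = np.array([0.2, -0.3])  # center with a_1 >= 0

# Monte Carlo estimate of int_{B_r^+(a)} x_1^2 dx, sampling the bounding box of B_r(a)
N = 400_000
X = a + rng.uniform(-r, r, size=(N, n))
in_ball = np.sum((X - a) ** 2, axis=1) < r**2
in_half = X[:, 0] > 0
box_vol = (2 * r) ** n
integral = box_vol * np.mean(X[:, 0] ** 2 * (in_ball & in_half))

# lower bound from the shifted-ball inclusion: there x_1 >= r/2,
# so the integral is at least (r/2)^2 * |B_{r/4}| (area of a 2D disc)
lower = (r / 2) ** 2 * np.pi * (r / 4) ** 2
assert integral >= lower
```

The margin here is large: the shifted small ball captures only a fixed fraction of the half-ball's mass, which is exactly why the constant \(C\) depends only on \(n\).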
For each \(x\in B_{\theta d/2}^+(a)\), we have that~\(\vert x \vert \leqslant \vert a \vert +\theta d/2=\rho- (1-\theta/2)d\). Hence, in both Case 1 and Case 2, \begin{align} u(x) \leqslant \tau d^{-n-2} \bigg (1- \frac \theta 2 \bigg )^{-n-2}\zeta (x) \qquad \text{in }B_{\theta d/2}^+(a). \label{ln9LcJTh} \end{align} Let \begin{align*} v(x):= \tau d^{-n-2} \bigg (1- \frac \theta 2 \bigg )^{-n-2}\zeta (x)-u(x) \qquad \text{for all } x\in \mathbb{R}^n. \end{align*} We have that \(v\) is antisymmetric and \(v\geqslant 0\) in \(B_{\theta d/2}^+(a) \) due to~\eqref{ln9LcJTh}. Moreover, since \(\zeta\) is \(s\)-harmonic in \(B_\rho^+ \supset B_{\theta d/2}^+(a)\), for all \(x\in B_{\theta d/2}^+(a)\), \begin{align*} (-\Delta)^s v(x) +c(x)v(x) &= -(-\Delta)^s u(x) - c(x) u(x) +c(x) \tau d^{-n-2} \bigg (1- \frac \theta 2 \bigg )^{-n-2}\zeta (x)\\ &\geqslant -x_1 -C \tau d^{-n-2} \|c^-\|_{L^\infty(B_\rho^+)} \bigg (1- \frac \theta 2 \bigg )^{-n-2}\zeta (x). \end{align*} Taking \(\theta\) sufficiently small and using that \(\zeta (x)\leqslant C x_1\) (in light of~\thref{KaMmndyO}), we obtain \begin{align} (-\Delta)^s v(x) +c(x)v(x)&\geqslant -C \big ( 1 + \tau d^{-n-2} \big )x_1 \qquad \text{in } B_{\theta d/2}^+(a). \label{ulzzcUwf} \end{align} Next, we define \(w(x):= v^+(x)\) for all \(x\in \mathbb{R}^n_+\) and \(w(x):= -w(x_\ast)\) for all \(x\in \overline{\mathbb{R}^n_-}\). We point out that, in light of~\eqref{ln9LcJTh}, $w$ is as regular as~$v$ in~$B_{\theta d/2}^+(a)$, and thus we can compute the fractional Laplacian of~$w$ in~$B_{\theta d/2}^+(a)$ in a pointwise sense. We also observe that \begin{align*} (w-v)(x) &= \begin{cases} 0 &\text{if } x\in \mathbb{R}^n_+ \cap \{ v\geqslant 0\} ,\\ u(x)-\tau d^{-n-2} \big (1- \frac \theta 2 \big )^{-n-2}\zeta (x),&\text{if } x\in \mathbb{R}^n_+ \cap \{ v< 0\}. \end{cases} \end{align*} In particular, \(w-v\leqslant |u|\) in \(\mathbb{R}^n_+\).
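The pointwise bound \(w-v\leqslant |u|\) in \(\mathbb{R}^n_+\) only uses that \(v=\psi-u\) with \(\psi\geqslant0\) on the half-space and \(w=v^+\) there: on \(\{v\geqslant0\}\) the difference vanishes, while on \(\{v<0\}\) it equals \(u-\psi\leqslant u\leqslant|u|\). A minimal sketch with arbitrary sampled values (not part of the proof; \(\psi\) stands for the non-negative quantity \(\tau d^{-n-2}(1-\theta/2)^{-n-2}\zeta\)):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
u = rng.normal(size=N)           # arbitrary values of u on the half-space
psi = rng.uniform(0, 2, size=N)  # non-negative, mimicking tau d^{-n-2}(1-theta/2)^{-n-2} zeta

v = psi - u
w = np.maximum(v, 0.0)           # w = v^+ on the half-space

# on {v >= 0}: w - v = 0 <= |u|;  on {v < 0}: w - v = -v = u - psi <= u <= |u|
assert np.all(w - v <= np.abs(u) + 1e-12)
```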
Thus, for all \(x\in B_{\theta d/2}^+(a)\), \begin{align*} (-\Delta)^s (w-v)(x)&\geqslant -C \int_{\mathbb{R}^n_+\setminus B_{\theta d/2}^+(a)} \left(\frac 1 {\vert x -y \vert^{n+2s}} - \frac 1 {\vert x_\ast -y \vert^{n+2s}} \right) |u(y)|\mathop{}\!d y . \end{align*} Moreover, by \thref{ltKO2}, for all~$x \in B_{\theta d/4}^+(a)$, \begin{equation} (-\Delta)^s (w-v)(x) \geqslant -C(\theta d)^{-n-2s-2} \Anorm{u} x_1 \geqslant -C(\theta d)^{-n-2s-2}x_1 . \label{y8tE2pf9} \end{equation} Hence, by~\eqref{ulzzcUwf} and~\eqref{y8tE2pf9}, we obtain \begin{align} (-\Delta)^s w +cw &=(-\Delta)^sv +cv +(-\Delta)^s(w-v) \nonumber \\ &\geqslant -C \bigg ( 1 + \tau d^{-n-2} +(\theta d)^{-n-2s-2} \bigg )x_1 \nonumber \\ &\geqslant -C \bigg ( (\theta d)^{-n-2s-2} + \tau d^{-n-2} \bigg )x_1\label{ayFE7nQK} \end{align} in \( B_{\theta d/4}^+(a)\). Next, let us consider Case 1 and Case 2 separately. \emph{Case 1:} Suppose that \(a\in B_\rho^+\) and let \(\tilde w (x) = w (a_1 x)\) and \(\tilde c (x) = a_1^{2s} c (a_1 x)\). Then from~\eqref{ayFE7nQK}, we have that \begin{align} (-\Delta)^s \tilde w(x) +\tilde c(x) \tilde w(x) &\geqslant - Ca_1^{2s+1} \bigg ( (\theta d)^{-n-2s-2} + \tau d^{-n-2} \bigg )x_1 \label{zrghHZhX} \end{align} for all \( x\in B_{\theta d/(4a_1)}^+(e_1)\). As in the proof of \thref{guDQ7}, we wish to apply the rescaled version of the weak Harnack inequality to \(\tilde w\); however, we cannot immediately apply either \thref{YLj1r} or \thref{g9foAd2c} to~\eqref{zrghHZhX}. To resolve this, let us split into a further two cases: (i) \(a_1 \geqslant \theta d/16 \) and (ii) \(a_1< \theta d/16\). \emph{Case 1(i):} If \(a_1 \geqslant \theta d/16 \) then \( B_{\theta d/(32a_1)}(e_1) \subset B_{\theta d/(4a_1)}^+(e_1)\) and for each \(x\in B_{\theta d/(32a_1)}(e_1)\) we have that~\(x_1<1+\theta d/(32a_1)\le1+1/2=3/2\).
Therefore, from~\eqref{zrghHZhX}, we have that \begin{align*} (-\Delta)^s \tilde w(x) +\tilde c(x) \tilde w(x) &\geqslant - Ca_1^{2s+1} \big ( (\theta d)^{-n-2s-2} + \tau d^{-n-2} \big ) \end{align*} for all \( x\in B_{\theta d/(32a_1)}(e_1)\). On one hand, by~\thref{YLj1r} (used here with~$\rho:=\theta d/(32a_1)$), \begin{align*}& \left( \frac{\theta d}{64} \right)^{-n} \int_{B_{\theta d/64}(a)} w (x) \mathop{}\!d x \\= &\; \left( \frac{\theta d}{32a_1} \right)^{-n} \int_{B_{\theta d/(64a_1)}(e_1)} \tilde w (x) \mathop{}\!d x \\ \leqslant&\; C \left( \tilde w(e_1) + a_1 (\theta d)^{-n-2} + \tau a_1 \theta^{2s} d^{-n+2s-2} \right) \\ \leqslant& \;C \tau d^{-n-2} \bigg ( \bigg (1- \frac \theta 2 \bigg )^{-n-2}-1 \bigg ) a_1 +Ca_1(\theta d)^{-n-2} + C\tau a_1 \theta^{2s} d^{-n+2s-2} \end{align*} using also \thref{KaMmndyO} and that \(u(a) = \tau d^{-n-2} \zeta(a)\). On the other hand, by the definition of~$U$ in~\eqref{ei395v7b865998cn754mx984zUUU}, \begin{align} B_r(a) \setminus U &\subset \left\{ \frac{w}{\zeta} > \tau d^{-n-2} \left( \left(1 - \frac \theta 2 \right)^{-n-2} -\frac 1 2 \right) \right\} \cap B_r(a), \qquad \text{ for all } r\in\left(0,\frac12\theta d\right),\label{I9FvOxCz} \end{align} and so \begin{align*} (\theta d)^{-n} \int_{B_{\theta d/64}(a)} w(x) \mathop{}\!d x &\geqslant \tau \theta^{-n} d^{-2n-2} \bigg ( \bigg (1 - \frac \theta 2 \bigg )^{-n-2} -\frac 1 2 \bigg ) \int_{B_{\theta d/64}(a) \setminus U } \zeta (x) \mathop{}\!d x \\ &\geqslant C \tau \theta^{-n} d^{-2n-2} \int_{B_{\theta d/64}(a) \setminus U } \zeta (x) \mathop{}\!d x \end{align*} for \(\theta\) sufficiently small. Moreover, using that \(x_1>a_1-\theta d/64>Ca_1\) and \thref{KaMmndyO}, we have that\begin{align*} \int_{B_{\theta d/64}(a) \setminus U } \zeta (x) \mathop{}\!d x \geqslant C\int_{B_{\theta d/64}(a) \setminus U } x_1 \mathop{}\!d x \geqslant C a_1\cdot \big \vert B_{\theta d/64}(a) \setminus U \big \vert .
\end{align*} Thus,\begin{align*} \big \vert B_{\theta d/64}(a) \setminus U \big \vert &\leqslant C (\theta d)^n \bigg ( \bigg (1- \frac \theta 2 \bigg )^{-n-2}-1 \bigg ) +\frac{C\theta^{-2} d^n} \tau + C (\theta d)^{n+2s} \\ &\leqslant C (\theta d)^n \bigg ( \bigg (1- \frac \theta 2 \bigg )^{-n-2}-1 +\theta^{2s} \bigg ) +\frac{C\theta^{-2} d^n} \tau , \end{align*} using also that \(d^{n+2s}<d^n\). Finally, we can take \(\theta\) sufficiently small so that \begin{align*} C (\theta d)^n \bigg ( \bigg (1- \frac \theta 2 \bigg )^{-n-2}-1 +\theta^{2s} \bigg ) \leqslant \frac 1 4 \big \vert B_{\theta d/64} \big \vert \end{align*} which gives $$ \big \vert B_{\theta d/64}(a) \setminus U \big \vert\le \frac14 \big \vert B_{\theta d/64} \big \vert +\frac{Cd^n} \tau. $$ This concludes the proof of the claim in Case 1(i). \emph{Case 1(ii):} Let \(a_1< \theta d/16\) and fix \(R:= \frac 1 {2} \big ( \sqrt{ ( \theta d/(4a_1))^2-1} +2 \big )\). Observe that \begin{align*} 2 < R< \sqrt{ \left(\frac{ \theta d}{4a_1}\right)^2-1}. \end{align*} Hence, \(e_1 \in B_{R/2}^+\). Moreover, if \(x\in B_R^+\) then \begin{align*} \vert x -e_1\vert^2 <1+R^2< ( \theta d/(4a_1))^2, \end{align*} so \(B_R^+ \subset B_{\theta d/(4a_1)}^+(e_1)\). Thus, applying \thref{g9foAd2c} to the equation in~\eqref{zrghHZhX} in \(B_R^+\), we obtain \begin{align*}& a_1^{-n-1}\bigg ( \frac R 2\bigg )^{-n-2} \int_{B_{a_1R/2}^+} x_1 w(x) \mathop{}\!d x \\ =&\; \bigg ( \frac R 2\bigg )^{-n-2} \int_{B_{R/2}^+} x_1 \tilde w(x) \mathop{}\!d x \\ \leqslant&\; C \left( \inf_{B_{R/2}^+} \frac{\tilde w(x)}{x_1} + a_1^{2s+1}R^{2s} (\theta d)^{-n-2s-2} + \tau d^{-n-2}a_1^{2s+1}R^{2s} \right) \\ \leqslant\;& C \tau d^{-n-2} \left( \left(1- \frac \theta 2 \right)^{-n-2}-1 \right)\zeta(a) + C a_1^{2s+1}R^{2s} (\theta d)^{-n-2s-2} +C \tau d^{-n-2}a_1^{2s+1}R^{2s} \end{align*} using that \(e_1 \in B_{R/2}^+\).
Since \(R \leqslant C \theta d/a_1\) and \(\zeta(a)\leqslant Ca_1\) by \thref{KaMmndyO}, it follows that \begin{eqnarray*}&& a_1^{-1} \bigg ( \frac R {2a_1} \bigg )^{-n} \int_{B_{a_1R/2}^+} x_1 w(x) \mathop{}\!d x \\ &&\qquad\leqslant C \tau d^{-n-2} \bigg ( \bigg (1- \frac \theta 2 \bigg )^{-n-2}-1 \bigg )a_1 + C a_1 (\theta d)^{-n-2} +C \tau a_1 \theta^{2s} d^{-n+2s-2} . \end{eqnarray*} On the other hand, we claim that \begin{equation}\label{skweogtry76t678tutoy4554yb76i78io896} B_{(\theta d)/64}^+(a) \setminus U \subset B_{a_1R/2}^+ .\end{equation} Indeed, if~$x\in B_{(\theta d)/64}^+ \setminus U $ then $$ |x|\le |x-a|+|a|\le \frac{\theta d}{64} +a_1\le a_1\left(\frac{\theta d}{64a_1}+1\right). $$ Furthermore, \begin{eqnarray*} &&R\ge \frac 1 {2} \sqrt{ \left(\frac{\theta d}{4a_1}\right)^2-\left(\frac{\theta d}{16a_1}\right)^2} +1\ge \frac{\sqrt{15}\theta d}{32a_1}+1\ge\frac{3\theta d}{32a_1}+1\\&&\qquad =2\left(\frac{3\theta d}{64a_1}+\frac12\right)=2\left(\frac{\theta d}{64a_1}+\frac{\theta d}{32a_1}+\frac12\right)\ge 2\left(\frac{\theta d}{64a_1}+1\right). \end{eqnarray*} {F}rom these observations we obtain that if~$x\in B_{(\theta d)/64}^+ \setminus U $ then~$|x|\le a_1R/2$, which proves~\eqref{skweogtry76t678tutoy4554yb76i78io896}. Hence, by~\eqref{I9FvOxCz} (used with~$r:=\theta d/64$), \eqref{skweogtry76t678tutoy4554yb76i78io896}, and \thref{KaMmndyO}, we have that \begin{align*} a_1^{-1} \bigg ( \frac R {2a_1} \bigg )^{-n} \int_{B_{a_1R/2}^+} x_1 w(x) \mathop{}\!d x &\geqslant a_1^{-n-1} R^{-n-2} \tau d^{-n-2} \bigg ( \bigg (1 - \frac \theta 2 \bigg )^{-n-2} -\frac 1 2 \bigg ) \int_{B_{ (\theta d)/64}^+(a) \setminus U} x_1 \zeta(x) \mathop{}\!d x \\ &\geqslant C \tau a_1 \theta^{-n-2} d^{-2n-4} \int_{B_{ (\theta d)/64}^+(a) \setminus U } x_1^2 \mathop{}\!d x \end{align*} for \(\theta\) sufficiently small.
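The elementary chain of inequalities for \(R\) above (valid precisely because \(a_1<\theta d/16\), i.e. \(\theta d/a_1>16\)) can be verified numerically. A minimal sketch, writing \(t:=\theta d/a_1\) (not part of the proof):

```python
import numpy as np

# t plays the role of theta*d/a_1; Case 1(ii) assumes a_1 < theta*d/16, i.e. t >= 16
t = np.linspace(16.0, 1e4, 200_001)
R = 0.5 * (np.sqrt((t / 4) ** 2 - 1) + 2)

# bounds stated in Case 1(ii): 2 < R < sqrt((t/4)^2 - 1),
# and the chain ending in R >= 2*(t/64 + 1) used for the inclusion above
assert np.all(R > 2)
assert np.all(R < np.sqrt((t / 4) ** 2 - 1))
assert np.all(R >= 2 * (t / 64 + 1))
```

The last bound is tight at \(t=16\) only up to the slack \(\sqrt{15}\geqslant 3\), matching the displayed computation.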
Thus, \begin{align*} \int_{B_{( \theta d)/64}^+(a) \setminus U } x_1^2 \mathop{}\!d x &\leqslant C (\theta d)^{n+2} \bigg ( \bigg (1- \frac \theta 2 \bigg )^{-n-2}-1 \bigg ) + \frac{C d^{n+2}}\tau +C( \theta d)^{n+2s+2}\\ &\leqslant C (\theta d)^{n+2} \bigg ( \bigg (1- \frac \theta 2 \bigg )^{-n-2}-1 +\theta^{2s} \bigg ) + \frac{C d^{n+2}}\tau \end{align*} using that \(d^{n+2s+2}<d^{n+2}\). Recalling formula~\eqref{sdwet68980poiuytrkjhgfdmnbvcx23456789} and taking~\(\theta\) sufficiently small, we obtain that \begin{align*} C(\theta d)^{n+2} \bigg ( \bigg (1- \frac \theta 2 \bigg )^{-n-2}-1 +\theta^{2s} \bigg ) \leqslant \frac 1 4 \int_{B_{( \theta d)/64}^+ (a)} x_1^2 \mathop{}\!d x , \end{align*} which concludes the proof in Case 1(ii). \emph{Case 2:} In this case, we can directly apply \thref{g9foAd2c} to~\eqref{ayFE7nQK}. When we do this, we find that \begin{align*}& \bigg ( \frac{\theta d }{64} \bigg )^{-n-2} \int_{B_{\theta d/64}^+(a)} x_1 w (x)\mathop{}\!d x \\ \leqslant&\, C \bigg ( \partial_1 w (0) + (r\theta)^{-n-2} + \tau \theta^{2s} d^{-n+2s-2} \bigg )\\ =&\, C\tau d^{-n-2} \bigg ( \bigg (1- \frac \theta 2 \bigg )^{-n-2}-1 \bigg )\partial_1\zeta(0) + C (r\theta)^{-n-2} + C\tau \theta^{2s} d^{-n+2s-2}. \end{align*} On the other hand,~\eqref{I9FvOxCz} is still valid in Case 2, so \begin{align*} (\theta d )^{-n-2} \int_{B_{\theta d/64}^+(a)} x_1 w (x)\mathop{}\!d x &\geqslant \tau \theta^{-n-2} d^{-2n-4} \bigg ( \bigg (1 - \frac \theta 2 \bigg )^{-n-2} -\frac 1 2 \bigg ) \int_{B_{\theta d/64}^+(a)\setminus U } x_1 \zeta (x) \mathop{}\!d x \\ &\geqslant C \tau \theta^{-n-2} d^{-2n-4} \int_{B_{\theta d/64}^+(a)\setminus U } x_1^2\mathop{}\!d x \end{align*} using \thref{KaMmndyO} and taking \(\theta\) sufficiently small.
Thus, \begin{align*} \int_{B_{\theta d/64}^+(a)\setminus U } x_1^2\mathop{}\!d x &\leqslant C (\theta d)^{n+2} \bigg ( \bigg (1- \frac \theta 2 \bigg )^{-n-2}-1 \bigg ) + \frac{ C d^{n+2}}\tau + C (\theta d)^{n+2s+2}\\ &\leqslant C (\theta d)^{n+2} \bigg ( \bigg (1- \frac \theta 2 \bigg )^{-n-2}-1 +\theta^{2s} \bigg ) + \frac{ C d^{n+2}}\tau \end{align*} using that \(d^{n+2s+2}<d^{n+2}\). Then, by~\eqref{sdwet68980poiuytrkjhgfdmnbvcx23456789}, we may choose \(\theta\) sufficiently small so that \begin{align*} C (\theta d)^{n+2} \bigg ( \bigg (1- \frac \theta 2 \bigg )^{-n-2}-1 +\theta^{2s} \bigg ) \leqslant \frac 1 4 \int_{B_{\theta d/64}^+(a)} x_1^2\mathop{}\!d x \end{align*} which concludes the proof in Case 2. The proof of \thref{EP5Elxbz} is thereby complete. \end{proof} \section{Proof of \protect\thref{Hvmln}} \label{lXIUl} In this short section, we give the proof of \thref{Hvmln}. This follows from Theorems~\ref{DYcYH} and~\ref{C35ZH} along with a standard covering argument which we include here for completeness. \begin{proof}[Proof of \thref{Hvmln}] Recall that~\(\Omega^+=\Omega \cap \mathbb{R}^n_+\) and let \(B_{2R}(y)\) be a ball such that \(B_{2R}(y)\Subset\Omega^+\). We will first prove that there exists a constant \(C=C(n,s,R,y)\) such that \begin{align} \sup_{B_R(y)} \frac{u(x)}{x_1} &\leqslant C \inf_{B_R(y)} \frac{u(x)}{x_1} . \label{c4oOr} \end{align}Indeed, if \(\tilde u (x) := u (y_1x+(0,y'))\) and~\(\tilde c (x):= y_1^{2s} c(y_1x+(0,y'))\) then \begin{align*} (-\Delta)^s \tilde u (x)+\tilde c \tilde u = 0, \qquad \text{in } B_{2R/y_1} (e_1). \end{align*} By \thref{DYcYH}, \begin{align*} \sup_{B_R(y)} \frac{u(x)}{x_1} \leqslant C \sup_{B_R(y)} u = C\sup_{B_{R/y_1}(e_1)} \tilde u & \leqslant C \inf_{B_{R/y_1}(e_1)} \tilde u =C\inf_{B_R(y)} u \leqslant C\inf_{B_R(y)} \frac{u(x)}{x_1} \end{align*} using that \(B_R(y) \Subset\mathbb{R}^n_+\).
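We record, for completeness, the standard scaling fact behind the equation satisfied by \(\tilde u\) above: if \(\tilde u(x):=u(\lambda x+z)\) with \(\lambda>0\) and \(z\in\mathbb{R}^n\), then
\begin{align*}
(-\Delta)^s\tilde u(x)=\lambda^{2s}\,\big((-\Delta)^s u\big)(\lambda x+z).
\end{align*}
Taking \(\lambda:=y_1\) and \(z:=(0,y')\), and noting that \(y_1e_1+(0,y')=y\), the ball \(B_{2R/y_1}(e_1)\) is mapped onto \(B_{2R}(y)\), which yields \((-\Delta)^s\tilde u+\tilde c\,\tilde u=0\) in \(B_{2R/y_1}(e_1)\).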
Next, let \(\{a^{(k)}\}_{k=1}^\infty,\{b^{(k)}\}_{k=1}^\infty \subset \tilde \Omega^+\) be such that \begin{align*} \frac{u(a^{(k)})}{a^{(k)}_1} \to \sup_{\tilde \Omega^+}\frac{u(x)}{x_1} \quad \text{and} \quad \frac{u(b^{(k)})}{b^{(k)}_1} \to \inf_{\tilde \Omega^+}\frac{u(x)}{x_1} \end{align*} as \(k\to \infty\). After possibly passing to a subsequence, there exist~\(a\), \(b\in \overline{\tilde \Omega^+}\) such that~\(a^{(k)}\to a\) and~\(b^{(k)}\to b\). Let \(\gamma \subset \tilde \Omega^+\) be a curve connecting \(a\) and \(b\). By the Heine-Borel Theorem, there exists a finite collection of balls \(\{B^{(k)}\}_{k=1}^M\) with centres in \(\tilde \Omega \cap \{x_1=0\}\) such that \begin{align*} \tilde \Omega \cap \{x_1=0\} \subset \bigcup_{k=1}^M B^{(k)} \Subset\Omega . \end{align*} Moreover, if \( \tilde \gamma := \gamma \setminus \bigcup_{k=1}^M B^{(k)}\) then there exists a further collection of balls \(\{B^{(k)}\}_{k=M+1}^N\) with centres in \(\tilde \gamma\) and radii equal to \( \frac 12 \operatorname{dist}(\tilde \gamma, \partial( \Omega^+))\) such that \begin{align*} \tilde \gamma \subset \bigcup_{k=M+1}^N B^{(k)} \Subset \Omega^+. \end{align*} By construction, \(\gamma\) is covered by \(\{B^{(k)}\}_{k=1}^N\). Thus, iteratively applying \thref{C35ZH} (after translating and rescaling) to each \(B^{(k)}\), \(k=1,\dots, M\), and~\eqref{c4oOr} to each \(B^{(k)}\), \(k=M+1,\dots, N\), we obtain the result. \end{proof} \appendix \section{A counterexample} \label{j4njb} In this appendix, we demonstrate that \thref{Hvmln} is in general false if we do not assume antisymmetry. We will do this by constructing a sequence of functions \(\{ u_k\}_{k=1}^\infty \subset C^\infty(B_1(2e_1)) \cap L^\infty(\mathbb{R}^n)\) such that \((-\Delta)^su_k=0\) in \(B_1(2e_1)\), \(u_k \geqslant 0\) in \(\mathbb{R}^n_+\), and \begin{align*} \frac{\sup_{B_{1/2}(2e_1)}u_k}{\inf_{B_{1/2}(2e_1)}u_k} \to + \infty \qquad \text{as } k\to \infty.
\end{align*} The proof will rely on the mean value property of \(s\)-harmonic functions. Suppose that \(M\geqslant 0\) and \(\zeta_1\), \(\zeta_2\) are smooth functions such that \(0\leqslant \zeta_1,\zeta_2\leqslant 1\) in \(\mathbb{R}^n \setminus B_1(2e_1)\), and \begin{align*} \zeta_1(x) = \begin{cases} 0 &\text{in } \mathbb{R}^n_+ \setminus B_1(2e_1),\\ 1 &\text{in } \mathbb{R}^n_- \setminus \{x_1>-1\} \end{cases} \qquad{\mbox{and}}\qquad \zeta_2(x) =\begin{cases} 0 &\text{in } \mathbb{R}^n_+ \setminus B_1(2e_1) ,\\ 1 &\text{in } B_{1/2}(-2e_1). \end{cases} \end{align*}Then let \(v\) and \(w_M\) be the solutions to \begin{align*} \begin{PDE} (-\Delta)^s v &= 0 &\text{in }B_1(2e_1), \\ v&=\zeta_1 &\text{in } \mathbb{R}^n\setminus B_1(2e_1) \end{PDE}\qquad {\mbox{and}}\qquad \begin{PDE} (-\Delta)^s w_M &= 0 &\text{in }B_1(2e_1), \\ w_M&=-M\zeta_2 &\text{in } \mathbb{R}^n\setminus B_1(2e_1), \end{PDE} \end{align*} respectively. Both \(v\) and \(w_M\) are in \( C^\infty(B_1(2e_1)) \cap L^\infty(\mathbb{R}^n)\) owing to standard regularity theory. We want to emphasise that \(w_M\) depends on the parameter \(M\) (as indicated by the subscript) but \(v\) does not. Define \(\tilde u_M:= v+w_M\). Since \(w_M \equiv 0 \) when \(M=0\) and, by the strong maximum principle, \(v>0\) in \(B_1(2e_1)\), we have that \begin{align*} \tilde u_0 >0 \qquad \text{in } B_1 (2e_1). \end{align*} Hence, we can define \begin{align*} \bar M := \sup \{ M \geqslant 0 \text{ s.t. } \tilde u_M >0 \text{ in } B_1 (2e_1)\}, \end{align*} though \(\bar M\) may possibly be infinite. We will show that \(\bar M\) is in fact finite and that \begin{align} \inf_{B_1(2e_1)} \tilde u_{\bar M} = 0. \label{fDMti} \end{align} Once these two facts have been established, we complete the proof as follows: set \(u_k := \tilde u_{\bar M - 1/k}\). By construction, \(u_k \geqslant 0\) in \(\mathbb{R}^n_+\) and \((-\Delta)^su_k=0\) in \(B_1(2e_1)\).
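Let us also record the monotonicity in \(M\) that is implicitly used here (a direct consequence of the maximum principle, included for completeness): if \(0\leqslant M_1\leqslant M_2\), then \(h:=\tilde u_{M_1}-\tilde u_{M_2}=w_{M_1}-w_{M_2}\) is \(s\)-harmonic in \(B_1(2e_1)\) with
\begin{align*}
h=(M_2-M_1)\zeta_2\geqslant0\qquad\text{in }\mathbb{R}^n\setminus B_1(2e_1),
\end{align*}
so \(\tilde u_{M_1}\geqslant\tilde u_{M_2}\) in \(\mathbb{R}^n\). In particular, the set in the definition of \(\bar M\) is an interval, and \(u_k=\tilde u_{\bar M-1/k}>0\) in \(B_1(2e_1)\) for every sufficiently large \(k\).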
Moreover, by the maximum principle, \(v\leqslant 1\) and \(w_M \leqslant 0\) in \(\mathbb{R}^n\), so \( u_k \leqslant 1\) in \(\mathbb{R}^n\). Hence, \begin{align*} \frac{\sup_{B_{1/2}(2e_1)}u_k}{\inf_{B_{1/2}(2e_1)}u_k} &\leqslant \frac 1 {\inf_{B_1(2e_1)}u_k} \to + \infty \end{align*} as \(k \to \infty\); indeed, the maximum principle gives \(u_k-\tilde u_{\bar M}=w_{\bar M-1/k}-w_{\bar M}\leqslant 1/k\) in \(\mathbb{R}^n\), which combined with~\eqref{fDMti} yields \(\inf_{B_1(2e_1)}u_k \leqslant 1/k \to 0\). Let us show that \(\bar M\) is finite. Since \(w_M\) is \(s\)-harmonic in \(B_1(2e_1)\), the mean value property for \(s\)-harmonic functions, see for example \cite[Section 15]{MR3916700}, tells us that\begin{align*} w_M(2e_1) &= -CM \int_{\mathbb{R}^n \setminus B_1(2e_1)} \frac{\zeta_2(y)}{(\vert y \vert^2 -1 )^s \vert y \vert^n} \mathop{}\!d y . \end{align*} Then, since \(\zeta_2\equiv 1\) in \(B_{1/2}(-2e_1)\) and \(\zeta_2\geqslant0\), \begin{align*} \int_{\mathbb{R}^n \setminus B_1(2e_1)} \frac{\zeta_2(y)}{(\vert y \vert^2 -1 )^s \vert y \vert^n} \mathop{}\!d y &\geqslant \int_{B_{1/2}(-2e_1)} \frac{\mathop{}\!d y}{(\vert y \vert^2 -1 )^s \vert y \vert^n}\geqslant C. \end{align*} It follows that \(w_M(2e_1) \leqslant -CM\), whence \begin{align*} \tilde u_M (2e_1) \leqslant 1-CM \leqslant 0 \end{align*} for \(M\) sufficiently large. This gives that \(\bar M\) is finite. Now we will show \eqref{fDMti}. For the sake of contradiction, suppose that there exists \(a>0\) such that \(\tilde u_{\bar M } \geqslant a \) in \(B_1(2e_1)\). Suppose that \(\varepsilon>0\) is small and let \(h^{(\varepsilon)} := \tilde u_{\bar M+\varepsilon}-\tilde u_{\bar M} = w_{\bar M+\varepsilon}-w_{\bar M}\). Then \(h^{(\varepsilon)}\) is \(s\)-harmonic in \(B_1(2e_1)\) and \begin{align*} h^{(\varepsilon)}(x) = - \varepsilon \zeta_2(x) \geqslant -\varepsilon \qquad \text{in } \mathbb{R}^n \setminus B_1(2e_1).
\end{align*}Thus, using the maximum principle we conclude that \(h^{(\varepsilon)} \geqslant -\varepsilon\) in \(B_1(2e_1)\), which in turn gives that \begin{align*} \tilde u_{\bar M+\varepsilon} =\tilde u_{\bar M} +h^{(\varepsilon)} \geqslant a-\varepsilon >0 \qquad \text{in } B_1(2e_1) \end{align*} for \(\varepsilon\) sufficiently small. This contradicts the definition of \(\bar M\). \section{Alternate proof of \protect\thref{C35ZH} when \(c\equiv0\)} \label{4CEly} In this appendix, we will provide an alternate elementary proof of \thref{C35ZH} in the particular case \(c\equiv0\) and \(u\in \mathscr L_s(\mathbb{R}^n)\). More precisely, we prove the following. \begin{thm} \thlabel{oCuv7Zs2} Let \(u\in C^{2s+\alpha}(B_1) \cap \mathscr L_s(\mathbb{R}^n) \) for some \(\alpha>0\) with \(2s+\alpha\) not an integer. Suppose that \(u\) is antisymmetric, non-negative in \(\mathbb{R}^n_+\), and \(s\)-harmonic in \(B_1^+\). Then there exists \(C>0\) depending only on \(n\) and \(s\) such that \begin{align*} \sup_{B_{1/2}^+} \frac{u(x)}{x_1 } &\leqslant C \inf_{B_{1/2}^+} \frac{u(x)}{x_1 }. \end{align*} Moreover, \(\inf_{B_{1/2}^+} \frac{u(x)}{x_1 }\) and \(\sup_{B_{1/2}^+} \frac{u(x)}{x_1 }\) are comparable to \(\Anorm{u}\). \end{thm} Except for the statement that \(\inf_{B_{1/2}^+} \frac{u(x)}{x_1 }\) and \(\sup_{B_{1/2}^+} \frac{u(x)}{x_1 }\) are comparable to \(\Anorm{u}\), Theorem~\ref{oCuv7Zs2} was proven in \cite{ciraolo2021symmetry}. Both the proof presented here and the proof in \cite{ciraolo2021symmetry} rely on the Poisson kernel representation for \(s\)-harmonic functions in a ball. Despite this, our proof is entirely different from the proof in \cite{ciraolo2021symmetry}. This was necessary to prove that~\(\inf_{B_{1/2}^+} \frac{u(x)}{x_1 }\) and \(\sup_{B_{1/2}^+} \frac{u(x)}{x_1 }\) are comparable to \(\Anorm{u}\), which does not readily follow from the proof in \cite{ciraolo2021symmetry}.
Our proof of Theorem~\ref{oCuv7Zs2} is a consequence of a new mean-value formula for antisymmetric \(s\)-harmonic functions (Proposition~\ref{cqGgE}) which we believe to be interesting in and of itself. We first prove an alternate expression of the Poisson kernel representation formula for antisymmetric functions. \begin{lem} \thlabel{Gku6y} Let \(u\in C^{2s+\alpha}(B_1) \cap \mathscr L_s (\mathbb{R}^n)\) and \(r\in(0, 1] \). Suppose that \(u\) is antisymmetric and that \((-\Delta)^s u = 0 \) in~\(B_1\). Then \begin{align} u(x) & = \gamma_{n,s} \int_{\mathbb{R}^n_+ \setminus B_r^+} \bigg ( \frac{r^2- \vert x \vert^2 }{\vert y \vert^2 -r^2 } \bigg )^s \bigg ( \frac 1 {\vert x - y \vert^n }-\frac 1 {\vert x_\ast - y \vert^n } \bigg ) u(y) \mathop{}\!d y \label{BPyAO} \end{align} for all \(x\in B_r^+\). Here~$\gamma_{n,s}$ is given in~\eqref{sry6yagamma098765}. \end{lem} We remark that since \(y \mapsto \frac 1 {\vert x - y \vert^n }-\frac 1 {\vert x_\ast - y \vert^n } \) is antisymmetric for each \(x\in \mathbb{R}^n_+\), we can rewrite~\eqref{BPyAO} as \begin{align} \label{ZrGnZFNg} u(x) & = \frac 1 2 \gamma_{n,s} \int_{\mathbb{R}^n \setminus B_r} \bigg ( \frac{r^2- \vert x \vert^2 }{\vert y \vert^2 -r^2 } \bigg )^s \bigg ( \frac 1 {\vert x - y \vert^n }-\frac 1 {\vert x_\ast - y \vert^n } \bigg ) u(y) \mathop{}\!d y. \end{align} \begin{proof}[Proof of Lemma~\ref{Gku6y}] The Poisson representation formula \cite{aintegrales} gives\begin{align} u(x) & =\gamma_{n,s} \int_{\mathbb{R}^n \setminus B_r} \bigg ( \frac{r^2- \vert x \vert^2 }{\vert y \vert^2 -r^2 } \bigg )^s \frac {u(y)} {\vert x - y \vert^n }\mathop{}\!d y \label{vDlwi} \end{align} where \(\gamma_{n,s}= \frac{\sin(\pi s)\Gamma (n/2)}{\pi^{\frac n 2 +1 }}\); see also \cite[p.112, p.122]{MR0350027} and \cite[Section 15]{MR3916700}.
Splitting the integral in \eqref{vDlwi} into two separate integrals over \(\mathbb{R}^n_+ \setminus B_r^+\) and \(\mathbb{R}^n_- \setminus B_r^-\) respectively, then making the change of variables \(y \to y_\ast\) in the integral over \(\mathbb{R}^n_- \setminus B_r^-\) and using that \(u\) is antisymmetric, we obtain \begin{align*} u(x) & =\gamma_{n,s} \int_{\mathbb{R}^n_+ \setminus B_r^+} \bigg ( \frac{r^2- \vert x \vert^2 }{\vert y \vert^2 -r^2 } \bigg )^s \bigg ( \frac 1 {\vert x - y \vert^n }-\frac 1 {\vert x_\ast - y \vert^n } \bigg ) u(y) \mathop{}\!d y . \qedhere \end{align*} \end{proof} Now that we have proven the Poisson kernel formula for antisymmetric functions in Lemma~\ref{Gku6y}, we can establish Proposition~\ref{cqGgE}. \begin{proof}[Proof of Proposition~\ref{cqGgE}] Let \(h\in(0,1)\). It follows from Lemma~\ref{Gku6y} that\begin{align} \frac{u(he_1)} h & = \frac 1 {h} \gamma_{n,s} \int_{\mathbb{R}^n_+ \setminus B_r^+} \bigg ( \frac{r^2- h^2 }{\vert y \vert^2 -r^2 } \bigg )^s \bigg ( \frac 1 {\vert he_1 - y \vert^n }-\frac 1 {\vert he_1+ y \vert^n } \bigg ) u(y) \mathop{}\!d y \label{CNZLU} \end{align} for all \(r>h\). Taking a Taylor series in \(h\) about \(0\), we have the pointwise limit \begin{align*} \lim_{h \to 0 } \frac 1 h \bigg ( \frac 1 {\vert he_1 - y \vert^n }-\frac 1 {\vert he_1+ y \vert^n } \bigg ) &= \frac{2ny_1 }{\vert y \vert^{n+2}}. \end{align*} Moreover, by a similar argument to~\eqref{LxZU6},\begin{align*} \bigg \vert \frac 1 {(\vert y \vert^2 -r^2)^s } \bigg ( \frac 1 {\vert he_1 - y \vert^n }-\frac 1 {\vert he_1 + y \vert^n } \bigg )\bigg \vert &\leqslant \frac{2n h \vert y_1 \vert }{(\vert y \vert^2 -r^2)^s\vert he_1 - y \vert^{n+2}} \in L^1(\mathbb{R}^n \setminus B_r). \end{align*} Thus, we obtain the result by taking the limit \(h\to 0\) in \eqref{CNZLU} and applying the Dominated Convergence Theorem to justify swapping the limit and the integral on the right-hand side.
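For completeness, we sketch the elementary expansion behind the pointwise limit used above: for fixed \(y\neq0\) and \(h\to0\),
\begin{align*}
\frac1{\vert he_1-y\vert^{n}}=\frac1{\vert y\vert^{n}}\Big(1-\frac{2hy_1}{\vert y\vert^2}+\frac{h^2}{\vert y\vert^2}\Big)^{-n/2}=\frac1{\vert y\vert^{n}}\Big(1+\frac{nhy_1}{\vert y\vert^2}+O(h^2)\Big),
\end{align*}
and similarly \(\vert he_1+y\vert^{-n}=\vert y\vert^{-n}\big(1-nhy_1/\vert y\vert^2+O(h^2)\big)\), so that
\begin{align*}
\frac1h\bigg(\frac1{\vert he_1-y\vert^{n}}-\frac1{\vert he_1+y\vert^{n}}\bigg)=\frac{2ny_1}{\vert y\vert^{n+2}}+O(h).
\end{align*}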
\end{proof} {F}rom Proposition~\ref{cqGgE}, we obtain the following corollary. \begin{cor} \thlabel{OUqJk} Let \(u\in C^{2s+\alpha}(B_1) \cap \mathscr L_s(\mathbb{R}^n)\) with \(\alpha>0\) and \(2s+\alpha\) not an integer. Suppose that \(u\) is antisymmetric and \((-\Delta)^s u = 0 \) in \(B_1\). Then there exists a radially symmetric function \(\psi_s \in C(\mathbb{R}^n) \) satisfying \begin{align} \frac{C^{-1}}{1+\vert y \vert^{n+2s+2}} \leqslant \psi_s (y) \leqslant \frac{C}{1+\vert y \vert^{n+2s+2}} \qquad \text{for all } y \in \mathbb{R}^n, \label{nhvr2} \end{align} for some~$C>1$, such that \begin{align*} \frac{\partial u}{\partial x_1} (0) &= \int_{\mathbb{R}^n} y_1\psi_s (y) u(y) \mathop{}\!d y. \end{align*} In particular, if \(u\) is non-negative in \(\mathbb{R}^n_+\) then \[C^{-1} \Anorm{u} \leqslant \frac{\partial u}{\partial x_1} (0) \leqslant C \Anorm{u}.\] \end{cor} \begin{proof} Let \begin{align*} \psi_s (y) &:=n (n+2)\gamma_{n,s} \int_0^{\min \{ 1/\vert y \vert,1\}} \frac{r^{2s+n+1}}{(1 - r^2)^s}\mathop{}\!d r. \end{align*} It is clear that \(\psi_s\in C(\mathbb{R}^n)\) and that there exists \(C>1\) such that~\eqref{nhvr2} holds. Since \(u\in \mathscr L_s(\mathbb{R}^n)\), we have that~\( \Anorm{u} <+\infty\), so it follows that \begin{align} \bigg \vert \int_{\mathbb{R}^n}y_1 \psi_s (y) u(y) \mathop{}\!d y \bigg \vert \leqslant C \Anorm{u} < +\infty.
\label{9B6Vm} \end{align} If we multiply \(\partial_1 u (0)\) by \(r^{n+1}\) and then integrate from 0 to 1, Proposition~\ref{cqGgE} and~\eqref{ZrGnZFNg} give that \begin{align} \frac 1 {n+2} \frac{\partial u}{\partial x_1} (0) = \int_0^1 r^{n+1} \frac{\partial u}{\partial x_1} (0) \mathop{}\!d r = n \gamma_{n,s}\int_0^1 \int_{\mathbb{R}^n \setminus B_r} \frac{r^{2s+n+1}y_1 u(y)}{(\vert y \vert^2 - r^2)^s\vert y \vert^{n+2}} \mathop{}\!d y \mathop{}\!d r .\label{YYJ5o} \end{align} At this point, let us observe that if we formally swap the integrals in~\eqref{YYJ5o} and then make the change of variables \(r=\vert y \vert \tilde r\), we obtain \begin{align} &n \gamma_{n,s} \int_{\mathbb{R}^n \setminus B_1} \int_0^1 \frac{r^{2s+n+1}y_1 u(y)}{(\vert y \vert^2 - r^2)^s\vert y \vert^{n+2}}\mathop{}\!d r \mathop{}\!d y + n \gamma_{n,s}\int_{B_1} \int_0^{\vert y \vert} \frac{r^{2s+n+1}y_1 u(y)}{(\vert y \vert^2 - r^2)^s\vert y \vert^{n+2}}\mathop{}\!d r \mathop{}\!d y \nonumber\\ &= n \gamma_{n,s} \int_{\mathbb{R}^n \setminus B_1} \bigg ( \int_0^{1/\vert y \vert} \frac{\tilde r^{2s+n+1}}{(1 -\tilde r^2)^s}\mathop{}\!d \tilde r \bigg ) y_1 u(y)\mathop{}\!d y + n \gamma_{n,s}\int_{B_1}\bigg ( \int_0^1 \frac{\tilde r^{2s+n+1}}{(1 - \tilde r^2)^s}\mathop{}\!d \tilde r\bigg )y_1 u(y) \mathop{}\!d y \nonumber \\ &= \frac 1 {n+2} \int_{\mathbb{R}^n}y_1 \psi_s(y) u(y) \mathop{}\!d y . \label{LeVoi} \end{align} By~\eqref{9B6Vm}, the quantity in~\eqref{LeVoi} is finite. Hence, Fubini's theorem justifies changing the order of integration in \eqref{YYJ5o}, so the right-hand side of~\eqref{YYJ5o} is equal to~\eqref{LeVoi}, which proves the result. \end{proof} At this point, we can give the proof of Theorem~\ref{oCuv7Zs2}. \begin{proof}[Proof of Theorem~\ref{oCuv7Zs2}] We will begin by proving that \begin{align} u(x) \geqslant C x_1 \Anorm{u} \qquad \text{for all } x\in B_{1/2}^+.
\label{p6oBI} \end{align} To this end, we observe that since \(\vert x_\ast - y \vert^{n+2} \leqslant C \vert y \vert^{n+2} \) for all \(x \in B_1\) and~\(y\in \mathbb{R}^n\setminus B_1\),~\eqref{buKHzlE6} gives that \begin{align*} \frac 1 {\vert x - y \vert^n }-\frac 1 {\vert x_\ast - y \vert^n } \geqslant C \frac{ x_1 y_1}{\vert y \vert^{n+2}} \qquad \text{for all } x\in B_1^+ {\mbox{ and }} y\in \mathbb{R}^n_+ \setminus B_1^+. \end{align*}Hence, by Lemma~\ref{Gku6y}, for all~$x\in B_{1/2}^+$, \begin{align*} u(x) &= C\int_{\mathbb{R}^n \setminus B_1} \bigg ( \frac{1- \vert x \vert^2 }{\vert y \vert^2 -1 } \bigg )^s \bigg ( \frac 1 {\vert x - y \vert^n }-\frac 1 {\vert x_\ast - y \vert^n } \bigg ) u(y) \mathop{}\!d y \\ &\geqslant Cx_1 \int_{\mathbb{R}^n \setminus B_1} \frac{ y_1 u(y)}{\big ( \vert y \vert^2 -1 \big )^s\vert y \vert^{n+2}} \mathop{}\!d y \end{align*} where we used that \((-y_1)u(y_\ast)=y_1u(y)\) and that \(u\geqslant 0\) in \(\mathbb{R}^n_+\). Then Proposition~\ref{cqGgE} with \(r=1\) gives that \begin{align*} u(x) &\geqslant Cx_1 \frac{\partial u}{\partial x_1}(0) \qquad \text{for all } x\in B_{1/2}^+. \end{align*} Finally, Corollary~\ref{OUqJk} gives~\eqref{p6oBI}. Next, we will prove that \begin{align} u(x) \leqslant C x_1 \Anorm{u} \qquad \text{for all } x\in B_{1/2}^+. \label{3abqf} \end{align} Similarly to above, for all \(x\in B_{1/2}^+\) and~\(y\in \mathbb{R}^n_+ \setminus B_1^+\), we have that~\(\vert x- y \vert \geqslant \frac 1 2 \vert y \vert\), so~\eqref{LxZU6} gives that\begin{align*} \frac 1 {\vert x - y \vert^n }-\frac 1 {\vert x_\ast - y \vert^n } \leqslant \frac{C x_1 y_1}{\vert y \vert^{n+2}} \qquad \text{for all } x\in B_{1/2}^+ {\mbox{ and }} y\in \mathbb{R}^n_+ \setminus B_1^+.
\end{align*} As before, using Lemma~\ref{Gku6y}, we have that, for all~$x\in B_{1/2}^+$,\begin{align*} u(x) &= C\int_{\mathbb{R}^n \setminus B_1} \bigg ( \frac{1- \vert x \vert^2 }{\vert y \vert^2 -1 } \bigg )^s \bigg ( \frac 1 {\vert x - y \vert^n }-\frac 1 {\vert x_\ast - y \vert^n } \bigg ) u(y) \mathop{}\!d y \\ &\leqslant C x_1 \int_{\mathbb{R}^n \setminus B_1} \frac{ y_1 u(y)}{\big ( \vert y \vert^2 -1 \big )^s\vert y \vert^{n+2}} \mathop{}\!d y. \end{align*}Then Proposition~\ref{cqGgE} and Corollary~\ref{OUqJk} give that \begin{align*} u(x) &\leqslant Cx_1 \frac{\partial u}{\partial x_1}(0) \leqslant C x_1 \Anorm{u} \qquad \text{for all } x\in B_{1/2}^+, \end{align*} which is~\eqref{3abqf}. {F}rom~\eqref{p6oBI} and~\eqref{3abqf} the result follows easily. \end{proof} \section{A proof of~\eqref{LA:PAKSM} when~$c\equiv0$ that relies on extension methods}\label{APPEEXT:1} We consider the extended variables~$X:=(x,y)\in\mathbb{R}^n\times\mathbb{R}$. Then, a solution~$u$ of~$(-\Delta)^su=0$ in~$\Omega^+$ can be seen as the trace along~$\Omega^+\times\{0\}$ of its $a$-harmonic extension~$U=U(x,y)$ satisfying $$ {\rm div}_{\!X}\big(|y|^a\nabla_{\!X} U\big)=0\quad{\mbox{ in }}\,\mathbb{R}^{n+1},$$ where~$a:=1-2s$, see Lemma~4.1 in~\cite{MR2354493}. We observe that the function~$V(x,y):=x_1$ is also a solution of the above equation. Also, if~$u$ is antisymmetric, then so is~$U$, and consequently~$U=V=0$ on~$\{x_1=0\}$. As a result, by the boundary Harnack inequality (see~\cite{MR730093}), \begin{equation}\label{LA:PAKSM:2} \sup_{\tilde \Omega^+\times(0,1)} \frac{U}{V} \leqslant C \inf_{ \tilde \Omega^+\times(0,1)} \frac{U}{V} .
\end{equation} In addition, $$ \sup_{\tilde \Omega^+\times(0,1)} \frac{U}{V}\ge \sup_{\tilde \Omega^+\times\{0\}} \frac{U}{V} =\sup_{\tilde \Omega^+} \frac{u(x)}{x_1}$$ and similarly $$ \inf_{\tilde \Omega^+\times(0,1)} \frac{U}{V}\le\inf_{\tilde \Omega^+} \frac{u(x)}{x_1}.$$ {F}rom these observations and~\eqref{LA:PAKSM:2} we obtain~\eqref{LA:PAKSM} in this case. \end{document}
\begin{document} \title{Local-in-time existence of free-surface 3D Euler flow with $H^{2+\delta}$ initial vorticity in a neighborhood of the free boundary} \author{I. Kukavica, W. S. O\.za\'nski} \date{ } \maketitle \blfootnote{I.~Kukavica: Department of Mathematics, University of Southern California, Los Angeles, CA 90089, USA, email: [email protected]\\ W.~S.~O\.za\'nski: Department of Mathematics, University of Southern California, Los Angeles, CA 90089, USA, email: [email protected]\\ I.~Kukavica was supported in part by the NSF grant DMS-1907992. W.~S.~O\.za\'nski was supported in part by the Simons Foundation. } \begin{abstract} We consider the three-dimensional Euler equations in a domain with a free boundary with no surface tension. We assume that $u_0 \in H^{2.5+\delta }$ is such that $\mathrm{curl}\,u_0 \in H^{2+\delta }$ in an arbitrarily small neighborhood of the free boundary, and we use the Lagrangian approach to derive an a~priori estimate that can be used to prove local-in-time existence and uniqueness of solutions under the Rayleigh-Taylor stability condition. \end{abstract} \section{Introduction} In this note we address the local existence of solutions to the free-surface Euler equations \begin{equation}\label{euler} \begin{split} \partial_t u + (u\cdot \nabla ) u +\nabla p &=0 \\ \mathrm{div}\, u &= 0 \end{split} \end{equation} in $\Omega(t)$, where $\Omega (t)$ is the region that is periodic in $x_1,x_2$, and $x_3$ lies between $\Gamma_0 := \{ x_3=0 \}$ and a free surface $\Gamma_1 (t)$ such that $\Gamma_1 (0):= \{ x_3=1 \}$. The Euler equations \eqref{euler} are supplemented with an initial condition $u(0)=u_0$.
We consider the impermeable boundary condition on the fixed bottom boundary $\Gamma_0$ and no surface tension on the free boundary, that is, \begin{equation}\label{bcs} \begin{split} u_3&=0\qquad \text{ on }\Gamma_0,\\ p&=0 \qquad \text{ on }\Gamma_1. \end{split} \end{equation} Furthermore, the initial pressure is assumed to satisfy the Rayleigh-Taylor condition \begin{equation}\label{rayleigh_taylor} \partial_{x_3} p (x,0) \leq -b <0 \qquad \text{ for } x\in \Gamma_1 (0), \end{equation} where $b>0$ is a constant. The problem of local existence of solutions to the free boundary Euler equations was initially considered in \cite{Sh,Shn,N,Y1,Y2} under the assumption of irrotationality of the initial data, i.e., with $\mathrm{curl}\,u_0=0$. The local existence of solutions in this case with $u_0$ from a Sobolev space was established by Wu in \cite{W1,W2}. Ebin showed in \cite{E} that for general data in a Sobolev space, the problem is unstable without assuming the Rayleigh-Taylor sign condition \eqref{rayleigh_taylor}, which in essence requires that next to the free boundary the pressure in the fluid is higher than the pressure of air. With the Rayleigh-Taylor condition, the a~priori bounds for the existence of solutions for initial data in a Sobolev space were provided in \cite{ChL}. New energy estimates in the Lagrangian coordinates, along with the construction of solutions, were provided in~\cite{CS1} (see also \cite{CS2}). We refer the reader to \cite{ABZ1,B,CL,DE1,DE2,DK,KT1,KT2,KT3,S,T,ZZ} for other works concerned with local or global existence results for the Euler equations with or without surface tension, and to \cite{ABZ2,AD,AM1,AM2,EL,GMS,HIT,I,IT,IK,IP,KT3,L,Lin1,Lin2,MR,OT,P,W3} for other related works on the Euler equations with a free-evolving boundary.
In this paper, we are concerned with the problem of local existence of solutions, and we aim to impose minimal regularity assumptions on the initial data $u_0$ that give local-in-time existence of solutions in the Lagrangian setting of the problem. It is well-known that the threshold of regularity for the Euler equations in ${\mathbb R}^{n}$ (\cite{KP}) or a fixed domain is $u_0\in H^{2.5+\delta}$, where $\delta>0$ is arbitrarily small. In the case of the domain with a free boundary, this has recently been proven in \cite{WZZZ} in Eulerian coordinates (cf.~\cite{SZ1,SZ2} for the local existence in $H^{3}$). However, the same result in the Lagrangian coordinates is not known. One of the main difficulties is the fact that it is not clear whether the assumption $f,g\in H^{2.5+\delta } $ implies that the same is true of $f\circ g $ (the composition of $f$ and $g$) when $\delta$ is not an integer. The main result of this paper is to obtain a~priori estimates for the local existence in $H^{2.5+\delta}$ with an additional regularity assumption on the initial vorticity $\omega_0:= \mathrm{curl}\, u_0$ in an arbitrarily small neighborhood of the free boundary. We note that a related result was obtained in \cite{KTVW} in the 2D case, but the coordinates used in \cite{KTVW} are not Lagrangian; they are in a sense a concatenation of Eulerian and Lagrangian variables. Moreover, the proof in~\cite{KTVW} uses in an essential way the preservation of Lagrangian vorticity, which is a property that does not hold in the 3D case. Another paper \cite{KT4} considers the problem in ALE coordinates (cf.~\cite{MC} for the ALE coordinates in the fluid-structure interaction problem), but the proof requires an additional assumption on the initial data, due to additional regularity required by the Lagrangian variable on the boundary; for instance, the present paper shows that the conditions of Theorem~\ref{thm_main} (see below) suffice.
The main feature of the present paper is that the particle map preserves additional ($H^{3+\delta}$) regularity for a short time in a small neighborhood of the boundary, a feature not available in the Eulerian approach. The proof of our main result, which we state in Theorem~\ref{thm_main} below, after a brief introduction of the Lagrangian coordinates, is direct and short. It is inspired by the earlier works \cite{KT2,KTV} and previously by \cite{ChL,CS1,CS2}. Another new idea of our approach is a localization of the tangential estimates (i.e., estimates concerned with differential operators with respect to $x_1$, $x_2$ only) which, due to the additional regularity of the particle map, can be performed in a domain close to the boundary. Moreover, we formulate our a~priori estimates in terms of three quantities (see \eqref{big_3}) that control the system, and we make use of the fact that all our estimates, except for the tangential estimates, are not of evolution type; namely, they are static. We note that all the results in the paper also hold in the 2D case with all the Sobolev exponents reduced by~$1/2$. In the next section we introduce our notation regarding the Euler equations in the Lagrangian coordinates, state the main result, and recall some known facts and inequalities. The proof of Theorem~\ref{thm_main} is presented in Section~\ref{sec_proof_main_result}, where we first give a heuristic argument about our strategy. The proof is then divided into several parts: The div-curl-type estimates are presented in Section~\ref{sec_div_curl}, the pressure estimates are discussed in Sections~\ref{sec_pressure}--\ref{sec_pressure_time_deriv}, and the tangential estimates are presented in Section~\ref{sec_tangential}. The proof of the theorem is concluded with the final estimates in Section~\ref{sec_final}.
\section{Preliminaries}\label{sec_prelims} \subsection{Lagrangian setting of the Euler equations and the main theorem} We use the summation convention of repeated indices. We denote the time derivative by $\partial_t$, and a derivative with respect to $x_j$ by $\partial_j$. We denote by $v(x,t)= (v_1,v_2,v_3)$ the velocity field in Lagrangian coordinates and by $q(x,t)$ the pressure function. The Euler equations then become \begin{equation}\label{euler_lagr} \begin{split} \partial_t v_i &= - a_{ki} \partial_k q,\qquad i=1,2,3,\\ a_{ik} \partial_i v_k &=0 \end{split} \end{equation} in $\Omega \times (0,T)$, where $\Omega := \Omega (0) = \mathbb{T}^2 \times (0,1)$, and $a_{ik}$ denotes the $(i,k)$-th entry of the matrix $a=(\nabla \eta )^{-1}$. Here $\eta $ stands for the particle map, i.e., the solution of the system \begin{equation}\label{eta_def} \begin{split} \partial_t \eta (x,t) &= v(x,t) \\ \eta (x,0) &= x \end{split} \end{equation} in $\Omega \times [0,T)$. (Note that the Lagrangian variable is denoted by $x$.) Due to the incompressibility condition in \eqref{euler_lagr}, we have that $\mathrm{det}\, \nabla \eta =1$ for all times, which shows that $a$ is the corresponding cofactor matrix. Therefore, \begin{equation}\label{a_exact_form} a_{ij} =\frac12 \epsilon_{imn} \epsilon_{jkl} \partial_{m}\eta_k\partial_{n}\eta_l , \end{equation} where $\epsilon_{ijk}$ denotes the permutation symbol.
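We note in passing a standard consequence of the cofactor representation \eqref{a_exact_form} (the Piola identity), recorded here for the reader's convenience since it is useful when integrating by parts in the Lagrangian variables:
\begin{equation*}
\partial_i a_{ij}=\frac12\,\epsilon_{imn}\epsilon_{jkl}\big(\partial_i\partial_m\eta_k\,\partial_n\eta_l+\partial_m\eta_k\,\partial_i\partial_n\eta_l\big)=0,
\end{equation*}
since $\epsilon_{imn}$ is antisymmetric in $(i,m)$ and in $(i,n)$, while the second derivatives of $\eta$ are symmetric in those pairs of indices.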
As for the boundary conditions \eqref{bcs}, in Lagrangian coordinates the impermeable condition for $u$ at the stationary bottom boundary $\Gamma_0$ becomes \begin{equation}\label{noslip_at_bottom} v_3 =0 \qquad \text{ on }\Gamma_0, \end{equation} while the zero surface tension condition at the top boundary $\Gamma_1 := \Gamma_1(0) = \{ x_3 =1 \}$ reads \begin{equation}\label{no_q_ontop} q=0 \qquad \text{ on } \Gamma_1 \times (0,T). \end{equation} Note that, in Lagrangian coordinates, $\Gamma_1$ does not depend on time. Let us consider a localization of $u_0$ given by $\chi u_0$, where $\chi \equiv \chi (x_3) \in C^\infty (\mathbb{R} ; [0,1])$ is such that $\chi (x_3) =1$ in a neighborhood of $\Gamma_1 (0)$ and $\chi( x_3 ) =0$ outside of a larger neighborhood. The following is our main result establishing a~priori estimates for the local existence of solutions of the free boundary Euler equations in the Lagrangian formulation. \begin{thm}\label{thm_main} Let $\delta >0$. Assume that $(v,q,a,\eta)$ is a $C^{\infty}$ solution of the Euler system in the Lagrangian setting \eqref{euler_lagr}--\eqref{no_q_ontop}, and assume that $u_0$ satisfies the Rayleigh-Taylor condition \eqref{rayleigh_taylor}. Then there exists a time $T>0$ depending on $\Vert v_0\Vert_{2.5+\delta}$ and $\Vert \chi (\mathrm{curl}\, u_0 ) \Vert_{2+\delta}$ such that the norms $\sup_t \Vert v\Vert_{2.5+\delta}$, $\sup_t \Vert q\Vert_{2.5+\delta}$, $\sup_t \Vert q \chi\Vert_{3+\delta}$, and $\sup_t \Vert \chi\eta\Vert_{3+\delta}$ on $[0,T]$ are bounded from above by a constant depending only on $\Vert v_0\Vert_{2.5+\delta}$ and $\Vert \chi (\mathrm{curl}\, u_0 ) \Vert_{2+\delta}$. \end{thm} The rest of the paper is devoted to the proof of this theorem.
\subsection{Product and commutator estimates} We use the standard notions of Lebesgue spaces, $L^p$, and Sobolev spaces, $W^{k,p}$, $H^s$, and we reserve the notation $\| \cdot \|_s := \| \cdot \|_{H^s (\Omega )}$ for the $H^s$ norm. We recall the multiplicative Sobolev inequality \begin{equation}\label{kpv_est} \| fg \|_{s} \lesssim \| f \|_{{s }} \| g \|_{{1.5+\delta }}, \qquad s\in [0,1.5+\delta ]. \end{equation} We shall also use the commutator estimates \begin{equation}\label{kpv1} \| J(fg) - f Jg \|_{L^2} \lesssim \| f \|_{W^{s,p_1}} \| g \|_{L^{p_2}} + \| f \|_{W^{1,q_1}} \| g \|_{W^{s-1,q_2}} \end{equation} for $s\geq 1$ and $\frac{1}{p_1}+\frac1{p_2}= \frac{1}{q_1}+\frac1{q_2}= \frac12$, and \begin{equation}\label{kpv3} \| J(fg) - f Jg - g Jf \|_{L^p} \lesssim \| f \|_{W^{1,p_1}} \| g \|_{W^{s-1,p_2}}+ \| f \|_{W^{s-1,q_1}} \| g \|_{W^{1,q_2}} \end{equation} for $s\geq 1 $, $\frac{1}{p_1}+\frac1{p_2}=\frac{1}{q_1}+\frac1{q_2}= \frac1p$, $p\in (1,p_1)$, and $p_2,q_1,q_2<\infty$, where $J$ is a nonhomogeneous differential operator in $(x_1,x_2)$ of order $s\geq 0$. We refer the reader to \cite{KP,Li} for the proofs. We set \[ \Lambda := (1-\Delta_2 )^{\frac12} \] and \[ S:= \Lambda^{\frac52 + {\delta}}, \] where $\Delta_2$ denotes the Laplacian in $(x_1,x_2)$. \subsection{Properties of the particle map $\eta$ and the cofactor matrix~$a$} Note that applying the product estimate \eqref{kpv_est} to the representation formula \eqref{a_exact_form} for $a$, we get \begin{equation}\label{a_in_H1.5} \| a \|_{{1.5+\delta }} \lesssim \| \eta \|_{{2.5+\delta }}^2. \end{equation} Moreover, by writing $\chi \nabla \eta = \nabla (\chi \eta) - \eta\nabla \chi $, where $\chi$ is as above, we obtain \begin{equation}\label{achi_in_H2} \| \chi^2 a \|_{{2+\delta }} \lesssim \| \chi \eta \|_{{3+\delta }}^2+\| \eta \|_{{2.5+\delta }}^2 . \end{equation} Also, we have \begin{equation}\label{at_in_H1.5} \| a_t \|_{{r}} \lesssim \| \nabla v \|_{{r}} \qquad \text{ for }r\in [0,1.5+\delta ] \end{equation} and $ \| \partial_{tt} a \|_{r} \lesssim \| \nabla v \|_{1.5+\delta } \| \nabla v \|_r + \| \nabla \partial_t v \|_r $, whence \begin{equation}\label{att_in_Hr} \| \partial_{tt} a \|_{r} \lesssim \| \nabla v \|_{1.5+\delta } \| \nabla v \|_r + \| a \|_{1.5+\delta } \| q \|_{2+r } . \end{equation} Finally, we recall the Bourguignon-Brezis inequality \begin{equation}\label{harmonic_est} \| f \|_{s} \lesssim \| f \|_{L^2 } + \| \mathrm{curl}\, f \|_{{s-1 }} + \| \mathrm{div}\, f \|_{{s-1 }} + \| \nabla_2 f_3 \|_{H^{s-0.5} (\partial \Omega )}, \end{equation} cf.~\cite{BB}. \section{Proof of the main result}\label{sec_proof_main_result} The main idea of the proof of Theorem~\ref{thm_main} is to simplify the estimates introduced in \cite{KT2} and \cite{KTV}, localize the analysis to an arbitrarily small region near the free boundary $\Gamma_1$, and show that all important quantities can be controlled by \begin{equation}\label{big_3} \| \eta \|_{2.5+\delta }, \| \chi \eta \|_{3+\delta }, \| v \|_{2.5+\delta } . \end{equation} First we employ the div-curl estimates to estimate each of the key quantities \eqref{big_3} at time $t$ using a time integral, from $0$ until $t$, of a polynomial expression involving the same quantities, as well as terms involving the initial data and two other terms.
The two terms involve derivatives of $\eta$ and $v$ only in the variables $x_1$, $x_2$, and only very close to the free boundary $\Gamma_1$, namely $\| S \eta_3 \|_{L^2 (\Gamma_1)}$ and $\| S (\psi v ) \|_{L^2}$, where $\psi$ is a cutoff supported in a neighborhood of $\Gamma_1$ with $\operatorname{supp} \psi\subset \{\chi=1\}$; see \eqref{divcurl2}--\eqref{divcurl1} below for details. The cutoff $\psi$ is introduced at the beginning of Section~\ref{sec_div_curl}. These two terms, however, can also be controlled by a time integral of a polynomial expression involving \eqref{big_3} only, which we show in Section~\ref{sec_tangential}, after a brief discussion of some estimates, at each fixed time, of the pressure function and its time derivative in Sections \ref{sec_pressure}--\ref{sec_pressure_time_deriv}. Finally, Section~\ref{sec_final} combines the div-curl estimates with the tangential estimates to give an a~priori bound that enables local-in-time existence and uniqueness. Before proceeding to the proof, we note that, by \eqref{eta_def}, the particle map $\eta$ satisfies \begin{equation}\label{na_eta_-_I} \nabla \eta - I = \int_0^t \nabla v \,{\rm d} s, \end{equation} where, for brevity, we have omitted the time argument $t$ on the left-hand side. We continue this convention throughout. Moreover, observing that $a(0)=I$, where $I$ denotes the three-dimensional identity matrix, we see from \eqref{a_exact_form} that \begin{equation} a-I = \int_0^t \partial_t a \,{\rm d} s = \int_0^t \nabla \eta \nabla v \,{\rm d} s. \label{EQ01} \end{equation} Here and below, we use the convention of omitting various indices when only the product structure matters; for instance, the expression on the far right side stands for $\epsilon_{imn} \epsilon_{jkl} \int_{0}^{t} \partial_{m}v_k\partial_{n}\eta_l \,{\rm d} s$.
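For the reader's convenience, we record the short computation behind the second equality in \eqref{EQ01}, obtained by differentiating \eqref{a_exact_form} in time; this is a standard step and not part of the original argument:

```latex
% Differentiate a_{ij} = \frac12\,\epsilon_{imn}\epsilon_{jkl}\,
% \partial_m\eta_k\,\partial_n\eta_l in t, using \partial_t\eta = v:
\[
  \partial_t a_{ij}
  = \frac12\,\epsilon_{imn}\epsilon_{jkl}
    \bigl( \partial_m v_k\,\partial_n \eta_l
         + \partial_m \eta_k\,\partial_n v_l \bigr)
  = \epsilon_{imn}\epsilon_{jkl}\,\partial_m v_k\,\partial_n \eta_l ,
\]
% where the second equality follows by relabeling (m,k) <-> (n,l) in the
% second term; the two sign changes of the permutation symbols cancel.
% Integrating from 0 to t, with a(0) = I, yields \eqref{EQ01}.
```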
The equations \eqref{na_eta_-_I} and \eqref{EQ01} demonstrate an important property: as long as the key quantities \eqref{big_3} stay bounded, both $a$ and $\nabla \eta$ remain close to $I$ in the $H^{1.5+\delta }$ norm for sufficiently small times. In other words, we obtain the following lemma. \begin{lemma}[Stability of $a$ and $\nabla \eta$ at initial time]\label{lem_stab_a_naeta} Let $M,T_0>0$ and suppose that $\| v \|_{2.5+\delta }$, $\| \eta \|_{2.5+\delta }$, $\| \chi \eta \|_{3+\delta } \leq M$ for $t\in [0,T_0]$. Given $\varepsilon >0$, there exists $T=T(M,\varepsilon ) \in (0,T_0)$ such that \begin{equation}\label{a-I_and_naeta-I} \| I -a \|_{1.5+\delta },\| I -aa^T \|_{1.5+\delta }, \| I-\nabla \eta \|_{1.5+\delta } \leq \varepsilon \end{equation} for $t\in [0,T]$. \end{lemma} \begin{proof} By \eqref{na_eta_-_I} and \eqref{EQ01}, we have $\| I-a \|_{1.5+\delta } \lesssim M^2 t $ and $\| \nabla\eta - I \|_{1.5+\delta } \lesssim M t$. Moreover, the triangle inequality and \eqref{a_in_H1.5} give $\| I-a a^T \|_{1.5+\delta } \lesssim \| I- a \|_{1.5+\delta } (1+ \| a \|_{1.5+\delta }) \lesssim M^2(1+M^2) t$, and so the claim follows by taking $T$ sufficiently small. \end{proof} Corollary \ref{cor_stab_rt} below provides a similar estimate for the pressure function, which enables us to extend the Rayleigh-Taylor condition \eqref{rayleigh_taylor} to small $t>0$. \subsection{Div-curl estimates}\label{sec_div_curl} Let $\psi (x_3) \in C^\infty (\mathbb{R} ; [0,1] ) $ be such that $\operatorname{supp} \, \psi \subset \{ \chi =1 \}$ and $\psi =1$ in a neighborhood of $\Gamma_1$.
Note that both $\chi$ and $\psi$ commute with any differential operator in the variables $x_1,x_2$, and that, provided $\psi$ is present in any given expression involving classical derivatives or $\Lambda$, we may insert $\chi$ at any other place. For example, \begin{equation}\label{cutoffs_plugin} \nabla f \, S g \, \nabla (\psi w) = \nabla f \, S(\chi g) \, \nabla (\psi w) \end{equation} for any functions $f$, $g$, and $w$. In this section, we provide estimates that allow us to control the key quantities $\| v \|_{2.5+\delta }$, $\| \chi \eta \|_{3+\delta }$, and $\| \eta \|_{2.5+\delta }$. Namely, we denote by $P$ any polynomial depending on these quantities, and we show that \begin{equation} \label{divcurl2} \| \chi \eta \|_{3+\delta } \lesssim t\| \chi \nabla \omega_0 \|_{1+\delta }+ 1+ \| \Lambda^{2.5+\delta } \eta_3 \|_{L^2 (\Gamma_1)}+\int_0^t P \,{\rm d} s, \end{equation} and \begin{equation}\label{v_2.5+delta} \| v \|_{{2.5 +\delta }} \lesssim \| v \|_{L^2} +\| S(\psi v) \|_{L^2}+\Vert u_0\Vert_{2.5+\delta}. \end{equation} Note that, on the other hand, by $\eta_t=v$ and $\eta(x,0)=x$, we have \begin{equation}\label{divcurl1} \| \eta \|_{2.5+\delta } \lesssim 1 + \int_0^t \Vert v\Vert_{2.5+\delta}\,{\rm d} s \lesssim 1+ \int_0^t P\,{\rm d} s . \end{equation} As pointed out above, we simplify the notation by omitting any indices in the cases where the exact value of the index becomes irrelevant. In those cases we only keep track of the power of the term and the order of any derivatives, as such terms are estimated using a H\"older, Sobolev, interpolation, or commutator inequality. We start with the proof of \eqref{divcurl2}.
We use \eqref{harmonic_est} to get \begin{equation}\label{001} \| \chi \eta \|_{{3+\delta }} \lesssim \| \chi \eta \|_{L^2 } + \| \mathrm{curl}\, (\chi \eta ) \|_{{2+\delta }} + \| \mathrm{div}\, (\chi \eta ) \|_{{2+\delta }} + \| \Lambda^{2.5+\delta } \eta_3 \|_{L^2 (\Gamma_1)}. \end{equation} For the term involving the curl, we first recall the Cauchy invariance \begin{equation}\label{cauchy} \epsilon_{ijk} \partial_j v_m \partial_k \eta_m = (\omega_0)_i, \end{equation} cf.~the Appendix of \cite{KTV} for a proof. For $i=1,2,3$, we have \[\begin{split} \nabla ((\mathrm{curl}\, (\chi \eta ))_i) &= \epsilon_{ijk} \partial_j \nabla (\chi \eta_k) \\ &= \epsilon_{ijk} \delta_{km} \partial_j \nabla (\chi \eta_m ) \\ &= \epsilon_{ijk} (\delta_{km} -\partial_k \eta_m ) \partial_j \nabla (\chi \eta_m ) + 2 \chi \int_0^t \epsilon_{ijk}\partial_k v_m \partial_j \nabla \eta_m \, {\rm d} s + t \chi \nabla (\omega_0)_i \\ &\hspace{9cm}+ \underbrace{ \nabla \eta (D^2 \chi \eta + 2\nabla \chi \nabla \eta )}_{=:LOT_1} \\ &= \epsilon_{ijk} (\delta_{km} -\partial_k \eta_m ) \partial_j \nabla (\chi \eta_m ) + 2 \int_0^t \epsilon_{ijk}\partial_k v_m \partial_j \nabla (\chi \eta_m) \, {\rm d} s + t \chi \nabla (\omega_0)_i + LOT_1\\ &\hspace{7cm}+ \underbrace{\int_0^t \nabla v (D^2 \chi \eta + 2\nabla \chi \nabla \eta + \nabla \chi \nabla \eta ) \,{\rm d} s}_{=:LOT_2}, \end{split} \] where we used \[ 0= -\epsilon_{ijk} \partial_k \eta_m \partial_j \nabla \eta_m + 2 \int_0^t \epsilon_{ijk} \partial_k v_m \partial_j \nabla \eta_m \,{\rm d} s + t \nabla (\omega_0)_i \] in the third equality, which in turn is a consequence of the Cauchy invariance \eqref{cauchy}.
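(As an aside, a quick sanity check on \eqref{cauchy}, whose full proof is in the Appendix of \cite{KTV}: at the initial time the invariant reduces to the initial vorticity.)

```latex
% At t = 0 we have \eta(x,0) = x, so \partial_k \eta_m = \delta_{km}, and
% with v_0 := v(\cdot,0),
\[
  \epsilon_{ijk}\,\partial_j v_m\,\partial_k \eta_m \Big|_{t=0}
  = \epsilon_{ijk}\,\partial_j v_k \Big|_{t=0}
  = (\mathrm{curl}\, v_0)_i = (\omega_0)_i ,
\]
% consistent with the right-hand side of \eqref{cauchy}; the content of the
% Cauchy invariance is that this quantity is conserved in time.
```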
Thus \begin{equation}\label{curl_chieta_est} \begin{split} \| \nabla (\mathrm{curl}\, (\chi \eta )) \|_{{1+\delta }} &\lesssim \| \chi \eta \|_{{3+\delta}} \| I - \nabla \eta \|_{{1.5+\delta }} + 2\int_0^t \underbrace{ \| v \|_{{2.5+\delta }} \|\chi \eta \|_{{3+\delta }} }_{\leq P}{\rm d} s \\& + t\| \chi \nabla \omega_0 \|_{{1+\delta }}+ \| LOT_1+LOT_2 \|_{{1+\delta }}, \end{split} \end{equation} where we used \eqref{kpv_est}. Note that \[ \| LOT_1 \|_{1+\delta } , \| LOT_2 \|_{1+\delta } \lesssim 1+ \int_0^t \| v \|_{2.5+\delta } \| \eta \|_{2+\delta } \,{\rm d} s \lesssim 1+\int_0^t P\, {\rm d} s, \] where we used $\eta (x)=x+\int_0^t v (x,s)\,{\rm d} s$ and $T\leq 1/CM$ with $\Vert v\Vert_{2.5+\delta}\leq M$ to estimate $LOT_1$. As for the divergence, we have $\partial_t \eta = v$, which gives \[ \begin{split} \operatorname{div}\, \partial_l (\chi \eta ) &= \delta_{kj} \partial_l \partial_k (\chi \eta_j ) = ( \delta_{kj} - a_{kj} ) \partial_l \partial_k (\chi \eta_j ) + a_{kj} \partial_l \partial_k (\chi \eta_j ) \\ & = ( \delta_{kj} - a_{kj} ) \partial_l \partial_k (\chi \eta_j ) + \int_0^t \bigl( \partial_t a_{kj} \partial_l \partial_k (\chi \eta_j ) + a_{kj} \partial_l \partial_k ( \chi v_j )\bigr) {\rm d} s\\ & = ( \delta_{kj} - a_{kj} ) \partial_l \partial_k (\chi \eta_j ) \\& + \int_0^t \bigl( \partial_t a_{kj} \partial_l \partial_k (\chi \eta_j ) + \underbrace{a_{kj} ( \partial_l \partial_k \chi v_j + \partial_l \chi \partial_k v_j + \partial_k \chi \partial_l v_j)}_{=:LOT_3} - \chi \partial_l a_{kj} \partial_k v_j \bigr) {\rm d} s, \end{split} \] where in the last step we used $a_{kj} \partial_l \partial_k v_j = -\partial_l a_{kj} \partial_k v_j$, a consequence of the divergence-free condition $a_{kj}\partial_k v_j =0$. Therefore, \begin{align} \begin{split} \|\nabla \operatorname{div}\, (\chi \eta ) \|_{{1+\delta }} &\lesssim \| I - a \|_{{1.5+\delta }} \| \chi \eta \|_{{3+\delta }} \\& + \int_0^t \bigl( \underbrace{\| \partial_t a \|_{{1.5+\delta/2 }} \| \chi \eta \|_{{3+\delta }}}_{\leq P} + \| LOT_3 \|_{{1+\delta }} + \| \chi \partial_l a \|_{{1+\delta}} \| v \|_{{2.5+\delta }} \bigr) {\rm d} s. \end{split} \notag \end{align} Since $\partial_l a $ consists of sums of terms of the form $\partial_a \eta_m \partial_l \partial_b \eta_n $, for $m,n,a,b=1,2,3$, we have \[ \| \chi \partial_l a \|_{{1+\delta }} \lesssim \| \eta \|_{{2.5+\delta }} (\| \chi \eta \|_{{3+\delta }} + \| \eta \|_{{2+\delta }}) .\] Moreover, \[ \| LOT_3 \|_{1+\delta } \lesssim \| a \|_{1.5+\delta } \| v\|_{2+\delta } \lesssim P, \] since $1+\delta < 1.5+\delta $. Thus \[ \| \nabla \operatorname{div}\, (\chi \eta ) \|_{{1+\delta }} \lesssim \| I - a \|_{{1.5+\delta }} \| \chi \eta \|_{{3+\delta }} + \int_0^t P\, {\rm d} s. \] Applying this inequality and \eqref{curl_chieta_est} into \eqref{001} gives \[\begin{split} \| \chi \eta \|_{{3+\delta }} &\lesssim \| \chi \eta \|_{L^2} + \| \mathrm{curl}\, (\chi \eta ) \|_{1+\delta } + \| \mathrm{div}\, (\chi \eta ) \|_{1+\delta } + \| \chi \eta \|_{{3+\delta }}\left( \| I-a \|_{1.5+\delta } + \| I - \nabla \eta \|_{1.5+\delta } \right) \\ &+ \int_0^t P\, {\rm d} s + t\| \chi \nabla \omega_0 \|_{1+\delta }+\Vert\Lambda^{2.5+\delta}\eta_3\Vert_{L^2(\Gamma_1)} +1 . \end{split} \] Recalling \eqref{a-I_and_naeta-I}, we may estimate the fourth term on the right-hand side by $\varepsilon \| \chi \eta \|_{3+\delta }$, and so we may absorb it into the left-hand side. Furthermore, we can absorb the second and the third terms on the right-hand side as well, by applying the Sobolev interpolation inequality between $L^2$ and $H^{3+\delta}$, which gives \eqref{divcurl2}. As for the estimate \eqref{v_2.5+delta} for $v$, we note that the Cauchy invariance \eqref{cauchy} implies \[ (\mathrm{curl}\, v )_i = \epsilon_{ijk} \partial_j v_k = \epsilon_{ijk} (\delta_{km}- \partial_k \eta_m )\partial_j v_m + (\omega_0)_i, \] and the divergence-free condition, $a_{ji} \partial_j v_i=0$, gives \[ \mathrm{div}\, v = \delta_{ji} \partial_j v_i = (\delta_{ji}-a_{ji} ) \partial_j v_i . \] Thus, using \eqref{harmonic_est}, we obtain \begin{equation}\label{v_2.5+delta_temp} \begin{split} \| v \|_{{2.5+\delta }} &\lesssim \| v \|_{L^2} + \| \epsilon_{ijk} (\delta_{km} - a_{km } ) \partial_j v_m \|_{{1.5+\delta }} \\& + \| (\delta_{ji} - a_{ji} ) \partial_j v_i \|_{{1.5 +\delta }} + \| \nabla_{2} v_3 \|_{H^{1+\delta } (\Gamma_1)} + \| \omega_0 \|_{1.5+\delta }. \end{split} \end{equation} For the boundary term, applying Sobolev interpolation and trace estimates gives \[ \| \nabla_{2} v_3 \|_{H^{1+\delta } (\Gamma_1)} \lesssim \| v \|_{L^2 (\Gamma_1)} + \| \Lambda^{1.5 +\delta } v_3 \|_{H^{0.5}(\Gamma_1 )} \lesssim \| v \|_{1} + \| \Lambda^{1.5 +\delta} \nabla (\psi v_3 ) \|_{L^2}.
\] As for the last term, since \begin{align} \begin{split} \partial_3 (\psi v_3) &= \psi \operatorname{div} \, v - \psi \partial_1 v_1 - \psi \partial_2 v_2 + \partial_3 \psi v_3 \\& = \psi (\delta_{ji}-a_{ji})\partial_j v_i - \partial_1 (\psi v_1) - \partial_2 (\psi v_2) + \partial_3 \psi v_3, \notag \end{split} \end{align} we have \[\begin{split} \| \Lambda^{1.5 +\delta} \nabla (\psi v_3 ) \|_{L^2} &\lesssim \| \psi ( I-a ) \nabla v \|_{{1.5+\delta }} + \| S (\psi v ) \|_{L^2} + \| \partial_3 \psi v_3 \|_{{1.5 }} \\ &\lesssim \varepsilon \| v \|_{{2.5+\delta }} + \| v \|_{L^2} + \| S(\psi v) \|_{L^2}, \end{split} \] where we used \eqref{a-I_and_naeta-I} and Sobolev interpolation between $L^2$ and $H^{2.5 }$ for the last term. Applying this in \eqref{v_2.5+delta_temp}, and using \eqref{a-I_and_naeta-I} again, we obtain \[ \| v \|_{{2.5+\delta }} \lesssim \varepsilon \| v \|_{{2.5+\delta }} + \| v \|_{L^2} + \| S(\psi v) \|_{L^2}+\| \omega_0 \|_{1.5+\delta }, \] which, after absorbing the first term on the right-hand side, gives \eqref{v_2.5+delta}, as required. \subsection{Pressure estimate}\label{sec_pressure} In this section we show that if $\| q \|_{2.5+\delta }$ and $\| \psi q \|_{3+\delta }$ are finite, then \begin{equation}\label{pressure_est_2.5} \| q \|_{2.5+\delta } \lesssim \| v \|_{2+\delta }^2 \end{equation} and \begin{equation}\label{pressure_est_with_psi} \| \psi q \|_{3+\delta } \leq P(\| \chi \eta \|_{3+\delta }, \| v \|_{2.5+ \delta }). \end{equation} In the remainder of this section we denote any polynomial of the form of the right-hand side by $P$, for simplicity. We also continue the convention of omitting the irrelevant indices.
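The argument below repeatedly uses the Piola identity $\partial_j a_{ji}=0$; for completeness (this short verification is not in the original text), it follows in one line from the representation \eqref{a_exact_form}:

```latex
% From a_{ji} = \frac12\,\epsilon_{jmn}\epsilon_{ikl}\,
% \partial_m\eta_k\,\partial_n\eta_l we compute
\[
  \partial_j a_{ji}
  = \frac12\,\epsilon_{jmn}\epsilon_{ikl}
    \bigl( \partial_j\partial_m \eta_k\,\partial_n\eta_l
         + \partial_m\eta_k\,\partial_j\partial_n\eta_l \bigr) = 0 ,
\]
% since \epsilon_{jmn} is antisymmetric in (j,m) while \partial_j\partial_m
% is symmetric, so the first term vanishes; the second term vanishes by the
% same argument applied to the pair (j,n).
```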
We first prove \eqref{pressure_est_with_psi} assuming that \eqref{pressure_est_2.5} holds. Multiplying the Euler equation, $ \partial_t v_i = - a_{ki } \partial_k q $, by $\psi$ we obtain $ \partial_t (\psi v_i) = -a_{ki} \partial_k (\psi q) + a_{ki} \partial_k \psi \, q $. Now applying $a_{ji}\partial_j$, summing over $i,j$, and using the Piola identity $\partial_j a_{ji}=0$, we get \[\begin{split} &-\partial_j (a_{ji} a_{ki} \partial_k( \psi q)) + \partial_j (a_{ji} a_{ki} \partial_k \psi \, q )=a_{ji}\partial_j \partial_t (\psi v_i) \\&\quad{} = \psi a_{ji}\partial_j \partial_t v_i + a_{ji} \partial_j \psi \partial_t v_i = - \psi \partial_t a_{ji}\partial_j v_i - a_{ji} \partial_j \psi a_{ki} \partial_k q , \end{split} \] where we have applied the product rule for $\partial_j$ in the second equality, and, in the third equality, we used $\partial_t (a_{ji} \partial_j v_i )=0$ for the first term. Thus \begin{equation}\label{poisson_psiq}\begin{split} \Delta (\psi q) &= \partial_j \bigl( (\delta_{jk} - a_{ji}a_{ki} ) \partial_k (\psi q) \bigr) + \partial_j (a_{ji} a_{ki} \partial_k (\psi q)) \\ &= \partial_j \bigl((\delta_{jk} - a_{ji}a_{ki} ) \partial_k (\psi q) \bigr) + \partial_j (a_{ji} a_{ki} \partial_k \psi \, q) + \psi \partial_t a_{ji} \partial_jv_i + a_{ji} \partial_j \psi a_{ki} \partial_k q , \end{split} \end{equation} and thus, by elliptic regularity, noting that $\psi q=0$ on $\Gamma_0\cup \Gamma_1$, \begin{align}\begin{split} \| \psi q \|_{{3+\delta }} &\lesssim \| (I- aa^T )\nabla (\psi q) \|_{{2+\delta }} + \| a^2 \nabla \psi q \|_{{2+\delta }} + \| \psi \partial_t a \nabla v \|_{{1+\delta }} + \| a \nabla \psi a \nabla q \|_{{1+\delta }}\\ &\lesssim \| I- a a^T \|_{L^\infty } \| \psi q \|_{{3+\delta }} + \| \chi^4 (I-aa^T) \|_{{2+\delta }} \| \psi q \|_{{2.5+\delta }} \\ &+\| \chi^2 a \|_{2 +\delta }^2 \| q \|_{2+\delta } + \| \partial_t a \|_{1.2+\delta } \| v \|_{2.3+\delta } + \| a \|_{1.5+\delta }^2 \| q \|_{2+\delta } \\ &\lesssim \varepsilon \| \psi q \|_{{3+\delta }} + (1+\| \chi \eta \|^4_{3+\delta } ) \| \psi q \|_{2.5+\delta } + P, \end{split} \label{EQ02} \end{align} where we used \eqref{cutoffs_plugin}, the estimate $\| fg\|_{2+\delta } \lesssim \| f \|_{L^\infty } \| g \|_{2+\delta } + \| f \|_{2+\delta } \| g \|_{L^\infty }$, as well as the embedding $\| g \|_{L^\infty }\lesssim \| g \|_{1.5+\delta }$ and \eqref{kpv_est} in the second inequality. In the third inequality we used \eqref{at_in_H1.5} and \eqref{a-I_and_naeta-I}. In the second term on the far right side of \eqref{EQ02}, we use \eqref{pressure_est_2.5}, proven below, to show that it is dominated by~$P$. Thus we have obtained \eqref{pressure_est_with_psi}, given \eqref{pressure_est_2.5}, which we now verify. In order to estimate $q$ in $H^{2.5+\delta }$, we note that, as in \eqref{poisson_psiq} above, $q$ satisfies the Poisson equation \[ \partial_{kk} q = \partial_t a_{ji} \partial_j v_i + \partial_j ( (\delta_{jk} - a_{ki}a_{ji} ) \partial_k q ) \] in $\Omega$, together with the homogeneous boundary condition $q=0 $ on $\Gamma_1$ (by \eqref{no_q_ontop}), and the nonhomogeneous Neumann boundary condition $\partial_3 q = (\delta_{k3} - a_{k3} )\partial_k q$ on $\Gamma_0$, since taking $\partial_t $ of the boundary condition $v_3 =0$ on $\Gamma_0$ gives $a_{k3} \partial_k q =0$.
Thus, elliptic estimates imply \[\begin{split} \| q \|_{2.5+\delta } &\lesssim \| \partial_t a \nabla v\|_{0.5+\delta } + \| (I-aa^T ) \nabla q \|_{1.5+\delta }+ \| (I-a) \nabla q \|_{H^{1+\delta }(\Gamma_0)}\\ &\lesssim \| \partial_t a \|_{1+\delta } \| v \|_{2+\delta } + \| I-aa^T \|_{1.5+\delta} \| q \|_{2.5+\delta } + \| I-a \|_{1.5+\delta } \| q \|_{2.5+\delta }\\ &\lesssim \| v \|_{2+\delta }^2 + \varepsilon \| q \|_{2.5+\delta } , \end{split} \] where we used \eqref{at_in_H1.5} and \eqref{a-I_and_naeta-I} in the last step. Taking a sufficiently small $\varepsilon >0$ proves \eqref{pressure_est_2.5}, as required. \subsection{Time derivative of $q$}\label{sec_pressure_time_deriv} In this section, we supplement the pressure estimates \eqref{pressure_est_2.5} and \eqref{pressure_est_with_psi} from the previous section with \begin{equation}\label{dt_q} \| \partial_t q \|_{2+\delta } \leq P, \end{equation} \begin{equation}\label{dt_psi_q_2.5} \| \partial_t (\psi q) \|_{2.5+\delta } \leq P, \end{equation} where $P$ denotes a polynomial in terms of $\| \chi \eta \|_{3+\delta }$, $\| \eta \|_{2.5+\delta }$, and $\| v \|_{2.5+\delta } $. We note that \eqref{dt_psi_q_2.5} implies that the Rayleigh-Taylor condition \eqref{rayleigh_taylor} holds for sufficiently small $t$, provided the key quantities \eqref{big_3} remain bounded. \begin{cor}[Rayleigh-Taylor condition for small times]\label{cor_stab_rt} Let $M,T_0>0$, and suppose that $\| v \|_{2.5+\delta }$, $\| \eta \|_{2.5+\delta }$, $\| \chi \eta \|_{3+\delta } \leq M$ for $t\in [0,T_0]$. There exists $T=T(M,b ) \in (0,T_0)$ such that \begin{equation}\label{rayleigh_t_small_times} \partial_3 q (x,t) \leq - \frac{b}2 \end{equation} for $x\in \Gamma_1$ and $t\in [0,T]$. \end{cor} \begin{proof} The proof is analogous to the proof of Lemma~\ref{lem_stab_a_naeta}, by noting that for $x\in \Gamma_1$ \[ \partial_3 q (x,t) - \partial_3 q(x,0) = \int_0^t \partial_t \partial_3 q \,{\rm d} s \leq \int_0^t \|\partial_t \nabla q \|_{L^\infty (\Gamma_1) } {\rm d} s \leq \int_0^t \| \partial_t (q \psi) \|_{{2.5+\delta }} {\rm d} s\leq P(M) t, \] which is bounded by $b/2$ for sufficiently small $t$. \end{proof} We prove \eqref{dt_psi_q_2.5} first. Applying $\partial_t$ to \eqref{poisson_psiq} we obtain \begin{equation}\notag\begin{split} \Delta \partial_t(\psi q) &= \partial_j \bigl( (\delta_{jk} - a_{ji}a_{ki} ) \partial_k \partial_t(\psi q) \bigr) -2\nabla (a\partial_t a \psi \nabla q ) + \nabla (2a\partial_t a \nabla \psi q + a^2 \nabla \psi \partial_t q)\\ &+\psi \partial_{tt} a \nabla v +\psi \partial_t a \nabla \partial_t v +2a\partial_t a \nabla \psi \nabla q + a^2 \nabla \psi \nabla \partial_t q. \end{split} \end{equation} Noting that $\partial_t (\psi q)$ satisfies the homogeneous Dirichlet boundary conditions at both $\Gamma_1$ and $\Gamma_0$, and is periodic in $x_1$ and $x_2$, elliptic regularity gives, for $\sigma=0.5+\delta$, \begin{align}\begin{split} \| \partial_t (\psi q ) \|_{2 + \sigma } &\lesssim \| (I-aa^T ) \nabla \partial_t (\psi q ) \|_{1+\sigma } +\| a \partial_t a \psi \nabla q \|_{1+\sigma } + \| a \partial_t a \nabla \psi q \|_{1+\sigma} \\& + \| a^2 \nabla \psi \partial_t q \|_{1+\sigma } + \| \partial_{tt} a \nabla v \|_{\sigma } + \| \partial_t a \nabla (a \nabla q) \|_{\sigma } + \| a \partial_t a \nabla q \|_{\sigma } + \| a^2 \nabla \partial_t q \|_{\sigma } \\& \lesssim \varepsilon \| \partial_t (\psi q ) \|_{2+\sigma } + \| a \|_{1.5 +\delta } \| \partial_t a \|_{1.5 +\delta } \| q \|_{2+\sigma } +\| a \|_{1.5+\delta }^2 \| \nabla \psi \partial_t q \|_{1+\sigma } \\& + \| \partial_{tt} a \|_{\sigma } \| v \|_{2.5 +\delta} + \| \partial_t a \|_{1.5 +\delta } \| a \|_{1.5 +\delta } \| q \|_{2+\sigma } \\& + \| a \|_{1.5 +\delta } \| \partial_t a \|_{1.5+\delta } \| q \|_{1+\sigma } + \| a \|_{1.5 +\delta}^2 \| \nabla \psi \nabla \partial_t q \|_{\sigma } , \end{split} \label{EQ03} \end{align} where we used \eqref{a-I_and_naeta-I} in the second inequality. Taking $\varepsilon >0 $ sufficiently small, absorbing the first term into the left-hand side, and using \eqref{pressure_est_2.5}, \eqref{at_in_H1.5}, and \eqref{att_in_Hr}, we obtain \begin{equation}\label{dt_q_naive} \| \partial_t (\psi q ) \|_{2 + \sigma } \lesssim P (1+ \| \nabla \psi \partial_t q \|_{1+\sigma} +\| \nabla \partial_t q \|_{\sigma } ). \end{equation} Thus, given \eqref{dt_q}, we have obtained \eqref{dt_psi_q_2.5}.
It remains to show \eqref{dt_q}. Although taking $\psi := 1$ and $\sigma := \delta $ in \eqref{dt_q_naive} might seem to yield \eqref{dt_q}, we must note that in that case $\partial_t q$ does not satisfy the homogeneous Dirichlet boundary condition at $\Gamma_0$. Instead, we have the Neumann condition \[\partial_3 \partial_t q = (\delta_{k3} - a_{k3}) \partial_k \partial_t q - \partial_t a_{k3} \partial_k q, \] obtained by applying $\partial_t$ to $a_{k3}\partial_k q =0$ on $\Gamma_0$. Hence, in the case $\psi =1$ and $\sigma := \delta$, the estimate \eqref{dt_q_naive} does not include the last two terms inside the parentheses, as $\nabla \psi = \nabla 1=0$, but instead \eqref{EQ03} needs to be amended by the boundary term \[ \| (I-a ) \nabla \partial_t q - \partial_t a \nabla q \|_{H^{0.5+\delta }(\Gamma_0 )} \lesssim \| I- a \|_{1.5+\delta } \| \partial_t q\|_{2+\delta } + \| \partial_t a \|_{1.5+\delta } \| \nabla q \|_{1.5+\delta } \lesssim \varepsilon \| \partial_t q\|_{2+\delta } + P, \] where we used \eqref{a-I_and_naeta-I} again, together with \eqref{at_in_H1.5} and \eqref{pressure_est_2.5}. Thus, choosing a sufficiently small $\varepsilon >0$ and absorbing the first term on the right-hand side, we obtain \eqref{dt_q}, as required. \subsection{Tangential estimates}\label{sec_tangential} In this section we show that \begin{equation}\label{tang_est} \| S (\psi v ) \|_{L^2}^2 + \| a_{3l} S \eta_l \|_{L^2 (\Gamma_1 )}^2 \lesssim \| \psi v_0 \|_{2.5+\delta }^2 + \| v_0 \|_{2+\delta }^2+\int_0^t P\, {\rm d} s, \end{equation} where, as above, $P$ denotes a polynomial in $\| v (s) \|_{{2.5+\delta }}$, $\| \chi \eta (s) \|_{{3+\delta }}$, and $\| \eta \|_{2.5+\delta }$.
Our estimate follows a similar scheme to \cite[Lemma~6.1]{KT2}, except that the argument is shorter and sharper in the sense that we eliminate the dependence on $\partial_t v$, $q$, and $\partial_t q$. Another essential change results from the appearance of the cutoff $\psi$. This localizes the estimate and causes minor changes to the scheme. Note that the appearance of $\psi$ is essential for localizing the highest order dependence on $\eta$. Namely, it allows us to use the $H^{3+\delta }$ norm of merely $\chi \eta$, rather than of $\eta$ itself, which we only need to control in the $H^{2.5+\delta }$ norm. In the remainder of Section~\ref{sec_tangential}, we prove~\eqref{tang_est}. Note that $S\partial_t (\psi v ) = -S( a_{ki} \psi \partial_k q ) $, which gives \[ \frac{{\rm d} }{{\rm d} t} \| S(\psi v) \|_{L^2}^2 = -\int S (a_{ki} \psi \partial_k q )S(\psi v_i) . \] In what follows, we estimate the right-hand side by $P+I_{113}$, where $I_{113}$ is defined in \eqref{EQ04} below and is such that \begin{equation}\label{I113_toshow} \int_0^t I_{113} \, {\rm d} s \lesssim - \| a_{3l} S \eta_l (t) \|_{L^2 (\Gamma_1 )}^2 + \| \partial_3 q (0) \|_{1.5+\delta } + \int_0^t P \,{\rm d} s. \end{equation} The claim then follows by integration in time on $(0,t)$ and by recalling \eqref{pressure_est_2.5}.
First we write \begin{align}\begin{split} -\int S(a_{ki} \psi \partial_k q ) S (\psi v_i) &=- \int Sa_{ki} \psi \partial_k q \, S (\psi v_i)-\int a_{ki} \partial_k S(\psi q) \, S (\psi v_i)+\int a S ( \nabla \psi q) S (\psi v) \\ &- \int \bigl( S(a \psi \nabla q )-Sa \psi \nabla q - a S(\psi \nabla q) \bigr) S (\psi v ) \\ &=: I_1 + I_2+I_3 +I_4, \end{split} \notag \end{align} where we used $\psi \partial_k q =\partial_k (\psi q ) - \partial_k \psi \, q$ to obtain the second and third terms. Using \eqref{pressure_est_2.5} we obtain \[ I_3 \lesssim \| a \|_{L^\infty } \| q \|_{2.5+\delta } \| \psi v \|_{2.5 +\delta } \leq P. \] For $I_2$, we integrate by parts in $x_k$ and use the Piola identity $\partial_k a_{ki}=0$ to get \[\begin{split} I_2 &= \int a_{ki} S(\psi q) \partial_k S (\psi v_i)=\int a_{ki} S(\psi q) S (\psi \partial_k v_i)+\underbrace{ \int a S(\psi q) S (\nabla \psi v)}_{\lesssim \| a \|_{L^\infty } \| q \|_{2.5 +\delta } \| v \|_{2.5 +\delta }} . \end{split} \] Note that the boundary terms in the first step vanish, as $q=0$ on $\Gamma_1$ and $\psi$ vanishes in a neighborhood of $\Gamma_0$.
Moving $S$ away from $\psi \partial_k v$ in the first term above, and recalling the divergence-free condition $a_{ki} \partial_k v_i =0$, we obtain \[ a_{ki} S (\psi \partial_k v_i ) = - S a_{ki}\, \psi \partial_k v_i - (S(a_{ki} \psi \partial_k v_i ) - a_{ki} S (\psi \partial_k v_i ) - S a_{ki}\, \psi \partial_k v_i ) , \] and thus \[\begin{split} I_2& \lesssim - \int S a\, S(\psi q)\, \psi \nabla v +\| S(\psi q) \|_{L^3 } \| S(a \psi \nabla v) - a S(\psi \nabla v) - Sa\, \psi \nabla v \|_{L^{\frac32}} +P \\ &\lesssim - \int \Lambda^{2+\delta } a\, \Lambda^{\frac12}\left( S(\psi q)\, \psi \nabla v \right) + \| \psi q \|_{3+\delta }( \| a \|_{W^{1,6}} \| v \|_{{2.5+\delta }} + \| a \|_{W^{1.5+\delta , 3}} \| v \|_{W^{2,3}} ) +P\\ &\lesssim \| \chi^2 a \|_{2+\delta } \| S (\psi q )\, \psi \nabla v \|_{0.5}+P \lesssim \| \chi \eta \|_{3 +\delta }^2 \| \psi q \|_{3+\delta } \| v \|_{2.5+\delta } +P \lesssim P , \end{split} \] where we used the embedding $H^{0.5} \subset L^3$ and the commutator estimate \eqref{kpv3} in the second inequality, and the embeddings $H^1 \subset L^6$, $H^{0.5}\subset L^3$; we also used \eqref{pressure_est_with_psi} and \eqref{achi_in_H2} in the fourth and fifth inequalities, respectively. 
As for $I_4$ we have \[\begin{split} I_4 &\lesssim \| \psi v \|_{2.5+\delta } \| S (a \psi \nabla q ) - S a\, \psi \nabla q - a S (\psi \nabla q) \|_{L^2} \\ &\lesssim P \| S (a \psi \nabla q ) - S a\, \psi \nabla q - a S (\psi \nabla q ) \|_{L^2}\\ &\lesssim P (\| \chi a \|_{W^{1,6}} \| \psi \nabla q \|_{W^{1.5+\delta , 3}} + \| \chi a \|_{W^{1.5+\delta ,3}} \| \psi \nabla q \|_{W^{1,6}} )\lesssim P, \end{split} \] where we have applied \eqref{kpv3}, \eqref{achi_in_H2}, \eqref{cutoffs_plugin}, \eqref{pressure_est_2.5}, and \eqref{pressure_est_with_psi} in the last line. For $I_1$, we write \[ S = \sum_{m=1,2} S_m \partial_m + S_0, \] where $S_m := -\Lambda^{\frac12+{\delta }} \partial_m$ and $S_0 := \Lambda^{\frac12+{\delta }}$. Furthermore, differentiating the identity $a\nabla \eta =I$ with respect to $x_m$, where $m=1,2$, we get $\partial_m a\, \nabla \eta =- a\,\partial_m \nabla \eta $, and then multiplying this matrix equation by $a$ on the right gives $\partial_m a = - a\, \partial_m \nabla \eta\, a$, which in components reads \[ \partial_m a_{ki} = - a_{kj} \partial_m \partial_l \eta_j a_{li} . \] Thus we may write \[ Sa_{ki} = - \sum_{m=1,2} S_m ( a_{kj} \partial_m \partial_l \eta_j a_{li} ) + S_0 a_{ki}, \] and consequently \[\begin{split} I_1 &= \sum_{m=1,2}\int S_m ( a_{kj} \partial_m \partial_l \eta_j a_{li} )\, \psi \partial_k q\, S(\psi v_i) - \int S_0 a\, \psi \nabla q\, S(\psi v) \\ &= \sum_{m=1,2}\int a_{kj} S_m \partial_m \partial_l \eta_j a_{li}\, \psi \partial_k q\, S(\psi v_i) + \sum_{m=1,2}\int \bigl( S_m ( a D^2\eta\, a ) - a S_m D^2 \eta\, a \bigr) \psi \nabla q\, S(\psi v) \\& \quad - \int S_0 a\, \psi \nabla q\, S(\psi v) \\ &=: I_{11} + I_{12} + I_{13} . 
\end{split} \] We have \[\begin{split} I_{13} &\lesssim \| S_0 a \|_{L^2} \| \nabla q \|_{L^\infty } \| S ( \psi v ) \|_{L^2} \lesssim \|a \|_{0.5+\delta } \| q \|_{2.5+\delta } \| v \|_{2.5+\delta } \leq P, \end{split} \] and \[\begin{split} I_{12} &\lesssim \| \nabla q \|_{L^\infty } \| \psi v \|_{2.5+\delta } \| S_m ((\chi^2 a)^2 D^2 ( \chi \eta) ) - (\chi^2 a)^2 S_m D^2 (\chi \eta ) \|_{L^2} \\ &\lesssim P \left( \| (\chi^2 a)^2 \|_{W^{1.5+\delta , 3}} \| \chi \eta \|_{W^{2,6}} + \| (\chi^2 a )^2 \|_{W^{1,6}} \| \chi \eta \|_{W^{2.5+\delta, 3}} \right) \lesssim P, \end{split} \] as claimed, where we used \eqref{cutoffs_plugin} in the first inequality, and \eqref{kpv1}, \eqref{achi_in_H2} in the second. For $I_{11}$ we write $\sum_{m=1,2}S_m \partial_m = - S_0+S $, integrate by parts in $x_l$, and use the Piola identity $\partial_l a_{li} =0$ to get \begin{align}\begin{split} I_{11} & = -\int a S_0 \nabla \eta\, a\, \psi \nabla q\, S(\psi v) -\int \nabla a\, S \eta\, a\, \psi \nabla q\, S(\psi v)-\int a S \eta\, a\, \nabla \psi\, \nabla q\, S(\psi v) \\ &\quad \underbrace{-\int a S \eta\, a\, \psi D^2 q\, S(\psi v)}_{=:I_{111}}\underbrace{-\int a_{kj} S \eta_j a_{li}\, \psi\partial_k q\, \partial_l S(\psi v_i)}_{=:I_{112}}+\underbrace{ \int_{\Gamma_1} a_{kj} S \eta_j a_{3i} \partial_k q\, S(\psi v_i )\, {\rm d} \sigma}_{=:I_{113}}\\ &\lesssim \| a \|^2_{L^\infty } \| q \|_{W^{1,\infty }} \| \eta \|_{2.5+\delta } \|v \|_{2.5+\delta } \\& \quad + \|\nabla (\chi^2a ) \|_{L^6 } \| S(\chi \eta ) \|_{L^3} \| a \|_{L^\infty } \| q \|_{W^{1,\infty }}\| v\|_{2.5+\delta } + I_{111} + I_{112}+I_{113} \\ &\lesssim P + I_{111} + I_{112}+I_{113} \end{split} \label{EQ04} \end{align} where we used \eqref{cutoffs_plugin} in the first 
inequality, and the inequalities $\| f \|_{L^\infty }\lesssim \| f \|_{1.5+\delta }$ and \eqref{achi_in_H2} to obtain $\| \nabla (\chi^2 a )\|_{L^6} \lesssim \| \chi^2 a \|_{2} \lesssim \| \chi \eta \|_{3+\delta }$ in the second inequality. Note that there is no boundary term at $\Gamma_0$, as $\psi =0$ on~$\Gamma_0$. For $I_{111}$ we write $\psi D^2 q = D^2 (\psi q ) - 2 \nabla \psi\, \nabla q - D^2 \psi\, q$ to obtain \[ \begin{split} I_{111} &= -\int a S \eta\, a\, D^2 (\psi q)\, S(\psi v)+\int a S \eta\, a\, (2\nabla \psi\, \nabla q + D^2 \psi\, q )\, S(\psi v) \\ &\lesssim \| a \|_{L^\infty}^2 \| S(\chi \eta ) \|_{L^3} \| v\|_{2.5+\delta } ( \| \psi q \|_{W^{2,6}} + \| q \|_{W^{1,6 }} )\lesssim P, \end{split} \] where we used \eqref{achi_in_H2}, \eqref{pressure_est_with_psi}, and the embedding $H^{0.5}\subset L^3$ in the last inequality. As for $I_{112}$, we note that the divergence-free condition, $a_{li} \partial_l v_i=0$, gives \[S (a_{li} \partial_l (\psi v_{i}))= S (a_{li} \partial_l \psi\, v_{i}). \] In order to use this fact, we put $a_{li}$ inside the second $S$ in $I_{112}$ and extract the resulting commutator. 
Namely, we denote $f:= -a_{kj} S\eta_j\, \psi\partial_k q$ for brevity, and write \begin{align} \begin{split} I_{112} &= \int a_{li} S\partial_l (\psi v_i )f\\ &= \int S (a_{li} \partial_l \psi\, v_i)f -\int S a\, \nabla (\psi v)f\underbrace{-\int \bigl( S(a \nabla (\psi v )) - S a\, \nabla (\psi v) -a S\nabla (\psi v) \bigr)f}_{\lesssim \| S (a \nabla (\psi v)) - Sa \,\nabla (\psi v) - a S\nabla (\psi v) \|_{L^2} \| f \|_{L^2}}\\ &\lesssim \int \bigl( S(a \nabla \psi\, v) - Sa\, \nabla \psi\, v- a S( \nabla \psi\, v) \bigr)f -\underbrace{\int S a\, \psi \nabla v\, f}_{=\int \Lambda^{2+\delta } (\chi^2 a ) \Lambda^{\frac12} (\psi \nabla v\, f ) } \hspace{0.5cm}+\underbrace{ \int a S (\nabla \psi\, v)f}_{\lesssim \| a \|_{L^\infty } \| v \|_{2.5+\delta }\|f \|_{L^2}} \\ &\quad + \| S (a \nabla (\psi v)) - Sa \,\nabla (\psi v) - a S\nabla (\psi v) \|_{L^2}\| f \|_{L^2}\\ &\lesssim \| \chi^2 a \|_{2+\delta } \| \Lambda^{\frac12 } ( \psi \nabla v\, f) \|_{L^2} \\ &\quad + \| f \|_{L^2} \bigl( P+\| S(a \nabla \psi\, v ) - Sa\, \nabla \psi\, v- a S(\nabla \psi\, v) \|_{L^2} + \| S (a \nabla (\psi v)) - Sa \,\nabla (\psi v) - a S\nabla (\psi v) \|_{L^2} \bigr) \\ &\lesssim P, \end{split} \notag \end{align} where we have used \eqref{cutoffs_plugin} in the first inequality. 
In the last step we have applied \eqref{achi_in_H2} and used $\| f \|_{L^2} \lesssim \| a \|_{L^\infty } \| \chi \eta \|_{2.5+\delta } \| q \|_{W^{1,\infty }} \lesssim P$; we have also estimated both commutator terms by \[ \| \chi^2 a \|_{W^{1, 3 }} \| v \|_{W^{1.5+\delta , 6}} + \| v \|_{W^{1,6}} \| \chi^2 a \|_{W^{1.5+\delta,3 }} \leq \| \chi^2 a \|_{{2+\delta }} \| v \|_{{2.5+\delta}} \leq P, \] using the Kato--Ponce inequality \eqref{kpv3} together with \eqref{achi_in_H2} and \eqref{cutoffs_plugin}, and we estimated the remaining factor by \[\begin{split} \| \Lambda^{\frac12 } ( \psi \nabla v\, f) \|_{L^2} &\lesssim \Vert \Lambda^{\frac12}f\Vert_{L^2} \Vert \psi\nabla v\Vert_{L^\infty} + \Vert f\Vert_{L^3} \Vert \Lambda^{\frac12}(\psi\nabla v)\Vert_{L^6} \\& \lesssim \Vert f\Vert_{0.5} \Vert v\Vert_{W^{1,\infty}} + \Vert f\Vert_{0.5} \Vert v\Vert_{2.5} \\& \lesssim \| a \|_{1.5+\delta } \| \chi \eta \|_{3+\delta } \| q \|_{1.5+\delta } \Vert v\Vert_{2.5+\delta} \leq P. \end{split} \] It remains to estimate the boundary term $I_{113}$, as claimed in \eqref{I113_toshow}. We note that $\partial_k q=0$ for $k=1,2$ on $\Gamma_1$, and $v_i = \partial_t \eta_i$, which gives $a_{3i} Sv_i = \partial_t (a_{3i} S \eta_i) - \partial_t a_{3i}\, S{\eta_i}$. 
Thus \[\begin{split} I_{113}& = \frac12 \frac{{\rm d} }{{\rm d} t} \int_{\Gamma_1} |a_{3i} S \eta_i |^2 \partial_3 q\, {\rm d} \sigma - \int_{\Gamma_1} a_{3j} S \eta_j\, \partial_3 q\, \partial_t a_{3i} S \eta_i \,{\rm d} \sigma - \int_{\Gamma_1} a_{3j} S \eta_j\, \partial_3 \partial_t q\, a_{3i} S \eta_i \,{\rm d} \sigma \\ &\leq \frac12 \frac{{\rm d} }{{\rm d} t} \int_{\Gamma_1} |a_{3i} S \eta_i |^2 \partial_3 q \,{\rm d} \sigma + \| a \|_{L^\infty } \| \partial_t a \|_{L^\infty } \| q \|_{W^{1,\infty }} \| S(\chi \eta) \|_{L^2 (\Gamma_1)}^2 + \| a \|_{L^\infty}^2 \| S (\chi \eta )\|_{L^2(\Gamma_1 )}^2 \| \partial_t (\psi q ) \|_{W^{1,\infty }} \\ &\leq \frac12 \frac{{\rm d} }{{\rm d} t} \int_{\Gamma_1} |a_{3i} S \eta_i |^2 \partial_3 q \,{\rm d} \sigma + P, \end{split} \] where we used \eqref{a_in_H1.5}, \eqref{at_in_H1.5}, the facts $\| S(\chi \eta ) \|_{L^2 (\Gamma_1)} \lesssim \| \chi \eta \|_{3+\delta } \leq P$ and $\| q \|_{W^{1,\infty }} \lesssim \| q \|_{2.5+\delta }\leq P$, as well as the pressure estimate \eqref{dt_psi_q_2.5} to get $\| \partial_t (\psi q ) \|_{W^{1,\infty }} \lesssim \| \partial_t (\psi q ) \|_{2.5+\delta } \leq P $. Consequently, the Rayleigh--Taylor condition for small times \eqref{rayleigh_t_small_times} and the fact that $a_{ki}=\delta_{ki}$ at time $0$ give \[\begin{split} \int_0^t I_{113}\, {\rm d} s &\leq \frac12 \left. \int_{\Gamma_1} |a_{3i} S \eta_i |^2 \partial_3 q \,{\rm d} \sigma \right|_{t} - \frac12 \left. \int_{\Gamma_1} | S \eta |^2 \partial_3 q \,{\rm d} \sigma \right|_0 + \int_0^t P \, {\rm d} s\\ &\lesssim -\| a_{3i } S \eta_i \|_{L^2}^2 + \| \partial_3 q(0) \|_{L^\infty } + \int_0^t P \, {\rm d} s, \end{split} \] where we used $\eta(x) =x$ at time $0$ in the last step. This concludes the proof of \eqref{tang_est}. 
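For the reader's convenience, we note that the component form of the identity $\partial_m a = -a\,\partial_m\nabla\eta\, a$ used in the treatment of $I_1$ can also be checked directly from $a\nabla\eta = I$, i.e., from $a_{kj}\partial_l \eta_j = \delta_{kl}$: differentiating in $x_m$ yields
\[
0 = \partial_m a_{kj}\,\partial_l \eta_j + a_{kj}\,\partial_m \partial_l \eta_j ,
\]
and multiplying by $a_{li}$, summing over $l$, and using $\partial_l \eta_j\, a_{li} = \delta_{ji}$ gives
\[
\partial_m a_{ki} = - a_{kj}\, \partial_m \partial_l \eta_j\, a_{li} .
\]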
\subsection{Final estimates}\label{sec_final} In this section, we collect the above estimates, thus completing the proof of the main theorem. \begin{proof}[Proof of Theorem~\ref{thm_main}] The inequalities \eqref{v_2.5+delta}, \eqref{divcurl1}, and \eqref{divcurl2}, combined with the tangential estimate \eqref{tang_est}, give the inequality \begin{equation}\label{apriori_est} \| v \|_{2.5+\delta }, \| \eta \|_{2.5+\delta }, \| \chi \eta \|_{3+\delta } \lesssim \int_0^t P\, {\rm d} s + \| \psi v_0 \|_{2.5+\delta }^2 + \| v_0 \|_{2+\delta }^2 + t \| \chi \omega_0 \|_{2+\delta } + \| \omega_0 \|_{1.5+\delta } +1 + \|v_0 \|_{L^2}, \end{equation} where $P$ is a polynomial in $\| v \|_{2.5+\delta }$, $\| \eta \|_{2.5+\delta }$, and $\| \chi \eta \|_{3+\delta }$. Note that, in order to estimate the boundary terms on the right-hand side in \eqref{divcurl1} and \eqref{divcurl2}, we have used \[ \| S\eta_3 \|_{L^2 (\Gamma_1 )} \leq \| a_{3l} S\eta_l \|_{L^2 (\Gamma_1 )} + \| (\delta_{3l} - a_{3l}) S\eta_l \|_{L^2 (\Gamma_1 )} \leq \| a_{3l} S\eta_l \|_{L^2 (\Gamma_1 )} + \varepsilon \| \chi \eta \|_{3+\delta } , \] where we applied \eqref{a-I_and_naeta-I} and a trace estimate in the second step, and we absorbed the last term into the left-hand side above. Moreover, in order to obtain the initial kinetic energy $\| v_0 \|_{L^2}$ in \eqref{apriori_est}, instead of $\| v \|_{L^2}$ from \eqref{v_2.5+delta}, we note that \[ \| v \|_{L^2} \leq \| v_0 \|_{L^2} + \int_0^t \| \partial_t v\|_{L^2} \,{\rm d} s \leq \| v_0 \|_{L^2} + \int_0^t \| a \|_{1.5+\delta } \| q \|_{1} \,{\rm d} s \lesssim \| v_0 \|_{L^2} + \int_0^t P \, {\rm d} s, \] where we used $\partial_t v=-a\nabla q$ in the second inequality and \eqref{a_in_H1.5}, \eqref{pressure_est_2.5} in the last. 
The a priori estimate \eqref{apriori_est} allows us to apply a standard Gronwall argument, concluding the proof. \end{proof} \begin{thebibliography}{[WWZ]} \small \bibitem[ABZ1]{ABZ1} T.~Alazard, N.~Burq, and C.~Zuily, \emph{On the water-wave equations with surface tension}, Duke Math.~J.~\textbf{158} (2011), no.~3, 413--499. \bibitem[ABZ2]{ABZ2} T.~Alazard, N.~Burq, and C.~Zuily, \emph{Low regularity {C}auchy theory for the water-waves problem: canals and wave pools}, Lectures on the analysis of nonlinear partial differential equations. {P}art 3, Morningside Lect. Math., vol.~3, Int. Press, Somerville, MA, 2013, pp.~1--42. \bibitem[AD]{AD} T.~Alazard and J.-M.~Delort, \emph{Global solutions and asymptotic behavior for two dimensional gravity water waves}, Ann. Sci. \'Ec. Norm. Sup\'er. (4) \textbf{48} (2015), no.~5, 1149--1238. \bibitem[AM1]{AM1} D.M.~Ambrose and N.~Masmoudi, \emph{The zero surface tension limit of two-dimensional water waves}, Comm. Pure Appl. Math. \textbf{58} (2005), no.~10, 1287--1315. \bibitem[AM2]{AM2} D.M.~Ambrose and N.~Masmoudi, \emph{The zero surface tension limit of three-dimensional water waves}, Indiana Univ. Math. J. \textbf{58} (2009), no.~2, 479--521. \bibitem[B]{B} J.T.~Beale, \emph{The initial value problem for the {N}avier-{S}tokes equations with a free surface}, Comm. Pure Appl. Math. \textbf{34} (1981), no.~3, 359--392. \bibitem[BB]{BB} J.P.~Bourguignon and H.~Brezis, \emph{Remarks on the {E}uler equation}, J.~Functional Analysis \textbf{15} (1974), 341--363. \bibitem[CL]{CL} A.~Castro and D.~Lannes, \emph{Well-posedness and shallow-water stability for a new {H}amiltonian formulation of the water waves equations with vorticity}, Indiana Univ. Math. J. \textbf{64} (2015), no.~4, 1169--1270. \bibitem[ChL]{ChL} D.~Christodoulou and H.~Lindblad, \emph{On the motion of the free surface of a liquid}, Comm. 
Pure Appl. Math. \textbf{53} (2000), no.~12, 1536--1602. \bibitem[CS1]{CS1} D.~Coutand and S.~Shkoller, \emph{Well-posedness of the free-surface incompressible {E}uler equations with or without surface tension}, J. Amer. Math. Soc. \textbf{20} (2007), no.~3, 829--930. \bibitem[CS2]{CS2} D.~Coutand and S.~Shkoller, \emph{A simple proof of well-posedness for the free-surface incompressible {E}uler equations}, Discrete Contin. Dyn. Syst. Ser. S \textbf{3} (2010), no.~3, 429--449. \bibitem[DE1]{DE1} M.M.~Disconzi and D.G.~Ebin, \emph{The free boundary {E}uler equations with large surface tension}, J. Differential Equations \textbf{261} (2016), no.~2, 821--889. \bibitem[DE2]{DE2} M.M.~Disconzi and D.G.~Ebin, \emph{On the limit of large surface tension for a fluid motion with free boundary}, Comm. Partial Differential Equations \textbf{39} (2014), no.~4, 740--779. \bibitem[DK]{DK} M.M.~Disconzi and I.~Kukavica, \emph{A priori estimates for the free-boundary Euler equations with surface tension in three dimensions} (submitted). \bibitem[E]{E} D.G.~Ebin, \emph{The equations of motion of a perfect fluid with free boundary are not well posed}, Comm. Partial Differential Equations \textbf{12} (1987), no.~10, 1175--1201. \bibitem[EL]{EL} T.~Elgindi and D.~Lee, \emph{Uniform regularity for free-boundary Navier-Stokes equations with surface tension}, arXiv:1403.0980 (2014). \bibitem[GMS]{GMS} P.~Germain, N.~Masmoudi, and J.~Shatah, \emph{Global solutions for the gravity water waves equation in dimension 3}, Ann. of Math. (2) \textbf{175} (2012), no.~2, 691--754. \bibitem[HIT]{HIT} J.K.~Hunter, M.~Ifrim, and D.~Tataru, \emph{Two dimensional water waves in holomorphic coordinates}, Comm. Math. Phys. \textbf{346} (2016), no.~2, 483--552. 
\bibitem[IT]{IT} M.~Ifrim and D.~Tataru, \emph{Two dimensional water waves in holomorphic coordinates {II}: {G}lobal solutions}, Bull. Soc. Math. France \textbf{144} (2016), no.~2, 369--394. \bibitem[I]{I} T.~Iguchi, \emph{Well-posedness of the initial value problem for capillary-gravity waves}, Funkcial. Ekvac. \textbf{44} (2001), no.~2, 219--241. \bibitem[IK]{IK} M.~Ignatova and I.~Kukavica, \emph{On the local existence of the free-surface {E}uler equation with surface tension}, Asymptot. Anal. \textbf{100} (2016), no.~1-2, 63--86. \bibitem[IP]{IP} A.D.~Ionescu and F.~Pusateri, \emph{Global solutions for the gravity water waves system in 2d}, Invent. Math. \textbf{199} (2015), no.~3, 653--804. \bibitem[KP]{KP} T.~Kato and G.~Ponce, \emph{Commutator estimates and the {E}uler and {N}avier-{S}tokes equations}, Comm. Pure Appl. Math. \textbf{41} (1988), no.~7, 891--907. \bibitem[KT1]{KT1} I.~Kukavica and A.~Tuffaha, \emph{On the 2{D} free boundary {E}uler equation}, Evol. Equ. Control Theory \textbf{1} (2012), no.~2, 297--314. \bibitem[KT2]{KT2} I.~Kukavica and A.~Tuffaha, \emph{A regularity result for the incompressible {E}uler equation with a free interface}, Appl. Math. Optim. \textbf{69} (2014), no.~3, 337--358. \bibitem[KT3]{KT3} I.~Kukavica and A.~Tuffaha, \emph{Well-posedness for the compressible {N}avier-{S}tokes-{L}am\'e system with a free interface}, Nonlinearity \textbf{25} (2012), no.~11, 3111--3137. \bibitem[KT4]{KT4} I.~Kukavica and A.~Tuffaha, \emph{A sharp regularity result for the {E}uler equation with a free interface}, Asymptot. Anal. \textbf{106} (2018), no.~2, 121--145. \bibitem[KTV]{KTV} I.~Kukavica, A.~Tuffaha, and V.~Vicol, \emph{On the local existence and uniqueness for the 3D Euler equation with a free interface}, Appl.\ Math.\ Optim. (2016), doi:10.1007/s00245-016-9360-6. 
\bibitem[KTVW]{KTVW} I.~Kukavica, A.~Tuffaha, V.~Vicol, and F.~Wang, \emph{On the existence for the free interface 2{D} {E}uler equation with a localized vorticity condition}, Appl. Math. Optim. \textbf{73} (2016), no.~3, 523--544. \bibitem[L]{L} D.~Lannes, \emph{Well-posedness of the water-waves equations}, J. Amer. Math. Soc. \textbf{18} (2005), no.~3, 605--654 (electronic). \bibitem[Li]{Li} D.~Li, \emph{On {K}ato-{P}once and fractional {L}eibniz}, Rev.\ Mat.\ Iberoam.~\textbf{35} (2019), no.~1, 23--100. \bibitem[Lin1]{Lin1} H.~Lindblad, \emph{Well-posedness for the linearized motion of an incompressible liquid with free surface boundary}, Comm. Pure Appl. Math. \textbf{56} (2003), no.~2, 153--197. \bibitem[Lin2]{Lin2} H.~Lindblad, \emph{Well-posedness for the motion of an incompressible liquid with free surface boundary}, Ann. of Math. (2) \textbf{162} (2005), no.~1, 109--194. \bibitem[MC]{MC} B.~Muha and S.~\v Cani\'c, \emph{Fluid-structure interaction between an incompressible, viscous 3{D} fluid and an elastic shell with nonlinear {K}oiter membrane energy}, Interfaces Free Bound. \textbf{17} (2015), no.~4, 465--495. \bibitem[MR]{MR} N.~Masmoudi and F.~Rousset, \emph{Uniform regularity and vanishing viscosity limit for the free surface {N}avier-{S}tokes equations}, Arch. Ration. Mech. Anal. \textbf{223} (2017), no.~1, 301--417. \bibitem[N]{N} V.I.~Nalimov, \emph{The Cauchy-Poisson problem}, Dinamika Splo\v sn. Sredy (1974), no.~Vyp. 18 Dinamika Zidkost. so Svobod. Granicami, 104--210, 254. \bibitem[OT]{OT} M.~Ogawa and A.~Tani, \emph{Free boundary problem for an incompressible ideal fluid with surface tension}, Math. Models Methods Appl. Sci. \textbf{12} (2002), no.~12, 1725--1740. \bibitem[P]{P} F.~Pusateri, \emph{On the limit as the surface tension and density ratio tend to zero for the two-phase {E}uler equations}, J. Hyperbolic Differ. 
Equ. \textbf{8} (2011), no.~2, 347--373. \bibitem[S]{S} B.~Schweizer, \emph{On the three-dimensional {E}uler equations with a free boundary subject to surface tension}, Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire \textbf{22} (2005), no.~6, 753--781. \bibitem[Sh]{Sh} M.~Shinbrot, \emph{The initial value problem for surface waves under gravity. {I}. {T}he simplest case}, Indiana Univ. Math. J. \textbf{25} (1976), no.~3, 281--300. \bibitem[Shn]{Shn} A.I.~Shnirelman, \emph{The geometry of the group of diffeomorphisms and the dynamics of an ideal incompressible fluid}, Mat. Sb. (N.S.) \textbf{128(170)} (1985), no.~1, 82--109, 144. \bibitem[SZ1]{SZ1} J.~Shatah and C.~Zeng, \emph{Geometry and a priori estimates for free boundary problems of the {E}uler equation}, Comm. Pure Appl. Math. \textbf{61} (2008), no.~5, 698--744. \bibitem[SZ2]{SZ2} J.~Shatah and C.~Zeng, \emph{Local well-posedness for fluid interface problems}, Arch. Ration. Mech. Anal. \textbf{199} (2011), no.~2, 653--705. \bibitem[T]{T} A.~Tani, \emph{Small-time existence for the three-dimensional Navier-Stokes equations for an incompressible fluid with a free surface}, Arch. Rational Mech. Anal. \textbf{133} (1996), no.~4, 299--331. \bibitem[WZZZ]{WZZZ} C.~Wang, Z.~Zhang, W.~Zhao, and Y.~Zheng, \emph{Local well-posedness and break-down criterion of the incompressible Euler equations with free boundary}, arXiv:1507.02478, 2015. \bibitem[W1]{W1} S.~Wu, \emph{Well-posedness in Sobolev spaces of the full water wave problem in {$2$}-{D}}, Invent. Math. \textbf{130} (1997), no.~1, 39--72. \bibitem[W2]{W2} S.~Wu, \emph{Well-posedness in Sobolev spaces of the full water wave problem in 3-{D}}, J. Amer. Math. Soc. \textbf{12} (1999), no.~2, 445--495. \bibitem[W3]{W3} S.~Wu, \emph{Global wellposedness of the 3-{D} full water wave problem}, Invent. Math. \textbf{184} (2011), no.~1, 125--220. 
\bibitem[Y1]{Y1} H.~Yosihara, \emph{Gravity waves on the free surface of an incompressible perfect fluid of finite depth}, Publ. Res. Inst. Math. Sci. \textbf{18} (1982), no.~1, 49--96. \bibitem[Y2]{Y2} H.~Yosihara, \emph{Capillary-gravity waves for an incompressible ideal fluid}, J. Math. Kyoto Univ. \textbf{23} (1983), no.~4, 649--694. \bibitem[ZZ]{ZZ} P.~Zhang and Z.~Zhang, \emph{On the free boundary problem of three-dimensional incompressible {E}uler equations}, Comm. Pure Appl. Math. \textbf{61} (2008), no.~7, 877--940. \end{thebibliography} \end{document}
\begin{document} \maketitle \begin{abstract} We study $\sigma$-ideals and regularity properties related to the ``filter-Laver'' and ``dual-filter-Laver'' forcing partial orders. An important innovation which enables this study is a dichotomy theorem proved recently by Miller \cite{MillerMesses}. \end{abstract} \section{Introduction} In this paper, $F$ will always be a filter on $\omega$ (or a suitable countable set). We will use $F^-$ to refer to the dual ideal of all $a \subseteq \omega$ such that $\omega \setminus a \in F$, and $F^+$ to refer to the collection of all $a \subseteq \omega$ such that $a \notin F^-$. ${\sf Cof}$ and ${\sf Fin}$ denote the filter of cofinite subsets of $\omega$ and the ideal of finite subsets of $\omega$, respectively. \begin{Def} An \emph{$F$-Laver tree} is a tree $T \subseteq \omega^{<\omega}$ such that for all $\sigma \in T$ extending ${\rm stem}(T)$, ${\rm Succ}_T(\sigma) \in F$. An \emph{$F^+$-Laver tree} is a tree $T \subseteq \omega^{<\omega}$ such that for all $\sigma \in T$ extending ${\rm stem}(T)$, ${\rm Succ}_T(\sigma) \in F^+$. We use $\mathbb{L}_F$ and $\mathbb{L}_{F^+}$ to denote the partial orders of $F$-Laver and $F^+$-Laver trees, respectively, ordered by inclusion. \end{Def} If $F = {\sf Cof}$ then $\mathbb{L}_{F^+}$ is the standard \emph{Laver forcing} $\mathbb{L}$, and $\mathbb{L}_F$ is (a version of) the standard \emph{Hechler forcing} $\mathbb{D}$. Both $\mathbb{L}_F$ and $\mathbb{L}_{F^+}$ have been used as forcing notions in the literature; see, e.g., \cite{Groszek}. As usual, the generic real added by these forcings can be defined as the limit of the stems of conditions in the generic filter. It is easy to see that in both cases, this generic real is dominating. 
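To see the last claim, here is a sketch, under the natural assumption that $F \supseteq {\sf Cof}$ (so that every set in $F$ or $F^+$ is infinite and $F$ and $F^+$ are closed under removing finitely many elements). Given $g \in \omega^\omega$ in the ground model and a condition $T$, prune each successor set above the stem by setting
\[
{\rm Succ}_S(\sigma) := {\rm Succ}_T(\sigma) \setminus \bigl( g(|\sigma|)+1 \bigr),
\]
which stays in $F$ (respectively $F^+$), since only finitely many elements are removed. The resulting condition $S \leq T$ forces that the generic real $\dot{g}_{\rm gen}$ satisfies $\dot{g}_{\rm gen}(n) > g(n)$ for all $n \geq |{\rm stem}(T)|$, and a density argument then shows that the generic real eventually dominates every ground-model function.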
It is also known that if $F$ is not an ultrafilter, then $\mathbb{L}_F$ adds a Cohen real, and if $F$ is an ultrafilter, then $\mathbb{L}_F$ adds a Cohen real if and only if $F$ is not a \emph{nowhere dense ultrafilter} (see Definition \ref{nwdultrafilter}). Moreover, $\mathbb{L}_F$ is $\sigma$-centered and hence satisfies the ccc, and it is known that $\mathbb{L}_{F^+}$ satisfies Axiom A (see \cite[Theorem]{Groszek} and Lemma \ref{b} (3)). In this paper, we consider $\sigma$-ideals and regularity properties naturally related to $\mathbb{L}_F$ and $\mathbb{L}_{F^+}$, and study the regularity properties for sets in the low projective hierarchy, following ideas from \cite{BrLo99, Ik10, KhomskiiThesis}. An important technical innovation is a dichotomy theorem proved recently by Miller in \cite{MillerMesses} (see Theorem \ref{dichotomy}), which allows us to simplify the $\sigma$-ideal for $\mathbb{L}_{F^+}$ when restricted to Borel sets, and shows that membership of Borel sets in it is $\boldsymbol{\Sigma}^1_2$-definable. One question may occur to the reader of this paper: why are we not considering the \emph{filter-Mathias} forcing alongside the filter-Laver forcing, when clearly the two forcing notions (and their derived $\sigma$-ideals and regularity properties) are closely related? The answer is that, although the basic results from Section 2 do indeed hold for filter-Mathias, there is no corresponding dichotomy theorem like Theorem \ref{dichotomy}. In fact, by a result of Sabok \cite{MarcinSabok}, even the $\sigma$-ideal corresponding to the \emph{standard} Mathias forcing is not $\boldsymbol{\Sigma}^1_2$ on Borel sets, implying that even in this simple case, there is no hope of a similar dichotomy theorem. It seems that in the Mathias case, a more subtle analysis is required. In Section \ref{Sec2} we give the basic definitions and prove some easy properties. 
In Section \ref{Sec3} we present Miller's dichotomy and the corresponding $\sigma$-ideal. In Section \ref{Sec4} we study direct relationships that hold between the regularity properties regardless of the complexity of $F$, whereas in Section \ref{Sec5} we prove stronger results under the assumption that $F$ is an analytic filter. \section{$\mathbb{L}_F$- and $\mathbb{L}_{F^+}$-measurable sets.} \label{Sec2} In \cite{Ik10}, Ikegami provided a natural framework for studying $\sigma$-ideals and regularity properties related to tree-like forcing notions, generalising the concepts of \emph{meagerness} and the \emph{Baire property}. This framework proved to be very useful in a number of circumstances; see, e.g., \cite{KhomskiiThesis, LaguzziPaper, KhomskiiLaguzzi1}. \begin{Def} \label{a} Let $\mathbb{P}$ be $\mathbb{L}_F$ or $\mathbb{L}_{F^+}$ and let $A \subseteq \omega^\omega$. \begin{enumerate} \item $A \in \mathcal{N}_\mathbb{P}$ iff $\forall T \in \mathbb{P} \: \exists S \leq T \:([S] \cap A = \varnothing)$. \item $A \in \mathcal{I}_\mathbb{P}$ iff $A$ is contained in a countable union of sets in $\mathcal{N}_\mathbb{P}$. \item $A$ is \emph{$\mathbb{P}$-measurable} iff $\forall T \in \mathbb{P} \: \exists S \leq T \: ([S] \subseteq^* A$ or $[S] \cap A =^* \varnothing)$, where $\subseteq^*$ and $=^*$ stand for ``modulo a set in $\mathcal{I}_\mathbb{P}$''. \end{enumerate} \end{Def} \begin{Lem} The collection $\{[T] \mid T \in \mathbb{L}_F\}$ forms a topology base. The resulting topology refines the standard topology, and the space satisfies the Baire category theorem $($i.e., $[T] \notin \mathcal{I}_{\mathbb{L}_F}$ for all $T \in \mathbb{L}_F)$.\end{Lem} \begin{proof} Clearly, for all $S,T \in \mathbb{L}_F$ the intersection $S \cap T$ is either empty or an $\mathbb{L}_F$-condition. 
A basic open set in the standard topology trivially corresponds to a tree in $\mathbb{L}_F$. For the Baire category theorem, let the sets $A_n$ be nowhere dense and, given an arbitrary $T \in \mathbb{L}_F$, build a sequence $T = T_0 \geq T_1 \geq T_2 \geq \dots$ with strictly increasing stems such that $[T_n] \cap A_n = \varnothing$ for all $n$. Then the limit of the stems is an element of $[T] \setminus \bigcup_n A_n$. \end{proof} We use $\tau_{\mathbb{L}_F}$ to denote the topology on $\omega^\omega$ generated by $\{[T] \mid T \in \mathbb{L}_F\}$. Clearly $\mathcal{N}_{\mathbb{L}_F}$ is the collection of $\tau_{\mathbb{L}_F}$-nowhere dense sets and $\mathcal{I}_{\mathbb{L}_F}$ the collection of $\tau_{\mathbb{L}_F}$-meager sets. Moreover, we recall the following fact, which holds in arbitrary topological spaces (the proof is similar to \cite[Theorem 8.29]{Kechris}): \begin{Fact} Let $\mathcal{X}$ be any topological space, and $A \subseteq \mathcal{X}$. Then the following are equivalent: \begin{enumerate} \item $A$ satisfies the Baire property. \item For every basic open $O$ there is a basic open $U \subseteq O$ such that $U \subseteq^* A$ or $U \cap A =^* \varnothing$, where $\subseteq^*$ and $=^*$ refer to ``modulo meager''. \end{enumerate} In particular, $A \subseteq \omega^\omega$ is $\mathbb{L}_F$-measurable iff $A$ satisfies the $\tau_{\mathbb{L}_F}$-Baire property. \end{Fact} What about the dual forcing $\mathbb{L}_{F^+}$? Notice that a topological approach cannot work in general: \begin{Lem} The collection $\{[T] \mid T \in \mathbb{L}_{F^+}\}$ generates a topology base iff $F$ is an ultrafilter. 
\end{Lem} \begin{proof} If $F$ is not an ultrafilter, fix $Z$ such that $Z \in F^+$ and $(\omega \setminus Z) \in F^+$, and consider trees $S, T \in \mathbb{L}_{F^+}$ defined so that $\forall \sigma \in S \: ({\rm Succ}_S(\sigma) = Z \cup \{0\})$ and $\forall \tau \in T \: ({\rm Succ}_T(\tau) = (\omega \setminus Z) \cup \{0\})$. Then $[S] \cap [T]$ is nonempty, but it contains no set of the form $[R]$ with $R \in \mathbb{L}_{F^+}$, since any such $R$ would have to satisfy ${\rm Succ}_R(\sigma) \subseteq \{0\}$ above its stem, and $\{0\} \notin F^+$. Conversely, if $F$ is an ultrafilter then $F^+ = F$, so the intersection of two conditions is either empty or again a condition. \end{proof} Instead, to study $\mathbb{L}_{F^+}$, we rely on combinatorial methods familiar from Laver forcing. For every $n$, define $\leq_n$ by: $$S \leq_n T \; : \Leftrightarrow \; S \leq T \text{ and } S \cap \omega^{\leq k+n} = T \cap \omega^{\leq k+n},$$ where $k = |{\rm stem}(T)|$. If $T_0 \geq_0 T_1 \geq_1 \dots$ is a decreasing sequence then $T := \bigcap_n{T_n} \in \mathbb{L}_{F^+}$ and $T \leq T_n$ for every $n$. \begin{Lem} \label{b} Let $F$ be a filter on $\omega$. Then: \begin{enumerate} \item $\mathbb{L}_{F^+}$ has \emph{pure decision}, i.e., for every sentence $\varphi$ of the forcing language and every $T \in \mathbb{L}_{F^+}$, there is $S \leq_0 T$ such that $S \Vdash \varphi$ or $S \Vdash \lnot \varphi$. \item For all $A \subseteq \omega^\omega$, the following are equivalent: \begin{enumerate} \item $A \in \mathcal{N}_{\mathbb{L}_{F^+}}$, \item $\forall T \in \mathbb{L}_{F^+} \: \exists S \leq_0 T \:( [S] \cap A = \varnothing)$. \end{enumerate} \item $\mathcal{N}_{\mathbb{L}_{F^+}} = \: \mathcal{I}_{\mathbb{L}_{F^+}}$. \item For all $A \subseteq \omega^\omega$, the following are equivalent: \begin{enumerate} \item $A$ is $\mathbb{L}_{F^+}$-measurable, \item $\forall T \in \mathbb{L}_{F^+} \: \exists S \leq T \: ([S] \subseteq A$ or $[S] \cap A = \varnothing)$, \item $\forall T \in \mathbb{L}_{F^+} \: \exists S \leq_0 T \: ([S] \subseteq A$ or $[S] \cap A = \varnothing)$. \end{enumerate} \item The collection of $\mathbb{L}_{F^+}$-measurable sets forms a $\sigma$-algebra. 
\end{enumerate} \end{Lem} \begin{proof} Since many of the arguments here are similar, we prove the first assertion and only sketch the others. \begin{enumerate} \item Fix $\phi$ and $T$ and let $u := {\rm stem}(T)$. For $\sigma \in T$ extending $u$, say: \begin{itemize} \item $\sigma$ is \emph{positive-good} if $\exists S \leq_0 T {\uparrow} \sigma$ such that $S \Vdash \phi$, \item $\sigma$ is \emph{negative-good} if $\exists S \leq_0 T {\uparrow} \sigma$ such that $S \Vdash \lnot \phi$, \item $\sigma$ is \emph{bad} if neither of the above holds.\end{itemize} We claim that $u$ is positive-good or negative-good, which completes the proof. Assume, towards a contradiction, that $u$ is bad. Partition ${\rm Succ}_T(u)$ into $Z_0, Z_1$ and $Z_2$ by setting $n \in Z_0$ iff $u {^\frown} \left<n\right>$ is positive-good, $n \in Z_1$ iff $u {^\frown} \left<n\right>$ is negative-good, and $n \in Z_2$ iff $u {^\frown} \left<n\right>$ is bad. One of the three components must be in $F^+$. But if it is $Z_0$ then $S:= \bigcup_{n \in Z_0} S_n \leq_0 T$, where each $S_n \leq_0 T {\uparrow} (u {^\frown} \left<n\right>)$ witnesses that $u {^\frown} \left<n\right>$ is positive-good, and $S \Vdash \phi$, thus $u$ is positive-good contrary to assumption; likewise, if $Z_1$ is in $F^+$ then $u$ is negative-good contrary to assumption. Hence, $Z_2$ must be in $F^+$. Now, for each $n \in Z_2$, use the same argument to obtain an $F^+$-positive set of successors of $u {^\frown}\left<n\right>$ such that for all $m$ in this set, $u {^\frown} \left<n,m\right>$ is bad, and so on. \medskip \noindent This way we construct a tree $T^* \leq T$ such that all $\sigma \in T^*$ are bad. But there is a $T^{**} \leq T^*$ deciding $\phi$, which means that ${\rm stem}(T^{**})$ is either positive-good or negative-good, leading to a contradiction. \item Let $A \in \mathcal{N}_{\mathbb{L}_{F^+}}$, fix $T$, and let $u = {\rm stem}(T)$.
For $\sigma \in T$ extending $u$, say that $\sigma$ is \emph{good} if $\exists S \leq_0 T {\uparrow} \sigma$ such that $[S] \cap A = \varnothing$, and $\sigma$ is \emph{bad} otherwise. By the same argument as above we prove that $u$ is good. \item Suppose $A_n \in \mathcal{N}_{\mathbb{L}_{F^+}}$ for all $n$. Fix $T \in \mathbb{L}_{F^+}$. Clearly it is enough to produce a fusion sequence $T = T_0 \geq_0 T_1 \geq_1 \dots$ such that for all $n$, $[T_n] \cap A_n = \varnothing$. So suppose we have constructed $T_n$. Let $\{u_i \mid i<\omega\}$ enumerate all the nodes in $T_n$ of length $|{\rm stem}(T_n)| + n$. For each $u_i$, use $(2)$ to find $S_i \leq_0 T_n {\uparrow} u_i$ with $[S_i] \cap A_{n+1} = \varnothing$. Let $T_{n+1} := \bigcup_i S_i$. Then clearly $T_{n+1} \leq_n T_n$ and $[T_{n+1}] \cap A_{n+1} = \varnothing$ as required. \item For $(a) \Rightarrow (b)$, use the fact that $\mathcal{I}_{\mathbb{L}_{F^+}} = \; \mathcal{N}_{\mathbb{L}_{F^+}}$. For $(b) \Rightarrow (c)$, use the same argument as in $(1)$. \item It suffices to show closure under countable unions. Suppose each $A_n$ is $\mathbb{L}_{F^+}$-measurable and fix $T \in \mathbb{L}_{F^+}$. If for some $n$ there is $S \leq T$ with $[S] \subseteq A_n$ then we are done. Otherwise (using the equivalence from (4)) for every $n$, there is $S \leq T$ such that $[S] \cap A_n = \varnothing$. Then an argument like in (3) shows that there is $S \leq T$ such that $[S] \cap \bigcup_n A_n = \varnothing$.
\end{enumerate} \end{proof} \begin{Remark} \label{strongproper} Note that an argument like in (4) above in fact shows that $\mathbb{L}_{F^+}$ satisfies a stronger form of properness, namely, for all countable elementary models $M \prec \mathcal{H}_\theta$ and all $T \in \mathbb{L}_{F^+}$, there exists $S \leq T$ such that every $x \in [S]$ is $\mathbb{L}_{F^+}$-generic over $M$. \end{Remark} Again it is interesting to ask whether any of the ``simplifications'' (1)--(4) from the above Lemma might go through for $\mathbb{L}_F$, too. \begin{Lem} If we replace $\mathbb{L}_{F^+}$ with $\mathbb{L}_F$ in Lemma \ref{b}, then the statements (1)--(4) are all equivalent to each other, and equivalent to the statement ``$F$ is an ultrafilter''. \end{Lem} \begin{proof} If $F$ is not an ultrafilter, let $Z$ be such that $Z \in F^+$ and $(\omega \setminus Z) \in F^+$, let $A_n := \{x \in \omega^\omega \mid \forall m \geq n \: (x(m) \in Z)\}$ and $A = \bigcup_n A_n = \{x \in \omega^\omega \mid \forall^\infty m \: (x(m) \in Z)\}$. Also, let $x_G$ denote the $\mathbb{L}_F$-generic real. We leave it to the reader to verify that \begin{itemize} \item the statement ``$x_{G}(0) \in Z$'' cannot be decided by any $\mathbb{L}_F$-condition with empty stem (falsifying (1)), \item $Z^\omega \in \mathcal{N}_{\mathbb{L}_F}$ but for every $T \in \mathbb{L}_F$ with empty stem we have $[T] \cap Z^\omega \neq \varnothing$ (falsifying (2)), \item $A_n \in \mathcal{N}_{\mathbb{L}_F}$ for all $n$, but $A \notin \mathcal{N}_{\mathbb{L}_F}$ (falsifying (3)), and \item $A$ is $\mathbb{L}_F$-measurable (see Theorem \ref{c}), but for every $T \in \mathbb{L}_F$ we have $[T] \not\subseteq A$ and $[T] \cap A \neq \varnothing$ (falsifying (4)).
\end{itemize} \end{proof} Thus, the situation can be neatly summarized as follows: when $F$ is \emph{not} an ultrafilter, $\mathbb{L}_F$ generates a topology but does not satisfy properties (1)--(4) from Lemma \ref{b}, while $\mathbb{L}_{F^+}$ satisfies those properties but does not generate a topology. $\mathbb{L}_F$-measurability is the Baire property in the $\tau_{\mathbb{L}_F}$-topology, whereas $\mathbb{L}_{F^+}$-measurability is the ``Marczewski''-property corresponding to the partial order $\mathbb{L}_{F^+}$, and $\mathcal{I}_{\mathbb{L}_{F^+}}$ is the ``Marczewski''-ideal corresponding to $\mathbb{L}_{F^+}$. In the interesting scenario when $F$ is an ultrafilter everything coincides, and the ideal $\mathcal{I}_{\mathbb{L}_F}$ of $\tau_{\mathbb{L}_F}$-meager sets is the same as the ideal of $\tau_{\mathbb{L}_F}$-nowhere dense sets. In this context, the ideal has been studied by Louveau in \cite{LouveauIdeal} and is sometimes called the \emph{Louveau ideal}. \begin{Thm} \label{c} Let $F$ be a filter on $\omega$. Every analytic or co-analytic set $A \subseteq \omega^\omega$ is both $\mathbb{L}_F$-measurable and $\mathbb{L}_{F^+}$-measurable. \end{Thm} \begin{proof} Since $\tau_{\mathbb{L}_F}$ refines the standard topology on $\omega^\omega$, analytic (co-analytic) sets are also analytic (co-analytic) in $\tau_{\mathbb{L}_F}$. By classical results, such sets have the $\tau_{\mathbb{L}_F}$-Baire property. \medskip \noindent For $\mathbb{L}_{F^+}$, suppose $A$ is analytic, defined by a $\Sigma^1_1(r)$ formula $\phi$. Let $T \in \mathbb{L}_{F^+}$. Let $S \leq T$ be a condition deciding $\phi(\dot{x}_G)$; without loss of generality $S \Vdash \phi(\dot{x}_G)$.
Let $M$ be a countable elementary submodel of a sufficiently large $\mathcal{H}_\theta$ with $S, r, F \in M$. By Remark \ref{strongproper}, we can find an $S' \leq S$ such that all $x \in [S']$ are $\mathbb{L}_{F^+} \cap M$-generic over $M$. Then for all such $x$ we have $M[x] \models \phi(x)$. By $\boldsymbol{\Sigma}^1_1$-absoluteness, $\phi(x)$ is really true. Thus we have $[S'] \subseteq A$. The co-analytic case is analogous. \end{proof} A different (forcing-free) proof of the second assertion will follow from Theorem \ref{dichotomy}. From the above it follows that we have dense embeddings $\mathbb{L}_F \hookrightarrow_d {\sf Borel}(\omega^\omega)/\mathcal{I}_{\mathbb{L}_F}$ and $\mathbb{L}_{F^+} \hookrightarrow_d {\sf Borel}(\omega^\omega)/\mathcal{I}_{\mathbb{L}_{F^+}}$. \begin{Def} Let $\boldsymbol{\Gamma}$ be a projective pointclass. The notations $\boldsymbol{\Gamma}(\mathbb{L}_F)$ and $\boldsymbol{\Gamma}(\mathbb{L}_{F^+})$ abbreviate the propositions ``all sets of complexity $\boldsymbol{\Gamma}$ are $\mathbb{L}_F$-measurable'' and ``all sets of complexity $\boldsymbol{\Gamma}$ are $\mathbb{L}_{F^+}$-measurable'', respectively. \end{Def} The statements $\boldsymbol{\Sigma}^1_2(\mathbb{L}_F)$ and $\boldsymbol{\Sigma}^1_2(\mathbb{L}_{F^+})$ are independent of ZFC, and we will study the exact strength of these statements in Section \ref{Sec4} (for arbitrary $F$) and Section \ref{Sec5} (for definable $F$). \section{A dichotomy theorem for $\mathbb{L}_{F^+}$} \label{Sec3} While $\mathcal{I}_{\mathbb{L}_F}$ is a ccc Borel-generated ideal exhibiting many familiar properties, $\mathcal{I}_{\mathbb{L}_{F^+}}$ is a ``Marczewski-style'' ideal, which is not Borel-generated and rather difficult to study.
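For orientation, consider the special case of the cofinite filter; this illustration is not needed later. Since ${\sf Cof}^+$ is the collection of infinite subsets of $\omega$, $\mathbb{L}_{{\sf Cof}^+}$ is classical Laver forcing $\mathbb{L}$, and the contrast reads:
$$\mathcal{I}_{\mathbb{L}_{\sf Cof}} \text{ is ccc and Borel-generated,} \qquad \mathcal{I}_{\mathbb{L}_{{\sf Cof}^+}} = \mathcal{I}_{\mathbb{L}} \text{ is not Borel-generated}$$
(the latter being the Marczewski-style Laver ideal, sometimes denoted $\ell^0$ in the literature).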
The rest of the paper depends crucially on the dichotomy result presented in this section, which simplifies the ideal $\mathcal{I}_{\mathbb{L}_{F^+}}$ when it is restricted to Borel sets. The proof, as well as several key insights, is due to Arnold Miller \cite{MillerMesses}. For motivation, recall the \emph{Laver dichotomy}, originally due to Goldstern et al.\ \cite{GoldsternGame}. \begin{Def} \label{aaa} If $f: \omega^{<\omega} \to \omega$ and $x \in \omega^\omega$, we say that \emph{$x$ strongly dominates $f$} if $\forall^\infty n \: (x(n) \geq f(x {\upharpoonright} n))$. A family $A \subseteq \omega^\omega$ is called \emph{strongly dominating} if for every $f: \omega^{<\omega} \to \omega$ there exists $x \in A$ which strongly dominates $f$. $\mathcal{D}$ denotes the ideal of sets $A$ which are \emph{not} strongly dominating. \end{Def} It is easy to see that if $T \in \mathbb{L}$ then $[T] \notin \mathcal{D}$, and the classical result \cite[Lemma 2.3]{GoldsternGame} shows that if $A$ is analytic, then either $A \in \mathcal{D}$ or there is a Laver tree $T$ such that $[T] \subseteq A$. The ideal $\mathcal{D}$ was discovered independently by Zapletal (cf. \cite[Lemma 3.3]{IsolatingCardinals}) and was studied, among others, in \cite{RepickyDeco, Deco}. Generalising this, we obtain the following definitions: \begin{Def} Let $F$ be a filter on $\omega$. If $\varphi: \omega^{<\omega} \to F$ and $x \in \omega^\omega$, we say that \emph{$x$ $F$-dominates $\varphi$} iff $\forall^\infty n \:(x(n) \in \varphi(x {\upharpoonright} n))$. A family $A \subseteq \omega^\omega$ is \emph{$F$-dominating} if for every $\varphi: \omega^{<\omega} \to F$ there exists $x \in A$ which $F$-dominates $\varphi$. $\mathcal{D}_{F^+}$ denotes the ideal of sets $A$ which are not $F$-dominating.
In other words: $$A \in \mathcal{D}_{F^+} \; :\Longleftrightarrow \; \exists \varphi: \omega^{<\omega} \to F \;\; \forall x \in A \;\; \exists^\infty n \:(x(n) \notin \varphi(x {\upharpoonright} n)). $$ \end{Def} In the above context, the terminology ``$F$-dominates'' might seem inappropriate, but we choose it in order to retain the analogy with Definition \ref{aaa}. Note that $\mathcal{D} = \mathcal{D}_{{\sf Cof}^+}$. \begin{Lem} $\mathcal{D}_{F^+}$ is a $\sigma$-ideal. \end{Lem} \begin{proof} Suppose $A_i \in \mathcal{D}_{F^+}$ for $i < \omega$. Let $\varphi_i$ witness this for each $i$, and define $\varphi$ by setting $\varphi(\sigma) := \bigcap_{i<|\sigma|} \varphi_i(\sigma)$. We claim that $\varphi$ witnesses that $A = \bigcup_{i<\omega} A_i \in \mathcal{D}_{F^+}$. Pick $x \in A$. There is $i$ such that $x \in A_i$, hence for infinitely many $n$ we have $x(n) \notin \varphi_i(x {\upharpoonright} n)$. But if $n > i$ then $\varphi(x {\upharpoonright} n) \subseteq \varphi_i(x {\upharpoonright} n)$. Therefore, for infinitely many $n$ we also have $x(n) \notin \varphi(x {\upharpoonright} n)$. \end{proof} \begin{Lem} \label{d} Let $A \subseteq \omega^\omega$. The following are equivalent: \begin{enumerate} \item $A \in \mathcal{D}_{F^+}$. \item $\forall \sigma \in \omega^{<\omega} \: \exists T \in \mathbb{L}_F$ with ${\rm stem}(T) = \sigma$, such that $[T] \cap A = \varnothing$. \item $\forall S \in \mathbb{L}_F \: \exists T \leq_0 S \: ([T] \cap A = \varnothing)$. \end{enumerate} \end{Lem} \begin{proof} The equivalence between (2) and (3) is clear, so we prove the equivalence between (1) and (2). \medskip \noindent First, note that if $\varphi: \omega^{<\omega} \to F$ and $\sigma \in \omega^{<\omega}$, then there is a unique $T_{\sigma, \varphi} \in \mathbb{L}_F$ such that ${\rm stem}(T_{\sigma,\varphi}) = \sigma$ and ${\rm Succ}_{T_{\sigma,\varphi}}(\tau) = \varphi(\tau)$ for every $\tau \in T_{\sigma,\varphi}$ with $\tau \supseteq \sigma$.
Conversely, for every $T \in \mathbb{L}_F$ with ${\rm stem}(T) = \sigma$, there exists a (not unique) $\varphi$ such that $T = T_{\sigma,\varphi}$. \medskip \noindent Now suppose $A \in \mathcal{D}_{F^+}$, as witnessed by $\varphi$, and let $\sigma \in \omega^{<\omega}$. Then $A \cap [T_{\sigma,\varphi}] = \varnothing$, since if $x \in A \cap [T_{\sigma,\varphi}]$ then $\forall n \geq |\sigma| \:( x(n) \in \varphi(x{\upharpoonright} n))$, contrary to the assumption. \medskip \noindent Conversely, suppose for every $\sigma$ there is $T_\sigma \in \mathbb{L}_F$ such that ${\rm stem}(T_\sigma) = \sigma$ and $A \cap [T_\sigma] = \varnothing$. For each $\sigma$, let $\varphi_\sigma: \omega^{<\omega} \to F$ be such that $T_\sigma = T_{\sigma, \varphi_\sigma}$. Then define $\varphi: \omega^{<\omega} \to F$ by $$\varphi(\sigma) = \bigcap_{\tau \subseteq \sigma} \varphi_\tau(\sigma).$$ We claim that $\varphi$ witnesses that $A \in \mathcal{D}_{F^+}$. Let $x \in A$ be arbitrary and let $\sigma \subseteq x$ be any initial segment. Then $x \notin [T_\sigma] = [T_{\sigma ,\varphi_\sigma}]$, hence there is $n > |\sigma|$ such that $x(n) \notin \varphi_\sigma(x {\upharpoonright} n)$. But by definition, since $\sigma \subseteq x {\upharpoonright} n$, we have $\varphi(x {\upharpoonright} n) \subseteq \varphi_\sigma(x {\upharpoonright} n)$. Therefore also $x(n) \notin \varphi(x {\upharpoonright} n)$; since $\sigma$ was arbitrary, such $n$ occur infinitely often. \end{proof} The following are easy consequences of the above; the proofs are left to the reader. \begin{Lem} \label{35} $\;$ \begin{enumerate} \item $\mathcal{D}_{F^+} \subseteq \mathcal{N}_{\mathbb{L}_F}$. \item $\mathcal{D}_{F^+} \subseteq \mathcal{I}_{\mathbb{L}_{F^+}}$ $($in particular, if $T \in \mathbb{L}_{F^+}$ then $[T] \notin \mathcal{D}_{F^+})$. \item If $F$ is an ultrafilter then $\mathcal{D}_{F^+} = \mathcal{N}_{\mathbb{L}_F} = \mathcal{I}_{\mathbb{L}_F} = \mathcal{I}_{\mathbb{L}_{F^+}}$.
\item If $F$ is not an ultrafilter then there is a closed witness to $\mathcal{D}_{F^+} \neq \mathcal{N}_{\mathbb{L}_F}$. \end{enumerate} \end{Lem} \begin{Thm}[Miller] \label{dichotomy} For every analytic $A$, either $A \in \mathcal{D}_{F^+}$ or there is $T \in \mathbb{L}_{F^+}$ such that $[T] \subseteq A$. \end{Thm} \begin{proof} See the proof of Theorem 3 and the comment after Theorem 8 in \cite{MillerMesses}.\footnote{Here we should also note that Miller's Theorem 3 is, in fact, a direct consequence of Goldstern et al.'s dichotomy \cite[Lemma 2.3]{GoldsternGame}. However, the point is that its generalisation to filters does not follow from the proof in \cite{GoldsternGame}, which uses infinite games and determinacy. Miller's proof, on the other hand, uses only classical methods and generalises directly to filters.} We need a slight modification of this proof: rather than talking about trees \emph{with empty stem}, we consider trees with a fixed stem $\sigma$. If $A \notin \mathcal{D}_{F^+}$, then by Lemma \ref{d} (2), there exists $\sigma \in \omega^{<\omega}$ such that for all $S \in \mathbb{L}_F$ with ${\rm stem}(S) = \sigma$, $[S] \cap A \neq \varnothing$. By applying the same argument as in \cite[Theorem 3]{MillerMesses}, we obtain a $T \in \mathbb{L}_{F^+}$ (with ${\rm stem}(T) = \sigma$) such that $[T] \subseteq A$.\end{proof} \begin{Remark} \label{forcingfree} As a direct consequence of this theorem, we obtain an alternative (forcing-free) proof of the second part of Theorem \ref{c}. Namely: let $A$ be analytic and let $T \in \mathbb{L}_{F^+}$ be arbitrary, so $A \cap [T]$ is analytic. If there exists $S \in \mathbb{L}_{F^+}$ with $[S] \subseteq A \cap [T]$ we are done; otherwise $A \cap [T] \in \mathcal{D}_{F^+}$, and we use Lemma \ref{d} to find a tree $U \in \mathbb{L}_F$ with ${\rm stem}(U) = {\rm stem}(T)$ and $[U] \cap A \cap [T] = \varnothing$.
Notice that $T \cap U \in \mathbb{L}_{F^+}$ (the intersection of an $F^+$-set with an $F$-set is again in $F^+$), so we are done.\end{Remark} Also, we now have a dense embedding $\mathbb{L}_{F^+} \hookrightarrow_d {\sf Borel}(\omega^\omega) / \mathcal{D}_{F^+}$, with $\mathcal{D}_{F^+}$ being a Borel-generated $\sigma$-ideal which is far easier to study than $\mathcal{I}_{\mathbb{L}_{F^+}}$. This will be of particular importance in Section \ref{Sec5} where we look at analytic filters. \section{Direct implications}\label{Sec4} We first look at some straightforward implications between various statements of the form $\boldsymbol{\Gamma}(\mathbb{L}_F)$, $\boldsymbol{\Gamma}(\mathbb{L}_{F^+})$ and $\boldsymbol{\Gamma}(\mathbb{P})$ for other well-known forcings $\mathbb{P}$. Here $\boldsymbol{\Gamma}$ denotes an arbitrary \emph{boldface pointclass}, i.e., a collection of subsets of $\omega^\omega$ closed under continuous pre-images and intersections with closed sets. No further assumptions on the complexity of $F$ are required. \medskip Recall the following reducibility relations for filters on a countable set: \begin{Def} Let $F,G$ be filters on ${\rm dom}(F)$ and ${\rm dom}(G)$, respectively. We say that: \begin{enumerate} \item $G$ is \emph{Katetov-reducible} to $F$, notation $G \leq_K F$, if there is a map $\pi: {\rm dom}(F) \to {\rm dom}(G)$ such that $a \in G \Rightarrow \pi^{-1}[a] \in F$. \item $G$ is \emph{Rudin-Keisler-reducible} to $F$, notation $G \leq_{RK} F$, if there is a map $\pi: {\rm dom}(F) \to {\rm dom}(G)$ such that $a \in G \Leftrightarrow \pi^{-1}[a] \in F$. \end{enumerate} \end{Def} \begin{Remark} Note that $G \leq_K F$ and $G \leq_{RK} F$ are equivalent to the corresponding reducibility relations between the dual ideals (i.e., between $G^-$ and $F^-$).
Also, it is clear that if $\pi$ witnesses $G \leq_K F$, then $a \in F^+ \Rightarrow \pi[a] \in G^+$, and if $\pi$ witnesses $G \leq_{RK} F$ then, in addition, $a \in F \Rightarrow \pi[a] \in G$. \end{Remark} \begin{Notation} We use the following slight abuse of notation: if $F$ is a filter and $a \in F^+$, then $F {\upharpoonright} a$ denotes the set $\{b \subseteq a \mid (a \setminus b) \in F^-\}$. In other words, $F {\upharpoonright} a$ is the filter with ${\rm dom}(F {\upharpoonright} a) = a$ which is dual to the ideal $(F^-) {\upharpoonright} a$. \end{Notation} \begin{Def} A filter $F$ is called \emph{$K$-uniform} if for every $a \in F^+$, $F {\upharpoonright} a \leq_K F$. \end{Def} \begin{Lem} \label{cc} Suppose $G {\upharpoonright} a \leq_K F$ for all $a \in G^+$. Then $\boldsymbol{\Gamma}(\mathbb{L}_{F^+}) \Rightarrow \boldsymbol{\Gamma}(\mathbb{L}_{G^+})$. In particular, this holds if $G$ is $K$-uniform and $G \leq_K F$. \end{Lem} \begin{proof} Let $A \in \boldsymbol{\Gamma}$ and $T \in \mathbb{L}_{G^+}$ be arbitrary. For all $\sigma \in T$ extending ${\rm stem}(T)$, let $X_\sigma := {\rm Succ}_T(\sigma)$ and fix $\pi_\sigma$ witnessing $G {\upharpoonright} X_\sigma \leq_K F$. Define $f': \omega^{<\omega} \to \omega^{<\omega}$ by $f'(\varnothing) := {\rm stem}(T)$ and $f'(\tau {^\frown} \left<n\right>) := f'(\tau) {^\frown} \left<\pi_{f'(\tau)}(n)\right>$, and let $f: \omega^\omega \to \omega^\omega$ be the limit of $f'$. Let $A' := f^{-1}[A]$. Then $A' \in \boldsymbol{\Gamma}$, so by assumption there is an $S \in \mathbb{L}_{F^+}$ such that $[S] \subseteq A'$ or $[S] \cap A' = \varnothing$, without loss of generality the former.
\medskip \noindent By assumption, we know that for every $\sigma \in S$ extending ${\rm stem}(S)$, $\pi_{f'(\sigma)}[{\rm Succ}_S(\sigma)] \in G^+$. To make sure that the image under $f$ is an $\mathbb{L}_{G^+}$-tree, prune $S$ to $S^* \subseteq S$, so that ${\rm stem}(S^*) = {\rm stem}(S)$, and for all $\sigma \in S^*$ extending ${\rm stem}(S^*)$, $\pi_{f'(\sigma)}[{\rm Succ}_{S^*}(\sigma)] = \pi_{f'(\sigma)}[{\rm Succ}_{S}(\sigma)]$, and $\pi_{f'(\sigma)} {\upharpoonright} {\rm Succ}_{S^*}(\sigma)$ is injective. Then $f'[S^*]$ is an $\mathbb{L}_{G^+}$-tree, and moreover $f[[S^*]] \subseteq [T] \cap A$. \end{proof} \begin{Lem} Suppose $G {\upharpoonright} a \leq_K F$ for all $a \in G^+$. Then $\boldsymbol{\Gamma}(\mathbb{L}_F) \Rightarrow \boldsymbol{\Gamma}(\mathbb{L}_{G^+})$. In particular, if $F$ is $K$-uniform then $\boldsymbol{\Gamma}(\mathbb{L}_F) \Rightarrow \boldsymbol{\Gamma}(\mathbb{L}_{F^+})$. \end{Lem} \begin{proof} Let $A \in \boldsymbol{\Gamma}$ and $T \in \mathbb{L}_{G^+}$ be arbitrary. Let $f$ and $A' := f^{-1}[A]$ be as above. By the same argument, it suffices to find $S \in \mathbb{L}_{F^+}$ such that $[S] \subseteq A'$ or $[S] \cap A' = \varnothing$. \medskip \noindent By assumption, there is an $\mathbb{L}_F$-tree $U$ with $[U] \setminus A' \in \mathcal{I}_{\mathbb{L}_F}$ or $[U] \cap A' \in \mathcal{I}_{\mathbb{L}_F}$, without loss of generality the former. Since $\mathcal{I}_{\mathbb{L}_F}$ is Borel-generated, let $B$ be a Borel $\mathcal{I}_{\mathbb{L}_F}$-positive set such that $B \subseteq A' \cap [U]$. By Lemma \ref{35}, $B$ is also $\mathcal{D}_{F^+}$-positive. But then, by Theorem \ref{dichotomy} there exists an $S \in \mathbb{L}_{F^+}$ such that $[S] \subseteq B$, which completes the proof.
\end{proof} \begin{Lem} Suppose $G \leq_{RK} F$. Then $\boldsymbol{\Gamma}(\mathbb{L}_F) \Rightarrow \boldsymbol{\Gamma}(\mathbb{L}_G)$. \end{Lem} \begin{proof} Let $\pi$ witness $G \leq_{RK} F$ and let $f: \omega^\omega \to \omega^\omega$ be defined by $f(x)(n) := \pi(x(n))$. Clearly $f$ is continuous in the standard sense. Moreover, we claim the following: \medskip \noindent \textbf{Claim.} \emph{$f$ is continuous and open as a function from $(\omega^\omega, \tau_{\mathbb{L}_F})$ to $(\omega^\omega, \tau_{\mathbb{L}_G})$}. \begin{proof} If $[T]$ is a basic open set in $\tau_{\mathbb{L}_G}$, then $T \in \mathbb{L}_G$ and so $f^{-1}[[T]]$ is a union of sets $[T']$ with $T' \in \mathbb{L}_F$ (one for each preimage of the stem of $T$), so it is open in $\tau_{\mathbb{L}_F}$. Conversely, if $[S]$ is basic open in $\tau_{\mathbb{L}_F}$, then $S \in \mathbb{L}_F$. Although the image of $[S]$ need not itself be of the form $[T]$ with $T \in \mathbb{L}_G$, we can argue as follows: given $y \in f[[S]]$, let $x \in [S]$ be such that $f(x) = y$. Then prune $S$ to $S^*$ in a similar way as in the proof of Lemma \ref{cc}, in such a way that the function $\pi$ restricted to ${\rm Succ}_{S^*}(\sigma)$ is injective for each $\sigma$ while the image $\pi[{\rm Succ}_{S^*}(\sigma)]$ remains unchanged. Moreover, we can do this so that $x \in [S^*]$. Then the coordinatewise $\pi$-image of $S^*$ is indeed an $\mathbb{L}_G$-tree, and moreover $y \in f[[S^*]] \subseteq f[[S]]$. Since this can be done for every $y \in f[[S]]$, it follows that $f[[S]]$ is open in $\tau_{\mathbb{L}_G}$. \end{proof} \medskip \noindent From this, it is not hard to conclude that if $A \in \mathcal{I}_{\mathbb{L}_G}$ then $f^{-1}[A] \in \mathcal{I}_{\mathbb{L}_F}$.
To complete the proof, let $A \in \boldsymbol{\Gamma}$ and let $O$ be a non-empty $\tau_{\mathbb{L}_G}$-open set. It suffices to find a non-empty $\tau_{\mathbb{L}_G}$-open $U \subseteq O$ such that $U \subseteq^* A$ or $U \cap A =^* \varnothing$, where $\subseteq^*$ and $=^*$ refer to ``modulo $\mathcal{I}_{\mathbb{L}_G}$''. \medskip \noindent Let $A' := f^{-1}[A]$ and $O' := f^{-1}[O]$. Since $A'$ has the Baire property in $\tau_{\mathbb{L}_F}$, there is an open $U' \subseteq O'$ such that $U' \subseteq^* A'$ or $U' \cap A' =^* \varnothing$ (without loss of generality the former), where $\subseteq^*$ and $=^*$ now refer to ``modulo $\mathcal{I}_{\mathbb{L}_F}$''. Then there is a Borel set $B$ such that $B \notin \mathcal{I}_{\mathbb{L}_F}$ and $B \subseteq A' \cap U'$. Hence $f[B]$ is an analytic subset of $A \cap O$, and by the Claim, $f[B] \notin \mathcal{I}_{\mathbb{L}_G}$. By the $\tau_{\mathbb{L}_G}$-Baire property of analytic sets, there is a non-empty $\tau_{\mathbb{L}_G}$-open $U$ such that $U \subseteq^* f[B]$. Hence $U \cap O \subseteq^* A$, which completes the proof. \end{proof} The relationships established in the above three lemmata are summarised in Figure \ref{fig}. \begin{figure} \caption{Implications between the properties for filters $F$ and $G$.} \label{fig} \end{figure} In particular, since ${\sf Cof} {\upharpoonright} a \leq_K F$ holds for every $F$ and every infinite $a$, we obtain the following corollary: \begin{Cor} \label{cor1} $\boldsymbol{\Gamma}(\mathbb{L}_F) \Rightarrow \boldsymbol{\Gamma}(\mathbb{L})$ and $\boldsymbol{\Gamma}(\mathbb{L}_{F^+}) \Rightarrow \boldsymbol{\Gamma}(\mathbb{L})$ for all $F$. \end{Cor} Next, we look at the relationship between $\mathbb{L}_F$-measurability and the classical Baire property.
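Before doing so, let us sketch the reduction behind Corollary \ref{cor1}; this is only an illustration, assuming (as usual) that every filter under consideration contains the cofinite sets. Given an infinite $a \subseteq \omega$, fix any finite-to-one surjection $\pi: \omega \to a$. Then for every $b \subseteq a$:
$$b \in {\sf Cof} {\upharpoonright} a \; \Longrightarrow \; \pi^{-1}[b] \text{ is cofinite in } \omega \; \Longrightarrow \; \pi^{-1}[b] \in F,$$
so $\pi$ witnesses ${\sf Cof} {\upharpoonright} a \leq_K F$, and the two preceding lemmata apply with $G = {\sf Cof}$.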
In accordance with common usage, we denote the statement ``all sets in $\boldsymbol{\Gamma}$ have the Baire property'' by $\boldsymbol{\Gamma}(\mathbb{C})$ ($\mathbb{C}$ denoting the Cohen forcing partial order). It is known that if $F$ is not an ultrafilter then $\mathbb{L}_F$ adds a Cohen real. Specifically, if $Z$ is such that $Z \notin F$ and $(\omega \setminus Z) \notin F$, and $f: \omega^\omega \to 2^\omega$ is defined by $$f(x)(n) := \left\{ \begin{array}{ll} 1 & \text{ if } x(n) \in Z \\ 0 & \text{ if } x(n) \notin Z \end{array} \right. $$ then $f$ is continuous with the property that if $A$ is meager then $f^{-1}[A] \in \mathcal{I}_{\mathbb{L}_F}$. \medskip Concerning ultrafilters, the following is known. \begin{Def} \label{nwdultrafilter} Let ${\sf NWD}$ denote the ideal of \emph{nowhere dense subsets of $2^{<\omega}$}, that is, of those $H \subseteq 2^{<\omega}$ such that $\forall \sigma \; \exists \tau \supseteq \sigma \;\forall \rho \supseteq \tau \: (\rho \notin H)$. An ultrafilter $U$ is called \emph{nowhere dense} iff ${\sf NWD} \not\leq_K U^-$. \end{Def} It is known that $\mathbb{L}_U$ adds a Cohen real iff $U$ is not a nowhere dense ultrafilter. Specifically, if $U$ is not nowhere dense and $\pi:\omega \to 2^{<\omega}$ is a witness to ${\sf NWD} \leq_K U^-$, then we can define a continuous function $f: \omega^\omega \to 2^\omega$ by $f(x) := \pi(x(0)) {^\frown} \pi(x(1)) {^\frown} \dots $. We leave it to the reader to verify that if $A$ is meager then $f^{-1}[A] \in \mathcal{I}_{\mathbb{L}_U}$. This easily leads to the following: \begin{Lem} \label{lemcohen} If $F$ is not an ultrafilter, or is an ultrafilter that is not nowhere dense, then $\boldsymbol{\Gamma}(\mathbb{L}_F) \Rightarrow \boldsymbol{\Gamma}(\mathbb{C})$.
\end{Lem} \begin{proof} In either case, we have a continuous $f: \omega^\omega \to 2^\omega$ such that $f$-preimages of meager sets are $\mathcal{I}_{\mathbb{L}_F}$-small, as above. Let $A \in \boldsymbol{\Gamma}$ and $\sigma \in 2^{<\omega}$ be arbitrary. Let $\varphi$ be a homeomorphism from $2^\omega$ to $[\sigma]$ and $A' := (\varphi \circ f)^{-1}[A]$. Then $A' \in \boldsymbol{\Gamma}$, so let $B$ be a Borel $\mathcal{I}_{\mathbb{L}_F}$-positive set with $B \subseteq A'$ or $B \cap A' = \varnothing$, without loss of generality the former. Then $(\varphi \circ f)[B]$ is an analytic non-meager subset of $A \cap [\sigma]$, so there exists $[\tau] \subseteq [\sigma]$ such that $[\tau] \subseteq^* A$, which is sufficient. \end{proof} Finally, an argument from \cite{MillerMesses} yields the following implication. Recall that a set $A \subseteq [\omega]^\omega$ is \emph{Ramsey} iff there exists $H \in [\omega]^\omega$ such that $[H]^\omega \subseteq A$ or $[H]^\omega \cap A = \varnothing$. \begin{Lem} If $U$ is an ultrafilter then $\boldsymbol{\Gamma}(\mathbb{L}_U) \Rightarrow \boldsymbol{\Gamma}({\rm Ramsey})$. \end{Lem} \begin{proof} In fact, we prove a stronger statement: if $A \subseteq \omega^{\uparrow \omega}$ (the set of strictly increasing sequences) is $\mathbb{L}_U$-measurable then $\{{\rm ran}(x) \mid x \in A\}$ is Ramsey. First note that, by Lemma \ref{b} (4), there exists a $T \in \mathbb{L}_U$ with empty stem, such that $[T] \subseteq A$ or $[T] \cap A = \varnothing$. Also, without loss of generality, we can assume that $[T] \subseteq \omega^{\uparrow \omega}$. \medskip \noindent Now proceed inductively: \begin{itemize} \item Let $n_0 \in {\rm Succ}_T(\varnothing)$ be arbitrary. \item Let $n_1 \in {\rm Succ}_T(\varnothing) \cap {\rm Succ}_T(\left<n_0\right>)$. \item Let $n_2 \in {\rm Succ}_T(\varnothing) \cap {\rm Succ}_T(\left<n_0\right>) \cap {\rm Succ}_T(\left<n_1\right>) \cap {\rm Succ}_T(\left<n_0, n_1\right>) $. \item etc.
\end{itemize} Since $U$ is a filter, we can always continue this process and make sure that for any $k$, any subsequence of the sequence $\left<n_0, \dots, n_k\right>$ is an element of $T$. It then follows that any infinite subsequence of the sequence $\left<n_i \mid i<\omega\right>$ is an element of $[T]$. This is exactly what we need. \end{proof} If $U$ is not an ultrafilter, then the above result does not hold in general. For example, considering the cofinite filter, both implications $\boldsymbol{\Gamma}(\mathbb{L}) \Rightarrow \boldsymbol{\Gamma}({\rm Ramsey})$ and $\boldsymbol{\Gamma}(\mathbb{D}) \Rightarrow \boldsymbol{\Gamma}({\rm Ramsey})$ are consistently false for $\boldsymbol{\Gamma} = \boldsymbol{\Delta}^1_2$ (see \cite[Section 6]{CichonPaper}). \section{Analytic filters} \label{Sec5} In this section, we focus on analytic filters (or ideals). This is important if we want the forcings to be definable, and if we want to apply results from \cite{Ik10, KhomskiiThesis}. Note that just for absoluteness of the forcing, it would have been sufficient to consider $\boldsymbol{\Sigma}^1_2$ or $\boldsymbol{\Pi}^1_2$ filters, by Shoenfield absoluteness. However, we also require the ideals and other related notions to have a sufficiently low complexity. For this reason, in this section the following assumption will hold: \medskip \noindent \textbf{Assumption.} $F$ is an analytic filter on $\omega$. \medskip It is clear that the statement ``$T \in \mathbb{L}_F$'' is as complex as $F$ itself. Recall from \cite[Section 3.6]{BaJu95} that a forcing notion is \emph{Suslin ccc} if it is ccc and the statements ``$T \in \mathbb{L}_F$'', ``$T \:\bot\: S$'' and ``$S \leq T$'' are $\boldsymbol{\Sigma}^1_1$-relations on the codes of trees. The following is clear: \begin{Fact} Let $F$ be analytic. Then $\mathbb{L}_F$ is a Suslin ccc forcing notion. \end{Fact} \begin{Lem} Let $F$ be analytic.
Then the ideals $\mathcal{I}_{\mathbb{L}_F}$ and $\mathcal{D}_F$ are $\boldsymbol{\Sigma}^1_2$ on Borel sets $($i.e., the membership of Borel sets in the ideal is a $\boldsymbol{\Sigma}^1_2$-property on the Borel codes$)$. \end{Lem} \begin{proof} A Borel set $B$ is in $\mathcal{D}_F$ iff $\exists \varphi: \omega^{<\omega} \to F \; \forall x \; (x \in B \to \exists^\infty n \:(x(n) \notin \varphi(x{\upharpoonright} n)))$. This is easily seen to be a $\boldsymbol{\Sigma}^1_2$ statement if $F$ is $\boldsymbol{\Sigma}^1_1$. \medskip \noindent For $\mathcal{I}_{\mathbb{L}_F}$, let $B$ be a Borel set. Notice that $B$ is $\tau_{\mathbb{L}_F}$-nowhere dense iff there exists a $\tau_{\mathbb{L}_F}$-open dense set $O$ such that $B \cap O = \varnothing$, iff there is a maximal antichain $A \subseteq \mathbb{L}_F$ such that $B \cap \bigcup\{[T] \mid T \in A\} = \varnothing$. By the ccc, such a maximal antichain can be coded by a real. The resulting computation yields a $\boldsymbol{\Sigma}^1_2$ statement. \end{proof} In \cite{BrHaLo, Ik10} the concept of a \emph{quasi-generic real} was introduced---a real avoiding all Borel sets in a certain $\sigma$-ideal coded in the ground model. This concept coincides with that of a generic real for ccc ideals, but yields a weaker notion for other (combinatorial) ideals; see e.g. \cite[Section 2.3]{KhomskiiThesis}. In the case of $\mathbb{L}_F$, ``quasi-generic reals'' are the $\mathbb{L}_F$-generic ones, whereas in the case of $\mathbb{L}_{F^+}$ they have a simple characterisation via the combinatorial ideal $\mathcal{D}_F$. \begin{Lem} \label{quasi} Let $M$ be a model of set theory. A real $x$ is $\mathbb{L}_F$-generic over $M$ iff $x \notin B$ for every Borel set $B \in \mathcal{I}_{\mathbb{L}_F}$ with code in $M$. \end{Lem} \begin{proof} See \cite[Lemma 2.3.2]{KhomskiiThesis}. \end{proof} \begin{Def} Let $M$ be a model of set theory.
We will call a real $x \in \omega^\omega$ \emph{$F$-dominating over $M$} if for every $\varphi: \omega^{<\omega} \to F$ with $\varphi \in M$, $x$ $F$-dominates $\varphi$, i.e., $\forall^\infty n \:(x(n) \in \varphi(x{\upharpoonright} n))$ (note that the statement ``$\varphi: \omega^{<\omega} \to F$'' is absolute between $M$ and larger models). \end{Def} \begin{Lem} Let $M$ be a model of set theory with $\omega_1 \subseteq M$. A real $x$ is $F$-dominating over $M$ iff $x \notin B$ for every Borel set $B \in \mathcal{D}_F$ with code in $M$. \end{Lem} \begin{proof} This is easy to verify from the definition, using $\boldsymbol{\Sigma}^1_2$-absoluteness between $M$ and $V$ and the fact that ``$B \in \mathcal{D}_F$'' is a $\boldsymbol{\Sigma}^1_2$-statement for Borel sets. \end{proof} As an immediate corollary of the above and the general framework from \cite{Ik10} and \cite{KhomskiiThesis}, we obtain the following four characterizations of $\mathbb{L}_F$- and $\mathbb{L}_{F^+}$-measurability. \begin{Cor} \label{equivalences} Let $F$ be an analytic filter. Then: \begin{enumerate} \item $\boldsymbol{\Delta}^1_2(\mathbb{L}_F) \; \Longleftrightarrow \; \forall r \in \omega^\omega \; \exists x \:(x$ is $\mathbb{L}_F$-generic over $L[r])$. \item $\boldsymbol{\Sigma}^1_2(\mathbb{L}_F) \; \Longleftrightarrow \; \forall r \in \omega^\omega \; \{x \mid x $ not $\mathbb{L}_F$-generic over $L[r]\} \in \mathcal{I}_{\mathbb{L}_F}$. \item $\boldsymbol{\Delta}^1_2(\mathbb{L}_{F^+}) \; \Longleftrightarrow \; \forall r \in \omega^\omega \; \forall T \in \mathbb{L}_{F^+} \: \exists x \in [T] \:(x$ is $F$-dominating over $L[r])$. \item $\boldsymbol{\Sigma}^1_2(\mathbb{L}_{F^+}) \; \Longleftrightarrow \; \forall r \in \omega^\omega \; \{x \mid x $ not $F$-dominating over $L[r]\} \in \mathcal{I}_{\mathbb{L}_{F^+}}$.
\end{enumerate} \end{Cor} \begin{proof} See \cite[Theorem 4.3 and Theorem 4.4]{Ik10} and \cite[Theorem 2.3.7 and Corollary 2.3.8]{KhomskiiThesis}. Note that both ideals $\mathcal{I}_{\mathbb{L}_F}$ and $\mathcal{D}_F$ are $\boldsymbol{\Sigma}^1_2$, the forcings have absolute definitions and are proper, so the above results can be applied. \medskip \noindent Only one non-trivial fact requires some explanation. In point 1, the abstract theorems above only yield the statement ``$\boldsymbol{\Delta}^1_2(\mathbb{L}_F) \; \Longleftrightarrow \; \forall r \in \omega^\omega \; \forall T \in \mathbb{L}_F \; \exists x \in [T] \:(x$ is $\mathbb{L}_F$-generic over $L[r])$''. In order to eliminate the clause ``$\forall T \in \mathbb{L}_F$ \dots'', we use the following fact: for every non-principal filter $F$ and every $X \in F$, there exists a bijection $\pi: \omega \to X$ such that for all $a \subseteq X$, $a \in F \; \Leftrightarrow \; \pi^{-1}[a] \in F$. See, e.g., \cite[Lemma 3]{MediniZdomskyy}. We leave it to the reader to verify that this implies homogeneity of $\mathbb{L}_F$, in the sense that if there exists an $\mathbb{L}_F$-generic real then there also exists an $\mathbb{L}_F$-generic real inside $[T]$ for every $T \in \mathbb{L}_F$. \end{proof} We are interested in more elegant characterizations of the four statements above. \begin{Thm} \label{thistheo} $\boldsymbol{\Sigma}^1_2(\mathbb{L}_F) \; \Longleftrightarrow \; \forall r \in \omega^\omega \:(\omega_1^{L[r]} < \omega_1)$. \end{Thm} The proof uses methods similar to \cite[Theorem 6.2]{LabedzkiRepicky} (see also \cite[Theorem 5.11]{BrLo99}) and proceeds through a series of definitions and lemmas.
\begin{Def} \label{rankdef} For every open dense set $D \subseteq \mathbb{L}_F$, define a \emph{rank function} ${\rm rk}_D: \omega^{<\omega} \to \omega_1$ by \begin{itemize} \item ${\rm rk}_D(\sigma) := 0$ iff there is $T \in D$ with ${\rm stem}(T) = \sigma$, and \item ${\rm rk}_D(\sigma) := \alpha$ iff ${\rm rk}_D(\sigma) \not<\alpha$ and $\exists Z \in F^+ \; \forall n \in Z \:( {\rm rk}_D(\sigma {^\frown} \left<n\right>) < \alpha)$. \end{itemize} \end{Def} \noindent A standard argument shows that ${\rm rk}_D(\sigma)$ is well-defined for every $\sigma$. \begin{Def} An \emph{$(F^-)$-mad family} is a collection $\mathcal{A} \subseteq F^+$ such that $\forall a \neq b \in \mathcal{A}\; ((a \cap b) \in F^-)$, and for all $a \in F^+$ there exists $b \in \mathcal{A}$ such that $(a \cap b) \in F^+$. \end{Def} \begin{Fact} For every analytic filter $F$, there exists an $(F^-)$-mad family of size $2^{\aleph_0}$. \end{Fact} \begin{proof} See \cite[Corollary 1.8]{KhomskiiBarnabas}. \end{proof} \begin{Lem} \label{Lefty} Let $\mathcal{A}$ be an $(F^-)$-mad family. For each $a \in \mathcal{A}$, let $X_a := \{x \in \omega^\omega \mid {\rm ran}(x) \cap a = \varnothing\} \in \mathcal{N}_{\mathbb{L}_F}$. Then, for any $X \in \mathcal{I}_{\mathbb{L}_F}$, the collection $\{a \in \mathcal{A} \mid X_a \subseteq X\}$ is at most countable. \end{Lem} \begin{proof} Let $X \subseteq \bigcup_n X_n$, where the $X_n$ are closed nowhere dense in $\tau_{\mathbb{L}_F}$, and let $D_n := \{T \mid [T] \cap X_n = \varnothing\}$. Then the $D_n$ are open dense in $\mathbb{L}_F$. Consider a countable elementary submodel $N$ of some sufficiently large $\mathcal{H}_\theta$ containing $\mathcal{A}$, the $D_n$, and the defining parameter of $F$ (i.e., the $r \in \omega^\omega$ such that $F \in \Sigma^1_1(r)$).
The proof will be completed by showing that if $a \in \mathcal{A} \setminus N$, then there exists $x \in X_a \cap \bigcap_n \bigcup \{[T] \mid T \in D_n\}$, hence $x \in X_a \setminus X$. \medskip \textbf{Sublemma.} For every $D_n$, every $a \in \mathcal{A} \setminus N$, and every $T \in \mathbb{L}_F$, if ${\rm ran}({\rm stem}(T)) \cap a = \varnothing$ then there exists $S \leq T$ with $S \in D_n$ and such that ${\rm ran}({\rm stem}(S)) \cap a = \varnothing$ as well. \begin{proof} Let $Y := \{\tau \in T \mid {\rm stem}(T) \subseteq \tau$ and ${\rm ran}(\tau) \cap a = \varnothing\}$. Let $\tau \in Y$ be of least $D_n$-rank. We claim that ${\rm rk}_{D_n}(\tau) = 0$, which completes the proof. Towards a contradiction, assume ${\rm rk}_{D_n}(\tau) = \alpha > 0$ and let $Z \in F^+$ witness this. By elementarity, using the fact that all relevant objects are in $N$ and $F$ is absolute for $N$ as well, it follows that $Z \in N$. \medskip \noindent By elementarity and absoluteness of $F$, $N \models$ ``$\mathcal{A}$ is an $(F^-)$-mad family'', hence there exists $b \in \mathcal{A} \cap N$ such that $Z \cap b \in F^+$. Since $b \neq a$, it follows that $b \cap a \in F^-$, so there exists $n \in Z \setminus a$ (indeed, $(Z \cap b) \setminus a \in F^+$). Then $\tau {^\frown} \left<n\right>$ is an element of $Y$ with $D_n$-rank less than $\alpha$, contradicting the minimality of $\tau$. \renewcommand{\qedsymbol}{$\Box$ (Sublemma)} \end{proof} \medskip \noindent Now it is clear that we can apply the sublemma inductively to find a sequence $T_0 \geq T_1 \geq T_2 \geq \dots$ with strictly increasing stems, such that $T_n \in D_n$ for every $n$ and, moreover, ${\rm ran}({\rm stem}(T_n)) \cap a = \varnothing$ for every $n$. Then $x := \bigcup_n {\rm stem}(T_n)$ has all the required properties, i.e., $x \in X_a \setminus X$.
\end{proof} \begin{proof}[Proof of Theorem \ref{thistheo}] We need to prove the equivalence between \begin{enumerate} \item $ \forall r \; \{x \mid x $ not $\mathbb{L}_F$-generic over $L[r]\} \in \mathcal{I}_{\mathbb{L}_F}$, and \item $ \forall r \;(\omega_1^{L[r]} < \omega_1)$. \end{enumerate} By Lemma \ref{quasi}, the former statement is equivalent to $\forall r \; \bigcup\{B \mid B$ is a Borel $\mathcal{I}_{\mathbb{L}_F}$-small set with code in $L[r]\} \in \mathcal{I}_{\mathbb{L}_F}$. The direction from 2 to 1 is thus immediate. \medskip \noindent Conversely, fix $r$ and assume that $\omega_1^{L[r]} = \omega_1$. Let $\mathcal{A}$ be an $(F^-)$-mad family such that $| \mathcal{A} \cap L[r] | = \omega_1$ (this can be arranged by extending an $(F^-)$-almost disjoint family of size $\omega_1$ in $L[r]$). For every $a \in \mathcal{A} \cap L[r]$, $X_a$ is a Borel $\mathcal{I}_{\mathbb{L}_F}$-small set with code in $L[r]$. If 1 were true, then in $V$ there would be an $X \in \mathcal{I}_{\mathbb{L}_F}$ such that $X_a \subseteq X$ for all such $a$, contradicting Lemma \ref{Lefty}. \end{proof} \begin{Remark} The same argument yields ${\rm add}(\mathcal{I}_{\mathbb{L}_F}) = \omega_1$ and ${\rm cf}(\mathcal{I}_{\mathbb{L}_F}) = {2^{\aleph_0}}$ for analytic filters (where ${\rm add}$ and ${\rm cf}$ denote the \emph{additivity} and \emph{cofinality} numbers of the ideal, respectively). \end{Remark} \medskip Next, we consider $\boldsymbol{\Delta}^1_2(\mathbb{L}_F)$ and $\boldsymbol{\Delta}^1_2(\mathbb{L}_{F^+})$. In \cite[Theorem 2]{BrendleUltrafilters}, the covering number of $\mathcal{I}_{\mathbb{L}_U}$ for an ultrafilter $U$ was determined to be the minimum of $\mathfrak{b}$ and a certain combinatorial characteristic of $U$ called $\pi\mathfrak{p}(U)$.
This was generalised by Hru\v{s}\'ak and Minami in \cite[Theorem 2]{HrusakMinami} to arbitrary filters. Similar proofs yield characterisations of $\boldsymbol{\Delta}^1_2(\mathbb{L}_F)$ and $\boldsymbol{\Delta}^1_2(\mathbb{L}_{F^+})$. \begin{Def} Let $M$ be a model of set theory and $F$ an analytic filter. We say that a real $C \in [\omega]^\omega$ is \begin{enumerate} \item \emph{$F$-pseudointersecting over $M$} if $C \subseteq^* a$ for all $a \in F \cap M$; \item \emph{$F$-separating over $M$} if it is $F$-pseudointersecting over $M$ and, additionally, $|C \cap b| = \omega$ for all $b \in F^+ \cap M$.\end{enumerate} We use the shorthands ``$\exists F$-${\sf pseudoint}$'' and ``$\exists F$-${\sf sep}$'' to abbreviate the statements ``$\forall r \in \omega^\omega \: \exists C \; (C$ is $F$-pseudointersecting/separating over $L[r])$''. \end{Def} \begin{Question} Are there natural regularity properties equivalent to ``$\exists F$-${\sf pseudoint}$'' and ``$\exists F$-${\sf sep}$'' for $\boldsymbol{\Delta}^1_2$ sets of reals? \end{Question} Recall that $\boldsymbol{\Sigma}^1_2(\mathbb{C})$ is equivalent to $\boldsymbol{\Delta}^1_2(\mathbb{C}) \land \boldsymbol{\Delta}^1_2(\mathbb{L})$ and to $\boldsymbol{\Delta}^1_2(\mathbb{D})$, where $\mathbb{C}, \mathbb{L}$ and $\mathbb{D}$ stand for the Baire property, Laver- and Hechler-measurability, respectively. Also recall that $\boldsymbol{\Delta}^1_2(\mathbb{C})$ is equivalent to the existence of Cohen reals over $L[r]$ for every real $r$, $\boldsymbol{\Delta}^1_2(\mathbb{L})$ to the existence of dominating reals over $L[r]$, and $\boldsymbol{\Delta}^1_2(\mathbb{D})$ to the existence of Hechler-generic reals over $L[r]$. See \cite[Theorem 4.1 and Theorem 5.8]{BrLo99}.
Also, note that $\mathbb{D}$ and $\mathbb{L}$ are just $\mathbb{L}_{\sf Cof}$ and $\mathbb{L}_{{\sf Cof}^+}$. \begin{Thm} $\boldsymbol{\Delta}^1_2(\mathbb{L}_F) \; \Longleftrightarrow \; \boldsymbol{\Sigma}^1_2(\mathbb{C}) \: \land \: \exists F$-${\sf sep}$. \end{Thm} \begin{proof} By Corollary \ref{cor1} and Lemma \ref{lemcohen}, we know that $\boldsymbol{\Delta}^1_2(\mathbb{L}_F)$ implies $\boldsymbol{\Delta}^1_2(\mathbb{L})$ and $\boldsymbol{\Delta}^1_2(\mathbb{C})$, which in turn implies $\boldsymbol{\Sigma}^1_2(\mathbb{C})$ as mentioned above. Moreover, a standard density argument shows that $\mathbb{L}_F$ generically adds an $F$-separating real: specifically, if $x$ is $\mathbb{L}_F$-generic then ${\rm ran}(x)$ is $F$-separating. \medskip \noindent For the converse direction, fix $r \in \omega^\omega$ and let $C$ be $F$-separating over $L[r]$. Let $\mathbb{D}_C$ denote Hechler forcing as defined on $C^\omega$ (i.e., the conditions are trees in $C^{<\omega}$ branching into all of $C$ except for finitely many points). Clearly $\mathbb{D}_C$ is isomorphic to ordinary Hechler forcing. Notice that for every $T \in \mathbb{L}_F \cap L[r]$, if ${\rm ran}({\rm stem}(T)) \subseteq C$ then $T \cap C^{<\omega} \in \mathbb{D}_C$. \medskip \noindent For every $D \in L[r]$ dense in $\mathbb{L}_F$, let $D' := \{T \cap C^{<\omega} \mid T \in D$ and ${\rm ran}({\rm stem}(T)) \subseteq C\}$. We claim that $D'$ is predense in $\mathbb{D}_C$. Let $S \in \mathbb{D}_C$ be arbitrary, with $\sigma := {\rm stem}(S)$. Recall the rank function from Definition \ref{rankdef}. Since $D \in L[r]$, we consider the rank function ${\rm rk}_D$ as defined inside $L[r]$.
If ${\rm rk}_D(\sigma) = 0$, then there is $T \in D$ with ${\rm stem}(T) = \sigma$, hence $S$ and $T$ are compatible. Otherwise, let ${\rm rk}_D(\sigma) = \alpha$. By the definition of ${\rm rk}_D$ and the fact that ${\rm rk}_D$ is in $L[r]$, there exists $Z \in F^+$ with $Z \in L[r]$, such that ${\rm rk}_D(\sigma {^\frown} \left<n\right>) < \alpha$ for all $n \in Z$. Since $Z \in F^+ \cap L[r]$ and $C$ is $F$-separating over $L[r]$, the set $C \cap Z$ is infinite; as ${\rm Succ}_S(\sigma)$ is cofinite in $C$, there is $n \in C \cap {\rm Succ}_S(\sigma) \cap Z$. Continuing this process, we arrive at some $\tau \in S$ extending $\sigma$ such that ${\rm ran}(\tau) \subseteq C$ and ${\rm rk}_D(\tau) = 0$. Then we are done as before. \medskip \noindent By the remark above, $\boldsymbol{\Sigma}^1_2(\mathbb{C})$ implies $\boldsymbol{\Delta}^1_2(\mathbb{D})$, which implies the existence of Hechler-generic reals. In particular, there is a $d \in C^\omega$ which is $\mathbb{D}_C$-generic over $L[r][C]$. But then $d$ is $\mathbb{L}_F$-generic over $L[r]$, since for every $D \in L[r]$ dense in $\mathbb{L}_F$ we find $T \in D$ with $d \in [T \cap C^{<\omega}] \subseteq [T]$. \end{proof} A similar argument can be used to simplify $\boldsymbol{\Delta}^1_2(\mathbb{L}_{F^+})$; however, here the \emph{homogeneity} of $\mathbb{L}_{F^+}$ provides an additional obstacle, since $\mathbb{L}_{F^+}$ is, in general, homogeneous only if $F$ is $K$-uniform. \begin{Thm} $\boldsymbol{\Delta}^1_2(\mathbb{L}_{F^+})\; \Longrightarrow \; \boldsymbol{\Delta}^1_2(\mathbb{L}) \land \exists F$-${\sf pseudoint}$. If $F$ is $K$-uniform, then the converse implication also holds. \end{Thm} \begin{proof} By Corollary \ref{cor1} we know that $\boldsymbol{\Delta}^1_2(\mathbb{L}_{F^+}) \Rightarrow \boldsymbol{\Delta}^1_2(\mathbb{L})$. Let $x$ be $F$-dominating over $L[r]$ and let $C := {\rm ran}(x)$.
For each $a \in F \cap L[r]$, let $\varphi$ be the function given by $\varphi(\sigma) := a \setminus |\sigma|$ for all $\sigma \in \omega^{<\omega}$. Since $\forall^\infty n \:(x(n) \in \varphi(x {\upharpoonright} n))$, clearly $C$ is infinite and $C \subseteq^* a$. \medskip \noindent Conversely, assume that $F$ is $K$-uniform. We leave it to the reader to verify that if $T \in \mathbb{L}_{F^+}$, then there exists a continuous function $f: \omega^\omega \to [T]$ such that $f$-preimages of $\mathcal{D}_F$-small sets are $\mathcal{D}_F$-small. In particular, the statements \begin{itemize} \item $\exists x \: (x$ is $F$-dominating over $L[r])$, and \item $\exists x \in [T] \; (x $ is $F$-dominating over $L[r])$ \end{itemize} are equivalent for every $T \in \mathbb{L}_{F^+} \cap L[r]$. \medskip \noindent So fix $r \in \omega^\omega$ and let $T \in \mathbb{L}_{F^+}$ be arbitrary. Without loss of generality, assume $T \in L[r]$ (otherwise, use $L[r][T]$ instead). Let $C$ be $F$-pseudointersecting over $L[r]$. For each $\varphi: \omega^{<\omega} \to F$ from $L[r]$, define $g_\varphi: \omega^{<\omega} \to \omega$ by $g_\varphi(\sigma) := \min \{ n \mid C \setminus n \subseteq \varphi(\sigma)\}$. Then all $g_\varphi$ are in $L[r][C]$; by $\boldsymbol{\Delta}^1_2(\mathbb{L})$ there is a dominating real $g$ over $L[r][C]$, so in particular $g$ dominates all $g_\varphi$. Let $x \in \omega^\omega$ be such that $x(n) \in C$ and $x(n) \geq g(x {\upharpoonright} n)$ for every $n$. Clearly for every $\varphi \in L[r]$ we have $\forall^\infty n \; (x(n) \in \varphi(x {\upharpoonright} n))$, hence $x$ is $F$-dominating over $L[r]$. This suffices by what we mentioned above. \end{proof} Currently, we do not have a similarly elegant characterization of $\boldsymbol{\Sigma}^1_2(\mathbb{L}_{F^+})$.
\begin{Question} Is there a characterization of $\boldsymbol{\Sigma}^1_2(\mathbb{L}_{F^+})$ similar to the above? Is $\boldsymbol{\Sigma}^1_2(\mathbb{L}_F)$ equivalent to $\boldsymbol{\Delta}^1_2(\mathbb{L}_{F^+})$? \end{Question} \end{document}
\begin{document} \begin{centering} {\Large{\bf A cyclotomic approach to the solution of} \Large{\bf Waring's problem mod $p$} } \vspace*{ 0.5 cm} M\'onica del Pilar Canales Ch. \footnote{Supported by DID--UACh, Grant S--03--06} \vspace*{ 0.4 cm} Instituto de Matem\'aticas\\ Universidad Austral de Chile\\ Casilla 567, Valdivia {\tt [email protected]} \end{centering} \vspace*{7 mm} \textsc{Abstract} Let $s_d(p,a) = \min \{k\ | \ a = \sum_{i=1}^{k}a_i^d,\ a_i\in {\mathbb F}_p^*\}$ be the smallest number of $d$--th powers in the finite field ${\mathbb F}_p$ sufficient to represent the element $a\in {\mathbb F}_p^*$. Then $$g_d(p) = \max_{a\in{\mathbb F}_p^*} s_d(p,a)$$ gives an answer to Waring's problem mod $p$. We first introduce cyclotomic integers $n(k,\nu)$, which then allow us to state and solve Waring's problem mod $p$ in terms of only the cyclotomic numbers $(i,j)$ of order $d$. We generalize the reciprocal of the Gaussian period equation $G(T)$ to a ${\mathbb C}$--differentiable function $I(T)\in{\mathbb Q}[[T]]$, which also satisfies $I'(T)/I(T)\in{\mathbb Z}[[T]]$. We show that, and why, the class $a\equiv -1{\mbox {\rm \ mod\ }} {\mathbb F}_p^{*d}$ (the classical {\it Stufe}, if $d=2$) is special: here (and only here) $I(T)$ is in fact a polynomial in ${\mathbb Z}[T]$, namely the reciprocal of the period polynomial. We finish with explicit calculations of $g_d(p)$ for the cases $d=3$ and $d=4$ and all primes $p$, using the known cyclotomic numbers compiled by Dickson. \subsection*{1. Introduction} Let $p > 2$ be a prime number, $d\geq 2$ a rational integer, and let $s_d(p,a)$ be the least positive integer $s$ such that $a \in{{\mathbb F}}_p^*$ is the sum of $s$ $d$--th powers in ${{\mathbb F}}_p$, i.e.~ \[ s_d(p,a) = \min \{k\ |\ a=\sum_{i=1}^k a_i^d,\ a_i\in{{\mathbb F}}_p^*\}. \] Since ${{{\mathbb F}}_p^*}^d = {{{\mathbb F}}_p^*}^{\gcd(d,p-1)}$, it suffices to consider $d\ | \ p-1$. Let $f = (p-1)/d$ and let $\omega$ be a generator of ${{{\mathbb F}}_p^*}$, fixed from now on.
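For small primes, $s_d(p,a)$ and $g_d(p)$ can be computed directly from the definition by iterating sumsets of the set of $d$--th powers. The following sketch is our own illustration (the function name {\tt waring\_gd} is ours, not from the paper):

```python
from math import gcd

def waring_gd(p, d):
    """g_d(p): the maximum over a in F_p* of the least k such that
    a is a sum of exactly k nonzero d-th powers mod p."""
    d = gcd(d, p - 1)                             # only gcd(d, p-1) matters
    powers = {pow(x, d, p) for x in range(1, p)}  # the d-th powers in F_p*
    sums_k = {0}                                  # sums of exactly 0 powers
    s = {}                                        # a -> s_d(p, a)
    k = 0
    while len(s) < p - 1:
        k += 1
        sums_k = {(u + q) % p for u in sums_k for q in powers}
        for a in sums_k:
            if a != 0 and a not in s:
                s[a] = k
    return max(s.values())
```

For instance, {\tt waring\_gd(5, 4)} recovers the classical fact $g_{p-1}(p)=p-1$, since the only nonzero $(p-1)$--th power is $1$.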
Clearly, it is enough to consider $s_d(p,a)$ only for the $d$ classes mod ${{{\mathbb F}}_p^*}^d$. Let $a\in {{{\mathbb F}}_p^*}$ with $\alpha \equiv {\mbox {\rm ind}_\omega} (a) {\mbox {\rm \ mod\ }} d$ and let $\theta \equiv {\mbox {\rm ind}_\omega} (-1){\mbox {\rm \ mod\ }} d$, {\it i.e.} $\theta =0$ if $f$ is even, and $\theta = d/2$ if $f$ is odd. In \cite{BC}\cite{C} we established, by considering the generating function $$g(T) = \frac{1}{1-\big(\sum_{u\in {\mathbb F}_p^{*d}} \bar{X}^u\big)T} \in\left(K[X]/(X^p-1)\right)[[T]]$$ with $K = {\mathbb Q}(\zeta)$ the cyclotomic field given by $\zeta$ a primitive $p$--th root of unity in ${\mathbb C}$, that if \[ N(k,a):=\# \{(u_{1},\dots ,u_{k})\in {{{\mathbb F}}_{p}^{\ast }}^{d}\times \dots \times {{{\mathbb F}}_{p}^{\ast }}^{d}\ |\ a=u_{1}+\dots +u_{k}\} \] then \[ N(k,a)=\frac{1}{p}\sum_{x=0}^{p-1}S(\zeta ^{x})^{k}\zeta ^{-ax} = \frac{1}{p}\bigg[f^k + \sum_{\bar x\in{{\mathbb F}}_p^*/{{{\mathbb F}}_p^*}^d} S(\zeta^{x})^k\cdot S(\zeta^{-ax})\bigg], \] where $S(\rho):=\sum_{u\in {{{\mathbb F}}_{p}^{\ast }}^{d}}\rho ^{u}$. Setting $i\equiv {\mbox {\rm ind}_\omega}(x){\mbox {\rm \ mod\ }} d$ and $\alpha+\theta \equiv {\mbox {\rm ind}_\omega}(-a){\mbox {\rm \ mod\ }} d$, we may now write \[ N(k,a)=\frac{1}{p}\bigg[f^{k}+\sum_{i=0}^{d-1}\eta _{i}^{k}\cdot \eta _{i+\alpha +\theta }\bigg] \] where $$\eta _{i}:=S(\zeta ^{\omega ^{i}})=\zeta ^{\omega ^{i}}+\zeta ^{\omega ^{d+i}}+\zeta ^{\omega ^{2d+i}}+\dots +\zeta ^{\omega ^{(f-1)d+i}};\quad 0\leq i\leq d-1,$$ are the classical \textit{Gauss periods}, with minimal polynomial over ${\mathbb Q}$ the so-called {\it period polynomial of degree $d$} \[ G(T)=\prod_{i=0}^{d-1}(T-\eta _{i})=\alpha _{d}+\alpha _{d-1}T+\dots +\alpha _{2}T^{d-2}+T^{d-1}+T^{d}\in {{\mathbb Z}}[T], \] a resolvent of the \textit{cyclotomic equation} $X^{p}-1=0$.
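The periods $\eta_i$ and the integrality of the coefficients of $G(T)$ are easy to check numerically for small $p$. The sketch below is our own illustration (function names are ours; the caller must supply a primitive root $w$ mod $p$): it recovers $G$ by rounding the elementary symmetric functions of the numerically computed periods.

```python
import cmath
from functools import reduce
from itertools import combinations
from operator import mul

def gauss_periods(p, d, w):
    """Gauss periods eta_i = sum_{j<f} zeta^(w^(dj+i)), f = (p-1)/d,
    where zeta = exp(2*pi*i/p) and w is a primitive root mod p."""
    f = (p - 1) // d
    zeta = cmath.exp(2j * cmath.pi / p)
    return [sum(zeta ** pow(w, d * j + i, p) for j in range(f))
            for i in range(d)]

def period_polynomial(p, d, w):
    """Coefficients of G(T) = prod_i (T - eta_i), listed from T^d down
    to the constant term; they round to rational integers."""
    etas = gauss_periods(p, d, w)
    coeffs = [1]
    for k in range(1, d + 1):
        # k-th elementary symmetric function of the periods
        e_k = sum(reduce(mul, c, 1) for c in combinations(etas, k))
        coeffs.append(round(((-1) ** k * e_k).real))
    return coeffs
```

For $p=7$, $d=2$, $w=3$ this returns $[1,1,2]$, i.e.\ $G(T)=T^2+T+2$, matching $\eta_0+\eta_1=-1$ and $\eta_0\eta_1=(p+1)/4$ for $p\equiv 3{\mbox {\rm \ mod\ }}4$.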
Hence, since $s_d(p,a) = \min\{k \ | \ N(k,a)\neq 0\}$, we obtain: $$s_d(p,a) = \min\{k \ | \ f^k+\sum_{i=0}^{d-1} \eta_i^k\cdot \eta_{i+\alpha+\theta }\neq 0\}.$$ Our goal now is to determine $s_d(p,a)$, and thus $g_d(p)$, using {\it only} the {\it cyclotomic numbers of order $d$}, $$(i,j)\ :=\ \#\{(u,v);\;0\leq u,v\leq f-1\ |\ 1+\omega ^{du+i}\equiv \omega ^{dv+j}{\mbox {\rm \ mod\ }}p\},\ \ 0\leq i,j \leq d-1,$$ which have been extensively studied in the literature. \subsection*{2. Cyclotomic Integers} We call a polynomial expression in the periods with integer coefficients a {\it cyclotomic integer} if its value is a rational integer. Examples are the coefficients of the period polynomial, $$\alpha_k=s_k(\eta_0,\dots,\eta_{d-1}) =(-1)^k\sum_{0\leq i_1<\dots<i_k\leq d-1} \eta_{i_1}\ldots\eta_{i_k}\in{\mathbb Z},$$ and its discriminant $$D_d = \prod_{0\leq i<j\leq d-1} (\eta_i-\eta_j)^2\in {\mathbb Z}.$$ We now study, for all $k\in{\mathbb N}$ and $0\leq \nu\leq d-1$, the nontrivial {\it cyclotomic integers} $$n(k,\nu) = \sum_{i=0}^{d-1} \eta_i^k\cdot \eta_{i+\nu}\in{\mathbb Z};\ 0\leq \nu\leq d-1.$$ The theory of cyclotomy provides the following formulae for the periods and the cyclotomic numbers (see \cite{B}\cite{Di1}\cite{Di2}). For all $0\leq k,l\leq d-1$ it holds that $ \begin{array}{ll} (i) & \eta _{l}\eta _{l+k}=\sum_{h=0}^{d-1}(k,h)\eta _{l+h}+f\delta_{\theta k} \\ (ii) & \sum_{l=0}^{d-1}\eta _{l}\eta _{l+k}=p\delta_{\theta k}-f \\ (iii) & \sum_{h=0}^{d-1}(k,h)=f-\delta_{\theta k} \end{array} $ with Kronecker's $\delta_{ij}$. The cyclotomic integers $n(k,\nu)$ assume values in ${\mathbb Z}$, despite not being symmetric in the periods, since they satisfy the following recurrence formula: \textbf{Lemma 1.}\ {\it Let $0\leq \nu \leq d-1$ and $k\geq 1$. Then \[ n(k+1,\nu) = \sum_{l=0}^{d-1} (\nu,l) n(k,l) + f\delta_{\theta \nu} n(k-1,0) \] where $n(0,\nu)=-1$ and $n(1,\nu) = p\delta_{\theta\nu}-f$.
} \textit{Proof.}\ Clearly, $$n(0,\nu)=\sum_{i=0}^{d-1}\eta_{i+\nu }= \zeta+\zeta^2+\dots+\zeta^{p-1}=-1$$ and $$n(1,\nu)=\sum_{i=0}^{d-1}\eta_{i+\nu }\eta_{i}=p\delta_{\theta \nu }-f$$ by $(ii)$. Now by $(i)$, multiplying $\eta_l\eta_{l+\nu}$ by $\eta_l^k$ and summing over $l$, we get $ \begin{array}{lllll} n(k+1,\nu) & = & \sum_{l=0}^{d-1} \eta_{l+\nu}\eta_l^{k+1} & & \\ & = & \sum_{l=0}^{d-1}\left(\sum_{h=0}^{d-1}(\nu,h) \eta_{l+h}\eta_l^{k}+ f\delta_{\theta \nu}\eta_l^k\right) & & \\ & = & \sum_{h=0}^{d-1}(\nu,h)\sum_{l=0}^{d-1} \eta_{l+h}\eta_l^{k}+ f\delta_{\theta \nu} \sum_{l=0}^{d-1}\eta_l^k & & \\ & = & \sum_{h=0}^{d-1}(\nu,h)n(k,h) + f\delta_{\theta \nu} n(k-1,0). & & \hspace{25 mm}\Box \end{array} $ This turns out to be the key to determining $N(k,a)$ in terms of {\it only} the cyclotomic numbers in a remarkably simple way, since we find \textbf{Lemma 2.}\ \ {\it Let $0 \leq \nu\leq d-1$. Then $ \begin{array}{lll} n(1,\nu)+f & = & p\delta_{\theta \nu } \\ n(2,\nu)+f^{2} & = & p(\nu ,\theta ) \\ n(3,\nu)+f^{3} & = & p\sum_{i=0}^{d-1}(\nu ,i)(i,\theta )+f\delta_{\theta 0}[n(1,\nu)+f] \\ n(4,\nu)+f^{4} & = & p\sum_{i,j=0}^{d-1}(\nu ,i)(i,j)(j,\theta )+f\delta_{\theta 0}[n(2,\nu)+f^{2}]+f(0,\theta )[n(1,\nu)+f],\\ \end{array} $ and for $k \geq 5$ $ \begin{array}{lll} n(k,\nu) + f^k & = & p\sum_{i_2,\dots,i_{k-1}=0}^{d-1}(\nu,i_2) \dots(i_{k-1},\theta )+ f\delta_{\theta 0}[n(k-2,\nu)+f^{k-2}] \\ & + & f(0,\theta )[n(k-3,\nu)+f^{k-3}] \\ & + & \sum_{j=4}^{k-1}\ f\ [\sum_{i_2,\dots,i_{j-2}=0}^{d-1} (0,i_2)\dots(i_{j-2},\theta )][n(k-j,\nu)+f^{k-j}]. \end{array} $ } \textit{Proof.}\ The formulae for $k=1,2,3,4$ result by straightforward computation. For higher $k$, we use induction on $k$. For all $0\leq l \leq d-1$, assume the formula up to some $k\geq 4$.
That is (where the empty sum for $k=4$ is null by convention)\\ $ \begin{array}{llll} n(k,l) & = & p \sum_{i_2,\dots,i_{k-1}=0}^{d-1}(l,i_2)(i_2,i_3) \dots(i_{k-1}, \theta )-f^k + f\delta_{\theta 0}[n(k-2,l)+f^{k-2}] & \\ & + & f(0,\theta )[n(k-3,l)+f^{k-3}] & \\ & + & \sum_{j=4}^{k-1}\ f\ [\sum_{i_2,\dots,i_{j-2}=0}^{d-1} (0,i_2)\dots(i_{j-2},\theta )][n(k-j,l)+f^{k-j}]. & \end{array} $ Hence, by our previous lemma and the induction hypothesis, we have \begin{eqnarray*} n(k+1,\nu) & = & \sum_{l=0}^{d-1} (\nu,l) n(k,l) + f\delta_{\theta \nu} n(k-1,0)\\ & = & \sum_{l=0}^{d-1} (\nu,l) \bigg\{ p\sum_{i_2, \dots,i_{k-1}=0}^{d-1}(l,i_2)(i_2,i_3)\dots(i_{k-1}, \theta )-f^k \\ & + & f\delta_{\theta 0}[n(k-2,l)+f^{k-2}] +f(0,\theta )[n(k-3,l)+f^{k-3}] \\ & + & \sum_{j=4}^{k-1}\ f\ [\sum_{i_2,\dots,i_{j-2}=0}^{d-1} (0,i_2)\dots(i_{j-2},\theta )][n(k-j,l)+f^{k-j}]\bigg\} \\ & + & f\delta_{\theta \nu} \bigg\{ p\sum_{i_2,\dots,i_{k-2}=0}^{d-1}(0,i_2)(i_2,i_3) \dots(i_{k-2}, \theta )-f^{k-1} \\ & + & f\delta_{\theta 0}[n(k-3,0)+f^{k-3}] +f(0,\theta )[n(k-4,0)+f^{k-4}] \\ & + & \sum_{j=4}^{k-2}\ f\ [\sum_{i_2,\dots,i_{j-2}=0}^{d-1} (0,i_2)\dots(i_{j-2},\theta )][n((k-1)-j,0)+f^{(k-1)-j}]\bigg\}. \end{eqnarray*} Now, since $p\delta_{\theta \nu }=n(1,\nu)+f$ and $\sum_{l=0}^{d-1}(\nu,l)=f-\delta_{\theta \nu },$ and using that \[ \sum_{l=0}^{d-1}(\nu ,l)n(k-j,l)=n((k+1)-j,\nu)- f\delta_{\theta \nu}n((k-1)-j,0), \] the result follows, as we may write $\sum_{l=0}^{d-1}(\nu ,l)n(k-j,l)+(f-\delta_{\theta \nu })f^{k-j}$ as $n((k+1)-j,\nu) +f^{(k+1)-j} -f\delta_{\theta \nu }[n((k-1)-j,0)+f^{(k-1)-j}]$. $\Box $ \subsection*{3. The Cyclotomic Solution of Waring's Problem mod $p$} We may now state the result that completes our study of higher levels and Waring's problem in ${\mathbb F}_p$ in terms of cyclotomy. \textbf{Theorem 1.}\ {\it Let $p>2$ be a prime number and $d\geq 2$ an integer with $p-1=df$.
Let $\omega $ be a fixed generator of ${{\mathbb F}}_{p}^{\ast }$, and let $(i,j);\;0\leq i,j\leq d-1$ be the cyclotomic numbers of order $d$. Also let $\theta =0$ if $f$ is even, and $\theta =d/2$ if $f$ is odd. Then, given $a\in {{\mathbb F}}_{p}^{\ast }\backslash {{{\mathbb F}}_{p}^{\ast }}^{d}$ with $\alpha \equiv {\mbox {\rm ind}_\omega}(a) {\mbox {\rm \ mod\ }}d,$ we get \[ s_{d}(p,a)=2\mbox{ \ if\ } (\alpha+\theta,\theta) \neq 0 \] and otherwise \[ s_{d}(p,a)=\min \{s\ |\ \exists\ 0\leq i_2,\dots,i_{s-1}\leq d-1\colon (\alpha+\theta ,i_{2})(i_{2},i_{3})\dots (i_{s-1},\theta )\neq 0 \}. \] } \textit{Proof.}\ Since $a\in {{\mathbb F}}_{p}^{\ast}\backslash {{{\mathbb F}}_{p}^{\ast}}^{d}$ and $\alpha+\theta \equiv {\mbox {\rm ind}_\omega}(-a){\mbox {\rm \ mod\ }} d$, we have \[ s_{d}(p,a)=\min \{k\geq 2\ |\ n(k,\alpha +\theta )+f^{k}\neq 0\}, \] and hence $n(l,\alpha +\theta )+f^{l}=0$ for all $l<s_{d}(p,a)=s$. Then by Lemma 2 \[ n(s,\alpha +\theta )+f^{s}=p\sum_{i_{2},\dots ,i_{s-1}=0}^{d-1}(\alpha +\theta ,i_{2})(i_{2},i_{3})\dots (i_{s-1},\theta ). \] Thus clearly \[ s_{d}(p,a)=2\mbox{\rm \ for\ }(\alpha+\theta ,\theta )\neq 0, \] and otherwise $s_{d}(p,a)$ is the least integer $s$ with $3\leq s \leq d$ such that, for some $0\leq i_2,\dots,i_{s-1}\leq d-1$, we have a nonvanishing consecutive product of $s-1$ cyclotomic numbers of the form $ (\alpha +\theta ,i_{2})(i_{2},i_{3})\dots (i_{s-1},\theta )\neq 0$. $\Box $ Hence we obtain a solution of Waring's problem in ${\mathbb F}_p$ via cyclotomy: \textbf{Theorem 2.}\ {\it Let $p,d,f,\omega,\theta ,a,\alpha={\mbox {\rm ind}_\omega}(a)$ and the cyclotomic numbers $(i,j)$ be as in Theorem $1$. We define a matrix $M=(m_{ij})_{0\leq i,j\leq d-1}$ by\\ $m_{ij}= \left\{ \begin{array}{cl} 0,&\mbox{\rm if\ } (i,j) = 0,\\ 1,&\mbox{\rm otherwise,}\\ \end{array} \right.$ and we denote its $n$--th power by $(m_{ij}^{(n)}) := M^n$. Then \[ g_d(p) = \max_{0\leq \alpha\leq d-1}\min\{ s \ |\ m_{(\alpha+\theta )\theta }^{(s-1)} \neq 0\}.
\] } \textit{Proof.}\ By Theorem 1, $s_{d}(p,a)$ is the least integer $s$ with $2\leq s \leq d$ such that $(\alpha+\theta,\theta)\neq 0$ or for some $0\leq i_2,\dots,i_{s-1}\leq d-1$ we have $ (\alpha +\theta ,i_{2})(i_{2},i_{3})\dots (i_{s-1},\theta )\neq 0$. Also, $\displaystyle g_d(p) = \max_{0\leq \alpha\leq d-1}\{s_d(p,a)\}$. Thus, since $$m^{(n)}_{(\alpha+\theta)\theta } = \sum^{d-1}_{i_2,i_3,\dots,i_{n}=0} m_{(\alpha+\theta )i_2}\cdot m_{i_2i_3}\cdot\ \dots\ \cdot m_{i_n\theta }$$ is the entry at $(\alpha+\theta ,\theta )$ of the $n$--th power of the matrix $M$, where $m_{ij}\neq 0$ iff $(i,j)\neq 0$, the result follows. $\Box$ \subsection*{4. On the Generalization of Theorem 1 in \cite{BC}} In \cite{BC}, we only considered the case $a\equiv -1 {\mbox {\rm \ mod\ }} {\mathbb F}_p^{*d}$, {\it i.e.} $\alpha+\theta \equiv 0{\mbox {\rm \ mod\ }} d$, finding $s_d(p,-1)$ in terms of the coefficients $\alpha_k; 2\leq k \leq d$ of the period polynomial. Now if $\alpha +\theta \not\equiv 0{\mbox {\rm \ mod\ }} d$, we can generalize Theorem 1 in \cite{BC} as follows: {\bf Theorem 3.}\ {\it Let $p>2$ be a prime number and $d\geq 2$ an integer with $d\ | \ p-1$. Let $\omega$ be a fixed generator of ${\mathbb F}_p^*$ and let $\eta_i; 0\leq i\leq d-1$ be the Gaussian periods. Then if $a\in{\mathbb F}_p^*\backslash {\mathbb F}_p^{*d}$ with ${\mbox {\rm ind}_\omega}(-a) \equiv \alpha + \theta {\mbox {\rm \ mod\ }} d$, we have $$s_d(p,a) = {\mbox {\rm ord}}_T\left(\frac{1}{1-fT} -\frac{I_{\alpha+\theta }(T)'}{I_{\alpha+\theta }(T)}\right)$$ where $$I_{\alpha+\theta }(T)=\prod_{i=0}^{d-1} (1-\eta_iT)^{\left(\frac{\eta_{i+\alpha+\theta }}{\eta_i}\right)}\in{\mathbb Q}[[T]]$$ is a complex differentiable function of $T$ and ${\mbox {\rm ord}}_T$ is the usual valuation in ${\mathbb Z}[[T]]$.
} {\it Proof:}\ We have $$s_d(p,a)=\min\{k\ | \ N(k,a) \neq 0\} = {\mbox {\rm ord}}_T(\sum_{k=0}^\infty N(k,a)T^k)$$ where $$N(k,a) = \frac{1}{p}[f^k + n(k,\alpha+\theta)] = \frac{1}{p}[f^k + \sum_{i=0}^{d-1} \eta_i^k\cdot \eta_{i+\alpha+\theta}].$$ Thus formally \begin{eqnarray*} \sum_{k=0}^\infty N(k,a)T^k &=& \frac{1}{p}[\sum_{k=0}^\infty f^kT^k + \sum_{k=0}^\infty n(k,\alpha+\theta)T^k]\\ &=& \frac{1}{p}[\sum_{k=0}^\infty f^kT^k + \sum_{k=0}^\infty (\sum_{i=0}^{d-1} \eta_i^k\cdot \eta_{i+\alpha+\theta})T^k]\\ &=& \frac{1}{p}[\frac{1}{1- fT} + \sum_{i=0}^{d-1} (\sum_{k=0}^\infty \eta_i^k\cdot \eta_{i+\alpha+\theta}T^k)]\\ &=& \frac{1}{p}[\frac{1}{1- fT} + \sum_{i=0}^{d-1} \frac{ \eta_{i+\alpha+\theta}}{1- \eta_i T}].\\ \end{eqnarray*} Now, considering $\displaystyle\frac{\eta_{i+j}}{1-\eta_iT}$ as a complex differentiable function of $T$ for all $0\leq i,j \leq d-1$, and recalling that $\displaystyle\frac{\eta_{i+j}}{\eta_i}\in {\mathbb C}\backslash\{0\}$, we have\\ $ \begin{array}{rcl} \displaystyle\sum_{i=0}^{d-1} \frac{\eta_{i+j}}{1-\eta_iT} &=&\displaystyle-\sum_{i=0}^{d-1} \frac{\eta_{i+j}}{\eta_i}\cdot \frac{(1-\eta_{i}T)'}{(1-\eta_iT)} \\ &=&\displaystyle-\sum_{i=0}^{d-1} \frac{\eta_{i+j}}{\eta_i}[\log(1-\eta_{i}T)]'\\ &=&\displaystyle-[\sum_{i=0}^{d-1} \log((1-\eta_{i}T)^{\frac{\eta_{i+j}}{\eta_i}})]'\\ &=&\displaystyle-[\log(\prod_{i=0}^{d-1} (1-\eta_{i}T)^{\frac{\eta_{i+j}}{\eta_i}})]'\\ &=&\displaystyle-[\log I_j(T)]'\\ &=&\displaystyle-\frac{I_j(T)'}{I_j(T)} \ \ \in {\mathbb Z}[[T]], \end{array}$ where $I_j(T) = \prod_{i=0}^{d-1} (1-\eta_iT)^{\frac{\eta_{i+j}}{\eta_i}}$ has a power series expansion $I_j(T) = \sum_{k=0}^\infty c_kT^k\in{\mathbb C}[[T]]$, where $c_k = - \frac{1}{k}\sum_{l=0}^{k-1}c_l\cdot n(k-1-l,j)$ and $c_0=1$.
Thus, we recursively obtain $k!c_k\in{\mathbb Z},\forall k\geq 0$, and hence $I_j(T),I_j(T)'\in{\mathbb Q}[[T]]$.~ ~$\Box$ {\it Remark:}\ This finally shows the class of $-1$ to be special since $$I_{\alpha+\theta }(T)\in{\mathbb Q}[T] \Leftrightarrow \frac{\eta_{i+\alpha+\theta }}{\eta_i}\in{\mathbb N}, \ \forall\ 0\leq i\leq d-1.$$ That is, iff $\alpha+\theta =0$ and hence $a\equiv -1 {\mbox {\rm \ mod\ }} {\mathbb F}_p^{*d}$. In this case, $$I_0(T) = \prod_{i=0}^{d-1} (1-\eta_{i}T)= T^dG(T^{-1})$$ is the reciprocal of the Gauss period polynomial and we recover Theorem 1 of \cite{BC}. \subsection*{5. Explicit Numerical Results} We state the complete results for $d=3$ and $d=4$, for all primes $p$, following \cite{Di1}\cite{Di2}: \textbf{Theorem 4.}\ \textit{Let $p=3f+1$ be a prime number with $ 4p=L^{2}+27M^{2}$ and $L\equiv 1{\mbox {\rm \ mod\ }}3$. Then \[ g_3(p)=\bigg\{ \begin{array}{ll} 3, & \mbox{ \ if \ }p=7, \\ 2, & \mbox{\ otherwise.} \end{array} \] } \textit{Proof.}\ Since $f$ is even, we have $(h,k)=(k,h)$, and it is known by \cite{Di1} that \[ 18(0,1)=2p-4-L+9M\text{ and } 18(0,2)=2p-4-L-9M. \] Then by Theorem 1, with $\theta =0$ and the sign of $M$ depending on the choice of the generator $\omega$, we have \[ s_{3}(p,\omega)=\bigg\{ \begin{array}{ll} 2, & \mbox{\rm \ if \ }\left( 1,0\right)\neq 0 \mbox{\rm \ \ i.e. \ }2p\neq 4 + L- 9M \\ 3, & \mbox{\rm \ otherwise \ } \end{array} \] and \[ s_{3}(p,\omega^{2})=\bigg\{ \begin{array}{ll} 2, & \mbox{\rm \ if \ }\left( 2,0\right) \neq 0\mbox{\rm \ \ i.e. \ }2p\neq 4 +L+9M \\ 3, & \mbox{\rm \ otherwise. \ } \end{array} \] Thus \[ g_3(p)=\bigg\{ \begin{array}{ll} 2, & \mbox{\rm \ if \ }2p\neq 4+L\pm 9M, \\ 3, & \mbox{\rm \ otherwise. \ } \end{array} \] Now, $g_3(p) = 3 \Longleftrightarrow \exists \alpha\in\{1,2\}$ with $(\alpha,0)=0 \Longleftrightarrow 4p=L^{2}+27M^{2}=8+2L\pm 18M \Longleftrightarrow L=M=1 \Longleftrightarrow p=7$.
$\Box $ {\bf Theorem 5.} \ \textit{Let $p=4f+1$ be a prime number with $p=x^{2}+4y^{2}$ and $x\equiv 1{\mbox {\rm \ mod\ }}4$. Then \[ g_4(p)=\Bigg\{ \begin{array}{ll} 4, & \mbox{ \ if \ }p=5, \\ 3, & \mbox{ \ if \ }p=13, 17, 29, \\ 2, & \mbox{ \ otherwise.\ } \end{array} \] } \textit{Proof.} \ By Theorem 1, with $\theta=0$ if $f$ is even, $\theta=d/2$ if $f$ is odd, and the sign of $y$ depending on the choice of the generator $\omega$, we have $s_4(p,\omega^\alpha) =\left\{ \begin{array}{lll} 1,& \mbox{ \rm if \ } \alpha=0 \\ 2,& \mbox{ \rm if \ } \alpha\neq 0, (\alpha+\theta ,\theta )\neq 0 \\ 3,& \mbox{ \rm if \ } \alpha\neq 0, (\alpha+\theta ,\theta )= 0\\ &\mbox{\ \rm and \ } (\alpha+\theta ,i)(i,\theta )\neq 0 \mbox{ \rm for some \ } 0\leq i\leq 3\\ 4,&\mbox{ \rm otherwise \ } \end{array} \right. $ where by \cite{Di1} we may find the cyclotomic numbers in terms of the representation of $p$. If $f$ is even: $16(0,0) = p-11-6x$\\ $16(0,1) = p-3+2x+8y$\\ $16(0,2) = p-3+2x$\\ $16(0,3) = p-3+2x-8y$\\ $16(1,2) = p+1-2x$ and $(1,1) = (0,3)$, $(1,3) = (2,3) = (1,2)$, $(2,2) = (0,2)$, $(3,3) = (0,1)$, with $(i,j) = (j,i)$. If $f$ is odd: \nopagebreak $16(0,0) = p-7+2x$\\ $16(0,1) = p+1+2x-8y$\\ $16(0,2) = p+1-6x$\\ $16(0,3) = p+1+2x+8y$\\ $16(1,0) = p-3-2x$ and $(1,1) = (2,1) = (2,3) = (3,0) = (3,3) = (1,0)$, $(1,2) = (3,1) = (0,3)$, $(1,3) = (3,2) = (0,1)$, $(2,0) = (2,2) = (0,0)$.
Thus, we find $g_4(p) > 2$ if $p=x^2+4y^2,$ with $x\equiv 1 {\mbox {\rm \ mod\ }} 4$, satisfies one of the following diophantine equations: $f$ even: $\begin{array}{lll} (\alpha=1)& x^2+4y^2+2x+8y &= 3\\ (\alpha=2)& x^2+4y^2+2x &= 3\\ (\alpha=3)& x^2+4y^2+2x-8y& = 3 \end{array}$ $f$ odd: $\begin{array}{lll} (\alpha=1) & x^2+4y^2+2x-8y& = -1 \\ (\alpha=2) & x^2+4y^2-6x & = -1 \\ (\alpha=3) & x^2+4y^2+2x+8y& = -1 \end{array}$ All these are equations of the form $(x+a)^2 + 4(y+b)^2=c$ and one easily finds that the only solutions give $p= 5,13,17,$ and 29. Now, checking for these primes the equations for $g_4(p)=4$ we find only $g_4(5)=4$, and thus $g_4(13) = g_4(17) = g_4(29) = 3$. $\Box$ We may obtain complete solutions for Waring's problem mod $p$ and thus Waring's problem mod $n$ (see \cite{SAMM1}\cite{SAMM2}), for all $d\geq 3$ for which the cyclotomic numbers are known or may be found in terms of the representations of multiples of $p$ by binary quadratic forms. Clearly, for $d> 4$ much effort is needed to obtain the $d^2$ cyclotomic constants, and other representations of multiples of $p$ by quadratic forms and the study of different cases is necessary (see~\cite{BC}\cite{C}\cite{Di1}\cite{Di2}), but since $s_d(p,-1)$ and $g_d(p)$ are still open problems for $3d+1\leq p < (d-1)^4$, our work seems to give the only complete theoretical result on the subject. \subsection*{Conclusion} We introduced the concept of cyclotomic integers and gave some nontrivial examples, the $n(k,\nu)$, which allowed us to solve the modular Waring's Problem using only the classical cyclotomic numbers. We saw that the analytic function $I_{\alpha+\theta}(T)$ is a formal power series with coefficients in ${\mathbb Q}$, and that for $a\equiv -1 {\mbox {\rm \ mod\ }} {\mathbb F}_p^{*d}$, {\it i.e.} $\alpha = \theta $, in fact $I_{\alpha+\theta}(T)=I_0(T)$ is a polynomial in ${\mathbb Z}[T]$, the reciprocal of the Gauss period equation.
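The matrix-power criterion of Theorem 2 also lends itself to direct computation. The following brute-force sketch (ours, not part of the paper's apparatus; it fixes the smallest primitive root as the generator $\omega$, a choice that permutes the cyclotomic classes but does not affect $g_d(p)$) computes the cyclotomic numbers of order $d$ and evaluates $g_d(p)$:

```python
def cyclotomic_numbers(p, d):
    """Cyclotomic numbers (i,j) of order d for a prime p = df + 1, by brute force."""
    # Smallest generator (primitive root) of F_p^*.
    def is_generator(w):
        x, seen = 1, set()
        for _ in range(p - 1):
            x = x * w % p
            seen.add(x)
        return len(seen) == p - 1
    w = next(g for g in range(2, p) if is_generator(g))
    # ind[x] = k with w^k = x in F_p^*
    ind, x = {}, 1
    for k in range(p - 1):
        ind[x] = k
        x = x * w % p
    # (i,j) counts t in C_i with t+1 in C_j, where C_i = {w^(ds+i)};
    # t = p-1 is excluded since t+1 = 0 there.
    C = [[0] * d for _ in range(d)]
    for t in range(1, p - 1):
        C[ind[t] % d][ind[t + 1] % d] += 1
    return C

def g_d(p, d):
    """g_d(p) via the matrix-power criterion of Theorem 2."""
    f = (p - 1) // d
    assert p > 2 and d >= 2 and d * f == p - 1
    theta = 0 if f % 2 == 0 else d // 2
    C = cyclotomic_numbers(p, d)
    M = [[int(C[i][j] != 0) for j in range(d)] for i in range(d)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)]
                for i in range(d)]

    g = 0
    for alpha in range(d):
        P = [[int(i == j) for j in range(d)] for i in range(d)]  # P = M^0
        s = 1
        while s < p and P[(alpha + theta) % d][theta] == 0:
            P, s = matmul(P, M), s + 1   # now P = M^(s-1)
        g = max(g, s)
    return g
```

For instance, it reproduces $g_3(7)=3$ and $g_3(13)=2$ from Theorem 4, and $g_4(5)=4$, $g_4(13)=3$ from Theorem 5.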
We finished with two examples of explicit calculations of $g_d(p)$, for $d=3$ and $d=4$ and all primes $p$. \end{document}
\begin{document} \title[The $C^*$-algebras of arbitrary graphs]{The $\boldsymbol{C^*}$-algebras of arbitrary graphs} \author{D. Drinen} \address{Dartmouth College\\Department of Mathematics\\Bradley Hall 6188\\Hanover, NH 03755} \curraddr{Department of Mathematics \\ The University of the South\\ \\ Sewanee, TN 37383} \email{[email protected]} \author{M. Tomforde} \address{Dartmouth College\\Department of Mathematics\\Bradley Hall 6188\\Hanover, NH 03755} \curraddr{Department of Mathematics\\ University of Iowa\\ Iowa City\\ IA 52242\\ USA} \email{[email protected]} \subjclass{46L55} \begin{abstract} To an arbitrary directed graph we associate a row-finite directed graph whose $C^*$-algebra contains the $C^*$-algebra of the original graph as a full corner. This allows us to generalize results for $C^*$-algebras of row-finite graphs to $C^*$-algebras of arbitrary graphs: the uniqueness theorem, simplicity criteria, descriptions of the ideals and primitive ideal space, and conditions under which a graph algebra is AF and purely infinite. Our proofs require only standard Cuntz-Krieger techniques and do not rely on powerful constructs such as groupoids, Exel-Laca algebras, or Cuntz-Pimsner algebras. \end{abstract} \maketitle \section{Introduction} \label{sec-intro} Since they were first introduced in 1947 \cite{Seg}, $C^*$-algebras have become important tools for mathematicians working in many areas. Because of the immensity of the class of all $C^*$-algebras, however, it has become important to identify and study special types of $C^*$-algebras. These special types of $C^*$-algebras (e.g.~AF-algebras, Bunce-Deddens algebras, AH-algebras, irrational rotation algebras, group $C^*$-algebras, and various crossed products) have provided great insight into the behavior of more general $C^*$-algebras. In fact, it is fair to say that much of the development of operator algebras in the last twenty years has been based on a careful study of these special classes. 
One important and very natural class of $C^*$-algebras comes from considering $C^*$-algebras generated by partial isometries. There are a variety of ways to construct these $C^*$-algebras, but typically any such construction will involve having the partial isometries satisfy relations that describe how their initial and final spaces are related. Furthermore, one finds that in practice it is convenient to have an object (e.g.~a matrix, a graph, etc.) that summarizes these relations. In 1977 Cuntz introduced a class of $C^*$-algebras that became known as Cuntz algebras \cite{Cun}. For each $n=2,3,\ldots, \infty$ the Cuntz algebra $\mathcal{O}_n$ is generated by $n$ isometries satisfying certain relations. The Cuntz algebras were important in the development of $C^*$-algebras because they provided the first examples of $C^*$-algebras whose $K$-theory has torsion. In 1980 Cuntz and Krieger considered generalized versions of the Cuntz algebras \cite{CK1}. Given an $n \times n$ matrix $A$ with entries in $\{0,1\}$, the Cuntz-Krieger algebra $\mathcal{O}_A$ is defined to be the $C^*$-algebra generated by partial isometries satisfying relations determined by $A$. A study of the Cuntz-Krieger algebras was made in the seminal paper \cite{CK1} where it was shown that they arise naturally in the study of topological Markov chains. It was also shown that there are important parallels between these $C^*$-algebras and certain kinds of dynamical systems (e.g.~shifts of finite type). In 1982 Watatani noticed that by considering a $\{0,1\}$-matrix as the adjacency matrix of a directed graph, one could view Cuntz-Krieger algebras as $C^*$-algebras associated to certain finite directed graphs \cite{Wat}. Although Watatani published some papers using this graph approach \cite{FW,Wat}, his work went largely unnoticed. It was not until 1997 that Kumjian, Pask, Raeburn, and Renault rediscovered $C^*$-algebras associated to directed graphs. 
This theory of $C^*$-algebras associated to graphs was developed in \cite{kprr}, \cite{kpr}, and \cite{bprs}. In these papers the authors were able to define and work with $C^*$-algebras associated to finite graphs as well as $C^*$-algebras associated to infinite graphs that are row-finite (i.e.~all vertices emit a finite number of edges). By allowing all finite graphs as well as certain infinite graphs, these graph algebras included many $C^*$-algebras that were not Cuntz-Krieger algebras. Furthermore, it was found that the graph not only described the relations for the generators, but also many important properties of the associated $C^*$-algebra could be translated into graph properties. Thus the graph provides a tool for visualizing many aspects of the associated $C^*$-algebra. In addition, because graph algebras consist of a wide class of $C^*$-algebras whose structure can be understood, other areas of $C^*$-algebra theory have benefitted nontrivially from their study. Despite these successes, many people were still unsatisfied with the condition of row-finiteness and wanted a theory of $C^*$-algebras for arbitrary graphs. This desire was further fueled by the fact that in his original paper \cite{Cun} Cuntz defined a $C^*$-algebra $\mathcal{O}_\infty$, which seemed as though it should be the $C^*$-algebra associated to a graph with one vertex and a countably infinite number of edges. Despite many people's desire to extend the definition of graph algebras to arbitrary graphs, it was unclear exactly how to make sense of the defining relations in the non-row-finite case. It was not until 2000 that Fowler, Laca, and Raeburn were finally able to extend the definition of graph algebras to arbitrary directed graphs \cite{flr}. These graph algebras now included the Cuntz algebra $\mathcal{O}_\infty$, and as expected it arises as the $C^*$-algebra of the graph with one vertex and infinitely many edges. 
In the time since $C^*$-algebras associated to arbitrary graphs were defined, there have been many attempts to extend results for row-finite graph algebras to arbitrary graph algebras. However, because many of the proofs of the fundamental theorems for $C^*$-algebras of row-finite graphs make heavy use of the row-finiteness assumption, it has often been unclear how to proceed. In most cases where results have been generalized, the proofs have relied upon sophisticated techniques and powerful machinery such as groupoids, the Exel-Laca algebras of \cite{el}, and the Cuntz-Pimsner algebras of \cite{pimsner}. In this paper we describe an operation called desingularization that transforms an arbitrary graph into a row-finite graph with no sinks. It turns out that this operation preserves Morita equivalence of the associated $C^*$-algebra as well as the loop structure and path space of the graph. Consequently, it is a powerful tool in the analysis of graph algebras because it allows one to apply much of the machinery that has been developed for row-finite graph algebras to arbitrary graph algebras. Desingularization was motivated by the process of ``adding a tail to a sink" that is described in \cite{bprs}. In fact, this process is actually a special case of desingularization. The difference is that now we not only add tails at sinks, but we also add (more complicated) tails at vertices that emit infinitely many edges. Consequently, we shall see that vertices that emit infinitely many edges will often behave similarly to sinks in the way that they affect the associated $C^*$-algebra. In fact for some of our results, such as conditions for simplicity, one can take the result for row-finite graphs and replace the word ``sink" by the phrase ``sink or vertex that emits infinitely many edges" to get the corresponding result for arbitrary graphs. We begin in Section \ref{sec-desing} with the definition of desingularization. 
This is our main tool for dealing with $C^*$-algebras associated to arbitrary graphs. It gives the reader who is comfortable with $C^*$-algebras of row-finite graphs a great deal of intuition into the structure of non-row-finite graph algebras. This is accomplished by providing a method for easily translating questions about arbitrary graph algebras to the row-finite setting. After the definition of desingularization, we describe a correspondence between paths in the original graph and paths in the desingularization. We then show that desingularization preserves loop structure of the graph as well as Morita equivalence of the $C^*$-algebra. This allows us to obtain easy proofs of several known results. In particular, we prove the uniqueness theorem of \cite{flr} and give necessary and sufficient conditions for a graph algebra to be simple, purely infinite, and AF. In Section~\ref{sec-ideal} we describe the ideal structure of graph algebras. Here we will see that our solution is more complicated than what occurs in the row-finite case. The correspondence with saturated hereditary sets described in \cite{bprs} no longer holds. Instead we have a correspondence of the ideals with pairs $(H,S)$, where $H$ is a saturated hereditary set and $S$ is a set containing vertices that emit infinitely many edges, only finitely many of which have range outside of $H$. We conclude in Section~\ref{sec-prim} with a description of the primitive ideal space of a graph algebra. Our result will again be more complicated than the corresponding result for the row-finite case, which involves maximal tails \cite{bprs}. For arbitrary graphs we will need to account for vertices that emit infinitely many edges, and our description of the primitive ideal space will include both maximal tails and special vertices that emit infinitely many edges known as ``breaking vertices".
We thank Iain Raeburn for making us aware of the related papers by Szyma\'nski \cite{sz} and Paterson \cite{patgg}, and we thank both Iain Raeburn and Dana Williams for their comments on the first draft of this paper. After this work was completed, it was brought to our attention that our description of the primitive ideal space in Section~\ref{sec-prim} had been obtained independently in the preprint \cite{bhrs}. Although the results in \cite{bhrs} are similar to some of our results in Section~\ref{sec-prim}, one should note that the methods used in the proofs are very different. In addition, we mention that we have adopted their term ``breaking vertex" to provide consistency for readers who look at both papers. \section{The desingularized graph} \label{sec-desing} We closely follow the notation established in \cite{kpr} and \cite{bprs}. A (directed) graph $E=(E^0, E^1, r, s)$ consists of countable sets $E^0$ of vertices and $E^1$ of edges, and maps $r,s:E^1 \rightarrow E^0$ describing the source and range of each edge. We let $E^*$ denote the set of finite paths in $E$, and we let $E^\infty$ denote the set of infinite paths. The maps $r,s$ extend to $E^*$ in the obvious way and $s$ extends to $E^\infty$. A vertex $v$ is called a \emph{sink} if $|s^{-1}(v)|=0$, and $v$ is called an \emph{infinite-emitter} if $|s^{-1}(v)|=\infty$. If $v$ is either a sink or an infinite-emitter, we call it a {\it singular vertex}. A graph $E$ is said to be {\it row-finite} if it has no infinite-emitters. 
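The vertex classification above can be made concrete in a small sketch. The finite dict encoding below is our own assumption, not notation from the paper: each vertex maps to the list of ranges of its outgoing edges, with the marker `inf` standing in for an infinite-emitter.

```python
from math import inf

# A countable directed graph E, encoded as {vertex: list of edge ranges},
# where the value `inf` marks an infinite-emitter (our encoding, hypothetical).

def is_sink(E, v):
    """A sink emits no edges: |s^{-1}(v)| = 0."""
    return E[v] != inf and len(E[v]) == 0

def is_infinite_emitter(E, v):
    """An infinite-emitter satisfies |s^{-1}(v)| = infinity."""
    return E[v] == inf

def is_singular(E, v):
    """Singular vertex: a sink or an infinite-emitter."""
    return is_sink(E, v) or is_infinite_emitter(E, v)

def is_row_finite(E):
    """E is row-finite iff it has no infinite-emitters."""
    return not any(is_infinite_emitter(E, v) for v in E)
```

For example, the $\mathcal{O}_\infty$ graph (one vertex with infinitely many loops) would be encoded as `{"v": inf}`: it is not row-finite, and `v` is a singular vertex.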
Given any graph (not necessarily row-finite), a {\it Cuntz-Krieger $E$-family} consists of mutually orthogonal projections $\{p_v \,|\, v \in E^0\}$ and partial isometries $\{s_e \,|\, e \in E^1\}$ with orthogonal ranges satisfying the {\it Cuntz-Krieger relations}: \begin{enumerate} \item $s_e^* s_e = p_{r(e)}$ for every $e \in E^1$; \item $s_e s_e^* \leq p_{s(e)}$ for every $e \in E^1$; \item $p_v = \sum_{\{e \,|\, s(e)=v\}} s_e s_e^*$ for every $v \in E^0$ that is not a singular vertex. \end{enumerate} The graph algebra $C^*(E)$ is defined to be the $C^*$-algebra generated by a universal Cuntz-Krieger $E$-family. For the existence of such a $C^*$-algebra, one can either modify the proofs in \cite[Theorem 2.1]{aHr} or \cite[Theorem 1.2]{kpr}, or one can appeal to more general constructions such as \cite{blackadar} or \cite{pimsner}. Given a graph $E$ we shall construct a graph $F$, called a {\it desingularization of $E$}, with the property that $F$ has no singular vertices and $C^*(E)$ is isomorphic to a full corner of $C^*(F)$. Loosely speaking, we will build $F$ from $E$ by replacing every singular vertex $v_0$ in $E$ with its own infinite path, and then redistributing the edges of $s^{-1}(v_0)$ along the vertices of the infinite path. Note that if $v_0$ happens to be a sink, then $|s^{-1}(v_0)| = 0$ and there are no edges to redistribute. In that case our procedure will coincide with the process of adding an infinite tail to a sink described in \cite[(1.2)]{bprs}. \begin{defn} \label{defn-addtail} Let $E$ be a graph with a singular vertex $v_0$. We {\it add a tail} to $v_0$ by performing the following procedure. If $v_0$ is a sink, we add a graph of the form \begin{equation} \label{tail} \xymatrix{ v_0 \ar[r]^{e_1} & v_1 \ar[r]^{e_2} & v_2 \ar[r]^{e_3} & v_3 \ar[r]^{e_4} & \cdots\\ } \end{equation} as described in \cite[(1.2)]{bprs}. If $v_0$ is an infinite emitter we first list the edges $g_1, g_2, g_3, \ldots$ of $s^{-1}(v_0)$. 
Then we add a graph of the form shown in (\ref{tail}), remove the edges in $s^{-1}(v_0)$, and for every $g_j \in s^{-1}(v_0)$ we draw an edge $f_j$ from $v_{j-1}$ to $r(g_j)$. For any $j$ we shall also define $\alpha^j$ to be the path $\alpha^j := e_1e_2 \ldots e_{j-1}f_j$ in $F$. \end{defn} Note that different orderings of the edges of $s^{-1}(v_0)$ may give rise to nonisomorphic graphs via the above procedure. \begin{defn} If $E$ is a directed graph, a {\it desingularization of $E$} is a graph $F$ obtained by adding a tail at every singular vertex of $E$. \end{defn} \begin{ex} \label{ex-mainex} Suppose we have a graph $E$ containing this fragment: $$ \xymatrix{ w_1 \ar[dr] & & w_3 \\ & v_0 \ar@{=>}[dr]^<>(.6){\infty} \ar[ur]_<>(.6){g_3} \ar@(ur,ul)[]_{g_1} \ar@(dr,dl)[]^{g_2} &\\ w_2 \ar[ur] & & w_4\\ } $$ \vspace*{.1in} \noindent where the double arrow labeled $\infty$ denotes a countably infinite number of edges from $v_0$ to $w_4$. Let us label the edges from $v_0$ to $w_4$ as $\{g_4, g_5, g_6, \ldots\}$. Then a desingularization of $E$ is given by the following graph $F$. $$ \xymatrix{ w_1 \ar[dr] & & w_3 & & &\\ & v_0 \ar@(ur,ul)[]_{f_1} \ar[r]^{e_1} & v_1 \ar[r]^{e_2} \ar@/^/[l]^{f_2} & v_2 \ar[r]^{e_3} \ar[ul]_{f_3} & v_3 \ar[r]^{e_4} \ar[dll]_{f_4} & v_4 \ar[r]^{e_5} \ar[dlll]_<>(.4){f_5} & \cdots \ar[dllll]^{f_6}\\ w_2 \ar[ur] & & w_4 & & & &\\ } $$ \end{ex} \begin{ex} \label{ex-Oinfty} If $E$ is the $\mathcal{O}_{\infty}$ graph (one vertex with infinitely many loops), a desingularization $F$ looks like this: $$ \xymatrix{ . \ar[r] \ar@(ul,dl)[] & . \ar[r] \ar@(d,d)[l] & . \ar[r] \ar@(d,d)[ll] & . \ar[r] \ar@(d,d)[lll] & . \ar[r] \ar@(d,d)[llll] & \ar@(d,d)[lllll] \cdots\\ &&&&&\\ } $$ \end{ex} \begin{ex} \label{ex-K} The following graph was mentioned in \cite[Remark 11]{flr}: $$ \xymatrix{ \cdots \ar[r] & . \ar[r] & . \ar[r] & v_0 \ar@{=>}[r]^{\infty} & . \ar[r] & . \ar[r] & \cdots \\ } $$ A desingularization of it is: $$ \xymatrix{ \cdots \ar[r] & . 
\ar[r] & . \ar[r] & v_0 \ar[r] \ar[d] & . \ar[r] & . \ar[r] & \cdots\\ &&& v_1 \ar[d] \ar[ur] &&&\\ &&& v_2 \ar[d] \ar[uur] &&&\\ &&& v_3 \ar[d] \ar[uuur] &&&\\ &&& \vdots \ar[uuuur] &&&\\ } $$ \end{ex} It is crucial that desingularizing a graph preserves connectivity, path space, and loop structure in the appropriate senses, and this will turn out to be the case. We make these ideas precise with the next three lemmas: Lemma~\ref{lem-correspondence} describes how the path spaces of $E$ and $F$ are related, Lemma~\ref{lem-LK} shows that desingularization preserves loop structure, and Lemma~\ref{lem-cofinalplus} describes the relationship between cofinality of a graph and cofinality of its desingularization. We first review some notation. If $E$ is a directed graph and $S_1,S_2 \subseteq E^0$ we say {\it $S_1$ connects to $S_2$}, denoted $S_1 \geq S_2$, if for every $v \in S_1$ there exists $w \in S_2$ and $\alpha \in E^*$ with $s(\alpha)=v$ and $r(\alpha)=w$. Frequently one or both of the $S_i$'s will contain a single vertex $v$, in which case we write $v$ rather than $\{v\}$. If $\lambda$ is a finite or infinite path in $E$, we write $S \geq \lambda$ to mean $S \geq \{s(\lambda_i)\}_{i=1}^{|\lambda|}$. Finally, a graph $E$ is said to be {\it cofinal} if for every infinite path $\lambda$ we have $E^0 \geq \lambda$. \begin{lem} \label{lem-correspondence} Let $E$ be a graph and let $F$ be a desingularization of $E$. \begin{itemize} \item [(a)] There are bijective maps $$ \phi: E^* \longrightarrow \{\beta \in F^* \,|\, s(\beta), r(\beta) \in E^0\}\\ $$ $$ \phi_\infty: E^\infty \cup \{\alpha \in E^* \,|\, r(\alpha) \hbox{ is a singular vertex}\} \longrightarrow \{\lambda \in F^\infty \,|\, s(\lambda) \in E^0\}.\\ $$ The map $\phi$ preserves source and range (and hence $\phi$ preserves loops), and the map $\phi_\infty$ preserves source. \item [(b)] The map $\phi_\infty$ preserves $\geq$ in the following sense. 
For every $v \in E^0$ and $\lambda \in E^\infty \cup \{ \alpha \in E^* \, | \, r(\alpha) \text{ is a singular vertex} \}$, we have $v \geq \lambda$ in $E$ if and only if $v \geq \phi_\infty(\lambda)$ in $F$. \end{itemize} \end{lem} \begin{proof} First define a map $\phi':E^1 \rightarrow F^*$. If $e \in E^1$, then $e$ will have one of two forms: either $s(e)$ is not a singular vertex, in which case $e \in F^1$, or else $s(e)$ is a singular vertex, in which case $e=g_j$ for some $j$. We define $\phi'$ by $$ \phi'(e) = \left\{\begin{array}{ll} e & \hbox{if $s(e)$ is not singular;}\\ \alpha^j & \hbox{if $e=g_j$ for some $j$,}\\ \end{array}\right. $$ where $\alpha^j := e_1 \ldots e_{j-1}f_j$ is the path described in Definition~\ref{defn-addtail}. Since $\phi'$ preserves source and range, it extends to a map on the finite path space $E^*$. In particular, for $\alpha = \alpha_1 \ldots \alpha_n \in E^*$ define $\phi(\alpha) = \phi'(\alpha_1)\phi'(\alpha_2)\ldots \phi'(\alpha_{|\alpha|})$. It is easy to check that $\phi$ is injective, that it preserves source and range, and that it is onto the set $\{\beta \in F^* \,|\, s(\beta), r(\beta) \in E^0\}$. We define $\phi_\infty$ similarly. In particular, if $\lambda = \lambda_1 \lambda_2 \ldots \in E^\infty$, define $\phi_\infty(\lambda) = \phi'(\lambda_1)\phi'(\lambda_2) \ldots$. If $\alpha$ is a finite path whose range is a singular vertex $v_0$, we define $\phi_\infty(\alpha) = \phi(\alpha)e_1e_2\ldots$, where $e_1e_2,\ldots$ is the tail in $F$ added to $v_0$. To show that $\phi_\infty$ is a bijection, we construct an inverse $\psi_\infty:\{\lambda \in F^\infty \,|\, s(\lambda) \in E^0\} \rightarrow E^\infty \cup \{\alpha \in E^* \,|\, r(\alpha) \hbox{ is a singular vertex}\}$. Notice that every $\lambda \in F^\infty$ either returns to $E$ infinitely often or it ends up in one of the added infinite tails. 
More precisely, $\lambda$ has one of two forms: either $\lambda = a^1a^2\ldots$ or $\lambda = a^1a^2 \ldots a^n e_1 e_2 e_3 \ldots$, where each $a^k$ is either an edge of $E$ or an $\alpha^j$. We define $\psi'$ by $$ \psi'(a^k) := \left\{\begin{array}{ll} a^k & \hbox{if $a^k \in E^1$;}\\ g_j & \hbox{if $a^k=\alpha^j$ for some $j$,}\\ \end{array}\right. $$ and we define $$ \psi_\infty(\lambda) := \left\{\begin{array}{ll} \psi'(a^1)\psi'(a^2)\ldots & \hbox{if $\lambda = a^1a^2\ldots$;}\\ \psi'(a^1)\ldots\psi'(a^n) & \hbox{if $\lambda = a^1 \ldots a^ne_1e_2\ldots$.}\\ \end{array}\right. $$ It is easy to check that $\phi_\infty$ and $\psi_\infty$ are inverses, and we have established (a). To prove (b), let $\lambda \in E^\infty \cup \{ \alpha \in E^* \,|\, r(\alpha) \hbox{ is a singular vertex}\}$ and $v \geq \lambda$ in $E$. Then there exists a finite path $\alpha$ in $E$ such that $s(\alpha)=v$ and $r(\alpha)=w$ for some $w \in E^0$ lying on the path $\lambda$. Note that the vertices of $E$ that are on the path $\phi_\infty(\lambda)$ are exactly the same as the vertices on the path $\lambda$. Hence $w$ must also be a vertex on the path $\phi_\infty(\lambda)$. Now, because $\phi$ preserves source and range, $\phi(\alpha)$ is a path that starts at $v$ and ends at $w$, which is a vertex on $\phi_\infty(\lambda)$. Thus $v \geq \phi_\infty(\lambda)$. For the converse let $\lambda \in E^\infty \cup \{ \alpha \in E^* \,|\, r(\alpha) \hbox{ is a singular vertex}\}$ and $v \in E^0 $, and suppose that $v \geq \phi_\infty(\lambda)$ in $F$. Then there exists a finite path $\beta$ in $F$ with $s(\beta)=v$ and $r(\beta) = w$ for some vertex $w$ on the path $\phi_\infty(\lambda)$. Notice that if $r(\beta)$ is a vertex on one of the added infinite tails, then $\phi_\infty(\lambda)$ must have passed through $v_0$, and so must have $\beta$. Thus we may assume $r(\beta) \in E^0 \subseteq F^0$. 
Now $\beta$ is a finite path in $F$ that starts and ends in $E^0$, so it can be pulled back to a path $\phi^{-1}(\beta) \in E^*$ with source $v$ and range $r(\beta)$. Since $r(\beta)$ lies on the path $\phi_\infty(\lambda)$, it lies on the path $\lambda$, and thus $\phi^{-1}(\beta)$ is a path from $v$ to some vertex of $\lambda$. Hence $v \geq \lambda$ in $E$. \end{proof} A {\it loop} in a graph $E$ is a finite path $\alpha = \alpha_1 \alpha_2 \ldots \alpha_{|\alpha|}$ with $s(\alpha)=r(\alpha)$. The vertex $s(\alpha)=r(\alpha)$ is called the {\it base point} of the loop. A loop is said to be {\it simple} if $s(\alpha_i)=s(\alpha_1)$ implies $i=1$. Therefore a simple loop is one that does not return to its base point more than once. An {\it exit} for a loop $\alpha$ is an edge $f$ such that $s(f)=s(\alpha_i)$ for some $i$, and $f \neq \alpha_i$. A graph $E$ is said to satisfy {\it Condition~(L)} if every loop has an exit and $E$ is said to satisfy {\it Condition~(K)} if no vertex in $E$ is the base point of exactly one simple loop. \begin{lem} \label{lem-LK} Let $E$ be a graph and let $F$ be a desingularization of $E$. Then \begin{itemize} \item[(a)] $E$ satisfies Condition~(L) if and only if $F$ satisfies Condition~(L). \item[(b)] $E$ satisfies Condition~(K) if and only if $F$ satisfies Condition~(K). \end{itemize} \end{lem} \begin{proof} If $\alpha$ is a loop in $E$ with no exits, then all the vertices on $\alpha$ emit exactly one edge. Hence none of these vertices are singular vertices, and $\phi(\alpha)$ is a loop in $F$ with no exits. If $\alpha$ is a loop in $F$ with no exits, then we claim that none of the singular vertices of $E$ can appear in the loop. To see this, note that if $v_0$ is a sink in $E$, then it cannot be a part of a loop in $F$; and if $v_0$ is an infinite-emitter in $E$, then $v_0$ is the source of two edges, which would necessarily create an exit for any loop. 
Since none of the singular vertices of $E$ appear in $\alpha$, it follows that $\phi^{-1}(\alpha)$ is a loop in $E$ with no exits. This establishes part (a). Now suppose $v \in E^0$ is the base of exactly one simple loop $\alpha$ in $E$. Then $\phi(\alpha)$ is a simple loop in $F$. If there were another simple loop $\beta$ in $F$ based at $v$, then $\phi^{-1}(\beta)$ would be a simple loop in $E$ based at $v$ that is different from $\alpha$. Thus if $F$ satisfies Condition~(K), then $E$ satisfies Condition~(K). Now suppose $E$ satisfies Condition~(K). Let $v \in F^0$ be the base of a simple loop $\alpha$ in $F$. If $v \in E^0$, then $\phi^{-1}(\alpha)$ is a simple loop in $E$ based at $v$. Since $E$ satisfies Condition~(K), there is a simple loop $\beta$ in $E$ based at $v$ different from $\phi^{-1}(\alpha)$. Certainly, $\phi(\beta)$ is a simple loop in $F$ and, because $\phi$ is injective, $\phi(\beta)$ must be different from $\alpha$. Now suppose $v$ is on an added infinite tail; that is, $v=v_n$ for some $n \geq 1$. Then $\alpha$ must have the form $\alpha' e_1 e_2 \ldots e_n$ for some $\alpha' \in F^*$. Now, $e_1 e_2 \ldots e_n \alpha'$ is a simple loop in $F$ based at $v_0$ and hence $\phi^{-1}(e_1 \ldots e_n \alpha')$ is a simple loop in $E$ based at $v_0$. Since $E$ satisfies Condition~(K), there must be another simple loop $\beta$ in $E$ based at $v_0$. Now $\phi(\beta)$ will be a simple loop in $F$ based at $v_0$. If $v_n$ is not a vertex on $\phi(\beta)$, then $\alpha' \phi(\beta) e_1 \ldots e_n$ will be another simple loop based at $v_n$ that is different from $\alpha$. On the other hand, if $v_n$ is a vertex of $\phi(\beta)$, then $\phi(\beta)$ has the form $e_1 \ldots e_n \beta'$, where $\beta' \in F^*$. Since $\phi(\beta)$ is a simple loop based at $v_0$, we know that $s(\beta'_i) \neq v_0$ for $1 \leq i \leq |\beta'|$. Hence $v_n$ is not a vertex on the path $\beta'$. Therefore $\beta'e_1 \ldots e_n$ is a simple loop based at $v_n$.
Furthermore, it is different from the loop $\alpha = \alpha' e_1 \ldots e_n$, because if they were equal then we would have $\alpha' = \beta'$, and hence $\phi(\beta) = e_1 \ldots e_n \beta' = e_1 \ldots e_n \alpha'$, contradicting the fact that $\beta$ was chosen to be different from $\phi^{-1}(e_1 \ldots e_n \alpha')$. Thus $F$ satisfies Condition~(K). \end{proof} \begin{lem} \label{lem-cofinalplus} Let $E$ be a graph and let $F$ be a desingularization of $E$. Then the following are equivalent: \begin{itemize} \item [(1)] $F$ is cofinal; \item [(2)] $E$ is cofinal and for every singular vertex $v_0 \in E^0$ we have $E^0 \geq v_0$. \end{itemize} \end{lem} \begin{proof} Assume $F$ is cofinal and fix $v \in E^0$. Suppose $\lambda \in E^\infty$. Because $F$ is cofinal, $v \geq \phi_\infty(\lambda)$ in $F$. Thus by Lemma \ref{lem-correspondence}(b), $v \geq \lambda$ in $E$. Now let $v_0 \in E^0$ be any singular vertex. Then $\phi_\infty(v_0)$ is the infinite tail $e_1e_2 \ldots$ added to $v_0$. By cofinality of $F$, $v$ connects to $e_1e_2 \ldots$, and since any path that connects to $e_1e_2 \ldots$ connects to $v_0$, we know that there is a path $\alpha \in F^*$ from $v$ to $v_0$. But then $\phi^{-1}(\alpha)$ is a path from $v$ to $v_0$ in $E$. Hence $E^0 \geq v_0$. Now assume $E$ is cofinal and for every singular vertex $v_0$ we have $E^0 \geq v_0$. If $E$ has a sink $v_0$, then since $E$ is cofinal it follows that $E^\infty = \emptyset$ (a sink cannot reach any infinite path). Furthermore, since $E^0 \geq v_0$ it must be the case that $v_0$ is the only sink in $E$. Hence $F$ is obtained from $E$ by adding a single tail at $v_0$. Now if $\lambda \in F^\infty$, then since $E^\infty = \emptyset$ we must have that $\lambda$ eventually ends up in the tail. If $w \in F^0$, then either $w$ is in the tail or $w \in E^0$. Since $E^0 \geq v_0$ this implies that in either case $w \geq \lambda$. Hence $F$ is cofinal. Now assume that $E$ has no sinks. Let $\lambda \in F^\infty$ and $v \in F^0$. We must show that $v \geq \lambda$ in $F$.
We will first show that it suffices to prove this for the case when $s(\lambda) \in E^0$ and $v \in E^0$. If $v = v_n$, a vertex in one of the added infinite tails, then because $E$ has no sinks, $v_n$ must be the source of some edge $f_j$ with $r(f_j) \in E^0$ and we see that $r(f_j) \geq \lambda$ in $F$ implies $v_n \geq \lambda$ in $F$. Likewise, if $s(\lambda) = v_n$, a vertex in the infinite tail added to $v_0$, then $v \geq e_1 e_2 \ldots e_n \lambda$ in $F$ implies $v \geq \lambda$ in $F$. Thus we may replace $\lambda$ by $e_1 e_2 \ldots e_n \lambda$. Hence we may assume that $s(\lambda) \in E^0$ and $v \in E^0$. Since $\lambda$ is an infinite path in $F$ whose source is in $E^0$, Lemma~\ref{lem-correspondence}(a) implies that $\lambda = \phi_\infty(\mu)$, where $\mu$ is either an infinite path in $E$ or a finite path in $E$ ending at a singular vertex. If $\mu$ is an infinite path, then cofinality of $E$ implies that $v \geq \mu$ and Lemma~\ref{lem-correspondence}(b) implies that $v \geq \phi_\infty(\mu) = \lambda$. If $\mu$ is a finite path ending at a singular vertex, then $v \geq \mu$ by assumption and so $v \geq \phi_\infty(\mu) = \lambda$. Thus $F$ is cofinal. \end{proof} The next two lemmas will be used to prove Theorem~\ref{thm-morita}, which states that $C^*(E)$ is isomorphic to a full corner of $C^*(F)$. Lemma~\ref{lem-CKEinF} says, roughly speaking, that a Cuntz-Krieger $F$-family contains a Cuntz-Krieger $E$-family; and Lemma~\ref{lem-extendEfamtoF} says that we can extend a Cuntz-Krieger $E$-family to obtain a Cuntz-Krieger $F$-family. \begin{lem} \label{lem-CKEinF} Suppose $E$ is a graph and let $F$ be a desingularization of $E$. If $\{T_e, Q_v\}$ is a Cuntz-Krieger $F$-family, then there exists a Cuntz-Krieger $E$-family in $C^*(\{T_e, Q_v\})$. \end{lem} \begin{proof} For every vertex $v$ in $E$, define $P_v := Q_v$. For every edge $e$ in $E$ with $s(e)$ not a singular vertex, define $S_e := T_e$.
If $e$ is an edge in $E$ with $s(e) = v_0$ a singular vertex, then $e = g_j$ for some $j$, and we define $S_e := T_{\alpha^j}$. The fact that $\{S_e, P_v\,|\,e \in E^1, v \in E^0\}$ is a Cuntz-Krieger $E$-family follows immediately from the fact that $\{T_e, Q_v \,|\, e \in F^1, v \in F^0 \}$ is a Cuntz-Krieger $F$-family. \end{proof} \begin{lem} \label{lem-extendEfamtoF} Let $E$ be a graph and let $F$ be a desingularization of $E$. For every Cuntz-Krieger $E$-family $\{S_e, P_v \,|\, e \in E^1, v \in E^0\}$ on a Hilbert space $\mathcal{H}_E$, there exists a Hilbert space $\mathcal{H}_F = \mathcal{H}_E \oplus \mathcal{H}_T$ and a Cuntz-Krieger $F$-family $\{T_e, Q_v \,|\, e \in F^1, v \in F^0\}$ on $\mathcal{H}_F$ satisfying: \begin{itemize} \item $P_v = Q_v$ for every $v \in E^0$; \item $S_e = T_e$ for every $e \in E^1$ such that $s(e)$ is not a singular vertex; \item $S_e = T_{\alpha^j}$ for every $e = g_j \in E^1$ such that $s(e)$ is a singular vertex; \item $\sum_{v \notin E^0} Q_{v}$ is the projection onto $\mathcal{H}_T$. \end{itemize} \end{lem} \begin{proof} We prove the case where $E$ has just one singular vertex $v_0$. If $v_0$ is a sink, then the result follows from \cite[Lemma~1.2]{bprs}. Therefore let us assume that $v_0$ is an infinite-emitter. Given a Cuntz-Krieger $E$-family $\{S_e, P_v\}$ we define $R_0 := 0$ and $R_n := \sum_{j=1}^n S_{g_j}S_{g_j}^*$ for each positive integer $n$. Note that the $R_n$'s are projections because the $S_{g_j}$'s have orthogonal ranges. Furthermore, $R_n \leq R_{n+1} < P_{v_0}$ for every $n$. Now for every integer $n \geq 1$ define $\mathcal{H}_n := (P_{v_0} - R_n)\mathcal{H}_E$ and set $$ \mathcal{H}_F := \mathcal{H}_E \oplus \bigoplus_{n=1}^{\infty} \mathcal{H}_n. $$ For every $v \in E^0$ define $Q_v = P_v$ acting on the $\mathcal{H}_E$ component of $\mathcal{H}_F$ and zero elsewhere. That is, $Q_v(\xi_E, \xi_1, \xi_2, \dots) = (P_v\xi_E, 0, 0, \dots)$. 
Similarly, for every $e \in E^1$ with $s(e) \neq v_0$ define $T_e = S_e$ on the $\mathcal{H}_E$ component. For each vertex $v_n$ on the infinite tail define $Q_{v_n}$ to be the projection onto $\mathcal{H}_n$. That is, $Q_{v_n}(\xi_E, \xi_1, \ldots ,\xi_n, \ldots) = (0,0,\ldots,\xi_n,0,\ldots)$. Now note that because the $R_n$'s are non-decreasing, $\mathcal{H}_{n} \subseteq \mathcal{H}_{n-1}$ for each $n$. Thus for each edge of the form $e_n$ we can define $T_{e_n}$ to be the inclusion of $\mathcal{H}_n$ into $\mathcal{H}_{n-1}$ (where $\mathcal{H}_0$ is taken to mean $P_{v_0}\mathcal{H}_E$). More precisely, $$T_{e_n}(\xi_E, \xi_1, \xi_2, \dots) = (0,0,\dots,0,\xi_n,0,\dots), $$ where the $\xi_n$ is in the $\mathcal{H}_{n-1}$ component. Finally, for each edge $g_j$ and for each $\xi \in \mathcal{H}_E$ we have $S_{g_j}\xi \in \mathcal{H}_{j-1}$. To see this recall that $\mathcal{H}_{j-1} = (P_{v_0} - R_{j-1})\mathcal{H}_E$, and thus $(P_{v_0} - R_{j-1})S_{g_j}\xi = S_{g_j}\xi$. Therefore we can define $T_{f_j}$ by $$ T_{f_j}(\xi_E, \xi_1, \xi_2, \dots) = (0,\dots,0,S_{g_j}\xi_E,0,\dots), $$ where the nonzero term appears in the $\mathcal{H}_{j-1}$ component. We will now check that the collection $\{T_e, Q_v\}$ is a Cuntz-Krieger $F$-family. It follows immediately from definitions and the Cuntz-Krieger relations on $E$ that $T_e^*T_e = Q_{r(e)}$ for every $e$ that is not of the form $f_j$ or $e_n$, and that $Q_v = \sum_{s(e)=v} T_eT_e^*$ for every $v$ not on the infinite tail. Furthermore, it is easy to check using the definitions that the $Q_v$'s are mutually orthogonal and that $T_{e_n}^*T_{e_n} = Q_{r(e_n)}$ for every edge $e_n$ on the infinite tail. Now note that for every $f_j$ (recalling that $r(f_j) = r(g_j)$), \begin{align*} T_{f_j}^*T_{f_j}(\xi_E, \xi_1, \xi_2, \dots) &= T_{f_j}^*(0,\dots,0,S_{g_j}\xi_E,0,\dots)\\ &= (S_{g_j}^*S_{g_j}\xi_E, 0, 0, \dots)\\ &= (P_{r(g_j)}\xi_E, 0, 0, \dots)\\ &= Q_{r(f_j)}(\xi_E, \xi_1, \xi_2, \dots).
\end{align*} Finally, let $v_n$ be a vertex on the infinite tail. The edges emanating from $v_n$ are $e_{n+1}$ and $f_{n+1}$, and we have $$ T_{e_{n+1}}T_{e_{n+1}}^*(\xi_E, \xi_1, \dots) = (0,\dots,0,(P_{v_0}-R_{n+1})\xi_n,0,\dots), $$ where the nonzero term is in the $\mathcal{H}_n$ component. Also $$ T_{f_{n+1}}T_{f_{n+1}}^*(\xi_E, \xi_1, \dots) = (0,\dots,0,S_{g_{n+1}}S_{g_{n+1}}^*\xi_n,0,\dots), $$ where the nonzero term is again in the $\mathcal{H}_n$ component. We then have the following: \begin{align*} (T_{e_{n+1}}T_{e_{n+1}}^* + T_{f_{n+1}}T_{f_{n+1}}^*) (\xi_E, \xi_1, \dots) &= (0,\dots,0,(P_{v_0} - R_{n+1} + S_{g_{n+1}}S_{g_{n+1}}^*)\xi_n, 0, \dots)\\ &= (0,\dots,0,(P_{v_0} - R_n)\xi_n, 0, \dots)\\ &= (0,\dots,0,\xi_n,0,\dots)\\ &= Q_{v_n}(\xi_E, \xi_1, \dots). \end{align*} Thus $\sum_{ \{e : s(e) = v_n \}} T_eT_e^* = T_{e_{n+1}}T_{e_{n+1}}^* + T_{f_{n+1}}T_{f_{n+1}}^* = Q_{v_n} = Q_{r(e_n)}$ and we have established that $\{T_e, Q_v\}$ is a Cuntz-Krieger $F$-family. It is easy to verify that the bulleted points in the statement of the lemma are satisfied. \end{proof} \begin{thm} \label{thm-morita} Let $E$ be a graph and let $F$ be a desingularization of $E$. Then $C^*(E)$ is isomorphic to a full corner of $C^*(F)$. Consequently, $C^*(E)$ and $C^*(F)$ are Morita equivalent. \end{thm} \begin{proof} Again for simplicity we assume that $E$ has only one singular vertex $v_0$. Let $\{t_e, q_v \,|\, e \in F^1, v \in F^0\}$ denote the canonical set of generators for $C^*(F)$ and let $\{s_e, p_v \,|\, e \in E^1, v \in E^0\}$ denote the Cuntz-Krieger $E$-family in $C^*(F)$ constructed in Lemma~\ref{lem-CKEinF}. Define $B:=C^*(\{s_e, p_v\})$ and $p:=\sum_{v \in E^0} q_v$. To prove the theorem, we will show that $C^*(E) \cong B$ and that $B = pC^*(F)p$ is a full corner in $C^*(F)$. Since $B$ is generated by a Cuntz-Krieger $E$-family, in order to show that $B \cong C^*(E)$ it suffices to prove that $B$ satisfies the universal property of $C^*(E)$.
Let $\{S_e, P_v \,|\, e \in E^1, v \in E^0\}$ be a Cuntz-Krieger $E$-family on a Hilbert space $\mathcal{H}_E$. Then by Lemma \ref{lem-extendEfamtoF} we can construct a Hilbert space $\mathcal{H}_F$ and a Cuntz-Krieger $F$-family $\{T_e, Q_v \,|\, e \in F^1, v \in F^0\}$ on $\mathcal{H}_F$ such that $Q_v = P_v$ for every $v \in E^0$, $T_e = S_e$ for every $e \in E^1$ with $s(e) \neq v_0$, and $S_{g_j} = T_{\alpha^j}$ for every edge $g_j$ in $E$ whose source is $v_0$. Now by the universal property of $C^*(F)$, we have a homomorphism $\pi$ from $C^*(F)$ onto $C^*(\{T_e, Q_v \,|\, e \in F^1, v \in F^0\})$ that takes $t_e$ to $T_e$ and $q_v$ to $Q_v$. For any $v \in E^0$ we have $p_v = q_v$, so $\pi(p_v) = Q_v = P_v$. Let $e \in E^1$. If $s(e) \neq v_0$, then $s_e = t_e$ and $\pi(s_e) = T_e = S_e$. Finally, if $s(e) = v_0$ then $e = g_j$ for some $j$, and $s_e = t_{\alpha^j}$ so that $\pi(s_{g_j}) = T_{\alpha^j} = S_{g_j}$. Thus $\pi|_B$ is a representation of $B$ on $\mathcal{H}_E$ that takes generators of $B$ to the corresponding elements of the given Cuntz-Krieger $E$-family. Therefore $B$ satisfies the universal property of $C^*(E)$ and $C^*(E) \cong B$. We now show that $B \cong pC^*(F)p$. Just as in \cite[Lemma 1.2(c)]{bprs}, we have that $\sum_{v \in E^0} q_v$ converges strictly in $M(C^*(F))$ to a projection $p$ and that for any $\mu, \nu \in F^*$ with $r(\mu)=r(\nu)$, \begin{equation} \label{eq-pp} p t_{\mu} t_{\nu}^* p = \left\{\begin{array}{ll} t_{\mu} t_{\nu}^* & \hbox{if $s(\mu), s(\nu) \in E^0$;}\\ 0 & \hbox{otherwise.}\\ \end{array}\right. \end{equation} Therefore the generators of $B$ are contained in $pC^*(F)p$ and $B \subseteq pC^*(F)p$. To show the reverse inclusion, let $\mu$ and $\nu$ be finite paths in $F$ with $r(\mu) = r(\nu)$. We need to show that $pt_{\mu} t_{\nu}^*p \in B$. If either $\mu$ or $\nu$ does not start in $E^0$, then $pt_{\mu} t_{\nu}^*p = 0$ by the above formula.
Hence we may as well assume that both $\mu$ and $\nu$ start in $E^0$. Now, if $r(\mu) = r(\nu) \in E^0$ as well, there will exist unique $\mu', \nu' \in E^*$ with $\phi(\mu') = \mu$ and $\phi(\nu') = \nu$. In this case, $t_\mu = s_{\mu'}$ and $t_\nu = s_{\nu'}$, so $$ pt_{\mu} t_{\nu}^*p = t_{\mu}t_{\nu}^* = s_{\mu'}s_{\nu'}^* \in B. $$ On the other hand, if $r(\mu) = r(\nu) \not\in E^0$ then $r(\mu) = r(\nu) = v_n$ for some $n$. We shall prove that $pt_{\mu} t_{\nu}^*p \in B$ by induction on $n$; the base of the induction is the case $r(\mu)=r(\nu)=v_0$, which lies in $E^0$ and was treated above. Suppose that $pt_{\mu'} t_{\nu'}^*p \in B$ for any paths $\mu'$ and $\nu'$ with $r(\mu')=r(\nu')=v_{n-1}$. Then if $r(\mu)=r(\nu)=v_n$ we shall write $\mu = \mu' e_n, \nu =\nu' e_n$ for finite paths $\mu'$ and $\nu'$ with $r(\mu')=r(\nu')=v_{n-1}$. Now there are precisely two edges, $e_n$ and $f_{n}$ with source $v_{n-1}$. Thus \begin{align*} pt_{\mu}t_{\nu}^*p &= pt_{\mu'}t_{e_n}t_{e_n}^* t_{\nu'}^*p\\ &= pt_{\mu'} (q_{v_{n-1}} - t_{f_{n}}t_{f_{n}}^*) t_{\nu'}^*p\\ &= pt_{\mu'} t_{\nu'}^*p - pt_{\mu'f_{n}}t_{\nu'f_{n}}^*p, \end{align*} which is in $B$. Hence $pC^*(F)p \subseteq B$. Finally, we note that $pC^*(F)p$ is full by an argument identical to the one given in \cite[Lemma 1.2(c)]{bprs}. \end{proof} Theorem \ref{thm-morita} allows us to get easy proofs of several known results by passing to a desingularization and using the corresponding result for row-finite graphs. \begin{cor} \label{cor-uniqueness} Suppose $E$ is a graph in which every loop has an exit, and that $\{S_e, P_v\}$ and $\{T_e, Q_v\}$ are two Cuntz-Krieger $E$-families in which all the projections $P_v$ and $Q_v$ are non-zero. Then there is an isomorphism $\phi:C^*(\{S_e, P_v\}) \rightarrow C^*(\{T_e, Q_v\})$ such that $\phi(S_e) = T_e$ for all $e \in E^1$ and $\phi(P_v) = Q_v$ for all $v \in E^0$. \end{cor} \begin{proof} Let $F$ be a desingularization of $E$. Use Lemma \ref{lem-extendEfamtoF} to construct $F$-families from the given $E$-families.
Then apply \cite[Theorem 3.1]{bprs} to get an isomorphism between the $C^*$-algebras generated by the $F$-families that will restrict to an isomorphism between $C^*(\{S_e, P_v\})$ and $C^*(\{T_e,Q_v\})$. \end{proof} \begin{cor} \label{cor-AF} Let $E$ be a graph. Then $C^*(E)$ is an AF-algebra if and only if $E$ has no loops. \end{cor} \begin{proof} This follows from Theorem~\ref{thm-morita}, \cite[Theorem 2.4]{kpr}, the fact that $E$ has loops if and only if its desingularization has loops, and the fact that the class of AF-algebras is closed under stable isomorphism (see \cite[Theorem 9.4]{effros}). \end{proof} \begin{cor} \label{cor-pureinf} Let $E$ be a graph. Then $C^*(E)$ is purely infinite if and only if every vertex in $E$ connects to a loop and every loop in $E$ has an exit. \end{cor} \begin{proof} By \cite[Proposition 5.3]{bprs} and the fact that pure infiniteness is preserved by passing to corners, the hypothesis that every vertex connects to a loop and every loop has an exit implies that $C^*(E)$ is purely infinite. For the converse we note that the proof given in \cite[Theorem 3.9]{kpr} works for arbitrary graphs. \end{proof} The following result generalizes \cite[Theorem 3]{flr} and \cite[Corollary 4.5]{fr}; it was proven independently in \cite{sz} and \cite{patgg}. \begin{cor} \label{cor-simple} Let $E$ be a graph. Then $C^*(E)$ is simple if and only if \begin{itemize} \item [(1)] every loop in $E$ has an exit; \item [(2)] $E$ is cofinal; \item [(3)] for every singular vertex $v_0 \in E^0$, $E^0 \geq v_0$.
\end{itemize} \end{cor} \begin{proof} Letting $F$ denote a desingularization of $E$, we have $$ \begin{array}{lcll} C^*(E) \hbox{ is simple} & \Longleftrightarrow & C^*(F) \hbox{ is simple} & \hbox{(by Theorem \ref{thm-morita})}\\ & \Longleftrightarrow & F \hbox{ is cofinal and every loop in $F$ has an exit} & \hbox{(by \cite[Proposition 5.1]{bprs})}\\ & \Longleftrightarrow & E \hbox{ satisfies (1),(2), and (3)} & \hbox{(by Lemmas \ref{lem-LK} and \ref{lem-cofinalplus})}.\\ \end{array} $$ \end{proof} \begin{rem} \label{rem-dichotomy} We see from the results above that the dichotomy described in \cite[Remark 5.6]{bprs} holds for arbitrary graphs: If $C^*(E)$ is simple, then it is either AF or purely infinite. For if $E$ has no loops then Corollary~\ref{cor-AF} shows that $C^*(E)$ is AF. If $E$ does have loops, then Corollary~\ref{cor-simple} says that they all have exits and that $E$ is cofinal; thus every vertex connects to every loop and Corollary~\ref{cor-pureinf} applies. \end{rem} \section{Ideal structure} \label{sec-ideal} Let $E$ be a directed graph. A set $H \subseteq E^0$ is {\it hereditary} if whenever $v \in H$ and $v \geq w$, then $w \in H$. A hereditary set $H$ is called {\it saturated} if every vertex that is not a singular vertex and that feeds only into $H$ is itself in $H$; that is, if $$ \hbox{$v$ not singular and $\{r(e) \,|\, s(e)=v\} \subseteq H$ implies $v \in H$.} $$ If $E$ is row-finite this definition reduces to the one given in \cite{bprs}. It was shown in \cite[Theorem 4.4]{bprs} that if $E$ is row-finite and satisfies Condition~(K), then every saturated hereditary subset $H$ of $E^0$ gives rise to exactly one ideal $\hbox{$I_H :=$ the ideal generated by $\{p_v \,|\, v \in H\}$}$ in $C^*(E)$. If $E$ is a graph that is not row-finite, it is easy to check that with the above definition of saturated \cite[Lemma 4.2]{bprs} and \cite[Lemma 4.3]{bprs} still hold. 
Consequently, $H \mapsto I_H$ is still injective, just as in the proof of \cite[Theorem 4.1]{bprs}. However, it is no longer true that this map is surjective; that is, there may exist ideals in $C^*(E)$ that are not of the form $I_H$ for some saturated hereditary set $H$. The reason the proof for row-finite graphs no longer works is that if $I$ is an ideal, then $\{s_e + I, p_v + I\}$ will not necessarily be a Cuntz-Krieger $E \setminus H$-family for the graph $E \setminus H$ defined in \cite[Theorem~4.1]{bprs}. It turns out that to describe an arbitrary ideal in $C^*(E)$ we need a saturated hereditary subset and one other descriptor. Loosely speaking, this descriptor tells us how close $\{s_e + I, p_v + I\}$ is to being a Cuntz-Krieger $E \setminus H$-family. Given a saturated hereditary subset $H \subseteq E^0$, define $$ B_H := \{v \in E^0 \,|\, \hbox{$v$ is an infinite-emitter and $0 < |s^{-1}(v) \cap r^{-1}(E^0 \setminus H)| < \infty$}\}. $$ Therefore $B_H$ is the set of infinite-emitters that point to only a finite number of vertices not in $H$. Since $H$ is hereditary, $B_H$ is disjoint from $H$. Now fix a saturated hereditary subset $H$ and let $S$ be any subset of $B_H$. Let $\{s_e, p_v\}$ be the canonical generating Cuntz-Krieger $E$-family and define $$ I_{(H,S)} := \hbox{the ideal in $C^*(E)$ generated by $\{p_v \,|\, v \in H\} \cup \{p_{v_0}^H \,|\, v_0 \in S\}$}, $$ where $$ p_{v_0}^H := p_{v_0} - \sum_{{s(e) = v_0} \atop {r(e) \notin H}} s_e s_e^*. $$ Note that the definition of $B_H$ ensures that the sum on the right is finite. Our goal is to show that the correspondence $(H,S) \mapsto I_{(H,S)}$ is a lattice isomorphism, so we must describe the lattice structure on $$ \{(H,S) \,|\, \hbox{$H$ is a saturated hereditary subset of $E^0$ and $S \subseteq B_H$}\}. $$ We say $(H,S) \leq (H',S')$ if and only if $H \subseteq H'$ and $S \subseteq H' \cup S'$. 
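As a sanity check on this order (a small illustration, using the graph of Example~\ref{ex-prim} below), note that there the only proper nonempty saturated hereditary subset is $H = \{x\}$, with $B_H = \{v,w\}$, while $B_\emptyset = B_{E^0} = \emptyset$. The admissible pairs are therefore $(\emptyset,\emptyset)$, $(E^0,\emptyset)$, and $(\{x\},S)$ for the four subsets $S \subseteq \{v,w\}$, and one checks directly from the definition that $$ (\emptyset,\emptyset) \leq (\{x\},S) \leq (\{x\},S') \leq (E^0,\emptyset) \quad \hbox{whenever } S \subseteq S'; $$ thus the pairs form the lattice of subsets of $\{v,w\}$ with a new top and bottom element adjoined.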
With this definition, the reader who is willing to spend a few minutes can check using nothing more than basic set theory that the following equations define a greatest lower bound and least upper bound: $$ (H_1,S_1) \wedge (H_2,S_2) := \left( (H_1 \cap H_2), \ (H_1 \cup H_2 \cup S_1 \cup S_2) \cap B_{H_1 \cap H_2} \right) $$ $$ (H_1,S_1) \vee (H_2,S_2) := \left( \bigcup_{n=0}^\infty X_n \ , \ (S_1 \cup S_2) \cap B_{ \bigcup_{n=0}^\infty X_n } \right)$$ where $X_n$ is defined recursively as $X_0 := H_1 \cup H_2$ and $X_{n+1} := X_n \cup \{ v \in E^0 \,|\, 0 < | s^{-1}(v) | < \infty \text{ and } \{ r(e) \,|\, s(e) = v \} \subseteq X_n \} \cup \{ v \in E^0 \,|\, v \in S_1 \cup S_2 \text{ and } \{r(e) \,|\, s(e)=v \} \subseteq X_n \}$. The reason for this strange definition of the $X_n$'s is the following: If $Y_0$ is a hereditary subset, then the saturation of $Y_0$ may be defined as the increasing union of the sets $Y_n$, where $Y_{n+1} := Y_n \cup \{ v \in E^0 \,|\, 0 < | s^{-1}(v) | < \infty \text{ and } \{ r(e) \,|\, s(e) = v \} \subseteq Y_n \}$. In the $X_n$'s above we need not only these elements, but also at each stage we must include the infinite emitters in $S_1 \cup S_2$ that only feed into $X_n$. We now describe a correspondence between pairs $(H,S)$ as above and saturated hereditary subsets of vertices in a desingularization of $E$. Suppose that $E$ is a graph and let $F$ be a desingularization of $E$. Also let $H$ be a saturated hereditary subset of $E^0$ and let $S \subseteq B_H$. We define a saturated hereditary subset $H_S \subseteq F^0$. First set $\tilde{H} := H \cup \{v_n \in F^0 \,|\, v_n \hbox{ is on a tail added to a vertex in $H$}\}$. Now for each $v_0 \in S$ let $N_{v_0}$ be the smallest nonnegative integer such that $r(f_j) \in H$ for all $j > N_{v_0}$. The number $N_{v_0}$ exists since $v_0 \in B_H$ implies that there must be a vertex on the tail added to $v_0$ beyond which each vertex points only to the next vertex on the tail and into $H$.
Define $T_{v_0} := \{v_n \,|\, v_n \hbox{ is on the infinite tail added to $v_0$ and $n \geq N_{v_0}$}\}$ and define $$ H_S := \tilde{H} \cup \bigcup_{v_0 \in S} T_{v_0}. $$ Note that for $v_0 \in B_H$ we have $v_0 \not \in H_S$. Furthermore, the tail attached to $v_0$ will eventually be inside $H_S$ if and only if $v_0 \in S$. It is easy to check that $H_S$ is hereditary, and choosing $N_{v_0}$ to be minimal ensures that $H_S$ is saturated. \begin{ex} \label{ex-prim} Suppose $E$ is the following graph: $$ \xymatrix{ w \ar@/^/[dd] \ar@{=>}[dr]^{\infty} & \\ & x \\ v \ar@/^/[uu] \ar@{=>}[ur]_{\infty}\\ } $$ A desingularization $F$ is given by $$ \xymatrix{ w \ar@/^/[dd] \ar[r] & w_1 \ar[r] \ar[d] & w_2 \ar[r] \ar[dl] & w_3 \ar[r] \ar[dll] & w_4 \ar[r] \ar[dlll] & \cdots\\ & x \ar[r] & x_1 \ar[r] & x_2 \ar[r] & x_3 \ar[r] & \cdots\\ v \ar@/^/[uu] \ar[r] & v_1 \ar[r] \ar[u] & v_2 \ar[r] \ar[ul] & v_3 \ar[r] \ar[ull] & v_4 \ar[r] \ar[ulll] & \cdots\\ } $$ The only saturated hereditary (proper) subset in $E$ is the set $H = \{x\}$. In this case $B_H = \{v,w\}$. There are four subsets of $B_H$ and there are four saturated hereditary (proper) subsets in the desingularization. In particular, if $S = \emptyset$, then $H_S$ consists of only the tail added to $x$; if $S$ contains $w$, then $H_S$ also includes $\{w_1, w_2, \dots\}$; and if $S$ contains $v$, then $H_S$ also includes $\{v_1, v_2, \dots\}$. \end{ex} The proof of the following lemma is straightforward. \begin{lem} \label{lem-HSHS} Let $E$ be a graph and let $F$ be a desingularization of $E$. The map $(H,S) \mapsto H_S$ is an isomorphism from the lattice $$ \{(H,S) \,|\, \hbox{$H$ is a saturated hereditary subset of $E^0$ and $S \subseteq B_H$}\} $$ onto the lattice of saturated hereditary subsets of $F$. \end{lem} Suppose $E$ is a graph that satisfies Condition~(K) and $F$ is a desingularization of $E$.
Because $C^*(E)$ is isomorphic to the full corner $pC^*(F)p$, we have that $C^*(E)$ and $C^*(F)$ are Morita equivalent via the imprimitivity bimodule $pC^*(F)$. It then follows from \cite[Proposition 3.24]{tfb} that the Rieffel correspondence between ideals in $C^*(F)$ and ideals in $C^*(E)$ is given by the map $I \mapsto pIp$. \begin{prop} \label{prop-HSHS} Let $E$ be a graph satisfying Condition~(K) and let $F$ be a desingularization of $E$. Let $H$ be a saturated hereditary subset of $E^0$ and let $S \subseteq B_H$. If $\{t_e, q_v\}$ is a generating Cuntz-Krieger $F$-family and $p = \sum_{v \in E^0} q_v$, then $pI_{H_S}p = I_{(H,S)}$. \end{prop} \begin{proof} That $pI_{H_S}p \subseteq I_{(H,S)}$ is immediate from (\ref{eq-pp}). We show the reverse inclusion by showing that the generators of $I_{(H,S)}$ are in $pI_{H_S}p$. Letting $\{s_e, p_v\}$ denote the Cuntz-Krieger $E$-family defined in the proof of Lemma~\ref{lem-CKEinF}, the generators for $I_{(H,S)}$ are $\{p_v \,|\, v \in H\} \cup \{p_{v_0}^H \,|\, v_0 \in S\}$. Clearly for $v \in H$, we have $p_v = q_v = pq_vp \in pI_{H_S}p$, so all that remains to show is that for every $v_0 \in S$ we have $p_{v_0}^H \in pI_{H_S}p$. Let $v_0 \in S$ and $n:= N_{v_0}$. Then \begin{align*} q_{v_0} &= t_{e_1}t_{e_1}^* + t_{f_1}t_{f_1}^* \\ &= t_{e_1}q_{v_1}t_{e_1}^* + t_{f_1}t_{f_1}^* \\ &= t_{e_1}(t_{e_2}t_{e_2}^*+t_{f_2}t_{f_2}^*)t_{e_1}^* + t_{f_1}t_{f_1}^*\\ &= t_{e_1e_2}q_{v_2}t_{e_1e_2}^* + t_{e_1f_2}t_{e_1f_2}^* +t_{f_1}t_{f_1}^*\\ & \quad \quad \quad \vdots \\ &= t_{e_1 \ldots e_n}t_{e_1 \ldots e_n}^* + \sum_{j=1}^n t_{\alpha^j}t_{\alpha^j}^* \end{align*} Now since $r(e_n) = v_n \in H_S$ we see that $q_{v_n} \in I_{H_S}$ and hence $t_{e_n} = t_{e_n}t_{e_n}^*t_{e_n} = t_{e_n}q_{v_n} \in I_{H_S}$. Consequently, $t_{e_1 \ldots e_n}t_{e_1 \ldots e_n}^* \in I_{H_S}$. Similarly, whenever $r(\alpha^j) \in H$, then $t_{\alpha^j}t_{\alpha^j}^* \in I_{H_S}$. 
Now, by the definition of $N_{v_0}$, every $\alpha^j$ with $r(\alpha^j) \notin H$ has $j \leq n$. Therefore the above equation shows us that \begin{align*} p_{v_0}^H &= p_{v_0} - \sum_{{s(g_j) = v_0} \atop {r(g_j) \notin H}} s_{g_j} s_{g_j}^* \\ &= q_{v_0} - \sum_{{s(\alpha^j) = v_0} \atop {r(\alpha^j) \notin H}} t_{\alpha^j} t_{\alpha^j}^* \\ &= \sum_{ {r(\alpha^j) \in H} \atop {j < n} } t_{\alpha^j} t_{\alpha^j}^* + t_{e_1 \ldots e_n} t_{e_1 \ldots e_n}^* \end{align*} which is an element of $I_{H_S}$ by the previous paragraph. Hence $I_{(H,S)} \subseteq pI_{H_S}p$. \end{proof} \begin{cor} \label{cor-prim} Let $E$ be a graph satisfying Condition~(K) and let $F$ be a desingularization of $E$. If $H$ is a saturated hereditary subset of $E^0$ and $S \subseteq B_H$, then $I_{(H,S)}$ is a primitive ideal in $C^*(E)$ if and only if $I_{H_S}$ is a primitive ideal in $C^*(F)$. \end{cor} We now have the following: $$ \xymatrix{ \{(H,S) \,|\, \hbox{$H$ is saturated, hereditary in $E$ and $S \subseteq B_H$}\} \ar[d] \ar@{.>}[rr] && \hbox{ideals in $C^*(E)$}\\ \hbox{saturated, hereditary subsets of $F$} \ar[rr] && \hbox{ideals in $C^*(F)$.} \ar[u] } $$ The map on the left is $(H,S) \mapsto H_S$, which is a lattice isomorphism by Lemma \ref{lem-HSHS}. The lattice isomorphism $H \mapsto I_H$ across the bottom comes from \cite[Theorem~4.4]{bprs}. The map on the right is $I_{H_S} \mapsto I_{(H,S)}$ and is an isomorphism because it agrees with the Rieffel correspondence (Proposition \ref{prop-HSHS}). Composing the three yields the following: \begin{thm} \label{thm-ideal} Let $E$ be a graph that satisfies Condition~(K). Then the map $(H,S) \mapsto I_{(H,S)}$ is a lattice isomorphism from the lattice $$ \{(H,S) \,|\, \hbox{$H$ is a saturated hereditary subset of $E^0$ and $S \subseteq B_H$}\} $$ onto the lattice of ideals in $C^*(E)$. \end{thm} \section{Primitive ideal space} \label{sec-prim} The following definition generalizes that in \cite[Proposition~6.1]{bprs}.
\begin{defn} \label{defn-maxtail} Let $E$ be a graph. A nonempty subset $\gamma \subseteq E^0$ is called a {\it maximal tail} if it satisfies the following conditions: \begin{itemize} \item [(a)] for every $w_1, w_2 \in \gamma$ there exists $z \in \gamma$ such that $w_1 \geq z$ and $w_2 \geq z$; \item [(b)] for every $v \in \gamma$ that is not a singular vertex, there exists an edge $e$ with $s(e)=v$ and $r(e) \in \gamma$; \item [(c)] $v \geq w$ and $w \in \gamma$ imply $v \in \gamma$. \end{itemize} \end{defn} Given a graph $E$ we denote by $\Lambda_E$ the set of all maximal tails in $E$. Note that if $v_0$ is a sink, then the set $\lambda_{v_0} := \{v \in E^0 \,|\, v \geq v_0\}$ is a maximal tail according to Definition~\ref{defn-maxtail}, but was not considered to be a maximal tail in \cite[Section~6]{bprs}. In addition, when $v_0$ is an infinite-emitter $\lambda_{v_0} := \{v \in E^0 \,|\, v \geq v_0\}$ is a maximal tail. \begin{defn} \label{break-vertex} If $E$ is a graph, then a \emph{breaking vertex} is an element $v \in E^0$ such that $| s^{-1}(v) | = \infty$ and $0 < | \{ e \in E^1 \, | \, s(e)=v \text{ and } r(e) \geq v \} | < \infty$. We denote the set of breaking vertices of $E$ by $BV(E)$. \end{defn} \begin{rem} Notice that if $H$ is a hereditary subset in a graph $E$ and $v_0 \in B_H$, then $v_0$ is a breaking vertex if and only if there exists an edge $e \in E^1$ with $s(e)=v_0$ and $r(e) \geq v_0$. Also note that if $H$ is a saturated hereditary subset in a graph $E$ and $E^0 \setminus H = \lambda_{v_0}$ for some singular vertex $v_0$, then $v_0 \in B_H$ if and only if $v_0$ is a breaking vertex. \end{rem} We let $\Xi_E := \Lambda_E \cup BV(E)$ denote the disjoint union of the maximal tails and the breaking vertices. We shall see that the elements of $\Xi_E$ correspond to the primitive ideals in $C^*(E)$. 
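To illustrate these notions (a quick check, using the graph $E$ of Example~\ref{ex-prim}): since $x$ is a sink with $v \geq x$ and $w \geq x$, condition~(c) forces any maximal tail containing $x$ to be all of $E^0$, and one verifies directly from Definition~\ref{defn-maxtail} that $$ \Lambda_E = \bigl\{ \lambda_x = \{v,w,x\},\ \lambda_v = \lambda_w = \{v,w\} \bigr\}. $$ Moreover, each of $v$ and $w$ is an infinite-emitter whose unique edge $e$ with $r(e) \geq s(e)$ is its single edge to the other vertex, so $BV(E) = \{v,w\}$ and $\Xi_E$ has four elements.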
\begin{lem} \label{tailpath} If $E$ is a graph and $\gamma$ is a maximal tail in $E$, then $\gamma = \{ v \in E^0 \, | \, v \geq \alpha \}$ for some $\alpha \in E^\infty \cup \{\alpha \in E^* \,|\, r(\alpha) \hbox{ is a singular vertex}\}$. \end{lem} \begin{proof} It is straightforward to see that if $\alpha \in E^\infty \cup \{\alpha \in E^* \,|\, r(\alpha) \hbox{ is a singular vertex}\}$, then $\{ v \in E^0 \, | \, v \geq \alpha \}$ is a maximal tail \cite[Remark~6.4]{bprs}. Conversely, suppose that $\gamma$ is a maximal tail. We shall create a path in $E$ inductively. Begin with an element $w \in \gamma$. If there exists an element $w' \in \gamma$ for which $w' \ngeq w$, then we may use property~(a) of maximal tails to choose a path $\beta^1$ with $s(\beta^1) = w$ and $w' \geq r(\beta^1)$. Now having chosen $\beta^i$, we do one of two things: if $w' \geq r(\beta^i)$ for all $w' \in \gamma$, we stop. If there exists $w' \in \gamma$ such that $w' \ngeq r(\beta^i)$, then we choose a path $\beta^{i+1}$ with $s(\beta^{i+1}) = r(\beta^i)$ and $w' \geq r(\beta^{i+1})$. We then continue in this manner to produce a path $\beta := \beta^1 \beta^2 \ldots$, which may be either finite or infinite. Note that since $\gamma$ has either a finite or countable number of elements, we may choose $\beta$ in such a way that $w \geq \beta$ for all $w \in \gamma$. Now if $\beta$ is an infinite path we define $\alpha := \beta$. On the other hand, if $\beta$ is a finite path then one of two things must occur. Either $r(\beta)$ is a singular vertex or there is an edge $e_1 \in E^1$ with $s(e_1) = r(\beta)$ and $r(e_1) \in \gamma$. Continuing in this way, we see that having chosen $e_i$, either $r(e_i)$ is a singular vertex or there exists $e_{i+1} \in E^1$ with $s(e_{i+1}) = r(e_i)$ and $r(e_{i+1}) \in \gamma$. Using this process we may extend $\beta$ to a path $\alpha:= \beta e_1 e_2 \ldots$ that is either infinite or is finite and ends at a singular vertex.
Now since every vertex on $\alpha$ is an element of $\gamma$ we certainly have $\{ v \in E^0 \, | \, v \geq \alpha \} \subseteq \gamma$. Also, for every element $v \in \gamma$ there exists an $i$ such that $v \geq r(\beta^i) \geq \alpha$, so we have $\gamma \subseteq \{ v \in E^0 \, | \, v \geq \alpha \}$. \end{proof} \begin{thm} Let $E$ be a graph that satisfies Condition~(K). An ideal $I$ in $C^*(E)$ is a primitive ideal if and only if one of the following two statements holds: \begin{enumerate} \item $I = I_{(H,S)}$, where $E^0 \setminus H$ is a maximal tail and $S = B_H$; or \item $I = I_{(H,S)}$, where $E^0 \setminus H = \lambda_{v_0}$ for some breaking vertex $v_0$ and $S = B_H \setminus \{v_0\}$. \end{enumerate} \end{thm} \begin{proof} It follows from Theorem~\ref{thm-ideal} that any ideal in $C^*(E)$ has the form $I_{(H,S)}$ for some saturated hereditary set $H \subseteq E^0$ and some $S \subseteq B_H$. Let $F$ be a desingularization of $E$. It follows from Corollary~\ref{cor-prim} that $I_{(H,S)}$ is primitive if and only if $I_{H_S}$ is primitive. Now suppose that $I_{(H,S)}$, and hence $I_{H_S}$, is primitive. It follows from \cite[Proposition~6.1]{bprs} that $F^0 \setminus H_S$ is a maximal tail in $F$. Thus by Lemma~\ref{tailpath} we have $F^0 \setminus H_S = \{ w \in F^0 \, | \, w \geq \alpha \}$ for some $\alpha \in F^\infty$ (note that $F$ has no singular vertices). Now $\phi_\infty^{-1}(\alpha)$ is either an infinite path in $E$ or a finite path in $E$ ending at a singular vertex. In either case $\gamma := \{ w \in E^0 \, | \, w \geq \phi_\infty^{-1}(\alpha) \}$ is a maximal tail in $E$. Furthermore, $$ v \in E^0 \setminus H \Longleftrightarrow v \notin H \Longleftrightarrow v \notin H_S \Longleftrightarrow v \geq \alpha \text{ in $F$} \Longleftrightarrow v \geq \phi_{\infty}^{-1}(\alpha) \text{ in $E$} \Longleftrightarrow v \in \gamma. $$ Therefore $E^0 \setminus H = \gamma$ is a maximal tail. Now if $S = B_H$, then we are in the case described in part~(1) of the theorem and the claim holds.
Let us therefore suppose that there exists $v_0 \in B_H \setminus S$. If we define $T_{v_0} := \{ v_0, v_1, v_2, \ldots \}$ to be the vertices on the tail added to $v_0$, then we see that $v_0 \notin S$ implies that $T_{v_0} \subseteq F^0 \setminus H_S = \{ w \in F^0 \, | \, w \geq \alpha \}$. Now for each vertex $v_i$ with $i \geq N_{v_0}$ there are two edges, $e_{i+1}$ and $f_{i+1}$, with source $v_i$. Since $r(f_{i+1}) \in H_S$ and $r(e_{i+1}) = v_{i+1}$, it must be the case that $\alpha$ has the form $\alpha = \alpha' e_1e_2e_3 \ldots$ for some finite path $\alpha'$ in $F$. Consequently, $\phi_\infty^{-1}(\alpha)$ is a finite path in $E$ ending at $v_0$, and $\gamma = \lambda_{v_0}$. Now let $X := \{ e \in E^1 \, | \, s(e)=v_0 \text{ and } r(e) \geq v_0\}$. Note that if $s(e) = v_0$ and $r(e) \geq v_0$, then $r(e) \notin H$ since $H$ is hereditary. Because $v_0 \in B_H$ it follows that we must have $| X | < \infty$. Furthermore, since $v_0 \in B_H$ there exists $e \in E^1$ with $s(e)=v_0$ and $r(e) \notin H$. But then $r(e) \in \gamma$ and $r(e) \geq \phi_\infty^{-1}(\alpha)$ and hence $r(e) \geq v_0$. Thus $| X | > 0$, and by definition $v_0$ is a breaking vertex. All that remains is to show that $S = B_H \setminus \{v_0\}$. Let us suppose that $w_0 \in B_H$. If $w_0 \notin S$, then $T_{w_0} \subseteq F^0 \setminus H_S = \{ w \in F^0 \, | \, w \geq \alpha \}$. But because the $w_i$'s for $i \geq N_{w_0}$ can only reach elements of $H$ and $T_{w_0}$, the only way to have $w_i \geq \alpha = \alpha' e_1e_2 \ldots$ for all $i$ is if we have $w_0 = v_0$. Hence $v_0$ is the only element of $B_H \setminus S$ and $S = B_H \setminus \{v_0\}$. Thus we have established all of the claims in part~(2). For the converse, let $E^0 \setminus H$ be a maximal tail. Consider the following two cases. \begin{enumerate} \item[]{\bf Case I:} $S = B_H$. We shall show that $F^0 \setminus H_S$ is a maximal tail in $F$.
Since $H_S$ is a saturated hereditary subset of $F^0$, the set $F^0 \setminus H_S$ certainly satisfies (b) and (c) in the definition of maximal tail. We shall prove that (a) also holds. Let $w_1,w_2 \in F^0 \setminus H_S$. If it is the case that $w_1,w_2 \in E^0$, then we must also have $w_1,w_2 \in E^0 \setminus H$, and hence there exists $z \in E^0 \setminus H$ such that $w_1 \geq z$ and $w_2 \geq z$ in $E$. But then $z \in F^0 \setminus H_S$ and $w_1 \geq z$ and $w_2 \geq z$ in $F$. On the other hand, if one of the $w_i$'s is not in $E^0$, then it must be on an infinite tail $T_{v_0}$. Because $w_i \notin H_S$ and $S = B_H$, we must have $w_i \geq z$ for some $z \in E^0 \setminus H$. Thus we can replace $w_i$ with $z$ and reduce to the case when $w_i \in E^0$. Hence $F^0 \setminus H_S$ also satisfies (a) and is a maximal tail. Consequently, $I_{H_S}$ is a primitive ideal by \cite[Proposition~6.1]{bprs}, and $I_{(H,S)}$ is a primitive ideal by Corollary~\ref{cor-prim}. \item[]{\bf Case II:} $E^0 \setminus H = \lambda_{v_0}$ for some breaking vertex $v_0$ and $S = B_H \setminus \{ v_0 \}$. As in Case I, it suffices to show that $F^0 \setminus H_S$ satisfies (a) in the definition of maximal tail. To see this, let $w \in F^0 \setminus H_S$. If $w \in E^0$, then we must have $w \in E^0 \setminus H = \lambda_{v_0}$ and $w \geq v_0$. If $w \notin E^0$, then $w$ must be on one of the added tails in $F$. Since $S = B_H \setminus \{v_0 \}$ we must have that $w$ is an element on $T_{v_0} = \{ v_0, v_1, v_2, \ldots \}$. In either case we see that $w$ can reach an element of $T_{v_0}$ in $F$. Consequently, $F^0 \setminus H_S \geq T_{v_0}$ and $F^0 \setminus H_S$ clearly satisfies (a). \end{enumerate} \end{proof} \begin{defn} \label{def-phi-E} Let $E$ be a graph that satisfies Condition~(K). We define a map $\phi_E : \Xi_E \rightarrow \hbox{Prim\,} C^*(E)$ as follows. 
For $\gamma \in \Lambda_E$ let $H(\gamma) := E^0 \setminus \gamma$ and define $\phi_E(\gamma) := I_{(H(\gamma), B_{H(\gamma)})}$. For $v_0 \in BV(E)$ we define $\phi_E(v_0) := I_{(H(\lambda_{v_0}), B_{H(\lambda_{v_0})} \setminus \{ v_0 \} ) }$. The previous theorem shows that $\phi_E$ is a bijection. \end{defn} We now wish to define a topology on $\Xi_E$ that will make $\phi_E$ a homeomorphism. As usual our strategy will be to translate the problem to a desingularized graph and make use of the corresponding results in \cite{bprs}. In particular, if $E$ is any graph and $F$ is a desingularization of $E$, then we have the following picture: $$ \xymatrix{ \Xi_E \ar[d]_{\phi_E} \ar@{.>}[r]^{h} & \Xi_F \ar[d]^{\phi_F}\\ \hbox{Prim\,} C^*(E) \ar[r]^{\psi} & \hbox{Prim\,} C^*(F), } $$ where $\psi$ is the Rieffel correspondence restricted to the primitive ideal space. If we use the topology on $\Xi_F = \Lambda_F$ defined in \cite[Theorem~6.3]{bprs}, then $\phi_F$ is a homeomorphism. To define a topology on $\Xi_E$ that makes $\phi_E$ a homeomorphism we will simply use the composition $h := \phi_F^{-1} \circ \psi \circ \phi_E$ to pull the topology on $\Xi_F$ back to a topology on $\Xi_E$. We start with a proposition that describes the map $h$. \begin{prop} \label{prop-riefmaxtail} Let $E$ be a graph satisfying Condition~(K) and let $F$ be a desingularization of $E$. \begin{enumerate} \item If $\alpha \in E^\infty \cup \{\alpha \in E^* \,|\, r(\alpha) \hbox{ is a singular vertex}\}$ and $\gamma = \{v \in E^0 \,|\, v \geq \alpha\} \in \Lambda_E$, then $h(\gamma) = \{v \in F^0 \,|\, v \geq \phi_{\infty}(\alpha)\}$. \item If $v_0$ is a breaking vertex, then $h(v_0) = \{ v \in F^0 \, | \, v \geq e_1e_2 \ldots \}$, where $e_1e_2 \ldots$ is the path on the tail added to $v_0$. \end{enumerate} \end{prop} \begin{proof} To prove part (1), let $H := E^0 \setminus \gamma$ and $S := B_H$. 
Then using Proposition~\ref{prop-HSHS} we have $h(\gamma) = \phi_F^{-1} \circ \psi \circ \phi_E (\gamma) = \phi_F^{-1} \circ \psi (I_{(H,S)}) = \phi_F^{-1}(I_{H_S}) = F^0 \setminus H_S$. We shall show that $F^0 \setminus H_S = \{v \in F^0 \,|\, v \geq \phi_{\infty}(\alpha)\}$. To begin, if $v \in E^0$ then $$ v \in F^0 \setminus H_S \Longleftrightarrow v \in E^0 \setminus H \Longleftrightarrow v \in \gamma \Longleftrightarrow v \geq \alpha \text{ in } E \Longleftrightarrow v \geq \phi_{\infty}(\alpha) \text{ in } F, $$ where the last step follows from Lemma~\ref{lem-correspondence}. On the other hand, suppose $v \in F^0 \setminus E^0$. Then since $S=B_H$ every vertex $v \in F^0 \setminus H_S$ must connect to some vertex $w \in E^0 \setminus H$. So we may replace $v$ with $w$ and repeat the above argument. Thus we have proven (1). For part (2), let $v_0$ be a breaking vertex and set $\lambda_{v_0} := \{ w \in E^0 \, | \, w \geq v_0 \}$, $H := E^0 \setminus \lambda_{v_0}$, and $S := B_H \setminus \{ v_0 \}$. Then $h(v_0) = \phi_F^{-1} \circ \psi \circ \phi_E (v_0) = \phi_F^{-1} \circ \psi (I_{(H,S)}) = \phi_F^{-1}(I_{H_S}) = F^0 \setminus H_S$. An argument similar to the one above shows that $F^0 \setminus H_S = \{ v \in F^0 \, | \, v \geq e_1e_2\ldots \}$. \end{proof} \begin{defn} \label{defn-reach} Let $E$ be a graph and let $S \subseteq E^0$. If $\gamma$ is a maximal tail, then we write $\gamma \rightarrow S$ if $\gamma \geq S$. If $v_0$ is a breaking vertex in $E$, then we write $v_0 \rightarrow S$ if the set $\{e \in E^1 \,|\, s(e) = v_0, r(e) \geq S\}$ contains infinitely many elements. \end{defn} \begin{lem} \label{lem-reach} Let $\delta \in \Xi_E$ and let $P \subseteq \Xi_E$. Then $\delta \rightarrow \bigcup_{\lambda \in P} \lambda$ in $E$ if and only if $h(\delta) \geq \bigcup_{\lambda \in P} h(\lambda)$ in $F$.
\end{lem} \begin{proof} If $\delta$ is a maximal tail, then from Lemma~\ref{tailpath} we have $\delta = \{ v \in E^0 \, | \, v \geq \alpha \}$ for some $\alpha \in E^\infty \cup \{\alpha \in E^* \,|\, r(\alpha) \hbox{ is a singular vertex}\}$. Similarly, for each $\lambda \in P \cap \Lambda_E$ we may write $\lambda = \{ v \in E^0 \, | \, v \geq \alpha^\lambda \}$ for some $\alpha^\lambda \in E^\infty \cup \{\alpha \in E^* \,|\, r(\alpha) \hbox{ is a singular vertex}\}$. Now \begin{align*} & \delta \rightarrow \bigcup_{\lambda \in P} \lambda \\ \Longleftrightarrow& \alpha \geq \bigcup_{\lambda \in P \cap \Lambda_E} \{ r(\alpha^\lambda_i) \}_{i=1}^{|\alpha^\lambda |} \cup \bigcup_{v_0 \in P \cap BV(E)} v_0 \\ \Longleftrightarrow& \phi_\infty(\alpha) \geq \bigcup_{\lambda \in P \cap \Lambda_E} \{ r(\phi_\infty(\alpha^\lambda)_i) \}_{i=1}^{|\alpha^\lambda|} \cup \bigcup_{v_0 \in P \cap BV(E)} \phi_\infty(v_0) \\ \Longleftrightarrow& \{ v \in F^0 \, | \, v \geq \phi_\infty(\alpha) \} \geq \bigcup_{\lambda \in P \cap \Lambda_E} \{ v \in F^0 \, | \, v \geq \phi_\infty(\alpha^\lambda) \} \cup \bigcup_{v_0 \in P \cap BV(E)} \{ v \in F^0 \, | \, v \geq e_1^{v_0}e_2^{v_0}\ldots \} \\ \Longleftrightarrow& h(\delta) \geq \bigcup_{\lambda \in P} h (\lambda) \end{align*} \noindent So the claim holds when $\delta$ is a maximal tail. Now let us consider the case when $\delta = v_0$ is a breaking vertex. It follows from Proposition~\ref{prop-riefmaxtail} that $h(v_0) = \{ v \in F^0 \, | \, v \geq e_1e_2\ldots \}$, where $e_1e_2\ldots$ is the path on the tail added to $v_0$. Now suppose that $v_0 \rightarrow \bigcup_{\lambda \in P} \lambda$. Fix $v \in h(\delta)$. Note that either $v \geq v_0$ in $F$ or $v$ is on the infinite tail added to $v_0$ in $F$. Because $v_0 \rightarrow \bigcup_{\lambda \in P} \lambda$, there are infinitely many edges in $E$ from $v_0$ to vertices that connect to $\bigcup_{\lambda \in P} \lambda$.
Thus no matter how far out on the tail $v$ happens to be, there must be an edge in $F$ whose source is a vertex further out on the tail than $v$ and whose range is a vertex that connects to a vertex $w \in \lambda$ for some $\lambda \in P$. Since $w \in \lambda$ we must have $w \in h(\lambda)$ and thus $v \geq \bigcup_{\lambda \in P} h(\lambda)$. Now assume that $h(v_0) \geq \bigcup_{\lambda \in P} h(\lambda)$. Then every vertex on the infinite tail attached to $v_0$ connects to a vertex in $\bigcup_{\lambda \in P} h(\lambda)$. In fact it is true that every vertex on the infinite tail attached to $v_0$ connects to a vertex in $\bigcup_{\lambda \in P} h(\lambda) \cap E^0$, which implies that every vertex on the infinite tail connects to a vertex in $\bigcup_{\lambda \in P} \lambda$. But this implies that there must be infinitely many edges from $v_0$ to vertices that connect to $\bigcup_{\lambda \in P} \lambda$. Thus $v_0 \rightarrow \bigcup_{\lambda \in P} \lambda$. \end{proof} \begin{thm} \label{thm-topology} Let $E$ be a graph satisfying Condition~(K). Then there is a topology on $\Xi_E$ such that for $S \subseteq \Xi_E$, $$ \overline{S} := \{\delta \in \Xi_E \,|\, \delta \rightarrow \bigcup_{\lambda \in S} \lambda\}, $$ and the map $\phi_E$ given in Definition~\ref{def-phi-E} is a homeomorphism from $\Xi_E$ onto $\hbox{Prim\,} C^*(E)$. \end{thm} \begin{proof} Since $h$ is a bijection, we may use $h$ to pull the topology defined on $\Xi_F = \Lambda_F$ in \cite[Theorem 6.3]{bprs} back to a topology on $\Xi_E$. Specifically, if $S \subseteq \Xi_E$ then $S=h^{-1}(P)$ for some $P \subseteq \Xi_F$ and we define $\overline{S} := h^{-1}(\overline{P})$. But from Lemma \ref{lem-reach} we see that this is equivalent to defining $\overline{S} = \{\delta \in \Xi_E \,|\, \delta \rightarrow \bigcup_{\lambda \in S} \lambda\}$. Now with this topology, $h$, and consequently $\phi_E$, is a homeomorphism.
\end{proof} \section{Concluding Remarks} \label{sec-conc} When we defined a desingularization of a graph in Section~\ref{sec-desing}, for each singular vertex $v_0$ we chose an ordering of the edges in $s^{-1}(v_0)$ and then redistributed these edges along the added tail in such a way that every vertex on the tail was the source of exactly one of these edges. Another way we could have defined a desingularization would be to instead redistribute a finite number of edges to each vertex on the added tail. Thus if $v_0$ is a singular vertex, we could choose a partition of $s^{-1}(v_0)$ into a countable collection $S_0^{v_0} , S_1^{v_0} , S_2^{v_0} , \ldots$ of finite (or empty) disjoint sets. Having done this, we add a tail to $E$ by first adding a graph of the form \begin{equation*} \xymatrix{ v_0 \ar[r]^{e_1} & v_1 \ar[r]^{e_2} & v_2 \ar[r]^{e_3} & v_3 \ar[r]^{e_4} & \cdots\\ } \end{equation*} We then remove the edges in $s^{-1}(v_0)$ and for each $i$ and each $g \in S_i^{v_0}$ we draw an edge from $v_i$ to $r(g)$. More formally, if the elements of $S_i^{v_0}$ are listed as $\{g_i^1, g_i^2, \ldots, g_i^{m_i} \}$, we define $F^0 := E^0 \cup \{v_i \}_{i=1}^\infty$, $F^1 := (E^1 \setminus s^{-1}(v_0)) \cup \{e_i \}_{i=1}^\infty \cup \{f_i^j \, | \, 1 \leq i < \infty \text{ and } 1 \leq j \leq m_i \}$, and extend $r$ and $s$ by $s(e_i) = v_{i-1}$, $r(e_i) = v_i$, $s(f_i^j) = v_i$, and $r(f_i^j) = r(g_i^j)$. If we add tails in this manner, then we can define a desingularization of $E$ to be the graph $F$ formed by adding a tail to each singular vertex in $E$. Here a choice of partition $S_0^{v_0} , S_1^{v_0} , S_2^{v_0} , \ldots$ must be made for each singular vertex, and different choices will sometimes produce nonisomorphic graphs. With this slightly more general definition of desingularization, all of the results of this paper still hold and the proofs of those results remain essentially the same.
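For readers who prefer an algorithmic view, the modified construction can be sketched in a few lines of Python. This is our own illustration, not part of the paper: the function name and graph encoding are ad hoc, and only finitely many blocks of the partition are processed, whereas the tail described above is infinite.

```python
# Sketch: generalized desingularization at one singular vertex v0.
# A graph is encoded as (E0, E1, r, s), with r and s dicts from edge
# names to vertices; `partition` lists the finite blocks S_0, S_1, ...
# of s^{-1}(v0).  Edges in S_i are re-sourced at the tail vertex v_i.

def add_partitioned_tail(E0, E1, r, s, v0, partition):
    F0, F1, rF, sF = set(E0), set(E1), dict(r), dict(s)
    for e in [e for e in E1 if s[e] == v0]:   # remove the edges leaving v0
        F1.discard(e); rF.pop(e); sF.pop(e)
    prev = v0                                  # current tail vertex v_i
    for i, S_i in enumerate(partition):
        for j, g in enumerate(sorted(S_i)):    # f_i^j : v_i -> r(g)
            f = ('f', v0, i, j)
            F1.add(f); sF[f], rF[f] = prev, r[g]
        vi = ('v', v0, i + 1)                  # next tail vertex v_{i+1}
        e = ('e', v0, i + 1)                   # tail edge e_{i+1}
        F0.add(vi); F1.add(e)
        sF[e], rF[e] = prev, vi
        prev = vi
    return F0, F1, rF, sF

# v0 emits three edges a, b, c to w; partition them as S_0={a}, S_1={b,c}.
F0, F1, rF, sF = add_partitioned_tail(
    {'v0', 'w'}, {'a', 'b', 'c'},
    {'a': 'w', 'b': 'w', 'c': 'w'}, {'a': 'v0', 'b': 'v0', 'c': 'v0'},
    'v0', [{'a'}, {'b', 'c'}])
```

In the toy call, the edge $a$ keeps source $v_0$ (it lies in $S_0$), while $b$ and $c$ are re-sourced at the first tail vertex $v_1$, exactly as in the construction above.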
We avoided using this broader definition only because the partitioning and the use of double subscripts in the $f_i^j$'s create very cumbersome notation, and we were afraid that this would obscure the main points of this article. However, we conclude by mentioning this more general method of desingularization because we believe that in practice there may be situations in which it is convenient to use it. For example, if $H$ is a saturated hereditary subset of $E^0$, then for each $v_0 \in B_H$ one may wish to choose a partition of $s^{-1}(v_0)$ with $S_0^{v_0} := \{ e \in E^1 \, | \, s(e) = v_0 \text{ and } r(e) \notin H \}$. Then a desingularization created using this partition will have the property that every vertex on a tail added to $v_0$ will point only to the next vertex on the tail and elements of $H$. \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \vspace*{.1in} \end{document}
\begin{document} \title{Asymptotics of local genus distributions and the genus distribution of the complete graph} \begin{abstract} We are interested in $2$-cell embeddings of graphs on orientable surfaces. The distribution of genus across all the embeddings of a graph is only known explicitly for a few graphs. In order to study these genus distributions, [Gross et al., European J. Combin. 2016] introduced local genus distributions. These describe how the faces are distributed around a single vertex across all the embeddings of a graph. Not much is known about the local or non-local genus distributions for general graphs. [F{\'e}ray, 2015] noted that it is possible to study local genus distributions by studying the product of conjugacy classes $C_n C_\lambda$. We therefore start by showing a central limit theorem on this product. We show that the difference between $C_n C_\lambda$ and the uniform distribution on all even/odd permutations is at most the number of fixed points in $\lambda$ plus one, divided by $\sqrt{n-1}$. This can be thought of as an analogue of a result of [Chmutov and Pittel, Adv. Appl. Math. 2016], who show a similar result for the product $C_{2^n} C_\lambda$. We use this to show that any graph with large vertex degrees and a small average number of faces has an asymptotically uniform local genus distribution at each of its vertices. Then, we study the whole genus distribution of the complete graph. We show that a portion of the complete graph of size $(1-o(1))|K_n|$ has the same genus distribution as the set of all permutations, up to parity. \end{abstract} \section{Introduction} We are interested in $2$-cell embeddings of graphs on orientable surfaces. These embeddings are well-known to be in bijection with local rotations, which are cyclic orderings of the half-edges at each vertex. The genus of an embedding is defined as the genus of the surface it is embedded on.
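Concretely, once the number of faces of a $2$-cell embedding is known, the genus follows from Euler's formula $V - E + F = 2 - 2g$. The following one-function Python helper is our own illustration of this standard fact:

```python
def genus(V, E, F):
    """Genus of an orientable 2-cell embedding via V - E + F = 2 - 2g."""
    chi = V - E + F
    # chi must be an even integer at most 2 for an orientable surface
    assert chi % 2 == 0 and chi <= 2, "not a valid orientable 2-cell embedding"
    return (2 - chi) // 2

# K_4 has 4 vertices and 6 edges; an embedding with 4 faces is planar,
# one with 2 faces lies on the torus.
print(genus(4, 6, 4), genus(4, 6, 2))   # 0 and 1
```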
A classical problem is to determine the minimum and maximum genus over all embeddings of a graph. For general graphs, this problem is very difficult: Thomassen \cite{thomassen1989graph} showed that determining the minimum genus of a graph is NP-complete. There are some fixed classes of graphs for which the minimum and maximum genus is known, but there will usually be many embeddings with a genus between these two values. Therefore, given a graph $G$ and a positive integer $g$, one could ask for the number of embeddings of $G$ of genus $g$. This question has only been answered for families of graphs with especially nice structure; see \cite{gross2018calculating} and the references therein. One focus of this paper will be the embeddings of the complete graph $K_n$. The maximum genus of $K_n$ was shown by Nordhaus, Stewart and White \cite{nordhaus1971maximum} to be $\lfloor (n-1)(n-2)/4 \rfloor$. The minimum genus of $K_n$ was shown by Ringel \cite{ringel2012map} to be $\lceil (n-3)(n-4)/12 \rceil$. However, the distribution of genus across all embeddings of $K_n$ is still unknown. Even for small values of $n$ it proves very difficult, as $K_n$ has $((n-2)!)^n$ embeddings. The distribution of genus across all embeddings of $K_7$ is calculated in \cite{beyer2016practical} and explicitly given in \cite{gross2018calculating}. It is not known for $n \geq 8$, due to the huge number of embeddings which would need to be generated. Due to the complexity of the problem, it is perhaps unrealistic to expect a `nice' formula for the number of embeddings of $K_n$ on a surface of genus $g$. Therefore an approach previously used for this problem is a probabilistic one. This approach was termed \emph{random topological graph theory} by White \cite{white1994introduction}. Stahl \cite{stahl1995average} gave a lower bound for the average genus across all the embeddings of $K_n$, and this was recently improved in \cite{loth2022randomcomplete}.
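To give a sense of scale, the quantities just mentioned are easy to tabulate. The snippet below is our own illustration, written with the closed forms $\lceil (n-3)(n-4)/12 \rceil$ and $\lfloor (n-1)(n-2)/4 \rfloor$ for the minimum and maximum genus; it shows why exhaustive generation is already hopeless for $n = 8$:

```python
from math import factorial

def num_embeddings(n):
    # each vertex of K_n has degree n - 1, hence (n - 2)! cyclic orderings
    return factorial(n - 2) ** n

def min_genus(n):
    # Ringel's formula, written with integer arithmetic (a ceiling)
    return ((n - 3) * (n - 4) + 11) // 12

def max_genus(n):
    # Nordhaus-Stewart-White formula (a floor)
    return (n - 1) * (n - 2) // 4

for n in (4, 7, 8):
    print(n, num_embeddings(n), min_genus(n), max_genus(n))
# K_7 already has 120^7 (about 3.6 * 10^14) embeddings.
```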
Another approach is to look at the local distribution of faces around a single vertex in the graph, across all of its embeddings. This was introduced as the local genus polynomial by Gross et al.\ \cite{gross2016combinatorial}. It was noted by F{\'e}ray \cite{feray2015combinatorial} that this local genus polynomial can be studied by using conjugacy class products. This fact is used in \cite{loth2021random} to estimate the average genus of various graphs. It is the purpose of this paper to study the asymptotics of local and non-local genus distributions of graphs. We will study the distribution of faces, as this is equivalent by Euler's formula. We will show that under certain conditions, both of these distributions satisfy a central limit theorem. We begin by proving a more algebraic theorem on products of conjugacy classes in the symmetric group, which is of independent interest. The following is a restatement of Theorem \ref{thm:asymptoticunif}. \begin{theorem} The difference between the distribution of the product of conjugacy classes $C_n C_\lambda$ and the uniform distribution on all even/odd permutations is at most $\frac{\max\{1,\ell\}}{2\sqrt{n-1}}$, where $\ell$ is the number of fixed points in $\lambda$. \end{theorem} We use this to study the local distribution of faces at a fixed vertex across all embeddings of a graph. In particular we show that for any graph with a small average number of faces across all of its embeddings, the local distribution of faces at a vertex with large degree satisfies a central limit theorem. The following is a less general statement which follows immediately from Theorem \ref{thm:localunif}. \begin{theorem} Let $v$ be a vertex in a graph $G$. Let $F$ be the random variable for the number of faces in a randomly chosen embedding of $G-\{v\}$. Then if $\E[F] = o(\sqrt{\deg(v)})$, the local face distribution at vertex $v$ is asymptotically uniform over all permutations on the half-edges incident with $v$, up to parity.
\end{theorem} In the case of the complete graph $K_n$, we show that for any vertex $v$ the portion of $K_n$ made up of the faces which contain at least one half-edge at $v$ has size $(1-o(1))|K_n|$. These two results combine to give Theorem \ref{thm:knuniform}. This says that a part of the complete graph of size $(1-o(1))|K_n|$ has embeddings with a number of faces in the same distribution as all permutations, up to parity. More precisely, the following statement follows from Theorem \ref{thm:knuniform}. \begin{theorem} Choose an embedding of $K_n$ uniformly at random. Then there is a canonical way to discard an $o(1)$-proportion of this embedding, giving a permutation $\alpha$ describing the faces of what is left. The cycle distribution of $\alpha$ is asymptotic to the cycle distribution of all permutations, up to parity. \end{theorem} Our methods are a mixture of probabilistic and algebraic techniques. We make use of recent bounds \cite{loth2022randomcomplete} on the average genus of a randomly chosen embedding of $K_n$, and techniques for calculating this average genus in general \cite{loth2021random}. We also make use of some algebraic machinery: representations of the symmetric group. We prove a new, simple bound for hook-shaped characters, and combine it with a general technique for calculating differences between probability distributions \cite{chmutov2016surface, gamburd2006poisson}. The representation theory is self-contained in Section \ref{sec:chartheory}. The reader who is mainly interested in conjugacy class products may refer to this section, while the reader more interested in graph theory may skip this section without any large loss of understanding. \section{Random embeddings of graphs} Let $G$ be a simple graph on $n$ vertices. For a set $X$, we write $S_X$ for the symmetric group on $X$. A combinatorial map is a triple $m = (D,R,E)$, where $D$ is a set, $R,E \in S_D$ and $E$ is a fixed point free involution.
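As a quick illustration (ours, not the paper's), a combinatorial map can be stored as a pair of dictionaries, with the involution condition on $E$ checked directly; the cycles of the product $R \cdot E$, composed left to right, are the faces of the embedding, as explained below.

```python
# Toy combinatorial map: permutations of the dart set D stored as dicts.
# R has one cycle per vertex; E pairs darts into edges.

def cycles(p):
    """Cycles of a permutation p given as a dict."""
    seen, out = set(), []
    for start in p:
        if start not in seen:
            cyc, x = [], start
            while x not in seen:
                seen.add(x); cyc.append(x); x = p[x]
            out.append(tuple(cyc))
    return out

D = [1, 2, 3, 4]
R = {1: 2, 2: 1, 3: 4, 4: 3}   # two vertices: cycles (1 2) and (3 4)
E = {1: 3, 3: 1, 2: 4, 4: 2}   # two edges: {1, 3} and {2, 4}

# E must be a fixed-point-free involution.
assert all(E[d] != d and E[E[d]] == d for d in D)

faces = cycles({d: E[R[d]] for d in D})   # R . E, composed left to right
print(faces)                              # two faces; V - E + F = 2 - 2 + 2
```

This toy map is two vertices joined by a pair of parallel edges, embedded on the sphere with two digon faces.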
It is well known that any map will give an embedding of some graph on a surface. In this context, the darts in $D$ are the half-edges of the graph and each cycle of $R$ is a cyclic ordering of the half-edges at a vertex. The permutation $E$ then describes how the half-edges are joined together to make the edges of the graph. We call $R \cdot E$ the face permutation of $m$, and call the cycles of this permutation faces. We start by defining how we can describe all the embeddings of $G$ as maps. Suppose $G$ has vertices $v_1, \dots, v_n$ and edges $E(G)$. Let $D(G) := D_1(G) \cup D_2(G) \cup \dots \cup D_n(G)$ be a set of darts, with $| D_i(G) | = \deg(v_i)$ for each $i$. The darts in $D_i(G)$ will correspond to the half-edges at vertex $v_i$ in $G$. We define two sets of permutations $\mathcal{R}(G), \mathcal{E}(G)$ as follows: \begin{itemize} \item $\mathcal{R}(G)$ is the set of all \emph{local rotations}. This is the set of all permutations in $S_D$ with $n$ cycles, labelled $\pi_1, \dots, \pi_n$, with each cycle $\pi_i$ on the symbols $D_i(G)$. \item $\mathcal{E}(G)$ is the set of all \emph{edge schemes}. This is the set of all fixed point free involutions in $S_D$, where for each $(v_i,v_j) \in E(G)$ we have exactly one $2$-cycle containing a dart from $D_i(G)$ and a dart from $D_j(G)$. \end{itemize} To lighten notation, we omit the $G$ when it is clear from the context. It is clear that each map $m := (D,R,E)$, where $R \in \mathcal{R}$ and $E \in \mathcal{E}$, gives an embedding of $G$; see \cite{mohar2001graphs, lando2004graphs} for details. In fact, the set $\{ (D,R,E) : R \in \mathcal{R} \}$ (where we fix any edge scheme) is in bijection with the set of all embeddings of $G$ given by local rotations. Therefore each possible embedding of $G$ appears $| \mathcal{E} |$ times in $\{ (D,R,E) : R \in \mathcal{R}, E \in \mathcal{E} \}$. We choose to study this set where we vary the edge scheme instead of fixing $E$, as it will prove convenient in our analysis.
This explains the following observation: \begin{observation} Let $\mathcal{M}(G) := \{ (D,R,E) : R \in \mathcal{R}(G), E \in \mathcal{E}(G) \}$. Choosing an $m \in \mathcal{M}(G)$ uniformly at random gives a randomly chosen embedding of $G$. \end{observation} \begin{figure} \caption{Two embeddings of $K_4$.} \label{fig:firstexample} \end{figure} \begin{example} We give two examples of embeddings of $K_4$, the complete graph on $4$ vertices, in Figure \ref{fig:firstexample}. The example on the left is a planar embedding. The example on the right is an embedding on the torus, but we represent it with a drawing on the plane with edges crossing. Both embeddings are given by maps where the set of darts $D$ is: $$ D = D_1 \cup D_2 \cup D_3 \cup D_4 = \{1,2,3\} \cup \{4,5,6\} \cup \{7,8,9\} \cup \{10,11,12\}. $$ The planar embedding has: \begin{align*} R &= (2\,1\,3)(4\,6\,5)(7\,8\,9)(10\,11\,12) \\ E &= (1\,7)(2\,4)(3\,12)(5\,8)(6\,11)(9\,10) \\ R \cdot E &= (1\,12\,9)(2\,7\,5)(3\,4\,11)(6\,8\,10) \end{align*} The embedding on the torus has: \begin{align*} R &= (1\,2\,3)(4\,6\,5)(7\,8\,9)(10\,11\,12) \\ E &= (1\,4)(2\,9)(3\,10)(5\,11)(6\,7)(8\,12) \\ R \cdot E &= (1\,9\,6\,11\,8\,2\,10\,5)(3\,4\,7\,12) \end{align*} \end{example} We now define two non-standard permutations associated with an embedding $m \in \mathcal{M}(G)$. \begin{itemize} \item Fix any set $X$, $\alpha \in S_X$, and $Y \subseteq X$. Define the induced permutation of $\alpha$ on $Y$ by deleting all symbols not contained in $Y$ from $\alpha$, then removing all empty cycles. \item Let $\omega_i(m) \in S_{D_i}$ be the induced permutation of $R \cdot E$ on $D_i$. We call the distribution of $\omega_i(m)$ over all $m \in \mathcal{M}(G)$ the \emph{local face distribution} of the embedding at the vertex $v_i$. \item Let $R- \pi_i$ be the permutation obtained from $R$ by replacing the cycle $\pi_i$ with $n-1$ fixed points. Let $\sigma_i(m) \in S_{D_i}$ be the induced permutation of $(R-\pi_i) \cdot E$ on $D_i$. 
This is the permutation on the darts in $D_i$ given by $m$ if we split the vertex $v_i$ into $n-1$ vertices with one half-edge incident with each, as shown in Figure \ref{fig:addingvertextofirstexample}. \end{itemize} When a map $m \in \mathcal{M}$ is fixed, we write $\omega_i = \omega_i(m), \sigma_i = \sigma_i(m)$. We note that our definition of the local face distribution is similar to the local genus polynomial introduced by Gross et al.\ \cite{gross2016combinatorial}. The difference is that their version fixes the value of $R - \pi_i$, whereas we let it run over all possibilities. We illustrate these new concepts by continuing the previous example. Consider $v_1$ in the embedding of $K_4$ on the torus given in Figure \ref{fig:firstexample}. Here we have: \begin{align*} \omega_1 &= (1\,2)(3) \\ (R-\pi_1)\cdot E &= (4\,6\,5)(7\,8\,9)(10\,11\,12) \cdot (1\,4)(2\,9)(3\,10)(5\,11)(6\,7)(8\,12)\\ &= (1\,4\,7\,12\,3\,10\,5)(2\,9\,6\,11\,8) \\ \sigma_1 &= (1\,3)(2) \end{align*} \begin{figure} \caption{Continuing from Example 1, we give a pictorial explanation of why $\pi_1 \cdot \sigma_1 = \omega_1$.} \label{fig:addingvertextofirstexample} \end{figure} Notice that in this example we have that: $$ \pi_1 \cdot \sigma_1 = (1\,2\,3) \cdot (1\,3)(2) = (1\,2)(3) = \omega_1 $$ Figure \ref{fig:addingvertextofirstexample} gives a diagram showing the intuition behind this product of permutations. We continue by showing that this correspondence between $\pi_i, \sigma_i$ and $\omega_i$ is always true. \begin{lemma} \label{lem:sigmaomegarelation} Fix a vertex $v_i$ in an embedding $m \in \mathcal{M}$. Then $\pi_i \sigma_i = \omega_i$. \end{lemma} \begin{proof} This correspondence is also outlined in \cite{loth2021random} using somewhat different notation. First we have that: $$ R \cdot E = \pi_i \cdot (R-\pi_i) \cdot E $$ Therefore the induced permutation of both sides on $D_i$ is the same. The induced permutation of $R \cdot E$ was defined as $\omega_i$.
The induced permutation of $(R-\pi_i) \cdot E$ is $\sigma_i$. Since $\pi_i$, when viewed as a permutation in $S_D$, is just a single cycle only supported on $D_i$, the induced permutation of $\pi_i \cdot (R-\pi_i) \cdot E$ on $D_i$ is $\pi_i \sigma_i$. Therefore $\pi_i \sigma_i = \omega_i$ as required. \end{proof} Our first main result is a central limit theorem on the local face distribution of certain $G$, which we defined as the distribution of $\omega_i(m)$ over all $m \in \mathcal{M}(G)$. By Lemma \ref{lem:sigmaomegarelation}, in order to study $\omega_i$ we can study the distributions of $\pi_i$ and $\sigma_i$. By definition, $\pi_i$ varies over all possible full cycles on the symbols in $D_i$. We therefore continue with an observation on the distribution of $\sigma_i(m)$ across all $m \in \mathcal{M}$. Let $C_\lambda$ denote the conjugacy class of cycle type $\lambda$ on a symmetric group $S_X$. \begin{lemma} \label{lem:localcentral} For any $G$ and vertex $v_i$, the multiset $\{ \sigma_i(m) : m \in \mathcal{M}(G) \}$ is central. That is, for each conjugacy class of $S_{D_i}$, it visits all the elements of that class the same number of times. \end{lemma} \begin{proof} It is sufficient to show that the set of all possible $\sigma_i$ is closed under conjugation by any permutation in $S_{D_i}$. Fix some embedding $m=(D,R,E)\in \mathcal{M}$, and take any $\tau \in S_{D_i}$. Then since $R-\pi_i$ has all the symbols in $D_i$ as fixed points, we have: $$ (R-\pi_i) \cdot \tau E \tau^{-1} = \tau (R-\pi_i) \cdot E \tau^{-1} $$ Therefore the embedding given by $m' = (D,R,\tau E \tau^{-1})$ has $\sigma_i(m') = \tau \sigma_i(m) \tau^{-1}$. \end{proof} Let $d := \deg(v_i)$.
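The identity $\pi_i \sigma_i = \omega_i$ can also be checked mechanically. The Python sketch below (our illustration, with permutations stored as dicts and composed left to right) verifies Lemma \ref{lem:sigmaomegarelation} at $v_1$ for the torus embedding of $K_4$ from the example above:

```python
def perm(cycs, domain):
    """Permutation (as a dict) with the given cycles, fixing the rest."""
    p = {d: d for d in domain}
    for c in cycs:
        for a, b in zip(c, c[1:] + c[:1]):
            p[a] = b
    return p

def compose(p, q):
    """Left-to-right product p . q : apply p first, then q."""
    return {d: q[p[d]] for d in p}

def induced(p, Y):
    """Induced permutation of p on Y: skip symbols outside Y."""
    out = {}
    for d in Y:
        x = p[d]
        while x not in Y:
            x = p[x]
        out[d] = x
    return out

D = range(1, 13)
R = perm([(1, 2, 3), (4, 6, 5), (7, 8, 9), (10, 11, 12)], D)
E = perm([(1, 4), (2, 9), (3, 10), (5, 11), (6, 7), (8, 12)], D)
D1 = {1, 2, 3}

omega1 = induced(compose(R, E), D1)                  # = (1 2)(3)
R1 = perm([(4, 6, 5), (7, 8, 9), (10, 11, 12)], D)   # R - pi_1
sigma1 = induced(compose(R1, E), D1)                 # = (1 3)(2)
pi1 = perm([(1, 2, 3)], D1)
assert compose(pi1, sigma1) == omega1                # pi_1 sigma_1 = omega_1
```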
By Lemma \ref{lem:localcentral} the multiset $\{ \omega_i(m) : m \in \mathcal{M} \}$ can be expressed as: \begin{align*} &\cup_{\lambda \vdash d} a_\lambda C_d C_\lambda \\ &\text{where } C_d C_\lambda := \{ \pi \cdot \sigma : \pi \in C_d, \sigma \in C_\lambda \} \end{align*} for some constants $a_\lambda$. Therefore in order to continue our study of local face distributions we need to better understand the class products $C_d C_\lambda$. A generating function for the number of elements in $C_d C_\lambda$ with a given number of cycles is given by Stanley \cite{stanley1999enumerative}. However, for our applications we need to understand the whole cycle type, not just the number of cycles. In order to do this, we use a technique of Gamburd \cite{gamburd2006poisson} and Chmutov and Pittel \cite{chmutov2016surface}. The main result of the next section is that, as long as $\lambda$ does not have too many parts of size one, the distribution of $C_d C_\lambda$ is close to uniform (up to parity). \section{Character theory and class products} \label{sec:chartheory} Let $S_n$ be the symmetric group on symbols $\{1,2,\dots,n\}$. Let $U^e / U^o$ denote the uniform distribution on all even/odd permutations in $S_n$ respectively. We define $P_{\alpha, \beta}$ as the probability distribution on $S_n$ given by the multiset $C_\alpha C_\beta := \{ \pi \cdot \sigma : \pi \in C_\alpha, \sigma \in C_\beta \}$. Suppose $\alpha$ and $\beta$ have the same parity. The \emph{total variation distance} between $P_{\alpha,\beta}$ and the uniform distribution is: $$ || P_{\alpha, \beta} - U^e || := \sum_{\substack{\omega \in S_n \\ \omega \text{ even}}} | P_{\alpha,\beta}(\omega) - 2/n! | $$ The same definition applies with odd permutations and $U^o$ when $\alpha$ and $\beta$ have different parity. The main result of this section is the following.
\begin{theorem} \label{thm:asymptoticunif} Let $\lambda$ be a partition of $n$ with $\ell$ fixed points such that $(n)$ and $\lambda$ have the same parity. Then we have: $$ || P_{(n), \lambda} - U^e || \leq \frac{\max\{1,\ell\}}{2\sqrt{n-1}} $$ The same statement holds with $U^o$ when $(n)$ and $\lambda$ have different parity. \end{theorem} The rest of the section is dedicated to the proof of this theorem, and is somewhat technical. It is also the only part of the paper which uses character theory. The reader may therefore skip to Section \ref{sec:localclt} without any loss of understanding of the later sections. We follow a technique used by Gamburd \cite{gamburd2006poisson} and by Chmutov and Pittel \cite{chmutov2016surface}. In their papers, they use a bound on the total variation distance due to Diaconis and Shahshahani \cite{diaconis1981generating}. They combine this with a bound on character values of the symmetric group. The character bound due to Larsen and Shalev \cite{larsen2008characters} would give some asymptotic form of Theorem \ref{thm:asymptoticunif}. However, the character values in our application are simpler than those in the aforementioned papers, so we do not need to apply such general machinery. It will also be convenient to have a more concrete character bound. We will therefore start by estimating the character values we need with a simple bound, with no asymptotic notation. Let $\chi^{\lambda}(\alpha)$ denote the character of the irreducible representation of $S_n$ indexed by $\lambda$ evaluated at a permutation of cycle type $\alpha$. Write $f^\lambda$ for the dimension of the irreducible representation indexed by $\lambda$. \begin{lemma} \label{lem:charbound} Let $\lambda$ be a partition of $n$ with $\ell$ fixed points. Then: \begin{align*} \frac{|\chi^{(1^k,n-k)}(\lambda)|}{f^{(1^k,n-k)}} \leq \frac{\max\{1, \ell\}}{n-1} \end{align*} This bound is attained when $k = 1$ or $n-2$ and $\ell \geq 1$.
\end{lemma} \begin{proof} Fix some ordering of the parts of $\lambda = (\lambda_1, \lambda_2, \dots )$. By the Murnaghan-Nakayama rule ($\mathsection$7.17 \cite{stanley1999enumerative}): $$ |\chi^{(n-k,1^k)}(\lambda)| \leq g^{(n-k,1^k)}(\lambda) $$ where $g^{(n-k,1^k)}(\lambda)$ is the number of ways of placing $\lambda_i$ $i$'s into the hook-shape tableau $(n-k,1^k)$ for each $i$, such that each row and column is weakly increasing. Note that the ordering of the parts of $\lambda$ we choose does not affect this number. We will therefore change this ordering when convenient to make our analysis easier. Due to this upper bound, we will spend the rest of this proof estimating: $$ \frac{g^{(n-k,1^k)}(\lambda)}{f^{(1^k,n-k)}} $$ Firstly, the dimension of the hook shape is easy to calculate: $$ f^{(1^k,n-k)} = \chi^{(1^k,n-k)}(1^n) = \binom{n-1}{k}. $$ We now split into cases.\\ Case 1: Suppose $\lambda$ has at most one part of size one. We let $n$ be even; a very similar argument holds for $n$ odd. Define $\lambda'$ in the following way: \begin{itemize} \item If $\lambda$ has no fixed points, split one of its parts $\lambda_i \geq 2$ into two parts $\lambda_i - 1, 1$; otherwise let $\lambda' = \lambda$. \item Order the parts of $\lambda'$ so that $\lambda' = (\lambda_1, \lambda_2, \dots, \lambda_m)$ with $\lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_m$. \end{itemize} We use the fact that $g^{(n-k,1^k)}(\lambda) \leq g^{(n-k,1^k)}(\lambda')$ to make the calculations easier. Since $\lambda'$ has at least one fixed point, $\lambda_1$ is a part of size one, and the corresponding symbol must be placed in the top-left cell of the tableau. Each way of placing the other numbers into the tableau is uniquely given by a subset of the parts $\lambda_2, \dots, \lambda_m$ which sum to $k$. These parts correspond to the symbols which are placed in the main column of the tableau. \begin{figure} \caption{We let $\lambda = (5,3,2,2)$, then $\lambda' = (4,3,2,2,1)$.
We show all the valid placements of the multiset $\{1,2,2,3,3,4,4,4,5,5,5,5\}$ into hook-shape tableaux.} \label{fig:tableauxplacement} \end{figure} Suppose $k \leq n/2$; the case $k > n/2$ follows by symmetry since $g^{(n-k,1^k)}(\lambda) = g^{(k+1,1^{n-k-1})}(\lambda)$. We do the calculations for $k$ even, but the same bound holds for $k$ odd by the same argument. Let $S=\{\lambda_2,\lambda_3,\dots, \lambda_m,0,0,\dots,0\}$, where $S$ has $n/2$ total parts. Each subset of $S$ with $k/2$ parts corresponds to at most one subset of $\lambda_2, \dots, \lambda_m$ with parts that sum to $k$. We refer to Figure \ref{fig:tableauxplacement} for two examples of this. This gives the following upper bound: \begin{align*} g^{(n-k,1^k)}(\lambda) &\leq g^{(n-k,1^k)}(\lambda') \leq \binom{n/2}{k/2} \\ &= \frac{n(n-2)(n-4)\dots(n-k+2)}{k(k-2)(k-4)\dots2} \end{align*} Therefore we have: \begin{align*} \frac{g^{(n-k,1^k)}(\lambda)}{f^{(1^k,n-k)}} &\leq \frac{n(n-2)(n-4)\dots(n-k+2)}{k(k-2)(k-4)\dots2} \frac{k!}{(n-1)(n-2)\dots(n-k)}\\ &\leq \frac{(k-1)(k-3)(k-5)\dots1}{(n-1)(n-3)(n-5)\dots(n-k+1)(n-k)} \\ &\leq \frac{1}{n-1} \left( \frac{k-1}{n-3} \frac{k-3}{n-5} \dots \frac{3}{n-k+1} \frac{1}{n-k} \right) < \frac{1}{n-1} \end{align*} Case 2: The partition $\lambda$ has $\ell \geq 2$ fixed points, and $k=1$ or $n-2$. Let $k=1$; the case $k=n-2$ gives the same bound by symmetry. As before, suppose $\lambda_1 = 1$. Then this $1$ must be placed in the top-left cell of the tableau. The cell just below this $1$ must be filled with an $i$ such that $\lambda_i = 1$. There are $\ell-1$ choices of such a part, after which the rest of the tableau placements are determined. This gives that $g^{(1,n-1)}(\lambda) = \ell-1$. We also have $f^{(1,n-1)} = \binom{n-1}{1} = n-1$. Therefore for this case we have: $$ \frac{g^{(1,n-1)}(\lambda)}{f^{(1,n-1)}} = \frac{\ell-1}{n-1} $$ Case 3: The partition $\lambda$ has $\ell \geq 2$ fixed points and $2\leq k \leq n-3$. This covers the remaining possible cases for $\lambda$.
We proceed by induction, with the previous two cases as the base. Suppose that for all $n$, all $1 \leq k \leq n-2$ and all $\lambda''$ with $\ell' < \ell$ fixed points we have that: $$ \frac{g^{(n-k,1^k)}(\lambda'')}{f^{(1^k,n-k)}} < \frac{\max\{1,\ell'\}}{n-1}. $$ Suppose $\lambda$ has $m$ parts. Order the parts of $\lambda$ so that $\lambda_m = 1$. Observe that the single symbol $m$ must be placed in the far-right cell or the bottom cell of the tableau. If it is placed in the far-right cell, then there are $g^{(n-k-1, 1^k)}(\lambda - \lambda_m)$ ways to place the remaining symbols into the tableau. If it is placed in the bottom cell, then there are $g^{(n-k, 1^{k-1})}(\lambda - \lambda_m)$ ways to place the remaining symbols. Note that since $f^{(n-k,1^k)} = \binom{n-1}{k}$, we have that $f^{(n-k,1^k)} = f^{(n-k-1,1^k)} + f^{(n-k,1^{k-1})}$. Therefore: \begin{align*} \frac{g^{(n-k,1^k)}(\lambda)}{f^{(n-k,1^k)}} \leq \frac{g^{(n-k-1, 1^k)}(\lambda - \lambda_m)+g^{(n-k, 1^{k-1})}(\lambda - \lambda_m)}{f^{(n-k-1,1^k)} + f^{(n-k,1^{k-1})}} \end{align*} By our inductive assumption, and the fact that $\ell \geq 2$, we have that: \begin{align*} \frac{g^{(n-k-1, 1^k)}(\lambda - \lambda_m)}{f^{(n-k-1,1^k)}}, \frac{g^{(n-k, 1^{k-1})}(\lambda - \lambda_m)}{f^{(n-k,1^{k-1})}} \leq \frac{\ell-1}{n-2} \end{align*} Therefore: \begin{align*} \frac{g^{(n-k,1^k)}(\lambda)}{f^{(n-k,1^k)}} \leq \frac{\ell-1}{n-2} < \frac{\ell}{n-1} \end{align*} \end{proof} We are now ready to prove Theorem \ref{thm:asymptoticunif}. \begin{proof}[Proof of Theorem \ref{thm:asymptoticunif}] We prove the formula in the case that $\alpha, \beta$ are partitions of the same parity. The same arguments hold for partitions of opposite parity, with $U^e$ replaced by $U^o$. The following formula is given in ($\mathsection$2.10, $\mathsection$2.11 \cite{chmutov2016surface}) in a special case, and trivially generalises to all $\alpha, \beta$.
$$ || P_{\alpha, \beta} - U^e ||^2 \leq \frac{1}{4} \sum_{\lambda \neq (n), (1^n)} \left( \frac{\chi^{\lambda}(\alpha) \chi^{\lambda}(\beta)}{f^\lambda} \right)^2 $$ The character values at full cycles are easy to evaluate (see \cite{stanley1999enumerative}): $$ \chi^\lambda((n)) = \begin{cases} (-1)^k & \text{if } \lambda = (n-k,1^k)\\ 0 & \text{otherwise} \end{cases} $$ This simplifies the formula considerably: $$|| P_{(n), \lambda} - U^e ||^2 \leq \frac{1}{4} \sum_{k=1}^{n-2} \left( \frac{(-1)^k \chi^{(n-k,1^k)}(\lambda)}{f^{(1^k, n-k)}} \right)^2 $$ Using the bound from Lemma \ref{lem:charbound} we obtain: \begin{align*} || P_{(n), \lambda} - U^e ||^2 \leq \frac{1}{4} \sum_{k=1}^{n-2} \left( \frac{\max\{1,\ell\}}{n-1} \right)^2 \leq \frac{\max\{1, \ell^2\}}{4(n-1)} \end{align*} \end{proof} \section{A local central limit theorem at a vertex} We start this section by stating some recent results on random embeddings of graphs which we will need. The first gives a way to randomly add a new vertex to a graph $G'$ to obtain a random embedding of a new graph $G$. \begin{lemma} ($\mathsection$4.2 \cite{loth2021random}) \label{lem:addavertex} Let $G$ have vertices $v_1,\dots,v_n$ and let $G' = G-\{v_n\}$. Pick $m = (D(G'), R', E') \in \mathcal{M}(G')$ uniformly at random. Let $D_n(G) = \{e_i : v_i \in N(v_n)\}$. For each $e_i \in D_n(G)$, let $D_i(G) = D_i(G') \cup \{d_i\}$. Let $D(G) = \cup_{i=1}^n D_i(G)$. \\ Pick a uniform at random perfect matching between $\{e_i \in D_n(G)\}$ and $\{d_i : e_i \in D_n(G)\}$. For each pair in the matching add a two cycle $(d_i \, e_i)$ to $E'$ to obtain $E$, and place $d_i$ between two randomly chosen symbols in the cycle $\pi_i'$ of $R'$ to obtain $\pi_i$. Let $R(G) = \pi_1 \dots \pi_n$. \\ This process outputs a uniform at random $(D, R, E) \in \mathcal{M}$. \end{lemma} Our upper bound on $||P_{(n),\lambda} - U^e/U^o||$ is only good when the number of fixed points in $\lambda$ is relatively small.
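The bound in Theorem \ref{thm:asymptoticunif} can also be checked by brute force in small cases. The following sketch (ours, purely illustrative and not part of any proof) enumerates the class product $C_{(5)}C_{(3,1,1)}$ in $S_5$, where $\ell = 2$, and compares the resulting total variation distance against the guaranteed $\max\{1,\ell\}/(2\sqrt{n-1}) = 1/2$:

```python
from itertools import permutations
from math import factorial, sqrt

n = 5

def cycle_type(p):
    # cycle type of a permutation given as a tuple p with p[i] = image of i
    seen, ct = [False] * n, []
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = p[j]
                length += 1
            ct.append(length)
    return tuple(sorted(ct, reverse=True))

def compose(p, q):
    # (p . q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def is_even(p):
    return sum(l - 1 for l in cycle_type(p)) % 2 == 0

perms = list(permutations(range(n)))
C_full = [p for p in perms if cycle_type(p) == (5,)]      # C_(5), 24 elements
C_lam = [p for p in perms if cycle_type(p) == (3, 1, 1)]  # C_(3,1,1), l = 2

# the multiset C_(5) C_(3,1,1); both classes are even, so it sits on even perms
counts = {}
for p in C_full:
    for q in C_lam:
        r = compose(p, q)
        counts[r] = counts.get(r, 0) + 1
total = len(C_full) * len(C_lam)

evens = [p for p in perms if is_even(p)]
tv = sum(abs(counts.get(p, 0) / total - 2 / factorial(n)) for p in evens)
bound = max(1, 2) / (2 * sqrt(n - 1))  # Theorem bound with l = 2
print(tv, bound)
```

The computed distance comes out below the bound $1/2$, consistent with the theorem.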
Recall that in our application, each permutation in $C_\lambda$ in the product $C_n C_\lambda$ will be $\sigma_i(m)$ for some $m \in \mathcal{M}$. We therefore need to show that the number of fixed points in $\sigma_i$ is, on average, relatively small. We begin by bounding the number of fixed points in $\sigma_i$ by the number of faces in an embedding of a smaller graph. For a map $m = (D,R,E)$, let $F(m)$ denote the number of faces of $m$. Recall that we defined the faces of $m$ as the cycles of $R \cdot E$, and note that the underlying graph of $m$ need not be connected. Let $F(G)$ denote the random variable for the number of faces in a randomly chosen map $m \in \mathcal{M}(G)$. \label{sec:localclt} \begin{lemma} \label{lem:fpbound} Let $O_i$ be the random variable for the number of fixed points in $\sigma_i(m)$, in a randomly chosen $m \in \mathcal{M}(G)$. Then for any $i$ we have: $$ \E[O_i] \leq \E[F(G - \{v_i\})] $$ \end{lemma} \begin{proof} Let $m = (D,R,E)$ be an embedding of $G$, and let $m'$ be the induced embedding of $G -\{v_i\}$ given by removing $v_i$ and all the edges incident with it. Claim: We have that $O_i(m) \leq F(m')$. Proof of claim: Recall that $\sigma_i$ is the induced permutation of $(R - \pi_i)\cdot E$ on $D_i$. Therefore the number of fixed points in $\sigma_i$ is certainly at most the number of cycles in $(R-\pi_i)\cdot E$. The permutation $(R-\pi_i)\cdot E$ has $F(m')$ cycles, proving our claim. $\diamondsuit$ Picking a uniform at random $m' \in \mathcal{M}(G - \{v_i\})$ then adding $v_i$ as in Lemma \ref{lem:addavertex} gives a uniform at random embedding $m \in \mathcal{M}(G)$. Therefore the above claim gives that $\E[O_i] \leq \E[F(G-\{v_i\})]$. \end{proof} We are ready to prove the main theorem of the section. \begin{theorem} \label{thm:localunif} The local face distribution of $G$ at a vertex $v_i$ is asymptotically uniform (up to parity) if $\E[F(G - \{v_i\})] = o(\sqrt{\deg(v_i)})$.
More generally, let $L_i$ be the local face distribution of $G$ at $v_i$, and let $U^e/U^o$ be the uniform distribution on all even/odd permutations on the darts in $D_i$. Then there exists $c \in [0,1]$ such that for any $x\geq1$ we have: $$ || L_i - (c U^o + (1-c) U^e) || \leq \frac{x\E[F(G-\{v_i\})]}{2\sqrt{\deg(v_i)-1}} + \frac{1}{x} $$ \end{theorem} \begin{proof} By Lemma \ref{lem:localcentral}, we have that for some constants $a_\lambda$: \begin{align*} L_i &= \sum_{\lambda} a_\lambda P_{(n),\lambda} \end{align*} Let $G' = G - \{v_i\}$ and write $d = \deg(v_i)$. By Lemma \ref{lem:fpbound} and Markov's inequality we have that $Pr[O_i \geq x \E[F(G')]] \leq \frac{1}{x}$. By Theorem \ref{thm:asymptoticunif} and since $x \E[F(G')] \geq 1$, if $\lambda$ has at most $x \E[F(G')]$ fixed points then $||P_{(d),\lambda} - U^{o/e}|| \leq \frac{x\E[F(G')]}{2\sqrt{d-1}}$. Let $c = Pr[\sigma_i(m) \text{ is odd}]$. Then we have that: \begin{align*} || L_i - (c U^o + (1-c) U^e) || &\leq \sum_\lambda Pr[\sigma_i(m) \in C_\lambda] ||P_{(d), \lambda} - U^{o/e}|| \\ &\leq Pr[O_i \geq x \E[F(G')]] + Pr[O_i < x \E[F(G')]] \frac{x\E[F(G')]}{2\sqrt{d-1}} \\ &\leq \frac{x\E[F(G')]}{2\sqrt{d-1}} + \frac{1}{x}. \end{align*} \end{proof} Note that the face permutation $R \cdot E$ will have the same parity for every $m \in \mathcal{M}$. However, $\sigma_i(m)$ will vary in parity across different $m \in \mathcal{M}$. The value $c$ appears in the statement of this theorem because it is difficult to tell whether $\sigma_i(m)$ is odd or even. We give a concrete example of Theorem \ref{thm:localunif}. We use a recent result that bounds the expected number of faces of a random embedding of the complete graph. \begin{theorem} \cite{loth2022randomcomplete} \label{thm:knavgfaces} For sufficiently large $n$, we have that $\E[F(K_n)] \leq 4\log(n)$. \end{theorem} Using this, we may show that the local face distribution at a vertex in the complete graph is asymptotically uniform.
\begin{corollary} Let $v_i$ be a vertex in $K_n$, and let $L_i$ be the local face distribution at this vertex. Then there exists some sequence $\{c_n \in [0,1]: n \geq 2\}$ such that for sufficiently large $n$: $$ || L_i - (c_n U^o + (1-c_n) U^e) || \leq \frac{2\log(n-1)+2}{\sqrt[4]{n-2}} \rightarrow 0 $$ \end{corollary} \begin{proof} Letting $x = \sqrt[4]{n-2}/\log(n-1)$ in Theorem \ref{thm:localunif} and using Theorem \ref{thm:knavgfaces} gives the result. \end{proof} These results give us an understanding of the local behaviour of complete graph embeddings at a vertex. We continue by analysing the behaviour of these embeddings across separate vertices. \section{Asymptotics of complete graph embeddings} We start by showing that for any fixed dart and vertex, the face containing the dart in a random embedding will asymptotically almost surely contain this vertex. \begin{theorem} \label{thm:vertexinfaceprob} Let $e \in D(K_n)$ and let $F_e(m)$ be the face containing $e$ in an embedding $m \in \mathcal{M}(K_n)$. Let $v_i$ be some vertex in $K_n$; we write $i \in F_e(m)$ to mean that there is at least one dart $e' \in D_i(K_n)$ incident with vertex $v_i$ which is contained in the face $F_e(m)$. Then for a randomly chosen $m \in \mathcal{M}(K_n)$ we have the following for sufficiently large $n$: $$ Pr[i \notin F_e] \leq \frac{4\log(n)}{e(n-1)} \rightarrow 0. $$ \end{theorem} \begin{proof} We show the result for $i=n$, which is enough to prove the theorem since $K_n$ is vertex-transitive. Take an embedding $m=(D,R,E) \in \mathcal{M}(K_n)$, and fix some $e \in D_1$. Let $m'$ be the induced embedding obtained by removing vertex $v_n$ and its adjacent edges from $m$. This will be an embedding $m'=(D',R',E')$ of $K_{n-1}$. If $e \notin D'$ then certainly we have that $n \in F_e$. Therefore suppose $e \in D'$. Suppose that there are $f_i$ darts in $F_e(m')$ contained in $D_i(K_{n-1})$ for $i=1,\dots,n-1$.
Write $d_e(m') := (f_1,\dots,f_{n-1})$; see Figure \ref{fig:addingvertexbreakingfaces} for an example of this sequence. \begin{figure} \caption{This is an embedding of $K_4$ on the torus. It has two faces, one of length $4$ and one of length $8$. For the dart $e$ on the diagram of the map $m$, we have $d_e(m) = (2,2,2,2)$. If we add $v_5$ to this embedding to give an embedding of $K_5$, then there are $3$ places at each of $v_1,v_2,v_3,v_4$ in which to place the new half-edges. One choice of placement at each vertex does not intersect $F_e$. Therefore in this case, $Pr[5 \notin F_e] = (1/3)^4$.} \label{fig:addingvertexbreakingfaces} \end{figure} Now, given a random embedding $m'$ of $K_{n-1}$, recall how we can add a vertex $v_n$ to extend it to a random embedding $m$ of $K_n$, as outlined in Lemma \ref{lem:addavertex}. This process involves adding a new dart to each $D_i(K_{n-1})$, then placing it in a uniform at random position in the cycle $\pi_i'$. There are $n-2$ possible places in the cycle $\pi_i'$ at which to insert the new dart of $D_i(K_n)$. Of these choices, $f_i$ place the new dart in the face $F_e$. Therefore $n \notin F_e$ if and only if one of the other $n-2-f_i$ places is chosen for the new dart at each vertex $v_i$. This probability only depends on the numbers $f_i$, so we have: $$ Pr[n \notin F_e(m) | d_e(m') = (f_1, \dots, f_{n-1})] \leq \prod_{i=1}^{n-1} \left(1-\frac{f_i}{n-2}\right) $$ Write $|F_e(m')| = \sum_{i=1}^{n-1} f_i$ for the length of this face, and suppose $|F_e(m')| = k$.
For any sequence of positive numbers $x_1, \dots, x_{n-1}$ we have that: $$ \prod_{i=1}^{n-1} x_i \leq \left(\frac{1}{n-1} \sum_{i=1}^{n-1} x_i\right)^{n-1} $$ Letting $x_i = 1-\frac{f_i}{n-2}$ gives \begin{align*} \prod_{i=1}^{n-1} \left(1-\frac{f_i}{n-2}\right) \leq \left(1 - \frac{k}{(n-1)(n-2)}\right)^{n-1} \leq e^{-k/(n-2)} \end{align*} where we have used the bound $(1-x)^{n-1} \leq e^{-x(n-1)}$ for $0 \leq x \leq 1$. By the law of total probability we obtain: \begin{align*} Pr[n \notin F_e(m)] &= \sum_{k=1}^{(n-1)(n-2)} Pr[|F_e(m')| = k] Pr[n \notin F_e(m) \vert \, |F_e(m')| = k] \\ &\leq \sum_{k=1}^{(n-1)(n-2)} Pr[|F_e(m')| = k] e^{-k/(n-2)} \end{align*} We continue by expressing $\E[F(K_{n-1})]$ as a sum in terms of $|F_e(m')|$. Claim: $$ \E[F(K_{n-1})] = (n-1)(n-2) \sum_{k=1}^{(n-1)(n-2)} \frac{1}{k} Pr[|F_e(m')| = k] $$ Proof of claim: Recall that $F(m')$ is the number of faces in $m' \in \mathcal{M}(K_{n-1})$ and let $\mathcal{F}(m')$ be the set of these faces. Let $|f|$ for $f \in \mathcal{F}(m')$ be the length of this face, which is the number of darts contained in it. Then we have that: $$ F(m') = \sum_{f \in \mathcal{F}(m')} 1 = \sum_{f \in \mathcal{F}(m')} \sum_{e \in f} 1/|f| = \sum_{e \in D'} 1/|F_e(m')| $$ Thus, by linearity of expectation and the symmetry of $e \in D'$: $$ \E[F(K_{n-1})] = |D'| \, \E[1/|F_e|] = (n-1)(n-2) \sum_{k=1}^{(n-1)(n-2)} \frac{1}{k} \, Pr[ |F_e(m')| = k ].
\,\,\,\,\, \diamondsuit $$ Rearranging this and using Theorem \ref{thm:knavgfaces} we obtain the following for sufficiently large $n$: $$ \sum_{k=1}^{(n-1)(n-2)} \frac{1}{k} Pr[|F_e(m')| = k] \leq \frac{4\log(n)}{(n-1)(n-2)} $$ This gives the result: \begin{align*} Pr[n \notin F_e(m)] &\leq \sum_{k=1}^{(n-1)(n-2)} Pr[|F_e(m')| = k ] e^{-k/(n-2)} \\ &= \sum_{k=1}^{(n-1)(n-2)} \frac{1}{k} Pr[|F_e| = k] \left( ke^{-k/(n-2)} \right) \\ &\leq \sum_{k=1}^{(n-1)(n-2)} \frac{1}{k} Pr[|F_e| = k] \left( \frac{n-2}{e} \right) \\ &\leq \frac{4\log(n)}{e(n-1)} \rightarrow 0. \end{align*} The penultimate line uses that the function $xe^{-x/(n-2)}$ attains its maximum at $x=n-2$. \end{proof} As a corollary to this, we can estimate the proportion of the embedding which is not changed if we alter the rotation scheme at a vertex $v_i$. \begin{corollary} \label{cor:unchangedpartofgraph} Let $v_i$ be a vertex in $K_n$. Let $D'(m) \subseteq D$ be the set of darts contained in faces incident with vertex $v_i$ in some $m \in \mathcal{M}(K_n)$. Then $\E[|D'|] = (1-o(1))|D|$. \end{corollary} \begin{proof} We can decompose $|D'(m)|$ into indicator variables for each dart and then use linearity of expectation to obtain: $$ \E[|D'(m)|] = \sum_{e \in D} Pr[i \in F_e] $$ The result then follows from Theorem \ref{thm:vertexinfaceprob}. \end{proof} We combine Theorem \ref{thm:localunif} with Corollary \ref{cor:unchangedpartofgraph} to obtain a limiting uniformity for embeddings of complete graphs. Let $\kappa(\alpha)$ denote the number of cycles in a permutation $\alpha$, and let $A_n$ denote the alternating group on $n$ symbols. \begin{theorem} \label{thm:knuniform} Take a random embedding $m=(D,R,E)$ of $K_n$. Then almost all of $m$ has a number of faces asymptotically distributed as the number of cycles of a uniformly random permutation, up to some combination of parity.
More precisely, given any $m =(D,R,E) \in \mathcal{M}(K_n)$ there exists a canonical induced permutation $\alpha(m)$ of $R \cdot E$ on $D' \subseteq D$ with $\E[|D'|] = (1-o(1))|D|$ and a sequence $\{c_n: n \geq 2\}$ such that: \begin{align*} Pr[\kappa(\alpha) = k] = \begin{cases} c_n Pr[\kappa(\beta) = k \mid \beta \in A_{n-1}] & \text{if } n+k \text{ is odd,} \\ (1-c_n) Pr[\kappa(\beta) = k \mid \beta \in S_{n-1} - A_{n-1}] & \text{if } n+k \text{ is even.} \end{cases} \end{align*} \end{theorem} \begin{proof} Take some embedding of the complete graph, $m\in \mathcal{M}(K_n)$. We define $D'(m) \subseteq D$ as the set of all darts contained in faces which have at least one dart in $D_1$. Let $\alpha(m)$ be the induced permutation of $R \cdot E$ on $D'$. By Corollary \ref{cor:unchangedpartofgraph}, $\E[|D'|] = (1-o(1)) |D|$. We also have that $\kappa(\alpha(m)) = \kappa(\omega_1(m))$. Suppose $n+k$ is odd; the other parity case follows by the same argument. By Theorem \ref{thm:localunif}, there exists a sequence $\{c_n: n \geq 2\}$ such that $Pr[\kappa(\alpha) = k] = Pr[\kappa(\omega_1) =k] = c_n Pr[\kappa(\beta) = k \mid \beta \in A_{n-1}]$. The result follows. \end{proof} We remark that although $\E[|D'|] = (1-o(1))|D|$, we have $\E[|D|-|D'|] \rightarrow \infty$. The section of $m$ given by $D'(m)$ may have many faces of different sizes, so Theorem \ref{thm:knuniform} does not immediately imply any results on the average number, or average length, of faces in the whole graph. However, we can still estimate the average number of steps it takes for a face to revisit its starting vertex. \begin{corollary} Fix a vertex $v_i$ in $K_n$. For any $e \in D_i$, let $\ell_e(m)$ be the number of edges the face containing $e$ passes through before returning to $v_i$. Then we have: $$ \E[\ell_e] = (1-o(1))n. $$ \end{corollary} \begin{proof} Let $D'(m) \subseteq D$ be the set of darts contained in faces incident with vertex $v_1$ in some $m \in \mathcal{M}(K_n)$.
It is clear that for any $m \in \mathcal{M}$, we have $|D'(m)| = \sum_{e \in D_1} \ell_e(m)$. However, by symmetry $\E[\ell_e] = \E[\ell_{e'}]$ for any $e,e' \in D_1$. Therefore by linearity of expectation and Corollary \ref{cor:unchangedpartofgraph}, $(n-1) \E[\ell_e] = (1-o(1)) |D|$. Since $|D| = n(n-1)$, rearranging gives the required result. \end{proof} \end{document}
\begin{document} \title{\textbf{Sharp global estimates for local and nonlocal \\ porous medium-type equations \\ in bounded domains}} \author{\Large Matteo Bonforte, Alessio Figalli, ~and~ Juan Luis V\'azquez } \date{} \maketitle \begin{abstract} This paper provides a quantitative study of nonnegative solutions to nonlinear diffusion equations of porous medium-type of the form \ $\partial_t u + \mathcal{L} u^m=0$, $m>1$, where the operator $\mathcal{L}$ belongs to a general class of linear operators, and the equation is posed in a bounded domain $\Omega\subset \mathbb{R}^N$. As possible operators we include the three most common definitions of the fractional Laplacian in a bounded domain with zero Dirichlet conditions, and also a number of other nonlocal versions. In particular, $\mathcal{L}$ can be a power of a uniformly elliptic operator with $C^1$ coefficients. Since the nonlinearity is given by $u^m$ with $m>1$, the equation is degenerate parabolic. The basic well-posedness theory for this class of equations has been recently developed in \cite{BV-PPR1,BV-PPR2-1}. Here we address the regularity theory: decay and positivity, boundary behavior, Harnack inequalities, interior and boundary regularity, and asymptotic behavior. All this is done in a quantitative way, based on sharp a priori estimates. Although our focus is on the fractional models, our results cover also the local case when $\mathcal{L}$ is a uniformly elliptic operator, and provide new estimates even in this setting. A surprising aspect discovered in this paper is the possible presence of non-matching powers for the long-time boundary behavior.
More precisely, when $\mathcal{L}=(-\Delta)^s$ is a spectral power of the Dirichlet Laplacian inside a smooth domain, we can prove that: \\ - when $2s> 1-1/m$, for large times all solutions behave as ${\rm dist}^{1/m}$ near the boundary; \\ - when $2s\le 1-1/m$, different solutions may exhibit different boundary behavior.\\ This unexpected phenomenon is a completely new feature of the nonlocal nonlinear structure of this model, and it is not present in the semilinear elliptic equation $\mathcal{L} u^m=u$. \end{abstract} \vskip 1 cm \noindent {\sc Addresses:} \noindent Matteo Bonforte. Departamento de Matem\'{a}ticas, Universidad Aut\'{o}noma de Madrid,\\ Campus de Cantoblanco, 28049 Madrid, Spain. E-mail:~\texttt{[email protected]} \noindent Alessio Figalli. ETH Z\"urich, Department of Mathematics, R\"amistrasse 101,\\ 8092 Z\"urich, Switzerland. E-mail:~\texttt{[email protected]} \noindent Juan Luis V\'azquez. Departamento de Matem\'{a}ticas, Universidad Aut\'{o}noma de Madrid,\\ Campus de Cantoblanco, 28049 Madrid, Spain. E-mail:~\texttt{[email protected]} \noindent {\sc Keywords.} Nonlocal diffusion, nonlinear equations, bounded domains, a priori estimates, positivity, boundary behavior, regularity, Harnack inequalities. \noindent{\sc Mathematics Subject Classification}. 35B45, 35B65, 35K55, 35K65.
\newpage \footnotesize \tableofcontents \normalsize \newpage \section{Introduction} In this paper we address the question of obtaining a priori estimates, positivity, boundary behavior, Harnack inequalities, and regularity for a suitable class of weak solutions of nonlinear nonlocal diffusion equations of the form: \begin{equation}\label{FPME.equation} \partial_t u+\mathcal{L}\,F(u)=0 \qquad\mbox{posed in }Q=(0,\infty)\times \Omega\,, \end{equation} where $\Omega\subset \mathbb{R}^N$ is a bounded domain with $C^{1,1}$ boundary, $N\ge 2$\footnote{Our results work also in dimension $N=1$ if the fractional exponent (that we shall introduce later) belongs to the range $0<s<1/2$. The interval $1/2\le s<1$ requires some minor modifications that we prefer to avoid in this paper.}, and $\mathcal{L}$ is a linear operator representing diffusion of local or nonlocal type, the prototype example being the fractional Laplacian (the class of admissible operators will be precisely described below). Although our arguments hold for a rather general class of nonlinearities $F:\mathbb{R} \to \mathbb{R}$, for the sake of simplicity we shall focus on the model case $F(u)=u^m$ with $m>1$. The use of nonlocal operators in diffusion equations reflects the need to model the presence of long-distance effects not included in evolution driven by the Laplace operator, and this is well documented in the literature. The physical motivation and relevance of the nonlinear diffusion models with nonlocal operators has been mentioned in many references, see for instance \cite{AC,BV2012,BV-PPR1,DPQRV1,DPQRV2,Vaz2014}. Because $u$ usually represents a density, all data and solutions are supposed to be nonnegative. Since the problem is posed in a bounded domain we need boundary or external conditions that we assume of Dirichlet type.
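Before turning to the nonlocal theory, it may help to see the degenerate nonlinearity at work in the simplest classical setting. The following sketch (ours, not from the paper; all grid parameters are arbitrary) runs an explicit finite-difference scheme for the local model $\partial_t u = \Delta u^m$ in one space dimension with zero Dirichlet data:

```python
import numpy as np

# Explicit scheme for du/dt = d^2(u^m)/dx^2 on (0,1), u = 0 on the boundary.
m = 2.0                       # porous medium exponent, m > 1
N = 100                       # interior grid points
dx = 1.0 / (N + 1)
dt = 0.1 * dx**2              # small step, chosen for stability
x = np.linspace(dx, 1 - dx, N)
u = np.maximum(0.0, 1.0 - ((x - 0.5) / 0.2) ** 2)   # compactly supported datum

for _ in range(int(0.01 / dt)):
    v = u ** m                            # degenerate: diffusivity vanishes at u = 0
    lap = np.empty_like(u)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    lap[0] = (v[1] - 2 * v[0]) / dx**2    # ghost value v = 0 at the boundary
    lap[-1] = (v[-2] - 2 * v[-1]) / dx**2
    u = u + dt * lap

print(u.max())
```

In this local model, compactly supported data spread with finite speed of propagation; the nonlocal operators studied in this paper behave quite differently in this respect.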
This kind of problem has been extensively studied when $\mathcal{L}=-\Delta$ and $F(u)=u^m$, $m>1$, in which case the equation becomes the classical Porous Medium Equation \cite{VazBook,DK0, DaskaBook, JLVmonats}. Here, we are interested in treating nonlocal diffusion operators, in particular fractional Laplacian operators. Note that, since we are working on a bounded domain, the concept of fractional Laplacian operator admits several non-equivalent versions, the best known being the Restricted Fractional Laplacian (RFL), the Spectral Fractional Laplacian (SFL), and the Censored Fractional Laplacian (CFL); see Section \ref{ssec.examples} for more details. We use these names because they already appeared in some previous works \cite{BSV2013, BV-PPR2-1}, but we point out that RFL is usually known as the Standard Fractional Laplacian, or plainly Fractional Laplacian, and CFL is often called Regional Fractional Laplacian. The case of the SFL operator with $F(u)=u^m$, $m>1$, has been already studied by the first and the third author in \cite{BV-PPR1,BV-PPR2-1}. In particular, in \cite{BV-PPR2-1} the authors presented a rather abstract setting where they were able to treat not only the usual fractional Laplacians but also a large number of variants that will be listed below for the reader's convenience. Besides, rather general increasing nonlinearities $F$ were allowed. The basic questions of existence and uniqueness of suitable solutions for this problem were solved in \cite{BV-PPR2-1} in the class of `weak dual solutions', an extension of the concept of solution introduced in \cite{BV-PPR1} that has proved to be quite flexible and efficient. A number of a priori estimates (absolute bounds and smoothing effects) were also derived in that generality. Since these basic facts are settled, here we focus our attention on the finer aspects of the theory, mainly sharp boundary estimates and decay estimates.
Such upper and lower bounds will be formulated in terms of the first eigenfunction $\Phi_1$ of $\mathcal{L}$, that under our assumptions will satisfy $\Phi_1\asymp \mathrm{dist}(\cdot, \partial\Omega)^\gamma$ for a certain characteristic power $\gamma\in (0,1]$ that depends on the particular operator we consider. Typical values are $\gamma=s$ (SFL), $\gamma=1$ (RFL), and $\gamma=s-1/2$ for $s>1/2$ (CFL), cf. Subsections \ref{ssec.examples} and \ref{sec.examples}. As a consequence, we get various kinds of local and global Harnack type inequalities. It is worth mentioning that some of the boundary estimates that we obtain for the parabolic case are essentially elliptic in nature. The study of this issue for stationary problems is done in a companion paper \cite{BFV-Elliptic}. This has the advantage that many arguments are clearer, since the parabolic problem is more complicated than the elliptic one. Clarifying such difference is one of the main contributions of our present work. Thanks to these results, in the last part of the paper we are able to prove both interior and boundary regularity, and to find the large-time asymptotic behavior of solutions. Let us indicate here some notation of general use. The symbol $\infty$ will always denote $+\infty$. Given $a,b$, we use the notation $a\asymp b$ whenever there exist universal constants $c_0,c_1>0$ such that $c_0\,b\le a\le c_1 b$\,. We also use the symbols $a\vee b=\max\{a,b\}$ and $a\wedge b=\min\{a,b\}$. We will always consider bounded domains $\Omega$ with boundary of class $C^{2}$. In the paper we use the short form `solution' to mean `weak dual solution', unless differently stated.
\subsection{Presentation of the results on sharp boundary behaviour}\label{ssec.results.boundary} \noindent $\bullet$ A basic principle in the paper is that the sharp boundary estimates depend not only on $\mathcal{L}$ but also on the behavior of the nonlinearity $F(u)$ near $u=0$, i.e., in our case, on the exponent $m>1$. The elliptic analysis performed in the companion paper \cite{BFV-Elliptic} combined with some standard arguments will allow us to prove that, in {\em all} cases, $u(t)$ approaches the separate-variable solution ${\mathcal U}(x,t)=t^{-\frac1{m-1}}S(x)$ in the sense that \begin{equation}\label{asymp.intro} \left\|t^{\frac{1}{m-1}}u(t,\cdot)- S\right\|_{\mathrm{L}^\infty(\Omega)}\xrightarrow{t\to\infty}0, \end{equation} where $S$ is the solution of the elliptic problem (see Theorems \ref{Thm.Elliptic.Harnack.m} and \ref{Thm.Asympt.0}). The behaviour of the profile $S(x)$ is shown to be, when $2sm\neq \gamma(m-1)$, \begin{equation}\label{as.sep.var} S(x)\asymp \Phi_1(x)^{\sigma/m}, \qquad \sigma:=\min\left\{1,\frac{2sm}{\gamma(m-1)}\right\}. \end{equation} Thus, the behavior strongly depends on the new parameter $\sigma$, more precisely, on whether this parameter is equal to $1$ or less than $1$. As we shall see later, $\sigma$ encodes the interplay between the ``elliptic scaling power'' $2s/(m-1)$, the ``eigenfunction power'' $\gamma$, and the ``nonlinearity power'' $m$. When $2sm =\gamma(m-1)$ we have $\sigma=1$, but a logarithmic correction appears: \begin{equation}\label{as.sep.var.1} S(x)\asymp \Phi_1(x)^{1/m}\left(1+|\log\Phi_1(x) |\right)^{1/(m-1)}\,.
\end{equation} \noindent $\bullet$ This fact and the results in \cite{BFR} prompted us to look for estimates of the form \begin{equation}\label{intro.1b} c_0(t)\frac{\Phi_1^{\sigma/m}(x_0)} {t^{\frac1{m-1}}} \le u(t,x_0) \le c_1\frac{ \Phi_1^{\sigma/m}(x_0)} {t^{\frac1{m-1}}}\qquad \text{for all $t>0$, $x_0\in\Omega$,} \end{equation} where $c_0(t)$ and $c_1$ are positive and independent of $u$, possibly with a logarithmic term appearing when $2sm =\gamma(m-1)$, as in \eqref{as.sep.var.1}. We will prove in this paper that the upper bound holds for the three mentioned Fractional Laplacian choices, and indeed for the whole class of integro-differential operators we will introduce below, cf. Theorem \ref{thm.Upper.PME.II}. Also, separate-variable solutions saturate the upper bound. The issue of the validity of a lower bound as in \eqref{intro.1b} is instead much more elusive. A first indication for this is the introduction of a function $c_0(t)$ depending on $t$, instead of a constant. This reflects the fact that the solution may take some time to reach the boundary behaviour that is expected to hold uniformly for large times. Indeed, recall that in the classical PME \cite{Ar-Pe,JLVmonats, VazBook}, for data supported away from the boundary, some `waiting time' is needed for the support to reach the boundary. \noindent $\bullet$ As proved in \cite{BFR}, the stated lower bound holds for the RFL with $c_0(t)\sim (1\wedge t)^{m/(m-1)}.$ In particular, in this nonlocal setting, infinite speed of propagation holds. Here, we show that this holds also for the CFL and a number of other operators, cf. Theorem \ref{Thm.lower.B}. Note that for the RFL and CFL we have $2sm>\gamma(m-1)$, in particular $\sigma=1$, which simplifies formula \eqref{intro.1b}.\\ A combination of an upper and a lower bound with matching behaviour (with respect to $x$ and $t$) will be called a {\sl Global Harnack Principle}, and holds for all $t>0$ for these operators, cf.
Theorems \ref{thm.GHP.PME.I} and \ref{thm.GHP.PME.II}. \noindent $\bullet$ When $\mathcal{L}$ is the SFL, we shall see that the lower bound may fail. Of course, solutions by separation of variables satisfy the matching estimates in \eqref{intro.1b} (possibly with an extra logarithmic term in the limit case, as in \eqref{as.sep.var.1}), but it came as a complete surprise to us that for the SFL the situation is not the same for ``small'' initial data. More precisely: \noindent(i) We can prove that the following bounds always hold for all times: \begin{equation}\label{intro.1} c_0\left(1\wedge \frac{t}{t_*}\right)^{\frac{m}{m-1}}\frac{\Phi_1(x_0)} {t^{\frac1{m-1}}} \le u(t,x_0) \le c_1\frac{ \Phi_1^{\sigma/m}(x_0)} {t^{\frac1{m-1}}}\,, \end{equation} (when $2sm=\gamma(m-1)$, a logarithmic correction $\left(1+|\log\Phi_1(x) |\right)^{1/(m-1)}$ appears in the right hand side), cf. Theorem \ref{thm.Lower.PME}. These are non-matching estimates. \noindent(ii) For $2sm>\gamma(m-1)$, the sharp estimate \eqref{intro.1b} holds for any nonnegative nontrivial solution for large times $t \geq t_*$, cf. Theorem \ref{thm.Lower.PME.large.t}. \noindent(iii) \textbf{Anomalous boundary behaviour.} Consider now the SFL with $\sigma<1$ (resp. $2sm=\gamma(m-1)$).\footnote{Since for the SFL $\gamma=1$, we have $\sigma<1$ if and only if $$ 0<s<s_*:=\frac{m-1}{2m}<\frac{1}{2}. $$ Note that $s_*\to 0$ as we tend to the linear case $m=1$, so this exceptional regime does not appear for linear diffusions, both fractional and standard.} In this case we can find initial data for which the upper bound in \eqref{intro.1} is not sharp. Depending on the initial data, there are several possible rates for the long-time behavior near the boundary. More precisely: \begin{enumerate} \item[(a)] When $u_0\leq A\,\Phi_1$, then $u(t)\le F(t)\Phi_1^{1/m} \ll \Phi_1^{\sigma/m}$ (resp. $\Phi_1^{1/m} \ll \Phi_1^{1/m}\left(1+|\log\Phi_1 |\right)^{1/(m-1)}$) for all times, see Theorem \ref{prop.counterex}.
In particular \begin{equation}\label{limit.intro} \lim_{x\to \partial\Omega}\frac{u(t,x)}{\Phi_1(x)^{\sigma/m}}= 0 \quad \Bigl(\text{resp.} \lim_{x\to \partial\Omega}\frac{u(t,x)}{\Phi_1(x)^{1/m}\left(1+|\log\Phi_1(x) |\right)^{1/(m-1)}}= 0\Bigr) \end{equation} for any $t>0.$ \item[(b)] When $u_0\leq A\,\Phi_1^{1-2s/\gamma}$ then $u(t)\le F(t)\Phi_1^{1-2s/\gamma}$ for small times, see Theorem \ref{thm.Upper.PME.III}. Notice that when $\sigma<1$ we always have $1-\frac{2s}{\gamma}>\sigma/m$. This limits the possible improvement of the lower bound, which is confirmed by another result: In Theorem \ref{prop.counterex2} we show that lower bounds of the form $u(T,x)\geq \underline{\kappa}\,\Phi_1^\alpha(x)$ for data $u_0(x)\le A\Phi_1(x)$ are possible only for $\alpha\ge 1-2s/\gamma$. \item[(c)] On the other hand, for ``large'' initial data, Theorem \ref{thm.GHP.PME.II} shows that the desired matching estimates from above and below hold. \end{enumerate} After discovering this strange boundary behavior, we looked for numerical confirmation. In Section \ref{sec.numer} we will explain the numerical results obtained in \cite{numerics}. Note that, if one looks for universal bounds independent of the initial condition, Figures 2-3 below seem to suggest that the bounds provided by \eqref{intro.1} are optimal for all times and all operators. \noindent $\bullet$ The current interest in more general types of nonlocal operators led us to a more general analysis, where the alternative just explained has been extended to a wide class of integro-differential operators, subject only to a list of properties that we call (A1), (A2), (L1), (L2), (K2), (K4); a number of examples are explained in Section \ref{sec.hyp.L}. These general classes appear also in the study of the elliptic problem \cite{BFV-Elliptic}.
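\noindent\textbf{Example. }For later orientation, let us record the value of $\sigma$ in the three model cases of Subsection \ref{ssec.examples}, using the values of $\gamma$ recalled there. For the RFL one has $\gamma=s$, so that
\[
\frac{2sm}{\gamma(m-1)}=\frac{2m}{m-1}>1 \qquad\Longrightarrow\qquad \sigma=1 \quad\text{for every } s\in(0,1).
\]
For the CFL one has $\gamma=s-1/2<2s$, hence $2sm/(\gamma(m-1))>m/(m-1)>1$ and again $\sigma=1$. For the SFL one has $\gamma=1$, so that
\[
\sigma=\min\left\{1,\frac{2sm}{m-1}\right\}<1 \qquad\Longleftrightarrow\qquad 0<s<s_*:=\frac{m-1}{2m}.
\]
Hence the regime $\sigma<1$, where the anomalous behaviour described in (iii) may occur, appears only for the SFL with small $s$.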
\subsection{Asymptotic behaviour and regularity} Our quantitative lower and upper estimates admit a formulation as local or global Harnack inequalities. They are used at the end of the paper to settle two important issues. \noindent\textbf{Sharp asymptotic behavior. }Exploiting the techniques in \cite{BSV2013}, we can prove a sharp asymptotic behavior for our nonnegative and nontrivial solutions when the upper and lower bound have matching powers. Such sharp results hold true for a quite general class of local and nonlocal operators. A detailed account is given in Section \ref{sec.asymptotic}. \noindent\textbf{Regularity. }By a variant of the techniques used in \cite{BFR}, we can show interior H\"older regularity. In addition, if the kernel of the operator satisfies some suitable continuity assumptions, we show that solutions are classical in the interior and are H\"older continuous up to the boundary if the upper and lower bound have matching powers. We refer to Section \ref{sect.regularity} for details. \section{General class of operators and their kernels}\label{sec.hyp.L} The interest of the theory developed here lies both in the sharpness of the results and in the wide range of applicability. We have just mentioned the most relevant examples appearing in the literature, and more are listed at the end of this section. Actually, our theory applies to a general class of operators with definite assumptions, and this is what we want to explain now. Let us present the properties that have to be assumed on the class of admissible operators. Some of them already appeared in \cite{BV-PPR2-1}. However, to further develop our theory, more hypotheses need to be introduced. In particular, while \cite{BV-PPR2-1} only uses the properties of the Green function, here we shall make some assumptions also on the kernel of $\mathcal{L}$ (whenever it exists).
Note that assumptions on the kernel $K$ of $\mathcal{L}$ are needed for the positivity results, because we need to distinguish between the local and nonlocal cases. The study of the kernel $K$ is performed in Subsection \ref{ss2.2}. For convenience of reference, the list of used assumptions is (A1), (A2), (K2), (K4), (L1), (L2). The first three are assumed for all operators $\mathcal{L}$ that we use. \noindent $\bullet$ {\bf Basic assumptions on $\mathcal{L}$.} The linear operator $\mathcal{L}: \mathrm{dom}(\mathcal{L})\subseteq\mathrm{L}^1(\Omega)\to\mathrm{L}^1(\Omega)$ is assumed to be densely defined and sub-Markovian, more precisely, it satisfies (A1) and (A2) below: \begin{enumerate} \item[(A1)] $\mathcal{L}$ is $m$-accretive on $\mathrm{L}^1(\Omega)$; \item[(A2)] If $0\le f\le 1$ then $0\le \mathrm{e}^{-t\mathcal{L}}f\le 1$. \end{enumerate} Under these assumptions, in \cite{BV-PPR2-1}, the first and third authors proved existence, uniqueness, weighted estimates, and smoothing effects. \noindent $\bullet$ {\bf Assumptions on the kernel.} Whenever $\mathcal{L}$ is defined in terms of a kernel $K(x,y)$ via the formula \begin{equation*} \mathcal{L} f(x)=P.V.\int_{\mathbb{R}^N} \big(f(x)-f(y)\big)\,K(x,y)\,{\rm d}y\,, \end{equation*} assumption (L1) states that there exists $\underline{\kappa}_\Omega>0$ such that \[\tag{L1} \inf_{x,y\in \Omega}K(x,y)\ge \underline{\kappa}_\Omega>0\,. \] We note that this condition holds both for the RFL and the CFL, see Section \ref{ssec.examples}.
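\noindent\textbf{Example. }For the RFL, checking (L1) is elementary: its kernel is $K(x,y)=c_{N,s}|x-y|^{-(N+2s)}$ (cf. \eqref{sLapl.Rd.Kernel} below), so for all $x,y\in\Omega$ we have $|x-y|\le \mathrm{diam}(\Omega)$ and therefore
\[
K(x,y)\ \ge\ c_{N,s}\,\mathrm{diam}(\Omega)^{-(N+2s)}\ =:\ \underline{\kappa}_\Omega\ >\ 0\,.
\]
The same one-line computation applies to the CFL, whose kernel has the same expression for $x,y\in\Omega$. Note that the boundedness of $\Omega$ is essential here.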
\noindent - Whenever $\mathcal{L}$ is defined in terms of a kernel $K(x,y)$ and a zero order term via the formula \[ \mathcal{L} f(x)=P.V.\int_{\mathbb{R}^N} \big(f(x)-f(y)\big)\,K(x,y)\,{\rm d}y + B(x)f(x), \] assumption (L2) states that \[\tag{L2} K(x,y)\ge c_0\,\delta^\gamma(x)\,\delta^\gamma(y),\quad c_0>0, \qquad\mbox{and}\quad B(x)\ge 0, \] where, from now on, we adopt the notation $\delta(x):=\mathrm{dist}(x,\partial\Omega)$. This condition is satisfied by the SFL in a stronger form, see Section \ref{ss2.2} and Lemma \ref{Lem.Spec.Ker}. \noindent $\bullet$ {\bf Assumptions on $\mathcal{L}^{-1}$.} In order to prove our quantitative estimates, we need to be more specific about the operator $\mathcal{L}$. Besides satisfying (A1) and (A2), we will assume that it has a left-inverse $\mathcal{L}^{-1}: \mathrm{L}^1(\Omega)\to \mathrm{L}^1(\Omega)$ that can be represented by a kernel ${\mathbb G}$ (the letter ``G'' standing for Green function) as \[ \mathcal{L}^{-1}[f](x)=\int_\Omega {\mathbb G}(x,y)f(y)\,{\rm d}y\,, \] where ${\mathbb G}$ satisfies the following assumption, for some $s\in (0,1]$: There exist constants $\gamma\in (0,1]$ and $c_{0,\Omega},c_{1,\Omega}>0$ such that, for a.e. $x,y\in \Omega$, \[\tag{K2} c_{0,\Omega}\,\delta^\gamma(x)\,\delta^\gamma(y) \le {\mathbb G}(x,y)\le \frac{c_{1,\Omega}}{|x-y|^{N-2s}} \left(\frac{\delta^\gamma(x)}{|x-y|^\gamma}\wedge 1\right) \left(\frac{\delta^\gamma(y)}{|x-y|^\gamma}\wedge 1\right). \] (Here and below we use the labels (K2) and (K4) to be consistent with the notation in \cite{BV-PPR2-1}.) Hypothesis (K2) introduces an exponent $\gamma$ which is a characteristic of the operator and will play a big role in the results. Notice that defining an inverse operator $\mathcal{L}^{-1}$ implies that we are taking into account the Dirichlet boundary conditions. See more details in Section 2 of \cite{BV-PPR2-1}.
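\noindent\textbf{Remark. }It is useful to read (K2) separately in the interior and near the boundary. When $\delta(x)\ge|x-y|$ and $\delta(y)\ge|x-y|$, both factors in the upper bound are equal to $1$, and (K2) reduces to the free-space Green function bound
\[
{\mathbb G}(x,y)\le c_{1,\Omega}\,|x-y|^{-(N-2s)}\,.
\]
When instead $x$ or $y$ approaches the boundary at fixed distance $|x-y|$, the upper bound degenerates like $\delta^\gamma(x)\,\delta^\gamma(y)$, matching (up to constants) the vanishing rate prescribed by the lower bound.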
\noindent - The lower bound in (K2) is weaker than the known bounds on the Green function for many examples under consideration; indeed, the following stronger estimate holds in many cases: \[\tag{K4} {\mathbb G}(x,y)\asymp \frac{1}{|x-y|^{N-2s}} \left(\frac{\delta^\gamma(x)}{|x-y|^\gamma}\wedge 1\right) \left(\frac{\delta^\gamma(y)}{|x-y|^\gamma}\wedge 1\right)\,. \] \noindent\textbf{Remarks. }(i) The labels (A1), (A2), (K1), (K2), (K4) are consistent with the notation in \cite{BV-PPR2-1}. The label (K3) was used to mean hypothesis (K2) written in terms of $\Phi_1$ instead of $\delta^\gamma$.\\ (ii) In the classical local case $\mathcal{L}=-\Delta$, the Green function ${\mathbb G}$ satisfies (K4) only when $N\geq 3$, as the formulas slightly change when $N=1,2$. In the fractional case $s \in (0,1)$ the same problem arises when $N=1$ and $s \in [1/2,1)$. Hence, treating also these cases would require a slightly different analysis based on different but related assumptions on ${\mathbb G}$. Since our approach is very general, we expect it to work also in these remaining cases without any major difficulties. However, to simplify the presentation, from now on we assume that $$ \text{either $N\geq 2$ and $s\in(0,1),\qquad$ or $N=1$ and $s \in (0,1/2)$.} $$ \noindent\textbf{The role of the first eigenfunction of $\mathcal{L}$. }We have shown in \cite{BFV-Elliptic} that, under assumption (K1), the operator $\mathcal{L}^{-1}$ is compact, $\mathcal{L}$ has a discrete spectrum and a first nonnegative bounded eigenfunction $\Phi_1$; assuming also (K2), we have that \begin{equation}\label{Phi1.est} \Phi_1(x)\asymp \delta^\gamma(x)=\mathrm{dist}(x,\partial\Omega)^\gamma\qquad\mbox{for all }x\in \overline{\Omega}. \end{equation} Hence, $\Phi_1$ encodes the parameter $\gamma$ that takes care of describing the boundary behavior. We recall that we are assuming that the boundary of $\Omega$ is smooth enough, for instance $C^{1,1}$. \noindent\textbf{Remark.
}We note that our assumptions allow us to cover all the examples of operators described in Sections \ref{ssec.examples} and \ref{sec.examples}. \subsection{Main examples of operators and properties} \label{ssec.examples} When working in the whole $\mathbb R^N$, the fractional Laplacian admits different definitions that can be shown to be all equivalent. On the other hand, when we deal with bounded domains, there are at least three different operators in the literature, which we call the Restricted (RFL), the Spectral (SFL) and the Censored Fractional Laplacian (CFL). We will show below that these different operators exhibit quite different behaviour, so the distinction between them has to be taken into account. Let us present the statements and results for the three model cases, and we refer to Section \ref{sec.examples} for further examples. Here, we collect the sharp results about the boundary behavior, namely the Global Harnack inequalities from Theorems \ref{thm.GHP.PME.I}, \ref{thm.GHP.PME.II}, and \ref{thm.GHP.PME.III}. \noindent\textit{The parameters $\gamma$ and $\sigma$.} The strong difference between the various operators $\mathcal{L}$ is reflected in the different boundary behavior of their nonnegative solutions. We will often use the exponent $\gamma$, which represents the boundary behavior of the first eigenfunction $\Phi_1 \asymp \mathrm{dist}(\cdot,\partial\Omega)^\gamma$, see~\cite{BFV-Elliptic}. Both in the parabolic theory of this paper and in the elliptic theory of the companion paper \cite{BFV-Elliptic}, the parameter $\sigma=\min\left\{1, \frac{2sm}{\gamma(m-1)} \right\}$ introduced in \eqref{as.sep.var} plays a big role.
\subsubsection{The RFL} We define the fractional Laplacian operator acting on a bounded domain by using the integral representation on the whole space in terms of a hypersingular kernel, namely \begin{equation}\label{sLapl.Rd.Kernel} (-\Delta_{\mathbb{R}^N})^{s} g(x)= c_{N,s}\mbox{ P.V.}\int_{\mathbb{R}^N} \frac{g(x)-g(z)}{|x-z|^{N+2s}}\,dz, \end{equation} where $c_{N,s}>0$ is a normalization constant, and we ``restrict'' the operator to functions that are zero outside $\Omega$. We denote this operator by $\mathcal{L}=(-\Delta_{|\Omega})^s$\,, and call it the \textit{restricted fractional Laplacian}\footnote{In the literature this is often called the fractional Laplacian on domains, but this simpler name may be confusing when the spectral fractional Laplacian is also considered, cf. \cite{BV-PPR1}. As discussed in this paper, there are other natural versions.} (RFL). The initial and boundary conditions associated to the fractional diffusion equation \eqref{FPME.equation} read $u(t,x)=0$ in $(0,\infty)\times(\mathbb{R}^N\setminus \Omega)$ and $u(0,\cdot)=u_0$. As explained in \cite{BSV2013}, such boundary conditions can also be understood via the Caffarelli-Silvestre extension, see \cite{Caffarelli-Silvestre}. The sharp expression of the boundary behavior for the RFL has been investigated in \cite{RosSer}. We refer to \cite{BSV2013} for a careful construction of the RFL in the framework of fractional Sobolev spaces, and to \cite{BlGe} for a probabilistic interpretation. This operator satisfies the assumptions (A1), (A2), (L1), and also (K2) and (K4) with $\gamma=s<1$. Let us present our results in this case. Note that we have $\sigma=1$ for all $0<s < 1$, and Theorem \ref{thm.GHP.PME.I} shows the sharp boundary behavior for all times, namely for all $t>0$ and a.e.
$x\in \Omega$ we have \begin{equation}\label{thm.GHP.PME.I.Ineq.00} \underline{\kappa}\, \left(1\wedge \frac{t}{t_*}\right)^{\frac{m}{m-1}}\frac{\mathrm{dist}(x,\partial\Omega)^{s/m}}{t^{\frac{1}{m-1}}} \le \, u(t,x) \le \overline{\kappa}\, \frac{\mathrm{dist}(x,\partial\Omega)^{s/m}}{t^{\frac{1}{m-1}}}\,. \end{equation} The critical time $t_*$ is given by a weighted $\mathrm{L}^1$ norm, namely $t_*:= \kappa_* \|u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{-(m-1)}$, where $\kappa_*>0$ is a universal constant. Moreover, solutions are classical in the interior and we prove sharp H\"older continuity up to the boundary. These regularity results were first obtained in \cite{BFR}; we give here different proofs valid in the more general setting of this paper. See Section \ref{sect.regularity} for further details. \subsubsection{The SFL} Starting from the classical Dirichlet Laplacian $\Delta_{\Omega}$ on the domain $\Omega$\,, the so-called {\em spectral definition} of the fractional power of $\Delta_{\Omega}$ may be given via a formula in terms of the semigroup associated to the Laplacian, namely \begin{equation}\label{sLapl.Omega.Spectral} \displaystyle(-\Delta_{\Omega})^{s} g(x)= \frac1{\Gamma(-s)}\int_0^\infty \left(e^{t\Delta_{\Omega}}g(x)-g(x)\right)\frac{dt}{t^{1+s}}=\sum_{j=1}^{\infty}\lambda_j^s\, \hat{g}_j\, \varphi_j(x)\,, \end{equation} where $(\lambda_j,\varphi_j)$, $j=1,2,\ldots$, is the normalized spectral sequence of the standard Dirichlet Laplacian on $\Omega$\,, $ \hat{g}_j=\int_\Omega g(x)\varphi_j(x)\,{\rm d}x$, and $\|\varphi_j\|_{\mathrm{L}^2(\Omega)}=1$\,. We denote this operator by $\mathcal{L}=(-\Delta_{\Omega})^s$\,, and call it the \textit{spectral fractional Laplacian} (SFL) as in \cite{Cabre-Tan}. The initial and boundary conditions associated to the fractional diffusion equation \eqref{FPME.equation} read $u(t,x)=0$ on $(0,\infty)\times\partial\Omega$ and $u(0,\cdot)=u_0$.
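\noindent\textbf{Remark. }The equality of the two expressions in \eqref{sLapl.Omega.Spectral} can be checked directly on the eigenfunctions. Taking $g=\varphi_j$, so that $e^{t\Delta_{\Omega}}\varphi_j=e^{-\lambda_j t}\varphi_j$, and using the classical identity
\[
\int_0^\infty \left(e^{-\lambda t}-1\right)\frac{dt}{t^{1+s}}=\Gamma(-s)\,\lambda^s\qquad \text{for }\lambda>0,\ s\in(0,1),
\]
the semigroup formula gives $(-\Delta_{\Omega})^s\varphi_j=\lambda_j^s\varphi_j$, in agreement with the spectral series; the general case follows by expanding $g=\sum_j \hat g_j\varphi_j$.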
Such boundary conditions can also be understood via the Caffarelli-Silvestre extension, see \cite{BSV2013}. Following ideas of \cite{SV2003}, we use the fact that this operator admits a kernel representation, \begin{equation}\label{SFL.Kernel} (-\Delta_{\Omega})^{s} g(x)= c_{N,s}\mbox{ P.V.}\int_{\Omega} \left[g(x)-g(z)\right]K(x,z)\,dz + B(x)g(x)\,, \end{equation} where $K$ is a singular and compactly supported kernel, which degenerates at the boundary, and $B\asymp \mathrm{dist}(\cdot,\partial\Omega)^{-2s}$ (see \cite{SV2003} or Lemma \ref{Lem.Spec.Ker} for further details). This operator satisfies the assumptions (A1), (A2), (L2), and also (K2) and (K4) with $\gamma=1$. Therefore, $\sigma$ can be less than $1$, depending on the values of $s$ and $m$. As we shall see, in our parabolic setting, the degeneracy of the kernel is responsible for a peculiar change of the boundary behavior of the solutions (with respect to the previous case) for small and large times. Here, the lower bounds change both for short and large times, and they strongly depend on $\sigma$ and on $u_0$: we called this phenomenon \textit{anomalous boundary behaviour} in Subsection \ref{ssec.results.boundary}. More precisely, Theorem \ref{thm.GHP.PME.III} shows that for all $t>0$ and all $x\in \Omega$ we have \begin{equation}\label{thm.GHP.PME.III.Ineq.00} \underline{\kappa}\,\left(1\wedge \frac{t}{t_*}\right)^{\frac{m}{m-1}}\frac{\mathrm{dist}(x,\partial\Omega) }{t^{\frac{1}{m-1}}} \le \, u(t,x) \le \overline{\kappa}\, \frac{\mathrm{dist}(x,\partial\Omega)^{\sigma/m}}{t^{\frac1{m-1}}} \end{equation} (when $2sm=\gamma(m-1)$, a logarithmic correction $\left(1+|\log\Phi_1(x) |\right)^{1/(m-1)}$ appears in the right hand side). Such lower behavior is somehow minimal, in the sense that it holds in all cases. The basic asymptotic result (cf.
\eqref{asymp.intro} or Theorem \ref{Thm.Asympt.0.1}) suggests that the lower bound in \eqref{thm.GHP.PME.III.Ineq.00} could be improved by replacing $\mathrm{dist}(x,\partial\Omega)$ with $\mathrm{dist}(x,\partial\Omega)^{\sigma/m}$, at least for large times. This is shown to be true for $\sigma=1$ (cf. Theorem \ref{thm.Lower.PME.large.t}), but it is false for $\sigma<1$ (cf. Theorem \ref{prop.counterex}), since there are ``small'' solutions with non-matching boundary behaviour for all times, cf. \eqref{limit.intro}. It is interesting that, in this case, one can appreciate the interplay between the ``elliptic scaling power'' $2s/(m-1)$ related to the invariance of the equation $\mathcal{L} S^m=S$ under the scaling $S(x)\mapsto \lambda^{-2s/(m-1)}S(\lambda x)$ (indeed, at least formally, an operator of order $2s$ satisfies $\mathcal{L}[S^m(\lambda\,\cdot)](x)=\lambda^{2s}(\mathcal{L}S^m)(\lambda x)$, and $2s-\frac{2sm}{m-1}=-\frac{2s}{m-1}$), the ``eigenfunction power'' $\gamma=1$, and the ``nonlinearity power'' $m$, made clear through the parameter $\sigma/m$. Also in this case, thanks to the strict positivity in the interior, we can show interior space-time regularity of solutions, as well as sharp boundary H\"older regularity for large times whenever upper and lower bounds match. \subsubsection{The CFL} In the simplest case, the infinitesimal generator of the censored stochastic process has the form \begin{equation} \mathcal{L} g(x)=\mathrm{P.V.}\int_{\Omega}\frac{g(x)-g(y)}{|x-y|^{N+2s}}\,{\rm d}y\,,\qquad\mbox{with }\frac{1}{2}<s<1\,. \end{equation} This operator has been introduced in \cite{bogdan-censor} (see also \cite{Song-coeff} and \cite{BV-PPR2-1} for further details and references).
In this case $\gamma=s-1/2<2s$, hence $\sigma=1$ for all $1/2<s < 1$, and Theorem \ref{thm.GHP.PME.I} shows that for all $t>0$ and $x\in \Omega$ we have \[ \underline{\kappa}\, \left(1\wedge \frac{t}{t_*}\right)^{\frac{m}{m-1}}\frac{\mathrm{dist}(x,\partial\Omega)^{(s-1/2)/m}}{t^{\frac{1}{m-1}}} \le \, u(t,x) \le \overline{\kappa}\, \frac{\mathrm{dist}(x,\partial\Omega)^{(s-1/2)/m}}{t^{\frac{1}{m-1}}}\,. \] Again, we have interior space-time regularity of solutions, as well as sharp boundary H\"older regularity for all times. \subsubsection{Other examples} There are a number of examples to which our theory applies, besides the RFL, CFL and SFL, since they satisfy the assumptions listed in the previous section. Some are listed in the last Section \ref{sec.comm}; see more details in \cite{BV-PPR2-1}. \section{Reminders about weak dual solutions}\label{sec.results} We denote by $\mathrm{L}^p_{\Phi_1}(\Omega)$ the weighted $\mathrm{L}^p$ space $\mathrm{L}^p(\Omega\,,\, \Phi_1\,{\rm d}x)$, endowed with the norm \[ \|f\|_{\mathrm{L}^p_{\Phi_1}(\Omega)}=\left(\int_{\Omega} |f(x)|^p\Phi_1(x)\,{\rm d}x\right)^{\frac{1}{p}}\,. \] \noindent{\bf Weak dual solutions: existence and uniqueness.} We recall the definition of weak dual solutions used in \cite{BV-PPR2-1}. This is expressed in terms of the inverse operator $\mathcal{L}^{-1}$, and encodes the Dirichlet boundary condition. This is needed to build a theory of bounded nonnegative unique solutions to Equation \eqref{FPME.equation} under the assumptions of the previous section. Note that in \cite{BV-PPR2-1} we have used the setup with the weight $\delta^\gamma=\mathrm{dist}(\cdot,\partial\Omega)^\gamma$, but the same arguments generalize immediately to the weight $\Phi_1$; indeed, under assumption (K2), these two setups are equivalent.
\begin{defn}\label{Def.Very.Weak.Sol.Dual} A function $u$ is a {\sl weak dual} solution to the Dirichlet Problem for Equation \eqref{FPME.equation} in $(0,\infty)\times \Omega$ if: \begin{itemize}[leftmargin=*] \item $u\in C((0,\infty): \mathrm{L}^1_{\Phi_1}(\Omega))$\,, $u^m \in \mathrm{L}^1\left((0,\infty):\mathrm{L}^1_{\Phi_1}(\Omega)\right)$; \item The identity \begin{equation} \displaystyle \int_0^\infty\int_{\Omega}\mathcal{L}^{-1} u \,\dfrac{\partial \psi}{\partial t}\,\,{\rm d}x\,{\rm d}t -\int_0^\infty\int_{\Omega} u^m\,\psi\,\,{\rm d}x \,{\rm d}t=0 \end{equation} holds for every test function $\psi$ such that $\psi/\Phi_1\in C^1_c((0,\infty): \mathrm{L}^\infty(\Omega))$\,. \item A {weak dual} solution to the Cauchy-Dirichlet problem {(CDP)} is a weak dual solution to the homogeneous Dirichlet Problem for equation \eqref{FPME.equation} such that $u\in C([0,\infty): \mathrm{L}^1_{\Phi_1}(\Omega))$ and $u(0,x)=u_0\in \mathrm{L}^1_{\Phi_1}(\Omega)$. \end{itemize}\end{defn} This kind of solution was first introduced in \cite{BV-PPR1}, cf. also \cite{BV-PPR2-1}. Roughly speaking, we are considering the weak solution to the ``dual equation'' $\partial_t U=- u^m$\,, where $U=\mathcal{L}^{-1} u$\,, posed on the bounded domain $\Omega$ with homogeneous Dirichlet conditions. Such a weak solution is obtained by approximation from below as the limit of the unique mild solution provided by the semigroup theory (cf. \cite{BV-PPR2-1}), and it was used in \cite{Vaz2012} with space domain $\mathbb{R}^N$ in the study of Barenblatt solutions. We call these solutions \textit{minimal weak dual solutions}, and it has been proven in Theorems 4.4 and 4.5 of \cite{BV-PPR2-1} that such solutions exist and are unique for any nonnegative data $u_0\in\mathrm{L}^1_{\Phi_1}(\Omega)$.
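\noindent\textbf{Remark. }The identity in Definition \ref{Def.Very.Weak.Sol.Dual} can be recovered by a simple formal computation, which also explains the name ``dual'': if $\partial_t u=-\mathcal{L}u^m$, then applying the inverse operator gives the dual equation $\partial_t\big(\mathcal{L}^{-1}u\big)=-u^m$; multiplying by a test function $\psi$ and integrating by parts in time (no boundary terms appear, since $\psi$ is compactly supported in $(0,\infty)$) yields
\[
\int_0^\infty\int_{\Omega}\mathcal{L}^{-1}u\,\frac{\partial\psi}{\partial t}\,{\rm d}x\,{\rm d}t-\int_0^\infty\int_{\Omega}u^m\,\psi\,{\rm d}x\,{\rm d}t=0\,.
\]
The advantage of this weak formulation is that it only involves $u$ and $u^m$ through weighted $\mathrm{L}^1$ quantities, with no derivatives of $u$ appearing.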
The class of weak dual solutions includes the classes of weak, mild and strong solutions, and is included in the class of very weak solutions. In this class of solutions the standard comparison result holds. \noindent {\bf Explicit solution.} When trying to understand the behavior of positive solutions with general nonnegative data, it is natural to look for solutions obtained by separation of variables. These are given by \begin{equation}\label{friendly.giant} \mathcal{U}_T(t,x):=(T+t)^{-\frac{1}{m-1}}S(x)\,,\qquad T\geq 0, \end{equation} where $S$ solves the elliptic problem \begin{equation}\label{Elliptic.prob} \left\{\begin{array}{lll} \mathcal{L} S^m= S & ~ {\rm in}~ \Omega,\\ S=0 & ~\mbox{on the boundary.} \end{array} \right. \end{equation} The properties of $S$ have been thoroughly studied in the companion paper \cite{BFV-Elliptic}, and we summarize them here for the reader's convenience. \begin{thm}[Properties of asymptotic profiles]\label{Thm.Elliptic.Harnack.m} Assume that $\mathcal{L}$ satisfies (A1), (A2), and (K2). Then there exists a unique positive solution $S$ to the Dirichlet Problem \eqref{Elliptic.prob} with $m>1$. Moreover, let $\sigma$ be as in \eqref{as.sep.var}, and assume that:\\ - either $\sigma=1$ and $2sm\ne \gamma(m-1)$;\\ - or $\sigma<1$ and (K4) holds.\\ Then there exist positive constants $c_0$ and $c_1$ such that the following sharp absolute bounds hold true for all $x\in \Omega$: \begin{equation}\label{Thm.Elliptic.Harnack.ineq.m.1} c_0 \Phi_1(x)^{\sigma/m}\le S(x)\le c_1 \Phi_1(x)^{\sigma/m}\,. \end{equation} When $2sm= \gamma(m-1)$ then, assuming (K4), for all $x\in \Omega$ we have \begin{equation}\label{Thm.Elliptic.Harnack.ineq.m.1.log} c_0 \Phi_1(x)^{1/m}\left(1+|\log\Phi_1(x) |\right)^{1/(m-1)}\le S(x)\le c_1 \Phi_1(x)^{1/m}\left(1+|\log\Phi_1(x) |\right)^{1/(m-1)}\,. \end{equation} \end{thm} \noindent\textbf{Remark.
}As observed in the proof of Theorem \ref{Thm.Asympt}, by applying Theorem \ref{thm.GHP.PME.I} to the separate-variables solution $t^{-\frac{1}{m-1}}S(x)$ we deduce that \eqref{Thm.Elliptic.Harnack.ineq.m.1} is still true when $\sigma<1$ if, instead of assuming (K4), we suppose that $K(x,y)\le c_1|x-y|^{-(N+2s)}$ for a.e. $x,y\in \mathbb{R}^N$ and that $\Phi_1\in C^\gamma(\Omega)$. When $T=0$, the solution $\mathcal U_0$ in \eqref{friendly.giant} is commonly named ``Friendly Giant'', because it takes initial data $u_0\equiv +\infty$ (in the sense of pointwise limit as $t\to0$) but is bounded for all $t>0$. This term was coined in the study of the standard porous medium equation. In the following Sections \ref{sect.upperestimates} and \ref{Sec.Lower} we will state and prove our general results concerning upper and lower bounds respectively. These sections are the crux of this paper. The combination of such upper and lower bounds will then be summarized in Section \ref{sect.Harnack}. Consequences of these results in terms of asymptotic behaviour and regularity estimates will be studied in Sections \ref{sec.asymptotic} and \ref{sect.regularity} respectively. \section{Upper boundary estimates}\label{sect.upperestimates} We present a general upper bound that holds under the sole assumptions (A1), (A2), and (K2), hence valid for all our examples. \begin{thm}[Absolute boundary estimates]\label{thm.Upper.PME.II} Let (A1), (A2), and (K2) hold. Let $u\ge 0$ be a weak dual solution to the (CDP) corresponding to $u_0\in \mathrm{L}^1_{\Phi_1}(\Omega)$, and let $\sigma$ be as in \eqref{as.sep.var}. 
Then, there exists a computable constant $k_1>0$, depending only on $N, s, m$, and $\Omega$, such that for all $t>0$ and a.e. $x_0\in \Omega$ \begin{equation}\label{thm.Upper.PME.Boundary.F} u(t,x_0) \le \frac{k_1}{t^{\frac1{m-1}}}\, \left\{ \begin{array}{ll} \Phi_1(x_0)^{\sigma/m} &\text{if $\gamma\ne 2sm/(m-1)$},\\ \Phi_1(x_0)^{1/m}\big(1+|\log \Phi_1(x_0)|\big)^{1/(m-1)}&\text{if $\gamma=2sm/(m-1)$}.\\ \end{array} \right. \end{equation} \end{thm} This absolute bound proves a strong regularization which is independent of the initial datum. It improves the absolute bound in \cite{BV-PPR2-1} in the sense that it exhibits a precise boundary behavior. The estimate gives the correct behaviour for the solutions $\mathcal{U}_T$ in \eqref{friendly.giant} obtained by separation of variables, see Theorem \ref{Thm.Elliptic.Harnack.m}. It turns out that the estimate will be sharp for all nonnegative, nontrivial solutions in the case of the RFL and CFL. We will also see below that the estimate is not always the correct behaviour for the SFL when data are small, as explained in the Introduction (see Subsection \ref{ssec.upper.small.data}, and Theorem \ref{prop.counterex} in Section \ref{Sec.Lower}). \noindent\textit{Proof of Theorem \ref{thm.Upper.PME.II}. } This subsection is devoted to the proof of Theorem \ref{thm.Upper.PME.II}. The first steps are based on a few basic results of \cite{BV-PPR2-1} that will also be used in the rest of the paper. \noindent\textsc{Step 1. Pointwise and absolute upper estimates.}\label{sec.upper.partI} \noindent\textit{Pointwise estimates. } We begin by recalling the basic pointwise estimates which are crucial in the proof of all the upper and lower bounds of this paper.
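\noindent\textbf{Remark. }Let us sketch, at a purely formal level, where the estimates of the next proposition come from; the rigorous proofs are in \cite{BV-PPR1,BV-PPR2-1}. Integrating the dual equation $\partial_t(\mathcal{L}^{-1}u)=-u^m$ between $t_0$ and $t_1$ gives
\[
\int_{\Omega}\big[u(t_0,x)-u(t_1,x)\big]{\mathbb G}(x,x_0)\,{\rm d}x=\int_{t_0}^{t_1}u^m(\tau,x_0)\,{\rm d}\tau\,.
\]
On the other hand, by the Benilan--Crandall-type monotonicity valid for this class of equations (a consequence of comparison and the time-scaling $u\mapsto \lambda u(\lambda^{m-1}t,x)$), the map $\tau\mapsto \tau^{\frac{1}{m-1}}u(\tau,x)$ is nondecreasing. Bounding $u^m(\tau,x_0)$ from below by $(t_0/t_1)^{\frac{m}{m-1}}u^m(t_0,x_0)$ on $[t_0,t_1]$ gives the first inequality of \eqref{thm.NLE.PME.estim}, while bounding it from above by $(t/\tau)^{\frac{m}{m-1}}u^m(t,x_0)$ and using $\int_{t_0}^{\infty}\tau^{-\frac{m}{m-1}}\,{\rm d}\tau=(m-1)\,t_0^{-\frac{1}{m-1}}$ gives the second one.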
\begin{prop}[\cite{BV-PPR1,BV-PPR2-1}]\label{prop.point.est} It holds \begin{equation}\label{thm.NLE.PME.estim.0} \int_{\Omega}u(t,x){\mathbb G}(x , x_0)\,{\rm d}x\le \int_{\Omega}u_0(x){\mathbb G}(x , x_0)\,{\rm d}x \qquad\mbox{for all $t> 0$\,.} \end{equation} Moreover, for every $0< t_0\le t_1 \le t$ and almost every $x_0\in \Omega$\,, we have \begin{equation}\label{thm.NLE.PME.estim} \frac{t_0^{\frac{m}{m-1}}}{t_1^{\frac{m}{m-1}}}(t_1-t_0)\,u^m(t_0,x_0) \le \int_{\Omega}\big[u(t_0,x)-u({t_1},x)\big]{\mathbb G}(x , x_0)\,{\rm d}x \le (m-1)\frac{t^{\frac{m}{m-1}}}{t_0^{\frac{1}{m-1}}}\,u^m(t,x_0)\,. \end{equation} \end{prop} \textit{Absolute upper bounds. } Using the estimates above, in Theorem 5.2 of \cite{BV-PPR2-1} the authors proved that solutions corresponding to initial data $u_0\in\mathrm{L}^1_{\Phi_1}(\Omega)$ satisfy \begin{equation}\label{thm.Upper.PME.Absolute.F} \|u(t)\|_{\mathrm{L}^\infty(\Omega)}\le \frac{K_1}{t^{\frac{1}{m-1}}}\qquad\mbox{for all $t>0$\,,} \end{equation} with a constant $K_1$ independent of $u_0$. For this reason, this is called an ``absolute bound''. \noindent\textsc{Step 2. Upper bounds via Green function estimates. }The proof of Theorem \ref{thm.Upper.PME.II} requires the following general statement (see \cite[Proposition 6.5]{BFV-Elliptic}): \begin{lem}\label{Lem.Green.2aaa} Let (A1), (A2), and (K2) hold, and let $v:\Omega\to \mathbb R$ be a nonnegative bounded function. Let $\sigma$ be as in \eqref{as.sep.var}, and assume that, for a.e. $x_0\in\Omega$, \begin{equation}\label{Lem.Green.2.hyp.aaa} v(x_0)^m\le \kappa_0\int_{\Omega} v(x){\mathbb G}(x,x_0)\,{\rm d}x. \end{equation} Then, there exists a constant $\overline{\kappa}_\infty>0$, depending only on $s,\gamma, m, N,\Omega$, such that the following bound holds true for a.e.
$x_0\in \Omega$: \begin{equation}\label{Lem.Green.2.est.Upper.aaa} \int_{\Omega} v(x){\mathbb G}(x,x_0)\,{\rm d}x\le \overline{\kappa}_\infty\,\kappa_0^{\frac{1}{m-1}} \left\{ \begin{array}{ll} \Phi_1(x_0)^\sigma &\text{if $\gamma\ne 2s m/(m-1)$},\\ \Phi_1(x_0)\big(1+|\log \Phi_1(x_0)|\big)^{\frac{m}{m-1}}&\text{if $\gamma= 2s m/(m-1)$}.\\ \end{array} \right. \end{equation} \end{lem} \noindent {\sc Step 3. End of the proof of Theorem \ref{thm.Upper.PME.II}. } We already know that $u(t)\in \mathrm{L}^\infty(\Omega)$ for all $t>0$ by \eqref{thm.Upper.PME.Absolute.F}. Also, choosing $t_1=2t_0$ in \eqref{thm.NLE.PME.estim} we deduce that, for $t > 0$ and a.e. $x_0\in \Omega$, \begin{equation}\label{Upper.PME.Step.1.2.c} u^m(t,x_0) \le \frac{2^{\frac{m}{m-1}}}{t}\int_{\Omega}u(t,x){\mathbb G}(x , x_0)\,{\rm d}x\,. \end{equation} The above inequality corresponds exactly to hypothesis \eqref{Lem.Green.2.hyp.aaa} of Lemma \ref{Lem.Green.2aaa} with the value $\kappa_0=2^{\frac{m}{m-1}}t^{-1}$. As a consequence, inequality \eqref{Lem.Green.2.est.Upper.aaa} holds, and we conclude that for a.e. $x_0\in\Omega$ and all $t>0$ \begin{equation}\label{thm.Upper.PME.Boundary.2} \int_{\Omega}u(t,x){\mathbb G}(x , x_0)\,{\rm d}x \le \frac{\overline{\kappa}_\infty 2^{\frac{m}{(m-1)^2}}}{t^{\frac{1}{m-1}}}\left\{ \begin{array}{ll} \Phi_1(x_0)^\sigma &\text{if $\gamma\ne \frac{2sm}{m-1}$},\\ \Phi_1(x_0)\big(1+|\log \Phi_1(x_0)|\big)^{\frac{m}{m-1}}&\text{if $\gamma=\frac{2sm}{m-1}$}.\\ \end{array} \right. \end{equation} Hence, combining this bound with \eqref{Upper.PME.Step.1.2.c}, we get $$ u^{m}(t,x_0) \le \frac{k_1^{m}}{t^{\frac{m}{m-1}}}\left\{ \begin{array}{ll} \Phi_1(x_0)^\sigma &\text{if $\gamma\ne \frac{2sm}{m-1}$},\\ \Phi_1(x_0)\big(1+|\log \Phi_1(x_0)|\big)^{\frac{m}{m-1}}&\text{if $\gamma=\frac{2sm}{m-1}$}.\\ \end{array} \right.
$$ This proves the upper bounds \eqref{thm.Upper.PME.Boundary.F} and concludes the proof.\qed \subsection{Upper bounds for small data and small times}\label{ssec.upper.small.data} As mentioned in the Introduction, the above upper bounds may not be realistic when $\sigma<1$. We have the following estimate for small times if the initial data are sufficiently small. \begin{thm}\label{thm.Upper.PME.III} Let $\mathcal{L}$ satisfy (A1), (A2), and (L2). Suppose also that $\mathcal{L}$ has a first eigenfunction $\Phi_1\asymp \mathrm{dist}(x,\partial\Omega)^\gamma$\,, and assume that $\sigma<1$. Finally, we assume that for all $x,y\in \Omega$ \begin{equation}\label{Operator.Hyp.upper.III}\begin{split} K(x,y)\le \frac{c_1}{|x-y|^{N+2s}} \left(\frac{\Phi_1(x)}{|x-y|^\gamma }\wedge 1\right) \left(\frac{\Phi_1(y)}{|x-y|^\gamma }\wedge 1\right) &\quad\mbox{ and }\quad B(x)\le c_1\Phi_1(x)^{-\frac{2s}{\gamma}}\,. \end{split}\end{equation} Let $u\ge 0$ be a weak dual solution to (CDP) corresponding to $u_0\in\mathrm{L}^1_{\Phi_1}(\Omega)$. Then, if $u_0\leq A\,\Phi_1^{1-2s/\gamma}$ for some $A>0$, we have $$ u(t)\leq \frac{\Phi_1^{1-\frac{2s}{\gamma}}}{[A^{1-m} -\tilde C t]^{\frac{1}{m-1}}} \qquad \text{on }[0,T_A],\qquad\text{where } T_A:=\frac{1}{\tilde CA^{m-1}}, $$ where the constant $\tilde C>0$ depends only on $N,s,m, \lambda_1, c_1$, and $\Omega$. \end{thm} \noindent\textbf{Remark. }This result applies to the SFL. Notice that when $\sigma<1$ we always have $1-\frac{2s}{\gamma}>\sigma/m$\,, hence in this situation small data exhibit a smaller boundary behaviour than the one predicted in Theorem \ref{thm.Upper.PME.II}. This is not true for ``big'' data, for instance for solutions obtained by separation of variables, as already observed. \noindent\textit{Proof of Theorem \ref{thm.Upper.PME.III}.
} In view of our assumption on the initial datum, namely $u_0\le A\,\Phi_1^{1-2s/\gamma}$, by comparison it is enough to prove that the function \[ \overline{u}(t,x)=F(t)\Phi_1(x)^{1-\frac{2s}{\gamma}},\qquad F(t)=\frac{1}{[A^{1-m} -\tilde Ct]^{\frac{1}{m-1}}} \] is a supersolution (i.e., $\partial_t\overline{u}\ge -\mathcal{L}\overline{u}^m$) in $(0,T_A)\times \Omega$ provided we choose $\tilde C$ sufficiently large. To this aim, we use the following elementary inequality, whose proof is left to the interested reader: for any $\eta>1$ and any $M>0$ there exists $\widetilde{b}=\widetilde{b}(M)>0$ such that, letting $\widetilde{\eta}:=\eta\wedge 2$, \begin{equation}\label{supersol.1.step3} a^\eta-b^\eta\le \eta\,b^{\eta-1}(a-b)+ \widetilde{b} |a-b|^{\widetilde{\eta}}\,,\qquad\mbox{ for all $0\le a,b\le M$.} \end{equation} We apply inequality \eqref{supersol.1.step3} to $a=\Phi_1(y)$ and $b=\Phi_1(x)$, $\eta=m(1-\frac{2s}{\gamma})$, noticing that $\eta>1$ if and only if $\sigma<1$\,, and we obtain (recall that $\Phi_1$ is bounded) \begin{equation*} \begin{split} \overline{u}^m(t,y)-\overline{u}^m(t,x)&= F(t)^m \left(\Phi_1(y)^{m(1-\frac{2s}{\gamma})}-\Phi_1(x)^{m(1-\frac{2s}{\gamma})} \right) = F(t)^m \left(\Phi_1(y)^\eta-\Phi_1(x)^\eta \right)\\ &\le \eta\,F(t)^m\Phi_1(x)^{\eta-1}\left[\Phi_1(y)-\Phi_1(x)\right] +\widetilde{b}\, F(t)^m\left|\Phi_1(y)-\Phi_1(x)\right|^{\widetilde{\eta}}\\ &\le \eta\,F(t)^m\Phi_1(x)^{\eta-1} \left[\Phi_1(y)-\Phi_1(x)\right] + \widetilde{b}\,F(t)^m c_\gamma^{\widetilde{\eta}} |x-y|^{\widetilde{\eta}\gamma}, \end{split} \end{equation*} where in the last step we have used that $\left|\Phi_1(y)-\Phi_1(x)\right|\le c_\gamma |x-y|^\gamma$.
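For completeness, here is one way to verify the elementary inequality \eqref{supersol.1.step3}; this is only a sketch, and the constant $\widetilde b(M)$ below is not optimized.

```latex
By the mean value theorem, $a^\eta-b^\eta=\eta\,\xi^{\eta-1}(a-b)$ for some $\xi$ between $a$ and $b$, so
\[
a^\eta-b^\eta-\eta\,b^{\eta-1}(a-b)=\eta\big(\xi^{\eta-1}-b^{\eta-1}\big)(a-b)\,.
\]
If $\eta\ge 2$, then $\big|\xi^{\eta-1}-b^{\eta-1}\big|\le (\eta-1)M^{\eta-2}|\xi-b|
\le (\eta-1)M^{\eta-2}|a-b|$, so the remainder is bounded by $\eta(\eta-1)M^{\eta-2}|a-b|^{2}$,
i.e. $\widetilde\eta=2$. If $1<\eta<2$, then $\eta-1\in(0,1)$, so
$\big|\xi^{\eta-1}-b^{\eta-1}\big|\le |\xi-b|^{\eta-1}\le |a-b|^{\eta-1}$ and the remainder is
bounded by $\eta\,|a-b|^{\eta}$, i.e. $\widetilde\eta=\eta$. In both cases
$\widetilde\eta=\eta\wedge 2$, and one can take $\widetilde b(M)=\eta\,\big(1\vee (\eta-1)M^{\eta-2}\big)$.
```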
Since $B\le c_1\Phi_1^{-2s/\gamma}$, \[\begin{split} \int_{\mathbb{R}^N}\left[\Phi_1(y)-\Phi_1(x)\right]K(x,y)\,{\rm d}y &=-\mathcal{L} \Phi_1(x) + B(x)\Phi_1(x) \le -\lambda_1\Phi_1(x) + c_1\Phi_1(x)^{1-\frac{2s}{\gamma}}\,. \\ \end{split}\] Thus, recalling that $\eta,\widetilde{\eta}>1$ and that $\Phi_1$ is bounded, it follows that \begin{equation}\label{supersol.2}\begin{split} -\mathcal{L}[\overline{u}^m](x) &\le\int_{\mathbb{R}^N}\left[ \overline{u}^m(t,y)-\overline{u}^m(t,x)\right]K(x,y)\,{\rm d}y + B(x)\overline{u}^m(t,x)\\ &\le \eta\,F(t)^m\Phi_1(x)^{\eta-1} \left[-\lambda_1\Phi_1(x) + c_1\Phi_1(x)^{1-\frac{2s}{\gamma}}\right] + B(x)F(t)^m \Phi_1^{\eta}(x)\\ &+ \widetilde{b}\,c_\gamma^{\widetilde{\eta}}\,F(t)^m \int_{\mathbb{R}^N}|x-y|^{\widetilde{\eta}\gamma}K(x,y)\,{\rm d}y\\ &\le \widetilde{c}F(t)^m \left(\Phi_1(x)^{\eta-\frac{2s}{\gamma}} + \int_{\mathbb{R}^N}|x-y|^{\widetilde{\eta}\gamma}K(x,y)\,{\rm d}y\right)\,. \end{split} \end{equation} Next, we claim that, as a consequence of \eqref{Operator.Hyp.upper.III}, \begin{equation}\label{supersol.3} \int_{\mathbb{R}^N}|x-y|^{\widetilde{\eta}\gamma}K(x,y)\,{\rm d}y \le c_4\Phi_1(x)^{1-\frac{2s}{\gamma}}\,. \end{equation} Postponing for the moment the proof of the above inequality, we first show how to conclude: combining \eqref{supersol.2} and \eqref{supersol.3} we have \[ -\mathcal{L}\overline{u}^m \le c_5 F(t)^m \Phi_1(x)^{1-\frac{2s}{\gamma}}= F'(t)\Phi_1(x)^{1-\frac{2s}{\gamma}}= \partial_t \overline{u} \] where we used that $F'(t)=c_5 F(t)^{m}$ provided $\tilde C=c_5(m-1)$. This proves that $\overline{u}$ is a supersolution in $(0,T_A)\times \Omega$.
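For the reader's convenience, here is the elementary ODE computation behind the identity $F'(t)=c_5\,F(t)^m$ used in the last display; note that the exponent $\frac{1}{m-1}$ in the definition of $F$ is forced by the normalization $F(0)=A$, which matches the assumption $u_0\le A\,\Phi_1^{1-2s/\gamma}$.

```latex
\[
F(t)=\big[A^{1-m}-\tilde C\,t\big]^{-\frac{1}{m-1}}
\quad\Longrightarrow\quad
F'(t)=\frac{\tilde C}{m-1}\,\big[A^{1-m}-\tilde C\,t\big]^{-\frac{m}{m-1}}
=\frac{\tilde C}{m-1}\,F(t)^m\,,
\]
so $F'=c_5F^m$ precisely when $\tilde C=c_5(m-1)$. Moreover $F(0)=A$, and $F(t)\to\infty$ as
$t\uparrow T_A=\tfrac{1}{\tilde C A^{m-1}}$, which is why the upper bound is only asserted on $[0,T_A]$.
```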
Hence the proof is concluded once we prove inequality \eqref{supersol.3}; for this, using hypothesis \eqref{Operator.Hyp.upper.III} and choosing $r=\Phi_1(x)^{1/\gamma}$ we have \[\begin{split} \int_{\mathbb{R}^N}&|x-y|^{\widetilde{\eta}\gamma}K(x,y)\,{\rm d}y \le c_1\int_{B_r(x)}\frac{1}{|x-y|^{N+2s-\widetilde{\eta}\gamma}}\,{\rm d}y+c_1\Phi_1(x)\int_{\Omega\setminus B_r(x)}\frac{1}{|x-y|^{N+2s+\gamma-\widetilde{\eta}\gamma}} \,{\rm d}y\\ &\le c_2 r^{\widetilde{\eta}\gamma-2s}+c_1\frac{\Phi_1(x)}{r^{2s}} \int_{\Omega\setminus B_r(x)}\frac{1}{|x-y|^{N+\gamma-\widetilde{\eta}\gamma}}\,{\rm d}y = c_2 r^{\widetilde{\eta}\gamma-2s}+ c_3\frac{\Phi_1(x)}{r^{2s}} \le c_4\Phi_1(x)^{1-\frac{2s}{\gamma}}\,, \end{split}\] where we used that $\widetilde{\eta}\gamma-2s>0$ and $\tilde\eta>1$.\qed \noindent{\bf Remark.} For operators for which the previous assumptions hold with $B\equiv 0$, we can actually prove a better upper bound for ``smaller'' data, namely: \begin{cor}\label{thm.Upper.PME.IV}Under the assumptions of Theorem \ref{thm.Upper.PME.III}, assume moreover that $B\equiv 0$ and $u_0\leq A\,\Phi_1$ for some $A>0$. Then, we have $$ u(t)\leq \frac{\Phi_1}{[A^{1-m} -\tilde C t]^{\frac{1}{m-1}}} \qquad \text{on }[0,T_A],\qquad\text{where } T_A:=\frac{1}{\tilde CA^{m-1}}, $$ where the constant $\tilde C>0$ depends only on $N,s,m, \lambda_1, c_1$, and $\Omega$. \end{cor} \noindent{\bf Proof.~}We have to show that $\overline{u}(t,x)=F(t)\Phi_1(x)$ is a supersolution: we essentially repeat the proof of Theorem \ref{thm.Upper.PME.III} with $\eta=m$ (formally replacing $1-2s/\gamma$ by $1$), taking into account that $B\equiv 0$ and $u_0\leq A\,\Phi_1$.\qed \section{Lower bounds}\label{Sec.Lower} This section is devoted to the proof of all the lower bounds summarized later in the main Theorems \ref{thm.GHP.PME.I}, \ref{thm.GHP.PME.II}, and \ref{thm.GHP.PME.III}.
The general situation is quite involved to describe, so for the sake of clarity we will separate several cases and indicate for which examples each of them holds. \noindent $\bullet$ \textbf{Infinite speed of propagation: universal lower bounds. }First, we are going to establish quantitatively that all nonnegative weak dual solutions of our problems are in fact positive in $\Omega$ for all $t>0$. This result is valid for all nonlocal operators considered in this paper. \begin{thm}\label{thm.Lower.PME} Let $\mathcal{L}$ satisfy (A1), (A2), and (L2). Let $u\ge 0$ be a weak dual solution to the (CDP) corresponding to $u_0\in \mathrm{L}^1_{\Phi_1}(\Omega)$. Then there exists a constant $\underline{\kappa}_0>0$ such that the following inequality holds: \begin{equation}\label{thm.Lower.PME.Boundary.1} u(t,x)\ge \underline{\kappa}_0\,\left(1\wedge \frac{t}{t_*}\right)^{\frac{m}{m-1}}\frac{\Phi_1(x)}{t^{\frac{1}{m-1}}}\qquad\mbox{for all $t>0$ and a.e. $x\in \Omega$}\,. \end{equation} Here $t_*=\kappa_*\|u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{-(m-1)}$, and the constants $\underline{\kappa}_0$ and $\kappa_*$ depend only on $N,s,\gamma, m, c_0,c_1$, and $\Omega$\,. \end{thm} Notice that, for $t \geq t_*$, the dependence on the initial data disappears from the lower bound, as the inequality reads $$ u(t)\geq \underline{\kappa}_0 \frac{\Phi_1}{t^{\frac{1}{m-1}}}\qquad \forall\,t \geq t_*, $$ where $\underline{\kappa}_0$ is an absolute constant. Assumption (L2) on the kernel $K$ of $\mathcal{L}$ holds for all examples mentioned in Section \ref{sec.examples}. Clearly, the power in this lower bound does not match the one of the general upper bounds of Theorem \ref{thm.Upper.PME.II}, hence we cannot expect these bounds to be sharp. However, when $\sigma<1$, for small times and small data and when $B\equiv 0$, the lower bounds \eqref{thm.Lower.PME.Boundary.1} match the upper bounds of Corollary \ref{thm.Upper.PME.IV}, hence they are sharp.
Theorem \ref{thm.Lower.PME} shows that, even in the ``worst case scenario'', there is a quantitative lower bound for all positive times, and it shows infinite speed of propagation. \noindent$\bullet$ \textbf{Matching lower bounds I. }Actually, in many cases the kernel of the nonlocal operator satisfies a stronger property, namely $\inf_{x,y\in \Omega}K(x,y)\ge \underline{\kappa}_\Omega>0$ and $B\equiv 0$, in which case we can actually obtain sharp lower bounds for all times. Here we do not consider the potential logarithmic correction that may appear in the ``critical case'' $2sm= \gamma(m-1)$: indeed, as far as examples are concerned, the next theorem applies to the RFL and the CFL, for which $2sm> \gamma(m-1)$. \begin{thm}\label{Thm.lower.B} Let $\mathcal{L}$ satisfy (A1), (A2), and (L1). Furthermore, suppose that $\mathcal{L}$ has a first eigenfunction $\Phi_1\asymp \mathrm{dist}(x,\partial\Omega)^\gamma$\,. Let $\sigma$ be as in \eqref{as.sep.var} and assume that:\\ - either $\sigma=1$;\\ - or $\sigma<1$, $K(x,y)\le c_1|x-y|^{-(N+2s)}$ for a.e. $x,y\in \mathbb{R}^N$, and $\Phi_1\in C^\gamma(\overline{\Omega})$.\\ Let $u\ge 0$ be a weak dual solution to the (CDP) corresponding to $u_0\in \mathrm{L}^1_{\Phi_1}(\Omega)$. Then there exists a constant $\underline{\kappa}_1>0$ such that the following inequality holds: \begin{equation}\label{Thm.B.lower.bdd} u(t,x)\ge \underline{\kappa}_1 \left(1\wedge \frac{t}{t_*}\right)^{\frac{m}{m-1}}\frac{\Phi_1(x)^{\sigma/m}}{t^{\frac{1}{m-1}}} \qquad\mbox{for all $t>0$ and a.e. $x\in \Omega$}\,, \end{equation} where $t_*=\kappa_*\|u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{-(m-1)}$. The constants $\kappa_*$ and $\underline{\kappa}_1$ depend only on $N,s,\gamma, m, \underline{\kappa}_\Omega,c_1,\Omega,$ and $\|\Phi_1\|_{C^\gamma(\Omega)}$. \end{thm} \noindent\textbf{Remarks.
}(i) As in Theorem \ref{thm.Lower.PME}, for large times the dependence on the initial data disappears from the lower bound and we have absolute lower bounds. \noindent(ii) The boundary behavior is sharp when $2sm\ne \gamma(m-1)$ in view of the upper bound from Theorem \ref{thm.Upper.PME.II}. \noindent(iii) This theorem applies to the RFL and the CFL, but not to the SFL (or, more generally, spectral powers of elliptic operators), see Sections \ref{ssec.examples} and \ref{sec.hyp.L}. In the case of the RFL, this result was obtained in Theorem 1 of \cite{BFR}. We have already seen the example of the separate-variables solutions \eqref{friendly.giant} that have a very definite behavior at the boundary $\partial \Omega$. The analysis of general solutions leads to completely different situations for $\sigma=1$ and $\sigma<1$. \noindent $\bullet$ \textbf{Matching lower bounds II. The case $\sigma=1$. }When $\sigma=1$ we can establish a quantitative lower bound near the boundary that matches the separate-variables behavior for large times (except in the case $2sm= \gamma(m-1)$, where the result is false, see Theorem \ref{prop.counterex} below). We do not need the assumption of a non-degenerate kernel, so the SFL can be considered. \begin{thm}\label{thm.Lower.PME.large.t} Let $(A1)$, $(A2)$, and $(K2)$ hold, and let $\sigma=1$. Let $u\ge 0$ be a weak dual solution to the (CDP) corresponding to $u_0\in \mathrm{L}^1_{\Phi_1}(\Omega)$. There exists a constant $\underline{\kappa}_2>0$ such that \begin{equation}\label{thm.Lower.PME.Boundary.large.t} u(t,x) \ge \underline{\kappa}_2\,\frac{\Phi_1(x)^{1/m}}{t^{\frac{1}{m-1}}}\qquad\mbox{for all $t\ge t_*$ and a.e. $x\in \Omega$}\,. \end{equation} Here, $t_*=\kappa_*\|u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{-(m-1)}$, and the constants $\kappa_* $ and $\underline{\kappa}_2$ depend only on $N,s,\gamma, m $, and $\Omega$\,.
\end{thm} \noindent{\bf Remarks.} (i) At first sight, this theorem may seem weaker than the previous positivity result. However, this result has wider applicability, since it holds under the sole assumption (K2) on ${\mathbb G}$. In particular, it is valid in the local case $s=1$, where the finite speed of propagation makes it impossible to have global lower bounds for small times. \noindent (ii) When $\mathcal{L}=-\Delta$ the result has been proven in \cite{Ar-Pe} and \cite{JLVmonats} by quite different methods. On the other hand, our method is very general and immediately applies to the case when $\mathcal{L}$ is an elliptic operator with $C^1$ coefficients, see Section \ref{sec.examples}. \noindent (iii) This result fixes a small error in Theorem 7.1 of \cite{BV-PPR1}, where the power $\sigma$ was not present. \noindent$\bullet$ \textbf{The anomalous lower bounds with small data. }As shown in Theorem \ref{thm.Lower.PME}, the lower bound $u(t)\gtrsim \Phi_1$ is always valid. We now discuss the possibility of improving this bound. Let $S$ solve the elliptic problem \eqref{Elliptic.prob}. It follows by comparison that, whenever $u_0 \geq \epsilon_0 S$ with $\epsilon_0>0$, then $u(t)\geq \frac{S}{(T_0+t)^{1/(m-1)}}$, where $T_0=\epsilon_0^{1-m}$. Since $S\asymp \Phi_1^{\sigma/m}$ under (K4) (up to a possible logarithmic correction in the critical case, see Theorem \ref{Thm.Elliptic.Harnack.m}), there are initial data for which the lower behavior is dictated by $\Phi_1(x)^{\sigma/m}t^{-1/(m-1)}$. More generally, as we shall see in Theorem \ref{Thm.Asympt.0}, given any initial datum $u_0 \in \mathrm{L}^1_{\Phi_1}(\Omega)$ the function $v(t,x):=t^{\frac{1}{m-1}}u(t,x)$ always converges to $S$ in $\mathrm{L}^\infty(\Omega)$ as $t\to \infty$, independently of the value of $\sigma$.
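The comparison claim above can be checked directly; here we assume, as is standard for the friendly giant, that the elliptic problem \eqref{Elliptic.prob} is normalized so that $\mathcal{L}S^m=\frac{1}{m-1}\,S$ (this normalization is an assumption of this sketch).

```latex
Set $U(t,x):=S(x)\,(T_0+t)^{-\frac{1}{m-1}}$ with $T_0=\epsilon_0^{1-m}$. Then
\[
\partial_t U=-\frac{1}{m-1}\,\frac{S(x)}{(T_0+t)^{\frac{m}{m-1}}}
=-\frac{\mathcal{L}S^m(x)}{(T_0+t)^{\frac{m}{m-1}}}
=-\mathcal{L}U^m(t,x)\,,
\]
so $U$ is an exact separate-variables solution. At $t=0$ it equals
$T_0^{-\frac{1}{m-1}}S=\epsilon_0 S\le u_0$, hence the comparison principle gives
$u(t)\ge U(t)=S\,(T_0+t)^{-\frac{1}{m-1}}$ for all $t>0$.
```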
Hence, one may conjecture that there should exist a waiting time $t_*>0$ after which the lower behavior is dictated by $\Phi_1(x)^{\sigma/m}t^{-1/(m-1)}$, in analogy with what happens for the classical porous medium equation. As we shall see, this is actually {\em false} when $\sigma<1$ or $2sm= \gamma(m-1)$. Since for large times $v(t,x)$ must look like $S(x)$ in uniform norm away from the boundary (by the interior regularity that we will prove later), the contrasting situation for large times could be described as a `dolphin's head', with the `snout' flatter than the `forehead'. As $t\to\infty$ the forehead progressively fills the whole domain. The next result shows that, in general, we cannot hope to prove that $u(t)$ is larger than $\Phi_1^{1/m}$. In particular, when $\sigma<1$ or $2sm= \gamma(m-1)$, this shows that the behavior $u(t)\asymp S$ cannot hold. \begin{thm}\label{prop.counterex} Let (A1), (A2), and (K2) hold, and let $u\ge 0$ be a weak dual solution to the (CDP) corresponding to a nonnegative initial datum $u_0\in \mathrm{L}^1_{\Phi_1}(\Omega)$. Assume that $u_0(x)\le C_0\Phi_1(x)$ a.e. in $\Omega$ for some $C_0>0$. Then there exists a constant $\hat\kappa$, depending only on $N,s,\gamma, m $, and $\Omega$, such that $$ u(t,x)^m \leq C_0\hat\kappa \frac{\Phi_1(x)}{t}\qquad\mbox{for all $t>0$ and a.e. $x\in \Omega$}\,. $$ In particular, if $\sigma<1$ (resp. $2sm= \gamma(m-1)$), then $$ \lim_{x\to \partial\Omega}\frac{u(t,x)}{\Phi_1(x)^{\sigma/m}}= 0 \quad \Bigl(\text{resp. }\lim_{x\to \partial\Omega}\frac{u(t,x)}{\Phi_1(x)^{1/m}\left(1+|\log\Phi_1(x) |\right)^{1/(m-1)}}= 0\Bigr)\qquad \text{for any $t>0$.} $$ \end{thm} The theorem above could make one wonder whether the sharp general lower bound could be given by $\Phi_1^{1/m}$, as in the case $\sigma=1$.
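For the reader's convenience, here is how the vanishing limits in Theorem \ref{prop.counterex} follow from the pointwise bound stated there (we write $\hat\kappa$ for the constant appearing in the theorem). For fixed $t>0$ and $\sigma<1$:

```latex
\[
\frac{u(t,x)}{\Phi_1(x)^{\sigma/m}}
\le \Big(\frac{C_0\hat\kappa}{t}\Big)^{\frac{1}{m}}\,
\frac{\Phi_1(x)^{\frac{1}{m}}}{\Phi_1(x)^{\frac{\sigma}{m}}}
=\Big(\frac{C_0\hat\kappa}{t}\Big)^{\frac{1}{m}}\,\Phi_1(x)^{\frac{1-\sigma}{m}}
\longrightarrow 0 \qquad\text{as } x\to\partial\Omega\,,
\]
since $\Phi_1$ vanishes at the boundary. In the critical case $2sm=\gamma(m-1)$ the same bound gives
$u(t,x)/\Phi_1(x)^{1/m}\le (C_0\hat\kappa/t)^{1/m}$, and dividing further by
$\left(1+|\log\Phi_1(x)|\right)^{1/(m-1)}\to\infty$ yields the second limit.
```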
Recall that, under rather minimal assumptions on the kernel $K$ associated to $\mathcal{L}$, we have a universal lower bound for $u(t)$ in terms of $\Phi_1$ (see Theorem \ref{thm.Lower.PME}). Here we shall see that, under (K4), the bound $u(t)\gtrsim \Phi_1^{1/m}$ is false for $\sigma<1$. \begin{thm}\label{prop.counterex2} Let (A1), (A2), and (K4) hold, and let $u\ge 0$ be a weak dual solution to the (CDP) corresponding to a nonnegative initial datum $u_0 \leq C_0 \Phi_1$ for some $C_0>0$. Assume that there exist constants $\underline{\kappa},T,\alpha>0$ such that $$ u(T,x)\geq \underline{\kappa}\Phi_1^\alpha(x)\qquad\mbox{for a.e. $x\in \Omega$}\,. $$ Then $\alpha \geq 1-\frac{2s}{\gamma}$. In particular, $\alpha>\frac{1}{m}$ if $\sigma<1$. \end{thm} We devote the rest of this section to the proof of the above results, and to this end we collect in the first two subsections some preliminary lower bounds and results about approximate solutions. \subsection{Lower bounds for weighted norms}\label{Sect.Weighted.L1} Here we prove some useful lower bounds for weighted norms, which follow from the $\mathrm{L}^1$-continuity for ordered solutions in the version proved in Proposition 8.1 of \cite{BV-PPR2-1}. \begin{lem}[Backward in time $\mathrm{L}^1_{\Phi_1}$ lower bounds]\label{cor.abs.L1.phi} Let $u$ be a solution to (CDP) corresponding to the initial datum $u_0\in\mathrm{L}^1_{\Phi_1}(\Omega)$. For all \begin{equation}\label{L1weight.PME.estimates.2} 0\le \tau_0\le t\le \tau_0+ \frac{1}{\big(2\bar K\big)^{1/(2s\vartheta_{\gamma})}\|u(\tau_0)\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{m-1}} \end{equation} we have \begin{equation}\label{L1weight.PME.estimates.3} \frac{1}{2}\int_{\Omega}u(\tau_0,x)\Phi_1(x)\,{\rm d}x\le \int_{\Omega}u(t,x)\Phi_1(x)\,{\rm d}x\,, \end{equation} where $\vartheta_{\gamma}:=1/[2s+(N+\gamma)(m-1)]$ and $\bar K>0$ is a computable constant. \end{lem} \noindent\textsl{Proof of Lemma \ref{cor.abs.L1.phi}.
}We recall the inequality of Proposition 8.1 of \cite{BV-PPR2-1}, adapted to our case: for all $0\le \tau_0\le \tau,t$ we have \begin{equation}\label{L1weight.contr.estimates.1} \int_{\Omega} u(\tau,x) \Phi_1(x)\,{\rm d}x \le \int_{\Omega} u(t,x) \Phi_1(x)\,{\rm d}x + \bar K\|u(\tau_0)\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{2s(m-1) \vartheta_{\gamma}+1}\,\left|t-\tau\right|^{2s\vartheta_{\gamma}} \,. \end{equation} Choosing $\tau=\tau_0$ in the above inequality, we get \begin{equation}\label{cor.0.1}\begin{split} \left[1- \bar K\|u(\tau_0)\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{2s(m-1) \vartheta_{\gamma}}\,\left|t-\tau_0\right|^{2s\vartheta_{\gamma}}\right]\int_{\Omega}u(\tau_0,x)\Phi_1(x)\,{\rm d}x &\le \int_{\Omega}u(t,x)\Phi_1(x)\,{\rm d}x\,. \end{split} \end{equation} Then \eqref{L1weight.PME.estimates.3} follows from \eqref{L1weight.PME.estimates.2}, since in that range $\bar K\|u(\tau_0)\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{2s(m-1) \vartheta_{\gamma}}\left|t-\tau_0\right|^{2s\vartheta_{\gamma}}\le \tfrac{1}{2}$\,.\qed We also need a lower bound for $\mathrm{L}^p_{\Phi_1}(\Omega)$ norms. \begin{lem}\label{lem.abs.lower} Let $u$ be a solution to (CDP) corresponding to the initial datum $u_0\in\mathrm{L}^1_{\Phi_1}(\Omega)$. Then the following lower bound holds true for any $t\in [0,t_*]$ and $p\ge 1$: \begin{equation}\label{lem1.lower.bdd} c_2 \left(\int_{\Omega}u_0(x)\Phi_1(x)\,{\rm d}x \right)^p \le \int_{\Omega}u^p(t,x)\Phi_1(x)\,{\rm d}x\,. \end{equation} Here $t_*=c_*\|u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{-(m-1)}$, where $c_2, c_*>0$ are positive constants that depend only on $N,s,m,p,\Omega$. \end{lem} The proof of this lemma is an easy adaptation of the proof of Lemma 2.2 of \cite{BFR}, so we omit it. Notice that $c_*$ has an explicit form, given in {\rm \cite{BV-PPR1,BV-PPR2-1, BFR}}, while the form of $c_2$ is given in the proof of Lemma 2.2 of \cite{BFR}. \subsection{Approximate solutions} \label{sect:approx sol} To prove our lower bounds, we will need a special class of approximate solutions $u_\delta$. We now list the necessary details.
In the case when $\mathcal{L}$ is the Restricted Fractional Laplacian (RFL) (see Section \ref{sec.examples}) these solutions have been used in Appendix II of \cite{BFR}, where complete proofs can be found; the proofs there hold also for the operators considered here, and the interested reader can easily adapt them to the current case. Let us fix $\delta>0$ and consider the problem: \begin{equation}\label{probl.approx.soln} \left\{ \begin{array}{lll} \partial_t v_\delta=-\mathcal{L}\left[(v_\delta+\delta)^m -\delta^m \right]& \qquad\mbox{for any $(t,x)\in (0,\infty)\times \Omega$}\\ v_\delta(t,x)=0 &\qquad\mbox{for any $(t,x)\in (0,\infty)\times (\mathbb{R}^N\setminus\Omega)$}\\ v_\delta(0,x)=u_0(x) &\qquad\mbox{for any $x\in\Omega$}\,. \end{array} \right. \end{equation} Next, we define \[ u_\delta:=v_\delta+\delta. \] We summarize below the basic properties of ${u_\delta}$. Approximate solutions ${u_\delta}$ exist, are unique, and are bounded for all $(t,x)\in (0,\infty)\times\overline{\Omega}$ whenever $0\le u_0\in \mathrm{L}^1_{\Phi_1}(\Omega)$\,. Also, they are uniformly positive: for any $t \geq 0$, \begin{equation}\label{approx.soln.positivity} {u_\delta}(t,x)\ge \delta>0 \qquad\mbox{for a.e. $x \in \Omega$.} \end{equation} This implies that the equation for ${u_\delta}$ is never degenerate in the interior, so solutions are as smooth as the linear parabolic theory associated with the kernel $K$ allows them to be (in particular, in the case of the fractional Laplacian, they are $C^\infty$ in space and $C^1$ in time). Also, by a comparison principle, for all $\delta>\delta'>0$ and $t \geq 0$, \begin{equation}\label{approx.soln.comparison1} {u_\delta}(t,x)\ge u_{\delta'}(t,x) \qquad\mbox{for $x \in \Omega$} \end{equation} and \begin{equation}\label{approx.soln.comparison} {u_\delta}(t,x)\ge u(t,x) \qquad\mbox{for a.e.
$x \in \Omega$\,.} \end{equation} Furthermore, they converge in $\mathrm{L}^1_{\Phi_1}(\Omega)$ to $u$ as $\delta \to 0$: \begin{equation}\label{Lem.0} \|{u_\delta}(t)-u(t)\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}\le \|{u_\delta}(0)-u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}=\delta\,\|\Phi_1\|_{\mathrm{L}^1(\Omega)}\,. \end{equation} As a consequence of \eqref{approx.soln.comparison1} and \eqref{Lem.0}, we deduce that ${u_\delta}$ converges pointwise to $u$ at almost every point: more precisely, for all $t\geq 0$, \begin{equation}\label{limit.sol} {u}(t,x)=\lim_{\delta\to 0^+}{u_\delta}(t,x)\qquad\mbox{for a.e. $x \in \Omega$\,.} \end{equation} \subsection{Proof of Theorem \ref{thm.Lower.PME}}The proof consists in showing that \[ u(t,x) \geq \underline{u}(t,x):=k_0\, t \,\Phi_1(x) \] for all $t\in [0,t_*]$, where the parameter $k_0>0$ will be fixed later. Note that, once the inequality $u \geq \underline{u}$ on $[0,t_*]$ is proved, we conclude as follows: since $t\mapsto t^{\frac{1}{m-1}}\,u(t,x)$ is nondecreasing in $t>0$ for a.e. $x\in \Omega$ (cf. (2.3) in \cite{BV-PPR2-1}), we have $$u(t,x)\ge \left( \frac{t_*}{t}\right)^{\frac{1}{m-1}}u(t_*,x) \geq k_0\, t_* \left( \frac{t_*}{t}\right)^{\frac{1}{m-1}}\,\Phi_1(x)\qquad \text{ for all $t\ge t_*$\,.}$$ Then, the result will follow with $\underline{\kappa}_0=k_0t_*^{\frac{m}{m-1}}$ (note that, as we shall see below, $k_0t_*^{\frac{m}{m-1}}$ can be chosen independently of $u_0$). Hence, we are left with proving that $u \geq \underline{u}$ on $[0,t_*]$. \noindent$\bullet~$\textsc{Step 1. }\textit{Reduction to an approximate problem. }Let us fix $\delta>0$ and consider the approximate solutions ${u_\delta}$ constructed in Section \ref{sect:approx sol}. We shall prove that ${u_\delta} \geq \underline{u}$ on $[0,t_*]$, so that the result will follow by the arbitrariness of $\delta$. \noindent$\bullet~$\textsc{Step 2.
}\textit{We claim that $\underline{u}(t,x)< {u_\delta}(t,x)$ for all $0\le t\le t_*$ and $x\in \Omega$, for a suitable choice of $k_0>0$\,.} Assume that the inequality $\underline{u}<{u_\delta}$ is false in $[0,t_*]\times \overline\Omega$, and let $(t_c,x_c)$ be the first contact point between $\underline{u}$ and ${u_\delta}$. Since ${u_\delta}=\delta>0=\underline{u}$ on the lateral boundary, $(t_c,x_c)\in(0,t_*]\times\Omega$\,. Now, since $(t_c,x_c)\in (0,t_*]\times\Omega$ is the first contact point, we necessarily have that \begin{equation}\label{contact.1} {u_\delta}(t_c,x_c)= \underline{u}(t_c,x_c)\qquad\mbox{and}\qquad {u_\delta}(t,x)\ge \underline{u}(t,x)\,\quad\forall t\in [0,t_c]\,,\;\; \forall x\in\overline{\Omega}\,. \end{equation} Thus, as a consequence, \begin{equation}\label{contact.2} \partial_t {u_\delta}(t_c,x_c)\le \partial_t \underline{u}(t_c,x_c)=k_0\,\Phi_1(x_c)\,. \end{equation} Next, we observe that the following Kato-type inequality holds: for any nonnegative function $f$, \begin{equation}\label{Kato.ineq} \mathcal{L}(f^m)\le mf^{m-1}\mathcal{L} f. \end{equation} Indeed, by convexity, $f(x)^m-f(y)^m \le m [f(x)]^{m-1}(f(x)-f(y))$, therefore \begin{equation*} \begin{split} \mathcal{L}(f^m)(x)&=\int_{\mathbb{R}^N}[f(x)^m-f(y)^m]\,K(x,y)\,{\rm d}y + B(x)f(x)^m \\ &\le m [f(x)]^{m-1}\int_{\mathbb{R}^N}[f(x)-f(y)]\,K(x,y)\,{\rm d}y + B(x)f(x)^m \\ &= m [f(x)]^{m-1}\left[\int_{\mathbb{R}^N}[f(x)-f(y)]\,K(x,y)\,{\rm d}y + B(x)f(x) \right] - (m-1)B(x)f(x)^m\\ &\le m [f(x)]^{m-1}\mathcal{L} f(x)\,.
\end{split} \end{equation*} As a consequence of \eqref{Kato.ineq}, since $t_c\le t_*$ and $\Phi_1$ is bounded, \begin{multline}\label{contact.2b} \mathcal{L}(\underline{u}^m)(t,x)\le m\underline{u}^{m-1}\mathcal{L}(\underline{u})=m [k_0 t \Phi_1(x)]^{m-1}\,k_0 t \mathcal{L}(\Phi_1)(x)\\ =m \lambda_1[k_0 t \Phi_1(x)]^m \le \kappa_1 (t_*k_0)^m\Phi_1(x)\,. \end{multline} Then, using \eqref{contact.2} and \eqref{contact.2b}, we establish an upper bound for $-\mathcal{L}(u_\delta^m-\underline{u}^m)(t_c,x_c)$ as follows: \begin{equation}\label{contact.3} -\mathcal{L}[u_\delta^m- \underline{u}^m](t_c,x_c)=\partial_t {u_\delta}(t_c,x_c)+\mathcal{L}(\underline{u}^m)(t_c,x_c) \le k_0\, \left[1+\kappa_1t_*^mk_0^{m-1}\right]\Phi_1(x_c). \end{equation} Next, we want to prove lower bounds for $-\mathcal{L}(u_\delta^m-\underline{u}^m)(t_c,x_c)$, and this is the point where the nonlocality of the operator enters, since we make essential use of hypothesis (L2). We recall that by \eqref{contact.1} we have $u_\delta^m(t_c,x_c)=\underline{u}^m(t_c,x_c)$, so that assumption (L2) gives \begin{equation*}\begin{split} -\mathcal{L} &\left[u_\delta^m-\underline{u}^m\right](t_c,x_c)= -\mathcal{L} \left[u_\delta^m-\underline{u}^m\right](t_c,x_c) + B(x_c)[u_\delta^m(t_c,x_c)-\underline{u}^m(t_c,x_c)]\\ &=-\int_{\mathbb{R}^N}\left[\big(u_\delta^m(t_c,x_c)-u_\delta^m(t_c,y)\big)-\big(\underline{u}^m(t_c,x_c)-\underline{u}^m(t_c,y)\big)\right]K(x_c,y)\,{\rm d}y\\ &=\int_{\Omega}\left[u_\delta^m(t_c,y)-\underline{u}^m(t_c,y)\right]K(x_c,y)\,{\rm d}y \ge c_0\Phi_1(x_c)\int_{\Omega}\left[u_\delta^m(t_c,y)-\underline{u}^m(t_c,y)\right]\Phi_1(y)\,{\rm d}y\,,\\ \end{split} \end{equation*} from which it follows (since $\underline{u}^m= [k_0 t \Phi_1(x)]^m \le \kappa_2(t_*k_0)^m $) \begin{equation}\label{contact.44}\begin{split} -\mathcal{L} &\left[u_\delta^m-\underline{u}^m\right](t_c,x_c)\\ &\ge c_0\Phi_1(x_c)\int_{\Omega}u_\delta^m(t_c,y)\Phi_1(y)\,{\rm
d}y-c_0\Phi_1(x_c)\int_{\Omega}\underline{u}^m(t_c,y)\Phi_1(y)\,{\rm d}y\\ &\ge c_0\Phi_1(x_c)\int_{\Omega}u_\delta^m(t_c,y)\Phi_1(y)\,{\rm d}y-c_0\Phi_1(x_c)\kappa_3\,(t_*k_0)^m. \end{split} \end{equation} Combining the upper and lower bounds \eqref{contact.3} and \eqref{contact.44} we obtain \begin{equation}\label{contact.5}\begin{split} c_0\Phi_1(x_c)\int_{\Omega}u_\delta^m(t_c,y)\Phi_1(y)\,{\rm d}y &\le k_0\, \left[1+ (\kappa_1+\kappa_3)t_*^m k_0^{m-1} \right]\Phi_1(x_c)\,. \end{split} \end{equation} Hence, recalling \eqref{lem1.lower.bdd}, we get $$ c_2 \left(\int_{\Omega}u_0(x)\Phi_1(x)\,{\rm d}x \right)^m\le \int_{\Omega}u_\delta^m(t_c,y)\Phi_1(y)\,{\rm d}y \le \frac{k_0}{c_0}\, \left[1+ (\kappa_1+\kappa_3)t_*^mk_0^{m-1} \right]. $$ Since $t_*=\kappa_*\|u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{-(m-1)}$, this yields $$ c_2\kappa_*^{\frac{m}{m-1}}t_*^{-\frac{m}{m-1}} \le \frac{k_0}{c_0}\, \left[1+ (\kappa_1+\kappa_3)t_*^mk_0^{m-1} \right], $$ which gives the desired contradiction provided we choose $k_0$ so that $\underline{\kappa}_0:=k_0t_*^{\frac{m}{m-1}}$ is universally small. \qed \subsection{Proof of Theorem \ref{Thm.lower.B}. }The proof proceeds along the lines of the proof of Theorem \ref{thm.Lower.PME}, so we will just briefly mention the common parts. We want to show that \begin{equation}\label{lower.barrier} \underline{u}(t,x):=\kappa_0\,t\, \Phi_1(x)^{\sigma/m}\,, \end{equation} is a lower barrier for our problem on $[0,t_*]\times \Omega$ provided $\kappa_0$ is small enough. More precisely, as in the proof of Theorem \ref{thm.Lower.PME}, we aim to prove that $\underline{u}<{u_\delta}$ on $[0,t_*]$, as the lower bound for $t \geq t_*$ then follows by monotonicity. Assume by contradiction that the inequality $\underline{u}(t,x)< u_{\delta}(t,x)$ is false inside $[0,t_*]\times \overline\Omega$.
Since $\underline{u}<{u_\delta}$ on the parabolic boundary, letting $(t_c,x_c)$ be the first contact point, we necessarily have that $(t_c,x_c)\in (0,t_*]\times\Omega$. The desired contradiction will be obtained by combining the upper and lower bounds (that we prove below) for the quantity $-\mathcal{L}\left[u_\delta^m- \underline{u}^m\right](t_c,x_c)$, and then choosing $\kappa_0>0$ suitably small. In this direction, it is convenient in what follows to assume that \begin{equation}\label{k0.first.cond} \kappa_0\le 1\wedge t_*^{-\frac{m}{m-1}}\qquad\mbox{so that}\qquad \kappa_0^{m-1}t_*^m\le 1\,. \end{equation} \noindent\textit{Upper bound. }We first establish the following upper bound: there exists a constant $\overline{A}>0$ such that \begin{equation}\label{contact.up} -\mathcal{L}\left[u_\delta^m- \underline{u}^m\right](t_c,x_c)\le\partial_t {u_\delta}(t_c,x_c)+\mathcal{L}\underline{u}^m(t_c,x_c)\le \overline{A} \,\kappa_0\,. \end{equation} To prove this, we estimate $\partial_t {u_\delta}(t_c,x_c)$ and $\mathcal{L}\underline{u}^m(t_c,x_c)$ separately. First we notice that, since $(t_c,x_c)$ is the first contact point, we have \begin{equation}\label{contact.up.1} {u_\delta}(t_c,x_c)= \underline{u}(t_c,x_c)\qquad\mbox{and}\qquad {u_\delta}(t,x)\ge \underline{u}(t,x)\,\quad\forall t\in [0,t_c]\,,\;\; \forall x\in\Omega\,. \end{equation} Hence, since $t_c\le t_* $, \begin{equation}\label{contact.up.2} \partial_t {u_\delta}(t_c,x_c)\le \partial_t \underline{u}(t_c,x_c)=\kappa_0\,\Phi_1(x_c)^{\sigma/m} \le \kappa_0\, \|\Phi_1\|_{\mathrm{L}^\infty(\Omega)}^{\sigma/m} = A_1\,\kappa_0 \,, \end{equation} where we defined $A_1:= \|\Phi_1\|_{\mathrm{L}^\infty(\Omega)}^{\sigma/m}$.
Next we estimate $\mathcal{L}\underline{u}^m(t_c,x_c)$. Since $\underline{u}^m(t,x)=(\kappa_0\,t)^m\,\Phi_1(x)^{\sigma}$, we have \begin{equation}\label{contact.up.3}\begin{split} \mathcal{L}[\underline{u}^m](t,x)&= (\kappa_0\, t)^m\, \mathcal{L} \Phi_1^\sigma(x) \le (\kappa_0\, t_*)^m\, \left\| \mathcal{L} \Phi_1^\sigma\right\|_{\mathrm{L}^\infty(\Omega)} =: A_2\,\kappa_0\,. \end{split} \end{equation} Since $\kappa_0^{m-1}\, t_*^m\leq 1$ (see \eqref{k0.first.cond}), in order to prove that $A_2$ is finite it is enough to bound $\left\| \mathcal{L} \Phi_1^\sigma\right\|_{\mathrm{L}^\infty(\Omega)}$. When $\sigma=1$ we simply have $\mathcal{L}\Phi_1=\lambda_1\Phi_1$, hence $A_2\leq \lambda_1 \|\Phi_1\|_{\mathrm{L}^\infty(\Omega)}$. When $\sigma<1$, we use the assumption $\Phi_1 \in C^\gamma(\Omega)$ to estimate \begin{equation}\label{contact.up.3a} |\Phi_1^\sigma(x)-\Phi_1^\sigma(y)|\le |\Phi_1 (x)-\Phi_1 (y)|^\sigma \le C|x-y|^{\gamma\sigma}\qquad\forall\,x,y\in \Omega\,. \end{equation} Hence, since $\gamma\sigma=2sm/(m-1)>2s$ and $K(x,y)\le c_1|x-y|^{-(N+2s)}$, we see that \[ \begin{split} \left|\mathcal{L} \Phi_1^\sigma(x)\right|&=\left|\int_{\mathbb{R}^N}\left[\Phi_1^\sigma(x)-\Phi_1^\sigma(y) \right]K(x,y)\,{\rm d}y\right|\\ &\le \int_{\Omega}|x-y|^{\gamma\sigma} K(x,y)\,{\rm d}y + C\|\Phi_1\|_{\mathrm{L}^\infty(\Omega)}^\sigma\int_{\mathbb{R}^N\setminus B_1} |y|^{-(N+2s)}\,{\rm d}y<\infty, \end{split} \] hence $A_2$ is again finite. Combining \eqref{contact.up.2} and \eqref{contact.up.3}, we obtain \eqref{contact.up} with $\overline{A}:=A_1+A_2$. \noindent\textit{Lower bound.
} We want to prove that there exists $\underline{A}>0$ such that \begin{equation}\label{contact.low} -\mathcal{L}\left[u_\delta^m- \underline{u}^m\right](t_c,x_c)\ge\frac{\underline{\kappa}_\Omega}{\|\Phi_1\|_{\mathrm{L}^\infty(\Omega)}} \int_{\Omega}u_\delta^m(t_c,y)\Phi_1(y) \,{\rm d}y-\underline{A}\,\kappa_0\,. \end{equation} This follows from (L1) and \eqref{contact.up.1}: \begin{equation}\label{contact.4aaaaa}\begin{split} -\mathcal{L} &\left[u_\delta^m-\underline{u}^m\right](t_c,x_c)\\ &=-\int_{\mathbb{R}^N}\left[\big(u_\delta^m(t_c,x_c)-u_\delta^m(t_c,y)\big)-\big(\underline{u}^m(t_c,x_c)-\underline{u}^m(t_c,y)\big)\right]K(x_c,y)\,{\rm d}y\\ &=\int_{\Omega}\left[u_\delta^m(t_c,y)-\underline{u}^m(t_c,y)\right]K(x_c,y)\,{\rm d}y\\ &\ge \underline{\kappa}_\Omega \int_{\Omega}\left[u_\delta^m(t_c,y)-\underline{u}^m(t_c,y)\right] \,{\rm d}y \ge \frac{\underline{\kappa}_\Omega}{\|\Phi_1\|_{\mathrm{L}^\infty(\Omega)}} \int_{\Omega}u_\delta^m(t_c,y)\Phi_1(y) \,{\rm d}y-\underline{A}\,\kappa_0, \end{split}\end{equation} where in the last step we used that $\underline{u}^m(t_c,y)= [\kappa_0\, t_c\, \Phi_1^{\sigma/m}(y)]^m \le \|\Phi_1\|_{\mathrm{L}^\infty(\Omega)}^{\sigma}(\kappa_0 t_*)^m $ and $\kappa_0^{m-1}t_*^m\le 1$ (see \eqref{k0.first.cond}). \noindent\textit{End of the proof.} The contradiction can now be obtained by combining the upper and lower bounds \eqref{contact.up} and \eqref{contact.low}. More precisely, we have proved that \[ \int_{\Omega}u_\delta^m(t_c,y)\Phi_1(y) \,{\rm d}y\le \frac{\|\Phi_1\|_{\mathrm{L}^\infty(\Omega)}}{\underline{\kappa}_\Omega}(\overline{A}+\underline{A})\,\kappa_0 =:\overline{\kappa}\,\kappa_0, \] which, combined with the lower bound \eqref{lem1.lower.bdd}, yields $$ c_2 \left(\int_{\Omega}u_0(x)\Phi_1(x)\,{\rm d}x \right)^m\le \int_{\Omega}u_\delta^m(t_c,y)\Phi_1(y)\,{\rm d}y \le \overline{\kappa}\,\kappa_0.
$$ Setting $\kappa_0 := \left(1\wedge \frac{c_2}{\overline{\kappa}}\right)t_*^{-m/(m-1)}$ we obtain the desired contradiction.\qed \subsection{Proof of Theorem \ref{thm.Lower.PME.large.t}.~} We first recall the upper pointwise estimates \eqref{thm.NLE.PME.estim}: for all $0< t_0\le t_1 \le t $ and a.e. $x_0\in \Omega$, we have \begin{equation}\label{Lower.PME.Step.1.1} \int_{\Omega}u(t_0,x){\mathbb G}(x , x_0)\,{\rm d}x - \int_{\Omega}u({t_1},x){\mathbb G}(x , x_0)\,{\rm d}x \le (m-1)\frac{t^{\frac{m}{m-1}}}{t_0^{\frac{1}{m-1}}} \,u^m(t,x_0)\,. \end{equation} The proof follows by estimating the two integrals on the left-hand side separately. We begin by using the upper bounds \eqref{thm.Upper.PME.Boundary.2} to get \begin{equation}\label{Lower.PME.Step.1.2} \int_{\Omega}u(t_1,x){\mathbb G}(x, x_0)\,{\rm d}x\le \overline{\kappa} \frac{\Phi_1(x_0)}{t_1^{\frac{1}{m-1}}}\qquad\mbox{for all $t_1>0$ and a.e. $x_0\in\Omega$}\,. \end{equation} Then we note that, as a consequence of (K2) and Lemma \ref{cor.abs.L1.phi}, \begin{equation}\label{Lower.PME.Step.1.4} \int_{\Omega}u(t_0,x){\mathbb G}(x, x_0)\,{\rm d}x\ge \underline{\kappa}_\Omega \Phi_1(x_0)\int_{\Omega}u(t_0,x)\Phi_1(x)\,{\rm d}x \ge \frac{\underline{\kappa}_\Omega}{2}\Phi_1(x_0)\int_{\Omega}u_0(x)\Phi_1(x)\,{\rm d}x \end{equation} provided $t_0\le \frac{\tau_0}{\|u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{m-1}}$. Combining \eqref{Lower.PME.Step.1.1}, \eqref{Lower.PME.Step.1.2}, and \eqref{Lower.PME.Step.1.4}, for all $t\ge t_1\ge t_0> 0$ we obtain $$ u^m(t,x_0) \ge \frac{t_0^{\frac{1}{m-1}}}{m-1}\left(\frac{\underline{\kappa}_\Omega}{2}\|u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)} - \overline{\kappa} t_1^{-\frac{1}{m-1}}\right)\frac{\Phi_1(x_0)}{t^{\frac{m}{m-1}}}\,.
$$ Choosing \[ t_0:=\frac{\tau_0}{\|u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{m-1}}\le t_1:=t_*=\frac{\kappa_*}{\|u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{m-1}} \qquad\mbox{with}\qquad \kappa_*\ge \tau_0 \vee \left(\frac{4\overline{\kappa}}{\underline{\kappa}_\Omega}\right)^{m-1} \] so that $ \frac{\underline{\kappa}_\Omega}{2}\|u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)} - \overline{\kappa} t_1^{-\frac{1}{m-1}} \ge \frac{\underline{\kappa}_\Omega}{4} \|u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)},$ the result follows.\qed \subsection{Proof of Theorems \ref{prop.counterex} and \ref{prop.counterex2}} \noindent\textbf{Proof of Theorem \ref{prop.counterex}. } Since $u_0\leq C_0 \Phi_1$ and $\mathcal{L} \Phi_1=\lambda_1 \Phi_1$, we have $$ \int_\Omega u_0(x) {\mathbb G}(x,x_0)\,{\rm d}x\leq C_0\int_\Omega \Phi_1(x){\mathbb G}(x,x_0)\,{\rm d}x=C_0\, \mathcal{L}^{-1}\Phi_1(x_0)=\frac{C_0}{\lambda_1}\Phi_1(x_0). $$ Since $t\mapsto \int_\Omega u(t,y){\mathbb G}(x,y)\,{\rm d}y$ is decreasing (see \eqref{thm.NLE.PME.estim.0}), it follows that \begin{equation} \label{eq:initial data.2} \int_\Omega u(t,y){\mathbb G}(x_0,y)\,{\rm d}y\le \frac{C_0}{\lambda_1}\Phi_1(x_0)\qquad\mbox{for all }t\ge 0. \end{equation} Combining this estimate with \eqref{Upper.PME.Step.1.2.c} concludes the proof. \qed \noindent\textbf{Proof of Theorem \ref{prop.counterex2}. } Given $x_0 \in \Omega$, set $R_0:={\rm dist}(x_0,\partial\Omega)$. Since ${\mathbb G}(x,x_0) \gtrsim |x-x_0|^{-(N-2s)}$ inside $B_{R_0/2}(x_0)$ (by (K4)), using our assumption on $u(T)$ we get $$ \int_\Omega {\mathbb G}(x,x_0) u(T,x)\,{\rm d}x\gtrsim \int_{B_{R_0/2}(x_0)} \frac{ \Phi_1(x)^\alpha}{|x-x_0|^{N-2s}}\,{\rm d}x \gtrsim \Phi_1(x_0)^\alpha R_0^{2s}. $$ Recalling that $\Phi_1(x_0)\asymp R_0^\gamma$, this yields $$ \Phi_1(x_0)^{\alpha+\frac{2s}{\gamma}}\lesssim \int_\Omega {\mathbb G}(x,x_0) u(T,x)\,{\rm d}x.
$$ Combining the above inequality with \eqref{eq:initial data.2} gives \[ \Phi_1(x_0)^{\alpha+\frac{2s}{\gamma}} \lesssim \Phi_1(x_0) \qquad \forall\,x_0\in \Omega, \qquad\mbox{which implies}\qquad \alpha \geq 1-\frac{2s}{\gamma}\,. \] Noticing that $1-\frac{2s}{\gamma}>\frac{1}{m}$ if and only if $\sigma<1$, this concludes the proof.\qed \section{Summary of the general decay and boundary results}\label{sect.Harnack} In this section we present a summary of the main results, in the form of various upper and lower bounds, which we call Global Harnack Principles (GHP) for short. As already mentioned, such inequalities are important for regularity issues (see Section \ref{sect.regularity}), and they play a fundamental role in establishing the sharp asymptotic behavior (see Section \ref{sec.asymptotic}). The proof of such a GHP is obtained by combining the upper and lower bounds stated and proved in Sections \ref{sect.upperestimates} and \ref{Sec.Lower} respectively. There are cases in which the bounds do not match; there, the more complicated panorama described in the Introduction holds. As explained before, as far as examples are concerned, the latter anomalous situation happens only for the SFL. \begin{thm}[Global Harnack Principle I]\label{thm.GHP.PME.I} Let $\mathcal{L}$ satisfy (A1), (A2), (K2), and (L1). Furthermore, suppose that $\mathcal{L}$ has a first eigenfunction $\Phi_1\asymp \mathrm{dist}(x,\partial\Omega)^\gamma$\,. Let $\sigma$ be as in \eqref{as.sep.var} and assume that $2sm\neq \gamma(m-1)$ and:\\ - either $\sigma=1$;\\ - or $\sigma<1$, $K(x,y)\le c_1|x-y|^{-(N+2s)}$ for a.e. $x,y\in \mathbb{R}^N$, and $\Phi_1\in C^\gamma(\overline{\Omega})$.\\ Let $u\ge 0$ be a weak dual solution to the (CDP) corresponding to $u_0\in \mathrm{L}^1_{\Phi_1}(\Omega)$.
Then, there exist constants $\underline{\kappa},\overline{\kappa}>0$, so that the following inequality holds: \begin{equation}\label{thm.GHP.PME.I.Ineq} \underline{\kappa}\, \left(1\wedge \frac{t}{t_*}\right)^{\frac{m}{m-1}}\frac{\Phi_1(x)^{\sigma/m}}{t^{\frac{1}{m-1}}} \le \, u(t,x) \le \overline{\kappa}\, \frac{\Phi_1(x)^{\sigma/m}}{t^{\frac1{m-1}}}\qquad\mbox{for all $t>0$ and all $x\in \Omega$}\,. \end{equation} The constants $\underline{\kappa},\overline{\kappa}$ depend only on $N,s,\gamma, m, c_1,\underline{\kappa}_\Omega,\Omega$, and $\|\Phi_1\|_{C^\gamma(\Omega)}$\,. \end{thm} \noindent {\sl Proof.~}We combine the upper bound \eqref{thm.Upper.PME.Boundary.F} with the lower bound \eqref{Thm.lower.B}. The expression of $t_*$ is explicitly given in Theorem \ref{Thm.lower.B}. \qed \noindent\textbf{Degenerate kernels. }When the kernel $K$ vanishes on $\partial\Omega$, there are two combinations of upper/lower bounds that provide Harnack inequalities, one for small times and one for large times. As we have already seen, there is a strong difference between the case $\sigma=1$ and $\sigma<1$. \begin{thm}[Global Harnack Principle II]\label{thm.GHP.PME.II} Let (A1), (A2), and (K2) hold. Let $u\ge 0$ be a weak dual solution to the (CDP) corresponding to $u_0\in \mathrm{L}^1_{\Phi_1}(\Omega)$. Assume that:\\ - either $\sigma=1$ and $2sm\neq \gamma(m-1)$;\\ - or $\sigma<1$, $u_0\geq \underline{\kappa}_0\Phi_1^{\sigma/m}$ for some $\underline{\kappa}_0>0$, and (K4) holds.\\ Then there exist constants $\underline{\kappa},\overline{\kappa}>0$ such that the following inequality holds: $$ \underline{\kappa}\, \frac{\Phi_1(x)^{\sigma/m}}{t^{\frac{1}{m-1}}} \le \, u(t,x) \le \overline{\kappa}\, \frac{\Phi_1(x)^{\sigma/m}}{t^{\frac1{m-1}}}\qquad\mbox{for all $t\ge t_*$ and all $x\in \Omega$}\,.
$$ When $2sm= \gamma(m-1)$, assuming (K4) and that $u_0\geq \underline{\kappa}_0\Phi_1\left(1+|\log\Phi_1 |\right)^{1/(m-1)}$ for some $\underline{\kappa}_0>0$, we have, for all $t\ge t_*$ and all $x\in \Omega$, \begin{multline*} \underline{\kappa}\, \frac{\Phi_1(x)^{1/m}}{t^{\frac{1}{m-1}}}\left(1+|\log\Phi_1(x) |\right)^{1/(m-1)} \le \, u(t,x) \le \overline{\kappa}\, \frac{\Phi_1(x)^{1/m} }{t^{\frac1{m-1}}}\left(1+|\log\Phi_1(x) |\right)^{1/(m-1)}\,. \end{multline*} The constants $\underline{\kappa},\overline{\kappa}$ depend only on $N,s,\gamma, m, \underline{\kappa}_0,\underline{\kappa}_\Omega$, and $\Omega$. \end{thm} \noindent {\sl Proof.~}In the case $\sigma=1$, we combine the upper bound \eqref{thm.Upper.PME.Boundary.F} with the lower bound \eqref{thm.Lower.PME.Boundary.large.t}. The expression of $t_*$ is explicitly given in Theorem \ref{thm.Lower.PME.large.t}. When $\sigma<1$, the upper bound is still given by \eqref{thm.Upper.PME.Boundary.F}, while the lower bound follows by comparison with the solution $S(x)(\underline{\kappa}_0^{1-m}+t)^{-\frac{1}{m-1}}$, recalling that $S\asymp \Phi_1^{\sigma/m}$ (see Theorem \ref{Thm.Elliptic.Harnack.m}). \qed \noindent\textbf{Remark. }Local Harnack inequalities of elliptic/backward type follow as a consequence of Theorems \ref{thm.GHP.PME.I} and \ref{thm.GHP.PME.II}, for all times and for large times respectively; see Theorem \ref{thm.Harnack.Local}. Note that, for small times, we cannot find matching powers for a global Harnack inequality (except for some special initial data), and such a result is actually false for $s=1$ (in view of the finite speed of propagation). Hence, in the remaining cases, we only have the following general result. \begin{thm}[{Non-matching upper and lower bounds}]\label{thm.GHP.PME.III} Let $\mathcal{L}$ satisfy (A1), (A2), (K2), and (L2). Let $u\ge 0$ be a weak dual solution to the (CDP) corresponding to $u_0\in \mathrm{L}^1_{\Phi_1}(\Omega)$.
Then, there exist constants $\underline{\kappa},\overline{\kappa}>0$ such that the following inequality holds when $2sm\neq \gamma(m-1)$: \begin{equation}\label{thm.GHP.PME.III.Ineq} \underline{\kappa}\,\left(1\wedge \frac{t}{t_*}\right)^{\frac{m}{m-1}}\frac{\Phi_1(x)}{t^{\frac{1}{m-1}}} \le \, u(t,x) \le \overline{\kappa}\, \frac{\Phi_1(x)^{\sigma/m}}{t^{\frac1{m-1}}}\qquad\mbox{for all $t>0$ and all $x\in \Omega$}. \end{equation} When $2sm=\gamma(m-1)$, a logarithmic correction $\left(1+|\log\Phi_1(x) |\right)^{1/(m-1)}$ appears in the right-hand side. \end{thm} \noindent {\sl Proof.~} We combine the upper bound \eqref{thm.Upper.PME.Boundary.F} with the lower bound \eqref{thm.Lower.PME.Boundary.1}. The expression of $t_*$ is explicitly given in Theorem \ref{thm.Lower.PME}.\qed \noindent\textbf{Remark. }As already mentioned in the Introduction, in the non-matching case, which in examples can only happen for spectral-type operators, we observe an \textit{anomalous behaviour of solutions} corresponding to ``small data'': it happens for all times when $\sigma<1$ or $2sm=\gamma(m-1)$, and it may happen for short times when $\sigma=1$. \section{Asymptotic behavior}\label{sec.asymptotic} An important application of the Global Harnack inequalities of the previous section concerns the sharp asymptotic behavior of solutions. More precisely, we first show that for large times all solutions behave like the separate-variables solution $\mathcal{U}(t,x)={S(x)}\,{t^{-\frac{1}{m-1}}}$ introduced at the end of Section \ref{sec.results}. Then, whenever the (GHP) holds, we can improve this result to an estimate in relative error. \begin{thm}[Asymptotic behavior]\label{Thm.Asympt.0} Assume that $\mathcal{L}$ satisfies (A1), (A2), and (K2), and let $S$ be as in Theorem \ref{Thm.Elliptic.Harnack.m}. Let $u$ be any weak dual solution to the (CDP).
Then, unless $u\equiv 0$, \begin{equation}\label{Thm.Asympt.0.1} \left\|t^{\frac{1}{m-1}}u(t,\cdot)- S\right\|_{\mathrm{L}^\infty(\Omega)}\xrightarrow{t\to\infty}0\,. \end{equation} \end{thm} \noindent {\sl Proof.~}The proof uses rescaling and time-monotonicity arguments, and it is a simple adaptation of the proof of Theorem 2.3 of \cite{BSV2013}. In those arguments, the interior $C^{\alpha}_{x}(\Omega)$ continuity is needed to improve the $\mathrm{L}^1(\Omega)$ convergence to $\mathrm{L}^\infty(\Omega)$ convergence, and such interior H\"older continuity is guaranteed by Theorem \ref{thm.regularity.1}(i) below. \qed We now exploit the (GHP) to get a stronger result. \begin{thm}[Sharp asymptotic behavior]\label{Thm.Asympt} Under the assumptions of Theorem \ref{Thm.Asympt.0}, assume that $u\not\equiv 0$. Furthermore, suppose that either the assumptions of Theorem \ref{thm.GHP.PME.I} or of Theorem \ref{thm.GHP.PME.II} hold. Set $\mathcal{U}(t,x):=t^{-\frac{1}{m-1}}S(x)$. Then there exists $c_0>0$ such that, for all $t\ge t_0:=c_0\|u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{-(m-1)}$, we have \begin{equation}\label{conv.rates.rel.err} \left\|\frac{u(t,\cdot)}{\mathcal{U}(t,\cdot)}-1 \right\|_{\mathrm{L}^\infty(\Omega)} \le \frac{2}{m-1}\,\frac{t_0}{t_0+t}\,. \end{equation} We remark that the constant $c_0>0$ only depends on $N,s,\gamma, m, \underline{\kappa}_0,\underline{\kappa}_\Omega$, and $\Omega$. \end{thm} \noindent\textbf{Remark. }This asymptotic result is sharp, as can be checked by considering $u(t,x)=\mathcal{U}(t+1,x)$. In the classical case, that is for $\mathcal{L}=-\Delta$, we recover the classical results of \cite{Ar-Pe,JLVmonats} with a different proof. \noindent {\sl Proof.~}Notice that we are in a position to use Theorems \ref{thm.GHP.PME.I} or \ref{thm.GHP.PME.II}, namely we have $$ u(t) \asymp t^{-\frac1{m-1}}S=\mathcal{U}(t,\cdot) \qquad\mbox{for all $t\ge t_*$}\,, $$ where we used that $S\asymp \Phi_1^{\sigma/m}$ by Theorem \ref{Thm.Elliptic.Harnack.m}.
Hence, we can rewrite the bounds above by saying that there exist $\underline{\kappa},\overline{\kappa}>0$ such that \begin{equation}\label{GHP.ASYM} \underline{\kappa}\, \frac{S(x)}{t^{\frac1{m-1}}} \le \, u(t,x) \le \overline{\kappa}\, \frac{S(x)}{t^{\frac1{m-1}}}\qquad\mbox{for all $t\ge t_*$ and a.e. $x\in \Omega$}\,. \end{equation} Since $t_*=\kappa_*\|u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{-(m-1)}$, the first inequality implies that $$ \frac{S}{(t_*+t_0)^{\frac{1}{m-1}}}\leq \underline{\kappa}\, \frac{S}{t_*^{\frac1{m-1}}} \leq u(t_*) $$ for some $t_0=c_0\|u_0\|_{\mathrm{L}^1_{\Phi_1}(\Omega)}^{-(m-1)} \geq t_*$. Hence, by the comparison principle, $$ \frac{S}{(t+t_0)^{\frac{1}{m-1}}} \le \, u(t)\qquad\mbox{for all $t\ge t_*$.} $$ On the other hand, it follows by \eqref{GHP.ASYM} that $u(t,x) \leq \mathcal U_T(t,x):=S(x)(t-T)^{-\frac1{m-1}}$ for all $t \geq T$ provided $T$ is large enough. If we now start to reduce $T$, the comparison principle combined with the upper bound \eqref{thm.Upper.PME.Boundary.F} shows that $u$ can never touch $\mathcal U_T$ from below in $(T,\infty)\times \Omega$. Hence we can reduce $T$ until $T=0$, proving that $u \leq \mathcal U_0$ (for an alternative proof, see Lemma 5.4 in \cite{BSV2013}). Since $t_0 \geq t_*$, this shows that $$ \frac{S(x)}{(t+t_0)^{\frac{1}{m-1}}} \le \, u(t,x) \le \frac{S(x)}{t^{\frac1{m-1}}}\qquad\mbox{for all $t\ge t_0$ and a.e. $x\in \Omega$}\,, $$ therefore $$ \biggl|1-\frac{u(t,x)}{\mathcal{U}(t,x)}\biggr| \leq 1-\biggl(1-\frac{t_0}{t_0+t}\biggr)^{\frac{1}{m-1}}\leq \frac{2}{m-1}\frac{t_0}{t_0+t}\qquad\mbox{for all $t\ge t_0$ and a.e. $x\in \Omega$}, $$ as desired.\qed \section{Regularity results}\label{sect.regularity} \noindent In order to obtain the regularity results, we basically require the validity of a Global Harnack Principle, namely Theorems \ref{thm.GHP.PME.I}, \ref{thm.GHP.PME.II}, or \ref{thm.GHP.PME.III}, depending on the situation under study.
For some higher regularity results, we will also need some extra assumptions on the kernels. For simplicity we assume that $\mathcal{L}$ is described by a kernel, without any lower-order term. However, it is clear that the presence of lower-order terms does not play any role in the interior regularity. \begin{thm}[Interior Regularity]\label{thm.regularity.1} Assume that \begin{equation*} \mathcal{L} f(x)=P.V.\int_{\mathbb{R}^N}\big(f(x)-f(y)\big)K(x,y)\,{\rm d}y+B(x)f(x)\,, \end{equation*} with $$ K(x,y)\asymp |x-y|^{-(N+2s)}\quad \text{ in $B_{2r}(x_0)\subset\Omega$}, \qquad K(x,y)\lesssim |x-y|^{-(N+2s)}\quad \text{ in $\mathbb R^N\setminus B_{2r}(x_0)$}. $$ Let $u$ be a nonnegative bounded weak dual solution to problem (CDP) on $(T_0,T_1)\times\Omega$, and assume that there exist $\delta,M>0$ such that \begin{align*} &0<\delta\le u(t,x)\qquad\mbox{for a.e. $(t,x)\in (T_0,T_1)\times B_{2r}(x_0)$,}\\ &0\leq u(t,x)\leq M\qquad\mbox{for a.e. $(t,x)\in (T_0,T_1)\times \Omega$.} \end{align*} \begin{enumerate} \item[(i)] Then $u$ is {H\"older continuous in the interior}. More precisely, there exists $\alpha>0$ such that, for all $0<T_0<T_2<T_1$, \begin{equation}\label{thm.regularity.1.bounds.0} \|u\|_{C^{\alpha/2s,\alpha}_{t,x}((T_2,T_1)\times B_{r}(x_0))}\leq C. \end{equation} \item[(ii)] Assume in addition $|K(x,y)-K(x',y)|\le c |x-x'|^\beta\,|y|^{-(N+2s)}$ for some $\beta\in (0,1\wedge 2s)$ such that $\beta+2s$ is not an integer. Then {$u$ is a classical solution in the interior}. More precisely, for all $0<T_0<T_2<T_1$, \begin{equation}\label{thm.regularity.1.bounds.1} \|u\|_{C^{1+\beta/2s, 2s+\beta}_{t,x}((T_2,T_1)\times B_{r}(x_0))}\le C. \end{equation} \end{enumerate} \end{thm} The constants in the above regularity estimates depend on the solution only through the upper and lower bounds on $u$. These bounds can be made quantitative by means of local Harnack inequalities, of elliptic and forward type, which follow from the global ones.
\begin{thm}[Local Harnack Inequalities of Elliptic/Backward Type]\label{thm.Harnack.Local} Under the assumptions of Theorem \ref{thm.GHP.PME.I}, there exists a constant $\hat H>0$, depending only on $N,s,\gamma, m, c_1,\underline{\kappa}_\Omega,\Omega$, such that for all balls $B_R(x_0)$ with $B_{2R}(x_0)\subset\Omega$: \begin{equation}\label{thm.Harnack.Local.ell} \sup_{x\in B_R(x_0)}u(t,x)\le \frac{\hat H}{\left(1\wedge\frac{t}{t_*}\right)^{\frac{m}{m-1}}}\, \inf_{x\in B_R(x_0)}u(t,x)\qquad\mbox{for all }t>0. \end{equation} Moreover, for all $t>0$ and all $h>0$ we have the following: \begin{equation}\label{thm.Harnack.Local.ForwBack} \sup_{x\in B_R(x_0)}u(t,x)\le \hat H\left[\left(1+\frac{h}{t}\right) \left(1\wedge\frac{t}{t_*}\right)^{-m}\right]^{\frac{1}{m-1}}\, \inf_{x\in B_R(x_0)}u(t+h,x)\,. \end{equation} \end{thm} \noindent {\sl Proof.~} Recalling \eqref{thm.GHP.PME.I.Ineq}, the bound \eqref{thm.Harnack.Local.ell} follows easily from the following Harnack inequality for the first eigenfunction (see for instance \cite{BFV-Elliptic}): $$ \sup_{x\in B_R(x_0)} \Phi_1(x)\le H_{N,s,\gamma,\Omega} \inf_{x\in B_R(x_0)} \Phi_1(x). $$ Since $u(t,x)\leq (1+h/t)^{\frac{1}{m-1}}u(t+h,x)$, by the time monotonicity of $t\mapsto t^{\frac{1}{m-1}}\,u(t,x)$, \eqref{thm.Harnack.Local.ForwBack} follows.\qed \noindent\textbf{Remark. }The same result holds for large times $t\ge t_*$, as a consequence of Theorem \ref{thm.GHP.PME.II}. Already in the local case $s=1$, these Harnack inequalities are stronger than the known two-sided inequalities valid for solutions to the Dirichlet problem for the classical porous medium equation, cf. \cite{AC83,DaskaBook,DiB88,DiBook,DGVbook}, which are of forward type and are often stated in terms of the so-called intrinsic geometry. Note that elliptic and backward Harnack-type inequalities usually occur in the fast diffusion range $m<1$ \cite{BGV-Domains, BV, BV-ADV, BV2012}, or for linear equations in bounded domains \cite{FGS86,SY}.
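For the reader's convenience, the monotonicity step in the proof of Theorem \ref{thm.Harnack.Local} can be spelled out as follows (a sketch):

```latex
% From the monotonicity of t \mapsto t^{1/(m-1)} u(t,x):
t^{\frac{1}{m-1}}u(t,x)\le (t+h)^{\frac{1}{m-1}}u(t+h,x)
\quad\Longrightarrow\quad
u(t,x)\le \Big(1+\frac{h}{t}\Big)^{\frac{1}{m-1}}u(t+h,x)\,.
```

Taking the supremum over $B_R(x_0)$ and applying \eqref{thm.Harnack.Local.ell} at time $t+h$, together with the elementary observation $1\wedge\frac{t+h}{t_*}\ge 1\wedge\frac{t}{t_*}$, yields \eqref{thm.Harnack.Local.ForwBack}.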
For sharp boundary regularity we need a GHP with matching powers, as in Theorems \ref{thm.GHP.PME.I} or \ref{thm.GHP.PME.II}, and when $s>\gamma/2$ we can also prove H\"older regularity up to the boundary. We leave it to the interested reader to check that the presence of an extra term $B(x)u^m(t,x)$ with $0\leq B(x)\leq c_1 {\rm dist}(x,\partial\Omega)^{-2s}$ (as in the SFL) does not affect the validity of the next result. Indeed, when considering the scaling in \eqref{eq:scaling}, the lower-order term scales as $\hat B_r u_r^m$ with $0\leq \hat B_r\leq c_1$ inside the unit ball $B_1$. \begin{thm}[H\"older continuity up to the boundary]\label{thm.regularity.2} Under the assumptions of Theorem $\ref{thm.regularity.1}$(ii), assume in addition that $2s>\gamma$. Then {$u$ is H\"older continuous up to the boundary}. More precisely, for all $0<T_0<T_2<T_1$ there exists a constant $C>0$ such that \begin{equation}\label{thm.regularity.2.bounds} \|u\|_{C^{\frac{\gamma}{m\vartheta},\frac{\gamma}{m}}_{t,x}((T_2,T_1)\times{\Omega})}\le C \qquad\mbox{with }\quad\vartheta:=2s-\gamma\left(1-\frac{1}{m}\right)\,. \end{equation} \end{thm} \noindent\textbf{Remark. } Since we have $u(t,x)\asymp \Phi_1(x)^{1/m}\asymp\mathrm{dist}(x,\partial\Omega)^{\gamma/m}$ (note that $2s>\gamma$ implies that $\sigma=1$ and that $2sm\neq \gamma(m-1)$), the spatial H\"older exponent is sharp, while the H\"older exponent in time is the natural one given by scaling. \subsection{Proof of interior regularity} The strategy to prove Theorem \ref{thm.regularity.1} follows the lines of \cite{BFR}, but with some modifications. The basic idea is that, because $u$ is bounded away from zero and infinity, the equation is non-degenerate, and we can use parabolic regularity for nonlocal equations to obtain the results. More precisely, interior H\"older regularity will follow by applying the $C^{\alpha/2s,\alpha}_{t,x}$ estimates of \cite{FK} to a ``localized'' linear problem.
Once H\"older regularity is established, under an H\"older continuity assumption on the kernel we can use the Schauder estimates proved in \cite{Schauder} to conclude. \subsubsection{Localization of the problem }\label{sssect.localization} Up to a rescaling, we can assume $r=2$, $T_0=0$, $T_1=1$. Also, by a standard covering argument, it is enough to prove the results with $T_2=1/2$. Take a cutoff function $\rho\in C^\infty_c(B_4)$ such that $\rho\equiv 1$ on $B_3$, a cutoff function $\eta\in C^\infty_c(B_2)$ such that $\eta\equiv 1$ on $B_1$, and define $v=\rho u$. By construction $u=v$ on $(0,1)\times B_3$. Since $\rho\equiv 1$ on $B_3$, we can write the equation for $v$ on the small cylinder $(0,1)\times B_1$ as $$ \partial_t v(t,x) = -\mathcal{L} [v^m](t,x) +g(t,x)= - L_a v(t,x) + f(t,x) +g(t,x) $$ where $$ L_a[v](t,x):=\int_{\mathbb{R}^N}\big(v(t,x)-v(t,y)\big)a(t,x,y)K(x,y)\,{\rm d}y\,, $$ \begin{align*} a(t,x,y) &:= \frac{v^m(t,x)-v^m(t,y)}{v(t,x)-v(t,y)}\eta(x-y)+ \big[1-\eta(x-y)\big]\\ &=m\eta(x-y)\int_0^1 \left[(1-\lambda)v(t,x) +\lambda v(t,y) \right]^{m-1} {\rm d}\lambda + \big[1-\eta(x-y)\big]\,, \end{align*} \[ f(t,x):=\int_{\mathbb{R}^N\setminus B_1(x)}\bigl(v(t,x)-v(t,y)-v^m(t,x)+v^m(t,y)\bigr)[1-\eta(x-y)] K(x,y)\,{\rm d}y\,, \] and $$ g(t,x):= -\mathcal{L}\left[(1-\rho^m)u^m\right](t,x)=\int_{\mathbb{R}^N\setminus B_3}(1-\rho^m(y))u^m(t,y) K(x,y)\,{\rm d}y $$ (recall that $(1-\rho^m)u^m\equiv 0$ on $(0,1)\times B_3$). \subsubsection{H\"older continuity in the interior}\label{sssect.Calpha} Set $b:=f+g$, with $f$ and $g$ as above. It is easy to check that, since $K(x,y)\lesssim |x-y|^{-(N+2s)}$, we have $b \in \mathrm{L}^\infty((0,1)\times B_1)$. Also, since $0<\delta\leq u \leq M$ inside $(0,1)\times B_1$, there exists $\Lambda>1$ such that $\Lambda^{-1}\leq a(t,x,y)\leq \Lambda$ for a.e. $(t,x,y)\in (0,1)\times B_1\times B_1$ with $|x-y|\leq 1$.
This guarantees that the linear operator $L_a$ is uniformly elliptic, so we can apply the results in \cite{FK} to ensure that \[\|v\|_{C^{\alpha/2s,\alpha}_{t,x}((1/2,1)\times B_{1/2})}\leq C\bigl(\|b\|_{\mathrm{L}^\infty((0,1)\times B_1)}+\|v\|_{\mathrm{L}^\infty((0,1)\times\mathbb{R}^N)}\bigr)\] for some universal exponent $\alpha>0$. This proves Theorem \ref{thm.regularity.1}(i). \subsubsection{Classical solutions in the interior} Now that we know that $u\in C^{\alpha/2s,\alpha}((1/2,1)\times B_{1/2})$, we repeat the localization argument above with spatial cutoff functions $\rho$ and $\eta$ supported inside $B_{1/2}$ to ensure that $v:=\rho u$ is H\"older continuous in $(1/2,1)\times\mathbb{R}^N$. Then, to obtain higher regularity we argue as follows. Set $\beta_1:=\min\{\alpha,\beta\}$. Thanks to the assumption on $K$ and Theorem \ref{thm.regularity.1}(i), it is easy to check that $K_a(t,x,y):=a(t,x,y)K(x,y)$ satisfies $$ |K_a(t,x,y)-K_a(t',x',y)| \leq C\left(|x-x'|^{\beta_1}+|t-t'|^{\beta_1/2s}\right)|y|^{-(N+2s)} $$ inside $(1/2,1)\times B_{1/2}$. Also, $f,g \in C^{\beta_1/2s,\beta_1}((1/2,1)\times B_{1/2})$. This allows us to apply the Schauder estimates from \cite{Schauder} (see also \cite{CKriv}) to obtain that $$ \|v\|_{C^{1+\beta_1/2s,2s+\beta_1}_{t,x}((3/4,1)\times B_{1/4})} \leq C\bigl(\|b\|_{C^{\beta_1/2s,\beta_1}_{t,x}((1/2,1)\times B_{1/2})}+\|v\|_{{C^{\beta_1/2s,\beta_1}_{t,x}((1/2,1)\times\mathbb{R}^N)}}\bigr). $$ In particular, $u\in C^{1+\beta_1/2s,2s+\beta_1}((3/4,1)\times B_{1/8})$. In case $\beta_1=\beta$ we stop here. Otherwise we set $\alpha_1:=2s+\beta_1$ and we repeat the argument above with $\beta_2:=\min\{\alpha_1,\beta\}$ in place of $\beta_1$. In this way, we obtain that $u\in C^{1+\beta_2/2s,2s+\beta_2}((1-2^{-4},1)\times B_{2^{-5}})$. Iterating this procedure finitely many times, we finally obtain that $$ u\in C^{1+\beta/2s,2s+\beta}((1-2^{-k},1)\times B_{2^{-k-1}}) $$ for some universal $k$.
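The exponent bookkeeping behind this iteration can be sketched as follows (the recursive notation is ours, introduced only for illustration):

```latex
% Exponent recursion behind the bootstrap (a sketch):
\beta_1:=\min\{\alpha,\beta\},\qquad
\alpha_k:=2s+\beta_k,\qquad
\beta_{k+1}:=\min\{\alpha_k,\beta\}\,.
```

As long as $\beta_k<\beta$, one has $\beta_{k+1}=\min\{2s+\beta_k,\beta\}$, so the exponent improves by up to $2s$ per step and caps at $\beta$ after finitely many steps; since each step also shrinks the cylinder by a fixed factor, one still ends up working on a nontrivial cylinder $(1-2^{-k},1)\times B_{2^{-k-1}}$.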
Finally, a covering argument completes the proof of Theorem \ref{thm.regularity.1}(ii). \subsection{Proof of boundary regularity} The proof of Theorem \ref{thm.regularity.2} follows by scaling and interior estimates. Notice that the assumption $2s>\gamma$ implies that $\sigma=1$, hence $u(t)$ has matching upper and lower bounds. Given $x_0 \in \Omega$, set $r=\textrm{dist}(x_0,\partial\Omega)/2$ and define \begin{equation} \label{eq:scaling} u_r(t,x):=r^{-\frac{\gamma}{m}}\,u\left(t_0+r^\vartheta t,\,x_0+rx\right)\,, \qquad\mbox{with}\qquad\vartheta:=2s-\gamma\left(1-\frac{1}{m}\right)\,. \end{equation} Note that, because $2s>\gamma$, we have $\vartheta>0$. With this definition, we see that $u_r$ satisfies the equation $\partial_t u_r+\mathcal{L}_r u_r^m=0$ in $\Omega_r:=(\Omega-x_0)/r$, where \[ \mathcal{L}_r f(x)=P.V.\int_{\mathbb{R}^N}\big(f(x)-f(y)\big)K_r(x,y)\,{\rm d}y\,,\qquad K_r(x,y):=r^{N+2s}K(x_0+rx,x_0+ry)\,. \] Note that, since $\sigma=1$, it follows by the (GHP) that $u(t)\asymp \mathrm{dist}(x,\partial\Omega)^{\gamma/m}$. Hence, \[ 0<\delta\leq u_r(t,x)\leq M, \qquad\mbox{for all $t\in[r^{-\vartheta}T_0,r^{-\vartheta}T_1]$ and $x\in B_1$,} \] with constants $\delta,M>0$ that are independent of $r$ and $x_0$. In addition, using again that $u(t)\asymp \mathrm{dist}(x,\partial\Omega)^{\gamma/m}$, we see that \[ u_r(t,x)\leq C\bigl(1+|x|^{\gamma/m}\bigr) \qquad \text{for all $t\in[r^{-\vartheta}T_0,r^{-\vartheta}T_1]$ and $x\in \mathbb{R}^N$}. \] Noticing that $ u_r^m(t,x) \leq C(1+|x|^{\gamma}) $ and that $\gamma<2s$ by assumption, we see that the tails of $u_r$ will not create any problem. Indeed, for any $x \in B_1$, $$ \int_{\mathbb R^N\setminus B_2}u_r^m(t,y)K_r(x,y)\,{\rm d}y\leq C\int_{\mathbb R^N\setminus B_2}|y|^\gamma\, |y|^{-(N+2s)}\,{\rm d}y\leq \bar C_0, $$ where $\bar C_0$ is independent of $r$.
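The value of $\vartheta$ in \eqref{eq:scaling} is dictated by the scaling of the equation; for the reader's convenience, here is a quick check (a sketch, obtained by changing variables $y\mapsto x_0+ry$ in the nonlocal integral and writing $X:=x_0+rx$):

```latex
% Scaling check for \eqref{eq:scaling}: since u_r^m = r^{-\gamma} u^m(t_0+r^\vartheta t, x_0+r\cdot),
\mathcal{L}_r[u_r^m](t,x)
 = r^{N+2s}\,r^{-N}\,r^{-\gamma}\,\big(\mathcal{L}u^m\big)(t_0+r^{\vartheta}t,\,X)
 = r^{2s-\gamma}\,\big(\mathcal{L}u^m\big)(t_0+r^{\vartheta}t,\,X)\,,
\qquad
\partial_t u_r(t,x) = r^{\vartheta-\frac{\gamma}{m}}\,\big(\partial_t u\big)(t_0+r^{\vartheta}t,\,X)\,,
```

so that $\partial_t u_r+\mathcal{L}_r u_r^m=0$ holds precisely when $\vartheta-\frac{\gamma}{m}=2s-\gamma$, that is, $\vartheta=2s-\gamma\left(1-\frac{1}{m}\right)$.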
This means that we can localize the problem using cutoff functions as done in Section \ref{sssect.localization}, and the integrals defining the functions $f$ and $g$ will converge uniformly with respect to $x_0$ and $r$. Hence, we can apply Theorem \ref{thm.regularity.1}(ii) to get \begin{equation} \label{eq:interior ur} \|u_r\|_{C^{1+\beta/2s,2s+\beta}([r^{-\vartheta}T+1/2,r^{-\vartheta}T+1]\times B_{1/2})} \leq C \end{equation} for all $T \in [T_0,T_1-r^{\vartheta}]$. Since $\gamma/m<2s+\beta$ (because $\gamma<2s$), it follows that $$ \|u_r\|_{\mathrm{L}^\infty([r^{-\vartheta}T+1/2,r^{-\vartheta}T+1],C^{\gamma/m}(B_{1/2}))} \leq \|u_r\|_{C^{1+\beta/2s,2s+\beta}([r^{-\vartheta}T+1/2,r^{-\vartheta}T+1]\times B_{1/2})} \leq C. $$ Noticing that $$ \sup_{t \in [r^{-\vartheta}T+1/2,r^{-\vartheta}T+1]}[u_r]_{C^{\gamma/m}(B_{1/2})} = \sup_{t \in [T+r^{\vartheta}/2,T+r^{\vartheta}]}[u]_{C^{\gamma/m}(B_{r}(x_0))}, $$ and that $T \in [T_0,T_1-r^{\vartheta}]$ and $x_0$ are arbitrary, arguing as in \cite{RosSer} we deduce that, given $T_2\in (T_0,T_1)$, \begin{equation} \label{eq:holder x} \sup_{t \in [T_2,T_1]}[u]_{C^{\gamma/m}(\Omega)} \leq C. \end{equation} This proves the global H\"older regularity in space. To show the regularity in time, we start again from \eqref{eq:interior ur} to get $$ \|\partial_tu_r\|_{\mathrm{L}^\infty([r^{-\vartheta}T+1/2,r^{-\vartheta}T+1]\times B_{1/2})} \leq C. $$ By scaling, this implies that $$ \|\partial_tu\|_{\mathrm{L}^\infty([T+r^{\vartheta}/2,T+r^{\vartheta}]\times B_{r}(x_0))} \leq Cr^{\frac{\gamma}{m}-\vartheta}, $$ and by the arbitrariness of $T$ and $x_0$ we obtain (recall that $r=\textrm{dist}(x_0,\partial\Omega)/2$) \begin{equation} \label{eq:ut} |\partial_tu(t,x)|\leq C\,\textrm{dist}(x,\partial\Omega)^{\frac{\gamma}{m}-\vartheta} \qquad \forall\, t \in [T_2,T_1],\,x \in \Omega.
\end{equation} Note that $\frac{\gamma}{m}-\vartheta=\gamma-2s<0$ by our assumption. Now, given $t_0,t_1 \in [T_2,T_1]$ and $x \in \Omega$, we argue as follows: if $|t_0-t_1|\leq \textrm{dist}(x,\partial\Omega)^\vartheta$ then we use \eqref{eq:ut} to get (recall that $\frac{\gamma}{m}-\vartheta<0$) $$ |u(t_1,x)-u(t_0,x)|\leq C\,\textrm{dist}(x,\partial\Omega)^{\frac{\gamma}{m}-\vartheta}|t_0-t_1|\leq C|t_0-t_1|^\frac{\gamma}{m\vartheta}. $$ On the other hand, if $|t_0-t_1|\geq \textrm{dist}(x,\partial\Omega)^\vartheta$, then we use \eqref{eq:holder x} and the fact that $u$ vanishes on $\partial \Omega$ to obtain $$ |u(t_1,x)-u(t_0,x)| \leq |u(t_1,x)|+|u(t_0,x)| \leq C\,\textrm{dist}(x,\partial\Omega)^{\frac{\gamma}{m}}\leq C|t_0-t_1|^\frac{\gamma}{m\vartheta}. $$ This proves that $u$ is $\frac{\gamma}{m\vartheta}$-H\"older continuous in time, and completes the proof of Theorem \ref{thm.regularity.2}. \qed \section{Numerical evidence}\label{sec.numer} After discovering the unexpected boundary behavior, we looked for numerical confirmation. This has been given to us by the authors of \cite{numerics}, who exploited the analytical tools developed in this paper to support our results by means of accurate numerical simulations. We include here some of these simulations, by courtesy of the authors. In all the figures we shall consider the Spectral Fractional Laplacian, so that $\gamma=1$ (see Section \ref{ssec.examples} for more details). We take $\Omega=(-1,1)$, and we consider as initial datum the compactly supported function $u_0(x)=e^{4+\frac{1}{(x-1/2)(x+1/2)}}\chi_{|x|< 1/2}$ shown on the left of Figure \ref{fig1}. In all the other figures, the solid line represents either $\Phi_1^{1/m}$ or $\Phi_1^{1-2s}$, while the dotted lines represent $t^{\frac{1}{m-1}}u(t)$ for different values of $t$, where $u(t)$ is the solution starting from $u_0$.
These choices are motivated by Theorems \ref{thm.Lower.PME.large.t} and \ref{prop.counterex2}. Since the map $t\mapsto t^{\frac{1}{m-1}}\,u(t,x)$ is nondecreasing for all $x \in \Omega$ (cf. (2.3) in \cite{BV-PPR2-1}), the lower dotted line corresponds to an earlier time with respect to the higher one. \begin{figure} \caption{ \footnotesize On the left, the initial condition $u_0$. On the right, the solid line represents $\Phi_1^{1/m}$.} \label{fig1} \end{figure} \begin{figure} \caption{ \footnotesize In both pictures, the solid line represents $\Phi_1^{1/m}$.} \label{fig2} \end{figure} \begin{figure} \caption{ \footnotesize In both pictures we use the parameters $m=2$ and $s=1/10$ (hence $\sigma=2/5<1$), and the solid line represents $\Phi_1^{1-2s}$.} \label{fig3} \end{figure} Comparing Figures 2 and 3, it seems that when $\sigma<1$ there is no hope to find a universal behavior of solutions for large times. In particular, the bound provided by \eqref{intro.1} seems to be optimal. \section{Complements, extensions and further examples}\label{sec.comm} \noindent {\bf Elliptic versus parabolic.} The exceptional boundary behaviors we have found for some operators and data came as a surprise to us, since the solution to the corresponding ``elliptic setting'' $\mathcal{L} S^m= S$ satisfies $S\asymp \Phi_1^{\sigma/m}$ (with a logarithmic correction when $2sm\ne \gamma(m-1)$), hence separate-variable solutions always satisfy \eqref{intro.1b} (see \eqref{friendly.giant} and Theorem \ref{Thm.Elliptic.Harnack.m}). \noindent\textbf{About the kernel of operators of the spectral type. }\label{ss2.2} In this paragraph we study the properties of the kernel of $\mathcal{L}$. While in some situations $\mathcal{L}$ may not have a kernel (for instance, in the local case), in other situations the existence of a kernel may not be obvious from the definition of the operator. In the next lemma it is shown in particular that the SFL, defined by \eqref{sLapl.Omega.Spectral}, admits a representation of the form \eqref{SFL.Kernel}.
We state here the precise result, mentioned in \cite{AbTh} and proven in \cite{SV2003} for the SFL. \begin{lem}[Spectral Kernels]\label{Lem.Spec.Ker} Let $s \in (0,1)$, let $\mathcal{L}$ be the $s^{th}$-spectral power of a linear elliptic second order operator $\mathcal A$, and let $\Phi_1 \asymp \mathrm{dist}(\cdot,\partial\Omega)^\gamma$ be the first positive eigenfunction of $\mathcal A$. Let $H(t,x,y)$ be the heat kernel of $\mathcal A$, and assume that it satisfies the following bounds: there exist constants $c_0,c_1,c_2>0$ such that for all $0<t\le1$ \begin{equation}\label{Bounds.HK.s=1.t<1} c_0\left[\frac{\Phi_1(x)}{t^{\gamma/2}}\wedge 1\right] \left[\frac{\Phi_1(y)}{t^{\gamma/2}}\wedge 1\right]\frac{\mathrm{e}^{-c_1\frac{|x-y|^2}{t}}}{t^{N/2}}\le H(t,x,y)\le c_0^{-1}\left[\frac{\Phi_1(x)}{t^{\gamma/2}}\wedge 1\right] \left[\frac{\Phi_1(y)}{t^{\gamma/2}}\wedge 1\right]\frac{\mathrm{e}^{-\frac{|x-y|^2}{c_1\,t}}}{t^{N/2}} \end{equation} and \begin{equation}\label{Bounds.HK.s=1.t>1} 0\leq H(t,x,y) \leq c_2 \Phi_1(x)\Phi_1(y)\qquad\mbox{for all $t\ge 1$. } \end{equation} Then the operator $\mathcal{L}$ can be expressed in the form \begin{equation}\label{A.op.kern.1} \mathcal{L} f(x)=P.V.\int_{\mathbb{R}^N} \big(f(x)-f(y)\big)\,K(x,y)\,{\rm d}y + B(x)\,f(x) \end{equation} with a kernel $K(x,y)$ supported in $\overline{\Omega}\times\overline{\Omega}$ satisfying \begin{multline}\label{Lem.Spec.Ker.bounds} K(x,y)\asymp\frac{1}{|x-y|^{N+2s}} \left(\frac{\Phi_1(x)}{|x-y|^\gamma }\wedge 1\right) \left(\frac{\Phi_1(y)}{|x-y|^\gamma }\wedge 1\right)\quad\mbox{and}\quad B(x)\asymp \Phi_1(x)^{-\frac{2s}{\gamma}}. \end{multline} \end{lem} The proof of this lemma follows the ideas of \cite{SV2003}; indeed, the assumptions of Lemma \ref{Lem.Spec.Ker} allow us to adapt the proof of \cite{SV2003} to our case with minor changes. \noindent {\bf Method and generality.
}Our work is part of a current effort aimed at extending the theory of evolution equations of parabolic type to a wide class of nonlocal operators, in particular operators with general kernels, which have been studied by various authors (see for instance \cite{EJT, DPQR, Ser14}). Our approach is different from many others: indeed, even if the equation is nonlinear, we concentrate on the properties of the inverse operator $\mathcal{L}^{-1}$ (more precisely, on its kernel, given by the Green function ${\mathbb G}$), rather than on the operator $\mathcal{L}$ itself. Once this setting is well established and good linear estimates for the Green function are available, the calculations and estimates are very general. Hence, the method is applicable to a very large class of equations, both for elliptic and parabolic problems, as well as to more general nonlinearities than $F(u)=u^m$ (see also related comments in the works \cite{BV-PPR1, BSV2013, BV-PPR2-1}). \noindent {\bf Finite and infinite propagation.} In all the cases considered in this paper with $s<1$, we prove that the solution becomes strictly positive inside the domain at all positive times. This is called {\sl infinite speed of propagation}, a property that does not hold in the limit $s=1$ for any $m>1$ \cite{VazBook} (in that case, finite speed of propagation holds and a free boundary appears). Previous results on this infinite speed of propagation can be found in \cite{BFR, DPQRV2}. We recall that infinite speed of propagation is typical of evolutions driven by nonlocal operators representing long-range interactions, while it fails for the standard porous medium equation; hence a trade-off takes place when both effects are combined. All our models fall on the side of infinite propagation, but we recall that finite propagation holds for a related nonlocal model called ``nonlinear porous medium flow with fractional potential pressure'', cf. \cite{CV1}.
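\noindent {\bf A dual formulation (sketch).} For the reader's convenience, we record schematically the dual formulation underlying the method described above. This is stated informally, for sufficiently smooth nonnegative solutions of the homogeneous Dirichlet problem: writing the equation as $\partial_t u+\mathcal{L} u^m=0$ and applying the inverse operator, represented by the Green function ${\mathbb G}$, one obtains
\[
\partial_t\int_\Omega {\mathbb G}(x,y)\,u(t,y)\,{\rm d}y=-u^m(t,x)\qquad\mbox{for }t>0,\;x\in\Omega,
\]
so that the required pointwise estimates on $u$ reduce to sharp two-sided bounds on ${\mathbb G}$.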
\noindent {\bf The local case.} Since $2sm>\gamma(m-1)$ when $s=1$ (independently of $m>1$), our results give a sharp behavior in the local case after a ``waiting time''. Although this is well known for the classical porous medium equation, our results also apply to uniformly elliptic operators in divergence form with $C^1$ coefficients, and yield new results in this setting. Actually, one can check that, even when the coefficients are merely measurable, many of our results remain true and provide universal upper and lower estimates. To our knowledge, such general results are completely new. \subsection{Further examples of operators}\label{sec.examples} Here we briefly exhibit a number of examples to which our theory applies, besides the RFL, CFL and SFL already discussed in Section \ref{sec.hyp.L}. These include a wide class of local and nonlocal operators. We just sketch the essential points, referring to \cite{BV-PPR2-1} for a more detailed exposition. \noindent\textbf{Censored Fractional Laplacian (CFL) and operators with more general kernels. }As already mentioned in Section \ref{ssec.examples}, assumptions (A1), (A2), and (K2) are satisfied with $\gamma=s-1/2$. Moreover, it follows from \cite{bogdan-censor, Song-coeff} that we can also consider operators of the form: \[ \mathcal{L} f(x)=\mathrm{P.V.}\int_{\Omega}\left(f(x)-f(y)\right)\frac{a(x,y)}{|x-y|^{N+2s}}\,{\rm d}y\,,\qquad\mbox{with }\frac{1}{2}<s<1\,, \] where $a(x,y)$ is a symmetric function of class $C^1$ bounded between two positive constants. The Green function ${\mathbb G}(x,y)$ of $\mathcal{L}$ satisfies the stronger assumption (K4), cf. Corollary 1.2 of~\cite{Song-coeff}. \noindent\textbf{Fractional operators with more general kernels.
}Consider integral operators of the form \[ \mathcal{L} f(x)=\mathrm{P.V.}\int_{\mathbb{R}^N}\left(f(x)-f(y)\right)\frac{a(x,y)}{|x-y|^{N+2s}}\,{\rm d}y\,, \] where $a$ is a measurable symmetric function, bounded between two positive constants, and satisfying \[ \big|a(x,y)-a(x,x)\big|\,\chi_{|x-y|<1}\le c |x-y|^\sigma\,,\qquad\mbox{with }0<s<\sigma\le 1\,, \] for some $c>0$ (actually, one can allow even more general kernels, cf. \cite{BV-PPR2-1, Kim-Coeff}). Then, for all $s\in (0, 1]$, the Green function ${\mathbb G}(x,y)$ of $\mathcal{L}$ satisfies (K4) with $\gamma=s$\,, cf. Corollary 1.4 of \cite{Kim-Coeff}. \noindent\textbf{Spectral powers of uniformly elliptic operators. }Consider a linear operator $\mathcal A$ in divergence form, \[ \mathcal A=-\sum_{i,j=1}^N\partial_i(a_{ij}\partial_j)\,, \] with uniformly elliptic $C^1$ coefficients. The uniform ellipticity allows one to build a self-adjoint operator on $\mathrm{L}^2(\Omega)$ with discrete spectrum $(\lambda_k, \phi_k)$\,. Using the spectral theorem, we can construct the spectral power of such an operator as follows: \[ \mathcal{L} f(x):=\mathcal A^s\,f(x):=\sum_{k=1}^\infty \lambda_k^s \hat{f}_k \phi_k(x),\qquad\mbox{where }\qquad \hat{f}_k=\int_\Omega f(x)\phi_k(x)\,{\rm d}x \] (we refer to the books \cite{Davies1,Davies2} for further details), and the Green function satisfies (K2) with $\gamma=1$\,, cf. \cite[Chapter 4.6]{Davies2}. Then, the first eigenfunction $\Phi_1$ is comparable to $\mathrm{dist}(\cdot, \partial\Omega)$. Also, Lemma \ref{Lem.Spec.Ker} applies (see for instance \cite{Davies2}) and allows us to get sharp upper and lower estimates for the kernel $K$ of $\mathcal{L}$, as in \eqref{Lem.Spec.Ker.bounds}\,. \noindent\textbf{Other examples.
} As explained in Section 3 of \cite{BV-PPR2-1}, our theory may also be applied to: (i) Sums of two fractional operators; (ii) Sums of the Laplacian and a nonlocal operator; (iii) Schr\"odinger equations for non-symmetric diffusions; (iv) Gradient perturbations of restricted fractional Laplacians. Finally, it is worth mentioning that our arguments readily extend to operators on manifolds for which the required bounds hold. \vskip .5cm \noindent{\bf Acknowledgments. }M.B. and J.L.V. are partially funded by Projects MTM2011-24696 and MTM2014-52240-P (Spain). A.F. has been supported by NSF Grants DMS-1262411 and DMS-1361122, and by the ERC Grant ``Regularity and Stability in Partial Differential Equations (RSPDE)''. M.B. and J.L.V. would like to acknowledge the hospitality of the Mathematics Department of the University of Texas at Austin, where part of this work was done. J.L.V. was also invited by BCAM, Bilbao. We thank an anonymous referee for pointing out that Lemma \ref{Lem.Spec.Ker} was proved in \cite{SV2003} and mentioned in \cite{AbTh}. \addcontentsline{toc}{section}{~~~References} \end{document}
\begin{document} \title[More on Groups and Counter Automata] {More on Groups and Counter Automata} \author[Takao Yuyama]{Takao Yuyama} \address{Department of Mathematics, Tokyo Institute of Technology, Tokyo, Japan} \email{[email protected]} \thanks{This work was supported by JSPS KAKENHI Grant Number 20J23039} \begin{abstract} Elder, Kambites, and Ostheimer showed that if the word problem of a finitely generated group \(H\) is accepted by a \(G\)-automaton for an abelian group \(G\), then \(H\) is virtually abelian. We give a new, elementary, and purely combinatorial proof of the theorem. Furthermore, our method extracts an explicit connection between the two groups \(G\) and \(H\) from the automaton, in the form of a group homomorphism from a subgroup of \(G\) onto a finite index subgroup of \(H\). \end{abstract} \maketitle \section{Introduction} For a group \(G\), a \emph{\(G\)-automaton} is a finite automaton augmented with a register that stores an element of \(G\). Such an automaton initializes the register with the identity element \(1_G\) of \(G\) and may update the register content by multiplying it by an element of \(G\) during the computation. The automaton accepts an input word if it can reach a terminal state with register content \(1_G\) once the entire word has been read. (For the precise definition, see \Cref{subsection-G-automata}.) For a positive integer \(n\), \(\mathbb{Z}^n\)-automata are the same as \emph{blind \(n\)-counter automata}, which were defined and studied by Greibach~\cites{MR411257, MR513714}. The notion of \(G\)-automata has been discovered repeatedly by several different authors. The name ``\(G\)-automaton'' is due to Kambites~\cite{MR2259632}. (In fact, they defined the notion of \(M\)-automata for any monoid \(M\).) Dassow--Mitrana~\cite{MR1809380} and Mitrana--Stiebe~\cite{MR1807922} use \emph{extended finite automata} (EFA) over \(G\) instead of \(G\)-automata.
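To make the informal description above concrete, here is a small executable sketch (our own illustration, not taken from the literature) of a blind \(2\)-counter automaton, i.e.\ a \(\mathbb{Z}^2\)-automaton, accepting \(\{a^n b^n c^n \mid n \geq 0\}\): the states enforce the shape \(a^* b^* c^*\), while the two counters record \(|w|_a - |w|_b\) and \(|w|_b - |w|_c\), are never inspected during the run, and must both be zero at the end.

```python
# A minimal sketch (our own illustration, not from the paper): a blind
# 2-counter automaton, i.e. a Z^2-automaton, accepting {a^n b^n c^n}.
# States q0 -> q1 -> q2 enforce the shape a*b*c*; the register in Z^2
# is only updated, never inspected, and a word is accepted iff a
# terminal state is reached with register (0, 0).

# transitions: (state, letter) -> (next state, register increment)
TRANS = {
    ("q0", "a"): ("q0", (1, 0)),
    ("q0", "b"): ("q1", (-1, 1)),
    ("q1", "b"): ("q1", (-1, 1)),
    ("q1", "c"): ("q2", (0, -1)),
    ("q2", "c"): ("q2", (0, -1)),
}
TERMINAL = {"q0", "q1", "q2"}  # every state is terminal; the counters do the work

def accepts(word: str) -> bool:
    state, reg = "q0", (0, 0)
    for ch in word:
        if (state, ch) not in TRANS:
            return False  # the run is stuck
        state, (dx, dy) = TRANS[(state, ch)]
        reg = (reg[0] + dx, reg[1] + dy)
    return state in TERMINAL and reg == (0, 0)
```

This particular automaton happens to be deterministic; in general, \(G\)-automata are non-deterministic.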
For a finitely generated group \(G\), the \emph{word problem} of \(G\), with respect to a fixed finite generating set of \(G\), is the set of words over the generating set representing the identity element of \(G\) (see \Cref{subsection-word-problem} for the precise definitions). For several language classes, the class of finitely generated groups whose word problem lies in the class has been determined~\citelist{ \cite{MR312774} \cite{MR710250} \cite{MR0357625} \cite{MR130286} \cite{MR2483126} \cite{MR2470538} \cite{MR3375036} }, and many attempts have been made for other language classes~\citelist{ \cite{MR3200359} \cite{MR3394671} \cite{MR2132375} \cite{MR4388995} \cite{kanazawa-salvati-2012-mix} \cite{MR2274726} \cite{MR4000593} \cite{MR3871464} }. One of the most remarkable theorems about word problems is the well-known result due to Muller and Schupp~\cite{MR710250}, which, combined with a theorem of Dunwoody~\cite{MR807066}, states that a group has a context-free word problem if and only if it is virtually free. These theorems suggest deep connections between group theory and formal language theory. Involving both \(G\)-automata and word problems, the following broad question was posed implicitly by Elston and Ostheimer~\cite{MR2064297} and explicitly by Kambites~\cite{MR2259632}. \begin{qu}\label[qu]{qu-grand} For a given group \(G\), is there any connection between the structural properties of \(G\) and those of the collection of groups whose word problems are accepted by \emph{non-deterministic} \(G\)-automata? \end{qu} \noindent Note that by \(G\)-automata, we always mean non-deterministic \(G\)-automata. As for \emph{deterministic} \(G\)-automata, the following theorem is known. \begin{thm}[Kambites~\cite{MR2259632}*{Theorem 1}, 2006]\label[thm]{theorem-deterministic} Let \(G\) and \(H\) be groups with \(H\) finitely generated. Then the word problem of \(H\) is accepted by a \emph{deterministic} \(G\)-automaton if and only if \(H\) has a finite index subgroup which embeds in \(G\).
\end{thm} For non-deterministic \(G\)-automata, several results are known for specific types of groups. For a free group \(F\) of rank \(\geq 2\), it is known that a language is accepted by an \(F\)-automaton if and only if it is context-free~\citelist{\cite{MR0152391}*{\textsc{Proposition} 2} \cite{MR2151422}*{Corollary 4.5} \cite{MR2482816}*{Theorem 7}}. Combined with the Muller--Schupp theorem, this shows that the class of groups whose word problems are accepted by \(F\)-automata is the class of virtually free groups. The class of groups whose word problems are accepted by \((F \times F)\)-automata is exactly the class of recursively presentable groups~\citelist{\cite{MR2151422}*{Corollary 3.5} \cite{MR2482816}*{Theorem 8} \cite{MR1807922}*{Theorem 10}}. For the case where \(G\) is (virtually) abelian, the following result was shown by Elder, Kambites, and Ostheimer. \begin{thm}[Elder, Kambites, and Ostheimer~\cite{MR2483126}, 2008]\label[thm]{theorem-EKO} \begin{enumerate} \item\label{item-Zn} Let \(H\) be a finitely generated group and \(n\) be a positive integer. Then the word problem of \(H\) is accepted by a \(\mathbb{Z}^n\)-automaton if and only if \(H\) is virtually free abelian of rank at most \(n\) \cite{MR2483126}*{Theorem 1}. \item\label{item-G-virtually-abelian} Let \(G\) be a virtually abelian group and \(H\) be a finitely generated group. Then the word problem of \(H\) is accepted by a \(G\)-automaton if and only if \(H\) has a finite index subgroup which embeds in \(G\) \cite{MR2483126}*{Theorem 4}. \end{enumerate} \end{thm} \noindent However, their proof is somewhat indirect in the sense that it depends on a deep theorem by Gromov~\cite{MR623534}, which states that every finitely generated group with polynomial growth function is virtually nilpotent. In fact, their proof proceeds as follows. Let \(H\) be a group whose word problem is accepted by a \(\mathbb{Z}^n\)-automaton.
They first develop some techniques to compute several bounds for linear maps and semilinear sets. Then a map from \(H\) to \(\mathbb{Z}^n\) with certain geometric conditions is constructed to prove that \(H\) has polynomial growth function. By Gromov's theorem, \(H\) is virtually nilpotent. Finally, they conclude that \(H\) is virtually abelian, using some theorems about nilpotent groups and semilinear sets. Because of the indirectness of their proof, the embedding in \cref{theorem-EKO} \ref{item-G-virtually-abelian} is obtained only \emph{a posteriori} and hence bears no relation to the combinatorial structure of the \(G\)-automaton. To our knowledge, there have been almost no attempts so far to obtain explicit algebraic connections between \(G\) and \(H\), where \(H\) is a group that has a word problem accepted by a \(G\)-automaton. The only exception is the result due to Holt, Owens, and Thomas~\cite{MR2470538}*{\textsc{Theorem} 4.2}, where they gave a combinatorial proof to a special case of \cref{theorem-EKO} \ref{item-Zn}, for the case where \(n = 1\). (In fact, their theorem is slightly stronger than \cref{theorem-EKO} \ref{item-Zn} for \(n = 1\) because it is for \emph{non-blind} one-counter automata. See also \cite{MR2483126}*{Section 7}.) In this paper, we give a new, elementary, and purely combinatorial proof to \cref{theorem-EKO}. \begin{thm}\label[thm]{main-theorem} Let \(G\) be an abelian group and \(H\) be a finitely generated group. Suppose that the word problem of \(H\) is accepted by a \(G\)-automaton \(A\). Then one can define a finite collection of monoids \((M(\mu, p))_{\mu, p}\), as in {\normalfont\cref{def-M}}, such that: \begin{enumerate} \item each \(M(\mu, p)\) consists of closed paths in \(A\) with certain conditions, \item each \(M(\mu, p)\) induces a group homomorphism \(f_{\mu, p}\) from a subgroup \(G(\mu, p)\) of \(G\) onto a subgroup \(H(\mu, p)\) of \(H\), and \item at least one of \(H(\mu, p)\)'s is a finite index subgroup of \(H\).
\end{enumerate} \end{thm} \noindent For the implication from \cref{main-theorem} to \cref{theorem-EKO}, see \Cref{subsection-G-automata}. Note that the direction of the group homomorphisms \(f_{\mu, p}\) in \cref{main-theorem} is opposite to the embeddings in \cref{theorem-deterministic} and \cref{theorem-EKO} \ref{item-G-virtually-abelian}. This direction seems more natural for the non-deterministic case; this observation suggests the following question. \begin{qu}\label[qu]{qu-hom} Let \(G\) and \(H\) be groups with \(H\) finitely generated. Suppose that the word problem of \(H\) is accepted by a \(G\)-automaton. Does there exist a group homomorphism from a subgroup of \(G\) onto a finite index subgroup of \(H\)? If so, is it obtained combinatorially? \end{qu} Our \cref{main-theorem} is the very first step toward approaching \cref{qu-hom}. Note that an affirmative answer to \cref{qu-hom} would generalize \cref{theorem-deterministic}. \section{Preliminaries} \subsection{Words, subwords, and scattered subwords} For a set \(\Sigma\), we write \(\Sigma^*\) for the free monoid generated by \(\Sigma\), i.e., the set of \emph{words} over \(\Sigma\). For a word \(u = a_1 a_2 \dotsm a_n \in \Sigma^*\) (\(n \geq 0, a_i \in \Sigma\)), the number \(n\) is called the \emph{length} of \(u\), which is denoted by \(\len{u}\). For two words \(u, v \in \Sigma^*\), the \emph{concatenation} of \(u\) and \(v\) is denoted by \(u \cdot v\), or simply \(u v\). The identity element of \(\Sigma^*\) is the \emph{empty word}, denoted by \(\varepsilon\), which is the unique word of length zero. For an integer \(n \geq 0\), the \(n\)-fold concatenation of a word \(u \in \Sigma^*\) is denoted by \(u^n\). For an integer \(n > 0\), we write \(\Sigma^{< n}\) for the set of words of length less than \(n\). A word \(u \in \Sigma^*\) is a \emph{subword} of a word \(v \in \Sigma^*\), denoted by \(u \sqsubseteq v\), if there exist two words \(u_1, u_2 \in \Sigma^*\) such that \(u_1 u u_2 = v\).
A word \(u \in \Sigma^*\) is a \emph{scattered subword} of a word \(v \in \Sigma^*\), denoted by \(u \sqsubseteq_{\mathrm{sc}} v\), if there exist two finite sequences of words \(u_1, u_2, \dotsc, u_n \in \Sigma^*\) (\(n \geq 0\)) and \(v_0, v_1, \dotsc, v_n \in \Sigma^*\) such that \(u = u_1 u_2 \dotsm u_n\) and \(v = v_0 u_1 v_1 u_2 v_2 \dotsm u_n v_n\). That is, \(v\) is obtained from \(u\) by inserting some words. Note that the two binary relations \(\sqsubseteq\) and \(\sqsubseteq_{\mathrm{sc}}\) are both partial orders on \(\Sigma^*\). \subsection{Word problem for groups}\label{subsection-word-problem} Let \(H\) be a finitely generated group. A \emph{choice of generators} for \(H\) is a surjective monoid homomorphism \(\rho\) from the free monoid \(\Sigma^*\) on a finite alphabet \(\Sigma\) onto \(H\). The \emph{word problem} of \(H\) with respect to \(\rho\), denoted by \(\WP_\rho(H)\), is the set of words in \(\Sigma^*\) mapped to the identity element \(1_H\) of \(H\) via \(\rho\), i.e., \(\WP_\rho(H) = \rho^{-1}(1_H)\). Although the word problem \(\WP_\rho(H)\) depends on the choice of generators \(\rho\), this does not cause problems: \begin{lem}[e.g., \cite{MR2132375}*{Lemma 1}] Let \(\mathcal{C}\) be a class of languages closed under inverse homomorphisms and let \(H\) be a finitely generated group. Then \(\WP_\rho(H) \in \mathcal{C}\) for \emph{some} choice of generators \(\rho\) if and only if \(\WP_\rho(H) \in \mathcal{C}\) for \emph{any} choice of generators \(\rho\). 
\qed \end{lem} \noindent This is the reason why we use ``\emph{the} word problem of \(H\)'' rather than ``\emph{a} word problem of \(H\).'' \subsection{Graphs and paths} A \emph{graph} is a \(4\)-tuple \((V, E, \mathop{\mathsf{s}}, \mathop{\mathsf{t}})\), where \(V\) is the set of vertices, \(E\) is the set of (directed) edges, \(\mathop{\mathsf{s}}\colon E \to V\) and \(\mathop{\mathsf{t}}\colon E \to V\) are functions assigning to every edge \(e \in E\) the \emph{source} \(\mathop{\mathsf{s}}(e) \in V\) and the \emph{target} \(\mathop{\mathsf{t}}(e) \in V\), respectively. A graph is \emph{finite} if it has only finitely many vertices and edges. A \emph{path} (of \emph{length} \(n\)) in a graph \(\Gamma = (V, E, \mathop{\mathsf{s}}, \mathop{\mathsf{t}})\) is a word \(e_1 e_2 \dotsm e_n \in E^*\) (\(n \geq 0\)) of edges \(e_i \in E\) such that \(\mathop{\mathsf{t}}(e_i) = \mathop{\mathsf{s}}(e_{i + 1})\) for \(i = 1, 2, \dotsc, n - 1\). We usually use Greek letters to denote paths in a graph. For a non-empty path \(\omega = e_1 e_2 \dotsm e_n \in E^*\), the source and the target of \(\omega\) are defined as \(\mathop{\mathsf{s}}(\omega) = \mathop{\mathsf{s}}(e_1)\) and \(\mathop{\mathsf{t}}(\omega) = \mathop{\mathsf{t}}(e_n)\), respectively. If \(\omega = e_1 e_2 \dotsm e_n\) and \(\omega' = e_1' e_2' \dotsm e_k'\) are non-empty paths such that \(\mathop{\mathsf{t}}(\omega) = \mathop{\mathsf{s}}(\omega')\), or at least one of \(\omega\) and \(\omega'\) is empty, then the concatenation of \(\omega\) and \(\omega'\), denoted by \(\omega \cdot \omega'\) or \(\omega \omega'\), is the path \(e_1 e_2 \dotsm e_n e_1' e_2' \dotsm e_k'\) of length \(n + k\), i.e., the concatenation as words. A path \(\omega\) in \(\Gamma\) is \emph{closed} if \(\mathop{\mathsf{s}}(\omega) = \mathop{\mathsf{t}}(\omega)\), or \(\omega = \varepsilon\). For a closed path \(\sigma\) and an integer \(n \geq 0\), we write \(\sigma^n\) for the \(n\)-fold concatenation of \(\sigma\). 
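The definitions in the preceding subsections are easy to implement directly; the following sketch (in our own ad hoc encoding, not taken from the paper) checks path validity and closedness on a small graph, together with the scattered-subword relation, which applies verbatim to paths regarded as words over the edge set \(E\).

```python
# A small sketch (our own encoding) of the definitions above: a graph
# given by edge records with source/target, paths as tuples of edge
# names, and the scattered-subword relation on words.

# a triangle graph: vertices 0, 1, 2 and edges e0: 0->1, e1: 1->2, e2: 2->0
EDGES = {"e0": (0, 1), "e1": (1, 2), "e2": (2, 0)}

def is_path(word):
    """A word of edges is a path iff consecutive edges match target-to-source."""
    return all(EDGES[word[i]][1] == EDGES[word[i + 1]][0]
               for i in range(len(word) - 1))

def is_closed(word):
    """A path is closed iff it is empty or its source equals its target."""
    return word == () or (is_path(word)
                          and EDGES[word[0]][0] == EDGES[word[-1]][1])

def is_scattered_subword(u, v):
    """u is a scattered subword of v iff u is obtained from v by deleting letters."""
    rest = iter(v)
    return all(ch in rest for ch in u)  # 'in' consumes the iterator
```

For instance, `("e0", "e1", "e2")` is a closed path in the triangle, and deleting its middle edge yields a scattered subword that is no longer a path.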
For a graph \(\Gamma = (V, E, \mathop{\mathsf{s}}, \mathop{\mathsf{t}})\), an \emph{edge-labeling function} is a function \(\ell\) from \(E\) to a set \(M\). If \(M\) is a monoid and \(\omega = e_1 e_2 \dotsm e_n\) is a path in \(\Gamma\), then the label of \(\omega\) is defined as \(\ell(\omega) = \ell(e_1) \ell(e_2) \dotsm \ell(e_n)\) via the multiplication of \(M\). \subsection{\texorpdfstring{\(G\)}{G}-automata}\label{subsection-G-automata} For a group \(G\), a (non-deterministic) \emph{\(G\)-automaton} over a finite alphabet \(\Sigma\) is defined as a \(5\)-tuple \((\Gamma, \ell_G, \ell_\Sigma, p_{\mathrm{init}}, p_{\mathrm{ter}})\), where \(\Gamma = (V, E, \mathop{\mathsf{s}}, \mathop{\mathsf{t}})\) is a finite graph, \(\ell_G\colon E \to G\) and \(\ell_\Sigma\colon E \to \Sigma^*\) are edge-labeling functions, \(p_{\mathrm{init}} \in V\) is the \emph{initial vertex}, and \(p_{\mathrm{ter}} \in V\) is the \emph{terminal vertex}. For simplicity, we assume that \(\ell_\Sigma(e) \in \Sigma \cup \{\varepsilon\}\) for each \(e \in E\). (Note that this assumption does not decrease the accepting power of \(G\)-automata. Indeed, if necessary, one can subdivide an edge \(e\) with labels \(\ell_\Sigma(e) = u v, \ell_G(e) = g\) into two new edges \(e_1, e_2\) with labels \(\ell_\Sigma(e_1) = u, \ell_G(e_1) = g\) and \(\ell_\Sigma(e_2) = v, \ell_G(e_2) = 1_G\).) An \emph{accepting path} in a \(G\)-automaton \(A = (\Gamma, \ell_G, \ell_\Sigma, p_{\mathrm{init}}, p_{\mathrm{ter}})\) is a path \(\alpha\) in \(\Gamma\) such that \(\mathop{\mathsf{s}}(\alpha) = p_{\mathrm{init}}\), \(\mathop{\mathsf{t}}(\alpha) = p_{\mathrm{ter}}\), and \(\ell_G(\alpha) = 1_G\) (we consider that the empty path \(\varepsilon \in E^*\) is accepting if and only if \(p_{\mathrm{init}} = p_{\mathrm{ter}}\)). 
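The subdivision described in the parenthetical remark above can be carried out mechanically; the following sketch (our own encoding; the fresh vertex names `v0`, `v1`, \dots{} are assumed not to collide with existing vertices) splits every edge whose \(\Sigma\)-label is longer than one letter.

```python
# A sketch (our own encoding) of the normalization described above:
# split every edge whose Sigma-label has length > 1 into single-letter
# edges, attaching the G-label to the first piece and the identity to
# the rest.  Edges are tuples (source, target, sigma_label, g_label);
# G-labels are strings with identity "1"; fresh vertex names v0, v1, ...
# are assumed not to collide with existing vertices.

def subdivide(edges):
    new_edges, fresh = [], 0
    for (s, t, word, g) in edges:
        if len(word) <= 1:  # already of the required form
            new_edges.append((s, t, word, g))
            continue
        # introduce len(word) - 1 fresh intermediate vertices between s and t
        vs = [s] + [f"v{fresh + i}" for i in range(len(word) - 1)] + [t]
        fresh += len(word) - 1
        for i, ch in enumerate(word):
            new_edges.append((vs[i], vs[i + 1], ch, g if i == 0 else "1"))
    return new_edges
```

For example, `subdivide([("p", "q", "ab", "g")])` yields the two edges `("p", "v0", "a", "g")` and `("v0", "q", "b", "1")`, so the path label and the \(G\)-label of every traversal are unchanged.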
We say that a path \(\omega\) in \(\Gamma\) is \emph{promising} if \(\omega\) is a subword of some accepting path in \(A\), i.e., there exist two paths \(\omega_1, \omega_2 \in E^*\) such that the concatenation \(\omega_1 \omega \omega_2 \in E^*\) is an accepting path in \(A\). The \emph{language accepted} by a \(G\)-automaton \(A\), denoted by \(L(A)\), is the set of all words \(u \in \Sigma^*\) such that \(u\) is the label of some accepting path in \(A\), i.e., \(L(A) = \{\, \ell_\Sigma(\alpha) \in \Sigma^* \mid \text{\(\alpha\) is an accepting path in \(A\)} \,\}\). \begin{prop}[e.g., \cite{MR2482816}*{Proposition 2}] For a group \(G\), the class of languages accepted by \(G\)-automata is closed under inverse homomorphisms. \qed \end{prop} Replacing the register group \(G\) by a finite index subgroup or a finite index overgroup does not change the class of languages accepted by \(G\)-automata: \begin{prop}[e.g., \cite{MR2483126}*{Proposition 8}] Let \(G\) be a group and \(H\) be a subgroup of \(G\). Then every language accepted by an \(H\)-automaton is accepted by a \(G\)-automaton. If \(H\) has finite index in \(G\), then the converse holds. \qed \end{prop} \noindent Since the word problem of \(H\) is trivially accepted by an \(H\)-automaton for every finitely generated group \(H\), we obtain the following corollary. \begin{cor}\label[cor]{cor-implication} \cref{main-theorem} implies \cref{theorem-EKO}. \qed \end{cor} \section{Proof of the main theorem}\label{section-proof} Throughout this section, we fix an abelian group \(G\), a finitely generated group \(H\), a choice of generators \(\rho\colon \Sigma^* \to H\), and a \(G\)-automaton \(A = (\Gamma = (V, E, \mathop{\mathsf{s}}, \mathop{\mathsf{t}}), \ell_G, \ell_\Sigma, p_{\mathrm{init}}, p_{\mathrm{ter}})\) such that \(\WP_\rho(H) = L(A)\). We write the group operation of \(G\) additively and \(0_G\) for the identity element of \(G\). The following lemma is a starting point of our proof.
\begin{lem}\label[lem]{lem-well-def} Let \(\omega\) and \(\omega'\) be paths in \(\Gamma\) such that \(\mathop{\mathsf{s}}(\omega) = \mathop{\mathsf{s}}(\omega')\) and \(\mathop{\mathsf{t}}(\omega) = \mathop{\mathsf{t}}(\omega')\), and suppose that \(\omega\) is promising. Then \(\ell_G(\omega) = \ell_G(\omega')\) implies \(\rho(\ell_\Sigma(\omega)) = \rho(\ell_\Sigma(\omega'))\). \end{lem} \begin{proof} Since \(\omega\) is promising, there exist two paths \(\omega_1, \omega_2\) in \(\Gamma\) such that \(\omega_1 \omega \omega_2\) is an accepting path in \(A\). It follows from the assumption that \(\ell_G(\omega_1 \omega' \omega_2) = \ell_G(\omega_1) + \ell_G(\omega') + \ell_G(\omega_2) = \ell_G(\omega_1) + \ell_G(\omega) + \ell_G(\omega_2) = \ell_G(\omega_1 \omega \omega_2) = 0_G\), and \(\omega_1 \omega' \omega_2\) is also an accepting path in \(A\). That is, \(\ell_\Sigma(\omega_1 \omega \omega_2), \ell_\Sigma(\omega_1 \omega' \omega_2) \in \WP_\rho(H)\), and \(\rho(\ell_\Sigma(\omega_1)) \rho(\ell_\Sigma(\omega)) \rho(\ell_\Sigma(\omega_2)) = 1_H = \rho(\ell_\Sigma(\omega_1)) \rho(\ell_\Sigma(\omega')) \rho(\ell_\Sigma(\omega_2))\) in \(H\). Thus we have \(\rho(\ell_\Sigma(\omega)) = \rho(\ell_\Sigma(\omega'))\). \end{proof} \begin{defi}\label[defi]{def-minimal} An accepting path \(\alpha\) in \(A\) is \emph{minimal} if it is minimal with respect to the scattered subword relation \(\sqsubseteq_{\mathrm{sc}}\) on \(E^*\). An accepting path \(\alpha\) in \(A\) \emph{dominates} a minimal accepting path \(\mu\) in \(A\) if \(\mu \sqsubseteq_{\mathrm{sc}} \alpha\). \end{defi} A similar notion of minimal accepting paths can be found in \cite{https://doi.org/10.48550/arxiv.math/0606415}*{Section 4}. \begin{rem}\label[rem]{rem-Higman} Note that, by Higman's lemma~\cite{MR49867}*{\textsc{Theorem} 4.4}, the scattered subword relation \(\sqsubseteq_{\mathrm{sc}}\) on \(\Sigma^*\) is a \emph{well-quasi-order}. 
In particular, there are only finitely many minimal accepting paths in \(A\), and every accepting path in \(A\) dominates some minimal accepting path in \(A\). \end{rem} \begin{defi}\label[defi]{def-pumpable} Let \(\mu = e_1 e_2 \dotsm e_n \in E^*\) (\(e_i \in E\)) be a minimal accepting path in \(A\). A closed path \(\sigma \in E^*\) in \(\Gamma\) is \emph{pumpable in \(\mu\)} if there exists an accepting path \(\alpha\) in \(A\) dominating \(\mu\) such that \(\alpha = \alpha_0 e_1 \alpha_1 e_2 \alpha_2 \dotsm e_n \alpha_n\) for some paths \(\alpha_0, \alpha_1, \dotsc, \alpha_n \in E^*\) in \(\Gamma\) and \(\sigma \sqsubseteq \alpha_j\) for some \(j \in \{0, 1, \dotsc, n\}\). \end{defi} \begin{rems} \begin{enumerate} \item In \cref{def-pumpable}, each \(\alpha_i\) is a closed path in \(\Gamma\) and satisfies \(\ell_G(\alpha_0) + \ell_G(\alpha_1) + \dotsb + \ell_G(\alpha_n) = \ell_G(\alpha) = 0_G\) since \(\ell_G(\mu) = 0_G\) and \(G\) is abelian. \item Every closed path pumpable in a minimal accepting path \(\mu\) is promising. \end{enumerate} \end{rems} \begin{defi}\label[defi]{def-M} For a minimal accepting path \(\mu\) in \(A\) and a vertex \(p \in V\), define \[ M(\mu, p) = \{\, \sigma \mid \text{\(\sigma\) is a closed path in \(\Gamma\) pumpable in \(\mu\) such that \(\mathop{\mathsf{s}}(\sigma) = p\), or \(\sigma = \varepsilon\)} \,\}. \] \end{defi} \begin{lem}\label[lem]{lem-monoid} Each \(M(\mu, p)\) is a monoid with respect to the concatenation operation, i.e., \(\sigma_1, \sigma_2 \in M(\mu, p)\) implies \(\sigma_1 \sigma_2 \in M(\mu, p)\).
\end{lem} \begin{proof} Since both \(\sigma_1\) and \(\sigma_2\) are pumpable in \(\mu = e_1 e_2 \dotsm e_n \in E^*\) (\(e_i \in E\)), there exist two accepting paths \(\alpha = \alpha_0 e_1 \alpha_1 e_2 \alpha_2 \dotsm e_n \alpha_n\) (\(\alpha_i \in E^*\)) and \(\beta = \beta_0 e_1 \beta_1 e_2 \beta_2 \dotsm e_n \beta_n\) (\(\beta_i \in E^*\)) such that \(\sigma_1 \sqsubseteq \alpha_i\) and \(\sigma_2 \sqsubseteq \beta_j\) for some \(i, j \in \{0, 1, \dotsc, n\}\). Then we have \(\alpha_i = \alpha_i' \sigma_1 \alpha_i''\) for some \(\alpha_i', \alpha_i'' \in E^*\) and \(\beta_j = \beta_j' \sigma_2 \beta_j''\) for some \(\beta_j', \beta_j'' \in E^*\). We may assume that \(i \leq j\). Since \(G\) is abelian, the merged path \(\gamma = (\alpha_0 \beta_0) e_1 (\alpha_1 \beta_1) e_2 (\alpha_2 \beta_2) \dotsm e_n (\alpha_n \beta_n)\) and its rearrangement \begin{equation} \gamma' = (\alpha_0 \beta_0) e_1 (\alpha_1 \beta_1) e_2 (\alpha_2 \beta_2) \dotsm e_i (\alpha_i' \sigma_1 \sigma_2 \alpha_i'') e_{i + 1} \dotsm e_j (\beta_j' \beta_j'') e_{j + 1} \dotsm e_n (\alpha_n \beta_n) \label{eq-monoid} \end{equation} are accepting paths in \(A\) (\Cref{fig-monoid}). \begin{figure} \caption{Construction of the accepting path \(\gamma'\) in \eqref{eq-monoid}.} \label{fig-monoid} \end{figure} \end{proof} For each \(M(\mu, p)\), \cref{lem-monoid} allows us to define a surjective monoid homomorphism \(f_{\mu, p}\colon M(\mu, p) \to \rho(\ell_\Sigma(M(\mu, p)))\) as the composition function \(\rho \circ \ell_\Sigma\). By \cref{lem-well-def}, \(f_{\mu, p}\) induces a well-defined surjective monoid homomorphism \(\bar{f}_{\mu, p}\colon \ell_G(M(\mu, p)) \to \rho(\ell_\Sigma(M(\mu, p)))\). Let \(G(\mu, p)\) (resp.\ \(H(\mu, p)\)) denote the subgroup of \(G\) generated by \(\ell_G(M(\mu, p))\) (resp.\ the subgroup of \(H\) generated by \(\rho(\ell_\Sigma(M(\mu, p)))\)).
One can easily extend \(\bar{f}_{\mu, p}\) to a unique surjective group homomorphism \(\tilde{f}_{\mu, p}\colon G(\mu, p) \to H(\mu, p)\). The remaining task is to prove that at least one of the \(H(\mu, p)\)'s is a finite index subgroup of \(H\). \begin{lem}\label[lem]{lem-downward} Each \(M(\mu, p)\) is downward closed with respect to \(\sqsubseteq_{\mathrm{sc}}\), i.e., if \(\sigma\) is an element of \(M(\mu, p)\) and \(\tau\) is a closed path in \(\Gamma\) with \(\mathop{\mathsf{s}}(\tau) = p\) such that \(\tau \sqsubseteq_{\mathrm{sc}} \sigma\), then \(\tau \in M(\mu, p)\). \end{lem} \begin{proof} Suppose that \(\tau = e_1' e_2' \dotsm e_k'\) (\(k \geq 0, e_i' \in E\)) and \(\sigma = \sigma_0 e_1' \sigma_1 e_2' \dotsm e_k' \sigma_k\) (\(\sigma_i \in E^*\)). Each \(\sigma_i\) is a closed path in \(\Gamma\). Since, by \cref{lem-monoid}, \(\sigma^2\) is pumpable in \(\mu = e_1 e_2 \dotsm e_n\) (\(n \geq 0, e_i \in E\)), there exists an accepting path \(\alpha = \alpha_0 e_1 \alpha_1 e_2 \alpha_2 \dotsm e_n \alpha_n\) dominating \(\mu\) such that \(\sigma^2 \sqsubseteq \alpha_i\) for some \(i \in \{0, 1, \dotsc, n\}\). If \(\alpha_i = \alpha_i' \sigma^2 \alpha_i''\), then the path \begin{equation} \gamma = \alpha_0 e_1 \alpha_1 e_2 \alpha_2 \dotsm e_i (\alpha_i' \cdot \tau \cdot (\sigma_0^2 e_1' \sigma_1^2 e_2' \sigma_2^2 \dotsm e_k' \sigma_k^2) \cdot \alpha_i'') e_{i + 1} \dotsm e_n \alpha_n \label{eq-downward} \end{equation} is an accepting path in \(A\) (\Cref{fig-downward}), and \(\tau\) occurs as a factor of the closed path inserted at position \(i\); hence \(\tau\) is pumpable in \(\mu\), i.e., \(\tau \in M(\mu, p)\). \begin{figure} \caption{Construction of the path \(\tau \cdot (\sigma_0^2 e_1' \sigma_1^2 e_2' \sigma_2^2 \dotsm e_k' \sigma_k^2)\) in \eqref{eq-downward}} \label{fig-downward} \end{figure} \end{proof} \begin{lem}\label[lem]{lem-shrink} Let \(\sigma \in M(\mu, p)\) and \(\omega \sqsubseteq \sigma\) be a path. Then there exist two paths \(\omega_1, \omega_2 \in E^{< \card{V}}\) such that \(\omega_1 \omega \omega_2 \in M(\mu, p)\).
\end{lem} \begin{proof} Let \((\omega_1, \omega_2) \in E^* \times E^*\) be a pair of two paths such that \(\omega_1 \omega \omega_2 \in M(\mu, p)\) and \(\max\{\len{\omega_1}, \len{\omega_2}\}\) is minimum. Such a pair exists since \(\omega \sqsubseteq \sigma \in M(\mu, p)\). Suppose, to the contrary, that \(\max\{\len{\omega_1}, \len{\omega_2}\} \geq \card{V}\), say \(\len{\omega_1} \geq \card{V}\). By the pigeonhole principle, \(\omega_1\) must visit some vertex \(q \in V\) at least twice. That is, there exist three paths \(\alpha, \beta, \gamma\) such that \(\omega_1 = \alpha \beta \gamma\) and \(\beta\) is a non-empty closed path. Now we have \(\alpha \gamma \omega \omega_2 \sqsubseteq_{\mathrm{sc}} \omega_1 \omega \omega_2 \in M(\mu, p)\), and \cref{lem-downward} implies \(\alpha \gamma \omega \omega_2 \in M(\mu, p)\), which contradicts the minimality of \((\omega_1, \omega_2)\). \end{proof} \begin{proof}[Proof of \cref{main-theorem}] Let \(h \in H\) and fix a word \(v \in \Sigma^*\) such that \(\rho(v) = h\). There exists a word \(\bar{v} \in \Sigma^*\) such that \(v \bar{v} \in \WP_\rho(H)\). Define \[ N = 1 + \max\{\, \len{\mu} \mid \text{\(\mu \in E^*\) is a minimal accepting path in \(A\)} \,\}, \] and then \(N < \infty\) by \cref{rem-Higman}. Since \((v \bar{v})^N \in \WP_\rho(H)\), there exists an accepting path \begin{equation} \alpha = \omega_1 \bar{\omega}_1 \omega_2 \bar{\omega}_2 \dotsm \omega_N \bar{\omega}_N \label{eq-v} \end{equation} in \(A\) such that \(\ell_\Sigma(\omega_i) = v\) and \(\ell_\Sigma(\bar{\omega}_i) = \bar{v}\) for \(i = 1, 2, \dotsc, N\). Let \(\mu = e_1 e_2 \dotsm e_n\) (\(e_i \in E\)) be a minimal accepting path such that \(\alpha\) dominates \(\mu\). Then we have another decomposition \begin{equation} \alpha = \alpha_0 e_1 \alpha_1 e_2 \alpha_2 \dotsm e_n \alpha_n \label{eq-mu} \end{equation} for some closed paths \(\alpha_0, \alpha_1, \dotsc, \alpha_n \in E^*\).
Since \(N > n\) and each \(e_i\) in the decomposition \eqref{eq-mu} is contained in at most one \(\omega_i\) in the decomposition \eqref{eq-v}, at least one of the \(\omega_i\)'s is disjoint from all \(e_i\)'s, i.e., there exist \(i \in \{1, 2, \dotsc, N\}\) and \(j \in \{0, 1, \dotsc, n\}\) such that \(\omega_i \sqsubseteq \alpha_j\). Since \(\alpha_j\) is a pumpable closed path in \(\mu\), \(\alpha_j\) is an element of \(M(\mu, \mathop{\mathsf{s}}(\alpha_j))\). By \cref{lem-shrink}, there exist \(\alpha_j', \alpha_j'' \in E^{< \card{V}}\) such that \(\alpha_j' \omega_i \alpha_j'' \in M(\mu, \mathop{\mathsf{s}}(\alpha_j))\). Then we have \(\len{\ell_\Sigma(\alpha_j')}, \len{\ell_\Sigma(\alpha_j'')} < \card{V}\) and \(\rho(\ell_\Sigma(\alpha_j')) \rho(\ell_\Sigma(\omega_i)) \rho(\ell_\Sigma(\alpha_j'')) \in H(\mu, \mathop{\mathsf{s}}(\alpha_j))\), hence \[ h = \rho(v) = \rho(\ell_\Sigma(\omega_i)) \in \rho(\ell_\Sigma(\alpha_j'))^{-1} H(\mu, \mathop{\mathsf{s}}(\alpha_j)) \rho(\ell_\Sigma(\alpha_j''))^{-1}. \] From the above argument, we obtain \[ H = \bigcup \mleft\{\, h_1^{-1} H(\mu, p) h_2^{-1} \mathrel{}\middle|\mathrel{} \begin{gathered} \text{\(\mu\) is a minimal accepting path in \(A\),}\\ \text{\(p \in V\), and \(h_1, h_2 \in \rho(\Sigma^{< \card{V}})\)} \end{gathered} \mright\}, \] where the right-hand side is a finite union of translates of the subgroups \(H(\mu, p)\) by \cref{rem-Higman}. Thus, by B.\ H.\ Neumann's lemma~\cite{MR62122}*{(4.1) \textsc{Lemma} and (4.2)}, at least one of the \(H(\mu, p)\)'s has finite index in \(H\). \end{proof} \end{document}
\begin{document} \title{{\Large\bf Weak amenability of weighted group algebras}} \author{{\normalsize\sc M. J. Mehdipour and A. Rejali\footnote{Corresponding author}}} \maketitle {\footnotesize {\bf Abstract.} In this paper, we study weak amenability of Beurling algebras. To this end, we introduce the notion of inner quasi-additive functions and prove that for a locally compact group $G$, the Banach algebra $L^1(G, \omega)$ is weakly amenable if and only if every non-inner quasi-additive function in $L^\infty(G, 1/\omega)$ is unbounded. This provides an answer to the question concerning weak amenability of $L^1(G, \omega)$ and improves some known results in connection with it.} {\footnotetext{ 2020 {\it Mathematics Subject Classification}: 43A20, 47B47, 47B48. {\it Keywords}: Locally compact group, Weak amenability, Weighted group algebras.}} \section{\normalsize\bf Introduction} Throughout this paper $G$ is a locally compact group with an identity element $e$. Let $L^1(G)$ and $M(G)$ be the group and measure algebras of $G$, respectively. Let also $L^\infty(G)$ be the Lebesgue space of essentially bounded Borel measurable functions on $G$; see \cite{d,hr} for an extensive study of these spaces. Let us recall that a continuous function $\omega: G\rightarrow [1, \infty)$ is called a \emph{weight} \emph{function} if for every $x, y\in G$ $$ \omega(xy)\leq\omega(x)\;\omega(y)\quad\hbox{and}\quad\omega(e)=1. $$ An elementary computation shows that the functions $\omega^*$ and $\omega^\otimes$ defined by $$ \omega^*(x):=\omega(x)\omega(x^{-1})\quad\hbox{and}\quad\omega^\otimes (x, y)=\omega(x)\omega(y) $$ are weight functions on $G$ and $G\times G$, respectively. Let the function $\Omega: G\times G\rightarrow (0, 1]$ be defined as follows: $$ \Omega(x, y)=\frac{\omega(xy)}{\omega^\otimes (x, y)}.
$$ Then $\Omega$ is called \emph{zero cluster} if for every pair of sequences $(x_n)_n$ and $(y_m)_m$ of distinct elements in $G$, we have \begin{eqnarray*}\label{positive cluster} \lim_n\lim_m\Omega(x_n, y_m)=0=\lim_m\lim_n\Omega(x_n, y_m), \end{eqnarray*} whenever both iterated limits exist. Let $L^1(G, \omega)$ be the Banach space of all Borel measurable functions $f$ on $G$ such that $\omega f\in L^1(G)$. Then $L^1(G, \omega)$ with the convolution product ``$\ast$" and the norm $\|f\|_\omega=\|\omega f\|_1$ is a Banach algebra. Let $L^\infty (G,1/\omega)$ be the space of all Borel measurable functions $f$ on $G$ with $f/\omega\in L^\infty (G)$. Then $L^\infty (G,1/\omega)$ with the norm $$ \|h\|_{\infty,\;\omega}=\|h/\omega\|_\infty, $$ and the multiplication $``\cdot_\omega"$ defined by $$ h\cdot_\omega k=hk/\omega\quad\quad\quad (h, k\in L^\infty(G, 1/\omega)) $$ is a commutative $C^*$-algebra. Also, $L^\infty(G, 1/\omega)$ and $L^1(G, \omega)^*$ are isometrically isomorphic with the duality given through $$ \langle f, h\rangle=\int_G f(x) h(x)\; dx\quad\quad\quad( f\in L^1(G, \omega), h\in L^\infty(G, 1/\omega)). $$ Let $C_b(G, 1/\omega)$ denote the subspace of $L^\infty(G, 1/\omega)$ consisting of all bounded continuous functions, and let $C_0(G, 1/\omega)$ be the subspace of $C_b(G, 1/\omega)$ consisting of all functions that vanish at infinity. Let also $M(G, \omega)$ be the Banach algebra of all complex regular Borel measures $\mu$ on $G$ for which $\omega\mu\in M(G)$. It is well-known that $M(G, \omega)$ is the dual space of $C_0(G, 1/\omega)$ \cite{dl, r0, sto}; see \cite{r111, rv1} for a study of weighted semigroup measure algebras; see also \cite{mr1, mr2, mr}. For a Banach algebra $A$, let us recall that a bounded linear operator $D$ from $A$ into $A^*$ is called a \emph{derivation} if $D(ab)=D(a)\cdot b + a\cdot D(b)$ for all $a, b\in A$.
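The weight axioms recalled above are easy to check numerically on concrete examples. The following sketch is purely illustrative and not part of the paper; the sample weight $\omega_\alpha(n)=(1+|n|)^\alpha$ on ${\Bbb Z}$ (which reappears below in connection with \cite{bcd}) and the finite test window are our own choices.

```python
# Illustrative numerical check of the weight axioms on the group Z.
# omega_alpha is the weight (1 + |n|)**alpha; omega_star builds the
# associated weight omega^*(x) = omega(x) * omega(x^{-1}).

def omega_alpha(alpha):
    return lambda n: (1 + abs(n)) ** alpha

def omega_star(w):
    # on the additive group Z, the inverse of n is -n
    return lambda n: w(n) * w(-n)

def is_weight_on_window(w, r=25):
    """Check omega(e) = 1 and omega(m + n) <= omega(m) * omega(n) on [-r, r]."""
    if w(0) != 1:
        return False
    return all(w(m + n) <= w(m) * w(n)
               for m in range(-r, r + 1) for n in range(-r, r + 1))
```

For instance, `is_weight_on_window(omega_alpha(0.5))` returns `True`, while a negative exponent such as `alpha = -1` fails the submultiplicativity test on the window.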
Also, $D$ is said to be an \emph{inner derivation} if there exists $z\in A^*$ such that for every $a\in A$ $$ D(a)=\hbox{ad}_z(a):=z\cdot a-a\cdot z. $$ The space of all continuous (resp.\ inner) derivations from $A$ into $A^*$ is denoted by ${\cal Z}( A, A^*)$ (resp.\ ${\cal I}_{nn}(A, A^*)$). A Banach algebra $A$ is called \emph{weakly amenable} if $$ {\cal Z}(A, A^*)={\cal I}_{nn}(A, A^*). $$ Johnson \cite{j} proved that the Banach algebra $L^1(G)$ is weakly amenable for every locally compact group $G$; for a simple proof of this result see \cite{dg}. The question naturally arises whether $L^1(G, \omega)$ is weakly amenable. Several authors have studied this problem. For example, Bade, Curtis and Dales \cite{bcd} characterized weak amenability of the Banach algebra $L^1({\Bbb Z}, \omega_\alpha)$, where $\omega_\alpha(n)=(1+|n|)^\alpha$. Gronbaek \cite{gro} gave a necessary and sufficient condition for weak amenability of $L^1(G, \omega)$ when $G$ is a discrete Abelian group. He proved that $\ell^1(G, \omega)$ is weakly amenable if and only if there is no non-zero group homomorphism $\phi: G\rightarrow{\Bbb C}$ such that $\phi\in C_b(G, 1/\omega^*)$. The question comes to mind immediately: does this result hold for every Abelian group? Recently, Zhang \cite{z} gave an affirmative answer to this question. It is natural to ask whether this result remains valid for non-Abelian locally compact groups. Borwick \cite{b} studied this question for non-Abelian discrete groups and gave conditions that characterize weak amenability of $\ell^1(G, \omega)$; see also \cite{s1} for weak amenability of $\ell^1({\Bbb F}_2, \omega)$ and $\ell^1(\mathbf{(ax+b)}, \omega)$. In this paper, we answer this question for non-Abelian locally compact groups. This paper is organized as follows. In Section 2, we introduce the notions of quasi-additive and inner quasi-additive functions.
We show that $${\cal Z}( L^1(G, \omega), L^1(G, \omega)^*)$$ and the space of bounded quasi-additive functions are isometrically isomorphic as Banach spaces. A similar statement holds for $${\cal I}_{nn}(L^1(G, \omega), L^1(G, \omega)^*)$$ and the space of inner quasi-additive functions. Using these results, we prove that $L^1(G, \omega)$ is weakly amenable if and only if every non-inner quasi-additive function in $L^\infty(G, 1/\omega)$ is unbounded. This covers some known results concerning weak amenability of commutative Beurling algebras. In fact, we give an answer to the question raised in \cite{z}. In Section 3, we consider two Beurling algebras $L^1(G, \omega_i)$, for $i=1, 2$, and investigate the relation between their weak amenability. In Section 4, we take up the connection between weak amenability of $L^1(G_1, \omega_1)$ and $L^1(G_2, \omega_2)$. \section{\normalsize\bf Weak amenability of weighted group algebras} Let $G$ be a locally compact group. A Borel measurable function $p: G\times G\rightarrow{\Bbb C}$ is called \emph{quasi-additive} if for almost all $x, y, z\in G$ $$p(xy, z)=p(x, yz)+p(y, zx).$$ A quasi-additive function $p$ is called \emph{inner} if there exists $h\in L^\infty(G, 1/\omega)$ such that $$p(x, y)= h(xy)-h(yx)$$ for almost all $x, y\in G$. Let $Q(G)$ be the set of all quasi-additive functions, $D(G, \omega)$ be the set of all $p\in Q(G)$ such that $$C(p, \omega):=\sup_{x, y\in G}\frac{|p(x, y)|}{\omega^\otimes (x, y)}<\infty$$ and $I(G, \omega)$ be the set of all inner quasi-additive functions. Clearly, $$ I(G, \omega)\subseteq D(G, \omega)\subseteq Q(G)\cap L^\infty(G\times G, 1/ \omega^\otimes ). $$ A quasi-additive function $p$ is called \emph{non-inner in} $L^\infty(G, 1/\omega)$ if $p\in Q(G)\setminus I(G, \omega)$. Let $\check{G}_\omega$ be the set of all group homomorphisms $\phi: G\rightarrow{\Bbb C}$ such that $\phi/\omega$ is bounded. We write $\check{G}$ for $\check{G}_1$.
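For intuition, the quasi-additive identity can be verified directly on a concrete finite group. The sketch below is an illustration only (the realization of $S_3$ as permutation tuples and the particular choice of $h$ are ours); it checks that every inner function $p(x, y)=h(xy)-h(yx)$ is quasi-additive, even on a non-Abelian group.

```python
# Illustrative check on the symmetric group S3 (permutations of {0,1,2} as
# tuples): every inner function p(x, y) = h(xy) - h(yx) satisfies the
# quasi-additive identity p(xy, z) = p(x, yz) + p(y, zx).
from itertools import permutations, product

def compose(s, t):
    # (s t)(i) = s(t(i)); composition of permutations given as tuples
    return tuple(s[t[i]] for i in range(len(t)))

S3 = list(permutations(range(3)))
h = {g: i * i for i, g in enumerate(S3)}  # an arbitrary function h: S3 -> R

def p(x, y):
    return h[compose(x, y)] - h[compose(y, x)]

def quasi_additive(p):
    return all(p(compose(x, y), z) == p(x, compose(y, z)) + p(y, compose(z, x))
               for x, y, z in product(S3, repeat=3))
```

Here `quasi_additive(p)` returns `True` for any choice of `h`, reflecting the telescoping computation $\bigl(h(xyz)-h(yzx)\bigr)+\bigl(h(yzx)-h(zxy)\bigr)=h(xyz)-h(zxy)$.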
Let $p\in D(G, \omega)$, $q\in\check{G}$ and $h\in L^\infty(G)$. Then the functions $ p_1, p_2, p_3: G\times G\rightarrow{\Bbb C}$ defined by $p_1(x, y)=q(x)$, $$ p_2(x, y)=h(xy)-h(yx)\quad\hbox{and}\quad p_3(x, y)=p(x^{-1}, y^{-1}) $$ are elements of $D(G, \omega)$. Also, if we define the function $q_1: G\rightarrow{\Bbb C}$ by $q_1(x)=p(x, x^{-1})$, then $q_1\in\check{G}$. We now give some properties of quasi-additive functions. \begin{lemma} \label{zmsj} Let $G$ be a locally compact group and let $\omega$ and $\omega_0$ be weight functions on $G$. Then the following statements hold. \emph{(i)} $D(G, \omega)$ is a closed subspace of $L^\infty(G\times G, 1/\omega^\otimes )$. In particular, $D(G, \omega)$ is a Banach space. \emph{(ii)} $I(G, \omega)$ is a subspace of $D(G, \omega)$. Furthermore, if $C=\{h\in L^\infty(G, 1/\omega): h(xy)=h(yx)\;\hbox{for all}\; x, y\in G\}$, then $I(G, \omega)$ and $L^\infty(G, 1/\omega)/C$ are isomorphic. \emph{(iii)} If $\omega_0\leq m\omega$ for some $m>0$, then $D(G, \omega_0)$ and $I(G, \omega_0)$ are subspaces of $D(G, \omega)$ and $I(G, \omega)$, respectively. \emph{(iv)} $D(G, \omega)$ and $I(G, \omega)$ are subsets of $D(G, \omega^*)$ and $I(G, \omega^*)$, respectively. \emph{(v)} $\check{G}$ can be embedded into $D(G, \omega)$. \end{lemma} {\it Proof.} The statement (i) is proved by using standard arguments. For (ii), we define the function $\Lambda: L^\infty(G, 1/\omega)\rightarrow I(G, \omega)$ by $$\Lambda(h)(x, y)=h(xy)-h(yx).$$ Then $\Lambda$ is an epimorphism with $\hbox{ker}(\Lambda)=C$. Hence $ L^\infty(G, 1/\omega)/C $ and $I(G, \omega)$ are isomorphic, and (ii) holds. Note that if $\omega_0\leq m\omega$ for some $m>0$, then $C(p, \omega)\leq m^2 C(p, \omega_0)$ and $L^\infty(G, 1/\omega_0)$ is contained in $L^\infty(G, 1/\omega)$. So (iii) holds. The statement (iv) follows from (iii) and the fact that $\omega\leq\omega^*$.
Finally, the function $q\mapsto q\circ\pi_1$ from $\check{G}$ into $D(G, \omega)$ is an injective linear map, where $\pi_1$ is the canonical projection. Hence (v) holds.$ \square$\\ Let $G_i$ be a locally compact group and $\omega_i$ be a weight function on $G_i$ for $i=1, 2$. We define $$ \omega_1\otimes\omega_2(x_1, x_2)=\omega_1(x_1)\omega_2(x_2) $$ for all $x_i\in G_i$. It is easy to prove that $\omega_1\otimes\omega_2$ is a weight function on $G_1\times G_2$. \begin{lemma} Let $G_i$ be a locally compact group and let $\omega_i$ be a weight function on $G_i$, for $i=1,2$. Then $D(G_1\times G_2, \omega_1\otimes\omega_2)$ can be embedded into $D(G_1, \omega_1)\times D(G_2, \omega_2)$. \end{lemma} {\it Proof.} We only note that the function $p\mapsto (p_1, p_2)$ from $D(G_1\times G_2, \omega_1\otimes\omega_2)$ into $D(G_1, \omega_1)\times D(G_2, \omega_2)$ is injective, where $$ p_1(x_1, y_1)=p((x_1, e_2), (y_1, e_2))\quad\hbox{and}\quad p_2(x_2, y_2)=p((e_1, x_2), (e_1, y_2)) $$ for all $x_1, y_1\in G_1$ and $x_2, y_2\in G_2$.$ \square$\\ For Banach algebras $A$ and $B$, let $A\hat{\otimes} B$ be the projective tensor product of $A$ and $B$. Let also ${\cal B}(A, B^*)$ be the space of bounded linear operators from $A$ into $B^*$. Then the function $$ \Gamma: {\cal B}(A, B^*)\rightarrow (A\hat{\otimes}B)^* $$ defined by $$ \langle \Gamma(T), a\otimes b\rangle=\langle T(a), b\rangle $$ is an isometric isomorphism of Banach spaces; see for example Proposition 13 VI in \cite{bd}. We now give the next theorem which is actually the key to prove our results. \begin{theorem} \label{zms} Let $G$ be a locally compact group and $\omega$ be a weight function on $G$. Then the following statements hold. \emph{(i)} The function $\Gamma: {\cal Z}( L^1(G, \omega), L^\infty(G, 1/\omega))\rightarrow D(G, \omega)$ is an isometric isomorphism, as Banach spaces. \emph{(ii)} $\Gamma({\cal I}_{nn}(L^1(G, \omega), L^\infty(G, 1/\omega)))= I(G, \omega)$.
\emph{(iii)} If $D\in {\cal Z}( L^1(G, \omega), L^\infty(G, 1/\omega))$, then there exists a unique $p\in D(G, \omega)$ such that $D(f)(y)=\int_G p(x, y) f(x) \; dx$ for almost all $y\in G$. \emph{(iv)} If $\omega$ is multiplicative, then $D(G, \omega)=I(G, \omega)$. In particular, $D(G, 1)=I(G, 1).$ \end{theorem} {\it Proof.} (i) Let $D\in {\cal Z}( L^1(G, \omega), L^\infty(G, 1/\omega))$. Since $D$ is a bounded linear operator, setting $A=B=L^1(G, \omega)$ in the definition of $\Gamma$, we have $$p:=\Gamma(D)\in( L^1(G, \omega)\hat{\otimes} L^1(G, \omega))^*=L^\infty(G\times G, 1/\omega^\otimes )$$ and \begin{eqnarray}\label{main} \langle D(f), g\rangle=\langle p, f\otimes g\rangle=\int_G\int_G p(x, y) f(x)g(y) dxdy \end{eqnarray} for all $f, g\in L^1(G, \omega)$. On the other hand, for every $f,g, k\in L^1(G, \omega)$ we have \begin{eqnarray*} \langle D(f\ast g), k\rangle&=&\langle D(f)\cdot g, k\rangle+\langle f\cdot D(g), k\rangle\\ &=&\langle D(f), g\ast k+k\ast f\rangle. \end{eqnarray*} It follows that $$ \int_G\int_G p(x, y) (f\ast g)(x) k(y)\;dxdy=\int_G\int_G p(x, y) f(x) (g\ast k+ k\ast f)(y)\; dxdy. $$ Thus for every $f,g,k\in L^1(G, \omega)$, we obtain \begin{eqnarray*} \int_G\int_G\int_G p(xy, z) f(x)g(y)k(z)\;dxdydz&=&\int_G\int_G\int_G p(x,yz)f(x)g(y)k(z)\;dxdydz\\ &+& \int_G\int_G\int_G p( y,zx)f(x)g(y)k(z)\;dxdydz. \end{eqnarray*} This implies that $p\in D(G, \omega)$. Therefore, $\Gamma$ maps $ {\cal Z}( L^1(G, \omega), L^\infty(G, 1/\omega))$ into $D(G, \omega)$. Now, let $p\in D(G, \omega)$. Define the linear operator $D: L^1(G, \omega)\rightarrow L^\infty(G, 1/\omega)$ by $$ \langle D(f), g\rangle:=\langle p, f\otimes g\rangle $$ for all $f, g\in L^1(G, \omega)$. Since $p\in D(G, \omega)$, an argument similar to the one given above shows that $D\in {\cal Z}( L^1(G, \omega), L^\infty(G, 1/\omega))$. So $\Gamma(D)=p$.
Thus $\Gamma$ is an isometric isomorphism from ${\cal Z}( L^1(G, \omega), L^\infty(G, 1/\omega))$ onto $D(G, \omega)$. (ii) First, note that $L^\infty(G, 1/\omega)$ is a Banach $L^1(G, \omega)$-bimodule with the following actions: $$ f\cdot h(x)=\int_Gf(y) h(xy)\; dy\quad\hbox{and}\quad h\cdot f(x)=\int_G f(y) h(yx)\;dy $$ for all $f\in L^1(G, \omega)$, $h\in L^\infty(G, 1/\omega)$ and $x\in G$. Assume now that $D\in {\cal I}_{nn}(L^1(G, \omega), L^\infty(G, 1/\omega))$. Then $D=\hbox{ad}_h$ for some $h\in L^\infty(G, 1/\omega)$. So \begin{eqnarray}\label{x} \langle D(f), g\rangle&=&\langle h\cdot f-f\cdot h, g\rangle\nonumber\\ &=&\int_G g(x) (h\cdot f(x)-f\cdot h(x))\; dx\\ &=&\int_G\int_G g(y) f(x)(h(xy)-h(yx))\; dx dy\nonumber \end{eqnarray} for all $f, g\in L^1(G, \omega)$. Consequently, $\langle D(f), g\rangle=\langle p, f\otimes g\rangle$, where $$p(x, y)=h(xy)-h(yx)$$ for all $x, y\in G$. Obviously, $p\in D(G, \omega)$. This implies that $$\Gamma({\cal I}_{nn}(L^1(G, \omega), L^\infty(G, 1/\omega)))= I(G, \omega).$$ (iii) Let $D\in {\cal Z}( L^1(G, \omega), L^\infty(G, 1/\omega))$. If $f\in L^1(G, \omega)$, then $D(f)\in L^1(G, \omega)^*$. So for every $g\in L^1(G, \omega)$, we have $$ \langle D(f), g\rangle=\int_G D(f)(y)g(y)\; dy. $$ From this and (\ref{main}) we conclude that (iii) holds. (iv) Let $\omega$ be multiplicative. Then $L^1(G, \omega)$ and $L^1(G)$ are isomorphic as Banach algebras. Since $L^1(G)$ is weakly amenable, the statement (iv) holds. $ \square$\\ We now state the main result of this section which answers an open problem given in \cite{z}. \begin{theorem}\label{xx2} Let $G$ be a locally compact infinite group and $\omega$ be a weight function on $G$. Then the following assertions are equivalent. \emph{(a)} $L^1(G, \omega)$ is weakly amenable.
\emph{(b)} For every bounded derivation $D: L^1(G, \omega)\rightarrow L^\infty(G, 1/\omega)$ there exists $h\in L^\infty(G, 1/\omega)$ such that $\langle D(f), g\rangle=\int_G\int_G f(x)g(y)(h(xy)-h(yx))\;dxdy$ for all $f, g\in L^1(G, \omega)$. \emph{(c)} For every quasi-additive function $p\in D(G, \omega)$ there exists $h\in L^\infty(G, 1/\omega)$ such that $p(x, y)= h(xy)-h(yx)$ for all $x, y\in G$. \emph{(d)} Every quasi-additive function $p$ with $C(p, \omega)<\infty$ is inner. \emph{(e)} Every non-inner quasi-additive function in $L^\infty(G, 1/\omega)$ is unbounded. \emph{(f)} $D(G, \omega)=I(G, \omega)$. \end{theorem} {\it Proof.} Let $D\in {\cal Z}( L^1(G, \omega), L^\infty(G, 1/\omega))$. If $L^1(G, \omega)$ is weakly amenable, then $D\in {\cal I}_{nn}(L^1(G, \omega), L^\infty(G, 1/\omega))$. By Theorem \ref{zms} (ii), $\Gamma(D)\in I(G, \omega)$. Thus there exists $h\in L^\infty(G, 1/\omega)$ such that $$ \Gamma(D)(x, y)=h(xy)-h(yx) $$ for all $x, y\in G$. It follows from (\ref{main}) that $$ \langle D(f), g\rangle=\int_G\int_G f(x)g(y) (h(xy)-h(yx))\; dxdy $$ for all $f, g\in L^1(G, \omega)$. So (a)$\Rightarrow$(b). Let $p\in D(G, \omega)$. Then $\Gamma(D)=p$ for some $D\in {\cal Z}( L^1(G, \omega), L^\infty(G, 1/\omega))$. If (b) holds, then by (\ref{main}), for every $f, g\in L^1(G, \omega)$, we have \begin{eqnarray*} \int_G\int_G p(x, y) f(x) g(y)\; dx dy&=&\langle\Gamma(D), f\otimes g\rangle=\langle D(f), g\rangle\\ &=&\int_G\int_G f(x)g(y)(h(xy)-h(yx))\;dxdy. \end{eqnarray*} This shows that $p(x, y)= h(xy)-h(yx)$ for almost all $x, y\in G$. So (b)$\Rightarrow$(c). For the implication (c)$\Rightarrow$(d), note that $$ D(G,\omega)=\{p\in Q(G): C(p,\omega)<\infty\}. $$ The implications (d)$\Rightarrow$(e)$\Rightarrow$(f) are clear. Finally, let $D\in {\cal Z}( L^1(G, \omega), L^\infty(G, 1/\omega))$. Then $\Gamma(D)=p$ for some $p\in D(G, \omega)$.
If $p\in I(G, \omega)$, then $D\in {\cal I}_{nn}( L^1(G, \omega), L^\infty(G, 1/\omega))$. That is, (f) implies (a). $ \square$\\ As a consequence of Theorem \ref{xx2} we give the following result. \begin{corollary} Let $G$ be a locally compact infinite group. Then $L^1(G, \omega)$ is not weakly amenable if and only if there exists a bounded non-inner quasi-additive function in $L^\infty(G, 1/\omega)$. \end{corollary} The next result can be considered as an improvement of Theorem 3.1 in \cite{z}. \begin{corollary}\label{ab} Let $G$ be a locally compact Abelian group. Then the following assertions are equivalent. \emph{(a)} $L^1(G, \omega)$ is weakly amenable. \emph{(b)} The zero map is the only quasi-additive function in $D(G, \omega)$. \emph{(c)} The zero map is the only group homomorphism in $L^\infty(G, 1/\omega^*)$. \emph{(d)} The zero map is the only group homomorphism in $C_b(G, 1/\omega^*)$. \end{corollary} {\it Proof.} From Theorem 1.2 in \cite{z} and Theorem \ref{xx2}, the implications (d)$\Rightarrow$(a)$\Rightarrow$(b) hold. The implication (c)$\Rightarrow$(d) is clear. To show that (b)$\Rightarrow$(c), let $\phi$ be a group homomorphism in $L^\infty(G, 1/\omega^*)$. Define $p: G\times G\rightarrow{\Bbb C}$ by $p(x, y)=\phi(x)$. Then $p$ is a quasi-additive function in $L^\infty(G\times G, 1/\omega^\otimes )$, i.e., $p\in D(G, \omega)$; hence $p=0$ by (b), and so $\phi=0$.$ \square$\\ In the sequel, we give another consequence of Theorem \ref{xx2}. \begin{corollary} Let $G$ be a locally compact infinite group. Then the following statements hold. \emph{(i)} Every non-inner quasi-additive function in $L^\infty(G)$ is unbounded. \emph{(ii)} If $G$ is Abelian, then every non-zero group homomorphism from $G$ into ${\Bbb C}$ is unbounded. \end{corollary} {\it Proof.} It is well-known that if $G$ is a locally compact group, then $L^1(G)$ is weakly amenable; see for example \cite{dg}. This together with Theorem \ref{xx2} establishes (i).
The statement (ii) follows from Corollary \ref{ab}.$ \square$ \begin{example}{\rm Define the function $\omega: \Bbb{R}\rightarrow [1, \infty) $ by \begin{eqnarray*} \omega(x)=\left\{ \begin{array}{rl} \hbox{exp}(x) & x\geq 0\\ 1 & x<0 \end{array}\right. \end{eqnarray*} Then $\omega$ is a weight function on $\Bbb{R}$. For $r\in\Bbb{R}$, we define $p_r(x)=rx$ for all $x\in\Bbb{R}$. Then $$ \sup\left\{\frac{|p_r(x)|}{\omega^*(x)}: x\in\Bbb{R}\right\}=\sup\left\{\frac{|rx|}{\hbox{exp}(x)}: x\in\Bbb{R}^+\right\}\leq|r|. $$ Hence there exists a non-zero group homomorphism in $L^\infty(\Bbb{R}, 1/\omega^*)$. Thus $L^1(\Bbb{R}, \omega)$ is not weakly amenable. It is proved that $$D(L^1(G, \omega))=[-1,0]\times\Bbb{R}.$$ } \end{example} \section{\normalsize\bf Beurling algebras with different weights}\label{sec4} We commence this section with the following result which is interesting and useful. \begin{theorem}\label{con} Let $\omega_1$ and $\omega_2$ be weight functions on a locally compact group $G$. Then the following assertions are equivalent. \emph{(a)} $L^1(G, \omega_2)$ is a subspace of $L^1(G, \omega_1)$. \emph{(b)} $M(G, \omega_2)$ is a subspace of $M(G, \omega_1)$. \emph{(c)} There exists $m>0$ such that $\|f\|_{\omega_1}\leq m\|f\|_{\omega_2}$ for all $f\in L^1(G, \omega_2)$. \emph{(d)} There exists $n>0$ such that $\|\mu\|_{\omega_1}\leq n\|\mu\|_{\omega_2}$ for all $\mu\in M(G, \omega_2)$. \emph{(e)} There exists $s>0$ such that $\omega_1\leq s\omega_2$. \end{theorem} {\it Proof.} The implications (e)$\Rightarrow$(c)$\Rightarrow$(a) are clear. If $L^1(G, \omega_2)$ is a subspace of $L^1(G, \omega_1)$, we can regard $L^1(G, \omega_2)^{**}$ as a subspace of $L^1(G, \omega_1)^{**}$. Hence $M(G, \omega_2)\oplus C_0(G, 1/\omega_2)^\perp$ is contained in $M(G, \omega_1)\oplus C_0(G, 1/\omega_1)^\perp$. This shows that $M(G, \omega_2)= \pi_1(L^1(G, \omega_2)^{**})$ is a subset of $M(G,\omega_1)=\pi_1(L^1(G, \omega_1)^{**})$. That is, (a)$\Rightarrow$(b).
To show that (b)$\Rightarrow$(d), we define $$\|\mu\|=\|\mu\|_{\omega_1}+\|\mu\|_{\omega_2}$$ on $M(G, \omega_2)$; then $B:=(M(G, \omega_2), \|.\|)$ is a Banach space. The identity map $F: B\rightarrow (M(G, \omega_2), \|.\|_{\omega_2})$ is bijective and continuous. In view of the inverse mapping theorem, $F^{-1}$ is continuous. Thus there exists $r>1$ such that $$ \|F^{-1}(\mu)\|\leq r\|\mu\|_{\omega_2} $$ for all $\mu\in M(G, \omega_2)$. Consequently, $$ \|\mu\|_{\omega_1}+\|\mu\|_{\omega_2}\leq r\|\mu\|_{\omega_2}. $$ Therefore, $\|\mu\|_{\omega_1}\leq (r-1)\|\mu\|_{\omega_2}$. Finally, let (d) hold. Then for every $x\in G$, we have $\|\delta_x\|_{\omega_1}\leq n\|\delta_x\|_{\omega_2}$. Thus $\omega_1\leq n\omega_2$ for some $n>0$. That is, (d)$\Rightarrow$(e).$ \square$\\ Let us recall that two weight functions $\omega_1$ and $\omega_2$ are \emph{equivalent} if $m\omega_1\leq\omega_2\leq n\omega_1$ for some $m, n>0$. \begin{corollary} Let $\omega_1$ and $\omega_2$ be weight functions on a locally compact group $G$. Then the following assertions are equivalent. \emph{(a)} $L^1(G, \omega_1)=L^1(G, \omega_2)$. \emph{(b)} $\|.\|_{\omega_1}$ and $\|.\|_{\omega_2}$ are equivalent. \emph{(c)} $\omega_1$ and $\omega_2$ are equivalent. \end{corollary} In the following, we consider two weight functions $\omega_1$ and $\omega_2$ on a locally compact group $G$ and study the relation between weak amenability of $L^1(G, \omega_1)$ and $L^1(G, \omega_2)$. \begin{proposition}\label{w**} Let $\omega_1$ and $\omega_2$ be weight functions on a locally compact group $G$ such that $\omega_1\leq m\omega_2$ for some $m>0$. Then the following statements hold. \emph{(i)} If $L^1(G, \omega_2)$ is weakly amenable and $I(G, \omega_1)=I(G, \omega_2)$, then $L^1(G, \omega_1)$ is weakly amenable. \emph{(ii)} If $\omega_2$ or $\omega_2^*$ is bounded, then $L^1(G, \omega_1)$ is weakly amenable.
\end{proposition} {\it Proof.} (i) By Lemma \ref{zmsj} and Theorem \ref{xx2} we have $$ D(G, \omega_1)\subseteq D(G, \omega_2)=I(G, \omega_2)=I(G, \omega_1)\subseteq D(G, \omega_1). $$ Hence $D(G, \omega_1)= I(G, \omega_1)$ and so $L^1(G, \omega_1)$ is weakly amenable. (ii) Let $\omega_2$ be bounded. Then $\omega_1$ is bounded and so $L^1(G, \omega_1)$ is weakly amenable. Now if $\omega_2^*$ is bounded, then $\omega_2$ is bounded, because $\omega_2\leq\omega_2^*$. Hence $L^1(G, \omega_1)$ is weakly amenable.$ \square$\\ As an immediate corollary of Proposition \ref{w**} we have the following result. \begin{corollary} Let $\omega_1$ and $\omega_2$ be weight functions on a locally compact group $G$ such that $\omega_1$ and $\omega_2$ are equivalent. Then $L^1(G, \omega_1)$ is weakly amenable if and only if $L^1(G, \omega_2)$ is weakly amenable. \end{corollary} \begin{example}{\rm Let $\omega$ be a weight function on a locally compact group $G$ and $a\in G$. The function $\omega_a$ defined by $\omega_a(x)=\omega(axa^{-1})$ is a weight function on $G$. For every $x\in G$, we have \begin{eqnarray*} \omega(x)=\omega(a^{-1}(axa^{-1})a)\leq\omega(a^{-1})\omega_a(x)\omega(a) \end{eqnarray*} and $$ \omega_a(x)=\omega(axa^{-1})\leq\omega(a)\omega(a^{-1})\omega(x). $$ Hence \begin{eqnarray}\label{12345} (1/m)\omega(x)\leq\omega_a(x)\leq m\omega(x), \end{eqnarray} where $m=\omega(a)\omega(a^{-1})$. That is, $\omega$ and $\omega_a$ are equivalent. Now, define the function $W$ on $G$ by $$W(x)=\inf_{a\in G} \omega_a(x).$$ Then $W$ is a weight function on $G$. By (\ref{12345}), $\omega$ and $W$ are equivalent. Hence $L^1(G, \omega)$ is weakly amenable if and only if $L^1(G, \omega_a)$ is weakly amenable; or equivalently, $L^1(G, W)$ is weakly amenable.} \end{example} As another consequence of Proposition \ref{w**} we have the following result. Part (i) is due to Pourabbas \cite{p}. \begin{corollary}\label{w*} Let $G$ be a locally compact group. Then the following statements hold.
\emph{(i)} If $\omega^*$ is bounded, then $L^1(G, \omega)$ is weakly amenable and $D(G, \omega^*)=I(G, \omega^*)$. \emph{(ii)} If $L^1(G, \omega^*)$ is weakly amenable and $I(G, \omega)=I(G, \omega^*)$, then $L^1(G, \omega)$ is weakly amenable and $D(G, \omega)=D(G, \omega^*)$. In the case where $G$ is Abelian, $D(G, \omega)=\{0\}$. \emph{(iii)} If $D(G, \omega^*)=I(G, \omega^*)=I(G, \omega)$, then $D(G, \omega)=I(G, \omega)$. \end{corollary} {\it Proof.} Since $\omega\leq\omega^*$, the statement (i) follows from Proposition \ref{w**} (ii); moreover, $L^1(G, \omega^*)$ is then weakly amenable as well, and so $D(G, \omega^*)=I(G, \omega^*)$ by Theorem \ref{xx2}. For (ii), Proposition \ref{w**} (i) yields that $L^1(G, \omega)$ is weakly amenable and by Theorem \ref{xx2}, $$ D(G, \omega)=I(G, \omega)=I(G, \omega^*)=D(G, \omega^*). $$ When $G$ is Abelian, we have $$ {\cal Z}( L^1(G, \omega), L^\infty(G, 1/\omega))= {\cal I}_{nn}(L^1(G, \omega), L^\infty(G, 1/\omega))=\{0\}. $$ In view of Theorem \ref{zms}, we infer that $D(G, \omega)=I(G, \omega)=\{0\}$. So (ii) holds. For (iii), by Lemma \ref{zmsj} we have $$ D(G, \omega)\subseteq D(G, \omega^*)=I(G, \omega^*)=I(G, \omega)\subseteq D(G, \omega). $$ Hence (iii) holds.$ \square$ \begin{example}{\rm It is well-known from \cite{bcd} that $\ell^1({\Bbb Z}, \omega_\alpha)$ is weakly amenable if and only if $0\leq\alpha<1/2$. Hence the Banach algebra $\ell^1({\Bbb Z}, \omega_{1/3})$ is weakly amenable; however, $\ell^1({\Bbb Z}, \omega_{1/3}^*)$ is not weakly amenable. Also, $\ell^1({\Bbb Z}, \omega_{1/6}^*)$ is weakly amenable, but $\omega_{1/6}^*$ is unbounded.} \end{example} Let $\omega$ be a weight function on $G$. Then $\omega^\prime$ defined by $\omega^\prime(x)=\omega(x^{-1})$ for all $x\in G$ is also a weight function on $G$. \begin{theorem}\label{d} Let $G$ be a locally compact group and $\omega$ be a weight function on $G$. Then $L^1(G, \omega^\prime)$ is weakly amenable if and only if $L^1(G, \omega)$ is weakly amenable. \end{theorem} {\it Proof.} Let $p$ be a complex-valued function on $G\times G$.
Define $p^\prime(x, y)=p(x^{-1}, y^{-1})$ for all $x, y\in G$. It is routine to check that $$C(p, \omega)= C(p^\prime, \omega^\prime).$$ Now, for a complex-valued function $h$ on $G$, define $h^\prime(x)=-h(x^{-1})$ for all $x\in G$. Obviously, $p(x, y)=h(xy)-h(yx)$ for all $x, y\in G$ if and only if $p^\prime(x, y)=h^\prime(xy)-h^\prime(yx)$ for all $x, y\in G$. Therefore, $p\in I(G, \omega)$ if and only if $p^\prime\in I(G, \omega^\prime)$. These facts together with Theorem \ref{xx2} prove the result.$ \square$\\ It is easy to see that if $G$ is a locally compact Abelian group, then $\frak{D}:=\{(x, x^{-1}): x\in G\}$ is a locally compact Abelian group. \begin{theorem}\label{d} Let $G$ be a locally compact Abelian group and $\omega$ be a weight function on $G$. Then the following statements hold. \emph{(i)} $L^1(G, \omega^*)$ is weakly amenable if and only if $L^1(\frak{D}, \omega^\otimes )$ is weakly amenable. \emph{(ii)} $L^1(G, \omega^\prime)$ is weakly amenable if and only if $L^1(G, \omega)$ is weakly amenable, where $\omega^\prime(x)=\omega(x^{-1})$ for all $x\in G$. \end{theorem} {\it Proof.} Let $q: G\rightarrow{\Bbb C}$ be a group homomorphism. Define the function $Q: \frak{D}\rightarrow{\Bbb C}$ by $Q(x, x^{-1})=q(x)$. Then $Q$ is a group homomorphism. Since for every $x\in G$ $$ \frac{|q(x)|}{(\omega^*)^*(x)}= \frac{| Q(x,x^{-1})|}{(\omega^\otimes)^* (x,x^{-1})}, $$ it follows that $q\in\check{G}_{(\omega^*)^*}$ if and only if $Q\in\check{\frak{D}}_{(\omega^\otimes)^*}$. Also, $q=0$ if and only if $Q=0$. Now, apply Corollary \ref{ab}. $ \square$ \section{\normalsize\bf Weak amenability of Beurling algebras}\label{sec5} Let $\phi: G_1\rightarrow G_2$ be a group epimorphism and $\omega$ be a weight function on $G_2$. Then the function $\overleftarrow{\omega}: G_1\rightarrow [1, \infty)$ defined by $\overleftarrow{\omega}(x_1)=\omega(\phi(x_1))$ is a weight function on $G_1$. Define the function $\frak{S}: Q(G_2)\rightarrow Q(G_1)$ by $$ \frak{S}(p_2)(x_1, y_1)=p_2(\phi(x_1), \phi(y_1)).
$$ It is clear that $\frak{S}$ is injective and $C(p_2, \omega)=C(\frak{S}(p_2), \overleftarrow{\omega})$ for all $p_2\in Q(G_2)$. So $\frak{S}$ maps $D(G_2, \omega)$ into $D(G_1, \overleftarrow{\omega})$. Moreover, $h_2\circ\phi\in L^\infty(G_1, 1/\overleftarrow{\omega})$ for all $h_2\in L^\infty(G_2, 1/\omega)$; this shows that $\frak{S}(I(G_2, \omega))$ is contained in $I(G_1, \overleftarrow{\omega})$. \begin{proposition}\label{o} Let $G_1$ and $G_2$ be locally compact infinite groups and let $\omega$ be a weight function on $G_2$. Then the following statements hold. \emph{(i)} If $L^1(G_2, \omega)$ is weakly amenable, then $L^1(G_1, \overleftarrow{\omega})$ is weakly amenable. \emph{(ii)} If $L^1(G_1, \overleftarrow{\omega})$ is weakly amenable and $\frak{S}(I(G_2, \omega))= I(G_1, \overleftarrow{\omega})$, then $L^1(G_2, \omega)$ is weakly amenable. \end{proposition} {\it Proof.} (i) Let $p_1$ be a non-inner quasi-additive function in $L^\infty(G_1, 1/\overleftarrow{\omega})$. Since $\frak{S}$ maps $I(G_2, \omega)$ into $I(G_1, \overleftarrow{\omega})$, there exists a non-inner quasi-additive function $p_2$ in $L^\infty(G_2, 1/\omega)$ such that $\frak{S}(p_2)=p_1$. So if $L^1(G_2, \omega)$ is weakly amenable, then $$ C(p_1, \overleftarrow{\omega})=C(\frak{S}(p_2), \overleftarrow{\omega})=C(p_2, \omega)=\infty. $$ Therefore, $L^1(G_1, \overleftarrow{\omega})$ is weakly amenable. (ii) Let $p_2$ be a non-inner quasi-additive function in $L^\infty(G_2, 1/\omega)$. Since $\frak{S}$ is injective, $\frak{S}(p_2)$ is a non-inner quasi-additive function in $L^\infty(G_1, 1/\overleftarrow{\omega})$. Hence if $L^1(G_1, \overleftarrow{\omega})$ is weakly amenable, then $$ C(p_2, \omega)=C(\frak{S}(p_2), \overleftarrow{\omega})=\infty. $$ This shows that $L^1(G_2, \omega)$ is weakly amenable.$ \square$\\ Let $G$ be a locally compact group and $N$ be a normal subgroup of $G$.
It is easy to see that the function $$\hat{\omega}(xN)=\inf\{\omega(y): yN=xN\}$$ is a weight function on $G/N$. Also the function $\overline{\omega}(x)=\hat{\omega}(xN)$ is a weight function on $G$ and $\overline{\omega}(x)\leq\omega(x)$ for all $x\in G$. \begin{corollary} Let $\omega$ be a weight function on a locally compact group $G$ and $N$ be a normal subgroup of $G$ such that $G/N$ is infinite. If $L^1(G/N, \hat{\omega})$ is weakly amenable, then $L^1(G, \overline{\omega})$ is weakly amenable. \end{corollary} {\it Proof.} Let $\pi: G\rightarrow G/N$ be the quotient map. Then for every $x\in G$, we have $$ \overleftarrow{\omega}(x)= \hat{\omega}(\pi(x))=\hat{\omega}(xN)=\overline{\omega}(x). $$ Hence $\overleftarrow{\omega}= \overline{\omega}$. Now, invoke Proposition \ref{o}.$ \square$\\ Let $\omega_i$ be a weight function on a locally compact group $G_i$, for $i=1, 2$. Then $W_i(x_1, x_2)=\omega_i(x_i)$ is a weight function on $G_1\times G_2$, for $i=1,2$. \begin{corollary} Let $\omega_i$ be a weight function on a locally compact infinite group $G_i$, for $i=1, 2$. If $L^1(G_1, \omega_1)$ or $L^1(G_2, \omega_2)$ is weakly amenable, then $L^1(G_1\times G_2, W_i)$ is weakly amenable, for the corresponding $i$. \end{corollary} {\it Proof.} Let $\pi_i: G_1\times G_2\rightarrow G_i$ be the canonical projection, for $i=1,2$. Then $$ \overleftarrow{\omega}(x_1, x_2)=\omega_i(\pi_i(x_1, x_2))=\omega_i(x_i)=W_i(x_1, x_2) $$ for all $x_i\in G_i$. So $\overleftarrow{\omega}=W_i$. By Proposition \ref{o}, the result holds.$ \square$\\ In the following we investigate the relation between the weak amenability of $L^1(G, \omega)$ and that of $L^1(H, \omega|_H)$ when $H$ is a subgroup of $G$. To this end, we need some results. \begin{proposition}\label{ex} Let $\omega$ be a weight function on a locally compact Abelian group $G$ and $H$ be a subgroup of $G$. Then every group homomorphism $q: H\rightarrow{\Bbb C}$ has an extension to a group homomorphism $Q: G\rightarrow{\Bbb C}$.
\end{proposition} {\it Proof.} Let $q: H\rightarrow{\Bbb C}$ be a group homomorphism. Let $A$ be a subset of $G$ with $e\in A$ such that $\{ Ha: a\in A\}$ is a family of pairwise disjoint subsets of $G$ and $G=\cup_{a\in A} Ha$. If $x\in G$, then there exists a unique element $a_x\in A$ such that $x\in Ha_x$. So there exists a unique element $\tilde{x}=x(a_x)^{-1}$ in $H$ such that $x=\tilde{x} a_x$. Define $Q(x)=q(\tilde{x})$. For every $x, y\in G$, we have $Ha_x Ha_y= Ha_xa_y$. Thus $xy\in Ha_xa_y\cap Ha_{xy}$. Hence $ Ha_xa_y= Ha_{xy}$. It follows that $a_xa_y= a_{xy}$. Hence $$\tilde{x} \tilde{y} a_xa_y= \tilde{x} a_x\tilde{y} a_y= xy= \widetilde{xy} a_{xy}= \widetilde{xy} a_xa_y, $$ and thus $\widetilde{xy}=\tilde{x} \tilde{y}$. Consequently, $$Q(xy)= q(\widetilde{xy})= q(\tilde{x})+q(\tilde{y})= Q( x)+ Q(y).$$ This shows that $Q$ is a group homomorphism. For every $x\in H$, we have $x\in He$, so $a_x=e$ and therefore $x=\tilde{x}$. That is, $Q|_H=q$.$ \square$\\ Let $G$ be a locally compact Abelian group and $H$ be a subgroup of $G$. We denote by $\widetilde{H}$ the set of all $\tilde{x}$ defined in the proof of Proposition \ref{ex}. \begin{lemma} Let $G$ be a locally compact Abelian group and $\omega$ be a weight function on $G$. Then the following statements hold. \emph{(i)} $\widetilde{H}$ is a normal subgroup of $H$. \emph{(ii)} $\omega^*(\tilde{x})=\omega\otimes\omega(\tilde{x}, (\tilde{x})^{-1})$ for all $x\in G$. \emph{(iii)} The function $\tilde{\omega}$ defined by $\tilde{\omega}(x)=\omega(\tilde{x})$ is a weight function on $G$. Furthermore, $(\tilde{\omega})^*(x)=\omega^*(\tilde{x})$ for all $x\in G$. \end{lemma} {\it Proof.} It is easy to see that $\tilde{e}=e$ and $\widetilde{xy}=\tilde{x} \tilde{y}$ for all $x, y\in G$. Since $xx^{-1}=e$, we have $\tilde{x}\widetilde{x^{-1}}=\tilde{e}=e$. Thus $\widetilde{x^{-1}}=( \tilde{x})^{-1}$. These facts show that $\widetilde{H}$ is a subgroup of $H$; since $G$ is Abelian, it is normal. So (i) holds. It is easy to prove that (ii) holds.
Since $\widetilde{xy}=\tilde{x} \tilde{y}$, it follows that \begin{eqnarray*} \tilde{\omega}(xy)&=&\omega(\widetilde{xy})=\omega(\tilde{x} \tilde{y})\\ &\leq&\omega(\tilde{x})\omega(\tilde{y})= \tilde{\omega}(x)\tilde{\omega}(y). \end{eqnarray*} This shows that $\tilde{\omega}$ is a weight function on $G$. For every $x\in G$, we have \begin{eqnarray*} (\tilde{\omega})^*(x)&=&\tilde{\omega}(x) \tilde{\omega}(x^{-1})\\ &=&\omega(\tilde{x})\omega(\widetilde{x^{-1}})\\ &=&\omega(\tilde{x})\omega((\tilde{x})^{-1})\\ &=&\omega^*(\tilde{x}). \end{eqnarray*} Therefore, $(\tilde{\omega})^*(x)=\omega^*(\tilde{x})$ for all $x\in G$. That is, (iii) holds. $ \square$ \begin{theorem}\label{til} Let $G$ be a locally compact Abelian group and $H$ be a subgroup of $G$. If $L^1(G, \tilde{\omega})$ is weakly amenable, then $L^1(H, \omega|_H)$ is weakly amenable. \end{theorem} {\it Proof.} Let $q: H\rightarrow \Bbb {C}$ be a non-zero group homomorphism. By Proposition \ref{ex}, there exists a group homomorphism $Q: G\rightarrow \Bbb {C}$ such that $Q(x)=q(\tilde{x})$ for all $x\in G$. Hence \begin{eqnarray*} \sup\{\frac{|q(y)|}{\omega^*(y)}: y\in H\}&\geq&\sup\{\frac{|q(\tilde{x})|}{\omega^*(\tilde{x})}: x\in G\}\\ &=&\sup\{\frac{|Q( x)|}{\omega^*(\tilde{x})}: x\in G\}=\sup\{\frac{|Q(x)|}{(\tilde{\omega})^*(x)}: x\in G\}. \end{eqnarray*} Therefore, $L^1(H, \omega|_H)$ is weakly amenable.$ \square$ Now, we investigate the weak amenability of tensor products of Beurling algebras. \begin{theorem}\label{ppp} Let $G_1$ and $G_2$ be locally compact groups and let $\omega_1$ and $\omega$ be weight functions on $G_1$ and $G_1\times G_2$, respectively. If $L^1(G_1\times G_2, \omega)$ is weakly amenable and $\sup\{\frac{\omega_1(x)}{\omega(x, y)}: x\in G_1,\ y\in G_2\}<\infty$, then $L^1(G_1, \omega_1)$ is weakly amenable. \end{theorem} {\it Proof.} Let $p$ be a non-inner quasi-additive function in $L^\infty(G_1, 1/\omega_1)$.
Define the function $\tilde{p}$ by $\tilde{p}((x_1, x_2), (y_1, y_2))=p(x_1, y_1)$ for all $x_1, y_1\in G_1$ and $x_2, y_2\in G_2$. Suppose that $\tilde{p}\in I(G_1\times G_2, \omega_1\otimes\omega_2)$. Then there exists $h\in L^\infty(G_1\times G_2, 1/(\omega_1\otimes\omega_2))$ such that $$ \tilde{p}((x_1,x_2),(y_1,y_2))=h((x_1, x_2)(y_1, y_2))-h((y_1, y_2)(x_1, x_2)) $$ for all $x_1, y_1\in G_1$ and $x_2, y_2\in G_2$. Define the complex-valued function $k$ on $G_1$ by $k(x)=h(x, e_2)$ for all $x\in G_1$, where $e_2$ is the identity of $G_2$. Note that $$ \frac{k(x_1)}{\omega_1(x_1)}=\frac{h(x_1, e_2)}{\omega_1\otimes\omega_2(x_1, e_2)}. $$ This shows that $k\in L^\infty(G_1, 1/\omega_1)$. For every $x_1, y_1\in G_1$, we have \begin{eqnarray*} k(x_1y_1)-k(y_1x_1)&=&h((x_1, e_2)(y_1, e_2))-h((y_1, e_2)(x_1, e_2))\\ &=&\tilde{p}((x_1, e_2), (y_1, e_2))\\ &=&p(x_1, y_1). \end{eqnarray*} This shows that $p$ is inner, a contradiction; hence $\tilde{p}$ is non-inner. If $x_1, y_1\in G_1$ and $x_2, y_2\in G_2$, then $$ \frac{|\tilde{p}((x_1,x_2),(y_1,y_2))|}{\omega(x_1,x_2)\omega(y_1,y_2)}= \frac{|p(x_1,y_1)|}{\omega_1(x_1)\omega_1(y_1)}\frac{\omega_1( x_1)}{\omega( x_1,x_2)} \frac{\omega_1(y_1)}{\omega(y_1,y_2)}. $$ This implies that if $L^1(G_1, \omega_1)$ is not weakly amenable, then $L^1(G_1\times G_2, \omega)$ is not weakly amenable.$ \square$\\ For locally compact Abelian groups, Zhang \cite{z} proved that if $L^1(G_1, \omega_1)\hat{\otimes} L^1(G_2, \omega_2)$ is weakly amenable, then $L^1(G_1, \omega_1)$ and $L^1(G_2, \omega_2)$ are weakly amenable. We show that this result is true for arbitrary locally compact groups. \begin{corollary}\label{ten} Let $\omega_i$ be a weight function on a locally compact infinite group $G_i$, for $i=1, 2$. If $L^1(G_1, \omega_1)\hat{\otimes} L^1(G_2, \omega_2)$ is weakly amenable, then $L^1(G_1, \omega_1)$ and $L^1(G_2, \omega_2)$ are weakly amenable. \end{corollary} {\it Proof.} Let $L^1(G_1, \omega_1)\hat{\otimes} L^1(G_2, \omega_2)$ be weakly amenable.
Then $L^1(G_1\times G_2, \omega_1\otimes\omega_2)$ is weakly amenable. Since for every $x\in G_1$ and $y\in G_2$ $$ \frac{\omega_1(x)}{\omega_1\otimes\omega_2(x, y)}=\frac{1}{\omega_2(y)}\leq 1, $$ it follows from Theorem \ref{ppp} that $L^1(G_1, \omega_1)$ is weakly amenable. The other case is similar.$ \square$ {\footnotesize \noindent {\bf Mohammad Javad Mehdipour}\\ Department of Mathematics,\\ Shiraz University of Technology,\\ Shiraz 71555-313, Iran\\ e-mail: [email protected]\\ {\bf Ali Rejali}\\ Department of Pure Mathematics,\\ Faculty of Mathematics and Statistics,\\ University of Isfahan,\\ Isfahan 81746-73441, Iran\\ e-mail: [email protected]\\ } \end{document}
\begin{document} \theoremstyle{plain}\newtheorem{thm}{Theorem} \theoremstyle{plain}\newtheorem{prop}[thm]{Proposition} \theoremstyle{plain}\newtheorem{lemma}[thm]{Lemma} \theoremstyle{plain}\newtheorem{cor}[thm]{Corollary} \theoremstyle{definition}\newtheorem{defini}[thm]{Definition} \theoremstyle{remark}\newtheorem{remark}[thm]{Remark} \theoremstyle{plain} \newtheorem{assum}[thm]{Assumption} \theoremstyle{definition}\newtheorem{ex}{Example} \begin{abstract} In this paper we investigate the global properties of complete Hilbert manifolds whose sectional curvature is bounded above and below. We prove the Focal Index lemma, which allows us to extend some classical results of finite dimensional Riemannian geometry, such as the Rauch and Berger theorems and the Toponogov theorem, to the class of manifolds in which the Hopf-Rinow theorem holds. \end{abstract} \maketitle Key words: Riemannian geometry, Hilbert manifold. \section{Introduction} \mbox{} In infinite dimensional geometry, most of the local results follow from general arguments analogous to those in the finite dimensional case (see \cite{Kl} or \cite{La}). The investigation of global properties is much harder than in the finite dimensional case, and the Hopf-Rinow theorem is only generically satisfied on complete Hilbert manifolds (see \cite{Ek}). Moreover, the exponential map may fail to be surjective even when the manifold is a complete Hilbert manifold. In section 1 we briefly discuss the relationship between completeness and geodesic completeness (at some point) and we note that these notions are equivalent either when a manifold has constant sectional curvature or when it has nonpositive sectional curvature. We conclude this section by proving that the group of bijective isometries coincides with the set of surjective maps that preserve the distance. The fundamental tools for studying the geometry and topology of finite dimensional manifolds are the Rauch and Berger theorems.
These theorems allow us to understand the distribution of conjugate and focal points along a geodesic and the geometry of complete Hilbert manifolds with bounded sectional curvature. We briefly recall the notion of focal point: let $N$ be a submanifold of a Riemannian manifold $M.$ The exponential map of $M$ is defined on an open subset $W \subset TM$ and we restrict it to $W \cap T^{\perp} N;$ we denote this restriction by $\mathrm{Exp}^{\perp}.$ A focal point is a singularity of $\mathrm{Exp}^{\perp} :W \cap T^{\perp} N \longrightarrow M .$ In infinite dimensional manifolds two kinds of singularities can appear: the differential of $\mathrm{Exp}^{\perp}$ may fail to be injective (monofocal) or may fail to be surjective (epifocal). Clearly, when $N=\{p\}$ we recover exactly the notion of conjugate points. In section $2$, we shall study the singularities of the exponential map from a differential point of view, as in \cite{Ka}; we shall prove that a monofocal point is always epifocal, but not conversely, and that the distribution of epifocal and monofocal points can have cluster points. Moreover we will deduce a weak form of the Rauch theorem. In section $4$ we prove the fundamental tool needed for the main results: the {\em Focal Index lemma}. Firstly, we prove the same version as in the finite dimensional theory; then we prove it in the case where there are only a finite number of epifocal points which are not monofocal (pathological points). Using the above results we prove the Rauch and Berger theorems in infinite dimensional geometry, when there are at most a finite number of pathological points along a finite geodesic. The main applications appear in the last section, and the main result is the Toponogov Theorem in the class of complete Hilbert manifolds on which the Hopf-Rinow theorem holds.
The proof is almost the same as in \cite{CE} because the Rauch, Berger and Hopf-Rinow theorems hold. Moreover, we prove the Maximal Diameter Theorem and two versions of sphere theorems, under a strong assumption on the injectivity radius, with pinching $\sim \frac{3}{4}$ and $\frac{4}{9},$ in the class of Hopf-Rinow manifolds. Other simple applications are two results in the spirit of the Berger-Toponogov theorem, one using the Rauch Theorem and one using the Toponogov Theorem, and a result about the image of the exponential map of a complete manifold with sectional curvature bounded above. Some basic references for infinite dimensional geometry are \cite{La} and \cite{Kl}. \section{Preliminaries} \mbox{} In this section we give some general results of infinite dimensional Riemannian geometry and we briefly discuss some of the differences from the finite dimensional case. We begin by recalling some basic facts and establishing our notation. Let $(M,g)$ be a Hilbert manifold modeled on an infinite dimensional Hilbert space $\mathbb{H}.$ Throughout this paper we shall assume that $M$ is a connected, paracompact and Hausdorff space. Any tangent space $T_p M$ has a scalar product $g(p),$ depending smoothly on $p,$ and defining on $T_p M$ a norm equivalent to the original norm on $\mathbb{H}.$ Using $g,$ we can define the length of a piecewise differentiable curve, and it is easy to check that any two points of $M$ can be joined by a piecewise differentiable curve. Hence, we can introduce a metric $d,$ defining $d(x,y)$ as the infimum of the lengths of all differentiable paths joining $x$ and $y$; one can prove that $(M,d)$ is a metric space (see \cite{Pa}) and that $d$ induces the same topology as $M.$ As in the finite dimensional case, $M$ admits a unique Levi-Civita connection $\nabla$ (see \cite{La}) defined by Koszul's formula.
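For the reader's convenience, we recall Koszul's formula, which characterizes the Levi-Civita connection $\nabla$ for vector fields $X,Y,Z$ on $M$ exactly as in the finite dimensional case:

```latex
2\,g(\nabla_X Y, Z) \;=\; X\,g(Y,Z) \;+\; Y\,g(X,Z) \;-\; Z\,g(X,Y)
  \;+\; g([X,Y],Z) \;-\; g([X,Z],Y) \;-\; g([Y,Z],X).
```

Since the right-hand side involves only the metric and Lie brackets, this identity determines $\nabla$ uniquely and shows at once that $\nabla$ is metric and torsion-free.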
We recall that the criterion of tensoriality does not hold in infinite dimensional geometry (see \cite{BMT}), so we must deduce all properties of $\nabla$ from its local expression. Let $c:[a,b] \longrightarrow M$ be a smooth curve. For any $ v \in T_{c(a)} M$ there exists a unique vector field $V(t)$ along $c$ such that $ \nabla_{\fis c (t)}{V(t)}=0$ and $V(a)=v.$ Moreover, the Levi-Civita connection satisfies $ \nabla_X Y (p) = \frac{d}{dt} \mid_{t=0} \pt{t}{0} (Y), $ where $\pt{t}{0}$ is the parallel transport along any curve $\gamma$ such that $\gamma (0)=p$ and $\fis{\gamma} (0)= X(p).$ A geodesic in $M$ is a smooth curve $\gamma$ which satisfies $\nabla_{\fis{\gamma}} \fis{\gamma}=0.$ Using the theorems of existence, uniqueness and smooth dependence on the initial data, we may prove the existence, at any point $p,$ of the exponential map $\e p,$ defined in a neighborhood of the origin in $T_p M$ by setting $\e p (v)=\gamma (1),$ where $\gamma$ is the geodesic in $M$ such that $\gamma (0)=p$ and $\fis{\gamma }(0)=v.$ This map is smooth and, since $d( \e p )_{0}=id,$ it is a local diffeomorphism in a neighborhood of the origin in $T_p M$ by the inverse function theorem. Moreover, there exists an open neighborhood $W$ of $\{ 0_p \in T_p M: \ p \in M \}$ in the tangent bundle $TM$ such that the map $\exp (X_p)= \e p (X)$ is defined and differentiable. Generally, the local theory works as in finite dimensional geometry, and results such as the Gauss lemma and the existence of convex neighborhoods hold in infinite dimensional Riemannian geometry. The curvature tensor is defined as follows: let $x,y,z \in T_p M;$ we extend them to vector fields $X,Y,Z$ and define $R(x,y)z=\nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X,Y]} Z.$ It is easy to check that $R$ doesn't depend on the extension, is antisymmetric in $x,y$ and satisfies the first Bianchi identity $ R(x,y)z + R(z,x)y + R(y,z)x =0.
$ Given any plane $\sigma$ in $T_p M,$ let $v,w \in \sigma$ be linearly independent. We define the sectional curvature $K(\sigma)$ to be \begin{small} $$ \frac{g(R(v,w)w,v)}{g(v,v) g(w,w) - g(v,w)^2}. $$ \end{small} It is easy to check that $K$ doesn't depend on the choice of the spanning vectors and that the curvature tensor $R$ is completely determined by the sectional curvature. Moreover, as in the finite dimensional case, one can prove the Cartan theorem (see \cite{Kl}, page 114), in which the existence of a local isometry is characterized by a certain property of the curvature tensor. The global theory on Hilbert manifolds is more difficult: for example, the Hopf-Rinow theorem fails. We recall that a Hilbert manifold is called complete if $(M,d)$ is complete as a metric space. On the other hand, we say that a manifold $M$ is geodesically complete at a point $p$ if the exponential map is defined on all of $T_p M,$ and $M$ is geodesically complete if it is geodesically complete at every point $q \in M.$ It is easy to check the following implications: \\ \noindent $M$ complete $\Rightarrow$ $M$ is geodesically complete $\Rightarrow$ $M$ is geodesically complete at some point. \\ If $M$ is a finite dimensional manifold, the above statements are equivalent, thanks to the Hopf-Rinow theorem. Grossman (see \cite{Gr}) constructed a simply connected complete Hilbert manifold on which there exist two points which cannot be connected by a minimal geodesic, although the exponential map is surjective. On the other hand, Ekeland (\cite{Ek}) proved that the Hopf-Rinow theorem is generically satisfied, i.e.
if one takes a point $p \in M$ in a complete Hilbert manifold $M,$ the set of points $q\in M$ such that there exists a unique minimal geodesic joining $p$ and $q$ is a $G_{\delta}$ set and in particular a dense subset of $M.$ The Hopf-Rinow theorem also implies that the exponential map must be surjective on complete finite dimensional Riemannian manifolds. Atkin (see \cite{At}) showed that there exists a complete Hilbert manifold $M$ such that, at some point $p \in M,$ the exponential map $\e p$ is not surjective. Let $q \in M - \e p (T_p M).$ Then $M-\{ q \}$ is not complete as a metric space but is clearly geodesically complete at $p.$ In particular: {\em in infinite dimensional Riemannian geometry, geodesic completeness at some point doesn't imply completeness.} Moreover, Atkin (see \cite{At1}) constructed infinite dimensional Hilbert manifolds on which the induced metric is incomplete but geodesically complete, and any two points may be joined by a minimizing geodesic. The above discussion justifies the following definition. \begin{defini} A complete Hilbert manifold $M$ is called Hopf-Rinow if for every $p,q \in M$ there exists at least one minimal geodesic joining $p$ and $q.$ \end{defini} In \cite{El} El\'{\i}ason showed that the Sobolev manifolds, i.e. the spaces of Sobolev sections of a vector bundle over a compact manifold, are Hopf-Rinow. Another class of Hopf-Rinow manifolds is given by the simply connected complete Hilbert manifolds with nonpositive sectional curvature; indeed, the Cartan-Hadamard theorem holds, see \cite{Gr} and \cite{Mca}. Furthermore (see \cite{La}), one may prove that a Hilbert manifold $M$ with nonpositive sectional curvature is a complete Hilbert manifold if and only if $M$ is geodesically complete at some point.
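A minimal model illustrating the definition: the Hilbert space $\mathbb{H}$ itself, endowed with the flat metric given by its inner product, is Hopf-Rinow, since the exponential map is affine and geodesics are straight lines:

```latex
\exp_p(v) \;=\; p + v, \qquad d(p,q) \;=\; \|p - q\|,
```

so that $t \mapsto p + t(q-p)$, $t \in [0,1]$, is a minimal geodesic joining any two points $p,q \in \mathbb{H}$; here completeness and geodesic completeness at a point trivially coincide.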
It is easy to check that the same holds for manifolds with constant sectional curvature: indeed, using the same arguments used in the classification of the simply connected complete Hilbert manifolds with constant sectional curvature, we may prove our claim. The Bonnet theorem was proved by Anderson (see \cite{An}); however, we cannot conclude any information about the fundamental group, since one may prove (see \cite{B}) that there exist infinite groups acting isometrically and properly discontinuously on the infinite dimensional unit sphere. In particular, the Weinstein theorem fails: the following example gives an isometry $f$ of the unit sphere $S(l_2)$ of a separable Hilbert space without fixed points, such that $\inf \{ d(x,f(x)) : x \in S(l_2) \} =0:$ $$ \begin{array}{lcl} f(\sum_{i=1}^{\infty} x_i e_i)&=& \sum_{i=1}^{\infty} (\cos(\frac{1}{i}) x_{2i-1} \ + \ \sin(\frac{1}{i}) x_{2i}) e_{2i-1} \\ &+& \sum_{i=1}^{\infty} (\cos(\frac{1}{i}) x_{2i} \ - \ \sin(\frac{1}{i}) x_{2i-1})e_{2i}. \\ \end{array} $$ We conclude this section by proving that the group of bijective isometries coincides with the set of surjective maps that preserve the distance. \begin{prop} \label{isometrie} Let $F:(M,g) \longrightarrow (N,h)$ be a surjective map. Then $F$ is an isometry if and only if $F$ preserves the distance, i.e. if and only if $d(F(x),F(y))=d(x,y).$ \end{prop} The same proof as in the finite dimensional case holds (see \cite{KN}), once we prove the following result: $$\lim_{s \rightarrow 0} \frac{d(\e p (sX) , \e p (sY))}{\parallel sX - sY \parallel}=1. $$ {\bf Proof:} $\ $ Let $r>0$ be such that $\e p: \bn{p}{r} \rightarrow \bd {\mathcal{B}}{p}{r},$ between the balls with their respective metrics, is a diffeomorphism onto. Now recall that $d( \e p)_0 =id;$ hence there exist $\epsilon >0$ and $0< \eta \leq r$ such that \[ 1 - \epsilon \leq \parallel d (\e p)_q \parallel \leq 1 + \epsilon, \] for every $q \in \bn{p}{\eta}$. Let $m,a \in \bd{\mathcal{B}}{p}{\frac{\eta}{4}}$.
To compute the distance from $m$ to $a,$ we may restrict ourselves to curves $c$ lying in $\bd{\mathcal{B}}{p}{\eta}$. Then $c(t)=\e p(\xi(t))$ and \[ (1 - \epsilon) \int_0^1 \parallel \fis {\xi}(t) \parallel_p dt \leq \int_0^1 \parallel \fis c (t) \parallel dt \leq (1 + \epsilon) \int_0^1 \parallel \fis {\xi}(t) \parallel_p dt. \] That is: for every $\epsilon>0$ there exists $s_o=\frac{\eta}{4}$ such that, for every $s<s_o$, we have \begin{small} \[ (1 - \epsilon) \parallel sX - sY \parallel_p \leq d(\e p (sX), \e p (sY)) \leq (1 + \epsilon) \parallel sX - sY \parallel_p, \] \end{small} which implies our result. QED \section{Jacobi Flow} \mbox{} The linearized version of the geodesic equation is the famous Jacobi equation. In this section, we shall study the Jacobi fields from the differential point of view, obtaining some information on the distribution of singular points of the exponential map. Throughout this section, all estimates are formulated in terms of unit speed geodesics, because one can easily reparametrize: if $J$ is a Jacobi field along $c,$ then \begin{center} $J_r(t)= J(rt)$ is a Jacobi field along $ c_r (t) = c(rt),$ \\ with $J_r (0)=J(0)$ and ${J'}_r(0)=r J' (0), \ {c'}_r (0)=r c' (0).$ \end{center} \begin{defini} Let $c:[0,a] \longrightarrow M$ be a geodesic.
A vector field along $c$ is called a Jacobi field if it satisfies the Jacobi differential equation $$ \nabla_{t} \nabla_{t} J(t) + R(J(t), \fis c (t) ) \fis c (t) =0, $$ where $\nabla_{t}$ denotes covariant differentiation along $c.$ \end{defini} The Jacobi equation is a second order differential equation and, by the theory of differential equations in Banach spaces (see \cite{La}), the solutions are defined on the whole domain of definition and the set of Jacobi fields along $c$ is a vector space isomorphic to $T_{c(0)} M \times T_{c(0)} M$ under the map $J \longrightarrow (J(0), \nabla_{t} J (0) ).$ We recall also that the Jacobi fields are characterized as infinitesimal variations of $c$ through geodesics. We first give a lemma due to Ambrose (see \cite{La}, page 243). \begin{lemma} Let $c:[0,a] \longrightarrow M$ be a geodesic and let $J$ and $Y$ be two Jacobi vector fields along $c.$ Then $$ \langle \nabla_{t} J (t), Y(t) \rangle - \langle \nabla_{t} Y (t), J(t) \rangle = C, $$ for some constant $C$. \end{lemma} Now, let $c: [0,b] \longrightarrow M$ be a geodesic. We denote $p=c(0)$ and by $\pt{s}{t}$ the parallel transport along $c$ between the points $c(s)$ and $c(t)$. We define $$ R_s:T_{c(0)} M \longrightarrow T_{c(0)} M, \ \ R_s(X)=\pt{s}{0}(R(\pt{0}{s}(X),\fis{c}(s))\fis{c}(s)), $$ which is a family of symmetric operators on $T_p M$. Take $\mathbb{H}_o$ a closed subspace of $T_p M$ and let $ A:\mathbb{H}_o \longrightarrow \mathbb{H}_o $ be a continuous and symmetric linear operator. Clearly $T_p M=\mathbb{H}_o \oplus \mathbb{H}_o^{\perp}$ and we will study the solutions of the following linear differential equation \[ \left \{ \begin{array}{l} T''(s)\ +\ R_s(T(s))=0;\\ T(0)(v,w)=(v,0),\ T'(0)(v,w)=(-A (v),w), \end{array} \right.
\] that we call the {\em Jacobi flow} of $c.$ Firstly, note that the family of bilinear maps $$ \begin{array}{lcl} & T_p M \times T_p M \stackrel{\Phi(t)}{\longrightarrow} \mathbb{R} & \\ & (u,v) \longrightarrow \langle T(t)(u),T'(t)(v) \rangle & \\ \end{array} $$ is symmetric; indeed it is symmetric at $t=0,$ because $A$ is a symmetric operator, and \[ ( \langle T(t)(u),T'(t)(w) \rangle \ - \ \langle T(t)(w),T'(t)(u) \rangle )' \] is zero. The solutions of the Jacobi flow are exactly the Jacobi fields. \begin{prop} Let $(v,w) \in T_p M=\mathbb{H}_o \oplus \mathbb{H}_o^{\perp}.$ Then the Jacobi field with $J(0)=v \in \mathbb{H}_o$ and $\nabla_{t} J(0)\ + \ A (J(0))=w \in {\mathbb{H}_o}^{\perp}$ is given by $\pt{0}{t}(T(t)(v,w))$. \end{prop} \noindent {\bf Proof:} $\ $ Let $Z(t)$ be the parallel transport of $u\in T_p M$ along $c$ and write $Y(t)=\pt{0}{t}(T(t)(v,w))$. Hence \begin{eqnarray} \langle \sfv{t}{t}{Y(t)},Z(t) \rangle &=& \langle Y(t),Z(t) \rangle'' \nonumber \\ &=& \langle T(t)(v,w),u \rangle '' \nonumber \\ &=& - \langle R(Y(t),\fis {c}(t)) \fis{c} (t),Z(t) \rangle. \ \mathrm{QED} \nonumber \end{eqnarray} Our aim is to study the distribution of singular points of the exponential map along a geodesic $c,$ so it is very useful to compute the adjoint operator of $T(b).$ Let $u \in T_p M$ and let $J$ be the Jacobi field along the geodesic $c$ such that $ J(b)=0, \ \nabla_{t} J(b)=\pt{0}{b}(u). $ By the Ambrose lemma we have \[ (1)\ \langle T(b)(v,w),u \rangle = \langle T(0)(v,w), \nabla_{t} J(0) \rangle \ - \ \langle T'(0)(v,w),J(0) \rangle. \] We denote $\overline{c}(t)=c(b-t)$ and let \[ \left \{ \begin{array}{l} \tilde{T}''(s)\ +\ R_s(\tilde{T}(s))=0;\\ \tilde{T}(0)=0,\ \tilde{T}'(0)=id, \end{array} \right. \] be the Jacobi flow of $\overline{c}:[0,b] \longrightarrow M$.
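In the model case of constant sectional curvature $K \equiv H > 0$ one has $R_s = H\, id,$ and the Jacobi flow can be solved explicitly, since it reduces to the scalar equation $f'' + Hf = 0$ with the corresponding initial data:

```latex
T(s) \;=\; \frac{\sin(\sqrt{H}\,s)}{\sqrt{H}}\; id \quad (\mathbb{H}_o = 0),
\qquad
T(s) \;=\; \cos(\sqrt{H}\,s)\; id \quad (\mathbb{H}_o = T_p M,\ A = 0),
```

so the first singular values of the flow are $s=\pi/\sqrt{H}$ and $s=\pi/(2\sqrt{H}),$ respectively; these are exactly the thresholds appearing in the corollaries below.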
It is easy to check that if $J$ is a Jacobi field along $c,$ then $\overline{J}(t)=J(b-t)$ is the Jacobi field along $\overline{c}$ such that $\nabla_{t}\overline{J}(b)=-\nabla_{t} J(0)$. Then $(1)$ becomes $$ \begin{array}{lcl} \langle T(b)(v,w),u \rangle &=& \langle (v,0), \pt{b}{0}(\tilde{T}'(b)(-\pt{0}{b}(u))) \rangle \\ &-& \langle (-A(v),w),\pt{b}{0}(\tilde{T} (b)(-\pt{0}{b}(u))) \rangle \\ \end{array} $$ and the adjoint operator is given by \begin{small} $$\begin{array}{l} \langle T^*(b)(u),(v,0) \rangle = - \langle \pt{b}{0} (\tilde{T}'(b) (\pt{0}{b}(u) ) ) + A( p_t ( \pt{b}{0}(\tilde{T}(b) ( \pt{0}{b}(u) ) ) ) ) ,(v,0) \rangle , \\ \langle T^*(b)(u), (0,w) \rangle =\ \langle \pt{b}{0}(\tilde{T}(b)(\pt{0}{b}(u))), (0,w) \rangle , \\ \end{array}$$ \end{small} where $p_t$ denotes the component along $\mathbb{H}_o.$ \begin{prop} \label{p1} There exists a bijective correspondence between the kernel of $ T(b)$ and the kernel of $T^* (b).$ \end{prop} \noindent {\bf Proof:} $\ $ Let $w \in T_p M$ be such that $T(b)(w)=0.$ Then the Jacobi field $$ Y(t)=\pt{0}{t}(T(t)(w))=\pt{b}{t}(\tilde{T}(t)(\overline{w})) $$ for a unique $\overline{w} \in T_{c(b)}M$. Using the boundary condition on $Y(t)$ we have $T^* (b) (\pt{b}{0} (\overline{w}))=0.$ Conversely, if $T^* (b) (w)=0$ it is easy to check that $\pt{b}{t} (\tilde{T} (t) (\pt{0}{b}(w)))= \pt{0}{t} (T(t) (\overline{w}))$ for some $\overline{w} \in T_p M.$ Clearly $T(b) (\overline{w})=0$ and $w$ can be obtained, starting from $\overline w$, by the above arguments. QED\\ We recall that $Ker\, T^* (b)=\overline{Im\, T(b)}^{\perp};$ hence we have also proved the following result. \begin{prop} \label{singular1} If $T(b)$ is injective then $Im\, T(b)$ is a dense subspace.
\end{prop} We will study the behaviour of the Jacobi flow either when $\text{$\mathbb{H}$}_o=0$ or when $\text{$\mathbb{H}$}_o= T_p M$ and $A=0.$ We will denote by $f_{\Delta}$ a solution of the differential equation $f''(s)\ +\ \Delta(s)f(s)=0$ with $f(0)=0$, $f'(0)=1$, when $\text{$\mathbb{H}$}_o=0,$ or with $f(0)=1$, $f'(0)=0$, when $\text{$\mathbb{H}$}_o=T_p M$ and $A=0$. First, we note that the following result, proved in \cite{Ka}, holds in our context. \begin{prop} \label{p2} Let $\delta(s) \leq \langle R(u,\fis c(s))\fis c(s),u \rangle \leq \Delta(s)$ be lower and upper curvature bounds along $c$. Then: \begin{enumerate} \item if $T(s)$ is invertible then \[ ( \langle T'T^{-1} u,u \rangle )' \leq -( \Delta(s)\ + \ \langle T'T^{-1}u,T'T^{-1}u \rangle); \] \item $\parallel T(s)u \parallel \geq f_{\Delta}(s) \langle u,u \rangle^{\frac{1}{2}},$ $f_{\Delta}(s)$ being positive in $0< s \leq s_o$; \item $\langle T'u,Tu \rangle f_{\Delta} \geq \langle Tu,Tu \rangle f_{\Delta}'$, $f_{\Delta} (s)$ being positive in $0 < s \leq s_o$; \item $ \langle T'u,Tu \rangle f_{\delta} \leq \langle Tu,Tu \rangle f_{\delta}'$, $0< s \leq s_o$, if $T(s)$ is invertible in $0<s \leq s_1$ $(s_1 \geq s_o)$; \item if $T(s)$ is invertible in $0< s \leq s_1$ then \[ \parallel T(s) \parallel f_{\delta}(t) \geq \parallel T(t) \parallel f_{\delta} (s), \ 0 \leq s \leq t \leq s_1; \] \item $\parallel T(s)u \parallel \leq f_{\delta}(s) \langle u,u \rangle^{\frac{1}{2}},\ 0 \leq s \leq s_1$. \end{enumerate} \end{prop} \begin{cor} \label{singular2} Let \[ \left \{ \begin{array}{l} T''(s)\ +\ R_s(T(s))=0; \\ T(0)(v,w)=(v,0),\ T'(0)(v,w)=(-A (v),w), \end{array} \right. \] be the Jacobi flow. We suppose that the sectional curvature is bounded above, i.e. $K \leq H.
$ Then: \begin{itemize} \item [{\bf (a)}] if $\text{$\mathbb{H}$}_o=0$ then the Jacobi flow is a topological isomorphism for $t \geq 0$, if $H \leq 0,$ or for $ 0 \leq t < \frac{\pi}{\sqrt{H}},$ if $H>0$; \item [{\bf (b)}] if $\text{$\mathbb{H}$}_o=T_p M$ and $A=0$ then the Jacobi flow is a topological isomorphism for $t \geq 0$, if $H \leq 0,$ or for $ 0 \leq t < \frac{\pi}{2 \sqrt{H}},$ if $H>0$. \end{itemize} \end{cor} \noindent {\bf Proof:} $\ $ using property $(2)$ of proposition \ref{p2} and proposition \ref{p1} we get our claim.\\ Another interesting corollary is a weak form of the Rauch theorem. \begin{thm} \label{rauch} {\bf (Weak Rauch)} Let $c:[0,b] \longrightarrow M$ be a unit speed geodesic. Suppose that we have lower and upper bounds on the sectional curvature, i.e. $L \leq K(\fis c(t), v) \leq H,$ for any $t$ and any $v \in T_{c(t)}M$ such that $\langle v,\fis c (t) \rangle=0,\ \langle v, v \rangle =1.$ Then $d (\e{p})_{t \fis c (0)}$ is a topological isomorphism for every $t \geq 0$, if $H \leq 0$, and for every $0 \leq t < \frac{\pi}{\sqrt{H}}$ if $H>0$. Moreover, in the above neighborhoods, we have \[ \frac{f_{H}(t)}{t} \parallel v \parallel \leq \parallel d ( \e p )_{t \fis c (0)} (v) \parallel \leq \frac{f_{L}(t)}{t} \parallel v \parallel . \] \end{thm} \section{Focal Index lemma and the Rauch and Berger theorems} \mbox{} Let $N$ be a submanifold of a Hilbert manifold $(M,g),$ i.e. $N$ is a Hilbert manifold and the inclusion $i: N \hookrightarrow M$ is an embedding. Let $\gamma:[a,b] \longrightarrow M$ be a geodesic such that \begin{enumerate} \item $\gamma(a)=p \in N$; \item $\fis{\gamma}(a)=\xi \in {T_p N}^{\perp}$. \end{enumerate} \begin{defini} A Jacobi vector field along $\gamma$ is called $N$-Jacobi if it satisfies the following boundary conditions: \[ Y(a) \in T_p N,\ \nbp{t}{ Y(a)} \ +\ A_{\xi}(Y(a)) \in T_p^{\perp} N, \] where $A_{\xi}$ is the Weingarten operator relative to $N$.
\end{defini} The Jacobi flow along $\gamma$ in which $A= A_{\xi}$ is called the Jacobi flow of $N$ along $\gamma.$ Let $W$ be the open subset of $TM$ on which $\exp$ is defined. We denote by $\mathrm{Exp}^{\perp} :T^{\perp} N \cap W \longrightarrow M$ the map $\mathrm{Exp}^{\perp} (X)=\exp^{M}(X).$ It is easy to prove that $ \mathrm{Ker} ( d ( \mathrm{Exp}^{\perp} )_{t_o \xi} )$ is isomorphic to the subspace of the $N$-Jacobi fields along the geodesic $\gamma (t) = \mathrm{Exp}^{\perp} (t \xi),$ $ t \in [0,t_o],$ which vanish at $\gamma (t_o).$ On the other hand, in infinite dimensional manifolds two kinds of singular points can appear, so the following definition is justified. \begin{defini} A point $q=\gamma(t_o)$ along $\gamma$ is called: \begin{enumerate} \item monofocal if $\mathrm d ( \mathrm{Exp}^{\perp} )_{t_o \xi} $ fails to be injective; \item epifocal if $\mathrm d ( \mathrm{Exp}^{\perp} )_{t_o \xi} $ fails to be surjective. \end{enumerate} \end{defini} By proposition \ref{p1} and by the formula for the adjoint of the Jacobi flow, we have the following results. \begin{prop} \label{focal} Let $N$ be a submanifold of $M$ and let $\gamma :[0,b] \longrightarrow M$ be a geodesic such that $\gamma (0)=p \in N$ and $\xi= \fis{ \gamma} (0) \in {T_p N}^{\perp}.$ Then \begin{enumerate} \item if $\gamma (t_o)$ is not monofocal then the image of $\mathrm d( \mathrm{Exp}^{\perp} )_{t_o \xi}$ is a dense subspace; \item if $\gamma (t_o)$ is monofocal then $\gamma (t_o)$ is epifocal; \item $q=\gamma (t_o)$ is monofocal along $\gamma$ if and only if $p$ is monofocal along $\overline{\gamma}(t)=\gamma(t_o-t);$ \item if $q=\gamma (t_o)$ is epifocal and the image of $\mathrm d ( \mathrm{Exp}^{\perp} )_{t_o \xi} $ is closed, then $p$ is monofocal along $\overline{\gamma}(t)=\gamma(t_o-t).$ \end{enumerate} \end{prop} In the degenerate case, i.e.
$N=\{ p \},$ we call $q=\gamma(t_o)$ either {\em monoconjugate} or {\em epiconjugate} along $\gamma$. If there exist neither monofocal (monoconjugate) nor epifocal (epiconjugate) points, then we will say that there are no focal (conjugate) points along $\gamma$. The distribution of the singular points of the exponential map along a finite geodesic is different from the finite dimensional case. Indeed, Grossman showed that the set of monoconjugate points can have cluster points. The same pathology appears in the case of focal points, and we shall give a pathological example of the distribution of monofocal and epifocal points along a finite geodesic. \begin{ex} \label{esempio} {\rm Let $M= \{ x \in l_2 :\ x^2_1\ +\ x^2_2 \ + \ \sum_{i=3}^{\infty} a_i x^2_i=1 \}$, where $( a_i )_{i \in \text{$\mathbb{N}$}}$ is a positive sequence of real numbers. It is easy to check that \[ \gamma(s)= \sin (s) e_1 \ + \ \cos (s) e_2 \] is a geodesic and $T_{\gamma(s)}M=<\fis{\gamma}(s),e_3,e_4,\ldots >$. Let $N$ be the geodesic submanifold defined by $\fis{\gamma} (0).$ We shall restrict ourselves to the normal Jacobi fields. We note that for $k \geq 3$ \[ E_k:= \{ x^2_1\ +\ x^2_2 \ +\ a_k x^2_k =1\ \} \hookrightarrow M \] is totally geodesic; then $K(\fis{\gamma}(s),e_k)=a_k$ and the Jacobi fields, with boundary conditions $J_k(0)=e_k$, $\nbp{t}{J_k(0)}=0$, are given by $J_k(t)=\cos (\sqrt{a_k}t)e_k$. Hence \[ d (\mathrm{Exp}^{\perp})_{s \fis{\gamma}(0)} (\sum_{k=3}^{\infty} b_k e_k )= \sum_{k=3}^{\infty} b_k \cos (\sqrt{a_k}s) e_k. \] Clearly, the points $\gamma (r_k^m)$, $r_k^m=\frac{m \pi}{2 \sqrt{a_k}}$ with $m$ odd, are monofocal. Specifically, let $a_k=(1- \frac{1}{k})^2.$ The points $\gamma (s_k)$, $s_k=\frac{k \pi}{2 (k-1)},$ are monofocal, $s_k \rightarrow \frac{\pi}{2}$ and \[ d (\mathrm{Exp}^{\perp})_{\frac{\pi}{2} \fis{\gamma}(0) } ( \sum_{k=3}^{\infty} b_k e_k)= \sum_{k=3}^{\infty} b_k \cos (\frac{k-1}{k} \frac{\pi}{2}) e_k.
\] In particular, $\gamma(\frac{\pi}{2})$ is not monofocal along $\gamma.$ On the other hand, if $\sum_{k=3}^{\infty} \frac{1}{k} e_k= d (\mathrm{Exp}^{\perp})_{\frac{\pi}{2} \fis{\gamma}(0)}(\sum_{k=3}^{\infty} b_k e_k)$ then $ \sin (\frac{\pi}{2k}) b_k= \frac{1}{k}, $ hence \[ \lim_{k \rightarrow \infty} b_k = \lim_{k \rightarrow \infty} \frac{\pi}{2k} \frac{1}{\sin(\frac{\pi}{2k})} \frac{2}{\pi}=\frac{2}{\pi}. \] In particular $(b_k)$ is not square-summable, so $\sum_{k=3}^{\infty} \frac{1}{k} e_k$ is not in the image. This means that $\gamma(\frac{\pi}{2})$ is epifocal.} \end{ex} This example shows that there exist epifocal points which are not monofocal. We call them pathological points. If the exponential map of a Hilbert manifold has only a finite number of pathological points, we will say that the exponential map is {\em almost non singular.} Clearly, if the exponential map were Fredholm (necessarily of index zero), monoconjugate points and epiconjugate points along geodesics would coincide. This holds for the Hilbert manifold $\Omega (M),$ the free loop space of a compact manifold (see \cite{Mi}). Moreover, any finite geodesic in $\Omega (M)$ contains at most finitely many conjugate points. Now we shall prove the Index lemma. This lemma allows us to extend the Rauch and Berger theorems to the infinite dimensional context. Let $X:[0,1] \longrightarrow T_p M$ be such that $X(0) \in T_p N.$ We define the {\em focal index} of $X$ as follows: $$ \begin{array}{lcl} I^N (X,X)& =& \int_{0}^{1} (\langle \fis X (t), \fis X (t) \rangle - \langle R_t(X(t)),X(t) \rangle )dt \\ & - & \langle A_{\xi} (X(0)),X(0) \rangle . \end{array} $$ If $N=\{ p \},$ the focal index is called the index and we will denote it by $I(X,X).$ We note that any vector field along $\gamma$ is the parallel transport of a unique map $X:[0,b] \longrightarrow T_p M$; we will denote by $\overline X (t)=\pt{0}{t}(X)$ the vector field along $\gamma$ starting from $X$.
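Returning to Example \ref{esempio}, its two limits can be checked numerically. A small sketch, under the stated choice $a_k=(1-\frac{1}{k})^2$ (variable names are ours):

```python
import math

# a_k = (1 - 1/k)^2, so sqrt(a_k) = (k-1)/k and the first monofocal
# parameter along gamma is s_k = k*pi/(2*(k-1)), where cos(sqrt(a_k)*s_k) = 0.
for k in (3, 10, 1000):
    sqrt_a = (k - 1) / k
    s_k = k * math.pi / (2 * (k - 1))
    assert abs(math.cos(sqrt_a * s_k)) < 1e-12   # J_k(s_k) = 0: monofocal
assert abs(1000 * math.pi / (2 * 999) - math.pi / 2) < 0.002  # s_k -> pi/2

# Preimage coefficients of sum (1/k) e_k at t = pi/2:
# sin(pi/(2k)) * b_k = 1/k, so b_k -> 2/pi and (b_k) is not square-summable,
# which is why gamma(pi/2) is epifocal.
b = lambda k: 1.0 / (k * math.sin(math.pi / (2 * k)))
print(b(10), b(1000))
```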
\begin{lemma} $I^N (X,X)= D^2 E(\gamma)(\overline X, \overline X)$, where $D^2 E(\gamma)$ is the index form of $B=N \times M \hookrightarrow M \times M.$ \end{lemma} \noindent {\bf Proof:} $\ $ we recall that \begin{small} \begin{eqnarray} D^2 E (\gamma) ( \overline X , \overline X ) &=& \int_a^b \parallel \nbp{t}{\overline X (t)} \parallel^2 \ - \ \langle \overline X (t), R(\overline X (t),\fis {\gamma}(t)) \fis {\gamma}(t) \rangle dt \nonumber \\ &-& \langle\langle A_{(\fis {\gamma}(a),-\fis {\gamma}(b))}(\overline X(a), \overline X (b)),(\overline X (a),\overline X (b)) \rangle\rangle, \nonumber \end{eqnarray} \end{small} see \cite{Sa}, where $A$ is the Weingarten operator of $N \times M \hookrightarrow M \times M.$ By the above expression, it is enough to prove that $\nbp{t}{\overline{X}}(t)=\pt{0}{t}(\fis X (t))$. Let $Z(t)$ be the parallel transport of a vector $Z \in T_p M.$ Then $$ \begin{array}{ccl} \langle \nbp{t}{\overline X (t)},Z(t) \rangle &= & \langle \overline X (t),Z(t) \rangle ' \\ &= & \langle \fis{X} (t), Z \rangle \\ &= & \langle \pt{0}{t}(\fis{X}(t)), Z(t) \rangle. \ \mathrm{QED} \end{array}$$ We now give the Focal Index lemma, formulated as in finite dimensional Riemannian geometry. \begin{lemma} \label{fp} Let $X:[0,b] \longrightarrow T_p M$ be a piecewise differentiable map with $X(0) \in T_p N$. Suppose that $T(t)$ is invertible in $(0,b)$. Then \[ I^N (X,X) \geq I^N (J,J), \] where $J(t)=T(t)(u)$ with $X(b)=T(b)(u)$. Equality holds if and only if $X(t)=T(t)(u)$.
Hence, if there are no focal points along $\gamma$, the focal index of a vector field $Y$ along $\gamma$ is not smaller than the focal index of the $N$-Jacobi field $J$ along $\gamma$ such that $Y(b)=J(b).$ \end{lemma} \noindent {\bf Proof:}$\ $ Since $T(t)$ is invertible, there exists a piecewise differentiable map $Y:[0,b] \longrightarrow T_p M$ such that $Y(0)=X(0) \in T_p N$ and $X(t)=T(t)(Y(t))$. Hence \[ \fis{X}(t)=T'(t)(Y(t)) \ + \ T(t)(\fis{Y}(t))=A(t) \ + \ B(t). \] The focal index of $X$ is given by \begin{small} $$ \begin{array}{ccl} I^N(X,X) &=& \int_{0}^{b} ( \langle A(t),A(t) \rangle \ + \ 2 \langle A(t),B(t) \rangle \ + \ \langle B(t),B(t) \rangle )dt \\ &-& \int_{0}^{b} \langle R_t (T(t)(Y(t))),T(t)(Y(t)) \rangle dt \ - \ \langle A_{\xi} (X(0)),X(0) \rangle. \end{array}$$ \end{small} A straightforward computation shows that \begin{eqnarray} \langle A(t),A(t) \rangle &=& \langle T(t)(Y(t)),T'(t)(Y(t)) \rangle' \nonumber \\ &-& \langle B(t),A(t) \rangle \nonumber \\ &+& \langle T(t)(Y(t)),R_t (T(t)(Y(t))) \rangle \nonumber \\ &-& \langle T(t)(\fis{Y}(t)),T'(t)(Y(t)) \rangle , \nonumber \end{eqnarray} where the last term comes from $\langle T(t)(Y(t)),T'(t)(\fis{Y}(t)) \rangle$ because $\Phi(t)$ is a family of symmetric bilinear forms, and it equals $\langle A(t),B(t) \rangle$. Hence the focal index of $X$ is given by \[ I^N (X,X) \ = \ \langle T(b)(u),T'(b)(u) \rangle \ + \ \int_{0}^{b} \parallel T(t)(\fis{Y}(t)) \parallel^{2}dt , \] thus proving our lemma. QED \begin{cor} \label{compara} Let $(M,g)$ be a Riemannian manifold and let $S$ and $\Sigma$ be submanifolds of codimension $1.$ We denote by $N$ and $\overline{N}$ the unit normal vector fields of $S$ and $\Sigma$ respectively. Suppose that at some point $p \in S \cap \Sigma$ we have $N_p=\overline{N}_p$ and \[ g(\nb_{X}{N},X) < g(\nb_{X}{\overline{N}},X), \] for every $X \in T_p \Sigma =T_p S$. 
Then, if the Jacobi flow $T$ of $S$ is invertible in $(0,b)$, the Jacobi flow of $\Sigma$ must be injective in $(0,b).$ Moreover, if $A-\overline{A}$ is invertible, where $A$ and $\overline{A}$ are the Weingarten operators at $p$ of $S$ and $\Sigma$ respectively, then the Jacobi flow of $\Sigma$ is also invertible in $(0,b).$ \end{cor} \noindent {\bf Proof:} $\ $ let $Y(t)$ be a $\Sigma$-Jacobi field. Since $T(t)$ is invertible, there exists a piecewise differentiable map $X:[0,s] \longrightarrow T_p M$ with $X(0) \in T_p S$ such that $Y(t)=T(t)(X(t)).$ Hence $$ \begin{array}{rcl} Y(0) &=& T(0)(X(0)) \\ \fis Y (0)&=& T'(0)(X(0))\ + \ T(0)(\fis X (0)) \\ (-\overline{A}(Y(0)), \fis Y (0)^n) &=& (-A (X(0))\ +\ \fis X (0)^t,0). \end{array} $$ Hence $Y(0)=X(0)$ and the tangent component of $\fis X (0)$ is given by $(A\ -\ \overline{A})(X(0)).$ Then \begin{small} $$ \begin{array}{lcl} g(Y(s),\nbp{s}{Y} (s))& = & I^S (Y,Y) \\ & = & g((A - \overline{A})(X(0)), X(0)) + \int_{0}^{s} \parallel T(t)(\fis X (t)) \parallel^2 dt \\ & > & 0. \end{array} $$ \end{small} In particular, the Jacobi flow of $\Sigma$ is injective in $(0,b).$ Moreover, if $A- \overline{A}$ is invertible, using the above formula it is easy to prove that the image of the Jacobi flow relative to $\Sigma$ is a closed subspace for every $t\in (0,b)$, and using $(1)$ of proposition \ref{focal} we get our claim. QED \\ Now assume that there exists a pathological point in the interior of $\gamma;$ this means that the Jacobi flow is an isomorphism for every $ t \neq t_o$ in $(0,b)$, while at $t_o$, $T(t_o)$ is a linear operator whose image is a dense subspace. Let $X:[0,b] \longrightarrow T_p M$ be a piecewise differentiable map with $X(0) \in T_p N$. 
Given $\epsilon>0$, there exist $X_n^{\epsilon}$, $n=1,2,$ such that \[ \begin{array}{ccc} & \parallel T(t_o)( X_1^{\epsilon}) - X(t_o) \parallel \leq \frac{\epsilon}{4}, & \\ & \parallel T(t_o)( X_2^{\epsilon}) - \fis X(t_o) \parallel \leq \frac{\epsilon}{4}. \end{array} \] Choose $Y^{\epsilon}$ such that \[ \parallel T(t_o)(Y^{\epsilon}) - T'(t_o)(X_1^{\epsilon}) \parallel \leq \frac{\epsilon}{4} . \] Hence there exists $\eta(\epsilon) \leq \frac{\epsilon}{2}$ such that for $ t \in (t_o - \eta(\epsilon), t_o + \eta(\epsilon))$ we have \[ \begin{array}{ccc} (1) & \parallel T(t)( X_1^{\epsilon} + (t - t_o)(X_2^{\epsilon} - Y^{\epsilon})) - X(t) \parallel \leq \epsilon, & \\ (2) & \parallel \d1{}{t} (T(t)( X_1^{\epsilon} + (t - t_o)(X_2^{\epsilon} - Y^{\epsilon}))) - \fis X (t) \parallel \leq \epsilon. & \end{array} \] We denote by $X^{\epsilon}$ the map \begin{small} \[ X^{\epsilon}(t)= \left \{ \begin{array}{ll} X(t) & {\rm if} \ \ 0 \leq t \leq t_o\ - \ \eta( \epsilon); \\ T(t)(X_1^{\epsilon} + (t - t_o)(X_2^{\epsilon} - Y^{\epsilon})) & {\rm if} \ t_o - \eta( \epsilon) < t < t_o + \eta( \epsilon); \\ X(t) & {\rm if}\ t_o + \eta( \epsilon ) \leq t \leq b . \end{array} \right. \] \end{small} Clearly $X^{\epsilon}(t)=T(t)(Y(t)),$ where $Y(t)$ is again a piecewise differentiable map, with possible corners at the points $t=t_o + \eta (\epsilon)$ and $t=t_o - \eta(\epsilon)$; using the same arguments as in lemma \ref{fp}, we can prove \[ I^{N}(X^{\epsilon},X^{\epsilon}) \geq I^{N}(T(t)(u),T(t)(u)) .
\] On the other hand, the focal index of $X$ is given by \begin{small} $$\begin{array}{lcl} I^N(X,X)&=& I^N(X^{\epsilon},X^{\epsilon}) \\ &-& \int_{ t_o - \eta(\epsilon) }^{ t_o + \eta(\epsilon) } \langle \fis{X}^{\epsilon}(t),\fis{X}^{\epsilon}(t) \rangle\ - \ \langle R({X}^{\epsilon}(t), \fis c (t))\fis c (t),{X}^{\epsilon}(t) \rangle dt \\ &+& \int_{ t_o - \eta ( \epsilon)}^{ t_o + \eta(\epsilon) } \langle \fis X (t), \fis X (t) \rangle \ - \ \langle R(X(t), \fis c (t)) \fis c (t),X(t) \rangle dt . \end{array}$$ \end{small} Now, using $(1)$ and $(2)$, it is easy to check that $$ \lim_{\epsilon \rightarrow 0} I^N ( X^{\epsilon}, X^{\epsilon} )= I^N (X,X) \geq I^N (J,J), $$ where $J(t)=T(t)(u).$ This proves the Focal Index lemma when there is one pathological point. Clearly, the same proof can be generalized to a finite number of pathological points. \begin{lemma} {\bf (Focal Index lemma)} Let $\gamma:[0,b] \longrightarrow M$ be a geodesic with a finite number of pathological points. Then for every vector field $X$ along $\gamma$ with $X(0) \in T_p N,$ the index form of $X$ relative to the submanifold $N \times M \hookrightarrow M \times M$ satisfies $ D^2 E (\gamma) (X,X) \geq D^2 E (\gamma) (J,J)$, where $J$ is the $N$-Jacobi field such that $J(b)=X(b)$. \end{lemma} In particular, if $\gamma$ has a finite number of pathological points, then $\gamma$ is a local minimum and the index form $ D^2 E (\gamma) $ is nonnegative definite. \begin{cor} Let $\gamma:[0, \infty) \longrightarrow M$ be a geodesic, and let $\gamma (t_o)$ be a monoconjugate point. Then $\gamma:[0,t] \longrightarrow M$ is not minimal for $t>t_o.$ \end{cor} Now we shall prove the Rauch and Berger theorems and several corollaries. The proofs are almost the same as in the finite dimensional case, since they can be carried out using the Focal Index lemma; thus we will give only a brief proof of the Rauch theorem.
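In constant curvature the comparison asserted by the Rauch theorem reduces to the scalar Jacobi equations $j'' + K j = 0$, $j(0)=0$, $j'(0)=1$: with $K^N \geq K^M$ one gets $\parallel J(t) \parallel \geq \parallel J^*(t) \parallel$ before the first zero of $J^*$. A numerical sketch of this scalar model (the integrator and names are ours, not from the text):

```python
import math

def jacobi_norm(K, t, n=20000):
    """Solve j'' + K j = 0, j(0)=0, j'(0)=1 (the norm of a normal Jacobi
    field in constant curvature K) with a simple RK4 integrator."""
    h = t / n
    y, dy = 0.0, 1.0
    for _ in range(n):
        k1 = (dy, -K*y)
        k2 = (dy + h/2*k1[1], -K*(y + h/2*k1[0]))
        k3 = (dy + h/2*k2[1], -K*(y + h/2*k2[0]))
        k4 = (dy + h*k3[1], -K*(y + h*k3[0]))
        y, dy = (y + h/6*(k1[0]+2*k2[0]+2*k3[0]+k4[0]),
                 dy + h/6*(k1[1]+2*k2[1]+2*k3[1]+k4[1]))
    return y

K_M, K_N = 0.25, 1.0          # K^N >= K^M, as in the theorem
for t in (0.5, 1.5, 3.0):     # t < pi/sqrt(K_N): J* has no zero yet
    assert jacobi_norm(K_M, t) >= jacobi_norm(K_N, t)
print(jacobi_norm(K_M, 3.0), jacobi_norm(K_N, 3.0))
```

Here the exact solutions are $\sin(\sqrt{K}\,t)/\sqrt{K}$, so the assertion is just the monotonicity of this expression in $K$.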
\begin{thm}{\bf (Rauch)}\label{R} Let $(M,\langle \ ,\ \rangle)$, $(N,\langle \ ,\ \rangle^*)$ be Hilbert manifolds modeled on $\text{$\mathbb{H}$}_1$ and $\text{$\mathbb{H}$}_2$ respectively, with $\text{$\mathbb{H}$}_1$ isometric to a closed subspace of $\text{$\mathbb{H}$}_2$. Let \[ c:[0,a] \longrightarrow M,\ \ c^*:[0,a] \longrightarrow N \] be two geodesics of the same length. We assume that $c^*$ has at most a finite number of pathological points in its interior. Suppose also that for every $t\in [0,a]$ and for every $X \in T_{c(t)}M$, $X_o \in T_{c^*(t)}N$ we have \[ K^N (X_o,\fis c^*(t)) \geq K^M (X,\fis c(t)). \] Let $J$ and $J^*$ be Jacobi fields along $c$ and $c^*$ such that $J(0)$ and $J^*(0)$ are tangent to $c$ and $c^*$ respectively and \begin{enumerate} \item $\parallel J(0) \parallel = \parallel J^*(0) \parallel$; \item $\langle \fis c(0),\nbp {t}{J(0)} \rangle = \langle \fis c^*(0),\nbp {t}{J^*(0)} \rangle $; \item $\parallel \nbp {t}{J(0)} \parallel =\parallel \nbp {t}{J^*(0)} \parallel$. \end{enumerate} Then, for every $t\in [0,a]$, \[ \parallel J(t) \parallel \geq \parallel J^*(t) \parallel. \] \end{thm} \noindent {\bf Proof:} $\ $ It is easy to check that we may restrict ourselves to the case in which the Jacobi fields satisfy the following condition: \[ 0=\parallel J(0) \parallel=\langle \fis c(0),\nbp {t}{J(0)} \rangle = \parallel J^*(0) \parallel= \langle \fis {c}^*(0),\nbp{t}{J^*(0)} \rangle. \] Note that, by assumption, $J^*(t) \neq 0$ for $t>0$. Fix $t_o \in (0,a]$ and let $F$ be an isometry $$ \begin{array}{lcl} &F: T_{c(0)}M \longrightarrow T_{c^*(0)}N & \\ &F(\fis c(0))=\fis {c}^*(0) & \\ &F(\pt{t_o}{0}(J(t_o)))=\ptn{t_o}{0}(J^*(t_o)) \frac{\parallel J(t_o) \parallel}{\parallel J^*(t_o) \parallel}. 
\end{array} $$ We denote by \begin{eqnarray} & i_t: T_{c(t)}M \longrightarrow T_{c^*(t)}N \nonumber \\ & i_t=\ptn {0}{t} \circ F \circ \pt {t}{0}, \nonumber \end{eqnarray} a family of isometries for $0 \leq t \leq t_o;$ it is easy to check that $i_t$ commutes with the Levi-Civita connection. Let $W(t)=i_t(J(t)).$ Then \begin{small} $$\begin{array}{lcl} D^2 E(c^*)(W,W) &=& \int_0^{t_o} \parallel \nbp{t}{W(t)} \parallel^2 \ - \ \langle R^N (\fis c^*(t),W(t))\fis c^*(t),W(t) \rangle dt \\ &\leq & \int_0^{t_o} \parallel \nbp {t}{J(t)} \parallel^2 - \langle R^M (\fis c(t),J(t))\fis c(t),J(t) \rangle dt \\ &=& D^2 E (c) (J,J) . \end{array} $$ \end{small} On the other hand, \begin{eqnarray} \frac{1}{2}\d1{}{t} \mid_{t=t_o} \langle J(t),J(t) \rangle &=& \langle J(t_o),\nbp{t}{J(t_o)} \rangle \nonumber \\ &=& D^2 E (c) (J,J) \nonumber \\ &\geq& D^2 E(c^*)(W,W) \nonumber \\ & \geq & \frac{1}{2} \d1{}{t} \mid_{t=t_o} \langle J^*(t),J^*(t) \rangle^* \frac{\parallel J(t_o) \parallel^2 }{\parallel J^*(t_o) \parallel^2 } , \nonumber \end{eqnarray} where the last inequality follows from the Focal Index lemma. Now let $\epsilon >0$. For every $t \geq \epsilon$ we have \[ \d1{}{t} \log (\parallel J(t)\parallel^2) \geq \d1{}{t} \log (\parallel J^*(t) \parallel^2), \] which implies \[ \frac{\parallel J(t) \parallel^2 }{\parallel J(\epsilon) \parallel^2 } \geq \frac{\parallel J^*(t) \parallel^2 }{\parallel J^*(\epsilon) \parallel^2} . \] Since $\parallel \nbp {t}{J(0)} \parallel = \parallel \nbp {t}{J^*(0)} \parallel,$ letting $\epsilon \rightarrow 0$ we get our claim. QED\\ \noindent Misiolek, see \cite{Mi}, proved that in $\Omega(M)$ the index of any finite geodesic is finite. By the Rauch theorem, the sectional curvature of $\Omega (M),$ with the $H^1$ metric, cannot be uniformly positive along any geodesic.
Indeed, if $K \geq K_o>0,$ we are able to compare $\Omega(M)$ with the sphere of radius $\frac{1}{\sqrt{K_o}}$ and we may prove that the index along any geodesic of length bigger than $t=\frac{\pi}{\sqrt{K_o}}$ is infinite. \begin{cor}\label{cr} Let $M$, $N$ be Hilbert manifolds modeled on $\text{$\mathbb{H}$}_1$ and $\text{$\mathbb{H}$}_2$, where $\text{$\mathbb{H}$}_2$ is isometric to a closed subspace of $\text{$\mathbb{H}$}_1$. Assume that for every $m \in M$ and $n \in N$ and for all $2$-subspaces $\eta \subseteq T_m M$ and $\beta \subseteq T_{n}N$ we have \[ K^M (\eta) \geq K^N (\beta) . \] Let $i:T_n N \longrightarrow T_m M$ be an isometry and let $r>0$ be such that \begin{center} \begin{tabular}{lcl} &$\e n:\bn{n}{r} \longrightarrow \bd{B}{n}{r}$ & is a diffeomorphism, \\ &$\e m:\bn{m}{r} \longrightarrow \bd{B}{m}{r}$ & is almost non singular. \\ \end{tabular} \end{center} Let $c:[a,b] \longrightarrow \bn{n}{r}$ be a piecewise differentiable curve. Then \[ L(\e n (c)) \geq L(\e m (i\circ c)) . \] \end{cor} \begin{thm}\label{B}{\bf (Berger)} Let $(M,g)$ and $(N,h)$ be Hilbert manifolds modeled on $\text{$\mathbb{H}$}_1$ and $\text{$\mathbb{H}$}_2$, where $\text{$\mathbb{H}$}_1$ is isometric to a closed subspace of $\text{$\mathbb{H}$}_2$. Let $\gamma_1:[0,b] \longrightarrow M$ and $\gamma_2:[0,b] \longrightarrow N$ be two geodesics with the same length. Assume that for every $X_1 \in T_{\gamma_1(t)}M$ and $X_2 \in T_{\gamma_2(t)}N$ with $\langle X_1,\fis{\gamma_1}(t) \rangle = \langle X_2,\fis{\gamma_2}(t) \rangle =0$ we have \[ K^N(X_2,\fis{\gamma_2}(t)) \geq K^M (X_1,\fis{\gamma_1}(t)). \] Assume furthermore that $\gamma_2$ has at most a finite number of pathological points, in its interior, of the geodesic submanifold $N$ defined by $\fis{\gamma_2}(0)$. 
Let $J$ and $J^*$ be Jacobi fields along $\gamma_1$ and $\gamma_2$ such that $\nbp{t}{J(0)}$ and $\nbp{t}{J^* (0)}$ are tangent to $\gamma_1$ and $\gamma_2$ respectively and \begin{enumerate} \item $\parallel \nbp{t}{J(0)} \parallel = \parallel \nbp{t}{J^* (0)} \parallel$, \item $\langle \fis{\gamma_1}(0),J(0) \rangle = \langle \fis{\gamma_2}(0),J^*(0) \rangle ,\ \parallel J(0) \parallel = \parallel J^*(0) \parallel$. \end{enumerate} Then \[ \parallel J(t) \parallel \geq \parallel J^* (t) \parallel, \] for every $t \in [0,b]$. \end{thm} \begin{cor}\label{cb} Let $(M,g)$ and $(N,h)$ be Hilbert manifolds modeled on $\text{$\mathbb{H}$}_1$ and $\text{$\mathbb{H}$}_2$ respectively, where $\text{$\mathbb{H}$}_1$ is isometric to a closed subspace of $\text{$\mathbb{H}$}_2$. Let $\gamma_1:[0,b] \longrightarrow M$ and $\gamma_2:[0,b] \longrightarrow N$ be two geodesics with the same length. Let $V_1(t)$ and $V_2(t)$ be parallel unit vector fields along $\gamma_1$ and $\gamma_2$ which are everywhere perpendicular to the tangent vectors of $\gamma_1$ and $\gamma_2.$ Let $f:I \longrightarrow \text{$\mathbb{R}$}$ be a positive function and consider the two curves \begin{center} $b(t)=\e{\gamma_1(t)} (f(t)V_1(t))$,\\ $b^*(t)=\e{\gamma_2(t)}(f(t)V_2(t))$. \end{center} Assume that $K^N \geq K^M$ and that for any $t \in I$ the geodesic \[ \eta_o (s)=\e{\gamma_2(t)}(sf(t)V_2(t)),\ 0 \leq s \leq 1, \] contains no focal points of the geodesic submanifold defined by $\fis{\eta_o} (0).$ Then $L(b) \geq L(b^*)$. \end{cor} \begin{cor}\label{zio} Let $(M,g)$ be a Hilbert manifold such that $H \leq K \leq L,$ $H>0$, and let $\gamma:[0,b] \longrightarrow M$ be a unit speed geodesic. 
Then: \begin{enumerate} \item the distance $d,$ along $\gamma,$ from $\gamma (0)$ to the first monoconjugate or epiconjugate point satisfies $$ \frac{\pi}{\sqrt{L}} \leq d \leq \frac{\pi}{\sqrt{H}}; $$ \item the distance $d$, along $\gamma,$ from $\gamma (0)$ to the first monofocal or epifocal point of the geodesic submanifold defined by $\fis{\gamma} (0)$ satisfies $$ \frac{\pi}{2 \sqrt{L}} \leq d \leq \frac{\pi}{2 \sqrt{H}}. $$ \end{enumerate} \end{cor} \section{Hilbert Manifolds: a global theory} \mbox{} The Rauch and Berger theorems are very important in order to understand the geometry of complete manifolds with curvature bounded above or below. Indeed, we can compare these manifolds with the complete Hilbert manifolds of constant curvature, whose geometry is well known. We saw that in a complete Hilbert manifold the exponential map may fail to be surjective. When the curvature is bounded above by a constant we have the following result. \begin{prop} \label{s1} Let $(M,g)$ be a complete Hilbert manifold such that $K \leq H$. If $c:[0,1] \longrightarrow M$ is a piecewise differentiable curve, with $L(c)< \frac{\pi}{\sqrt{H}}$ if $H>0$, then there exists a unique piecewise differentiable curve $\overline{c}:[0,1] \longrightarrow \overline{\bn{c(0)}{L(c)}}$ such that $\e {c(0)} ( \overline{c}(t))=c(t)$. In particular, \[ \e p(\bn{p}{r})=\bd{\mathcal{B}}{p}{r} \] for every $p \in M$ and every $r \geq 0$ if $H \leq 0$, or $r< \frac{\pi}{\sqrt{H}}$ if $H>0$. \end{prop} \noindent {\bf Proof:} $\ $ we will give the proof only when $H>0;$ the other case is similar. Take \begin{small} \[ t_o=\sup \{ \ t \in [0,1]:\ \exists!
\ \overline{c}:[0,t] \longrightarrow \overline{\bn{c(0)}{L(c)}} \ \mathrm{with}\ \e p ( \overline{c} (t) )=c(t) \}; \] \end{small} \noindent $t_o$ is positive by the Rauch theorem, and we shall prove that $t_o$ is in fact $1.$ Let \[ \overline{c}:[0,t_o) \longrightarrow \overline{ \bn{c(0)}{L(c)} } \] be the unique lift of $c$; using the weak Rauch theorem we have $$ \begin{array}{rcl} \parallel \fis c (t) \parallel &=& \parallel d ( \e{c(0)})_{\overline{c}(t)} (\fis{\overline {c}}(t)) \parallel \\ &\geq & \frac {\sin( \parallel \overline{c}(t) \parallel \sqrt{H} ) }{\sqrt{H} \parallel \overline{c}(t) \parallel } \parallel \fis{\overline{c}}(t) \parallel \\ &\geq & \frac{\sin( L(c) \sqrt{H} )}{\sqrt{H}L(c)} \parallel \fis{\overline{c}}(t) \parallel, \end{array}$$ so we get \[ \int_{0}^{t_o} \parallel \fis{\overline{c}}(t) \parallel dt \ < \infty. \] However, $\overline{\bn{c(0)}{L(c)}}$ is a complete metric space, so $\lim_{t \rightarrow t_o} \overline{c}(t)=q$ exists, and by the Gauss lemma and the definition of $t_o$ we get $t_o=1.$ QED \begin{cor} Let $(M,g)$ be a complete Hilbert manifold such that $K \leq H.$ Let $p,q \in M$ be such that $d(p,q)< \frac{\pi}{\sqrt{H}}$ if $H>0$. Then at least one of the following facts holds: \begin{enumerate} \item there exists a unique minimal geodesic between $p$ and $q$; \item there exists a sequence $\gamma_n$ of geodesics from $p$ to $q$ such that $L(\gamma_n)>L(\gamma_{n+1})$ and $L(\gamma_n) \rightarrow d(p,q)$. \end{enumerate} \end{cor} Next we state a very useful lemma that we will use in the following proofs. \begin{lemma} \label{s2} Let $(M,g)$ be a Hilbert manifold such that $K \geq L>0.$ Suppose there exists a point $p \in M$ at which $\e p$ is almost non singular on $\bn{p}{r}$. Let $\delta(s)$ be a curve joining two antipodal points on the sphere of radius $s$ in $T_p M$. 
Let $\Delta$ be the image, via $\e p,$ of the curve $\delta(s).$ Then \[ L( \Delta )\leq \frac{\pi}{\sqrt{L}} \sin (s \sqrt{L} ), \] for $s< r$. \end{lemma} \noindent {\bf Proof:} $\ $ Let $S_{\frac{1}{\sqrt{L}}} (T_p M \times \text{$\mathbb{R}$})$ be the sphere of radius $\frac{1}{\sqrt{L}}$ and let $N=(0,\frac{1}{\sqrt{L}}) \in S_{\frac{1}{\sqrt{L}}} (T_p M \times \text{$\mathbb{R}$})$; it is easy to check that \[ \e N (v)= \cos ( \parallel v \parallel \sqrt{L} )N\ +\ \frac{1}{\sqrt{L}} \sin ( \parallel v \parallel \sqrt{L}) \frac{v}{\parallel v \parallel}. \] Let $v,w \in T_p M$ be such that $\langle v,w \rangle =0, \ \langle v,v \rangle = \langle w,w \rangle =1$; any meridian on the sphere of radius $s$ can be parametrized as follows: \[ c_s(t)=s( \cos (t)v \ + \ \sin (t)w ),\ 0 \leq t \leq \pi, \] and $L(\e N (c_s))= \frac{\pi}{\sqrt{L}} \sin (s \sqrt{L})$. By corollary \ref{cr}, we have \[ L( \Delta ) \leq \frac{\pi}{\sqrt{L}} \sin (s \sqrt{L}). \] QED \begin{prop} \label{bt1} Let $(M,g)$ be a Hilbert manifold such that $K \geq 1$. Suppose there exists a point $p \in M$ such that $\e p$ is almost non singular on $\bn{p}{\pi}.$ Then $M$ has constant curvature $K=1$ and is covered by the unit sphere $S(T_p M \times \text{$\mathbb{R}$})$. Furthermore, $M$ is a complete Hilbert manifold. \end{prop} \noindent {\bf Proof:} $\ $ let $S_r(T_p M),$ $ r < \pi,$ be the sphere of radius $r$ in $T_p M$; using lemma \ref{s2} we get that the diameter of $\e p ( S_r(T_p M) ) \rightarrow 0$ when $r \rightarrow \pi.$ Hence $\e p ( S_{\pi} (T_p M))=\{ q \}.$ Let $N=(0,1) \in S(T_p M \times \text{$\mathbb{R}$}).$ We define \[ \phi(m)= \left \{ \begin{array}{l} \e p ({\e N}^{-1}(m)) \ \ {\rm if} \ m \neq -N; \\ q \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm if} \ m=-N. \end{array} \right. \] We claim that $\phi$ is a local isometry. 
First, note that any geodesic starting at $p$ arrives at $q.$ Hence any Jacobi field, vanishing at $p,$ along a geodesic which starts at $p$ is zero at $q.$ Moreover, the index form of any geodesic $\gamma: [0,t] \longrightarrow M,$ $t \leq \pi$, $\gamma (0)=p,$ is nonnegative definite because $\e p$ is almost non singular on $\bn{p}{\pi}.$ Let $\gamma :[0,\pi] \longrightarrow M$ be a geodesic such that $ \gamma (0)=p.$ Let $W(t)$ be the parallel transport along $\gamma$ of a unit vector perpendicular to $\fis{\gamma}(0)$, and set $Y(t)=\sin(t) W(t)$. Since $K \geq 1$, the index form of $Y$ along $\gamma$ satisfies \begin{small} $$ \begin{array}{ccl} 0 & \leq & D^2 E (\gamma) (Y,Y) \\ & = & \int_{0}^{\pi} \langle \nbp{t}{Y(t)},\nbp{t}{Y(t)} \rangle \ - \ \langle R(Y(t),\fis{\gamma}(t))\fis{\gamma}(t),Y(t) \rangle dt \\ & \leq & \int_{0}^{\pi} ( \cos^2 t\ - \ \sin^2 t ) dt \\ & = & 0 . \end{array}$$ \end{small} Hence $K(W(t),\fis{\gamma}(t))=1$ and $Y(t)$ is a Jacobi field. Now, using the Cartan theorem about local isometries and proposition 6.9 pag. 222 in \cite{La}, we get our result. QED\\ Next, we prove the Berger-Toponogov maximal diameter theorem. \begin{thm} \label{bt} {\bf (Berger-Toponogov)} Let $(M,g)$ be a complete Hilbert manifold such that $\delta \leq K \leq 1$. Suppose that there exist two points $p,q$ with $d(p,q)=\frac{\pi}{\sqrt{\delta}}$ and at least one minimal geodesic from $p$ to $q.$ Then $M$ is isometric to the sphere $S_{\frac{1}{\sqrt{\delta}}}(T_p M \times \text{$\mathbb{R}$})$. \end{thm} \noindent {\bf Proof:} $\ $ By corollary \ref{zio}, the distance to the first focal point along any geodesic is at least $\frac{\pi}{2};$ furthermore, by the Bonnet theorem, $d(M)=\frac{\pi}{\sqrt{\delta}}$. Let $\gamma:[0,\frac{\pi}{\sqrt{\delta}}] \longrightarrow M$ be a minimal geodesic from $p$ to $q$. Take the following vector field along $\gamma$: \[ Y(t)=\frac{1}{\sqrt{\delta}} \sin (t \sqrt{\delta}) W(t), \] where $W(t)$ is a parallel unit vector field along $\gamma$ perpendicular to $\fis{\gamma}(t)$. 
We define the following variation of $\gamma$ to be \[ \Omega (s,t)=\e{\gamma(t)}(sY(t)),\ 0 \leq s \leq 1,\ 0 \leq t \leq \frac{\pi}{\sqrt{\delta}}. \] Each curve $\Omega (s,\cdot)$ joins $p$ and $q,$ and by corollary \ref{cr} we get \[ L[\Omega (s,\cdot )] \leq \frac{\pi}{\sqrt{\delta}}. \] Hence every such curve is a minimal geodesic and $Y(t)$ is a Jacobi field. Furthermore, the image of $\Omega$ is a totally geodesic submanifold. Now, it is easy to check that $\e p$ is non singular and injective in $\bn{p}{\frac{\pi}{\sqrt{\delta}}}$, that $\e p (S_{\frac{\pi}{\sqrt{\delta}}} (T_p M) )=q,$ and that the map \[ \Phi:S_{\frac{1}{\sqrt{\delta}}} (T_p M \times \text{$\mathbb{R}$}) \longrightarrow M \] defined by \[ \left \{ \begin{array}{ll} \Phi(m)= \e p \circ {\e N}^{-1}(m) & {\rm if} \ m \neq -N ; \\ \Phi(-N)= q & {\rm if} \ m=-N; \end{array} \right. \] where $N=(0,\frac{1}{\sqrt{\delta}})$, is an isometry. QED \\ The Sphere theorem is one of the most beautiful theorems in classical Riemannian geometry. Unfortunately, we have not yet found a proof in the infinite-dimensional case. We will show two weak versions of the Sphere theorem: one, with pinching $\sim \frac{3}{4}$, is the Sphere theorem proved in finite-dimensional Riemannian geometry by Rauch (see \cite{Ra}), under a strong assumption on the injectivity radius; the other, with pinching $\frac{4}{9},$ is the Sphere theorem in the class of Hopf-Rinow manifolds, for which we shall prove that the Topogonov theorem holds. In the first case the fundamental lemma is the following. \begin{lemma} Let $(M,g)$ be a Hilbert manifold such that $K \leq H$, $H>0.$ Let $\phi: S(\text{$\mathbb{H}$}) \longrightarrow M$, where $S(\text{$\mathbb{H}$})$ is the unit sphere in a Hilbert space, be a local homeomorphism onto its image such that $\phi(N)=p$ and the image of every meridian is a curve of length $r\leq r_o < \frac{\pi}{\sqrt{H}}$.
Then there exists a local homeomorphism \[ \overline{\phi}: S(\text{$\mathbb{H}$}) \longrightarrow \overline{ \bn{p}{r_o} }, \] such that $\e p \circ \overline{\phi} = \phi$. \end{lemma} \noindent {\bf Proof:} $\ $ We apply proposition \ref{s1} to each meridian and we get \[ \overline{\phi}: S(\text{$\mathbb{H}$}) - \{ -N \} \longrightarrow \overline{ \bn{p}{r_o} }, \] with $\e p \circ \overline{\phi} = \phi.$ We claim that we can extend $\overline{\phi}$ to $-N.$ Let $\xi(t)=\e N (tv) $ be a meridian starting from $N.$ Let $\gamma(t)=\phi(\xi(t))$ and let $\overline{\gamma}(t)$ be the lift of $\gamma$. By assumption, for every $t \in [0,\pi]$ there exist an open neighborhood $W(t)$ of $\overline{\gamma}(t)$ and an open neighborhood $U(t)$ of $\gamma(t)$ such that \[ \e p : W(t) \longrightarrow U(t) \] is an onto diffeomorphism. Now, $\phi$ is a local homeomorphism, so there exists an open neighborhood $V(t)$ of $\xi (t)$ such that $\phi(V(t)) \subset U(t)$ and $\phi$ restricted to $V(t)$ is a homeomorphism. The closed interval is compact, so there exists a partition $0=t_o \leq t_1 \leq \ldots \leq t_{n-1} \leq t_{n}=\pi$ such that \[ \left \{ \begin{array}{l} \xi( [0,\pi]) \subseteq V(t_o) \cup \cdots \cup V(t_n)=V; \\ \gamma( [0,\pi]) \subseteq U(t_o) \cup \cdots \cup U(t_n)=U; \\ \overline{\gamma}([0, \pi]) \subseteq W(t_o) \cup \cdots \cup W(t_n)=W; \\ \end{array} \right. \] and another partition $ 0 < s_1 < \cdots < s_{n} < s_{n+1}=\pi$ such that $ t_i < s_{i+1} < t_{i+1}$ and $\xi(s_{i+1}) \in V(t_i) \cap V( t_{i+1})$ for $ 0 \leq i \leq n-1$. Now it is easy to see that \[ C:= \{ w \in T_N S(\text{$\mathbb{H}$}) :\ \overline{\e N (tw)}(\pi)=\overline{\xi}(\pi) \}, \] where $\overline{\xi}$ is the unique lift of $\xi,$ is open and closed. Hence $\overline{\phi}$ can be extended to $-N$ and $\overline{\phi}$ is a local homeomorphism. QED \\ Now we show that a manifold with pinching $\sim \frac{3}{4}$ can be covered by two geodesic balls.
Let $(M,g)$ be a complete Hilbert manifold. Suppose that the sectional curvature satisfies the inequality $0<h \leq K \leq 1,$ where $h$ is a solution of the equation \[ \sin (\pi \sqrt{h})=\frac{\sqrt{h}}{2} \ \ (h \sim \frac{3}{4}). \] Let $p \in M$. By the Rauch theorem, on the geodesic ball of radius $\pi$ there are no conjugate points. We denote by $\Delta$ the meridian of the sphere in $T_p M$ of radius $\pi.$ Then \[ L[\Delta] \leq \frac{\pi}{\sqrt{h}}\sin {\pi \sqrt{h}} \leq \frac{\pi}{2} . \] In particular, there exists $\epsilon>0$ such that the image of any meridian in the sphere of radius $\pi - \epsilon$ is a curve with length $r\leq r_1 < r_o < \pi-\epsilon$. Furthermore, $\epsilon$ does not depend on $p.$ \begin{lemma} Let $p \in M$ and let $q \in \e p (S_{\pi- \epsilon} (T_p M))$. Then \[ M=\overline{\bd{\cal B}{p}{\pi-\epsilon}} \bigcup \overline{\bd{\cal B}{q}{\pi - \epsilon}}. \] \end{lemma} \noindent {\bf Proof:} $\ $ take $m \in M$ and let $c:[0,1] \longrightarrow M$ be a piecewise smooth curve joining $p$ and $m$. Take \[ t_o=\sup \{ t\in [0,1]:\ \exists \ \overline{c}:[0,t] \longrightarrow \overline{\bn{p}{\pi- \epsilon}} :\ \e p (\overline{c}(s))=c(s) \}. \] As in proposition \ref{s1}, $\overline{c}$ is defined at $t_o$ and $L(c_{[0,t_o]}) \geq \pi - \epsilon$. If $t_o=1$ we get our claim; otherwise $c(t_o) \in \e q (\overline{\bn{q}{r_1}}).$ Now, we define \[ t_1= \sup \{ t \geq t_o: \exists \ \overline{c}:[t_o, t] \longrightarrow \overline{\bn{q}{\pi-\epsilon}} \}. \] Now, $r_1< \pi \ - \ \epsilon$ so $t_1>t_o$ and, as before, $\overline{c} (t_1)$ is well-defined. If $t_1=1$ we get our result. Otherwise $\overline{c}(t_1) \in \e p (\overline{\bn{p}{r_1}})$ and by the Gauss lemma we get \[ L[c_{[t_o,t_1]} ] \geq r_o\ - \ r_1.
\] However, the curve $c$ has finite length, so after a finite number of steps we get that $m$ either belongs to $\overline{\bd{\cal B}{p}{\pi-\epsilon}}$ or to $\overline{\bd{\cal B}{q}{\pi - \epsilon}}.$ QED \\ Before proving the Sphere Rauch theorem, we recall that the injectivity radius of a complete Hilbert manifold is defined by \begin{small} $$i(M)= \sup \{ r>0: \e p: \bn{p}{r} \longrightarrow \bd{\cal B}{p}{r} \ \mathrm{is \ a \ diffeomorphism \ onto} \ \forall p \in M \}.$$ \end{small} \begin{thm} {\bf (Sphere Rauch theorem)} Let $(M,g)$ be a complete Hilbert manifold modeled on $\text{$\mathbb{H}$}$ such that $0< h \leq K \leq 1$, where $h$ is the solution of the equation $\sin (\pi \sqrt{h})=\frac{\sqrt{h}}{2}$. Assume also that the injectivity radius satisfies $i(M) \geq \pi.$ Then $M$ is contractible. Furthermore, if $\text{$\mathbb{H}$}$ is a separable Hilbert space then $M$ is diffeomorphic to $S(l_2 )$. \end{thm} \noindent {\bf Proof:} $\ $ we recall that an infinite-dimensional sphere is a deformation retract of the closed unit disk because, by Bessaga's theorem (see \cite{BE}), there exists a diffeomorphism from $\text{$\mathbb{H}$}$ to $\text{$\mathbb{H}$}- \{0 \}$ which is the identity outside the unit disk. When an infinite-dimensional manifold $M$ is modeled on a Banach space, $M$ is contractible if and only if $\pi_k (M)=0$ for every $k \in \text{$\mathbb{N}$}$ (see \cite{Pa2}). Let $f:S^k \longrightarrow M$ be a continuous map and let \[ H:\overline{ \bd{\cal B}{p}{ \pi - \frac{\epsilon}{2} } } \times [0,1] \longrightarrow \overline{ \bd{\cal B}{p}{ \pi - \frac{\epsilon}{2}}} \] be the homotopy between the identity map and the retraction onto the boundary. We can extend this map to $M,$ denoting it by $\tilde{H}$, by fixing the complement of $\bd{\cal B}{p}{\pi - \frac{\epsilon}{2} }$.
Then $$ \begin{array}{lll} & F: S^k \times [0,1] \longrightarrow M,& \\ & F(x,t)=\tilde{H}(f(x),t) & \\ \end{array} $$ is a homotopy between $f$ and a map $\tilde{f}:S^k \longrightarrow \overline{ \bd{\cal B}{q}{\pi - \epsilon}}$. Then $f$ is nullhomotopic. If $\text{$\mathbb{H}$}$ is separable, then by the Burghelea-Kuiper theorem (see \cite{KB}) homotopy type classifies Hilbert manifolds up to diffeomorphism, so $M$ is diffeomorphic to the sphere. QED \\ Next we prove another version of the Sphere theorem in the class of Hopf-Rinow manifolds. However, the main result in this class of Hilbert manifolds is the Topogonov theorem. Using our results from section $4,$ we shall prove it following the same idea as in \cite{CE}, page 42. First of all, we start with the following result, which may be proved as in \cite{Kl}, Proposition 2.7.11, page 224. \begin{lemma} \label{toro} Let $(M,g)$ be a Hilbert manifold with bounded sectional curvature $H \leq K \leq \Delta,$ where $H, \Delta$ are constants. Let $(\gamma_1,\gamma_2,\alpha)$ be a configuration in $M$ such that $\ga 1 (l_1) = \ga 2 (0)$ and $\alpha$ is the angle between $- \fis{\ga 1} (l_1)$ and $\fis{\ga 2} (0).$ We call such a configuration a hinge. Suppose that $\ga 1$ and $\ga 2$ are minimal geodesics with perimeter $P=l_1 \ +\ l_2 \ + \ d(\ga 1 (0),\ga 2 (l_2)) \leq \frac{2 \pi}{\sqrt{H}} \ - \ 4 \epsilon$, $\epsilon>0,$ if $H>0.$ In addition, \begin{itemize} \item [(i)] if $H \leq 0$ then $l_2 \leq \frac{\pi}{2 \sqrt{ \Delta}}$; \item [(ii)] if $H>0$ then \[ l_2 \leq \inf ( \epsilon, \frac{\sin (\sqrt {H}\epsilon) }{\sqrt{H}} \sin \frac{{\pi} \sqrt{H}}{2 \sqrt{ \Delta}}, \frac{\pi}{2 \sqrt{ \Delta}} ). \] \end{itemize} Let $(\overline{\gamma}_1,\overline{\gamma}_2,\overline{\alpha})$ be a hinge in $M^{H}$ such that $L[\ga i]=L[\gao i]$, $i=1,2$. Then \[ d(\ga 1 (0), \ga 2 (l_2)) \leq d(\gao 1 (0),\gao 2 (\lo 2)).
\] \end{lemma} \begin{thm}{\bf (Topogonov)} Let $(M,g)$ be a Hopf-Rinow manifold such that $H \leq K \leq \Delta$. Then \begin{itemize} \item [{\bf (A)}] let $(\gamma_1,\gamma_2,\gamma_3)$ be a geodesic triangle in $M$. Assume that $\ga 1,$ $\ga 3$ are minimal geodesics and, if $H >0$, that $l_2 \leq \frac{\pi}{\sqrt H}$. Then in $M^{H},$ the simply connected $2$-dimensional manifold of constant curvature $H,$ there exists a geodesic triangle $(\overline{\gamma}_1,\overline{\gamma}_2,\overline{\gamma}_3)$ such that $l_i=\overline{l}_i$ and $\alfo 1 \leq \alf 1$, $\alfo 2 \leq \alf 2$. Except in the case $H>0$ and $l_2 =\frac{\pi}{\sqrt H},$ the triangle is uniquely determined. \item [{\bf (B)}] Let $(\gamma_1,\gamma_2,\alpha)$ be a hinge in $M.$ Let $\ga 1$ be a minimal geodesic, and, if $H>0$, $l_2 \leq \frac{\pi}{\sqrt H}$. Let $(\overline{\gamma}_1,\overline{\gamma}_2,\overline{\alpha})$ be a hinge in $M^{H}$ such that $l_i =\overline{l}_i,\ i=1,2,$ and $\alpha=\overline{\alpha}$. Then \[ d(\ga 1 (0), \ga 2 (l_2)) \leq d( \gao 1 (0), \gao 2 (\lo 2)) . \] \end{itemize} \end{thm} {\bf Proof:} $\ $ the proof consists of a number of steps, as in \cite{CE}, page 43. First of all, we briefly recall some facts of the proof in \cite{CE}. Let $(\gamma_1,\gamma_2,\alpha)$ be a hinge. We call this hinge {\em small} if $\frac{1}{2} r= \max \ L[ \gamma_{i} ],\ i=1,2,$ and $\e{ \ga 2 (0)}$ is an embedding on $\bn{\ga 2 (0)}{r}$. Let $(\gamma_1,\gamma_2,\gamma_3)$ be a geodesic triangle. We call $(\gamma_1,\gamma_2,\gamma_3)$ a {\em small triangle} if any hinge of $(\gamma_1,\gamma_2,\gamma_3)$ is small. Let $(\gamma_1,\gamma_2,\gamma_3)$ be as in {\bf (A)}. We say that $(\gamma_1,\gamma_2,\gamma_3)$ is {\em thin} if $( \gamma_1,\gamma_2,\alpha_3 )$ and $( \gamma_3,\gamma_2,\alpha_1 )$ are thin hinges, i.e., each is a thin right hinge, a thin obtuse hinge, or a thin acute hinge. We briefly describe the above terminology. A {\em thin right hinge} is a hinge $(\gamma_1,\gamma_2,\alpha)$ for which the hypotheses of corollary \ref{cb} hold.
Let $(\gamma_1,\gamma_2,\alpha)$ be a hinge with $\alpha > \frac{\pi}{2}$. Let $(\overline{\gamma}_1,\overline{\gamma}_2,\overline{\alpha})$ be the corresponding hinge in $M^{H-\epsilon}$ with $L[\gao i] =L [\ga i],\ i=1,2.$ Let $\gao 3$ be a minimal geodesic from $\gao 2 (l_2)$ to $\gao 1 (0).$ Let $\overline{\sigma}:[0,l] \longrightarrow M^{H-\epsilon}$ be a geodesic starting from $\gao 2 (0)$ such that \[ \langle \fis{\overline{\sigma}}(0),\fis{\gao 2}(0) \rangle =0,\ \fis{\overline{\sigma}}(0)=\lambda_1 \fis{\gao 1}(l_1) \ + \ \lambda_2 \fis{\gao 2}(0),\ \lambda_i>0, \] and let $\overline{\sigma}(l)$ be the first point of $\overline{\sigma}$ which lies on $\gao 3.$ Let $\sigma$ be a geodesic in $M$ starting from $\ga 2 (0)$ with the same properties as $\overline{\sigma}$. We call $(\gamma_1,\gamma_2,\alpha)$ a {\em thin obtuse hinge} if $(\ga 1,\sigma,\alpha-\frac{\pi}{2})$ is a small hinge and $(\sigma,\ga 2, \frac{\pi}{2})$ is a thin right hinge. Let $(\gamma_1,\gamma_2,\alpha)$ be a hinge with $\alpha< \frac{\pi}{2}$ and let $\ga 2 (l)$ be the point of $\ga 2$ closest to $\ga 1 (0)$. Let \begin{center} $\tau=\ga 2:[0,l] \longrightarrow M,\ $ \\ $\theta=\ga 2: [l,l_2]\longrightarrow M$ \end{center} and let $\sigma:[0,k] \longrightarrow M$ be a minimal geodesic from $\ga 1 (0)$ to $\ga 2 (l)$. We call $(\gamma_1,\gamma_2,\alpha)$ a {\em thin acute hinge} if $(\ga 1,\tau,\sigma)$ is a small triangle and $(\sigma,\theta,\frac{\pi}{2})$ is a small right hinge. From step $(1)$ to step $(7)$ in \cite{CE}, they essentially prove that {\bf (B)} holds for thin right hinges, thin obtuse hinges, and thin acute hinges. The same proofs work in our context since in every step they use only the Rauch and Berger theorems, their main corollaries, and the existence of at least one minimal geodesic joining any two points. Now, we will prove it in general.
Given an arbitrary hinge $(\gamma_1,\gamma_2,\alpha)$ as in {\bf (B)}, fix $N$ and let \[ \tau_{k,l}= \ga 2:[\frac{kl_2}{N},\frac{(k+l)l_2}{N}] \longrightarrow M, \] where $k,l$ are integers with $0 \leq k,l \leq N.$ Let $\sigma_k$ be the minimal geodesic from $\ga 1 (0)$ to $\ga 2 (\frac{k l_2}{N}).$ As in \cite{CE}, page 48, we shall prove that any triangle $T_{k,l}=(\sigma_k,\tau_{k,l},\sigma_{k+l})$ is a geodesic triangle. If we prove that there exists $N$ such that any $T_{k,1}$ is thin, we may continue the proof as in \cite{CE}, establishing our claim. If both $H$ and $\Delta$ are nonpositive, the result follows easily, while if $H \leq 0$ and $\Delta>0$ then it is enough to choose $N$ such that $(i)$ in lemma \ref{toro} holds. Then we shall assume $H>0$, and by the Berger-Topogonov theorem we may also suppose that $ d (\ga 1 (0), \ga 2 (t) ) < \frac{\pi}{\sqrt{H}} .$ Indeed, if $d (\ga 1 (0), \ga 2 (t))= \frac{\pi}{\sqrt{H}}$ for some $t$, then the manifold must be isometric to a sphere, concluding our proof. Using the compactness of $\ga 2$, there exists $\epsilon_o>0$ such that for every $\epsilon \leq \epsilon_o$ there exists $\eta (\epsilon)$ such that for every $ s,t \in [0, l_2]$ we get \[ d(\gamma_1(0),\gamma_2 (t)) \ + \ d(\gamma_1(0), \gamma_2(s)) \leq \frac{2 \pi}{\sqrt{H - \epsilon}}\ - 5\eta(\epsilon). \] Hence, for every $\epsilon \leq \epsilon_o,$ we choose $N$ such that $L[ \ti{k,1}] \leq \eta(\epsilon)$ and then apply lemma \ref{toro}, comparing $M$ with $M^{H-\epsilon},$ on $(\si k,\ti{k,1},\alf k)$ and $(\si{k+1},\ti{k,1},\bi k ).$ Moreover, using again the compactness of $\ga 2$, there exists $r_o$ such that \[ \e {\ga 2 (t)}: \bn{\ga 2 (t)}{2r_o} \longrightarrow \bd{B}{\ga 2 (t)}{2r_o} \] is a diffeomorphism onto. Now it is easy to see that any geodesic triangle $T_{k,l}$ is thin.
QED \begin{cor} \label{pippo} Let $(\gamma_1,\gamma_2,\gamma_3)$ be a geodesic triangle in a Hopf-Rinow manifold such that $0<H \leq K \leq \Delta$. Then the perimeter of $(\gamma_1,\gamma_2,\gamma_3)$ is at most $\frac{2 \pi }{\sqrt{H}}.$ \end{cor} \begin{cor} \label{bt2} Let $(M,g)$ be a Hopf-Rinow manifold with $0<H \leq K \leq \Delta$. Assume that $d(M)= \frac{\pi}{\sqrt H}$ and that there exists a point $p$ such that the image of the function $q \mapsto d(p,q)$ contains $[0,\frac{\pi}{\sqrt H})$. Then $M$ is isometric to $S_{\frac{1}{\sqrt{H}}}(T_p M \times \text{$\mathbb{R}$}).$ \end{cor} \noindent {\bf Proof:} $\ $ by Ekeland's theorem (\cite{Ek}) there exists a sequence $q_n$ in $M$ such that \[ d(p,q_n) \rightarrow \frac{\pi}{\sqrt H} \] and there exists a unique geodesic $\gamma_n,$ which we shall assume parameterized on $[0,1],$ from $p$ to $q_n$. Take the hinges $(\ga n , \ga m, \alf{n,m}).$ Using the Topogonov theorem, there exists a hinge $(\gao n, \gao m, \alf{n,m})$ in $M^{H}$ such that \[ d(q_n,q_m) \leq d(\gao n (1),\gao m (1)). \] Now, $\gao n (1)$ converges, hence $q_n$ is a Cauchy sequence in $M.$ In particular, the sequence $q_n$ has a limit $q$ satisfying $d(p,q)=d(M).$ Using the Berger-Topogonov theorem, $M$ is isometric to the sphere $ S_{\frac{1}{\sqrt{H}}}(T_p M \times \text{$\mathbb{R}$})$. QED \begin{thm} {\bf Sphere theorem} Let $(M,g)$ be a Hopf-Rinow manifold such that $\frac{4}{9}<\delta \leq K \leq 1$. Assume that $i(M)\geq \pi$. Then $M$ is contractible, and if $\text{$\mathbb{H}$} $ is separable then $M$ is diffeomorphic to $S(l_2)$. \end{thm} \noindent {\bf Proof:} $\ $ since $\frac{4}{9} < \delta$, there exists $\epsilon >0$ such that \[ \frac{\pi}{\sqrt{\delta}}=\frac{3}{2}(\pi \ - \ \epsilon). \] Using the fact that $i(M)\geq \pi,$ there exist two points $p,q \in M$ such that $d(p,q)=\pi \ - \ \epsilon$.
We claim that $M$ is covered by the following geodesic balls: \[ M= \overline{\bd{\cal B}{p}{\pi-{\epsilon}}} \cup \overline{\bd{\cal B}{q}{\pi-{\epsilon}}}. \] Let $r\in M$ be such that $d(p,r) \geq \pi \ - \ \epsilon$. Using corollary \ref{pippo}, we have \[ d(q,r)\leq 2(\frac{3}{2}(\pi \ - \ \epsilon)) \ - \ 2(\pi \ - \ \epsilon) =\pi \ - \ \epsilon. \] Now, we conclude our proof as in the Sphere Rauch theorem. QED \begin{thebibliography}{10} \bibitem{AMR} {\sc {A}braham, R., Marsden, J.~E., and Ratiu, T.} \newblock {\em Manifolds, tensor analysis, and applications}. \newblock Applied Mathematical Sciences 75, Springer-Verlag, New York, 1988. \bibitem{An} {\sc Anderson, L.} \newblock {\em The {B}onnet-{M}yers theorem is true for {R}iemannian {H}ilbert manifolds}. \newblock Math. Scand. 58 (1986), 236--238. \bibitem{At1} {\sc Atkin, C.~J.} \newblock {\em Geodesic and metric completeness in infinite dimensions}. \newblock Hokkaido Math. J. 26 (1997), 1--61. \bibitem{At} {\sc Atkin, C.~J.} \newblock {\em The {H}opf-{R}inow theorem is false in infinite dimensions}. \newblock Bull. London Math. Soc. 7 (1975), 261--266. \bibitem{BE} {\sc Bessaga, C.} \newblock {\em Every infinite-dimensional {H}ilbert space is diffeomorphic with its unit sphere}. \newblock Bull. Acad. Polon. Sci. XIV (1966), 27--31. \bibitem{B} {\sc Biliotti, L.} \newblock {\em Properly discontinuous isometric actions on the unit sphere of infinite dimensional {H}ilbert spaces}. \newblock Preprint. \bibitem{BMT} {\sc Biliotti, L., Mercuri, F., and Tausk, D.} \newblock {\em A note on tensor fields in {H}ilbert spaces}. \newblock An. Acad. Brasil. Ci\^enc. 74 (2002), no. 2, 207--210. \bibitem{KB} {\sc {B}urghelea, D., and Kuiper, N.} \newblock {\em {H}ilbert manifolds}. \newblock Annals of Mathematics 90 (1969), 379--417.
\bibitem{CE} {\sc Cheeger, J., and Ebin, D.} \newblock {\em Comparison theorems in {R}iemannian geometry}. \newblock North--Holland, Amsterdam, 1975. \bibitem{Ee} {\sc Eells, J.} \newblock {\em A setting for global analysis}. \newblock Bull. Amer. Math. Soc. 72 (1966), 751--807. \bibitem{Ek} {\sc Ekeland, I.} \newblock {\em The {H}opf-{R}inow theorem in infinite dimension}. \newblock Journal of Differential Geometry 13 (1978), 287--301. \bibitem{El} {\sc El\'{\i}asson, H.} \newblock {\em Condition (C) and geodesics on {S}obolev manifolds}. \newblock Bull. Amer. Math. Soc. 77, no. 6 (1971), 1002--1005. \bibitem{Gr2} {\sc Grossman, N.} \newblock {\em {G}eodesics on {H}ilbert manifolds}. \newblock PhD thesis, University of Minnesota, 1964. \bibitem{Gr} {\sc Grossman, N.} \newblock {\em {H}ilbert manifolds without epiconjugate points}. \newblock Proc. Amer. Math. Soc. 16 (1965), 1365--1371. \bibitem{Ka} {\sc Karcher, H.} \newblock {\em Riemannian center of mass and mollifier smoothing}. \newblock Comm. on Pure and Appl. Math. XXX (1977), 509--541. \bibitem{Kl} {\sc Klingenberg, W.} \newblock {\em Riemannian geometry}. \newblock De Gruyter Studies in Mathematics, New York, 1982. \bibitem{KN} {\sc Kobayashi, S., and Nomizu, K.} \newblock {\em Foundations of differential geometry}. \newblock vol. I, Interscience Wiley, New York, 1963. \bibitem{La} {\sc Lang, S.} \newblock {\em Differential and {R}iemannian manifolds}. \newblock Third edition. Graduate Texts in Mathematics, 160, Springer-Verlag, New York, 1996. \bibitem{Mca} {\sc McAlpin, J.} \newblock {\em Infinite dimensional manifolds and {M}orse theory}. \newblock PhD thesis, Columbia University, 1965. \bibitem{Mi} {\sc Misiolek, G.} \newblock {\em {T}he exponential map on the free loop space is {F}redholm}. \newblock Geom. Funct. Anal. 7 (1997), 1--17.
\bibitem{Pa} {\sc Palais, R.} \newblock {\em {M}orse theory on {H}ilbert manifolds}. \newblock Topology 2 (1963), 299--340. \bibitem{Pa2} {\sc Palais, R.} \newblock {\em {H}omotopy theory of infinite dimensional manifolds}. \newblock Topology 5 (1966), 1--16. \bibitem{Ra} {\sc Rauch, H.~E.} \newblock {\em A contribution to differential geometry in the large}. \newblock Annals of Mathematics 54 (1951), 38--55. \bibitem{Sa} {\sc Sakai, T.} \newblock {\em Riemannian geometry}. \newblock Transl. Math. Monogr., vol. 149, AMS, Providence, RI, 1996. \end{thebibliography} \noindent Leonardo Biliotti \\ Universit\`a degli studi di Firenze \\ Dipartimento di Matematica e delle Applicazioni all'Architettura \\ Piazza Ghiberti 27 - Via dell'Agnolo 2r - 50132 Firenze (Italy) \\ e-mail:{\tt [email protected]} \\ \end{document}
\begin{document} \bstctlcite{IEEEexample:BSTcontrol} \title{Risk-Averse Self-Scheduling of Storage in Decentralized Markets \thanks{Mr. Yurdakul gratefully acknowledges the support of the German Federal Ministry of Education and Research and the Software Campus program under Grant 01IS17052. Mr. Billimoria's Doctor of Philosophy at the University of Oxford is supported by the Commonwealth PhD Scholarship, awarded by the Commonwealth Scholarship Commission of the UK (CSCUK).} } \author{\IEEEauthorblockN{Ogun Yurdakul\IEEEauthorrefmark{1}\IEEEauthorrefmark{2} and Farhad Billimoria\IEEEauthorrefmark{3} } \IEEEauthorblockA{\IEEEauthorrefmark{1}Energy Systems and Infrastructure Analysis Division, Argonne National Laboratory, Lemont, IL 60439, USA.} \IEEEauthorblockA{\IEEEauthorrefmark{2}Department of Electrical Engineering and Computer Science, Technical University of Berlin, 10623 Berlin, Germany} \IEEEauthorblockA{\IEEEauthorrefmark{3}Department of Engineering Science, University of Oxford, Oxford OX1 2JD, United Kingdom} } \maketitle \begin{abstract} Storage is expected to be a critical source of firming in low-carbon grids. A common concern raised from ex-post assessments is that storage resources can fail to respond to strong price signals during times of scarcity. While commonly attributed to forecast error or failures in operations, we posit that this behavior can be explained from the perspective of risk-averse scheduling. Using a stochastic self-scheduling framework and real-world data harvested from the Australian National Electricity Market, we demonstrate that risk-averse storage resources tend to have a myopic operational perspective, that is, they typically engage in near-term price arbitrage and chase only a few extreme price spikes and troughs, thus remaining idle in several time periods with markedly high and low prices.
This has important policy implications given the non-transparency of unit risk aversion and apparent risk in intervention decision-making. \end{abstract} \begin{IEEEkeywords} electricity markets, electricity storage, self-scheduling, risk-aversion.\end{IEEEkeywords} \section{Introduction}\label{sec1} In renewable-dominated power systems, electricity storage is expected to play an important role in maintaining system balance and resource adequacy. In decentralized markets, system operators (SOs) do not issue commitment instructions, instead relying upon full-strength price formation and iterative pre-dispatch systems to signal the need for additional or reduced resource commitment. During periods of scarcity, understanding the projected available capacity of resources to service load at intraday system stress points is critical to the SO's decision-making on whether to intervene or apply emergency protocols, such as load shedding. \par Storage resources introduce new complexity to the problem because while the resource may be operationally available, whether it can actually charge or discharge at a point in time depends upon its state of charge (driven by prior decisions) and technical constraints. As a case in point, the lack of transparency on the availability of energy-limited plants was one of the key drivers of the Australian Energy Market Operator's (AEMO) decision to suspend the National Electricity Market (NEM) in June 2022 during a period of extreme scarcity. In certain cases, storage resources were observed to not discharge during the highest prices of the day (while discharging at more muted prices earlier in the day).\footnote{Our online companion \cite{onlinecomp} provides an empirical record of many such observations of battery dispatch and energy prices for the NEM.} This seemingly non-intuitive behavior is commonly attributed to error or failure (e.g., in forecasting or operational management) of storage operations. 
Our work, however, proposes a second explanation: that of risk-averse self-scheduling under uncertainty, a concomitant of which is that such behavior could well be rational and utility-maximizing, and thus persist in energy markets. \par In the literature, risk-averse self-scheduling of thermal resources has been extensively studied. Risk-constrained self-scheduling is explored in \cite{1318695,Papavasiliou2015} given the internalization of the non-convex attributes of thermal resources. In \cite{jabr2005robust}, the conditional Value-at-Risk measure is used to develop bidding curves derived from an optimal self-schedule. For storage and in relation to decentralized markets, inter-temporal tradeoffs are considered from the perspective of look-ahead bidding to optimize state-of-charge and maximize profits \cite{wang2017look}. In \cite{kazempour2009risk}, a risk-constrained problem is formulated for a pumped storage plant to trade off between expected profit and risks in energy and ancillary service markets.\par In this paper, we work out a novel risk-averse stochastic optimization framework for the self-scheduling of storage in decentralized markets under uncertain prices. In Section \ref{sec2}, we develop the mathematical background and market architecture, leading to the precise mathematical formulation of the risk-averse scheduling problem in Section \ref{sec3}. In Section \ref{sec4}, we conduct several experiments using real-world data from the NEM and storage resources bidding therein.\par Using the insights we draw from the experiments, we lay out in Section \ref{sec4} two key contributions of this work. First, we provide a novel explanation for the seemingly non-intuitive behavior of storage. Specifically, we demonstrate how the increasing uncertainty of prices with longer horizons can lead a risk-averse decision-maker to adopt a rational but myopic approach to dispatch.
We illustrate how such a decision-maker can favor low-risk near-term dispatch decisions at more moderate prices rather than higher-reward but more uncertain peaks and troughs. Second, we present valuable insights into the sensitivity of the expected profits to the duration and the degree of risk-aversion of a storage resource. We observe that while increasing the capacity of a storage resource can significantly boost profits for a risk-neutral decision-maker, it barely makes a dent for relatively more risk-averse decision-makers. We set out concluding remarks and policy implications in Section \ref{sec5}. \section{Mathematical Background}\label{sec2} We consider a storage resource trading in a decentralized electricity market. The adopted market setting does not include a centrally organized short-term forward market for energy and relies solely on the real-time market (RTM) for the spot trading of energy, settled under a marginal pricing scheme\footnote{In this paper, we focus on energy prices and do not consider, at this stage, ancillary service or other spot markets for which storage is eligible.}. We assume that the resource acts as a price-taker, that is, it has no capability to exercise market power or alter market prices. \par We denote by $k$ the index of each time period for which the RTM is cleared and by $K$ the total number of time periods in the problem horizon. We represent by $\kappa$ the duration of each time period $k$ in minutes, which is the smallest indecomposable unit of time in our analysis, during which we assume the system conditions hold constant. Using our notation, an RTM cleared for a day at half-hourly intervals would correspond to $\kappa=30$ min and $K=48$. Define the set $\mathscr{K} \coloneqq \{k \colon k = 1,\ldots,K\}$.\par We denote by $\tilde{\lambda}_k, k \in \mathscr{K},$ the uncertain energy price in time period $k$ at the transmission system bus or zone at which the storage resource is located.
We construct the vector $\boldsymbol{\tilde{\lambda}}\coloneqq [\tilde{\lambda}_{1}\cdots\tilde{\lambda}_{k} \cdots \tilde{\lambda}_{K}]^{\mathsf{T}}$. We assume that the SO determines and publishes pre-dispatch energy prices over the $K$ time periods in order to provide market participants and itself with advance information necessary to plan the physical operation of the power system. We write the relation $\boldsymbol{\tilde{\lambda}} \coloneqq \boldsymbol{\bar{\lambda}} + \boldsymbol{\tilde{\epsilon}}$, where $\boldsymbol{\bar{\lambda}}$ denotes the vector of pre-dispatch energy prices and $\boldsymbol{\tilde{\epsilon}}$ is a random vector of forecast errors representing the difference between the pre-dispatch and market-clearing prices.\par Typically, the uncertain forecast error exhibits a greater degree of variance with an extending horizon \cite{onlinecomp}. While the forecast error between pre-dispatch and market-clearing prices is likely small for the periods close to the dispatch interval, the prices that eventuate may significantly deviate from the pre-dispatch prices for forecast horizons well away from the actual dispatch time. We use a set of scenarios to model $\boldsymbol{\tilde{\epsilon}}$, where $\boldsymbol{{\epsilon}}^{\omega}$ denotes the vector of forecast errors in scenario $\omega \in \Omega$ and ${\pi}^{\omega}$ denotes its associated probability of realization. We define the vector of locational marginal prices (LMPs) for each scenario by $\boldsymbol{{\lambda}}^{\omega} \coloneqq \boldsymbol{\bar{\lambda}} + \boldsymbol{{\epsilon}}^{\omega}$ and write $\boldsymbol{{\lambda}}^{\omega} \coloneqq [{\lambda}_{1}^{\omega}\cdots {\lambda}_{k}^{\omega} \cdots {\lambda}_{K}^{\omega}]^{\mathsf{T}}$, where ${\lambda}_{k}^{\omega}$ denotes the energy price in time period $k$ of scenario $\omega$. \par We next turn to the mathematical modeling of storage.
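Before turning to the storage model, the scenario construction above can be made concrete with a toy sketch. All prices, probabilities, and error scenarios below are purely hypothetical illustrations (not the paper's NEM data); the only structure taken from the text is $\boldsymbol{\lambda}^{\omega} = \boldsymbol{\bar{\lambda}} + \boldsymbol{\epsilon}^{\omega}$ with probabilities $\pi^{\omega}$:

```python
# Toy illustration of the scenario price model lambda^w = lam_bar + eps^w.
# All numbers are hypothetical and not taken from the paper's NEM data.
K = 4                                        # time periods
lam_bar = [55.0, 60.0, 300.0, 70.0]          # pre-dispatch prices ($/MWh)

# Forecast-error scenarios; the spread widens with the forecast horizon,
# mirroring the growing gap between pre-dispatch and market-clearing prices.
eps = {
    "w1": [1.0, -5.0, 150.0, -40.0],
    "w2": [-1.0, 4.0, -120.0, 10.0],
    "w3": [0.0, 1.0, -30.0, 30.0],
}
pi = {"w1": 0.3, "w2": 0.3, "w3": 0.4}       # scenario probabilities

assert abs(sum(pi.values()) - 1.0) < 1e-12   # probabilities sum to one

# Scenario prices lambda_k^w and the expected price in each period.
lam = {w: [lam_bar[k] + e[k] for k in range(K)] for w, e in eps.items()}
expected_price = [sum(pi[w] * lam[w][k] for w in lam) for k in range(K)]
```

Note how the error magnitudes grow from period 1 to period 4, encoding the horizon-dependent variance discussed above.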
We denote by $p^{c}_k$ and $p^{d}_k$ the charging and discharging power of the storage resource in time period $k$ with maximum values $p^{c}_{M}$ and $p^{d}_{M}$, respectively. To ensure the mutual exclusivity of the charging and discharging modes, and to reduce the computational burden of the problem, we draw upon the single binary variable storage formulation presented in \cite{ychen} and enforce the following constraints: \begin{IEEEeqnarray}{l'l'l} p^{d}_k - p_{M}^{d} u_k \leq 0,& p^{c}_k - p_{M}^{c} (1 - u_k) \leq 0 & \forall k \in \mathscr{K},\label{plim}\\ u_k \in \{0, 1\}, & p^{c}_k,p^{d}_k\geq 0 & \forall k \in \mathscr{K}. \label{bin} \end{IEEEeqnarray} We denote the efficiency of the storage unit by $\eta$ (assuming symmetry between charging and discharging efficiencies). We represent by $E_k$ the stored energy level at the end of the time period $k$ and write the intertemporal operational constraints: \begin{IEEEeqnarray}{l;l} \hspace{-0.5cm}E_{k} = E_{k-1} - \frac{1}{\eta} p_k^{d} \kappa \frac{1\text{ h}}{60\text{ min}} + {\eta} p_k^{c} \kappa \frac{1\text{ h}}{60\text{ min}} &\forall k \in \mathscr{K} \setminus \{1\},\label{intertempgen}\\ \hspace{-0.5cm}E_{k} = E_{o} - \frac{1}{\eta} p_k^{d} \kappa \frac{1\text{ h}}{60\text{ min}} + {\eta} p_k^{c} \kappa \frac{1\text{ h}}{60\text{ min}} & \forall k \in \{1\},\label{intertempini} \end{IEEEeqnarray} where $E_{o}$ denotes the initial stored energy level at the beginning of the problem horizon. The stored energy level in each period $k$ is bounded from above and below by $E_{M}$ and $E_{m}$, respectively: \begin{IEEEeqnarray}{c} E_{m} \leq E_k \leq E_{M} \hspace{.3cm}\forall k \in \mathscr{K}.\label{enl} \end{IEEEeqnarray} \par In the next section, we harness the mathematical models laid out in this section to work out a framework for the risk-averse self-scheduling problem of a storage resource.
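The storage constraints above can be sanity-checked numerically. The following sketch verifies a candidate schedule against the power limits, mutual exclusivity, intertemporal balance, and energy bounds; the helper `check_schedule` and all parameter values are our own illustrative assumptions, not part of the paper's formulation:

```python
# Illustrative feasibility check of a storage schedule against the
# charging/discharging limits, intertemporal balance, and energy bounds.
# All parameter values are hypothetical; this helper is not from the paper.
def check_schedule(p_c, p_d, u, E0, eta, kappa, pc_max, pd_max, E_min, E_max):
    """Return the stored-energy trajectory E_k, or raise if infeasible."""
    h = kappa / 60.0                         # period length kappa in hours
    E, traj = E0, []
    for pc, pd, uk in zip(p_c, p_d, u):
        assert uk in (0, 1) and pc >= 0.0 and pd >= 0.0
        # mutual exclusivity of charging and discharging via u_k
        assert pd <= pd_max * uk and pc <= pc_max * (1 - uk)
        E = E - pd * h / eta + eta * pc * h  # intertemporal balance
        assert E_min <= E <= E_max           # stored-energy bounds
        traj.append(E)
    return traj

# Charge for one half-hour period, then discharge (eta = 0.9, kappa = 30 min).
traj = check_schedule(p_c=[10.0, 0.0], p_d=[0.0, 4.0], u=[0, 1],
                      E0=5.0, eta=0.9, kappa=30,
                      pc_max=10.0, pd_max=10.0, E_min=0.0, E_max=20.0)
```

The asymmetry in the update (multiply by $\eta$ when charging, divide when discharging) is exactly what makes round trips lossy and motivates arbitraging only sufficiently wide price spreads.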
\section{Risk-Averse Self-Scheduling} \label{sec3} Absent a short-term forward market, the storage resource is directly exposed to the uncertainty in the RTM prices. Many studies in the literature consider a risk-neutral market participant, for whom the storage resource may bring about large losses in some scenarios as long as these are offset by even greater gains in other scenarios. Such studies, however, do not cater to risk-averse decision-makers, the focus of this work, who prefer to ward off large losses independently of potential profits. \par A widely used measure for incorporating risk into decision-making is the Value-at-Risk (VaR). For a specified risk confidence level $\alpha$ in $(0,1)$, the $\alpha$-VaR is an upper estimate of losses that is exceeded with probability $1-\alpha$. We denote the $\alpha$-VaR of the loss associated with a decision $\boldsymbol{x}$ by $\zeta_{\alpha}(\boldsymbol{x})$. Despite presenting an intuitive representation of losses, VaR exhibits several undesirable properties, including its disregard of the losses suffered beyond $\zeta_{\alpha}(\boldsymbol{x})$, its non-coherence, and its non-convexity when computed using scenarios. Instead, we draw upon the CVaR measure to manage risk in our framework. \par For continuous distributions, the $\alpha$-CVaR of the loss for a decision $\boldsymbol{x}$, which we denote by $\phi_{\alpha}(\boldsymbol{x})$, is the expected loss given that the loss is greater than or equal to $\zeta_{\alpha}(\boldsymbol{x})$. The definition of CVaR is subtler for discrete distributions, which is the case in our framework since we rely on scenarios to represent the uncertain prices. Rockafellar and Uryasev \cite{2002} define $\phi_{\alpha}(\boldsymbol{x})$ for general distributions as the weighted average of $\zeta_{\alpha}(\boldsymbol{x})$ and the expected loss strictly exceeding $\zeta_{\alpha}(\boldsymbol{x})$.
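The Rockafellar--Uryasev characterization also yields a direct way to compute the $\alpha$-CVaR of a discrete loss distribution: the objective $\zeta + \frac{1}{1-\alpha}\,\mathbb{E}\big[(L-\zeta)^{+}\big]$ is piecewise linear and convex in $\zeta$, with breakpoints at the observed losses, so scanning the scenario losses suffices. A minimal sketch (our illustration, not the paper's implementation):

```python
def cvar(losses, probs, alpha=0.95):
    """alpha-CVaR of a discrete loss distribution via the
    Rockafellar-Uryasev minimization. The piecewise-linear objective
    attains its minimum at one of the observed loss values, so it is
    enough to evaluate it at each scenario loss."""
    best = float("inf")
    for zeta in losses:
        tail = sum(p * max(l - zeta, 0.0) for l, p in zip(losses, probs))
        best = min(best, zeta + tail / (1.0 - alpha))
    return best
```

For a uniform loss distribution over $\{1,2,3,4\}$ and $\alpha=0.5$, this returns the mean of the worst half, $3.5$.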
\par From a mathematical standpoint, CVaR presents several appealing features. Pflug \cite{pflug} shows that it is a coherent risk measure. Most notably, $\phi_{\alpha}(\boldsymbol{x})$ can be efficiently computed by minimizing a piecewise linear and convex function \cite[Theorem 10]{2002}, which can be cast as a linear programming (LP) problem by introducing an additional variable. \par A key thrust of our framework is to hedge the decision-maker against the risk of incurring high charging costs. For an associated confidence level $\alpha$, the $\alpha$-CVaR of the uncertain charging cost over the problem horizon can be evaluated by solving the following LP problem: \begin{IEEEeqnarray}{l'l} \hspace{0.0cm} \underset{z^{\omega}, \zeta}{\text{minimize}} & \mathcal{R}_{\alpha}(\boldsymbol{x}) \coloneqq \zeta+ \frac{1}{1-\alpha}\sum_{\omega \in \Omega} \pi^{\omega}z^{\omega} ,\label{cvaro}\\ \hspace{0.0cm}\text{subject to} & z^{\omega} \geq \sum_{k \in \mathscr{K}} {\lambda}^{\omega}_k p_k^{c} - \zeta,\; z^{\omega} \geq 0 \quad \forall \omega \in \Omega, \label{cvarc1} \end{IEEEeqnarray} where the decision vector $\boldsymbol{x}$ succinctly represents the storage variables $p_k^{c}$, $p_k^{d}$, and $E_k$ for all $k$ in $\mathscr{K}$, the variable $\zeta$, and the auxiliary variables $z^{\omega}$ introduced for each $\omega$ in $\Omega$; $\pi^{\omega}$ denotes the probability of scenario $\omega \in \Omega$. \par In conjunction with minimizing the risk of incurring high charging costs,\footnote{We would be remiss if we did not set forth that the resource faces a risk while discharging as well, that of recording low revenues if the RTM prices materialize at significantly lower levels than their pre-dispatch counterparts. We choose to omit this risk in this paper and defer it to our future work.
} the decision-maker seeks to maximize the expected profits of the resource over the $K$ time periods: \begin{IEEEeqnarray}{l} \mathcal{P}(\boldsymbol{x}) \coloneqq \sum_{\omega \in \Omega} \pi^{\omega} \Big[ \sum_{k=1}^{K} \lambda^{\omega}_k \big(p^{d}_k-p^{c}_k\big) \Big].\label{profit} \end{IEEEeqnarray} We adjust the trade-off between these seemingly conflicting objectives by minimizing a weighted combination of $\mathcal{R}_{\alpha}(\boldsymbol{x})$ and $\mathcal{P}(\boldsymbol{x})$ subject to the storage constraints laid out in Section \ref{sec2} and the CVaR constraints presented in this section. The risk-averse self-scheduling (RASS) problem is expressed as: \begin{IEEEeqnarray}{l'll} \text{RASS}: & \text{minimize} \;&\hspace{0.5cm} -\mathcal{P}(\boldsymbol{x}) + \beta \mathcal{R}_{\alpha}(\boldsymbol{x}),\nonumber\\ & \text{subject to} \;&\hspace{0.5cm}\eqref{plim}\text{--}\eqref{enl}, \eqref{cvarc1}. \nonumber \end{IEEEeqnarray} The weight parameter $\beta \in [0, \infty) $ tunes the decision-maker's degree of risk-aversion. While increasing values of $\beta$ underscore the decision-maker's desire to mitigate risk, driving $\beta$ down toward zero represents a more risk-neutral decision-maker. Setting $\beta=0$ implies that the sole objective of the decision-maker is to maximize her profits---independent of the risk that her decisions entail.\par The RASS problem is solved on a rolling-window basis with a shrinking horizon. At the outset, it is solved for the time periods $k=1\cdots K$ before the uncertain price for any of the time periods is revealed, yet the optimal decisions for only the first time period are implemented. After the RTM price for $k=1$ is observed, the RASS problem is solved again for $k=2\cdots K$, this time implementing the optimal decision for only $k=2$, and so forth. The process repeats until the RASS problem is solved for $k=K$ with a single-period horizon.
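The shrinking-horizon procedure can be summarized in a short control loop. In the sketch below (our illustration; `solve_rass` and `observe_price` are hypothetical hooks standing in for the optimization model and the RTM price feed), only the binding first-period decision of each window is implemented, and its resulting energy level seeds the next window:

```python
def rolling_schedule(K, solve_rass, observe_price, E0):
    """Shrinking-horizon loop: at each step solve the RASS problem over
    the remaining periods, implement only the first (binding) decision,
    and carry the resulting stored energy into the next window."""
    E, implemented = E0, []
    for k in range(1, K + 1):
        plan = solve_rass(start=k, E=E)   # decisions for periods k..K
        pc, pd, E = plan[0]               # keep only the binding step
        implemented.append((k, pc, pd, E))
        observe_price(k)                  # RTM price for period k revealed
    return implemented
```

With $K=48$ half-hour periods, this solves 48 progressively smaller instances of the RASS problem over a day.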
The binding decisions of each rolling window are passed along to the subsequent window by explicitly enforcing that the stored energy level at the end of the first, binding time period of each window be equal to the initial stored energy level at the beginning of the ensuing window. \section{Case Study and Results}\label{sec4} In this section, we conduct two case studies to gain insights into the self-scheduling of a storage resource under different pre-dispatch price signals. The price data of both case studies are from the NEM. As spelled out below, we use the actual pre-dispatch prices observed in the NEM on two representative days to form the vector of pre-dispatch prices in our experiments. To construct the scenarios $\boldsymbol{{\lambda}}^{\omega},\,\omega \in \Omega$, we draw upon the historical differences between the pre-dispatch and RTM prices that eventuated in the NEM across 2019, yielding 11,680 observations, from which we randomly select 100 observations to form the scenarios of each case study. The price data have a temporal granularity of 30 min, that is, $\kappa=30$ min and $K=48$. Commensurate with the price data, we harness the data of the storage resources currently bidding in the NEM \cite{AEMO2022} in our experiments. Unless otherwise stated, we pick the risk confidence level as $\alpha=0.95$ and the storage efficiency as $\eta=0.85$ in all experiments. The data and the source code of all experiments are furnished in the online companion \cite{onlinecomp}. \subsection{Case Study I}\label{sec4a} The storage data of Case Study I are those of the Victorian Big Battery, a grid-connected storage resource in the Australian state of Victoria with a charging/discharging power limit of 300.0 MW and an energy storage capacity of 450.0 MWh.
We use the pre-dispatch prices in Victoria for June 12, 2022, which constitutes the last day before the cumulative price threshold was exceeded in Victoria, triggering the onset of a series of market interventions that culminated in AEMO suspending the NEM on June 15, 2022. We start out by solving the RASS problem for $\beta = 0$ and $\beta=0.4$. We observe from Fig. \ref{c1b0} that the net discharging power (i.e., discharging power less charging power) under $\beta=0$, by and large, closely follows the pre-dispatch prices, effectively exploiting price differences across three time windows: the first between $k=11$ and $k=19$, the second between around $k=24$ and $k=36$, and the third around the price spike at $k=46$.\par \begin{figure} \caption{Case Study I optimal storage dispatch decisions for $\beta=0$. Each time period $k$ has a duration of 30 minutes.} \label{c1b0} \end{figure} We note from Fig. \ref{c1b04} that whereas the risk-neutral decision-maker ($\beta=0$) waits until the price reaches the early-hour minimum at $k=11$ to start charging, the risk-averse decision-maker ($\beta=0.4$) charges at its maximum limit in the very first two time periods, during which the pre-dispatch price is higher than that at $k=11$. We attribute this seemingly counter-intuitive behavior to the fact that when the RASS problem is solved at the beginning of the horizon, charging at $k=1$ entails a lower risk compared to $k=11$, as the market-clearing price for the initial time periods is expected to hover closely around the pre-dispatch price. Since the uncertainty in the forecast error increases with the length of the look-ahead period, the risk-averse decision-maker is driven to store more energy at the beginning of the horizon vis-\`a-vis the risk-neutral counterpart.
\par \begin{figure} \caption{Case Study I optimal storage dispatch decisions for $\beta=0.4$.} \label{c1b04} \end{figure} In this light, the risk-averse decision-maker incurs a greater \textit{lost opportunity cost} by storing a higher level of energy at $k=1$ and $k=2$ and discharging very little before $k=11$. This is because, by doing so, it reaches its maximum capacity at $k=12$ and foregoes its ability to store energy when the prices are around their lowest ebb of the day at $k=13$ and $k=14$. In contrast, the risk-neutral decision-maker manages to store 195 MWh of energy at $k=13$ and $k=14$. Tellingly, in order to store energy when the prices reach the minimum of the PM hours at $k=27$, the risk-averse decision-maker winds up discharging at $k=25$. Although it could have well discharged between $k=15$ and $k=22$ at higher prices and could have recorded greater revenues, it prefers to discharge at a time period closer to that during which it charges again. Indeed, the dispatch decisions under $\beta=0.4$ are overall marred by a myopic focus. The risk-averse decision-maker exploits price differences primarily over extremely short time windows, such as between $k=10$ and $k=11$ or between $k=25$ and $k=27$, unless the price rises significantly, as around $k=46$. The prevailing myopic focus under $\beta=0.4$ can be ascribed to the increasing uncertainty in the forecast error over longer horizons, driving the risk-averse decision-maker to arbitrage primarily between shorter windows. A ramification of this behavior is that the resource fails to respond to pre-dispatch price signals at time periods during which the system is in dire need. For instance, after the pre-dispatch price reaches A\$481.1/MWh at $k=33$ (more than a sevenfold increase from that at $k = 27$), the risk-averse decision-maker does not discharge any energy.
Similarly, when the pre-dispatch price precipitously falls around $k=41$, the risk-neutral decision-maker stores 352.9 MWh, whereas the risk-averse decision-maker does not charge at all, failing to respond to the available 65.5\% price drop. As a result, after the price spikes at $k=46$, the risk-neutral decision-maker discharges 176.5 MWh more energy compared to the risk-averse decision-maker, greatly aiding the SO during a period of extreme scarcity.\par \begin{figure} \caption{Case Study I optimal net discharging levels (i.e., discharging less charging power) under different values of the weight parameter $\beta$.} \label{c1a} \end{figure} Fig. \ref{c1a} makes it evident that the so-called myopic focus of the decision-maker becomes more conspicuous under increasing values of $\beta$. Around the price spike at $k=19$, the storage resource discharges the highest level of energy under $\beta=0$, followed by $\beta=0.1$ and $\beta=0.2$, whereas no energy is discharged under $\beta=0.3$ and $\beta=0.4$. These decisions grant the relatively less conservative decision-makers ($\beta=0$, $\beta=0.1$, and $\beta=0.2$) the capability to store more energy during the price drop around $k=24$ compared to those under $\beta=0.3$ and $\beta=0.4$. These observations are echoed in the dispatch decisions throughout the rest of the day. For instance, as $\beta$ is reduced, the storage resource manages to more closely follow the pre-dispatch price signal before and around the sharp rise at $k=46$. Under decreasing values of $\beta$, the storage resource discharges a higher level of energy around $k=38$, gaining in turn the capability to store more energy during the dip in prices after $k=41$, which it can discharge when the pre-dispatch price abruptly climbs at $k=46$. \subsection{Case Study II}\label{sec4b} We next examine the battery dispatch decisions for a day in which the pre-dispatch prices are highly volatile.
For this purpose, we draw upon the pre-dispatch prices in South Australia for January 16, 2019. The storage data are taken from that of the ESCRI storage resource, which is connected to the grid in South Australia and has a charging/discharging power limit of 30 MW and a capacity of 8 MWh.\par \begin{figure} \caption{Case Study II optimal net discharging levels under different values of the weight parameter $\beta$ for $\alpha=0.95$.} \label{c2a} \end{figure} We begin by solving the RASS problem by increasing $\beta$ from $0$ to $0.5$ in $0.1$ increments. The results in Fig. \ref{c2a} show that the relatively more risk-neutral cases ($\beta=0$ and $\beta=0.1$) closely follow the pre-dispatch price signals, charging/discharging at all of the seven price troughs/peaks of the day. Under $\beta=0.2$ and $\beta=0.3$, however, the storage resource prefers to arbitrage between only six and four time windows, respectively, whereas under $\beta=0.4$, it capitalizes on only the widest price spread, charging and discharging once throughout a highly volatile day. Most notably, under $\beta=0.4$, the resource relinquishes the chance to take advantage of a $56.7\%$ price difference between $k=22$ and $k=23$ when the price falls precipitously, yet changes course three periods later and records a $126.7\%$ rise between $k=23$ and $k=26$.\par \begin{figure} \caption{Total expected profits under different values of $\beta$ and $E_M$.} \label{c2hs} \end{figure} We observe that the storage fails to reach its maximum charging and discharging power in the simulations, because if it were to sustain its maximum charging power over 30 minutes, it would exceed its total energy capacity. As such, we explore how the recorded profits would have evolved had the storage resource had a different capacity. To this end, we repeat the experiments by varying $E_M$ from $4.0$ MWh to $24.0$ MWh with $4.0$ MWh increments. We note from Fig. 
\ref{c2hs} that, across most values of $\beta$, the total expected profits rise under growing values of $E_M$, which can be ascribed to an increased ability to leverage price differences under a larger storage capacity. As $\beta$ increases, however, the profits seem to be plagued by diminishing returns, and indeed plateau at a certain $E_M$ level as $\beta$ increases beyond $0.2$. These observations bring home that risk attitudes (via increasing $\beta$ values) can create higher hurdles to notching greater profits, which are not necessarily mitigated by a larger storage capacity.\par \begin{figure} \caption{Total expected profits under different values of $\alpha$ and $\beta$.} \label{c2h} \end{figure} In Fig. \ref{c2h}, we examine the influence of the risk confidence level $\alpha$ as well as $\beta$ on the total expected profits over the $K$ time periods. Note that driving down $\alpha$ toward zero brings $\mathcal{R}_{\alpha}(\boldsymbol{x})$ toward the expected value of the charging costs, signifying a more risk-neutral decision-maker. In contrast, increasing $\alpha$ toward one drives $\mathcal{R}_{\alpha}(\boldsymbol{x})$ toward the highest value the costs can take, representing a more conservative decision-maker. Fig. \ref{c2h} bears out the diminished capability of the risk-averse decision-maker to respond to pre-dispatch price signals and to arbitrage, manifesting itself through dwindling profits under increasing values of $\beta$ and $\alpha$. These results reaffirm the well-known relationship between risk and profit, whereby expected profits climb under less conservative decisions, characterized by values of $\alpha$ and $\beta$ approaching zero. \section{Conclusion and Policy Implications}\label{sec5} We model the self-scheduling problem of a risk-averse storage resource in decentralized markets. Four core results are gleaned from the analysis.
First, risk aversion tends to lead to a myopic approach to dispatch, most notably evident in seeking to arbitrage between near-term intervals rather than taking charging positions in the expectation of more profitable but riskier revenues much later in the look-ahead period. Second, risk aversion tends to lead to reduced profits because of the conservative operating stance adopted by the resource. Third, this conservatism can mean that the resource forgoes opportunities to dispatch at peak spot prices, which are times of highest system scarcity. Finally, and importantly, while a higher energy storage capacity can mitigate low profits, increasing the storage capacity has virtually no impact on profitability at higher risk-aversion levels. There are important policy implications that flow from this study. First, SOs need to be acutely aware of the role of risk aversion in storage resource dispatch and bidding. In particular, higher levels of risk aversion may mean that storage resources are not available for dispatch at times when scarcity price signals are most evident. As risk aversion itself is a non-observable quantity, this introduces uncertainty into an SO's short-term and medium-term reliability forecasts, and risk into its decision-making on market intervention. This points to an increasing need for system transparency on key storage parameters, including state-of-charge, so that SOs have a better view of available storage capacity. Finally, it points to the need for the SO itself to develop a set of tools that quantify such risks and allow for better risk-adjusted decisions on intervention and other system operations during scarcity. \end{document}
\begin{document} \title{Second-Order Hyperproperties} \author{Raven Beutner\orcidlink{0000-0001-6234-5651} \and Bernd Finkbeiner\orcidlink{0000-0002-4280-8441} \and Hadar Frenkel\orcidlink{0000-0002-3566-0338} \and Niklas Metzger\orcidlink{0000-0003-3184-6335}} \institute{CISPA Helmholtz Center for Information Security,\\ Saarbr\"ucken, Germany\\ \email{\{raven.beutner,finkbeiner,hadar.frenkel,\\niklas.metzger\} @cispa.de}} \authorrunning{R. Beutner, B. Finkbeiner, H. Frenkel, N. Metzger} \maketitle \begin{abstract} We introduce \sohyperltl, a temporal logic for the specification of hyperproperties that allows for second-order quantification over sets of traces. Unlike first-order temporal logics for hyperproperties, such as HyperLTL, \sohyperltl\ can express complex epistemic properties like common knowledge, Mazurkiewicz trace theory, and asynchronous hyperproperties. The model checking problem of \sohyperltl\ is, in general, undecidable. For the expressive fragment where second-order quantification is restricted to smallest and largest sets, we present an approximate model-checking algorithm that computes increasingly precise under- and overapproximations of the quantified sets, based on fixpoint iteration and automata learning. We report on encouraging experimental results with our model-checking algorithm, which we implemented in the tool~\texttt{HySO}. \end{abstract} \section{Introduction}\label{sec:intro} About a decade ago, Clarkson and Schneider coined the term \emph{hyperproperties}~\cite{DBLP:journals/jcs/ClarksonS10} for the rich class of system requirements that relate multiple computations. In their definition, hyperproperties generalize trace properties, which are sets of traces, to \emph{sets of} sets of traces. This covers a wide range of requirements, from information-flow security policies to epistemic properties describing the knowledge of agents in a distributed system. 
Missing from Clarkson and Schneider's original theory was, however, a concrete specification language that could express customized hyperproperties for specific applications and serve as the common semantic foundation for different verification methods. A first milestone towards such a language was the introduction of the temporal logic HyperLTL~\cite{ClarksonFKMRS14}. HyperLTL extends linear-time temporal logic (LTL) with quantification over traces. Suppose, for example, that an agent $i$ in a distributed system observes only a subset of the system variables. The agent \emph{knows} that some LTL formula $\varphi$ is true on some trace $\pi$ iff $\varphi$ holds on \emph{all} traces $\pi'$ that agent $i$ cannot distinguish from $\pi$. If we denote the indistinguishability of $\pi$ and $\pi'$ by $\pi \sim_i \pi'$, then the property that \emph{there exists a trace $\pi$ where agent~$i$ knows $\varphi$} can be expressed as the HyperLTL formula \begin{align*} \exists \pi. \forall \pi'\mathpunct{.} \pi \sim_i \pi' \rightarrow \varphi(\pi'), \end{align*} where we write $\varphi(\pi')$ to denote that the trace property $\varphi$ holds on trace $\pi'$. While HyperLTL and its variations have found many applications~\cite{FinkbeinerRS15,10.1145/3127041.3127058,DimitrovaFT20}, the expressiveness of these logics is limited, leaving many widely used hyperproperties out of reach. A prominent example is \emph{common knowledge}, which is used in distributed applications to ensure simultaneous action~\cite{DBLP:books/mit/FHMV1995,HM}. 
Common knowledge in a group of agents means that the agents not only know \emph{individually} that some condition $\varphi$ is true, but that this knowledge is ``common'' to the group in the sense that each agent \emph{knows} that every agent \emph{knows} that $\varphi$ is true; on top of that, each agent in the group \emph{knows} that every agent \emph{knows} that every agent \emph{knows} that $\varphi$ is true; and so on, forming an infinite chain of knowledge. The fundamental limitation of HyperLTL that makes it impossible to express properties like common knowledge is that the logic is restricted to \emph{first-order quantification}. HyperLTL, then, cannot reason about sets of traces directly, but must always do so by referring to individual traces that are chosen existentially or universally from the full set of traces. For the specification of an agent's individual knowledge, where we are only interested in the (non-)existence of a single trace that is indistinguishable and that violates $\varphi$, this is sufficient; however, expressing an infinite chain, as needed for common knowledge, is impossible. In this paper, we introduce \sohyperltl{}, a temporal logic for hyperproperties with \emph{second-order quantification} over traces. In \sohyperltl{}, the existence of a trace $\pi$ where the condition $\varphi$ is common knowledge can be expressed as the following formula (using slightly simplified syntax): \begin{align*} \exists \pi.\, \exists X \mathpunct{.}\ \pi \in X \land \Big( \forall \pi' \in X.\, \forall \pi''.\ \big(\bigvee_{i=1}^n \pi' \sim_i \pi'' \big) \rightarrow \pi'' \in X \Big)\, \land\, \forall \pi' \in X\mathpunct{.} \varphi({\pi'}).
\end{align*} The second-order quantifier $\exists X$ postulates the existence of a set $X$ of traces that (1) contains $\pi$; that (2) is closed under the observations of each agent, i.e., for every trace $\pi'$ already in $X$, all other traces $\pi''$ that some agent~$i$ cannot distinguish from $\pi'$ are also in $X$; and that (3) only contains traces that satisfy $\varphi$. The existence of $X$ is a necessary and sufficient condition for $\varphi$ being common knowledge on $\pi$. In the paper, we show that \sohyperltl{} is an elegant specification language for many hyperproperties of interest that cannot be expressed in HyperLTL, including, in addition to epistemic properties like common knowledge, also Mazurkiewicz trace theory and asynchronous hyperproperties. The model checking problem for \sohyperltl{} is much more difficult than for HyperLTL. A HyperLTL formula can be checked by translating the LTL subformula into an automaton and then applying a series of automata transformations, such as self-composition to generate multiple traces, projection for existential quantification, and complementation for negation~\cite{FinkbeinerRS15,BeutnerF23}. For \sohyperltl{}, the model checking problem is, in general, undecidable. We introduce a method that nevertheless obtains sound results by over- and underapproximating the quantified sets of traces. For this purpose, we study \fphyperltl{}, a fragment of \sohyperltl{} in which we restrict second-order quantification to the smallest or largest set satisfying some property. For example, to check common knowledge, it suffices to consider the \emph{smallest} set $X$ that is closed under the observations of all agents. This smallest set $X$ is defined by the (monotone) fixpoint operation that adds, in each step, all traces that are indistinguishable from some trace already in $X$.
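This fixpoint is easy to phrase operationally. The sketch below (our illustration over a finite trace universe, not part of the paper's algorithm; `indist` is a hypothetical list of per-agent indistinguishability predicates) computes the smallest closed set by Kleene iteration:

```python
def closure(seed, traces, indist):
    """Smallest set X containing `seed` and closed under the
    indistinguishability relations of all agents: iterate the monotone
    operator that adds every trace some agent cannot tell apart from a
    trace already in X, until a fixpoint is reached."""
    X = {seed}
    changed = True
    while changed:
        changed = False
        for t in list(X):
            for rel in indist:
                for u in traces:
                    if u not in X and rel(t, u):
                        X.add(u)
                        changed = True
    return X
```

On a finite universe the iteration terminates after at most $|{\cal T}|$ rounds, since each round either adds a trace or reaches the fixpoint.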
We develop an approximate model checking algorithm for \fphyperltl{} that uses bidirectional inference to deduce lower and upper bounds on second-order variables, interposed with first-order model checking in the style of HyperLTL. Our procedure is parametric in an oracle that provides (increasingly precise) lower and upper bounds. In the paper, we realize the oracles with \emph{fixpoint iteration} for underapproximations of the sets of traces assigned to the second-order variables, and \emph{automata learning} for overapproximations. We report on encouraging experimental results with our model-checking algorithm, which has been implemented in a tool called \texttt{HySO}. \section{Preliminaries}\label{sec:prelim} For $n \in \mathbb{N}$ we define $[n] := \{1, \dots, n\}$. We assume that $\text{AP}$ is a finite set of atomic propositions and define $\Sigma := 2^\text{AP}$. For $t \in \Sigma^\omega$ and $i \in \mathbb{N}$ we define $t(i) \in \Sigma$ as the $i$th element in~$t$ (starting with the $0$th); and $t[i,\infty]$ as the infinite suffix starting at position~$i$. For traces $t_1, \dots, t_n \in \Sigma^\omega$ we write $\mathit{zip}(t_1, \dots, t_n) \in (\Sigma^n)^\omega$ for the pointwise zipping of the traces, i.e., $\mathit{zip}(t_1, \dots, t_n)(i) := (t_1(i), \dots, t_n(i))$. \paragraph{Transition systems.}\label{prelim:mealy:machines} A \emph{transition system} is a tuple $\mathcal{T} = (S, S_0, \kappa, L)$ where $S$ is a set of states, $S_0 \subseteq S$ is a set of initial states, $\kappa \subseteq S \times S$ is a transition relation, and $L : S \to \Sigma$ is a labeling function. A path in $\mathcal{T}$ is an infinite state sequence $s_0s_1s_2 \cdots \in S^\omega$ such that $s_0 \in S_0$ and $(s_i, s_{i+1}) \in \kappa$ for all $i$. The associated trace is given by $L(s_0)L(s_1)L(s_2) \cdots \in \Sigma^\omega$, and $\mathit{Traces}(\mathcal{T}) \subseteq \Sigma^\omega$ denotes the set of all traces of $\mathcal{T}$.
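As a small illustration (ours, not from the paper), the finite prefixes of $\mathit{Traces}(\mathcal{T})$ can be enumerated directly from the components $(S, S_0, \kappa, L)$; here $\kappa$ and $L$ are given as dictionaries, and labels are abstracted to single symbols rather than sets of propositions:

```python
def trace_prefixes(S0, kappa, L, n):
    """Enumerate the length-n trace prefixes of a finite transition
    system: kappa[s] is the successor set of state s, L[s] its label.
    Only a finite sketch, since Traces(T) itself is an omega-language."""
    prefixes = set()

    def walk(s, acc):
        acc = acc + (L[s],)
        if len(acc) == n:
            prefixes.add(acc)
            return
        for t in kappa[s]:
            walk(t, acc)

    for s in S0:
        walk(s, ())
    return prefixes
```

For the two-state system with $\kappa = \{0 \mapsto \{0,1\},\ 1 \mapsto \{1\}\}$ and labels $a, b$, the length-2 prefixes from state $0$ are $aa$ and $ab$.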
\paragraph{Automata.} A \emph{non-deterministic B{\"u}chi automaton} (NBA) \cite{Buechi62Decision} is a tuple $\mathcal{A}= (\Sigma, Q, Q_0, \delta, F)$ where $\Sigma$ is a finite {alphabet}, $Q$ is a finite set of states, $Q_0\subseteq Q$ is the set of {initial states}, $F\subseteq Q$ is a set of {accepting states}, and $\delta : Q\times \Sigma \to 2^Q$ is the {transition function}. A run on a word $u \in \Sigma^\omega$ is an infinite sequence of states $q_0q_1q_2 \cdots \in Q^\omega$ such that $q_0 \in Q_0$ and, for every $i \in \mathbb{N}$, $q_{i+1} \in \delta(q_i, u(i))$. The run is accepting if it visits states in $F$ infinitely often, and we define the language of $\mathcal{A}$, denoted $\mathcal{L}(\mathcal{A}) \subseteq \Sigma^\omega$, as the set of all infinite words on which $\mathcal{A}$ has an accepting run. \paragraph{HyperLTL.} HyperLTL \cite{ClarksonFKMRS14} is one of the most studied temporal logics for the specification of hyperproperties. We assume that $\mathcal{V}$ is a fixed set of trace variables. For the most part, we use variations of $\pi$ (e.g., $\pi, \pi', \pi_1, \dots$) to denote trace variables. HyperLTL formulas are then generated by the grammar \begin{align*} \varphi &:= \mathbb{Q} \pi \mathpunct{.} \varphi \mid \psi \\ \psi &:= a_\pi \mid \neg \psi \mid \psi \land \psi \mid \LTLnext \psi \mid \psi \LTLuntil \psi \end{align*} where $a \in \text{AP}$ is an atomic proposition, $\pi \in \mathcal{V}{}$ is a trace variable, $\mathbb{Q} \in \{\forall, \exists\}$ is a quantifier, and $\LTLnext$ and $\LTLuntil$ are the temporal operators \emph{next} and \emph{until}. The semantics of HyperLTL is given with respect to a \emph{trace assignment} $\Pi$, which is a partial mapping $\Pi : \mathcal{V} \rightharpoonup \Sigma^\omega$ that maps trace variables to traces. Given $\pi \in \mathcal{V}$ and $t \in \Sigma^\omega$ we define $\Pi[\pi \mapsto t]$ as the updated assignment that maps $\pi$ to $t$.
For $i \in \mathbb{N}$ we define $\Pi[i, \infty]$ as the trace assignment defined by $\Pi[i, \infty](\pi) := \Pi(\pi)[i, \infty]$, i.e., we (synchronously) progress all traces by $i$ steps. For quantifier-free formulas $\psi$ we follow the LTL semantics and define \begin{align*} \Pi &\vDash a_\pi &\text{iff} \quad &a \in \Pi(\pi)(0)\\ \Pi &\vDash \neg \psi &\text{iff} \quad & \Pi \not\vDash \psi \\ \Pi &\vDash \psi_1 \land \psi_2 &\text{iff} \quad &\Pi \vDash \psi_1 \text{ and } \Pi \vDash \psi_2\\ \Pi &\vDash \LTLnext \psi &\text{iff} \quad & \Pi[1,\infty] \vDash \psi \\ \Pi &\vDash \psi_1 \LTLuntil \psi_2 &\text{iff} \quad & \exists i \in \mathbb{N} \mathpunct{.} \Pi[i,\infty]\vDash \psi_2 \text{ and } \forall j < i\mathpunct{.} \Pi[j, \infty] \vDash \psi_1\, . \end{align*} The indexed atomic propositions refer to a specific trace in $\Pi$, i.e., $a_\pi$ holds iff $a$ holds on the trace bound to $\pi$. Quantifiers range over system traces: \begin{align*} \Pi \vDash_\mathcal{T} \psi \text{ iff } \Pi \vDash\psi \quad\quad \text{and} \quad\quad \Pi \vDash_\mathcal{T} \mathbb{Q} \pi \mathpunct{.} \varphi \text{ iff } \mathbb{Q} t \in \mathit{Traces}(\mathcal{T}) \mathpunct{.} \Pi[\pi \mapsto t] \vDash \varphi\, . \end{align*} We write $\mathcal{T} \vDash \varphi$ if $\emptyset \vDash_\mathcal{T} \varphi$, where $\emptyset$ denotes the empty trace assignment. \paragraph{HyperQPTL.} HyperQPTL~\cite{DBLP:phd/dnb/Rabe16} adds -- on top of the trace quantification of HyperLTL -- propositional quantification (analogous to the propositional quantification that QPTL~\cite{qptl} adds on top of LTL). For example, HyperQPTL can express a promptness property, which states that there must exist a bound (common among all traces) by which an event must have happened. We can express this as $\exists q.
\forall \pi\mathpunct{.} \LTLeventually q \land (\neg q) \LTLuntil a_\pi$ which states that there exists an evaluation of proposition $q$ such that (1) $q$ holds at least once, and (2) for all traces $\pi$, $a$ holds on $\pi$ before the first occurrence of $q$. See \cite{BeutnerF23} for details. \section{Second-Order HyperLTL}\label{sec:second:order:hyperltl} The (first-order) trace quantification in HyperLTL ranges over the set of all system traces; we thus cannot reason about arbitrary sets of traces as required for, e.g., common knowledge. We introduce a second-order extension of HyperLTL by introducing second-order variables (ranging over sets of traces) and allowing quantification over traces from any such set. We present two variants of our logic that differ in the way quantification is resolved. In \sohyperltl{}, we quantify over arbitrary sets of traces. While this yields a powerful and intuitive logic, second-order quantification is inherently non-constructive. During model checking, there thus does not exist an efficient way to even approximate possible witnesses for the sets of traces. To solve this quandary, we restrict \sohyperltl{} to \fphyperltl{}, where we instead quantify over sets of traces that satisfy some minimality or maximality constraint. This allows for large fragments of \fphyperltl{} that admit algorithmic approximations to its model checking (by, e.g., using known techniques from fixpoint computations~\cite{tarski1955lattice,Winskel93}). \subsection{\oldsohyperltl{}}\label{sec:sohyperltl} Alongside the set $\mathcal{V}{}$ of trace variables, we use a set $\mathfrak{V}{}$ of second-order variables (which we, for the most part, denote with capital letters $X, Y, ...$). We assume that there is a special variable $\mathfrak{S} \in \mathfrak{V}{}$ that refers to the set of traces of the given system at hand, and a variable $\mathfrak{A} \in \mathfrak{V}{}$ that refers to the set of all traces. 
We define the \sohyperltl{} syntax by the following grammar: \begin{align*} \varphi &:= \mathbb{Q} \pi \in X \mathpunct{.} \varphi \mid \mathbb{Q} X\mathpunct{.} \varphi \mid \psi \\ \psi &:= a_\pi \mid \neg \psi \mid \psi \land \psi \mid \LTLnext \psi \mid \psi \LTLuntil \psi \end{align*} where $a \in \text{AP}{}$ is an atomic proposition, $\pi \in \mathcal{V}{}$ is a trace variable, $X \in \mathfrak{V}{}$ is a second-order variable, and $\mathbb{Q} \in \{\forall, \exists\}$ is a quantifier. We also consider the usual derived Boolean constants ($\true$, $\false$) and connectives ($\lor$, $\rightarrow$, $\leftrightarrow$) as well as the temporal operators \emph{eventually} ($\LTLeventually \psi := \true \LTLuntil \psi$) and \emph{globally} ($\LTLglobally \psi := \neg \LTLeventually \neg \psi$). Given a set of atomic propositions $P \subseteq \text{AP}$ and two trace variables $\pi, \pi'$, we abbreviate $\pi =_P \pi' := \bigwedge_{a \in P} (a_\pi \leftrightarrow a_{\pi'})$. \subsubsection*{Semantics.} Apart from a trace assignment $\Pi$ (as in the semantics of HyperLTL), we maintain a second-order assignment $\Delta : \mathfrak{V}{} \rightharpoonup 2^{\Sigma^\omega}$ mapping second-order variables to \emph{sets of traces}. Given $X \in \mathfrak{V}$ and $A \subseteq \Sigma^\omega$ we define the updated assignment $\Delta[X \mapsto A]$ as expected. Quantifier-free formulas $\psi$ are then evaluated in a fixed trace assignment as for HyperLTL (cf.~\Cref{sec:prelim}). 
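To make the evaluation of quantifier-free formulas concrete, the following Python sketch evaluates the temporal operators of the grammar above on ultimately periodic (``lasso'') traces under a fixed trace assignment. The tuple encoding of formulas, the \texttt{lasso} representation, and the finite \texttt{horizon} bound used to search for an $\LTLuntil$ witness are simplifying assumptions of this sketch, not part of the logic's definition.

```python
# A trace is a function from a step index to a set of atomic propositions.
# We use ultimately periodic ("lasso") traces: a finite prefix followed by
# a finite loop that repeats forever.
def lasso(prefix, loop):
    return lambda i: prefix[i] if i < len(prefix) else loop[(i - len(prefix)) % len(loop)]

# Evaluate a quantifier-free formula at shift i under trace assignment Pi.
# Formulas are tuples: ("ap", a, pi), ("not", f), ("and", f, g),
# ("next", f), ("until", f, g).  The horizon bounds the search for an
# Until witness -- for lasso traces, prefix plus loop length over all
# traces suffices for non-nested Until (a simplification of this sketch).
def sat(Pi, phi, i=0, horizon=50):
    op = phi[0]
    if op == "ap":
        _, a, pi = phi
        return a in Pi[pi](i)                      # a in Pi(pi)(i)
    if op == "not":
        return not sat(Pi, phi[1], i, horizon)
    if op == "and":
        return sat(Pi, phi[1], i, horizon) and sat(Pi, phi[2], i, horizon)
    if op == "next":
        return sat(Pi, phi[1], i + 1, horizon)     # progress all traces by one step
    if op == "until":
        for k in range(i, i + horizon):            # some k with psi2, psi1 before
            if sat(Pi, phi[2], k, horizon):
                return all(sat(Pi, phi[1], j, horizon) for j in range(i, k))
        return False
    raise ValueError(op)
```

For instance, with the assignment $[\pi \mapsto a\,a\,d^\omega]$, the formula $a_\pi \LTLuntil d_\pi$ evaluates to true.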
For the quantifier prefix we define: \begin{align*} \Pi, \Delta &\vDash\psi &\text{iff} \quad &\Pi \vDash\psi\\ \Pi, \Delta &\vDash \mathbb{Q} \pi \in X \mathpunct{.} \varphi &\text{iff} \quad &\mathbb{Q} t \in \Delta(X) \mathpunct{.} \Pi[\pi \mapsto t], \Delta \vDash \varphi\\ \Pi, \Delta &\vDash \mathbb{Q} X \mathpunct{.} \varphi &\text{iff} \quad &\mathbb{Q} A \subseteq \Sigma^\omega \mathpunct{.} \Pi, \Delta[X \mapsto A] \vDash \varphi \end{align*} Second-order quantification updates $\Delta$ with a set of traces, and first-order quantification updates $\Pi$ by quantifying over traces within the set defined by $\Delta$. Initially, we evaluate a formula in the empty trace assignment and fix the valuation of the special second-order variable $\mathfrak{S}$ to be the set of all system traces and $\mathfrak{A}$ to be the set of all traces. That is, given a system $\mathcal{T}$ and \sohyperltl{} formula $\varphi$, we say that $\mathcal{T}$ satisfies $\varphi$, written $\mathcal{T} \vDash \varphi$, if $\emptyset, [\mathfrak{S} \mapsto \mathit{Traces}(\mathcal{T}), \mathfrak{A} \mapsto \Sigma^\omega] \vDash \varphi$, where we write $\emptyset$ for the empty trace assignment. The model-checking problem for \sohyperltl{} is checking whether $\mathcal{T} \vDash \varphi$ holds. \sohyperltl{} naturally generalizes HyperLTL by adding second-order quantification. As second-order variables range over \emph{arbitrary} sets of traces, \sohyperltl{} also subsumes the~more powerful logic HyperQPTL. The proof of~\Cref{lemma:ltlto2} is given in~\Cref{app:qptl}. \begin{restatable}{lemma}{hyperltlIntoSoHyper}\label{lemma:ltlto2} \sohyperltl{} subsumes \hyperqptl{} (and thus also \hyperltl{}). \end{restatable} \subsubsection*{Syntactic Sugar.} In \sohyperltl{}, we can quantify over traces within a second-order variable, but we cannot state, within the body of the formula, that some trace is a member of some second-order variable.
For that, we define $\pi \triangleright X$ (as an atom within the body) as syntactic sugar for $\exists \pi' \in X. \LTLglobally(\pi' =_\text{AP} \pi)$, i.e., $\pi$ is in $X$ if there exists some trace in $X$ that agrees with $\pi$ on all propositions. Note that we can only use $\pi \triangleright X$ \emph{outside} the scope of any temporal operators; this ensures that we can bring the resulting formula into a form that conforms to the \sohyperltl{} syntax. \subsection{\oldfphyperltl{}}\label{sec:fphyperltl} The semantics of \sohyperltl{} quantifies over arbitrary sets of traces, making even approximations to its semantics challenging. We propose \fphyperltl{} as a restriction that only quantifies over sets that are subject to an additional minimality or maximality constraint. For large classes of formulas, we show that this admits effective model-checking approximations. We define \fphyperltl{} by the following grammar: \begin{align*} \varphi &:= \mathbb{Q}\,\pi \in X \mathpunct{.} \varphi \mid \mathbb{Q}\, (X, \fptype{}, \varphi) \mathpunct{.} \varphi \mid \psi\\ \psi &:= a_\pi \mid \neg \psi \mid \psi \land \psi \mid \LTLnext \psi \mid \psi \LTLuntil \psi \end{align*} where $a \in \text{AP}$, $\pi \in \mathcal{V}{}$, $X \in \mathfrak{V}$, $\mathbb{Q} \in \{\forall, \exists\}$, and $\fptype{} \in \{\curlywedge, \curlyvee\}$ determines if we consider smallest ($\curlyvee$) or largest ($\curlywedge$) sets. For example, the formula $\exists\, (X, \curlyvee, \varphi_1) \mathpunct{.} \varphi_2$ holds if there exists some set of traces $X$ that satisfies both $\varphi_1$ and $\varphi_2$ and is \emph{a} smallest set that satisfies~$\varphi_1$. Such minimality and maximality constraints with respect to a (hyper)property arise naturally in many settings.
Examples include common knowledge (cf.~\Cref{sec:running:example}), asynchronous hyperproperties (cf.~\Cref{sec:asynchronous:hyperproperties}), and causality in reactive systems~\cite{DBLP:conf/atva/CoenenFFHMS22,DBLP:conf/cav/CoenenDFFHHMS22}. \subsubsection*{Semantics.} For path formulas, the semantics of \fphyperltl{} is defined analogously to that of \sohyperltl{} and HyperLTL. For the quantifier prefix we define: \begin{align*} \Pi, \Delta &\vDash \psi &\text{iff} \quad &\Pi \vDash \psi\\ \Pi, \Delta &\vDash \mathbb{Q} \pi \in X \mathpunct{.} \varphi &\text{iff} \quad &\mathbb{Q} t \in \Delta(X) \mathpunct{.} \Pi[\pi \mapsto t], \Delta \vDash \varphi\\ \Pi, \Delta &\vDash \mathbb{Q} (X, \fptype{}, \varphi_1) \mathpunct{.} \varphi_2 &\text{iff} \quad &\mathbb{Q} A \in \mathit{sol}(\Pi, \Delta, (X, \fptype{}, \varphi_1)) \mathpunct{.} \Pi, \Delta[X \mapsto A] \vDash \varphi_2 \end{align*} where $\mathit{sol}(\Pi, \Delta, (X, \fptype{}, \varphi_1))$ denotes all solutions to the minimality/maximality condition given by $\varphi_1$, which we define by mutual recursion as follows:\\ \scalebox{0.95}{\parbox{\linewidth}{ \begin{align*} \mathit{sol}(\Pi, \Delta, (X, \curlyvee, \varphi))&:= \{ A \subseteq \Sigma^\omega \mid \Pi, \Delta[X \mapsto A] \vDash \varphi \land \forall A' \subsetneq A\mathpunct{.} \Pi, \Delta[X \mapsto A'] \not\vDash \varphi \}\\ \mathit{sol}(\Pi, \Delta, (X, \curlywedge, \varphi))&:= \{ A \subseteq \Sigma^\omega \mid \Pi, \Delta[X \mapsto A] \vDash \varphi \land \forall A' \supsetneq A\mathpunct{.} \Pi, \Delta[X \mapsto A'] \not\vDash \varphi \} \end{align*} }}\\ A set $A$ satisfies the minimality/maximality constraint if it satisfies $\varphi$ and is a least (in case $\fptype{} = \curlyvee$) or greatest (in case $\fptype{} = \curlywedge$) set that satisfies $\varphi$. 
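Over a finite universe of traces, the solution set $\mathit{sol}$ for a minimality constraint can be enumerated directly. The following Python sketch does so; the finite universe and the \texttt{pred} callback (standing in for $\Pi, \Delta[X \mapsto A] \vDash \varphi$) are assumptions of the sketch, as the logic itself ranges over subsets of $\Sigma^\omega$.

```python
from itertools import combinations

def powerset(universe):
    s = list(universe)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# sol for a minimality constraint: all sets A with pred(A) such that
# no strict subset of A also satisfies pred.
def sol_min(universe, pred):
    cands = [A for A in powerset(universe) if pred(A)]
    return [A for A in cands if not any(B < A for B in cands)]
```

For a \texttt{pred} requiring that a set contains the trace $1$ or the trace $2$, \texttt{sol\_min} returns the two incomparable minimal sets $\{1\}$ and $\{2\}$, so the solution set need not be a singleton.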
Note that $\mathit{sol}(\Pi, \Delta, (X,\fptype{}, \varphi))$ can contain multiple sets or no set at all, i.e., there may not exist a unique least or greatest set that satisfies $\varphi$. In \fphyperltl{}, we therefore add an additional quantification over the set of all solutions to the minimality/maximality constraint. When discussing our model-checking approximation algorithm, we present a (syntactic) restriction on $\varphi$ which guarantees that $\mathit{sol}(\Pi, \Delta, (X, \fptype{}, \varphi))$ contains a unique element (i.e., is a singleton set). Moreover, our restriction allows us to employ fixpoint techniques to find approximations to this unique solution. In case the solution for $(X,\fptype{}, \varphi)$ is unique, we often omit the leading quantifier and simply write $(X, \fptype{}, \varphi)$ instead of $\mathbb{Q}\, (X,\fptype{}, \varphi)$. As we can encode the minimality/maximality constraints of \fphyperltl{} in \sohyperltl{} (see~\Cref{app:fptoso}), we have the following: \begin{restatable}{proposition}{fpToSo}\label{prop:fptoso} Any \fphyperltl{} formula $\varphi$ can be effectively translated into an \sohyperltl{} formula $\varphi'$ such that for all transition systems $\mathcal{T}$ we have $\mathcal{T} \vDash \varphi$ iff $\mathcal{T} \vDash \varphi'$. \end{restatable} \begin{figure} \caption{Left: An example of a multi-agent system with two agents, where agent~1 observes $a$ and $d$, and agent 2 observes $c$ and $d$. Right: The iterative construction of the traces to be considered for common knowledge starting with $a^nd^\omega$.} \label{fig:common:knowledge} \end{figure} \subsection{Common Knowledge in Multi-Agent Systems} \label{sec:running:example} To explain common knowledge, we use a variation of an example from~\cite{Meyden98} and encode it in \fphyperltl{}. \Cref{fig:common:knowledge}(left) shows a transition system of a distributed system with two agents, agent~$1$ and agent $2$.
Agent~$1$ observes variables $a$ and $d$, whereas agent $2$ observes $c$ and~$d$. The property of interest is \emph{starting from the trace $\pi = a^n d^\omega$ for some fixed $ n > 1$, is it common knowledge for the two agents that $a$ holds in the second step}. It is trivial to see that $\LTLnext a$ holds on~$\pi$. However, for common knowledge, we consider the (possibly) infinite chain of observationally equivalent traces. For example, agent $2$ cannot distinguish the traces $a^nd^\omega$ and $a^{n-1}bd^\omega$. Therefore, agent $2$ only knows that $\LTLnext a$ holds on $\pi$ if it also holds on $ \pi' = a^{n-1}bd^\omega$. For common knowledge, agent $1$ also has to know that agent~$2$ knows $\LTLnext a$, which means that for all traces that are indistinguishable from $\pi$ or~$\pi'$ for agent $1$, $\LTLnext a$ has to hold. This adds $\pi'' = a^{n-1}cd^\omega$ to the set of traces to verify $\LTLnext a$ against. This chain of reasoning continues as shown in \Cref{fig:common:knowledge}(right). In the last step we add $ac^{n-1}d^\omega$ to the set of indistinguishable traces, concluding that $\LTLnext a$ is not common knowledge. The following \fphyperltl{} formula specifies the property stated above. The abbreviation $\mathit{obs}(\pi_1, \pi_2) := \LTLglobally (\pi_1 =_{\{a, d\}} \pi_2) \vee \LTLglobally(\pi_1 =_{\{c, d\}} \pi_2)$ denotes that $\pi_1$ and $\pi_2$ are observationally equivalent for either agent 1 or agent 2. \begin{align*} &\forall \pi \in \mathfrak{S}. \big( \bigwedge_{i=0}^{n-1} \LTLnext^i a_\pi \land \LTLnext^{n}\LTLglobally d_\pi\big) \to \\ &\quad\Big(X, \curlyvee, \pi \triangleright X \land \big(\forall \pi_1 \in X. \forall \pi_2 \in \mathfrak{S}{}\mathpunct{.} \mathit{obs}(\pi_1,\pi_2) \rightarrow \pi_2 \triangleright X \big)\Big)\mathpunct{.} \forall \pi' \in X. \LTLnext a_{\pi'} \end{align*} For a trace $\pi$ of the form $\pi = a^n d^\omega$, the set $X$ represents the \emph{common knowledge set} on~$\pi$. 
This set $X$ is the smallest set that (1) contains $\pi$ (expressed using our syntactic sugar $\triangleright$); and (2) is closed under observations by either agent, i.e., if we find some $\pi_1 \in X$ and some system trace $\pi_2$ that are observationally equivalent, $\pi_2$ should also be in $X$. Note that this set is unique (due to the minimality restriction), so we do not quantify over it explicitly. Lastly, we require that all traces in $X$ satisfy the property $\LTLnext a$. Any set satisfying the closure constraints necessarily includes the trace $ac^{n-1}d^\omega$, on which $\LTLnext a$ fails; thus, we can conclude that starting from trace $a^nd^\omega$, it is \emph{not} common knowledge that $\LTLnext a$ holds. On the other hand, it \emph{is} common knowledge that $a$ holds in the \emph{first} step (cf.~\Cref{sec:implementation}). \subsection{\oldsohyperltl{} Model Checking}\label{sec:sohyperltl_mc} As \sohyperltl{} and \fphyperltl{} allow quantification over arbitrary sets of traces, we can encode the satisfiability of \HyperQPTL{} (i.e., the question of whether some set of traces satisfies a formula) within their model-checking problem, rendering the model-checking problem highly undecidable~\cite{FortinKT021}, even for very simple formulas~\cite{BeutnerCFHK22}. \begin{restatable}{proposition}{hyperSatToSo}\label{prop:hyperltl_sat_in_sohyperltl_mc}\label{corr:sohyper_hardness} For any \HyperQPTL{} formula $\varphi$ there exists a \sohyperltl{} formula $\varphi'$ such that $\varphi$ is satisfiable iff $\varphi'$ holds on some arbitrary transition system. The model-checking problem of \sohyperltl{} is thus highly undecidable ($\Sigma_1^1$-hard).
\end{restatable} \begin{proof} Let $\varphi'$ be the \sohyperltl{} formula obtained from $\varphi$ by replacing each \HyperQPTL{} trace quantifier $\mathbb{Q} \pi$ with the \sohyperltl{} quantifier $\mathbb{Q} \pi \in X$, and each propositional quantifier $\mathbb{Q} q$ with $\mathbb{Q} \pi_q \in \mathfrak{A}$ for some fresh trace variable $\pi_q$. In the body, we replace each propositional variable $q$ with $a_{\pi_q}$ for some fixed proposition $a \in \text{AP}$. Then, $\varphi$ is satisfiable iff the \sohyperltl{} formula $\exists X. \varphi'$ holds in some arbitrary system. \qed \end{proof} \fphyperltl{} cannot express \HyperQPTL{} satisfiability directly. If there exists a model of a \HyperQPTL{} formula, there may not exist a least one. However, model checking of \fphyperltl{} is also highly undecidable. \begin{restatable}{proposition}{fphyperUndec}\label{lem:fphyper_hardness} The model-checking problem of \fphyperltl{} is $\Sigma_1^1$-hard. \end{restatable} \begin{proof}[Sketch] We can encode the existence of a \emph{recurrent} computation of a Turing machine, which is known to be $\Sigma_1^1$-hard \cite{AlurH94}.\qed \end{proof} Conversely, the \emph{existential} fragment of \sohyperltl{} can be encoded back into \hyperqptl{} satisfiability: \begin{restatable}{proposition}{sohyperToSat} Let $\varphi$ be a \sohyperltl{} formula that uses only existential second-order quantification and $\mathcal{T}$ be any system. We can effectively construct a formula $\varphi'$ in \hyperqptl{} such that $\mathcal{T} \vDash \varphi$ iff $\varphi'$ is satisfiable. \end{restatable} Lastly, we present some easy fragments of \sohyperltl{} for which the model-checking problem is decidable. Here we write $\exists^* X$ (resp.~$\forall^* X$) for some sequence of existentially (resp.~universally) quantified \emph{second-order} variables and $\exists^* \pi$ (resp.~$\forall^* \pi$) for some sequence of existentially (resp.~universally) quantified \emph{first-order} variables. 
For example, $\exists^* X \forall^*\pi$ captures all formulas of the form $\exists X_1, \dots, X_n. \forall \pi_1, \dots, \pi_m. \psi$ where $\psi$ is quantifier-free. \begin{restatable}{proposition}{easyfrags} \label{prop:easyfragments} The model-checking problem of \sohyperltl{} is decidable for the fragments: $\exists^* X \forall^*\pi$, $\forall^* X \forall^*\pi$, $\exists^* X \exists^*\pi$, $\forall^* X \exists^*\pi$, $\exists X.\exists^* \pi\in X \forall^*\pi'\in X$. \end{restatable} See \Cref{app:proofs} for the full proofs of the propositions above. \section{Expressiveness of \oldsohyperltl{}}\label{sec:examples} In this section, we point to existing logics that can naturally be encoded within our second-order hyperlogics \sohyperltl{} and \fphyperltl{}. \subsection{\oldsohyperltl{} and LTL$_{\mathsf{K}, \mathsf{C}}$}\label{sec:ltlK} LTL$_\mathsf{K}$ extends LTL with the knowledge operator $\mathsf{K}$. For some subset of agents $A$, the formula $\mathsf{K}_A \psi$ holds at timestep $i$ if $\psi$ holds on all traces that are indistinguishable, up to timestep $i$, for some agent in $A$. See \Cref{app:ltlk} for detailed semantics. LTL$_\mathsf{K}$ and HyperCTL$^*$ have incomparable expressiveness \cite{DBLP:conf/fossacs/BozzelliMP15} but the knowledge operator $\mathsf{K}$ can be encoded by either adding a linear past operator \cite{DBLP:conf/fossacs/BozzelliMP15} or by adding propositional quantification (as in \HyperQPTL{}) \cite{DBLP:phd/dnb/Rabe16}. Using \fphyperltl{} we can encode LTL$_{\mathsf{K}, \mathsf{C}}$, featuring the knowledge operator $\mathsf{K}$ \emph{and} the common knowledge operator $\mathsf{C}$ (which requires that $\psi$ holds on the closure set of equivalent traces, up to the current timepoint) \cite{DBLP:conf/spin/HoekW02}. Note that LTL$_{\mathsf{K}, \mathsf{C}}$ is not encodable by only adding propositional quantification or the linear past operator.
\begin{restatable}{proposition}{kltl} For every \emph{LTL$_{\mathsf{K}, \mathsf{C}}$} formula $\varphi$ there exists an \fphyperltl{} formula $\varphi'$ such that for any system $\mathcal{T}$ we have $\mathcal{T} \vDash_{\text{LTL$_{\mathsf{K}, \mathsf{C}}$}} \varphi$ iff $\mathcal{T} \vDash \varphi'$. \end{restatable} \begin{proof}[Sketch] We follow the intuition discussed in~\Cref{sec:running:example}. For each occurrence of a knowledge operator in $\{\mathsf{K}, \mathsf{C} \}$, we introduce a second-order set that collects all equivalent traces, and a fresh trace variable that keeps track of the points in time with respect to which we need to compare. We then inductively construct a \fphyperltl{} formula that captures all the knowledge and common-knowledge sets. For more details see~\Cref{app:ltlk}. \qed \end{proof} \subsection{\oldsohyperltl{} and Asynchronous Hyperproperties}\label{sec:asynchronous:hyperproperties} Most existing hyperlogics (including \sohyperltl{}) traverse the traces of a system \emph{synchronously}. However, in many cases such a synchronous traversal is too restrictive and we need to compare traces asynchronously. As an example, consider \emph{observational determinism} (OD), which we can express in HyperLTL as $\varphi_\mathit{OD} := \forall \pi_1. \forall \pi_2. \LTLglobally(o_{\pi_1} \leftrightarrow o_{\pi_2})$. The formula states that the output of a system is identical across all traces and so (trivially) no information about high-security inputs is leaked. In most systems encountered in practice, this synchronous formula is violated, as the exact timing between updates to $o$ might differ by a few steps (see \Cref{app:asynchronous:hyperproperties} for some examples). However, assuming that an attacker only has access to the memory footprint and not a timing channel, we only need to check that all traces are \emph{stutter} equivalent (with respect to~$o$).
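Stutter equivalence itself is easy to operationalize: two traces are stutter-equivalent with respect to the observed propositions iff collapsing consecutive repetitions of the same observation yields the same sequence. The following Python sketch works on finite trace abstractions; the finite lists and the \texttt{project} callback are our simplifications.

```python
# Collapse consecutive repetitions of the same observation.
def destutter(trace):
    out = []
    for step in trace:
        if not out or out[-1] != step:
            out.append(step)
    return out

# Stutter equivalence with respect to a projection onto the observed
# propositions (e.g., only the output o).
def stutter_equivalent(t1, t2, project):
    return destutter([project(s) for s in t1]) == destutter([project(s) for s in t2])
```

Under the projection onto $\{o\}$, the trace abstractions $\{o\}\{o\}\emptyset$ and $\{o\}\emptyset\emptyset$ are stutter-equivalent, even though the synchronous $\varphi_\mathit{OD}$ distinguishes them.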
A range of extensions to existing hyperlogics has been proposed to reason about such asynchronous hyperproperties~\cite{BaumeisterCBFS21,BozzelliPS21,GutsfeldMO21,BeutnerF21,BeutnerF23LMCS}. We consider AHLTL~\cite{BaumeisterCBFS21}. An AHLTL formula has the form $\mathbb{Q}_1 \pi_1, \dots, \mathbb{Q}_n \pi_n. \mathbf{E} \mathpunct{.} \psi$ where $\psi$ is a quantifier-free HyperLTL formula. The initial trace quantifier prefix is handled as in HyperLTL. However, different from HyperLTL, a trace assignment $[\pi_1 \mapsto t_1, \dots, \pi_n \mapsto t_n]$ satisfies $\mathbf{E} \mathpunct{.} \psi$ if there exist stuttered traces $t_1', \dots, t_n'$ of $t_1, \dots, t_n$ such that $[\pi_1 \mapsto t_1', \dots, \pi_n \mapsto t_n'] \vDash \psi$. We write $\mathcal{T} \vDash_{\mathit{AHLTL}} \varphi$ if a system $\mathcal{T}$ satisfies the AHLTL formula $\varphi$. Using this quantification over stutterings we can, for example, express an asynchronous version of observational determinism as $\forall \pi_1. \forall \pi_2. \mathbf{E} \mathpunct{.} \LTLglobally(o_{\pi_1} \leftrightarrow o_{\pi_2})$ stating that every two traces can be aligned such that they (globally) agree on $o$. Despite the fact that \fphyperltl{} is itself synchronous, we can use second-order quantification to encode asynchronous hyperproperties, as we state in the following proposition. \begin{proposition} For any AHLTL formula $\varphi$ there exists a \fphyperltl{} formula $\varphi'$ such that for any system $\mathcal{T}$ we have $\mathcal{T} \vDash_{\mathit{AHLTL}} \varphi$ iff $\mathcal{T} \vDash \varphi'$. \end{proposition} \begin{proof} Assume that $\varphi = \mathbb{Q}_1 \pi_1, \dots, \mathbb{Q}_n \pi_n. \mathbf{E} \mathpunct{.} \psi$ is the given AHLTL formula. For each $i \in [n]$ we define a formula $\varphi_i$ as follows \begin{align*} \forall \pi_1 \in X_i.
\forall \pi_2 \in \mathfrak{A}{}\mathpunct{.} \Big( \big( \pi_1 =_\text{AP} \pi_2 \big) \LTLuntil \big( \LTLglobally \bigwedge_{a \in \text{AP}} a_{\pi_1} \leftrightarrow \LTLnext a_{\pi_2} \big) \Big) \rightarrow \pi_2 \triangleright X_i \end{align*} The formula asserts that the set of traces bound to $X_i$ is closed under stuttering, i.e., if we start from any trace in $X_i$ and stutter it once (at some arbitrary position) we again end up in $X_i$. Using the formulas $\varphi_i$, we then construct a \fphyperltl{} formula that is equivalent to $\varphi$ as follows \begin{align*} \varphi' := \,&\mathbb{Q}_1 \pi_1 \in \mathfrak{S}{}, \dots, \mathbb{Q}_n \pi_n \in \mathfrak{S}{}. (X_1, \curlyvee, \pi_1 \triangleright X_1 \land \varphi_1) \cdots (X_n, \curlyvee, \pi_n \triangleright X_n \land \varphi_n) \\ &\quad\quad\exists \pi_1' \in X_1, \dots, \exists \pi_n' \in X_n. \psi[\pi_1'/\pi_1, \dots, \pi_n'/\pi_n] \end{align*} We first mimic the quantification in $\varphi$ and, for each trace $\pi_i$, construct a least set $X_i$ that contains $\pi_i$ and is closed under stuttering (thus describing exactly the set of all stutterings of $\pi_i$). Finally, we assert that there are traces $\pi_1', \dots, \pi_n'$ with $\pi_i' \in X_i$ (so $\pi_i'$ is a stuttering of $\pi_i$) such that $\pi_1', \dots, \pi_n'$ satisfy $\psi$. It is easy to see that $\mathcal{T} \vDash_{\mathit{AHLTL}} \varphi$ iff $\mathcal{T} \vDash \varphi'$ holds for all systems. \qed \end{proof} \fphyperltl{} captures all properties expressible in AHLTL. In particular, our approximate model-checking algorithm for \fphyperltl{} (cf.~\Cref{sec:algorithms}) is applicable to AHLTL, even for instances where no approximate solutions were previously known. In \Cref{sec:implementation}, we show that our prototype model checker for \fphyperltl{} can verify asynchronous properties in practice.
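A finite analogue of the sets $X_i$ from the proof above is the least set that contains a given (finite) trace and is closed under duplicating one position. The length bound \texttt{max\_len} makes the closure finite and is an assumption of this sketch.

```python
# Least set containing `trace` (a tuple of observations) that is closed
# under single stutters, up to a length bound.
def stutter_closure(trace, max_len):
    X = {tuple(trace)}
    frontier = set(X)
    while frontier:
        new = set()
        for t in frontier:
            if len(t) < max_len:
                for i in range(len(t)):
                    s = t[:i + 1] + t[i:]   # duplicate position i
                    if s not in X:
                        new.add(s)
        X |= new
        frontier = new
    return X
```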
\section{Model-Checking \oldfphyperltl}\label{sec:algorithms} In general, finite-state model checking of \fphyperltl{} is highly undecidable (cf.~\Cref{corr:sohyper_hardness}). In this section, we outline a partial algorithm that computes approximations of the concrete values of second-order variables for a fragment of \fphyperltl{}. At a high level, our algorithm (\Cref{alg:mainVerification}) iteratively computes under- and overapproximations for second-order variables. It then resolves first-order quantification using techniques from HyperLTL model checking~\cite{FinkbeinerRS15,BeutnerF23}, evaluating existential and universal trace quantification on the under- and overapproximations of the second-order variables, respectively. If the verification fails, it goes back to refine the second-order approximations. In this section, we focus on the setting where we are interested in the least sets (using $\curlyvee$), and use techniques to approximate the \emph{least} fixpoint. A similar (dual) treatment is possible for \fphyperltl{} formulas that use the largest set. Every \fphyperltl{} formula that uses only minimal sets has the following form: \begin{align}\label{eq:fpformula} \varphi = \gamma_1. (Y_1, \curlyvee, \varphi^\mathit{con}_1). \gamma_2 \cdots (Y_k, \curlyvee{}, \varphi^\mathit{con}_k)\mathpunct{.} \gamma_{k+1}\mathpunct{.} \psi \end{align} We quantify second-order variables $Y_1, \dots, Y_k$, where, for each $j \in [k]$, $Y_j$ is the least set that satisfies $\varphi^\mathit{con}_j$. Finally, for each $j \in [k+1]$, \begin{align*} \gamma_j = \mathbb{Q}_{l_j+1} \pi_{l_j+1} \in X_{l_j+1} \cdots \mathbb{Q}_{l_{j+1}} \pi_{l_{j+1}} \in X_{l_{j+1}} \end{align*} is the block of first-order quantifiers that sits between the quantification of $Y_{j-1}$ and $Y_j$.
Here $X_{l_j+1}, \dots, X_{l_{j+1}} \in \{\mathfrak{S}, \mathfrak{A}, Y_1, \dots, Y_{j-1}\}$ are second-order variables that are quantified before $\gamma_j$. In particular, $\pi_1, \dots, \pi_{l_j}$ are the first-order variables quantified before~$Y_j$. \subsection{Fixpoints in \oldfphyperltl{}} We consider a fragment of \fphyperltl{} which we call the \emph{least fixpoint fragment}. Within this fragment, we restrict the formulas $\varphi^\mathit{con}_1, \dots, \varphi^\mathit{con}_k$ such that $Y_1, \dots, Y_k$ can be approximated as (least) fixpoints. Concretely, we say that $\varphi$ is in the \emph{least fixpoint fragment} of \fphyperltl{} if for all $j \in [k]$, $\varphi^\mathit{con}_j$ is a conjunction of formulas of the form \begin{align}\label{eq:fixpoint-condition} \forall \dot{\pi}_1 \in X_1. \dots \forall \dot{\pi}_n \in X_n\mathpunct{.} \psi_{\mathit{step}} \rightarrow \dot{\pi}_M \triangleright Y_j \end{align} where each $X_i \in \{\mathfrak{S}, \mathfrak{A}, Y_1, \dots, Y_j\}$, $\psi_{\mathit{step}}$ is a quantifier-free formula over trace variables $\dot{\pi}_1, \dots, \dot{\pi}_n, \pi_1, \dots, \pi_{l_j}$, and $M \in [n]$. Intuitively, \Cref{eq:fixpoint-condition} states a requirement on traces that should be included in $Y_j$. If we find traces $\dot{t}_1 \in X_1, \dots, \dot{t}_n \in X_n$ that, together with the traces $t_1, \dots, t_{l_j}$ quantified before $Y_j$, satisfy $\psi_{\mathit{step}}$, then $\dot{t}_M$ should be included in $Y_j$. Together with the minimality constraint on $Y_j$ (stemming from the semantics of \fphyperltl{}), this effectively defines a (monotone) least fixpoint computation, as $\psi_{\mathit{step}}$ defines exactly the traces to be added to the set. This will allow us to use results from fixpoint theory to compute approximations for the sets $Y_j$.
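The rule format of \Cref{eq:fixpoint-condition} suggests a standard Kleene iteration: repeatedly fire every rule until no new trace is added. The following Python sketch performs this iteration over finite trace sets; the rule encoding (a list of domains, a \texttt{step} predicate playing the role of $\psi_{\mathit{step}}$, and the index $M$ of the trace to add) mirrors the formula shape, while the finite domains are an assumption of the sketch.

```python
from itertools import product

# rules: list of (domains, step, M).  A domain entry of None stands for
# the set Y currently being computed; step is the psi_step predicate;
# the trace at position M is added to Y whenever step holds.
def least_fixpoint(rules, seed=frozenset()):
    Y = set(seed)
    changed = True
    while changed:                      # iterate until a fixpoint is reached
        changed = False
        for domains, step, M in rules:
            doms = [Y if d is None else d for d in domains]
            for ts in product(*doms):
                if step(*ts) and ts[M] not in Y:
                    Y.add(ts[M])
                    changed = True
    return Y
```

With a single rule adding $t$ whenever some $s \in Y$ satisfies $t = s + 2$ over the domain $\{0,\dots,9\}$, the iteration starting from $\{0\}$ yields $\{0,2,4,6,8\}$.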
Our least fixpoint fragment captures most properties of interest, in particular, common knowledge (\Cref{sec:running:example}) and asynchronous hyperproperties (\Cref{sec:asynchronous:hyperproperties}). We observe that formulas of the above form ensure that the solution $Y_j$ is unique, i.e., for any trace assignment $\Pi$ to $\pi_1, \dots, \pi_{l_j}$ and second-order assignment $\Delta$ to $\mathfrak{S}, \mathfrak{A}, Y_1, \dots, Y_{j-1}$, there is only one element in $\mathit{sol}(\Pi, \Delta, (Y_j, \curlyvee, \varphi_j^\mathit{con}))$. \subsection{Functions as Automata}\label{sec:functions:as:automata} In our (approximate) model-checking algorithm, we represent a concrete assignment to the second-order variables $Y_1, \dots, Y_k$ using automata $\mathcal{B}_{Y_1}, \dots, \mathcal{B}_{Y_k}$. The concrete assignment of $Y_j$ can depend on traces assigned to $\pi_1, \dots, \pi_{l_j}$, i.e., the first-order variables quantified before $Y_j$. To capture these dependencies, we view each $Y_j$ not as a set of traces but as a function mapping traces of all preceding first-order variables to a set of traces. We represent such a function $f : (\Sigma^\omega)^{l_j} \to 2^{(\Sigma^\omega)}$ mapping the $l_j$ traces to a set of traces as an automaton $\mathcal{A}$ over $\Sigma^{l_j+1}$. For traces $t_1, \dots, t_{l_j}$, the set $f(t_1, \dots, t_{l_j})$ is represented in the automaton by the set $\{ t \in \Sigma^\omega \mid \mathit{zip}(t_1, \dots, t_{l_j}, t) \in \mathcal{L}(\mathcal{A}) \}$. For example, the function $f(t_1) := \{t_1\}$ can be defined by the automaton that accepts the zipping of a pair of traces exactly if both traces agree on all propositions. This representation of functions as automata allows us to maintain an assignment to $Y_j$ that is parametric in $\pi_1, \dots, \pi_{l_j}$ and still allows first-order model checking on $Y_1, \dots, Y_k$.
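The zipping used here is simply a position-wise pairing of traces into a single trace over the product alphabet. A minimal Python sketch, with finite lists standing in for infinite traces:

```python
# zip(t_1, ..., t_k): interleave traces position-wise into one trace
# over the product alphabet Sigma^k.
def zip_traces(*traces):
    return list(zip(*traces))

# The function f(t1) = {t1} corresponds to the language of zipped pairs
# whose two components agree pointwise (the example automaton above).
def in_identity_language(zipped):
    return all(x == y for x, y in zipped)
```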
\subsection{Model Checking for First-Order Quantification}\label{sec:first:order:checking} First, we focus on first-order quantification, and assume that we are given a concrete assignment for each second-order variable as fixed automata $\mathcal{B}_{Y_1}, \dots, \mathcal{B}_{Y_k}$ (where $\mathcal{B}_{Y_j}$ is an automaton over $\Sigma^{l_j+1}$). Our construction for resolving first-order quantification is based on HyperLTL model checking~\cite{FinkbeinerRS15}, but needs to work on sets of traces that are themselves based on previously quantified traces (cf.~\Cref{sec:functions:as:automata}). Recall that the first-order quantifier prefix is $\gamma_1 \cdots \gamma_{k+1} = \mathbb{Q}_{1} \pi_{1} \in X_{1} \cdots \mathbb{Q}_{l_{k+1}} \pi_{l_{k+1}} \in X_{l_{k+1}}$. For each $1 \leq i \leq l_{k+1}$ we inductively construct an automaton $\mathcal{A}_i$ over $\Sigma^{i-1}$ that summarizes all trace assignments to $\pi_1, \dots, \pi_{i-1}$ that satisfy the subformula starting with the quantification of $\pi_i$. That is, for all traces $t_1, \dots, t_{i-1}$ we have \begin{align*} [\pi_1 \mapsto t_1, \dots, \pi_{i-1} \mapsto t_{i-1}] \vDash \mathbb{Q}_{i} \pi_{i} \in X_{i} \cdots \mathbb{Q}_{l_{k+1}} \pi_{l_{k+1}} \in X_{l_{k+1}}\mathpunct{.} \psi \end{align*} (under the fixed second-order assignment for $Y_1, \dots, Y_k$ given by $\mathcal{B}_{Y_1}, \dots, \mathcal{B}_{Y_k}$) if and only if $\mathit{zip}(t_1, \dots, t_{i-1}) \in \mathcal{L}(\mathcal{A}_i)$. In the context of HyperLTL model checking we say that $\mathcal{A}_i$ is \emph{equivalent} to $\mathbb{Q}_{i} \pi_{i} \in X_{i} \cdots \mathbb{Q}_{l_{k+1}} \pi_{l_{k+1}} \in X_{l_{k+1}}\mathpunct{.} \psi$ \cite{FinkbeinerRS15,BeutnerF23}. In particular, $\mathcal{A}_1$ is an automaton over the singleton alphabet $\Sigma^0$. We construct $\mathcal{A}_1, \dots, \mathcal{A}_{l_{k+1}+1}$ inductively, starting with $\mathcal{A}_{l_{k+1}+1}$.
Initially, we construct $\mathcal{A}_{l_{k+1}+1}$ (over $\Sigma^{l_{k+1}}$) using a standard LTL-to-NBA construction on the (quantifier-free) body $\psi$ (see \cite{FinkbeinerRS15} for details). Now assume that we are given an (inductively constructed) automaton $\mathcal{A}_{i+1}$ over $\Sigma^i$ and want to construct~$\mathcal{A}_i$. We first consider the case where $\mathbb{Q}_i = \exists$, i.e., the $i$th trace quantification is existential. Now $X_i$ (the set on which $\pi_i$ is resolved) equals either $\mathfrak{S}$, $\mathfrak{A}$, or $Y_j$ for some $j \in [k]$. In each case, we represent the current assignment to $X_i$ as an automaton $\mathcal{C}$ over $\Sigma^{T + 1}$ for some $T < i$ that defines the model of $X_i$ based on the traces $\pi_1, \dots, \pi_T$: In case $X_i = \mathfrak{S}$, we set $\mathcal{C}$ to be the automaton over $\Sigma^{0 + 1}$ that accepts exactly the traces in the given system $\mathcal{T}$; in case $X_i = \mathfrak{A}$, we set $\mathcal{C}$ to be the automaton over $\Sigma^{0 + 1}$ that accepts all traces; if $X_i = Y_j$ for some $j \in [k]$ we set $\mathcal{C}$ to be $\mathcal{B}_{Y_j}$ (which is an automaton over $\Sigma^{l_j + 1}$).\footnote{Note that in this case $l_j < i$: if trace $\pi_i$ is resolved on $Y_j$ (i.e., $X_i = Y_j$), then $Y_j$ must be quantified \emph{before} $\pi_i$, so there are at most $i-1$ traces quantified before~$Y_j$. } Given $\mathcal{C}$, we can now modify the construction from \cite{FinkbeinerRS15} to resolve first-order quantification: The desired automaton~$\mathcal{A}_i$ should accept the zipping of traces $t_1, \dots, t_{i-1}$ if there exists a trace $t$ such that (1) $\mathit{zip}(t_1, \dots, t_{i-1}, t) \in \mathcal{L}(\mathcal{A}_{i+1})$, \emph{and} (2) the trace $t$ is contained in the set of traces assigned to $X_i$ as given by $\mathcal{C}$, i.e., $\mathit{zip}(t_1, \dots, t_T, t) \in \mathcal{L}(\mathcal{C})$.
The construction of this automaton is straightforward by taking a product of~$\mathcal{A}_{i+1}$ and~$\mathcal{C}$. We denote this automaton with \lstinline[style=default, language=custom-lang]|eProduct($\mathcal{A}_{i+1}$,$\mathcal{C}$)|. In case $\mathbb{Q}_i = \forall$, we exploit the duality $\forall \pi. \psi = \neg \exists \pi. \neg \psi$, combining the above construction with automata complementation. We denote this universal product of $\mathcal{A}_{i+1}$ and $\mathcal{C}$ with \lstinline[style=default, language=custom-lang]|uProduct($\mathcal{A}_{i+1}$,$\mathcal{C}$)|. The final automaton $\mathcal{A}_1$ is an automaton over the singleton alphabet $\Sigma^0$ that is equivalent to $\gamma_1 \cdots \gamma_{k+1}. \psi$, i.e., the entire first-order quantifier prefix. Automaton $\mathcal{A}_1$ thus satisfies $\mathcal{L}(\mathcal{A}_1) \neq \emptyset$ (which we can decide) iff the empty trace assignment satisfies the first-order formula $\gamma_1 \cdots \gamma_{k+1}\mathpunct{.} \psi$, iff $\varphi$ (of \Cref{eq:fpformula}) holds within the fixed model for $Y_1, \dots, Y_k$. For a given fixed second-order assignment (given as automata $\mathcal{B}_{Y_1}, \dots, \mathcal{B}_{Y_k}$), we can thus decide if the system satisfies the first-order part. During the first-order model-checking phase, each quantifier alternation in the formula requires a costly automaton complementation. For the first-order phase, we could also use cheaper approximate methods by, e.g., instantiating the existential trace using a strategy \cite{CoenenFST19,BeutnerF22CAV,BeutnerF22}. \subsection{Bidirectional Model Checking} So far, we have discussed the verification of the first-order quantifiers assuming we have a fixed model for all second-order variables $Y_1, \dots, Y_k$. In our actual model-checking algorithm, we instead maintain under- and overapproximations on each of the $Y_1, \dots, Y_k$.
\begin{algorithm}[!t] \caption{}\label{alg:mainVerification} \begin{code} verify($\varphi$, $\mathcal{T}$) = @@let $\varphi$ = $\big[\gamma_j \; (Y_j, \curlyvee , \varphi_j^\mathit{con})\big]_{j=1}^k \; \gamma_{k+1}\mathpunct{.} \psi$ where $\gamma_i$ = $\big[\mathbb{Q}_m \pi_m \in X_m \big]_{m=l_i+1}^{l_{i+1}} $ @@let N = 0 @@let $\mathcal{A}_\mathcal{T}$ = systemToNBA($\mathcal{T}$) @@repeat @@@@// Start outside-in traversal on second-order variables @@@@let $\flat$ = $\big[\mathfrak{S} \mapsto (\mathcal{A}_\mathcal{T},\mathcal{A}_\mathcal{T}), \mathfrak{A} \mapsto (\mathcal{A}_\top, \mathcal{A}_\top)\big]$ @@@@for $j$ from $1$ to $k$ do @@@@@@$\mathcal{B}_j^l$ := underApprox($(Y_j,\curlyvee, \varphi_j^\mathit{con})$,$\flat$,$N$) @@@@@@$\mathcal{B}_j^u$ := overApprox($(Y_j,\curlyvee, \varphi_j^\mathit{con})$,$\flat$,$N$) @@@@@@$\flat(Y_j)$ := $(\mathcal{B}_j^l,\mathcal{B}_j^u)$ @@@@// Start inside-out traversal on first-order variables @@@@let $\mathcal{A}_{l_{k+1} + 1}$ = LTLtoNBA($\psi$) @@@@for $i$ from $l_{k+1}$ to $1$ do @@@@@@let $(\mathcal{C}^l, \mathcal{C}^u)$ = $\flat(X_i)$ @@@@@@if $\mathbb{Q}_i = \exists$ then @@@@@@@@$\mathcal{A}_i$ := eProduct($\mathcal{A}_{i+1}$, $\mathcal{C}^l$) @@@@@@else @@@@@@@@$\mathcal{A}_i$ := uProduct($\mathcal{A}_{i+1}$, $\mathcal{C}^u$) @@@@if $\mathcal{L}(\mathcal{A}_1) \neq \emptyset$ then @@@@@@return SAT @@@@else @@@@@@$N$ = $N$ + 1 \end{code} \end{algorithm} In each iteration, we first traverse the second-order quantifiers in an \emph{outside-in} direction and compute lower and upper bounds on each $Y_j$. Given the bounds, we then traverse the first-order prefix in an \emph{inside-out} direction using the current approximations of $Y_1, \dots, Y_k$. If the current approximations are not precise enough to witness the satisfaction (or violation) of a property, we repeat and try to compute better bounds on $Y_1, \dots, Y_k$.
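The role of the two bounds in this loop can be illustrated with plain sets in place of automata (an illustrative sketch of ours, not the tool's API): an existential quantifier may only trust the underapproximation, a universal one only the overapproximation, so a positive answer is always sound and a negative answer merely means "refine".

```python
# Sketch: sound one-sided quantifier check using bounds on an unknown set Y,
# with lower <= Y <= upper. Returns True only if the property certainly
# holds on Y; False means "unknown -- compute better bounds".
def sound_check(quant, pred, lower, upper):
    if quant == "exists":
        # any witness found in `lower` is guaranteed to be in Y
        return any(pred(t) for t in lower)
    # "forall": `upper` covers every element of Y
    return all(pred(t) for t in upper)

# Example: Y is secretly {2, 4}; we only know {2} <= Y <= {2, 4, 6}.
even = lambda n: n % 2 == 0
# sound_check("exists", even, {2}, {2, 4, 6}) -> certainly True (witness 2)
# sound_check("forall", even, {2}, {2, 4, 6}) -> certainly True on Y
```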
Due to the different directions of traversal, we refer to our model-checking approach as \emph{bidirectional}. \Cref{alg:mainVerification} provides an overview. Initially, we convert the system $\mathcal{T}$ to an NBA $\mathcal{A}_\mathcal{T}$ accepting exactly the traces of the system. In each round, we compute under- and overapproximations for each $Y_j$ in a mapping~$\flat$. We initialize $\flat$ by mapping $\mathfrak{S}$ to $(\mathcal{A}_\mathcal{T}, \mathcal{A}_\mathcal{T})$ (i.e., the value assigned to the system variable is precisely~$\mathcal{A}_\mathcal{T}$ for both under- and overapproximation), and $\mathfrak{A}$ to $(\mathcal{A}_\top, \mathcal{A}_\top)$, where $\mathcal{A}_\top$ is an automaton over $\Sigma^1$ accepting all traces. We then traverse the second-order quantifiers outside-in (from $Y_1$ to $Y_k$) and for each $Y_j$ compute a pair $(\mathcal{B}_j^l, \mathcal{B}_j^u)$ of automata over $\Sigma^{l_j + 1}$ that under- and overapproximate the actual (unique) model of $Y_j$. We compute these approximations using functions \lstinline[style=default, language=custom-lang]|underApprox| and \lstinline[style=default, language=custom-lang]|overApprox|, which can be instantiated with any procedure that computes sound lower and upper bounds (see \Cref{sec:computing:approximations}). During verification, we further maintain a precision bound~$N$ (initially set to $0$) that tracks the current precision of the second-order approximations. When $\flat$ contains an under- and overapproximation for each second-order variable, we traverse the first-order variables in an inside-out direction (from~$\pi_{l_{k+1}}$ to~$\pi_1$) and, following the construction outlined in \Cref{sec:first:order:checking}, construct automata $\mathcal{A}_{l_{k+1}}, \dots, \mathcal{A}_1$.
In contrast to the simplified setting in \Cref{sec:first:order:checking} (where we assume a fixed automaton $\mathcal{B}_{Y_j}$ providing a model for each $Y_j$), the mapping $\flat$ contains only approximations of the concrete solution. We choose which approximation to use according to the corresponding set quantification: In case we construct $\mathcal{A}_i$ and $\mathbb{Q}_i = \exists$, we use the \emph{underapproximation} (thus making sure that any witness trace we pick is indeed contained in the actual model of the second-order variable); and if $\mathbb{Q}_i = \forall$, we use the \emph{overapproximation} (making sure that we consider at least those traces that are in the actual solution). If $\mathcal{L}(\mathcal{A}_1)$ is non-empty, i.e., $\mathcal{A}_1$ accepts the empty trace assignment, the formula holds (assuming the approximations returned by \lstinline[style=default, language=custom-lang]|underApprox| and \lstinline[style=default, language=custom-lang]|overApprox| are sound). If not, we increase the precision bound $N$ and repeat. In \Cref{alg:mainVerification}, we only check for the satisfaction of a formula (to keep the notation succinct). Using the second-order approximations in $\flat$, we can also check the negation of a formula (by considering the negated body and dualizing all trace quantifiers). Our tool (\Cref{sec:implementation}) makes use of this and thus simultaneously tries to show satisfaction and violation of a formula. \subsection{Computing Under- and Overapproximations}\label{sec:computing:approximations} In this section, we provide concrete instantiations for \lstinline[style=default, language=custom-lang]|underApprox| and \lstinline[style=default, language=custom-lang]|overApprox|.
\subsubsection*{Computing Underapproximations.} \label{sec:computing:under:approximations} As we consider the fixpoint fragment, each formula $\varphi_j^\mathit{con}$ (defining $Y_j$) is a conjunction of formulas of the form in \Cref{eq:fixpoint-condition}, thus defining $Y_j$ via a least fixpoint computation. For simplicity, we assume that $Y_j$ is defined by a single conjunct of the form given by~\Cref{eq:fixpoint-condition} (our construction generalizes easily to a conjunction of such formulas). Assuming fixed models for $\mathfrak{S}$, $\mathfrak{A}$, and $Y_1, \dots, Y_{j-1}$, the fixpoint operation defining $Y_j$ is monotone, i.e., the larger the current model for $Y_j$ is, the more traces we need to add according to \Cref{eq:fixpoint-condition}. Monotonicity allows us to apply the Knaster–Tarski theorem \cite{tarski1955lattice} and compute underapproximations of the fixpoint by iteration. In our construction of an approximation for $Y_j$, we are given a mapping $\flat$ that fixes a pair of automata for $\mathfrak{S}$, $\mathfrak{A}$, and $Y_1, \dots, Y_{j-1}$ (due to the outside-in traversal in \Cref{alg:mainVerification}). As we are computing an underapproximation, we use the underapproximation for each of the second-order variables in $\flat$. So $\flat(\mathfrak{S})$ and $\flat(\mathfrak{A})$ are automata over $\Sigma^1$, and for each $j' \in [j-1]$, $\flat(Y_{j'})$ is an automaton over $\Sigma^{l_{j'} + 1}$. Given this fixed mapping $\flat$, we iteratively construct automata $\hat{\mathcal{C}}_0, \hat{\mathcal{C}}_1, \dots$ over $\Sigma^{l_j + 1}$ that capture (increasingly precise) underapproximations of the solution for $Y_j$. We set~$\hat{\mathcal{C}}_0$ to be the automaton with the empty language.
We then recursively define $\hat{\mathcal{C}}_{N+1}$ based on $\hat{\mathcal{C}}_{N}$ as follows: For each second-order variable $X_i$, $i \in [n]$, used in \Cref{eq:fixpoint-condition}, we can assume a concrete assignment in the form of an automaton $\mathcal{D}_i$ over $\Sigma^{T_i + 1}$ for some $T_i \leq l_j$: In case $X_i \neq Y_j$ (so $X_i \in \{\mathfrak{S}, \mathfrak{A}, Y_1, \dots, Y_{j-1}\}$), we set $\mathcal{D}_i := \flat(X_i)$. In case $X_i = Y_j$, we set $\mathcal{D}_i := \hat{\mathcal{C}}_N$, i.e., we use the current approximation of $Y_j$ in iteration $N$. After we have set $\mathcal{D}_1, \dots, \mathcal{D}_n$, we compute an automaton $\dot{\mathcal{C}}$ over $\Sigma^{l_j+1}$ that accepts $\mathit{zip}(t_1, \dots, t_{l_j}, t)$ iff there exist traces $\dot{t}_1, \dots, \dot{t}_n$ such that (1) $\mathit{zip}(t_1, \dots, t_{T_i}, \dot{t}_i) \in \mathcal{L}(\mathcal{D}_i)$ for all $i \in [n]$, (2) $[\pi_1 \mapsto t_1, \dots, \pi_{l_j} \mapsto t_{l_j}, \dot\pi_1 \mapsto \dot{t}_{1},\dots, \dot\pi_n \mapsto \dot{t}_n] \vDash \psi_\mathit{step}$, and (3) trace $t$ equals $\dot{t}_M$ (of~\Cref{eq:fixpoint-condition}). The intuition is that $\dot{\mathcal{C}}$ captures all traces that should be added to $Y_j$: Given $t_1, \dots, t_{l_j}$, we check if there are traces $\dot{t}_1, \dots, \dot{t}_n$ for trace variables $\dot{\pi}_1, \dots, \dot{\pi}_n$ in \Cref{eq:fixpoint-condition} where (1) each $\dot{t}_i$ is in the assignment for $X_i$, which is captured by the automaton $\mathcal{D}_i$ over $\Sigma^{T_i + 1}$, and (2) the traces $\dot{t}_1, \dots, \dot{t}_n$ satisfy $\psi_\mathit{step}$. If this is the case, we want to add $\dot{t}_M$ (as stated in \Cref{eq:fixpoint-condition}). We then define $\hat{\mathcal{C}}_{N+1}$ as the union of $\hat{\mathcal{C}}_{N}$ and $\dot{\mathcal{C}}$, i.e., we
extend the previous model with all (potentially new) traces that need to be added. \subsubsection*{Computing Overapproximations.}\label{sec:computing:over:approximations} As we noted above, conditions of the form in \Cref{eq:fixpoint-condition} always define fixpoint constraints. To compute upper bounds on such fixpoint constructions, we make use of Park's theorem \cite{Winskel93}, stating that if we find some set (or automaton) $\mathcal{B}$ that is inductive (i.e., when computing all traces that we would need to add assuming the current model of $Y_j$ is $\mathcal{B}$, we end up with traces that are already in $\mathcal{B}$), then $\mathcal{B}$ overapproximates the unique solution (i.e., the least fixpoint) of $Y_j$. To derive such an inductive invariant, we employ techniques developed in the context of regular model checking \cite{BouajjaniJNT00} (see \Cref{sec:relatedWork}). Concretely, we employ the approach from \cite{ChenHLR17} that uses automata learning \cite{DBLP:journals/iandc/Angluin87} to find suitable invariants. While the approach from \cite{ChenHLR17} is limited to finite words, we extend it to an $\omega$-setting by interpreting an automaton accepting finite words as one that accepts an $\omega$-word $u$ iff every prefix of $u$ is accepted.\footnote{This effectively poses the assumption that the step formula specifies a safety property, which seems to be the case for almost all examples. As an example, common knowledge yields a safety property: in each step, we add all traces for which there exists some trace that agrees on all propositions observed by the respective agent.} As soon as the learner provides a candidate for an equivalence check, we check that it is inductive and, if not, provide some finite counterexample (see \cite{ChenHLR17} for details). If the automaton is inductive, we return it as a potential overapproximation.
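Both approximation schemes can be sketched with plain finite sets in place of automata (an illustrative sketch of ours; \texttt{step} stands for the monotone operator induced by \Cref{eq:fixpoint-condition}):

```python
# Underapproximation: Kleene iteration C_0 = {}, C_{N+1} = C_N ∪ step(C_N).
def under_approx(step, rounds):
    c = frozenset()
    for _ in range(rounds):
        c = c | step(c)
    return c

# Overapproximation: by Park's theorem, if step(candidate) ⊆ candidate,
# the candidate contains the least fixpoint; otherwise return a
# counterexample that can be handed back to the learner.
def check_inductive(candidate, step):
    extra = step(candidate) - candidate
    return (True, None) if not extra else (False, next(iter(extra)))

# Toy monotone operator on {0..5}: always add 0, and close under successor.
step = lambda s: frozenset({0} | {x + 1 for x in s if x < 5})
```

With this operator, six rounds of iteration reach the least fixpoint $\{0,\dots,5\}$, which `check_inductive` confirms is inductive, while the too-small candidate $\{0,1\}$ is rejected with counterexample $2$.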
Should this approximation not be precise enough, the first-order model checking (\Cref{sec:first:order:checking}) returns some concrete counterexample, i.e., some trace contained in the invariant but violating the property, which we use to provide more counterexamples to the learner. \section{Implementation and Experiments}\label{sec:implementation} We have implemented our model-checking algorithm in a prototype tool we call \texttt{HySO} (\textbf{Hy}perproperties with \textbf{S}econd \textbf{O}rder).\footnote{Our tool is publicly available at \url{https://zenodo.org/record/7878311}.} Our tool uses \texttt{spot} \cite{DBLP:conf/cav/Duret-LutzRCRAS22} for basic automata operations (such as LTL-to-NBA translations and complementations). To compute under- and overapproximations, we use the techniques described in \Cref{sec:computing:approximations}. We evaluate the algorithm on the following benchmarks. \subsubsection*{Muddy Children.} The muddy children puzzle \cite{DBLP:books/mit/FHMV1995} is one of the classic examples in the literature on common knowledge. The puzzle consists of $n$ children standing such that each child can see all other children's foreheads. Of the $n$ children, an unknown number $k \geq 1$ have a muddy forehead, and in incremental rounds, the children should step forward once they know whether their forehead is muddy. Consider the scenario of $n=2$ and $k = 1$, so child $a$ sees that child $b$ has a muddy forehead and child $b$ sees that $a$ is clean. In this case, $b$ immediately steps forward, as it knows that its forehead is muddy since $k\geq 1$. In the next step, $a$ knows that its forehead is clean since $b$ stepped forward in round 1. In general, one can prove that the muddy children step forward in round $k$, at which point it becomes common knowledge who is muddy. For each $n$ we construct a transition system $\mathcal{T}_n$ that encodes the muddy children scenario with $n$ children.
For every $m$ we design a \fphyperltl{} formula $\varphi_m$ that adds to the common knowledge set $X$ all traces that appear indistinguishable in the first $m$ steps for some child. We then specify that all traces in $X$ should agree on all inputs, asserting that all inputs are common knowledge.\footnote{This property is not expressible in non-hyper logics such as LTL$_{\mathsf{K}, \mathsf{C}}$, where we can only check \emph{trace properties} on the common knowledge set $X$. In contrast, \fphyperltl{} allows us to check \emph{hyperproperties} on $X$. That way, we can express that some value is common knowledge (i.e., equal across all traces in the set) and not only that a property is common knowledge (i.e., holds on all traces in the set).} We used \texttt{HySO} to \emph{fully automatically} check $\mathcal{T}_n$ against $\varphi_m$ for varying values of~$n$ and $m$, i.e., we checked if, after the first $m$ steps, the inputs of all children are common knowledge. As expected, the above property holds only if $m \geq n$ (in the worst case, where all children are dirty $(k = n)$, the inputs of all children only become common knowledge after $n$ steps). We depict the results in \Cref{tab:muddy}. 
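As a sanity check of this analysis, the epistemic update behind the puzzle can be simulated directly (our own illustrative sketch, independent of the \fphyperltl{} encoding): worlds are mud assignments consistent with the announcement $k \geq 1$, and the public observation "nobody stepped forward" eliminates worlds in each round.

```python
# Epistemic-model simulation of the muddy children puzzle (illustrative).
from itertools import product

def knows_own_state(world, i, worlds):
    # child i knows its own state if all worlds matching what it sees agree on i
    vals = {w[i] for w in worlds
            if all(w[j] == world[j] for j in range(len(world)) if j != i)}
    return len(vals) == 1

def rounds_until_step(actual):
    """Return (round, children) for the first round in which someone steps."""
    n = len(actual)
    worlds = {w for w in product((0, 1), repeat=n) if any(w)}  # announced k >= 1
    rnd = 0
    while True:
        rnd += 1
        stepped = {i for i in range(n) if knows_own_state(actual, i, worlds)}
        if stepped:
            return rnd, stepped
        # public fact "nobody stepped": drop worlds where someone would know
        worlds = {w for w in worlds
                  if not any(knows_own_state(w, i, worlds) for i in range(n))}
```

For example, `rounds_until_step((1, 1, 0))` reports that the two muddy children step forward in round $2$, matching the analysis above.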
\begin{table}[!t] \begin{subtable}[b]{0.45\textwidth} \begin{center} \def\arraystretch{1.3} \setlength\tabcolsep{2mm} \begin{tabular}{cccccc} & & \multicolumn{4}{c}{\textbf{m}} \\ \cline{3-6} & & \multicolumn{1}{|c|}{1} & \multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{3} & \multicolumn{1}{|c|}{4} \\ \cline{2-6} \multirow{6}{*}{\textbf{n}} & \multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{\thead{\ding{55}{}\\0.64}} & \multicolumn{1}{|c|}{\thead{\ding{51}{}\\ 0.59}} &\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\ \cline{2-6} & \multicolumn{1}{|c|}{3} & \multicolumn{1}{|c|}{\thead{\ding{55}{}\\ 0.79}} & \multicolumn{1}{|c|}{\thead{\ding{55}{}\\ 0.75}} &\multicolumn{1}{|c|}{\thead{\ding{51}{}\\0.54}} & \multicolumn{1}{|c|}{}\\ \cline{2-6} & \multicolumn{1}{|c|}{4} & \multicolumn{1}{|c|}{\thead{\ding{55}{}\\2.72}} & \multicolumn{1}{|c|}{\thead{\ding{55}{}\\ 2.21}} &\multicolumn{1}{|c|}{\thead{\ding{55}{}\\ 1.67}} & \multicolumn{1}{|c|}{\thead{\ding{51}{}\\1.19}}\\ \cline{2-6} \end{tabular} \end{center} \subcaption{}\label{tab:muddy} \end{subtable} \hfil \begin{subtable}[b]{0.55\textwidth} \begin{center} \def\arraystretch{1.5} \setlength\tabcolsep{2mm} \begin{tabular}{llll} \toprule \textbf{Instance} & \textbf{Method} & \textbf{Res} & $\boldsymbol{t}$ \\ \midrule $\mathcal{T}_\mathit{syn}$, $\varphi_\mathit{OD}$ & - & \ding{51}{} & 0.26\\ $\mathcal{T}_\mathit{asyn}$, $\varphi_\mathit{OD}$ & - & \ding{55}{} & 0.31\\ $\mathcal{T}_\mathit{syn}$, $\varphi^\mathit{asyn}_\mathit{OD}$ & Iter (0) & \ding{51}{} & 0.50\\ $\mathcal{T}_\mathit{syn}$, $\varphi^\mathit{asyn}_\mathit{OD}$ & Iter (1) & \ding{51}{} & 0.78\\ \textsc{Q1}, $\varphi_\mathit{OD}$ & - & \ding{55}{} & 0.34\\ \textsc{Q1}, $\varphi^\mathit{asyn}_\mathit{OD}$ & Iter (1) & \ding{51}{} & 0.86\\ \bottomrule \end{tabular} \end{center} \subcaption{ }\label{tab:more} \end{subtable} \caption{ In \Cref{tab:muddy}, we check common knowledge in the muddy children puzzle for $n$ children and $m$ rounds.
We give the result (\ding{51}{} if common knowledge holds and \ding{55}{} if it does not), and the running time. In \Cref{tab:more}, we check synchronous and asynchronous versions of observational determinism. We depict the number of iterations needed and the running time. Times are given in seconds. } \end{table} \subsubsection*{Asynchronous Hyperproperties.} As we have shown in \Cref{sec:asynchronous:hyperproperties}, we can encode arbitrary AHLTL properties into \fphyperltl{}. We verified synchronous and asynchronous versions of observational determinism (cf.~\Cref{sec:asynchronous:hyperproperties}) on programs taken from \cite{BaumeisterCBFS21,BeutnerF21,BeutnerF23LMCS}. We depict the verification results in \Cref{tab:more}. Recall that \fphyperltl{} properties without any second-order variables correspond to HyperQPTL formulas. \texttt{HySO} can check such properties precisely, i.e., it constitutes a sound-and-complete model checker for HyperQPTL properties with an arbitrary quantifier prefix. The synchronous version of observational determinism is a HyperLTL property and thus needs no second-order approximation (we set the method column to ``-'' in these cases). \subsubsection*{Common Knowledge in Multi-agent Systems.} We used \texttt{HySO} for an automatic analysis of the system in \Cref{fig:common:knowledge}. Here, we verify that on the initial trace $\{a\}^n\{d\}^\omega$ it is CK that $a$ holds in the first step. We use a formula similar to the one in \Cref{sec:running:example}, with the change that we are interested in whether $a$ is CK (whereas we used $\LTLnext a$ in \Cref{sec:running:example}). As expected, \texttt{HySO} requires $2n-1$ iterations to converge. We depict the results in \Cref{tab:running}.
\begin{table}[!t] \begin{subtable}[b]{0.4\textwidth} \begin{center} \def\arraystretch{1.5} \setlength\tabcolsep{1.7mm} \begin{tabular}{llll} \toprule $\boldsymbol{n}$ & \textbf{Method} & \textbf{Res} & $\boldsymbol{t}$ \\ \midrule 1 & Iter (1) & \ding{51}{} & 0.51\\ 2 & Iter (3) & \ding{51}{}& 0.83\\ 3 & Iter (5) & \ding{51}{}& 1.20\\ 10 & Iter (19) & \ding{51}{}& 3.81\\ 100 & Iter (199) & \ding{51}& 102.8\\ \bottomrule \end{tabular} \end{center} \subcaption{ }\label{tab:running} \end{subtable} \begin{subtable}[b]{0.6\textwidth} \begin{center} \def\arraystretch{1.5} \setlength\tabcolsep{1.7mm} \begin{tabular}{llll} \toprule \textbf{Instance} & \textbf{Method} & \textbf{Res} & $\boldsymbol{t}$ \\ \midrule \textsc{SwapA} & Learn & \ding{51}{} & 1.07\\ \textsc{SwapATwice} & Learn & \ding{51}{} & 2.13\\ \textsc{SwapA}$_5$ & Iter (5) & \ding{51}{} & 1.15\\ \textsc{SwapA}$_{15}$ & Iter (15) & \ding{51}{} & 3.04\\ \textsc{SwapAViolation}$_5$ & Iter (5) & \ding{55}{} & 2.35\\ \textsc{SwapAViolation}$_{15}$ & Iter (15) & \ding{55}{} & 4.21\\ \bottomrule \end{tabular} \end{center} \subcaption{}\label{tab:mar} \end{subtable} \caption{ In \Cref{tab:running}, we check common knowledge in the example from \Cref{fig:common:knowledge} when starting with $a^nd^\omega$ for varying values of $n$. We depict the number of refinement iterations, the result, and the running time. In \Cref{tab:mar}, we verify various properties on Mazurkiewicz traces. We depict whether the property could be verified or refuted by iteration or automata learning, the result, and the time. Times are given in seconds.} \end{table} \subsubsection*{Mazurkiewicz Traces.} Mazurkiewicz traces are an important concept in the theory of distributed computing \cite{DR1995}. Let $I \subseteq \Sigma \times \Sigma$ be an independence relation that determines when two consecutive letters can be switched (think of two actions in disjoint processes in a distributed system).
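For intuition, the equivalence class induced by $I$ can be enumerated on finite words by closing under swaps of adjacent independent letters (an illustrative sketch of ours):

```python
# Enumerate the (finite-word) Mazurkiewicz trace of a word: the closure
# under swapping adjacent letters related by the independence relation.
def mazurkiewicz_class(word, independent):
    seen, todo = {word}, [word]
    while todo:
        w = todo.pop()
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in independent:
                swapped = w[:i] + (w[i + 1], w[i]) + w[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    todo.append(swapped)
    return seen

I = {("a", "b"), ("b", "a")}            # a and b are independent
cls = mazurkiewicz_class(("a", "b", "c"), I)
# cls contains ("a","b","c") and ("b","a","c"); a property such as
# "a occurs at most once" can then be checked on every member.
```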
Any $t \in \Sigma^\omega$ then defines the set of all traces that are equivalent to $t$ by flipping consecutive independent actions an arbitrary number of times (the equivalence class of all these traces is called the Mazurkiewicz trace of $t$). See \cite{DR1995} for details. The verification problem for Mazurkiewicz traces now asks if, given some $t \in \Sigma^\omega$, all traces in the Mazurkiewicz trace of $t$ satisfy some property $\psi$. Using \fphyperltl{} we can directly reason about the Mazurkiewicz trace of any given trace, by requiring that all traces that are equal up to one swap of independent letters are also in a given set (which is easily expressed in \fphyperltl{}). Using \texttt{HySO}, we verify a selection of such trace properties that often require non-trivial reasoning by coming up with a suitable invariant. We depict the results in \Cref{tab:mar}. In our preliminary experiments, we model a situation where we start with $\{a\}^1\{\}^\omega$ and can swap the letters $\{a\}$ and $\{\}$. We then, e.g., ask if $a$ holds at most once on every trace in the resulting Mazurkiewicz trace, which requires inductive invariants and cannot be established by iteration. \section{Related Work}\label{sec:relatedWork} In recent years, many logics for the formal specification of hyperproperties have been developed, extending temporal logics with explicit path quantification (examples include HyperLTL, HyperCTL$^*$ \cite{ClarksonFKMRS14}, HyperQPTL \cite{DBLP:phd/dnb/Rabe16,BeutnerF23LPAR}, HyperPDL \cite{GutsfeldMO20}, and HyperATL$^*$ \cite{BeutnerF21,BeutnerF23LMCS}); extending first- and second-order logics with an equal-level predicate \cite{CoenenFST19,Finkbeiner017}; or extending ($\omega$-)regular hyperproperties~\cite{DBLP:conf/vmcai/GoudsmidGS21,DBLP:conf/lata/BonakdarpourS21} to context-free hyperproperties~\cite{DBLP:journals/corr/abs-2209-10306}.
\sohyperltl{} is the first temporal~logic that reasons about second-order hyperproperties, which allows us to capture many existing (epistemic, asynchronous, etc.) hyperlogics while at the same time taking advantage of model-checking solutions that have been proven successful in first-order~settings. \paragraph{Asynchronous Hyperproperties.} For asynchronous hyperproperties, Gutsfeld et al.~\cite{GutsfeldMO21} present an asynchronous extension of the polyadic $\mu$-calculus. Bozzelli et al.~\cite{BozzelliPS21} extend HyperLTL with temporal operators that are only evaluated if the truth value of some temporal formula changes. Baumeister et al.~present AHLTL~\cite{BaumeisterCBFS21}, which extends HyperLTL with explicit quantification over trajectories and can be directly encoded within \fphyperltl{}. \paragraph{Regular Model Checking.} Regular model checking \cite{BouajjaniJNT00} is a general verification method for (possibly infinite-state) systems, in which each state of the system is interpreted as a finite word. The transitions of the system are given as a finite-state (regular) transducer, and the model-checking problem asks if, from some initial set of states (given as a regular language), some bad state is eventually reachable. Many methods for automated regular model checking have been developed \cite{BoigelotLW03,DamsLS01,BoigelotLW04,ChenHLR17}. \sohyperltl{} can be seen as a logical foundation for $\omega$-regular model checking: Assume the set of initial states is given as a QPTL formula $\varphi_\mathit{init}$, the set of bad states is given as a QPTL formula $\varphi_\mathit{bad}$, and the transition relation is given as a QPTL formula $\varphi_\mathit{step}$ over trace variables $\pi$ and $\pi'$.
The set of bad states is reachable from a trace (state) in $\varphi_\mathit{init}$ iff the following \fphyperltl{} formula holds on the system that generates all traces: \begin{align*} &\big(X, \curlyvee, \forall \pi \in \mathfrak{S}\mathpunct{.} \varphi_\mathit{init}(\pi) \rightarrow \pi \triangleright X \land\\ &\quad\quad\quad\forall \pi \in X\mathpunct{.} \forall \pi' \in \mathfrak{S}\mathpunct{.} \varphi_\mathit{step}(\pi, \pi') \rightarrow \pi' \triangleright X\big)\mathpunct{.} \forall \pi \in X\mathpunct{.} \neg \varphi_\mathit{bad}(\pi) \end{align*} Conversely, \fphyperltl{} can express more complex properties, beyond the reachability checks possible in the framework of ($\omega$-)regular model checking. \paragraph{Model Checking Knowledge.} Model checking of knowledge properties in multi-agent systems was developed in the tools \texttt{MCK}~\cite{DBLP:conf/cav/GammieM04} and \texttt{MCMAS}~\cite{DBLP:journals/sttt/LomuscioQR17}, which can express exactly LTL$_{\mathsf{K}}$. Bozzelli et al.~\cite{DBLP:conf/fossacs/BozzelliMP15} have shown that HyperCTL$^*$ and LTL$_\mathsf{K}$ have incomparable expressiveness, and present HyperCTL$^*_{lp}$ -- an extension of HyperCTL$^*$ that can reason about the past -- to unify HyperCTL$^*$ and LTL$_\mathsf{K}$. While HyperCTL$^*_{lp}$ can express the knowledge operator, it cannot capture common knowledge. LTL$_{\mathsf{K},\mathsf{C}}$~\cite{DBLP:conf/spin/HoekW02} captures both knowledge and common knowledge, but the suggested model-checking algorithm only handles a decidable fragment that is reducible to LTL model checking. \section{Conclusion} Hyperproperties play an increasingly important role in many areas of computer science. There is a strong need for specification languages and verification methods that reason about hyperproperties in a uniform and general manner, similar to what is standard for more traditional notions of safety and reliability.
In this paper, we have ventured forward from the first-order reasoning of logics like HyperLTL into the realm of second-order hyperproperties, i.e., properties that not only compare individual traces but reason comprehensively about \emph{sets} of such traces. With \sohyperltl{}, we have introduced a natural specification language and a general model-checking approach for second-order hyperproperties. \sohyperltl{} provides a general framework for a wide range of relevant hyperproperties, including common knowledge and asynchronous hyperproperties, which could previously only be studied with specialized logics and algorithms. \sohyperltl{} also provides a starting point for future work on second-order hyperproperties in areas such as cyber-physical~\cite{10.1145/3127041.3127058} and probabilistic systems~\cite{DimitrovaFT20}. \subsubsection*{Acknowledgements.} We thank Jana Hofmann for the fruitful discussions. This work was supported by the European Research Council (ERC) Grant HYPER (No. 101055412), by DFG grant 389792660 as part of TRR~248, and by the German Israeli Foundation (GIF) Grant No. I-1513-407.2019. \appendix \section{Additional Material for \Cref{sec:second:order:hyperltl}} \subsection{Additional Material for \Cref{sec:sohyperltl}}\label{app:qptl} \hyperltlIntoSoHyper* \begin{proof} HyperQPTL extends HyperLTL with quantification over atomic propositions. However, HyperQPTL can also be expressed using quantification over arbitrary traces that are not necessarily system traces, capturing exactly the same semantics. Then, we can translate every HyperLTL trace quantification $\mathbb{Q} \pi. \varphi$ to $\mathbb{Q} \pi \in \mathfrak{S}. \varphi$, and every HyperQPTL trace quantification $\mathbb{Q} \tau. \varphi$ to $\mathbb{Q} \tau \in \mathfrak{A}. \varphi$, to obtain a \sohyperltl{} formula.
\qed \end{proof} \subsection{Additional Material for \Cref{sec:fphyperltl}}\label{app:fptoso} \fpToSo* \begin{proof} We translate the \fphyperltl{} formula $\varphi$ into a \sohyperltl{} formula $\llbracket \varphi \rrbracket$: \begin{align*} \llbracket \psi \rrbracket &:= \psi \\ \llbracket \mathbb{Q} \pi \in X\mathpunct{.} \varphi \rrbracket &:= \mathbb{Q} \pi \in X\mathpunct{.} \llbracket \varphi \rrbracket \\ \llbracket \exists (X, \curlyvee, \varphi_1)\mathpunct{.} \varphi_2 \rrbracket &:= \exists X\mathpunct{.} \llbracket \varphi_1 \rrbracket \land \big(\forall Y\mathpunct{.} Y \subsetneq X \rightarrow \neg \llbracket\varphi_1[Y / X] \rrbracket\big) \land \llbracket \varphi_2 \rrbracket\\ \llbracket \forall (X, \curlyvee, \varphi_1)\mathpunct{.} \varphi_2 \rrbracket &:= \forall X\mathpunct{.} \Big( \llbracket \varphi_1 \rrbracket \land \big(\forall Y\mathpunct{.} Y \subsetneq X \rightarrow \neg \llbracket\varphi_1[Y / X] \rrbracket\big)\Big) \rightarrow \llbracket \varphi_2 \rrbracket\\ \llbracket \exists (X, \curlywedge, \varphi_1)\mathpunct{.} \varphi_2 \rrbracket &:= \exists X\mathpunct{.} \llbracket \varphi_1 \rrbracket \land \big(\forall Y\mathpunct{.} Y \supsetneq X\rightarrow \neg \llbracket\varphi_1[Y / X] \rrbracket\big) \land \llbracket \varphi_2 \rrbracket\\ \llbracket \forall (X, \curlywedge, \varphi_1)\mathpunct{.} \varphi_2 \rrbracket &:= \forall X\mathpunct{.} \Big(\llbracket \varphi_1 \rrbracket \land \big(\forall Y\mathpunct{.} Y \supsetneq X \rightarrow \neg \llbracket\varphi_1[Y / X] \rrbracket\big) \Big) \rightarrow \llbracket \varphi_2 \rrbracket \end{align*} Path formulas and first-order quantification can be translated verbatim. To translate (fixpoint-based) second-order quantification, we use additional second-order quantification to express that the set is a least (or greatest) set satisfying the defining formula. We write $Y \subsetneq X$ as a shorthand for \begin{align*} \big(\forall \pi \in Y.
\pi \triangleright X \big) \land \big(\exists \pi \in X\mathpunct{.} \forall \pi' \in Y\mathpunct{.} \LTLeventually \neg (\pi =_\text{AP} \pi' ) \big) \end{align*} $\varphi_1[Y / X]$ denotes the formula where all free occurrences of $X$ are replaced by $Y$. Note that the above formula is not a \sohyperltl{} formula, as it is not in prenex normal form. However, no (first- or second-order) quantification occurs under temporal operators, so we can easily bring it into prenex normal form. It is easy to see that any system satisfies $\varphi$ in the \fphyperltl{} semantics iff it satisfies $\llbracket \varphi \rrbracket$ in the \sohyperltl{} semantics. \qed \end{proof} \subsection{Additional Proofs for \Cref{sec:sohyperltl_mc}}\label{app:proofs} \fphyperUndec* \begin{proof} Instead of working with Turing machines, we consider two-counter machines, as they are simpler to handle. A two-counter machine (2CM) maintains two counters $c_1, c_2$ and has a finite set of instructions $l_1, \dots, l_n$. Each instruction $l_i$ is of one of the following forms, where $x \in \set{1,2}$ and $1 \leq j,k \leq n$. \begin{itemize} \item $l_i : \big[c_x \coloneqq c_x+1; \texttt{ goto } \{l_j, l_{k}\}\big]$ \item $l_i : \big[c_x \coloneqq c_x-1; \texttt{ goto } \{l_j, l_{k}\}\big]$ \item $l_i : \big[\texttt{if } c_x = 0 \texttt{ then goto } l_j \texttt{ else goto } l_k\big]$ \end{itemize} Here, \texttt{goto} $\{l_j, l_{k}\}$ indicates that the machine nondeterministically chooses between instructions $l_j$ and $l_k$. A configuration of a 2CM is a tuple $(l_i, v_1, v_2)$, where $l_i$ is the next instruction to be executed, and $v_1, v_2 \in \mathbb{N}$ denote the values of the counters. The initial configuration of a 2CM is $(l_1, 0, 0)$. The transition relation between configurations is defined as expected. Decrementing a counter that is already $0$ leaves the counter unchanged. An infinite computation is \emph{recurring} if it visits instruction $l_1$ infinitely many times.
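For illustration (ours, not part of the proof), the transition relation of such a machine can be sketched as follows; the encoding of instructions as tuples is our own convention.

```python
# One nondeterministic step of a 2CM; returns all successor configurations
# of (label, v1, v2). An instruction is (kind, x, targets), with kind in
# {"inc", "dec", "ifz"} and x in {0, 1} indexing the counter.
def step(machine, config):
    label, v1, v2 = config
    kind, x, targets = machine[label]
    v = [v1, v2]
    if kind == "inc":
        v[x] += 1
    elif kind == "dec":
        v[x] = max(0, v[x] - 1)          # decrementing 0 leaves the counter at 0
    else:                                # "ifz": targets = (if_zero, otherwise)
        return [(targets[0] if v[x] == 0 else targets[1], v1, v2)]
    return [(t, v[0], v[1]) for t in targets]

# l1 increments c1 and nondeterministically jumps to l1 or l2; l2 tests c1.
machine = {"l1": ("inc", 0, ("l1", "l2")),
           "l2": ("ifz", 0, ("l1", "l2"))}
```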
Deciding if a machine has a recurring computation is $\Sigma_1^1$-hard \cite{AlurH94}. For our proof, we encode each configuration of the 2CM as an infinite trace. We use two atomic propositions $c_1, c_2$, each of which we ensure holds exactly once, and use this unique position to represent the respective counter value. The current instruction can be encoded in the first position using APs $l_1, \dots, l_n$. We further use a fresh AP $\dagger$ -- which also holds exactly once -- to mark the step of this configuration. We are interested in a set of traces $X$ that satisfies all of the following requirements: \begin{enumerate} \item The set $X$ contains the initial configuration. Note that in this configuration, $\dagger$ holds in the \emph{first} step. \item For every configuration, there exists a successor configuration. Note that in the successor configuration, $\dagger$ is shifted by one position. \item\label{item:uni} All pairs of traces where $\dagger$ holds at the same position are equal. $X$ thus assigns a unique configuration to each step. \item\label{item:cons} The computation is recurring. As already done in \cite{BeutnerCFHK22}, we can ensure this by adding a fresh counter that counts down to the next visit of instruction $l_1$. \end{enumerate} It is easy to see that we can encode all the above as a \sohyperltl{} (or \fphyperltl{}) formula using only first-order quantification over traces in $X$. Let $\varphi$ be such a formula. It is easy to see that the \sohyperltl{} formula $\exists X. \varphi$ holds on any system iff there exists a set $X$ with the above properties, which is the case iff the 2CM has a recurring computation. This reproves \Cref{prop:hyperltl_sat_in_sohyperltl_mc}. For the present proof, we do, however, want to show $\Sigma^1_1$-hardness for the less powerful \fphyperltl{}. The key point is to ensure that if there exists a set $X$ that satisfies the above requirements, then there also exists a minimal one.
The key observation is that -- by the construction of $X$ -- any set $X$ that satisfies the above \emph{is already minimal}: The AP $\dagger$ ensures that, for each step, there exists exactly one configuration (\Cref{item:uni}), and, when removing any number of traces from $X$, we will inevitably violate \Cref{item:cons}. We thus get that the 2CM has a recurring computation iff there exists a \emph{minimal} $X$ that satisfies $\varphi$ iff the \fphyperltl{} formula $\exists (X, \curlyvee, \varphi) \mathpunct{.} \mathit{true}$ holds on an arbitrary system. \qed \end{proof} \sohyperToSat* \begin{proof} Assume that we have $m$ second-order quantifiers, and for each $k \in [m]$, $\pi_1, \dots, \pi_{l_k}$ are the first-order variables occurring before $X_k$ is quantified: \begin{align*} \varphi = \mathbb{Q} \pi_1, \dots, \mathbb{Q} \pi_{l_1} \exists X_1 \mathbb{Q} \pi_{l_1 + 1}, \dots, \mathbb{Q} \pi_{l_2} \exists X_2 \cdots \exists X_m \mathbb{Q} \pi_{l_m + 1}, \dots, \mathbb{Q} \pi_{l_{m+1}} \psi\ . \end{align*} The second-order variables we use thus are $\mathfrak{V} = \{\mathfrak{S}, \mathfrak{A}, X_1, \dots, X_m\}$. Each $X \in \mathfrak{V}$ can depend on some of the traces quantified before it. In particular, each $X_k$ depends on traces $\pi_1, \dots, \pi_{l_k}$, whereas $\mathfrak{S}$ depends on none of the traces (as it is fixed) and neither does $\mathfrak{A}$. We define a function $c : \mathfrak{V}{} \to \mathbb{N}$ that records how many traces each set can depend on, i.e., $c(\mathfrak{S}) = c(\mathfrak{A}) = 0$, and $c(X_k) = l_k$. We then encode this functional dependence into a model by using traces.
For each $X \in \mathfrak{V}$, we define atomic propositions \begin{align*} \text{AP}_X := \{ [a, j ]_X \mid a\in \text{AP} \land j \in [c(X)] \cup \{\dagger\} \} \end{align*} and then define \begin{align*} \text{AP}' := \text{AP} \uplus \bigcup_{X \in \mathfrak{V}{}} \text{AP}_X \end{align*} We will use the original APs to describe traces. The additional propositions are used to encode functions which map the $c(X)$ traces quantified before $X$ to some set of traces. Given traces $t_1, \dots, t_{c(X)}$, we say a trace $t$ is in the model of $X$ if there exists some trace $\dot{t}$ (in the model of our final formula) such that \begin{align}\label{eq:encoding} \begin{split} \forall z \in \mathbb{N}\mathpunct{.} \Big(\bigwedge_{j \in [c(X)]} \bigwedge_{a \in \text{AP}} \big(a \in t_j(z) \leftrightarrow [a, j]_X \in \dot{t}(z) \big) \Big) \land \\ \forall z \in \mathbb{N}\mathpunct{.} \Big(\bigwedge_{a \in \text{AP}} \big(a \in t(z) \leftrightarrow [a, \dagger]_X \in \dot{t}(z) \big) \Big) \end{split} \end{align} holds. That is, the trace $\dot{t}$ defines the functional mapping from $t_1, \dots, t_{c(X)}$ to $t$. We use this idea of encoding functions to translate $\varphi$ as follows: \begin{align*} \llbracket \psi \rrbracket &:= \psi\\ \llbracket \mathbb{Q} X. \varphi \rrbracket &:= \llbracket \varphi \rrbracket\\ \llbracket \exists \pi_i \in X\mathpunct{.} \varphi \rrbracket &:= \exists \pi_i\mathpunct{.} \langle \pi_i \in X \rangle \land \llbracket \varphi \rrbracket\\ \llbracket \forall \pi_i \in X\mathpunct{.} \varphi \rrbracket &:= \forall \pi_i\mathpunct{.} \langle \pi_i \in X \rangle \rightarrow \llbracket \varphi \rrbracket \end{align*} We leave path formulas unchanged and fully ignore second-order quantification (which is always existential).
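For intuition, the membership condition of \Cref{eq:encoding} can be sketched over finite trace prefixes; the tuple representation of $[a,j]_X$, the string standing in for $\dagger$, and all names below are our own illustrative choices:

```python
# Illustrative sketch (finite prefixes, our own encoding) of Eq. (eq:encoding):
# a trace t_dot over AP' encodes the functional mapping (t_1,...,t_c) -> t.
# Traces are lists of sets of APs; the proposition [a, j]_X is the tuple (a, j, X).

DAG = 'dagger'  # stands for the special index written as a dagger above

def encode(X, args, t):
    """Build the encoding trace t_dot for X from argument traces and t."""
    t_dot = []
    for z in range(len(t)):
        step = {(a, j + 1, X) for j, tj in enumerate(args) for a in tj[z]}
        step |= {(a, DAG, X) for a in t[z]}
        t_dot.append(step)
    return t_dot

def is_member(X, args, t, t_dot, ap):
    """Check Eq. (eq:encoding): does t_dot witness membership of t in X?"""
    for z in range(len(t)):
        for j, tj in enumerate(args):
            for a in ap:
                if (a in tj[z]) != ((a, j + 1, X) in t_dot[z]):
                    return False
        for a in ap:
            if (a in t[z]) != ((a, DAG, X) in t_dot[z]):
                return False
    return True

ap = {'a', 'b'}
t1 = [{'a'}, set()]        # an argument trace quantified before X
t = [{'b'}, {'a', 'b'}]    # candidate member of X
t_dot = encode('X1', [t1], t)
```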
We define $\langle \pi \in X \rangle$ as an abbreviation for \begin{align*} \exists \dot{\pi}\mathpunct{.} \Big(\bigwedge_{j \in [c(X)]} \LTLglobally \bigwedge_{a \in \text{AP}} \big(a_{\pi_j} \leftrightarrow ([a, j]_X)_{\dot{\pi}}\big) \Big) \land \Big(\LTLglobally \bigwedge_{a \in \text{AP}} \big(a_{\pi} \leftrightarrow ([a, \dagger]_X)_{\dot{\pi}}\big) \Big) \end{align*} which encodes \Cref{eq:encoding}. The last thing we need to ensure is that $\mathfrak{S}$ and $\mathfrak{A}$ are encoded correctly. That is, for any trace $t$, we have $t \in \mathit{Traces}(\mathcal{T})$ iff there exists a $\dot{t}$ (in the model) such that for any $z \in \mathbb{N}$ and $a \in \text{AP}$, we have $a \in t(z)$ iff $[a, \dagger]_\mathfrak{S} \in \dot{t}(z)$. Similarly, for any trace $t$ there should exist a trace $\dot{t}$ (in the model) such that for any $z \in \mathbb{N}$ and $a \in \text{AP}$, $a \in t(z)$ iff $[a, \dagger]_\mathfrak{A} \in \dot{t}(z)$. Both requirements can easily be expressed as \hyperqptl{} formulas $\varphi_\mathfrak{S}$ and $\varphi_\mathfrak{A}$ (note that expressing these requirements in HyperLTL is not possible, as we cannot quantify over traces that are not within the current model). It is easy to see that $\mathcal{T} \vDash \varphi$ iff $\llbracket \varphi \rrbracket \land \varphi_\mathfrak{S} \land \varphi_\mathfrak{A}$ is satisfiable. \qed \end{proof} \easyfrags* \begin{proof} $\forall^* X \forall^*\pi.\varphi$, $\exists^* X \exists^*\pi.\varphi$: As universal properties are downwards closed and existential properties are upwards closed, removing second-order quantification does not change the semantics of the formula. $\exists^* X \forall^*\pi.\varphi$: For every second-order variable $X$ we introduce a trace variable $\pi_X$, which is existentially quantified, and for every occurrence of $\pi$ in $\varphi$ such that $\pi\in X$, we replace $\pi$ with $\pi_X$. If $\varphi$ holds for all traces in $X$, it also holds when replacing $X$ with the singleton $\{\pi_X\}$.
The other direction of the implication is trivial, as we have found a set $X = \{ \pi_X \}$ for which $\varphi$ holds. For similar reasons, for $\forall^* X \exists^*\pi.\varphi$ we can remove the second-order quantification and replace every existential trace quantification with a universal one. As a conclusion of all of the above, we have that the model checking of \sohyperltl formulas of the following type is decidable: $\mathbb{Q}_1 X_1 \cdots \mathbb{Q}_k X_k. \mathbb{Q}'_1 \pi_1 \in X_1 \cdots \mathbb{Q}'_k \pi_k \in X_k. \varphi$ where we have $\mathbb{Q}_i, \mathbb{Q}'_i\in \{\exists, \forall \}$ and in $\varphi$ only traces from the same set $X_i$ are compared to each other (that is, $\varphi$ does not relate traces from different sets to each other). Lastly, for $\psi = \exists X.\exists^* \pi\in X.\forall^*\pi'\in X. \varphi$, we use a reduction to the satisfiability problem of HyperQPTL~\cite{DBLP:conf/lics/CoenenFHH19}. Let $\varphi_{\aut{T}}$ be a QPTL formula that models the system. Then, $\aut{T} \vDash \psi$ iff the HyperQPTL formula $\hat{\psi}$ is satisfiable, where $\hat{\psi} = \exists^* \pi. \forall^*\pi'.\forall\tau.\varphi_\aut{T}(\tau)\wedge\psi (\pi,\pi') $. Since $\hat{\psi}$ is a $\exists^*\forall^*$ HyperQPTL formula, its satisfiability problem is decidable~\cite{DBLP:conf/lics/CoenenFHH19}. \qed \end{proof} \section{Appendix for~\Cref{sec:examples}}\label{app:kltl} \subsection{Additional Material for \Cref{sec:ltlK}} \label{app:ltlk} LTL$_{\mathsf{K}, \mathsf{C}}$ is defined by the following grammar: \begin{align*} \psi := a \mid \neg \psi \mid \LTLnext \psi \mid \psi_1 \LTLuntil \psi_2 \mid \mathsf{K}_A \psi \mid \mathsf{E} \psi \mid \mathsf{C} \psi \end{align*} where $A$ is a set of agents. Given two traces $t, t'$, we write $t[0,i] =_{A_i} t'[0, i]$ if $t$ and $t'$ appear indistinguishable for agent $A_i$ in the first $i$ steps.
Given a set of traces $T$ and a trace $t$ we define \begin{align*} t, i &\vDash_T a &\text{iff} \quad &a \in t(i)\\ t, i &\vDash_T \neg \psi &\text{iff} \quad & t, i \not\vDash_T \psi \\ t, i &\vDash_T \psi_1 \land \psi_2 &\text{iff} \quad &t, i \vDash_T \psi_1 \text{ and } t, i \vDash_T \psi_2\\ t, i &\vDash_T \LTLnext \psi &\text{iff} \quad & t, i+1 \vDash_T \psi \\ t, i &\vDash_T \psi_1 \LTLuntil \psi_2 &\text{iff} \quad & \exists j \geq i \mathpunct{.} t, j \vDash_T\psi_2 \text{ and } \forall i \leq k < j \mathpunct{.} t, k \vDash_T \psi_1\\ t, i &\vDash_T \mathsf{K}_{A_i} \psi &\text{iff} \quad & \forall t' \in T\mathpunct{.} t[0, i] =_{A_i} t'[0, i] \to t', i \vDash_T \psi\\ t, i &\vDash_T \mathsf{E} \psi & \text{iff} \quad & t, i \vDash_T \bigwedge_{A_i \in A} \mathsf{K}_{A_i} \psi\\ t, i &\vDash_T \mathsf{C} \psi &\text{iff} \quad & t,i \vDash_T \mathsf{E}^\infty \psi \end{align*} The \emph{everyone knows} operator $\mathsf{E}$ states that every agent knows that $\psi$ holds. The semantics of the common knowledge operator $\mathsf{C}$ is then the infinite chain, or transitive closure, of \emph{everyone knows that everyone knows that ...} $\psi$. \kltl* \begin{proof} Let $\{ A_1, \dots, A_n\}$ be the set of agents. For the $j$th occurrence of a knowledge operator $\mathbb{K}\in\{ \mathsf{K}, \mathsf{C} \}$ we introduce a new trace variable $\tau_j$ and a second-order variable $Y_j$. In addition, we introduce a new atomic proposition $k$. We then replace the $j$th occurrence of $\mathbb{K}$ in $\varphi$ with $k_{\tau_j}$, resulting in the HyperLTL formula $\psi^\tau$. Denote by $\psi_{j}$ the subformula of $\psi^\tau$ that directly follows~$k_{\tau_{j}}$. We define by induction on the nested knowledge operators the corresponding \fphyperltl{} formula $\varphi_j$. For the first (inner-most) operator, we define $\varphi_0$ to be the LTL formula nested under this operator.
Now, assume we have defined $\varphi_{j-1}$, and let $\mathbb{K}\in \{\mathsf{K}, \mathsf{C} \}$ be the $j$th inner-most knowledge operator. Then, $\varphi_{j}$ is defined as follows. \begin{align*} \forall \tau_j\in\mathfrak{A}. \Big(Y_j, \curlyvee, \big(\pi\in Y_j \wedge \forall\pi_1\in Y_j .\forall\pi_2\in \mathfrak{S}. \left(\neg k_{\tau_j}\LTLuntil(k_{\tau_j}\wedge \LTLnext \LTLglobally \neg k_{\tau_j})\right) \rightarrow \\ \left( \mathit{equiv}^j_{\mathbb{K}}(\pi_1, \pi_2) \LTLuntil k_{\tau_j} \rightarrow \pi_2\triangleright Y_j \right)\big)\Big) . \forall \pi_1\in Y_j.\varphi_{j-1} \wedge \LTLglobally \left(k_{\tau_j} \rightarrow \psi_j \right) \end{align*} where \begin{align*} \mathit{equiv}^j_{\mathsf{K}_{A_i}} := \pi_1 \leftrightarrow_{A_i} \pi_{2} \quad\quad\quad \mathit{equiv}^j_{\mathsf{C}}: = \bigvee_{i\in[n]} \pi_1 \leftrightarrow_{A_i} \pi_{2} \end{align*} Each instantiation of the universally quantified variable $\tau_j$ corresponds to one time point at which we want to check knowledge. Therefore, we verify that $k$ appears exactly once on the trace (first line of the formula). Then, we add to the knowledge set all traces that are equivalent (by the knowledge of this agent, or by the common knowledge of all agents) until this time point. The formula outside the minimality condition verifies that for all traces in the set $Y_j$, the subformula $\varphi_{j-1}$ holds, thus enforcing the knowledge requirement on all traces in $Y_j$. In addition, it uses the property $\LTLglobally \left(k_{\tau_j} \rightarrow \psi_j \right)$ to make sure that the temporal (non-knowledge) requirements hold at the same time for all traces in $Y_j$. Finally, we define $\varphi' = \forall\pi.\varphi_n$ where $n$ is the number of knowledge operators in~$\varphi$. Note that in general, the formula above can yield infinitely many sets of traces. In practical examples, e.g.
the examples appearing in this paper, we can write simplified formulas that reason about the specific problem at hand and only require a finite (usually $1$) number of such sets. Also note that as the sets $Y_j$ are unique, we do not need the quantification over least sets. \qed \end{proof} \subsection{Additional Material To \Cref{sec:asynchronous:hyperproperties}}\label{app:asynchronous:hyperproperties} Consider the system in \Cref{fig:syn_program} (taken from \cite{BaumeisterCBFS21}). The synchronous version of observational determinism ($\varphi_\mathit{OD}$) holds on this system: While we branch on the secret input $h$, the value of $o$ is the same across all traces. In contrast, $\varphi_\mathit{OD}$ does not hold on the system in \Cref{fig:asyn_program} as, in the second branch, the update occurs one step later. This, however, is not an accurate interpretation of $\varphi_\mathit{OD}$ (assuming that an attacker only has access to the memory footprint and not the CPU registers or a timing channel), as any two traces are \emph{stutter} equivalent (with respect to $o$). In AHLTL we can express an asynchronous version of OD as $\forall \pi_1. \forall \pi_2. \mathbf{E} \LTLglobally(o_{\pi_1} \leftrightarrow o_{\pi_2})$, stating that any two traces can be aligned such that they (globally) agree on $o$. This formula now holds on both \Cref{fig:syn_program} and \Cref{fig:asyn_program}. \begin{figure} \caption{Example Programs.} \label{fig:syn_program} \label{fig:asyn_program} \end{figure} \section{Additional Material for \Cref{sec:implementation}}\label{app:implementation} \subsection{Muddy Children} We consider the following \fphyperltl{} formula which captures the common knowledge set after $m$ steps. \begin{align*} &\forall \pi \in \mathfrak{S}{}. \\ &\quad\Big(X, \curlyvee,\pi \triangleright X \land \forall \pi_1 \in X. \forall \pi_2 \in \mathfrak{S}{}.
\big(\bigvee_{i \in [n]} \LTLglobally^{\leq m} \pi_1 =_{\text{AP}_i} \pi_2\big) \rightarrow \pi_2 \triangleright X \Big). \\ &\quad\quad\quad\quad\forall \pi_1 \in X. \forall \pi_2 \in X. \bigwedge_{i \in [n]} {i_c}_{\pi_1} \leftrightarrow {i_c}_{\pi_2} \end{align*} where we write $\LTLglobally^{\leq m} \psi$ to assert that $\psi$ holds in the first $m$ steps. Here, $\text{AP}_i$ are all propositions observable by child $i$, i.e., all variables except the one that determines if $i$ is muddy. Note that this formula falls within our fixpoint fragment of \fphyperltl{}. Further note that we express a hyperproperty on the knowledge set, i.e., compare pairs of traces in the knowledge set. This is not possible in logics such as LTL$_{\mathsf{K}, \mathsf{C}}$, in which we can only check if a trace property holds on the knowledge set. \subsection{Asynchronous Hyperproperties} We verify \begin{align}\label{eq:od_synch} \varphi_\mathit{OD} := \forall \pi_1. \forall \pi_2. \LTLglobally (o_{\pi_1} \leftrightarrow o_{\pi_2}) \end{align} and the asynchronous version of it. In AHLTL \cite{BaumeisterCBFS21} we can define this as follows: \begin{align*} \varphi_\mathit{OD} := \forall \pi_1. \forall \pi_2. \mathbf{E} \LTLglobally (o_{\pi_1} \leftrightarrow o_{\pi_2}) \end{align*} In \fphyperltl{} we can express the above AHLTL formula as the following formula: \begin{align}\label{eq:od_asynch} \begin{split} &\forall \pi_1 \in \mathfrak{S}. \forall \pi_2 \in \mathfrak{S}. \\ &\quad\big(X_1, \curlyvee, \pi_1 \triangleright X_1 \land \forall \pi \in X_1. \forall \pi' \in \mathfrak{A}. ((o_{\pi} \leftrightarrow o_{\pi'}) \LTLuntil \LTLglobally (o_{\pi} \leftrightarrow \LTLnext o_{\pi'})) \rightarrow \pi' \triangleright X_1 \big)\mathpunct{.}\\ &\quad\big(X_2, \curlyvee, \pi_2 \triangleright X_2 \land \forall \pi \in X_2. \forall \pi' \in \mathfrak{A}.
((o_{\pi} \leftrightarrow o_{\pi'}) \LTLuntil \LTLglobally (o_{\pi} \leftrightarrow \LTLnext o_{\pi'})) \rightarrow \pi' \triangleright X_2 \big)\mathpunct{.}\\ &\quad\quad\exists \pi_1 \in X_1\mathpunct{.} \exists \pi_2 \in X_2\mathpunct{.} \LTLglobally (o_{\pi_1} \leftrightarrow o_{\pi_2}) \end{split} \end{align} Note that this formula falls within our fixpoint fragment of \fphyperltl{}. In \Cref{tab:more}, we check \Cref{eq:od_synch} and \Cref{eq:od_asynch} on the two example programs from the introduction of \cite{BaumeisterCBFS21} and the asynchronous program \texttt{Q1} from \cite{BeutnerF21,BeutnerF23LMCS}. \subsection{Mazurkiewicz Trace} Using our logic, we can also express many properties that reason about the class of (Mazurkiewicz) traces. The idea of a trace is to abstract away from the concrete order of independent actions (letters). Let $I \subseteq \Sigma \times \Sigma$ be an independence relation on letters. That is, $(a, b) \in I$ iff interchanging the order of $a$ and $b$ has no effect (e.g., local actions of two concurrent processes). We say two traces $t_1, t_2$ are equivalent (written $t_1 \equiv_I t_2$) if we can rewrite $t_1$ into $t_2$ by flipping consecutive letters that are in $I$. For example, if $(a, b) \in I$ then $xaby \equiv_I xbay$ for all $x \in \Sigma^*, y \in \Sigma^\omega$. The Mazurkiewicz trace of a concrete trace $t$ is then defined as $[t]_I := \{t' \mid t \equiv_I t'\}$. Using \fphyperltl{} we can directly reason about the equivalence classes of~$\equiv_I$. Consider the following (quantifier-free) formula $\varphi_I(\pi, \pi')$, stating that $\pi$ and $\pi'$ are identical up to one flip of consecutive independent actions.
\begin{align*} (\pi =_\text{AP} \pi') \LTLweakuntil \Big(\bigvee_{(x, y) \in I} x_\pi \land y_{\pi'} \land \LTLnext (y_\pi \land x_{\pi'}) \land \LTLnext\LTLnext \LTLglobally (\pi =_\text{AP} \pi') \Big) \end{align*} Here we write $x_\pi$ for $x \in \Sigma = 2^\text{AP}$ for the formula $\bigwedge_{a \in x} a_\pi \land \bigwedge_{a \not\in x} \neg a_\pi$. The formula asserts that both traces are equal, until one pair of independent actions is flipped, followed by, again, identical suffixes. Using $\varphi_I$ we can directly reason about Mazurkiewicz traces. Assume we are interested in whether, for every (concrete) trace $t$ that satisfies the LTL property $\phi$, all its equivalent traces satisfy $\psi$. We can express this in \fphyperltl{} as follows: \begin{align}\label{eq:trace} \begin{split} &\forall \pi \in \mathfrak{S}{}. (X, \curlyvee, \pi \triangleright X \land \forall {\pi_1} \in X\mathpunct{.} \forall {\pi_2} \in \mathfrak{A}\mathpunct{.} \varphi_I(\pi_1, \pi_2) \rightarrow \pi_2 \triangleright X) \mathpunct{.} \\ &\quad\quad\forall \pi' \in X\mathpunct{.} \phi(\pi) \rightarrow \psi(\pi') \end{split} \end{align} That is, for all traces $\pi$, we compute the set $X$ which contains all equivalent traces. This set should contain $\pi$, must be closed under $\varphi_I$, and is minimal w.r.t.~those two properties. Note that this formula falls within our fixpoint fragment of \fphyperltl{}. \subsubsection*{Preliminary Experiments.} The above formula is applicable to arbitrary independence relations, so our tool can be used to automatically check arbitrary properties on Mazurkiewicz traces. In our preliminary experiments, we focus on a very simple Mazurkiewicz trace. We model a situation where we start with $\{a\}^1\{\}^\omega$ and can swap letters $\{a\}$ and $\{\}$. We acknowledge that the example is very simple, but nevertheless emphasize the complex trace-based reasoning possible with \texttt{HySO}.
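On finite prefixes of equal length, the one-flip relation that $\varphi_I$ captures can be sketched as follows (a minimal illustration; letters stand in for elements of $\Sigma$, and all names are our own):

```python
# Minimal sketch (finite prefixes, our own encoding) of the relation phi_I
# expresses: the traces are identical, or identical up to exactly one flip
# of consecutive independent letters, with identical suffixes afterwards.

def one_flip_equivalent(t1, t2, independent):
    """t1, t2: equal-length lists of letters; independent: set of letter pairs."""
    if t1 == t2:
        return True  # the weak-until also allows the flip to never happen
    # find the first position where the traces differ
    z = next(i for i in range(len(t1)) if t1[i] != t2[i])
    if z + 1 >= len(t1):
        return False
    flip = (t1[z], t2[z]) in independent or (t2[z], t1[z]) in independent
    return (flip
            and t1[z] == t2[z + 1] and t1[z + 1] == t2[z]
            and t1[z + 2:] == t2[z + 2:])

I = {('a', 'b')}
```

For instance, with $(a,b) \in I$ the prefixes $xaby$ and $xbay$ are related, matching the example above.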
We then ask whether, starting from every trace, $a$ holds at most once on all traces in $X$. If we apply only iteration, \texttt{HySO} will move the unique $a$ one step to the right in each iteration, i.e., after $n$ steps the current under-approximation contains the traces $\{a\}\{\}^\omega$, $\{\}^1\{a\}\{\}^\omega$, $\{\}^2\{a\}\{\}^\omega$, $\dots$, $\{\}^n\{a\}\{\}^\omega$. This fixpoint will never converge, so \texttt{HySO} would iterate forever. Instead, if we also enable learning of overapproximations, \texttt{HySO} automatically learns an invariant that proves the above property (which is instance \textsc{SwapA} in \Cref{tab:mar}). We can relax this requirement to only consider a fixed, finite prefix. \textsc{SwapA}$_n$ states that $a$ \emph{can} hold in any position in the first $n$ steps (by using existential quantification over $X$). As expected, \texttt{HySO} can prove this property by iterating $n$ times. Lastly, the violation \textsc{SwapAViolation}$_n$ states that any trace satisfies $a$ within the first $n$ steps. This obviously does not hold, but \texttt{HySO} requires $n$ iterations to find a counterexample, i.e., a trace where $a$ does not hold within the first $n$ steps. \end{document}
\begin{document} \title{Stationary random graphs with prescribed iid degrees on a spatial Poisson process} \author{Maria Deijfen \thanks{Department of Mathematics, Stockholm University, 106 91 Stockholm. Email: [email protected].}} \date{October 2008} \maketitle \thispagestyle{empty} \begin{abstract} \noindent Let $[\mathcal{P}]$ be the points of a Poisson process on ${\mathbb R}^d$ and $F$ a probability distribution with support on the non-negative integers. Models are formulated for generating translation invariant random graphs with vertex set $[\mathcal{P}]$ and iid vertex degrees with distribution $F$, and the length of the edges is analyzed. The main result is that finite mean for the total edge length per vertex is possible if and only if $F$ has finite moment of order $(d+1)/d$. \noindent \emph{Keywords:} Random graphs, degree distribution, Poisson process, stable matching, stationary model. \noindent AMS 2000 Subject Classification: 05C80, 60G50. \end{abstract} \section{Introduction} Consider a Poisson process $\mathcal{P}$ on $\mathbb{R}^d$ with intensity 1 and write $[\mathcal{P}]$ for the point set of the process. Furthermore, take a probability distribution $F$ with support on $\mathbb{N}$. How should one proceed to obtain a translation invariant random graph with vertex set $[\mathcal{P}]$ and degree distribution $F$? What properties does the resulting configuration have? These are the questions that will be considered, and partly answered, in this paper. The problem of generating random graphs with prescribed degree distribution has been extensively studied over the last few years. It is primarily motivated by the increasing use of random graphs as models for complex networks; see e.g.\ Durrett (2006).
Such networks typically exhibit non-Poissonian distributions for the degrees, predominantly various types of power-laws, and it is hence interesting to develop graph models that can produce this type of degree distribution (indeed, the standard Erd\H{o}s--R\'enyi graph gives Poisson degrees in the limit). One example of a graph model for obtaining a prescribed degree distribution is the so-called configuration model -- see Molloy and Reed (1995,1998) -- where, given $n$ vertices, each vertex is independently assigned a random number of stubs according to the prescribed distribution and the stubs are then paired randomly to create edges. See also Bollob\'{a}s et al.\ (2001), Chung and Lu (2001:1,2) and Bollob\'{a}s et al.\ (2006) for other work in the area. Most existing models for obtaining graphs with given degree distribution do not take spatial aspects into account, that is, there is no metric defined on the vertex set. In Deijfen and Meester (2006), however, a model is introduced where the vertex set is taken to be ${\mathbb Z}$. The vertices are equipped with stubs according to a prescribed distribution and these stubs are then paired according to a certain rule. This model, which turns out to be closely related to random walks, is shown to lead to well-defined configurations, but the edge length has infinite mean. In Deijfen and Jonasson (2006), a different model (on ${\mathbb Z}$) is formulated and shown to give finite mean for the edge lengths. This work is generalized in Jonasson (2007) to more general (deterministic) vertex structures; in particular, it is shown there that there exists a translation invariant graph on ${\mathbb Z}^d$ with finite total edge length per vertex if and only if the degree distribution has finite moment of order $(d+1)/d$.
Each vertex is independently assigned a number of stubs according to the desired degree distribution $F$ and we then ask for a way of connecting the stubs to form edges. The restrictions are that the procedure should be translation invariant and the resulting graph is not allowed to contain self-loops or multiple edges, that is, an edge cannot run from a vertex to itself and each pair of vertices can be connected by at most one edge. A few different ways of doing this will be discussed, and we will show that the condition for the possibility of finite mean for the total edge length per vertex is the same as on ${\mathbb Z}^d$. As described above, the model can (at least for $d=1,2$) be seen as a way of introducing geography into a complex network model. However, it can also be viewed as a generalization of the problem treated in Holroyd et al.\ (2008), where the matching distance in various types of matchings of the points of a Poisson process is analyzed. Indeed, this would correspond to taking $F\equiv 1$ in our setup, that is, to equip each vertex with one single stub. We will make use of results from Holroyd et al.\ on several occasions. To formally describe the quantities we will work with, let $(\mathcal{P},\xi)$ be a marked Poisson process, where the ground process $\mathcal{P}$ is a homogeneous Poisson process on ${\mathbb R}^d$ with rate 1 and the marks are iid with distribution $F$; see Daley and Vere-Jones (2002, Section 6.4). We will often refer to the points in $[\mathcal{P}]$ as vertices, and we think of the marks as representing an assignment of stubs to the vertices. Let $\mathcal{M}$ be a translation invariant scheme for pairing these stubs so that no two stubs at the same vertex are paired and at most one pairing is made between stubs at two given vertices.
Furthermore, let $(\mathcal{P}^*,\xi^*,\mathcal{M}^*)$ be the Palm version of $(\mathcal{P},\xi,\mathcal{M})$ with respect to $\mathcal{P}$ and write $\PP^*$ and ${\mathbb E}^*$ for the associated probability law and expectation operator respectively, that is, $\PP^*$ describes the conditional law of $(\mathcal{P},\xi,\mathcal{M})$ given that there is a point at the origin, with the mark process and the pairing scheme taken as stationary background; see Kallenberg (1997, Chapter 11) for details. Note that, since the Palm version of a homogeneous Poisson process has the same distribution as the process itself with an added point at the origin, we have that $[\mathcal{P}^*]\stackrel{d}{=}[\mathcal{P}]\cup \{0\}$. Let $T$ denote the total length in $\mathcal{M}^*$ of all edges at the origin vertex. We will show the following result for $T$. \begin{theorem}\label{th:main} There exists a translation invariant pairing scheme with ${\mathbb E}^*[T]<\infty$ if and only if $F$ has finite moment of order $(d+1)/d$. \end{theorem} Let $D$ denote the degree of the origin. The only-if direction of the theorem is fairly easy and follows from investigating the radius of the smallest ball around the origin of ${\mathbb R}^d$ that contains $D$ points in $\mathcal{P}^*$. Indeed, since multiple pairing to the same vertex is not allowed, $T$ is bounded from below by the sum of the distances to the vertices in this ball, which, as it turns out, cannot have finite mean if ${\mathbb E}[D^{(d+1)/d}]$ is infinite. As for the if-direction, we will describe a concrete pairing scheme with the property that ${\mathbb E}^*[T]<\infty$. Here, results from Jonasson (2007) and Holroyd et al. (2008) will be useful. Before proceeding, we remark that the distribution of $T$ can be described in an alternative way: For $r\in[0,\infty]$, let $H(r)$ denote the expected number of points in $[\mathcal{P}]\cap[0,1]^d$ whose total edge length does not exceed $r$. 
Since $\mathcal{P}$ has intensity 1 it is not hard to see that $H$ is a distribution function, and, in fact, the distribution of $T$ is given by $H$; see Holroyd et al. (2008, Section 2) for a more careful description of this connection. The rest of the paper is organized so that the only-if direction of Theorem \ref{th:main} is proved in Section 2, a possible pairing scheme is described in Section 3 and a refinement of this scheme that proves the if-part of Theorem \ref{th:main} is formulated in Section 4. Section 5 contains an outline of possible further work. \section{Only-if} Recall that $D$ denotes the degree of the origin vertex in $\mathcal{P}^*$. We will show that ${\mathbb E}^*[T]<\infty$ is impossible if ${\mathbb E}[D^{(d+1)/d}]=\infty$. To this end, let $B(r)$ be the $d$-dimensional ball with radius $r$ centered at the origin and write $V^*(r)$ for the number of $\mathcal{P}^*$-points in $B(r)$, excluding the origin, that is, $$ V^*(r)=\textrm{card}[\mathcal{P}^*\cap B(r)]-1, $$ where $\textrm{card}[\cdot]$ denotes set cardinality. Furthermore, for $n\geq 1$, define $R_n$ to be the radius of the smallest ball centered at the origin that contains at least $n$ points of $\mathcal{P}^*$ not counting the point at the origin, that is, $$ R_n=\inf\{r: V^*(r)\geq n\}. $$ The idea is that, with a probability that can be bounded from below, at least half of the $D$ stubs at the origin will be connected to stubs at vertices whose distance to the origin is at least $3^{-1/d}R_D$. On this event we have that $T\geq D R_D/(2\cdot 3^{1/d})$ and, as we will see, the mean of $R_n$ behaves like $n^{1/d}$ for large $n$. Combining this will give that $T$ must have infinite mean if $D^{1+1/d}$ does. Consider a Poisson process $\mathcal{P}_\lambda$ on ${\mathbb R}^d$ with intensity $\lambda$ and -- in analogy with the above definitions -- let $V(r)=\textrm{card}[\mathcal{P}_\lambda\cap B(r)]$ and $R_n=\inf\{r:V(r)\geq n\}$.
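For $d=2$ and $\lambda=1$ this growth can be sanity-checked numerically: $V(r)$ is Poisson distributed with mean $\pi r^2$, so ${\mathbb E}[R_n]=\int_0^\infty \PP(V(r)<n)\,dr$ can be evaluated by simple quadrature (an illustrative check added by us, not part of the argument):

```python
# Numeric illustration (d = 2, intensity 1): E[R_n] = int_0^inf P(V(r) < n) dr,
# where V(r) ~ Poisson(pi r^2). A crude left Riemann sum suffices here.
from math import exp, pi

def mean_Rn(n, dr=1e-3, rmax=20.0):
    """Approximate E[R_n] for a rate-1 Poisson process in the plane."""
    total, r = 0.0, 0.0
    while r < rmax:
        m = pi * r * r
        # P(V(r) < n) = sum_{k=0}^{n-1} m^k / k! * e^{-m}
        p, term = 0.0, exp(-m)
        for k in range(n):
            p += term
            term *= m / (k + 1)
        total += p * dr
        r += dr
    return total
```

As a cross-check, ${\mathbb E}[R_1]=\int_0^\infty e^{-\pi r^2}\,dr=1/2$ exactly, and the ratio ${\mathbb E}[R_{100}]/{\mathbb E}[R_{25}]$ comes out close to $2=\sqrt{100/25}$, in line with the $n^{1/d}$ growth.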
The mean of $R_n$ is characterized in the following lemma. \begin{lemma}\label{le:vvRn} For $n\geq 1$, we have that \begin{equation}\label{eq:vvRn} {\mathbb E}[R_n]=\frac{C}{\lambda^{1/d}}\cdot\sum_{k=0}^{n-1}\frac{\Gamma(k+\frac{1}{d})}{\Gamma(k+1)}, \end{equation} \noindent where $C=C(d)$ is a constant. In particular, ${\mathbb E}[R_n]\geq C'\left(\frac{n}{\lambda}\right)^{1/d}$ for all $n\geq 1$ and some constant $C'=C'(d)$. \end{lemma} \noindent \textbf{Proof.} Since the volume of $B(r)$ is $cr^d$, where $c=c(d)$ is a constant, we have that $V(r)$ is Poisson distributed with parameter $\lambda cr^d$. Together with the definition of $R_n$, this yields that $$ \PP(R_n> r)=\PP(V(r)< n)=\sum_{k=0}^{n-1}\frac{(\lambda cr^d)^k}{k!}e^{-\lambda cr^d}. $$ Hence $$ {\mathbb E}[R_n]=\int_0^\infty \PP(R_n> r)dr=\sum_{k=0}^{n-1}\frac{1}{k!}\int_0^\infty(\lambda cr^d)^ke^{-\lambda cr^d}dr, $$ and, by variable substitution, $$ \int_0^\infty(\lambda cr^d)^ke^{-\lambda cr^d}dr=\frac{\Gamma(k+\frac{1}{d})}{d(\lambda c)^{1/d}}. $$ Since $k!=\Gamma(k+1)$, this establishes (\ref{eq:vvRn}) with $C=c^{-1/d}d^{-1}$. As for the second claim of the lemma, just note that, since $\Gamma(k+\frac{1}{d})/\Gamma(k+1)$ is decreasing in $k$, we have that $$ \sum_{k=0}^{n-1}\frac{\Gamma(k+\frac{1}{d})}{\Gamma(k+1)}\geq n\cdot\frac{\Gamma(n+\frac{1}{d})}{\Gamma(n+1)}=\frac{\Gamma(n+\frac{1}{d})}{\Gamma(n)}, $$ and, by Stirling's formula, $\Gamma(n+\frac{1}{d})/\Gamma(n)\sim n^{1/d}$ as $n\to \infty$. $\Box$ With Lemma \ref{le:vvRn} at hand it is not hard to prove that finiteness of ${\mathbb E}^*[T]$ requires finite moment of order $(d+1)/d$ for $D$. \noindent \textbf{Proof of only-if direction of Theorem \ref{th:main}}.
Write ${\mathbb E}^*[T]={\mathbb E}^*\left[{\mathbb E}^*[T|D,R_D]\right]$, and, conditional on $D$ and $R_D$, define $$ A=\left\{V^*\left(\frac{R_D}{3^{1/d}}\right)<\frac{D-1}{2}\right\}, $$ that is, $A$ is the event that at least half of the $D-1$ non-origin $\mathcal{P}^*$-points in the interior of $B(R_D)$ are located at distance at least $3^{-1/d}R_D$ from the origin. Since $\mathcal{P}^*$ has the same distribution as a Poisson process with an added point at the origin, it follows from properties of the Poisson process that the $D-1$ non-origin points are uniformly distributed in $B(R_D)$, implying that $$ {\mathbb E}^*\left[V^*\left(\frac{R_D}{3^{1/d}}\right)\Big|D,R_D\right]=(D-1)\cdot\frac{\textrm{vol}[B(3^{-1/d}R_D)]} {\textrm{vol}[B(R_D)]}=\frac{D-1}{3}, $$ where vol$[\cdot]$ denotes volume. Combining this with Markov's inequality, we can bound $$ \PP^*(A|D,R_D)\geq 1-\frac{{\mathbb E}^*[V^*(3^{-1/d}R_D)|D,R_D]}{(D-1)/2}=\frac{1}{3} $$ and hence $$ {\mathbb E}^*[T|D,R_D]\geq \frac{{\mathbb E}^*[T|A,D,R_D]}{3}. $$ On the event $A$, at least $(D-1)/2$ of the $D$ stubs at the origin must be connected to vertices whose distance to the origin is at least $3^{-1/d}R_D$, implying that $$ T\geq \frac{D-1}{2}\cdot \frac{R_D}{3^{1/d}}. $$ Hence, using Lemma \ref{le:vvRn}, we get that \begin{eqnarray*} {\mathbb E}^*[T] & \geq & \frac{1}{6\cdot 3^{1/d}}\cdot {\mathbb E}^*[(D-1)R_D]\\ & = & \frac{1}{6\cdot 3^{1/d}}\cdot {\mathbb E}^*[(D-1){\mathbb E}^*[R_D|D]]\\ & \geq & \frac{C'}{6\cdot 3^{1/d}}\cdot {\mathbb E}^*[(D-1)D^{1/d}], \end{eqnarray*} \noindent which proves that ${\mathbb E}[D^{(d+1)/d}]<\infty$ is indeed necessary for ${\mathbb E}^*[T]<\infty$. $\Box$ \section{Repeated stable matching} In this section we make a first attempt to formulate a pairing scheme.
The algorithm is based on so-called stable matchings -- see e.g.\ Gale and Shapley (1962) -- obtained by iteratively matching mutually closest points, and turns out to give finite mean for $T$ in $d\geq 2$ if $F$ has bounded support. For $d=1$ an alternative scheme is described. For a general translation invariant homogeneous point process $\mathcal{R}$ with finite intensity, one algorithm for matching its points is as follows: First consider all pairs $\{x,y\}\subset[\mathcal{R}]$ of mutually closest points -- that is, all pairs $\{x,y\}$ such that $x$ is the closest point of $y$ and $y$ is the closest point of $x$ -- and match them to each other. Then remove these pairs, and apply the same procedure to the remaining points of $[\mathcal{R}]$. Repeat recursively. If $\mathcal{R}$ is such that almost surely $[\mathcal{R}]$ is non-equidistant and has no descending chains, then this algorithm can be shown to yield an almost surely perfect matching, that is, a matching where each point is matched to exactly one other point. Here, the point set $[\mathcal{R}]$ is said to be non-equidistant if there are no distinct points $x,y,u,v\in[\mathcal{R}]$ with $|x-y|=|u-v|$, and a descending chain is an infinite sequence $\{x_i\}\subset[\mathcal{R}]$ such that $|x_i-x_{i-1}|$ is strictly decreasing. Furthermore, the obtained perfect matching is the unique stable matching in the sense of Gale and Shapley (1962). Now consider our marked Poisson process, where each point $x\in[\mathcal{P}]$ has a random number $D_x\sim F$ of stubs attached to it. For each point $x$, number the stubs $1,\ldots ,D_x$ and say that stub number $i$ belongs to \emph{level} $i$ in the stub configuration. Furthermore, for $i\geq 1$, define $[\mathcal{P}]_i=\{x\in[\mathcal{P}]:D_x\geq i\}$. To connect the stubs, first match the points in $[\mathcal{P}]_1$ to each other using the stable matching algorithm described above, and pair the stubs on the first level among themselves according to this matching.
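As an illustration, the iterated mutually-closest matching just described can be sketched on a finite sample of points. This is a toy sketch of ours for a finite window (boundary effects are ignored), not the paper's infinite-volume construction; all names are ours.

```python
import numpy as np

def mutual_nearest_matching(points):
    """Iteratively match mutually closest pairs and remove them.

    Returns a list of index pairs into `points`. For a generic
    (non-equidistant) finite point set, every point ends up matched,
    since the globally closest active pair is always mutually closest.
    """
    active = list(range(len(points)))
    matches = []
    while len(active) >= 2:
        pts = points[active]
        dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(dist, np.inf)
        nn = dist.argmin(axis=1)  # nearest active neighbour of each point
        # pairs (i, j) with i < j that are each other's nearest neighbour
        pairs = [(i, int(nn[i])) for i in range(len(active))
                 if nn[nn[i]] == i and i < nn[i]]
        for i, j in pairs:
            matches.append((active[i], active[j]))
        used = {k for pair in pairs for k in pair}
        active = [a for k, a in enumerate(active) if k not in used]
    return matches

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 10.0, size=(20, 2))
matching = mutual_nearest_matching(points)  # 10 disjoint pairs
```

In the paper's setting the same procedure is applied level by level to the (colored copies of the) sets $[\mathcal{P}]_i$.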
Then take a uniformly chosen 2-coloring of the points in $[\mathcal{P}]_2$ such that points that are matched to each other in the matching of $[\mathcal{P}]_1$ get different colors. This gives rise to two sets of points $[\mathcal{P}]_{2,1}$ and $[\mathcal{P}]_{2,2}$ with different colors. Match the points in $[\mathcal{P}]_{2,1}$ to each other using the stable matching algorithm and do likewise with the points in $[\mathcal{P}]_{2,2}$. Connect the stubs on the second level accordingly. Then continue in this way. In general, the points in $[\mathcal{P}]_i$ are colored with a uniformly chosen $i$-coloring such that points that have been matched in the previous steps are assigned different colors, giving rise to point sets $[\mathcal{P}]_{i,1},\ldots, [\mathcal{P}]_{i,i}$ whose points are then matched among themselves using the stable matching algorithm. Since the distributions of the point sets $\{[\mathcal{P}]_{i,j}\}$ fulfill the conditions for the stable matching algorithm to yield a perfect matching, this procedure indeed provides a well-defined pairing of the stubs, and the coloring prevents multiple edges between vertices. We refer to this scheme as repeated stable matching with coloring (RSMC). As for the total edge length per vertex with RSMC we have the following result. \begin{prop}\label{prop:RSMC} For RSMC with degree distribution $F$, we have that \begin{itemize} \item[{\rm{(a)}}] ${\mathbb E}^*[T]=\infty$ for any degree distribution $F$ in $d=1$. \item[{\rm{(b)}}] ${\mathbb E}^*[T]<\infty$ for $F$ with bounded support in $d\geq 2$. \end{itemize} \end{prop} \noindent \textbf{Proof.} Part (a) is a direct consequence of Theorem 5.2(i) in Holroyd et al.\ (2008), where the expected matching distance for the stable matching of a one-dimensional Poisson process is shown to be infinite. As for (b), write $D=D_0$ and let $L_i$ denote the length of the edge created by the $i$:th stub at the origin (if $D<i$, then $L_i:=0$). 
Also write $p_i=\PP(D=i)$ and $p_{i+}=\PP(D\geq i)$. By a straightforward generalization of Theorem 5.2(ii) in Holroyd et al. (2008), we have that \begin{equation}\label{eq:pp_beg} \PP^*\left(L_i>r|D\geq i\right)\leq C\frac{i}{p_{i+}}r^{-d} \end{equation} \noindent for some constant $C=C(d)$. Indeed, the point sets $\{[\mathcal{P}]_{i,j}\}_{j=1,\ldots, i}$ are outcomes of translation invariant processes with intensity $p_{i+}/i$, satisfying the conditions for the existence of a well-defined stable matching, and, as pointed out by Holroyd et al., the proof of Theorem 5.2(ii) applies to any such process. It follows that ${\mathbb E}^*[L_i|D\geq i]\leq Ci/p_{i+}$ for some constant $C=C(d)$ and hence, with $u=\max\{i:p_i>0\}$, we have $$ {\mathbb E}^*[T]={\mathbb E}\Big[\sum_{i=1}^uL_i\Big]=\sum_{i=1}^up_{i+}{\mathbb E}^*[L_i|D\geq i]\leq C\sum_{i=1}^ui<\infty. $$ $\Box$ \noindent \textbf{Remark.} Note that the proof applies also if the marking of the Poisson points with random degrees is not independent, but only translation invariant, that is, the probability law of the marked process is invariant under shifts of ${\mathbb R}^d$. Indeed, Theorem 5.2(ii) can be applied to conclude (\ref{eq:pp_beg}) also in such a situation. $\Box$ Clearly there is no reason to believe that RSMC is optimal for creating short edges. Firstly, for a Poisson process, the stable matching is not optimal in terms of the matching distance -- see Holroyd et al. (2008) -- indicating that RSMC is not optimal within each level. Secondly, RSMC suffers from the obvious drawback that stubs on different levels are not connected to each other. Indeed, a stub on level $i$ must reach at least the nearest other vertex with degree at least $i$ in order to be connected. By Lemma \ref{le:vvRn}, the expected distance to such a vertex is $Cp_{i+}^{-1/d}$, implying that $$ {\mathbb E}^*[T]\geq \sum_{n=1}^{\infty}\left(\sum_{i=1}^nCp_{i+}^{-1/d}\right)p_n.
$$ This lower bound is infinite for instance for a power law distribution with sufficiently small exponent $\tau>1$, that is, for a distribution with $p_n=Cn^{-\tau}$. Indeed, then $p_{n+}=Cn^{-(\tau-1)}$, so that $p_{i+}\leq C(n/2)^{-(\tau-1)}$ for $i=n/2,\ldots,n$, and we get $$ {\mathbb E}^*[T]\geq C\sum_{n=1}^\infty\frac{n}{2} \left(\frac{n}{2}\right)^{(\tau-1)/d}n^{-\tau}=C\sum_{n=1}^\infty n^{-(\tau-1-\tau/d+1/d)}, $$ which is infinite for $\tau\leq (2d-1)/(d-1)$. Hence, in $d=2$, ${\mathbb E}^*[T]=\infty$ for $\tau\leq 3$ with RSMC, while Theorem \ref{th:main} stipulates the existence of a pairing scheme with ${\mathbb E}^*[T]<\infty$ for $\tau\geq 5/2$. Before describing such a pairing scheme, we mention that, in $d=1$, RSMC can be modified to give finite mean for degree distributions with bounded support as follows: As before, let $[\mathcal{P}]_i=\{x\in[\mathcal{P}]:D_x\geq i\}$. To connect the stubs on level $i$, pick one of the two possible matchings of the points of $[\mathcal{P}]_i$ satisfying that, for any two matched points $x,y$ with $x<y$, the interval $(x,y)$ does not contain any points of $[\mathcal{P}]_i$. Call a point that is matched with a point to the right (left) right-oriented (left-oriented) and connect the stubs so that the stub at a right-oriented point $x$ is paired with the stub at the $i$:th left-oriented point located to the right of $x$. This is clearly stationary, and, since $[\mathcal{P}]_i\subset[\mathcal{P}]_{i-1}$, no multiple edges are created. We refer to this procedure as shifted adjacent matching (SAM). \begin{prop}\label{prop:SAM} Assume that $F$ has bounded support and let $u=\max\{i:p_i>0\}$. For SAM, we have that ${\mathbb E}^*[T]=u^2$. In particular, ${\mathbb E}^*[T]<\infty$ if and only if $F$ has bounded support. \end{prop} \noindent \textbf{Proof.} Recall that $L_i$ is the length of the edge created by the $i$:th stub at the origin ($L_i:=0$ if $D<i$).
With SAM, the stub at level $i$ at the origin (if such a stub exists) will be connected to the $(2i-1)$:th point in $[\mathcal{P}]_i$ to the right or left of the origin with probability 1/2 each. The distance from the origin to the $(2i-1)$:th point to the right with degree at least $i$ is a sum of a NegBin$(2i-1,p_{i+})$ number of Exp(1)-variables. Hence ${\mathbb E}^*[L_i|D\geq i]=(2i-1)/p_{i+}$ and the proposition follows from a calculation analogous to the one in the proof of Proposition \ref{prop:RSMC}. $\Box$ \noindent \textbf{Remark.} SAM for a stub configuration on ${\mathbb Z}^d$ is analyzed in Deijfen and Jonasson (2006: Section 2), where it is shown that $T$ has finite mean also for \emph{stationary} degrees $\{D_i\}$ with bounded support. $\Box$ \section{Finite mean is possible when ${\mathbb E}[D^{(d+1)/d}]<\infty$} We now describe a scheme that gives finite mean for the total edge length per vertex as soon as $F$ has a finite moment of order $(d+1)/d$. Roughly, the model is a truncated version of RSMC, where vertices with very high degree are connected nearby (instead of having to go to other vertices with equally high degree) and the remaining stubs are then connected according to RSMC. The model is designed to make it possible to exploit results from ${\mathbb Z}^d$ and the procedure for connecting the high-degree vertices will involve associating the Poisson points with their nearest vertex in a uniform translation of ${\mathbb Z}^d$. The following stepwise algorithm for connecting vertices with large degree to vertices with small degree in an iid stub configuration $\{\widetilde{D}_z\}$ on ${\mathbb Z}^d$ was formulated in Jonasson (2007, Section 3.2). It is a generalization of a similar algorithm for ${\mathbb Z}$ from Deijfen and Jonasson (2006) inspired by the ``stable marriage of Poisson and Lebesgue'' from Hoffman et al.\ (2006), and it is applicable also to more general graphs.
For a large integer $m$, let $\widetilde{D}_z'=\widetilde{D}_z \mathbf{1}\{\widetilde{D}_z>m\}$ and say that $z$ is high if $\widetilde{D}_z'>0$, otherwise $z$ is called low. First, the positions of the vertices of ${\mathbb Z}^d$ are perturbed by moving each vertex independently a Unif(0,0.1)-distributed distance along a randomly chosen incident edge (this is just to make the vertex set non-equidistant). Then, in the first step, every high vertex $z$ claims its $\widetilde{D}_z$ nearest low neighbors, and a low vertex that is claimed by at least one high vertex is connected to the nearest high vertex that has claimed it. Let $\widetilde{D}_z(1)$ be the number of remaining stubs of the high vertex $z$ after this has been done. In the second step, each high vertex $z$ with $\widetilde{D}_z(1)>0$ claims its $\widetilde{D}_z(1)$ nearest low vertices that have not yet been connected to any high vertex, and each claimed low vertex is connected to the nearest high vertex that has claimed it. This is then repeated recursively. In Jonasson (2007) it is shown that, if $m$ is chosen large enough, this procedure leads to a well-defined configuration with ${\mathbb E}[\widetilde{T}]<\infty$ as soon as ${\mathbb E}[\widetilde{D}_z^{(d+1)/d}]<\infty$, where $\widetilde{T}$ denotes the total edge length (in the ${\mathbb Z}^d$-metric) at the origin. Now return to the Poisson setting. For $x\in{\mathbb R}^d$, write $U_x$ for the unit cube centered at $x$. Pick a point $x_0$ uniformly in the origin cube $U_{0}$ and let ${\mathbb Z}^d(x_0)$ be the translation of ${\mathbb Z}^d$ obtained by moving the origin to $x_0$. The stubs at a Poisson point $x\in[\mathcal{P}]$ are said to be associated with the point $z\in{\mathbb Z}^d(x_0)$ such that $x\in U_z$. Write $\widetilde{D}_z$ for the number of stubs associated with a point $z\in{\mathbb Z}^d(x_0)$ and $N_z$ for the number of Poisson points in $U_z$.
Then $N_z\sim$ Po(1) and $\widetilde{D}_z=D_1+\ldots+D_{N_z}$, where $\{D_i\}$ are the iid marks of the Poisson points in $U_z$. The variables $\{\widetilde{D}_z\}$ are also iid and can be thought of as a stub configuration on ${\mathbb Z}^d(x_0)$. We number the stubs associated with each vertex $z\in{\mathbb Z}^d(x_0)$ randomly in some way, for instance by first numbering the Poisson points in $U_z$ randomly $1,\ldots,N_z$, and then consecutively picking one stub from each Poisson point according to that ordering until all stubs are numbered. Furthermore, in order to keep track of which Poisson point a certain stub originates from, the stubs are labelled with the position of their Poisson point. The stubs are now connected as follows: \begin{itemize} \item[1.] Pick $m$ large and, as in Jonasson (2007), say that a vertex $z\in{\mathbb Z}^d(x_0)$ is high (low) if $\widetilde{D}_z>m$ ($\widetilde{D}_z\leq m$). Match the stubs of the high vertices of ${\mathbb Z}^d(x_0)$ with stubs at the low vertices of ${\mathbb Z}^d(x_0)$ according to the algorithm from Jonasson described above, using the stubs at each vertex in numerical order. When two stubs are matched, we create an edge between the Poisson points that they originate from. Since there will be no multiple matchings between the same two vertices in ${\mathbb Z}^d(x_0)$, this will not give multiple edges between the Poisson points either. \item[2.] After step 1, in $d=1$, the unconnected stubs -- that is, the stubs at the low vertices of ${\mathbb Z}(x_0)$ that have not been matched with stubs from the high vertices -- are connected with SAM. In $d\geq 2$, we drop the association of the stubs with the vertices of ${\mathbb Z}^d(x_0)$ and take the unconnected stubs back to their Poisson points. These stubs are then connected with RSMC.
\end{itemize} This pairing scheme is clearly translation invariant and will give finite mean for $T$ if ${\mathbb E}[D^{(d+1)/d}]<\infty$: If $D_i$ has a finite moment of order $(d+1)/d$, then so does $\widetilde{D}_z$. Hence, if the point $z\in{\mathbb Z}^d(x_0)$ to which the stubs at the origin vertex of $[\mathcal{P}^*]$ are associated is high, then it follows from Jonasson (2007) that $T$ has finite mean. If the point is low, then $T$ has finite mean by Propositions \ref{prop:RSMC} and \ref{prop:SAM} and the remarks following them. Indeed, the remaining stubs after step 1 induce a translation invariant assignment of degrees to the Poisson points -- or, in $d=1$, to the points of ${\mathbb Z}(x_0)$ -- where the degrees are bounded by $m$. \section{Further work} We have formulated a necessary and sufficient condition for the existence of a random graph on a spatial Poisson process with finite expected total edge length per vertex. There are several related problems that remain to be investigated. \noindent \textbf{Generalization of stable matching.} A natural model for pairing the stubs is the following: Consider the marked Poisson process and, at time 0, start growing a number of balls from each Poisson point linearly in time, the number of balls that start growing from a given point being given by the mark of the point. When a ball from one point meets a ball from another point, one ball from each point is annihilated, and an edge between the two points is created. The remaining balls continue growing (this is to avoid multiple edges). When $D\equiv 1$, so that each vertex has exactly one stub attached to it, this gives rise to the unique stable matching of the points; see Holroyd and Peres (2003, Section 4). It remains to analyze the algorithm for other degree distributions, which seems to be more complicated. \noindent \textbf{Percolation.} Apart from the edge length, other features of the configurations arising from different pairing schemes could also be investigated.
One such feature is the component size, that is, the number of vertices in a given component of the graph. Will the resulting edge configuration percolate in the sense that it contains an unbounded component? This question is not relevant when $D\leq 1$ -- indeed, the configuration will then consist only of isolated edges, implying that the component size is at most 2 -- but arises naturally for other degree distributions. How does the answer depend on the degree distribution? For a given degree distribution, does the answer depend on the pairing scheme? Is it always possible to achieve percolation by taking $d$ sufficiently large? \noindent \textbf{Independent Poisson processes.} Instead of a single Poisson process, the vertex set could be generated by two independent Poisson processes, representing two different types of vertices. This is related to matchings of points of two independent Poisson processes considered in Holroyd et al.\ (2008). We look for ways of obtaining a graph with edges running only between different types of vertices and with prescribed degree distributions for both vertex types. Can this be done if the degree distributions are different for the two vertex types? If so, how different are the degree distributions allowed to be? Which properties do the resulting configurations have? \begin{center} ----------------------------------------- \end{center} \noindent \textbf{Acknowledgement} I thank Ronald Meester for discussions on the problem during a stay at VU Amsterdam spring 2008. \section*{References} \noindent Bollob\'{a}s, B., Riordan, O., Spencer, J. and Tusn\'{a}dy, G. (2001): The degree sequence of a scale-free random graph process, \emph{Rand. Struct. Alg.} \textbf{18}, 279-290. \noindent Bollob\'{a}s, B., Janson, S. and Riordan, O. (2006): The phase transition in inhomogeneous random graphs, \emph{Rand. Struct. Alg.} \textbf{31}, 3-122. \noindent Chung, F. and Lu, L. 
(2002:1): Connected components in random graphs with given expected degree sequences, \emph{Ann. Comb.} \textbf{6}, 125-145. \noindent Chung, F. and Lu, L. (2002:2): The average distances in random graphs with given expected degrees, \emph{Proc. Natl. Acad. Sci.} \textbf{99}, 15879-15882. \noindent Daley, D.J. and Vere-Jones, D. (2002): \emph{An introduction to the theory of point processes}, 2nd edition, vol. I, Springer. \noindent Deijfen, M. and Jonasson, J. (2006): Stationary random graphs on ${\mathbb Z}$ with prescribed iid degrees and finite mean connections, \emph{Electr. Comm. Probab.} \textbf{11}, 336-346. \noindent Deijfen, M. and Meester, R. (2006): Generating stationary random graphs on $\mathbb{Z}$ with prescribed iid degrees, \emph{Adv. Appl. Probab.} \textbf{38}, 287-298. \noindent Durrett, R. (2006): \emph{Random graph dynamics}, Cambridge University Press. \noindent Gale, D. and Shapley, L. (1962): College admissions and the stability of marriage, \emph{Amer. Math. Monthly} \textbf{69}, 9-15. \noindent Hoffman, C., Holroyd, A. and Peres, Y. (2006): A stable marriage of Poisson and Lebesgue, \emph{Ann. Probab.} \textbf{34}, 1241-1272. \noindent Holroyd, A., Pemantle, R., Peres, Y. and Schramm, O. (2008): Poisson matching, \emph{Ann. Inst. Henri Poincar\'{e}}, to appear. \noindent Holroyd, A. and Peres, Y. (2003): Trees and matchings from point processes, \emph{Electr. Comm. Probab.} \textbf{8}, 17-27. \noindent Jonasson, J. (2007): Invariant random graphs with iid degrees in a general geography, \emph{Probab. Th. Rel. Fields}, to appear. \noindent Kallenberg, O. (1997): \emph{Foundations of modern probability}, Springer. \noindent Molloy, M. and Reed, B. (1995): A critical point for random graphs with a given degree sequence, \emph{Rand. Struct. Alg.} \textbf{6}, 161-179. \noindent Molloy, M. and Reed, B. (1998): The size of the giant component of a random graph with a given degree sequence, \emph{Comb. Probab. Comput.}\ \textbf{7}, 295-305. \end{document}
\begin{document} \title{Casimir interaction between spherical and planar plasma sheets } \author{L. P. Teo} \email{[email protected]} \affiliation{Department of Applied Mathematics, Faculty of Engineering, University of Nottingham Malaysia Campus, Jalan Broga, 43500, Semenyih, Selangor Darul Ehsan, Malaysia.} \begin{abstract}We consider the interaction between a spherical plasma sheet and a planar plasma sheet due to the vacuum fluctuations of electromagnetic fields. We use the mode summation approach to derive the Casimir interaction energy and study its asymptotic behaviors. In the small separation regime, we confirm the proximity force approximation and calculate the first correction beyond the proximity force approximation. This study has potential application in modeling the Casimir interaction between objects made of materials, such as graphene sheets, that can be modeled by plasma sheets. \end{abstract} \pacs{13.40.-f, 03.70.+k, 12.20.Ds} \keywords{ Casimir interaction, sphere-plane configuration, plasma sheet, asymptotic behavior.} \maketitle \section{Introduction} Due to the potential impact on nanotechnology, the Casimir interactions between objects of nontrivial geometries have been under active research in recent years. Thanks to work done by several groups of researchers \cite{1,2,3,4,5,6,7,8,9,10,11,12,13,14}, we now have a formalism to compute the exact functional representation (known as the TGTG formula) for the Casimir interaction energy between two objects. Despite the seemingly different approaches taken, all the methods can be regarded as variants of the multiple scattering approach, which can also be understood from the point of view of the mode summation approach \cite{43,44,45}. The basic ingredients in the TGTG formula are the scattering matrices of the two objects and the transition matrices that relate the coordinate system of one object to the other.
In the case that the objects have certain symmetries that allow a separable coordinate system to be employed, one can calculate these matrices explicitly. This has made possible the exact analytic and numerical analysis of the Casimir interaction between a sphere and a plate \cite{34, 33, 32,29, 28, 35, 27, 26, 25, 15, 16, 19}, between two spheres \cite{31,23,22}, between a cylinder and a plate \cite{9,39,40}, between two cylinders \cite{42,38, 21,20}, between a sphere and a cylinder \cite{17, 36}, as well as other geometries \cite{30, 37, 41}. As is well known, the strength of the Casimir interaction does not only depend on the geometries of the objects; it is also very sensitive to the boundary conditions imposed on the objects. For the past few years, a lot of work has been done on the analysis of the quantum effect on objects with perfect boundary conditions such as Dirichlet, Neumann, perfectly conducting, infinitely permeable, etc. There are also a number of works which consider real materials such as metals modeled by plasma or Drude models \cite{33,32,27,26,25,19,15,23,20,36}. In this work, we consider the Casimir interaction between a spherical plasma sheet and a planar plasma sheet. The plasma sheet model was considered in \cite{40, 46,47, 48, 49, 24, 18} to model graphene sheets, describing the $\pi$ electrons in a C$_{60}$ molecule. This model has its own appeal in describing thin shells of materials with similar attributes. In \cite{40}, the Casimir interaction between a cylindrical plasma sheet and a planar plasma sheet was considered. Our work can be considered a generalization of \cite{40}, with a spherical plasma sheet in place of a cylindrical one. One of the main objectives of the current work is to derive the TGTG formula for the Casimir interaction energy. As in \cite{40}, we are also going to study the asymptotic behaviors of the Casimir interaction in the small separation regime.
We would expect that the leading term of the Casimir interaction coincides with the proximity force approximation (PFA), which we are going to confirm. Another major contribution is the exact analytic computation of the next-to-leading order term, which determines the deviation from the PFA. \section{The Casimir interaction energy} In this section, we derive the TGTG formula for the Casimir interaction energy between a spherical plasma sheet and a planar plasma sheet. We follow our approach in \cite{45}. Assume that the spherical plasma sheet is a spherical surface described by $r=R$ in spherical coordinates $(r,\theta,\phi)$, and the planar plasma sheet is located at $z=L$ with dimension $H\times H$. It is assumed that $R<L\ll H$. The center of the spherical shell is the origin $O$, and the center of the coordinate system about the plane $z=L$ is $O' =(0,0,L)$. The electromagnetic field is governed by Maxwell's equations: \begin{equation}\label{eq2_19_1}\begin{split} &\nabla\cdot\mathbf{E}=\frac{\rho_f}{\varepsilon_0},\hspace{1cm}\nabla\times\mathbf{E}+\frac{ \partial\mathbf{B} }{\partial t}=\mathbf{0},\\ &\nabla\cdot\mathbf{B}=0,\hspace{1cm} \nabla\times\mathbf{B}-\frac{1}{c^2}\frac{\partial\mathbf{E}}{\partial t}=\mu_0\mathbf{J}_f. \end{split}\end{equation} The free charge density $\rho_f$ and free current density $\mathbf{J}_f$ are functions having support on the plasma sheets (boundaries). Let $\mathbf{A}$ be a vector potential that satisfies the gauge condition $\nabla\cdot\mathbf{A}=0$ and such that \begin{equation*} \mathbf{E}=-\frac{\partial\mathbf{A}}{\partial t},\hspace{1cm}\mathbf{B}=\nabla\times\mathbf{A}.
\end{equation*}$\mathbf{A}(\mathbf{x},t)$ can be written as a superposition of normal modes $\mathbf{A}(\mathbf{x},\omega)e^{-i\omega t}$: $$\mathbf{A}(\mathbf{x},t)=\int_{-\infty}^{\infty}d\omega \,\mathbf{A}(\mathbf{x},\omega)e^{-i\omega t}.$$ Maxwell's equations \eqref{eq2_19_1} imply that outside the boundaries, \begin{equation}\label{eq2_18_1} \nabla\times\nabla \times \mathbf{A}(\mathbf{x},\omega)=k^2\mathbf{A}(\mathbf{x},\omega), \end{equation}where $$k=\frac{\omega}{c}.$$ The boundary conditions are given by \cite{46}: \begin{equation}\label{eq2_18_2}\begin{split} &\mathbf{E}_{\parallel}\Bigr|_{S_+}-\mathbf{E}_{\parallel}\Bigr|_{S_-} =\mathbf{0},\\ &\mathbf{B}_{n}\Bigr|_{S_+}-\mathbf{B}_{n}\Bigr|_{S_-}=0,\\ &\mathbf{E}_{n}\Bigr|_{S_+}-\mathbf{E}_{n}\Bigr|_{S_-}=2\Omega\frac{c^2}{\omega^2}\nabla_{\parallel}\cdot\mathbf{E}_{\parallel}\Bigr|_{ S},\\ &\mathbf{B}_{\parallel}\Bigr|_{S_+}-\mathbf{B}_{\parallel}\Bigr|_{S_-} =-2i\Omega\frac{1}{\omega}\mathbf{n}\times\mathbf{E}_{\parallel}\Bigr|_{S}, \end{split}\end{equation}where $S$ is the boundary, $S_+$ and $S_-$ are respectively the outside and inside of the boundary, $\mathbf{n}$ is a unit vector normal to the boundary, and $\Omega$ is a constant characterizing the plasma, having the dimension of inverse length. The solutions of the equation \eqref{eq2_18_1} can be divided into transverse electric (TE) waves $\mathbf{A}^{\text{TE}}_{\alpha}$ and transverse magnetic (TM) waves $\mathbf{A}^{\text{TM}}_{\alpha}$, parametrized by some parameter $\alpha$ and satisfying \begin{equation}\label{eq2_18_3} \frac{1}{k}\nabla\times \mathbf{A}^{\text{TE}}_{\alpha}=\mathbf{A}^{\text{TM}}_{\alpha},\hspace{1cm}\frac{1}{k}\nabla\times \mathbf{A}^{\text{TM}}_{\alpha}= \mathbf{A}^{\text{TE}}_{\alpha}.
\end{equation}Moreover, the waves can be divided into regular waves $\mathbf{A}^{\text{TE,reg}}_{\alpha}$, $\mathbf{A}^{\text{TM,reg}}_{\alpha}$ that are regular at the origin of the coordinate system and outgoing waves $\mathbf{A}^{\text{TE,out}}_{\alpha}$, $\mathbf{A}^{\text{TM,out}}_{\alpha}$ that decrease to zero rapidly when $\mathbf{x}\rightarrow \infty$ and $k$ is replaced by $ik$. In rectangular coordinates, the waves are parametrized by $\alpha=\mathbf{k}_{\perp}=(k_x,k_y)\in\mathbb{R}^2$, with \begin{equation*}\begin{split} \mathbf{A}_{\mathbf{k}_{\perp}}^{\text{TE}, \substack{\text{reg}\\\text{out}}}(\mathbf{x},\omega) =& \frac{1}{k_{\perp}}e^{ik_xx+ik_yy\mp i\sqrt{k^2-k_{\perp}^2}z} \left(ik_y\mathbf{e}_x-ik_x\mathbf{e}_y\right),\\ \mathbf{A}_{\mathbf{k}_{\perp}}^{\text{TM}, \substack{\text{reg}\\\text{out}}}(\mathbf{x},\omega) =& \frac{1}{kk_{\perp}}e^{ik_xx+ik_yy\mp i\sqrt{k^2-k_{\perp}^2}z} \left(\pm k_x\sqrt{k^2-k_{\perp}^2}\mathbf{e}_x \pm k_y\sqrt{k^2-k_{\perp}^2}\mathbf{e}_y+k_{\perp}^2\mathbf{e}_z\right). \end{split}\end{equation*} Here $k_{\perp}=\sqrt{k_x^2+k_y^2}$. In spherical coordinates, the waves are parametrized by $\alpha=(l,m)$, where $l=1,2,3,\ldots$ and $-l\leq m\leq l$, with \begin{equation*}\begin{split} \mathbf{A}_{lm}^{\text{TE},*}(\mathbf{x},\omega) =&\frac{\mathcal{C}_{l}^*}{\sqrt{l(l+1)}}f_l^*(kr)\left(\frac{im}{\sin\theta}Y_{lm}(\theta,\phi)\mathbf{e}_{\theta}-\frac{\partial Y_{lm}(\theta,\phi)}{\partial\theta} \mathbf{e}_{\phi}\right),\\ \mathbf{A}_{lm}^{\text{TM},*}(\mathbf{x},\omega) =&\mathcal{C}_{l}^*\left(\frac{\sqrt{l(l+1)}}{kr}f_l^*(kr)Y_{lm}(\theta,\phi)\mathbf{e}_r+\frac{1}{\sqrt{l(l+1)}}\frac{1}{kr}\frac{d}{dr}\left(rf_l^*(kr)\right) \left[\frac{\partial Y_{lm}(\theta,\phi)}{\partial\theta}\mathbf{e}_{\theta}+\frac{im}{\sin\theta}Y_{lm}(\theta,\phi)\mathbf{e}_{\phi}\right]\right). 
\end{split}\end{equation*}Here $*=$ reg or out, with $f_l^{\text{reg}}(z)=j_l(z)$ and $f_l^{\text{out}}(z)=h_l^{(1)}(z)$, and $Y_{lm}(\theta,\phi)$ are the spherical harmonics. The constants $\mathcal{C}_{l}^{\text{reg}}$ and $\mathcal{C}_{l}^{\text{out}}$ are chosen so that \begin{equation*} \mathcal{C}_{l}^{\text{reg}} j_l(i\zeta) =\sqrt{\frac{\pi}{2\zeta}}I_{l+\frac{1}{2}}(\zeta),\quad \mathcal{C}_{l}^{\text{out}} h_l^{(1)}(i\zeta) =\sqrt{\frac{\pi}{2\zeta}}K_{l+\frac{1}{2}}(\zeta). \end{equation*} Now we can derive the dispersion relation for the energy eigenmodes $\omega$ of the system. Inside the sphere ($r<R$), express $\mathbf{A}(\mathbf{x},t) $ in the spherical coordinate system centered at $O$: \begin{equation*} \mathbf{A}(\mathbf{x},t) =\int_{-\infty}^{\infty} d\omega \sum_{l=1}^{\infty}\sum_{m=-l}^l\left(A_1^{lm} \mathbf{A}^{\text{TE,reg}}_{lm}(\mathbf{x},\omega)+C_1^{lm} \mathbf{A}^{\text{TM,reg}}_{lm}(\mathbf{x},\omega)\right)e^{-i\omega t}. \end{equation*} Outside the plane ($z>L$), express $\mathbf{A} $ in the rectangular coordinate system centered at $O'$: \begin{equation*} \mathbf{A}(\mathbf{x}',t) =H^2\int_{-\infty}^{\infty} d\omega \int_{-\infty}^{\infty}\frac{dk_x}{2\pi}\int_{-\infty}^{\infty}\frac{dk_y}{2\pi} \left(B_2^{\mathbf{k}_{\perp}} \mathbf{A}^{\text{TE,out}}_{\mathbf{k}_{\perp}}(\mathbf{x}',\omega)+D_2^{\mathbf{k}_{\perp}} \mathbf{A}^{\text{TM,out}}_{\mathbf{k}_{\perp}}(\mathbf{x}',\omega)\right)e^{-i\omega t}. \end{equation*} Here $\mathbf{x}'=\mathbf{x}-\mathbf{L}$, $\mathbf{L}=L\mathbf{e}_z$.
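As a consistency check of the duality relations \eqref{eq2_18_3} (our verification, not part of the original derivation), one can apply the curl directly to the rectangular TE mode. Writing $k_z=\sqrt{k^2-k_\perp^2}$, the regular wave carries the wave vector $\boldsymbol{\kappa}=(k_x,k_y,-k_z)$, so that the curl acts as $\nabla\times\mapsto i\boldsymbol{\kappa}\times$, and

\begin{equation*}
\frac{1}{k}\nabla\times\mathbf{A}^{\text{TE,reg}}_{\mathbf{k}_{\perp}}
=\frac{i\boldsymbol{\kappa}}{kk_{\perp}}\times\left(ik_y\mathbf{e}_x-ik_x\mathbf{e}_y\right)e^{ik_xx+ik_yy-ik_zz}
=\frac{1}{kk_{\perp}}\left(k_xk_z\mathbf{e}_x+k_yk_z\mathbf{e}_y+k_{\perp}^2\mathbf{e}_z\right)e^{ik_xx+ik_yy-ik_zz},
\end{equation*}

which is precisely $\mathbf{A}^{\text{TM,reg}}_{\mathbf{k}_{\perp}}$ with the upper signs in the expression given above. The outgoing case works the same way with $k_z\to-k_z$.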
In the region between the sphere and the plane, $\mathbf{A}$ can be represented in two ways: one is in terms of the spherical coordinate system centered at $O$: \begin{equation*} \mathbf{A}(\mathbf{x},t) =\int_{-\infty}^{\infty} d\omega \sum_{l=1}^{\infty}\sum_{m=-l}^l\left(a_1^{lm} \mathbf{A}^{\text{TE,reg}}_{lm}(\mathbf{x},\omega)+ b_1^{lm} \mathbf{A}^{\text{TE,out}}_{lm}(\mathbf{x},\omega)+c_1^{lm} \mathbf{A}^{\text{TM,reg}}_{lm}(\mathbf{x},\omega)+d_1^{lm} \mathbf{A}^{\text{TM,out}}_{lm}(\mathbf{x},\omega)\right)e^{-i\omega t}; \end{equation*}and one is in terms of the rectangular coordinate system centered at $O'$: \begin{equation*}\begin{split} \mathbf{A}(\mathbf{x}',t) =&H^2\int_{-\infty}^{\infty} d\omega \int_{-\infty}^{\infty}\frac{dk_x}{2\pi}\int_{-\infty}^{\infty}\frac{dk_y}{2\pi}\\&\hspace{2cm}\times\left(a_2^{\mathbf{k}_{\perp} }\mathbf{A}^{\text{TE,reg}}_{\mathbf{k}_{\perp} }(\mathbf{x}',\omega)+ b_2^{\mathbf{k}_{\perp} } \mathbf{A}^{\text{TE,out}}_{\mathbf{k}_{\perp}}(\mathbf{x}',\omega)+c_2^{\mathbf{k}_{\perp} } \mathbf{A}^{\text{TM,reg}}_{\mathbf{k}_{\perp} }(\mathbf{x}',\omega)+d_2^{\mathbf{k}_{\perp} } \mathbf{A}^{\text{TM,out}}_{\mathbf{k}_{\perp} }(\mathbf{x}',\omega)\right)e^{-i\omega t}.\end{split} \end{equation*}These two representations are related by translation matrices $\mathbb{V}$ and $\mathbb{W}$: \begin{equation*}\begin{split} \begin{pmatrix}\mathbf{A}^{\text{TE,reg}}_{\mathbf{k}_{\perp}}(\mathbf{x}',\omega)\\\mathbf{A}^{\text{TM,reg}}_{\mathbf{k}_{\perp}}(\mathbf{x}',\omega)\end{pmatrix} =&\sum_{l=1}^{\infty}\sum_{m=-l}^l \begin{pmatrix}V_{lm, \mathbf{k}_{\perp}}^{\text{TE,TE}} & V_{ lm, \mathbf{k}_{\perp}}^{\text{TM,TE}} \\ V_{ lm, \mathbf{k}_{\perp}}^{\text{TE,TM}} & V_{lm, \mathbf{k}_{\perp}}^{\text{TM,TM}} \end{pmatrix} \begin{pmatrix}\mathbf{A}^{\text{TE,reg}}_{lm}(\mathbf{x},\omega) \\\mathbf{A}^{\text{TM,reg}}_{lm}(\mathbf{x},\omega)\end{pmatrix}\\ 
\begin{pmatrix}\mathbf{A}^{\text{TE,out}}_{lm}(\mathbf{x},\omega)\\\mathbf{A}^{\text{TM,out}}_{lm}(\mathbf{x},\omega)\end{pmatrix} =&H^2\int_{-\infty}^{\infty}\frac{dk_x}{2\pi}\int_{-\infty}^{\infty}\frac{dk_y}{2\pi} \begin{pmatrix}W_{\mathbf{k}_{\perp}, lm}^{\text{TE,TE}} & W_{\mathbf{k}_{\perp}, lm}^{\text{TM,TE}} \\ W_{\mathbf{k}_{\perp}, lm}^{\text{TE,TM}} & W_{\mathbf{k}_{\perp}, lm}^{\text{TM,TM}} \end{pmatrix} \begin{pmatrix}\mathbf{A}^{\text{TE,out}}_{\mathbf{k}_{\perp}}(\mathbf{x}',\omega) \\\mathbf{A}^{\text{TM,out}}_{\mathbf{k}_{\perp}}(\mathbf{x}',\omega)\end{pmatrix}. \end{split}\end{equation*} Hence, \begin{equation*}\begin{split} \begin{pmatrix} a_1^{lm}\\c_1^{lm} \end{pmatrix}=&H^2\int_{-\infty}^{\infty}\frac{dk_x}{2\pi}\int_{-\infty}^{\infty}\frac{dk_y}{2\pi}\begin{pmatrix}V_{lm, \mathbf{k}_{\perp}}^{\text{TE,TE}} & V_{ lm, \mathbf{k}_{\perp}}^{\text{TE,TM}} \\ V_{ lm, \mathbf{k}_{\perp}}^{\text{TM,TE}} & V_{lm, \mathbf{k}_{\perp}}^{\text{TM,TM}} \end{pmatrix}\begin{pmatrix} a_2^{\mathbf{k}_{\perp}}\\c_2^{\mathbf{k}_{\perp}} \end{pmatrix},\\ \begin{pmatrix} b_2^{\mathbf{k}_{\perp}}\\ d_2^{\mathbf{k}_{\perp}} \end{pmatrix}=&\sum_{l=1}^{\infty}\sum_{m=-l}^l\begin{pmatrix}W_{\mathbf{k}_{\perp}, lm}^{\text{TE,TE}} & W_{\mathbf{k}_{\perp}, lm}^{\text{TE,TM}} \\ W_{\mathbf{k}_{\perp}, lm}^{\text{TM,TE}} & W_{\mathbf{k}_{\perp}, lm}^{\text{TM,TM}} \end{pmatrix}\begin{pmatrix} b_1^{lm}\\ d_1^{lm} \end{pmatrix}.\end{split} \end{equation*} These translation matrices have been derived in \cite{8,45}.
Their components are given by \begin{equation*} \begin{split} V_{lm,\mathbf{k}_{\perp}}^{\text{TE,TE}} =V_{lm,\mathbf{k}_{\perp}}^{\text{TM,TM}} =&-\frac{4\pi i }{\sqrt{l(l+1)}} \frac{\partial Y_{l,-m}(\theta_k,\phi_k)}{\partial\theta_k}e^{i\sqrt{k^2-k_{\perp}^2}L},\\ V_{lm,\mathbf{k}_{\perp}}^{\text{TE,TM}} =V_{lm,\mathbf{k}_{\perp}}^{\text{TM,TE}} =&\frac{4\pi i }{\sqrt{l(l+1)}} \frac{m}{\sin\theta_k} Y_{l,-m}(\theta_k,\phi_k) e^{i\sqrt{k^2-k_{\perp}^2}L}, \end{split} \end{equation*} \begin{equation*} \begin{split} W_{\mathbf{k}_{\perp},lm}^{\text{TE,TE}} =&\frac{i }{H^2\sqrt{l(l+1)}} \frac{\pi^2}{k\sqrt{k^2-k_{\perp}^2}} \frac{\partial Y_{lm}(\theta_k,\phi_k)}{\partial\theta_k}e^{i\sqrt{k^2-k_{\perp}^2}L},\\ W_{\mathbf{k}_{\perp},lm}^{\text{TM,TE}} =&\frac{i }{H^2\sqrt{l(l+1)}} \frac{\pi^2}{k\sqrt{k^2-k_{\perp}^2}} \frac{m}{\sin\theta_k} Y_{lm}(\theta_k,\phi_k) e^{i\sqrt{k^2-k_{\perp}^2}L}. \end{split} \end{equation*}Here $\theta_k$ and $\phi_k$ are such that $k_{\perp}=k\sin\theta_k$, $k_x=k_{\perp}\cos\phi_k$ and $k_y=k_{\perp}\sin\phi_k$. Let $\Omega_s$ be the parameter characterizing the spherical plasma sheet. 
Matching the boundary conditions \eqref{eq2_18_2} on the sphere gives \begin{equation*} \begin{split} &a_1^{lm}\mathcal{C}_{l }^{\text{reg}}j_l(kR)+b_1^{lm}\mathcal{C}_{l }^{\text{out}}h_l^{(1)}(kR)=A_1^{lm}\mathcal{C}_{l }^{\text{reg}}j_l(kR),\\ &a_1^{lm}\mathcal{C}_{l }^{\text{reg}} \Bigl(j_l(kR)+kRj_l'(kR)\Bigr)+b_1^{lm}\mathcal{C}_{l }^{\text{out}} \left(h_l^{(1)}(kR)+kRh_l^{(1)\prime}(kR)\right)- A_1^{lm} \mathcal{C}_{l }^{\text{reg}}\Bigl(j_l(kR)+kRj_l'(kR)\Bigr)\\&\hspace{4cm}=2\Omega_s RA_1^{lm} \mathcal{C}_{l }^{\text{reg}}j_l(kR),\\ &c_1^{lm}\mathcal{C}_{l }^{\text{reg}}\Bigl(j_l(kR)+kRj_l'(kR)\Bigr)+d_1^{lm}\mathcal{C}_{l }^{\text{out}} \left(h_l^{(1)}(kR)+kRh_l^{(1)\prime}(kR)\right)=C_1^{lm}\mathcal{C}_{l }^{\text{reg}}\Bigl(j_l(kR)+kRj_l'(kR)\Bigr),\\ &c_1^{lm}\mathcal{C}_{l }^{\text{reg}}j_l(kR)+d_1^{lm}\mathcal{C}_{l }^{\text{out}}h_l^{(1)}(kR)-C_1^{lm}\mathcal{C}_{l }^{\text{reg}}j_l(kR)= -\frac{2\Omega_s c^2}{\omega^2R}C_1^{lm}\mathcal{C}_{l }^{\text{reg}}\Bigl(j_l(kR)+kRj_l'(kR)\Bigr).
\end{split}\end{equation*}Eliminating $A_1^{lm}$ and $C_1^{lm}$, we obtain a relation of the form \begin{align*} \begin{pmatrix} b_1^{lm}\\ d_1^{lm}\end{pmatrix}=-\mathbb{T}_{lm}\begin{pmatrix} a_1^{lm}\\c_1^{lm}\end{pmatrix}, \end{align*}where $\mathbb{T}_{lm}$ is a diagonal matrix: \begin{equation*} \mathbb{T}_{lm}=\begin{pmatrix} T_{lm}^{\text{TE}}&0\\0& T_{lm}^{\text{TM}}\end{pmatrix} \end{equation*}with \begin{equation}\label{eq2_18_5}\begin{split} T_{lm}^{\text{TE}}(i\xi)=& \frac{2\Omega_s R I_{l+\frac{1}{2}}(\kappa R)^2}{1+2\Omega_s RI_{l+\frac{1}{2}}(\kappa R)K_{l+\frac{1}{2}}(\kappa R)},\\ T_{lm}^{\text{TM}}(i\xi)=& -\frac{ 2\Omega_s \left(\frac{1}{2}I_{l+\frac{1}{2}}(\kappa R)+ \kappa RI_{l+\frac{1}{2}}'(\kappa R)\right)^2}{\kappa^2R- 2\Omega_s \left(\frac{1}{2}I_{l+\frac{1}{2}}(\kappa R)+ \kappa RI_{l+\frac{1}{2}}'(\kappa R)\right)\left(\frac{1}{2}K_{l+\frac{1}{2}}(\kappa R)+ \kappa RK_{l+\frac{1}{2}}'(\kappa R)\right)}. \end{split} \end{equation}Here we have replaced $k$ by $i\kappa$ and $\omega$ by $i\xi$. Denote by $\Omega_p$ the parameter characterizing the planar plasma sheet. Matching the boundary conditions \eqref{eq2_18_2} on the plane gives \begin{equation*} \begin{split} &a_2^{\mathbf{k}_{\perp}}+ b_2^{\mathbf{k}_{\perp}}=B_2^{\mathbf{k}_{\perp}},\\ &\sqrt{k^2-k_{\perp}^2}\left(a_2^{\mathbf{k}_{\perp}}- b_2^{\mathbf{k}_{\perp}}+B_2^{\mathbf{k}_{\perp}}\right)=-2i\Omega_p B_2^{\mathbf{k}_{\perp}},\\ &c_2^{\mathbf{k}_{\perp}}-d_2^{\mathbf{k}_{\perp}}=-D_2^{\mathbf{k}_{\perp}},\\ &c_2^{\mathbf{k}_{\perp}}+d_2^{\mathbf{k}_{\perp}}-D_2^{\mathbf{k}_{\perp}}=\frac{2i\Omega_p c^2}{\omega^2}\sqrt{k^2-k_{\perp}^2}D_2^{\mathbf{k}_{\perp}}.
\end{split} \end{equation*}From here, we find that \begin{align*} \begin{pmatrix} a_2^{\mathbf{k}_{\perp}}\\c_2^{\mathbf{k}_{\perp}}\end{pmatrix}=-\widetilde{\mathbb{T}}_{\mathbf{k}_{\perp}}\begin{pmatrix} b_2^{\mathbf{k}_{\perp}}\\ d_2^{\mathbf{k}_{\perp}}\end{pmatrix}, \end{align*} where $\widetilde{\mathbb{T}}_{\mathbf{k}_{\perp}}$ is a diagonal matrix with elements \begin{equation}\label{eq2_19_4} \begin{split} \widetilde{T}_{\mathbf{k}_{\perp}}^{\text{TE}}(i\xi)=&\frac{\Omega_p}{\Omega_p+\sqrt{\kappa^2+k_{\perp}^2}},\\ \widetilde{T}_{\mathbf{k}_{\perp}}^{\text{TM}}(i\xi)=&-\frac{\Omega_p \sqrt{\kappa^2+k_{\perp}^2}}{\Omega_p \sqrt{\kappa^2+k_{\perp}^2}+\kappa^2}. \end{split} \end{equation} The eigenmodes $\omega$ are those for which the boundary conditions give rise to nontrivial solutions of $(A_1^{lm}, C_1^{lm}, B_2^{\mathbf{k}_{\perp}}, D_2^{\mathbf{k}_{\perp}}, a_1^{lm}, b_1^{lm}, c_1^{lm}, d_1^{lm}, a_2^{\mathbf{k}_{\perp}}, b_2^{\mathbf{k}_{\perp}}, c_2^{\mathbf{k}_{\perp}}, d_2^{\mathbf{k}_{\perp}})$.
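As a quick numerical sanity check of \eqref{eq2_19_4}, the planar reflection coefficients can be evaluated directly; in the perfectly conducting limit $\Omega_p\to\infty$ one expects the TE element to tend to $1$ and the TM element to $-1$. A minimal sketch (the function names are ours, not from the text):

```python
import numpy as np

def r_TE(kappa, kperp, Om):
    # TE element of the planar T-matrix, eq. (eq2_19_4)
    q = np.hypot(kappa, kperp)
    return Om / (Om + q)

def r_TM(kappa, kperp, Om):
    # TM element of the planar T-matrix, eq. (eq2_19_4)
    q = np.hypot(kappa, kperp)
    return -Om * q / (Om * q + kappa**2)

# Perfectly conducting limit: TE -> 1, TM -> -1
assert abs(r_TE(1.0, 2.0, 1e12) - 1.0) < 1e-9
assert abs(r_TM(1.0, 2.0, 1e12) + 1.0) < 1e-9
```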
Now \begin{align*} \begin{pmatrix} b_1^{lm}\\ d_1^{lm}\end{pmatrix}=&-\mathbb{T}_{lm}\begin{pmatrix} a_1^{lm}\\c_1^{lm}\end{pmatrix}\\ =&-\mathbb{T}_{lm} H^2\int_{-\infty}^{\infty}\frac{dk_x}{2\pi}\int_{-\infty}^{\infty}\frac{dk_y}{2\pi}\begin{pmatrix}V_{lm, \mathbf{k}_{\perp}}^{\text{TE,TE}} & V_{ lm, \mathbf{k}_{\perp}}^{\text{TE,TM}} \\ V_{ lm, \mathbf{k}_{\perp}}^{\text{TM,TE}} & V_{lm, \mathbf{k}_{\perp}}^{\text{TM,TM}} \end{pmatrix}\begin{pmatrix} a_2^{\mathbf{k}_{\perp}}\\c_2^{\mathbf{k}_{\perp}}\end{pmatrix}\\ =&\mathbb{T}_{lm} H^2\int_{-\infty}^{\infty}\frac{dk_x}{2\pi}\int_{-\infty}^{\infty}\frac{dk_y}{2\pi}\begin{pmatrix}V_{lm, \mathbf{k}_{\perp}}^{\text{TE,TE}} & V_{ lm, \mathbf{k}_{\perp}}^{\text{TE,TM}} \\ V_{ lm, \mathbf{k}_{\perp}}^{\text{TM,TE}} & V_{lm, \mathbf{k}_{\perp}}^{\text{TM,TM}} \end{pmatrix}\widetilde{\mathbb{T}}_{\mathbf{k}_{\perp}}\begin{pmatrix} b_2^{\mathbf{k}_{\perp}}\\ d_2^{\mathbf{k}_{\perp}} \end{pmatrix}\\ =&\mathbb{T}_{lm} H^2\int_{-\infty}^{\infty}\frac{dk_x}{2\pi}\int_{-\infty}^{\infty}\frac{dk_y}{2\pi}\begin{pmatrix}V_{lm, \mathbf{k}_{\perp}}^{\text{TE,TE}} & V_{ lm, \mathbf{k}_{\perp}}^{\text{TE,TM}} \\ V_{ lm, \mathbf{k}_{\perp}}^{\text{TM,TE}} & V_{lm, \mathbf{k}_{\perp}}^{\text{TM,TM}} \end{pmatrix}\widetilde{\mathbb{T}}_{\mathbf{k}_{\perp}}\sum_{l'=1}^{\infty}\sum_{m'=-l'}^{l'}\begin{pmatrix}W_{\mathbf{k}_{\perp}, l'm'}^{\text{TE,TE}} & W_{\mathbf{k}_{\perp}, l'm'}^{\text{TE,TM}} \\ W_{\mathbf{k}_{\perp}, l'm'}^{\text{TM,TE}} & W_{\mathbf{k}_{\perp}, l'm'}^{\text{TM,TM}} \end{pmatrix}\begin{pmatrix} b_1^{l'm'}\\ d_1^{l'm'} \end{pmatrix}.
\end{align*} This shows that the vector $\mathbb{B}$ with $(lm)$-component given by \begin{align*} \begin{pmatrix} b_1^{lm}\\ d_1^{lm}\end{pmatrix} \end{align*} satisfies the relation \begin{align*} \left(\mathbb{I}-\mathbb{M}\right)\mathbb{B}=\mathbb{O}, \end{align*}where the $(lm, l'm')$-element of $\mathbb{M}$ is given by \begin{align*} M_{lm, l'm'}=\mathbb{T}_{lm} H^2\int_{-\infty}^{\infty}\frac{dk_x}{2\pi}\int_{-\infty}^{\infty}\frac{dk_y}{2\pi}\begin{pmatrix}V_{lm, \mathbf{k}_{\perp}}^{\text{TE,TE}} & V_{ lm, \mathbf{k}_{\perp}}^{\text{TE,TM}} \\ V_{ lm, \mathbf{k}_{\perp}}^{\text{TM,TE}} & V_{lm, \mathbf{k}_{\perp}}^{\text{TM,TM}} \end{pmatrix}\widetilde{\mathbb{T}}_{\mathbf{k}_{\perp}} \begin{pmatrix}W_{\mathbf{k}_{\perp}, l'm'}^{\text{TE,TE}} & W_{\mathbf{k}_{\perp}, l'm'}^{\text{TE,TM}} \\ W_{\mathbf{k}_{\perp}, l'm'}^{\text{TM,TE}} & W_{\mathbf{k}_{\perp}, l'm'}^{\text{TM,TM}} \end{pmatrix}. \end{align*}The condition for a nontrivial solution $\mathbb{B}$ is thus given by \begin{align*} \det\left(\mathbb{I}-\mathbb{M}\right)=0. \end{align*} Hence, the Casimir interaction energy between the spherical plasma sheet and the planar plasma sheet is \begin{align}\label{eq2_19_2} E_{\text{Cas}} =\frac{\hbar}{2\pi}\int_0^{\infty} d\xi \text{Tr}\,\ln \left(\mathbb{I}-\mathbb{M}(i\xi)\right)=\frac{\hbar c}{2\pi}\int_0^{\infty} d\kappa \text{Tr}\,\ln \left(\mathbb{I}-\mathbb{M} \right).
\end{align}Setting $k_x=k_{\perp}\cos\phi_k$ and $k_y=k_{\perp}\sin\phi_k$, integrating over $\phi_k$, and making the change of variables $k_{\perp}=\kappa\sinh\theta$, we find that \begin{equation}\label{eq3_3_1}\begin{split} \mathbb{M}_{lm,l'm'}(i\xi)=&\delta_{m,m'} \frac{(-1)^{m}\pi}{2}\sqrt{\frac{(2l+1)(2l'+1)}{l(l+1)l'(l'+1)}\frac{(l-m)!(l'-m)!}{(l+m)!(l'+m)!}} \mathbb{T}_{lm} \int_{0}^{\infty}d\theta \sinh\theta e^{-2\kappa L\cosh\theta}\\& \times \left(\begin{aligned} \sinh\theta P_l^{m\prime}(\cosh\theta)\hspace{0.5cm} &-\frac{m}{\sinh\theta}P_l^m(\cosh\theta)\\ -\frac{m}{\sinh\theta}P_l^m(\cosh\theta) \hspace{0.4cm} & \quad\sinh\theta P_l^{m\prime}(\cosh\theta) \end{aligned}\right)\left(\begin{aligned} \frac{\Omega_p}{\Omega_p+\kappa\cosh\theta} & ~\hspace{1cm} 0\hspace{0.5cm}\\ \hspace{0.5cm} 0\hspace{1cm} & -\frac{\Omega_p \cosh\theta}{\Omega_p \cosh\theta+\kappa}\end{aligned}\right) \\&\times\left(\begin{aligned} \sinh\theta P_{l'}^{m'\prime}(\cosh\theta)\hspace{0.5cm} & \frac{m'}{\sinh\theta}P_{l'}^{m'}(\cosh\theta)\\ \frac{m'}{\sinh\theta}P_{l'}^{m'}(\cosh\theta) \hspace{0.4cm} & \quad\sinh\theta P_{l'}^{m'\prime}(\cosh\theta) \end{aligned}\right). \end{split}\end{equation}Here $P_l^m(z)$ is an associated Legendre function and $P_l^{m\prime}(z)$ is its derivative, whereas $\mathbb{T}_{lm}$ is given by \eqref{eq2_18_5}. Notice that this approach has been formalized mathematically in \cite{45}. The self-energy contributions from the sphere and the plane have automatically dropped out, so \eqref{eq2_19_2} is the interaction energy between the sphere and the plane.
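In practice, the determinant condition and the trace-log in \eqref{eq2_19_2} are evaluated by truncating $\mathbb{M}$ at a finite $l_{\max}$, where $\operatorname{Tr}\ln(\mathbb{I}-\mathbb{M})=\ln\det(\mathbb{I}-\mathbb{M})$ can be computed stably with a log-determinant routine. A toy illustration with a random stand-in matrix (not the physical $\mathbb{M}$; its eigenvalues are kept well inside $(-1,1)$, as for a physical round-trip operator):

```python
import numpy as np

# A random symmetric toy matrix stands in for a truncated M(i*kappa).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
M = 0.05 * (A + A.T)

# Tr ln(I - M) = ln det(I - M); slogdet avoids overflow/underflow in det.
sign, logabsdet = np.linalg.slogdet(np.eye(6) - M)
eigs = np.linalg.eigvalsh(M)
assert sign > 0
assert abs(logabsdet - np.sum(np.log(1 - eigs))) < 1e-9
```

The same identity underlies the $\operatorname{Tr}\ln$ form of the energy: summing $\ln(1-\lambda_i)$ over the eigenvalues of the truncated round-trip matrix at each $\kappa$ and integrating.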
In the limit $\Omega_s\rightarrow\infty$ and $\Omega_p\rightarrow\infty$, we find from \eqref{eq2_18_5} and \eqref{eq3_3_1} that \begin{equation}\label{eq2_18_5_2}\begin{split} T_{lm}^{\text{TE}}(i\xi)=& \frac{ I_{l+\frac{1}{2}}(\kappa R)}{K_{l+\frac{1}{2}}(\kappa R)},\\ T_{lm}^{\text{TM}}(i\xi)=& \frac{\frac{1}{2}I_{l+\frac{1}{2}}(\kappa R)+ \kappa RI_{l+\frac{1}{2}}'(\kappa R)}{\frac{1}{2}K_{l+\frac{1}{2}}(\kappa R)+ \kappa RK_{l+\frac{1}{2}}'(\kappa R)}, \end{split} \end{equation} \begin{equation*}\begin{split} \mathbb{M}_{lm,l'm'}(i\xi)=&\delta_{m,m'} \frac{(-1)^{m}\pi}{2}\sqrt{\frac{(2l+1)(2l'+1)}{l(l+1)l'(l'+1)}\frac{(l-m)!(l'-m)!}{(l+m)!(l'+m)!}} \mathbb{T}_{lm} \int_{0}^{\infty}d\theta \sinh\theta e^{-2\kappa L\cosh\theta}\\& \times \begin{pmatrix} 1 & 0\\0& -1\end{pmatrix} \left(\begin{aligned} \sinh\theta P_l^{m\prime}(\cosh\theta)\hspace{0.5cm} &\frac{m}{\sinh\theta}P_l^m(\cosh\theta)\\ \frac{m}{\sinh\theta}P_l^m(\cosh\theta) \hspace{0.4cm} & \quad\sinh\theta P_l^{m\prime}(\cosh\theta) \end{aligned}\right) \left(\begin{aligned} \sinh\theta P_{l'}^{m'\prime}(\cosh\theta)\hspace{0.5cm} & \frac{m'}{\sinh\theta}P_{l'}^{m'}(\cosh\theta)\\ \frac{m'}{\sinh\theta}P_{l'}^{m'}(\cosh\theta) \hspace{0.4cm} & \quad\sinh\theta P_{l'}^{m'\prime}(\cosh\theta) \end{aligned}\right), \end{split}\end{equation*}which recovers the Casimir interaction energy between a perfectly conducting spherical shell and a perfectly conducting plane \cite{8, 45}. \section{Small separation asymptotic behavior} In this section, we consider the asymptotic behavior of the Casimir interaction energy when $d\ll R$, where $d=L-R$ is the distance between the spherical plasma sheet and the planar plasma sheet. Let $$\varepsilon=\frac{d}{R}$$be the dimensionless parameter, and we consider $\varepsilon\ll 1$. There are also two other length parameters in the problem: $1/\Omega_s$ and $1/\Omega_p$.
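The perfectly conducting limits \eqref{eq2_18_5_2} of the sphere T-matrix elements \eqref{eq2_18_5} can be verified numerically; a sketch assuming SciPy's modified Bessel functions and their derivatives (function names are ours):

```python
import numpy as np
from scipy.special import iv, kv, ivp, kvp

def T_TE(l, kappa, R, Om):
    # TE element of eq. (eq2_18_5)
    z = kappa * R
    return 2*Om*R*iv(l+0.5, z)**2 / (1 + 2*Om*R*iv(l+0.5, z)*kv(l+0.5, z))

def T_TM(l, kappa, R, Om):
    # TM element of eq. (eq2_18_5); z*ivp(...) is kappa*R times dI/dz
    z = kappa * R
    I = 0.5*iv(l+0.5, z) + z*ivp(l+0.5, z)
    K = 0.5*kv(l+0.5, z) + z*kvp(l+0.5, z)
    return -2*Om*I**2 / (kappa**2*R - 2*Om*I*K)

# Perfectly conducting limit Om -> infinity reproduces eq. (eq2_18_5_2)
l, kappa, R, Om = 2, 1.3, 1.0, 1e12
z = kappa * R
pcTE = iv(l+0.5, z) / kv(l+0.5, z)
pcTM = (0.5*iv(l+0.5, z) + z*ivp(l+0.5, z)) / (0.5*kv(l+0.5, z) + z*kvp(l+0.5, z))
assert abs(T_TE(l, kappa, R, Om) - pcTE) < 1e-6
assert abs(T_TM(l, kappa, R, Om) - pcTM) < 1e-6
```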
Let $$\varpi_s=\Omega_s d,\hspace{1cm}\varpi_p=\Omega_pd.$$ They are dimensionless, and we assume that they are of order 1, i.e., $$\varpi_s\sim 1,\hspace{1cm}\varpi_p\sim 1.$$ First we consider the proximity force approximation to the Casimir interaction energy, which approximates the Casimir interaction energy by summing the local plane-plane Casimir energy density over the surfaces. The Casimir interaction energy density between two planar plasma sheets with respective parameters $\Omega_1$ and $\Omega_2$ is given by the Lifshitz formula \cite{50}: \begin{equation*} \mathcal{E}_{\text{Cas}}^{\parallel}(d)=\frac{\hbar c}{4\pi^2}\int_0^{\infty} d\kappa\int_{0}^{\infty} dk_{\perp} k_{\perp} \left[\ln\left(1-r_{\text{TE}}^{(1)}r_{\text{TE}}^{(2)}e^{-2d\sqrt{k_{\perp}^2+\kappa^2}}\right)+\ln\left(1-r_{\text{TM}}^{(1)}r_{\text{TM}}^{(2)}e^{-2d\sqrt{\kappa^2+k_{\perp}^2}}\right)\right]. \end{equation*}Here $d$ is the distance between the two planar sheets, and \begin{align*} r_{\text{TE}}^{(i)}=&\frac{\Omega_i}{\Omega_i+\sqrt{\kappa^2+k_{\perp}^2}},\\ r_{\text{TM}}^{(i)}=&-\frac{\Omega_i\sqrt{\kappa^2+k_{\perp}^2}}{\Omega_i\sqrt{\kappa^2+k_{\perp}^2}+\kappa^2} \end{align*}are nothing but the components of $\widetilde{\mathbb{T}}_{\mathbf{k}_{\perp}}$ given in \eqref{eq2_19_4}.
The proximity force approximation for the Casimir interaction energy between a sphere and a plate is then given by \begin{align*} E_{\text{Cas}}^{\text{PFA}}=&R^2\int_0^{2\pi} d\phi\int_0^{\pi}d\theta\sin\theta \mathcal{E}^{\parallel}_{\text{Cas}}\left(L+R\cos\theta\right)\\ \sim &2\pi R\int_d^{\infty} du \mathcal{E}^{\parallel}_{\text{Cas}}(u)\\ =&-\frac{\hbar c R}{2\pi} \int_0^{\infty} d\kappa\int_0^{\infty} dk_{\perp} k_{\perp}\int_d^{\infty}du\sum_{n=1}^{\infty}\frac{1}{n}\left( \left[r_{\text{TE}}^{(1)}r_{\text{TE}}^{(2)}\right]^n+\left[r_{\text{TM}}^{(1)}r_{\text{TM}}^{(2)}\right]^n\right)e^{-2un\sqrt{\kappa^2+k_{\perp}^2}}\\ =&-\frac{\hbar c R}{4\pi } \int_0^{\infty} d\kappa\int_0^{\infty} dk_{\perp} \frac{k_{\perp}}{\sqrt{\kappa^2+k_{\perp}^2}} \sum_{n=1}^{\infty}\frac{1}{n^2}\left( \left[r_{\text{TE}}^{(1)}r_{\text{TE}}^{(2)}\right]^n+\left[r_{\text{TM}}^{(1)}r_{\text{TM}}^{(2)}\right]^n\right)e^{-2dn\sqrt{\kappa^2+k_{\perp}^2}}\\ =&-\frac{\hbar c R}{4\pi } \int_0^{\infty} d\kappa\int_0^{\infty} dk_{\perp} \frac{k_{\perp}}{\sqrt{\kappa^2+k_{\perp}^2}} \left(\text{Li}_2\left(r_{\text{TE}}^{(1)}r_{\text{TE}}^{(2)} e^{-2d\sqrt{\kappa^2+k_{\perp}^2}}\right)+\text{Li}_2\left(r_{\text{TM}}^{(1)}r_{\text{TM}}^{(2)} e^{-2d\sqrt{\kappa^2+k_{\perp}^2}}\right)\right)\\ =&-\frac{\hbar c R}{4\pi }\int_{0}^{\infty} dq \int_0^{q} d\kappa \left(\text{Li}_2\left(r_{\text{TE}}^{(1)}r_{\text{TE}}^{(2)} e^{-2dq}\right)+\text{Li}_2\left(r_{\text{TM}}^{(1)}r_{\text{TM}}^{(2)} e^{-2dq}\right)\right). \end{align*}Here $\displaystyle \text{Li}_2(z)=\sum_{n=1}^{\infty} \frac{z^n}{n^2}$ is the polylogarithm function of order 2.
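The dilogarithm that resums the geometric series above can be checked against known closed forms; a small sketch (the truncation parameter nmax is our choice):

```python
import numpy as np

def Li2(z, nmax=2000):
    # defining series of the order-2 polylogarithm, valid for |z| <= 1
    n = np.arange(1, nmax + 1)
    return float(np.sum(z**n / n**2))

# Closed forms: Li2(1/2) = pi^2/12 - ln(2)^2/2 and Li2(1) = zeta(2) = pi^2/6
assert abs(Li2(0.5) - (np.pi**2/12 - np.log(2)**2/2)) < 1e-12
assert abs(Li2(1.0, nmax=200_000) - np.pi**2/6) < 1e-4
```

The second assertion converges slowly (the tail of $\sum 1/n^2$ is about $1/n_{\max}$), which is why a much larger cutoff is used at $z=1$.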
Making the change of variables $dq=t$ and $\kappa=q\sqrt{1-\tau^2}= t\sqrt{1-\tau^2}/d$, we finally obtain \begin{equation}\label{eq3_25_2} E_{\text{Cas}}^{\text{PFA}}=-\frac{\hbar c R}{4\pi d^2 }\int_{0}^{\infty} dt \,t \int_0^{1} \frac{d\tau\,\tau}{\sqrt{1-\tau^2}} \left(\text{Li}_2\left(r_{\text{TE}}^{(1)}r_{\text{TE}}^{(2)} e^{-2t}\right)+\text{Li}_2\left(r_{\text{TM}}^{(1)}r_{\text{TM}}^{(2)} e^{-2t}\right)\right), \end{equation}where \begin{align*} r_{\text{TE}}^{(i)}=&\frac{\Omega_i}{\Omega_i+q}=\frac{\varpi_i}{\varpi_i+t},\\ r_{\text{TM}}^{(i)}=&-\frac{\Omega_iq}{\Omega_iq+ \kappa^2}=-\frac{\varpi_i}{\varpi_i+t(1-\tau^2)}. \end{align*} Next, we consider the small separation asymptotic behavior of the Casimir interaction energy up to the next-to-leading order term in $\varepsilon$ from the functional representation \eqref{eq2_19_2}. In \cite{15}, we considered the small separation asymptotic expansion of the Casimir interaction between a magnetodielectric sphere and a magnetodielectric plane. Our present scenario is similar to the one considered in \cite{15}. The major differences are the boundary conditions on the sphere and the plate, which are encoded in the two matrices $\mathbb{T}_{lm}$ and $\widetilde{\mathbb{T}}_{\mathbf{k}_{\perp}}$. Hence, we do not repeat the calculations that have been presented in \cite{15}, but only present the final result and point out the differences.
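Equation \eqref{eq3_25_2} is straightforward to evaluate numerically. As a consistency check, in the perfectly conducting limit $\varpi_1,\varpi_2\to\infty$ the dimensionless double integral tends to $\pi^4/180$, reproducing $E_{\text{Cas}}^{\text{PFA}}=-\hbar c\pi^3 R/720d^2$. A sketch (our own routine; the substitution $\tau=\sin u$ tames the endpoint singularity, and the $t$-integral is truncated where the exponential is negligible):

```python
import numpy as np
from scipy.integrate import dblquad

def Li2(z, nmax=2000):
    # series definition of the order-2 polylogarithm
    n = np.arange(1, nmax + 1)
    return float(np.sum(z**n / n**2))

def integrand(u, t, w1, w2):
    # tau = sin(u) absorbs the 1/sqrt(1 - tau^2) weight in eq. (eq3_25_2)
    tau = np.sin(u)
    rTE = (w1 / (w1 + t)) * (w2 / (w2 + t))
    rTM = (w1 / (w1 + t*(1 - tau**2))) * (w2 / (w2 + t*(1 - tau**2)))
    e = np.exp(-2*t)
    return t * tau * (Li2(rTE * e) + Li2(rTM * e))

# Perfectly conducting limit: the double integral -> pi^4/180
val, _ = dblquad(integrand, 0, 25, 0, np.pi/2, args=(1e9, 1e9))
assert abs(val - np.pi**4/180) < 1e-3
```

For finite $\varpi_i$ the same routine gives the PFA energy in units of $-\hbar cR/4\pi d^2$.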
The leading term and next-to-leading order term of the Casimir interaction energy, $E_{\text{Cas}}^0$ and $E_{\text{Cas}}^1$, are given respectively by \begin{equation}\label{eq3_13_3}\begin{split} E_{\text{Cas}}^0= &-\frac{\hbar c R}{4\pi d^2}\sum_{s=0}^{\infty}\frac{1}{(s+1)^2} \int_0^{\infty}dt\,t \int_0^1 \frac{d\tau\,\tau}{\sqrt{1-\tau^2}} e^{-2t(s+1)} \sum_{* =\text{TE}, \text{TM}}\left[T^{* }_{0}\widetilde{T}^{*}_{0}\right]^{s+1}, \end{split}\end{equation}\begin{equation}\label{eq3_18_1}\begin{split} E_{\text{Cas}}^1 =&-\frac{\hbar c }{4\pi d}\sum_{s=0}^{\infty}\frac{1}{(s+1)^2} \int_0^{\infty}dt\,t \int_0^1 \frac{d\tau\,\tau}{ \sqrt{1-\tau^2}}e^{-2t(s+1)} \Biggl\{\sum_{* =\text{TE}, \text{TM}}\left[T^{* }_{0}\widetilde{T}^{*}_{0}\right]^{s+1} \left(\mathscr{A}+\mathscr{C}^{*}+\mathscr{D}^{*}\right)+ \mathscr{B} \Biggr\}. \end{split}\end{equation}Here \begin{align*} T^{\text{TE} }_{0}=&\frac{ \varpi_s}{ \varpi_s+ t },\\ T^{\text{TM} }_{0}=& \frac{ \varpi_s}{ \varpi_s+ t\left(1-\tau^2\right)},\\ \widetilde{T}^{\text{TE} }_{0}=&\frac{ \varpi_p}{ \varpi_p+ t },\\ \widetilde{T}^{\text{TM} }_{0}=& \frac{ \varpi_p}{ \varpi_p+ t\left(1-\tau^2\right)}, \\ \mathscr{A}=&\frac{ t\tau^2}{3}\left((s+1)^3 +2(s+1)\right)+ \frac{1}{3}\left((\tau^2-2)(s+1)^2-3\tau (s+1)+2\tau^2-1\right)\\ &+\frac{\tau^4+\tau^2-12}{12t\tau^2}(s+1)+\frac{(1+\tau)(1-\tau^2)}{2t\tau^2}-\frac{ (1-\tau^2)}{3t }\frac{1}{s+1},\\ \mathscr{B}=&\frac{ 1-\tau^2}{2t\tau^2 }\left\{\left(T^{\text{TE} }_{0}\widetilde{T}_0^{\text{TM}}+T^{\text{TM} }_{0}\widetilde{T}_0^{\text{TE}}\right) \frac{\left[T^{\text{TE} }_{0}\widetilde{T}^{\text{TE} }_{0}\right]^{s+1 } -\left[T^{\text{TM} }_{0}\widetilde{T}^{\text{TM} }_{0}\right]^{s+1} }{T_0^{\text{TE}}\widetilde{T}_0^{\text{TE}}- T_0^{\text{TM}}\widetilde{T}_0^{\text{TM}}}\right.\\&\left.\hspace{2cm}+2T^{\text{TE} }_{0}\widetilde{T}_0^{\text{TE}}T^{\text{TM} }_{0}\widetilde{T}_0^{\text{TM}} \frac{\left[T^{\text{TE} }_{0}\widetilde{T}^{\text{TE}
}_{0}\right]^{s } -\left[T^{\text{TM} }_{0}\widetilde{T}^{\text{TM} }_{0}\right]^{s} }{T_0^{\text{TE}}\widetilde{T}_0^{\text{TE}}- T_0^{\text{TM}}\widetilde{T}_0^{\text{TM}}}\right\},\\ \mathscr{C}^{*}=& C_{V } \mathcal{K}^{*}_1+C_J \mathcal{W}^{*}_1,\\ \mathscr{D}^{*}=&D_{VV} \mathcal{K}^{*2}_1+D_{VJ} \mathcal{K}^{* }_1 \mathcal{W}^{*}_1+D_{JJ} \mathcal{W}^{*2}_1+ D_V \mathcal{K}^{*}_2 + D_J \mathcal{W}^{*}_2+ (s+1) \mathcal{Y}^{*}_2, \end{align*}with \begin{align*} C_{V}=&-\frac{ \tau}{3}\left((s+1)^3+2(s+1)\right)+\frac{1-\tau^2}{6t\tau}(s+1)^2+\frac{1}{2t}(s+1)+\frac{1-4\tau^2}{12t\tau},\\ C_J=&-\frac{t\tau}{3}\left((s+1)^3-(s+1)\right)+\frac{1}{6\tau}\left((s+1)^2-1\right),\\ D_{VV}=&\frac{1}{12t}\left((s+1)^3-2(s+1)^2+2(s+1)-1\right),\\ D_{JJ}=&\frac{t}{12}\left((s+1)^3-2(s+1)^2-(s+1)+2\right),\\ D_{VJ}=&\frac{1}{6}\left((s+1)^3- (s+1) \right),\\ D_{V}=&\frac{1}{6t}\left(2(s+1)^2 +1\right),\\ D_{J}=&\frac{t}{3}\left( (s+1)^2- 1\right), \\ \mathcal{K}^{\text{TE}}_{1}=& - \frac{t\tau }{ \varpi_p+ t },\\ \mathcal{K}^{\text{TE}}_{2}=&-\frac{t\left( \varpi_p+t\left(1-2\tau^2\right)\right)}{2\left( \varpi_p+t \right)^2},\\ \mathcal{K}^{\text{TM}}_{1}=& \frac{t\left(1-\tau^2\right) }{ \varpi_p+ t\left(1-\tau^2\right)},\\ \mathcal{K}^{\text{TM}}_{2}=& \frac{t\left(1-\tau^2\right)\left( \varpi_p \left(1-2\tau^2\right)+ t\left(1-\tau^2\right) \right)}{2\left( \varpi_p+ t\left(1-\tau^2\right)\right)^2}, \\ \mathcal{W}_{ 1}^{\text{TE}}=&-\frac{\tau}{\varpi_s+ t },\\ \mathcal{W}_{ 2}^{\text{TE}}=&-\frac{\left(t(1-3\tau^2)+\varpi_s\left(1-\tau^2\right)\right)}{2t\left(\varpi_s+t\right)^2},\\ \mathcal{Y}_{ 2}^{\text{TE}}=&-\frac{\tau}{2\left(\varpi_s+t\right)}+\frac{1}{t}\left(\frac{1}{4}-\frac{5\tau^2}{12}\right),\\ \mathcal{W}_{ 1}^{\text{TM}}=&\frac{\tau(1-\tau^2)}{ \varpi_s+t\left(1-\tau^2\right)},\\ \mathcal{W}_{ 2}^{\text{TM}}=&\frac{\left(1-\tau^2\right)\left(t(1-\tau^2)^2+\varpi_s(1-3\tau^2)\right)}{2t\left(\varpi_s+ t\left(1-\tau^2\right)\right)^2},\\ \mathcal{Y}_{ 
2}^{\text{TM}}=&\frac{\tau\left(1-\tau^2\right)}{2\left( \varpi_s+t\left(1-\tau^2\right)\right)}+\frac{1}{t}\left(\frac{1}{4}+\frac{7\tau^2}{12}\right). \end{align*}We have replaced the $l$ in \cite{15} with $t\tau/\varepsilon$. The definitions of $\mathscr{B}$, $\mathscr{D}$, $C_V, C_J, D_{VV}, D_{VJ}, D_{JJ}$ are slightly different from those in \cite{15}. For $*=$ TE or TM, $\mathcal{K}_1^*, \mathcal{K}_2^*, \mathcal{W}_1^*, \mathcal{W}_2^*$ and $\mathcal{Y}_2^*$ are obtained from the asymptotic expansions of $\mathbb{T}_{lm}$ and $\widetilde{\mathbb{T}}_{\mathbf{k}_{\perp}}$. Hence, they are different from those obtained in \cite{15}. Using the polylogarithm function, we can rewrite the leading term $E_{\text{Cas}}^0$ \eqref{eq3_13_3} as \begin{equation}\label{eq2_25_1}\begin{split} E_{\text{Cas}}^0= &-\frac{\hbar c R}{4\pi d^2} \int_0^{\infty}dt\,t \int_0^1 \frac{d\tau\,\tau}{\sqrt{1-\tau^2}} \left(\text{Li}_2\left( T^{\text{TE} }_{0}\widetilde{T}^{\text{TE}}_{0} e^{-2t}\right)+\text{Li}_2\left( T^{\text{TM} }_{0}\widetilde{T}^{\text{TM}}_{0} e^{-2t}\right)\right). \end{split}\end{equation}It is easy to see that this coincides with the proximity force approximation \eqref{eq3_25_2} when $\varpi_s=\varpi_1$ and $\varpi_p=\varpi_2$. Notice that the leading term $E_{\text{Cas}}^0$ can be split into a sum of TE and TM contributions. However, because of the $\mathscr{B}$-term, the next-to-leading order term $E_{\text{Cas}}^1$ \eqref{eq3_18_1} cannot be split into TE and TM contributions.
In the limit $\varpi_p, \varpi_s\rightarrow\infty$, which corresponds to perfectly conducting boundary conditions on the sphere and the plate, we find that for $*=$ TE or TM, $\mathcal{K}^{*}_{1}, \mathcal{K}^{*}_{2}, \mathcal{W}^{*}_{1}, \mathcal{W}^{*}_{2}$ vanish, $T^{*}_{0}=\widetilde{T}^{*}_0=1$, and \begin{align*} \mathscr{B}=&\frac{ \left(1-\tau^2\right)}{2t\tau^2 }(4s+2),\\ \mathcal{Y}_{ 2}^{\text{TE}}=&\frac{1}{t}\left(\frac{1}{4}-\frac{5\tau^2}{12}\right),\\ \mathcal{Y}_{ 2}^{\text{TM}}=& \frac{1}{t}\left(\frac{1}{4}+\frac{7\tau^2}{12}\right). \end{align*} Hence, \begin{align*} E_{\text{Cas}}^0= &-\frac{\hbar c R}{2\pi d^2}\sum_{s=0}^{\infty}\frac{1}{(s+1)^2} \int_0^{\infty}dt\,t \int_0^1 \frac{d\tau\,\tau}{\sqrt{1-\tau^2}} e^{-2t(s+1)}\\ =&-\frac{\hbar c R}{8\pi d^2}\sum_{s=0}^{\infty}\frac{1}{(s+1)^4} \\ =&-\frac{\hbar c\pi^3 R}{720 d^2}, \\ E_{\text{Cas}}^1 =&-\frac{\hbar c }{4\pi d}\sum_{s=0}^{\infty}\frac{1}{(s+1)^2} \int_0^{\infty}dt\,t \int_0^1 \frac{d\tau\,\tau}{ \sqrt{1-\tau^2}}e^{-2t(s+1)} \left(2\mathscr{A}+ \mathscr{B}+(s+1)\mathcal{Y}_2^{\text{TE}}+(s+1)\mathcal{Y}_2^{\text{TM}}\right)\\ =&-\frac{\hbar c }{4\pi d}\sum_{s=0}^{\infty}\frac{1}{(s+1)^2} \left(\frac{1}{6(s+1)^2}-\frac{2}{3}\right)\\ =&E_{\text{Cas}}^0\left(\frac{1}{3}-\frac{20}{\pi^2}\right)\frac{d}{R}. \end{align*}These recover the results for the case where both the sphere and the plane are perfectly conducting \cite{35}. \begin{figure} \caption{\label{f1}} \end{figure} \begin{figure} \caption{\label{f2}} \end{figure} \begin{figure} \caption{\label{f3}} \end{figure} Next, we consider the special case where we have a spherical graphene sheet in front of a planar graphene sheet. The parameters $\Omega_s$ and $\Omega_p$ are both equal to $6.75\times 10^5 \text{m}^{-1}$ (see Ref. \cite{50}). Assume that the radius of the spherical graphene sheet is $R=1$ mm.
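The closed-form sums used in the perfectly conducting limit above are easy to confirm numerically, including the ratio $1/3-20/\pi^2$ between $E_{\text{Cas}}^1$ and $E_{\text{Cas}}^0\,d/R$:

```python
import numpy as np

n = np.arange(1, 200_001, dtype=float)

# E_Cas^0: -(hbar c R / 8 pi d^2) * zeta(4) = -(hbar c pi^3 R)/(720 d^2)
S4 = np.sum(1/n**4)
assert abs(S4 - np.pi**4/90) < 1e-10

# E_Cas^1 involves S1 = sum (1/n^2)(1/(6 n^2) - 2/3); the ratio
# theta = (E^1/E^0)(R/d) works out to 2*S1/S4 = 1/3 - 20/pi^2
S1 = np.sum((1/n**2) * (1/(6*n**2) - 2/3))
theta = 2*S1/S4
assert abs(theta - (1/3 - 20/np.pi**2)) < 1e-4
```

The factor $2S_1/S_4$ follows from the prefactors $-\hbar c/4\pi d$ and $-\hbar cR/8\pi d^2$ of the two sums.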
Let \begin{align*} E_{\text{Cas}}^{\text{PFA,PC}}=-\frac{\hbar c\pi^3 R}{720 d^2} \end{align*}be the leading term of the Casimir interaction between a perfectly conducting sphere and a perfectly conducting plane. In Fig. \ref{f1}, we plot the ratio of the leading term of the Casimir interaction energy $E_{\text{Cas}}^0$ to $E_{\text{Cas}}^{\text{PFA,PC}}$, and the ratio of the sum of the leading term and next-to-leading order term $\left(E_{\text{Cas}}^0+ E_{\text{Cas}}^1\right)$ to $E_{\text{Cas}}^{\text{PFA,PC}}$. The ratio of $\left(E_{\text{Cas}}^0+ E_{\text{Cas}}^1\right)$ to $E_{\text{Cas}}^0$ is plotted in Fig. \ref{f2}. From these graphs, we can see that the next-to-leading order term gives a significant correction when $d/R\sim 0.1$. Another important quantity that characterizes the correction to the proximity force approximation is \begin{align*} \theta=\frac{E^{1}_{\text{Cas}}}{E_{\text{Cas}}^0}\frac{R}{d}, \end{align*} so that \begin{align*} E_{\text{Cas}}=E_{\text{Cas}}^0\left(1+\frac{d}{R}\theta+\ldots\right). \end{align*}In the case of a perfectly conducting sphere and plane, $\theta$ is a pure number given by \cite{35}: \begin{align}\label{eq2_27_1}\theta=\frac{1}{3}-\frac{20}{\pi^2}=-1.69.\end{align} In Fig. \ref{f3}, we plot $\theta$ as a function of $d$ for a spherical graphene sheet in front of a planar graphene sheet. We observe that its variation pattern is significantly different from that of a gold sphere and a gold plane modeled by the plasma and Drude models, which we studied in \cite{15}. Nevertheless, when $d$ is large enough, $\theta$ approaches the limiting value \eqref{eq2_27_1}. \begin{figure} \caption{\label{f4}} \end{figure} \begin{figure} \caption{\label{f5}} \end{figure} \begin{figure} \caption{\label{f6}} \end{figure} To study the dependence of the Casimir interaction energy on the parameters $\Omega_s$ and $\Omega_p$, we plot in Fig. \ref{f4} and Fig.
\ref{f5} respectively the ratio $E_{\text{Cas}}^0/E_{\text{Cas}}^{\text{PFA,PC}}$ and the ratio $(E_{\text{Cas}}^0+E_{\text{Cas}}^1)/E_{\text{Cas}}^{\text{PFA,PC}}$ as functions of $d$ for various values of $\Omega_s$ and $\Omega_p$. The variation of $\theta$ is plotted in Fig. \ref{f6}. It is observed that the larger $\Omega$ is, the larger the Casimir interaction energy. The behavior of $\theta$ shown in Fig. \ref{f6} is more interesting. It is observed that it has a minimum, which appears at $d\sim\Omega^{-1}$ when $\Omega_s=\Omega_p=\Omega$. \pagebreak \section{Conclusion} We study the Casimir interaction between a spherical object and a planar object made of materials that can be modeled as plasma sheets. The functional representation of the Casimir interaction energy is derived. It is then used to study the small separation asymptotic behavior of the Casimir interaction. The leading term of the Casimir interaction is confirmed to agree with the proximity force approximation. The analytic formula for the next-to-leading order term is computed based on a previously established perturbation analysis \cite{15}. The special case where the spherical object and the planar object are graphene sheets is considered. The results are found to be quite different from those of the metallic sphere-plane configuration when the separation between the sphere and the plane is small. This may suggest a new experimental setup to test the Casimir effect. It also has potential applications in nanotechnology. \begin{acknowledgments}\noindent This work is supported by the Ministry of Higher Education of Malaysia under FRGS grant FRGS/1/2013/ST02/UNIM/02/2. I would like to thank M. Bordag for proposing this question. \end{acknowledgments} \end{document}
\begin{document} \newcommand{\eqr}[1]{(\ref{#1})} \pagestyle{plain} \pagenumbering{roman} \thispagestyle{empty} \begin{center} \vspace*{1in} {\Large CLASSIFICATION AND ANALYSIS OF LOW INDEX MEAN CURVATURE FLOW SELF-SHRINKERS}\\ {\large by\\ Caleb Hussey}\\ A dissertation submitted to The Johns Hopkins University in conformity with the requirements for the degree of Doctor of Philosophy\\ Baltimore, Maryland\\ June, 2012\\ \copyright Caleb Hussey 2012\\ All rights reserved\\ \end{center} \newtheorem{thm}[equation]{Theorem} \newtheorem{cor}[equation]{Corollary} \newtheorem{Que}[equation]{Question} \newtheorem{lem}[equation]{Lemma} \newtheorem{Con}[equation]{Conjecture} \newtheorem{Pro}[equation]{Proposition} \theoremstyle{definition} \newtheorem{defn}[equation]{Definition} \newtheorem{Exa}[equation]{Example} \newtheorem{rem}[equation]{Remark} \newpage \chapter*{Abstract} \addcontentsline{toc}{chapter}{\protect\numberline{}Abstract} We investigate Mean Curvature Flow self-shrinking hypersurfaces with polynomial growth. It is known that such self-shrinkers are unstable. We focus mostly on self-shrinkers of the form $\mathbb S^k\times{\mathbb R}^{n-k}\subset {\mathbb R}^{n+1}$. We use a connection between the stability operator and the quantum harmonic oscillator Hamiltonian to find all eigenvalues and eigenfunctions of the stability operator on these self-shrinkers. We also show self-shrinkers of this form have lower index than all other complete self-shrinking hypersurfaces. In particular, they have finite index. This implies that the ends of such self-shrinkers must be stable.
We look for the largest stable regions of these self-shrinkers. \textsc{Readers:} William P. Minicozzi II (Advisor), Chikako Mese, and Joel Spruck. \newpage \chapter*{Acknowledgments} \addcontentsline{toc}{chapter}{\protect\numberline{}Acknowledgments} \noindent I would first and foremost like to thank my advisor, Dr. William Minicozzi, for his patience and advice. This dissertation could not have been written without him. I also want to thank my friend and colleague Sinan Ariturk for numerous fruitful discussions. I owe many thanks to my family and friends for their love, support, and distractions. Finally, this dissertation is dedicated to Claudia, for keeping me happy and motivated. \newpage \tableofcontents \listoffigures \newpage \pagenumbering{arabic} \chapter{Introduction} Mean Curvature Flow (MCF) is a nonlinear second order differential equation, so we are guaranteed the short-time existence of solutions \cite{HP}. However, unlike the heat equation, solutions of MCF with initially smooth data often develop singularities. Understanding these singularities and extending MCF past them have historically been the greatest impediments to the study of MCF. A surface flowing by mean curvature tends to decrease in area. In fact, it is possible to show that MCF is the $L^2$-gradient flow of the area functional, meaning that in some sense flowing a surface by its mean curvature decreases the surface's area as quickly as possible. Because of this, it makes sense that the surfaces that don't change under MCF are precisely minimal surfaces. This is also clear from the fact that minimal surfaces have mean curvature $H=0$. When applied to curves in the plane, MCF is called the curve shortening flow. The singularities that can develop from the curve shortening flow are well understood thanks to the work of Abresch-Langer, Grayson, Gage-Hamilton, and others.
Grayson showed that any smooth simple closed curve in the plane will evolve under the curve shortening flow into a smooth convex curve \cite{G1}. It had already been proven by Gage and Hamilton that any smooth convex curve will remain smooth until it disappears in a round point \cite{GH}. Abresch and Langer found a family of non-embedded plane curves that evolve under the curve shortening flow by holding their shape and shrinking until they become extinct at singularities \cite{AL}. These curves remain non-embedded until they become extinct, meaning that these singularities are different from the ``round points'' arising from simple closed curves evolving under the curve shortening flow. In both of the examples above, singularities can be modeled by self-shrinking curves, that is, by curves that change by dilations when they flow by mean curvature. It follows from the work of Huisken in \cite{H} (see Section \ref{58} below) that as a hypersurface flowing by mean curvature in any dimension approaches a singularity, it asymptotically approaches a self-shrinker. Thus, there is great interest in understanding the possible MCF self-shrinkers, since these describe all the possible singularities of MCF. Colding and Minicozzi have recently proven that the only mean curvature flow singularities that cannot be perturbed away are the sphere and generalized cylinders given by ${\mathbb S}^k\times{\mathbb R}^{n-k}$ with $1\leq k\leq n$ \cite{CM1}. When $k=0$ the product space above is simply a hyperplane, which is a self-shrinker but of course does not give rise to any singularity. In this dissertation we focus on these same self-shrinkers, allowing the $k=0$ hyperplane case. In Chapter \ref{59} we define MCF and self-shrinkers and discuss known examples of self-shrinkers. We also define what we mean by stability of a self-shrinker and present some necessary background results.
In Chapter \ref{26} we find the spectrum and the eigenfunctions of the stability operator on all self-shrinking hypersurfaces of the form ${\mathbb S}^k\times{\mathbb R}^{n-k}$. We also prove that these self-shrinkers have strictly lower stability index than all other self-shrinkers. In Chapter \ref{24} we investigate the largest stable regions of these generalized cylinders using two techniques. First, we apply the results of Chapter \ref{26} to find several stable regions, using the fact that a stability operator eigenfunction with eigenvalue $0$ is a Jacobi function. Then we look for stable regions with a rotational symmetry. In Chapter \ref{60} we solve a minimization problem to obtain stable portions of a family of surfaces recently discovered by Kleene and M{\o}ller \cite{KM}. We use the stability of these regions to find a larger stable region of the plane in ${\mathbb R}^3$ than we were able to find in Chapter \ref{24}. In Chapter \ref{61} we discuss a few open questions for future research. \chapter{Background}\label{59} \section{Preliminaries} Throughout, we will assume the following. Let $\Sigma^n\subset {\mathbb R}^{n+1}$ be an orientable connected and differentiable hypersurface with unit normal $\vec{n}$. We will only consider the case where $n\geq 2$. When the dimension of $\Sigma^n$ is clear or unimportant, we will sometimes omit the $n$ and write simply $\Sigma$. When possible, we will choose the unit normal $\vec{n}$ to be outward pointing. The mean curvature $H$ of $\Sigma$ is defined by $H=\text{div} (\vec{n})$. We will only consider hypersurfaces $\Sigma$ with polynomial volume growth. This means that there exists a point $p\in\Sigma$ and constants $C$ and $d$ such that for all $r\geq1$ $$\text{Vol}(B_r(p)\cap\Sigma)\leq C r^d,$$ where $B_r(p)$ denotes the ball of radius $r$ about the point $p$. We let $A$ denote the second fundamental form on $\Sigma$.
If $\{\vec{e_i}\}_{i=1}^n$ is an orthonormal frame for $\Sigma$, the components of $A$ with respect to $\{\vec{e_i}\}_{i=1}^n$ are given by \begin{equation}\label{34} a_{ij}=\langle \nabla_{\vec{e_i}}\vec{e_j},\vec n\rangle \end{equation} Differentiating $\langle\vec{e_j},\vec n\rangle=0$, we obtain \begin{equation}\label{35} \nabla_{\vec{e_i}}\vec n=-a_{ij}\vec{e_j} \end{equation} Thus the mean curvature $H=-a_{ii}$. Here, as in the rest of the paper, we use the convention that repeated indices are summed over. With this definition, the mean curvature of a sphere of radius $r$ is given by $H=\frac 2 r$, and the mean curvature of a cylinder of radius $r$ is given by $H=\frac 1 r$ \cite{CM1}. Note that with this definition, the mean curvature is the sum of the principal curvatures, rather than their average. Unless otherwise stated, $\langle\cdot,\cdot\rangle$ denotes the Euclidean inner product. We also let $B^{\Sigma}_r(p)$ denote the geodesic ball in $\Sigma$ about the point $p\in\Sigma$. In other words, $B^{\Sigma}_r(p)$ denotes the set of all points in $\Sigma$ whose intrinsic distance from $p$ is less than $r$. The standard ball of radius $r$ about the point $p$ in Euclidean $n$-space will be denoted by $B^n_r(p)$, or simply by $B_r(p)$ when the dimension is clear from context. \section{Mean Curvature Flow (MCF)}\label{58} The idea of mean curvature flow is to start with a surface $\Sigma$, and at every point move $\Sigma$ in the direction $-\vec{n}$ with speed $H$. Note that for a hypersurface this process does not depend on our choice of $\vec n$, since replacing $\vec n$ by its opposite will change the sign of $H=\text{div}(\vec n)$. \begin{defn}\cite{CM1} Let $\Sigma_t$ be a family of hypersurfaces in ${\mathbb R}^3$. We say $\Sigma_t$ flows by mean curvature if $$(\partial_t \vec{x})^\bot=-H\vec{n}.$$ Here $(\partial_t \vec{x})^\bot$ denotes the component of $\partial_t \vec{x}$ which is perpendicular to $\Sigma_t$ at $\vec{x}$.
\end{defn} This restriction to the normal component is reasonable, because tangential components of $\partial_t\vec{x}$ correspond to internal reparameterizations of $\Sigma_t$, which do not affect the geometry of the surface. Indeed, by changing the parameterizations of $\Sigma_t$, $\partial_t\vec{x}$ can be made perpendicular to $\Sigma_t.$ Note that minimal surfaces satisfy $H=0$ and are thus invariant under MCF. If $\Sigma$ is a minimal surface, and $\Sigma_t$ is any smooth family of parameterizations of $\Sigma$, then $\Sigma_t$ flows by mean curvature. We now compute the simplest non-static solution of MCF. This is a round sphere which shrinks until it disappears in finite time at a singularity. An equivalent computation would work for hyperspheres in arbitrary dimension. \begin{Exa}\label{1} Consider the case of a round sphere of radius $r$ in ${\mathbb R}^3.$ At every point the mean curvature of this sphere is given by $H=\frac 2 r$. As $r\rightarrow 0$, $H\rightarrow \infty$. Since $(\partial_t\vec x)^\bot=-H\vec n$, the sphere is shrinking, and the rate at which the sphere shrinks grows as $r\rightarrow 0$. Thus, the sphere becomes extinct in finite time. By the symmetry of the sphere, it is clear that as the surface flows by mean curvature it remains a sphere until it reaches the singularity and vanishes. Thus we can model the evolution of the surface by an ordinary differential equation for the radius $r=r(t)$. $$(\partial_t\vec x)^\bot=\frac{dr}{dt}\vec{n}=-H\vec{n}$$ Substituting in for $H$ yields $$\frac{dr}{dt}=-\frac 2 r$$ which has the solution $r=\sqrt{c-4t}$. If we want to determine the constant of integration $c$, then we need to choose a convention. Clearly the point where $r=0$ corresponds to the singularity, so specifying the time at which the singularity occurs is equivalent to choosing the constant $c$. It is standard to choose the constant so that the singularity occurs at time $t=0$.
That makes the constant 0, so our solution becomes $r=\sqrt{-4t}$. Note that at the time $t=-1$ our sphere has radius 2. \end{Exa} MCF is a nonlinear parabolic flow, so its solutions satisfy a maximum principle \cite{N}. This means that if two complete hypersurfaces are initially disjoint, then they will remain disjoint when allowed to flow by mean curvature. This implies via the following argument that the development of singularities in MCF is a widespread phenomenon. Let $\Sigma$ be any closed surface in ${\mathbb R}^3$, meaning that $\Sigma$ is compact and has no boundary. Then there exists some $r$ such that $\Sigma$ is completely contained in $B_r(0)$. Letting $S_r=\partial B_r(0)$ be the boundary of this ball, we see that $S_r$ and $\Sigma$ are disjoint. Then by the maximum principle they must stay disjoint when they flow by mean curvature. However, by Example \ref{1} we know $S_r$ becomes extinct in finite time. Thus, $\Sigma$ must become extinct at a singularity before $S_r$ does. Thus, all closed surfaces in ${\mathbb R}^3$ become singular in finite time. We now prove a well known theorem with some important implications for the study of MCF singularities. We follow the conventions used in \cite{CM1}. \begin{thm}[Huisken's Monotonicity Formula]\cite{H} Let $\Sigma_t$ be a surface flowing by mean curvature. Define $\Phi (\vec x,t)=[-4\pi t]^{-\frac n 2 }e^{\frac {|\vec x|^2}{4t}}$, and set $\Phi_{(\vec {x_0},t_0)}=\Phi(\vec x-\vec {x_0},t-t_0)$. Then $$\frac d{dt}\int_{\Sigma_t}\Phi_{(\vec {x_0},t_0)}d\mu=-\int_{\Sigma_t} \Bigl| H\vec{n}-\frac{(\vec x-\vec {x_0})^\bot}{2(t_0-t)}\Bigr|^2\Phi_{(\vec {x_0},t_0)}d\mu.$$ \end{thm} \begin{proof}\cite{H} We begin by setting $\vec y=\vec x-\vec {x_0}$ and $\tau=t-t_0$. We will also set $\tilde\Phi = \Phi_{(\vec {x_0},t_0)}$ to simplify notation. Then when we differentiate $\int_{\Sigma_t}\tilde\Phi d\mu$ we obtain an extra term coming from how $d\mu$ changes as the surface flows as a function of $t$.
In general (see e.g. \cite{CM1}), if $f\vec n$ is a variation of $\Sigma$, then $$(d\mu)'=fH d\mu.$$ Since $\Sigma$ is flowing by mean curvature, we have that $(d\mu)'=-H^2d\mu$. Now we compute. \begin{align} \tilde\Phi \notag &= (-4\pi\tau)^{-\frac n 2}e^{\frac{|\vec y|^2}{4\tau}} \\ \frac d{dt}\int\limits_{\Sigma_t}\tilde\Phi d\mu \notag &= \int\limits_{\Sigma_t} \left[\frac d{dt}\tilde\Phi - H^2\tilde\Phi \right]d\mu \\ \frac d{dt}\int\limits_{\Sigma_t}\tilde\Phi d\mu \notag &= \int\limits_{\Sigma_t} \left[\frac {\partial}{\partial t}\tilde\Phi +\left\langle \nabla\tilde\Phi,\frac{d\vec x}{dt} \right\rangle -H^2\tilde\Phi \right]d\mu \\ \frac{d\vec y}{dt} \notag &= \frac{d\vec x}{dt} = -H\vec n \\ \frac d{dt}\int\limits_{\Sigma_t}\tilde\Phi d\mu \notag &= \int\limits_{\Sigma_t} \left[-\frac n{2\tau}-\frac{|\vec y|^2}{4\tau^2}-\frac{H}{2\tau}\langle\vec y,\vec n\rangle -H^2 \right]\tilde\Phi d\mu \\ \frac d{dt}\int\limits_{\Sigma_t}\tilde\Phi d\mu \notag &= -\int\limits_{\Sigma_t}\tilde\Phi \left| H\vec n+\frac{\vec y}{2\tau} \right|^2 d\mu + \int\limits_{\Sigma_t} \frac{H}{2\tau}\langle \vec y,\vec n\rangle\tilde \Phi d\mu -\int\limits_{\Sigma_t}\frac n{2\tau}\tilde\Phi d\mu \end{align} However, for any vector field $\vec v$ we can write $\text {div}_{\Sigma} \vec v=\text {div}_\Sigma \vec v^\top +H\langle\vec v,\vec n\rangle$. We fix a value of $t$ and consider $\Sigma(r)=B_r(p)\cap\Sigma_t$. Note that as $r\rightarrow \infty$, $\Sigma(r)\rightarrow\Sigma_t$. We then choose a specific vector field $\vec v$ and apply Stokes' Theorem. We let $\vec \nu$ denote the unit vector field normal to $\partial \Sigma(r)$ and tangent to $\Sigma(r)$, and let $d\tilde\mu$ denote the measure on $\partial \Sigma(r)$.
\begin{align} \vec v \notag &= \frac 1 {2\tau}\tilde\Phi\vec y \\ \int\limits_{\Sigma(r)}\text {div}_\Sigma \vec v^\top d\mu \notag &= \int\limits_{\partial\Sigma(r)} \langle \vec v,\vec\nu\rangle d\tilde\mu \\ \int\limits_{\Sigma(r)}\text {div}_\Sigma \vec v^\top d\mu \notag &= \int\limits_{\partial\Sigma(r)} \frac 1 {2\tau}\tilde\Phi\langle \vec y,\vec\nu\rangle d\tilde\mu \\ \int\limits_{\Sigma_t}\text {div}_\Sigma \vec v^\top d\mu \notag &= \lim\limits_{r\rightarrow\infty} \int\limits_{\partial\Sigma(r)} \frac 1 {2\tau}\tilde\Phi\langle \vec y,\vec\nu\rangle d\tilde\mu \\ \int\limits_{\Sigma_t}\text {div}_\Sigma \vec v^\top d\mu \notag &= 0 \end{align} In the last equality we used the fact that $\Sigma_t$ has polynomial volume growth, so the exponential decay of $\tilde\Phi$ dominates in the limit. We let $\{\vec {e_i}\}_{i=1}^n$ be an orthonormal frame for $\Sigma_t$ and continue. \begin{align}\int\limits_{\Sigma_t}\text {div}_{\Sigma} \left(\frac 1 {2\tau}\tilde\Phi\vec y\right)d\mu \notag &=\int\limits_{\Sigma_t}H\langle\frac 1 {2\tau}\tilde\Phi\vec y,\vec n\rangle d\mu \\ \int\limits_{\Sigma_t} \frac{H}{2\tau}\tilde\Phi \langle \vec y,\vec n\rangle d\mu \notag &= \int\limits_{\Sigma_t}\frac 1{2\tau}\text {div}_{\Sigma} (\tilde\Phi\vec y)d\mu \\ \nabla_{\vec{e_i}} (\tilde\Phi\langle\vec y,\vec{e_i}\rangle) \notag &= \tilde\Phi \langle\vec {e_i},\vec{e_i}\rangle + \tilde\Phi \langle\vec y,\vec{e_i}\rangle \frac{\langle\vec y,\vec{e_i}\rangle}{2\tau} \\ \frac 1{2\tau}\text {div}_{\Sigma} (\tilde\Phi\vec y) \notag &= \frac n{2\tau}\tilde\Phi + \frac{|\vec y^\top|^2}{4\tau^2}\tilde\Phi \\ \int\limits_{\Sigma_t} \frac{H}{2\tau}\tilde\Phi \langle \vec y,\vec n\rangle d\mu \notag &= \int\limits_{\Sigma_t}\left(\frac n{2\tau} + \frac{|\vec y^\top|^2}{4\tau^2} \right)\tilde\Phi d\mu \\ \frac d{dt}\int\limits_{\Sigma_t}\tilde\Phi d\mu \notag &=
-\int\limits_{\Sigma_t}\tilde\Phi \left| H\vec n+\frac{\vec y}{2\tau} \right|^2 d\mu + \int\limits_{\Sigma_t} \frac{|\vec y^\top|^2}{4\tau^2} \tilde\Phi d\mu \\ \frac d{dt}\int\limits_{\Sigma_t}\tilde\Phi d\mu \notag &= -\int\limits_{\Sigma_t}\tilde\Phi \left| H\vec n+\frac{\vec y^\bot}{2\tau} \right|^2 d\mu \end{align} \end{proof} Understanding the possible singularities of MCF has been one of the main obstacles to the theory, because these are the places where the MCF ceases to be smooth. Also, a connected surface flowing by mean curvature can split into multiple connected components at a singularity. As an example of this, Grayson \cite{G} constructed a dumbbell that flows smoothly by mean curvature until it splits into two topological spheres (see Subsection \ref{63} below). However, using Huisken's monotonicity formula it is possible to show \cite{H} that in the limit as a surface evolving by MCF approaches a singularity the surface asymptotically approaches a self-shrinker, which we will define in the next chapter. Thus, a key step in understanding the singularities of MCF is to understand its self-shrinkers. \section{Self-Shrinkers} The sphere discussed in Example \ref{1} is a self-shrinker, meaning that as it flows by mean curvature it changes by a dilation. However, as we did in the example, we will often specify that the MCF should become extinct at $t=0$ and $\vec{x}=0$. In such cases, we will refer to a surface $\Sigma$ as a self-shrinker only if it is the $t=-1$ time slice of such a MCF that will become extinct at $t=0$ and $\vec{x}=0$. So for instance, by the computation in Example \ref{1}, we would consider a sphere to be a self-shrinker only if it has radius 2 and is centered at the origin. Otherwise, it would be a different time slice of the MCF in question, or if it is not centered at the origin it would become extinct at a point other than $\vec{x}=0$. This leads us to the following definition.
\begin{defn}\cite{CM1} \label{7} A surface $\Sigma$ is a self-shrinker if the family of surfaces $\Sigma_t=\sqrt{-t}\Sigma$ flows by mean curvature. \end{defn} We will also need the following two equivalent characterizations of self-shrinkers. \begin{lem}\label{30} Suppose a surface $\Sigma^n\subset{\mathbb R}^{n+1}$ satisfies \begin{equation}\label{2} H=\frac{\langle \vec{x},\vec{n}\rangle}2. \end{equation} Then $\Sigma_t=\sqrt{-t}\Sigma$ flows by mean curvature, and for every $t$ we have \begin{equation}\label{36} H_{\Sigma_t}=-\frac{\langle \vec{x},\vec{n}_{\Sigma_t}\rangle}{2t}. \end{equation} Conversely, suppose a family of surfaces $\Sigma_t$ flows by mean curvature. Then $\Sigma_t$ is a self-shrinker, meaning $\Sigma_t=\sqrt{-t}\Sigma_{-1}$, if and only if $\Sigma_t$ satisfies Equation \ref{36}. \end{lem} \begin{proof} Suppose $\Sigma$ has mean curvature $H=\frac 1 2 \langle \vec x,\vec n\rangle$. We wish to show $\Sigma_t=\sqrt{-t}\Sigma$ flows by mean curvature. For each $\vec p\in\Sigma$ and $t\in(-\infty,0)$, set $\vec x(\vec p, t)=\sqrt{-t}\vec p$. Then \begin{align}\vec n_{\Sigma_t}(\vec x(\vec p,t)) \notag &= \vec n_{\Sigma}(\vec p) \\ H_{\Sigma_t}(\vec x(\vec p,t)) \notag &= \frac 1{\sqrt{-t}}H_\Sigma(\vec p) \end{align} To see this scaling rule for the mean curvature, consider a circle in ${\mathbb R}^2$. Scaling the circle by $\lambda$ causes the radius to scale by $\lambda$, which causes the curvature to scale by $\frac 1 {\lambda}$. This example is actually completely general, because the mean curvature of $\Sigma$ equals the sum of the principal curvatures of $\Sigma$. Each principal curvature can be defined as the inverse of the radius of the circle of best fit to a curve on $\Sigma$. We compute $H_{\Sigma_t}$.
\begin{align} \partial_t\vec x \notag &= \partial_t(\sqrt{-t}\vec p) \\ \partial_t\vec x \notag &= -\frac 1 {2\sqrt {-t}}\vec p \\ (\partial_t\vec x)^\bot \notag &= -\frac {\langle \vec p,\vec n_{\Sigma}\rangle} {2\sqrt {-t}} \\ (\partial_t\vec x)^\bot \notag &= -H_{\Sigma_t} \end{align} Thus $\Sigma_t$ flows by mean curvature. We now show that $\Sigma_t$ also satisfies Equation \ref{36}. By the above \begin{align} H_{\Sigma_t} \notag &= \frac {\langle \vec p,\vec n_{\Sigma}\rangle} {2\sqrt {-t}} \\ H_{\Sigma_t} \notag &= \frac {\langle \sqrt{-t}\vec p,\vec n_{\Sigma_t}\rangle} {-2t} \\ H_{\Sigma_t} \notag &= -\frac {\langle \vec x,\vec n_{\Sigma_t}\rangle} {2t} \end{align} Conversely, suppose that $\Sigma_t$ flows by mean curvature. Suppose first that $\Sigma_t$ satisfies Equation \ref{36} for all $t$. Then setting $t=-1$ yields Equation \ref{2}, so the above argument shows that $\Sigma_t$ is a self-shrinker. Now suppose instead that $\Sigma_{-1}=\frac{\Sigma_t}{\sqrt{-t}}$. Then note that \begin{align} (-t)^{\frac 3 2}\partial_t \left(\frac {\vec x}{\sqrt{-t}}\right) \notag &= (-t)^{\frac 3 2}\left(\frac{\partial_t\vec x}{\sqrt{-t}} +\frac{\vec x}{2(-t)^{\frac 3 2}} \right) \\ (-t)^{\frac 3 2}\partial_t \left(\frac {\vec x}{\sqrt{-t}}\right) \notag &= -t\partial_t\vec x+\frac{\vec x}2 \end{align} However, $\dfrac {\vec x}{\sqrt {-t}}$ is fixed, so $\partial_t\left(\dfrac {\vec x}{\sqrt {-t}}\right)=0$.
\begin{align} 0 \notag &= (-t)^{\frac 3 2}\left\langle \partial_t\left(\frac {\vec x}{\sqrt{-t}}\right),\vec n_{\Sigma_{-1}}\right\rangle \\ 0 \notag &= -t\langle \partial_t\vec x,\vec n_{\Sigma_{-1}}\rangle +\frac 1 2 \langle\vec x,\vec n_{\Sigma_{-1}}\rangle \end{align} By assumption $\Sigma_t$ flows by mean curvature, so \begin{align} H_{\Sigma_{-1}} \notag &= -\langle \partial_t \vec x, \vec n_{\Sigma_{-1}}\rangle \\ H_{\Sigma_{-1}} \notag &= \frac 1 2\langle\vec x, \vec n_{\Sigma_{-1}}\rangle \end{align} Then $\Sigma_{-1}$ satisfies Equation \ref{2}, so $\Sigma_t$ must satisfy Equation \ref{36}. \end{proof} We now define a two parameter family of functionals that are closely related to self-shrinkers. These functionals will help us discuss the stability of self-shrinkers. \begin{thm}\cite{CM1}\label{11} Let $\vec{x_0}\in{\mathbb R}^{n+1}$, and let $t_0>0$. Then given a surface $\Sigma^n\subset{\mathbb R}^{n+1}$ define the functional $F_{\vec{x_0},t_0}$ by $$F_{\vec{x_0},t_0}(\Sigma)=(4\pi t_0)^{-\frac n 2}\int_\Sigma e^{\frac{-|\vec x-\vec{x_0}|^2}{4t_0}}d\mu.$$ Then $\Sigma$ is a critical point of $F_{\vec{x_0},t_0}$ if and only if $\Sigma=\Sigma_{-t_0}$ where $\Sigma_t=\vec{x_0} + \sqrt{-t}\Sigma_{-1}$ is flowing by mean curvature. Note that at $t=0$, $\Sigma_t$ becomes extinct at $\vec{x_0}$. \end{thm} \begin{proof} Let $\Sigma_s$ be a variation of $\Sigma$ with variation vector field $\frac d {ds}\big|_{s=0} \vec x=f\vec n$. Recall from the proof of Huisken's Monotonicity Formula that $(d\mu)'=fH d\mu.$ Then we compute.
\begin{align}\frac d{ds}\bigg|_{s=0} F_{\vec{x_0},t_0}(\Sigma_s) \notag &= (4\pi t_0)^{-\frac n 2}\int_\Sigma \left[fHe^{\frac{-|\vec x-\vec{x_0}|^2}{4t_0}} +\frac d{ds}\bigg|_{s=0}e^{\frac{-|\vec x-\vec{x_0}|^2}{4t_0}}\right]d\mu \\ \frac d{ds}\bigg|_{s=0} F_{\vec{x_0},t_0}(\Sigma_s) \notag &= (4\pi t_0)^{-\frac n 2}\int_\Sigma \left[fH+ \frac d{ds}\bigg|_{s=0}\log e^{\frac{-|\vec x-\vec{x_0}|^2}{4t_0}}\right]e^{\frac{-|\vec x-\vec{x_0}|^2}{4t_0}} d\mu \\ \frac d{ds}\bigg|_{s=0} F_{\vec{x_0},t_0}(\Sigma_s) \notag &= (4\pi t_0)^{-\frac n 2}\int_\Sigma \left[fH - \frac d{ds}\bigg|_{s=0}\frac{\langle\vec x-\vec{x_0}, \vec x-\vec{x_0}\rangle}{4t_0}\right]e^{\frac{-|\vec x-\vec{x_0}|^2}{4t_0}} d\mu \\ \frac d{ds}\bigg|_{s=0} F_{\vec{x_0},t_0}(\Sigma_s) \notag &= (4\pi t_0)^{-\frac n 2}\int_\Sigma \left[fH - \frac{\langle\vec x-\vec{x_0}, f\vec n\rangle}{2t_0}\right]e^{\frac{-|\vec x-\vec{x_0}|^2}{4t_0}} d\mu \\ \frac d{ds}\bigg|_{s=0} F_{\vec{x_0},t_0}(\Sigma_s) \notag &= (4\pi t_0)^{-\frac n 2}\int_\Sigma \left[H - \frac{\langle\vec x-\vec{x_0}, \vec n\rangle}{2t_0}\right]f e^{\frac{-|\vec x-\vec{x_0}|^2}{4t_0}} d\mu \end{align} Thus, $\Sigma$ is a critical point of $F_{\vec{x_0},t_0}$ if and only if $H = \frac{\langle\vec x-\vec{x_0}, \vec n\rangle}{2t_0}$. The result follows from Lemma \ref{30} and changing coordinates. \end{proof} Thus, by Lemma \ref{30}, a surface is a self-shrinker if and only if it is a critical point of the functional $F_{0,1}$. \subsection{Examples of Self-Shrinkers}\label{63} There are several standard examples of self-shrinkers. In Example \ref{1} we explicitly computed the mean curvature flow of a round sphere in ${\mathbb R}^3$ to show that a $2$-sphere of radius $2$ centered at the origin is a self-shrinker. In general a round sphere $\mathbb S^n\subset{\mathbb R}^{n+1}$ is a self-shrinker if and only if it is centered at the origin and has radius $\sqrt{2n}$.
To see this, we apply Lemma \ref{30}. Note that an $n$-sphere of radius $r$ has mean curvature $$H=\sum\limits_{i=1}^n\frac 1 r=\frac n r.$$ This is constant, and in order for $\langle\vec x,\vec n\rangle$ to be constant the sphere must be centered at the origin. In this case $\vec x=r\vec n$. \begin{align} \frac {\langle\vec x,\vec n\rangle} 2 \notag &= \frac r 2 \\ H \notag &= \frac {\langle\vec x,\vec n\rangle} 2 \\ \frac n r \notag &= \frac r 2 \\ r^2 \notag &= 2n \end{align} and the claim follows. Recall that minimal surfaces are fixed points of MCF, since they satisfy $H\equiv 0$. However, in general minimal surfaces do not satisfy $$\sqrt{-t}\Sigma=\Sigma.$$ In order to be invariant under dilations and hence a self-shrinker, a minimal surface $\Sigma$ must be a cone. Then in order to be a smooth surface $\Sigma$ must be a flat plane through the origin. We can also generalize the above two examples to show that $\Sigma^n=\mathbb S^k\times{\mathbb R}^{n-k}\subset{\mathbb R}^{n+1}$ is a self-shrinker if the $\mathbb S^k$ factor is a round $k$-sphere centered at the origin with radius $\sqrt{2k}$. This will follow from Lemma \ref{41} below, or we can again apply Lemma \ref{30}. \begin{align} H \notag &= \frac k{\sqrt{2k}} = \sqrt{\frac k 2} \\ \frac{\langle\vec x,\vec n\rangle}2 \notag &= \frac {\langle \sqrt{2k}\vec n,\vec n\rangle}2 = \frac{\sqrt{2k}} 2 = \sqrt {\frac k 2} \\ H \notag &= \frac {\langle\vec x,\vec n\rangle}2 \end{align} Thus $\Sigma$ is a self-shrinker. There is also a family of embedded self-shrinkers that are topologically $\mathbb S^1\times\mathbb S^{n-1}\subset{\mathbb R}^{n+1}$ for each $n\geq 2$. These surfaces were constructed by Angenent in \cite{Ang}. When $n=2$, the obtained surface is a self-shrinking torus.
Angenent used this torus to give another proof that Grayson's dumbbell \cite{G} splits at a singularity into two topological spheres. The idea is to enclose the neck of the dumbbell by the self-shrinking torus and make the bells on either side large enough to each enclose a large sphere. Make the enclosed spheres large enough that they will not become extinct until after the self-shrinking torus does. Then the maximum principle gives that none of the surfaces can intersect any of the others, so before the torus becomes extinct the neck of the dumbbell must pinch off, leaving two topological spheres. We mentioned above that as a surface flowing by mean curvature approaches a singularity it asymptotically approaches a self-shrinker \cite{H}. In the case of the dumbbell described above, the singularity is of the type of a self-shrinking cylinder. Not all self-shrinkers are embedded. Abresch and Langer \cite{AL} constructed self-intersecting curves in the plane that are self-shrinkers under MCF. It is possible to obtain self-shrinking non-embedded surfaces by taking products of these curves with Euclidean factors (see Lemma \ref{41} below). However, it is noted in \cite{CM1} that the assumption of embeddedness in the classification of generic singularities is made unnecessary by the work of Epstein and Weinstein \cite{EW}. It should also be noted that there are more complicated self-shrinkers that can arise as singularities. For example, Kapouleas, Kleene and M{\o}ller \cite{KKM} recently proved the existence of noncompact self-shrinkers of arbitrary genus, as long as that genus is sufficiently large. \section{Two Types of Stability for Self-Shrinkers} Now that we have characterized self-shrinkers as the critical points of a functional (see Theorem \ref{11}), we can define stability in the usual manner. By taking the second variation of the functional, we obtain the following stability operator.
\begin{defn}\cite{CM1}\label{8} Define the stability operator on a self-shrinking hypersurface $\Sigma^n\subset{\mathbb R}^{n+1}$ as \begin{equation}\label{3} Lf=\Delta f-\frac 1 2 \langle \vec{x},\nabla f\rangle+(|A|^2+\frac 1 2)f\end{equation} where $|A|^2$ denotes the norm squared of the second fundamental form. Here $\Delta$ denotes the Laplacian on the manifold $\Sigma$. Likewise, $\nabla$ denotes the tangential gradient, rather than the full Euclidean gradient. We call any function $u:\Sigma\rightarrow{\mathbb R}$ such that $Lu=0$ a Jacobi function. We say $\Sigma$ is stable if there exists a positive Jacobi function on $\Sigma$. \end{defn} \begin{thm}\cite{CM1}\label{12} A self-shrinker $\Sigma$ is stable if and only if it is a local minimum of the $F_{0,1}$ functional. That is, let $\tilde{\Sigma}$ be an arbitrary graph over $\Sigma$ of a function with sufficiently small $C^2$ norm. Then $\Sigma$ is stable if and only if $F_{0,1}(\Sigma)\leq F_{0,1}(\tilde{\Sigma})$ for every such $\tilde{\Sigma}$. \end{thm} However, it turns out that with this standard definition, every complete self-shrinker is unstable. This is because it is always possible to decrease the functional $F_{0,1}$ by translating $\Sigma$ in either space or time. Since $\Sigma$ is a self-shrinker, by ``translating in time'' we mean dilating $\Sigma$. The following notion of F-stability compensates for this inherent instability by defining a self-shrinker to be F-stable if translations in space and time are the only local ways to decrease the $F_{0,1}$ functional. \begin{defn}\cite{CM1} We say a self-shrinker $\Sigma$ is F-stable if for every normal variation $f\vec{n}$ of $\Sigma$ there exist variations $x_s$ of $x_0$ and $t_s$ of $t_0$ that make $F''=(F_{x_s,t_s}(\Sigma+sf\vec{n}))''\geq 0$ at $s=0$. \end{defn} In this dissertation we will focus exclusively on the standard notion of stability from Definition \ref{8}.
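As a concrete sanity check of the stability operator in Definition \ref{8}, the short Python sketch below (an illustration added here, not part of the original argument; the finite-difference setup and all variable names are our own) verifies numerically on the self-shrinking sphere $\mathbb S^2$ of radius $2$ in ${\mathbb R}^3$ that $f=\langle\vec e_3,\vec n\rangle=\cos\theta$ satisfies $Lf=\tfrac 1 2 f$ and that the constant function $H=1$ satisfies $LH=H$. Here $|A|^2=\tfrac 1 2$, and since $\vec x=2\vec n$ on the sphere, $\langle\vec x,\nabla f\rangle=0$ for both functions.

```python
import math

# Numerical check (illustrative sketch) of the stability operator
#   Lf = (Laplacian)f - (1/2)<x, grad f> + (|A|^2 + 1/2) f
# on the self-shrinking round sphere S^2 of radius 2 in R^3.  For a
# rotationally symmetric f = f(theta), the Laplace-Beltrami operator is
#   (1/(r^2 sin(theta))) d/dtheta( sin(theta) f'(theta) ),
# and <x, grad f> = 0 because x = r*n is normal while grad f is tangential.
R = 2.0          # self-shrinker radius sqrt(2n) with n = 2
A2 = 0.5         # |A|^2 = n/r^2 = 2/4
H = 2.0 / R      # constant mean curvature H = n/r = 1

def laplace_beltrami(f, theta, h=1e-4):
    # nested central differences for d/dtheta( sin(theta) * f'(theta) )
    g = lambda t: math.sin(t) * (f(t + h) - f(t - h)) / (2 * h)
    return (g(theta + h) - g(theta - h)) / (2 * h) / (R**2 * math.sin(theta))

def L(f, theta):
    # <x, grad f> vanishes here, so only the Laplacian and zeroth-order terms remain
    return laplace_beltrami(f, theta) + (A2 + 0.5) * f(theta)

f = math.cos     # f = <e_3, n> = cos(theta)
for theta in (0.4, 1.0, 2.3):
    assert abs(L(f, theta) - 0.5 * f(theta)) < 1e-5    # Lf = f/2
    assert abs(L(lambda t: H, theta) - H) < 1e-12      # LH = H
```

Both identities are special cases of Theorem \ref{38} below, where they are proved on arbitrary self-shrinkers; the round sphere merely makes them checkable by direct computation.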
\begin{defn}\label{10} If $\tilde L$ is any operator on a surface $\Sigma$, then we say $u$ is an eigenfunction of $\tilde L$ if $u\in L^2(\Sigma)$, $u$ is not identically $0$, and $$\tilde L u=-\lambda u$$ for some constant $\lambda$. In this case we say that $\lambda$ is an eigenvalue of $\tilde L$ corresponding to the eigenfunction $u$. \end{defn} \begin{defn}\cite{FC}\label{16} Let $\Sigma$ be a self-shrinker, and let $L$ be the stability operator on $\Sigma$. The index of $L$ on $\Sigma$ is defined to be the supremum over compact domains of $\Sigma$ of the number of negative eigenvalues of $L$ with 0 boundary data. We will also call this the stability index of $\Sigma$. \end{defn} Notice that a self-shrinker is stable if and only if its stability index is 0. \begin{thm}\label{45}\cite{FC} A self-shrinker with finite stability index is stable outside of some compact set. \end{thm} \begin{proof} Fix a point $0\in \Sigma$, and let $B^\Sigma_\rho (0)$ denote the intrinsic ball of radius $\rho$ centered at 0. We will use the fact that $L$ is stable on any set of sufficiently small area. Thus, for $\rho$ sufficiently small $L$ is stable on $B_\rho(0)$. Let $\rho_1=2\sup\{\rho|L\text{ is stable on } B_\rho(0)\}$. If $\rho_1$ is infinite, then $L$ is stable on all of $\Sigma$ and we are done. Otherwise let $\rho_2=2\sup\{\rho>\rho_1|L\text{ is stable on } B_\rho(0)\setminus B_{\rho_1}(0)\}$. If $\rho_2$ is infinite, then $L$ is stable on $\Sigma\setminus B_{\rho_1}(0)$ and we are done. Otherwise repeat this construction to obtain $\{\rho_k\}$ such that $L$ is strictly unstable on each $B_{\rho_k}(0)\setminus B_{\rho_{k-1}}(0)$. $L$ has at least one negative eigenvalue for each $B_{\rho_k}(0)\setminus B_{\rho_{k-1}}(0)$ (assuming $\rho_k<\infty$), so let $f_k$ denote a corresponding eigenfunction on $B_{\rho_k}(0)\setminus B_{\rho_{k-1}}(0)$.
The $f_k$ are defined on sets which are disjoint except for sets of measure 0, so the $f_k$ are independent. By assumption the index of $L$ is finite, so there must be only finitely many $f_k$. Thus some $\rho_{k+1}$ is infinite, and $L$ is stable outside the compact set $C=\overline{B_{\rho_k}(0)}$. \end{proof} \newpage \chapter{Low Index Self-Shrinkers in Arbitrary Dimension}\label{26} \section{Self-Shrinkers Splitting off a Line} This chapter is devoted to proving the following theorem. \begin{thm}\label{37} Let $\Sigma^n\subset{\mathbb R}^{n+1}$ be a smooth complete embedded self-shrinker without boundary and with polynomial volume growth. If $\Sigma={\mathbb R}^n$ is a flat hyperplane through the origin, then the index of $\Sigma$ is $1$. If $\Sigma=\mathbb S^k\times{\mathbb R}^{n-k}$ for some $1\leq k\leq n$, then the index of $\Sigma$ is $n+2$. If $\Sigma\neq \mathbb S^k\times{\mathbb R}^{n-k}$, then the index of $\Sigma$ is at least $n+3$. If $\Sigma\neq \mathbb S^k\times{\mathbb R}^{n-k}$ and $\Sigma$ also splits off a line, then the index of $\Sigma$ is at least $n+4$. \end{thm} Note that in the above theorem each $\mathbb S^k$ must have radius $\sqrt{2k}$ in order for $\Sigma^n$ to be a self-shrinker. We first prove a theorem of Colding and Minicozzi that provides possible eigenfunctions of $L$ on self-shrinkers. \begin{thm}\label{38}\cite{CM1} Suppose $\Sigma^n\subset{\mathbb R}^{n+1}$ is a smooth self-shrinking hypersurface without boundary. Then the mean curvature $H$ satisfies $LH=H$. Also, for any constant vector field $\vec v$ we have $L\langle\vec v,\vec n\rangle=\frac 1 2 \langle\vec v,\vec n\rangle.$ In particular, any of these functions which is not identically $0$ and which is in $L^2(\Sigma)$ must be an eigenfunction with negative eigenvalue.
\end{thm} \begin{proof}\cite{CM1} Let $\vec p\in\Sigma$ be an arbitrary point, and let $\{\vec {e_i}\}_{i=1}^n$ be an orthonormal frame for $\Sigma$ such that $\nabla_{\vec{e_i}}^{\top}\vec{e_j}(\vec p)=0$. We begin by showing $LH=H$. This will show that if $H\in L^2(\Sigma)$ is not identically $0$, then $H$ is an eigenfunction of $L$ with eigenvalue $-1$. By Equation \ref{2} \begin{align} H \notag &=\frac 1 2 \langle\vec x,\vec n\rangle \\ \nabla_{\vec{e_i}}H \notag &= \frac 1 2 [\langle \nabla_{\vec{e_i}}\vec x,\vec n\rangle+\langle\vec x,\nabla_{\vec{e_i}}\vec n\rangle ] \\ \nabla_{\vec{e_i}}H \notag &= \frac 1 2 [\langle\vec{e_i},\vec n\rangle-a_{ij}\langle\vec x,\vec{e_j}\rangle] \end{align} The last line follows from Equation \ref{35}. We note that $\langle\vec{e_i},\vec n\rangle=0$, so we continue. \begin{align} \nabla_{\vec{e_i}}H \notag &= -\frac 1 2 a_{ij}\langle\vec x,\vec{e_j}\rangle \\ \nabla_{\vec{e_k}}\nabla_{\vec{e_i}}H \notag &= -\frac 1 2 a_{ij,k}\langle\vec x,\vec{e_j}\rangle-\frac 1 2 a_{ij}\langle\nabla_{\vec{e_k}}\vec x,\vec{e_j}\rangle-\frac 1 2 a_{ij}\langle\vec x,\nabla_{\vec{e_k}}\vec{e_j}\rangle \\ \nabla_{\vec{e_k}}\nabla_{\vec{e_i}}H \notag &= -\frac 1 2 a_{ij,k}\langle\vec x,\vec{e_j}\rangle-\frac 1 2 a_{ij}\langle \vec{e_k},\vec{e_j}\rangle-\frac 1 2 a_{ij}a_{kj}\langle\vec x,\vec n\rangle \end{align} Setting $k=i$ and summing over $i$, we obtain $$\Delta H =-\frac 1 2 a_{ij,i}\langle\vec x,\vec{e_j}\rangle-\frac 1 2 a_{ii}-\frac 1 2 |A|^2\langle\vec x,\vec n\rangle.$$ By the Codazzi Equation, we get that $a_{ij,i}=a_{ii,j}$.
Combining this with the fact that $H=-a_{ii}$ yields \begin{align} \Delta H \notag &= \frac 1 2 \langle\vec x,\nabla H\rangle + \frac 1 2 H -|A|^2 H \\ L H \notag &= \Delta H -\frac 1 2 \langle \vec x,\nabla H\rangle +(\frac 1 2 +|A|^2)H \\ L H \notag &= \frac 1 2 \langle\vec x,\nabla H\rangle + \frac 1 2 H -|A|^2 H -\frac 1 2 \langle \vec x,\nabla H\rangle +(\frac 1 2 +|A|^2)H \\ L H \notag &= H \end{align} We now consider a constant vector field $\vec v$. Let $f=\langle\vec v,\vec n\rangle$, and we compute. \begin{align} \nabla_{\vec{e_i}} f \notag &= \langle\vec v,\nabla_{\vec{e_i}}\vec n\rangle \\ \nabla_{\vec{e_i}} f \notag &= -a_{ij}\langle\vec v,\vec{e_j}\rangle \\ \nabla_{\vec{e_k}}\nabla_{\vec{e_i}}f \notag &= -a_{ij,k}\langle\vec v,\vec{e_j}\rangle -a_{ij}a_{kj}\langle\vec v,\vec n\rangle \\ \Delta f \notag &= \langle \vec v,\nabla H\rangle - |A|^2 f \end{align} From above, we know that \begin{align} \nabla_{\vec{e_i}}H \notag &= -\frac 1 2 a_{ij}\langle\vec x,\vec{e_j}\rangle \\ \nabla H \notag &= -\frac 1 2 a_{ij}\langle\vec x,\vec{e_j}\rangle\vec{e_i} \\ \langle\vec v,\nabla H\rangle \notag &= -\frac 1 2 a_{ij}\langle\vec x,\vec{e_j}\rangle\langle\vec v,\vec{e_i}\rangle \\ \langle\vec v,\nabla H\rangle \notag &= \frac 1 2 \langle\vec x,-a_{ij}\langle\vec v,\vec{e_i}\rangle\vec{e_j}\rangle \\ \langle\vec v,\nabla H\rangle \notag &= \frac 1 2 \langle\vec x,\nabla f\rangle \\ Lf \notag &= \Delta f - \frac 1 2 \langle\vec x,\nabla f\rangle +(|A|^2+\frac 1 2)f \\ Lf \notag &= \frac 1 2 \langle\vec x,\nabla f\rangle -|A|^2 f - \frac 1 2 \langle\vec x,\nabla f\rangle +(|A|^2+\frac 1 2)f \\ Lf \notag &= \frac 1 2 f \end{align} \end{proof} In \cite{KKM} it is noted that on the plane ${\mathbb R}^2\subset{\mathbb
R}^3$, when conjugated by a Gaussian, $L$ is equal to the Hamiltonian operator for the two-dimensional quantum harmonic oscillator plus a constant. Since the quantum harmonic oscillator is well understood, this technique helps us understand the operator $L$. In the following we adapt this idea to a much larger class of self-shrinking hypersurfaces of ${\mathbb R}^{n+1}$. With this in mind, we now discuss the well known eigenvalues and eigenfunctions of the quantum harmonic oscillator Hamiltonian. \begin{defn}[Hermite Polynomials]\label{44} \cite{T} Consider the operator on ${\mathbb R}$ given by $$E=-\left(\frac 1{2m}\right)\frac{d^2}{dx^2}+\frac 1 2 m \omega^2 x^2.$$ In quantum mechanics, the first term is the kinetic energy operator (using the convention that $\hbar =1$), and the second term denotes the potential energy of the one dimensional harmonic oscillator. The sum of these is the Hamiltonian operator, which gives the total energy of the harmonic oscillator. The possible energy levels of the oscillator are then exactly the eigenvalues of the operator $E$, and the eigenfunctions are the states corresponding to these energy levels. Due to its importance in quantum mechanics, this problem has been extensively studied. We define the $k$th Hermite polynomial to be $$H_k(z)=(-1)^k e^{z^2}\frac {d^k}{dz^k}e^{-z^2}.$$ Then the eigenvalues of the Hamiltonian $E$ are given by $$\left\{-\omega\left(k+\frac 1 2\right):k=0,1,2,3,...\right\}$$ and the corresponding eigenfunctions are $$\psi_k=\sqrt{\frac 1 {2^k(k!)}}\left(\frac{m\omega}{\pi}\right)^{\frac 1 4} e^{-\frac{m\omega x^2}2}H_k(\sqrt{m\omega}\,x).$$ These eigenfunctions form an orthonormal basis of $L^2({\mathbb R})$. Note that these eigenvalues differ from the quantum mechanical energy levels by a factor of $-1$ due to our choice of sign convention in Definition \ref{10}. For future reference, we record the first few Hermite polynomials.
In our applications below, we will be interested in the case when $m=\omega=\frac 1 2$, so we will record the polynomials $H_k(\sqrt{m\omega}\,x)=H_k(\frac x 2).$ \begin{align} H_0\left(\frac x 2\right) \notag &= 1 \\ H_1\left(\frac x 2\right) \notag &= x \\ H_2\left(\frac x 2\right) \notag &= x^2-2 \\ H_3\left(\frac x 2\right) \notag &= x^3-6x \end{align} \end{defn} We now prove a very helpful lemma. \begin{lem}\label{41} Suppose $\Sigma^{n-1}\subset{\mathbb R}^n$ is a smooth complete embedded self-shrinker without boundary and with polynomial volume growth. Suppose $\{g_m\}_{m=0}^{\infty}$ is an orthonormal basis of the weighted space $L^2(\Sigma)$ made up of eigenfunctions of $L$ satisfying $$L g_m=-\lambda_m g_m.$$ Then $\Sigma^{n-1}\times{\mathbb R}\subset{\mathbb R}^{n+1}$ is also a self-shrinker. Furthermore, the eigenvalues of $L$ on $\Sigma^{n-1}\times{\mathbb R}\subset{\mathbb R}^{n+1}$ are $$\left\{\lambda_m+\frac 1 2 k:m,k=0,1,2,3,...\right\},$$ and the corresponding eigenfunctions are $g_m H_k(\frac {x_n}2).$ Here $x_n$ is the coordinate function on the ${\mathbb R}$ factor of $\Sigma^{n-1}\times{\mathbb R}$, and $H_k$ denotes the $k$th Hermite polynomial. We have omitted the conventional normalizations for simplicity. \end{lem} \begin{proof} Fix a point $\vec p\in\Sigma^{n-1}$, and choose an orthonormal frame $\{\vec{e_i}\}_{i=1}^{n-1}$ for $\Sigma$ such that $\nabla_{\vec{e_i}}^{\top}\vec{e_j}=0$. Then letting $\vec{e_n}$ be a unit vector tangent to the ${\mathbb R}$ factor of $\Sigma\times{\mathbb R}$ and $x_n$ be the coordinate on this factor, we obtain an orthonormal frame $\{\vec{e_i}\}_{i=1}^n$ for $\Sigma\times{\mathbb R}$. Note that the mean curvature of $\Sigma^{n-1}$ equals that of $\Sigma^{n-1}\times{\mathbb R}$. Also, $\langle\vec{e_n},\vec n\rangle=0$, so Lemma \ref{30} shows that $\Sigma^{n-1}\times{\mathbb R}\subset{\mathbb R}^{n+1}$ is a self-shrinker.
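As a quick cross-check of the polynomials recorded in Definition \ref{44}, the following sketch (ours, not part of the original argument) rebuilds $H_0,\dots,H_3$ from the standard physicists' Hermite recurrence $H_{k+1}(z)=2zH_k(z)-2kH_{k-1}(z)$ and substitutes $z=x/2$, using exact rational arithmetic.

```python
# Rebuild H_k(z) from the recurrence H_{k+1}(z) = 2 z H_k(z) - 2 k H_{k-1}(z),
# then substitute z = x/2 and compare with the table in Definition 44.
from fractions import Fraction

def hermite(k):
    """Coefficient list of H_k(z), lowest degree first."""
    if k == 0:
        return [Fraction(1)]
    prev, cur = [Fraction(1)], [Fraction(0), Fraction(2)]  # H_0, H_1
    for j in range(1, k):
        twice_z = [Fraction(0)] + [2 * c for c in cur]          # 2 z H_j
        padded = prev + [Fraction(0)] * (len(twice_z) - len(prev))
        cur, prev = [a - 2 * j * b for a, b in zip(twice_z, padded)], cur
    return cur

def at_half(coeffs):
    """Substitute z = x/2: the coefficient of x^i picks up a factor 2^{-i}."""
    return [c / Fraction(2) ** i for i, c in enumerate(coeffs)]

# Expected: H_0(x/2)=1, H_1(x/2)=x, H_2(x/2)=x^2-2, H_3(x/2)=x^3-6x.
expected = [[1], [0, 1], [-2, 0, 1], [0, -6, 0, 1]]
for k in range(4):
    assert at_half(hermite(k)) == [Fraction(c) for c in expected[k]]
```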
For the rest of this proof, we will use the subscript $_\Sigma$ to denote operators and quantities restricted to the surface $\Sigma^{n-1}\subset{\mathbb R}^n$. Operators and quantities without the subscript $_\Sigma$ will refer to the surface $\Sigma^{n-1}\times{\mathbb R}\subset{\mathbb R}^{n+1}$. Since ${\mathbb R}$ is a straight line, $\vec{e_n}$ is a constant vector field. Thus $\nabla_{\vec{e_i}}\vec{e_n}=0$ for all $i$, so $|A_\Sigma|^2=|A|^2$. Likewise, the Laplacian on $\Sigma\times{\mathbb R}$ splits as \begin{align} \Delta \notag &=\Delta_\Sigma + (\nabla_{\vec{e_n}})^2 \\ L_{\Sigma}f \notag &= \Delta_{\Sigma}f-\frac 1 2 \langle\vec x,\nabla_\Sigma f\rangle + \left(|A_\Sigma|^2+\frac 1 2\right)f \\ L_\Sigma f \notag &= e^{\frac{|\vec {x}_\Sigma|^2}8} (\hat H_\Sigma) e^{-\frac{|\vec {x}_\Sigma|^2}8} f \\ L_\Sigma f \notag &= e^{\frac{|\vec {x}|^2}8} (\hat H_\Sigma) e^{-\frac{|\vec {x}|^2}8} f \end{align} The last line holds because $|\vec x|^2=|\vec x_\Sigma|^2+x_n^2$, and the resulting factors of $e^{\pm\frac{x_n^2}8}$ pass through $\hat H_\Sigma$ and cancel. Here we are defining $\hat H_{\Sigma}$ to be $$\hat H_\Sigma=e^{-\frac{|\vec {x}_\Sigma|^2}8} L_\Sigma e^{\frac{|\vec {x}_\Sigma|^2}8}.$$ We now show that the eigenvalues of $L_\Sigma$ and $\hat H_\Sigma$ are identical, and their eigenfunctions differ by a factor of $e^{-\frac{|\vec {x}_\Sigma|^2}8}$. Suppose $g_m$ is an eigenfunction of $L_\Sigma$ with eigenvalue $\lambda_m$. This means that $$ L_\Sigma g_m = -\lambda_m g_m $$ and $g_m$ is in $L^2(\Sigma)$ with the weighted measure $e^{-\frac{|\vec x_\Sigma|^2}4}d\mu$.
In other words, $$\int\limits_{\Sigma}|g_m |^2 \left(e^{-\frac{|\vec {x}_\Sigma|^2}4}d\mu\right) <\infty.$$ We now show that $\lambda_m$ is also an eigenvalue of $\hat H_\Sigma$ with corresponding eigenfunction $e^{-\frac{|\vec x_\Sigma|^2}{8}} g_m.$ \begin{align} \hat H_\Sigma \notag &=e^{-\frac{|\vec {x}_\Sigma|^2}8} L_\Sigma e^{\frac{|\vec {x}_\Sigma|^2}8} \\ \hat H_\Sigma e^{-\frac{|\vec x_\Sigma|^2}{8}} g_m \notag &=e^{-\frac{|\vec {x}_\Sigma|^2}8} L_\Sigma e^{\frac{|\vec {x}_\Sigma|^2}8} e^{-\frac{|\vec x_\Sigma|^2}{8}} g_m \\ \hat H_\Sigma e^{-\frac{|\vec x_\Sigma|^2}{8}} g_m \notag &=e^{-\frac{|\vec {x}_\Sigma|^2}8} L_\Sigma g_m \\ \hat H_\Sigma e^{-\frac{|\vec x_\Sigma|^2}{8}} g_m \notag &=e^{-\frac{|\vec {x}_\Sigma|^2}8} (-\lambda_m g_m )\\ \hat H_\Sigma e^{-\frac{|\vec x_\Sigma|^2}{8}} g_m \notag &= -\lambda_m e^{-\frac{|\vec {x}_\Sigma|^2}8} g_m \end{align} We also check the integrability condition. In order to be an eigenfunction for $\hat H_\Sigma$, we must show that $e^{-\frac{|\vec x_\Sigma|^2}{8}} g_m\in L^2(\Sigma)$ with respect to the standard measure $d\mu$. However, this is immediate since $$\int\limits_{\Sigma}|e^{-\frac{|\vec x_\Sigma|^2}{8}}g_m |^2 d\mu = \int\limits_{\Sigma}|g_m |^2 e^{-\frac{|\vec {x}_\Sigma|^2}4}d\mu <\infty.$$ These arguments work in both directions, so there is a one-to-one correspondence between the eigenvalue-eigenfunction pairs of $\hat H_\Sigma$ and those of $L_\Sigma$. We now turn our attention to the operator $L$ on $\Sigma\times{\mathbb R}$.
\begin{align} Lf \notag &= \Delta f - \frac 1 2 \langle \vec x,\nabla f\rangle+\left(|A|^2+\frac 1 2\right)f \\ Lf \notag &= \Delta_\Sigma f +(\nabla_{\vec{e_n}})^2 f - \frac 1 2 \langle \vec x,\nabla_\Sigma f\rangle-\frac 1 2 \langle \vec x,\vec{e_n}\rangle\nabla_{\vec{e_n}} f +\left(|A_\Sigma|^2+\frac 1 2\right)f \\ Lf \notag &= L_\Sigma f + (\nabla_{\vec{e_n}})^2 f -\frac 1 2 \langle \vec x,\vec{e_n}\rangle\nabla_{\vec{e_n}} f \\ \nabla_{\vec{e_n}}\left( e^{-\frac{|\vec {x}|^2}8}f \right) \notag &= -\frac 1 4 \langle \vec{e_n},\vec x\rangle e^{-\frac{|\vec {x}|^2}8}f + e^{-\frac{|\vec {x}|^2}8}\nabla_{\vec{e_n}}f \\ (\nabla_{\vec{e_n}})^2\left( e^{-\frac{|\vec {x}|^2}8}f \right) \notag &= -\frac 1 4 \langle \vec{e_n},\vec {e_n}\rangle e^{-\frac{|\vec {x}|^2}8}f - \frac 1 4 \langle \vec{e_n},\vec x\rangle e^{-\frac{|\vec {x}|^2}8}\nabla_{\vec{e_n}} f +\frac 1{16}\langle\vec{e_n},\vec x\rangle^2 e^{-\frac{|\vec {x}|^2}8} f \\ \notag & \ \ \ -\frac 1 4 \langle \vec{e_n},\vec x\rangle e^{-\frac{|\vec {x}|^2}8}\nabla_{\vec{e_n}} f + e^{-\frac{|\vec {x}|^2}8}(\nabla_{\vec{e_n}})^2 f \\ (\nabla_{\vec{e_n}})^2\left( e^{-\frac{|\vec {x}|^2}8}f \right) \notag &= \left(-\frac 1 4 f -\frac 1 2 \langle \vec x,\vec{e_n}\rangle\nabla_{\vec{e_n}} f +\frac {x_n^2}{16}f +(\nabla_{\vec{e_n}})^2 f \right)e^{-\frac{|\vec {x}|^2}8} \\ Lf \notag &= e^{\frac{|\vec {x}|^2}8} \left( \hat H_\Sigma +(\nabla_{\vec{e_n}})^2 - \frac{x_n^2}{16} +\frac 1 4\right) e^{-\frac{|\vec {x}|^2}8} f \end{align} Define the operator in parentheses to be $$\hat H = \hat H_\Sigma +(\nabla_{\vec{e_n}})^2 - \frac{x_n^2}{16} +\frac 1 4.$$ By the same argument as above, the eigenvalues of $L$ and $\hat H$ are identical, and their eigenfunctions differ by a factor of $e^{-\frac{|\vec {x}|^2}8}$.
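The conjugation identity just derived can be spot-checked numerically in one variable: for any smooth $f$, one should have $e^{x^2/8}\left(\partial_x^2-\frac{x^2}{16}+\frac 1 4\right)e^{-x^2/8}f=f''-\frac x 2 f'$. The sketch below is our own illustration, with the arbitrary test choice $f=\sin$ and a central difference for the second derivative of the conjugated function.

```python
# Check e^{x^2/8} (d^2/dx^2 - x^2/16 + 1/4) (e^{-x^2/8} f) = f'' - (x/2) f'
# at a few sample points, with f = sin chosen arbitrarily.
import math

def second_diff(g, x, h=1e-4):
    """Central-difference approximation of g''(x)."""
    return (g(x + h) - 2.0 * g(x) + g(x - h)) / (h * h)

f, fp, fpp = math.sin, math.cos, lambda x: -math.sin(x)

def conjugated(x):
    return math.exp(-x * x / 8.0) * f(x)

for x in (-2.0, -0.7, 0.0, 0.3, 1.9):
    lhs = math.exp(x * x / 8.0) * (
        second_diff(conjugated, x)
        - (x * x / 16.0) * conjugated(x)
        + 0.25 * conjugated(x)
    )
    rhs = fpp(x) - (x / 2.0) * fp(x)
    assert abs(lhs - rhs) < 1e-5
```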
We define $\hat H_n = (\nabla_{\vec{e_n}})^2 - \frac{x_n^2}{16} +\frac 1 4$, so now $$\hat H = \hat H_\Sigma + \hat H_n$$ and $\hat H_n$ depends only on $x_n$. This is useful, because $\hat H_n -\frac 1 4$ is the negative of the Hamiltonian of the quantum harmonic oscillator, which we discussed in Definition \ref{44}. We wish to prove that all the eigenvalues and eigenfunctions of $\hat H$ can be built out of the eigenvalues and eigenfunctions of its two composite operators. To this end, suppose that $\{g_m(\vec x_\Sigma)\}_{m=0}^{\infty}$ is an orthonormal basis of $L^2(\Sigma)$ (with the weighted measure) made up of eigenfunctions of $L_\Sigma$ with $$L_\Sigma g_m=-\lambda_m g_m.$$ Then $\{e^{-\frac{|\vec {x}_\Sigma|^2}8}g_m(\vec x_\Sigma)\}_{m=0}^{\infty}$ is an orthonormal basis of $L^2(\Sigma)$ (with the standard measure) made up of eigenfunctions of $\hat H_\Sigma$. Suppose also that $\{e^{-\frac{x_n^2}8}h_k(x_n)\}_{k=0}^{\infty}$ is an orthonormal basis of $L^2({\mathbb R})$ made up of eigenfunctions of $\hat H_n$ such that $$\hat H_n e^{-\frac{x_n^2}8}h_k=-\mu_k e^{-\frac{x_n^2}8}h_k.$$ Then we claim that all eigenfunctions of $\hat H$ can be constructed by taking products $$\left\{\left( e^{-\frac{|\vec {x}_\Sigma|^2}8} g_m(\vec x_\Sigma)\right)\left(e^{-\frac{x_n^2}8}h_k(x_n)\right):m,k=0,1,2, ...\right\},$$ which can be simplified to the following. $$\{e^{-\frac{|\vec {x}|^2}8} g_m(\vec x_\Sigma) h_k(x_n):m,k=0,1,2, ...\}$$ Furthermore, these eigenfunctions satisfy $$\hat H e^{-\frac{|\vec {x}|^2}8} g_m h_k =-(\lambda_m + \mu_k)e^{-\frac{|\vec {x}|^2}8}g_m h_k.$$ We check the last claim first. Fix a specific $m$ and $k$ and compute.
\begin{align} \hat H e^{-\frac{|\vec {x}|^2}8} g_m h_k \notag &= (\hat H_\Sigma +\hat H_n) e^{-\frac{|\vec {x}|^2}8} g_m h_k \\ \hat H e^{-\frac{|\vec {x}|^2}8} g_m h_k \notag &= \left(\hat H_\Sigma e^{-\frac{|\vec {x}_\Sigma|^2}8}g_m\right)e^{-\frac{x_n^2}8}h_k +e^{-\frac{|\vec {x}_\Sigma|^2}8}g_m\left(\hat H_n e^{-\frac{x_n^2}8}h_k \right) \\ \hat H e^{-\frac{|\vec {x}|^2}8} g_m h_k \notag &= \left(-\lambda_m e^{-\frac{|\vec {x}_\Sigma|^2}8}g_m\right)e^{-\frac{x_n^2}8}h_k +e^{-\frac{|\vec {x}_\Sigma|^2}8}g_m\left(-\mu_k e^{-\frac{x_n^2}8}h_k \right) \\ \hat H e^{-\frac{|\vec {x}|^2}8} g_m h_k \notag &= -(\lambda_m + \mu_k)e^{-\frac{|\vec {x}|^2}8}g_m h_k \end{align} Likewise, the integrability condition is clearly satisfied. This shows that products of the form $$\{g_m(\vec x_\Sigma) h_k(x_n):m,k=0,1,2, ...\}$$ are eigenfunctions of the operator $L$ on $\Sigma\times{\mathbb R}$. Thus, it suffices to show that these eigenfunctions form a basis of $L^2(\Sigma\times{\mathbb R})$. For this, it suffices to show that they are complete. To this end, let $f(\vec x)\in L^2(\Sigma\times{\mathbb R})$ be arbitrary.
Then define \begin{align} b_k(\vec x_\Sigma) \notag &=\int\limits_{{\mathbb R}} f(\vec x)h_k(x_n)\left(e^{-\frac{x_n^2}4}dx_n\right) \\ |b_k(\vec x_\Sigma)|^2 \notag &\leq \int\limits_{{\mathbb R}} |f(\vec x)|^2 \left(e^{-\frac{x_n^2}4}dx_n\right) \int\limits_{{\mathbb R}} |h_k(x_n)|^2 \left(e^{-\frac{x_n^2}4}dx_n\right) \\ |b_k(\vec x_\Sigma)|^2 \notag &\leq \int\limits_{{\mathbb R}} |f(\vec x)|^2 \left(e^{-\frac{x_n^2}4}dx_n\right) \\ \int\limits_{\Sigma}|b_{k}(\vec x_\Sigma)|^2 \left(e^{-\frac{|\vec {x}_\Sigma|^2}4}d\vec x_\Sigma\right) \notag &\leq \int\limits_{\Sigma}\int\limits_{{\mathbb R}} |f(\vec x)|^2 \left(e^{-\frac{x_n^2}4}dx_n\right) \left(e^{-\frac{|\vec {x}_\Sigma|^2}4}d\vec x_\Sigma\right) \\ \int\limits_{\Sigma}|b_{k}(\vec x_\Sigma)|^2 \left(e^{-\frac{|\vec {x}_\Sigma|^2}4}d\vec x_\Sigma\right) \notag &\leq \| f(\vec x)\|^2_{L^2(\Sigma\times{\mathbb R})} \end{align} Here the second line is the Cauchy-Schwarz inequality, and the third uses the fact that $h_k$ has unit norm in the weighted space. Thus $b_{k}(\vec x_\Sigma)\in L^2(\Sigma)$, so we can write $b_{k}(\vec x_\Sigma)=\sum\limits_{m=0}^\infty b_{mk}g_{m}(\vec x_\Sigma)$. It follows that $f(\vec x)=\sum\limits_{m,k=0}^\infty b_{mk}g_{m}h_{k}$, so we have an orthonormal basis of $L^2(\Sigma\times{\mathbb R})$. We now turn our attention to the operator $\hat H_n -\frac 1 4$. From Definition \ref{44}, this is the negative of the Hamiltonian of the quantum harmonic oscillator with $m=\frac 1 2$ and $\omega=\frac 1 2$. The eigenvalues of this operator are $\{\frac 1 4 +\frac 1 2 k:k=0,1,2,3, ...\}$ with corresponding eigenfunctions $e^{-\frac{x_n^2}{8}}h_k$, where $h_k(x_n)=H_k(\frac {x_n} 2)$ and $H_k$ is the $k$th Hermite polynomial. Thus the eigenfunctions of $\hat H_n$ are the same, but the eigenvalues of $\hat H_n$ are all lowered by $\frac 1 4$ to become $$\left\{\frac 1 2 k:k=0,1,2,3, ...\right\}.$$ By assumption, the eigenvalues of $\hat H_\Sigma$ are $\{\lambda_m\}$ with eigenfunctions $\left\{e^{-\frac{|\vec x_\Sigma|^2}8}g_m\right\}$.
Thus the eigenvalues of $L$ are $$\left\{\lambda_m+\frac 1 2 k:m,k=0,1,2,3,...\right\},$$ and the corresponding eigenfunctions are $\left\{g_m(\vec x_\Sigma) H_k(\frac {x_n}2)\right\}.$ \end{proof} \section{Proof of Low Index Classification} In this section, we will apply Lemma \ref{41} in order to prove the classification result, Theorem \ref{37}. \begin{Pro}\label{39} Let $\Sigma^n\subset{\mathbb R}^{n+1}$ be a hyperplane through the origin. Then the eigenvalues of $L$ on $\Sigma$ are $$\left\{-\frac 1 2 + \frac 1 2 \sum\limits_{i=1}^n k_i:k_i=0,1,2,3,... \right\}.$$ For each choice of the $k_i$'s there exists a unique eigenfunction given by $$\prod\limits_{i=1}^n H_{k_i}\left(\frac {x_i}2\right).$$ Thus in particular, the index of $L$ on $\Sigma$ is $1$. \end{Pro} \begin{proof} We first restrict attention to $\Sigma^1={\mathbb R}\subset{\mathbb R}^2$. Without loss of generality, assume $\Sigma$ is the $x$-axis. Then \begin{align} Lf \notag &= \Delta f - \frac 1 2 \langle\vec x,\nabla f\rangle +(|A|^2+\frac 1 2)f \\ |A|^2 \notag &= 0 \\ Lf \notag &= \partial_x^2 f-\frac 1 2 x\partial_x f +\frac 1 2 f \\ \partial_x \left(e^{-\frac{x^2}8} f \right) \notag &= -\frac x 4 e^{-\frac{x^2}8} f +e^{-\frac{x^2}8} \partial_x f \\ \partial_x^2 \left(e^{-\frac{x^2}8} f \right) \notag &= \left(-\frac 1 4 f + \frac{x^2}{16}f -\frac{x}2 \partial_x f +\partial_x^2 f \right) e^{-\frac{x^2}8} \\ e^{\frac{x^2}8} \partial_x^2 \left(e^{-\frac{x^2}8} f \right) \notag &= -\frac 1 4 f + \frac{x^2}{16}f -\frac{x}2 \partial_x f +\partial_x^2 f \\ Lf \notag &= e^{\frac{x^2}8}\left( \partial_x^2 -\frac{x^2}{16} +\frac 3 4 \right)e^{-\frac{x^2}8} f \end{align} We define the operator in parentheses above as $$\hat{H}= \partial_x^2 -\frac{x^2}{16} +\frac 3 4.$$ We analyzed this situation in the proof of Lemma \ref{41}.
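The eigenfunctions that come out of this analysis can also be confirmed directly against the operator $Lf=\partial_x^2 f-\frac 1 2 x\partial_x f+\frac 1 2 f$: a sketch of ours below checks, with exact rational coefficient arithmetic, that each polynomial $H_k(\frac x 2)$ from Definition \ref{44} satisfies $Lf=-\left(-\frac 1 2+\frac k 2\right)f$.

```python
# Verify L f = f'' - (x/2) f' + (1/2) f = ((1 - k)/2) f for f = H_k(x/2),
# k = 0..3, i.e. eigenvalue -1/2 + k/2 in the sign convention of Definition 10.
from fractions import Fraction

def pad(p, n):
    return p + [Fraction(0)] * (n - len(p))

def add(p, q):
    n = max(len(p), len(q))
    return [a + b for a, b in zip(pad(p, n), pad(q, n))]

def deriv(p):
    return [Fraction(i) * c for i, c in enumerate(p)][1:] or [Fraction(0)]

def x_times(p):
    return [Fraction(0)] + p

def scale(s, p):
    return [s * c for c in p]

polys = [[1], [0, 1], [-2, 0, 1], [0, -6, 0, 1]]  # H_k(x/2), k = 0..3
for k, coeffs in enumerate(polys):
    f = [Fraction(c) for c in coeffs]
    Lf = add(add(deriv(deriv(f)), scale(Fraction(-1, 2), x_times(deriv(f)))),
             scale(Fraction(1, 2), f))
    target = scale(Fraction(1 - k, 2), f)
    n = max(len(Lf), len(target))
    assert pad(Lf, n) == pad(target, n)
```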
We showed there that on $\Sigma$, $\lambda$ is an eigenvalue of $\hat{H}$ with eigenfunction $u_\lambda$ if and only if $\lambda$ is also an eigenvalue of $L$ with corresponding eigenfunction $e^{\frac{x^2}{8}}u_\lambda.$ As in the same proof, $\hat H-\frac 3 4$ is the negative of the quantum harmonic oscillator Hamiltonian, with eigenvalues $$\left\{\frac 1 4 +\frac 1 2 k: k=0,1,2,...\right\}.$$ Thus, the eigenvalues of $L$ on ${\mathbb R}\subset{\mathbb R}^2$ are $$\left\{-\frac 1 2 +\frac 1 2 k: k=0,1,2,...\right\}$$ with corresponding eigenfunctions given by $$\left\{H_k(\frac x 2)\right\},$$ where $H_k(z)$ denotes the $k$th Hermite polynomial from Definition \ref{44}. We then extend this to a hyperplane of general dimension $\Sigma^n\subset {\mathbb R}^{n+1}$ by successive applications of Lemma \ref{41}. A simple induction argument shows that the eigenvalues of $L$ on $\Sigma^n\subset{\mathbb R}^{n+1}$ are $$\left\{-\frac 1 2 + \frac 1 2 \sum\limits_{i=1}^n k_i:k_i=0,1,2,3,... \right\}$$ with corresponding eigenfunctions $$\left\{\prod\limits_{i=1}^n H_{k_i}(\frac {x_i}2):k_i=0,1,2,...\right\}.$$ Thus, the index of $L$ on a hyperplane $\Sigma$ is $1$. The only negative eigenvalue is $-\frac 1 2$, and the corresponding eigenfunction is the constant function $f=1$. \end{proof} \begin{rem} Note that the negative eigenvalue found in Proposition \ref{39} is exactly the one we knew existed from Theorem \ref{38}. On a hyperplane, the mean curvature $H$ is identically $0$. Also, if $\vec v$ is any constant vector field tangent to $\Sigma$, then $\langle \vec v,\vec n\rangle\equiv 0$. Taking $\vec v = \vec n$ gives the eigenfunction $\langle \vec v,\vec n\rangle =1$ with eigenvalue $-\frac 1 2$. \end{rem} \begin{Pro}\label{40} Let $\Sigma^n=\mathbb S^k\times{\mathbb R}^{n-k}\subset {\mathbb R}^{n+1}$ be a self-shrinker with $1\leq k\leq n$.
Then the eigenvalues of $L$ on $\Sigma$ are $$\left\{-1+\frac 1{2k}m(m+k-1)+\frac 1 2 \sum\limits_{i=k+1}^n c_i:m,c_i=0,1,2,3, ... \right\}.$$ For a fixed choice of the $c_i$'s and a fixed $m$, the number of independent eigenfunctions is given by the number of independent harmonic homogeneous polynomials of degree $m$ in $k+1$ variables. In particular, the index of $L$ on $\Sigma$ is $n+2$. \end{Pro} \begin{proof} Note that in order to be a self-shrinker, the $\mathbb S^k$ factor of $\Sigma$ must have radius $\sqrt{2k}$. We first restrict attention to the case $\Sigma=\mathbb S^k\subset {\mathbb R}^{k+1}$. Note that on $\mathbb S^k$ all tangent vectors are perpendicular to $\vec x$. Thus \begin{align} |A|^2 \notag &= \sum\limits_{i=1}^k \frac 1 {2k} \\ |A|^2 \notag &= \frac 1 2 \\ Lf \notag &= \Delta f -\frac 1 2 \langle\vec x,\nabla f\rangle +f \\ Lf \notag &= \Delta f +f \end{align} Switching to spherical coordinates gives $$L=\frac 1{2k}\Delta_{\mathbb S^k} +1.$$ However, it is well known (see \cite{T}) that the eigenvalues of $\Delta_{\mathbb S^k}$ are $$\{m(m+k-1):m=0,1,2,3,...\}$$ with corresponding eigenfunctions given by the spherical harmonics. The multiplicity of each eigenvalue is given by the number of harmonic homogeneous polynomials of degree $m$ in $k+1$ variables. Thus the eigenvalues of $L$ on $\Sigma^k=\mathbb S^k\subset {\mathbb R}^{k+1}$ are $$\left\{-1+\frac 1{2k}m(m+k-1):m=0,1,2,3, ... \right\}.$$ We now extend this to the case where $\Sigma^n=\mathbb S^k\times{\mathbb R}^{n-k}\subset {\mathbb R}^{n+1}$ via $n-k$ applications of Lemma \ref{41}. This shows that the eigenvalues of $L$ on $\Sigma^n$ are $$\left\{-1+\frac 1{2k}m(m+k-1)+\frac 1 2 \sum\limits_{i=k+1}^n c_i:m,c_i=0,1,2,3, ... \right\}$$ with corresponding eigenfunctions given by spherical harmonics multiplied by Hermite polynomials.
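The index count asserted in the proposition can be confirmed by brute-force enumeration of this eigenvalue formula. The sketch below is our own illustration; `harmonic_dim` implements the standard dimension count $\binom{m+k}{k}-\binom{m+k-2}{k}$ for degree-$m$ harmonic homogeneous polynomials in $k+1$ variables, and negative eigenvalues are counted with multiplicity.

```python
# Count negative eigenvalues of L on S^k x R^{n-k} from
# lambda = -1 + m(m+k-1)/(2k) + (1/2) sum c_i, with spherical-harmonic
# multiplicity harmonic_dim(m, k); the total should be n + 2.
from itertools import product
from math import comb

def harmonic_dim(m, k):
    """Dimension of degree-m spherical harmonics on S^k (k+1 variables)."""
    if m == 0:
        return 1
    return comb(m + k, k) - comb(m + k - 2, k)

def index(n, k, cap=5):
    """Negative eigenvalues with multiplicity; cap bounds m and each c_i,
    which is enough since every term in the formula is nonnegative."""
    count = 0
    for m in range(cap):
        for cs in product(range(cap), repeat=n - k):
            lam = -1 + m * (m + k - 1) / (2 * k) + sum(cs) / 2
            if lam < 0:
                count += harmonic_dim(m, k)
    return count

assert all(index(n, k) == n + 2 for n in range(1, 6) for k in range(1, n + 1))
```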
When $m=c_i=0$, we obtain the lowest eigenvalue $-1$, corresponding to the constant eigenfunction given by the mean curvature of $\Sigma$. When all the $c_i=0$ and $m=1$, we obtain the eigenvalue $-\frac 1 2$ with multiplicity $k+1$. The $k+1$ eigenfunctions are the restrictions to $\Sigma$ of the homogeneous linear polynomials in $k+1$ variables. However, the eigenvalue $-\frac 1 2$ has additional eigenfunctions coming from the case when $m=0$ and exactly one $c_i=1$ while the others are $0$. There are $n-k$ choices of such a $c_i$, so the total multiplicity of $-\frac 1 2 $ is $(n-k)+(k+1)=n+1$. It is interesting to note that the $n+1$ eigenfunctions for the eigenvalue $-\frac 1 2$ are given by the restrictions of the $n+1$ Euclidean coordinate functions to $\Sigma$. \end{proof} We have now proven the first two claims of Theorem \ref{37}. In order to complete the proof of this theorem, we will need the following facts from \cite{CM1}. \begin{thm}\label{33} The surfaces $\mathbb S^k\times{\mathbb R}^{n-k}$ are the only smooth complete embedded self-shrinkers without boundary, with polynomial volume growth, and with $H\geq 0$ in ${\mathbb R}^{n+1}$. \end{thm} \begin{thm}\label{42} Any smooth complete embedded self-shrinker in ${\mathbb R}^3$ without boundary and with polynomial area growth that splits off a line must either be a plane or a round cylinder. \end{thm} \begin{thm}\label{43} If the mean curvature $H$ changes sign, then the first eigenvalue of $L$ is strictly less than $-1$. \end{thm} We are now ready to prove Theorem \ref{37}. \begin{proof}[Proof of Theorem \ref{37}] The cases when $\Sigma=\mathbb S^k\times{\mathbb R}^{n-k}\subset {\mathbb R}^{n+1}$ are covered in Propositions \ref{39} and \ref{40}. We therefore assume for the rest of the proof that $\Sigma^n\neq \mathbb S^k\times{\mathbb R}^{n-k}$ for any $k$. The proof proceeds by induction on the dimension $n$ of $\Sigma^n$. We begin with the base case $n=2$.
By Theorem \ref{33}, the mean curvature $H$ of $\Sigma$ changes sign. Thus Theorem \ref{43} gives that the first eigenvalue $\mu_1$ of $L$ satisfies $\mu_1<-1$. However, by Theorem \ref{38} we know $LH=H$, so since $H$ is not identically $0$ we know that $-1$ is also an eigenvalue of $L$. We are assuming $\Sigma^2\neq \mathbb S^k\times{\mathbb R}^{2-k}$, so Theorem \ref{42} gives that $\Sigma$ does not split off a line. This means that there is no nonzero constant vector field $\vec v$ such that $\langle \vec v,\vec n\rangle\equiv 0$. Thus Theorem \ref{38} gives that $-\frac 1 2$ is an eigenvalue of $L$ with multiplicity at least $3$. Thus, the index of $\Sigma$ is at least $n+3=5$. It is not possible that $\Sigma^2\neq \mathbb S^k\times{\mathbb R}^{2-k}$ and also splits off a line, so the final claim is trivially true. Now assume that the theorem holds for all surfaces $\tilde\Sigma^{n-1}\subset{\mathbb R}^n$, and consider an arbitrary self-shrinker $\Sigma^n\subset{\mathbb R}^{n+1}$ such that $\Sigma^n\neq \mathbb S^k\times{\mathbb R}^{n-k}$. We have two cases. Either $\Sigma^n$ splits off a line, or it does not. Suppose $\Sigma^n$ does not split off a line. In this case there is no nonzero constant vector field $\vec v$ such that $\langle \vec v,\vec n\rangle\equiv 0$. Thus Theorem \ref{38} gives that $-\frac 1 2$ is an eigenvalue of $L$ with multiplicity at least $n+1$. However, we also know from Theorem \ref{33} that the mean curvature $H$ changes sign. Thus, $H$ is not identically $0$, so $-1$ is an eigenvalue of $L$. Also, Theorem \ref{43} gives the existence of at least one eigenvalue lower than $-1$. Thus, the index of $\Sigma$ is at least $n+3$. We now consider the other case, so suppose $\Sigma^n$ splits off a line. In this case, there exists some $\tilde\Sigma^{n-1}\subset{\mathbb R}^n$ such that $\Sigma=\tilde\Sigma\times{\mathbb R}$.
Note that $\tilde\Sigma\neq\mathbb S^k\times{\mathbb R}^{n-1-k}$, since that would contradict our assumption that $\Sigma^n\neq \mathbb S^k\times{\mathbb R}^{n-k}$. By our inductive hypothesis, the index of $\tilde\Sigma^{n-1}$ is at least $n+2$. We also know that since $H$ changes sign, one of the eigenvalues of $L$ on $\tilde\Sigma^{n-1}$ is $-1$ and another eigenvalue $\mu_1<-1$. We now apply Lemma \ref{41} to see that $\Sigma$ has the same negative eigenvalues as $\tilde\Sigma$ as well as at least two new ones, namely $-1+\frac 1 2$ and $\mu_1+\frac 1 2$. Thus, in this case the index of $\Sigma$ is at least $n+4$, which completes the proof. \end{proof} \newpage \chapter{Stability of Pieces of $\mathbb S^k\times{\mathbb R}^{n-k}$}\label{24} \section{Eigenfunctions with Eigenvalue $0$} In the previous chapter we showed that all hypersurfaces of the form $\mathbb S^k\times{\mathbb R}^{n-k}\subset{\mathbb R}^{n+1}$ have finite index. Then by Theorem \ref{45}, each of these surfaces must be stable outside of some compact set. In this chapter, we look for the largest stable subsets of these self-shrinkers. For the rest of this chapter, we will let $\vec x=(x_{k+1},x_{k+2},...,x_n)$ denote Euclidean coordinates on the ${\mathbb R}^{n-k}$ factor of $\Sigma$. We will also let $\vec\phi$ denote spherical coordinates on the ${\mathbb S}^k$ factor of $\Sigma.$ Then $(\vec\phi,\vec x)$ are coordinates on $\Sigma$. We will proceed by finding positive Jacobi functions as in Definition \ref{8}. Note that eigenfunctions with eigenvalue $0$ are Jacobi functions. This allows us to exploit the results of Chapter \ref{26} to easily prove the following series of propositions giving stable subsets of ${\mathbb S}^k\times{\mathbb R}^{n-k}$. \begin{Pro} Let $\Sigma^n\subset{\mathbb R}^{n+1}$ be a flat hyperplane through the origin. Suppose $P\subset\Sigma$ is a flat $(n-1)$-plane through the origin.
Then $P$ splits $\Sigma^n$ into two stable half-hyperplanes. \end{Pro} \begin{proof} By Proposition \ref{39}, any coordinate function $x_i$ on $\Sigma$ satisfies $L x_i=0$. Thus, $x_i$ is a positive Jacobi function on the portion of $\Sigma$ given by $\{x_i>0\}$. The result follows by changing coordinates. \end{proof} In the previous proposition, $\Sigma={\mathbb S}^k\times{\mathbb R}^{n-k}$ for the value $k=0$. For comparison, and because we will use it later, we record the following fact. \begin{Pro}\label{9} Let $\Sigma={\mathbb S}^k\times{\mathbb R}^{n-k}\subset{\mathbb R}^{n+1}$ be a self-shrinker with $1\leq k\leq n-1$. Let $H\subset{\mathbb R}^{n-k}$ be a half-space with boundary equal to a flat $(n-k-1)$-plane through the origin. Then ${\mathbb S}^k\times H\subset \Sigma$ is unstable. \end{Pro} \begin{proof} It suffices to show the existence on ${\mathbb S}^k\times H$ of an eigenfunction with negative eigenvalue and $0$ boundary value. However, from Proposition \ref{40} we know that $x_n$ is an eigenfunction of $L$ with eigenvalue $-\frac 1 2$. By changing coordinates, we can make the half-space $H$ equal to the region $\{x_n>0\}.$ \end{proof} \begin{Pro} Let $\Sigma={\mathbb S}^k\times{\mathbb R}^{n-k}\subset{\mathbb R}^{n+1}$ be a self-shrinker with $1\leq k\leq n-1$. Let $C\subset{\mathbb S}^k$ be an arbitrary hemisphere, and $H\subset{\mathbb R}^{n-k}$ be a half-space with boundary equal to a flat $(n-k-1)$-plane through the origin. Then $C\times H\subset \Sigma$ is stable. \end{Pro} \begin{proof} By Proposition \ref{40}, we can obtain an eigenfunction with eigenvalue $0$ by taking $m=c_n=1$ and all other $c_i=0$. The spherical harmonics corresponding to $m=1$ are obtained by considering ${\mathbb S}^k\subset{\mathbb R}^{k+1}$ and restricting the coordinate functions of ${\mathbb R}^{k+1}$ to the surface ${\mathbb S}^k$.
By choice of coordinates, we can thus obtain any hemisphere $C$ as the portion of ${\mathbb S}^k$ on which $g_c$ is positive, where $g_c$ is some spherical harmonic with $m=1.$ Likewise, we can choose coordinates on ${\mathbb R}^{n-k}$ such that $H$ is the half-space given by $x_n>0.$ Then the eigenfunction $x_n g_c$ satisfies $$L (x_n g_c)=0$$ and $x_n g_c>0$ on the set $C\times H\subset \Sigma$. Thus $x_n g_c$ is a positive Jacobi function, so the result follows. \end{proof} \begin{Pro} Let $\Sigma={\mathbb S}^k\times{\mathbb R}^{n-k}\subset{\mathbb R}^{n+1}$ be a self-shrinker with $1\leq k\leq n-2$. Let $P_1,P_2\subset {\mathbb R}^{n-k}$ be flat, orthogonal $(n-k-1)$-planes through the origin. Then $P_1\cup P_2$ splits ${\mathbb R}^{n-k}$ into four quarter-spaces $Q_i\subset {\mathbb R}^{n-k}$. Each set ${\mathbb S}^k\times Q_i\subset\Sigma$ is stable. \end{Pro} \begin{proof} By Proposition \ref{40}, we can obtain an eigenfunction with eigenvalue $0$ by taking $c_{n-1}=c_n=1$ and all other $m,c_i=0$. In this case, the eigenfunction is $x_{n-1}x_n$, which is positive on $\{x_n>0\}\cap\{x_{n-1}>0\}$. This set can be made equal to any $Q_i$ defined above by changing coordinates, so the result follows. \end{proof} \begin{Pro}\label{46} Let $\Sigma={\mathbb S}^k\times{\mathbb R}^{n-k}\subset{\mathbb R}^{n+1}$ be a self-shrinker with $1\leq k\leq n-1$. Then the following subsets of $\Sigma$ are stable. \begin{enumerate}\item $\{x_n>\sqrt 2\}$ \item $\{|x_n|<\sqrt 2\}$ \item $\{x_n<-\sqrt 2\}$ \item $\{|\vec x|>\sqrt {2(n-k)}\}$ \item $\{|\vec x|<\sqrt {2(n-k)}\}$ \end{enumerate} Note that by changing coordinates, the $x_n$-axis is arbitrary in ${\mathbb R}^{n-k}$. \end{Pro} \begin{proof} By Proposition \ref{40}, we can obtain an eigenfunction with eigenvalue $0$ by taking $c_n=2$ and all other $m,c_i=0$.
In this case, the eigenfunction is $$x_n^2-2.$$ The first and third claims follow from finding the regions of $\Sigma$ on which this Jacobi function is positive. Clearly $2-x_n^2$ is a positive Jacobi function on the region $\{|x_n|<\sqrt 2\}$, thus giving the second claim. Next, note that by choosing another $c_i=2$ while letting $c_n=0$, we obtain the eigenfunction $x_i^2-2.$ Then by linearity of the operator $L$ we have \begin{align} L (x_i^2 -2) \notag &= 0 \\ L \left(\sum\limits_{i=k+1}^n (x_i^2 -2)\right) \notag &= 0 \\ L (|\vec x|^2 -2(n-k)) \notag &= 0 \end{align} This Jacobi function is positive on the region $\{|\vec x|>\sqrt {2(n-k)}\}$, and its negative is positive on the final region. This completes the proof. \end{proof} \section{Rotationally Symmetric Stable Regions} In Proposition \ref{46} we found some stable, radially symmetric portions of ${\mathbb S}^k\times{\mathbb R}^{n-k}$. Regions of this form are given by $\{a <|\vec x|<b\}$. In the following lemma we show that in order to find stable regions of this form, we need only consider radially symmetric Jacobi functions of the form $f=f(|\vec x|).$ This vastly simplifies the search for stable, radially symmetric regions by reducing a PDE to an ODE. \begin{lem}\label{13} Suppose $f=f(\vec\phi,\vec x)$ is a positive Jacobi function on some portion $\{a<|\vec x|<b\}$ of ${\mathbb S}^k\times{\mathbb R}^{n-k}$. Then the $\vec\phi$-rotationally symmetric function $g(\vec x)=\int\limits_{{\mathbb S}^k} f(\vec\phi,\vec x)\,d\vec\phi$ is also a positive Jacobi function on the same region. Suppose $g=g(\vec x)$ is a positive Jacobi function on some portion $\{a<|\vec x|<b\}$ of ${\mathbb S}^k\times{\mathbb R}^{n-k}$. Then the function $h(r)=\int\limits_{|\vec x|=r} g(\vec x)\,d\sigma$ is also a positive Jacobi function on the same region.
\end{lem} \begin{proof} For both claims, positivity of the integral on the desired region follows trivially from positivity of the integrand. It thus suffices to show in both cases that the differential operator $L$ yields $0$ when applied to the given integral. This follows from the fact that in both cases the domain of integration is a compact set, so the operator $L$ commutes with integration. Since in both cases the integrand is a Jacobi function, the result follows. \end{proof} Now we know we only need to consider symmetric Jacobi functions, so in the following proposition we apply the stability operator $L$ to radial functions. We then find the general form a radial function must have in order to be a Jacobi function. \begin{Pro}\label{47} Let $\Sigma={\mathbb S}^k\times{\mathbb R}^{n-k}\subset{\mathbb R}^{n+1}$ be a self-shrinker with $0 \leq k\leq n-1$, and let $\vec x$ denote Euclidean coordinates on the ${\mathbb R}^{n-k}$ factor of $\Sigma$. Suppose $f=f(r)$, where $r=|\vec x|$. Let $L$ denote the stability operator. Then there exist constants $K_1,K_2$ and $K_3$ such that the following hold. In the following, $c_1$ and $c_2$ are arbitrary constants. \begin{enumerate} \item\label{64} If $k=n-1$, then $\Sigma={\mathbb S}^{n-1}\times{\mathbb R}$, and $\vec x=x_n$. Let $g=g(x_n)$.
Then $$Lg=g''-\frac {x_n}2 g' + g,$$ and the general solution to the differential equation $Lg=0$ is given by $$g(x_n)=c_1(x_n^2-2)+c_2 g_2(x_n)$$ where $g_2(x_n)$ is defined piecewise by $$g_2(x_n) = \left\{ \begin{array}{ll} K_1(x_n^2-2) + (x_n^2-2)\int\limits_2^{x_n} \frac{e^{\frac{z^2}4}}{(z^2-2)^2}dz & : x_n \in (\sqrt 2,\infty)\\ -\frac 1 2\sqrt{\frac e 2} & : x_n = \sqrt 2 \\ (x_n^2-2)\int\limits_0^{x_n} \frac{e^{\frac{z^2}4}}{(z^2-2)^2}dz & : x_n \in (-\sqrt 2,\sqrt 2)\\ \frac 1 2\sqrt{\frac e 2} & : x_n =-\sqrt 2 \\ -K_1(x_n^2-2) + (x_n^2-2)\int\limits_{-2}^{x_n} \frac{e^{\frac{z^2}4}}{(z^2-2)^2}dz & : x_n \in (-\infty,-\sqrt 2)\\ \end{array}\right.$$ \item\label{65} If $1\leq k\leq n-2$, then $$Lf=f''+\left(\frac{n-k-1}r-\frac r 2\right)f'+f.$$ The dimensions appear in the differential equation $Lf=0$ only through the difference $n-k$, so the solutions likewise depend only on this difference. We therefore set $\lambda=n-k$. The general solution to the differential equation $Lf=0$ on $\{r\geq 0\}$ is given by $$f(r)=c_1(r^2-2\lambda)+c_2 f_2(r)$$ where $f_2(r)$ is defined on $(0,\infty)$ by $$f_2(r) = \left\{ \begin{array}{ll} (r^2-2\lambda)\int\limits_1^r \frac{e^\frac {s^2}4}{s^{\lambda-1}(s^2-2\lambda)^2} ds & : r \in (0,\sqrt{2\lambda})\\ -\frac 1 2 \left(\frac e {2\lambda}\right)^{\frac \lambda 2} & : r = \sqrt{2\lambda} \\ (r^2-2\lambda)\left(K_2 + \int\limits_{2\sqrt{2\lambda}}^r \frac{e^\frac {s^2}4}{s^{\lambda-1}(s^2-2\lambda)^2} ds\right) & : r \in (\sqrt{2\lambda},\infty)\\ \end{array}\right.$$ Clearly this is only a global solution when $c_2=0$. \item\label{66} If $k=0$, then $\Sigma={\mathbb R}^n$ is a flat plane, and $$Lf=f''+\left(\frac{n-1}r-\frac r 2\right) f'+\frac 1 2 f.$$ The general solution to the differential equation $Lf=0$ on $[0,\infty)$ is given by $$f(r)=c_1 f_1 +c_2 f_2$$ where $f_1$ and $f_2$ are defined below.
Note that $f_2$ is defined only on $\{r>0\}$, so we only obtain a global solution when $c_2=0.$ $$f_1(r)=-1+\sum\limits_{m=1}^\infty \frac{m(2m-2)!}{2^{3m-1}(m!)^2\prod\limits_{j=0}^{m-1}(n+2j)}r^{2m}$$ $$f_2(r) = \left\{ \begin{array}{ll} f_1\int\limits_{\frac 1 2 r_1}^r \frac{e^{\frac{s^2}4}}{s^{n-1}f_1^2}ds & : r \in (0,r_1)\\ -\frac{e^{\frac{r_1^2}4}}{r_1^{n-1} f_1 '(r_1)} & : r = r_1 \\ K_3 f_1 + f_1\int\limits_{2r_1}^r \frac{e^{\frac{s^2}4}}{s^{n-1}f_1^2}ds & : r \in (r_1,\infty)\\ \end{array}\right.$$ \end{enumerate} \end{Pro} \begin{proof} First recall from Equation \ref{3} that $$Lf=\Delta f -\frac 1 2 \langle (\vec\phi,\vec x), \nabla f\rangle+\left(|A|^2+\frac 1 2\right)f.$$ Here we let $\vec\phi$ denote coordinates on the ${\mathbb S}^k$ factor of $\Sigma$, so that $(\vec\phi,\vec x)$ are coordinates on $\Sigma$. When $k=0$, we see that $\Sigma={\mathbb R}^n$ is a flat plane through the origin, so in particular $|A|^2=0$. When $1\leq k\leq n-1,$ we have \begin{align} |A|^2 \notag &= \sum\limits_{i=1}^k \frac 1 {2k} \\ |A|^2 \notag &= \frac 1 2 \end{align} We note that the Laplacian on $\Sigma$ splits as a sum of Laplacians on its two factor spaces: $$\Delta_\Sigma=\Delta_{{\mathbb S}^k}+\Delta_{{\mathbb R}^{n-k}}$$ The functions we are considering here do not depend on the ${\mathbb S}^k$ factor, so that part of the Laplacian contributes $0$. Recall that in spherical coordinates on ${\mathbb R}^{n-k}$ the Laplacian is given by $$\Delta_{{\mathbb R}^{n-k}} f=\frac{\partial^2 f}{\partial r^2}+\frac{n-k-1}{r} \frac{\partial f}{\partial r}+\frac 1 {r^2}\Delta_{{\mathbb S}^{n-k-1}} f.$$ Thus, when $f$ depends only on $r$ this reduces to $$\Delta_\Sigma f=\frac{\partial^2 f}{\partial r^2}+\frac{n-k-1}{r} \frac{\partial f}{\partial r}.$$ Likewise, from the geometric definition of the gradient we obtain $$\langle (\vec\phi,\vec x), \nabla f\rangle=rf'.$$ We will use these facts freely in the rest of the proof.
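As an aside, the radial operator identities above are easy to sanity-check numerically. The following sketch is our own illustration, not part of the proof: the helper name \texttt{L} and the sample radii are ours, and derivatives are taken by central differences. It verifies that $f(r)=r^2-2(n-k)$ is annihilated by $Lf=f''+\left(\frac{n-k-1}{r}-\frac r2\right)f'+f$.

```python
# Illustrative sanity check (ours, not from the text): verify by central
# differences that f(r) = r^2 - 2*lam, with lam = n - k, satisfies
# Lf = f'' + ((lam - 1)/r - r/2) f' + f = 0 at several sample radii.

def L(f, r, lam, h=1e-3):
    """Radial stability operator via central differences (helper name is ours)."""
    f1 = (f(r + h) - f(r - h)) / (2 * h)          # first derivative
    f2 = (f(r + h) - 2 * f(r) + f(r - h)) / h**2  # second derivative
    return f2 + ((lam - 1) / r - r / 2) * f1 + f(r)

for lam in (1, 2, 3):                       # lam = 1 is the S^(n-1) x R case
    f = lambda r, lam=lam: r**2 - 2 * lam
    for r in (0.5, 1.0, 2.0, 5.0):
        assert abs(L(f, r, lam)) < 1e-6, (lam, r)
```

Since $f$ is quadratic, the central differences are exact up to rounding, so the residual is essentially machine noise.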
\noindent\emph{Proof of \ref{64}.} When $k=n-1$ and $g=g(x_n)$ we obtain $\Delta g=g''$ and $\langle (\vec\phi,x_n), \nabla g\rangle=x_n g'.$ As noted above, $|A|^2=\frac 1 2$. We thus obtain $$Lg=g''-\frac {x_n}2 g' + g.$$ We wish to find all such functions $g$ that satisfy $Lg=0$. This is a second order linear ODE whose coefficients are continuous on all of ${\mathbb R}$, so the Existence and Uniqueness Theorem states that there must exist two linearly independent solutions defined on all of ${\mathbb R}$. We saw in the proof of Proposition \ref{46} that $$g_1(x_n)=x_n^2-2$$ is one solution. We use reduction of order to find another. \begin{align}g_2(x_n) \notag &= (x_n^2-2)v(x_n) \\ 0= L g_2(x_n) \notag &= \left(2v+4x_n v'+(x_n^2-2)v''\right) -\frac{x_n}2 \left(2x_n v+(x_n^2-2)v'\right) +(x_n^2-2)v \\ 0 \notag &= 4x_n v'+(x_n^2-2)v''-\frac{x_n^3}2 v' +x_n v' \\ (x_n^2-2)v'' \notag &= \left(\frac 1 2 x_n^3-5x_n \right)v' \\ \frac{v''}{v'} \notag &= \frac {x_n} 2 \left(\frac{x_n^2-10}{x_n^2-2}\right) \\ \frac{dv'}{v'} \notag &= \left(\frac {x_n} 2 -2\left(\frac{2x_n}{x_n^2-2}\right)\right)dx_n \\ \log|v'| \notag &= \frac{x_n^2}4 -2\log |x_n^2-2| \\ v' \notag &= \frac{e^{\frac{x_n^2} 4}}{(x_n^2-2)^2} \end{align} We would like to find the second solution by setting $g_2=(x_n^2-2)v$, but there is a complication: the formula above for $v'$ goes to $\infty$ at $x_n=\pm \sqrt 2$. Thus, we first consider solutions on each of the three regions $(-\infty,-\sqrt 2)$, $(-\sqrt 2,\sqrt 2)$, and $(\sqrt 2,\infty)$. Let $a$ be an element of any one of these intervals. Then on that same interval, the general solution of the differential equation $Lg=0$ is given by \begin{equation}\label{51}g(x_n) = A(x_n^2-2) + B(x_n^2-2)\int\limits_a^{x_n} \frac{e^{\frac{z^2}4}}{(z^2-2)^2}dz \end{equation} where $A$ and $B$ are arbitrary constants.
We now use this to build, on all of ${\mathbb R}$, a single solution which is linearly independent of $g_1=x_n^2-2$. We begin by defining $g_2$ on the interval $(-\sqrt 2,\sqrt 2)$ by $$g_2(x_n)=(x_n^2-2)\int\limits_0^{x_n} \frac{e^{\frac{z^2}4}}{(z^2-2)^2}dz.$$ We note that this function $g_2$ is odd on its domain of definition. By the Existence and Uniqueness Theorem, it extends uniquely to a solution defined on all of ${\mathbb R}$, which we will also call $g_2$. We will see shortly that this extended function $g_2$ is odd on all of ${\mathbb R}$. Currently $g_2$ is undefined at $x_n=\pm \sqrt 2$, but using L'Hospital's Rule we find the one-sided limits \begin{align} \lim\limits_{x_n\rightarrow \sqrt 2^-}g_2 \notag &=-\frac 1 2\sqrt{\frac e 2}\\ \lim\limits_{x_n\rightarrow -\sqrt 2^+}g_2 \notag &=\frac 1 2\sqrt{\frac e 2} \end{align} We thus define $g_2$ to be equal to its limits at these two points, so it is now defined on the closed interval $[-\sqrt 2,\sqrt 2]$. Next, we know from Equation \ref{51} that on $(\sqrt 2,\infty)$ any solution of the differential equation is of the form $A(x_n^2-2) + B(x_n^2-2)\int\limits_2^{x_n} \frac{e^{\frac{z^2}4}}{(z^2-2)^2}dz.$ We have chosen the lower limit of integration to be $2$, but any other number in $(\sqrt 2,\infty)$ would work equally well and would only affect the value of $A$. We now take the limit of this expression: $$\lim\limits_{x_n\rightarrow\sqrt 2^+}\left(A(x_n^2-2) + B(x_n^2-2)\int\limits_2^{x_n} \frac{e^{\frac{z^2}4}}{(z^2-2)^2}dz\right) =-B\frac 1 2\sqrt{\frac e 2}$$ This means that in order for this expression to match up with $g_2$ at $x_n=\sqrt 2$, we must have $B=1$. The constant $A=K_1$ is fixed by requiring that the function $g_2$ be continuously differentiable at $x_n=\sqrt 2$.
We proceed similarly on the remaining interval $(-\infty,-\sqrt 2)$ to find the desired definition of $g_2$. We obtain the constant $-K_1$ on the interval $(-\infty,-\sqrt 2)$ via symmetry considerations. Clearly $g_2$ is an odd function, so it is linearly independent of the even function $g_1=x_n^2-2$. Then any solution of the differential equation must be a linear combination of these two solutions. \noindent\emph{Proof of \ref{65}.} Suppose $1\leq k\leq n-2$, and $f=f(r)$. Then the Laplacian is given by $\Delta f=\frac{\partial^2 f}{\partial r^2}+\frac{n-k-1}{r} \frac{\partial f}{\partial r}.$ Also $|A|^2=\frac 1 2$, and $\langle (\vec\phi,\vec x), \nabla f\rangle=rf'.$ These facts combine to show that $$Lf=f''+\left(\frac{n-k-1}r-\frac r 2\right)f'+f.$$ We wish to solve the differential equation $Lf=0$. We know from Proposition \ref{46} that $f_1(r)=r^2-2(n-k)$ is one solution. As we did in the proof of (1) above, we would like to use reduction of order to find a second, linearly independent solution to this differential equation. However, in this case one of the coefficients of the differential equation is discontinuous at $r=0$. Thus, we are only guaranteed existence of a second linearly independent solution on the intervals $(-\infty,0)$ and $(0,\infty)$. Since we are thinking of $r$ as the distance to the origin, we restrict attention to finding this solution on the geometrically meaningful region $(0,\infty)$. In this case reduction of order yields $$v'= \frac{e^\frac {r^2}4}{r^{n-k-1}[r^2-2(n-k)]^2}.$$ This function diverges as $r\rightarrow 0^+$, as expected. However, the formula also diverges as $r\rightarrow \sqrt{2(n-k)}$. Thus, it will only immediately yield a solution on the intervals $(0,\sqrt{2(n-k)})$ and $(\sqrt{2(n-k)},\infty)$. However, using the same technique as in the proof of (1), we can piece these solutions together to obtain a solution on the full interval $(0,\infty)$.
We first note that $1\in(0,\sqrt{2(n-k)})$, so on that interval a second solution is given by $$f_2=(r^2-2(n-k))\int\limits_1^r \frac{e^\frac {s^2}4}{s^{n-k-1}[s^2-2(n-k)]^2} ds.$$ Then we extend this solution to the endpoint $r=\sqrt{2(n-k)}$ by taking the limit. $$\lim\limits_{r\rightarrow \sqrt{2(n-k)}^-}f_2=-\frac 1 2 \left(\frac e {2(n-k)}\right)^{\frac{n-k}2}$$ Now, on the interval $(\sqrt{2(n-k)},\infty)$, every solution of the differential equation can be written in the form $$g(r)=A(r^2-2(n-k)) + B(r^2-2(n-k))\int\limits_{2\sqrt{2(n-k)}}^r \frac{e^\frac {s^2}4}{s^{n-k-1}[s^2-2(n-k)]^2} ds$$ for constants $A$ and $B$. Thus, we can extend $f_2$ to be defined on all of $(0,\infty)$ by finding the correct values of $A$ and $B$ to make $f_2$ continuous and continuously differentiable. $$\lim\limits_{r\rightarrow \sqrt{2(n-k)}^+} g(r)=-B\frac 1 2 \left(\frac e {2(n-k)}\right)^{\frac{n-k}2}$$ Thus, in order for $g(r)$ to be the continuous continuation of $f_2$, we must have $B=1$. As in the proof of (1), we know from the Existence and Uniqueness Theorem that there exists some unique value $A=K_2$ such that the following piecewise defined function $f_2$ is a solution of the differential equation on all of $(0,\infty)$. $$f_2(r) = \left\{ \begin{array}{ll} (r^2-2(n-k))\int\limits_1^r \frac{e^\frac {s^2}4}{s^{n-k-1}[s^2-2(n-k)]^2} ds & : r \in (0,\sqrt{2(n-k)})\\ -\frac 1 2 \left(\frac e {2(n-k)}\right)^{\frac{n-k}2} & : r = \sqrt{2(n-k)} \\ (r^2-2(n-k))\left(K_2 + \int\limits_{2\sqrt{2(n-k)}}^r \frac{e^\frac {s^2}4}{s^{n-k-1}[s^2-2(n-k)]^2} ds\right) & : r \in (\sqrt{2(n-k)},\infty)\\ \end{array}\right.$$ We note that the constant $K_2$ depends on the dimensional value $n-k$. Since $f_1$ and $f_2$ are linearly independent, any solution of the differential equation can be written as a linear combination of them.
\noindent\emph{Proof of \ref{66}.} Now consider the case $k=0$. In this case $\Sigma={\mathbb R}^n$ is a flat plane through the origin, so in particular $|A|^2=0$. We also know that $\Delta f=\frac{\partial^2 f}{\partial r^2}+\frac{n-1}{r} \frac{\partial f}{\partial r},$ and $\langle (\vec\phi,\vec x), \nabla f\rangle=rf'.$ Putting these pieces together yields $$Lf=f''+\left(\frac{n-1}r-\frac r 2\right) f'+\frac 1 2 f.$$ We wish to solve the differential equation $Lf=0$ on the geometrically relevant region $\{r\geq 0\}$. We note that by the Existence and Uniqueness Theorem we are only guaranteed the existence of two linearly independent solutions on the subregion $\{r>0\}$, since one of the coefficients in the differential equation diverges at $r=0$. We begin by looking for a series solution $f=\sum\limits_{m=0}^\infty c_m r^m$. We obtain the following recurrence relations. \begin{align} m \notag &= 0: & c_1 &= 0 \\ m \notag &= 1: & c_2 &= -\frac{c_0}{4n} \\ m \notag &\geq 2: & c_{m+1} &= \frac{m-2}{2(m+1)(n+m-1)}c_{m-1} \end{align} From the recurrence relations, we see that all of the odd coefficients $c_{2m+1}$ vanish, so this approach yields only one linearly independent solution of the differential equation. Solving these recurrence relations and letting $c_0=-1$, we obtain the series solution $$f_1=-1+\sum\limits_{m=1}^\infty \frac{m(2m-2)!}{2^{3m-1}(m!)^2\prod\limits_{j=0}^{m-1}(n+2j)}r^{2m}.$$ This series converges for all $r$ by the ratio test. By making the substitution $r=|\vec x|$, it is possible to show that $f_1$ is a Jacobi function on all of $\Sigma$, including the origin. Note that all of the non-constant coefficients $c_{2m}$ are positive. Thus, this solution has a unique positive root, which we will call $r_1$. As we did in the proofs of (1) and (2) above, we next find a second linearly independent solution using reduction of order.
We set $f_2(r)=v(r)f_1(r)$. Substituting this into the equation $Lf=0$ yields \begin{align}(2rf_1)v'' \notag &= (r^2f_1-2(n-1)f_1-4rf_1')v' \\ \frac{v''}{v'} \notag &= \frac r 2 -\frac {n-1} r -2(\log f_1)' \\ v' \notag &= \frac{e^{\frac{r^2}4}}{r^{n-1}f_1^2} \end{align} It is no surprise that this expression for $v'$ diverges as $r\rightarrow 0^+$, since we do not expect to find a second solution defined at $0$. However, $v'$ also diverges as $r\rightarrow r_1$. We work around this as we did before. Note that we understand the solutions of the differential equation on the two intervals $(0,r_1)$ and $(r_1,\infty)$. If $a$ is any number in one of these intervals, then the general solution of the differential equation on that same interval is given by $$f=A f_1 +B f_1\int\limits_a^r \frac{e^{\frac{s^2}4}}{s^{n-1}f_1^2}ds.$$ Clearly $\frac 1 2 r_1\in(0,r_1)$, so we define $f_2$ on that interval by $$f_2=f_1\int\limits_{\frac 1 2 r_1}^r \frac{e^{\frac{s^2}4}}{s^{n-1}f_1^2}ds.$$ We then extend the definition of $f_2$ to the point $r=r_1$ by setting it equal to its limit. We move $f_1$ to the denominator and apply L'Hospital's Rule to show the following. $$\lim\limits_{r\rightarrow r_1^-}f_2=-\frac{e^{\frac{r_1^2}4}}{r_1^{n-1} f_1 '(r_1)}$$ From the definition of $f_1$, we see that $f_1 '$ is a power series with all positive terms. Thus $f_1 '(r_1)>0$, so the limit is finite. We now extend $f_2$ onto the interval $(r_1,\infty)$ by first noting that $2r_1$ is in this interval. Thus we know that every solution of the differential equation on this interval can be written in the form $$\tilde f=A f_1 +B f_1\int\limits_{2r_1}^r \frac{e^{\frac{s^2}4}}{s^{n-1}f_1^2}ds.$$ We take the limit of this expression as $r\rightarrow r_1$ from the right and require that it equal the value $f_2(r_1)$: $$\lim\limits_{r\rightarrow r_1^+}\tilde f=-B\frac{e^{\frac{r_1^2}4}}{r_1^{n-1} f_1 '(r_1)}$$ Thus $B=1$.
We know from the Existence and Uniqueness Theorem that for some choice $A=K_3$ this expression yields a continuation of $f_2$ such that the full function is a solution of the differential equation on all of $(0,\infty)$. This solution $f_2$ is clearly not a multiple of $f_1$, so the general solution of the differential equation must be a linear combination of $f_1$ and $f_2$. This completes the proof. \end{proof} We now investigate the stability of symmetric regions of ${\mathbb S}^k\times{\mathbb R}^{n-k}$. This is accomplished by finding the regions on which the differential equation $Lf=0$ has a positive solution. In Proposition \ref{47} we found the general radial solution of this differential equation on every unbounded cylindrical self-shrinker. However, the behavior of these solutions is not immediately clear from their form. Our task then is to study these solutions to see what conclusions we can draw about the regions on which positive solutions exist. We begin by proving a proposition about unstable regions that we will need in the next chapter. \begin{Pro}\label{17} Let $\Sigma={\mathbb S}^{n-1}\times{\mathbb R}\subset{\mathbb R}^{n+1}$ be a self-shrinker. For each $a\in [0,\sqrt 2)$, the half-infinite portion of the cylinder $\Sigma$ given by $\{x_n>a\}$ is unstable. Further, for each such $a$ there exists some $b_a > a$ such that the portion of $\Sigma$ given by $\{a<x_n<b\}$ is unstable whenever $b>b_a$. \end{Pro} \begin{proof} By the first part of Lemma \ref{13}, in order to show that the portion of $\Sigma$ given by $\{x_n>a\}$ is unstable, we need only show that there is no positive axially symmetric Jacobi function on $\{x_n>a\}$. However, we know from Proposition \ref{47} that any such Jacobi function $g=g(x_n)$ must be of the form $g(x_n)=c_1(x_n^2-2)+c_2 g_2,$ where $g_2$ is as defined in Proposition \ref{47}. Since $g_2$ is odd, it has $0$ as a root.
It will follow that $\{|x_n|<\sqrt 2\}$ is the largest stable portion of the cylinder $\Sigma$ centered at the origin. By Proposition \ref{9}, the portion of $\Sigma$ given by $\{x_n>0\}$ is unstable. Thus $g_2$ must also have at least one positive root. Let $r_0$ denote the smallest positive root of $g_2$. Then since $g_2$ is odd, it has roots $\pm r_0$. We note that $g_2(x_n)<0$ for all $x_n\in(0,\sqrt 2)$. We also have that $g_2(\sqrt 2)<0$. This implies that $r_0>\sqrt 2.$ Thus $g_2$ by itself is not strictly positive on any region $\{x_n>a\}$ with $a<\sqrt 2$. It now suffices to show that no linear combination of $g_2$ and $g_1=x_n^2-2$ is strictly positive on any such region. To this end, suppose $$\tilde g=Ag_1+Bg_2$$ is positive on $\{x_n>a\}$ where $a<\sqrt 2$. Then since $g_2(r_0)=0$ and $r_0>\sqrt 2$, we must have $A>0$. By the same logic, since $g_1(\sqrt 2)=0$, we must have $B<0$. Then on the region $\{x_n>\sqrt 2\}$ we have \begin{align}\tilde g \notag &= (A+BK_1)(x_n^2-2)+B(x_n^2-2)\int\limits_2^{x_n} \frac{e^{\frac{z^2}4}}{(z^2-2)^2}dz \\ \tilde g \notag &= (x_n^2-2)\left(A+BK_1+B\int\limits_2^{x_n} \frac{e^{\frac{z^2}4}}{(z^2-2)^2}dz \right) \end{align} Since $B<0$, we compute the following limits. \begin{align} \lim\limits_{x_n\rightarrow \infty} \int\limits_2^{x_n} \frac{e^{\frac{z^2}4}}{(z^2-2)^2}dz \notag &= \infty \\ \lim\limits_{x_n\rightarrow \infty} \left(A+BK_1+B\int\limits_2^{x_n} \frac{e^{\frac{z^2}4}}{(z^2-2)^2}dz \right) \notag &= -\infty \\ \lim\limits_{x_n\rightarrow \infty} \tilde g \notag &= -\infty \end{align} In particular, for large enough $x_n$ we must have $\tilde g<0$, so $\tilde g$ is not strictly positive on $\{x_n>a\}$. To see the second claim, note that since $\{x_n>a\}$ is unstable it has positive stability index.
However, by Definition \ref{16}, this means that some compact subset $\{a<x_n<b_a\}$ also has positive stability index. This $b_a$ satisfies the second claim, because the stability index is defined as a supremum. \end{proof} We note that in the proof of Proposition \ref{17} above we have also proven the following corollary. \begin{cor}\label{52} Let $\Sigma={\mathbb S}^{n-1}\times{\mathbb R}\subset{\mathbb R}^{n+1}$ be a self-shrinker. Then $C=\sqrt 2$ is the unique value such that the two $(n-1)$-surfaces $\{x_n=\pm C\}$ split $\Sigma$ into three stable regions. \end{cor} \begin{rem} Based on a computer approximation, $r_0\approx 3.00395$, and this positive root of $g_2$ appears to be unique. Whatever the exact value of $r_0$, we note that the portions of $\Sigma={\mathbb S}^{n-1}\times{\mathbb R}$ given by $\{0<x_n<r_0\}$ and $\{-r_0<x_n<0\}$ are also stable. By taking linear combinations of $g_1$ and $g_2$, it is possible to interpolate between the two intervals $(-\sqrt 2,\sqrt 2)$ and $(0,r_0)$ to find other intervals of comparable length over which $\Sigma$ is stable. Also, the reflection through $0$ of each of these intervals can be obtained by interpolating between the intervals $(-\sqrt 2,\sqrt 2)$ and $(-r_0,0)$. \end{rem} \begin{Pro}\label{53} Let $\Sigma={\mathbb S}^k\times{\mathbb R}^{n-k}\subset{\mathbb R}^{n+1}$ be a self-shrinker with $1\leq k\leq n-2$. Then $\{r=\sqrt{2(n-k)}\}$ is the unique rotationally symmetric $(n-1)$-surface that splits $\Sigma$ into two stable regions $\{r<\sqrt{2(n-k)}\}$ and $\{r>\sqrt{2(n-k)}\}.$ \end{Pro} \begin{proof} Recall from Proposition \ref{47} that the general solution to the differential equation $Lf=0$ on $\{r\geq 0\}$ is given by $$f(r)=c_1(r^2-2(n-k))+c_2 f_2(r)$$ where $f_2(r)$ is only defined on $(0,\infty)$. Thus, the only solutions defined on sets containing the origin are constant multiples of $f_1=r^2-2(n-k)$.
Thus, the largest stable region of the form $\{r<C\}$ is given by $C=\sqrt{2(n-k)}$. We also know that the region $\{r>\sqrt {2(n-k)}\}$ is stable. It therefore suffices to show that no linear combination of $f_1$ and $f_2$ is strictly positive on $\{r>C\}$ where $C<\sqrt{2(n-k)}$. To see this, suppose by way of contradiction that there exist constants $A$ and $B$ and some $C<\sqrt{2(n-k)}$ such that on the region $\{r>C\}$ $$\tilde f=A(r^2-2(n-k))+Bf_2>0.$$ Recall that $f_2$ is defined to be $$f_2(r) = \left\{ \begin{array}{ll} (r^2-2(n-k))\int\limits_1^r \frac{e^\frac {s^2}4}{s^{n-k-1}[s^2-2(n-k)]^2} ds & : r \in (0,\sqrt{2(n-k)})\\ -\frac 1 2 \left(\frac e {2(n-k)}\right)^{\frac{n-k}2} & : r = \sqrt{2(n-k)} \\ (r^2-2(n-k))\left(K_2 + \int\limits_{2\sqrt{2(n-k)}}^r \frac{e^\frac {s^2}4}{s^{n-k-1}[s^2-2(n-k)]^2} ds\right) & : r \in (\sqrt{2(n-k)},\infty)\\ \end{array}\right.$$ In particular $f_2\left(\sqrt{2(n-k)}\right)<0$, so we must have $B<0$. Likewise, it is clear that $f_2(r)$ has a unique root on the region $\left\{r>\sqrt{2(n-k)}\right\}$. Thus, we must have $A>0$. Thus we obtain that on the region $\{r>\sqrt{2(n-k)}\}$, \begin{align}\tilde f\notag &= A(r^2-2(n-k))+B(r^2-2(n-k))\left(K_2 + \int\limits_{2\sqrt{2(n-k)}}^r \frac{e^\frac {s^2}4}{s^{n-k-1}[s^2-2(n-k)]^2} ds\right) \\ \tilde f \notag &= (r^2-2(n-k))\left(A+B K_2 + B\int\limits_{2\sqrt{2(n-k)}}^r \frac{e^\frac {s^2}4}{s^{n-k-1}[s^2-2(n-k)]^2} ds\right) \end{align} We compute the following limits.
\begin{align} \lim\limits_{r\rightarrow \infty} \int\limits_{2\sqrt{2(n-k)}}^r \frac{e^\frac {s^2}4}{s^{n-k-1}[s^2-2(n-k)]^2} ds \notag &= \infty \\ \lim\limits_{r\rightarrow \infty} \left(A+B K_2 + B\int\limits_{2\sqrt{2(n-k)}}^r \frac{e^\frac {s^2}4}{s^{n-k-1}[s^2-2(n-k)]^2} ds\right) \notag &= -\infty \\ \lim\limits_{r\rightarrow \infty} \tilde f \notag &= -\infty \end{align} In particular, for large enough $r$ we must have $\tilde f<0$, so $\tilde f$ is not strictly positive on $\{r>C\}$. This completes the proof. \end{proof} The uniqueness results in Corollary \ref{52} and Proposition \ref{53} are not surprising, and in fact they follow from the domain monotonicity of the lowest eigenvalue. We now give a statement of this domain monotonicity property, and we show how it implies the uniqueness results that we already proved using more direct arguments. \begin{thm}\cite{CM1} Suppose $\Omega_1$ and $\Omega_2$ are domains in some self-shrinker $\Sigma$. Suppose also that $\Omega_1\subset\Omega_2$, and $\Omega_2$ is strictly larger than $\Omega_1$. Then letting $\lambda_1(\Omega_i)$ denote the lowest eigenvalue of $L$ on $\Omega_i$ with Dirichlet boundary condition, we have $$\lambda_1(\Omega_2)<\lambda_1(\Omega_1).$$ \end{thm} Now note that on each of the three regions in Corollary \ref{52}, the Jacobi function $g_1=x_n^2-2$ is actually an eigenfunction. This is because for each of the three regions, $g_1\in L^2$ with respect to the weighted measure $d\tilde\mu=e^{-\frac{|x|^2}4}d\mu$. Likewise, the Jacobi function on each of the two regions in Proposition \ref{53} is $f_1=r^2-2(n-k)$. This is in $L^2$ on each of the two regions with respect to the weighted measure, so $f_1$ is an eigenfunction on each region. The fact that these Jacobi functions are also eigenfunctions is important, because it means that $0$ is an eigenvalue of $L$ on each of the regions under consideration.
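The weighted integrability claim for $g_1$ is easy to confirm numerically. The sketch below is our own illustration (the helper name and the trapezoid scheme are ours, and the finite spherical factor is omitted): it checks that $\int_{\sqrt 2}^{\infty}(x^2-2)^2 e^{-x^2/4}\,dx$ is finite, with a negligible tail.

```python
# Illustrative check (ours, not from the text): the weighted L^2 integral of
# g_1(x) = x^2 - 2 over (sqrt(2), infinity) with weight e^(-x^2/4) is finite,
# consistent with g_1 being an eigenfunction there.  The constant factor from
# the sphere directions of the cylinder is finite and is omitted.
import math

def weighted_norm_sq(upper, steps=200000):
    """Trapezoid rule for the integral of (x^2-2)^2 e^(-x^2/4) on (sqrt(2), upper)."""
    a = math.sqrt(2)
    h = (upper - a) / steps
    total = 0.0
    for i in range(steps + 1):
        x = a + i * h
        val = (x**2 - 2)**2 * math.exp(-x**2 / 4)
        total += val * (0.5 if i in (0, steps) else 1.0)
    return total * h

v20, v40 = weighted_norm_sq(20), weighted_norm_sq(40)
assert 0 < v20 < 30            # the integral is finite
assert abs(v40 - v20) < 1e-4   # and the tail past x = 20 is negligible
```

The Gaussian weight kills the polynomial growth of $g_1^2$, which is exactly why the Jacobi function is a genuine eigenfunction on the unbounded region here, in contrast to the hyperplane case discussed next.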
We have already shown that each of these regions is stable, meaning that $L$ has no negative eigenvalues on each region. Thus, $0$ must be the lowest eigenvalue of $L$ on each of these regions. Then by the domain monotonicity of the lowest eigenvalue, making any of the regions larger would decrease the lowest eigenvalue below $0$, so the enlarged region would no longer be stable. The situation is quite different when $\Sigma={\mathbb R}^n$ is a hyperplane. Recall from Proposition \ref{47} that the only globally defined Jacobi functions are constant multiples of $f_1$, where $$f_1(r)=-1+\sum\limits_{m=1}^\infty \frac{m(2m-2)!}{2^{3m-1}(m!)^2\prod\limits_{j=0}^{m-1}(n+2j)}r^{2m}.$$ This function clearly has exactly one root when $r\geq 0$. As before, we continue to call this root $r_1$. Then the $(n-1)$-sphere $\{r=r_1\}$ splits $\Sigma$ into two stable regions given by $\{r<r_1\}$ and $\{r>r_1\}$. Since $f_1$ is bounded on compact sets, we clearly have $f_1\in L^2(\{r<r_1\})$ with respect to the weighted measure. Thus, $0$ is the smallest eigenvalue on this region. However, we now show that $f_1$ is not in the weighted $L^2$ space on $\{r>r_1\}$, and hence $f_1$ is not an eigenfunction on the unbounded region. \begin{lem}\label{62} Let $\Sigma={\mathbb R}^n\subset{\mathbb R}^{n+1}$ be a self-shrinker. Define $$f_1(r)=-1+\sum\limits_{m=1}^\infty \frac{m(2m-2)!}{2^{3m-1}(m!)^2\prod\limits_{j=0}^{m-1}(n+2j)}r^{2m},$$ and let $r_1$ denote the positive root of $f_1$. Then $f_1$ is a positive Jacobi function on $\{r>r_1\}$, but it is not an eigenfunction on that same region. \end{lem} \begin{proof} We know from Proposition \ref{47} that $Lf_1=0$, so it is a Jacobi function. It is clearly positive on $\{r>r_1\}$. It thus suffices to show that $f_1$ is not an eigenfunction on that same region. To do this, we will show that $f_1$ is not in the weighted $L^2$ space on this region.
Letting $C_n$ denote the volume of the unit sphere ${\mathbb S}^{n-1}\subset{\mathbb R}^n$, we obtain the following expressions for the squared weighted $L^2$ norm of $f_1$. \begin{align}\|f_1\|_{L^2}^2 \notag &= C_n\int\limits_0^\infty r^{n-1} f_1^2 e^{-\frac {r^2}4} dr \\ \|f_1\|_{L^2}^2 \notag &= C_n\int\limits_0^\infty r^{n-1} \left(f_1 e^{-\frac {r^2}8}\right)^2 dr \end{align} We wish to show that this integral diverges. For this, it is sufficient to show that the integrand does not go to $0$ as $r\rightarrow\infty$. We will show more, namely that the integrand actually diverges to $\infty$ as $r\rightarrow\infty$. To show this, we will show that for large enough $r$ $$f_1 e^{-\frac {r^2}8}>1.$$ Clearly this is equivalent to the claim that for large enough $r$, we have $f_1>e^{\frac{r^2}8}$. For a fixed dimension $n$, we define the constants $a_{2m}$ and $b_{2m}$ by the following equations. $$f_1(r)=-1+\sum\limits_{m=1}^\infty \frac{m(2m-2)!}{2^{3m-1}(m!)^2\prod\limits_{j=0}^{m-1}(n+2j)}r^{2m}=\sum\limits_{m=0}^\infty a_{2m}r^{2m}$$ $$e^{\frac{r^2}8}=\sum\limits_{m=0}^\infty \frac 1{2^{3m} m!} r^{2m} = \sum\limits_{m=0}^\infty b_{2m}r^{2m}$$ We will show that \begin{equation}\label{54}\lim\limits_{m\rightarrow\infty} \frac{a_{2m}}{b_{2m}}=\infty.\end{equation} We now assume this fact and show how it completes the proof. Assuming Equation \ref{54}, we see there exists some $M$ such that $a_{2m}>2 b_{2m}$ for all $m\geq M$.
Then for $r\geq 1$, \begin{align} f_1-e^{\frac{r^2}8} \notag &= \sum\limits_{m=0}^\infty a_{2m} r^{2m} - \sum\limits_{m=0}^\infty b_{2m} r^{2m} \\ f_1-e^{\frac{r^2}8} \notag &= \sum\limits_{m=0}^\infty (a_{2m} - b_{2m}) r^{2m} \\ f_1-e^{\frac{r^2}8} \notag &= \sum\limits_{m=0}^{M-1} (a_{2m} - b_{2m}) r^{2m} + \sum\limits_{m=M}^\infty (a_{2m} - b_{2m}) r^{2m} \\ f_1-e^{\frac{r^2}8} \notag &\geq \sum\limits_{m=0}^{M-1} -|a_{2m} - b_{2m}| r^{2m} + \sum\limits_{m=M}^\infty b_{2m} r^{2m} \\ f_1-e^{\frac{r^2}8} \notag &\geq \left(\sum\limits_{m=0}^{M-1} -|a_{2m} - b_{2m}|\right) r^{2M-2} + \left(\sum\limits_{m=M}^\infty b_{2m}\right) r^{2M} \\ f_1-e^{\frac{r^2}8} \notag &\geq (-A + Br^2)r^{2M-2} \end{align} In the above, $A$ and $B$ are positive constants. Thus, for large enough $r$, $f_1>e^{\frac{r^2}8}$, and hence $$f_1 e^{-\frac{r^2}8}\geq e^{\frac{r^2}8} e^{-\frac{r^2}8}=1.$$ This proves the main claim, so it remains only to prove Equation \ref{54}. We will do this in two cases, depending on whether the dimension $n$ is even or odd. Suppose first that $n$ is even. Then there exists an integer $d$ such that $n=2d$. We restrict attention to $m\geq 1$ and compute. \begin{align} \prod\limits_{j=0}^{m-1} (n+2j) \notag &= (2d)(2d+2)\cdots(2d+2(m-1)) \\ \prod\limits_{j=0}^{m-1} (n+2j) \notag &= 2^m d(d+1)(d+2)\cdots (d+m-1) \\ \prod\limits_{j=0}^{m-1} (n+2j) \notag &= \frac{2^m(d+m-1)!}{(d-1)!} \\ \frac {a_{2m}}{b_{2m}} \notag &= \frac{m(2m-2)!}{2^{3m-1}(m!)^2\prod\limits_{j=0}^{m-1}(n+2j)} 2^{3m} m! \\ \frac {a_{2m}}{b_{2m}} \notag &= \frac{2m(2m-2)!}{(m!)\prod\limits_{j=0}^{m-1}(n+2j)} \\ \frac {a_{2m}}{b_{2m}} \notag &= \frac{2m(2m-2)!(d-1)!}{(m!)2^m(d+m-1)!} \\ \frac {a_{2m}}{b_{2m}} \notag &= \frac{(2m)!(d-1)!(d+m)}{(m!)(d+m)!2^m(2m-1)} \end{align} We now apply Stirling's Approximation.
For large $m$, $$m!\approx \sqrt{2\pi m}\left(\frac m e\right)^m.$$ This approximation is valid in the limit, which is all we care about. \begin{align} \lim\limits_{m\rightarrow\infty}\frac {a_{2m}}{b_{2m}} \notag &= \lim\limits_{m\rightarrow\infty} \frac{2\sqrt{\pi m}\left(\frac{2m}e\right)^{2m}(d-1)!(d+m)}{\sqrt{2\pi m}\left(\frac{m}e\right)^m \sqrt{2\pi(d+m)}\left(\frac{d+m}e\right)^{d+m}2^m (2m-1)} \\ \lim\limits_{m\rightarrow\infty}\frac {a_{2m}}{b_{2m}} \notag &= \lim\limits_{m\rightarrow\infty} \frac{\left(\frac{2m}e\right)^{m}(d-1)!(d+m)}{ \sqrt{\pi(d+m)}\left(\frac{d+m}e\right)^{d+m} (2m-1)} \\ \lim\limits_{m\rightarrow\infty}\frac {a_{2m}}{b_{2m}} \notag &= \frac{(d-1)!e^d}{\sqrt \pi}\lim\limits_{m\rightarrow\infty} \frac {2^m}{(d+m)^{d-\frac 1 2}} \left(\frac m{d+m}\right)^m \\ \lim\limits_{m\rightarrow\infty}\frac {a_{2m}}{b_{2m}} \notag &= \frac{(d-1)!e^d}{\sqrt \pi}\lim\limits_{m\rightarrow\infty} \frac {2^m}{(d+m)^{d-\frac 1 2}} \frac 1{(1+\frac d m)^m} \\ \lim\limits_{m\rightarrow\infty}\frac {a_{2m}}{b_{2m}} \notag &= \frac{(d-1)!}{\sqrt \pi}\lim\limits_{m\rightarrow\infty} \frac {2^m}{(d+m)^{d-\frac 1 2}} \\ \lim\limits_{m\rightarrow\infty}\frac {a_{2m}}{b_{2m}} \notag &= \infty \end{align} This proves Equation \ref{54} whenever the dimension $n$ is even. When $n$ is odd, we notice that $$\frac{2m(2m-2)!}{(m!)\prod\limits_{j=0}^{m-1}(n+2j)} > \frac{2m(2m-2)!}{(m!)\prod\limits_{j=0}^{m-1}(n+1+2j)}.$$ The right-hand side is the ratio for the even dimension $n+1$, so we already know from the above computation that it goes to $\infty$ as $m\rightarrow \infty$. This proves Equation \ref{54} in the case when $n$ is odd, which completes the proof. \end{proof} Since $f_1$ is not an eigenfunction on the region $\{r>r_1\}$ in the hyperplane, it is possible that the smallest eigenvalue of $L$ on this region is strictly positive.
In this case, it would be possible to find some value $C<r_1$ such that the region in the hyperplane given by $\{r>C\}$ is stable. We already know that any subset of the ball $\{r<r_1\}$ is stable, so this would show that any $(n-1)$-sphere between $\{r=C\}$ and $\{r=r_1\}$ splits the hyperplane into two stable regions. In the following chapter we will use a very different argument to show that this phenomenon does occur in the case of a flat plane in ${\mathbb R}^3$. In particular, we will show that the region in the $2$-plane given by $\{r>\sqrt 2\}$ is stable (see Theorem \ref{49}). In Remark \ref{55} we estimate the value of $r_1$ for the $2$-plane to be about $2.514$. Thus, any circle centered at the origin with radius between $\sqrt 2$ and $r_1\approx 2.514$ splits the plane into two stable regions. We note that $f_1$ is not an eigenfunction on $\{r>r_1\}$ in a hyperplane of any dimension, and also the index of every hyperplane is $1$. These facts lead us to conjecture that every hyperplane has a similar non-uniqueness property. However, our argument in the case of a plane will depend on a special convergence result in ${\mathbb R}^3$, so we are unable at the present time to generalize these results to higher dimensions. We note that in spite of our interest in the value $r_1$, we have yet to investigate it directly. We do this in the following proposition and remark. \begin{Pro}\label{48} Let $\Sigma={\mathbb R}^n\subset{\mathbb R}^{n+1}$ be a self-shrinker. The portion of $\Sigma$ given by $$\left\{r>2\sqrt{\left( n+2\right)\left(\sqrt{\frac{3n+2}{n+2}}-1\right)}\right\}$$ is stable. \end{Pro} \begin{proof} By Proposition \ref{47} we know that $L f_1=0$, where $$f_1(r)=-1+\sum\limits_{m=1}^\infty \frac{m(2m-2)!}{2^{3m-1}(m!)^2\prod\limits_{j=0}^{m-1}(n+2j)}r^{2m}.$$ As we mentioned in that proof, all of the terms in this power series are positive except the constant term.
Thus there exists a single positive root of the series, and we call that root $r_1$. We also see that all of the partial sums are strictly less than $f_1$ for $r>0$. Thus, if we find a root of a partial sum, then we know the root $r_1$ of the full series must be lower. We are thus able to estimate $r_1$ from above by considering only the first few terms of $f_1$. To second order, $f_1(r)=-1+\frac 1 {4n} r^2$. This has the positive root $r=2\sqrt{n}$. To fourth order, $f_1(r)=-1+\frac 1 {4n} r^2+\frac 1 {2^5 n(n+2)}r^4$. This has the positive root $$r=2\sqrt{(n+2)\left(\sqrt{\frac{3n+2}{n+2}}-1\right)}.$$ Thus, $f_1$ is a positive Jacobi function of $L$ on the region of $\Sigma$ given by $$\left\{r>2\sqrt{\left( n+2\right)\left(\sqrt{\frac{3n+2}{n+2}}-1\right)}\right\},$$ which completes the proof. \end{proof} \begin{rem}\label{55} In the above proof, we approximate the root $r_1$ of the power series $f_1$ using partial sums. We now use a computer program to find a numerical approximation of $r_1$ for the first few values of $n$. We compare these to the partial sum approximations from Proposition \ref{48} in the following table. In the first column we show the dimension $n$ of the hyperplane. In the second column we give a numerical value of the second order approximation $r_1\approx 2\sqrt{n}.$ In the third column we give a numerical value of the fourth order approximation $r_1\approx 2\sqrt{(n+2)\left(\sqrt{\frac{3n+2}{n+2}}-1\right)}$. In the last column we give a numerical approximation of $r_1$ obtained using Mathematica. All numbers are rounded to three decimal places.
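As an illustrative aside (not from the thesis), the two closed-form roots and the monotone decrease of the partial-sum roots toward $r_1$ can be checked numerically; `f1_partial`, `root`, and `root4` are our own hypothetical helpers built from the series for $f_1$.

```python
from fractions import Fraction
from math import factorial, sqrt

def f1_partial(n, r, M):
    """Partial sum of f_1 through the r^(2M) term, with exact coefficients."""
    total = -1.0
    for m in range(1, M + 1):
        prod = 1
        for j in range(m):
            prod *= n + 2 * j
        a = Fraction(m * factorial(2 * m - 2),
                     2 ** (3 * m - 1) * factorial(m) ** 2 * prod)
        total += float(a) * r ** (2 * m)
    return total

def root(n, M):
    """Bisect for the unique positive root of the order-2M partial sum."""
    lo, hi = 0.1, 8.0 * sqrt(n)   # partial sum is -1 near 0, large at hi
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f1_partial(n, mid, M) < 0 else (lo, mid)
    return (lo + hi) / 2

def root4(n):
    """Closed-form positive root of the fourth-order truncation."""
    return 2 * sqrt((n + 2) * (sqrt((3 * n + 2) / (n + 2)) - 1))

for n in (2, 3, 4):
    assert abs(root(n, 1) - 2 * sqrt(n)) < 1e-6   # second-order root
    assert abs(root(n, 2) - root4(n)) < 1e-6      # fourth-order root
    # roots of higher partial sums decrease toward r_1
    assert root(n, 1) > root(n, 2) > root(n, 40)
# the order-40 partial sum reproduces the reported value for n = 2
assert abs(root(2, 40) - 2.514) < 1e-2
```

Since every coefficient after the constant term is positive, each partial sum is increasing on $(0,\infty)$, which is what makes the bisection well posed.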
\begin{center} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Approximate values of $r_1$} \\ \hline $n$ & 2nd order & 4th order & Full \\ \hline \hline 2 & 2.828 & 2.574 & 2.514 \\ \hline 3 & 3.464 & 3.109 & 3.004 \\ \hline 4 & 4 & 3.558 & 3.408 \\ \hline 5 & 4.472 & 3.954 & 3.760 \\ \hline 6 & 4.899 & 4.312 & 4.076 \\ \hline 7 & 5.292 & 4.642 & 4.364 \\ \hline \end{tabular} \end{center} \end{rem} We now turn our attention to the situation in ${\mathbb R}^3$. In this setting more is known, so we can obtain improved results. \newpage \chapter{Half-Infinite, Stable Self-Shrinkers with Boundary in ${\mathbb R}^3$}\label{60} \section{Background Results} In this chapter we look at circular slices of the self-shrinking cylinder ${\mathbb S}^1\times{\mathbb R}\subset{\mathbb R}^3$. We let the $x_3$-axis be the axis of symmetry of the cylinder. \begin{defn} Let $\Sigma={\mathbb S}^{1}\times{\mathbb R}$ be a self-shrinker. For each $a\in{\mathbb R}$, we let $$\gamma_a=\Sigma\cap\{x_3=a\}.$$ \end{defn} For each of these slices $\gamma_a$ we will find a stable, half-infinite self-shrinker with boundary $\gamma_a.$ Recall from Corollary \ref{52} that when $a\geq \sqrt 2$ the portion of the cylinder given by $\{x_3>a\}$ is stable. Likewise when $a\leq -\sqrt 2$ we have the stable region $\{x_3< a\}$. We will spend the rest of this chapter finding a stable half-infinite self-shrinker with boundary $\gamma_a$ for the remaining values of $a$. In Section \ref{56} we will show the existence of such a surface for each $a\in (0,\sqrt 2)$. By symmetry this also solves the problem for $a\in(-\sqrt 2, 0)$. In Section \ref{57} we will analyze these surfaces and use them to show that the portion of the plane given by $\{|\vec x|>\sqrt 2\}$ is stable.
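The radius of the self-shrinking cylinder can be read off from the shrinker equation $H=\frac 1 2\langle\vec x,\vec n\rangle$: a cylinder of radius $R$ about the $x_3$-axis has $H=\frac 1 R$ (one principal curvature $\frac 1 R$, the other $0$) and $\langle\vec x,\vec n\rangle=R$, forcing $R=\sqrt 2$. A one-line check (our own sketch; `shrinker_residual` is a hypothetical helper):

```python
from math import sqrt

def shrinker_residual(R):
    """H - <x, n>/2 for a cylinder of radius R about the x3-axis:
    H = 1/R and <x, n> = R, so a self-shrinker must have R = sqrt(2)."""
    return 1.0 / R - R / 2.0

assert abs(shrinker_residual(sqrt(2))) < 1e-12  # radius sqrt(2) shrinks self-similarly
assert shrinker_residual(1.0) > 0               # too small: curvature dominates
assert shrinker_residual(2.0) < 0               # too large: position term dominates
```

This is why the slices $\gamma_a$ above all sit on the cylinder $x^2+y^2=2$.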
We begin by stating results that hold in as much generality as possible, but we will be forced to specialize to the low-dimensional case before long. We first wish to define a class of objects which play a fundamental role in geometric measure theory, the class of rectifiable currents. This in turn requires the introduction of Hausdorff measure. \begin{defn} Let $A\subset {\mathbb R}^n$, and define the diameter of $A$ to be $$\operatorname{diam}(A)=\sup\{|x-y|:x,y\in A\}.$$ Let $\alpha_m$ equal the volume of the unit ball in ${\mathbb R}^m$. Then the $m$-dimensional Hausdorff measure of $A$ is defined by $$\textbf{H}^m(A)=\lim\limits_{\delta\rightarrow 0^+}\inf\limits_{\substack{ A\subset\cup S_j \\ \operatorname{diam}(S_j)\leq\delta}}\sum\limits_{j}\alpha_m\left(\frac{\operatorname{diam}(S_j)} 2\right)^m.$$ The Hausdorff dimension of $A$ is given by $$\dim_\textbf{H}(A)=\inf\{m\geq 0:\textbf{H}^m(A)=0\}.$$ \end{defn} \begin{defn}\label{25} Consider a Borel set $B\subset {\mathbb R}^n$. We say $B$ is $(\textbf{H}^m,m)$ rectifiable if it has the following properties. \begin{enumerate} \item There exist at most countably many bounded subsets $K_i\subset{\mathbb R}^m$ and Lipschitz maps $f_i:{\mathbb R}^m\rightarrow{\mathbb R}^n$ such that $B=\cup f_i(K_i)$ (ignoring sets of measure 0). \item $\textbf{H}^m(B)<\infty$. \end{enumerate} A rectifiable $m$-current is a compactly supported, oriented $(\textbf{H}^m,m)$ rectifiable set with integer multiplicities. \end{defn} \begin{rem} Currents are usually defined as linear functionals on differential forms. We are not explicitly stating the definition in that form. However, it is possible to integrate a smooth differential form $\phi$ over a rectifiable current defined above. In this way, each rectifiable current $B$ gives rise to the following linear functional on differential forms.
$$\phi\mapsto\int\limits_B \phi$$ The inclusion of integer multiplicities in Definition \ref{25} can then be seen as necessary to allow for the standard additivity of linear functionals. \end{rem} It was necessary to define rectifiable currents, because these are the basic objects dealt with in geometric measure theory. The general approach often taken to proving an existence result such as the one we are pursuing is to first show the existence of a rectifiable current with the desired minimization property. However, as can be seen from Definition \ref{25}, rectifiable currents are extremely general objects which often bear little resemblance to our usual notion of a surface. Thus it is then necessary to show that the obtained minimizing current is regular, or smooth. Otherwise, the object obtained will still be an area minimizer, but it cannot be considered a minimal surface. Our argument will be a little more complicated. We will first show the existence and regularity of a sequence of self-shrinkers in increasingly large compact sets. We will then show that there exists a subsequence of these self-shrinkers which converges to the desired half-infinite self-shrinker. The following general theorems of Federer will form the foundation of the first steps in this argument. \begin{thm}\label{27}\cite{F} Suppose $T$ is a rectifiable current in a compact, $C^1$ Riemannian manifold $M$. Then consider the set of all rectifiable currents $S$ in $M$ such that $\partial S=\partial T$. There exists at least one such current $S$ of least area. \end{thm} Note that the above theorem holds in all dimensions. However, the following regularity result is more restrictive. \begin{thm}\label{28}\cite{F} Let $T\subset {\mathbb R}^n$ be an $(n-1)$-dimensional, area minimizing rectifiable current. Then in its interior $T$ is a smooth, embedded manifold except for a singular set of Hausdorff dimension at most $(n-8)$.
In particular, if $n\leq 7$ then $T$ has no interior singularities. \end{thm} \section{Existence of the Surfaces}\label{56} We now show that there exists some stable half-infinite MCF self-shrinker with boundary $\gamma_a$ for each $a\in [0,\sqrt 2)$. We will need a few well known results from elsewhere which we will cite and use without proof. Note that by Theorem \ref{11}, MCF self-shrinkers in ${\mathbb R}^3$ can also be considered as minimal surfaces with respect to the conformal metric $$d\tilde\mu=e^{\frac{-|\vec x|^2} 4}d\mu.$$ This is because with respect to this metric $F_{0,1}$ is just a multiple of the area functional. Then by Theorem \ref{12}, the stability of a self-shrinker is equivalent to its stability as a minimal surface with respect to $d\tilde\mu.$ We can therefore appeal directly to the known existence and regularity results for stable minimal surfaces in order to show existence and regularity of stable self-shrinkers. Our approach will be to construct a sequence of compact, stable self-shrinkers in increasingly large domains. We will obtain a sequence of surfaces with one boundary component fixed at $\gamma_a$ and the other running off to infinity along $C$. We will then show that some subsequence of these surfaces converges to a stable, half-infinite self-shrinker, and furthermore this limit surface is axially symmetric. The existence of a convergent subsequence will follow from Theorem \ref{21} below, which only holds in three-dimensional manifolds. As such, for the remainder of this chapter we will restrict attention to surfaces in ${\mathbb R}^3$. In the following lemma, we will construct the sequence of self-shrinkers with one boundary component $\gamma_a$ and the other a parallel, coaxial circle $\gamma_b$. We wish to exclude the possibility that for some choice of $b$ our solution splits into two disconnected topological discs, one with boundary $\gamma_a$ and the other with boundary $\gamma_b$.
To accomplish this, we will solve the Plateau problem in a domain from which the interior of $C$ has been removed. \begin{lem}\label{18} Fix $a\in [0,\sqrt 2)$, and let $b_a$ be given by Proposition \ref{17}. Let $b\in\mathbb{Z}$ satisfy $b>b_a$. Let $\Omega_b=\{(x,y,z)\in{\mathbb R}^3:x^2+y^2+z^2\leq b^2+2, x^2+y^2\geq 2\}$. Then there exists a stable, embedded self-shrinker $\Sigma_b\subset \Omega_b$ such that $\partial\Sigma_b=\gamma_a\cup\gamma_b$. \end{lem} \begin{proof} We view $\Omega_b$ as a subset of ${\mathbb R}^3$ endowed with the metric $d\tilde\mu=e^{\frac{-|\vec x|^2} 4}d\mu$ from Theorem \ref{11}. Note that the portion of the cylinder given by $\{a<x_3<b\}$ is a rectifiable current in $\Omega_b$ with boundary $\gamma_a\cup\gamma_b$. Thus, the existence of a least area current $\Sigma_b\subset\Omega_b$ with the same boundary is guaranteed by Theorem \ref{27}. The fact that this current is also a smooth embedded surface with no interior singular points follows from Theorem \ref{28}. Area minimizing surfaces with respect to $d\tilde\mu$ are stable minimal surfaces, so $\Sigma_{b}$ is a stable, embedded self-shrinker with $\partial\Sigma_{b}=\gamma_a\cup\gamma_b.$ \end{proof} We wish to show the convergence of a subsequence of $\{\Sigma_b\}$, but for this we need a uniform bound on the curvature of the $\Sigma_b$. The following theorem of Schoen gives this bound away from the boundary. \begin{thm}\label{19}\cite{S} Let $\Sigma$ be an immersed stable minimal surface with trivial normal bundle, and let $B_{r_0}\subset \Sigma\setminus\partial\Sigma$ be an intrinsic open ball of radius $r_0$ contained in $\Sigma$ and not touching the boundary of $\Sigma$.
Then there exists a constant $C$ such that for any $\sigma>0$, $$\sup\limits_{B_{r_0-\sigma}}|A|^2 \leq C\sigma^{-2}.$$ \end{thm} \begin{lem}\label{20} For large enough $b$, and away from the boundary component $\gamma_b$, the family of surfaces $\Sigma_b$ from Lemma \ref{18} has uniformly bounded $|A|^2$ up to and including the boundary component $\gamma_a$. \end{lem} \begin{proof} If we fix $\sigma>0$, we obtain a bound on $|A|^2$ for each $\Sigma_b$ further than $\sigma$ away from $\gamma_a$. We still need to worry about something going wrong near $\gamma_a$. However, we know from Hardt-Simon \cite{HS} that each $\Sigma_b$ is regular up to the boundary. Thus, we know that each $\Sigma_b$ has bounded $|A|^2$ up to the boundary. It thus suffices to rule out the possibility that these bounds on $|A|^2$ are themselves unbounded. That is, we need to show that a single bound holds for every $\Sigma_b$ in the sequence. We are not concerned about the behavior of $\Sigma_b$ near the boundary component $\gamma_b$, because these components do not show up in the limit. As such, for the rest of this proof, we will restrict attention to the interior of a large ball in ${\mathbb R}^3$, say the ball of radius 20. We will also assume that $b>30$, so that we are sufficiently far away from $\gamma_b$. Thus, Schoen's curvature estimate from Theorem \ref{19} holds for every $\Sigma_b$ except near $\gamma_a$. Suppose by way of contradiction that there exists a subsequence $\Sigma_i$ such that $$\max\limits_{\Sigma_i}|A|>i.$$ For each $i$, pick a point $x_i\in\Sigma_i$ such that $|A(x_i)|=\max\limits_{\Sigma_i}|A|$. Then by Theorem \ref{19}, as $i$ increases, the $x_i$ must tend toward $\gamma_a$. Define a new sequence $$\Gamma_i=|A(x_i)|(\Sigma_i-x_i).$$ Then this sequence has the property that each $\Gamma_i$ has $|A(0)|=1$ and $|A|\leq 1$ everywhere else.
Then since the $\Sigma_i$ are stable and embedded, the new surfaces $\Gamma_i$ are also embedded and stable with respect to the dilated metric. These dilated metrics become increasingly flat as $i\rightarrow\infty$. Thus by Arzel\`a-Ascoli, some subsequence of the $\Gamma_i$ converges to a limit surface $\Gamma.$ This surface is embedded and stable as a surface in ${\mathbb R}^3$. It also has $|A(0)|=1$. There are now two cases, depending on what happens to $\gamma_a$ in the limit. Case 1 is that $\gamma_a$ runs off to $\infty$ and does not show up in the limit. In that case $\Gamma$ is a complete, stable minimal surface in ${\mathbb R}^3.$ However, Bernstein's Theorem then says that $\Gamma$ is a flat plane. This contradicts the fact that $\Gamma$ has $|A(0)|=1$, since a flat plane has $|A|\equiv 0$. Case 2 is that $\gamma_a$ does show up in the limit as the boundary of $\Gamma$. In this case, note that the limit of $\gamma_a$ under our successive dilations is a straight line. Then we have that $\Gamma$ is an embedded, area minimizing surface in ${\mathbb R}^3$ whose boundary is a straight line. Then by a result of P\'{e}rez \cite{P}, $\Gamma$ is a half-plane. However, this implies that $\Gamma$ has $|A|\equiv 0$, which contradicts the fact that $|A(0)|=1$. This completes the proof. \end{proof} We now state a well-known compactness theorem stated by Anderson in \cite{A} that gives us convergence of a subsequence of $\{\Sigma_b\}$. \begin{thm}\label{21}\cite{A} Let $\Omega$ be a bounded domain in a complete Riemannian 3-manifold $N^3$, and let $M_i$ be a sequence of minimally immersed surfaces in $\Omega$. Suppose there is a constant $C$ such that the Gauss curvature $K_{M_i}(x)$ satisfies $|K_{M_i}(x)|<C$ for all $i$. Then a subsequence of $\{M_i\}$ converges smoothly (in the $C^k$ topology, $k\geq 2$) to an immersed minimal surface $M_\infty$ (with multiplicity) in $\Omega$, and $|K_{M_\infty}(x)|\leq C$.
If each $M_i$ is embedded, then $M_\infty$ is also embedded. \end{thm} \begin{thm}\label{22} There exists a subsequence of the surfaces $\Sigma_b$ from Lemma \ref{18} that converges to some limit surface $\Sigma_\infty$. This limit surface $\Sigma_\infty$ is a stable, half-infinite, embedded self-shrinker with boundary $\gamma_a$. \end{thm} \begin{proof} Recall from Lemma \ref{18} that $\Omega_b=\{(x,y,z)\in{\mathbb R}^3:x^2+y^2+z^2\leq b^2+2, x^2+y^2\geq 2\}$. By Theorem \ref{21}, for any fixed $\Omega_{b_0}$, there exists a subsequence of $\{\Sigma_b\}$ that converges smoothly on $\Omega_{b_0}$. We can then restrict attention to this subsequence, and find a further subsequence that converges on $\Omega_{b_0+1}$. Repeating this process, we obtain a sequence of sequences. We can then take the diagonal elements to form a single subsequence. This subsequence converges to a limit surface $\Sigma_\infty$ with the desired properties. \end{proof} \section{Analysis of the Surfaces}\label{57} We now state without proof a well known theorem that we will need in the following proof. \begin{thm}[Sard's Theorem]\label{31} Given a smooth function $h$ from one manifold to another, the image of the set of critical points of $h$ has Lebesgue measure $0$. \end{thm} \begin{thm}\label{32} The surface $\Sigma_\infty$ from Theorem \ref{22} is rotationally symmetric about the $z$-axis. \end{thm} \begin{proof} Let $\vec{v}$ be a vector field on $\Sigma_\infty$ given by rotation in a fixed direction about the $z$-axis. For concreteness, at a point $(x,y,z)\in\Sigma_\infty$, let $\vec v=(-y,x,0)$. At every point of $\Sigma_\infty$, let $\vec n$ denote the outward pointing unit normal. We define the function $f:\Sigma_\infty\rightarrow {\mathbb R}$ by $f=\langle\vec v, \vec n\rangle$. Note that on the boundary $\gamma_a$, $f\equiv 0$. We first show that $Lf=0$, which implies that $f$ is either identically $0$, or it is an eigenfunction of $L$ with eigenvalue $0$.
Recall from Definition \ref{8} that $$Lf=\Delta f-\frac 1 2 \langle \vec{x},\nabla f\rangle+(|A|^2+\frac 1 2)f.$$ Let $\{\vec{e_i}\}_{i=1}^2$ be an orthonormal frame for the tangent space to $\Sigma_\infty$. We compute $Lf$ term by term, starting with $\Delta f$. In the following computations, we follow Einstein's convention of summing over repeated indices. \begin{align} f \notag &= \langle\vec v, \vec n\rangle \\ \nabla_{\vec{e_i}}f \notag &= \langle\nabla_{\vec{e_i}}\vec v,\vec n\rangle+\langle\vec v, \nabla_{\vec{e_i}}\vec n\rangle \\ \nabla_{\vec{e_i}}f \notag &= \langle\nabla_{\vec{e_i}}\vec v, \vec n\rangle-a_{ij}\langle\vec v,\vec{e_j}\rangle \\ \nabla_{\vec{e_k}}\nabla_{\vec{e_i}}f \notag &= \langle\nabla_{\vec{e_k}}\nabla_{\vec{e_i}}\vec v, \vec n\rangle - a_{kj}\langle\nabla_{\vec{e_i}}\vec v, \vec{e_j}\rangle-a_{ij}\langle\nabla_{\vec{e_k}}\vec v,\vec{e_j}\rangle \\ \notag & \ \ \ -a_{ij,k}\langle\vec v, \vec{e_j}\rangle-a_{ij}\langle\vec v, \nabla_{\vec{e_k}}\vec{e_j}\rangle \\ \Delta f \notag &= \langle\Delta \vec v, \vec n\rangle - 2a_{ij}\langle\nabla_{\vec{e_i}}\vec v, \vec{e_j}\rangle-a_{ii,j}\langle\vec v,\vec{e_j}\rangle - a_{ij}\langle\vec v,a_{ij}\vec n\rangle \\ \Delta f \notag &= \langle\Delta \vec v, \vec n\rangle - 2a_{ij}\langle\nabla_{\vec{e_i}}\vec v, \vec{e_j}\rangle + \langle\vec v,\nabla H\rangle - |A|^2 f \end{align} We now use the fact that $\vec v$ is a Killing field, so for all vector fields $\vec X, \vec Y$ we have that $\langle \nabla_{\vec X}\vec v,\vec Y\rangle=-\langle \nabla_{\vec Y}\vec v,\vec X\rangle$.
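The Killing-field identity just invoked can be sanity-checked numerically for $\vec v=(-y,x,0)$: its Jacobian is a constant antisymmetric matrix, so $\langle\nabla_{\vec X}\vec v,\vec Y\rangle=-\langle\nabla_{\vec Y}\vec v,\vec X\rangle$ for any $\vec X,\vec Y$. The following sketch (our own illustration; `nabla_v` is a hypothetical helper, not from the thesis) checks this on random vectors.

```python
import random

# Jacobian of v(x, y, z) = (-y, x, 0); constant since v is linear.
J = [[0.0, -1.0, 0.0],
     [1.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]

def nabla_v(X):
    """Directional derivative of v along X, i.e. the matrix product J X."""
    return [sum(J[i][k] * X[k] for k in range(3)) for i in range(3)]

def dot(X, Y):
    return sum(x * y for x, y in zip(X, Y))

random.seed(0)
for _ in range(100):
    X = [random.uniform(-1, 1) for _ in range(3)]
    Y = [random.uniform(-1, 1) for _ in range(3)]
    # Killing identity: <nabla_X v, Y> = -<nabla_Y v, X>
    assert abs(dot(nabla_v(X), Y) + dot(nabla_v(Y), X)) < 1e-12
```

Antisymmetry of $J$ is exactly the infinitesimal version of the rotation being an isometry, which is all the proof uses.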
Thus \begin{align} \langle \nabla_{\vec {e_i}}\vec v,\vec {e_i}\rangle \notag &= 0 \\ a_{ij}\langle \nabla_{\vec {e_i}}\vec v,\vec {e_j}\rangle \notag &= \sum\limits_{i\neq j}a_{ij}\langle \nabla_{\vec {e_i}}\vec v,\vec {e_j}\rangle \\ a_{ij}\langle \nabla_{\vec {e_i}}\vec v,\vec {e_j}\rangle \notag &= \sum\limits_{i < j}(a_{ij}-a_{ji})\langle \nabla_{\vec {e_i}}\vec v,\vec {e_j}\rangle \\ a_{ij}\langle \nabla_{\vec {e_i}}\vec v,\vec {e_j}\rangle \notag &= 0 \end{align} The last equality follows from the symmetry of the second fundamental form $A$. Recall that since $\vec v$ is a rotation vector field, it is linear on ${\mathbb R}^3$. Thus $\Delta_{{\mathbb R}^3}\vec v=0.$ However, we are working with the tangential Laplacian $\Delta_{\Sigma}$. We compute \begin{align} \Delta \vec v \notag &= \Delta_{\Sigma}\vec v = \Delta_{{\mathbb R}^3}\vec v - \nabla_{\nabla_{e_i}e_i}\vec v - \nabla_{\vec n}\nabla\vec v \\ \Delta \vec v \notag &= 0 - \nabla_{\nabla_{e_i}e_i}\vec v - 0 \\ \Delta \vec v \notag &= - \nabla_{a_{ii}\vec n}\vec v \\ \Delta \vec v \notag &= - a_{ii}\nabla_{\vec n}\vec v \\ \langle \Delta \vec v,\vec n\rangle \notag &= -a_{ii}\langle \nabla_{\vec n}\vec v, \vec n\rangle \\ \langle \Delta \vec v,\vec n\rangle \notag &= 0 \end{align} Putting these pieces together yields $$\Delta f=\langle\vec v,\nabla H\rangle - |A|^2 f.$$ We now turn our attention to the second term in $Lf.$ Recall from Lemma \ref{30} that since $\Sigma_\infty$ is a self-shrinker its mean curvature $H$ satisfies $H=\frac 1 2 \langle \vec x, \vec n\rangle$. We will use this fact in the following.
\begin{align}H \notag &= \frac 1 2 \langle \vec x, \vec n\rangle \\ \nabla_{\vec{e_i}}H \notag &= \frac 1 2 \langle \nabla_{\vec{e_i}}\vec x, \vec n\rangle + \frac 1 2 \langle \vec x, \nabla_{\vec{e_i}}\vec n\rangle \\ \nabla_{\vec{e_i}}H \notag &= \frac 1 2 \langle \vec{e_i}, \vec n\rangle - \frac 1 2 a_{ij}\langle \vec x, \vec{e_j}\rangle \\ \nabla_{\vec{e_i}}H \notag &= -\frac {1} 2 a_{ij}\langle \vec x, \vec{e_j}\rangle \\ \nabla H \notag &= -\frac {1} 2 a_{ij}\langle \vec x, \vec{e_j}\rangle \vec{e_i} \\ \langle \vec v,\nabla H\rangle \notag &= -\frac {1} 2 a_{ij}\langle \vec x, \vec{e_j}\rangle \langle\vec v,\vec{e_i}\rangle \\ \nabla f \notag &= \nabla \langle \vec v,\vec n\rangle \\ \nabla_{\vec{e_i}}f \notag &= \langle \nabla_{\vec{e_i}}\vec v, \vec n \rangle +\langle \vec v, \nabla_{\vec{e_i}}\vec n\rangle \\ \nabla_{\vec{e_i}}f \notag &= \langle \nabla_{\vec{e_i}}\vec v, \vec n \rangle -a_{ij}\langle \vec v, \vec{e_j}\rangle \\ \nabla f \notag &= \langle \nabla_{\vec{e_i}}\vec v, \vec n \rangle\vec{e_i} -a_{ij}\langle \vec v, \vec{e_j}\rangle\vec{e_i} \\ \frac 1 2 \langle\vec x,\nabla f\rangle \notag &= \frac 1 2 \langle \nabla_{\vec{e_i}}\vec v, \vec n \rangle \langle\vec x,\vec{e_i}\rangle -\frac 1 2 a_{ij}\langle \vec v, \vec{e_j}\rangle \langle \vec x, \vec{e_i}\rangle \\ \frac 1 2 \langle\vec x,\nabla f\rangle \notag &= \frac 1 2 \langle \nabla_{\vec{e_i}}\vec v, \vec n \rangle \langle\vec x,\vec{e_i}\rangle +\langle \vec v, \nabla H\rangle \end{align} We now have that $$Lf= - \frac 1 2 \langle \nabla_{\vec{e_i}}\vec v, \vec n \rangle \langle\vec x,\vec{e_i}\rangle +\frac 1 2 \langle \vec v,\vec n\rangle.$$ We investigate the first term, again using that $\vec v$ is a Killing field.
\begin{align} \langle \nabla_{\vec{e_i}}\vec v, \vec n \rangle \langle\vec x,\vec{e_i}\rangle \notag &= - \langle \nabla_{\vec n}\vec v, \vec {e_i} \rangle \langle\vec x,\vec{e_i}\rangle \\ \langle \nabla_{\vec{e_i}}\vec v, \vec n \rangle \langle\vec x,\vec{e_i}\rangle \notag &= - \langle \nabla_{\vec n}\vec v, \langle \vec x,\vec {e_i}\rangle \vec{e_i}\rangle \\ \langle \nabla_{\vec{e_i}}\vec v, \vec n \rangle \langle\vec x,\vec{e_i}\rangle \notag &= - \langle \nabla_{\vec n}\vec v, \vec x -\langle\vec x, \vec n\rangle\vec n\rangle \\ \langle \nabla_{\vec{e_i}}\vec v, \vec n \rangle \langle\vec x,\vec{e_i}\rangle \notag &= - \langle \nabla_{\vec n}\vec v, \vec x\rangle +\langle\vec x,\vec n\rangle\langle \nabla_{\vec n}\vec v,\vec n\rangle\\ \langle \nabla_{\vec{e_i}}\vec v, \vec n \rangle \langle\vec x,\vec{e_i}\rangle \notag &= - \langle \nabla_{\vec n}\vec v, \vec x \rangle \\ \langle \nabla_{\vec{e_i}}\vec v, \vec n \rangle \langle\vec x,\vec{e_i}\rangle \notag &= \langle \nabla_{\vec x}\vec v, \vec n \rangle \end{align} We now switch to Euclidean coordinates. \begin{align}\vec x \notag &= x\vec i +y\vec j +z\vec k \\ \vec v \notag &= -y\vec i+x\vec j \\ \vec n \notag &= n_1\vec i+n_2\vec j +n_3\vec k \\ \langle\vec v,\vec n\rangle \notag &= xn_2-yn_1 \\ \nabla_{\vec x}\vec v \notag &= x\nabla_{\vec i}\vec v +y\nabla_{\vec j}\vec v +z\nabla_{\vec k}\vec v \\ \nabla_{\vec x}\vec v \notag &= x\vec j - y\vec i \\ \langle \nabla_{\vec x}\vec v, \vec n \rangle \notag &= xn_2-yn_1 \\ \langle \nabla_{\vec x}\vec v, \vec n \rangle \notag &= \langle\vec v,\vec n\rangle \end{align} Thus $Lf=0$, so $f$ is either identically $0$ or it is an eigenfunction of $L$ with corresponding eigenvalue $0$.
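The Euclidean computation above boils down to the fact that the linear field $\vec v=(-y,x,0)$ satisfies $\nabla_{\vec x}\vec v=\vec v$ at every point, so $\langle\nabla_{\vec x}\vec v,\vec n\rangle=\langle\vec v,\vec n\rangle$. A quick finite-difference check (illustrative only; the helper names are ours):

```python
import random

def v(p):
    x, y, z = p
    return (-y, x, 0.0)

def directional_derivative(p, X, h=1e-6):
    """(v(p + h X) - v(p)) / h; exact up to rounding since v is linear."""
    q = tuple(a + h * b for a, b in zip(p, X))
    return tuple((a - b) / h for a, b in zip(v(q), v(p)))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

random.seed(1)
for _ in range(50):
    p = tuple(random.uniform(-2, 2) for _ in range(3))
    n = tuple(random.uniform(-1, 1) for _ in range(3))
    # <nabla_x v, n> = <v, n> for the rotation field v at the point p
    assert abs(dot(directional_derivative(p, p), n) - dot(v(p), n)) < 1e-6
```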
Recall from Definitions \ref{10} and \ref{16} that since $\Sigma_\infty$ is stable it must have no negative eigenvalues. Thus the lowest possible eigenvalue is $0$. Suppose by way of contradiction that $f$ is not identically $0$. Then it must be an eigenfunction of $L$ with eigenvalue $0$, which would make $0$ the lowest eigenvalue of $L$. However, Colding and Minicozzi showed in \cite{CM1} that the eigenfunction for the lowest eigenvalue of $L$ cannot change sign. Thus we can assume (by possibly multiplying it by $-1$) that $f\geq 0$ on all of $\Sigma_\infty$. This leads to a contradiction. Consider the function $h:\Sigma_\infty\rightarrow {\mathbb R}$ defined by $h(x,y,z)=z$ for all $(x,y,z)\in \Sigma_\infty$. Then by Sard's Theorem (Theorem \ref{31}) the set of critical points of this function maps to a set of measure $0$. Then for a generic value $z=c$, the slice $\Sigma_\infty\cap\{z=c\}$ contains no critical points of $h$. Therefore, for such a $c$, the slice $\Sigma_\infty\cap\{z=c\}$ consists of a disjoint union of curves. Consider one of these curves, and call it $\gamma$. Parameterize $\gamma$ by arclength $s$. Then $\gamma=(x(s),y(s),c)$, and $\gamma'=(x'(s),y'(s),0)$. Note that since there is no critical point of the function $h$ when $z=c$, at every point of $\gamma$ the tangent plane to $\Sigma_\infty$ does not equal the slice $\{z=c\}$. Thus, at each point of $\gamma$ the projection of the unit normal to $\Sigma_\infty$ onto the slice $\{z=c\}$ is nonzero. This projection, call it $\vec n ^{\top}$, is also perpendicular to $\gamma'$, since $\gamma'$ is tangent to $\Sigma_\infty$. Thus we can assume that $\vec n ^{\top}=l(s)(-y',x',0)$, where $l(s)>0$. Note that $\vec v=(-y,x,0)$, so $$f=\langle \vec v, \vec n\rangle=l(s)(xx'+yy').$$ However, $r^2=x^2+y^2,$ so $2rr'=2xx'+2yy'$ which implies $r'=\frac f{rl}$. 
We know $r$ and $l$ are strictly positive, so $f$ is positive at a point on $\gamma$ if and only if moving along $\gamma$ in the direction of increasing $s$ causes $r$ to increase. Likewise, $f$ is negative if and only if $r$ is decreasing along $\gamma$. We know that $f\geq 0$. Thus if $\gamma$ is a closed curve, $\gamma$ must be a circle centered on the $z$-axis. If $\gamma$ is not closed, then consider moving along $\gamma$ in the direction of decreasing $s$. Then the distance $r$ to the $z$-axis can never increase. Also, $\Sigma_\infty$ is embedded, so it cannot intersect itself. Since $\Sigma_\infty$ does not intersect the interior of the cylinder, $r$ can never decrease below $\sqrt 2$. Thus, $\gamma$ must be a spiral. However, this cannot happen since by construction $\Sigma_\infty$ minimizes area. Thus, $\gamma$ must be a circle centered on the $z$-axis, and $f\equiv 0$ on a dense set and hence everywhere. Likewise, $\Sigma_\infty$ is rotationally symmetric on a dense set of $z$ slices. Since $\Sigma_\infty$ has no singularities, it must be rotationally symmetric everywhere. This completes the proof. \end{proof} We now state a theorem by Kleene and M{\o}ller that gives us more information about the surface $\Sigma_\infty$ found in Theorem \ref{22}. \begin{thm}\label{23}\cite{KM} For each fixed ray from the origin, $$r_{\sigma}(z)=\sigma z, \ \ \ r_\sigma :(0,\infty)\rightarrow {\mathbb R}^+, \ \ \ \sigma>0,$$ there exists a unique smooth graphical solution $u_\sigma :[0,\infty)\rightarrow {\mathbb R}^+$ which is asymptotic to $r_\sigma$ and whose rotation about the $z$-axis is a self-shrinker. Also, for $d>0,$ any solution $u:(d,\infty)\rightarrow {\mathbb R}^+$ (whose rotation about the $z$-axis is a self-shrinker) is either the cylinder $u\equiv \sqrt 2$, or is one of the $u_\sigma$ for some $\sigma=\sigma(u)>0$.
Furthermore, the following properties hold for $u_\sigma$ when $\sigma>0$: \begin{enumerate} \item $u_\sigma>r_\sigma$, and $u_\sigma(0)<\sqrt 2,$ \item $|u_\sigma(z)-\sigma z|=O(\frac 1 z ),$ and $|u'_\sigma(z)-\sigma|=O(\frac 1 {z^2})$ as $z\rightarrow \infty,$ \item $\Sigma_\sigma$ generated by $u_\sigma$ has mean curvature $H(\Sigma_\sigma)>0$, \item $u_\sigma$ is strictly convex, and $0<u'_\sigma<\sigma$ holds on $[0,\infty)$, \item $\gamma_\sigma$, the maximal geodesic containing the graph of $u_\sigma$, is not embedded. \end{enumerate} \end{thm} The next corollary follows immediately from Theorem \ref{32} and the uniqueness in Theorem \ref{23}. \begin{cor} For each $a\in(0,\sqrt 2)$, the stable surface $\Sigma_\infty$ found in Theorem \ref{22} must be a piece of one of the surfaces of rotation $\Sigma_\sigma$ described in Theorem \ref{23}. In particular, there exists some $\sigma$ such that $\Sigma_\infty$ must equal the portion of $\Sigma_\sigma$ contained in the exterior of the cylinder of radius $\sqrt 2$. \end{cor} We now have a stable half-infinite self-shrinker with boundary equal to $\gamma_a$ for each $a>0$. By symmetry, we have also solved the problem for all $a<0$. We now extend this result to $a=0$. \begin{thm}\label{49} The portion of the flat plane in ${\mathbb R}^3$ given by $\{|\vec x|>\sqrt 2\}$ is a stable self-shrinker. \end{thm} \begin{proof} We note first that for each $a\in(0,\sqrt 2)$, we obtain a stable self-shrinker given by one of the surfaces described in Theorem \ref{23}. Each of these surfaces is contained between the flat plane $\{z=0\}$ and the cone obtained by rotating the ray $r_{\sigma}(z)=\sigma z$ about the $z$-axis.
Then, letting $\sigma\rightarrow\infty$ so that the cones approach the plane, we obtain a sequence of stable self-shrinkers converging pointwise to the portion of the plane given by $\{|\vec x|>\sqrt 2\}$. In particular, this convergence is uniform on compact sets. In the conformal metric $d\tilde\mu=e^{\frac{-|\vec x|^2} 4}d\mu$ these converging surfaces are area minimizing. We now follow an argument from \cite{MY} to show that the limit surface is also area minimizing and hence stable as a self-shrinker in the standard metric. Suppose the limit surface does not minimize area. This means that there exists some compact subdomain $\Sigma$ of the limit surface such that some other surface $\tilde \Sigma$ satisfies $\partial\tilde\Sigma=\partial\Sigma$ and $Area(\Sigma)=Area(\tilde\Sigma)+\epsilon$ for some $\epsilon>0$. However, there is a sequence of area minimizing compact surfaces with boundary converging uniformly to $\Sigma$. The boundaries are also converging uniformly. Thus, there exists some area minimizing $\Sigma_m$ such that $Area(\Sigma)\leq Area(\Sigma_m) +\frac{\epsilon}2$, and the boundaries of $\Sigma$ and $\Sigma_m$ are close enough together that they bound some surface $\tilde A$ with $Area(\tilde A)<\frac{\epsilon}2$. However, this gives us a contradiction. The surface $\tilde\Sigma\cup\tilde A$ has the same boundary as $\Sigma_m$. On the other hand, we know $$Area(\tilde\Sigma\cup\tilde A)<Area(\Sigma)-\frac{\epsilon}2 \leq Area(\Sigma_m).$$ This contradicts the fact that $\Sigma_m$ minimizes area. This completes the proof. \end{proof} We recall the value $r_1\approx 2.514$ from Proposition \ref{48}. We have now shown that for any value $r\in(\sqrt 2 , r_1)$, the circle of radius $r$ in the $2$-plane splits the plane into two stable regions. \chapter{Future Research}\label{61} We showed in Chapter \ref{24} that generalized cylinders split in a unique way into maximal stable rotationally symmetric regions.
However, we showed at the end of Chapter \ref{60} that this is not true for the plane in ${\mathbb R}^3$. This result relies on a minimal surface compactness result in Riemannian $3$-manifolds (Theorem \ref{21}), so we have been unable to reproduce it in higher dimensions. We know the index of the hyperplane in any dimension is $1$ (see Theorem \ref{37}), and we also know that the positive Jacobi field $f_1$ on the region $\{r>r_1\}$ is not an eigenfunction in any dimension (see Lemma \ref{62}). These facts lead us to the following. \begin{Con} Let $\Sigma={\mathbb R}^n\subset{\mathbb R}^{n+1}$ be a self-shrinker with $n\geq 3$. Let $r_1$ be as in Remark \ref{55}. Note that $r_1$ depends on $n$. Then there exists some $r_0=r_0(n)$ with $r_0<r_1$ such that for all $r\in(r_0,r_1)$ the $(n-1)$-sphere of radius $r$ centered at the origin splits $\Sigma$ into two stable regions. \end{Con} Recall that self-shrinkers are also minimal surfaces with respect to a Gaussian metric. Minimal surfaces solve a variational problem, so there is also a closely related isoperimetric problem: minimize surface area within the class of surfaces which bound a fixed volume. Note that the volume of all of ${\mathbb R}^n$ with the Gaussian metric is finite, so any surface that splits ${\mathbb R}^n$ into two connected components bounds a region of finite volume. Some work has already been done on this subject; in fact, Carlen and Kerce have proven that solutions of this isoperimetric problem are flat hyperplanes \cite{CK}. In Proposition \ref{39} we showed that all hyperplanes through the origin have stability index $1$. It should be possible to extend this result to hyperplanes that do not pass through the origin and prove the following. \begin{Con} Every hyperplane in ${\mathbb R}^n$ endowed with the Gaussian metric is stable as a solution to the isoperimetric problem.
\end{Con} \begin{appendix} \end{appendix} \addcontentsline{toc}{chapter}{\protect\numberline{}Bibliography} \pagebreak \begin{center}Vitae\end{center} Caleb Hussey was born on March 13th, 1982 in West Union, Ohio. He received his Bachelor of Arts in Mathematics from New College of Florida in June 2005. His undergraduate thesis was completed under the guidance of Dr. David T. Mullins. He entered a Doctoral Program at The Johns Hopkins University in the fall of 2006. He received his Master of Arts in Mathematics from The Johns Hopkins University in May 2007. His dissertation was completed under the guidance of Dr. William P. Minicozzi II, and this dissertation was defended on June 27th, 2012. \addcontentsline{toc}{chapter}{\protect\numberline{} Vitae} \end{document}
\begin{document} \title{\bf\sc Large deviations for random evolutions with independent increments in the scheme of L\'{e}vy approximation} \author{{\sc I.V. Samoilenko}\\ Institute of Mathematics,\\ Ukrainian National Academy of Science, Kyiv, Ukraine, [email protected]} \maketitle {\bf\sc Short title: Large deviations for random evolutions} \baselineskip 6 mm \hrule \begin{abstract} We carry out an asymptotic analysis of the problem of large deviations for random evolutions with independent increments in the scheme of L\'{e}vy approximation. Large deviations for random evolutions in the scheme of L\'{e}vy approximation are determined by an exponential generator for a jump process with independent increments. \end{abstract} {\small {\sc Key Words:} {L\'{e}vy approximation, nonlinear exponential generator, Markov process, locally independent increments process, piecewise deterministic Markov process, singular perturbation. } {\small {\sc Mathematics Subject Classification Primary:} 60J55, 60B10, 60F17, 60K10; Secondary 60G46, 60G60.} \hrule \section{Introduction} In this paper we carry out an asymptotic analysis of the problem of large deviations for random evolutions with independent increments in the scheme of L\'{e}vy approximation (see Koroliuk and Limnios, 2005, Ch. 9). An asymptotic analysis of random evolutions with independent increments in the L\'{e}vy approximation scheme is conducted in the work of Koroliuk, Limnios and Samoilenko (2009). In the monograph of Feng and Kurtz (2006) an effective method for studying the problem of large deviations for Markov processes is developed. It is based on the theory of convergence of exponential (nonlinear) operators. The exponential operator in the series scheme with a small series parameter $ \varepsilon \to0 \ (\varepsilon> 0) $ has the form (see, e.g.
Koroliuk, 2011): $$ \mathbb {H} ^ \varepsilon \varphi (x): = e ^ {- \varphi (x) / \varepsilon} \varepsilon \mathbb {L} ^ \varepsilon e ^ {\varphi (x) / \varepsilon}, $$ where the operators $ \mathbb {L} ^ \varepsilon, \varepsilon> 0 $ define Markov processes $ \zeta^ \varepsilon (t), t \geq0, \varepsilon> 0 $ in the series scheme on the standard phase space $(G, \mathcal{G})$. The test-functions $\varphi(x), x\in G,$ are real-valued and finite. Random evolutions with independent increments (see Koroliuk and Limnios, 2005, Ch. 1) are given by $$ \xi (t) = \xi_0 + \int ^ t_0 \eta (ds; x (s)), \ t\geq0. \eqno (1) $$ The Markov processes with independent increments $\eta(t; x), $ $ t\geq0, $ $ x\in E,$ take values in $\mathbb{R}$ and are given by the generators $$ \Gamma(x)\varphi(u)=\int_{\mathbb{R}}[\varphi(u+v)-\varphi(u)]\Gamma(dv; x), \ x\in E, \varphi(u)\in\mathcal{B}_{\mathbb{R}}. $$ \textbf{Remark 1.} Processes in $\mathbb{R}^d, d>1,$ may also be studied. See Remark 7 for more details. The switching Markov process $ x(t), t \geq0, $ on a standard phase space $ (E, \mathcal{E}) $ is defined by the generator $$ Q\varphi(x)=q(x)\int_{E}[\varphi(y)-\varphi(x)]P(x, dy), \ x\in E, \varphi(x)\in\mathcal{B}_{E}. \eqno(2) $$ Thus, random evolution (1) is characterized by the generator of the two-component Markov process $\xi(t), x(t), t\geq 0$ (see Koroliuk and Limnios, 2005, Ch. 2): $$ \mathbb{L}\varphi(u, x)=Q\varphi(\cdot, x)+\Gamma(x)\varphi(u, \cdot). $$ The basic assumption about the switching Markov process is the following condition: \begin{itemize} \item[{\bf C1}:] The Markov process $x(t), t\geq0,$ is uniformly ergodic with stationary distribution $\pi(A), \ A\in \mathcal{E}.$ \end{itemize} Let $\Pi$ be the projector onto the null-space of the reducibly invertible operator $Q$ defined in (2): $$\Pi\varphi(x)=\int_E\pi(dx)\varphi(x).$$ The following relation holds: $$Q\Pi=\Pi Q=0.$$ The potential operator $R_0$ has the following property (Koroliuk and Limnios, 2005, Ch.
1): $$QR_0=R_0Q=\Pi-I.$$ \textbf{Remark 2.} It follows from the last relation that, under the solvability condition $$\Pi\psi=0,$$ the Poisson equation $$Q\varphi=\psi$$ has the unique solution $$\varphi=-R_0\psi$$ satisfying $\Pi\varphi=0.$ \textbf{Remark 3.} The study of limit properties of Markov processes is based on the martingale characterization of such processes; namely, we consider $$ \mu_t=\varphi(x(t))-\varphi(x(0))-\int^t_0\mathbb{L}\varphi(x(s))ds, \eqno(3) $$ where $\mathbb{L}$ is the generator that defines the Markov process $x(t), t\geq0,$ on the standard phase space $(E, \mathcal{E})$. It has a dense domain $\mathcal{D}(\mathbb{L})\subseteq\mathcal{B}_E$ that contains continuous functions with continuous derivatives. Here $\mathcal{B}_E$ is the Banach space of real-valued finite test-functions $\varphi(x), x\in E,$ endowed with the norm $\|\varphi\|:=\sup_{x\in E}|\varphi(x)|.$ Large deviation theory is based on the study of the exponential martingale characterization (see Feng and Kurtz, 2006, Ch.1): $$ \widetilde{\mu}_t=\exp\{\varphi(x(t))-\varphi(x(0))-\int^t_0\mathbb{H}\varphi(x(s))ds\} \eqno(4) $$ is a martingale. Here the exponential nonlinear operator is $$ \mathbb{H}\varphi(x):=e^{-\varphi(x)}\mathbb{L}e^{\varphi(x)}, \ \varphi(x)\in \mathcal{B}_E. $$ The equivalence of (3) and (4) follows from the following result. \textit{Proposition} (see Ethier and Kurtz, 1986, p.66) $$\mu(t)=x(t)-\int_0^ty(s)ds$$ is a martingale if and only if $$\widetilde{\mu}(t)=x(t)\exp\left\{-\int_0^t\frac{y(s)}{x(s)}ds\right\} \mbox{is a martingale.}$$ We may assume that the domain $\mathcal{D}(\mathbb{L})$ contains constants, and if $\varphi(x)\in\mathcal{D}(\mathbb{L})$, then there exists a constant $c$ such that $\varphi(x)+c\in\mathcal{D}(\mathbb{L})$ is positive.
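\textbf{Example.} The following two-state illustration of the operators $\Pi$ and $R_0$ is a standard computation, not taken from the sources cited above; we include it only for orientation. Let $E=\{1,2\}$ and $$Q=\left(\begin{array}{cc}-q_1 & q_1\\ q_2 & -q_2\end{array}\right), \ q_1, q_2>0.$$ The stationary distribution is $\pi=(q_2, q_1)/(q_1+q_2)$, so $\Pi\varphi=\pi_1\varphi(1)+\pi_2\varphi(2)$ is a constant function, and $Q\Pi=\Pi Q=0$ since $Q$ annihilates constants and $\pi Q=0$. A direct computation gives $Q=(q_1+q_2)(\Pi-I)$, hence the potential operator $$R_0=\frac{1}{q_1+q_2}(I-\Pi)$$ satisfies $QR_0=R_0Q=(\Pi-I)(I-\Pi)=\Pi-I$, in agreement with the property above.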
\textbf{Remark 4.} The large deviation problem is solved in four stages (Feng and Kurtz, 2006, Ch.2): 1) Verify the convergence of the exponential (nonlinear) generator that defines large deviations; 2) Verify the exponential tightness of the Markov processes; 3) Verify the comparison principle for the limit exponential generator; 4) Construct a variational representation for the limit exponential generator. The stages 2)--4) for the exponential generator corresponding to processes with independent increments are realized in Feng and Kurtz (2006). Some of the stages are also presented in the monograph of Freidlin and Wentzel (1998), where the large deviation problem is studied with the use of the cumulant of the process with independent increments. The cumulant and the exponential generator are closely connected. Indeed, the generator of a Markov process may be written in the form (see, e.g. Skorokhod, 1989) $$\mathbb{L}\varphi(x)=\int_\mathbb{R}e^{\lambda x}a(\lambda)\overline{\varphi}(\lambda)d\lambda,$$ where $a(\lambda)$ is the cumulant of the process and $\overline{\varphi}(\lambda)=\int_\mathbb{R}e^{-\lambda x}\varphi(x)dx.$ The inverse transformation gives $$\int_\mathbb{R}e^{-\lambda x}\mathbb{L}\varphi(x)dx=a(\lambda)\overline{\varphi}(\lambda).$$ Let us rewrite this as $$\int_\mathbb{R}e^{-\lambda x}\mathbb{L}\varphi(x)dx=\int_\mathbb{R}e^{-\lambda x}a(\lambda)\varphi(x)dx,$$ and by setting $$e^{-\lambda x}\varphi(x)=:\widetilde{\varphi}(x)$$ we obtain $$\int_\mathbb{R}e^{-\lambda x}\mathbb{L}e^{\lambda x}\widetilde{\varphi}(x)dx=\int_\mathbb{R}a(\lambda)\widetilde{\varphi}(x)dx.$$ Thus, $$e^{-\lambda x}\mathbb{L}e^{\lambda x}=a(\lambda),$$ or, using the exponential generator: $$\mathbb{H} \varphi_0(x)=a(\lambda), \hskip2mm \mbox{where} \hskip2mm \varphi_0(x)=\lambda x.$$ Our aim is to realize stage 1): to verify the convergence of the exponential (nonlinear) generator that defines large deviations
$\mathbb{H}^{\varepsilon,\delta}\varphi^\delta_\varepsilon(u,x):=e^{-\varphi^\delta_\varepsilon/\varepsilon}\varepsilon \mathbb{L}^{\delta}_{\varepsilon} e^{\varphi^\delta_\varepsilon/\varepsilon}$ (see Theorem 1): $$\mathbb{H}^{\varepsilon,\delta}\varphi^\delta_\varepsilon(u,x)\to H^0\varphi(u), \ \varepsilon, \delta\to 0, \ \varepsilon^{-1}\delta\to 1.$$ To do this we use the method of solution of the singular perturbation problem with two small series parameters. The normalization of random evolution (1) by small series parameters for the solution of the large deviation problem in the L\'{e}vy approximation scheme is realized in the following way: $$\xi_{\varepsilon}^{\delta}(t)=\xi_{\varepsilon}^{\delta}(0)+\int^t_0\eta^{\delta}_{\varepsilon}(ds; x(s/\varepsilon^3)), \ t\geq0,$$ $$\eta^{\delta}_{\varepsilon}(t)=\varepsilon\eta^{\delta}(t/\varepsilon^3),$$ $$\Gamma^{\delta}_{\varepsilon}(x)\varphi(u)=\varepsilon^{-3}\int_{\mathbb{R}}[\varphi(u+\varepsilon v)-\varphi(u)]\Gamma^{\delta}(dv; x), \ x\in E,$$ where $\varepsilon,\delta\to 0$ so that $\varepsilon^{-1}\delta\to 1$. \textbf{Remark 5.} In the paper (Koroliuk, 2011) V.S. Koroliuk proposed to use the method of solution of the singular perturbation problem for the study of large deviations for random evolutions with independent increments in the asymptotically small diffusion scheme. In classical works the asymptotic analysis of the large deviation problem is carried out, as a rule, with the use of a large series parameter $n\to\infty,$ sometimes even several different parameters (see, e.g. Mogulskii, 1993). The method proposed in this work, with the use of two small parameters, was first realized in Samoilenko (2011) for the scheme of Poisson approximation. \section {L\'{e}vy approximation conditions} \begin{itemize} \item[{\bf C2:}] {\it L\'{e}vy approximation}.
The family of processes with independent increments $\eta^{\delta}(t;x), $ $x\in E,$ $ t\geq 0, $ satisfies the L\'{e}vy approximation conditions: \begin{description} \item[LA1] Approximation of mean values: $$a_{\delta}(x) = \int_{\mathbb{R}} v\Gamma^{\delta}(dv; x) = \delta a_1(x)+\delta^2[a(x) +\theta_a^{\delta} (x)],$$ and $$c_{\delta}(x) = \int_{\mathbb{R}} v^2\Gamma^{\delta}(dv; x) = \delta^2[c(x) + \theta_c^{\delta} (x)],$$ where $$\sup\limits_{x\in E}|a_1(x)|\leq a_1<+\infty,\sup\limits_{x\in E}|a(x)|\leq a<+\infty, \sup\limits_{x\in E}|c(x)|\leq c<+\infty.$$ \item[LA2] Asymptotic representation of the intensity kernel: $$\Gamma_g^{\delta}(x) = \int_{\mathbb{R}} g(v)\Gamma^{\delta}(dv; x) = \delta^2[\Gamma_g(x) + \theta^{\delta}_g(x)]$$ for all $g \in C_3(\mathbb{R})$, a measure-determining class of functions (see Jacod and Shiryaev, 1987, Ch. 7), where $\Gamma_g(x)$ is a finite kernel: $$|\Gamma_g(x)| \leq\Gamma_g \quad \hbox{(a constant depending on $g$)}.$$ The kernel $\Gamma^0(dv; x)$ is defined on the measure-determining class of functions $C_3(\mathbb{R})$ by the relation $$\Gamma_g(x) = \int_{\mathbb{R}} g(v)\Gamma^0(dv; x),\quad g \in C_3(\mathbb{R}).$$ The negligible terms $\theta_a^\delta,\theta_c^\delta, \theta_g^\delta$ satisfy the condition $$\sup\limits_{x\in E} |\theta_{\cdot}^{\delta}(x)|\to 0,\quad \delta\to 0.$$ \item[LA3] Balance condition: $$\int_E \pi(dx)a_1(x) = 0.$$ \end{description} \item[{\bf C3}:] {\it Uniform square integrability}: $$\lim\limits_{c\to\infty}\sup\limits_{x\in E} \int_{|v|>c} v^2\Gamma^0(dv; x) = 0.$$ \item[{\bf C4}:] {\it Exponential finiteness}: $$\int_{\mathbb{R}}e^{p |v|}\Gamma^{\delta}(dv; x)<\infty, \ \forall p\in \mathbb{R}.$$ \end{itemize} \section {Main result} \textbf{ Theorem 1.} The solution of the large deviation problem for the random evolution $$\xi^{\delta}_{\varepsilon}(t)=\xi^{\delta}_{\varepsilon}(0)+\int^t_0\eta^{\delta}_{\varepsilon}(ds; x(s/\varepsilon^3)), \ t\geq0,$$ defined by the generator of the two-component Markov process $\xi(t), x(t),
t\geq 0,$ $$ \mathbb{L}^{\delta}_{\varepsilon}\varphi(u, x)=\varepsilon^{-3}Q\varphi(\cdot, x)+\Gamma^{\delta}_{\varepsilon}(x)\varphi(u, \cdot), \eqno(5) $$ where $$\Gamma^{\delta}_{\varepsilon}(x)\varphi(u)=\varepsilon^{-3}\int_{\mathbb{R}}[\varphi(u+\varepsilon v)-\varphi(u)]\Gamma^{\delta}(dv; x), \ x\in E, \eqno(6)$$ is realized by the exponential generator $$ H^0\varphi(u)=(\widetilde{a}-\widetilde{a}_0)\varphi'(u)+\frac{1}{2}\sigma^2(\varphi'(u))^2+\int_{\mathbb{R}}[e^{v\varphi'(u)}-1]\widetilde{\Gamma}^0(dv), \eqno(7) $$ $$ \widetilde{a}=\Pi a(x)=\int_E\pi(dx)a(x), \widetilde{a}_0=\Pi a_0(x)=\int_E\pi(dx)a_0(x), a_0(x)=\int_{\mathbb{R}}v\Gamma^0(dv;x),$$ $$ \widetilde{c}=\Pi c(x)=\int_E\pi(dx)c(x), \widetilde{c}_0=\Pi c_0(x)=\int_E\pi(dx)c_0(x), c_0(x)=\int_{\mathbb{R}}v^2\Gamma^0(dv;x),$$ $$ \sigma^2=(\widetilde{c}-\widetilde{c}_0)+2\int_E\pi(dx) a_1(x)R_0a_1(x), \widetilde{\Gamma}^0(v)=\Pi {\Gamma}^0(v;x)=\int_E\pi(dx){\Gamma}^0(v;x).$$ \textbf{Remark 6.} Large deviations for random evolutions in the L\'{e}vy approximation scheme are determined by the exponential generator of a jump process with independent increments. The study of the large deviation problem for jump processes with independent increments is presented in the monograph (Freidlin and Wentzel, 1998, Ch.3,4). \textbf{Remark 7.} The limit exponential generator in the Euclidean space $\mathbb{R}^d, d>1,$ has the following form: $$ H^0\varphi(u)=\sum^d_{k=1}(\widetilde{a}_k-\widetilde{a}_k^0)\varphi'_k+\frac{1}{2}\sum^d_{k,r=1}\sigma_{kr}\varphi'_k\varphi'_r+\int_{\mathbb{R}^d}[e^{v\varphi'(u)}-1]\widetilde{\Gamma}^0(dv), \ \varphi'_k:=\partial\varphi(u)/\partial u_k, 1\leq k\leq d. $$ Here $\sigma^2=[\sigma_{kr}; 1\leq k,r\leq d]$ is the variance matrix.
In addition, the last exponential generator can be extended to the space of absolutely continuous functions (see Feng and Kurtz, 2006) $$C^1_b(\mathbb{R}^d)=\{\varphi: \ \exists \lim_{|u|\to\infty}\varphi(u)=\varphi(\infty), \ \lim_{|u|\to\infty}\varphi'(u)=0\}.$$ \textbf{Proof.} The limit transition in the exponential nonlinear generator of the random evolution is realized on the perturbed test-functions $$ \varphi^\delta_\varepsilon(u, x)=\varphi(u)+\varepsilon\ln[1+\delta\varphi_1(u, x)+\delta^2\varphi_2(u, x)], $$ where $\varphi(u)\in C^3(\mathbb{R})$ (the space of continuous bounded functions with continuous bounded derivatives up to the third order). Thus, since $e^{\varphi^\delta_\varepsilon/\varepsilon}=e^{\varphi/\varepsilon}[1 + \delta\varphi_1+\delta^2\varphi_2]$, we have from (5): $$ \mathbb{H}^{\varepsilon,\delta}\varphi^\delta_\varepsilon=e^{-\varphi^\delta_\varepsilon/\varepsilon}\varepsilon \mathbb{L}^{\delta}_{\varepsilon} e^{\varphi^\delta_\varepsilon/\varepsilon}=e^{-\varphi^\delta_\varepsilon/\varepsilon} [\varepsilon^{-2}Q+\varepsilon\Gamma_{\varepsilon}^\delta(x)] e^{\varphi^\delta_\varepsilon/\varepsilon}=$$ $$e^{-\varphi/\varepsilon}[1 + \delta\varphi_1+\delta^2\varphi_2]^{-1}[\varepsilon^{-2}Q+\varepsilon\Gamma_{\varepsilon}^\delta(x)]e^{\varphi/\varepsilon}[1 + \delta\varphi_1+\delta^2\varphi_2]. $$ To obtain the asymptotic behavior of the last exponential generator we use the following results.
\textbf{Lemma 1.} The exponential generator $$H^{\varepsilon}_{Q}\varphi_\varepsilon^\delta(u,x)=e^{-\varphi_\varepsilon^\delta/\varepsilon}\varepsilon^{-2}Qe^{\varphi_\varepsilon^\delta/\varepsilon}\eqno(8)$$ has the following asymptotic representation: $$ H^{\varepsilon}_{Q}\varphi_\varepsilon^\delta=\varepsilon^{-1}Q\varphi_1+Q\varphi_2-\varphi_1Q\varphi_1 +\theta_{Q}^{\varepsilon,\delta}(x), \eqno(9) $$ where $\sup\limits_{x\in E}|\theta_{Q}^{\varepsilon,\delta}(x)|\to 0$ as $\varepsilon,\delta\to 0.$ \textbf{ Proof.} We have: $$ H^{\varepsilon}_Q\varphi^\delta_\varepsilon=e^{-\varphi/\varepsilon}[1 + \delta\varphi_1+\delta^2\varphi_2]^{-1}\varepsilon^{-2}Qe^{\varphi/\varepsilon}[1 + \delta\varphi_1+\delta^2\varphi_2]=$$ $$\left[1-\delta\varphi_1+\delta^2\frac{\varphi_1^2+\delta\varphi_1\varphi_2-\varphi_2}{1+\delta\varphi_1+\delta^2\varphi_2}\right] [\delta \varepsilon^{-2}Q\varphi_1+\delta^2\varepsilon^{-2}Q\varphi_2]=\delta \varepsilon^{-2}Q\varphi_1+\delta^2\varepsilon^{-2}Q\varphi_2-\delta^2\varepsilon^{-2}\varphi_1Q\varphi_1+\theta_{Q}^{\varepsilon,\delta}(x), $$ where $$\theta_{Q}^{\varepsilon,\delta}(x)=\delta^3\varepsilon^{-2}\frac{\varphi_1^2+\delta\varphi_1\varphi_2-\varphi_2}{1+\delta\varphi_1+\delta^2\varphi_2}[Q\varphi_1+\delta Q\varphi_2]-\delta^3\varepsilon^{-2}\varphi_1Q\varphi_2.$$ By the limit condition $\varepsilon^{-1}\delta\to 1, \varepsilon,\delta\to 0$, we finally obtain (9). The lemma is proved.
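Note that the expansion of the inverse factor used in the proof above (and used again in the proofs below) is exact, not merely asymptotic: multiplying out, one checks directly that $$[1 + \delta\varphi_1+\delta^2\varphi_2]\left[1-\delta\varphi_1+\delta^2\frac{\varphi_1^2+\delta\varphi_1\varphi_2-\varphi_2}{1+\delta\varphi_1+\delta^2\varphi_2}\right]=1,$$ so no remainder terms are lost at this step.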
\textbf{Lemma 2.} The exponential generator $$H^{\varepsilon,\delta}_{\Gamma}(x)\varphi_\varepsilon^\delta(u,x)=e^{-\varphi_\varepsilon^\delta/\varepsilon}\varepsilon\Gamma^{\delta}_{\varepsilon}(x)e^{\varphi_\varepsilon^\delta/\varepsilon}\eqno(10)$$ has the following asymptotic representation: $$ H^{\varepsilon,\delta}_{\Gamma}(x)\varphi_\varepsilon^\delta=H_{\Gamma}(x)\varphi(u)+\varepsilon^{-1} a_1(x)\varphi'(u)+\theta_{\Gamma}^{\varepsilon,\delta}(x), $$ where $$H_{\Gamma}(x)\varphi(u)=(a(x)-a_0(x))\varphi'(u)+\frac{1}{2}(c(x)-c_0(x))(\varphi'(u))^2+\int_{\mathbb{R}}[e^{v\varphi'(u)}-1]\Gamma^0(dv;x),\eqno(11)$$ and $\sup\limits_{x\in E}|\theta_{\Gamma}^{\varepsilon,\delta}(x)|\to 0$ as $\varepsilon,\delta\to 0.$ \textbf{ Proof.} We have: $$ H^{\varepsilon,\delta}_{\Gamma}(x)\varphi^\delta_\varepsilon=e^{-\varphi/\varepsilon}[1 + \delta\varphi_1+\delta^2\varphi_2]^{-1}\varepsilon\Gamma_\varepsilon^\delta(x)e^{\varphi/\varepsilon}[1 + \delta\varphi_1+\delta^2\varphi_2]=$$ $$e^{-\varphi/\varepsilon}\left[1-\delta\varphi_1+\delta^2\frac{\varphi_1^2+\delta\varphi_1\varphi_2-\varphi_2}{1+\delta\varphi_1+\delta^2\varphi_2}\right] [\varepsilon\Gamma_\varepsilon^\delta(x)e^{\varphi/\varepsilon}+\varepsilon\delta\Gamma_\varepsilon^\delta(x)e^{\varphi/\varepsilon}\varphi_1+ \varepsilon\delta^2\Gamma_\varepsilon^\delta(x)e^{\varphi/\varepsilon}\varphi_2]= $$ $$H^{\varepsilon,\delta}_{\Gamma}(x)\varphi(u)+e^{-\varphi/\varepsilon}\varepsilon\delta[\Gamma_\varepsilon^\delta(x)e^{\varphi/\varepsilon}\varphi_1- \varphi_1\Gamma_\varepsilon^\delta(x)e^{\varphi/\varepsilon}]+\widetilde{\theta}_{\Gamma}^{\varepsilon,\delta}(x), $$ where $$\widetilde{\theta}_{\Gamma}^{\varepsilon,\delta}(x)=\varepsilon\delta^2[e^{-\varphi/\varepsilon}\Gamma_\varepsilon^\delta(x)e^{\varphi/\varepsilon}\varphi_2-e^{-\varphi/\varepsilon}\varphi_1\Gamma_\varepsilon^\delta(x)e^{\varphi/\varepsilon}\varphi_1]
+\varepsilon\delta^2\frac{\varphi_1^2+\delta\varphi_1\varphi_2-\varphi_2}{1+\delta\varphi_1+\delta^2\varphi_2} [e^{-\varphi/\varepsilon}\Gamma_\varepsilon^\delta(x)e^{\varphi/\varepsilon}+$$ $$e^{-\varphi/\varepsilon}\delta\Gamma_\varepsilon^\delta(x)e^{\varphi/\varepsilon}\varphi_1+ e^{-\varphi/\varepsilon}\delta^2\Gamma_\varepsilon^\delta(x)e^{\varphi/\varepsilon}\varphi_2]- \varepsilon\delta^3e^{-\varphi/\varepsilon}\varphi_1\Gamma_\varepsilon^\delta(x)e^{\varphi/\varepsilon}\varphi_2.$$ We use the following results. \textbf{Lemma 3.} $$\Gamma_{\varepsilon}^{\delta}(x)e^{\varphi(u)/\varepsilon}\varphi_1(u,x)=\varphi_1(u,x)\Gamma_{\varepsilon}^{\delta}(x)e^{\varphi(u)/\varepsilon}+(\varepsilon\delta)^{-1}\widehat{\theta}_{\Gamma}^{\varepsilon,\delta}(x),$$ where the negligible term satisfies $\sup\limits_{x\in E}|\widehat{\theta}^{\varepsilon,\delta}_\Gamma(x)|\to 0$ as $\varepsilon,\delta\to 0.$ \textbf{Proof.} Indeed, by (6) we have: $$\Gamma_{\varepsilon}^{\delta}(x)e^{\varphi(u)/\varepsilon}\varphi_1(u,x)=\varepsilon^{-3} \int_{\mathbb{R}}[e^{\varphi(u+\varepsilon v)/\varepsilon}\varphi_1(u+\varepsilon v,x)-e^{\varphi(u)/\varepsilon}\varphi_1(u,x)]\Gamma^\delta(dv;x)=$$ $$\varphi_1(u,x)\Gamma_{\varepsilon}^{\delta}(x)e^{\varphi(u)/\varepsilon}+ (\varepsilon\delta)^{-1}\left[\varphi'_1(u,x)\varepsilon^{-1}\delta \int_{\mathbb{R}}e^{\varphi(u+\varepsilon v)/\varepsilon}v\Gamma^\delta(dv;x)\right].$$ Let us estimate the last integral. Since the function $\varphi(u)$ is bounded, we have for fixed $\varepsilon$: $$\int_{\mathbb{R}}e^{\varphi(u+\varepsilon v)/\varepsilon}v\Gamma^\delta(dv;x)<e^C\int_{\mathbb{R}}v\Gamma^\delta(dv;x)=\delta e^C[a_1(x)+\delta a(x)+\delta\theta_a^{\delta}(x)].$$ Thus, we see that the last term is negligible as $\varepsilon, \delta\to 0$. The lemma is proved.
\textbf{Lemma 4.} The exponential generator $$H^{\varepsilon,\delta}_{\Gamma}(x)\varphi(u)=e^{-\varphi/\varepsilon}\varepsilon\Gamma_\varepsilon^\delta(x)e^{\varphi/\varepsilon}\eqno(12)$$ has the following asymptotic representation: $$H^{\varepsilon,\delta}_{\Gamma}(x)\varphi(u)=H_\Gamma(x)\varphi(u)+ \varepsilon^{-1}a_1(x)\varphi'(u)+\theta^{\varepsilon,\delta}(x),$$ where $\sup\limits_{x\in E}|\theta^{\varepsilon,\delta}(x)|\to 0$ as $\varepsilon,\delta\to 0.$ \textbf{Proof.} Let us rewrite (12) using the form of the generator (6). We have: $$H^{\varepsilon,\delta}_{\Gamma}(x)\varphi(u)=\varepsilon^{-2}\int_{\mathbb{R}}[e^{\Delta_{\varepsilon}\varphi(u)}-1]\Gamma^\delta(dv;x), $$ where $$ \Delta_\varepsilon\varphi(u):=\varepsilon^{-1}[\varphi(u+\varepsilon v)-\varphi(u)]. $$ We may rewrite it in the following way: $$H^{\varepsilon,\delta}_{\Gamma}(x)\varphi(u)=\varepsilon^{-2}\int_{\mathbb{R}}[e^{\Delta_{\varepsilon}\varphi(u)}-1-\Delta_{\varepsilon}\varphi(u)- \frac{1}{2}(\Delta_{\varepsilon}\varphi(u))^2]\Gamma^\delta(dv;x)+$$ $$\varepsilon^{-2}\int_{\mathbb{R}}[\Delta_{\varepsilon}\varphi(u)+ \frac{1}{2}(\Delta_{\varepsilon}\varphi(u))^2]\Gamma^\delta(dv;x). $$ It is easy to see that the function $\psi^{\varepsilon}_u(v)=e^{\Delta_{\varepsilon}\varphi(u)}-1-\Delta_{\varepsilon}\varphi(u)- \frac{1}{2}(\Delta_{\varepsilon}\varphi(u))^2$ belongs to the class $C_3(\mathbb{R})$. Indeed, $$\psi^{\varepsilon}_u(v)/v^2\to 0, \ v\to 0.$$ Besides, this function is continuous and bounded for every $\varepsilon$ under the condition that $\varphi(u)$ is bounded. Moreover, the function $\psi^{\varepsilon}_u(v)$ is bounded uniformly in $u$ under the conditions {\bf C3, C4} and if $\varphi'(u)$ is bounded.
Thus, we have: $$H^{\varepsilon,\delta}_{\Gamma}(x)\varphi(u)=\varepsilon^{-2}\delta^2\int_{\mathbb{R}}[e^{\Delta_{\varepsilon}\varphi(u)}-1-\Delta_{\varepsilon}\varphi(u)- \frac{1}{2}(\Delta_{\varepsilon}\varphi(u))^2]\Gamma^0(dv;x)+$$ $$\varepsilon^{-2}\int_{\mathbb{R}}[\Delta_{\varepsilon}\varphi(u)-v\varphi'(u)- \varepsilon \frac{v^2}{2}\varphi''(u)]\Gamma^\delta(dv;x)+\varepsilon^{-2}\delta a_1(x)\varphi'(u)+\varepsilon^{-2}\delta^2 a(x)\varphi'(u)+$$ $$\varepsilon^{-1}\delta^2 c(x)\varphi''(u)+ \varepsilon^{-2}\int_{\mathbb{R}}[\frac{1}{2}(\Delta_{\varepsilon}\varphi(u))^2-\frac{v^2}{2}(\varphi'(u))^2]\Gamma^\delta(dv;x)+\varepsilon^{-2}\delta^2\frac{1}{2} c(x)(\varphi'(u))^2.$$ The functions in the second and third integrals obviously belong to $C_3(\mathbb{R})$. Applying Taylor's formula to the test-functions $\varphi(u)\in C^3(\mathbb{R})$ and using condition \textbf{LA2}, we obtain: $$H^{\varepsilon,\delta}_{\Gamma}(x)\varphi(u)=\varepsilon^{-2}\delta^2\int_{\mathbb{R}}[e^{v\varphi'(u)}-1-v\varphi'(u)- \frac{v^2}{2}(\varphi'(u))^2]\Gamma^0(dv;x)+$$ $$\varepsilon^{-2}\delta^2\int_{\mathbb{R}}(e^{v\varphi'(u)}\varepsilon\frac{v^2}{2}\varphi''(\widetilde{u})-\varepsilon\frac{v^2}{2}\varphi''(\widetilde{u})- \varepsilon^2\frac{v^4}{8}(\varphi''(\widetilde{u}))^2)\Gamma^0(dv;x)+$$ $$\varepsilon^{-2}\delta^2\int_{\mathbb{R}}\varepsilon^{2}\frac{v^3}{3!}\varphi'''(\widetilde{u})\Gamma^0(dv;x)+\varepsilon^{-2}\delta a_1(x)\varphi'(u)+\varepsilon^{-2}\delta^2 a(x)\varphi'(u)+\varepsilon^{-1}\delta^2 c(x)\varphi''(u)+$$ $$ \varepsilon^{-2}\delta^2\int_{\mathbb{R}}\varepsilon^{2}\frac{v^4}{4}(\varphi''(\widetilde{u}))^2\Gamma^0(dv;x)+\varepsilon^{-2}\delta^2\frac{1}{2} c(x)(\varphi'(u))^2.$$ By the limit condition $\varepsilon^{-1}\delta\to 1$, we finally have: $$ H^{\varepsilon,\delta}_{\Gamma}(x)\varphi(u)=H_{\Gamma}(x)\varphi(u)+\varepsilon^{-1} a_1(x)\varphi'(u)+\theta^{\varepsilon,\delta}(x),$$ where $\sup\limits_{x\in E}|\theta^{\varepsilon,\delta}(x)|\to 0,
\varepsilon,\delta\to 0.$ The lemma is proved. From Lemmas 3 and 4 we obtain $$ H^{\varepsilon,\delta}_{\Gamma}(x)\varphi_\varepsilon^\delta=H_{\Gamma}(x)\varphi(u)+\varepsilon^{-1} a_1(x)\varphi'(u)+\theta_{\Gamma}^{\varepsilon,\delta}(x),$$ where $\sup\limits_{x\in E}|\theta_{\Gamma}^{\varepsilon,\delta}(x)|\to 0, \varepsilon,\delta\to 0.$ Lemma 2 is proved. From (8) and (10) we see that $$\mathbb{H}^{\varepsilon,\delta}\varphi^\delta_\varepsilon=H^{\varepsilon}_{Q}\varphi_\varepsilon^\delta(u,x)+H^{\varepsilon,\delta}_{\Gamma}(x)\varphi_\varepsilon^\delta(u,x).$$ Thus, using Lemmas 1 and 2, we obtain the asymptotic representation $$ \mathbb{H}^{\varepsilon,\delta}\varphi^\delta_\varepsilon=\varepsilon^{-1}[ Q\varphi_1+a_1(x)\varphi'(u)]+Q\varphi_2-\varphi_1Q\varphi_1+H_{\Gamma}(x)\varphi(u)+ h^{\varepsilon,\delta}(x), $$ where $h^{\varepsilon,\delta}(x)=\theta^{\varepsilon,\delta}_Q(x)+\theta^{\varepsilon,\delta}_{\Gamma}(x).$ Now we may use the solution of the singular perturbation problem for the reducibly invertible operator $Q$ (see Koroliuk and Limnios, 2005, Ch. 1): $$\begin{array}{c} Q\varphi_1+a_1(x)\varphi'(u)=0, \\ Q\varphi_2-\varphi_1Q\varphi_1+H_{\Gamma}(x)\varphi(u)= H^0\varphi(u). \end{array} $$ From the first equation we obtain $$\varphi_1(u,x)=R_0a_1(x)\varphi'(u),\hskip 5mm Q\varphi_1(u,x)=-a_1(x)\varphi'(u).$$ Substituting into the second equation, we have $$Q\varphi_2+a_1(x)R_0a_1(x)(\varphi'(u))^2+H_{\Gamma}(x)\varphi(u)= H^0\varphi(u).$$ From the solvability condition we obtain $$H^0\varphi(u)=\Pi H_{\Gamma}(x)\Pi\varphi(u)+\Pi a_1(x)R_0a_1(x)\mathbf{1}(\varphi'(u))^2,$$ where $\mathbf{1}$ is the unit vector. Now, using (11), we finally obtain (7). The negligible term $h^{\varepsilon,\delta}(x)$ may be found explicitly, using the solution of the Poisson equation (see Remark 2; for details see Koroliuk and Limnios, 2005): $$\varphi_2(u,x)=R_0\widetilde{H}(x)\varphi(u)-R_0a_1(x)R_0a_1(x)\mathbf{1}(\varphi'(u))^2, \hskip 5mm \widetilde{H}(x):=H^0-H_{\Gamma}(x).$$ The theorem is proved.
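\textbf{Remark 8.} As a consistency check, not needed for the proof, we may formally evaluate the limit generator (7) on the linear test-function $\varphi_0(u)=\lambda u$, so that $\varphi'_0(u)=\lambda$: $$H^0\varphi_0(u)=(\widetilde{a}-\widetilde{a}_0)\lambda+\frac{1}{2}\sigma^2\lambda^2+\int_{\mathbb{R}}[e^{\lambda v}-1]\widetilde{\Gamma}^0(dv).$$ This is the cumulant of a process with independent increments with drift $\widetilde{a}-\widetilde{a}_0$, diffusion coefficient $\sigma^2$ and L\'{e}vy measure $\widetilde{\Gamma}^0$, in agreement with the relation $\mathbb{H} \varphi_0(x)=a(\lambda)$ from the Introduction and with Remark 6.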
\end{document}
\begin{document} \title[Derived string topology]{ Derived string topology and the Eilenberg-Moore spectral sequence } \footnote[0]{{\it 2010 Mathematics Subject Classification}: 55P50, 55P35, 55T20 \\ {\it Key words and phrases.} String topology, Gorenstein space, differential torsion product, Eilenberg-Moore spectral sequence. Department of Mathematical Sciences, Faculty of Science, Shinshu University, Matsumoto, Nagano 390-8621, Japan e-mail:{\tt [email protected]} D\'epartement de Math\'ematiques Facult\'e des Sciences, Universit\'e d'Angers, 49045 Angers, France e-mail:{\tt [email protected]} Department of Mathematical Sciences, Faculty of Science, Shinshu University, Matsumoto, Nagano 390-8621, Japan e-mail:{\tt [email protected]} } \author{Katsuhiko KURIBAYASHI, Luc MENICHI and Takahito NAITO} \maketitle \begin{abstract} Let $M$ be any simply-connected Gorenstein space over any field. F\'elix and Thomas have extended the loop (co)products of Chas and Sullivan on the homology $H_*(LM)$ of the free loop space to simply-connected Gorenstein spaces. We describe these loop (co)products in terms of the torsion and extension functors by developing string topology in appropriate derived categories. As a consequence, we show that the Eilenberg-Moore spectral sequence converging to the loop homology of a Gorenstein space admits a multiplication and a comultiplication with shifted degree which are compatible with the loop product and the loop coproduct of its target, respectively. We also define a generalized cup product on the Hochschild cohomology $HH^*(A,A^\vee)$ of a commutative Gorenstein algebra $A$ and show that over $\mathbb{Q}$, $HH^*(A_{PL}(M),A_{PL}(M)^\vee)$ is isomorphic as an algebra to $H_*(LM)$. Thus, when $M$ is a Poincar\'e duality space, we recover the isomorphism of algebras $\mathbb{H}_*(LM;\mathbb{Q})\cong HH^*(A_{PL}(M),A_{PL}(M))$ of F\'elix and Thomas.
\end{abstract} \section{Introduction} There are several spectral sequences concerning the main players in string topology \cite{C-J-Y, C-L, LeB, S, Kuri2011}. Cohen, Jones and Yan \cite{C-J-Y} have constructed a loop algebra spectral sequence which is of the Leray-Serre type. The Moore spectral sequence converging to the Hochschild cohomology ring of a differential graded algebra is endowed with an algebra structure \cite{F-T-VP} and moreover a Batalin-Vilkovisky algebra structure \cite{Kuri2011}, which are compatible with such a structure of the target. Very recently, Shamir \cite{S} has constructed a Leray-Serre type spectral sequence converging to the Hochschild cohomology ring of a differential graded algebra. Thus, as announced by McClure~\cite[Theorem B]{McClure}, one might expect that the Eilenberg-Moore spectral sequence (EMSS), which converges to the loop homology of a closed oriented manifold and of a more general Gorenstein space, enjoys a multiplicative structure corresponding to the loop product. The class of Gorenstein spaces contains Poincar\'e duality spaces, for example closed oriented manifolds, and Borel constructions, in particular, the classifying spaces of connected Lie groups; see \cite{FHT_G, Murillo, KMnoetherian}. In \cite{F-T}, F\'elix and Thomas develop string topology on Gorenstein spaces. As seen in string topology, the shriek map (the wrong-way map) plays an important role when defining string operations. Such a map for a Gorenstein space appears in an appropriate derived category. Thus we can discuss string topology due to Chas and Sullivan in the more general setting with cofibrant replacements of the singular cochains on spaces. In the remainder of this section, our main results are surveyed. We describe explicitly the loop (co)products for a Gorenstein space in terms of the differential torsion product and the extension functors; see Theorems \ref{thm:torsion_product}, \ref{thm:torsion_coproduct} and \ref{thm:freeloopExt}.
The key idea of the consideration comes from the general setting in \cite{F-T} for defining the string operations mentioned above. Thus our description of the loop (co)product with derived functors fits {\it derived string topology}, namely the framework of string topology due to F\'elix and Thomas. Indeed, as expected, the full descriptions of the products with derived functors permit us to give the EMSS (co)multiplicative structures which are compatible with the dual to the loop (co)products of its target; see Theorem \ref{thm:EMSS}. By dualizing the EMSS, we obtain a new spectral sequence converging to the Chas-Sullivan relative loop homology algebra, with coefficients in a field ${\mathbb K}$, of a Gorenstein space $N$ over a space $M$. We observe that the $E_2$-term of the dual EMSS is represented by the Hochschild cohomology ring of $H^*(M; {\mathbb K})$ with coefficients in the shifted homology of $N$; see Theorem \ref{thm:loop_homology_ss}. It is conjectured that there is an isomorphism of graded algebras between the loop homology of $M$ and the Hochschild cohomology of the singular cochains on $M$. But over $\mathbb{F}_p$, even in the case of a simply-connected closed orientable manifold, there is no complete written proof of such an isomorphism of algebras (see~\cite[p. 237]{F-T-VP} for details). Moreover, even if we assume such an isomorphism, it is not clear that the spectral sequence obtained by filtering Hochschild cohomology is isomorphic to the dual EMSS, although these two spectral sequences have the same $E_2$- and $E_\infty$-terms. It is worth stressing that the EMSS in Theorem \ref{thm:EMSS} is applicable to each space in the wider class of Gorenstein spaces and is moreover endowed with both the loop product and the loop coproduct. Let $N$ be a simply-connected space whose cohomology is of finite dimension and is generated by a single element.
Then explicit calculations of the dual EMSS made in the sequel \cite{K-M-N2} to this paper yield that the loop homology of $N$ is isomorphic to the Hochschild cohomology of $H^*(N; {\mathbb K})$ as an algebra. This illustrates the computability of our spectral sequence in Theorem \ref{thm:loop_homology_ss}. With the aid of the torsion functor descriptions of the loop (co)products, we see that the composite $(\text{\it the loop product})\circ(\text{\it the loop coproduct})$ is trivial for a simply-connected Poincar\'e duality space; see Theorem \ref{thm:loop(co)product}. Therefore, the same argument as in the proof of \cite[Theorem A]{T} shows that if the string operations on a Poincar\'e duality space give rise to a 2-dimensional TQFT, then all operations associated to surfaces of genus at least one vanish. For a more general Gorenstein space, an obstruction for the composite to be trivial can be found in a hom-set, namely the extension functor, in an appropriate derived category; see Remark \ref{rem:obstruction}. This small but significant result also asserts an advantage of derived string topology. It is also important to mention that in the Appendices, we have paid attention to signs and extended the properties of shriek maps on Gorenstein spaces given in \cite{F-T}, in order to prove that the loop product is associative and commutative for a Poincar\'e duality space. \section{Derived string topology and main results} The goal of this section is to state our results in detail. The proofs are found in Sections 3 to 7. We begin by recalling the most prominent result on shriek maps due to F\'elix and Thomas, which supplies string topology with many homological and homotopical algebraic tools. Let ${\mathbb K}$ be a field of arbitrary characteristic. In what follows, we denote by $C^*(M)$ and $H^*(M)$ the normalized singular cochain algebra of a space $M$ with coefficients in ${\mathbb K}$ and its cohomology, respectively.
For a differential graded algebra $A$, let $\text{D}(\text{Mod-}A)$ and $\text{D}(A\text{-Mod})$ be the derived categories of right $A$-modules and left $A$-modules, respectively. Unless otherwise explicitly stated, it is assumed that a space has the homotopy type of a CW-complex whose homology with coefficients in an underlying field is of finite type. Consider a pull-back diagram ${\mathcal F}$: $$ \xymatrix@C30pt@R15pt{ X \ar[r]^{g} \ar[d]_q & E \ar[d]^p \\ N \ar[r]_f & M} $$ in which $p$ is a fibration over a simply-connected Poincar\'e duality space $M$ of dimension $m$ with the fundamental class $\omega_M$ and $N$ is a Poincar\'e duality space of dimension $n$ with the fundamental class $\omega_N$. \begin{thm}\label{thm:main_F-T}{\em (}\cite{L-SaskedbyFelix},\cite[Theorems 1 and 2]{F-T}{\em )} With the notation above there exist unique elements $$ f^! \in \text{\em Ext}^{m-n}_{C^*(M)}(C^*(N), C^*(M)) \ \ \text{and} \ \ g^! \in \text{\em Ext}^{m-n}_{C^*(E)}(C^*(X), C^*(E)) $$ such that $H^*(f^!)(\omega_N)=\omega_M$ and in $\text{D}(\text{\em Mod-}C^*(M))$, the following diagram is commutative $$ \xymatrix@C25pt@R15pt{ C^*(X) \ar[r]^(0.4){g^!} & C^{*+m-n}(E) \\ C^*(N) \ar[r]_(0.4){f^!} \ar[u]^{q^*}& C^{*+m-n}(M). \ar[u]_{p^*} } $$ \end{thm} Let $A$ be a differential graded augmented algebra over ${\mathbb K}$. We call $A$ a {\it Gorenstein algebra} of dimension $m$ if $$ \dim \text{Ext}_A^*({\mathbb K}, A) = \left\{ \begin{array}{l} 0 \ \ \text{if} \ *\neq m, \\ 1 \ \ \text{if} \ *= m. \end{array} \right. $$ A path-connected space $M$ is called a ${\mathbb K}$-{\it Gorenstein space} (simply, Gorenstein space) of dimension $m$ if the normalized singular cochain algebra $C^*(M)$ with coefficients in ${\mathbb K}$ is a Gorenstein algebra of dimension $m$. We write $\dim M$ for the dimension $m$.
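The following standard example illustrates the definition. Let $A=\Lambda(x)$ be the exterior algebra on one generator of degree $n$, with $n$ odd and zero differential, so that $A$ is quasi-isomorphic to $C^*(S^n; \mathbb{Q})$. Multiplication by $x$ gives a free resolution $$ \cdots \stackrel{\cdot x}{\longrightarrow} A \stackrel{\cdot x}{\longrightarrow} A \longrightarrow {\mathbb K} \longrightarrow 0 $$ of ${\mathbb K}$ over $A$ (internal degree shifts suppressed). Applying $\text{Hom}_A(-, A)$ yields the complex $A \stackrel{\cdot x}{\longrightarrow} A \stackrel{\cdot x}{\longrightarrow} A \longrightarrow \cdots$, which is exact except at the first spot, where the kernel of multiplication by $x$ is the one-dimensional socle ${\mathbb K}x$ concentrated in degree $n$. Hence $\text{Ext}_A^*({\mathbb K}, A)$ is one-dimensional, concentrated in total degree $n$, and $A$ is a Gorenstein algebra of dimension $n$, in accordance with the fact that $S^n$ is a Poincar\'e duality space of dimension $n$.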
The result \cite[Theorem 3.1]{FHT_G} yields that a simply-connected Poincar\'e duality space, for example a simply-connected closed orientable manifold, is Gorenstein. The classifying space $BG$ of a connected Lie group $G$ and the Borel construction $EG\times_GM$ for a simply-connected Gorenstein space $M$ with $\dim H^*(M; {\mathbb K})< \infty$ on which $G$ acts are also examples of Gorenstein spaces; see \cite{FHT_G, Murillo, KMnoetherian}. Observe that, for a closed oriented manifold $M$, $\dim M$ coincides with the ordinary dimension of $M$ and that for the classifying space $BG$ of a connected Lie group, $\dim BG= -\dim G$. Thus the dimensions of Gorenstein spaces may become negative. The following theorem enables us to generalize the above result concerning shriek maps on a Poincar\'e duality space to that on a Gorenstein space. \begin{thm}{\em(}\cite[Theorem 12]{F-T}{\em)}\label{thm:ext} Let $X$ be a simply-connected ${\mathbb K}$-Gorenstein space of dimension $m$ whose cohomology with coefficients in ${\mathbb K}$ is of finite type. Then $$ \text{\em Ext}^*_{C^*(X^n)}(C^*(X), C^*(X^n)) \cong H^{*-(n-1)m}(X), $$ where $C^*(X)$ is considered a $C^*(X^n)$-module via the diagonal map $\Delta : X \to X^n$. \end{thm} We denote by $\Delta^!$ the map in $\text{D}(\text{Mod-}C^*(X^n))$ which corresponds to a generator of $\text{Ext}^{(n-1)m}_{C^*(X^n)}(C^*(X), C^*(X^n)) \cong H^0(X)$. Then, for a Gorenstein space $X$ of dimension $m$ and a fibre square $$ \xymatrix@C25pt@R15pt{ E' \ar[r]^{g} \ar[d]_{p'} & E \ar[d]^p \\ X^{} \ar[r]_\Delta & X^{n} , } $$ there exists a unique map $g^!$ in $\text{Ext}_{C^*(E)}^{(n-1)m}(C^*(E'), C^*(E))$ which fits into the commutative diagram in $\text{D}(\text{Mod-}C^*(X^n))$ $$ \xymatrix@C25pt@R15pt{ C^*(E') \ar[r]^{g^!} & C^*(E) \\ C^*(X^{}) \ar[r]_{\Delta^!} \ar[u]^{(p')^*}& C^*(X^{n}) .
\ar[u]_{p^*} } $$ We remark that the result follows from the same proof as that of Theorem \ref{thm:main_F-T}. Let $\xymatrix@C15pt@R15pt{K & \ar[l]_{f} A \ar[r]^{g} & L}$ be a diagram in the category of differential graded algebras (henceforth called DGA's). We consider $K$ and $L$ as right and left modules over $A$ via the maps $f$ and $g$, respectively. Then the differential torsion product $\text{Tor}_A(K, L)$ is denoted by $\text{Tor}_A(K, L)_{f, g}$ when the actions are emphasized. We recall here the Eilenberg-Moore map. Consider the pull-back diagram ${\mathcal F}$ mentioned above, in which $p$ is a fibration and $M$ is a simply-connected space. Let $\varepsilon : F \to C^*(E)$ be a left semi-free resolution of $C^*(E)$ in $C^*(M)\text{-Mod}$, the category of left $C^*(M)$-modules. Then the Eilenberg-Moore map $$ EM : \text{Tor}_{C^*(M)}^*(C^*(N), C^*(E))_{f^*, p^*} =H(C^*(N)\otimes_{C^*(M)}F) \longrightarrow H^*(X) $$ is defined by $ EM(x\otimes_{C^*(M)} u) = q^*(x)\smile (g^*\varepsilon(u)) $ for $x\otimes_{C^*(M)} u \in C^*(N)\otimes_{C^*(M)}F$. Observe that in the same way, we can define the Eilenberg-Moore map by using a semi-free resolution of $C^*(N)$ as a right $C^*(M)$-module. We see that the map $EM$ is an isomorphism of graded algebras with respect to the cup products; see \cite{G-M} for example. In particular, for a simply-connected space $M$, consider the commutative diagram, $$ \xymatrix@C30pt@R15pt{ LM \ar[r]^{} \ar[d]_{ev_0} & M^I \ar[d]_{p=(ev_0,ev_1)} & M\ar[l]_{\sigma}^\simeq\ar[ld]^-{\Delta}\\ M \ar[r]_-\Delta & M\times M} $$ where $ev_i$ stands for the evaluation map at $i$ and $\sigma:M\buildrel{\simeq}\over\hookrightarrow M^I$ for the inclusion of the constant paths.
We then obtain the composite $EM' :$ $$ \xymatrix@1{ H^*(LM) & \text{Tor}_{C^*(M^{\times 2})}^*(C^*M, C^*M^I)_{\Delta^*,p^*}\ar[l]_-{EM}^-\cong \ar[r]^{\text{Tor}_{1}(1,\sigma^*)}_\cong & \text{Tor}_{C^*(M^{\times 2})}^*(C^*M, C^*M)_{\Delta^*,\Delta^*} }. $$ Our first result states that the torsion functor $ \text{Tor}_{C^*(M^{\times 2})}^*(C^*(M), C^*(M))_{\Delta^*,\Delta^*}$ admits (co)products which are compatible with $EM'$. In order to describe such a result, we first recall the definition of the loop product on a simply-connected Gorenstein space. Consider the diagram $$ \xymatrix@C25pt@R15pt{ LM \ar[d]_{ev_0} & LM\times_M LM \ar[l]_(0.6){Comp} \ar[d] \ar[r]^q & LM \times LM \ar[d]^{(ev_0,ev_1)} \\ M \ar@{=}[r] & M \ar[r]_{\Delta} & M\times M, } \eqnlabel{add-0} $$ where the right-hand square is the pull-back of the diagonal map $\Delta$, $q$ is the inclusion and $Comp$ denotes the concatenation of loops. By definition the composite $$ q^!\circ (Comp)^* : C^*(LM) \to C^*(LM\times_M LM) \to C^*(LM \times LM) $$ induces the dual to the loop product $Dlp$ on $H^*(LM)$; see \cite[Introduction]{F-T}. We see that $C^*(LM)$ and $C^*(LM\times LM)$ are $C^*(M\times M)$-modules via the maps $\Delta\circ ev_0$ and $(ev_0, ev_1)$, respectively. Moreover, since $q^!$ is a morphism of $C^*(M\times M)$-modules, it follows that so is $q^!\circ (Comp)^*$. The proof of Theorem \ref{thm:main_F-T} states that the map $q^!$ is obtained by extending the shriek map $\Delta^!$, which is given first, in the derived category $\text{D}(\text{Mod-}C^*(M\times M))$. This fact allows us to formulate $q^!$ in terms of differential torsion functors. \begin{thm} \label{thm:torsion_product} Let $M$ be a simply-connected Gorenstein space of dimension $m$.
Consider the comultiplication $\widetilde{(Dlp)}$ given by the composite $$ {\footnotesize \xymatrix@C8pt@R15pt{ \text{\em Tor}^*_{C^*(M^{2})}(C^*(M), C^*(M))_{\Delta^*, \Delta^*} \ar[r]^-{\text{\em Tor}_{p_{13}^*}(1, 1)} \ar@{.>}[dd] & \text{\em Tor}^*_{C^*(M^{3})}(C^*(M), C^*(M))_{((1\times \Delta)\circ \Delta)^*, ((1\times \Delta)\circ \Delta)^*}\\ & \text{\em Tor}^*_{C^*(M^{4})}(C^*(M), C^*(M^2))_{(\Delta^2\circ\Delta)^*, {\Delta^2}^*} \ar[u]_{\text{\em Tor}_{(1\times \Delta \times 1)^*}(1, {\Delta}^*)}^{\cong} \ar[d]^{\text{\em Tor}_{1}(\Delta^!, 1)} \\ \left(\text{\em Tor}^*_{C^*(M^{2})}(C^*(M), C^*(M))_{\Delta^*, \Delta^*}^{\otimes 2}\right)^{*+m}\ar[r]^-{\cong}_{\widetilde{\top}} & \text{\em Tor}^{*+m}_{C^*(M^{4})}(C^*(M^{2}), C^*(M^2))_{{\Delta^2}^*, {\Delta^2}^*}. } } $$ See Remark~\ref{definition generalized T-product} below for the definition of $\widetilde{\top}$. Then the composite $EM' :$ $$ \xymatrix@C15pt@R15pt{ H^*(LM)\ar[r]^-{EM^{-1}}_-\cong & \text{\em Tor}_{C^*(M^{2})}^*(C^*(M), C^*(M^I))_{\Delta^*,p^*} \ar[r]^{\text{\em Tor}_{1}(1,\sigma^*)}_\cong &\text{\em Tor}_{C^*(M^{2})}^*(C^*(M), C^*(M))_{\Delta^*,\Delta^*} }$$ is an isomorphism which respects the dual to the loop product $Dlp$ and the comultiplication $\widetilde{(Dlp)}$ defined here. \end{thm} \begin{rem}\label{definition generalized T-product} The isomorphism $\widetilde{\top}$ in Theorem~\ref{thm:torsion_product} is the canonical map defined by~\cite[p. 26]{G-M} or by~\cite[p.
255]{Mccleary} as the composite $$ {\footnotesize \xymatrix@C60pt@R20pt{ \text{Tor}^*_{C^*(M^{2})}(C^*(M), C^*(M))^{\otimes 2} \ar[r]^-\top & \text{Tor}^*_{C^*(M^{2})^{\otimes 2}}(C^*(M)^{\otimes 2}, C^*(M)^{\otimes 2})\ar[d]^{\text{Tor}_\gamma(\gamma,\gamma)}\\ \text{Tor}^*_{C^*(M^{4})}(C^*(M^{2}), C^*(M^2)) \ar[r]_-{\text{Tor}_{EZ^\vee}(EZ^\vee,EZ^\vee)}^-\cong &\text{Tor}^*_{(C_*(M^{2})^{\otimes 2})^\vee}((C_*(M)^{\otimes 2})^\vee, (C_*(M)^{\otimes 2})^\vee) } } $$ where $\top$ is the $\top$-product of Cartan-Eilenberg~\cite[XI. Proposition 1.2.1]{CartanEilenberg} or~\cite[VIII. Theorem 2.1]{MacLanehomology}, $EZ:C_*(M)^{\otimes 2}\buildrel{\simeq}\over\rightarrow C_*(M^2)$ denotes the Eilenberg-Zilber quasi-isomorphism and $\gamma:\text{Hom}(C_*(M), {\mathbb K})^{\otimes 2}\rightarrow \text{Hom}(C_*(M)^{\otimes 2}, {\mathbb K})$ is the canonical map. \end{rem} It is worth mentioning that this theorem gives an intriguing decomposition of the cup product on the Hochschild cohomology of a commutative algebra; see Lemma \ref{lem:generalizeddecompositioncup} below. The loop coproduct on a Gorenstein space is also interpreted in terms of torsion products. In order to recall the loop coproduct, we consider the commutative diagram $$ \xymatrix@C25pt@R20pt{ LM \times LM & LM\times_M LM \ar[l]_(0.5){q} \ar[d] \ar[r]^(0.6){Comp} & LM \ar[d]^l \\ & M \ar[r]_{\Delta} & M\times M, } \eqnlabel{add-2} $$ where $l : LM \to M\times M$ is the map defined by $l(\gamma)= (\gamma(0), \gamma(\frac{1}{2}))$. By definition, the composite $$ Comp^!\circ q^* : C^*(LM\times LM ) \to C^*(LM\times_M LM) \to C^*(LM) $$ induces the dual to the loop coproduct $Dlcop$ on $H^*(LM)$. Note that we apply Theorem \ref{thm:main_F-T} to (2.2) in defining the loop coproduct. On the other hand, applying Theorem \ref{thm:main_F-T} to the diagram (2.1), the loop product is defined.
\begin{thm} \label{thm:torsion_coproduct} Let $M$ be a simply-connected Gorenstein space of dimension $m$. Consider the multiplication defined by the composite $$ {\footnotesize \xymatrix@C20pt@R15pt{ \left(\text{\em Tor}^{*}_{C^*(M^{2})}(C^*(M), C^*(M))_{\Delta^*, \Delta^*}\right)^{\otimes 2} \ar@{.>}[dd] \ar[r]^\cong_{\widetilde{\top}} &\text{\em Tor}^*_{C^*(M^4)}(C^*(M^2), C^*(M^2))_{{\Delta^2}^* \!, {\Delta^2}^*}\ar[d]^{\text{\em Tor}_{1}(\Delta^*, 1)} \\ & \text{\em Tor}^*_{C^*(M^{4})}(C^*(M), C^*(M^2))_{(\Delta^2\circ\Delta)^*, {\Delta^2}^*}\ar[d]^{\text{\em Tor}_{1}(\Delta^!, 1)} \\ \text{\em Tor}^{*+m}_{C^*(M^{2})}(C^*(M), C^*(M))_{\Delta^*, \Delta^*} &\text{\em Tor}^{*+m}_{C^*(M^{4})}(C^*(M^2), C^*(M^2))_{{\gamma'}^*, {\Delta^2}^*}\ar[l]^{\text{\em Tor}_{\alpha^*}(\Delta^*, \Delta^*)}_\cong } } $$ where the maps $\alpha : M^{2} \to M^{4}$ and $\gamma' : M^{2} \to M^{4}$ are defined by $\alpha(x, y) = (x, y, y, y)$ and $\gamma'(x, y) =(x, y, y, x)$. See Remark~\ref{definition generalized T-product} above for the definition of $\widetilde{\top}$. Then the composite $EM' :$ $$ \xymatrix@C15pt@R20pt{ H^*(LM)\ar[r]^-{EM^{-1}}_-\cong & \text{\em Tor}_{C^*(M^{2})}^*(C^*(M), C^*(M^I))_{\Delta^*,p^*} \ar[r]^{\text{\em Tor}_{1}(1,\sigma^*)}_\cong &\text{\em Tor}_{C^*(M^{2})}^*(C^*(M), C^*(M))_{\Delta^*,\Delta^*} } $$ is an isomorphism which respects the dual to the loop coproduct $Dlcop$ and the multiplication defined here. \end{thm} \begin{rem}\label{rem:relative_cases} A relative version of the loop product is also of interest to us. Let $f : N \to M$ be a map.
Then by definition, the relative loop space $L_fM$ fits into the pull-back diagram $$ \xymatrix@C30pt@R20pt{ L_fM \ar[r]^{} \ar[d] & M^I \ar[d]^{(ev_0, ev_1)} \\ N \ar[r]_(0.4){(f, f)} & M\times M , } $$ where $ev_t$ denotes the evaluation map at $t$. We may write $L_NM$ for the relative loop space $L_fM$ in case there is no danger of confusion. Suppose further that $M$ is simply-connected and has a base point. Let $N$ be a simply-connected Gorenstein space. Then the diagram $$ \xymatrix@C25pt@R20pt{ L_NM & L_NM\times_N L_NM \ar[l]_(0.6){Comp} \ar[d] \ar[r]^q & L_NM \times L_NM \ar[d]^{(ev_0,ev_1)} \\ & N \ar[r]_{\Delta} & N\times N} $$ gives rise to the composite $$ q^!\circ (Comp)^* : C^*(L_NM) \to C^*(L_NM\times_N L_NM) \to C^*(L_NM \times L_NM) $$ which, by definition, induces the dual to the relative loop product $Drlp$ on the cohomology $H^*(L_NM)$ with degree $\dim N$; see \cite{F-T-VP,G-S} for the case where $N$ is a smooth manifold. Since the diagram above corresponds to the diagram (2.1), the proof of Theorem \ref{thm:torsion_product} permits one to conclude that $Drlp$ also has the same description as in Theorem \ref{thm:torsion_product}, where $C^*(N)$ is put instead of $C^*(M)$ in the left-hand variables of the torsion functors in the theorem. As for the loop coproduct, we cannot define its relative version in a natural way because of the evaluation map $l$ of loops at $\frac{1}{2}$; see the diagram (2.2). Indeed, the point $\gamma(\frac{1}{2})$ for a loop $\gamma$ in $L_NM$ is not necessarily in $N$. \end{rem} The associativity of $Dlp$ and $Dlcop$ on a Gorenstein space is an important issue. We describe here an algebra structure on the shifted homology $H_{-*+d}(L_NM)=(H^*(L_NM)^\vee)^{*-d}$ of a simply-connected Poincar\'e duality space $N$ of dimension $d$ with a map $f : N \to M$ to a simply-connected space.
We define a map $m : H_*(L_NM)\otimes H_*(L_NM) \to H_*(L_NM)$ of degree $d$ by $$ m(a\otimes b) = (-1)^{d(|a|+d)}((Drlp)^\vee)(a\otimes b) $$ for $a$ and $b \in H_*(L_NM)$; see \cite[sign of Proposition 4]{C-J-Y} or~\cite[Definition 3.2]{Tamanoi:cap products}. Moreover, put ${\mathbb H}_{*}(L_NM)= H_{*+d}(L_NM)$. Then we establish the following proposition. \begin{prop}\label{prop:loop_homology} Let $N$ be a simply-connected Poincar\'e duality space. Then the shifted homology ${\mathbb H}_{*}(L_NM)$ is an associative algebra with respect to the product $m$. Moreover, if $M=N$, then the shifted homology ${\mathbb H}_{*}(LM)$ is graded commutative. \end{prop} As mentioned below, the loop product on $L_NM$ is not commutative in general. We call a bigraded vector space $V$ a {\it bimagma} with shifted degree $(i, j)$ if $V$ is endowed with a multiplication $V\otimes V \to V$ and a comultiplication $V \to V\otimes V$ of degree $(i, j)$. Let $K$ and $L$ be objects in $\text{Mod-}A$ and $A\text{-Mod}$, respectively. Consider a torsion product of the form $\text{Tor}_{A}(K, L)$, which is the homology of the derived tensor product $K\otimes_A^{{\mathbb L}}L$. The external degree of the bar resolution of the second variable $L$ filters the torsion product. Indeed, we can regard the torsion product $\text{Tor}_{A}(K, L)$ as the homology $H(K\otimes_AB(A, A, L))$ with the bar resolution $B(A, A, L)\to L$ of $L$. Then the filtration ${\mathcal F}=\{F^p\text{Tor}_{A}(K, L)\}_{p\leq 0}$ of the torsion product is defined by $$ F^p\text{Tor}_{A}(K, L) = \text{Im} \{i^* : H(K\otimes_AB^{\leq p}(A, A, L)) \to \text{Tor}_{A}(K, L)\}. $$ Thus the filtration ${\mathcal F}=\{F^p\text{Tor}_{C^*(M^2)}(C^*(M), C^*(M^I))\}_{p\leq 0}$ induces a filtration of $H^*(LM)$ via the Eilenberg-Moore map for a simply-connected space $M$.
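We record the standard identification which this filtration produces over the field ${\mathbb K}$; see \cite{G-M} or \cite{Mccleary}. Since ${\mathbb K}$ is a field, the K\"unneth theorem identifies the $E_1$-term of the associated spectral sequence with the bar complex of the cohomology algebras: $$ E_1^{-p, *} \cong H(K)\otimes \bar{H}(A)^{\otimes p}\otimes H(L), \hskip 5mm E_2^{-p, *} \cong \text{Tor}^{-p,*}_{H(A)}(H(K), H(L)), $$ where $\bar{H}(A)$ denotes the augmentation ideal of $H(A)$ and $d_1$ is the bar differential. In particular, for $K=L=C^*(M)$ and $A = C^*(M^{2})$, the $E_2$-term is $\text{Tor}_{H^*(M^{2})}(H^*(M), H^*(M)) \cong HH^{*,*}(H^*(M); H^*(M))$, the Hochschild cohomology which appears in Theorem \ref{thm:loop_homology_ss}.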
By adapting the differential torsion functor descriptions of the loop (co)products in Theorems \ref{thm:torsion_product} and \ref{thm:torsion_coproduct}, we can give the EMSS a bimagma structure. \begin{thm} \label{thm:EMSS} Let $M$ be a simply-connected Gorenstein space of dimension $d$. Then the Eilenberg-Moore spectral sequence $\{E_r^{*,*}, d_r\}$ converging to $H^*(LM; {\mathbb K})$ admits loop (co)products which are compatible with those in the target; that is, each term $E_r^{*,*}$ is endowed with a comultiplication $Dlp_r : E_r^{p,q} \to \oplus_{s+s' =p, t+t'=q+d}E_r^{s, t}\otimes E_r^{s',t'}$ and a multiplication $Dlcop_r : E_r^{s,t}\otimes E_r^{s', t'} \to E_r^{s+s', t+t'+d}$ which are compatible with the differentials in the sense that $$Dlp_r d_r = (-1)^d(d_r\otimes 1 + 1\otimes d_r)Dlp_r \ \ \text{and} \ \ Dlcop_r(d_r\otimes 1+ 1\otimes d_r)=(-1)^dd_rDlcop_r. $$ Here $(d_r\otimes 1 + 1\otimes d_r)(a\otimes b)$ means $d_ra\otimes b +(-1)^{p+q}a\otimes d_rb$ if $a\in E_r^{p, q}$. Note the unusual sign $(-1)^d$. Moreover the $E_\infty$-term $E_\infty^{*,*}$ is isomorphic to $\text{\em Gr}H^*(LM; {\mathbb K})$ as a bimagma with shifted degree $(0, d)$. \end{thm} If the dimension of the Gorenstein space is non-positive, unfortunately the loop product and the loop coproduct in the EMSS are trivial, and the only information that Theorem~\ref{thm:EMSS} gives is the following corollary. \begin{cor}\label{cor:trivial coproduct in EMSS} Let $M$ be a simply-connected Gorenstein space of dimension $d$. Assume that $d$ is negative or that $d$ is zero and $H^*(M)$ is not concentrated in degree $0$. Consider the filtration given by the cohomological Eilenberg-Moore spectral sequence converging to $H^*(LM; {\mathbb K})$. Then the dual to the loop product and the dual to the loop coproduct both increase the filtration degree of $H^*(LM)$ by at least one. \end{cor} \begin{rem} a) Let $M$ be a simply-connected closed oriented manifold.
We can choose a map $\Delta^! : C^*(M) \to C^*(M\times M)$ so that $H(\Delta^!)(\omega_M) =\omega_{M\times M}$; that is, $\Delta^!$ is the usual shriek map at the cochain level. Then the maps $Dlp$ and $Dlcop$ coincide with the dual to the loop product and to the loop coproduct in the sense of Chas and Sullivan \cite{C-S}, and of Cohen and Godin \cite{C-G}, respectively. Indeed, this fact follows from the uniqueness of the shriek map and the comments in the three paragraphs at the end of \cite[p. 421]{F-T}. Thus the Eilenberg-Moore spectral sequence in Theorem \ref{thm:EMSS} converges to $H^*(LM; {\mathbb K})$ as an algebra and a coalgebra. b) Let $M$ be the classifying space $BG$ of a connected Lie group $G$. Since the homotopy fibre of $\Delta : BG \to BG \times BG$ in (2.1) and (2.2) is homotopy equivalent to $G$, we can choose the shriek map $\Delta^!$ described in Theorems \ref{thm:torsion_coproduct} and \ref{thm:torsion_product} as the integration along the fibre. Thus $q^!$ also coincides with the integration along the fibre; see \cite[Theorems 6 and 13]{F-T}. This yields that the bimagma structure on $\text{Gr}H^*(LBG; {\mathbb K})$ is induced by the loop product and coproduct in the sense of Chataur and Menichi \cite{C-M}. c) Let $M$ be the Borel construction $EG\times_G X$ of a connected compact Lie group $G$ acting on a simply-connected closed oriented manifold $X$. In~\cite{BGNX}, Behrend, Ginot, Noohi and Xu defined a loop product and a loop coproduct on the homology $H_*(L\frak{X})$ of the free loop stack of a stack $\frak{X}$. Their main example of a stack is the quotient stack $[X/G]$ associated to a connected compact Lie group $G$ acting smoothly on a closed oriented manifold $X$. Although F\'elix and Thomas did not prove it, we believe that their loop (co)products for the Gorenstein space $M=EG\times_G X$ coincide with the loop (co)products for the quotient stack $[X/G]$ of~\cite{BGNX}.
\end{rem} The following theorem is the main result of this paper. \begin{thm} \label{thm:loop_homology_ss} Let $N$ be a simply-connected Gorenstein space of dimension $d$. Let $f : N\rightarrow M$ be a continuous map to a simply-connected space $M$. Then the Eilenberg-Moore spectral sequence is a right-half plane cohomological spectral sequence $\{{\mathbb E}_r^{*,*}, d_r\}$ converging to the Chas-Sullivan loop homology ${\mathbb H}_*(L_NM)$ as an algebra with $$ {\mathbb E}_2^{*,*} \cong HH^{*, *}(H^*(M); {\mathbb H}_*(N)) $$ as a bigraded algebra; that is, there exists a decreasing filtration $\{F^p{\mathbb H}_*(L_N M)\}_{p\geq 0}$ of $({\mathbb H}_*(L_N M),m)$ such that ${\mathbb E}_\infty^{*,*} \cong Gr^{*,*}{\mathbb H}_*(L_N M)$ as a bigraded algebra, where $$ Gr^{p,q}{\mathbb H}_*(L_N M) = F^p{\mathbb H}_{-(p+q)}(L_NM)/F^{p+1}{\mathbb H}_{-(p+q)}(L_N M). $$ Here the product on the $\mathbb{E}_2$-term is the cup product (see Definition~\ref{cup product Hochschild} (1)) induced by $$ (-1)^d\overline{H(\Delta^{!})^\vee}:\mathbb{H}_*(N)\otimes_{H^{*}(M)} \mathbb{H}_*(N)\rightarrow \mathbb{H}_*(N). $$ Suppose further that $N$ is a Poincar\'e duality space. Then the $\mathbb{E}_2$-term is isomorphic to the Hochschild cohomology $HH^{*, *}(H^*(M); {H}^*(N))$ with the cup product as an algebra. \end{thm} Taking $N$ to be the point, we obtain the following well-known corollary. \begin{cor} {\em (}cf. \cite[Corollary 7.19]{Mccleary}{\em )} Let $M$ be a pointed topological space. Then the Eilenberg-Moore spectral sequence $E_2^{*,*}=\text{\em Ext}^{*,*}_{H^*(M)}({\mathbb K}, {\mathbb K})$ converging to $H_*(\Omega M)$ is a spectral sequence of algebras with respect to the Pontryagin product. \end{cor} When $M=N$ is a closed manifold, Theorem 2.11 has been announced by McClure in~\cite[Theorem B]{McClure}. But the proof has not appeared.
Moreover, McClure claimed that when $M=N$, the Eilenberg-Moore spectral sequence is a spectral sequence of BV-algebras. We have not yet been able to prove this very interesting claim. We summarize here the spectral sequences converging to the loop homology and to the Hochschild cohomology of the singular cochains on a space, which are mentioned at the beginning of the Introduction. \noindent \ \ \ {\small \begin{tabular}{|l|l|} \hline The homological Leray-Serre type & The cohomological Eilenberg-Moore type \\ \hline $E_{-p,q}^2=H^p(M; H_q(\Omega M))$ & $E^{p,q}_2=HH^{p,q}(H^*(M); H^*(M))$ \\ \ \ \ \ \ \ \ \ $ \Rightarrow {\mathbb H}_{-p+q}(LM)$ as an algebra, & \ \ \ \ \ \ $\Rightarrow {\mathbb H}_{-p-q}(LM)$ as an algebra, \\ where $M$ is a simply-connected closed & where $M$ is a simply-connected Poincar\'e \\ oriented manifold; see \cite{C-J-Y}. & duality space; see Theorem \ref{thm:loop_homology_ss}. \\ \hline $E_{p,q}^2=H^{-p}(M)\otimes \text{Ext}_{C^*(M)}^{-q}({\mathbb K}, {\mathbb K})$ & $E^{p,q}_2=HH^{p,q}(H^*(M); H^*(M))$ \\ \ \ \ \ \ \ $\Rightarrow HH^{-p-q}(C^*(M); C^*(M))$ & \ \ \ \ \ \ $\Rightarrow HH^{p+q}(C^*(M); C^*(M))$ \\ as an algebra, where $M$ is a simply- & as a B-V algebra, where $M$ is a simply- \\ connected space whose cohomology is & connected Poincar\'e duality space; see \cite{Kuri2011}. \\ locally finite; see \cite{S}. & \\ \hline \end{tabular} } \noindent Observe that each spectral sequence in the table above converges strongly to its target. It is important to remark that, for a fibration $N \to X \to M$ of closed orientable manifolds, Le Borgne \cite{LeB} has constructed a spectral sequence converging to the loop homology ${\mathbb H}_*(LX)$ as an algebra with $E_2 \cong {\mathbb H}_*(LM)\otimes {\mathbb H}_*(LN)$ under an appropriate assumption; see also \cite{C-L} for applications of the spectral sequence. We refer the reader to \cite{Me} for spectral sequences concerning a generalized homology theory in string topology.
We focus on a global nature of the loop (co)product. Drawing on the torsion functor descriptions of the loop product and the loop coproduct mentioned in Theorems \ref{thm:torsion_product} and \ref{thm:torsion_coproduct}, we have the following result. \begin{thm} \label{thm:loop(co)product} Let $M$ be a simply-connected Poincar\'e duality space. Then the composite $(\text{the loop product})\circ(\text{the loop coproduct}) $ is trivial. \end{thm} When $M$ is a connected closed oriented manifold, the triviality of this composite was first proved by Tamanoi~\cite[Theorem A]{T}. Tamanoi has also shown that this composite is trivial when $M$ is the classifying space $BG$ of a connected Lie group $G$~\cite[Theorem 4.4]{Tamanoi:stabletrivial}. We are aware that the description of the loop coproduct in Theorem \ref{thm:torsion_coproduct} has no {\it opposite arrow} such as $\text{Tor}_{(1\times \Delta \times 1)^*}(1, \Delta^*)$ in Theorem \ref{thm:torsion_product}. This is a key to the proof of Theorem \ref{thm:loop(co)product}. Though we have not yet obtained the same result as Theorem \ref{thm:loop(co)product} on a more general Gorenstein space, some obstruction for the composite to be trivial is described in Remark \ref{rem:obstruction}. We may describe the loop product in terms of the extension functor. \begin{thm}\label{thm:freeloopExt} Let $M$ be a simply-connected Poincar\'e duality space.
Consider the multiplication defined by the composite $$ {\footnotesize \xymatrix@C20pt@R20pt{ \text{\em Ext}^*_{C^*(M^{2})}(C^*(M), C^*(M))_{\Delta^*, \Delta^*}^{\otimes 2}\ar[r]_-{\cong}^-{\widetilde{\vee}}\ar@{.>}[dd] &\text{\em Ext}^{*}_{C^*(M^{4})}(C^*(M^{2}), C^*(M^2))_{{\Delta^2}^*, {\Delta^2}^*}\ar[d]^{\text{\em Ext}_{1}(1,\Delta^*)}\\ &\text{\em Ext}^*_{C^*(M^{4})}(C^*(M^2), C^*(M))_{{\Delta^2}^*,(\Delta^2\circ\Delta)^*}\\ \text{\em Ext}^*_{C^*(M^{2})}(C^*(M), C^*(M))_{\Delta^*, \Delta^*} &\text{\em Ext}^*_{C^*(M^{3})}(C^*(M), C^*(M))_{((1\times \Delta)\circ \Delta)^*, ((1\times \Delta)\circ \Delta)^*}. \ar[u]_{\text{\em Ext}_{(1\times \Delta \times 1)^*}({\Delta}^*,1)}^{\cong} \ar[l]^-{\text{\em Ext}_{p_{13}^*}(1, 1)} } } $$ See Remark~\ref{definition generalized V-product} below for the definition of $\widetilde{\vee}$. The cap product with a representative $\sigma$ of the fundamental class $[M]\in H_m(M)$ gives a quasi-isomorphism of right-$C^*(M)$-modules of upper degree $-m$, $$\sigma \cap \text{--} : C^*(M)\buildrel{\simeq}\over\rightarrow C_{m-*}(M), x\mapsto \sigma\cap x.$$ Let $\Phi:H^{*+m}(LM)\buildrel{\cong}\over\rightarrow \text{\em Tor}_{C^*(M^{\times 2})}^*(C_*(M), C^*(M))$ be the composite of the isomorphisms $$ {\footnotesize \xymatrix@C15pt@R25pt{ H^{p+m}(LM)\ar[r]^-{EM^{-1}}_-\cong & \text{\em Tor}_{C^*(M^{2})}^{p+m}(C^*(M), C^*(M^I))_{\Delta^*,p^*} \ar[r]^{\text{\em Tor}_{1}(1,\sigma^*)}_\cong &\text{\em Tor}_{C^*(M^{2})}^{p+m}(C^*(M), C^*(M))_{\Delta^*,\Delta^*}\ar[d]^{\text{\em Tor}_{1}(\sigma \cap \text{--},1)}_\cong\\ &&\text{\em Tor}_{C^*(M^{2})}^p(C_*(M), C^*(M)).
} } $$ Then the dual of $\Phi$, $\Phi^\vee: \text{\em Ext}^{-p}_{C^*(M^{2})}(C^*M, C^*M)_{\Delta^*, \Delta^*}\rightarrow H_{p+m}(LM)$, is an isomorphism which respects the multiplication defined here and the loop product. \end{thm} \begin{rem}\label{definition generalized V-product} The isomorphism $\widetilde{\vee}$ in Theorem~\ref{thm:freeloopExt} is the composite $$ {\footnotesize \xymatrix@C15pt@R25pt{ \text{Ext}^*_{C^*(M^{2})}(C^*M, C^*M)^{\otimes 2} \ar[r]^-\vee & \text{Ext}^*_{C^*(M^{2})^{\otimes 2}}(C^*(M)^{\otimes 2}, C^*(M)^{\otimes 2})\ar[d]^{\text{Ext}_1(1,\gamma)}\\ \text{Ext}^*_{(C_*(M^{2})^{\otimes 2})^\vee}((C_*(M)^{\otimes 2})^\vee, (C_*(M)^{\otimes 2})^\vee) \ar[r]_-{\text{Ext}_\gamma(\gamma,1)}^-\cong \ar[d]_{\text{Ext}_{EZ^\vee}(EZ^\vee,1)}^-\cong &\text{Ext}^*_{C^*(M^{2})^{\otimes 2}}(C^*(M)^{\otimes 2}, (C_*(M)^{\otimes 2})^\vee)\\ \text{Ext}^*_{C^*(M^{4})}(C^*(M^{2}), (C_*(M)^{\otimes 2})^\vee) &\text{Ext}^*_{C^*(M^{4})}(C^*(M^{2}), C^*(M^2)) \ar[l]_{\text{Ext}_{1}(1,EZ^\vee)}^-\cong } } $$ where $\vee$ is the $\vee$-product of Cartan-Eilenberg~\cite[XI. Proposition 1.2.3]{CartanEilenberg} or~\cite[VIII. Theorem 4.2]{MacLanehomology}, $EZ:C_*(M)^{\otimes 2}\buildrel{\simeq}\over\rightarrow C_*(M^2)$ denotes the Eilenberg-Zilber quasi-isomorphism and $\gamma:\text{Hom}(C_*(M), {\mathbb K})^{\otimes 2}\rightarrow \text{Hom}(C_*(M)^{\otimes 2}, {\mathbb K})$ is the canonical map. \end{rem} \begin{rem} We believe that the multiplication on $\text{Ext}^*_{C^*(M^{2})}(C^*(M), C^*(M))_{\Delta^*, \Delta^*}$ defined in Theorem~\ref{thm:freeloopExt} coincides with the Yoneda product. \end{rem} Denote by $A(M)$ the functorial commutative differential graded algebra $A_{PL}(M)$; see~\cite[Corollary 10.10]{F-H-T}. Let $\varphi:A(M)^{\otimes 2}\buildrel{\simeq}\over\rightarrow A(M^2)$ be the quasi-isomorphism of algebras given by~\cite[Example 2, p. 142-3]{F-H-T}.
Note that the composite $\Delta^*\circ\varphi$ coincides with the multiplication of $A(M)$. Note also that we have an Eilenberg-Moore isomorphism $EM$ for the functor $A(M)$; see \cite[Theorem 7.10]{F-H-T}. Replacing the singular cochains over the rationals $C^*(M;\mathbb{Q})$ by the commutative algebra $A_{PL}(M)$ in Theorem~\ref{thm:torsion_product}, we obtain the following theorem. \begin{thm}{\em (}Compare with~\cite{F-T:rationalBV}{\em )}\label{rational iso of Felix-Thomas Gorenstein} Let $N$ be a simply-connected Gorenstein space of dimension $n$ and $N\rightarrow M$ a continuous map to a simply-connected space $M$. Let $\Phi$ be the map given by the commutative square $$ \xymatrix@C25pt@R25pt{ H^{p+n}(A(L_N M))\ar@{.>}[d]_\Phi & \text{\em Tor}^{A(M^{2})}_{-p-n}(A(N), A(M^I))_{\Delta^*,p^*} \ar[d]^{\text{Tor}^{1}(1,\sigma^*)}_\cong\ar[l]_-{EM}^-\cong\\ HH_{-p-n}(A(M),A(N))\ar[r]^-{\text{\em Tor}^{\varphi}(1,1)}_-\cong &\text{\em Tor}^{A(M^{2})}_{-p-n}(A(N), A(M))_{\Delta^*,\Delta^*}. } $$ Then the dual $ \xymatrix@1{ HH^{-p-n}(A(M),A(N)^\vee)\ar[r]^-{\Phi^\vee} & H_{p+n}(L_NM;\mathbb{Q}) } $ to $\Phi$ is an isomorphism of graded algebras with respect to the loop product $Dlp^\vee$ and the generalized cup product on Hochschild cohomology induced by $(\Delta_{A(N)})^\vee:A(N)^\vee\otimes A(N)^\vee\rightarrow A(N)^\vee$ (see Example~\ref{generalized cup product Gorenstein}). \end{thm} \begin{cor}\label{rational iso of Felix-Thomas Poincare} Let $N$ be a simply-connected Poincar\'e duality space of dimension $n$. Let $N\rightarrow M$ be a continuous map to a simply-connected space $M$.
Then $HH^{-p}(A(M),A(N))$ is isomorphic as a graded algebra to $H_{p+n}(L_NM;\mathbb{Q})$ with respect to the loop product $Dlp^\vee$ and the cup product on Hochschild cohomology induced by the morphism of algebras $A(f):A(M)\rightarrow A(N)$ (see Remark~\ref{cup product of an algebra morphism} and Definition~\ref{cup product Hochschild}(1)). \end{cor} \begin{rem} When $M=N$ is a Poincar\'e duality space, such an isomorphism of algebras between Hochschild cohomology and Chas-Sullivan loop space homology was first proved in \cite{F-T:rationalBV} (see also~\cite{Merkulov} over $\mathbb{R}$). But here our isomorphism is explicit, since we do not use the Poincar\'e duality DGA model for $A(M)$ given by~\cite{L-S}. In fact, as explained in~\cite{F-T:rationalBV}, such an isomorphism is an isomorphism of BV-algebras, since $\Phi$ is compatible with the circle action and the Connes boundary map. Here the BV-algebra structure on $HH^{*}(A(M),A(M))$ is given by~\cite[Theorem 18 or Proof of Corollary 20]{Menichi_BV_Hochschild}. \end{rem} In the forthcoming paper \cite{K-L}, we discuss the loop (co)products on the classifying space $BG$ of a Lie group $G$ by looking at the integration along the fibre $(Comp)^! : H^*(LBG\times_{BG} LBG) \to H^*(LBG)$ of the homotopy fibration $G \to LBG\times_{BG}LBG \to BG$. In a sequel \cite{KMnoetherian}, we intend to investigate duality on extension groups of the (co)chain complexes of spaces. Such a discussion enables one to deduce that Noetherian H-spaces are Gorenstein. In addition, the loop homology of a Noetherian H-space is considered there. The rest of this paper is organized as follows. Section 3 is devoted to proving Theorems \ref{thm:torsion_product}, \ref{thm:torsion_coproduct}, \ref{thm:EMSS} and Corollary \ref{cor:trivial coproduct in EMSS}. Theorem \ref{thm:loop(co)product} is proved in Section 4. In Section 5, we recall the generalized cup product on the Hochschild cohomology defined by an appropriate shriek map.
Section 6 proves Theorems \ref{thm:loop_homology_ss}, \ref{thm:freeloopExt} and \ref{rational iso of Felix-Thomas Gorenstein} and Corollary \ref{rational iso of Felix-Thomas Poincare}. We prove Proposition \ref{prop:loop_homology} and discuss the associativity and commutativity of the loop product on a Poincar\'e duality space in Section 7. In the last three sections, which form an appendix, shriek maps on Gorenstein spaces are considered and their important properties, which we use in the body of the paper, are described. \section{Proofs of Theorems \ref{thm:torsion_product}, \ref{thm:torsion_coproduct} and \ref{thm:EMSS}} In order to prove Theorem \ref{thm:torsion_product}, we consider two commutative diagrams $$ \xymatrix@C7pt@R10pt{ & LM \times LM \ar[rr]^{i} \ar@{->}'[d]^{p\times p}[dd] & & M^I\times M^I \ar[dd]^{p\times p=p^2} \\ LM\times _MLM \ar[rr]_(0.6){j} \ar[ru]^{q} \ar[dd] & & M^I\times_MM^I \ar[ru]_{\widetilde{q}} \ar[dd]^(0.3){(ev_0, ev_1, ev_1)=u} & \\ & M\times M \ar@{->}'[r]^(0.6){\Delta \times \Delta}[rr] & & M^{4} \\ M \ar[rr]_{(1\times \Delta)\circ \Delta=v} \ar[ur]^{\Delta} & & M^{3} \ar[ru]_{1\times \Delta \times 1=w} & } \eqnlabel{add-0} $$ and $$ \xymatrix@C7pt@R10pt{ & LM \times_M LM \ar[ld]_(0.7){Comp} \ar@{->}'[d][dd] \ar[rr]^{j} & & M^I\times_M M^I \ar[ld]^{Comp=c} \ar[dd]^{(ev_0, ev_1=ev_0, ev_1)=u} \\ LM \ar[rr]^(0.6){k} \ar[dd]_p & & M^I \ar[dd]^(0.3)p & \\ & M \ar@{->}'[r]^(0.6){(1\times \Delta)\circ \Delta}[rr] \ar@{=}[ld]& & M^{3} \ar[ld]_{p_{13}}\\ M \ar[rr]_\Delta & & M\times M & } \eqnlabel{add-1} $$ in which the front and back squares are pull-back diagrams. Observe that the left and right hand side squares in (3.1) are also pull-back diagrams. Here $\Delta$ and $k$ denote the diagonal map and the inclusion, respectively.
Moreover $p_{13} : M^3 \to M^2$ is the projection defined by $p_{13}(x, y, z) =(x, z)$ and $Comp : LM\times_MLM \to LM$ stands for the concatenation of loops. The cube (3.2) first appeared in~\cite[p. 320]{F-T:rationalBV}. \begin{proof}[Proof of Theorem \ref{thm:torsion_product}] Consider the diagram $$ \xymatrix@C10pt@R8pt{ \text{Tor}^*_{C^*(M^{2})}(C^*(M), C^*(M^I))_{\Delta^*, p^*} \ar[r]^(0.45){\text{Tor}_{p_{13}^*}(1, c^*)} \ar[d]^(0.6){\cong}_(0.6){EM} & \text{Tor}^*_{C^*(M^{3})}(C^*(M), C^*(M^I\times_M M^I))_{v^*, u^*} \ar@/_/[ldd]_(0.6){EM_1}^(0.6){\cong} \\ H^*(LM) \ar[d]_{Comp^*} \ar@/_5pc/[dd]_{Dlp}& \text{Tor}^*_{C^*(M^{4})}(C^*(M), C^*(M^I\times M^I))_{(wv)^*, {p^2}^*} \ar[u]_{\text{Tor}_{w^*}(1, {\widetilde{q}}^*)}^{\cong} \ar[d]^{\text{Tor}_{1}(\Delta^!, 1)} \ar[ld]^{EM_2}_{\cong}\\ H^*(LM\times_M LM) \ar[d]_{H(q^!)} & \text{Tor}^{*+m}_{C^*(M^{4})}(C^*(M^{2}), C^*(M^I\times M^I))_{{\Delta^2}^*, {p^2}^*} \ar[dl]^(0.6){\cong}_(0.6){EM_3} \\ H^{*+m}(LM\times LM) } \eqnlabel{add-2} $$ \noindent where $EM$ and $EM_i$ denote the Eilenberg-Moore maps. The diagram (3.2) is a morphism of pull-backs from the back face to the front face. Therefore the naturality of the Eilenberg-Moore map yields that the upper-left triangle is commutative. We now consider the front square and the right-hand side square in the diagram (3.1). These squares are pull-back diagrams and hence we have a large pull-back diagram connecting them. Therefore the naturality of the Eilenberg-Moore map shows that the triangle in the center of the diagram (3.3) is commutative. It follows in particular that the map $\text{Tor}_{w^*}(1, {\widetilde{q}}^*)$ is an isomorphism. Let $\varepsilon:{\mathbb B}\buildrel{\simeq}\over\rightarrow C^*(M)$ be a right $C^*(M^2)$-semifree resolution of $C^*(M)$. By~\cite[Proof of Theorem 2 or Remark p. 429]{F-T}, the following square is commutative in the derived category of right $C^*(LM\times LM)$-modules.
$$ \xymatrix{ C^*(LM\times_M LM)\ar[r]^{q^!} & C^*(LM\times LM) \\ {\mathbb B}\otimes_{C^*(M^2)}C^*(LM\times LM) \ar[u]^{EM_4}_{\simeq} \ar[r]_-{\Delta^! \otimes 1_{C^*(LM\times LM) }} & C^*(M^2)\otimes_{C^*(M^2)}C^*(LM\times LM) \ar@{=}[u] ^{EM_5} } $$ Taking homology, we see that the top square in the following diagram commutes. $$ {\footnotesize \xymatrix@C18pt@R25pt{ H^*(LM\times_M LM)\ar[r]^{H^*(q^!)} & H^{*+m}(LM\times LM) \\ \text{Tor}^*_{C^*(M^2)}(C^*(M),C^*(LM\times LM)) \ar[u]^{EM_4}_{\cong} \ar[r]_(0.45){\text{Tor}_{1}(\Delta^!, 1)} & \text{Tor}^{*+m}_{C^*(M^2)}(C^*(M^2),C^*(LM\times LM)) \ar@{=}[u]^{EM_5} \\ \text{Tor}^*_{C^*(M^{4})}(C^*(M), C^*(M^I\times M^I)) \ar[r]_(0.45){\text{Tor}_{1}(\Delta^!, 1)} \ar[u]_{\text{Tor}^*_{(\Delta\times\Delta)^*}(1,i^*)} \ar@/^8pc/[uu]^(0.7){EM_2} & \text{Tor}^{*+m}_{C^*(M^{4})}(C^*(M^2), C^*(M^I\times M^I)) \ar[u]^{\text{Tor}^*_{(\Delta\times\Delta)^*}(1,i^*)}\ar@/_8pc/[uu]_(0.7){EM_3} } } \eqnlabel{add-3} $$ The bottom square commutes obviously. We now consider the left-hand square and the back square in the diagram (3.1). These squares are pull-back diagrams and hence we have a large pull-back diagram connecting them. Therefore the naturality of the Eilenberg-Moore map shows that the left-hand side in (3.4) is commutative. The same argument, or the definition of the Eilenberg-Moore map, shows that the right-hand side in (3.4) is commutative. So finally, the lower square in (3.3) is commutative. The usual proof~\cite[p.
26]{G-M} that the Eilenberg-Moore isomorphism $EM$ is an isomorphism of algebras with respect to the cup product gives the following commutative square $$ \xymatrix@C15pt@R10pt{ H^*(LM)^{\otimes 2}\ar[d]^\cong_\times & \text{Tor}^{*}_{C^*(M^{2})}(C^*(M), C^*(M^I))_{{\Delta}^*, {p}^*}^{\otimes 2} \ar[l]_-{\cong}^-{EM^{\otimes 2}}\ar[d]_\cong^{\widetilde{\top}}\\ H^{*}(LM\times LM) &\text{Tor}^{*}_{C^*(M^{4})}(C^*(M^{2}), C^*(M^I\times M^I))_{{\Delta^2}^*, {p^2}^*} \ar[l]^-{\cong}_-{EM_3}. } $$ This square is the top square in \cite[p. 255]{Mccleary}. Consider the commutative diagram of spaces in which the three composites of the vertical morphisms are the diagonal maps $$ \xymatrix@C25pt@R10pt{ M\ar[d]_\sigma^\simeq\ar@/_3pc/[dd]_{\Delta} \ar@{=}[r] &M\ar[r]^\Delta\ar[d]_{\sigma'}^\simeq & M\times M\ar[d]_{\sigma\times\sigma}^\simeq\ar@/^3pc/[dd]^{\Delta\times\Delta}\\ M^I\ar[d]_p &M^I\times_M M^I\ar[r]_{\widetilde{q}}\ar[d]^u\ar[l]^-c & M^I\times M^I\ar[d]_{p\times p}\\ M\times M &M^3\ar[r]_w\ar[l]^{p_{13}} & M^4. } $$ Using the homotopy equivalences $\sigma$, $\sigma'$ and $\sigma\times\sigma$, we obtain the result. \end{proof} We now decompose the maps which induce the loop coproduct via pull-back diagrams. Let $l : LM \to M\times M$ be the map defined by $l(\gamma)= (\gamma(0), \gamma(\frac{1}{2}))$. We define a map $\varphi :LM \to LM$ by $\varphi(\gamma)(t)=\gamma(2t)$ for $0\leq t \leq\frac{1}{2}$ and $\varphi(\gamma)(t)=\gamma(1)$ for $\frac{1}{2} \leq t \leq 1$. Then $\varphi$ is homotopic to the identity map and fits into the commutative diagram $$ \xymatrix@C10pt@R5pt{ & LM \ar[ld]_{\varphi}^{\simeq} \ar@{->}'[d][dd]^(0.4){ev_0} \ar[rr]^{j} & & M^I \ar[ld]^(0.4){\beta} \ar[dd]^{(ev_0, ev_1)=p} \\ LM \ar[rr] \ar[dd]_{l} & & M^I \times M^I \ar[dd]^(0.3){p\times p}& \\ & M \ar@{->}'[r]^(0.7){\Delta}[rr] \ar@{->}[ld]_{\Delta} & & M\times M \ar[ld]^{\alpha} \\ M \times M \ar[rr]_{\gamma'} & & M^{4}.
& } \eqnlabel{add-3} $$ Here the maps $\alpha : M^{2} \to M^{4}$, $\beta : M^I \to M^I \times M^I$ and $\gamma' : M^{2} \to M^{4}$ are defined by $\alpha(x, y) = (x, y, y, y)$, $\beta(r) = (r, c_{r(1)})$ with the constant loop $c_{r(1)}$ at $r(1)$ and $\gamma'(x, y) =(x, y, y, x)$, respectively. We consider moreover the two pull-back squares $$ \xymatrix@C15pt@R15pt{ LM\times _M LM \ar[r]^-{Comp}\ar[d] & LM\ar[r]\ar[d]_l & M^I\times M^I\ar[d]^{p\times p}\\ M\ar[r]_\Delta & M\times M\ar[r]_{\gamma'} & M^4 } \eqnlabel{add-4} $$ and the commutative cube $$ \xymatrix@C10pt@R8pt{ & LM \times_M LM \ar[ld]_(0.55){q} \ar@{->}'[d][dd] \ar[rr]^{i\circ q} & & M^I\times M^I \ar@{=}[ld] \ar[dd]^{p^2} \\ LM \times LM \ar[rr]^(0.6){i} \ar[dd] & & M^I \times M^I \ar[dd]^(0.3){p^2} & \\ & M \ar@{->}'[r]^(0.6){(\Delta\times \Delta)\circ \Delta}[rr] \ar@{->}[ld]_{\Delta} & & M^{4} \ar@{=}[ld] \\ M \times M\ar[rr]_{\Delta \times \Delta} & & M^{4} & } \eqnlabel{add-5} $$ in which the front and back squares are also pull-back diagrams. \begin{proof}[Proof of Theorem \ref{thm:torsion_coproduct}] We see that the diagrams (3.5), (3.6) and (3.7) give rise to a commutative diagram $$ \hspace{-0.5cm} { \xymatrix@C12pt@R20pt{ H^*(LM\times LM) \ar[d]_{q^*} \ar@/_4pc/[dd]_{Dlcop}& \text{Tor}^*_{C^*(M^{4})}(C^*(M^2), C^*((M^I)^2))_{{\Delta^2}^*,{p^2}^*} \ar[d]^{\text{Tor}_{1}(\Delta^*, 1)} \ar[l]_-{EM}^-{\cong}\\ H^*(LM\times_M LM) \ar[d]_{Comp^!} & \text{Tor}^*_{C^*(M^{4})}(C^*(M), C^*(M^I\times M^I))_{(wv)^*, {p^2}^*} \ar[d]^{\text{Tor}_{1}(\Delta^!, 1)} \ar[l]^-{EM}_-{\cong} \\ H^{*+m}(LM) \ar[d]_{\varphi^*=id} & \text{Tor}^{*+m}_{C^*(M^{4})}(C^*(M^2), C^*(M^I\times M^I))_{{\gamma'}^*, {p^2}^*} \ar[l]^-{\cong}_-{EM} \ar[d]_{\text{Tor}_{\alpha^*}(\Delta^*, \beta^*)}^{\cong} \\ H^{*+m}(LM) & \text{Tor}^{*+m}_{C^*(M^{2})}(C^*(M), C^*(M^I))_{{\Delta}^*,{p}^*}.
\ar[l]^-{EM}_-{\cong} } } $$ In fact, the diagrams (3.5) and (3.7) give morphisms of pull-backs from the back face to the front face. Therefore the naturality of the Eilenberg-Moore map yields that the top and the bottom squares are commutative. Using the diagram (3.6), the same argument as in the proof of Theorem \ref{thm:torsion_product} enables us to conclude that the middle square is commutative. Since the following diagram of spaces $$ \xymatrix@C15pt@R18pt{ M\ar[r]^\Delta\ar[d]_\sigma^\simeq\ar@/_3pc/[dd]_{\Delta} & M\times M\ar[d]_{\sigma\times\sigma}^\simeq\ar@/^3pc/[dd]^{\Delta\times\Delta}\\ M^I\ar[r]^\beta\ar[d]_p & M^I\times M^I\ar[d]_{p\times p}\\ M\times M \ar[r]^\alpha & M^4 } $$ is commutative, the theorem follows. \end{proof} By considering the free loop fibration $\Omega M\buildrel{\widetilde{\eta}}\over\rightarrow LM\buildrel{ev_0}\over\rightarrow M$, we define for a Gorenstein space (see Example~\ref{intersection fibration}) an intersection morphism $H(\widetilde{\eta}_!):H_{*+m}(LM)\rightarrow H_*(\Omega M)$ generalizing the one defined by Chas and Sullivan~\cite{C-S}. Using the following commutative cube $$ \xymatrix@C10pt@R8pt{ & LM\ar[rr] \ar@{->}'[d]^{ev_0}[dd] & & M^I \ar[dd]^{p} \\ \Omega M \ar[rr] \ar[ru]^{\widetilde{\eta}} \ar[dd] && PM \ar[ru]_{\widetilde{\eta\times 1}} \ar[dd]^(0.3){ev_1} & \\ & M \ar@{->}'[r]^(0.6){\Delta}[rr] & & M\times M \\ {*} \ar[rr] \ar[ur]^{\eta} & & {*}\times M \ar[ru]_{\eta \times 1} & } $$ where all the faces are pull-backs, we obtain similarly the following theorem. \begin{thm} Let $M$ be a simply-connected Gorenstein space with generator $\omega_M$ in $\text{\em Ext}^{m}_{C^*(M)}({\mathbb K},C^*(M))$.
Then the dual of the intersection morphism, $H(\widetilde{\eta}^!)$, is given by the commutative diagram $$ \xymatrix@C12pt@R20pt{ H^*(\Omega M)\ar[dd]_{H(\widetilde{\eta}^!)} & \text{\em Tor}^*_{C^*(M)}({\mathbb K},C^*(PM))_{{\eta}^*,{ev_1}^*} \ar[l]_-{EM}^-{\cong}\ar[r]^{\text{\em Tor}^*_{1}(1,\eta^*)}_\cong & \text{\em Tor}^*_{C^*(M)}({\mathbb K},{\mathbb K})_{{\eta}^*,{\eta}^*}\\ & \text{\em Tor}^*_{C^*(M^2)}({\mathbb K},C^*(M^I))_{{\eta}^*,{p}^*} \ar[lu]^-{EM}_-{\cong}\ar[d]^{\text{\em Tor}^*_{1}(\omega_M,1)} \ar[u]_{\text{\em Tor}^*_{(\eta\times 1)^*}(1,(\widetilde{\eta\times 1})^*)}^\cong \ar[r]^{\text{\em Tor}^*_{1}(1,\sigma^*)}_\cong &\text{\em Tor}^*_{C^*(M^2)}({\mathbb K},C^*M)_{{\eta}^*,{\Delta}^*} \ar[d]^{\text{\em Tor}^*_{1}(\omega_M,1)} \ar[u]_{\text{\em Tor}^*_{(\eta\times 1)^*}(1,\eta^*)}^\cong\\ H^{*+m}(LM) & \text{\em Tor}^*_{C^*(M^2)}(C^*(M),C^*(M^I))_{{\Delta}^*,{p}^*}\ar[l]^-{EM}_-{\cong} \ar[r]^{\text{\em Tor}^*_{1}(1,\sigma^*)}_\cong &\text{\em Tor}^*_{C^*(M^2)}(C^*M,C^*M)_{{\Delta}^*,{\Delta}^*}. } $$ \end{thm} Let $\widehat{\mathcal F}$ be the pull-back diagram in the front of (3.1). Let $\widetilde{\mathcal F}$ denote the pull-back diagram obtained by combining the front and the right hand-side squares in (3.1). Then a map inducing the isomorphism $\text{Tor}_{w^*}(1, {\widetilde{q}}^*)$ gives rise to a morphism $\{f_r\} : \{\widetilde{E}_r, \widetilde{d}_r \} \to \{\widehat{E}_r, \widehat{d}_r \}$ of spectral sequences, where $\{\widehat{E}_r, \widehat{d}_r \}$ and $\{\widetilde{E}_r, \widetilde{d}_r \}$ are the Eilenberg-Moore spectral sequences associated with the fibre squares $\widehat{\mathcal F}$ and $\widetilde{\mathcal F}$, respectively.
In order to prove Theorem \ref{thm:EMSS}, we need the following lemma. \begin{lem} \label{lem:key} The map $f_2$ is an isomorphism. \end{lem} \begin{proof} We identify $f_2$ with the map $$ \text{Tor}_{H^*(w)}(1, H^*(\Delta)): \text{Tor}_{H^*(M^4)}(H^*M, H^*(M\times M))\rightarrow \text{Tor}_{H^*(M^3)}(H^*M, H^*M) $$ \noindent up to the isomorphism between the $E_2$-term and the torsion product. Thus, in order to obtain the result, it suffices to apply part (1) of Lemma~\ref{lem:generalizeddecompositioncup} to the algebra $H^*(M)$ and the module $H^*(M)$. \end{proof} We are now ready to describe the (co)multiplicative structures of the EMSS. \noindent {\it Proof of Theorem \ref{thm:EMSS}.} Gugenheim and May~\cite[p. 26]{G-M} have shown that the map $\widetilde{\top}$ induces a morphism of spectral sequences from $E_r\otimes E_r$ to the Eilenberg-Moore spectral sequence converging to $H^*(LM\times LM)$. In fact, $\widetilde{\top}$ induces an isomorphism of spectral sequences. All the other maps between torsion products in Theorems \ref{thm:torsion_product} and \ref{thm:torsion_coproduct} preserve the filtrations. Thus, in view of Lemma \ref{lem:key}, we have Theorem \ref{thm:EMSS}. In fact, the shriek map $\Delta^!$ is in $\text{Ext}_{C^*(M^2)}^m(C^*(M), C^*(M^2))$. Then we have $d\Delta^! = (-1)^m\Delta^!d$. Let $\{\widehat{E}_r^{*,*} , \widehat{d}_r\}$ and $\{\widetilde{E}_r^{*,*} , \widetilde{d}_r\}$ be the EMSS's converging to $\text{Tor}_{C^*(M^4)}^*(C^*(M), C^*((M^I)^2))$ and $\text{Tor}_{C^*(M^4)}^*(C^*(M^2), C^*((M^I)^2))$, respectively. Let $\{f_r\} : \{\widehat{E}_r^{*,*} , \widehat{d}_r\} \to \{\widetilde{E}_r^{*,*} , \widetilde{d}_r\}$ be the morphism of spectral sequences which gives rise to $\text{Tor}_1(\Delta^!, 1)$. Recall the map $\Delta^! \otimes 1 : {\mathbb B}\otimes_{C^*(M^4)}{\mathbb B}' \to C^*(M^2)\otimes_{C^*(M^4)}{\mathbb B}'$ in the proof of Theorem \ref{thm:torsion_product}.
It follows that, for any $b\otimes b' \in {\mathbb B}\otimes_{C^*(M^4)}{\mathbb B}'$, \begin{eqnarray*} (\Delta^!\otimes 1)d(b\otimes b' )&=& (\Delta^!\otimes 1)(db\otimes b' + (-1)^{\deg b}b\otimes db') \\ &=&\Delta^!db\otimes b' + (-1)^{\deg b}\Delta^! b\otimes db' \\ &=& (-1)^md\Delta^!b\otimes b' + (-1)^{\deg b}\Delta^! b\otimes db'. \end{eqnarray*} On the other hand, we see that \begin{eqnarray*} d(\Delta^!\otimes 1)(b\otimes b' )&=& d(\Delta^!b\otimes b') \\ &=&d\Delta^!b\otimes b' + (-1)^{\deg b +m}\Delta^! b\otimes db' \end{eqnarray*} and hence $(\Delta^!\otimes 1)d=(-1)^m d (\Delta^!\otimes 1)$. This implies that $f_r\widehat{d}_r = (-1)^m\widetilde{d}_rf_r$. This fact yields the compatibility of the multiplication with the differential of the spectral sequence. The same argument works to show the compatibility of the comultiplication with the differential of the EMSS. \qed \noindent {\it Proof of Corollary \ref{cor:trivial coproduct in EMSS}.} Since $H^*(\Delta^!)$ is $H^*(M^2)$-linear, it follows that $H^*(\Delta^!)\circ H^*(\Delta)(x)=H^*(\Delta^!)(1)\cup x$. If $d<0$ then $H^*(\Delta^!)(1)=0$. If $d=0$ then $H^*(\Delta^!)(1)=\lambda 1$ where $\lambda\in {\mathbb K}$, and so the composite $H^*(\Delta^!)\circ H^*(\Delta)$ is multiplication by the scalar $\lambda$. Let $m$ be a non-trivial element of positive degree in $H^*(M)$. Then we see that $0=H^*(\Delta^!)\circ H^*(\Delta)(m\otimes 1-1\otimes m)=\lambda(m\otimes 1-1\otimes m).$ Therefore $\lambda=0$. So in both cases, we have proved that $H^*(\Delta^!)\circ H^*(\Delta)=0$. Since $H^*(\Delta)$ is surjective, $H^*(\Delta^!):H^*(M)\rightarrow H^{*+d}(M^2)$ is trivial.
In particular, the induced maps $\text{Tor}^{*}_{H^*(M^{4})}(H^*(\Delta^!), H^*(M^I\times_M M^I))$ and $\text{Tor}^{*}_{H^*(M^{4})}(H^*(\Delta^!), H^*((M^I)^2))$ are trivial. Then it follows from Theorems~\ref{thm:torsion_product} and \ref{thm:torsion_coproduct} that both the comultiplication and the multiplication on the $E_2$-term of the EMSS, which correspond to the duals of the loop product and the loop coproduct on $H^*(LM)$, are null. Therefore, $E^{*,*}_\infty\cong \text{Gr}H^*(LM)$ is equipped with a trivial coproduct and a trivial product. Then the conclusion follows. \qed \begin{rem} It follows from Corollary \ref{cor:trivial coproduct in EMSS} that under the hypothesis of Corollary \ref{cor:trivial coproduct in EMSS} the two composites $$ \xymatrix@C30pt@R5pt{ H^*(M)\otimes H^*(M) \ar[r]^(0.48){p^*\otimes p^*} & H^*(LM)\otimes H^*(LM) \ar[r]^(0.6){Dlcop} &H^*(LM) } $$ and $$ \xymatrix@C25pt@R5pt{ H_*(LM)\otimes H_*(LM) \ar[rr]^(0.6){\text{Loop product}} & &H_*(LM) \ar[r]^{p_*} & H_*(M) } $$ are trivial. This can also be proved directly, since we have the commutative diagram $$ \xymatrix@C30pt@R15pt{ H^*(LM\times_M LM)\ar[r]^{H(Comp^!)} &H^*(LM\times LM)\\ H^*(M)\ar[r]_{H(\Delta^!)}\ar[u] &H^*(M\times M)\ar[u]_{(p \times p) ^*} } $$ and since $p_*:H_*(LM)\rightarrow H_*(M)$ is a morphism of graded algebras with respect to the loop product and the intersection product $H(\Delta_!)$. As we saw in the proof of Corollary \ref{cor:trivial coproduct in EMSS}, under its hypothesis, $H^*(\Delta^!)$ and its dual $H_*(\Delta_!)$ are trivial. \end{rem} \section{Proof of Theorem \ref{thm:loop(co)product}} The following lemma is interesting in its own right, since it gives a very simple proof of a result of Klein (see Remark~\ref{Klein} below). \begin{lem}\label{isoextfreeloops} Let $M$ be an oriented simply-connected Poincar\'e duality space of dimension $m$.
Let $M\twoheadrightarrow B$ be a fibration. Denote by $M\times_B M$ the pull-back over $B$. Then for all $p\in {{\mathbb Z}}$, $\text{\em Ext}^{-p}_{C^*(B)}(C^*(M),C^*(M))$ is isomorphic to $H_{p+m}(M\times_B M)$ as a vector space. \end{lem} \begin{rem} A particular case of Lemma~\ref{isoextfreeloops} is the isomorphism of graded vector spaces $\text{Ext}^{-p}_{C^*(M\times M)}(C^*(M),C^*(M))\cong H_{p+m}(LM)$ underlying the isomorphism of algebras given in Theorem~\ref{thm:freeloopExt}. Note, however, that in the proof of Lemma~\ref{isoextfreeloops} we consider right $C^*(B)$-modules, while in the proof of Theorem~\ref{thm:freeloopExt} we need left $C^*(M^2)$-modules; see Section 10. \end{rem} \begin{rem}\label{Klein} Let $F$ be the homotopy fibre of $M\rightarrow B$. In~\cite[Theorem B]{KleinPoincare}, Klein shows in terms of spectra that $ \text{Ext}^{-p}_{C_*(\Omega B)}(C_*(F),C_*(F))\cong H_{p+m}(M\times_B M) $ and so, using the Yoneda product~\cite[Theorem A]{KleinPoincare}, $H_{*+m}(M\times_B M)$ is a graded algebra. The isomorphism above and that in Lemma \ref{isoextfreeloops} make us aware of {\it duality} on the extension functors of (co)chain complexes of spaces. As mentioned in the Introduction, this is one of the topics of \cite{KMnoetherian}. \end{rem} \begin{proof}[Proof of Lemma \ref{isoextfreeloops}] The Eilenberg-Moore map gives an isomorphism $$ H_{p+m}(M\times_B M)\cong \text{Ext}^{-p-m}_{C^*(B)}(C^*(M),C_*(M)).$$ The cap product with a representative $\sigma$ of the fundamental class $[M]\in H_m(M)$ gives a quasi-isomorphism of right $C^*(M)$-modules of upper degree $-m$, $$\sigma \cap \text{--} : C^*(M)\buildrel{\simeq}\over\rightarrow C_{m-*}(M), \quad x\mapsto \sigma\cap x.$$ Therefore, we have an isomorphism $$ \text{\em Ext}^{*}_{C^*(B)}(C^*(M),\sigma \cap \text{--}):\text{\em Ext}^{-p}_{C^*(B)}(C^*(M),C^*(M))\rightarrow\text{\em Ext}^{-p-m}_{C^*(B)}(C^*(M),C_*(M)). $$ This completes the proof.
\end{proof} \begin{proof}[Proof of Theorem \ref{thm:loop(co)product}] Theorems \ref{thm:torsion_product} and \ref{thm:torsion_coproduct} allow us to describe part of the composite $H^*(LM\times_M LM)\buildrel{H(q^!)}\over\rightarrow H^{*+m}(LM\times LM)\buildrel{Dlcop}\over\rightarrow H^{*+2m}(LM)$ in terms of the following composite of appropriate maps between torsion functors $$ { \xymatrix@C30pt@R10pt{ \text{Tor}^*_{C^*(M^{4})}(C^*(M), C^*(M^2))_{(wv)^*, {\Delta^2}^*} \ar[d]^{\text{Tor}_{1}(\Delta^!, 1)} \\ \text{Tor}^{*+m}_{C^*(M^{4})}(C^*(M^{2}), C^*(M^2))_{{\Delta^2}^*, {\Delta^2}^*} \ar[d]^(0.45){\text{Tor}_{1}(\Delta^*, 1)} \\ \text{Tor}^{*+m}_{C^*(M^{4})}(C^*(M), C^*(M^2))_{(wv)^*, {\Delta^2}^*} \ar[d]^{\text{Tor}_{1}(\Delta^!, 1)} \\ \text{Tor}^{*+2m}_{C^*(M^{4})}(C^*(M^2), C^*(M^2))_{{\gamma'}^*, {\Delta^2}^*} \ar[d]_(0.45){\text{Tor}_{1}(\Delta^*, 1)} \ar@/^7mm/[rd]^(0.7){\text{Tor}_{\alpha^*}(\Delta^*, \Delta^*)}_(0.7)\cong \\ \text{Tor}^{*+2m}_{C^*(M^{4})}(C^*(M), C^*(M^2))_{\Delta^*{\alpha}^*, {\Delta^2}^*} \ar[r]_{\text{Tor}_{\alpha^*}(1, \Delta^*)} & \text{Tor}^{*+2m}_{C^*(M^{2})}(C^*(M), C^*(M))_{\Delta^*, \Delta^*} . } } $$ By virtue of Lemma~\ref{isoextfreeloops}, we see that $\text{Ext}^{2m}_{C^*(M^4)}(C^*(M),C^*(M))_{(wv)^*,\Delta^*{\alpha}^*}$ is isomorphic to $H_{-m}(M^{S^1\vee S^1\vee S^1})=\{0\}$. Then the composite $C^*(M)\buildrel{\Delta^!}\over\rightarrow C^*(M\times M)\buildrel{\Delta^*}\over\rightarrow C^*(M)\buildrel{\Delta^!}\over\rightarrow C^*(M\times M) \buildrel{\Delta^*}\over\rightarrow C^*(M) $ is null in $\text{D}(\text{Mod-}C^*(M^4))$. Therefore the composite $Dlcop\circ H(q^!)$ is trivial and hence $Dlcop\circ Dlp:=Dlcop\circ H(q^!)\circ Comp^*$ is also trivial.
\end{proof} \begin{rem} Instead of using Lemma~\ref{isoextfreeloops}, one can show that $$\text{Ext}^{2m}_{C^*(M^4)}(C^*(M),C^*(M))_{(wv)^*,\Delta^*{\alpha}^*}=\{0\}$$ as follows: Consider the cohomological Eilenberg-Moore spectral sequence with $${\mathbb E}^{p,*}_2\cong\text{Ext}^p_{H^*(M^4)}(H^*(M),H^*(M))$$ converging to $\text{Ext}^{*}_{C^*(M^4)}(C^*(M),C^*(M))_{(wv)^*,\Delta^*{\alpha}^*}$. Then we see that $${\mathbb E}^{p,*}_1=\text{Hom}(H^*(M)\otimes H^+(M^4)^{\otimes p}, H^*(M)).$$ Therefore, since $M^4$ is simply-connected and $H^{>m}(M)=\{0\}$, ${\mathbb E}^{p,q}_r=\{0\}$ if $q>m-2p$ (compare with Remark~\ref{zeroelementsofEMSS}). Therefore $\text{Ext}^{p+q}_{C^*(M^4)}(C^*(M),C^*(M))_{(wv)^*,\Delta^*{\alpha}^*}=\{0\}$ if $p+q>m$. \end{rem} \begin{rem}\label{rem:obstruction} Let $M$ be a Gorenstein space of dimension $m$. The proof of Theorem \ref{thm:loop(co)product} shows that if the composite $$\Delta^*\circ\Delta^!\circ \Delta^*\circ\Delta^!\in\text{Ext}^{2m}_{C^*(M^4)}(C^*(M),C^*(M))_{(wv)^*,\Delta^*{\alpha}^*}$$ is the zero element, then $Dlcop\circ Dlp$ is trivial. \end{rem} \begin{rem} In the proof of Theorem \ref{thm:loop(co)product}, it is important to work in the derived category of $C^*(M^4)$-modules: Suppose that $M$ is the classifying space of a connected Lie group of dimension $-m$. Then, since $m$ is negative, the composite $\Delta^!\circ \Delta^*$ is null. In fact $\Delta^!\circ \Delta^* \in \text{Ext}^{m}_{C^*(M^2)}(C^*(M^2),C^*(M^2))_{1^*,1^*}\cong H^m(M^2)=\{0\}$. But in general, $Dlcop$ is not trivial; see \cite[Theorem D]{F-T} and \cite{K-L}. Therefore the composite $\Delta^*\circ\Delta^!\circ \Delta^*\in \text{Ext}^{m}_{C^*(M^4)}(C^*(M^2),C^*(M))_{{\Delta^2}^*,\Delta^*{\alpha}^*}$ is also not trivial.
\end{rem} \section{The generalized cup product on the Hochschild cohomology} After recalling (and defining) the (generalized) cup product on the Hochschild cohomology, we give an extension-functor description of the product. The result plays an important role in proving our main theorem, Theorem \ref{thm:loop_homology_ss}. \begin{defn}\label{cup product Hochschild} Let $A$ be a (differential graded) algebra. Let $M$ be an $A$-bimodule. Recall that we have a canonical map~\cite[p. 283]{Menichi_BV_Hochschild} $$ \otimes_A:HH^*(A,M)\otimes HH^*(A,M)\rightarrow HH^*(A,M\otimes_A M). $$ {(1)} Let $\bar{\mu}_M:M\otimes_A M\rightarrow M$ be a morphism of $A$-bimodules of degree $d$. Then the {\it cup product} $\cup$ on $HH^*(A,M)$ is the composite $$ HH^p(A,M)\otimes HH^q(A,M)\buildrel{\otimes_A}\over\rightarrow HH^{p+q}(A,M\otimes_A M)\buildrel{HH^{p+q}(A,\bar{\mu}_M)}\over\longrightarrow HH^{p+q+d}(A,M). $$ {(2)} Let $\varepsilon:Q\buildrel{\simeq}\over\rightarrow M\otimes_A M$ be an $A\otimes A^{op}$-projective (semi-free) resolution of $M\otimes_A M$. Let $\bar{\mu}_M\in\text{Ext}_{A\otimes A^{op}}^d(M\otimes_A M,M)=H^d(\text{Hom}_{A\otimes A^{op}}(Q,M))$. Then the {\it generalized cup product} $\cup$ on $HH^*(A,M)$ is the composite $$ HH^*(A,M)^{\otimes 2}\buildrel{\otimes_A}\over\rightarrow HH^{*}(A,M\otimes_A M) \buildrel{HH^{*}(A,\varepsilon)^{-1}}\over\longrightarrow HH^{*}(A,Q) \buildrel{HH^{*}(A,\bar{\mu}_M)}\over\longrightarrow HH^{*}(A,M). $$ \end{defn} \begin{rem}\label{cup product of an algebra morphism} Let $M$ be an associative (differential graded) algebra with unit $1_M$. Let $h:A\rightarrow M$ be a morphism of (differential graded) algebras. Then $$a\cdot m\star b:=h(a)mh(b)$$ defines an $A$-bimodule structure on $M$ such that the multiplication of $M$, $\mu_M:M\otimes M\rightarrow M$, induces a morphism of $A$-bimodules $\bar{\mu}_M:M\otimes_A M\rightarrow M$.
Conversely, let $M$ be an $A$-bimodule equipped with an element $1_M\in M$ and a morphism of $A$-bimodules $\bar{\mu}_M:M\otimes_A M\rightarrow M$ such that $\bar{\mu}_M\circ(\bar{\mu}_M\otimes_A 1)=\bar{\mu}_M\circ(1\otimes_A\bar{\mu}_M)$ and such that the two maps $m\mapsto \bar{\mu}_M(m\otimes_A 1)$ and $m\mapsto \bar{\mu}_M(1\otimes_A m)$ coincide with the identity map on $M$. Then the map $h:A\rightarrow M$ defined by $h(a):=a\cdot 1_M$ is a morphism of algebras. \end{rem} The following lemma gives an interesting decomposition of the cup product of the Hochschild cohomology of a commutative (possibly differential graded) algebra. \begin{lem}\label{lem:generalizeddecompositioncup} Let $A$ be a commutative (differential graded) algebra. Let $M$ be an $A$-module. Let $B$ be an $A^{\otimes 2}$-module. Let $\mu:A^{\otimes 2}\rightarrow A$ denote the multiplication of $A$. Let $\eta:{\mathbb K}\rightarrow A$ be the unit of $A$. Let $q:B\otimes B\twoheadrightarrow B\otimes_A B$ be the quotient map. Then {\em (1)} $\text{\em Tor}^{1\otimes\mu\otimes 1}_*(1,\mu):\text{\em Tor}^{A^{\otimes 4}}_*(M,A\otimes A)\buildrel{\cong}\over\rightarrow \text{\em Tor}^{A^{\otimes 3}}_*(M,A)$ is an isomorphism, {\em (2)} $\text{\em Hom}_{1\otimes\mu\otimes 1}(q,1):\text{\em Hom}_{A^{\otimes 3}}(B\otimes_A B,M)\buildrel{\cong}\over\rightarrow\text{\em Hom}_{A^{\otimes 4}}(B\otimes B,M)$ is an isomorphism and {\em (3)} $\text{\em Ext}_{1\otimes\mu\otimes 1}^*(q,1): \text{\em Ext}_{A^{\otimes 3}}^*(B\otimes_A B,M)\buildrel{\cong}\over\rightarrow\text{\em Ext}_{A^{\otimes 4}}^*(B\otimes B,M)$ is also an isomorphism. {\em (4)} Let $\mu_M\in\text{Hom}_{A^{\otimes 4}}(M^{\otimes 2},M)$.
Then $\mu_M$ induces a quotient map $\bar{\mu}_M:M\otimes_A M\rightarrow M$ and the cup product $\cup$ of the Hochschild cohomology of $A$ with coefficients in $M$, $HH^*(A,M)=\text{\em Ext}^*_{A^{\otimes 2}}(A,M)_{\mu,\mu}$, is given by the following commutative diagram $ \xymatrix@C12pt@R10pt{ \text{\em Ext}^*_{A^{\otimes 2}}(A,M)_{\mu,\mu}\otimes \text{\em Ext}^*_{A^{\otimes 2}}(A,M)_{\mu,\mu}\ar[r]^-{\otimes}\ar[dd]_\cup &\text{\em Ext}^*_{A^{\otimes 4}}(A^{\otimes 2},M^{\otimes 2})_{\mu^{\otimes 2},\mu^{\otimes 2}} \ar[d]^{\text{\em Ext}^*_{1}(1,\mu_M)}\\ &\text{\em Ext}^*_{A^{\otimes 4}}(A^{\otimes 2},M)_{\mu^{\otimes 2},\mu\circ \mu^{\otimes 2}} \ar[d]^{\text{\em Ext}^*_{1\otimes\mu\otimes 1}(\mu,1)^{-1}}_\cong\\ \text{\em Ext}^*_{A^{\otimes 2}}(A,M)_{\mu,\mu}&\text{\em Ext}^*_{A^{\otimes 3}}(A,M)_{\mu\circ(\mu\otimes 1),\mu\circ(\mu\otimes 1)} \ar[l]^-{\text{\em Ext}^*_{1\otimes\eta\otimes 1}(1,1)} } $ {\em (5)} Let $\varepsilon:R\buildrel{\simeq}\over\rightarrow M\otimes M$ be an $A^{\otimes 4}$-projective (semi-free) resolution of $M\otimes M$. Let $\mu_M\in\text{\em Ext}_{A^{\otimes 4}}(M^{\otimes 2},M)=H(\text{\em Hom}_{A^{\otimes 4}}(R,M))$. Let $\bar{\mu}_M$ be $\text{\em Ext}_{1\otimes\mu\otimes 1}^*(q,1)^{-1}(\mu_M)$.
Then the generalized cup product $\cup$ of the Hochschild cohomology of $A$ with coefficients in $M$, $HH^*(A,M)=\text{\em Ext}^*_{A^{\otimes 2}}(A,M)_{\mu,\mu}$, is given by the following commutative diagram $$ \xymatrix@C12pt@R10pt{ \text{\em Ext}^*_{A^{\otimes 2}}(A,M)_{\mu,\mu}\otimes \text{\em Ext}^*_{A^{\otimes 2}}(A,M)_{\mu,\mu}\ar[r]^-{\otimes}\ar[ddd]_\cup &\text{\em Ext}^*_{A^{\otimes 4}}(A^{\otimes 2},M^{\otimes 2})_{\mu^{\otimes 2},\mu^{\otimes 2}} \ar[d]^{(\text{\em Ext}^*_{1}(1,\varepsilon))^{-1}}_\cong\\ &\text{\em Ext}^*_{A^{\otimes 4}}(A^{\otimes 2},R) \ar[d]^{\text{\em Ext}^*_{1}(1,\mu_M)}\\ &\text{\em Ext}^*_{A^{\otimes 4}}(A^{\otimes 2},M)_{\mu^{\otimes 2},\mu\circ \mu^{\otimes 2}} \ar[d]^{\text{\em Ext}^*_{1\otimes\mu\otimes 1}(\mu,1)^{-1}}_\cong\\ \text{\em Ext}^*_{A^{\otimes 2}}(A,M)_{\mu,\mu}&\text{\em Ext}^*_{A^{\otimes 3}}(A,M)_{\mu\circ(\mu\otimes 1),\mu\circ(\mu\otimes 1)} \ar[l]^-{\text{\em Ext}^*_{1\otimes\eta\otimes 1}(1,1)} } $$ \end{lem} As mentioned at the beginning of this section, Lemma \ref{lem:generalizeddecompositioncup} (4) contributes toward proving Theorem \ref{thm:loop_homology_ss}. Moreover, in view of part (5) of the lemma, we prove Theorem \ref{rational iso of Felix-Thomas Gorenstein}. \begin{proof}[Proof of Lemma \ref{lem:generalizeddecompositioncup}] (1) Consider the bar resolution $\xi: B(A, A, A) \stackrel{\simeq}{\to} A$ of $A$. Since the complex $B(A, A, A)$ is a semifree $A$-module, it follows from \cite[Theorem 6.1]{F-H-T} that $\xi\otimes_A \xi : B(A, A, A)\otimes_A B(A, A, A) \to A\otimes_A A=A$ is a quasi-isomorphism and hence it is a projective resolution of $A$ as an $A^{\otimes 3}$-module.
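Here $B(A,A,A)$ denotes the two-sided bar construction. For the reader's convenience, we recall its standard description (a sketch only; signs and the full bar differential follow the usual conventions, see~\cite{F-H-T}):

```latex
% Two-sided bar construction (standard description; s denotes the
% suspension of a graded vector space):
B(A,A,A)\;=\;\bigoplus_{k\ge 0} A\otimes (s\bar{A})^{\otimes k}\otimes A,
\qquad \bar{A}:=\operatorname{coker}\bigl(\eta:{\mathbb K}\rightarrow A\bigr).
% The augmentation \xi is the multiplication a\otimes a'\mapsto aa' on the
% summand k=0 and vanishes on the summands k\ge 1; the differential of
% B(A,A,A) combines the internal differential of A with the simplicial
% bar differential.
```

In particular $B(A,A,A)$ is semifree both as a left and as a right $A$-module, which is what the quoted theorem requires.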
We moreover have a commutative diagram $$ \xymatrix@C15pt@R15pt{ B(A, A, A)\otimes B(A, A, A) \ar[r]^(0.7){\xi\otimes \xi} \ar@{->>}[d]_q & A \otimes A \ar[d]^\mu \\ B(A, A, A)\otimes_A B(A, A, A) \ar[r]_(0.75){\xi\otimes_A \xi} & A } $$ in which $q$ is the natural projection and the first row is a projective resolution of $A\otimes A$ as an $A^{\otimes 4}$-module. It is immediate that $q$ is a morphism of $A^{\otimes 4}$-modules with respect to the morphism of algebras $1\otimes \mu \otimes 1:A^{\otimes 4}\rightarrow A^{\otimes 3}$. Then $\text{Tor}_{1\otimes \mu \otimes 1}(1, \mu)$ is induced by the map $$ 1\otimes q : M\otimes_{A^{\otimes 4}}B(A, A, A)\otimes B(A, A, A) \to M\otimes_{A^{\otimes 3}}B(A, A, A)\otimes_A B(A, A, A). $$ Since $A$ is commutative, it follows that both the source and target of $1\otimes q$ are isomorphic to $W:= M\otimes B({\mathbb K}, A, {\mathbb K}) \otimes B({\mathbb K}, A, {\mathbb K})$ as a vector space. As a linear map, $1\otimes q$ coincides with the identity map on $W$ up to isomorphism. (2) By the universal property of the quotient map $q:B\otimes B\twoheadrightarrow B\otimes_A B$, $\text{Hom}_{1\otimes\mu\otimes 1}(q,1)$ is an isomorphism. (3) Let $\varepsilon:P\buildrel{\simeq}\over\rightarrow B$ be an $A^{\otimes 2}$-projective (semifree) resolution of $B$. We have a commutative square of $A^{\otimes 4}$-modules $$ \xymatrix@C20pt@R15pt{ P\otimes P\ar[r]^{\varepsilon\otimes\varepsilon}_\simeq\ar[d]_{q'} &B\otimes B \ar[d]^q\\ P\otimes_A P\ar[r]^{\varepsilon\otimes_A\varepsilon}_\simeq &B\otimes_A B } $$ Therefore $\text{Ext}_{1\otimes\mu\otimes 1}^*(q,1)$ is induced by $\text{Hom}_{1\otimes\mu\otimes 1}(q',1)$, which is an isomorphism by (2). (4) Let $A$ be any algebra and $M$ be any $A$-bimodule. Let $\xi:\mathbb{B}\buildrel{\simeq}\over\rightarrow A$ be an $A\otimes A^{op}$-projective (semi-free) resolution (for example the double bar resolution).
Let $c:\mathbb{B}\rightarrow \mathbb{B}\otimes_A\mathbb{B}$ be a morphism of $A$-bimodules such that the diagram of $A$-bimodules $$ \xymatrix@C20pt@R15pt{ \mathbb{B}\ar[r]^\xi\ar[d]^c &A \ar[d]_\cong\\ \mathbb{B}\otimes_A\mathbb{B}\ar[r]_{\xi\otimes_A\xi} &A\otimes_A A } $$ is homotopy commutative. The cup product of $f$ and $g\in\text{Hom}_{A\otimes A^{op}}(\mathbb{B},M)$ is the composite $\bar{\mu}_M\circ (f\otimes_A g)\circ c\in \text{Hom}_{A\otimes A^{op}}(\mathbb{B},M)$~\cite[p. 134]{S-G}. Suppose now that $A$ is commutative and that the $A$-bimodule structure on $M$ comes from the multiplication $\mu$ of $A$ and an $A$-module structure on $M$. The following diagram of complexes gives two different decompositions of the cup product on $\text{Hom}_{A\otimes A^{op}}(\mathbb{B},M)$. $$ { \xymatrix@C20pt@R10pt{ \text{Hom}_{A^{\otimes 2}}(\mathbb{B},M)\otimes \text{Hom}_{A^{\otimes 2}}(\mathbb{B},M) \ar[r]^-{\otimes}\ar[d]^{\otimes_A} & \text{Hom}_{A^{\otimes 4}}(\mathbb{B}\otimes\mathbb{B},M\otimes M) \ar[d]^{\text{Hom}_{1}(1,\mu_M)}\\ \text{Hom}_{A^{\otimes 3}}(\mathbb{B}\otimes_A\mathbb{B},M\otimes_A M) \ar[dr]^{\text{Hom}_{1}(1,\bar{\mu}_M)} &\text{Hom}_{A^{\otimes 4}}(\mathbb{B}\otimes\mathbb{B},M)\\ \text{Hom}_{A\otimes A^{op}}(\mathbb{B}\otimes_A\mathbb{B},M) \ar[d]^{\text{Hom}_{1}(c,1)} &\text{Hom}_{A^{\otimes 3}}(\mathbb{B}\otimes_A\mathbb{B},M) \ar[u]_{\text{Hom}_{1\otimes\mu\otimes 1}(q,1)}^\cong \ar[l]^{\text{Hom}_{1\otimes\eta\otimes 1}(1,1)}\\ \text{Hom}_{A\otimes A^{op}}(\mathbb{B},M) } } $$ (5) Let $\varepsilon:P\buildrel{\simeq}\over\rightarrow M$ be a surjective $A\otimes A^{op}$-projective (semifree) resolution of $M$. Then $\mu_M$ can be considered as an element of $\text{Hom}_{A^{\otimes 4}}(P\otimes P, M)$. By lifting, there exists $\mu_P\in\text{Hom}_{A^{\otimes 4}}(P\otimes P, P)$ such that $\varepsilon\circ\mu_P=\mu_M$. By (2), there exists $\bar{\mu}_P$ such that $\bar{\mu}_P\circ q=\mu_P$.
We can take $\bar{\mu}_M=\varepsilon\circ\bar{\mu}_P$. It is now easy to check that the isomorphism $HH^*(A,\varepsilon):HH^*(A,P)\buildrel{\cong}\over\rightarrow HH^*(A,M)$ transports the cup product on $HH^*(A,P)$ defined using $\bar{\mu}_P$ to the generalized cup product on $HH^*(A,M)$ defined using $\bar{\mu}_M$. We now check that the isomorphism $HH^*(A,\varepsilon):HH^*(A,P)\buildrel{\cong}\over\rightarrow HH^*(A,M)$ transports the composite $$ \xymatrix@C15pt@R10pt{ \text{Ext}^*_{A^{\otimes 2}}(A,P)_{\mu,\mu}\otimes \text{Ext}^*_{A^{\otimes 2}}(A,P)_{\mu,\mu}\ar[r]^-{\otimes} &\text{Ext}^*_{A^{\otimes 4}}(A^{\otimes 2},P^{\otimes 2})_{\mu^{\otimes 2},\mu^{\otimes 2}} \ar[d]^{\text{Ext}^*_{1}(1,\mu_P)}\\ &\text{Ext}^*_{A^{\otimes 4}}(A^{\otimes 2},P)_{\mu^{\otimes 2},\mu\circ \mu^{\otimes 2}} \ar[d]^{\text{Ext}^*_{1\otimes\mu\otimes 1}(\mu,1)^{-1}}_\cong\\ \text{Ext}^*_{A^{\otimes 2}}(A,P)_{\mu,\mu}&\text{Ext}^*_{A^{\otimes 3}}(A,P)_{\mu\circ(\mu\otimes 1),\mu\circ(\mu\otimes 1)} \ar[l]^-{\text{Ext}^*_{1\otimes\eta\otimes 1}(1,1)} } $$ into the composite $$ \xymatrix@C15pt@R10pt{ \text{Ext}^*_{A^{\otimes 2}}(A,M)_{\mu,\mu}\otimes \text{Ext}^*_{A^{\otimes 2}}(A,M)_{\mu,\mu}\ar[r]^-{\otimes} &\text{Ext}^*_{A^{\otimes 4}}(A^{\otimes 2},M^{\otimes 2})_{\mu^{\otimes 2},\mu^{\otimes 2}} \ar[d]^{(\text{Ext}^*_{1}(1,\varepsilon\otimes\varepsilon))^{-1}}_\cong\\ &\text{Ext}^*_{A^{\otimes 4}}(A^{\otimes 2},P\otimes P) \ar[d]^{\text{Ext}^*_{1}(1,\mu_M)}\\ &\text{Ext}^*_{A^{\otimes 4}}(A^{\otimes 2},M)_{\mu^{\otimes 2},\mu\circ \mu^{\otimes 2}} \ar[d]^{\text{Ext}^*_{1\otimes\mu\otimes 1}(\mu,1)^{-1}}_\cong\\ \text{Ext}^*_{A^{\otimes 2}}(A,M)_{\mu,\mu}&\text{Ext}^*_{A^{\otimes 3}}(A,M)_{\mu\circ(\mu\otimes 1),\mu\circ(\mu\otimes 1)} \ar[l]^-{\text{Ext}^*_{1\otimes\eta\otimes 1}(1,1)} } $$ By applying (4) to $\mu_P$, we have proved (5).
\end{proof} \begin{thm}(Compare with~\cite[Theorem 12]{F-T})\label{ext Gorenstein commutatif} Let $B$ be a simply-connected commutative Gorenstein cochain algebra of dimension $m$ such that $\forall i\in\mathbb{N}$, $H^i(B)$ is finite dimensional. Then $ \text{\em Ext}^{*+m}_{B\otimes B}(B,B\otimes B)\cong H^*(B). $ \end{thm} \begin{proof} The proof of~\cite[Theorem 12]{F-T} for the strongly homotopy commutative algebra $C^*(X)$ obviously works in the case of a commutative algebra $B$. \end{proof} \begin{rem} In~\cite[Theorem 2.1 i) iv)]{A-I} Avramov and Iyengar have shown a related result in the non graded case: Let $S$ be a commutative algebra over a field ${\mathbb K}$, which is the quotient of a polynomial algebra ${\mathbb K}[x_1,\dots,x_d]$ or more generally which is the quotient of a localization of ${\mathbb K}[x_1,\dots,x_d]$. Then $S$ is Gorenstein if and only if the graded $S$-module $ \text{Ext}^{*}_{S\otimes S}(S,S\otimes S) $ is projective of rank $1$. \end{rem} \begin{ex}\label{generalized cup product Gorenstein}(The generalized cup product of a Gorenstein algebra) Let $A\rightarrow B$ be a morphism of commutative differential graded algebras where $B$ satisfies the hypotheses of Theorem~\ref{ext Gorenstein commutatif}. Let $\Delta_B:B\rightarrow B\otimes B$ be a generator of $\text{Ext}^{m}_{B\otimes B}(B,B\otimes B)\cong {\mathbb K}$. By taking duals, we obtain the following element of $ \text{Ext}^{m}_{A\otimes A}(B^\vee\otimes B^\vee,B^\vee) $: $$ (\Delta_B)^{\vee}:B^\vee\otimes B^\vee\rightarrow (B\otimes B)^\vee \rightarrow B^\vee. $$ By (3) of Lemma~\ref{lem:generalizeddecompositioncup}, $ (\Delta_B)^{\vee}$ induces an element $\bar{\mu}_{B^{\vee}}\in\text{Ext}^{m}_{A\otimes A}(B^\vee\otimes_A B^\vee,B^\vee)$.
Therefore, by (2) of Definition~\ref{cup product Hochschild}, we have a generalized cup product $$ HH^p(A,B^\vee)\otimes HH^q(A,B^\vee) \buildrel{\cup}\over\rightarrow HH^{p+q+m}(A,B^\vee). $$ In the case $A=B$, of course, we believe that $HH^*(A,A^\vee)$ equipped with this generalized cup product and Connes coboundary is a non-unital BV-algebra. \end{ex} \section{Proofs of Theorems \ref{thm:loop_homology_ss}, \ref{thm:freeloopExt} and \ref{rational iso of Felix-Thomas Gorenstein} and Corollary~\ref{rational iso of Felix-Thomas Poincare}} We first prove Theorem \ref{rational iso of Felix-Thomas Gorenstein}. The same argument is applicable when proving Theorem \ref{thm:loop_homology_ss}. \begin{proof}[Proof of Theorem \ref{rational iso of Felix-Thomas Gorenstein}] Step 1: The polynomial differential functor $A(X)$ extends to a functor $A(X,Y)$ for pairs of spaces $Y\subset X$. The two natural short exact sequences~\cite[p. 124]{F-H-T} $0\rightarrow A(X,Y)\rightarrow A(X)\rightarrow A(Y)\rightarrow 0$ and $0\rightarrow C^*(X,Y)\rightarrow C^*(X)\rightarrow C^*(Y)\rightarrow 0$ are naturally weakly equivalent~\cite[p. 127-8]{F-H-T}. Therefore all the results of Felix and Thomas given in~\cite{F-T} with the singular cochains algebra $C^*(X)$ are valid with $A(X)$ (for example, the description of the shriek map of an embedding $N\hookrightarrow M$ at the level of singular cochains given on page 419 of~\cite{F-T}). In particular, our Theorem~\ref{thm:torsion_product} is valid when we replace $C^*(X)$ by $A(X)$. (Note also that a proof similar to the proof of Theorems~\ref{thm:newThm3} or~\ref{produitshriek} shows that the dual of the loop product on $A(L_NM)$ is isomorphic to the dual of the loop product defined on $C^*(L_NM)$.) This means the following: Let $\Delta^!_A$ be a generator of $\text{Ext}_{A(N^2)}^n(A(N),A(N^2))\cong {\mathbb Q}$ given by \cite[Theorem 12]{F-T}.
Then the composite $\text{Tor}^1(1,\sigma^*)\circ EM^{-1}$ is an isomorphism of algebras between the dual of the loop product $Dlp$ on $H^*(A(L_NM))$ and the coproduct defined by the composite on the left column of the following diagram. Step 2: We have chosen $\Delta^!_A$ and $\Delta_{A(N)}$ such that the composite $\varphi\circ\Delta_{A(N)}$ is equal to $\Delta^!_A$ in the derived category of $A(N)^{\otimes 2}$-modules. Therefore the following diagram commutes. $$ \xymatrix@C50pt@R15pt{ \text{Tor}^{A(M^{2})}_{*}(A(N), A(M))\ar[d]_{\text{Tor}^{A(p_{13})}(1,1)} &\text{Tor}^{A(M)^{\otimes 2}}_{*}(A(N), A(M))\ar[l]_-{\text{Tor}^{\varphi}(1,1)}^-\cong \ar[d]_{\text{Tor}^{1\otimes\eta\otimes1}(1,1)}\\ \text{Tor}^{A(M^{3})}_{*}(A(N), A(M)) &\text{Tor}^{A(M)^{\otimes 3}}_{*}(A(N), A(M))\ar[l]_-{\text{Tor}^{\varphi\circ(1\otimes\varphi)}(1,1)}^-\cong\\ \text{Tor}^{A(M^{4})}_{*}(A(N), A(M^2))\ar[u]^{\text{Tor}^{A(1\times\Delta\times 1)}(1,A(\Delta))}_\cong \ar[d]_{\text{Tor}^{1}(\Delta^!_A,1)} &\text{Tor}^{A(M)^{\otimes 4}}_{*}(A(N), A(M)^{\otimes 2})\ar[l]_-{\text{Tor}^{\varphi}(1,\varphi)}^-\cong \ar[u]^{\text{Tor}^{1\otimes\mu\otimes1}(1,\mu)}_\cong \ar[d]_{\text{Tor}^{1}(\Delta_{A(N)},1)}\\ \text{Tor}^{A(M^{4})}_{*}(A(N^2), A(M^2))\ar[d]_\cong &\text{Tor}^{A(M)^{\otimes 4}}_{*}(A(N)^{\otimes 2}, A(M)^{\otimes 2})\ar[l]_-{\text{Tor}^{\varphi}(\varphi,\varphi)}^-\cong \ar[d]_\cong\\ \left(\text{Tor}^{A(M^{2})}_{*}(A(N), A(M))\right)^{\otimes 2} &\left(\text{Tor}^{A(M)^{\otimes 2}}_{*}(A(N), A(M))\right)^{\otimes 2}\ar[l]^-{\left(\text{Tor}^{\varphi}(1,1)\right)^{\otimes 2}}_-\cong } $$ Step 3: Dualizing and using the natural isomorphism $$ \text{Ext}^*_B(Q,P^\vee)\buildrel{\cong}\over\rightarrow \text{Tor}^*_B(P,Q)^\vee $$ for any differential graded algebra $B$, right $B$-module $P$ and left $B$-module $Q$, we see that the dual of $\Phi$ is an isomorphism of algebras with respect to the loop product and to the
long composite given by the diagram of (5) of Lemma~\ref{lem:generalizeddecompositioncup} when $A:=A(M)$, $M:=A(N)^\vee$ and $\mu_M:=(\Delta_{A(N)})^\vee$. Step 4: We apply part (5) of Lemma~\ref{lem:generalizeddecompositioncup} to see that this long composite coincides with the generalized cup product of the Gorenstein algebra $B:=A(N)$. \end{proof} \noindent {\it Proof of Theorem \ref{thm:loop_homology_ss}.} Let $\{E_r^{*,*}, d_r\}$ denote the spectral sequence described in Theorem \ref{thm:EMSS}. Then we define a spectral sequence $\{{\mathbb E}_r^{*,*}, d_r^\vee\}$ by $$ {\mathbb E}_r^{p,q+d}:= ({E_r^{*,*}}^\vee)^{p,q} = (E_r^{-p,-q})^\vee. $$ The decreasing filtration $\{F^pH^*(L_N M) \}_{p\leq 0}$ of $H^*(L_N M)$ induces the decreasing filtration $\{F^pH_*(L_N M) \}_{p\geq 0}$ of $H_*(L_N M)$ defined by $$F^pH_*(L_N M)= (H^*(L_N M)/F^{-p}H^*(L_N M))^\vee.$$ By definition, the Chas-Sullivan loop homology (the shifted homology) ${\mathbb H}_*(L_N M)$ is given by ${\mathbb H}_{-(p+q)}(L_N M)=(H^*(L_N M)^\vee)^{p+q-d}$. By Proposition~\ref{prop:loop_homology}, the product $m$ on ${\mathbb H}_*(L_N M)$ is defined by $$ m(a\otimes b)= (-1)^{d(|a|-d)}(Dlp)^\vee(a\otimes b) $$ for $a\otimes b \in (H^*(L_N M)^\vee)^{*}\otimes (H^*(L_N M)^\vee)^{*}$. Then we see that \begin{eqnarray*} {\mathbb E}_{\infty}^{p, q} &\cong& F^p(H^*(L_N M)^\vee)^{p+q-d}/ F^{p+1}(H^*(L_N M)^\vee)^{p+q-d} \\ &=&F^p{\mathbb H}_{-(p+q)}(L_N M)/F^{p+1}{\mathbb H}_{-(p+q)}(L_N M). \end{eqnarray*} The composite $\widetilde{(Dlp)}$ in Theorem \ref{thm:torsion_product} which gives rise to $Dlp$ on $H^*(L_N M)$ preserves the filtration of the EMSS $\{E^{*,*}_r, d_r\}$; see Remark \ref{rem:relative_cases}. As mentioned in the proof of Theorem \ref{thm:EMSS}, the map $\widetilde{(Dlp)}$ induces the morphism $(Dlp)_r : E_r^{*, *} \to E_r^{*, *}\otimes E_r^{*, *}$ of spectral sequences of bidegree $(0, d)$.
Define $m_r : {\mathbb E}_r^{*, *} \otimes {\mathbb E}_r^{*, *}\to {\mathbb E}_r^{*, *}$ by $$ m_r(a\otimes b)=(-1)^{d(|a|+d)}((Dlp)_r)^\vee(a\otimes b), $$ where $|a| = p+ q$ if $a \in {({E_r^{*,*}}^\vee)}^{p, q}$. Then a straightforward computation enables us to deduce that $m_r(d_r^\vee a\otimes b+(-1)^{\vert a\vert+d}a \otimes d_r^\vee b) = d_r^\vee\circ m_r(a\otimes b)$ for any $r$. Note that $\vert a\vert+d=p+q+d$ is the total degree of $a$ in ${\mathbb E}_r^{*, *}$. It turns out that $\{{\mathbb E}_r, d_r^\vee\}$ is a spectral sequence of algebras converging to ${\mathbb H}_{-*}(L_N M)$ as an algebra. It remains now to identify the $\mathbb{E}_2$-term with the Hochschild cohomology. We proceed as in the proof of Theorem~\ref{rational iso of Felix-Thomas Gorenstein} replacing the polynomial differential functor $A$ by singular cohomology $H^*$. The product $Dlp_2$ is given by the composite $$ \xymatrix@C50pt@R15pt{ \text{Tor}^{H^*(M^{2})}_{*}(H^*(N), H^*(M))\ar[r]_{\text{Tor}^{H^*(p_{13})}(1,1)}& \text{Tor}^{H^*(M^{3})}_{*}(H^*(N), H^*(M))\\ &\text{Tor}^{H^*(M^{4})}_{*}(H^*(N), H^*(M^2))\ar[u]_{\text{Tor}^{H^*(1\times\Delta\times 1)}(1,H^*(\Delta))}^\cong \ar[d]_{\text{Tor}^{1}(\Delta^!,1)}\\ \left(\text{Tor}^{H^*(M^{2})}_{*}(H^*(N), H^*(M))\right)^{\otimes 2} &\text{Tor}^{H^*(M^{4})}_{*}(H^*(N^2), H^*(M^2))\ar[l]_\cong } $$ Dualizing and using the natural isomorphism $$ \text{Ext}^*_B(Q,P^\vee)\buildrel{\cong}\over\rightarrow \text{Tor}^*_B(P,Q)^\vee $$ for any graded algebra $B$, right $B$-module $P$ and left $B$-module $Q$, we see that $Dlp_2^\vee$, the dual of $Dlp_2$ is the long composite given by the diagram of Lemma~\ref{lem:generalizeddecompositioncup} (4) when $A:=H^*(M)$, $M:=H_*(N)$ and $\mu_M:=H(\Delta^!)^\vee$.
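The natural isomorphism $\text{Ext}^*_B(Q,P^\vee)\cong\text{Tor}^*_B(P,Q)^\vee$ is invoked several times in this section without proof; since ${\mathbb K}$ is a field, the standard argument is short and we sketch it here for completeness:

```latex
% Choose a B-projective (semifree) resolution R of Q. The hom-tensor
% adjunction gives an isomorphism of complexes
\operatorname{Hom}_B\bigl(R,P^{\vee}\bigr)
 =\operatorname{Hom}_B\bigl(R,\operatorname{Hom}_{\mathbb K}(P,{\mathbb K})\bigr)
 \cong\operatorname{Hom}_{\mathbb K}\bigl(P\otimes_B R,{\mathbb K}\bigr)
 =\bigl(P\otimes_B R\bigr)^{\vee}.
% Since the duality functor (-)^\vee is exact over the field K, passing
% to (co)homology yields the natural isomorphism
\operatorname{Ext}^*_B\bigl(Q,P^{\vee}\bigr)
 \;\cong\;\operatorname{Tor}^*_B(P,Q)^{\vee}.
```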
By Lemma~\ref{lem:generalizeddecompositioncup} (4), we obtain an isomorphism of algebras $ u : HH^*(H^*(M),H_*(N)) \stackrel{\cong}{\to} (E^{*,*}_2)^\vee $ with respect to $Dlp_2^\vee$ and the cup product induced by $$ \bar{\mu}_M=\overline{H(\Delta^{!})^\vee}:H_*(N)\otimes_{H^{*}(M)}H_*(N)\rightarrow H_*(N). $$ Using Example~\ref{lem:important_degree} (ii) and Example~\ref{examples naturality cup product} (i), we finally obtain an isomorphism of algebras $\mathbb{E}^{*,*}_2\cong HH^*(H^*(M),\mathbb{H}_*(N))$ with respect to $m_2$ and the cup product induced by $$ \bar{\mu}_\mathbb{M}:\mathbb{H}_*(N)\otimes_{H^{*}(M)}\mathbb{H}_*(N)\rightarrow \mathbb{H}_*(N). $$ Note that $\bar{\mu}_\mathbb{M}$ coincides with $\overline{H(\Delta^{!})^\vee}$ only up to the multiplication by $(-1)^d$. Suppose further that $N$ is a Poincar\'e duality space of dimension $d$. Consider the two squares $$ \xymatrix@C30pt@R15pt{ H_{d-p}(N)\otimes H_{d-q}(N)\ar[r]^-\times & H_{2d-p-q}(N\times N)\ar[r]^-{H(\Delta^!)^\vee} & H_{d-p-q}(N)\\ H^p(N)\otimes H^q(N)\ar[r]^-\times\ar[u]^{-\cap [N] \otimes -\cap [N]} & H^{p+q}(N\times N)\ar[r]^-{H(\Delta)}\ar[u]_{- \cap [N] \times [N]} & H^{p+q}(N)\ar[u]_{-\cap [N]} } $$ The right square is diagram 2) of Proposition~\ref{shriek and Poincare duality}; it commutes by Corollary~\ref{shriek diagonal}. By~\cite[VI.5.4 Theorem]{Bredongeometry}, we see that $$(\alpha\times\beta)\cap ([N] \times[N])= (-1)^{\vert\alpha\vert\vert [N]\vert} (\alpha\cap[N])\times (\beta\cap[N])=\times\circ(-\cap [N])\otimes(-\cap [N])(\alpha\otimes \beta). $$ This means that the left square commutes. Therefore we have proved that the isomorphism of lower degree $d$, $\theta_{H^*(N)}:=- \cap [N]:H^*(N)\rightarrow H_{d-*}(N)$ is a morphism of algebras with respect to the cup product and the composite of $H(\Delta^!)^\vee$ and the homological cross product.
By naturality of the cup product on Hochschild cohomology defined (Remark~\ref{cup product of an algebra morphism}) by a morphism of algebras, this implies that the morphism $$ HH^*(1,- \cap [N]):HH^*(H^*(M),H^*(N))\buildrel{\cong}\over \rightarrow HH^*(H^*(M),H_*(N)) $$ is an isomorphism of algebras of lower degree $d$. We see that the composite $$ \zeta = u\circ HH^*(1,- \cap [N]):HH^*(H^*(M),H^*(N))\buildrel{\cong}\over \rightarrow {\mathbb E}_2^{*,*} $$ is an isomorphism of algebras; see Example~\ref{lem:important_degree} i) and ii). This completes the proof. \qed \begin{rem}\label{zeroelementsofEMSS} For the EMSS $\{E_r^{*,*}, d_r\}$ described in Theorem \ref{thm:EMSS}, we see that $E_r^{p,q}=0$ if $q< - 2p$ since $M$ is simply-connected. This implies that ${\mathbb E}_r^{p, q}=0$ if $q > -2p+d$. \end{rem} \begin{proof}[Proof of Corollary~\ref{rational iso of Felix-Thomas Poincare}] Denote by $\int_N:C^*(N)\buildrel{\simeq}\over\rightarrow A(N)$ a quasi-isomorphism of complexes which coincides in homology with the natural equivalence of algebras between the singular cochains and the polynomial differential forms~\cite[Corollary 10.10]{F-H-T}. Let $\psi_N:C_*(N)\hookrightarrow C^*(N)^\vee$ be the canonical inclusion of the complex $C_*(N)$ into its bidual defined in~\cite[7.1]{F-T-VP} or~\cite[Property 57 i)]{Menichi} by $\psi_N(c)(\varphi)=(-1)^{\vert c\vert\vert\varphi\vert}\varphi(c)$ for $c\in C_*(N)$ and $\varphi\in C^*(N)$.
Consider the diagram of complexes $$ \xymatrix@C30pt@R15pt{ C_*(N)\otimes C_*(N)\ar[rr]^{EZ}\ar[d]_{\psi_N\otimes\psi_N} && C_*(N\times N)\ar[d]^{\psi_{N\times N}}\\ C^*(N)^\vee\otimes C^*(N)^\vee\ar[r]\ar[d]_{\int_N^\vee\otimes\int_N^\vee} & (C^*(N)\otimes C^*(N))^\vee\ar[d]_{(\int_N\otimes\int_N)^\vee} & C^*(N\times N)^\vee\ar[l]_-{AW^{\vee\vee}} \ar[d]^{\int_{N\times N}^\vee}\\ A(N)^\vee\otimes A(N)^\vee\ar[r] &(A(N)\otimes A(N))^\vee & A(N\times N)^\vee\ar[l]^-{\varphi^\vee} } $$ where $EZ$ and $AW$ are the Eilenberg-Zilber and Alexander-Whitney maps. The bottom left square commutes by naturality of the horizontal maps. The top rectangle commutes in homology since by~\cite[VI.5.4 Theorem]{Bredongeometry}, $<\alpha\times\beta,a\times b>=(-1)^{\vert\beta\vert\vert a\vert} <\alpha,a><\beta,b>$ for all $\alpha$, $\beta\in H^*(N)$ and $a$, $b\in H_*(N)$. The bottom right square commutes in homology since using $H(\int_N)$, $H(\varphi)$ can be identified with the cohomological cross product ~\cite[Example 2 p. 142-3]{F-H-T} and~\cite[Chap. 5 Sec. 6 14 Corollary]{Spanier}. Let $\theta_N:A(N)\buildrel{\simeq}\over\rightarrow A(N)^\vee$ be a quasi-isomorphism of upper degree $-n$ which is right $A(N)$-linear and such that the image of the fundamental class $[N]$ by the composite $C_*(N)\buildrel{\psi_N}\over\rightarrow C^*(N)^{\vee}\buildrel{\int_N^\vee}\over\rightarrow A(N)^\vee$ is the class of $\theta_N(1)$. Let $\theta_{N\times N}:A(N\times N)\buildrel{\simeq}\over\rightarrow A(N\times N)^\vee$ be a quasi-isomorphism of upper degree $-2n$ which is right $A(N^2)$-linear and such that the image of the fundamental class $[N]\times[N]$ by the composite $C_*(N\times N)\buildrel{\psi_{N\times N}}\over\rightarrow C^*(N\times N)^{\vee}\buildrel{\int_{N\times N}^\vee}\over\rightarrow A(N\times N)^\vee$ is the class of $\theta_{N\times N}(1)$. Using the previous commutative diagram, the classes $\varphi^\vee\circ \theta_{N\times N}(1)$ and $\theta_N(1)\otimes\theta_N(1)$ are equal.
Consider the diagram in the derived category of $A(N)^{\otimes 2}$-modules $$ \xymatrix{ A(N)\otimes A(N)\ar[rr]^\varphi\ar[d]_{\theta_N\otimes\theta_N} && A(N\times N)\ar[r]^-{A(\Delta)}\ar[d]_{\theta_{N\times N}} & A(N)\ar[d]^{\theta_N}\\ A(N)^\vee\otimes A(N)^\vee\ar[r] & (A(N)\otimes A(N))^\vee & A(N\times N)^\vee\ar[r]_-{\Delta^{!\vee}_A}\ar[l]^-{\varphi^\vee} & A(N)^\vee } $$ By Corollary~\ref{shriek diagonal}, the right square commutes in the derived category of $A(N\times N)$-modules. The left rectangle commutes up to homotopy of $A(N)\otimes A(N)$-modules since the classes $\varphi^\vee\circ \theta_{N\times N}(1)$ and $\theta_N(1)\otimes\theta_N(1)$ are equal. Finally, since $\theta_N:A(N)\buildrel{\simeq}\over\rightarrow A(N)^\vee$ is a morphism of algebras of upper degree $-n$ in the derived category of $A(N)^{\otimes 2}$-modules, by Example~\ref{examples naturality cup product} ii), $HH^*(1,\theta_N):HH^*(A(M),A(N))\buildrel{\cong}\over\rightarrow HH^*(A(M),A(N)^\vee)$ is an isomorphism of algebras of upper degree $-n$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:freeloopExt}] Let $\varepsilon:\mathbb{B}\buildrel{\simeq}\over\rightarrow C^*(M)$ be a right $C^*(M^2)$-semifree resolution of $C^*(M)$. By 1) of Proposition~\ref{shriek and Poincare duality} and Corollary ~\ref{shriek diagonal}, $\Delta^!$ fits into the following homotopy commutative diagram of right $C^*(M^2)$-modules. $$ \xymatrix@C30pt@R15pt{ \mathbb{B}\ar[rr]^{\Delta^!}\ar[d]_{\varepsilon}^\simeq && C^{*+m}(M^2)\ar[d]^{\sigma^2 \cap \text{ --}}_\simeq\\ C^*(M) \ar[r]^{\sigma \cap \text{ --}}_\simeq & C_{m-*}(M)\ar[r]^{\Delta_*} & C_{m-*}(M^2) } $$ Here $(\sigma^2 \cap \text{ --})(x)=EZ(\sigma\otimes\sigma)\cap x$.
By applying the functor $\text{Tor}^*_{C^*(M^4)}(-,C^*(M^2))$, we obtain the commutative square $$ \xymatrix@C30pt@R15pt{ \text{Tor}^*_{C^*(M^4)}(C^*(M),C^*(M^2)) \ar[rr]^{\text{Tor}_1(\sigma \cap \text{ --},1)}\ar[d]_{\text{Tor}_1(\Delta^!,1)} &&\text{Tor}^*_{C^*(M^4)}(C_*(M),C^*(M^2)) \ar[d]^{\text{Tor}_1(\Delta_*,1)}\\ \text{Tor}^*_{C^*(M^4)}(C^*(M^2),C^*(M^2))\ar[rr]_{\text{Tor}_1(\sigma^2 \cap \text{ --},1)} &&\text{Tor}^*_{C^*(M^4)}(C_*(M^2),C^*(M^2)). } $$ Therefore using Theorem~\ref{thm:torsion_product}, $\Phi$ is an isomorphism of coalgebras with respect to the dual of the loop product and to the following composite $$ \xymatrix@C30pt@R15pt{ \text{Tor}^*_{C^*(M^{2})}(C_*M, C^*M) \ar[r]^-{\text{Tor}_{p_{13}^*}(1, 1)} & \text{Tor}^*_{C^*(M^{3})}(C_*M, C^*M)\\ \text{Tor}^{*}_{C^*(M^{4})}(C_*(M^{2}), C^*(M^2)) & \text{Tor}^*_{C^*(M^{4})}(C_*(M), C^*(M^2)) \ar[u]_{\text{Tor}_{(1\times \Delta \times 1)^*}(1, {\Delta}^*)}^{\cong} \ar[l]^{\text{Tor}_{1}(\Delta_*, 1)}\\ \text{Tor}^*_{C^*(M^{4})}(C_*(M)^{\otimes 2},C^*(M^2)) \ar[r]_-{\text{Tor}_{EZ^\vee}(1,EZ^\vee)}^-\cong \ar[u]_-{\text{Tor}_{1}(EZ,1)}^-\cong & \text{Tor}^*_{(C_*(M^{2})^{\otimes 2})^\vee}(C_*(M)^{\otimes 2}, (C_*(M)^{\otimes 2})^\vee)\\ \text{Tor}^*_{C^*(M^{2})}(C_*(M), C^*(M))^{\otimes 2}\ar[r]^-{\cong}_-\top & \text{Tor}^*_{C^*(M^{2})^{\otimes 2}}(C_*(M)^{\otimes 2}, C^*(M)^{\otimes 2}). \ar[u]^{\text{Tor}_{\gamma}(1,\gamma)}_-\cong } $$ Dualizing and using the natural isomorphism $$ \text{Ext}^*_B(Q,P^\vee)\buildrel{\cong}\over\rightarrow \text{Tor}^*_B(P,Q)^\vee $$ for any differential graded algebra $B$, right $B$-module $P$ and left $B$-module $Q$, we see that the dual of $\Phi$ is an isomorphism of algebras with respect to the loop product and to the multiplication defined in Theorem~\ref{thm:freeloopExt}.
\end{proof} \section{Associativity of the loop product on a Poincar\'e duality space}\label{associativity loop product} In this section, by applying the same argument as in the proof of \cite[Theorem 2.2]{T}, we shall prove the associativity of the loop products. \noindent {\it Proof of Proposition \ref{prop:loop_homology}}. We prove the proposition in the case where $N=M$. The same argument as in the proof permits us to conclude that the loop homology ${\mathbb H}_*(L_NM)$ is associative with respect to the relative loop products. Let $M$ be a simply-connected Gorenstein space of dimension $d$. In order to prove the associativity of the dual to $Dlp$, we first consider the diagram $$ \xymatrix@C25pt@R15pt{ LM\times LM & (LM\times_M LM)\times LM \ar[l]_(0.6){Comp\times 1} \ar[r]^{q\times 1} & LM \times LM \times LM \\ LM\times_M LM \ar[u]^q \ar[d]_{Comp} & LM\times_M LM \times_M LM \ar[l]_(0.6){Comp\times_M 1} \ar[r]^{q\times_M1} \ar[u]_{1\times_ Mq} \ar[d]^{1\times_M Comp} & LM \times (LM \times_M LM) \ar[u]_{1\times q} \ar[d]^{1\times Comp}\\ LM & LM\times_M LM \ar[l]^{Comp} \ar[r]_q & LM\times LM, } $$ for which the lower left-hand square is homotopy commutative and the other three squares are strictly commutative. Consider the corresponding diagram $$ \xymatrix@C20pt@R15pt{ H^*(LM\times LM)\ar[r]^(0.4){(Comp\times 1)^*} & H^*((LM\times_M LM)\times LM) \ar[r]^{\varepsilon' \alpha H(q^!)\otimes 1} &H^*(LM \times LM \times LM) \\ H^*(LM\times_M LM) \ar[u]^{H(q^!)} \ar[r]_(0.4){(Comp\times_M 1)^*} & H^*(LM\times_M LM \times_M LM) \ar[r]^{H((q\times_M1)^!)} \ar[u]_{H((1\times_ Mq)^!)} & H^*(LM \times (LM \times_M LM)) \ar[u]_{\varepsilon\alpha'1\otimes H(q^!)}\\ H^*(LM) \ar[u]^{Comp^*} \ar[r]_{Comp^*} & H^*(LM\times_M LM) \ar[r]_{H(q^!)} \ar[u]^{(1\times_M Comp)^*} & H^*(LM\times LM)\ar[u]_{(1\times Comp)^*}. } $$ The lower left square commutes obviously. By Theorem~\ref{naturalityshriek}, the upper left square and the lower right square are commutative.
We now show that the upper right square commutes. By Theorem~\ref{produitshriek}, we see that $H((q\times 1)^!)=\alpha H(q^!)\otimes 1$ and $H((1\times q)^!)=\alpha'1\otimes H(q^!)$ where $\alpha$ and $\alpha'\in {\mathbb K}^*$. By virtue of \cite[Theorem C]{F-T}, in $\text{D}(\text{Mod-}C^*(LM^{\times 3}))$, $$\varepsilon'(\Delta \times 1)^!\circ \Delta^! =\varepsilon(1\times \Delta)^!\circ \Delta^!$$ where $(\varepsilon,\varepsilon')\neq (0,0)\in {\mathbb K}\times {\mathbb K}$. Therefore the uniqueness of the shriek map implies that $$ \varepsilon'(q\times 1)^!\circ (1 \times_M q)^! =\varepsilon(1\times q)^!\circ (q\times_M 1)^! $$ in $\text{D}(\text{Mod-}C^*(LM^{\times 3}))$; see \cite[Theorem 13]{F-T}. So finally, we have proved that $$\varepsilon'\alpha(Dlp \otimes 1)\circ Dlp=\varepsilon\alpha'(1\otimes Dlp)\circ Dlp.$$ Suppose that $M$ is a Poincar\'e duality space of dimension $d$. By part (2) of Theorem~\ref{produitshriek}, $\alpha=1$ and $\alpha'=(-1)^d$. Since $\varepsilon (\omega_M\times \omega_M\times\omega_M)=\varepsilon H((\Delta \times 1)^!)\circ H(\Delta^!) (\omega_M)= \varepsilon' H((1\times\Delta)^!)\circ H(\Delta^!) (\omega_M)=\varepsilon' (\omega_M\times \omega_M\times\omega_M)$, we see that $\varepsilon=\varepsilon'$. Therefore $(Dlp \otimes 1)\circ Dlp=(-1)^d(1\otimes Dlp)\circ Dlp$. Thus Lemma \ref{lem:important_degree}(i) together with Lemma~\ref{lem:dualsign} (i) and (ii) yields that the product $m : {\mathbb H}_*(LM)\otimes {\mathbb H}_*(LM) \to {\mathbb H}_*(LM)$ is associative. We prove that the loop product is graded commutative.
Consider the commutative diagram $$ \xymatrix@C10pt@R5pt{ & LM \times_M LM \ar[ld]_(0.5){T} \ar@{->}'[d][dd] \ar[rr]^{q} & & LM\times LM \ar[ld]^{T} \ar[dd]^{p\times p} \\ LM \times_M LM\ar[rr]^(0.6){q} \ar[dd]_p & & LM \times LM \ar[dd]^(0.3){p\times p} & \\ & M \ar@{->}'[r]^(0.6){\Delta}[rr] \ar@{=}[ld]& & M \times M \ar[ld]_{T}\\ M \ar[rr]_\Delta & & M\times M. & } $$ By Theorem \ref{thm:newThm3} below, $H(q^!)\circ T^*=\varepsilon T^*\circ H(q^!)$. Since $Comp\circ T$ is homotopic to $Comp$, $Dlp=\varepsilon T^*\circ Dlp$. If $M$ is a Poincar\'e duality space with orientation class $\omega_M\in H^d(M)$ then $T^*(\omega_M\otimes \omega_M) =(-1)^{d^2} (\omega_M\otimes \omega_M)$. Therefore by part a) of Remark~\ref{remafternewthm3}, $\varepsilon=(-1)^d$. By Example \ref{lem:important_degree}(ii) together with Lemma~\ref{lem:dualsign}(i), we see that the product $m$ is graded commutative. This completes the proof. \qed \begin{rem} The commutativity of the loop homology ${\mathbb H}_*(L_NM)$ does not follow from the proof of Proposition \ref{prop:loop_homology}. In general $Comp\circ T$ is not homotopic to $Comp$ in $L_NM$. As mentioned in the Introduction, the relative loop product is not necessarily commutative; see \cite{Naito}. \end{rem} \section{Appendix: Properties of shriek maps} In this section, we extend the definitions and properties of shriek maps on Gorenstein spaces given in~\cite{F-T}. These properties are used in section~\ref{associativity loop product}.
\begin{defn} A pull-back diagram, $$ \xymatrix@C15pt@R15pt{ X \ar[d]_{q} \ar[rr]^g &&E \ar[d]^p\\ N\ar[rr]^f &&M} $$ satisfies Hypothesis (H) (compare with the hypothesis (H) described in \cite[page 418]{F-T}) if $p:E\twoheadrightarrow M$ is a fibration, for any $n\in\mathbb{N}$, $H^n(E)$ is of finite dimension and \centerline{$ \hspace{5mm}\left\{ \begin{array}{l} N \mbox{ is an oriented Poincar\'e duality space of dimension } n\,,\\ M \mbox{ is a 1-connected oriented Poincar\'e duality space of dimension } m\,, \end{array}\right. $} \noindent or $f:B^r\rightarrow B^t$ is a product of diagonal maps $B\rightarrow B^{n_i}$, of identity maps of $B$ and of the inclusion $\eta:{*}\rightarrow B$ for a simply-connected ${\mathbb K}$-Gorenstein space $B$. \end{defn} Let $n$ be the dimension of $N$ or $r$ times the dimension of $B$. Let $m$ be the dimension of $M$ or $t$ times the dimension of $B$. It follows from \cite[Lemma 1 and Corollary p. 448]{F-T} that $H^q(N)\cong \text{Ext}_{C^*(M)}^{q+m-n}(C^*(N),C^*(M))$. By definition, a shriek map $f^!$ for $f$ is a generator of $\text{Ext}_{C^*(M)}^{\leq m-n}(C^*(N),C^*(M))$. Moreover, there exists a unique element $g^!\in \text{Ext}_{C^*(E)}^{m-n}(C^*(X),C^*(E))$ such that $g^!\circ C^*(q)= C^*(p)\circ f^!$ in the derived category of $C^*(M)$-modules; see Theorem \ref{thm:main_F-T}. Here we have extended the definitions of shriek maps due to Felix and Thomas in order to include the following example and the case $(\Delta\times 1)^!$ that we use in the proof of Proposition \ref{prop:loop_homology}. \begin{ex}\label{intersection fibration}(Compare with~\cite[p. 419-420]{F-T} where $M$ is a Poincar\'e duality space) Let $F\buildrel{\widetilde{\eta}}\over\rightarrow E\buildrel{p}\over\rightarrow M$ be a fibration over a simply-connected Gorenstein space $M$ with generator $\eta^!=\omega_M\in\text{Ext}^m_{C^*(M)}({\mathbb K},C^*(M))$.
By definition, $H(\widetilde{\eta}^!):H^*(F)\rightarrow H^{*+m}(E)$ is the dual to the {\it intersection morphism}. Let $G$ be a connected Lie group. Then its classifying space $BG$ is an example of a Gorenstein space of negative dimension. Let $F$ be a $G$-space. It is not difficult to see that our intersection morphism of $F\rightarrow F\times_G EG\rightarrow BG$ coincides with the integration along the fibre of the principal $G$-fibration $G\rightarrow F\times EG\rightarrow F\times_G EG$ for an appropriate choice of the generator $\eta^!$; see the proof of \cite[Theorem 6]{F-T}. Suppose now that $F\buildrel{\widetilde{\eta}}\over\rightarrow E\buildrel{p}\over\rightarrow M$ is a monoidal fibration. With the properties of shriek maps given in this section, generalizing~\cite[Theorem 10]{F-T} (see also~\cite[Proposition 10]{G-S}) in the Gorenstein case, one can show that the intersection morphism $H(\widetilde{\eta}_!):H_{*+m}(E)\rightarrow H_*(F)$ is multiplicative if in the derived category of $C^*(M\times M)$-modules $$ \Delta^!\circ \omega_M=\omega_M\times \omega_M. \eqnlabel{add-1} $$ The generator $\Delta^!\in\text{Ext}^m_{C^*(M^2)}(C^*(M),C^*(M^2))$ is defined up to multiplication by a scalar. If we could prove that $\Delta^!\circ \omega_M$ is always non-zero, we would have a unique choice for $\Delta^!$ satisfying (11.1). Then we would have solved the ``up to a constant problem'' mentioned in~\cite[Q1 p. 423]{F-T}. \end{ex} We now describe a generalized version of \cite[Theorem 3]{F-T}. We consider the following commutative diagram $$ \xymatrix@C15pt@R5pt{ & X \ar[ld]_{v} \ar@{->}'[d][dd]_(0.4){q} \ar[rr]^{g} & & E \ar[ld]^(0.4){k} \ar[dd]^{p} \\ X' \ar[rr]^(0.6){g'} \ar[dd]_{q'} & & E' \ar[dd]^(0.4){p'}& \\ & N\ar@{->}'[r]_(0.7){f}[rr] \ar@{->}[ld]_{u} & & M \ar[ld]^{h} \\ N' \ar[rr]_{f'} & & M' & } $$ in which the back and the front squares satisfy Hypothesis (H).
\begin{thm} \label{thm:newThm3} {\em (}Compare with \cite[Theorem 3]{F-T}{\em )} With the above notations, suppose that $m'-n'=m-n$. {\em (1)} If $h$ is a homotopy equivalence then in the derived category of $C^*(M')$-modules, $f^!\circ C^*(u)=\varepsilon C^*(h)\circ f'^!$, where $\varepsilon\in {\mathbb K}$. {\em (2)} If in the derived category of $C^*(M')$-modules, $f^!\circ C^*(u)=\varepsilon C^*(h)\circ f'^!$ then in the derived category of $C^*(E')$-modules, $g^!\circ C^*(v)=\varepsilon C^*(k)\circ g'^!$. In particular, $$ H^*(g^!)\circ H^*(v) = \varepsilon H^*(k)\circ H^*({g'}^!) . $$ \end{thm} \begin{rem}\label{remafternewthm3} a) In (1), if $N'$ and $M'$ are oriented Poincar\'e duality spaces, the constant $\varepsilon$ is given by $$ H^n(f^!)\circ H^n(u)(\omega_{N'}) = \varepsilon H^{m'}(h)\circ H^{n'}({f'}^!)(\omega_{N'}) . $$ In fact, this is extracted from the uniqueness of the shriek map described in \cite[Lemma 1]{F-T}. b) In \cite[Theorem 3]{F-T}, the hypothesis that $v$ and $k$ are homotopy equivalences is not needed. But in \cite[Theorem 3]{F-T}, the homotopy equivalences $u$ and $h$ should be orientation preserving in order to deduce $\varepsilon =1$. c) Suppose that the bottom square is the pull-back along a smooth embedding $f'$ of compact oriented manifolds and a smooth map $h$ transverse to $N'$. Then by~\cite[Proposition 4.2]{Me}, $f^!\circ C^*(u)= C^*(h)\circ f'^!$ and $ H^*(g^!)\circ H^*(v) = H^*(k)\circ H^*({g'}^!)$. \end{rem} \begin{proof}[Proof of Theorem \ref{thm:newThm3}] The proofs of (1) and (2) follow from the proof of \cite[Theorem 3]{F-T}. But we review this proof in order to explain that Theorem~\ref{thm:newThm3} is valid in the Gorenstein case and that we do not need to assume, as in~\cite[Theorem 3]{F-T}, that $u$, $k$ and $v$ are homotopy equivalences.
(1) Since $h$ is a homotopy equivalence, \begin{eqnarray*} \text{Ext}^*_{C^*(M')}(C^*(N'),C^*(h)): &&\\ && \hspace{-3cm} \text{Ext}^*_{C^*(M')}(C^*(N'),C^*(M')) \rightarrow \text{Ext}^*_{C^*(M')}(C^*(N'),C^*(M)) \end{eqnarray*} is an isomorphism. By definition~\cite[Theorem 1 and p. 449]{F-T}, the shriek map $f'^!$ is a generator of $\text{Ext}^{m'-n'}_{C^*(M')}(C^*(N'),C^*(M'))\cong {\mathbb K}$. Then $C^*(h)\circ f'^!$ is a generator of $\text{Ext}^{m'-n'}_{C^*(M')}(C^*(N'),C^*(M))$. So since $f^!\circ C^*(u)$ is in $\text{Ext}^{m-n}_{C^*(M')}(C^*(N'),C^*(M))$, we have (1). (2) Let $P$ be any $C^*(E')$-module. Since $X'$ is a pull-back, a straightforward generalization of~\cite[Theorem 2]{F-T} shows that $$ \text{Ext}^*_{C^*(p')}(C^*(q'),P): \text{Ext}^*_{C^*(E')}(C^*(X'),P)\rightarrow \text{Ext}^*_{C^*(M')}(C^*(N'),P) $$ is an isomorphism. Take $P:=C^*(E)$. Consider the following cube in the derived category of $C^*(M')$-modules. $$ \xymatrix@C15pt@R8pt{ & C^*(X) \ar[rr]^{g^!} & & C^*(E) \\ C^*(X') \ar[ru]^{C^*(v)} \ar[rr]^(0.6){g'^!} & & C^*(E') \ar[ur]_(0.4){C^*(k)} & \\ & C^*(N) \ar@{->}'[u][uu]^(0.4){C^*(q)} \ar@{->}'[r]_(0.7){f^!}[rr] & & C^*(M). \ar[uu]_{C^*(p)} \\ C^*(N') \ar[rr]_{f'^!} \ar[uu]^{C^*(q')} \ar@{->}[ur]^{C^*(u)} & & C^*(M') \ar[ur]_{C^*(h)} \ar[uu]_(0.4){C^*(p')}& } $$ Since in $\text{Ext}^*_{C^*(M')}(C^*(N'),C^*(E))$, the elements $g^!\circ C^*(v)\circ C^*(q')$ and $\varepsilon C^*(k)\circ g'^!\circ C^*(q')$ are equal, the assertion (2) follows.
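For completeness, we spell out why these two elements coincide; only the commutativity of the cube, the defining properties $g^!\circ C^*(q)= C^*(p)\circ f^!$ and $g'^!\circ C^*(q')= C^*(p')\circ f'^!$ of the shriek maps, and the hypothesis of (2) are used:
$$
\begin{aligned}
g^!\circ C^*(v)\circ C^*(q') &= g^!\circ C^*(q)\circ C^*(u)
= C^*(p)\circ f^!\circ C^*(u)\\
&=\varepsilon\, C^*(p)\circ C^*(h)\circ f'^!
=\varepsilon\, C^*(k)\circ C^*(p')\circ f'^!
=\varepsilon\, C^*(k)\circ g'^!\circ C^*(q').
\end{aligned}
$$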
\end{proof} When $u$ and $h$ are the identity maps, Theorem~\ref{thm:newThm3} gives~\cite[Theorem 4]{F-T} (compare with~\cite[Lemma 4]{G-S}) and the following variant for Gorenstein spaces: \begin{thm}{\em (}Naturality of shriek maps with respect to pull-backs{\em )}\label{naturalityshriek} Consider the two pull-back squares $$ \xymatrix@C20pt@R10pt{ X\ar[r]^{g}\ar[d]_v & E\ar[d]^{k}\\ X'\ar[r]^{g'}\ar[d]_{q'} & E'\ar[d]^{p'}\\ B^r\ar[r]_{\Delta} & B^t } $$ where $\Delta:B^r\rightarrow B^t$ is the product of diagonal maps of a simply-connected ${\mathbb K}$-Gorenstein space $B$ and $p'$ and $p'\circ k$ are two fibrations. Then in the derived category of $C^*(E')$-modules, $g^!\circ C^*(v)= C^*(k)\circ g'^!.$ \end{thm} \begin{thm}{\em (}Products of shriek maps{\em )}\label{produitshriek} Let \ \ \ \ \ \ \ $ \xymatrix@C15pt@R15pt{ X \ar[d]_{q} \ar[rr]^g &&E \ar[d]^p&\ar@{}[d] & \\ N\ar[rr]^f && M&} $ and \ \ \ \ \ \ \ \ \ \ \ \ $\xymatrix@C15pt@R15pt{ X' \ar[d]_{q'} \ar[rr]^{g'} &&E' \ar[d]^{p'}\\ N'\ar[rr]^{f'} &&M'} $ \noindent be two pull-back diagrams satisfying Hypothesis (H). Let $$EZ^\vee:C^*(M\times M')\buildrel{\simeq}\over\rightarrow \left(C_*(M)\otimes C_*(M')\right)^\vee$$ be the quasi-isomorphism of algebras dual to the Eilenberg-Zilber morphism. Let $$\Theta: C^*(M)\otimes C^*(M')\buildrel{\simeq}\over\rightarrow\left(C_*(M)\otimes C_*(M')\right)^\vee$$ be the quasi-isomorphism of algebras sending the tensor product of cochains $\varphi\otimes \varphi'$ to the form denoted again $\varphi\otimes \varphi'$ defined by $(\varphi\otimes \varphi')(a\otimes b)=(-1)^{\vert\varphi'\vert\vert a\vert}\varphi(a)\varphi'(b)$.
Then {\em (1)} there exists $h\in\text{\em Ext}^{m+m'-n-n'}_{\left(C_*(M)\otimes C_*(M')\right)^\vee}(\left(C_*(N)\otimes C_*(N')\right)^\vee,\left(C_*(M)\otimes C_*(M')\right)^\vee )$ such that in the derived category of $C^*(M\times M')$-modules $$ \xymatrix@C15pt@R15pt{ C^*(N\times N')\ar[r]^ {(f\times f')^!}\ar[d]_{EZ^\vee} & C^*(M\times M')\ar[d]^{EZ^\vee}\\ \left(C_*(N)\otimes C_*(N')\right)^\vee\ar[r]_{h} &\left(C_*(M)\otimes C_*(M')\right)^\vee } $$ and in the derived category of $C^*(M)\otimes C^*(M')$-modules $$ \xymatrix@C15pt@R15pt{ \left(C_*(N)\otimes C_*(N')\right)^\vee\ar[r]^{h} &\left(C_*(M)\otimes C_*(M')\right)^\vee\\ C^*(N)\otimes C^*(N')\ar[r]_{\varepsilon f^!\otimes f'^!}\ar[u]^{\Theta} & C^*(M)\otimes C^*(M')\ar[u]_{\Theta} } $$ are commutative squares for some $\varepsilon\in {\mathbb K}^*$. {\em (2)} Suppose that $N$, $N'$, $M$ and $M'$ are Poincar\'e duality spaces oriented by $\omega_N\in H^n(N)$, $\omega_{N'}\in H^{n'}(N')$, $\omega_M\in H^m(M)$ and $\omega_{M'}\in H^{m'}(M')$. If we orient $N\times N'$ by $\omega_{N}\times\omega_{N'}$ and $M\times M'$ by $\omega_{M}\times\omega_{M'}$ then $\varepsilon=(-1)^{(m'-n')n}$.
{\em (3)} There exists $k\in\text{\em Ext}^{m+m'-n-n'}_{\left(C_*(E)\otimes C_*(E')\right)^\vee}(\left(C_*(X)\otimes C_*(X')\right)^\vee,\left(C_*(E)\otimes C_*(E')\right)^\vee )$ such that in the derived category of $C^*(E\times E')$-modules $$ \xymatrix@C15pt@R15pt{ C^*(X\times X')\ar[r]^ {(g\times g')^!}\ar[d]_{EZ^\vee} & C^*(E\times E')\ar[d]^{EZ^\vee}\\ \left(C_*(X)\otimes C_*(X')\right)^\vee\ar[r]_{k} &\left(C_*(E)\otimes C_*(E')\right)^\vee } $$ and in the derived category of $C^*(E)\otimes C^*(E')$-modules $$ \xymatrix@C15pt@R15pt{ \left(C_*(X)\otimes C_*(X')\right)^\vee\ar[r]^{k} &\left(C_*(E)\otimes C_*(E')\right)^\vee\\ C^*(X)\otimes C^*(X')\ar[r]_{\varepsilon g^!\otimes g'^!}\ar[u]^{\Theta} & C^*(E)\otimes C^*(E')\ar[u]_{\Theta} } $$ are commutative squares. \end{thm} \begin{rem} Here $g^!\otimes g'^!$ denotes the $C^*(E)\otimes C^*(E')$-linear map defined by $$ (g^!\otimes g'^!)(a\otimes b)=(-1)^{\vert g'^!\vert\vert a\vert}g^!(a)\otimes g'^!(b). $$ Therefore, Theorem~\ref{produitshriek} (2) implies that $$ H^*((g\times g')^!)(a\times b)=(-1)^{(m'-n')(n+\vert a\vert)}H^*(g^!)(a)\times H^*(g'^!)(b). $$ The signs of~\cite[VI.14.3]{Bredongeometry} are different from those mentioned here. \end{rem} \begin{proof}[Proof of Theorem \ref{produitshriek}] (1) By definition~\cite[Theorem 1 and p. 449]{F-T}, $(f\times f')^!$ is a generator of $\text{Ext}^{\leq m+m'-n-n'}_{C^*(M \times M')}(C^*(N\times N'), C^*(M \times M'))$. Let $h$ be the image of $(f\times f')^!$ by the composite of isomorphisms $$\xymatrix@C15pt@R17pt{ \text{Ext}^{*}_{C^*(M \times M')}(C^*(N\times N'), C^*(M \times M'))\ar[d]_{\text{Ext}^{*}_{Id}(Id, EZ^\vee)}^\cong\\ \text{Ext}^{*}_{C^*(M \times M')}(C^*(N\times N'), \left(C_*(M)\otimes C_*(M')\right)^\vee)\\ \text{Ext}^{*}_{\left(C_*(M)\otimes C_*(M')\right)^\vee}(\left(C_*(N)\otimes C_*(N')\right)^\vee,\left(C_*(M)\otimes C_*(M')\right)^\vee ).
\ar[u]^{\text{Ext}^{*}_{EZ^\vee}(EZ^\vee,Id)}_\cong } $$ Since $f^!\otimes f'^!$ is a generator of $$\text{Ext}^{\leq m+m'-n-n'}_{C^*(M)\otimes C^*(M')}(C^*(N)\otimes C^*(N'),C^*(M)\otimes C^*(M'))$$ $$\cong \text{Ext}^{\leq m-n}_{C^*(M)}(C^*(N),C^*(M))\otimes \text{Ext}^{\leq m'-n'}_{C^*(M')}(C^*(N'),C^*(M')),$$ the image of $h$ by the composite of isomorphisms $$\xymatrix@C15pt@R17pt{ \text{Ext}^{*}_{\left(C_*(M)\otimes C_*(M')\right)^\vee}(\left(C_*(N)\otimes C_*(N')\right)^\vee,\left(C_*(M)\otimes C_*(M')\right)^\vee ) \ar[d]^{\text{Ext}^{*}_{\Theta}(\Theta,Id)}_\cong\\ \text{Ext}^{*}_{C^*(M)\otimes C^*(M')}(C^*(N)\otimes C^*(N'), \left(C_*(M)\otimes C_*(M')\right)^\vee)\\ \text{Ext}^{*}_{C^*(M)\otimes C^*(M')}(C^*(N)\otimes C^*(N'),C^*(M)\otimes C^*(M')). \ar[u]_{\text{Ext}^{*}_{Id}(Id, \Theta)}^\cong } $$ is the element $\varepsilon(f^!\otimes f'^!)$, where $\varepsilon$ is a non-zero constant. (2) In cohomology, (1) gives a commutative diagram $$ \xymatrix@C15pt@R25pt{ H^*(N\times N')\ar[r]^{H^*((f\times f')^!)} &H^*(M\times M')\\ H^*(N)\otimes H^*(N') \ar[u]^{\times}\ar[r]_{\varepsilon H^*(f^!)\otimes H^*(f'^!)} &H^*(M)\otimes H^*(M'), \ar[u]_{\times} } $$ where $\times$ is the cross product. Therefore \begin{multline*} \omega_M\times\omega_{M'}=H^*((f\times f')^!)(\omega_N\times\omega_{N'})=\\ \varepsilon(-1)^{(m'-n')n}H^*(f^!)(\omega_{N})\times H^*(f'^!)(\omega_{N'})=\varepsilon(-1)^{(m'-n')n}\omega_M\times\omega_{M'}.
\end{multline*} \noindent (3) Consider the following cube in the derived category of $C^*(M\times M')$-modules $$ {\footnotesize \xymatrix@C5pt@R15pt{ & \left(C_*(X)\otimes C_*(X')\right)^\vee \ar[rr]^{k} & & \left(C_*(E)\otimes C_*(E')\right)^\vee \\ C^*(X\times X') \ar[ru]^{EZ^\vee} \ar[rr]_(0.6){(g\times g')^!} & & C^*(E\times E') \ar[ur]_(0.4){EZ^\vee} & \\ & \left(C_*(N)\otimes C_*(N')\right)^\vee \ar@{->}'[u][uu]_(0.4){\left(C_*(q)\otimes C_*(q')\right)^\vee} \ar@{->}'[r]_(0.7){h}[rr] & & \left(C_*(M)\otimes C_*(M')\right)^\vee \ar[uu]_{\left(C_*(p)\otimes C_*(p')\right)^\vee} \\ C^*(N\times N') \ar[rr]_{(f\times f')^!} \ar[uu]^{C^*(q\times q')} \ar@{->}[ur]^{EZ^\vee} & & C^*(M\times M') \ar[ur]_{EZ^\vee} \ar[uu]_(0.4){C^*(p\times p')}& } } $$ with $k$ defined below. Since $${\text{Ext}^{*}_{C^*(p \times p')}(C^*(q\times q'),C^*(E \times E'))}:$$ $$ \text{Ext}^{*}_{C^*(E \times E')}(C^*(X\times X'), C^*(E \times E'))\rightarrow \text{Ext}^{*}_{C^*(M \times M')}(C^*(N\times N'), C^*(E \times E')) $$ is an isomorphism, it follows that the maps $$ \text{Ext}^{*}_{C^*(p \times p')}(C^*(q\times q'), \left(C_*(E)\otimes C_*(E')\right)^\vee):$$ \begin{eqnarray*} \text{Ext}^{*}_{C^*(E \times E')}(C^*(X\times X'), \left(C_*(E)\otimes C_*(E')\right)^\vee) & \\ & \hspace{-3cm}\rightarrow \text{Ext}^{*}_{C^*(M \times M')}(C^*(N\times N'), \left(C_*(E)\otimes C_*(E')\right)^\vee) \end{eqnarray*} and $$\text{Ext}^{*}_{\left(C_*(p)\otimes C_*(p')\right)^\vee}(\left(C_*(q)\otimes C_*(q')\right)^\vee,\left(C_*(E)\otimes C_*(E')\right)^\vee ):$$ \begin{align*} \text{Ext}^{*}_{\left(C_*(E)\otimes C_*(E')\right)^\vee}(\left(C_*(X)\otimes C_*(X')\right)^\vee, \left(C_*(E)\otimes C_*(E')\right)^\vee ) \\ \rightarrow \text{Ext}^{*}_{\left(C_*(M)\otimes C_*(M')\right)^\vee}(\left(C_*(N)\otimes C_*(N')\right)^\vee, \left(C_*(E)\otimes C_*(E')\right)^\vee ) \end{align*} are also isomorphisms.
Let $k$ be the image of $\left(C_*(p)\otimes C_*(p')\right)^\vee\circ h$ by the inverse of the isomorphism $$\text{Ext}^{*}_{\left(C_*(p)\otimes C_*(p')\right)^\vee}(\left(C_*(q)\otimes C_*(q')\right)^\vee,\left(C_*(E)\otimes C_*(E')\right)^\vee ).$$ Since $EZ^\vee\circ (g\times g')^!$ and $k\circ EZ^\vee$ have the same image by $$ \text{Ext}^{*}_{C^*(p \times p')}(C^*(q\times q'), \left(C_*(E)\otimes C_*(E')\right)^\vee),$$ they coincide and hence we have proved the commutativity of the first square in (3). For the second square in (3), the proof is the same, using this time the following cube in the derived category of $C^*(M)\otimes C^*(M')$-modules $$ {\footnotesize \xymatrix@C10pt@R15pt{ & \left(C_*(X)\otimes C_*(X')\right)^\vee \ar[rr]^{k} & & \left(C_*(E)\otimes C_*(E')\right)^\vee \\ C^*(X)\otimes C^*(X') \ar[ru]^{\Theta} \ar[rr]_(0.6){\varepsilon g^!\otimes g'^!} & & C^*(E)\otimes C^*(E') \ar[ur]_-{\Theta} & \\ & \left(C_*(N)\otimes C_*(N')\right)^\vee \ar@{->}'[u][uu]_(0.4){\left(C_*(q)\otimes C_*(q')\right)^\vee} \ar@{->}'[r]_(0.7){h}[rr] & & \left(C_*(M)\otimes C_*(M')\right)^\vee \ar[uu]_{\left(C_*(p)\otimes C_*(p')\right)^\vee} \\ C^*(N)\otimes C^*(N') \ar[rr]_{\varepsilon f^!\otimes f'^!} \ar[uu]_(0.7){C^*(q)\otimes C^*(q')} \ar@{->}[ur]^{\Theta} & & C^*(M)\otimes C^*(M') \ar[ur]_{\Theta} \ar[uu]_(0.4){C^*(p)\otimes C^*(p')}& } } $$ \end{proof} \section{Appendix: shriek maps and Poincar\'e duality} In this section, we compare precisely the shriek map defined by Felix and Thomas in~\cite[Theorem A]{F-T} with various shriek maps defined by Poincar\'e duality. We first comment on the cap products used in the body of the present paper. Let $\text{--} \cap \sigma : C^{*}(M)\to C_{m-*}(M)$ be the cap product given in~\cite[VI.5]{Bredongeometry}, where $\sigma \in C_{m}(M)$.
Observe that $\text{--}\cap \sigma$ is defined by $$(\text{--} \cap \sigma)(x) = (-1)^{m\vert x\vert} x \cap \sigma.$$ By \cite[VI.5.1 Proposition (iii)]{Bredongeometry}, the map $-\cap \sigma $ is a morphism of left $C^{*}(M)$-modules. The sign $(-1)^{m\vert x\vert}$ makes $\text{--} \cap \sigma$ a left $C^*(M)$-linear map in the sense that $f(xm)=(-1)^{\vert f \vert \vert x\vert} xf(m)$, as quoted in \cite[p. 44]{F-H-T}. We denote by $\sigma \cap \text{ --} : C^{*}(M)\to C_{m-*}(M)$ the cap product described in \cite[\S 7]{Menichi}. The map $\sigma \cap -$ is a right $C^{*}(M)$-module map (\cite[Proposition 2.1.1]{Sweedler}). Moreover, we see that $x\cap \sigma =(-1)^{m|x|}\sigma \cap x $ in homology for any $x\in H^{*}(M)$. \begin{prop}\label{shriek and Poincare duality} Let $N$ and $M$ be two oriented Poincar\'e duality spaces of dimensions $n$ and $m$. Let $[N]\in H_n(N)$, $[M]\in H_m(M)$ and $\omega_N\in H^n(N)$, $\omega_M\in H^m(M)$ be such that the Kronecker products $<\omega_M,[M]>=1=<\omega_N,[N]>$. Let $f:N\rightarrow M$ be a continuous map. Let $f^!$ be the unique element of $\text{\em Ext}^{m-n}_{C^*(M)}(C^*(N),C^*(M))$ such that $H(f^!)(\omega_N)=\omega_M$. Then 1) The diagram in the derived category of right $C^*(M)$-modules $$ \xymatrix@C20pt@R20pt{ C^*(N)\ar[r]^{f^!}\ar[d]_{[N]\cap -} & C^{*+m-n}(M)\ar[d]^{[M]\cap -}\\ C_{n-*}(N)\ar[r]_{C_*(f)} & C_{n-*}(M) } $$ commutes up to the sign $(-1)^{m+n}$. 2) Let $\psi_N:C_*(N)\hookrightarrow C^*(N)^\vee$ be the canonical inclusion of the complex $C_*(N)$ into its bidual. The diagram of left $H^*(M)$-modules $$ \xymatrix@C20pt@R15pt{ H^*(M)^\vee\ar[r]^{H^*(f^!)^\vee} &H^{*+n-m}(N)^\vee\\ H_*(M)\ar[u]^{H(\psi_M)} & H_{*+n-m}(N)\ar[u]_{H(\psi_N)}\\ H^{m-*}(M)\ar[r]_{H^{*}(f)}\ar[u]^{-\cap [M]} &H^{m-*}(N)\ar[u]_{-\cap [N]} } $$ commutes up to the sign $(-1)^{n(m-n)}$.
3) Let $\int_N:H(A(N))\buildrel{\cong}\over\rightarrow H(C^*(N))$ be the isomorphism of algebras induced by the natural equivalence between the rational singular cochains and the polynomial differential forms~\cite[Corollary 10.10]{F-H-T}. Let $\theta_N:A(N)\buildrel{\simeq}\over\rightarrow A(N)^\vee$ be a morphism of $A(N)$-modules such that the class of $\theta_N(1)$ is the fundamental class of $N$, $[N]\in H_n(A(N)^\vee)\cong H_n(N;\mathbb{Q})$. Let $f^!_A$ be the unique element of $\text{\em Ext}^{m-n}_{A(M)}(A(N),A(M))$ such that $H(f^!_A)(\int_N^{-1}\omega_N)=\int_M^{-1}\omega_M$. Then in the derived category of $A(M)$-modules, the diagram $$ \xymatrix@C20pt@R20pt{ A(M)^\vee\ar[r]^{(f^!_A)^\vee} &A(N)^\vee\\ A(M)\ar[r]_{A(f)}\ar[u]^{\theta_M} &A(N)\ar[u]_{\theta_N} } $$ also commutes up to the sign $(-1)^{n(m-n)}$. \end{prop} \begin{rem} Part 1) of the previous proposition is already in~\cite[p. 419]{F-T} but without signs and with left $C^*(M)$-modules. In particular, they should have defined their maps $\text{--} \cap [M]:C^*(M)\rightarrow C_{m-*}(M)$ by $(\text{--} \cap [M])(x)=(-1)^{\vert x\vert m} x\cap [M]$ in order to have a left $C^*(M)$-linear map~\cite[p. 283]{Friedman}. Note that the diagram of part 2) of Proposition \ref{shriek and Poincare duality} is commutative at the cochain level, as our proof below shows. \end{rem} \begin{proof} 1) By~\cite[Lemma 1]{F-T}, it suffices to show that the diagram commutes in homology on the generator $\omega_N\in H^n(N)$. Let $\varepsilon_M:H_0(M)\buildrel{\cong}\over\rightarrow{\mathbb K}$ be the augmentation. It is well-known that $\varepsilon_M(\omega_M\cap [M])=<\omega_M,[M]>=1$. Therefore $\varepsilon_M([M]\cap \omega_M)=(-1)^m\varepsilon_M(\omega_M\cap [M])=(-1)^m$. On the other hand, $\varepsilon_M\left(C_*(f)([N]\cap \omega_N)\right)=\varepsilon_N([N]\cap \omega_N)=(-1)^n$. Since $(-1)^m=(-1)^{m+n}(-1)^n$, part 1) follows. 2) Since $H^*(f^!)$ is right $H^*(M)$-linear, its dual is left $H^*(M)$-linear.
By $H^*(M)$-linearity, it suffices to check that the diagram commutes on $1\in H^*(M)$. By~\cite[7.1]{F-T-VP} or~\cite[Property 57 i)]{Menichi}, $\psi_M:C_*(M)\hookrightarrow C^*(M)^\vee$ is defined by $\psi_M(c)(\varphi)=(-1)^{\vert c\vert\vert\varphi\vert}\varphi(c)$ for $c\in C_*(M)$ and $\varphi\in C^*(M)$. Therefore $\psi_N([N])(\omega_N)=(-1)^{n^2}$. And \begin{align*} \left((f^!)^\vee\circ\psi_M\right)([M])(\omega_N)=(-1)^{m\vert f^!\vert}\left(\psi_M([M])\circ f^!\right)(\omega_N)\\ =(-1)^{m^2-mn}\psi_M([M])(\omega_M)=(-1)^{mn}. \end{align*} So $((f^!)^\vee\circ\psi_M)([M])=(-1)^{n(m-n)}\psi_N([N])$. Observe that the map $\psi_M$ is left $C^*(M)$-linear; see \cite[p. 250]{F-T-VP}. 3) The isomorphism $ \text{Ext}^{q}_{A(M)}(A(M),A(N)^\vee)\cong H_{-q}(A(N)^\vee)$ maps any element $\varphi$ to $ H(\varphi)(1)$. Therefore it suffices to check that the diagram commutes in homology on $1\in H^0(A(M))$. Consider the cube $$ {\footnotesize \xymatrix@C20pt@R15pt{ & H(A(M)^\vee) \ar[rr]^{H((f^!_A)^\vee)} & & H(A(N)^\vee) \\ H^*(M)^\vee \ar[ru]^{\int_M^\vee} \ar[rr]_(0.6){H((f^!)^\vee)} & & H^*(N)^\vee \ar[ur]_-{\int_N^\vee} & \\ & H(A(M)) \ar@{->}'[u][uu]_(0.4){H(\theta_M)} \ar@{->}'[r]_(0.7){H(A(f))}[rr] \ar@{->}[dl]_{\int_M} & & H(A(N)) \ar[uu]_{H(\theta_N)} \ar@{->}[dl]^{\int_N}\\ H^*(M) \ar[rr]_{H^*(f)} \ar[uu]^{H(\psi_M)\circ [M]\cap -} & & H^*(N) \ar[uu]_(0.7){H(\psi_N)\circ [N]\cap -}& } } $$ The bottom face commutes by naturality of the isomorphism $\int_N$. The left and right faces commute on $1$ by definition of $\theta_M(1)$ and $\theta_N(1)$.
The top face is the dual of the following square $$ \xymatrix@C35pt@R20pt{ H(A(N))\ar[r]^{H^*(f^!_A)}\ar[d]_{\int_N} & H(A(M))\ar[d]^{\int_M}\\ H(C^ *(N))\ar[r]_{H^*(f^!)} & H(C^*(M))\ar[r]_-{H(\psi_M([M]))} & {\mathbb K} } $$ Since $H(\psi_M([M]))\circ H^*(f^!)\circ \int_N=H(\psi_M([M]))\circ \int_M\circ H^*(f^!_A)$, by Lemma~\ref{caracterisation shriek map cohomology} below, the square is commutative. Therefore the top face of the previous cube is commutative. The front face is exactly the diagram in part 2) of this proposition. Therefore the back face commutes on $1$ up to the same sign $(-1)^{n(m-n)}$. \end{proof} The following lemma is a cohomological version of~\cite[Lemma 1]{F-T}. \begin{lem}\label{caracterisation shriek map cohomology} Let $P$ be a right $H^*(M)$-module. Then the map $$\Psi:\text{\em Hom}_{H^*(M)}^q(P,H^*(M))\rightarrow (P^{q-m})^\vee$$ mapping $\varphi$ to the composite $H(\psi_M([M]))\circ\varphi$ is an isomorphism. \end{lem} \begin{rem} This lemma also holds with $\text{Ext}$ instead of $\text{Hom}$ and with $H_*(M)$, $C_*(M)$ or~\cite[Lemma 1]{F-T} $C^*(M)$ instead of $H^*(M)$. \end{rem} We give the proof of Lemma~\ref{caracterisation shriek map cohomology}, since we do not understand all of the proof of~\cite[Lemma 1]{F-T}. \begin{proof} Since the form $H(\psi_M([M])):H(C^*(M))\rightarrow{\mathbb K}$, $\alpha\mapsto (-1)^{m\vert\alpha\vert}<\alpha,[M]>$ coincides with $ H^m(C^*(M))\buildrel{[M]\cap -}\over\rightarrow H_0(C_*(M))\buildrel{\varepsilon}\over\cong {\mathbb K} $, the map $\Psi$ coincides with the composite of $$\text{Hom}_{H^*(M)}^q(P,H(\psi_M)\circ [M]\cap -): \text{Hom}_{H^*(M)}^q(P,H^*(M))\buildrel{\cong}\over\rightarrow \text{Hom}_{H^*(M)}^{q-m}(P,H^*(M)^\vee)$$ and $$ \text{Hom}_{H^*(M)}(P,\text{Hom}(H^*(M),{\mathbb K}))\cong \text{Hom}_{\mathbb K}(P\otimes_{H^*(M)} H^*(M),{\mathbb K})\cong \text{Hom}_{\mathbb K}(P,{\mathbb K}).
$$ \end{proof} \begin{cor}\label{shriek diagonal} Let $N$ be an oriented Poincar\'e duality space. Let $[N]\in H_n(N)$ and $\omega_N\in H^n(N)$ such that the Kronecker product $<\omega_N,[N]>=1$. Let $\Delta^!$ be the unique element of $\text{\em Ext}^{n}_{C^*(N\times N)}(C^*(N),C^*(N\times N))$ such that $H(\Delta^!)(\omega_N)=\omega_N\times \omega_N$. (This is the $\Delta^!$ considered throughout this paper, since we orient $N\times N$ with the cross product $\omega_N\times \omega_N$: see part (2) of Theorem~\ref{produitshriek}). Let $\Delta^!_A$ be the unique element of $\text{Ext}^{n}_{A(N\times N)}(A(N),A(N\times N))$ such that $H(\Delta^!_A)(\int_N^{-1}\omega_N)=\int_{N\times N}^{-1}\omega_N\times \omega_N$. Then in the case of $\Delta^!$ and of $\Delta^!_A$, all the diagrams of Proposition~\ref{shriek and Poincare duality} commute exactly\footnote{As we see in the proof, this is lucky!}. \end{cor} \begin{proof} By~\cite[VI.5.4 Theorem]{Bredongeometry}, the cross products in homology and cohomology and the Kronecker product satisfy $$ <\omega_N\times \omega_N, [N]\times [N]>=(-1)^{\vert \omega_N\vert\vert [N]\vert} <\omega_N,[N]><\omega_N,[N]> =(-1)^n. $$ Therefore, by Proposition~\ref{shriek and Poincare duality}, the diagram of part 1) commutes up to the sign $(-1)^{2n+n}(-1)^n=+1$. The diagrams of parts 2) and 3) commute up to the sign $(-1)^{n(2n-n)}(-1)^n=+1$. \end{proof} \section{Appendix: Signs and degree shifting of products} Let $A$ be a graded vector space equipped with a morphism $\mu_A:A\otimes A\rightarrow A$ of degree $\vert\mu_A\vert$. Let $B$ be another graded vector space equipped with a morphism $\mu_B:B\otimes B\rightarrow B$ of degree $\vert\mu_B\vert$. \begin{defn}\label{definition associative and commutative} The multiplication $\mu_A$ is {\it associative} if $\mu_A\circ(\mu_A\otimes 1)=(-1)^{ \vert\mu_A\vert}\mu_A\circ(1\otimes\mu_A)$.
The multiplication $\mu_A$ is {\it commutative} if $\mu_A(a\otimes b)=(-1)^{ \vert\mu_A\vert+\vert a\vert\vert b\vert}\mu_A(b\otimes a)$ for all $a$, $b\in A$. A linear map $f:A\rightarrow B$ is a {\it morphism of algebras of degree $\vert f\vert$ }if $f\circ\mu_A=(-1)^{\vert f\vert\vert\mu_A\vert}\mu_B\circ(f\otimes f)$ (in particular $\vert\mu_A\vert=\vert\mu_B\vert+\vert f\vert$). \end{defn} \begin{prop} {\em (i)} The composite $g\circ f$ of two morphisms of algebras $f$ and $g$ of degrees $\vert f\vert$ and $\vert g\vert$ is a morphism of algebras of degree $\vert f\vert+\vert g\vert$. The inverse $f^{-1}$ of an isomorphism $f$ of algebras of degree $\vert f\vert$ is a morphism of algebras of degree $-\vert f\vert$. {\em (ii)} Let $f:A\rightarrow B$ be an isomorphism of algebras of degree $\vert f\vert$. Then $\mu_A$ is commutative if and only if $\mu_B$ is commutative. And $\mu_A$ is associative if and only if $\mu_B$ is associative. \end{prop} \begin{ex}\label{lem:important_degree} (i) (Compare with~\cite[Remark 3.6 and proof of Proposition 3.5]{Tamanoi:cap products}) Let $A$ be a lower graded vector space equipped with a morphism $\mu_A:A\otimes A\rightarrow A$ of lower degree $-d$ which is associative and commutative in the sense of Definition~\ref{definition associative and commutative}. Denote by ${\mathbb A}=s^{-d}A$ the $d$-desuspension~\cite[p. 41]{F-H-T} of $A$: ${\mathbb A}_i=A_{i+d}$. Let $\mu_{\mathbb{A}}:{\mathbb{A}}\otimes {\mathbb{A}}\rightarrow {\mathbb{A}}$ be the morphism of degree $0$ given by $\mu_{\mathbb{A}}(a\otimes b)=(-1)^{d(d+p)}\mu_A(a\otimes b)$ for $a\in A_p$ and $b\in A_q$. Then the map $s^{-d}:A\rightarrow \mathbb{A}$, $a\mapsto a$, is an isomorphism of algebras of lower degree $-d$ and $\mu_{\mathbb A}$ is commutative and associative in the usual graded sense. Indeed, for commutativity, both $\mu_{\mathbb{A}}(a\otimes b)$ and $(-1)^{(p-d)(q-d)}\mu_{\mathbb{A}}(b\otimes a)$ are equal to $(-1)^{pq+dp}\mu_A(b\otimes a)$, since modulo $2$, $d(d+p)+d+pq\equiv (p-d)(q-d)+d(d+q)\equiv pq+dp$.
(ii) Let $f:A\rightarrow B$ be a morphism of algebras of degree $\vert f\vert$ in the sense of Definition~\ref{definition associative and commutative} with respect to the multiplications $\mu_A$ and $\mu_B$. Then the composite $\mathbb{A}\buildrel{(s^{\vert\mu_A\vert})^{-1}}\over\rightarrow A \buildrel{f}\over\rightarrow B\buildrel{s^{\vert\mu_B\vert}}\over\rightarrow \mathbb{B} $ is a morphism of algebras of degree $0$ with respect to the multiplications $\mu_{\mathbb{A}}$ and $\mu_{\mathbb{B}}$. \end{ex} The following proposition explains that the generalized cup product (Definition~\ref{cup product Hochschild}) is natural with respect to morphisms of algebras of any degree (Definition~\ref{definition associative and commutative}). \begin{prop}\label{naturality generalized cup product} Let $A$ be an algebra. Let $M$ and $N$ be two $A$-bimodules. Let $\bar{\mu}_M\in\text{\em Ext}_{A\otimes A^{op}}^*(M\otimes_A M,M)$ and $\bar{\mu}_N\in\text{\em Ext}_{A\otimes A^{op}}^*(N\otimes_A N,N)$. Let $f\in\text{\em Ext}_{A\otimes A^{op}}^*(M,N)$ such that in the derived category of $A$-bimodules \begin{equation} f\circ \bar{\mu}_M=(-1)^{\vert f\vert\vert \bar{\mu}_M\vert}\bar{\mu}_N\circ (f\otimes_A f). \end{equation} Then $HH^*(A,f):HH^*(A,M)\rightarrow HH^*(A,N)$ is a morphism of algebras of degree $\vert f\vert$. \end{prop} \begin{proof} Consider the diagram $$ \xymatrix@C35pt@R15pt{ HH^*(A,M)^{\otimes 2}\ar[r]^-{\otimes_A}\ar[d]_-{HH^*(A,f)^{\otimes 2}} & HH^*(A,M\otimes_A M)\ar[r]^-{HH^*(A,\bar{\mu}_M)}\ar[d]^-{HH^*(A,f\otimes_A f)} & HH^*(A,M)\ar[d]^{HH^*(A,f)}\\ HH^*(A,N)^{\otimes 2}\ar[r]_-{\otimes_A} & HH^*(A,N\otimes_A N)\ar[r]_-{HH^*(A,\bar{\mu}_N)} & HH^*(A,N) } $$ The left square commutes exactly since for $g$, $h\in\text{Hom}_{A\otimes A^{op}}(B(A,A,A),M)$, $$ (f\otimes_A f)\circ (g\otimes_A h)\circ c=(-1)^{\vert f\vert\vert g\vert} \left((f\circ g)\otimes_A (f\circ h)\right)\circ c.
$$ By equation~(1), the right square commutes up to the sign $(-1)^{\vert f\vert\vert \bar{\mu}_M\vert}$. \end{proof} \begin{ex}\label{examples naturality cup product} (i) Let $A$ be an algebra. Let $M$ be an $A$-bimodule. Let $\bar{\mu}_M\in\text{Ext}_{A\otimes A^{op}}^d(M\otimes_A M,M)$. Denote by $\mathbb{M}:=s^{-d}M$ the $d$-desuspension of the $A$-bimodule $M$: for $a$, $b\in A$ and $m\in M$, $a(s^{-d}m)b:=(-1)^{d\vert a\vert}s^{-d}(amb)$~\cite[X.(8.4)]{MacLanehomology}. Then the map $s^{-d}:M\rightarrow \mathbb{M}$ is an isomorphism of $A$-bimodules of degree $d$. Consider $\bar{\mu}_{\mathbb{M}}\in\text{Ext}_{A\otimes A^{op}}^0(\mathbb{M}\otimes_A \mathbb{M},\mathbb{M})$ such that in the derived category of $A$-bimodules, $s^{-d}\circ \bar{\mu}_M=(-1)^{d\vert \bar{\mu}_M\vert}\bar{\mu}_{\mathbb{M}}\circ ( s^{-d}\otimes_A s^{-d})$. Then $HH^*(A,s^{-d}):HH^*(A,M)\rightarrow HH^*(A,\mathbb{M})$ is a morphism of algebras of lower degree $d$. In particular, by Example~\ref{lem:important_degree} (ii), the composite $$ \mathbb{HH^*(A,M)}:=s^{-d}HH^*(A,M)\buildrel{s^{d}}\over\rightarrow HH^*(A,M) \buildrel{HH^*(A,s^{-d})}\over\rightarrow HH^*(A,\mathbb{M}) $$ is an isomorphism of algebras of degree $0$. (ii) Let $A$ be a commutative algebra. Let $M$ and $N$ be two $A$-modules. Let $\mu_M\in\text{Ext}_{A^{\otimes 4}}^*(M\otimes M,M)$ and $\mu_N\in\text{Ext}_{A^{\otimes 4}}^*(N\otimes N,N)$. Let $f\in\text{Ext}_{A}^*(M,N)$ such that in the derived category of $A^{\otimes 4}$-modules, $f\circ \mu_M=(-1)^{\vert f\vert\vert \mu_M\vert}\mu_N\circ (f\otimes f)$. From Lemma~\ref{lem:generalizeddecompositioncup}(3) and Proposition~\ref{naturality generalized cup product}, since $f\circ \bar{\mu}_M\circ q=(-1)^{\vert f\vert\vert \bar{\mu}_M\vert}\bar{\mu}_N\circ (f\otimes_A f)\circ q$, $HH^*(A,f):HH^*(A,M)\rightarrow HH^*(A,N)$ is a morphism of algebras of degree $\vert f\vert$.
\end{ex} \begin{lem}\label{lem:dualsign} {\em (i)} For a commutative diagram of graded ${\mathbb K}$-modules $$ \xymatrix@C20pt@R15pt{ A \ar[r]^-{f} \ar[d]_-{h} & B \ar[d]^-{k}\\ C \ar[r]_-{g} & D, } $$ the square $$ \xymatrix@C65pt@R15pt{ A^{\vee} & B^{\vee} \ar[l]_-{f^{\vee}}\\ C^{\vee} \ar[u]^-{h^{\vee}} & D^{\vee} \ar[u]_-{k^{\vee}} \ar[l]^-{(-1)^{|f||k|+|g||h|}g^{\vee}} } $$ is commutative. \\ {\em (ii)} Let $f:A\to B$ and $g:C\to D$ be maps of graded ${\mathbb K}$-modules. Then, the square is commutative: $$ \xymatrix@C20pt@R15pt{ A^{\vee}\otimes C^{\vee} \ar[r]^-{f^{\vee}\otimes g^{\vee}} \ar[d]_-{\cong} & B^{\vee}\otimes D^{\vee} \ar[d]^-{\cong}\\ (A\otimes C)^{\vee} \ar[r]_-{(f\otimes g)^{\vee}} & (B\otimes D)^{\vee}. } $$ \end{lem} \begin{proof} (i) By definition~\cite[0.1 (7)]{Tanre}, $f^\vee(\varphi)=(-1)^{\vert\varphi\vert\vert f\vert}\varphi\circ f$. Therefore $(g\circ h)^\vee=(-1)^{\vert g\vert\vert h\vert}h^\vee\circ g^\vee$: indeed, $h^\vee\circ g^\vee(\varphi)=(-1)^{\vert\varphi\vert\vert g\vert+(\vert\varphi\vert+\vert g\vert)\vert h\vert}\varphi\circ g\circ h$, while $(g\circ h)^\vee(\varphi)=(-1)^{\vert\varphi\vert(\vert g\vert+\vert h\vert)}\varphi\circ g\circ h$. \end{proof} \noindent {\it Acknowledgments} The first author thanks Jean-Claude Thomas for a precious comment which helped him to understand Theorem \ref{thm:main_F-T}. \begin{thebibliography}{99} \bibitem{A-I} L. Avramov and S. Iyengar, Gorenstein algebras and Hochschild cohomology, special volume in honor of Melvin Hochster, Michigan Math. J. 57 (2008), 17--35. \bibitem{BGNX} K. Behrend, G. Ginot, B. Noohi and P. Xu, String topology for stacks, Ast\'erisque No. 343 (2012). \bibitem{Bredongeometry} G. Bredon, Topology and Geometry, Graduate Texts in Mathematics {\bf 139}, Springer-Verlag. \bibitem{CartanEilenberg} H. Cartan and S. Eilenberg, Homological algebra, Princeton University Press, Princeton, N. J., 1956. \bibitem{C-S}M. Chas and D. Sullivan, String topology, preprint (math.GT/0107187). \bibitem{C-L}D. Chataur and J. -F. Le Borgne, Homology of spaces of regular loops in the sphere, Algebr. Geom. Topol. {\bf 9} (2009), 935-977. \bibitem{C-M}D. Chataur and L.
Menichi, String topology of classifying spaces, J. Reine Angew. Math. {\bf 669} (2012), 1-45. \bibitem{C-J-Y}R. L. Cohen, J. D. S. Jones and J. Yan, The loop homology algebra of spheres and projective spaces, Categorical decomposition techniques in algebraic topology (Isle of Skye, 2001), 77-92, Progr. Math., 215, Birkh\"auser, Basel, 2004. \bibitem{C-G}R. L. Cohen and V. Godin, A polarized view of string topology, Topology, geometry and quantum field theory, 127-154, London Math. Soc. Lecture Note Ser., 308, Cambridge Univ. Press, Cambridge, 2004. \bibitem{FHT_G}Y. F\'elix, S. Halperin and J. -C. Thomas, Gorenstein spaces, Adv. in Math. {\bf 71}(1988), 92-112. \bibitem{F-H-T}Y. F\'elix, S. Halperin and J. -C. Thomas, Rational Homotopy Theory, Graduate Texts in Mathematics {\bf 205}, Springer-Verlag. \bibitem{F-T}Y. F\'elix and J. -C. Thomas, String topology on Gorenstein spaces, Math. Ann. {\bf 345}(2009), 417-452. \bibitem{F-T:rationalBV}Y. F\'elix and J. -C. Thomas, Rational BV-algebra in string topology, Bull. Soc. Math. France {\bf 136} (2) (2008), 311-327. \bibitem{F-T-VP}Y. F\'elix, J. -C. Thomas and M. Vigu{\'e}-Poirrier, The Hochschild cohomology of a closed manifold, Publ. Math. Inst. Hautes {\'E}tudes Sci. {\bf 99}(2004), 235-252. \bibitem{Friedman} G. Friedman, On the chain-level intersection pairing for PL pseudomanifolds, Homology, Homotopy Appl. {\bf 11} (2009), no. 1, 261-314. \bibitem{G-S} K. Gruher and P. Salvatore, Generalized string topology operations, Proc. Lond. Math. Soc. (3) 96 (2008), no. 1, 78-106. \bibitem{G-M}V. K. A. M. Gugenheim and J. P. May, On the Theory and Applications of Differential Torsion Products, Memoirs of Amer. Math. Soc. {\bf 142}, 1974. \bibitem{KleinPoincare} J. Klein, Fiber products, Poincar\'e duality and $A_\infty$-ring spectra, Proc. Amer. Math. Soc. 134 (2006), no. 6, 1825-1833. \bibitem{Kuri2011} K. Kuribayashi, The Hochschild cohomology ring of the singular cochain algebra of a space, Ann. Inst.
Fourier, Grenoble {\bf 61} (2011), 1779-1805. arXiv:{\tt math.AT/1006.0884}. \bibitem{K-M-N2} K. Kuribayashi and L. Menichi, Behavior of the Eilenberg-Moore spectral sequence in derived string topology, preprint (2013). \bibitem{K-L} K. Kuribayashi and L. Menichi, On the loop (co)product on the classifying space of a Lie group, in preparation. \bibitem{KMnoetherian} K. Kuribayashi and L. Menichi, Loop products on Noetherian H-spaces, in preparation. \bibitem{L-SaskedbyFelix} P. Lambrechts and D. Stanley, Algebraic models of Poincar\'e embeddings, Algebr. Geom. Topol. {\bf 5} (2005), 135-182. \bibitem{L-S} P. Lambrechts and D. Stanley, Poincar\'e duality and commutative differential graded algebras, Ann. Sci. \'Ec. Norm. Sup\'er. (4) {\bf 41}(2008), 495-509. \bibitem{LeB} J -F. Le Borgne, The loop-product spectral sequence, Expo. Math. {\bf 26}(2008), 25-40. \bibitem{MacLanehomology} S. Mac Lane, Homology, Die Grundlehren der mathematischen Wissenschaften, Bd. 114, Academic Press, Inc., Publishers, New York; Springer-Verlag, Berlin-G\"ottingen-Heidelberg, 1963. \bibitem{Mccleary} J. McCleary, A user's guide to spectral sequences, second edition, Cambridge University Press, 2001. \bibitem{McClure} J. McClure, On the chain-level intersection pairing for PL manifolds, Geom. Topol. {\bf 10} (2006), 1391-1424. \bibitem{Me} L. Meier, Spectral sequences in string topology, Algebr. Geom. Topol. {\bf 11} (2011), 2829-2860. \bibitem{Menichi_BV_Hochschild} L. Menichi, Batalin-Vilkovisky algebra structures on Hochschild cohomology, Bull. Soc. Math. France 137 (2009), no. 2, 277-295. \bibitem{Menichi} L. Menichi, Van Den Bergh isomorphism in string topology, J. Noncommut. Geom. {\bf 5} (2011), 69-105. \bibitem{Merkulov} S. A. Merkulov, De Rham model for string topology, Int. Math. Res. Not. 2004, no. 55, 2955-2981. \bibitem{Murillo} A. Murillo, The virtual Spivak fiber, duality on fibrations and Gorenstein spaces, Trans. Amer. Math. Soc. {\bf 359} (2007), 3577-3587.
\bibitem{Naito}T. Naito, On the mapping space homotopy groups and the free loop space homology groups, Algebr. Geom. Topol. {\bf 11} (2011), 2369-2390. \bibitem{S-G} S. Siegel and S. Witherspoon, The Hochschild cohomology ring of a group algebra, Proc. London Math. Soc. (3) {\bf 79} (1999), no. 1, 131-157. \bibitem{S}S. Shamir, A spectral sequence for the Hochschild cohomology of a coconnective dga, preprint (2010), arXiv:{\tt math.KT/1011.0600v2}. \bibitem{Spanier} E. H. Spanier, Algebraic topology, corrected reprint, Springer-Verlag, New York-Berlin, 1981. \bibitem{Sweedler} M. E. Sweedler, Hopf algebras, Mathematics Lecture Note Series, W. A. Benjamin, Inc., New York, 1969. \bibitem{Tamanoi:cap products} H. Tamanoi, Cap products in string topology, Algebr. Geom. Topol. 9 (2009), no. 2, 1201-1224. \bibitem{Tamanoi:stabletrivial}H. Tamanoi, Stable string operations are trivial, Int. Math. Res. Not. IMRN 2009, {\bf 24}, 4642-4685. \bibitem{T}H. Tamanoi, Loop coproducts in string topology and triviality of higher genus TQFT operations, J. Pure Appl. Algebra {\bf 214} (2010), 605-615. \bibitem{Tanre} D. Tanr\'e, Homotopie rationnelle: mod\`eles de Chen, Quillen, Sullivan, Lecture Notes in Mathematics {\bf 1025}, Springer-Verlag, Berlin, 1983. \end{thebibliography} \end{document}
\begin{document} \begin{center}{\bf\Large Existence and concentration of solution for a non-local regional Schr\"odinger equation with competing potentials } {Claudianor O. Alves} Universidade Federal de Campina Grande\\ Unidade Acad\^emica de Matem\'atica\\ CEP: 58429-900 - Campina Grande - PB, Brazil\\ {\sl [email protected]} \noindent {C\'esar E. Torres Ledesma} Departamento de Matem\'aticas, \\ Universidad Nacional de Trujillo,\\ Av. Juan Pablo II s/n. Trujillo-Per\'u\\ {\sl ctl\[email protected]} \end{center} \centerline{\bf Abstract} In this paper, we study the existence and concentration phenomena of solutions for the following non-local regional Schr\"odinger equation $$ \left\{ \begin{array}{l} \epsilon^{2\alpha}(-\Delta)_\rho^{\alpha} u + Q(x)u = K(x)|u|^{p-1}u,\;\;\mbox{in}\;\; \mathbb{R}^n,\\ u\in H^{\alpha}(\mathbb{R}^n), \end{array} \right. $$ where $\epsilon$ is a positive parameter, $0< \alpha < 1$, $1<p<\frac{n+2\alpha}{n-2\alpha}$, $n>2\alpha$; $(-\Delta)_{\rho}^{\alpha}$ is a variational version of the regional fractional Laplacian, whose range of scope is a ball with radius $\rho (x)>0$, and $\rho, Q, K$ are competing functions. We study the existence of ground states and we analyze the behavior of semi-classical solutions as $\epsilon \to 0$. \date{} \setcounter{equation}{0} \section{Introduction} The aim of this article is to study the non-linear Schr\"odinger equation with non-local regional diffusion and competing potentials $$ \left\{ \begin{aligned} \epsilon^{2\alpha}(-\Delta)_\rho^{\alpha} u + Q(x)u& = K(x)|u|^{p-1}u,\;\;\mbox{in}\;\; \mathbb{R}^n,\\ u&\in H^{\alpha}(\mathbb{R}^n), \end{aligned} \right.
\leqno{(P)} $$ where $0< \alpha < 1$, $\epsilon >0$, $n> 2\alpha$, $Q,K \in C(\mathbb{R}^n, \mathbb{R}^+)$ are bounded and the operator $(-\Delta)_{\rho}^{\alpha}$ is a variational version of the non-local regional fractional Laplacian, with range of scope determined by a positive function $\rho \in C(\mathbb{R}^n, \mathbb{R}^+)$, which is defined as $$ \int_{\mathbb{R}^n}(-\Delta)_{\rho}^{\alpha}u(x)\varphi (x)dx = \int_{\mathbb{R}^n}\int_{B(0, \rho (x))} \frac{[u(x+z) - u(x)][\varphi (x+z) - \varphi (x)]}{|z|^{n+2\alpha}}dzdx. $$ In what follows, we will work with the problem $$ (-\Delta)_{\rho_{\epsilon}}^{\alpha}v + Q(\epsilon x)v = K(\epsilon x)|v|^{p-1}v,\quad x\in \mathbb{R}^n, \leqno({P'}) $$ with $\rho_\epsilon(x) = \frac{1}{\epsilon}\rho (\epsilon x)$, which is equivalent to $(P)$ by considering the change of variable $v(x) = u(\epsilon x)$. Associated with $(P')$ we have the energy functional $I_{\rho_\epsilon} :X^{\epsilon} \to \mathbb{R}$ defined as $$ \begin{array}{l} \displaystyle I_{\rho_{\epsilon}} (v) = \frac{1}{2}\left( \int_{\mathbb{R}^n}\int_{B(0, \frac{1}{\epsilon}\rho (\epsilon x))}\hspace{-.5cm} \frac{|v(x+z) - v(x)|^2}{|z|^{n+2\alpha}}dzdx + \int_{\mathbb{R}^n}Q(\epsilon x)|v(x)|^2dx \right) -\\ \mbox{}\\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\displaystyle \frac{1}{p+1}\int_{\mathbb{R}^n} K(\epsilon x)|v(x)|^{p+1}dx, \end{array} $$ where $X^{\epsilon}$ denotes the Hilbert space $H^{\alpha}(\mathbb{R}^n)$ endowed with the norm \begin{equation}\label{08} \|v\|_{\rho_\epsilon}=\left(\int_{\mathbb{R}^{n}}\int_{B(0,\rho_\epsilon (x))}\frac{|v(x+z) - v(x)|^{2}}{|z|^{n+2\alpha}}dzdx + \int_{\mathbb{R}^{n}}Q(\epsilon x)|v(x)|^{2}dx \right)^{\frac{1}{2}}. \end{equation} Hereafter, we say that $v \in X^{\epsilon}$ is a weak solution of $(P')$ if $v$ is a critical point of $I_{\rho_\epsilon}$.
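Indeed, the equivalence between $(P)$ and $(P')$ follows from a direct scaling computation, which we sketch for the reader's convenience: writing $x=\epsilon y$ and $z=\epsilon w$, so that $u(x)=v(y)$, $dz\,dx=\epsilon^{2n}\,dw\,dy$, $|z|^{n+2\alpha}=\epsilon^{n+2\alpha}|w|^{n+2\alpha}$ and $|z|<\rho(\epsilon y)$ if and only if $|w|<\rho_\epsilon(y)$, we find
$$
\epsilon^{2\alpha}\int_{\mathbb{R}^n}\int_{B(0,\rho(x))}\frac{|u(x+z)-u(x)|^{2}}{|z|^{n+2\alpha}}dzdx = \epsilon^{n}\int_{\mathbb{R}^n}\int_{B(0,\rho_\epsilon(y))}\frac{|v(y+w)-v(y)|^{2}}{|w|^{n+2\alpha}}dwdy,
$$
while $\int_{\mathbb{R}^n}Q(x)|u|^{2}dx=\epsilon^{n}\int_{\mathbb{R}^n}Q(\epsilon y)|v|^{2}dy$ and $\int_{\mathbb{R}^n}K(x)|u|^{p+1}dx=\epsilon^{n}\int_{\mathbb{R}^n}K(\epsilon y)|v|^{p+1}dy$. Hence the energy associated with $(P)$ evaluated at $u$ equals $\epsilon^{n}I_{\rho_\epsilon}(v)$, and critical points of the two functionals correspond.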
In Section 2, Proposition \ref{FSprop1}, it is proved that $\|\,\,\,\|_{\rho_\epsilon}$ is equivalent to the usual norm in $H^{\alpha}(\mathbb{R}^n)$. Recently, the study of fractional Schr\"odinger equations has attracted much attention from many mathematicians. In the case of the fractional Laplacian $(-\Delta)^\alpha$, Chen \cite{MC} studied the existence of ground state solutions of the nonlinear fractional Schr\"odinger equation \begin{equation}\label{02} (-\Delta)^\alpha u + V(x)u = u^p\;\;\mbox{in}\;\;\mathbb{R}^n \end{equation} with unbounded potential. The existence of a ground state of (\ref{02}) is obtained by a Lagrange multiplier method, and the Nehari manifold method is used to obtain standing waves with prescribed frequency. If $V(x) = 1$, Dipierro et al. \cite{SDGPEV} proved existence and symmetry results for the solutions of equation (\ref{02}). Felmer et al. \cite{PFAQJT} studied the same equation with a more general nonlinearity $f(x,u)$; they obtained the existence, regularity and qualitative properties of ground states. Secchi \cite{SS} obtained positive solutions of a more general fractional Schr\"odinger equation by the variational method. On the other hand, research has been done in recent years regarding the regional fractional Laplacian, where the scope of the operator is restricted to a variable region near each point. We mention the works by Guan \cite{guan1} and Guan and Ma \cite{guan2}, where they study these operators and their relation with stochastic processes and develop an integration by parts formula, and the work by Ishii and Nakamura \cite{HIGN}, where the authors studied the Dirichlet problem for the regional fractional Laplacian modeled on the p-Laplacian.
Recently, Felmer and Torres \cite{PFCT1, PFCT2} considered positive solutions of the nonlinear Schr\"odinger equation with non-local regional diffusion \begin{equation}\label{03} \epsilon^{2\alpha}(-\Delta)_{\rho}^{\alpha} u + u = f(u),\;\;u\in H^{\alpha}(\mathbb{R}^n), \end{equation} where the operator $(-\Delta)_{\rho}^{\alpha}$ is defined as above. Under suitable assumptions on the non-linearity $f$ and the range of scope $\rho$, they obtained the existence of a ground state by a mountain pass argument and a comparison method. Furthermore, they analyzed symmetry properties and concentration phenomena of these solutions. These regional operators present various interesting characteristics that make them very attractive from the point of view of the mathematical theory of non-local operators. We also mention the recent works by Torres \cite{CT1, CT2, CT3}, where existence, multiplicity and symmetry results are considered in bounded domains and in $\mathbb{R}^n$. We recall that when $(-\Delta)_{\rho}^{\alpha}$ is replaced by $(-\Delta)^{\alpha}$, Chen and Zheng \cite{GCYZ} studied (\ref{03}) with external potential $V(x)$ and $f(u) = |u|^{p-1}u$. They showed that when $n = 1,2,3$, $\epsilon$ is sufficiently small, $\max\{\frac{1}{2}, \frac{n}{4}\} < \alpha < 1$ and $V$ satisfies some smoothness and boundedness assumptions, equation (\ref{03}) has a nontrivial solution $u_\epsilon$ concentrating at some single point as $\epsilon \to 0$. Very recently, in \cite{JDMPJW}, D\'avila, del Pino and Wei generalized various existence results known for (\ref{03}) with $\alpha=1$ to the case of the fractional Laplacian.
Moreover, we also mention the works by Shang and Zhang \cite{XSJZ1, XSJZ2}, where the nonlinear fractional Schr\"odinger equation with competing potentials \begin{equation}\label{04} \epsilon^{2\alpha}(-\Delta)^{\alpha}u + V(x)u = K(x)|u|^{p-2}u + Q(x)|u|^{q-2}u,\;\;x\in \mathbb{R}^n, \end{equation} was considered, with $2<q<p<2_{\alpha}^{*}$. By using a perturbative variational method, mountain pass arguments and the Nehari manifold method, they analyzed the existence, multiplicity and concentration phenomena for the solutions of (\ref{04}). Motivated by these previous works, in this paper our goal is to study the existence and concentration phenomena for the solutions of $(P)$. As pointed out in \cite{CASS, CASSJY, XWBZ}, the geometry of the ground state energy function $C(\xi)$, which is defined to be the ground state level associated with $$ (-\Delta)^{\alpha}u + Q(\xi)u = K(\xi)|u|^{p-1}u,\;\;x\in \mathbb{R}^n, $$ where $\xi \in \mathbb{R}^n$ is regarded as a parameter instead of an independent variable, is crucial in our approach. Here, the functions $\rho, Q$ and $K$ satisfy the following conditions: \begin{itemize} \item[$(H_0)$] There are positive real numbers $Q_\infty, K_\infty$ such that $$ Q_\infty=\lim_{|\xi|\to +\infty}Q(\xi) \quad \mbox{and} \quad K_\infty=\lim_{|\xi|\to +\infty}K(\xi). $$ \item[$(H_{1})$] There are numbers $0<\rho_0<\rho_\infty\le \infty$ such that $$\rho_{0} \leq \rho (\xi) < \rho_{\infty} \quad \forall\, \xi \in \mathbb{R}^{n} \quad\mbox{and}\quad \lim_{|\xi| \to \infty} \rho (\xi) =\rho_\infty. $$ \item[$(H_2)$] $Q,K: \mathbb{R}^n \to \mathbb{R}$ are continuous functions satisfying $$ 0< a_1 \leq Q(\xi),K(\xi) \leq a_2 \quad \forall\, \xi \in \mathbb{R}^n $$ for some positive constants $a_1,a_2$. \end{itemize} \begin{Thm} \label{T1} Assume $(H_0)-(H_2)$.
Then, if $$ \inf_{\xi \in \mathbb{R}^n}C(\xi) < \liminf_{|\xi| \to +\infty}C(\xi), \leqno{(C)} $$ problem $(P)$ has a ground state solution $u_\epsilon \in X^\epsilon$ for $\epsilon$ small enough. Moreover, for each sequence $\epsilon_m \to 0$, there is a subsequence such that for each $m \in \mathbb{N}$, the solution $u_{\epsilon_m}$ concentrates around a minimum point $\xi^*$ of the function $C(\xi)$, in the following sense: given $\delta>0$, there are $\epsilon_0, R>0$ such that $$ \int_{B^{c}(\xi^*,\epsilon_m R)}|u_{\epsilon_m}|^{2}\,dx \leq \epsilon_m^{n}\delta \quad \mbox{and} \quad \int_{B(\xi^*,\epsilon_m R)}|u_{\epsilon_m}|^{2}\,dx \geq \epsilon_m^{n} C, \quad \forall\, \epsilon_m \leq \epsilon_0, $$ where $C$ is a constant independent of $\delta$ and $m$. \end{Thm} We would like to point out that condition $(C)$ is not vacuous: it holds whenever there is $\xi_0 \in \mathbb{R}^n$ such that $$ \frac{Q(\xi_0)^{\frac{p+1}{p-1}-\frac{n}{2\alpha}}}{K(\xi_0)^{\frac{2}{p-1}}} < \frac{Q_\infty^{\frac{p+1}{p-1}-\frac{n}{2\alpha}}}{K_\infty^{\frac{2}{p-1}}}. $$ For more details, see Corollary \ref{ntakey} in Section 3. \section{Preliminaries} In this section we recall some basic facts about the Sobolev space $H^\alpha(\mathbb{R}^n)$, such as embeddings and compactness properties. To begin with, we recall the following embedding theorem. \begin{Thm} {\rm (\cite{EDNGPEV})}\label{FStm1} Let $\alpha \in (0,1)$. Then there exists a positive constant $C = C(n,\alpha)$ such that \begin{equation}\label{P01} \|u\|_{L^{2_{\alpha}^{*}}(\mathbb{R}^{n})}^{2} \leq C \int_{\mathbb{R}^{n}} \int_{\mathbb{R}^{n}} \frac{|u(x) - u(y)|^{2}}{|x-y|^{n+2\alpha}}dydx \end{equation} and then $H^{\alpha}(\mathbb{R}^{n}) \hookrightarrow L^{q}(\mathbb{R}^{n})$ is continuous for all $q \in [2, 2_{\alpha}^{*}]$.
Moreover, $H^{\alpha}(\mathbb{R}^{n}) \hookrightarrow L^{q}(\Omega)$ is compact for any bounded set $\Omega\subset \mathbb{R}^{n}$ and for all $q \in [2, 2_{\alpha}^{*})$, where $2_{\alpha}^{*} = \frac{2n}{n-2\alpha}$ is the critical exponent. \end{Thm} The next proposition establishes that $\|\,\,\,\|_{\rho_\epsilon}$ is equivalent to the usual norm in $H^{\alpha}(\mathbb{R}^n)$. \begin{Prop}\label{FSprop1} Suppose that ($H_{1}$) and $(H_2)$ hold and denote by $$ \|u\|=\left(\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^n}\frac{|u(x) - u(z)|^{2}}{|x-z|^{n+2\alpha}}dzdx + \int_{\mathbb{R}^{n}}|u(x)|^{2}dx \right)^{\frac{1}{2}} $$ the usual norm in $H^{\alpha}(\mathbb{R}^n)$. Then, there exists a constant $\mathfrak{S}> 0$ independent of $\epsilon$ such that $$ \|u\| \leq \mathfrak{S} \|u\|_{\rho_\epsilon} \quad \forall\, u \in H^{\alpha}(\mathbb{R}^n). $$ From this, $\|\cdot\|$ and $\|\cdot\|_{\rho_\epsilon}$ are equivalent norms in $H^{\alpha}(\mathbb{R}^{n})$. \end{Prop} \begin{proof} Without loss of generality we will consider $\epsilon =1$.
For $u \in X^1=H^{\alpha}(\mathbb{R}^{n})$, Fubini's Theorem together with $(H_1)$ and $(H_2)$ gives \begin{equation}\label{P02} \begin{aligned} &a_1 \|u\|^2 = a_1 \int_{\mathbb{R}^{n}} |u(x)|^{2}dx + a_1 \int_{\mathbb{R}^{n}}\int_{B(x,\rho_{0})} \frac{|u(x) - u(z)|^{2}}{|x-z|^{n+2\alpha}}dzdx + \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;a_1\int_{\mathbb{R}^{n}}\int_{B^c(x,\rho_{0})} \frac{|u(x) - u(z)|^{2}}{|x-z|^{n+2\alpha}}dzdx\\ &\leq \left(1+ \frac{2|S^{n-1}|}{\alpha \rho_{0}^{2\alpha}}\right) \int_{\mathbb{R}^n} Q(x)|u(x)|^{2}dx + a_1\int_{\mathbb{R}^{n}}\int_{B(x,\rho_{0})} \frac{|u(x) - u(z)|^{2}}{|x-z|^{n+2\alpha}}dzdx\\ &\leq A\left( \int_{\mathbb{R}^n} Q(x)|u(x)|^2dx + \int_{\mathbb{R}^n} \int_{B(0, \rho(x))}\frac{|u(x+z) - u(x)|^2}{|z|^{n+2\alpha}}dzdx\right), \end{aligned} \end{equation} where $A=\max\left\{a_1, \left( 1 + \frac{2|S^{n-1}|}{\alpha \rho_{0}^{2\alpha}} \right)\right\}$. The proposition follows by taking $\mathfrak{S} = \left(\frac{A}{a_1}\right)^{\frac{1}{2}}$. \end{proof} The following lemma is a version of the concentration compactness principle proved by Felmer and Torres \cite{PFCT1}, which will be used later on. \begin{Lem}\label{FSlem1} Let $n\geq 2$. Assume that $\{u_{k}\}$ is bounded in $H_{\rho}^{\alpha}(\mathbb{R}^{n})$ and satisfies \begin{equation}\label{P03} \lim_{k\to \infty} \sup_{y\in \mathbb{R}^{n}}\int_{B(y,R)}|u_{k}(x)|^{2}dx = 0, \end{equation} where $R>0$. Then $u_{k} \to 0$ in $L^{q}(\mathbb{R}^{n})$ for $2 < q < 2_{\alpha}^{*}$. \end{Lem} \section{Ground state} We prove the existence of a weak solution of $(P')$ by finding a critical point of the functional $I_{\rho_\epsilon}$.
Using the embeddings given in Theorem \ref{FStm1}, it follows that the functional $I_{\rho_\epsilon}$ is of class $C^{1}(X^\epsilon, \mathbb{R})$ with $$ I'_{\rho_\epsilon}(u)v = \langle u,v \rangle_{\rho_\epsilon} - \int_{\mathbb{R}^{n}}K(\epsilon x)|u(x)|^{p-1}u(x)v(x)dx \quad \forall\, v\in X^{\epsilon}, $$ where $$ \langle u,v \rangle_{\rho_\epsilon} =\int_{\mathbb{R}^n}\int_{B(0, \rho_\epsilon (x))} \frac{[u(x+z) - u(x)][v(x+z) - v(x)]}{|z|^{n+2\alpha}}dzdx + \int_{\mathbb{R}^n}Q(\epsilon x)uv\, dx. $$ Using well known arguments, it follows that $I_{\rho_\epsilon}$ verifies the mountain pass geometry. Then, there is a $(PS)_{c}$ sequence $\{u_k\} \subset X^{\epsilon}$ such that \begin{equation}\label{MPC1} I_{\rho_\epsilon}(u_k) \to C_{\rho_\epsilon} \quad \mbox{and}\quad I'_{\rho_\epsilon} (u_k) \to 0, \end{equation} where $C_{\rho_\epsilon}$ is the mountain pass level given by $$ C_{\rho_\epsilon} = \inf_{\gamma \in \Gamma_{\rho_{\epsilon}}} \sup_{t\in [0,1]} I_{\rho_\epsilon} (\gamma (t))>0 $$ with $$ \Gamma_{\rho_\epsilon} = \{ \gamma \in C([0,1], X^{\epsilon}): \gamma(0) = 0,\;\;I_{\rho_\epsilon} (\gamma (1))<0\}. $$ In the sequel, $\mathcal{N}_{\rho_\epsilon}$ denotes the Nehari manifold associated to the functional $I_{\rho_\epsilon}$, that is, $$ \mathcal{N}_{\rho_\epsilon} = \{u\in X^{\epsilon} \backslash \{0\}:\;\; I'_{\rho_\epsilon}(u)u=0\}. $$ It is easy to see that all nontrivial solutions of $(P')$ belong to $ \mathcal{N}_{\rho_\epsilon}$.
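The Nehari constraint yields uniform bounds, which we sketch here: for $u\in\mathcal{N}_{\rho_\epsilon}$, the identity $I'_{\rho_\epsilon}(u)u=0$, condition $(H_2)$, Theorem \ref{FStm1} and Proposition \ref{FSprop1} give a constant $c>0$ independent of $\epsilon$ such that
$$
\|u\|_{\rho_\epsilon}^{2}=\int_{\mathbb{R}^n}K(\epsilon x)|u|^{p+1}dx \leq a_2\, c\, \|u\|_{\rho_\epsilon}^{p+1},
$$
so that $\|u\|_{\rho_\epsilon}^{2}\geq (a_2 c)^{-\frac{2}{p-1}}$, and consequently, on $\mathcal{N}_{\rho_\epsilon}$,
$$
I_{\rho_\epsilon}(u)=\left(\frac{1}{2}-\frac{1}{p+1}\right)\|u\|_{\rho_\epsilon}^{2} \geq \frac{p-1}{2(p+1)}(a_2 c)^{-\frac{2}{p-1}}.
$$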
Moreover, by using standard arguments, it is possible to prove that \begin{equation} \label{ZZ0} C_{\rho_\epsilon}=\inf_{u \in \mathcal{N}_{\rho_\epsilon}}I_{\rho_\epsilon}(u) \end{equation} and there is $\beta>0$ independent of $\epsilon$, such that \begin{equation}\label{BETA1} \beta \leq \|u\|_{\rho_\epsilon}^{2} \quad \forall\, u \in \mathcal{N}_{\rho_\epsilon} \end{equation} and so, \begin{equation} \label{BETA2} \beta \leq C_{\rho_\epsilon} \quad \forall\, \epsilon >0. \end{equation} From (\ref{ZZ0}), if $C_{\rho_\epsilon}$ is a critical value of $I_{\rho_\epsilon}$, then it is the least energy critical value of $I_{\rho_\epsilon}$. Hereafter, we say that $C_{\rho_\epsilon}$ is the {\it ground state level} of $I_{\rho_\epsilon}$. Now, we consider the following equation \begin{equation}\label{05} (-\Delta)^{\alpha} u + Q(\xi) u = K(\xi)|u|^{p-1}u, \;\;x\in \mathbb{R}^n, \end{equation} where $\xi \in \mathbb{R}^n$ is regarded as a parameter instead of an independent variable. We define the energy functional $J_\xi : H^{\alpha}(\mathbb{R}^n) \to \mathbb{R}$ associated with (\ref{05}) by \begin{equation}\label{n02} \begin{array}{l} J_\xi (u) =\displaystyle \frac{1}{2}\left( \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} \frac{|u(x+z) - u(x)|^2}{|z|^{n+2\alpha}}dzdx + \int_{\mathbb{R}^n} Q(\xi)|u(x)|^2dx \right) \\ \mbox{}\\ \;\;\;\;\;\;\;\;\;\;\;\;\; - \displaystyle \frac{1}{p+1}\int_{\mathbb{R}^n} K(\xi)|u(x)|^{p+1}dx. \end{array} \end{equation} Let $$ C(\xi) = \inf_{u \in \mathcal{N}_{\xi}} J_{\xi}(u) $$ be the ground state energy associated with (\ref{05}), where $\mathcal{N}_\xi$ is the Nehari manifold defined as $$ \mathcal{N}_\xi = \{u\in H^{\alpha}(\mathbb{R}^n)\setminus \{0\}: J'_{\xi}(u) u = 0\}.
$$ Arguing as above, we see that $C(\xi)>0$ and $$ C(\xi) = \inf_{v\in H^{\alpha}(\mathbb{R}^n)\setminus \{0\}} \max_{t>0} J_\xi (tv) = \inf_{\gamma \in \Gamma_\xi} \max_{t\in [0,1]} J_\xi (\gamma (t)), $$ where $$ \Gamma_\xi =\{\gamma \in C([0,1], H^{\alpha}(\mathbb{R}^n)): \;\;\gamma (0) = 0, \;\;J_{\xi}(\gamma (1)) <0\}. $$ By \cite{PFAQJT}, we know that for each $\xi \in \mathbb{R}^n$, problem (\ref{05}) has a nontrivial nonnegative ground state solution. Thus, $C(\xi)$ is the least critical value of $J_{\xi}$. Next, we will study the continuity of $C(\xi)$. \begin{Lem}\label{lema1} The function $\xi \to C(\xi)$ is continuous. \end{Lem} \begin{proof} Let $\{\xi_r \} \subset \mathbb{R}^n$ and $\xi_0 \in \mathbb{R}^n$ with $$ \xi_r \to \xi_0 \quad \mbox{in} \quad \mathbb{R}^n. $$ By using the conditions on $\rho$, $Q$ and $K$, we know that $$ \inf_{\xi \in \mathbb{R}^n}C(\xi)>0 \quad \mbox{and} \quad \sup_{\xi \in \mathbb{R}^n}C(\xi)<+\infty. $$ Next, we denote by $v_r \in H^{\alpha}(\mathbb{R}^n)$ the function which satisfies $$ J_{\xi_r}(v_r)=C(\xi_r) \quad \mbox{and} \quad J'_{\xi_r}(v_r)=0. $$ In the sequel, we will consider two subsequences $\{\xi_{r_j}\}$ and $\{\xi_{r_k}\}$ such that $$ C(\xi_{r_j}) \geq C(\xi_0) \quad \forall\, r_j \eqno{(I)} $$ and $$ C(\xi_{r_k}) \leq C(\xi_0) \quad \forall\, r_k. \eqno{(II)} $$ \noindent {\bf Analysis of $(I)$:} From the above comments, we know that $\{C(\xi_{r_j})\}$ is bounded. Therefore, there are a subsequence $\{\xi_{{r_j}_i}\} \subset \{\xi_{r_j}\}$ and $C_0>0$ such that $$ C(\xi_{{r_j}_{i}}) \to C_0. $$ In the sequel, we will use the following notations: $$ v_i=v_{{r_j}_i} \quad \mbox{and} \quad \xi_i=\xi_{{r_j}_i}.
$$ Thereby, $$ \xi_i \to \xi_0 \quad \mbox{and} \quad C(\xi_i) \to C_0. $$ \noindent {\bf Claim A:} \, $C_0=C(\xi_0)$. From (I), $$ \lim_{i}C(\xi_i) \geq C(\xi_0) $$ and so, \begin{equation} \label{C0} C_0 \geq C(\xi_0). \end{equation} In the sequel, we let $w_0 \in H^{\alpha}(\mathbb{R}^n)$ be a function satisfying $$ J_{\xi_0}(w_0)=C(\xi_0) \quad \mbox{and} \quad J'_{\xi_0}(w_0)=0. $$ Moreover, we denote by $t_{i}>0$ the real number which verifies $$ J_{\xi_i}(t_iw_0)=\max_{t \geq 0}J_{\xi_i}(tw_0). $$ Thus, by definition of $C(\xi_i)$, $$ C(\xi_i)\leq J_{\xi_i}(t_iw_0). $$ It is possible to prove that $\{t_i\}$ is a bounded sequence; then, without loss of generality, we can assume that $t_i \to t_0$. Now, by using the fact that the functions $Q$ and $K$ are continuous, Lebesgue's Theorem gives $$ \lim_{i} J_{\xi_i}(t_iw_0)=J_{\xi_0}(t_0w_0) \leq J_{\xi_0}(w_0)=C(\xi_0), $$ leading to \begin{equation}\label{C1} C_0 \leq C(\xi_0). \end{equation} From (\ref{C0})-(\ref{C1}), $$ C(\xi_0)=C_0. $$ The above study implies that $$ \lim_{i}C(\xi_{{r_j}_i})=C(\xi_0). $$ \noindent {\bf Analysis of $(II)$:} \, By using the definition of $\{v_r\}$, it is easy to prove that $\{v_r\}$ is a bounded sequence in $H^{\alpha}(\mathbb{R}^n)$. Consequently, there is $v_0 \in H^{\alpha}(\mathbb{R}^n)$ such that $$ v_r \rightharpoonup v_0 \quad \mbox{in} \quad H^{\alpha}(\mathbb{R}^n). $$ By using Lemma \ref{FSlem1}, we can assume that $v_0 \not=0$, because any translation of the type $\tilde{v}_r(x)=v_r(x+y_r)$ also satisfies $$ J_{\xi_r}(\tilde{v}_r)=C(\xi_r) \quad \mbox{and} \quad J'_{\xi_r}(\tilde{v}_r)=0.
$$ The above information allows us to conclude that $v_0$ is a nontrivial solution of the problem \begin{eqnarray}\label{Eq0101} (-\Delta)^{\alpha}u + Q(\xi_0)u = K(\xi_0)|u|^{p-1}u \mbox{ in } \mathbb{R}^{n}, \;\; u \in H^{\alpha}(\mathbb{R}^{n}). \end{eqnarray} By Fatou's lemma, it is possible to prove that \begin{equation} \label{C6} \liminf_{r}J_{\xi_r}(v_r) \geq J_{\xi_0}(v_0). \end{equation} On the other hand, there is $s_r >0$ such that $$ C(\xi_r) \leq J_{\xi_r}(s_r v_0) \quad \forall\, r. $$ So \begin{equation} \label{C7} \limsup_{r}J_{\xi_r}(v_r)=\limsup_{r}C(\xi_r) \leq \limsup_{r}J_{\xi_r}(s_r v_0)=J_{\xi_0}(v_0). \end{equation} From (\ref{C6})-(\ref{C7}), $$ \lim_{r}J_{\xi_r}(v_r)=J_{\xi_0}(v_0). $$ The last limit yields $$ v_r \to v_0 \quad \mbox{in} \quad H^{\alpha}(\mathbb{R}^n). $$ Since $\{C(\xi_{r_k})\}$ is bounded, there are a subsequence of $\{\xi_{r_k}\}$, still denoted by $\{\xi_{r_k}\}$, and $C_*>0$ such that $$ C(\xi_{r_k}) \to C_*. $$ In the sequel, we will use the following notations: $$ v_k=v_{r_k} \quad \mbox{and} \quad \xi_k=\xi_{r_k}. $$ Thus, $$ v_k \to v_0, \quad \xi_k \to \xi_0 \quad \mbox{and} \quad C(\xi_k) \to C_*. $$ In what follows, we denote by $t_{k}>0$ the real number which verifies $$ J_{\xi_0}(t_kv_k)=\max_{t \geq 0}J_{\xi_0}(tv_k). $$ Thus, by definition of $C(\xi_0)$, $$ C(\xi_0)\leq J_{\xi_0}(t_kv_k). $$ It is possible to prove that $\{t_k\}$ is a bounded sequence; then, without loss of generality, we can assume that $t_k \to t_*$. Now, by using the fact that the functions $Q$ and $K$ are continuous, Lebesgue's Theorem gives $$ \lim_{k} J_{\xi_0}(t_kv_k)=J_{\xi_0}(t_*v_0)= \lim_{k} J_{\xi_k}(t_kv_k) \leq \lim_{k}C(\xi_k)=C_*. $$ Thereby, \begin{equation} \label{C3} C(\xi_0)\leq C_*.
\end{equation} On the other hand, from $(II)$, $$ \lim_{k}C(\xi_k) \leq C(\xi_0), $$ leading to \begin{equation} \label{C4} C_* \leq C(\xi_0). \end{equation} From (\ref{C3})-(\ref{C4}), $$ C_* = C(\xi_0). $$ The above study implies that $$ \lim_{k}C(\xi_{{r_j}_k})=C(\xi_0). $$ From $(I)$ and $(II)$, $$ \lim_{r}C(\xi_r)=C(\xi_0), $$ showing the lemma. \end{proof} In what follows, we denote by $D$ the ground state level of the functional $J: H^{\alpha}(\mathbb{R}^n) \to \mathbb{R}$ given by $$ J(u) = \frac{1}{2}\left( \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} \frac{|u(x) - u(z)|^2}{|x-z|^{n+2\alpha}}dz dx + \int_{\mathbb{R}^n} |u(x)|^{2}dx\right) - \frac{1}{p+1}\int_{\mathbb{R}^n}|u|^{p+1}dx. $$ Using the above notations, we have the following lemma. \begin{Lem}\label{lema2} The function $C(\xi)$ verifies the relation \begin{equation}\label{g02} C(\xi) = \frac{Q(\xi)^{\frac{p+1}{p-1} - \frac{n}{2\alpha}}}{K(\xi)^{\frac{2}{p-1}}}D, \quad \forall \xi \in\mathbb{R}^n. \end{equation} \end{Lem} \begin{proof} Let $u \in H^{\alpha}(\mathbb{R}^n)$ be a function verifying $$ J(u) = D \quad \mbox{and} \quad J'(u) = 0. $$ For each fixed $\xi \in \mathbb{R}^n$, let $\sigma^{2\alpha} = \frac{1}{Q(\xi)}$ and define $$ w(x) = \left[ \frac{Q(\xi)}{K(\xi)} \right]^{\frac{1}{p-1}}u(\frac{x}{\sigma}).
$$ Then, performing the change of variables $x = \sigma \tilde{x}$ and $z = \sigma \tilde{z}$, we obtain $$ \begin{aligned} J_\xi (w) &=\frac{1}{2}\left(\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\frac{|w(x+z) - w(x)|^{2}}{|z|^{n+2\alpha}}dz dx + \int_{\mathbb{R}^n}Q(\xi)w^2dx \right) - \frac{1}{p+1}\int_{\mathbb{R}^n}K(\xi)|w|^{p+1}dx\\ &= \frac{Q(\xi)}{2}\left( \sigma^{2\alpha}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\frac{|w(x+z) - w(x)|^2}{|z|^{n+2\alpha}}dz dx + \int_{\mathbb{R}^n}w^2(x)dx\right) - \frac{1}{p+1}\int_{\mathbb{R}^n}K(\xi)|w|^{p+1}dx \\ &= \frac{Q(\xi)^{\frac{p+1}{p-1}}}{K(\xi)^{\frac{2}{p-1}}}\left[\left( \frac{\sigma^{2\alpha}}{2}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\frac{|u(\frac{x}{\sigma} + \frac{z}{\sigma}) - u(\frac{x}{\sigma})|^2}{|z|^{n+2\alpha}}dz dx + \frac{1}{2}\int_{\mathbb{R}^n} |u(\frac{x}{\sigma})|^{2}dx \right) - \frac{1}{p+1}\int_{\mathbb{R}^n}|u(\frac{x}{\sigma})|^{p+1}dx\right]\\ &= \frac{Q(\xi)^{\frac{p+1}{p-1} - \frac{n}{2\alpha}}}{K(\xi)^{\frac{2}{p-1}}} J(u). \end{aligned} $$ A similar argument also gives $J'_\xi(w)(w)=0$, from which it follows that $$ C(\xi) \leq \frac{Q(\xi)^{\frac{p+1}{p-1} - \frac{n}{2\alpha}}}{K(\xi)^{\frac{2}{p-1}}}D, \quad \forall \xi \in\mathbb{R}^n. $$ The reverse inequality is obtained in the same way, finishing the proof.
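As a consistency check on the exponent in (\ref{g02}) (an editorial sketch, not part of the original argument), note that after the substitutions $x=\sigma\tilde{x}$, $z=\sigma\tilde{z}$ every term inside the square bracket acquires a common factor $\sigma^{n}$: the mass and nonlinear terms from the volume element $dx=\sigma^{n}d\tilde{x}$, and the kinetic term from $dz\,dx=\sigma^{2n}d\tilde{z}\,d\tilde{x}$ combined with the kernel scaling $|z|^{-(n+2\alpha)}=\sigma^{-(n+2\alpha)}|\tilde{z}|^{-(n+2\alpha)}$ and the explicit prefactor $\sigma^{2\alpha}$. Since $\sigma^{2\alpha}=Q(\xi)^{-1}$,
$$ \sigma^{n}=\left(\sigma^{2\alpha}\right)^{\frac{n}{2\alpha}}=Q(\xi)^{-\frac{n}{2\alpha}}, $$
which is precisely the origin of the correction $-\frac{n}{2\alpha}$ in the exponent of $Q(\xi)$.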
\end{proof} As a byproduct of the last proof, we have the following corollary. \begin{Cor}\label{ntakey} By Lemma \ref{lema2}, if there is $\xi_0 \in \mathbb{R}^n$ such that $$ \frac{Q(\xi_0)^{\frac{p+1}{p-1}-\frac{n}{2\alpha}}}{K(\xi_0)^{\frac{2}{p-1}}} < \frac{Q_\infty^{\frac{p+1}{p-1}-\frac{n}{2\alpha}}}{K_\infty^{\frac{2}{p-1}}}, $$ then $$ \inf_{\xi \in \mathbb{R}^n}C(\xi) < \liminf_{|\xi| \to +\infty}C(\xi) = C(\infty), $$ where $C(\infty)$ is the mountain pass level of the functional $J_\infty:H^{\alpha}(\mathbb{R}^n) \to \mathbb{R}$ given by $$ J_\infty(u) = \frac{1}{2}\left( \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} \frac{|u(x) - u(z)|^2}{|x-z|^{n+2\alpha}}dz dx + \int_{\mathbb{R}^n} Q_\infty |u|^2dx\right) - \frac{1}{p+1}\int_{\mathbb{R}^n}K_\infty |u|^{p+1}dx. $$ \end{Cor} \vspace{0.2 cm} The next lemma studies the behavior of the level $C_{\rho_{\epsilon}}$ as $\epsilon$ goes to 0. \begin{Lem} \label{LIMITC} $\displaystyle \limsup_{\epsilon \to 0}C_{\rho_{\epsilon}} \leq \inf_{\xi \in \mathbb{R}^n}C(\xi)$. Hence, $\displaystyle \limsup_{\epsilon \to 0}C_{\rho_{\epsilon}} < C(\infty)$. \end{Lem} \begin{proof} Fix $\xi_0 \in \mathbb{R}^n$ and $w\in H^{\alpha}(\mathbb{R}^n)$ with $$ J_{\xi_0}(w) = \max_{t\geq 0} J_{\xi_0}(tw)=C(\xi_0) \quad \mbox{and} \quad J_{\xi_0}'(w)=0, $$ where $$ J_{\xi_0}(u) = \frac{1}{2}\left( \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} \frac{|u(x) - u(z)|^2}{|x-z|^{n+2\alpha}}dz dx + \int_{\mathbb{R}^n} Q(\xi_0)|u(x)|^{2}dx\right) - \frac{1}{p+1}\int_{\mathbb{R}^n}K(\xi_0)|u|^{p+1}dx. $$ Then, we take $w_\epsilon (x) = w(x - \frac{\xi_0}{\epsilon})$ and $t_\epsilon >0$ satisfying $$ C_{\rho_\epsilon} \leq I_{\rho_\epsilon}(t_\epsilon w_\epsilon) = \max_{t\geq 0} I_{\rho_\epsilon}(tw_\epsilon).
$$ The change of variables $\tilde{x} = x - \frac{\xi_0}{\epsilon}$ gives $$ \begin{aligned} I_{\rho_\epsilon}(t_\epsilon w_\epsilon) &= \frac{t_{\epsilon}^{2}}{2}\left( \int_{\mathbb{R}^n}\int_{B(0, \frac{1}{\epsilon}\rho (\epsilon x))}\frac{|w_\epsilon(x+z) - w_\epsilon (x)|^2}{|z|^{n+2\alpha}}dz dx + \int_{\mathbb{R}^n}Q(\epsilon x)w_\epsilon^2(x)dx \right)\\ &-\frac{t_{\epsilon}^{p+1}}{p+1}\int_{\mathbb{R}^n}K(\epsilon x)w_{\epsilon}^{p+1}(x)dx\\ &= \frac{t_{\epsilon}^{2}}{2}\left( \int_{\mathbb{R}^n}\int_{B(0, \frac{1}{\epsilon}\rho (\epsilon \tilde{x} + \xi_0))} \frac{|w(\tilde{x} + z) - w(\tilde{x})|^2}{|z|^{n+2\alpha}}dz d\tilde{x} + \int_{\mathbb{R}^n} Q(\epsilon \tilde{x} + \xi_0)w^2(\tilde{x})d\tilde{x}\right)\\ &-\frac{t_{\epsilon}^{p+1}}{p+1}\int_{\mathbb{R}^n}K(\epsilon \tilde{x} + \xi_0)w^{p+1}(\tilde{x})d\tilde{x}. \end{aligned} $$ Thereby, considering a sequence $\epsilon_n \to 0$, the fact that $I'_{\rho_{\epsilon_n}}(t_{\epsilon_n} w_{\epsilon_n})(t_{\epsilon_n} w_{\epsilon_n})=0$ yields that $\{t_{\epsilon_n}\}$ is bounded. Thus, we can assume that $$ t_{\epsilon_n} \to t_*, $$ for some $t_*>0$. Using a change of variables as above, we can infer that $$ J_{\xi_0}'(t_*w)(t_*w)=0. $$ On the other hand, we know that $J_{\xi_0}'(w)(w)=0$. Then, by uniqueness, we must have $$ t_*=1. $$ From this, $$ I_{\rho_{\epsilon_n}}(t_{\epsilon_n} w_{\epsilon_n}) \to J_{\xi_0}(w)=C(\xi_0)\;\;\mbox{as}\;\;n \to \infty. $$ As the point $\xi_0 \in \mathbb{R}^n$ is arbitrary, the lemma is proved. \end{proof} \begin{Thm}\label{main1} For $\epsilon >0$ small enough, problem $(P')$ has a positive least energy solution. \end{Thm} \begin{proof} In what follows, we denote by $\{u_k\} \subset H^{\alpha}(\mathbb{R}^n)$ a sequence satisfying $$ I_{\rho_{\epsilon}}(u_k) \to C_{\rho_{\epsilon}} \quad \mbox{and} \quad I'_{\rho_{\epsilon}}(u_k) \to 0.
$$ If $u_k \rightharpoonup 0$ in $H^{\alpha}(\mathbb{R}^n)$, then \begin{equation}\label{lim0} u_k \to 0\;\;\mbox{ in}\;\;L_{loc}^{p}(\mathbb{R}^n)\;\;\mbox{for}\;\; p\in [2, 2_{\alpha}^{*}). \end{equation} By $(H_0)$, we can take $\delta, R>0$ such that \begin{equation}\label{eq17} Q_\infty - \delta \leq Q(x)\leq Q_\infty + \delta\;\;\mbox{and}\;\;K_\infty - \delta \leq K(x) \leq K_\infty + \delta \end{equation} for all $|x|\geq R$. Then, for all $t\geq 0$, $$ \begin{aligned} I_{\rho_\epsilon} (tu_k) & = I_{\epsilon, \infty}^{\delta} (tu_k) + \frac{t^2}{2} \int_{\mathbb{R}^n} [Q(\epsilon x) - Q_\infty + \delta] |u_k(x)|^2dx \\ &+ \frac{t^{p+1}}{p+1}\int_{\mathbb{R}^n} [K_\infty + \delta -K(\epsilon x)]|u_k(x)|^{p+1}dx\\ &\geq I_{\epsilon , \infty}^{\delta}(tu_k) + \frac{t^2}{2} \int_{B(0, \frac{R}{\epsilon})} [Q(\epsilon x) - Q_\infty + \delta] |u_k(x)|^2dx\\ & + \frac{t^{p+1}}{p+1}\int_{B(0, \frac{R}{\epsilon})} [K_\infty + \delta -K(\epsilon x)]|u_k(x)|^{p+1}dx, \end{aligned} $$ where $$ \begin{aligned} I_{\epsilon, \infty}^{\delta} (u) &= \frac{1}{2}\left( \int_{\mathbb{R}^n}\int_{B(0, \frac{1}{\epsilon}\rho (\epsilon x))} \frac{|u(x+z) - u(x)|^2}{|z|^{n+2\alpha}}dz dx + \int_{\mathbb{R}^n}(Q_\infty - \delta)|u(x)|^2dx \right) \\&- \frac{1}{p+1}\int_{\mathbb{R}^n} (K_\infty + \delta)|u(x)|^{p+1}dx.
\end{aligned} $$ Now, we know that there exists a bounded sequence $\{\tau_k\}$ such that $$ I_{\epsilon, \infty}^{\delta}(\tau_k u_k) \geq C(\tfrac{\rho(\epsilon x)}{\epsilon}, Q_\infty - \delta, K_\infty + \delta), $$ where $$ C(\tfrac{\rho(\epsilon x)}{\epsilon}, Q_\infty - \delta, K_\infty + \delta) = \inf_{v\in H^{\alpha}(\mathbb{R}^n)\setminus \{0\}} \sup_{t\geq 0}I_{\epsilon , \infty}^{\delta}(tv). $$ Thus, $$ \begin{aligned} C_{\rho_{\epsilon}} &\geq C(\tfrac{\rho(\epsilon x)}{\epsilon}, Q_\infty - \delta, K_\infty + \delta)+ \frac{\tau_k^2}{2} \int_{B(0, \frac{R}{\epsilon})} [Q(\epsilon x) - Q_\infty + \delta] |u_k(x)|^2dx \\ &+ \frac{\tau_k^{p+1}}{p+1}\int_{B(0, \frac{R}{\epsilon})} [K_\infty + \delta -K(\epsilon x)]|u_k(x)|^{p+1}dx. \end{aligned} $$ Taking the limit as $k\to \infty$, and then $\delta \to 0$, we find \begin{equation}\label{eq21} C_{\rho_\epsilon} \geq C(\tfrac{\rho(\epsilon x)}{\epsilon}, Q_\infty, K_\infty), \end{equation} where $C(\frac{\rho(\epsilon x)}{\epsilon}, Q_\infty, K_\infty )$ denotes the mountain pass level of the functional $$ I^{0}_{\epsilon,\infty}(u) = \displaystyle \frac{1}{2}\left( \int_{\mathbb{R}^n}\int_{B(0,\frac{1}{\epsilon}\rho(\epsilon x))} \frac{|u(x+z) - u(x)|^2}{|z|^{n+2\alpha}}dz dx + \int_{\mathbb{R}^n} Q_\infty |u|^2dx\right) - \frac{1}{p+1}\int_{\mathbb{R}^n}K_\infty |u|^{p+1}dx. $$ A standard argument shows that $$ \liminf_{\epsilon \to 0}C(\tfrac{\rho(\epsilon x)}{\epsilon}, Q_\infty, K_\infty) \geq C(\infty).
$$ Therefore, if there is $\epsilon_n \to 0$ such that the $(PS)_{C_{\rho_{\epsilon_n}}}$ sequence has weak limit equal to zero, we must have $$ C_{\rho_{\epsilon_n}} \geq C(\tfrac{\rho(\epsilon_n x)}{\epsilon_n}, Q_\infty, K_\infty), \quad \forall n \in \mathbb{N}, $$ leading to $$ \liminf_{n \to +\infty} C_{\rho_{\epsilon_n}} \geq C(\infty), $$ which contradicts Lemma \ref{LIMITC}. This proves that the weak limit is nontrivial for $\epsilon >0$ small enough, and standard arguments show that its energy is equal to $C_{\rho_{\epsilon}}$, showing the desired result. \end{proof} \section{Concentration of the solutions $u_\epsilon$} \begin{Lem}\label{Clm3} If $\{v_\epsilon\}$ is a family of solutions of $(P')$ with critical value $C_{\rho_\epsilon}$, then there exist a family $\{y_{\epsilon}\}$ and positive constants $R$ and $\beta$ such that \begin{equation}\label{Ceq10} \liminf_{\epsilon \to 0^{+}} \int_{B(y_{\epsilon}, R)}|v_{\epsilon}|^{2}\,dx \geq \beta >0. \end{equation} \end{Lem} \begin{proof} First we note that, by ($H_1$) and ($H_3$), we have $$ \begin{aligned} I_{\rho_\epsilon}(v) \geq I_{*}(v) &= \frac{1}{2}\left(\int_{\mathbb{R}^n}\int_{B(0, \rho_0)} \frac{|v(x+z) - v(x)|^2}{|z|^{n+2\alpha}}dz dx + \int_{\mathbb{R}^n}a_1|v|^2dx\right)\\ & - \frac{1}{p+1}\int_{\mathbb{R}^n}a_2 |v|^{p+1}dx. \end{aligned} $$ Let $\mathcal{N}_{*} = \{v\in H^{\alpha}(\mathbb{R}^n)\setminus \{0\}:\;\; I'_{*}(v)v =0\}$. Then, for each $v\in \mathcal{N}_{*}$, there exists a unique $t_v>0$ such that $t_vv \in \mathcal{N}_{\rho_\epsilon}$.
Hence, \begin{equation}\label{c0} \begin{aligned} 0< C(\rho_0, a_1,a_2) &= \inf_{v\in \mathcal{N}_*}I_*(v) \leq \inf_{v\in \mathcal{N}_*}I_{\rho_\epsilon}(v)\\ &\leq \inf_{v\in \mathcal{N}_*}I_{\rho_{\epsilon}} (t_vv) = \inf_{u\in \mathcal{N}_{\rho_\epsilon}}I_{\rho_\epsilon}(u) = C_{\rho_\epsilon}. \end{aligned} \end{equation} Now, arguing by contradiction, if (\ref{Ceq10}) does not hold, then there exists a sequence $v_{k} = v_{\epsilon_{k}}$ such that $$ \lim_{k\to \infty} \sup_{y \in \mathbb{R}^{n}} \int_{B(y ,R)} |v_{k}|^{2}dx = 0. $$ By Lemma \ref{FSlem1}, $ v_{k} \to 0$ in $L^{q}(\mathbb{R}^{n})$ for any $2 < q < 2_{\alpha}^{*}$. However, this is impossible, since by (\ref{c0}) $$ \begin{aligned} 0<C(\rho_0, a_1,a_2) \leq C_{\rho_{\epsilon_k}} &= I_{\rho_{\epsilon_k}}(v_k) - \frac{1}{2}I'_{\rho_{\epsilon_k}}(v_k)v_k\\ & = \frac{p-1}{2(p+1)}\int_{\mathbb{R}^{n}}K(\epsilon_k x)|v_{k}|^{p+1}dx \\ &\leq \frac{p-1}{2(p+1)}\int_{\mathbb{R}^n} a_2|v_k|^{p+1}dx \to 0\;\;\mbox{as} \;\;k \to \infty. \end{aligned} $$ \end{proof} Now let \begin{equation}\label{sol} w_{\epsilon}(x) = v_{\epsilon}(x + y_{\epsilon}) = u_{\epsilon}(\epsilon x + \epsilon y_{\epsilon}); \end{equation} then, by (\ref{Ceq10}), \begin{equation}\label{Ceq11} \liminf_{\epsilon \to 0^{+}}\int_{B(0,R)} |w_{\epsilon}|^{2}dx \geq \beta > 0. \end{equation} To continue, we consider the rescaled scope function $\overline{\rho}_\epsilon$, defined as $$ \overline{\rho}_\epsilon(x)=\frac{1}{\epsilon}\rho(\epsilon x+\epsilon y_\epsilon), $$ and then $w_\epsilon$ satisfies the equation \begin{equation}\label{Ceq12} (-\Delta)_{\overline{\rho}_{\epsilon}}^{\alpha}w_{\epsilon}(x) + Q(\epsilon x + \epsilon y_\epsilon)w_{\epsilon}(x) = K(\epsilon x + \epsilon y_\epsilon)|w_{\epsilon}(x)|^{p-1}w_\epsilon(x) \;\;\mbox{in}\;\;\mathbb{R}^{n}.
\end{equation} \begin{Lem}\label{Clm4} The family $\{\epsilon y_\epsilon\}$ is bounded. Moreover, if $\epsilon_m y_{\epsilon_m} \to \xi^*$, then $$ C(\xi^*) = \inf_{\xi \in \mathbb{R}^n} C(\xi). $$ \end{Lem} \begin{proof} Suppose, by contradiction, that $|\epsilon_m y_{\epsilon_m}| \to \infty$ and consider the function $w_m:=w_{\epsilon_m}$ defined by (\ref{sol}), which satisfies (\ref{Ceq12}). Since $\{C_{\rho_{\epsilon_m}}\}$ is bounded, the sequence $\{w_m\}$ is also bounded in $H^{\alpha}(\mathbb{R}^n)$. Then $w_m \rightharpoonup w$ in $H^\alpha (\mathbb{R}^n)$, and $w\neq 0$ by Lemma \ref{Clm3}. Now, by (\ref{Ceq12}) we get the following equality: $$ \begin{aligned} &\int_{\mathbb{R}^n}\int_{B(0, \frac{1}{\epsilon_m}\rho (\epsilon_m x + \epsilon_m y_{\epsilon_m}))}\frac{[w_{m}(x+z) - w_{m}(x)][w(x+z) - w(x)]}{|z|^{n+2\alpha}}dz dx\\ & + \int_{\mathbb{R}^n}Q(\epsilon_m x + \epsilon_m y_{\epsilon_m})w_{m}w\,dx = \int_{\mathbb{R}^n}K(\epsilon_m x + \epsilon_m y_{\epsilon_m})|w_{m}|^{p-1}w_{m}w \,dx. \end{aligned} $$ So, by Fatou's Lemma, we get \begin{equation}\label{Ceq13} \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\frac{|w(x+z) - w(x)|^2}{|z|^{n+2\alpha}}dz dx + \int_{\mathbb{R}^n}Q_\infty |w|^2dx \leq \int_{\mathbb{R}^n}K_\infty|w|^{p+1}dx. \end{equation} Let $\theta >0$ be such that $$ J_\infty(\theta w) = \max_{t\geq 0} J_\infty (t w).
$$ From (\ref{Ceq13}), $\theta \in (0,1]$, whence $$ \begin{aligned} C(\infty) &\leq J_\infty (\theta w) - \frac{1}{2}J'_{\infty}(\theta w)\theta w = \left( \frac{1}{2} - \frac{1}{p+1}\right) \theta^{p+1}\int_{\mathbb{R}^n} K_\infty|w(x)|^{p+1}dx\\ &\leq \left( \frac{1}{2} - \frac{1}{p+1} \right)\int_{\mathbb{R}^n}K_\infty|w(x)|^{p+1}dx\\ &\leq \left( \frac{1}{2} - \frac{1}{p+1} \right) \liminf_{m \to \infty} \int_{\mathbb{R}^n} K(\epsilon_m x + \epsilon_m y_{\epsilon_m})|w_m(x)|^{p+1}dx\\ &= \liminf_{m\to \infty} C_{{\rho}_{\epsilon_m}} < C(\infty), \end{aligned} $$ which is a contradiction. So $\{\epsilon y_\epsilon\}$ is bounded. Thus, there exists a subsequence of $\{\epsilon y_\epsilon\}$ such that $\epsilon_m y_{\epsilon_m} \to \xi^*$. Repeating the above arguments, define the function $$ w_m(x)=v_{\epsilon_m}(x + y_{\epsilon_m}) = u_{\epsilon_m}(\epsilon_m x + \epsilon_m y_{\epsilon_m}). $$ This function satisfies equation (\ref{Ceq12}), and again $\{w_m\}$ is bounded in $H^{\alpha}(\mathbb{R}^n)$. Then $w_m \rightharpoonup w$ in $H^\alpha (\mathbb{R}^n)$, where $w$ satisfies the equation \begin{equation}\label{Ceq14} (-\Delta)^{\alpha}w + Q(\xi^*)w = K(\xi^*)|w|^{p-1}w, \quad x \in \mathbb{R}^n, \end{equation} in the weak sense. Furthermore, associated to (\ref{Ceq14}) we have the energy functional $$ \begin{aligned} J_{\xi^*}(u) &= \frac{1}{2}\left( \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} \frac{|u(x+z) - u(x)|^2}{|z|^{n+2\alpha}}dz dx + \int_{\mathbb{R}^n} Q(\xi^*)|u(x)|^2dx\right)\\ &- \frac{1}{p+1}\int_{\mathbb{R}^n} K(\xi^*)|u(x)|^{p+1}dx.
\end{aligned} $$ Using $w$ as a test function in (\ref{Ceq12}) and taking the limit as $m \to +\infty$, we get $$ \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\frac{|w(x+z) - w(x)|^2}{|z|^{n+2\alpha}}dz dx + \int_{\mathbb{R}^n}Q(\xi^*)|w(x)|^2dx \leq \int_{\mathbb{R}^n}K(\xi^*)|w|^{p+1}dx, $$ which implies that there exists $\theta \in (0, 1]$ such that $$ J_{\xi^*}(\theta w) = \max_{t\geq 0}J_{\xi^*}(tw). $$ So, by Lemma \ref{LIMITC}, $$ \begin{aligned} C(\xi^*)&\leq J_{\xi^*}(\theta w) = \left( \frac{1}{2} - \frac{1}{p+1} \right)\theta^{p+1} \int_{\mathbb{R}^n}K(\xi^*)|w(x)|^{p+1}dx\\ &\leq \left( \frac{1}{2}-\frac{1}{p+1} \right) \liminf_{m \to \infty} \int_{\mathbb{R}^n}K(\epsilon_m x + \epsilon_m y_{\epsilon_m})|w_{m}(x)|^{p+1}dx\\ &= \liminf_{m\to \infty} \left[I_{{\rho}_{\epsilon_m}}(v_{\epsilon_m}) - \frac{1}{2}I'_{{\rho}_{\epsilon_m}}(v_{\epsilon_m})v_{\epsilon_m}\right]\\ &=\liminf_{m\to \infty} C_{{\rho}_{\epsilon_m}} \leq \limsup_{m\to \infty} C_{{\rho}_{\epsilon_m}} \leq \inf_{\xi\in \mathbb{R}^n}C(\xi), \end{aligned} $$ showing that $C(\xi^*)=\displaystyle \inf_{\xi\in \mathbb{R}^n}C(\xi)$. \end{proof} Now we prove the convergence of $w_\epsilon$ as $\epsilon\to 0$. \begin{Lem}\label{Clm5} For every sequence $\{\epsilon_m\}$ there is a subsequence, still denoted in the same way, such that $w_{\epsilon_m}=w_{m} \to w$ in $H^{\alpha}(\mathbb{R}^{n})$ as $m\to \infty$, where $w$ is a solution of (\ref{Ceq14}).
\end{Lem} \begin{proof} Since $w$ is a solution of (\ref{Ceq14}), from Lemma \ref{LIMITC} we have $$ \begin{aligned} &\inf_{\xi\in \mathbb{R}^n} C(\xi) = C(\xi^*) \leq J_{\xi^*}(w) = J_{\xi^*}(w) - \frac{1}{2}J'_{\xi^*}(w)w\\ & = \left(\frac{1}{2} - \frac{1}{p+1}\right) \int_{\mathbb{R}^n}K(\xi^*)|w|^{p+1}dx\\ &\leq \left(\frac{1}{2} - \frac{1}{p+1} \right)\liminf_{m\to \infty} \int_{\mathbb{R}^n} K(\epsilon_m x + \epsilon_m y_{\epsilon_m})|w_m(x)|^{p+1}dx\\ & \leq \left(\frac{1}{2} - \frac{1}{p+1} \right)\limsup_{m\to \infty}\int_{\mathbb{R}^n} K(\epsilon_m x + \epsilon_m y_{\epsilon_m})|w_m|^{p+1}dx\\ & = \left(\frac{1}{2} - \frac{1}{p+1} \right)\limsup_{m\to \infty}\int_{\mathbb{R}^n} K(\epsilon_m x )|v_m|^{p+1}dx\\ &\leq \limsup_{m\to \infty} \left( I_{{\rho}_{\epsilon_m}}(v_m) - \frac{1}{2}I'_{\rho_{\epsilon_m}}(v_m)v_m \right) \\ &= \limsup_{m\to \infty} C_{{\rho}_{\epsilon_m}} \leq \inf_{\xi \in \mathbb{R}^n} C(\xi). \end{aligned} $$ The above estimates give $$ \lim_{m\to \infty} \int_{\mathbb{R}^n}K(\epsilon_m x + \epsilon_m y_{\epsilon_m})|w_m|^{p+1}dx= \int_{\mathbb{R}^n}K(\xi^*)|w|^{p+1}dx. $$ Consequently, $$ \begin{aligned} (a)\;\;&\lim_{m\to \infty} \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} \frac{|w_m(x+z) - w_m(x)|^2}{|z|^{n+2\alpha}}dz dx = \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \frac{|w(x+z)-w(x)|^2}{|z|^{n+2\alpha}}dz dx,\\ (b)\;\;&\lim_{m\to \infty} \int_{\mathbb{R}^n} Q(\epsilon_m x + \epsilon_m y_{\epsilon_m})|w_m(x)|^{2}dx = \int_{\mathbb{R}^n} Q(\xi^*)|w(x)|^2dx. \end{aligned} $$ From $(b)$, given $\delta>0$, there exists $R>0$ such that $$ \int_{|x|\geq R}Q(\epsilon_m x + \epsilon_m y_{\epsilon_m})|w_m(x)|^{2}dx \leq \delta. $$ Furthermore, using $(H_3)$, we obtain \begin{equation}\label{Ceq15} \int_{|x|\geq R}|w_m(x)|^2dx \leq \frac{\delta}{a_1}.
\end{equation} On the other hand, \begin{equation}\label{Ceq16} \lim_{m\to \infty}\int_{|x|\leq R}|w_m(x)|^2dx = \int_{|x|\leq R}|w(x)|^2dx. \end{equation} From (\ref{Ceq15}) and (\ref{Ceq16}), $w_m \to w$ in $L^2(\mathbb{R}^n)$. From this, given $\delta>0$, there are $\epsilon_0, R>0$ such that $$ \int_{B^{c}(x^*,\epsilon_m R)}|u_{\epsilon_m}|^{2}\,dx \leq \epsilon_m^{n}\delta \quad \mbox{and} \quad \int_{B(x^*,\epsilon_m R)}|u_{\epsilon_m}|^{2}\,dx \geq \epsilon_m^{n} C, \quad \forall \epsilon_m \leq \epsilon_0, $$ where $C$ is a constant independent of $\delta$ and $m$, showing the concentration of the solutions $\{u_{\epsilon_{m}}\}$. \end{proof} \begin{thebibliography}{99} \bibitem{CASS}C.O. Alves and S.H.M. Soares, {\it Existence and concentration of positive solutions for a class of gradient systems}, Nonlinear Differential Equations Appl. {\bf 12}, 437-457 (2005). \bibitem{CASSJY}C.O. Alves, S.H.M. Soares and J. Yang, {\it On existence and concentration of solutions for a class of Hamiltonian systems in $\mathbb{R}^n$}, Advanced Nonlinear Studies {\bf 3}, 161-189 (2003). \bibitem{MC}M. Cheng, {\it Bound state for the fractional Schr\"odinger equation with unbounded potential}, J. Math. Phys. {\bf 53}, 043507 (2012). \bibitem{GCYZ}G. Chen and Y. Zheng, {\it Concentration phenomenon for fractional nonlinear Schr\"odinger equations}, Comm. Pure Appl. Anal. {\bf 13}(6), 2359-2376 (2014). \bibitem{JDMPJW}J. D\'avila, M. Del Pino and J. Wei, {\it Concentrating standing waves for the fractional nonlinear Schr\"odinger equation}, J. Differential Equations {\bf 256}, 858-892 (2014). \bibitem{SDGPEV}S. Dipierro, G. Palatucci and E. Valdinoci, {\it Existence and symmetry results for a Schr\"odinger type problem involving the fractional Laplacian}, Matematiche {\bf 68}, 201-216 (2013). \bibitem{EDNGPEV}E. Di Nezza, G. Palatucci and E. Valdinoci, {\it Hitchhiker's guide to the fractional Sobolev spaces}, Bull. Sci. math.
{\bf 136}, 521-573 (2012). \bibitem{PFAQJT}P. Felmer, A. Quaas and J. Tan, {\it Positive solutions of nonlinear Schr\"odinger equation with the fractional Laplacian}, Proceedings of the Royal Society of Edinburgh: Section A Mathematics {\bf 142}(6), 1237-1262 (2012). \bibitem{PFCT1}P. Felmer and C. Torres, {\it Non-linear Schr\"odinger equation with non-local regional diffusion}, Calc. Var. Partial Diff. Equ. {\bf 54}, 75-98 (2015). \bibitem{PFCT2}P. Felmer and C. Torres, {\it Radial symmetry of ground states for a regional fractional nonlinear Schr\"odinger equation}, Comm. Pure Appl. Anal. {\bf 13}, 2395-2406 (2014). \bibitem{guan1} Q-Y. Guan, {\it Integration by parts formula for regional fractional Laplacian}, Commun. Math. Phys. {\bf 266}, 289-329 (2006). \bibitem{guan2} Q-Y. Guan and Z.M. Ma, {\it The reflected $\alpha$-symmetric stable processes and regional fractional Laplacian}, Probab. Theory Relat. Fields {\bf 134}, 649-694 (2006). \bibitem{HIGN}H. Ishii and G. Nakamura, {\it A class of integral equations and approximation of p-Laplace equations}, Calc. Var. {\bf 37}, 485-522 (2010). \bibitem{SS}S. Secchi, {\it Ground state solutions for nonlinear fractional Schr\"odinger equations in $\mathbb{R}^n$}, J. Math. Phys. {\bf 54}, 031501 (2013). \bibitem{XSJZ1}X. Shang and J. Zhang, {\it Concentrating solutions of nonlinear fractional Schr\"odinger equation with potentials}, J. Differential Equations {\bf 258}, 1106-1128 (2015). \bibitem{XSJZ2}X. Shang and J. Zhang, {\it Existence and multiplicity of solutions of fractional Schr\"odinger equation with competing potential functions}, Complex Variables and Elliptic Equations {\bf 61}, 1435-1463 (2016). \bibitem{CT1}C. Torres, {\it Symmetric ground state solution for a non-linear Schr\"odinger equation with non-local regional diffusion}, Complex Variables and Elliptic Equations, http://dx.doi.org/10.1080/17476933.2016.1178730 (2016). \bibitem{CT2}C.
Torres, {\it Multiplicity and symmetry results for a nonlinear Schr\"odinger equation with non-local regional diffusion}, Math. Meth. Appl. Sci. {\bf 39}, 2808-2820 (2016). \bibitem{CT3}C. Torres, {\it Nonlinear Dirichlet problem with non local regional diffusion}, Fract. Calc. Appl. Anal. {\bf 19}(2), 379-393 (2016). \bibitem{XWBZ}X. Wang and B. Zeng, {\it On concentration of positive bound states of nonlinear Schr\"odinger equations with competing potential functions}, SIAM J. Math. Anal. {\bf 28}, 633-655 (1997). \end{thebibliography} \end{document}
\begin{document} \title{Time-Polynomial Lieb-Robinson bounds for finite-range spin-network models} \author{Stefano Chessa} \email{[email protected]} \author{Vittorio Giovannetti} \affiliation{NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, I-56126 Pisa, Italy} \date{\today} \begin{abstract} The Lieb-Robinson bound sets a theoretical upper limit on the speed at which information can propagate in non-relativistic quantum spin networks. In its original version, it results in an exponentially exploding function of the evolution time, which is partially mitigated by an exponentially decreasing term that instead depends upon the distance covered by the signal (the ratio between the two exponents effectively defining an upper bound on the propagation speed). In the present paper, by properly accounting for the free parameters of the model, we show how to turn this construction into a stronger inequality where the upper limit only scales polynomially with respect to the evolution time. Our analysis applies to any chosen topology of the network, as long as the range of the associated interaction is explicitly finite. For the special case of linear spin networks we also present an alternative derivation, based on a perturbative expansion approach, which improves the previous inequality. In the same context we also establish a lower bound on the speed of the information spread, which yields a nontrivial result at least in the limit of small propagation times. \end{abstract} \maketitle \section{Introduction} \label{sec.Intro} When dealing with communication tasks, the information transfer speed is one of the most relevant parameters for characterising the performance of a communication line. This statement obviously applies to Quantum Communication, but also to Quantum Computation, where the effective ability to carry information, for instance from one gate to another, can determine the number of calculations executable per unit of time.
It therefore appears useful to be able to estimate such a speed or, whenever this is not possible, to bound it from above. In the context of communication via quantum spin networks~\cite{BOSE1} a result of this kind can be obtained by exploiting the so-called {Lieb-Robinson (L-R) bound} \cite{LR,REVIEW}: defining a suitable correlation function involving two local, spatially separated operators $\hat{A}$ and $\hat{B}$, a maximum group velocity for correlations, and consequently for signals, can be extrapolated. In more recent years this bound has been generalised and applied to obtain results in a wider set of circumstances. Notable examples include proofs of the {Lieb-Schultz-Mattis theorem in higher dimensions} \cite{LSM Theo} and of the {exponential clustering theorem} \cite{Clust Theo}, results linking the spectral gap to the exponential decay of correlations in short-range interacting systems \cite{exponential1}, proofs of the {existence of the dynamics for interactions with polynomial decay} \cite{ExistDynam}, of the {area law in 1-D systems} \cite{AreaLaw} and of the {stability of topological quantum order}~\cite{TopQOrder}, as well as applications to information and entanglement spreading~\cite{BRAV,SUPER,EISERT,PRA} and to black-hole physics and information scrambling~\cite{Scram, Scram1}. Bounds on correlation spreading, remaining in the framework set by L-R bounds, have then been generalized to different scenarios such as, for instance, long-range interactions \cite{LongRange, LongRange1, LongRange2, LongRange3, LongRange4}, disordered systems \cite{Burrell,Burrell2} and finite temperature \cite{FinTemp,FinTemp1,FinTemp2}.
After the original work by Lieb and Robinson, the typical form found to describe the bound has been one growing exponentially in the time $t$ and suppressed with the spatial distance $d(A,B)$ between the supports of the two operators, namely: \begin{eqnarray}\label{ORIGIM} \| [\hat{A}(t),\hat{B}] \| \lesssim \, e^{v|t|} \; f( d(A,B)) \;,\end{eqnarray} with $v$ a positive constant, and $f(\cdot)$ a suitable decreasing function, both depending upon the interaction considered, the size of the supports of $\hat{A}$ and $\hat{B}$, and the dimensions of the system~\cite{LSM Theo,Clust Theo,ExistDynam,exponential1}. More recently, instances have been proposed~\cite{FinTemp2,Them} in which such behaviour can be improved to a polynomial one, \begin{eqnarray} \| [\hat{A}(t),\hat{B}] \| \lesssim \left(\frac{t}{d(A,B)}\right)^{d(A,B)}\;, \label{LEQSIMM} \end{eqnarray} at least for Hamiltonian couplings which have an explicitly finite range, and for short enough times. The aim of the present work is to set these results on firm ground by providing an alternative derivation of the polynomial version~(\ref{LEQSIMM}) of the L-R inequality which, as long as the range of the interactions involved is finite, holds true for arbitrary topology of the spin network and which does not suffer from the short-time limitations that instead affect previous approaches. Our analysis yields a simple way to estimate the maximum speed at which signals can propagate along the network. In the second part of the manuscript we focus instead on the special case of single sites located at the extremal points of a 1-D linear spin chain model. In this context we give an alternative derivation of the $t$-polynomial L-R bound and discuss how the same technique can also be used to provide a lower bound on $\| [\hat{A}(t),\hat{B}] \|$, which at least for small $t$ is nontrivial. The manuscript is organized as follows.
We start in Sec.~\ref{sec1} by presenting the model and recalling the original version of the L-R bound. The main result of the paper is then presented in Sec.~\ref{sec1new}, where by using a simple analytical argument we derive our $t$-polynomial version of the L-R inequality. In Sec.~\ref{SEC:PERT} we present instead the perturbative expansion approach for 1-D linear spin chain models. In Sec.~\ref{Sec:Simulation} we test the results achieved in the previous sections by comparing them to the numerical simulation of a spin chain. Conclusions are finally presented in Sec.~\ref{Sec:conc}. \section{The model and some preliminary observations} \label{sec1} Adopting the usual framework for the derivation of the L-R bound~\cite{Clust Theo}, let us consider a network $\cal N$ of quantum systems (spins) distributed on a graph $\mathbb{G}:=(V,E)$ characterized by a set of vertices $V$ and by a set $E$ of edges. The model is equipped with a metric $d(x,y)$ defined as the shortest path (least number of edges) connecting $x,y \in V$ ($d(x,y)$ being set equal to infinity in the absence of a connecting path), which induces a measure for the diameter $D(X)$ of a given subset $X\subset V$, and a distance $d(X,Y)$ among the elements $X,Y \subset V$, \begin{eqnarray} D(X)&:=&\max \{ d(x,y) \,|\, x,y\in X\}\;, \nonumber \\ d(X,Y) &:=& \min \{ d(x,y) \,|\, x \in X ,y\in Y\}\;. \end{eqnarray} Indicating with ${\cal H}_x$ the Hilbert space associated with the spin that occupies the vertex $x$ of the graph, the Hamiltonian of ${\cal N}$ can be formally written as \begin{eqnarray} \label{HAMILT} \hat{H} : = \sum_{X\subset V} \hat{H}_X\;, \end{eqnarray} where the summation runs over the subsets $X$ of $V$, with $\hat{H}_X$ being a self-adjoint operator that is local on the Hilbert space ${\cal H}_X:= \otimes_{x\in X} {\cal H}_x$, i.e. it acts non-trivially on the spins of $X$ while being the identity everywhere else. Consider then two subsets $A,B\subset V$ which are disjoint, i.e. $d(A,B)>0$.
Any two operators $\hat{A}:= \hat{A}_{A}$ and $\hat{B}:=\hat{B}_{B}$ that are local on such subsets clearly commute, i.e. $[\hat{A},\hat{B}]=0$. Yet as we let the system evolve under the action of the Hamiltonian $\hat{H}$, this condition will not necessarily hold due to the building up of correlations along the graph. More precisely, given $\hat{U}(t):= e^{-i \hat{H}t}$ the unitary evolution induced by (\ref{HAMILT}), and indicating with \begin{eqnarray} \hat{A}(t) := \hat{U}^\dag(t) \hat{A} \hat{U}(t)\;, \end{eqnarray} the evolved counterpart of $\hat{A}$ in the Heisenberg representation, we expect the commutator $[\hat{A}(t), \hat{B}]$ to become explicitly non-zero for large enough $t$: the faster this happens, the stronger the correlations that are dynamically induced by $\hat{H}$ (hereafter we set $\hbar =1$ for simplicity). The Lieb-Robinson bound puts a limit on such behaviour that applies to all $\hat{H}$ which are characterized by couplings that have a finite-range character (at least approximately). Specifically, indicating with $|X|$ the total number of sites in the domain $X\subset V$, and with \begin{eqnarray} M_X:= \max_{x \in X} \mbox{dim}[{\cal H}_x]\;, \end{eqnarray} the maximum dimension of the local Hilbert spaces of its spins, we say that $\hat{H}$ is well behaved in terms of long-range interactions if there exists a positive constant $\lambda$ such that the functional \begin{equation} \label{DEFPHILambda} \| \hat{H}\|_\lambda :=\sup_{x \in V} \sum\limits_{X\ni x} \left|X\right| M_X^{2\left|X\right|} e^{\lambda D(X)} \; \|\hat{H}_X\| \, , \end{equation} is finite. In this expression the symbol \begin{eqnarray} \| \hat{\Theta} \|: = \max_{|\psi\rangle} \| \hat{\Theta} |\psi\rangle\|\;,\end{eqnarray} represents the standard operator norm, while the summation runs over all the subsets $X \subset V$ that contain $x$ as an element.
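For concreteness, the functional (\ref{DEFPHILambda}) can be evaluated directly on a simple instance; the sketch below (assuming, purely for illustration, a chain of qubits with a two-site Heisenberg coupling, a choice not prescribed by the text) computes the operator norm as the largest singular value and the resulting value of $\| \hat{H}\|_\lambda$ at a bulk site, which belongs to exactly two coupling terms.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op_norm(M):
    """Standard operator norm = largest singular value."""
    return np.linalg.norm(M, 2)

# Toy two-site coupling h = sx.sx + sy.sy + sz.sz (Heisenberg; ||h|| = 3)
h = sum(np.kron(s, s) for s in (sx, sy, sz))

# For a qubit chain with this nearest-neighbour coupling every term has
# |X| = 2, M_X = 2 and D(X) = 1, and a bulk site belongs to exactly two
# terms, so the sup over x in ||H||_lambda evaluates to:
M, lam = 2, 0.5
H_lambda = 2 * (2 * M**4 * np.exp(lam) * op_norm(h))
assert np.isclose(op_norm(h), 3.0)
```

Note that this reproduces the structure $\zeta e^{\lambda \bar{D}}$ with $\bar{D}=1$ that is exploited later in the text.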
Variant versions~\cite{Bratteli, Clust Theo,exponential1} or generalizations~\cite{REVIEW,NACHTER1} of Eq.~(\ref{DEFPHILambda}) can be found in the literature; however, as they express the same behaviour and differ substantially only by constants, in the following we shall gloss over these differences. The L-R bound can now be expressed in the form of the following inequality~\cite{Clust Theo} \begin{equation}\label{eq: lambda bound} \| [\hat{A}(t),\hat{B}] \| \leq 2 |A| | B| \| \hat{A}\| \| \hat{B}\| ( e^{2 |t| \| \hat{H} \|_\lambda}-1) e^{ -\lambda\, d(A,B)}\;, \end{equation} which holds non-trivially for well-behaved Hamiltonians $\hat{H}$ admitting finite values of the quantity $\| \hat{H} \|_\lambda$. It is worth stressing that Eq.~(\ref{eq: lambda bound}) is valid irrespective of the initial state of the network and that, due to the dependence upon $|t|$ of the r.h.s. term, exactly the same bound can be derived for $\| [\hat{A},\hat{B}(t)] \|$, obtained by exchanging the roles of $\hat{A}$ and $\hat{B}$. Finally we also point out that in many cases of physical interest the pre-factor $|A| | B|$ on the r.h.s. can be simplified: for instance it can be omitted for one-dimensional models, while for nearest-neighbour interactions one can replace it by the smaller of the boundary sizes of the supports of $\hat{A}$ and $\hat{B}$~\cite{NACHTER1}. For models characterized by interactions whose range is not explicitly finite, refinements of Eq.~(\ref{eq: lambda bound}) have been obtained under special constraints on the decay of the long-range Hamiltonian coupling contributions~\cite{exponential1,Clust Theo}.
For instance, assuming that there exist (finite) positive quantities $s_1$ and $\mu_1$ ($s_1$ being independent of the total number of sites of the graph $\mathbb{G}$), such that \begin{equation} \label{DEFPHILambdaNEW} \sup_{x \in V} \sum\limits_{X\ni x} \left|X\right| \|\hat{H}_X\| [1+D(X)]^{\mu_1} \leq s_1 \, , \end{equation} one gets \begin{equation}\label{eq: lambda boundnew1} \| [\hat{A}(t),\hat{B}] \| \leq C_1 |A| | B| \| \hat{A}\| \| \hat{B}\| \frac{ e^{v_1 |t|} -1}{ (1+ d(A,B))^{\mu_1}} \;, \end{equation} with $C_1$ and $v_1$ positive quantities that only depend upon the metric of the network and on the Hamiltonian. If instead there exist (finite) positive quantities $\mu_2$ and $s_2$ (the latter being again independent of the total number of sites of $\mathbb{G}$), such that \begin{equation} \label{DEFPHILambdaNEW2} \sup_{x \in V} \sum\limits_{X\ni x} \left|X\right| \|\hat{H}_X\| e^{\mu_2 D(X)} \leq s_2 \, , \end{equation} we get \begin{equation}\label{eq: lambda boundnew2} \| [\hat{A}(t),\hat{B}] \| \leq C_2 |A| | B| \| \hat{A}\| \| \hat{B}\| ( e^{v_2 |t|} -1) e^{-\mu_2 d(A,B)} \;, \end{equation} where once more $C_2$ and $v_2$ are positive quantities that only depend upon the metric of the network and on the Hamiltonian. The common trait of these results is that their associated upper bounds maintain the exponential dependence on the time $t$ exhibited in Eq.~(\ref{ORIGIM}). \section{Casting the Lieb-Robinson bound into a $t$-polynomial form for (explicitly) finite range couplings} \label{sec1new} The inequality (\ref{eq: lambda bound}) is the starting point of our analysis: it indicates that the model admits a finite speed $v\simeq 2 \| \hat{H} \|_\lambda / \lambda$ at which correlations can spread out in the spin network.
As $|t|$ increases, however, the bound becomes less and less informative due to the exponential dependence of the r.h.s.: in particular it becomes irrelevant as soon as the multiplicative factor of $\| \hat{A}\| \| \hat{B}\|$ gets larger than $2$. In this limit, in fact, Eq.~(\ref{eq: lambda bound}) is trivially superseded by the inequality \begin{eqnarray} \| [\hat{A}(t),\hat{B}] \| \leq 2 \| \hat{A}(t)\| \| \hat{B}\|= \label{TRIVIAL} 2 \| \hat{A}\| \| \hat{B}\|\;, \end{eqnarray} which follows from simple algebraic considerations. One way to strengthen the conclusions one can draw from (\ref{eq: lambda bound}) is to consider $\lambda$ as a free parameter and to optimize with respect to all the values it can assume. As the functional dependence of $\| \hat{H} \|_\lambda$ upon $\lambda$ is strongly influenced by the specific properties of the spin model, we restrict the analysis to the special (yet realistic and interesting) scenario of Hamiltonians $\hat{H}$~(\ref{HAMILT}) which are strictly short-ranged. Accordingly we now impose $\hat{H}_X=0$ for all the subsets $X\subset V$ which have a diameter $D(X)$ larger than a fixed finite value $\bar{D}$, i.e. \begin{eqnarray} \quad D(X) > \bar{D} \qquad \Longrightarrow \qquad \hat{H}_X = 0 \;, \end{eqnarray} a condition which is clearly more stringent than both those presented in Eqs.~(\ref{DEFPHILambdaNEW}) and~(\ref{DEFPHILambdaNEW2}). Under this condition $\hat{H}$ is well behaved for all $\lambda \geq 0$ and one can write \begin{equation}\label{eq: Max diameter} \|\hat{H} \|_{\lambda}\leq \zeta \, e^{\lambda\bar{D}}, \qquad \forall \lambda \geq 0\;, \end{equation} with $\zeta$ being a finite positive constant that for sufficiently regular graphs does not scale with the total number of spins of the system. For instance for regular arrays of nearest-neighbour-coupled spins we get $\zeta = 2 C M^4 \| \hat{h}\|$, where $C$ is the maximum coordination number of the graph (i.e.
the number of edges associated with a given site), \begin{eqnarray} \| \hat{h}\| : = \sup_{X\subset V} \| \hat{H}_X\|\;,\end{eqnarray} is the maximum strength of the interactions, and where $M:= \max_{x \in V} \mbox{dim}[{\cal H}_x]$ is the maximum dimension of the local spin Hilbert spaces of the model. More generally, for graphs $\mathbb{G}$ characterized by finite values of $C$ it is easy to show that $\zeta$ cannot exceed $C^{\bar D} M^{C^{\bar D}} \| \hat{h} \|$. Using (\ref{eq: Max diameter}) we can now turn~(\ref{eq: lambda bound}) into a more tractable expression \begin{equation} \| [\hat{A}(t),\hat{B}] \| \leq 2 |A| |B| \| \hat{A}\| \| \hat{B}\| (e^{2 |t| \zeta e^{\lambda \bar{D}}} -1) e^{ -\lambda\, d(A,B)}\;, \label{eq: lambda boundnew complete} \end{equation} whose r.h.s. can now be explicitly minimized in terms of $\lambda$ for any fixed $t$ and $d(A,B)$. As shown in Sec.~\ref{APPimp} the final result is given by \begin{eqnarray} \nonumber \| [\hat{A}(t),\hat{B}] \| &\leq& 2 |A| |B| \| \hat{A}\| \| \hat{B}\| \left(\frac{2\, e\,\zeta\,\bar{D}\,|t|}{d(A,B)}\right)^{\tfrac{d(A,B)}{\bar{D}}} \!\!\!\!\!\! \!\! {\cal F}(\tfrac{d(A,B)}{\bar{D}}) \\ &\leq& 2|A| |B| \| \hat{A}\| \| \hat{B}\| \left(\frac{2\, e\,\zeta\,\bar{D}\,|t|}{d(A,B)}\right)^{\tfrac{d(A,B)}{\bar{D}}} \!\!\!\!\!\!\;, \label{eq:MinNachBound} \end{eqnarray} where in the second inequality we used the fact that the function ${\cal F}(x)$, defined in Eq.~(\ref{DEFC}) below and plotted in Fig.~\ref{fig:plotC}, is monotonically increasing and bounded from above by its asymptotic value $1$. \begin{figure} \caption{Plot of the function ${\cal F}(x)$ defined in Eq.~(\ref{DEFC}).} \label{fig:plotC} \end{figure} At variance with Eq.~(\ref{eq: lambda bound}), the inequality~(\ref{eq:MinNachBound}) contains only terms which are explicit functions of the spin network parameters.
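As a numerical sanity check (not part of the derivation), one can verify that a brute-force minimization over $\lambda$ of the r.h.s. of Eq.~(\ref{eq: lambda boundnew complete}) reproduces the closed form of Eq.~(\ref{eq:MinNachBound}), with ${\cal F}(x)$ obtained by solving $x=-\ln(1-z)/z$ numerically; the toy values $\zeta=\bar{D}=1$, $d=5$, $t=0.1$ below are illustrative assumptions.

```python
import numpy as np

def z_opt(x):
    """Solve x = -ln(1 - z)/z for z in (0, 1) by bisection."""
    lo, hi = 1e-12, 1 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if -np.log(1 - mid) / mid < x else (lo, mid)
    return 0.5 * (lo + hi)

def F(x):
    z = z_opt(x)
    return z / (1 - z) * (1 / (np.e * z)) ** x

# F(x) is increasing and bounded from above by its asymptotic value 1
xs = np.linspace(1.5, 12, 80)
Fs = np.array([F(x) for x in xs])
assert np.all(np.diff(Fs) > 0) and np.all(Fs < 1)

# Brute-force minimisation over lambda of the factor
# (e^{2 t zeta e^{lambda Dbar}} - 1) e^{-lambda d} matches the closed form
zeta, Dbar, d, t = 1.0, 1.0, 5, 0.1
lams = np.linspace(0.0, 6.0, 600001)
brute = np.min((np.exp(2 * t * zeta * np.exp(lams * Dbar)) - 1)
               * np.exp(-lams * d))
closed = (2 * np.e * zeta * Dbar * t / d) ** (d / Dbar) * F(d / Dbar)
assert np.isclose(brute, closed, rtol=1e-3)
```

The grid minimum and the analytic expression agree to the grid resolution, confirming the optimization carried out in Sec.~\ref{APPimp}.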
Furthermore the new bound is polynomial in $t$, with a scaling that is definitely better than the linear behaviour one could infer from the Taylor expansion of the r.h.s. of Eq.~(\ref{eq: lambda bound}). Looking at the spatial component of (\ref{eq:MinNachBound}) we notice that correlations still decrease with distance, as in the bounds (\ref{eq: lambda bound}), (\ref{eq: lambda boundnew1}) and (\ref{eq: lambda boundnew2}), but with a scaling $(1/x)^x=e^{-x \log x}$ that is more than exponentially suppressed. Also, fixing a (positive) target threshold value $R_*<1$ for the ratio \begin{eqnarray} R(t):=\| [\hat{A}(t),\hat{B}] \| /(2 |A| |B| \| \hat{A}\| \| \hat{B}\| )\;, \end{eqnarray} equation~(\ref{eq:MinNachBound}) predicts that it will be reached not before a time interval \begin{eqnarray} t_* = \frac{d(A,B) R_*^{\bar{D}/d(A,B)}}{2 e \zeta \bar{D}} \;, \label{elapsingtime} \end{eqnarray} has elapsed from the beginning of the dynamical evolution. Exploiting the fact that $\lim_{z\rightarrow \infty} R_*^{1/z}=1$, in the asymptotic limit of very distant sites (i.e. $d(A,B)\gg \bar{D}$) this can be simplified to \begin{eqnarray} t_* \simeq \frac{d(A,B) }{2 e \zeta \bar{D}} \;, \end{eqnarray} which is independent of the actual value of the target $R_*\neq 0$, leading us to identify the quantity \begin{eqnarray} \label{SPEED} v_{\max} := 2 e \zeta \bar{D}\;, \end{eqnarray} as an upper bound for the maximum speed allowed for the propagation of signals in the system. \subsection{Explicit derivation of Eq.~(\ref{eq:MinNachBound}) }\label{APPimp} We start by noticing that, by neglecting the negative contribution $-e^{ -\lambda\, d(A,B)}$, we can bound the r.h.s. of Eq.~(\ref{eq: lambda boundnew complete}) by a form which is much easier to handle, i.e.
\begin{eqnarray} \| [\hat{A}(t),\hat{B}] \| \leq 2|A| |B| \| \hat{A}\| \| \hat{B}\| e^{2 |t| \zeta e^{\lambda \bar{D}} -\lambda\, d(A,B)}.\label{eq: lambda boundnew} \end{eqnarray} One can observe that for $|t| > d(A,B)/ (2 \zeta \bar{D})$ this approach yields an inequality that is always less stringent than (\ref{TRIVIAL}). On the contrary, for $|t| \leq d(A,B)/ (2 \zeta \bar{D})$, imposing the stationary condition on the exponent, i.e. $\partial_{\lambda}(e^{2\,\zeta\, e^{\lambda \bar{D}}|t| -\lambda\, d(A,B)})=0$, we find that the optimal value of $\lambda$ is given by \begin{equation} \lambda_{\rm opt}:=\frac{1}{\bar{D}}\ln(\frac{d(A,B)}{2\,|t|\,\zeta\,\bar{D}}), \end{equation} which, inserted in Eq.~(\ref{eq: lambda boundnew}), directly yields (\ref{eq:MinNachBound}). More generally, we can avoid passing through Eq.~(\ref{eq: lambda boundnew}) by looking for minima of the r.h.s. of Eq.~(\ref{eq: lambda boundnew complete}), obtaining the first inequality given in Eq.~(\ref{eq:MinNachBound}), i.e. \begin{equation} \label{eq:MinNachBound11} \| [\hat{A}(t),\hat{B}] \| \leq2|A| |B| \| \hat{A}\| \| \hat{B}\| \left(\tfrac{2\, e\,\zeta\,\bar{D}\,|t|}{d(A,B)}\right)^{\tfrac{d(A,B)}{\bar{D}}} \!\!\! {\cal F}(\tfrac{d(A,B)}{\bar{D}}) \;. \end{equation} For this purpose we consider a parametrization of the coefficient $\lambda$ in terms of the positive variable $z$ as follows \begin{equation} \lambda:=\frac{1}{\bar{D}}\ln(\frac{ z d(A,B)}{2\,|t|\,\zeta\,\bar{D}}). \end{equation} With this choice the quantity we are interested in becomes \begin{eqnarray} \label{FFDD1} &&2 |A| |B| \| \hat{A}\| \| \hat{B}\| (e^{2 |t| \zeta e^{\lambda \bar{D}}} -1) e^{ -\lambda\, d(A,B)} \\ \nonumber &&\qquad \qquad\qquad= 2 |A| |B| \| \hat{A}\| \| \hat{B}\| \left(\tfrac{2 e |t| \zeta }{x} \right)^{x} f_x (z) \;, \end{eqnarray} where in the r.h.s.
term, for ease of notation, we introduced $x=d(A,B)/\bar{D}$ and the function \begin{eqnarray} \label{DEFF} f_x(z):= \frac{e^{x z} -1}{z^x e^x} \;.\end{eqnarray} For a fixed value of $x\geq 1$ the minimum of Eq.~(\ref{DEFF}) is attained for $z=z_{\rm opt}$ fulfilling the constraint \begin{eqnarray} x = - \frac{\ln(1-z_{\rm opt})}{z_{\rm opt}} \;. \end{eqnarray} By formally inverting this expression and by inserting it into Eq.~(\ref{FFDD1}) we hence get (\ref{eq:MinNachBound11}) with \begin{eqnarray} \label{DEFC} {\cal F}(x):= \frac{z_{\rm opt}(x)}{1-z_{\rm opt}(x)} \left(\frac{1}{e z_{\rm opt}(x)}\right)^x \;, \end{eqnarray} being the monotonically increasing function reported in Fig.~\ref{fig:plotC}. \section{Perturbative expansion approach}\label{SEC:PERT} An alternative derivation of a $t$-polynomial bound similar to the one reported in Eq.~(\ref{eq:MinNachBound}) can be obtained by adopting a perturbative expansion of the unitary evolution of the operator $\hat{A}(t)$ that allows one to express the commutator $[ \hat{A}(t), \hat{B}]$ as a sum over a collection of ``paths'' connecting the locations $A$ and $B$, see e.g. Eq.~(\ref{summation}) below. This derivation is somewhat analogous to the one used in Refs.~\cite{FinTemp2,Them}. Yet in these papers the number of relevant terms entering the calculation of the norm of $[ \hat{A}(t), \hat{B}]$ may be underestimated, since only those paths obtained by concatenating adjacent contributions are considered, an approximation whose corrections are negligible only for small times $t$. In what follows we shall overcome these limitations by focusing on the special case of linear spin chains, which allows for a proper account of the relevant paths. Finally we shall see how it is possible to exploit the perturbative expansion approach to also derive a lower bound for $\| [ \hat{A}(t), \hat{B}]\|$.
While in principle the perturbative expansion approach can be adopted to discuss arbitrary topologies of the network, in order to get a closed formula for the final expression we shall restrict the analysis to the case of two single sites (i.e. $|A|=|B|=1$) located at the two ends of an $N$-site, 1-D spin chain with next-neighbour interactions (i.e. $d=N-1$). Accordingly we shall write the Hamiltonian (\ref{HAMILT}) as \begin{eqnarray} \label{1DMODEL} \hat{H}:=\sum\limits_{i=1}^{N-1} \hat{h}_i\;, \end{eqnarray} with $\hat{h}_i$ operators acting non-trivially only on the $i$-th and $(i+1)$-th spins, hence fulfilling the condition \begin{eqnarray} \label{COMM} [\hat{h}_i,\hat{h}_j]=0 \;, \qquad \forall |i-j|>1\;. \end{eqnarray} \subsection{Upper bound} \label{Sec:UPPER} Adopting the Baker-Campbell-Hausdorff formula we write \begin{equation}\label{eq: CBH expansion} [\hat{A}(t),\hat{B}] = [\hat{A},\hat{B}] + \sum\limits_{k=1}^\infty \frac{(it)^k}{k!} \left[[\hat{H},\hat{A}]_k,\hat{B} \right]\;, \end{equation} where for $k\geq 1$, \begin{eqnarray} [\hat{H},\hat{A}]_k:= [\overbrace{\hat{H},[\hat{H},[\cdots, [\hat{H}, [\hat{H}}^{k\mbox{ times}},\hat{A}]]\cdots]]\;, \end{eqnarray} indicates the $k$-th order, nested commutator between $\hat{H}$ and $\hat{A}$. Exploiting the structural properties of Eqs.~(\ref{1DMODEL}) and (\ref{COMM}) it is easy to check that the only terms which may give a non-zero contribution to the r.h.s. of Eq.~(\ref{eq: CBH expansion}) are those with $k\geq d$. Accordingly we get \begin{equation}\label{eq: CBH expansion11} [\hat{A}(t),\hat{B}] = \sum\limits_{k=d}^\infty \frac{(it)^k}{k!} \left[[\hat{H},\hat{A}]_k,\hat{B} \right]\;, \end{equation} which leads to \begin{eqnarray} \| [\hat{A}(t),\hat{B}] \| \leq \sum\limits_{k=d}^\infty \frac{|t|^k}{k!} \| [[\hat{H},\hat{A}]_k,\hat{B} ] \| \;, \label{eq: Corr func all terms} \end{eqnarray} via sub-additivity of the norm.
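The statement that only orders $k\geq d$ contribute can be checked by brute force on a small instance; the sketch below (assuming, purely for illustration, a 4-qubit chain with random Hermitian nearest-neighbour couplings and end-site Pauli operators for $\hat{A}$ and $\hat{B}$, none of which are prescribed by the text) verifies that $[[\hat{H},\hat{A}]_k,\hat{B}]$ vanishes for $k<d$ and is generically non-zero at $k=d$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
d = N - 1  # graph distance between the two end sites

def embed(op, site, n=N):
    """Place a one- or two-site operator `op` at position `site` of an n-qubit chain."""
    k = int(np.log2(op.shape[0]))
    return np.kron(np.kron(np.eye(2**site), op), np.eye(2**(n - site - k)))

def rand_herm(dim):
    M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (M + M.conj().T) / 2

# Nearest-neighbour Hamiltonian with random two-site couplings h_i
H = sum(embed(rand_herm(4), i) for i in range(N - 1))

sz = np.diag([1.0 + 0j, -1.0])
A, B = embed(sz, 0), embed(sz, N - 1)

comm = lambda X, Y: X @ Y - Y @ X

# [[H, A]_k, B] vanishes for k < d and is (generically) non-zero at k = d
C = A
for k in range(d + 1):
    if k > 0:
        C = comm(H, C)
    norm = np.linalg.norm(comm(C, B), 2)
    if k < d:
        assert norm < 1e-8
    else:
        assert norm > 1e-6
```

The nested commutator only reaches the far end of the chain after $d$ applications of $\hat{H}$, exactly as the locality argument predicts.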
To proceed further we observe that \begin{eqnarray} \| [[\hat{H},\hat{A}]_k,\hat{B}]\|\leq 2\| \hat{A} \| \| \hat{B} \| (2 \| \hat{H} \|)^k \;, \end{eqnarray} which for sufficiently small times $t$ yields \begin{eqnarray} \| [\hat{A}(t),\hat{B}] \| &\simeq& \frac{|t|^{d}}{d!}\| [[\hat{H},\hat{A}]_d,\hat{B}]\| \nonumber \\ &\leq& 2\| \hat{A} \| \| \hat{B} \| \frac{\left( 2\| \hat{H} \| |t| \right)^{d}}{d!} \nonumber \\ \label{eq: Poly bound1} &\leq& \frac{ 2\| \hat{A} \| \| \hat{B} \| }{\sqrt{2\pi d}} \left( \frac{2\,e\| \hat{H} \| \, |t| }{d} \right)^{d}, \end{eqnarray} where in the last passage we adopted the lower bound on $d!$ that follows from the Stirling inequalities \begin{eqnarray} (d/e)^d \sqrt{e^2 d} \geq d!\geq (d/e)^d \sqrt{2\pi d}\;. \label{STIRLING} \end{eqnarray} Equation~(\ref{eq: Poly bound1}) exhibits a polynomial behaviour similar to the one observed in Eq.~(\ref{eq:MinNachBound}) (notice that if instead of next-neighbour we had next-$\bar{D}$-neighbour interactions the first non-vanishing order would be the $\lceil \frac{d}{\bar{D}} \rceil$-th one and accordingly, assuming ${d}/{\bar{D}}$ to be an integer, the above derivation would still hold with $d$ replaced by $d/\bar{D}$). Yet the derivation reported above suffers from two main limitations: first of all, it only holds for sufficiently small $t$, due to the fact that we have neglected all the terms of (\ref{eq: Corr func all terms}) but the first one; second, the r.h.s. of Eq.~(\ref{eq: Poly bound1}) retains, through $\| \hat{H} \|$, a direct dependence on the total size $N$ of the system, i.e. on the distance $d$ connecting the two sites. Both these problems can be avoided by carefully considering each ``nested'' commutator $[[\hat{H},\hat{A}]_k,\hat{B}]$ entering Eq.~(\ref{eq: Corr func all terms}).
Indeed, given the structure of the Hamiltonian and the linearity of commutators, it follows that we can write \begin{eqnarray} \label{summation} [[\hat{H},\hat{A}]_k,\hat{B}] =\sum_{i_1,i_2,\cdots, i_k=1}^{N-1} [\hat{C}^{(k)}_{i_1,i_2 ,\cdots ,i_k} (\hat{A}) ,\hat{B} ]\;,\end{eqnarray} where for $i_1 ,i_2 ,\cdots ,i_k\in\{1,2,\cdots,N-1\}$ we have \begin{eqnarray}\label{Knested} \hat{C}^{(k)}_{i_1,i_2 ,\cdots ,i_k} (\hat{A}) :=[\hat{h}_{i_k} , [\hat{h}_{i_{k-1}} , \cdots ,[\hat{h}_{i_2}, [\hat{h}_{i_1} ,\hat{A}]]\cdots]]\;. \end{eqnarray} Now, taking into account the commutation rule~(\ref{COMM}) and the fact that $\hat{A}$ and $\hat{B}$ are located at the two opposite ends of the chain, it turns out that only a limited number \begin{eqnarray} n_k \leq \binom{k}{d} d^{k-d} = \frac{k! \; d^{k-d}}{d! (k-d)!} \label{BOUNDONNK} \;,\end{eqnarray} of the $(N-1)^k$ terms entering (\ref{summation}) will have a chance of being non-zero. For the sake of readability we postpone the explicit derivation of this inequality (as well as the comment on alternative approaches presented in Refs.~\cite{FinTemp2,Them}) to Sec.~\ref{sec.
Counting comm}: here instead we observe that, using \begin{eqnarray} \| [\hat{C}^{(k)}_{i_1,i_2 ,\cdots ,i_k} (\hat{A}) ,\hat{B} ] \|\leq 2 \| \hat{A}\| \| \hat{B}\| (2 \| \hat{h}\|)^k \;, \end{eqnarray} where now $\| \hat{h}\| := \max\limits_i\| \hat{h}_i \| $, we can transform Eq.~(\ref{eq: Corr func all terms}) into \begin{eqnarray} \| [\hat{A}(t),\hat{B}] \| &\leq& 2\| \hat{A} \| \| \hat{B} \| \sum_{k=d}^\infty n_k\frac{(2 |t| \| \hat{h} \| )^k}{k!} \nonumber \\ & \leq &2\| \hat{A} \| \| \hat{B} \| \frac{\left( 2 |t|\| \hat{h} \| \right)^{d}}{d!}\sum_{k=0}^\infty \frac{\left( 2 |t| \| \hat{h} \| d\right)^k}{k!}\nonumber \\ &=&2\| \hat{A} \| \| \hat{B} \| \frac{\left(2|t|\| \hat{h} \| \right)^{d}}{d!}e^{2|t|\| \hat{h} \| d} \;, \nonumber \end{eqnarray} which presents a scaling that closely resembles the one obtained in Ref.~\cite{CRAMER} for finite-range quadratic Hamiltonians of harmonic systems on a lattice. Invoking then the lower bound for $d!$ that follows from (\ref{STIRLING}) we finally get \begin{equation} \label{UUPP} \| [\hat{A}(t),\hat{B}] \| \leq \frac{2\| \hat{A} \| \| \hat{B} \|}{ \sqrt{2 \pi d} } \left( \frac{2\,e\| \hat{h} \| \, |t|}{d} \right)^d \; e^{2 |t| \| \hat{h} \| d} \;, \end{equation} which explicitly shows that the dependence on the system size present in~(\ref{eq: Poly bound1}) is lost in favour of a dependence on the interaction strength $\| \hat{h} \|$, similar to what we observed in Sec.~\ref{sec1new}. In particular for small times the new inequality mimics the polynomial behaviour of~(\ref{eq:MinNachBound}): as a matter of fact, in this regime, due to the presence of the multiplicative term $1/\sqrt{d}$, Eq.~(\ref{UUPP}) tends to be tighter than our previous bound (a result which is not surprising, as the derivation of the present section takes full advantage of the linear topology of the network, while the analysis of Sec.~\ref{sec1new} holds true for a larger, less regular, class of possible scenarios).
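To get a feeling for the regime in which Eq.~(\ref{UUPP}) is informative, the sketch below evaluates its r.h.s. relative to the trivial bound (\ref{TRIVIAL}) for toy values of $d$ and $\| \hat{h}\|$ (purely illustrative assumptions): the ratio starts far below $1$ and is eventually overtaken by the exponential factor.

```python
import numpy as np

def uupp_over_trivial(t, d, h_norm):
    """r.h.s. of the bound divided by the trivial value 2 ||A|| ||B||."""
    return ((2 * np.e * h_norm * t / d) ** d
            * np.exp(2 * t * h_norm * d) / np.sqrt(2 * np.pi * d))

# Hypothetical values: sites d = 10 bonds apart, unit interaction strength
d, h_norm = 10, 1.0
ts = np.linspace(1e-3, 10.0, 2000)
ratio = uupp_over_trivial(ts, d, h_norm)

# informative (ratio < 1) at small t, overtaken by the trivial bound later
assert ratio[0] < 1.0 < ratio[-1]
# the crossing time delimits the window in which the bound is non-trivial
t_cross = ts[np.argmax(ratio > 1.0)]
assert 0.0 < t_cross < 10.0
```

The crossing time shrinks as $\| \hat{h}\|$ grows, consistently with the discussion of the large-time behaviour below.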
At large times, on the contrary, the new inequality is dominated by the exponential trend $e^{2 |t| \| \hat{h} \| d}$, which however tends to be overruled by the trivial bound (\ref{TRIVIAL}). \subsection{A lower bound}\label{Sec.low} By properly handling the identity~(\ref{eq: CBH expansion11}) it is also possible to derive a lower bound for $\| [\hat{A}(t),\hat{B}] \|$. Indeed, using the inequality $\| \hat{O}_1+\hat{O}_2\| \geq \| \hat{ O}_1\| -\| \hat{ O}_2\| $ we can write \begin{eqnarray}\label{eq: Lower Bound1} &&\| [\hat{A}(t),\hat{B}] \| =\Big\| \sum\limits_{k=d}^\infty \frac{(it)^k}{k!} [[\hat{H},\hat{A}]_k,\hat{B} ]\Big\| \\ \nonumber &&\geq \frac{|t|^d}{d!}\| [[\hat{H},\hat{A}]_d,\hat{B}] \| -\Big\| \sum\limits_{k=d+1}^\infty \frac{(it)^k}{k!} [[\hat{H},\hat{A}]_k,\hat{B} ] \Big\| \;, \end{eqnarray} (notice that the above bound is clearly trivial if $[[\hat{H},\hat{A}]_d,\hat{B}]$ is the null operator: when this happens, however, we can fix it by substituting $d$ with the smallest $k> d$ for which $[[\hat{H},\hat{A}]_k,\hat{B}]\neq 0$). Now we observe that the last term appearing on the r.h.s. of the above expression can be bounded by following the same derivation of the previous paragraphs, i.e. \begin{eqnarray} &&\Big\| \sum\limits_{k=d+1}^\infty \frac{(it)^k}{k!} [[\hat{H},\hat{A}]_k,\hat{B}] \Big\| \nonumber \\ &&\quad\qquad \leq 2\| \hat{A} \| \| \hat{B} \| \sum_{k=d+1}^\infty n_k\frac{(2 |t| \| \hat{h} \| )^k}{k!} \nonumber \\ &&\quad\qquad \leq 2\| \hat{A} \| \| \hat{B} \| \frac{\left( 2 |t| \| \hat{h} \| \right)^d}{d!}\sum_{k=1}^\infty \frac{\left( 2 |t| \| \hat{h} \| d\right)^k}{k!}\nonumber \\ &&\quad \qquad =2\| \hat{A} \| \| \hat{B} \| \frac{\left(2 |t| \| \hat{h} \| \right)^d}{d!}(e^{2 |t| \| \hat{h} \| d}-1) \nonumber\\ &&\quad\qquad \leq 2\| \hat{A} \| \| \hat{B} \| {\left(\frac{2 e |t| \| \hat{h} \|}{d} \right)^d}\frac{e^{2 |t| \| \hat{h} \| d}-1}{\sqrt{2 \pi d} } \;.
\nonumber \end{eqnarray} Hence, by replacing this into Eq.~(\ref{eq: Lower Bound1}), we obtain \begin{eqnarray}\nonumber &&\| [\hat{A}(t),\hat{B}] \| \geq \frac{ |t|^d}{d!}\| [[\hat{H},\hat{A}]_d,\hat{B}] \| \\ &&\qquad - 2\| \hat{A} \| \| \hat{B} \| {\left(\frac{2 e |t| \| \hat{h} \|}{d} \right)^d}\frac{e^{2 |t| \| \hat{h} \| d}-1}{\sqrt{2 \pi d} } \nonumber \\ &&\qquad \geq \frac{2\| \hat{A} \| \| \hat{B} \|}{ \sqrt{2 \pi d} } \nonumber \left(\frac{2 e |t| \| \hat{h} \|}{d} \right)^d \left( \Gamma_d - ({e^{2 |t| \| \hat{h} \| d}-1}) \right) \;, \\ \label{eq: Lower Bound1new} \end{eqnarray} where in the last passage we used the upper bound for $d!$ that comes from Eq.~(\ref{STIRLING}) and introduced the dimensionless quantity \begin{eqnarray} \label{defgamma} \Gamma_d: =\sqrt{ \frac{\pi}{2 e^2}} \; \frac{ \| [[\hat{H},\hat{A}]_d,\hat{B}] \|}{ \| \hat{A} \| \| \hat{B} \| (2 \| \hat{h}\|)^d} \;, \end{eqnarray} which can be shown to be strictly smaller than $1$ (see Sec.~\ref{sec. Counting comm}). It is easy to verify that as long as $\Gamma_d$ is non-zero (i.e. as long as $[[\hat{H},\hat{A}]_d,\hat{B}]\neq 0$), there always exists a sufficiently small time $\bar{t}$ such that $\forall\, 0<t<\bar{t}$ the r.h.s. of Eq.~(\ref{eq: Lower Bound1new}) is explicitly positive, implying that a finite amount of correlation could be present at times shorter than the one required for a light pulse to travel from $A$ to $B$ at speed $c$. This apparent violation of causality is clearly a consequence of the approximations that lead to the effective spin Hamiltonian we are working with (the predictive power of the model being always restricted to time scales $t$ which are larger than $\frac{d(A,B)}{c}$). More precisely, for sufficiently small values of $t$ (i.e. for $2 |t|\| \hat{h} \| d \ll 1$) the negative contribution on the r.h.s. of Eq.~(\ref{eq: Lower Bound1new}) can be neglected and the bound predicts the norm of $ [\hat{A}(t),\hat{B}]$ to grow polynomially as $t^d$, i.e.
\begin{eqnarray}\label{eq: Lower Bound1new1} \| [\hat{A}(t),\hat{B}] \| &\gtrsim& \frac{2\| \hat{A} \| \| \hat{B} \|}{ \sqrt{2 \pi d} } \left(\frac{2 e |t| \| \hat{h} \|}{d} \right)^d \Gamma_d \;, \end{eqnarray} which should be compared with \begin{eqnarray}\label{eq: upper Bound1new1} \| [\hat{A}(t),\hat{B}] \| &\lesssim& \frac{2\| \hat{A} \| \| \hat{B} \|}{ \sqrt{2 \pi d} } \left(\frac{2 e |t| \| \hat{h} \|}{d} \right)^d \;, \end{eqnarray} that, in the same temporal regime, is instead predicted by the upper bound~(\ref{UUPP}). \subsection{Counting commutators}\label{sec. Counting comm} Here we report the explicit derivation of the inequality~(\ref{BOUNDONNK}). The starting point of the analysis is the recursive identity \begin{eqnarray} \label{recursive} \hat{C}^{(k)}_{i_1,i_2 ,\cdots ,i_k} (\hat{A}) = [\hat{h}_{i_k} , \hat{C}^{(k-1)}_{i_1,i_2 ,\cdots ,i_{k-1}} (\hat{A})]\;, \end{eqnarray} which links the expression for nested commutators~(\ref{Knested}) of order $k$ to those of order $k-1$. Recall now that the operator $\hat{A}$ is located on the first site of the chain. Accordingly, from Eq.~(\ref{COMM}) it follows that \begin{eqnarray} \hat{C}^{(1)}_{i} (\hat{A}) = [\hat{h}_{i} , \hat{A}]=0 \;, \qquad \forall i\geq 2\;, \end{eqnarray} i.e. the only possibly non-zero nested commutator of order 1 is the operator $\hat{C}^{(1)}_{1} (\hat{A})=[\hat{h}_{1} , \hat{A}]$, which acts non-trivially on the first and second spin. From this and the recursive identity~(\ref{recursive}) we can then derive the following identities for the nested commutators of order $k=2$, i.e.
\begin{eqnarray} \hat{C}^{(2)}_{1,i_2} (\hat{A})&=&0\;, \qquad \forall i_2 \geq 3 \;, \\ \hat{C}^{(2)}_{i_1,i_2} (\hat{A})&=&0\;, \qquad \mbox{$\forall i_1 \geq 2$ {and} $\forall i_2\geq 1$} \;, \end{eqnarray} the only terms which can possibly be non-zero being now $\hat{C}^{(2)}_{1,1} (\hat{A})$ and $\hat{C}^{(2)}_{1,2} (\hat{A})= [\hat{h}_{2} ,[ \hat{h}_{1},\hat{A}]]$, the first having support on the first and second spin of the chain, the second instead being supported on the first, second, and third spin. Iterating the procedure, it turns out that for a generic value of $k$ the operators $\hat{C}^{(k)}_{i_1,i_2 ,\cdots ,i_k} (\hat{A})$ which may be explicitly non-null are those for which we have \begin{equation} \left\{ \begin{array}{l} i_1 = 1\;, \\ i_j \leq \max\{ i_1, i_2, \cdots, i_{j-1}\} + 1\;, \qquad \forall j\in\{ 2, \cdots, k\}\;, \end{array} \right. \label{NONZERO} \end{equation} the rule being that, passing from $\hat{C}^{(k-1)}_{i_1,i_2 ,\cdots ,i_{k-1}} (\hat{A})$ to $\hat{C}^{(k)}_{i_1,i_2 ,\cdots ,i_k} (\hat{A})$, the new Hamiltonian element $\hat{h}_{i_k}$ entering (\ref{recursive}) has to be either one of those already encountered or one at distance at most 1 from the maximum position reached so far (the first step being necessarily $[\hat{h}_1,\hat{A}]$). We also observe that among the elements $\hat{C}^{(k)}_{i_1,i_2 ,\cdots ,i_k} (\hat{A})$ which are non-null, the ones with the largest support are those with the largest values of the indices: indeed from (\ref{recursive}) it follows that the extra commutator with $\hat{h}_{i_k}$ will create an operator whose support either coincides with the one of $\hat{C}^{(k-1)}_{i_1,i_2 ,\cdots ,i_{k-1}} (\hat{A})$ (this happens whenever ${i_k}$ belongs to $\{ i_1,i_2 ,\cdots ,i_{k-1}\}$), or is larger than the latter by one site (this happens instead for ${i_k} = \max\{ i_1, i_2, \cdots, i_{k-1}\} + 1$).
Accordingly, among the nested commutators of order $k$ the one with the largest support is \begin{eqnarray} \hat{C}^{(k)}_{1,2 ,\cdots ,k} (\hat{A}) = [\hat{h}_{k} , [\hat{h}_{{k-1}} , \cdots ,[\hat{h}_{2}, [\hat{h}_{1} ,\hat{A}]]\cdots]]\;, \end{eqnarray} which in principle operates non-trivially on all the first $k+1$ sites of the chain. Observe then that in order to get a non-zero contribution in (\ref{summation}) we also need the sequence of $\hat{h}_{i}$ entering $\hat{C}^{(k)}_{i_1,i_2 ,\cdots ,i_k} (\hat{A})$ to touch the support of $\hat{B}$ at least once. This, together with the prescription just discussed, implies that every element $\hat{h}_i$ between $A$ and $B$ has to appear at least once, and that the first appearance of each $\hat{h}_i$ has to happen after the first appearance of $\hat{h}_{i-1}$. In summary, we can think of each nested commutator of order $k$ as a numbered set of $k$ boxes fillable with elements $\hat{h}_i$ (see Fig.~\ref{fig:boxes} (a)) and, keeping in mind the rules just discussed, we want to count how many fillings give us non-zero commutators. \begin{figure} \caption{Panel (a): pictorial representation of the nested commutator $\hat{C}^{(k)}_{i_1,i_2 ,\cdots ,i_k}(\hat{A})$ as a numbered set of $k$ boxes to be filled with the elements $\hat{h}_i$; panel (b): the only admissible filling at order $k=d$; panel (c): a filling at order $k=d+n$, the free boxes being indicated by asterisks.} \label{fig:boxes} \end{figure} Starting from $k=d$, we have only one possibility, i.e. the element $\hat{C}^{(d)}_{1,2 ,\cdots ,d} (\hat{A})$, see Fig.~\ref{fig:boxes} (b). This implies \begin{eqnarray} [[\hat{H},\hat{A}]_d,\hat{B}] &=& [\hat{C}^{(d)}_{1,2 ,\cdots ,d} (\hat{A}),\hat{B}] \\ \nonumber &=& [ [\hat{h}_{d} , [\hat{h}_{{d-1}} , \cdots ,[\hat{h}_{2}, [\hat{h}_{1} ,\hat{A}]]\cdots]],\hat{B}]\;, \end{eqnarray} and hence leads, by sub-additivity of the norm, to \begin{eqnarray}\label{eq:Gamma bound} \| [[\hat{H},\hat{A}]_d,\hat{B}]\| \leq 2 \| \hat{A} \| \| \hat{B} \| (2 \| \hat{h}\|)^d\;, \end{eqnarray} which leads to $\Gamma_d\leq \sqrt{ {2 \pi}/{e^2}}\simeq 0.923$ as anticipated in the paragraph below Eq.~(\ref{defgamma}). Consider next the case $k=d+n$ with $n\geq 1$.
In this case at least $d$ boxes must be filled, one with each $\hat{h}_i$ between $\hat{A}$ and $\hat{B}$. Once we fix them, the content of the remaining $n=k-d$ boxes (indicated by an asterisk in panel (c) of Fig.~\ref{fig:boxes}) depends on their position in the sequence: if one of those is before the first $\hat{h}_1$ it is forced to be $\hat{h}_1$, if it is before the first $\hat{h}_2$ it can be $\hat{h}_1$ or $\hat{h}_2$, and so on until the one before the first $\hat{h}_{d}$, which can be any of the $\hat{h}_i$. So in order to compute the number $n_k$ of non-zero terms entering (\ref{summation}) we need to know in how many ways we can arrange the free boxes in the sequence: since the free boxes (as well as the ones with prescribed content) are mutually indistinguishable, there are $\binom{k}{n}=\binom{k}{d}$ ways. For each arrangement we should count the admissible fillings; since there is no straightforward method to do so, we settle for an upper bound. The worst case is the one in which all the free boxes come after the first $\hat{h}_{d}$, which gives $d^n$ fillings; accordingly we can bound $n_k$ by $\binom{k}{n}d^n =\binom{k}{d}d^{k-d}$, leading to Eq.~(\ref{BOUNDONNK}). As mentioned at the beginning of the section, a technique similar to the one reported here has been presented in the recent literature~\cite{FinTemp2,Them}. These works also result in a polynomial upper bound for the commutator, yet it appears that the number of contributions entering the parameter $n_k$ could be underestimated, an underestimation which is negligible only at orders $k\simeq d$ or, equivalently, at small times.
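Before detailing this comparison, we note that the counting argument above can be cross-checked by exhaustive enumeration for small $k$ and $d$; the sketch below counts the index sequences allowed by Eq.~(\ref{NONZERO}) that also reach the last bond $d$ (so that the commutator with $\hat{B}$ has a chance of being non-zero) and verifies the bound (\ref{BOUNDONNK}).

```python
from itertools import product
from math import comb

def count_paths(k, d):
    """Count index sequences (i_1, ..., i_k) over {1, ..., d} obeying the
    rules of Eq. (NONZERO) and reaching the last bond d (needed for the
    nested commutator to have a chance of not commuting with B)."""
    n = 0
    for seq in product(range(1, d + 1), repeat=k):
        if seq[0] != 1:
            continue
        m, ok = 1, True
        for i in seq[1:]:
            if i > m + 1:
                ok = False
                break
            m = max(m, i)
        if ok and m == d:
            n += 1
    return n

d = 3
assert count_paths(d, d) == 1            # only (1, 2, ..., d) survives at k = d
for k in range(d, d + 4):
    assert count_paths(k, d) <= comb(k, d) * d ** (k - d)
```

The exact counts stay below the combinatorial estimate, as expected from the worst-case nature of the $d^n$ factor.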
Specifically, in \cite{Them}, which exploits intermediate results from \cite{ThemRef,ExistDynam}, the bound is obtained by iterating the inequality \begin{equation} C_B(t,X)\leq C_B(0,X)+2\sum_{Z\in \partial X}\int_0^{|t|} \mathrm{d}s \, \, C_B(s,Z)\| \hat{H}_Z \| , \end{equation} where $C_B(t,X)=\| [A(t),\hat{B}] \| $, $X$ is the support of $A$ and $\partial X$ is the surface of the set $X$. The iteration adopted in \cite{Them} produces an object involving a summation of the form $\sum\limits_{Z\in \partial X}\,\sum\limits_{Z_1\in \partial Z}\,\sum\limits_{Z_2\in \partial Z_1}\cdots$. This selection, however, underestimates the actual number of contributing terms. Indeed, at the first order of the iteration $Z\in \partial X$ accounts for all the Hamiltonian elements not commuting with $\hat{A}$, but the next iteration needs to count all the elements not commuting with the enlarged support, not only those given by $Z_1\in \partial Z$ with $Z\in \partial X$. The generally correct statement, as in Ref.~\cite{ExistDynam}, would therefore be $\sum\limits_{Z\cap X\neq \emptyset}\,\,\sum\limits_{Z_1\cap Z\neq \emptyset}\,\,\sum\limits_{Z_2\cap Z_1\neq \emptyset }\cdots$. The above discrepancy is particularly evident for the linear spin chain we consider here. Taking into account only surface terms in the nested commutators of Eq.~(\ref{eq: Corr func all terms}), among all the contributions which can be non-zero according to Eq.~(\ref{NONZERO}) we would include only those with $i_{j+1}=i_j+1$. These corrections are irrelevant at first order in time in Eq.~(\ref{eq: Corr func all terms}) but lead to underestimations at higher orders. In \cite{Them} the discrepancy is mitigated at the first orders by the fact that the number of paths of length $L$ considered is upper bounded by $N_1(L):=(2(2\delta-1))^L$, with $\delta$ the dimension of the graph.
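The effect of replacing the intersection condition with the surface condition can be illustrated with a toy count on the chain. Take the current support to be an interval of $w$ links; a term intersecting it can be any of the $w$ interior links or one of the two boundary ones (which enlarge the support), while the surface-only scheme retains just the two boundary terms at every step. A minimal sketch with our own bookkeeping (an illustration of the counting gap, not the actual bound of the cited works):

```python
from functools import lru_cache

def surface_only(k: int) -> int:
    # Surface-only scheme: two boundary choices at every iteration step.
    return 2 ** k

@lru_cache(maxsize=None)
def overlapping(k: int, w: int = 1) -> int:
    """Number of k-step sequences of Hamiltonian terms, each intersecting
    the support built so far, when that support spans w links of a chain:
    w interior choices keep the support, 2 boundary choices enlarge it."""
    if k == 0:
        return 1
    return w * overlapping(k - 1, w) + 2 * overlapping(k - 1, w + 1)

for k in (1, 2, 5, 10):
    assert overlapping(k) >= surface_only(k)
    print(k, surface_only(k), overlapping(k))
```

Already at the second step the intersection count (11) exceeds the surface-only count (4), and the gap widens with the order, which is the underestimation discussed above.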
At higher orders, however, this quantity is overtaken by the actual number of potentially non-null commutators (interestingly, in the case of a 2-D square lattice $N_1(L)$ can be computed exactly, reducing the bound to its minimum, see \cite{Guy}). A similar approach is adopted in \cite{FinTemp2} where, in the specific case of a 2-D square lattice, the number of paths of length $L$ is estimated through a coordination number $C$, giving an upper bound $N_2(L):=(2C-1)^L$ which at higher orders is again an underestimation. To better visualize why this is the case, let us consider once more the chain configuration. Following the rules of Eq.~(\ref{NONZERO}), we have seen that nested commutators $\hat{C}^{(k)}_{i_1,i_2 ,\cdots ,i_k} (\hat{A})$ with repeated indices can be non-zero. Hence, as $k$ grows, the number of possibilities for the successive terms in the commutator grows as well: this is equivalent to a growing dimension $\delta^{(k)}$ or coordination number $C^{(k)}$. For instance, we can study the multiplicity of the extensions of the first non-vanishing order $\hat{C}^{(d)}_{1,2 ,\cdots ,d} (\hat{A})$. Since the support of this commutator covers all the links between $\hat{A}$ and $\hat{B}$, at each further step we can choose among $d$ possibilities (not taking into account possible sites beyond $\hat{B}$ and before $\hat{A}$, depending on the geometry of the chain we choose); we then have $d^{L-d}$ possibilities at the $L$-th order, and for suitable $d$ and $L$ we have $d^{L-d}>N_1(L),N_2(L)$. This multiplicity is relative to a single initial path, even without counting the different initial paths one can construct with $d+l$ steps such that $d+l<L$. In summary, the polynomial behaviour found previously in the literature is sound at first order but may fail at higher orders.
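For the chain ($\delta=1$, coordination number $C=2$) the comparison between the multiplicity $d^{L-d}$ and the path-count estimates $N_1(L)$, $N_2(L)$ can be made explicit; a small sketch (the value of $d$ is illustrative):

```python
def multiplicity(d: int, L: int) -> int:
    # d**(L-d): extensions at order L of the first non-vanishing
    # commutator C^(d), as counted in the text.
    return d ** (L - d)

def N1(L: int, delta: int = 1) -> int:
    # Path-count bound of the form (2(2*delta-1))**L; delta=1 for a chain.
    return (2 * (2 * delta - 1)) ** L

def N2(L: int, C: int = 2) -> int:
    # Coordination-number bound (2C-1)**L; C=2 for a chain.
    return (2 * C - 1) ** L

d = 20
L = d
while multiplicity(d, L) <= max(N1(L), N2(L)):
    L += 1
print(d, L)  # first order at which the multiplicity exceeds both estimates
```

For $d=20$ the crossover happens at $L=32$: beyond that order the single-path multiplicity alone already exceeds both exponential path counts.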
\section{Simulation for a Heisenberg XY chain} \label{Sec:Simulation} Here we test the validity of the results presented in the previous section on a reasonably simple system: a uniformly coupled, nearest-neighbour Heisenberg XY chain composed of $L$ spin-$1/2$ particles, described by the Hamiltonian \begin{equation} \hat{H}=J\sum_{i=0}^{L-2}\left(\hat{\sigma}_i^x \hat{\sigma}_{i+1}^x+\hat{\sigma}_i^y \hat{\sigma}_{i+1}^y\right)\;. \label{DEFHAMH} \end{equation} As local operators $\hat{A}$ and $\hat{B}$ we adopt two $\hat{\sigma}^z$ operators, acting on the first and last spin of the chain respectively, so that $\|\hat{A}\|=\|\hat{B} \|=1$. Employing QuTiP~\cite{Qutip1,Qutip2} we numerically evaluate $\|[\hat{A}(t),\hat{B}]\|$ varying the chain length $L$ (Fig.~\ref{fig:plot_sim}). \begin{figure} \caption{(Color online) Simulation of $\| [\hat{A}(t),\hat{B}]\|$ for different chain lengths $L$.} \label{fig:plot_sim} \end{figure} \begin{figure} \caption{(Color online) Plot of the value of $\Gamma_d$ defined in Eq.~(\ref{defgamma}) for different chain lengths $L$.} \label{fig:plot_Gamma} \end{figure} \begin{figure} \caption{(Color online) Simulation and bounds of the function $\|[\hat{A}(t),\hat{B}]\|$.} \label{fig:L=4} \end{figure} We are interested in comparing these results with the expressions obtained for the upper bound (\ref{UUPP}), the lower bound (\ref{eq: Lower Bound1new}), and the simplified lower bound at short times (\ref{eq: Lower Bound1new1}). The time domain in which the simplified lower bound holds depends also on the value of the parameter $\Gamma_d$ specified in Eq.~(\ref{defgamma}), which we have shown to be $\leq \sqrt{2\pi /e^2}$ but which needs to be reasonably large in order to produce a detectable bound in the numerical evaluation. In Fig.~\ref{fig:plot_Gamma} the values of $\Gamma_d$ for different chain lengths $L$ (such that $d=L-1$) are reported. The magnitude of $\Gamma_d$ exhibits an exponential decrease with the size of the chain $L$.
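The figures were produced with QuTiP, but the smallest instance $L=2$ can be checked by hand: there $\|[\hat{A}(t),\hat{B}]\|=2|\sin 4Jt|$ exactly. The following self-contained pure-Python sketch (our own helper functions, not QuTiP calls) reproduces that value and respects the trivial bound $2\|\hat{A}\|\|\hat{B}\|=2$:

```python
import math

def kron(X, Y):
    """Kronecker product of two square matrices (lists of lists)."""
    m = len(Y)
    n = len(X) * m
    return [[X[i // m][j // m] * Y[i % m][j % m] for j in range(n)]
            for i in range(n)]

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dag(X):
    n = len(X)
    return [[X[j][i].conjugate() for j in range(n)] for i in range(n)]

def addm(X, Y, a=1, b=1):
    n = len(X)
    return [[a * X[i][j] + b * Y[i][j] for j in range(n)] for i in range(n)]

def expm(X, terms=40):
    """exp(X) by truncated Taylor series (fine for these small matrices)."""
    n = len(X)
    term = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    out = [row[:] for row in term]
    for p in range(1, terms):
        term = [[v / p for v in row] for row in mul(term, X)]
        out = addm(out, term)
    return out

def specnorm(C, iters=200):
    """Operator norm of C via power iteration on C^dagger C."""
    M = mul(dag(C), C)
    n = len(C)
    v = [1.0 + 0j] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = math.sqrt(sum(abs(x) ** 2 for x in w))
        if s == 0:
            return 0.0
        v = [x / s for x in w]
    lam = sum((v[i].conjugate()
               * sum(M[i][j] * v[j] for j in range(n))).real for i in range(n))
    return math.sqrt(lam)

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

J, t = 1.0, 0.3
H = addm(kron(sx, sx), kron(sy, sy), a=J, b=J)   # L=2 XY Hamiltonian
A, B = kron(sz, I2), kron(I2, sz)                # sigma^z on each end
U = expm([[-1j * t * H[i][j] for j in range(4)] for i in range(4)])
At = mul(dag(U), mul(A, U))                      # Heisenberg picture A(t)
C = addm(mul(At, B), mul(B, At), b=-1)           # commutator [A(t), B]
print(specnorm(C), 2 * abs(math.sin(4 * J * t)))  # the two should agree
```

For larger $L$ one would of course switch to QuTiP or another sparse-matrix library, as done for the figures.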
The results of our simulations are presented in Fig.~\ref{fig:L=4} for the cases $L=4$ and $L=10$. The upper bound (\ref{UUPP}), as well as the lower bound (\ref{eq: Lower Bound1new}), is expected to be universal, i.e.\ to hold for every $t$, although the latter becomes trivial at large times. This is indeed satisfied for every $L$ and every $t$ analysed (we performed the simulation for $2\leq L \leq 12$). As for the simplified lower bound (\ref{eq: Lower Bound1new1}), we expect its validity to be guaranteed only for sufficiently small $t$ and, as a matter of fact, we find its domain of validity limited to relatively small times (see e.g.\ the histograms in Fig.~\ref{fig:L=4}). \section{Conclusions} \label{Sec:conc} The study of the L-R inequality presented here shows that, for a large class of spin-network models characterized by finite-range couplings, the correlation function $\|[\hat{A}(t),\hat{B}]\|$ can be bounded more tightly by a new constraining function that exhibits a polynomial dependence on time and which, for sufficiently large distances, allows for a precise definition of a maximum speed of signal propagation, see Eq.~(\ref{SPEED}). Our approach does not rely on often complicated graph-counting arguments; instead, it is based on an analytical optimization of the original inequality~\cite{LR} with respect to all the free parameters of the model (specifically the $\lambda$ parameter defining, via Eq.~(\ref{DEFPHILambda}), the convergence of the Hamiltonian couplings at large distances). Yet, in the special case of a linear spin chain, we do adopt a graph-counting argument to present an alternative derivation of our result and to show that a similar reasoning can be used to construct non-trivial lower bounds for $\|[\hat{A}(t),\hat{B}]\|$ when the two sites are located at the opposite ends of the chain.
Possible generalizations of the present approach can be foreseen by including a refined evaluation of the dependence upon $\lambda$ of Eq.~(\ref{DEFPHILambda}), going beyond the one we adopted in Eq.~(\ref{eq: Max diameter}). We point out that, during the preparation of this manuscript, the same result presented in Eq.~(\ref{eq: upper Bound1new1}) for a chain appeared in Ref.~\cite{Lucas}. The authors would like to thank R. Fazio and B. Nachtergaele for their comments and suggestions. \end{document}